  • Stars: 115
  • Rank: 305,916 (Top 7%)
  • Language: HTML
  • License: MIT License
  • Created: about 8 years ago
  • Updated: over 1 year ago

Repository Details

HTML to Text

Extract text from HTML

  • Free software: MIT license

How is html_text different from .xpath('//text()') from LXML or .get_text() from Beautiful Soup?

  • Text extracted with html_text does not contain inline styles, JavaScript, comments, or other text that is not normally visible to users (see the comparison sketch after this list);
  • html_text normalizes whitespace, but in a smarter way than .xpath('normalize-space()'), adding spaces around inline elements (which are often used as block elements in HTML markup) and trying to avoid adding extra spaces around punctuation;
  • html_text can add newlines (e.g. after headers or paragraphs), so that the output text looks more like how it is rendered in browsers.
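
A rough comparison sketch (the snippet and outputs are illustrative, assuming lxml is installed; exact whitespace may vary between versions):

>>> import html_text
>>> import lxml.html
>>> html = '<div>Hello<style>.s {}</style> <b>world</b>!</div>'
>>> ''.join(lxml.html.fromstring(html).xpath('//text()'))
'Hello.s {} world!'
>>> html_text.extract_text(html, guess_layout=False)
'Hello world!'

Here //text() keeps the stylesheet text and glues the fragments together, while html_text drops the invisible content and normalizes the spacing.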

Install

Install with pip:

pip install html-text

The package depends on lxml, so you might need to install additional packages: http://lxml.de/installation.html

Usage

Extract text from HTML:

>>> import html_text
>>> html_text.extract_text('<h1>Hello</h1> world!')
'Hello\n\nworld!'

>>> html_text.extract_text('<h1>Hello</h1> world!', guess_layout=False)
'Hello world!'

The passed HTML is first cleaned of invisible non-text content, such as styles, and then the text is extracted.
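
For example, script contents are removed during cleaning (illustrative output, assuming default options):

>>> import html_text
>>> html_text.extract_text('<p>Hello <script>alert("hi");</script>world!</p>')
'Hello world!'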

You can also pass an already parsed lxml.html.HtmlElement:

>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello</h1> world!')
>>> html_text.extract_text(tree)
'Hello\n\nworld!'

If you want, you can handle cleaning manually; use the lower-level html_text.etree_to_text in this case:

>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello<style>.foo{}</style>!</h1>')
>>> cleaned_tree = html_text.cleaner.clean_html(tree)
>>> html_text.etree_to_text(cleaned_tree)
'Hello!'

parsel.Selector objects are also supported; you can define a parsel.Selector to extract text only from specific elements:

>>> import html_text
>>> sel = html_text.cleaned_selector('<h1>Hello</h1> world!')
>>> subsel = sel.xpath('//h1')
>>> html_text.selector_to_text(subsel)
'Hello'

NB: parsel.Selector objects are not cleaned automatically; you need to call html_text.cleaned_selector first.

Main functions and objects:

  • html_text.extract_text accepts HTML and returns the extracted text.
  • html_text.etree_to_text accepts a parsed lxml Element and returns the extracted text; it is a lower-level function, and cleaning is not handled here.
  • html_text.cleaner is an lxml.html.clean.Cleaner instance which can be used together with html_text.etree_to_text; its options are tuned for speed and text extraction quality.
  • html_text.cleaned_selector accepts HTML as text or as an lxml.html.HtmlElement and returns a cleaned parsel.Selector.
  • html_text.selector_to_text accepts a parsel.Selector and returns the extracted text (combined usage is sketched after this list).
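
Putting these together, cleaning followed by selector_to_text should give the same result as extract_text for the earlier example (an illustrative check, not part of the upstream docs):

>>> import html_text
>>> sel = html_text.cleaned_selector('<h1>Hello</h1> world!')
>>> html_text.selector_to_text(sel)
'Hello\n\nworld!'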

If guess_layout is True (default), a newline is added before and after newline_tags, and two newlines are added before and after double_newline_tags. This heuristic makes the extracted text more similar to how it is rendered in the browser. Default newline and double newline tags can be found in html_text.NEWLINE_TAGS and html_text.DOUBLE_NEWLINE_TAGS.
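
For example, with the default tag sets, <div> (a newline tag) and <p> (a double newline tag) produce different spacing (illustrative outputs):

>>> import html_text
>>> html_text.extract_text('<div>Hello</div><div>world!</div>')
'Hello\nworld!'
>>> html_text.extract_text('<p>Hello</p><p>world!</p>')
'Hello\n\nworld!'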

It is possible to customize how newlines are added, using the newline_tags and double_newline_tags arguments (which default to html_text.NEWLINE_TAGS and html_text.DOUBLE_NEWLINE_TAGS). For example, to avoid adding a newline after <div> tags:

>>> newline_tags = html_text.NEWLINE_TAGS - {'div'}
>>> html_text.extract_text('<div>Hello</div> world!',
...                        newline_tags=newline_tags)
'Hello world!'

Apart from just getting text from the page (e.g. for display or search), one intended use of this library is machine learning (feature extraction). If you want to use the text of an HTML page as a feature (e.g. for classification), this library gives you plain text that you can feed into a standard text classification pipeline. If you need the HTML structure as well, check out the webstruct library.
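
A minimal sketch of such a pipeline, assuming scikit-learn is installed (the pages and labels here are made-up examples, not from the library's docs):

import html_text
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical pages and labels, just to show the shape of the data.
pages = ['<h1>Buy now!</h1> Huge discounts inside...',
         '<h1>Weather</h1> Rain is expected tomorrow.']
labels = ['spam', 'news']

# Extract plain text from each page, then feed it to a standard text classifier.
texts = [html_text.extract_text(page) for page in pages]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)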



More Repositories

  1. eli5 (Jupyter Notebook, 2,758 stars): A library for debugging/inspecting machine learning classifiers and explaining their predictions
  2. scrapy-rotating-proxies (Python, 656 stars): use multiple proxies with Scrapy
  3. tensorboard_logger (Python, 625 stars): Log TensorBoard events without touching TensorFlow
  4. sklearn-crfsuite (Python, 421 stars): scikit-learn inspired API for CRFsuite
  5. aquarium (Python, 192 stars): Splash + HAProxy + Docker Compose
  6. deep-deep (Jupyter Notebook, 165 stars): Adaptive crawler which uses Reinforcement Learning methods
  7. arachnado (Python, 156 stars): Web Crawling UI and HTTP API, based on Scrapy and Tornado
  8. autologin (Python, 115 stars): A project to attempt to automatically login to a website given a single seed
  9. Formasaurus (HTML, 110 stars): Formasaurus tells you the type of an HTML form and its fields using machine learning
  10. page-compare (HTML, 88 stars): Simple heuristic for measuring web page similarity (& data set)
  11. autopager (HTML, 86 stars): Detect and classify pagination links
  12. undercrawler (Python, 75 stars): A generic crawler
  13. scrapy-crawl-once (Python, 74 stars): Scrapy middleware which allows to crawl only new content
  14. soft404 (Jupyter Notebook, 53 stars): A classifier for detecting soft 404 pages
  15. agnostic (Python, 51 stars): Agnostic Database Migrations
  16. autologin-middleware (Python, 37 stars): Scrapy middleware for the autologin
  17. json-lines (Python, 34 stars): Read JSON lines (jl) files, including gzipped and broken
  18. extract-html-diff (HTML, 29 stars): extract difference between two html pages
  19. scrapy-kafka-export (Python, 28 stars): Scrapy extension which writes crawled items to Kafka
  20. MaybeDont (Python, 27 stars): A component that tries to avoid downloading duplicate content
  21. sitehound-frontend (HTML, 23 stars): Site Hound (previously THH) is a Domain Discovery Tool
  22. imageSimilarity (Python, 20 stars): Given a new image, determine if it is likely derived from a known image.
  23. domain-discovery-crawler (Python, 17 stars): Broad crawler for domain discovery
  24. url-summary (Python, 17 stars): Show summary of a large number of URLs in a Jupyter Notebook
  25. sitehound (Shell, 16 stars): This is the facade for installation and access to the individual components
  26. tor-proxy (11 stars): a tor socks proxy docker image
  27. scrapy-dockerhub (Python, 10 stars): [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders.
  28. web-page-annotator (Python, 9 stars): Annotate parts of web pages in the browser
  29. scrash-lua-examples (JavaScript, 7 stars): A collection of example LUA scripts and JS utilities
  30. scrapy-cdr (Python, 7 stars): Item definition and utils for storing items in CDR format for scrapy
  31. hh-page-classifier (Python, 6 stars): Headless Horseman Page Classifier service
  32. privoxy (6 stars): Privoxy HTTP Proxy based on jess/privoxy
  33. sitehound-backend (HTML, 6 stars): Sitehound's backend
  34. fortia (JavaScript, 5 stars): [UNMAINTAINED] Firefox addon for Scrapely
  35. proxy-middleware (Python, 5 stars): Scrapy middleware that reads proxy config from settings
  36. linkrot (Jupyter Notebook, 4 stars): [UNMAINTAINED] A script (Scrapy spider) to check a list of URLs.
  37. hgprofiler (JavaScript, 4 stars)
  38. linkdepth (Python, 4 stars): [UNMAINTAINED] scrapy spider to check link depth over time
  39. common-crawl-mapreduce (Java, 3 stars): A naive scoring of commoncrawl's content using MR
  40. captcha-broad-crawl (Python, 3 stars): Broad crawl of onion sites in search for captchas
  41. frontera-crawler (Python, 3 stars): Crawler-specific logic for Frontera
  42. hh-deep-deep (Python, 3 stars): THH ↔ deep-deep integration
  43. scrapy-login (Python, 3 stars): [UNMAINTAINED] A middleware that provides continuous site login facility
  44. bk-string (C, 3 stars): A BK Tree based approach to storing and querying strings by Levenshtein Distance.
  45. domainSpider (Python, 3 stars): Simple web crawler that sticks to a set list of domains. Work in progress.
  46. quickpin (Python, 2 stars): New iteration of QuickPin with Flask & AngularDart
  47. py-bkstring (Python, 2 stars): A python wrapper for the bk-string C project.
  48. broadcrawl (Python, 2 stars): Middleware that limits number of internal/external links during broad crawl
  49. sshadduser (Python, 2 stars): A simple tool to add a new user with OpenSSH keys.
  50. autoregister (Python, 2 stars)
  51. quickpin-api (Python, 1 star): Python wrapper for the QuickPin API
  52. muricanize (Python, 1 star): A translation API
  53. rs-bkstring (Rust, 1 star)
  54. scrash-pageuploader (Python, 1 star): [UNMAINTAINED] S3 Uploader pipelines for HTML and screenshots rendered by Splash
  55. site-checker (JavaScript, 1 star)
  56. frontera-scripts (Python, 1 star): A set of scripts to spin up EC2 Frontera cluster with spiders