  • Stars: 2,557
  • Rank: 17,285 (top 0.4%)
  • Language: Python
  • License: GNU General Public License v3.0
  • Created: about 5 years ago
  • Updated: 3 months ago

Repository Details

Python & command-line tool to gather text on the Web: web crawling/scraping, extraction of text, metadata, comments

Description

Trafilatura is a Python package and command-line tool designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are web crawling, downloads, scraping, and extraction of main texts, metadata and comments. It aims to stay handy and modular: no database is required, and the output can be converted to various commonly used formats.

Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the noise caused by recurring elements (headers, footers, links/blogroll, etc.) and second by including information such as author and date in order to make sense of the data. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be robust and reasonably fast, as it runs in production on millions of documents.

This tool can be useful for quantitative research in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.
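
As a minimal sketch of this workflow, using the package's top-level functions fetch_url and extract (the URL below is only a placeholder):

import trafilatura

# download a page: fetch_url returns the raw HTML as a string, or None on failure
downloaded = trafilatura.fetch_url("https://example.org/article")

if downloaded is not None:
    # extract the main text; returns None if no usable content is found
    text = trafilatura.extract(downloaded)
    print(text)

The same extract() call also accepts already downloaded HTML strings or parsed trees, which is what makes offline conversion of previously stored files possible.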

Features

  • Web crawling and text discovery:
    • Focused crawling and politeness rules
    • Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
    • URL management (blacklists, filtering and de-duplication)
  • Seamless and parallel processing, online and offline:
    • URLs, HTML files or parsed HTML trees usable as input
    • Efficient and polite processing of download queues
    • Conversion of previously downloaded files
  • Robust and efficient extraction:
    • Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
    • Metadata (title, author, date, site name, categories and tags)
    • Formatting and structural elements: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting
    • Comments (if applicable)
  • Output formats:
    • Text (minimal formatting or Markdown)
    • CSV (with metadata, tab-separated values)
    • JSON (with metadata)
    • XML (with metadata, text formatting and page structure) and TEI-XML
  • Optional add-ons:
    • Language detection on extracted content
    • Graphical user interface (GUI)
    • Speed optimizations
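
As a sketch of how the discovery and output options listed above can be combined (function names follow the documented API; the homepage URL is a placeholder and the snippet is only illustrative):

from trafilatura import fetch_url, extract
from trafilatura.sitemaps import sitemap_search

# gather candidate URLs from the sitemaps of a site
urls = sitemap_search("https://example.org")

for url in urls[:5]:
    downloaded = fetch_url(url)
    if downloaded is None:
        continue
    # XML output keeps metadata and page structure; "csv", "json" or "markdown"
    # can be requested the same way via output_format
    result = extract(downloaded, output_format="xml")
    print(result)

Feed-based discovery works along the same lines via the trafilatura.feeds module.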

Evaluation and alternatives

For more detailed results, see the benchmark and evaluation script. To reproduce the tests, clone the repository, install all necessary packages, and run the evaluation script with the data provided in the tests directory.

750 documents, 2,236 text and 2,250 boilerplate segments (2022-05-18), Python 3.8

Python package                    Precision  Recall  Accuracy  F-score  Time (rel. to baseline)
html_text 0.5.2                   0.529      0.958   0.554     0.682    2.2x
inscriptis 2.2.0 (html to txt)    0.534      0.959   0.563     0.686    3.5x
newspaper3k 0.2.8                 0.895      0.593   0.762     0.713    12x
justext 3.0.0 (custom)            0.865      0.650   0.775     0.742    5.2x
boilerpy3 1.0.6 (article mode)    0.814      0.744   0.787     0.777    4.1x
baseline (text markup)            0.757      0.827   0.781     0.790    1x
goose3 3.1.9                      0.934      0.690   0.821     0.793    22x
readability-lxml 0.8.1            0.891      0.729   0.820     0.801    5.8x
news-please 1.5.22                0.898      0.734   0.826     0.808    61x
readabilipy 0.2.0                 0.877      0.870   0.874     0.874    248x
trafilatura 1.2.2 (standard)      0.914      0.904   0.910     0.909    7.1x

Other evaluations:

Usage and documentation

For more information, please refer to the documentation.

For video tutorials, see the YouTube playlist.
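
As a short illustrative sketch of the metadata side of the Python API (attribute names follow the metadata fields listed under Features and may differ between versions; the URL is a placeholder):

from trafilatura import fetch_url
from trafilatura.metadata import extract_metadata

downloaded = fetch_url("https://example.org/article")

if downloaded is not None:
    # returns a document object carrying fields such as title, author and date,
    # or None when nothing could be extracted
    metadata = extract_metadata(downloaded)
    if metadata is not None:
        print(metadata.title, metadata.author, metadata.date)

On the command line, the trafilatura entry point covers the same ground, for instance by passing a URL with the -u option; see the documentation for the full set of options.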

License

Trafilatura is distributed under the GNU General Public License v3.0. If you wish to redistribute this library but feel bound by the license conditions, please consider interacting at arm's length, multi-licensing with compatible licenses, or contacting me.

See also GPL and free software licensing: What's in it for business?

Contributing

Contributions are welcome! See CONTRIBUTING.md for more information. Bug reports can be filed on the dedicated page.

Many thanks to the contributors who submitted features and bugfixes!

Roadmap

For planned enhancements and relevant milestones, see the issues page.

Author

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge for those who conduct such research. Web corpus construction involves numerous design decisions, and this software package can help facilitate text data collection and enhance corpus quality.

Reference DOI: 10.18653/v1/2021.acl-demo.15
Zenodo archive DOI: 10.5281/zenodo.3460969

@inproceedings{barbaresi-2021-trafilatura,
  title = {{Trafilatura: A Web Scraping Library and Command-Line Tool for Text Discovery and Extraction}},
  author = "Barbaresi, Adrien",
  booktitle = "Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
  pages = "122--131",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.acl-demo.15",
  year = 2021,
}

You can contact me via my contact page or on GitHub.

Software ecosystem

Trafilatura: Italian word for wire drawing.

Known uses of the software.

Corresponding posts on Bits of Language (blog).

More Repositories

1. German-NLP: Curated list of open-access/open-source/off-the-shelf resources and tools developed with a particular focus on German (402 stars)
2. simplemma: Simple multilingual lemmatizer for Python, especially useful for speed and efficiency (Python, 120 stars)
3. htmldate: Fast and robust date extraction from web pages, with Python or on the command-line (Python, 105 stars)
4. courlan: Clean, filter and sample URLs to optimize data collection – includes spam, content type and language filters (Python, 61 stars)
5. geokelone: Integrates spatial and textual data processing tools into a modular software package which features preprocessing, geocoding, disambiguation and visualization (Python, 5 stars)
6. german-reddit: Extraction of a German Reddit Corpus (Python, 3 stars)
7. tweets-tools: Diverse tools used with Twitter data (Python, 2 stars)
8. flux-toolchain: Filtering and Language-identification for URL Crawling Seeds (FLUCS), a.k.a. FLUX-Toolchain (Perl, 2 stars)
9. jlcl-style: Experiments to modernize the LaTeX class of the JLCL (TeX, 1 star)
10. trafilatura_gui (Python, 1 star)
11. toponyms: Old prototype for toponym extraction in historical texts written in German (1 star)
12. zeitcrawler: Automatically exported from code.google.com/p/zeitcrawler (Java, 1 star)
13. url-compressor: A fast pattern-based URL compression for lists of links (Pascal, 1 star)
14. coronakorpus: Building a corpus in German dedicated to the coronavirus (1 star)
15. vardial-experiments: Experiments conducted on the occasion of the VarDial shared tasks (Python, 1 star)
16. microblog-explorer: Perform crawls of social networks (identi.ca, reddit, friendfeed) to gather internal and external links and identify their language (Python, 1 star)