
pdftabextract - A set of tools for data mining (OCR-processed) PDFs

July 2016 / Feb. 2017, Markus Konrad [email protected] [email protected] / Berlin Social Science Center

This project is currently not maintained.

IMPORTANT INITIAL NOTES

From time to time I receive emails from people trying to extract tabular data from PDFs. I'm fine with that and I'm glad to help. However, some people think that pdftabextract is some kind of magic wand that automatically extracts the data they want by simply running one of the provided examples on their documents. In most cases, this won't work. I want to clear up a few things that you should consider before using this software and before writing an email to me:

  1. pdftabextract is not an OCR (optical character recognition) software. It requires scanned pages with OCR information, i.e. a "sandwich PDF" that contains both the scanned images and the recognized text. You need software like tesseract or ABBYY Finereader for OCR. In order to check if you have a "sandwich PDF", open your PDF and press "select all". This usually reveals the OCR-processed text information.
  2. pdftabextract is some kind of last resort when all other things fail for extracting tabular data from PDFs. Before trying this out, you should ask yourself the following questions:
  • Is there really no other way / no other format for which the data is available?
  • Can a specialized OCR software like ABBYY Finereader detect and extract the tables? (You need to try this with a large sample of pages -- I often found Finereader's table recognition unreliable.)
  • Is it possible to extract the recognized text as-is from the PDFs and parse it? Try using the pdftotext tool from poppler-utils, a package which is part of most Linux distributions and is also available for OSX via Homebrew or MacPorts: pdftotext -layout yourdocument.pdf. This will create a file yourdocument.txt containing the recognized text (from the OCR) with a layout that hopefully resembles your tables. Often, this can be parsed directly (e.g. with a Python script using regular expressions). If it can't be parsed (e.g. if the columns are not well separated in the text, the tables on each page differ too much from each other to allow a common parsing structure, or the pages are too skewed or rotated), then pdftabextract is the right software for you.
  3. pdftabextract is a set of tools. As such, it contains functions that are suitable for certain documents but not for others, and many functions require you to set parameters that depend on the layout, scan quality, etc. of your documents. You can't just use the example scripts blindly with your data. You will need to adjust parameters so that it works well with your documents. Below are some hints and explanations regarding those tools and their parameters.
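To illustrate the pdftotext approach mentioned above: if the columns in the -layout output are well separated by whitespace, a few lines of Python with regular expressions are often enough. The sample text and column layout below are made up for demonstration:

```python
import re

# Made-up excerpt of `pdftotext -layout` output with two whitespace-separated columns
sample = """\
District        Population
Mitte               383000
Pankow              407000
"""

rows = []
for line in sample.splitlines()[1:]:          # skip the header line
    m = re.match(r"(\S+)\s{2,}(\d+)", line)   # a name, >= 2 spaces, then a number
    if m:
        rows.append((m.group(1), int(m.group(2))))

# rows == [('Mitte', 383000), ('Pankow', 407000)]
```

If a pattern like this fails on a large share of your pages, that is usually the sign that you are in pdftabextract territory.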

Introduction

This repository contains a set of tools written in Python 3 with the aim of extracting tabular data from (OCR-processed) PDF files. Before these files can be processed, they need to be converted to XML files in pdf2xml format. This is very simple -- see the section below for instructions.

Module overview

After that you can view the extracted text boxes with the pdf2xml-viewer tool if you like. The pdf2xml format can be loaded and parsed with functions in the common submodule. Lines can be detected in the scanned images using the imgproc module. If the pages are skewed or rotated, this can be detected and fixed with methods from imgproc and functions in textboxes. Lines or text box positions can be clustered in order to detect table columns and rows using the clustering module. Once columns and rows have been successfully detected, they can be converted to a page grid with the extract module, and their contents can be extracted using fit_texts_into_grid in the same module. extract also allows you to export the data as a pandas DataFrame.
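The clustering step can be illustrated with a simple 1-D break-distance heuristic: positions that lie close together belong to the same column (or row). This is a standalone sketch of the idea, not pdftabextract's own API, and the coordinates are synthetic:

```python
def cluster_positions(positions, min_gap):
    """Group sorted 1-D positions into clusters: a new cluster starts
    whenever the gap to the previous position exceeds min_gap."""
    positions = sorted(positions)
    clusters = [[positions[0]]]
    for p in positions[1:]:
        if p - clusters[-1][-1] > min_gap:
            clusters.append([p])
        else:
            clusters[-1].append(p)
    return clusters

# x-coordinates of detected vertical lines on a page (synthetic data)
xs = [101, 99, 100, 250, 252, 400, 399, 401]
cols = [sum(c) / len(c) for c in cluster_positions(xs, min_gap=20)]
# cols == [100.0, 251.0, 400.0]  -> three column positions
```

The min_gap threshold is exactly the kind of document-dependent parameter mentioned above: it must be tuned to your page resolution and column spacing.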

If your scanned pages are double pages, you will need to pre-process them with splitpages.

Examples and tutorials

An extensive tutorial was posted here and is derived from the Jupyter Notebook contained in the examples. There are more use-cases and demonstrations in the examples directory.

Features

  • load and parse files in pdf2xml format (common module)
  • split scanned double pages (splitpages module)
  • detect lines in scanned pages via image processing (imgproc module)
  • detect page rotation or skew and fix it (imgproc and textboxes modules)
  • detect clusters in detected lines or text box positions in order to find column and row positions (clustering module)
  • extract tabular data and convert it to pandas DataFrame (which allows export to CSV, Excel, etc.) (extract module)
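Once the data is in a pandas DataFrame, export is plain pandas. A minimal sketch -- the column names here are made up, and an in-memory buffer stands in for a file path:

```python
import io
import pandas as pd

# Hypothetical extracted table; in practice this comes from the extract module
df = pd.DataFrame({"district": ["Mitte", "Pankow"], "population": [383000, 407000]})

# Export to CSV; passing a file path instead of the buffer works the same way
buf = io.StringIO()
df.to_csv(buf, index=False)
csv_text = buf.getvalue()
```

Excel export works analogously via df.to_excel (which requires an additional engine package such as openpyxl).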

Installation

This package is available on PyPI and can be installed via pip: pip install pdftabextract

Requirements

The requirements are listed in requirements.txt and are installed automatically if you use pip.

Only Python 3 -- No Python 2 support.

Converting PDF files to XML files with pdf2xml format

You need to convert your PDFs using poppler-utils, a package which is part of most Linux distributions and is also available for OSX via Homebrew or MacPorts. From this package we need the pdftohtml command, which creates an XML file in pdf2xml format when run in the terminal as follows:

pdftohtml -c -hidden -xml input.pdf output.xml

The arguments input.pdf and output.xml are your input PDF file and the created XML file in pdf2xml format, respectively. It is important that you specify the -hidden parameter when you're dealing with OCR-processed ("sandwich") PDFs. You can furthermore add the parameters -f n and -l n to restrict conversion to a range of pages (first and last page, respectively).

Usage and examples

For usage and background information, please read my series of blog posts about data mining PDFs.

See the following images of the example input/output:

  • Original page
  • Generated (and skewed) pdf2xml file viewed with pdf2xml-viewer
  • Detected lines
  • Detected clusters of vertical lines (columns)
  • Generated page grid viewed in pdf2xml-viewer
  • Excerpt of the extracted data

License

Apache License 2.0. See LICENSE file.
