• Stars: 593
  • Rank: 75,443 (Top 2%)
  • Language: Scala
  • License: Apache License 2.0
  • Created: over 8 years ago
  • Updated: 9 months ago

Repository Details

Given a scholarly PDF, extract figures, tables, captions, and section titles.

PDFFigures 2.0

PDFFigures 2.0 is a Scala-based project built to extract figures, captions, tables, and section titles from scholarly documents, with a strong focus on documents from the domain of computer science. See our paper for more details.

Input and Output

PDFFigures 2.0 takes as input a scholarly document in PDF form. Its output will be a list of 'Figure' objects where, for each figure, we have identified:

  1. The page the figure occurs on (0-based).
  2. The bounding box of the figure within that page, given as pixel coordinates where (0,0) is the top left of the PDF's cropbox and the page is assumed to be rendered at 72 DPI.
  3. Any text that occurs inside the figure.
  4. The caption of the figure.
  5. The bounding box of the caption.
  6. The 'name' of the figure as deduced from the caption. Usually this is a number (e.g., the name of a figure captioned "Figure 1" would be "1"), but it might take another form depending on the PDF parsed.
  7. Whether the figure was labelled as a Table or a Figure, again based on the caption.
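The fields listed above can be sketched as a Scala case class. This is illustrative only: the class and field names here are assumptions, and the real definitions in the project's source may differ.

```scala
// Illustrative sketch of the per-figure output fields; names are
// assumptions, not the project's actual API.
case class Box(x1: Double, y1: Double, x2: Double, y2: Double)

sealed trait FigType
case object FigTypeFigure extends FigType
case object FigTypeTable extends FigType

case class Figure(
  page: Int,              // 0-based page index
  regionBoundary: Box,    // figure bounding box, 72 DPI pixel coords
  imageText: Seq[String], // text occurring inside the figure
  caption: String,
  captionBoundary: Box,
  name: String,           // e.g. "1" for a caption starting "Figure 1"
  figType: FigType        // whether labelled as Table or Figure
)

val fig = Figure(0, Box(50, 100, 300, 250), Seq("x-axis"),
  "Figure 1: An example.", Box(50, 260, 300, 280), "1", FigTypeFigure)
```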

PDFFigures 2 can also save the extracted figures as rasterized images. Currently, we support any format that a BufferedImage can be saved to (png, jpeg, etc.). More experimentally, if pdftocairo is installed, it can be used to save the figures to a selection of vector graphics formats (svg, ps, eps, etc.).
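For the rasterized path, writing a BufferedImage is a one-liner with javax.imageio. A minimal sketch, not the project's own rendering code; the file name is just an example:

```scala
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// ImageIO.write accepts any format a registered writer supports
// ("png", "jpeg", ...); it returns false if no writer is found.
val img = new BufferedImage(100, 50, BufferedImage.TYPE_INT_RGB)
val wrote = ImageIO.write(img, "png", new File("example-figure.png"))
```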

PDFFigures 2 only seeks to extract figures or tables that have been captioned, in which case we define a figure to be all elements on the page that the caption refers to. If a figure has subfigures, the returned figure will include all the subfigures. If a table or figure includes text titles or comments, those elements will be included in the figure.

Installation

Clone the repo and then run with sbt.

For licensing reasons, PDFFigures2 does not include libraries for some image formats. Without these libraries, PDFFigures2 cannot process PDFs that contain images in these formats. If you have no licensing restrictions in your project, we recommend you add these additional dependencies to your project as well:

  "com.github.jai-imageio" % "jai-imageio-core" % "1.2.1",
  "com.github.jai-imageio" % "jai-imageio-jpeg2000" % "1.3.0", // For handling jpeg2000 images
  "com.levigo.jbig2" % "levigo-jbig2-imageio" % "1.6.5", // For handling jbig2 images

Command Line Tools

PDFFigures 2 provides two CLI tools. The first, 'FigureExtractorBatchCli', extracts figures from a large number of PDFs and saves the results to disk. The second, 'FigureExtractorVisualizationCli', works on a single PDF and provides extensive debug visualizations. It is recommended to set the JVM flag "-Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider" to get the best performance out of the PDF parser; see https://pdfbox.apache.org/2.0/getting-started.html

To run on a PDF and get a preview of the results use:

sbt "runMain org.allenai.pdffigures2.FigureExtractorVisualizationCli /path/to/pdf"

To get a visualization of how the PDF was parsed:

sbt "runMain org.allenai.pdffigures2.FigureExtractorVisualizationCli /path/to/pdf -r"

To get a visualization of all the intermediate steps:

sbt "runMain org.allenai.pdffigures2.FigureExtractorVisualizationCli /path/to/pdf -s"

To run on lots of PDFs while saving the images, figure objects, and run statistics:

sbt "runMain org.allenai.pdffigures2.FigureExtractorBatchCli /path/to/pdf_directory/ -s stat_file.json -m /figure/image/output/prefix -d /figure/data/output/prefix"

To compile a stand-alone JAR with these tools:

sbt assembly

Section Titles

FigureExtractor has experimental support for additionally identifying section titles. Section titles, along with the PDF's text, can be returned from the BatchCli using the "-g" flag. The output will be the full text of the PDF, organized into sections. An effort is made to identify the abstract, if there is one, and to exclude text like page headers, author names, and page numbers. Text inside figures and captions is also excluded from the main text and encoded separately. Note that while the extracted section titles have been found to be reliable, the quality of the returned text itself has not been tested; it is mostly what PDFBox's ExtractText returns.

Interface

FigureExtractor exports its high-level programmatic interface in FigureExtractor.scala.

Multithreading

FigureExtractor rigorously checks Thread.interrupted and so can be timed out easily. FigureExtractorBatchCli supports multi-threading.
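Because the extraction code polls Thread.interrupted, a caller can impose a timeout by interrupting the worker thread and letting it exit cooperatively. A minimal sketch of that pattern; runWithTimeout is illustrative and not part of the library:

```scala
// Run `work` on a worker thread, interrupting it if it exceeds the
// timeout. Relies on `work` responding to interruption, as the
// extraction code does by checking Thread.interrupted.
def runWithTimeout[T](timeoutMs: Long)(work: => T): Option[T] = {
  var result: Option[T] = None
  val t = new Thread(() => {
    try { result = Some(work) }
    catch { case _: InterruptedException => () } // cancelled: leave None
  })
  t.start()
  t.join(timeoutMs)
  if (t.isAlive) { t.interrupt(); t.join() } // cooperative cancellation
  result
}
```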

Implementation Overview

See the paper for more details. In brief, the input PDF is pushed through the following steps:

  1. Text is extracted from the PDF. See TextExtractor.scala.
  2. Page numbers, page headers, and abstracts are identified and removed. See FormattingTextExtractor.scala.
  3. Some statistics are gathered about the remaining text. We use these statistics later to identify text that is atypical/unusual since that text is likely to be part of a figure. See DocumentLayout.scala.
  4. The locations of captions within the text are identified. See CaptionDetector.scala.
  5. For each page with captions, we identify where any graphical/non-textual elements are. See GraphicExtractor.scala.
  6. We determine the entirety of each caption. (Previous steps just identified the line that started each caption; this step identifies the full text of each caption.) See CaptionBuilder.scala.
  7. The text in each page that contained a caption is classified as "BodyText" or "Other." Future steps will assume "BodyText" is never part of a Figure/Table but "Other" text might be. See RegionClassifier.scala.
  8. Figures are located using the classification from the previous step. See FigureDetector.scala. This has two substeps:
  • For each caption, a number of regions within the page are "proposed" as possible figure regions. We propose regions that are adjacent to the caption and contain only "Other" text and graphical elements.
  • A scoring function is used to select the best proposal to match to each caption. This process also makes sure we don't select overlapping figure regions for two captions.
  9. Finally, the figures are optionally rendered to images using PDFBox. See FigureRenderer.scala.
  10. More experimentally, section titles can also be extracted. See SectionTitleExtractor.scala.
  11. The document can then be broken up into logical sections. See SectionedTextBuilder.scala.
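The matching idea in step 8 can be illustrated with a small sketch. This is a greedy stand-in, not the project's actual scoring function in FigureDetector.scala: give each caption its best-scoring proposed region while rejecting regions that overlap an earlier pick.

```scala
// Greedy stand-in for step 8's proposal selection. Assumes every
// caption has at least one scored proposal.
case class Rect(x1: Double, y1: Double, x2: Double, y2: Double) {
  def overlaps(o: Rect): Boolean =
    x1 < o.x2 && o.x1 < x2 && y1 < o.y2 && o.y1 < y2
}

def selectRegions(proposals: Map[String, Seq[(Rect, Double)]]): Map[String, Rect] = {
  var chosen = Map.empty[String, Rect]
  // Handle captions in order of their best available score
  for ((caption, cands) <- proposals.toSeq.sortBy { case (_, cs) => -cs.map(_._2).max }) {
    cands.sortBy { case (_, s) => -s }
      .find { case (r, _) => chosen.values.forall(c => !r.overlaps(c)) }
      .foreach { case (r, _) => chosen += caption -> r }
  }
  chosen
}
```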

Evaluation

This repo includes python-based scripts to evaluate figure extractors and two datasets with ground truth labels. See the evaluation directory.

Common Sources of Errors

FigureExtractor has been tested on papers selected from Semantic Scholar. It is not well tested on domains outside of computer science. When errors do occur, some common causes are:

  1. Poorly Encoded PDFs: Some PDFs can appear perfectly fine in a PDF viewer, but when we try to extract the text we might get garbage, or a bunch of extraneous text that is not visible to the eye. Ignoring text that is encoded in the PDF in the background color might be a good start toward solving these issues, but this is not implemented.
  2. Text Classification: Text classification works well for tables and most figures, but we get some errors for text-heavy figures (such as a figure outlining the steps in an algorithm). RegionClassifier.scala will sometimes classify bullet points and equations as non-body text, which can cause those text elements to get incorrectly chunked into figures.
  3. Region Proposing: Even when text classification is accurate, generating good proposed figure regions can be a non-trivial task. It is in particular important to build proposals that do not encompass multiple figures, which is sometimes quite difficult if there are many figures on a single page.
  4. Caption Building: In some cases where captions are packed very closely against the following text, our returned captions will include too much text. For some papers with unusual caption formats, we might fail to include some text in the captions.

Unhandled Edge Cases

There are a few edge cases where we consistently fail, due to hard-coded assumptions or special cases we do not currently handle:

  1. "L" shaped figures. For example, see evaluation/datasets/s2/pdfs/202042e6f88abe690a55e136475053a3eac68d40.pdf, page 7. To handle these one would need to adjust the API to allow figure regions to be described by multiple bounding boxes and then adjust "FigureDetector.scala" to return them.
  2. Three adjacent figures, where the figures share borders with each other. For example, see evaluation/datasets/conference/pdfs/icml10_4.pdf. I think this would not be too difficult to handle as a special case. It would require heuristically guessing how to split up the region all the figures occupy.
  3. Rotated text. For example, see evaluation/datasets/conference/pdfs/W10-1721.pdf, page 6. Unfortunately, PDFBox does not handle extracting rotated text; it tends to group rotated text into paragraphs with each character treated as its own line. This means that for pages with captions rotated at a 90-degree angle, we will be unable to detect any captions on that page, and thus unable to extract the figures. Handling this might require post-processing the text we get from PDFBox to recover coherent lines of text in these cases.
  4. Captions on the same line. For example, see evaluation/datasets/conference/pdfs/icml14_9.pdf, page 5. PDFBox will group both captions into the same line. Our caption detection code assumes that captions always start lines (this assumption is almost never wrong outside of these cases), which causes us to miss the second caption. This might be relatively easy to address by checking each line found to contain a caption for a large gap between words, followed by a second caption.
  5. Figures above the abstract. Currently, it is assumed that all text above the abstract is not part of a figure. This helps avoid false positives induced by including emails, names, or the title from the header in figures on the first page, but there is probably a way to relax this assumption to resolve this issue.

Contact

Christopher Clark, [email protected]
