Upton

A batteries-included framework for easy web-scraping. Just add CSS! (Or do more.)

Upton is a framework for easy web-scraping with a useful debug mode that doesn't hammer your target's servers. It does the repetitive parts of writing scrapers, so you only have to write the unique parts for each site.

Installation

Add the gem to your Gemfile and run the bundle command:

gem 'upton'

Documentation

With Upton, you can scrape complex sites to a CSV in just a few lines of code:

require 'upton'

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river h1 a")
scraper.scrape_to_csv("output.csv") do |html|
  Nokogiri::HTML(html).search("#comments h2.title-link").map(&:text)
end

Just specify the URL of a page that lists links (or supply a list of links directly), a CSS selector or XPath expression that matches those links, and a block describing what to do with the content of each page you've scraped. Upton also comes with some pre-written blocks (Procs, technically) for scraping simple lists and tables, such as the list block used in the Examples section below.

Upton operates on the theory that, for most scraping projects, you need to scrape two types of pages:

  1. Instance pages, which are the goal of your scraping, e.g. job listings or news articles.
  2. Index pages, which list instance pages. For example, a job search site's search page or a newspaper's homepage.

For more complex use cases, subclass Upton::Scraper and override the relevant methods. If you're scraping links from an API, you would override get_index; if you need to log in before scraping a site or do something special with the scraped instance page, you would override get_instance.
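
As a minimal sketch of the login case (the AuthenticatedScraper class and its log_in method are hypothetical; only the get_instance override comes from Upton, as described above):

require 'upton'

class AuthenticatedScraper < Upton::Scraper
  # Log in once, then defer to Upton's normal instance-page fetching.
  # `log_in` is your own method, not part of Upton.
  def get_instance(*args)
    @logged_in ||= log_in
    super
  end
end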

The get_instance and get_index methods rely on a protected method, get_page(url), which, well, gets a page. That's not very special. The more interesting part is that get_page(url, stash) transparently stashes the response of each request when its second parameter, stash, is true; whenever you repeat a request (again with true as the second parameter), the stashed HTML is returned without going to the server. This is helpful during development, when you're testing some aspect of the code and don't want to hit a server each time. If you're using get_instance and get_index, stashing can be enabled or disabled per instance of Upton::Scraper (or a subclass) with the @debug option. Set the stash parameter of get_page directly only if you've overridden get_instance or get_index in a subclass.
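
For example, assuming a writer-style setter like the ones used in the Examples section below:

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river h1 a")
scraper.debug = true  # stash each response; repeated requests never hit the server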

Upton also sleeps (by default) 30 seconds between non-stashed requests, to reduce load on the server you're scraping. This is configurable with the @sleep_time_between_requests option.
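
For example (again assuming a matching setter):

scraper.sleep_time_between_requests = 5  # seconds between non-stashed requests; the default is 30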

Upton can handle pagination, too. To scrape paginated index pages that use a query-string parameter to track the current page (e.g. /search?q=test&page=2), set @paginated to true. Use @pagination_param to set the query-string parameter that specifies the current page (the default is page), and @pagination_max_pages to set the number of pages to scrape (the default is two). You can also set @pagination_interval if you want to increment the parameter by a number other than 1 (for instance, if "page" 1 lists instances 1 through 20 and "page" 21 lists instances 21 through 40, set @pagination_interval to 20). See the Examples section below.
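
A sketch of that instance-numbered case (assuming a pagination_interval= setter to match @pagination_interval; the URL and selector are hypothetical):

scraper = Upton::Scraper.new("http://example.com/results?q=test&start=1", ".result a")
scraper.paginated = true
scraper.pagination_param = 'start'
scraper.pagination_interval = 20   # requests start=1, start=21, start=41, ...
scraper.pagination_max_pages = 3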

To handle non-standard pagination, override the next_index_page_url and next_instance_page_url methods; Upton will fetch each URL these methods return and include its contents in the scrape.
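
As a minimal sketch (the /page/N URL scheme is hypothetical, and the two-argument signature and empty-string stop convention are assumptions; check the RDoc for the exact interface):

class SlugPaginatedScraper < Upton::Scraper
  # Return the URL of each successive index page;
  # returning an empty string is assumed to stop pagination.
  def next_index_page_url(url, pagination_index)
    pagination_index <= 3 ? "#{url}/page/#{pagination_index}" : ""
  end
end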

For more complete documentation, see the RDoc.

Important Note: Upton is alpha software. The API may change at any time.

How is this different from Nokogiri?

Upton is, in essence, sugar around RestClient and Nokogiri. If you just used those tools by themselves to write scrapers, you'd be responsible for writing code to fetch, save (maybe), debug and sew together all the pieces in a slightly different way for each scraper. Upton does most of that work for you, so you can skip the boilerplate.

Upton doesn't quite fit your needs?

Here are some similar libraries to check out for inspiration. No promises, since I've never used them, but they seem similar and were recommended by various HN commenters:

And these are some libraries that do related things:

Examples

If you want to scrape ProPublica's website with Upton, this is how you'd do it. (Scraping our RSS feed would be smarter, but not every site has a full-text RSS feed...)

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river section h1 a")
scraper.scrape do |article_html_string|
  puts "Here is the full HTML of a ProPublica article linked from the homepage:"
  puts article_html_string
  # or do other stuff here
end

Simple sites can be scraped with the pre-written list block in Upton::Utils, as below:

scraper = Upton::Scraper.new("http://nytimes.com", "ul.headlinesOnly a")
scraper.scrape_to_csv("output.csv", &Upton::Utils.list("h6.byline"))

A table block also exists in Upton::Utils to scrape tables to an array of arrays, as below:

> scraper = Upton::Scraper.new(["http://website.com/story.html"])
> scraper.scrape(&Upton::Utils.table("//table[2]"))
[["Jeremy", "$8.00"], ["John Doe", "$15.00"]]

This example shows how to scrape the first three pages of ProPublica's search results for the term "tools":

scraper = Upton::Scraper.new("http://www.propublica.org/search/search.php?q=tools",
                             ".compact-list a.title-link")
scraper.paginated = true
scraper.pagination_param = 'p'    # default is 'page'
scraper.pagination_max_pages = 3  # default is 2
scraper.scrape_to_csv("output.csv", &Upton::Utils.list("h2"))

Contributing

I'd love to hear from you if you're using Upton. I also appreciate your suggestions/complaints/bug reports/pull requests. If you're interested, check out the issues tab or drop me a note.

In particular, if you have a common, abstract use case, please add it to lib/utils.rb. Check out the table_to_csv and list_to_csv methods for examples.

(The pull request process is pretty easy. Fork the project on GitHub (or via the git CLI), make your changes, then submit a pull request on GitHub.)

Why "Upton"

Upton Sinclair was a pioneering, muckraking journalist who is most famous for The Jungle, a novel portraying the reality of immigrant labor struggles in Chicago meatpacking plants at the start of the 1900s. Upton, the gem, sprang out of a ProPublica project pertaining to labor issues.

Notes

Test data is copyrighted by either ProPublica or various Wikipedia contributors. In either case, it's reproduced here under a Creative Commons license. In ProPublica's case, it's BY-NC-ND; in Wikipedia's it's BY-SA.
