• Stars: 184
• Rank: 209,187 (top 5%)
• Language: Ruby
• License: MIT License
• Created: over 12 years ago
• Updated: about 9 years ago



Spidey


Spidey provides a bare-bones framework for crawling and scraping web sites. Its goal is to keep boilerplate scraping logic out of your code.

Example

This example spider crawls an eBay page, follows links to category pages, continues to auction detail pages, and finally records a few scraped item details as a result.

class EbayPetSuppliesSpider < Spidey::AbstractSpider
  handle "http://pet-supplies.shop.ebay.com", :process_home

  def process_home(page, default_data = {})
    page.search("#AllCats a[role=menuitem]").each do |a|
      handle resolve_url(a.attr('href'), page), :process_category, category: a.text.strip
    end
  end

  def process_category(page, default_data = {})
    page.search("#ResultSetItems table.li td.dtl a").each do |a|
      handle resolve_url(a.attr('href'), page), :process_auction, default_data.merge(title: a.text.strip)
    end
  end

  def process_auction(page, default_data = {})
    image_el = page.search('div.vi-ipic1 img').first
    price_el = page.search('span[itemprop=price]').first
    # Guard both lookups: either element may be missing from the page
    record default_data.merge(
      image_url: (image_el.attr('src') if image_el),
      price: (price_el.text.strip if price_el)
    )
  end
end

spider = EbayPetSuppliesSpider.new verbose: true
spider.crawl max_urls: 100

spider.results  # => [{category: "Aquarium & Fish", title: "5 Gal. Fish Tank"...

Implement a spider class extending Spidey::AbstractSpider for each target site. The class can declare starting URLs by calling handle at the class level. Spidey invokes each of the methods specified in those calls, passing in the resulting page (a Mechanize Page object) and, optionally, some scraped data. The methods can do whatever processing of the page is necessary, calling handle with additional URLs to crawl and/or record with scraped results.
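
In skeletal form, a spider needs only a starting URL, a handler, and a call to record. The class name, URL, and selector below are illustrative placeholders, not part of Spidey's API:

class ExampleSpider < Spidey::AbstractSpider
  # Class-level handle declares where crawling starts.
  handle "http://www.example.com", :process_home

  def process_home(page, default_data = {})
    # Queue further URLs for crawling...
    page.search("a.more").each do |a|
      handle resolve_url(a.attr('href'), page), :process_home
    end
    # ...and record any scraped results.
    record title: page.title
  end
end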

Storage Strategies

By default, the lists of URLs being crawled, results scraped, and errors encountered are stored as simple arrays in the spider (i.e., in memory):

spider.urls     # => ["http://www.ebay.com", "http://www.ebay.com/...", ...]
spider.results  # => [{auction_title: "...", sale_price: "..."}, ...]
spider.errors   # => [{url: "...", handler: :process_home, error: FooException}, ...]

Add the spidey-mongo gem and include Spidey::Strategies::Mongo in your spider to persist these collections in MongoDB instead. See the docs for more information. Alternatively, implement your own strategy by overriding the appropriate methods from AbstractSpider.
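
As a rough sketch of what a custom strategy might look like, the hypothetical module below overrides record to also persist results to a JSON-lines file. The hook name mirrors the in-memory default shown above and is an assumption; consult AbstractSpider's source for the exact methods to override:

require 'json'

# Hypothetical strategy: append each scraped result to a file.
module JsonFileStrategy
  def record(data)
    File.open('results.jsonl', 'a') { |f| f.puts data.to_json }
    super  # keep the default in-memory behavior as well
  end
end

class EbayPetSuppliesSpider < Spidey::AbstractSpider
  include JsonFileStrategy
  # ...
end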

Logging

You may set Spidey.logger to a logger of your choosing. When used in a Rails environment, the logger defaults to the Rails logger. Otherwise, it's directed to STDOUT.
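
For example, outside of Rails you might direct Spidey's output to a file (the path here is arbitrary):

require 'logger'

Spidey.logger = Logger.new('log/spidey.log')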

Contributing

Spidey is very much a work in progress. See CONTRIBUTING for details.

To Do

  • Spidey works well for crawling public web pages, but because it makes little effort to preserve the crawler's state across requests, it is less suited to sites that require particular cookies or sequences of form submissions. Mechanize supports these quite well, though, so Spidey could grow in that direction.

Copyright

Copyright (c) 2012-2015 Joey Aghion, Artsy Inc. See LICENSE.txt for further details.

More Repositories

1. opsworks_custom_env (Ruby, 57 stars): This Chef cookbook writes custom app configuration values from the OpsWorks stack's custom JSON to a config/application.yml file for each app. The figaro gem can help load those values directly into the application's ENV.
2. statsd_setup (Ruby, 28 stars): Set up a simple EC2 instance with Statsd and Graphite, from beginning to end.
3. opsworks_delayed_job (Ruby, 19 stars): Chef cookbook for configuring an AWS OpsWorks layer of delayed_job workers.
4. spidey-mongo (Ruby, 15 stars): Implements a MongoDB back-end for Spidey (https://github.com/joeyAghion/spidey), a framework for crawling and scraping web sites.
5. rerouter (Ruby, 13 stars): Generates 301 redirects from old domains to new domains. Minimal, Rack-based.
6. unique_tabs (JavaScript, 9 stars): A Chrome extension that closes duplicate tabs when you open a new tab or navigate to a new page.
7. satellite_setup (Ruby, 8 stars): Chef recipes and Rake tasks for building and managing Delayed Job workers on EC2.
8. releasecop (Ruby, 8 stars): Given a list of projects and environment pipelines, reports which environments are "behind" and by which commits.
9. console_color (Ruby, 6 stars): Adds color-coded app and environment information to the Rails console prompt.
10. joey.aghion.com (CSS, 5 stars)
11. artsy-timelines (JavaScript, 5 stars): A simple web application for rendering artworks, artists, "genes," and tags on an interactive timeline (via the Artsy API).
12. opsworks_papertrail (HTML, 4 stars): Custom cookbook for configuring OpsWorks instances to send logs to Papertrail.
13. multiapp_example-tests (Ruby, 2 stars)
14. fakemytweet (Ruby, 2 stars): fakemytweet.com
15. caniwatchitwithmyparents (Ruby, 1 star): caniwatchitwithmyparents.com
16. lambda-s3-redshift (JavaScript, 1 star): Builds an AWS Lambda function that can load gzipped JSON files from S3 into Redshift via a staging table.