  • Stars: 165
  • Rank: 228,906 (Top 5%)
  • Language: Jupyter Notebook
  • Created: over 8 years ago
  • Updated: over 6 years ago


Repository Details

Adaptive crawler which uses Reinforcement Learning methods

Deep-Deep: Adaptive Crawler


Deep-Deep is a Scrapy-based crawler which uses Reinforcement Learning methods to learn which links to follow.

It is called Deep-Deep, but it doesn't use Deep Learning, and it is not only for the Deep Web. Weird.

Running

In order to run the spider, you need some seed URLs and a relevancy function that provides a reward value for each crawled page. The ./scripts folder contains scripts for common use cases:

  • crawl-forms.py learns to find password recovery forms (classified with Formasaurus). This is a good benchmark task: such forms are often best reachable via login links, so the spider must learn to plan several steps ahead.
  • crawl-keywords.py starts a crawl where the relevance function is determined by a keywords file (keywords starting with "-" are considered negative).
  • crawl-relevant.py starts a crawl where the reward is given by a classifier that returns a score via its .predict_proba method.
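As a minimal illustration of the interface crawl-relevant.py expects, here is a toy stand-in for such a classifier: any object whose .predict_proba returns per-class probabilities will do. The KeywordRelevancy class and its keyword-fraction scoring below are invented for this sketch, not part of deep-deep; in practice you would train a real scikit-learn model.

```python
class KeywordRelevancy:
    """Toy relevance scorer: the fraction of target keywords found
    in the page text stands in for a trained classifier's output."""

    def __init__(self, keywords):
        self.keywords = [k.lower() for k in keywords]

    def predict_proba(self, pages):
        # Mimic the sklearn convention: one [P(negative), P(positive)]
        # pair per input page.
        result = []
        for text in pages:
            text = text.lower()
            hits = sum(k in text for k in self.keywords)
            p = hits / len(self.keywords)
            result.append([1 - p, p])
        return result

clf = KeywordRelevancy(["password", "recovery"])
print(clf.predict_proba(["Password recovery form"])[0][1])  # -> 1.0
```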

There is also an extraction spider, deepdeep.spiders.extraction.ExtractionSpider, that learns to extract unique items from a single domain, given an item extractor.

For keywords and relevancy crawlers, the following files will be created in the result folder:

  • items.jl.gz - depending on the value of the export_cdr argument, either items in CDR format (default) or spider stats, including learning statistics (pass -a export_cdr=0)
  • meta.json - arguments of the spider
  • params.json - full spider parameters
  • Q-*.joblib - Q-model snapshots
  • queue-*.csv.gz - queue snapshots
  • events.out.tfevents.* - a log in TensorBoard format. Install TensorFlow to view it with the tensorboard --logdir <result folder parent> command.
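The items.jl.gz file is gzipped JSON Lines: one JSON object per line, compressed. A quick sketch of reading it back (the file path and field names here are made up for illustration; real CDR items have their own schema):

```python
import gzip
import json
import os
import tempfile

# Write a tiny sample file in the same one-JSON-object-per-line layout,
# then read it back the way you would read a real items.jl.gz.
path = os.path.join(tempfile.mkdtemp(), "items.jl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"url": "http://example.com", "score": 0.9}) + "\n")

with gzip.open(path, "rt", encoding="utf-8") as f:
    items = [json.loads(line) for line in f]
print(items[0]["url"])  # -> http://example.com
```

The json-lines library (also listed below) handles the same format, including broken or truncated files.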

Using a trained model

You can use deep-deep just to run adaptive crawls, updating the link model and collecting crawled data at the same time. But in some cases it is more efficient to first train a link model with deep-deep and then use this model in another crawler: deep-deep uses a lot of memory to store page and link features, and extra CPU to update the link model, so if the link model is general enough to be frozen, you can run a more efficient crawl. You might also want to use a deep-deep link model in an existing project.

This is all possible with deepdeep.predictor.LinkClassifier: load it from a Q-*.joblib checkpoint and use the .extract_urls_from_response or .extract_urls methods to get a list of URLs with scores. An example of using this classifier in a simple Scrapy spider is given in examples/standalone.py. Note that in order to use the default Scrapy queue, the float link score is converted to an integer priority value.
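The score-to-priority conversion can be sketched as follows. This is illustrative only: Scrapy's default queue orders requests by integer priority (higher first), so a float link score must be quantized; the scale factor and URLs below are made up, and the actual conversion used in examples/standalone.py may differ.

```python
def score_to_priority(score, scale=100):
    # Quantize a float link score into an integer Scrapy priority.
    # round() avoids floating-point truncation surprises like
    # int(0.29 * 100) == 28.
    return int(round(score * scale))

# Hypothetical (url, score) pairs as LinkClassifier might return them.
scored_links = [("http://example.com/about", 0.12),
                ("http://example.com/login", 0.91)]
prioritized = sorted(((url, score_to_priority(s)) for url, s in scored_links),
                     key=lambda pair: -pair[1])
print(prioritized[0])  # -> ('http://example.com/login', 91)
```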

Note that in some rare cases the model might fail to generalize from the crawl it was trained on to the new crawl.

Model explanation

It's possible to explain model weights and predictions using the eli5 library. For that, you'll need to crawl with model checkpointing enabled and with items stored in CDR format: the crawled items are used to invert the hashing vectorizer features and for prediction explanation.

./scripts/explain-model.py can save a model explanation to pickle or HTML, or print it in the terminal. The explanation can be hard to analyze because character n-gram features are used.

./scripts/explain-predictions.py produces an HTML file for each crawled page, showing explanations for all link scores.

Testing

To run tests, execute the following command from the deep-deep folder:

./check.sh

It requires Python 3.5+, pytest, pytest-cov, pytest-twisted and mypy.

Alternatively, run tox from the deep-deep folder.



More Repositories

1. eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions (Jupyter Notebook, 2,758 stars)
2. scrapy-rotating-proxies - Use multiple proxies with Scrapy (Python, 656 stars)
3. tensorboard_logger - Log TensorBoard events without touching TensorFlow (Python, 625 stars)
4. sklearn-crfsuite - scikit-learn inspired API for CRFsuite (Python, 421 stars)
5. aquarium - Splash + HAProxy + Docker Compose (Python, 192 stars)
6. arachnado - Web Crawling UI and HTTP API, based on Scrapy and Tornado (Python, 156 stars)
7. autologin - A project to attempt to automatically login to a website given a single seed (Python, 115 stars)
8. html-text - Extract text from HTML (HTML, 115 stars)
9. Formasaurus - Formasaurus tells you the type of an HTML form and its fields using machine learning (HTML, 110 stars)
10. page-compare - Simple heuristic for measuring web page similarity (& data set) (HTML, 88 stars)
11. autopager - Detect and classify pagination links (HTML, 86 stars)
12. undercrawler - A generic crawler (Python, 75 stars)
13. scrapy-crawl-once - Scrapy middleware which allows to crawl only new content (Python, 74 stars)
14. soft404 - A classifier for detecting soft 404 pages (Jupyter Notebook, 53 stars)
15. agnostic - Agnostic Database Migrations (Python, 51 stars)
16. autologin-middleware - Scrapy middleware for the autologin (Python, 37 stars)
17. json-lines - Read JSON lines (jl) files, including gzipped and broken (Python, 34 stars)
18. extract-html-diff - Extract difference between two HTML pages (HTML, 29 stars)
19. scrapy-kafka-export - Scrapy extension which writes crawled items to Kafka (Python, 28 stars)
20. MaybeDont - A component that tries to avoid downloading duplicate content (Python, 27 stars)
21. sitehound-frontend - Site Hound (previously THH) is a Domain Discovery Tool (HTML, 23 stars)
22. imageSimilarity - Given a new image, determine if it is likely derived from a known image (Python, 20 stars)
23. domain-discovery-crawler - Broad crawler for domain discovery (Python, 17 stars)
24. url-summary - Show summary of a large number of URLs in a Jupyter Notebook (Python, 17 stars)
25. sitehound - The facade for installation and access to the individual components (Shell, 16 stars)
26. tor-proxy - A Tor SOCKS proxy Docker image (11 stars)
27. scrapy-dockerhub - [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders (Python, 10 stars)
28. web-page-annotator - Annotate parts of web pages in the browser (Python, 9 stars)
29. scrash-lua-examples - A collection of example Lua scripts and JS utilities (JavaScript, 7 stars)
30. scrapy-cdr - Item definition and utils for storing items in CDR format for Scrapy (Python, 7 stars)
31. hh-page-classifier - Headless Horseman Page Classifier service (Python, 6 stars)
32. privoxy - Privoxy HTTP Proxy based on jess/privoxy (6 stars)
33. sitehound-backend - Sitehound's backend (HTML, 6 stars)
34. fortia - [UNMAINTAINED] Firefox addon for Scrapely (JavaScript, 5 stars)
35. proxy-middleware - Scrapy middleware that reads proxy config from settings (Python, 5 stars)
36. linkrot - [UNMAINTAINED] A script (Scrapy spider) to check a list of URLs (Jupyter Notebook, 4 stars)
37. hgprofiler - (JavaScript, 4 stars)
38. linkdepth - [UNMAINTAINED] Scrapy spider to check link depth over time (Python, 4 stars)
39. common-crawl-mapreduce - A naive scoring of commoncrawl's content using MR (Java, 3 stars)
40. captcha-broad-crawl - Broad crawl of onion sites in search for captchas (Python, 3 stars)
41. frontera-crawler - Crawler-specific logic for Frontera (Python, 3 stars)
42. hh-deep-deep - THH ↔ deep-deep integration (Python, 3 stars)
43. scrapy-login - [UNMAINTAINED] A middleware that provides continuous site login facility (Python, 3 stars)
44. bk-string - A BK Tree based approach to storing and querying strings by Levenshtein Distance (C, 3 stars)
45. domainSpider - Simple web crawler that sticks to a set list of domains. Work in progress. (Python, 3 stars)
46. quickpin - New iteration of QuickPin with Flask & AngularDart (Python, 2 stars)
47. py-bkstring - A Python wrapper for the bk-string C project (Python, 2 stars)
48. broadcrawl - Middleware that limits number of internal/external links during broad crawl (Python, 2 stars)
49. sshadduser - A simple tool to add a new user with OpenSSH keys (Python, 2 stars)
50. autoregister - (Python, 2 stars)
51. quickpin-api - Python wrapper for the QuickPin API (Python, 1 star)
52. muricanize - A translation API (Python, 1 star)
53. rs-bkstring - (Rust, 1 star)
54. scrash-pageuploader - [UNMAINTAINED] S3 Uploader pipelines for HTML and screenshots rendered by Splash (Python, 1 star)
55. site-checker - (JavaScript, 1 star)
56. frontera-scripts - A set of scripts to spin up EC2 Frontera cluster with spiders (Python, 1 star)