• Stars: 656
  • Rank: 68,675 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: almost 8 years ago
  • Updated: over 1 year ago


Repository Details

scrapy-rotating-proxies

Use multiple proxies with Scrapy


This package provides a Scrapy middleware to use rotating proxies, check that they are alive and adjust crawling speed.

License is MIT.

Installation

pip install scrapy-rotating-proxies

Usage

Add the ROTATING_PROXY_LIST option with a list of proxies to settings.py:

ROTATING_PROXY_LIST = [
    'proxy1.com:8000',
    'proxy2.com:8031',
    # ...
]

As an alternative, you can specify a ROTATING_PROXY_LIST_PATH option with a path to a file with proxies, one per line:

ROTATING_PROXY_LIST_PATH = '/my/path/proxies.txt'

ROTATING_PROXY_LIST_PATH takes precedence over ROTATING_PROXY_LIST if both options are present.

Then add the rotating_proxies middlewares to your DOWNLOADER_MIDDLEWARES:

DOWNLOADER_MIDDLEWARES = {
    # ...
    'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
    'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
    # ...
}

After this, all requests will be proxied using one of the proxies from ROTATING_PROXY_LIST / ROTATING_PROXY_LIST_PATH.

Requests with "proxy" set in their meta are not handled by scrapy-rotating-proxies. To disable proxying for a request set request.meta['proxy'] = None; to set proxy explicitly use request.meta['proxy'] = "<my-proxy-address>".

Concurrency

By default, all of Scrapy's standard concurrency options (DOWNLOAD_DELAY, AUTOTHROTTLE_..., CONCURRENT_REQUESTS_PER_DOMAIN, etc.) become per-proxy for proxied requests when RotatingProxyMiddleware is enabled. For example, if you set CONCURRENT_REQUESTS_PER_DOMAIN=2, the spider will make at most 2 concurrent connections to each proxy, regardless of the request's URL domain.
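
For instance, a settings.py sketch with illustrative values; with RotatingProxyMiddleware enabled, the per-domain limit and the delay below are applied per proxy:

# settings.py
CONCURRENT_REQUESTS = 32              # global limit across all proxies
CONCURRENT_REQUESTS_PER_DOMAIN = 2    # at most 2 concurrent connections per proxy
DOWNLOAD_DELAY = 0.5                  # delay in seconds between requests to the same proxy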

Customization

scrapy-rotating-proxies keeps track of working and non-working proxies, and re-checks the non-working ones from time to time.

Detection of a non-working proxy is site-specific. By default, scrapy-rotating-proxies uses a simple heuristic: if the response status code is not 200, the response body is empty, or an exception was raised, the proxy is considered dead.

You can override the ban detection method by passing a path to a custom BanDetectionPolicy in the ROTATING_PROXY_BAN_POLICY option, e.g.:

# settings.py
ROTATING_PROXY_BAN_POLICY = 'myproject.policy.MyBanPolicy'

The policy must be a class with response_is_ban and exception_is_ban methods. These methods can return True (ban detected), False (not a ban), or None (unknown). It can be convenient to subclass and modify the default BanDetectionPolicy:

# myproject/policy.py
from rotating_proxies.policy import BanDetectionPolicy

class MyPolicy(BanDetectionPolicy):
    def response_is_ban(self, request, response):
        # use default rules, but also consider an HTTP 200 response
        # a ban if the word 'captcha' appears in the response body.
        ban = super(MyPolicy, self).response_is_ban(request, response)
        ban = ban or b'captcha' in response.body
        return ban

    def exception_is_ban(self, request, exception):
        # override the method completely: don't take exceptions into account
        return None

Instead of creating a policy, you can also implement the response_is_ban and exception_is_ban methods as spider methods, for example:

class MySpider(scrapy.Spider):
    # ...

    def response_is_ban(self, request, response):
        return b'banned' in response.body

    def exception_is_ban(self, request, exception):
        return None

It is important to get these rules right, because the actions for a failed request and for a bad proxy should be different: if the proxy is to blame, it makes sense to retry the request with a different proxy.

Non-working proxies can become alive again after some time. scrapy-rotating-proxies uses a randomized exponential backoff for these checks: the first re-check happens soon; if the proxy still fails, the next check is delayed further, and so on. Use ROTATING_PROXY_BACKOFF_BASE to adjust the initial delay (by default it is random, from 0 to 5 minutes). The randomized exponential backoff is capped by ROTATING_PROXY_BACKOFF_CAP.
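
As a rough sketch of the idea only (not the package's exact code), the re-check delay grows roughly like this:

import random

def recheck_delay(attempt, base=300, cap=3600):
    # illustrative randomized exponential backoff: a jittered base * 2 ** attempt,
    # capped; cf. ROTATING_PROXY_BACKOFF_BASE and ROTATING_PROXY_BACKOFF_CAP
    return min(cap, random.uniform(0, base * 2 ** attempt))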

Settings

  • ROTATING_PROXY_LIST - a list of proxies to choose from;

  • ROTATING_PROXY_LIST_PATH - path to a file with a list of proxies;

  • ROTATING_PROXY_LOGSTATS_INTERVAL - stats logging interval in seconds, 30 by default;

  • ROTATING_PROXY_CLOSE_SPIDER - when True, the spider is stopped if there are no alive proxies. If False (default), then when there are no alive proxies all dead proxies are re-checked.

  • ROTATING_PROXY_PAGE_RETRY_TIMES - a number of times to retry downloading a page using a different proxy. After this many retries, the failure is considered a page failure, not a proxy failure. Think of it this way: every improperly detected ban costs you ROTATING_PROXY_PAGE_RETRY_TIMES alive proxies. Default: 5.

    It is possible to change this option per-request using the max_proxies_to_try request.meta key - for example, you can use a higher value for certain pages if you're sure they should work (see the example after this list).

  • ROTATING_PROXY_BACKOFF_BASE - base backoff time, in seconds. Default is 300 (i.e. 5 min).

  • ROTATING_PROXY_BACKOFF_CAP - backoff time cap, in seconds. Default is 3600 (i.e. 60 min).

  • ROTATING_PROXY_BAN_POLICY - path to a ban detection policy. Default is 'rotating_proxies.policy.BanDetectionPolicy'.
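
For example, a sketch (with a hypothetical spider name and URL) of raising the per-request retry budget via the max_proxies_to_try meta key:

import scrapy

class ImportantPagesSpider(scrapy.Spider):
    name = 'important-pages'

    def start_requests(self):
        # try up to 10 different proxies for this page before counting it as a page failure
        yield scrapy.Request('https://example.com/must-have',
                             meta={'max_proxies_to_try': 10})

    def parse(self, response):
        self.logger.info('got %s', response.url)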

FAQ

Q: Where to get proxy lists? How to write and maintain ban rules?

A: It is up to you to find proxies and maintain proper ban rules for web sites; scrapy-rotating-proxies doesn't have anything built-in. There are commercial proxy services like https://crawlera.com/ which can integrate with Scrapy (see https://github.com/scrapy-plugins/scrapy-crawlera) and take care of all these details.

Contributing

To run tests, install tox and run tox from the source checkout.


hyperiongray

More Repositories

1. eli5 (Jupyter Notebook, 2,758 stars): A library for debugging/inspecting machine learning classifiers and explaining their predictions
2. tensorboard_logger (Python, 625 stars): Log TensorBoard events without touching TensorFlow
3. sklearn-crfsuite (Python, 421 stars): scikit-learn inspired API for CRFsuite
4. aquarium (Python, 192 stars): Splash + HAProxy + Docker Compose
5. deep-deep (Jupyter Notebook, 165 stars): Adaptive crawler which uses Reinforcement Learning methods
6. arachnado (Python, 156 stars): Web Crawling UI and HTTP API, based on Scrapy and Tornado
7. autologin (Python, 115 stars): A project to attempt to automatically login to a website given a single seed
8. html-text (HTML, 115 stars): Extract text from HTML
9. Formasaurus (HTML, 110 stars): Formasaurus tells you the type of an HTML form and its fields using machine learning
10. page-compare (HTML, 88 stars): Simple heuristic for measuring web page similarity (& data set)
11. autopager (HTML, 86 stars): Detect and classify pagination links
12. undercrawler (Python, 75 stars): A generic crawler
13. scrapy-crawl-once (Python, 74 stars): Scrapy middleware which allows to crawl only new content
14. soft404 (Jupyter Notebook, 53 stars): A classifier for detecting soft 404 pages
15. agnostic (Python, 51 stars): Agnostic Database Migrations
16. autologin-middleware (Python, 37 stars): Scrapy middleware for the autologin
17. json-lines (Python, 34 stars): Read JSON lines (jl) files, including gzipped and broken
18. extract-html-diff (HTML, 29 stars): extract difference between two html pages
19. scrapy-kafka-export (Python, 28 stars): Scrapy extension which writes crawled items to Kafka
20. MaybeDont (Python, 27 stars): A component that tries to avoid downloading duplicate content
21. sitehound-frontend (HTML, 23 stars): Site Hound (previously THH) is a Domain Discovery Tool
22. imageSimilarity (Python, 20 stars): Given a new image, determine if it is likely derived from a known image.
23. domain-discovery-crawler (Python, 17 stars): Broad crawler for domain discovery
24. url-summary (Python, 17 stars): Show summary of a large number of URLs in a Jupyter Notebook
25. sitehound (Shell, 16 stars): This is the facade for installation and access to the individual components
26. tor-proxy (11 stars): a tor socks proxy docker image
27. scrapy-dockerhub (Python, 10 stars): [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders.
28. web-page-annotator (Python, 9 stars): Annotate parts of web pages in the browser
29. scrash-lua-examples (JavaScript, 7 stars): A collection of example LUA scripts and JS utilities
30. scrapy-cdr (Python, 7 stars): Item definition and utils for storing items in CDR format for scrapy
31. hh-page-classifier (Python, 6 stars): Headless Horseman Page Classifier service
32. privoxy (6 stars): Privoxy HTTP Proxy based on jess/privoxy
33. sitehound-backend (HTML, 6 stars): Sitehound's backend
34. fortia (JavaScript, 5 stars): [UNMAINTAINED] Firefox addon for Scrapely
35. proxy-middleware (Python, 5 stars): Scrapy middleware that reads proxy config from settings
36. linkrot (Jupyter Notebook, 4 stars): [UNMAINTAINED] A script (Scrapy spider) to check a list of URLs.
37. hgprofiler (JavaScript, 4 stars)
38. linkdepth (Python, 4 stars): [UNMAINTAINED] scrapy spider to check link depth over time
39. common-crawl-mapreduce (Java, 3 stars): A naive scoring of commoncrawl's content using MR
40. captcha-broad-crawl (Python, 3 stars): Broad crawl of onion sites in search for captchas
41. frontera-crawler (Python, 3 stars): Crawler-specific logic for Frontera
42. hh-deep-deep (Python, 3 stars): THH ↔ deep-deep integration
43. scrapy-login (Python, 3 stars): [UNMAINTAINED] A middleware that provides continuous site login facility
44. bk-string (C, 3 stars): A BK Tree based approach to storing and querying strings by Levenshtein Distance.
45. domainSpider (Python, 3 stars): Simple web crawler that sticks to a set list of domains. Work in progress.
46. quickpin (Python, 2 stars): New iteration of QuickPin with Flask & AngularDart
47. py-bkstring (Python, 2 stars): A python wrapper for the bk-string C project.
48. broadcrawl (Python, 2 stars): Middleware that limits number of internal/external links during broad crawl
49. sshadduser (Python, 2 stars): A simple tool to add a new user with OpenSSH keys.
50. autoregister (Python, 2 stars)
51. quickpin-api (Python, 1 star): Python wrapper for the QuickPin API
52. muricanize (Python, 1 star): A translation API
53. rs-bkstring (Rust, 1 star)
54. scrash-pageuploader (Python, 1 star): [UNMAINTAINED] S3 Uploader pipelines for HTML and screenshots rendered by Splash
55. site-checker (JavaScript, 1 star)
56. frontera-scripts (Python, 1 star): A set of scripts to spin up EC2 Frontera cluster with spiders