scrapy-deltafetch
This is a Scrapy spider middleware to ignore requests to pages seen in previous crawls of the same spider, thus producing a "delta crawl" containing only new requests.
This also speeds up the crawl by reducing the number of requests that need to be crawled and processed (typically, item requests are the most CPU-intensive).
The DeltaFetch middleware uses Python's dbm package to store request fingerprints.
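Because the state is just a dbm database on disk, you can inspect it with a few lines of Python. This is a minimal sketch, assuming the default state location under the project's .scrapy data directory and a spider named example; the exact path and file name can vary with your DELTAFETCH_DIR setting and the dbm backend in use:
import dbm

# Assumption: default state path; the file is named after the spider.
with dbm.open('.scrapy/deltafetch/example.db', 'r') as db:
    for key in db.keys():
        print(key)  # each key is a stored request fingerprint (or deltafetch_key)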
Installation
Install scrapy-deltafetch using pip:
$ pip install scrapy-deltafetch
Configuration
Add the DeltaFetch middleware by including it in SPIDER_MIDDLEWARES in your settings.py file:
SPIDER_MIDDLEWARES = {
    'scrapy_deltafetch.DeltaFetch': 100,
}
Here, priority 100 is just an example. Set its value depending on other middlewares you may have enabled already.
Enable the middleware using DELTAFETCH_ENABLED in your settings.py:
DELTAFETCH_ENABLED = True
Usage
The following options control the behavior of the DeltaFetch middleware.
Supported Scrapy settings
DELTAFETCH_ENABLED - enable (or disable) this extension
DELTAFETCH_DIR - directory where the state is stored
DELTAFETCH_RESET - reset the state, clearing out all seen requests
These usually go in your Scrapy project's settings.py.
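For instance, a settings.py combining all three settings might look like this sketch (the directory path is only an example):
SPIDER_MIDDLEWARES = {
    'scrapy_deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True
DELTAFETCH_DIR = '/var/scrapy/deltafetch'  # example path; state files are kept here
DELTAFETCH_RESET = False  # set to True once to forget all previously seen requests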
Supported Scrapy spider arguments
deltafetch_reset - same effect as the DELTAFETCH_RESET setting
Example:
$ scrapy crawl example -a deltafetch_reset=1
Supported Scrapy request meta keys
deltafetch_key - used to define the lookup key for that request. By default it is Scrapy's default Request fingerprint, but it can be changed to contain an item id, for example. This requires support from the spider, but makes the extension more efficient for sites that have many URLs for the same item.
deltafetch_enabled - if set to False, DeltaFetch is disabled for that specific request
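As an illustration, here is a minimal spider sketch using both meta keys. The site structure, CSS selectors, and the way the item id is derived from the URL are all hypothetical; adapt them to your target site:
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://example.com/items']

    def parse(self, response):
        for link in response.css('a.item::attr(href)').getall():
            # Hypothetical: derive a stable item id from the URL so that
            # several URLs pointing at the same item share one lookup key.
            item_id = link.rstrip('/').rsplit('/', 1)[-1]
            yield response.follow(
                link,
                callback=self.parse_item,
                meta={'deltafetch_key': item_id},
            )
        next_page = response.css('a.next::attr(href)').get()
        if next_page:
            # Opt this request out of DeltaFetch so it is always re-crawled:
            yield response.follow(
                next_page,
                callback=self.parse,
                meta={'deltafetch_enabled': False},
            )

    def parse_item(self, response):
        yield {'url': response.url, 'title': response.css('h1::text').get()}
With deltafetch_key set this way, a second crawl skips item pages whose id was already seen, even if they are reached through different URLs.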