• Stars: 192
• Rank: 202,019 (Top 4%)
• Language: Python
• License: MIT License
• Created: about 9 years ago
• Updated: almost 6 years ago


Repository Details

Splash + HAProxy + Docker Compose

Aquarium

Aquarium is a cookiecutter template for a hassle-free Docker Compose + Splash setup. Think of it as a Splash instance with extra features and without common pitfalls.

Usage

First, make sure Docker and Docker Compose are installed.

Then install cookiecutter:

pip install cookiecutter

or (on OS X + homebrew):

brew install cookiecutter

Then generate a folder with config files:

cookiecutter gh:TeamHG-Memex/aquarium

With all default options it'll create an aquarium folder in the current path. Go to this folder and start the Splash cluster:

cd ./aquarium
docker-compose up

Then use http://<host>:8050 as a regular Splash instance. On Linux http://0.0.0.0:8050 should work; on OS X and Windows the IP address depends on your boot2docker or docker-machine setup.
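Once the cluster is up, any Splash client can talk to it over HTTP. As a minimal sketch using the Python requests library (the host and credentials below are the defaults mentioned in this README; adjust them to your setup), this builds the exact render.html request a client would send, including HTTP Basic Auth:

```python
import requests

SPLASH_URL = "http://0.0.0.0:8050"  # on OS X / Windows use the docker-machine IP
AUTH = ("user", "userpass")         # default auth_user / auth_password values

def build_render_request(url, timeout=30):
    """Prepare a GET request against Splash's render.html endpoint."""
    req = requests.Request(
        "GET",
        SPLASH_URL + "/render.html",
        params={"url": url, "timeout": timeout},
        auth=AUTH,
    )
    return req.prepare()

prepared = build_render_request("https://example.com")
print(prepared.url)
# Sending it (requests.Session().send(prepared)) returns the rendered HTML.
```

The same parameters work with plain requests.get(); the prepared-request form is shown only to make the URL and auth header explicit.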

Options

When generating a config, cookiecutter will ask a bunch of questions.

  • folder_name (default is "aquarium") - the name of the target folder.

  • num_splashes (default is "3") - the number of Splash instances to create. To utilize full server capacity it makes sense to create slightly more Splash instances than CPU cores; e.g. on a 2-core machine 3 instances often work best.

  • splash_version (default is "3.0") - the version of the scrapinghub/splash Docker image.

  • auth_user (default is "user"), auth_password (default is "userpass") - HTTP Basic Auth credentials for Splash.

  • splash_verbosity (default is "1") - Splash log verbosity, from 0 to 5.

  • max_timeout (default is "3600") - the maximum allowed render timeout, in seconds.

  • maxrss_mb (default is "3000") - a soft memory limit, in MB. A Splash container is restarted after some time if it starts to use more memory than this value.

  • splash_slots (default is "5") - the number of Splash slots to use, i.e. how many render jobs run in parallel in a single Splash process.

  • stats_enabled (default is "1") - whether to enable HAProxy stats. If stats are enabled, visit http://<host>:8036 to see the stats page.

  • stats_auth (default is "admin:adminpass") - HTTP Basic Auth credentials for HAProxy stats.

  • tor (default is "1") - enter 0 to disable Tor support. When Tor support is enabled, all .onion links are opened using Tor. In addition, there is a tor Splash proxy profile which you can use to render any page through Tor.

  • adblock (default is "1") - enter 0 to disable AdBlock Plus request filters (FIXME: this option is not working yet; filters are always available). By default, the following filters are available:

    • easylist: the default set of EasyList filters for English;
    • easyprivacy: EasyPrivacy filters, which remove tracking scripts;
    • easylist_noadult: an EasyList variant without filters for adult domains;
    • fanboy-social: removes social media content such as Facebook "Like" buttons and other widgets;
    • fanboy-annoyance: blocks social media content, in-page pop-ups and other annoyances; use it to decrease loading times and declutter pages. fanboy-social is already included in this filter.
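The tor and adblock options map onto regular Splash render arguments: a proxy profile name (proxy) and a comma-separated list of filter names (filters). A short sketch, again assuming the default host and credentials from above and a hypothetical .onion address:

```python
import requests

SPLASH_URL = "http://0.0.0.0:8050"
AUTH = ("user", "userpass")

# 'proxy' selects the Splash proxy profile (here the 'tor' profile created by
# the tor option); 'filters' names the AdBlock Plus filter lists to apply.
params = {
    "url": "http://example.onion/",    # hypothetical address, for illustration
    "proxy": "tor",
    "filters": "easylist,easyprivacy",
}
req = requests.Request("GET", SPLASH_URL + "/render.html",
                       params=params, auth=AUTH).prepare()
print(req.url)
```

Sending this request renders the page through Tor with both filter lists applied.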

Contributing

License is MIT.



More Repositories

1. eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions (Jupyter Notebook, 2,758 stars)
2. scrapy-rotating-proxies - use multiple proxies with Scrapy (Python, 656 stars)
3. tensorboard_logger - Log TensorBoard events without touching TensorFlow (Python, 625 stars)
4. sklearn-crfsuite - scikit-learn inspired API for CRFsuite (Python, 421 stars)
5. deep-deep - Adaptive crawler which uses Reinforcement Learning methods (Jupyter Notebook, 165 stars)
6. arachnado - Web Crawling UI and HTTP API, based on Scrapy and Tornado (Python, 156 stars)
7. autologin - A project to attempt to automatically login to a website given a single seed (Python, 115 stars)
8. html-text - Extract text from HTML (HTML, 115 stars)
9. Formasaurus - Formasaurus tells you the type of an HTML form and its fields using machine learning (HTML, 110 stars)
10. page-compare - Simple heuristic for measuring web page similarity (& data set) (HTML, 88 stars)
11. autopager - Detect and classify pagination links (HTML, 86 stars)
12. undercrawler - A generic crawler (Python, 75 stars)
13. scrapy-crawl-once - Scrapy middleware which allows to crawl only new content (Python, 74 stars)
14. soft404 - A classifier for detecting soft 404 pages (Jupyter Notebook, 53 stars)
15. agnostic - Agnostic Database Migrations (Python, 51 stars)
16. autologin-middleware - Scrapy middleware for the autologin (Python, 37 stars)
17. json-lines - Read JSON lines (jl) files, including gzipped and broken (Python, 34 stars)
18. extract-html-diff - extract difference between two html pages (HTML, 29 stars)
19. scrapy-kafka-export - Scrapy extension which writes crawled items to Kafka (Python, 28 stars)
20. MaybeDont - A component that tries to avoid downloading duplicate content (Python, 27 stars)
21. sitehound-frontend - Site Hound (previously THH) is a Domain Discovery Tool (HTML, 23 stars)
22. imageSimilarity - Given a new image, determine if it is likely derived from a known image. (Python, 20 stars)
23. domain-discovery-crawler - Broad crawler for domain discovery (Python, 17 stars)
24. url-summary - Show summary of a large number of URLs in a Jupyter Notebook (Python, 17 stars)
25. sitehound - This is the facade for installation and access to the individual components (Shell, 16 stars)
26. tor-proxy - a tor socks proxy docker image (11 stars)
27. scrapy-dockerhub - [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders. (Python, 10 stars)
28. web-page-annotator - Annotate parts of web pages in the browser (Python, 9 stars)
29. scrash-lua-examples - A collection of example LUA scripts and JS utilities (JavaScript, 7 stars)
30. scrapy-cdr - Item definition and utils for storing items in CDR format for scrapy (Python, 7 stars)
31. hh-page-classifier - Headless Horseman Page Classifier service (Python, 6 stars)
32. privoxy - Privoxy HTTP Proxy based on jess/privoxy (6 stars)
33. sitehound-backend - Sitehound's backend (HTML, 6 stars)
34. fortia - [UNMAINTAINED] Firefox addon for Scrapely (JavaScript, 5 stars)
35. proxy-middleware - Scrapy middleware that reads proxy config from settings (Python, 5 stars)
36. linkrot - [UNMAINTAINED] A script (Scrapy spider) to check a list of URLs. (Jupyter Notebook, 4 stars)
37. hgprofiler (JavaScript, 4 stars)
38. linkdepth - [UNMAINTAINED] scrapy spider to check link depth over time (Python, 4 stars)
39. common-crawl-mapreduce - A naive scoring of commoncrawl's content using MR (Java, 3 stars)
40. captcha-broad-crawl - Broad crawl of onion sites in search for captchas (Python, 3 stars)
41. frontera-crawler - Crawler-specific logic for Frontera (Python, 3 stars)
42. hh-deep-deep - THH ↔ deep-deep integration (Python, 3 stars)
43. scrapy-login - [UNMAINTAINED] A middleware that provides continuous site login facility (Python, 3 stars)
44. bk-string - A BK Tree based approach to storing and querying strings by Levenshtein Distance. (C, 3 stars)
45. domainSpider - Simple web crawler that sticks to a set list of domains. Work in progress. (Python, 3 stars)
46. quickpin - New iteration of QuickPin with Flask & AngularDart (Python, 2 stars)
47. py-bkstring - A python wrapper for the bk-string C project. (Python, 2 stars)
48. broadcrawl - Middleware that limits number of internal/external links during broad crawl (Python, 2 stars)
49. sshadduser - A simple tool to add a new user with OpenSSH keys. (Python, 2 stars)
50. autoregister (Python, 2 stars)
51. quickpin-api - Python wrapper for the QuickPin API (Python, 1 star)
52. muricanize - A translation API (Python, 1 star)
53. rs-bkstring (Rust, 1 star)
54. scrash-pageuploader - [UNMAINTAINED] S3 Uploader pipelines for HTML and screenshots rendered by Splash (Python, 1 star)
55. site-checker (JavaScript, 1 star)
56. frontera-scripts - A set of scripts to spin up EC2 Frontera cluster with spiders (Python, 1 star)