
storm-crawler

A scalable, mature and versatile web crawler based on Apache Storm

StormCrawler is an open source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.

Quickstart

NOTE: These instructions assume that you have Apache Maven installed. You will need to install Apache Storm to run the crawler.

StormCrawler requires Java 11 or above.

The version of Storm to use must match the one defined in the pom.xml file of your topology. The major version of StormCrawler mirrors that of Apache Storm: whereas StormCrawler 1.x used Storm 1.2.3, the current version requires Storm 2.5.0. Our ansible-storm repository contains resources for installing Apache Storm with Ansible. Alternatively, our stormcrawler-docker project should help you run Apache Storm on Docker.
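As an illustration, the relevant part of a topology POM could look like the sketch below. The version numbers and the storm-client artifact are illustrative assumptions; check the POM generated by the archetype for the exact coordinates used by your release:

```xml
<properties>
  <!-- Placeholder versions: keep the Storm version aligned
       with the StormCrawler major version, as explained above -->
  <stormcrawler.version>2.8</stormcrawler.version>
  <storm.version>2.5.0</storm.version>
</properties>

<dependencies>
  <!-- StormCrawler core resources -->
  <dependency>
    <groupId>com.digitalpebble.stormcrawler</groupId>
    <artifactId>storm-crawler-core</artifactId>
    <version>${stormcrawler.version}</version>
  </dependency>
  <!-- Storm itself is provided by the cluster at runtime -->
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-client</artifactId>
    <version>${storm.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Marking Storm as provided keeps it out of the topology jar, since the cluster supplies it at runtime.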

Once Storm is installed, the easiest way to get started is to generate a brand new StormCrawler project using:

mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-archetype -DarchetypeVersion=2.8

You'll be asked to enter a groupId (e.g. com.mycompany.crawler), an artifactId (e.g. stormcrawler), a version and a package name.

This will not only create a fully formed project containing a POM with the dependency above but also the default resource files, a default CrawlTopology class and a configuration file. Enter the directory you just created (its name should match the artifactId you specified earlier) and follow the instructions in the README file.

Alternatively, if you can't or don't want to use the Maven archetype above, you can simply copy the files from archetype-resources.

Have a look at the code of the CrawlTopology class, the crawler-conf.yaml file, and the files in src/main/resources/: they are all that is needed to run a crawl topology, as all the other components come from the core module.
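To give an idea of what the configuration file contains, a minimal crawler-conf.yaml might look like the sketch below. The http.agent.* keys identify your crawler to the sites it fetches; the values shown are placeholders and the exact set of keys depends on your StormCrawler version:

```yaml
# Sketch of a minimal crawler-conf.yaml; values are placeholders to adapt
config:
  # Identify your crawler politely to the servers it visits
  http.agent.name: "mycrawler"
  http.agent.version: "1.0"
  http.agent.description: "built with StormCrawler"
  http.agent.url: "https://example.com/"
  http.agent.email: "crawler@example.com"
  # Number of worker processes for the topology
  topology.workers: 1
```

The file generated by the archetype contains a more complete set of keys with sensible defaults.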

Getting help

The Wiki is a good place to start your investigation, but if you are stuck, please use the tag stormcrawler on StackOverflow or ask a question in the Discussions section.

DigitalPebble Ltd provides commercial support and consulting for StormCrawler.

Note for developers

Please format your code before submitting a PR with:

mvn git-code-format:format-code -Dgcf.globPattern=**/*

Each commit must include a DCO sign-off, which looks like this:

Signed-off-by: Jane Smith <[email protected]>

You may type this line on your own when writing your commit messages. However, if your user.name and user.email are set in your git config, you can use -s or --signoff to add the Signed-off-by line to the end of the commit message.
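For example, with an identity configured in git, -s adds the trailer automatically. The repository, file and identity below are placeholders for illustration:

```shell
# Create a throwaway repository and set the identity used
# for the sign-off (placeholder values)
git init demo && cd demo
git config user.name "Jane Smith"
git config user.email "jane.smith@example.com"

# Commit with -s / --signoff to append the DCO trailer
echo "hello" > file.txt
git add file.txt
git commit -s -m "Add file.txt"

# The last commit message now ends with:
# Signed-off-by: Jane Smith <jane.smith@example.com>
git log -1 --format=%B
```

The trailer is built from user.name and user.email, so make sure they match the identity you want to sign off with.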

Thanks


YourKit supports open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of YourKit Java Profiler and YourKit .NET Profiler, innovative and intelligent tools for profiling Java and .NET applications.

More Repositories

1. behemoth (Java, 282 stars): Behemoth is an open source platform for large scale document analysis based on Apache Hadoop.
2. TextClassification (Java, 48 stars): A Text Classification API in Java originally developed by DigitalPebble Ltd. The API is independent from the ML implementations used and can be used as a front end to various ML algorithms. libSVM and liblinear are currently embedded.
3. textclassification-examples (Java, 10 stars): Use cases for DigitalPebble's TextClassification API.
4. stormcrawlerfight (Shell, 9 stars): Crawl configurations for benchmarking / testing StormCrawler.
5. stormcrawler-docker (Dockerfile, 7 stars): Resources for running StormCrawler with Docker services.
6. ansible-storm (7 stars): Ansible playbook for deploying a Storm cluster.
7. TextClassificationPlugin (Java, 5 stars): GATE Processing Resource wrapping DigitalPebble's TextClassification API.
8. ngrams-api (Java, 4 stars): Java API for querying an N-Grams corpus. Uses Lucene for searching and indexing from the Google Web-1T format.
9. behemoth-commoncrawl (Java, 4 stars): Support for the old (pre-2013) CommonCrawl dataset in Behemoth.
10. NutchFight (Java, 4 stars): Resources for a comparison between versions 1.8 and 2.x of Apache Nutch.
11. tescobank (Java, 4 stars): Setup for crawling tescobank with SC.
12. sc-warc (2 stars): WARC resources for StormCrawler.
13. behemoth-textclassification (Java, 1 star): Module for classifying Behemoth documents with a model from our Text Classification API.
14. crawlurlfrontier (FLUX, 1 star): Crawl config used to test URL Frontier on a large scale and produce WARCs for CommonCrawl.
15. behemoth-elasticsearch (Java, 1 star): ElasticSearch module for Behemoth.
16. urlfrontier-client (Rust, 1 star): URLFrontier client written in Rust (mostly as a way of learning Rust).