  • Stars: 377
  • Rank: 113,535 (Top 3%)
  • Language: Python
  • Created: about 11 years ago
  • Updated: 4 months ago


Repository Details

Warcprox - WARC writing MITM HTTP/S proxy

Warcprox is an HTTP proxy designed for web archiving applications. Used together with brozzler, it forms part of a comprehensive, modern, distributed archival web capture system. Warcprox stores its traffic to disk in the Web ARChive (WARC) file format, which may then be accessed with web archival replay software like OpenWayback and pywb. It captures encrypted HTTPS traffic by using the "man-in-the-middle" technique (see the Man-in-the-middle section for more info).

Warcprox was originally based on pymiproxy by Nadeem Douba.

Getting started

Warcprox runs on Python 3.4+.

To install the latest release, run:

# apt-get install libffi-dev libssl-dev
pip install warcprox

You can also install the latest bleeding-edge code:

pip install git+https://github.com/internetarchive/warcprox.git

To start warcprox, run:

warcprox

Try warcprox --help for documentation on command line options.
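Once warcprox is running, any HTTP client can be pointed at it as a proxy. As a quick smoke test, here is a hypothetical curl invocation, assuming warcprox is listening on its default localhost:8000 (confirm the port with warcprox --help) and skipping certificate verification for the reasons explained in the Man-in-the-middle section below:

curl --proxy http://localhost:8000 --insecure https://example.com/

Each request proxied this way is captured to a WARC file (the output directory is configurable; see warcprox --help).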

Man-in-the-middle

Normally, HTTP proxies can't read encrypted HTTPS traffic. The browser uses the HTTP CONNECT method to establish a tunnel through the proxy, and the proxy merely routes raw bytes between the client and server. Since the bytes are encrypted, the proxy can't make sense of the information that it proxies. This nonsensical encrypted data is not typically useful for web archiving purposes.

In order to capture HTTPS traffic, warcprox acts as a "man-in-the-middle" (MITM). When it receives a CONNECT directive from a client, it generates a public key certificate for the requested site, presents it to the client, and proceeds to establish an encrypted connection with the client. It then makes a separate, normal HTTPS connection to the remote site. It decrypts, archives, and re-encrypts traffic in both directions.

Configuring a warcprox instance as a browser's HTTP proxy will result in security certificate warnings, because none of the certificates will be signed by a trusted authority. However, there is nothing malicious about how warcprox functions. To use warcprox effectively, the client needs to either disable certificate verification or add the CA certificate generated by warcprox as a trusted authority. If you take the latter approach, remember to undo the change when you are finished using warcprox.
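For example, rather than disabling verification outright, a client such as curl can be told to trust warcprox's generated CA directly. This is a sketch only: the certificate path below is hypothetical, since the actual location of the CA certificate is controlled by warcprox's command line options.

curl --proxy http://localhost:8000 --cacert ./warcprox-ca.pem https://example.com/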

API

The warcprox API may be used to retrieve information from and interact with a running warcprox instance, including:

  • Retrieving status information via the /status URL
  • Writing WARC records via the WARCPROX_WRITE_RECORD HTTP method
  • Controlling warcprox settings via the Warcprox-Meta HTTP header

For warcprox API documentation, see: api.rst.
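For instance, status can be fetched from a running instance with a plain GET, and a custom record can be written with the WARCPROX_WRITE_RECORD method. Both sketches below assume the default listening address of localhost:8000 and a made-up payload and URL; see api.rst for the authoritative details and required headers.

curl http://localhost:8000/status

curl --proxy http://localhost:8000 -X WARCPROX_WRITE_RECORD \
    -H 'Content-Type: text/plain' -H 'WARC-Type: metadata' \
    --data-binary 'some payload' http://example.com/my-metadata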

Deduplication

Warcprox avoids archiving redundant content by "deduplicating" it. The process works similarly to deduplication in Heritrix and other web archiving tools:

  1. While fetching a URL, compute the payload content digest (a SHA1 checksum by default)
  2. Look up the digest in the deduplication database (warcprox currently supports sqlite by default, rethinkdb with two different schemas, and trough)
  3. If found, write a WARC revisit record referencing the URL and capture time of the previous capture
  4. If not found:
    1. Write a response record with the full payload
    2. Store a new entry in the deduplication database (this can be disabled; see the Warcprox-Meta HTTP request header)

The deduplication database is partitioned into different "buckets". URLs are deduplicated only against other captures in the same bucket. If specified, the dedup-buckets field of the Warcprox-Meta HTTP request header determines the bucket(s). Otherwise, the default bucket is used.
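As a sketch of how a client might route captures into a particular bucket (the authoritative JSON layout of the header is documented in api.rst; the bucket name here is made up):

curl --proxy http://localhost:8000 --insecure \
    -H 'Warcprox-Meta: {"dedup-buckets": {"my-bucket": "rw"}}' \
    https://example.com/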

Deduplication can be disabled entirely by starting warcprox with the argument --dedup-db-file=/dev/null.

Statistics

Warcprox stores some crawl statistics in sqlite or rethinkdb. These are consulted when enforcing limits and soft-limits (see the Warcprox-Meta fields), and can also be used by processes outside of warcprox, for example for crawl job reporting.

Statistics are grouped by "bucket". Every capture is counted as part of the __all__ bucket. Other buckets can be specified in the Warcprox-Meta request header. The fallback bucket in case none is specified is called __unspecified__.
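A client might assign its captures to a custom statistics bucket along these lines (a sketch; see api.rst for the authoritative Warcprox-Meta layout, and note that the bucket name is made up):

curl --proxy http://localhost:8000 --insecure \
    -H 'Warcprox-Meta: {"stats": {"buckets": ["my-crawl-job"]}}' \
    https://example.com/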

Within each bucket are three sub-buckets:

  • new - tallies captures for which a complete record (usually a response record) was written to a WARC file
  • revisit - tallies captures for which a revisit record was written to a WARC file
  • total - includes all URLs processed, even those not written to a WARC file, and so may be greater than the sum of new and revisit records

Within each of these sub-buckets, warcprox generates two kinds of statistics:

  • urls - simple count of URLs
  • wire_bytes - sum of bytes received over the wire from the remote server for each URL, including HTTP headers

For historical reasons, the default sqlite store keeps statistics as JSON blobs:

sqlite> select * from buckets_of_stats;
bucket           stats
---------------  ---------------------------------------------------------------------------------------------
__unspecified__  {"bucket":"__unspecified__","total":{"urls":37,"wire_bytes":1502781},"new":{"urls":15,"wire_bytes":1179906},"revisit":{"urls":22,"wire_bytes":322875}}
__all__          {"bucket":"__all__","total":{"urls":37,"wire_bytes":1502781},"new":{"urls":15,"wire_bytes":1179906},"revisit":{"urls":22,"wire_bytes":322875}}
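These blobs are straightforward to unpack with a few lines of Python. A minimal sketch, assuming the default sqlite statistics store; the database filename below is hypothetical, since the actual path is set by warcprox's command line options:

import json
import sqlite3

db = sqlite3.connect('./warcprox-stats.db')  # hypothetical filename
for bucket, blob in db.execute('select bucket, stats from buckets_of_stats'):
    stats = json.loads(blob)  # decode the JSON blob shown above
    print(bucket, stats['total']['urls'], 'urls,',
          stats['total']['wire_bytes'], 'wire bytes')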

Plugins

Warcprox supports a limited notion of plugins by way of the --plugin command line argument. Plugin classes are loaded from the regular python module search path. They are instantiated with one argument, a warcprox.Options instance containing the values of all command line arguments. Legacy plugins with constructors that take no arguments are also supported. Plugins should either have a method notify(self, recorded_url, records) or should subclass warcprox.BasePostfetchProcessor. More than one plugin can be configured by specifying --plugin multiple times.

A minimal sketch of the notify() style of plugin follows.
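This example is illustrative only: the class and module names are made up, and the only assumed interface is the notify(self, recorded_url, records) method described above.

class CountingPlugin:
    def __init__(self, options):
        # options is the warcprox.Options instance holding all
        # command line argument values
        self.count = 0

    def notify(self, recorded_url, records):
        # called once per captured url, after its records are written
        self.count += 1
        print('%d urls archived so far' % self.count)

If this class lived in a module named myplugins on the python module search path, it could be loaded with warcprox --plugin myplugins.CountingPlugin.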

Architecture

Warcprox is multithreaded. It has a pool of HTTP proxy threads (100 by default). When handling a request, a proxy thread records data from the remote server to an in-memory buffer that spills over to disk if necessary (after 512K by default), while streaming the data to the proxy client. Once the HTTP transaction is complete, it places the recorded URL on a thread-safe queue, to be picked up by the first processor in the postfetch chain.

The postfetch chain normally includes processors for loading deduplication information, writing records to the WARC, saving deduplication information, and updating statistics. The exact set of processors in the chain depends on command line arguments; for example, plugins specified with --plugin are processors in the postfetch chain. Each postfetch processor has its own thread or threads, so the processors run in parallel, independently of one another. This design also enables them to process URLs in batches. For example, the statistics processor gathers statistics for up to 10 seconds or 500 URLs, whichever comes first, then updates the statistics database with just a few queries.

License

Warcprox is a derivative work of pymiproxy, which is GPL. Thus warcprox is also GPL.

  • Copyright (C) 2012 Cygnos Corporation
  • Copyright (C) 2013-2018 Internet Archive

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

More Repositories

1. openlibrary - One webpage for every book ever published! (Python, 5,180 stars)
2. heritrix3 - Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project. (Java, 2,821 stars)
3. bookreader - The Internet Archive BookReader (JavaScript, 975 stars)
4. wayback-machine-webextension - A web browser extension for Chrome, Firefox, Edge, and Safari 14. (JavaScript, 657 stars)
5. brozzler - distributed browser-based web crawler (Python, 657 stars)
6. openlibrary-client - Python Client Library for the Archive.org OpenLibrary API (Python, 377 stars)
7. warc - Python library for reading and writing warc files (Python, 237 stars)
8. dweb-mirror - Offline Internet Archive project (JavaScript, 232 stars)
9. warctools - Command line tools and libraries for handling and manipulating WARC files (and HTTP contents) (Python, 149 stars)
10. internetarchivebot (PHP, 120 stars)
11. bookserver - Archive.org OPDS Bookserver - A standard for digital book distribution (Python, 119 stars)
12. fatcat - Perpetual Access To The Scholarly Record (Python, 114 stars)
13. archive-pdf-tools - Fast PDF generation and compression. Deals with millions of pages daily. (Python, 97 stars)
14. fatcat-scholar - search interface for scholarly works (Python, 78 stars)
15. Zeno - State-of-the-art web crawler 🔱 (HTML, 70 stars)
16. iaux - Monorepo for Archive.org UX development and prototyping. (JavaScript, 63 stars)
17. openlibrary-bots - A repository of cleanup bots implementing the openlibrary-client (Python, 62 stars)
18. umbra - A queue-controlled browser automation tool for improving web crawl quality (Python, 60 stars)
19. dweb-archive (JavaScript, 54 stars)
20. hind - Hashistack-IN-Docker (single container with nomad + consul + caddy) (Shell, 53 stars)
21. wayback-machine-firefox - Reduce annoying 404 pages by automatically checking for an archived copy in the Wayback Machine. Learn more about this Test Pilot experiment at https://testpilot.firefox.com/ (JavaScript, 53 stars)
22. cdx-summary - Summarize web archive capture index (CDX) files. (Python, 52 stars)
23. internet-archive-voice-apps - Voice Apps (Actions on Google, Alexa Skill) of Internet Archive. Just say: "Ok Google, Ask Internet Archive to Play Jazz" or "Alexa, Ask Internet Archive to play Instrumental Music" (JavaScript, 46 stars)
24. liveweb - Liveweb proxy of the Wayback Machine project (Python, 44 stars)
25. epub - For code related to making ePub files (Python, 40 stars)
26. surt - Sort-friendly URI Reordering Transform (SURT) python module (Python, 40 stars)
27. archive-hocr-tools - Efficient hOCR tooling (Python, 39 stars)
28. trough - Trough: Big data, small databases. (Python, 36 stars)
29. dweb-transport - Internet Archive Decentralized Web Common API (36 stars)
30. wayback-diff - React components to render differences between captures at the Wayback Machine (JavaScript, 31 stars)
31. dweb-transports (JavaScript, 25 stars)
32. sandcrawler - Backend, IA-specific tools for crawling and processing the scholarly web. Content ends up in https://fatcat.wiki (HTML, 24 stars)
33. iiif - The official Internet Archive IIIF service (JavaScript, 22 stars)
34. crawling-for-nomore404 (Python, 22 stars)
35. snakebite-py3 - Pure python HDFS client: python3.x version (Python, 22 stars)
36. newsum - Daily TV News Summary using GPT (Python, 21 stars)
37. ia-hadoop-tools (Java, 21 stars)
38. arklet - ARK minter, binder, resolver (Python, 21 stars)
39. dweb-gateway - Decentralized web Gateway for Internet Archive (Python, 21 stars)
40. xfetch - Cache stampede test harness. Code accompanies the presentation made at RedisConf 2017, 30 May to 1 June, 2017, in San Francisco. (PHP, 18 stars)
41. openlibrary-librarians - Coordination between the OpenLibrary.org Librarian community (18 stars)
42. arch - Web application for distributed compute analysis of Archive-It web archive collections. (Scala, 15 stars)
43. cicd - build & test using github registry; deploy to nomad clusters (13 stars)
44. scrapy-warcio - Support for writing WARC files with Scrapy (Python, 13 stars)
45. iacopilot - Summarize and ask questions about items in the Internet Archive (Python, 13 stars)
46. iari - Import workflows for the Wikipedia Citations Database (Python, 12 stars)
47. doublethink - rethinkdb python library (Python, 11 stars)
48. s3_loader - Watch for local files to appear and move them into S3 (Python, 11 stars)
49. Sparkling - Internet Archive's Sparkling Data Processing Library (Scala, 11 stars)
50. wayback-machine-android (Kotlin, 10 stars)
51. archive-commons (Java, 10 stars)
52. draintasker - a tool for continuously ingesting w/arc files into the archive (Python, 9 stars)
53. ias3 - Internet Archive S3-like connector (Python, 8 stars)
54. wayback-radial-tree (JavaScript, 7 stars)
55. chocula - journal-level metadata munging. part of fatcat project (Python, 7 stars)
56. read_api_extras - Demo code for the Open Library Read API (7 stars)
57. wikibase-patcher - Python library for interacting with the Wikibase REST API (Python, 7 stars)
58. dweb-archivecontroller (JavaScript, 7 stars)
59. web_collection_search - An API wrapper to the Elasticsearch index of web archival collections and a web UI to explore those indexes. (Python, 7 stars)
60. epub-labs (6 stars)
61. iaux-typescript-wc-template - IAUX Typescript WebComponent Template (TypeScript, 6 stars)
62. ia - A JS interface to archive.org (JavaScript, 6 stars)
63. archive-ocr-tools (Python, 6 stars)
64. offlinesolr - Tool to build solr index offline (Java, 6 stars)
65. ia-bin-tools - Internet Archive Command-line Utilities (C, 6 stars)
66. dweb-objects (JavaScript, 5 stars)
67. iare - An interactive IARI JSON viewer (JavaScript, 5 stars)
68. iaux-collection-browser (TypeScript, 5 stars)
69. wayback-machine-safari (JavaScript, 5 stars)
70. collections-cleaners (Shell, 5 stars)
71. trendmachine - A mathematical model to calculate a normalized score to quantify the temporal resilience of a web page as a time-series data based on the historical observations of the page in web archives. (Python, 5 stars)
72. acs4_py - Python interface to ACS4 (Python, 4 stars)
73. esbuild_es5 - minify JS/TS files using `esbuild` and `swc` down to ES5 (uses `deno`) (TypeScript, 4 stars)
74. iaux-search-service (TypeScript, 4 stars)
75. map-of-the-web (Python, 4 stars)
76. eventer - Eventer is a simple event dispatching library in Python (Python, 4 stars)
77. iaux-donation-form - The Internet Archive Donation Form (TypeScript, 4 stars)
78. internetarchive.github.com - Internet Archive Open Source Blog (CSS, 4 stars)
79. isodos - Go module to interact with Internet Archive's Isodos API (Go, 4 stars)
80. strainer - Heritrix frontier files manipulation tool. (Go, 4 stars)
81. internet-archive-alexa-skill (JavaScript, 3 stars)
82. btget - Command line retrieval of torrents using transmission-daemon (via transmission-remote) (Python, 3 stars)
83. mediawiki-extension-archive-leaf - A MediaWiki extension that supports importing of Archive.org palm leaf items (JavaScript, 3 stars)
84. hashitalksdemo (JavaScript, 3 stars)
85. openlibrary-api - API documentation for https://github.com/internetarchive/openlibrary (HTML, 3 stars)
86. httpd - Fast and easy-to-use web server, using the Deno native http server (hyper in rust). It serves static files & dirs, with arbitrary handling using an optional `handler` argument. (JavaScript, 3 stars)
87. wbm_ai_kg - Google Summer of Code (GSoC) 2024 Wayback Machine GenAI Knowledge Graph project (HTML, 3 stars)
88. file_server_plus - `deno` static file webserver, clone of `file_server.ts`, PLUS an additional final "404 handler" to run arbitrary JS/TS (TypeScript, 2 stars)
89. dyno (JavaScript, 2 stars)
90. archiveorg-e2e-playwright (TypeScript, 2 stars)
91. tarb_insights - A Streamlit application to visualize Wikipedia IABot statistics (Python, 2 stars)
92. rulesengine-client - Python client package for the playback rules engine (Python, 2 stars)
93. coderunr - deploy saved changes to website unique hostnames instantly -- can skip commits, pushes & full CI/CD (Shell, 2 stars)
94. deferred - Redis promises & futures library for Predis / PHP (PHP, 2 stars)
95. hello-js - an example of full CI/CD from GitHub to a nomad cluster (JavaScript, 2 stars)
96. wiki-references-db - Data models and scripts to build a database of references (broadly defined) appearing on Wikipedia and other wikis (Python, 2 stars)
97. maisy - Project Gutenberg collection importation via IAS3 interface (Python, 2 stars)
98. kohacon2011-presentation - Presentation for KohaCon 2011 (Shell, 2 stars)
99. rulesengine - model and front-end for rules for managing wayback playback (Python, 2 stars)
100. deploy - GitHub Action to deploy to a nomad cluster (2 stars)