Hadoop-LZO

Refactored version of code.google.com/hadoop-gpl-compression for Hadoop 0.20.

Hadoop-LZO is a project to bring splittable LZO compression to Hadoop. LZO is an ideal compression format for Hadoop due to its combination of speed and reasonable compression ratio. However, LZO files are not natively splittable, meaning the parallelism at the core of Hadoop is lost. This project re-enables that parallelism for LZO-compressed files, and also comes with standard utilities (input/output streams, etc.) for working with LZO files.

Origins

This project builds off the great work done at https://code.google.com/p/hadoop-gpl-compression. As of issue 41, the differences in this codebase are the following.

  • it fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible lzo files, and fixes the compressor to follow the lzo standard when compressing small or uncompressible chunks. it also fixes a number of inconsistently caught and thrown exception cases that can occur when the lzo writer gets killed mid-stream, plus some other smaller issues (see the commit log).
  • it adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class
  • it adds an easier way to index lzo files (com.hadoop.compression.lzo.LzoIndexer)
  • it adds an even easier way to index lzo files, in a distributed manner (com.hadoop.compression.lzo.DistributedLzoIndexer)

Hadoop and LZO, Together at Last

LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable. Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream starting at a random offset in the file. LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries. In addition to providing LZO decompression support, these classes provide an in-process indexer (com.hadoop.compression.lzo.LzoIndexer) and a map-reduce style indexer which will read a set of LZO files and output the offsets of LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split into multiple mappers and processed in parallel. Because it is compressed, less data is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance impact from having to decompress data as it's read off disk.

You can read more about Hadoop, LZO, and how we're using it at Twitter at https://www.cloudera.com/blog/2009/11/17/hadoop-at-twitter-part-1-splittable-lzo-compression/.

Building and Configuring

To get started, see https://code.google.com/p/hadoop-gpl-compression/wiki/FAQ. This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page, or follow the summarized version here.

You need JDK 1.6 or higher to build hadoop-lzo (1.7 or higher on Mac OS).

LZO 2.x is required, and is most easily installed via your system's package manager. If you choose to install it manually for whatever reason (developer OSX machines are a common use case), do so as follows:

  1. Download the latest LZO release from https://www.oberhumer.com/opensource/lzo/
  2. Configure LZO to build a shared library (required) and use a package-specific prefix (optional but recommended): ./configure --enable-shared --prefix /usr/local/lzo-2.10
  3. Build and install LZO: make && sudo make install
  4. On Windows, you can build lzo2.dll with this command: B\win64\vc_dll.bat

Now let's build hadoop-lzo.

C_INCLUDE_PATH=/usr/local/lzo-2.10/include \
LIBRARY_PATH=/usr/local/lzo-2.10/lib \
  mvn clean package

Running tests on Windows also requires setting PATH to include the location of lzo2.dll.

set PATH=C:\lzo-2.10;%PATH%

Additionally on Windows, the Hadoop core code requires setting HADOOP_HOME so that the tests can find winutils.exe. If you've built Hadoop trunk in directory C:\hdc, then the following would work.

set HADOOP_HOME=C:\hdc\hadoop-common-project\hadoop-common\target

Once the libs are built and installed, you may want to add them to the class paths and library paths. That is, in hadoop-env.sh, set

    export HADOOP_CLASSPATH=/path/to/your/hadoop-lzo-lib.jar
    export JAVA_LIBRARY_PATH=/path/to/hadoop-lzo-native-libs:/path/to/standard-hadoop-native-libs

Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line

    JAVA_LIBRARY_PATH=''

because it overrides the alteration you made to JAVA_LIBRARY_PATH above. (Update: see https://issues.apache.org/jira/browse/HADOOP-6453.) Make sure you restart your jobtrackers and tasktrackers after uploading and changing configs so that they take effect.
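
The class path and library path settings above make the code visible; Hadoop also needs the codecs registered, which is what the FAQ answer referenced earlier walks through. A typical core-site.xml fragment (the property names are standard Hadoop; the exact codec list shown is illustrative):

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
    </property>
    <property>
      <name>io.compression.codec.lzo.class</name>
      <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>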

Build Troubleshooting

The following missing LZO header error suggests LZO was installed in a non-standard location and cannot be found at build time. Double-check that the environment variable C_INCLUDE_PATH points at the LZO include directory. For example: C_INCLUDE_PATH=/usr/local/lzo-2.10/include

[exec] checking lzo/lzo2a.h presence... no
[exec] checking for lzo/lzo2a.h... no
[exec] configure: error: lzo headers were not found...
[exec]                gpl-compression library needs lzo to build.
[exec]                Please install the requisite lzo development package.

The following Can't find library for '-llzo2' error suggests LZO was installed to a non-standard location and cannot be located at build time. This could be one of two issues:

  1. LZO was not built as a shared library. Double-check the location you installed LZO contains shared libraries (probably something like /usr/lib64/liblzo2.so.2 on Linux, or /usr/local/lzo-2.10/lib/liblzo2.dylib on OSX).

  2. LZO was not added to the library path. Double-check that the environment variable LIBRARY_PATH points at the LZO lib directory (for example, LIBRARY_PATH=/usr/local/lzo-2.10/lib).

[exec] checking lzo/lzo2a.h usability... yes
[exec] checking lzo/lzo2a.h presence... yes
[exec] checking for lzo/lzo2a.h... yes
[exec] checking Checking for the 'actual' dynamic-library for '-llzo2'...
[exec] configure: error: Can't find library for '-llzo2'

The following "Native java headers not found" error indicates the Java header files are not available.

[exec] checking jni.h presence... no
[exec] checking for jni.h... no
[exec] configure: error: Native java headers not found. Is $JAVA_HOME set correctly?

Header files are not available in all Java installs. Double-check you are using a JAVA_HOME that has an include directory. On OSX you may need to install a developer Java package.

$ ls -d /Library/Java/JavaVirtualMachines/1.6.0_29-b11-402.jdk/Contents/Home/include
/Library/Java/JavaVirtualMachines/1.6.0_29-b11-402.jdk/Contents/Home/include
$ ls -d /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/include
ls: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/include: No such file or directory

Maven repository

The hadoop-lzo package is available at https://maven.twttr.com/.

For example, if you are using ivy, add the repository in ivysettings.xml:

  <ibiblio name="twttr.com" m2compatible="true" root="https://maven.twttr.com/"/>

And include hadoop-lzo as a dependency:

  <dependency org="com.hadoop.gplcompression" name="hadoop-lzo" rev="0.4.17"/>
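
If you use Maven rather than ivy, the equivalent configuration (same repository and coordinates; version 0.4.17 as above) would be:

    <repositories>
      <repository>
        <id>twttr.com</id>
        <url>https://maven.twttr.com/</url>
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>com.hadoop.gplcompression</groupId>
        <artifactId>hadoop-lzo</artifactId>
        <version>0.4.17</version>
      </dependency>
    </dependencies>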

Using Hadoop and LZO

Reading and Writing LZO Data

The project provides LzoInputStream and LzoOutputStream, which wrap regular streams and allow you to easily read and write LZO-compressed data.
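
One way to obtain these streams is through the codec, using Hadoop's standard CompressionCodec interface. A minimal local-file sketch (the file name is illustrative, and the native LZO library must be loadable at runtime):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStream;

    import com.hadoop.compression.lzo.LzopCodec;
    import org.apache.hadoop.conf.Configuration;

    public class LzoStreamExample {
      public static void main(String[] args) throws Exception {
        LzopCodec codec = new LzopCodec();
        codec.setConf(new Configuration());

        // Write an .lzo file through the compressing output stream.
        OutputStream out = codec.createOutputStream(new FileOutputStream("data.lzo"));
        out.write("hello, lzo\n".getBytes("UTF-8"));
        out.close();

        // Read it back through the decompressing input stream.
        BufferedReader in = new BufferedReader(new InputStreamReader(
            codec.createInputStream(new FileInputStream("data.lzo")), "UTF-8"));
        System.out.println(in.readLine());
        in.close();
      }
    }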

Indexing LZO Files

At this point, you should also be able to use the indexer to index lzo files in Hadoop (recall: this makes them splittable, so that they can be analyzed in parallel in a mapreduce job). Imagine that big_file.lzo is a 1 GB LZO file. You have two options:

  • index it in-process via:

      hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer big_file.lzo
    
  • index it in a map-reduce job via:

      hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer big_file.lzo
    

Either way, after 10-20 seconds there will be a file named big_file.lzo.index. The newly-created index file tells the LzoTextInputFormat's getSplits function how to break the LZO file into splits that can be decompressed and processed in parallel. Alternatively, if you specify a directory instead of a filename, both indexers will recursively walk the directory structure looking for .lzo files, indexing any that do not already have corresponding .lzo.index files.
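
The in-process indexer can also be invoked programmatically. A minimal sketch using com.hadoop.compression.lzo.LzoIndexer (the path is illustrative; hadoop-lzo and its native libraries must be available):

    import com.hadoop.compression.lzo.LzoIndexer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class IndexExample {
      public static void main(String[] args) throws Exception {
        LzoIndexer indexer = new LzoIndexer(new Configuration());
        // Writes big_file.lzo.index next to the input; a directory argument
        // is walked recursively, skipping already-indexed files.
        indexer.index(new Path("big_file.lzo"));
      }
    }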

Running MR Jobs over Indexed Files

Now run any job, say wordcount, over the new file. In Java-based M/R jobs, just replace any uses of TextInputFormat by LzoTextInputFormat. In streaming jobs, add "-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat" (streaming still uses the old APIs, and needs a class that inherits from org.apache.hadoop.mapred.InputFormat). Note that to use the DeprecatedLzoTextInputFormat properly with hadoop-streaming, you should also set the jobconf property stream.map.input.ignoreKey=true. That will replicate the behavior of the default TextInputFormat by stripping off the byte offset keys from the input lines that get piped to the mapper process. For Pig jobs, email me or check the pig list -- I have custom LZO loader classes that work but are not (yet) contributed back.
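
For Java jobs, the swap is a single line in the driver. A minimal new-API wordcount sketch (input/output paths are illustrative; the hadoop-lzo jar must be on the job's classpath):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import com.hadoop.mapreduce.LzoTextInputFormat;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LzoWordCount {
      public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
          StringTokenizer it = new StringTokenizer(value.toString());
          while (it.hasMoreTokens()) {
            word.set(it.nextToken());
            ctx.write(word, ONE);
          }
        }
      }

      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();
          ctx.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "lzo wordcount");
        job.setJarByClass(LzoWordCount.class);
        // The only LZO-specific change: LzoTextInputFormat consults the
        // .lzo.index files to split the input for parallel processing.
        job.setInputFormatClass(LzoTextInputFormat.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("big_file.lzo"));
        FileOutputFormat.setOutputPath(job, new Path("wordcount-out"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }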

Note that if you forget to index an .lzo file, the job will work but will process the entire file in a single split, which will be less efficient.
