  • Stars: 1,968
  • Rank: 23,561 (Top 0.5%)
  • Language: Python
  • License: Other
  • Created: over 13 years ago
  • Updated: 12 months ago

Repository Details

Parse log files, generate metrics for Graphite and Ganglia

Logster - generate metrics from logfiles

Logster is a utility for reading log files and generating metrics that it sends to configurable outputs. It is ideal for visualizing trends of events occurring in your application/system/error logs. For example, you might use logster to graph the number of occurrences of each HTTP response code that appears in your web server logs.

Logster maintains a cursor, via a tailer, on each log file that it reads, so that each successive execution only inspects new log entries. In other words, a 1-minute crontab entry for logster lets you generate near real-time trends in the configured output for anything you want to measure from your logs.
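
For example, a crontab entry along the following lines (the schedule, paths, and parser are illustrative; adjust them to your setup) would run Logster once a minute against an Apache access log:

    * * * * * /usr/bin/logster --output=graphite --graphite-host=graphite.example.com:2003 SampleLogster /var/log/httpd/access_log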

This tool is made up of a framework script, logster, and parsing classes that are written to accommodate your specific log format. Sample parsers are included in this distribution. The parser classes essentially read a log file line by line, apply a regular expression to extract useful data from the lines you are interested in, and then aggregate that data into metrics that will be submitted to the configured output. The sample parsers should give you some idea of how to get started writing your own. A list of available parsers can be found on the Parsers page.
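
To give a concrete feel for the shape of a parser, here is a minimal sketch modeled loosely on the bundled sample parsers. It assumes the LogsterParser and MetricObject helpers from logster.logster_helper; the class name, regular expression, and metric name are placeholders for your own log format rather than anything shipped with Logster.

    import re

    from logster.logster_helper import LogsterParser, MetricObject
    from logster.logster_helper import LogsterParsingException


    class MyErrorCountLogster(LogsterParser):
        '''Hypothetical parser: counts lines containing "ERROR" and reports a rate.'''

        def __init__(self, option_string=None):
            self.error_count = 0
            # Regular expression for the lines we care about; adjust it to your log format.
            self.reg = re.compile(r'ERROR')

        def parse_line(self, line):
            '''Called once for every new line the tailer hands to Logster.'''
            try:
                if self.reg.search(line):
                    self.error_count += 1
            except Exception as e:
                raise LogsterParsingException('regmatch or contents failed with %s' % e)

        def get_state(self, duration):
            '''Called once per run; duration is the number of seconds since the previous run.'''
            return [MetricObject('errors_per_sec', self.error_count / float(duration), 'errors per sec')]

You can then point Logster at such a class exactly as in the custom-parser example in the Usage section below.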

Graphite, Ganglia, Amazon CloudWatch, Nagios, StatsD and stdout outputs are provided, and Logster also supports the use of third-party output classes. A list of available output classes can be found on the Outputs page.
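
Because the --output option can be given more than once (see the help output below), a single Logster run can feed several destinations at once. As an illustrative example, reusing the hosts and paths from the Usage section:

    $ logster --output=graphite --output=stdout --graphite-host=graphite.example.com:2003 SampleLogster /var/log/httpd/access_log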

History

The logster project was created at Etsy as a fork of ganglia-logtailer (https://bitbucket.org/maplebed/ganglia-logtailer). We made the decision to fork ganglia-logtailer because we were removing daemon-mode from the original framework. We only make use of cron-mode, and supporting both cron- and daemon-modes makes for more work when creating parsing scripts. We care strongly about simplicity in writing parsing scripts -- which enables more of our engineers to write log parsers quickly.

Installation

Logster supports two methods for gathering data from a logfile:

  1. By default, Logster uses the "logtail" utility that can be obtained from the logcheck package, either from a Debian package manager or from source:

    http://packages.debian.org/source/sid/logcheck
    

    RPMs for logcheck can be found here:

    http://rpmfind.net/linux/rpm2html/search.php?query=logcheck
    
  2. Optionally, Logster can use the "Pygtail" Python module instead of logtail. You can install Pygtail with pip:

    $ pip install pygtail
    

    To use Pygtail, supply the --tailer=pygtail option on the Logster commandline.
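
    For example, a hypothetical invocation using the sample parser from the Usage section below:

    $ logster --tailer=pygtail --dry-run --output=stdout SampleLogster /var/log/httpd/access_log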

Logster also needs to lock the files it works with, and it supports two methods for doing so:

  1. By default, Logster uses fcntl.flock.

  2. Optionally, Logster can use the "Portalocker" Python module instead of fcntl, which is not available on Windows. You can install Portalocker with pip, just as with Pygtail above.

    To use Portalocker, supply the --locker=portalocker option on the Logster commandline.
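
    For example, again as a hypothetical invocation:

    $ logster --locker=portalocker --dry-run --output=stdout SampleLogster /var/log/httpd/access_log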

Once you have logtail or Pygtail installed, install Logster using the setup.py file:

$ sudo python setup.py install

Usage

You can test logster from the command line. The --dry-run option will allow you to see the metrics being generated on stdout rather than sending them to your configured output.

$ sudo /usr/bin/logster --dry-run --output=ganglia SampleLogster /var/log/httpd/access_log

$ sudo /usr/bin/logster --dry-run --output=graphite --graphite-host=graphite.example.com:2003 SampleLogster /var/log/httpd/access_log

You can use the provided parsers, or you can use your own parsers by passing the complete module and parser name. In this case, the name of the parser does not have to match the name of the module (you can have a logster.py file with a MyCustomParser parser). Just make sure the module is in your Python path - via a virtualenv, for example.

$ /env/my_org/bin/logster --dry-run --output=stdout my_org_package.logster.MyCustomParser /var/log/my_custom_log

Additional usage details can be found with the -h option:

$ logster -h
Usage: logster [options] parser logfile

Tail a log file and filter each line to generate metrics that can be sent to
common monitoring packages.

Options:
  -h, --help            show this help message and exit
  -t TAILER, --tailer=TAILER
                        Specify which tailer to use. Options are logtail and
                        pygtail. Default is "logtail".
  --logtail=LOGTAIL     Specify location of logtail. Default
                        "/usr/sbin/logtail2"
  -p METRIC_PREFIX, --metric-prefix=METRIC_PREFIX
                        Add prefix to all published metrics. This is for
                        people that may run multiple instances of the same
                        service on the same host.
  -x METRIC_SUFFIX, --metric-suffix=METRIC_SUFFIX
                        Add suffix to all published metrics. This is for
                        people that want to add a suffix at the end of their
                        metrics.
  --parser-help         Print usage and options for the selected parser
  --parser-options=PARSER_OPTIONS
                        Options to pass to the logster parser such as "-o
                        VALUE --option2 VALUE". These are parser-specific and
                        passed directly to the parser.
  -s STATE_DIR, --state-dir=STATE_DIR
                        Where to store the tailer state file.  Default
                        location /var/run
  -l LOG_DIR, --log-dir=LOG_DIR
                        Where to store the logster logfile.  Default location
                        /var/log/logster
  --log-conf=LOG_CONF   Logging configuration file. None by default
  -o OUTPUT, --output=OUTPUT
                        Where to send metrics (can specify multiple times).
                        Choices are statsd, stdout, cloudwatch, graphite,
                        ganglia, nsca or a fully qualified Python class name
  -d, --dry-run         Parse the log file but send stats to standard output.
  -D, --debug           Provide more verbose logging for debugging.

Contributing

  • Fork the project
  • Add your feature
  • If you are adding new functionality, document it in the README
  • Verify your code by running the test suite, and add additional tests if able.
  • Push the branch up to GitHub (bonus points for topic branches)
  • Send a pull request to the etsy/logster project.

If you have questions, you can find us on IRC in the #codeascraft channel on Freenode.

More Repositories

1. AndroidStaggeredGrid - An Android staggered grid view which supports multiple columns with rows of varying sizes. (Java, 4,756 stars)
2. skyline - It'll detect your anomalies! Part of the Kale stack. (Python, 2,135 stars)
3. deployinator - Deployinate! (Ruby, 1,878 stars)
4. morgue - post mortem tracker (PHP, 1,017 stars)
5. 411 - An Alert Management Web Application (PHP, 971 stars)
6. feature - Etsy's Feature flagging API used for operational rampups and A/B testing. (PHP, 869 stars)
7. MIDAS - Mac Intrusion Detection Analysis System (833 stars)
8. opsweekly - On call alert classification and reporting (JavaScript, 761 stars)
9. oculus - The metric correlation component of Etsy's Kale system (Java, 707 stars)
10. mctop - a top like tool for inspecting memcache key values in realtime (Ruby, 507 stars)
11. supergrep - realtime log streamer (JavaScript, 411 stars)
12. Conjecture - Scalable Machine Learning in Scalding (Java, 361 stars)
13. statsd-jvm-profiler - Simple JVM Profiler Using StatsD and Other Metrics Backends (Java, 330 stars)
14. nagios-herald - Add context to Nagios alerts (Ruby, 322 stars)
15. dashboard (JavaScript, 308 stars)
16. boundary-layer - Builds Airflow DAGs from configuration files. Powers all DAGs on the Etsy Data Platform (Python, 262 stars)
17. Testing101 - Etsy's educational materials on testing and design (PHP, 262 stars)
18. DebriefingFacilitationGuide - Leading Groups at Etsy to Learn From Accidents (247 stars)
19. phpunit-extensions - Etsy PHPUnit Extensions (PHP, 228 stars)
20. nagios_tools - Tools for use with Nagios (Python, 173 stars)
21. open-api - We are working on a new version of Etsy’s Open API and want feedback from developers like you. (166 stars)
22. TryLib - TryLib is a simple php library that helps you generate a diff of your working copy and send it to Jenkins to run the test suite(s) on the latest code patched with your changes. (PHP, 155 stars)
23. BugHunt-iOS (Objective-C, 148 stars)
24. mod_realdoc - Apache module to support atomic deploys - http://codeascraft.com/2013/07/01/atomic-deploys-at-etsy/ (C, 128 stars)
25. ab - Etsy's little framework for A/B testing, feature ramp up, and more. (128 stars)
26. wpt-script - Scripts to generate WebPagetest tests and download results (PHP, 121 stars)
27. applepay-php - A PHP extension that verifies and decrypts Apple Pay payment tokens (C, 118 stars)
28. foodcritic-rules - Etsy's foodcritic rules (Ruby, 115 stars)
29. kevin-middleware - This is an Express middleware that makes developing javascript in a monorepo easier. (JavaScript, 110 stars)
30. mixer - a tool to initiate meetings by randomly pairing individuals (Go, 100 stars)
31. cloud-jewels - Estimate energy consumption using GCP Billing Data (TSQL, 96 stars)
32. jenkins-master-project - Jenkins Plugin: Master Project. Jenkins project type that allows for selection of sub-jobs to execute, watch, and report worst status of all sub-projects. (Java, 83 stars)
33. Sahale - A Cascading Workflow Visualizer (JavaScript, 83 stars)
34. PushBot - An IRC Bot for organizing code pushes (Java, 79 stars)
35. cdncontrol - CLI tool for working with multiple CDNs (Ruby, 79 stars)
36. rules_grafana - Bazel rules for building Grafana dashboards (Starlark, 70 stars)
37. chef-whitelist - Simple library to enable host based rollouts of changes (Ruby, 68 stars)
38. rfid-checkout - Low Frequency RFID check out/in client for Raspberry Pi (Python, 64 stars)
39. Etsy-Engineering-Career-Ladder - Etsy's Engineering Career Ladder (HTML, 61 stars)
40. Evokit (Rust, 60 stars)
41. ELK-utils - Utilities for working with the ELK (Elasticsearch, Logstash, Kibana) stack (Ruby, 59 stars)
42. incpath - PHP extension to support atomic deploys (C, 52 stars)
43. arbiter - A utility for generating Oozie workflows from a YAML definition (Java, 48 stars)
44. VIPERBuilder - Scaffolding for building apps in a clean way with VIPER architecture (Swift, 41 stars)
45. chef-handlers - Chef handlers we use at Etsy (Ruby, 40 stars)
46. sbt-checkstyle-plugin - SBT Plugin for Running Checkstyle on Java Sources (Scala, 32 stars)
47. es-restlog - Plugin for logging Elasticsearch REST requests (Java, 29 stars)
48. yubigpgkeyer - Script to make RSA authentication key generation on Yubikeys differently painful (Python, 28 stars)
49. Apotheosis (Python, 28 stars)
50. jenkins-deployinator - Jenkins Plugin: Deployinator. Links key deployinator information to Jenkins builds via the CLI. (Java, 25 stars)
51. sbt-compile-quick-plugin - SBT Plugin for Compiling a Single File (Scala, 25 stars)
52. geonames - Scripts for using Geonames (PHP, 24 stars)
53. jading - cascading.jruby build and execution tool (16 stars)
54. etsy.github.com - Etsy! on Github! (HTML, 16 stars)
55. divertsy-client - The Android client for running DIVERTsy, a waste stream recording tool to help track diversion rates. (Java, 13 stars)
56. cdncontrol_ui - A web UI for Etsy's cdncontrol tool (CSS, 13 stars)
57. terraform-demux - A user-friendly launcher (à la bazelisk) for Terraform. (Go, 12 stars)
58. logstash-plugins (Ruby, 11 stars)
59. jenkins-triggering-user - Jenkins Plugin: Triggering User. Populates a $TRIGGERING_USER environment variable from the build cause and other sources, a best guess. (10 stars)
60. EtsyCompositionalLayoutBridge - iOS framework that allows for simultaneously leveraging flow layout and compositional layout in collection views (Swift, 3 stars)
61. consulkit - Ruby API for interacting with HashiCorp's Consul. (Ruby, 1 star)
62. soft-circuits-workshop - Etsy Soft Circuits Workshop (Arduino, 1 star)