
Stashpy

A Logstash replacement in Python


What is this?

Stashpy aims to be a slimmed-down Python 3 replacement for Logstash, a log aggregator. Stashpy accepts connections on a TCP port, parses messages passed to it over this connection, and indexes them on an ElasticSearch cluster.

Stashpy is still in development.

Installing and running

Stashpy requires Python 3. All Linux distros have a relatively new version in their official repositories. On Mac OS, the Homebrew version is recommended.

Among the various methods of installing package dependencies, virtualenv is recommended. Python 3.5 ships with pyvenv, a built-in equivalent of virtualenv. For earlier versions, you will need to install virtualenv manually with sudo pip install virtualenv. If you already have a 2.* version of Python installed together with virtualenv, you can install Python 3 and pass virtualenv the --python argument to use the new version, as in virtualenv stashpy --python `which python3`.

Stashpy is on PyPI, so it can be installed as a regular dependency or directly with pip install stashpy. For development purposes, you can simply create a virtualenv wherever you prefer and run Stashpy from there. The entry point is named stashpy and accepts a configuration file as its only argument. A sample configuration file, sample-config.yml, is provided. See the Usage section for documentation on the different configuration options.

The recommended way to run Stashpy as a service is by checking it out into /opt, and also creating a virtualenv there. To run as an upstart-managed service, consult the provided stashpy.conf. Unfortunately, we haven't moved on to systemd yet, so no sample configuration for that, but it shouldn't be too difficult.

Usage

As the extension of sample-config.yml suggests, the configuration format is YAML. The configuration options are the following (a sketch combining them follows the list):

  • address, port: The address and port on which Stashpy should listen.

  • indexer_config: Configuration options for the ElasticSearch cluster to index on. If this key is excluded, nothing will be indexed, which is a useful setup for debugging purposes. Must have the following keys:

    • host: ES host

    • port: ES port

    • index_pattern: The base pattern to be used for determining index name. This pattern will be passed on to datetime.strftime, and will then be formatted with the parsed values dictionary.

  • logging: This option will be passed on as-is to the logging.config.dictConfig method. If it is not supplied, stashpy.main.DEFAULT_LOGGING, which simply logs to stdout, will be used.

  • heartbeat_count: The number of messages at which a heartbeat log is written.

  • processor_spec: The parsing specification. See the next section for details.
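
For orientation, here is a hedged sketch of a configuration file combining these options; the addresses, ports, and index pattern below are illustrative values rather than defaults shipped with Stashpy, and sample-config.yml remains the authoritative example:

address: "0.0.0.0"
port: 8899
indexer_config:
  host: "localhost"
  port: 9200
  index_pattern: "stashpy-%Y-%m-%d"
heartbeat_count: 1000
processor_spec:
  to_dict:
    - "My name is {name} and I'm {age:d} years old."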

Parsing Specification

Stashpy turns log lines (i.e. strings that end with a newline) supplied through a TCP connection into JSON documents, and indexes these in ElasticSearch. The log lines are parsed using regular expressions, which can be specified in one of two formats: the parse format, or Oniguruma-flavored named regular expressions. The first is a regular Python 3 format string that is processed using the parse library. An example of such an expression is the following:

  • My name is {name} and I'm {age:d} years old

Parsing the sentence My name is Afro and I'm 40 years old would lead to the JSON document {"age": 40, "name": "Afro"}.
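
This behavior can be previewed with the parse library directly, independently of Stashpy; the following is a minimal sketch of what the parse format does under the hood:

from parse import parse

result = parse("My name is {name} and I'm {age:d} years old",
               "My name is Afro and I'm 40 years old")
# The :d conversion already yields an integer for age.
print(result.named)  # {'name': 'Afro', 'age': 40}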

The second format, Oniguruma-flavored regular expressions, uses the regex library to provide an experience similar to that of the grok plugin for Logstash. In an Oniguruma RE, you can build complex regular expressions by combining simpler components. For example, take the following regular expression:

  • My name is %{USERNAME:name} and I'm %{INT:age} years old

This expression, when used as a specification, leads to the following regular expression:

  • My name is (?P<name>[a-zA-Z0-9._-]+) and I'm (?P<age>(?:[+-]?(?:[0-9]+))) years old

See the file stashpy/patterns/grok_patterns.txt for a list of the various components you can use in your regular expressions.
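
The expansion above is a plain named-group regular expression, so it can be tried out with Python's built-in re module; this is only an illustrative sketch, as Stashpy itself uses the regex library:

import re

PATTERN = (r"My name is (?P<name>[a-zA-Z0-9._-]+) "
           r"and I'm (?P<age>(?:[+-]?(?:[0-9]+))) years old")

match = re.match(PATTERN, "My name is Afro and I'm 40 years old")
# Named groups become dictionary keys; the values are still strings here.
print(match.groupdict())  # {'name': 'Afro', 'age': '40'}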

For both of these parsing options, it is possible to specify a type to which the parsed string should be converted. parse does this automatically, converting e.g. the age above to an integer. With Oniguruma, you can specify the type the parsed value should be converted to by appending a constructor to the pattern name, separated by a colon. For the above example, converting age to an integer would work with the following expression:

  • My name is %{USERNAME:name} and I'm %{INT:age:int} years old

The type conversion key is used for looking up a function from the __builtins__ dictionary; any constructor available there can be used.
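
As a rough illustration of that lookup (not Stashpy's exact code path), the conversion key can be resolved against the built-ins and applied to the captured string:

import builtins

# 'int' names a built-in constructor; applying it converts the captured text.
constructor = getattr(builtins, "int")
print(constructor("40"))  # 40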

Specifying the parsing pipeline

There are two alternative ways of providing the order of processing.

By processor_spec

The first method is by providing the processor_spec option in the configuration file. This option can have two keys:

  • to_dict: A list of expressions that are turned into JSON documents without further processing. Each expression can be a parse string or an Oniguruma RE.

  • to_format: A list of dictionaries whose keys are specifications and whose values are dictionaries of templates that are formatted with the parsed values.

Here's the relevant part from sample-config.yml:

processor_spec:
  to_dict:
    - "My name is {name} and I'm {age:d} years old."

  to_format:
    "Her name is {name} and she's {age:d} years old.":
      name_line: "Name is {name}"
      age_line: "Age is {age:d}"

Passing the log line My name is Afro and I'm 40 years old. to Stashpy with this configuration will result in the JSON document {"age": 40, "name": "Afro"}. The log line Her name is Luna and she's 4 years old. will, however, activate the to_format section and lead to the JSON document {"age_line": "Age is 4", "name_line": "Name is Luna"}.
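
Conceptually, the to_format step parses the line and then formats each value template with the parsed fields; the following sketch mimics that behavior with the parse library, as an illustration of the idea rather than Stashpy's internal code path:

from parse import parse

SPEC = "Her name is {name} and she's {age:d} years old."
TEMPLATES = {"name_line": "Name is {name}", "age_line": "Age is {age:d}"}

parsed = parse(SPEC, "Her name is Luna and she's 4 years old.")
# Each template is formatted with the dictionary of parsed values.
document = {key: template.format(**parsed.named)
            for key, template in TEMPLATES.items()}
print(document)  # {'name_line': 'Name is Luna', 'age_line': 'Age is 4'}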

By custom class

The second method of processing log lines is to specify a class that is responsible for accepting them and returning dictionaries. The path of this class can be passed using the processor_class option. This class must subclass stashpy.LineProcessor and implement the method for_line(self, line), which will be called for each log line. Two useful methods from the parent class that can be used for more specialized processing are do_dict_specs(self, line) and do_format_specs(self, line). The first returns the result for the first match from the self.dict_specs list, while the second does the same for the self.format_specs attribute. Both return None if there are no matches. If your class has the class attributes TO_DICT or TO_FORMAT, these will be used to populate the corresponding instance attributes. The following custom class is equivalent to the processor_spec example above:

import stashpy

class KitaHandlerTwo(stashpy.LineProcessor):

    TO_DICT = ["My name is {name} and I'm {age:d} years old."]
    TO_FORMAT = {"Her name is {name} and she's {age:d} years old.":
                 {'name_line':"Name is {name}", 'age_line':"Age is {age}"}}

    def for_line(self, line):
        dict_result = self.do_dict_specs(line)
        if dict_result:
            return dict_result
        format_result = self.do_format_specs(line)
        if format_result:
            return format_result
        return None
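
To wire such a class in, point the processor_class option at its import path in the configuration file; the module path below is hypothetical and depends on where you keep the class:

processor_class: "mypackage.handlers.KitaHandlerTwo"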

Processing pipeline

The order of processing in Stashpy is relatively straightforward. First, the to_dict specs are applied; if any of the patterns match, the resulting dictionary is returned. If there are no such matches, the to_format specs are applied, and the result from the first match is returned. If you are using a custom class for processing, you can introduce your own ordering and logic.

Testing

One thing that really annoyed me about Logstash was that testing patterns was incredibly difficult; the only reliable test could be done on the live system. Stashpy aims to make testing patterns simpler. In order to test a parsing specification, simply subclass stashpy.tests.PatternTest. This class provides the following methods for processing a log line with a parsing specification:

  • process_to_dict(self, to_dict_spec, logline): Processes a list of to-dictionary parsing specifications against the log line and returns the result.

  • process_to_format(self, to_format_spec, logline): Processes a list of to-format parsing specifications against the log line and returns the result.

Here is a sample test:

import unittest

from stashpy.tests import PatternTest

class SampleTest(PatternTest):

    def test_pattern(self):
        self.assertDictEqual(
            self.process_to_dict(["My name is {name} and I'm {age:d} years old."],
                                 "My name is Yuri and I'm 3 years old."),
            {'name': 'Yuri', 'age': 3})

if __name__ == '__main__':
    unittest.main()
