
μ/log

μ/log (Pronounced: /mjuːlog/) is a micro-logging library that logs events and data, not words!

From the Greek letter μ, mu (Pronunciation: /mjuː/)
The twelfth letter of the Greek alphabet (Μ, μ), often used as a prefix for micro-, which is 10⁻⁶ in the SI (International System of Units). The lowercase letter "u" is often substituted for "μ" when the Greek character is not typographically available.

(source: https://en.wikipedia.org/wiki/Mu_(letter))

Features

Here are some features and key design decisions that make μ/log special:

  • Effortlessly logs events as data points.
  • No need to construct strings that then need to be deconstructed later.
  • Fast, extremely fast, under 300 nanoseconds per log entry
  • Memory bound; no unbounded use of memory
  • All the processing and rendering happens asynchronously.
  • Ability to add contextual logging.
  • Adding publishers won't affect logging performance
  • Extremely easy to create stateful publishers for new systems
  • Wide range of publishers available (see available list)
  • Event logs are useful, but not as important as the process flow (therefore it is preferable to drop events rather than crash the process)
  • Because it is cheap to log events, you can freely log plenty.
  • And events are just data, so you can process, enrich, filter, aggregate and visualise the data with your own tools.

Motivation

It is not the intention of µ/log to be a logging system in the sense of Log4j et al. In any significant project I have worked on in the last 15 years, logging text messages resulted in a large amount of strings which were hard to make sense of, and thus mostly ignored. µ/log's idea is to replace the "3 Pillars of Observability" with a more fundamental concept: "the event". Event-based data is easy to index, search, augment, aggregate and visualise, and therefore can easily replace traditional logs, metrics and traces.

Existing logging libraries are based on a design from the 80s and early 90s. Most of the systems at the time were developed on standalone servers, where logging messages to a console or file was the predominant thing to do. Logging mostly provided debugging information and system behavioural introspection.

Most modern systems are distributed across virtualized machines that live in the cloud. These machines could disappear at any time. In this context, logging to the local file system isn't useful, as logs are easily lost if virtual machines are destroyed. Therefore, it is common practice to use log collectors and centralized log processors. The ELK stack has been predominant in this space for years, but there is a multitude of other commercial and open-source products.

Most of these systems have to deal with unstructured data represented as formatted strings in files. The process of extracting information out of these strings is very tedious, error prone and definitely not fun. But the question is: why did we encode the data as strings in the first place? Only because existing log frameworks, even those redesigned in later decades, follow the same structure as when systems lived on a single server.

I believe we need to break free of these anachronistic designs and use event loggers, not message loggers, designed for dynamic distributed systems living in the cloud and using centralized log aggregators. μ/log is designed for this very purpose.

Watch my talk on μ/log at the London Clojurians Meetup:

μ/log and the next 100 logging systems

Usage

In order to use the library, add the dependency to your project.clj (or deps.edn):

;; Leiningen project
[com.brunobonacci/mulog "0.9.0"]

;; deps.edn format
{:deps { com.brunobonacci/mulog {:mvn/version "0.9.0"}}}

Then require the namespace:

(ns your-ns
  (:require [com.brunobonacci.mulog :as μ]))

;; or for the more ASCII traditionalists
(ns your-ns
  (:require [com.brunobonacci.mulog :as u]))

Check the online documentation

Then instrument your code with the log statements you deem useful. The general structure is:

(μ/log event-name, key1 value1, key2 value2, ... keyN valueN)

For example:

;; good to use namespaced keywords for the event-name
(μ/log ::hello :to "New World!")

You can add as many key-value pairs as you deem useful to express the event in your system.

However, you will NOT be able to see any events until you add a publisher, which will take your events and send them to a distributed logger or your local console (if you are developing). For example, to start a console publisher:

(μ/start-publisher! {:type :console})

At this point you should be able to see the previous event in your REPL terminal and it will look as follows:

{:mulog/trace-id #mulog/flake "4VTBeu2scrIEMle9us8StnmvRrj9ThWP", :mulog/timestamp 1587500402972, :mulog/event-name :your-ns/hello, :mulog/namespace "your-ns", :to "New World!"}

Here are some examples of events you might want to log.

;; The general form is
(μ/log ::event-name, :key1 "value1", :key2 :value2, :keyN "valueN")

;; examples
(μ/log ::system-started :version "0.1.0" :init-time 32)

(μ/log ::user-logged :user-id "1234567" :remote-ip "1.2.3.4" :auth-method :password-login)

(μ/log ::http-request :path "/orders", :method :post, :remote-ip "1.2.3.4", :http-status 201, :request-time 129)

(def x (RuntimeException. "Boom!"))
(μ/log ::invalid-request :exception x, :user-id "123456789", :items-requested 47)

(μ/log ::position-updated :poi "1234567" :location {:lat 51.4978128, :lng -0.1767122} )

All of the above are examples of events you might want to track, collect and aggregate in a specialized timeseries database.

Use of context

Logging events which are rich in attributes and dimensions is extremely useful; however, it is not easy to have all the attributes and dimensions at your disposal everywhere in the code. To get around this problem, μ/log supports the use of context.

There are two levels of context: a global one and a local one.

The global context allows you to define properties and values which will be added to all the events logged afterwards.

For example:

(μ/log ::system-started :init-time 32)
;; {:mulog/timestamp 1572709206048, :mulog/event-name :your-ns/system-started, :mulog/namespace "your-ns", :init-time 32}

;; set global context
(μ/set-global-context! {:app-name "mulog-demo", :version "0.1.0", :env "local"})

(μ/log ::system-started :init-time 32)
;; {:mulog/event-name :your-ns/system-started,
;;  :mulog/timestamp  1587501375129,
;;  :mulog/trace-id   #mulog/flake "4VTCYUcCs5KRbiRibgulnns3l6ZW_yxk",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :init-time        32,
;;  :version          "0.1.0"}

Typically, you will set the global context once in your main function at the start of your application, with properties which are valid for all events emitted by the process. Use set-global-context! to set the whole context, or update-global-context! with an update function to change some of the values, as shown below. Examples of properties you should consider adding to the global context are app-name, version, environment, process-id, host-ip, os-type, jvm-version, etc.
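
For example, a minimal sketch of update-global-context!, assuming it applies the supplied function (here assoc) and its arguments to the current global context map; the :host-ip value is purely illustrative:

;; add or change individual properties without replacing the whole context
;; (assumes update-global-context! applies `assoc` to the current context map)
(μ/update-global-context! assoc :host-ip "10.0.0.42" :env "staging")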

The second type of context is the (thread) local context. It can be used to inject information about the current processing, and all the events within the scope of the context will inherit the properties and their values.

For example, the event logged by the following code will contain all the properties of the global context, all the properties of the local context and all the inline properties.

(μ/with-context {:order "abc123"}
  (μ/log ::item-processed :item-id "sku-123" :qt 2))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501473472,
;;  :mulog/trace-id   #mulog/flake "4VTCdCz6T_TTM9bS5LCwqMG0FhvSybkN",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :version          "0.1.0"}

The local context can be nested:

(μ/with-context {:transaction-id "tx-098765"}
  (μ/with-context {:order "abc123"}
    (μ/log ::item-processed :item-id "sku-123" :qt 2)))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501492168,
;;  :mulog/trace-id   #mulog/flake "4VTCeIc_FNzCjegzQ0cMSLI09RqqC2FR",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :transaction-id   "tx-098765",
;;  :version          "0.1.0"}

Local context works across function boundaries:

(defn process-item [sku quantity]
    ;; ... do something
    (μ/log ::item-processed :item-id sku :qt quantity)
    ;; ... do something
    )

(μ/with-context {:order "abc123"}
    (process-item "sku-123" 2))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501555926,
;;  :mulog/trace-id   #mulog/flake "4VTCi08XrCWQLrR8vS2nP8sG1zDTGuY_",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :version          "0.1.0"}

Best practices

Here are some best practices to follow while logging events:

  • Use namespaced keywords or qualified strings for the event-name
  • Log plain values, not opaque objects; objects will be turned into strings, which diminishes their value.
  • Do not log mutable values: since rendering is done asynchronously, you could be logging a different state than the one present at the time of the call. If values are mutable, capture the current state (deref) and log it (see the sketch after this list).
  • Avoid logging deeply nested maps; they are hard to query.
  • Log timestamps with milliseconds precision.
  • Use the global context to enrich events with the application name (:app-name), version (:version), environment (:env), host, OS, PID, and other useful information, so that it is always possible to determine the source of the event. See example here.
  • If you have to log an error/exception, add the exception object under the :exception key. For example:
    (try
      (something)
      (catch Exception x
        (μ/log ::actionX :exception x :status :failed)))
    It will be easier to search for all the errors in Elasticsearch just by looking for the presence of the exception key (Elasticsearch query example: exception:*)
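
Here is a minimal sketch of the mutable-value advice above, assuming a hypothetical atom called session-stats holding the state you want to record:

;; hypothetical mutable state, used only for illustration
(def session-stats (atom {:active-sessions 42, :failed-logins 3}))

;; capture the current state with deref (@) and log the immutable snapshot,
;; rather than logging the atom itself
(μ/log ::session-stats-sampled :stats @session-stats)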

μ/trace

since v0.2.0

μ/trace (Pronounced: /mjuːtrace/) is a micro distributed-tracing library with a focus on tracking data with custom attributes.

μ/trace is a subsystem of μ/log and relies heavily on it. While the objective of μ/log is to record and publish an event which happens at a single point in time, the objective of μ/trace is to record and publish an event that spans over a short period of time and, potentially, across multiple systems.

μ/trace can be used within a single system, where it will provide accurate data about the instrumented operations of that system. μ/trace can also be used in a distributed setup, in conjunction with other distributed tracers such as Zipkin, and participate in distributed traces.

μ/trace data points are not confined to distributed tracers; the data can also be used and interpreted in Elasticsearch, in real-time streaming systems which use Apache Kafka, etc.

Assume that you have a complex operation for which you want to track the rate, the outcome and the latency, and have contextual information about the call.

One example of such calls is the call to an external service or database to retrieve the current product availability.

Here is an example of such a call:

;; example call to external service
(defn product-availability [product-id]
  (http/get availability-service {:product-id product-id}))

We want to track how long this operation takes and, if it fails, what the reason is. With μ/trace we can instrument the request as follows:

;; same require as mulog
;; (require '[com.brunobonacci.mulog :as μ])

;; wrap the call to the `product-availability` function with μ/trace
(μ/trace ::availability
  []
  (product-availability product-id))

μ/trace will start a timer before calling (product-availability product-id) and, when the execution completes, it will log an event using μ/log. To the caller it will be like calling (product-availability product-id) directly, as the caller will receive the evaluation result of the body. In addition, μ/log will publish the following event:

;; {:mulog/event-name :your-ns/availability,
;;  :mulog/timestamp 1587504242983,
;;  :mulog/trace-id #mulog/flake "4VTF9QBbnef57vxVy-b4uKzh7dG7r7y4",
;;  :mulog/root-trace #mulog/flake "4VTF9QBbnef57vxVy-b4uKzh7dG7r7y4",
;;  :mulog/duration 254402837,
;;  :mulog/namespace "your-ns",
;;  :mulog/outcome :ok,
;;  :app-name "mulog-demo",
;;  :env "local",
;;  :version "0.1.0"}

There are a few things to notice here:

  • Firstly, it inherited the global context which we set for μ/log (:app-name, :version and :env)
  • Next, we have the same keys which are available in μ/log events, such as: :mulog/event-name, :mulog/timestamp, :mulog/namespace and :mulog/trace-id.
  • In addition to the :mulog/trace-id, which identifies this particular trace event, there are two more IDs: one called :mulog/root-trace and the second one called :mulog/parent-trace. The latter is missing here because this trace doesn't have a parent μ/trace block. The :mulog/root-trace is the id of the originating trace, which could be coming from another system. In this example, the :mulog/root-trace is the same as the :mulog/trace-id because this trace is the first one (and the only one) of the stack.
  • Next, we have :mulog/duration, which is the duration of the evaluation of the body (the product-availability call) expressed in nanoseconds.
  • Whether the call succeeded or failed is specified in :mulog/outcome, which can be :ok or :error. The latter will be set in case an exception is raised; in this case, an additional :exception property will be added with the actual exception. In case of errors, the exception will be rethrown to the caller for further handling (see the sketch after this list).
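
For illustration, here is a minimal sketch of the error case, assuming a body that throws (the exception and its message are purely illustrative):

;; a failing body, used only to illustrate the :error outcome
(μ/trace ::availability
  [:product-id product-id]
  (throw (RuntimeException. "availability service unreachable")))

;; the published trace event would carry :mulog/outcome :error and an
;; :exception property with the exception object, and the exception is
;; rethrown to the caller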

In the above example, we are missing some contextual information. For example, we know that someone is enquiring about product availability, but we don't know which product. This information is available at the point of the call, and it would be nice to be able to see it in the trace as well. That's easily done.

Like μ/log events, we can add key/value pairs to the trace as well:

(μ/trace ::availability
  [:product-id product-id]
  (product-availability product-id))

Note that within the square brackets we have added the info we need. But we can go one step further. Let's assume that we had the order-id and the user-id of the person enquiring about the availability in the local context; then we would have the following trace event:

(def product-id "2345-23-545")
(def order-id   "34896-34556")
(def user-id    "709-6567567")

(μ/with-context {:order order-id, :user user-id}
  (μ/trace ::availability
    [:product-id product-id]
    (product-availability product-id)))

;; {:mulog/event-name :your-ns/availability,
;;  :mulog/timestamp 1587506497789,
;;  :mulog/trace-id #mulog/flake "4VTHCez0rr3TpaBmUQrTb2DZaYmaWFkH",
;;  :mulog/root-trace #mulog/flake "4VTHCez0rr3TpaBmUQrTb2DZaYmaWFkH",
;;  :mulog/duration 280510026,
;;  :mulog/namespace "your-ns",
;;  :mulog/outcome :ok,
;;  :app-name "mulog-demo",
;;  :env "local",
;;  :order "34896-34556",
;;  :product-id "2345-23-545",
;;  :user "709-6567567",
;;  :version "0.1.0"}

One important difference between with-context and the μ/trace pairs is that with-context will propagate that information to all nested calls, while the μ/trace pairs will only be added to that specific trace event and not to the nested ones, as sketched below.
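
A minimal sketch of this difference, assuming a hypothetical check-stock function instrumented with its own μ/trace:

;; hypothetical nested function, instrumented with its own trace
(defn check-stock [product-id]
  (μ/trace ::check-stock
    []
    (product-availability product-id)))

(μ/with-context {:order order-id}   ;; propagated to all nested events
  (μ/trace ::availability
    [:product-id product-id]        ;; added only to the ::availability event
    (check-stock product-id)))

;; the ::check-stock event would include :order (from with-context) and a
;; :mulog/parent-trace pointing at the ::availability trace, but it would
;; NOT include :product-id, which is a pair of the outer trace only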

If we had the following set of nested calls:

(process-order)
└── (availability)
    ├── (warehouse-availability)
    ├── (shopping-carts)
    └── (availability-estimator)

Here process-order checks the availability of each product; to check the availability of each product you need to verify what is available in the warehouse, as well as how many items are locked in in-flight shopping carts, and then provide this information to an estimator. You would end up with a trace which looks like the following:

(image: nested traces)
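
A sketch of how such nesting could be instrumented; the function bodies and the order map shape (:id, :product-ids) are hypothetical placeholders, and only the μ/trace wrapping is the point here:

(defn warehouse-availability [product-id]
  (μ/trace ::warehouse-availability [:product-id product-id]
    {:in-stock 10}))                      ;; placeholder warehouse lookup

(defn shopping-carts [product-id]
  (μ/trace ::shopping-carts [:product-id product-id]
    {:locked-in-carts 2}))                ;; placeholder carts lookup

(defn availability-estimator [warehouse carts]
  (μ/trace ::availability-estimator []
    (- (:in-stock warehouse) (:locked-in-carts carts))))

(defn availability [product-id]
  (μ/trace ::availability [:product-id product-id]
    (availability-estimator
      (warehouse-availability product-id)
      (shopping-carts product-id))))

(defn process-order [order]
  (μ/trace ::process-order [:order-id (:id order)]
    (run! availability (:product-ids order))))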

Publishers

Publishers allow you to send the events to external systems where they can be stored, indexed, transformed or visualised.

Most of the publishers are in separate modules to reduce the risk of dependency clashes. Please see the specific publisher documentation for the name of the module to add to your dependencies.
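
For example, a hedged deps.edn sketch; the module name com.brunobonacci/mulog-elasticsearch and the matching version are assumptions, so check the publisher documentation for the exact coordinates:

;; deps.edn — core library plus one publisher module (assumed coordinates)
{:deps {com.brunobonacci/mulog               {:mvn/version "0.9.0"}
        com.brunobonacci/mulog-elasticsearch {:mvn/version "0.9.0"}}}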

Publishers can be started as follows:

(def pub (μ/start-publisher! {:type :console :pretty? true}))

The map contains the configuration which is specific to the publisher.

start-publisher! returns a function with no arguments which, when called, stops the publisher and flushes the records currently present in the buffer. Finally, if the publisher implements java.io.Closeable, its close method will be called to release/close external resources.
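
For example, a minimal sketch of the start/stop lifecycle (the stop-console name is illustrative):

;; start the console publisher and keep the returned stop function
(def stop-console (μ/start-publisher! {:type :console}))

(μ/log ::app-started :version "0.1.0")

;; later, e.g. on shutdown: stop the publisher and flush the buffered events
(stop-console)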

Here is the list of all available publishers:

Contributions

I consider the core pretty much feature complete, therefore I won't accept changes to the core module. However, there is loads of work to be done on supporting libraries and publishers for various systems. Here your help is welcome: you can have a look at the list of open issues marked as help wanted.

PRs are welcome ;-)

To contribute:

  • pick an issue you would like to work on.
  • drop a message on the issue so that I know someone is working on it
  • follow the guidelines in the ticket
  • if in doubt, just ask!

Need help?

If you have questions or need help, please open an issue or post your question in the GitHub Discussions board.

Alternatively you can post a question to the #mulog channel in the Clojurians Slack team.

Related projects

Here are some other open-source projects which are related to μ/log:

  • slf4j-mulog - a SLF4j backend for μ/log.

    It enables you to send the traditional logs from your existing projects via μ/log and leverage all of μ/log's capabilities to filter/transform/enrich events before publishing.

License

Copyright © 2019-2021 Bruno Bonacci - Distributed under the Apache License v2.0
