Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays


Goal

The goal of this project is to assess whether a language implementation is highly optimizing and thus able to remove the overhead of programming abstractions and frameworks. We are interested in comparing language implementations with each other and in optimizing their compilers as well as the run-time representation of objects, closures, and arrays.

This is in contrast to other projects such as the Computer Language Benchmarks Game, which encourages finding the smartest possible way to express a problem in a language to achieve the best performance.

Approach

To allow us to compare the degree of optimization done by the implementations as well as the absolute performance achieved, we set the following basic rules:

  1. The benchmark is 'identical' for all languages.
    This is achieved by relying only on a widely available and commonly used subset of language features and data types.

  2. The benchmarks should use the language 'idiomatically'.
    This means they should be realized with code that is as idiomatic as possible in each language, while relying only on the core set of abstractions.

For the detailed set of rules see the guidelines document. For a description of the set of common language abstractions see the core language document.
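To make these rules concrete, the following is a minimal sketch of how a benchmark can be written against a shared harness in the style of the Java port. The abstract Benchmark class and the SumSketch benchmark are illustrative stand-ins, not code from the suite; the exact hook names and signatures are assumptions, so consult the guidelines and core language documents for the authoritative interface.

// Illustrative sketch only: a hypothetical benchmark written against a
// simplified stand-in for the suite's benchmark base class. The
// benchmark()/verifyResult() hook names follow the spirit of the Java
// port but should be treated as assumptions.
abstract class Benchmark {
  public abstract Object benchmark();
  public abstract boolean verifyResult(Object result);
}

// A hypothetical benchmark that sticks to the core abstractions:
// plain loops, integers, and a fixed-size array.
final class SumSketch extends Benchmark {
  @Override
  public Object benchmark() {
    int[] values = new int[1000];
    for (int i = 0; i < values.length; i++) {
      values[i] = i;
    }
    int sum = 0;
    for (int v : values) {
      sum += v;
    }
    return sum;
  }

  @Override
  public boolean verifyResult(Object result) {
    // Each benchmark checks its own result so that broken ports are caught.
    return ((Integer) result) == 499500;  // 0 + 1 + ... + 999
  }

  public static void main(String[] args) {
    Benchmark b = new SumSketch();
    System.out.println(b.verifyResult(b.benchmark()));  // expected: true
  }
}

Keeping result verification inside each benchmark is what makes it possible to check that ports in different languages compute the same thing.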

The initial publication describing the project is Cross-Language Compiler Benchmarking: Are We Fast Yet? and can be cited as follows:

Stefan Marr, Benoit Daloze, Hanspeter Mössenböck. 2016. Cross-Language Compiler Benchmarking: Are We Fast Yet? In Proceedings of the 12th Symposium on Dynamic Languages (DLS '16). ACM.

Disclaimer: This is an Academic Project to Facilitate Research on Languages

To facilitate research, the goal of this project is specifically to assess the effectiveness of compiler and runtime optimizations for a common set of abstractions between languages. As such, many other relevant aspects such as GC, standard libraries, and language-specific abstractions are not included here. However, by focusing on one aspect, we know exactly what is compared.

Current Status

Currently, we have 14 benchmarks ported to seven different languages: Crystal, Java, JavaScript, Python, Ruby, SOM Smalltalk, and SOMns (a Newspeak implementation).

The graph below shows some older results for different implementations after warmup, to ensure peak performance is reported:

(Figure: peak performance of Java, Node.js, JRuby, JRuby+Truffle, MRI, and SOMns; last update 2016-06-20.)

A detailed overview of the results is in docs/performance.md.

For a performance comparison over time, see the timeline view on awfy-speed.stefan-marr.de. The runs are managed at smarr/awfy-runs.

The benchmarks are listed below. A detailed analysis including metrics for the benchmarks is in docs/metrics.md.

Macro Benchmarks

  • CD is a simulation of an airplane collision detector. Based on WebKit's JavaScript CDjs. Originally, CD was designed to evaluate real-time JVMs.

  • DeltaBlue is a classic VM benchmark used to tune, e.g., Smalltalk, Java, and JavaScript VMs. It implements a constraint solver.

  • Havlak implements a loop recognition algorithm. It has been used to compare C++, Java, Go, and Scala performance.

  • Json is a JSON string parsing benchmark derived from the minimal-json Java library.

  • Richards is a classic benchmark simulating an operating system kernel. The code used here is based on Wolczko's Smalltalk version.

Micro Benchmarks

Micro benchmarks are based on SOM Smalltalk benchmarks unless noted otherwise.

  • Bounce simulates a ball bouncing within a box.

  • List recursively creates and traverses lists.

  • Mandelbrot calculates the classic fractal. It is derived from the Computer Language Benchmarks Game.

  • NBody simulates the movement of planets in the solar system. It is derived from the Computer Language Benchmarks Game.

  • Permute generates permutations of an array.

  • Queens solves the eight queens problem.

  • Sieve finds prime numbers based on the sieve of Eratosthenes (see the sketch after this list).

  • Storage creates and verifies a tree of arrays to stress the garbage collector.

  • Towers solves the Towers of Hanoi game.
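
To illustrate the size and style of these micro benchmarks, here is a sketch of a Sieve-of-Eratosthenes kernel in the spirit of the Sieve benchmark. It is illustrative only and not the code from the suite; the class name, the problem size of 5000, and the printed count are assumptions made for this example.

// Illustrative sketch in the spirit of the Sieve micro benchmark;
// not the suite's actual code.
public final class SieveSketch {

  // Counts the primes up to `size` using a boolean flag array
  // (flags[i - 1] == true means i has been crossed out as composite).
  static int sieve(boolean[] flags, int size) {
    int primeCount = 0;
    for (int i = 2; i <= size; i++) {
      if (!flags[i - 1]) {
        primeCount++;
        for (int k = i + i; k <= size; k += i) {
          flags[k - 1] = true;
        }
      }
    }
    return primeCount;
  }

  public static void main(String[] args) {
    // There are 669 primes up to 5000.
    System.out.println(sieve(new boolean[5000], 5000));
  }
}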

Contributing

Considering the large number of languages out there, we are open to contributions of benchmark ports to new languages. We would also be interested in new benchmarks that are in the range of 300 to 1000 lines of code.

When porting to a new language, please carefully consider the guidelines and description of the core language to ensure that we can compare results.

A list of languages we would definitely be interested in is on the issues tracker.

This includes languages like Dart, Scala, Python, and Go. Other interesting ports could be for Racket, Clojure, or CLOS, but might require more carefully thought-out rules for porting. Similarly, ports to C++ or Rust need additional care to account for the absence of a garbage collector.

Getting the Code and Running Benchmarks

Quick Start

To obtain the code, benchmarks, and documentation, check out the git repository:

git clone --depth 1 https://github.com/smarr/are-we-fast-yet.git

Note that the repository relies on git submodules, which won't be loaded at that point. They are only needed to run the full range of language implementations and experiments, and can be fetched later with git submodule update --init --recursive.

Run Benchmarks for a Specific Language

The benchmarks are sorted by language in the benchmarks folder. Each language has its own harness. For JavaScript and Ruby, the benchmarks are executed like this:

cd benchmarks/JavaScript
node harness.js Richards 5 10
cd ../Ruby
ruby harness.rb Queens 5 20

The harness takes three parameters: benchmark name, number of iterations, and problem size. The benchmark name corresponds to a class or file of a benchmark. The number of iterations defines how often a benchmark should be executed. The problem size can be used to influence how long a benchmark takes. Note that some benchmarks rely on magic numbers to verify their results. Those might not be included for all possible problem sizes.

The rebench.conf file specifies the supported problem sizes for each benchmark.
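
Conceptually, the harness repeats the benchmark for the requested number of iterations, times each run, and checks the result. The sketch below illustrates this loop; it is not the suite's actual harness code, the Benchmark interface is a simplified stand-in, and the handling of the problem size is omitted for brevity.

// Conceptual sketch of the harness's measurement loop; illustrative only.
public final class HarnessSketch {

  // Simplified stand-in for the suite's benchmark interface.
  interface Benchmark {
    Object benchmark();
    boolean verifyResult(Object result);
  }

  static void run(Benchmark bench, int numIterations) {
    for (int i = 1; i <= numIterations; i++) {
      long start = System.nanoTime();
      Object result = bench.benchmark();
      long elapsedUs = (System.nanoTime() - start) / 1000;
      if (!bench.verifyResult(result)) {
        throw new IllegalStateException("Benchmark produced an incorrect result");
      }
      System.out.println("iteration " + i + ": runtime " + elapsedUs + " us");
    }
  }

  public static void main(String[] args) {
    // Run a trivial stand-in benchmark three times.
    run(new Benchmark() {
      public Object benchmark() { return 6 * 7; }
      public boolean verifyResult(Object result) { return ((Integer) result) == 42; }
    }, 3);
  }
}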

Using the Full Benchmark Setup

The setup and building of benchmarks and VMs is automated via implementations/setup.sh. Benchmarks are configured and executed with the ReBench tool.

To execute the benchmarks on all supported VMs, the corresponding language implementations are expected to be already available on the benchmark machine.

This repository uses git submodules for some language implementations. To build these, additional tools are required, including Ant, Make, Python, and a C/C++ compiler.

The implementations folder contains wrapper scripts such as mri-23.sh, java8.sh, and node.sh to execute all language implementations in a common way by ReBench.

ReBench can be installed via the Python package manager pip:

pip install ReBench

The benchmarks can be executed with the following command in the root folder:

rebench -d --without-nice rebench.conf all

The -d flag gives more detailed output during execution, and --without-nice means the nice tool, which would enforce a high process priority, is not used; we avoid it here so that root rights are not required.

Note: The rebench.conf file specifies how and which benchmarks to execute. It also defines the arguments to be passed to the benchmarks.

Academic Work using this benchmark suite
