  • Stars: 785
  • Rank: 57,957 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: over 8 years ago
  • Updated: 3 months ago


Repository Details

Toolkit to run Python benchmarks

pyperf


The Python pyperf module is a toolkit to write, run and analyze benchmarks.

Features

  • Simple API to run reliable benchmarks.
  • Automatically calibrate a benchmark for a time budget.
  • Spawn multiple worker processes.
  • Compute the mean and standard deviation.
  • Detect if a benchmark result seems unstable.
  • JSON format to store benchmark results.
  • Support multiple units: seconds, bytes and integers.

Usage

To run a benchmark use the pyperf timeit command (result written into bench.json):

$ python3 -m pyperf timeit '[1,2]*1000' -o bench.json
.....................
Mean +- std dev: 4.22 us +- 0.08 us
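The summary line is just the mean and (presumably sample) standard deviation over all collected timing values. A minimal sketch using the standard library's statistics module, with made-up per-loop timings (pyperf's exact aggregation is not reproduced here):

```python
import statistics

# Hypothetical per-loop timings in microseconds (illustrative values only).
values = [4.15, 4.21, 4.30, 4.18, 4.26]

mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation
print(f"Mean +- std dev: {mean:.2f} us +- {stdev:.2f} us")
```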

Or write a benchmark script bench.py:

#!/usr/bin/env python3
import pyperf

runner = pyperf.Runner()
runner.timeit(name="sort a sorted list",
              stmt="sorted(s, key=f)",
              setup="f = lambda x: x; s = list(range(1000))")

See the API docs for full details on the timeit function and the Runner class. To run the script and dump the results into a file named bench.json:

$ python3 bench.py -o bench.json

To analyze benchmark results use the pyperf stats command:

$ python3 -m pyperf stats telco.json
Total duration: 29.2 sec
Start date: 2016-10-21 03:14:19
End date: 2016-10-21 03:14:53
Raw value minimum: 177 ms
Raw value maximum: 183 ms

Number of calibration run: 1
Number of run with values: 40
Total number of run: 41

Number of warmup per run: 1
Number of value per run: 3
Loop iterations per value: 8
Total number of values: 120

Minimum:         22.1 ms
Median +- MAD:   22.5 ms +- 0.1 ms
Mean +- std dev: 22.5 ms +- 0.2 ms
Maximum:         22.9 ms

  0th percentile: 22.1 ms (-2% of the mean) -- minimum
  5th percentile: 22.3 ms (-1% of the mean)
 25th percentile: 22.4 ms (-1% of the mean) -- Q1
 50th percentile: 22.5 ms (-0% of the mean) -- median
 75th percentile: 22.7 ms (+1% of the mean) -- Q3
 95th percentile: 22.9 ms (+2% of the mean)
100th percentile: 22.9 ms (+2% of the mean) -- maximum

Number of outlier (out of 22.0 ms..23.0 ms): 0
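The reported outlier range is consistent with Tukey's fences (Q1 - 1.5*IQR to Q3 + 1.5*IQR), though that rule is an assumption here, not pyperf's documented formula. A small sketch with illustrative values (not the actual telco.json data):

```python
import statistics

# Illustrative timing values in milliseconds.
values = [22.1, 22.3, 22.4, 22.5, 22.5, 22.6, 22.7, 22.9]

q1, median, q3 = statistics.quantiles(values, n=4)  # quartiles
mad = statistics.median(abs(v - median) for v in values)

# Tukey's fences: values outside Q1 - 1.5*IQR .. Q3 + 1.5*IQR count as outliers.
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [v for v in values if not low <= v <= high]

print(f"Median +- MAD: {median:.1f} ms +- {mad:.2f} ms")
print(f"Number of outlier (out of {low:.1f} ms..{high:.1f} ms): {len(outliers)}")
```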

There's also:

  • The pyperf compare_to command tests whether a difference is significant. It supports comparisons between multiple benchmark suites (each made of multiple benchmarks):

    $ python3 -m pyperf compare_to --table mult_list_py36.json mult_list_py37.json mult_list_py38.json
    +----------------+----------------+-----------------------+-----------------------+
    | Benchmark      | mult_list_py36 | mult_list_py37        | mult_list_py38        |
    +================+================+=======================+=======================+
    | [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster | not significant       |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower | 3.18 us: 1.16x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower | 4.17 us: 1.11x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | Geometric mean | (ref)          | 1.22x slower          | 1.09x faster          |
    +----------------+----------------+-----------------------+-----------------------+
    
  • pyperf system tune command to tune your system to run stable benchmarks.

  • Automatically collect metadata on the computer and the benchmark: use the pyperf metadata command to display them, or the pyperf collect_metadata command to manually collect them.

  • --track-memory and --tracemalloc options to track the memory usage of a benchmark.
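The "Geometric mean" row of the compare_to table above can be reproduced from the per-benchmark means it lists. A quick check (the ratio convention, candidate over reference, is inferred from the table rather than taken from pyperf's source):

```python
import statistics

# Mean timings from the compare_to table, in microseconds.
py36 = [2.13, 3.70, 4.61]   # reference column
py37 = [2.09, 5.28, 6.05]

# Ratio > 1 means the candidate is slower than the reference.
ratios = [new / ref for ref, new in zip(py36, py37)]
geomean = statistics.geometric_mean(ratios)
print(f"Geometric mean: {geomean:.2f}x slower")
```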

Installation

Command to install pyperf on Python 3:

python3 -m pip install pyperf

pyperf requires Python 3.7 or newer.

Python 2.7 users can use pyperf 1.7.1 which is the last version compatible with Python 2.7.

pyperf is distributed under the MIT license.

The pyperf project is covered by the PSF Code of Conduct.

More Repositories

1. requests (Python, 51,920 stars): A simple, yet elegant, HTTP library.
2. black (Python, 38,653 stars): The uncompromising Python code formatter.
3. requests-html (Python, 13,722 stars): Pythonic HTML Parsing for Humans™.
4. cachecontrol (Python, 467 stars): The httplib2 caching algorithms packaged up for use with requests.
5. fundable-packaging-improvements (51 stars): Packaging improvements that could be funded.
6. webassembly (48 stars): A repo to track the progress of Python on WebAssembly (WASM).
7. request-for (44 stars): Canonical location of Python Software Foundation Request for Information/Proposal documents.
8. gh-migration (42 stars): This repo is used to manage the migration from bugs.python.org to GitHub.
9. python-in-edu (Python, 41 stars): Website for educational Python resources.
10. black-pre-commit-mirror (Python, 35 stars)
11. pycon-us-mobile (TypeScript, 29 stars)
12. advisory-database (Python, 25 stars): A repository of vulnerability advisories for projects in scope for the Python Software Foundation CVE Numbering Authority (CNA).
13. bpo-tracker-cpython (Python, 24 stars)
14. project-funding-wg (21 stars)
15. community-code-of-conduct (20 stars): The Python Software Foundation Community Code of Conduct.
16. psf-tuf-runbook (Rust, 15 stars): A runbook for the PSF, for TUF key setup and initial signing operations to bootstrap signing for PyPI.
17. bylaws (10 stars): PSF Bylaws in markdown format.
18. diversity-and-inclusion-wg (CSS, 9 stars): The Diversity and Inclusion Working Group is a volunteer workgroup of the Python Software Foundation. The workgroup's purpose is to further the PSF's mission to 'support and facilitate the growth of a diverse and international community of Python programmers.' We also aim to provide guidance to the PSF Board of Directors in line with this mandate.
19. elections (Python, 8 stars): Tools and documentation around running a PSF election.
20. the-invisibles (JavaScript, 8 stars): Pypodcats website.
21. policies (Dockerfile, 6 stars)
22. bpo-roundup (Python, 5 stars)
23. .github (4 stars): Organization-wide GitHub settings.
24. bpo-tracker-roundup (HTML, 1 star)
25. bpo-tracker-jython (HTML, 1 star)
26. fides-deploy (Python, 1 star): The PSF's deployment of Fides.
27. bpo-rietveld (Python, 1 star)
28. bpo-django-gae2django (Python, 1 star)
29. user-success-wg (1 star): Repository for the User Success working group at the Python Software Foundation.