Overview
A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. See the calibration and FAQ pages of the documentation.
- Free software: BSD 2-Clause License
Installation
pip install pytest-benchmark
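If you plan to use the histogram feature shown further below, the extra plotting dependencies can, assuming the packaging extra is named as documented, be pulled in at install time:
pip install "pytest-benchmark[histogram]"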
Documentation
For the latest release: pytest-benchmark.readthedocs.org/en/stable.
For the master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.
Examples
But first, a prologue:
This plugin tightly integrates into pytest. To use it effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch the talks.
A few notes:
- This plugin benchmarks functions and only that. If you want to measure a block of code or a whole program you will need to write a wrapper function.
- In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or use parametrization (see the sketch after this list).
- To run the benchmarks you simply use pytest to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run pytest --help for more details.
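As a minimal sketch of the parametrization approach (the fast, slow and test_variants names are made up for illustration; it relies only on pytest's built-in @pytest.mark.parametrize):

import time

import pytest


def fast():
    time.sleep(0.000001)


def slow():
    time.sleep(0.001)


@pytest.mark.parametrize("func", [fast, slow])
def test_variants(benchmark, func):
    # One parametrized test produces one benchmark (and one row in
    # the result table) per variant.
    benchmark(func)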
This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.
Example:
import time


def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123
def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123
You can also pass extra arguments:
def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)
Or even keyword arguments:
def test_my_stuff(benchmark):
    benchmark(something, duration=0.02)
Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:
def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)
A better way is to just benchmark the final function:
def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!
If you need fine control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there is a special pedantic mode:
def my_special_setup():
    ...


def test_with_setup(benchmark):
    benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)
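A further sketch, assuming the documented behaviour that a pedantic setup function may return an (args, kwargs) tuple to be passed to the benchmarked callable (the make_args and test_sorting names are made up; check the pedantic docs before relying on this):

def make_args():
    # Runs once per round; the returned (args, kwargs) pair is passed
    # to the benchmarked callable, so each round gets a fresh argument list.
    payload = list(range(1000, 0, -1))
    return (payload,), {}


def test_sorting(benchmark):
    benchmark.pedantic(sorted, setup=make_args, rounds=50, iterations=1)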
Screenshots
Normal run:
Compare mode (--benchmark-compare):
Histogram (--benchmark-histogram):
Also, the histogram has nice tooltips.
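For reference, typical command lines for these modes might look like the following (a sketch; the flag names are the documented ones, but check pytest --help for the options available in your version):
pytest --benchmark-autosave          # save this run so it can be compared against later
pytest --benchmark-compare=0001      # compare against saved run 0001
pytest --benchmark-histogram         # write histogram .svg files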
Development
To run all the tests run:
tox
Credits
- Timing code and ideas taken from: https://github.com/vstinner/misc/blob/34d3128468e450dad15b6581af96a790f8bd58ce/python/benchmark.py