
pytest-austin

Python Performance Testing with Austin

         


Synopsis • Installation • Usage • Compatibility • Contribute


Synopsis

The pytest-austin plugin for pytest brings Python performance testing right into your CI pipelines. It uses Austin to profile your test runs without any instrumentation: all you have to do is mark the tests you want to check for performance regressions.

from datetime import timedelta as td

import pytest


@pytest.mark.total_time(td(milliseconds=50), function="fibonacci")
@pytest.mark.total_time("99%", line=8)
@pytest.mark.total_time("50.3141592653%", line=9)
def test_hello_default():
    fibonacci(27)
    fibonacci(25)

Any failed checks will be reported by pytest at the end of the session. All the collected statistics are written to a file prefixed with austin_ and followed by a truncated timestamp, inside the pytest rootdir. You can drop it onto Speedscope for a quick visual representation of your tests.

================================================================ test session starts ================================================================
platform linux -- Python 3.6.9, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /home/gabriele/.cache/pypoetry/virtualenvs/pytest-austin-yu27Ep_e-py3.6/bin/python3.6
cachedir: .pytest_cache
rootdir: /tmp/pytest-of-gabriele/pytest-226/test_austin_time_checks0
plugins: cov-2.10.0, austin-0.1.0
collecting ... collected 3 items

test_austin_time_checks.py::test_lines PASSED
test_austin_time_checks.py::test_check_fails PASSED
test_austin_time_checks.py::test_check_succeeds PASSED

=================================================================== Austin report ===================================================================
austin 2.0.0
Collected stats written on /tmp/pytest-of-gabriele/pytest-226/test_austin_time_checks0/.austin_97148135487643.aprof

🕑 Sampling time (min/avg/max) : 376/3327/18019 μs
🐢 Long sampling rate : 87/87 (100.00 %) samples took longer than the sampling interval
💀 Error rate : 0/87 (0.00 %) invalid samples

test_austin_time_checks.py::test_lines test_lines:19 (test_austin_time_checks.py) -16.0 ms (-78.2% of 20.5 ms)
test_austin_time_checks.py::test_lines test_lines:18 (test_austin_time_checks.py) -4.1 ms (-10.1% of 40.3 ms)
test_austin_time_checks.py::test_lines fibonacci (test_austin_time_checks.py) -9.3 ms (-18.6% of 50.0 ms)
test_austin_time_checks.py::test_check_fails test_check_fails (test_austin_time_checks.py) +99.8 ms (9978.6% of 1000.0 μs)
test_austin_time_checks.py::test_check_succeeds test_check_succeeds (test_austin_time_checks.py) -9.0 ms (-8.1% of 110.0 ms)

================================================================== 1 check failed ===================================================================

================================================================= 3 passed in 0.35s =================================================================

Installation

pytest-austin can be installed directly from PyPI with

pip install pytest-austin --upgrade

NOTE In order for the plugin to work, the Austin binary needs to be on the PATH environment variable. See the Austin installation instructions for how to install Austin on your platform.

For platform-specific issues and remarks, refer to the Compatibility section below.

Usage

Once installed, the plugin will try to attach Austin to the pytest process in order to sample it every time you run pytest. If you want to prevent Austin from profiling your tests, you have to steal its mojo. You can do so with the --steal-mojo command line argument.

Time checks

The plugin looks for the total_time marker on collected test items, which takes a mandatory argument time and three optional ones: function, module and line.

If you simply want to check that a test item doesn't take longer than time to run, you can mark it with @pytest.mark.total_time(time). Here, time can either be a float (in seconds) or an instance of datetime.timedelta.

from datetime import timedelta as td

import pytest


@pytest.mark.total_time(td(milliseconds=50))
def test_hello_default():
    ...
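
Equivalently, the same 50 ms threshold can be expressed as a plain float of seconds. The marker name comes from the plugin; the test body here is just a sketch:

```python
import pytest


# The same 50 ms threshold, expressed as a float of seconds
@pytest.mark.total_time(0.05)
def test_hello_float():
    ...
```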

In some cases, you might want to make sure that a function or method called on a certain line of your test script executes in under a certain amount of time, say 5% of the total test time. You can achieve this like so

import pytest


@pytest.mark.total_time("5%", line=7)
def test_hello_default():
    somefunction()
    fastfunction()  # <- this is line no. 7 in the test script
    someotherfunction()

In many cases, however, one would want to test that a function or method called either directly or indirectly by a test doesn't take more than a certain overall time to run. This is where the remaining arguments of the total_time marker come into play. Suppose that you want to profile a procedure bar that is called by the method foo of an object of type Snafu. To ensure that bar doesn't take longer than, say, 50% of the overall test duration, you can write

import pytest


@pytest.mark.total_time("50%", function="bar")
def test_snafu():
    ...
    snafu = Snafu()
    ...
    snafu.foo()
    ...

You can use the module argument to resolve function name clashes. For example, if the definition of the function/method bar occurs within the modules somemodule.py and someothermodule.py, but you are only interested in the one defined in somemodule.py, you can change the above into

import pytest


@pytest.mark.total_time("50%", function="bar", module="somemodule.py")
def test_snafu():
    ...
    snafu = Snafu()
    ...
    snafu.foo()
    ...

Whilst you can also specify a line number, this is rarely practical outside of the test scripts themselves, unless the content of the module is stable enough that line numbers don't need to be updated very frequently.

When the plugin runs, it produces output containing lines of the form

test_austin_time_checks.py::test_lines test_lines:19 (test_austin_time_checks.py) -16.0 ms (-78.2% of 20.5 ms)

In this case, a negative number, such as -16.0 ms, indicates that the total time spent on test_lines:19 (test_austin_time_checks.py) was 16.0 ms less than the total allowed time specified with the total_time marker, which in this example is 20.5 ms. Hence, negative numbers indicate a successful check.

Failing tests will have a positive delta reported, e.g.

test_austin_time_checks.py::test_check_fails test_check_fails (test_austin_time_checks.py) +99.8 ms (9978.6% of 1000.0 μs)

This indicates that the total time spent on test_check_fails (test_austin_time_checks.py) was 99.8 ms more than the required threshold, which was set to 1 ms.
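
In other words, the reported delta is just the measured total minus the threshold, with the percentage expressed relative to the threshold. A minimal sketch of the arithmetic (not the plugin's actual code):

```python
def check_delta(measured_ms, threshold_ms):
    """Return the check delta and its percentage of the threshold.

    A negative delta means the check passed; positive means it failed.
    """
    delta = measured_ms - threshold_ms
    return delta, 100.0 * delta / threshold_ms


# A test that took 100.8 ms against a 1 ms threshold fails its check:
delta, pct = check_delta(100.8, 1.0)
assert delta > 0  # positive delta: check failed
```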

Memory checks

One can perform memory allocation checks with the total_memory marker. Its first argument is size, which can be either a percentage of the total memory allocated by the marked test, or an absolute measure of the maximum amount of memory, e.g. "24 MB". The function, module and line arguments work in the same way as for the total_time marker. The extra net argument can be set to True to check the total net memory usage, that is, the difference between memory allocations and deallocations.

import pytest


@pytest.mark.total_memory("24 MB")
def test_snafu():
    allota_memory()

In order to perform memory checks, you need to specify either the memory or all profile mode via the --profile-mode option.

The negative and positive memory deltas reported by the plugin in the report behave like the time deltas described in the previous section. That is, a negative memory delta indicates a successful check, whereas a positive delta indicates a check that has failed.
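
To make the net semantics concrete: a net check compares allocations minus deallocations against the threshold, whereas the default compares gross allocations. A schematic sketch of the idea (not the plugin's actual implementation):

```python
def memory_check_delta(allocated, deallocated, threshold, net=False):
    """Return the check delta in bytes: negative passes, positive fails."""
    used = allocated - deallocated if net else allocated
    return used - threshold


MB = 1024 * 1024

# A test that allocates 30 MB but frees 20 MB, against a 24 MB threshold:
assert memory_check_delta(30 * MB, 20 * MB, 24 * MB) > 0            # gross: fails
assert memory_check_delta(30 * MB, 20 * MB, 24 * MB, net=True) < 0  # net: passes
```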

Mixed checks

When in the all profile mode, you can perform both time and memory checks by stacking total_time and total_memory markers.

import pytest


@pytest.mark.total_time(5.15)
@pytest.mark.total_memory("24 MB")
def test_snafu():
    allota_memory_and_time()

Multi-processing

If your tests spawn other Python processes, you can ask pytest-austin to profile them too with the --minime option. Note that if your tests are spawning too many non-Python processes, the sampling rate might be affected because of the way that Austin tries to discover Python child processes.

Reporting

This plugin generates a report on the terminal and also dumps the collected profiling statistics to the file system for later analysis and visualisation. The verbosity of the terminal report can be controlled with the --austin-report option. By default it is set to minimal, which means that only failed checks are reported. Use full to see the results of all the checks detected and executed by the plugin.

Regarding the dump of the profiling statistics, the generated file is in the Austin format by default (this is a generalisation of the collapsed stack format). If you want the plugin to dump the data in either the pprof or speedscope format, you can set the --profile-format option accordingly.
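
For reference, the collapsed (folded) stack format that the Austin format generalises encodes one sample per line: a semicolon-separated call stack followed by a metric. The frame names below are purely illustrative:

```
test_hello_default;fibonacci;fibonacci 1042
test_hello_default;somefunction 318
```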

Compatibility

This plugin has been tested on Linux, macOS and Windows. Given that it relies on Austin to sample the frame stacks of the pytest process, Austin's compatibility considerations apply to pytest-austin as well.

On Linux, the use of sudo is required, unless the CAP_SYS_PTRACE capability is granted to the Austin binary with, e.g.

sudo setcap cap_sys_ptrace+ep `which austin`

Once the capability has been granted, sudo is no longer required for Austin to attach to and sample pytest.

On macOS, the use of sudo is also mandatory, unless the user invoking pytest belongs to the procmod group.

Contribute

If you like pytest-austin and you find it useful, there are ways for you to contribute.

If you want to help with the development, have a look at the open issues and read the contributing guidelines before opening a pull request.

You can also contribute to the development of pytest-austin by becoming a sponsor, by buying me a coffee on BMC, or by chipping in a few pennies on PayPal.Me.

