DIO-JIT: General-purpose Python JIT
Important:
- DIO-JIT now works for Python >= 3.8. We rely heavily on the LOAD_METHOD bytecode instruction.
- DIO-JIT is not production-ready. A large number of specialisation rules are still required to make DIO-JIT batteries-included.
- This document is mainly provided for prospective developers. Users are not required to write any specialisation rules, which means that users need to learn nothing but `@jit.jit` and `jit.spec_call`.
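For a quick taste, the user-facing workflow looks roughly like this (a minimal sketch built only from those two APIs plus `jit.oftype`; the function names `add1`/`add1_int` are made up for illustration, and the fully worked example appears further below):

```python
import diojit as jit

@jit.jit
def add1(x):
    return x + 1

# specialise `add1` for an int argument; spec_call returns a new callable
add1_int = jit.spec_call(add1, jit.oftype(int))
add1_int(41)  # 42
```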
Benchmark
Item | PY38 | JIT PY38 | PY39 | JIT PY39 |
---|---|---|---|---|
BF | 265.74 | 134.23 | 244.50 | 140.34 |
append3 | 23.94 | 10.70 | 22.29 | 11.21 |
DNA READ | 16.96 | 14.82 | 15.03 | 14.38 |
fib(15) | 11.63 | 1.54 | 10.41 | 1.51 |
hypot(str, str) | 6.19 | 3.87 | 6.53 | 4.29 |
selectsort | 46.95 | 33.88 | 38.71 | 29.49 |
trans | 24.22 | 7.79 | 23.23 | 7.71 |
The benchmark item "DNA READ" does not show a significant performance gain because it heavily uses `bytearray` and `bytes`, whose specialised C APIs are not exposed. In this case, although the JIT can infer the types, we have to fall back to CPython's default behaviour, or even worse: after all, the interpreter can access internal things, while we cannot.
P.S.: DIO-JIT can do very powerful partial evaluation, which is disabled by default, but you can leverage it in your domain-specific tasks. Here is an example achieving a 500x speed-up over pure Python: fibs.py
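To give a flavour of what partial evaluation means here, consider a conceptual sketch in plain Python (an illustration only, not DIO-JIT API): when an argument is known at specialisation time, the whole computation can be folded into a constant.

```python
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

# If `n` is known to be 15 at specialisation time, the specialised function
# can, in the extreme case, be reduced to a constant:
def fib_15():
    return 610  # == fib(15), computed once ahead of time
```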
Install Instructions
Step 1: Install Julia as an in-process native code compiler for DIO-JIT
There are several options for you to install Julia:
- scoop (Windows)
- julialang.org (recommended for Windows users)
- jill.py:
$ pip install jill && jill install 1.6 --upstream Official
- jill (Mac and Linux only!):
$ bash -ci "$(curl -fsSL https://raw.githubusercontent.com/abelsiqueira/jill/master/jill.sh)"
Step 2: Install DIO.jl in Julia
Type `julia` to open the REPL, then:
```
julia> # press ] to enter the Pkg REPL
pkg> add https://github.com/thautwarm/DIO.jl
# press backspace to go back to the Julia REPL
julia> using DIO # precompile
```
Step 3: Install Python Package
$ pip install git+https://github.com/thautwarm/diojit
How to fetch the latest DIO-JIT? (if you have already installed DIO)
$ pip install -U diojit
$ julia -e "using Pkg; Pkg.update(string(:DIO));using DIO"
Usage from the Python side is quite similar to Numba's.
```python
import diojit

# eagerjit: assumes all global references are fixed
@diojit.eagerjit
def fib(a):
    if a <= 2:
        return 1
    return fib(a + -1) + fib(a + -2)

# specialise `fib` for an int argument
jit_fib = diojit.spec_call(fib, diojit.oftype(int))
jit_fib(15)  # 600% faster than pure Python
```
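A rough way to reproduce the comparison on your own machine (a sketch reusing the names defined above; `fib_py` is just a plain-Python reference added here for comparison, and absolute numbers will differ):

```python
import timeit

def fib_py(a):  # plain-Python reference with the same logic
    if a <= 2:
        return 1
    return fib_py(a - 1) + fib_py(a - 2)

print("pure Python:", timeit.timeit(lambda: fib_py(15), number=100_000))
print("jitted     :", timeit.timeit(lambda: jit_fib(15), number=100_000))
```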
It might look strange that the jitted fib above uses `a + -1` and `a + -2`.
Clever observation! And that's the point!
DIO-JIT relies on specialisation rules. We have written one for additions, more specifically for `operator.__add__`: specialisation for `operator.__add__`.
However, due to bandwidth limitations, the rule for `operator.__sub__` is not implemented yet.
(P.S.: why `operator.__add__`.)
Although specialisation is common in the scope of optimisation, unlike many other JIT attempts, DIO-JIT doesn't need to hard-code rules at the compiler level. The DIO-JIT compiler implements the skeleton of abstract interpretation, but concrete rules for specialisation and other inferences can be added within Python itself, in an extensible way!
See an example below.
Contribution Example: Add a specialisation rule for `list.append`
1) Python side:
```python
import diojit as jit
import timeit

jit.create_shape(list, oop=True)

@jit.register(list, attr="append")
def list_append_analysis(self: jit.Judge, *args: jit.AbsVal):
    if len(args) != 2:
        # rollback to CPython's default code
        return NotImplemented
    lst, elt = args
    return jit.CallSpec(
        instance=None,  # return value is not static
        e_call=jit.S(jit.intrinsic("PyList_Append"))(lst, elt),
        possibly_return_types=tuple({jit.S(type(None))}),
    )
```
`jit.intrinsic("PyList_Append")` in the code above refers to an intrinsic provided by the Julia codegen backend.
Usually an intrinsic calls a CPython C API, but it does not have to: whether or not an existing CPython C API is available, we can implement intrinsics in Julia.
2) Julia side: generate the `PyList_Append` calling convention:
```julia
@autoapi PyList_Append(PyPtr, PyPtr)::Cint != Cint(-1) cast(_cint2none) nocastexc
```
As a consequence, an intrinsic function for DIO-JIT is generated automatically. This intrinsic function is capable of handling CPython exceptions and reference counting.
Alternatively, you can do step 2) from the Python side, which might look more intuitive:
```python
import diojit as jit
from diojit.runtime.julia_rt import jl_eval

jl_implemented_intrinsic = """
function PyList_Append(lst::PyPtr, elt::PyPtr)
    if ccall(PyAPI.PyList_Append, Cint, (PyPtr, PyPtr), lst, elt) == -1
        return Py_NULL
    end
    nothing # automatically maps to a Python None
end
DIO.DIO_ExceptCode(::typeof(PyList_Append)) = Py_NULL
"""
jl_eval(jl_implemented_intrinsic)
```
You immediately get a >100% speed-up:
```python
@jit.jit
def append3(xs, x):
    xs.append(x)
    xs.append(x)
    xs.append(x)

jit_append3 = jit.spec_call(append3, jit.oftype(list), jit.Top)  # 'Top' means 'Any'

xs = [1]
jit_append3(xs, 3)
print("test jit_append3, [1] append 3 for 3 times:", xs)
# test jit func, [1] append 3 for 3 times: [1, 3, 3, 3]

xs = []
%timeit append3(xs, 1)
# 293 ns ± 26.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

xs = []
%timeit jit_append3(xs, 1)
# 142 ns ± 14.9 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
```
Why Julia?
We don't want to maintain a C compiler, and shelling out to `gcc` or other external compilers would introduce cross-process IO, which is slow.
We prefer compiling JITed code with LLVM, and Julia is quite a killer tool for this use case.
Current Limitations
- Support for `*varargs` and `**kwargs` is not ready: we could support them right away, but the JIT performance gain would be very small, and for backward-compatibility reasons we decided not to do this yet.
- Exception handling is not yet supported inside JIT functions.
  Why? We haven't implemented the translation from exception-handling bytecode into untyped DIO IR (`jit.absint.abs.In_Stmt`).
  Will it be supported? Yes. In fact, a call site in any JIT function can already raise an exception; it is just not handled by the JIT function itself, but lifted up to the root call, which is a pure Python call. Exception handling will be supported once we put effort into translating CPython's exception-handling bytecode into untyped DIO IR (`jit.absint.abs.In_Stmt`).
  P.S.: This will be finished simultaneously with the support for `for` loops.
- Support for `for` loops is missing.
  Why? Firstly, in CPython, `for` loops rely on exception handling, which is not supported yet. Secondly, we're considering a fast path for `for` loops, perhaps by proposing a `__citer__` protocol for faster iteration in JIT functions, which requires discussion with the Python developers.
  Will it be supported? Yes. This will be finished simultaneously with support for exception handling (the faster `for` loop might come later).
- Closure support is missing.
  Why? In imperative languages, closures use cell structures to make free/cell variables mutable. However, a writable cell makes optimisation hard in a dynamic language. We recommend using `types.MethodType` to create immutable closures, which can be highly optimised in DIO-JIT (in the near future):
  ```python
  import types

  def f(freevars, z):
      x, y = freevars
      return x + y + z

  def hof(x, y):
      # (x, y) is bound immutably, instead of via a writable closure cell
      return types.MethodType(f, (x, y))
  ```
  Will it be supported? Still yes. However, don't expect much performance gain for Python's vanilla closures.
- Specifying fixed global references (e.g. `@diojit.jit(fixed_references=['isinstance', 'str', ...])`) is too annoying?
  Sorry, you have to. We are thinking about the possibility of an automatic JIT covering all existing CPython code, but the biggest impediment is volatile global variables. You might use `@eagerjit` instead, but in that case you must be careful to keep global variables unchanged (see the sketch after this list).
  Possibility? Recently we found that CPython's newly (:)) added feature `Dict.ma_version_tag` might be used to automatically notify JITed functions to re-compile when global references change. More research is required.
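For example, a minimal sketch of pinning global references (the function body and the names `stringify`/`stringify_any` are hypothetical; only `fixed_references`, `spec_call`, `oftype`-style specs and `Top` come from the examples above):

```python
import diojit

# `isinstance` and `str` are declared as fixed global references, so the JIT
# may specialise against them instead of re-reading the globals on every call.
@diojit.jit(fixed_references=['isinstance', 'str'])
def stringify(x):
    if isinstance(x, str):
        return x
    return str(x)

stringify_any = diojit.spec_call(stringify, diojit.Top)  # 'Top' means 'Any'
stringify_any(42)  # "42"
```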
Contributions
- Add more prescribed specialisation rules at `jit.absint.prescr`.
- TODO
Benchmarks
Check the `benchmarks` directory.