whisper-turbo
Cross-Platform, GPU Accelerated Whisper
laserbeak
Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU
embd
GPU accelerated client-side embeddings for vector search, RAG, etc.
wgpu-mm
steelix
Your one-stop CLI for ONNX model analysis.
rustDTW
Python extension backed by a multi-threaded Rust implementation of Dynamic Time Warping (DTW).
deCoreML
Find out why your CoreML model isn't running on the Neural Engine!
wgpu-bench
wasm-boids
A Three.JS + WASM implementation of Boids.
wgpu-tensor
inline-wgsl
modcomp
Can you compress ML models with traditional compression? Answer: No.
stream_to_asynciter
ogs_repro
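The embd project above centers on embeddings for vector search. The core lookup it accelerates on the GPU can be illustrated with a brute-force cosine-similarity search; the corpus, vectors, and function names below are invented for this sketch and are not embd's API.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def nearest(query, corpus):
    """Return the (key, score) of the corpus vector most similar to query."""
    return max(((k, cosine(query, v)) for k, v in corpus.items()),
               key=lambda kv: kv[1])

# Toy 3-dimensional "embeddings"; real embedding vectors have hundreds
# of dimensions, which is why GPU acceleration pays off.
corpus = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}
print(nearest([0.9, 0.1, 0.0], corpus))
```

A production vector-search path replaces the Python loop with a single batched matrix-vector product over all corpus embeddings, which is exactly the shape of work a GPU does well.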
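The rustDTW entry above refers to Dynamic Time Warping, which aligns two sequences of different lengths by a dynamic-programming recurrence. A minimal pure-Python sketch of the classic algorithm follows; it is an illustration of the technique, not rustDTW's multi-threaded Rust implementation or its Python API.

```python
def dtw_distance(a, b):
    """Return the DTW alignment cost between 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = cheapest alignment of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeats
                                 cost[i][j - 1],      # b[j-1] repeats
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: aligns despite lengths
```

The O(n*m) table is why a multi-threaded native implementation matters for long time series.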
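modcomp's "Answer: No" can be seen in miniature: trained weight tensors behave like high-entropy data, so a general-purpose compressor barely shrinks them. The sketch below stands in for a weight blob with random bytes (an assumption for the example, since both are close to incompressible) and measures what zlib achieves.

```python
import os
import zlib

weights = os.urandom(1_000_000)        # stand-in for a ~1 MB weight blob
packed = zlib.compress(weights, level=9)

ratio = len(packed) / len(weights)
print(f"compressed to {ratio:.3f}x of original size")  # close to 1.0
```

Incompressible input forces deflate to emit stored blocks, so the "compressed" stream can even be slightly larger than the input.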
FL33TW00D