- ipex-llm: Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., a local PC with iGPU, or a discrete GPU such as Arc, Flex, or Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
- BigDL-2.x: BigDL: Distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray
- BigDL-Tutorials: Step-by-step Deep Learning tutorials on Apache Spark using BigDL
- ipex-llm-tutorial: Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
- BigDL-core: Core hardware bindings and optimizations for BigDL
- BigDL-trainings: Training materials for BigDL
- analytics-zoo: Distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray
- ait2019
- zoo-tutorials: Tutorials for Analytics Zoo
- OreillyAI2019: Notebooks and tutorials for O'Reilly AI San Jose 2019
- Chronos-workshop
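The "low-bit (FP4 / INT4 / FP8 / INT8) optimizations" mentioned for ipex-llm-tutorial refer to storing model weights in a few bits each, with a shared floating-point scale per group of weights. The sketch below is a toy illustration of that idea (symmetric per-group INT4 quantization) in plain NumPy; the function names and group size are hypothetical and this is not ipex-llm's actual kernel.

```python
import numpy as np

def quantize_int4(w, group_size=4):
    """Toy symmetric per-group INT4 quantization (not ipex-llm's kernel).

    Each group of `group_size` weights shares one FP32 scale; values are
    mapped to integers in [-8, 7] (the signed 4-bit range) and recovered
    by multiplying back by the scale."""
    w = np.asarray(w, dtype=np.float32).reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate FP32 weights from 4-bit codes and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.array([0.1, -0.7, 0.35, 0.02, 1.2, -0.4, 0.9, -1.1], dtype=np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print(float(np.max(np.abs(w - w_hat))))  # small per-weight reconstruction error
```

Storing 4-bit codes plus one scale per group cuts weight memory roughly 4x versus FP16, which is the main lever behind the speedups these libraries target on memory-bound LLM inference.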