LlamaChat

[Banner: Chat with your favourite LLaMA models, right on your Mac]


LlamaChat is a macOS app that lets you chat with LLaMA, Alpaca and GPT4All models, all running locally on your Mac.

🚀 Getting Started

LlamaChat requires macOS 13 Ventura and either an Intel or Apple Silicon processor.

Direct Download

Download a .dmg containing the latest version 👉 here 👈.

Building from Source

git clone https://github.com/alexrozanski/LlamaChat.git
cd LlamaChat
open LlamaChat.xcodeproj

NOTE: LlamaChat includes Sparkle for auto-updates, which will fail to load if LlamaChat is not signed. Ensure that you use a valid signing certificate when building and running LlamaChat.
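
For orientation, the sketch below shows how Sparkle 2's SPUStandardUpdaterController is typically started in a SwiftUI app. The app structure and names here are illustrative assumptions for this sketch, not LlamaChat's actual integration:

import SwiftUI
import Sparkle

// Minimal placeholder view, just to keep the sketch self-contained.
struct ContentView: View {
    var body: some View { Text("LlamaChat") }
}

@main
struct LlamaChatApp: App {
    // Sparkle's standard updater; per the note above, Sparkle fails to
    // load if the app is not signed with a valid certificate.
    private let updaterController = SPUStandardUpdaterController(
        startingUpdater: true,
        updaterDelegate: nil,
        userDriverDelegate: nil
    )

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .commands {
            // Expose a manual update check in the app menu.
            CommandGroup(after: .appInfo) {
                Button("Check for Updates…") {
                    updaterController.updater.checkForUpdates()
                }
            }
        }
    }
}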

NOTE: model inference runs much more slowly in Debug builds, so if you're building from source, make sure the Build Configuration under LlamaChat > Edit Scheme... > Run is set to Release.

✨ Features

  • Supported Models: LlamaChat supports LLaMA, Alpaca and GPT4All models out of the box. Support for other models, including Vicuna and Koala, is coming soon. We are also looking for Chinese and French speakers to add support for Chinese LLaMA/Alpaca and Vigogne.
  • Flexible Model Formats: LlamaChat is built on top of llama.cpp and llama.swift. The app supports adding LLaMA models either as raw .pth PyTorch checkpoints or in the .ggml format.
  • Model Conversion: If raw PyTorch checkpoints are added, these can be converted within the app to .ggml files compatible with LlamaChat and llama.cpp.
  • Chat History: Chat history is persisted within the app. Both chat history and model context can be cleared at any time.
  • Funky Avatars: LlamaChat ships with 7 funky avatars that can be used with your chat sources.
  • Advanced Source Naming: LlamaChat uses Special Magic™ to generate playful names for your chat sources.
  • Context Debugging: For keen ML enthusiasts, the current model context for a chat can be viewed in the info popover.

🔮 Models

NOTE: LlamaChat doesn't ship with any model files; you must obtain these from their respective sources in accordance with the applicable terms and conditions.

  • Model formats: LlamaChat allows you to use the LLaMA family of models in either their raw PyTorch checkpoint form (.pth) or as pre-converted .ggml files (the format used by llama.cpp, which powers LlamaChat).
  • Using LLaMA models: When importing LLaMA models in the .pth format:
    • You should select the appropriate parameter size directory (e.g. 7B, 13B, etc.) in the conversion flow; this directory contains the consolidated.NN.pth and params.json files.
    • As per the LLaMA model release, the parent directory should contain tokenizer.model. E.g. to use the LLaMA-13B model, your model directory should look something like the following, and you should select the 13B directory:
.
│   ...
├── 13B
│   ├── checklist.chk
│   ├── consolidated.00.pth
│   ├── consolidated.01.pth
│   └── params.json
│   ...
└── tokenizer.model

👩‍💻 Contributing

Pull Requests and Issues are welcome and much appreciated. Please make sure to adhere to the Code of Conduct at all times.

LlamaChat is fully built using Swift and SwiftUI, and makes use of llama.swift under the hood to run inference and perform model operations.

The project is mostly built using MVVM and makes heavy use of Combine and Swift Concurrency.
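
As a flavour of that architecture, here is a stripped-down view model in the same MVVM + Combine + Swift Concurrency style. The ChatViewModel and ModelSession names are hypothetical stand-ins invented for this sketch (llama.swift plays the inference role in the real app), not types from LlamaChat's source:

import Combine
import Foundation

// Hypothetical stand-in for the inference layer.
protocol ModelSession {
    func predict(_ prompt: String) async throws -> String
}

// A hypothetical chat view model in the MVVM style the project uses.
@MainActor
final class ChatViewModel: ObservableObject {
    @Published var draft = ""                           // bound to the input field
    @Published private(set) var messages: [String] = []
    @Published private(set) var canSend = false

    private let session: ModelSession

    init(session: ModelSession) {
        self.session = session
        // Combine: derive the send button's enabled state from the draft text.
        $draft
            .map { !$0.trimmingCharacters(in: .whitespaces).isEmpty }
            .assign(to: &$canSend)
    }

    func send() {
        let prompt = draft
        draft = ""
        messages.append("You: \(prompt)")
        // Swift Concurrency: the Task inherits the main actor, so the
        // published properties are only ever mutated on the main thread.
        Task {
            do {
                let reply = try await session.predict(prompt)
                messages.append("Model: \(reply)")
            } catch {
                messages.append("Error: \(error.localizedDescription)")
            }
        }
    }
}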

⚖️ License

LlamaChat is licensed under the MIT license.