

Digital Avatar Conversational System - Linly-Talker —— "Digital Persona Interaction: Interact with Your Virtual Self"

Linly-Talker WebUI


English | 中文简体

2023.12 Update 📆

Users can upload any image for the conversation

2024.01 Update 📆📆

  • Exciting news! I've now incorporated both the powerful GeminiPro and Qwen large models into our conversational scene. Users can now upload images during the conversation, adding a whole new dimension to the interactions.
  • The deployment invocation method for FastAPI has been updated.
  • The advanced settings options for Microsoft TTS have been updated, increasing the variety of voice types. Additionally, video subtitles have been introduced to enhance visualization.
  • Updated the GPT multi-turn conversation system to establish contextual connections in dialogue, enhancing the interactivity and realism of the digital persona.

2024.02 Update 📆

  • Updated Gradio to the latest version, 4.16.0, giving the interface additional functionality such as capturing images from the camera to create digital personas.
  • ASR and THG have been updated. FunASR from Alibaba has been integrated into ASR, enhancing its speed significantly. Additionally, the THG section now incorporates the Wav2Lip model, while ER-NeRF is currently in preparation (Coming Soon).
  • I have incorporated the GPT-SoVITS model, which is a voice cloning method. By fine-tuning it with just one minute of a person's speech data, it can effectively clone their voice. The results are quite impressive and worth recommending.
  • I have integrated a web user interface (WebUI) that allows for better execution of Linly-Talker.

Introduction

Linly-Talker is an intelligent AI system that combines large language models (LLMs) with visual models to create a novel human-AI interaction method. It integrates various technologies like Whisper, Linly, Microsoft Speech Services and SadTalker talking head generation system. The system is deployed on Gradio to allow users to converse with an AI assistant by providing images as prompts. Users can have free-form conversations or generate content according to their preferences.

The system architecture of multimodal human–computer interaction.

You can watch the demo video here.

TO DO LIST

  • Completed the basic conversation system flow, capable of voice interactions.
  • Integrated the LLM large model, including the usage of Linly, Qwen, and GeminiPro.
  • Enabled the ability to upload any digital person's photo for conversation.
  • Integrated FastAPI invocation for Linly.
  • Utilized Microsoft TTS with advanced options, allowing customization of voice and tone parameters to enhance audio diversity.
  • Added subtitles to video generation for improved visualization.
  • GPT Multi-turn Dialogue System (Enhance the interactivity and realism of digital entities, bolstering their intelligence)
  • Optimized the Gradio interface by incorporating additional models such as Wav2Lip, FunASR, and others.
  • Voice Cloning Technology (Synthesize one's own voice using voice cloning to enhance the realism and interactive experience of digital entities)
  • Integrate the Langchain framework and establish a local knowledge base.
  • Real-time Speech Recognition (Enable conversation and communication between humans and digital entities using voice)

🔆 The Linly-Talker project is ongoing - pull requests are welcome! If you have any suggestions regarding new model approaches, research, techniques, or if you discover any runtime errors, please feel free to edit and submit a pull request. You can also open an issue or contact me directly via email. 📩⭐ If you find this repository useful, please give it a star! 🤩

Example

| Text/Voice Dialogue | Digital Human Answer |
|---|---|
| What is the most effective way to cope with stress? | example_answer1.mp4 |
| How do I manage my time? | example_answer2.mp4 |
| Write a review of a symphony concert, discussing the orchestra's performance and the overall audience experience. | example_answer3.mp4 |
| Translate into Chinese: Luck is a dividend of sweat. The more you sweat, the luckier you get. | example_answer4.mp4 |

Setup Environment

To install the environment using Anaconda and PyTorch, follow the steps below:

conda create -n linly python=3.10
conda activate linly

# PyTorch Installation Method 1: Conda Installation (Recommended)
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# PyTorch Installation Method 2: Pip Installation
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install -q ffmpeg # ffmpeg==4.2.2

pip install -r requirements_app.txt

If you want to use features such as voice cloning, you will need a newer version of PyTorch, which also enables more functionality. In that case, use CUDA 11.8 as the driver version:

conda create -n linly python=3.10  
conda activate linly

pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118

conda install -q ffmpeg # ffmpeg==4.2.2

pip install -r requirements_app.txt

# Install dependencies for voice cloning
pip install -r VITS/requirements_gptsovits.txt

Next, you need to install the corresponding models. You can download them using the following methods. Once downloaded, place the files in the specified folder structure (explained at the end of this document).

# Download pre-trained models from Hugging Face
git lfs install
git clone https://huggingface.co/Kedreamix/Linly-Talker

# Move all models to the current directory
# Checkpoint contains SadTalker and Wav2Lip
mv Linly-Talker/checkpoints/* ./checkpoints/

# Enhanced GFPGAN for SadTalker
# pip install gfpgan
# mv Linly-Talker/gfpgan ./

# Voice cloning models
mv Linly-Talker/GPT_SoVITS/pretrained_models/ ./GPT_SoVITS/pretrained_models/

# Qwen large model
mv Linly-Talker/Qwen ./

For convenience of deployment and usage, a configs.py file is provided. You can modify the hyperparameters in this file for customization:

# Device running port
port = 7870
# API running port and IP
ip = '127.0.0.1'
api_port = 7871
# Linly model mode and path
# mode = 'api' requires running Linly-api-fast.py first
mode = 'offline'
model_path = 'Linly-AI/Chinese-LLaMA-2-7B-hf'
# SSL certificate, required for microphone dialogue
# Absolute paths are preferred
ssl_certfile = "./https_cert/cert.pem"
ssl_keyfile = "./https_cert/key.pem"

This file allows you to adjust parameters such as the device running port, API running port, Linly model path, and SSL certificate paths for ease of deployment and configuration.

ASR - Speech Recognition

For detailed information about the usage and code implementation of Automatic Speech Recognition (ASR), please refer to ASR - Bridging the Gap with Digital Humans.

Whisper

To implement ASR (Automatic Speech Recognition) using OpenAI's Whisper, you can refer to the specific usage methods provided in the GitHub repository: https://github.com/openai/whisper
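
To give a sense of the API, here is a minimal sketch of transcription with Whisper (the model size and audio path are illustrative):

import whisper

# Load a pre-trained Whisper model; "base" is a small, fast option
model = whisper.load_model("base")

# Transcribe an audio file; Whisper auto-detects the language by default
result = model.transcribe("question_audio.wav")
print(result["text"])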

FunASR

Alibaba's FunASR delivers impressive speech recognition performance and is noticeably better than Whisper for Chinese. It also supports real-time recognition, making it a great choice. You can try FunASR via the FunASR file in the ASR folder. Please refer to https://github.com/alibaba-damo-academy/FunASR for more information.
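
As a sketch, FunASR's AutoModel interface can be used roughly as follows (assuming a recent funasr package; the model name and audio path are illustrative):

from funasr import AutoModel

# Paraformer is FunASR's high-accuracy Chinese ASR model
model = AutoModel(model="paraformer-zh")

# generate() returns a list of result dictionaries
res = model.generate(input="question_audio.wav")
print(res[0]["text"])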

TTS - Edge TTS

For detailed information about the usage and code implementation of Text-to-Speech (TTS), please refer to TTS - Empowering Digital Humans with Natural Speech Interaction.

To use Microsoft Edge's online text-to-speech service from Python, without needing Microsoft Edge, Windows, or an API key, see the GitHub repository at https://github.com/rany2/edge-tts. It provides a Python module called edge-tts; detailed installation instructions and usage examples are in the repository's README.
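
As a minimal sketch (the voice name, rate option, and output path are illustrative; run edge-tts --list-voices to see all voices):

import asyncio
import edge_tts

async def main():
    # Choose a voice and optionally adjust advanced options such as rate
    communicate = edge_tts.Communicate(
        "Hello, I am your digital persona.",
        "en-US-AriaNeural",
        rate="+10%",
    )
    await communicate.save("answer.mp3")

asyncio.run(main())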

Voice Clone

For detailed information about the usage and code implementation of Voice Clone, please refer to Voice Clone - Stealing Your Voice Quietly During Conversations.

GPT-SoVITS (Recommended)

Thanks to the authors for open-sourcing their work. The GPT-SoVITS voice cloning model is quite impressive: by fine-tuning it with just one minute of a person's speech data, it can effectively clone their voice. You can find the project at https://github.com/RVC-Boss/GPT-SoVITS.

XTTS

Coqui XTTS is a leading deep learning toolkit for Text-to-Speech (TTS) tasks, allowing for voice cloning and voice transfer to different languages using a 5-second or longer audio clip.

🐸 TTS is a library for advanced text-to-speech generation.

🚀 Over 1100 pre-trained models for various languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utility programs for dataset analysis and management.
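
As a sketch of voice cloning with XTTS (the model name follows Coqui's model registry; the reference clip and output paths are illustrative):

from TTS.api import TTS

# Load the multilingual XTTS v2 model
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice from a short reference clip and synthesize new speech
tts.tts_to_file(
    text="Hello, this is my cloned voice.",
    speaker_wav="reference_clip.wav",  # a 5-second or longer clip of the target speaker
    language="en",
    file_path="cloned_output.wav",
)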

THG - Avatar

Detailed information about the usage and code implementation of digital human generation can be found in THG - Building Intelligent Digital Humans.

SadTalker

Digital persona generation can utilize SadTalker (CVPR 2023). For detailed information, please visit https://sadtalker.github.io.

Before usage, download the SadTalker model:

bash scripts/sadtalker_download_models.sh  

Baidu (百度云盘) (Password: linl)

If you download from Baidu Cloud, remember to place the files in the checkpoints folder. The folder downloaded from Baidu Cloud is named sadtalker by default; rename it to checkpoints.

Wav2Lip

Digital persona generation can also utilize Wav2Lip (ACM 2020). For detailed information, refer to https://github.com/Rudrabha/Wav2Lip.

Before usage, download the Wav2Lip model:

| Model | Description | Link to the model |
|---|---|---|
| Wav2Lip | Highly accurate lip-sync | Link |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | Link |
| Expert Discriminator | Weights of the expert discriminator | Link |
| Visual Quality Discriminator | Weights of the visual quality discriminator trained in a GAN setup | Link |

ER-NeRF (Coming Soon)

ER-NeRF (ICCV 2023) is a digital human built using the latest NeRF technology. It allows for the customization of digital characters and can reconstruct them using just a five-minute video of a person. For more details, please refer to https://github.com/Fictionarry/ER-NeRF.

Further updates will be provided regarding this.

LLM - Conversation

For detailed information about the usage and code implementation of Large Language Models (LLM), please refer to LLM - Empowering Digital Humans with Powerful Language Models.

Linly-AI

Linly-AI is a large language model developed by CVI at Shenzhen University. You can find more information about Linly-AI in its GitHub repository: https://github.com/CVI-SZU/Linly

Download Linly models: https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf

You can use git to download:

git lfs install
git clone https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf

Alternatively, you can use the huggingface download tool huggingface-cli:

pip install -U huggingface_hub

# Set up mirror acceleration
# Linux
export HF_ENDPOINT="https://hf-mirror.com"
# Windows PowerShell
$env:HF_ENDPOINT="https://hf-mirror.com"

huggingface-cli download --resume-download Linly-AI/Chinese-LLaMA-2-7B-hf --local-dir Linly-AI/Chinese-LLaMA-2-7B-hf

Qwen

Qwen is an AI model developed by Alibaba Cloud. You can check out the GitHub repository for Qwen here: https://github.com/QwenLM/Qwen

If you want to get started with Qwen quickly, you can choose the 1.8B model, which has few parameters and runs smoothly even with limited GPU memory. Of course, it can be replaced with other options.

You can download the Qwen 1.8B model from this link: https://huggingface.co/Qwen/Qwen-1_8B-Chat

You can use git to download:

git lfs install
git clone https://huggingface.co/Qwen/Qwen-1_8B-Chat

Alternatively, you can use the huggingface download tool huggingface-cli:

pip install -U huggingface_hub

# Set up mirror acceleration
# Linux
export HF_ENDPOINT="https://hf-mirror.com"
# Windows PowerShell
$env:HF_ENDPOINT="https://hf-mirror.com"

huggingface-cli download --resume-download Qwen/Qwen-1_8B-Chat --local-dir Qwen/Qwen-1_8B-Chat
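
Once downloaded, the model can be loaded with Transformers. A minimal sketch using Qwen's chat interface (trust_remote_code is required because Qwen ships custom modeling code):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat", device_map="auto", trust_remote_code=True
).eval()

# chat() returns the reply and an updated history for multi-turn dialogue
response, history = model.chat(tokenizer, "How should I manage my time?", history=None)
print(response)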

Gemini-Pro

Gemini-Pro is an AI model developed by Google. To learn more about Gemini-Pro, you can visit their website: https://deepmind.google/technologies/gemini/

If you want to request an API key for Gemini-Pro, you can visit this link: https://makersuite.google.com/
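
As a minimal sketch of calling Gemini-Pro from Python with the google-generativeai package (the API key is a placeholder):

import google.generativeai as genai

genai.configure(api_key="your-api-key")
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("What is the most effective way to cope with stress?")
print(response.text)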

LLM Model Selection

In the app.py file, you can easily tailor your model choice:

# Uncomment and set up the model of your choice:

# llm = LLM(mode='offline').init_model('Linly', 'Linly-AI/Chinese-LLaMA-2-7B-hf')
# llm = LLM(mode='offline').init_model('Gemini', 'gemini-pro', api_key = "your api key")
# llm = LLM(mode='offline').init_model('Qwen', 'Qwen/Qwen-1_8B-Chat')

# Manual download with a specific path
llm = LLM(mode=mode).init_model('Qwen', model_path)

Optimizations

Some optimizations:

  • Use fixed input face images and extract features beforehand to avoid re-reading them each time
  • Remove unnecessary libraries to reduce total time
  • Save only the final video output rather than intermediate results to improve performance
  • Use OpenCV to generate the final video instead of mimwrite for a faster runtime (see the sketch below)
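
To illustrate the last point, here is a minimal sketch of assembling a video with OpenCV's VideoWriter (the frame list, fps, and output path are illustrative):

import cv2

def write_video(frames, path="output.mp4", fps=25):
    # frames: a list of same-sized BGR numpy arrays
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()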

Gradio

Gradio is a Python library that provides an easy way to deploy machine learning models as interactive web apps.

For Linly-Talker, Gradio serves two main purposes:

  1. Visualization & Demo: Gradio provides a simple web GUI for the model, allowing users to see the results intuitively by uploading an image and entering text. This is an effective way to showcase the capabilities of the system.

  2. User Interaction: The Gradio GUI can serve as a frontend to allow end users to interact with Linly-Talker. Users can upload their own images and ask arbitrary questions or have conversations to get real-time responses. This provides a more natural speech interaction method.

Specifically, we create a Gradio Interface in app.py that takes image and text inputs, calls our function to generate the response video, and displays it in the GUI. This enables browser-based interaction without needing to build a complex frontend.

In summary, Gradio provides visualization and user interaction interfaces for Linly-Talker, serving as effective means for showcasing system capabilities and enabling end users.
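
A minimal sketch of such an Interface (the handler body is a placeholder for the actual ASR/LLM/TTS/THG pipeline; the port matches configs.py):

import gradio as gr

def generate_response_video(image_path, question):
    # Placeholder: the real pipeline would run speech recognition, the LLM,
    # TTS, and talking-head generation, then return the rendered video path
    output_video_path = "answer.mp4"
    return output_video_path

demo = gr.Interface(
    fn=generate_response_video,
    inputs=[gr.Image(type="filepath"), gr.Textbox(label="Question")],
    outputs=gr.Video(),
)
demo.launch(server_port=7870)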

Start WebUI

Previously, I had separated many versions, but it became cumbersome to run multiple versions. Therefore, I have added a WebUI feature to provide a single interface for a seamless experience. I will continue to update it in the future.

The current features available in the WebUI are as follows:

  • Text/Voice-based dialogue with virtual characters (fixed characters with male and female roles)
  • Dialogue with virtual characters using any image (you can upload any character image)
  • Multi-turn GPT dialogue (incorporating historical dialogue data to maintain context)
  • Voice cloning dialogue (voice cloning based on GPT-SoVITS settings, including a built-in smoky voice that can be used as the cloned voice for the dialogue)

# WebUI
python webui.py

There are three modes for the current startup, and you can choose a specific setting based on the scenario.

The first mode involves fixed Q&A with a predefined character, eliminating preprocessing time.

python app.py

The first mode has recently been updated to include the Wav2Lip model for dialogue.

python appv2.py

The second mode allows for conversing with any uploaded image.

python app_img.py

The third mode builds upon the first one by incorporating a large language model for multi-turn GPT conversations.

python app_multi.py

Voice cloning support has now been added, allowing you to switch freely between cloned voice models and the corresponding character images. Here, I chose a deep, smoky voice and an image of a male character.

python app_vits.py

Folder structure

The folder structure of the weight files is as follows:

  • Baidu (百度云盘): You can download the weights from here (Password: linl).
  • huggingface: You can access the weights at this link.
  • modelscope: The weights will be available soon at this link.

Linly-Talker/
├── checkpoints
│   ├── hub
│   │   └── checkpoints
│   │       └── s3fd-619a316812.pth
│   ├── lipsync_expert.pth
│   ├── mapping_00109-model.pth.tar
│   ├── mapping_00229-model.pth.tar
│   ├── SadTalker_V0.0.2_256.safetensors
│   ├── visual_quality_disc.pth
│   ├── wav2lip_gan.pth
│   └── wav2lip.pth
├── gfpgan
│   └── weights
│       ├── alignment_WFLW_4HG.pth
│       └── detection_Resnet50_Final.pth
├── GPT_SoVITS
│   └── pretrained_models
│       ├── chinese-hubert-base
│       │   ├── config.json
│       │   ├── preprocessor_config.json
│       │   └── pytorch_model.bin
│       ├── chinese-roberta-wwm-ext-large
│       │   ├── config.json
│       │   ├── pytorch_model.bin
│       │   └── tokenizer.json
│       ├── README.md
│       ├── s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
│       ├── s2D488k.pth
│       ├── s2G488k.pth
│       └── speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
├── Qwen
│   └── Qwen-1_8B-Chat
│       ├── assets
│       │   ├── logo.jpg
│       │   ├── qwen_tokenizer.png
│       │   ├── react_showcase_001.png
│       │   ├── react_showcase_002.png
│       │   └── wechat.png
│       ├── cache_autogptq_cuda_256.cpp
│       ├── cache_autogptq_cuda_kernel_256.cu
│       ├── config.json
│       ├── configuration_qwen.py
│       ├── cpp_kernels.py
│       ├── examples
│       │   └── react_prompt.md
│       ├── generation_config.json
│       ├── LICENSE
│       ├── model-00001-of-00002.safetensors
│       ├── model-00002-of-00002.safetensors
│       ├── modeling_qwen.py
│       ├── model.safetensors.index.json
│       ├── NOTICE
│       ├── qwen_generation_utils.py
│       ├── qwen.tiktoken
│       ├── README.md
│       ├── tokenization_qwen.py
│       └── tokenizer_config.json
└── README.md

Reference

ASR

  • Whisper: https://github.com/openai/whisper
  • FunASR: https://github.com/alibaba-damo-academy/FunASR

TTS

  • Edge TTS: https://github.com/rany2/edge-tts

LLM

  • Linly-AI: https://github.com/CVI-SZU/Linly
  • Qwen: https://github.com/QwenLM/Qwen
  • Gemini-Pro: https://deepmind.google/technologies/gemini/

THG

  • SadTalker: https://sadtalker.github.io
  • Wav2Lip: https://github.com/Rudrabha/Wav2Lip
  • ER-NeRF: https://github.com/Fictionarry/ER-NeRF

Voice Clone

  • GPT-SoVITS: https://github.com/RVC-Boss/GPT-SoVITS
  • Coqui XTTS: https://github.com/coqui-ai/TTS

Star History

Star History Chart

More Repositories

1. Awesome-Talking-Head-Synthesis — 💬 An extensive collection of exceptional resources dedicated to the captivating world of talking face synthesis! ⭐ If you find this repo useful, please give it a star! 🤩 (622 stars)

2. Pytorch-Image-Classification — Image classification with PyTorch, covering many model families such as AlexNet, VGG, GoogLeNet, ResNet, and DenseNet, with fully runnable code. Colab notebooks are provided so the code can be run online, and the models can be transferred to your own dataset via transfer learning. (Jupyter Notebook, 179 stars)

3. PaddleAvatar — Have you ever imagined interacting with a virtual version of yourself? With PaddleAvatar, you can turn your own image, audio, and video into a realistic digital-human video and interact with it. PaddleAvatar is a digital-human generation tool built on the PaddlePaddle deep learning framework and many of its toolkits. It also supports further development, such as using natural language processing to turn the digital-human video into a complete human-computer interaction system, so you can hold real conversations with your virtual self. The resulting videos can be used in many settings, such as games, education, and virtual reality. PaddleAvatar offers a digital world for free creation, giving your imagination full rein! (Jupyter Notebook, 156 stars)

4. YoloGesture — A computer-vision gesture-recognition control system based on YOLO, deployed with Streamlit. Try it online at https://kedreamix-yologesture.streamlit.app or on Hugging Face at https://huggingface.co/spaces/Kedreamix/YoloGesture. The approach also transfers to other datasets for object detection and deployment. (Python, 88 stars)

5. MAE-for-CIFAR — MAE for CIFAR. With limited resources, the model is tested only on CIFAR-10. The goal is to reproduce the result that pre-training ViT with MAE yields better results than supervised training with labels alone, as evidence that self-supervised learning is more data-efficient than supervised learning. (Jupyter Notebook, 49 stars)

6. ChatPaperFree (Python, 22 stars)

7. Image-Web-App-using-Streamlit — Image web app using Streamlit. (Python, 11 stars)

8. GAN_Step_By_Step — GAN Step By Step (GSBS): as the name suggests, learning GANs one step at a time. GANs (generative adversarial networks) are a popular unsupervised approach of recent years that can generate highly realistic photos, images, and even video. Since 2014, GANs have played an ever-larger role in computer vision, producing a wide variety of results every year, with rich theory and rapid development. Because Chinese-language GAN theory and implementations are relatively scarce, this repo offers tutorials and implementations in the mainstream frameworks PyTorch, TensorFlow, and Keras. At a 2016 workshop, Yann LeCun described generative adversarial networks as "the coolest idea in machine learning in the last twenty years". (Python, 9 stars)

9. Deep_learning — A record of my step-by-step journey through machine learning; I will keep working at it. (Jupyter Notebook, 5 stars)

10. YOLO-Object-Detection — Integrates multiple YOLO models as a template for object detection. (Python, 5 stars)

11. eat_tensorflow2_in_30_days — "Eat that TensorFlow 2.0 in 30 days." (3 stars)

12. weibo_monitor — weibo_monitor. (Python, 1 star)

13. Deep-Learning-Paper-Reading — Deep learning paper skimming notes. (1 star)

14. Kedreamix.github.io (1 star)

15. PigBACSeg — Computer-vision detection of pig carcass backfat thickness, based on UNet. (1 star)

16. Kedreamix (1 star)

17. MyBlog (Python, 1 star)