[SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
lipsync is a simple, actively maintained Python library for lip synchronization, built on Wav2Lip. It synchronizes lips in videos and images to a provided audio track, runs on CPU or CUDA, and uses caching to speed up repeated processing.
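A minimal usage sketch of that library, assuming it exposes a Wav2Lip-style LipSync class; the exact constructor arguments, paths, and method names below are assumptions and may differ from the package's actual API:

```python
from lipsync import LipSync  # assumed import path; check the package docs

# Assumed constructor arguments, modelled on typical Wav2Lip wrappers:
# model name, pretrained checkpoint, device selection, and an on-disk cache.
lip = LipSync(
    model='wav2lip',
    checkpoint_path='weights/wav2lip.pth',  # hypothetical local path to the weights
    device='cuda',                          # or 'cpu'
    cache_dir='cache',                      # reuse cached results on repeated runs
)

# Assumed sync() signature: source video (or image), driving audio, output file.
lip.sync('face.mp4', 'speech.wav', 'result.mp4')
```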
A package for simple, expressive, and customizable text-to-speech with an animated face.
End-to-end speech-to-speech translation pipeline with voice cloning (RVC) and automatic lip-sync (Wav2Lip).
A small lip-syncing demo, i.e. synchronizing an audio file with a video file, in which the model matches the lip movements of the characters in the given video to the corresponding audio.
Advanced AI Assistant powered by Python (Backend) and React Three Fiber (Frontend). Includes Object Detection (YOLOv8), Lip-Synced 3D Avatar (WebSockets), Neural TTS, and System Automation