FrameDiffusion
A frame2frame, video2video video editor based on Stable Diffusion
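At its simplest, frame2frame editing means running an image editor (in this project, a Stable Diffusion img2img step) over each decoded frame, then blending consecutive edited frames to reduce flicker. The sketch below shows only that control flow with a stand-in `edit_frame` callable and toy grayscale frames; the function names, the blending scheme, and the `alpha` parameter are illustrative assumptions, not this repository's actual API.

```python
from typing import Callable, List

Frame = List[List[float]]  # toy grayscale frame: rows of pixel values in [0, 1]

def blend(prev: Frame, cur: Frame, alpha: float) -> Frame:
    """Linear blend of two frames; higher alpha keeps more of `cur`."""
    return [
        [alpha * c + (1.0 - alpha) * p for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, cur)
    ]

def edit_video(frames: List[Frame],
               edit_frame: Callable[[Frame], Frame],
               alpha: float = 0.7) -> List[Frame]:
    """Apply `edit_frame` to every frame, blending each edited frame with
    the previously edited one for rudimentary temporal consistency."""
    edited: List[Frame] = []
    for frame in frames:
        out = edit_frame(frame)       # in practice: a diffusion img2img call
        if edited:
            out = blend(edited[-1], out, alpha)
        edited.append(out)
    return edited

# Example with a trivial "edit": invert pixel values.
frames = [[[0.0]], [[1.0]]]
result = edit_video(frames, lambda f: [[1.0 - v for v in row] for row in f])
```

In a real pipeline, `edit_frame` would wrap something like `diffusers`' img2img pipeline, and the naive blend would be replaced by the attention- or feature-propagation tricks used by projects such as TokenFlow or FateZero.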

Survey
T2V ~ text2video
- nateraw / stable-diffusion-videos : 4.1k
- Picsart-AI-Research / Text2Video-Zero : 3.7k
- lucidrains / video-diffusion-pytorch : 1.1k
V&I2V ~ video2video & image2video
- Make-A-Video : site: Meta AI; paper: Cornell University / arXiv:2209.14792
V2V ~ video2video
- showlab / Tune-A-Video : 4k
- omerbt / TokenFlow : 1.4k
- rese1f / StableVideo : 1.3k
- ChenyangQiQi / FateZero : 1k
I2V ~ image2video
- HumanAIGC / AnimateAnyone : 13.5k, academic paper: Cornell University / arXiv:2311.17117 (paper and code: moorethreads / moore-animateanyone : 2.3k)
- AILab-CVC / VideoCrafter : 3.8k
- ali-vilab / i2vgen-xl : 2.3k
Other surveys
Preliminary investigation
Core
- leandromoreira / digital_video_introduction :
a hands-on introduction to the theoretical foundations of digital video
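A small taste of the color-space math that such digital-video material covers: converting RGB to YCbCr, the representation most video codecs operate on. The snippet below uses the well-known BT.601 full-range coefficients; the function name and interface are my own illustration.

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple:
    """Convert 8-bit RGB to full-range YCbCr using BT.601 coefficients."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b  # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr

# Neutral colors map to chroma = 128 (no color information):
white = rgb_to_ycbcr(255, 255, 255)
black = rgb_to_ycbcr(0, 0, 0)
```

Because luma and chroma are separated, codecs can subsample the two chroma planes (e.g. 4:2:0) with little perceptual loss, which is a large part of why video compresses so well.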
GUI
- Gradio (docs)
- Tkinter and CustomTkinter (CustomTkinter docs); ParthJadhav / Tkinter-Designer
- PyQtGraph
Contact Us
- If you run into any problems during secondary development or deployment, feel free to contact us at any time.
- QQ email: 1903249375@qq.com
Languages: Python 94.0%, JavaScript 6.0%
Apache License 2.0
Created February 29, 2024
Updated January 16, 2026