186 results for “topic:text2image”
Stable Diffusion web UI
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
An AI-native slides generator based on nano banana pro 🍌, moving toward a true "Vibe PPT": upload any template image; upload any assets with intelligent parsing; generate a deck from a single sentence, an outline, or per-page descriptions; revise specified regions by verbal instruction; one-click export to editable PPT.
Multi-Platform Package Manager for Stable Diffusion
OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative AI (AIGC), easy-to-use APIs, an awesome model zoo, and diffusion models for text-to-image generation, image/video restoration/enhancement, etc.
Diffusion model(SD,Flux,Wan,Qwen Image,Z-Image,...) inference in pure C/C++
Kandinsky 2 — multilingual text2image latent diffusion model
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
PALLAIDIUM — a generative AI movie studio, seamlessly integrated into the Blender Video Editor (VSE), enabling end-to-end production from script to screen and back.
Beautiful and Easy to use Stable Diffusion WebUI
GLM-Image: Auto-regressive for Dense-knowledge and High-fidelity Image Generation.
Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
Personalization for Stable Diffusion via Aesthetic Gradients 🎨
An optimized text-to-image model based on Stable Diffusion. Accepts both Chinese and English text input and can generate high-quality images in several modern art styles.
[SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
ComfyUI DyPE, enabling artifact-free 4K+ image generation for Qwen, Flux + Nunchaku
Not only automatic, but also intelligent: an intelligent data-visualization system based on LLMs.
T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free!
Z-Image workflow with predefined styles for high-quality image generation and a user-friendly experience. Includes pre-configured versions for GGUF and SAFETENSORS checkpoint formats.
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
Open reproduction of MUSE for fast text2image generation.
Official implementation for "DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion".
Colab notebook for Stable Diffusion Hyper-SDXL.
Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation
web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
A collection of arbitrary text to image papers with code (constantly updating)
A set of ComfyUI nodes designed specifically for the Z-Image / Z-Image Turbo model.
[ICLR2023] Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation (CDCD).