Diffusion model
Diffusion models are generative AI systems that synthesize high-fidelity data (images, audio, video) by reversing a process of iterative noise corruption.
The canonical formulation, the denoising diffusion probabilistic model (DDPM), pairs a fixed forward process that gradually corrupts training data with Gaussian noise against a learned reverse process that removes that noise step by step. Once trained, the reverse process can generate entirely new, high-quality samples starting from pure noise. Industry models such as Stability AI's Stable Diffusion and OpenAI's DALL-E 2 build on this architecture for state-of-the-art text-to-image generation, inpainting, and super-resolution. Compared with older generative models such as GANs, diffusion models train more stably and typically produce higher-quality outputs.
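The forward (noising) half of a DDPM can be sampled in closed form: with a variance schedule β_1..β_T and ᾱ_t the cumulative product of (1 − β_t), one draws x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε with ε standard Gaussian noise. A minimal sketch in plain Python, assuming a linear β schedule (function names such as `forward_diffuse` are illustrative, not from any specific library):

```python
import math
import random

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced variance schedule beta_1..beta_T."""
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def cumprod_alphas(betas):
    """Running product alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

def forward_diffuse(x0, t, a_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (no step-by-step loop needed)."""
    a_bar = a_bars[t]
    return [math.sqrt(a_bar) * x + math.sqrt(1.0 - a_bar) * rng.gauss(0.0, 1.0)
            for x in x0]

T = 1000
betas = linear_beta_schedule(T)
a_bars = cumprod_alphas(betas)

rng = random.Random(0)
x0 = [rng.gauss(0.0, 1.0) for _ in range(64)]   # toy "data" sample
x_early = forward_diffuse(x0, 10, a_bars, rng)   # still close to x0
x_late = forward_diffuse(x0, T - 1, a_bars, rng) # near pure Gaussian noise

print(a_bars[-1] < 1e-3)  # True: almost all signal is destroyed by step T
```

The learned part of a DDPM is the reverse direction: a neural network (typically a U-Net) is trained to predict the noise ε added at each step, and sampling runs the chain backwards from x_T to x_0.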