Diffusion
Diffusion models are generative AI: they synthesize high-fidelity images and data by iteratively denoising a random starting point over hundreds of steps.
Diffusion is a class of generative AI models (such as Stable Diffusion XL and DALL-E 2) that synthesize high-fidelity content, primarily images, from pure noise. The core mechanism is a two-phase process: in 'forward diffusion', noise is gradually added to training data over many steps; in 'reverse diffusion', a neural network (often a U-Net) is trained to remove that noise step by step, reconstructing the data. Running this learned denoising iteratively from random noise generates new samples, and conditioning each step on a text prompt gives fine-grained text-to-image control over the final output. Diffusion models generally outperform earlier generative adversarial networks (GANs) in sample diversity, while matching or exceeding them in image quality.
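To make the forward-diffusion phase concrete, here is a minimal sketch in NumPy, assuming a simple linear noise schedule; the function names `beta_schedule` and `q_sample` are illustrative, not from any particular library, and a real model would pair this with a trained denoising network for the reverse phase.

```python
import numpy as np

def beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced noise variances for T diffusion steps (an assumed schedule)."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, betas, rng):
    """Sample x_t from the forward process q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative product up to step t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))        # stand-in for a small image
betas = beta_schedule()
x_mid = q_sample(x0, 500, betas, rng)   # partially noised
x_end = q_sample(x0, 999, betas, rng)   # nearly pure noise by the final step
```

By the last step the cumulative signal coefficient is close to zero, so `x_end` is essentially Gaussian noise; reverse diffusion starts from exactly such a sample and denoises it back toward the data distribution.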