
Technology

Diffusion models

Generative AI models that synthesize high-fidelity data (e.g., images, audio) by learning to iteratively reverse a fixed, step-by-step noise addition process.

Diffusion models operate on a two-part mechanism: a forward diffusion process and a reverse sampling process. The forward process systematically corrupts training data, such as a clean image, by adding Gaussian noise over hundreds or thousands of steps until only pure noise remains. A neural network, typically a U-Net, is then trained to master the reverse process: iteratively predicting and removing that noise to recover samples from the original data distribution. This denoising capability, starting from a random noise seed, allows the model to generate entirely new, high-quality samples. Key commercial examples, like OpenAI's DALL-E 2 and Stability AI's Stable Diffusion, leverage this core technology for state-of-the-art text-to-image synthesis.
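The forward process described above has a convenient closed form: x_t can be sampled directly from x_0 in one shot, rather than by iterating every noising step. A minimal NumPy sketch, assuming the common linear beta (noise-variance) schedule from the DDPM formulation; the schedule bounds and step count are illustrative choices, not fixed by any particular implementation:

```python
import numpy as np

# Forward diffusion closed form:
#   q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # cumulative signal-retention factor

def forward_diffuse(x0, t, rng):
    """Sample the noised version x_t directly from clean data x_0."""
    eps = rng.standard_normal(x0.shape)  # Gaussian noise the model learns to predict
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))         # toy stand-in for a clean image
xt, eps = forward_diffuse(x0, T - 1, rng)

# By the final step almost no signal remains (alpha_bar_T is near zero),
# so x_T is approximately pure Gaussian noise.
print(alpha_bars[0], alpha_bars[-1])
```

Training then amounts to teaching the network to predict `eps` from `xt` and `t`; sampling runs the learned prediction in reverse, step by step, starting from pure noise.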

https://en.wikipedia.org/wiki/Diffusion_model