Variational Autoencoder

Variational Autoencoders (VAEs) are deep generative models: they compress data into a smooth, probabilistic latent space, then sample from that space to generate novel, realistic data instances.

The Variational Autoencoder (VAE) is a deep generative model introduced by Diederik P. Kingma and Max Welling in 2013. It uses an encoder-decoder architecture with a critical difference: the encoder maps input data not to a fixed point but to a probability distribution in a continuous latent space, defined by mean (μ) and variance (σ²) vectors. This probabilistic encoding, coupled with the reparameterization trick, makes sampling differentiable and therefore trainable by backpropagation. The VAE minimizes a loss with two terms: a reconstruction error and a Kullback-Leibler (KL) divergence. The KL term regularizes the latent space, pushing the encoded distributions toward a simple prior such as a standard Gaussian. This structured latent space is the key enabler for generating new, high-quality data samples, making VAEs valuable for tasks like image synthesis and anomaly detection.
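The two ingredients described above, the reparameterization trick and the closed-form KL term, can be sketched in a few lines. This is a minimal illustration assuming a diagonal Gaussian posterior and a standard-normal prior; the function names are illustrative, not from any particular library.

```python
import math
import random

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL divergence between N(mu, sigma^2) and the prior N(0, 1),
    # per latent dimension: 0.5 * (mu^2 + sigma^2 - 1 - log(sigma^2)).
    return 0.5 * (mu ** 2 + math.exp(log_var) - 1.0 - log_var)

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).
    # All randomness lives in eps, so gradients can flow through mu and sigma.
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

# When the encoder outputs mu = 0 and log_var = 0 (sigma = 1), the posterior
# already matches the prior, so the KL penalty is zero.
print(kl_to_standard_normal(0.0, 0.0))  # 0.0
z = reparameterize(0.0, 0.0)  # a sample from the standard normal
```

In a full training loop this KL term would be summed over latent dimensions and added to the reconstruction error; here it is shown per dimension for clarity.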

https://en.wikipedia.org/wiki/Variational_autoencoder
