Variational Autoencoders (VAEs) are autoencoders that use variational inference to learn a probabilistic latent-space representation of input data. By combining ideas from Bayesian inference and deep learning, VAEs can generate new data points by sampling from the learned latent space. A VAE consists of an encoder network that maps each input to the parameters (typically a mean and a variance) of a distribution over the latent space, and a decoder network that reconstructs the input from a latent sample; the reparameterization trick makes the sampling step differentiable, enabling end-to-end training with gradient descent. Training maximizes the evidence lower bound (ELBO), which balances reconstruction accuracy against the KL divergence between the encoder's distribution and a prior, usually a standard Gaussian.
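To make these pieces concrete, here is a minimal sketch in PyTorch (an assumed framework; the text names none). The input dimension of 784, the hidden and latent sizes, and all layer names are illustrative choices, not prescribed by the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps input x to the parameters of q(z | x)
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        # Decoder: maps a latent sample z back to input space
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through mu and sigma despite the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))  # outputs in [0, 1], e.g. pixel intensities

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I)),
    # where the KL term has a closed form for two Gaussians.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

After training, new samples come from decoding draws from the prior, e.g. `model.decode(torch.randn(n, 20))`. The `reparameterize` method is the trick referred to above: rewriting the sample as z = μ + σ·ε moves the randomness into ε, so backpropagation passes through μ and σ rather than through a stochastic node.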