Can you get a working program starting from random text? Apparently yes, or at least that's what the authors of Mercury-Coder claim: a system for generating programs that doesn't work like ordinary LLMs. Will it really be faster at the same accuracy? Try it and see! #ai #intelligenzaartificiale #programming #diffusionmodels #mercurycoder youtube.com/watch?v=rU-PfRJN0x
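The core trick, as I understand it, is generating all tokens in parallel and refining them over a few denoising passes instead of emitting one token at a time. A toy Python sketch of that idea (the denoiser is faked here with a hard-coded target string; this is not Mercury-Coder's actual model or training setup):

import random

random.seed(0)
VOCAB = list("abcdefghijklmnopqrstuvwxyz =+()")
TARGET = list("x = (a + b)")  # stands in for what a trained denoiser would predict

def denoise_step(tokens, strength):
    # A real system would query a trained denoising model at every position;
    # this fake nudges a random fraction of positions toward the target so the
    # parallel-refinement structure stays visible.
    out = tokens[:]
    for i in range(len(out)):
        if random.random() < strength:
            out[i] = TARGET[i]
    return out

tokens = [random.choice(VOCAB) for _ in TARGET]  # start from pure noise
for step in range(5):                            # a handful of parallel passes
    tokens = denoise_step(tokens, strength=0.5)
    print(f"step {step}: {''.join(tokens)}")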

Breaking the Curse of Dimensionality: Insights from Diffusion Models in Machine Learning

Recent research demonstrates how diffusion models can effectively learn low-dimensional distributions, revolutionizing the way we approach image data in machine learning. This article delves into the ...

news.lavx.hu/article/breaking-

New paper: ‘An Analytic Theory of Creativity in Convolutional Diffusion Models’ explores how diffusion models generate creative outputs by combining training data patches. A groundbreaking foundation for understanding generative AI.
#AI #GenerativeAI #DiffusionModels
arxiv.org/abs/2412.20292

arXiv.org · An analytic theory of creativity in convolutional diffusion models

We obtain the first analytic, interpretable and predictive theory of creativity in convolutional diffusion models. Indeed, score-based diffusion models can generate highly creative images that lie far from their training data. But optimal score-matching theory suggests that these models should only be able to produce memorized training examples. To reconcile this theory-experiment gap, we identify two simple inductive biases, locality and equivariance, that: (1) induce a form of combinatorial creativity by preventing optimal score-matching; (2) result in a fully analytic, completely mechanistically interpretable, equivariant local score (ELS) machine that, (3) without any training can quantitatively predict the outputs of trained convolution only diffusion models (like ResNets and UNets) with high accuracy (median $r^2$ of $0.90, 0.91, 0.94$ on CIFAR10, FashionMNIST, and MNIST). Our ELS machine reveals a locally consistent patch mosaic model of creativity, in which diffusion models create exponentially many novel images by mixing and matching different local training set patches in different image locations. Our theory also partially predicts the outputs of pre-trained self-attention enabled UNets (median $r^2 \sim 0.75$ on CIFAR10), revealing an intriguing role for attention in carving out semantic coherence from local patch mosaics.
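The "locally consistent patch mosaic" picture in that abstract is concrete enough to sketch. Here is a toy 1-D version (my own simplification, not the paper's ELS machine): a new sample is assembled by repeatedly picking a local patch from some training example that agrees with the already-placed pixels on the overlap, so the output is globally novel but locally consistent with the training set.

import numpy as np

rng = np.random.default_rng(0)
train = rng.integers(0, 4, size=(20, 8))   # 20 training "rows" of 8 pixels each

PATCH = 3
canvas = np.full(8, -1)
canvas[:PATCH] = train[0, :PATCH]          # seed with one training patch
for start in range(1, 8 - PATCH + 1):
    overlap = canvas[start:start + PATCH - 1]
    # candidate patches from *any* training row whose left edge matches the
    # pixels already placed on the canvas
    cands = [r[start:start + PATCH] for r in train
             if np.array_equal(r[start:start + PATCH - 1], overlap)]
    patch = cands[rng.integers(len(cands))] if cands else train[0, start:start + PATCH]
    canvas[start:start + PATCH] = patch
print(canvas)  # a novel mix of patches drawn from different training rows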

Diffusion Models are Evolutionary Algorithms
arxiv.org/abs/2410.02543
old.reddit.com/r/MachineLearni

* diffusion models inherently perform evolutionary algorithms
* naturally encompass selection, mutation, reproductive isolation

Diffusion model: en.wikipedia.org/wiki/Diffusio
Stable Diffusion: en.wikipedia.org/wiki/Stable_D
Evolutionary algorithm: en.wikipedia.org/wiki/Evolutio
Genetic programming: en.wikipedia.org/wiki/Genetic_

arXiv.org · Diffusion Models are Evolutionary Algorithms

In a convergence of machine learning and biology, we reveal that diffusion models are evolutionary algorithms. By considering evolution as a denoising process and reversed evolution as diffusion, we mathematically demonstrate that diffusion models inherently perform evolutionary algorithms, naturally encompassing selection, mutation, and reproductive isolation. Building on this equivalence, we propose the Diffusion Evolution method: an evolutionary algorithm utilizing iterative denoising -- as originally introduced in the context of diffusion models -- to heuristically refine solutions in parameter spaces. Unlike traditional approaches, Diffusion Evolution efficiently identifies multiple optimal solutions and outperforms prominent mainstream evolutionary algorithms. Furthermore, leveraging advanced concepts from diffusion models, namely latent space diffusion and accelerated sampling, we introduce Latent Space Diffusion Evolution, which finds solutions for evolutionary tasks in high-dimensional complex parameter space while significantly reducing computational steps. This parallel between diffusion and evolution not only bridges two different fields but also opens new avenues for mutual enhancement, raising questions about open-ended evolution and potentially utilizing non-Gaussian or discrete diffusion models in the context of Diffusion Evolution.
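A compressed sketch of the Diffusion Evolution idea under my own simplifying assumptions (the paper's update follows a DDIM-style schedule; this toy keeps only the shape of the algorithm): each generation, every individual is denoised toward a fitness-weighted, neighbourhood-weighted average of the population while the noise is annealed to zero. The local weighting is what plays the role of reproductive isolation, letting several optima survive at once.

import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                        # two optima, at -2 and +2
    return np.exp(-((x - 2) ** 2)) + np.exp(-((x + 2) ** 2))

pop = rng.normal(0.0, 4.0, size=64)    # start the population from pure noise
for t in range(30):
    alpha = 1.0 - t / 30               # noise level, annealed toward zero
    w = fitness(pop)
    # neighbourhood weighting: individuals mostly listen to nearby solutions,
    # so clusters around different optima can coexist
    d = np.abs(pop[:, None] - pop[None, :])
    k = np.exp(-d ** 2 / (2 * (1.0 + 4 * alpha) ** 2)) * w[None, :]
    x0_hat = (k @ pop) / k.sum(axis=1)     # per-individual denoised target
    pop = (1 - alpha) * x0_hat + alpha * pop + alpha * rng.normal(0, 0.3, 64)
print(np.round(np.sort(pop), 2))       # clusters near both -2 and +2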

Scientists have devised a technique called "#DistributionMatchingDistillation" (DMD) that teaches new #AIModels to mimic established image generators, known as #DiffusionModels, such as #DALLE3, #Midjourney and #StableDiffusion.

This framework yields smaller, leaner #AI models that can generate images much more quickly while retaining the quality of the generated images.
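In loss terms, the one-step student is pushed wherever the teacher's score and a score fitted to the student's own samples disagree. A deliberately tiny 1-D sketch of that signal (my own toy construction with Gaussians, not MIT's implementation; names like teacher_score are made up here):

import numpy as np

rng = np.random.default_rng(2)
MU_TEACHER = 3.0                       # pretend the teacher models N(3, 1)

def teacher_score(x):                  # score of N(3, 1): d/dx log p(x)
    return MU_TEACHER - x

theta = -5.0                           # one-step generator: G(z) = z + theta
for step in range(200):
    z = rng.normal(0, 1, 256)
    x = z + theta                      # single forward pass, no iterative sampling
    fake_score = x.mean() - x          # score of a Gaussian fit to student samples
    grad = -(teacher_score(x) - fake_score).mean()   # DMD-style gradient
    theta -= 0.1 * grad
print(theta)                           # converges near MU_TEACHER = 3.0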

MIT scientists have just figured out how to make the most popular #AIImageGenerators 30 times faster
livescience.com/technology/art

Live Science · MIT scientists have just figured out how to make the most popular AI image generators 30 times faster · By Keumars Afifi-Sabet