#EvolutionaryAlgorithms

Hacker News
Baby Steps into Genetic Programming
https://aerique.blogspot.com/2011/01/baby-steps-into-genetic-programming.html
#HackerNews #GeneticProgramming #BabySteps #AIInnovation #EvolutionaryAlgorithms
Anthony
Not that I have the free time to take on another project, but there's a part of me that wants to do a thorough exploration of argmax and write up what I find, if only as notes. Math-y and science-y people take it for granted; search engines prefer telling you about the numpy function of that name. But it turns out argmax has (what I think are) interesting subtleties.

Here's one. If you're given a function, you can treat argmax of that function as a set-valued function varying over all subsets of its domain, returning a subset--the argmaxima, let's call them--of each subset. argmax_{x∈S} f(x) is a subset of S, for any S that is a subset of the function f's domain. Another way to think of this is that argmax induces a two-way partitioning of any such input set S into those elements that are in the argmax and those that are not.

Now imagine you have some way of splitting any subset of some given set into two pieces, one piece containing the "preferred" elements and the other piece the rest, separating the chaff from the wheat if you will. It turns out that in a large variety of cases, given only a partitioning scheme like this, you can find a function for which the partitioning is argmax of that function. In fact you can say more: you can find a function whose codomain is (a subset of) some n-dimensional Euclidean space. You might have to relax the definition of argmax slightly (but not fatally) to make this work, but you frequently can (1). It's not obvious this should be true, because the partitioning scheme you started with could be anything at all (as long as it's deterministic--that bit's important). That's one thing that's interesting about this observation.

Another, deeper reason this is interesting (to me) is that it connects two concepts that superficially look different, one being "local" and the other "global". This notion of partitioning subsets into preferred/not-preferred pieces is sometimes called a "solution concept"; the notion shows up in game theory, but is more general than that. You can think of it as a local way of identifying what's good: if you have a solution concept, then given a set of things, you're able to say which are good, regardless of the status of other things you can't see (because they're not in the set you're considering). On the other hand, the notion of argmax of a function is global in nature: the function is globally defined, over its entire domain, and the argmax of it tells you the (arg)maxima over the entire domain.

In evolutionary computation and artificial life, which is where I'm coming from, such a function is often called an "objective" (or "multiobjective") function, sometimes a "fitness" function. One of the provocative conclusions of what I've said above for these fields is that as soon as you have a deterministic way of discerning "good" from "bad" stuff--aka a solution concept--you automatically have globally-defined objectives. They might be unintelligible, difficult to find, or not very interesting or useful for whatever you're doing, but they are there nevertheless: the math says so. The reason this is provocative is that every few years in the evolutionary computation or artificial life literature there pops up some new variation of "fitnessless" or "objective-free" algorithms that claim to find good stuff of one sort or another without the need to define objective function(s), and/or without the need to explicitly climb them (2). The result I'm alluding to here strongly suggests that this way of thinking lacks a certain incisiveness: if your algorithm has a deterministic solution concept, and the algorithm is finding good stuff according to that solution concept, then it absolutely is ascending objectives. It's just that you've chosen to ignore them (3).

Anyway, returning to our friend argmax, it looks like it has a kind of inverse: given only the "behavior" of argmax of a function f over a set of subsets, you're often able to derive a function g that would lead to that same behavior. In general g will not be the same as f, but it will be a sibling of sorts. In other words there's an adjoint functor or something of that flavor hiding here! This is almost surely not a novel observation, but I can say that in all my years of math and computer science classes I never learned this. Maybe I slept through that lecture!

#ComputerScience #math #argmax #SolutionConcepts #CoevolutionaryAlgorithms #CooptimizationAlgorithms #optimization #EvolutionaryComputation #EvolutionaryAlgorithms #GeneticAlgorithms #ArtificialLife #InformativeDimensions

(1) If you're familiar with my work on this stuff then the succinct statement is: partial-order decomposition of the weak preference order induced by the solution concept, when possible, yields an embedding of weak preference into ℝ^n for some finite natural number n; the desired function can be read off from this (the proofs about when the solution concept coincides with argmax of this function have some subtleties but aren't especially deep or hard). I skipped this detail, but there's also a "more local" version of this observation, where the domain of applicability of weak preference is itself restricted to a subset, and the objectives found are restricted to that subdomain rather than fully global.

(2) The latest iteration of "open-endedness" has this quality; other variants include "novelty search" and "complexification".

(3) Which is fair of course--maybe these mystery objectives legitimately don't matter to whatever you're trying to accomplish. But in the interest of making progress at the level of ideas, I think it's important to be precise about one's commitments and premises, and to be aware of what constitutes an impossible premise.
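A minimal sketch of the set-valued argmax described in the post above; the toy function f and the example subsets are invented purely for illustration:

```python
def argmax_set(f, S):
    """Return the set of argmaxima of f over the finite set S."""
    S = list(S)
    if not S:
        return set()
    best = max(f(x) for x in S)
    return {x for x in S if f(x) == best}

def partition(f, S):
    """argmax induces a two-way partition of S: (argmaxima, the rest)."""
    good = argmax_set(f, S)
    return good, set(S) - good

f = lambda x: -(x - 3) ** 2   # toy objective with a peak at x = 3

print(partition(f, {0, 1, 2, 3, 4}))  # ({3}, {0, 1, 2, 4})
print(partition(f, {0, 1, 5}))        # ({1, 5}, {0}) -- all tied maxima are kept
```

Restricting f to different subsets S gives the "preferred vs. rest" split the post talks about; the non-obvious direction is the converse, recovering such an f from a given partitioning scheme.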
Nate Gaylinn
I'm really enjoying my latest research project. This one's exploring how different spatial environments can lead to different evolutionary dynamics. Here we see an environment where it's harder to survive in the middle than on the edges (that is, it requires higher scores from a fitness function). We can see the population evolve increasing fitness as it spreads into the interior space.
#EvolutionaryComputation #EvolutionaryAlgorithms #science #evolution
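A toy sketch of the kind of spatial survival pressure described above, assuming a grid whose survival threshold is highest at the centre and lowest at the edges; the shape of the threshold and the numbers are illustrative, not the project's actual code:

```python
import numpy as np

def survival_threshold(rows, cols, low=0.2, high=0.9):
    """Per-cell fitness needed to survive: near `high` in the middle, `low` at the edges."""
    r = np.abs(np.linspace(-1, 1, rows))[:, None]
    c = np.abs(np.linspace(-1, 1, cols))[None, :]
    dist_from_centre = np.maximum(r, c)          # ~0 near the centre, 1 at the edges
    return high - (high - low) * dist_from_centre

thresholds = survival_threshold(8, 8)
# given a same-shaped fitness_grid, survivors would be: fitness_grid > thresholds
```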
Farooq Karimi Zadeh
Just checked the #EvolutionaryAlgorithms and #EvolutionaryComputation tags, and I found posts from just two people: myself and @moshesipper.
It's time to feel scientifically lonely :)
But really, why are some people like me attracted to the more or less unpopulated regions of science and engineering?
#science
Farooq Karimi Zadeh
A #GeneticProgramming question. There is this lexicase selection algorithm, which basically eliminates individuals that don't perform well on a single test case. For regression that could make sense, but for binary classification it means a huge number of programs in the population would suddenly vanish because they misclassify a single data sample. I haven't tested it yet, but to me it makes little sense. Where am I wrong?
#machinelearning #EvolutionaryAlgorithms
ping @lspector
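For reference, a minimal sketch of lexicase selection as it is usually described: the case-by-case filtering happens once per selection event, with the test cases in a fresh random order each time, so an individual that fails one case is not permanently removed from the population; it just loses this particular selection event when that case comes up early.

```python
import random

def lexicase_select(population, errors):
    """Select one parent via lexicase selection.

    errors[i][c] is individual i's error on test case c (lower is better).
    Candidates are filtered case by case, in a random order, keeping only
    those tied for the best error on the current case.
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[random.choice(candidates)]

# Hypothetical usage: three programs, three test cases, 0/1 classification errors.
pop = ["progA", "progB", "progC"]
errs = [[0, 1, 0],
        [0, 0, 1],
        [1, 0, 0]]
parent = lexicase_select(pop, errs)
```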
Moshe Sipper
New paper out ✒️😊
We present a novel approach to performing fitness approximation in #geneticalgorithms (#GAs) using #machinelearning (#ML) models, focusing on dynamic adaptation to the evolutionary state.
https://www.mdpi.com/2078-2489/15/12/744
With talented grad students Itai Tzruia and Tomer Halperin, and my colleague Dr. Achiya Elyasaf.
#evolutionaryalgorithms #evolutionarycomputation
Nate Gaylinn
I'm at uni, studying both evolutionary computation and deep learning. I'm a seasoned programmer, but both of these tools are new and challenging, because I'm used to having control over the programs I write. These techniques are both about getting computers to figure out their own way to do things.
The difference is, with DL, there's some very specific thing I want to accomplish, and when I can't get the computer to do that, it's quite frustrating. With EC, I have way less of an expectation about what's supposed to happen, and often the weird stuff the computer comes up with is a delightful surprise.
I guess the flip side is that EC is a bit too wild and unpredictable to be a very profitable enterprise at this point, but it's fun for research!
#programming #ml #deep_learning #EvolutionaryAlgorithms
Nate Gaylinn
I keep designing weird bio-inspired evolutionary algorithms, only to discover (while building them) that they're sorta inside-out versions of groundbreaking EAs of the past couple of decades.
This is both frustrating and tremendously exciting. On the one hand, I keep feeling scooped. On the other hand, I'm rediscovering known good ideas, and I hope my flip in perspective brings something interesting and important to the story! I really do think we've been thinking about evolution wrong all this time.
I guess the challenge is for me to show that.
#alife #EvolutionaryAlgorithms #science
Victoria Stuart 🇨🇦 🏳️‍⚧️
Diffusion Models are Evolutionary Algorithms
https://arxiv.org/abs/2410.02543
https://old.reddit.com/r/MachineLearning/comments/1fzbvq3/r_diffusion_models_are_evolutionary_algorithms
* diffusion models inherently perform evolutionary algorithms
* naturally encompass selection, mutation, reproductive isolation
Diffusion model: https://en.wikipedia.org/wiki/Diffusion_model
Stable Diffusion: https://en.wikipedia.org/wiki/Stable_Diffusion
Evolutionary algorithm: https://en.wikipedia.org/wiki/Evolutionary_algorithm
Genetic programming: https://en.wikipedia.org/wiki/Genetic_programming
#ML #DiffusionModels #EvolutionaryAlgorithms #GeneticProgramming #StableDiffusion #evolution
Farooq Karimi Zadeh
From the paper "Cartesian Genetic Programming: its status and future"
#gp #geneticprogramming #evolutionaryAlgorithms #ML #AI #machinelearning #artificial_intelligence #cgp #book #science #CS #computers #computing
synth.is
Very happy to see our article on a System for #sonic Explorations with #EvolutionaryAlgorithms published in the #journal of the #audio #engineering #society https://doi.org/10.17743/jaes.2022.0137 which enables #SoundSynthesis with #QualityDiversity #algorithms and is, among other things, based on #NodeJS and #DNN

🧬🚀 #EvolutionaryAlgorithms and #ScrumTeams are a perfect match!
In the quest for the epitome of agility, you need a diverse population within your teams. With each iteration, our evolutionary algorithm sifts through individuals for their #AgileCheese quotient - the true essence of #agile mastery. Only those infused with the finest #cheese ascend to the echelons of genuine #agility, propelling our teams to unparalleled levels of innovation! 🌟🧀 #EvolutionaryAgility #CheeseFitness

One idea I'm thinking about for #GeneticProgramming

In GP, we usually evolve multiple subpopulations, called "demes", rather than one big population, because premature convergence happens faster in a single small population. We then allow small, infrequent migrations between subpopulations, e.g. around 5% of the population of each deme, as in the sketch below.
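A minimal sketch of that deme/migration scheme, assuming a ring topology and a hypothetical evolve_one_generation step for the per-deme evolution; the 5% rate and the 10-generation interval are just the example numbers from above:

```python
import random

def migrate(demes, rate=0.05):
    """Move roughly `rate` of each deme to the next deme (ring topology)."""
    emigrants = []
    for deme in demes:
        k = max(1, int(len(deme) * rate)) if deme else 0
        chosen = random.sample(range(len(deme)), k)
        emigrants.append([deme[i] for i in chosen])
        for i in sorted(chosen, reverse=True):
            del deme[i]
    for d, group in enumerate(emigrants):
        demes[(d + 1) % len(demes)].extend(group)
    return demes

# Hypothetical outer loop:
# for gen in range(generations):
#     demes = [evolve_one_generation(d) for d in demes]   # your usual GP step
#     if gen % 10 == 0:
#         migrate(demes)
```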

#GP #GeneticProgramming #EvolutionaryAlgorithms #EA #GeneticAlgorithm #EvolutionaryMachineLearning #AI #ML #EML #research #researching #Question #Idea

1/2