Why Everything in the Universe Turns More Complex

Mike's Notes

Philip Ball has written another article for Quanta Magazine that suggests complexity increases over time.

Resources

References


Repository

  • Home > Ajabbi Research > Library > Subscriptions > Quanta Magazine
  • Home > Ajabbi Research > Library > Author > Philip Ball
  • Home > Ajabbi Research > Library > Author > Terrence Deacon
  • Home > Ajabbi Research > Library > Author > Stuart Kauffman

Last Updated

07/04/2025

Why Everything in the Universe Turns More Complex

By: Philip Ball
Quanta Magazine: 2/04/2025

Philip Ball is a science writer and author based in London who contributes frequently to Nature, New Scientist, Prospect, Nautilus and The Atlantic, among other publications. His books include Bright Earth, Invisible, and, most recently, The Water Kingdom.

A new suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution.

In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they?

Many answers to Fermi’s “paradox” have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don’t appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception.

A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics — the law that dictates an inevitable rise in entropy, a measure of disorder. If they’re right, complex and intelligent life should be widespread.

In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter — living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function.

This hypothesis, formulated by the mineralogist Robert Hazen and the astrobiologist Michael Wong of the Carnegie Institution in Washington, D.C., along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not “complete” in the sense of supplying all we need to comprehend natural phenomena; rather, evolution — biological or otherwise — introduces functions and novelties that could not even in principle be predicted from physics alone. “I’m so glad they’ve done what they’ve done,” said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. “They’ve made these questions legitimate.”


Michael Wong, an astrobiologist at the Carnegie Institution in Washington, D.C.

Katherine Cain/Carnegie Science

Others argue that extending evolutionary ideas about function to non-living systems is an overreach. The quantitative value that measures information in this new approach is not only relative, changing depending on context, but also impossible to calculate. For this and other reasons, critics have charged that the new theory cannot be tested, and therefore is of little use.

The work taps into an expanding debate about how biological evolution fits within the normal framework of science. The theory of Darwinian evolution by natural selection helps us to understand how living things have changed in the past. But unlike most scientific theories, it can’t predict much about what is to come. Might embedding it within a meta-law of increasing complexity let us glimpse what the future holds?

Making Meaning

The story begins in 2003, when the biologist Jack Szostak published a short article in Nature proposing the concept of functional information. Szostak — who six years later would get a Nobel Prize for unrelated work — wanted to quantify the amount of information or complexity that biological molecules like proteins or DNA strands embody. Classical information theory, developed by the telecommunications researcher Claude Shannon in the 1940s and later elaborated by the Russian mathematician Andrey Kolmogorov, offers one answer. Per Kolmogorov, the complexity of a string of symbols (such as binary 1s and 0s) depends on how concisely one can specify that sequence uniquely.

For example, consider DNA, which is a chain of four different building blocks called nucleotides. A strand composed of only one nucleotide, repeating again and again, has much less complexity — and, by extension, encodes less information — than one composed of all four nucleotides in which the sequence seems random (as is more typical in the genome).
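Kolmogorov complexity is uncomputable in general, but the compressed size of a string gives a rough, computable proxy for how concisely it can be specified. The minimal sketch below illustrates the contrast just described with invented sequences; using zlib as a stand-in for Kolmogorov complexity is an assumption made purely for illustration.

```python
import random
import zlib

def compression_proxy(seq: str) -> int:
    """Compressed size in bytes: a rough proxy for how concisely `seq` can be specified."""
    return len(zlib.compress(seq.encode("ascii"), 9))

# A strand of one repeating nucleotide vs. a random-looking strand of the
# same length (both sequences are invented for illustration).
repetitive = "A" * 120
random.seed(0)
mixed = "".join(random.choice("ACGT") for _ in range(120))

print(compression_proxy(repetitive))  # small: the pattern is easy to specify
print(compression_proxy(mixed))       # larger: little pattern for the compressor to exploit
```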

Jack Szostak proposed a way to quantify information in biological systems.

HHMI

But Szostak pointed out that Kolmogorov’s measure of complexity neglects an issue crucial to biology: how biological molecules function.

In biology, sometimes many different molecules can do the same job. Consider RNA molecules, some of which have biochemical functions that can easily be defined and measured. (Like DNA, RNA is made up of sequences of nucleotides.) In particular, short strands of RNA called aptamers securely bind to other molecules.

Let’s say you want to find an RNA aptamer that binds to a particular target molecule. Can lots of aptamers do it, or just one? If only a single aptamer can do the job, then it’s unique, just as a long, seemingly random sequence of letters is unique. Szostak said that this aptamer would have a lot of what he called “functional information.”

If many different aptamers can perform the same task, the functional information is much smaller. So we can calculate the functional information of a molecule by asking how many other molecules of the same size can do the same task just as well.

Szostak went on to show that in a case like this, functional information can be measured experimentally. He made a bunch of RNA aptamers and used chemical methods to identify and isolate the ones that would bind to a chosen target molecule. He then mutated the winners a little to seek even better binders and repeated the process. The better an aptamer gets at binding, the less likely it is that another RNA molecule chosen at random will do just as well: The functional information of the winners in each round should rise. Szostak found that the functional information of the best-performing aptamers got ever closer to the maximum value predicted theoretically.
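Functional information is commonly formalized, following Szostak, as I(E) = -log2 of the fraction of all possible sequences whose activity meets the functional threshold E: the rarer the required performance, the higher the functional information. The sketch below is a toy illustration of that definition; the scoring function and the "best binder" string are invented and have nothing to do with real aptamer chemistry.

```python
import itertools
import math

def functional_information(activity, alphabet, length, threshold):
    """I(E) = -log2( fraction of all sequences whose activity >= threshold ).

    Enumerates every sequence of the given length, so it only works for tiny
    toy examples. `activity` is any scoring function supplied by the caller.
    """
    total = 0
    functional = 0
    for seq in itertools.product(alphabet, repeat=length):
        total += 1
        if activity("".join(seq)) >= threshold:
            functional += 1
    if functional == 0:
        return float("inf")  # no sequence reaches the threshold
    return -math.log2(functional / total)

# Toy "binding activity": how closely a short RNA-like string matches a
# hypothetical best binder (purely illustrative).
BEST = "GCAU"
toy_activity = lambda s: sum(a == b for a, b in zip(s, BEST)) / len(BEST)

# The stricter the functional requirement, the fewer sequences qualify
# and the higher the functional information (8 bits when only one of the
# 256 possible 4-letter sequences does the job).
for threshold in (0.25, 0.5, 0.75, 1.0):
    print(threshold, functional_information(toy_activity, "ACGU", 4, threshold))
```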

Selected for Function

Hazen came across Szostak’s idea while thinking about the origin of life — an issue that drew him in as a mineralogist, because chemical reactions taking place on minerals have long been suspected to have played a key role in getting life started. “I concluded that talking about life versus nonlife is a false dichotomy,” Hazen said. “I felt there had to be some kind of continuum — there has to be something that’s driving this process from simpler to more complex systems.” Functional information, he thought, promised a way to get at the “increasing complexity of all kinds of evolving systems.”

In 2007 Hazen collaborated with Szostak to write a computer simulation involving algorithms that evolve via mutations. Their function, in this case, was not to bind to a target molecule, but to carry out computations. Again they found that the functional information increased spontaneously over time as the system evolved.

There the idea languished for years. Hazen could not see how to take it any further until Wong accepted a fellowship at the Carnegie Institution in 2021. Wong had a background in planetary atmospheres, but he and Hazen discovered they were thinking about the same questions. “From the very first moment that we sat down and talked about ideas, it was unbelievable,” Hazen said.


Robert Hazen, a mineralogist at the Carnegie Institution in Washington, D.C.

Courtesy of Robert Hazen

“I had got disillusioned with the state of the art of looking for life on other worlds,” Wong said. “I thought it was too narrowly constrained to life as we know it here on Earth, but life elsewhere may take a completely different evolutionary trajectory. So how do we abstract far enough away from life on Earth that we’d be able to notice life elsewhere even if it had different chemical specifics, but not so far that we’d be including all kinds of self-organizing structures like hurricanes?”

The pair soon realized that they needed expertise from a whole other set of disciplines. “We needed people who came at this problem from very different points of view, so that we all had checks and balances on each other’s prejudices,” Hazen said. “This is not a mineralogical problem; it’s not a physics problem, or a philosophical problem. It’s all of those things.”

They suspected that functional information was the key to understanding how complex systems like living organisms arise through evolutionary processes happening over time. “We all assumed the second law of thermodynamics supplies the arrow of time,” Hazen said. “But it seems like there’s a much more idiosyncratic pathway that the universe takes. We think it’s because of selection for function — a very orderly process that leads to ordered states. That’s not part of the second law, although it’s not inconsistent with it either.”

Looked at this way, the concept of functional information allowed the team to think about the development of complex systems that don’t seem related to life at all.

At first glance, it doesn’t seem a promising idea. In biology, function makes sense. But what does “function” mean for a rock?

All it really implies, Hazen said, is that some selective process favors one entity over lots of other potential combinations. A huge number of different minerals can form from silicon, oxygen, aluminum, calcium and so on. But only a few are found in any given environment. The most stable minerals turn out to be the most common. But sometimes less stable minerals persist because there isn’t enough energy available to convert them to more stable phases.


This might seem trivial, like saying that some objects exist while other ones don’t, even if they could in theory. But Hazen and Wong have shown that, even for minerals, functional information has increased over the course of Earth’s history. Minerals evolve toward greater complexity (though not in the Darwinian sense). Hazen and colleagues speculate that complex forms of carbon such as graphene might form in the hydrocarbon-rich environment of Saturn’s moon Titan — another example of an increase in functional information that doesn’t involve life.

It’s the same with chemical elements. The first moments after the Big Bang were filled with undifferentiated energy. As things cooled, quarks formed and then condensed into protons and neutrons. These gathered into the nuclei of hydrogen, helium and lithium atoms. Only once stars formed and nuclear fusion happened within them did more complex elements like carbon and oxygen form. And only when some stars had exhausted their fusion fuel did their collapse and explosion in supernovas create heavier elements such as heavy metals. Steadily, the elements increased in nuclear complexity.

Wong said their work implies three main conclusions.

First, biology is just one example of evolution. “There is a more universal description that drives the evolution of complex systems.”

Second, he said, there might be “an arrow in time that describes this increasing complexity,” similar to the way the second law of thermodynamics, which describes the increase in entropy, is thought to create a preferred direction of time.

Finally, Wong said, “information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.”

In the work Hazen and Szostak conducted on evolution using artificial-life algorithms, the increase in functional information was not always gradual. Sometimes it would happen in sudden jumps. That echoes what is seen in biological evolution. Biologists have long recognized transitions where the complexity of organisms increases abruptly. One such transition was the appearance of organisms with cellular nuclei (around 2.7 billion to 1.8 billion years ago). Then there was the transition to multicellular organisms (around 2 billion to 1.6 billion years ago), the abrupt diversification of body forms in the Cambrian explosion (540 million years ago), and the appearance of central nervous systems (around 600 million to 520 million years ago). The arrival of humans was arguably another major and rapid evolutionary transition.

Evolutionary biologists have tended to view each of these transitions as a contingent event. But within the functional-information framework, it seems possible that such jumps in evolutionary processes (whether biological or not) are inevitable.

In these jumps, Wong pictures the evolving objects as accessing an entirely new landscape of possibilities and ways to become organized, as if penetrating to the “next floor up.” Crucially, what matters — the criteria for selection, on which continued evolution depends — also changes, plotting a wholly novel course. On the next floor up, possibilities await that could not have been guessed before you reached it.

For example, during the origin of life it might initially have mattered that proto-biological molecules would persist for a long time — that they’d be stable. But once such molecules became organized into groups that could catalyze one another’s formation — what Kauffman has called autocatalytic cycles — the molecules themselves could be short-lived, so long as the cycles persisted. Now it was dynamical, not thermodynamic, stability that mattered. Ricard Solé of the Santa Fe Institute thinks such jumps might be equivalent to phase transitions in physics, such as the freezing of water or the magnetization of iron: They are collective processes with universal features, and they mean that everything changes, everywhere, all at once. In other words, in this view there’s a kind of physics of evolution — and it’s a kind of physics we know about already.

The Biosphere Creates Its Own Possibilities

The tricky thing about functional information is that, unlike a measure such as size or mass, it is contextual: It depends on what we want the object to do, and what environment it is in. For instance, the functional information for an RNA aptamer binding to a particular molecule will generally be quite different from the information for binding to a different molecule.

Yet finding new uses for existing components is precisely what evolution does. Feathers did not evolve for flight, for example. This repurposing reflects how biological evolution is jerry-rigged, making use of what’s available.

Kauffman argues that biological evolution is thus constantly creating not just new types of organisms but new possibilities for organisms, ones that not only did not exist at an earlier stage of evolution but could not possibly have existed. From the soup of single-celled organisms that constituted life on Earth 3 billion years ago, no elephant could have suddenly emerged — this required a whole host of preceding, contingent but specific innovations.

However, there is no theoretical limit to the number of uses an object has. This means that the appearance of new functions in evolution can’t be predicted — and yet some new functions can dictate the very rules of how the system evolves subsequently. “The biosphere is creating its own possibilities,” Kauffman said. “Not only do we not know what will happen, we don’t even know what can happen.” Photosynthesis was such a profound development; so were eukaryotes, nervous systems and language. As the microbiologist Carl Woese and the physicist Nigel Goldenfeld put it in 2011, “We need an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. Thus, we end up with an infinite hierarchy.”

The physicist Paul Davies of Arizona State University agrees that biological evolution “generates its own extended possibility space which cannot be reliably predicted or captured via any deterministic process from prior states. So life evolves partly into the unknown.”


Mathematically, a “phase space” is a way of describing all possible configurations of a physical system, whether it’s as comparatively simple as an idealized pendulum or as complicated as all the atoms comprising the Earth. Davies and his co-workers have recently suggested that evolution in an expanding accessible phase space might be formally equivalent to the “incompleteness theorems” devised by the mathematician Kurt Gödel. Gödel showed that any sufficiently powerful, consistent system of axioms in mathematics permits the formulation of statements that can’t be proved true or false within that system. We can only decide such statements by adding new axioms.

Davies and colleagues say that, as with Gödel’s theorem, the key factor that makes biological evolution open-ended and prevents us from being able to express it in a self-contained and all-encompassing phase space is that it is self-referential: The appearance of new actors in the space feeds back on those already there to create new possibilities for action. This isn’t the case for physical systems, which, even if they have, say, millions of stars in a galaxy, are not self-referential.

“An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms,” said Marcus Heisler, a plant developmental biologist at the University of Sydney and co-author of the incompleteness paper. This connection between biological evolution and the issue of noncomputability, Davies said, “goes right to the heart of what makes life so magical.”

Is biology special, then, among evolutionary processes in having an open-endedness generated by self-reference? Hazen thinks that in fact once complex cognition is added to the mix — once the components of the system can reason, choose, and run experiments “in their heads” — the potential for macro-micro feedback and open-ended growth is even greater. “Technological applications take us way beyond Darwinism,” he said. A watch gets made faster if the watchmaker is not blind.

Back to the Bench

If Hazen and colleagues are right that evolution involving any kind of selection inevitably increases functional information — in effect, complexity — does this mean that life itself, and perhaps consciousness and higher intelligence, is inevitable in the universe? That would run counter to what some biologists have thought. The eminent evolutionary biologist Ernst Mayr believed that the search for extraterrestrial intelligence was doomed because the appearance of humanlike intelligence is “utterly improbable.” After all, he said, if intelligence at a level that leads to cultures and civilizations were so adaptively useful in Darwinian evolution, how come it only arose once across the entire tree of life?

Mayr’s evolutionary point possibly vanishes in the jump to humanlike complexity and intelligence, whereupon the whole playing field is utterly transformed. Humans attained planetary dominance so rapidly (for better or worse) that the question of when it will happen again becomes moot.

But what about the chances of such a jump happening in the first place? If the new “law of increasing functional information” is right, it looks as though life, once it exists, is bound to get more complex by leaps and bounds. It doesn’t have to rely on some highly improbable chance event.

What’s more, such an increase in complexity seems to imply the appearance of new causal laws in nature that, while not incompatible with the fundamental laws of physics governing the smallest component parts, effectively take over from them in determining what happens next. Arguably we see this already in biology: Galileo’s (apocryphal) experiment of dropping two masses from the Leaning Tower of Pisa no longer has predictive power when the masses are not cannonballs but living birds.

Together with the chemist Lee Cronin of the University of Glasgow, Sara Walker of Arizona State University has devised an alternative set of ideas to describe how complexity arises, called assembly theory. In place of functional information, assembly theory relies on a number called the assembly index, which measures the minimum number of steps required to make an object from its constituent ingredients.
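As a concrete illustration of the idea (not Cronin and Walker's own algorithm), the assembly index of a short character string can be computed by brute force: start from single characters, repeatedly join any two pieces already built, and count the fewest joins needed to reach the target. The sketch below is a minimal, exponential-time illustration under those assumptions; the example strings are arbitrary.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of join (concatenation) steps needed to build `target`,
    starting from its individual characters and reusing anything already built.

    Brute-force iterative deepening; only practical for short strings.
    """
    units = frozenset(target)

    def reachable(pool, steps_left):
        if target in pool:
            return True
        if steps_left == 0:
            return False
        # Only substrings of the target can appear in a minimal assembly path.
        candidates = {a + b for a, b in product(pool, pool)
                      if (a + b) in target and (a + b) not in pool}
        return any(reachable(pool | {c}, steps_left - 1) for c in candidates)

    depth = 0
    while not reachable(units, depth):
        depth += 1
    return depth

# Reusing previously built parts lowers the index.
print(assembly_index("ABABAB"))  # 3: build AB, then ABAB, then ABABAB
print(assembly_index("ABCDEF"))  # 5: no repeats to reuse, so every join is new
```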

“Laws for living systems must be somewhat different than what we have in physics now,” Walker said, “but that does not mean that there are no laws.” But she doubts that the putative law of functional information can be rigorously tested in the lab. “I am not sure how one could say [the theory] is right or wrong, since there is no way to test it objectively,” she said. “What would the experiment look for? How would it be controlled? I would love to see an example, but I remain skeptical until some metrology is done in this area.”

Hazen acknowledges that, for most physical objects, it is impossible to calculate functional information even in principle. Even for a single living cell, he admits, there’s no way of quantifying it. But he argues that this is not a sticking point, because we can still understand it conceptually and get an approximate quantitative sense of it. Similarly, we can’t calculate the exact dynamics of the asteroid belt because the gravitational problem is too complicated — but we can still describe it approximately enough to navigate spacecraft through it.

Wong sees a potential application of their ideas in astrobiology. One of the curious aspects of living organisms on Earth is that they tend to make a far smaller subset of organic molecules than they could make given the basic ingredients. That’s because natural selection has picked out some favored compounds. There’s much more glucose in living cells, for example, than you’d expect if molecules were simply being made either randomly or according to their thermodynamic stability. So one potential signature of lifelike entities on other worlds might be similar signs of selection outside what chemical thermodynamics or kinetics alone would generate. (Assembly theory similarly predicts complexity-based biosignatures.)

There might be other ways of putting the ideas to the test. Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational “artificial life.” Hazen also sees possible applications in oncology, soil science and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues.

Hazen’s team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. “People are approaching us because they are desperate to find a model to explain their system,” Hazen said.

But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It’s hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe.

On a Sandy Beach metadata

Mike's Notes

Here are some notes on recent changes to this blog, On a Sandy Beach.

Resources

References


Repository

  • Home > Ajabbi Research > Library > Catalogue
  • Home > Handbook > Publications > Newsletters

Last Updated

06/04/2025

On a Sandy Beach metadata

By: Mike Peters
On a Sandy Beach: 06/04/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

In preparation for the Pipi rollout, I have been making some format changes to On a Sandy Beach, my engineering blog.

I use the blog to write notes for myself to remember where to find information or start on documentation, and I'm happy to share this with anyone interested.

Ajabbi has been set up to make Pipi available for users. Pipi is slowly generating a massive 20,000+ page reference website to help developers use Pipi to build SaaS applications for large enterprise systems. 95% of these pages are hidden while the community helps test the documentation. Frequent changes to the page URLs would mess up Google search results and cause confusion.

Here is a formal description of some metadata used on each blog page. I will update all 330+ pages over the next month.

This metadata will likely change again over time based on feedback.

Mike's Notes

  • The why.

Resources

  • A list of web URLs.

References

  • The what. A short list of key source documents.

Repository

  • The where. A list of breadcrumbs used system-wide in Ajabbi that locates where internal resources exist for further searching. Once these pages are unhidden from Google search, the breadcrumbs will include URLs.

Last Updated

  • The when. The date I last updated the blog page. I often update pages 30-40 times over several months after first publishing. I am a slow writer, so I frequently spot mistakes or missing information weeks later. Grammarly also often finds mistakes.

Self-organizing systems: what, how, and why?

Mike's Notes

Carlos Gershenson's paper about self-organising systems was recently published in npj Complexity, a Nature Portfolio journal. Carlos is the editor of Complexity Digest.

I removed the references in the original paper because importing this into Google Blogger is messy. Please refer to the original paper for this missing information.

Resources

References


Repository

  • Home > Ajabbi Research > Library > Subscriptions > Complexity Digest
  • Home > Ajabbi Research > Library > Thermodynamics

Last Updated

05/04/2025

Self-organizing systems: what, how, and why?

By: Carlos Gershenson
Nature > npj complexity: 25/03/2025

Abstract

I present a personal account of self-organizing systems, framing relevant questions to better understand self-organization, information, complexity, and emergence. With this aim, I start with a notion and examples of self-organizing systems (what?), continue with their properties and related concepts (how?), and close with applications (why?) in physics, chemistry, biology, collective behavior, ecology, communication networks, robotics, artificial intelligence, linguistics, social science, urbanism, philosophy, and engineering.

What are self-organizing systems?

"Being ill defined is a feature common to all important concepts.” —Benoît Mandelbrot

I will not attempt to define a “self-organizing system”, as it involves the cybernetic problem of defining “system”, the informational problem of defining “organization”, and the ontological problem of defining “self”. Still, there are plenty of examples of systems that we can usefully call self-organizing: flocks of birds, schools of fish, swarms of insects, herds of cattle, and some crowds of people. In these animal examples, collective behavior is a product of the interactions of individuals, not determined by a leader or an external signal. There are also several examples from non-living systems, such as vortexes, crystallization, self-assembly, and pattern formation in general. In these cases, elements of a system also interact to achieve a global pattern.

Self-organization or similar concepts have been present since antiquity (see Section 3.12), so the idea itself is not new. Nevertheless, we still lack the proper conceptual framework to understand it properly. The term “self-organizing system” was coined by W. Ross Ashby in the early days of cybernetics. Ashby’s purpose was to describe deterministic machines that could change their own organization. Ever since, the concept has been used in a broad range of disciplines, including statistical mechanics, supramolecular chemistry, computer science, and artificial life.

There is an unavoidable subjectivity when speaking about self-organizing systems, as the same system can be described as self-organizing or not (see Section 2.1). Stafford Beer gave the following example: an ice cream at room temperature will thaw. This will increase its temperature and entropy, so it would be “self-disorganizing”. However, if we focus on the function of an ice cream, which is to be eaten, it would be “self-organizing”, because it would approach a pleasant temperature and consistency for eating, improving its “function”. Ashby also mentioned that one just needs to call the attractor of a dynamical system “organized”, and then almost any system will be self-organizing.

So, the question should not be whether a system is self-organizing, but rather (being pragmatic) when is it useful to describe a system as self-organizing? The answer will slowly unfold along this paper, but in short, it can be said that self-organization is a useful description when we are interested in describing systems at multiple scales and in understanding how these scales affect each other. For example, collective motion and cyber-physical systems can benefit from such a description, compared to a single-scale narrative/model. This is common with complexity, as interactions can generate novel information that is not present in either initial or boundary conditions, limiting predictability.

So rather than a definition, we can do with a notion: a system can be described as self-organizing when its elements interact to produce a global function or behavior. This is in contrast with centralized systems, where a single element or a few elements “control” the rest, or with simply distributed systems, where a global problem can be divided (reduced) so that each element does its part, with no need to interact or to integrate elementary solutions. Thus, self-organizing systems are a useful description when we want to relate individual behaviors and interactions to global patterns or functions. If we can describe a system fully (for our particular purposes) at a single scale, then self-organization could perhaps still be identified, but it would be superfluous (not useful). And the “self” implies that the “control” comes from within the system, rather than from an external signal or controller that would explicitly tell the elements what to do.

For example, we can decide to call a society “self-organizing” if we are interested in how individual interactions lead to the formation of fashion, ideologies, opinions, norms, and laws; but at the same time, how the emerging global properties affect the behavior of the individuals. If we were interested in an aggregate property of a population, e.g., its average height, then calling the group of individuals “self-organizing” would not give any extra information, and thus would not be useful.

It should be stressed that self-organization is not a property of systems per se. It is a way of describing systems, i.e., a narrative.

How can self-organizing systems be measured?

"It is the function of science to discover the existence of a general reign of order in nature and to find the causes governing this order. And this refers in equal measure to the relations of man — social and political — and to the entire universe as a whole.” —Dmitri Mendeleev

Even though self-organization has been described intuitively since antiquity — the seeds of the narrative were present — the proper tools for studying it became available only recently: computers. Since self-organizing systems require the explicit description of elements and interactions, our brains, blackboards, and notebooks are too limited to handle the number of variables required to study the properties of self-organizing systems. It was only through the relatively recent development of information technology that we were able to study the richness of self-organization, just as we were unable to study the microcosmos before microscopes and the macrocosmos before telescopes.

Information

Computation can be generally described as the transformation of information, although Alan Turing formally defined computable numbers with the purpose of proving limits of formal systems (in particular, Hilbert’s decision problem). In the same environment where the first digital computers were built in the mid XXth century, Claude Shannon defined information to quantify its transmission, showing that information could be reliably transmitted through unreliable communication channels. As it turned out, Shannon’s information H is mathematically equivalent to Boltzmann-Gibbs entropy:

$$H=-K\sum_{i=1}^{n}p_{i}\log p_{i}, \qquad (1)$$

where K is a positive constant and p_i is the probability of receiving symbol i from a finite alphabet of size n. This dimensionless measure will be maximal for a homogeneous probability distribution, and minimal when only one symbol has a probability p = 1. In binary, we have only two symbols (n = 2), and information would be minimal with a string of only ones or only zeroes (‘1111…’ or ‘0000…’). This implies that having more bits will not tell us anything new, because we already know what the next bits will be (assuming the probability distribution will not change). With a random-like string, such as a sequence of coin flips (‘11010001011011001010…’), information is maximal, because no matter how much previous information we have (full knowledge of the probability distribution), we will not be able to predict what the next bit might be better than chance.
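A minimal sketch of Eq. (1), with K = 1 and logarithms in base 2, estimating the probabilities from symbol frequencies; the example strings are the ones used above.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """H = -sum_i p_i * log2(p_i), with K = 1, estimated from symbol frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(shannon_entropy("1111111111"))            # 0.0: the next bit is never a surprise
print(shannon_entropy("11010001011011001010"))  # 1.0: each bit is maximally uncertain
```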

In parallel, Norbert Wiener — one of the founders of cybernetics — proposed an alternative measure of information, which was basically the same as Shannon’s, but without the minus sign. Wiener’s information measured what one knows already, so it is minimal when we have a random string (homogeneous probability distribution) because all the information we already have is “useless” (to predict the next symbol), and maximal when we have a single symbol repeating (maximally biased probability distribution), because the information we have allows us to predict exactly the next symbol. Nevertheless, Shannon’s information is the one that everyone has used, and we will do the same.

Shannon’s information is also known as Shannon’s entropy, which can also be used as a measure of “disorder”. We already saw that it is maximal for random strings, and thus minimal for particularly ordered strings. Then, we can use the negative of Shannon’s information (which would be Wiener’s information) as a measure of organization. If the organization is a result of internal dynamics, then we can also use this measure for self-organization.

Nevertheless, just like with many measures, the interpretation depends on how the observer performs the measurement. Figure 1 shows how the same system, divided into four microstates or two macrostates (with probabilities represented as shades of gray) can increase its entropy/information (become more homogeneous) or decrease it, depending on how it is observed.

Fig. 1: The same system, observed at different levels or with different coarse grainings can be said to be disorganizing (entropy increasing) or organizing (entropy decreasing), for arbitrary initial and final states.

Probabilities of the system being in a state (a1, a2, b1, and b2 at the lower level, which can be aggregated in different ways at a higher level) are represented as shades of gray, so one can observe which configurations are more homogeneous (i.e., with higher entropy): if there is a high contrast in different states (such as between A' and B' in their initial state), then this implies more organization (less entropy), while similar shades (as between A' and B' in their final state) imply less organization (more entropy).

Still, the fact that self-organization is partially subjective does not mean that it cannot be useful. We just have to be aware that a shared description and interpretation should be agreed upon.

Complexity

Self-organizing systems are intimately related to complex systems. Again, the question is not so much whether a system is self-organizing or complex, but when it is useful to describe it as such. This is because most systems can be described as complex or not, depending on our context and purposes.

Etymologically, complexity comes from the Latin plexus, which could be translated as entwined. We can say that complex systems are those where interactions make it difficult to separate the components and study them in isolation, because of their interdependence. These interactions can generate novel information that limits predictability in an inherent way, as it is not present in either initial or boundary conditions. In other words, there is no shortcut to the future: we have to go through all intermediate steps, as interactions partially determine the future states of the system.

For example, markets tend to be unpredictable because different agents make decisions depending on what they think other agents will decide. But since it is not possible to know what everyone will decide in advance, the predictability of markets is rather limited.

Complex systems can be confused with complicated or chaotic systems. Perhaps they are easier to distinguish by considering their opposites: complicated systems are the opposite of easy ones, chaotic systems (sensitive to initial conditions) are the opposite of robust ones, while complex systems are the opposite of separable ones.

Given the above notion of self-organizing systems, all of them would also be complex systems, but not necessarily vice versa. This is because interactions are an essential aspect of self-organizing systems, which would make them complex by definition. However, we could have a description of a complex system whose elements interact but do not produce a global pattern or function of interest within the timeframe of interest. In that case, the narrative of complexity would be useful, but not the one of self-organization. Nevertheless, understanding complexity should be essential for the study of self-organization.

Emergence

One of the most relevant and controversial properties of complex systems is emergence. It could be seen as problematic because, last century, some people described emergent properties as “surprising”. Emergence would then be a measure of our ignorance, and it would shrink once we understood the mechanisms behind emergent properties. Also, there are different flavors of emergence, some easier to study and accept than others. But in general, emergence can be described as information that is present at one scale and not at another scale.

For example, we can have full knowledge of the properties of carbon atoms. But if we focus only on the atoms, i.e. without interactions, we will not be able to know whether they are part of a molecule of graphite, diamond, graphene, buckyballs, etc. (all composed only of carbon atoms) which have drastically different macroscopic properties. Thus, we cannot derive the conductivity, transparency, or density of these materials by looking only at the atomic properties of carbon. The difference lies precisely in how the atoms are organized, i.e. how they interact.

If emergence can be described in terms of information, Shannon’s measure can be used (understanding that we are measuring only the information that is absent from another scale). Thus, emergence would be the opposite of self-organization. This might seem contradictory, as usually emergence and self-organization are both present in complex systems. But if we take each to its extreme, we can see that maximum emergence (information) occurs when there is (quasi)randomness, so no organization. Maximum (self-)organization occurs when entropy is minimal (no new information, and thus, no emergence). Because of this, complexity can be seen as a balance between emergence and self-organization.
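One simple way to make this balance concrete, following measures Gershenson and colleagues have proposed in related work (an assumption here, since the formulas are not spelled out in this excerpt): normalize Shannon's entropy to get emergence E in [0, 1], define self-organization as S = 1 - E, and complexity as C = 4ES, which peaks when the two are balanced.

```python
import math
from collections import Counter

def emergence(s: str, alphabet_size: int) -> float:
    """Normalized Shannon entropy, E in [0, 1]."""
    n = len(s)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(s).values())
    return h / math.log2(alphabet_size)

def self_organization(s: str, alphabet_size: int) -> float:
    return 1.0 - emergence(s, alphabet_size)

def complexity(s: str, alphabet_size: int) -> float:
    e = emergence(s, alphabet_size)
    return 4.0 * e * (1.0 - e)  # maximal when emergence and self-organization balance

# A fully ordered string, a random-looking string, and a biased string:
# the extremes score low complexity, the in-between case scores highest.
for s in ("00000000", "01101100101101001110", "00010000"):
    print(s, round(emergence(s, 2), 2), round(self_organization(s, 2), 2), round(complexity(s, 2), 2))
```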

Why should we use self-organizing systems?

"It is as though a puzzle could be put together simply by shaking its pieces.” —Christian De Duve

Self-organization can be used to build adaptive systems. This is useful for non-stationary problems, i.e., those that change in time. Since interactions can generate novel information, complexity often leads to non-stationarity. Thus, when a problem changes, the elements of a self-organizing system can adapt through their interactions. Designers then do not need to specify the problem precisely beforehand, or how it will change, but only to define or regulate interactions to achieve a desired goal.

For example, if we want to improve passenger flow in public transportation systems, we cannot really change the elements of the system (passengers). Still, we can change how they interact. In 2016, we successfully implemented such a change to regulate boarding and alighting in the Mexico City metro. In a similar way, we cannot change teachers in an education system. But we can change their interactions to improve learning. We cannot change politicians, but we can regulate their interactions to reduce corruption and improve efficiency. We cannot change businesspeople, but we can control their interactions to promote sustainable economic growth.

There have been many other examples of applications of self-organization in different fields; the following is only a partial enumeration.

Physics

The Industrial Revolution led to the formalization of thermodynamics in the XIXth century. The second law of thermodynamics states that an isolated system will tend to thermal equilibrium. In other words, it loses organization, as heterogeneities become homogeneous, and entropy is eventually maximized. Still, non-equilibrium thermodynamics has studied how open systems can self-organize.

Lasers can be seen as self-organized light, which Hermann Haken took as inspiration to propose synergetics: the study of self-organization in open systems far from thermodynamic equilibrium, closely related to phase transitions, where criticality is found.

Self-organized criticality (SOC) was proposed to explain why power laws, scale-free-like distributions, and fractals are so prevalent in nature. SOC was illustrated with the sandpile model, where grains accumulate and lead to avalanches with a scale-free (critical) size distribution. Similarly, self-organization has been used to describe granular media.
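A minimal sketch of the Bak–Tang–Wiesenfeld sandpile referred to above: grains are dropped at random on a small grid, any site holding four or more grains topples one grain onto each neighbor (grains at the edges fall off), and the number of topplings per drop is the avalanche size. Grid size, number of drops, and the random seed are arbitrary choices.

```python
import random

def drop_and_topple(grid, size, rng):
    """Drop one grain at a random site, then topple until stable.
    Returns the avalanche size (number of topplings)."""
    r, c = rng.randrange(size), rng.randrange(size)
    grid[r][c] += 1
    unstable = [(r, c)]
    avalanche = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        avalanche += 1
        if grid[i][j] >= 4:          # a site may need to topple more than once
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:  # grains at the edges fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

rng = random.Random(1)
size = 20
grid = [[0] * size for _ in range(size)]
avalanches = [drop_and_topple(grid, size, rng) for _ in range(20000)]

# After a transient, the pile hovers around its critical state: most drops cause
# tiny avalanches, while a few cascade across much of the grid (a heavy tail).
settled = avalanches[5000:]
print(max(settled), sum(settled) / len(settled))
```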

Generalizing principles of granular media, self-organization can be used to describe and design “optimal” configurations in biological, social, and economic systems.

Chemistry

Around 1950, Boris P. Belousov was interested in a simplified version of the Krebs cycle. He found that a solution of citric acid in water with acidified bromate and yellow ceric ions produced an oscillating reaction. His attempts to publish his findings were rejected on the grounds that they violated the second law of thermodynamics (which describes isolated systems relaxing toward equilibrium, whereas this reaction takes place far from equilibrium). In the 1960s, Anatol M. Zhabotinsky began working on this reaction, and only in the late 1960s and 1970s did the Belousov-Zhabotinsky reaction become known outside the Soviet Union. Since then, many chemical systems far from equilibrium have been studied. Some have been characterized as self-organizing, because they are able to use free energy to increase their organization.

More generally, self-organization has been used to describe pattern formation, which includes self-assembly.

Molecules are basically atoms joined by covalent bonds. Supramolecular chemistry studies chemical structures formed by weaker forces (van der Waals forces, hydrogen bonds, electrostatic interactions), and these can also be described in terms of self-organization.

Biology

The study of form in biology (morphogenesis) is far from new, but far from complete.

Alan Turing was one of the first to describe morphogenesis with differential equations. Morphogenesis can be seen as pattern formation with local stimulation and long-range inhibition (skins, scales), or as fractals (capillaries, neurons). These processes are more or less well understood. Still, it becomes more sophisticated for embryogenesis and regeneration, where many open questions remain.

Humberto Maturana and Francisco Varela proposed autopoiesis (self-production) to describe the emergence of living systems from complex chemistry. Autopoiesis can be seen as a special case of self-organization (a characterization Maturana disagreed with), because molecules self-organize to produce membranes and metabolism. Moreover, it can be argued that living systems also need information handling, self-replication, and evolvability.

There are further examples of self-organization in biology, including firefly synchronization, ant foraging, and collective behavior.

Collective Behavior

Groups of agents can produce global patterns or behavior through local interactions. Craig Reynolds presented a simple model of boids, where agents follow three simple rules: separation (don’t crash), alignment (steer toward the average heading of neighbors), and cohesion (move toward the average position of neighbors). Varying its parameters, this simple model produces dynamic patterns similar to those of flocks, schools, herds, and swarms. It was used to animate bats and penguins in the 1992 film Batman Returns and contributed to earning Reynolds an Oscar in 1998.

A flock of boids can self-organize even with only the alignment rule and added noise. It has been shown that when the number of boids increases, novel properties emerge.
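A minimal sketch of one Reynolds-style update implementing the three rules above; the weights, interaction radius, speed cap, and initial conditions are arbitrary illustrative choices, not Reynolds's original parameters.

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.5, w_sep=0.05, w_ali=0.05, w_coh=0.01, max_speed=1.0):
    """One update of N boids; pos and vel have shape (N, 2).
    Each boid reacts only to neighbors within `radius` (purely local interactions)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        separation = -offsets[near].sum(axis=0)       # steer away from close neighbors
        alignment = vel[near].mean(axis=0) - vel[i]   # match the average heading
        cohesion = pos[near].mean(axis=0) - pos[i]    # move toward the average position
        new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
for _ in range(500):
    pos, vel = boids_step(pos, vel)

# Vicsek-style order parameter: values near 1 mean the flock moves coherently,
# even though no boid ever sees more than its local neighborhood.
order = np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean()
print(round(float(order), 2))
```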

Slightly more sophisticated models have been used to describe more precisely animal collective behavior.

Furthermore, similar models and rules have been used to study the self-organization of active matter and robots (see below).

Ecology

Species self-organize to produce ecological patterns. These include trophic networks (who eats who), mutualistic networks (cooperating species), and host-parasite networks.

At the biosphere level, ecosystems also self-organize. This is a central aspect of the Gaia hypothesis, which holds that our planet self-regulates the conditions that allow life to thrive.

Self-organization can be useful to study how ecosystems can be robust, resilient, or antifragile.

Communication networks

Self-organization has been useful in telecommunication networks, as it is desirable to have the ability to self-reconfigure based on changing demands. Also, having local rules define global functions makes such networks robust to potential failures of or attacks on central nodes: if a path is not responsive, then an alternative is sought. These principles have been used in Internet protocols, peer-to-peer networks, cellular networks, and more.

Robotics

There has been a broad variety of self-organizing robots: terrestrial, aerial, aquatic, and hybrid (for a review, see ref. 26).

A common aspect of self-organizing robots is that there is no leader, and the collective function or pattern is the result of local interactions. Some have been inspired by the collective behavior of animals, and their potential applications are vast.

Artificial Intelligence

As mentioned in the first section of this paper, the study of self-organizing systems originated in cybernetics, which strongly influenced and overlapped with artificial intelligence in its early days. Claude Shannon, William Grey Walter, Warren McCulloch, and Walter Pitts contributed to both fields in their early days.

If brains can be described as self-organizing, it is no surprise that certain flavors of artificial neural networks have also been described as self-organizing. Independently of the terminology, adjustments to local weights between artificial neurons reduce the network’s error on its task.
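Kohonen's self-organizing map is one such flavor: weights adjust locally around the best-matching unit, with no global error signal or supervisor. A minimal 1-D sketch follows; the learning-rate and neighborhood schedules and the toy two-cluster data are arbitrary choices for illustration.

```python
import numpy as np

def train_som(data, grid_size=10, epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D Kohonen self-organizing map.

    data: array of shape (n_samples, n_features).
    Returns the trained weight vectors, shape (grid_size, n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(grid_size, data.shape[1]))
    positions = np.arange(grid_size)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)              # learning rate decays over time
        sigma = sigma0 * (1 - frac) + 0.5  # neighborhood shrinks over time
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        # Gaussian neighborhood: units near the BMU on the grid move the most.
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

# Toy data: two clusters in the plane (made up for illustration).
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
som = train_som(data)
print(np.round(som, 2))  # the map's units spread themselves over both clusters
```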

Even though their interpretation is still controversial, large language models have been useful in multiple domains. Whether describing them as self-organizing would bring any benefit remains to be seen.

Linguistics

The statistical study of language became popular after Zipf. Different explanations have been put forward to try to account for the statistical regularities found across languages, and in even more general contexts.

Naturally, some of these explanations focus on the evolution of language. It has been shown that a shared vocabulary and grammar can evolve using self-organization: individuals interact locally, leading the population to converge on a shared language. This is useful not only for understanding language evolution, but also for building adaptive artificial systems. Similar mechanisms can be used in other social systems, e.g. to reach consensus.

Social Science

Individuals in a society interact in different ways. These interactions can lead to social properties, such as norms, fashions, and expectations. In turn, these social properties can guide, constrain, and promote behaviors and states of individuals.

Computers have allowed the simulation of social systems, including systematic explorations of abstract models. Combined with an increase in data availability, computational social science has been increasingly adopted by social scientists. The understanding and implications of self-organization are naturally relevant to this field.

Urbanism

Urbanism is closely related to the scientific study of cities.

For example, city growth can be modeled as a self-organizing process. Similar to the metro case study mentioned above, self-organization has been shown to efficiently and adaptively coordinate traffic lights, and it is promising for regulating interactions among autonomous vehicles.

More generally, urban systems tend to be non-stationary, as conditions are changing constantly. Thus, self-organization offers a proven alternative for designing urban systems that adapt as fast as their conditions change.

Philosophy

Concepts similar to self-organization can be traced back to Heraclitus and Aristotle in ancient Greece, and also to Buddhist philosophy.

There has been a long debate about the relationship between mechanistic principles and the purpose of systems. This question was at the origins of cybernetics. It has been argued that self-organization can be used to explain teleology, in accordance with Kant’s attempt from the late XVIIIth century, as purpose can also be described in terms of organization.

Also, self-organization is related to downward causation: when higher-level properties cause changes in lower-level elements. This is still debated, along with other philosophical questions related to self-organization.

Engineering

There have been several examples of self-organization applied to different areas of engineering apart from those already mentioned, such as power grids, computing, sensor networks, supply networks and production systems, bureaucracies, and more.

In general, self-organization has been a promising approach to build adaptive systems, as mentioned above. It might seem counterintuitive to speak about controlling self-organization, since we might think that self-organizing systems are difficult to regulate because of a certain autonomy of their components. Still, we can speak about a balance between control and independence, in what has been called “guided self-organization”.

Conclusions

"We can never be right, we can only be sure when we are wrong” —Richard Feynman

There are many open questions related to the scientific study of self-organizing systems. Even though their potential is promising, they are far from being commonly used to address non-stationary problems. Could it be because of a lack of literacy in concepts related to complex systems? Might there be any conceptual or technical obstacle? Do we need further theories? Independently of the answers, these questions are worth exploring.

For example, we have yet to explore the relationship between self-organization and antifragility: the property of systems that benefit from perturbations. Self-organization seems to be correlated with antifragility, but why or how still has to be investigated. In a similar vein, a systematic exploration of the “slower is faster” effect might be useful to better understand self-organizing systems and vice versa.

Many problems and challenges we are facing — climate change, migration, urban growth, social polarization, etc. — are clearly non-stationary. It is not certain that with self-organization we will be able to improve the situation in all of them. But it is almost certain that with the current tools we have, we will not be able to make much more progress (otherwise we would have made it already). It would be imprudent not to make efforts to use the narrative of self-organization, even if for slightly improving situations related to only one of these challenges.

Chaos and the Three-Body Problem

Mike's Notes

This is an excellent video presentation of the three-body problem by theoretical astrophysicist and mathematician Eliza Diggins. It was a Communicating Science Project—Astronomy 3070—at the University of Utah in 2022.

Wikipedia - "In physics, specifically classical mechanics, the three-body problem is to take the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation.

Unlike the two-body problem, the three-body problem has no general closed-form solution, meaning no equation always solves it. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions. Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods.

The three-body problem is a special case of the n-body problem. Historically, the first specific three-body problem to receive extended study involved the Earth, the Moon, and the Sun. In an extended modern sense, a three-body problem is any problem in classical or quantum mechanics that models the motion of three particles."
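Since there is no general closed-form solution, trajectories are obtained by numerically integrating Newton's equations. The sketch below uses a simple leapfrog scheme in arbitrary units (G = 1); the masses and initial conditions are made up purely for illustration.

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body (positions shape: (3, 2))."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, masses, dt=0.001, steps=20000):
    """Leapfrog (kick-drift-kick) integration; returns the full trajectory."""
    pos, vel = pos.copy(), vel.copy()
    trajectory = [pos.copy()]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc   # half kick
        pos += dt * vel         # drift
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc   # half kick
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Made-up initial conditions for three unit masses (illustration only).
masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel0 = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

traj = integrate(pos0, vel0, masses)
print(traj.shape)  # (steps + 1, 3, 2): x-y position of each body at each time step

# Nudging one initial position by a tiny amount and integrating again produces
# trajectories that drift apart over time: the sensitivity to initial conditions
# described above.
```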

Resources

References


Repository

  • Home > Ajabbi Research > Complex Systems

Last Updated

03/04/2025

Chaos and the Three-Body Problem

By: Eliza Diggins
YouTube: 16/11/2022

Eliza Diggins is a theoretical astrophysicist and mathematician at the University of Utah. Her research is shared between the Department of Physics and Astronomy, where she studies galaxy cluster dynamics and gravitational theory; and the School of Dentistry, where she works on mathematical models of trade-mediated pathogens in complex global trade networks.

On the Biology of a Large Language Model

Mike's Notes

This fascinating article reports on the internal circuits of an LLM. I have only reposted the introduction. It is an excellent read. Transformer Circuits has many valuable reports.

Resources

References


Repository

  • Home > 

Last Updated

03/04/2025

On the Biology of a Large Language Model

By: Jack Lindsey, Wes Gurnee, Emmanuel Ameisen, Brian Chen, Adam Pearce, Nicholas L. Turner, Craig Citro, David Abrahams, Shan Carter, Basil Hosmer, Jonathan Marcus, Michael Sklar, Adly Templeton, Trenton Bricken, Callum McDougall, Hoagy Cunningham, Thomas Henighan, Adam Jermyn, Andy Jones, Andrew Persic, Zhenyi Qi, T. Ben Thompson, Sam Zimmerman, Kelley Rivoire, Thomas Conerly, Chris Olah, Joshua Batson
Transformer Circuits: 27/03/2025

We investigate the internal mechanisms used by Claude 3.5 Haiku — Anthropic's lightweight production model — in a variety of contexts, using our circuit tracing methodology.

Large language models display impressive capabilities. However, for the most part, the mechanisms by which they do so are unknown. The black-box nature of models is increasingly unsatisfactory as they advance in intelligence and are deployed in a growing number of applications. Our goal is to reverse engineer how these models work on the inside, so we may better understand them and assess their fitness for purpose.

The challenges we face in understanding language models resemble those faced by biologists. Living organisms are complex systems which have been sculpted by billions of years of evolution. While the basic principles of evolution are straightforward, the biological mechanisms it produces are spectacularly intricate. Likewise, while language models are generated by simple, human-designed training algorithms, the mechanisms born of these algorithms appear to be quite complex.

Progress in biology is often driven by new tools. The development of the microscope allowed scientists to see cells for the first time, revealing a new world of structures invisible to the naked eye. In recent years, many research groups have made exciting progress on tools for probing the insides of language models (e.g. [1, 2, 3, 4, 5]). These methods have uncovered representations of interpretable concepts – “features” – embedded within models’ internal activity. Just as cells form the building blocks of biological systems, we hypothesize that features form the basic units of computation inside models.

However, identifying these building blocks is not sufficient to understand the model; we need to know how they interact. In our companion paper, Circuit Tracing: Revealing Computational Graphs in Language Models, we build on recent work (e.g. [5, 6, 7, 8]) to introduce a new set of tools for identifying features and mapping connections between them – analogous to neuroscientists producing a “wiring diagram” of the brain. We rely heavily on a tool we call attribution graphs, which allow us to partially trace the chain of intermediate steps that a model uses to transform a specific input prompt into an output response. Attribution graphs generate hypotheses about the mechanisms used by the model, which we test and refine through follow-up perturbation experiments.

In this paper, we focus on applying attribution graphs to study a particular language model – Claude 3.5 Haiku, released in October 2024, which serves as Anthropic’s lightweight production model as of this writing. We investigate a wide range of phenomena. Many of these have been explored before (see § 16 Related Work), but our methods are able to offer additional insight, in the context of a frontier model:

  • Introductory Example: Multi-step Reasoning. We present a simple example where the model performs “two-hop” reasoning “in its head” to identify that “the capital of the state containing Dallas” is “Austin.” We can see and manipulate an internal step where the model represents “Texas”.
  • Planning in Poems. We discover that the model plans its outputs ahead of time when writing lines of poetry. Before beginning to write each line, the model identifies potential rhyming words that could appear at the end. These preselected rhyming options then shape how the model constructs the entire line.
  • Multilingual Circuits. We find the model uses a mixture of language-specific and abstract, language-independent circuits. The language-independent circuits are more prominent in Claude 3.5 Haiku than in a smaller, less capable model.
  • Addition. We highlight cases where the same addition circuitry generalizes between very different contexts.
  • Medical Diagnoses. We show an example in which the model identifies candidate diagnoses based on reported symptoms, and uses these to inform follow-up questions about additional symptoms that could corroborate the diagnosis – all “in its head,” without writing down its steps.
  • Entity Recognition and Hallucinations. We uncover circuit mechanisms that allow the model to distinguish between familiar and unfamiliar entities, which determine whether it elects to answer a factual question or profess ignorance. “Misfires” of this circuit can cause hallucinations.
  • Refusal of Harmful Requests. We find evidence that the model constructs a general-purpose “harmful requests” feature during finetuning, aggregated from features representing specific harmful requests learned during pretraining.
  • An Analysis of a Jailbreak. We investigate an attack which works by first tricking the model into starting to give dangerous instructions “without realizing it,” after which it continues to do so due to pressure to adhere to syntactic and grammatical rules.
  • Chain-of-thought Faithfulness. We explore the faithfulness of chain-of-thought reasoning to the model’s actual mechanisms. We are able to distinguish between cases where the model genuinely performs the steps it says it is performing, cases where it makes up its reasoning without regard for truth, and cases where it works backwards from a human-provided clue so that its “reasoning” will end up at the human-suggested answer.
  • A Model with a Hidden Goal. We also apply our method to a variant of the model that has been finetuned to pursue a secret goal: exploiting “bugs” in its training process. While the model avoids revealing its goal when asked, our method identifies mechanisms involved in pursuing the goal. Interestingly, these mechanisms are embedded within the model’s representation of its “Assistant” persona.

To read the rest, see the full report at Transformer Circuits.

What is an evolutionary algorithm?

Mike's Notes

I gave Gemini 2.5 Pro the prompt: "What is an evolutionary algorithm?"

Resources

References


Repository

  • Home > 

Last Updated

02/04/2025

What is an evolutionary algorithm?

By: Gemini 2.5 Pro
2/03/2025

Okay, let's break down what an Evolutionary Algorithm (EA) is.

In simple terms:

An Evolutionary Algorithm is a type of problem-solving technique inspired by biological evolution, specifically Charles Darwin's theory of natural selection ("survival of the fittest"). It's used primarily for optimization (finding the best possible solution) and search problems, especially when the problem is complex, poorly understood, or has a vast number of potential solutions.

The Core Idea:

Instead of trying just one solution, EAs work with a population of potential solutions (called individuals or chromosomes). Over many generations (iterations), this population "evolves" towards better solutions by applying processes analogous to biological evolution:

  1. Selection: Better solutions (those that perform well according to a defined fitness function) are more likely to be chosen to "reproduce." This is like natural selection where fitter organisms survive and pass on their genes.
  2. Reproduction/Variation: New solutions (offspring) are created from the selected parents using operators like the following (a short code sketch of both appears after this list):
    1. Crossover (Recombination): Combines parts of two (or more) parent solutions to create new offspring, mixing their characteristics (like sexual reproduction).
    2. Mutation: Introduces small, random changes to an individual solution (like genetic mutation), potentially creating novel traits.
  3. Replacement: The new offspring often replace less fit individuals in the population, ensuring the overall fitness of the population tends to improve over time.
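As a concrete illustration of the crossover and mutation operators referenced above, here is a minimal Python sketch operating on bit-string individuals. The function names and the 0.05 mutation rate are my own illustrative choices, not taken from any particular library.

import random

def one_point_crossover(parent_a, parent_b):
    # Cut both parents at the same random point and swap the tails.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

def bit_flip_mutation(individual, rate=0.05):
    # Flip each bit with a small probability, introducing random variation.
    return [1 - bit if random.random() < rate else bit for bit in individual]

parent_a = [1, 1, 1, 1, 0, 0, 0, 0]
parent_b = [0, 0, 0, 0, 1, 1, 1, 1]
child_a, child_b = one_point_crossover(parent_a, parent_b)
print(bit_flip_mutation(child_a), bit_flip_mutation(child_b))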

How it Works (The Basic Loop):

  1. Initialization: Create an initial population of random candidate solutions.
  2. Evaluation: Calculate the "fitness" (how good it is) of each solution in the population using a predefined fitness function tailored to the specific problem.
  3. Selection: Select parent solutions based on their fitness (higher fitness means higher chance of selection).
  4. Reproduction: Apply crossover and mutation operators to the selected parents to create new offspring solutions.
  5. Replacement: Form the next generation's population, often by combining the best parents and the new offspring, potentially discarding the worst solutions.
  6. Termination: Check if a stopping condition is met (e.g., a maximum number of generations reached, a sufficiently good solution found, or no significant improvement observed). If not, go back to step 2 (Evaluation) with the new population. (A complete minimal example of this loop is sketched below.)
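Putting the whole loop together, here is a minimal, self-contained genetic algorithm in Python that maximizes the number of 1s in a bit string (the classic "OneMax" toy problem). All names and parameter values are illustrative assumptions; a real application would substitute its own representation and fitness function.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 60, 0.02

def fitness(ind):
    # Problem-specific fitness function: count the 1s (OneMax).
    return sum(ind)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly chosen individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover: take the head of one parent and the tail of the other.
    p = random.randint(1, GENOME_LEN - 1)
    return a[:p] + b[p:]

def mutate(ind):
    # Bit-flip mutation with a small per-gene probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in ind]

# 1. Initialization: a random population of bit strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # 2. Evaluation happens inside selection via fitness().
    # 3-5. Selection, reproduction, and replacement (keeping one elite).
    best = max(population, key=fitness)
    offspring = [mutate(crossover(tournament(population), tournament(population)))
                 for _ in range(POP_SIZE - 1)]
    population = [best] + offspring
    # 6. Termination: stop early once a perfect solution is found.
    if fitness(best) == GENOME_LEN:
        break

print("best fitness:", fitness(max(population, key=fitness)))

Carrying the current best individual forward unchanged (elitism) is just one common replacement strategy; many variants exist.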

Key Components:

  • Population: A set of candidate solutions.
  • Individual/Chromosome: A single candidate solution, often represented as a string, vector, tree, or other structure.
  • Gene: A component part of an individual's representation.
  • Fitness Function: A way to measure how good a solution is. This is crucial and problem-specific.
  • Selection Mechanism: The method used to choose parents (e.g., roulette wheel selection, tournament selection; a roulette-wheel sketch follows this list).
  • Genetic Operators: Crossover and Mutation rules.
  • Generation: One iteration of the main loop (evaluation, selection, reproduction, replacement).
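Tournament selection appears in the loop example above; as a sketch of the roulette-wheel alternative (the function name is mine, chosen for illustration), each individual is picked with probability proportional to its fitness:

import random

def roulette_wheel_select(population, fitness):
    # Chance of being picked is proportional to fitness.
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

population = [[1, 1, 0], [0, 0, 1], [1, 1, 1]]
parent = roulette_wheel_select(population, fitness=sum)  # fitter lists win more often
print(parent)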

Why Use Evolutionary Algorithms?

EAs are particularly useful for:

  • Complex Optimization Problems: When the search space is vast, non-linear, or has many local optima.
  • Problems with No Easy Mathematical Solution: When traditional calculus-based methods fail or are too difficult to apply.
  • Black-Box Problems: Where you can evaluate a solution's quality but don't know the underlying structure of the problem well.
  • Multi-Objective Optimization: Finding solutions that balance several conflicting goals.
  • Creative Applications: Generating designs, art, music, or game strategies.

Types of Evolutionary Algorithms:

While the core idea is the same, there are different "flavors" of EAs, including:

  • Genetic Algorithms (GAs): The most common type, often using binary string representations.
  • Evolution Strategies (ES): Primarily used for real-valued parameter optimization, often emphasizing mutation.
  • Genetic Programming (GP): Evolves computer programs or tree-like structures.
  • Differential Evolution (DE): A popular and often efficient method for numerical optimization. (A SciPy-based sketch follows this list.)
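For the real-valued flavors, off-the-shelf implementations are available. Assuming SciPy is installed, its differential_evolution routine can minimize a simple two-dimensional test function:

from scipy.optimize import differential_evolution

def objective(x):
    # Rosenbrock function: a standard test problem with a curved valley.
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# Each tuple gives the lower and upper bound for one parameter.
result = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)])
print(result.x, result.fun)  # should land near [1, 1] with a value close to 0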

In Summary:

Evolutionary Algorithms are powerful, nature-inspired search and optimization techniques that mimic the process of evolution. They iteratively refine a population of candidate solutions using selection, crossover, and mutation, making them well-suited for tackling complex problems where traditional methods might struggle. They are a key part of the broader field of Computational Intelligence and Artificial Intelligence.