Mike's Notes
Carlos Gershenson's paper about self-organising systems was recently published in npj Complexity, a Nature Portfolio journal. Carlos is the editor of Complexity Digest.
I removed the references from the original paper because importing them into Google Blogger is messy. Please refer to the original paper for the missing information.
Resources
- https://www.nature.com/articles/s44260-025-00031-5
- https://www.nature.com/npjcomplex
- https://doi.org/10.1038/s44260-025-00031-5
- https://complexes.blogspot.com/
- https://tendrel.binghamton.edu/
- https://cssociety.org/home
- https://comdig.unam.mx/
- https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)
Repository
- Home > Ajabbi Research > Library > Subscriptions > Complexity Digest
- Home > Ajabbi Research > Library > Thermodynamics
Last Updated
05/04/2025
Self-organizing systems: what, how, and why?
Abstract
I present a personal account of self-organizing systems, framing relevant questions to better understand self-organization, information, complexity, and emergence. With this aim, I start with a notion and examples of self-organizing systems (what?), continue with their properties and related concepts (how?), and close with applications (why?) in physics, chemistry, biology, collective behavior, ecology, communication networks, robotics, artificial intelligence, linguistics, social science, urbanism, philosophy, and engineering.
What are self-organizing systems?
"Being ill defined is a feature common to all important concepts.” —Benoît Mandelbrot
I will not attempt to define a “self-organizing system”, as it involves the cybernetic problem of defining “system”, the informational problem of defining “organization”, and the ontological problem of defining “self”. Still, there are plenty of examples of systems that we can usefully call self-organizing: flocks of birds, schools of fish, swarms of insects, herds of cattle, and some crowds of people. In these animal examples, collective behavior is a product of the interactions of individuals, not determined by a leader or an external signal. There are also several examples from non-living systems, such as vortexes, crystallization, self-assembly, and pattern formation in general. In these cases, elements of a system also interact to achieve a global pattern.
Self-organization or similar concepts have been present since antiquity (see Section 3.12), so the idea itself is not new. Nevertheless, we still lack a proper conceptual framework to understand it. The term “self-organizing system” was coined by W. Ross Ashby in the early days of cybernetics. Ashby’s purpose was to describe deterministic machines that could change their own organization. Ever since, the concept has been used in a broad range of disciplines, including statistical mechanics, supramolecular chemistry, computer science, and artificial life.
There is an unavoidable subjectivity when speaking about self-organizing systems, as the same system can be described as self-organizing or not (see Section 2.1). Stafford Beer gave the following example: an ice cream at room temperature will thaw. This will increase its temperature and entropy, so it would be “self-disorganizing”. However, if we focus on the function of an ice cream, which is to be eaten, it would be “self-organizing”, because it approaches a pleasant temperature and consistency for savoring it, improving its “function”. Ashby also mentioned that one just needs to call the attractor of a dynamical system “organized”, and then almost any system will be self-organizing.
So, the question should not be whether a system is self-organizing, but rather (being pragmatic): when is it useful to describe a system as self-organizing? The answer will slowly unfold throughout this paper, but in short, it can be said that self-organization is a useful description when we are interested in describing systems at multiple scales, and in understanding how these scales affect each other. For example, collective motion and cyber-physical systems can benefit from such a description, compared to a single-scale narrative/model. This is common with complexity, as interactions can generate novel information that is present in neither initial nor boundary conditions, limiting predictability.
So rather than a definition, we can do with a notion: a system can be described as self-organizing when its elements interact to produce a global function or behavior. This is in contrast with centralized systems, where a single element or a few elements “control” the rest, or with merely distributed systems, where a global problem can be divided (reduced) and each element does its part, with no need to interact or to integrate partial solutions. Thus, self-organizing systems are a useful description when we want to relate individual behaviors and interactions to global patterns or functions. If we can describe a system fully (for our particular purposes) at a single scale, then self-organization could perhaps be identified, but it would be superfluous (not useful). And the “self” implies that the “control” comes from within the system, rather than from an external signal/controller that would explicitly tell elements what to do.
For example, we can decide to call a society “self-organizing” if we are interested in how individual interactions lead to the formation of fashion, ideologies, opinions, norms, and laws; but at the same time, how the emerging global properties affect the behavior of the individuals. If we were interested in an aggregate property of a population, e.g., its average height, then calling the group of individuals “self-organizing” would not give any extra information, and thus would not be useful.
It should be stressed that self-organization is not a property of systems per se. It is a way of describing systems, i.e., a narrative.
How can self-organizing systems be measured?
"It is the function of science to discover the existence of a general reign of order in nature and to find the causes governing this order. And this refers in equal measure to the relations of man — social and political — and to the entire universe as a whole.” —Dmitri Mendeleev
Even though self-organization has been described intuitively since antiquity — the seeds of the narrative were present — the proper tools for studying it became available only recently: computers. Since self-organizing systems require the explicit description of elements and interactions, our brains, blackboards, and notebooks are too limited to handle the number of variables required to study the properties of self-organizing systems. It was only through the relatively recent development of information technology that we were able to study the richness of self-organization, just as we could not study the microcosmos before microscopes or the macrocosmos before telescopes.
Information
Computation can be generally described as the transformation of information, although Alan Turing formally defined computable numbers with the purpose of proving limits of formal systems (in particular, Hilbert’s decision problem). In the same environment where the first digital computers were built in the mid XXth century, Claude Shannon defined information to quantify its transmission, showing that information could be reliably transmitted through unreliable communication channels. As it turned out, Shannon’s information H is mathematically equivalent to Boltzmann-Gibbs entropy:
$$H=-K\sum_{i=1}^{n}p_{i}\log p_{i},$$
(1)
where K is a positive constant and p_i is the probability of receiving symbol i from a finite alphabet of size n. This dimensionless measure will be maximal for a homogeneous probability distribution, and minimal when only one symbol has a probability p = 1. In binary, we have only two symbols (n = 2), and information would be minimal with a string of only ones or only zeroes (‘1111…’ or ‘0000…’). This implies that having more bits will not tell us anything new, because we already know what the next bits will be (assuming the probability distribution does not change). With a random-like string, such as a sequence of coin flips (‘11010001011011001010…’), information is maximal, because no matter how much previous information we have (full knowledge of the probability distribution), we will not be able to predict the next bit better than chance.
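As a quick illustration (a minimal Python sketch, not from the paper; taking K = 1 and log base 2, so H is in bits per symbol), the two strings above land at the opposite extremes of Eq. (1):

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Shannon's H (Eq. 1) with K = 1 and log base 2: bits per symbol."""
    n = len(s)
    # p * log2(1/p) is algebraically the same as -p * log2(p).
    return sum(c / n * log2(n / c) for c in Counter(s).values())

print(shannon_entropy("1111111111"))            # 0.0: fully predictable
print(shannon_entropy("11010001011011001010"))  # 1.0: coin-flip-like, maximal
```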
In parallel, Norbert Wiener — one of the founders of cybernetics — proposed an alternative measure of information, which was basically the same as Shannon’s, but without the minus sign. Wiener’s information measured what one knows already, so it is minimal when we have a random string (homogeneous probability distribution) because all the information we already have is “useless” (to predict the next symbol), and maximal when we have a single symbol repeating (maximally biased probability distribution), because the information we have allows us to predict exactly the next symbol. Nevertheless, Shannon’s information is the one that everyone has used, and we will do the same.
Shannon’s information is also known as Shannon’s entropy, which can be also used as a measure of “disorder”. We already saw that it is maximal for random strings, and thus minimal for particularly ordered strings. Then, we can use the negative of Shannon’s information (which would be Wiener’s information) as a measure of organization. If the organization is a result of internal dynamics, then we can also use this measure for self-organization.
Nevertheless, just like with many measures, the interpretation depends on how the observer performs the measurement. Figure 1 shows how the same system, divided into four microstates or two macrostates (with probabilities represented as shades of gray) can increase its entropy/information (become more homogeneous) or decrease it, depending on how it is observed.
Fig. 1: The same system, observed at different levels or with different coarse grainings can be said to be disorganizing (entropy increasing) or organizing (entropy decreasing), for arbitrary initial and final states.
Probabilities of the system being in a state (a1, a2, b1, and b2 at the lower level, which can be aggregated in different ways at a higher level) are represented as shades of gray, so one can observe which configurations are more homogeneous (i.e., with higher entropy): if there is a high contrast in different states (such as between A' and B' in their initial state), then this implies more organization (less entropy), while similar shades (as between A' and B' in their final state) imply less organization (more entropy).
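A minimal sketch makes the figure’s point concrete (the micro-state probabilities below are illustrative stand-ins for the shades of gray in Fig. 1, not values from the paper): the same micro-level change decreases entropy under one coarse-graining and increases it under the other.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Illustrative micro-level probabilities for states a1, a2, b1, b2.
initial = {"a1": 0.45, "a2": 0.05, "b1": 0.45, "b2": 0.05}
final   = {"a1": 0.40, "a2": 0.40, "b1": 0.10, "b2": 0.10}

def coarse_grain(micro, groups):
    """Aggregate micro-state probabilities into macro-state probabilities."""
    return [sum(micro[s] for s in group) for group in groups]

groupings = {
    "A/B  ": [("a1", "a2"), ("b1", "b2")],   # A = {a1, a2}, B = {b1, b2}
    "A'/B'": [("a1", "b1"), ("a2", "b2")],   # A' = {a1, b1}, B' = {a2, b2}
}

for name, groups in groupings.items():
    h0 = entropy(coarse_grain(initial, groups))
    h1 = entropy(coarse_grain(final, groups))
    trend = "organizing" if h1 < h0 else "disorganizing"
    print(f"{name}: {h0:.2f} -> {h1:.2f} bits ({trend})")
# A/B  : 1.00 -> 0.72 bits (organizing)
# A'/B': 0.47 -> 1.00 bits (disorganizing)
```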
Still, the fact that self-organization is partially subjective does not mean that it cannot be useful. We just have to be aware that a shared description and interpretation should be agreed upon.
Complexity
Self-organizing systems are intimately related to complex systems. Again, the question is not so much whether a system is self-organizing or complex, but when is it useful to describe it as such. This is because most systems can be described as complex or not, depending on our context and purposes.
Etymologically, complexity comes from the Latin plexus, which could be translated as entwined. We can say that complex systems are those where interactions make it difficult to separate the components and study them in isolation, because of their interdependence. These interactions can generate novel information that limits predictability in an inherent way, as it is present in neither initial nor boundary conditions. In other words, there is no shortcut to the future: we have to go through all intermediate steps, as interactions partially determine the future states of the system.
For example, markets tend to be unpredictable because different agents make decisions depending on what they think other agents will decide. But since it is not possible to know what everyone will decide in advance, the predictability of markets is rather limited.
Complex systems can be confused with complicated or chaotic systems. Perhaps they are easier to distinguish by considering their opposites: complicated systems are the opposite of easy ones, chaotic systems (sensitive to initial conditions) are the opposite of robust ones, while complex systems are the opposite of separable ones.
Given the above notion of self-organizing systems, all of them would also be complex systems, but not necessarily vice versa. This is because interactions are an essential aspect of self-organizing systems, which would make them complex by definition. However, we could have a description of a complex system whose elements interact but do not produce a global pattern or function of interest within the timeframe we care about. In that case, the narrative of complexity would be useful, but not that of self-organization. Nevertheless, understanding complexity should be essential for the study of self-organization.
Emergence
One of the most relevant and controversial properties of complex systems is emergence. It could be seen as problematic because, in the last century, some described emergent properties as “surprising”. Emergence would then be a measure of our ignorance, to be reduced once we understood the mechanisms behind emergent properties. Also, there are different flavors of emergence, some easier to study and accept than others. But in general, emergence can be described as information that is present at one scale and not at another scale.
For example, we can have full knowledge of the properties of carbon atoms. But if we focus only on the atoms, i.e., without interactions, we will not be able to know whether they are part of graphite, diamond, graphene, buckyballs, etc. (all composed only of carbon atoms), which have drastically different macroscopic properties. Thus, we cannot derive the conductivity, transparency, or density of these materials by looking only at the atomic properties of carbon. The difference lies precisely in how the atoms are organized, i.e., how they interact.
If emergence can be described in terms of information, Shannon’s measure can be used (understanding that we are measuring only the information that is absent from another scale). Thus, emergence would be the opposite of self-organization. This might seem contradictory, as emergence and self-organization are usually both present in complex systems. But if we take each to its extreme, we can see that maximum emergence (information) occurs when there is (quasi)randomness, so no organization. Maximum (self-)organization occurs when entropy is minimal (no new information, and thus, no emergence). Because of this, complexity can be seen as a balance between emergence and self-organization.
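This balance can be operationalized. A minimal sketch, assuming the convention that emergence E is Shannon’s information normalized to [0, 1], self-organization S = 1 - E, and complexity C = 4ES (a product form proposed in the complexity-measures literature; these specific formulas are not spelled out in this text):

```python
from math import log2

def emergence(probs, n):
    """E: Shannon information normalized by its maximum, log2(n)."""
    h = sum(p * log2(1 / p) for p in probs if p > 0)
    return h / log2(n)

def complexity(probs, n):
    """C = 4*E*S with S = 1 - E: zero for pure order or pure randomness,
    maximal (1) when emergence and self-organization balance at E = S = 0.5."""
    e = emergence(probs, n)
    return 4 * e * (1 - e)

for probs in ([1.0, 0.0], [0.5, 0.5], [0.89, 0.11]):
    print(probs, round(complexity(probs, 2), 2))
# [1.0, 0.0]   -> 0.0  (ordered: maximal self-organization, no emergence)
# [0.5, 0.5]   -> 0.0  (random: maximal emergence, no self-organization)
# [0.89, 0.11] -> 1.0  (balanced: near-maximal complexity)
```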
Why should we use self-organizing systems?
"It is as though a puzzle could be put together simply by shaking its pieces.” —Christian De Duve
Self-organization can be used to build adaptive systems. This is useful for non-stationary problems, i.e., those that change in time. Since interactions can generate novel information, complexity often leads to non-stationarity. Thus, when a problem changes, the elements of a self-organizing system can adapt through their interactions. Designers then do not need to specify the problem precisely beforehand, nor how it will change, but only to define/regulate interactions so as to achieve a desired goal.
For example, if we want to improve passenger flow in public transportation systems, we cannot really change the elements of the system (the passengers). Still, we can change how they interact. In 2016, we successfully implemented such a change to regulate boarding and alighting in the Mexico City metro. In a similar way, we cannot change the teachers in an education system, but we can change their interactions to improve learning. We cannot change politicians, but we can regulate their interactions to reduce corruption and improve efficiency. We cannot change businesspeople, but we can control their interactions to promote sustainable economic growth.
There have been many other examples of applications of self-organization in different fields; the following is only a partial enumeration.
Physics
The Industrial Revolution led to the formalization of thermodynamics in the XIXth century. The second law of thermodynamics states that an isolated system will tend to thermal equilibrium. In other words, it loses organization, as heterogeneities become homogeneous and entropy is eventually maximized. Still, non-equilibrium thermodynamics has studied how open systems can self-organize.
Lasers can be seen as self-organized light; Hermann Haken used them as an inspiration to propose synergetics, which studies self-organization in open systems far from thermodynamic equilibrium and is related to phase transitions, where criticality is found.
Self-organized criticality (SOC) was proposed to explain why power laws, scale-free-like distributions, and fractals are so prevalent in nature. SOC was illustrated with the sandpile model, where grains accumulate and lead to avalanches with a scale-free (critical) size distribution. Similarly, self-organization has been used to describe granular media.
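A minimal sketch of the sandpile model (Bak-Tang-Wiesenfeld toppling rules on a small grid; the grid size, threshold, and number of grains are illustrative choices, not from the paper):

```python
import random

SIZE, TOPPLE_AT = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random site and topple until stable.
    Returns the avalanche size (total number of topplings)."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    avalanche = 0
    unstable = [(i, j)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < TOPPLE_AT:
            continue
        grid[i][j] -= TOPPLE_AT          # topple: send one grain to each neighbor
        avalanche += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains at the edges fall off
                grid[ni][nj] += 1
                unstable.append((ni, nj))
    return avalanche

sizes = [drop_grain() for _ in range(50_000)]
avalanches = [s for s in sizes if s > 0]
print(f"{len(avalanches)} avalanches, largest: {max(avalanches)}")
# A log-log histogram of avalanche sizes approximates a power law,
# without any parameter being tuned to a critical value.
```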
Generalizing principles of granular media, self-organization can be used to describe and design “optimal” configurations in biological, social, and economic systems.
Chemistry
Around 1950, Boris P. Belousov was interested in a simplified version of the Krebs cycle. He found that a solution of citric acid in water with acidified bromate and yellow ceric ions produced an oscillating reaction. His attempts to publish his findings were rejected, on the argument that they violated the second law of thermodynamics (which applies only to systems at equilibrium, while this system is far from equilibrium). In the 1960s, Anatol M. Zhabotinsky began working on this reaction, and only in the late 1960s and 1970s did the Belousov-Zhabotinsky reaction become known outside the Soviet Union. Since then, many chemical systems far from equilibrium have been studied. Some have been characterized as self-organizing, because they are able to use free energy to increase their organization.
More generally, self-organization has been used to describe pattern formation, which includes self-assembly.
Molecules are basically atoms joined by covalent bonds. Supramolecular chemistry studies chemical structures formed by weaker forces (van der Waals forces, hydrogen bonds, electrostatic interactions), and these can also be described in terms of self-organization.
Biology
The study of form in biology (morphogenesis) is far from new, but far from complete.
Alan Turing was one of the first to describe morphogenesis with differential equations. Morphogenesis can be seen as pattern formation with local stimulation and long-range inhibition (skin patterns, scales), or as fractal growth (capillaries, neurons). These processes are more or less well understood. Still, things become more sophisticated for embryogenesis and regeneration, where many open questions remain.
Humberto Maturana and Francisco Varela proposed autopoiesis (self-production) to describe the emergence of living systems from complex chemistry. Autopoiesis can be seen as a special case of self-organization (though Maturana disagreed), because molecules self-organize to produce membranes and metabolism. Moreover, it can be argued that living systems also need information handling, self-replication, and evolvability.
There are further examples of self-organization in biology, including firefly synchronization, ant foraging, and collective behavior.
Collective Behavior
Groups of agents can produce global patterns or behaviors through local interactions. Craig Reynolds presented a simple model of boids, where agents follow three simple rules: separation (don’t crash), alignment (head towards the average heading of neighbors), and cohesion (go towards the average position of neighbors). Varying its parameters, this simple model produces dynamic patterns similar to those of flocks, schools, herds, and swarms. It was used to animate bats and penguins in the 1992 film Batman Returns, and contributed to earning Reynolds an Oscar in 1998.
A flock of boids self-organizes even with only the alignment rule and added noise. It has been shown that when the number of boids increases, novel properties emerge.
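A minimal sketch of this alignment-plus-noise dynamics (in the spirit of the Vicsek model; all parameter values below are illustrative assumptions):

```python
import math
import random

N, L, R, SPEED, NOISE, STEPS = 100, 10.0, 1.0, 0.1, 0.3, 200

x = [random.uniform(0, L) for _ in range(N)]
y = [random.uniform(0, L) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def step():
    """Each boid adopts the average heading of neighbors within R, plus noise."""
    global theta
    new_theta = []
    for i in range(N):
        sx = sy = 0.0
        for j in range(N):
            # Minimum-image distance on a periodic (wrap-around) arena.
            dx = (x[j] - x[i] + L / 2) % L - L / 2
            dy = (y[j] - y[i] + L / 2) % L - L / 2
            if dx * dx + dy * dy <= R * R:
                sx += math.cos(theta[j])
                sy += math.sin(theta[j])
        new_theta.append(math.atan2(sy, sx) + random.uniform(-NOISE / 2, NOISE / 2))
    theta = new_theta
    for i in range(N):
        x[i] = (x[i] + SPEED * math.cos(theta[i])) % L
        y[i] = (y[i] + SPEED * math.sin(theta[i])) % L

def polarization():
    """Order parameter: 1 = all aligned, near 0 = random headings."""
    vx = sum(map(math.cos, theta)) / N
    vy = sum(map(math.sin, theta)) / N
    return math.hypot(vx, vy)

for _ in range(STEPS):
    step()
print(f"polarization after {STEPS} steps: {polarization():.2f}")  # close to 1
```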
Slightly more sophisticated models have been used to describe more precisely animal collective behavior.
Furthermore, similar models and rules have been used to study the self-organization of active matter and robots (see below).
Ecology
Species self-organize to produce ecological patterns. These include trophic networks (who eats whom), mutualistic networks (cooperating species), and host-parasite networks.
At the biosphere level, ecosystems also self-organize. This is a central aspect of the Gaia hypothesis, which holds that our planet self-regulates the conditions that allow life to thrive.
Self-organization can be useful to study how ecosystems can be robust, resilient, or antifragile.
Communication networks
Self-organization has been useful in telecommunication networks, as it is desirable to have the ability to self-reconfigure based on changing demands. Also, having local rules define global functions makes networks robust to failures of or attacks on central nodes: if a path is not responsive, an alternative is sought. These principles have been used in Internet protocols, peer-to-peer networks, cellular networks, and more.
Robotics
There has been a broad variety of self-organizing robots: terrestrial, aerial, aquatic, and/or hybrid (for a review see ref. 26).
A common aspect of self-organizing robots is that there is no leader; the collective function or pattern is the result of local interactions. Some have been inspired by the collective behavior of animals, and their potential applications are vast.
Artificial Intelligence
As mentioned in the first section of this paper, the study of self-organizing systems originated in cybernetics, which strongly influenced and overlapped with artificial intelligence in its early days. Claude Shannon, William Grey Walter, Warren McCulloch, and Walter Pitts contributed to both fields.
If brains can be described as self-organizing, it is no surprise that certain flavors of artificial neural networks have also been described as self-organizing. Independently of the terminology, adjustments to the local weights between artificial neurons lead to error reduction in the network’s task.
Even though their interpretation is still controversial, large language models have been useful in multiple domains. Whether describing them as self-organizing would bring any benefit remains to be seen.
Linguistics
The statistical study of language became popular after Zipf. Different explanations have been put forward for the statistical regularities found across languages, and in even more general contexts.
Naturally, some of these explanations focus on the evolution of language. It has been shown that a shared vocabulary and grammar can evolve through self-organization: individuals interact locally, leading the population to converge on a shared language. This is useful not only for understanding language evolution, but also for building adaptive artificial systems. Similar mechanisms can be used in other social systems, e.g., to reach consensus.
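A minimal sketch of such convergence (the “naming game”, a common minimal model of vocabulary self-organization; the agent and step counts are illustrative assumptions):

```python
import random

N_AGENTS, STEPS = 50, 30_000
vocab = [set() for _ in range(N_AGENTS)]   # each agent's known names for one object
counter = 0                                 # for inventing fresh names

for _ in range(STEPS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not vocab[speaker]:                  # invent a name if none is known
        counter += 1
        vocab[speaker].add(f"word{counter}")
    word = random.choice(sorted(vocab[speaker]))
    if word in vocab[hearer]:               # success: both drop competing names
        vocab[speaker] = {word}
        vocab[hearer] = {word}
    else:                                   # failure: the hearer learns the name
        vocab[hearer].add(word)

distinct = set().union(*vocab)
print(f"distinct names in population: {len(distinct)}")  # typically converges to 1
```

No agent sees the whole population, yet purely local speaker-hearer interactions drive the group to a single shared name.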
Social Science
Individuals in a society interact in different ways. These interactions can lead to social properties, such as norms, fashions, and expectations. In turn, these social properties can guide, constrain, and promote behaviors and states of individuals.
Computers have allowed the simulation of social systems, including systematic explorations of abstract models. Combined with an increase in data availability, computational social science has been increasingly adopted by social scientists. The understanding and implications of self-organization are naturally relevant to this field.
Urbanism
Urbanism is similar to the scientific study of cities.
For example, city growth can be modeled as a self-organizing process. Similar to the metro case study mentioned above, self-organization has been shown to coordinate traffic lights efficiently and adaptively, and it is promising for regulating interactions among autonomous vehicles.
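The idea behind such self-organizing traffic lights can be sketched as follows (a heavily simplified single-intersection version; the threshold, minimum green time, and demand model are illustrative assumptions, not the published method’s exact parameters): cars waiting on red build up “pressure”, and the light switches when that pressure crosses a threshold.

```python
import random

THRESHOLD, MIN_GREEN, STEPS = 40, 5, 100

green, pressure, time_green = "NS", 0, 0

for t in range(STEPS):
    # Hypothetical demand: cars arriving at each direction this time step,
    # with the EW direction busier on average.
    arrivals = {"NS": random.randint(0, 3), "EW": random.randint(0, 5)}
    red = "EW" if green == "NS" else "NS"
    pressure += arrivals[red]       # cars waiting on red accumulate pressure
    time_green += 1
    # Switch when the red side has waited enough, but enforce a minimum
    # green time so the light does not flicker.
    if pressure >= THRESHOLD and time_green >= MIN_GREEN:
        green, pressure, time_green = red, 0, 0
        print(f"t={t}: green switches to {green}")
# The busier EW direction fills its counter faster, so it gets green sooner
# and holds it longer: demand-proportional timing with no fixed schedule.
```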
More generally, urban systems tend to be non-stationary, as conditions are changing constantly. Thus, self-organization offers a proven alternative for designing urban systems that adapt as fast as their conditions change.
Philosophy
Concepts similar to self-organization can be traced back to ancient Greece, in Heraclitus and Aristotle, and also to Buddhist philosophy.
There has been a long debate about the relationship between mechanistic principles and the purpose of systems. This question was at the origins of cybernetics. It has been argued that self-organization can be used to explain teleology, in accordance with Kant’s attempt from the late XVIIIth century, as purpose can also be described in terms of organization.
Also, self-organization is related to downward causation: when higher-level properties cause changes in lower-level elements. This is still debated, along with other philosophical questions related to self-organization.
Engineering
There have been several examples of self-organization applied to different areas of engineering apart from those already mentioned, such as power grids, computing, sensor networks, supply networks and production systems, bureaucracies, and more.
In general, self-organization has been a promising approach to build adaptive systems, as mentioned above. It might seem counterintuitive to speak about controlling self-organization, since we might think that self-organizing systems are difficult to regulate because of a certain autonomy of their components. Still, we can speak about a balance between control and independence, in what has been called “guided self-organization”.
Conclusions
"We can never be right, we can only be sure when we are wrong” —Richard Feynman
There are many open questions related to the scientific study of self-organizing systems. Even though their potential is promising, they are far from being commonly used to address non-stationary problems. Could it be because of a lack of literacy in concepts related to complex systems? Might there be some conceptual or technical obstacle? Do we need further theories? Independently of the answers, these questions are worth exploring.
For example, we have yet to explore the relationship between self-organization and antifragility: the property of systems that benefit from perturbations. Self-organization seems to be correlated with antifragility, but why and how still have to be investigated. In a similar vein, a systematic exploration of the “slower is faster” effect might be useful to better understand self-organizing systems, and vice versa.
Many problems and challenges we are facing — climate change, migration, urban growth, social polarization, etc. — are clearly non-stationary. It is not certain that self-organization will let us improve the situation in all of them. But it is almost certain that with the current tools we have, we will not be able to make much more progress (otherwise we would have made it already). It would be imprudent not to make efforts to use the narrative of self-organization, even if it only slightly improves the situation in one of these challenges.