From imitating swarm intelligence to artificial synapses engineered to compute like the human brain, many of our technological leaps are inspired by behaviors and systems created in nature. Complex systems are particularly fascinating, as they are all around us, and can affect our lives in unforeseen ways. So what exactly is emergence in complex systems? Adams, who researches emergence in complex systems, first explains that complex systems are composed of many individual component parts that interact with each other and with their local environment.
Adams uses ants as an analogy: ants are relatively simple components in the complex system of the ant colony. Collectively, a colony accomplishes sophisticated tasks such as building nests and foraging for food, yet no single ant has the impulse or knowledge to undertake such collective tasks on its own. Adams invokes the research of computer scientist and physicist Stephen Wolfram here, and his principle of computational irreducibility, which states that it is impossible to predict what a complex system will do except by carrying the computation through as many steps as the evolution of the system itself.
This certainly does not make research any easier, but this resistance to simplification is itself a fundamental feature of complex systems.
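Wolfram's elementary cellular automata make the point tangible. The sketch below is my own illustration, not code from the article, and the width and step count are arbitrary choices; it runs rule 30 from a single live cell, and in general there is no shortcut to the pattern at step n other than computing all n steps:

```python
# Elementary cellular automaton rule 30, a standard example of
# computational irreducibility.
RULE = 30
WIDTH, STEPS = 79, 40

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # The rule's bits act as a lookup table indexed by each cell's
    # three-cell neighborhood (left, center, right), with wraparound edges.
    row = [(RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```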
Emergence and complexity are not the same thing, however. Emergence seems to happen when the system has evolved to some critical point: a collective then arises whose behavior as a whole is no longer correlated with the behavior of its individual components.
In this way, the system maintains its decentralized character yet can act as a single entity. Tying these concepts back to computational systems, any algorithms expressed by the individual components must necessarily be simple, distributed, and scalable. While we have many biological analogs of computational problems, Adams cautions that one cannot apply solutions from biological systems to computers, or to more abstract problems like artificial intelligence, without a full understanding of the environmental pressures that prompted those biological solutions in the first place.
So while any system may begin with a simple set of components, under the right conditions those components can nevertheless be enough to generate a diverse range of differently scaled systems, whether in nature or in computing.
Consider, for instance, a cellular-automaton model of a fluid, in which simulated molecules interact according to simple local rules. Experiments show that such a model produces laminar flow, vortex streets, and even turbulence that is indistinguishable from the behavior of real fluids. Although the detailed rules of interaction are very different from the interactions of real molecules, the emergent phenomena are the same.
The emergent phenomena can be created without understanding the details of the forces between the molecules or the equations that describe the flow of the fluid. The recreation of intricate patterns of ebbs and flows within a fluid offers an example of how it is possible to produce a phenomenon without fully understanding it. But the model was constructed by physicists who knew a lot about fluids. That knowledge helped to determine which features of the physical system were important to implement, and which were not.
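As a rough illustration of this kind of model, the sketch below implements an HPP-style lattice gas, one of the simplest cellular-automaton fluids. It is my own minimal reconstruction, not the model referred to in the text, and the grid size and density are arbitrary:

```python
import numpy as np

# HPP-style lattice gas: each site holds at most one particle per direction.
rng = np.random.default_rng(0)
ROWS, COLS = 64, 64
E, W, N, S = range(4)
cells = rng.random((4, ROWS, COLS)) < 0.2   # cells[d]: particles moving in direction d

def step(cells):
    # Collision: exactly two particles meeting head-on scatter at right
    # angles; particle number and momentum are conserved at every site.
    ew = cells[E] & cells[W] & ~cells[N] & ~cells[S]
    ns = cells[N] & cells[S] & ~cells[E] & ~cells[W]
    flip = ew | ns
    for d in (E, W, N, S):
        cells[d] ^= flip                    # swap an E/W pair for an N/S pair, or vice versa
    # Streaming: every particle moves one site in its direction of travel
    # (periodic boundaries for simplicity).
    cells[E] = np.roll(cells[E], 1, axis=1)
    cells[W] = np.roll(cells[W], -1, axis=1)
    cells[N] = np.roll(cells[N], -1, axis=0)
    cells[S] = np.roll(cells[S], 1, axis=0)
    return cells

for _ in range(200):
    cells = step(cells)
density = cells.sum(axis=0)                 # the macroscopic field in which flow emerges
```

Nothing in these rules mentions pressure, viscosity, or the Navier-Stokes equations; whatever fluid behavior appears is emergent.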
Physics is an unusually exact science. Perhaps a better example of an emergent system which we can simulate with only a limited understanding is evolutionary biology. We understand, in a weak sense, how creatures with Mendelian patterns of inheritance and different propensities for survival can evolve toward better fitness in their environments. In certain simple situations we can even write down equations that describe how quickly this adaptation will take place.
But there are many gaps in our understanding of the processes of evolution. We can explain in terms of natural selection why flying animals have light bones, but we cannot explain why certain animals have evolved flight and others have not. We have some qualitative understanding of the forces that cause evolutionary change, but except in the simplest cases, we cannot explain the rate or even the direction of that change.
In spite of these limitations, our understanding is sufficient to write programs of simulated evolution that show interesting emergent behaviors. For example, I have recently been using an evolutionary simulation to evolve programs to sort numbers. In this system, the genetic material of each simulated individual is interpreted as a program specifying a pattern of comparisons and exchanges.
The probability of an individual's survival in the system depends on the efficiency and accuracy with which its program sorts numbers. Surviving individuals produce offspring by sexual recombination of their genetic material, with occasional random mutation. After tens of thousands of generations, a population of hundreds of thousands of such individuals will evolve very efficient sorting programs.
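A toy version of such a system fits in a few dozen lines. The sketch below is my own reconstruction, not the author's simulation, and every parameter (list length, genome length, population size, mutation rate) is an arbitrary stand-in. A genome is a list of index pairs, executing it compare-exchanges those positions, and fitness here is simply sorting accuracy on random test lists (the efficiency criterion is omitted, since these genomes are fixed-length):

```python
import random

N = 8             # length of the lists to be sorted
GENOME_LEN = 40   # compare-exchange steps per individual
POP = 200

def execute(genome, data):
    # Interpret the genetic material as a program of comparisons and exchanges.
    data = list(data)
    for i, j in genome:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def fitness(genome, tests):
    # Fraction of test lists the program sorts correctly.
    return sum(execute(genome, t) == sorted(t) for t in tests) / len(tests)

def offspring(a, b):
    # Sexual combination of two parents' genetic material, plus rare mutation.
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]
    if random.random() < 0.2:
        child[random.randrange(GENOME_LEN)] = (random.randrange(N), random.randrange(N))
    return child

population = [[(random.randrange(N), random.randrange(N)) for _ in range(GENOME_LEN)]
              for _ in range(POP)]
for generation in range(500):
    tests = [random.sample(range(100), N) for _ in range(20)]
    population.sort(key=lambda g: fitness(g, tests), reverse=True)
    survivors = population[:POP // 2]      # survival depends on sorting ability
    population = survivors + [offspring(random.choice(survivors), random.choice(survivors))
                              for _ in range(POP - len(survivors))]
```

Even at this scale the products are opaque: the surviving genomes are tangles of comparisons that sort correctly without resembling any algorithm a programmer would write.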
Although I wrote the simulation producing these sorting programs, I do not understand in detail how they were produced or how they work. If the simulation had not produced working programs, I would have had very little idea of how to fix it. The fluid-flow and simulated-evolution examples suggest that it is possible to make a great deal of use of a small amount of understanding. The emergent behaviors exhibited by these systems are a consequence of the simple underlying rules defined by the program.
Although the systems succeed in producing the desired results, their detailed behaviors are beyond our ability to analyze and predict. One can imagine that if a similar process produced a system of emergent intelligence, we would have a similarly limited understanding of how it worked.
My own guess is that such an emergent system would not be an intelligent system itself, but rather the metabolic substrate on which intelligence might grow. In terms of the apes and the songs (the parable in which self-propagating songs, or ideas, live in the minds of the apes that host them), the emergent portion of the system would play the role of the ape, or at least that part of the ape that hosts the songs. This artificial mind would need to be inoculated with human knowledge.
I imagine this process to be not so different from teaching a child. This would be a tricky and uncertain procedure since, like a child, this emergent mind would presumably be susceptible to bad ideas as well as good. The result would be not so much an artificial intelligence, but rather a human intelligence sustained within an artificial mind. Of course, I understand that this is just a dream. And I will admit that I am more propelled by hope than by the probability of success.
But if, within this artificial mind, the seed of human knowledge begins to sustain itself and grow of its own accord, then for the first time human thought will live free of bones and flesh, giving this child of mind an earthly immortality denied to us. Attempts to create emergent intelligence, at least those that are far enough in the past for us to judge, have been disappointing. Many computational systems, such as homeostats, perceptrons, and cellular automata exhibit clear examples of emergent behavior, but that behavior falls far short of intelligence.
A perceptron, for example, is a collection of artificial neurons that can recognize simple patterns. Considerable optimism was generated in the 1960s when it was proved that anything a perceptron could recognize, it could learn to recognize from examples.
This was followed by considerable disappointment when it was realized that the set of things that could be recognized at all was very limited. What appeared to be complicated behavior of the system turned out in the final analysis to be surprisingly simple.
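For concreteness, here is a single perceptron trained with the classic error-correction rule; the task (logical AND) and the parameters are my own illustrative choices, not from the text. It learns any linearly separable pattern from examples, but the same loop never converges on a pattern like XOR, which is exactly the limitation behind the disappointment:

```python
import numpy as np

# Perceptron learning rule on a linearly separable pattern (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                 # target: AND of the two inputs
w, b = np.zeros(2), 0.0

for _ in range(20):                        # a few passes over the examples
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)         # threshold unit
        w += (target - pred) * xi          # error-driven weight update
        b += target - pred

print([int(w @ xi + b > 0) for xi in X])   # -> [0, 0, 0, 1]
```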
In spite of such disappointments, I believe that the notion of emergence contains an element of truth, an element that can be isolated and put to use. A helpful analogy is the brewing of beer. The brewmaster creates this product by making a soup of barley and hops, and infecting it with yeast.
Chemically speaking, most of the real work is done by the yeast, which converts the sugars to alcohol. The brewmaster is responsible for creating and maintaining the conditions under which that conversion can take place.
The brewmaster does not need to understand exactly how the yeast does its work, but does need to understand the properties of the environment in which the yeast will thrive. By providing the right combination of ingredients at the right temperature in the right container, the brewmaster is able to create the necessary conditions for the production of beer. Something analogous to this process may be possible in the creation of an artificial intelligence.
It is unlikely that intelligence would spontaneously appear in a random network of neurons, just as it is unlikely that life would spontaneously appear in barley soup. But just as carefully mixed soup can be inoculated with yeast, it may be that a carefully constructed network of artificial neurons can be inoculated with thought. The approach depends on the possibility of separating human intelligence into two parts, corresponding to the soup and the yeast.
Depending on one's point of view, these two parts can be viewed as hardware and software, intellect and knowledge, nature and nurture, or program and data. Each point of view carries with it a particular set of intuitions about the nature of the split and the relative complexity of the parts. If human intelligence really is such a symbiosis, there is a natural test: one way that biologists determine whether a living entity is a symbiont is to see if the individual components can be kept alive separately.
For example, biologists have tried unsuccessfully to prove the oil-drop theory by sustaining metabolizing oil drops in an artificial nutrient broth. Such an experiment for human intelligence would have two parts.
One would be a test of the human ape's ability to live without the ideas of human culture. This experiment is occasionally conducted in an uncontrolled form when feral children are reared by animals.
The two-part theory would predict that such children, before human contact, would not be significantly brighter than nonhuman primates. The complementary experiment, sustaining human ideas and culture in an artificial broth, is the one in which we are more specifically interested.
If this were successful we would have a thinking machine, although perhaps it would not be accurate to call it an artificial intelligence.
It would be natural intelligence sustained within an artificial mind. To pursue the consequences of this point of view, we will assume that human intelligence can be cleanly divided into two portions which we will refer to as acquired and inherited intelligence. These correspond to the songs and to the apes, respectively, or in the fermentation metaphor, the yeast and the barley soup.
We will consider only those features of inherited intelligence that are necessary to support acquired intelligence, and only those features of acquired intelligence that impose requirements on inherited intelligence. We will study the interface between the two. Even accepting this definition of the problem, it is not obvious that the interface is easy to understand or recreate. This leads to a specific question about the scope of the interface, one that can presumably be answered by experiment.
The functional scope of the interface between acquired and inherited intelligence is not the only property that can be investigated. To build a home for an animal, the first thing we would need to know is the animal's size. This is also one of the first things we need to know in building an artificial home for acquired intelligence. This leads to question number two: how large is acquired intelligence? The answers I have guessed at are imprecise, but the questions are not. In principle they can be answered by experiment.
The final question I will pose is more problematic. What I would like to ask is "What are the organizing principles of inherited intelligence?" I shall substitute a more specific question that hopefully captures the same intent: what quantity does inherited intelligence hold constant? This question assumes that inherited intelligence is some form of homeostatic process and asks what quantity is held static. It is the most difficult of the four questions, but historically it has been an important question to ask in areas where there was not yet a science to guide progress.
The study of chemistry is one example. In chemical reactions between substances it is obvious that a great number of things change, and not so obvious what stays the same. It turns out that if the experiment is done carefully, the weight of the reactants will always equal the weight of the products. The total weight remains the same.
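A worked example of the principle (the particular reaction is my choice; the masses follow from standard atomic weights):

```latex
% Burning methane in a closed vessel: total mass is conserved.
\[
  \mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
\]
\[
  \underbrace{16\,\mathrm{g} + 64\,\mathrm{g}}_{\text{reactants: } 80\,\mathrm{g}}
  \;=\;
  \underbrace{44\,\mathrm{g} + 36\,\mathrm{g}}_{\text{products: } 80\,\mathrm{g}}
\]
```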
This is an important organizing principle in chemistry and understanding it was a stepping stone to the understanding of an even more important principle: the conservation of the weights of the individual elements. The technical difficulty of defining and creating a truly closed experiment, in particular eliminating the inflow and outflow of gases, explains why chemists did not fully appreciate these principles until the middle of the 19th century. Another very different example of a system that can be understood in terms of what is held constant is the system of formal logic.
This is a set of rules under which sentences may be changed without changing their truth. A similar example, which has also been important to artificial intelligence, is the lambda calculus which is the basis of the language Lisp.
This is a system of transforming expressions in such a way that their "values" do not change, where the values are those forms of the expression which are not changed by the transformations. This sounds circular because it is. A more detailed explanation would show it to be more so. These formal systems are conceptually organized around that which is held constant. In physics there are many examples of how conservations have been used successfully to organize our conception of reality, but while conservations of energy, momentum, mass, and charge are certainly important, I do not wish to make too much of them in this context.
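To see the lambda-calculus point concretely before returning to conservation in the sciences: beta reduction rewrites an expression step by step, and what is conserved at every step is the value the expression denotes. A small Python rendering (my example, not from the text):

```python
# Beta reduction transforms an expression without changing its value;
# the "conserved quantity" of the formal system is the value itself.
step0 = (lambda f: lambda x: f(f(x)))(lambda n: n + 1)      # (λf.λx.f(f x)) (λn.n+1)
step1 = lambda x: (lambda n: n + 1)((lambda n: n + 1)(x))   # after one reduction
step2 = lambda x: x + 2                                     # fully reduced form

assert step0(5) == step1(5) == step2(5) == 7                # same value at every step
```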
In this sense the principles of conservation will more likely resemble those of biology than those of physics. One of the most useful conservation principles in biology appears in the notion of a gene, the unit of character determination that is conserved during reproduction. In sexual reproduction this can get complicated, since an individual receives a set of genes from each of two parents.
A gene that affects a given trait may not be expressed if it is masked by another, so there is not a simple correspondence between genes and measurable traits. The notion that atomic units of inheritance are always present, even when they are not expressed, was hard to accept; it was not widely believed until almost a century after Mendel's initial experiments. In fact the conservation is not perfect, but it is still one of the most important organizing principles in the study of living organisms.
In biology, the rules of conservation are often expressed as minimum principles. The two forms are equivalent. For instance, the minimum principle corresponding to the physical conservation of momentum is the principle of least action. A biological example is the principle of optimal adaptation, which states that species will evolve toward optimal fitness to their environments.
The distance to the ideal is minimized. A conservation principle associated with this is Fisher's fundamental theorem of natural selection, which states that the rate of change in fitness is equal to the genetic variance in fitness. In cases where the minimum principle can be applied, it allows biologists to predict the values of various biological parameters quantitatively. For example, sickle-cell anemia is a congenital disease controlled by a recessive gene.
Individuals who inherit the gene from both parents are likely to die without reproducing, but individuals who inherit it from a single parent are resistant to malaria. Balancing these opposing selection pressures, the principle of optimal fitness predicts the equilibrium frequency of the gene in regions where malaria is common (the standard calculation is sketched below), and this estimate fits well with measured data. Similar methods have been used to estimate the number of eggs laid by a bird, the shape of sponges, and the gaits of animals at different speeds.
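The calculation is the textbook treatment of heterozygote advantage; the selection coefficients below are illustrative values of mine, not figures from the text.

```latex
% Relative fitnesses of the three genotypes:
%   AA (no sickle allele): 1 - s   (excess mortality from malaria)
%   AS (carrier):          1       (resistant to malaria, healthy)
%   SS (two copies):       1 - t   (sickle-cell disease)
% Selection reaches equilibrium when the sickle allele's frequency is
\[
  \hat{q} \;=\; \frac{s}{s + t}.
\]
% With illustrative values s = 0.1 and t = 0.8, this gives
% \(\hat{q} = 0.1/0.9 \approx 0.11\), the order of magnitude actually
% observed in populations where malaria is endemic.
```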
But these examples of applying a minimum principle are not so crisp as those of physics. Why, for example, do we not evolve a non-lethal gene that protects against malaria? The answer is complicated, and the principle of fitness offers no help.
It is useful in aiding our understanding, but it does not explain everything. This is probably the kind of answer to Question IV for which we will have to settle. Even in physics, knowledge of the exact laws does not really explain all behaviors; the snowflakes and whirlpools of water are examples.
The forces that govern the interaction of water molecules are understood in some detail, but there is no analytical understanding of the connection between these forces and the emergent behaviors of water. On the other hand, our goal is not necessarily to understand, but to recreate. In both of the examples mentioned, conservation principles give us sufficient understanding to recreate the phenomena.