This week we are going to explore complex adaptive systems. In particular, we are going to look at an important aspect of complex systems: degeneracy. Normally "degeneracy" is the sort of word you hear coming from cranky old men; unfortunately for you, you're going to have to learn to love degeneracy (biology):
Within biological systems, degeneracy occurs when structurally dissimilar components/modules/pathways can perform similar functions (i.e. are effectively interchangeable) under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behaviour of two or more components. In particular, if degeneracy is present in a pair of components then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct. –La Wik
When most people refer to biological degeneracy they usually mean a particular feature of the translation of DNA into amino acids.
A code in which several code words have the same meaning. The genetic code is degenerate because there are many instances in which different codons specify the same amino acid. A genetic code in which some amino acids may each be encoded by more than one codon. –Glossary Holmgren
And for review, a codon is:
A codon is a sequence of three DNA or RNA nucleotides that corresponds with a specific amino acid or stop signal during protein synthesis. DNA and RNA molecules are written in a language of four nucleotides; meanwhile, the language of proteins includes 20 amino acids. Codons provide the key that allows these two languages to be translated into each other. Each codon corresponds to a single amino acid (or stop signal), and the full set of codons is called the genetic code. The genetic code includes 64 possible permutations, or combinations, of three-letter nucleotide sequences that can be made from the four nucleotides. Of the 64 codons, 61 represent amino acids, and three are stop signals. For example, the codon CAG represents the amino acid glutamine, and TAA is a stop codon. The genetic code is described as degenerate, or redundant, because a single amino acid may be coded for by more than one codon. When codons are read from the nucleotide sequence, they are read in succession and do not overlap with one another. –Nature
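The degeneracy of the genetic code is easy to see in code. Below is a minimal Python sketch using a small, hand-picked subset of the standard RNA codon table (the codon-to-amino-acid entries are standard biology; the variable names are mine):

```python
from collections import Counter

# A small subset of the standard genetic code (RNA codons -> amino acids).
CODON_TABLE = {
    "CAG": "Gln", "CAA": "Gln",                # two codons for glutamine
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu",  # leucine is highly degenerate:
    "CUG": "Leu", "UUA": "Leu", "UUG": "Leu",  # six structurally distinct codons
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # the three stop signals
}

# Count how many structurally distinct codons map to each amino acid.
codons_per_aa = Counter(CODON_TABLE.values())
print(codons_per_aa["Leu"])  # 6 -- many different "code words", one meaning
```

Many codes, one meaning: that is exactly the many-to-one mapping the rest of this post is about.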
Catch that? There are 61 amino-acid-coding codons for only 20 amino acids (degeneracy). Note that degeneracy is NOT redundancy. Having two eyes is redundant, not degenerate. Degeneracy requires that functionally similar components be structurally dissimilar. Let us take a look at a paper on the relationship between degeneracy and evolvability in computer models.
Degenerate neutrality creates evolvable fitness landscapes
James Whitacre1, Axel Bender2
1School of Information Technology and Electrical Engineering; University of New South Wales at the Australian Defence Force Academy, Canberra, Australia
2Land Operations Division, Defence Science and Technology Organisation; Edinburgh, Australia
Abstract – Understanding how systems can be designed to be evolvable is fundamental to research in optimization, evolution, and complex systems science. Many researchers have thus recognized the importance of evolvability, i.e. the ability to find new variants of higher fitness, in the fields of biological evolution and evolutionary computation. Recent studies by Ciliberti et al (Proc. Nat. Acad. Sci., 2007) and Wagner (Proc. R. Soc. B., 2008) propose a potentially important link between the robustness and the evolvability of a system. In particular, it has been suggested that robustness may actually lead to the emergence of evolvability. Here we study two design principles, redundancy and degeneracy, for achieving robustness and we show that they have a dramatically different impact on the evolvability of the system. In particular, purely redundant systems are found to have very little evolvability while systems with degeneracy, i.e. distributed robustness, can be orders of magnitude more evolvable. These results offer insights into the general principles for achieving evolvability and may prove to be an important step forward in the pursuit of evolvable representations in evolutionary computation.
Simple enough: do degenerate or redundant systems adapt better?
By describing natural selection as a process of retaining fitter variants, Darwin implicitly assumed that repeated iterations of variation and selection would result in the successive accumulation of useful variations. However, decades of research applying Darwinian principles to computer models have irrefutably demonstrated that the founding principles of natural selection are an incomplete recipe for evolving systems of unbounded complexity. In computer simulations, adaptive changes (i.e. innovations) are at best finite and at worst short-lived. Understanding the origin of innovations is one of the most important open questions that a theory of evolution must still address.
Either the computer models of evolution suck or Darwin’s principles are necessary but insufficient for complex adaptability. Or both could be true.
These developments have been followed by the EC community, and some have started to investigate whether increasing neutrality (e.g. artificially introducing a many-to-one mapping between genotypes and phenotypes) can improve the evolvability of a search process.
In particular, the neutrality is generated through mechanisms for achieving robust phenotypes. Our chief concern is to understand the necessary conditions for evolvability, how these conditions are attained in biological systems, as well as the origins of “useful neutrality” in evolution.
We then touch upon recent developments that have indicated evolvability might be an emergent property of robust complex systems. We also introduce redundancy and degeneracy as two distinct design principles for achieving robustness and neutrality in biological systems.
I’ll drag out a paper later discussing the interactions between robustness and evolvability, but for now keep in mind that maximum genetic agility might not be the best thing.
In general, evolvability is concerned with the selection of new phenotypes. It requires an ability to generate distinct phenotypes and it requires that some of these phenotypes have a non-negligible probability of being selected by the environment. Given the important role the environment plays in the selection process, studies of biological evolution often consider the ability to generate distinct phenotypes as an important precondition and a useful proxy for evolvability.
So here we have the countervailing requirements of evolvability: finding new phenotypes while maintaining useful neutrality. A species doesn't want to lose that heart thing beating in its chest when it is being selected against by cold weather. Being able to diversify genetically WITHOUT losing essential phenotypes is important for evolvability. Too much variability in phenotype is bad news.
…it is worth differentiating between phenotypic variation and phenotypic variability (evolvability). Phenotypic variation is the simultaneous existence of distinct phenotypes (e.g. in a population); i.e. it is a directly measurable property of a set of distinct phenotypes. On the other hand, phenotypic variability is a dispositional concept, namely the potential or the propensity for phenotypic variation. More precisely, it is the total accessibility of distinct phenotypes. As with other studies, we thus use phenotypic variability as a proxy for a system’s evolvability.
It has been speculated that robustness increases evolvability, largely through the existence of a neutral network that extends far throughout the fitness landscape. On the one hand, robustness is achieved through a connected network of equivalent (or nearly equivalent) phenotypes. Because of this connectivity, we know that some mutations or perturbations will leave the phenotype unchanged, the extent of which depends on the local network topology.
To be clear, being able to change the genome without changing the phenotype is what a neutral network represents.
There are two design principles that are believed to play a role in achieving robustness in biological systems: redundancy and distributed robustness. Redundancy is an easily recognizable design principle that is prevalent in both biological and man-made systems. Here, redundancy is used to refer to a redundancy of parts, that is, identical parts that have identical functionality. It is a common feature in engineered systems where redundancy provides a robustness against environmental variations of a very specific type. In particular, redundant parts can be used to replace parts that fail or can be used to augment output when demand for a particular output increases.
This is an important distinction which I will address elsewhere. Humans, for the most part, do not create complex adaptive mechanisms. The logical way we DESIGN systems trends towards redundancy. When your car breaks down, it generally just breaks. It doesn't fix itself, and it doesn't normally have a backup system; it just breaks. When we do design backups, we tend to design them to be exactly the same, i.e. more than one engine on an airplane.
Distributed robustness emerges through the actions of multiple dissimilar parts. It is in many ways unexpected because it is only derived in complex systems where heterogeneous components have multiple interactions with each other. In our experiments we demonstrate that distributed robustness can be achieved through degeneracy.
Distributed robustness is exactly what we tend to see in naturally occurring systems. As some have argued, this may be the reason these systems are adaptive over the long term.
Degeneracy is ubiquitous in biology as evidenced by the numerous examples provided by Edelman and Gally. Degeneracy, sometimes also referred to as partial redundancy, is a term used in biology to refer to conditions where there is a partial overlap in the functions or capabilities of components. In particular, degeneracy refers to conditions where we have structurally distinct components (but also modules and pathways) that can perform similar roles (i.e. are interchangeable) under certain conditions, yet can play distinct roles in others.
Okay, so we’re done with the review section; let’s see about this experiment.
A fleet attempts to satisfy environmental conditions through control over its phenotype, which involves changing the settings of the vehicle states C. We implement an ordered asynchronous updating of C where each vehicle conducts a local search and evaluates the changes to fleet fitness resulting from an incremental increase or decrease in the state value of the vehicle. In other words, we reallocate the vehicle to improve its utilization for the vehicle’s set of feasible task types. A change in state value is kept if it improves system fitness. Unless stated otherwise, updating component state values is stopped once the fleet fitness converges to a stable fitness value.
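The paper doesn't publish code, but the update loop it describes reads like plain ordered hill climbing. Here is a minimal Python sketch of that control flow; the `fitness` function and the integer state encoding are illustrative stand-ins, not the paper's actual model:

```python
def local_search(states, fitness, max_iter=1000):
    """Ordered asynchronous updating of vehicle states C (sketch).

    `states` is a list of per-vehicle state values; `fitness` scores the
    whole fleet. Both are stand-ins -- the task demands and capability
    matrix from the paper are abstracted away here.
    """
    best = fitness(states)
    for _ in range(max_iter):
        improved = False
        for i in range(len(states)):    # each vehicle in order
            for delta in (1, -1):       # incremental increase or decrease
                trial = states.copy()
                trial[i] += delta
                f = fitness(trial)
                if f > best:            # keep the change only if fleet fitness improves
                    states, best = trial, f
                    improved = True
        if not improved:                # stop once fitness converges
            break
    return states, best

# Toy usage: each "vehicle" wants its state at 3 (purely illustrative).
states, best = local_search([0, 0, 0], lambda s: -sum((x - 3) ** 2 for x in s))
print(states, best)  # [3, 3, 3] 0
```

The key feature to notice is the greedy, one-vehicle-at-a-time update: no vehicle coordinates with another, yet the fleet as a whole settles into a stable fitness value.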
I’m going to skip over most of the details of the experiment, or better put, the modeling. You can read them if you want. It is entirely possible the model is broken or the authors made a mistake. I didn’t find one, but that doesn’t mean a whole lot. Fortunately, there is tons of writing on degeneracy, so you don’t have to rely on faith in this model alone.
In its simplest form, the transportation fleet model consists of a set of vehicles and is specified by the types of tasks that each vehicle can accomplish. In particular, we define a set of n vehicles and m task types. Vehicles are characterized by a matrix with components δij, which take a value of one if vehicle type i is capable of doing task type j and zero otherwise.
So we have a fleet of transportation vehicles. I’m going to imagine trucks. It could be taxis, but I don’t like taxis.
Degeneracy and redundancy are modeled by constraining the setting of the matrix δ, which acts to control how the capabilities of vehicles are able to overlap. In the purely redundant model, vehicles are placed into subsets in which all vehicles are genetically identical. In other words, vehicles within a subset can only influence the same set of traits (but are free to take on distinct state values). In the degenerate model, a vehicle can only have a partial overlap in its capabilities when compared with any other vehicle. A simple illustration of the difference between these two design principles is given in Figure 2.
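To make the distinction concrete, here is one plausible way to set up the two δ capability matrices in Python. The "shifted window" layout for the degenerate fleet is my guess at the kind of construction sketched in Figure 2, not the paper's exact matrix:

```python
# delta[i][j] == 1 iff vehicle i can perform task type j.

# Purely redundant fleet: vehicles come in identical subsets, so rows repeat.
redundant = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],  # identical twin of vehicle 0
    [0, 0, 1, 1],
    [0, 0, 1, 1],  # identical twin of vehicle 2
]

# Degenerate fleet: every vehicle overlaps only partially with its neighbours.
degenerate = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],  # shares task 1 with vehicle 0, task 2 with vehicle 2
    [0, 0, 1, 1],
    [1, 0, 0, 1],  # wraps around; shares task 0 with vehicle 0
]

# No two degenerate vehicles are structurally identical...
assert len({tuple(row) for row in degenerate}) == len(degenerate)
# ...yet each pair of adjacent vehicles still covers one common task type.
for i in range(len(degenerate)):
    j = (i + 1) % len(degenerate)
    shared = sum(a & b for a, b in zip(degenerate[i], degenerate[j]))
    assert shared == 1
```

The asserts at the bottom are the whole point: structural dissimilarity (distinct rows) combined with partial functional overlap (shared columns) is degeneracy; repeated rows are redundancy.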
Evolvability (phenotypic variability) of a fleet design is then defined as the total count of unique phenotypes that can be accessed directly from the neutral network (i.e. unique phenotypes within the 1-neighborhood).
So we are looking for a system which maximizes the diversity of phenotypes immediately accessible from the genotype, though not necessarily the greatest diversity of currently expressed phenotypes.
Measurement of evolvability requires an exploration of both the neutral network and the 1-neighborhood. Starting with an initial fleet and a given external environment, defined as the first node in the neutral network, the neutral network and 1-neighborhood are explored by iterating the following steps: 1) select a node from the neutral network at random; 2) mutate the fleet; 3) allow the fleet to modify its phenotype in order to adapt to the new conditions; and 4) if fitness is within α % of initial fleet fitness then the fleet is added to the neutral network, else it is added to the 1-neighborhood.
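The four steps above can be sketched directly. Everything model-specific (`fitness`, `mutate`, `adapt`) is a hypothetical stand-in here; only the control flow follows the paper's description, and the returned set implements the evolvability count (unique phenotypes in the 1-neighborhood) quoted earlier:

```python
import random

def explore(initial_fleet, fitness, mutate, adapt,
            alpha=0.05, n_samples=200, seed=0):
    """Sketch of the neutral-network walk; fitness/mutate/adapt are stand-ins."""
    rng = random.Random(seed)
    f0 = fitness(initial_fleet)
    neutral_net = [initial_fleet]          # the initial fleet is the first node
    one_neighborhood = []
    for _ in range(n_samples):
        node = rng.choice(neutral_net)     # 1) random node on the neutral network
        variant = adapt(mutate(node, rng)) # 2) mutate, 3) re-adapt the phenotype
        if abs(fitness(variant) - f0) <= alpha * abs(f0):
            neutral_net.append(variant)    # 4a) within alpha% of initial fitness
        else:
            one_neighborhood.append(variant)  # 4b) off the network
    # Evolvability = count of unique phenotypes in the 1-neighborhood.
    return neutral_net, set(one_neighborhood)

# Toy stand-ins: a "fleet" is an integer, fit only while it stays in a band.
fit = lambda f: 10.0 if abs(f) <= 2 else 5.0
net, hood = explore(0, fit, mutate=lambda f, r: f + r.choice([-1, 1]),
                    adapt=lambda f: f)
```

With these toy stand-ins the neutral network can only contain fleets in the band [-2, 2], and the 1-neighborhood can only contain the fleets one mutation outside it, which is exactly the "reachable but distinct" set the paper counts.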
As indicated in Figure 4, adding excess resources increases the size of the neutral network for both types of fleets. Surprisingly however, the redundant system does not display any substantial increase in evolvability as its neutral network grows. In contrast, the degenerate system is found to have large increases in evolvability, becoming orders of magnitude more evolvable compared with the redundant model, with only modest increases in fleet size. The most important conclusion drawn from these results is that the size of the neutral network within a fitness landscape does not necessarily lead to differences in evolvability, which refutes our earlier speculation. This can be directly observed from the results in Figure 4 by comparing the evolvability of different fleet types for conditions where they have similar neutral network sizes.
The clear conclusion is that degenerate trucks are best trucks. The finding that the size of the neutral network didn’t necessarily lead to increased evolvability is interesting, but probably limited given the simplicity of the model.
This study demonstrates that the design principles used to achieve robustness/neutrality in a fitness landscape can dramatically affect the accessibility of distinct phenotypes and hence the evolvability of a system. In agreement with earlier studies, we find that a many-to-one G:P mapping does not guarantee a highly evolvable fitness landscape. However, we also discovered that distributed robustness or degeneracy can result in remarkably high levels of evolvability. Degeneracy is known to be a ubiquitous property of biological systems and is believed to play an important role in achieving robustness. Here we have suggested that the importance of degeneracy could be much greater than previously thought. It actually may act as a key enabling factor in the evolvability of complex systems.
Naturally, this study still leaves many open questions. It is limited by its simplicity and its fuzzy way of dealing with genotypes, mutations, phenotypes, fitness, etc. I will explore the ramifications of degeneracy elsewhere, but this was a nice taste.