Evolutionary computation is a powerful, generic optimization technique that draws its main inspiration from the theory of evolution by natural selection. Natural selection is an elegant theory whose explanation of the biodiversity in nature rests on two main components: variation and selection.
Different ecological habitats pose different challenges and requirements for survival. According to evolutionary theory, the traits of organisms in any ecological niche vary because of random mutations in DNA and copying mistakes during its duplication. This variation gives organisms with more suitable traits a differential survival advantage; in other words, nature implicitly applies a pressure that selects for fit individuals. Because the fittest organisms are more likely to survive, they pass their 'fit' genes on to their offspring, which are in turn more likely to survive.
Evolution can be thought of as an algorithm optimizing for fitness. This is the core idea of evolutionary optimization. In other words, if we have a problem for which we can generate different solutions, then we can use the performance of each solution as a measure of fitness that drives an evolutionary algorithm toward better and better solutions. Evolutionary algorithms come in different flavors that share most of their components but differ in the details and characteristics of each one. The main components of an evolutionary algorithm are:

- A representation scheme that encodes candidate solutions (individuals).
- A fitness function that scores each individual.
- A mating (crossover) operator that combines parents into offspring.
- A mutation operator that randomly perturbs individuals.
- A selection strategy that decides which individuals survive into the next generation.
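To make the core idea concrete, here is a minimal sketch of the "performance as fitness" step. The problem and names are hypothetical: candidate solutions are vectors of floats, and we (arbitrarily) take minimizing the sphere function as the objective, so fitness is its negation and higher is better:

```python
import random

# Hypothetical fitness function: negate the sphere objective sum(x_i^2)
# so that higher fitness means a better (smaller-norm) solution.
def fitness(individual):
    return -sum(x * x for x in individual)

# Generate some random candidate solutions and find the fittest one.
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
best = max(population, key=fitness)
```

An evolutionary algorithm is simply a principled way of repeating this "generate candidates, keep the fit ones" step so that each round starts from the survivors of the last.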
You can find a more detailed introduction to the different variations of evolutionary algorithms in my previous article. For the purpose of this tutorial, I will focus on a variation called evolutionary strategy (ES), which I will introduce briefly next. The full Jupyter notebook is available here.
As I mentioned, all evolutionary algorithms share most of the aforementioned components while differing in their details. For ES:

- **Representation scheme:** mainly phenotypic, i.e., each individual (solution) is represented explicitly as a vector of numbers. Each individual carries an accompanying vector, called its strategy, which controls its mutation.
- **Mating operator:** different mating operators are used in ES, but the one we will use is blending, which is essentially a linear combination of the mated parents.
- **Mutation operator:** we will use log-normal mutation, which, like all ES mutation operators, relies on the strategy vector mentioned above to mutate the values in the individual's representation vector.
- **Selection strategy:** tournament selection, in which a random subset of individuals is drawn repeatedly, and the best individual in each subset is selected.
- **Fitness function:** the only part of any evolutionary algorithm that must be delegated to the user, who provides a function that assigns a fitness to each individual in the population based on a measure suitable for the problem at hand.
- **Population strategy:** controls the population size. Here I will use (mu, lambda) _pronounced 'mu comma lambda'_, where mu and lambda are positive integers: mu is the size of the parent population and lambda is the number of offspring generated each generation. In this strategy, the selection strategy (tournament selection in our case) is applied to the offspring only.
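To make these operators concrete, here is a minimal sketch of blend mating, log-normal mutation, and tournament selection for real-valued vectors. The function names and default parameters (`alpha`, `tau`, tournament size `k`) are my own choices for illustration, not fixed parts of ES:

```python
import math
import random

def blend(p1, p2, alpha=0.5):
    """Blend mating: each child gene is a random linear combination
    of the corresponding parent genes, sampled around their interval."""
    child = []
    for a, b in zip(p1, p2):
        gamma = (1 + 2 * alpha) * random.random() - alpha
        child.append((1 - gamma) * a + gamma * b)
    return child

def mutate(individual, strategy, tau=0.5):
    """Log-normal mutation: multiplicatively perturb each strategy value,
    then use it as the standard deviation for mutating the matching gene."""
    new_strategy = [s * math.exp(tau * random.gauss(0, 1)) for s in strategy]
    new_individual = [x + s * random.gauss(0, 1)
                      for x, s in zip(individual, new_strategy)]
    return new_individual, new_strategy

def tournament(population, fitnesses, k=3):
    """Draw k random entrants and return the index of the fittest one."""
    entrants = random.sample(range(len(population)), k)
    return max(entrants, key=lambda i: fitnesses[i])
```

Note how the strategy vector is mutated first and then used to mutate the individual itself: this is what lets the mutation step sizes self-adapt over generations.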
ES, like all evolutionary algorithms, proceeds in iterations called generations. In each generation, offspring are generated from the current parents in the population by mating and then mutation. The fitness of the new members is then evaluated, and the selection strategy is applied to choose the individuals that will survive into the next generation.
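The generational loop just described can be sketched as a minimal, self-contained (mu, lambda)-ES. All parameter values here (population sizes, tau, the toy sphere fitness) are illustrative, and the notebook linked above may organize things differently:

```python
import math
import random

MU, LAMBDA, N_GENES, N_GENERATIONS = 10, 70, 5, 100
TAU = 1.0 / math.sqrt(N_GENES)  # a common heuristic learning rate

def fitness(ind):
    return -sum(x * x for x in ind)  # maximizing this minimizes the sphere

# Each member of the population is a (representation, strategy) pair.
parents = [([random.uniform(-5, 5) for _ in range(N_GENES)],
            [1.0] * N_GENES) for _ in range(MU)]

for gen in range(N_GENERATIONS):
    offspring = []
    for _ in range(LAMBDA):
        # Mate: blend two random parents (simple averaging here).
        (x1, s1), (x2, s2) = random.sample(parents, 2)
        child = [(a + b) / 2 for a, b in zip(x1, x2)]
        strat = [(a + b) / 2 for a, b in zip(s1, s2)]
        # Mutate: log-normal update of the strategy, then Gaussian perturbation.
        strat = [s * math.exp(TAU * random.gauss(0, 1)) for s in strat]
        child = [x + s * random.gauss(0, 1) for x, s in zip(child, strat)]
        offspring.append((child, strat))
    # (mu, lambda): the next parents come from the offspring ONLY,
    # chosen by repeated tournaments of size 3.
    parents = [max(random.sample(offspring, 3), key=lambda o: fitness(o[0]))
               for _ in range(MU)]

best = max(parents, key=lambda o: fitness(o[0]))
```

Because the (mu, lambda) strategy selects from the offspring only, even the best parent is discarded each generation; this forgets occasional lucky outliers and helps the strategy parameters keep adapting.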