Hands-On Neuroevolution with Python

By: Iaroslav Omelianenko

Overview of this book

Neuroevolution is a form of artificial intelligence learning that uses evolutionary algorithms to simplify the process of solving complex tasks in domains such as games, robotics, and the simulation of natural processes. This book will give you comprehensive insights into essential neuroevolution concepts and equip you with the skills you need to apply neuroevolution-based algorithms to solve practical, real-world problems. You'll start by learning the key neuroevolution concepts and methods through writing code in Python. You'll also get hands-on experience with popular Python libraries and cover examples of classical reinforcement learning, path planning for autonomous agents, and developing agents to autonomously play Atari games. Next, you'll learn to solve common and not-so-common challenges in natural computing using neuroevolution-based algorithms. Later, you'll understand how to apply neuroevolution strategies to existing neural network designs to improve training and inference performance. Finally, you'll gain clear insights into the topology of neural networks and how neuroevolution lets you develop complex networks, starting from simple ones. By the end of this book, you will not only have explored existing neuroevolution-based algorithms, but also have the skills you need to apply them in your research and work assignments.
Table of Contents (18 chapters)

Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods
Section 2: Applying Neuroevolution Methods to Solve Classic Computer Science Problems
Section 3: Advanced Neuroevolution Methods
Section 4: Discussion and Concluding Remarks

Exercises

  1. Try to run the experiment with different values of the random seed, which is set on line 101 of the retina_experiment.py script. See whether you can find successful solutions with other seed values.
  2. Try to increase the initial population size to 1,000 by adjusting the value of the params.PopulationSize hyperparameter. How does this affect the performance of the algorithm?
  3. Try to change the number of activation function types used during evolution by setting their selection probabilities to 0. It is especially interesting to see what happens when you exclude the activation types controlled by the ActivationFunction_SignedGauss_Prob and ActivationFunction_SignedStep_Prob hyperparameters from selection. (A minimal configuration sketch covering these exercises follows this list.)
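All three exercises come down to adjusting hyperparameters of the MultiNEAT library used by the retina experiment. The following is a minimal sketch, not the book's retina_experiment.py itself: it assumes the MultiNEAT Python bindings are importable as MultiNEAT, the seed value of 42 is only a placeholder to vary, and the full evaluation/evolution loop of the experiment is omitted.

    import random

    import MultiNEAT as NEAT

    # Exercise 1: change the random seed (the book's retina_experiment.py sets
    # it around line 101). The value below is a placeholder; try several.
    random.seed(42)

    # Exercise 2: increase the initial population size.
    params = NEAT.Parameters()
    params.PopulationSize = 1000

    # Exercise 3: exclude specific activation function types from selection
    # by setting their selection probabilities to zero.
    params.ActivationFunction_SignedGauss_Prob = 0.0
    params.ActivationFunction_SignedStep_Prob = 0.0

The params object configured this way would then be passed to the experiment's genome and population setup in place of the original parameters, so that the rest of the script runs unchanged while you compare outcomes across these settings.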