Creation of a simple and workable genetic algorithm with Python and NumPy

A genetic algorithm is useful when you know the inputs of your neural network but you don’t know what the correct output should be. For example, such an algorithm can be used to play Google Dinosaur or Flappy Bird: there is no labelled “right answer” for each moment, but you can rank the most viable candidates, for example by how long they survive. This ranking criterion is called a fitness function.
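To make the idea of a fitness function concrete before we start, here is a tiny, hypothetical sketch that is not part of the algorithm below: candidates are ranked by a made-up survival_time value and the best ones are kept as parents.

import random

# Hypothetical population: each candidate "genome" is just a list of random numbers
population = [[random.random() for _ in range(5)] for _ in range(10)]

def survival_time(genome):
    # Placeholder fitness: in a real game this would be how long the agent survived
    return sum(genome)

# Rank candidates by fitness (longest survival first) and keep the best half as parents
ranked = sorted(population, key=survival_time, reverse=True)
parents = ranked[:len(ranked) // 2]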

I was never able to find a genetic algorithm that was simple, working, and usable, so I set out to create my own easy, simple, well-working genetic algorithm.

My goal is not to drag out this article and bore readers with its length, so let’s get straight to the code. As mentioned, the code is simple, so most of it does not need lengthy explanation.

First we need to import the modules:

import numpy as np
import random

Then we add the dataset and the answers to it. We do not use backpropagation here; the answers are only used to count how many of the network’s predictions are correct. Afterwards you can test it on the other variants, which are currently commented out.


x = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
y = np.array([[0],[1],[1], [0], [0], [0], [0], [1], [1]])

#x = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [0, 0, 0], [1, 1, 0], [1, 1, 1]])
#y = np.array([[1],[0], [0], [1], [0], [1], [0], [1], [1]])

#x = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
#y = np.array([[1],[0],[1], [0], [1], [0], [1], [0], [1]])

We add the lists and the activation functions. The purpose of the lists will become clear later. The first activation function is a sigmoid, and the second is a threshold (step) function.

listNet = []
NewNet = []
goodNET = []
GoodNet0 = []
GoodNet1 = []
GoodNet2 = []
GoodNet3 = []
GoodNet4 = []
GoodNet5 = []
GoodNet6 = []
good = 0
epoch = 0

def sigmoid(x):
    # Squashes values into the (0, 1) range
    return 1/(1 + np.exp(-x))
def finfunc(x):
    # Threshold (step) function: the network has a single output,
    # so we only round x[0] to 0 or 1
    if x[0] >= 0.5:
        x[0] = 1
    else:
        x[0] = 0
    return x[0]

Next, we need to create two classes. The first is used to create the initial population, and the second for all subsequent generations, since the first time we need to create the weights randomly, but afterwards we only cross and mutate them. In __init__() we create (or receive) the weights, predict() is used by the algorithm itself to score the candidates, and Fpredict() differs only in that it also returns the answer and the fitness value, so we can print the numbers to the screen and watch the learning stages. On the output layer the sigmoid function is applied first, to bring the answer close to one of the options, and only then the threshold function.

class Network():
    # First generation: the weights are created randomly
    def __init__(self):
        self.H1 = np.random.randn(3, 6)   # input -> hidden layer (3 inputs, 6 hidden neurons)
        self.O1 = np.random.randn(6, 1)   # hidden layer -> output (1 output)

    def predict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1

    def Fpredict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
        return t2, good
class Network1():
    # Subsequent generations: the weights come from crossover and mutation
    def __init__(self, H1, O1):
        self.H1 = H1
        self.O1 = O1


    def predict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
    def Fpredict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
        return t2, good

We print the first answers together with the variable good, which serves as the fitness function here, and then reset it to zero for the next neural network. The print("wait0") line (you can write anything you like there) is just a marker so you don’t get confused about where the answers of one neural network end and the next begin.

s = Network()
print(s.Fpredict(x[0], y[0]))
print(s.Fpredict(x[1], y[1]))
print(s.Fpredict(x[2], y[2]))
print(s.Fpredict(x[3], y[3]))
print("wait0")
good = 0

The first cycle runs. Here, and in all subsequent cycles, we give the network only six questions, so that later we can check how well it copes with examples it has not encountered, that is, we test it for rote memorization (overfitting), which sometimes happens. Now in more detail: depending on how many answers the network got right, we assign it to one of the categories (the GoodNet lists). If a large number are correct, we should support such a network and increase its count, so that after mutation there are more “intelligent” individuals. To see why, imagine there is one genius per 100 people; one genius is not enough for everyone, so his genius would fade away in the next generations, and the neural network would learn very slowly or not at all. To prevent this, in the loop we duplicate the networks with a large number of correct answers. At the end, we empty the main listNet, fill it with the GoodNet lists in order from best to worst, and take a slice of the 100 best individuals for further mutation.

for s in range (1000):
    s = Network()
    good = 0
    s.predict(x[0], y[0])
    s.predict(x[1], y[1])
    s.predict(x[2], y[2])
    s.predict(x[3], y[3])
    s.predict(x[4], y[4])
    s.predict(x[5], y[5])
    if good == 6:
        GoodNet6.append(s)
        for r in range(15):
            GoodNet6.append(s)  # duplicate the best networks so they dominate the next generation
    elif good == 5:
        GoodNet5.append(s)
        for r in range(10):
            GoodNet5.append(s)
    elif good == 4:
        GoodNet4.append(s)
        for r in range(5):
            GoodNet4.append(s)
    elif good == 3:
        GoodNet3.append(s)
    elif good == 2:
        GoodNet2.append(s)
    elif good == 1:
        GoodNet1.append(s)
    elif good == 0:
        GoodNet0.append(s)
    good = 0
listNet = []
listNet.extend(GoodNet6)
listNet.extend(GoodNet5)
listNet.extend(GoodNet4)
listNet.extend(GoodNet3)
listNet.extend(GoodNet2)
listNet.extend(GoodNet1)
GoodNet1 = []
GoodNet2 = []
GoodNet3 = []
GoodNet4 = []
GoodNet5 = []
GoodNet6 = []
goodNET = listNet[:100]
listNet = goodNET
goodNET = []

Now the crossover and mutation itself: we take one part of the weights from the first parent and the rest from the second, mutate them, and put the resulting child on the NewNet list; this is repeated 1000 times.

for g in range(1000):
    parent1 = random.choice(listNet)
    parent2 = random.choice(listNet)
    # crossover: first row of H1 from parent1, the rest from parent2; then mutate by scaling
    ch1H = np.vstack((parent1.H1[:1], parent2.H1[1:])) * random.uniform(-0.2, 0.2)
    ch1O = parent1.O1 * random.uniform(-0.2, 0.2)
    g = Network1(ch1H, ch1O)
    NewNet.append(g)
listNet = NewNet
NewNet = []

Starting from the previous block of code, we use Network1(), because now we cross and mutate instead of generating random weights. The whole process is repeated 1000 times; we print the answers on the first epoch and on the 1000th, which is the final version. Here the code repeats what we have already seen, so I will not describe it again.

for i in range(1000):
    good = 0
    epoch += 1
    for s in listNet:
      good = 0
      s.predict(x[0], y[0])
      s.predict(x[1], y[1])
      s.predict(x[2], y[2])
      s.predict(x[3], y[3])
      s.predict(x[4], y[4])
      s.predict(x[5], y[5])
      if good == 6:
          GoodNet6.append(s)
          for r in range(15):
              GoodNet6.append(s)
      elif good == 5:
          GoodNet5.append(s)
          for r in range(10):
              GoodNet5.append(s)
      elif good == 4:
          GoodNet4.append(s)
          for r in range(5):
              GoodNet4.append(s)
      elif good == 3:
          GoodNet3.append(s)
      elif good == 2:
          GoodNet2.append(s)
      elif good == 1:
          GoodNet1.append(s)
      elif good == 0:
          GoodNet0.append(s)
      good = 0
    listNet = []
    listNet.extend(GoodNet6)
    listNet.extend(GoodNet5)
    listNet.extend(GoodNet4)
    listNet.extend(GoodNet3)
    listNet.extend(GoodNet2)
    listNet.extend(GoodNet1)
    GoodNet1 = []
    GoodNet2 = []
    GoodNet3 = []
    GoodNet4 = []
    GoodNet5 = []
    GoodNet6 = []
    goodNET = listNet[:100]
    listNet = goodNET
    goodNET = []
    if epoch == 1000:

        print(listNet[0].Fpredict(x[0], y[0]))
        print(listNet[0].Fpredict(x[1], y[1]))
        print(listNet[0].Fpredict(x[2], y[2]))
        print(listNet[0].Fpredict(x[3], y[3]))
        print(listNet[0].Fpredict(x[4], y[4]))
        print(listNet[0].Fpredict(x[5], y[5]))
        print(listNet[0].Fpredict(x[6], y[6]))
        print(listNet[0].Fpredict(x[7], y[7]))
        print(listNet[0].Fpredict(x[8], y[8]))

        good = 0
        print('wait')
    elif epoch == 1:

        good = 0
        print(listNet[0].Fpredict(x[0], y[0]))
        print(listNet[0].Fpredict(x[1], y[1]))
        print(listNet[0].Fpredict(x[2], y[2]))
        print(listNet[0].Fpredict(x[3], y[3]))
        print('wait1')
    for g in range(1000):
        parent1 = random.choice(listNet)

        parent2 = random.choice(listNet)
        # crossover: first row of H1 from parent1, the rest from parent2; then mutate by scaling
        ch1H = np.vstack((parent1.H1[:1], parent2.H1[1:])) * random.uniform(-2, 2)
        ch1O = parent1.O1 * random.uniform(-2, 2)
        g = Network1(ch1H, ch1O)
        NewNet.append(g)
    listNet = NewNet
    NewNet = []

That’s all. The pattern the neural network has to find is that the answer equals one of the input digits (the first, second, or third, depending on which dataset you use) while the others are ignored. You can also try logical operations (XOR, NOT, AND, …); in that case just change the number of inputs in the network class to two. I also followed the rule that the number of neurons in the hidden layer equals the number of inputs multiplied by two; it worked, but you can try your own options. It is also very important to give the network the same number of examples with one answer as with the other, so that the count of “a” answers equals the count of “b” answers; otherwise the network will answer everything the same way, that is, if there are more a’s, it will answer a to everything and nothing will work. Also give it varied options in the training set so it can understand the pattern; for example, if you build an XOR block, you must include the option with two ones, and for simple logical operations you will have to give all the options, because there are very few of them and otherwise it will not understand anything. A sketch of such an adaptation is shown below.
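As a hedged sketch of what that adaptation could look like, here is a hypothetical XOR setup; the class name NetworkXOR and the variable names x_xor, y_xor are assumptions for the example, it reuses the sigmoid and finfunc defined above, and it follows the “hidden neurons = inputs × 2” rule.

# Hypothetical XOR variant: 2 inputs, 4 hidden neurons, 1 output
x_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # all four options are given
y_xor = np.array([[0], [1], [1], [0]])               # balanced: two zeros and two ones

class NetworkXOR():
    def __init__(self):
        self.H1 = np.random.randn(2, 4)   # 2 inputs -> 4 hidden neurons (inputs * 2)
        self.O1 = np.random.randn(4, 1)   # 4 hidden neurons -> 1 output

    def predict(self, x, y):
        t1 = sigmoid(x @ self.H1)
        t2 = finfunc(sigmoid(t1 @ self.O1))
        return t2 == y[0]   # True if the answer is correct

# Usage: net = NetworkXOR(); net.predict(x_xor[0], y_xor[0])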

Many may ask: how do you use this to make a neural network for the Google dinosaur and similar games? The answer: use three lists — the first is listNet, the second is NewNet, and the third holds the fitness function values.

a = []   # listNet: the current population of networks
b = []   # NewNet: the networks selected for the next generation
c = []   # fitness function: how long each network survived
d = 0    # current best fitness value
e = 0    # counter
f = 100
# When creating a neural network, we add the network to listNet (a) and its time to c
a.append(network)        # "network" is a placeholder for your network object
c.append(survival_time)  # "survival_time" is a placeholder for its fitness value
'''Do this for 100 networks (that is why f = 100); for each one check how long
it survived and add that time to c, then select the 10 best:
'''
for r in range(10):
  d = 0
  for i in c:
    e += 1
    if d < i:
      d = i
    if e == f:
      n = c.index(d)
      b.append(a.pop(n))  # pop from a as well, so a and c stay aligned
      c.remove(d)
      e = 0
      f -= 1
a = b
b = []
c = []
'''No slice of the list is needed; after that you cross, mutate, repeat, and you are done!!!'''
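To close the loop, here is a minimal hedged sketch of what that “cross, mutate, repeat” step could look like for the game setup, reusing Network1 and the crossover/mutation scheme from earlier; the mutation range and the generation size of 100 are assumptions, not part of the original code.

# Build the next generation from the 10 selected parents in a
next_generation = []
for _ in range(100):
    parent1 = random.choice(a)
    parent2 = random.choice(a)
    # Same crossover idea as before: first row of H1 from one parent, the rest from the other
    childH = np.vstack((parent1.H1[:1], parent2.H1[1:])) * random.uniform(-0.2, 0.2)
    childO = parent1.O1 * random.uniform(-0.2, 0.2)
    next_generation.append(Network1(childH, childO))
a = next_generation
# Then let each new network play again, refill c with the survival times, and repeat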
  
