I do not remember how old I was back then. Around 16-18. My father and I liked to talk about AI and the future of humanity. Pretty sure he was doing this intentionally. He has an engineering mind and likes to discuss things like that and, I think, he wanted to get me interested.
I don't know if he read about this idea or came up with it himself, but he proposed creating an AI for an artificial rabbit. Imagine a 2D world of cells where the rabbit lives. Some cells are empty; some cells have green grass growing on them. Occasionally, new grass appears in random empty cells. The rabbit has skills: it can move around in the 2D world, it can eat grass on the cell it stands on, or it can do nothing. The rabbit needs to eat grass to increase its health level (health points). Doing anything else decreases its health. If health drops to zero - the rabbit dies. The rabbit's goal is to live as long as it can. My goal was to create an AI for this rabbit.
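The rules above can be sketched in a few lines of Python. The grid size, grass growth chance, and nutrition value are my own placeholder numbers, not part of the original idea:

```python
import random

GRID = 20        # world is GRID x GRID cells (size is an assumption)
GRASS_P = 0.02   # chance per step that grass appears in a random cell (assumption)

def step(world, rabbit, action, health, rng):
    """Apply one action. `world` is the set of cells that contain grass.
    Eating grass restores health; every other action costs one point."""
    if action == "eat" and rabbit in world:
        world.discard(rabbit)
        health += 5          # nutrition value is a placeholder
    else:
        health -= 1
    # occasionally new grass appears in a random cell
    if rng.random() < GRASS_P:
        world.add((rng.randrange(GRID), rng.randrange(GRID)))
    return health
```

The "if health reaches zero, the rabbit dies" rule would live in the outer simulation loop that calls `step` repeatedly.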
Sounds pretty simple, right? But here is the catch: I was not supposed to program what the rabbit should do. I had to create an AI that would figure that out by itself. The rabbit should program itself to execute the optimal scenario.
Back then, I knew nothing about AI. Later I read about neural networks, for example. But at the time, I was trying to figure everything out by myself.
First, the rabbit needs receptors to see the world. For example, six cells of visibility in each direction.
Second, the rabbit needs a set of simple actions (skills) it can execute. Think about it: whatever a human achieves in life is done by performing simple operations - moving fingers, moving hands and legs, pronouncing sounds and words, and so on. These are simple actions. The same goes for the rabbit: its simple actions are moving in 8 directions, eating grass in the cell it is located on, and doing nothing.
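The full repertoire is small enough to enumerate in a sketch (the direction names and offset convention are mine):

```python
# Eight movement directions as (dx, dy) offsets on the grid.
MOVES = {
    "N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
    "S": (0, 1), "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1),
}
# The rabbit's complete action set: eight moves, eat, and do nothing.
ACTIONS = list(MOVES) + ["eat", "nothing"]

def apply_move(pos, action):
    """Return the rabbit's position after an action; only moves change it."""
    dx, dy = MOVES.get(action, (0, 0))
    return (pos[0] + dx, pos[1] + dy)
```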
Third, I needed to let the rabbit know its goal. It should understand that it should survive as long as possible. How to do that? How can you force an artificial creature to have your intention? I reasoned that it should work the same way as with real beings - through instinct. When we stop receiving food for an extended period - we feel pain - we are hungry. But what exactly is pain or pleasure, and how can you program it? Pain is a brain activity that we call a feeling. Even in this simple scenario, to have a goal, a being should have at least one feeling.
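One way to make that concrete - purely my interpretation - is to give the rabbit a single scalar "pain" signal derived from its health. The rabbit never sees the rule "survive"; it only feels this signal and learns to minimize it:

```python
MAX_HEALTH = 100   # assumed health cap

def hunger_pain(health):
    """Map health to a pain signal in [0, 1]: full health means no pain,
    zero health means maximal pain. The linear shape is an assumption."""
    clamped = max(0, min(health, MAX_HEALTH))
    return 1.0 - clamped / MAX_HEALTH
```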
Fourth, when the rabbit is born, it knows nothing. Maybe it has a growing feeling of hunger, but it doesn't know what to do. To act optimally and survive as long as possible, the rabbit has to learn first. It should determine what is good and what is bad for it based on its memory. To learn anything, it needs a memory of its previous actions and their results.
The rabbit can see only a limited number of cells around itself, for example, six cells in each direction. The rest of the 2D world is invisible. To be productive, the rabbit should also have some spatial memory to know where it has been before.
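The rabbit's receptors can be sketched as a function that cuts the visible window out of the world, centered on the rabbit (anything beyond the world's edge simply reads as empty):

```python
R = 6  # visibility radius: six cells in each direction gives a 13x13 window

def perceive(world, pos):
    """Return a 13x13 binary matrix of the grass the rabbit can see.
    `world` is the set of cells that contain grass; `pos` is the rabbit."""
    x, y = pos
    return [[1 if (x + dx, y + dy) in world else 0
             for dx in range(-R, R + 1)]
            for dy in range(-R, R + 1)]
```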
Memory should be limited - we do not want to remember every possible situation, because that is not fun, not optimal, and almost unrealistic. Six cells of visibility in each direction means the rabbit sees a matrix of 13x13 cells. Each cell either has grass or does not - like a binary system - which gives 2^(13x13) = 2^169 ≈ 7.48x10^50 combinations. Even in this simple 2D world, remembering all possible visible situations is unrealistic and very ineffective. To optimize, we need to work with abstractions of situations instead. But how can we abstract a 13x13 matrix of zeros and ones? How is it possible to make it even simpler? It is possible.
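The arithmetic behind that estimate, just to check it:

```python
cells = 13 * 13      # 169 cells in the visible window
states = 2 ** cells  # each cell is grass / no grass, so 2^169 combinations
print(cells)              # 169
print(f"{states:.2e}")    # about 7.48e+50 distinct visible situations
```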
Fifth, if we have memory and can deduce some knowledge from it, the rabbit needs the ability to plan its actions. Planning means the rabbit should be able to imagine what will happen in one situation or another, evaluate those situations, and choose the actions that lead to the most optimal results. The rabbit needs imagination.
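A minimal sketch of such imagination: a one-step lookahead over every action, scored by a stand-in evaluation (health plus a bonus for standing on grass). The bonus value, nutrition value, and the depth of a single step are my simplifications, not part of the original idea:

```python
MOVES = {"N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
         "S": (0, 1), "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1)}

def imagine(world, pos, health, action):
    """Predict (position, health) one step after an action, without acting."""
    if action in MOVES:
        dx, dy = MOVES[action]
        return (pos[0] + dx, pos[1] + dy), health - 1
    if action == "eat" and pos in world:
        return pos, health + 5   # nutrition value is an assumption
    return pos, health - 1       # "nothing", or eating on an empty cell

def plan(world, pos, health):
    """Imagine every action, evaluate the imagined result, choose the best."""
    def evaluate(p, h):
        return h + (3 if p in world else 0)  # stand-in heuristic
    actions = list(MOVES) + ["eat", "nothing"]
    return max(actions, key=lambda a: evaluate(*imagine(world, pos, health, a)))
```

A deeper planner would apply `imagine` recursively, several steps ahead, but even this one-step version captures the idea: simulate, evaluate, choose.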
To summarize, to create a simple intelligence, we need:
- Receptors to see the world.
- A set of actions (skills) it can perform.
- An instinct (at least one feeling) that encodes its goal.
- Memory, with an ability to extract useful knowledge from it.
- Imagination, to plan actions and evaluate their outcomes.
If we were able to implement this, which I think is a matter of knowing the right algorithms, does it mean that the rabbit would have some level of intelligence and therefore become a form of life? Is it alive? I believe yes. Furthermore, the rabbit would have free will.
Free will is born from the complexity and uniqueness of previous experience, where there is more than one supposedly optimal path and those paths lead to unknown results. Every rabbit will have a different experience, as each will encounter different grass locations. The complexity and specificity of prior knowledge, plus instincts and feelings, form what we call a "gut" feeling. Free will can exist only in a limited brain with limited experience. An unlimited mind would know the single most optimal path, and then there would be no choice.
All this raises interesting philosophical questions. For example, for the rabbit, its 2D world is real. For the rabbit, its creators are its God. The rabbit will never be able to understand our (God's) motivation for creating it and its world. The rabbit will never be able to prove our existence (unless we allow it). The rabbit will never see or understand our 3D "real" world. The rabbit will never notice if we pause its world for any period and then unpause it again. We, as creators, can make changes to the rabbit's world, and the rabbit will never see us - things will just appear, disappear, or change their rules. This also raises moral questions: as creators, we should be properly responsible for the beings we create.