I do not remember exactly how old I was back then; around 16 to 18. My father and I liked to talk about AI and the future of humanity. I am pretty sure he was doing this intentionally: he has an engineering mind, likes to discuss such things, and, I think, wanted to spark my interest.
I don't know if he read about this idea somewhere or came up with it himself, but he proposed creating an AI for an artificial rabbit. Imagine a 2D world of cells where a rabbit lives. Some cells are empty; some have green grass growing on them. Occasionally, new grass appears in random empty cells. The rabbit has a set of actions: it can move around the 2D world, eat grass in the cell it stands on, or do nothing. Eating grass increases the rabbit's health; doing anything else decreases it. If health drops to zero, the rabbit dies. The rabbit's goal is to live as long as it can. My goal was to create an AI for this rabbit.
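For readers who like code, here is a minimal sketch of such a world in Python. All the specific numbers (grid size, health gained per meal, regrowth chance) and the wrap-around edges are my own guesses; the original puzzle did not fix them:

```python
import random

GRID = 20        # hypothetical world size, not from the original puzzle
GRASS_P = 0.02   # chance per step that grass appears in a random empty cell

class World:
    def __init__(self):
        # scatter some initial grass at random
        self.grass = {(x, y) for x in range(GRID) for y in range(GRID)
                      if random.random() < 0.2}
        self.rabbit = (GRID // 2, GRID // 2)
        self.health = 10

    def step(self, action):
        """action is a (dx, dy) move, 'eat', or 'wait'."""
        if action == 'eat' and self.rabbit in self.grass:
            self.grass.remove(self.rabbit)
            self.health += 3              # eating grass increases health
        else:
            if isinstance(action, tuple):  # move in one of 8 directions
                dx, dy = action
                x, y = self.rabbit
                self.rabbit = ((x + dx) % GRID, (y + dy) % GRID)
            self.health -= 1              # any other action costs health
        # occasionally new grass appears in a random empty cell
        cell = (random.randrange(GRID), random.randrange(GRID))
        if cell not in self.grass and random.random() < GRASS_P:
            self.grass.add(cell)
        return self.health > 0            # False means the rabbit died
```

The AI itself would live outside this class, observing the world and feeding actions into `step`.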
You might say this sounds pretty simple, right? But here is the catch: I was not supposed to program what the rabbit should do. I had to create an AI that would figure out what to do by itself. The rabbit should, in effect, program itself to execute the optimal scenario.
Back then I knew nothing about AI. Later I read about neural networks, for example, but at the time I was trying to figure out everything by myself.
First, the rabbit needs receptors to see the world: for example, six cells of visibility in each direction.
Second, the rabbit needs a set of simple actions it can execute. Think about it: whatever a human achieves in life is done by executing simple operations, such as moving fingers, moving hands and legs, or pronouncing sounds and words. The same goes for the rabbit: its simple actions are moving in 8 directions, eating grass in the cell it occupies, and doing nothing.
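The whole repertoire is tiny; in Python it fits in a few lines (the names are mine):

```python
# 8 moves (every neighbouring cell), plus 'eat' and 'wait': 10 actions total.
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)]
ACTIONS = MOVES + ['eat', 'wait']
```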
Third, I needed to let the rabbit know its goal. It should know that it must survive as long as possible. How do you do that? How can you force an artificial creature to adopt your goal? I reasoned that it should work the same way as with real beings: through instinct. When we stop receiving food for an extended period, we feel pain; we are hungry. But what exactly is pain or pleasure, and how can you program it? Pain is a brain activity that we call a feeling. Even in this simple scenario, to have a goal a being should have at least one feeling.
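One way to "program" such a feeling is to collapse health into a single pleasure/pain signal that the rabbit tries to maximize; it never sees the goal "survive as long as possible" directly, only this signal. A toy sketch, with an arbitrary scale of my choosing:

```python
def feeling(health, max_health=10):
    """A crude instinct: positive means pleasure, negative means pain.
    The rabbit's only window onto its goal is this number."""
    return health - max_health / 2
```

Dropping health then literally hurts, and eating literally feels good, which is all the motivation the learning machinery needs.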
Fourth, when the rabbit is born, it knows nothing. Maybe it has a growing feeling of hunger, but it doesn't know what to do about it. To act optimally and survive as long as possible, the rabbit should learn first. It should determine what is good and what is bad for it based on its memory. And to learn anything, it should have a memory of its previous actions and their results.
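A minimal sketch of such a memory: a limited store of (situation, action, outcome) triples, from which the rabbit can estimate how good an action was in a given situation. The capacity and the averaging are my own simplifications:

```python
from collections import deque

class Memory:
    """A limited record of (situation, action, outcome) triples."""
    def __init__(self, capacity=1000):
        self.episodes = deque(maxlen=capacity)   # old memories fade away

    def remember(self, situation, action, health_change):
        self.episodes.append((situation, action, health_change))

    def evaluate(self, situation, action):
        """Average past outcome of taking this action in this situation."""
        outcomes = [h for s, a, h in self.episodes
                    if s == situation and a == action]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
```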
The rabbit can see only a limited number of cells around itself, say six in each direction; the rest of the 2D world is invisible to it. To be effective, the rabbit should also have some spatial memory, so it knows where it has been before.
Memory should be limited: we do not want to remember every possible situation, because that is not fun, not optimal, and almost unrealistic. Six cells of visibility in each direction means the rabbit sees a 13×13 matrix of cells. Each cell either has grass or does not, like a binary system, which gives 2^(13×13) = 2^169 ≈ 7.5×10^50 combinations. Even in this simple 2D world, remembering every possible visible situation is unrealistic and very ineffective. To optimize, we need to work with abstractions of situations instead. But how can we abstract from a 13×13 matrix of zeros and ones? How is it possible to make it even simpler? It is possible.
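Here is one possible abstraction (my choice, not the only one): instead of remembering the whole 13×13 window, remember only the direction and distance of the nearest grass. That collapses roughly 7.5×10^50 raw situations into a few dozen abstract ones:

```python
def abstract(window):
    """Collapse a 13x13 binary grass window into a tiny description:
    the Chebyshev distance and the direction (sign of dx, dy) of the
    nearest grass, with the rabbit at the centre cell (6, 6)."""
    best = None
    for y, row in enumerate(window):
        for x, has_grass in enumerate(row):
            if has_grass:
                dx, dy = x - 6, y - 6
                d = max(abs(dx), abs(dy))
                if best is None or d < best[0]:
                    best = (d, (dx > 0) - (dx < 0), (dy > 0) - (dy < 0))
    return best or ('no-grass',)
```

Two windows that look completely different bit by bit can now land in the same abstract state, which is exactly what makes learning from limited memory feasible.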
Fifth, if we have memory and can deduce some knowledge from it, the rabbit needs the ability to plan its actions. Planning means the rabbit should be able to imagine what will happen in one situation or another, evaluate those imagined situations, and choose the actions that lead to the most optimal results. The rabbit needs imagination.
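Imagination can be sketched as a small lookahead: mentally simulate each action a few steps into an internal model of the world, score the imagined situations, and pick the best first action. Here `imagine` and `evaluate` are placeholders for whatever model and feelings the rabbit has learned; the names and the recursion depth are my own:

```python
def plan(situation, actions, imagine, evaluate, depth=2):
    """Pick the action whose imagined future looks best.
    imagine(situation, action) predicts the next situation;
    evaluate(situation) scores how good a situation feels."""
    def value(sit, d):
        if d == 0:
            return evaluate(sit)
        # assume the rabbit keeps choosing its best imagined action
        return max(value(imagine(sit, a), d - 1) for a in actions)
    return max(actions, key=lambda a: value(imagine(situation, a), depth - 1))
```

Even with a depth of two or three, the rabbit can walk toward grass it can merely see, rather than stumbling onto it by accident.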
To summarize, to create a simple intelligence we need:
- Receptors to perceive the world.
- A set of actions it can perform.
- At least one feeling (instinct) that encodes its goal.
- Memory, with an ability to extract useful knowledge from it.
- Imagination, to plan actions ahead.
If we were able to implement all this, which I think is a matter of knowing the right algorithms, would it mean that the rabbit has some level of intelligence and is therefore a form of life? Is it alive? I believe yes. Furthermore, the rabbit would have free will.
Free will is born from the complexity and uniqueness of previous experience, in situations where more than one supposedly optimal path can lead to unknown results. The complexity and specificity of prior knowledge (every rabbit will have a different experience, since each will encounter different grass locations), plus instincts and feelings, form what we call a "gut" feeling. Free will can exist only in a limited brain with limited knowledge. An unlimited mind would know the single most optimal path, and then there would be no choice.
All this raises interesting philosophical questions. For the rabbit, its 2D world is real. For the rabbit, its creators are its God. The rabbit will never be able to understand our (God's) motivation for creating it and its world. It will never be able to prove our existence (unless we allow it). It will never see or understand our 3D "real" world. It will never notice if we pause its world for any period and then unpause it again. We, as creators, can make changes to the rabbit's world, and the rabbit will never see us: things will simply appear or disappear, or the rules will change. It also raises moral questions: as creators, we should take proper responsibility for the artificial beings we create.