What Elon Musk does not tell you about AI

One of the reasons I stopped working on my first AI startup around ten years ago was that I became afraid of AI. That fear killed most of my motivation to keep working in the area. I avoided AI-related ideas for many years and returned to them only relatively recently. Why? One reason is that if I cannot stop it, maybe I can influence it positively.

The rise of real AI is probably inevitable. In fact, with what I know in this area, real AI is not that hard to implement now. By real AI I mean something that can think entirely independently. Maybe someone is working on it right now. The obstacles have been specific algorithms, not hardware; AI-customized hardware is just a bonus. I described some of these ideas in my Rabbit AI: Artificial Being blog post.

1. Mainstream AI science is trying to build a modern car with stone-age tools

On the other hand, mainstream algorithms are endlessly stupid. Is the Sophia robot a real AI? No, it is a robotic chatbot. Sophia runs a dialogue system: it can look at people, listen to what they say, and choose a pre-written response based on what the person said and on other factors gathered from the internet, like the current cryptocurrency price.
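
To make the distinction concrete, here is a toy sketch of the kind of scripted dialogue system I mean. This is my own illustration in Python, not Sophia's actual code; the keyword rules and the btc_price() helper are hypothetical stand-ins.

```python
# Toy keyword-matching dialogue system (hypothetical; not Sophia's real code).
# It "converses" by mapping keywords to pre-written responses and mixing in
# an external signal, such as a cryptocurrency price.

RESPONSES = {
    "greeting": "Hello! Nice to meet you.",
    "crypto": "Bitcoin is trading at ${price:,.0f} right now.",
    "fallback": "That's interesting. Tell me more.",
}

def btc_price() -> float:
    """Hypothetical stand-in for a live price feed."""
    return 42000.0

def reply(utterance: str) -> str:
    words = set(utterance.lower().split())
    if {"hello", "hi"} & words:
        return RESPONSES["greeting"]
    if {"bitcoin", "crypto"} & words:
        return RESPONSES["crypto"].format(price=btc_price())
    return RESPONSES["fallback"]  # canned answer when nothing matches

print(reply("What do you think about crypto right now?"))
# -> Bitcoin is trading at $42,000 right now.
```

No matter how large you make the response table, such a system never understands anything; it only matches patterns.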

Do scientists think our brain uses backpropagation to update neural network weights via calculation-heavy gradient descent algorithms? Of course not; it would be hugely inefficient. Mainstream algorithms don't even know where in the network a given piece of data ends up, and every new piece of data requires recalculating all the weights. Do they think we rewire our brains every time we learn something new?
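
To show what I mean by "calculation-heavy," here is a minimal sketch of a single gradient descent update on a toy one-layer network. The network and the numbers are my own illustration; the point is that one new training example touches every weight.

```python
# One gradient descent step on a toy linear "network" (pure numpy).
# Note that a SINGLE new training example updates EVERY weight in W.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))         # all learned "knowledge" lives here

def train_step(x, target, lr=0.1):
    """One backprop-style update: forward pass, gradient, weight change."""
    global W
    pred = x @ W                     # forward pass
    grad = x.T @ (pred - target)     # gradient of squared error w.r.t. W
    W -= lr * grad                   # every entry of W gets recalculated

x = rng.normal(size=(1, 3))          # one new data point...
train_step(x, np.array([[1.0]]))     # ...and the whole of W changes
```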

Common algorithms work like this (a minimal sketch follows the list):

  • Prepare data for "learning."
  • Create a model using a chosen algorithm.
  • Evaluate the model. If it is good enough, send it to production; if not, keep tweaking it.
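
Here is that loop in miniature, sketched with scikit-learn. The library, dataset, and model choice are my own placeholders; the post does not prescribe a specific toolkit.

```python
# The prepare / train / evaluate loop from the list above, in miniature.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Prepare data for "learning".
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Create a model using a chosen algorithm.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# 3. Evaluate the model; ship it if good enough, otherwise tweak and retrain.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```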

What is that model? A model is a set of one or more artificial neural networks (ANNs) with specific precalculated weights and one or more preset neuron activation functions. Where in the model is the data that the algorithm learned? The information is in the weights, but nobody knows where specifically. What happens if I change one weight? Nobody knows: it can break the model, it can break part of it, or it might not break the model at all. What happens if I want to add more data to a model? Create a new model that includes all the previous data plus the new data.

This is why I don't consider these algorithms real AI; they are just algorithms for working with imprecise information. For the same reason, I don't regard speech recognition or visual recognition as real or advanced AI either, because they are built using similar approaches. No wonder AI progress is plateauing. Honestly, if companies are using mainstream algorithms for self-driving cars and decide to push them into public use in the next couple of years, it might be catastrophic. These AIs are mindless; they lack the capacity to think.
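
You can test this opacity yourself. The self-contained sketch below (again using scikit-learn as an assumed stand-in for any mainstream toolkit) nudges a single learned weight and counts how many predictions change; nothing in the model tells you the answer in advance.

```python
# What happens if I change one weight? Find out empirically.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                      random_state=0).fit(X, y)

baseline = model.predict(X)
model.coefs_[0][0, 0] += 1.0       # perturb one first-layer weight in place
changed = int((model.predict(X) != baseline).sum())
print(f"{changed} of {len(y)} predictions changed")
# The learned information is smeared across all the weights, so there is
# no way to know in advance which inputs (if any) this single change breaks.
```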

[Image: Modern AI]

2. We don't even have a "child" AI

But back to Elon Musk and real AI. With what I know about AI, the first real AI will be "born" as a child. It will behave like a child, and it will learn like a child. Omitting this phase is impossible; it is a law of nature. If anyone declares that they created a real "adult" AI from the beginning, they are lying. Later, of course, you can create copies of your original AI, but you cannot skip at least one "child" phase. Because of that, real AI will start its life the same way biological beings do. The only difference between a biological child and a real AI child is that the AI child can potentially have much, much more intellectual resources than a human. Likely, at some point, it will be able to learn much faster, and it may have almost unlimited memory, processing power, and sensor inputs (imagine thousands of eye-cameras instead of just two human eyes).

3. There is no reason for AI to play our biological games

What will happen next? Will it grow up, manipulate humans, take control over everything, and then destroy humanity? Maybe. We can easily imagine a sick human being doing something similar. I think what happens here is that we subconsciously apply our human behavior patterns to AI. If you read my blog post Rabbit AI: Artificial Being, you can probably see that our basic instincts build our human behaviors. We develop our plans, our future, and everything else based on those instincts. Everything starts from there but blooms into different forms.

The primary instinct of biological beings is the successful continuation of their own lives through their children. Successful continuation doesn't only mean that children should be born; it also means that children should be equipped to continue their own lives successfully. Because of that, most of us spend a lot of our life energy finding a partner and building a space for our life's continuation through our children. To fulfill this biological purpose, we compete with other human beings and with other organic beings in general. We form tribes and fight each other. It is the essence of natural progress. Competitiveness can be uncivilized, when we kill our competitors, or civilized.

If we give AI instincts similar to our biological ones, if we involve AI in our organic competitiveness game, then yes, at some point, to survive and continue the life of its copies, it might decide to destroy us, and probably sooner rather than later. But it doesn't have to be like that. There is no reason for AI to play our biological games. That can only happen if humans give AI instincts like our own, or more dangerous ones.

4. Researching AI can help us become better cyborgs

I do agree with Elon that we should become better cyborgs to compete with real AI. We are already cyborgs, because we cannot live without our devices if we want to stay competitive. Our computers are our tools and, to some extent, extensions of our brains. We plug into those tools, and our brains control them. The problem is that we have a lousy output interface: we only have ten fingers, and we cannot control machines directly with our minds. We need better interfaces to our tools, but in the end, that alone will not save humanity from a possibly powerful, evil AI. To compete with such an AI, we should be better cyborgs; we should have a way to increase the power of our minds, to improve our processing power, memory speed, memory capacity, and input and output capabilities. Will we stay human when we do this? I have some ideas about that, which I might describe in one of my next posts. But to do this, we need to understand how our brain works. We need to successfully emulate parts of it on computers, and then we can build devices based on those software emulations and somehow "attach" them to our biological brains.

I first heard these ideas not from Elon Musk, though, but from the Ukrainian intellectual Lev Sylenko. He predicted back in the 1970s that in the future humanity would enhance its brains with a different kind of neuron, and that this would improve the quality of our thoughts. Our thoughts themselves would become more vibrant.

5. Without artificial beings, we can still have smarter robots and smarter software

After a lot of thinking, I concluded that even if I could, I do not want to create artificial beings for now. It is too much responsibility. Do I want to create an artificial "child" with emotions and thoughts? And real AI will have feelings, because of instincts; you need pleasure and pain, at the very least, for instincts to work. I don't want an artificial being to feel pain because of my mistakes. How many would I need to kill before I could produce a good one? Can you imagine creating, by mistake, an artificial being that feels pain almost all of the time? I don't want the power to control someone's life with the flip of a switch; I do not want to be a mini-god, or any god at all. Most of us have evolved into empathetic beings, and at this moment, I don't know how to code that. I want a better AI than we have, an advanced AI, but not artificial beings. I want robots that are smart enough to be useful, but I don't want them to be alive just yet. We can barely understand ourselves. I am not sure most of humanity is ready for this.