More About Artificial Intelligence
AI, Artificial Intelligence, is the talk of the town. Even the Vice-President is giving her interpretation of what it is. Many people are running scared, and some of them are scientists and engineers. People figure that if the experts are scared, there must be something to this. First of all, what exactly is AI? I would like to give you my definition: it is a computer program which makes decisions without the aid of humans. Some believe it is safe because, supposedly, it can’t make a decision which wasn’t programmed into it. This is not true.
I associate it with this example. Many of us take medicine to stay well. We know what our medicines are supposed to do; the doctor usually tells us what they are for. So far so good. When medicines are mixed, we can get unexpected results. I am not saying AI programs are mixed with other programs, but when commands are combined, AI has in some cases been known to do unexpected things. Taking this to a simple level, a robot would have some sort of AI program. If it were a very simple machine, it might just make burgers. That robot would probably, though not certainly, make burgers for its entire working life without a mistake. It has been equipped to do one very simple thing, even though that thing takes many steps to accomplish.
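To make that concrete, here is a minimal sketch of such a single-purpose machine. Everything in it is hypothetical and invented for illustration; the point is that the program can only walk through the steps it was given and never makes a decision of its own.

```python
# A minimal sketch of a "burger robot" as nothing more than a fixed
# sequence of steps. The step names are hypothetical placeholders,
# not a real robotics API.

BURGER_STEPS = [
    "toast_bun",
    "grill_patty",
    "add_cheese",
    "add_lettuce_and_tomato",
    "assemble_and_wrap",
]

def make_burger():
    """Execute every step in order; there is no decision-making here."""
    for step in BURGER_STEPS:
        print(f"performing: {step}")
    print("burger complete")

if __name__ == "__main__":
    make_burger()
```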
Moving on to something more complex, let’s talk about self-driving cars. This is a much more complicated AI program, which has to take into consideration many more factors and many more possible reactions. Human life could depend on the program doing the right thing at the right time, and that relies on the programmers having anticipated every type of happenstance and programmed a solution for it. I was always told you could never think of everything which might affect your life. The same might be true for an AI driving program. Something could happen which was simply never thought of, and the software would be at a loss to handle it. An example might be another car heading toward you for a head-on collision: the program takes a drastic evasive step, but the collision might never have happened because at the last minute the other driver swerved out of the way. I couldn’t fault the AI software, but in that case maybe it would have been better if a human were driving.
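As a rough illustration, here is a sketch of the kind of rule a driving program might follow. The inputs and thresholds are made up, and no real self-driving system is this simple; the point is that the rule can only react to the situations its programmers wrote down.

```python
# A toy rule-based evasion decision, not any real self-driving system.
# The inputs and thresholds are invented for illustration.

def choose_maneuver(time_to_collision_s: float, shoulder_clear: bool) -> str:
    """Return an action based only on rules the programmers anticipated."""
    if time_to_collision_s > 3.0:
        return "maintain_course"        # no imminent threat detected
    if shoulder_clear:
        return "swerve_to_shoulder"     # drastic step, taken immediately
    return "emergency_brake"            # last resort among the known options
    # Nothing here can account for the other driver swerving away at the
    # last moment; the rule set only covers what was written into it.

print(choose_maneuver(1.2, shoulder_clear=True))   # -> swerve_to_shoulder
```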
The best-known rules for robotic devices, Isaac Asimov’s Three Laws of Robotics, are quite clear. The first law states a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second law states a robot must obey the orders given to it by human beings except where such orders would conflict with the first law. The third law states a robot must protect its own existence as long as such protection does not conflict with the first or second law. While these are called laws, they are not the laws of any country or state; they are more like suggestions which many have adopted. One of the problems with these laws is that they have been abandoned by the military.
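The interesting thing about the laws is their strict ordering, and that ordering is easy to picture in code. Here is a minimal sketch, using an invented Action type, showing how each law is only consulted once the laws above it are satisfied; it is an illustration of the idea, not any real safety framework.

```python
from dataclasses import dataclass

# A sketch of the strict precedence in the Three Laws: each check is
# applied only after the checks above it, so a lower law can never
# override a higher one. The Action fields are hypothetical.

@dataclass
class Action:
    harms_human: bool        # would the action injure a human?
    ordered_by_human: bool   # was the action ordered by a human?
    endangers_self: bool     # would the action damage the robot?

def allowed(action: Action) -> bool:
    # First Law has absolute priority: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already known not to violate the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise the robot protects itself.
    return not action.endangers_self

# An order to do something self-endangering is allowed (the Second Law
# outranks the Third), but an order to harm a human is not.
print(allowed(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))   # True
print(allowed(Action(harms_human=True, ordered_by_human=True, endangers_self=False)))   # False
```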
The militaries of the world would love nothing better than robots trained to kill humans in various ways. If a robot were ever to turn on its maker, I think the chances of that would be greatest during wartime. While today most robotic devices like drones and assault vehicles are controlled by humans, the military is working hard to let at least some of them act autonomously, that is, to decide on their own who is an enemy and who is a friend. I haven’t heard much about teaching them to capture enemy soldiers, only to wipe them out. How would a robotic device entering a building full of enemy soldiers with captives know the difference between the two?
There is no stopping AI devices from being developed at this point, and the reason is the same as the reason countries want to develop nuclear bombs: humans are their own worst enemies. We felt forced to develop the atom bomb because Germany was believed to be working on one. That made the Soviet Union want one, and they stole the plans. The next thing that happened was that other countries which didn’t have one felt threatened and began to develop their own. Then the plans spread to some of the more radical countries, and they began to develop them too. The same thing is beginning to happen with Artificial Intelligence. As I said, the human race is its own worst enemy.
Don’t get me wrong: AI could also be a boon in some areas. Think about all the older people who could use some sort of assistant. The elderly and the sick have been shown to react favorably to robots, for example, and many begin to think of them as a friend. An elderly person who is alone could benefit from having a smart robot in the home. They could also benefit from owning a car they don’t have to drive themselves. A smart car could take them to the doctor or the hospital for appointments and, taking it a step further, could end their isolation.
Robot doctors are beginning to creep into the medical field in areas such as surgery. While they are still under the control of the surgeon, they make smaller incisions and quicker recoveries a lot easier, and one of the reasons is that they are so much more accurate. An AI-controlled computer might also be able to discern all the different reactions of medicines mixed together, and that would be a tremendous help in writing prescriptions for people.
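Here is a minimal sketch of that prescription-checking idea: every pair of medicines on the prescription is looked up in a table of known interactions. The drug names and effects are invented placeholders, not medical data, and a real system would need far more than a pairwise lookup.

```python
from itertools import combinations

# A toy interaction checker: flag every pair of prescribed drugs that
# appears in a table of known interactions. All entries are invented
# placeholders for illustration only.

INTERACTIONS = {
    frozenset({"drug_a", "drug_b"}): "raises blood pressure",
    frozenset({"drug_a", "drug_c"}): "causes drowsiness",
}

def check_prescription(drugs):
    """Return a warning for every interacting pair of drugs."""
    warnings = []
    for first, second in combinations(drugs, 2):
        effect = INTERACTIONS.get(frozenset({first, second}))
        if effect:
            warnings.append(f"{first} + {second}: {effect}")
    return warnings

print(check_prescription(["drug_a", "drug_b", "drug_d"]))
# -> ['drug_a + drug_b: raises blood pressure']
```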
I think we have all noticed how much hacking is going on. I am talking about all the stolen data around the world and how no one is safe from data loss anymore, no matter how careful they are. The reason is that your personal information is all over the internet, and some of it is on your computer and even your cell phone. Think about this: we get into a war and the enemy figures out how to hack our drones and robotic vehicles, making them stop working or even turning them against us. Given the past history with computers, and since AI is in essence a computer program, how can we be sure this will never happen? I am sure we are probably working on such hacks ourselves. I don’t believe we can ever fully protect any computer program from being hacked, even one that is an AI.