Recently, big names in science and technology, Stephen Hawking and Elon Musk, publicly voiced their concerns about artificial intelligence. Musk even donated $10 million to keep artificial intelligence friendly. However, Forbes stated: “Gates and Musk both have an interest in ensuring that artificial intelligence not only stays friendly, but stays viable (e.g. – public sentiment and lawmakers don’t turn against the basic notion of smart networks and devices), given that it’s likely to play a role in the future of not only Microsoft, but also Musk’s SpaceX and Tesla.” This statement rings especially true, bearing in mind that Musk announced Tesla will be the first company to release self-driving cars, and that Microsoft is developing Cortana. I will not argue that we should not have friendly AI; we definitely should. Nor will I argue that AI is not a danger: almost anyone can see that it is, and even today it has some morally questionable uses. Many applications of AI today are not friendly. But we are far from the point where we cannot control it. I also work in the AI field, and I do not see any approach in the near future that could lead to machine consciousness. Let's have a brief overview of AI.

Overview and a very brief history of AI

Currently, most AI efforts are partial. This is not bad by definition, and we can learn a lot about intelligence from the partial approach, but we are still unable to model the behavior of a thinking entity. By partial, I mean that the field is divided into subfields, which in turn have subfields of their own. These subfields model some of the human abilities, namely:

  • Seeing – Computer vision
  • Talking – Speech synthesis
  • Listening – Speech recognition
  • Understanding language – Natural Language Processing
  • Reasoning – Automated Reasoning
  • Consciousness – Philosophy / Cognitive Science
  • Walking / moving around – Robotics
  • Learning – Machine Learning

You can argue that philosophy is not an AI field, but it researches consciousness, and a strong AI capable of destroying humanity would have to be conscious.

Alan Turing, in his paper “Computing Machinery and Intelligence”, stated: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.” He also predicted that in about 50 years we would be able to model child behavior. About 65 years have passed since that paper, and we are still far from that goal. Because of this, many say that the history of AI is a history of failure. This judgment sums up 50 years of trying to get computers to think. There have been recent breakthroughs in some fields: machines can infer patterns from data and reason from them, and they are becoming decent at understanding some languages. For some applications, they are as accurate as humans. They can learn one's habits and recommend the best ads or the best things to buy. We have seen cars driven successfully by machines. However, these systems cannot think on their own and are trained to perform a single task or a small number of tasks. This does not mean they are not dangerous; a self-driving car can still crash. But at the current state, machines are not capable of rebelling against humans. This might change in the long run, and this is why Hawking, Gates and Musk are worried. I am as well.

The state administration and military are the main users of new technologies

It is largely invisible, but the state administration and the military fund AI research on a large scale. They are also the main users of these technologies. Some of the applications appear in general use as some company's product. There are quite a few civilian uses we can see, such as recommendation engines, advertising, search, translation and maybe a few more. However, machine learning, natural language processing, computer vision, robotics and speech recognition are very useful military technologies. DARPA funded the Message Understanding Conferences in the 1990s, where big advances in natural language processing were achieved. Robots are used almost exclusively by the military. Natural language processing and machine learning, together with data mining, are very useful in surveillance (especially the programs pointed out by Edward Snowden).

The main purpose of a military is to defend and attack: to defend its own country and government, and to attack countries or entities that are a threat to them. This is also the main purpose of various state security agencies. Often, in the name of defending their own government, they violate the rights of the people living in the country. And often, with the excuse of defending the country against terrorism, the citizens of other countries are put under threat. With natural language processing and machine learning, governments can do this on a larger scale than ever and perform surveillance on almost all of the inhabitants of the world. With the use of robotics, the danger of losing soldiers is getting lower, especially for economically advanced countries.

Governments and the military have gained enormously from using artificial intelligence. And, as was said before, they are the main funding source for research in this field. However, this main research funding body is already using these technologies to do harm. Here and there they may save someone or prevent a terrorist attack, but these are the organizations we should be afraid of when it comes to building machines capable of destroying humankind. The other thing is that a lot of research is done under the cover of an honest and good goal. I am doing text mining in biomedicine, and I think it is a good cause. However, it is relatively easy to generalize the research and apply it to another field, thus misusing it.

Can we do anything about it?

The answer is probably no. If someone ever builds strong AI, the military will be among the first to see it and use it in real scenarios. If it performs well, they will simply apply it more and more, and even fund improvements. The only thing we can do is to never build strong AI. It is quite a pessimistic view, but it seems to be the only one.