What’s To Fear About Artificial Intelligence?

SUMMARY

Stephen Hawking, the famous British physicist, recently warned that the development of a full-fledged artificial intelligence could result in the end of the human race. Others, such as the engineer Raymond Kurzweil, offer a more optimistic outlook and believe that we will soon be able to upload our consciousness onto machines. In his book Le Mythe de la Singularité (The Myth of the Singularity, Seuil, 2017), researcher Jean-Gabriel Ganascia refutes the so-called "Technological Singularity," the sudden rupture that is supposed to transform humanity. According to him, if there is cause for concern about artificial intelligence, it stems not from the dangers it allegedly poses to humanity but from its current applications in our societies.

Paris Innovation Review — In your book Le Mythe de la Singularité ("The Myth of the Singularity"), you distinguish between artificial intelligence as a scientific discipline and the strong artificial intelligence invoked by the advocates of the Technological Singularity. Why is this distinction important?

Jean-Gabriel Ganascia — The term artificial intelligence was introduced in 1955 by scientists who wanted to use machines that seemed quite extraordinary at the time to understand intelligence — let's keep in mind that the first electronic computer was built in 1946. The idea was to break down intelligence into elementary processes and simulate each of these processes on a machine. The ultimate goal of this discipline is therefore the simulation of cognitive functions, with two objectives: to better understand intelligence, and to use these functions, simulated on machines, for practical applications (face recognition, pattern recognition, language understanding, automated theorem proving, simulation of reasoning, etc.). But this scientific discipline in no way intends to reconstitute an intelligent entity.

Conversely, strong artificial intelligence is a philosophical stance that aims to create an entity endowed with intelligence. This reflects an old idea in the collective psyche, that man will use science to produce some kind of alter ego: benevolent if it helps us in our tasks, evil if it ultimately ends up replacing us.

Does strong artificial intelligence lack a scientific basis?

That is my position. The arguments used to justify strong artificial intelligence are based on science fiction and are therefore misleading. The proponents of the Technological Singularity rely on Moore's law, whose long-term validity can be challenged, and on a hyperbolic form of machine learning that simply does not exist. They claim that this learning will make machines autonomous, not only technologically but also philosophically; in other words, that machines will have the ability to choose their own goals according to their own interests. They suggest, as a consequence, that technological autonomy will lead to philosophical autonomy and that robots will suddenly have a will of their own, separate from ours. I am a scientist, so I can't say it is impossible. But the current evolution of technology gives us no reason to expect this kind of development.

Isn't intelligence often confused with consciousness?

Absolutely. But consciousness is a complex notion, which can be considered from at least three perspectives. First, a conscious person has intentions. There are machines to which we attribute intentions by a kind of projection: these are so-called intentional systems. They are used in artificial intelligence for machine interfaces: their creators don't try to give them emotions, but rather all the external traits that make their behavior mimic that of an entity moved by emotions. This is called affective computing.

Second, a conscious person is capable of observing or representing themselves as they act. Can we make machines capable of such reflection? It's far from easy, but not impossible: some machines are capable of examining their own behavior and correcting themselves. Third, being conscious means feeling pain, having a sensation of living, and so on. We are quite helpless when it comes to simulating these aspects. Some exciting work is being done in the cognitive sciences, but this form of consciousness remains a mystery. We know that consciousness is closely linked to the living and to the needs of the living. Increasing computing capabilities will in no way help us investigate this form of consciousness.

You dispute the hypothesis that Moore’s law will continue indefinitely. Why do you think it will lose its validity?

Moore's law is an empirical law according to which the computational capacity of machines and their ability to store information double at regular intervals: every two years or every 18 months. The trend has been observed since 1959 and was formulated in 1965 by Gordon Moore, an engineer and one of the co-founders of Intel. It is linked to miniaturization, which allows the number of electronic components to double at regular intervals. The supporters of the Singularity extrapolate from this law, but they overlook the fact that in the physical world, nothing is infinite. Processors are currently built with silicon technologies. The size of components keeps decreasing, but at some point they will hit the so-called "silicon wall," i.e. the size of a silicon atom. Of course, we can imagine other materials that would allow us to keep improving the computational power of processors, but for the moment there is no clear alternative to silicon. There is some talk of quantum computing, but quantum computers won't be sitting on our desks anytime soon. As things stand, we have no assurance that Moore's law will hold over the next 10-15 years. In 2016, we already witnessed a significant decrease in the growth rate of processors' computational capacity.
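To see what is at stake in the extrapolation, here is a minimal sketch of Moore's law as plain exponential growth. The starting transistor count and the 18-month doubling period are illustrative assumptions, not figures from the interview:

```python
# Minimal sketch: Moore's law as exponential growth.
# Starting count and doubling period are illustrative assumptions.

def transistors(years_elapsed: float,
                start_count: int = 2_300,
                doubling_period_years: float = 1.5) -> float:
    """Project transistor counts under an idealized, unending Moore's law."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

# Five decades of uninterrupted doubling yields astronomically large
# numbers, which is exactly what the extrapolation requires the
# physical substrate to keep delivering.
for years in (0, 10, 25, 50):
    print(f"after {years:2d} years: ~{transistors(years):.2e} transistors")
```

The Singularity argument needs this curve to continue without bound; Ganascia's point is that the silicon substrate gives it a physical ceiling.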

You also point out the limits of the machine learning techniques that, according to the advocates of the Singularity, will one day allow machines to surpass us…

Machine learning techniques face two limitations. On the one hand, they only work when fed with examples, and those examples need to be annotated. That takes a lot of teachers! For example, before a machine can recognize a face in a photograph, you need to give it millions of faces and specify, for each one, the name of the person in question. The second limitation concerns the machine's inability to learn new concepts and notions by itself in order to restructure the body of knowledge that forms its world view. This is how human knowledge has evolved, thanks to the breakthroughs of scientific revolutions. For now, machines are unable to do the same.
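The first limitation can be made concrete with a minimal sketch of supervised learning. The library (scikit-learn) and the synthetic data are assumptions for illustration; the interview names no specific tools:

```python
# Minimal sketch of the first limitation: supervised learning needs
# annotated examples. scikit-learn and the synthetic data below are
# illustrative assumptions, not tools named in the interview.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a face embedding and each label names a person.
# Without the `labels` array (the human annotation) there is simply
# nothing to train on.
features = rng.normal(size=(1_000, 64))   # 1,000 annotated examples
labels = rng.integers(0, 10, size=1_000)  # which of 10 people appears

clf = LogisticRegression(max_iter=1_000).fit(features, labels)

# The second limitation shows here too: the model can only choose among
# the 10 identities it was given; it will never coin an 11th concept.
print(clf.predict(rng.normal(size=(1, 64))))
```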

This is not to deny the innumerable successes of artificial intelligence, which are tied to the type of society in which we live. With the Web, all of our exchanges are now digitized, and sensors can digitize a wide array of observations. Our entire contemporary economy is based on the exploitation of big data. This has led to the development of prediction systems that are used to make decisions, with pernicious side effects.

How can we regulate the artificial intelligence that pervades our daily lives?

I believe two ethical aspects are crucial today. First, there is the question of accountability. A device that is autonomous in the technological sense, i.e. one that makes decisions without human intervention and learns from large amounts of data, can become partly unpredictable. The European Parliament passed a resolution on the legal personality of robots which, in my opinion, doesn't really address the problem. It may provide compensation to victims, but it is a simple civil settlement that sidesteps investigation. If there is an accident, it is important to understand its causes in order to improve the machine and prevent it from happening again. It is also necessary to limit the actions of these systems, to ensure that they can't override certain commands. Human moral values must therefore be involved when these devices are programmed.
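One way to read "can't override certain commands" is as a hard constraint layer that sits outside whatever the system has learned. The action names and rules below are hypothetical, a minimal sketch of the idea rather than any real safety API:

```python
# Hypothetical sketch of a hard constraint layer: the action names and
# the forbidden set are invented for illustration.

FORBIDDEN_ACTIONS = {"disable_emergency_stop", "ignore_operator_halt"}

def execute(action: str, policy_confidence: float) -> str:
    """Run the constraint check BEFORE the learned policy is trusted,
    so no amount of learning can route around it."""
    if action in FORBIDDEN_ACTIONS:
        return f"refused: '{action}' violates a hard-coded constraint"
    return f"executing '{action}' (policy confidence {policy_confidence:.2f})"

print(execute("adjust_speed", 0.93))            # allowed
print(execute("disable_emergency_stop", 0.99))  # refused, however confident
```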

Then there is the problem of the massive and excessive use of predictive systems in our societies. On this point, I strongly recommend Cathy O'Neil's book Weapons of Math Destruction, published in 2016. The author shows that despite very positive aspects, machine learning techniques can also have extremely pernicious effects when misused. She documents their tragic use in the United States, where, in some states, sentences are based on an offender's estimated likelihood of recidivism, computed from highly questionable indicators. Our societies need to realize that we simply cannot delegate all of our decisions to machines. It is a huge mistake to believe that machine decisions are neutral.
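A toy example shows why such a score is not neutral. Everything below is invented for illustration (it is not any real scoring system), but it makes visible how a proxy variable can smuggle social bias into the arithmetic:

```python
# Hypothetical "risk score" built from a proxy variable. Weights and
# features are invented; this is not any real scoring system.

def risk_score(prior_arrests: int, neighborhood_arrest_rate: float) -> float:
    """Toy linear score: it looks objective, but the second feature is
    a proxy for where someone lives, not for what they did."""
    return 0.4 * prior_arrests + 0.6 * neighborhood_arrest_rate

# Two people with identical records get different scores purely
# because of where they live:
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.8))  # 0.88
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.1))  # 0.46
```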

Of course, there are other ethical aspects, such as the protection of personal data. On this issue, there is great concern about how the central government can use our personal data, but it seems to me that in our democracies, the government is essentially protective. On the other hand, the use of our data by major web companies is much more worrying.

The latter are among the main promoters of the Technological Singularity. Why this "pyromaniac firefighter" attitude, as you call it in your book?

Indeed, one wonders why these great companies lend credence to such far-fetched ideas. Several factors come into play, but I believe the most important is that they fuel irrational fears to hide the real dangers. Today, the question is no longer one of deleting information to hide it from public view, as in the Stalinist era, but of adding false information to divert attention. The fear of machines seizing power hides a social and political reality that, I believe, is being utterly transformed by contemporary technologies. The founding principle of any political organization, the very notion of government, is called into question by the fact that with the Web, a government is no longer coextensive with its territory. It seems to me that the big Web players have political ambitions and are engaged in a form of rivalry with governments, seeking to take over a number of their prerogatives.

Jean-Gabriel Ganascia

Professor at Pierre et Marie Curie University, Researcher in AI, Chairman of the Ethics Committee, CNRS

Jean-Gabriel Ganascia is a professor at Pierre-et-Marie-Curie University, where he conducts research on artificial intelligence at the Laboratoire d'Informatique de Paris 6 (LIP6). He chairs the ethics committee of the CNRS (France's National Center for Scientific Research) and has published several works, including the pioneering L'Âme machine (The Machine Soul, 1990).

Copyright & Re-publishing

Chuàngkàn Bālí (創瞰巴黎) is Paris Innovation Review's Chinese edition, for decision makers around the world.

This content is under a Creative Commons Attribution 3.0 License. You are free to share, copy, distribute, and transmit it, provided that the original source is credited.

Re-publishing requires permission from Chuàngkàn Bālí; please leave a message after the article.

Images are from the internet; copyright belongs to the original authors.
