How to work with AI

2024-06-26 13:41:38

“AI solves specific problems; human intelligence adapts to general situations.” Daniel Andler takes a detailed look at these technologies and counts on our capacity for adaptation to learn how to use them.

The mathematician and philosopher of science Daniel Andler has been working for almost 40 years on how our brain functions and on digital technologies. In Artificial Intelligence, Human Intelligence: The Double Enigma (Gallimard, 2023), he traces the history of the research that has enabled the dazzling progress of AI in recent years, while demonstrating that it remains radically different from us – and often less efficient.

Your book was published just over a year ago. Since then, the OpenAI wave has changed the general public’s image of AI. Has your analysis also evolved?

Daniel Andler: My book had already discussed at length the prowess of OpenAI and its ilk, but AI systems are being deployed faster than I had imagined – in businesses, in our work tools and already in our daily lives, on the internet or in our phones. This spectacular deployment has been accompanied by the democratization of multimodal applications, which can generate text, images, sound and even video all at once. AIs have thus become formidable illusionists, capable of imitating real people, adopting a voice, a style, increasingly credible reactions, and giving the impression that they can work in our place. All of this is largely illusory, but not without consequences. Business leaders imagine replacing employees to reduce their payroll. Politicians are considering automating certain decision-making processes, relying on automatic data analysis… On the internet, we can also anticipate a massive spread of false information, through social-media accounts, posts and comments generated by AI.

You are more worried, in short…

DA: In my book I explained how AI remains fundamentally different from human intelligence. Today I think more about the political and social consequences of confusing the two. In the immediate term, I would argue for drastically limiting the generation of products and works with AI, because it introduces a kind of “counterfeit currency” into the cultural market. This was also an intuition of the philosopher Daniel Dennett, an eminent specialist in human consciousness and cognition. In an article published shortly before his death, he denounced as criminal the creation of software or machines pretending to be real people. For him, we should condemn this practice in the same way we criminalize counterfeiting money. To be clear, for Dennett as for me, this is not a question of banning all development of AI. These technologies offer a range of formidable tools and possibilities that we have barely begun to explore. But using them well requires remaining clear-eyed about what they are.

Basically, do you think that AI should not replace human work for moral reasons, or that it really cannot?

DA: I lean towards the second option, which makes me optimistic about the future. Even as we go through a period of enthusiasm for AI, we will often find that the results remain mediocre without significant human work upstream, downstream, or both! It’s a bit like industrial products: their mass diffusion since the 1980s has paradoxically highlighted the unparalleled value of craft or artistic work. In the same way, products generated solely by AI will certainly continue to improve, but they will always lack something, and the public will tire of these soulless works. Companies will also realize that humans are indispensable, even in positions that are already heavily automated. Unfortunately, this realization will take time, and in the meantime the deployment of AI can cause short-term damage – job destruction, a speculative bubble, etc.

Will we always need drivers, radiologists, designers, translators, etc.?

DA: I am convinced of it, even if it seems less obvious today. At the end of May, an article in The New York Times reported that nearly one in two white-collar workers – executives, business leaders, etc. – thought that their daily work could be done by an AI. In my opinion, this is a fundamental error, which I call “the fallacy of the perfect ersatz”. If you create a copy that looks perfectly identical, it means the copy can perform the known and well-established functions of its model; but it also implies that it can do nothing more! In other words, it will remain incapable of adapting to an unforeseen situation, let alone innovating. This is exactly the limit that AIs run into: by design, they propose the most frequent response or reaction to a request, and therefore reproduce what already exists. That may well suffice for many small daily tasks. But the reality of human work, whatever it may be, consists mainly of adapting to the unexpected.

Could you give an example?

DA: Today, airline pilots operate on autopilot for most of the flight time. Airlines have therefore considered automating their aircraft to save money, but they have found it to be impossible. Pilots do not sit around twiddling their thumbs while the onboard computer takes over. They are constantly monitoring, anticipating problems, and taking over in the event of a breakdown or an emergency landing. As captains, they must also manage potentially aggressive or sick passengers, decide whether or not to disembark someone, change the flight path if necessary, etc. All these decisions require adaptation, flexibility and an analysis of the situation as a whole, with all that is unknown, uncertain and unprecedented in it. It is the same for a manager or a CEO: their job is not to consult spreadsheets and mechanically make decisions. They must create an offering, adapt to changes in a market, take into account the human factors in their company, etc.

What about seemingly more routine jobs? In your book, you argue that fully autonomous cars will probably never see the light of day, yet in the United States, driverless taxis are already on the road, right?

DA: We must be wary of announcements. Spectacular videos of “robo-taxis” regularly arrive from the United States, but these vehicles are much less autonomous than their manufacturers would have us believe. For one thing, absurd accidents remain frequent: at the end of May, a Tesla sped onto a railway track even though the passenger could clearly see that a train was approaching at full speed. But above all, these accidents give rise to revealing investigations. Last year in the United States, I learned that employees of a robo-taxi company intervened remotely every 20 minutes on average! This company also employed around three people for each vehicle in circulation. It is part of a long-term investment approach, but for the time being it remains less profitable than traditional taxis. Human labor therefore still seems necessary. That said, I do not deny the progress and the changes in usage that are emerging. It is already possible to activate automatic driving without any problem when driving in a straight line for hundreds of kilometers through the desert… But fully autonomous driving in all circumstances is still unlikely to see the light of day.

More generally, in what way do you think human intelligence remains radically different from AI?

DA: You might think that the two, seen as information-processing systems, are equivalent in principle. But in reality they do not do the same thing at all. AI solves specific problems by relying on a finite set of data. Schematically, it analyzes billions of examples to deduce that, if a sentence starts with, say, “the cat eats…”, there is a good chance the rest will be “the mouse”. Human intelligence, on the other hand, is above all a capacity to adapt to situations. Unlike a finite set of data, a situation has no strict contours; it contains many uncertainties, and at the same time it always has something unique about it, irreducible to a set of past examples. Imagine a banal situation: you are at the office and want to go home. But you have not finished your work, and your boss may reproach you for it tomorrow. At the same time, you would like to spend the evening with your family, or watching the latest episode of your favorite series… What will you do in the end? When you are weighing a decision like this, you feel that the challenge is not to solve a problem objectively, the way you solve a mathematical equation. You must make more or less appropriate decisions, improvise as the situation evolves, and above all be able to own the consequences.
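The “most frequent continuation” mechanism Andler sketches can be caricatured in a few lines of code. This is a deliberately minimal bigram model built on an invented toy corpus – real language models use neural networks over vast datasets, not raw word counts – but it illustrates his point: such a system can only ever return what it has already seen.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of examples a real AI is trained on.
corpus = [
    "the cat eats the mouse",
    "the cat eats the fish",
    "the cat eats the mouse",
    "the dog eats the bone",
]

# Count how often each word follows a given word (a bigram model).
follower_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent continuation observed in the corpus."""
    followers = follower_counts.get(word)
    if not followers:
        return None  # the model can only reproduce what it has seen
    return followers.most_common(1)[0][0]

print(most_likely_next("eats"))   # "the" — always followed by "the" in the corpus
print(most_likely_next("mouse"))  # None — never seen mid-sentence, so no answer
```

The `None` case is the crux of Andler’s argument: confronted with anything outside its finite training data, the system has nothing to propose, whereas a human improvises.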

You also consider that you need a body of flesh and blood to think like a human…

DA: Unlike an AI, we have a body engaged in the real world, at once the subject and the object of its own decisions. If you get too hot and decide to take off your sweater, your body cools down; the temperature felt by your brain drops as a result of the action it initiated. Even such a basic experience is completely alien to AIs. The computers and robots that house them are not, strictly speaking, their bodies. An AI would not feel the loss of hardware the way we experience a physical injury. Hardware is an extension that it pilots and can swap out, like our clothes. This difference is fundamental, because our decisions are ultimately intended to ensure our survival and our happiness. In the absence of a biological body, an AI is not engaged in a logic of survival and adaptation to the world, and it cannot develop an intelligence of the same type as that of living beings. If in the future we were to develop synthetic biology such that an AI would be both the product of a body and that body’s information-processing system, then things would be different. But in the current state of technology, even the cognition of animals remains closer to ours than that of an AI.

#work
