The Mirage of Intelligent Machines: Understanding ChatGPT


OpenAI presented its new ChatGPT o1 model on Thursday, billed as a “significant advance” because it can supposedly “reason” – soon like a “doctoral student”, the American company assures. But how can a chatbot reason? By pretending.

Has ChatGPT reached the age of reason? Its parent, OpenAI, claims that its “baby” has passed this milestone. The American company announced on Thursday, September 12, the release of its new artificial intelligence model – dubbed o1 – which, it says, has been trained to perform “complex reasoning”.

For now, ChatGPT o1 has mainly learned to “think before answering,” OpenAI enthuses in its press release. The stated goal is to turn it into a large language model (LLM) – or chatbot – capable of rivaling the reasoning of a “doctoral student” in mathematics, biology or physics.

Champions in probability, not in reasoning

A tall order! Until now, large language models – such as ChatGPT – were not associated with the idea of “reasoning”, and “when they ventured into this territory, they were rather bad”, says Mark Stevenson, a computer science and language model specialist at the University of Sheffield.

No wonder: these chatbots are “above all machines that excel in the art of deciding which word is most appropriate to add to a sentence [so that it makes sense and answers the user’s question – Ed.]”, the expert adds. They are champions of probability, and nothing in their programming prepares them to reason.
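To make Stevenson’s point concrete, here is a deliberately minimal sketch (in Python, with an invented vocabulary and made-up probabilities) of what “deciding the most appropriate next word” amounts to: the model assigns a probability to every candidate word and picks accordingly, with no reasoning involved.

```python
import random

# Invented vocabulary and probabilities for illustration only: a real
# model scores tens of thousands of tokens with a neural network at
# every step, but the selection principle is the same.
def sample_next_word(candidates: dict[str, float]) -> str:
    """Pick the next word by sampling from a probability distribution."""
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution after the prompt "The capital of France is"
next_word_probs = {"Paris": 0.92, "located": 0.04, "a": 0.03, "Lyon": 0.01}
print(sample_next_word(next_word_probs))  # almost always "Paris"
```

Repeat this word-by-word sampling enough times and a fluent answer emerges, without the model ever manipulating the ideas behind the words.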

Could ChatGPT have some kind of extra soul – to stay within the anthropomorphism dear to OpenAI – allowing it to rise above its peers? It is difficult to determine what would allow o1 to “reason” better, as OpenAI is stingy with details about the inner workings of its algorithm. “It’s frustrating for researchers not to know the details of the scientific results that OpenAI claims to have obtained,” laments Nicolas Sabouret, professor of computer science and specialist in artificial intelligence at the University of Paris-Saclay.

For him, using the term “reason” is in any case an “abuse of language”. “We should probably say simulation of reasoning, because asserting that a machine can reason is like maintaining that a submarine can swim. It’s absurd”, summarizes Nicolas Sabouret, taking up a comparison made in 1984 by the Dutch mathematician and computer scientist Edsger Dijkstra.

Master of Thought Chains

In the world of AI, “we need to free ourselves from the strict definition of logical reasoning,” says Nello Cristianini, professor of artificial intelligence at the University of Bath and author of “Machina Sapiens” (published by Il Mulino). For him, “there are other ways to approach the term: for example, we can consider that using probabilities to assemble different pieces of information in order to create new information – which is what generative AI does – is already the beginning of reasoning.”

ChatGPT o1 has apparently become much better at putting together the pieces of the puzzle in its database. It seems that OpenAI “has found a new way to train its model [that is, to discover logical links in its immense database – Ed.]”, says Anthony Cohn, professor of automated reasoning and artificial intelligence at the University of Leeds and the Alan Turing Institute.

This is what would ultimately allow it to “push the limits of next-word prediction and give a better impression of reasoning,” Anthony Cohn continues.


OpenAI claims that its generative AI can now make better use of “chains of thought” to solve complex problems. The concept of “chains of thought” is not new: “It is the ability to break a statement into smaller elements to solve the problem more easily, one step at a time,” explains Nicolas Sabouret.

This is what OpenAI means when it says that ChatGPT o1 will “think before answering”. A classic example for an AI “is to detail step by step the preparation and baking of a cake”, illustrates Tom Lenaerts, professor at the Free University of Brussels and president of the Benelux Association for Artificial Intelligence.

No more taking the chatbot by the hand

Traditionally, the user has to take the chatbot by the hand to guide it step by step. With the right questions, language models can thus perform step-by-step reasoning simulations.
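A sketch of what “taking the chatbot by the hand” looks like in practice, assuming a hypothetical `ask_llm()` helper standing in for any chat-completion call (the function and the prompts are illustrative, not OpenAI’s actual interface):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat-completion API."""
    raise NotImplementedError("Wire this to a real LLM endpoint.")

# Without guidance: the model answers in one shot and may skip steps.
one_shot = "How do I make a chocolate cake?"

# "Taking it by the hand": the prompt spells out the decomposition,
# pushing the model to produce each intermediate step of the chain.
guided = (
    "How do I make a chocolate cake? Answer step by step:\n"
    "1. List the ingredients.\n"
    "2. Describe the preparation of the batter.\n"
    "3. Give the baking temperature and time.\n"
    "4. Explain the finishing touches."
)
# answer = ask_llm(guided)
```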

In the case of ChatGPT o1, the AI seems to have “learned to control thought chains and use them deliberately,” says Nello Cristianini. It will no longer need to be told the steps to make a cake in order to do so.

But OpenAI does not want to enter its model in the reality TV show Top Chef. The goal is to set it loose on physics, chemistry or maths, subjects in which this type of step-by-step reasoning “can allow us to solve problems that were previously beyond the reach of large language models,” assures Nello Cristianini.

It is no coincidence that OpenAI cites only hard sciences when testing the “reasoning” abilities of its virtual “doctoral student”. Unlike for a flesh-and-blood student, fields such as history, philosophy or geopolitics seem out of reach for ChatGPT o1.

“That’s probably because with hard science, there are correct, verifiable answers, which is important for testing the validity of the model,” Stevenson says. How do you check whether ChatGPT o1 has “correctly” answered questions like “What is the best way to resolve the Israeli-Palestinian conflict?” or “Does talking mean renouncing violence?”

Furthermore, “the concepts manipulated by the human and social sciences do not obey strict relationships between words, unlike physical formulas for example,” adds Nicolas Sabouret. An AI will thus have much more difficulty “understanding” and manipulating a term like “liberal” (with its economic and political senses) than a term from the theory of relativity.

Fewer mistakes, more money?

But why push a chatbot to give the impression of “reasoning”? First, “if we can get it to simulate step-by-step reasoning, we can hope to reduce the risk of wrong answers,” says Anthony Cohn. Indeed, “breaking down a problem into several parts can lead the AI to notice when answers are contradictory and thus eliminate them. This would be a major step forward for generative AIs,” says Simon Thorne, an artificial intelligence specialist at Cardiff University.
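A minimal sketch of the idea Thorne describes, reusing the same hypothetical `ask_llm()` helper as above: sample several step-by-step answers to the same problem and discard the outliers that contradict the majority. Majority voting is just one simple stand-in for the consistency check; OpenAI has not said how o1 actually does it.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat-completion API."""
    raise NotImplementedError("Wire this to a real LLM endpoint.")

def consistent_answer(question: str, n_samples: int = 5) -> str:
    """Ask the same question several times with step-by-step prompting,
    then keep the answer most samples agree on: contradictory outliers
    are treated as likely errors and eliminated."""
    prompt = f"{question}\nReason step by step, then give only the final answer."
    answers = [ask_llm(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```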

OpenAI also has a much more down-to-earth goal. The group claims that its model behaves “in an increasingly human way, reasoning like a human, because it wants to suggest that it is approaching ‘general intelligence’ [like humans – Ed.]”, analyzes Tom Lenaerts.

This sought-after anthropomorphism shows up in the ChatGPT o1 interface: the time the chatbot takes to respond is displayed as if it were thinking, “when it is only computation time”, adds Tom Lenaerts.

General intelligence – or “superintelligence,” as OpenAI CEO Sam Altman calls it – is the group’s Holy Grail. Achieving it is OpenAI’s official goal. “Certainly, to achieve this, knowing how to reason is an essential step, because humans do it every day,” confirms Anthony Cohn.

That is why OpenAI rushed to release this new model, which, by the company’s own admission, remains “very improvable”. Sam Altman had to show that he has a roadmap toward his famous “superintelligence”. After all, the promise of achieving it is one of the reasons OpenAI is so popular with investors.
