ChatGPT & Co.: what still needs to evolve

The news cycle around ChatGPT has reached (or exceeded) its overheating point. Not a day goes by without an article highlighting either its prowess, or the excessive expectations placed on it (such as handling the entire defense of a court case), or its flagrant shortcomings (errors in math, logic, or common sense, like the infamous “cow’s egg”). If we want to get past the hype surrounding ChatGPT and better understand the underlying mechanisms, it is important to understand pre-trained generative models, also called foundation models (a term coined in a 2021 Stanford paper), which include GPT-3, OPT, PaLM, Bloom and many more…

The Turing Institute in London, one of the major AI research institutions in Europe, organized a symposium this week to take stock of these famous foundation models. The basic logic is now better understood: it is above all a statistical mechanism that predicts, given a text, the most likely next word. The “emergent” properties of these models were discovered almost by surprise: their capacity to produce surprisingly relevant answers or, with a few refinements, to hold a dialogue, or even to write poetry or imitate a style.
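
To make that “most likely next word” idea concrete, here is a deliberately minimal sketch in Python: a toy bigram counter, nothing like GPT’s actual neural architecture, that simply predicts the word that most often followed the current one in a tiny corpus (the corpus and function name are purely illustrative).

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only) -- real models train on billions of words.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (a bigram model).
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most frequently followed `word` in the corpus."""
    return transitions[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> 'cat' (seen twice; 'dog', 'mat', etc. once)
print(most_likely_next("sat"))  # -> 'on'
```

Real foundation models replace these raw counts with a neural network conditioned on a long context, but the training objective is essentially the same: pick the most probable continuation of the text.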

But the defects of these tools are increasingly obvious: while they are very good models of language, they are poor models of reasoning. They sorely lack a representation of the world and are notoriously bad at questions of temporality or spatial understanding. Examples abound. All of this is explainable and will gradually improve… but in the meantime it has serious consequences.

ChatGPT-type AIs (which are sure to proliferate throughout 2023) can certainly be a great help in writing tasks, but do we really want to entrust them with medical diagnosis? Or with strategic decisions for a business? Major efforts are underway to establish benchmarks for the fine-grained evaluation of what these models can and cannot do. Until we better understand what these foundations can actually bear, we would be well advised not to build castles in Spain on top of them…
