Edouard Grave (Kyutai): “Making AI more factual remains a real scientific question to resolve”

Published on Nov. 28, 2023 at 4:57 p.m. Updated on Nov. 28, 2023 at 5:19 p.m.

A year after the emergence of ChatGPT, AI research has continued to progress, even if publications are becoming rarer. Edouard Grave, of the French laboratory Kyutai and formerly of Meta and Apple, takes stock for “Les Echos”.

Since the release of ChatGPT a year ago, has research in generative AI progressed?

Unfortunately, more and more of the field’s major players, particularly the large American companies, are publishing less and less, so it has become difficult to know where they stand.

The last major publication was in 2020, with the release of GPT-3 by OpenAI, although the research published in 2022 that made this large language model (LLM) accessible via the chatbot ChatGPT is also interesting. That said, my feeling, shared by many of my colleagues, is that the LLMs underlying chatbots like ChatGPT have improved a lot over the past year.

Is this type of artificial intelligence reaching the end of its potential?

No. Increasing the size of the model, and in particular the size of the training dataset, still works very well. Nor is it so easy to design an architecture that makes this scaling possible. Hence the rumors that GPT-4 is not a single model but a mixture of 16 models, each called upon depending on the query (one for computer code, another for other languages, and so on).
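
To make the mixture-of-experts rumor concrete, here is a minimal Python sketch of the general idea: a router dispatches each query to one of several specialist sub-models. The experts, the routing heuristic, and all names are hypothetical illustrations, not GPT-4’s actual (unpublished) architecture:

```python
# Toy illustration of a mixture-of-experts dispatch: a router picks
# which expert sub-model handles each query. All experts and routing
# rules here are hypothetical placeholders.

from typing import Callable, Dict

def code_expert(query: str) -> str:
    return f"[code expert] answer to: {query}"

def multilingual_expert(query: str) -> str:
    return f"[multilingual expert] answer to: {query}"

def general_expert(query: str) -> str:
    return f"[general expert] answer to: {query}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "code": code_expert,
    "multilingual": multilingual_expert,
    "general": general_expert,
}

def route(query: str) -> str:
    """Toy router. A real system would use a learned gating network
    that scores every expert and dispatches tokens to the top-k."""
    if "def " in query or "error" in query.lower():
        return "code"
    if any(ord(c) > 0x24F for c in query):  # crude non-Latin-script check
        return "multilingual"
    return "general"

def answer(query: str) -> str:
    return EXPERTS[route(query)](query)

print(answer("Why does my Python def raise a TypeError?"))
```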

What potential advances are being studied at the moment?

One of the research directions that interests us a lot at Kyutai is the use of tools to augment LLMs. For example, we are working on so-called “retrieval” modules, which solicit the right sources so that the model can generate precise answers: knowing when to mobilize this module, ensuring the relevance of the documents consulted, taking current events into account (the president of a country has been replaced, for example), citing sources… Making models more factual and limiting “hallucinations” is a real scientific question that is not at all resolved.
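
As a concrete illustration of the retrieval idea Grave describes, here is a minimal Python sketch: decide when to retrieve, rank documents by a toy relevance score, and cite the sources used. The corpus, the scoring, and all function names are hypothetical stand-ins, not Kyutai’s actual system:

```python
# Minimal sketch of a "retrieval" module augmenting an LLM: decide when
# to retrieve, fetch relevant documents, and cite them in the answer.

from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

CORPUS = [
    Document("encyclopedia/france", "The president of France is Emmanuel Macron."),
    Document("encyclopedia/llm", "Large language models predict the next token."),
]

def needs_retrieval(query: str) -> bool:
    # Toy trigger: retrieve for factual who/when/what questions.
    return query.lower().startswith(("who", "when", "what"))

def retrieve(query: str, k: int = 1) -> list[Document]:
    # Toy relevance: word overlap. A real module would use dense
    # embeddings and check document freshness for current events.
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: -len(words & set(d.text.lower().split())))
    return ranked[:k]

def answer(query: str) -> str:
    if not needs_retrieval(query):
        return f"[LLM alone] {query}"
    docs = retrieve(query)
    context = " ".join(d.text for d in docs)
    citations = ", ".join(d.source for d in docs)
    return f"[LLM + context: {context}] (sources: {citations})"

print(answer("Who is the president of France?"))
```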

Another big subject on the table is giving LLMs a so-called “planning” capacity. Today, they only predict the next word, without really constructing an argument the way a human lays out their ideas when outlining a text.
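
For readers unfamiliar with next-word prediction, here is a toy Python sketch of the autoregressive loop Grave refers to: the model extends the text one token at a time, with no explicit plan of the full answer. The bigram table is a hypothetical stand-in for a real neural model:

```python
# Autoregressive generation: repeatedly predict the next word from the
# words so far. No global plan of the answer is ever constructed.

import random

BIGRAMS = {
    "the": ["model", "plan"],
    "model": ["predicts"],
    "predicts": ["the"],
}

def generate(prompt: list[str], steps: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break
        tokens.append(random.choice(candidates))  # one word ahead, nothing more
    return tokens

print(" ".join(generate(["the"])))
```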

Finally, I would cite the research on so-called “multimodal” training of LLMs. This involves making them master all media (text, image, video, sound, etc.) so that they can use and interpret them together when responding to requests.

Some are seeking to create model training data synthetically. What do you think?

Until now, training LLMs has required a lot of human intervention so that the model learns what type of answer is expected for each type of query. Many are looking for shortcuts to go faster. Academic researchers, but also companies like Anthropic, use LLMs such as Claude or ChatGPT to train and fine-tune models.

One might think that this increases the risk of errors. But for logical reasoning (mathematical problems, for example), the answers are easy to verify. And we see that the model, which has absorbed many web forums full of debates, is capable of correcting itself.
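
Here is a minimal Python sketch of why synthetic data is easier to trust for logical reasoning: the answer to a math problem can be checked mechanically before being kept as a training example. The ask_llm function is a hypothetical stand-in for a call to a teacher model such as Claude:

```python
# Generate synthetic (question, answer) pairs with a teacher model and
# keep only the pairs whose answers pass a mechanical check.

import random

def ask_llm(question: str) -> str:
    # Placeholder teacher model: deliberately wrong 20% of the time.
    a, b = map(int, question.removeprefix("What is ").removesuffix("?").split(" + "))
    return str(a + b if random.random() > 0.2 else a + b + 1)

def make_verified_dataset(n: int = 100) -> list[tuple[str, str]]:
    data = []
    for _ in range(n):
        a, b = random.randint(1, 99), random.randint(1, 99)
        question = f"What is {a} + {b}?"
        answer = ask_llm(question)
        if int(answer) == a + b:  # mechanical verification step
            data.append((question, answer))
    return data

dataset = make_verified_dataset()
print(f"kept {len(dataset)} verified examples out of 100 generated")
```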

Are we far from artificial general intelligence (AGI)?

I don’t know if LLMs can be improved to reach that level of intelligence. Some are trying to give AIs, whatever form they take, the ability to build abstract representations, with words coming only in a second step. But there have been no breakthroughs in this area, as far as I know.

That said, AGI encompasses a lot of things. In a way, LLMs are already a form of general AI. However, we are very far from a system that is more intelligent than humans on all subjects. These are complex models, but for almost all of their capabilities we can get an idea of the data on which they rely to produce a given result.

Faced with the American and Chinese giants, what can French challengers like Kyutai, in research, or Mistral AI, in business, bring?

At companies like OpenAI, there are strong tensions between the research and product divisions. Kyutai will be dedicated to research, which is one of the reasons I am here. As for the large American AI companies, they are mostly working on scaling their models as large as possible.

There are plenty of other topics, and not just scientific ones. For example, organizations like OpenAI will not look, or only marginally, at models that you can personalize on your own computer or smartphone. Many companies will also not want to depend on the clouds and infrastructure of the GAFA (Google, Apple, Facebook, Amazon).
