Meta has developed several versions of LLaMA, with varying resource requirements. (Photo: 123RF)
Meta on Friday unveiled its own text-generating artificial intelligence (AI) model, similar to ChatGPT, opening it up for now to researchers so they can address the risks posed by these new technologies.
The objective of this new language model, called LLaMA, is “to help researchers progress in their work” on the subject, in particular because it does not require very large infrastructure to study, the company said in its announcement.
The November launch of ChatGPT, the conversational bot from the start-up OpenAI, shook up the AI world by showing the general public how new “language models” can, in a few seconds, generate a text on a given theme or explain a complex subject.
But these models also pose risks, whether factual errors, bias or data protection issues.
A test version of Microsoft’s Bing search engine, developed in partnership with OpenAI, quickly produced erratic responses, with the program notably issuing threats and expressing a desire to steal nuclear codes.
“Additional research is needed to address the risks of bias, toxic comments and hallucinations,” says Meta, the parent company of Facebook and Instagram.
But training and running these language models takes significant resources, especially computing power.
This “limits researchers’ ability to understand how and why these large language models work, hampering efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the ability to generate erroneous information,” the company notes.
This is why Meta has developed several versions of LLaMA with varying resource requirements.
While OpenAI and Microsoft limit access to the technologies that power their AI, Mark Zuckerberg’s company decided to share how it built LLaMA so researchers can “more easily test new approaches to limit or eliminate these problems.”