The GPT language model moves to version 4. OpenAI has announced the next generation of its AI, whose version 3.5 powers ChatGPT. It is said to be much more precise, more nuanced, and multimodal. For now, however, the new version is reserved for subscribers.
OpenAI, the firm behind the famous chatbot ChatGPT, has just announced the highly anticipated new version of its large language model (LLM), GPT-4. This new version is already available to ChatGPT Plus subscribers (at 20 dollars per month), and developers can register on a waiting list for access to the new API (programming interface).
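For developers waiting on API access, here is a minimal sketch of what a GPT-4 request could look like, assuming OpenAI keeps the chat completions format it already uses for GPT-3.5; the model name and parameters shown are illustrative, not confirmed details:

```python
# Minimal sketch of a GPT-4 chat completion request, assuming the
# chat completions endpoint used for gpt-3.5-turbo also serves GPT-4.
# Requires the `openai` package and an API key from an OpenAI account.
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name, pending API access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4 announcement in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```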
The firm did not disclose the technical details that differentiate GPT-4 from its predecessor. However, it claims that this new AI is "more reliable, more creative and able to handle much more nuanced instructions than GPT-3.5." Where ChatGPT is limited to 3,000 words of input or output, the new version can handle up to 25,000 words. Enough to give cold sweats to anyone trying to detect AI-generated content.
An AI that understands images in addition to text
Contrary to what Microsoft Germany stated last week, GPT-4 will not support video. Nevertheless, it is indeed a multimodal model: the AI accepts images as input, in addition to text, but its responses are limited to text. Image support will not be available to the general public immediately, however. It is currently being tested by Be My Eyes, an assistance application for the visually impaired.
Even if image input is not yet usable, the demonstrations of the multimodal version of GPT-4 are impressive. When the AI is shown a photo of balloons floating above the ground, tethered by strings, and asked "What would happen if the strings were cut?", it understands the content of the image and answers "The balloons would fly away." In another example, GPT-4 responds to a photo of milk, eggs, and flour with recipe ideas. No more standing in front of the open fridge door trying to decide what to make for dinner: just send the AI a photo and ask for suggestions.
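Since image input is not yet publicly exposed, one can only imagine how such a request might look if OpenAI extends its existing message format to accept an image reference alongside text. Everything below, from the content structure to the image field, is a hypothetical sketch rather than a documented API:

```python
# Hypothetical sketch: sending an image plus a question to GPT-4.
# Image input is NOT yet publicly available; the message structure
# below is an assumption about how a multimodal request might look.
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # hypothetical multimodal-capable model name
    messages=[
        {
            "role": "user",
            # Assumed format: a list mixing text and an image reference.
            "content": [
                {"type": "text", "text": "What would happen if the strings were cut?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/balloons.jpg"}},
            ],
        }
    ],
)

# Per the article, the response would be text only,
# e.g. "The balloons would fly away."
print(response["choices"][0]["message"]["content"])
```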
More correct answers, fewer hallucinations
OpenAI warns that GPT-4 is still prone to hallucinations, but this latest version is less error-prone than its predecessor. According to the firm, the rate of correct answers improves by 40% compared to the current version of ChatGPT, and the likelihood of responding to requests for disallowed content drops by 82%. This is good news given the firm's several commercial partnerships. Duolingo, the language-learning app, has announced its Duolingo Max service, offering conversations with GPT-4 in a foreign language (currently limited to French and Spanish for English-speaking users). The online payment system Stripe uses GPT-4 for technical support and to fight fraud.
Those who have access to the new version of Bing with conversational AI (after signing up on the waiting list) have already been testing GPT-4. After dodging the question in recent weeks, Microsoft has finally confirmed that its search engine does indeed integrate the new language model.