2024-01-22 09:00:33
While GPT-3.5, focused on text generation, generated significant interest, GPT-4 further strengthened the world's belief that artificial intelligence can work with images and sound. We are approaching an era in which AI interacts with us through multimedia, a reality that GPT-5 is expected to bring to fruition.
Following the 2023 calls for a six-month pause on developing systems more powerful than GPT-4, prompted by concerns over the lack of shared safety protocols for the design and development of AI, Sam Altman, CEO of OpenAI, recently announced that the company is continuing its research with an eye toward introducing an improved version in the future.
With reportedly ten times more power than GPT-4 and a hundred times more parameters than GPT-3, which already has 175 billion parameters, GPT-5 is projected to contain approximately 17.5 trillion parameters, which would make it one of the largest neural networks ever created. These expectations are generating palpable enthusiasm on a global scale.
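As a quick sanity check, the 17.5 trillion figure follows directly from the rumored multiplier: it is GPT-3's published 175 billion parameters scaled by an assumed factor of 100. A minimal back-of-the-envelope sketch, with the multiplier treated purely as a rumor rather than a confirmed figure:

    # Back-of-the-envelope check of the rumored GPT-5 parameter count.
    # The 100x multiplier over GPT-3 is speculation reported above,
    # not a figure confirmed by OpenAI.
    gpt3_params = 175e9          # GPT-3: 175 billion parameters (published)
    rumored_multiplier = 100     # assumed scale-up factor (rumor)

    gpt5_params = gpt3_params * rumored_multiplier
    print(f"Projected GPT-5 parameters: {gpt5_params:.1e}")  # 1.75e+13, i.e. 17.5 trillion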
Generative artificial intelligence reached a decisive milestone with Sam Altman's announcement on Bill Gates' "Unconfused Me" podcast. The OpenAI CEO revealed that the next model would be "fully multimodal", supporting "speech, image, code and video". GPT-5 would thus be able to process speech, images, code, and even video, and to generate video in response to user requests, marking an impressive step in the evolution of generative artificial intelligence. The full range of GPT-5's capabilities will only be known at its launch event.
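For context on what "multimodal" already means in practice, the sketch below shows how a combined text-and-image request looks with the current OpenAI Python SDK. The model name gpt-4-vision-preview, the placeholder image URL, and the exact request shape reflect the API as of this writing and are assumptions for illustration only; GPT-5's interface has not been announced.

    # Hypothetical illustration: a multimodal (text + image) request using the
    # OpenAI Python SDK as it exists today. GPT-5's actual API is unannounced.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # current vision-capable model (assumption)
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this image."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
                ],
            }
        ],
        max_tokens=300,
    )
    print(response.choices[0].message.content)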