2023-08-25 16:33:01
Microsoft President Brad Smith speaks at a forum in India
Microsoft President Brad Smith said today, Friday, that the rapid development of artificial intelligence risks repeating the mistakes the technology industry made at the dawn of the social media era.
Rapid advances in artificial intelligence are raising concerns around the world about the technology's potential to spread misinformation, be misused, and disrupt the labor market.
Speaking at a business forum today, Smith said such doubts had not taken hold among the technology's developers, noting that their optimism reminded him of the early years of social media platforms.
At the time, according to Smith, the technology industry became "a little bit excited about the good things that social media is going to bring to the world, and there's a lot of it, without thinking about the risks either."
"We have to be clear-eyed, we have to be excited about the opportunities, but we have to think deeply and even worry about the downside," Smith said. "We have to build guardrails from the start."
This week, a United Nations study said that artificial intelligence is more likely to boost jobs than destroy them, amid growing concern about the technology's potential impact.
The study examined the potential impact of the technology on the quantity and quality of jobs, noting that most jobs and sectors are only partially exposed to automation.
It suggested that most of them "will most likely be complemented, rather than replaced, by the latest wave of generative artificial intelligence, such as ChatGPT."
It’s clear, Smith said, that people “want to be confident that this new technology will remain under human control.”
There are concerns that chatbots could flood the internet with misinformation, that biased algorithms could produce racist material, and that AI-powered automation could destroy entire industries.
Source: agencies