Gary Marcus, a prominent artificial intelligence expert, has been vocal on social media about his concerns over Donald Trump's return to the presidency, especially with Elon Musk at his side. Following the recent election, Marcus feels a profound sense of disillusionment. A well-known voice on AI safety, he testified last year alongside OpenAI CEO Sam Altman before a Senate subcommittee weighing how to regulate AI effectively. In his new book, Taming Silicon Valley, Marcus argues that without sufficient oversight, generative AI, the technology underlying platforms like ChatGPT and Gemini, poses significant risks that could exacerbate existing global challenges.
A professor emeritus of psychology and neural science at New York University, Marcus has decades of interdisciplinary research behind him, merging cognitive psychology with advances in AI. He has also founded two startups: Geometric Intelligence, which Uber acquired in 2016 and turned into its deep learning lab, and Robust.AI, co-founded with Rodney Brooks, a creator of the Roomba, which develops open-source software for autonomous robotic systems.
Active on X and other social media platforms, Marcus frequently criticizes Musk and has gotten into spirited exchanges with prominent AI figures, including Yann LeCun, Meta's chief AI scientist. Marcus does not mince words about LeCun, calling him an “intellectually dishonest egomaniac” who dismissed Marcus's criticisms of large language models, only to shift his own position after ChatGPT outshone Meta's efforts.
Question. How do you see the future after Trump’s victory in the presidential election?
Answer. Dark. Generative AI comes with many risks, both short-term and long-term, and I think the prospects for meaningful regulation under the Trump administration are poor. The EU has its AI Act; the U.S. has very little law directly governing AI to protect its citizens, and I don't see that changing in the next few years.
Q. There have been some attempts to regulate AI in California. Do you think it’s possible that some states will pass their own AI laws?
A. California did pass some laws on data transparency, but Silicon Valley lobbyists successfully blocked SB-1047, which would have held companies liable for “catastrophic harm.” In my view, that was a significant mistake. Some states may still try to pass their own AI laws, but it will be an uphill struggle unless citizens collectively demand adequate protections. Otherwise, it might take a major catastrophe, like a large-scale AI-driven cyberattack, to prompt substantial legislative action.
Q. Trump has chosen Elon Musk to lead the Department of Government Efficiency. What do you expect from him?
A. Elon was one of the early voices warning about the dangers of AI, yet his financial stake in the technology makes it hard to see how his advice to Trump could be unbiased. I expect Musk to push for government subsidies for AI development, including for his own companies, despite the risks he previously warned against. Remember, this is the same person who signed the “six-month AI pause” letter while simultaneously building a massive GPU cluster for his own AI projects.
It is also ironic that Musk's wealth, and therefore his power, stems largely from the success of Tesla, a company built on eco-friendly electric vehicles, while the AI technologies he champions undermine environmental sustainability through their heavy energy and water consumption and their pollution footprint. I expect the Trump administration to push aggressively to ease environmental restrictions in order to expand power generation for AI's demands.
Q. Microsoft, Amazon, Google, and Meta are exploring the use of nuclear power plants to fuel their data centers, and some discussions have taken place with the Biden administration. Do you envision this initiative being more successful under Trump?
A. I fully expect the Trump administration to embrace this approach, barring unforeseen complications. Personally, I find nuclear power a sound option, but funneling that energy into enormous large language models may not be the most responsible use of such immense resources, particularly when we could prioritize reducing reliance on fossil fuels.
Q. Returning to Musk, what are your thoughts on the government appointing the richest man in the world? Can a government member also own a major social media platform?
A. It raises obvious conflicts of interest. Having someone with that much wealth, and ownership of a major social media platform, inside the government complicates any attempt at regulation.
Q. Considering big tech companies, do you think they will thrive under his oversight? Trump previously regarded Facebook and Twitter as liberal-leaning platforms.
A. Twitter (now X) has been transformed under Elon Musk's leadership, while Meta has changed less. But I believe Big Tech faces substantial hurdles because of its massive investments in generative AI, which are predicated on the unfounded dream that it will develop into artificial general intelligence (AGI). In reality, the technology is not robust enough to drive the transformations its advocates predict. If generative AI fails to turn a profit soon, the industry will face a burst bubble, and neither Trump nor Musk will be able to salvage that.
Q. Social media has undermined privacy and facilitated the rise of surveillance capitalism. What does the future hold for AI?
A. Generative AI will likely deepen surveillance capitalism. Many people unknowingly divulge their most personal secrets to chatbots, while the makers of large language models may gain access to vast amounts of sensitive information: files, emails, and passwords. The way LLMs generate responses can also manipulate people's beliefs, even implanting false ones, as a recent study by Elizabeth Loftus showed. The power this gives those developing LLMs is extraordinary. At the same time, these models are already being weaponized to spread misinformation, introduce bias into hiring, and enable cybercrime. The technology has some advantages, but it remains unclear whether they amount to a genuine benefit to humanity as a whole. The gains may accrue largely to the creators, with society bearing most of the costs.
Q. You argue in your book that tech oligarchs will wield increasing influence over American society. Did you foresee Trump’s victory while writing it?
A. I was certainly concerned about Trump winning, but I believe we would have faced significant challenges regardless of the outcome. The central thesis of my book has become even more pressing: we cannot rely on big tech to regulate itself, and the U.S. government is far too entangled with the tech industry to push for meaningful change. Citizens must raise their voices loudly and clearly, perhaps even through sustained boycotts, to carve out protections against the risks posed by AI.
Q. One of the shadowy figures from this technological oligarchy will occupy a government role.
A. Indeed. We should expect Musk to wield considerable influence over tech policy, likely more than any billionaire has ever had. It would not be surprising if Trump delegated significant portions of tech policy to Musk, despite the evident conflicts of interest. The alarming reality I warned against has arrived, and how we respond is now in our hands.
Q. How can we tame Silicon Valley?
A. People around the world must unite behind a resolute stance: “We reject AI that devastates the environment, exploits artists and writers, maligns individuals, and fuels mass misinformation, while its creators take little to no accountability for the damage they inflict.” Only by persistently demanding greater responsibility from big tech can we expect tangible improvement.