OpenAI chief executive Sam Altman delivered a sobering account of ways artificial intelligence could “cause significant harm to the world” during his first congressional testimony, expressing a willingness to work with nervous lawmakers to address the risks presented by his company’s ChatGPT and other AI tools.
Altman advocated for a number of regulations, including a new government agency charged with creating government standards for the field, to address mounting concerns that generative AI could distort reality and create unprecedented safety risks. The CEO tallied a list of “risky” behaviors presented by technology like ChatGPT, including spreading “one-on-one interactive disinformation” and emotional manipulation. At one point he acknowledged AI could be used to target drone strikes.
“If this technology goes wrong, it can go quite wrong,” Altman said.
Yet in nearly three hours of discussion about the potentially catastrophic harms of AI, Altman affirmed that his company will continue to release the technology, despite likely dangers. Rather than being reckless, he argued, OpenAI’s “iterative deployment” of AI models gives institutions time to understand potential harms – a strategic move that puts “relatively weak” and “deeply imperfect” technology in the world to understand the associated safety risks.
For weeks, Altman has been on a global goodwill tour, privately meeting with policymakers – including officials at the Biden White House and members of Congress – to address their mounting concerns with the rapid rollout of ChatGPT and other technologies. Tuesday’s hearing marked the first opportunity for the broader public to hear his message to policymakers, at a moment when Washington is increasingly grappling with ways to regulate a technology that is already upending jobs, empowering scams and spreading falsehoods.
In sharp contrast to contentious hearings with other tech CEOs, including TikTok’s Shou Zi Chew and Meta’s Mark Zuckerberg, lawmakers from both parties gave Altman a relatively warm reception. They appeared to be in listening mode, expressing broad openness to regulatory proposals from Altman and the two other witnesses at the hearing, IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus.
Members of the Senate Judiciary subcommittee expressed deep fears about the rapid evolution of artificial intelligence, repeatedly suggesting that recent advances could be more transformative than the advent of the internet – or as risky as the atomic bomb.
“This is your chance, folks, to tell us how to get this right,” Sen. John Kennedy, R-La., told the witnesses. “Please use it.”
Lawmakers from both parties expressed an openness to the idea of creating a new government agency tasked with regulating artificial intelligence, though past attempts to create a specific agency with oversight of Silicon Valley have languished in Congress amid partisan divisions about how to regulate the industry. It’s unclear whether such a proposal would gain broad traction with Republicans, who are generally wary of expanding government power. Sen. Josh Hawley, R-Mo., the top Republican on the panel, warned such a body could be “captured by the interests that they’re supposed to regulate.”
Sen. Richard Blumenthal, D-Conn., who chairs the subcommittee that hosted the hearing, said Altman’s testimony was a “far cry” from past outings by other top Silicon Valley CEOs, whom lawmakers have criticized over the years for at times declining to endorse specific legislative proposals.
“Sam Altman is night and day compared to other CEOs, and not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action,” Blumenthal told reporters after the hearing.
Altman’s appearance comes as Washington policymakers are increasingly waking up to the threat of artificial intelligence, as the broad popularity of ChatGPT and other generative AI tools have dazzled the public but created new safety concerns. The Biden administration is increasingly calling AI a key priority, and lawmakers repeatedly say they want to avoid the same mistakes they’ve made with social media.
Yet despite broad bipartisan agreement that AI presents a threat, there appears to be little consensus to date about what legislation lawmakers should pass to regulate it. Blumenthal said Tuesday’s hearing had “successfully raised” hard questions about AI, but not answered them. Senate Majority Leader Chuck Schumer, D-N.Y., has been developing a new AI framework, which would “deliver transparent, responsible AI while not stifling critical and cutting edge innovation.”
Altman’s rosy reception signals the success of his recent charm offensive, which included a dinner with lawmakers Monday night about artificial intelligence regulation and a private huddle following Tuesday’s hearing with House Speaker Kevin McCarthy, R-Calif., House Minority Leader Hakeem Jeffries, D-N.Y., and members of the congressional Artificial Intelligence Caucus.
The sharpest critiques of Altman throughout the hearing came not from lawmakers, but from another witness sitting next to him. Marcus warned the lawmakers they were confronting a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.”
Marcus specifically critiqued OpenAI, citing its original mission statement to advance AI to “benefit humanity as a whole,” unconstrained by financial pressures. Now, Marcus said, the company is “beholden” to its investor Microsoft, and its rapid release of products is putting pressure on Google parent company Alphabet to swiftly roll out products too.
“Humanity has taken a back seat,” Marcus said.
In addition to creating a new regulatory agency, Altman proposed creating a new set of safety standards for AI models, testing whether they could go rogue and start acting on their own. He also suggested that independent experts could audit the models, testing their performance on various metrics.
However, Altman sidestepped other suggestions, such as requirements for transparency in the training data that AI models use. OpenAI has been secretive about the data it uses to train its models, while some rivals are building open-source models that allow researchers to scrutinize the training data.
Altman also dodged a call from Sen. Marsha Blackburn, R-Tenn., to commit not to train OpenAI’s models on artists’ copyrighted works, or to use their voices or likenesses without first receiving their consent. And when Sen. Cory Booker, D-N.J., asked if OpenAI would ever put ads in its chatbots, Altman replied, “I wouldn’t say never.”