Future generations of artificial intelligence tools like ChatGPT might require government regulation for them to be used safely, the AI chatbot maker’s chief executive told Congress Tuesday.
- More powerful artificial intelligence models may require government regulations, OpenAI CEO Sam Altman said in his first congressional appearance.
- Licensing may be a tool to ensure upcoming AI tools adhere to safety rules, he said.
- Lawmakers said more hearings and legislation focusing on AI should be expected.
- IBM said regulations should focus on use cases, not the technology itself.
OpenAI CEO Sam Altman told the senators that government regulators may need to intervene as increasingly powerful AI models are developed. He suggested potentially licensing high-powered models to ensure that they adhere to safety requirements.
“U.S. leadership is critical to mitigate the risks and grow the U.S. and world economy,” Altman said.
This was the first time the executive had testified before Congress; last week, he met with members of President Joe Biden's administration to discuss similar issues.
Both Wall Street and Capitol Hill are turning their attention toward artificial intelligence, especially after OpenAI’s ChatGPT captured the public’s attention with its capabilities. Investors took interest in Microsoft (MSFT) after it announced it had integrated ChatGPT into its Bing search engine, and several companies have included their advances in AI capabilities as part of recent earnings reports.
White House, Congress Begin Looking at AI
The hearing was the first on AI held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, a starting point for lawmakers looking to address AI business practices.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said Missouri Republican Senator Josh Hawley. “This hearing marks a critical first step towards understanding what Congress should do.”
During the hearing, Christina Montgomery, IBM’s (IBM) chief privacy officer, said regulators should focus on AI use cases, not the technology itself. Gary Marcus, a former New York University professor and one of the leading advocates of a six-month pause on AI development, said models should be subject to scientific review before release.
Last week, the White House met with the heads of four top U.S. AI companies to better understand where regulation may be required in the field. Altman was joined by the CEOs of Microsoft, Alphabet (GOOG) (GOOGL), and Anthropic in that meeting.
Vice President Kamala Harris said the White House won't hesitate to legislate or regulate AI if it believes doing so is necessary to protect U.S. security and the economy, and that it will soon lay out rules for how government employees can use AI in their work.