Regulators and lawmakers are examining potential AI risks and weighing actions to keep the technology from becoming biased or causing economic harm.
- Following its meeting with AI company CEOs, the White House said it will consider legislation and regulation as the technology develops.
- Principles include the need to “evaluate, verify, and validate the safety, security, and efficacy of AI systems.”
- Lawmakers, as well as the Federal Trade Commission, are looking to “get ahead” of AI issues.
As part of a discussion with artificial intelligence CEOs Thursday, Vice President Kamala Harris said the White House is considering options as it reviews a technology that the president said has “enormous potential and enormous danger.”
“President Biden and I are committed to doing our part—including by advancing potential new regulations and supporting new legislation—so that everyone can safely benefit from technological innovations,” Harris said.
While there are few new rules targeting AI technology like ChatGPT, some regulators and lawmakers are starting to put together proposals that would lay out regulations as AI technology develops.
In its meeting with top AI CEOs, the White House laid out a few of the principles it plans to use to gauge what rules are needed. The executive branch plans to focus on mitigating risks ranging from guarding against bias to preventing economic harm.
The meeting included CEOs of four American AI companies at the forefront of development: Google parent Alphabet (GOOG, GOOGL), startup Anthropic, tech giant Microsoft (MSFT), and ChatGPT maker OpenAI.
The White House said it would be vital for these companies to be transparent with policymakers and the public about their AI systems. AI companies will also need to be able to “evaluate, verify, and validate the safety, security, and efficacy of AI systems,” it said.
Lawmakers, Regulators Moving Forward with Proposals
One regulator is already pressing forward.
Federal Trade Commission (FTC) Chair Lina Khan said in a New York Times op-ed that the consumer protection agency already has the authority to regulate AI companies. Under existing rules, she argued, the FTC can police AI for privacy, data security, and unfair practices, and it will consider new rules if needed.
The FTC has already warned companies against using AI to deceive consumers. It has also said it will look into an area of AI that has less to do with technological advancement: false advertising. FTC attorneys will be asking whether the “AI” products a company advertises are truly “artificial intelligence.”
In Congress, lawmakers have introduced a slate of bills, proposals and warnings that seek to address AI, some targeting specific vulnerabilities and concerns in a mostly piecemeal approach.
Senate Majority Leader Chuck Schumer has launched a “major effort to get ahead” of AI. Virginia Democratic Sen. Mark Warner, the chair of the Senate Intelligence Committee, recently wrote to several AI CEOs urging them to prioritize security and block malicious misuse of AI technology. A congressional caucus has formed to study AI, and one legislative proposal would bar AI from launching nuclear weapons.