US begins study of possible rules to regulate AI like ChatGPT
WASHINGTON, April 11 (Reuters) - The Biden administration said Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about their impact on national security and education.
ChatGPT, an AI program that recently grabbed the public's attention for its ability to write answers quickly to a wide range of queries, has drawn particular scrutiny from U.S. lawmakers after becoming the fastest-growing consumer application in history, with more than 100 million monthly active users.
The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."
The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."
"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them," said NTIA Administrator Alan Davidson.
President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said.
ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp (MSFT.O).
NTIA plans to draft a report as it looks at "efforts to ensure AI systems work as claimed – and without causing harm" and said the effort will inform the Biden Administration's ongoing work to "ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities."
A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, saying it was "biased, deceptive, and a risk to privacy and public safety."