On Monday, the White House introduced an executive order aimed at ensuring the country takes the lead in both harnessing the potential and managing the risks of artificial intelligence.
The order sets standards for AI safety, security, and privacy, introduces civil rights guidelines, and directs research into AI’s effects on the labor market, innovation, and competition.
“Given the pace of this technology, we can’t move in normal government or private-sector pace, we have to move fast, really fast – ideally faster than the technology itself,” White House Chief of Staff Jeff Zients told CNN. “You have to continue to be proactive, anticipate where things are headed, continue to act fast and pull every lever we can.”
The move comes after months of public concern from industry professionals and global leaders. In March, an open letter signed by thousands of AI experts and industry executives (including prominent tech magnates such as Elon Musk and Steve Wozniak) called for a six-month pause on AI development. In May, OpenAI CEO Sam Altman testified before Congress about the risks associated with AI, saying that government intervention is “crucial” to “mitigate the risks of increasingly powerful models.”
Ahead of the order, the White House said that President Biden engaged with various leaders and experts and is issuing the order with a focus on protecting consumers and addressing national security concerns.
“More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation,” the White House wrote in the release.
The executive order, which addresses various issues, mandates several key actions:
- New Standards for AI Safety and Security: Enforce strict guidelines for the development and testing of powerful AI systems to ensure safety, security, and trust between companies and users. Measures include sharing safety test results with the government and establishing standards and tests to guarantee AI systems’ safety, as well as watermarking to label AI-generated content. Also, should a developing AI model pose risks to national security, the economy, or public health, companies will be required to notify the federal government under the Defense Production Act.
- Protecting Against Risks: Mitigate the dangers of AI being used for harmful purposes, for example by developing strong standards for screening biological materials and by protecting against AI-enabled fraud.
- Privacy Protection: Prioritize the development of “privacy-preserving techniques” and technologies, especially regarding AI systems’ use of personal data. Additionally, evaluate and strengthen privacy guidance for federal agencies and commercial data usage.
- Advancing Equity and Civil Rights: Provide guidance to prevent AI algorithms from “exacerbating discrimination” in sectors such as housing and healthcare. These measures also address fairness in the criminal justice system’s use of AI.
- Consumer, Patient, and Worker Protection: Ensure the responsible use of AI in healthcare, consumer products, and education. Also, support workers by mitigating AI’s potential harms and maximizing its benefits in the workplace, including guidelines to ensure the fair assessment of job applications and compensation.
- Promoting Innovation and Competition: Catalyze AI research, support small developers, and encourage a fair and competitive AI ecosystem. Additionally, streamline visa criteria to attract and retain AI expertise in the U.S.
- Advancing American Leadership Abroad: Collaborate with other nations to establish international frameworks for the safe and trustworthy use of AI globally.
- Responsible Government Use of AI: Issue guidance for agencies’ use of AI, improve procurement, and ensure the responsible deployment of AI within the government.