Government

2024: The Year of AI Implementation and Legislation

January 24, 2024

As we step into 2024, it is clear that artificial intelligence (AI) will continue to dominate global government discussions. Last year, the Executive Order, the OMB implementation memo, and the NDAA introduced over 100 new requirements for the federal government. Among other things, these actions recognized AI-ready data as a national asset, initiated AI standards development work through the launch of the U.S. AI Safety Institute, and established provisions requiring external Test and Evaluation (T&E) before the government procures AI. Scale strongly supports these developments as critical steps toward ensuring that AI is safe, secure, and trustworthy for its intended use cases.

Government officials are now focused on implementation and legislation, aiming to build on the robust foundation established in 2023 and sustain the momentum. 

Agencies

More than 100 new requirements must be implemented over the next year or so, on top of existing requirements from the two previous Executive Orders and bipartisan legislation. Agencies therefore have no shortage of items to prioritize, positions to create, and work products to kick off. As they work through prioritization, the following three items should be top of mind because they underpin the adoption of responsible AI.

  • Test and Evaluation. The Executive Order, the accompanying OMB agency implementation memo, and the NDAA all include provisions requiring external T&E prior to government procurement of AI. Federal agencies must now determine how best to implement comprehensive AI T&E by August 1, 2024, to ensure that AI is safe to deploy on government networks. 

  • Standards. Initial work establishing AI best practices and frameworks must transition into Standards Development Organizations. The newly announced U.S. AI Safety Institute, housed within the Department of Commerce, will be critical to building the frameworks necessary to advance AI trustworthiness, underpinning safety techniques such as red teaming and T&E. 

  • Chief AI Officers. The Executive Order establishes the position of Chief AI Officer at every agency. This is an important step toward ensuring the efficient adoption of AI across the federal government and the execution of the more than 700 identified use cases. On day one, however, it will be critical that these newly appointed officers prioritize items like AI-ready data strategies to lay the foundation for successful AI adoption. 

Congress

One of the biggest questions is what Congress will do this year on AI governance. The Executive Order and accompanying OMB implementation memo established a strong foundation for the United States' approach to AI governance, but gaps remain around critical topics like commercial AI safety, and key pieces of the EO must be funded and codified to be fully implemented. Thanks to the leadership of key Members of Congress, AI has remained a bipartisan issue, with broad recognition that the United States must lead on it. To maintain American leadership in AI, it will be critical that Congress works on three key topics this year:

  • Shifting from Learning to Legislating. Since Senate Majority Leader Chuck Schumer named AI a key issue in April 2023, Congress has prioritized learning about the complexities of AI through hearings, roundtables, hands-on demonstrations, and Insight Forums with many of the leading technologists and visionaries in the field. As a longtime builder and expert in the AI space, Scale supported these educational objectives by building the test and evaluation platform for the generative AI red team effort at DEF CON 31, providing expert testimony to Congress, sharing our expertise at the Insight Forums, and giving Members of Congress and their staff hands-on red teaming experience with generative AI models. It is now critical that Congress leverage these learnings and take action to craft a legislative package that moves U.S. leadership in AI forward.

  • Maintaining a Pro-Innovation Approach to AI Safety. The Administration's actions established sector-specific, risk-based external test and evaluation requirements for government-procured AI systems. However, given the limits of administrative action, they could not cover commercial and enterprise AI use cases. Congress must fill this gap and establish an effective approach to safety for all AI use cases.

  • Funding Key Elements of Government AI Use. Federal agencies recently submitted more than 700 potential use cases for AI. Despite this clear signal of interest, however, agencies are not funded to take advantage of AI and lack the AI-ready data foundation to do so. It is critical that Congress prioritize AI funding in the FY25 appropriations process to help agencies build the right data infrastructure and begin funding some of the first use cases for employing AI. 

Moving Forward

Countries globally are accelerating their development of AI, and the United States must harness the full strength of its innovation ecosystem to maintain American AI leadership. Scale looks forward to continuing our work across the industry and federal spaces to help our nation adopt safe, secure, and trustworthy AI.
