Current Landscape of AI

With Congress yet to take significant action, states are taking the lead in shaping artificial intelligence (AI) regulations and policies. 

Arkansas

Arkansas is one of the states working to better understand AI. In June, Governor Sarah Huckabee Sanders launched a working group to study and offer recommendations for the safe use of AI within state government. The AI & Analytics Center of Excellence (AI CoE), a subcommittee of the Data and Transparency Panel (DTP), will study and recommend policies, guidelines, and best practices for the ethical, effective, and safe use of AI across Arkansas state government. The AI CoE is chaired by Robert McGough, Arkansas’ Chief Data Officer.

The working group will also review and evaluate a set of pilot projects designed to build understanding of AI and its potential risks. Drawing on that work, it will craft a set of best practices for safely implementing the technology. Governor Sanders’ press release outlined two projects as pilot use cases: the Division of Workforce Services’ unemployment insurance fraud program and the Arkansas Department of Corrections’ recidivism reduction program.

The AI CoE will meet monthly, with an initial report due by December 15, 2024. The 95th General Assembly is set to convene in January 2025, and some of these policy recommendations will likely be considered then.


State-Level Action

Arkansas is not alone. According to the National Conference of State Legislatures, 40 states considered AI resolutions and legislation during the 2024 legislative session. Emerging legislation regulating the design, development, and use of artificial intelligence ranges from addressing deceptive uses of AI in elections to combating discrimination caused by reliance on algorithms.

California, Minnesota, Texas, and Washington are among the states that have regulated AI-generated “deepfakes.” California was the first state to act, passing legislation in 2020 to ban the use of deepfakes to influence political campaigns. Texas and Minnesota have enacted similar bans, while Washington has adopted disclosure requirements for electioneering communications containing “synthetic media.”

In 2023, Connecticut Governor Ned Lamont signed a bill establishing a task force to study AI and develop an AI bill of rights. The legislation requires an inventory and ongoing assessment of all AI systems used by state agencies to ensure the systems do not result in unlawful discrimination or disparate impact.

Colorado passed legislation in 2024 that establishes consumer protections related to AI. The law requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination. Examples of high-risk uses outlined in the bill include education enrollment, employment, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services.

California, home to 35 of the world’s top 50 AI companies, has been a pioneer in both regulating and adopting AI. Companies backed by Microsoft, Google, and Amazon are conducting trials with the state to help improve public services, such as providing tax guidance, easing traffic congestion, and improving road safety.


Federal-Level Efforts

The federal government is moving more slowly than the states. In the fall of 2023, President Biden issued an executive order on Safe, Secure, and Trustworthy Artificial Intelligence to advance the federal government’s approach toward the safe and responsible development of AI. Key directives included ensuring AI safety and security, protecting Americans’ privacy, advancing equity and civil rights, standing up for consumers and workers, promoting innovation and competition, and advancing American leadership abroad. An April 2024 update revealed that federal agencies had completed all 180-day actions on the schedule outlined by the executive order and made progress on many longer-term tasks.

Neither house of Congress has taken significant action to regulate or implement AI. In May 2024, Senate Majority Leader Chuck Schumer and his bipartisan Senate AI Working Group released a comprehensive road map for AI, advocating legislative action across various congressional committees. The “Driving U.S. Innovation in Artificial Intelligence” plan seeks to support U.S. innovation in AI, prepare the American workforce to work alongside the technology, and mitigate AI’s potential negative impacts. The plan also outlines goals specific to elections and democracy, privacy and liability, intellectual property and copyright, and national security, among others. In the House, Speaker Mike Johnson and Democratic Leader Hakeem Jeffries appointed members to a bipartisan Task Force on Artificial Intelligence, which is working to produce a comprehensive report that includes guiding principles, recommendations, and policy proposals.


Conclusion

As AI policy continues to evolve and Congress moves forward with next steps, regulatory efforts at the state level are likely to inform and shape a cohesive national strategy for the governance and ethical management of AI technology. One obvious question is whether government at any level in our country will be able to keep pace with the rapid changes coming from other countries and from the corporate sector.

Kelly Sullivan