The last decade has seen a significant surge in the development and adoption of Artificial Intelligence (AI) tools across industries. In fact, McKinsey research shows that 78% of global organizations now use AI in at least one business function. From increased productivity and accelerated product development to improved decision-making, AI has proven to offer many benefits across business sectors, but it also brings uncertainty and regulatory complexity.
In this post, we’ll build on the discussion from our webinar, “AI & You: Real Talk on Regulations and Future-Proofing Your Information Governance,” highlighting the critical role IG professionals play in responsible AI adoption. Use the following insights as a guide to help you navigate the maze of regulatory complexity while preparing actionable governance strategies for your organization.
“AI is the bullet train being chased by a legislative bicycle.”
With no single federal AI law in place and AI evolving at lightning speed, the U.S. is left with a patchwork of state-by-state measures, each fragmented and narrow in scope. Some states focus on innovation requirements for businesses or on protecting consumers from high-risk uses, while others are passing measures to advance the technology within their borders. Adding to the complexity, industries like insurance, banking, and healthcare issue their own AI guidance, often with little uniformity or alignment.
Colorado, by contrast, is taking the lead in AI legislation with the first comprehensive AI law in the U.S. to regulate high-risk AI systems. The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, will require developers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination in high-risk areas, including healthcare, housing, insurance, finance, legal services, and education.
While there is no federal law in the U.S. governing the development or use of AI, the federal government has been actively guiding its adoption through initiatives such as integrating AI into school curricula, establishing specialized groups like the U.S. Navy’s AI Task Force, protecting children online, and safeguarding individuals’ likeness and identity. Last year alone, 165 AI-related bills were introduced in Congress, and while there is still no nationwide legislation or strategy, there is at least momentum in that direction.
In the webinar, Dr. Laura Caroli, senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS), outlined several significant differences between how AI is regulated in the U.S. and under the European Union’s (EU) AI Act. While some U.S. bills borrow language or structure from the EU Act, their implementation and practical effect are very different. The following key differences were highlighted:
While U.S. organizations face fewer formal obligations, the patchwork of rules and their ambiguity create compliance uncertainty. The EU’s AI Act takes a more unified, standards-based approach that prioritizes upfront design requirements and holds developers accountable for their tools, offering more consistent protection.

A proactive approach is crucial when it comes to AI regulatory compliance. With AI evolving at such a rapid pace, staying ahead of compliance requirements is key to reducing risk and ensuring AI is used ethically across your organization. Making AI a broader conversation that involves stakeholders from departments across the organization brings in valuable perspectives and builds a shared commitment to responsible use.
Use the following six strategies to help you navigate AI regulatory compliance and build a solid governance framework within your organization.
1. Inventory and Document AI Use (see the sketch below)
2. Create Clear Internal Guidelines
3. Train Teams for AI Awareness
During the webinar, “AI & You: Real Talk on Regulations and Future-Proofing Your Information Governance,” Dr. Laura Caroli touched on strategies 1, 2, and 3.
4. Align with Recognized Frameworks
5. Involve Cross-Functional Stakeholders Early
6. Prioritize Ethics and Transparent Decision-Making
Megan O’Hern-Crook, C.A., Director of Archives and Information Management Services at History Associates, explains how involving cross-functional stakeholders promotes ethical decision-making surrounding AI deployment.
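To make strategy 1 concrete, here is a minimal sketch of what a structured AI-use inventory could look like in code. It assumes each tool is captured as a record with an accountable owner, a stated purpose, the data categories it touches, and a risk tier; the field names, risk levels, and review rule are illustrative assumptions, not requirements drawn from the webinar or any specific law.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk tiers; a real program would define its own criteria."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g., consequential decisions in hiring, housing, lending


@dataclass
class AIUseRecord:
    """One entry in an organization-wide AI-use inventory."""
    system_name: str    # e.g., "Resume screening assistant"
    business_owner: str # accountable person or role
    department: str
    purpose: str        # the decision or task the tool supports
    data_categories: list[str] = field(default_factory=list)  # e.g., ["PII"]
    risk_level: RiskLevel = RiskLevel.LOW
    vendor: str | None = None  # third-party provider, if any


def flag_for_review(inventory: list[AIUseRecord]) -> list[AIUseRecord]:
    """Return entries that warrant closer governance review, such as
    high-risk systems or tools that touch personal data."""
    return [
        rec for rec in inventory
        if rec.risk_level is RiskLevel.HIGH or "PII" in rec.data_categories
    ]


# Example: register one tool and check whether it needs review.
inventory = [
    AIUseRecord(
        system_name="Resume screening assistant",
        business_owner="HR Director",
        department="Human Resources",
        purpose="Ranks applicants for open roles",
        data_categories=["PII"],
        risk_level=RiskLevel.HIGH,
    ),
]
for rec in flag_for_review(inventory):
    print(f"Review needed: {rec.system_name} ({rec.department})")
```

Even a lightweight registry like this makes clear which tools exist, who owns them, and which warrant closer review, which is exactly the information IG teams need before mapping tools to regulations such as the CAIA.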
With no federal AI legislation currently in place, regulatory clarity and consistency in the U.S. may still be years away—making AI governance an urgent priority within organizations. By taking a proactive approach, IG teams can reduce risk, support innovation, and strengthen public trust. The first step is understanding what AI tools are being used, how they are used, and who is responsible for them. From there, organizations can determine which regulations apply to their operations.
For additional expert insights and real-world examples, tune in to the full webinar on demand: “AI & You: Real Talk on Regulations and Future-Proofing Your Information Governance.”