Navigating the AI Regulation Maze: A Guide for Information Governance Professionals

Melanie Martinez, Senior Content Marketing Specialist

The last decade has seen a significant surge in the development and adoption of Artificial Intelligence (AI) tools across industries. In fact, McKinsey research shows that 78% of global organizations now use AI in at least one business function. From increased productivity and accelerated product development to improved decision-making, AI has proven its value across business sectors—but it also brings uncertainty and regulatory complexity.

In this post, we’ll build on the discussion from our webinar, “AI & You: Real Talk on Regulations and Future-Proofing Your Information Governance,” highlighting the critical role IG professionals play in responsible AI adoption. Use the following insights as a guide to help you navigate the maze of regulatory complexity while preparing actionable governance strategies for your organization.

The Current State of U.S. AI Regulation

AI is the bullet train being chased by a legislative bicycle.

With no single federal AI law in place and AI continuing to evolve at lightning speed, the U.S. is left with a patchwork of state-by-state measures, each taking a fragmented, small-scale approach. Some states focus on business innovation requirements or on protecting consumers from high-risk uses, while others are passing measures to advance technology within their borders. Adding to the complexity, industries such as insurance, banking, and healthcare issue their own AI guidance, often with little uniformity or alignment.

Colorado, however, is taking the lead in AI legislation with the first comprehensive AI law in the U.S. to regulate high-risk AI systems. The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, will require developers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination in high-risk areas, including healthcare, housing, insurance, finance, legal services, and education.

While there is no federal law in the U.S. governing the development or use of AI, the federal government has been actively engaged in guiding adoption and education through initiatives such as integrating AI into school curricula, establishing specialized groups like the U.S. Navy’s AI Task Force, protecting children online, and protecting individuals’ likeness and identity. Last year alone, 165 AI-related bills were proposed in Congress—and while there is still no nationwide legislation or strategy, there is at least momentum in that direction.

How U.S. Regulations Compare to Global AI Laws

In the webinar, Dr. Laura Caroli, senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS), outlined several significant differences between how AI is regulated in the U.S. compared to the European Union’s (EU) AI Act. While some U.S. bills borrow language or structure from the EU Act, their implementation and practical effect are very different. The following are some key differences that were highlighted:

  • Focus on Developers vs. Deployers: U.S. laws often place more of the burden on deployers (the organizations using AI tools) than on developers. In contrast, the EU AI Act places primary responsibility on developers, particularly during a system’s design and development phases.
  • Timing of Compliance: The EU AI Act requires AI systems to meet standards before they are released to the market. U.S. frameworks often assess compliance after deployment, focusing on behavior and outcomes rather than upfront system design.
  • Scope of What Is Considered “AI”: U.S. laws sometimes use broader definitions of AI, potentially applying to basic systems like linear regression models. The EU definition, based on OECD guidelines, emphasizes autonomy and adaptiveness, which can limit what qualifies as a regulated AI system.
  • Use of Standards and Guidelines: The EU AI Act integrates guidance from international standards bodies like the OECD. U.S. legislation tends to offer less technical specificity and relies more on industry self-regulation.
  • Risk-Based Classifications: Both systems use a risk-based approach, but the EU’s list of “high-risk” applications is broader and more prescriptive. Some U.S. proposals mimic this structure but limit the number of regulated scenarios.
  • Documentation Requirements: EU regulations call for extensive technical documentation, impact assessments, and transparency. U.S. laws typically have lighter requirements unless personal harm or discrimination is involved.

While U.S. organizations face fewer formal obligations, the patchwork of rules and its ambiguity create compliance uncertainty. The EU’s AI Act provides a more unified, standards-based approach that prioritizes upfront design requirements and holds developers accountable for their tools—offering more consistent protection.

Practical Strategies for Navigating AI Regulatory Compliance

A proactive approach is crucial when it comes to AI regulatory compliance. With AI evolving at such a rapid pace, staying ahead of compliance requirements is key to reducing risk and ensuring AI is used ethically across your organization. Making AI a broader conversation that involves stakeholders from departments across the organization brings in valuable perspectives and builds a shared commitment to responsible use.

Use the following six strategies to help you navigate AI regulatory compliance and build a solid governance framework within your organization.

1. Inventory and Document AI Use

  • Identify where and how AI is being used across departments, including informal or decentralized use (e.g., ChatGPT, internal automation, AI image processing), and maintain detailed documentation of each use, including purpose, data sources, decision logic, and outputs (a minimal example record is sketched after this list).
  • Conduct thorough risk, bias, and impact assessments—especially for tools that influence decisions about people.
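To make the inventory concrete, here is a minimal sketch in Python of what a structured AI-use record might look like. The schema, field names, and file name (ai_inventory.jsonl) are illustrative assumptions rather than a prescribed standard; adapt them to your organization’s documentation practices.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseRecord:
    """One entry in an organization-wide AI-use inventory (illustrative schema)."""
    system_name: str           # e.g., "ChatGPT" or an internal automation
    owning_department: str     # team accountable for this use
    purpose: str               # business function the tool supports
    data_sources: list[str]    # inputs the system consumes
    decision_logic: str        # brief description of how outputs are produced
    outputs: str               # what the system produces and who consumes it
    affects_people: bool       # flags tools that influence decisions about people
    risk_assessment: str = "pending"  # e.g., "pending", "low", "high"

# Example: documenting an informal, decentralized use
record = AIUseRecord(
    system_name="ChatGPT",
    owning_department="Marketing",
    purpose="Drafting first-pass campaign copy",
    data_sources=["public product descriptions"],
    decision_logic="General-purpose LLM; prompts written by staff",
    outputs="Draft text reviewed by a human before publication",
    affects_people=False,
)

# Persist each record so the inventory stays auditable over time
with open("ai_inventory.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Keeping the inventory in a simple, append-only format like this makes it easy to review which uses need risk, bias, or impact assessments as new regulations come into scope.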

2. Create Clear Internal Guidelines

  • Establish organization-wide guidelines around ethical and compliant AI use, including policies for handling sensitive data, decision-making tools, and mitigating misinformation risks.

3. Train Teams for AI Awareness

  • Provide baseline AI education across all functions, and help non-technical teams understand their legal, regulatory, and ethical obligations.

During the webinar, Dr. Laura Caroli touched on strategies 1, 2, and 3.

4. Align with Recognized Frameworks

  • Use established standards such as ISO/IEC 42001 and the NIST AI Risk Management Framework (AI RMF) as guidance for managing AI-related risks, and look to the EU AI Act for best practices—especially if your organization operates globally.

5. Involve Cross-Functional Stakeholders Early

  • Engage legal, compliance, HR, marketing, and IT teams before deploying AI tools, as involving stakeholders from across the organization improves decision-making, ensures broad accountability, and helps uncover unintended consequences or blind spots.

6. Prioritize Ethics and Transparent Decision-Making

  • Maintain detailed records of how key AI-related decisions are made, weighing potential harm, fairness, bias, and public trust implications (one lightweight way to log such decisions is sketched after this list).
  • Build a culture of ethical awareness around AI’s role in your organization to ensure responsible use.
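As one hypothetical way to keep such records, the sketch below appends each governance decision to a JSON Lines audit log. The function name, fields, and file path are assumptions for illustration, not a mandated format.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, decision: str, decided_by: str,
                    considerations: list[str]) -> None:
    """Append one AI governance decision to an audit log (illustrative format)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "decided_by": decided_by,
        # Record the harm, fairness, bias, and public-trust factors weighed
        "considerations": considerations,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_decision(
    "ai_decision_log.jsonl",
    decision="Approved resume-screening pilot with human review of all rejections",
    decided_by="AI governance committee",
    considerations=["disparate-impact testing", "candidate appeal process"],
)
```

An append-only log like this gives auditors and stakeholders a transparent trail of who decided what, when, and which ethical considerations were weighed.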

Megan O’Hern-Crook, C.A., Director of Archives and Information Management Services at History Associates, explains how involving cross-functional stakeholders promotes ethical decision-making surrounding AI deployment.

Final Thoughts: Readiness is Your Competitive Advantage

With no federal AI legislation currently in place, regulatory clarity and consistency in the U.S. may still be years away—making AI governance an urgent priority within organizations. By taking a proactive approach, IG teams can reduce risk, support innovation, and strengthen public trust. The first step is understanding what AI tools are being used, how they are used, and who is responsible for them. From there, organizations can determine which regulations apply to their operations.

For additional expert insights and real-world examples, tune in to the full webinar on demand: “AI & You: Real Talk on Regulations and Future-Proofing Your Information Governance.”