This quarter’s legal and information governance update examines the growing impact of Shadow AI, focusing on how everyday use of unapproved AI tools introduces new privacy, records, and enterprise risk considerations. It explores the tension between the convenience and speed AI offers and the need to maintain defensible governance, minimization, and compliance practices as AI becomes embedded in routine business operations.
Throughout the update, we’ve included links to relevant legislation and documents where applicable.
When I began working in records and data management, AI was just starting to emerge, and I was skeptical. It felt overhyped and far removed from everyday work, and privacy concerns were not yet a major focus.
That changed quickly as technology became embedded in business operations, and the COVID-19 pandemic accelerated that shift. Remote work, telehealth, and digital services pushed organizations to adopt AI rapidly to keep operations running. Capability and efficiency took priority, and governance and privacy often followed later.
As AI tools expanded to support nearly every task, I began using them regularly. They’re effective at organizing regulations, parsing dense legislative language, and spotting patterns across jurisdictions. At the same time, it became clear how much data these tools consumed and how easily sensitive information could slip in. Risks to enterprise data soon appeared through unapproved chatbots, browser plug-ins, and “free trial” tools operating outside organizational controls. What began as convenience quickly became a significant governance and privacy concern, now widely known as Shadow AI.
Now, issues like data minimization, high‑risk automation, trade secret protection, and selective disclosure matter more than ever. These are areas where records, office, and data managers already play a critical role, and where their expertise is essential to help organizations use AI responsibly, safely, and in a way that remains defensible over time.
Responsible AI use starts with deciding what information should—and should not—be shared.
Applying privacy principles to AI practices helps organizations stay compliant with rules around personal information, as well as internal policies that protect trade secrets and privileged legal information.
At the core of many privacy laws is a simple principle: only use the data you truly need.
GDPR/UK GDPR Article 5(1)(c) provides that personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’).”
In practice, this means sharing only what’s necessary. For example, if two paragraphs are enough to generate a summary, there’s no reason to upload an entire contract. Similarly, if a forecast only requires a few data points, sharing a full customer ledger is excessive.
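To make that habit concrete, here is a minimal Python sketch of excerpt‑ and field‑level minimization. The function names and record layout are illustrative assumptions, not any particular tool’s API; the point is that what reaches an AI tool is assembled from the pieces a task needs, never the whole document or ledger.

```python
# Illustrative data-minimization helpers; names and layout are assumptions.

def build_summary_prompt(contract_text: str, needed_paragraphs: list[int]) -> str:
    """Build a prompt from only the paragraphs the summary actually needs."""
    paragraphs = contract_text.split("\n\n")
    excerpts = [paragraphs[i] for i in needed_paragraphs if i < len(paragraphs)]
    return "Summarize the following clauses:\n\n" + "\n\n".join(excerpts)

def build_forecast_rows(ledger: list[dict], needed_fields: set[str]) -> list[dict]:
    """Project each ledger record down to the fields the forecast uses."""
    return [{k: v for k, v in row.items() if k in needed_fields} for row in ledger]

# A forecast that needs only month and revenue never sees customer
# names, addresses, or payment details from the full ledger.
rows = build_forecast_rows(
    ledger=[{"customer": "Acme", "month": "2024-01", "revenue": 12000.0}],
    needed_fields={"month", "revenue"},
)
```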
Similar data‑minimization requirements are becoming standard across the United States. States like Virginia, Colorado, and Connecticut have adopted language that closely mirrors GDPR, while others, such as California, Oregon, and Delaware, apply functionally similar rules that focus on proportionality and purpose. These laws make it clear that personal data shouldn’t be collected or reused beyond what’s been disclosed and justified.
Virginia Consumer Data Protection Act (VCDPA), Va. Code § 59.1‑578(A)(1): A controller shall “Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer.”
California Consumer Privacy Act: CAL. CIV. CODE § 1798.100(c): “A business’ collection, use, retention, and sharing of a consumer’s personal information shall be reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed, or for another disclosed purpose that is compatible with the context in which the personal information was collected, and not further processed in a manner that is incompatible with those purposes.”
Unchecked AI use creates minimization and governance challenges across departments. Each has distinct obligations, sensitivities, and exposure points, from privilege and employment law to intellectual property, customer trust, and fiduciary oversight. Understanding how Shadow AI manifests within each role is essential to building controls that are practical, defensible, and aligned with how the business operates. The following sections explore those function‑specific risks in detail, and why a coordinated, cross‑disciplinary approach to Shadow AI governance is no longer optional.
1) Legal & Compliance: Privileged Information and Regulated Data
Legal and compliance teams are drawn to AI for its speed at clause extraction, case summarization, and issue spotting, but these are precisely the workflows that mix confidential strategy with regulated personal data.
What to do: Make “excerpts‑only” the default; implement a redaction tool or assistant; and establish policy that treats AI prompts and outputs as records when they capture legal analysis or evidence.
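As one sketch of what an “excerpts‑only plus redaction” default might look like, the Python pass below scrubs obvious identifiers before any text reaches an AI tool. The patterns are illustrative, not exhaustive; a production deployment would rely on a vetted DLP or redaction service.

```python
import re

# Illustrative redaction pass applied before text reaches an AI tool.
# These patterns are examples only; production systems need far broader
# coverage than three regexes.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches with labeled placeholders so excerpts stay useful."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

excerpt = "Contact John at john.doe@example.com or 555-867-5309 re: SSN 123-45-6789."
print(redact(excerpt))
# Contact John at [EMAIL REDACTED] or [PHONE REDACTED] re: SSN [SSN REDACTED].
```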
2) HR & Operations: High‑Risk AI, Bias Audits, and Automated Decisions
Human resource teams are increasingly expected to manage AI across multiple layers of regulation. Three prominent examples illustrate this growing scope: the EU AI Act at the supranational level, the UK GDPR at the national level, and New York City Local Law 144 at the sub‑national level. Together, they represent only a fraction of the regulatory demands emerging at every level of authority, in a landscape that continues to expand as governments respond to the real‑world impacts of AI in the workplace.
What to do: Organizations should limit AI‑assisted hiring and performance decisions to approved tools, maintain documented risk assessments and bias audits, ensure human involvement in decision‑making, and store AI outputs and decision rationales in official HR systems of record. This matters because litigation can make this content discoverable, and it may be subject to employee data‑rights requests.
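One way to make “store AI outputs and decision rationales in the system of record” concrete is a structured record like the sketch below. The field names are assumptions to be mapped onto your actual HR system, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record for an AI-assisted HR decision. Field names are
# assumptions; map them to your actual HR system of record.
@dataclass
class AIAssistedDecisionRecord:
    candidate_or_employee_id: str
    tool_name: str        # must appear on the approved-tools list
    tool_version: str
    model_output: str     # the AI's raw recommendation
    human_reviewer: str   # who exercised human oversight
    final_decision: str   # may differ from the model output
    rationale: str        # the human's documented reasoning
    bias_audit_ref: str   # link to the applicable audit (e.g., under LL 144)
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```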
3) R&D & Engineering: Trade Secrets and Intellectual Property
R&D and engineering are where trade secrets live: source code repositories, proprietary algorithms, design schematics, early‑stage prototypes, and product roadmaps that never appear in public filings. Access to these assets is limited and tightly controlled, but Shadow AI quietly undermines that control.
What to do: Use an enterprise AI platform that does not train on your inputs, maintain clear logs of who submitted what information and when, and preserve AI‑assisted design decisions in a formal design history file (or equivalent system).
4) Sales, Marketing, and Customer Success: Privacy Rights & ADMT
Sales, marketing, and customer service teams rely on customer data to operate. They maintain records, update customer files, and reuse information to support ongoing relationships and service. Through a data minimization lens, these everyday practices take on greater importance. Habits like retaining legacy fields, reusing records for new purposes, or combining datasets across systems can introduce unnecessary risk.
What to do: Use only the customer data needed to manage active accounts and renewals, avoid copying records into unsanctioned tools or spreadsheets, and ensure any automated scoring or prioritization can be explained and reviewed if questioned. Limit campaign data to documented purposes, avoid reusing historical or third‑party data without clear justification, and ensure targeting or suppression rules are transparent, reviewable, and based on approved datasets. Keep customer records accurate and current, retire outdated usage or sentiment data, and ensure automated alerts or churn signals support human judgment, with decisions documented and defensible.
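As one small illustration of retiring outdated usage or sentiment data, the sketch below filters records against a documented retention window. The 24‑month period and field names are assumptions; substitute the values from your own retention schedule.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention filter. The 24-month window and the
# "captured_at" field name are examples; use the period and schema
# defined in your organization's retention schedule.
RETENTION_WINDOW = timedelta(days=730)  # ~24 months

def split_by_retention(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate records still in-policy from those due for retirement."""
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    keep = [r for r in records if r["captured_at"] >= cutoff]
    retire = [r for r in records if r["captured_at"] < cutoff]
    return keep, retire
```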
Employees are already using AI within your organization. Whether it operates unchecked or within defined guardrails is for you to decide.
For additional insights and actionable advice on this topic, watch the webinar recording of Responsible AI in Regulated Environments: How to Innovate Without Losing Control. In it, Megan O’Hern-Crook of History Associates, Pulkit Mogra of the University of Ottawa Faculty of Law, and Ignatius Grande of BRG discuss what controls must be in place before AI is deployed, how organizations are accidentally leaking data today, and how to align Legal, IT, InfoSec, and Records teams around a shared AI governance model.