Enterprises deploying Microsoft Copilot, AI agents, and generative AI tools across SharePoint, Google Drive, and file shares are sitting on a compliance crisis they haven’t fully accounted for. Decades of over-shared folders, undeleted employee records, and ungoverned PII are no longer dormant problems — one prompt from an aggressive AI system can surface board meeting notes, salary data, or protected customer information to anyone in the organization. With EU AI Act enforcement arriving in August 2026, carrying penalties of up to 7% of global revenue, the cost of inaction is no longer abstract.
The race to deploy AI at enterprise scale has outpaced the data governance programs required to support it. Most organizations know they have a problem. Few have a path to solving it fast enough to matter.
The Guest: Joe Pearce, Head of Product at RecordPoint
Key Takeaways
- AI agents amplify existing data governance failures 1,000x — what was always a risk is now a one-prompt exposure event
- RecordPoint’s managed in-place architecture classifies and governs data across SharePoint, SAP, Google Drive, and thousands of other sources without requiring migration
- The RecordPoint MCP Server acts as a universal governed connector — any AI system that speaks MCP can securely access clean, compliant, segmented data
- Safe segmentation creates topic-based data pipelines that prevent cross-contamination between teams, bots, and use cases
- Runtime governance and auditability enable organizations to trace exactly what data an AI system accessed — critical for regulated industries facing EU AI Act, CCPA, and financial compliance requirements
***
In a recent TFiR interview, Swapnil Bhartiya spoke with Joe Pearce, Head of Product at RecordPoint, about the governance gap created by enterprise AI deployments, how RecordPoint’s platform addresses data compliance at scale, and the role of the newly launched RecordPoint MCP Server in accelerating AI adoption without sacrificing control.
The AI Governance Crisis: Amplifying Old Problems at Scale
Organizations deploying Microsoft Copilot and AI agents across SharePoint, Google Drive, email, and file shares are confronting a problem that predates generative AI — but has been dramatically accelerated by it. Poorly governed data estates, over-sharing, and undeleted records that should have been purged years ago are now exposed through the low-friction surface area of AI prompts. The risk is no longer theoretical; it’s architectural.
Q: What challenges are generative AI and AI agents creating for enterprises when it comes to data governance and compliance?
Joe Pearce: “Some of the problems that the companies we run across are having are not new. They’re just amplified 1,000x. Over the last year, we’ve been seeing heavy deployments of Microsoft Copilot throughout organizations. This is bringing into question — you’re connected to your SharePoint, file systems, and emails, but generally, folks have not done a very good job of cleaning up data that they should have gotten rid of, sometimes decades ago. There’s also widespread over-sharing throughout the organization. While, in theory, you could have always found this data, it’s just so much easier now — because with one prompt, a very aggressive AI system can go out there, and even an intern can uncover the board meeting notes from the CEO if they’re not properly locked down. Additionally, organizations can start to expose PII and other protected information.”
Pearce frames the compliance dimension as layered — not just the new EU AI Act, but the long-standing regulatory frameworks that enterprises in financial services, healthcare, and other regulated sectors have always had to navigate, now compounded by AI acceleration.
Q: What are the specific compliance risks when it comes to governed data access for AI?
Joe Pearce: “We still have a long history of regulatory problems that industries have to deal with. Whether you’re dealing with financial data, on the New York Stock Exchange, or pretty much every company at this point — whether it’s EU or CCPA or another privacy framework — you need to protect PII and ensure it’s being used within the proper regulatory framework. The core problem that a lot of companies have always had is they don’t know what data they have. They’ll have employee records sitting on old file shares, or SharePoint folders filled with employee reviews or potentially customer data.”
How RecordPoint Works: Managed In-Place Governance
RecordPoint’s approach is built around a “managed in-place” model — organizations keep working in their existing environments (SharePoint, Salesforce, Zoom, SAP) while RecordPoint connects, classifies, governs, and safely segments data without requiring migration or disruption. The platform connects to thousands of data sources, uses custom-built AI models for classification, and applies governance controls including deletion, access removal, and topic-based segmentation before data ever reaches an AI system.
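The connect-classify-govern flow described above can be sketched in a few lines. This is a toy illustration of the concept only — the class names, labels, and rules below are assumptions for the example, not RecordPoint's actual API or classification models.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the managed in-place flow: connect -> classify
# -> govern. Records stay in their source systems; only governed,
# de-risked content is ever handed to an AI system.

@dataclass
class Record:
    source: str                      # e.g. "sharepoint", "sap", "gdrive"
    content: str
    labels: set = field(default_factory=set)

def classify(record: Record) -> Record:
    """Toy classifier: tag records that look sensitive."""
    lowered = record.content.lower()
    if "ssn:" in lowered:
        record.labels.add("pii")
    if "salary" in lowered:
        record.labels.add("hr-sensitive")
    return record

def govern(records: list[Record]) -> list[Record]:
    """Withhold sensitive records before any AI system sees them."""
    governed = []
    for r in map(classify, records):
        if "pii" in r.labels:
            continue  # compensating control: keep it out of the AI pipeline
        governed.append(r)
    return governed

docs = [
    Record("sharepoint", "Q3 roadmap draft"),
    Record("gdrive", "Employee file, SSN: 123-45-6789"),
]
safe = govern(docs)  # only the roadmap document survives
```

In a real deployment the classification step would be the custom-built AI models Pearce mentions, and "withhold" would map to deletion, access removal, or redaction policies rather than a simple skip.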
Q: Can you walk us through how RecordPoint’s platform works — how do you let AI systems access the data they need while keeping everything compliant and traceable?
Joe Pearce: “There are two categories of companies we account for. There are the companies that have good governance — and they are admittedly few and far between. But the best time to do governance was probably two years ago. Realistically, everyone’s starting to do it today. What we do is connect out using what’s called a managed in-place concept. You keep working in SharePoint, Zoom meetings, and your Salesforce environment, but we bring all of your data in, we figure out what is sensitive, and we figure out what the business use of that data is. We then put it under good governance — we delete it when we should, and we remove access when we shouldn’t have access to it. We then go into this world of safe segmentation.”
Pearce illustrated the safe segmentation concept with a banking use case: investment teams analyzing hotel acquisitions should not be able to see deal data from colleagues evaluating gas station portfolios. RecordPoint enforces those boundaries at both user and topic levels before data is connected out to the AI system of choice.
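The banking example above reduces to a topic-level access check applied before data reaches any assistant. The sketch below is illustrative only — the topic tags, team names, and function shape are assumptions, not RecordPoint's implementation.

```python
# Illustrative "safe segmentation": each team's AI assistant only sees
# records tagged with topics that team is cleared for, so deal data
# never cross-contaminates between teams, bots, or use cases.

DEALS = [
    {"id": 1, "topic": "hotel-acquisition", "text": "Hotel portfolio valuation"},
    {"id": 2, "topic": "gas-station-portfolio", "text": "Fuel retail deal memo"},
]

TEAM_TOPICS = {
    "hotel-team": {"hotel-acquisition"},
    "energy-team": {"gas-station-portfolio"},
}

def segment_for(team: str, records: list[dict]) -> list[dict]:
    """Return only records whose topic the team is cleared to see."""
    allowed = TEAM_TOPICS.get(team, set())
    return [r for r in records if r["topic"] in allowed]

hotel_view = segment_for("hotel-team", DEALS)
# The hotel team never sees the gas station memo, and vice versa;
# an unknown team sees nothing by default (deny-by-default).
```

The deny-by-default stance in `segment_for` mirrors the point Pearce makes: boundaries are enforced before data is connected out, not after an AI system has already retrieved it.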
Q: What happens if an organization hasn’t yet cleaned up its data estate?
Joe Pearce: “If along the way you didn’t clean up your data estate, we put compensating controls in place, so that, in theory, maybe you do have access to what your boss makes somewhere out there in a SharePoint file that was over-shared — but we’ll fix that before it’s connected to the AI system, at both the user and topical levels.”
The RecordPoint MCP Server: A Universal Governed Connector for Any AI System
RecordPoint launched its Model Context Protocol (MCP) Server in March 2026, positioning it as a production-ready universal connector that exposes governed enterprise data to any MCP-compatible AI system — Microsoft Copilot, Anthropic Claude, Google Gemini, and custom LLM applications — without custom integrations or elevated permissions. The MCP Server is the execution layer for RecordPoint’s vision: any data in, good governance in the middle, any AI system out.
Q: What additional capabilities does the RecordPoint MCP Server offer enterprises, and what role does Model Context Protocol play in the AI governance space?
Joe Pearce: “The MCP server is like a universal connector to any of the AI systems out there, so that it fulfills our vision of being able to put good data governance into any AI system. Currently, you can connect Copilot directly to SharePoint, but you’re dealing with all the over-sharing. You’re dealing with the fact that PII is exposed. You’re dealing with lots of risk, especially with compliance around the EU AI Act. What MCP lets us do is put a universal governed connector onto any data source. It’s a game changer — even if the source system hasn’t thought about ways of cleaning up your PII, your credit card data, or your internal employee data, we’re able to apply that layer on top of it universally, and then connect out to any AI system. Pretty much all of the vendors — Microsoft, Google, Anthropic, and everyone in between — use the MCP protocol.”
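The "universal governed connector" idea can be sketched as a tool handler that applies the same redaction layer no matter which MCP-speaking client calls it. The request/response dictionaries below are simplified stand-ins for MCP's JSON-RPC messages, and the redaction rule is a toy example — neither is the actual protocol type or RecordPoint logic.

```python
import re

# A governed MCP-style tool handler: whichever AI system (Copilot,
# Claude, Gemini, a custom app) calls the tool, data passes through the
# same governance filter before it leaves the server.

CREDIT_CARD = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

def redact(text: str) -> str:
    """Mask credit-card-like numbers before they reach any AI client."""
    return CREDIT_CARD.sub("[REDACTED-PAN]", text)

# Toy stand-in for an ungoverned source system the server fronts.
SOURCE_SYSTEM = {
    "invoices": ["Paid with card 4111 1111 1111 1111 on 2024-03-01"],
}

def handle_tool_call(request: dict) -> dict:
    """Governed 'search_documents' tool: fetch, filter, then respond."""
    raw_results = SOURCE_SYSTEM[request["params"]["query"]]
    return {"result": [redact(doc) for doc in raw_results]}

resp = handle_tool_call({"params": {"query": "invoices"}})
# resp contains "[REDACTED-PAN]" in place of the card number.
```

The key point the sketch captures: because the governance layer lives in the connector rather than in each AI product, it applies even when the source system itself has no notion of PII cleanup.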
From Governance Friction to AI Enabler: Accelerating Pilot-to-Production
One of the most persistent barriers to enterprise AI deployment is the compliance layer — the privacy, security, and legal review that sits between a successful pilot and production rollout. Pearce is direct about the organizational dynamics: boards and CEOs are demanding AI acceleration, while compliance and security teams are unable to validate the data controls required for enterprise-scale deployment. RecordPoint positions its AI-ready data pipeline and MCP Server as the infrastructure that closes that gap.
Q: How is RecordPoint helping organizations integrate governance controls without slowing down developers and data scientists?
Joe Pearce: “What we’re seeing in the market is that boards and CEOs are saying go, go, go with AI. The technical teams are able to get it working, but it’s that middle layer — the privacy, security, and compliance layer — that’s gunking up the process of taking things from pilot to enterprise scale. At RecordPoint, we are very foundationally aligned with the fact that good AI governance equals good data governance. We want to enable the data governance side of things with our new AI-ready data pipeline, connected through MCP, and really accelerate the speed of deploying AI into production.”
Auditability and Runtime Governance: Proving Compliance When It Counts
For regulated industries — financial services, healthcare, government — the ability to trace AI data access after the fact is as critical as preventing unauthorized access in the first place. RecordPoint provides both pre-assessment evidence for compliance sanctioning and real-time runtime governance that blocks unauthorized data flows and creates an immutable audit trail of every AI system data interaction.
Q: How does RecordPoint help organizations prove compliance when auditors or regulators come knocking?
Joe Pearce: “We have both sides of the coin. We have the pre-assessment data and evidence that you can gather, sanctioning that this data set complies with all of your regulations, but we also have the runtime governance that enforces those controls in real time — we’re able to block data that you don’t want going out to AI systems, and we can show those blocks. But there’s a further layer on top of this: knowing when an AI system accesses data. For example, we’ve run across scenarios where we’re talking about material nonpublic information — financial documents before they’re released to the market. Maybe someone in accounting should have access to that, but if we start getting insider trading questions, we can go back and systematically trace who had access to that data through the AI systems that they were using to potentially create leaks before the information was made public.”
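The traceability Pearce describes, being able to reconstruct after the fact who accessed material nonpublic information through which AI system, amounts to an append-only audit trail. The sketch below hash-chains entries so tampering is detectable; the field names and chaining scheme are illustrative assumptions, not RecordPoint's design.

```python
import hashlib
import json
import time

# Sketch of an append-only, hash-chained audit trail for AI data
# access. Each entry embeds the previous entry's hash, so deleting or
# editing a past event breaks the chain and is detectable.

class AuditLog:
    def __init__(self):
        self.entries = []

    def record_access(self, actor: str, ai_system: str, doc_id: str):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"actor": actor, "ai_system": ai_system,
                 "doc_id": doc_id, "ts": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def who_accessed(self, doc_id: str) -> list[str]:
        """Trace who touched a document, and through which AI system."""
        return [f'{e["actor"]} via {e["ai_system"]}'
                for e in self.entries if e["doc_id"] == doc_id]

log = AuditLog()
log.record_access("alice@corp", "copilot", "earnings-draft-q3")
log.record_access("bob@corp", "claude", "earnings-draft-q3")
trail = log.who_accessed("earnings-draft-q3")
```

In the insider-trading scenario Pearce gives, a query like `who_accessed("earnings-draft-q3")` is exactly the question a compliance team would need answered, scoped to accesses made through AI systems specifically.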
EU AI Act Enforcement: Organizations Are Scrambling
With the EU AI Act enforcement deadline arriving in August 2026, Pearce is frank about where most enterprise customers stand: they’re behind, they know it, and they’re under pressure to close the gap without pausing AI programs that their leadership teams are demanding. RecordPoint’s answer is speed — getting compensating governance controls in place in one to two days rather than waiting for a full 18-month data estate remediation.
Q: How prepared are customers for the EU AI Act enforcement deadline?
Joe Pearce: “To be blunt, they’re scrambling. This goes back to my point that it would have been great if we had been governing our data two or five years ago. We aren’t. This is why we’re very focused on — yes, we do have the tools to do the long-term good governance that you need for long-term compliance, but you don’t want to wait those 18 months to get your AI projects off the ground. There’s too much pressure from the market, too much pressure from your leadership team and your boards. This is why we’re focused on getting those layers in place where you can effectively get up and running in a day or two — something that would have otherwise taken months, or, for some companies, years — to be able to get that data safely into AI systems, and to do so in a regulated way, so that you don’t get those big EU fines that are knocking at a lot of our doors.”