The Core Concept: Enterprises asking "what should never leave our environment in AI workloads" are asking the right question, but most draw the boundary too narrowly. They miss that AI-assisted code and business processes represent business value carrying the same sensitivity as any regulated data asset.
The Guest: Rob Hirschfeld, CEO at RackN
The Bottom Line:
• The AI flywheel effect is real and powerful, but it compounds data value and data sovereignty risk in parallel: enterprises that delay drawing a clear data perimeter are making that problem progressively harder to solve with every passing sprint
Speaking with TFiR, Rob Hirschfeld of RackN answered one of the most direct questions in enterprise AI strategy: what data should never leave a customer’s environment, and where exactly do you draw that line?
WHAT IS THE AI DATA PERIMETER — AND WHY IS IT HARDER TO DEFINE THAN IT LOOKS?
Hirschfeld’s answer began with a reframe: the sensitivity question isn’t just about regulated data categories like PII or financial records. The more practically important category is the data that enterprises are already generating through AI-assisted workflows — code, custom screens, business processes, operational logic.
“The code that you’re writing is actually your business value, even more so than it ever was. Any information that you’re sending through AI is likely sensitive to some extent — especially if given to a company using AI to do analysis.”
This matters because enterprises have developed reasonably robust frameworks for protecting traditionally regulated data. They have not developed equivalent frameworks for protecting the AI-assisted work product that is now flowing through third-party model providers at engineering velocity. That gap is where the real exposure lives.
THE TRUST QUESTION COMES BEFORE THE CAPABILITY QUESTION
Hirschfeld was direct about sequencing: before an enterprise can make a meaningful decision about what data to send through a third-party AI provider, it has to answer the trust question. Can you trust this provider with data that has genuine business value?
He cited the Anthropic vs. DoD dispute as a meaningful market signal — not because of its specific outcome, but because it forced AI providers to make their trustworthiness commitments explicit and public. That kind of market signal, he argued, is exactly what enterprise AI procurement teams should be watching for when evaluating partners.
“Whichever company you have a relationship with, you’re going to be sending absolutely critical data that has high value. You need to figure out which partners you can trust.”
For enterprises that cannot clear the trust bar — whether due to regulatory constraints, competitive sensitivity, or geopolitical data sovereignty requirements — self-hosting is the answer. RackN's role in that equation is giving enterprises the confidence to say yes: to buy infrastructure, run AI hardware, and operate their own inference clusters.
THE AI FLYWHEEL AND ITS HIDDEN COMPOUNDING RISK
Hirschfeld introduced a dynamic that most enterprise AI strategy conversations miss: the flywheel effect. As AI adoption grows, teams develop greater proficiency, consume more AI more quickly, and generate increasingly valuable data through their AI workflows. The flywheel accelerates — but it does so in both directions.
“Every day that goes by, as you learn to use these tools more and more, two things are going to happen. You’re going to have even more valuable data, and you’re going to have this challenge of how much of it can I then take over — can I run for myself?”
The implication is that the data sovereignty problem does not stay static. The longer an enterprise delays drawing a clear perimeter, the more valuable the data in flight becomes — and the more costly the course correction. Budget exposure grows in parallel with data value exposure, which is why Hirschfeld framed the perimeter question as something that needs to be answered early, not deferred until the governance crisis makes it unavoidable.
Watch the full TFiR interview with Rob Hirschfeld here.