Egress fees are the quiet tax on every AI workload. Storage looks cheap at sign-up — that’s the point. But the moment your engineers start running experiments, pulling training datasets, or moving model checkpoints across environments, the bill becomes unmanageable. And if you want to leave? Data gravity and exit costs make switching almost impossible. That is the trap enterprises have been living in since 2006, and it’s now colliding with the demands of modern AI infrastructure at full scale.
Akave is building the exit ramp. The Austin-based startup has raised $6.65 million in seed funding to launch Akave Cloud — a decentralized, S3-compatible object storage platform priced at a flat $14.99 per terabyte per month with zero egress fees. Backed by Protocol Labs, the Filecoin Foundation, the Avalanche Foundation, No Limit Holdings, and Blockchange, Akave is entering the market as a direct alternative to AWS S3 and Wasabi, purpose-built for AI and analytics workloads.
The Guest: Stefaan Vervaet, Founder and CEO at Akave
Key Takeaways
- Akave Cloud delivers S3-compatible decentralized storage at $14.99/TB/month — flat rate, zero egress fees, unlimited queries
- A three-layer architecture separates the data plane, verifiability layer, and control plane — enabling true sovereign storage without re-architecting existing pipelines
- Cryptographic audit trails and immutable data verification address GDPR compliance, the EU Cloud Act conflict, and AI governance for regulated industries
- Early customers include Intuizi, LaserSETI, and 375ai — with qualified integrations across Snowflake, Apache Iceberg, Databricks, Presto/Trino, and Hugging Face
- Long-term vision: becoming the storage layer for agentic AI, with SDKs and x402-compatible infrastructure already in development
***
In a recent TFiR interview, Swapnil Bhartiya spoke with Stefaan Vervaet, Founder and CEO at Akave, about the broken economics of cloud storage for AI workloads, how Akave’s decentralized architecture restores enterprise data sovereignty, and where the company is heading as agentic AI reshapes the infrastructure stack.
Why Cloud Storage Economics Are Broken for AI
Vervaet has spent 20 years in infrastructure — including three years at Protocol Labs building the Filecoin ecosystem, the world’s largest decentralized storage network. That experience gave him a front-row seat to a pattern: enterprises weren’t asking for decentralization as an ideology. They were asking for a way out of hyperscaler lock-in driven by compliance pressure, cost unpredictability, and the growing strategic value of their own data.
Q: What led you to build Akave?
Stefaan Vervaet: “Customers would come to us wanting to move away from hyperscalers. One, compliance — they needed data stored within certain premises. Two, they wanted to take control back over their assets. And we’ve seen with new AI workloads customers realizing how important it is to control their assets — not only govern them with their own keys, but move them as freely as they want to run more analytics at a lower cost.”
He connected this to a historical pattern that mirrors the original cloud adoption wave. When AWS launched S3 in 2006, Vervaet was building a competing object store and remembers analysts predicting enterprises would never move to the cloud. Today, he sees the reverse motion beginning — selective repatriation driven not by ideology but by compliance, cost optimization, and access to purpose-built AI compute.
Q: Is this a trend away from hyperscalers entirely?
Stefaan Vervaet: “We don’t see customers move completely out of hyperscalers. We see more customers looking at specific data sets they’re going to use more often, where they’re going to run new AI pipelines or do experiments. Those are the opportunities we’re latching on to.”
The Architecture: Three Layers, Full Control
Akave’s technical differentiator isn’t just pricing — it’s architecture. Where traditional object storage providers offer a monolithic black box, Akave has broken storage into three distinct layers: the data plane (where data resides), a verifiability layer (an immutable blockchain-backed audit trail), and the control plane (where encryption keys are managed and access is governed). This separation is what enables both sovereign deployments and S3-compatible drop-in replacement.
Q: How does the architecture work technically?
Stefaan Vervaet: “We have separated what is typically a monolithic approach. You buy object storage at whatever price per terabyte per month and it’s a black box. We’ve broken this down in three layers — the data plane, a verifiability layer which is an immutable audit trail, and the control plane where the keys are stored and where encryption happens. Any customer can choose a hosted version, where Akave Cloud hosts the infrastructure, or a self-managed instance where they run the control plane as a container in their own environment. That means they’re in control of the keys.”
The practical implication: any application that speaks S3 — and most enterprise cloud applications do — can point to an Akave endpoint and automatically inherit sovereign controls. No new connectors, no application-layer rewiring. The controls are implemented at the storage layer, not above it.
Q: Why build controls at the storage layer rather than the application layer?
Stefaan Vervaet: “Other approaches build connectors at the application layer. Our approach is implementing this at the storage layer, so that by default any application that speaks S3 can automatically plug into the Akave endpoint and automatically inherit those controls — whether that container runs on-premises or in the cloud.”
The Verifiability Layer: Blockchain Where It Actually Matters
Akave uses blockchain primitives not for decentralization as a marketing term, but for a specific technical purpose: immutable, cryptographically provable audit trails. Content-addressed hashes prove data integrity at rest. This becomes operationally critical as enterprises fine-tune models and store training checkpoints that must not be tampered with.
Q: Where does blockchain fit in the architecture?
Stefaan Vervaet: “Blockchain is very powerful when used correctly — you can use it to ensure immutability, ensure there’s an immutable audit trail that is verifiable and can be used for audits to prove data was stored where it’s supposed to be. We use cryptographic hashes based on the content itself to prove data integrity. As you’re fine-tuning models or storing checkpoints, you have to ensure that checkpoint wasn’t tampered with. We have object lock, but on top of that we have this immutable audit trail.”
Storage Economics: What $14.99/TB Actually Changes
The pricing model is the commercial thesis. Hyperscaler storage pricing is deliberately complex: per-API-request fees, tiered retrieval costs, and egress charges that compound with every experiment an engineering team runs. The result is that CFOs cannot predict cloud bills, and engineering teams self-censor experiments. Akave’s flat-rate model directly attacks this dynamic.
Q: How does the cost model change enterprise behavior?
Stefaan Vervaet: “Customers told us they didn’t know what their cloud bill was going to be next month, because their engineers were running more experiments. It was really hard for the CFO to do budgeting, because they’d only know after the fact how many queries were run — and that turned into a higher cloud bill. We saw an opportunity to price storage at a fixed price point — not archive tier, not ephemeral — at $14.99 where customers pay for unlimited egress and unlimited queries per month, versus being charged every time they consume their own data.”
On migration economics: moving data out of AWS costs $90 per terabyte at standard rates. Akave uses direct connects to major hyperscalers to reduce that one-time cost, and Vervaet notes that the differential in ongoing storage and retrieval fees typically recovers that migration cost within two to four months.
Q: How do you handle the migration cost itself?
Stefaan Vervaet: “When you want to move data out of AWS, it’s $90 per terabyte. If you have a direct connect, you can lower that cost. We help customers by having a direct connect and moving that data out at a lower cost. Then as we charge less for retrieval and storage, you pay back that one-time data movement cost within literally two to four months, depending on your storage size.”
Data Sovereignty in Practice: Europe, GDPR, and the Cloud Act Conflict
Vervaet visited European customers a month before the interview and described data sovereignty as the dominant topic — and more specifically, the GDPR-Cloud Act conflict. European enterprises storing data with US-headquartered cloud providers remain legally exposed to US government data access requests under the CLOUD Act, even when data physically resides in Europe. Akave’s community-driven, software-first architecture directly addresses this.
Q: How does Akave handle European data sovereignty requirements?
Stefaan Vervaet: “If you’re storing with an American cloud that is still governed by an American board, you are subject to the Cloud Act implications — the US government can still request access to your data even if you are a European company storing it in Europe with an American-owned company. The solution is software and a local integrator. We’ve designed Akave so that integrators can run their own Akave instance locally within region. They contribute hardware locally within Europe to the Akave network, or run the Akave endpoints themselves.”
This community-first model mirrors open-source infrastructure ecosystems and is designed to restore the local value-added reseller (VAR) and managed service provider (MSP) relationship that hyperscaler reselling eroded over the past 18 years. GDPR compliance, Vervaet explained, is not just about physical data residency — it requires proving that a European entity controls the keys, makes changes to the stack, and is governed by a European board. Akave’s control plane separation makes that proof possible.
Migration Path: Snowflake, Apache Iceberg, and the S3 Bridge
For data scientists, developers, and data engineers evaluating Akave, the migration path is engineered to minimize disruption. Because Akave is S3-compatible, existing tools — Snowflake, Databricks, Presto, Trino, Hugging Face — connect via endpoint configuration changes, not re-architecture. Akave worked directly with Snowflake’s product team to support Apache Iceberg external tables, allowing Snowflake instances running in the cloud to query data stored on Akave directly.
Q: What does migration actually look like for an existing Snowflake customer?
Stefaan Vervaet: “With Snowflake and Apache Iceberg, we worked with the Snowflake PM team to optimize Akave to make it very seamless — so you can point an existing Snowflake instance running in the cloud directly to an Akave table as an external table. In the past, Snowflake and Databricks only supported storage targets within the hyperscaler cloud. Now they support external tables, and the way we migrate data is by replicating it from the S3 target in AWS into Akave.”
Growth, Customers, and the Agentic AI Roadmap
With $6.65 million raised and customers including Intuizi, LaserSETI, and 375ai already running production workloads on Akave Cloud, Vervaet outlined a two-phase growth strategy for 2026: expanding the customer funnel across ELT and agentic AI workloads, and investing in partner and integrator network growth. The longer-term vision is positioning Akave as the default storage layer for agentic AI infrastructure.
Q: What is the long-term vision for Akave?
Stefaan Vervaet: “The long-term vision is to become the storage layer for agentic AI and AI workloads as a whole. I believe agents will become the largest consumers of hardware resources. What that means is that the interaction with storage will change, and for that you need better rails — data rails more optimized for agentic workloads. We started with S3-compatible interfaces because that’s where you have to start — create the bridge, make it easy to integrate. But down the road, we’ve already built SDKs for the developer community to support that new wave of custom applications.”
He also pointed to x402 — the emerging payment standard for AI agents being driven by Coinbase and Cloudflare — as an early signal of where agentic infrastructure is heading, and an area where Akave’s compute-agnostic storage layer is positioned to serve as foundational infrastructure.