
Linux Remains the Foundation Powering Kubernetes Revolution | Greg Kroah-Hartman


Greg Kroah-Hartman discusses kernel security transparency, Rust adoption, and why every cloud-native deployment depends on Linux

I have known Greg Kroah-Hartman for a very long time, and I never miss an opportunity to sit down with him to get a pulse on the technology that is literally running our world: the Linux kernel.

At KubeCon London, while thousands of developers focused on container orchestration and cloud-native technologies, Greg Kroah-Hartman, maintainer of the Linux kernel’s stable branch and a Linux Foundation Fellow, delivered a crucial reminder: everything runs on Linux.

“Everybody uses Linux,” Kroah-Hartman emphasized during his keynote and in our exclusive interview. “All these containers, Kubernetes — all of this runs on top of Linux, and they all rely on Linux.”

The Invisible Infrastructure Challenge

The relationship between Kubernetes and Linux kernel communities represents a fascinating intersection of dependencies. While Kubernetes developers focus on orchestration abstractions, they ultimately depend on Linux kernel capabilities for containers, networking, and storage.

“Containers are not a native object in Linux,” Kroah-Hartman explained. “There’s no such thing as a container in Linux at the kernel level. Kubernetes creates a container by piecing together a bunch of different Linux kernel components.”
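The “components” Kroah-Hartman refers to include kernel primitives such as namespaces and cgroups. As a minimal illustration (userspace Rust, assuming a Linux host), every process already sits inside one namespace of each kind, and the kernel exposes them under `/proc/self/ns` — a container runtime simply creates new ones and combines them:

```rust
use std::fs;

fn main() {
    // A "container" is not a single kernel object: runtimes compose it from
    // namespaces (isolation), cgroups (resource limits), and other pieces.
    // The kernel exposes a process's namespaces as symlinks in /proc/self/ns.
    for ns in ["pid", "net", "mnt", "uts", "ipc", "user", "cgroup"] {
        if let Ok(target) = fs::read_link(format!("/proc/self/ns/{ns}")) {
            // Prints entries like: pid -> pid:[4026531836]
            println!("{ns} -> {}", target.display());
        }
    }
}
```

Two processes sharing the same `pid:[…]` inode see each other; a containerized process gets fresh inodes for each unshared namespace.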

This abstraction creates both opportunities and challenges. Technologies like eBPF have emerged from kernel development to become essential tools for Kubernetes monitoring and networking, demonstrating successful cross-pollination between communities.

Unprecedented Security Transparency

One of the most significant developments in kernel security is the team’s decision to publish all Common Vulnerabilities and Exposures (CVEs) directly, averaging 13 per day. This transparency initiative positions Linux ahead of other operating systems in vulnerability disclosure.

“We are now responsible for issuing all kernel CVEs,” Kroah-Hartman noted. “We’re averaging about 13 security issues a day, which is actually fewer than other operating systems.”

The challenge lies not in the volume of issues, but in risk assessment. Unlike proprietary software with defined use cases, Linux powers everything from satellite systems to smartphone modems, making universal risk scoring impossible.

“I can’t tell you if this bug is a severe security issue or a minor one,” he explained. “Everybody has a different workload, and everybody faces different issues.”

Rust: The Future of Kernel Development

Perhaps the most transformative change in kernel development is the adoption of Rust programming language. Kroah-Hartman, previously skeptical, now advocates for Rust as a solution to endemic security problems.

“The huge majority — maybe 50% or more — of security bugs would be solved with Rust,” he revealed. “There are simple mistakes like, ‘Oh, I forgot to unlock this lock,’ ‘Oh, I forgot to grab this lock,’ ‘Oh, I had a memory leak because I forgot to check the error condition.’ The Rust compiler won’t let you make those mistakes.”
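The class of mistakes he lists is structurally hard to make in Rust. In this small userspace sketch (standard-library Rust, not kernel code — the in-kernel lock types differ), the lock is a value whose destructor releases it, so there is no code path that “forgets to unlock,” and an ignored `Result` draws a compiler warning:

```rust
use std::sync::Mutex;

// In C, a forgotten unlock() or an unchecked error compiles silently.
// In Rust, lock() returns a guard; the mutex is released when the guard
// goes out of scope, on every path -- early return, `?`, or panic.
fn increment(counter: &Mutex<u64>) -> Result<u64, String> {
    let mut guard = counter.lock().map_err(|e| e.to_string())?;
    *guard += 1;
    Ok(*guard)
    // `guard` is dropped here: the unlock cannot be forgotten.
}

fn main() {
    let counter = Mutex::new(0);
    // Result forces the caller to confront the error condition.
    let value = increment(&counter).expect("lock poisoned");
    println!("counter = {value}");
}
```

The same idea — ownership tracking and mandatory error handling — is what removes whole bug classes from new kernel drivers.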

Major technology companies are driving Rust adoption through self-interested contributions. A Red Hat developer is writing the new NVIDIA driver in Rust, the new DRM driver for Apple chips was written in Rust, and Android developers have rewritten the binder system in Rust.

“They’re contributing in a selfish way. They want to solve their problem,” Kroah-Hartman observed. “And let’s take advantage of their contributions.”

Regulatory Compliance and Open Source Responsibility

The European Union’s Cyber Resilience Act (CRA) initially created concern among open source communities, but Kroah-Hartman views the final legislation positively. The act places responsibility on companies shipping products, not open source projects themselves.

This approach mirrors existing hardware regulations, where device manufacturers rather than component suppliers bear responsibility for compliance certification.

Sustaining Innovation Through Selfish Contribution

Despite concerns about attracting younger developers, the Linux kernel project remains healthy, with 200-250 new contributors every release cycle. The mentorship program receives 400-450 applications per round and is limited more by the number of available mentors than by interested participants.

“We have more developers on our project than any other open source project,” Kroah-Hartman noted. “Almost 5,000 developers last year, across 350 companies.”

The key to sustainable open source contribution, according to Kroah-Hartman, is enlightened self-interest. Companies succeed by contributing to solve their own problems, which typically align with broader community needs.

“Contribute to open source in a selfish manner,” he advised. “It turns out everybody has the same problems, and open source is a commodity. You’re not making money at this level — you’re making money on the value add above it.”

Looking Forward: Hardware Evolution and Adaptation

The Linux kernel’s perpetual development cycle continues, driven by hardware innovation. The RISC-V architecture presents both opportunities and challenges, with Kroah-Hartman encouraging vendors to learn from ARM’s fragmentation mistakes.

“Don’t fragment. Don’t try to do your own thing,” he advised RISC-V vendors. “Your job is to sell chips. Make sure your chips run Linux well and work with the community.”

For AI workloads — which are currently driving significant infrastructure investment — Linux’s role remains foundational but often invisible. “AI is just a GPU workload, just processing numbers,” Kroah-Hartman explained. “Linux works really well, gets out of the way, and runs the applications as fast as it can.”

The Invisible Success Story

As cloud-native technologies continue evolving, Kroah-Hartman’s message resonates beyond technical implementation details. The success of Kubernetes, containers, and modern application architectures depends entirely on robust, secure, and continuously evolving foundational technologies.

“If we didn’t have open source, you wouldn’t even be able to do this type of stuff,” he concluded, referencing the entire cloud-native ecosystem.

For organizations building on Kubernetes and cloud-native technologies, understanding and supporting the Linux kernel community isn’t just good citizenship—it’s an essential business strategy for long-term success.


Edited Transcript

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and we are here at KubeCon and CloudNativeCon in London. And we have with us, once again, Greg Kroah-Hartman, Linux Foundation Fellow and maintainer of the stable branch. First of all, Greg, it’s great to see you again. We met last year, I think it was in Paris, if I’m not wrong. Austria last year? Vienna?

Greg Kroah-Hartman: Vienna, yeah. I was in Paris after that.

Swapnil Bhartiya: Yeah, different conferences. Everything gets blurry, right? The funny thing is that, you know, I mean, you have been there. I’ve talked to a lot of folks who have been in the Linux community. They keep changing companies, but they keep doing the same thing, you know. And we have talked about that as well when we discussed the whole career path. So I want to talk to you about what you are doing these days?

Greg Kroah-Hartman: Still maintaining the Linux kernel stable branch. Sasha Levin and I do that work. We turn out a kernel every week for that—a bunch of different ones that people build their devices off of and rely on, distros pull from, whatnot. So that development process is going well. We do a new kernel release. Linus does a new one every eight to nine weeks. We’ve been doing that for 15 years. I had an interview with somebody like, “How do you guys do that? That seems like it’s so hard.” But if you just do time-based releases instead of feature-based releases, it turns out pretty easy. It works out much better. Everybody’s on board with what’s going to happen and they know what’s going on. Kubernetes has a much longer timespan, I noticed. I think it’d be nice if they shortened it a little bit—might be a little simpler. But that’s just us. Everybody works on different timescales.

Swapnil Bhartiya: And you delivered a talk today as well.

Greg Kroah-Hartman: I did, yes. I’m here at the CNCF Foundation conference, KubeCon. I want to remind everybody that everybody uses Linux.

Swapnil Bhartiya: That’s what I’m going to ask about. What are you doing there?

Greg Kroah-Hartman: Well, it turns out everybody uses Linux. I was talking—I gave another interview a couple weeks ago with a podcast, and they’re like, “Oh, we forgot that we all really do use Linux, but we don’t understand what is Linux and what is it here for?” And that’s what all these containers, Kubernetes, all this stuff works on top of Linux, and they all rely on Linux. So I’m here to talk about that, just to mention, “Hey, we’re down here below the thing.” Linux Foundation is still a big part of your community as well. And also, we want your feedback because you are a workload to us, and we want to make sure that everything you guys need works well for us—or rather, everything that you guys do and need works properly for Linux.

Swapnil Bhartiya: So what kind of interaction is there between the Kubernetes communities, CNCF, and the Linux kernel communities? These are two disconnected but coupled communities. I just want to understand the intersection, the cross-pollination. Or are you like, “No, they’re still distinct. We have to bring them together”?

Greg Kroah-Hartman: They’re very distinct. I mean, they use the stuff that we provide, right? Some of the feedback, though, is good. Containers are not a native object in Linux. There’s no such thing as a container from Linux at the kernel level. Kubernetes creates a container on top of that by piecing together a bunch of different Linux kernel pieces, making sure that it works well for them. They’ve talked—the networking especially, I know the networking team talks to the container people and works all these issues out. So there’s really good interaction there. eBPF is a huge thing in analysis and Kubernetes and monitoring your containers, whatnot. The eBPF community is the core kernel community. So those are good cross-pollinations, and those are really good interactions. So yes, they’re there.

But at the level I’m working on—mostly devices and new hardware coming up—Kubernetes doesn’t care what kind of disk controller they’re running on or what kind of network they’re running on. They just want to make sure their stuff works right. So at my level, not a lot. But then again, above that, the providers for Linux to run Kubernetes on top of care about a lot—AWS, Azure, Google Cloud, all those guys work with the kernel community to make sure that they have a kernel that they can provide as a solid base. They report bugs to us. We find out. They evaluate our CVEs and try to give a risk assessment for their use case. And that’s great, and that’s what the cloud and the Kubernetes people require. So it’s not directly the Kubernetes developers always, but they’re part of the ecosystem.

Swapnil Bhartiya: And now let’s go back to the Linux kernel community. You folks have solved a lot of problems. Are there still things that keep you awake at night? Or are you like, “Nah, I just sleep like a baby”? I think sleeping like a baby is bad, right? You’re always hungry, your diaper is always dirty, and you’re always crying.

Greg Kroah-Hartman: That’s true. No, I mean, Linux is kind of unique in that it’s a software project that is never going to be finished because there’s always new hardware, right? There’s always new stuff that hardware manufacturers are creating—new different types of CPUs, different types of everything. So Linux—we’re not changing because we want to change. We’re changing because we have to change because people want to use this type of stuff. So yeah, our job is never going to be finished unless they stop making new hardware. But I mean, Linux is in everything. It’s not just cloud. It’s Android, ChromeOS, the Wi-Fi chip and the 5G modems are running Linux in the iPhone. Like Linux is everywhere in this type of stuff. There are a lot of different use cases out there.

Swapnil Bhartiya: Yeah, because the Kubernetes case is different—everything is already abstract, you know. But you have to deal with the real hardware. A lot of innovation is happening in this space. And that innovation—I mean, new devices keep coming up, new use cases keep coming up, which brings, of course, the need to support them. Then security can become a challenge. Size can become a challenge. Scale can become a challenge. Because in the very early days, I remember, I used to talk to Chinese vendors, you know, they kept coming up with new hardware… So what are the new beasts or devices where you’re like, “Hey, these are the ones that are still posing challenges for even a mature community like the Linux kernel community”?

Greg Kroah-Hartman: RISC-V. RISC-V is interesting. We’ve gotten some flack for saying, “Don’t make the same mistakes that ARM did.” They seem to make the same mistakes that ARM did. I mean, learn from history, guys. But they’re doing well. I have a RISC-V laptop now, so it’s nice to see that they have some work to do on this stuff, but it’s good to see. I also have now an ARM64 server, and the performance per watt number is really good now. So ARM64 is becoming a viable competitor in servers, which is nice to see.

But so a lot of that is like, I’ll call it RISC-V—just learn from the past. Don’t fragment. Don’t try and do your own thing. Work as—your job is you want to sell chips, right? Make sure your chips sell well, run Linux well, and work with the community. Luckily, there are some RISC-V vendors that understand this. Hence I’m interacting with them, and we’re trying to get their code upstream and their drivers upstream. It’s getting better, but the architecture-specific code in Linux is very small. So everybody uses the core of Linux very well, and then after that, it’s the drivers for your specific hardware. So it’s not really that big of a deal that way.

The more complex beasts are the phone SoCs. Those are the most complex CPUs and system-on-chips out there. Servers are easy. Laptops are a little bit harder, but servers, hardware-wise, are a network card, storage card, and a bunch of memory and maybe a GPU. It’s actually very simple. But like a phone has like 300-350 different drivers for all the little tiny pieces all around. It’s so complex. So SoCs are still the complex piece out there, which we’re still doing a lot of work with.

Swapnil Bhartiya: When you look at AI workloads, and people quickly jump to Linux, you know, to be able to run that, what kind of new use cases are you seeing for Linux, which are less to do with the hardware specifically, but they do want to run on top of Linux?

Greg Kroah-Hartman: Oh, well, AI is just a GPU workload, just processing numbers. So Linux works really well—gets out of the way, runs the applications as fast as it can, runs the network as fast as it can, runs the storage as fast as it can. So it’s not really any different than a normal server workload for us. So we didn’t really have to do anything special. New hardware supporting new GPUs to do that—great, but it’s… I mean, a video card is more complex than a GPU, in a way, so the GPU drivers are simpler, in a way, because they don’t have to deal with a lot of frame buffer issues, a lot of other weird HDMI devices and whatnot. So we have—I mean, the whole AI world out there, besides the accelerator drivers, which we support, really hasn’t affected us much. We just get out of the way and let them run their workloads.

Swapnil Bhartiya: Let’s talk about security, and also, since we are here in Europe, last year, last two years, CRA came, and there was a lot of friction because while the idea behind it was great, but there was some kind of disconnect between the community and the lawmakers. But then a lot of work was done. Linux Foundation, other projects, they tried to work together. So talk about the security aspect. Is security—I mean, we can talk about all those, you know, I don’t want to go into that dark, gloomy stuff to remind me where we were back then. But instead, security today, and then also, if you can touch on CRA. And do you—what do you think of CRA and what they are doing in Europe? And do you think that they should have something similar globally?

Greg Kroah-Hartman: So security-wise, we are now responsible for issuing all kernel CVEs. We are now what’s called a CNA—a CVE Numbering Authority—and that’s good. CVE.org finally allows open source projects to do this—Python did it first, curl just did it, we’ve done it for the kernel, and a number of other smaller open source projects are doing it too. That’s really good, because the CRA is going to make open source projects responsible for identifying and publicizing all the vulnerabilities that they have, which is fine. I want to take ownership of that and let the world know. Before, we didn’t publish anything. So now the kernel publishes all the CVEs—we’re averaging about 13 security issues a day, which is actually fewer than other operating systems. So this is kind of funny. That seems like a huge number, but a bug at our level of the stack is an issue like, “Oh, we can leak some memory,” or “Oh, we can crash the machine”—mostly tiny issues. There are a few occasional bigger ones, but we’re identifying them all, and we’re pushing them out to the world.

The trick is, open source projects can’t identify how their code is being used, so Linux is used in everything—cow milking machines, satellites, servers, Android. I don’t know how it’s being used, so I can’t tell you if this bug is a severe security issue or a minor security issue. Same thing with tools like Python or Perl or curl—they don’t know. And so the problem is trying to break down how you identify the score of this vulnerability. Is it high? Is it medium? Is it low? And the traditional method doesn’t work for us because everybody has a different workload and everybody has a different issue.

So the CVE board is realizing this, and we’re going to try and come up with a way for us to not have to give a score in that way, and then different groups can justify the score on their own. I will call out Microsoft and Google and Amazon working together with SUSE and Oracle to identify for the cloud use of Linux these CVEs—here’s the risk model, and here’s the high, low, whatnot, and they break it down, they provide a whole bunch of meta information. They’re not doing it for all 13 a day, but they’re going through and they’re doing this work, and I want to integrate that into our reporting as well. But they’re doing great work for this specific use case. Android takes a look at them all and says, “Okay, do these apply? Do they not? Do we even build this code in?” So a lot of this stuff—the majority of all these CVEs—they’re not even building the code because they’re only using a small portion of our 40 million lines of code. So it’s those types of individual use cases that are important.

The CRA, the EU law, did get worked out. So they do understand that the open source communities don’t have the same rules or risks that a company does, but the CRA puts the burden of proof on the companies that integrate the open source into their product. So it’s like you ship a product, you have to get a CE mark for the electronics in there. It’s not the manufacturer of that chip. It’s the manufacturer of the device itself that’s responsible for that, and that’s what it is. They’re going to dictate software bill of materials so that you will know what software is in it. They will have to update based on vulnerabilities that are known for that software. They will have to report vulnerabilities that are found and reported to them. And this is a good thing. I think it’s a really good thing. So the law is finished. Now comes the specification part. So they said, “You must do this type of thing.” And now we need to get down into the details, like, “What is the risk model here? What is that?” And me, through the Linux Foundation Europe—I’ll call out Eclipse, Apache are also doing this work with the EU group. We’re part of the expert group. We’re going through and defining what are the different categories, what are the risk categories and the specifications for actually how to break down and do this stuff.

There will be some new rules. I don’t know if we’re going to use CVE. We might have to report to CVE for the US and China and whatever the CRA sets up in Europe. We’ll have to report to a different one for Japan, maybe different one for India, which is fine. I want to be agnostic. I can push to anybody. Any open source tool can do the same. Oh, Kubernetes—they’re also a CNA, so they also are in charge of their own stuff. I’ve talked a lot with the security people from Kubernetes. They push out a much smaller volume, so they can still do it easily.

So the CRA is good. And I think over the next two years, year and a half, I think we’ll be finalizing the specification. It’s a public process through ETSI in the EU, and it’ll come out as a specification. I think it needs to be finished one and a half years from now. And then it’ll have another year before it goes into full effect. I think it’ll be voluntary in one and a half years. It’s going fast, it seems like, but it’s good. And this is something that actually the US government also requires the same rules, almost. So it would be nice. I did see some reporting saying maybe that the UL labs in the US are going to do much the same thing for devices in the US because software is in all these devices, and you want to have it be able to be updatable and fixable and things like that. And vendors who ship devices with software need to be responsible for that. They can work with us—the open source communities and us—we’re here, glad to work with them, but the responsibility is on them for when they bundle it all together and ship it to a consumer, and that’s the same thing as dealing with the metal in it or the electronics in it or the paint in the device. Same rules—software should be noted as well.

Swapnil Bhartiya: We have asked this question, and since I met your son also this morning, so once again, I will talk about, what are you folks doing to attract the new generation? First of all, they are all TikTok audience, they’re all, you know, reels. There was real talk about, how are you building the future generation? Because kernel is the foundational technology. It’s not a technology like—I mean, I cover mainframe also. Mainframe is also critical in today’s modern world. But this is not a technology which, “Oh, it’s an ancient technology.” No, this is the foundational technology of the modern economy. What are you folks doing to attract modern maintainers, developers to the kernel?

Greg Kroah-Hartman: We haven’t stopped. We have over 200, maybe 250, new developers joining our project every eight to nine weeks—submitting maybe just one patch, drive-by or not. We had 400-450 people apply for the kernel mentorship and intern program last round. We have a huge number of people applying for this stuff. The university my son goes to teaches the foundations of systems development and a little bit of kernel work, so students know that kind of stuff. This is out there, and people are doing it. We have more developers on our project than any other open source project—almost 5,000 developers last year, from 350 companies. We are not exactly a shrinking project by any means, and we still have a huge number of new people coming in. If you look at our numbers, we’re not shrinking.

That being said, we can always use more. Shuah Khan is doing a great job with the mentorship and intern program. The main limit we have is the number of people who can be a mentor. So we don’t have a lack of people wanting to be part of the process—we have a lack of people being allowed by their company to participate as a mentor. I want to see more companies allow their kernel developers to mentor other people. Because everybody who comes out of this process either goes back to university or gets a job. All these companies are hiring these people, and they kind of disappear—which is great, that’s fine—but they come out with kernel knowledge, and these companies need people with that. Companies keep asking for more developers, so they should allow the maintainers who work for them to be part of the process—after all, they get to hire from it.

Swapnil Bhartiya: So you have a scaling problem—there are challenges, but you do have the developers. But have you also seen—I don’t want to get into specifics—particular regions where there are more people? Because different regions have different education systems. Which regions are most of the developers and mentors coming from?

Greg Kroah-Hartman: All around the world. I have called out it comes down to the university you go to. It’s like some universities are more theoretical knowledge, or like higher level, like F#, or some focus on systems, and that just—in the same country, you can have different universities do different things. So it depends on the university system. So I can’t call out anyone. Europe is huge. India is huge. US is huge. In some different universities, some universities know. So again, it comes down to the university system that’s churning out these students. So it’s all over the world. So I don’t see any one area that’s no, or any one area that’s yes.

Swapnil Bhartiya: I mean, I’ve been covering open source from the very early days, and they are the constant where everybody is. But it feels that people have forgotten that they are using open source, you know, and then they also forget to be a good open source citizen. But I know you’re—because you are a very positive guy, so you also will always be positive. But in general, are you saying that we should talk more about open source? Or you feel, “Man, people are doing enough open source. We don’t have to go out and educate them about open source”?

Greg Kroah-Hartman: I think we always need to educate about open source, right? I mean, the way everybody succeeds is by contributing to this. You’re relying on these facilities. Everybody contributes to open source in a selfish manner, which is fine. Contribute to solve your problem, because it turns out everybody has the same problems, and open source is a commodity, right? Everybody’s not making money on this level stuff—they’re making money on the value add above that, and that’s fine. These are the building blocks that we provide for you to solve your problem, your business problems, whatever it might be. So these are the underlying pieces and parts that you rely on. So contribute to it, and it also—when you do it, it saves you time and money. You’re not having to try and work around things. If you hire one or two developers for the project you’re relying on, you get a much better product overall for you. And that saves you money to do this stuff. It’s cheaper. Intel and IBM proved it long, long time ago. It saves you time and money to work with the community than to ever do it yourself. So there’s a business case to do this. So it’s not just that. So yes, I would always want people to work and talk more about open source. They don’t realize it, and that’s kind of good. But on the flip side, we do need to emphasize all this is open source. Contribute to it because you are relying on it.

Swapnil Bhartiya: And you have no idea how many times I’ve quoted you saying, “No, we need selfish users,” because, you know, the charity people forget about it. You know, you’re charitable, you donated, you’re done. But if you’re a selfish user, your success is tied to the success of the product. So you will—so that’s why I know that quote, you know, I have used so many times that, you know, Greg says this: “We need selfish users because that makes the project sustainable.”

Greg Kroah-Hartman: Because your success and you’re relying on it more and you want it to succeed, and because—but I mean, using open source has let so many more things grow than ever before. You can spin up the whole old LAMP stack, right, with the whole first dot-com boom. And now all the Kubernetes, all the cloud-native stuff. That’s all thanks to open source. If we didn’t have open source, you wouldn’t even be able to do this type of stuff.

Swapnil Bhartiya: And you also talked about Rust this morning. Talk about Rust. I mean, when I talked to Linus also, I brought up the point of Rust, you know, so I will ask you also, you know, what is the kernel community thinking of Rust?

Greg Kroah-Hartman: So I talked about this today. Rust—I was a skeptic a year or two ago. I have a friend many years ago who said, “Oh, you should try Rust, and it makes programming fun again.” I was like, “Ah, okay.” And then I ignored him, and he was right. I should have listened more. Rust is nice in that from a C programmer’s point of view, you can kind of do the same thing. It looks the same, but the compiler makes you actually think about what you’re doing more. You have to—and once you get it right, it’s always right. The memory accesses are there, the proper ownership of stuff. “Oh yes, I really did have to check my error conditions.” You kind of get frustrated a little bit, but then you realize that once it compiles, once this compiles, then I just can focus on the logic, and the logic is what, as a programmer, you care about. You don’t care about the syntax and some of the other foolish things.

The cool thing about the kernel is that—since I’ve seen every single security bug for the past 18 years—I can say the huge majority, maybe 50% or more (some say I’m undercounting), would be solved by using Rust. A lot of issues come from simple mistakes: “I forgot to unlock this lock,” “I forgot to grab this lock,” “I had a memory leak because I forgot to check an error condition.” The Rust compiler won’t let you make those mistakes.

Going forward, Rust makes development easier for both contributors and maintainers. If the code compiles, I know it will work because it passed the compiler checks. I no longer have to wonder, “Is this lock held?” or “Did they clean up properly on this error path?” It’s all enforced, so we can focus on logic problems. As a maintainer, that makes life easier.

Now, the tricky part is that Linux has 35 million lines of C code. The intersection between C and Rust is very complex—the “hairiest, craziest stuff” Rust can handle. Personally, I can read these bindings between C and Rust now, but I couldn’t write them. I can write Rust code. Thankfully, we have very talented Rust developers doing this work.

And they’re contributing in a “selfish” way—they want to solve their own problems. For example: Daniel, a developer at Red Hat, wants to write the new NVIDIA driver in Rust. Apple developers wrote their new DRM driver for Apple chips in Rust because it made their work much simpler. Android developers rewrote Binder in Rust.

They’re contributing this Rust code to Linux because it solves their needs—and we should take advantage of that. It makes drivers simpler and better. We don’t need to rewrite old C code; that’s fine. Google has shown that just writing new code in Rust improves things.

Now, there’s some confusion about memory safety in Rust. People say, “Oh, we can’t have memory errors in Rust.” You can still have memory errors. Rust won’t save you from that entirely. For example, I showed today how a simple off-by-one error can instantly crash the system in Rust. The difference is, when Rust crashes, it will typically just panic and reboot the machine. In C, that kind of error would corrupt memory silently, possibly creating a vulnerability to take over the system. So while you can still get CVEs in Rust (e.g., denial of service from a crash), it reduces the risk of critical vulnerabilities like remote code execution.
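That difference is easy to demonstrate in userspace. In this sketch (standard-library Rust, not the kernel demo from the keynote), the same off-by-one that would silently read or corrupt adjacent memory in C is stopped by the bounds check and surfaces as a deterministic panic:

```rust
fn main() {
    // Silence the default panic backtrace for a cleaner demonstration.
    std::panic::set_hook(Box::new(|_| {}));

    let buf = vec![0u8; 8];
    let idx = buf.len(); // classic off-by-one: one past the end

    // In C, buf[idx] would touch adjacent memory silently -- the raw
    // material of an exploit. In Rust, the bounds check turns the same
    // bug into a panic (in the kernel, an oops/reboot): still a possible
    // denial of service, but not a path to taking over the system.
    let result = std::panic::catch_unwind(move || buf[idx]);
    assert!(result.is_err());
    println!("out-of-bounds access panicked instead of corrupting memory");
}
```

The bug still exists and still earns a CVE; what changes is its worst-case severity.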

However, maintainers face challenges too. We now have to define our C interfaces in a way that Rust can understand, especially for the driver model. The Rust developers initially brought some code to us and said, “We’ll bind to this.” I thought it was crazy—but after working with them, we realized they were only a little wrong, and I was a lot wrong.

It comes down to collaboration. For example, they wrote 1,000 lines of Rust to avoid changing two lines of C. I said, “I can change this to three lines of C and make it much easier.” They agreed. So, we’ve also improved C code along the way, making it safer even if Rust goes away—those improvements stay.

Yes, this creates more work for maintainers upfront, and we’re already overloaded. But in the long run, Rust can help us write drivers that are safer, easier to maintain, and more robust. And that’s important—because we write code for people first, compilers second. We need to maintain this code for a long time, which is a very different mindset than just writing one-off code.

Swapnil Bhartiya: Greg, once again, thank you so much for taking the time. Talking to you always brings us back to our roots—our foundation—and that’s where I come from too. Always a pleasure. See you in the next city!

—By Monika Chauhan, with inputs from Swapnil Bhartiya
