Brian Behlendorf Unpacks What Biden’s EO On AI Means For Open Source

Guest: Brian Behlendorf (LinkedIn)
Organization: Linux Foundation (Twitter)

President Biden recently issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It establishes new standards for AI safety and security and protects Americans from the potential risks of AI systems.

In this episode, the Linux Foundation’s Chief AI Strategist Brian Behlendorf shares his thoughts on this sweeping executive order and its implications for the AI and open-source ecosystems.

Linux Foundation’s involvement in EOs:

  • When President Biden came into office, he brought in a lot of technologists and had a very tech-forward agenda across several domains, including cybersecurity and AI.
  • The Linux Foundation, through its Open Source Security Foundation (OpenSSF), worked with CISA and the White House on cybersecurity. It participated in a number of summits that helped advance the state of security in open-source software.
  • A parallel effort was conducted to develop AI policy. The publication of this executive order indicates the administration’s strong appetite not only to lead, but to work with industry, the open-source community and foundations, and other parts of society.
  • The European Union has proposed similar legislation: the Cyber Resilience Act (CRA) and the AI Act.

EU versus US approach to AI:

  • In the EU legislation, there’s this precautionary principle of “If we don’t quite understand the possible negative downsides, then we shouldn’t start until we understand them completely.” (Per Behlendorf, this is unfortunate because you won’t know all the downsides until you get started.)
  • In the US Executive Order around AI, there is a cautious constructivism: “We know there are some potential downsides, so let’s invest in ways to mitigate those. At the same time, let’s invest and take advantage of AI and open source to help bolster security.”

Key points about this Executive Order:

  • It tries to shape not just how the private sector and the open-source community operate, but also how some of the underlying technologies are built.
  • It is also a directive to the government itself on how to move forward and how to safely manage and take advantage of the opportunities here.
  • It requires developers of powerful AI models to share safety test results with the US government. It calls for transparency, which is easy because transparency is the bedrock that open source is built upon: the community has always shared not just the source code and the list of bugs, but also the test results.
  • There’s a focus on privacy protection and on mitigating potential bias in the datasets that models consume.
  • It mentions concerns about fraud, deepfakes, and the use of these technologies for malicious purposes.
  • There’s a focus on cybersecurity and on making models resilient to vulnerabilities that could lead to malign outcomes.
  • There’s a lot of potential for the open-source community to actively build safety features, guardrails, and other such functions into the tooling that developers use to build AI models.

Current AI trends around the world:

  • The Singapore government recently released AI Verify, its own open-source project intended to serve as an auditing tool for AI models.
  • Uganda uses open-source software across its government infrastructure, not only to cut costs but to avoid being limited by a single vendor or big tech company. It relies on Ugandan developers from its own universities and startup ecosystem.
  • Open-source software has always been a digital public good. The concept of digital public goods is now something the UN and many international organizations have been championing, and it can bring a sense of equity in access to technology across the globe.
  • Countries are coming up with their own national AI strategies, and they’re working with their peers in Africa, Asia, and the West to develop strategies that complement each other.
  • There’s a lot of research being conducted in China and a lot of patents being filed in China around artificial intelligence. We can benefit from the work that they put into these foundational models and these foundational technologies.
  • There are now people in governments who, even if they’re not technologists, understand the potential for open-source software to be a driver of innovation within their own countries. They are beginning to understand that governments need to become IT organizations in order to deliver services to their citizens, and need to be better interconnected with other governments.

What’s ahead for the Linux Foundation:

  • At the LF AI & Data Foundation (LF AI & Data), an umbrella foundation within the LF, there’s a project called Intersectional Fairness (ISF) that is focused on building a model that detects bias in the underlying datasets and the resulting models.
  • The foundation also recently launched the Generative AI Commons to map out everything that’s going on in the generative AI space and figure out how the LF, as an open-source community, can be helpful in driving innovation and consolidation in this domain.
  • There are a lot of different tools for building models, querying models, and managing these pipelines, so there will be some consolidation. While the LF is formally agnostic about the platform, it wants to be helpful in seeing that consolidation happen in a way that builds a coherent, truly open-source platform that as many people as possible can build interesting things on top of.
  • It will host the AI.dev Summit on December 12-13 in San Jose, CA. This event is not only for those at the core of building AI models and machine-learning tools, but also for those who use these tools and connect them into their own applications. It is a “nerdy” event in that it is more about the nuts and bolts, and less about policy and public messaging.

This summary was written by Camille Gregory.