AI Infrastructure

IBM open sources Granite AI models


IBM Research is open sourcing its Granite code foundation models with the aim of making coding as easy as possible. The company is releasing a series of decoder-only Granite models for generative code tasks, trained on code written in 116 programming languages.

The Granite family comprises models ranging from 3 billion to 34 billion parameters, in both base and instruction-following variants. The models are fine-tuned for a range of tasks, from complex application modernization to on-device, memory-constrained use cases.

“We are transforming the generative AI landscape for software by releasing the highest performing, cost-efficient code LLMs, truly empowering the open community to innovate on top for many use cases, without any restrictions — for research, commercial use cases, and beyond,” said Ruchir Puri, chief scientist at IBM Research, who leads IBM’s efforts to bring coding assistants to the world. “I am very excited about the future of software with generative AI.”

These models are available on Hugging Face, GitHub, watsonx.ai, and RHEL AI, Red Hat’s new foundation model platform for developing, testing, and deploying generative AI models. The underlying base code models are the same ones used to train IBM’s watsonx Code Assistant (WCA) for specialized domains.
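Since the models are published on Hugging Face, a minimal sketch of how one might try a Granite code model with the `transformers` library looks like the following. The model id, prompt, and `complete` helper are illustrative assumptions, not taken from the article; downloading the weights happens on first use.

```python
# Assumed Hugging Face model id for the smallest (3B-parameter) base variant.
MODEL_ID = "ibm-granite/granite-3b-code-base"

def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a code completion for `prompt` using a Granite base model.

    The import is done lazily so defining this helper does not require
    `transformers` to be installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Example: ask the base model to continue a function definition.
    print(complete("def fibonacci(n):"))
```

The instruction-tuned variants follow the same loading pattern; they differ in that prompts are phrased as natural-language instructions rather than raw code to be continued.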
