AI is already everywhere, and according to many experts, it will be a major force shaping the future of technology.
So far, however, there hasn’t been a Linux distribution specifically tailored to developing, testing, and running generative AI models. Well, that’s no longer the case.
As we informed you in early May, Red Hat was working on a new version of its RHEL distro, featuring open-source Granite models designed for businesses needing a dependable platform to develop AI applications.
A few days ago, the company announced the general availability of Red Hat Enterprise Linux AI.
It integrates the Granite large language model family, distributed under the Apache 2.0 license, with the InstructLab alignment tools.
This combination is optimized for deployment on individual servers across the hybrid cloud, promising a significant shift in how enterprises can leverage generative AI.
The distro is delivered as a bootable image that bundles Red Hat Enterprise Linux with popular AI libraries such as PyTorch. It is also optimized for hardware from leading manufacturers like NVIDIA, Intel, and AMD, ensuring high-performance AI inference.
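As a rough illustration of what working with such a setup might look like, here is a minimal Python sketch that queries a locally served Granite model through an OpenAI-compatible chat endpoint, the style of API that InstructLab’s model-serving tooling typically exposes. The URL, port, and model name below are assumptions and will differ depending on your configuration.

```python
import requests

# Hypothetical local endpoint; the port and path assume an OpenAI-compatible
# server is already running on this machine (adjust to your setup).
API_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    # Placeholder model name; use whatever model identifier your server reports.
    "model": "granite-7b-lab",
    "messages": [
        {"role": "user", "content": "Summarize what Red Hat Enterprise Linux AI provides."}
    ],
    "temperature": 0.2,
}

# Send the chat request and print the model's reply.
response = requests.post(API_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```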
Moreover, RHEL AI addresses key challenges by making generative AI more accessible, efficient, and flexible for CIOs and IT organizations across the hybrid cloud.
The key phrase here is “more accessible,” because the cost of training generative AI models should not be underestimated, with some costing up to $200 million before they even launch.
RHEL AI seeks to change this by offering a more efficient and cost-effective solution, reducing the barriers to entry for enterprises looking to integrate AI into their operations.
The platform is backed by Red Hat’s robust subscription model, which includes enterprise product distribution, 24×7 production support, extended model lifecycle support, and legal protections under Open Source Assurance.
As of September 5, RHEL AI is accessible via the Red Hat Customer Portal for on-premise use or as a “bring your own subscription” (BYOS) offering on Amazon Web Services and IBM Cloud.
On top of that, plans are in place to extend BYOS offerings to Azure and Google Cloud in the fourth quarter of 2024. Further expansions of RHEL AI’s cloud and OEM partnerships are expected in the coming months.
For more information, visit Red Hat’s official announcement.