In recent years, AI has become a central focus within the tech community, attracting significant investments and yielding highly promising outcomes.
Big tech companies are racing to build their own AI models, which are widely regarded as the next major technological breakthrough and a driving force behind the future of the industry.
In light of this, Red Hat is once again a step ahead of its competitors in the Enterprise Linux segment by providing a dedicated platform for developing AI models. Here's what it's all about.
In an unexpected move towards democratizing AI development, Red Hat today announced the developer preview of its new offering, Red Hat Enterprise Linux AI (RHEL AI), promising to change how enterprises deploy AI by leveraging the power of open source.
Built on the InstructLab open-source project, RHEL AI integrates IBM Research's Granite large language models and InstructLab model alignment tools, making it easier for domain experts without extensive data science skills to develop AI-driven applications.
Key Features of RHEL AI include:
- Community-Driven Innovation: RHEL AI fully utilizes open-source models and knowledge, allowing users to contribute to and benefit from shared advancements.
- User-Friendly Tools: The platform offers tools that simplify AI training and fine-tuning processes, making these technologies accessible to non-specialists.
- Optimized for AI Hardware: Delivered as a bootable RHEL image optimized for AI hardware accelerators, enabling efficient server deployment.
- Enterprise Support: Red Hat guarantees support and intellectual property indemnification, enhancing reliability for enterprise use.
The developer preview provides access to the Granite 7b English language base model and its LAB-aligned counterpart. Looking forward, Red Hat plans to expand the Granite model family to include specialized code models.
Moreover, the company outlined a three-step process to integrate RHEL AI into enterprise environments:
- Step 1: Run the InstructLab CLI on a personal computer to familiarize yourself with the tools and develop initial models (a minimal sketch of this local workflow follows this list).
- Step 2: To refine and enhance your models, move to more powerful hardware, such as bare metal servers or cloud VMs.
- Step 3: For large-scale deployments, spread training across multiple nodes using OpenShift AI to enhance throughput and integration with cloud-native applications.
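To make Step 1 more concrete, here is a minimal sketch of what the local InstructLab workflow can look like, driven from Python. The `ilab` subcommands shown (init, download, generate, train, chat) reflect the InstructLab CLI around the time of the developer preview and may change between releases; treat the exact command names and the overall sequence as an illustrative assumption rather than the official procedure.

```python
# Sketch of the local (Step 1) InstructLab workflow, assuming the `ilab` CLI
# is installed and on PATH. Subcommand names are illustrative and may differ
# between InstructLab versions.
import subprocess

def run(cmd: list[str]) -> None:
    """Echo and run one CLI step, stopping if it fails."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ilab", "init"])      # set up a local configuration and taxonomy checkout
run(["ilab", "download"])  # fetch a quantized Granite base model for laptop-scale use
run(["ilab", "generate"])  # create synthetic training data from taxonomy additions
run(["ilab", "train"])     # fine-tune the local model on the generated data
run(["ilab", "chat"])      # chat with the tuned model to sanity-check the result
```

In practice you would run these commands directly in a terminal; the Python wrapper here just makes the sequence explicit and easy to rerun end to end.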
At the moment, Red Hat Enterprise Linux AI is available as a developer preview, leveraging IBM Cloud's GPU infrastructure for training both the Granite models and InstructLab.
You can start by training open-source Granite models with the InstructLab CLI on your personal laptop or desktop, or if you feel confident enough, dive directly into the RHEL AI developer preview.
For more information, visit the official Red Hat announcement.