Can generative AI solutions tailored for enterprise use ever achieve interoperability? The Linux Foundation, in collaboration with a consortium of organizations including Cloudera and Intel, aims to explore this question.
On Tuesday, the Linux Foundation unveiled the Open Platform for Enterprise AI (OPEA), a project intended to foster the development of open, multi-provider, and modular generative AI systems. Hosted within the Linux Foundation’s LF AI and Data organization, the initiative aims to enable robust, scalable generative AI systems that draw on the best open-source innovations. Ibrahim Haddad, the executive director of LF AI and Data, emphasized the project’s goal of establishing a detailed, composable framework that integrates cutting-edge technologies.
OPEA’s membership includes industry giants such as Cloudera, Intel, Red Hat (an IBM-owned company), Hugging Face, Domino Data Lab, MariaDB, and VMware, among others. Together, they aim to collaborate on various aspects of generative AI development.
One of OPEA’s key areas of focus is the optimization of AI toolchains and compilers to ensure compatibility across different hardware components. The project will also explore heterogeneous pipelines for retrieval-augmented generation (RAG), a technique increasingly employed in enterprise applications of generative AI. RAG enables models to reference external information sources beyond their training data, improving their ability to generate accurate responses or perform tasks.
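To make the RAG idea concrete, here is a minimal sketch of the retrieval-then-augment step. It is purely illustrative, not part of any OPEA specification: the function names (`embed`, `retrieve`, `build_prompt`) are hypothetical, and the toy bag-of-words similarity stands in for a real embedding model.

```python
# Minimal sketch of a RAG pipeline's retrieval step (illustrative only).
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank external documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Augment the model's input with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "OPEA is hosted by the Linux Foundation's LF AI and Data organization.",
    "RAG lets models consult external sources beyond their training data.",
    "Cloudera and Intel are among the project's founding members.",
]
print(build_prompt("What does RAG let models do?", corpus))
```

A production pipeline would replace `embed` with a learned embedding model, `corpus` with a vector database, and pass the built prompt to a generative model; the point here is only the shape of the flow: retrieve external context, then generate from the augmented input.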
Intel provided further insights into OPEA’s objectives, highlighting the need to standardize components for RAG solutions and ensure interoperability across enterprise systems.
Evaluation will play a crucial role in OPEA’s efforts, with the project proposing a comprehensive rubric for grading generative AI systems based on performance, features, trustworthiness, and enterprise readiness. This evaluation framework aims to provide a standardized method for assessing the quality and reliability of generative AI deployments.
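A rubric along those four axes might look something like the sketch below. To be clear, the scoring scale, equal weights, and class names are hypothetical illustrations; OPEA has not published a concrete scoring formula.

```python
# Illustrative grading rubric along the four axes OPEA names:
# performance, features, trustworthiness, enterprise readiness.
# Weights and the 0-10 scale are hypothetical, not OPEA's actual rubric.
from dataclasses import dataclass

@dataclass
class RubricScore:
    performance: float           # e.g. latency/throughput benchmarks, 0-10
    features: float              # breadth of supported capabilities, 0-10
    trustworthiness: float       # robustness, provenance, safety, 0-10
    enterprise_readiness: float  # security, scalability, support, 0-10

    def overall(self, weights=(0.25, 0.25, 0.25, 0.25)):
        """Weighted average across the four dimensions."""
        parts = (self.performance, self.features,
                 self.trustworthiness, self.enterprise_readiness)
        return sum(w * p for w, p in zip(weights, parts))

score = RubricScore(8.0, 6.0, 7.0, 9.0)
print(f"overall: {score.overall():.2f}")  # → overall: 7.50
```

The value of such a rubric is less the arithmetic than the standardization: if every vendor reports the same four dimensions, deployments become directly comparable.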
Moving forward, OPEA envisions additional initiatives, including open model development akin to Meta’s Llama family and Databricks’ DBRX. Already, Intel has contributed reference implementations for generative AI-powered applications optimized for specific hardware.
While each member of OPEA brings its own expertise and solutions to the table, the success of the project hinges on collaboration and compatibility among vendors. By working together to build cross-compatible AI tools, OPEA aims to offer customers greater flexibility and choice while avoiding vendor lock-in.