Infosys, Intel target large-scale enterprise AI rollouts
Expanded collaboration aims to help companies deploy secure and cost-efficient AI across cloud, data centers and edge systems.
The collaboration aims to accelerate secure, scalable and cost-efficient AI adoption across industries by advancing open standards from the edge to the cloud, the companies said. The joint architecture will focus on building "right-sized" AI systems that balance performance, security and total cost of ownership for enterprise workloads.
As part of the partnership, Infosys and Intel will co-design, optimize and benchmark AI workloads across Intel Xeon processors, Intel Gaudi AI accelerators and Intel AI PCs.
The combined platform will support mission-critical use cases such as IT operations, developer productivity and automation, while enabling AI agents that can securely access enterprise data, coordinate tasks and operate within governance controls.
Infosys Chief Executive Officer Salil Parekh said the partnership aims to help enterprises “unlock AI value at scale securely, cost-effectively and with clear business impact.”
Parekh added that the partnership is intended to help clients embed AI at the core of their operations and advance their AI transformation.
Intel Chief Executive Officer Lip-Bu Tan said the collaboration would enable organizations to deploy performance-optimized AI systems across data centers, cloud environments and edge infrastructure.
Benchmarking AI workloads means comparing performance metrics such as speed, throughput, latency and efficiency across Intel hardware tailored for AI tasks. Such benchmarking helps organizations choose the right platform for specific needs, such as model training, inference or edge computing.
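As a rough illustration of what such benchmarking involves, the sketch below times repeated calls to an inference function and reports latency and throughput. It is a generic, hypothetical example, not Infosys's or Intel's methodology; `dummy_infer` stands in for any real model or accelerator call, and the metric names are our own.

```python
import time
import statistics

def benchmark(fn, batch, runs=100, warmup=10):
    """Time repeated calls to fn(batch); report latency and throughput.

    fn stands in for any inference call (a model forward pass, an RPC);
    the hardware-specific part is simply which platform executes it.
    """
    for _ in range(warmup):              # warm caches before measuring
        fn(batch)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - start)
    total = sum(latencies)
    return {
        "mean_latency_ms": statistics.mean(latencies) * 1000,
        "p95_latency_ms": sorted(latencies)[int(runs * 0.95)] * 1000,
        "throughput_items_per_s": (len(batch) * runs) / total,
    }

# A stand-in "model": sums each item in the batch.
def dummy_infer(batch):
    return [sum(x) for x in batch]

if __name__ == "__main__":
    batch = [list(range(256)) for _ in range(32)]
    for metric, value in benchmark(dummy_infer, batch).items():
        print(f"{metric}: {value:.3f}")
```

Running the same harness against the same workload on different platforms (for example, a Xeon CPU versus a Gaudi accelerator) is what lets teams match each workload to the most cost-effective hardware.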


