CoRover, Intel Roll Out BharatGPT Mini for Fully Offline AI
CoRover’s Small Language Model aims to deliver multilingual, low-latency AI in sectors where cloud use is restricted.
[Source photo: Krishna Prasad/Fast Company Middle East]
Conversational AI platform CoRover has announced BharatGPT Mini, an offline-capable AI agent developed in partnership with Intel Corp.
The system is designed as an enterprise-grade, multilingual agent that runs entirely on local hardware, allowing organizations to deploy secure conversational AI without depending on cloud connectivity.
Powered by Intel Core Ultra processors and accelerated through Intel's OpenVINO toolkit, BharatGPT Mini is designed to deliver real-time interactions even in environments with limited or no internet access.
The approach targets a growing challenge for enterprises and government departments working in remote areas or high-security zones where connectivity gaps and data-privacy concerns limit the use of cloud-based large language models.
Operating as a Small Language Model (SLM), BharatGPT Mini is optimized for fast, low-latency responses while keeping all data within an organization’s own infrastructure.
The edge-first design helps safeguard sensitive information, reduce reliance on external servers, and support compliance with strict data protection requirements.
For sectors such as governance, citizen services, healthcare, and enterprise support desks, the ability to run AI fully offline can significantly improve reliability and security.
The collaboration leverages Intel's hardware acceleration and software optimization stack to deliver efficient on-device inference. With Intel CPUs, NPUs, and other accelerators handling local processing, the system aims to offer enterprise-grade performance at the edge.
The model is engineered for easy scaling across large deployments, making it suitable for agencies and organizations that need secure AI across devices or field locations.