Create a “UPI for AI” as a Digital Public Infrastructure

Source of image: Generated by the author using AI

Artificial Intelligence (AI) technology is advancing rapidly, and AI models are becoming increasingly complex and capable. This is being driven by growing computational resources, massive data availability and record investments in ever-larger large language models (LLMs) with hundreds of billions of parameters. However, many of the most prominent LLMs are proprietary or closed-source, and their high subscription costs can pose significant barriers to adoption by small businesses and start-ups. A large number of open-source AI models are also available, including many small language models fine-tuned for specific domains or use cases, which anyone can use freely. However, these models each have their own Application Programming Interfaces (APIs) and protocols, which may require a separate integration for every model in every application. This fragmentation makes the process cumbersome and inefficient, both for developers building applications for various use cases and for users trying to switch seamlessly between models to discover the one best suited to their specific requirements.

Can a unified interface like that of UPI be developed for open-source AI models for easier access, interoperability and discoverability? Developing such a unified interface as a Digital Public Infrastructure (DPI) can address these concerns. The key factors behind UPI's success lie in its open architecture, interoperability, and instant real-time transactions through Virtual Payment Addresses (VPAs) or mobile numbers, which create a level playing field for payment apps and services. Similarly, an open network of AI models would allow developers to swap between models from different providers through a Unified Model API without rewriting their application's core logic. Each model could be assigned a Virtual Model Address, analogous to a VPA in UPI, for routing requests to the correct model. Such an architecture would also allow users to switch seamlessly between models from different providers based on accuracy, speed or specific domain requirements.
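The idea of swapping models behind one interface can be sketched in a few lines of Python. This is a minimal illustration, not a proposed implementation: the Virtual Model Address format (`model@provider`), the `UnifiedModelClient` class and the stand-in providers are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical provider adapters: each wraps one provider's native API
# behind the same call signature (prompt in, text out).
ProviderFn = Callable[[str], str]

@dataclass
class UnifiedModelClient:
    """Routes requests by a Virtual Model Address, e.g. 'summariser@provider-a'."""
    registry: Dict[str, ProviderFn]

    def generate(self, vma: str, prompt: str) -> str:
        if vma not in self.registry:
            raise KeyError(f"No model registered at address {vma!r}")
        return self.registry[vma](prompt)

# Two stand-in "providers" with different native behaviour, wrapped identically.
client = UnifiedModelClient(registry={
    "summariser@provider-a": lambda p: f"[A-summary] {p[:20]}",
    "summariser@provider-b": lambda p: f"[B-summary] {p[:20]}",
})

# The application's core logic never changes when the model is swapped:
print(client.generate("summariser@provider-a", "Quarterly sales rose."))
print(client.generate("summariser@provider-b", "Quarterly sales rose."))
```

Switching providers here is a one-string change in the address, which is exactly the property that lets applications shop between models without re-integration.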

Implementing such a Unified Interface (UI) for AI would require several steps. First, standardization is required to address the issue of API fragmentation in the AI ecosystem. This would involve defining compliant APIs, model data and Input-Output (IO) formats, and a virtual address system for each model. A universal standard would also require broad industry buy-in, which can be secured through a combination of policy measures and engagement with the industry.
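What a standardised request envelope and virtual address might look like can be sketched as follows; the field names, task names and `model@provider` address scheme are assumptions for illustration, not a published standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelRequest:
    """A hypothetical compliant request envelope: every model, regardless of
    provider, accepts this one shape, so integrations are written once."""
    address: str          # Virtual Model Address, e.g. 'legal-qa@provider-x'
    task: str             # standardised task name: 'generate', 'classify', ...
    input_text: str
    max_tokens: int = 256

def parse_address(address: str) -> tuple:
    """Split 'model@provider' into its two routing components."""
    model, _, provider = address.partition("@")
    if not model or not provider:
        raise ValueError(f"Malformed Virtual Model Address: {address!r}")
    return model, provider

req = ModelRequest(address="legal-qa@provider-x", task="generate",
                   input_text="Summarise clause 4.")
wire_format = json.dumps(asdict(req))         # what travels over the unified API
model, provider = parse_address(req.address)  # what the router uses
```

Agreeing on one such envelope is the crux of the standardisation step: once the IO format is fixed, every compliant model becomes a drop-in replacement for every other.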

Secondly, a central routing and arbitration layer, incorporating a central public registry of AI models, would need to be created to route users' requests intelligently to the best-suited model based on performance, cost or domain-specific requirements. This central routing layer would also ensure interoperability.
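A toy version of such registry-based arbitration can be shown in a short sketch. The registry entries, scoring fields and selection rule below are invented for illustration; a real arbitration layer would use published benchmarks and provider-declared pricing.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    address: str       # Virtual Model Address
    domain: str        # e.g. 'legal', 'medical', 'general'
    accuracy: float    # benchmark score in [0, 1]
    cost_per_call: float

# A toy central public registry; real entries would come from onboarded providers.
REGISTRY = [
    ModelEntry("qa-general@prov-a", "general", 0.82, 0.010),
    ModelEntry("qa-legal@prov-b",   "legal",   0.91, 0.025),
    ModelEntry("qa-legal@prov-c",   "legal",   0.88, 0.008),
]

def route(domain: str, max_cost: float) -> str:
    """Pick the most accurate registered model for the domain within budget."""
    candidates = [m for m in REGISTRY
                  if m.domain == domain and m.cost_per_call <= max_cost]
    if not candidates:
        raise LookupError(f"No {domain} model within cost {max_cost}")
    return max(candidates, key=lambda m: m.accuracy).address

# A cost-capped legal query is steered to the cheaper legal model;
# a larger budget lets accuracy win instead.
print(route("legal", max_cost=0.01))
print(route("legal", max_cost=0.05))
```

The design point is that the caller states requirements (domain, budget), not a model name, and the arbitration layer resolves the request to a concrete Virtual Model Address.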

Thirdly, a governance and regulatory compliance layer would need to be created to ensure fair access, maintain security and prevent misuse. This would involve defining security standards for authentication, data privacy and encryption to protect sensitive data shared with the AI models. It would also require a compliance framework to be put in place with clear rules to address AI bias, transparency and accountability. A dispute resolution mechanism would also need to be established.

Last, but not least, a systematic drive for adoption of the UI for AI would need to be undertaken to ensure that all model providers are onboarded through a broad consensus on uniform API standards. Continued engagement with them would also be required to keep the ecosystem robust and secure. Start-ups and application developers can be offered incentives to build applications on top of the unified interface. Government ministries and departments can train specific AI models on their own domain datasets and undertake large-scale development of AI-driven applications for their use cases using this unified AI interface. This would drastically cut down the time required to develop their applications and go live with them.

The concept of a “UPI for AI” as a digital public infrastructure is both feasible and desirable and needs to be actively pursued to simplify model access and enhance interoperability. This would also encourage innovation in AI technologies, model training and their deployment across a large number of use cases. For maximum impact and accessibility, such an endeavour needs to be undertaken by the government through its IndiaAI Mission. This would also help in ensuring security, data privacy and regulatory compliance.

(The above article appeared on October 30, 2025 in The Economic Times online. It is available at: https://economictimes.indiatimes.com/tech/artificial-intelligence/create-a-upi-for-ai-as-a-digital-public-infrastructure/articleshow/124943526.cms?from=mdr)

(The author is a senior IAS officer and currently the Secretary, Department of Border Management, Government of India. The views are personal.)

Transforming Governance with a Unified AI Stack

Source: Generated through AI by the author

With rapid advancements in artificial intelligence (AI), organisations are scrambling to implement the technology in their business processes and service delivery frameworks to improve efficiency and enhance citizen experiences.

AI is set to impact nearly every sector, but to harness its potential, organisations need an ‘AI-first’ strategy that includes scalable, flexible AI solutions for business transformation. This requires an integrated AI stack — comprising infrastructure, data, AI models, and applications — enabling AI deployment across various use cases. Can such an AI stack be developed as a digital public infrastructure (DPI) by the government to provide seamless, proactive services to citizens and businesses?

To create a DPI, it is essential to understand the components of an enterprise-level AI stack. The foundation of this stack is a compute infrastructure layer, which includes compute capacity, storage, networking and tools for developing, training and deploying AI models. This layer would utilise Graphics Processing Units (GPUs), Central Processing Units (CPUs) and Tensor Processing Units (TPUs) optimised for AI workloads. Cloud platforms offer scalability, while edge computing may be necessary for real-time services in remote or low-bandwidth environments.

The second layer is the data layer, which focuses on collecting, storing, cleaning and annotating data for use by the AI models. Data security and compliance with privacy laws must be ensured through encryption, anonymisation and access control. Data comes from various sources such as structured and unstructured databases, the web, the Internet of Things (IoT), Application Programming Interfaces (APIs), etc. It must be cleaned and prepared for AI model training to enhance accuracy and fairness. Ministries and departments have created huge databases under the Digital India programme that can be shared to train AI models for delivering predictive and proactive services to citizens and businesses.
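The anonymisation and cleaning step in the data layer can be sketched with standard one-way hashing. The field names, the inline salt and the pseudonym length are illustrative assumptions; a production pipeline would manage secrets properly and apply fuller de-identification.

```python
import hashlib

def anonymise_record(record: dict, pii_fields=("name", "phone", "aadhaar")) -> dict:
    """Replace direct identifiers with salted one-way hashes before a record
    enters an AI training corpus. Field names here are illustrative."""
    salt = "department-secret-salt"  # in practice, a managed secret, not a literal
    out = {}
    for key, value in record.items():
        if value in (None, ""):
            continue                 # basic cleaning: drop empty fields
        if key in pii_fields:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]   # stable pseudonym, not reversible
        else:
            out[key] = value
    return out

raw = {"name": "A. Sharma", "phone": "9876543210",
       "district": "Pune", "income": None}
clean = anonymise_record(raw)
```

Because the hash is deterministic, the same citizen maps to the same pseudonym across datasets, preserving analytical value while removing the identifier itself.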

The next layer is the model development layer, which focuses on designing and training models on the processed data from the data layer to address specific use cases, such as text or image/video generation, predictive analytics, etc. This involves selecting suitable AI frameworks, libraries and algorithms for the type of AI tasks involved, and optimising and validating the models. Many open-source options, including pre-trained foundational models, can be customised for specific domains. However, developing indigenous foundational models is crucial to ensuring strategic autonomy and creating world-class capabilities within the Indian technology ecosystem. This doesn’t need to be resource-intensive, as demonstrated by DeepSeek.

The developed AI model is then deployed or exposed through APIs or microservices, enabling integration with enterprise systems and web and mobile applications. Next comes the application layer, which integrates AI models into real-world systems to deliver AI-enhanced products and services. This may involve reengineering business processes, automating tasks and redesigning user interfaces. For example, an AI application for predictive analytics might generate advance warnings for heavy traffic at specific locations during peak hours and send automated alerts for immediate action.
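The deployment pattern described above — a trained model exposed as a JSON-in, JSON-out endpoint — can be sketched as a single handler function. The traffic-prediction rule, field names and response shape are stand-ins for illustration; any web framework or WSGI server could wrap such a function.

```python
import json

def predict_traffic(location: str, hour: int) -> str:
    """Stand-in for a trained predictive model; a real deployment would load
    the trained model artefact instead of this hard-coded rule."""
    return "heavy" if location == "ring-road" and 8 <= hour <= 10 else "normal"

def handle_request(body: str) -> str:
    """A minimal microservice-style endpoint: JSON request in, JSON response
    out, so enterprise systems and mobile apps integrate the same way."""
    payload = json.loads(body)
    level = predict_traffic(payload["location"], payload["hour"])
    response = {"location": payload["location"], "hour": payload["hour"],
                "traffic": level, "alert": level == "heavy"}
    return json.dumps(response)

print(handle_request('{"location": "ring-road", "hour": 9}'))
```

Keeping the model behind a narrow JSON contract like this is what lets the application layer evolve, or the model be retrained and swapped, without touching the integrating systems.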

Finally, the AI stack also needs to have a governance layer to ensure that the associated risks, if any, are managed and trust is built in the AI systems. The government’s IndiaAI Mission should focus on creating a common AI stack as DPI, which all ministries and departments can use to build their own AI applications. This will prevent duplication of efforts and resources and create a vibrant innovation ecosystem focused on transforming public services with an ‘AI-First’ strategy. The AI stack could also be made available to startups and the private industry to promote collaborative development and deployment of AI applications.

(The above article appeared in The Economic Times on February 9, 2025. It is available at https://economictimes.indiatimes.com/tech/artificial-intelligence/transforming-governance-with-a-unified-ai-stack/articleshow/118073623.cms?from=mdr. The views expressed are personal.)