Artificial intelligence is being positioned as a central driver of India’s developmental transformation, with a strategy that combines technological capability, inclusion, and sovereign capacity. The proposed AI governance architecture seeks to ensure that AI adoption expands across key sectors while remaining trusted, accountable, and aligned with national priorities.
India’s AI Vision And Strategic Direction
- Development-centred AI approach: India’s AI strategy is built around democratisation, scale, and inclusion so that AI benefits are distributed across sectors such as agriculture, healthcare, education, governance, manufacturing, and climate action.
- AI for broad-based transformation: The national approach seeks to combine sovereign capability, open innovation, public digital infrastructure, indigenous model development, and affordable compute to drive productivity, inclusion, and strategic autonomy.
- Alignment with long-term national goals: AI is being positioned as an instrument of economic transformation, social empowerment, and strategic self-reliance within the broader aspiration of Viksit Bharat 2047.
Existing Achievements And Ecosystem Expansion
- National compute expansion: Under the IndiaAI Mission, more than 38,000 GPUs have been onboarded through a subsidised national compute facility, with a target of 100,000.
- Growth of data and model resources: AIKosh hosts over 9,500 datasets and 273 sectoral models, strengthening the ecosystem for indigenous model development.
- Supercomputing backbone: The National Supercomputing Mission has operationalised more than 40 petaflop-scale systems, including AIRAWAT and PARAM Siddhi-AI.
- Human resource development: IndiaAI and FutureSkills initiatives are supporting 500 PhDs, 5,000 postgraduates, and 8,000 undergraduates to strengthen the AI talent pipeline.
- Grassroots innovation network: The country has established 570 AI Data Labs and 27 IndiaAI labs across states, while 174 ITIs have been approved across 27 States and Union Territories.
- AI adoption across the innovation ecosystem: Nearly 90 per cent of startups are integrating AI in some form, indicating deepening diffusion of AI within India’s startup landscape.
- Mass AI literacy efforts: The YUVA AI for ALL foundational course has been launched to broaden public familiarity with AI technologies.
- Curricular integration: AI-linked curriculum has been incorporated under the National Education Policy 2020 to prepare an AI-ready workforce from an early stage.
Foundations Of The Governance Framework
- Institutional beginnings: The Ministry of Electronics and Information Technology constituted a drafting committee in July 2025 to develop an AI governance framework for India.
- Mandate of the drafting process: The Committee was tasked with drawing upon existing laws, global developments, available literature, and public feedback while framing the guidelines.
- Four-part framework structure: The framework is organised into four parts covering the seven sutras, major issues and recommendations, an action plan, and practical guidelines for industry actors and regulators.
- Principle-based orientation: The framework adopts a principle-based and techno-legal approach intended to remain cross-sectoral, technology-neutral, flexible, and future-ready.
The Seven Guiding Sutras
- Trust is the Foundation: Trust is treated as the core condition for innovation, adoption, and risk mitigation, and must extend across technology, organisations, supervisory institutions, and user conduct.
- People First: AI systems must strengthen human agency, reflect societal values, and retain meaningful human control wherever possible through appropriate oversight.
- Innovation over Restraint: Governance should actively encourage AI-led innovation as a pathway to socio-economic development, competitiveness, and resilience, while ensuring that innovation remains responsible.
- Fairness and Equity: AI systems should be designed and evaluated to avoid bias and discrimination, especially against marginalised communities, while also being used to advance inclusion.
- Accountability: AI developers and deployers should remain visible and answerable, with accountability assigned according to function, risk, and due diligence responsibilities.
- Understandable by Design: AI systems should provide explanations and disclosures, to the extent technically feasible, so that users and regulators can understand how they operate and what outcomes they are intended to produce.
- Safety, Resilience and Sustainability: AI systems should be robust, equipped with safeguards and anomaly-detection capacities, and developed in ways that are environmentally responsible and resource-efficient.
Infrastructure And Foundational Capacity
- Infrastructure as a governance priority: India’s AI governance approach treats compute access, datasets, foundational models, and application deployment as core conditions for innovation and safe adoption.
- Role of digital public infrastructure: AI integration with Digital Public Infrastructure is seen as an important pathway for scalable and inclusive deployment.
- Need for continued investment: Sustained growth requires ongoing investment in scalable infrastructure, equitable access to data and compute, and strong institutional capacity.
- Data governance and portability: Improved data sharing is to be supported through stronger governance frameworks and portability standards.
- Locally relevant datasets: The framework stresses the importance of culturally representative datasets for building AI models suited to Indian contexts.
- Evaluation and safety support: Access to evaluation datasets and compute resources is considered necessary for both deployment and safety testing.
Capacity Building And Social Preparedness
- Scaling human capital: Existing initiatives have created a strong base, but further expansion is needed to meet the demands of inclusive growth and wider AI adoption.
- Public-sector readiness: Government officials and regulators require stronger technical capacity for informed procurement, responsible use, and effective risk management.
- Law enforcement preparedness: The Committee recommends strengthening the ability of law enforcement agencies to identify and address AI-enabled crimes.
- Regional and vocational inclusion: AI skilling efforts are to be expanded further in vocational institutes and in tier-2 and tier-3 cities.
- Public trust through awareness: Regular training programmes and awareness campaigns are viewed as necessary to deepen public understanding and trust in AI.
Policy And Regulatory Foundations
- Reliance on existing legal architecture: The framework recognises that many AI-related risks can already be addressed through current constitutional, statutory, regulatory, and guideline-based mechanisms.
- Coverage across legal domains: Existing legal foundations span information technology, data protection, intellectual property, competition, media, employment, consumer protection, and criminal law.
- Need for legal review: A comprehensive review is still required to identify regulatory gaps involving classification, liability, data protection, content authentication, copyright use in AI training, and sector-specific risks.
- Challenge of rapid technological change: The increasing autonomy and rapid evolution of AI systems make it difficult for regulatory frameworks to remain timely, coherent, and future-ready.
- IndiaAI Mission as policy anchor: The IndiaAI Mission is positioned as a foundation for AI sovereignty, compute democratisation, indigenous model development, and responsible capacity building.
- Digital regulation backbone: The IT Rules, 2021 and later amendments provide the baseline intermediary liability and enforcement structure for AI-related harms within the digital space.
- Data protection support: The Digital Personal Data Protection Act, 2023 provides accountability and lawful processing norms for AI systems handling personal data.
- Synthetic content governance: The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 address AI-generated and deepfake content.
Risk Landscape And Mitigation Strategy
- Centrality of risk mitigation: Risk mitigation is presented as the bridge between broad governance principles and practical safeguards against harm.
- Nature of AI-related risks: Because AI systems are probabilistic, generative, adaptive, and agentic, they may create new harms or intensify existing ones across individuals, markets, and society.
- Forms of harm identified: Risks include misinformation, cyberattacks, bias and discrimination, opacity in personal data use, concentration-related systemic risks, loss of control, and threats to national security and critical infrastructure.
- Vulnerable groups at higher risk: Children may face harms through exploitative recommendation systems, while women are identified as being disproportionately targeted by AI-generated deepfakes.
- Need for India-specific assessment: The framework calls for a context-specific risk assessment and classification system grounded in empirical evidence of real-world harms in India.
- Importance of incident learning: A structured mechanism for collecting and analysing AI-related incidents is seen as necessary for anticipating risks, designing safeguards, and ensuring accountability.
Existing Risk Management Institutions
- Cyber incident response: CERT-In serves as the national agency for cyber incident response, coordination, and real-time threat advisories.
- Cybercrime coordination: The Indian Cyber Crime Coordination Centre has been established to combat cybercrime in a coordinated and comprehensive manner.
- Protection of critical infrastructure: The National Critical Information Infrastructure Protection Centre acts as the nodal body for safeguarding strategic information infrastructure.
- Role of sectoral regulators: Institutions such as the RBI, SEBI, and IRDAI already enforce technology, cybersecurity, and risk-management norms in their respective domains.
- Threat monitoring and data protection: The National Cyber Coordination Centre strengthens real-time threat monitoring, while the Data Protection Board of India serves as the statutory enforcement body for data protection compliance and accountability.
Accountability And Compliance Architecture
- Accountability as a governance backbone: The framework treats accountability as essential to effective AI governance, though difficult to operationalise in practice.
- Limits of current voluntary models: Voluntary mechanisms are considered inadequate on their own because they lack legal enforceability and clear liability allocation.
- Gaps in user protection: Users often do not have sufficiently accessible grievance redressal systems, and organisational transparency across the AI value chain remains limited.
- Uncertainty from adaptive systems: Because AI systems may produce unexpected outcomes, governance must balance enforceability with the need to preserve space for responsible innovation.
- Foundational legal support: The IT Act, 2000 provides the basic framework for digital intermediaries, cyber offences, and platform liability across the AI value chain.
- Consent and fiduciary obligations: The DPDP Act, 2023 introduces consent-based data processing and accountability standards for AI systems using personal data.
- Redressal mechanisms for synthetic harms: The IT Rules, 2021 and the IT Amendment Rules, 2026 provide grievance redressal and faster takedown timelines for AI-generated harms.
- Proposed graded liability: The Committee recommends obligations and liability that are proportionate to function, risk, and due diligence across AI actors.
- Transparency and compliance tools: It also supports transparency reports, audits, self-certifications, and stronger feedback-linked grievance systems.
Institutional Architecture And Coordination
- Need for coordinated governance: The framework argues for a whole-of-government approach to improve coherence, coordination, and strategic alignment across agencies.
- Current distribution of responsibilities: AI-related responsibilities are presently spread across multiple bodies, creating the need for stronger inter-agency integration.
- Nodal policy ministry: The Ministry of Electronics and Information Technology is identified as the apex ministry for AI policy development.
- Strategic advisory role of NITI Aayog: NITI Aayog functions as the anchor institution for India’s National AI Strategy and supports cross-sectoral coordination and policy vision.
- AI Governance Group: The proposed AI Governance Group is intended to coordinate overall policy development and align governance frameworks with national priorities.
- Technology and Policy Expert Committee: The proposed TPEC is meant to provide expert inputs to the AI Governance Group on major national and international issues relating to AI governance.
- IndiaAI Safety Institute: The framework calls for adequate resources to the IndiaAI Safety Institute for research, standard-setting support, evaluation metrics, testing methods, and technical guidance for regulators and industry.
- Whole-of-government model: The broader coordination framework is meant to bring together ministries, sectoral regulators, standards bodies, and public institutions so that AI governance is cohesive and avoids duplication.
Phased Action Plan
- Short-term priorities: Immediate steps include establishing the AI Governance Group (AIGG) and the Technology and Policy Expert Committee (TPEC), developing India-specific risk frameworks, conducting regulatory gap analysis, issuing master circulars, preparing incident databases and grievance systems, expanding foundational infrastructure, and launching public awareness programmes.
- Medium-term reforms: The medium-term roadmap includes publishing common standards, operationalising a national AI incidents database, amending laws where needed, piloting regulatory sandboxes in high-risk domains, and supporting AI integration with DPI.
- Long-term direction: Over the long term, the framework envisages continuous review, adoption of new laws for emerging risks, expanded international engagement, and horizon-scanning and scenario planning for future risks and opportunities.
- Expected governance trajectory: The phased approach is designed to institutionalise AI governance, build trust, enable safe innovation, improve compliance, and create a future-ready ecosystem with stronger resilience and accountability.
Practical Guidelines For Industry And Regulators
- Obligation to comply with Indian law: All persons involved in developing or deploying AI systems are expected to comply with applicable Indian laws, including those relating to information technology, data protection, copyright, consumer protection, and offences affecting women, children, and other vulnerable groups.
- Duty to demonstrate compliance: Industry actors should be able to show compliance with applicable legal and regulatory requirements when called upon by relevant agencies or sectoral regulators.
- Use of voluntary safeguards: The Committee encourages the adoption of voluntary principles, codes, and standards on privacy, security, fairness, inclusivity, non-discrimination, and transparency.
- Grievance redressal responsibility: Developers and deployers should create mechanisms through which AI-related harms can be reported and resolved within a reasonable period.
- Transparency reporting: Industry participants are advised to publish transparency reports evaluating the risk of harm to individuals and society in the Indian context, while sensitive portions may be shared confidentially with regulators.
- Techno-legal mitigation tools: The framework encourages privacy-enhancing technologies, machine unlearning, algorithmic auditing, and automated bias detection as tools for risk mitigation.
- Innovation-and-risk balance for regulators: Regulatory frameworks should simultaneously support innovation and distribute AI’s benefits while addressing risks through suitable policy tools.
- Preference for agile governance: Governance frameworks should remain flexible, subject to periodic review, monitoring, and recalibration in response to stakeholder feedback.
- Focus on real and present harms: Regulators are advised to prioritise interventions where there is actual or imminent harm to life, livelihood, or well-being.
- Avoidance of excessive compliance burdens: Mandatory approvals, licensing conditions, and similarly heavy requirements should generally be avoided unless clearly necessary.
- Least burdensome policy instruments: Regulators should choose the most useful and least burdensome instrument for the objective, whether through industry codes, technical standards, advisories, or binding rules.
- Use of techno-legal methods in regulation: Where policy objectives around privacy, fairness, cybersecurity, and transparency already exist, regulators should promote techno-legal approaches to meet them effectively.
India’s AI governance framework seeks to create a balanced model that supports innovation while building trust, accountability, and social safeguards. By combining guiding principles, phased institution-building, risk-sensitive regulation, and practical compliance expectations, the framework aims to ensure that AI development remains inclusive, resilient, and aligned with national priorities.