E-AIPOWW’s India Tracker shows AI policy and regulation work across India; business activities around innovation and development of AI; and civil society responses including unions, NGOs and IGOs, where workers are impacted.

E-AIPOWW’s India Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.

India tracker

The Tracker collates documents relating to the government proposals, papers, and other policy documents regarding legislation around artificial intelligence in the world of work.


Report

India has outlined a bold vision to leverage artificial intelligence (AI) for economic prosperity and social inclusion. The government’s “AI for All” mantra envisions using AI as a catalyst for inclusive development, ensuring benefits reach all sectors (NITI Aayog, 2018; Press Information Bureau [PIB], 2023a). AI is seen as a transformative tool to accelerate growth in agriculture, healthcare, education, smart cities, and small businesses, addressing developmental challenges while enhancing India’s digital economy (Office of the Principal Scientific Adviser [PSA], 2025). The AI for India 2030 initiative, launched in January 2024, aims to integrate AI across sectors to boost India’s GDP by $500 billion by 2030 (Kaushik et al., 2025). This initiative, co-led by the Ministry of Electronics and IT (MeitY) with partners like the National Association of Software and Service Companies (NASSCOM) and the World Economic Forum (WEF), supports India’s goal to be a global leader in AI innovation. Overall, India’s AI vision connects technological advancement with national development goals, addressing social challenges and driving economic growth together.

An enabling policy landscape is crucial for realising this vision, balancing innovation with safeguards. India’s AI governance is evolving. Officials maintain a “pro-innovation” position; in 2023, the government announced no additional regulations on AI to promote growth without hindrance (MeitY, 2023a). Consistent with this, MeitY supports flexible guidelines over strict rules. Voluntary ethical frameworks, like the 2021 Responsible AI guidelines, promote transparency, accountability, and fairness in AI development (NITI Aayog, 2021a, b). At the same time, India is advancing its digital governance through the significant Digital Personal Data Protection Act, 2023, which provides crucial data privacy safeguards for AI applications (Government of India [GOI], 2023). Suggested regulations for AI include supervision of “high-risk” AI systems in MeitY’s draft for the forthcoming Digital India Act (GOI, 2024). A brief 2024 advisory that would have required government approval for certain AI models was quickly retracted after industry pushback, highlighting ongoing policy adjustments (MeitY, 2024a). Overall, regulators are developing a coherent AI policy framework that fosters innovation and investment while establishing norms for data protection, ethical use, and risk mitigation (Morgan Lewis, 2024).

India's AI agenda is driven by multiple national initiatives and institutions. NITI Aayog, the government’s policy think-tank, established a National Strategy for AI in 2018, highlighting priority sectors and the “AI for All” vision (NITI Aayog, 2018; PIB, 2023a). Recently approved, the National AI Mission – IndiaAI – aims to create a comprehensive AI ecosystem. It focuses on strengthening core infrastructure, developing indigenous capabilities, nurturing talent, supporting startups, and promoting industry-academia partnerships (MeitY, 2024b). The mission emphasises socially impactful and ethical AI, aligning solutions with India’s needs and values. The AI for India 2030 blueprint develops “AI playbooks” and sandboxes to facilitate adoption in agriculture and MSMEs, demonstrating the intent to democratise AI benefits (Kaushik et al., 2025). Key institutional actors support these missions: MeitY leads digital technology policy and IndiaAI; NITI Aayog informs strategic direction; and the Office of the Principal Scientific Adviser coordinates cross-ministry efforts and leads consultations on AI regulations and safety standards (MeitY, 2025a, NITI Aayog, 2018; PSA, 2025). This ecosystem mobilises resources and stakeholder engagement to realise India’s AI vision.

Commitment to responsible AI is a unifying theme across India’s AI strategy. The government insists AI advancement must pair ethics and inclusion: Prime Minister Modi affirmed India seeks to maximise AI’s capabilities for social good while ensuring responsible and equitable use (Modi, 2023). Concrete steps reinforce this ethos. Under the IndiaAI program’s “Safe and Trusted AI” pillar, authorities announced the creation of a dedicated IndiaAI Safety Institute to research AI safety and address risks like bias, deepfakes, and cyber threats (MeitY, 2025b). This institute will collaborate with academia, industry, and government to develop AI security and trust frameworks tailored to India’s diverse context. India is also engaging in global AI governance, having been a founding member of the Global Partnership on AI (GPAI) in 2020 and taking on leadership roles, including hosting the GPAI 2023 summit in New Delhi (Center for AI and Digital Policy [CAIDP], 2025; GPAI, 2023). During its G20 Presidency in 2023, India championed inclusive and human-centric AI principles in international forums. Through such multilateral engagement, India positions itself as a voice for the developing world, advocating AI for sustainable development and pushing for international norms that ensure no one is left behind in the AI era.

India’s AI trajectory reflects ambition and accountability. The country envisions AI as a driver of economic growth and societal progress, supported by a policy framework that fosters innovation, ensures responsibility, and enhances institutional capacity. The report will further explore India’s AI landscape through three sections: Regulation, Development, and Governance of AI in India. The Regulation section discusses India’s legal and policy landscape, including the origins, status, provisions, workforce relevance, and enforcement challenges. The Development section covers public, private, and public–private initiatives, the startup ecosystem, international partnerships, national infrastructure innovations, and their implications for work. The Governance section details institutional frameworks, responsibilities for AI oversight, guiding legal and ethical frameworks, multi-stakeholder governance, implementation mechanisms, India’s global engagement in AI governance, critical challenges, and implications for the workforce.

"The “deemed consent” for employment allows various uses of employee data (salary payments, performance monitoring, background checks) without explicit consent. While this reduces administrative burden, it could enable intrusive surveillance or data processing without worker opt-in, as deemed "necessary for employment" by the employer."

Regulation

AI in India is evolving with multiple laws and initiatives. Effective regulations can help AI support human labour and create jobs, while weak enforcement may cause job displacement and inequality. This section outlines the major frameworks governing AI in India, examining each framework’s origin, legal status, core provisions, workforce relevance, and enforcement challenges.

Information Technology Act, 2000 (IT Act)

Origin and Status

The Information Technology Act, 2000 is India’s key cyber law, enacted on October 17, 2000 (MLJCA, 2000). Amended in 2008 to enhance data security and intermediary liability, the IT Act now seems outdated more than two decades on, as its initial focus on electronic transactions and hacking predates the emergence of AI-driven online services (GOI, 2009).

Core Provisions

The IT Act defines offences like unauthorised access, data theft, identity fraud, and cybercrimes, establishing intermediary liability under Section 79. This section grants “safe harbour” immunity to online intermediaries for user-generated content, provided they remove unlawful content when notified. Section 43A holds corporate bodies liable to compensate individuals for failing to implement “reasonable security practices” for sensitive personal data; together with the Privacy Rules (2011), it requires companies to protect personal data and privacy (MeitY, 2011).

Relevance and Implications for Work

The IT Act’s protections support workers’ rights in the digital economy. Section 43A mandates data protection, requiring employers to secure sensitive personal information and allowing claims for damages from data breaches. Provisions on cyber offences tackle identity theft and fraud, harms that also affect workers. The safe harbour clause (Section 79) has shaped online working conditions by shielding platforms from liability for user posts or algorithmically curated content, influencing the management of gig workers and content moderation policies. Critics note that blanket immunity lets platforms avoid accountability for algorithmic harms (like the spread of misinformation or biased content) that can fuel societal discord, impacting workers (NDTV, 2023). The IT Act’s focus on electronic transactions and cybercrime means its labour implications are narrow, mainly addressing privacy and security in a generic sense.

Gaps and Enforcement Challenges

The IT Act’s limitations are clear as it does not address AI systems, automated decision-making, or algorithmic accountability, resulting in a regulatory gap. Enforcement of its cyber provisions is inconsistent, and prosecutions often lack technical expertise. The safe harbour regime in India faces criticism for allowing platforms to evade responsibility; the government questions the viability of a broad safe harbour since intermediaries “have morphed into multiple types of platforms…functionally very different…and requiring different guardrails” (NDTV, 2023). In 2023, the Union IT Minister announced the introduction of a new Digital India Act to tackle modern internet business realities, AI, and user harms (Chandrasekhar, 2023). While the IT Act provided foundational privacy protections, it is inadequate for governing AI and today’s data-driven complexities, necessitating new legislation.

Draft India Data Accessibility and Use Policy, 2022

Origin and Scope

The Draft India Data Accessibility and Use Policy, released by MeitY in February 2022, aimed to enhance access and sharing of government data as part of India’s open data initiatives (MeitY, 2022a). It proposed licensing and selling anonymised datasets to the private sector and researchers to foster innovation and improve public services. This draft sought to monetise non-personal government data for a broader ecosystem. The controversial proposal was never implemented and was eventually scrapped in favour of a revised policy framework. MeitY issued a Draft National Data Governance Framework Policy (NDGFP) in May 2022 to replace the now-scrapped India Data Accessibility and Use Policy (MeitY, 2022b). The 2022 draft thus remains a proposal that informed later policy rather than a binding policy itself.

Core Provisions

The 2022 Data Accessibility draft aimed to institutionalise data sharing through an India Data Office (IDO) under MeitY. This office would oversee implementation and assign data stewardship roles in all government departments. It proposed a centralised repository, the India Datasets Program, for sharing all non-personal data from government sources and encouraged private companies to contribute anonymised data. Importantly, the draft allowed data collected by the Centre that has undergone value addition to be sold in the open market for an appropriate price, enabling departments to license citizens’ anonymised data. The goal was to foster a data economy, driving AI research, startups, and evidence-based policy by utilising India’s extensive data resources. The draft also emphasised that IDO would set data anonymisation and quality standards.

Relevance and Implications for Work

If implemented, this policy could have profoundly affected workers and workplaces. Opening datasets related to labour markets, public employment, and workplace safety promised enhanced transparency and informed decisions. Workers and unions might access crucial aggregated data (e.g., job openings, wages, workforce demographics) for collective bargaining or advocacy. The draft’s focus on data portability and sharing could enable employees to request anonymised workplace data to demonstrate discrimination or exploitation. Furthermore, startups leveraging open government data could develop new services for job matching, skill enhancement, or improved working conditions. However, the policy raised privacy concerns. Without personal data protection laws in early 2022, there was worry that “anonymised” data could be re-identified, risking individuals’ privacy. Government-collected employee data could be misused if sold. Thus, while increased data accessibility could promote innovation and support pro-worker policies, it also presents risks of privacy violations without strong data protection.
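The re-identification worry can be made concrete with a toy check. The Python sketch below is purely illustrative (the draft policy prescribed no tooling, and the dataset fields are invented): it measures the k-anonymity of a released table over common quasi-identifiers, where any group smaller than k marks records that could be singled out by joining the table with other public data.

```python
from collections import Counter

# Hypothetical "anonymised" release: direct identifiers removed, but
# quasi-identifiers (pincode, age band, occupation) retained.
records = [
    {"pincode": "110001", "age_band": "30-39", "occupation": "driver"},
    {"pincode": "110001", "age_band": "30-39", "occupation": "driver"},
    {"pincode": "560095", "age_band": "20-29", "occupation": "teacher"},
]

QUASI_IDENTIFIERS = ("pincode", "age_band", "occupation")

def k_anonymity(rows, quasi_ids):
    """Return the size of the smallest quasi-identifier group.

    A table is k-anonymous if every combination of quasi-identifier values
    is shared by at least k rows; a group of 1 is trivially re-identifiable
    once linked with voter rolls, payrolls, or other public datasets.
    """
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

print(f"release is {k_anonymity(records, QUASI_IDENTIFIERS)}-anonymous")  # 1-anonymous here
```

Here the lone teacher record is unique on all three attributes, so stripping names alone did not anonymise it; this is precisely the gap critics highlighted in the absence of a data protection law.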

Gaps, Overlaps and Outcome

The draft policy faced backlash from privacy advocates and civil society who argued it released anonymised government data for broad use “without a data protection law” (Drishti, 2022). Critics raised ethical concerns about monetising citizen data. Consequently, the draft was withdrawn within months in 2022 (Barik, 2022). MeitY transitioned to the NDGFP, taking a cautious approach. The NDGFP (2022–23) excluded the sale provision, focusing on government data, emphasising voluntary data sharing, and establishing the India Data Management Office (IDMO) to oversee a non-personal data programme (MeitY, 2022b). However, challenges remain: without mandates, companies may hesitate to share data for fear of losing competitive edge, and the IDMO’s governance is unclear (Drishti, 2022). The Data Accessibility Policy’s timeline also overlapped with work on the personal data protection law, and data-sharing plans were effectively deferred until the Digital Personal Data Protection Act was in place. The 2022 draft aimed for an open-data ecosystem—beneficial for AI development and the public—but revealed tensions between innovation and privacy, prompting a correction to align India’s data governance with global privacy norms.

Digital Personal Data Protection Act, 2023

Origin and Legal Status

The Digital Personal Data Protection Act, 2023 (DPDP Act) is India’s first comprehensive data protection law (MeitY, 2023b). Enacted in August 2023, the Act is not yet fully operational, with sections being notified in phases (DLA Piper, 2025; Internet Freedom Foundation [IFF], 2025). It follows earlier draft bills from 2019 and 2021 that were debated and withdrawn. By enacting this law, India fulfilled a Supreme Court mandate (after the 2017 privacy judgment, SCI, 2017) to create a statutory regime for personal data protection. The Act covers all of India and applies to digital personal data processed outside India if it’s for offering goods/services within the country. It focuses on “digital” personal data, meaning electronic information about individuals – offline paper data is excluded. This narrows the scope compared to earlier drafts, aimed at easing compliance for businesses.

Core Provisions

The DPDP Act lays down the rules for the processing of personal data, centred on principles of consent, purpose limitation, and data minimisation. Key provisions include (Burman, 2023):

  • Consent and “Legitimate Uses”: Data fiduciaries (controllers) must obtain consent to process personal data unless a “legitimate use” applies. The Act introduces “deemed consent” for specific cases, allowing processing without explicit consent for employment-related purposes, public interest, or emergencies. Employers can lawfully process employees’ personal data without needing consent each time, raising concerns about misuse (a minimal consent-check sketch follows this list).
  • Data Principal Rights: Individuals possess rights such as accessing information regarding data usage, requesting corrections or erasure of data, and redress for grievances; however, some rights are limited in comparison to the EU's GDPR, particularly the right to data portability.
  • Obligations on Data Fiduciaries: Entities processing personal data must implement security measures, inform users about data collection through notices, and delete data after its purpose is fulfilled. They must notify the Data Protection Board and affected users of breaches. Significant Data Fiduciaries (SDFs) have additional obligations, including appointing a Data Protection Officer (DPO) and conducting audits, thereby creating a compliance tier for large entities.
  • Data Transfer and Localisation: The 2023 Act permits cross-border data flows, allowing data to be processed abroad freely unless restricted by government designation of certain countries. This flexibility is viewed as advantageous for global business operations.
  • Exemptions: The Act grants exemptions for government actions related to national security, public order, or crime prevention. Some research, statistical, and archival activities are also exempt if they do not harm individuals. The government can exempt specific businesses or classes from provisions of the law, even suspending rules for up to five years without clear justification criteria.
  • Enforcement Mechanism: The Act establishes a Data Protection Board of India, appointed by the central government, to ensure compliance and penalise violations; this appointment process raises independence concerns. It can impose fines up to ₹250 crore (approximately $30 million) but does not allow direct lawsuits by individuals against data fiduciaries. This streamlined enforcement approach might limit individuals’ remedies.
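To make the consent and purpose-limitation logic above concrete, here is a minimal sketch in Python. It is not an official compliance schema: the DPDP Act defines duties, not data structures, and every name below (ConsentRecord, may_process, the legitimate-use labels) is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative stand-ins for the Act's "legitimate uses" / deemed-consent
# categories discussed above; the labels are invented, not statutory text.
LEGITIMATE_USES = {"employment", "medical_emergency", "state_function"}

@dataclass
class ConsentRecord:
    data_principal_id: str
    purpose: str                             # purpose limitation: consent is tied to one purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None  # data principals may withdraw consent

def may_process(purpose: str, consents: list) -> bool:
    """Sketch: processing is allowed via a legitimate use OR live, purpose-matched consent."""
    if purpose in LEGITIMATE_USES:           # deemed-consent style carve-out: no opt-in needed
        return True
    return any(c.purpose == purpose and c.withdrawn_at is None for c in consents)

consents = [ConsentRecord("emp-042", "payroll", datetime.now(timezone.utc))]
print(may_process("employment", consents))   # True: statutory carve-out, no explicit consent
print(may_process("marketing", consents))    # False: no consent recorded for this purpose
```

The point the sketch makes is the one critics raise: once a purpose falls under a deemed-consent category, the worker’s opt-in never enters the check at all.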

Implications for the World of Work

The DPDP Act brings essential protections for workers’ personal data with certain caveats. It holds employers legally accountable for collecting, using, and sharing workers’ personal information. Employees have basic data rights (access, correction, etc.), enabling them to understand and rectify inaccuracies in their data. For example, workers can request changes to erroneous HR records or delete unnecessary data. Companies must secure employee data; breaches of health records or payroll databases may incur penalties. The Act requires transparency, ensuring workers know how their personal data is processed and for what purpose. Furthermore, large employers must appoint Data Protection Officers, providing employees with a contact for privacy concerns and grievances.

However, certain provisions weaken worker empowerment. The “deemed consent” for employment allows various uses of employee data (salary payments, performance monitoring, background checks) without explicit consent. While this reduces administrative burden, it could enable intrusive surveillance or data processing without worker opt-in, as deemed "necessary for employment" by the employer. Employees may struggle to refuse or opt-out of workplace data processing; workplace consent has always been nominal due to power imbalances, and the Act solidifies this by removing consent in many contexts. Additionally, broad government exemptions can indirectly affect workers. For instance, if law enforcement requests employee data (for national security), individuals have no say or knowledge, as such processing can be exempt from consent or notice requirements (Burman, 2023).

Another implication is for the growing gig and platform workforce. Large digital platforms relying on personal data (like ride-hailing and food delivery) will be SDFs under the Act. They must appoint DPOs and conduct impact assessments to identify issues like algorithmic bias or privacy risks. This oversight could improve how gig worker data is handled and curtail abuses, such as excessive data collection or unfair automated decisions. However, the Act does not explicitly address automated decision-making, profiling, or AI transparency. Workers lack a right to explanation for AI-driven decisions (e.g., algorithmic firing or discipline), unlike laws in other jurisdictions. The Act focuses on classical data protection (consent, security, breach notification) over AI governance. While it lays the foundation for personal data privacy – a net positive for workers’ dignity and autonomy – it does not resolve many AI-related employment concerns, such as discrimination by AI systems or the need for human oversight in high-stakes decision-making.
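As a thought experiment, the human-oversight requirement the Act omits could look something like the sketch below: high-stakes automated actions are held for review and logged with the inputs behind them instead of taking effect directly. This is purely illustrative; neither the DPDP Act nor any current Indian rule requires such a mechanism, and all names in the code are invented.

```python
import json
from datetime import datetime, timezone

# Hypothetical policy: actions with serious consequences for a worker
# must pass through a human reviewer rather than execute automatically.
HIGH_STAKES = {"terminate", "suspend", "deactivate_account"}

def apply_decision(worker_id: str, action: str, model_inputs: dict, audit_log: list) -> str:
    """Queue high-stakes automated decisions for human review; log everything."""
    record = {
        "worker_id": worker_id,
        "action": action,
        "model_inputs": model_inputs,  # retained so the decision can later be explained
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": "pending_human_review" if action in HIGH_STAKES else "auto_applied",
    }
    audit_log.append(record)
    return record["status"]

log: list = []
print(apply_decision("w-17", "deactivate_account", {"rating": 3.1, "cancellations": 9}, log))
print(json.dumps(log[0], indent=2))
```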

Gaps and Challenges

The DPDP Act is critiqued for gaps that hinder enforcement. A major issue is the lack of an independent data protection authority; the Data Protection Board’s creation by the Executive raises concerns about its independence and capacity, impacting its impartiality in handling complaints, especially involving exempt government entities. The Act’s limited scope excludes non-digital data and allows many exemptions, leaving significant amounts of personal information unprotected, like manual HR records or small firms’ data. It also heavily relies on government rule-making for specifics, so much will depend on future rules and the Board’s approach. From a worker protection angle, the absence of collective redress is a drawback, as labour unions or groups cannot file complaints on behalf of workers; it all relies on the Board’s discretionary enforcement. Additionally, the relationship with sectoral regulations remains unresolved. Technologies like workplace surveillance and AI hiring might eventually be regulated by labour law or sector-specific rules, but until then, the DPDP Act may struggle with these complex issues. Enforcement will necessitate awareness and capacity-building; companies must train staff and upgrade systems for compliance while the Board addresses numerous grievances once operational. In summary, while the DPDP Act, 2023 enhances privacy for Indian workers and citizens, its compromises indicate a policy choice favouring business ease and state interests over individual rights (Burman, 2023). Ensuring it effectively protects workers’ data will hinge on implementation and on possible future amendments to close these gaps.

Proposed Digital India Act, 2023 (Digital India Bill)

Origin and Status

The Digital India Act (DIA) is a proposed legislation to replace the two-decade-old IT Act, 2000 (Tata Consultancy Services [TCS], 2023). Spearheaded by MeitY, it was first conceptually unveiled in early 2023. In March 2023, the government presented it and initiated multi-city public consultations. As of 2025, the DIA remains in draft and consultation stage, yet to be introduced to Parliament.

Motivation and Scope

The Digital India Act is needed due to India's significant digital transformation, as internet users grew from 5 million in 2000 to over 800 million in 2025 (Statista, 2024). This surge has led to issues regarding user rights, cyber harms, competition, and technological innovations exceeding the IT Act’s provisions. The DIA aims to be a comprehensive omnibus legislation addressing current challenges like user harm and misinformation while being flexible enough for future technologies, including advanced AI. According to the government, India’s digital revolution has rendered existing laws “old and dated,” necessitating a new Act. The development of the DIA includes discussions on AI regulation, with MeitY’s 2023 presentation emphasising “regulation of high-risk AI systems” (Mohanty and Sahu, 2024). This aligns with global trends, such as the EU’s draft AI Act, within a broader digital law rather than a standalone AI regulation law.

Emerging Provisions

While the official Bill text isn’t public yet, presentations and ministerial statements shed light on its likely provisions (TCS, 2023):

  • Digital Rights and User Safety: The DIA will present a digital user rights charter that includes protection from online harm, grievance redressal, data privacy, and the right to be forgotten. It emphasises safety and trust for women and children, addressing cyberbullying, doxxing, revenge porn, and user harm. The charter signals strong content moderation norms and penalties for enabling such harmful content.
  • “Principles-based” and Evolvable Framework: The Act will remain flexible to adapt to changing technology. It is characterised as “light-touch, principle-based” regulation, focusing on broad outcomes and responsibilities while allowing updates as needed. MeitY emphasises evolvability – keeping the law current through secondary rules to respond to new tech threats.
  • Coverage of Emerging Tech: The DIA seeks to address technologies like AI, IoT, and blockchain. It proposes defining “high-risk AI systems” through legal and technical frameworks for accountability and certification. For instance, the government suggests auditing AI ad targeting and content moderation algorithms for bias or security vulnerabilities, addressing concerns about AI's potential for discrimination and unsafe outcomes.
  • Intermediary Regulation and Safe Harbour Review: A cornerstone is reforming intermediary liability. The Act may revise or eliminate the blanket “safe harbour” immunity (Section 79 of IT Act) for internet intermediaries. The government argues platforms need to take more responsibility for content and actions. The DIA will likely classify intermediaries by risk and impose varying due diligence requirements. Misinformation and deepfakes are emphasised – platforms may need to respond to AI-generated misinformation flagged by authorities. Chandrasekhar’s 2023 statements questioned whether intermediaries should retain safe harbour “at all” in the future (NDTV, 2023). Early signs suggest the DIA could eliminate safe harbour in certain cases, holding platforms accountable for not removing government-identified fake news. This indicates a tough stance on platform accountability.
  • Fair Digital Markets: The Act addresses “open internet” principles – choice, competition, online diversity, and fair access. It suggests preventing monopolistic practices by big tech, thus enabling startups and smaller players to compete, potentially through restrictions on app stores and other curbs on anti-competitive behaviour. Additionally, it might integrate digital competition elements to complement the Competition Act. The DIA aims to promote fair trade by preventing market power concentration, aligning with pro-competition and antitrust measures in the digital sector.
  • Content Moderation and KYC: The DIA will likely mandate stricter content moderation and identity verification, including Know-Your-Customer (KYC) requirements for device sales and online services. The government enforces KYC for social media influencers and may extend it to users of major platforms to curb anonymity-driven abuse. Additionally, KYC might be required for privacy-invasive devices like spy glasses at the point of sale and through pre-market regulation to prevent misuse.
  • Adjudicatory Mechanism: The DIA will establish specialised bodies to address digital offences, possibly utilising dedicated digital courts or tribunals to manage cyber disputes and violations of the Act swiftly, as traditional courts are too slow for the fast-paced tech world.
  • Coherence with Other Laws: The Digital India Act aligns with the DPDP Act 2023, the new Telecom Act, and cybercrime law amendments. It links to the National Data Governance Policy for non-personal data and the penal code for serious cyber offences. This creates an integrated legal framework in which the DIA forms the foundation for cohesive digital governance, data protection, and tech regulation.

Implications for the World of Work

If enacted, the Digital India Act would significantly shape the online environment for Indian workers and AI deployment in workplaces:

  • Empowering and Protecting Workers Online: Strong user rights and safety provisions create safer digital spaces. Protections against online harassment would benefit workers, like journalists and gig workers, facing trolling or doxxing. If an employee is harassed on a company platform or a gig worker is bullied via an app, the platform could be held accountable for inaction, prompting companies to enforce conduct codes for worker safety. However, digital free speech and content rights must be balanced to avoid infringing on legitimate worker discourse, such as union organising or whistleblowing, which should not be suppressed as “user harm”.
  • Accountability of AI Systems Used at Work: The DIA could oversee high-risk AI systems in recruitment, workplace surveillance, productivity scoring, and HR analytics. For example, AI for evaluating candidates or monitoring drivers might need to meet transparency and bias-testing standards. It promotes ethical AI practices and considers AI in content moderation and ad targeting—principles applicable to AI impacting workers’ livelihoods. Its risk-based approach implies that AI with serious consequences (e.g. layoffs) would face more regulatory scrutiny and possibly require human oversight or auditing. This is significant as India currently lacks laws on automated decisions; the DIA could address this gap by recognising and mitigating algorithmic harms.
  • Platform Work and Gig Economy: The shake-up of intermediary liability and new obligations will impact gig workers. Platforms may lose unconditional safe harbour – for example, if a ride-hailing app’s algorithm biases certain workers, regulators could demand explanations under new rules. Moreover, fair trade provisions might curb predatory practices by dominant platforms, allowing for fairer terms or competition that better treats workers. Conversely, stricter content rules may lead platforms to over-censor, including labour organising content; thus, strong due process is vital to ensure moderation doesn’t inadvertently silence workers’ voices.
  • Enforcement and Rights: A dedicated adjudication mechanism for digital offences allows faster resolution where non-payment of wages, misuse of worker data, or harassment falls within the Act’s “user harm” provisions. This enhances workers’ ability to seek redress against tech firms. The Act promotes user choice and competition, encouraging a variety of platforms—workers can switch if mistreated, knowing of viable alternatives. Additionally, aligning with the DPDP Act will strengthen data privacy at the platform level, benefiting workers who often report invasive data practices in gig work (e.g., constant GPS tracking).

Potential Challenges

While promising, the Digital India Act’s proposals raise challenges. There is a fine line between regulating platforms and stifling innovation. Overly stringent obligations may increase compliance costs and deter startups, limiting job creation in tech. Narrowing safe harbour increases accountability but may lead to platforms proactively removing content to avoid liability, impacting gig workers and small creators (Sanzgiri, 2023). Strict regulations on emerging tech could unintentionally hamper AI innovation in India without stakeholder input (Sanzgiri, 2023). The DIA aims to be principle-based to avoid this. Coordination with sectoral regulators, like telecom and the central bank, is essential for success, requiring clear authority delineation and avoiding overlap. Enforcement capacity is also a concern; training new regulators on AI and platform issues is vital. Finally, while the Act may empower workers as digital citizens, it is not a labour law and won’t directly address issues like collective bargaining or social security without separate policy measures. The Act can facilitate fair platforms and transparent AI, but ensuring decent work will still need complementary labour policies and regulations.

The DIA’s success depends on balancing stakeholder interests and enforcing regulations that adapt to technological change while fostering India’s digital economy. Stakeholders await the official draft of the Bill in 2025, which will outline how these principles will become legal provisions.

National Programme on AI (IndiaAI Mission)

Origin and Evolution

The National Programme on AI (NPAI) is an initiative by the Government of India aimed at creating a nationwide AI ecosystem. This journey commenced with the National Strategy for Artificial Intelligence by NITI Aayog (2018), which introduced the vision of “AI For All.” It identified five priority sectors—healthcare, agriculture, education, smart cities, and mobility—and emphasised India’s potential to lead with socially beneficial AI (NITI Aayog, 2018). Proposed in the Union Budget 2018 and subsequently spearheaded by MeitY, it has led to the establishment of the “IndiaAI Mission” (MeitY, 2025c). In March 2024, the Union Cabinet approved the IndiaAI Mission with a budget of ₹10,300 crore (≈ $1.3 billion) over five years, marking the programme’s shift to a funded project (PIB, 2024a). The IndiaAI Mission will operate as a public-private partnership, fostering collaboration among government, industry, and academia. It encompasses a nationwide, multi-sectoral range of initiatives to enhance AI R&D, infrastructure, skilling, and deployment in India. The Mission is anchored in MeitY, involving organisations like National e-Governance Division (NeGD) and NASSCOM through the IndiaAI portal. It supports the country’s digital goals, such as achieving a $1 trillion digital economy.

Objectives and Components

The National Programme on AI aims to develop India’s AI capabilities holistically. According to MeitY and Cabinet releases, the IndiaAI Mission will fund and drive several key components (PIB, 2024a):

  • AI Infrastructure (Compute and Datasets): IndiaAI Compute Capacity aims to deploy over 10,000 GPUs in a distributed cloud to give researchers and startups access to high-performance computing, essential for training AI models. Additionally, an IndiaAI Datasets Platform will curate and share quality datasets, particularly from the public sector, for AI development. Managed by the Independent Business Division, it will establish data quality and anonymisation standards for safe sharing. These efforts tackle the bottleneck of compute and data access, democratising AI development opportunities across India.
  • Research and Innovation Ecosystem: The Mission will establish an IndiaAI Innovation Centre (IAIC) at a top academic institution as a hub for AI research. This centre will focus on foundational AI research, such as developing indigenous large language models and domain-specific AI models tailored to India’s needs. Funds will be allocated to attract and retain top AI talent and possibly network many institutions (a hub-and-spoke model). Additionally, there will be strong support for startups: IndiaAI Startup Financing will provide grants or seed funding to AI startups, facilitating their journey from idea to commercialisation. By offering capital and possibly incubator support, the mission aims to foster a wave of AI entrepreneurs addressing local issues. The mission will also fund industry-led AI projects with social impact, promoting private sector innovation in fields like healthcare, agriculture, and smart cities.
  • Skilling and Talent Development: Recognising human capital’s importance, the programme includes IndiaAI FutureSkills to train AI professionals and upskill the workforce. It builds on existing platforms like FutureSkills Prime by NASSCOM, offering AI courses. The aim is to produce a proficient AI talent pool. Additionally, there are plans to upgrade educational institutions to be “AI-ready” (Intel, 2024). Initiatives like “YUVAi” for school students and hackathons integrate AI education at various levels (Elias, 2023). These efforts ensure India’s youth and professionals can engage in the AI economy and enhance workforce participation and employability.
  • Institutions and Governance for Responsible AI: The mission includes “Safe & Trusted AI”, indicating the creation of frameworks for AI ethics and governance (PIB, 2024a). While details are limited, it may involve guidelines for responsible AI (building on NITI’s principles) and tools to test AI systems for bias or security. The mission could establish an AI Safety and Ethics Board or Institute. In early 2025, India announced plans for an “AI Safety Institute” to study AI risks (announced 30 January 2025; MeitY, 2025b). This likely connects to the Safe & Trusted AI component, ensuring the mission’s focus on AI includes global leadership on alignment and safety standards.

The National Programme on AI (IndiaAI) is a strategy that builds infrastructure, fosters innovation, cultivates talent, and guides ethical AI development. Comparable in scale to the US’s national AI strategy and the EU’s coordinated plan on AI, it also addresses India’s unique inclusion needs.

Relevance and Implications for Work

The IndiaAI Mission significantly affects India’s labour market, though mostly indirectly: it aims to foster economic development and innovation rather than worker protections. Its likely effects on the world of work include:

  • Job Creation and Transformation: If successful, the mission will foster AI industry growth—new startups, expanded IT services, and AI adoption across sectors like agriculture and finance. This could create high-skilled jobs (AI engineers, data scientists) and boost productivity in traditional industries. For example, AI tools may enhance crop yields or manufacturing processes, maintaining or creating jobs in these fields. The government presents AI as a “kinetic enabler” for digital economy expansion, viewing AI-driven growth as a route to a $5 trillion economy and beyond (PIB, 2024a). The mission’s investments could also curb brain drain by offering opportunities for talented Indian AI researchers, fostering a local ecosystem.
  • Skilling the Workforce: A direct positive impact lies in skill development. Through FutureSkills and educational reforms, many Indian workers and students will acquire AI-related competencies. This is crucial to avoid a scenario where automation displaces workers faster than they can retrain. By proactively training individuals in AI (developers, as well as users of AI tools), the mission can assist the workforce in transitioning to new roles created by AI (such as AI maintenance, data labelling, etc.). The reference to AI training in ITIs (industrial training institutes) indicates a focus on mid-level technical skills, which could upskill workers from traditional sectors to become AI technicians or analysts (Intel, 2024). Over time, this could enhance employability and facilitate the movement of workers from declining job areas to emerging ones.
  • AI in Service of Workers: The mission’s orientation is “AI for All” and addressing societal challenges. Many exemplar projects in India (like AI chatbots for agriculture or health) benefit workers by enhancing public services or reducing drudgery (Elias, 2023). For instance, AI tools in healthcare can improve workers’ health, and AI tutoring can better educate their children (Elias, 2023). Additionally, AI can enhance labour administration, such as improved job matching for employment exchanges and data analysis for enforcing labour laws. If the mission funds AI for social good projects, some could address labour issues like workplace safety (AI systems predicting factory accidents) and empower informal workers (apps offering gig workers fair pay information). These applications are not explicitly stated; their realisation depends on the specific projects the mission funds.
  • Responsible AI and Worker Rights: One notable absence in the programme is any mention of labour rights or protections for workers from AI’s adverse effects. The mission documents emphasise economic opportunity and “AI for good,” but do not articulate protections for displaced or exploited workers. The tracker analysis points out that it currently does not advocate for worker protections and lacks a commitment to assist the most disadvantaged individuals. This reveals a gap: while IndiaAI will boost AI adoption, it lacks a plan for managing AI-driven job disruption. Responsibility may fall on other ministries (e.g., the Ministry of Labour) to ensure reskilling and social security for displaced workers. The “Safe & Trusted AI” pillar could address fairness and bias relevant to workers, especially if AI hiring tools must be non-discriminatory. However, this remains an ethical guideline, not a binding protection. Essentially, the mission focuses on preparing India’s workforce to leverage AI, rather than safeguarding them from its impacts. Its success from a labour perspective will depend on whether growth and new jobs outpace the jobs AI renders obsolete.

Gaps and Challenges

The National AI Programme is ambitious, facing foreseeable challenges. Effective implementation of the ₹10,000+ crore (≈ $1.3 billion) outlay requires strong project management. Establishing infrastructure like a GPU cloud for academia and startups is complex; similar global projects have faltered due to bureaucracy or underutilisation (Data Center Dynamics, 2023; The Register, 2024). Collaboration among stakeholders (government departments, private partners, state governments) needs careful coordination. Measuring outcomes poses questions: high-level goals like foundational AI models or successful startups will take time. Inclusive growth is crucial; the mission must reach beyond tech hubs to smaller towns and diverse communities, preventing benefits from concentrating in elite circles.

On the regulatory side, the mission is a policy initiative without statutory force, allowing agility but risking instability if political priorities shift. There is no legal guarantee that funds or efforts will continue beyond five years unless the mission proves successful. Ethical AI governance may lack enforceability unless tied to laws like the DPDP Act or the upcoming DIA, and responsible AI guidelines remain voluntary (DLA Piper, 2025). A multidisciplinary advisory body on AI ethics is planned (based on NITI Aayog’s recommendation), which could be part of governance but needs authority to influence AI usage (NITI Aayog, 2021a).

In conclusion, the National AI Mission is a landmark initiative signalling India’s intent to be a leading AI nation. It focuses on capacity-building and innovation, which are essential for societal benefits. Skills and infrastructure investment prepare Indian workers to engage actively in the AI revolution. Nonetheless, the mission alone won’t address risks to employment quality or inequality. Those issues need addressing through complementary social policies and changes to education and labour laws as AI technology evolves. The philosophy of “AI for All” must be guided to ensure it truly benefits everyone, including the working class, marginalised communities, and those at risk of being left behind in an AI-driven future (MeitY, 2021a).

NITI Aayog’s Principles for Responsible AI (RAI)

Origin and Nature

As AI technologies gained traction, the Indian government recognised the need for ethical guidance. NITI Aayog, the government’s policy think tank, led the way by articulating principles for responsible AI (NITI Aayog, 2021a, b). In February 2021, NITI Aayog released a discussion paper titled “Responsible AI for All: Approach Document for India”, outlining Principles for the Responsible Management of AI Systems (NITI Aayog, 2021a). This document builds on the earlier National Strategy for AI (NSAI) 2018 (NITI Aayog, 2018) and is part of a series of AI policy papers. Importantly, these advisory principles inform policymaking, industry self-regulation, and future legislation. They align with global AI ethics frameworks such as the OECD AI Principles and represent India’s view that AI deployment should adhere to core values and fundamental human rights (Elias, 2023).

Core Principles

NITI Aayog’s Responsible AI document proposes seven key principles to ensure AI systems are developed and used in a manner that is ethical, inclusive, and reliable (Invest India, 2024):

  1. Safety and Reliability: AI systems should be safe and consistently provide correct results, being tested to avoid harm and erratic behaviour. Reliability signifies that AI operates as expected under various conditions, enhancing user trust. For instance, a medical diagnosis AI must not fail unpredictably in critical scenarios. Safety also includes security, requiring AI to resist cyberattacks.
  2. Equality (Inclusivity and Non-Discrimination): AI must be inclusive, treating everyone fairly and without bias. Algorithms must be checked to prevent discrimination based on gender, race, caste, or religion. In India, this is crucial due to social diversity; thus, AI in hiring or lending requires fairness audits to protect minority and underprivileged groups.
  3. Inclusivity and Accessibility: AI should be accessible to everyone, including differently abled individuals and speakers of local languages. This entails developing interfaces in vernacular languages, ensuring assistive AI for disabilities, and making AI benefits available to rural and urban communities.
  4. Privacy and Security: AI must respect personal privacy and secure data, adhering to data protection norms and incorporating privacy-by-design. Developers should minimise personal data usage, obtain consent, and store data securely, avoiding invasive surveillance.
  5. Transparency (Explainability): The design and logic of AI should be comprehensible. Users must be aware when they are interacting with AI and understand the decisions that affect them. For high-stakes AI, such as those related to job promotion decisions, the criteria must be conveyed clearly. Transparency involves openly communicating the capabilities and limitations of AI.
  6. Accountability: Clear responsibility for AI outcomes is essential, ensuring humans or organisations are accountable for AI behaviour. If AI harms, mechanisms must hold the deploying entity accountable. For example, if a factory machine injures a worker, the company must investigate and remedy the situation. Accountability also involves governance structures like ethics committees or audit trails.
  7. Protection and Reinforcement of Human Values: AI must uphold rights and values, promoting well-being without undermining dignity or autonomy. Its adoption should ultimately benefit society and align with human rights expectations. Applications need evaluation for their impact on human agency, determining if AI enhances choices or manipulates them unfairly.

These principles reflect global AI ethics norms of fairness, accountability, transparency, ethics, and safety (FATES; OECD, 2019 [updated 2024]). NITI Aayog aims to guide stakeholders: government, industry, and researchers. The document recommends implementing the principles through actions such as sectoral guidelines, bias mitigation research, and possibly an AI ethics advisory body.

Relevance to the World of Work

Though broad, the responsible AI principles have a direct resonance for the use of AI in the workplace and labour contexts:

  • Preventing AI-driven Discrimination: The equality and non-discrimination principle is vital for hiring algorithms and AI HR tools. It urges employers and AI vendors to audit for bias (e.g., against women or marginalised groups) and take corrective actions. For hiring, this principle warns against AI trained on biased data that could favour male resumes. Following it results in fairer AI that does not replicate societal prejudices in workplace decisions (a minimal audit sketch follows this list).
  • Transparency and Worker Autonomy: In workplaces using AI for decisions (promotion, scheduling, termination), these principles call for openness and recourse for workers. Employees should be informed when AI monitors their performance and receive explanations for AI-driven decisions like shift assignments or performance scores. This relates to discussions on “AI explainability in HR” – not yet a law in India, but it sets an important expectation. It empowers employees to question and understand AI decisions, which is crucial for contesting unfair outcomes.
  • Safety in AI-Augmented Workplaces: As factories and mines adopt AI and automation, the safety principle requires rigorous testing to ensure worker safety. AI-controlled machinery must be fail-safe to prevent accidents. This principle also extends to cybersecurity in AI-integrated workplaces, protecting against hacks that could disrupt manufacturing robots and endanger workers.
  • Privacy of Employees: Employers using AI for monitoring, such as surveillance cameras and productivity tracking, must respect boundaries. Constant AI surveillance can erode trust and morale; thus, use should be proportionate and strong data security for collected employee data is essential to prevent breaches. This aligns with the DPDP Act’s (MeitY, 2023b) requirements and encourages companies to think ethically, avoiding the collection of unnecessary personal data.
  • Accountability and Grievance Redressal: The accountability principle suggests that workers should not face situations where “the computer says no” and nobody is answerable. Companies using AI must designate a human accountable for AI decisions impacting employees and provide accessible channels for employees to appeal or discuss outcomes. This builds trust and ensures AI supports, rather than replaces, human responsibility in HR management.
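Fairness audits of the kind the equality principle envisages can start with a very simple test. The Python sketch below is a hypothetical illustration: it computes the disparate impact ratio of a hiring tool’s selection rates across two groups and flags it against the “four-fifths” benchmark borrowed from US hiring guidance; no Indian law currently mandates this test or threshold.

```python
# Toy audit of an AI screening tool's outcomes (1 = shortlisted, 0 = rejected).
# Group labels and numbers are invented for illustration.

def selection_rate(outcomes):
    """Fraction of candidates the tool selected."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

men   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% shortlisted

ratio = disparate_impact(men, women)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the illustrative four-fifths threshold
    print("flag for review: selection rates diverge beyond the 4/5 benchmark")
```

A real audit would go further (statistical significance, intersectional groups, caste and language proxies particular to India), but even this crude ratio is the kind of routine check the principle asks employers and vendors to run.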

Overall, these workplace principles advocate that AI should enhance human decision-making and not replace it without oversight, ensuring it respects workers’ rights and well-being. NITI Aayog’s approach emphasises “AI for All”, meaning AI benefits should reach all segments, including grassroots workers, while considering India’s socio-economic context, such as language diversity and varying literacy levels.

Limitations and Enforcement

The NITI Aayog principles are high-level and non-binding, which is both a strength and a limitation. They provide flexibility and promote voluntary adoption but lack legal enforcement, relying on goodwill for compliance. There are no penalties for AI violations unless they breach specific laws (like discrimination under constitutional or labour law, not yet under any specific AI law in India). These principles aim to inform regulations in the draft Digital India Act discussions (e.g., risk-based AI regulation, ethical practices). Sectoral regulators, like the Telecom Regulatory Authority of India (TRAI), have recommended establishing an AI and Data Authority (AIDAI) and highlighted responsible AI principles in telecom, echoing NITI’s language. Future developments may enhance these principles with sector-specific guidelines.

From a labour perspective, there is a need to turn some principles into regulations, like transparency in algorithmic management or non-discrimination in AI hiring. So far, discussions remain at the ethical guideline level. Critically, the principles lack a direct reference to collective worker voice or algorithmic transparency for trade unions. For example, should workers influence AI integration in their workflow? While broadly addressing stakeholder inclusion, labour representation is not emphasised. Ensuring inclusivity involves worker participation in AI design and deployment decisions.

In conclusion, NITI Aayog's responsible AI principles form a foundational policy framework, establishing that India's AI approach will be ethical and human-centred. They influence government policy and corporate strategies, with major IT companies endorsing similar principles. Although non-enforceable, they initiate “soft governance” of AI and lay the groundwork for concrete regulations. Importantly, they convey that AI gains should not compromise fundamental rights, equality, or safety, which is crucial for safeguarding good work. Future integration of these principles through standards, audits, and legal mandates will be essential for realising Responsible AI for All in India.

Conclusion and Outlook

The regulatory landscape for AI in India is evolving, reflecting the country’s ambition to be a global AI leader and a responsible digital society. India’s approach includes adapting legacy laws (IT Act), enacting new data protection legislation (DPDP Act), drafting forward-looking laws (Digital India Act), implementing national strategies (IndiaAI), and establishing ethical frameworks (NITI’s principles) to guide stakeholders. These efforts lay the foundation for AI governance, though many elements are still being finalised.

Several themes emerge from this analysis. India’s policymakers emphasise data governance as the foundation for AI. The shift from the 2022 Data Accessibility policy to the DPDP Act 2023 and NDGFP highlights that securing data rights is essential for AI innovation. Ensuring privacy and individual rights through the DPDP Act was prioritised, with pragmatic compromises to build public trust in the digital ecosystem. This approach protects workers by safeguarding personal data and granting them control while providing industry with clarity on data usage norms.

Second, there is recognition that old regulatory paradigms must evolve. The IT Act’s replacement with the Digital India Act symbolises a shift – laws must address AI, platform-centric business models, and digital risks unforeseen two decades ago. India is moving towards principle-based, adaptive regulation, balancing innovation and risk. Notably, India has avoided a blanket ban or heavy licensing of AI, aligning with its pro-innovation stance. However, as the Carnegie analysis notes (Mohanty and Sahu, 2024), the government has recently oscillated between hands-off and interventionist approaches. The 2024 advisory requiring licences for certain AI models – quickly withdrawn after industry backlash – illustrates the ongoing debate about how much to regulate AI now versus later. Going forward, we can expect more sector-specific guidelines (e.g., for AI in healthcare, finance, or employment) as consensus builds. The Office of the Principal Scientific Advisor is consulting to develop a cohesive AI policy roadmap, indicating that future AI regulations will be deliberative and multistakeholder.

India’s approach combines hard law and soft initiatives. Legislative efforts (DPDP Act, Digital India Act) complement non-legislative measures (IndiaAI Mission, ethical principles, innovation sandboxes). This blend is strategic: some issues need legal enforcement (e.g. data breaches, platform accountability), while others suit guidelines and incentives (e.g. promoting voluntary ethical AI adoption, funding AI safety research). For instance, instead of quickly legislating AI ethics, India established frameworks and allowed industry/self-regulation to take precedence temporarily. This can be effective with broad support but is contingent on major players' commitment to AI-responsibility.

From a work perspective, these frameworks carry mixed implications. The DPDP Act establishes privacy and dignity for workers amid data-driven HR and gig platforms. The proposed Digital India Act could empower workers with safer online environments and accountability for algorithms governing work opportunities. The IndiaAI mission aims to create jobs and upskill workers, helping prevent technological unemployment by enhancing workforce value. Responsible AI principles ensure the AI discussion in India prioritises human and labour rights, promoting fairness and transparency essential for workplace justice.

However, significant gaps remain. There is no specific regulation on AI in employment, such as on AI-based recruitment or worker surveillance. Issues like gig workers’ algorithmic management and AI-led productivity monitoring are not directly supervised by labour or IT laws. This area is ripe for future policy development by the Ministry of Labour in dialogue with MeitY. Additionally, enforcement capacity will be a critical test. Laws like the DPDP Act will depend on the effectiveness of the agencies enforcing them. Resourcing the Data Protection Board, training judicial officers on AI issues, and raising worker awareness about digital rights are all essential steps. Furthermore, state-level implementation matters; if state governments adopt the National Data Governance policy, it will unlock local datasets, whereas delays may hinder benefits unevenly.

India’s active role in international AI forums is promising. In 2023, India chaired the Global Partnership on AI (GPAI) and led a declaration on responsible AI at the GPAI Summit in New Delhi. The resulting 2023 GPAI New Delhi Declaration highlights collaboration to ensure AI benefits workers globally and emphasises upskilling and protecting them. Through such forums, India learns and contributes to global best practices, shaping domestic policy. India may align with or influence frameworks like the EU AI Act, particularly in AI risk classifications and conformity assessments.

In summary, India’s AI regulatory landscape 2025 shows a country aiming to integrate AI for economic and social good while ensuring safeguards against harm. The existing laws and policies – from the IT Act to the draft Digital India Bill, DPDP Act, and NITI’s ethical principles – reflect ongoing evolution. AI governance is crucial in determining whether AI creates or displaces jobs. India seems dedicated to guiding AI in a direction that supports its large workforce and democratic values. While there will be challenges, the path is toward a pro-innovation and pro-responsibility framework. As these policies firm up and new ones possibly arise, like AI-specific regulations or oversight bodies, there is hope that AI in India’s workforce evolves under the guiding principle – AI for All, and AI for Good Work.

"...legitimate concerns about job displacement arise, particularly for roles involving routine tasks. India’s workforce in business process outsourcing (BPO), customer support, data entry, and routine IT maintenance faces risks as AI systems, like chatbots and RPA, advance."

Development

This section analyses AI development in India, focusing on public, private, and public–private initiatives, the startup ecosystem, international partnerships, national infrastructure sector-specific innovations, and implications for work.

Public and Private AI Development Initiatives in India

Government AI Initiatives and Strategies

The IndiaAI Mission includes seven pillars: high-end compute infrastructure, a national dataset platform (AIKosh), Indigenous foundational models, innovation challenges, skill development (IndiaAI FutureSkills), startup funding, and the Safe and Trusted AI vertical (IndiaAISafe) (PIB, 2023a). Key initiatives involve deploying 18,000+ GPUs for AI computing and developing Indian language Large Language Models (LLMs, MeitY, 2024a). Additionally, AI Cloud platforms and regulatory sandboxes will test AI governance models (MeitY, 2024a). The IndiaAI Safety Institute will research AI security and risks (bias, deepfakes, cyber threats) and create India-specific safety standards with a “techno-legal” approach (MeitY, 2025b). The “AI for India 2030” platform, co-led by MeitY, NASSCOM, and the World Economic Forum, unites government, industry, and civil society to produce AI “playbooks” and ethics guidelines for sectors like agriculture and MSMEs (Kaushik et al., 2025). Its focus on ethics, inclusion, and co-creation highlights India’s commitment to responsibly integrate AI into governance and public services.

The Digital Personal Data Protection Act (2023) and draft Digital India Act (2024) aim to modernise tech governance with AI accountability (Access Partnership, 2023, Burman, 2023). Voluntary guidelines, such as NITI Aayog’s 2021 Responsible AI principles and draft MeitY ethics frameworks, promote transparency and risk assessment (NITI Aayog, 2021a,b).

India's AI development relies on state-led strategies fostering inclusive innovation and responsible governance. It began with NITI Aayog's National Strategy for Artificial Intelligence (2018), which introduced the vision of “AI For All” and identified five priority sectors: healthcare, agriculture, education, smart cities, and mobility. The strategy highlighted India's potential for socially beneficial AI (NITI Aayog, 2018; PIB, 2023a). The strategy called for ethical frameworks, data governance, and domestic research capacity, and preceded AI regulation.

Private Sector Initiatives

India's private sector, from large IT firms to startups, drives AI development and adoption. The technology industry, once focused on IT outsourcing, is rapidly shifting to AI-centric products and services. Major Indian IT companies have created their AI platforms, like TCS's “ignio” and Wipro's “HOLMES,” enhancing global business operations and indicating early investment in AI R&D (Digitate, 2024; Wipro, 2024). Conglomerates like Reliance and Adani have dedicated AI units to improve customer analytics and supply chain efficiency (Bhargava, 2024; Reliance, 2023). Many banks and telecoms partner with AI firms for large-scale predictive analytics and chatbots.

Global tech giants leverage India's talent, with companies like Google, Microsoft, IBM, and Intel establishing AI research centres, such as Google Research India in Bengaluru. This fosters local innovation and upskills researchers. Stanford's AI Index 2023 ranks India number 1 in AI skill penetration and GitHub AI projects (IndiaAI, 2023), showcasing the developer community's strength. In the 2025 report, India ranks second globally in AI skill penetration, just behind the U.S., indicating a highly concentrated AI-skilled workforce (Maslej et al., 2025). As per this report, India experienced the highest year-over-year growth in AI hiring at 33.4%, surpassing Brazil and Saudi Arabia. The report also ranks India 5th for private AI investment and newly funded AI companies, with annual investment growth of over 30%. Indian AI market expected to rise to $22 billion by 2027, according to a NASSCOM–EY report (NASSCOM, 2024a).

Additionally, the Indian industry leads in mission-driven AI for social good. The Wadhwani Institute for AI, a non-profit launched in 2018, develops AI solutions for underserved communities and highlights collaboration around AI for development (Wadhwani Institute for AI, n.d.). This union of government, corporate investment, and academic involvement fosters a robust AI ecosystem. The subsequent sections detail how this ecosystem spurs innovative AI use cases across sectors with significant infrastructure investments and a vibrant startup scene.

Public-Private Partnerships (PPPs) in AI

Public-private partnerships are crucial in India’s AI development, combining the strengths of both sectors. The Indian government recognises the need for private sector partnerships in AI, co-developing policies, sharing resources, and co-funding projects. The benefits are mutual: the public sector gains advanced technology, while companies receive government support and a major customer.

Notably, the MeitY-NASSCOM Centres of Excellence exemplify PPP success (MeitY, 2021; NASSCOM Centre of Excellence for IoT & AI, n.d.). Another significant PPP is the FutureSkills Prime program, launched in 2018 by MeitY and NASSCOM to reskill India’s IT workforce in emerging technologies like AI (FutureSkills Prime, n.d.). The IndiaAI initiative began as a government portal for AI but is evolving into a broader alliance with industry and academia, serving as a central AI platform in India. New partnerships are emerging under the IndiaAI initiative. Google partnered with IIT Madras in 2023 to launch a Centre for Responsible AI and contributed $1 million to the effort (IIT Madras, 2023). In 202, a5 partnership with Meta to create “Shrijan” – a Cenrer for Generative AI at IIT Jodhpur, aimed at advancing open source models and training (TOI, 2024). Microsoft likewise joined the IndiaAI mission via an MoU to advance AI innovation and train 500,000 people in AI skills by 2026 (Microsoft India, 2024).

Various ministries have teamed with tech firms for AI in public services. The Ministry of Agriculture collaborates with Microsoft and IBM on pilot projects for AI in crop advisory, while the Ministry of Health is working with NIRAMAI on AI-based breast cancer screening (Microsoft Stories India, 2017; Ministry of Agriculture and Farmers Welfare, 2019). NITI Aayog’s 2019 initiative in Aspirational Districts forged partnerships with Philips and Cisco to apply AI in healthcare and education for underdeveloped areas (NITI Aayog, 2019). The YuvAI initiative, in collaboration with All India Council for Technical Education (AICTE) and industry, trains students in AI, demonstrating multi-stakeholder capacity-building partnerships (NEGD, 2020). The Ministry of Defence formed a task force in 2018 on AI co-chaired by Tata Sons’ chairman, leading to a Defence AI Council that includes private defence tech startups, resulting in PPPs for AI-driven military systems (MoD, 2022).

AI Startup Ecosystem in India

As of April 2024, there were approximately 6.2 thousand AI startups in India, with a significant portion, around 29, having reached the Series D or higher funding stage. While the overall AI startup ecosystem is growing, the number of generative AI (GenAI) startups has grown substantially, increasing from 66+ in H1 2023 to 240+ in H1 2024 (NASSCOM, 2024b). India ranks 6th in share of GenAI startups among major economies.

Several factors drive this ecosystem. India’s large market and development challenges create opportunities for impactful AI solutions, dubbing the country the “AI use-case capital” for developing-world issues. Government support comes through startup incentives, incubation programs, and policies. The establishment of a Startup Hub under MeitY and initiatives like Startup India facilitate mentorship and funding access (MeitY, n.d.). NASSCOM's DeepTech Club nurtures AI startups, while a skilled engineering workforce and low operational costs give Indian startups a competitive advantage (NASSCOM, n.d.). Successful stories include Uniphore, a unicorn in AI speech analytics; Postman, an AI-focused API platform; and Fractal Analytics, surpassing a $1 billion valuation (Tech In Asia, 2024). Big firms increasingly acquire AI startups to accelerate their transformation, with companies like TCS, NIIT, and CleverTap leading the charge. The trend reflects a shift from independent AI growth to strategic acquisitions, as firms seek specialised solutions to enhance productivity and innovation (Shanthi, 2025). Venture capital investments in India are increasingly focusing on AI startups across various consumer sectors, driven by expanding use cases (ET, 2025). Indian AI startups are also attracting global acquisition interest, such as Google’s Halli Labs purchase (Malik, 2017).

Under its AI mission, the government aims to create “100 AI Unicorns,” indicating strong support for scaling startups (MeitY, 2023c). Specialised funds like the IndiaAI Startup Fund are being established to provide growth-stage capital. Open compute and data resources significantly reduce startups' costs, enhancing their competitiveness against larger global players. The government promotes AI challenges and hackathons, awarding startup winners with pilot contracts. These factors make India’s AI startup landscape one of high growth potential.

International Collaboration and Partnerships

In 2020, India became a founding member of the Global Partnership on AI (GPAI) and was elected Chair of the GPAI Council in 2023. During its chairmanship, India hosted the annual GPAI summit in December 2023 in New Delhi, resulting in the New Delhi Declaration on AI (GPAI, 2023). This declaration promoted a renewed partnership for utilising AI “for good and for all,” aligning with India’s commitment to ethical, inclusive AI. It addressed AI for social welfare, responsible principles, and collaboration on R&D. India’s GPAI leadership enhanced its global AI governance profile and fostered networks with AI experts in member countries (including the US, EU, and Japan, etc.).

Among the bilateral cooperations, India–USA ties in AI have significantly deepened. In 2023, the two nations launched the Initiative on Critical and Emerging Technologies (iCET), focusing on AI, quantum, and semiconductor tech (White House, 2024). Under iCET, joint research centres and funding have been established. For example, in October 2024, the U.S. and India announced over $2 million in grants for 17 joint AI and quantum research projects addressing challenges like AI-assisted early cancer detection. These projects unite U.S. and Indian researchers and startups, demonstrating a commitment to co-develop AI solutions for healthcare, agriculture, and other sectors. The U.S. National Science Foundation (NSF) and India’s Department of Science & Tech (DST) have launched joint programs to fund AI research, including partnerships between American tech companies and Indian institutes, like Google Research collaborating with IITs on AI for flood forecasting. (NSF, 2023). High-level dialogues further ensure policy alignment, with both countries agreeing to collaborate on AI standards and risk management frameworks, sharing best practices for responsible AI statements. The recent U.S.-India TRUST initiative outlines three AI goals: (1) accelerate U.S.-origin AI infrastructure in India through market access and investments; (2) remove constraints in financing and building this infrastructure; (3) support the development of innovative AI models and applications (Chaudhuri and Mohanty, 2025).

India has a Technology Partnership with the UK emphasising AI and digital ethics (UK Government, 2024). Imperial College has launched its science hub, Imperial Global India, in Bengaluru. A key initiative is establishing six high-impact fellowships with the National Centre for Biological Sciences and the Indian Institute of Science to develop a London-Bengaluru AI in Science Network (Johns, 2025). In 2022, UK–India agreed on Roadmap 2030 for AI research cooperation, and in early 2025, officials discussed expanding ties in telecom and AI, focusing on AI for 6G networks and links to the UK’s Alan Turing Institute (PIB, 2025d). A new UK-India Tech Security Initiative aims for safe, responsible, human-centric AI, promoting joint research and market access for innovations. In 2023, India and the European Union established a Trade and Technology Council (TTC), with one working group addressing digital governance and ICT, highlighting AI regulation (European Commission, 2023). As the EU progresses with its AI Act, India is involved in dialogues to express its views and potentially align on AI standards ensuring privacy and non-discrimination innovation.

Indonesia and India recently signed a MoU for AI cooperation, featuring collaboration between Indonesia’s Indosat Ooredoo Hutchison (IOH) and India’s AI firm AIonOS, a joint venture of InterGlobe and Assago Group, aimed at AI solutions in tourism, knowledge industries, and sustainable agriculture (Nugraha, 2025).

In multilateral fora like the G20, India has emphasised AI. During its 2023 G20 Presidency, India promoted “Digital Public Infrastructure” and inclusive AI (GOI, 2023). The G20 Digital Ministers’ declaration acknowledged the need for responsible AI and endorsed collaboration on AI R&D for language translation and agriculture, vital for emerging economies. India also participates in UNESCO’s AI ethics initiative; it endorsed the UNESCO Recommendation on AI Ethics in 2021 and co-organised an AI Readiness conference with UNESCO in 2024 to develop an India-specific AI policy report (UNESCO and MeitY, 2024).

Additionally, India utilises its talent diaspora for international collaboration. Many top AI researchers of Indian origin work in the US/Europe, and India invites them for knowledge exchange. For example, the VAJRA faculty scheme allows overseas Indian AI experts to engage with Indian universities (SERB, n.d.). Such interactions, alongside international conferences (e.g., the annual RAISE summit on Responsible AI for Social Empowerment), keep Indian AI efforts aligned with global advancements. In summary, India’s international engagement on AI aims to influence global AI governance with a pro-development perspective, access resources and expertise for domestic capacity building, and form alliances that strengthen its ambition to become an AI powerhouse. These collaborations enhance domestic initiatives and integrate India’s AI growth with the global landscape community.

National AI Infrastructure Programs (Compute, Data, Cloud, Sandboxes)

India invests in AI infrastructure, including supercomputing, data platforms, and innovation sandboxes. These initiatives provide researchers, startups, and industry with the resources (computing power, datasets, testbeds) needed to develop world-class AI solutions in India. By promoting hard infrastructure, like supercomputers, and soft infrastructure, like data governance, India intends to achieve sustainable AI growth.

AI Compute Power and Cloud Infrastructure

A major initiative focuses on building high-end AI computing infrastructure. In 2024, the IndiaAI Mission allocated ₹10,300 crore (≈ $1.3 billion) over five years to enhance AI capabilities, especially by developing a national AI computing facility with over 18,000 GPUs, one of the largest in the world (PIB, 2024a). Once operational, it will have nearly nine times the capacity of “DeepSeek” and two-thirds of OpenAI’s ChatGPT. This GPU cluster will enable Indian researchers and companies to train cutting-edge AI models domestically. The government is providing access through an open AI cloud platform with a GPU marketplace where startups, students, and institutions can rent capacity at subsidised rates (PIB, 2024b). Unlike many countries where tech giants dominate advanced AI computing, India aims to democratise access, allowing small startups to innovate. Pricing is set at well below the global rate. This open AI cloud is expected to significantly assist academic researchers and entrepreneurs who struggle with the costs of large AI training models.

India is strengthening its semiconductor ecosystem to reduce reliance on AI chip imports. The government launched a ₹76,000 crore ($10 billion) program to develop a sustainable semiconductor and display ecosystem (PIB, 2021). This initiative includes the India Semiconductor Mission (ISM) to drive the sector and aims to establish India as a global electronics manufacturing hub, with semiconductors as the foundation. Recently, Tata Electronics partnered with Taiwanese Powerchip Semiconductor Manufacturing Corp (PSMC) to build a state-of-the-art semiconductor fabrication plant in Dholera, Gujarat (Tata Electronics, 2024). The facility will produce up to 50,000 wafers monthly, incorporating advanced automation technologies, including data analytics and machine learning, to improve operational efficiency. This project significantly bolsters India's semiconductor manufacturing capabilities.

Data Platforms and National Datasets

Recognising data as the “fuel” for AI, the Indian government has launched initiatives to enhance access to diverse datasets for AI development. The main project is the IndiaAI Dataset Platform (AIKosh), an open data repository announced in 2025 to provide access to high-quality, anonymised datasets for AI innovators (PIB, 2025c). It aims to host extensive government and non-personal datasets from various ministries (e.g., agriculture statistics, weather data, traffic patterns, census data). Making datasets searchable reduces barriers for startups and researchers in data acquisition. The goal is to empower Indian AI developers with rich data, enhancing AI accuracy and bias-resilience models. For example, agriculture AI models can use years of crop yield and rainfall data, while traffic AI can access extensive smart city datasets. The platform also supports open data standards and sharing between government and private entities, under the upcoming National Data Governance Policy. The draft National Data Governance Framework (released in 2022) suggested creating an India Data Management Office (IDMO) to manage data sharing and license anonymised public datasets for research (MeitY, 2022b). This reflects India’s intent to treat data as a public good for innovation while balancing privacy protections from the DPDP Act 2023.

AI for Indian languages focuses on India's 22 official languages and many dialects. In 2022, the government launched Digital India Bhashini, a national AI platform that curates datasets and offers models for translation, speech recognition, and text-to-speech (PIB, 2022). By 2025, it had over 350 trained models for automatic speech recognition, machine translation, and optical character recognition. These resources enable developers to create voice assistants and local-language applications, such as an app for farmers to query in Bhojpuri. Bhashini’s collaboration with over 70 research institutions showcases India's commitment to enhancing digital resources for vernacular languages, which is crucial for inclusive AI development in a multilingual society.

Various initiatives like the National Health Stack and the Health Data Repository supply de-identified health records for medical AI research (PIB, 2024c). The Open Government Data (OGD) platform has published thousands of datasets on socioeconomic and geospatial data, enhancing documentation, APIs, and formats for better usability (National Informatics Centre, n.d.). The private sector releases valuable datasets, including an Indian driving dataset for self-driving AI research and e-commerce companies sharing product review datasets for NLP challenges (Dokania et al., 2023; Thummar, n.d.). This collaboration from both sectors enriches India’s data landscape ecosystem.

Regulatory Sandboxes and Innovation Testbeds

India uses sandbox environments to promote AI and tech innovation in controlled settings. Regulatory sandboxes enable companies to test new products with users under relaxed rules.

Notable initiatives in the finance sector, such as the Reserve Bank of India (RBI) establishing sandboxes for fintech in 2019, have targeted retail payments with AI solutions. RBI later added themes like MSME lending and fraud prevention using AI/ML (RBI, 2019). Securities and Exchange Board of India (SEBI) also created a sandbox for market innovations, testing AI algorithms for trading and risk management (SEBI, 2021). These programs help regulators better understand AI while innovators gain essential insights feedback.

In telecommunications, the 2023 Indian Telecom Act promotes technology sandboxes. The Department of Telecom (DoT) partnered with C-DOT and academia to test AI in network management. C–DOT and IIT–Jodhpur signed an agreement sunder the Telecom Technology Development Fund (TTDF) scheme, aiming to develop AI frameworks for automated network management, fault detection, and diagnostic techniques in 5G and beyond networks (PIB, 2024d). The collaboration includes establishing a real-time 5G testbed compliant with O-RAN standards to demonstrate automated network management and slicing techniques for applications such as smart metering and remotely operated vehicles. In 2025, India and the UK launched a joint telecom/AI sandbox, linking DoT’s Telecom Innovation Centre with the UK’s SONIC Labs to develop AI solutions for network security and performance. (PIB, 2025d).

For urban governance, the National Urban Innovation Hub created a sandbox for smart city projects, allowing cities like Bengaluru and Hyderabad to test AI-driven traffic and surveillance systems (Ministry of Housing and Urban Affairs, 2019). This initiative led to solutions like adaptive traffic lights, reducing delays by 15-20% in some areas. Sandboxes also highlight regulatory issues, such as privacy concerns in AI CCTV analytics, which inform future policies.

Several Indian states have enacted broad innovation sandbox legislation (Government of Karnataka, 2020; i–Hub Gujarat, n.d.; Kerala Startup Mission, n.d.). These policies demonstrate the support of regional governments for AI experimentation aimed at attracting investment from startups.

India’s sandboxes create safe testing grounds for AI deployments but are often hampered by regulations or procurement challenges. They enforce safeguards while enabling innovation. The government is also launching innovation testbeds and pilot projects for AI. For example, the Ministry of Agriculture’s AI pilots test solutions before broader implementation, like the AI sowing advisory (PIB, 2019). The Ministry of Road Transport has piloted an AI driver monitoring system to assess its effect on road safety (DEST, n.d.). These experiments, often in collaboration with startups or research labs, yield valuable insights.

Sector-Specific AI Use Cases and Innovation

AI innovation in India spans industries, addressing unique challenges in healthcare, agriculture, education, manufacturing, and finance. The subsections survey AI applications in these key sectors.

Healthcare

AI enhances healthcare in India by improving diagnostics, expanding access, and optimising workflows. Tools analyse medical images (X-rays, MRIs, CT scans) to help detect diseases more accurately. AI image analysis reduces diagnostic errors, improving outcomes in resource-constrained systems (Zuhair et al., 2024). Hospitals use these tools to identify cancers, neurological disorders, and fractures. During COVID-19, the government deployed the AI chatbot MyGov Saathi via WhatsApp to provide reliable health information, counter misinformation, and assist millions with health queries (IndiaAI, 2021).

Innovative AI solutions like those led by NITI Aayog help screen for diabetic retinopathy using portable eye scanners in remote areas (Ruamviboonsuk et al., 2020). Researchers have developed AI models for tuberculosis detection through chest X-rays and cough sounds, aiding in the disease's elimination (Gent, 2024). AI predictive analytics anticipate disease outbreaks: pilot projects have analysed climate and epidemiological data to predict dengue and malaria outbreaks with high accuracy, enabling authorities to mobilise preventive measures in advance (Joi, 2025). These capabilities are crucial in a country prone to seasonal epidemics, as they improve resource allocation and reduce hospital burden during outbreaks.

AI also enhances rural healthcare access via telemedicine and virtual care (Kerketta and Balasundaram, 2024). Platforms feature AI symptom checkers and virtual assistants, providing preliminary medical advice in local languages. This is vital in remote areas, connecting patients to doctors during the pandemic. AI-assisted robotic surgeries are emerging in top hospitals, improving precision and outcomes. Their use has grown by 30% in two years, enabling complex procedures with smaller incisions.

Indian healthcare providers themselves are developing cutting-edge AI solutions. Apollo Hospitals utilised a decade of patient data to develop an AI heart disease risk prediction tool in collaboration with Microsoft's AI Network for Healthcare (Ang, 2021). This tool predicts cardiac events and promotes preventive cardiology to combat high heart disease rates. These advancements signal a new era for Indian healthcare, promising equitable and efficient quality care for the population.

Agriculture

Precision farming using AI is changing traditional practices into data-driven methods. Researchers and agritech startups deploy drones and remote sensors with AI algorithms to assess crop health, soil conditions, and pest infestations (Agrotech India, n.d.). Drone surveillance alerts farmers to stress or disease, enabling targeted interventions and reducing costs and environmental impact. ‘Namo Drone Didi’ is a central sector scheme aiming to empower women-led Self-Help Groups (SHGs) by equipping them with drone technology to provide agricultural services (National Portal of India, 2024).

The Indian government actively promotes “smart farming” initiatives through a national AI mission and a dedicated Agriculture AI Centre to provide farmers with real-time information. AI-powered precision agriculture optimises irrigation, planting depth, and inputs for better yields, exemplified by the AI Sowing App in Telengana, Maharashtra and Madhya Pradesh, sending personalised text recommendations to smallholder farmers (Microsoft Stories India, 2017). The Kisan e-Mitra chatbot offers an AI assistant, addressing queries in various Indian languages about government schemes and farming practices, enhancing farmers’ resource access (PIB, 2025a). Another initiative is an AI Crop Health Monitoring system integrating satellite imagery, weather data, and IoT inputs to check crop conditions (Raj et al., 2025). Stress indicators prompt alerts for timely action. India launched a National Pest Surveillance system using machine learning to pre-empt severe pest infestations, thus supporting its objective to double farmers’ incomes with technology (PIB, 2025b).

Private agritech startups like CropIn, Fasal, and DeHaat are also innovating, employing AI for predictive insights on weather, soil moisture, and crop prices. AI also enhances supply chain logistics for perishable goods, minimising waste. Although still emerging, AI in Indian agriculture addresses longstanding productivity challenges, modernising the sector with targeted intelligence delivered through mobile interfaces to livelihoods.

Education

India’s education sector, one of the largest in the world, is embracing AI to improve teaching and learning outcomes. Personalised learning through AI is a significant trend, especially in digital education platforms. The Indian government is integrating AI into mainstream education. In 2019, India’s largest school board (CBSE) introduced AI as an elective in high schools, rolling out a basic AI curriculum in thousands of schools. The “AI For All” initiative, launched in 2021 with industry partners like Intel, offers a 4-hour online training module that has educated over 1 million students and teachers on AI basics (CBSE, 2021). MeitY’s Responsible AI for Youth program has trained students nationally in foundational AI skills, encouraging them to develop AI solutions for social issues. By embedding AI literacy at K-12 levels, India aims to create an AI-ready workforce. In higher education, specialised programs in AI and data science have proliferated in universities and technical institutes. The AICTE partnered with IBM to offer faculty training and student courses in AI, planning to establish over 50 Centres for Excellence in AI at top institutions (IBM, 2021). These initiatives contribute to India having the highest global AI skill penetration rate, reflecting successful skilling efforts.

AI bridges educational gaps in rural and underserved areas. Intelligent tutoring systems and AI chatbots, accessible via low-end phones, supplement teaching without qualified teachers. For example, an AI app can teach math and English interactively in students' native languages, adjusting difficulty based on responses. Nonprofits use these tools in remote villages to enhance literacy and numeracy (Soenke and Kaushal, 2022). AI improves administrative efficiency; some state examination boards employ AI for automated grading and proctoring exams with face recognition to prevent cheating. During COVID-19 school closures, AI ensured learning continuity for millions. In summary, AI in education enhances personalisation, expands access, and builds future skills, allowing educators to concentrate on mentorship, ultimately aiming to improve outcomes for students in India.

Manufacturing and Industry 4.0

AI drives India’s manufacturing towards “Industry 4.0” with smart automation across sectors like automotive and textiles. Major manufacturers adopt AI/ML solutions to boost efficiency, quality, and safety. Predictive maintenance exemplifies AI's impact: sensors on equipment provide data to algorithms predicting failures preemptively, thereby reducing downtime. For instance, Indian Oil Corporation uses AI to monitor refinery equipment in real time, identifying anomalies to prevent shutdowns, significantly improving logistics and refining efficiency (PSU Watch, 2020). Tata Steel applies AI for blast furnace failure predictions and raw material optimisation, enhancing throughput and consistency (Tata Steel, 2024).

AI transforms quality control by using computer vision for fast product inspections, from automotive parts to pharmaceutical tablets. These systems detect microscopic flaws more accurately than humans, enhancing quality and reducing waste. AI also optimises processes by suggesting settings to maximise yield and minimise energy use; chemical and cement plants collaborate with AI firms to improve parameters and reduce energy consumption costs.

The logistics sector benefits from AI advancements, with retailers using AI for demand forecasting to prevent stockouts. Some Indian retailers improved forecast accuracy with seasonal trends and real-time data, reducing inventory costs. In warehouses, e-commerce leaders like Flipkart utilise AI robots for sorting and packing to meet the demands of online orders, reflecting a growing trend of automation that complements, rather than replaces, human workers (Express Computer, 2019).

The Indian government supports Industry 4.0 with initiatives like SAMARTH Udyog Bharat 4.0, which establishes smart manufacturing demo centres for small and medium manufacturers to explore AI and robotics (Ministry of Heavy Industries, 2024). Public sector units, especially in defence and railways, implement AI for maintenance and optimisation –As global supply chains shift, India aims to enhance manufacturing competitiveness through AI for productivity, cost-efficiency, and quality.

Finance and Banking

The financial services sector in India—banking, insurance, and fintech—leads in AI adoption. Banks use AI and machine learning to enhance customer experience, risk management, and operations. AI chatbots like HDFC’s “Eva” and SBI’s YONO app assist with routine queries, balance checks, loan applications, and transactions. They operate 24/7 in multiple languages, serve millions instantly, and reduce wait times and call centre loads.

AI plays a key role in credit decisions. With the push for financial inclusion, many new customers lack credit histories. To assess creditworthiness, fintech startups and non-bank lenders use AI to analyse alternative data, such as mobile usage and utility payments. This allows them to underwrite loans for traditional scoring invisible borrowers, with machine learning models evaluating applications in real time. AI-based credit scoring boosts approval rates while limiting defaults by identifying risk signals overlooked by traditional methods. AI models are now standard in large banks, processing vast datasets to predict default probabilities, thus enabling faster approvals and personalised offers, fostering consumer credit growth.

AI significantly benefits fraud detection and compliance. Indian banks and payment companies monitor transactions for fraud using AI systems. Machine learning analyses millions of transaction patterns to flag real-time anomalies, aiding early fraud detection and minimising losses. For instance, Paytm identifies fraudulent merchants and transactions, while the Unified Payments Interface (UPI) uses AI for suspicious pattern monitoring. Capital markets leverage AI for surveillance against insider trading, analysing trading data for markers undetectable manually.

Insurance firms in India utilise AI for rapid claims processing—automatically assessing vehicle damage from photos and detecting fraudulent health claims by identifying discrepancies in bills. Algorithmic trading is increasing, with AI making rapid trading decisions under regulatory oversight. RBI employs AI for forecasting and regulatory supervision, establishing an AI and Machine Learning Framework for banks’ risk analysis and creating regulatory sandboxes to test fintech innovations involving AI (The Economic Times, 2023).

AI enhances India’s financial sector, allowing secure and efficient service scaling. A 2023 NASSCOM study recognised Indian financial firms as AI adoption leaders, employing real-time analytics and AI-based personal finance suggestions (NASSCOM, 2024a). AI integration enables operational cost reductions via automation, while also expanding access, particularly in lending for informal sector entrepreneurs. As data volumes soar, AI is crucial for financial firms to extract actionable insights and manage risks. Future advancements will likely include advisory bots and advanced regulatory analytics, necessitating ethical, transparent use with strong safeguards for fairness and privacy decision-making.

Implications for the World of Work in India

AI's rapid growth in India significantly impacts job creation, skill demands, productivity, and work dynamics, particularly because of its vast and diverse workforce. This section explores AI's influence on employment and examines how India is adapting to these changes.

Job Creation, Transformation, and Displacement

On one hand, AI promises to boost economic growth and create new jobs. India’s tech industry is seeing surging demand for data scientists, AI engineers, big data specialists, and cybersecurity analysts. The fastest-growing job titles are expected to be in digital and tech, reflecting a shift to an AI-driven economy (Li and Shine, 2025). The World Economic Forum’s Future of Jobs Report 2025 finds that Indian employers expect new tech-enabled roles to proliferate. Companies plan to focus on digital skills, with 67% saying they will tap into diverse talent pools, significantly higher than the global average of 47% (Li and Shine, 2025). The government’s Skill India Digital Hub aims to prepare millions for these opportunities through continuous training in AI and automation (Li and Shine, 2025).

Generative AI and other advances are creating new industries and services, from content creation to process automation, potentially generating fresh employment. An EY analysis estimates that by 2030, AI adoption, particularly GenAI, could impact 38 million jobs in India by changing work processes while driving a 2.6% increase in overall productivity (EY, 2025 ). Many impacted jobs won't disappear; instead, AI will enhance them, boosting output and efficiency. For instance, in healthcare, AI can help doctors automate diagnostics, allowing them to see more patients and potentially increasing demand for healthcare workers. Similarly, in IT services, while mundane coding tasks may be automated, programmers will be necessary for higher-level design, logic, and AI oversight. This indicates a transformation in job profiles, evolving roles to highlight human strengths like creativity, complex problem-solving, and interpersonal communication alongside AI systems.

On the other hand, legitimate concerns about job displacement arise, particularly for roles involving routine tasks. India’s workforce in business process outsourcing (BPO), customer support, data entry, and routine IT maintenance faces risks as AI systems, like chatbots and RPA, advance. For instance, PhonePe automated 60% of its customer support jobs from 2017 to 2022 using AI solutions (Rajmohan, 2025). This suggests that certain white-collar jobs are also vulnerable to automation. A 2024 IIM Ahmedabad study found 68% of surveyed white-collar employees in India expect AI to automate their jobs in the next five years, while 40% fear their skills will become outdated (Chakrabarti et al., 2024). These findings highlight workforce anxiety about AI-driven changes redundancy.

The impact will likely vary across sectors and skill levels. Research on technological automation has shown job polarisation: middle-skill routine jobs decline, while high-skill and low-skill jobs can grow or remain, resulting in a hollowing of the middle (Rajmohan, 2025). In India, some anticipate a similar trend; AI may significantly reduce clerical roles (through intelligent software) and some manufacturing jobs (through robotics), while increasing demand for high-skill engineers and low-skill gig workers (like data annotators or delivery personnel for AI-driven platforms). However, outcomes are not guaranteed. India’s 2024–25 Economic Survey cautioned that fears of mass unemployment from AI may be overstated given the current early stage of AI deployment (Ministry of Finance, 2025). The Survey asked, “What were the problems in the world that demanded AI as the answer?”, suggesting that AI should be viewed as a tool to address pressing challenges rather than an end (Ministry of Finance, 2025). This perspective encourages careful consideration of AI's application and how its benefits can help mitigate labour impacts (for example, using AI-driven productivity to create jobs in other areas or investing in social safety) nets).

Reskilling and Education for the AI Era

AI’s spread necessitates significant upskilling of India’s workforce. According to the WEF Future of Jobs survey, employers estimate that 63% of the workforce will need training by 2030 due to technology integration, with 12% of workers (over 70 million) potentially missing out at the current pace (Li and Shine, 2025). This skills gap could worsen unemployment or underemployment. The Indian government and industry acknowledge this urgency. Initiatives like Skill India, PMKVY, and NASSCOM’s FutureSkills Prime are expanding AI, data analytics, and cloud computing courses. FutureSkills Prime has advanced digital skilling in India (FutureSkills Prime, n.d.), empowering over 2 million learners in Tier 2 and 3 cities with courses aligned to National Occupational Standards (NOS) and National Skills Qualification Framework (NSQF). It ranks 3rd among 47 digital skilling initiatives in the European Commission's 2024 Pact for Skills Report (FutureSkills Prime, n.d.). The platform offers government-backed incentives, industry-recognised certifications, and job placement support. Another initiative, YuvAI, collaborates with academia and companies like Meta to train young engineers and researchers through workshops and open-source projects (NEGD, 2020). Education reforms introduce AI basics in schools and launch specialised undergraduate and master’s programs, ensuring a steady stream of AI-proficient graduates. Many universities now offer BTech or MTech degrees in AI/ML or data science, a significant shift from just a few years ago, when it was rare.

Companies in India are increasingly removing strict degree requirements, opting instead for apprenticeships and in-house training to develop necessary skills (Li and Shine, 2025). For example, major IT firms like TCS and Infosys have reskilled thousands through internal “AI academies” and platforms like Infosys Lex, gearing up for AI-related projects. Government initiatives under the Skill India mission offer short-term digital literacy courses, including AI basics, across various sectors. The Telecom Sector Skill Council trains technicians for AI-powered networks, while the Agriculture Skills Council teaches agricultural workers about AI farming apps, aiming to empower non-tech workers with AI tools in the fields.

Another focus is inclusion in the future workforce. Underrepresented groups – women, rural youth, and economically disadvantaged communities – could be either left behind by the AI revolution or actively included in new opportunities. India’s female labour force participation is low (around 30% in 2022) (Li and Shine, 2025). Empowering women with digital skills and remote work flexibility could help them secure emerging AI jobs, improving gender balance in tech. The IndiaAI Mission’s skill pillar and government schemes promote diversity in tech education, offering scholarships for women in STEM and creating innovation hubs in smaller towns. Two-thirds of Indian companies plan to hire from diverse talent pools, indicating industry alignment with this goal (Li and Shine 2025).

Challenges remain in reskilling a large workforce. SMEs often lack resources for retraining. The government explores incentives, like tax breaks or training subsidies, to promote upskilling in AI and digital tech. Continuous learning is crucial as AI technology evolves, requiring regular skill updates. There’s a growing demand for a Skills 2.0 approach, promoting collaboration between industry, government, and training providers to offer modular, lifelong learning. The rise of online education and MOOCs in India, such as Coursera, Udacity, and NPTEL, focusing on AI, enables self-driven upskilling professionals.

Productivity

AI’s infusion into workplaces also changes how work is done and business processes themselves. Indian companies are increasingly adopting AI in functions like recruitment (using AI tools to screen candidates), performance management (AI analytics to track productivity), and decision-making (data-driven AI insights for strategy). This can lead to significant productivity gains – for example, AI-driven automation in operations and supply chain can reduce costs and errors. An Ernst & Young (2024) report estimates that by 2035, AI could add an extra 1% annual growth to India’s GDP. Another study projects a multi-per cent uplift in productivity in the organised sector by 2030 due to AI adoption (EY, 2025). Another study suggests that generative AI might contribute $400 billion to GDP by 2030 by transforming various sectors (Bhalla, 2024). For India, boosting productivity is key to sustaining high GDP growth, so AI could help “do more with less” in many industries. For India, with historically lower productivity in many sectors compared to global benchmarks.

AI can help micro-enterprises and farmers access advanced analytics, enhancing output with existing resources. If achieved, this could lead to higher wages for skilled workers and lower consumer prices, theoretically improving living standards.

However, it raises concerns about work culture and employee well-being. AI's role in monitoring workers introduces privacy issues, workplace surveillance, and job stress from human-AI interactions. Companies must ensure transparency in AI usage for employee evaluation and maintain ethical trust management.

The nature of teamwork is shifting as human workers increasingly collaborate with AI “co-workers” and decision-support systems. In fields like customer service, employees might handle complex queries while chatbots manage simple FAQs; in journalism, reporters use AI tools to generate quick news briefs and then add their analysis. These hybrid work models are becoming standard, requiring new skill sets—not just technical know-how but also skills to work effectively alongside AI. Soft skills like adaptability, learning-to-learn, and cross-disciplinary collaboration will be valued. Recognizing this, some Indian training programs now include modules on “AI literacy” for non-technical staff, teaching AI capabilities, how to interpret outputs, and supervise AI systems.

Inequality and the Nature of Work

Concerns arise that AI could worsen inequalities if mismanaged. Skilled tech workers and companies may reap most benefits, while low-skilled workers risk job losses or wage stagnation. India’s informal workforce might evade direct AI displacement but could still feel market shifts (e.g., autonomous trucks may impact millions of truck drivers). Scholars argue that without intentional policies, AI’s benefits may not reach India’s poor. Korinek and Stiglitz (2021) suggest that broad redistributive policies (such as social safety nets or universal basic income) must share AI-driven productivity gains globally. Currently, the Indian government plans no such measures, focusing instead on reskilling and creating new jobs for displaced workers. Labour unions and movements face challenges advocating for redistributive policies due to low private sector unionisation and widespread informal employment. This heightens the need for retraining workers and fostering job-creating industries; otherwise, inequality may increase between those thriving in an AI-driven economy and those who cannot.

AI transforms work by automating repetitive tasks, allowing employees to concentrate on more complex functions. This leads to more engaging jobs; for example, junior lawyers spend less time on document discovery and more on strategic case elements. Doctors apply AI for routine diagnostics, increasing patient interaction. Nevertheless, AI may also heighten work intensity; reports show increased surveillance and monitoring from AI tools, potentially causing stress and decreasing autonomy (Pathak and Agrawal, 2025). In India’s gig economy, AI-driven management oversees work assignments, evaluations, and pay. Ride-hailing drivers often contend with unclear AI systems that impact their ride allocations and compensation, resulting in dissatisfaction and strikes over algorithmic fairness (Kapoor and Rai, 2023). Without clear regulations, workers have limited options. The Ministry of Labour is analysing AI's effects on labour and considering updates to legislation for gig workers. Advocating for “AI for good work”, where AI supports rather than exploits workers, is becoming a significant policy issue. Civil society in India calls for transparency in workplace AI systems and worker data rights.

AI could reshape workforce distribution by enabling more remote and gig work for Indian workers, allowing a graphic designer in a small town to take freelance gigs worldwide using AI design tools. This might decentralise work from cities. However, job polarisation poses a risk: high-skill and low-skill jobs may increase, while mid-skill jobs decrease, as noted in developed countries. India might see declines in clerical jobs but rises in AI maintenance and engineering roles, along with persistent demand for low-end services. Policymakers acknowledge these shifts. NITI Aayog’s strategy promotes India as a hub for frugal AI innovation, supporting inclusive growth (NITI Aayog, 2018). The coming years will challenge India's ability to harness AI’s benefits—productivity, new jobs, improved services—while tackling issues like displacement, inequality, and worker rights. Current policies focus on significant investments in skills, responsible AI use, and worker-centric applications, necessitating ongoing effort adaptation.

Conclusion and Outlook

India’s AI development is at a pivotal moment. Significant progress results from public initiatives, private sector innovation, and international cooperation. AI benefits critical sectors like healthcare and agriculture, positioning India as a global leader in inclusive AI solutions. However, the nation must ensure these advancements improve livelihoods for its 500-million workforce.

The Indian government recognises the need to update labour policies and social security frameworks as AI transforms work. In 2023, India passed the Code on Social Security, extending benefits to gig workers expected to grow with AI. Discussions on managing labour displacement include unemployment insurance, portable benefits, and a reskilling fund from automating industries. The Economic Survey 2024-25 suggested “aggressive skilling programs and apprenticeship schemes” to address job dislocations caused by AI (EY, 2025), indicating potential incentives for companies to retrain at-risk workers. Additionally, India's AI ethics discourse emphasises that AI should enhance human capabilities rather than replace humans in critical decision–making, reflected in proposed frameworks requiring human approval for high-stakes AI applications like AI-driven medical diagnoses or legal decisions.

India can learn from how industrialised countries are addressing AI’s impact on jobs. The government participates in international forums (G20, ILO) to explore future work and will likely adopt relevant best practices. While automation may eliminate some roles, it also generates new jobs and increases demand elsewhere. The focus should be on facilitating labour mobility, allowing workers to reskill for new roles. For example, if the demand for junior accountants decreases, these workers can be retrained as data analysts or business analysts, where human judgment is essential alongside AI tools.

In conclusion, AI will significantly influence work in India through the 2020s. The effect on employment is uncertain and depends on policy choices, education, and workforce preparation. Optimism exists that AI can be a “force multiplier” for India’s development, enhancing productivity and creating higher-value jobs if the workforce adapts. The Government aims to maximise AI’s economic benefits (innovation, new industries, efficiency) while addressing challenges (job loss, inequality) through the IndiaAI Mission and the Economic Survey. By investing in human capital and updating labour policies, India seeks to ensure AI promotes better work that is more skilled, creative, and fulfilling than a threat. The coming years will test this approach as AI’s theoretical impacts become real for millions of Indians workers.

"Although still in draft form in 2025, the Digital India Act features proposals for classifying high-risk AI systems and recommends potential bans on specific AI applications, including impersonation and deepfakes. This demonstrates India’s commitment to integrating AI accountability into contemporary technology regulation while balancing the need for innovation."

Governance

While the government views AI as a transformative “AI for All” solution for pressing issues in agriculture, healthcare, and smart cities (Kaushik et al., 2025), policymakers acknowledge the need to balance AI’s benefits with risks to safety, privacy, and equity (Khan and Gardezi, 2025; PIB, 2023b). This section provides a comprehensive analysis of AI governance in India, examining: the institutional frameworks and responsibilities for AI oversight; the legal, policy and ethical frameworks guiding AI development and use; multi-stakeholder governance approaches; implementation mechanisms and enforcement practices; India’s engagement with global AI governance bodies and norms; critical gaps or challenges in the current governance approach; and implications for the world of work.

Institutional Frameworks and Responsibilities for AI Governance

Central Government Bodies

The Ministry of Electronics and Information Technology (MeitY) is the nodal agency for AI governance in India (Mohanty and Sahu, 2024). MeitY oversees technology policy and has led many AI initiatives, from drafting legislation to launching national programs. NITI Aayog, a government think-tank, developed India’s early AI strategy and ethical guidelines (NITI Aayog, 2018). In late 2023, the Office of the Principal Scientific Advisor (PSA) was tasked with providing strategic guidance on AI regulation across ministries (Mohanty and Sahu, 2024). Under the PSA’s chairmanship, an Advisory Group on AI was formed to create an AI-specific regulatory framework for India. This group, with representatives from relevant ministries, oversees a Subcommittee on AI Governance and Guidelines. The Subcommittee analysed regulatory gaps and drafted recommendations for a national AI governance approach. MeitY convened this expert subcommittee in November 2023, producing a Report on AI Governance Guidelines Development by January 2025 (now under public consultation) (MeitY, 2025d).

Other central institutions are involved. The Prime Minister’s Office (PMO) and the National Security Council Secretariat (NSCS) influence AI policy due to their mandates and security concerns (Mohanty and Sahu, 2024). Issues like AI’s effect on national security and misinformation have captured high-level attention. Meanwhile, sectoral regulators are identifying roles in AI oversight. RBI and the Telecom Regulatory Authority of India (TRAI) have expressed concerns about AI risks in their sectors (Mohanty and Sahu, 2024). In 2023, TRAI recommended measures for an AI and Big Data “Telecom Stack”, acknowledging AI-related consumer harms (Mohanty and Sahu, 2024). The RBI Governor warned that unregulated AI use in banking may pose systemic risks (Mohanty and Sahu, 2024). Regulators in finance, telecommunications, consumer protection, health, and education are expected to craft sector-specific AI guidelines and monitoring systems (Mohanty and Sahu, 2024). Parliament is also interested, with a Standing Committee on IT discussing AI impacts and potentially coordinating oversight (Mohanty and Sahu, 2024).

AI Coordination Initiatives

In March 2024, the government launched the IndiaAI Mission as a comprehensive program for AI innovation and governance. With a budget of ₹10,371 crore (≈ $1.3 billion) over 5 years, it aims to establish a national AI ecosystem encompassing computing infrastructure, data resources, skilling, research, and standards for “Safe & Trusted AI” (MeitY, 2023c; MeitY, 2025d; Mohanty and Sahu, 2024). The Mission includes seven key pillars: IndiaAI Compute Capacity (high-end GPU facility), IndiaAI Datasets Platform, Innovation Centre, Application Development, FutureSkills programs, Startup financing, and the Safe & Trusted AI pillar focused on ethics and safety tools (MeitY, 2023c; MeitY, 2025d). Remarkably, within 10 months, it established a computing infrastructure with over 18,000 GPUs for indigenous AI models (PIB, 2025e). Under the Safe & Trusted AI pillar, grants have been issued for projects on bias mitigation, explainability, privacy tech, algorithmic auditing, and more (MeitY, 2023c; MeitY, 2025d). This initiative seeks to embed governance into India’s AI R&D ecosystem by encouraging “techno-legal” compliance solutions and responsible AI innovation.

India is establishing the AI Safety Institute (AISI) to focus on AI risk research and standards. In January 2025, the Union IT Minister announced the agency’s creation to promote “a techno-legal approach” to AI safety, uniting researchers, industry, and government for ethical AI deployment (MeitY, 2025b). AISI will serve as a hub for best practices in AI risk assessment, auditing, and accountability. Mirroring similar institutes in other countries, it reflects India’s need for specialised capacity to keep pace with AI developments, and its work could inform regulatory enforcement (e.g., certification of high-risk AI systems). Before AISI was announced, MeitY held stakeholder consultations in 2024 on its mandate and governance (Jeevanandam, 2024).

Other Institutional Actors

Earlier government bodies laid the groundwork for today’s AI governance. In 2018, an AI Task Force appointed by the Union Commerce Ministry released recommendations on AI research, skilling, and ethics. Around the same time, NITI Aayog was tasked with formulating a national AI strategy, becoming the de facto coordinator for AI policy (Joshi, 2024). As NITI Aayog’s role evolved into a broader advisory capacity, executive responsibility shifted to MeitY and line ministries. Between 2018 and 2020, MeitY formed several expert committees to address AI policy specifics, examining data platforms, privacy, and AI in key sectors like agriculture, health, and finance (Joshi, 2024). Their reports advocated open data sharing and industry self-regulation (Joshi, 2024), which contributed to concrete policies, including the data governance frameworks discussed below.

India’s federal system enables state governments to engage in AI governance. Tamil Nadu pioneered a “Safe and Ethical AI” policy in 2020, promoting fairness, transparency, and accountability in public services. Other states, including Telangana and Karnataka, have initiated AI hubs or collaborations, such as the WEF Centre for the Fourth Industrial Revolution in Telangana. Although these efforts are early-stage, they indicate rising awareness of the importance of responsible AI governance. Harmonising national and state AI policies will be a significant institutional challenge.

Legal, Policy and Ethical Frameworks for AI Development and Use

India does not currently have a standalone AI law; rather, it is developing a regulatory ecosystem that addresses AI through adjacent digital legislation and soft governance mechanisms (Section 2). The cornerstone of this evolving regime is the Digital Personal Data Protection (DPDP) Act, 2023, which regulates the processing of personal data. While not explicitly focused on AI, it establishes critical standards for lawful data handling, user consent, and purpose limitation, principles that are foundational to trustworthy AI systems (Burman, 2023). Although still in draft form in 2025, the Digital India Act includes proposals for classifying high-risk AI systems and potential bans on specific AI applications, including impersonation and deepfakes (Access Partnership, 2023). This demonstrates India’s commitment to integrating AI accountability into contemporary technology regulation while balancing the need for innovation. NITI Aayog’s 2021 Principles for Responsible AI established key standards of fairness, transparency, safety, and accountability (NITI Aayog, 2021a, b). Additionally, MeitY’s 2025 draft AI Governance Guidelines advocate a decentralised, sector-specific approach alongside an inter-ministerial oversight committee, proposed model AI codes for developers, and an AI incident reporting system (MeitY, 2025d). These recommendations promote ethical development through soft regulatory measures such as certification and algorithm audits.

Indian industry bodies, such as NASSCOM, have reiterated these ethical principles in their best practice documents, reflecting a widespread agreement on fundamental values. Additionally, India has publicly supported international ethical standards, joining the OECD’s Principles on AI, which focus on safety, transparency, accountability, and human-centred values, as well as the G20’s AI Principles that align with the OECD framework (CAIDP, 2025). At the AI Safety Summit in Bletchley Park, UK, in November 2023, India’s representatives emphasised that AI should be “safe and trusted” and highlighted the necessity for democratic nations to reach a consensus on the dos and don’ts of AI (PIB, 2023b). India is also one of the 193 countries that endorsed UNESCO’s Recommendation on the Ethics of AI (2021), committing to principles such as human dignity, fairness, and environmental sustainability in AI applications (CAIDP, 2025). These soft-law instruments demonstrate India’s normative position and serve as benchmarks for assessing domestic AI practices.

Multi-Stakeholder Governance Approaches

India’s approach to AI governance is multi-stakeholder, engaging government, industry, academia, and civil society in dialogue and implementation. This collaboration is essential due to the fast-moving nature of AI technology (Kaushik et al., 2025).

Public-Private Collaboration

The Indian government has gathered industry input on AI policy through consultations and initiatives. NITI Aayog developed the 2018 national strategy with support from think tanks and experts (e.g. the Vidhi Centre for Legal Policy) (Moleon, 2024). In 2020, MeitY formed expert committees with members from tech companies, startups, and academia to formulate AI recommendations (Joshi, 2024). This inclusive process ensured that industry perspectives, such as avoiding regulation that stifles innovation, influenced policy proposals (Joshi, 2024). The “AI for India 2030” platform is co-hosted by the World Economic Forum Centre for the Fourth Industrial Revolution (WEF C4IR) India, MeitY, the Office of the PSA, and NASSCOM (Kaushik et al., 2025). Launched in January 2024, it unites leaders, agencies, experts, and civil groups under an Advisory Council to co-create blueprints for inclusive AI development (Kaushik et al., 2025). The initiative focuses on leveraging AI for social impact and bridging skill gaps. It includes workstreams such as AI Playbooks for priority sectors (e.g. agriculture, manufacturing MSMEs) and an AI Sandbox for experimentation on data sharing, ethics, and governance. These workstreams involve collaboration among startups, industry, and government. AI for India 2030’s multi-stakeholder approach is crucial for designing governance mechanisms that reflect broad consensus (Kaushik et al., 2025). The partnership has produced sectoral “AI playbook” reports and convened policy hackathons. This indicates India’s preference for a participatory governance model in which industry self-regulation and co-regulation play significant roles alongside state regulation.

Industry associations like NASSCOM serve as key intermediaries in governance. NASSCOM established an AI & Big Data Council, collaborating with MeitY on policy advocacy. It released Responsible AI resource kits for companies and runs the FutureSkills platform for AI workforce development in partnership with the government. Another example is RAISE 2020 (Responsible AI for Social Empowerment), a global AI summit hosted by India in October 2020, where government, industry (domestic and foreign tech firms), and civil society shared insights on AI ethics and innovation (MeitY, 2024c). Outcomes from RAISE included the launch of the IndiaAI online knowledge portal (jointly developed by MeitY and NASSCOM) and announcements of public-private research centres.

Civil Society and Academia

Indian civil society, including non-profits, researchers, and the media, has actively engaged in AI governance, advocating for stronger accountability. Organisations like the Internet Freedom Foundation (IFF) and the Centre for Internet and Society (CIS) have contributed insights on policies such as the Data Governance Framework and AI ethics guidelines, typically through public consultations or advisory roles. For example, the government’s AI ethics guidelines drew from independent research, and the UNESCO AI Ethics Readiness Assessment for India involved academic experts (CAIDP, 2025). Civil society has raised concerns about bias in facial recognition and demanded algorithmic transparency in government uses of AI (Joshi, 2024). This feedback has pressured the government to adopt rights-based approaches. Following advocacy, the 2022 Data Governance draft placed greater emphasis on privacy safeguards than the previous data accessibility policy did (Joshi, 2024).

India’s technical academia contributes to governance through research and policy advice. Institutes like the IITs and IISc host AI labs and policy centres, including the Centre for Responsible AI at IIT Madras, which support government bodies. The Principal Scientific Advisor’s Advisory Group features academic experts. In 2024, MeitY collaborated with Google Research India to fund 20 faculty-led projects on AI’s social impact, generating policy evidence on AI fairness in health and agriculture and highlighting shared responsibility among stakeholders.

Transparency and Consultation

A key aspect of India’s multi-stakeholder approach is transparency in policymaking. Major AI-related policies are open for public comment. For instance, drafts of the AI Governance Guidelines report (2025) were published on the IndiaAI portal for stakeholder feedback (MeitY, 2025c). The government has organised workshops, including a consultation in October 2024 on the AI Safety Institute (MeitY, 2025b), to gather diverse views. Although civil society is not always at the decision-making core, its inclusion has enhanced legitimacy. The Center for AI and Digital Policy (CAIDP) commented that it “welcomes the… public consultations on AI policy and that the country’s AI policies and practices are readily available to the public, such as reports from each of MeitY’s AI Committees” (CAIDP, 2025). This shows that India’s process is notably open. Going forward, maintaining trust among stakeholders will require institutionalising this openness and ensuring feedback, especially from marginalised groups, is reflected in final outcomes.

Implementation Mechanisms and Enforcement Practices

In India, actualising AI governance involves translating broad principles and policies into practice via specific programs, oversight mechanisms, and penalties when required. Currently, India’s strategy emphasises capacity building and promoting voluntary compliance, while formal enforcement is beginning to take shape through emerging legislation.

Institutional Implementation

The Digital India programme implements many AI initiatives, with the IndiaAI Mission as a cornerstone. By backing projects on bias mitigation and explainable AI, the government fosters technical solutions for standard industry practice (MeitY, 2025d). It also includes the IndiaAI FutureSkills platform to retrain workers in AI, emphasising that ethical AI enforcement relies on skilled practitioners who can responsibly build and audit systems (MeitY, 2025d). Moreover, India is establishing Centres of Excellence (CoEs) in AI, specialised hubs collaborating with academia and industry to develop sector-specific AI solutions and ethical practices. CoEs in fields like healthcare and fintech AI have been announced for 2025. While primarily focused on innovation, they support governance by serving as testbeds for guidelines (e.g., a Healthcare AI CoE can pilot AI medical device ethics guidelines).

Guidelines and Standards

Indian authorities have been issuing advisories to influence AI deployment. In March 2024, the government issued an Advisory on Generative AI, directing companies to obtain permission before deploying certain models and to address algorithmic bias and deepfakes (Mohanty and Sahu, 2024). This directive surprised the industry and faced criticism for stifling innovation (Mohanty and Sahu, 2024). Consequently, the advisory was withdrawn and replaced with a softened version after consultation (Mohanty and Sahu, 2024). This episode revealed both the government’s willingness to enforce AI safeguards and the difficulty of doing so without proper deliberation. It highlighted the need for a structured, transparent rule-making process for AI rather than ad-hoc dictates. Future mandatory AI rules will likely be introduced under the forthcoming Digital India Act or through amendments to existing laws, ensuring a proper legal basis (Mohanty and Sahu, 2024).

Regulatory standards serve as emerging enforcement tools. The Bureau of Indian Standards (BIS) has formed committees to develop AI safety and reliability standards aligned with ISO/IEC guidelines. While these standards are voluntary, government procurement may reference them (e.g., requiring AI systems for public use to meet BIS’s quality benchmarks). Such standards could become de facto mandatory if incorporated into licenses or contracts. The AI Safety Institute is expected to help by creating India-specific standards or certification schemes for AI products, incentivising safety through market advantages for certified products.

Enforcement Agencies

With the DPDP Act, India is operationalising the Data Protection Board to investigate and penalise violations of personal data rules. This relates directly to AI systems that handle personal data; for example, an AI hiring tool misusing candidate data could incur penalties under the Act. The Board’s enforcement record will test its ability to hold big tech companies accountable in India. Existing agencies can also enforce laws on AI activities: the RBI can apply banking laws if a bank’s credit-scoring AI is discriminatory, and the National Medical Commission can intervene if an AI diagnostic tool threatens patient safety. However, these measures depend on regulators updating guidelines to cover AI explicitly. Encouragingly, the Securities and Exchange Board of India (SEBI) set up a committee in 2023 to examine AI in financial markets, and the Central Consumer Protection Authority will monitor AI-enabled consumer fraud.

Judicial Oversight

The courts in India serve as a reactive means of enforcing responsible AI. The Supreme Court has shown interest in technology governance, notably ruling in 2017 that data privacy is a fundamental right, which pressures AI actors to respect privacy. If an AI system causes harm, such as a fatal autonomous-vehicle accident or a wrongful arrest based on facial recognition, victims can seek recourse through existing tort, criminal, or constitutional law. Although no landmark AI liability case has yet arisen, the potential for litigation serves as a backstop enforcement mechanism. This legal approach is slow and case-specific, revealing a broader issue: India lacks dedicated accountability frameworks for AI harms and relies instead on interpretations of existing laws.

Compliance Culture

The government promotes “ethical AI compliance” through capacity building. The 2025 draft AI Governance report advocates a “whole-of-government approach to ensure effective compliance and enforcement,” urging ministries to coordinate as the AI ecosystem grows (MeitY, 2025d). It recommends mapping existing laws to identify unaddressed AI risks (Mohanty and Sahu, 2024) and assigning clear responsibility for each risk type. It also suggests leveraging technology for governance, such as AI tools that monitor for biased outcomes (Khan and Gardezi, 2025). These ideas aim to strengthen preventive enforcement. Industry is encouraged to adopt “ethical AI audit” practices. Large IT companies in India have begun instituting AI ethics boards, bias bounties, and third-party audits to meet government and public expectations. Though not mandated, these practices are crucial for maintaining a licence to operate. A nascent self-regulatory regime is emerging, in which companies demonstrate compliance with voluntary codes (such as avoiding AI for mass surveillance or lethal purposes) to preempt regulation. Government officials praise voluntary efforts but warn that failure to self-regulate may lead to hard regulations (Mohanty and Sahu, 2024).
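To make the idea of an “ethical AI audit” concrete, the sketch below shows one check such a tool might run: the four-fifths (disparate impact) test on an automated hiring system’s outcomes. This is a minimal illustration, not a method prescribed by any Indian guideline; the data, group labels, and 0.8 threshold are assumptions drawn from common auditing practice.

```python
# Minimal sketch of one check an "ethical AI audit" might run: the
# four-fifths (disparate impact) test on an automated hiring tool's
# outcomes. All data, group labels, and names here are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Return (ratio of lowest to highest group selection rate, rates).

    Ratios below 0.8 are commonly flagged for review under the
    four-fifths rule used in employment-discrimination auditing.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical decision log: (applicant group, shortlisted?)
    log = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 35 + [("B", False)] * 65
    ratio, rates = disparate_impact_ratio(log)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}"
          + ("  -> flag for review (< 0.80)" if ratio < 0.8 else ""))
```

In practice, an audit suite would run many such checks across attributes, intersections, and time windows, and flagged results could feed into the incident-reporting channels the draft guidelines propose.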

India’s Engagement with Global AI Governance

India has established itself as an active participant in global AI governance discussions, promoting inclusive, human-centred, and development-focused strategies. It became a founding member of the Global Partnership on Artificial Intelligence (GPAI) in 2020 and hosted the GPAI Summit in New Delhi in December 2023, highlighting the importance of responsible and equitable AI deployment, particularly for the Global South (GPAI, 2023).

During its G20 Presidency in 2023, India prioritised AI governance as a fundamental issue, advocating for global collaboration on ethical standards, safety measures, and fair access to AI resources (G20, 2023). India’s agenda called for AI frameworks that cater to different socio-economic realities and work to bridge the growing digital divide.

In light of increasing global concerns regarding AI safety, India, alongside 27 other nations, endorsed the Seoul Ministerial Statement on AI Safety (May 2024), pledging to work together on AI risk reduction and compliance with international standards (OECD, 2024). Subsequently, India backed the establishment of a worldwide network of AI Safety Institutes to foster international research, testing, and policy coordination (Jeevanandam, 2024; MeitY, 2025b).

India has also engaged in both bilateral and multilateral discussions on emerging technologies with the US, UK, Japan, and the EU, frequently within digital partnership frameworks that include provisions for AI collaboration. These discussions encompass interoperability of AI standards, joint research and development, and responsible application in critical sectors such as health, education, and climate resilience.

India’s position highlights its goal to serve as a bridge between developed and developing countries in shaping global AI governance. Its contributions focus not just on technical standards, but also on principles of equity, accessibility, and global AI safety.

Gaps, Overlaps and Challenges in India’s AI Governance

India’s AI governance approach is comprehensive in theory but faces gaps and challenges in implementation. Key issues include regulatory fragmentation, capacity constraints, and the need for stronger enforcement.

Regulatory Gaps

The main gap is the lack of dedicated AI legislation or binding regulations addressing AI risks. India’s reliance on general IT laws, data protection law, and voluntary guidelines leaves many AI-specific issues unregulated, such as algorithmic bias, lack of transparency in decisions, and liability for autonomous system failures. The DPDP Act 2023 covers data privacy but not the full range of AI’s effects on fairness or safety. As AI systems are increasingly used in sensitive areas like hiring, lending, policing, or social welfare, this regulatory gap could result in harm without legal remedies. In response, the government’s AI task force and the PSA’s subcommittee have called for regulations tailored to AI’s unique challenges (Mohanty and Sahu, 2024; MeitY, 2025d). Until new rules are established (e.g., the proposed Digital India Act), India relies on companies’ self-regulation to “do the right thing,” which may be inadequate if market incentives prioritise quick deployment. The risk is that, without clear prohibitions, harmful AI applications (such as mass surveillance or exploitative lending) may thrive in a regulatory vacuum. CAIDP’s assessment urged India to “set clear red lines: ban AI systems which lack scientific validity and undermine rights” rather than depend solely on voluntary commitments (CAIDP, 2025).

Overlap and Fragmentation

Institutional overlap and coordination pose challenges in AI governance. Multiple government entities – MeitY, NITI Aayog, the PSA’s office, sector regulators, and state governments – engage with AI, often with overlapping mandates. For example, NITI Aayog released ethical guidelines around the same time as MeitY’s committees developed theirs, creating potential duplication (Joshi, 2024). While the diverse involvement is a strength (a whole-of-government approach), it risks incoherence without synchronisation. Reports indicate differing philosophies: MeitY favours a light-touch approach, while others, like the PM’s Economic Advisory Council, advocate more regulation (Mohanty and Sahu, 2024). The March 2024 AI advisory controversy illustrated this, as MeitY insiders suggested the heavy-handed draft came from “factions” within the government. Such misalignment can cause policy zigzags that confuse industry and the public. The PSA-led coordination mechanism aims to address this, but its effectiveness in harmonising views across ministries, ensuring, for instance, that the RBI’s AI approach aligns with MeitY’s framework, remains uncertain. Another challenge is the overlap between data governance and AI governance: India’s efforts on data and AI sometimes operate in silos despite their close connection, since AI thrives on data. Overcoming these silos to create an integrated governance ecosystem is an ongoing challenge.

Capacity Constraints

Implementing AI governance requires substantial technical and institutional capacity, which India is still developing. Regulators like the Data Protection Board will need AI expertise to assess compliance and investigate incidents. Currently, trained AI auditors and regulators are in short supply in the public sector. Bridging this gap is critical; otherwise, even well-crafted rules may go unenforced. The government’s adoption of technology-based governance, such as AI tools for monitoring (Khan and Gardezi, 2025), could help but requires investment. Similarly, the judicial system’s ability to handle complex AI-related cases, such as determining algorithmic discrimination, remains untested and will need expert input. Many smaller Indian companies and start-ups lack resources for thorough algorithmic impact assessments or bias audits. Without external support, they may inadvertently deploy harmful AI. Thus, capacity-building initiatives, such as training civil servants in AI, encouraging third-party AI audit firms, and developing academic curricula on AI ethics, are essential for effective governance. India acknowledges this need; for instance, the Safe & Trusted AI pillar supports projects on audit tools (MeitY, 2025d), marking progress toward compliance capacity. However, scaling these solutions across numerous AI deployers remains a significant challenge.

Enforcement and Accountability

A fundamental challenge is moving from principles to accountability. Much of India’s AI governance relies on soft measures. The ethical principles sound promising, but there are often no consequences for ignoring them. Currently, there is no AI regulator to sanction misuse, and no specific legal recourse for those affected beyond general laws. This lack of enforceable accountability could undermine public trust. The situation may improve with the Data Protection Act and potential penalties under the Digital India Act for harmful AI content. Until then, India’s approach could be seen as “all carrot, no stick”. Some scholars argue that the government’s strategy “constructs and legitimises a globally-dominant paradigm of informational capitalism” by prioritising data while leaving self-regulation to firms (Joshi, 2024). This critique suggests India’s AI governance accommodates industry too much and protects individuals insufficiently. There is a notable gap in addressing the impact on workers: current policies focus on job creation but less on social safety nets or workers’ rights in an AI-driven economy. Issues like algorithmic management, AI-driven surveillance, and potential mass displacement by automation have not been directly tackled in Indian policy. These could become flashpoints if not addressed as part of a comprehensive AI governance strategy.

Inclusivity and Representation

Another challenge is ensuring inclusive AI governance in India that reflects all societal needs. Poorly governed AI systems could worsen inequalities; for example, language models failing in Indian vernaculars may underserve non-English speakers, while automated credit scoring could marginalise those without digital footprints. The government’s focus on “AI for All” and inclusive growth (Kaushik et al., 2025) is promising, but difficult to implement, requiring diverse representation in AI design and policy. Currently, discussions are dominated by government and big tech, limiting input from marginalised communities, rural voices, and unions. Civil society has noted that affected groups (e.g., gig workers or citizens under facial recognition) lack a say in rule-making for these systems. Bridging this requires grassroots consultations, empowering consumer groups, and establishing formal channels for public input in AI oversight, like requiring a public interest assessment for major AI deployments. Without this, governance may cater to elite interests and overlook real-world AI harms.

Balancing Innovation and Regulation

India faces the challenge of balancing innovation and regulation. The government aims not to “miss the bus” on AI-led economic growth, as shown by its reluctance to impose strict regulations too soon (Mohanty and Sahu, 2024). This pro-innovation approach supports growth but may delay necessary protections. Conversely, a significant AI failure might prompt overregulation that hinders innovation. India must navigate this balance carefully, adopting a phased strategy that initially employs principles and voluntary measures before specific regulations are introduced. However, maintaining public trust requires evidence that voluntary measures are effective. If self-regulation fails to mitigate harms, demands for “hard law” will increase. Additionally, international pressure is mounting as the EU, US, and others implement AI regulations (such as the EU AI Act or U.S. executive orders), compelling Indian software exporters to comply and thereby raising domestic standards. India must align its governance with international norms to avoid being viewed as a regulatory grey zone.

Implications for the World of Work

As outlined in earlier sections, both the existing and proposed measures support workforce upskilling and raise awareness of ethical AI usage, fostering an innovation-friendly atmosphere centred on inclusion and fairness. However, current regulations fall short in protecting workers and their data rights. The Digital Personal Data Protection Act (DPDP Act 2023) establishes a privacy right, yet critics argue it largely exempts employer practices (Shobhit, 2024). For instance, Section 7(i) permits companies to process personal data for “employment purposes” or to “protect the employer from loss or liability” (Shobhit, 2024). This effectively gives employers broad access to employee data without stringent boundaries. Fundamental data rights, including access, erasure, and portability, apply only when data processing is consented to; employees have no right to know about or delete non-consensually collected workplace data (Shobhit, 2024). One analysis points out that the law “enables opaque collection and exploitation of employees’ data, depriving them of essential rights and protections” (Shobhit, 2024). These deficiencies pose a risk to worker privacy and autonomy. Consequently, unchecked monitoring tools like screen recording and GPS tracking are legitimised by the law, allowing employers to retain extensive worker data indefinitely, potentially fuelling bias-prone algorithms with unchallenged personal information (Shobhit, 2024).

In a similar vein, algorithmic accountability remains insufficient. India does not have a mandatory system that compels companies to audit or explain the AI tools they employ in the workplace. Experts observe that the country currently lacks a regulatory structure to safeguard gig workers from rights infringements caused by arbitrary and opaque algorithmic decision-making. Proposed guidelines, including MeitY’s draft AI Governance report and NITI Aayog’s principles, advocate transparency and fairness only in vague terms, and they are merely recommendations. In the absence of statutory requirements for impact assessments or bias audits, unlike the EU’s AI Act or the proposed U.S. Algorithmic Accountability Act, platforms and employers primarily regulate themselves. Consequently, algorithmic management, whether for hiring, task distribution, or performance evaluation, functions with minimal scrutiny. Workers have no guaranteed rights to explanations, appeals, or recourse if they are treated unjustly by automated systems. Ultimately, the existing governance framework fosters algorithmic innovation while exposing workers to an opaque “black box” of risk.

Regarding automation and transitions, policy has prioritised productivity and skills over safety nets. The Economic Survey 2024–25 and IndiaAI initiatives highlight AI's role in enhancing the workforce, while skilling programs like the National AI skilling initiative and FutureSkills grants focus on equipping workers for future roles (Jeevanandam, 2025). However, formal assistance for displaced workers remains limited. India’s consolidated labour codes (2019–2020) manage wages and working conditions but fail to address issues related to AI-induced job loss or the complexities of gig work. There are no provisions for an automation-displacement fund, guaranteed retraining vouchers, or income support connected to technological advances. Advocates have urged the establishment of employer-funded transition programs and the inclusion of gig and AI-enhanced workers under labour laws, though these proposals have not been implemented. While India’s AI governance encourages skill development and ethical practices, challenges persist in protecting labour rights. A combination of new laws and policies will be essential to balance innovation with strong worker protections, including data rights, algorithmic recourse, and transition support.

Conclusion

India’s current AI governance strategy integrates soft enforcement tools such as guidelines, incentives, and audits alongside indirect hard enforcement via pertinent legislation, which includes data protection laws and sector-specific regulations. The creation of new bodies like the Data Protection Board and AI Safety Institute marks a move toward enhancing direct enforcement mechanisms. Looking forward, we can anticipate more formalisation, likely in the shape of an AI Act or targeted regulations that empower regulators to audit AI systems, require impact assessments for high-risk AI, and impose penalties for non-compliance, akin to the EU’s framework. The success of this implementation will depend on improving institutional capabilities to supervise AI and ensuring that stakeholders are fully aware of their responsibilities.

References