E-AIPOWW’s India Tracker shows AI policy and regulation work across India; business activities around AI innovation and development; and civil society responses, including those of unions, NGOs and IGOs, where workers are affected.
The more recent E-AIPOWW India Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.
This report is part of the Artificial Intelligence Observatory for the World of Work (AIPOWW) Symposium for Global Political Economy. It was written in 2024 and precedes the 2025 report, written by Chandrima Roy and Koushik Das Sarma. While the 2025 report emphasises developmental and work-related aspects, this report offers more detail on existing regulations and implications for economic inequality.
India has achieved substantial economic progress on many measures but remains a poor nation. The Indian government believes that AI will play an important role in India’s continued economic growth, both through its contribution to productivity in the private sector and through smarter and more data-led government. In recent years, the Indian government has taken steps to encourage AI adoption, drawing on existing strengths in digital infrastructure and specialised human capital. To some extent these efforts have been successful. According to the Stanford AI Index Report (2025), India has the second highest AI skill penetration rate in the world.[1] Additionally, India hosts the second largest number of GitHub AI projects in the world, and 9.2% of all published articles on AI are from India. At the same time, there are significant worries regarding the potential of AI to widen existing inequalities, to enable greater government censorship, and to exacerbate data protection concerns. This article describes existing regulations and policies on AI in India and explains some of their shortcomings.
India currently does not have any regulation dealing directly or exclusively with AI (Joshi, 2024). However, there is a National Programme on AI and an IndiaAI Mission under the Ministry of Electronics and Information Technology (MeitY), as well as a National Strategy on AI put forward by the Niti Aayog. Additionally, there is existing and proposed legislation around IT, data protection and digitalisation. This section provides an overview of these strategies and pieces of legislation and explains how they bear on the use of AI.
The Information Technology Act 2000 is currently the primary legislation in India dealing with electronic commerce and cybercrime. It was formulated to ensure the lawful conduct of digital transactions and the reduction of cybercrime, on the basis of the United Nations Model Law on Electronic Commerce 1996 (UNCITRAL Model Law). The IT Act 2000 came into effect on 17 October 2000. It applies to all individuals and companies regardless of their nationality or country of registration.
There have been several amendments to the IT Act 2000. The 2008 amendment introduced Section 66A, which criminalised the sending of messages deemed offensive or inciting hatred, with the aim of addressing cases of cybercrime. In 2015, however, Section 66A was struck down as violating the fundamental right to freedom of expression guaranteed by Article 19 of the Constitution. Critics worry that the amended IT Act still enables government censorship (Singh 2023). In the absence of more specific regulation of AI, the IT Act 2000 continues to play an important role in offering legal guidance on the permissible use of AI. Something similar is true of India’s Digital Personal Data Protection Act.
The Digital Personal Data Protection Act was adopted on 11 August 2023 to establish a framework for personal data processing. It regulates how digital personal data is collected, used, stored, and shared, with a focus on protecting individuals’ privacy rights while allowing lawful data use. It applies to the processing of digital personal data and covers both Indian entities and some foreign entities offering goods or services to people in India. It establishes that organisations must generally seek individuals’ consent before processing their personal data and must explain how and why the data will be used.
The Proposed Digital India Act, 2023, currently referred to as the Digital India Bill, is intended to replace India’s existing Information Technology Act, 2000. The IT Act was drafted over two decades ago, at a time when the internet, social media, artificial intelligence, and digital platforms were far less developed. As a result, the IT Act no longer adequately addresses the realities of today’s digital world. The new Digital India Act aims to update India’s legal framework so that it better reflects modern technological developments and the rapidly changing digital landscape (Manda and Misra 2024).
In simple terms, the Digital India Act is expected to act as a modern law for the internet and AI age. Its goal is to regulate India’s digital ecosystem. This includes overseeing online platforms, regulating large technology companies, encouraging fair competition, protecting innovation, and ensuring accountability in digital spaces. The Act also seeks to improve online safety, protect the rights of citizens and digital users, and speed up the handling of cybercrime cases. Because of its wide scope, the Act is expected to have a major impact on businesses operating in India’s digital economy. Through this legislation, the government hopes to encourage global innovation, support entrepreneurship, and position India as a reliable and trusted participant in global digital value chains. Reports suggest that a draft of the Digital India Bill is likely to be released soon for public consultation, allowing stakeholders to provide feedback before the law is finalised (Mohanty and Shatakratu 2025).
Importantly, the Digital India Act is not meant to function in isolation. It is designed to work alongside other important laws and policies, such as the Digital Personal Data Protection Act, the National Data Governance Policy, and amendments to the Indian Penal Code related to cybercrime. Together, these laws aim to create a coordinated and comprehensive framework for governing different aspects of India’s digital ecosystem under the broader Digital India initiative (Maheshwari and Sharma 2025).
The success of the Digital India Act will depend on several key factors. First, the government must strike a careful balance between competing interests. This includes managing the power of large technology companies while also protecting the rights and freedoms of citizens. It is essential that the voices of all stakeholders—industry players, civil society, experts, and the general public—are heard during both the drafting and implementation stages.
Second, the government will need to invest adequately in resources, skilled personnel, and technological infrastructure to ensure that the law can be enforced effectively. Without sufficient capacity, even a well-designed law may fail in practice. Third, public awareness will play a crucial role. Citizens need to understand their rights and responsibilities in the digital space so that they can engage safely and confidently online.
The central government has stated that it plans to conduct extensive consultations with experts, industry representatives, academics, media organisations, and the general public before finalising the Bill. Because of these consultations and the timing of India’s general elections, it is expected that the Bill will be introduced in Parliament only after the 2024 elections. While consultations are important, experts have expressed concern that this delay will prolong the absence of effective internet regulation in India. The existing IT Act, 2000, is widely seen as inadequate for dealing with modern digital challenges.
According to critics, delaying the Digital India Act could slow the growth of India’s digital economy by discouraging investment, limiting business expansion, and reducing India’s global competitiveness. It may also allow large technology companies to continue operating with limited oversight, weaken online safety, and delay improvements in areas such as digital governance, data privacy, and cybersecurity. Prolonged regulatory gaps could even put India at risk of violating international trade agreements, including those under the World Trade Organization.
India has one of the largest populations in the world, which makes it a major generator of data. However, much of this data ecosystem remains poorly regulated. The absence of clear laws and policies has contributed to problems such as data theft, misuse of data, and cyber-related crimes. The rapid growth of smartphone usage and widespread internet access have made it increasingly important for India to develop a strong and effective data governance framework.
In recent years, the Indian government has shown growing interest in regulating non-personal data. Non-personal data refers to data that does not directly identify individuals. This interest is largely driven by the economic value such data can generate, especially for research, innovation, and policymaking. After introducing the Digital Personal Data Protection Act in 2023, the Ministry of Electronics and Information Technology (MeitY) began focusing more closely on regulating non-personal data.
In 2020, MeitY set up an expert committee chaired by Kris Gopalakrishnan to study issues related to non-personal data and recommend regulatory measures. The committee’s report classified non-personal data into two categories: human and non-human. Human non-personal data refers to data that originally related to individuals but has been anonymised so that individuals can no longer be identified. Non-human non-personal data includes data that never involved personal information, such as weather or traffic data. The committee also recommended creating a National Data Protection Authority to oversee this emerging area.
In February 2022, MeitY released the Draft India Data Accessibility and Use Policy. This policy faced strong criticism because it proposed monetising government-held data by selling or licensing it to private companies. Critics argued that treating data as a commodity raised serious ethical and governance concerns. After repeated withdrawals and revisions during the consultation process, the draft policy was eventually taken down.
The Draft National Data Governance Framework (NDGF) Policy was introduced as a replacement for the earlier draft. Unlike its predecessor, the NDGF Policy shifts away from a strong focus on monetisation and instead emphasises governance, research, and the public good. The policy aims to regulate anonymised non-personal data held by government bodies and make it accessible for research, innovation, and better governance.
A key proposal under the policy is the creation of the “India Datasets Programme,” which would consist of curated, anonymised non-personal datasets. These datasets would be made available to researchers, startups, and other users. MeitY opened the Draft NDGF Policy for public comments in May 2022, and government documents indicate that the policy is currently in the process of being finalised.
At present, non-personal data is stored across various government departments, often in incompatible formats. This makes it difficult to access and use the data effectively. The NDGF Policy aims to modernise how government data is collected, managed, and shared. To oversee this process, the policy proposes setting up an Indian Data Management Office (IDMO) under the Digital India Corporation. The IDMO would be responsible for managing the India Datasets Programme and implementing the NDGF Policy. The availability of quality datasets is also expected to support the growth of AI in India, where lack of data has been a major challenge.
Despite its positive intentions, the Draft NDGF Policy has raised several concerns. One major issue is the risk of de-anonymisation. Even data that has been anonymised can sometimes be re-identified using advanced techniques. This raises privacy and security risks. Critics also note that while the policy focuses on government-held data, it does not clearly exclude private companies. In fact, it encourages private entities to share their non-personal data with the IDMO, which raises concerns about confidentiality and proprietary information.
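The de-anonymisation risk noted above can be illustrated with a toy linkage attack. The sketch below uses entirely fabricated example records: an "anonymised" dataset that retains quasi-identifiers (age, pincode, sex), and a hypothetical public auxiliary dataset (such as an electoral roll) that shares those attributes but includes names. Joining the two on the shared attributes re-identifies individuals; this is an illustrative simplification, not a description of any actual Indian dataset.

```python
# Illustrative linkage (de-anonymisation) attack on fabricated data.
# "Anonymised" rows have no names, but retain quasi-identifiers that
# also appear in a public auxiliary dataset.

anonymised_records = [
    {"age": 34, "pincode": "110001", "sex": "F", "diagnosis": "diabetes"},
    {"age": 51, "pincode": "560034", "sex": "M", "diagnosis": "hypertension"},
]

public_roll = [  # hypothetical auxiliary data with names attached
    {"name": "A. Sharma", "age": 34, "pincode": "110001", "sex": "F"},
    {"name": "R. Iyer", "age": 51, "pincode": "560034", "sex": "M"},
    {"name": "S. Khan", "age": 29, "pincode": "400001", "sex": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("age", "pincode", "sex")):
    """Join the two datasets on the shared quasi-identifiers."""
    # Index the auxiliary data by its quasi-identifier tuple.
    index = {}
    for row in aux_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row["name"])
    # Each unique match links a name to a sensitive attribute.
    matches = []
    for row in anon_rows:
        for name in index.get(tuple(row[k] for k in keys), []):
            matches.append((name, row["diagnosis"]))
    return matches

print(reidentify(anonymised_records, public_roll))
```

Both fabricated records are uniquely re-identified, because the combination of age, pincode and sex is rare enough to single out one person in the auxiliary data. This is why critics argue that releasing "anonymised" non-personal data without technical safeguards (such as aggregation or differential privacy) still carries privacy risk.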
Another concern is the lack of clarity on how data will be stored and protected. The policy does not provide detailed technical safeguards to ensure data security. There are also unresolved questions about how the NDGF Policy will interact with the Digital Personal Data Protection Act, 2023.
Experts have also pointed out that the policy does not clearly define the structure, powers, and accountability mechanisms of the IDMO. Transparency in decision-making, clear criteria for approving or rejecting data access requests, and recognition of intellectual property rights in datasets are all areas that need further clarification.
Finally, while the policy mentions the possibility of charging users for accessing datasets, critics argue that this contradicts its stated goal of supporting research, startups, and citizen participation. Many experts believe that non-personal data governance should be backed by a dedicated law rather than relying solely on a policy framework.
India’s MeitY is implementing the ‘National Programme on AI’, which encompasses four components: a Data Management Office, a National Centre for AI, Skilling on AI, and Responsible AI. The ‘IndiaAI Mission’ initiative complements the ongoing National Programme on AI by establishing a framework that addresses specific gaps in India’s AI ecosystem. The objective is to undertake a comprehensive study of the pillars of IndiaAI and to identify tangible action items that need to be worked on to achieve the Indian government’s goal of “AI for all” (MeitY, 2023).
India's Ministry of Electronics and Information Technology (MeitY 2023) has set up seven working groups to drive the adoption of AI across various sectors. These groups address AI research, infrastructure, policy, and skills development. The detailed reports of each of the Working Groups are publicly available. Additionally, MeitY released the Draft National Strategy on Robotics in July 2023.
The Niti Aayog (India’s leading public policy think tank) prepared the 2018 National Strategy for Artificial Intelligence, which claims that the transformative nature of AI technology, combined with the nascent stage of its adoption worldwide, provides India with an opportunity to define its own brand of AI leadership. #AIforAll, the brand proposed for India, implies inclusive technology leadership, where the full potential of AI is realised in pursuit of the country’s unique needs and aspirations. Niti Aayog holds that India’s strategy should strive to leverage AI for economic growth, social development and inclusive growth, and finally as a “garage” for emerging and developing economies. In order for India to establish a leadership role, Niti Aayog positions its strategy document as a crucial foundational signposting of these goals (Niti Aayog, 2018).
According to Niti Aayog (2018), while AI has the potential to provide large incremental value to a wide range of sectors, adoption to date has been driven primarily from a commercial perspective. Technology disruptions like AI are a once-in-a-generation phenomenon, and hence large-scale adoption strategies, especially national strategies, need to strike a balance between narrow definitions of financial impact and the greater good. NITI Aayog has decided to focus on five sectors that are envisioned to benefit the most from AI in solving societal needs: healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
Niti Aayog (2018) acknowledges that as AI-based solutions permeate people’s lives, questions of ethics, privacy and security will also emerge. Most discussions of the ethical considerations of AI derive from the FAT framework (Fairness, Accountability and Transparency). The strategy proposes setting up a consortium of Ethics Councils at each Centre of Research Excellence (CORE), with all COREs expected to adhere to standard practice while developing AI technology and products.
Notwithstanding the articulation of a strategic vision for India, the Niti Aayog (2018) document is seen as actively encouraging experimentation among India’s population by the private sector, positioning India as a ‘playground’ for the globalised data-based technology industry, which relies upon the datafication of people and their environments for commodification. The role of the state is conceived of as a ‘facilitator’ or enabler for private enterprise, explicitly echoing some established tenets of neoliberal economic policy, including the assertion that government investment in a particular economic field may ‘crowd out’ and disincentivise private spending, and that regulation can disincentivise ‘innovation’ (Joshi, 2024).
The Niti Aayog’s (2018) paper resulted in the establishment of India’s National AI Portal, the central repository for AI resources, research and development in the country, which now showcases the IndiaAI Mission.
There is a broad consensus that increases in the use of AI will have profound social and economic effects. The question of whether these effects are largely beneficial or detrimental depends to a significant extent on the way in which the use of AI is governed and regulated. The purpose of this section is to analyse how India’s governance structures affect the use of AI and to discuss their consequences for different social groups.
Government commissioned reports on the future of AI in India are optimistic that the technology will have largely beneficial effects, provided that obstacles related to access, expertise, and privacy can be surmounted. The reports expect that the areas of healthcare, agriculture, education, infrastructure, and mobility are particularly likely to benefit from an increased use of AI (NITI Aayog 2018a; 2021a; 2021b). As we will explain below, we think that there is reason to temper the optimistic outlook of these reports.
Before we begin to describe some of the likely effects of increased AI adoption in India, it is helpful to distinguish between regulatory regimes that rely on tight regulation and stringent transparency rules and regulatory regimes that are more lenient and rely mainly on self-regulation. This means that there is a spectrum of regulatory approaches that ranges from ‘highly directive’ to ‘little directive’. To illustrate, the European Union is in the process of designing a set of relatively stringent legal rules that impose significant limits on how different kinds of AI can be used in different contexts. These rules come on top of a set of strict existing laws and regulations regarding privacy and other basic rights that apply to the use of AI. The EU thus lies relatively close to the ‘highly directive’ end of the regulatory spectrum. India, by contrast, lies relatively close to the ‘little directive’ end of the spectrum.
There are two main reasons for this. One reason is that India has not yet implemented various rules and policies that are in the process of being drafted. Two important pieces of legislation, the Digital India Act 2023 and the National Data Governance Framework Policy, are still in the process of consultation. This means that the use of AI in India is at the moment mainly regulated by a set of general laws and legal rules that apply to AI but that have not been issued specifically with a view to regulating AI. Examples of this are the Information Technology Act 2000 and the Digital Personal Data Protection Act, each of which has implications regarding the use of AI, while not having been designed to regulate AI specifically. The second reason why India lies close to the ‘little directive’ end of the regulatory spectrum is that even those rules that are being drafted are quite permissive, in the sense that they impose few strict and sanction-backed rules on corporations, so that corporations will have a large degree of discretion regarding their use of AI (Joshi 2024). Moreover, the planned rules do not impose tough restrictions on the government’s capacity to gather and process sensitive personal data. The upshot of this is that India’s corporations and government will be subject to significantly fewer constraints than those of other jurisdictions, such as the EU.
We can begin to understand the importance of governance structures by observing some general ways in which the social and economic effects of AI are determined by governance and regulation. As an example, consider the contentious question of whether AI will create jobs or displace workers. Academic predictions vary, with some anticipating mass unemployment, and others being cautiously optimistic about the job-creating potential of AI (Autor 2015; Frey and Osborne 2017; Gironde et al 2019; Brynjolfsson and Syverson 2017). What is sometimes overlooked is that these predictions are based on a range of premises regarding the design of basic social and economic institutions, and the distribution of wealth and power across different social groups. Predictions regarding the effects of AI often ask a narrow question: “given the existence of current governance structures, basic institutions and distributions of power and wealth, what will be the effect of introducing AI?” When these assumptions are not made explicit, it can appear that the nature of the consequences of AI is fixed, predetermined, or unalterable.
However, the answer to questions such as whether AI will create or destroy jobs depends on the backdrop of governance structures against which AI is introduced. According to Acemoglu (1998, 2002) and Autor (2011), political power and legal frameworks shape the impact of technology on employment and welfare. Technology's influence on the labour market is not deterministic but contingent on the societal context within which it is deployed. These scholars posit two main scenarios: complementing workers or displacing them.
When AI complements workers, it enhances their productivity by automating routine tasks, thereby allowing them to focus on more complex, creative, or interpersonal activities. Inclusive political institutions and well-crafted laws can steer the use of technology to complement and augment human labour. In such contexts, technology can lead to the creation of jobs and to economic growth (Webb 2020). Conversely, in societies where political power is concentrated in the hands of elites or where laws are poorly designed or enforced, technology may primarily serve to displace jobs (Seamans and Raj 2018; Mani et al 2020). This scenario can lead to increased inequality and reduced welfare, as the benefits of technological progress are captured by a small segment of the population. Moreover, it matters who owns AI databases and tools. Autor (2020) and Korinek and Stiglitz (2021, 2) worry that AI tends “to give rise to natural monopolies, creating a small set of so-called superstar firms that are located in a few powerful countries but serve the entire world economy”.
The upshot of this is that understanding the likely effects of AI in India requires understanding those governance structures that steer the effects of AI. In particular, it is helpful to ask how much control members of different social classes have over the process of shaping AI regulation and implementation. The interests that low-income earners and otherwise disadvantaged groups have in designing AI regulation are distinct from the interests that wealthy individuals have. The reason for this misalignment of interests has to do with the above-mentioned distinction between the complementing effect and the displacing effect of AI. Members of the working class have an interest in rising wages and high employment rates. Firm owners, on the other hand, might benefit more from replacing workers with AI.
This means that we should expect differences in the design of AI regulation, depending on how power is distributed across social classes and who can bring their interests to bear on the design of regulation most effectively (Alonso et al 2020; Bhattacharyya 2019). When workers, labour unions, and social movements have significant bargaining power, they can lobby for incentives and regulations to use AI to complement their work. When these groups have little bargaining power, we can expect that AI is more likely to be used to displace workers. More broadly, we can expect that the extent to which a society is democratic has an effect on how AI is used. The more democratic a society is, the more likely it is that the interests of a majority of workers prevail over the interests of small groups of social elites.
To determine the likely effects of AI, it is therefore crucial to take into account how wealth and power are distributed in India, as well as how strong its democratic institutions are. Measuring the distribution of income and wealth in India faces several methodological challenges. However, recent work by Bharti et al (2024, 3) has improved the accuracy of estimates regarding material inequality in India. The results of this study
“point to extreme levels of inequality in India compared to international standards. In 2022-23, 22.6% of national income went to just the top 1%, the highest level recorded in our series since 1922, higher than even during the inter-war colonial period. The top 1% wealth share stood at 40.1% in 2022-23, also at its highest level since 1961 when our wealth series begins. In other words, the ‘Billionaire Raj’ headed by India’s modern bourgeoisie is now more unequal than the British Raj headed by the colonialist forces.”
Apart from inequalities in wealth, India is characterised by other deep inequalities, such as in gender, land ownership, and caste membership (Sengupta and Guchhait 2021; Anand 2021). To illustrate, India occupies rank 127 out of 146 on the Global Gender Gap Index (World Economic Forum 2023).
Recent findings emphasise that, in the absence of redistributive policies, the increased availability of AI cannot be expected to improve the plight of those at the losing end of these inequalities (Saraswati, 2012). On the contrary, it is likely that AI will exacerbate these inequalities. According to one study of the Indian labour market, “AI jobs pay a substantial wage premium, but these opportunities are highly concentrated in certain industries, cities and large firms. AI adoption within an establishment reduces both the number of other job vacancies posted on the platform and the corresponding wage offers. Such net displacement effects within the firm could have important negative consequences if they are not balanced out by positive effects elsewhere in the economy” (Copestake et al, 36). These findings give us reason to suspect that the above-mentioned optimistic outlook of government reports is misguided.
As Korinek and Stiglitz (2021, 2) emphasize, “Developing countries and emerging market economies have even more reason to be concerned than high-income countries, as their comparative advantage in the world economy relies on abundant labor and natural resources. Declining returns to labor and natural resources as well as the winner-takes-all dynamics brought on by new information technologies could lead to further immiseration in the developing world. This would undermine the rapid gains that have been the hallmark of success in development over the past fifty years, and threaten the progress made in reducing poverty and inequality”. According to another study, an increased uptake of AI in India means that “some opportunities will be created, but the spread of new technologies is likely to reproduce informal and precarious work rather than transform existing trends” (Hammer and Karmakar 2021, 1337).
To realize AI’s potential to benefit the entire population, a range of redistributive policies would be required (Korinek and Stiglitz 2021). However, India’s current government does not have plans to implement wide ranging redistributive measures that have the potential to spread the anticipated gains of AI across India’s population.
What is more, the capacity of labour unions and social movements to pressure the government into adopting such measures is severely limited (Hammer 2010). To some extent this is because the overwhelming majority of India’s workers are not members of a trade union or covered by collective bargaining agreements (Hensman 2011). Another part of the explanation is that India’s democracy has experienced a steady trend toward authoritarianism in recent years.
According to the Global State of Democracy Initiative (2024), India has “experienced significant five-year declines in Free Political Parties, Civic Engagement, Civil Liberties and six other measures of democracy.” Freedom House (2024) reaches a similar conclusion, stating that “the government led by Prime Minister Narendra Modi and the Hindu nationalist Bharatiya Janata Party (BJP) has presided over discriminatory policies and a rise in persecution affecting Muslims. The constitution guarantees civil liberties including freedom of expression and freedom of religion, but harassment of journalists, nongovernmental organizations (NGOs), and other government critics has increased significantly under Modi.” According to Reporters Without Borders (2024), “violence against journalists, highly concentrated media ownership, and political alignment” mean that press freedom is severely limited in what is still considered the world’s largest democracy.
The upshot of this is, in our view, that we should temper our expectations regarding the potential of AI to boost welfare in India. When AI is implemented within a little directive regulatory regime and against a backdrop of extreme inequality and weakened democracy, it cannot be expected to generate broad-based gains in prosperity.