Blog written by Carmel Williams (PhD), a researcher who contributes to the HRBDT Project.

It should come as no surprise that artificial intelligence (AI) is being positioned as significant to winning the COVID-19 battle, nor that the tech giants are part of the global rush to develop detection and medical responses. In this blog I briefly explore two aspects of AI and COVID-19: first, where the technology is being used to detect, monitor, and control the spread of the virus; and second, where it is being used in the development of vaccines, diagnostic tools, and medical treatments. There is little doubt that AI can advance healthcare products and services, but it is important to question whether it can do so without risking the freedoms, protections, and entitlements afforded by human rights. To make such an assessment it is necessary to look not just at the use of data in the present, but also to anticipate how new technology will be used, owned, and governed.

Detecting, monitoring, and controlling the spread of the virus

The traditional method of identifying outbreaks of notifiable disease relies on a reporting chain from local primary health care through to national disease surveillance mechanisms and, when required under the International Health Regulations, to the World Health Organization (WHO), which holds the mandate to lead and coordinate global surveillance and response.[i] That system has weaknesses, not least being that the more remote and under-resourced health facilities and systems are, the less likely it is that outbreaks of disease are notified. States can also be reluctant to notify WHO about disease outbreaks because it can lead to travel or export bans, which, as we are now experiencing, cripple economies.

For some time now, technology companies and other researchers have therefore been looking for alternative methods of quickly identifying outbreaks of disease, especially influenza. Social media and online behaviour have been a focus of such research, predominantly analysing Google searches for key words associated with the disease. Despite widespread optimism, Google’s Flu Trends experiment failed in 2009 when it entirely missed a nonseasonal outbreak of H1N1 flu[ii], and again in 2013[iii] when it significantly overestimated flu prevalence. But work has continued to develop machine-learning-enabled forecasting, including in “developing countries that lack surveillance data.”[iv] This work holds promise, as shown by a retrospective study of internet searches in China, which found a high correlation with daily incidence of COVID-19 as early as 21-25 January 2020, five days before WHO declared it a public health emergency of international concern.[v]
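The retrospective analyses described above typically ask how strongly a search-interest series correlates with later case counts, and at what lead time. A minimal sketch of that lag-correlation idea, using entirely synthetic numbers invented for illustration (not the study’s data):

```python
# Sketch: find the lag at which a search-interest series best
# predicts a case-incidence series (synthetic data for illustration).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic daily series: here cases simply follow searches
# with a built-in 5-day delay, to show the method recovers it.
searches = [1, 2, 4, 8, 16, 30, 55, 90, 130, 170, 200, 215, 220, 210, 190]
cases    = [0, 0, 0, 0, 0, 1, 2, 4, 8, 16, 30, 55, 90, 130, 170]

# Shift searches forward by each candidate lag and keep the best fit.
best_lag = max(
    range(0, 8),
    key=lambda k: pearson(searches[:len(searches) - k], cases[k:]),
)
print(best_lag)  # prints 5: searches lead cases by five days
```

In a real study the search series would come from platform query logs and the incidence series from surveillance data, and the equity questions raised below (who is and is not represented in those logs) apply directly.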

However, there are human rights risks associated with these uses of technology – risks to equality, non-discrimination, and accessibility – particularly as they play out across gender, ethnicity, locality, and wealth. To understand the risks before relying on such methods, we must assess whether data can be collected any more effectively from people in remote or disadvantaged areas than with traditional methods. Other questions include: is information gathered equally from women, men, and older people; do all ethnicities have equal access to phones and mobile data; and does the cost of internet access and data use discriminate against poorer people?

Technologies specifically deployed in the COVID-19 context to monitor and curtail transmission have included drones and street cameras with face recognition software, temperature monitors in public buildings and transport, and digital tracing apps. These tools are used to identify (and sometimes punish) people in public without masks, or with high temperatures, or to track the movements and contacts of people who test positive for the virus. The accuracy of the technology is known to be questionable, which immediately raises questions of accountability and redress.[vi]

Other specific human rights threats posed by these types of technologies include threats to privacy and freedom of movement, a lack of consent to the use of data derived from one’s own body, and, in general, a lack of transparency as to who is collecting this data. Further, it is imperative that the public knows how the data is used, both at the time of collection and later, and ultimately who owns the data. Digital tracing, if it replaces traditional manual tracing, would not offer equal access to health promotion for people who do not use smartphones – a group that includes those most susceptible to COVID-19, the elderly.[vii]

Data about people, obtained through invasive or opaque technologies, can contribute to a breakdown in trust, especially if it is gathered in states that lack a good track record on civil and political rights. Trust is a critical component of an effective response to a pandemic: trust in government and its processes and information, and in the freedom to give and receive information without risk of punishment.[viii] A fear of speaking out, and punishment for doing so, contributed to delays in COVID-19’s initial detection and hence to its spread. When health workers in Wuhan first observed signs of a respiratory illness suggestive of a new disease, they were detained by police for raising concerns on social and mainstream media.[ix] Those delays had a huge impact on the pandemic’s global spread.

The less trust there is in official sources of information, the more people rely on social media – another form of digital technology and surveillance – for COVID-19 updates and protection. However, social media is also a major contributor to and vehicle for the spread of fake news and misinformation, further contributing to an ill-informed public. The pandemic has heightened awareness of the critically important role trust in the state plays in fulfilling human rights.

Vaccines, treatments and diagnostic technology

AI has been touted as offering great promise and speed in the race to find a vaccine and treatments for COVID-19. Its capacity to scan rapidly through huge databases of clinical trials, publications, chemical processes, and patents – and, using machine learning, to identify new approaches and create new drugs or vaccines – is key. Many institutions and other actors have responded to the pandemic by opening their databases to provide access to sufficient volumes of data for AI to function. These include the WHO Global Research on Coronavirus Disease Database, the GISAID Initiative (formerly the Global Initiative on Sharing All Influenza Data), and the joint initiative between Semantic Scholar, the Allen Institute for Artificial Intelligence, Microsoft, Facebook, Kaggle (a Google company) and others, to make openly available the COVID-19 Open Research Dataset (CORD-19).[x] By early July 2020 it held over 167,000 scholarly articles for data mining.[xi] The presence of the tech giants raises flags regarding ownership of any data-based research outputs, given their history and interest in the profitability of the health data market.[xii]

The human rights principle of transparency must be upheld in such arrangements, especially as state data, and state-funded data, are included in these data sets. While it is appropriate that data on this scale is being made available by WHO, states, researchers, publishers, and other data owners, in keeping with their obligations to promote the right to health and scientific advancement, there is as yet little transparency around the ownership of the resulting intellectual property and downstream products. Now is the time for states to include conditions that respect human rights in contracts with pharmaceutical companies and/or tech giants, so that if a vaccine is developed with the support of government funding, its price and accessibility are equitable and affordable for all. States have an obligation to protect the most vulnerable, and in the case of COVID-19, this includes older people, those whose health is poor, and people experiencing hardships at the intersection of the social determinants of health. To maintain this control, states must insist that vaccines for COVID-19 are patent-free, mass produced, distributed fairly, and made available, free of charge, to all – a ‘people’s vaccine’.[xiii]

AI is also being used to try to speed up the diagnosis of COVID-19, in part because the sooner a diagnosis is confirmed, the more expeditiously the disease can be contained. Chest CT scans can confirm a positive diagnosis immediately, whereas the standard swab test takes about 24 hours to produce a result. But the bottleneck in the use of CT scans is the availability of radiologists to read the scans, which is why AI systems are being developed for rapid reading of the CTs.[xiv]

But in addition to the same human rights concerns addressed earlier about vaccine development, there are two further right to health issues to consider about CT scans. The first is equitable access: CT scanners are not readily available in most of the world. In 2012 WHO estimated that two thirds of the world’s population could not access even basic x-rays,[xv] and in 2016 a study confirmed there were only 0.42 CT scanners per million people in Tanzania.[xvi] Therefore, if state funding supports COVID-19 diagnostic services based on CT scans, that funding will be spent inequitably: it is not going to improve testing times in low- or middle-income countries, or even for many people living rurally in high-income countries. State support for the development of this use of AI technology may well divert funding from more equitable solutions for rapid tests.

The second right to health consideration concerns the overall impact of using new technology on the health system itself. For example, if CT scans are prioritised as the gold standard diagnosis for COVID-19, is there a risk the simpler, primary care administered swab test is no longer supported? And for people who cannot travel to a secondary or tertiary level care centre, do they remain undiagnosed, and at risk of spreading the virus further?

It is now incumbent upon human rights commissions and activists to be vigilant that state use of invasive technologies, adopted to promote the right to health during the pandemic, does not lead to permanent deployment of strategies that infringe human rights and were only ever acceptable in a crisis situation, under specific conditions and guided by international human rights law – the Siracusa Principles.[xvii] States must ensure, and demonstrate through the human rights monitoring mechanisms, that any authoritarian actions undertaken during COVID-19 do not become permanent features of state control.[xviii] It is important that the urgency imposed by the pandemic does not blindside leaders into allowing the ownership of data-driven solutions to reside with the private sector actors who would then be able to control access to the benefits of science. This would be an egregious human rights oversight.


[i] WHO Report on Global Surveillance of Epidemic-prone Infectious Diseases – Introduction; available at; last accessed 1 July 2020

[ii] S. Pappas, “Data Fail! How Google Flu Trends Fell Way Short”, LiveScience, 13 March 2014, available at

[iii] D. Lazer and R. Kennedy, What We Can Learn From the Epic Failure of Google Flu Trends, 2015, available at

[iv] C. Schmidt, The Social Forecast, Nature, Vol 573, 19 September 2019, S58

[v] Li Cuilian, Chen Li Jia, Chen Xueyu, Zhang Mingzhi, Pang Chi Pui, Chen Haoyu. Retrospective analysis of the possibility of predicting the COVID-19 outbreak from Internet searches and social media data, China, 2020. Euro Surveill. 2020;25(10):pii=2000199.

[vi] Wim Naudé, “Artificial intelligence vs COVID‑19: limitations, constraints and pitfalls” AI & SOCIETY

[vii] Office for National Statistics, ‘Exploring the UK’s Digital Divide’ (4 March 2019), available at: usage/articles/exploringtheuksdigitaldivide/2019-03-04

[viii] Helena Legido-Quigley, Nima Asgari, Yik Ying Teo, Gabriel M Leung, Hitoshi Oshitani, Keiji Fukuda, Alex R Cook, Li Yang Hsu, Kenji Shibuya, David Heymann, “Are high-performing health systems resilient against the COVID-19 epidemic?” The Lancet, Vol 395, March 14, 2020, S0140-6736(20)30551-1

[ix] Human Rights Watch, “China: Respect rights in Coronavirus response” [press release], January 30, 2020. Available at; D. Gilbert, “A Chinese citizen journalist covering Coronavirus live-streamed his own arrest,” Vice (March 31, 2020). Available at;  V. Wang, “They documented the Coronavirus crisis in Wuhan. Then they vanished,” New York Times (February 14, 2020). Available at; C. Buckley, “Chinese doctor, silenced after warning of outbreak, dies from Coronavirus,” New York Times (February 6, 2020). Available at

[x] Wim Naudé, “Artificial intelligence vs COVID‑19: limitations, constraints and pitfalls” AI & SOCIETY 

[xi] Reference

[xii] A. Dickens and L. McGoey, “Beyond privacy: the right to health implications of data sharing partnerships in healthcare,” Human Rights Big Data and Technology Blog

[xiii] See UNAIDS, “Uniting behind a people’s vaccine against COVID-19” 14 May 2020

[xiv] Cheng Jin, Weixiang Chen, Yukun Cao, Zhanwei Xu, Zimeng Tan, Xin Zhang, Lei Deng, Chuansheng Zheng, Jie Zhou, Heshui Shi, Jianjiang Feng, “Development and Evaluation of an AI System for COVID-19 Diagnosis” medRxiv 2020.03.20.20039834

[xv] Reference

[xvi] Ngoya, P. S., Muhogora, W. E., & Pitcher, R. D. (2016). Defining the diagnostic divide: an analysis of registered radiological equipment resources in a low-income African country. The Pan African medical journal, 25, 99.

[xvii] Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, UN Commission on Human Rights, 1984

[xviii] Dainius Pūras, 2020, Report by the Special Rapporteur on the right to health, to the UN General Assembly, in press, due October 2020.