Artificial Intelligence & Machine Learning

What are AI and ML?

Artificial intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can approximate human intelligence. There is no single, precise, universal definition of AI.

Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI that relies on algorithms trained to develop their own rules. This is an alternative to traditional computer programs, in which rules must be hand-coded by a programmer. Machine learning extracts patterns from data and sorts those data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?

Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems, and evolutionary computation.

[Diagram: the types of technology fields that comprise artificial intelligence]

The diagram above shows the many different types of technology fields that comprise AI. AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems. When referring to AI, one can be referring to any or several of these technologies or fields. Applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say to Siri, “Siri, show me a picture of a banana,” Siri utilizes natural language processing (question answering) to understand what you’re asking, and then uses vision (image recognition) to find a banana and show it to you.

As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI—from the fear that AI will take over the world by enslaving humans, to the hope that AI can one day be used to cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI.

Definitions

Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
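To make the idea concrete, the largest-number procedure can be written out as a short Python sketch (the list of numbers is an arbitrary illustration):

```python
def find_largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    largest = numbers[0]      # start with the first number
    for n in numbers[1:]:     # step through every remaining number
        if n > largest:
            largest = n       # keep the biggest value seen so far
    return largest

print(find_largest([7, 42, 3, 19]))  # prints 42
```

Each step is unambiguous and the procedure always terminates, which is what makes it an algorithm.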

Algorithmic decision-making/Algorithmic decision system (ADS): Algorithmic decision systems use data and statistical analyses to make automated decisions, such as determining whether people are eligible for a benefit or a penalty. Examples of fully automated algorithmic decision systems include the electronic passport control checkpoint at airports or an automated decision by a bank to grant a customer an unsecured loan based on the person’s credit history and data profile with the bank. Driver-assistance features that control a vehicle’s brake, throttle, steering, speed, and direction are an example of a semi-automated ADS.

Big Data: There are many definitions of “big data,” but we can generally think of it as extremely large data sets that, when analyzed, may reveal patterns, trends, and associations, including those relating to human behavior. Big Data is characterized by the five V’s: the volume, velocity, variety, veracity, and value of the data in question. This video provides a short introduction to big data and the concept of the five V’s.

Class label: A class label is one of the discrete categories into which a machine learning system sorts its inputs; for example, “spam” and “not spam” are the class labels of a system that determines whether an email is spam.

Data mining: Data mining, also known as knowledge discovery in data, is the “process of analyzing dense volumes of data to find patterns, discover trends, and gain insight into how the data can be used.”

Generative AI: Generative AI is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. See section on Generative AI for more details.

Label: A label is the thing a machine learning model is predicting, such as the future price of wheat, the kind of animal shown in a picture, or the meaning of an audio clip.

Large language model: A large language model (LLM) is “a type of artificial intelligence that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new content.” An LLM is a type of generative AI that has been specifically architected to help generate text-based content.

Model: A model is the representation of what a machine learning system has learned from the training data.

Neural network: A biological neural network (BNN) is a system in the brain that makes it possible to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.

Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.

Robot: Robots are programmable, automated devices. Fully autonomous robots (e.g., self-driving vehicles) are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses and behaviors accordingly in order to perform complex tasks without human intervention.

Scoring: Scoring, also called prediction, is the process of a trained machine learning model generating values based on new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome. When applied to people, scoring is a statistical prediction that determines whether an individual fits into a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.
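To illustrate, the sketch below shows a trained model scoring a new input; it assumes the scikit-learn library, and the features and toy training data are invented for illustration:

```python
# A minimal sketch of scoring: a trained model assigns a score
# (here, a probability) to new, unseen input data.
from sklearn.linear_model import LogisticRegression

# Invented training data: [income, years of credit history] -> repaid loan (1) or not (0)
X_train = [[30, 2], [60, 10], [45, 5], [20, 1], [80, 15], [25, 3]]
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# Scoring: the trained model generates a value for a new applicant.
new_applicant = [[50, 7]]
score = model.predict_proba(new_applicant)[0][1]  # probability of repayment
print(f"Predicted likelihood of repayment: {score:.2f}")
```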

Supervised learning: In supervised learning, ML systems are trained on well-labeled data. Using labeled inputs and outputs, the model can measure its accuracy and learn over time.

Unsupervised learning: Unsupervised learning uses machine learning algorithms to find patterns in unlabeled datasets without the need for human intervention.

Training: In machine learning, training is the process of determining the ideal parameters comprising a model.
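A minimal sketch of what training means in practice, using a one-parameter model fit by gradient descent (the data points are invented for illustration):

```python
# Training: repeatedly adjust a model's parameter to reduce its error
# on the training data. The "model" here is simply y = w * x.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (x, y) pairs, roughly y = 2x

w = 0.0                        # initial parameter guess
learning_rate = 0.01
for step in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # nudge w in the direction that lowers the error

print(f"Learned parameter w = {w:.2f}")  # close to 2.0
```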

 

How do artificial intelligence and machine learning work?

Artificial Intelligence

Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic, and economics to “understand, model, and replicate intelligence and cognitive processes.”

AI applications exist in every domain, industry, and across different aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:

  • Narrow AI or Artificial Narrow Intelligence (ANI) is an expert system in a specific task, like image recognition, playing Go, or asking Alexa or Siri to answer a question.
  • Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
  • Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications presently exist only in the “Narrow AI” field. Artificial general intelligence and artificial superintelligence have not yet been achieved and likely will not be achieved for years or decades, if ever.

Machine Learning

Machine learning is an application of artificial intelligence. Although the two terms are often used interchangeably, machine learning is a process by which an AI application is developed. The machine learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses the pattern or correlation to make predictions. Most of the AI in use today is driven by machine learning.

Just as it is useful to break up AI into three categories, machine learning can also be thought of as three different techniques: supervised learning, unsupervised learning, and deep learning.

Supervised Learning

Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a data set containing training examples with associated labels. Take the example of a spam-filtering system that is being trained using spam and non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email or IP address of the sender. Using this correlation, the system tries to predict the correct label (spam/not spam) to apply to all the future emails it processes.

“Spam” and “not spam” in this instance are called “class labels.” The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labeled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about—in this case, it is the “spaminess” of an email. The “correct answer,” so to speak, in the categorization of email is called the “desired outcome” or “outcome of interest.”
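In code, the spam-filter example might look like the following minimal sketch, assuming the scikit-learn library; the four emails and their class labels stand in for the thousands of labeled training messages a real system would use:

```python
# A minimal sketch of supervised learning on labeled emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now", "Cheap loans, act fast",  # marked as spam
    "Meeting moved to 3pm", "Lunch tomorrow?",        # marked as not spam
]
labels = ["spam", "spam", "not spam", "not spam"]     # the class labels

vectorizer = CountVectorizer()              # turn text into word counts
X = vectorizer.fit_transform(emails)        # the "input" features
model = MultinomialNB().fit(X, labels)      # the learned "model"

new_email = vectorizer.transform(["Free prize waiting for you"])
print(model.predict(new_email))             # likely ['spam'] on this toy data
```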

Unsupervised Learning

Unsupervised learning involves neural networks finding a relationship or pattern without access to previously labeled datasets of input-output pairs. The neural networks organize and group the data on their own, finding recurring patterns and detecting deviations from these patterns. These systems tend to be less predictable than those that use labeled datasets and are most often deployed in environments that change frequently and are unstructured or only partially structured. Examples include the following (a brief code sketch follows the list):

  1. An optical character-recognition system that can “read” handwritten text, even if it has never encountered the handwriting before.
  2. The recommended products a user sees on retail websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference, and the prices of their previous purchases.
  3. The detection of fraudulent monetary transactions based on timing and location, for instance, two consecutive transactions on the same credit card within a short span of time in two different cities.
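The fraud-detection example can be sketched with a simple clustering algorithm, assuming the scikit-learn library; the transaction records are invented, and a production system would use far richer features:

```python
# A minimal sketch of unsupervised learning: grouping unlabeled
# transaction records without any pre-assigned labels.
from sklearn.cluster import KMeans

# Each record: [purchase amount in USD, hour of day]
transactions = [
    [5, 9], [8, 10], [6, 11],       # small daytime purchases
    [900, 3], [950, 2], [880, 4],   # large late-night purchases
]

# Ask the algorithm to find two groups on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(transactions)
print(kmeans.labels_)  # e.g., [0 0 0 1 1 1]: two patterns found without labels
```

Records that sit far from every group, or that fall into a sparse group, can then be flagged for review as potential fraud.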

A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small dataset with labels is available to train the neural network to act upon a larger, unlabeled dataset. An example of semi-supervised learning is software that creates deepfakes, or digitally altered audio, videos, or images.

Deep Learning

Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical-image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine, and sort huge amounts of data, which theoretically enables them to identify new solutions to existing problems.
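As a rough illustration of those “numerous layers,” here is a minimal deep-neural-network sketch assuming the PyTorch library; the layer sizes and input are arbitrary:

```python
# A tiny deep neural network: stacked layers of mathematical
# operations that transform an input into class scores.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),  # layer 1: 10 input features -> 32 units
    nn.ReLU(),          # non-linear activation between layers
    nn.Linear(32, 32),  # layer 2: another "deep" layer
    nn.ReLU(),
    nn.Linear(32, 2),   # output layer: scores for 2 classes
)

x = torch.randn(1, 10)  # one example with 10 random features
print(model(x))         # raw scores for each of the 2 classes
```

Real systems differ mainly in scale: many more layers and millions or billions of parameters, trained on very large datasets.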

Generative AI

Generative AI is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. The launch of OpenAI’s chatbot, ChatGPT, in late 2022 placed a spotlight on generative AI and created a race among companies to churn out alternate (and ideally superior) versions of this technology. Excitement over large language models and other forms of generative AI was also accompanied by concerns about accuracy, bias within these tools, data privacy, and how these tools can be used to spread disinformation more efficiently.

Although there are other types of machine learning, these three—supervised learning, unsupervised learning, and deep learning—represent the basic techniques used to create and train AI systems.

Bias in AI and ML

Artificial intelligence is built by humans, and trained on data generated by them. Inevitably, there is a risk that individual and societal human biases will be inherited by AI systems.

There are three common types of biases in computing systems:

  • Pre-existing bias has its roots in social institutions, practices, and attitudes.
  • Technical bias arises from technical constraints or considerations.
  • Emergent bias arises in a context of use.

Bias in artificial intelligence may affect, for example, the political advertisements one sees on the internet, the content pushed to the top of social media news feeds, the cost of an insurance premium, the results of a recruitment screening process, or the ability to pass through border-control checks in another country.

Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate can get compounded or magnified and greatly affect the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.

Bias in AI/ML systems can result in discriminatory practices, ultimately leading to the exacerbation of existing inequalities or the generation of new ones. For more information, see this explainer related to AI bias and the Risks section of this resource.


How are AI and ML relevant in civic space and for democracy?

Elephant tusks pictured in Uganda. In wildlife conservation, AI/ML algorithms and past data can be used to predict poacher attacks. Photo credit: NRCN.

The widespread proliferation, rapid deployment, scale, complexity, and impact of AI on society are topics of great interest and concern for governments, civil society, NGOs, human rights bodies, businesses, and the general public alike. AI systems may require varying degrees of human interaction or none at all. When applied in design, operation, and delivery of services, AI/ML offers the potential to provide new services and improve the speed, targeting, precision, efficiency, consistency, quality, or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships, and patterns, and offering new solutions. By analyzing large amounts of data, ML systems save time, money, and effort. Some examples of the application of AI/ML in different domains include using AI/ML algorithms and past data in wildlife conservation to predict poacher attacks and discovering new species of viruses.

Tuberculosis microscopy diagnosis in Uzbekistan. AI/ML systems aid healthcare professionals in medical diagnosis and the detection of diseases. Photo credit: USAID.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering, and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, and security, as well as in safety, crime prevention, policing, law enforcement, urban management, and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels, and waste in order to monitor and manage these issues and identify patterns in their generation, consumption, and handling.

Digital maps created in Mugumu, Tanzania. Artificial intelligence can support planning of infrastructure development and preparation for disaster. Photo credit: Bobby Neptune for DAI.

AI is also used in climate monitoring, weather forecasting, the prediction of disasters and hazards, and the planning of infrastructure development. In healthcare, AI systems aid professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, facial recognition systems, drones, and predictive policing for the safety and security of citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, privacy, security, mass surveillance, social inequality, and negative impacts on democracy (see the Risks section).

Fish caught off the coast of Kema, North Sulawesi, Indonesia. Facial recognition is used to identify species of fish to contribute to sustainable fishing practices. Photo credit: courtesy of USAID SNAPPER.

AI and ML have both positive and negative implications for public policy and elections, as well as democracy more broadly. While data may be used to maximize the effectiveness of a campaign through targeted messaging to help persuade prospective voters, it may also be used to deliver propaganda or misinformation to vulnerable audiences. During the 2016 U.S. presidential election, for example, Cambridge Analytica used big data and machine learning to tailor messages to voters based on predictions about their susceptibility to different arguments.

During elections in the United Kingdom and France in 2017, political bots were used to spread misinformation on social media and leak private campaign emails. These autonomous bots are “programmed to aggressively spread one-sided political messages to manufacture the illusion of public support” or even dissuade certain populations from voting. AI-enabled deepfakes (audio or video that has been fabricated or altered) also contribute to the spread of confusion and falsehoods about political candidates and other relevant actors. Though artificial intelligence can be used to exacerbate and amplify disinformation, it can also be applied in potential solutions to the challenge. See the Case Studies section of this resource for examples of how the fact-checking industry is leveraging artificial intelligence to more effectively identify and debunk false and misleading narratives.

Cyber attackers seeking to disrupt election processes use machine learning to effectively target victims and develop strategies for defeating cyber defenses. Although machine learning can also be used to defend against cyber attacks, the level of investment in artificial intelligence technologies by malign actors in many cases exceeds that of legitimate governments or other official entities. Some of these actors also use AI-powered digital surveillance tools to track down and target opposition figures, human rights defenders, and other perceived critics.

As discussed elsewhere in this resource, “the potential of automated decision-making systems to reinforce bias and discrimination also impacts the right to equality and participation in public life.” Bias within AI systems can harm historically underrepresented communities and exacerbate existing gender divides and the online harms experienced by women candidates, politicians, activists, and journalists.

AI-driven solutions can help improve the transparency and legitimacy of campaign strategies, for example, by leveraging political bots for good to help identify articles that contain misinformation or by providing a tool for collecting and analyzing the concerns of voters. Artificial intelligence can also be used to make redistricting less partisan (though in some cases it also facilitates partisan gerrymandering) and prevent or detect fraud or significant administrative errors. Machine learning can inform advocacy by predicting which pieces of legislation will be approved based on algorithmic assessments of the text of the legislation, how many sponsors or supporters it has, and even the time of year it is introduced.

The full impact of the deployment of AI systems on the individual, society, and democracy is not known or knowable, which creates many legal, social, regulatory, technical, and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The European Union’s (EU) General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a whitepaper on AI in February 2020 as a prelude to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human rights impacts of algorithmic systems. Similarly, Germany, France, Japan, and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”


Opportunities

Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights, and good governance. Read below to learn how to more effectively and safely think about artificial intelligence and machine learning in your work.

Detect and overcome bias

Although artificial intelligence can reproduce human biases, as discussed above, it can also be used to combat unconscious biases in contexts like job recruitment. Responsibly designed algorithms can bring hidden biases into view and, in some cases, nudge people into less-biased outcomes; for example, by masking candidates’ names, ages, and other bias-triggering features on a resume.

Improve security and safety

AI systems can be used to detect attacks on public infrastructure, such as a cyber attack or credit card fraud. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud quickly, or even prevent it before it occurs. Machine learning can help identify agile and unusual patterns that match or exceed traditional strategies used to avoid detection.

Moderate harmful online content

Enormous quantities of content are uploaded every second to the internet and social media. There are simply too many videos, photos, and posts for humans to manually review. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen for content that violates their terms of service (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. The recent arrival of deepfakes and other computer-generated content also requires similarly advanced identification tactics. Fact-checkers and other actors working to defuse the dangerous, misleading power of deepfakes are developing their own artificial intelligence to identify these media as false.

Web Search

Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the vast stretches of the internet. Search engines on the web (like Google and Bing) or within platforms and websites (like searches within Wikipedia or The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor higher-quality results that may be beneficial to society. For example, Google has an initiative to highlight original reporting, which prioritizes the first instance of a news story rather than sources that republish the information.

Translation

Machine learning has allowed for remarkable advances in translation. For example, DeepL is a small machine-translation company that has surpassed even the translation abilities of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.


Risks

The use of emerging technologies like AI can also create risks for democracy and in civil society programming. Read below to learn how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended—and intended—consequences.

Discrimination against marginalized groups

There are several ways in which AI may make decisions that can lead to discrimination, including how the “target variable” and the “class labels” are defined; during the process of labeling the training data; when collecting the training data; during the feature selection; and when proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial recognition systems trained on racially biased data sets discriminate against people with dark skin, women, and gender-diverse people.

The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the size, the more accurate the system’s decisions are likely to be. However, women, Black people and other people of color, people with disabilities, indigenous people, LGBTQ+ people, and other minority groups are less likely to be represented in a dataset because of structural discrimination, group size, or external attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to determine why AI makes certain decisions about some individuals or groups of people, or to conclusively prove it has made a discriminatory decision. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status, or other protected characteristics.

For instance, AI systems used in predictive policing, crime prevention, law enforcement, and the criminal justice system are, in a sense, tools for risk-assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized, or minority groups.

A study by the Royal Statistical Society notes that the “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation or software for credit-scoring, banking, insurance, healthcare, and the selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.

The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving vulnerable groups such as refugees, or about life or death circumstances, such as in medical care. A 2018 report by the University of Toronto’s Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.

Security vulnerabilities

Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems such as internet of things (IoT) devices and self-driving cars.

If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids, nuclear installations, healthcare facilities, and banking systems, among others, they “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”

Privacy and data protection

The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection. Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks. Criminals, illiberal governments, and people with malicious intent often target these data for economic or political gain. For instance, health data captured from smartphone applications and internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, etc. The issue is not only leaks: people also willingly hand over data, to both companies and government agencies, without any control over how those data will be used down the road. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.

Chilling effect

AI systems used for surveillance, policing, criminal sentencing, and other legal purposes can become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination, and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Many people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.

Opacity (Black box nature of AI systems)

Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, behind-the-scenes processing, and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal or judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike decisions made by judges who are required to justify their legal order or judgment.

Technological unemployment

Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.

Loss of individual autonomy and personhood

Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect wellbeing, physical integrity, and quality of life. This affects what constitutes an individual’s consent (or lack thereof); the way consent is formed, communicated and understood; and the context in which it is valid. “[T]he dilution of the free basis of our individual consent—either through outright information distortion or even just the absence of transparency—imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation”. – Human Rights in the Era of Automation and Artificial Intelligence


Questions

If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:

  1. Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
  2. Who is designing and overseeing the technology? Can they explain what is happening at different steps of the process?
  3. What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
  4. What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
  5. Are you confident the technology will work as intended when used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
  6. Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
  7. What measures do you have in place to identify and address potentially harmful biases in the technology?
  8. What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has been unfair to them or abused them in any way?
  9. Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
  10. Are you certain that the technology abides by relevant regulations and legal standards, including the GDPR?
  11. Is there a way that this technology may not discriminate against people by itself, but that it may lead to discrimination or other rights violations, for instance when it is deployed in different contexts or if it is shared with untrained actors? What can you do to prevent this?


Case Studies

Leveraging artificial intelligence to promote information integrity

The United Nations Development Programme’s eMonitor+ is an AI-powered platform that helps “scan online media posts to identify electoral violations, misinformation, hate speech, political polarization and pluralism, and online violence against women.” Data analysis facilitated by eMonitor+ enables election commissions and media stakeholders to “observe the prevalence, nature, and impact of online violence.” The platform relies on machine learning to track and analyze content on digital media to generate graphical representations for data visualization. eMonitor+ has been used by Peru’s Asociación Civil Transparencia and Ama Llulla to map and analyze digital violence and hate speech in political dialogue, and by the Supervisory Election Commission during the 2022 Lebanese parliamentary election to monitor potential electoral violations, campaign spending, and misinformation. The High National Election Commission of Libya has also used eMonitor+ to monitor and identify online violence against women in elections.

“How Nigeria’s fact-checkers are using AI to counter election misinformation”

Ahead of Nigeria’s 2023 presidential election, the UK-based fact-checking organization Full Fact “offered its artificial intelligence suite—consisting of three tools that work in unison to automate lengthy fact-checking processes—to greatly expand fact-checking capacity in Nigeria.” According to Full Fact, these tools are not intended to replace human fact-checkers but rather assist with time-consuming, manual monitoring and review, leaving fact-checkers “more time to do the things they’re best at: understanding what’s important in public debate, interrogating claims, reviewing data, speaking with experts and sharing their findings.” The scalable tools, which include search, alerts, and live functions, allow fact-checkers to “monitor news websites, social media pages, and transcribe live TV or radio to find claims to fact check.”

Monitoring crop development: AgroScout

The growing impact of climate change could further cut crop yields, especially in the world’s most food-insecure regions, and our food systems are responsible for about 30% of greenhouse gas emissions. Israeli startup AgroScout envisions a world where food is grown in a more sustainable way. “Our platform uses AI to monitor crop development in real-time, to more accurately plan processing and manufacturing operations across regions, crops and growers,” said Simcha Shore, founder and CEO of AgroScout. “By utilizing AI technology, AgroScout detects pests and diseases early, allowing farmers to apply precise treatments that reduce agrochemical use by up to 85%. This innovation helps minimize the environmental damage caused by traditional agrochemicals, making a positive contribution towards sustainable agriculture practices.”

Machine Learning for Peace

The Machine Learning for Peace Project seeks to understand how civic space is changing in countries around the world using state-of-the-art machine learning techniques. By leveraging the latest innovations in natural language processing, the project classifies “an enormous corpus of digital news into 19 types of civic space ‘events’ and 22 types of Resurgent Authoritarian Influence (RAI) events which capture the efforts of authoritarian regimes to wield influence on developing countries.” Among the civic space “events” being tracked are activism, coups, election activities, legal changes, and protests. The civic space event data is combined with “high frequency economic data to identify key drivers of civic space and forecast shifts in the coming months.” Ultimately, the project hopes to serve as a “useful tool for researchers seeking rich, high-frequency data on political regimes and for policymakers and activists fighting to defend democracy around the world.”

Food security: Detecting diseases in crops using image analysis

“Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops.” As a first step toward supplementing existing solutions for disease diagnosis with a smartphone-assisted diagnosis system, researchers used a public dataset of 54,306 images of diseased and healthy plant leaves to train a “deep convolutional neural network” to automatically identify 14 different crop species and 26 unique diseases (or the absence of those diseases).


Data Protection

What is data protection?

Data protection refers to practices, measures, and laws that aim to prevent certain information about a person from being collected, used, or shared in a way that is harmful to that person.

Interview with fisherman in Bone South Sulawesi, Indonesia. Data collectors must receive training on how to avoid bias during the data collection process. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Data protection isn’t new. Bad actors have always sought to gain access to individuals’ private records. Before the digital era, data protection meant protecting individuals’ private data from someone physically accessing, viewing, or taking files and documents. Data protection laws have been in existence for more than 40 years.

Now that many aspects of peoples’ lives have moved online, private, personal, and identifiable information is regularly shared with all sorts of private and public entities. Data protection seeks to ensure that this information is collected, stored, and maintained responsibly and that unintended consequences of using data are minimized or mitigated.

What are data?

Data refer to digital information, such as text messages, videos, clicks, digital fingerprints, a bitcoin, search history, and even mere cursor movements. Data can be stored on computers, mobile devices, in clouds, and on external drives. They can be shared via email, messaging apps, and file transfer tools. Your posts, likes, and retweets, your videos about cats and protests, and everything you share on social media are data.

Metadata are a subset of data: information stored within a document or file, an electronic fingerprint that contains information about the document or file itself. Let’s use an email as an example. If you send an email to your friend, the text of the email is data. The email itself, however, contains all sorts of metadata, like who created it, who the recipient is, the IP address of the author, the size of the email, etc.
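The email example can be made concrete with Python’s standard email library; the message below is invented:

```python
# Metadata vs. data in an email message.
from email import message_from_string

raw = """From: alice@example.org
To: bob@example.org
Date: Mon, 01 Jan 2024 09:00:00 +0000
Subject: Hello

See you at the protest tomorrow."""

msg = message_from_string(raw)
print(msg["From"], msg["To"], msg["Date"])  # metadata about the message
print(msg.get_payload())                    # the data: the message text itself
```

Even if the body were encrypted, the headers alone would still reveal who communicated with whom, and when.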

Large amounts of data get combined and stored together. These large files containing thousands or millions of individual files are known as datasets. Datasets then get combined into very large datasets. These very large datasets, referred to as big data, are used to train machine-learning systems.

Personal Data and Personally Identifiable Information

Data can seem to be quite abstract, but the pieces of information are very often reflective of the identities or behaviors of actual persons. Not all data require protection, but some data, even metadata, can reveal a lot about a person. This is referred to as Personally Identifiable Information (PII). PII is commonly referred to as personal data. PII is information that can be used to distinguish or trace an individual’s identity such as a name, passport number, or biometric data like fingerprints and facial patterns. PII is also information that is linked or linkable to an individual, such as date of birth and religion.

Personal data can be collected, analyzed and shared for the benefit of the persons involved, but they can also be used for harmful purposes. Personal data are valuable for many public and private actors. For example, they are collected by social media platforms and sold to advertising companies. They are collected by governments to serve law-enforcement purposes like the prosecution of crimes. Politicians value personal data to target voters with certain political information. Personal data can be monetized by people for criminal purposes such as selling false identities.

“Sharing data is a regular practice that is becoming increasingly ubiquitous as society moves online. Sharing data does not only bring users benefits, but is often also necessary to fulfill administrative duties or engage with today’s society. But this is not without risk. Your personal information reveals a lot about you, your thoughts, and your life, which is why it needs to be protected.”

Access Now’s ‘Creating a Data Protection Framework’, November 2018.

How does data protection relate to the right to privacy?

The right to protection of personal data is closely interconnected to, but distinct from, the right to privacy. The understanding of what “privacy” means varies from one country to another based on history, culture, or philosophical influences. Data protection is not always considered a right in itself. Read more about the differences between privacy and data protection here.

Data privacy is also a common way of speaking about sensitive data and the importance of protecting it against unintentional sharing and undue or illegal gathering and use of data about an individual or group. USAID’s Digital Strategy for 2020–2024 defines data privacy as ‘the right of an individual or group to maintain control over and confidentiality of information about themselves’.

How does data protection work?

Participant of the USAID WeMUNIZE program in Nigeria. Data protection must be considered for existing datasets as well. Photo credit: KC Nwakalor for USAID / Digital Development Communications

Personal data can and should be protected by measures that shield a person’s identity and other information about them from harm and that respect their right to privacy. Examples of such measures include determining which data are vulnerable based on privacy-risk assessments; keeping sensitive data offline; limiting who has access to certain data; anonymizing sensitive data; and only collecting necessary data.

There are several established principles and practices to protect sensitive data. In many countries, these measures are enforced via laws, which contain the key principles that are important to guarantee data protection.

“Data Protection laws seek to protect people’s data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.”

Privacy International on data protection

Several important terms and principles are outlined below, based on the European Union’s General Data Protection Regulation (GDPR).

  • Data Subject: any person whose personal data are being processed, such as added to a contacts database or to a mailing list for promotional emails.
  • Processing: any operation performed on personal data, whether manually or automated.
  • Data Controller: the actor that determines the purposes for, and means by which, personal data are processed.
  • Data Processor: the actor that processes personal data on behalf of the controller, often a third-party external to the controller, such as a party that offers mailing lists or survey services.
  • Informed Consent: individuals understand and agree that their personal data are collected, accessed, used, and/or shared and how they can withdraw their consent.
  • Purpose limitation: personal data are only collected for a specific and justified use and the data cannot be used for other purposes by other parties.
  • Data minimization: data collection is minimized and limited to essential details.

 

Healthcare provider in Eswatini. Quality data and protected datasets can accelerate impact in the public health sector. Photo credit: Ncamsile Maseko & Lindani Sifundza.

Access Now’s guide lists eight data-protection principles that come largely from international standards, in particular the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (widely known as Convention 108) and the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines, and are considered to be “minimum standards” for the protection of fundamental rights by countries that have ratified international data protection frameworks.

A development project that uses data, whether establishing a mailing list or analyzing datasets, should comply with laws on data protection. When there is no national legal framework, international principles, norms, and standards can serve as a baseline to achieve the same level of protection of data and people. Compliance with these principles may seem burdensome, but implementing a few steps related to data protection from the beginning of the project will help to achieve the intended results without putting people at risk.

[Figure: common practices of civil society organizations mapped to the terms and principles of the data protection framework of laws and norms]

The figure above shows how common practices of civil society organizations relate to the terms and principles of the data protection framework of laws and norms.

The European Union’s General Data Protection Regulation (GDPR)

The data protection law in the EU, the GDPR, went into effect in 2018. It is often considered the world’s strongest data protection law. The law aims to enhance how people can access their information and limits what organizations can do with personal data from EU citizens. Although coming from the EU, the GDPR can also apply to organizations that are based outside the region when EU citizens’ data are concerned. GDPR, therefore, has a global impact.

The obligations stemming from the GDPR and other data protection laws may have broad implications for civil society organizations. For information about the GDPR-compliance process and other resources, see the European Center for Not-for-Profit Law’s guide on data-protection standards for civil society organizations.

Notwithstanding its protections, the GDPR also has been used to harass CSOs and journalists. For example, a mining company used a provision of the GDPR to try to force Global Witness to disclose sources it used in an anti-mining campaign. Global Witness successfully resisted these attempts.

Personal or organizational protection tactics

How to protect your own sensitive information or the data of your organization will depend on your specific situation in terms of activities and legal environment. The first step is to assess your specific needs in terms of security and data protection. For example, which information could, in the wrong hands, have negative consequences for you and your organization?

Digital-security specialists have developed online resources you can use to protect yourself. Examples are the Security Planner, an easy-to-use guide with expert-reviewed advice for staying safer online, with recommendations on implementing basic online practices. The Digital Safety Manual offers information and practical tips on enhancing digital security for government officials working with civil society and Human Rights Defenders (HRDs). This manual offers 12 cards tailored to various common activities in the collaboration between governments (and other partners) and civil society organizations. The first card helps to assess digital security needs.


The Digital First Aid Kit is a free resource for rapid responders, digital security trainers, and tech-savvy activists to better protect themselves and the communities they support against the most common types of digital emergencies. Global digital safety responders and mentors, such as the Digital Defenders Partnership and the Computer Incident Response Centre for Civil Society (CiviCERT), can help with specific questions or offer mentorship.


How is data protection relevant in civic space and for democracy?

Many initiatives that aim to strengthen civic space or improve democracy use digital technology. There is a widespread belief that the increasing volume of data and the tools to process them can be used for good. And indeed, integrating digital technology and the use of data in democracy, human rights, and governance programming can have significant benefits; for example, they can connect communities around the globe, reach underserved populations better, and help mitigate inequality.

“Within social change work, there is usually a stark power asymmetry. From humanitarian work, to campaigning, documenting human rights violations to movement building, advocacy organisations are often led by – and work with – vulnerable or marginalised communities. We often approach social change work through a critical lens, prioritising how to mitigate power asymmetries. We believe we need to do the same thing when it comes to the data we work with – question it, understand its limitations, and learn from it in responsible ways.”

What is Responsible Data?

When quality information is available to the right people when they need it, the data are protected against misuse, and the project is designed with the protection of its users in mind, it can accelerate impact.

  • USAID’s funding of improved vineyard inspection using drones and GIS data in Moldova allows farmers to quickly inspect, identify, and isolate vines infected by a phytoplasma disease of the vine.
  • Círculo is a digital tool for female journalists in Mexico to help them create strong networks of support, strengthen their safety protocols and meet needs related to the protection of themselves and their data. The tool was developed with the end-users through chat groups and in-person workshops to make sure everything built into the app was something they needed and could trust.

At the same time, data-driven development brings a new responsibility to prevent misuse of data when designing, implementing, or monitoring development projects. When the use of personal data is a means to identify people who are eligible for humanitarian services, privacy and security concerns are very real.

  • Refugee camps in Jordan have required community members to allow scans of their irises to purchase food and supplies and take out cash from ATMs. This practice has not integrated meaningful ways to ask for consent or allow people to opt out. Additionally, the use and collection of highly sensitive personal data like biometrics to enable daily purchasing habits is disproportionate, because other, less personal digital technologies are available and used in many parts of the world.

Governments, international organizations, and private actors can all – even unintentionally – misuse personal data for other purposes than intended, negatively affecting the well-being of the people related to that data. Some examples have been highlighted by Privacy International:

  • The case of Tullow Oil, the largest oil and gas exploration and production company in Africa, shows how a private actor commissioned extensive and detailed research by a micro-targeting research company into the behaviors of local communities in order to obtain ‘cognitive and emotional strategies to influence and modify Turkana attitudes and behavior’ to Tullow Oil’s advantage.
  • In Ghana, the Ministry of Health commissioned a large study on health practices and requirements. The study resulted in the ruling political party ordering a model of future vote distribution within each constituency based on how respondents said they would vote, and in a negative campaign trying to persuade opposition supporters not to vote.

There are resources and experts available to help with this process. The Principles for Digital Development website offers recommendations, tips, and resources to protect privacy and security throughout a project lifecycle, from analysis and planning, through design and development, to deployment and implementation; measurement and evaluation are also covered. The Responsible Data website offers the Illustrated Hand-Book of the Modern Development Specialist with attractive, understandable guidance through all steps of a data-driven development project: designing it, managing data, with specific information about collecting, understanding, and sharing it, and closing a project.

NGO worker prepares for data collection in Buru Maluku, Indonesia. When collecting new data, it’s important to design the process carefully and think through how it affects the individuals involved. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Back to top

Opportunities

Data protection measures advance democracy, human rights, and good governance. Read below to learn how to think about data protection in your work more effectively and safely.

Privacy respected and people protected

Implementing data-protection standards in development projects protects people against potential harm from the abuse of their data. Abuse happens when an individual, company, or government accesses personal data and uses them for purposes other than those for which the data were collected. Intelligence services and law enforcement authorities often have legal and technical means to force access to datasets and abuse the data. Individuals hired by governments can access datasets by hacking the security of software or cloud services. This has often led to intimidation, silencing, and arrests of human rights defenders and civil society leaders who criticize their government. Privacy International maps examples of governments and private actors abusing individuals’ data.

Strong protective measures against data abuse ensure respect for the fundamental right to privacy of the people whose data are collected and used. Protective measures allow positive developments such as improved official statistics, better service delivery, targeted early-warning mechanisms, and effective disaster response.

It is important to determine how data are protected throughout the entire life cycle of a project. Individuals should also remain protected after the project ends, whether it ends abruptly or as intended, when the project moves into a different phase, or when it receives funding from different sources. Oxfam has developed a leaflet to help anyone handling, sharing, or accessing program data to properly consider responsible data issues throughout the data lifecycle, from making a plan to disposing of data.

Back to top

Risks

The collection and use of data can also create risks in civil society programming. Read below on how to discern the possible dangers associated with the collection and use of data in DRG work, as well as how to mitigate unintended – and intended – consequences.

Unauthorized access to data

Data need to be stored somewhere: on a computer or an external drive, in a cloud, or on a local server. Wherever the data are stored, precautions need to be taken to protect them from unauthorized access and to avoid revealing the identities of vulnerable persons. The level of protection needed depends on the sensitivity of the data, i.e., the extent to which negative consequences could follow if the information fell into the wrong hands.

Storing data on a nearby, well-protected server connected to drives with strong encryption and very limited access is one way to stay in control of the data you own. Cloud services offered by well-known tech companies often provide basic protection measures and wide access to the dataset in their free versions; more advanced security features, such as storage of data in jurisdictions with data-protection legislation, are available to paying customers. Guidelines on how to secure private data stored and accessed in the cloud can help you understand the different aspects of cloud services and decide what is appropriate for your specific situation.

Every system needs to be secured against cyberattacks and manipulation. One common challenge is protecting the identities in a dataset, for example by removing all information that could identify individuals, i.e., anonymizing the data. Proper anonymization is of key importance and harder than often assumed.
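
To make this concrete, below is a minimal, hypothetical sketch in Python of one common first step, pseudonymization: direct identifiers are dropped and a keyed hash replaces names, so records can still be linked without exposing who they belong to. The file and field names (survey_raw.csv, name, phone, gps_location) are invented for illustration, and real anonymization requires more than this, since quasi-identifiers such as location or age can still re-identify people.

```python
# Illustrative pseudonymization sketch; field and file names are hypothetical.
import csv
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # store this key separately from the data

def pseudonym(value: str) -> str:
    """Replace an identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def make_safe(row: dict) -> dict:
    row = dict(row)
    row["participant_id"] = pseudonym(row.pop("name"))  # linkable, but not identifying
    row.pop("phone", None)                              # drop direct identifiers
    row.pop("gps_location", None)                       # drop risky quasi-identifiers
    return row

with open("survey_raw.csv", newline="") as src, \
     open("survey_pseudonymized.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = ["participant_id"] + [
        f for f in reader.fieldnames if f not in ("name", "phone", "gps_location")
    ]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        writer.writerow(make_safe(row))
```

Pseudonymized data are still personal data under most data-protection regimes, so this step reduces risk but does not remove the duty of care.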

One can imagine that a dataset of GPS locations of People Living with Albinism across Uganda requires strong protection. Persecution is based on the belief that certain body parts of people with albinism can transmit magical powers, or on the presumption that they are cursed and bring bad luck. A spatial-profiling project mapping the exact locations of individuals belonging to a vulnerable group can improve outreach and the delivery of support services to them. However, hacking of the database or other unlawful access to the personal data might expose these individuals to people who want to exploit or harm them.

One could also imagine that the people operating an alternative system to send out warning sirens for air strikes in Syria run the risk of being targeted by the authorities: because the group’s data collection and sharing aims to prevent death and injury, it diminishes the impact of air strikes by the Syrian authorities. The location data of the individuals running and contributing to the system therefore need to be protected against access or exposure.

Another risk is that private actors who run or cooperate in data-driven projects could be tempted to sell data if they are offered large sums of money. Such buyers could be advertising companies or politicians that aim to target commercial or political campaigns at specific people.

The Tiko system designed by social enterprise Triggerise rewards young people for positive health-seeking behaviors, such as visiting pharmacies and seeking information online. Among other things, the system gathers and stores sensitive personal and health information about young female subscribers who use the platform to seek guidance on contraceptives and safe abortions, and it tracks their visits to local clinics. If these data are not protected, governments that have criminalized abortion could potentially access and use that data to carry out law-enforcement actions against pregnant women and medical providers.

Unsafe collection of data

When you are planning to collect new data, it is important to design the collection process carefully and think through how it affects the individuals involved. It should be clear from the start what kind of data will be collected and for what purpose, and the people involved should agree with that purpose. For example, an effort to map people with disabilities in a specific city can improve services. However, the database should not expose these people to risks, such as attacks or stigmatization that can be targeted at specific homes. Establishing such a database should also respond to the needs of the people involved, not be driven by the mere wish to use data. For further guidance, see the chapter Getting Data in the Hand-book of the Modern Development Specialist and the OHCHR Guidance to adopt a Human Rights Based Approach to Data, focused on collection and disaggregation.

If data are collected in person by people recruited for this process, proper training is required. They need to be able to create a safe space to obtain informed consent from people whose data are being collected and know how to avoid bias during the data-collection process.

Unknowns in existing datasets

Data-driven initiatives can either gather new data, for example through a survey of students and teachers in a school, or use existing datasets from secondary sources, for example a government census or data scraped from social media. Data protection must also be considered when you plan to use existing datasets, such as images of the Earth for spatial mapping. You need to analyze what kind of data you want to use and whether a specific dataset is necessary to reach your objective. For third-party datasets, it is important to gain insight into how the data you want to use were obtained, whether the principles of data protection were met during the collection phase, who licensed the data, and who funded the process. If you cannot get this information, you must carefully consider whether to use the data at all. See the Hand-book of the Modern Development Specialist on working with existing data.

Benefits of cloud storage

A trusted cloud-storage strategy offers greater security and easier implementation than securing your own server. While determined adversaries can still hack into individual computers or local servers, it is significantly more challenging for them to breach the robust security defenses of reputable cloud-storage providers like Google or Microsoft. These companies devote extensive security resources to protecting their users and have a strong business incentive to do so. By relying on cloud storage, common risks such as physical theft, device damage, or malware can be mitigated, since most documents and data are stored securely in the cloud. In case of an incident, it is convenient to resynchronize and resume operations on a new or cleaned computer, with little to no valuable information accessible locally.

Backing up data

Regardless of whether data are stored on physical devices or in the cloud, having a backup is crucial. Physical device storage carries the risk of data loss due to incidents such as hardware damage, ransomware attacks, or theft. Cloud storage provides an advantage in this regard, as it eliminates the reliance on specific devices that can be compromised or lost. Built-in backup solutions like Time Machine for Macs and File History for Windows devices, as well as automatic cloud backups for iPhones and Androids, offer some level of protection. However, even with cloud storage, the risk of human error remains, making it advisable to consider additional cloud backup solutions like Backupify or SpinOne Backup. For organizations using local servers and devices, secure backups become even more critical. It is recommended to encrypt external hard drives using strong passwords, to utilize encryption tools like VeraCrypt or BitLocker, and to keep backup devices in a separate location from the primary devices. Storing a copy in a highly secure location, such as a safe deposit box, can provide an extra layer of protection in case of disasters that affect both computers and their backups.
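
As a small illustration of the encryption advice above, this sketch uses the third-party Python cryptography package (installed separately with pip) to encrypt a backup archive before it is copied to an external drive or a cloud folder. The file names are placeholders, and this is a sketch of the idea rather than a substitute for vetted tools like VeraCrypt or BitLocker.

```python
# Sketch: encrypt a backup archive so that a lost or stolen copy reveals nothing.
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safer than the backup itself,
# such as a password manager or a safe deposit box.
key = Fernet.generate_key()
with open("backup.key", "wb") as key_file:
    key_file.write(key)

fernet = Fernet(key)

# Encrypt the archive (reads the whole file into memory; fine for modest sizes).
with open("project_data_backup.tar", "rb") as src:
    encrypted = fernet.encrypt(src.read())
with open("project_data_backup.tar.enc", "wb") as dst:
    dst.write(encrypted)

# To restore: fernet.decrypt(open("project_data_backup.tar.enc", "rb").read())
```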

Back to top

Questions

If you are trying to understand the implications of lacking data protection measures in your work environment, or are considering using data as part of your DRG programming, ask yourself these questions:

  1. Are data protection laws adopted in the country or countries concerned? Are these laws aligned with international human rights law, including provisions protecting the right to privacy?
  2. How will the use of data in your project comply with data protection and privacy standards?
  3. What kind of data do you plan to use? Are personal or other sensitive data involved?
  4. What could happen to the persons related to that data if the government accesses these data?
  5. What could happen if the data are sold to a private actor for purposes other than those intended?
  6. What precautionary and mitigation measures are in place to protect the data and the individuals to whom the data relate?
  7. How are the data protected against manipulation, and against access and misuse by third parties?
  8. Do you have sufficient expertise integrated during the entire course of the project to make sure that data are handled well?
  9. If you plan to collect data, what is the purpose of the collection of data? Is data collection necessary to reach this purpose?
  10. How are collectors of personal data trained? How is informed consent generated when data are collected?
  11. If you are creating or using databases, how is the anonymity of the individuals related to the data guaranteed?
  12. How is the data that you plan to use obtained and stored? Is the level of protection appropriate to the sensitivity of the data?
  13. Who has access to the data? What measures are taken to guarantee that data are accessed for the intended purpose?
  14. Which other entities – companies, partners – process, analyze, visualize, and otherwise use the data in your project? What measures are taken by them to protect the data? Have agreements been made with them to avoid monetization or misuse?
  15. If you build a platform, how are the registered users of your platform protected?
  16. Is the database, the data-storage system, or the platform open to independent audit?

Back to top

Case Studies

People Living with HIV Stigma Index and Implementation Brief

The People Living with HIV Stigma Index is a standardized questionnaire and sampling strategy to gather critical data on intersecting stigmas and discrimination affecting people living with HIV. It monitors HIV-related stigma and discrimination in various countries and provides evidence for advocacy in those countries. The data in this project are the experiences of people living with HIV. The implementation brief provides insight into data protection measures. People living with HIV are at the center of the entire process, continuously linking the data that are collected about them to the people themselves, from research design, through implementation, to using the findings for advocacy. Data are gathered through a peer-to-peer interview process, with people living with HIV from diverse backgrounds serving as trained interviewers. A standard implementation methodology has been developed, including the establishment of a steering committee with key stakeholders and population groups.

RNW Media’s Love Matters Program Data Protection

RNW Media’s Love Matters Program offers online platforms to foster discussion and information-sharing on love, sex, and relationships with 18-30 year-olds in areas where information on sexual and reproductive health and rights (SRHR) is censored or taboo. RNW Media’s digital teams introduced creative approaches to data processing and analysis, social listening methodologies, and natural language processing techniques to make the platforms more inclusive, create targeted content, and identify influencers and trending topics. Governments have imposed restrictions, such as license fees or registration requirements for online influencers, as a way of monitoring and blocking “undesirable” content, and RNW Media has invested in the security of its platforms and the digital literacy of its users to protect them from unauthorized access to their sensitive personal information. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, pp. 12-14.

Amnesty International Report

Thousands of democracy and human rights activists and organizations rely on secure communication channels every day to maintain the confidentiality of conversations in challenging political environments. Without such security practices, sensitive messages can be intercepted and used by authorities to target activists and break up protests. One prominent and well-documented example of this occurred in the aftermath of the 2010 elections in Belarus. As detailed in this Amnesty International report, phone recordings and other unencrypted communications were intercepted by the government and used in court against prominent opposition politicians and activists, many of whom spent years in prison. In 2020, another swell of post-election protests in Belarus saw thousands of protestors adopt user-friendly, secure messaging apps to protect their sensitive communications, apps that were not as readily available just ten years earlier.

Norway Parliament Data

The Storting, Norway’s parliament, has experienced repeated cyberattacks, including one that exploited recently disclosed vulnerabilities in Microsoft Exchange. These vulnerabilities, known as ProxyLogon, were addressed by emergency security updates released by Microsoft. The initial attacks were attributed to a state-sponsored hacking group from China called HAFNIUM, which utilized the vulnerabilities to compromise servers, establish backdoor web shells, and gain unauthorized access to the internal networks of various organizations. The repeated cyberattacks on the Storting and the involvement of various hacking groups underscore the importance of data protection, timely security updates, and proactive measures to mitigate cyber risks. Organizations must remain vigilant, stay informed about the latest vulnerabilities, and take appropriate action to safeguard their systems and data.

Girl Effect

Girl Effect, a creative non-profit working where girls are marginalized and vulnerable, uses media and mobile technology to empower girls. The organization embraces digital tools and interventions and acknowledges that any organization that uses data also has a responsibility to protect the people it talks to or connects with online. Its ‘Digital safeguarding tips and guidance’ provides in-depth guidance on implementing data protection measures while working with vulnerable people. Referring to Girl Effect as inspiration, Oxfam has developed and implemented a Responsible Data Policy and shares many supporting resources online. The publication ‘Privacy and data security under GDPR for quantitative impact evaluation’ provides detailed considerations of the data protection measures Oxfam implements while conducting quantitative impact evaluations through digital and paper-based surveys and interviews.

Back to top


Digital IDs

What are digital IDs?

Families displaced by Boko Haram violence in Maiduguri, Northeast Nigeria. Implementation of a digital ID system requires informed consent from participants. Photo credit: USAID.

Digital IDs are identification systems that rely on digital technology. Biometric technology is one kind of tool often used for digital identification: biometrics allow people to prove their identity based on a physical characteristic or trait (biological data). Other forms of digital identification include cards and mobile technologies. This resource, which draws on the work of The Engine Room, will look at different forms and the implications of digital IDs, with a particular focus on biometric IDs, including their integration with health systems and their potential for e-participation.

“Biometrics are not new – photographs have been used in this sector for years, but current discourse around ‘biometrics’ commonly refers to fingerprints, face prints and iris scans. As technology continues to advance, capabilities for capturing other forms of biometric data are also improving, such that voice prints, retinal scans, vein patterns, tongue prints, lip movements, ear patterns, gait, and of course, DNA, can be used for authentication and identification purposes.”

The Engine Room

Definitions

Biometric Data: automatically measurable, distinctive physical characteristics or personal traits used to identify or verify the identity of an individual.

Consent: Article 4(11) of the General Data Protection Regulation (GDPR) defines consent: “Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.” See also the Data Protection resource.

Data Subject: the individual whose data is collected.

Digital ID: an electronic identity-management system used to prove an individual’s identity or their right to access information or services.

E-voting: an election system that allows a voter to record their secure and secret ballot electronically.

Foundational Biometric Systems: systems that supply general identification for official uses, like national civil registries and national IDs.

Functional Biometric Systems: systems that respond to a demand for a particular service or transaction, like voter IDs, health records, or financial services.

Identification/One-to-Many Authentication: using the biometric identifier to identify the data subject from within a database of other biometric profiles.

Immutability: the quality of a characteristic that does not change over time (for example, DNA).

Portable Identity: an individual’s digital ID credentials may be taken with them beyond the initial issuing authority, to prove official identity for new user relationships/entities, without having to repeat verification each time.

Self-Sovereign Identity: a digital ID that gives the data subject full ownership over their digital identity, guaranteeing them lifetime portability, independent from any central authority.

Uniqueness: a characteristic that sufficiently distinguishes individuals from one another. Most forms of biometric data are singularly unique to the individual involved.

Verification/One-to-One Authentication: using the biometric identifier to confirm that the data subject is who they claim to be.

How do digital IDs work?

Young Iraqi woman pictured at the Harsham IDP camp in Erbil, Iraq. Digital IDs and biometrics have potential to facilitate the voting process. Photo credit: Jim Huylebroek for Creative Associates International.

There are three primary categories of technology used for digital identification: biometrics, cards, and mobile. Within each of these areas, a wide range of technologies may be used.

NIST (the National Institute of Standards and Technology, one of the primary international authorities on digital IDs) identifies three parts of the digital ID process, described below.

Part 1: Identity proofing and enrollment

This is the process of binding the data on the subject’s identity to an authenticator, which is a tool that is used to prove their identity.

  • With a biometric ID, this involves collecting the data (through an eye scan, fingerprinting, submitting a selfie, etc.), verifying that the person is who they claim to be, and connecting the individual to an identity account (profile).
  • With a non-biometric ID, this involves giving the individual a tool (an authenticator) they can use for authentication, such as a password or a barcode; a minimal sketch of this binding step follows below.
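
The sketch below illustrates, under simplified assumptions, what binding an identity to a password authenticator might look like for a non-biometric ID. It is a hypothetical sketch, not a production design: the record fields are invented, and a real system would rely on a vetted identity-management framework rather than hand-rolled storage.

```python
# Minimal sketch: bind a verified identity to a password authenticator (Part 1),
# then use the authenticator to prove identity later (Part 2).
import hashlib
import secrets

def enroll(identity_record: dict, password: str) -> dict:
    """Identity proofing has already happened; bind the profile to an authenticator."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return {
        "profile": identity_record,     # data verified during identity proofing
        "salt": salt.hex(),
        "authenticator": digest.hex(),  # never store the raw password
    }

def authenticate(account: dict, password: str) -> bool:
    """Prove possession of the enrolled authenticator."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), bytes.fromhex(account["salt"]), 600_000
    )
    return secrets.compare_digest(digest.hex(), account["authenticator"])

# Hypothetical usage
account = enroll({"name": "A. Example", "proofed_by": "document check"}, "a long passphrase")
assert authenticate(account, "a long passphrase")
assert not authenticate(account, "wrong guess")
```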

Part 2: Authentication

This is the process of using the digital ID to prove identity or access services.

Biometric authentication: There are two different types of biometric authentication.

  • Biometric Verification (or One-to-One Authentication) confirms that the person is who they say they are. This allows organizations to determine, for example, that a person is entitled to certain food, vaccines, or housing.
  • Biometric Identification (or One-to-Many Authentication) is used to identify an individual from within a database of biometric profiles. Organizations may use biometrics for identification to prevent fraudulent enrollments and to “de-duplicate” lists of people. One-to-many authentication systems pose more risks than one-to-one systems because they require a larger amount of data to be stored in one place and because they produce more false matches. (Read more in the Risks section.) The sketch that follows contrasts the two modes.
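
The sketch below contrasts the two modes under heavily simplified assumptions: biometric templates are represented as toy feature vectors compared with a similarity score, whereas real systems use specialized matching algorithms and calibrated thresholds.

```python
# Toy contrast of one-to-one verification versus one-to-many identification.

def similarity(a, b) -> float:
    """Toy score: 1.0 for identical templates, approaching 0.0 for unrelated ones."""
    distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + distance)

THRESHOLD = 0.8  # tuning this trades false matches against false rejections

def verify(claimed_template, live_scan) -> bool:
    """One-to-one: does the live scan match the template of the claimed identity?"""
    return similarity(claimed_template, live_scan) >= THRESHOLD

def identify(database: dict, live_scan):
    """One-to-many: search every enrolled template for the best match."""
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = similarity(template, live_scan)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None
```

Note that identification must compare the live scan against every enrolled template, which is why one-to-many systems require more data to be held in one place and create more opportunities for false matches.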

The list below synthesizes the advantages and disadvantages of different biometric authentication tools. For further details, see the World Bank’s “Technology Landscape for Digital Identification (2018).”

  • Fingerprints. Advantages: less physically/personally invasive; an advanced and relatively affordable method. Disadvantages: not fully inclusive, as some fingerprints are harder to capture than others.
  • Iris Scan. Advantages: fast, accurate, inclusive, and secure. Disadvantages: more expensive technology; verification requires precise positioning of the data subject; can be misused for surveillance purposes (verification without the data subject’s permission).
  • Face Recognition. Advantages: relatively affordable. Disadvantages: prone to error; can be misused for surveillance purposes (verification without the data subject’s permission); not enough standardization among technology suppliers, which could lead to vendor lock-in.
  • Voice Recognition. Advantages: relatively affordable; no hygiene concerns (unlike some other biometrics that involve touch). Disadvantages: the collection process can be difficult and time-consuming; the technology is difficult to scale.
  • Behavior Recognition, also known as “Soft Biometrics” (e.g., a person’s gait, how they write their signature). Advantages: can be used in real time. Disadvantages: prone to error; not yet a mature technology; can be misused for surveillance purposes (verification without the data subject’s permission).
  • Vascular Recognition (a person’s distinct pattern of veins). Advantages: secure, accurate, and inclusive technology. Disadvantages: more expensive; not yet a mature or widely understood technology; not interoperable, and data are not easily portable.
  • DNA Profiling. Advantages: secure; accurate; inclusive; useful for large populations. Disadvantages: the collection process is long; the technology is expensive; involves extremely sensitive information that can reveal race, gender, family relationships, etc., which could put the individual at risk.

Non-biometric authentication: Two common forms of digital ID are not based on physical characteristics or traits but have their own authentication methods. Digital ID cards and digital ID applications on mobile devices can be used to prove identity or to access services or aid (much like a passport, residence card, or driver’s license).

  • Cards: These are a common digital identifier and can rely on many kinds of technology, from microchips to barcodes. Cards have been in use for a long time, which makes them a mature technology, but they are less secure because they can be lost or stolen. “Smart cards” take the form of an embedded microchip combined with a password. Cards can also be combined with biometric systems. For example, Mastercard and Thales began offering cards with fingerprint sensors in January 2020.
  • Apps on mobile devices: Digital IDs can be used on mobile devices by relying on a password, a “cryptographic” (specially encoded) SIM card, or a “Smart ID” app. These methods are fairly accurate and scalable, but they carry security risks as well as long-term risks stemming from reliance on technology providers: the technology may not be interoperable or may become outdated (see Privatization of ID and Vendor Lock-In in the Risks section).

Part 3: Portability and interoperability

Digital IDs are usually generated by a single issuing authority (NGO, government entity, health provider, etc.) for an individual. However, portability means that digital ID systems can be designed to allow the person to use their ID elsewhere than with the issuing authority — for example with another government entity or non-profit organization.

To understand interoperability, consider different email providers, for instance, Gmail and Yahoo Mail: these are separate service providers, but their users can send emails to one another. Data portability and interoperability are critical from a fundamental rights perspective, but it is first necessary that different networks (providers, governments) be interoperable with one another to allow for portability. Interoperability is increasingly important for providing services within and across countries, as can be seen in the European Union and Schengen community, the East African community, and the West African ECOWAS community.

Self-Sovereign Identity (SSI) is an important, emerging type of digital ID that gives a person full ownership over their digital identity, guaranteeing them lifetime portability, independent from any central authority. The Self-Sovereign Identity model aims to remove the trust issues and power imbalances that generally accompany digital identity, by giving a person full control over their data.

Back to top

How are digital IDs relevant in civic space and for democracy?

People across the world who are not identified by government documents face significant barriers to receiving government services and humanitarian assistance. Biometrics are widely used by donors and development actors to identify individuals and connect them with services. Biometric technology can increase access to finance, healthcare, education, and other critical services and benefits. It can also be used for voter registration and in facilitating civic participation.

Resident of the Garin Wazam site in Niger exchanges her e-voucher for food. Biometric technology can increase access to critical services and benefits. Photo credit: Guimba Souleymane, International Red Cross Niger.

The United Nations High Commissioner for Refugees (UNHCR) began its global Biometric Identity Management System (“BIMS”) in 2015, and the following year the World Food Program began using biometrics for multiple purposes, including refugee protection, cash-based interventions, and voter registration. In recent years, a growing preference in aid delivery for cash-based interventions has been part of the push towards digital IDs and biometrics, as these tools can facilitate monitoring and reporting of assistance distribution.

The automated nature of digital IDs brings many new challenges, from gathering meaningful informed consent, to guaranteeing personal security and organization-level security, to potentially harming human dignity and increasing exclusion. These technical and societal issues are detailed in the Risks section.

Ethical Principles for Biometrics

Founded in July 2001 in Australia, the Biometrics Institute is an independent and international membership organization for the biometrics community. In March 2019, it released seven “Ethical Principles for Biometrics,” listed below.

  1. Ethical behaviour: We recognise that our members must act ethically even beyond the requirements of law. Ethical behaviour means avoiding actions which harm people and their environment.
  2. Ownership of the biometric and respect for individuals’ personal data: We accept that individuals have significant but not complete ownership of their personal data (regardless of where the data are stored and processed) especially their biometrics, requiring their personal data, even when shared, to be respected and treated with the utmost care by others.
  3. Serving humans: We hold that technology should serve humans and should take into account the public good, community safety and the net benefits to individuals.
  4. Justice and accountability: We accept the principles of openness, independent oversight, accountability and the right of appeal and appropriate redress.
  5. Promoting privacy-enhancing technology: We promote the highest quality of appropriate technology use including accuracy, error detection and repair, robust systems and quality control.
  6. Recognising dignity and equal rights: We support the recognition of dignity and equal rights for all individuals and families as the foundation of freedom, justice and peace in the world, in line with the United Nations Universal Declaration of Human Rights.
  7. Equality: We promote planning and implementation of technology to prevent discrimination or systemic bias based on religion, age, gender, race, sexuality or other descriptors of humans.

Back to top

Opportunities

Biometric voter registration in Kenya. Collection and storage of biometric data require strong data protection measures. Photo credit: USAID/Kenya Jefrey Karang’ae.

Read below to learn how digital IDs and biometrics can present opportunities in DRG work.

Potential fraud reduction

Biometrics are frequently cited for their potential to reduce fraud and, more generally, to manage financial risk by facilitating due-diligence oversight and scrutiny of transactions. According to The Engine Room, these are frequently cited justifications for the use of biometrics among development and humanitarian actors, but The Engine Room also found a lack of evidence to support this claim. It should not be assumed that fraud occurs only at the beneficiary level: the real problems with fraud may occur elsewhere in an ecosystem.

Facilitate E-Voting

Beyond the distribution of cash and services, digital IDs and biometrics have the potential to facilitate the voting process. The right to vote, and to participate in democratic processes more broadly, is a fundamental human right. Recently, the use of biometric voter registration and biometric voting systems has become more widespread as a means of empowering civic participation, securing electoral systems, and protecting against voter fraud and multiple enrollments.

Advocates claim that e-voting can reduce the costs of participation and make the process more reliable. Meanwhile, critics claim that digital systems are at risk of failure, misuse, and security breaches. Electronic ballot manipulation, poorly written code, or any other kind of technical failure could compromise the democratic process, particularly when there is no backup paper trail. For more, see “Introducing Biometric Technology in Elections” (2017) by the International Institute for Democracy and Electoral Assistance, which includes detailed case studies on e-voting in Bangladesh, Fiji, Mongolia, Nigeria, Uganda, and Zambia.

Health Records

Securing electronic health records, particularly when care services are provided by multiple actors, can be very complicated, costly, and inefficient. Because biometrics link a unique verifier to a single individual, they are useful for patient identification, allowing doctors and health providers to connect someone to their health information and medical history. Biometrics have potential in vaccine distribution, for example, by being able to identify who has received specific vaccines (see the case study by The New Humanitarian about Gavi technology).

Access to healthcare can be particularly complicated in conflict zones, for migrants and displaced people, and for other groups who lack documented health records. With interoperable biometrics, when patients need to transfer from one facility to another for whatever reason, their digital information can travel with them. For more, see the World Bank Group ID4D, “The Role of Digital Identification for Healthcare: The Emerging Use Cases” (2018).

Increased access to cash-based interventions

Digital ID systems have the potential to include the unbanked or those underserved by financial institutions in the local or even global economy. Digital IDs grant people access to regulated financial services by enabling them to prove their official identity. Populations in remote areas can benefit especially from digital IDs that permit remote, or non-face-to-face, identity proofing/enrollment for customer identification/verification. Biometrics can also make accessing banking services much more efficient, reducing the requirements and hurdles that beneficiaries would normally face. The WFP provides an example of a successful cash-based intervention: in 2017, it launched its first cash-based assistance for secondary school girls in northwestern Pakistan using biometric attendance data.

According to the Financial Action Task Force, by bringing more people into the regulated financial sector, biometrics further reinforce financial safeguards.

Improved distribution of aid and social benefits

Biometric systems can reduce much of the administrative time and human effort behind aid assistance, liberating human resources to devote to service delivery. Biometrics permit aid delivery to be tracked in real-time, which allows governments and aid organizations to respond quickly to beneficiary problems.

Biometrics can also reduce redundancies in social benefit and grant delivery. For instance, in 2015, the World Bank Group found that biometric digital IDs in Botswana achieved a 25 percent savings in pensions and social grants by identifying duplicated records and deceased beneficiaries. Indeed, the issue of “ghost” beneficiaries is a common problem. In 2019, the Namibian Government Institutions Pension Fund (GIPF) began requiring pension recipients to register their biometrics at their nearest GIPF office and return to verify their identity three times a year. Of course, social-benefit distribution can be aided by biometrics, but it also requires human oversight, given the possibility of glitches in digital service delivery and the critical nature of these services (see more in the Risks section).

Proof of identity

Migrants, refugees, and asylum seekers often struggle to prove and maintain their identity when they relocate. Many lose the proof of their legal identities and assets — for example, degrees and certifications, health records, and financial assets — when they flee their homes. Responsibly designed biometrics can help these populations reestablish and maintain proof of identity. For example, in Finland, a blockchain startup called MONI has been working since 2015 with the Finnish Immigration Service to provide refugees in the country with a prepaid credit card backed by a digital identity number stored on a blockchain. The design of these technologies is critical: data should be distributed rather than centralized to prevent the security risks and the misuse or abuse that come with centralized ownership of sensitive information.

Back to top

Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible risks associated with the use of digital ID tools in DRG work.

Dehumanization of beneficiaries

The way that biometrics are regarded — bestowing an identity on someone as if they did not have an identity previously — can be seen as problematic and even dehumanizing.

As The Engine Room explains, “the discourse around the ‘identifiability’ benefits of biometrics in humanitarian interventions often tends to conflate the role that biometrics play. Aid agencies cannot ‘give’ a beneficiary an identity, they can only record identifying features and check those against other records. Treating the acquisition of biometric data as constitutive of identity risks dehumanising beneficiaries, most of whom are already disempowered in their relationship with humanitarian entities upon whom they rely for survival. This attitude is evident in the remarks of one Burmese refugee undergoing fingerprint registration in Malaysia in 2006 — ‘I don’t know what it is for, but I do what UNHCR wants me to do’ — and of a Congolese refugee in Malawi, who upon completing biometric registration told staff, ‘I can be someone now.’”

Lack of informed consent

It is critical to obtain the informed consent of individuals during the biometric enrollment process, but this rarely happens in humanitarian and development settings, given the many confusing technical aspects of the technology, language and cultural barriers, and more. An agreement that is potentially coerced, as illustrated by the biometric registration program in Kenya, which was challenged in court after many Kenyans felt pressured into it, does not constitute consent. It is difficult to guarantee, or even to evaluate, consent when the power imbalance between the issuing authority and the data subject is so substantial. “Refugees, for instance, could feel they have no choice but to provide their information, because they are in a vulnerable situation.”

Minors also face a similar risk of coerced or uninformed consent. As the Engine Room pointed out in 2016, “UNHCR has adopted the approach that refusal to submit to biometric registration amounts to refusal to submit to registration at all. If this is true, this constrains beneficiaries’ right to contest the taking of biometric data and creates a considerable disincentive to beneficiaries voicing opposition to the biometric approach.”

For consent to be genuine, individuals must have an alternative method available so that they can refuse the procedure without being disproportionately penalized. Civil society organizations could play an important role in helping to remedy this power imbalance.

Security risks

Digital ID systems provide many important security features, but they also introduce other security risks, such as the risk of data leakage, data corruption, or data use and misuse by unauthorized actors. Digital ID systems can involve very detailed data about the behaviors and movements of vulnerable individuals, for example, their financial histories and their attendance at schools, health clinics, and religious establishments. This information could be used against them if it falls into the hands of other actors (corrupt governments, marketers, criminals).

The loss, theft, or misuse of biometric data are some of the greatest risks for organizations deploying these technologies. By collecting and storing their biometric data in centralized databases, aid organizations could be putting their beneficiaries at serious risk, particularly if their beneficiaries are people fleeing persecution or conflict. In general, because digital IDs rely on the Internet or other open communications networks, there are multiple opportunities for cyberattacks and security breaches. The Engine Room also cites anecdotal accounts of humanitarian workers losing laptops, USB keys, and other digital files containing beneficiary data. See also the Data Protection resource.

Data Reuse and Misuse

Because biometrics are unique and immutable, once biometric data are out in the world, people are no longer the only owners of their identifiers. The Engine Room describes this as the “non-revocability” of biometrics. This means that biometrics could be used for other purposes than those originally intended. For instance, governments could require humanitarian actors to give them access to biometric databases for political purposes, or foreign countries could obtain biometric data for intelligence purposes. People cannot easily change their biometrics as they would a driver’s license or even their name: for instance, with facial recognition, they would need to undergo plastic surgery in order to remove their biometric data.

There is also the risk that biometrics will be put to use in future technologies that may be more intrusive or harmful than current usages. “Governments playing hosts to large refugee populations, such as Lebanon, have claimed a right to access UNHCR’s biometric database, and donor States have supported UNHCR’s use of biometrics out of their own interest in using the biometric data acquired as part of the so-called ongoing ‘war on terror.’”

The Engine Room

For more on the potential reuse of biometric data for surveillance purposes, see also “Aiding surveillance: An exploration of how development and humanitarian aid initiatives are enabling surveillance in developing countries,” I&N Working Paper (2014).

Malfunctions and inaccuracies

Because they are so technical and rely on multiple steps and mechanisms, digital ID systems can experience many errors. Biometrics can return false matches, linking someone to the incorrect identity, or false negatives, failing to link someone to their actual identity. Technology does not always function as it does in the laboratory setting when it is deployed within real communities. Furthermore, some populations are at the receiving end of more errors than others: for instance, as has been widely proven, people of color are more often incorrectly identified by facial recognition technology.

Some technologies are more error-prone than others; for example, soft biometrics, which measure elements like a person’s gait, are less mature and less accurate than iris scans. Even fingerprints, though relatively mature and widely used, still have a high error rate. The performance of some biometrics can also diminish over time: aging can change a person’s facial features and even their irises in ways that can impede biometric authentication. Digital IDs can also suffer from connectivity issues: a lack of reliable infrastructure can reduce the system’s functioning in a particular geographic area for a significant period of time. To mitigate this, it is important that digital ID systems be designed to support both offline and online transactions.
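
The threshold trade-off behind these error rates can be made concrete with a toy calculation: raising a matching threshold to block impostors also locks out more genuine users. The similarity scores below are invented for illustration; real systems calibrate thresholds on large evaluation datasets.

```python
# Toy illustration: the matching threshold trades false matches against false rejections.
genuine_scores = [0.91, 0.85, 0.78, 0.88, 0.69, 0.95]   # same-person comparisons
impostor_scores = [0.12, 0.35, 0.71, 0.22, 0.48, 0.65]  # different-person comparisons

for threshold in (0.6, 0.7, 0.8):
    false_rejects = sum(s < threshold for s in genuine_scores)    # genuine user locked out
    false_accepts = sum(s >= threshold for s in impostor_scores)  # wrong person matched
    print(
        f"threshold={threshold}: "
        f"false non-match rate={false_rejects / len(genuine_scores):.2f}, "
        f"false match rate={false_accepts / len(impostor_scores):.2f}"
    )
```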

When it comes to providing life-saving aid services, even a small mistake or malfunction during a single step in the process can cause severe harm. Unlike manual processes where humans are involved and can intervene in the case of error, automated processes bring the possibility that no one will notice a seemingly small technicality until it is too late.

Exclusionary potential

Biometrics may exclude individuals for several reasons, according to The Engine Room: “Individuals may be reluctant to submit to providing biometric samples because of cultural, gender or power imbalances. Acquiring biometric samples can be more difficult for persons of darker skin color or persons with disabilities. Fingerprinting, in particular, can be difficult to undertake correctly, particularly when beneficiaries’ fingerprints are less pronounced due to manual and rural labor. All of these aspects may inhibit individuals’ provision of biometric data and thus exclude them from the provision of assistance.”

The kinds of errors mentioned in the section above are more frequent with respect to minority populations who tend to be underrepresented in training data sets, for example, people of color, and persons with disabilities.

Lack of access to technology or lower levels of technological literacy can compound exclusion: for example, lack of access to smartphones or lack of cellphone data or coverage may increase exclusion in the case of smartphone-reliant ID systems. As mentioned, manual laborers typically have worn fingerprints, which can be difficult for biometric readers to capture; similarly, the elderly may experience match failure due to changes in their facial characteristics, such as hair loss or other signs of aging or illness, all of which increase the risk of exclusion.

The World Bank ID4D program explains that they often note differential rates in coverage for the following groups and their intersections: women and girls; orphans and vulnerable children; poor people; rural dwellers; ethnolinguistic minorities; migrants and refugees; stateless populations or populations at risk of statelessness; older people; persons with disabilities; non-nationals. It bears emphasizing that these groups tend to be the most vulnerable populations in society — precisely those that biometric technology and digital IDs aim to include and empower. When considering which kind of ID or biometric technology to deploy, it is critical to assess all of these types of potential errors in relation to the population, and in particular how to mitigate against the exclusion of certain groups.

Insufficient regulation

“Technology is moving so fast that laws and regulations are struggling to keep up… Without clear international legislation, businesses in the biometrics world are often faced with the dilemma, ‘Just because we can, should we?’”

Isabelle Moeller, Chief Executive of the Biometrics Institute

Digital identification technologies exist in a continually evolving regulatory environment, which presents challenges to providers and beneficiaries alike. There are many efforts to create international standards for biometrics and digital IDs — for example, by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). But beyond the GDPR, there is not yet sufficient international regulation to enforce these standards in many of the countries where they are being implemented.

Privatization of ID and Vendor Lock-In

The technology behind digital identities and biometrics is almost always provided by private-sector actors, often in partnership with governments and international organizations and institutions. The major role played by the private sector in the creation and maintenance of digital IDs can put both the beneficiaries and aid organizations and governments at risk of vendor lock-in: if the cost of switching to a new service provider is too expensive or onerous, the organization/actor may be forced to stay with their original supplier. Overreliance on a private-sector supplier can also bring security risks (for instance, when the original supplier’s technology is insecure) and can pose challenges to partnering with other services and providers when the technology is not interoperable. For these reasons, it is important for technology to be interoperable and to be designed with open standards.

IBM’s Facial Recognition Ban
In June 2020, IBM decided to withdraw its facial-recognition technology from use by law enforcement in the U.S. Such one-off decisions by private actors should not replace legal judgments and regulations. Debbie Reynolds, data privacy officer for Women in Identity, believes that facial recognition will not disappear anytime soon; considering the many flaws in today’s technology, she argues that companies should focus on further improving the technology rather than on banning it. International regulation and enforcement are necessary first and foremost, as they will provide private actors with guidelines and incentives to design responsible, rights-respecting technology over the long term.

Back to top

Questions

If you are considering using digital ID tools as part of your programming, ask yourself these questions to understand the possible implications for your work and for your community and partners.

  1. Has the beneficiary given their informed consent? How were you able to check their understanding? Was consent coerced in any way, perhaps due to a power dynamic or lack of alternative options?
  2. How does the community feel about the technology? Does the technology fit with cultural norms and uphold human dignity?
  3. How affordable is the technology for all stakeholders, including the data subjects?
  4. How mature is the technology? How long has the technology been in use, where, and with what results? How well is it understood by all stakeholders?
  5. Is the technology accredited? When and by whom? Is the technology based on widely accepted standards? Are these standards open?
  6. How interoperable is the technology with the other technologies in the identity ecosystem?
  7. How well does the technology perform? How long does it take to collect the data, to validate identity, etc? What is the error rate?
  8. How resilient is the digital system? Can it operate without internet access or without reliable internet access?
  9. How easy is the technology to scale and use with larger or other populations?
  10. How secure and accurate is the technology? Have all security risks been addressed? What backup methods do you have in place (for example, a paper trail for electronic voting)?
  11. Is the collection of biometric data proportionate to the task at hand? Are you collecting the minimal amount of data necessary to achieve your goal?
  12. Where are all data being stored? What other parties might have access to this information? How are the data protected?
  13. Are any of the people who would receive biometric or digital IDs part of a vulnerable group? If digitally recording their identity could put them at risk, how could you mitigate against this? (for instance, avoiding a centralized database, minimizing the amount of data collected, taking cybersecurity precautions, etc.).
  14. What power does the beneficiary have over their data? Can they transfer their data elsewhere? Can they request that their data be erased, and can the data in fact be erased?
  15. If you are using digital IDs or biometrics to automate the fulfillment of fundamental rights or the delivery of critical services, is there sufficient human oversight?
  16. Who is a technological error most likely to exclude or harm? How will you address this potential harm or exclusion?

Back to top

Case studies

Aadhaar, India, the world’s largest national biometric system

Aadhaar is India’s national biometric ID program and the largest in the world. It is an essential case study for understanding the potential benefits and risks of such a system. Aadhaar is controversial. Many have attributed hunger-related deaths to failures in the Aadhaar system, which does not have sufficient human oversight to intervene when the technology malfunctions and prevents individuals from accessing their benefits. However, in 2018, the Indian Supreme Court upheld the legality of the system, saying it does not violate Indians’ right to privacy and could therefore remain in operation. “Aadhaar gives dignity to the marginalized,” the judges asserted, and “Dignity to the marginalized outweighs privacy.” While there are substantial risks, there are also significant opportunities for digital IDs in India, including increased inclusivity and accessibility for otherwise unregistered individuals, enabling them to access social services and participate in society.

WFP Iris Scan Technology in Zaatari Refugee Camp

In 2016, the World Food Program introduced biometric technology to the Zaatari refugee camp in Jordan. “WFP’s system relies on UNHCR biometric registration data of refugees. The system is powered by IrisGuard, the company that developed the iris scan platform, Jordan Ahli Bank, and its counterpart Middle East Payment Services. Once a shopper has their iris scanned, the system automatically communicates with UNHCR’s registration database to confirm the identity of the refugee, checks the account balance with Jordan Ahli Bank and Middle East Payment Services, and then confirms the purchase and prints out a receipt – all within seconds.” As of 2019, the program, which relies in part on blockchain technology, was supporting more than 100,000 refugees.

Kenya’s Huduma Namba

In January 2020, the New York Times reported that Kenya’s Digital IDs may exclude millions of minorities. In February, the Kenyan ID Huduma Namba was suspended by a High Court ruling, halting “the $60 million Huduma Namba scheme until adequate data protection policies are implemented. The panel of three judges ruled in a 500-page report that the National Integrated Identification Management System (NIIMS) scheme is constitutional, reports The Standard, but current laws are insufficient to guarantee data protection. […] Months after biometric capture began, the government passed its first data protection legislation in late November 2019, after the government tried to downgrade the role of data protection commissioner to a ‘semi-independent’ data protection agency with a chairperson appointed by the president. The data protection measures have yet to be implemented. The case was brought by civil rights groups including the Nubian Rights Forum and Kenya National Commission on Human Rights (KNCHR), citing data protection and privacy issues, that the way in which data protection legislation was handled in parliament prevented public participation, and how the NIIMs scheme is proving ethnically divisive in the country, particularly in border areas.”

Biometrics for child vaccination

As explored by The New Humanitarian in 2019: “A trial project is being launched with the underlying bet that biometric identification is the best way to help boost vaccination rates, linking children with their medical records. Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently. Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.”

Financial Action Task Force Case Studies

See also the case studies assembled by the Financial Action Task Force (FATF), the intergovernmental organization focused on combating terrorist financing. They released a comprehensive resource on Digital Identity in 2020, which includes brief case studies.

Digital Identity in the Migration and Refugee Context

For migrants and refugees in Italy, identity data collection processes can “exacerbate existing biases, discrimination, or power imbalances.” One key challenge is obtaining meaningful consent. Often, biometric data are collected as soon as migrants and refugees arrive in a new country, at a moment when they may be vulnerable and overwhelmed. Language barriers exacerbate the issue, making it difficult to provide adequate context around the rights to privacy. Identity data are collected inconsistently by different organizations, all of whose data protection and privacy practices vary widely.

Using Digital IDs in Ukraine

In 2019, USAID, in partnership with the Ministry of Digital Transformation of Ukraine, helped launch the Diia app, which allows citizens to access digital forms of identification that, since August 2021, have held the same legal value as physical identification. Diia has about 18 million users in Ukraine and is the most frequently used app in the country. Support for the app is crucial to Ukraine’s digital development and has become increasingly important as the war has forced many to flee and has damaged government buildings and existing infrastructure. The app allows users to store a digital passport along with 14 other digital documents and to access 25 public services online.

References

Find below the works cited in this resource.

This primer draws from the work of The Engine Room, and the resource they produced in collaboration with Oxfam on Biometrics in the Humanitarian Sector, published in March 2018.

Extended Reality / Augmented Reality / Virtual Reality (XR/AR/VR)

What is Extended Reality (XR)?

Extended Reality (XR) is a collective term encompassing Augmented Reality (AR) and Virtual Reality (VR), technologies that transform our interaction with the world by either enhancing or completely reimagining our perception of reality.

Utilizing advancements in computer graphics, sensors, cameras, and displays, XR creates immersive experiences that range from overlaying digital information onto our physical surroundings in AR, to immersing users in entirely digital environments in VR. XR represents a significant shift in how we engage with and perceive digital content, offering intuitive and natural interfaces for a wide range of applications in various sectors, including democracy, human rights, and governance.

What is Virtual Reality (VR)?

Virtual Reality (VR) is a technology that immerses users in a simulated three-dimensional (3D) environment, allowing them to interact with it in a way that simulates real-world experiences, engaging senses like sight, hearing, and touch. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds.

VR uses a specialized headset, known as a VR head-mounted display (HMD), to create a 3D, computer-generated world that fully encompasses the user’s vision and hearing. The headset not only renders this world but also enables interaction through hand controllers, which provide haptic feedback, simulating the sense of touch and enhancing the realism of the virtual experience. VR’s most notable application is immersive gaming, where it allows players to engage fully in complex fantasy worlds.

What is Augmented Reality (AR)?

Augmented Reality (AR) is a technology that overlays digital information and objects onto the real world, enhancing what we see, hear, and feel. For instance, it can be used as a tourist application to help a user find her way through an unfamiliar city and identify restaurants, hotels, and sights. Rather than immersing the user in an imaginary or distant virtual world, the physical world is enhanced by augmenting it in real time with digital information and entities.

AR became widely popular in 2016 with the game Pokémon Go, in which players found virtual characters in real locations, and Snapchat, which adds playful filters like funny glasses to users’ faces. AR is used in more practical applications as well, such as aiding surgery, enhancing car displays, and visualizing furniture in homes. Its seamless integration with the real world and focus on enhancing, rather than replacing, reality positions AR as a potential key player in future web and metaverse technologies, perhaps replacing traditional computing interfaces like phones and desktops by accurately blending real and virtual elements in real time.

What is Mixed Reality (MR)?

Mixed Reality (MR) is a technology that merges real-world and digital elements. It combines elements of Virtual Reality (VR), which creates a completely computer-generated environment, with Augmented Reality (AR), which overlays digital information onto the real world. In MR, users can seamlessly interact with both real and virtual objects in real time. Digital objects in MR are designed to respond to real-world conditions like light and sound, making them appear more realistic and integrated into our physical environment. Unlike VR, MR does not fully replace the real world with a digital one; instead, it enhances your real-world experience by adding digital elements, providing a more interactive and immersive experience.

MR has diverse applications, such as guiding surgeons in minimally invasive procedures using interactive 3D images and models through MR headsets. MR devices are envisioned as versatile tools poised to deliver value across multiple domains.

What is the Metaverse?

The Metaverse, a term first coined in the 1992 novel “Snow Crash,” is an immersive, interconnected virtual world in which people use avatars to interact with each other and digital environments via the internet. It blends the physical and digital realms using Extended Reality (XR) technologies like AR and VR, creating a space for diverse interactions and community building. Gaining traction through advancements in technology and investments from major companies, the Metaverse offers a platform that mirrors real-world experiences in a digitally enhanced environment, allowing simultaneous connections among numerous users.

The Metaverse and how it leverages XR: a metaverse can be built using VR technology (a virtual metaverse) or using AR technology (an augmented metaverse). (Figure adapted from Ahmed et al., 2023)

A metaverse can span the spectrum of virtuality and may incorporate a “virtual metaverse” or an “augmented metaverse” as shown above. Features of this technology range from employing avatars within virtual realms to utilizing smartphones for accessing metaverse environments, and from wearing AR glasses that superimpose computer-generated visuals onto reality, to experiencing MR scenarios that flawlessly blend elements from both the physical and virtual domains.

The spectrum ranging from reality to virtuality, following the Milgram and Kishino (1994) continuum. Figure adapted from “Reality Media” (Bolter, Engberg, & MacIntyre, 2021).
The above figure illustrates a spectrum from the real environment (physical world) at one end to a completely virtual environment (VR) at the other. Augmented Reality (AR) and Augmented Virtuality (AV) are placed in between, with AR mostly showing the physical world enhanced by digital elements, and AV being largely computer-generated but including some elements from the real world. Mixed Reality (MR) is a term for any combination of the physical and virtual worlds along this spectrum.

How is AR/VR relevant in civic space and for democracy?

In the rapidly evolving landscape of technology, the potential of AR/VR stands out, especially in its relevance to democracy, human rights, and governance (DRG). These technologies are not just futuristic concepts; they are tools that can reshape how we interact with the world and with each other, making them vital for the DRG community.

At the forefront is the power of AR/VR to transform democratic participation. These technologies can create immersive and interactive platforms that bring the democratic process into the digital age. Imagine participating in a virtual town hall from your living room, debating policies with avatars of people from around the world. This is not just about convenience; it’s about enhancing engagement, making participation in governance accessible to all, irrespective of geographic or physical limitations.

Moreover, AR/VR technologies offer a unique opportunity to give voice to marginalized communities. Through immersive experiences, people can gain a visceral understanding of the challenges faced by others, fostering empathy and breaking down barriers. For instance, VR experiences that simulate the life of someone living in a conflict zone or struggling with poverty can be powerful tools in human rights advocacy, making abstract issues concrete and urgent.

Another significant aspect is the global collaboration facilitated by AR/VR. These technologies enable DRG professionals to connect, share experiences, and learn from each other across borders. Such collaboration is essential in a world where human rights and democratic values are increasingly interdependent. The global exchange of ideas and best practices can lead to more robust and resilient strategies in promoting democracy and governance.

The potential of AR/VR in advocacy and awareness is significant. Traditional methods of raising awareness about human rights issues can be complemented and enhanced by the immersive nature of these technologies. They bring a new dimension to storytelling, allowing people to experience rather than just observe. This could be a game-changer in how we mobilize support for causes and educate the public about critical issues.

However, navigating the digital frontier of AR/VR technology calls for a vigilant approach to data privacy, security, and equitable access, recognizing these as not only technical challenges but also human rights and ethical governance concerns.

The complexity of governing these technologies necessitates the involvement of elected governments and representatives to address systemic risks, foster shared responsibility, and protect vulnerable populations. This governance extends beyond government oversight, requiring the engagement of a wide range of stakeholders, including industry experts and civil society, to ensure fair and inclusive management. The debate over governance approaches ranges from advocating government regulation to protect society, to promoting self-regulation for responsible innovation. A potentially effective middle ground is co-regulation, where governments, industry, and relevant stakeholders collaborate to develop and enforce rules. This balanced strategy is crucial for ensuring the ethical and impactful use of AR/VR in enhancing democratic engagement and upholding human rights.

Opportunities

AR/VR offers a diverse array of applications in the realms of democracy, human rights, and governance. The following section delves into various opportunities that AR/VR technology brings to civic and democracy work.

Augmented Democracy

Democracy is much more than just elections and governance by elected officials. A fully functional democracy is characterized by citizen participation in the public space, participatory governance, freedom of speech and opportunity, access to information, due legal process and enforcement of justice, and protection from abuse by the powerful. Chilean physicist César Hidalgo, formerly director of the Collective Learning group at the MIT Media Lab, has worked on an ambitious project he calls “Augmented Democracy.” Augmented Democracy banks on the idea of using technology such as AR/VR, along with other digital tools including AI and digital twins, to expand people’s ability to participate directly in a large volume of democratic decisions. Citizens can be represented in the virtual world by a digital twin, an avatar, or a software agent, allowing them to participate in public policy issues in a scalable, convenient fashion. Hidalgo asserts that democracy can be enhanced and augmented by using technology to automate several tasks of government; in the future, he suggests, politicians and citizens will be supported by algorithms and specialist teams, fostering a collective intelligence that serves people more effectively.
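To make the "software agent" idea concrete, here is a deliberately toy sketch of a digital twin voting on many proposals on a citizen's behalf. The preference weights, policy dimensions, and proposals are all invented for illustration; Hidalgo's project does not prescribe this particular model.

```python
import numpy as np

# Toy "digital twin": a vector of learned preference weights over policy
# dimensions (say: environment, taxation, transport). Values are invented.
preferences = np.array([0.9, -0.4, 0.2])

# Each proposal is scored on the same dimensions by some upstream process
# (also hypothetical here).
proposals = {
    "expand bike lanes":    np.array([0.7, -0.1, 0.9]),
    "raise fuel subsidies": np.array([-0.8, 0.3, 0.1]),
}

for name, features in proposals.items():
    score = float(preferences @ features)   # simple linear utility
    vote = "FOR" if score > 0 else "AGAINST"
    print(f"{name}: score={score:+.2f} -> twin votes {vote}")
```

The sketch shows why the approach scales: once preferences are represented explicitly, an agent can evaluate thousands of decisions at negligible cost, with the citizen auditing or overriding its votes.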

Participatory Governance

Using AR/VR, enhanced opportunities to participate in governance become available. When used in conjunction with other technologies such as AI, participatory governance becomes feasible at scale, with the voices of citizens and their representatives incorporated into decisions pertaining to public policy and welfare. However, “a participatory public space” is only one possibility. As discussed later in the Risks section, we cannot deterministically ascribe outcomes to technology, because the intent and purpose of deployment matter a great deal. If due care is not exercised, the use of technology in public spaces may result in less desirable scenarios such as “autocratic augmented reality” or “big-tech monopoly” (Gudowsky et al.). On the other hand, a well-structured metaverse could enable greater democratic participation and offer citizens new ways to engage in civic affairs, leading to more inclusive governance. For instance, virtual town hall meetings, debates, and community forums could bring together people from diverse backgrounds, overcoming geographical barriers and promoting democratic discussion. AR/VR could also facilitate virtual protests and demonstrations, providing a safe platform for expression in regions where physical gatherings are restricted.

AR/VR in Healthcare

Perhaps the most well-known applications of AR/VR in the civic space pertain to the healthcare and education industries. The benefit of AR/VR for healthcare is well-established and replicated through multiple scientific studies. Even skeptics, typically doubtful of AR/VR/metaverse technology’s wider benefits, acknowledge its proven effectiveness in healthcare, as noted by experts like Bolter et al. (2021) and Bailenson (2018). These technologies have shown promise in areas such as therapeutics, mental therapy, emotional support, and specifically in exposure therapy for phobias and managing stress and anxiety. Illustrating this, Garcia-Palacios et al. (2002) demonstrated the successful use of VR in treating spider phobia through a controlled study, further validating the technology’s therapeutic potential.

AR/VR in Education

Along with healthcare, education and training provide the most compelling use cases of immersive technologies such as AR/VR. The primary value of AR/VR is that it provides a unique first-person immersive experience that can enhance human perception and educate or train learners in the relevant environment. Thus, with AR/VR, education is not reading about a situation or watching it, but being present in that situation. Such training can be useful in a wide variety of fields. For instance, Boeing presented results of a study that suggested that training performed through AR enabled workers to be more productive and assemble plane wings much faster than when instructions were provided using traditional methods. Such training has also been shown to be effective in diversity training, where empathy can be engendered through immersive experiences.

Enhanced Accessibility and Inclusiveness

AR/VR technology allows for the creation of interactive environments that can be customized to meet the needs of individuals with various abilities and disabilities. For example, virtual public spaces can be adapted for those with visual impairments by focusing on other senses, using haptic (touch-based) or audio interfaces for enhanced engagement. People who are colorblind can benefit from a “colorblind mode” – a feature already present in many AR/VR applications and games – which adjusts colors to make them distinguishable. Additionally, individuals who need alternative communication methods can utilize text-to-speech features, even choosing a unique voice for their digital avatars. Beyond these adaptations, AR/VR technologies can help promote workplace equity by offering people with physical disabilities equal access to experiences and opportunities that might otherwise be inaccessible, leveling the playing field in both social and professional settings.
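As a small, concrete example of one such adaptation, the sketch below swaps an application's default colors for the Okabe-Ito palette, a widely used set of hues chosen to remain distinguishable under common color-vision deficiencies. The default-to-safe mapping is our own illustration, not taken from any particular AR/VR product.

```python
# A minimal "colorblind mode": remap a UI's default colors onto the
# Okabe-Ito palette, designed to stay distinguishable for most forms
# of color-vision deficiency. The specific mapping is illustrative.
OKABE_ITO = {
    "orange": "#E69F00", "sky_blue": "#56B4E9", "green":  "#009E73",
    "yellow": "#F0E442", "blue":     "#0072B2", "red":    "#D55E00",
    "purple": "#CC79A7", "black":    "#000000",
}

DEFAULT_TO_SAFE = {            # hypothetical app defaults -> safe palette
    "#FF0000": OKABE_ITO["red"],
    "#00FF00": OKABE_ITO["green"],
    "#0000FF": OKABE_ITO["blue"],
}

def colorblind_mode(scene_colors: list[str]) -> list[str]:
    """Return the scene's colors with hard-to-distinguish hues replaced."""
    return [DEFAULT_TO_SAFE.get(c.upper(), c) for c in scene_colors]

print(colorblind_mode(["#ff0000", "#0000FF", "#888888"]))
# -> ['#D55E00', '#0072B2', '#888888']
```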

Generating Empathy and Awareness

AR/VR presents a powerful usability feature through which users can experience what it is like to be in the shoes of someone else. Such perspective-enhancing use of AR/VR can be used to increase empathy and promote awareness of others’ circumstances. VR expert Jeremy Bailenson and his team at Stanford Virtual Human Interaction Lab have worked on VR for behavior change and have created numerous first-person VR experiences to highlight social problems such as racism, sexism, and other forms of discrimination (see some examples in Case Studies). In the future, using technology in real time with AR/VR-enabled, wearable and broadband wireless communication, one may be able to walk a proverbial mile in someone else’s shoes in real time, raising greater awareness of the difficulties faced by others. Such VR use can help in removing biases and in making progress on issues such as poverty and discrimination.

Immersive Virtual Communities and Support Systems

AR/VR technologies offer a unique form of empowerment for marginalized communities, providing a virtual space for self-expression and authentic interaction. These platforms enable users to create avatars and environments that truly reflect their identities, free from societal constraints. This digital realm fosters social development and offers a safe space for communities often isolated in the physical world. By connecting these individuals with broader networks, AR/VR facilitates access to educational and support resources that promote individual and communal growth. Additionally, AR/VR serves as a digital archive for diverse cultures and traditions, aiding in the preservation and celebration of cultural diversity. As highlighted in Jeremy Bailenson’s “Experience on Demand,” these technologies also provide therapeutic benefits, offering emotional support to those affected by trauma. Through virtual experiences, individuals can revisit cherished memories or envision hopeful futures, underscoring the technology’s role in emotional healing and psychological wellbeing.

Virtual Activism

Virtual reality, unlike traditional media, does not provide merely a mediated experience. When it is done well, explains Jeremy Bailenson, it is an actual experience. Therefore, VR can be the agent of long-lasting behavior change and can be more engaging and persuasive than other types of traditional media. This makes AR/VR ideally suited for virtual activism, which seeks to bring actual changes to the life of marginalized communities. For instance, VR has been used by UN Virtual Reality to provide a new lens on an existing migrant crisis; create awareness around climate change; and engender humanitarian empathy. Some examples are elaborated upon in the Case Studies.

Virtual Sustainable Economy

AR/VR and the metaverse could enable new, more sustainable economic models. Decentralized systems like blockchain can be used to support digital ownership of virtual assets, economically empower the disenfranchised, and challenge traditional, centralized power structures. Furthermore, since AR/VR and the metaverse promise to be the next evolution of the Internet, one that is more immersive and multi-sensory, individuals may be able to participate in various activities and experiences virtually. This could reduce the need for physical travel and infrastructure, resulting in more economical and sustainable living, reducing carbon footprints, and mitigating climate change.

Risks

The use of AR/VR in democracy, human rights, and governance work carries various risks. The following sections explore these risks in more detail and offer strategies for mitigating them effectively.

Limited applications and Inclusiveness

For AR/VR technologies to be effectively and inclusively used in democratic and other applications, it is essential to overcome several key challenges. Currently, these technologies fall short in areas like advanced tactile feedback, comprehensive sign language support, and broad accessibility for various disabilities. To truly have a global impact, AR/VR must adapt to diverse communication methods, including visual, verbal, and tactile approaches, and cater to an array of languages, from spoken to sign. They should also be designed to support different cognitive abilities and neurodiversity, in line with the principles set by the IEEE Global Initiative on Ethics of Extended Reality. There is a pressing need for content to be culturally and linguistically localized as well, along with the development of relevant skills, making AR/VR applications more applicable and beneficial across various cultural and linguistic contexts.

Furthermore, access to AR/VR technologies and the broader metaverse and XR ecosystem is critically dependent on advanced digital infrastructure, such as strong internet connectivity, high-performance computing systems, and specialized equipment. As noted by Matthew Ball in his 2022 analysis, significant improvements in computational efficiency are necessary to make these technologies widely accessible and capable of delivering seamless, real-time experiences, which is particularly crucial in AR to avoid disruptive delays. Without making these advancements affordable, AR/VR applications at scale remain limited.

Concentration of Power & Monopolies of Corporations

The concept of the Metaverse, as envisioned by industry experts, carries immense potential for shaping the future of human interaction and experience. However, the concentrated control of this expansive digital realm by a single dominant corporation raises critical concerns over the balance of power and authority. As Matthew Ball (2022) puts it, the Metaverse’s influence could eclipse that of governments, bestowing unprecedented authority upon the corporation that commands it. The concentration of power within this immersive digital ecosystem brings forth apprehensions about accountability, oversight, and the potential implications for personal freedoms.

Another significant concern is how companies gather and use our data. While companies can use data to improve their services in many ways, the World Bank (2021) warns that amassing vast amounts of data can give companies outsized economic and political power, which could be used to harm citizens. The more data are reused, the more chances there are for misuse. Especially in situations characterized by concentrations of power, as in authoritarian regimes or corporate monopolies, the risks of privacy violations, surveillance, and manipulation become much higher.

Privacy Violation with Expanded Intrusive Digital Surveillance

The emergence of AR/VR technologies has revolutionized immersive experiences but also raises significant privacy concerns due to the extensive data collection involved. These devices collect a wide range of personal data, including biometric information like blood pressure, pulse oximetry, voice prints, facial features, and even detailed body movements. This kind of data gathering poses specific risks, particularly to vulnerable and marginalized groups, as it goes much further than simple identification. Current regulatory frameworks are not adequately equipped to address these privacy issues in the rapidly evolving XR environment. This situation underscores the urgent need for updated regulations that can protect individual privacy in the face of such advanced technological capabilities.

Moreover, AR/VR technologies bring unique challenges in the form of manipulative advertising and potential behavior modification. Using biometric data, these devices can infer users’ deepest desires, enabling highly targeted and potentially invasive advertising that taps into subconscious motivations. Such techniques blur the line between personal privacy and corporate interests, necessitating robust privacy frameworks. Additionally, the potential of AR/VR to influence or manipulate human behavior is a critical concern. Because these technologies can shape our perceptions and choices, it is essential to involve diverse perspectives in their design and to enact proactive regulations before their infrastructure and business models produce irreversible impacts. Furthermore, the impact of XR technology extends to bystanders, who may unknowingly be recorded or observed, especially with the integration of technologies like facial recognition, posing further risks to privacy and security.

Unintended Harmful Consequences of AR/VR

When introducing AR/VR technology into democracy-related programs or other social initiatives, it is crucial to consider the broader, often unintended, consequences these technologies might have. AR/VR offers immersive experiences that can enhance learning and engagement, but these very qualities also bear risks. For example, while VR can create compelling simulations of real-world scenarios, promoting empathy and understanding, it can also lead to phenomena like “VR Fatigue” or “VR Hangover.” Users might experience a disconnection from reality, feeling alienated from their physical environment or their own bodies. Moreover, the prevalence of “cybersickness,” akin to motion sickness, caused by discrepancies in sensory inputs, can result in discomfort, nausea, or dizziness, detracting from the intended positive impacts of these technologies.

Another significant concern is the potential for AR/VR to shape users’ perceptions and behaviors in undesirable ways. The immersive nature of these technologies can intensify the effects of filter bubbles and echo chambers, isolating users within highly personalized, yet potentially distorted, information spheres. This effect can exacerbate the fragmentation of shared reality, impeding constructive discourse in democratic contexts. Additionally, the blending of virtual and real experiences can blur the lines between factual information and fabrication, making users more susceptible to misinformation. Furthermore, the perceived anonymity and detachment in VR environments might encourage anti-social behavior, as people might engage in actions they would avoid in real life. There is also the risk of empathy, generally a force for good, being manipulated for divisive or exploitative purposes. Thus, while AR/VR holds great promise for enhancing democratic and social programs, potential negative impacts call for careful, ethically guided implementation.

“Too True to Be Good”: Disenchantment with Reality & Pygmalion Effect

In our era of augmented and virtual realities, where digital escapism often seems more enticing than the physical world, there is a growing risk to our shared understanding and democracy as people become disenchanted with reality and retreat into virtual realms (Turkle, 1996; Bailenson, 2018). The transformative nature of AR/VR introduces a novel dynamic in which individuals gravitate toward virtual worlds at the expense of engaging with their physical surroundings (now considered “too true to be good”). The use of VR by disadvantaged and exploited populations may provide them relief from the challenges of their lived experience, but it also diminishes the likelihood of their resistance to those conditions. Moreover, as AR/VR advances and becomes integrated with advanced AI in the metaverse, there is a risk of blurring the lines between the virtual and real worlds. Human beings have a tendency to anthropomorphize machines and bots that have humanistic features (e.g., eyes or language) and to treat them as humans (Reeves & Nass, 1996). We might treat AI and virtual entities as if they were human, potentially leading to confusion and challenges in our interactions. There are also severe risks associated with overindulgence in highly realistic immersive experiences (Greengard, 2019). VR expert Denny Unger, CEO of Cloudhead Games, cautions that extreme immersion could extend beyond discomfort and result in more severe outcomes, including potential heart attacks and fatal incidents.

Neglect of physical self and environment

Jeremy Bailenson’s (2018) observation that being present in virtual reality often means being absent from the real world is a crucial point for those considering VR in democracy and other important work. When people dive into VR, they can become so focused on the virtual world that they lose touch with what is happening around them in real life. In his book “Experience on Demand,” Bailenson explains how this deep engagement can lead users to neglect their own physical needs and environment, similar to how people might feel disconnected from themselves and their surroundings in certain psychological conditions. There is also a worry that VR companies might design their products to be addictive, making it hard for users to pull away. This raises important questions about the long-term effects of heavy VR use and highlights the need for strategies to prevent these issues.

Safety and Security

In the realm of immersive technologies, safety is a primary concern. There is a notable lack of understanding about the impact of virtual reality (VR), particularly on young users. Ensuring the emotional and physical safety of children in VR environments requires well-defined guidelines and safety measures. The enticing nature of VR must be balanced with awareness of the real world to protect younger users. Discussions about age restrictions and responsible use of VR are critical in this rapidly advancing technological landscape. Spiegel (2018) emphasizes the importance of age restrictions to protect young users from the potential negative effects of prolonged VR exposure, arguing for the benefits of such limitations.

On another front, the lack of strong identity verification in virtual spaces raises concerns about identity theft and avatar misuse, particularly affecting children who could be victims of fraud or wrongly accused of offenses. The absence of effective identity protection increases the vulnerability of users, highlighting the need for advanced security measures. Additionally, virtual violence, like harassment incidents reported in VR games, poses a significant risk. These are not new issues; for instance, Julian Dibbell’s 1994 article “A Rape in Cyberspace” brought attention to the challenge of preventing virtual sexual assault. This underlines the urgent need for policies to address and prevent harassment and violence in VR, ensuring these spaces are safe and inclusive for all users.

Alignment with Values and Meaning-Making

When incorporating AR/VR technologies into programs, it is crucial to be mindful of their significant impact on culture and values. As Neil Postman pointed out, technology invariably shapes culture, often creating a mix of winners and losers. Each technological tool carries inherent biases, whether political, social, or epistemological. These biases subtly influence our daily lives, sometimes without our conscious awareness. Hence, when introducing AR/VR into new environments, consider how these technologies align or conflict with local cultural values. As Nelson and Stolterman (2014) observed, culture is dynamic, caught between tradition and innovation. Engaging the community in the design process can enhance the acceptance and effectiveness of your project.

In the context of democracy, human rights, and governance, it is essential to balance individual desires with the collective good. AR/VR can offer captivating, artificial experiences, but as philosopher Robert Nozick’s (2018) “Experience Machine” thought experiment illustrates, these cannot replace the complexities and authenticity of real-life experiences. People often value reality, authenticity, and the freedom to make life choices over artificial pleasure. In deploying AR/VR, the goal should be to empower individuals, enhancing their participation in democratic processes and enriching their lives, rather than offering mere escapism. Ethical guidelines and responsible design practices are key in ensuring the conscientious use of virtual environments. By guiding users towards more meaningful and fulfilling experiences, AR/VR can be used to positively impact society while respecting and enriching its cultural fabric.

Questions

If you are trying to understand the implications of AR/VR in your DRG work, you should consider the following questions:

  1. Does the AR/VR use enhance human engagement with the physical world and real-world issues, or does it disengage people from the real world? Will the AR/VR tool being developed create siloed spaces that will alienate people from each other and from the real world? What steps have been taken to avoid such isolation?
  2. Can this project be done in the real world and is it really needed in virtual reality? Does it offer any benefit over doing the same thing in the real world? Does it cause any harm compared to doing it in the real world?
  3. In deploying AR/VR technology, consider if it might unintentionally reinforce real-world inequalities. Reflect on the digital and economic barriers to access: Is your application compatible with various devices and networks, ensuring wide accessibility? Beware of creating a “reality divide,” in which marginalized groups are pushed towards virtual alternatives while others enjoy physical experiences. Always consider offering real-world options for those less engaged with AR/VR, promoting inclusivity and broad participation.
  4. Have the system-level repercussions of AR/VR technology usage been considered? What will be the effect of the intervention on the community and the society at large? Are there any chances that the proposed technology will result in the problem of technology addiction? Can any unintended consequences be anticipated and negative risks (such as technology addiction) be mitigated?
  5. Which policy and regulatory frameworks are being followed to ensure that AR/VR-related technologies, or more broadly XR and metaverse-related technologies, do not violate human rights and contribute positively to human development and democracy?
  6. Have the necessary steps been taken to accommodate and promote diversity, equity and inclusion so that the technology is appropriate for the needs and sensitivities of different groups? Have the designers and developers of AR/VR taken on board input from underrepresented and marginalized groups to ensure participatory and inclusive design?
  7. Is there transparency in the system regarding what data are collected, who they are shared with and how they are used? Does the data policy comply with international best practices regarding consumer protection and human rights?
  8. Are users given significant choice and control over their privacy, autonomy, and access to their information and online avatars? What measures are in place to limit unauthorized access to data containing private and sensitive information?
  9. If biometric signals are being monitored, or sensitive information such as eye-gaze detection is performed, what steps and frameworks have been followed to ensure that they are used for purposes that are ethical and pro-social?
  10. Is there transparency in the system about the use of AI entities in the Virtual Space such that there is no deception or ambiguity for the user of the AR/VR application?
  11. What steps and frameworks have been followed to ensure that any behavior modification or nudging performed by the technology is guided by ethics and law and is culturally sensitive and pro-social? Is the AR/VR technology complying with the UN Human Rights principles applied to communication surveillance and business?
  12. Has appropriate consideration been given to safeguarding children’s rights if the application is intended for use by children?
  13. Is the technology culturally appropriate? Which cultural effects are likely when this technology is adopted? Will these effects be welcomed by the local population, or will it face opposition?

Case Studies

AR/VR can have positive impacts when used to further DRG issues. Read below to learn how to think about AR/VR use more effectively and safely in your work.

UN Sustainable Development Goals (SDG) Action Campaign

Starting in January 2015, the UN SDG Action Campaign has overseen the United Nations Virtual Reality Series (UN VR), aiming to make the world’s most urgent issues resonate with decision makers and people worldwide. By pushing the limits of empathy, this initiative delves into the human narratives underlying developmental struggles. Through the UN VR Series, individuals in positions to effect change gain a more profound insight into the daily experiences of those who are at risk of being marginalized, thereby fostering a deeper comprehension of their circumstances.

UN Secretary-General Ban Ki-moon and WHO Director-General Margaret Chan. © David Gough. Credit: UN VR, https://unvr.sdgactioncampaign.org/

As a recent example of advocacy and activism using immersive storytelling to brief decision makers, in April 2022 the United Nations Department of Political and Peacebuilding Affairs (UN DPPA), together with the Government of Japan, released the VR experience “Sea of Islands,” which takes viewers to the Pacific islands, allowing them to witness the profound ramifications of the climate crisis in the Asia-Pacific region. Through this medium, the urgency, magnitude, and critical nature of climate change become tangible and accessible.

Poster of the VR Film: Sea of Islands. Source: https://media.un.org/en/asset/k1s/k1sbvxqll2

VR For Democratizing Access to Education

AR/VR technologies hold great promise in the field of educational technology (“edtech”) due to their immersive capabilities, engaging nature, and potential to democratize access and address issues such as cost and distance (Dick, 2021a). AR/VR can play a crucial role in facilitating the understanding of abstract concepts and enable hands-on practice within safe virtual environments, particularly benefiting STEM courses, medical simulations, and arts and humanities studies. Additionally, by incorporating gamified, hands-on learning approaches across various subjects, these technologies enhance cognitive development and classroom engagement. Another advantage is their capacity to offer personalized learning experiences, benefiting all students, including those with cognitive and learning disabilities. An illustrative instance is Floreo, which employs VR-based lessons to teach social and life skills to young people with autism spectrum disorder (ASD). The United Nations Children’s Fund (UNICEF) has a series of initiatives under its AR/VR for Good Initiative. For example, Nigerian start-up Imisi 3D, founded by Judith Okonkwo, aims to bring VR to the classroom. Imisi 3D’s solution promises to provide quality educational tools through VR, enrich children’s learning experiences, and make education accessible to more people.

Source: UNICEF Nigeria/2019/Achirga

Spotlight on Refugees and Victims of War

A number of projects have turned to VR to highlight the plight of refugees and those affected by war. One of UN VR’s first documentaries, released in 2015, is Clouds Over Sidra, the story of Sidra, a 12-year-old girl who has lived in the Zaʿatari refugee camp since the summer of 2013. The storyline follows Sidra around the camp, where approximately 80,000 Syrians, roughly half of them children, have taken refuge from conflict and turmoil. Through the VR film, Sidra takes audiences on a journey through her daily existence, providing insights into activities such as eating, sleeping, learning, and playing within the expansive desert landscape of tents. By plunging viewers into this world that would otherwise remain distant, the UN strives to offer existing donors a tangible glimpse into the impact of their contributions and, for potential donors, an understanding of the areas that still require substantial support.

The Life of Migrants in a Refugee Camp in VR (UN VR Project Clouds over Sidra) Source: http://unvr.sdgactioncampaign.org/cloudsoversidra/

Another UN VR project, My Mother’s Wing, offers an unparalleled perspective of the war-torn Gaza Strip, presenting a firsthand account of a young mother’s journey as she grapples with the heart-wrenching loss of two of her children during the bombardment of the UNRWA school in July 2014. This poignant film sheds light on the blend of sorrow and hope that colors her daily existence, showcasing her pivotal role as a beacon of strength within her family. Amid the process of healing, she emerges as a pillar of support, nurturing a sense of optimism that empowers her family to persevere with renewed hope.

Experience of a War-Torn Area (UN VR Project My Mother’s Wing) Source: https://unvr.sdgactioncampaign.org/a-mother-in-gaza/

Improving Accessibility in the Global South with AR

In various parts of the world, millions of adults struggle to read basic things such as bus schedules or bank forms. To help, AR technology can be paired with phone cameras to assist people who struggle with reading. For example, Google Lens offers support for translation and can read text out loud when the camera is pointed at it, highlighting each word as it is spoken so the user can follow along and understand the full context. One can also tap on a specific word to search for it and learn its definition. Google Lens is designed to work not only on expensive smartphones but also on cheap phones equipped with cameras.
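Google Lens itself is proprietary, but the underlying pattern (recognize text in a camera image, then speak it aloud) can be sketched with open-source libraries. The snippet below is a rough approximation, assuming the pytesseract and pyttsx3 packages and the Tesseract OCR engine are installed; the image filename is hypothetical.

```python
# Read text from a photo and speak it aloud: a rough, open-source
# approximation of the assistive pattern described above.
# Requires: pip install pytesseract pyttsx3 pillow, plus the Tesseract binary.
from PIL import Image
import pytesseract
import pyttsx3

def read_aloud(image_path: str, lang: str = "eng") -> str:
    # Extract text from the image with Tesseract OCR.
    text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
    # Speak it using the operating system's text-to-speech voice.
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_aloud("bus_schedule.jpg"))  # hypothetical photo of a sign
```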

Google Translate with Google Lens for Real-Time Live Translation of Consumer Train Tickets Source: https://storage.googleapis.com/gweb-uniblog-publish-prod/original_images/Consumer_TrainTicket.gif

Another example, the AR app IKEA Place, shows the power of AR-driven spatial design and consumer engagement. The app employs AR technology to allow users to virtually place furniture in their living spaces, improving decision-making and customer satisfaction. Such AR technology can also be applied to civic planning: by providing an authentic representation of products in real-world environments, similar apps could aid urban planners and architects in simulating design elements within public spaces, contributing to informed decision-making for cityscapes and communal areas.

IKEA Place: Using AR to visualize furniture within living spaces. Source: Ikea.com

More examples of how AR/VR technology can be used to enhance accessibility are noted in Dick (2021b).

Spotlight on Gender and Caste Discrimination

The presence of women at the core of our democratic systems marks a significant stride toward realizing both gender equality (SDG 5) and robust institutions (SDG 16). The VR film “Compliment,” created by Lucy Bonner, a graduate student at Parsons School of Design, draws attention to the harassment and discrimination endured by women in unsafe environments, which regrettably remains a global issue. Through the film, viewers step into the shoes of a woman navigating the streets, gaining a firsthand perspective on the distressing spectrum of harassment many women experience, often on a daily basis.

View of a Scene from the VR Movie, “Compliment.”
Source: http://lucymbonner.com/compliment.html

There are other forms of systemic discrimination, including caste-based discrimination. A VR-based film, “Courage to Question,” produced by Novus Select in collaboration with UN Women and Vital Voices and supported by Google, offers a glimpse into the struggles of activists combating caste-based discrimination. The film highlights the plight of Dalit women, who continue to occupy the lowest rungs of caste, color, and gender hierarchies. Formerly known as “untouchables,” Dalits are members of the lowest caste in India and are fighting back against systems of oppression. They are systematically deprived of fundamental human rights, including access to basic necessities like food, education, and fair labor.

Scene from UN Women’s VR movie “Courage to Question,” highlighting discrimination faced by Dalit women. Snapshot from https://www.youtube.com/watch?v=pJCl8FNv22M

Maternal Health Training

The UN Population Fund, formerly the UN Fund for Population Activities (UNFPA), pioneered a VR innovation in 2022 to improve maternal-health training, the first project to implement VR in Timor-Leste and possibly in the Asia-Pacific region. The program delivers VR modules teaching Emergency Obstetric and Newborn Care (EmONC) skills and procedures to save the lives of mothers and babies, experienced through VR goggles. The project aims to create digitally mediated learning environments in which real medical situations are visualized for trainees, boosting learning experiences and outcomes and helping to “refresh the skills of hundreds of trained doctors and midwives to help them save lives and avoid maternal deaths.”

Source: https://timor-leste.unfpa.org/en/news/unfpa-develop-novel-innovation-help-reduce-maternal-deaths-timor-leste-using-virtual-reality © UNFPA Timor-Leste.

Highlighting Racism and Dire Poverty

The immersive VR experience, “1000 Cut Journey,” takes participants on a profound exploration. They step into the shoes of Michael Sterling, a Black male, and traverse through pivotal moments of his life, gaining firsthand insight into the impact of racism. This journey guides participants through his experiences as a child facing disciplinary measures in the classroom, as an adolescent dealing with encounters with the police, and as a young adult grappling with workplace discrimination (Cogburn et al., 2018).

View from 1000 Cut Journey, a VR film on Racism.
Source: https://www.virtualrealitymarketing.com/case-studies/1000-cut-journey/

1000 Cut Journey serves as a powerful tool to foster a significant shift in perspective. By immersing individuals in the narrative of Michael Sterling, it facilitates a deeper, more authentic engagement with the complex issues surrounding racism.

In another project from Stanford University’s Virtual Human Interaction Lab, participants can experience firsthand the lives of people experiencing homelessness, walking in the shoes of those who can no longer afford a home. Through this, the researchers aim to raise awareness and to study the effect of VR experiences on empathy. Their research has found that a VR experience, compared to other perspective-taking exercises, engenders longer-lasting behavior change.

View of Becoming Homeless — A Human Experience VR Film.
Source: https://xrgigs.com/offered/becoming-homeless-a-human-experience/

Participatory Governance using XR

The MIT Media Lab and HafenCity University Hamburg teamed up to create CityScope, an innovative tool blending AI, algorithms, and human insight for participatory governance. This tool utilizes detailed data, including demographics and urban planning information, to encourage citizen involvement in addressing community issues and collective decision-making. It allows users to examine various scenarios, fostering informed dialogue and collaborative solutions. This project highlights how combining technology and human creativity can enhance citizen engagement in urban development.

District leaders and residents meet to explore possible sites for refugee communities. Credit: Walter Schiesswohl. Source: https://medium.com/mit-media-lab/

Another example is vTaiwan, an innovative approach to participatory governance that brings together government ministries, scholars, citizens, and business leaders to redefine modern democracy. The process converges online and offline consultations through platforms like vtaiwan.tw, which is used for proposals, opinion gathering, reflection, and legislation. Taiwan has also used VR to highlight its response to the COVID-19 crisis through the VR film The Three Crucial Steps, which showcases how three steps (prudent action, rapid response, and early deployment) played a critical role in Taiwan’s successful COVID-19 response.

Taiwanese Deputy Minister of Foreign Affairs watches the Ministry’s VR film, Three Crucial Steps, about Taiwan’s response to COVID-19. Photo: Louise Watt.
Source: https://topics.amcham.com.tw/2020/12/taiwan-new-trails-with-extended-reality/

Taiwan also harnesses open-source tools and advanced real-time systems such as Pol.is, which uses statistical analysis and machine learning to decode the sentiments of an extensive user base exceeding 200,000 participants; through the pioneering integration of 3D cameras in live-streamed dialogues, participants can even take part immersively in VR. This movement, born in 2014 and still ongoing, serves as a model of technology-enhanced 21st-century democratic governance.
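Pol.is's general approach is to reduce a participants-by-statements vote matrix to a low-dimensional map and then cluster it to reveal opinion groups. The sketch below reproduces that pattern with scikit-learn on a tiny invented vote matrix; it illustrates the technique, not Pol.is's actual code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass.
# The votes are invented for illustration.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [ 1,  0, -1, -1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  0],
    [-1,  0,  1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)     # 2-D opinion map
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for person, (xy, g) in enumerate(zip(coords, groups)):
    print(f"participant {person}: position=({xy[0]:+.2f}, {xy[1]:+.2f}) "
          f"opinion group {g}")
```

On real data, the resulting map lets facilitators see where groups agree despite their differences, which is precisely the consensus-finding signal vTaiwan's consultations rely on.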

Clinical VR and Virtual Therapy

VR holds significant potential for application in clinical settings, particularly in virtual therapy and rehabilitation, as noted by Rizzo et al. (2023). To exemplify the clinical utility of VR, consider the treatment of burn pain, which is often described by medical professionals as excruciating and which frequently leaves patients susceptible to post-traumatic stress. For more than two decades, VR has provided a measure of solace to burn patients through innovative solutions like the VR game SnowWorld, developed by researchers at the University of Washington.

A patient uses SnowWorld VR during treatment for burns.
Photo and Copyright: Hunter G. Hoffman. Credit: University of Washington.
Source: https://depts.washington.edu/hplab/research/virtual-reality/

During the operative care of burn wounds, patients immerse themselves in the SnowWorld VR experience. Remarkably, this immersive engagement has proven successful in drowning out or mitigating the pain signals that afflict patients. SnowWorld’s design revolves around snow, leveraging imagery of cold and ice to counter the sensation of heat from burn wounds and to divert patients’ thoughts away from their accidents and injuries. The effectiveness of VR in managing pain highlights the great potential of AR/VR technology in clinical therapy and healthcare.

References

Find below the works cited in this resource.

Additional Resources

Digital Development in the time of COVID-19