Artificial Intelligence & Machine Learning

What is AI and ML?

Artificial intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can approximate human intelligence. There is no single, precise, universal definition of AI.

Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI that relies on algorithms trained to develop their own rules. This is an alternative to traditional computer programs, in which rules have to be hand-coded. Machine learning extracts patterns from data and places that data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?

Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems, and evolutionary computation.

Diagram: the types of artificial intelligence.

The diagram above shows the many different types of technology fields that comprise AI. AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems. When referring to AI, one can be referring to any or several of these technologies or fields. Applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say to Siri, “Siri, show me a picture of a banana,” Siri utilizes natural language processing (question answering) to understand what you’re asking, and then uses vision (image recognition) to find a banana and show it to you.

As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI—from the fear that AI will take over the world by enslaving humans, to the hope that AI can one day be used to cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI.

Definitions

Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
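The second example above, finding the largest number in an unordered set, can be written as exactly this kind of unambiguous, step-by-step procedure. A minimal sketch in Python:

```python
def find_largest(numbers):
    """Check each number once, keeping track of the largest seen so far."""
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_largest([7, 42, 3, 19]))  # prints 42
```

Every step is well-defined and finite, which is what makes this an algorithm rather than a vague instruction.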

Algorithmic decision-making/Algorithmic decision system (ADS): Algorithmic decision systems use data and statistical analyses to make automated decisions, such as determining whether people are eligible for a benefit or a penalty. Examples of fully automated algorithmic decision systems include the electronic passport control check-point at airports or an automated decision by a bank to grant a customer an unsecured loan based on the person’s credit history and data profile with the bank. Driver-assistance features that control a vehicle’s brake, throttle, steering, speed, and direction are an example of a semi-automated ADS.

Big Data: There are many definitions of “big data,” but we can generally think of it as extremely large data sets that, when analyzed, may reveal patterns, trends, and associations, including those relating to human behavior. Big Data is characterized by the five V’s: the volume, velocity, variety, veracity, and value of the data in question. This video provides a short introduction to big data and the concept of the five V’s.

Class label: A class label is the category a machine learning system assigns to its inputs after classifying them; for example, “spam” or “not spam” when sorting email.

Data mining: Data mining, also known as knowledge discovery in data, is the “process of analyzing dense volumes of data to find patterns, discover trends, and gain insight into how the data can be used.”

Generative AI[1]: Generative AI is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. See section on Generative AI for more details.

Label: A label is the thing a machine learning model is predicting, such as the future price of wheat, the kind of animal shown in a picture, or the meaning of an audio clip.

Large language model: A large language model (LLM) is “a type of artificial intelligence that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new content.” An LLM is a type of generative AI[2] that has been specifically architected to help generate text-based content.

Model: A model is the representation of what a machine learning system has learned from the training data.

Neural network: A biological neural network (BNN) is a system in the brain that makes it possible to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.

Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.

Robot: Robots are programmable, automated devices. Fully autonomous robots (e.g., self-driving vehicles) are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses and behaviors accordingly in order to perform complex tasks without human intervention.

Scoring: Scoring, also called prediction, is the process of a trained machine learning model generating values based on new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome. When used vis-a-vis people, scoring is a statistical prediction that determines whether an individual fits into a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.
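As a toy illustration of scoring (the feature names and weights below are invented for this sketch, not taken from any real credit-scoring model), a trained model applies its learned parameters to new input data to generate a value:

```python
# Invented weights standing in for parameters a model learned during training.
WEIGHTS = {"payment_history": 0.6, "credit_utilization": -0.3, "account_age": 0.1}
BIAS = 0.5

def score(features):
    """Generate a score for new input data by applying the learned weights."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

# New, previously unseen input: features describing one applicant.
applicant = {"payment_history": 0.9, "credit_utilization": 0.4, "account_age": 0.7}
print(round(score(applicant), 2))  # prints 0.99
```

The training process determines the weights; scoring is simply applying them to each new case.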

Supervised learning: In supervised learning, ML systems are trained on well-labeled data. Using labeled inputs and outputs, the model can measure its accuracy and learn over time.

Unsupervised learning: Unsupervised learning uses machine learning algorithms to find patterns in unlabeled datasets without the need for human intervention.

Training: In machine learning, training is the process of determining the ideal parameters comprising a model.


How do artificial intelligence and machine learning work?

Artificial Intelligence

Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic, and economics to “understand, model, and replicate intelligence and cognitive processes.”

AI applications exist in every domain and industry, and across different aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:

  • Narrow AI or Artificial Narrow Intelligence (ANI) is a system that performs a single, specific task, like recognizing images, playing Go, or answering the questions posed to Alexa or Siri.
  • Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
  • Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications presently exist only in the “Narrow AI” category. Artificial general intelligence and artificial superintelligence have not yet been achieved and likely will not be for years or decades to come.

Machine Learning

Machine learning is an application of artificial intelligence. Although we often find the two terms used interchangeably, machine learning is a process by which an AI application is developed. The machine learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses the pattern or correlation to make predictions. Most of the AI in use today is driven by machine learning.

Just as it is useful to break up AI into three categories, machine learning can also be thought of as three different techniques: supervised learning, unsupervised learning, and deep learning.

Supervised Learning

Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a data set containing training examples with associated labels. Take the example of a spam-filtering system that is being trained using spam and non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email or IP address of the sender. Using this correlation, the system tries to predict the correct label (spam/not spam) to apply to all the future emails it processes.

“Spam” and “not spam” in this instance are called “class labels.” The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labeled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about—in this case, it is the “spaminess” of an email. The “correct answer,” so to speak, in the categorization of email is called the “desired outcome” or “outcome of interest.”
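The workflow described above can be sketched in a few lines of Python. This is a deliberately simplified word-counting classifier with an invented four-message training set; real spam filters use far richer features and statistical models, but the supervised-learning shape (labeled training data, a learned model, predictions on new inputs) is the same:

```python
from collections import Counter

# Training data: messages humans have already marked with class labels.
training_data = [
    ("win money now", "spam"),
    ("free prize win", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow at noon", "not spam"),
]

# "Training": count how often each word appears under each class label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def predict(text):
    """Score a new message against each class and return the better match."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("win a free prize"))          # prints spam
print(predict("agenda for lunch meeting"))  # prints not spam
```

Here `word_counts` plays the role of the model, and the class labels “spam” and “not spam” are the system’s possible outputs.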

Unsupervised Learning

Unsupervised learning involves neural networks finding a relationship or pattern without access to previously labeled datasets of input-output pairs. The neural networks organize and group the data on their own, finding recurring patterns and detecting deviations from these patterns. These systems tend to be less predictable than those that use labeled datasets, and are most often deployed in environments that may change at some frequency and are unstructured or partially structured. Examples include:

  1. An optical character-recognition system that can “read” handwritten text, even if it has never encountered the handwriting before.
  2. The recommended products a user sees on retail websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference, and the prices of their previous purchases.
  3. The detection of fraudulent monetary transactions based on timing and location. For instance, if two consecutive transactions happened on the same credit card within a short span of time in two different cities.
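The grouping these systems perform can be illustrated with a minimal one-dimensional clustering sketch (a simplified two-group k-means over invented transaction amounts; real systems cluster many features at once and handle edge cases this sketch ignores, such as a group becoming empty):

```python
def cluster_two_groups(values, iterations=10):
    """Split unlabeled values into two groups by repeatedly re-centering."""
    # Start with the extremes as provisional group centers.
    centers = [min(values), max(values)]
    for _ in range(iterations):
        groups = [[], []]
        for v in values:
            # Assign each value to the nearer center.
            nearest = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[nearest].append(v)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) for g in groups]
    return groups

amounts = [5, 6, 7, 8, 95, 100, 110]
print(cluster_two_groups(amounts))  # prints [[5, 6, 7, 8], [95, 100, 110]]
```

No labels are supplied anywhere: the algorithm discovers the two natural groups (routine versus unusually large amounts) on its own, which is the defining feature of unsupervised learning.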

A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small dataset with labels is available to train the neural network to act upon a larger, unlabeled dataset. An example of semi-supervised learning is software that creates deepfakes, or digitally altered audio, videos, or images.

Deep Learning

Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical-image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine, and sort huge amounts of data, which theoretically enables them to identify new solutions to existing problems.
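At its core, such a network is many layers of weighted sums passed through simple nonlinear functions. The sketch below runs one input through two tiny layers; the weights here are invented for illustration, whereas a real deep network has millions of them and learns their values from data during training:

```python
def relu(values):
    """A common nonlinearity: negative values become zero."""
    return [max(0.0, v) for v in values]

def dense_layer(inputs, weights, biases):
    """One layer: each output is a weighted sum of every input, plus a bias."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

# Invented parameters; training would learn these from data.
layer1_weights, layer1_biases = [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0]
layer2_weights, layer2_biases = [[1.0, -1.0]], [0.05]

inputs = [2.0, 1.0]
hidden = relu(dense_layer(inputs, layer1_weights, layer1_biases))
output = dense_layer(hidden, layer2_weights, layer2_biases)
print(round(output[0], 2))  # prints -0.45
```

Stacking many such layers (hence “deep”) is what lets these networks represent the intricate patterns described above.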

Generative AI

Generative AI[3] is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. The launch of OpenAI’s chatbot, ChatGPT, in late 2022 placed a spotlight on generative AI and created a race among companies to churn out alternate (and ideally superior) versions of this technology. Excitement over large language models and other forms of generative AI was also accompanied by concerns about accuracy, bias within these tools, data privacy, and how these tools can be used to spread disinformation more efficiently.

Although there are other types of machine learning, these three—supervised learning, unsupervised learning, and deep learning—represent the basic techniques used to create and train AI systems.

Bias in AI and ML

Artificial intelligence is built by humans, and trained on data generated by them. Inevitably, there is a risk that individual and societal human biases will be inherited by AI systems.

There are three common types of biases in computing systems:

  • Pre-existing bias has its roots in social institutions, practices, and attitudes.
  • Technical bias arises from technical constraints or considerations.
  • Emergent bias arises in a context of use.

Bias in artificial intelligence may affect, for example, the political advertisements one sees on the internet, the content pushed to the top of social media news feeds, the cost of an insurance premium, the results of a recruitment screening process, or the ability to pass through border-control checks in another country.

Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate can get compounded or magnified and greatly affect the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.

Bias in AI/ML systems can result in discriminatory practices, ultimately leading to the exacerbation of existing inequalities or the generation of new ones. For more information, see this explainer related to AI bias and the Risks section of this resource.


How are AI and ML relevant in civic space and for democracy?

Elephant tusks pictured in Uganda. In wildlife conservation, AI/ML algorithms and past data can be used to predict poacher attacks. Photo credit: NRCN.

The widespread proliferation, rapid deployment, scale, complexity, and impact of AI on society is a topic of great interest and concern for governments, civil society, NGOs, human rights bodies, businesses, and the general public alike. AI systems may require varying degrees of human interaction or none at all. When applied in design, operation, and delivery of services, AI/ML offers the potential to provide new services and improve the speed, targeting, precision, efficiency, consistency, quality, or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships, and patterns, and offering new solutions. By analyzing large amounts of data, ML systems save time, money, and effort. Some examples of the application of AI/ML in different domains include using AI/ML algorithms and past data in wildlife conservation to predict poacher attacks, and discovering new species of viruses.

Tuberculosis microscopy diagnosis in Uzbekistan. AI/ML systems aid healthcare professionals in medical diagnosis and the detection of diseases. Photo credit: USAID.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering, and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, and security, as well as in safety, crime prevention, policing, law enforcement, urban management, and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels, and waste in order to monitor and manage these issues and identify patterns in their generation, consumption, and handling.

Digital maps created in Mugumu, Tanzania. Artificial intelligence can support planning of infrastructure development and preparation for disaster. Photo credit: Bobby Neptune for DAI.

AI is also used in climate monitoring, weather forecasting, the prediction of disasters and hazards, and the planning of infrastructure development. In healthcare, AI systems aid professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, facial recognition systems, drones, and predictive policing for the safety and security of the citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, privacy, security, mass surveillance, social inequality, and negative impacts on democracy (see the Risks section).

Fish caught off the coast of Kema, North Sulawesi, Indonesia. Facial recognition is used to identify species of fish to contribute to sustainable fishing practices. Photo credit: courtesy of USAID SNAPPER.

AI and ML have both positive and negative implications for public policy and elections, as well as democracy more broadly. While data may be used to maximize the effectiveness of a campaign through targeted messaging to help persuade prospective voters, it may also be used to deliver propaganda or misinformation to vulnerable audiences. During the 2016 U.S. presidential election, for example, Cambridge Analytica used big data and machine learning to tailor messages to voters based on predictions about their susceptibility to different arguments.

During elections in the United Kingdom and France in 2017, political bots were used to spread misinformation on social media and leak private campaign emails. These autonomous bots are “programmed to aggressively spread one-sided political messages to manufacture the illusion of public support” or even dissuade certain populations from voting. AI-enabled deepfakes (audio or video that has been fabricated or altered) also contribute to the spread of confusion and falsehoods about political candidates and other relevant actors. Though artificial intelligence can be used to exacerbate and amplify disinformation, it can also be applied in potential solutions to the challenge. See the Case Studies section of this resource for examples of how the fact-checking industry is leveraging artificial intelligence to more effectively identify and debunk false and misleading narratives.

Cyber attackers seeking to disrupt election processes use machine learning to effectively target victims and develop strategies for defeating cyber defenses. Although these tactics can be used to prevent cyber attacks, the level of investment in artificial intelligence technologies by malign actors in many cases exceeds that of legitimate governments or other official entities. Some of these actors also use AI-powered digital surveillance tools to track down and target opposition figures, human rights defenders, and other perceived critics.

As discussed elsewhere in this resource, “the potential of automated decision-making systems to reinforce bias and discrimination also impacts the right to equality and participation in public life.” Bias within AI systems can harm historically underrepresented communities and exacerbate existing gender divides and the online harms experienced by women candidates, politicians, activists, and journalists.

AI-driven solutions can help improve the transparency and legitimacy of campaign strategies, for example, by leveraging political bots for good to help identify articles that contain misinformation or by providing a tool for collecting and analyzing the concerns of voters. Artificial intelligence can also be used to make redistricting less partisan (though in some cases it also facilitates partisan gerrymandering) and prevent or detect fraud or significant administrative errors. Machine learning can inform advocacy by predicting which pieces of legislation will be approved based on algorithmic assessments of the text of the legislation, how many sponsors or supporters it has, and even the time of year it is introduced.

The full impact of the deployment of AI systems on the individual, society, and democracy is not known or knowable, which creates many legal, social, regulatory, technical, and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The European Union’s (EU) General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a whitepaper on AI in February 2020 as a prequel to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human rights impacts of algorithmic systems. Similarly, Germany, France, Japan, and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”


Opportunities

Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights, and good governance. Read below to learn how to think more effectively and safely about artificial intelligence and machine learning in your work.

Detect and overcome bias

Although artificial intelligence can reproduce human biases, as discussed above, it can also be used to combat unconscious biases in contexts like job recruitment. Responsibly designed algorithms can bring hidden biases into view and, in some cases, nudge people into less-biased outcomes; for example, by masking candidates’ names, ages, and other bias-triggering features on a resume.

Improve security and safety

AI systems can be used to detect attacks on public infrastructure, such as a cyber attack or credit card fraud. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud quickly, or even prevent it before it occurs. Machine learning can help identify unusual and fast-changing patterns of fraud that evade the traditional, rule-based strategies used to detect it.
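A much-simplified stand-in for this kind of flagging marks transactions that deviate sharply from a customer’s usual spending. The data and threshold below are invented; real fraud-detection models learn far subtler, multi-feature patterns:

```python
import statistics

def flag_unusual(amounts, threshold):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    spread = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / spread > threshold]

# Eight routine purchases followed by one sharp outlier.
history = [20, 22, 19, 21, 23, 20, 22, 21, 950]
print(flag_unusual(history, threshold=2.0))  # prints [950]
```

A learned system would replace the hand-chosen threshold and single feature with patterns discovered from millions of transactions.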

Moderate harmful online content

Enormous quantities of content are uploaded every second to the internet and social media. There are simply too many videos, photos, and posts for humans to manually review. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen for content that violates their terms of service (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. More recently, the arrival of deepfakes and other computer-generated content has required similarly advanced identification tactics. Fact-checkers and other actors working to defuse the dangerous, misleading power of deepfakes are developing their own artificial intelligence to identify these media as false.

Web Search

Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the vast stretches of the internet. Search engines on the web (like Google and Bing) or within platforms and websites (like searches within Wikipedia or The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor higher-quality results that may be beneficial to society. For example, Google has an initiative to highlight original reporting, which prioritizes the first instance of a news story rather than sources that republish the information.

Translation

Machine learning has allowed for remarkable advances in translation. For example, DeepL is a small machine-translation company that has surpassed even the translation abilities of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.


Risks

The use of emerging technologies like AI can also create risks for democracy and in civil society programming. Read below to learn how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended—and intended—consequences.

Discrimination against marginalized groups

There are several ways in which AI may make decisions that can lead to discrimination, including how the “target variable” and the “class labels” are defined; during the process of labeling the training data; when collecting the training data; during the feature selection; and when proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial recognition systems trained on racially biased data sets discriminate against people with dark skin, women, and gender-diverse people.

The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the size, the more accurate the system’s decisions are likely to be. However, women, Black people and people of color (PoC), disabled people, indigenous people, LGBTQ+ people, and other minority groups are less likely to be represented in a dataset because of structural discrimination, group size, or external attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to determine why AI makes certain decisions about some individuals or groups of people, or conclusively prove it has made a discriminatory decision. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status, or other protected characteristics.

For instance, AI systems used in predictive policing, crime prevention, law enforcement, and the criminal justice system are, in a sense, tools for risk-assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized, or minority groups.

A study by the Royal Statistical Society notes that the “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation or software for credit-scoring, banking, insurance, healthcare, and the selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.

The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving vulnerable groups such as refugees, or about life or death circumstances, such as in medical care. A 2018 report by the University of Toronto’s Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.

Security vulnerabilities

Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems such as internet of things (IoT) devices and self-driving cars.

If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids, nuclear installations, healthcare facilities, and banking systems, among others, they “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”

Privacy and data protection

The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection. Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks. Criminals, illiberal governments, and people with malicious intent often target these data for economic or political gain. For instance, health data captured from smartphone applications and internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, etc. The issue is not only leaks, but also the data that people willingly hand over without control over how it will be used down the road. This includes what we share with both companies and government agencies. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.

Chilling effect

AI systems used for surveillance, policing, criminal sentencing, and other legal purposes can become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination, and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Many people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.

Opacity (Black box nature of AI systems)

Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, behind-the-scenes processing, and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal or judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike decisions made by judges, who are required to justify their legal order or judgment.

Technological unemployment

Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.

Loss of individual autonomy and personhood

Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect wellbeing, physical integrity, and quality of life. This affects what constitutes an individual’s consent (or lack thereof); the way consent is formed, communicated and understood; and the context in which it is valid. “[T]he dilution of the free basis of our individual consent—either through outright information distortion or even just the absence of transparency—imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation”. – Human Rights in the Era of Automation and Artificial Intelligence


Questions

If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:

  1. Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
  2. Who is designing and overseeing the technology? Can they explain what is happening at different steps of the process?
  3. What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
  4. What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
  5. Are you confident the technology will work as intended when used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
  6. Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
  7. What measures do you have in place to identify and address potentially harmful biases in the technology?
  8. What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has been unfair to them or abused them in any way?
  9. Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
  10. Are you certain that the technology abides with relevant regulations and legal standards, including the GDPR?
  11. Is there a way that this technology may not discriminate against people by itself, but that it may lead to discrimination or other rights violations, for instance when it is deployed in different contexts or if it is shared with untrained actors? What can you do to prevent this?


Case Studies

Leveraging artificial intelligence to promote information integrity

The United Nations Development Programme’s eMonitor+ is an AI-powered platform that helps “scan online media posts to identify electoral violations, misinformation, hate speech, political polarization and pluralism, and online violence against women.” Data analysis facilitated by eMonitor+ enables election commissions and media stakeholders to “observe the prevalence, nature, and impact of online violence.” The platform relies on machine learning to track and analyze content on digital media to generate graphical representations for data visualization. eMonitor+ has been used by Peru’s Asociación Civil Transparencia and Ama Llulla to map and analyze digital violence and hate speech in political dialogue, and by the Supervisory Election Commission during the 2022 Lebanese parliamentary election to monitor potential electoral violations, campaign spending, and misinformation. The High National Election Commission of Libya has also used eMonitor+ to monitor and identify online violence against women in elections.

“How Nigeria’s fact-checkers are using AI to counter election misinformation”

Ahead of Nigeria’s 2023 presidential election, the UK-based fact-checking organization Full Fact “offered its artificial intelligence suite—consisting of three tools that work in unison to automate lengthy fact-checking processes—to greatly expand fact-checking capacity in Nigeria.” According to Full Fact, these tools are not intended to replace human fact-checkers but rather to assist with time-consuming, manual monitoring and review, leaving fact-checkers “more time to do the things they’re best at: understanding what’s important in public debate, interrogating claims, reviewing data, speaking with experts and sharing their findings.” The scalable tools, which include search, alerts, and live functions, allow fact-checkers to “monitor news websites, social media pages, and transcribe live TV or radio to find claims to fact check.”

Monitoring crop development: AgroScout

The growing impact of climate change could further cut crop yields, especially in the world’s most food-insecure regions, while food systems are responsible for about 30% of greenhouse gas emissions. Israeli startup AgroScout envisions a world where food is grown in a more sustainable way. “Our platform uses AI to monitor crop development in real-time, to more accurately plan processing and manufacturing operations across regions, crops and growers,” said Simcha Shore, founder and CEO of AgroScout. “By utilizing AI technology, AgroScout detects pests and diseases early, allowing farmers to apply precise treatments that reduce agrochemical use by up to 85%. This innovation helps minimize the environmental damage caused by traditional agrochemicals, making a positive contribution towards sustainable agriculture practices.”

Machine Learning for Peace

The Machine Learning for Peace Project seeks to understand how civic space is changing in countries around the world using state-of-the-art machine learning techniques. By leveraging the latest innovations in natural language processing, the project classifies “an enormous corpus of digital news into 19 types of civic space ‘events’ and 22 types of Resurgent Authoritarian Influence (RAI) events which capture the efforts of authoritarian regimes to wield influence on developing countries.” Among the civic space “events” being tracked are activism, coups, election activities, legal changes, and protests. The civic space event data is combined with “high frequency economic data to identify key drivers of civic space and forecast shifts in the coming months.” Ultimately, the project hopes to serve as a “useful tool for researchers seeking rich, high-frequency data on political regimes and for policymakers and activists fighting to defend democracy around the world.”

Food security: Detecting diseases in crops using image analysis

“Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops.” As a first step toward supplementing existing solutions for disease diagnosis with a smartphone-assisted diagnosis system, researchers used a public dataset of 54,306 images of diseased and healthy plant leaves to train a “deep convolutional neural network” to automatically identify 14 different crop species and 26 unique diseases (or the absence of those diseases).
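
The underlying training setup can be illustrated schematically. The researchers’ actual model was a deep convolutional neural network trained on the full image dataset; the sketch below substitutes tiny, made-up colour features and a nearest-centroid rule, purely to show the supervised-learning pattern of training on labelled examples and predicting labels for new ones.

```python
# Toy sketch of supervised image classification. Features here are
# invented (average green/brown colour per leaf photo); the real study
# used a deep convolutional neural network on raw images.

def train(examples):
    """Average the feature vectors of each label into a centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(model, features):
    """Assign the label whose centroid is closest to the features."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, features))
    return min(model, key=lambda lab: dist(model[lab]))

# (green, brown) colour averages: healthy leaves green, blighted brown.
training = [((0.9, 0.1), "healthy"), ((0.8, 0.2), "healthy"),
            ((0.2, 0.7), "late_blight"), ((0.3, 0.8), "late_blight")]
model = train(training)
print(predict(model, (0.85, 0.15)))  # healthy
print(predict(model, (0.25, 0.75)))  # late_blight
```

A convolutional network replaces these hand-picked colour features with features it learns directly from pixels, which is what lets it distinguish 26 diseases across 14 crop species rather than two toy classes.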


Automation

What is automation?

A worker at the assembly line of a car-wiring factory in Bizerte, Tunisia. The automation of labor disproportionately affects women, the poor, and other vulnerable members of society. Photo credit: Alison Wright for USAID, Tunisia, Africa

Automation involves techniques and methods applied to enable machines, devices, and systems to function with minimal or no human involvement. Automation is used, for example, in applications for managing the operation of traffic lights in a city, navigating aircraft, running and configuring different elements of a telecommunications network, in robot-assisted surgeries, and even for automated storytelling (which uses artificial intelligence software to create verbal stories). Automation can improve efficiency and reduce error, but it also creates new opportunities for error and introduces new costs and challenges for government and society.

How does automation work?

Processes can be automated by programming certain procedures to be performed without human intervention (like a recurring payment for a credit card or phone app) or by linking electronic devices to communicate directly with one another (like self-driving vehicles communicating with other vehicles and with road infrastructure). Automation can involve the use of temperature sensors, light sensors, alarms, microcontrollers, robots, and more. Home automation, for example, may include home assistants such as Amazon Echo, Google Home, and OpenHAB. Some automation systems are virtual, for example, email filters that automatically sort incoming emails into different folders, and AI-enabled moderation systems for online content.

The exact architecture and functioning of automation systems depend on their purpose and application. However, automation should not be confused with artificial intelligence, in which an algorithm-led process ‘learns’ and changes over time: for instance, an algorithm that reviews thousands of job applications and learns from patterns in those applications is using artificial intelligence, while a chatbot that replies to candidates’ questions is using automation.
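
The fixed-rule character of automation can be sketched as a minimal email filter of the kind mentioned above: every behavior is a hand-written rule, and nothing is learned from data. The rules and folder names below are hypothetical, not drawn from any real email product.

```python
# Minimal sketch of rule-based automation: an email filter that sorts
# messages into folders using fixed, hand-written rules (no learning).
# Rules, addresses, and folder names are illustrative only.

RULES = [
    (lambda m: "unsubscribe" in m["body"].lower(), "Promotions"),
    (lambda m: m["sender"].endswith("@billing.example.com"), "Receipts"),
]

def sort_email(message):
    """Return the folder for the first matching rule, else the inbox."""
    for matches, folder in RULES:
        if matches(message):
            return folder
    return "Inbox"

msg = {"sender": "noreply@billing.example.com", "body": "Your invoice"}
print(sort_email(msg))  # Receipts
```

A machine-learning filter would instead infer its sorting behavior from examples of how a user previously filed mail; the automation above will never change unless a human edits the rules.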

For more information on the different components of automation systems, read also the resources about the Internet of Things and sensors, robots and drones, and biometrics.


How is automation relevant in civic space and for democracy?

Automated processes can be built to increase transparency, accuracy, efficiency, and scale. They can help minimize effort (labor) and time; reduce errors and costs; improve the quality and/or precision in tasks/processes; carry out tasks that are too strenuous, hazardous, or beyond the physical capabilities of humans; and generally free humans of repetitive, monotonous tasks.

From a historical perspective, automation is not new: the first industrial revolution in the 1700s harnessed the power of steam and water; the technological revolution of the 1880s relied on railways and telegraphs; and the digital revolution in the 20th century saw the beginning of computing. Each of these transitions brought fundamental changes not only to industrial production and the economy, but to society, government, and international relations.

Now, the fourth industrial revolution, or the ‘automation revolution’ as it is sometimes called, promises to once again disrupt work as we know it as well as relationships between people, machines, and programmed processes.

When used by governments, automated processes promise to deliver government services with greater speed, efficiency, and coverage. These developments are often called e-government, e-governance, or digital government. E-government includes government communication and information sharing on the web (sometimes even the publishing of government budgets and agendas), facilitation of financial transactions online such as electronic filing of tax returns, digitization of health records, electronic voting, and digital IDs.

Additionally, automation can be used in elections to help count votes, register voters, and record voter turnout to increase trust in the integrity of the democratic process. Without automation, counting votes can take weeks or months and can lead to results being challenged by anti-democratic forces and to possible voter disenchantment with democratic systems. E-voting and automated vote counting have already become politicized in many countries like Kazakhstan and Pakistan, although many countries are increasingly adopting e-voting systems to help increase voter turnout and participation and hasten the election process.

A health worker receives information on a disease outbreak in Brewerville, Liberia. Automated processes promise to deliver government services with greater speed, efficiency, and coverage. Photo credit: Sarah Grile.

The benefits of automating government services are numerous, as the UK’s K4D helpdesk explains: lowering the cost of service delivery; improving quality and coverage (for example, through telemedicine or drones); strengthening communication, monitoring, and feedback; and, in some cases, encouraging citizen participation at the local level. In Indonesia, for example, the Civil Service Agency (BKN) introduced a computer-assisted testing system (CAT) to disrupt the previously long-standing manual testing system that created rampant opportunities for corruption in civil service recruitment by line ministry officials. With the new system, the database of questions is tightly controlled, and the results are posted in real time outside the testing center.

In India, an automated system relying on a specially designed microcontroller (an Advanced Virtual RISC, or AVR) and the common telecommunications standard GSM (Global System for Mobile) is used to inform farmers about exact field conditions and to point to the necessary next steps, with command functions such as irrigating, plowing, deploying seeds, and carrying out other farming activities.

Drone used for irrigation scheduling in the southern part of Bangladesh. Automated systems have vast applications in agriculture. Photo credit: Alanuzzaman Kurishi.

As with previous industrial revolutions, automation changes the nature of work, and these changes could bring unemployment in certain sectors if not properly planned. The removal of humans from processes also brings new opportunities for error (such as ‘automation bias’) and raises new legal and ethical questions. See the Risks section below.


Opportunities

Islamabad Electric Supply Company’s (IESCO) Power Distribution Control Center (PDC), Pakistan. Smart meters enable monitoring of power demand, supply, and load shedding in real-time. Photo credit: USAID.

Automation can have positive impacts when used to advance democracy, human rights, and good governance. Read below to learn how to think about automation in your work more effectively and safely.

Increase in productivity

Automation may improve output while reducing the time and labor required, thus increasing the productivity of workers and the demand for other kinds of work. For example, automation can streamline document review, cutting down the time that lawyers need to search through documents, or academics through sources. In Azerbaijan, the government partnered with the private sector to use an automated system to reduce the backlog of relatively simple court cases, such as claims for unpaid bills. Where automation increases the quality of goods or services and/or brings down their cost, greater demand for those goods or services can be met.

Improvements in processes and outputs

Automation can improve the speed, efficiency, quality, consistency, and coverage of service delivery and reduce human error, time spent, and costs. It can therefore allow activities to scale up. For example, the UNDP and the government of the Maldives used automation to create 3-D maps of the islands and chart their topography. Having this information on record speeds up future disaster relief and rescue efforts. The use of drones also reduced the time and money required to conduct this exercise: while mapping 11 islands would normally take almost a year, using a drone reduced the time to one day. See the Robots and Drones resource for additional examples.

Optimizing an automated task generally requires trade-offs among cost, precision, the permissible margin of error, and scale. Automation may sometimes require tolerating more errors in order to reduce costs or achieve greater scale. For more, see the section “Knowing when automation offers a suitable solution to the challenge at hand” in Automation of government processes.

For democratic processes, automation can help facilitate access for voters who cannot travel to polling stations via remote e-voting or using accessible systems at polling stations. Moreover, using automation for counting votes can help decrease user error in some cases and increase trust in the democratic process.

Increase transparency

Automation may increase transparency by making data and information easily available to the public, thus building public trust and aiding accountability. In India, the State Transport Department of Karnataka has automated driving test centers in hopes of eliminating bribery in the issuing of driver’s licenses. A host of high-definition cameras and sensors placed along the test track capture the movement of the vehicle, while a computerized system decides whether the driver has passed or failed the test. See also “Are emerging technologies helping win the fight against corruption in developing countries?”


Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with automation in DRG work, as well as how to mitigate unintended – and intended – consequences.

Labor issues

When automation is used to replace human labor, the resulting loss of jobs causes structural unemployment known as “technological unemployment.” Structural unemployment disproportionately affects women, the poor, and other vulnerable members of society, unless they are re-skilled and provided with adequate protections. Automation also requires skilled labor that can operate, oversee or maintain automated systems, eventually creating jobs for a smaller section of the population. But the immediate impact of this transformation of work can be harmful to people and communities without social safety nets or opportunities for finding other work.

Additionally, links have been drawn between increased automation and a rise in support for populist politicians as job losses begin to affect low-wage workers in particular. A study published in the Proceedings of the National Academy of Sciences (PNAS) found a correlation between the impact of globalization and automation and increased vote shares for right-wing populist parties in several European countries. Although automation can have a positive impact on overall profits, low-wage, less-educated workers may feel particularly affected as wages remain low and their tasks are replaced by automated systems.

Discrimination towards marginalized groups and minorities and increasing social inequality

Automation systems equipped with artificial intelligence (AI) may produce results that are discriminatory towards some marginalized and minority groups when the system has learned from biased learning patterns, from biased datasets, or from biased human decision-making. The outputs of AI-equipped automated systems may reflect real-life societal biases, prejudices, and discriminatory treatment towards some demographics. Biases can also occur from the human implementation of automated systems, for instance, when the systems do not function in the real world as they were able to function in a lab or theoretical setting, or when the humans working with the machines misinterpret or misuse the automated technology.

There are numerous examples of racial and other types of discrimination being either replicated or magnified by automation. To take an example from the field of predictive policing, ProPublica reported in 2016, after conducting an investigation, that COMPAS, a data-driven AI tool meant to assist judges in the United States, was biased against Black people when predicting whether a convicted offender would commit more crimes in the future. For more on predictive policing, see “How to Fight Bias with Predictive Policing” and “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.”

These risks exist in other domains as well. The University of Toronto and Citizen Lab report titled “Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system” notes that “[m]any [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims is lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Insufficient Legal Protections

Existing laws and regulations may not be applicable to automation systems and, in cases where they are, the application may not be well defined. Not all countries have laws that protect individuals against these dangers. Under the GDPR (the European General Data Protection Regulation), individuals have the right not to be subject to a decision based only on automated processing, including profiling. In other words, humans must oversee important decisions that affect individuals. But not all countries have or respect such regulations, and even the GDPR is not upheld in all situations. Meanwhile, individuals would have to actively claim their rights and contest these decisions, usually by seeking legal assistance, which is beyond the means of many. Groups at the receiving end of such discrimination tend to have fewer resources and limited access to human rights protections to contest such decisions.

Automation Bias

People tend to have faith in automation and tend to believe that technology is accurate, neutral, and non-discriminating. This can be described as “automation bias”: when humans working with or overseeing automated systems tend to give up responsibility to the machine and trust the machine’s decision-making uncritically. Automation bias has been shown to have harmful impacts across automated sectors, including leading to errors in healthcare. Automation bias also plays a role in the discrimination described above.

Uncharted ethical concerns

The ever-increasing use of automation brings ethical questions and concerns that may not have been considered before the arrival of the technology itself. For example, who is responsible if a self-driving car gets into an accident? How much personal information should be given to health-service providers to facilitate automated health monitoring? In many cases, further research is needed to even begin to address these dilemmas.

Issues related to individual consent

When automated systems make decisions that affect people’s lives, they blur the formation, context, and expression of an individual’s consent (or lack thereof) as described in this quote: “…[T]he dilution of the free basis of our individual consent – either through outright information distortion or even just the absence of transparency – imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation.” See additional information about informed consent in the Data Protection resource.

High capital costs

Large-scale automation technologies involve very high capital costs, which poses a risk if the technology becomes unviable in the long term or does not otherwise guarantee commensurate returns or recovery of costs. Hence, automation projects funded with public money (for example, some “smart city” infrastructure) require thorough feasibility studies for assessing needs and ensuring long-term viability. Initial costs may also be very high for individuals and communities: an automated solar-power installation or a rainwater-harvesting system is a large investment for a community. However, depending on the tariffs for grid power or water, the expenditure may be recovered in the long run.


Questions

If you are trying to understand the implications of automation in your work environment, or are considering using aspects of automation as part of your DRG programming, ask yourself these questions:

  1. Is automation a suitable method for the problem you are trying to solve?
  2. What are the indicators or guiding factors that determine if automation is a suitable and required solution to a particular problem or challenge?
  3. What risks are involved regarding security, the potential for discrimination, etc? How will you minimize these risks? Do the benefits of using automation or automated technology outweigh these risks?
  4. Who will work with and oversee these technologies? What is their training and what are their responsibilities? Who is liable legally in case of an accident?
  5. What are the long-term effects of using these technologies in the surrounding environment or community? What are the effects on individuals, jobs, salaries, social welfare, etc.? What measures are necessary to ensure that the use of these technologies does not aggravate or reinforce inequality through automation bias or otherwise?
  6. How will you ensure that humans are overseeing any important decisions made about individuals using automated processes? (How will you abide by the GDPR or other applicable regulations?)
  7. What privacy and security safeguards are necessary for applying these technologies in a given context regarding, for example, cybersecurity, protection of personal privacy, and protecting operators from accidents? How will you build in these safeguards?


Case studies

Automated Farming Vehicles

“Forecasts of world population increases in the coming decades demand new production processes that are more efficient, safer, and less destructive to the environment. Industries are working to fulfill this mission by developing the smart factory concept. The agriculture world should follow industry leadership and develop approaches to implement the smart farm concept. One of the most vital elements that must be configured to meet the requirements of the new smart farms is the unmanned ground vehicles (UGV).”

Automated Voting Systems in Estonia

Since 2005, Estonia has allowed e-voting, wherein citizens are able to cast their ballots online. In each succeeding election, voters have increasingly chosen to cast online ballots to save time and participate in local and national elections with ease. Voters use digital IDs to verify their identity and prevent fraud, and ballots cast online are automatically cross-referenced with voter lists to ensure there is no duplication or voter fraud.
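
The cross-referencing step can be sketched as a simple deduplication over voter IDs. The rule assumed here, that only the most recent ballot per verified ID counts, is a simplification for illustration; production e-voting systems layer cryptographic verification and audit trails on top of any such logic.

```python
# Simplified sketch of duplicate elimination in online ballots: each
# ballot is tied to a verified voter ID, and only one ballot per ID is
# kept (here, the most recent). Illustrative only; real systems add
# cryptographic verification on top of this bookkeeping.

def deduplicate(ballots):
    """ballots: list of (voter_id, timestamp, choice), in any order."""
    latest = {}
    for voter_id, ts, choice in ballots:
        if voter_id not in latest or ts > latest[voter_id][0]:
            latest[voter_id] = (ts, choice)
    return {vid: choice for vid, (ts, choice) in latest.items()}

cast = [("EE1001", 1, "Party A"),
        ("EE1002", 2, "Party B"),
        ("EE1001", 3, "Party B")]   # same voter votes again; latest counts
print(deduplicate(cast))  # {'EE1001': 'Party B', 'EE1002': 'Party B'}
```

Keying every ballot to a verified identity is what makes the duplicate check possible, which is also why identity verification (via digital IDs) sits at the center of Estonia's design.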

Automated Mining in South Africa

“Spiraling labour and energy costs are putting pressure on the financial performance of gold mines in South Africa, but the solution could be found in adopting digital technologies. By implementing automation operators can remove underground workers from harm’s way, and that is going to become an ever-bigger imperative if gold miners are to remain investable by international capital. This increased emphasis for the safety of the workforce and mines is motivating the development of the mining automation market. Earlier, old-style techniques of exploration and drilling compromised the security of mine labour force. Such examples have forced operators to develop smart resolutions and tools to confirm security of workers.”

Automating Processing of Uncontested Civil Cases to Reduce Court Backlogs in Azerbaijan, Case Study 14

“In Azerbaijan, the government developed a new approach to dealing with their own backlog of cases, one which addressed both supply side and demand side elements. Recognizing that much of the backlog stemmed from relatively simple civil cases, such as claims for unpaid bills, the government partnered with the private sector in the use of an automated system to streamline the handling of uncontested cases, thus freeing up judges’ time for more important cases.”

Reforming Civil Service Recruitment through Computerized Examinations in Indonesia, Case Study 6

“In Indonesia, the Civil Service Agency (BKN) succeeded in introducing a computer-assisted testing system (CAT) to disrupt the previously long-standing manual testing system that created rampant opportunities for corruption in civil service recruitment by line ministry officials. Now the database of questions is tightly controlled, and the results are posted in real time outside the testing center. Since its launch in 2013, CAT has become the de facto standard for more than 62 ministries and agencies.”

Real Time Automation of Indian Agriculture

“Real time automation of Indian agricultural system” using AVR (Advanced Virtual RISC) microcontroller and GSM (Global System for Mobile) is focused on making the agriculture process easier with the help of automation. The set up consists of processor which is an 8-bit microcontroller. GSM plays an important part by controlling the irrigation on field. GSM is used to send and receive the data collected by the sensors to the farmer. GSM acts as a connecting bridge between AVR microcontroller and farmer. Our study aims to implement the basic application of automation of the irrigation field by programming the components and building the necessary hardware. In our study different type of sensors like LM35, humidity sensor, soil moisture sensor, IR sensor used to find the exact field condition. GSM is used to inform the farmer about the exact field condition so that [they] can carry necessary steps. AT(Attention) commands are used to control the functions like irrigation, ploughing, deploying seeds and carrying out other farming activities.”
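
The decision logic of such a system can be sketched as threshold rules over sensor readings. The thresholds, sensor values, and command names below are illustrative assumptions, not taken from the cited study; in the real setup the chosen action would be relayed to the farmer over GSM.

```python
# Hedged sketch of sensor-threshold irrigation logic: compare a soil
# moisture reading against fixed limits and choose an action. All
# thresholds and command names are illustrative assumptions.

DRY_SOIL = 30   # percent moisture below which irrigation should start
WET_SOIL = 60   # percent moisture above which irrigation should stop

def irrigation_action(soil_moisture_pct, pump_on):
    """Decide what command, if any, to issue for the current reading."""
    if soil_moisture_pct < DRY_SOIL and not pump_on:
        return "START_IRRIGATION"
    if soil_moisture_pct > WET_SOIL and pump_on:
        return "STOP_IRRIGATION"
    return "NO_ACTION"

print(irrigation_action(22, pump_on=False))  # START_IRRIGATION
print(irrigation_action(75, pump_on=True))   # STOP_IRRIGATION
print(irrigation_action(45, pump_on=True))   # NO_ACTION
```

Using two thresholds rather than one (a hysteresis band) keeps the pump from rapidly toggling when the reading hovers near a single cutoff, a common pattern in control automation.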

E-voting terminated in Kazakhstan

A study published in May 2020 on the discontinuation of e-voting in Kazakhstan highlights some of the political challenges around e-voting. Kazakhstan used e-voting between 2004 and 2011 and was considered a leading example. See “Kazakhstan: Voter registration Case Study (2006)” produced by the Ace Project Electoral Knowledge Network. However, the country returned to a traditional paper ballot due to a lack of confidence from citizens and civil society in the government’s ability to ensure the integrity of e-voting procedures. See “Politicization of e-voting rejection: reflections from Kazakhstan,” by Maxat Kassen. It is important to note that Kazakhstan did not employ biometric voting, but rather electronic voting machines that operated via touch screens.


Data Protection

What is data protection?

Data protection refers to practices, measures, and laws that aim to prevent certain information about a person from being collected, used, or shared in a way that is harmful to that person.

Interview with fisherman in Bone South Sulawesi, Indonesia. Data collectors must receive training on how to avoid bias during the data collection process. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Data protection isn’t new. Bad actors have always sought to gain access to individuals’ private records. Before the digital era, data protection meant protecting individuals’ private data from someone physically accessing, viewing, or taking files and documents. Data protection laws have been in existence for more than 40 years.

Now that many aspects of peoples’ lives have moved online, private, personal, and identifiable information is regularly shared with all sorts of private and public entities. Data protection seeks to ensure that this information is collected, stored, and maintained responsibly and that unintended consequences of using data are minimized or mitigated.

What are data?

Data refer to digital information, such as text messages, videos, clicks, digital fingerprints, a bitcoin, search history, and even mere cursor movements. Data can be stored on computers and mobile devices, in clouds, and on external drives. They can be shared via email, messaging apps, and file transfer tools. Your posts, likes, and retweets, your videos about cats and protests, and everything you share on social media are data.

Metadata are a subset of data: information stored within a document or file that acts as an electronic fingerprint, describing the document or file itself. Take an email as an example. If you send an email to your friend, the text of the email is the data. The email itself, however, carries all sorts of metadata: who created it, who the recipient is, the IP address of the author, the size of the email, and so on.
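To make the distinction concrete, here is a minimal Python sketch using the standard library’s `email` module; the addresses and subject are invented for illustration:

```python
from email.message import EmailMessage

# Build a small example email: the body text is the data,
# while the headers that travel with it are metadata.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Meeting notes"
msg.set_content("See you at 10am.")  # the data itself

# Metadata: everything the message records about itself.
metadata = dict(msg.items())
print(metadata["From"])      # the author
print(len(msg.as_bytes()))   # even the size in bytes is metadata
```

Real email metadata is richer still: Received headers, IP addresses, and message IDs are added by every server a message passes through.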

Large amounts of data get combined and stored together. These large files containing thousands or millions of individual files are known as datasets. Datasets then get combined into very large datasets. These very large datasets, referred to as big data, are used to train machine-learning systems.

Personal Data and Personally Identifiable Information

Data can seem quite abstract, but the pieces of information are very often reflective of the identities or behaviors of actual persons. Not all data require protection, but some data, even metadata, can reveal a lot about a person. Such information is referred to as Personally Identifiable Information (PII), commonly called personal data. PII is information that can be used to distinguish or trace an individual’s identity, such as a name, passport number, or biometric data like fingerprints and facial patterns. PII also includes information that is linked or linkable to an individual, such as date of birth and religion.
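As a rough illustration, the sketch below redacts two kinds of directly identifying strings from free text. The patterns and the sample sentence are invented, and real PII detection requires far more care than a pair of regular expressions:

```python
import re

# Two illustrative patterns for direct identifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace each match with a [REDACTED:<kind>] placeholder."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

note = "Contact Jane at jane.doe@example.org or +254 700 123456."
print(redact_pii(note))
```

Removing direct identifiers like these is only a first step: combinations of innocuous-looking fields can still single out a person.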

Personal data can be collected, analyzed and shared for the benefit of the persons involved, but they can also be used for harmful purposes. Personal data are valuable for many public and private actors. For example, they are collected by social media platforms and sold to advertising companies. They are collected by governments to serve law-enforcement purposes like the prosecution of crimes. Politicians value personal data to target voters with certain political information. Personal data can be monetized by people for criminal purposes such as selling false identities.

“Sharing data is a regular practice that is becoming increasingly ubiquitous as society moves online. Sharing data does not only bring users benefits, but is often also necessary to fulfill administrative duties or engage with today’s society. But this is not without risk. Your personal information reveals a lot about you, your thoughts, and your life, which is why it needs to be protected.”

Access Now’s ‘Creating a Data Protection Framework’, November 2018.

How does data protection relate to the right to privacy?

The right to protection of personal data is closely interconnected to, but distinct from, the right to privacy. The understanding of what “privacy” means varies from one country to another based on history, culture, or philosophical influences. Data protection is not always considered a right in itself. Read more about the differences between privacy and data protection here.

Data privacy is also a common way of speaking about sensitive data and the importance of protecting it against unintentional sharing and undue or illegal gathering and use of data about an individual or group. USAID’s Digital Strategy for 2020–2024 defines data privacy as ‘the right of an individual or group to maintain control over and confidentiality of information about themselves’.

How does data protection work?

Participant of the USAID WeMUNIZE program in Nigeria. Data protection must be considered for existing datasets as well. Photo credit: KC Nwakalor for USAID / Digital Development Communications

Personal data can and should be protected by measures that shield a person’s identity and other information from harm and that respect their right to privacy. Examples of such measures include determining which data are vulnerable based on privacy-risk assessments; keeping sensitive data offline; limiting who has access to certain data; anonymizing sensitive data; and only collecting necessary data.

There are several established principles and practices to protect sensitive data. In many countries, these measures are enforced via laws that contain the key principles needed to guarantee data protection.

“Data Protection laws seek to protect people’s data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.”

Privacy International on data protection

Several important terms and principles are outlined below, based on the European Union’s General Data Protection Regulation (GDPR).

  • Data Subject: any person whose personal data are being processed, for example by being added to a contacts database or to a mailing list for promotional emails.
  • Processing: any operation performed on personal data, whether manual or automated.
  • Data Controller: the actor that determines the purposes for, and means by which, personal data are processed.
  • Data Processor: the actor that processes personal data on behalf of the controller, often a third party external to the controller, such as a provider of mailing-list or survey services.
  • Informed Consent: individuals understand and agree that their personal data are collected, accessed, used, and/or shared, and know how they can withdraw their consent.
  • Purpose limitation: personal data are collected only for a specific and justified use, and cannot be used for other purposes by other parties.
  • Data minimization: data collection is limited to the details essential to the stated purpose.
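As a toy illustration of the last two principles, purpose limitation and data minimization can be enforced in code at the point of intake. The field names below are hypothetical:

```python
# Hypothetical allow-list: the project's stated purpose justifies
# collecting only these fields, so everything else is dropped at intake.
ALLOWED_FIELDS = {"age_range", "district", "consent_given"}

def minimize(record: dict) -> dict:
    """Keep only the fields the stated purpose justifies (data minimization)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "A. Citizen",      # not needed for the purpose: dropped
    "phone": "+1 555 0100",    # not needed either: dropped
    "age_range": "25-34",
    "district": "North",
    "consent_given": True,
}
print(minimize(raw))  # only the three allowed fields remain
```

Dropping fields at intake, rather than filtering later, means the unnecessary data never exist in storage at all.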

 

Healthcare provider in Eswatini. Quality data and protected datasets can accelerate impact in the public health sector. Photo credit: Ncamsile Maseko & Lindani Sifundza.

Access Now’s guide lists eight data-protection principles drawn largely from international standards, in particular the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (widely known as Convention 108) and the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines. These are considered to be “minimum standards” for the protection of fundamental rights by countries that have ratified international data protection frameworks.

A development project that uses data, whether establishing a mailing list or analyzing datasets, should comply with laws on data protection. When there is no national legal framework, international principles, norms, and standards can serve as a baseline to achieve the same level of protection of data and people. Compliance with these principles may seem burdensome, but implementing a few steps related to data protection from the beginning of the project will help to achieve the intended results without putting people at risk.

The figure above shows how common practices of civil society organizations relate to the terms and principles of the data protection framework of laws and norms.

The European Union’s General Data Protection Regulation (GDPR)

The data protection law in the EU, the GDPR, went into effect in 2018. It is often considered the world’s strongest data protection law. The law aims to enhance how people can access their information and limits what organizations can do with the personal data of EU citizens. Although it originates in the EU, the GDPR can also apply to organizations based outside the region when the data of EU citizens are concerned. The GDPR therefore has a global impact.

The obligations stemming from the GDPR and other data protection laws may have broad implications for civil society organizations. For information about the GDPR-compliance process and other resources, see the European Center for Not-for-Profit Law’s guide on data-protection standards for civil society organizations.

Notwithstanding its protections, the GDPR also has been used to harass CSOs and journalists. For example, a mining company used a provision of the GDPR to try to force Global Witness to disclose sources it used in an anti-mining campaign. Global Witness successfully resisted these attempts.

Personal or organizational protection tactics

How to protect your own sensitive information or the data of your organization will depend on your specific situation in terms of activities and legal environment. The first step is to assess your specific needs in terms of security and data protection. For example, which information could, in the wrong hands, have negative consequences for you and your organization?

Digital-security specialists have developed online resources you can use to protect yourself. One example is the Security Planner, an easy-to-use guide with expert-reviewed advice for staying safer online, including recommendations on implementing basic online practices. Another is the Digital Safety Manual, which offers information and practical tips on enhancing digital security for government officials working with civil society and Human Rights Defenders (HRDs). The manual comprises 12 cards tailored to common activities in collaborations between governments (and other partners) and civil society organizations; the first card helps to assess an organization’s digital security.


The Digital First Aid Kit is a free resource for rapid responders, digital security trainers, and tech-savvy activists to better protect themselves and the communities they support against the most common types of digital emergencies. Global digital safety responders and mentors, such as the Digital Defenders Partnership and the Computer Incident Response Centre for Civil Society (CiviCERT), can help with specific questions or provide mentorship.


How is data protection relevant in civic space and for democracy?

Many initiatives that aim to strengthen civic space or improve democracy use digital technology. There is a widespread belief that the increasing volume of data and the tools to process them can be used for good. And indeed, integrating digital technology and the use of data in democracy, human rights, and governance programming can have significant benefits; for example, they can connect communities around the globe, reach underserved populations better, and help mitigate inequality.

“Within social change work, there is usually a stark power asymmetry. From humanitarian work, to campaigning, documenting human rights violations to movement building, advocacy organisations are often led by – and work with – vulnerable or marginalised communities. We often approach social change work through a critical lens, prioritising how to mitigate power asymmetries. We believe we need to do the same thing when it comes to the data we work with – question it, understand its limitations, and learn from it in responsible ways.”

What is Responsible Data?

When quality information is available to the right people when they need it, the data are protected against misuse, and the project is designed with the protection of its users in mind, it can accelerate impact.

  • USAID’s funding of improved vineyard inspection using drones and GIS data in Moldova allows farmers to quickly inspect, identify, and isolate vines infected by a phytoplasma disease of the vine.
  • Círculo is a digital tool for female journalists in Mexico to help them create strong networks of support, strengthen their safety protocols and meet needs related to the protection of themselves and their data. The tool was developed with the end-users through chat groups and in-person workshops to make sure everything built into the app was something they needed and could trust.

At the same time, data-driven development brings a new responsibility to prevent misuse of data when designing, implementing, or monitoring development projects. When the use of personal data is a means to identify people who are eligible for humanitarian services, privacy and security concerns are very real.

  • Refugee camps in Jordan have required community members to allow scans of their irises to purchase food and supplies and to take out cash from ATMs. This practice has not incorporated meaningful ways to ask for consent or to allow people to opt out. Additionally, the use and collection of highly sensitive personal data like biometrics to enable daily purchasing habits is disproportionate, because other, less invasive digital technologies are available and used in many parts of the world.

Governments, international organizations, and private actors can all – even unintentionally – misuse personal data for other purposes than intended, negatively affecting the well-being of the people related to that data. Some examples have been highlighted by Privacy International:

  • The case of Tullow Oil, the largest oil and gas exploration and production company in Africa, shows how a private actor commissioned extensive and detailed research by a micro-targeting research company into the behaviors of local communities in order to obtain ‘cognitive and emotional strategies to influence and modify Turkana attitudes and behavior’ to Tullow Oil’s advantage.
  • In Ghana, the Ministry of Health commissioned a large study on health practices and requirements in Ghana. The study data ended up being used by the ruling political party to model future vote distribution within each constituency based on how respondents said they would vote, and to run a negative campaign aimed at discouraging opposition supporters from voting.

There are resources and experts available to help with this process. The Principles for Digital Development website offers recommendations, tips, and resources to protect privacy and security throughout a project lifecycle: during analysis and planning, while designing and developing, and when deploying and implementing; measurement and evaluation are also covered. The Responsible Data website offers the Illustrated Hand-Book of the Modern Development Specialist, with attractive, understandable guidance through all steps of a data-driven development project: designing it, managing data (with specific information about collecting, understanding, and sharing it), and closing a project.

NGO worker prepares for data collection in Buru Maluku, Indonesia. When collecting new data, it’s important to design the process carefully and think through how it affects the individuals involved. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.


Opportunities

Data protection measures further democracy, human rights, and governance work. Read below to learn how to think about data protection in your work more effectively and safely.

Privacy respected and people protected

Implementing data-protection standards in development projects protects people against potential harm from abuse of their data. Abuse happens when an individual, company, or government accesses personal data and uses them for purposes other than those for which the data were collected. Intelligence services and law-enforcement authorities often have legal and technical means to force access to datasets and abuse the data. Individuals hired by governments can access datasets by hacking the security of software or clouds. Such abuse has often led to intimidation, silencing, and arrests of human rights defenders and civil society leaders who criticize their governments. Privacy International maps examples of governments and private actors abusing individuals’ data.

Strong protective measures against data abuse ensure respect for the fundamental right to privacy of the people whose data are collected and used. Protective measures allow positive development such as improving official statistics, better service delivery, targeted early warning mechanisms, and effective disaster response.

It is important to determine how data are protected throughout the entire life cycle of a project. Individuals should also remain protected after the project ends, whether it ends abruptly or as intended, moves into a different phase, or receives funding from different sources. Oxfam has developed a leaflet to help anyone handling, sharing, or accessing program data to properly consider responsible-data issues throughout the data lifecycle, from making a plan to disposing of data.


Risks

The collection and use of data can also create risks in civil society programming. Read below on how to discern the possible dangers associated with the collection and use of data in DRG work, as well as how to mitigate unintended – and intended – consequences.

Unauthorized access to data

Data need to be stored somewhere: on a computer or an external drive, in a cloud, or on a local server. Wherever the data are stored, precautions need to be taken to protect them from unauthorized access and to avoid revealing the identities of vulnerable persons. The level of protection that is needed depends on the sensitivity of the data, i.e., the extent to which negative consequences could follow if the information fell into the wrong hands.

Data can be stored on a nearby, well-protected server connected to drives with strong encryption and very limited access; this is one way to stay in control of the data you own. Free versions of cloud services offered by well-known tech companies typically provide only basic protection measures and wide access to the dataset. More advanced security features, such as storage of data in jurisdictions with data-protection legislation, are available for paying customers. Guidelines on how to secure private data stored and accessed in the cloud help in understanding the various aspects of cloud services and in deciding what fits a specific situation.

Every system needs to be secured against cyberattacks and manipulation. One common challenge is finding a way to protect identities in the dataset, for example, by removing all information that could identify individuals from the data, i.e. anonymizing it. Proper anonymization is of key importance and harder than often assumed.
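To see why anonymization is harder than it looks, consider k-anonymity: a dataset is k-anonymous if every combination of quasi-identifiers (fields that are not names but can jointly identify someone) is shared by at least k records. A minimal check, with invented example rows:

```python
from collections import Counter

# Quasi-identifiers: not names, but jointly re-identifying.
QUASI_IDS = ("zip", "birth_year", "gender")

def k_anonymity(rows: list) -> int:
    """Smallest group size sharing one quasi-identifier combination."""
    groups = Counter(tuple(r[q] for q in QUASI_IDS) for r in rows)
    return min(groups.values())

rows = [
    {"zip": "90210", "birth_year": 1985, "gender": "F"},
    {"zip": "90210", "birth_year": 1985, "gender": "F"},
    {"zip": "10001", "birth_year": 1990, "gender": "M"},  # unique
]
print(k_anonymity(rows))  # 1: one person is uniquely identifiable
```

Even with names removed, the third record here is unique, which is exactly the failure mode that makes “just delete the identifying columns” insufficient.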

One can imagine that a dataset of GPS locations of People Living with Albinism across Uganda requires strong protection. Persecution is based on the belief that certain body parts of people with albinism can transmit magical powers, or that they are presumed to be cursed and bring bad luck. A spatial-profiling project mapping the exact location of individuals belonging to a vulnerable group can improve outreach and delivery of support services to them. However, hacking of the database or other unlawful access to their personal data might put them at risk of people wanting to exploit or harm them.
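One common mitigation for location data of this kind is to coarsen coordinates before storage, keeping the data useful for planning outreach while blurring individual homes. A sketch (the coordinates are invented, and the right precision depends on the threat model):

```python
def coarsen(lat: float, lon: float, places: int = 1) -> tuple:
    """Round a GPS fix to reduce spatial precision before storage.
    One decimal place is roughly an 11 km cell at the equator."""
    return (round(lat, places), round(lon, places))

exact = (0.347596, 32.582520)  # an invented GPS fix
print(coarsen(*exact))  # (0.3, 32.6)
```

Coarsening only reduces risk; it does not remove it, since repeated coarse fixes over time can still reveal patterns.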

One could also imagine that the people operating an alternative system to send out warning sirens for air strikes in Syria run the risk of being targeted by the authorities: while the group’s data collection and sharing aims to prevent death and injury, it also diminishes the impact of air strikes by the Syrian authorities. The location data of the individuals running and contributing to the system need to be protected against access or exposure.

Another risk is that private actors who run or cooperate in data-driven projects could be tempted to sell data if they are offered large sums of money. Such buyers could be advertising companies or politicians that aim to target commercial or political campaigns at specific people.

The Tiko system designed by social enterprise Triggerise rewards young people for positive health-seeking behaviors, such as visiting pharmacies and seeking information online. Among other things, the system gathers and stores sensitive personal and health information about young female subscribers who use the platform to seek guidance on contraceptives and safe abortions, and it tracks their visits to local clinics. If these data are not protected, governments that have criminalized abortion could potentially access and use that data to carry out law-enforcement actions against pregnant women and medical providers.

Unsafe collection of data

When you are planning to collect new data, it is important to carefully design the collection process and think through how it affects the individuals involved. It should be clear from the start what kind of data will be collected, for what purpose, and that the people involved agree with that purpose. For example, an effort to map people with disabilities in a specific city can improve services. However, the database should not expose these people to risks, such as attacks or stigmatization that can be targeted at specific homes. Also, establishing this database should respond to the needs of the people involved and not be driven by the mere wish to use data. For further guidance, see the chapter Getting Data in the Hand-book of the Modern Development Specialist and the OHCHR Guidance to adopt a Human Rights Based Approach to Data, focused on collection and disaggregation.

If data are collected in person by people recruited for this process, proper training is required. They need to be able to create a safe space to obtain informed consent from people whose data are being collected and know how to avoid bias during the data-collection process.

Unknowns in existing datasets

Data-driven initiatives can either gather new data, for example, through a survey of students and teachers in a school or use existing datasets from secondary sources, for example by using a government census or scraping social media sources. Data protection must also be considered when you plan to use existing datasets, such as images of the Earth for spatial mapping. You need to analyze what kind of data you want to use and whether it is necessary to use a specific dataset to reach your objective. For third-party datasets, it is important to gain insight into how the data that you want to use were obtained, whether the principles of data protection were met during the collection phase, who licensed the data and who funded the process. If you are not able to get this information, you must carefully consider whether to use the data or not. See the Hand-book of the Modern Development Specialist on working with existing data.

Benefits of cloud storage

A trusted cloud-storage strategy offers greater security and ease of implementation compared to securing your own server. While determined adversaries can still hack into individual computers or local servers, it is significantly more challenging for them to breach the robust security defenses of reputable cloud storage providers like Google or Microsoft. These companies deploy extensive security resources and a strong business incentive to ensure maximum protection for their users. By relying on cloud storage, common risks such as physical theft, device damage, or malware can be mitigated since most documents and data are securely stored in the cloud. In case of incidents, it is convenient to resynchronize and resume operations on a new or cleaned computer, with little to no valuable information accessible locally.

Backing up data

Regardless of whether data is stored on physical devices or in the cloud, having a backup is crucial. Physical device storage carries the risk of data loss due to various incidents such as hardware damage, ransomware attacks, or theft. Cloud storage provides an advantage in this regard as it eliminates the reliance on specific devices that can be compromised or lost. Built-in backup solutions like Time Machine for Macs and File History for Windows devices, as well as automatic cloud backups for iPhones and Androids, offer some level of protection. However, even with cloud storage, the risk of human error remains, making it advisable to consider additional cloud backup solutions like Backupify or SpinOne Backup. For organizations using local servers and devices, secure backups become even more critical. It is recommended to encrypt external hard drives using strong passwords, utilize encryption tools like VeraCrypt or BitLocker, and keep backup devices in a separate location from the primary devices. Storing a copy in a highly secure location, such as a safe deposit box, can provide an extra layer of protection in case of disasters that affect both computers and their backups.
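Alongside encryption, it helps to be able to verify that a backup has not been corrupted. One simple approach, sketched below, is to record a SHA-256 checksum when the backup is made and recompute it before restoring; the file here is a throwaway stand-in for a real archive:

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a temporary file standing in for a backup archive.
with tempfile.TemporaryDirectory() as d:
    backup = Path(d) / "backup.tar"
    backup.write_bytes(b"survey responses ...")
    recorded = checksum(backup)          # store this alongside the backup
    assert checksum(backup) == recorded  # later: recompute and compare
```

A checksum detects accidental corruption; detecting deliberate tampering additionally requires keeping the recorded digest somewhere an attacker cannot reach, or using a keyed MAC.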


Questions

If you are trying to understand the implications of lacking data protection measures in your work environment, or are considering using data as part of your DRG programming, ask yourself these questions:

  1. Are data protection laws adopted in the country or countries concerned? Are these laws aligned with international human rights law, including provisions protecting the right to privacy?
  2. How will the use of data in your project comply with data protection and privacy standards?
  3. What kind of data do you plan to use? Are personal or other sensitive data involved?
  4. What could happen to the persons related to that data if the government accesses these data?
  5. What could happen if the data are sold to a private actor for other purposes than intended?
  6. What precautionary and mitigation measures are taken to protect the data and the individuals related to the data?
  7. How are the data protected against manipulation, unauthorized access, and misuse by third parties?
  8. Do you have sufficient expertise integrated during the entire course of the project to make sure that data are handled well?
  9. If you plan to collect data, what is the purpose of the collection of data? Is data collection necessary to reach this purpose?
  10. How are collectors of personal data trained? How is informed consent generated when data are collected?
  11. If you are creating or using databases, how is the anonymity of the individuals related to the data guaranteed?
  12. How is the data that you plan to use obtained and stored? Is the level of protection appropriate to the sensitivity of the data?
  13. Who has access to the data? What measures are taken to guarantee that data are accessed for the intended purpose?
  14. Which other entities – companies, partners – process, analyze, visualize, and otherwise use the data in your project? What measures are taken by them to protect the data? Have agreements been made with them to avoid monetization or misuse?
  15. If you build a platform, how are the registered users of your platform protected?
  16. Is the database, the system used to store data, or the platform open to audit by independent researchers?


Case Studies

People Living with HIV Stigma Index and Implementation Brief

The People Living with HIV Stigma Index is a standardized questionnaire and sampling strategy to gather critical data on intersecting stigmas and discrimination affecting people living with HIV. It monitors HIV-related stigma and discrimination in various countries and provides evidence for advocacy. The data in this project are the experiences of people living with HIV. The implementation brief provides insight into data protection measures. People living with HIV are at the center of the entire process, which continuously links the data collected to the people themselves, from research design through implementation to using the findings for advocacy. Data are gathered through a peer-to-peer interview process, with people living with HIV from diverse backgrounds serving as trained interviewers. A standard implementation methodology has been developed, including the establishment of a steering committee with key stakeholders and population groups.

RNW Media’s Love Matters Program Data Protection

RNW Media’s Love Matters Program offers online platforms to foster discussion and information-sharing on love, sex, and relationships to 18- to 30-year-olds in areas where information on sexual and reproductive health and rights (SRHR) is censored or taboo. RNW Media’s digital teams introduced creative approaches to data processing and analysis, Social Listening methodologies, and Natural Language Processing techniques to make the platforms more inclusive, create targeted content, and identify influencers and trending topics. Governments have imposed restrictions, such as license fees or registrations for online influencers, as a way of monitoring and blocking “undesirable” content, and RNW Media has invested in the security of its platforms and in the digital literacy of its users to protect them from exposure of their sensitive personal information. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, pp. 12-14.

Amnesty International Report

Thousands of democracy and human rights activists and organizations rely on secure communication channels every day to maintain the confidentiality of conversations in challenging political environments. Without such security practices, sensitive messages can be intercepted and used by authorities to target activists and break up protests. One prominent and well-documented example occurred in the aftermath of the 2010 elections in Belarus. As detailed in this Amnesty International report, phone recordings and other unencrypted communications were intercepted by the government and used in court against prominent opposition politicians and activists, many of whom spent years in prison. In 2020, during another swell of post-election protests in Belarus, thousands of protestors adopted user-friendly, secure messaging apps, which had not been as readily available just 10 years earlier, to protect their sensitive communications.

Norway Parliament Data

The Storting, Norway’s parliament, has experienced another cyberattack that involved the exploitation of recently disclosed vulnerabilities in Microsoft Exchange. These vulnerabilities, known as ProxyLogon, were addressed by emergency security updates released by Microsoft. The initial attacks were attributed to a state-sponsored hacking group from China called HAFNIUM, which utilized the vulnerabilities to compromise servers, establish backdoor web shells, and gain unauthorized access to internal networks of various organizations. The repeated cyberattacks on the Storting and the involvement of various hacking groups underscore the importance of data protection, timely security updates, and proactive measures to mitigate cyber risks. Organizations must remain vigilant, stay informed about the latest vulnerabilities, and take appropriate actions to safeguard their systems and data.

Girl Effect

Girl Effect, a creative non-profit working where girls are marginalized and vulnerable, uses media and mobile technology to empower girls. The organization embraces digital tools and interventions and acknowledges that any organization that uses data also has a responsibility to protect the people it talks to or connects online. Its ‘Digital safeguarding tips and guidance’ provides in-depth guidance on implementing data protection measures while working with vulnerable people. Citing Girl Effect as inspiration, Oxfam has developed and implemented a Responsible Data Policy and shares many supporting resources online. The publication ‘Privacy and data security under GDPR for quantitative impact evaluation’ details the data protection measures Oxfam implements while conducting quantitative impact evaluation through digital and paper-based surveys and interviews.


Extended Reality / Augmented Reality / Virtual Reality (XR/AR/VR)

What is Extended Reality (XR)?

Extended Reality (XR) is a collective term encompassing Augmented Reality (AR) and Virtual Reality (VR), technologies that transform our interaction with the world by either enhancing or completely reimagining our perception of reality.

Utilizing advancements in computer graphics, sensors, cameras, and displays, XR creates immersive experiences that range from overlaying digital information onto our physical surroundings in AR, to immersing users in entirely digital environments in VR. XR represents a significant shift in how we engage with and perceive digital content, offering intuitive and natural interfaces for a wide range of applications in various sectors, including democracy, human rights, and governance.

What is Virtual Reality (VR)?

Virtual Reality (VR) is a technology that immerses users in a simulated three-dimensional (3D) environment, allowing them to interact with it in a way that simulates real-world experiences, engaging senses like sight, hearing, and touch. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds.

VR uses a specialized headset, known as a VR Head Mounted Display (HMD), to create a 3D, computer-generated world that fully encompasses the user’s vision and hearing. This immersive technology not only renders a virtual world but also enables interaction with it through hand controllers. These controllers provide haptic feedback, a feature that simulates the sense of touch, enhancing the realism of the virtual experience. VR’s most notable application is in immersive gaming, where it allows players to fully engage in complex fantasy worlds.

What is Augmented Reality (AR)?

Augmented Reality (AR) is a technology that overlays digital information and objects onto the real world, enhancing what we see, hear, and feel. For instance, it can be used as a tourist application to help a user find her way through an unfamiliar city and identify restaurants, hotels, and sights. Rather than immersing the user in an imaginary or distant virtual world, the physical world is enhanced by augmenting it in real time with digital information and entities.

AR became widely popular in 2016 with the game Pokémon Go, in which players found virtual characters in real locations, and Snapchat, which adds playful filters like funny glasses to users’ faces. AR is used in more practical applications as well, such as aiding surgeries, enhancing car displays, and visualizing furniture in homes. Its seamless integration with the real world and focus on enhancing, rather than replacing, reality positions AR as a potential key player in future web and metaverse technologies, replacing traditional computing interfaces like phones and desktops by accurately blending real and virtual elements in real time.

What is Mixed Reality (MR)?

Mixed Reality (MR) is a technology that merges real-world and digital elements. It combines elements of Virtual Reality (VR), which creates a completely computer-generated environment, with Augmented Reality (AR), which overlays digital information onto the real world. In MR, users can seamlessly interact with both real and virtual objects in real time. Digital objects in MR are designed to respond to real-world conditions like light and sound, making them appear more realistic and integrated into our physical environment. Unlike VR, MR does not fully replace the real world with a digital one; instead, it enhances your real-world experience by adding digital elements, providing a more interactive and immersive experience.

MR has diverse applications, such as guiding surgeons in minimally invasive procedures using interactive 3D images and models through MR headsets. MR devices are envisioned as versatile tools poised to deliver value across multiple domains.

What is the Metaverse?

The Metaverse, a term first coined in the 1992 novel “Snow Crash,” is an immersive, interconnected virtual world in which people use avatars to interact with each other and digital environments via the internet. It blends the physical and digital realms using Extended Reality (XR) technologies like AR and VR, creating a space for diverse interactions and community building. Gaining traction through advancements in technology and investments from major companies, the Metaverse offers a platform that mirrors real-world experiences in a digitally enhanced environment, allowing simultaneous connections among numerous users.

The metaverse and how it leverages XR: a metaverse can be built using VR technology to create a virtual metaverse, or using AR technology to create an augmented metaverse. (Figure adapted from Ahmed et al., 2023)

A metaverse can span the spectrum of virtuality and may incorporate a “virtual metaverse” or an “augmented metaverse” as shown above. Features of this technology range from employing avatars within virtual realms to utilizing smartphones for accessing metaverse environments, and from wearing AR glasses that superimpose computer-generated visuals onto reality, to experiencing MR scenarios that flawlessly blend elements from both the physical and virtual domains.

Spectrum ranging from reality to virtuality [the Milgram and Kishino (1994) continuum]. Figure adapted from “Reality Media” (Bolter, Engberg, & MacIntyre, 2021).
The above figure illustrates a spectrum from the real environment (physical world) at one end to a completely virtual environment (VR) at the other. Augmented Reality (AR) and Augmented Virtuality (AV) are placed in between, with AR mostly showing the physical world enhanced by digital elements, and AV being largely computer-generated but including some elements from the real world. Mixed Reality (MR) is a term for any combination of the physical and virtual worlds along this spectrum.

How is AR/VR relevant in civic space and for democracy?

In the rapidly evolving landscape of technology, the potential of AR/VR stands out, especially in its relevance to democracy, human rights, and governance (DRG). These technologies are not just futuristic concepts; they are tools that can reshape how we interact with the world and with each other, making them vital for the DRG community.

At the forefront is the power of AR/VR to transform democratic participation. These technologies can create immersive and interactive platforms that bring the democratic process into the digital age. Imagine participating in a virtual town hall from your living room, debating policies with avatars of people from around the world. This is not just about convenience; it’s about enhancing engagement, making participation in governance accessible to all, irrespective of geographic or physical limitations.

Moreover, AR/VR technologies offer a unique opportunity to give voice to marginalized communities. Through immersive experiences, people can gain a visceral understanding of the challenges faced by others, fostering empathy and breaking down barriers. For instance, VR experiences that simulate the life of someone living in a conflict zone or struggling with poverty can be powerful tools in human rights advocacy, making abstract issues concrete and urgent.

Another significant aspect is the global collaboration facilitated by AR/VR. These technologies enable DRG professionals to connect, share experiences, and learn from each other across borders. Such collaboration is essential in a world where human rights and democratic values are increasingly interdependent. The global exchange of ideas and best practices can lead to more robust and resilient strategies in promoting democracy and governance.

The potential of AR/VR in advocacy and awareness is significant. Traditional methods of raising awareness about human rights issues can be complemented and enhanced by the immersive nature of these technologies. They bring a new dimension to storytelling, allowing people to experience rather than just observe. This could be a game-changer in how we mobilize support for causes and educate the public about critical issues.

However, navigating the digital frontier of AR/VR technology calls for a vigilant approach to data privacy, security, and equitable access, recognizing these as not only technical challenges but also human rights and ethical governance concerns.

The complexity of governing these technologies necessitates the involvement of elected governments and representatives to address systemic risks, foster shared responsibility, and protect vulnerable populations. This governance extends beyond government oversight, requiring the engagement of a wide range of stakeholders, including industry experts and civil society, to ensure fair and inclusive management. The debate over governance approaches ranges from advocating government regulation to protect society, to promoting self-regulation for responsible innovation. A potentially effective middle ground is co-regulation, where governments, industry, and relevant stakeholders collaborate to develop and enforce rules. This balanced strategy is crucial for ensuring the ethical and impactful use of AR/VR in enhancing democratic engagement and upholding human rights.

Opportunities

AR/VR offers a diverse array of applications in the realms of democracy, human rights, and governance. The following section delves into various opportunities that AR/VR technology brings to civic and democracy work.

Augmented Democracy

Democracy is much more than just elections and governance by elected officials. A fully functional democracy is characterized by citizen participation in the public space; participatory governance; freedom of speech and opportunity; access to information; due legal process and enforcement of justice; and protection from abuse by the powerful. Chilean physicist César Hidalgo, formerly director of the Collective Learning group at the MIT Media Lab, has worked on an ambitious project he calls “Augmented Democracy.” Augmented Democracy banks on the idea of using technology such as AR/VR, along with other digital tools including AI and digital twins, to expand the ability of people to participate directly in a large volume of democratic decisions. Citizens can be represented in the virtual world by a digital twin, an avatar, or a software agent, and through such technology can participate more fully in public policy issues in a scalable, convenient fashion. Hidalgo asserts that democracy can be enhanced and augmented by using technology to automate many tasks of government; in the future, politicians and citizens will be supported by algorithms and specialist teams, fostering a collective intelligence that serves the people more effectively.

Participatory Governance

AR/VR opens up enhanced opportunities to participate in governance. When used in conjunction with other technologies such as AI, participatory governance becomes feasible at scale, with the voice of citizens and their representatives incorporated into all decisions pertaining to public policy and welfare. However, “a participatory public space” is only one possibility. As we shall see later in the Risks section, we cannot ascribe outcomes to technology deterministically, because the intent and purpose of deployment matter a great deal. If due care is not exercised, the use of technology in public spaces may result in less desirable scenarios such as “autocratic augmented reality” or “big-tech monopoly” (Gudowsky et al.). On the other hand, a well-structured metaverse could enable greater democratic participation and may offer citizens new ways to engage in civic affairs, leading to more inclusive governance. For instance, virtual town hall meetings, debates, and community forums could bring together people from diverse backgrounds, overcoming geographical barriers and promoting democratic discussion. AR/VR could also facilitate virtual protests and demonstrations, providing a safe platform for expression in regions where physical gatherings might be restricted.

AR/VR in Healthcare

Perhaps the most well-known applications of AR/VR in the civic space pertain to the healthcare and education industries. The benefit of AR/VR for healthcare is well-established and replicated through multiple scientific studies. Even skeptics, typically doubtful of AR/VR/metaverse technology’s wider benefits, acknowledge its proven effectiveness in healthcare, as noted by experts like Bolter et al. (2021) and Bailenson (2018). These technologies have shown promise in areas such as therapeutics, mental therapy, emotional support, and specifically in exposure therapy for phobias and managing stress and anxiety. Illustrating this, Garcia-Palacios et al. (2002) demonstrated the successful use of VR in treating spider phobia through a controlled study, further validating the technology’s therapeutic potential.

AR/VR in Education

Along with healthcare, education and training provide the most compelling use cases of immersive technologies such as AR/VR. The primary value of AR/VR is that it provides a unique first-person immersive experience that can enhance human perception and educate or train learners in the relevant environment. Thus, with AR/VR, education is not reading about a situation or watching it, but being present in that situation. Such training can be useful in a wide variety of fields. For instance, Boeing presented results of a study that suggested that training performed through AR enabled workers to be more productive and assemble plane wings much faster than when instructions were provided using traditional methods. Such training has also been shown to be effective in diversity training, where empathy can be engendered through immersive experiences.

Enhanced Accessibility and Inclusiveness

AR/VR technology allows for the creation of interactive environments that can be customized to meet the needs of individuals with various abilities and disabilities. For example, virtual public spaces can be adapted for those with visual impairments by focusing on other senses, using haptic (touch-based) or audio interfaces for enhanced engagement. People who are colorblind can benefit from a ‘colorblind mode’ – a feature already present in many AR/VR applications and games, which adjusts colors to make them distinguishable. Additionally, individuals who need alternative communication methods can utilize text-to-speech features, even choosing a unique voice for their digital avatars. Beyond these adaptations, AR/VR technologies can help promote workplace equity by offering people with physical disabilities equal access to experiences and opportunities that might otherwise be inaccessible, leveling the playing field in both social and professional settings.

Generating Empathy and Awareness

AR/VR lets users experience what it is like to be in someone else’s shoes. Such perspective-taking can increase empathy and promote awareness of others’ circumstances. VR expert Jeremy Bailenson and his team at the Stanford Virtual Human Interaction Lab have worked on VR for behavior change and have created numerous first-person VR experiences to highlight social problems such as racism, sexism, and other forms of discrimination (see some examples in the Case Studies). In the future, with wearable AR/VR devices and broadband wireless communication, one may be able to walk a proverbial mile in someone else’s shoes in real time, raising greater awareness of the difficulties faced by others. Such VR use can help remove biases and advance progress on issues such as poverty and discrimination.

Immersive Virtual Communities and Support Systems

AR/VR technologies offer a unique form of empowerment for marginalized communities, providing a virtual space for self-expression and authentic interaction. These platforms enable users to create avatars and environments that truly reflect their identities, free from societal constraints. This digital realm fosters social development and offers a safe space for communities often isolated in the physical world. By connecting these individuals with broader networks, AR/VR facilitates access to educational and support resources that promote individual and communal growth. Additionally, AR/VR serves as a digital archive for diverse cultures and traditions, aiding in the preservation and celebration of cultural diversity. As highlighted in Jeremy Bailenson’s “Experience on Demand,” these technologies also provide therapeutic benefits, offering emotional support to those affected by trauma. Through virtual experiences, individuals can revisit cherished memories or envision hopeful futures, underscoring the technology’s role in emotional healing and psychological wellbeing.

Virtual Activism

Virtual reality, unlike traditional media, does not provide merely a mediated experience. When it is done well, explains Jeremy Bailenson, it is an actual experience. Therefore, VR can be the agent of long-lasting behavior change and can be more engaging and persuasive than other types of traditional media. This makes AR/VR ideally suited for virtual activism, which seeks to bring actual changes to the life of marginalized communities. For instance, VR has been used by UN Virtual Reality to provide a new lens on an existing migrant crisis; create awareness around climate change; and engender humanitarian empathy. Some examples are elaborated upon in the Case Studies.

Virtual Sustainable Economy

AR/VR and the metaverse could enable new, more sustainable economic models. Decentralized systems like blockchain can be used to support digital ownership of virtual assets, to empower the disenfranchised economically, and to challenge traditional, centralized power structures. Furthermore, since AR/VR and the metaverse promise to be the next, more immersive and multi-sensory evolution of the Internet, individuals may be able to participate in various activities and experiences virtually. This could reduce the need for physical travel and infrastructure, resulting in more economical and sustainable living, reducing carbon footprints, and mitigating climate change.

Risks

The use of AR/VR in democracy, human rights, and governance work carries various risks. The following sections will explore these risks in a little more detail. They will also provide strategies on how to mitigate these risks effectively.

Limited Applications and Inclusiveness

For AR/VR technologies to be effectively and inclusively used in democratic and other applications, it is essential to overcome several key challenges. Currently, these technologies fall short in areas like advanced tactile feedback, comprehensive sign language support, and broad accessibility for various disabilities. To truly have a global impact, AR/VR must adapt to diverse communication methods, including visual, verbal, and tactile approaches, and cater to an array of languages, from spoken to sign. They should also be designed to support different cognitive abilities and neurodiversity, in line with the principles set by the IEEE Global Initiative on Ethics of Extended Reality. There is a pressing need for content to be culturally and linguistically localized as well, along with the development of relevant skills, making AR/VR applications more applicable and beneficial across various cultural and linguistic contexts.

Furthermore, access to AR/VR technologies and the broader metaverse and XR ecosystem is critically dependent on advanced digital infrastructure, such as strong internet connectivity, high-performance computing systems, and specialized equipment. As noted by Matthew Ball in his 2022 analysis, significant improvements in computational efficiency are necessary to make these technologies widely accessible and capable of delivering seamless, real-time experiences, which is particularly crucial in AR to avoid disruptive delays. Without making these advancements affordable, AR/VR applications at scale remain limited.

Concentration of Power & Monopolies of Corporations

The concept of the Metaverse, as envisioned by industry experts, carries immense potential for shaping the future of human interaction and experience. However, the concentrated control of this expansive digital realm by a single dominant corporation raises critical concerns over the balance of power and authority. As Matthew Ball (2022) puts it, the Metaverse’s influence could eclipse that of governments, bestowing unprecedented authority upon the corporation that commands it. The concentration of power within this immersive digital ecosystem brings forth apprehensions about accountability, oversight, and the potential implications for personal freedoms.

Another significant concern is how companies gather and use our data. While companies can use data to improve their products and people’s lives in many ways, the World Bank (2021) warns that collecting vast amounts of data can give companies too much economic and political power, which could be used to harm citizens. The more data is reused, the more chances there are for it to be misused. Especially in situations characterized by concentrations of power, such as authoritarian regimes or corporate monopolies, the risks of privacy violations, surveillance, and manipulation become much higher.

Privacy Violation with Expanded Intrusive Digital Surveillance

The emergence of AR/VR technologies has revolutionized immersive experiences but also raises significant privacy concerns due to the extensive data collection involved. These devices collect a wide range of personal data, including biometric information like blood pressure, pulse oximetry, voice prints, facial features, and even detailed body movements. This kind of data gathering poses specific risks, particularly to vulnerable and marginalized groups, as it goes much further than simple identification. Current regulatory frameworks are not adequately equipped to address these privacy issues in the rapidly evolving XR environment. This situation underscores the urgent need for updated regulations that can protect individual privacy in the face of such advanced technological capabilities.

Moreover, AR/VR technologies bring unique challenges in the form of manipulative advertising and potential behavior modification. Using biometric data, these devices can infer users’ deepest desires, leading to highly targeted and potentially invasive advertising that taps into subconscious motivations. Such techniques blur the line between personal privacy and corporate interests, necessitating robust privacy frameworks. Additionally, the potential of AR/VR to influence or manipulate human behavior is a critical concern. As these technologies can shape our perceptions and choices, it is essential to involve diverse perspectives in their design and enforce proactive regulations to prevent irreversible impacts on their infrastructure and business models. Furthermore, the impact of XR technology extends to bystanders, who may unknowingly be recorded or observed, especially with the integration of technologies like facial recognition, posing further risks to privacy and security.

Unintended Harmful Consequences of AR/VR

When introducing AR/VR technology into democracy-related programs or other social initiatives, it is crucial to consider the broader, often unintended, consequences these technologies might have. AR/VR offers immersive experiences that can enhance learning and engagement, but these very qualities also bear risks. For example, while VR can create compelling simulations of real-world scenarios, promoting empathy and understanding, it can also lead to phenomena like “VR Fatigue” or “VR Hangover.” Users might experience a disconnection from reality, feeling alienated from their physical environment or their own bodies. Moreover, the prevalence of “cybersickness,” akin to motion sickness, caused by discrepancies in sensory inputs, can result in discomfort, nausea, or dizziness, detracting from the intended positive impacts of these technologies.

Another significant concern is the potential for AR/VR to shape users’ perceptions and behaviors in undesirable ways. The immersive nature of these technologies can intensify the effects of filter bubbles and echo chambers, isolating users within highly personalized, yet potentially distorted, information spheres. This effect can exacerbate the fragmentation of shared reality, impeding constructive discourse in democratic contexts. Additionally, the blending of virtual and real experiences can blur the lines between factual information and fabrication, making users more susceptible to misinformation. Furthermore, the perceived anonymity and detachment in VR environments might encourage anti-social behavior, as people might engage in actions they would avoid in real life. There is also the risk of empathy, generally a force for good, being manipulated for divisive or exploitative purposes. Thus, while AR/VR holds great promise for enhancing democratic and social programs, potential negative impacts call for careful, ethically guided implementation.

“Too True to Be Good”: Disenchantment with Reality & Pygmalion Effect

In our era of augmented and virtual realities, where digital escapism often seems more enticing than the physical world, there is a growing risk to our shared understanding and democracy as people become disenchanted with reality and retreat into virtual realms (Turkle, 1996; Bailenson, 2018). The transformative nature of AR/VR introduces a novel dynamic in which individuals might gravitate toward virtual worlds at the expense of engaging with their physical surroundings (which are now considered “too true to be good”). The use of VR by disadvantaged and exploited populations may provide them relief from the challenges of their lived experience, but it also diminishes the likelihood of their resistance to those conditions. Moreover, as AR/VR advances and becomes integrated with advanced AI in the metaverse, there is a risk of blurring the lines between the virtual and real worlds. Human beings have a tendency to anthropomorphize machines and bots that have humanistic features (e.g., eyes or language) and to treat them as humans (Reeves & Nass, 1996). We might treat AI and virtual entities as if they were human, potentially leading to confusion and challenges in our interactions. There are also severe risks associated with overindulgence in immersive experiences with a high degree of VR realism (Greengard, 2019). VR expert Denny Unger, CEO of Cloudhead Games, cautions that extreme immersion could extend beyond discomfort and result in even more severe outcomes, including potential heart attacks and fatal incidents.

Neglect of physical self and environment

Jeremy Bailenson’s (2018) observation that being present in virtual reality (VR) often means being absent from the real world is a crucial point for those considering using VR in democracy and other important work. When people dive into VR, they can become so focused on the virtual world that they lose touch with what is happening around them in real life. In his book “Experience on Demand,” Bailenson explains how this deep engagement in VR can lead to users neglecting their own physical needs and environment. This is similar to how people might feel disconnected from themselves and their surroundings in certain psychological conditions. There is also a worry that VR companies might design their products to be addictive, making it hard for users to pull away. This raises important questions about the long-term effects of using VR a lot and highlights the need for strategies to prevent these issues.

Safety and Security

In the realm of immersive technologies, safety is a primary concern. There is a notable lack of understanding about the impact of virtual reality (VR), particularly on young users. Ensuring the emotional and physical safety of children in VR environments requires well-defined guidelines and safety measures. The enticing nature of VR must be balanced with awareness of the real world to protect younger users. Discussions about age restrictions and responsible use of VR are critical in this rapidly advancing technological landscape. Spiegel (2018) emphasizes the importance of age restrictions to protect young users from the potential negative effects of prolonged VR exposure, arguing for the benefits of such limitations.

On another front, the lack of strong identity verification in virtual spaces raises concerns about identity theft and avatar misuse, particularly affecting children who could be victims of fraud or wrongly accused of offenses. The absence of effective identity protection increases the vulnerability of users, highlighting the need for advanced security measures. Additionally, virtual violence, like harassment incidents reported in VR games, poses a significant risk. These are not new issues; for instance, Julian Dibbell’s 1994 article “A Rape in Cyberspace” brought attention to the challenge of preventing virtual sexual assault. This underlines the urgent need for policies to address and prevent harassment and violence in VR, ensuring these spaces are safe and inclusive for all users.

Alignment with Values and Meaning-Making

When incorporating AR/VR technologies into programs, it is crucial to be mindful of their significant impact on culture and values. As Neil Postman pointed out, technology invariably shapes culture, often creating a mix of winners and losers. Each technological tool carries inherent biases, whether political, social, or epistemological. These biases subtly influence our daily lives, sometimes without our conscious awareness. Hence, when introducing AR/VR into new environments, consider how these technologies align or conflict with local cultural values. As Nelson and Stolterman (2014) observed, culture is dynamic, caught between tradition and innovation. Engaging the community in the design process can enhance the acceptance and effectiveness of your project.

In the context of democracy, human rights, and governance, it is essential to balance individual desires with the collective good. AR/VR can offer captivating, artificial experiences, but as philosopher Robert Nozick’s (2018) “Experience Machine” thought experiment illustrates, these cannot replace the complexities and authenticity of real-life experiences. People often value reality, authenticity, and the freedom to make life choices over artificial pleasure. In deploying AR/VR, the goal should be to empower individuals, enhancing their participation in democratic processes and enriching their lives, rather than offering mere escapism. Ethical guidelines and responsible design practices are key in ensuring the conscientious use of virtual environments. By guiding users towards more meaningful and fulfilling experiences, AR/VR can be used to positively impact society while respecting and enriching its cultural fabric.

Questions

If you are trying to understand the implications of AR/VR in your DRG work, you should consider the following questions:

  1. Does the AR/VR use enhance human engagement with the physical world and real-world issues, or does it disengage people from the real world? Will the AR/VR tool being developed create siloed spaces that will alienate people from each other and from the real world? What steps have been taken to avoid such isolation?
  2. Can this project be done in the real world and is it really needed in virtual reality? Does it offer any benefit over doing the same thing in the real world? Does it cause any harm compared to doing it in the real world?
  3. In deploying AR/VR technology, consider if it might unintentionally reinforce real-world inequalities. Reflect on the digital and economic barriers to access: Is your application compatible with various devices and networks, ensuring wide accessibility? Beware of creating a “reality divide,” in which marginalized groups are pushed towards virtual alternatives while others enjoy physical experiences. Always consider offering real-world options for those less engaged with AR/VR, promoting inclusivity and broad participation.
  4. Have the system-level repercussions of AR/VR technology usage been considered? What will be the effect of the intervention on the community and the society at large? Are there any chances that the proposed technology will result in the problem of technology addiction? Can any unintended consequences be anticipated and negative risks (such as technology addiction) be mitigated?
  5. Which policy and regulatory frameworks are being followed to ensure that AR/VR-related technologies, or more broadly XR and metaverse-related technologies, do not violate human rights and contribute positively to human development and democracy?
  6. Have the necessary steps been taken to accommodate and promote diversity, equity and inclusion so that the technology is appropriate for the needs and sensitivities of different groups? Have the designers and developers of AR/VR taken on board input from underrepresented and marginalized groups to ensure participatory and inclusive design?
  7. Is there transparency in the system regarding what data are collected, who they are shared with and how they are used? Does the data policy comply with international best practices regarding consumer protection and human rights?
  8. Are users given significant choice and control over their privacy, autonomy, and access to their information and online avatars? What measures are in place to limit unauthorized access to data containing private and sensitive information?
  9. If biometric signals are being monitored, or sensitive information such as eye-gaze detection is performed, what steps and frameworks have been followed to ensure that they are used for purposes that are ethical and pro-social?
  10. Is there transparency in the system about the use of AI entities in the Virtual Space such that there is no deception or ambiguity for the user of the AR/VR application?
  11. What steps and frameworks have been followed to ensure that any behavior modification or nudging performed by the technology is guided by ethics and law and is culturally sensitive and pro-social? Is the AR/VR technology complying with the UN Human Rights principles applied to communication surveillance and business?
  12. Has appropriate consideration been given to safeguarding children’s rights if the application is intended for use by children?
  13. Is the technology culturally appropriate? Which cultural effects are likely when this technology is adopted? Will these effects be welcomed by the local population, or will they face opposition?

Case Studies

AR/VR can have positive impacts when used to further DRG issues. Read below to learn how to think about AR/VR use more effectively and safely in your work.

UN Sustainable Development Goals (SDG) Action Campaign

Starting in January 2015, the UN SDG Action Campaign has overseen the United Nations Virtual Reality Series (UN VR), aiming to make the world’s most urgent issues resonate with decision makers and people worldwide. By pushing the limits of empathy, this initiative delves into the human narratives underlying developmental struggles. Through the UN VR Series, individuals in positions to effect change gain a more profound insight into the daily experiences of those who are at risk of being marginalized, thereby fostering a deeper comprehension of their circumstances.

UN Secretary-General Ban Ki-moon and WHO Director-General Margaret Chan. Photo: © David Gough. Source: UN VR: https://unvr.sdgactioncampaign.org/

In a recent example of advocacy and activism and of the use of immersive storytelling to brief decision makers, in April 2022 the United Nations Department of Political and Peacebuilding Affairs (UN DPPA), together with the Government of Japan, released the VR experience “Sea of Islands,” which takes viewers to the Pacific islands, allowing them to witness the profound ramifications of the climate crisis in the Asia-Pacific region. Through this medium, the urgency, magnitude, and critical nature of climate change become tangible and accessible.

Poster of the VR Film: Sea of Islands. Source: https://media.un.org/en/asset/k1s/k1sbvxqll2

VR For Democratizing Access to Education

AR/VR technologies hold great promise in the field of educational technology (“edtech”) due to their immersive capabilities, engaging nature, and potential to democratize access and address issues such as cost and distance (Dick, 2021a). AR/VR can play a crucial role in facilitating the understanding of abstract concepts and enabling hands-on practice within safe virtual environments, particularly benefiting STEM courses, medical simulations, arts, and humanities studies. Additionally, by incorporating gamified, hands-on learning approaches across various subjects, these technologies enhance cognitive development and classroom engagement. Another advantage is their capacity to offer personalized learning experiences, benefiting all students, including those with cognitive and learning disabilities. An illustrative instance is Floreo, which employs VR-based lessons to impart social and life skills to young individuals with autism spectrum disorder (ASD). The United Nations Children’s Fund (UNICEF) has a series of initiatives under its AR/VR for Good Initiative. For example, Nigerian start-up Imisi 3D, founded by Judith Okonkwo, aims to use VR in the classroom. Imisi 3D’s solution promises to provide quality education tools through VR, enrich the learning experiences of children, and make education accessible to more people.

Source: UNICEF Nigeria/2019/Achirga

Spotlight on Refugees and Victims of War

A number of projects have turned to VR to highlight the plight of refugees and those affected by war. One of UN VR’s first documentaries, released in 2015, is Clouds Over Sidra, the story of a 12-year-old girl Sidra who has lived in Zaʿatari Refugee Camp since the summer of 2013. The storyline follows Sidra around the Zaʿatari Camp, where approximately 80,000 Syrians, approximately half of them children, have taken refuge from conflict and turmoil. Through the VR film, Sidra takes audiences on a journey through her daily existence, providing insights into activities such as eating, sleeping, learning, and playing within the expansive desert landscape of tents. By plunging viewers into this world that would otherwise remain distant, the UN strives to offer existing donors a tangible glimpse into the impact of their contributions and, for potential donors, an understanding of the areas that still require substantial support.

The Life of Migrants in a Refugee Camp in VR (UN VR Project Clouds over Sidra) Source: http://unvr.sdgactioncampaign.org/cloudsoversidra/

Another UN VR project, My Mother’s Wing, offers an unparalleled perspective of the war-torn Gaza Strip, presenting a firsthand account of a young mother’s journey as she grapples with the heart-wrenching loss of two of her children during the bombardment of the UNRWA school in July 2014. This poignant film sheds light on the blend of sorrow and hope that colors her daily existence, showcasing her pivotal role as a beacon of strength within her family. Amid the process of healing, she emerges as a pillar of support, nurturing a sense of optimism that empowers her family to persevere with renewed hope.

Experience of a War-Torn Area (UN VR Project My Mother’s Wing) Source: https://unvr.sdgactioncampaign.org/a-mother-in-gaza/

Improving Accessibility in the Global South with AR

In various parts of the world, millions of adults struggle to read basic things such as bus schedules or bank forms. To rectify this situation, AR technology can be used with phone cameras to help people who struggle with reading. As an example, Google Lens offers support for translation and can read the text out loud when pointed at the text. It highlights the words as they are spoken, so that it becomes possible to follow along and understand the full context. One can also tap on a specific word to search for it and learn its definition. Google Lens is designed to work not only with expensive smartphones but also with cheap phones equipped with cameras.

Google Translate with Google Lens for Real-Time Live Translation of Consumer Train Tickets Source: https://storage.googleapis.com/gweb-uniblog-publish-prod/original_images/Consumer_TrainTicket.gif

Another example, the AR app IKEA Place, shows the power of AR-driven spatial design and consumer engagement. The app employs AR technology to allow users to virtually integrate furniture products into their living spaces, enhancing decision-making processes and elevating customer satisfaction. Such AR technology can also be applied in the realm of civic planning. By providing an authentic representation of products in real-world environments, the app can aid urban planners and architects in simulating various design elements within public spaces, contributing to informed decision-making for cityscapes and communal areas.

IKEA Place: Using AR to visualize furniture within living spaces. Source: Ikea.com

More examples of how AR/VR technology can be used to enhance accessibility are noted in Dick (2021b).

Spotlight on Gender and Caste Discrimination

The presence of women at the core of our democratic system marks a significant stride toward realizing both gender equality (SDG 5) and robust institutions (SDG 16). The VR film titled “Compliment,” created by Lucy Bonner, a graduate student at Parsons School of Design, aimed to draw attention to harassment and discrimination endured by women in unsafe environments, which regrettably remains a global issue. Through this VR movie, viewers can step into the shoes of a woman navigating the streets, gaining a firsthand perspective on the distressing spectrum of harassment that many women experience, often on a daily basis.

View of a Scene from the VR Movie, “Compliment.”
Source: http://lucymbonner.com/compliment.html

There are other forms of systemic discrimination, including caste-based discrimination. The VR-based film “Courage to Question,” produced by Novus Select in collaboration with UN Women and Vital Voices and supported by Google, offers a glimpse into the struggles of activists combating caste-based discrimination. This movie highlights the plight of Dalit women, who continue to occupy the lowest rungs of caste, color, and gender hierarchies. Formerly known as “untouchables,” the Dalits are members of the lowest caste in India and are fighting back against systems of oppression. They are systematically deprived of fundamental human rights, including access to basic necessities like food, education, and fair labor.

Scene from UN Women’s VR movie “Courage to Question,” highlighting discrimination faced by Dalit women. Snapshot from https://www.youtube.com/watch?v=pJCl8FNv22M

Maternal Health Training

The UN Population Fund, formerly the UN Fund for Population Activities (UNFPA), pioneered a VR innovation in 2022 to improve maternal-health training, the first project to implement VR in Timor-Leste and possibly in the Asia-Pacific region. Delivered through VR goggles, the program’s modules teach Emergency Obstetric and Newborn Care (EmONC) skills and procedures for saving the lives of mothers and babies. The aim of this project is to create digitally mediated learning environments in which real medical situations are visualized for trainees to boost learning experiences and outcomes and “refresh the skills of hundreds of trained doctors and midwives to help them save lives and avoid maternal deaths.”

Source: https://timor-leste.unfpa.org/en/news/unfpa-develop-novel-innovation-help-reduce-maternal-deaths-timor-leste-using-virtual-reality © UNFPA Timor-Leste.

Highlighting Racism and Dire Poverty

The immersive VR experience, “1000 Cut Journey,” takes participants on a profound exploration. They step into the shoes of Michael Sterling, a Black male, and traverse through pivotal moments of his life, gaining firsthand insight into the impact of racism. This journey guides participants through his experiences as a child facing disciplinary measures in the classroom, as an adolescent dealing with encounters with the police, and as a young adult grappling with workplace discrimination (Cogburn et al., 2018).

View from 1000 Cut Journey, a VR film on Racism.
Source: https://www.virtualrealitymarketing.com/case-studies/1000-cut-journey/

1000 Cut Journey serves as a powerful tool to foster a significant shift in perspective. By immersing individuals in the narrative of Michael Sterling, it facilitates a deeper, more authentic engagement with the complex issues surrounding racism.

In another project from Stanford University’s Virtual Human Interaction Lab, participants can experience firsthand, within VR, the lives of people who can no longer afford a home and walk in their shoes. Through this, the researchers aim to raise awareness and to study the effect of VR experiences on empathy. They have found that a VR experience, compared with other perspective-taking exercises, engenders longer-lasting behavior change.

View of Becoming Homeless — A Human Experience VR Film.
Source: https://xrgigs.com/offered/becoming-homeless-a-human-experience/

Participatory Governance using XR

The MIT Media Lab and HafenCity University Hamburg teamed up to create CityScope, an innovative tool blending AI, algorithms, and human insight for participatory governance. This tool utilizes detailed data, including demographics and urban planning information, to encourage citizen involvement in addressing community issues and collective decision-making. It allows users to examine various scenarios, fostering informed dialogue and collaborative solutions. This project highlights how combining technology and human creativity can enhance citizen engagement in urban development.

District leaders and residents meet to explore possible sites for refugee communities. Credit: Walter Schiesswohl. Source: https://medium.com/mit-media-lab/

Another example is vTaiwan, an innovative approach to participatory governance that brings together government ministries, scholars, citizens, and business leaders to redefine modern democracy. The process eliminates redundancy by converging online and offline consultations through platforms such as vtaiwan.tw, which is used for proposals, opinion gathering, reflection, and legislation. Taiwan has also used VR to highlight its response to the COVID-19 crisis through the VR film The Three Crucial Steps, which showcases how three steps (prudent action, rapid response, and early deployment) have played a critical role in Taiwan’s successful COVID-19 response.

Taiwanese Deputy Minister of Foreign Affairs watches the Ministry’s VR film, Three Crucial Steps, about Taiwan’s response to COVID-19. Photo: Louise Watt.
Source: https://topics.amcham.com.tw/2020/12/taiwan-new-trails-with-extended-reality/

Taiwan harnesses the power of open-source tools and advanced real-time systems such as Pol.is, which uses statistical analysis and machine learning to decode the sentiments of an extensive user base exceeding 200,000 participants; through the pioneering integration of 3D cameras into live-streamed dialogues, participants can also take part immersively in VR. This movement, born in 2014 and still ongoing, serves as a model of technology-enhanced 21st-century democratic governance.

Clinical VR and Virtual Therapy

VR holds significant potential for application in clinical settings, particularly for virtual therapy and rehabilitation, as noted by Rizzo et al. (2023). To exemplify the clinical utility of VR, consider the treatment of burn pain, which is often described by medical professionals as excruciating and frequently leaves patients susceptible to post-traumatic stress. For more than two decades, VR has provided a measure of solace to burn patients through innovative solutions like the VR game SnowWorld, developed by researchers at the University of Washington.

A patient uses SnowWorld VR during treatment for burns.
Photo and Copyright: Hunter G. Hoffman. Credit: University of Washington.
Source: https://depts.washington.edu/hplab/research/virtual-reality/

During the operative care of burn wounds, patients immerse themselves in the SnowWorld VR experience. Remarkably, this immersive engagement has proven successful in drowning out or mitigating the pain signals that afflict patients. SnowWorld’s design revolves around the idea of snow, leveraging the stark contrast between cold and ice to counter the sensations of pain from the burn wounds, with the intention of diverting patients’ thoughts away from their accidents and injuries. The effectiveness of VR in managing incoming pain highlights the great potential of AR/VR technology in clinical therapy and healthcare.

References

Find below the works cited in this resource.

Additional Resources

Generative AI

What is Generative AI?

Generative artificial intelligence (GenAI) refers to a class of artificial-intelligence techniques and models that create new, original content based on the data on which the models were trained. The output can be text, images, or videos that reflect or respond to the input. Much as artificial intelligence applications can span many industries, so too can GenAI. Many of these applications are in the area of art and creativity, as GenAI can be used to create art, music, video games, and poetry based on the patterns observed in training data. But its grasp of language also makes it well suited to facilitating communication, for example through chatbots or conversational agents that can simulate human-like conversations, language translation, and realistic speech synthesis or text-to-speech. These are just a few examples. This article elaborates on the ways in which GenAI offers both opportunities and risks in civic space and to democracy, and what government institutions, international organizations, activists, and civil society organizations can do to capitalize on the opportunities and guard against the risks.

How does GenAI work?

At the core of GenAI are generative models: algorithms and model architectures designed to learn the underlying patterns and statistics of their training data. Having captured those patterns, a model can generate new samples that belong to the same distribution as the training data, producing outputs that resemble, but do not simply copy, the original data.

Steps of the GenAI Process

As the figure above illustrates, GenAI models are developed through a process by which a curated database is used to train neural networks with machine learning techniques. These networks learn to identify patterns in the data, which allows them to generate new content or make predictions based on the learned information. From there, users can input commands in the form of words, numbers, or images into these algorithmic models, and the model produces content that responds based on the input and the patterns learned from the training data. As they are trained on ever-larger datasets, the GenAI models gain a broader range of possible content they can generate across different media, from audio to images and text.
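
This pipeline can be sketched end to end in a toy Python example. A simple word-pair (Markov chain) model stands in for the neural network, and the tiny corpus is invented for illustration; real systems train neural networks on vast datasets:

```python
import random
from collections import defaultdict

# "Training": learn which word tends to follow which in a tiny corpus.
corpus = (
    "citizens engage with government services online and "
    "citizens engage with community projects offline"
).split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# "Generation": extend a user's prompt using the learned patterns.
def generate(prompt: str, length: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    words = [prompt]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("citizens"))
```

The shape mirrors the steps above: curate data, learn its patterns, then respond to a user's input with new content drawn from those patterns.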

Until recently, GenAI simply mimicked the style and substance of its input. For example, someone could input a snippet of a poem or news article into a model, and the model would output a complete poem or news article that sounded like the original content. A familiar example from the linguistics field, which you may have seen in your own email, is predictive text along the lines of Google Smart Compose, which completes a sentence based on a combination of the initial words you type and the probabilistic expectation of what could follow. For example, a machine studying billions of words from datasets would build a probabilistic expectation for a sentence that starts with “please come ___.” In 95% of cases, the machine might have seen “here” as the next word, in 3% of cases “with me,” and in 2% of cases “soon.” Thus, when completing sentences or generating outputs, the algorithm would use the sentence structures and combinations of words it had seen previously. Because the models are probabilistic, they might sometimes make errors that do not reflect the nuanced intentions of the input.
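
The “please come ___” example can be made concrete in a few lines of Python. The counts below are invented to mirror the 95%/3%/2% split described above, not drawn from any real corpus:

```python
from collections import Counter

# Hypothetical counts of the continuation observed after "please come",
# standing in for statistics gathered over billions of real sentences.
next_word_counts = Counter({"here": 95, "with me": 3, "soon": 2})

def next_word_distribution(counts: Counter) -> list[tuple[str, float]]:
    """Convert raw counts into a probability distribution over continuations."""
    total = sum(counts.values())
    return [(word, n / total) for word, n in counts.most_common()]

predictions = next_word_distribution(next_word_counts)
best_word, best_prob = predictions[0]

# A predictive-text system suggests the most probable continuation.
print(f"please come {best_word}")  # please come here
print(best_prob)                   # 0.95
```

A real language model conditions on far more context than two words, but the core move (turning observed frequencies into probabilities and picking a likely continuation) is the same.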

GenAI now has far more expansive capabilities. Far beyond text, GenAI is also a tool for producing images from text. For example, tools such as DALL-E, Stable Diffusion, and Midjourney allow a user to input text descriptions that the model then uses to produce a corresponding image. These images vary in realism: some look like scenes from science fiction, others like paintings, and still others more like photographs. These tools are also constantly improving, expanding the boundaries of what can be achieved with text-to-image generation.

Conversational AI

Recent models incorporate machine learning not only of language patterns but also of factual information about politics, society, and economics. They can also take input commands from images and voice, further expanding their versatility and utility in various applications.

Consumer-facing models that simulate human conversation–“conversational AI”–have proliferated recently and operate more as chatbots, responding to queries and questions, much in the way that a search engine would function. Some examples include asking the model to answer any of the following:

  • Provide a photo of a political leader playing a ukulele in the style of Salvador Dali.
  • Talk about Kenya’s capital, form of government, or character, or about the history of decolonization in South Asia.
  • Write and perform a song about adolescence that mimics a Drake song.

In other words, these newer models may function like a blend between a Google search and an exchange with a knowledgeable individual about their area of expertise. Much like a socially attentive individual, these models can be taught during a conversation. If you ask a question about the best restaurants in Manila and the chatbot responds with a list that includes some Continental European restaurants, you can follow up and express a preference for Filipino restaurants, which will prompt the chatbot to tailor its output to your specific preferences. The model learns from feedback, although models such as ChatGPT will be quick to point out that they are trained only on data up to a certain date, which means some restaurants will have gone out of business and some new award-winning restaurants may have opened since. The example highlights a fundamental tension between keeping models up to date and the ability to refine and filter them: if models learn from information as it is produced, they will generate up-to-date answers but will not be able to filter outputs for bad information, hate speech, or conspiracy theories.
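
The restaurant exchange above can be caricatured as session state plus feedback. The sketch below is a hand-written toy, not a real language model, and the restaurant names and cuisines are invented:

```python
# Invented sample data standing in for a model's knowledge of restaurants.
RESTAURANTS = [
    ("Casa Manila", "Filipino"),
    ("Le Petit Jardin", "French"),
    ("Lola's Kitchen", "Filipino"),
    ("Trattoria Roma", "Italian"),
]

class ToyChatbot:
    """Keeps per-conversation state and refines answers after feedback."""

    def __init__(self) -> None:
        self.preferred_cuisine = None  # learned mid-conversation

    def ask(self, query: str) -> list[str]:
        # Crude feedback handling: remember any cuisine the user prefers.
        if "prefer" in query.lower():
            for _, cuisine in RESTAURANTS:
                if cuisine.lower() in query.lower():
                    self.preferred_cuisine = cuisine
        # Answer, filtered by whatever preference has been expressed so far.
        return [name for name, cuisine in RESTAURANTS
                if self.preferred_cuisine in (None, cuisine)]

bot = ToyChatbot()
print(bot.ask("Best restaurants in Manila?"))     # all four restaurants
print(bot.ask("I prefer Filipino restaurants."))  # only the Filipino ones
```

A real conversational model does this statistically, conditioning each answer on the whole dialogue history rather than on hand-written rules, but the principle of later answers being shaped by earlier feedback is the same.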

Definitions

GenAI involves several key concepts:

Generative Models: Generative models are a class of machine learning models designed to create or generate new data outputs that resemble a given set of training data. These models learn underlying patterns and structures from the training data and use that knowledge to generate new, similar data outputs.

ChatGPT: ChatGPT is a Generative Pre-trained Transformer (GPT) model developed by OpenAI. While researchers had developed and used language models for decades, ChatGPT was the first consumer-facing language model. Trained to understand and produce human-like text in a dialogue setting, it was specifically designed for generating conversational responses and engaging in interactive text-based conversations. As such, it is well-suited for creating chatbots, virtual assistants, and other conversational AI applications.

Neural Network: A neural network is a computational model loosely inspired by the brain’s interconnected neurons. It is the foundation of deep learning: each neuron performs a simple calculation, and the strength of the connections (weights) between neurons determines how information flows through the network and shapes the output.
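
The role of weights can be seen in a single artificial neuron, the smallest unit of a neural network. This minimal sketch uses arbitrary example values:

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One neuron: weighted sum of inputs plus bias, squashed by a sigmoid."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid maps to (0, 1)

# The weights decide how strongly each input influences the output: here
# the first input counts positively and twice as much, while the second
# input's negative weight pulls the output down.
out = neuron(inputs=[1.0, 0.5], weights=[2.0, -1.0], bias=0.0)
print(round(out, 3))  # 0.818  (weighted sum = 1.5)
```

Training a network amounts to adjusting these weights, across millions or billions of such connections, until the network’s outputs match the patterns in the training data.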

Training Data: Training data are the data used to train generative models. These data are crucial since the model learns patterns and structures from them to create new content. For example, in the context of text generation, training data would consist of a large collection of text documents, sentences, or paragraphs. The quality and diversity of the training data have a significant impact on the performance of the GenAI model because they help the model generate more relevant content.

Hallucination: In the context of GenAI, the term “hallucination” refers to a phenomenon in which the AI model produces outputs that are not grounded in reality or accurate representations of the input data. In other words, the AI generates content that appears plausible but is entirely fabricated and has no basis in the actual data on which the model was trained. For instance, a language model might produce paragraphs of text that seem coherent and factual but, upon closer inspection, include false information, events that never happened, or connections between concepts that are logically flawed. The problem stems in part from noise and gaps in the training data. Addressing and minimizing hallucinations in GenAI is an ongoing research challenge: researchers and developers strive to improve models’ understanding of context, coherence, and factual accuracy to reduce the likelihood of generating hallucinatory content.

Prompt: A GenAI prompt is a specific input or instruction provided to a GenAI model to guide it toward a desired output. In image generation, a prompt might involve specifying the style, content, or other attributes you want the generated image to have. The quality and relevance of the generated output often depend on the clarity and specificity of the prompt; a well-crafted prompt can lead to more accurate and desirable generated content.

Evaluation Metrics: Evaluating the quality of outputs from GenAI models can be challenging, but several evaluation metrics have been developed to assess various aspects of generated content. Metrics like Inception Score, Fréchet Inception Distance (FID), and Perceptual Path Length (PPL) attempt to measure aspects of model performance such as the diversity of responses (so that they do not all sound like copies of each other), relevance (so that responses are on topic), and coherence (so that responses hold together logically).

Prompt Engineering: Prompt engineering is the process of designing and refining prompts or instructions given to GenAI systems, such as chatbots or language models like GPT-3.5, to elicit specific and desired responses. It involves crafting the input text or query in such a way that the model generates outputs that align with the user’s intent or the desired task. It is useful for optimizing the benefits of GenAI but requires a deep understanding of the model’s behavior and capabilities as well as the specific requirements of the application or task. Well-crafted prompts can enhance the user experience by ensuring that the models provide valuable and accurate responses.
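
One common prompt-engineering pattern is templating: wrapping the same underlying request with a role, an audience, and a format constraint. The function and field names below are illustrative, not part of any model’s API:

```python
def build_prompt(task: str, role: str, audience: str, response_format: str) -> str:
    """Assemble a structured prompt from reusable components."""
    return (
        f"You are {role}. {task} "
        f"Write for {audience}. "
        f"Respond as {response_format}."
    )

# A vague prompt leaves the model to guess tone, depth, and format...
vague = "Explain generative AI."

# ...while an engineered prompt pins all three down.
engineered = build_prompt(
    task="Explain generative AI.",
    role="a civic-technology trainer",
    audience="election observers with no technical background",
    response_format="three short bullet points",
)
print(engineered)
```

In practice, prompt engineers iterate on such templates, testing which phrasings reliably steer a given model toward accurate, well-scoped answers.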

How is GenAI relevant in civic space and for democracy?

The rapid development and diffusion of GenAI technologies–across medicine, environmental sustainability, politics, and journalism, among many other fields–is creating and will create enormous opportunities. GenAI is being used for drug discovery, molecule design, medical-imaging analysis, and personalized treatment recommendations. It is being used to model and simulate ecosystems, predict environmental changes, and devise conservation strategies. It offers more accessible answers about bureaucratic procedures so citizens better understand their government, which is a fundamental change to how citizens access information and how governments operate. It is supporting the generation of written content such as articles, reports, and advertisements.

Across all of these sectors, GenAI also introduces potential risks. Governments, working with the private sector and civil society organizations, are taking different approaches to capitalizing on the opportunities while guarding against the risks, reflecting different philosophies about risk and the role of innovation in their economies as well as different legal precedents and political landscapes across countries. Many of the pioneering efforts are taking place in the countries where AI is used most, such as the United States and countries in the European Union, or in tech-heavy countries such as China. Conversations about regulation in other countries have lagged. In Africa, for example, experts at the Africa Tech Week conference in spring 2023 expressed concern about the lag in Africa’s access to AI and the need to catch up to reap AI’s benefits in the economy, medicine, and society, though they also gestured toward privacy issues and the importance of diversity in AI research teams to guard against bias. These conversations suggest that both access and regulation are developing at different rates across contexts, and the regions developing and testing regulations now may serve as role models, or at least provide lessons learned, for other countries as they regulate.

The European Union has moved quickly to regulate AI, using a tiered, risk-based approach that designates some types of “high risk” uses as prohibited. GenAI systems that lack risk-assessment and risk-mitigation plans, clear information for users, explainability, activity logging, and other safeguards are considered high risk, and most GenAI systems would not meet those standards, according to a 2021 Stanford University study. However, executives from 150 European companies have collectively pushed back against aggressive regulation, arguing that overly stringent AI rules will incentivize companies to establish headquarters outside of Europe and stifle innovation and economic development in the region. Their open letter acknowledges that some regulation may be warranted but insists that GenAI will be “decisive” and “powerful” and that “Europe cannot afford to stay on the sidelines.”

China has been one of the most aggressive countries when it comes to AI regulation. The Cyberspace Administration of China requires that AI be transparent, unbiased, and not used for generating misinformation or social unrest. Existing rules highly regulate deepfakes—synthetic media in which a person’s likeness, including their face and voice, is replaced with someone else’s, typically using AI. Any service provider that uses content produced by GenAI must also obtain consent from deepfake subjects, label outputs, and counter any misinformation. However, enacting such regulations does not mean that state actors will not themselves use AI for malicious purposes or influence operations, as we discuss below.

The United States has held a number of hearings to better understand the technology and its impact on democracy, but by September 2023 had not put in place any significant legislation to regulate GenAI. The Federal Trade Commission, responsible for promoting consumer protection, issued a 20-page letter to OpenAI, the creator of ChatGPT, requesting responses to its concerns about consumer privacy and security. In addition, the US government has worked with the major GenAI firms to establish voluntary transparency and safety safeguards as the risks and benefits of the technology evolve.

Going beyond regional or country-level regulatory initiatives, UN Secretary-General António Guterres has advocated for transparency, accountability, and oversight of AI. Guterres observed: “The international community has a long history of responding to new technologies with the potential to disrupt our societies and economies. We have come together at the United Nations to set new international rules, sign new treaties and establish new global agencies. While many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach.” The statement gestures toward the fact that digital space knows no boundaries and that software technologies innovated in one country will inevitably cross over to others, suggesting that meaningful norms or constraints on GenAI will likely require a coordinated, international approach. To that end, some researchers have proposed an international artificial intelligence organization that would help certify compliance with international standards on AI safety, an idea that acknowledges the inherently international nature of AI development and deployment.

Back to top

Opportunities

Enhancing Representation

One of the main challenges in democracy and for civil society is ensuring that constituent voices are heard and represented, which in part requires that citizens themselves participate in the democratic process. GenAI may be useful in providing both policymakers and citizens with a way to communicate more efficiently and to enhance trust in institutions. Another avenue for enhancing representation is for GenAI to provide data that give researchers and policymakers an opportunity to understand various social, economic, and environmental issues and constituents’ concerns about them. For example, GenAI could be used to synthesize large volumes of incoming commentary from open comment lines or emails, helping institutions understand the bottom-up concerns that citizens have about their democracy. To be sure, these data-analysis tools need to ensure data privacy, but they can provide data visualizations that help institutional leaders understand what people care about.
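
The aggregation step described above can be sketched without any GenAI at all: a hypothetical pipeline might first group incoming comments by topic keyword before handing each group to a summarization model for a narrative digest. Everything here (the tracked topics, the sample comments, the function name) is an illustrative assumption, not an actual civic system:

```python
from collections import Counter

# Hypothetical topic keywords a civic office might track (illustrative only).
TOPICS = {"housing", "transit", "schools", "policing"}

def tally_topics(comments):
    """Count how often each tracked topic appears across citizen comments.

    This is only the pre-aggregation step; the grouped comments could then
    be passed to a GenAI summarizer to produce a readable digest.
    """
    counts = Counter()
    for comment in comments:
        words = {w.strip(".,!?;").lower() for w in comment.split()}
        for topic in TOPICS & words:
            counts[topic] += 1
    return counts

comments = [
    "We need more affordable housing downtown.",
    "Transit delays are hurting commuters; fix transit funding.",
    "Housing costs keep rising every year.",
]
print(tally_topics(comments).most_common(1))  # housing mentioned in 2 comments
```

A real deployment would use topic modeling or an LLM rather than fixed keywords, but the privacy point in the text applies either way: the comments themselves are the sensitive input.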

Easy Read Access

Many regulations and pieces of legislation are dense and difficult to comprehend for anyone outside the decisionmaking establishment. These accessibility challenges are magnified for individuals with disabilities such as cognitive impairments. GenAI can summarize long pieces of legislation and translate dense governmental publications into an easy read format, with images and simple language. Civil society organizations can also use GenAI to develop social media campaigns and other content that is more accessible to people with disabilities.

Civic Engagement

GenAI can enhance civic engagement by generating personalized content tailored to individual interests and preferences through a combination of data analysis and machine learning. This could involve generating informative materials, news summaries, or visualizations that appeal to citizens and encourage them to participate in civic discussions and activities. The marketing industry has long capitalized on the realization that content specific to individual consumers is more likely to elicit consumption or engagement, and the idea holds in civil society: the more personalized and targeted the content, the more likely an individual is to respond. Again, classifying citizen preferences inherently relies on user data, and not all societies will endorse this use of data. The European Union, for example, has shown a wariness about privacy, suggesting that one size will not fit all for this particular use of GenAI in civic engagement.

That being said, this tool could help dislodge voter apathy that can lead to disaffection and disengagement from politics. Instead of boilerplate communication urging young people to vote, for example, GenAI could produce clever content known to resonate with young women or marginalized groups, helping to counter some of the additional barriers to engagement that marginalized groups face. In an educational setting, personalized content could be used to cater to the needs of students in different regions and with different learning abilities, while also providing virtual tutors or language-learning tools.

Public Deliberation

Another way that GenAI could enable public participation and deliberation is through GenAI-powered chatbots and conversational agents. These tools can facilitate public deliberation by engaging citizens in dialogue, addressing their concerns, and helping them navigate complex civic issues. These agents can provide information, answer questions, and stimulate discussions. Some municipalities have already launched AI-powered virtual assistants and chatbots that automate civic services, streamlining processes such as citizen inquiries, service requests, and administrative tasks. This can lead to increased efficiency and responsiveness in government operations. Lack of municipal resources—for example, staff—can mean that citizens also lack the information they need to be meaningful participants in their society. With relatively limited resources, a chatbot can be trained on local data to provide specific information needed to narrow that gap.
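
The idea of a chatbot "trained on local data to narrow that gap" can be illustrated, in highly simplified form, as retrieval over a small set of locally curated answers. This is a minimal keyword-overlap sketch under stated assumptions: the FAQ entries, the scoring rule, and the function name are all hypothetical, and a real municipal assistant would use a trained retrieval or generative model:

```python
# Hypothetical local FAQ a municipal chatbot might be seeded with.
LOCAL_FAQ = {
    "Where do I register to vote?": "Visit the county clerk's office or the elections website.",
    "When is trash collected?": "Curbside pickup runs every Tuesday morning.",
    "How do I pay property taxes?": "Payments are accepted online or at city hall.",
}

def answer(question):
    """Return the FAQ answer whose stored question shares the most words
    with the user's query.

    Simple word overlap stands in here for the semantic matching a real
    GenAI-backed assistant would perform.
    """
    query = set(question.lower().split())
    best_q = max(LOCAL_FAQ, key=lambda q: len(query & set(q.lower().split())))
    return LOCAL_FAQ[best_q]

print(answer("how do i pay my taxes"))
```

The design point carries over to the real case: the knowledge lives in a small, locally maintained dataset, so even a resource-constrained municipality can keep the answers current.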

Chatbots can be trained in multiple languages, making civic information and resources more accessible to diverse populations. They can assist people with disabilities by generating alternative formats for information, such as audio descriptions or text-to-speech conversions. GenAI can be trained on local dialects and languages, promoting indigenous cultures and making digital content more accessible to diverse populations.

It is important to note that the deployment of GenAI must be done with sensitivity to local contexts, cultural considerations, and privacy concerns. Adopting a human-centered design approach to collaborations among AI researchers, developers, civil society groups, and local communities can help to ensure that these technologies are adapted appropriately and equitably to address specific needs and challenges.

Predictive Analytics

GenAI can also be used for predictive analytics to forecast potential outcomes of policy decisions. For example, AI-powered generative models can analyze local soil and weather data to optimize crop yield and recommend suitable agricultural practices for specific regions. It can be used to generate realistic simulations to predict potential impacts and develop disaster response strategies for relief operations. It can analyze local environmental conditions and energy demand to optimize the deployment of renewable energy sources like solar and wind power, promoting sustainable power solutions.

By analyzing historical data and generating simulations, policymakers can make more informed and evidence-based choices for the betterment of society. These same tools can assist not only policymakers but also civil society organizations in generating data visualizations or summarizing information about citizen preferences. This can aid in producing more informative and timely content about citizen preferences and the state of key issues, like the number of people who are homeless.

Environmental Sustainability

GenAI can be used in ways that lead to favorable environmental impacts. For example, it can be used in fields such as architecture and product design to optimize designs for efficiency. It can be used to optimize processes in the energy industry that can enhance energy efficiency. It also has potential for use in logistics where GenAI can optimize routes and schedules, thereby reducing fuel consumption and emissions.

Back to top

Risks

To harness the potential of GenAI for democracy and the civic space, a balanced approach that addresses ethical concerns, fosters transparency, promotes inclusive technology development, and engages multiple stakeholders is necessary. Collaboration among researchers, policymakers, civil society, and technology developers can help ensure that GenAI contributes positively to democratic processes and civic engagement. The ability to generate large volumes of credible content can create opportunities for policymakers and citizens to connect with each other–but those same capabilities of advanced GenAI models create possible risks as well.

Online Misinformation

Although GenAI has improved, the models still hallucinate, producing convincing-sounding outputs such as facts or stories that sound plausible but are not correct. While there are many cases in which these hallucinations are benign–such as a scientific query about the age of the universe–there are other cases where the consequences are politically or societally destabilizing.

Given that GenAI is public facing, individuals can use these technologies without understanding their limitations. They could then inadvertently spread misinformation from an inaccurate answer to a question about politics or history–for example, an inaccurate statement about a political leader that ends up inflaming an already acrimonious political environment. A flood of AI-generated misinformation has the potential to reduce trust in the information ecosystem as a whole, leading people to be skeptical of all facts and to conform to the beliefs of their social circles. The spread of such misinformation may mean that members of society believe things that are not true about political candidates, election procedures, or wars.

Examples of GenAI generating disinformation include not just text but also deepfakes. While deepfakes have benign potential applications, such as for entertainment or special effects, they can also be misused to create highly realistic videos that spread false information or fabricated events that make it difficult for viewers to discern between fake and real content, which can lead to the spread of misinformation and erode trust in the media. Relatedly, they can be used for political manipulation, in which videos of politicians or public figures are altered to make them appear to say or do things that could defame, harm their reputation, or influence public opinion.

GenAI makes it more efficient to generate and amplify disinformation, intentionally created for the purposes of misleading a reader, because it can produce, in large quantities, seemingly original and seemingly credible but nonetheless inaccurate information. None of the stories or comments would necessarily repeat, which could then lead to an even more credible-seeming narrative. Foreign disinformation campaigns have often been identified on the basis of spelling or grammatical errors, but the ability to use these new GenAI technologies means the efficient creation of native-sounding content that can fool the usual filters that a platform might use to identify large-scale disinformation campaigns. GenAI may also proliferate social bots that are indistinguishable from humans and can micro-target individuals with disinformation in a tailored way.

Astroturfing Campaigns

Since GenAI technologies are public facing and easy to use, they can be used to manipulate not only the mass public, but also different levels of government elites. Political leaders are expected to engage with their constituents’ concerns, as reflected in communications such as emails that reveal public opinion and sentiment. But what if a malicious actor used ChatGPT or another GenAI model to create large volumes of advocacy content and distributed it to political leaders as if it were from real citizens? This would be a form of astroturfing, a deceptive practice that masks the source of content with an aim of creating a perception of grassroots support. Research suggests that elected officials in the United States have been susceptible to these attacks. Leaders could well allow this volume of content to influence their political agenda, passing laws or establishing bureaucracies in response to the apparent groundswell of support that in fact was manufactured by the ability to generate large volumes of credible-sounding content.

Bias

GenAI also raises discrimination and bias concerns. If the training data used to create a generative model contains biased or discriminatory information, the model will produce biased or offensive outputs. This can perpetuate harmful stereotypes and contribute to privacy violations for certain groups. A GenAI model trained on a dataset containing biased language patterns might produce text that reinforces gender stereotypes–for instance, associating certain professions or roles with a particular gender even where there is no inherent connection. A model trained on a dataset with skewed racial or ethnic representation can produce images that unintentionally depict certain groups in a negative or stereotypical manner, and models trained on biased or discriminatory datasets may produce content that is culturally insensitive or uses derogatory terms. Text-to-image GenAI mangles the features of a “Black woman” at high rates, which is harmful to the groups misrepresented; the cause is overrepresentation of non-Black groups in the training datasets. One solution is to train on more balanced, diverse datasets rather than predominantly Western, English-language data, which embed Western perspectives and lack others. Another is to train the model so that users cannot “jailbreak” it into spewing racist or inappropriate content.

However, the issue of bias extends beyond training data that is openly racist or sexist. AI models draw conclusions from data points. An AI model might look at hiring data, see that white men have been the demographic group most successful at getting hired at a tech company, and conclude that white men are the most qualified to work there. In reality, white men may be more successful because they do not face the structural barriers that affect other demographics, such as being unable to afford a tech degree, encountering sexism in classes, or facing racism in the hiring department.

Privacy

GenAI raises several privacy concerns. One is that the datasets could contain sensitive or personal information. Unless that content is properly anonymized or protected, personal information could be exposed or misused. Because GenAI outputs are intended to be realistic-looking, generated content that resembles real individuals could be used to re-identify individuals whose data was intended to be anonymized, also undermining privacy protections. Further, during the training process, GenAI models may inadvertently learn and memorize parts of the training data, including sensitive or private information. This could lead to data leakage when generating new content. Policymakers and the GenAI platforms themselves have not yet resolved the concern about how to protect privacy in the datasets, outputs, or even the prompts themselves, which can include sensitive data or reflect a user’s intentions in ways that could be harmful if not secure.

Copyright and Intellectual Property

One of the fundamental concerns around GenAI is who owns the copyright for work that GenAI creates. Copyright law attributes authorship and ownership to human creators. In the case of AI-generated content, however, determining authorship, the cornerstone of copyright protection, becomes challenging: it is unclear whether the creator should be the programmer, the user, the AI system itself, or a combination of these parties. AI systems learn from existing copyrighted content to generate new work that may resemble existing copyrighted material. This raises the question of whether AI-generated content constitutes derivative work that infringes the original copyright holder’s rights, or whether the use of GenAI qualifies as fair use, which allows limited use of copyrighted material without permission from the copyright holder. Because the technology is still new, the legal frameworks for judging fair use versus copyright infringement are still evolving and may differ across jurisdictions and legal cultures. As that body of law develops, it should balance innovation with treating creators, users, and the developers of AI systems fairly.

Environmental Impacts

Training GenAI models, and storing and transmitting data, consumes significant computational resources; the hardware involved draws energy that contributes to carbon emissions when it is not powered by renewable sources. These impacts can be mitigated in part through the use of renewable energy and by optimizing algorithms to reduce computational demands.

Unequal Access

Although access to GenAI tools is becoming more widespread, the emergence of the technology risks expanding the digital divide between those with access to technology and those without. There are several reasons why unequal access–and its consequences–may be particularly germane in the case of GenAI:

  • The computing power required is enormous, which can strain the infrastructure of countries that have inadequate power supply, internet access, data storage, or cloud computing.
  • Low- and middle-income countries (LMICs) may lack the high-tech talent pool necessary for AI innovation and implementation. One report suggests that the entire continent of Africa has 700,000 software developers, compared with 630,000 in California alone. This problem is exacerbated by the fact that, once qualified, developers from LMICs often leave for countries where they can earn more.
  • Mainstream, consumer-facing models like ChatGPT were trained on a handful of languages, including English, Spanish, German, and Chinese, which means that individuals seeking to use GenAI in these languages have access advantages unavailable to Swahili speakers, for example, not to mention local dialects.
  • Localizing GenAI requires large amounts of data from the particular context, and low-resourced environments often rely on models developed by larger tech companies in the United States or China.

The ultimate result may be the disempowerment of marginalized groups who have fewer opportunities and means to share their stories and perspectives through AI-generated content. Because these technologies may enhance an individual’s economic prospects, unequal access to GenAI can in turn increase economic inequality as those with access are able to engage in creative expression, content generation, and business innovation more efficiently.

Back to top

Questions

If you are conducting a project and considering whether to use GenAI for it, ask yourself these questions:

  1. Are there cases where individual interactions between people might be more effective, more empathetic, and even more efficient than using AI for communication?
  2. What ethical concerns—whether from biases or privacy—might the use of GenAI introduce? Can they be mitigated?
  3. Can local sources of data and talent be employed to create localized GenAI?
  4. Are there legal, regulatory, or security measures that will guard against the misuses of GenAI and protect the populations that might be vulnerable to these misuses?
  5. Can sensitive or proprietary information be protected in the process of developing datasets that serve as training data for GenAI models?
  6. In what ways can GenAI technology bridge the digital divide and increase digital access in a tech-dependent society (or as societies become more tech-dependent)? How can we mitigate the tendency of new GenAI technologies to widen the digital divide?
  7. Are there forms of digital literacy for members of society, civil society, or a political class that can mitigate against the risks of deepfakes or large-scale generated misinformation text?
  8. How can you mitigate against the negative environmental impacts associated with the use of GenAI?
  9. Can GenAI be used to tailor approaches to education, access to government and civil society, and opportunities for innovation and economic advancement?
  10. Is the data your model is trained on accurate and representative of all identities, including marginalized groups? What inherent biases might the dataset carry?

Back to top

Case Studies

GenAI largely emerged in a widespread, consumer-facing way in the first half of 2023, which limits the number of real-world case studies. This section on case studies therefore includes cases where forms of GenAI have proved problematic in terms of deception or misinformation; ways that GenAI may conceivably affect all sectors, including democracy, to increase efficiencies and access; and experiences or discussions of specific country approaches to privacy-innovation tradeoffs.

Experiences with Disinformation and Deception

In Gabon, a possible deepfake played a significant role in the country’s politics. The president had reportedly experienced a stroke but had not been seen in public. The government ultimately issued a video on New Year’s Eve 2018 intending to assuage concerns about the president’s health, but critics pointed to inauthentic-looking blinking patterns and facial expressions and suggested the video was a deepfake. Rumors that the video was inauthentic proliferated, leading many to conclude that the president was not in good health. That belief contributed to an attempted coup, premised on the assumption that an ailing president would be less able to withstand an overthrow attempt. The example demonstrates the serious ramifications of a loss of trust in the information environment.

In March 2023, a GenAI image of the Pope in a Balenciaga puffy coat went viral on the internet, fooling viewers because of its likeness to the Pope. Balenciaga, several months before, had faced backlash over an ad campaign featuring children in harnesses and bondage gear. The Pope seemingly wearing Balenciaga thus implied that he and the Catholic church embraced these practices. The internet ultimately concluded that the image was AI-generated after identifying telltale signs such as a blurry coffee cup and resolution problems with the Pope’s eyelid. Nonetheless, the incident illustrated how easily such images can be generated and fool viewers, and how reputations can be stained through deepfakes.

In September 2023, the Microsoft Threat Analysis Center released a report pointing to numerous instances of online influence operations. Ahead of the 2022 US midterm elections, Microsoft identified Chinese Communist Party (CCP)-affiliated social media accounts impersonating American voters and responding to comments in order to influence opinions through exchanges and persuasion. In 2023, Microsoft observed the use of AI-created visuals portraying American images, such as the Statue of Liberty, in a negative light. These images had hallmarks of AI, such as the wrong number of fingers on a hand, but were nonetheless provocative and convincing. In early 2023, Meta similarly found the CCP engaged in an influence operation, posting comments critical of American foreign policy, which Meta was able to identify through the types of spelling and grammatical mistakes involved combined with the time of day of posting (hours appropriate for China rather than the United States).

Current and Future Applications

As GenAI tools improve, they will become even more effective in these online influence campaigns. On the other hand, applications with positive outcomes will also become more effective. GenAI, for example, will increasingly step in to fill gaps in government resources. An estimated four billion people lack access to basic health services, a significant limitation being the low number of health care providers. While GenAI is not a substitute for direct access to an individual health care provider, it can bridge some access gaps in certain settings. One healthcare chatbot, Ada Health, is powered by OpenAI and can correspond with individuals about their symptoms. ChatGPT has demonstrated an ability to pass medical qualification exams; it should not be used as a stand-in for a doctor, but in resource-constrained environments it could at least provide an initial screening, saving costs, time, and resources. Relatedly, analogous tools can be used in mental health settings. The World Economic Forum reported in 2021 that an estimated 100 million individuals in Africa have clinical depression, but that there are only 1.4 health care providers per 100,000 people, compared to the global average of 9 per 100,000. People in need of care who lack better options are increasingly relying on mental health chatbots: the level of care these tools can provide is limited, but it may be better than nothing until a more comprehensive approach is implemented. These GenAI-based resources are not without challenges, including potential privacy problems and suboptimal responses, and societies and individuals will have to weigh whether such tools are better than the alternative; in resource-constrained environments, they may be.

Other future scenarios involve using GenAI to increase government efficiency on a range of tasks. In one such scenario, a government bureaucrat trained in economics is assigned to work on a policy brief related to the environment. The individual begins the brief, then puts the question to a GenAI tool, which helps draft an outline of ideas, reminds the individual of points that were missed, identifies key relevant international legal guideposts, and then translates the English-language brief into French. Another scenario involves an individual citizen using GenAI to figure out where to vote or pay taxes, clarify government processes, make sense of candidates’ policy positions, or explain certain policy concepts. These scenarios are already possible and accessible at all levels of society and will only become more prevalent as individuals grow familiar with the technology. However, it is important that users understand the limitations and how to use the technology appropriately, to avoid spreading misinformation or failing to find accurate information.

In an electoral context, GenAI can help evaluate aspects of democracy, such as electoral integrity. Manual tabulation of votes, for example, takes time and is onerous. However, new AI tools have played a role in ascertaining the degree of electoral irregularities. Neural networks have been used in Kenya to “read” paper forms submitted at the local level and enumerate the degree of electoral irregularities and then correlate those with electoral outcomes to assess whether these irregularities were the result of fraud or human error. These technologies may actually alleviate some of the workload burden placed on electoral institutions. In the future, advances in GenAI will be able to provide data visualization that further eases the cognitive load of efforts to adjudicate electoral integrity.

Approaches to the Privacy-Innovation Dilemma

Countries such as Brazil have raised concerns about the potential misuses of GenAI. After the release of ChatGPT in November 2022, the Brazilian government received a detailed report, written by academic and legal experts as well as company leaders and members of a national data-protection watchdog, urging that these technologies be regulated. The report raised three main concerns:

  • That citizen rights be protected by ensuring that there be “non-discrimination and correction of direct, indirect, illegal, or abusive discriminatory biases” as well as clarity and transparency as to when citizens were interacting with AI.
  • That the government categorize risks and inform citizens of the potential risks. Based on this analysis, “high risk” sectors included essential services, biometric verification and job recruitment, and “excessive risk” included the exploitation of vulnerable peoples and social scoring (a system that tracks individual behavior for trustworthiness and blacklists those with too many demerits or equivalents), both practices that should be scrutinized closely.
  • That the government issue governance measures and administrative sanctions, first by determining how businesses that fall afoul of regulations would be penalized and second by recommending a penalty of 2% of revenue for mild non-compliance and the equivalent of 9 million USD for more serious harms.

At the time of this writing in 2023, the government was debating next steps, but the report and deliberations are illustrative of the concerns and recommendations that have been issued with respect to GenAI in the Global South.  

In India, the government has approached AI in general and GenAI in particular with a less skeptical eye, which sheds light on the differences in how governments may approach these technologies and the basis for those differences. In 2018, the Indian government proposed a National Strategy for AI, which prioritized the development of AI in agriculture, education, healthcare, smart cities, and smart mobility. In 2020, the National Artificial Intelligence Strategy called for all systems to be transparent, accountable, and unbiased. In March 2021, the Indian government announced that it would use “light touch” regulation and that the bigger risk was not from AI but from not seizing on the opportunities presented by AI. India has an advanced technological research and development sector that is poised to benefit from AI. Advancing this sector is, according to the Ministry of Electronics and Information Technology, “significant and strategic,” although it acknowledged that it needed some policies and infrastructure measures that would address bias, discrimination, and ethical concerns.

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Categories

IoT & Sensors

What is the IoT and what are sensors?

The Internet of Things (IoT) refers to a network of objects connected over the internet. IoT-connected devices include everyday items such as phones, doorbells, cars, watches, and washing machines. The IoT links these devices for a range of tasks, processes, and environments – from streetlights on a “smart” urban grid to refrigerators in a “smart home” and even to “smart” pacemakers inside human bodies, which belong to the category of so-called “wearable” smart technology. Once installed and connected, these devices can communicate with one another with reduced human involvement.

Scientific diver in Indonesia. Autonomous reef monitoring structure uses sensors to support conservation efforts. Photo credit: Christopher Meyer.

Sensors are an integral component of the IoT: devices that detect and respond to changes in an environment from a variety of sources — such as light, temperature, motion, and pressure. When placed on devices and linked to an IoT network, sensors can share data in real time with other connected devices and management systems.

It is important to note that the IoT is an evolving concept, continually expanding to include more devices and increasing the level of connection and communication between devices.

How does IoT work and how do sensors work?

IoT devices connect wirelessly to an internet network. They are provided with unique identifiers (UIDs) and have the ability to transmit data to one another over the network without human intervention. IoT systems may combine the use of wearable devices, sensors, robots, data analytics, artificial intelligence, and many other technologies.

Sensors generally work by taking an input — such as light, heat, pressure, motion, or other physical stimuli — and converting this into an output that can then be communicated to a human user through some kind of signal or interface (for example, the display on a digital thermometer or the noise of a fire alarm). The output could also be forwarded directly into a larger, extended system such as an industrial plant. Usually, devices will have multiple sensors: for example, a smartphone has a touchscreen, a camera, a GPS, and an accelerometer to measure acceleration.
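The input-to-output conversion described above can be illustrated with a short sketch. The sensor model here is hypothetical — a linear analog temperature sensor read through a 10-bit analog-to-digital converter (ADC) — and real devices follow their manufacturer's datasheet.

```python
def adc_to_celsius(raw: int, adc_max: int = 1023, v_ref: float = 3.3) -> float:
    """Convert a raw ADC reading to a temperature in degrees Celsius.

    Assumes a hypothetical linear analog sensor that outputs 10 mV per
    degree Celsius, read through a 10-bit ADC with a 3.3 V reference.
    """
    voltage = raw / adc_max * v_ref  # raw count -> volts
    return voltage / 0.010           # volts -> degrees Celsius

raw_reading = 68  # hypothetical raw value from the ADC
print(f"Display: {adc_to_celsius(raw_reading):.1f} °C")  # prints "Display: 21.9 °C"
```

The "input" is the raw electrical reading; the "output" is the human-readable temperature a digital thermometer would display.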

Sensors can be “smart” or “non-smart,” meaning that they may or may not connect to the Internet. Smart sensors accept input from their surroundings and convert it into digital data using built-in computing capabilities. These data are then passed on for further processing. Take, for example, a “smart” irrigation system: an Internet-connected water meter might be used to continually measure the quantity and quality of the water in a reservoir. These data would be transmitted in real time to a water-management interface that a human could interpret to adjust the water delivery; alternatively, the irrigation system could be programmed to self-adjust without human intervention, shutting off automatically when the water falls below a certain quality or quantity.
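The self-adjusting logic of such an irrigation system can be sketched in a few lines. The threshold values and readings below are hypothetical illustrations, not taken from any real deployment.

```python
# Minimal sketch of a self-adjusting "smart" irrigation controller.
# Thresholds and sensor readings are hypothetical.

MIN_WATER_LEVEL_M = 0.5    # shut off below this reservoir depth (meters)
MAX_TURBIDITY_NTU = 25.0   # shut off above this turbidity (quality proxy)

def should_irrigate(water_level_m: float, turbidity_ntu: float) -> bool:
    """Return True only when both the quantity and quality checks pass."""
    return water_level_m >= MIN_WATER_LEVEL_M and turbidity_ntu <= MAX_TURBIDITY_NTU

# Example readings transmitted from the connected water meter:
print(should_irrigate(water_level_m=1.2, turbidity_ntu=10.0))  # True: irrigate
print(should_irrigate(water_level_m=0.3, turbidity_ntu=10.0))  # False: water too low
```

The same checks could equally drive a human-facing dashboard instead of an automatic shutoff.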

Smart sensors may be considered IoT devices themselves. The sensor in a mobile phone that automatically adjusts the brightness of its screen based on ambient lighting is an example of a smart sensor. Remote sensing involves the use of sensors in applications in which the sensing instrument does not make physical contact with the object or phenomenon being measured and recorded, for example, satellite imaging, radar, aerial photography, or videography by drones. The US National Aeronautics and Space Administration (NASA) has a list of the types of sensors used in remote sensing instruments. This short video provides a basic introduction to the different types of sensors.


How are the IoT and sensors relevant in civic space and for democracy?

The IoT has been harnessed for a range of civic, humanitarian, and development purposes, for example, as part of smart cities infrastructure, urban traffic management and crowd-control systems, and for disaster risk reduction by remote-sensing environmental dangers. The city of London has been using the IoT and big data systems to improve public transportation systems. (See Smart Cities page) These systems handle unexpected delays and breakdowns, inform passengers directly regarding delays, create maps of common routes via anonymized data, offer personalized updates to travelers, and make it possible to identify areas for improvement or increased efficiency. Transportation via autonomous vehicles represents one of the domains where the IoT is anticipated to bring important advancements.

In the Boudry community, Burkina Faso, smartphones and GPS are connected to identify land parcel boundaries. Photo credit: Anne Girardin.

Many uses of the IoT are being explored in relation to health care and assistance. For instance, wearable glucose monitors in the form of skin patches can continuously and automatically monitor the blood glucose levels of diabetic persons and administer insulin when required.

Device systems equipped with sensors are frequently used by researchers, development workers, and community leaders for gathering and recording data on the environment — for example, air and soil quality, water quality and levels, radiation levels, and even the migration of animals.

Sensor datasets may also reveal new and unexpected information, enabling people to tell evidence-backed stories that serve the public interest.

Finally, the use of the IoT is also being explored in relation to human rights defenders. IoT systems equipped with sensors can be used to document human-rights violations and collect data about them. A bracelet developed by the Natalia Project automatically triggers an alert when forcibly removed or activated by the wearer. The bracelet uses GPS and mobile networks to send a pre-defined distress message, along with the location of the device and a timestamp, to volunteers present nearby and to the headquarters of Civil Rights Defenders, a Swedish NGO.

However, along with these exciting applications come serious concerns. Digital devices have inherent vulnerabilities, and linking devices to the Internet and to one another magnifies security threats such as stalking, data leaks, and violations of personal privacy. Policing organizations can use IoT sensors to monitor and surveil minority communities, which may lead to civil-rights abuses. These concerns and other implications of a vast and pervasive network of data-collecting devices are explored in the Risks section.


Opportunities

Biological investigators in Madidi, Bolivia, set up a camera to photograph jaguars using a sensor that detects body heat. Photo credit: Julie Larsen Maher, Wildlife Conservation Society

The IoT and sensors can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to more effectively and safely think about IoT and sensors in your work.

Monitoring and Evaluation

IoT systems can facilitate the continuous monitoring of small and intricate details, transmitting these data to systems that can analyze and evaluate them in real time. This kind of monitoring has many implications for resource efficiency and sustainability. In Mongolia, inexpensive temperature sensors have been used in the monitoring and evaluation of a subsidy program that offered energy-efficient stoves for home heating. The stoves aimed to reduce air pollution and fuel expenditure. Information obtained from the sensors led to the conclusion that fuel efficiency had indeed been achieved even though the coal consumption of the surveyed households had remained constant. Other examples of sensor-enabled monitoring include the Riffle (Remote, Independent Field Friendly Logger Electronics), a set of open-source instrument designs that enables communities to collect data and monitor their water quality. The designs deploy different types of sensors for measuring parameters such as the temperature, depth, turbidity, and conductivity of water. Riffle is a part of Public Lab’s Open Water Project.

Safety and security systems

IoT systems, when installed legally in private homes and workplaces, can provide security. Smart locks, video cameras, and motion detectors can warn of or help prevent possible intrusion, while smart smoke detectors and thermostats can alert to and react to fire, dangerous air quality, etc. “Smart homes” advertise these security benefits. For instance, a video doorbell can send an alert to your smartphone when motion is detected and record a video of who or what triggered it for later viewing. Smart locks allow you to lock and unlock your doors remotely, or to grant access to guests through an app or keypad. Of course, these home security conveniences come with their own concerns. Amazon’s Ring home security system has been in the headlines multiple times after stories of hacking and vulnerabilities surfaced.

Early Warning Systems

IoT systems, when paired with data analytics and artificial intelligence, can help with early warnings about environmental or health risks, for example about the possibility of floods, earthquakes, or even the spread of infectious diseases like Covid-19. The company Kinsa Health has been able to leverage data gathered from its Bluetooth-connected thermometers to produce daily maps of which US counties are seeing an increase in high fevers, thereby offering real-time indications of where the disease may be clustering. Sensor networks combined with data analytics can be used by governments and ecologists to detect environmental changes that may indicate an emerging threat — for instance, a dangerous rise in water level or change in air quality — information that can be analyzed and shared rapidly to better understand, respond to, and alert others to the threat.
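The kind of aggregation behind such fever maps can be sketched as follows. The counties, readings, and thresholds here are hypothetical illustrations; a real system like Kinsa's operates on millions of readings with far more sophisticated baselines.

```python
from collections import defaultdict

FEVER_THRESHOLD_C = 38.0  # readings at or above this count as fever (assumed cutoff)
ALERT_SHARE = 0.15        # flag a county when more than 15% of readings are fevers

# Hypothetical (county, temperature) readings from connected thermometers:
readings = [
    ("County A", 36.8), ("County A", 38.5), ("County A", 39.1), ("County A", 37.0),
    ("County B", 36.5), ("County B", 37.1), ("County B", 36.9), ("County B", 36.7),
]

# Group the readings by county, then flag counties with an elevated fever share.
by_county = defaultdict(list)
for county, temp_c in readings:
    by_county[county].append(temp_c)

for county, temps in sorted(by_county.items()):
    fever_share = sum(t >= FEVER_THRESHOLD_C for t in temps) / len(temps)
    if fever_share > ALERT_SHARE:
        print(f"{county}: elevated fever share ({fever_share:.0%})")
```

A daily map is then just this per-county statistic plotted over time.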

Autonomous Vehicles

By allowing vehicles to communicate with one another and with road infrastructure — like traffic lights, charging stations, roadside assistance, and even sensor-lined highway lanes — the IoT could improve the safety and efficiency of road transport. Autonomous vehicles have a long way to go (both in terms of the technology and safety and the legal frameworks necessary for proper functioning); but the IoT is making self-driving vehicles a possibility, bringing new opportunities for car sharing, urban transport, and service delivery.


Risks

The use of IoT and sensors can also create risks in civil society programming. Read below on how to discern the possible risks associated with the use of these technologies in DRG work.

Mass surveillance and stalking

IoT systems generate large amounts of data which, in the absence of adequate protections, have been used by governments, commercial entities, or other actors for mass surveillance. IoT systems collecting data from their environments may collect personal data about humans without their knowledge or consent, or process and combine these data in an invasive, nonconsensual manner. IoT systems — when coupled with monitoring capabilities, AI systems (for example, face recognition and emotion recognition), and big-data analysis — present opportunities for mass surveillance, potentially for repressive purposes that harm human and civil rights. Indeed, authoritarian governments have a history of using IoT devices and so-called “smart cities” as tools of oppression and for silencing dissent.

Concerns about privacy, data protection, and security

The United States Federal Trade Commission (FTC) identified three key challenges the IoT poses to consumer privacy (2015): ubiquitous data collection, potential for unexpected uses of consumer data, and heightened security risks. IoT systems generate immense amounts of data and create large datasets. Meanwhile, IoT applications, especially consumer applications, are known to have privacy and security vulnerabilities, the risks of which are magnified given the quantity and potentially sensitive nature of data involved. Seemingly harmless information, or information collected without the full awareness of the people involved, can bring serious threats. For example, a visualization of exercise routes of users published by a fitness tracker app exposed the location and staffing of US military bases and secret outposts inside and outside the country.

Commercial entities may use data obtained from IoT systems to influence the behavior of consumers, for example through targeted advertising. Indeed, sensors in stores are increasingly being used to leverage data about users based on their in-store shopping behavior.

See also the Data Protection, Big Data, and Artificial Intelligence resources for additional information on privacy, data protection, and security concerns.

Increased risk of cyberattacks

The hardware and software of IoT devices are known to be highly vulnerable to cyberattacks, and cybercriminals actively try to exploit these security vulnerabilities. Increasing the number of devices on an IoT network means increasing the surface area for cyberattacks. Common devices such as Internet-connected printers, web cameras, network routers, and television sets are used by cybercriminals for malicious purposes, such as executing coordinated distributed denial of service (DDoS) attacks against websites, distributing malware, and breaching the privacy of individuals. There have been numerous incidents of hackers gaining access to feeds from home security cameras and microphones, including from monitors that enable parents to keep an eye on their infant while they are away. These incidents have led to a demand to regulate entities that design, manufacture and deploy commercial IoT devices and systems.

Uncharted ethical concerns

The ever-increasing use of automation brings ethical questions and concerns that may not have been considered before the arrival of the technology itself. Here are just a few examples: Would smart devices in the home recognize voice commands given by people who speak different languages, dialects, and in different accents? Would it be appropriate to collect voice recording data in these other dialects and accents — in a fully consensual and ethical way — in order to improve the quality of the device and its ability to serve more people?

Data obtained from IoT devices are also increasingly being used as digital evidence in courtrooms or for other law enforcement purposes, raising questions about the ethics and even the legality of such use, as well as about the accuracy and appropriateness of such data as evidence.

Vendor lock-in, insufficient interoperability between networks, and the challenge of obtaining informed consent from data subjects must also be taken into account. See more about these risks in the Digital IDs and Data Protection resources.


Questions

If you are trying to understand the implications of IoT and sensors in your work environment, or are considering using some aspects of IoT and related technologies as part of your DRG programming, ask yourself these questions:

  1. Are IoT-connected devices suitable tools for the problem you are trying to solve? What are the indicators or guiding factors that determine if the use of IoT or sensor technology is a suitable and required solution to a particular problem or challenge?
  2. What data will be collected, analyzed, shared, and stored? Who else will have access to this information? How are these data protected? How can you ensure that you collect the minimum amount of data needed?
  3. Are any personal or sensitive data being collected? How are you obtaining informed consent in this case? Is there the possibility that devices on your network will combine datasets that together create sensitive or personally identifying information?
  4. Are the technologies and networks you are using sufficiently interoperable for you to bring new technologies and even new providers into the network? Are they designed with open standards and portability in mind? Is there any risk of becoming locked into a particular technology provider?
  5. How are you addressing the risk of vulnerabilities or flaws in the software? How can you mitigate these risks from the start? For instance, is it necessary for these devices to connect to the internet? Can they connect to a private intranet or a peer internet network instead?


Case studies

Instructor pictured in Tanzania, where GPS-captured data can be used to map and assign land titles. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications
IoT to enhance the process of fortifying flour with key nutrients

Sanku (Project Healthy Children), an organization that aims to end malnutrition around the world, is equipping small flour mills across Africa with IoT technology to provide nutritious fortified flour to millions of people. Daily production data are sent in real-time, via the cellular link, to a dashboard that allows the organization to monitor the mills’ performance. Data collected include flour produced, nutrients dispensed, people reached, and any technical issues with the machine’s performance. “The IoT is enabling us to completely automate our operations and how we run our business.… It’s no longer a struggle trying to determine which mills need to be visited, which dosifiers need maintenance, and when products need to be delivered to avoid stock-outs.”

The Guardian Project develops apps for human rights defenders

The Guardian Project’s Proof Mode is an open-source camera application for mobile devices designed for activists, human rights defenders, and journalists. When a photo or video is shot using the device, the app gathers as much metadata as possible, such as a timestamp, the device’s identity, and location from different sensors present in the device. The app also adds a publicly verifiable digital signature to the metadata file, all of which ultimately provide the user with secure and verifiable digital evidence.

Another app by the Guardian Project, Haven, uses the sensors present in a mobile device, such as the accelerometer, camera, microphone, proximity sensor, and battery (power-on status) for monitoring changes to a phone’s surroundings. These changes are a) stored as images and sound files, b) registered in an event log that can be accessed remotely or anytime later, and c) used to trigger an alarm or to send secure notifications about intrusions or suspicious activity. The app developers explain that Haven is designed for journalists, human rights defenders, and people at risk of forced disappearance.

Temperature sensors to measure stove use behaviors in Ulaanbaatar

“…[I]n an impact evaluation of the [subsidy program]…, which aimed to reduce air pollution and decrease fuel expenditures through the subsidized distribution of more fuel-efficient heating stoves…[t]o gather unbiased and precise measurements of stove use behavior, small temperature sensors (stove use monitors or SUMs) were placed in a sub-set of surveyed households… The SUMs data on ambient temperature also indicated that homes with [subsidy program] stoves were kept warmer than those with traditional stoves, although the fuel used was the same on average. This suggests that households were utilizing fuel efficiency to increase home temperatures, whether knowingly or not. Ultimately, inexpensive temperature sensors were critical for collecting accurate outcome data on and explaining unexpected results.”

Using IoT sensors for healthcare

IoT sensors, like wearable heart monitors or sleep trackers, can help healthcare professionals in the treatment of patients. Monitors use connected information systems to analyze relevant health data for diseases like Parkinson’s, Alzheimers, and heart disease by helping doctors understand patterns, thereby providing better healthcare service delivery. By providing preventative care, healthcare systems can reduce costs and increase service to a wider range of patients.

Motion sensors to monitor hand-pump functionality in Rwanda, Case study 3

“In the featured pilot project in Rwanda, more than 200 sensors were installed at water pumps, with data from each sensor transmitted wirelessly through the cellular network to a dashboard for the operations and maintenance team. The dashboard displays the real-time status of each pump equipped with a sensor. This enables operations and maintenance teams to employ an “ambulance” model, dispatching teams only to water points flagged for repair or check-up.”


Satellite Systems

Irvine03 CubeSat Source: https://ipsf.net/news/nasa-selects-irvine03-cubesat-for-launch-mission/

What is a satellite?

A satellite is an object that orbits a planet or star; it can be a natural body like the Moon orbiting Earth, or an artificial object deployed by humans for diverse functions, including communication, Earth observation, navigation, and scientific exploration.

While Earth has one natural satellite, the Moon, several thousand artificial satellites trace orbits around the Earth. These human-made satellites range from 10-centimeter cubes weighing about a kilogram, known as CubeSats, to the International Space Station. Each carries instruments to perform specific tasks like connecting distant points through telecommunications links and observing Earth’s surface.

NASA & STS-132 Crew: Flyaround view of the International Space Station Source: https://images.nasa.gov/details-s132e012208

How do satellites work?

Satellites use specialized instruments to perform applications such as communication, Earth observation, navigation, and scientific research, collecting and transmitting relevant data back to ground stations, while being remotely managed and controlled.

At the most basic level, satellite systems have three component segments: the space segment, the terrestrial segment, and the data link between the two. In satellite systems comprising multiple space objects, there is often also a data link between the satellites. Since satellites in Earth’s orbit can be several thousand kilometers away from the nearest human, all of the instruments, tools, and fuel a satellite might need must be loaded into the machine at the start. This makes it difficult to change a satellite’s primary mission, although different end users may use the same satellite-derived data for varying purposes.

The terrestrial segment is most often a ground station that receives radiofrequency signals from satellites, but some systems have multiple ground stations or even transmit data to end users directly. For instance, while ground stations can be acres of antennas and data-processing facilities, a television satellite dish or a satellite phone are two types of personal ground stations.

What is an orbit?

Diagram of the orbits around the Earth Source: https://earthobservatory.nasa.gov/ContentFeature/OrbitsCatalog/images/orbits_schematic.png

Orbits are the result of two objects in space interacting with just the right balance of gravity and momentum. If a satellite has too much momentum, it will overcome Earth’s gravity and escape out of orbit and move into deep space. If a satellite has too little momentum, it will be pulled down into Earth’s atmosphere. As long as a satellite’s momentum remains constant, the object will travel in a predictable, infinitely repeating path around the Earth.

Not all satellites have the same momentum, and therefore different satellites orbit the Earth on different paths. These orbits are broadly grouped by their altitude above the Earth’s surface. These categories are, from lowest to highest altitude, low Earth orbit (LEO), medium Earth orbit (MEO), and geostationary or geosynchronous equatorial orbit (GEO). While there is no globally recognized “edge” of space, low Earth orbit is generally considered the region below 2,000 km above the Earth’s surface.

At the lowest altitudes, satellites must use onboard propulsion systems to overcome the effects of Earth’s atmosphere, which drags satellites out of orbit. When a satellite cannot overcome this drag, it de-orbits and often burns up upon reentry into Earth’s atmosphere. Sometimes, satellites or their component parts survive the reentry and crash into the surface of the Earth or into the ocean. Recent technological advancements have enabled satellite operators to achieve orbit at these very low altitudes. Typically, satellites in these low orbits take less than two hours to make one full trip around the globe. The amount of time a satellite takes to make one rotation around the Earth is called the “period.”

In contrast, geostationary or geosynchronous orbits take about 24 hours to circle the globe. Because their period keeps pace with the rotation of the Earth, these satellites appear to stay fixed in one spot above the Earth unless an operator maneuvers them. GEO orbits are about 36,000 km above the surface of the Earth. The MEO region encompasses the remaining space between LEO and GEO.
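The periods quoted above follow directly from Kepler's third law for a circular orbit, T = 2π√(a³/μ), where a is the orbit's radius measured from the Earth's center and μ is Earth's gravitational parameter. A short sketch using standard values:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter (m^3/s^2)
EARTH_RADIUS_M = 6_371_000  # mean Earth radius (m)

def orbital_period_hours(altitude_km: float) -> float:
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = EARTH_RADIUS_M + altitude_km * 1000  # semi-major axis (m)
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600

print(f"LEO at 550 km:   {orbital_period_hours(550):.2f} h")    # about 1.6 hours
print(f"GEO at 35786 km: {orbital_period_hours(35786):.2f} h")  # about 24 hours
```

The 550 km altitude is just an illustrative LEO case; any LEO altitude gives a period under two hours, consistent with the text.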

Certain altitudes are better suited for certain types of tasks than others. For instance, because satellites in LEO are so close to the Earth’s surface, no one satellite can provide wide coverage of the Earth’s surface. Satellites in MEO and GEO can “see” more of the Earth at any one point in time, by virtue of their distance away from the Earth. The area of the Earth that a satellite can observe or service is called the “field of regard.” The size of this field is an important factor in deciding how many satellites an operator needs to provide a service and how high those satellites should be in orbit.
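The field of regard grows with altitude in a way that is easy to quantify: from altitude h, a satellite can geometrically see a spherical cap covering a fraction h / (2(R + h)) of the Earth's surface, where R is the Earth's radius. A quick sketch (ignoring the minimum elevation angle a real service requires, which shrinks the usable area):

```python
EARTH_RADIUS_KM = 6371  # mean Earth radius

def visible_fraction(altitude_km: float) -> float:
    """Fraction of Earth's surface geometrically visible from a satellite.

    The visible region is a spherical cap with cos(lam) = R / (R + h),
    so the cap's area fraction is (1 - cos(lam)) / 2 = h / (2 * (R + h)).
    """
    return altitude_km / (2 * (EARTH_RADIUS_KM + altitude_km))

print(f"LEO at 550 km:   {visible_fraction(550):.1%}")    # roughly 4%
print(f"GEO at 35786 km: {visible_fraction(35786):.1%}")  # roughly 42%
```

This is why a handful of GEO satellites can blanket most of the globe, while LEO services need many satellites working in concert.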

Satellite imagery of Mount Merapi, Indonesia Source: https://www.planet.com/gallery/#!/post/mount-merapi-fumes
Megaconstellations and Modern Advancements

Early satellites were relatively small machines that performed rudimentary tasks or demonstrated a capability. During the early days of space exploration, designing and building a satellite was an expensive and long-term undertaking. Launching the satellite into space was another expensive step along the way to deploying a satellite. As engineers gained expertise in building and launching satellites, these machines grew in size and sophistication. Engineers designed hulking satellites weighing thousands of kilograms to carry several instruments, many of which remain in space today.

The paradigm of building one large object has shifted toward building many small objects to accomplish the same mission. These small satellites support the same mission in concert, forming networks called constellations. The concept of operating constellations of satellites is not particularly new – ambitious business plans from the 1980s aimed to leverage dozens of satellites to offer global telecommunications services. Constellations of satellites are often designed to provide a baseline of regional coverage, with the potential to enlarge the service range later. For instance, Japan’s Quasi-Zenith Satellite System uses a constellation of four satellites working in concert to provide navigation services in Asia-Pacific. The principle of using many satellites in concert has become more popular over time.

The plummeting cost of satellite manufacturing and launch has facilitated more exotic designs that include thousands of satellites, called megaconstellations. Operating hundreds or thousands of coordinated satellites in a megaconstellation offers distinct benefits. Megaconstellations can consist of thousands of satellites in LEO. Satellites in LEO have small fields of regard, meaning they can only service a sliver of the Earth’s surface at any given time. Adding another satellite, or several satellites, increases the service area by expanding the field of regard. Megaconstellations take this principle to the extreme, knitting thousands of individual satellites’ fields of regard together to create a blanket of coverage. Coordinating and precisely positioning satellites ensures the network can send signals to any point on the Earth at any time.

Operating in LEO offers other benefits. Megaconstellations orbiting at relatively low altitudes can send and receive signals from the ground more quickly than those further away from the Earth’s surface. Because the signal does not have to travel as far, LEO megaconstellations reduce the time a signal is “in transit” between ground stations and satellite terminals, called “latency.” This facilitates faster communications with less lag. Megaconstellations with low latency can help organizations become more efficient and productive as they transition to 5G technologies.
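The latency advantage of LEO is simple geometry: the best-case one-way delay is the altitude divided by the speed of light. A back-of-envelope sketch (assuming a satellite directly overhead; real signal paths are longer):

```python
SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum (km/s)

def min_one_way_delay_ms(altitude_km: float) -> float:
    """Best-case one-way signal delay to a satellite directly overhead."""
    return altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"LEO at 550 km:   {min_one_way_delay_ms(550):.1f} ms")   # about 2 ms
print(f"GEO at 35786 km: {min_one_way_delay_ms(35786):.0f} ms") # about 119 ms
```

A full round trip through a satellite (up and down, twice) multiplies these figures, which is why GEO links feel noticeably laggy while LEO links do not.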

The further the distance between a satellite and the Earth, the more onboard power the satellite needs to send a signal from space to Earth. Minimizing the distance between satellites and ground stations also minimizes the amount of onboard power needed to produce the signal. This in turn helps reduce the satellite’s size, and often the price of manufacturing. Thus, although megaconstellations require hundreds if not thousands of satellites to provide global coverage, these satellites are generally cheaper per unit. This helps satellite owners stockpile replacement satellites in case any of the assets fail to reach orbit or break once they are in space.

The general trends of satellites becoming cheaper and decreasing launch costs have enabled more than just megaconstellations. Reducing the costs of fabricating and placing a satellite in orbit has opened the playing field to new actors, especially those who may have been excluded from participating in satellite-systems developments based on price alone. Space is no longer restricted to high-income countries; now low- and middle-income countries (LMICs) can own the entire satellite-development lifecycle, including mission design, satellite fabrication, testing and validation, and operations. Relatively lower costs also allow prospective satellite operators to undertake missions that may not have been financially attractive to large or foreign corporations that did not share the same societal motivations.

Satellite Lifecycles/Environmental Issues/Debris Risks

In addition to the thousands of operational satellites in orbit, there are millions of pieces of space trash. Orbital debris is essentially anything in orbit that does not work – this includes everything from non-functional satellites to fragments of exploding bolts that are used to separate spacecraft from rocket boosters. Clouds of debris are generated when two space objects collide, independent of whether the collision was accidental or intentional. Even very small pieces of debris are dangerous – debris as small as a centimeter can be lethal in collisions with operational satellites. Some regions of space are more threatened than others, due to the density of the debris or the potential for debris-creating events.

There is an emerging movement to both reduce the amount of debris created by space activities and to remove the existing derelict objects. This emphasis on space sustainability bodes well for the future. Nevertheless, the current state of the orbital environment presents elevated debris risks. The increase in the amount of debris – originating from the nations most established in space – has imposed risks on new entrants.

Solar panels from the Hubble Space Telescope showing debris impacts Source: https://www.esa.int/var/esa/storage/images/esa_multimedia/images/2009/05/esa_built-solar_cells_retrieved_from_the_hubble_space_telescope_in_2002/10102613-2-eng-GB/ESA_built-solar_cells_retrieved_from_the_Hubble_Space_Telescope_in_2002_article.jpg


How are satellites relevant in civic space and for democracy?

Satellites provide services and collect data that greatly benefit society. Satellite systems provide broadband and telecommunications services that offer citizens another avenue for digital connectivity. Digital connectivity is an invaluable tool that can expand citizens’ access to civic space, support democratic processes, and empower freedom of expression. While the fundamental principles and physics underpinning these applications remain constant, novel paradigms in satellite-system design, such as megaconstellations, have reduced the costs to access these services. Other types of satellites have experienced more linear, but nonetheless impactful, technology advancements. For example, better optical sensors allow satellites to collect more precise and clear imagery of Earth. These satellite-derived data are invaluable for both crisis response and long-term planning, enabling well-organized emergency response work as well as empowering efforts to strengthen democracy. Other sensors allow scientists to analyze the impact of climate change and design more appropriate remediation processes.

Internet connectivity has famously enabled activism and fostered communities of civic-minded individuals around the world. Satellite-enabled connectivity builds on these trends, helping link citizens to social services and each other. Satellite internet networks overcome many of the logistical challenges that prevent terrestrial broadband networks from serving rural or difficult-to-reach communities. Public-private partnerships have improved services in areas that suffered from poor or nonexistent broadband connectivity.

Other Earth observation tools can be used to improve democratic processes. Detailed maps derived from satellite imagery can help prepare for, execute, and analyze election results. Satellite data provides a clear view of electoral maps, allowing civil society to identify issues and propose meaningful changes. For instance, satellite maps can identify underserved populations and validate new polling stations in the runup to an election. Precise maps can also reveal voting trends and, when overlaid with socioeconomic or demographic information from other sources, can inform renewed efforts on voter outreach and campaign strategy. Satellite connectivity has a proven history in facilitating the collection and transmission of votes in a secure, transparent, and timely manner.

Satellite services directly support development work over a range of efforts, including agricultural development, environmental monitoring, and mapping socioeconomic indicators. These types of data support both project planning and monitoring and evaluation. In the past, large satellites used massive optical or other types of sensors to collect data while passing over the Earth. The miniaturization of these sensors allows operators to launch several satellites, reducing the amount of time it takes to revisit a site of interest. Emerging satellite-system-design paradigms like large constellations of Earth-observation satellites can revisit areas of the Earth more frequently, collecting data that allow researchers to monitor changes with more nuance and fidelity.
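
The relationship between constellation size and revisit frequency described above can be sketched with a toy calculation. This is an illustrative simplification, not an orbital-mechanics model: it assumes satellites are evenly phased and that each contributes the same number of daily passes over a site.

```python
def revisit_hours(n_satellites: int, passes_per_day_per_sat: float = 1.0) -> float:
    """Rough average gap, in hours, between imaging opportunities over a
    fixed site. Assumes evenly phased satellites that each contribute the
    same number of daily passes -- a deliberate simplification that
    ignores real orbital geometry, swath width, and lighting conditions."""
    total_daily_passes = n_satellites * passes_per_day_per_sat
    return 24.0 / total_daily_passes

# One satellite passing once a day revisits a site every 24 hours;
# a 12-satellite constellation cuts the average gap to about 2 hours.
single = revisit_hours(1)          # 24.0
constellation = revisit_hours(12)  # 2.0
```

Under this simple model, revisit time falls in inverse proportion to the number of satellites, which is why large constellations can monitor change with much greater fidelity than a single large platform.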

The Mulanje Massif, captured by the ISERV system aboard the International Space Station
Source: https://www.nasa.gov/image-article/servirs-iserv-image-of-mulanje-massif-malawi/

Satellite imagery and Earth-observation data go beyond playing a role in monitoring the impact of development efforts and can be used to plan responses to crises. In a post-pandemic world, good data on epidemiology and other public health issues have never been more valuable. Satellites are instrumental in collecting that data. Satellite data is increasingly leveraged for public health applications, including understanding the underlying factors that affect who is most at risk of illness. Recent advances in satellite data collection have helped researchers build a deeper and more nuanced understanding of public health issues. This in turn aids tailored responses, and in some cases can support preventative efforts. For instance, analysis of data collected by satellites can help identify where the next public health hazard might occur, enabling preventative action. This type of satellite service can be made even more powerful when used in concert with other emerging technologies like Artificial Intelligence and Machine Learning and Big Data projects.

A composite image of the Earth at night produced by imagery from the Moderate Resolution Imaging Spectroradiometer. This type of imagery has been used by public health researchers to better estimate at-risk populations

Current satellite technology and services are vulnerable to authoritarian or antidemocratic efforts. Because satellites are, at their core, hardware, physical attacks remain a serious threat. Ground stations and terminals are often targeted in attempts to keep citizens from accessing satellite-enabled connectivity. Television antennas and satellite internet terminals are difficult to hide without reducing their efficacy, making them easy targets for police or antidemocratic security services that wish to limit citizens’ access. Designs for future systems have yet to address the vulnerabilities of current terminals. In extreme cases, satellite signals may be jammed to prevent citizens from accessing a service. Domestic regulations pose another hurdle: states maintain jurisdiction over the radiofrequency spectrum within their borders and can use licensing and regulatory processes to control what types of connectivity systems are available to their citizens and foreign visitors.

Back to top

Opportunities

Satellites can have positive impacts when used to further democracy, human rights, and governance. Read below to learn how to think more effectively and safely about satellite use in your work.

Skip a Step

Many citizens in digital deserts are now able to bypass traditional connectivity methods and leapfrog to benefitting from satellite-enabled connectivity. Improved internet connectivity provides a new avenue for citizens to benefit from civil services and engage in political discourse. Internet access can be expanded without needing expensive and intensive local infrastructure projects.

Digital Inclusion

Satellite data and services have agricultural uses beyond crop monitoring and resource optimization. Smallholder farmers, especially in LMICs without established banking infrastructure, are often excluded from traditional financial markets that provide only credit and not savings, loans, or other services. Women are also disproportionately affected by financial exclusion. Innovative lenders like the Harvesting Farmers Network use satellite technologies and remote sensing to address these gaps and help underserved agricultural producers. Earth-observation data can be used to assess agricultural productivity, helping lenders move beyond requiring a paper trail or other documentation and reducing barriers to financial market access.

Access to banking through satellite-enabled connectivity addresses populations beyond smallholder agricultural producers. Satellite connectivity helps geographically isolated populations utilize financial services. Satellites are helping un- or underserved populations across sub-Saharan Africa access banking, while Mexico has partnered with commercial satellite internet providers to achieve similar digital financial inclusion goals.

More Data, Less Hardware

Satellites may be expensive systems, but accessing satellite services and data need not require a prohibitively large financial outlay. Satellite operators sometimes make data collected by their systems free to the public. This practice is common across government and industry. For example, the United States’ National Aeronautics and Space Administration provides a variety of free datasets to support an open and collaborative scientific culture around the world. Satellite industry actors take a slightly different approach to open data. Some commercial entities like Maxar have a long history of providing free and open data in times of crisis or after disasters to assist humanitarian responses.

Open sharing of satellite data across borders helps researchers tackle public health issues. Yet there is still room to improve both the collection of remote sensing data and how that satellite-derived data is used. It is important for end-users to understand the effects of data preprocessing, which can both help and hinder analyses. Preprocessing can streamline the analytical process and eliminate the need for in-house expertise; on the other hand, receiving preprocessed data can limit the sophistication of the final analysis. When an organization has the technical capacity and time to process data itself, raw data may be the better option. It is therefore important to use imagery and remote sensing data that fit both an organization’s purpose and its technical expertise.
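
As a concrete example of the preprocessing discussed above, many vegetation and agriculture analyses start from the Normalized Difference Vegetation Index (NDVI), a standard product derived from raw red and near-infrared band reflectances. The sketch below uses made-up pixel values purely for illustration:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from raw near-infrared and
    red reflectances: (NIR - Red) / (NIR + Red)."""
    if nir + red == 0:
        return 0.0  # guard against division by zero on no-data pixels
    return (nir - red) / (nir + red)

# Illustrative reflectance values: dense vegetation reflects strongly in
# the near-infrared, so NDVI approaches 1; water and bare ground sit
# near or below 0.
vegetation = ndvi(0.45, 0.05)  # ~0.8
bare_soil = ndvi(0.30, 0.25)   # ~0.09
water = ndvi(0.10, 0.15)       # ~-0.2
```

An organization receiving NDVI tiles directly can skip this computation entirely, while one receiving raw band data retains the flexibility to compute other indices suited to its question – exactly the trade-off between convenience and analytical control described above.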

A color-enhanced image of phytoplankton in the Patagonian Shelf Break, taken by the Suomi National Polar Orbiting Partnership Satellite Source: https://www.nasa.gov/image-article/colorful-plankton-full-patagonian-waters/

South-South Cooperation and Rejecting Post-Colonial Expectations

More and more countries are participating in developing satellite technology or utilizing the data from satellites, including those in the Global South. Many of these governments are collaborating or partnering with established industrial actors or other more advanced spacefaring nations. As more LMICs develop their local capacities, they also expand the potential for deeper South-South cooperation. Furthermore, the Global South can push back against colonial narratives by investing in satellite and space systems. States with colonial histories can push past expectations that they should base their economies on natural-resource extraction or other rudimentary products by delivering highly technical assets like satellites on a global scale.

Amazonia-1, Brazil’s first satellite, launching from Sriharikota in India Source: http://www.inpe.br/amazonia1/img/galeria/66.jpg

Back to top

Risks

The use of emerging technologies can also create risks in civil society programming. Read below about how to discern possible dangers associated with satellites in DRG work, as well as how to mitigate unintended – and intended – consequences.

Onerous Regulation

Arranging satellite connectivity is not as simple as turning a device on – broadcasters must receive specific authorizations and licenses from a country’s government to beam connectivity into their territory. Well-meaning but onerous government bureaucratic processes may delay when a population could start to benefit from satellite connectivity. In other cases, political interests might prevent satellite operators from serving a population in an attempt to control citizens’ access to information or opposition campaigns.

Signal Vulnerability

Satellite signals are vulnerable to interference, even when a satellite operator is fully licensed to operate in a country. Signals are susceptible to both political and physical interference. Governments can revoke licenses with little to no warning, effectively ending a satellite operator’s ability to legally provide connectivity services within a country’s borders. The bureaucratic hoops a service provider must jump through to receive a license are often more onerous than the process for a government to revoke one. There are few best practices or exemplary guidelines on what constitutes grounds for revocation, so each state is a unique case, and few states appear to have thought carefully about why and under what circumstances a satellite provider should lose its license to operate.

Overreliance

Just as a government could revoke a license, so too could a commercial company stop providing satellite services. Civil society must therefore be wary of becoming overly reliant on a single provider, lest this provider decide to cut service. A provider may cease serving a country for many reasons, including financial difficulties or political motivations. For example, Starlink connectivity was impeded, if not turned off entirely, in Ukraine during the war with Russia.

Actors in civil society who wish to work with other entities for satellite projects should also be careful to not become overly dependent on partners that hold overwhelming leverage over a project. The incentives that motivate technology transfer and the sharing of expertise are not always aligned among partners. Alignment issues can cause friction and affect the benefits of a project. This risk is also likely to be relevant in state-to-state interactions.

Unethical Data Access

In the wrong hands, satellite data could be used for a variety of malevolent purposes. Location data, maps, or logs of when a device was transmitting a signal to a satellite could be used by bad actors to erode one’s physical privacy. Satellite connectivity providers may sell users’ data, but some types of sensitive data could be obtained by third parties with sophisticated collection techniques. Few countries have established robust domestic regulations to limit the negative effects of electronic surveillance of satellite-enabled connectivity.

Financial Burden

Even with the attendant risks of overreliance, partnerships with commercial entities or foreign states may be necessary due to the high cost of developing and launching satellites. While advancements in manufacturing and launch have reduced the costs of deploying and operating a satellite, fit-for-purpose systems are still often prohibitively expensive. This is especially salient for states that have limited fiscal space and an obligation to address other social issues.

Talent Retention

States that do make a concerted effort to develop a satellite industry or provide state-supported satellite services for their citizens may also face challenges in retaining technical capacity. It is difficult for low- and middle-income countries to keep well-trained engineers and other professionals engaged in domestic satellite issues. These challenges are even more acute when citizens are reliant on foreign partners and do not see pathways to growth and productivity at home. The problem is exacerbated by the fact that government salaries cannot match private-sector salaries for technical experts. Without a domestic talent pool to draw from, states risk being unable to advocate for themselves both in negotiations for technical services and in multilateral forums on space governance and norm setting.

Lack of Multilateral Governance

New paradigms like megaconstellations threaten future generations’ ability to benefit from technologies in Earth’s orbits. The risk of orbital overcrowding parallels terrestrial concerns about environmental sustainability. Earth’s orbits may be massive in terms of total volume, but they are finite resources. There is a fine line between maximizing the uses of Earth’s orbits and launching so many objects into space that no satellite can operate safely. Overcrowding affects all of humanity, but it is particularly acute for emerging or aspirational spacefaring states that may be forced to operate in a high-risk environment, having missed the window of opportunity to take their first steps in space during a relatively safer period. Such a situation has secondary effects – states that are unable to safely commence their space activities are also less likely to be able to demonstrate and reinforce normative expectations for responsible behavior. Not having a demonstrated space capability makes pathways for participating in current multilateral space governance processes even more challenging.

There are few global rules that support sustainable and equitable uses of space. Some states have recently adopted more stringent regulations on how companies can use space, but the uncoordinated effort of a few states is unlikely to ensure humanity’s access to a low-risk orbital environment for generations to come. Achieving these space sustainability goals is a global endeavor that requires multilateral cooperation.

Back to top

Questions

To understand the implications of satellite use in your work, ask yourself these questions:

  1. Are there barriers that prevent the benefits of satellites from being leveraged in your country? What are they? Funding? Expertise? Lack of local governance?
  2. Are satellite-derived data or services tailored to your specific needs?
  3. How competitive is the market for satellite services in your area, and how does this competition, or lack thereof, affect the cost of accessing satellite services?
  4. Are the connectivity-enabling satellites you plan to use up to date on cyber security measures?
  5. What types of ground station(s) does the space system use, and is that infrastructure sufficiently secured from seizure or tampering?
  6. Does the satellite owner or operator adhere to or promote sustainable uses of space?
  7. What structural or regulatory changes must be enacted within your country of interest to extract the greatest value from a satellite system?
  8. Have satellite systems been implemented in other states and, if so, are there ways to avoid or overcome challenges prior to implementation?
  9. How can your use of satellite services or data promote the adoption of nascent international behaviors that would conserve your ability to access space services over the long term?
  10. Are you creating risky dependencies? How trustworthy and stable are the organizations you are relying on? Do you have a backup plan?
  11. Are the applications you are accessing through satellite connectivity secure and safe?

Back to top

Case Studies

Vanuatu voting registration

The United Nations Development Programme (UNDP) and the United Nations Satellite Centre (UNOSAT) partnered on an initiative to assist Vanuatu to register voters in preparation for Vanuatu’s 2021 provincial elections. UNOSAT used satellite data to develop the first complete dataset to represent all villages in the archipelago. This data was used in concert with measurements of voter turnout to quantify the impact of polling-station locations. Satellite data were used to locate difficult-to-reach populations and maximize voter turnout. Using satellite data helped streamline election-related work and reduced the burden on election officials.

Partnerships to Provide Imagery in Support of Peace

Satellites’ ability to capture overhead imagery is especially valuable in documenting human-rights violations in states that restrict activists’ and inspectors’ access. A recent partnership between Human Rights Watch and Planet, a US-based company that operates Earth-observation satellites, enables activist groups to hold national leadership accountable. In this case, Human Rights Watch analyzed satellite images of Myanmar provided by Planet to confirm the burning of ethnically Rohingya villages. The frequent collection of satellite imagery showed that several dozen villages were burned, contradicting Myanmar leadership’s declarations that the state-sponsored clearance operations had ended. Activists used this uncovered truth to call for an urgent cessation of violence and support the delivery of humanitarian aid.

Satellite Television

Satellites enable many forms of mass communication, including television. While television is a diversion or luxury in many places around the world, it is also a powerful tool for shaping political discourse. Satellite television can provide the citizenry with programming from around the globe, expanding horizons beyond local programming. Satellite television came to India in 1991 after years of state control over broadcast media. On the one hand, receiving satellite television was a marker of modernity; on the other, the programming it provided became a societal phenomenon. Satellite television brought more than 300 new channels to India, nurturing cultural engagement and reshaping how citizens engaged with each other and with the state. This was especially liberating in the post-colonial context, as Indian society now controlled its media outlets and showcased considerations of social identity through satellite television. For more information, please see Television in India: Satellites, politics, and cultural change.

Servir Ecologic Work

Through the Servir program, a collaborative initiative led by the United States Agency for International Development and the U.S. National Aeronautics and Space Administration, U.S. government agencies partner with local organizations in affected regions to use satellite data to design solutions to tackle environmental challenges around the world. Among many other contributions, the Servir team is working in concert with partners in Peru and Brazil to use satellite and geospatial data in precise maps to help inform decisions about agricultural and environmental policies. This work supports stakeholder efforts to understand the complex interface between agriculture productivity and environmental sustainability. The results are used to design policy incentives that promote sustainable farming of cocoa and palm oil. Local stakeholders including the farming communities can use the satellite-derived data to optimize their land use.

South-South Cooperation on Agricultural Monitoring

Satellites are invaluable tools in agricultural development. The CropWatch program, initiated by the Chinese Academy of Sciences, works to provide LMICs with access to data collected by satellites and training for using these data for their specific needs. The CropWatch program supports agricultural monitoring and enables states to better prepare for food security challenges. States have been able to engage with each other through extensive training programs, allowing for South-South collaboration on shared issues. The data collected through CropWatch can be tailored to accommodate local requirements.

Access to a Voice

Clandestine use of satellite internet has allowed protesters in Iran to access the internet through alternate means. The Iranian government exercises close control over traditional methods of accessing the internet to stifle protests and civil activism. These methods of controlling or limiting free speech, democratic activism, and civil organization have so far been ineffective in limiting citizens’ access to satellite internet, provided by services like Starlink. The Iranian government still exercises some control over satellite internet in the country – ground-station terminals must be smuggled across the border to provide service to activists.

Amnesty Decode Darfur Project

Satellites help confirm ground truths. Amnesty International has a long history of using satellite imagery to produce credible evidence of human-rights abuses. This project called for digital volunteers to map Darfur and identify potentially vulnerable populations. The next phase of the project compared satellite imagery of the same locations taken at different times to pinpoint evidence of attacks by the Sudanese government and associated security forces. Amnesty maintains its own in-house satellite-imagery-analysis team to corroborate on-the-ground accounts of violence, but this project showed that even amateur volunteer analysis of satellite imagery can be a viable way to investigate human-rights abuses and hold states accountable.

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Categories

Smart Cities

What are smart cities?

Smart cities can take many forms, but generally leverage digital technologies like artificial intelligence (AI) and the Internet of Things (IoT) to improve urban living. The technologies and data collection that underpin smart cities have the potential to automate and improve service delivery, strengthen disaster preparedness, boost connectivity, and enhance citizen participation. But if smart cities are implemented without transparency and respect for the rule of law, they risk eroding good governance norms, undermining privacy, and extinguishing free expression.

How do smart cities work?

Solar power lights an evening market in Msimba, Tanzania. Smart cities monitor and integrate their infrastructure to optimize resource use. Photo credit: Jake Lyell.

Smart cities integrate technology with new and existing infrastructure—such as roads, airports, municipal buildings, and sometimes even private residences—to optimize resource allocation, assess maintenance needs, and monitor citizen safety. The term “smart city” does not refer to a single technology, but rather to multiple technologies working together to improve the livability of an urban area. There is no official checklist for the technologies a city needs to implement to be considered “smart.” But a smart city does require urban planning, including a growth strategy managed by the local government with significant contributions from the private sector.

Data is at the heart of the smart city

Smart cities generally rely on real-time data processing and visualization tools to inform decision making. This usually means gathering and analyzing data from smart sensors installed across the city and connected through the Internet of Things to address issues like vehicular traffic, air pollution, waste management, and physical security.
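
As a hedged illustration of what that real-time processing can look like, the sketch below smooths a stream of air-quality readings with a small rolling window and raises an alert when the average crosses a threshold. The 35 µg/m³ cutoff mirrors a common 24-hour fine-particulate guideline but, like the window size, is used here only as an example value:

```python
from collections import deque

def rolling_alerts(readings, window=3, threshold=35.0):
    """For each incoming sensor reading (e.g. PM2.5 in micrograms per
    cubic meter), report whether the rolling mean of the last `window`
    values exceeds `threshold`. Both parameters are illustrative."""
    buf = deque(maxlen=window)  # automatically drops the oldest reading
    flags = []
    for value in readings:
        buf.append(value)
        flags.append(len(buf) == window and sum(buf) / window > threshold)
    return flags

pm25 = [20, 30, 40, 60, 80, 50, 20]
alerts = rolling_alerts(pm25)
# Alerts begin once the 3-reading average climbs above 35:
# [False, False, False, True, True, True, True]
```

Smoothing over a window rather than alerting on single readings is a common design choice for city sensor networks, since it filters out the one-off spikes that cheap distributed sensors routinely produce.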

Data collection in smart cities also provides a feedback mechanism to strengthen the relationship between citizens and local government when accompanied by transparency measures, such as making public information about official budgets and resource allocations. However, the misuse of sensitive personal data can alienate citizens and reduce trust. A detailed, rights-respecting data-management strategy can help ensure citizens understand (and consent to) how their data are collected, processed, and stored, and how they will be used to benefit the community.

All Smart Cities are different

Cities are extremely diverse, and the implementation of smart-city solutions will vary depending on location, priorities, resources, and capabilities. Some smart cities are built by overlaying ICTs on existing infrastructure, like in Nairobi, while others are built “from scratch,” like Kenya’s “Silicon Valley,” Konza City. Alongside technological development, other non-digital elements of smart cities include improvements to housing, increased walkability, the creation of new parks, the preservation of wildlife, etc. Ultimately an emphasis on improved governance and sustainability can generate better outcomes for citizens than an explicit focus on technology, digitization, and growth.

Smart cities in developing countries face unique legal, regulatory, and socioeconomic challenges.

Drivers for Smart City Development in Developing Countries

  • Financing capacity of the government
  • Regulatory environment that citizens and investors trust
  • Technology and infrastructure readiness
  • Human capital
  • Stability in economic development
  • Active citizen engagement and participation
  • Knowledge transfer and participation from the private sector
  • Ecosystem that promotes innovation and learning

Barriers to Smart City Development in Developing Countries

  • Budget constraints and financing issues
  • Lack of investment in basic infrastructure
  • Lack of technology-related infrastructure readiness
  • Fragmented authority
  • Lack of governance frameworks and regulatory safeguards
  • Lack of skilled human capital
  • Environmental concerns
  • Lack of citizen participation
  • Technology illiteracy and knowledge deficit

Children playing at Limonade plaza, Haiti. Improving the quality of life for citizens is at the heart of smart city projects. Photo credit: Kendra Helmer/USAID.

The development of a smart city that truly benefits citizens requires careful planning, which typically takes several years before city infrastructure can be updated. The implementation of a smart city should take place gradually as political will, civic demand, and private-sector interests converge. Smart city projects can only be successful when the city has developed basic infrastructure and put in place legal protections to ensure citizens’ privacy is respected and safeguarded. The infrastructure needed for smart cities is expensive and requires routine, ongoing maintenance and review by skilled professionals. Many planned smart-city projects have been reduced to graveyards of forgotten sensors due to a lack of proper maintenance, or because the data gathered were not ultimately valuable to the government and citizens.

Common Elements of a Smart City

Below is an overview of technologies and practices common to smart cities, though this list is by no means exhaustive or universal.

Open Wi-Fi: Affordable and reliable internet connectivity is essential for a smart city. Some smart cities provide free access to high-speed internet through city-wide, wireless infrastructure. Free Wi-Fi can facilitate data collection, support emergency response services, and encourage residents to use public places.

Internet of Things (IoT): The Internet of Things is an expanding network of physical devices connected through the internet. From vehicles to refrigerators to heating systems, these devices communicate with users, developers, applications, and one another by collecting, exchanging, and processing data. For example, data collected from smart water meters can inform better responses to problems like water leaks or water waste. The IoT is largely facilitated by the rise of smartphones, which allow people to easily connect to one another and to other devices.
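
The smart water meter example above can be made concrete with a simple heuristic: most households have at least some zero-flow periods overnight, so a meter that never reads zero during quiet hours may be leaking. The quiet-hour range and readings below are illustrative assumptions, not a production rule:

```python
def possible_leak(hourly_litres, quiet_hours=range(1, 5)):
    """Flag a meter whose flow never drops to zero during typically quiet
    overnight hours (here 01:00-04:00, an illustrative choice).
    Continuous flow at those times often indicates a leak."""
    return all(hourly_litres[hour] > 0 for hour in quiet_hours)

# 24 hourly readings in litres: a constant 2 L/h trickle overnight is
# suspicious, while a meter that reads zero at night is not flagged.
leaky_meter = [2.0] * 24
normal_meter = [0.0] * 5 + [4.0] * 19

leak_suspected = possible_leak(leaky_meter)    # True
all_clear = possible_leak(normal_meter)        # False
```

A real utility would layer more signals on top of this (seasonal baselines, per-household history, meter error codes), but the core pattern – devices reporting data that simple server-side rules turn into actionable alerts – is representative of how IoT deployments work.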

5G: Smart city services need internet with high speeds and large bandwidth to handle the amount of data generated by the IoT and to process these data in real time. The increased connectivity and computing capacity of 5G internet infrastructure facilitates many of the internet-related processes needed for smart cities.

Smart Grids: Smart grids are energy networks that use sensors to collect real-time data about energy usage and the requirements of infrastructure and citizens. Beyond controlling utilities, smart grids monitor power, distribute broadband to improve connectivity, and control processes like traffic. Smart grids rely on a collection of power-system operators and involve a wide network of parties, including vendors, suppliers, contractors, distributed generation operators, and consumers.

Intelligent Transport Systems (ITS): Through intelligent transport systems, various transportation mechanisms can be coordinated to reduce energy usage, decrease traffic congestion, and shorten travel times. ITSs also focus on “last-mile delivery,” or optimizing the final step of the delivery process. Autonomous vehicles are often associated with smart cities, but ITS goes far beyond individual vehicles.

Surveillance: As with connected objects, data about residents can be transmitted, aggregated, and analyzed. In some cases, existing CCTV cameras can be paired with advanced video-analytics software and linked to the IoT to manage traffic and public safety. Solutions for fixed video-surveillance infrastructure account for the vast majority of smart city surveillance globally, but mobile-surveillance solutions are also growing fast. The expansion of surveillance to personal identification is a hotly debated topic with significant ramifications for civil society and DRG actors.

Digital IDs and Services Delivery: Digital-identification services can link citizens to their city by facilitating the opening of a bank account or access to health services. Digital IDs centralize all information and transaction history, which is convenient for citizens but also introduces some security concerns. Techniques like minimal disclosure (relying on as little data as possible) and decentralized technologies like self-sovereign identity (SSI) can help separate identity, transaction, and device.

E-government: Electronic government—the use of technology to provide government services to the public—aims to improve service delivery, enhance citizen engagement, and build trust. Making more information, such as government budgets, public and available to citizens is a primary element of e-government. Mobile smartphone service is another strategy, as mobile technology combined with an e-government platform can offer citizens remote access to municipal services.

Chief Technology Officer: Some smart cities have a Chief Technology Officer (CTO) or Chief Information Officer (CIO), who leads the city’s efforts to develop creative and effective technology solutions in collaboration with residents and elected officials. The CTO or CIO studies the community, learns the needs of the citizens, plans and executes related initiatives, and oversees implementation and continued improvements.

Interoperability: The many different services and tools used in a smart city should be able to function together, to communicate with each other, and to share data. This requires dialogue and careful planning between business suppliers and city governments. Interoperability means that new infrastructure must be able to function on top of a city’s existing infrastructure (for example, installing new “smart” LED lighting on top of existing city streetlight systems).

“A smart city is a process of continuous improvements in city operation methods. Not a big bang.”

Smart city project leader in Bordeaux, France

How are smart cities relevant in civic space and for democracy?

As described in more detail in the opportunities section of this resource, smart cities can enhance energy efficiency, improve disaster preparedness, and increase civic participation. But smart cities are, in many ways, a double-edged sword, and they can also facilitate excessive surveillance and infringe on the rights to free assembly and expression.

Streetlights in Makassar, Indonesia. Smart cities have the potential to reach carbon reduction and renewable energy goals and improve economic efficiency and power distribution. Photo credit: USAID.

In authoritarian countries, smart cities can become powerful instruments for manipulation and control. Smart cities in China, for example, are linked to the Chinese Communist Party’s concept of “social management,” or the ruling party’s attempts to shape, manage, and control society. When implemented without transparency or respect for the rule of law, smart-city technologies—like a smart electricity meter intended to improve the accuracy of readings—can be abused by the government to flag “abnormal” behaviors indicative of “illegal” gatherings. In extreme instances, smart-city-facilitated surveillance and monitoring could dissuade citizens from gathering to protest or otherwise expressing opposition to local laws and guidelines.

The involvement of authoritarian actors in the design and operation of smart cities presents a significant threat to democracy, particularly in countries with pre-existing illiberal trends or weak oversight institutions. The partners of the Chinese tech company Huawei—which provides smart-city “solutions” that include facial and license-plate recognition, social media monitoring, and other surveillance capabilities—tend to be non-liberal, raising concerns that the Chinese Communist Party is exporting authoritarianism. In at least two cases, Huawei technicians “helped African governments spy on their political opponents, including [by] intercepting their encrypted communications and social media, and using cell data to track their whereabouts.”

Developing a rights-respecting smart city requires the active participation of society, from the initial planning stages to the implementation of the project. Mechanisms that enable citizens to voice their concerns and provide feedback could go a long way toward building trust and encouraging civic participation down the line. Education and training programs should also be implemented during smart city planning to help citizens understand how to use the technology around them, as well as how it will benefit their day-to-day lives.

Smart cities can create new avenues for participation in democratic processes, such as through e-voting. Proponents of e-voting stress benefits like “faster results, cost-reduction, and remote accessibility, which can potentially increase voter turnout.” But they tend to “underestimate the risks such as election fraud, security breaches, verification challenges, and software bugs and failures.” While smart cities center around technology-focused policymaking, the challenges experienced by urban communities require structural solutions of which technology is just one component.

Nairobi Business Commercial District, Kenya. Some smart cities, like Nairobi, are built on the existing infrastructure of cities. Photo credit: USAID East Africa Trade and Investment Hub.

Smart-city technology may also result in a more privatized government infrastructure, ultimately “displac[ing] public services, replac[ing] democracy with corporate decision-making, and allow[ing] government agencies to shirk constitutional protections and accountability laws in favor of collecting more data.” In some instances, authorities working to secure contracts for smart city technologies have declined to disclose information about the negotiations or circumvented standard public procurement procedures altogether.

Thus, privacy standards, data protection regulations, and due process systems are all vital components of a smart city that truly benefits citizens. Robust legal infrastructure can also provide citizens with recourse in the event of discrimination or abuse, even prior to the development of a smart city. In India, “the drive for smart cities triggered evictions of people from slums and informal settlements without adequate compensation or alternate accommodation.” Too often, smart cities that brand themselves as “inclusive” primarily benefit the elite and fail to address the needs of women, children, migrants, minorities, persons with disabilities, persons operating in the informal economy, low-income groups, or persons with lower levels of digital literacy. Given the varying legal standards across countries, human rights frameworks can help inform the equitable implementation of smart cities to ensure they benefit the whole of society, including vulnerable communities. Civil society and governments should consider 1) whether the technology is appropriate for the objective and achieves its goal, 2) whether the technology is necessary in that it does not exceed its purpose and there is no other way to achieve the goal, and 3) whether the technology is proportionate, meaning that related challenges or drawbacks will not outweigh the benefit of the result.

Opportunities

Smart cities can have a number of positive impacts when used to further democracy, human rights, and good governance.

Environmental Sustainability

According to the OECD, modern cities use almost two-thirds of the world’s energy, produce up to 80% of global greenhouse-gas emissions, and generate 50% of global waste. Smart cities can contribute toward Sustainable Development Goal 11 on making cities and human settlements inclusive, safe, resilient, and sustainable by leveraging data to improve economic efficiency and power distribution, ultimately reducing a city’s carbon footprint and introducing new opportunities for renewable energy. Smart cities are often linked to circular economic practices, which include “up-cycling” of rainwater, waste products, and even open public data (see below). In addition, smart city technologies can be leveraged to help prevent the loss of biodiversity and natural habitats.

Disaster Preparedness

Smart cities can help improve disaster preparedness, mitigation, response, and recovery. Data collection and analysis can be applied to monitoring environmental threats, and remote sensors can map hazards. For example, open data and artificial intelligence can be used to identify which areas are most likely to be hardest hit by earthquakes. Early warning systems, social media alert systems, GIS, and mobile systems can also contribute to disaster management. A major issue during natural disasters is the loss of communication; in a smart city, interconnected systems can share information about what areas need assistance or replenishment when individual communication channels go down.
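At their core, early warning systems often reduce to threshold rules applied to streams of sensor readings. The toy sketch below flags districts whose simulated river-level readings exceed a flood threshold (district names, sensor values, and the threshold are all hypothetical):

```python
# Toy early-warning check: flag districts whose river-level readings
# exceed a flood threshold. All names and values are invented.
FLOOD_THRESHOLD_M = 3.5

readings = {            # latest river-level reading per district, in meters
    "riverside": 3.9,
    "old-town": 2.1,
    "harbour": 3.6,
}

def districts_to_alert(levels, threshold=FLOOD_THRESHOLD_M):
    """Return, sorted, the districts whose latest reading exceeds the threshold."""
    return sorted(d for d, level in levels.items() if level > threshold)

print(districts_to_alert(readings))  # -> ['harbour', 'riverside']
```

Real systems layer forecasting models and redundancy on top of rules like this one, but the basic shape — compare sensor data against a trigger and notify — is the same.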

Social Inclusion

Smart cities can facilitate social inclusion in important ways: through fast, secure internet access; improvements in access to government and social services; avenues for citizen input and participation; improvements in transportation and urban mobility; etc. For example, smart cities can establish a network of urban access points where residents can access digital-skills training while the digitization of health services can improve healthcare opportunities and help patients connect to their medical records. Cities may even be able to improve services for vulnerable groups by responsibly leveraging sensitive datasets to improve their understanding of these citizens’ needs—though this data must be given with full consent, and robust privacy and security safeguards must be in place. Smart city technologies can also be used to preserve cultural heritage.

Knowledge Sharing and Open Information

An open approach to data captured by smart technologies can bring government, businesses, and civil society closer together. Public or open data—unlike sensitive, private data—are data that anyone can access, use, and share. An open-access approach to data means allowing the public to access these kinds of public, reusable data to leverage the social and economic benefits for themselves. This approach can also provide transparency and reinforce accountability and trust between citizens and government—for example by showing the use of public funds. In addition to open data, the design of software underlying smart city infrastructure can be shared with the public through open-source design and open standards. Open source refers to technology whose source code is freely and publicly available, so that anyone can review it, replicate it, modify it, or extend it. Open standards are publicly available guidelines that help ensure technologies are designed to be interoperable and open in the first place.
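As a small illustration of open-data re-use, the sketch below parses a hypothetical CSV export from a city’s open-data portal and averages air-quality readings by district (the column names and values are invented for the example, not taken from any real portal):

```python
import csv
import io

# Hypothetical CSV export from a city's open-data portal (columns are illustrative).
open_data_csv = """sensor_id,district,pm25
S1,Centre,12.5
S2,Centre,18.0
S3,North,9.0
"""

def average_pm25_by_district(csv_text):
    """Group sensor readings by district and average the PM2.5 values."""
    sums, counts = {}, {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        d = row["district"]
        sums[d] = sums.get(d, 0.0) + float(row["pm25"])
        counts[d] = counts.get(d, 0) + 1
    return {d: sums[d] / counts[d] for d in sums}

print(average_pm25_by_district(open_data_csv))  # -> {'Centre': 15.25, 'North': 9.0}
```

This is the kind of lightweight analysis that open-data publication enables: a resident or journalist needs nothing more than a downloaded file and a few lines of code to check a claim about their neighborhood.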

Citizen Participation

Smart cities can encourage citizens to participate more actively in their communities and their governance by facilitating volunteering and community engagement opportunities and by soliciting feedback on the quality of services and infrastructure. Through what is sometimes referred to as “e-participation,” digital tools can reduce the barriers between citizens and decision making, facilitating their involvement in the development of laws and standards, in the choice of urban initiatives, etc. The United Nations identifies three steps in e-participation: E-information, E-consultation, and E-decision-making.

Risks

The use of emerging technologies can also create risks in civil society programming. This section describes how to discern the possible dangers associated with smart cities in DRG work, as well as how to mitigate unintended—and intended—consequences.

Surveillance and Forced Participation

As noted above, smart cities often rely on some level of citizen surveillance, the drawbacks of which are typically de-emphasized in marketing campaigns. A planned smart-city project in Toronto, Canada, touted as a tool for addressing affordability and transportation issues in the city, was ultimately derailed by the COVID-19 pandemic and significant scrutiny over privacy and data harvesting.

In many countries, individuals must give informed consent for their data to be legally collected, stored, and analyzed. Even if users blindly opt in to providing their data to a website or app, at least there is a clear option for opting out of doing so. In public spaces, however, there is no straightforward way for people to opt out of giving their consent. Do citizens consent to being surveilled when they are crossing the street? Have they been informed about how data collected on their movements and behaviors will be used? In democracies, there are opportunities for recourse if personal data collected through surveillance are misused, but in more illiberal settings this may not be the case. In China, for example, the use of millions of surveillance cameras that recognize faces, body shapes, and how people walk facilitates the tracking of individuals to stifle dissent.

Smart city surveillance and facial recognition technology can also facilitate discrimination. Smart city infrastructure can provide law enforcement and security agencies with the ability to track and target certain groups, such as ethnic or racial minorities, in democratic and non-democratic societies alike. A 2019 study conducted by the U.S. National Institute of Standards and Technology found that facial-recognition algorithms perform poorly when examining the faces of women, people of color, the elderly, and children. This is particularly concerning given that many police departments use facial recognition technology to identify suspects and make arrests. In addition to facial recognition, data analytics are used to anticipate potential locations of future crime (a practice known as predictive policing). A typical response to this analysis is an increase in the surveillance of “high-risk” areas, often neighborhoods inhabited by lower-income and minority communities.

Unethical Data Handling and Freedom of Expression

As a city becomes more digitally connected, data sharing increases. For example, a smartphone user may share geo-location data and other meta-data with multiple applications, which in turn share that data with other services. And yet as cities aggregate and process data about residents, expectations of privacy in people’s daily lives break down. The collection of some types of data, like information about where you have traveled in your car or how fast you typically drive, may seem harmless. But when paired with other data, patterns are quickly established that may reveal more sensitive information about your health and habits, your family and networks, the composition of your household, your religious practices, etc.

Personal information is valuable to companies, and many companies test their technology in countries with the fewest data restrictions. In the hands of private companies, data can be exploited to target advertising, calibrate insurance costs, etc. There are also risks when data are collected by third parties (particularly foreign companies) that might lock users into their services, neglect to share information about security flaws, have inadequate data-protection mechanisms, or maintain data-sharing agreements with other governments. Governments also stand to benefit from access to intimate data about their citizens: “[P]ersonal information collected as part of a health survey could be repurposed for a client that is, say, a political party desperate to win an election.” According to Ghanaian social innovator and entrepreneur Bright Simmons, “the battle for data protection and digital rights is the new fight for civil rights on the continent.”

Worsening Inequality and Marginalization

In many cases, smartphones and the apps contained within them facilitate access to the full benefits of a smart city. As of 2019, an estimated five billion people owned a mobile device, and over half of those devices were smartphones. But these numbers vary between advanced and developing economies, as well as between communities or groups within a given economy, potentially generating inequity in access to services and civic participation. Citizens with low literacy and numeracy skills, or who do not speak the language used by an application, will have further difficulty connecting through these interfaces. The reliance on apps also alienates unhoused populations, who may not be able to charge their devices regularly or who may be at higher risk of having their devices stolen.

The term “digital divide” generally refers to the gap between people who have access to and familiarity with high-quality and secure technology, and those who do not. Smart cities are often criticized as being designed for the elite and privileging those who are already digitally connected. If this is the case, smart cities could exacerbate gentrification and the displacement of the unhoused.

The use of surveillance in smart cities can also be used to repress minority groups. Much has been reported on government surveillance of China’s Uyghur Muslim population in Xinjiang.

“It aggregates data – from people’s blood type and height, to information about their electricity usage and package deliveries – and alerts authorities when it deems someone or something suspicious. It is part of the Integrated Joint Operations Platform (IJOP), the main system for mass surveillance in Xinjiang.”

As described by Human Rights Watch

Data Despotism and Automation Failures

Smart cities have been accused of “data despotism”: if city governments can access so much data about their citizens, the thinking goes, why bother speaking with them directly? Because of potential algorithmic discrimination, flaws in data analysis and interpretation, or inefficiencies between technology and humans, an overreliance on digital technology can harm society’s most vulnerable.

A substantial literature also examines the “digital welfare state.” Former United Nations Special Rapporteur on extreme poverty and human rights Philip Alston observed that new digital technologies are changing the relationship between governments and those most in need of social protection: “Crucial decisions to go digital have been taken by government ministers without consultation, or even by departmental officials without any significant policy discussions taking place.”

When basic human services are automated and human operators are taken out of the transaction, glitches in the software and tiny flaws in eligibility systems can be dangerous and even fatal. In India, where many welfare and social services have been automated, a 50-year-old man died of malnutrition due to a glitch in his biometric thumbprint identifier that prevented him from accessing a ration shop. “Decisions about you are made by a centralised server, and you don’t even know what has gone wrong…People don’t know why [welfare support] has stopped and they don’t know who to go to to fix the problem,” explained Reetika Khera, an associate professor of economics at the Indian Institute of Management Ahmedabad.

These automated processes also create new opportunities for corruption. Benefits like pensions and wages linked to India’s digital ID system (called Aadhaar) are often delayed or fail to arrive altogether. When a 70-year-old woman found that her pension was being sent to another person’s bank account, the government told her to resolve the situation by speaking directly to that person.

Worsening Displacement

Like other urban projects, smart-city development can displace residents as existing neighborhoods are razed for new construction. An estimated 60% to 80% of the world’s forcibly displaced population lives in urban areas (not in camps as many would think), and one billion people (a number expected to double by 2030) in developing cities live in “slum” areas—defined by the UN as areas without access to improved water, sanitation, security, durable housing, and sufficient living area. In other words, urban areas are home to large populations of society’s most vulnerable, including internally displaced persons and migrants who do not benefit from the same legal protections as citizens. Smart cities may seem like an ideal solution to urban challenges, but they risk further disadvantaging these vulnerable groups; not to mention that smart cities neglect the needs of rural populations entirely.

“Corporatization”: Dominance of the Private Sector

Smart cities present an enormous market opportunity for the private sector, sparking fears of the “corporatization of city governance.” Large IT, telecommunication, and energy-management companies such as Huawei, Alibaba, Tencent, Baidu, Cisco, Google, Schneider Electric, IBM, and Microsoft are the driving forces behind the technology for smart-city initiatives. As Sara Degli-Esposti, an honorary research fellow at Coventry University, explained: “We can’t understand smart cities without talking of digital giants’ business models…These corporations are already global entities that largely escape governmental oversight. What level of control do local governments expect to exercise over these players?”

The important role thereby afforded to international private companies in municipal governance raises sovereignty concerns for governments, along with the privacy concerns for citizens cited above. In addition, reliance on private-sector software and systems can create a condition of business lock-in (when it becomes too expensive to switch to another business supplier). Business lock-in can get worse over time: as more services are added to a network, the cost of moving to a new system becomes even more prohibitive.

Security Risks

Connecting devices through a smart grid or through the Internet of Things brings serious security vulnerabilities for individuals and infrastructure. Connected networks have more points of vulnerability and are susceptible to hacking and cyberattacks. As smart systems collect more personal data about users (like health records), there is an increased risk that unauthorized actors will gain access to this information. The convenience of public, open Wi-Fi also comes at a cost, as it is much less secure than private networks. The IoT has been widely criticized for its lack of security, in part because of its novelty and lack of regulation. Connected devices are generally manufactured to be inexpensive and accessible, without cybersecurity as the primary concern.

The more closely linked infrastructure is, the faster and more far-reaching an attack can be. Digitally linked infrastructure like smart grids increases cybersecurity risks due to the increased number of operators and third parties connected to the grid, and this multiplies supply-chain risk-management considerations. According to Anjos Nijk, director of the European Network for Cyber Security: “With the current speed of digitisation of the grid systems… and the speed of connecting new systems and technologies to the grids, such as smart metering, electrical vehicle charging and IoT, grid systems become vulnerable and the ‘attack surface’ expands rapidly.” Damaging one part of a large, interconnected system can lead to a cascade effect on other systems, potentially resulting in large-scale blackouts or the disabling of critical health and transportation infrastructure. Energy grids can be brought down by hackers, as experienced in the December 2015 Ukraine power grid cyberattack.

Questions

If you are trying to understand the implications of smart cities in your work environment, or are considering how to use aspects of smart cities as part of your DRG programming, ask yourself these questions:

  1. Does the service in question need to be digital or connected to the internet? Will digitization improve this service for citizens, and does the anticipated improvement outweigh the risks?
  2. Are programs in place to ensure that citizens’ basic needs are being met (access to food, safety, housing, livelihood, education)?
  3. What external actors have control of or access to critical aspects of the technology and infrastructure this project will rely on, and what cybersecurity measures are in place?
  4. Who will build and maintain the infrastructure and data? Is there a risk of being locked into certain technologies or agreements with service providers?
  5. Who has access to collected data and how are the data being interpreted, used, and stored? What external actors have access? Are data available for safe, legal re-use by the public? How are open data being re-used or shared publicly?
  6. How will smart city services respect citizens’ privacy? How will residents’ consent be obtained when they utilize services that capture data about them? Can they opt out of sharing this information? What legal protections are in place around data protection and privacy?
  7. Are the smart services transparent and accountable? Do researchers and civil society have access to the “behind the scenes” functioning of these services (data, code, APIs, algorithms, etc.)?
  8. What measures are in place to address biases in these services? How will this service avoid exacerbating socioeconomic barriers and existing inequalities? What programs and measures are in place to improve inclusion?
  9. How will these developments respect and preserve historical sites and neighborhoods? How will changes adapt to local cultural identities?

Case Studies

Barcelona, Spain

Barcelona is often referred to as a best-practice smart city due to its strongly democratic, citizen-driven design. Its smart-city infrastructure consists of three primary components: Sentilo, an open-source data-collection platform; CityOS, a system for processing and analyzing the collected data; and user interfaces that enable citizens to access the data. This open-source design mitigates the risk of business lock-in and allows citizens to maintain collective ownership of their data, as well as provide input on how it is processed. A digital participatory platform, Decidim (“We Decide”), enables citizen participation in government through the suggestion and debate of ideas. Barcelona has also implemented e-democracy initiatives and projects to improve the digital literacy of its citizens. In 2018, Barcelona’s Chief Technology and Digital Innovation Officer Francesca Bria commented on reversing the smart city paradigm: “Instead of starting from technology and extracting all the data we can before thinking about how to use it, we started aligning the tech agenda with the agenda of the city.”

Belgrade, Serbia

Starting in 2019, the Serbian government began implementing a Safe City project in the capital city of Belgrade. The installation of 1,200 smart surveillance cameras provided by Chinese tech giant Huawei raised red flags among the public, civil society, and even some European Union institutions. The Serbian Commissioner for Information of Public Importance and Personal Data Protection was among the first to sound the alarm, stating “there is no legal basis for the implementation of the Safe City project” and that new regulation was needed to address facial-recognition technology and the processing of biometric data. As Danilo Krivokapić, director of the Belgrade-based digital rights organization SHARE Foundation, observed, “The public was not informed about the technical scope or price of the system, the specific needs it was meant to address, or the safeguards that would be needed to mitigate potential human rights risks.” In an effort to improve transparency around the project, the SHARE Foundation developed a crowdsourced map showing verified camera locations and their technical features, which ended up differing substantially from a list of camera locations provided by officials. Two years after the rollout of the Safe City project in Belgrade, a group of Members of the European Parliament wrote a letter to Serbia’s Minister of Interior to voice their concerns about Belgrade becoming “the first city in Europe to have the vast majority of its territory covered by mass surveillance technologies.”

Konza, Kenya

Konza Technopolis, a flagship of Kenya’s Vision 2030 economic-development plan, promises to be a “world-class city, powered by a thriving information, communications, and technology (ICT) sector; superior reliable infrastructure; and business-friendly governance systems.” Plans for the city include the gathering of data from smart devices and sensors embedded in the urban environment to inform the delivery of digitally enhanced services. According to the official website for Konza, the city’s population will have direct access to collected data (such as traffic maps, emergency warnings, and information about energy and water consumption), which will enable citizens to “participate directly in the operations of the city, practice more sustainable living patterns, and enhance overall inclusiveness.” Between the announcement of plans for the development of Konza in 2008 and a journalist’s trip to the city in 2021, little progress seemed to have been made despite claims that the city would create 100,000 jobs by 2020 and generate $1 billion a year for the Kenyan economy. Yet investment from South Korea may have given new life to the project in 2023, with new projects planned, including the development of an Intelligent Transport System (ITS) and an integrated control center.

Neom, Saudi Arabia

In 2021, Saudi Crown Prince Mohamed bin Salman revealed initial plans for The Line, a futuristic linear city that would be constructed vertically, have no roads or cars, and run purely on renewable energy. The Line is part of the $500 billion Neom mega-city project, which has been described not just as a “smart” city, but as a “cognitive” one. This cognitive city is built on three pillars: “the ability for citizens and enterprises to connect digitally to physical things; the ability to be able to compute or to analyze those things; and the ability to contextualize, using that connectivity to drive new decisions.” Planning documents produced by U.S. consultants include some technologies that do not even exist yet, such as flying taxis, “cloud seeding” to produce rain, and robot maids. In addition to being somewhat fantastical, the project has also been controversial from the outset. Around 20,000 people, including members of the Huwaitat indigenous tribe, faced forced relocation due to construction for the project; according to Al Jazeera, a prominent Huwaitat activist was arrested and imprisoned in 2020 over the tribe’s refusal to relocate. Concerns also stemmed from the strengthening of ties between the crown prince and Chinese Communist Party chairman Xi Jinping, who agreed to provide powerful surveillance technology to Saudi Arabia. Marwa Fatafta, a policy manager at the Berlin-based digital rights organization Access Now, warned that smart city capabilities could be deployed as a tool for invasive surveillance by state security services. This could include deploying facial recognition technology to track real-time movements and linking this information with other datasets, such as biometric information. Saudi Arabia has a demonstrated track record of using technology to crack down on online expression, including through the use of Pegasus spyware to monitor critics and the stealing of personal data from Twitter users who criticized the government.

Singapore

Singapore’s Smart Nation initiative was launched in 2014 to harness ICT, networks, and data to develop solutions to an aging population, urban density, and energy sustainability. In 2023, Singapore was named the top Asian city in the Institute for Management Development’s Smart City index, which ranks 141 cities by how they use technology to achieve a higher quality of life. Singapore’s smart-city infrastructure includes self-driving cars; patrol robots programmed to detect “undesirable” behavior; home utilities management systems; robots working in construction, libraries, metro stations, coffee shops, and the medical industry; cashless payment systems; and augmented and virtual reality services. Hundreds of gadgets, sensors, and cameras spread across 160 kilometers of expressways and road tunnels (collectively called the Intelligent Transport Systems or ITS) gather data to monitor and manage traffic flows and make roads safer. Singapore’s e-health initiative includes an online portal that allows patients to book appointments and refill prescriptions, telemedicine services that allow patients to consult with doctors online, and wearable IoT devices that monitor patients’ progress during telerehab. In a country where an estimated 90% of the population own smartphones, Singapore’s Smart Nation app is a one-stop platform for accessing a wide range of government services and information.

Toronto, Canada

In 2017, Toronto awarded a contract to Sidewalk Labs, a smart-city subsidiary of Google’s parent company Alphabet, to develop the city’s eastern waterfront into a high-tech utopia. The project aimed to advance a new model of inclusive development, “striving for the highest levels of sustainability, economic opportunity, housing affordability, and new mobility,” and serve as a model for solving urban issues in cities around the world. Sidewalk Labs planned to build sustainable housing, construct new types of roads for driverless cars, and use sensors to collect data and inform energy usage, help curb pollution, and lessen traffic. However, the project faced constant criticism from city residents and even Ontario’s information and privacy commissioner over the company’s approach to privacy and intellectual property. A privacy expert left their consulting role on the initiative to “send a strong statement” about the data privacy issues the project faced after learning that third parties could access identifiable information gathered in the waterfront district. Ultimately the project was abandoned in 2020, allegedly due to the unprecedented economic uncertainty brought on by the COVID-19 pandemic.
