Artificial Intelligence & Machine Learning

What is AI and ML?

Artificial Intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can mimic human intelligence. There is no single, precise, universal definition of AI.

Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI in which algorithms are trained on large datasets to develop their own rules. This is an alternative to traditional computer programs, in which rules have to be hand-coded in. Machine learning extracts patterns from data and classifies that data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?

Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems and evolutionary computation. (See Artificial Intelligence – A modern approach).

Diagram: the types of technologies and fields that comprise artificial intelligence.

The diagram above shows the many different types of technology fields that comprise AI. When referring to AI, one can be referring to any or several of these technologies or fields, and applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say to Siri, “Siri, show me a picture of a banana,” Siri utilizes “natural language processing” to understand what you’re asking, and then uses “vision” to find a banana and show it to you. The question of how Siri understood your question and how Siri knows something is a banana is answered by the algorithms and training used to develop Siri. In this example, Siri would be drawing from “question answering” and “image recognition.”

Most of these technologies and fields are very technical and relate more to computer science than political science. It is important to know that AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems.

As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI – everything from the notion that it is going to take over the world by enslaving humans, to the hope that it will cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI. It is hoped that this primer will enable you to engage in a conversation about how best to regulate AI so that its potential can be harnessed to improve democracy and governance.

Definitions

Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
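The largest-number procedure mentioned above can be written out as a short program – a minimal sketch of what “unambiguous, step-by-step instructions” look like in code:

```python
def largest(numbers):
    """Step through the list once, keeping the biggest value seen so far."""
    assert numbers, "the set must not be empty"
    best = numbers[0]
    for n in numbers[1:]:
        if n > best:
            best = n
    return best

print(largest([7, 42, 3, 19]))  # -> 42
```

Every step is explicit and repeatable: given the same input, the procedure always produces the same output, which is what distinguishes an algorithm from an informal instruction.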

Algorithmic decision-making/Algorithmic decision system (ADS): A system in which an algorithm makes decisions on its own or supports humans in doing so. ADSs usually function by data mining, regardless of whether they rely on machine learning or not. Examples of fully automated ADSs are the electronic passport-control checkpoints at airports and an online decision made by a bank to award a customer an unsecured loan based on the person’s credit history and data profile with the bank. An example of a semi-automated ADS is the driver-assistance features in a car that control its brake, throttle, steering, speed and direction.

Big Data: Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. Data is classified as Big Data based on its volume, velocity, variety, veracity and value. This short explainer video provides an introduction to big data and the concept of the 5Vs.

Class label: The label applied after the ML system has classified its inputs; for example, “spam” or “not spam” for a given email message.

Data mining: The practice of “examining large pre-existing databases in order to generate new information.” Data mining is also defined as “knowledge discovery from data.”

Deep model: Also called a “deep neural network,” a type of neural network containing multiple hidden layers.

Label: A label is what the ML system is predicting.

Model: The representation of what a machine learning system has learned from the training data.

Neural network: A biological neural network (BNN) is a system in the brain that enables a creature to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn things and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.

Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.

Robot: Robots are programmable, artificially intelligent automated devices. Fully autonomous robots, e.g., self-driving vehicles, are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses/ behaviors accordingly in order to perform complex tasks without human intervention. – Report of COMEST on robotics ethics (2017).

Scoring: “Scoring is also called prediction, and is the process of generating values based on a trained machine-learning model, given some new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome.” When used vis-a-vis people, scoring is a statistical prediction that determines if an individual fits a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.
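As a minimal illustration of scoring, the sketch below applies a trained model’s parameters to new input data to produce a value between 0 and 1. The feature names, weights and numbers are invented for illustration; in a real system the weights would be learned from data rather than set by hand:

```python
import math

# Hypothetical "trained model": learned weights and a bias term.
weights = {"income": 0.05, "missed_payments": -1.2}
bias = -1.0

def credit_score(income, missed_payments):
    """Logistic scoring: map a weighted sum of features to a 0-1 value."""
    z = bias + weights["income"] * income + weights["missed_payments"] * missed_payments
    return 1 / (1 + math.exp(-z))

# Scoring new applicants: higher income and fewer missed payments -> higher score.
print(round(credit_score(90, 0), 2))
print(round(credit_score(20, 4), 2))
```

The score itself is just a number; the consequential step is the threshold a bank or agency chooses for acting on it.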

Supervised learning: An approach in which ML systems learn from labeled training examples how to combine inputs to produce useful predictions on never-before-seen data.

Unsupervised learning: Refers to training a model to find patterns in a dataset, typically an unlabeled dataset.

Training: The process of determining the ideal parameters comprising a model.


How do artificial intelligence and machine learning work?

Artificial Intelligence

Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic and economics to “understanding, modeling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices.”

AI applications exist in every domain, industry, and across different aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:

  • Narrow AI or Artificial Narrow Intelligence (ANI) is an expert system for a specific task, like image recognition, playing Go, or answering a question through Alexa or Siri.
  • Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
  • Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications only exist presently in the “Narrow AI” field. Narrow AI, also known as weak AI, is AI designed to perform a specific, singular task, for example, voice-enabled virtual assistants such as Siri and Cortana, web search engines, and facial-recognition systems.

Artificial General Intelligence and Artificial Superintelligence have not yet been achieved and likely will not be for years or decades.

Machine Learning

Machine learning is a subset of artificial intelligence. Although we often find the two terms used interchangeably, machine learning is a process by which an AI application is developed. The machine-learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses those patterns and correlations to make predictions about something. Most of the AI in use today is driven by machine learning.

Just as it is useful to break up AI into three categories, machine learning can also be thought of as three different techniques: supervised learning, unsupervised learning, and deep learning.

Supervised Learning

Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a labeled data set. One starts with a data set containing training examples with associated labels. Take the example of a simple spam-filtering system that is being trained using spam as well as non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email address or IP address of the sender. Using the correlation, it tries to predict the correct label (spam/not spam) to apply to all the future emails it gets.

“Spam” and “not spam” in this instance are called “class labels”. The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labelled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about — in this case, it is the “spaminess” of email. The “correct answer,” so to speak, in the endeavor to categorize email is called the “desired outcome” or “outcome of interest.” This type of learning paradigm is called “supervised learning.”
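The spam example above can be sketched as a tiny Naive Bayes classifier. This is a minimal illustration with a made-up four-message training set; real filters learn from millions of labelled messages and far richer features (sender address, IP, and so on):

```python
from collections import Counter
import math

# Training data: messages with human-applied class labels.
training_data = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow?", "not spam"),
]

# Training: the "model" here is simply word counts per class label.
word_counts = {"spam": Counter(), "not spam": Counter()}
class_counts = Counter()
for text, label in training_data:
    class_counts[label] += 1
    word_counts[label].update(text.lower().split())

def classify(text):
    """Pick the class label with the higher (log) probability for this text."""
    vocab = len(set(w for c in word_counts.values() for w in c))
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # Prior probability of the class...
        score = math.log(class_counts[label] / sum(class_counts.values()))
        # ...combined with the (smoothed) probability of each word given the class.
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("win cheap money"))  # -> spam
```

The correlation the system has found between words and labels is the “predictive model”; applying it to a new message is the prediction step.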

Unsupervised Learning

Unsupervised learning involves having neural networks learn to find a relationship or pattern without having access to datasets of input-output pairs that have been labelled already. They do so by organizing and grouping the data on their own, finding recurring patterns, and detecting a deviation from the usual pattern. These systems tend to be less predictable than those with labeled datasets and tend to be deployed in environments that may change at some frequency and/or are unstructured or partially structured. Examples include:

  1. an optical character-recognition system that can “read” handwritten text even if it has never encountered the handwriting before.
  2. the recommended products a user sees on online shopping websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference and the prices of their previous purchases.
  3. detection of fraudulent monetary transactions based on, say, their timing and locations. For instance, if two consecutive transactions happened on the same credit card within a short span of time in two different cities.
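The third example – flagging unusual transactions – can be sketched with no labels at all: the system derives the “usual pattern” from the data itself and flags deviations from it. This minimal sketch uses standard deviation on invented card charges; real systems would also use timing, location and many other variables:

```python
import statistics

# Unlabelled transaction amounts (made up for illustration).
amounts = [12.5, 9.9, 14.2, 11.0, 13.7, 950.0, 10.4, 12.1]

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

def is_anomalous(amount, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    return abs(amount - mean) / stdev > threshold

flagged = [a for a in amounts if is_anomalous(a)]
print(flagged)  # -> [950.0]
```

No human labelled any transaction as “fraudulent”; the outlier emerges purely from the structure of the data, which is the essence of the unsupervised approach.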

A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small dataset with labels is available, which can be used to train the neural network to act upon a larger, un-labelled dataset. An example of semi-supervised learning is software that creates deepfakes – photos, videos and audio files that look and sound real to humans but are not.

Deep Learning

Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine and sort huge amounts of data, which enables them, theoretically, to find new solutions to existing problems.
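A deep neural network is, structurally, one with more than one hidden layer between input and output. The sketch below shows only the forward pass – data flowing through two hidden layers, each applying weights and a non-linearity. The weights are illustrative placeholders; in a real network, training would set them from data:

```python
def relu(x):
    """Non-linearity: negative values become zero."""
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum plus a bias per neuron."""
    return [sum(w * i for w, i in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def forward(x):
    h1 = relu(layer(x,  [[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]))  # hidden layer 1
    h2 = relu(layer(h1, [[0.3, 0.7], [-0.6, 0.2]], [0.1, 0.0]))  # hidden layer 2
    out = layer(h2, [[1.0, -1.0]], [0.0])                        # output layer
    return out[0]

print(forward([1.0, 2.0]))
```

Stacking more such layers is what makes a network “deep” and lets it represent progressively more abstract patterns in the data.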

Although there are other types of machine learning, these three – Supervised Learning, Unsupervised Learning and Deep Learning – represent the basic techniques used to create and train AI systems.

Bias in AI and ML

Artificial intelligence doesn’t come from nowhere; it comes from data received and derived from its developers and from you and me.

And humans have biases. When an AI system learns from humans, it may inherit their individual and societal biases. In cases where it does not learn directly from humans, the “predictive model” as described above may be biased because of the presence of biases in the selection and sampling of data that train the AI system, the “class labels” identified by humans, the way class labels are “marked” and any errors that may have occurred while identifying them, the choice of the “target variable,” “desired outcome” (as opposed to an undesired outcome), “reward”, “regret” and so on. Bias may also occur because of the design of the system; its developers, designers, investors or makers may have ended up baking their own biases into it.

There are three types of biases in computing systems:

  • Pre-existing bias has its roots in social institutions, practices, and attitudes.
  • Technical bias arises from technical constraints or considerations.
  • Emergent bias arises in a context of use.

Bias may affect, for example, the political advertisements one sees on the Internet, the content pushed to the top of the pile in the feeds of social media websites, the amount of insurance premium one needs to pay, if one is screened out of a recruitment process, or if one is allowed to go past border-control checks in another country.

Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate gets compounded or magnified and greatly affects the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.

Bias in AI/ML systems may create new inequalities, exacerbate existing ones, reproduce existing biases, lead to discriminatory treatment and practices, and hide discrimination. See this explainer related to AI bias.


How are AI and ML relevant in civic space and for democracy?

Elephant tusks pictured in Uganda. In wildlife conservation, AI/ ML algorithms and past data can be used to predict poacher attacks. Photo credit: NRCN.

The widespread proliferation, rapid deployment, scale, complexity and impact of AI on society are topics of great interest and concern for governments, civil society, NGOs, human-rights bodies, businesses and the general public alike. AI systems may require varying degrees of human interaction or none at all. When applied to the design, operation and delivery of services, AI/ML offers the potential to provide new services and to improve the speed, targeting, precision, efficiency, consistency, quality or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships and patterns, and offer new solutions. By analyzing large amounts of data, ML systems save time, money and effort. Examples of the application of AI/ML in different domains include using algorithms and past data in wildlife conservation to predict poacher attacks, and discovering new species of viruses.

TB Microscopy Diagnosis in Uzbekistan. AI/ML systems aid healthcare professionals in medical diagnosis and easier detection of diseases. Photo credit: USAID.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, security and safety, crime prevention, policing, law enforcement, urban management and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels and waste, in order to monitor and manage them and identify patterns in their generation, consumption and handling.

Digital maps created in Mugumu, Tanzania. Artificial intelligence can support planning of infrastructure development and preparation for disaster. Photo credit: Bobby Neptune for DAI.

AI is also used in climate monitoring, weather forecasting, prediction of disasters and hazards, and planning of infrastructure development. In healthcare, AI systems aid healthcare professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, face-recognition systems, drones and predictive policing for the safety and security of citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, personhood, privacy, security, mass surveillance, reinforcement of social inequality and negative impacts on democracy (see the Risks section).

Fish caught off the coast of Kema, North Sulawesi, Indonesia. Facial recognition is used to identify species of fish to contribute to sustainable fishing practices. Photo credit: courtesy of USAID SNAPPER.

The full impact of the deployment of AI systems on the individual, society and democracy is not known or knowable, which creates many legal, social, regulatory, technical and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The EU General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a whitepaper on AI in February 2020 as a prequel to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human rights impacts of algorithmic systems. Similarly, Germany, France, Japan and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”


Opportunities

Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights and good governance. Read below to learn how to think more effectively and safely about artificial intelligence and machine learning in your work.

Detect and overcome bias

Humans come with individual and cognitive biases and prejudices and may not always act or think rationally. By removing humans from the decision-making process, AI systems can potentially eliminate the impact of human bias and irrational decisions, provided the systems themselves are not biased and are intelligible, transparent and auditable. AI systems that aid traceability and transparency can be used to avoid, detect or trace human bias (some of which may be discriminatory) as well as non-human bias, such as bias originating from technical limitations. Much research has shown how automated filtering of job applications reproduces multiple biases; however, research has also shown that AI can be used to combat unconscious recruiter bias in hiring. For processes like hiring, where many hidden human biases go undetected, responsibly designed algorithms can act as a double check for humans and bring those hidden biases into view, and in some cases even nudge people toward less-biased outcomes, for example by masking candidates’ names and other bias-triggering features on a resume.
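The masking idea can be sketched in a few lines. This is a minimal illustration assuming structured candidate records with hypothetical field names; a real pipeline would also have to handle bias-triggering cues buried in free text:

```python
def mask_candidate(record, sensitive=("name", "gender", "age", "photo_url")):
    """Return a copy of the record with bias-triggering fields redacted."""
    return {k: ("[REDACTED]" if k in sensitive else v) for k, v in record.items()}

candidate = {"name": "A. Example", "gender": "F", "years_experience": 7,
             "skills": ["python", "statistics"]}

print(mask_candidate(candidate))
```

Reviewers then see only job-relevant fields, which is the “double check” described above: the redaction is systematic and auditable rather than dependent on each reviewer’s self-discipline.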

Improve security and safety

Automated systems based on AI can be used to detect attacks, such as credit card fraud or a cyberattack on public infrastructure. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud ever more quickly, even before it occurs. It is like a game of cat and mouse: as attackers’ computers create more complex, unusual patterns to avoid detection, human understanding of those patterns reaches its limits. Defenders need equally agile tools that can adapt and iterate in real time, and machine learning can provide this.

Moderate harmful online content

Enormous quantities of content are uploaded every second to the social web (videos on YouTube and TikTok, photos and posts on Instagram and Facebook, etc.). There is simply too much for human reviewers to examine themselves. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen every post for illegal or harmful content (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. Recently, the arrival of deepfakes and other computer-generated content has required similarly advanced approaches to identify them. Deepfakes take their name from the deep-learning artificial-intelligence technology used to make them. Fact-checkers and other actors working to defuse the dangerous, misleading power of these false videos are developing their own artificial intelligence to identify them as false.

Web Search

Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the infinite stretches of the internet. Search engines on the web (like Google and Bing) or within platforms (like searches within Wikipedia or within The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor certain kinds of results that may be beneficial to society or of higher quality. For example, Google has an initiative to highlight “original reporting.”

Translation

Machine learning has allowed for truly incredible advances in translation. For example, DeepL is a small machine-translation company that has surpassed even the translation abilities of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.


Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended – and intended – consequences.

Discrimination against marginalized groups

There are several ways in which AI may make decisions that lead to discrimination, including how the “target variable” and the “class labels” are defined, how the training data are labeled and collected, how features are selected, and how proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial-recognition systems trained on racially biased datasets discriminate against people of dark skin, women and gender-diverse people.

The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the size, the more accurate the system’s decisions are likely to be. However, women, Black people and people of color (PoC), disabled people, indigenous people, LGBTQ+ people, and other minorities are less likely to be represented in a dataset because of structural discrimination, group size, or attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to conclusively prove or demonstrate that it has made a discriminatory decision, and why it makes certain decisions about some individuals or groups of people. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status or other protected characteristics. For instance, AI systems used in predictive policing, crime prevention, law enforcement and the criminal justice system are, in a sense, tools for risk-assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized or minority groups.

A study by the Royal Statistical Society notes that “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation, software for credit-scoring, banking, housing, insurance, healthcare, and selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.

The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving minorities such as refugees or “life and death” matters such as medical care. A 2018 report by The University of Toronto and Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.

Security vulnerabilities

Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems, such as IoT devices and self-driving cars.

If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids and nuclear installations, as well as healthcare facilities and banking systems, among others, these systems “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”

Privacy and data protection

The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection (see also the Data Protection resource). Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks and activities. Criminals, rogue states/governments/government bodies, and people with malicious intent often try to target these data for various reasons, ranging from monetary fraud to commercial gain to political motives. For instance, health data captured from smartphone applications and Internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, etc. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.

Chilling effect

AI systems used for surveillance, policing, criminal sentencing, legal purposes, etc. become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Most people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.

Opacity (Black box nature of AI systems)

Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, ‘behind-the-scenes’ processing and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal and judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike those of judges, who are required to state the reasons on which their legal order or judgment is based, and whose order or judgment is quite likely to be on public record.

Technological unemployment

Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.

Loss of individual autonomy and personhood

Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect people’s wellbeing, physical integrity and quality of life, the information they find or are targeted with, and the services and products they can or cannot obtain, among other things. This affects what constitutes an individual’s consent (or lack thereof), the way consent is formed, communicated and understood, and the context in which it is valid. “[T]he dilution of the free basis of our individual consent – either through outright information distortion or even just the absence of transparency – imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation.” – Human Rights in the Era of Automation and Artificial Intelligence


Questions

If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:

  1. Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
  2. Who is designing and overseeing the technology? Can they explain to you what is happening at different steps of the process?
  3. What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
  4. What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
  5. Are you confident the technology will work as it is claimed when it is used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
  6. Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
  7. What measures do you have in place to identify and address potentially harmful biases in the technology?
  8. What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has treated them unfairly or harmed them in any way?
  9. Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
  10. Are you certain that the technology complies with relevant regulations and legal standards, including the GDPR?
  11. Even if this technology does not discriminate against people by itself, could it lead to discrimination or other rights violations, for instance when it is deployed in different contexts or shared with untrained actors? What can you do to prevent this?


Case Studies

“Preventing echo chambers: depolarising the conversation on social media”

“RNW Media’s digital teams… have pioneered moderation strategies to support inclusive digital communities and have significantly boosted the participation of women in specific country settings. Through Social Listening methodologies, RNW Media analyses online conversations on multiple platforms across the digital landscape to identify digital influencers and map topics and sentiments in the national arena. Using Natural Language Processing techniques (such as sentiment and topic detection models), RNW Media can mine text and analyze this data to unravel deep insights into how online dialogue is developing over time. This helps to establish the social impact of the online moderation strategies while at the same time collecting evidence that can be used to advocate for young people’s needs.”
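
To make concrete what “sentiment detection” involves, here is a deliberately minimal, lexicon-based sketch in Python. RNW Media’s actual models are trained statistical systems; the word lists and scoring below are invented for illustration only:

```python
# Minimal lexicon-based sentiment scoring sketch.
# Real NLP pipelines use trained models; these word lists are invented.

POSITIVE = {"good", "great", "hopeful", "support", "welcome"}
NEGATIVE = {"bad", "angry", "hate", "fear", "reject"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: balance of positive vs. negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [(w in POSITIVE) - (w in NEGATIVE) for w in words]
    scored = [h for h in hits if h != 0]
    return sum(scored) / len(scored) if scored else 0.0

print(sentiment_score("Young people welcome the hopeful news"))  # 1.0 (positive)
print(sentiment_score("Many reject and fear the new policy"))    # -1.0 (negative)
```

A production system would instead learn word and phrase weights from labeled data, but the input/output contract (text in, polarity score out) is the same.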

Forecasting climate change, improving agricultural productivity

In 2014, the International Center for Tropical Agriculture, the Government of Colombia, and Colombia’s National Federation of Rice Growers, using weather and crop data collected over the prior decade, predicted climate change and resultant crop loss for farmers in different regions of the country. The prediction “helped 170 farmers in Córdoba avoid direct economic losses of an estimated $ 3.6 million and potentially improve productivity of rice by 1 to 3 tons per hectare. To achieve this, different data sources were analyzed in a complementary fashion to provide a more complete profile of climate change… Additionally, analytical algorithms were adopted and modified from other disciplines, such as biology and neuroscience, and were used to run statistical models and compare with weather records.”

Doberman.io developed an iOS app

Doberman.io developed an iOS app that employs machine learning and speech recognition to automatically analyze speech in a meeting room. The app determines the amount of time each person has spoken and tries to identify the sex of each speaker, displaying a near-real-time visualization of each speaker’s contribution along with the relative percentages of time during which men and women have spoken. “When the meeting starts, the app uses the mic to record what’s being said and will continuously show you the equality of that meeting. When the meeting has ended and the recording stops, you’ll get a full report of the meeting.”
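
The app’s core arithmetic, computing each speaker’s share of total speaking time from diarized speech segments, can be sketched as follows. The segment format here is hypothetical; the real app works from live microphone audio:

```python
from collections import defaultdict

# Hypothetical diarization output: (speaker_label, start_sec, end_sec).
segments = [
    ("speaker_a", 0.0, 45.0),
    ("speaker_b", 45.0, 60.0),
    ("speaker_a", 60.0, 120.0),
]

def speaking_shares(segments):
    """Return each speaker's share of total speaking time, as a fraction."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    overall = sum(totals.values())
    return {s: t / overall for s, t in totals.items()}

print(speaking_shares(segments))  # {'speaker_a': 0.875, 'speaker_b': 0.125}
```

The hard ML problems (who is speaking, and when) happen upstream; the “equality report” itself is simple bookkeeping like this.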

Food security: Detecting diseases in crops using image analysis (2016)

“Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach.”
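
The study’s trained network is large, but the basic operation a convolutional layer performs is easy to show. Below is a toy “valid” 2D convolution in plain Python (no deep-learning library), applying a vertical-edge-detecting kernel to a tiny synthetic image; real CNNs stack many such layers with kernels learned from data:

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge detector on a tiny "image": bright left half, dark right half.
image = [[1, 1, 0, 0]] * 4
kernel = [[1, -1]]  # responds where brightness drops left-to-right
print(conv2d(image, kernel))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Stacking such filters lets a network detect edges, then textures, then lesion shapes, which is how a model can learn to separate diseased from healthy leaves.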

Can an ML model potentially predict the closure of civic spaces more effectively than traditional approaches? The USAID-funded INSPIRES project is testing the proposition that machine learning can help identify early flags that civic space may shift and generate opportunities to evaluate the success of interventions that strive to build civil society resilience to potential shocks.



5G Technology

What is 5G technology?

Digital inclusion project in the Peruvian Amazon. Rural areas with less existing infrastructure are likely to be left behind in 5G development. Photo credit: Jack Gordon for USAID / Digital Development Communications.

5G refers to the fifth generation of cellular network technology. It is the next iteration in mobile technology, but there is not yet an official, finalized standard for it. 5G therefore refers to a handful of different technologies that are anticipated, but not guaranteed, to emerge in the next decade and to form a web of “super connectivity” for specific use cases and contexts. Above all, 5G aims to connect devices to one another (the Internet of Things). However, there is currently a large gap between marketing and reality, which this resource will explain. This video by Engadget is an excellent introduction to 5G technology and the excitement and caution around it.

As of 2020, there is not yet an official 5G standard from the overseeing authority, the United Nations International Telecommunications Union (ITU), and telecommunications carriers do not yet agree on what 5G means. Once 5G is fully standardized and the necessary infrastructure is in place, 5G will further enable “smart homes”, “smart cities”, autonomous vehicles, and other internet-related automation. As opposed to previous cellular technology, 5G is not designed primarily to connect people, but rather to connect devices.

What do we mean by “G?”

“G” refers to generation: a threshold marking a significant shift in capability, architecture, and technology. These labels are assigned by the telecommunications industry through the standards body known as 3GPP. A new generation of specifications has arrived roughly every ten years, hence the word “generation”. The acronym IMT, which stands for International Mobile Telecommunications, is also used along with the year the standard became official: for example, 3G is also called IMT-2000.

1G: Allowed analogue phone calls; brought mobility to devices
2G: Allowed digital phone calls and messaging; enabled mass adoption and, eventually, mobile data (2.5G)
3G: Allowed phone calls, messaging, and internet access
3.5G: Allowed stronger internet access
4G: Allowed faster internet (better video streaming)
5G: “The internet of things”; will allow devices to connect to one another
6G: “The internet of senses”; little is yet known

This video gives a simplified overview of 1G-4G.

Cellphone shop in Tanzania. 5G technology requires access to 5G-compatible smartphones and devices. Photo credit: Riaz Jahanpour for USAID Tanzania / Digital Development Communications.

There is a gap in many developing countries between the cellular standard that users subscribe to and the standard they actually use: many subscribe to 4G but, because it does not perform as advertised, fall back to 3G. This fallback is not always evident, and it may be even harder for consumers to notice with 5G than with previous networks.

Even once the official 5G standard is decided, the infrastructure is in place, and users have access to it through appropriate devices, the product is not guaranteed to work as promised; in fact, chances are it will not. 5G will still rely on 3G and 4G technologies, and carriers will continue operating their 3G and 4G networks in parallel.

How does 5G technology work?

There are several key performance indicators (KPIs) that 5G hopes to achieve. Essentially, 5G will strengthen cellular networks by using more radio frequencies along with new techniques to strengthen and multiply connection points. This means faster connections and lower latency: cutting down the time between a command on your device and its execution. It will also allow many more devices to connect to one another.
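
A back-of-the-envelope calculation shows why low-latency targets (often quoted around 1 millisecond for 5G) force infrastructure close to the user: propagation delay alone sets a floor on response time. The fiber signal speed below is an approximation, roughly two-thirds the speed of light:

```python
# Round-trip propagation delay over fiber, ignoring all processing time.
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 the speed of light; an approximation

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(round_trip_ms(100))   # 1.0 ms: the whole budget is gone at 100 km
print(round_trip_ms(1000))  # 10.0 ms: a distant data center can never meet it
```

This is one reason ultra-low-latency 5G services depend on “edge” computing placed near the cells rather than on distant servers.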

Understanding Spectrum

To understand 5G, it is important to understand a bit about the electromagnetic radio spectrum. This video gives an overview of how cellphones use spectrum.

5G will bring faster speed and stronger services by using more spectrum. To establish a 5G network, it is necessary to secure spectrum for that purpose in advance. Governments and companies have to negotiate spectrum – usually through auctioning off bands, sometimes for huge sums. Spectrum allocation can be a very complicated and political process. Many experts fear that 5G, which requires lots of spectrum, threatens so-called ‘network diversity’ — the idea that spectrum should be used for a variety of purposes across government, business, and society — and may require allocating too much spectrum for this one purpose.

For more reading on spectrum allocation, see the Internet Society’s publication Innovations in Spectrum Management (2019).

Millimeter Waves

5G hopes to tap into new, unused bands at the top of the radio spectrum, known as millimeter waves (mmWaves). These are much less crowded than the lower bands, so they allow faster data transfers. But millimeter waves are tricky: they travel only about 1.6 km even with nothing in their way; they can be absorbed by trees, walls, and even the air; and rain and fog can cut the signal’s range to about 1 km. As a result, 5G will require many small cells rather than the few massive towers used for 4G: roughly every 100 meters outdoors and every 50 meters indoors. As will be detailed later, this is why 5G is really best suited to select parts of dense urban centers. The theoretical potential of millimeter waves is exciting, but in reality most 5G carriers are deploying 5G in the lower parts of the spectrum, and different carriers are experimenting with different bands.
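
The range problem described above follows from basic radio physics: free-space path loss grows with frequency as FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c), so a 28 GHz millimeter-wave signal loses about 21 dB more than a 2.4 GHz signal over the same distance, before any absorption by trees, walls, or rain. A quick check of that figure:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (distance in meters, frequency in Hz)."""
    c = 299_792_458  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

loss_28ghz = fspl_db(1000, 28e9)   # ~121 dB over 1 km at mmWave
loss_24ghz = fspl_db(1000, 2.4e9)  # ~100 dB over 1 km at 2.4 GHz
print(round(loss_28ghz - loss_24ghz, 1))  # 21.3
```

Extra loss must be compensated with denser cells or directional antennas, which is exactly the trade-off the paragraph above describes.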

Don’t forget about fiber!

5G technology needs to run on fiber infrastructure. Fiber can be understood as the nervous system of a mobile network, connecting data centers to cell towers.

5G requires data centers, fiber, cell towers, and small cells

Investment in fiber is critical for 5G deployment and for broadband more generally, but fiber is expensive to install initially. (A 2017 Deloitte study estimated that 5G deployment in the United States would require at least $130 billion of investment in fiber.) At the moment, fiber is relatively scarce even in industrially developed countries, and it is especially scarce in industrially developing countries and in rural areas. Mobile operators, as well as the International Telecommunications Union, consider fiber the best “backhaul” (the connective link between cell sites and data centers) due to its long life, high capacity, high reliability, and ability to support very high traffic. But the initial investment is expensive and often cost-prohibitive for suppliers and operators, especially in less densely populated areas. 5G is sometimes advertised as a replacement for fiber; however, fiber and 5G are complementary technologies, and fiber cannot be ignored: if anything, fiber is a more secure investment that works with many other technologies far beyond 5G.

The triangular chart below, from the International Telecommunications Union, is often used to explain the primary features that make up 5G technology (enhanced capacity, low latency, and enhanced connectivity) and the potential applications of these features.

Features that make up 5G technology: enhanced capacity, low latency, and enhanced connectivity, and the potential applications of these features

Who supplies 5G technology?

The market of 5G providers is very concentrated, even more so than in previous generations. A handful of companies are capable of supplying telecommunications operators with the necessary technology. As of July 2020, the main suppliers are Huawei (based in China), Ericsson (based in Sweden), and Nokia (based in Finland). Many other carriers are working on developing their 5G technology; for example, in the United States, Sprint and T-Mobile merged to work on 5G, raising competition concerns.

In 2019, the United States government passed a defense authorization spending act, NDAA Section 889, that essentially prohibits US agencies from using telecommunications equipment made by Chinese suppliers (for example, Huawei and ZTE). The restriction was put in place over fears that the Chinese government may use its telecommunications infrastructure for espionage (see more in the Risks section). NDAA Section 889 could apply to any contracts made with the US government, and so it is critical for organizations considering partnerships with Chinese suppliers to keep in mind the legal challenges of trying to engage with both the US and Chinese governments in relation to 5G.

Of course, this means that the choice of 5G manufacturers suddenly becomes much more limited. Chinese companies have by far the largest market share of 5G technology. Huawei has the most patents filed, and the strongest lobbying presence within the International Telecommunications Union.

The 5G playing field is fiercely political, with strong tensions between China and the United States. Because 5G technology is closely connected to chip manufacturing, it is important to keep an eye on “the chip wars”. Suppliers reliant on American and Chinese companies are likely to get caught in the crossfire as the trade war between the two countries worsens, because supply chains and equipment manufacturing often depend on both. Peter Bloom, founder of Rhizomatica, points out that the global 5G chipset market is projected to grow to $22.41 billion by 2026 (from approximately $2.03 billion in 2020). Bloom cautions: “The push towards 5G encompasses a plethora of interest groups, particularly governments, financing institutions and telecommunications companies, that demands to be better analyzed in order to understand where things are moving, whose interests are being served and the possible consequences of these changes.”

 


How is 5G relevant in civic space and for democracy?

Mobile money agency in Ghana. Roughly 50% of the world’s population is still not connected to the internet. Photo credit: John O’Bryan / USAID.

5G is the first generation that does not prioritize access and connectivity for humans. 5G is a kind of ‘race-car’ technology — a super-connectivity for luxury use cases and specific environments, for instance, for enhanced virtual reality experiences and massively multiplayer video games. Many of the use cases advertised are theoretical or experimental and do not yet exist widely in our society, like remote surgery. (Indeed, telesurgery is one of the most often cited examples of the benefits of 5G, but it remains a prototype technology with many technical and legal issues to work out, that would also necessitate global network development.)

Access to education, health, and information are fundamental rights; multiplayer video games, virtual reality, and autonomous vehicles (all of which would rely on 5G) are not. 5G is a distraction from the critical infrastructure needed to get people online so they can fully enjoy their fundamental rights and participate in democratic life. The focus on 5G diverts attention away from immediate solutions for improving access and bridging the digital divide.

Roughly 50% of the world’s population is still not connected to the internet, primarily in developing contexts. Unfortunately, 5G will not address this divide. What is needed to improve internet access in industrially developing contexts is more fiber, more internet exchange points (IXPs), more cell towers, more internet routers, more wireless spectrum, and reliable electricity. In an industry white paper, only one page of 125 discusses a “scaled down” version of 5G to address the needs of areas with extremely low average revenue per user (ARPU), and the solutions offered include further limiting the geographic areas of service.

Digital trainers in Mugumu, Tanzania. 5G is not designed primarily to connect people, but rather to connect devices. Photo credit: Bobby Neptune for DAI.

This 2016 presentation by the American corporation Intel at an ITU regional forum, “5G for Developing Countries”, advertises the usual aspirations for 5G: autonomous vehicles (labeled “smart transportation”), virtual reality (labeled “e-learning”), remote surgery (labeled “e-health”), and sensors that can be placed for water management and agriculture. The Kenya ICT Action Network hosted a webinar in the spring of 2020 in partnership with Huawei; the presentation summary and slides are available here. Similar highly specific and theoretical future use cases are advertised there as well: autonomous vehicles, industrial automation, smart homes, smart cities, smart logistics.

In both presentations, the emphasis is on connecting objects, showing that 5G is really adapted for big industries, not for individuals. Even if 5G were accessible in remote rural areas, individuals would likely have to purchase the most expensive, unlimited data plans to access it, on top of acquiring 5G-compatible smartphones and devices. Telecommunications companies themselves estimate that only 3% of Sub-Saharan Africa will use 5G. It is estimated that by 2025 most connections will still be 3G (roughly 60%) or 4G (roughly 40%), a technology that has already existed for ten years.


5G Broadband / Fixed Wireless Access (FWA)

Because most people in industrially developing contexts connect to the internet via cellphone infrastructure and mobile broadband, what would be most useful to them would be “5G broadband”, also called 5G Fixed Wireless Access, or FWA. FWA is designed to replace “last mile” infrastructure with a wireless 5G network. Indeed, that “last mile” — that final distance to the end user — is often the biggest barrier to internet access across the world. But because the vast majority of these 5G networks will rely on physical fiber connection, FWA without fiber won’t be of the same quality. These FWA networks will also be more expensive for network operators to maintain than traditional infrastructure or “standard fixed broadband.”

This article by one of the top 5G providers, Ericsson, asserts that FWA will be one of the main uses of 5G, but the article shows that the operators will have a wide ability to adjust their rates, and also admits that many markets will still be addressed with 3G and 4G.

5G will not replace other kinds of internet connectivity for citizens

While 5G requires enormous investment in physical infrastructure, new generations of Wi-Fi access are becoming more accessible and affordable. There is also an increasing variety of community network solutions, including Wi-Fi mesh networks and sometimes even community-owned fiber. For further reading, see 5G and the Internet of EveryOne: Motivation, Enablers, and Research Agenda, IEEE (2018). These are important alternatives to 5G that should be considered in any context (developed or developing, urban or rural).

“…if we are talking about thirst and lack of water, 5g is mainly a new type of drink cocktail, a new flavor to attract sophisticated consumers, as long as you live in profitable places for the service and you can pay for it. Renewal of communications equipment and devices is a business opportunity for manufacturers mainly, but not just the best “water” to the unconnected, rural, … (non-premium clients), even a problem as investment from operators gets first pushed by the trend towards satisfying high paying urban customers and not to spread connectivity to low pay social/universal inclusion customers…” – IGF Dynamic Coalition on Community Networks, in communication with the author of this resource.

It is critical not to forget about previous-generation networks. 2G will continue to be important for providing broad coverage: it is already very present (around 95% coverage in low- and middle-income countries), requires less data, and carries voice and SMS traffic well, which makes it a safe and reliable option for many situations. Also, upgrading existing 2G sites to 3G or 4G is less costly than building new sites.

5G and the private sector

The technology that 5G facilitates (the Internet of Things, smart cities, smart homes) will encourage the installation of chips and sensors in an increasing number of objects. The devices 5G proposes to connect are not primarily phones and computers, but sensors, vehicles, industrial equipment, implanted medical devices, drones, cameras, etc. Linking these devices raises a number of security and privacy concerns, as will be explored in the Risks section.

The actors that stand to benefit most from 5G are not citizens or democratic governments, but corporate actors. The business model powering 5G is built around industry access to connected devices: in manufacturing, in the auto industry, in transport and logistics, in power generation and efficiency monitoring, etc. “Consider the automotive industry. 5G can be used for everything from tracking components through the supply chain and the manufacturing process, all the way to providing mobile connectivity for passengers and the vehicles themselves,” explains Freddy Boom, Chief Country Officer at Greensill. 5G will boost the economic growth of those actors able to benefit from it, particularly those invested in automation, but it would be a leap to assume these benefits will be distributed across society.

The introduction of 5G will bring the private sector massively into public space through the network carriers, operators, and other third parties behind the many connected devices. This overtaking of public space by private actors (usually foreign private actors) should be carefully considered through the lens of democracy and fundamental rights. Though the private sector has already entered our public spaces (streets, parks, shopping malls) with previous cellular networks, 5G’s arrival, bringing with it more connected objects and more closely spaced cell sites, will increase this presence.


Opportunities

The advertised benefits of 5G usually fall into three areas outlined below. A fourth area of benefit will also be explained—though less often cited in the literature, it would be the most directly beneficial for citizens. It should be noted that these benefits will not be available soon, and perhaps never available widely. Many of these will remain elite services, only available under precise conditions and for high cost. Others will require standardization, legal and regulatory infrastructure, and widespread adoption before they can become a social reality.

The chart below, taken from a GSMA report, shows the generally listed benefits of 5G. The benefits in the white section could be achieved on previous networks like 4G, and those in the brown section would require 5G. This further emphasizes the fact that many of the objectives of 5G are actually possible without it.

Benefits of 5G

Augmented Reality & Tactile Internet

5G has many potential uses in entertainment, especially in gaming. Low latency will allow massively multiplayer games, higher quality video conferencing, faster downloading of high-quality videos, etc. Augmented and virtual reality are advertised as ways to create immersive experiences in online learning. 5G’s ability to connect devices will allow for wearable medical devices that can be controlled remotely (though not without cybersecurity risks). Probably the most exciting example of “tactile internet” is the possibility of remote surgery: an operation could be performed by a robot that is remotely controlled by a surgeon somewhere across the world. The systems necessary for this are very much in their infancy and will also depend on the development of other technology, as well as regulatory and legal standards and a viable business model.

Autonomous Vehicles

The major benefit of 5G will come in the automobile sector. It is hoped that the high speed of 5G will allow cars to coordinate safely with one another and with other infrastructure. For self-driving vehicles to be safe, they will need to be able to communicate with one another and with everything around them within milliseconds. The super speed of 5G is important for achieving this. (At the same time, 5G raises other security concerns for autonomous vehicles).

Machine-to-machine connectivity (IoT/smart home/smart city)

Machine-to-machine connectivity, or M2M, already exists in many devices and services, but 5G would further facilitate it. This stands to benefit industrial players (manufacturers, logistics suppliers, etc.) most of all, but could also benefit individuals or cities that want to track their use of resources like energy or water. Installed sensors collect data that can be analyzed for efficiency, and the system can then be optimized toward a goal. Typical M2M applications in the smart home include thermostats, smoke detectors, consumer electronics, and healthcare monitoring. It should be noted that many such devices can operate on 4G, 3G, and even 2G networks.
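
To give a feel for what M2M traffic looks like at the application layer, the sketch below serializes a single smart-home sensor reading as a compact JSON message. The device name, field names, and unit are invented for illustration; real deployments typically use purpose-built protocols such as MQTT or CoAP:

```python
import json
import time

def make_reading(device_id: str, metric: str, value: float) -> str:
    """Serialize one sensor reading as a compact JSON message."""
    return json.dumps({
        "device_id": device_id,   # hypothetical field names, for illustration
        "metric": metric,
        "value": value,
        "unit": "celsius",
        "timestamp": int(time.time()),
    }, separators=(",", ":"))

msg = make_reading("thermostat-livingroom", "temperature", 21.5)
print(msg)  # a message of roughly a hundred bytes
```

The payload is tiny, which is why, as noted above, many such devices run perfectly well on 2G, 3G, or 4G; 5G’s distinctive contribution is connecting far more of them per cell.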

5G-based Fixed-Wireless Access (FWA) Can Provide Gigabit Broadband to Homes

Probably the most relevant benefit of 5G for industrially developing contexts is the potential of FWA. FWA is less often cited in the marketing literature because it does not deliver the full set of promised industrial benefits: it provides breadth of connectivity rather than revolutionary strength and intensity, and should be thought of as a different kind of “5G”. (See the 5G Broadband / Fixed Wireless Access section). As explained there, FWA will still require infrastructure investment and will not necessarily be more affordable than broadband alternatives, given the increasing power of the carriers.


Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with 5G in DRG work, as well as how to mitigate for unintended – and intended – consequences.

Personal Privacy

With 5G connecting more and more devices, the private sector will be moving further into public space through sensors, cameras, chips, etc. Many of the connected devices will be things we never expected to be connected to the internet before: washing machines, toilets, cribs, etc. Some will even be inside our bodies, like smart pacemakers. The placement of devices with chips into our homes and environments eases the process of collecting data about us and other forms of surveillance.

A growing number of third-party actors have sophisticated methods for collecting and analyzing data related to us. Some devices may only ultimately collect meta-data, but this can still seriously reduce privacy. Meta-data is information connected to our communications that does not include the content of those communications: for example, numbers called, websites visited, geographical location or the time and date a call was made, etc. The EU’s highest court has ruled that this kind of information can be considered just as sensitive as the actual contents of communications because of insights that the data can offer into our private lives. 5G will allow telecommunications operators and other actors access to meta-data that can be assembled for insights about us that reduce our privacy.

Finally, 5G requires many small cell base stations, which will sit much closer to people’s homes and workplaces, on streetlights, lamp posts, etc. This will make location tracking much more precise and location privacy nearly impossible.

Espionage

For most countries, 5G will be supplied by foreign companies; in the case of Huawei and ZTE, by companies answerable to a government that does not uphold human rights obligations or democratic values. For this reason, some governments are concerned about potential abuse of data and even about foreign spying. Several countries, including the United States, Australia, and the United Kingdom, have taken actions to limit the use of Chinese equipment in their 5G networks due to fears of potential spying. The US has perhaps taken the strongest stance, through NDAA Section 889 mentioned above. Similarly, a 2019 report on the security risks of 5G by the European Commission and the European Agency for Cybersecurity warns against using a single supplier to provide 5G infrastructure because of espionage risks. The general argument against a single supplier (usually made against the Chinese supplier Huawei) is that if that supplier provides the core network infrastructure for 5G, the supplier’s government (China) will gain immense surveillance capacity, through meta-data or even through a “back door” or intentionally installed vulnerability. Government spying through private sector and telecom equipment is commonplace, and China is not the only culprit. But the massive network capacity of 5G and the many connected devices collecting personal information raise both the information at stake and the risk.

Cybersecurity Risks

As a general rule, the more digitally connected we are, the more vulnerable we are to cyber threats, and 5G aims to make us and our devices ultra-connected. With 5G and the Internet of Things, the attack surface will be truly enormous. If a self-driving car on a smart grid is hacked or breaks down, this could bring immediate physical danger, not just information leakage. 5G centralizes infrastructure around a core, which makes that core especially vulnerable. The complex supply chain also multiplies risks: according to an EU coordinated cybersecurity risk assessment, “[t]he deployment of 5G networks is taking place in a complex global cybersecurity threat landscape, notably characterized by an increase in supply-chain attacks.” And because 5G-based networks will have such wide application, internet shutdowns become more consequential, endangering large parts of the network.

5G infrastructure can simply have technical deficiencies. Because 5G technology is still in pilot phases, many of these deficiencies are not yet known. 5G advertises some enhanced security functions, but security holes remain because devices will still be connected to older networks.

Massive Investment Costs and Questionable Returns

As A4AI explains, “The rollout of 5G technology will demand significant investment in infrastructure, including in new towers capable of providing more capacity, and bigger data centres running on efficient energy.” These costs will likely be passed on to consumers, who will have to purchase compatible devices and sufficient data. 5G requires massive infrastructure investment — even in places with strong 4G infrastructure, existing fiber-optic cables, good last-mile connections, and reliable electricity. Estimates for the total cost of 5G deployment — including investment in technology and spectrum — are as high as $2.7 trillion USD. Deploying the dense clusters of 5G cells is itself costly. Due to the many security risks, regulatory uncertainties, and generally untested nature of the technology, 5G is not necessarily a safe investment even in wealthy urban centers. The high cost of introducing 5G will be an obstacle to expansion, and prices are unlikely to fall enough to make 5G widely affordable.

Because this is such a complex new product, there is a risk of purchasing low-quality equipment. 5G is heavily reliant on software and services from third-party suppliers, which multiplies the chance of defects in parts of the equipment (poorly written code or poor engineering, for example). The process of patching these flaws can be long, complicated, and costly. Some vulnerabilities may go unidentified for a long time but can suddenly cause severe security problems. Lack of compliance with industry or legal standards could cause similar problems. In some cases, new equipment may not be flawed or faulty, but simply incompatible with existing equipment or with purchases from other suppliers. With so many third-party suppliers in the mix, this may result in incompatible parts. Moreover, there will be large costs just to run the 5G network properly: securing it from cyberattacks, patching holes and addressing flaws, and maintaining the material infrastructure. Skilled and trusted human operators are needed for these tasks.

Foreign Dependency and Geopolitical Risks

Installing new infrastructure means dependency on private sector actors, usually from foreign countries. Over-reliance on foreign private actors raises multiple concerns as mentioned, related to cybersecurity, privacy, espionage, excessive cost, compatibility, etc. Because there are only a handful of actors fully capable of supplying 5G, there is also the risk of becoming dependent on a foreign country. With current geopolitical tensions between the US and China, countries trying to install 5G technology may get caught in the crossfire of a trade war. As Jan-Peter Kleinhans, security and 5G expert at Stiftung Neue Verantwortung (SNV), explains: “The case of Huawei and 5G is part of a broader development in information and communications technology (ICT). We are moving away from a unipolar world with the US as the technology leader, to a bipolar world in which China plays an increasingly dominant role in ICT development”. The financial burdens of this bipolar world will be passed on to suppliers and customers.

Class/Wealth & Urban/Rural Divides

“Without a comprehensive plan for fiber infrastructure, 5G will not revolutionize Internet access or speeds for rural customers. So anytime the industry is asserting that 5G will revolutionize rural broadband access, they are more than just hyping it, they are just plainly misleading people.” – Ernesto Falcon, Electronic Frontier Foundation.

5G is not a lucrative investment for carriers in more rural areas and developing contexts, where the density of potentially connected devices is lower. There is industry consensus, supported by the ITU itself, that the initial deployment of 5G will be in dense urban areas, particularly wealthy areas with industry presence. Rural and poorer areas with less existing infrastructure are likely to be left behind because they are not a good commercial investment for the private sector. For rural and even suburban areas, millimeter waves and cellular networks that require dense cell towers are not going to be the solution. As a result, 5G will not bridge the digital divide for lower-income and rural areas. It will reinforce it, giving better, super-connectivity to those who already have access and can afford even more expensive access and devices, while keeping the cost of connectivity high for everyone else.

Energy Use and Environmental Impact

Huawei has shared that the typical 5G site has power requirements over 11.5 kilowatts, almost 70% more than sites deploying 2G, 3G and 4G. Some estimate 5G technology will use two to three times more energy than previous mobile technologies. All agree it will require more infrastructure, which means more power supply and more battery capacity, all of which will have environmental consequences. The most significant environmental issues associated with implementation will come from manufacturing the many component parts, along with the proliferation of new devices that will use the 5G network. 5G will encourage more demand for and consumption of digital devices, and therefore the creation of more e-waste, which will also have serious environmental consequences. According to Peter Bloom, founder of Rhizomatica, most of the environmental damage of 5G will take place in the global south. This will include damage to the environment and to communities where the mining of materials and minerals takes place, as well as pollution from the electronic waste. In the United States, the National Oceanic and Atmospheric Administration and NASA reported in 2019 that the decision to open up high spectrum bands (24 gigahertz spectrum) would affect weather forecasting capabilities for decades.

Questions

To understand the potential of 5G for your work environment or community, ask yourself these questions to try to assess if 5G is the most appropriate, secure, cost effective and human-centric solution:

  1. Are people already able to connect to the internet sufficiently? Is the necessary infrastructure in place for people to connect to the internet, through 3G or 4G, or through Wi-Fi (fiber, internet access points, electricity)?
  2. Are the conditions in place to effectively deploy 5G? That is, is there sufficient fiber backhaul and 4G infrastructure (recall that 5G is not yet a standalone technology)?
  3. What specific use case(s) do you have for 5G that would not be achievable using a previous generation network?
  4. What other plans are being made to address the digital divide, through Wi-Fi deployment and mesh networks, digital literacy and digital training, etc.?
  5. Who stands to benefit from 5G deployment? Who will be able to access 5G? Do they have the appropriate devices and sufficient data? Will access be affordable?
  6. Who is supplying the infrastructure? How much can they be trusted regarding quality, pricing, security, data privacy, and espionage?
  7. Do the benefits of 5G outweigh the costs and risks (in relation to security, financial investment, potential geopolitical consequences)?
  8. Are there sufficient skilled human resources to maintain the 5G infrastructure? How will failures and vulnerabilities be dealt with?

Case Studies

South Korea, China, and the United States are the countries with the most 5G technology deployed to date. Still, 5G infrastructure is mainly in its pilot stages and far from achieving the advertised potential.

Nigeria

Nigeria had plans to roll out 5G in multiple major cities before 2020. The Nigerian Communications Commission (NCC) held a three-month trial period beginning in November 2019, but the planned installation was then decommissioned. One of the purposes of the trial was to study health and security challenges, and security agencies and other relevant stakeholders were invited to participate. After the government called off the installation, the NCC explained that it is prioritizing a policy of “technology neutrality” and that it encourages the continued development of secure technology. Data depletion was also a concern: users quickly deplete data at 4G speeds, and it was expected that 5G would magnify this problem.

Brazil

Ericsson (based in Sweden) is investing 1 billion reais (or $238.30 million) in Brazil to create its first 5G assembly line in Latin America. The company argues that this will create jobs and attract investment, but it is also pressuring Brazil to auction its spectrum quickly, which should raise red flags. Like the UK and many other countries, Brazil is cautious about allowing the Chinese company Huawei too big a role in its infrastructure. The head of the country’s Institutional Security Cabinet, General Augusto Heleno, said the government is gathering information, but not going to entirely ignore Huawei’s bid: “We can’t pretend we’re not watching… The big threat in all this 5G discussion is about the fact that it will allow whoever owns the technology to know who you are, how much do you earn and what’s in your bank account.”

Latin America and the Caribbean

A recent publication by the InterAmerican Development Bank and the Government of South Korea on 5G in Latin American and Caribbean (LAC) countries encourages them to adopt 5G, but cautions that these countries will face many challenges: high implementation costs, the need to secure spectrum, the need to develop institutions and regulatory systems, the need for financial support, etc.

The United Kingdom

In the United Kingdom, a dedicated commission within the Department of Culture, Media, and Sport (DCMS) was granted the authority and responsibility for building 5G infrastructure, and digital infrastructure groups were organized to promote the exchange of views and cooperation among ministries. Even with all this coordination, 5G has caused fear, confusion, and violence in the country, as seen during the pandemic. The UK is also in a tricky spot because of tensions between China and the United States: in January 2020, the UK ignored pressure from the US administration and allowed Huawei to build “non-core” parts of its 5G network. The US government warned that it could end intelligence sharing with allies that use Huawei’s equipment, which could be a blow to the UK. Meanwhile, this Guardian article from June 2019 describes the experience of 5G connection for users who bought a 5G phone to access it.

Blockchain

What is Blockchain?

A blockchain is a distributed database existing on multiple computers at the same time, with a detailed and unchangeable transaction history secured by cryptography. Blockchain-based technologies, perhaps most famous for their use in “cryptocurrencies” such as Bitcoin, are also referred to as “distributed ledger technology” (DLT).

How does Blockchain work?

Unlike hand-written records, like this bed net distribution in Tanzania, data added to a blockchain can’t be erased or manipulated. Photo credit: USAID.

A blockchain is a constantly growing database: new sets of records, or ‘blocks,’ are added to it over time. Each block contains a timestamp and a link to the previous block, so the blocks form a chain. The resulting blockchain is not managed by any particular body; instead, everyone in the network has access to the whole database. Old blocks are preserved forever, and new blocks are added to the ledger irreversibly, making it impossible to erase or manipulate the database records.
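
The structure described above — each block carrying a timestamp and a link (a cryptographic hash) to the previous block — can be illustrated with a short sketch. This is a toy model in Python, not a real blockchain implementation; the field names and use of SHA-256 are illustrative assumptions:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Bundle data with a timestamp and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    # The block's own hash covers everything, including prev_hash,
    # which is what links the blocks into a chain.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A genesis block followed by two more, each linked to its predecessor.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

assert chain[2]["prev_hash"] == chain[1]["hash"]
```

Because each block's hash depends on the previous block's hash, no block can be changed without changing every block after it.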

Blockchain can provide solutions for very specific problems. The most clear-cut use case is for public, shared data where all changes or additions need to be clearly tracked, and where no data will ever need to be redacted. Different uses require different inputs (computing power, bandwidth, centralized management), which need to be carefully considered based on each context. Blockchain is also an over-hyped concept applied to a range of different problems where it may not be the most appropriate technology, or even in some cases, a responsible technology to use.

There are two core concepts around Blockchain technology: the transaction history aspect and the distributed aspect. They are technically tightly interwoven, but it is worth considering them and understanding them independently as well.

'Immutable' Transaction History

Imagine stacking blocks. With increasing effort, one can continue adding more blocks to the tower, but once a block is in the stack, it cannot be removed without fundamentally and very visibly altering—and in some cases destroying—the tower of blocks. A blockchain is similar in that each “block” contains some amount of information—information that may be used, for example, to track currency transactions and store actual data. (You can explore the bitcoin blockchain, which itself has already been used to transmit messages and more, to learn about a real-life example.)

This is a core aspect of blockchain technology, generally called immutability, meaning that data, once stored, cannot be altered. In rare cases, 100% consensus among users can permit changes, although this is incredibly tedious.
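
A minimal sketch of why tampering is visible (Python, assuming each block stores the hash of its predecessor, as in the chain-building example; data values are invented):

```python
import hashlib

def block_hash(data, prev_hash):
    return hashlib.sha256((data + prev_hash).encode()).hexdigest()

# Build a three-block chain: each entry is (data, prev_hash, own_hash).
chain = []
prev = "0" * 64
for data in ["genesis", "tx: A->B 5", "tx: B->C 2"]:
    h = block_hash(data, prev)
    chain.append((data, prev, h))
    prev = h

def verify(chain):
    """Recompute every hash; any edit to an old block breaks the links."""
    prev = "0" * 64
    for data, stored_prev, stored_hash in chain:
        if stored_prev != prev or block_hash(data, prev) != stored_hash:
            return False
        prev = stored_hash
    return True

assert verify(chain)                                   # untouched chain verifies
chain[1] = ("tx: A->B 500", chain[1][1], chain[1][2])  # tamper with an old block
assert not verify(chain)                               # the change is immediately visible
```

This is what "immutable" means in practice: the data can physically be changed, but any change is detectable by anyone holding a copy of the chain.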

Blockchain is, at its simplest, a digital tool that replicates online the value of a paper-and-ink logbook. While this can be valuable for tracking a variety of sequential transactions or events (ownership of a specific item, a parcel of land, a supply chain) and could even theoretically be applied to concepts like voting or community ownership and management of resources, it comes with an important caveat: mistakes can never be truly unmade, and data tracked in a blockchain can never be updated, only appended to.

Many of the potential applications of blockchain would naturally want one of the pieces of data tracked to be the identity of a person or legal organization. If that entity changes, their previous identity will be forever, immutably tracked and linked to the new identity. Beyond being damaging to a person fleeing persecution or legally changing their identity (transgender individuals, for example), this can also violate the right to privacy established under international human rights law.

Distributed and Decentralized

The second core tenet of blockchain technology is the absence of a central authority or oracle of “truth.” By nature of the unchangeable transaction records, every stakeholder contributing to a blockchain tracks and verifies the data it contains. At scale, this provides powerful protection against problems common not only to NGOs but to the private sector and other fields that are reliant on one service to maintain a consistent data store. This feature can protect a central system from collapsing or being censored, corrupted, lost, or hacked — but at the risk of placing significant hurdles in the development of the protocol and requirements for those interacting with the data.

A common misconception is that blockchain is completely open and transparent. Blockchains may be private, with various forms of permissions applied. In such cases, some users have more control over the data and transactions than others. Privacy settings for blockchain can make for easier management, but also replicate some of the specific challenges that blockchains, in theory, are solving.

How is blockchain relevant in civic space and for democracy?

Blockchain technology has the potential to provide substantial benefits in the development sector broadly, as well as specifically for human rights programs. By providing a decentralized, verifiable source of data, blockchain technology can be a more transparent, efficient form of information and data management for improved governance, accountability, financial transparency, and even digital identities. While blockchain can be effective when used strategically on specific problems, practitioners who choose to use it must do so fastidiously. Decisions to use DLTs should be based on a detailed analysis and research on comparable technologies, including non-DLT options.

Blockchain for the Humanitarian Sector – Future Opportunities

Blockchains lend themselves to some interesting tools being used by companies, governments, and civil society. Examples of how blockchain technology may be used in civic space include: land titles, digital IDs (especially for displaced persons), health records, voucher-based cash transfers, supply chain, censorship resistant publications and applications, digital currency , decentralized data management , crowdfunding and smart contracts. Some of these examples are discussed below. Specific examples of the use of blockchain technology may be found below under case studies.

A USAID-funded project used a mobile app and software to track the sale and transfer of land rights in Tanzania. Blockchain technology may also be used to record land titles. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.

Blockchain technology has the potential to provide substantial benefits in the humanitarian sector, such as protected data sharing, supply chain, donor financing, cash programmes and crowdfunding. By providing a decentralized, verifiable source of data, blockchain technology can enable a more transparent, efficient form of information and data management. Practitioners should understand that blockchain technology can be applied to humanitarian challenges, but it is not a separate humanitarian innovation in itself.

Blockchain’s core tenets – an immutable transaction history and a distributed, decentralized nature – lend themselves to some interesting tools being used by companies, governments, and civil society. These will be explored more fully in the Case Studies section, below; but at a high level, many actors are looking at leveraging blockchain in the following ways:

Smart Contracts

Smart contracts are agreements that provide automatic payments on the completion of a specific task or event. For example, in civic space, smart contracts could be used to execute agreements between NGOs and local governments to expedite transactions, lower costs, and reduce mutual suspicions. However, since these contracts are “defined” in code, any software bug becomes a potential loophole through which the contract could be exploited. One case of this happened when an attacker exploited a software bug in a smart contract-based firm called The DAO, draining approximately $50 million.
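
The core idea, a payment released automatically once a coded condition is met, can be sketched in a few lines. This is illustrative Python, not an actual on-chain contract language such as Solidity; the escrow scenario, class, and party names are invented for illustration:

```python
# Toy escrow "contract": funds are locked at creation and released
# automatically when the agreed condition is reported as met.
class EscrowContract:
    def __init__(self, payer, payee, amount):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.task_done = False
        self.paid = False

    def confirm_task(self):
        # In a real smart contract this confirmation would come from an
        # on-chain event or trusted oracle, not a simple method call.
        self.task_done = True
        self._maybe_pay()

    def _maybe_pay(self):
        # The "bug as loophole" risk lives here: if this condition is
        # coded wrongly, funds can be released (or locked) contrary to
        # the parties' intent, and the code cannot easily be amended.
        if self.task_done and not self.paid:
            self.paid = True

contract = EscrowContract("NGO", "LocalGov", 10_000)
assert not contract.paid   # nothing happens until the condition is met
contract.confirm_task()
assert contract.paid       # payment released automatically
```

The DAO incident mentioned above is the cautionary tale: once such logic is deployed immutably, a flaw in the release condition is not a bug to patch but a standing invitation to exploit.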

Innovative Currency and Payment Systems

Many new cryptocurrencies are considering ways to leverage blockchain for transactions without the volatility of bitcoin, and with other properties, such as speed, cost, stability and anonymity, at the forefront. Cryptocurrencies are also occasionally combined with smart contracts to establish shared ownership through funding of projects.

Potential for Fund-raising

In addition, the digital currency subset of blockchain is being used to establish shared ownership (not dissimilar to stocks/shares of large companies) of projects.

Censorship-resistant Technology

The decentralized, immutable nature of blockchain provides clear benefits to protecting speech, but not without significant risks. There have been high-visibility uses of blockchain to publish censored speech in China, Turkey, and Catalonia. Article 19 has written an in-depth report specifically on the interplay between freedom of expression and blockchain technologies, which provides a balanced view of the potential benefits and risks, and guidance for stakeholders considering engaging in this facet.

Decentralized computation and storage

Ethereum is a blockchain platform with an associated cryptocurrency, focused on using the blockchain system to manage decentralized computation and storage through smart contracts and digital payments. Ethereum encourages the development of “distributed apps” which are tied to transactions on the Ethereum blockchain. Examples include automated auctions, voting, a Twitter-like tool, and apps that pay for content creation/sharing. See case studies in the cryptocurrencies primer for more detail.

The vast majority of these solutions presume some form of micro-payment as part of the transaction. This transfer formalizes and records the action in the blockchain, but the design element has obvious challenges for equal access.

Opportunities

Blockchain can have positive impacts when used to further democracy, human rights and governance issues. Read below to learn how to more effectively and safely think about blockchain in your work.

Proof of Digital Integrity

Data stored or tracked using blockchain technologies have a clear, sequential, and unalterable chain of verifications. Once data is added to the blockchain, there is ongoing mathematical proof that it has not been altered. This does not provide any assurance that the original data is valid or true, and it means that any data added cannot be deleted or changed – only appended to. In civil society work, this benefit has been applied to concepts such as creating records for land titles/ownership; to improving voting, ensuring one person matches with one unchangeable vote; and to preventing fraud and corruption and enhancing transparency in international philanthropy. It has been used for digital identities to help people retain ownership over their identity and documents, and in humanitarian contexts to make voucher-based cash transfers more efficient. As an enabler for digital currency, blockchain increases the sources of philanthropic wealth and, in some circumstances, facilitates cross-border funding of civil society.

A function such as this can provide a solution to the legal invisibility most often borne by refugees and migrants. Rohingya refugees in Bangladesh, for example, are often at risk of discrimination and exploitation because they are stateless. Proponents of blockchain argue that its distributed system can grant individuals “self-sovereign identity,” a process through which they create and register their identity themselves and can therefore control and share that information. However, if blockchain architects do not secure transaction permissions and public/private state variables, governments could use machine-learning algorithms to monitor public blockchain activity and gain insight into whatever daily, lower-level activity of their citizens is linkable to their blockchain identities. This might include interpersonal and business payments, timing, and any other locations or businesses where citizens need to “show their ID” for services, be they health, financial, or other. Furthermore, such a use of blockchain assumes that individuals would be prepared and able to adopt that technology, an unlikely possibility due to the financial insecurity many vulnerable groups, such as refugees, face.

“The importance of legal identity has been recognised by international bodies. ‘Legal invisibility’ is a major problem for various groups, placing them at risk of discrimination and exploitation, such as migrants often given fake ID documents for transport across borders, or people without proof of identity. Some argue that the decentralised nature of blockchain can provide a remedy to this by granting individuals ‘self-sovereign identity’ where they are the ones to create and register identity and the only ones to control what to do with it and with whom to share it […] If blockchain architects aren’t careful in the way they align transaction permissions and public/private state variables, governments could use state-sponsored machine learning algorithms to monitor public blockchain activity and gain insight into the lower level activity of their citizens.”

Blockchain and freedom of expression

Decentralized Store of Data

Around the world, blockchain technology helps displaced people regain IDs and access to other social services. Here, a CARD agent in the Philippines tracks IDs by hand. Photo credit: Brooke Patterson/USAID.
Around the world, blockchain technology helps displaced people regain IDs and access to other social services. Here, a CARD agent in the Philippines tracks IDs by hand. Photo credit: Brooke Patterson/USAID.

Blockchain is resistant to traditional problems of one central authority or data store being attacked or experiencing outages. In a blockchain, data are constantly being shared and verified across all members—although a blockchain requires large amounts of energy, storage and bandwidth to maintain a shared data store. This decentralization is most valued in digital currencies, which rely on the scale of their blockchain to balance not having a country or region “owning” and regulating the printing of the currency. Blockchain has also been explored as a way to distribute data and coordinate resources without reliance on a central authority, while remaining resistant to censorship.

Blockchains promise anonymity, or at least pseudonymity, because limited information about individuals is stored in transaction logs. However, this does not shield the platforms through which users access blockchains from regulation. For instance, the central internet regulator in China proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers.

Blockchain and freedom of expression

Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with blockchain in DRG work, as well as how to mitigate for unintended – and intended – consequences.

Unequal Access

Blockchain presents multiple barriers to engagement: connectivity, reliable and robust bandwidth, and local storage are all needed. Mobile phones are therefore often insufficient devices for hosting or downloading blockchains, and the infrastructure blockchain requires can be a barrier to access in areas where internet connectivity primarily occurs via mobile devices. Because every full node (host of a blockchain) stores a copy of the entire transaction log, blockchains only grow longer and larger with time, and thus can be extremely resource-intensive to download on a mobile device. For instance, over the span of a few years, the blockchain underlying Bitcoin grew from several gigabytes to several hundred. While the use of blockchain offline is possible, offline components are among the most vulnerable to cyberattacks, and this could put the entire system at risk.

Blockchains — whether they are fully independent or part of existing blockchains — require some percentage of actors to lend processing power to the blockchain, which, especially as the blockchain scales, either becomes exclusionary or creates classes of privileged users.

Another problem that can undermine the intended benefits of the system is unequal access to the means of converting blockchain-based currencies into traditional currency (for example, in philanthropy or when supporting civil society organizations in restrictive regulatory environments): for cryptocurrencies to have actual value, someone has to be willing to pay money for them.

Lack of digital literacy

Beyond these technical challenges, blockchain technology requires a strong baseline understanding of technology and its use in situations where digital literacy itself is a challenge.

There are paths around some of these problems, but any blockchain use needs to reflect on what potential inequalities could be exacerbated by or with this technology.

Further, these technologies are inherently complex. Outside the atypical case where individuals possess the technical sophistication and means to install blockchain software and set up nodes, the question remains how the majority of individuals can effectively access them. This is especially true of individuals who may have added difficulty interfacing with technologies due to disability, literacy or age. Ill-equipped users are at increased risk of their investments or information being exposed to hacking and theft.

Blockchain and freedom of expression

Breaches of Privacy

Account ledgers for Nepali Savings and Credit Cooperatives show the burden of paper. Blockchain replicates online the value of paper-and-ink records. Photo credit: Brooke Patterson/USAID.

Storing sensitive information on a blockchain – such as biometrics or gender – combined with the immutable aspects of the system can lead to considerable risks for individuals when this information is accessed by others with the intention to harm. Even when specific personally identifiable information is not stored on a blockchain, pseudonymous accounts are difficult to protect from being mapped to real-world identities, especially if they are connected with financial transactions, services, and/or actual identities. This can erode rights to privacy and protection of personal data, as well as exacerbate the vulnerability of already marginalized populations and persons who change fundamental aspects of their person (gender, name). Explicit consent, modification and deletion of one’s own data are often protected through data protection and privacy legislation, such as the General Data Protection Regulation in the EU, which serves as an example for many other laws around the world. An overview of legislation in this area around the world is kept up to date by the United Nations Conference on Trade and Development.
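
The mapping of pseudonymous accounts to real identities can be illustrated with a toy example (Python; the ledger, addresses, and names are invented). Once a single address is tied to a real identity, for instance via an exchange that performs identity checks, the public ledger lets anyone attribute that address's entire transaction history:

```python
# A public ledger of (sender, receiver, amount) entries. Pseudonymous on
# its face, but the whole history is visible to everyone.
ledger = [
    ("addr_1", "addr_2", 5),
    ("addr_2", "exchange", 3),  # addr_2 cashes out at an identity-checked exchange
    ("addr_2", "addr_3", 1),
]

# One real-world link is enough: the exchange knows who controls addr_2.
known = {"addr_2": "Alice"}

# Every transaction touching addr_2 is now attributable to Alice.
alices_activity = [tx for tx in ledger
                   if known.get(tx[0]) == "Alice" or known.get(tx[1]) == "Alice"]
assert len(alices_activity) == 3
```

Real de-anonymization uses far more sophisticated clustering of addresses, but the principle is the same: pseudonymity on an immutable public ledger degrades retroactively with each real-world link.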

For example, in September 2017, concerns surfaced about the Bangladeshi government’s plans to create a ‘merged ID’ that would combine citizens’ biometric, financial and communications data (Rahman, 2017). At that time, some local organizations had started exploring a DLT solution to identify and serve the needs of local Rohingya asylum-seekers and refugees. Because aid agencies are required to comply with national laws, any data recorded on a DLT platform could be subject to automatic data-sharing with government authorities. If these sets of records were to be combined, they would create an indelible, uneditable, untamperable set of records of highly vulnerable Rohingya asylum-seekers, ready for cross-referencing with other datasets. “As development and humanitarian donors and agencies rush to adopt new technologies that facilitate surveillance, they may be creating and supporting systems that pose serious threats to individuals’ human rights.”

These issues raise questions about meaningful, informed consent – how and to what extent do aid recipients understand DLTs and their implications when they receive assistance? […] Most experts agree that data protection needs to be considered not only in the realm of privacy, empowerment and dignity, but also in terms of potential physical impact or harm (ICRC and Brussels Privacy Hub, 2017; ICRC, 2018a)

Blockchain and distributed ledger technologies in the humanitarian sector

Environmental Impact

As blockchains scale, they require increasing amounts of computational power to stay in sync. In most digital-currency blockchains, this scaling problem is offset by rewarding with currency the people who contribute the required processing power. The University of Cambridge estimated in fall 2019 that Bitcoin alone uses 0.28% of global electricity consumption; if Bitcoin were a country, it would rank as the 41st most energy-consuming country, just ahead of Switzerland. Further, the negative impact is demonstrated by research showing that each Bitcoin transaction takes as much energy as is needed to run a well-appointed house and all the appliances in it for an entire week.
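The scale of that estimate can be checked with back-of-envelope arithmetic. Both input figures below are assumptions for illustration: global electricity consumption of roughly 22,300 TWh per year (an approximate 2019 value) and the 0.28% share cited above.

```python
# Back-of-envelope estimate; both input figures are approximations.
global_consumption_twh = 22_300   # approximate world electricity use per year, circa 2019
bitcoin_share = 0.0028            # 0.28%, the University of Cambridge estimate

bitcoin_twh = global_consumption_twh * bitcoin_share
print(f"{bitcoin_twh:.0f} TWh per year")  # roughly 62 TWh, comparable to Switzerland's annual use
```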

Regulatory Uncertainty

As is often the case for emerging technology, the regulations surrounding blockchain are either ambiguous or nonexistent. In some cases, such as when a blockchain is used to publish censored speech, regulators overcorrect and block access to the entire system or remove its pseudonymous protections in-country. In Western democracies, financial regulations are evolving, alongside concerns about the immutable nature of the records stored in a blockchain. Personally identifiable information (see Privacy, above) in a blockchain cannot be removed or changed as required by the European Union’s General Data Protection Regulation (GDPR), and widely illegal content has already been inserted into the Bitcoin blockchain.

Trust, Control, and Management Issues

While a blockchain has no “central database” which could be hacked, it also has no central authority to adjudicate or resolve problems. A lost or compromised password is almost guaranteed to result in the loss of ability to access funds or worse, digital identities. Compromised passwords or illegitimate use of the blockchain can harm individuals involved, especially when personal information is accessed or when child sexual abuse images are stored forever. Building mechanisms to address this problem undermines other key features of the blockchain.

That said, an enormous amount of trust is inherently placed in the software-development process around blockchain technologies, especially those using smart contracts. Any flaw in the software, and any intentional “back door”, could enable an attack that undermines or subverts the entire goal of the project.

Where is trust being placed? Is it in the coders and developers, or in those who design and govern mobile devices or apps? And is trust in fact being shifted from social institutions to private actors? All stakeholders should consider what implications this has and how these actors are accountable to human rights standards.

Blockchain and freedom of expression

Back to top

Questions

If you are trying to understand the implications of blockchain in your work environment, or are considering using aspects of blockchain as part of your DRG programming, ask yourself these questions:

  1. Does blockchain provide specific, needed features that existing solutions with proven track records and sustainability do not?
  2. Do you really need blockchain, or would a database be sufficient?
  3. How will this respect data privacy and control laws such as the GDPR?
  4. Do your intended beneficiaries have the internet bandwidth needed to use the product you are developing with blockchain?
  5. What external actors/partners will control critical aspects of the tool or infrastructure this project will rely on?
  6. What external actors/partners will have access to the data this project creates? What access conditions, limits, or ownership will they have?
  7. What level of transparency and trust do you have with these actors/partners?
  8. How are you conducting and measuring informed consent processes for any data gathered?
  9. How will this project mitigate technical, financial, and/or infrastructural inequalities and ensure they are not exacerbated?
  10. Will the use of blockchain in your project comply with data protection and privacy laws?
  11. Do other existing laws and policies address the risks and offer mitigating measures related to the use of blockchain, such as anti-money-laundering regulation?
  12. Do existing laws enable the benefits you have identified for the blockchain-enabled project?
  13. Are these laws aligned with international human rights law such as the right to privacy, to freedom of expression and opinion and to enjoy the benefits of scientific progress?

Back to top

Case Studies

Blockchain in the humanitarian sector

The 2019 report “Blockchain and distributed ledger technologies in the humanitarian sector” provides multiple examples of humanitarian use of DLTs, including for financial inclusion, land titling, donation transparency, fraud reduction, cross-border transfers, cash programming, grant management and organizational governance, among others.

  • The World Food Programme’s Building Blocks project uses blockchain technology (Ethereum, four nodes and one controlling entity) to make its voucher-based cash transfers more efficient, transparent and secure, and to improve collaboration across the humanitarian system.
  • The Start Network and its member organisations Dorcas Aid International and Trócaire partnered with Disberse, a for-profit financial institution for the aid sector, on pilot programmes using DLT (Ethereum-based, two and three nodes and one controlling entity) to increase the humanitarian community’s comfort with the technology.
  • Helperbit uses the Bitcoin public network to create a decentralised, parametric peer-to-peer insurance service and donation system (multi-signature e-wallet) to change practices of humanitarian assistance both before and after an emergency.
  • Sikka, a digital-assets transfer platform (Ethereum, one node and one controlling entity), was created by World Vision International Nepal Innovation Lab to address the challenge of financial access during times of crises for financially marginalised and in-need communities.
  • The IFRC and Kenya Red Cross implemented the Blockchain Open Loop Payments Pilot Project (Multichain, four nodes with three controlling entities), through Red Rose, to explore how blockchain could increase the transparency and accountability of cash transfer programmes, including in relation to self-sovereign digital identities.

Blockchain to permanently keep news articles

Civil.co is a journalism-supporting organization that has harnessed the blockchain to permanently keep news articles online in the face of censorship. Civil’s blockchain aims to encourage community trust in the news in several ways. First, the blockchain itself is used to publish articles, meaning a user with sufficient technical skills can theoretically verify that the articles came from where they say they did. Civil supports this with two non-blockchain “technologies”: a “constitution” that its newsrooms must adopt, and a ranking system through which its community of readers and journalists can vote up news and newsrooms they find trustworthy. By publishing on a peer-to-peer blockchain, Civil’s publications gain additional resistance to censorship. Readers can also pay journalists for articles using Civil’s tokens. It is worth noting that Civil’s initial fundraising fell flat and its model is still struggling to prove itself.

Blockchain and Impact

In “Blockchain: Can We Talk About Impact Yet?”, Shailee Adinolfi, John Burg and Tara Vassefi respond to a MERLTech blog post that not only failed to find successful applications of blockchain in international development, but was unable to identify companies willing to talk about the process. The article highlights three case studies of in-progress projects, with discussion and links to project information and/or case studies.

Digital Currencies and Blockchain in the Social Sector

In “Digital Currencies and Blockchain in the Social Sector,” David Lehr and Paul Lamb summarize work in international development leveraging blockchain for philanthropy, international development funding, remittances, identity, land rights, democracy and governance, and environmental protection.

Alas, the Blockchain Won’t Save Journalism After All

In “Alas, the Blockchain Won’t Save Journalism After All,” Jonah Engel Bromwich describes Civil.co’s initial attempts at adapting blockchain to a media funding strategy.

Successful case studies by Consensys

Consensys, a company building and investing in blockchain solutions, including some in the civil sector, summarizes (successful) use cases in “Real-World Blockchain Case Studies.”

Blockchain to prevent child trafficking

The UN Blockchain4Humanity Global Challenge considers the use of blockchain to help prevent child trafficking in Moldova. This use is also discussed in the Reuters article, “Scan on exit: can blockchain save Moldova’s children from traffickers?”

Blockchain in a refugee camp in Jordan

In “Inside the Jordan refugee camp that runs on Blockchain,” the MIT Technology Review presents a case study of the use of biometrics and blockchain in a refugee camp in Jordan, highlighting challenges to the data-protection rights of the vulnerable individuals involved.

Back to top


Cryptocurrency

What are cryptocurrencies?

Cryptocurrency is a form of digital money. It was created in the wake of the 2008 global financial crisis to decentralize the system of financial transactions. Cryptocurrency stands in almost direct contrast to the global financial system: no currency is attached to a state authority, it is unbound by geographic regulations, and, most importantly, maintenance of the system is community-driven by a network of users. All transactions are logged pseudonymously on a public ledger, such as Bitcoin’s blockchain.

Definitions

Blockchain: Blockchain is a type of technology used in many digital currencies as a bank ledger. Unlike a normal bank ledger, copies of that ledger are distributed digitally, among computers all over the world, automatically updating with every transaction.

Currency: A currency is a widely accepted system of money in circulation, usually designated by a nation or group of nations. Currency is commonly found in the form of paper and coins, but can also be digital (as this primer explores).

Fiat money: Government-issued currency, such as the USD. Sometimes referred to as Fiat currency.

Hashing: The process of running data through a cryptographic hash function to produce a fixed-length output. Hashing underpins how cryptocurrency transactions are verified. When one person pays another using Bitcoin, for example, computers on the blockchain automatically check that the transaction is accurate.

Hash: The output of a hash function. In proof-of-work systems, the mathematical problem computers must solve to add transactions to the blockchain is to find an input whose hash meets a set difficulty target.

Initial Coin Offering (ICO): The process by which a new cryptocurrency or digital “token” invites investment.

Mining: The process by which a computer solves a hash. The first computer to solve the hash permanently stores the transaction as a block on the blockchain. When a computer successfully adds a block to the blockchain, it is rewarded with coins. Solving a hash before other miners depends on how quickly a computer can compute hashes. In the early years of Bitcoin, for example, mining could be performed effectively using open-source software on standard desktop computers. More recently, only special-purpose machines known as application-specific integrated circuit (ASIC) miners can mine bitcoin cost-effectively, because they are optimized for the task. Mining pools (groups of miners) and companies now control most bitcoin mining activity.
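The hashing and mining concepts above can be sketched in a few lines of Python. This is a toy illustration only: real Bitcoin miners hash a binary block header against a far higher difficulty target, and the function names and transaction string here are invented for the example.

```python
import hashlib

def block_hash(prev_hash: str, transactions: str, nonce: int) -> str:
    """Hash a block's contents with SHA-256 (the hash function Bitcoin uses)."""
    payload = f"{prev_hash}|{transactions}|{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash: str, transactions: str, difficulty: int = 4) -> tuple[int, str]:
    """Toy proof-of-work: try nonces until the hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        h = block_hash(prev_hash, transactions, nonce)
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

genesis = "0" * 64  # placeholder hash for the first block
nonce, winning_hash = mine(genesis, "alice pays bob 1 coin")
print(nonce, winning_hash)  # the winning nonce and a hash beginning with four zeros
```

Because SHA-256 output is effectively random, finding a qualifying nonce takes many tries on average, while anyone can verify the result with a single hash; that asymmetry is what makes mining costly and verification cheap.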

How do cryptocurrencies work?

Money transfer agencies in Nepal. Cryptocurrencies potentially allow users to send and receive remittances and access foreign financial markets. Photo credit: Brooke Patterson/USAID.

Users purchase cryptocurrency with a credit card, debit card, bank account or through mining. They then store the currency in a digital “wallet,” either online, on a computer, or offline, on a portable storage device such as a USB stick. These wallets are used to send and receive money through “public addresses,” or keys, that link the money to a specific type of cryptocurrency. These addresses are strings of characters that signify a wallet’s identity for transactions. A user’s public address can be shared with anyone to receive funds and can also be represented as a QR code. Anyone with whom a user makes a transaction can see the balance in the public address that he or she uses.
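A public address is typically derived from a wallet's public key by hashing. The sketch below is deliberately simplified (real Bitcoin addresses apply SHA-256 followed by RIPEMD-160 and a checksummed Base58 encoding), and the key bytes are invented, but it shows how a fixed-length, shareable string of characters is produced:

```python
import hashlib

def toy_address(public_key: bytes) -> str:
    # Simplified stand-in: hash the public key and keep a short identifier.
    # Real Bitcoin addresses use SHA-256 then RIPEMD-160 plus a Base58Check encoding.
    return hashlib.sha256(public_key).hexdigest()[:40]

address = toy_address(b"-----toy public key bytes-----")
print(address)  # a string of characters that can be shared or rendered as a QR code
```

The same key always yields the same address, and different keys yield different addresses, which is why an address can serve as a wallet's stable public identity.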

While transactions are publicly recorded, identifying user information is not. For example, on the Bitcoin blockchain, only a user’s public address appears next to a transaction—making transactions confidential but not necessarily anonymous.
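The "confidential but not anonymous" point can be made concrete with a toy public ledger: no names appear, yet all activity under one address can be grouped together. All addresses and amounts below are invented for the example.

```python
# Toy public ledger: (sender_address, receiver_address, amount).
# No identities appear, but every entry is visible to everyone.
ledger = [
    ("addr_9f3", "addr_b21", 0.5),
    ("addr_b21", "addr_77c", 0.2),
    ("addr_9f3", "addr_77c", 1.1),
]

def activity(address: str) -> list[tuple]:
    """Collect every transaction involving an address: public and linkable."""
    return [tx for tx in ledger if address in tx[:2]]

print(activity("addr_9f3"))  # both payments from addr_9f3 are trivially linked
```

If that one address is ever tied to a real person, their entire transaction history on the ledger is exposed at once.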

Cryptocurrencies have increasingly struggled with intense periods of volatility, most of which stems from the decentralized system of which they are part. The lack of a central body means that cryptocurrencies are not legal tender, they are not regulated, there is little to no insurance if an individual’s digital wallet is hacked, and most payments are not reversible. As a result, cryptocurrencies are inherently speculative. In 2017, Bitcoin peaked at a price of nearly $20,000 per coin, but on average it has been valued at about $7,000 per coin. Newer cryptocurrencies, such as Tether, have attempted to offset volatility by tying their market value to an external reference like the USD or gold. However, the industry overall has not yet reconciled how to maintain an autonomous, decentralized system with overall stability.

Types of Cryptocurrencies

The value of a certain cryptocurrency is heavily dependent on the faith of its investors, its integration into financial markets, public interest in using it, and its performance compared to other cryptocurrencies. Bitcoin, founded in 2008, was the first and only cryptocurrency until 2011 when “altcoins” began to appear. Estimates for the number of cryptocurrencies vary, but as of August 2018, there were about 1,600 different types of cryptocurrencies.

  • Bitcoin
    Bitcoin has the largest user base and a market capitalization in the hundreds of billions. While it initially attracted financial institutions like Goldman Sachs, the collapse of Bitcoin’s value (along with other cryptocurrencies) in 2018 has since increased skepticism about its long-term viability.
  • Ethereum
    Ethereum is a decentralized software platform that enables Smart Contracts and Decentralized Applications (DApps) to be built and automated without interference from a third party (like Bitcoin, it runs on blockchain technology). Ethereum launched in 2015 and is currently the second-largest cryptocurrency by market capitalization, after Bitcoin.
  • Ripple (XRP)
    Ripple is a real-time payment-processing network that offers both instant and low-cost international payments, to compete with other transaction systems such as SWIFT or VISA. It is the third largest cryptocurrency.
  • Tether (USDT)
    Tether is one of the first and most popular of a group of “stablecoins” — cryptocurrencies that peg their market value to a currency or other external reference point to reduce volatility.
  • Libra
    Libra is Facebook’s cryptocurrency, due to be released in late 2020 or later. The company faced regulatory backlash: US and European regulators and legislators raised competition concerns, seeing as Facebook is one of the most powerful companies in the world, and there were also fears that Libra would disrupt financial markets in smaller countries. Facebook has since rebranded its associated digital wallet as “Novi,” a standalone app. When Novi launches, it will include access to several stablecoins, each backed by a single fiat currency such as USD, EUR, GBP or SGD. Facebook’s entry into cryptocurrency is expected to significantly alter the industry, although only a limited set of countries will be able to access Novi at launch.
  • Monero
    Monero is the largest of what are known as privacy coins. Unlike Bitcoin, Monero transactions and account balances are not public by default.
  • Zcash
    Another anonymity-preserving cryptocurrency, Zcash, is operated under a foundation of the same name. It is branded as a mission-based, privacy-centric cryptocurrency that enables users “to protect their privacy on their own terms”, regarding privacy as essential to human dignity and to the healthy functioning of civil society.

Fish vendor in Indonesia. Women make up the majority of the unbanked, and financial technologies can provide tools to address this gap. Photo credit: Afandi Djauhari/NetHope.

Back to top

How are cryptocurrencies relevant in civic space and for democracy?

Cryptocurrencies are, in many ways, well suited to the needs of NGOs, humanitarians and other civil society actors. Civic space actors who require blocking-resistant, low-fee transactions might find cryptocurrencies both convenient and secure. The use of cryptocurrencies in the developing world reveals their role not just as vehicles for aid, but also as tools that facilitate the development of small- to medium-sized enterprises (SMEs) looking to enter international trade. For example, UNICEF created a cryptofund in 2019 in order to receive and distribute funding in cryptocurrencies (ether and bitcoin). In June 2020, UNICEF announced its largest investment yet in startups located in developing and emerging economies that are helping respond to the Covid-19 pandemic.

However, regarding cryptocurrencies through only a traditional development lens – i.e., that they may only be useful for refugees or countries with unreliable fiat currencies – oversimplifies the economic landscape of such developing countries. Many countries are home to significant youth populations poised to harness cryptocurrency in innovative ways, for instance to send and receive remittances, to access foreign financial markets and investment possibilities, and even to encourage ecological or ethical purchasing behaviors (see the Case Studies section). During the coronavirus lockdown in India, and after the country’s reserve bank lifted its ban on cryptocurrencies, many young people started trading in Indian cryptocurrencies and using cryptocurrencies to transfer money to one another. Still, the future of crypto in India and elsewhere is uncertain. The frontier nature of cryptocurrencies poses significant risks to users when it comes to insurance and, in some cases, security.

Moreover, as will be discussed below, the distributed technology (blockchain) underlying cryptocurrencies is seen as offering resistance to censorship, as the data are distributed over a large network of computers. Cryptocurrencies could also give a broader range of people access to banking.

Back to top

Opportunities

Cryptocurrencies can have positive impacts when used to further democracy, human rights and governance issues. Read below to learn how to more effectively and safely think about cryptocurrencies in your work.

Accessibility

Cryptocurrencies are more accessible to a broader range of users than regular cash-currency transactions are; they are not subject to government regulation and they do not carry high processing fees. Cross-border transactions in particular benefit from the features of cryptocurrencies; international banking fees and poor exchange rates can be extremely costly (see the BitPesa case study, below). In some cases, the value of cryptocurrencies may even be more stable than the local currency. Cryptocurrencies that require participants to log in (on “permissioned” systems) necessitate that an organization controls participation in its system. In some cases, certain users also help run the system in other ways, like operating servers. When this is the case, it is important to understand who those users are, how they are selected, and how their ability to use the system could be taken away if they turn out to be bad actors.

Additionally, Initial Coin Offerings (ICOs) lower the entry barrier to investing, cutting venture capitalists and investment banks out of the investing process. While similar to Initial Public Offerings (IPOs), ICOs differ significantly in that they allow companies to interact directly with individual investors. This also poses a risk to investors, as the safeguards offered by investment banks for traditional IPOs do not apply (see Lack of Governance and Regulatory Uncertainty). The lack of regulatory bodies has also spurred the growth of scam ICOs: an ICO or cryptocurrency that has no legitimate strategy for generating value is typically a scam.

Anonymity and Censorship Resistance

The decentralized, peer-to-peer nature of cryptocurrencies may be of great comfort to those seeking anonymity, such as human rights defenders working in closed spaces or people simply seeking an equivalent to “cash” for online purchases (see the Cryptocurrencies in Volatile Markets case study, below). Cryptocurrencies can be useful for someone who wishes to donate anonymously to a foundation or organization, when that donation could put them at risk if their identity were known.

Since the data that support the currency are distributed over a large network of computers, it is more difficult for a bad actor to locate and target a transaction or system operation. But a currency’s ability to protect anonymity largely depends on the specific goal of the cryptocurrency. Zcash, for example, was specifically developed to hide transaction amounts and user addresses from public view. Cryptocurrencies with a large number of participants are also resistant to more benign, routine system outages, because some data stores in the network can continue to operate even if others fail or are breached.

Back to top

Risks

A user in the Philippines receives a transaction confirmation. Users purchase cryptocurrency with a credit card, debit card, bank account or through mining. Photo credit: Brooke Patterson/USAID.

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with cryptocurrencies in DRG work, as well as how to mitigate for unintended – and intended – consequences.

Anonymity

While no central authority records cryptocurrency transactions, the public nature of the transactions does not prevent governments from recording them. The ability to associate identities with records on a blockchain is a particular problem under totalitarian surveillance regimes. The central internet regulator in China, for example, proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers. In order for users to trade or exchange a cryptocurrency into an established fiat currency, a new digital currency would need to incorporate Know Your Customer (KYC), Anti-Money Laundering (AML), and Combating the Financing of Terrorism (CFT) regulations into its process for signing up new users and validating their identities. These processes pose a high barrier to undocumented migrants and anyone else not holding a valid government ID.

As described in the case study below, the partially anarchical environment of cryptocurrencies can also foster criminal activity.

Case Study: The Dark Side of the Anonymous User

Bitcoin and other cryptocurrencies are praised for supporting financial transactions that do not reveal a user’s identity. But this has made them popular on “dark web” sites like Silk Road, where cryptocurrency can be exchanged for illegal goods and services like drugs, weapons, or sex work. Silk Road was eventually taken down by the U.S. Federal Bureau of Investigation, after its founder, Ross Ulbricht, used the same name to advertise the site and seek employees in another forum, linking to a Gmail address. Google provided the contents of that address to the authorities when subpoenaed.

The lessons to take from the Silk Road case are that anonymity is rarely perfect and unbreakable, and cryptocurrency’s identity protection is not an ironclad guarantee. On a public blockchain, a single identity slip (even in some other forum) can tie all of the transactions of that cryptocurrency account to one user. The owner of that wallet can then be connected to their subsequent purchases, as easily as a cookie tracks a user’s web browsing activity.

Lack of Governance

The lack of a central body greatly increases the risk of investing in a cryptocurrency. There is little to no recourse for users if the system is attacked digitally and their currency is stolen. In 2016, attackers exploited a flaw in a smart contract on the Ethereum blockchain known as The DAO (Decentralized Autonomous Organization) and drained roughly $50 million worth of the currency. The incident resulted in a major division in the Ethereum community, leading to a total split in governance: some participants wanted to see the value of that $50 million returned to holders, while others wanted to continue treating that value as permanently lost.

Regulatory Uncertainty

The legal and regulatory frameworks for blockchain are developing at a slower pace than the technology. Each jurisdiction – whether a country or a financial zone, such as the 26 European countries known as the Schengen Area that have abolished passports and border controls – regulates cryptocurrencies differently, and there is as yet no global financial standard for regulating them. The seven Arab nations bordering the Persian Gulf (the Gulf States), for example, have enacted a number of different laws on cryptocurrencies: they face an outright ban in the United Arab Emirates and are partially banned in Saudi Arabia and Qatar. Other countries have developed tax laws, anti-money-laundering laws, and anti-terrorism laws to regulate cryptocurrencies. In many places, cryptocurrency is taxed as property instead of as currency.

Cryptocurrency’s commitment to autonomy – that is, its separation from any fiat currency – has pitted it as an antagonist to many established regulatory bodies. Observers note that eliminating the ability of intermediaries (e.g., governments or banks) to claim transaction fees, for example, alters existing power balances and may trigger prohibitive regulations even as it temporarily decreases financial costs. Thus, there is always a risk that governments will develop policies unfavorable to financial technologies (fintech), rendering cryptocurrency and mobile money useless within their borders. The constantly evolving nature of laws around fintech is difficult terrain for any new digital currency.

Environmental Inefficiency

The larger a blockchain grows, the more computational power it requires. In late 2019, the University of Cambridge estimated that Bitcoin uses 0.28% of global electricity consumption. If Bitcoin were a country, it would rank as the 41st most energy-consuming country, just ahead of Switzerland.

Digital Literacy and Access Requirements

Blockchain technology underlying cryptocurrencies requires access to the internet, so areas with inadequate infrastructure or capacity are generally not usable contexts for cryptocurrencies, although limited possibilities for using cryptocurrency without internet access do exist. “This digital divide also extends to technological understanding between those who know how to ‘operate securely on the Internet, and those who do not’”, as noted by the DH Network. Cryptocurrency apps are not usable on lower-end devices, requiring users to have a smartphone or computer, and the apps themselves involve a steep learning curve. Additionally, the slow speed of transactions, which can take minutes or up to an hour, is a significant disadvantage, especially compared with the seconds-fast speed of standard Visa transactions.

Back to top

Questions

If you are trying to understand the implications of cryptocurrencies in your work environment, or are considering using cryptocurrencies as part of your DRG programming, ask yourself these questions:

  1. Do the issues you or your organization are seeking to address require cryptocurrency? Can more traditional currency solutions apply to the problem?
  2. Is cryptocurrency an appropriate currency for the populations you are working with? Will it help them access the resources they need? Is it accepted by the other relevant stakeholders?
  3. Do you or your organization need an immutable database distributed across multiple servers? Would it be acceptable to have the currency and transactions connected to a central server?
  4. Is the cryptocurrency you wish to use viable? Do you trust the currency and have good reason to assume it will be sufficiently stable in the future?
  5. Is the currency legal in the areas where you will be operating? If not, will this pose problems for your organization?
  6. How will you obtain this currency? What risks are involved? What external actors will you be reliant on?
  7. Will the users of this currency be able to benefit from it easily and safely? Will they have the required devices and knowledge?

Back to top

Case Studies

Mobile money agency in Ghana. The use of cryptocurrencies in the developing world can facilitate the development of small- to medium-sized enterprises looking to enter international trade. Photo credit: John O’Bryan/ USAID.
BitPesa Proves Cost-Effective for SMEs

To many humanitarian actors, the ideal role for cryptocurrencies would be to facilitate the provision of remittances to families. But in a number of cases, cryptocurrency has been used not as a remittance channel but as a way for people to send money to themselves. For example, in Kenya, many of the international money transfers conducted on the Nairobi-based bitcoin exchange BitPesa are from an individual’s foreign account to a Kenyan one, or vice versa. Many cite BitPesa as a tool to save money on bank transfer fees and high foreign-exchange rates. Young Kenyans sometimes use the site to access global trading markets that they otherwise could not reach in Kenya, where many trading sites are unavailable. Increasingly, BitPesa is highly useful for upper-middle-class entrepreneurs who are building international businesses through trade and online commerce. However, the largest telecommunications company in Kenya, Safaricom, recently banned the use of Bitcoin on its mobile-money-exchange system, M-Pesa. When companies hold a monopoly or a significant majority of the market for mobile and internet connectivity, the use of cryptocurrencies may be significantly curbed.

Cryptocurrencies in Volatile Markets

Between August 2014 and November 2016, the number of Bitcoin users in Venezuela rose from 450 to 85,000. The financial crisis in the country has prompted many of its citizens to search for new options. Bitcoin has been used to purchase medicine (in short supply there) and Amazon gift cards, and to send remittances. There are no laws regulating Bitcoin in Venezuela, which has emboldened people further. Countries whose financial markets have experienced rates of inflation similar to Venezuela’s – such as South Sudan, Zimbabwe, and Argentina – all have relatively active cryptocurrency markets.

Cryptocurrencies for Social Impact

Many new cryptocurrencies have attempted to monetize the social impacts of their users. SolarCoin rewards people for installing solar panels. Tree Coin gathers resources for planting trees in the developing world (as one way to fight climate change) and rewards local people for maintaining those trees. Impak Coin is “the first app to reward and simplify responsible consumption” by helping users find socially responsible businesses. The coin it offered is intended to be used to buy products and services from these businesses, and to support users in microlending and crowdlending. It was part of an ecosystem of technologies that included ratings based on the UN’s Sustainable Development Goals and the Impact Management Project. True to its principles, Impak has proposed to begin assessing its impact.


Data Protection

What is data protection?

Data protection refers to practices, measures and laws that aim to prevent certain information about a person from being collected, used or shared in a way that is harmful to that person.

Interview with fisherman in Bone South Sulawesi, Indonesia. Data collectors must receive training on how to avoid bias during the data collection process. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Data protection isn’t new. Bad actors have always sought to gain access to individuals’ private records. Before the digital era, data protection meant protecting individuals’ private data from someone physically accessing, viewing or taking files and documents. Data protection laws have been in existence for more than 40 years.

Now that many aspects of peoples’ lives have moved online, private, personal and identifiable information is regularly shared with all sorts of private and public entities. Data protection seeks to ensure that this information is collected, stored and maintained responsibly and that unintended consequences of using data are minimized or mitigated.

What are data?

Data refer to digital information, such as text messages, videos, clicks, digital fingerprints, a bitcoin, search history and even mere cursor movements. Data can be stored on computers, mobile devices, in clouds and on external drives. They can be shared via e-mail, messaging apps and file-transfer tools. Your posts, likes and retweets, your videos about cats and protests, and everything you share on social media are data.

Metadata are a subset of data: information stored within a document or file that acts as an electronic fingerprint, describing the document or file itself. Take an email as an example. If you send an email to your friend, the text of the email is data. The email itself, however, contains all sorts of metadata, such as who created it, who the recipient is, the IP address of the author, and the size of the email.
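The split between data and metadata in an email can be made concrete with a few lines of code. A minimal sketch using Python’s standard `email` library; the message itself is invented for illustration:

```python
from email import message_from_string

# A toy email, invented for illustration.
raw = """From: alice@example.org
To: bob@example.org
Date: Mon, 01 Apr 2019 10:00:00 +0000
Subject: Lunch

See you at noon."""

msg = message_from_string(raw)

# The text of the message is the data...
body = msg.get_payload()

# ...while the headers are metadata: who created it, who the
# recipient is, when it was sent, and so on.
metadata = dict(msg.items())

print(metadata["From"])  # the author
print(metadata["To"])    # the recipient
print(body.strip())      # the data itself
```

Real emails carry far more metadata than this toy example (routing servers, client software, message IDs), which is why even “just metadata” deserves protection.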

Large amounts of data get combined and stored together. These large files containing thousands or millions of individual files are known as datasets. Datasets then get combined into very large datasets. These very large datasets, referred to as big data, are used to train machine-learning systems.

Personal Data and Personally Identifiable Information

Data can seem quite abstract, but the pieces of information are very often reflective of the identities or behaviors of actual persons. Not all data require protection, but some data, even metadata, can reveal a lot about a person. Such information is referred to as Personally Identifiable Information (PII), commonly called personal data. PII is information that can be used to distinguish or trace an individual’s identity, such as a name, passport number or biometric data like fingerprints and facial patterns. PII also includes information that is linked or linkable to an individual, such as date of birth and religion.
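Because PII can hide inside free text, projects often screen or redact it before sharing. A minimal, illustrative sketch in Python; the patterns and placeholder names are invented, and real PII detection requires far more care:

```python
import re

# Two obvious PII patterns; real-world detection must cover much more
# (names, addresses, ID numbers, and formats that vary by country).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before text is shared."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text

print(redact("Contact Amina at amina@example.org or +254 700 000 000."))
# -> Contact Amina at [email] or [phone].
```

Redaction like this removes direct identifiers only; as discussed later in this resource, combinations of remaining details can still identify a person.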

Personal data can be collected, analyzed and shared for the benefit of the persons involved, but they can also be used for harmful purposes. Personal data are valuable to many public and private actors. For example, they are collected by social media platforms and sold to advertising companies; they are collected by governments for law-enforcement purposes such as the prosecution of crimes; politicians value personal data for targeting voters with certain political information; and people with criminal intent can monetize personal data, for example by selling false identities.

“Sharing data is a regular practice that is becoming increasingly ubiquitous as society moves online. Sharing data does not only bring users benefits, but is often also necessary to fulfill administrative duties or engage with today’s society. But this is not without risk. Your personal information reveals a lot about you, your thoughts, and your life, which is why it needs to be protected.”

Access Now’s ‘Creating a Data Protection Framework’, November 2018.

How does data protection relate to the right to privacy?

The right to protection of personal data is closely interconnected to, but distinct from, the right to privacy. The understanding of what “privacy” means varies from one country to another based on history, culture, or philosophical influences. Data protection is not always considered a right in itself. Read more about the differences between privacy and data protection here.

Data privacy is also a common way of speaking about sensitive data and the importance of protecting it against unintentional sharing and undue or illegal gathering and use of data about an individual or group. USAID recently shared a resource about promoting data privacy in COVID-19 and development, which defines data privacy as “the right of an individual or group to maintain control over and confidentiality of information about themselves.”

How does data protection work?

Participant of the USAID WeMUNIZE program in Nigeria. Data protection must be considered for existing datasets as well. Photo credit: KC Nwakalor for USAID / Digital Development Communications

Personal data can and should be protected by measures that shield a person’s identity and other information about them from harm and that respect their right to privacy. Examples of such measures include determining which data are vulnerable based on privacy-risk assessments; keeping sensitive data offline; limiting who has access to certain data; anonymizing sensitive data; and collecting only necessary data.

There are several established principles and practices for protecting sensitive data. In many countries, these measures are enforced via laws that contain the key principles needed to guarantee data protection.

“Data Protection laws seek to protect people’s data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.”

Privacy International on data protection

Several important terms and principles are outlined below, based on the European Union’s General Data Protection Regulation (GDPR).

  • Data Subject: any person whose personal data are being processed, for example by being added to a contacts database or to a mailing list for promotional emails.
  • Processing: any operation performed on personal data, whether manual or automated.
  • Data Controller: the actor that determines the purposes for, and means by which, personal data are processed.
  • Data Processor: the actor that processes personal data on behalf of the controller, often a third party external to the controller, such as a party that offers mailing-list or survey services.
  • Informed Consent: individuals understand and agree to how their personal data are collected, accessed, used and/or shared, and know how they can withdraw their consent.
  • Purpose limitation: personal data are collected only for a specific and justified use, and the data cannot be used for other purposes by other parties.
  • Data minimization: data collection is limited to the details essential to the stated purpose.
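The data-minimization principle translates directly into code: keep only the fields the stated purpose requires and drop everything else at the point of collection. A minimal sketch; the service and field names are invented for illustration:

```python
# Fields a hypothetical appointment-reminder service actually needs.
ESSENTIAL_FIELDS = {"first_name", "phone", "next_appointment"}

def minimize(record: dict) -> dict:
    """Keep only the fields required for the stated purpose."""
    return {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}

submitted = {
    "first_name": "Lina",
    "phone": "+256700000001",
    "next_appointment": "2024-06-01",
    "date_of_birth": "1990-01-01",  # not needed: dropped
    "religion": "n/a",              # sensitive and not needed: dropped
}

print(minimize(submitted))
```

Discarding unneeded fields up front also serves purpose limitation: data that were never stored cannot later be reused for another purpose.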

 

Healthcare provider in Eswatini. Quality data and protected datasets can accelerate impact in the public health sector. Photo credit: Ncamsile Maseko & Lindani Sifundza.

Access Now’s guide lists eight data-protection principles drawn largely from international standards, in particular the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (widely known as Convention 108) and the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines. They are considered “minimum standards” for the protection of fundamental rights by countries that have ratified international data-protection frameworks.

A development project that uses data, whether establishing a mailing list or analyzing datasets, should comply with laws on data protection. When there is no national legal framework, international principles, norms and standards can serve as a baseline to achieve the same level of protection of data and people. Compliance with these principles may seem burdensome, but implementing a few steps related to data protection from the beginning of the project will help to achieve the intended results without putting people at risk. 


The figure above shows how common practices of civil society organizations relate to the terms and principles of the data protection framework of laws and norms.  

The European Union’s General Data Protection Regulation (GDPR)

The EU’s data-protection law, the GDPR, went into effect in 2018 and is often considered the world’s strongest data-protection law. It aims to enhance how people can access their information and limits what organizations can do with the personal data of EU citizens. Although it originates in the EU, the GDPR can also apply to organizations based outside the region when EU citizens’ data are concerned; it therefore has a global impact.

The obligations stemming from the GDPR and other data-protection laws may have broad implications for civil society organizations. For information about the GDPR-compliance process and other resources, see the European Center for Not-for-Profit Law’s guide on data-protection standards for civil society organizations.

Notwithstanding its protections, the GDPR also has been used to harass CSOs and journalists. For example, a mining company used a provision of the GDPR to try to force Global Witness to disclose sources it used in an anti-mining campaign. Global Witness successfully resisted these attempts.

Personal or organizational protection tactics

How best to protect your own sensitive information, or the data of your organization, depends on your specific activities and legal environment. A first step is to assess your specific needs in terms of security and data protection. For example, which information could, in the wrong hands, have negative consequences for you and your organization?

Digital-security specialists have developed online resources you can use to protect yourself. Examples include the Security Planner, an easy-to-use guide with expert-reviewed advice for staying safer online and recommendations on implementing basic security practices, and the Digital Safety Manual, which offers information and practical tips on enhancing digital security for government officials working with civil society and Human Rights Defenders (HRDs). The manual offers 12 cards tailored to common activities in the collaboration between governments (and other partners) and civil society organizations. The first card helps to assess digital-security needs.

Digital Safety Manual

  1. Assessing Digital Security Needs
  2. Basic Device Security
  3. Passwords and Account Protection
  4. Connecting to the Internet Securely
  5. Secure Calls, Chat, and Email
  6. Security and Social Media Use
  7. Secure Data Storage and Deletion
  8. Secure File Transfer
  9. Secure Contract Handling
  10. Targeted Malware and Other Attacks
  11. Phone Tracking and Surveillance
  12. Security Concerns Related to In-Person Meetings

 

The Digital First Aid Kit is a free resource that helps rapid responders, digital-security trainers, and tech-savvy activists better protect themselves and the communities they support against the most common types of digital emergencies. Global digital-safety responders and mentors, such as the Digital Defenders Partnership and the Computer Incident Response Centre for Civil Society (CiviCERT), can help with specific questions or provide mentorship.


How is data protection relevant in civic space and for democracy?

Many initiatives that aim to strengthen civic space or improve democracy use digital technology. There is a widespread belief that the increasing volume of data and the tools to process them can be used for good. And indeed, integrating digital technology and the use of data in democracy, human rights and governance programming can have significant benefits; for example, they can connect communities around the globe, reach underserved populations better, and help mitigate inequality.

“Within social change work, there is usually a stark power asymmetry. From humanitarian work, to campaigning, documenting human rights violations to movement building, advocacy organisations are often led by – and work with – vulnerable or marginalised communities. We often approach social change work through a critical lens, prioritising how to mitigate power asymmetries. We believe we need to do the same thing when it comes to the data we work with – question it, understand its limitations, and learn from it in responsible ways.”

What is Responsible Data?

When quality information is available to the right people when they need it, when the data are protected against misuse, and when the project is designed with the protection of its users in mind, data can accelerate impact.

  • USAID’s funding of improved vineyard inspection using drones and GPS data in Moldova allows farmers to quickly inspect, identify, and isolate vines infected by a phytoplasma disease.
  • Círculo is a digital tool for female journalists in Mexico to help them create strong networks of support, strengthen their safety protocols and meet needs related to protection of themselves and their data. The tool was developed with the end-users through chat groups and in-person workshops to make sure everything built in the app was something they needed and could trust.

At the same time, data-driven development brings a new responsibility to prevent the misuse of data when designing, implementing or monitoring development projects. When personal data are used to identify people who are eligible for humanitarian services, privacy and security concerns are very real.

  • Refugee camps in Jordan have required community members to allow scans of their irises in order to purchase food and supplies and to withdraw cash from ATMs. This practice has not integrated meaningful ways to ask for consent or to allow people to opt out. Additionally, collecting and using highly sensitive personal data like biometrics to enable daily purchasing habits is disproportionate, because other, less invasive digital technologies are available and used in many parts of the world.

Governments, international organizations and private actors can all – even unintentionally – misuse personal data for purposes other than those intended, negatively affecting the wellbeing of the people to whom those data relate. Some examples have been highlighted by Privacy International:

  • The case of Tullow Oil, the largest oil and gas exploration and production company in Africa, shows how a private actor commissioned extensive and detailed research from a micro-targeting research company into the behaviors of local communities, seeking “cognitive and emotional strategies to influence and modify Turkana attitudes and behavior” to Tullow Oil’s advantage.
  • In Ghana, the Ministry of Health commissioned a large study on health practices and requirements in the country. The resulting data were used, on the orders of the ruling political party, to model future vote distribution within each constituency based on how respondents said they would vote, and to run a negative campaign aimed at discouraging opposition supporters from voting.

There are resources and experts available to help with this process. The Principles for Digital Development website offers recommendations, tips and resources for protecting privacy and security throughout a project lifecycle: during analysis and planning, design and development, and deployment and implementation; measurement and evaluation are also covered. The Responsible Data website offers the Illustrated Hand-Book of the Modern Development Specialist, with attractive, understandable guidance through all steps of a data-driven development project: designing it, managing data (with specific information about collecting, understanding and sharing data), and closing a project.

NGO worker prepares for data collection in Buru Maluku, Indonesia. When collecting new data, it’s important to design the process carefully and think through how it affects the individuals involved. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.


Opportunities

Data protection measures advance democracy, human rights and good governance. Read below to learn how to think about data protection in your work more effectively and safely.

Privacy respected and people protected

Implementing data-protection standards in development projects protects people against potential harm from abuse of their data. Abuse happens when an individual, company or government accesses personal data and uses them for purposes other than those for which the data were collected. Intelligence services and law-enforcement authorities often have legal and technical means to force access to datasets and abuse the data. Individuals hired by governments can access datasets by hacking the security of software or clouds. This has often led to intimidation, silencing and arrests of human rights defenders and civil society leaders who criticize their government. Privacy International maps examples of governments and private actors abusing individuals’ data.

Strong protective measures against data abuse ensure respect for the fundamental right to privacy of the people whose data are collected and used. Protective measures allow positive development such as improving official statistics, better service delivery, targeted early warning mechanisms and effective disaster response. 

It is important to determine how data are protected throughout the entire life cycle of a project. Individuals should also be ensured of protection after the project ends, whether abruptly or as intended, when the project moves into a different phase, or when it receives funding from different sources. Oxfam has developed a leaflet to help anyone handling, sharing or accessing program data to properly consider responsible-data issues throughout the data lifecycle, from making a plan to disposing of data.


Risks

The collection and use of data can also create risks in civil society programming. Read below on how to discern the possible dangers associated with the collection and use of data in DRG work, and how to mitigate unintended – and intended – consequences.

Unauthorized access to data

Data need to be stored somewhere, whether on a computer or an external drive, in a cloud or on a local server. Wherever the data are stored, precautions must be taken to protect them from unauthorized access and to avoid revealing the identities of vulnerable persons. The level of protection needed depends on the sensitivity of the data, i.e., the extent to which negative consequences could follow if the information fell into the wrong hands.

Data can be stored on a nearby, well-protected server connected to drives with strong encryption and very limited access, which is one way to stay in control of the data you own. Cloud services offered by well-known tech companies often provide basic protection measures, but free versions typically give wide access to the dataset; more advanced security features, such as storage of data in jurisdictions with data-protection legislation, are available for paying customers. Guidelines on how to secure private data stored and accessed in the cloud can help in understanding the various aspects of cloud services and in deciding what fits a specific situation.

Every system needs to be secured against cyberattacks and manipulation. One common challenge is protecting the identities in a dataset, for example by removing all information that could identify individuals, i.e., anonymizing the data. Proper anonymization is of key importance and harder than often assumed.
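One reason anonymization is harder than it looks: even with names removed, combinations of remaining attributes (“quasi-identifiers”) can single people out. A toy sketch of measuring this via k-anonymity; the records and columns are invented for illustration:

```python
from collections import Counter

# Records with direct identifiers already removed; each tuple is
# (birth year, gender, postcode) - the remaining quasi-identifiers.
records = [
    ("1987", "F", "10115"),
    ("1987", "F", "10115"),
    ("1959", "M", "10117"),  # unique combination: re-identifiable
]

def k_anonymity(rows):
    """Smallest group sharing identical quasi-identifier values.
    k == 1 means at least one person is uniquely identifiable."""
    return min(Counter(rows).values())

print(k_anonymity(records))  # -> 1: this dataset is not anonymous
```

Raising k, for example by generalizing birth year to a decade or truncating the postcode, trades data precision for protection.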

One can imagine that a dataset of the GPS locations of People Living with Albinism across Uganda requires strong protection. Persecution is based on the belief that certain body parts of people with albinism can transmit magical powers, or on the presumption that they are cursed and bring bad luck. A spatial-profiling project mapping the exact locations of individuals belonging to a vulnerable group can improve outreach and the delivery of support services to them. However, hacking of the database or other unlawful access to their personal data might put them at risk from people wanting to exploit or harm them.

One could also imagine that the people operating an alternative system to send out warning sirens for air strikes in Syria run the risk of being targeted by the authorities: while the group’s data collection and sharing aims to prevent death and injury, it also diminishes the impact of air strikes by the Syrian authorities. The location data of the individuals running and contributing to the system therefore need to be protected against access or exposure.

Another risk is that private actors who run or cooperate in data-driven projects could be tempted to sell data if offered large sums of money. Such buyers could be advertising companies or politicians who aim to target commercial or political campaigns at specific people.

The Tiko system designed by social enterprise Triggerise rewards young people for positive health-seeking behaviors, such as visiting pharmacies and seeking information online. Among other things, the system gathers and stores sensitive personal and health information about young female subscribers who use the platform to seek guidance on contraceptives and safe abortions, and it tracks their visits to local clinics. If these data are not protected, governments that have criminalized abortion could potentially access and use that data to carry out law-enforcement actions against pregnant women and medical providers.

Unsafe collection of data

When you are planning to collect new data, it is important to carefully design the collection process and think through how it affects the individuals involved. It should be clear from the start what kind of data will be collected, for what purpose, and that the people involved agree with that purpose. For example, an effort to map people with disabilities in a specific city can improve services. However, the database should not expose these people to risks, such as attacks or stigmatization targeted at specific homes. Also, establishing this database should answer the needs of the people involved and not be driven by the mere wish to use data. For further guidance, see the chapter Getting Data in the Hand-book of the Modern Development Specialist and the OHCHR Guidance to adopt a Human Rights Based Approach to Data, focused on collection and disaggregation.

If data are collected in person by people recruited for this process, proper training is required. They need to be able to create a safe space to obtain informed consent from people whose data are being collected and know how to avoid bias during the data-collection process. 

Unknowns in existing datasets

Data-driven initiatives can either gather new data, for example, through a survey of students and teachers in a school, or use existing datasets from secondary sources, for example by using a government census or scraping social media sources. Data protection must also be considered when you plan to use existing datasets, such as images of the Earth for spatial mapping. You need to analyze what kind of data you want to use and whether it is necessary to use a specific dataset to reach your objective. For third-party datasets, it is important to gain insight into how the data that you want to use were obtained, whether the principles of data protection were met during the collection phase, who licensed the data and who funded the process. If you are not able to get this information, you must carefully consider whether to use the data or not. See the Hand-book of the Modern Development Specialist on working with existing data. 


Questions

If you are trying to understand the implications of lacking data protection measures in your work environment, or are considering using data as part of your DRG programming, ask yourself these questions:

  1. Are data protection laws adopted in the country or countries concerned?
    Are these laws aligned with international human rights law, including provisions protecting the right to privacy?
  2. How will the use of data in your project comply with data protection and privacy standards?
  3. What kind of data do you plan to use? Are personal or other sensitive data involved?
  4. What could happen to the persons related to that data if the government accesses these data?
  5. What could happen if the data are sold to a private actor for other purposes than intended?
  6. What precaution and mitigation measures are taken to protect the data and the individuals related to the data?
  7. How are the data protected against manipulation, unauthorized access, and misuse by third parties?
  8. Do you have sufficient expertise integrated during the entire course of the project to make sure that data are handled well?
  9. If you plan to collect data, what is the purpose of the collection of data? Is data collection necessary to reach this purpose?
  10. How are collectors of personal data trained? How is informed consent generated when data are collected?
  11. If you are creating or using databases, how is anonymity of the individuals related to the data guaranteed?
  12. How is the data that you plan to use obtained and stored? Is the level of protection appropriate to the sensitivity of the data?
  13. Who has access to the data? What measures are taken to guarantee that data are accessed for the intended purpose?
  14. Which other entities – companies, partners – process, analyze, visualize and otherwise use the data in your project? What measures are taken by them to protect the data? Have agreements been made with them to avoid monetization or misuse?
  15. If you build a platform, how are the registered users of your platform protected?
  16. Is the database, the system used to store data, or the platform open to independent audit and research?


Case Studies

People Living with HIV Stigma Index and Implementation Brief

The People Living with HIV Stigma Index is a standardized questionnaire and sampling strategy to gather critical data on intersecting stigmas and discrimination affecting people living with HIV. It monitors HIV-related stigma and discrimination in various countries and provides evidence for advocacy. The data in this project are the experiences of people living with HIV, and the implementation brief provides insight into the project’s data-protection measures. People living with HIV are at the center of the entire process, continuously linking the data collected about them back to the people themselves, from research design through implementation to using the findings for advocacy. Data are gathered through a peer-to-peer interview process, with people living with HIV from diverse backgrounds serving as trained interviewers. A standard implementation methodology has been developed, including the establishment of a steering committee with key stakeholders and population groups.

RNW Media’s Love Matters Program Data Protection

RNW Media’s Love Matters Program offers online platforms to foster discussion and information-sharing on love, sex and relationships for 18- to 30-year-olds in areas where information on sexual and reproductive health and rights (SRHR) is censored or taboo. RNW Media’s digital teams introduced creative approaches to data processing and analysis, social-listening methodologies and natural-language-processing techniques to make the platforms more inclusive, create targeted content and identify influencers and trending topics. Because governments have imposed restrictions such as license fees or registration requirements on online influencers as a way of monitoring and blocking “undesirable” content, RNW Media has invested in the security of its platforms and the digital literacy of its users to protect them from unauthorized access to their sensitive personal information. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, p. 12-14.

The Indigenous Navigator

The Indigenous Navigator is a framework and set of tools for and by indigenous peoples to systematically monitor the level of recognition and implementation of their rights. The data in this project are the experiences of indigenous communities and organizations, and the tools facilitate communities’ own generation of quality data. One objective of the Navigator is for this quality data to feed into existing human rights and sustainable-development monitoring processes at local, national, regional and international levels. The project’s page about privacy describes data-protection measures such as the requirement of community consent, how to obtain it, and how the Indigenous Navigator uses personal data.

Girl Effect

Girl Effect, a creative non-profit working where girls are marginalized and vulnerable, uses media and mobile tech to empower girls. The organisation embraces digital tools and interventions and acknowledges that any organisation that uses data also has a responsibility to protect the people it talks to or connects with online. Its ‘Digital safeguarding tips and guidance’ provides in-depth guidance on implementing data-protection measures while working with vulnerable people. Citing Girl Effect as inspiration, Oxfam has developed and implemented a Responsible Data Policy and shares many supporting resources online. The publication ‘Privacy and data security under GDPR for quantitative impact evaluation’ provides detailed considerations of the data-protection measures Oxfam implements while doing quantitative impact evaluation through digital and paper-based surveys and interviews.

The LAND (Land Administration for National Development) Partnership

The LAND (Land Administration for National Development) Partnership, led by Kadaster International, aims to design fast and affordable land administration that meets people’s needs. Through the processing and storage of geodata such as GPS, aerial photographs and satellite imagery (determining general boundaries instead of fixed boundaries), a digital spatial framework is established that enables affordable, real-time and participatory registration of land by its owners. Kadaster is aware of the sensitive nature of some of the data in the system that need to be protected, in view of possible manipulation and privacy violations, and of the need to train people in the digital processing of data. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, p. 25-26.


Social Media

What is social media?

Social media provide spaces for people and organizations to share and access news and information, communicate with beneficiaries and advocate for change. Social-media content includes the text, photos, videos, infographics, or any other material placed on a blog, Facebook page, Twitter account, etc. for the audience to consume, interact with, and circulate. Content is curated by platforms and delivered to users according to what is most likely to attract their attention. There is an ever-expanding amount of content available on these platforms.

Digital inclusion center in the Peruvian Amazon. For NGOs, social media platforms can be useful to reach new audiences and to raise awareness of services. Photo credit: Jack Gordon for USAID / Digital Development Communications.

Theoretically, through social media, everyone has a way to speak out and reach audiences across the world, which can be empowering and bring people together. At the same time, much of what is shared on social media can be misleading, hateful, and dangerous, which theoretically imposes a level of responsibility on the owners of platforms to moderate content.

How does social media work?

Social media platforms are owned by private companies, with business models usually based on advertising and monetization of users’ data. This affects the way that content appears to users, and influences data-sharing practices. Moderating content on these social-media spaces brings its own challenges and complications because it requires balancing multiple fundamental freedoms. Understanding the content moderation practices and business models of the platforms is essential to reap the benefits while mitigating the risks of using social media.

Business Models

Most social-media platforms rely on advertising. Advertisers pay for engagement, such as clicks, likes and shares. Therefore, sensational and attention-grabbing content is more valuable. This motivates platforms to use automated-recommendation technology that relies on algorithmic decision-making to prioritize content likely to grab attention. The main strategy of “user-targeted amplification” shows users content that is most likely to interest them based on detailed data that are collected about them. See more in the Risk section under Data Monetization by social media companies and tailored information streams.
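A minimal sketch can make this dynamic concrete. The posts, weights, and scoring function below are invented for illustration and are not any platform's actual algorithm; the point is that when content is ranked purely by predicted engagement, accuracy never enters the score.

```python
# Illustrative sketch of engagement-driven feed ranking (hypothetical
# weights; not any real platform's algorithm).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int      # historical clicks
    shares: int      # historical shares
    accuracy: float  # 0..1 editorial-quality score; note it is never used

def engagement_score(post: Post) -> float:
    # Shares are weighted more than clicks because they spread content
    # further; accuracy plays no role at all in the ranking.
    return post.clicks + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("Measured policy analysis", clicks=120, shares=4, accuracy=0.95),
    Post("SHOCKING claim you won't believe!", clicks=300, shares=90, accuracy=0.2),
]
feed = rank_feed(posts)
print(feed[0].title)  # the sensational post ranks first despite low accuracy
```

Under this incentive structure, the low-accuracy but attention-grabbing post always outranks the careful one, which is the dynamic the paragraph above describes.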

The Emergence of Programmatic Advertising

The transition of advertising to digital systems has dramatically altered the advertising business. In an analog world, advertising placements were predicated on aggregate demographics, collected by publishers and measurement firms. These measurements were rough, capable at best of tracking subscribers and household-level engagement. Advertisers hoped their ads would be seen by enough of their target demographic (for example, men between 18 and 35 with income at a certain level) to be worth their while. Even more challenging was tracking the efficacy of the ads. Systems for measuring if an ad resulted in a sale were limited largely to mail-in cards and special discount codes.

The emergence of digital systems changed all of that. Pioneered for the most part by Google and then supercharged by Facebook in the early years of the 21st century, a new promise emerged: “Place ads through our platform, and we can put the right ad in front of the right person at the right time. Not only that, but we can report back to you (advertiser) who saw the ad, if they clicked on it, and if that click led to a ‘conversion’ or a sale.”

But this promise has come with significant unintended consequences. The way that the platforms—and the massive ad tech industry that has rapidly emerged alongside them—deliver on this promise requires a level of data gathering, tracking and individual surveillance unprecedented in human history. The tracking of individual behaviors, preferences and habits powers the wildly profitable digital advertising industry, dominated by platforms that can control these data at scale.

Managing huge consumer data sets at the scale and speed required to deliver value to advertisers has come to mean a heavy dependence on algorithms to do the searching, sorting, tracking, placement and delivery of ads. This development of sophisticated algorithms led to the emergence of programmatic advertising, which is the placement of ads in real time on websites with no human intervention. Programmatic advertising made up roughly two thirds of the $237 billion global ad market in 2019.
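The core mechanic of a programmatic placement can be sketched in a few lines. The bids below are invented, and the second-price rule is just one common auction design historically used in real-time bidding; real ad exchanges are far more complex. The point is that bids are compared and a winner chosen in milliseconds, with no human in the loop.

```python
# Toy sketch of a real-time ad auction (hypothetical bids; second-price
# rules are one common design, not a description of any specific exchange).
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning advertiser, price paid) under second-price rules:
    the highest bidder wins but pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Bids arrive when a page loads; the auction resolves before the page renders.
bids = {"shoe_brand": 2.40, "travel_site": 1.90, "streaming_app": 1.10}
winner, price = run_auction(bids)
print(winner, price)  # shoe_brand wins, paying the runner-up's 1.90
```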

The digitization of the advertising market, particularly the dominance of programmatic advertising, has resulted in a highly uneven playing field. The technology companies enter with a significant advantage: they built the new structures and set the terms of engagement. What began as a value add in the new digital space— “We will give advertisers efficiency and publishers new audience and revenue streams”—has evolved to disadvantage both.

One of the primary challenges is in how audience engagement is measured and tracked. The primary performance indicators in the digital world are views and clicks. As mentioned above (and well documented in the literature), an incentive structure based on views and clicks (engagement) tends to favor sensational and eye-catching content. In the race for engagement, misleading or false content, with dramatic headlines and incendiary claims, consistently wins out over more balanced news and information. See also the section on digital advertising in the disinformation resource.

Advertising-motivated content

Platforms leverage tools like hashtags and search engine optimization (SEO) to rank and cluster content around certain topics. Unfortunately, automated content curation motivated by advertising does not tend to prioritize healthful, educational, or rigorous content. Instead, frivolous, distracting, potentially untrue or even harmful content tends to spread more widely: conspiracy theories, shocking or violent content, and “click-bait” (misleading phrases designed to entice viewing). Many platforms also have upvoting features (“like” buttons) which, like hashtags and SEO, feed into algorithmic curation and push certain content to circulate more widely. Together these features cause “virality,” one of the defining features of the social-media ecosystem: the tendency of an image, video, or piece of information to be circulated rapidly and widely.

In some cases, virality can spark political activism and raise awareness (like the #MeToo hashtag), but it can also amplify tragedies and spread inaccurate information (anti-vaccine information and other health rumors, etc.). Additionally, the business models of the platforms reward quantity over quality (number of “likes”, “followers”, and views), encouraging a growth logic that has led to the problem of information saturation or information overload, overwhelming users with seemingly infinite content. Indeed, design decisions like the “infinite scroll” intended to make our social media spaces ever larger and more entertaining have been associated with impulsive behaviors, increased distraction, attention-seeking behavior, lower self-esteem, etc.

Many digital advertising strategies raise risks regarding access to information, privacy, and discrimination, in part because of their pervasiveness and subtlety. Influencer marketing, for example, is the practice of sponsoring a social media influencer to promote or use a certain product by working it into their social-media content, while native advertising is the practice of embedding ads in or beside other non-paid content. Most consumers do not know what native advertising is and may not even know when they are being delivered ads.

It is not new for brands to strategically place their content. However, today there is much more advertising, and it is seamlessly integrated with other content. In addition, the design of platforms makes content from diverse sources—advertisers and news agencies, experts and amateurs—indistinguishable. Individuals’ right to information and basic guarantees of transparency are at stake if advertisements are placed on equal footing with desired content.

Content Moderation

Content moderation is at the heart of the service that social-media platforms provide: the hosting and curation of the content uploaded by their users. Content moderation is not just the review of content, but every design decision made by the platforms, from the Terms of Service and their Community Guidelines, to the algorithms they use to rank and order content, to the types of content they allow and encourage through design features (“like”, “follow”, “block”, “restrict”, etc.).

Content moderation is particularly challenging because of the issues it raises around freedom of expression. While it is necessary to address massive quantities of harmful content that circulate widely, educational, historic, or journalistic content is often censored by algorithmic moderation systems. In 2016, Facebook took down a post with a Pulitzer Prize-winning image of a naked 9-year-old girl fleeing a napalm bombing and suspended the account of the journalist who had posted it.

Though nations differ in their stances on freedom of speech, international human rights provide a framework for how to balance freedom of expression against other rights, and against protections for vulnerable groups. Still, content-moderation challenges grow as content itself evolves, for instance with the rise of live streaming, ephemeral content, and voice assistants. Moderating internet memes is particularly challenging because of their ambiguity and ever-changing nature; and yet meme culture is a central tool used by the far right to share ideology and glorify violence. Some communication manipulations are also intentionally difficult to detect, for example, “dog whistling” (sending coded messages to subgroups of the population) and “gaslighting” (psychological manipulation to make people doubt their own knowledge or judgement).

Automated moderation

Content moderation is usually performed by a mix of humans and artificial intelligence, with the precise mix dependent on the platform and the category of content. The largest platforms like Facebook and YouTube use automated tools to filter content as it is uploaded. Facebook, for example, claims it is able to detect up to 80% of hate speech content in some languages as it is posted, before submitting it for human review. The working conditions of the human moderators have been heavily critiqued, while the accuracy and transparency of the algorithms are also disputed and, unsurprisingly, show some concerning biases. Humans are of course subject to biases as well, but algorithmic bias in content moderation poses more serious threats to equity and freedom of expression.
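The automated-plus-human mix described above can be sketched as a simple triage. The thresholds, scores, and categories here are hypothetical; real systems rely on machine-learned classifiers per language and content type. The structure, though, is typical: act automatically only on high-confidence cases and queue uncertain ones for a person.

```python
# Hypothetical moderation triage sketch (invented thresholds; not any
# platform's actual system). A model's confidence score routes content.
def triage(text: str, toxicity: float) -> str:
    """Route a post given a classifier's toxicity score in [0, 1]."""
    if toxicity >= 0.9:
        return "auto-remove"   # high confidence: filtered at upload
    if toxicity >= 0.5:
        return "human-review"  # uncertain: a human moderator decides
    return "publish"

queue = [
    ("friendly post", 0.05),
    ("borderline insult", 0.60),
    ("slur-laden rant", 0.97),
]
decisions = {text: triage(text, score) for text, score in queue}
print(decisions)
```

The middle band is where most of the cost and controversy lives: set the automatic threshold too low and legitimate content is censored; set it too high and harmful content slips through.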

The complexity of content-moderation decisions does not lend itself easily to automation, and the blurry line between legal and illegal, permissible and impermissible content leads both to legitimate content being censored and to harmful or illegal content passing through the filter (cyberbullying, defamation, etc.).

The moderation of content posted to social media has been increasingly important during the COVID-19 pandemic, when access to misleading and inaccurate information about the virus can result in death. The current moderation strategy of Facebook, for example, has been described as creating “a platform that is effectively at war with itself: the News Feed algorithm relentlessly promotes irresistible click-bait about Bill Gates, vaccines, and hydroxychloroquine; the trust and safety team then dutifully counters it with bolded, underlined doses of reality.”

Addressing harmful content

In some countries, local laws may address content moderation, but they relate mainly to child abuse images or illegal content that incites violence. Most platforms also have community standards or safety and security policies that state the kind of content allowed and set the rules for harmful content. Enforcement of legal requirements and the platforms’ own standards relies primarily on content being flagged by social media users. The social-media platforms are only responsible for harmful content shared on their platforms once it has been reported to them.

Some platforms have established mechanisms that allow civil society organizations (CSOs) to contribute to the flagging process by becoming so-called trusted flaggers. With Facebook, this status verifies the accounts of civic organizations, provides higher levels of protection and faster responses to incident reports, and makes accounts less likely to be automatically disabled. For example, Access Now’s Digital Security Helpline is a trusted partner, and Facebook also offers access to its Trusted Partners Program to partners of members of the Design 4 Democracy Coalition. However, this program does not compensate for the platform’s limited accessibility for CSOs that encounter problems.


How is social media relevant in civic space and for democracy?

The flow of information on social media involves many fundamental human rights and supports the functioning of a democracy, to the extent that it allows freedom of expression, democratic debate, and civic participation. Social-media platforms have become core communication channels for CSOs. As more aspects of our lives take place within digital environments, social media becomes critical to aspects as fundamental as access to education, work and livelihood, health, and other services.

For example, citizen journalism, which has flourished through social media, has allowed internet users across the world to supplement the mainstream media with facts and perspectives ‘from the ground’ that might otherwise be overlooked or misrepresented. In some contexts, civic space actors rely on social-media platforms to produce and disseminate critical information during humanitarian crises or emergencies.

Digital inclusion center in the Peruvian Amazon. The business models and content moderation practices of social media platforms directly affect the content displayed to users. Photo Credit: Chandy Mao, Development Innovations.

However, the information shared over social media is mediated by private companies and governments, who possess new tactics for censorship, control and information manipulation. Censorship is no longer necessarily the denial of information but can be the denial of attention or credibility. Further, the porosity of online and offline space can be dangerous for individuals and for democracy, as harassment, hate speech, and “trolling” behaviors offer new methods for violence, including organized violence. Doxxing and targeted digital attacks have also been used to intimidate journalists and political minorities or opponents. Read more about online violence and targeted digital attacks in the Risks section.

Social-media content is also pervasive and homogenized—in our personalized news feeds, information shared by amateurs, advertisers, or for political objectives can be difficult to distinguish from quality news, giving rise to a range of information disorders, from the accidental forwarding of inaccurate information to the intentional sharing of harmful content, as explored in the disinformation primer.

As noted above, the platforms’ algorithms reward quantity over quality (number of “likes,” “followers,” and views), fueling information saturation and design patterns such as the “infinite scroll” that have been associated with impulsive behaviors, increased distraction, attention-seeking behavior, and lower self-esteem.

Furthermore, social-media platforms have become gatekeepers of information and connections. It has become harder to work and live without these platforms: those not using social media may miss important public announcements, events, community information or even family updates.


Opportunities

Students from the Kandal Province, Cambodia. Social media platforms have opened up new platforms for video storytelling. Photo credit: Chandy Mao, Development Innovations.

Social media can have positive impacts when used to further democracy, human rights and governance issues. Read below to learn how to more effectively and safely think about social media use in your work.

Citizen Journalism

Social media has been credited with providing channels for citizens, activists, and experts to report instantly and directly—from disaster settings, during protests, from within local communities, etc. Citizen journalism, also referred to as participatory journalism or guerrilla journalism, does not have a definite set of principles and should not be considered as a replacement for professional journalism, but it is an important supplement to mainstream journalism. Collaborative journalism, the partnership between citizen and professional journalists, as well as crowdsourcing strategies, are further techniques permitted by social media that have enhanced journalism, helping to promote voices from the ground and to magnify diverse voices and viewpoints. The outlet France 24 has developed a network of 5,000 contributors, the “observateurs,” who are able to cover important events directly by virtue of being on scene at the time, as well as to confirm the accuracy of information.

Social-media platforms as well as blogging tools have allowed for the decentralization of expertise, bridging elite and non-elite forms of knowledge. Without proper fact-checking or supplementary sources and proper context, citizen reporting carries risks— including security risks to the authors themselves—but it is an important democratizing force and source of information.

Crowdsourcing

In crowdsourcing, the public is mobilized to share data together to tell a larger story or accomplish a greater goal. Crowdsourcing can be a method for financing, for journalism/reporting, or simply for gathering ideas. Usually some kind of software tool or platform is put in place that the public can easily access and contribute to. Crisis mapping, for example, is a type of crowdsourcing through which the public shares data in real time during a crisis (a natural disaster, an election, a protest, etc.). These data are then ordered and displayed in a useful way. For instance, crisis mapping can be used in the wake of an earthquake to show first responders the areas that have been hit and need immediate assistance. Ushahidi is an open-source crisis-mapping software developed in Kenya after the outbreak of violence that followed the 2007 election. The tool was first created to allow Kenyans to flag incidents, to form a complete and accurate picture of the situation on the ground, to share with the media, outside governments, and relevant civil society and relief organizations. In Kenya, the tool gathered texts, tweets, and photos and created crowdsourced maps of incidents of violence, election fraud, and other abuse. Ushahidi now has a global team and works in 30 different languages.
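The core aggregation step behind crisis mapping can be sketched in a few lines. The coordinates, grid size, and report fields below are invented for illustration (this is not Ushahidi's actual code): citizen reports carrying coordinates are snapped into grid cells so responders can see where incidents cluster.

```python
# Toy crisis-mapping aggregation sketch (invented reports and grid size;
# inspired by, but not taken from, tools like Ushahidi).
from collections import Counter

def grid_cell(lat: float, lon: float, size: float = 0.1) -> tuple[float, float]:
    """Snap a coordinate to the south-west corner of a `size`-degree cell."""
    return (round(lat - lat % size, 4), round(lon - lon % size, 4))

reports = [
    {"text": "road blocked",   "lat": -1.28, "lon": 36.82},
    {"text": "clinic damaged", "lat": -1.29, "lon": 36.81},
    {"text": "power out",      "lat": -4.04, "lon": 39.66},
]
# Count reports per cell: the densest cells are the hotspots to display.
hotspots = Counter(grid_cell(r["lat"], r["lon"]) for r in reports)
print(hotspots.most_common(1))  # the first two reports share one cell
```

Real deployments add verification, deduplication, and categorization before a report reaches the public map, but the bucketing step above is what turns scattered citizen reports into an actionable picture.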

Digital Activism

Social media has allowed local and global movements to spring up overnight, inviting broad participation and visibility. Twitter hashtags in particular have been instrumental for coalition building, coordination, and for raising awareness among international audiences, media and government. Researchers began to take note of digital activism around the 2011 “Arab Spring,” when movements in Tunisia, Morocco, Syria, Libya, Egypt and Bahrain, among other countries, leveraged social media and were quickly followed by the Occupy Wall Street movement in the United States. Ukraine’s Euromaidan movement in late 2013 and the Hong Kong protests in 2019 are also examples of political movements that used social media to galvanize support.

In 2013, the acquittal of George Zimmerman in the death of unarmed 17-year-old Trayvon Martin inspired the creation of the #BlackLivesMatter hashtag. This movement grew stronger in response to the tragic killing of Michael Brown. The hashtag, at the front of an organized national protest movement, provided an outlet for people to join an online conversation and articulate alternative narratives in real time about subjects that the media and the rest of the United States (and more recently with the killing of George Floyd, the world) had not paid sufficient attention to: police brutality, systemic racism, racial profiling, inequality, etc.

The #MeToo movement against sexual misconduct in the media industry, which also became a global movement, has allowed a multitude of people to participate in activism previously bound to a certain time and place.

Some researchers and activists fear “slacktivism,” the effect of social media giving people an excuse to stay at home rather than make a more dynamic response, and some fear that the tools of social media are ultimately insufficient for enacting meaningful social change, which requires nuanced political arguments. (Interestingly, a 2018 Pew Research survey on attitudes toward digital activism showed that just 39% of white Americans believed social media was an important tool for self-expression, while 54% of Black Americans said it was an important tool for them.)

Social media has enabled new online groups to gather together and to express a common sentiment as a form of solidarity or as a means to protest. Since the outbreak of the COVID-19 pandemic in particular, many physical protests have been suspended or cancelled, and virtual protests have proceeded in their place.

Expansion and engagement with international audience at low costs

Social media provides a valuable opportunity for CSOs to reach their goals and engage with existing and new audiences. A good social-media strategy is best underpinned by a permanent staff position to grow a strong and consistent social-media presence based on the organization’s purpose, values, and culture. This person should know how to seek information, be aware of both the risks and benefits of sharing information online and understand the importance of using sound judgment when posting. The USAID ‘Social Networking: A Guide to Strengthening Civil Society through Social Media’ provides a set of questions as guidance to develop a sound social-media policy, asking organizations to think about values, roles, content, tone, controversy and privacy.

Increased awareness of services

Social media can be integrated into programmatic activities to strengthen the reach and impact of the program, for example, by generating awareness of an organization’s services to a new demographic. Organizations can promote their programs and services while responding to questions and fostering open dialogue. Widely used social media platforms can be useful to reach new audiences for training and consulting activities through webinars or individual meetings designed for NGOs.

Opportunities for Philanthropy and Fundraising

Social-media fundraising presents an important opportunity for non-profits, but organizations should carefully consider the type of campaign and platforms they choose. TechSoup, a non-profit providing tech support for NGOs, offers advice and an online course on fundraising with social media for non-profits.

After the blast in Beirut’s harbor in the summer of 2020, many Lebanese people started online fundraising pages for their organizations. Social-media platforms were used extensively to share funding suggestions to the global audience watching the disaster unfold, reinforced by traditional media coverage.

Emergency communication

In some contexts, civic actors rely on social media platforms to produce and disseminate critical information, for example, during humanitarian crises or emergencies. Even in a widespread disaster, the internet often remains a significant communication channel, which makes social media a useful complementary means for emergency teams and the public. Reliance on the internet, however, increases vulnerability in case of network shutdowns.


Risks

In Kyiv, Ukrainian students share pictures at the opening ceremony of a Parliamentary Education Center. Photo credit: Press Service of the Verkhovna Rada of Ukraine, Andrii Nesterenko.

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with social media platforms in DRG work, as well as how to mitigate unintended – and intended – consequences.

Polarization and Ideological Segregation

The ways in which content flows and is presented in social media due to the platforms’ business models risk limiting our access to information, particularly to information that challenges our preexisting beliefs, by exposing us to content likely to attract our attention and support our views. The concept of the filter bubble refers to the filtering of information by online platforms and through our own intellectual biases that worsen polarization by allowing us to live in echo chambers. This is easily witnessed in a YouTube feed: when you search for a song by an artist, you will likely find more songs by the same artist, or similar ones—the algorithms are designed to prolong your viewing, and assume you want more of something similar. The same has been identified for political content. Social media algorithms encourage confirmation bias, exposing us to content we will agree with and enjoy, often at the expense of the accuracy, rigor, educational or social value of that content.
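A minimal sketch shows why similarity-driven recommendation narrows a feed. The catalog, topics, and selection rule below are invented for illustration; real recommenders use far richer signals, but the feedback loop is the same: the topic you engaged with most is the topic you keep seeing.

```python
# Illustrative filter-bubble sketch (invented catalog and selection rule;
# not any platform's actual recommender).
from collections import Counter

catalog = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-left"},
    {"id": 3, "topic": "politics-right"},
    {"id": 4, "topic": "cooking"},
]

def recommend(history: list[str], k: int = 2) -> list[dict]:
    """Recommend the k items matching the user's most-engaged topic."""
    favorite = Counter(history).most_common(1)[0][0]
    return [item for item in catalog if item["topic"] == favorite][:k]

history = ["politics-left", "cooking", "politics-left"]
picks = recommend(history)
print([item["id"] for item in picks])  # only "politics-left" items return
```

Each recommended item the user engages with reinforces the dominant topic in the history, so on the next pass the feed is narrower still; opposing or merely different content never surfaces.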

The massive and precise data amassed by advertisers and social media companies about our preferences and opinions permit the practice of micro-targeting, which involves the display of tailored content based on data about users’ online behaviors, connections, and demographics, among other attributes, as will be further explained below.

The increasingly tailored distribution of news and information is a threat to political discourse, diversity of opinions and democracy. Users can become detached even from factual information that disagrees with their viewpoints and isolated within their own cultural or ideological bubbles.

Because the tailoring of news and other information on social media is driven largely by opaque algorithms owned by private companies, it is hard for users to avoid these bubbles. Access to and intake of the very diverse information available on social media, with its many viewpoints, perspectives, ideas and opinions, requires an explicit effort by the individual user to go beyond passive consumption of the content presented to them.

Misinformation and Disinformation

The internet and the dominant online platforms provide new tools that amplify and alter the danger presented by false, inaccurate, or out-of-context information. The online space increasingly drives discourse and is where much of today’s disinformation takes root. Refer to the Disinformation resource for a detailed overview of these problems.

Online Violence and Targeted Digital Attacks

Social media facilitates a number of violent behaviors such as defamation, harassment, bullying, stalking, “trolling” and “doxxing.” Cyberbullying among children, like traditional offline bullying, can harm students’ performance in school and cause real psychological damage. Cyberbullying is particularly harmful because victims experience the violence alone, isolated in cyberspace. They often do not seek help from parents and teachers, who they believe are unable to intervene. Cyberbullying is also difficult to address because it can move across social-media platforms, beginning on one and moving to another. Like cyberbullying, cyber harassment and cyberstalking have very tangible offline effects. Women are most often the victims of cyber harassment and cyberviolence, sometimes through the use of stalkerware installed by their partners to track their movements. A frightening cyber harassment trend accelerated in France during the COVID-19 confinement in the form of “fisha” accounts, where bullies, aggressors or jilted ex-boyfriends publish and circulate naked photos of teenage girls without their consent.

Journalists, women in particular, are often subject to cyber harassment and threats, particularly those who write about socially sensitive or political topics. Online violence against journalists can lead to journalistic self-censorship, affecting the quality of the information environment and democratic debate. Online tools provide new ways to spread and amplify hate speech and harassment. The use of fake accounts, bots, and even bot-nets (automated networks of accounts) allow perpetrators to attack, overwhelm, and even disable the social media accounts of their victims. Doxxing, by revealing sensitive information about journalists, is another strategy that can be used for censorship.

The 2014 case of Gamergate, when several women video-game developers were attacked by a coordinated harassment campaign that included doxxing and threats of rape and death, illustrates the strength and capacity of loosely connected hate groups online to rally together, inflict real violence, and even drown out criticism. Many of the actions of the most active Gamergate trolls were illegal, but their identities remain unknown. Importantly, it has been suggested by supporters of Gamergate that the most violent trolls were a “smaller, but vocal minority”—evidence of the magnifying power of internet channels and their use for coordinated online harassment.

Online hoaxes, scams, and frauds, like in their traditional offline forms, usually aim to extract money or sensitive information from the target. The practice of phishing is increasingly common on social media: an attacker pretends to be a contact or a reputable source in order to send malware or to extract personal information or account credentials. Spearphishing is a targeted phishing attack that leverages information about the recipient and details related to the surrounding circumstances. Coronavirus-related spearphishing attacks have increased steadily during the pandemic.

Data monetization by social media companies and tailored information streams

Most social-media platforms are free to use. Users simply register and then are able to use the platform. Social-media platforms do not receive revenue directly from users, as a traditional subscription service would; rather, they generate profit primarily through digital advertising. Digital advertising is based on the collection of users’ data by social-media companies, which allows advertisers to target their ads to specific users and types of users. Social-media platforms monitor their users and build detailed profiles that they sell to advertisers. The data tracked include information about the user’s connections and behavior on the platform, such as friends, posts, likes, searches, clicks and mouse movements. Data are also extensively collected outside platforms, including information about users’ location, webpages visited, online shopping and banking behavior. Additionally, many companies regularly access the contact books and photos of their users.

In the case of Facebook, this has led to a long-held and widespread conspiracy theory that the company listens to conversations to serve tailored advertisements. No one has ever been able to find clear evidence that this is actually happening. Research has shown that a company like Facebook does not need to listen in to your conversations, because it has the capacity to track you in so many other ways: “Not only does the system know exactly where you are at every moment, it knows who your friends are, what they are interested in, and who you are spending time with. It can track you across all your devices, log call and text metadata on phones, and even watch you write something that you end up deleting and never actually send.”

The massive and precise data amassed by advertisers and social-media companies about our preferences and opinions permit the practice of micro-targeting, that is, displaying targeted advertisements based on what you have recently purchased, searched for or liked. But just as online advertisers can target us with products, political parties can target us with more relevant or personalized messaging. Studies are currently trying to determine the extent to which political micro-targeting is a serious concern for the functioning of democratic elections. The question has also been raised by researchers and digital rights activists as to how micro-targeting may be interfering with our freedom of thought.
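At its core, micro-targeting is audience selection over collected attributes. A hedged sketch (the user profiles, field names, and criteria below are entirely hypothetical): the advertiser, or political campaign, specifies exactly which combination of attributes must match before a message is shown.

```python
# Hypothetical micro-targeting sketch (invented profiles and criteria;
# not any ad platform's actual targeting API).
users = [
    {"name": "A", "age": 24, "region": "north", "interests": {"hiking", "politics"}},
    {"name": "B", "age": 67, "region": "north", "interests": {"gardening"}},
    {"name": "C", "age": 31, "region": "south", "interests": {"politics"}},
]

def audience(users, *, min_age=18, max_age=35, region=None, interest=None):
    """Return only the users matching every targeting criterion."""
    selected = []
    for user in users:
        if not (min_age <= user["age"] <= max_age):
            continue  # outside the age band
        if region is not None and user["region"] != region:
            continue  # wrong region
        if interest is not None and interest not in user["interests"]:
            continue  # profiled interests do not match
        selected.append(user)
    return selected

targets = audience(users, region="north", interest="politics")
print([user["name"] for user in targets])  # only "A" matches every criterion
```

Because each recipient can be selected this precisely, different users may see entirely different, even contradictory, messages from the same advertiser, with no public record of who saw what.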

The increasingly tailored distribution of news and information is a threat to political discourse, diversity of opinions and democracy. Users can become detached from information that disagrees with their viewpoints and isolated within their own cultural or ideological bubbles. This is easily witnessed in a YouTube feed: when you search for a song by an artist, you will likely be shown more by the same artist, or similar ones—the algorithms are designed to prolong your viewing, and assume you want more of something similar. The same has been identified for ideological content.
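The recommendation behavior described above can be illustrated with a toy sketch. This is purely hypothetical code, not any platform’s actual system: a minimal content-based recommender that ranks the remaining catalog by similarity to the item just watched, so watching one song by an artist surfaces more songs by the same or similar artists. The catalog, tags and function names are all invented for illustration.

```python
# Hypothetical catalog: each video is described by a set of tags.
CATALOG = {
    "song_a_artist1": {"artist1", "pop"},
    "song_b_artist1": {"artist1", "pop"},
    "song_c_artist2": {"artist2", "pop"},
    "lecture_history": {"history", "documentary"},
}

def jaccard(tags_a, tags_b):
    """Similarity between two tag sets: intersection over union."""
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def recommend(just_watched, k=2):
    """Return the k catalog items most similar to the one just watched."""
    scores = {
        vid: jaccard(CATALOG[just_watched], tags)
        for vid, tags in CATALOG.items()
        if vid != just_watched
    }
    # Ranking purely by similarity reproduces the "bubble" effect:
    # similar content always outranks dissimilar content.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

After watching `song_a_artist1`, this sketch recommends the other pop songs and never the history lecture, mirroring how similarity-driven ranking narrows what a user is shown.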

Government surveillance and access to personal data

The content shared over social media is monitored by governments, which use social media for censorship, control and information manipulation. Many democratic governments are known to engage in extensive social-media monitoring for law enforcement and intelligence-gathering purposes. These practices should be guided by robust legal frameworks to safeguard individuals’ rights online, such as privacy and data-protection laws, but many countries have not yet enacted these types of laws.

There are also many examples of authoritarian governments using personal and other data harvested through social media to intimidate activists, silence opposition, and bring development projects to a halt. The information shared on social media often allows bad actors to build extensive profiles of individuals that enable targeted online and offline attacks, often through social engineering techniques. For example, a phishing email can be carefully crafted based on social-media data to trick an activist into clicking on a malicious link that provides access to their device, documents or social-media accounts.

Sometimes, however, a strong, real-time presence on social media can protect a prominent activist against threats by the government. A disappearance or arrest will be immediately noticed by followers or friends of a person who suddenly becomes silent on social media.

Market power and differing regulation

We rely on social-media platforms to help fulfill our fundamental rights (freedom of expression, assembly, etc.). However, these platforms are massive global monopolies and have been referred to as “the new governors.” This market concentration poses a challenge to national and international governance mechanisms. Simply breaking up the biggest platform companies will not fully solve the information disorders and social problems presented by social media. Civil society and governments also need visibility into the design choices made by the platforms in order to understand how to address their negative aspects.

The growing influence of social-media platforms has given many governments reason to impose laws on online content. There is a surge in laws across the world regulating illegal and harmful content, such as incitement to terrorism or violence, false information, and hate speech. These laws often criminalize speech, imposing punishments such as jail terms or heavy fines, sometimes for something as minor as a retweet on Twitter. Even in countries where the rule of law is respected, legal approaches to regulating online content may be ineffective due to the many technical challenges of content moderation. There is also a risk of violating internet users’ freedom of expression by reinforcing imperfect and non-transparent moderation practices and over-deletion. Lastly, such laws force social-media companies to navigate between complying with local laws and defending international human rights law.

Amplification/virality

As noted above, virality is one of the defining features of the social-media ecosystem: the tendency of an image, video, or piece of information to circulate rapidly and widely. In some cases, virality can spark political activism and raise awareness (like the #MeToo hashtag), but it can also amplify tragedies (the video of the Christchurch massacre in New Zealand) and spread inaccurate information.

Impact on journalism

Social media has had a profound impact on the field of journalism. While it has allowed the emergence of the citizen-journalist and of locally-reported and crowd-sourced information, social-media companies have displaced the relationship between advertising and the traditional newspaper, and created a reward system that favors sensationalist, click-bait content capable of attracting the widest global attention over quality journalism pertinent to local communities. In many places, these monopolies have also been partially responsible for the collapse of local news.

The reason for this impact is that, while advertising has successfully made the transition to digital, with global revenues currently at $247 billion and growing by 4% year over year, very little of that revenue is making its way to publishers. The ad tech supply chain, dominated by Google and Facebook, now consumes 90% of all new growth in the world’s major markets and 61 cents of every dollar spent on digital advertising worldwide.

The disruption of the publishing business model has been a slow-motion disaster for news organizations around the world. Digital newspaper ad revenues worldwide grew from an anemic $7.3 billion in 2012 to just $9.95 billion in 2016, while the Google/Facebook duopoly will earn $174 billion, or 61% of the global digital advertising market.

In addition, the way search tools work dramatically affects local publishers, as search is a powerful vector for news and information. Researchers have found that search rankings have a marked impact on our attention: not only do we tend to consider highly-ranked information more trusted and relevant, we also click on top results more often than lower ones. The Google search engine concentrates our attention on a narrow range of news sources, a trend that works against diverse and pluralistic media outlets. It also tends to work against the advertising revenue of smaller and community publishers, which is based on user attention and traffic. It is a downward spiral: search results favor larger outlets; those results drive more user engagement, which makes their inventory more valuable in the advertising market; those publishers then grow larger, driving still more favorable search results, and so on.


Questions

To understand the implications of social-media information flows and the choice of platforms used in your work, ask yourself these questions:

  1. Does your organization have a social-media strategy? What does your organization hope to achieve through social media use?
  2. Do you have staff who can oversee and ethically moderate your social-media accounts and content?
  3. Which platform do you intend to use to accomplish your organization’s goals? What is the business model of that platform? How does this business model affect you as a user?
  4. How is content ordered and moderated on the platform used (humans, volunteers, AI, etc.)? Can content go viral?
  5. Where is the platform legally headquartered? What jurisdiction and legal frameworks does it fall under?
  6. Do the platforms chosen have mechanisms for users to signal harassment and hate speech for review and possible removal?
  7. Do the platforms have mechanisms for users to be heard when content is unfairly taken down or accounts unfairly blocked?
  8. Are the platforms collecting data about users? Who else has access to collected data and how is it being used?
  9. How does the platform involve its community of users and civil society (for instance, in flagging dangerous content, in giving feedback on design features, in fact-checking information, etc.)? Are there local representatives?
  10. Do the platforms chosen have privacy features like encryption? If so, what level of encryption and for what precise services (for example, only on the app, only in private message threads)? What are the default settings?


Case Studies

Crowdsourced mapping in crisis zones: collaboration, organisation and impact

Amelia Hunt & Doug Specht, Journal of International Humanitarian Action, 2019

“Crowdsourced mapping has become an integral part of humanitarian response, with high profile deployments of platforms following the Haiti and Nepal earthquakes, and the multiple projects initiated during the Ebola outbreak in North West Africa in 2014, being prominent examples. There have also been hundreds of deployments of crowdsourced mapping projects across the globe that did not have a high profile. This paper, through an analysis of 51 mapping deployments between 2010 and 2016, complimented with expert interviews, seeks to explore the organisational structures that create the conditions for effective mapping actions, and the relationship between the commissioning body, often a non-governmental organisation (NGO) and the volunteers who regularly make up the team charged with producing the map.”

Scaling Social Movements Through Social Media: The Case of Black Lives Matter

Marcia Mundt, Karen Ross, and Charla M. Burnett, Social Media and Society, 2018

“Drawing on a case study of Black Lives Matter (BLM) that includes both analysis of public social media accounts and interviews with BLM groups, [the authors] highlight possibilities created by social media for building connections, mobilizing participants and tangible resources, coalition building, and amplifying alternative narratives. [They] also discuss challenges and risks associated with using social media as a platform for scaling up. Our analysis suggests that while benefits of social media use outweigh its risks, careful management of online media platforms is necessary to mitigate concrete, physical risks that social media can create for activists.”

Facebook’s Role in the Genocide in Myanmar: New Reporting Complicates the Narrative

Evelyn Douek, LawFare, 2018

“Members of the Myanmar military have systematically used Facebook as a tool in the government’s campaign of ethnic cleansing against Myanmar’s Rohingya Muslim minority, according to an incredible piece of reporting by the New York Times on Oct. 15. The Times writes that the military harnessed Facebook over a period of years to disseminate hate propaganda, false news and inflammatory posts. The story adds to the horrors known about the ongoing violence in Myanmar, but it also should complicate the ongoing debate about Facebook’s role and responsibility for spreading hate and exacerbating conflict in Myanmar and other developing countries…”

How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument

Gary King, Jennifer Pan & Margaret E. Roberts, American Political Science Review, 2017 (Study)

“The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.”

Environmental campaigning: Earth Hour

The World Wide Fund For Nature (WWF) launched Earth Hour in 2007. This social-media campaign calls for everyone – individuals and businesses alike – to switch off their lights for one hour. In 2017, the campaign’s 10th anniversary, millions of people and thousands of landmarks around the world turned their lights off for a single hour. The WWF uses the #EarthHour hashtag (amongst others) to galvanize its followers. Elements of this successful campaign include the limited time frame, which makes engagement actionable; the push to share individual actions on multiple platforms; the enticing language; and countdown timers.

The Cyber Harassment Helpline

The Cyber Harassment Helpline is an accessible, toll-free helpline for victims and survivors of online harassment and violence in Pakistan. Digital Rights Foundation founded the helpline in 2016 in response to increasing harassment of social-media users, in particular women. The helpline focuses especially on marginalized groups in Pakistan and prefers not to communicate via social-media platforms for reasons of privacy and confidentiality.



Additional Resources

  • BellingCat: An independent international collective of researchers, investigators and citizen journalists using open source and social media investigation.
  • Documentary “The Social Dilemma.” Preview available here.
  • Graphika: an investigative research company that leverages AI to study online communities, analyzing how online social networks form, evolve, and are manipulated.
  • Tufekci, Zeynep. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press. Read this book excerpt.

