5G Technology

What is 5G technology?

Digital inclusion project in the Peruvian Amazon. Rural areas with less existing infrastructure are likely to be left behind in 5G development. Photo credit: Jack Gordon for USAID / Digital Development Communications.

New generations of technology come along almost every 10 years. 5G, or the fifth generation of mobile technologies, is expected to be 100 times faster and have 1000 times more capacity than previous generations, facilitating fast and reliable connectivity, wider data flow, and machine-to-machine communications. 5G is not designed primarily to connect people, but rather to connect devices. 2G facilitated access to voice calls and texting, 3G drove video and social media services, and 4G realized digital streaming and data-heavy applications. 5G will support smart homes, 3D video, the cloud, remote medical services, virtual and augmented reality, and machine-to-machine communications for industry automation. However, even as the United States, Europe, and the Asia Pacific region transition from 4G to 5G, many other parts of the world still rely primarily on 2G and 3G networks, and further disparities exist between rural and urban connectivity. Watch this video for an introduction to 5G technology and both the excitement and caution surrounding it.

What do we mean by “G?”

“G” refers to generation and indicates a threshold for a significant shift in capability, architecture, and technology. These designations are made by the telecommunications industry through the standards-setting authority known as 3GPP. 3GPP creates new technical specifications approximately every 10 years, hence the use of the word “generation”. An alternate naming convention uses the acronym IMT (which stands for International Mobile Telecommunications), along with the year the standard became official. As an example, you may see 3G also referred to as IMT 2000.

1G: Allowed analogue phone calls; brought mobile devices (mobility)
2G: Allowed digital phone calls and messaging; enabled mass adoption and eventually mobile data (2.5G)
3G: Allowed phone calls, messaging, and internet access
3.5G: Allowed faster mobile internet
4G: Allowed faster internet and better video streaming
5G: "The Internet of Things"; will allow devices to connect to one another
6G: "The Internet of Senses"; little is yet known

This video provides a simplified overview of 1G-4G.

Cellphone shop in Tanzania. 5G technology requires access to 5G-compatible smartphones and devices. Photo credit: Riaz Jahanpour for USAID Tanzania / Digital Development Communications.

There is a gap in many developing countries between the cellular standard that users subscribe to and the standard they actually use: many subscribe to 4G, but, because it does not perform as advertised, may switch back to 3G. This switch or “fallback” is not always evident to the consumer, and it may be harder to notice with 5G compared to previous networks.

Even once 5G infrastructure is in place and users have access to it through capable devices, the technology is not necessarily guaranteed to work as promised: in fact, chances are it will not. 5G will still rely on 3G and 4G technologies, and carriers will still be operating their 3G and 4G networks in parallel.

How does 5G technology work?

There are several key performance indicators (KPIs) that 5G aims to achieve. In essence, 5G will strengthen cellular networks by using more radio frequencies along with new techniques that strengthen and multiply connection points. This means faster connections and lower latency: a shorter delay between a command on your device and the network's response. It will also allow many more devices to connect to one another through the Internet of Things.
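To make the headline speed difference concrete, here is a back-of-the-envelope sketch. The throughput figures are illustrative assumptions (a typical 4G connection of roughly 20 Mbit/s versus a 5G target rate of roughly 1 Gbit/s), not official 3GPP performance targets.

```python
# Rough download-time comparison using assumed, illustrative throughputs:
# ~20 Mbit/s for a typical 4G connection, ~1 Gbit/s for a 5G target rate.

def download_seconds(size_gigabytes: float, rate_mbps: float) -> float:
    """Seconds to transfer `size_gigabytes` at `rate_mbps` (megabits per second)."""
    size_megabits = size_gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return size_megabits / rate_mbps

movie_gb = 2.0  # an example large file, e.g. a feature-length video
print(f"4G (~20 Mbit/s): {download_seconds(movie_gb, 20):6.0f} s")
print(f"5G (~1 Gbit/s):  {download_seconds(movie_gb, 1000):6.0f} s")
```

At these assumed rates, a 2 GB download drops from roughly 13 minutes to about 16 seconds; actual speeds vary widely by network and location.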

Understanding Spectrum

To understand 5G, it is important to understand a bit about the electromagnetic radio spectrum. This video gives an overview of how cell phones use spectrum.

5G will bring faster speed and stronger services by using more spectrum. To establish a 5G network, it is necessary to secure spectrum for that purpose in advance. Governments and companies have to negotiate spectrum—usually by auctioning off “bands,” sometimes for huge sums. Spectrum allocation can be a very complicated and political process. Many experts fear that 5G, which requires lots of spectrum, threatens so-called “network diversity”—the idea that spectrum should be used for a variety of purposes across government, business, and society.

For more on spectrum allocation, see the Internet Society’s publication on Innovations in Spectrum Management (2019).

Millimeter Waves

5G hopes to tap into new, unused bands at the top of the radio spectrum, known as millimeter waves (mmWaves). These are much less crowded than the lower bands, allowing faster data transfers. But millimeter waves are tricky: their maximum range is approximately 1.6 km, and trees, walls, rain, and fog can limit the distance the signal travels to only 1 km. As a result, 5G will require far more cell sites than the relatively few large towers 4G depends on: towers every 100 meters outside, and every 50 meters inside, which is why 5G is best suited for dense urban centers (as discussed in more detail below). The theoretical potential of millimeter waves is exciting, but in reality, most 5G carriers are trying to deploy 5G in the lower parts of the spectrum.
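The physics behind these short ranges can be illustrated with the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c), with the constants folded into −147.55 dB for distance in meters and frequency in hertz. The band choices below are illustrative: a 1.8 GHz band typical of 4G versus a 28 GHz mmWave band.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Same 100 m link, compared at a typical 4G band and a mmWave band.
loss_4g = fspl_db(100, 1.8e9)   # 1.8 GHz
loss_mm = fspl_db(100, 28e9)    # 28 GHz
print(f"1.8 GHz: {loss_4g:.1f} dB   28 GHz: {loss_mm:.1f} dB "
      f"(difference: {loss_mm - loss_4g:.1f} dB)")
```

Over the same 100-meter link, the 28 GHz signal loses roughly 24 dB more than the 1.8 GHz signal (about 240 times more attenuation), before even accounting for blockage by walls, trees, rain, or fog.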

Don’t forget about fiber!

5G technology runs on fiber infrastructure. Fiber can be understood as the nervous system of a mobile network, connecting data centers to cell towers.

5G requires data centers, fiber, cell towers, and small cells

Mobile operators and international standards-setting bodies, including the International Telecommunication Union, believe fiber is the best connective material due to its long life, high capacity, high reliability, and ability to support very high traffic. But the initial investment is expensive (a 2017 Deloitte study estimated that 5G deployment in the United States would require at least $130 billion of investment in fiber) and often cost prohibitive for suppliers and operators, especially in developing countries and rural areas. 5G is sometimes advertised as a replacement for fiber; however, fiber and 5G are complementary technologies.

The chart below is often used to explain the primary features that make up 5G technology (enhanced capacity, low latency, and enhanced connectivity) and the potential applications of these features.

Features that make up 5G technology: enhanced capacity, low latency, and enhanced connectivity, and the potential applications of these features

Who supplies 5G technology?

The market of 5G providers is very concentrated, even more so than for previous generations. A handful of companies are capable of supplying telecommunications operators with the necessary technology. Huawei (China), Ericsson (Sweden), and Nokia (Finland) have led the charge to expand 5G and typically interface with local telecom companies, sometimes providing end-to-end equipment and maintenance services.

In 2019, the United States government passed a defense authorization spending act, NDAA Section 889, that essentially prohibits U.S. agencies from using telecommunications equipment made by Chinese suppliers (for example, Huawei and ZTE). The restriction was put in place over fears that the Chinese government may use its telecommunications infrastructure for espionage (see more in the Risks section). NDAA Section 889 could apply to any contracts made with the U.S. government, and so it is critical for organizations considering partnerships with Chinese suppliers to keep in mind the legal challenges of trying to engage with both the U.S. and Chinese governments in relation to 5G.

Of course, this means that the choice of 5G manufacturers becomes much more limited. Chinese companies have by far the largest market share of 5G technology. Huawei has the most patents filed and the strongest lobbying presence within the International Telecommunication Union.

The 5G playing field is fiercely political, with strong tensions between China and the United States. Because 5G technology is closely connected to chip manufacturing, it is important to keep an eye on "the chip wars". Suppliers reliant on American and Chinese companies are likely to get caught in the crossfire as the trade war between these countries worsens, because supply chains and the manufacture of equipment often depend on both countries. Peter Bloom, founder of Rhizomatica, points out that the 5G chip market is projected to grow to $22.41 billion by 2026. Bloom cautions: "The push towards 5G encompasses a plethora of interest groups, particularly governments, financing institutions, and telecommunications companies, that demands to be better analyzed in order to understand where things are moving, whose interests are being served, and the possible consequences of these changes."


How is 5G relevant in civic space and for democracy?

Mobile money agency in Ghana. Roughly 50% of the world's population is still not connected to the internet. Photo credit: John O'Bryan / USAID.

5G is the first generation that does not prioritize access and connectivity for humans. Instead, 5G provides a level of super-connectivity for luxury use cases and specific environments; for instance, enhanced virtual reality experiences and massively multiplayer video games. Many of the advertised use cases, like remote surgery, are theoretical or experimental and do not yet exist widely in society. Indeed, telesurgery is one of the most-often-cited examples of the benefits of 5G, but it remains a prototype technology. Implementing it at scale would require resolving many technical and legal issues, along with developing a global network.

Access to education, healthcare, and information are fundamental rights; but multiplayer video games, virtual reality, and autonomous vehicles—all of which would rely on 5G—are not. 5G is a distraction from the critical infrastructure needed to get people online to fully enjoy their fundamental rights and to allow for democratic functioning. The focus on 5G actually diverts attention away from immediate solutions to improving access and bridging the digital divide.

The percentage of the global population using the internet is on the rise, but a significant portion of the world is still not connected. 5G is not likely to address the divide in internet access between rural and urban populations, or between developed and developing economies. What is needed to improve internet access in industrially developing contexts is more fiber, more internet exchange points (IXPs), more cell towers, more internet routers, more wireless spectrum, and reliable electricity. In an industry white paper, only one out of 125 pages discusses a "scaled down" version of 5G that will address the needs of areas with extremely low average revenue per user (ARPU). These solutions include further limiting the geographic areas of service.

Digital trainers in Mugumu, Tanzania. 5G is not designed primarily to connect people, but rather to connect devices. Photo credit: Photo by Bobby Neptune for DAI.

This presentation by the American corporation Intel at an ITU regional forum in 2016 advertises the usual aspirations for 5G: autonomous vehicles (labeled as "smart transportation"), virtual reality (labeled as "e-learning"), remote surgery (labeled as "e-health"), and sensors to support water management and agriculture. Similar highly specific and theoretical future use cases—autonomous vehicles, industrial automation, smart homes, smart cities, smart logistics—were advertised during a 2020 webinar hosted by the Kenya ICT Action Network in partnership with Huawei.

In both presentations, the emphasis is on connecting objects, demonstrating how 5G is designed for big industries rather than for individuals. Even if 5G were accessible in remote rural areas, individuals would likely have to purchase the most expensive, unlimited data plans to access it, on top of acquiring 5G-compatible smartphones and devices. Telecommunications companies themselves estimate that only 3% of connections in sub-Saharan Africa will use 5G. It is estimated that by 2025, most people will still be using 3G (roughly 60%) and 4G (roughly 40%), a technology that by then will have existed for more than a decade.


5G Broadband / Fixed Wireless Access (FWA)

Because most people in industrially developing contexts connect to the internet via cell phone infrastructure and mobile broadband, the most useful form of 5G for them would be "5G broadband," also called 5G Fixed Wireless Access (FWA). FWA is designed to replace "last mile" infrastructure with a wireless 5G network. Indeed, that "last mile"—the final distance to the end user—is often the biggest barrier to internet access across the world. But because the vast majority of these 5G networks will rely on physical fiber connections, FWA without fiber will not be of the same quality. These FWA networks will also be more expensive for network operators to maintain than traditional infrastructure or "standard fixed broadband."

This article by one of the top 5G providers, Ericsson, asserts that FWA will be one of the main uses of 5G, but the article shows that operators will have wide latitude to adjust their rates, and it also admits that many markets will still be served with 3G and 4G.

5G will not replace other kinds of internet connectivity for citizens

While 5G requires enormous investment in physical infrastructure, new generations of Wi-Fi access are becoming more accessible and affordable. There is also an increasing variety of "community network" solutions, including Wi-Fi meshnets and sometimes even community-owned fiber. For further reading see: 5G and the Internet of EveryOne: Motivation, Enablers, and Research Agenda, IEEE (2018). These are important alternatives to 5G that should be considered in any context (developed and developing, urban and rural).

“If we are talking about thirst and lack of water, 5G is mainly a new type of drink cocktail, a new flavor to attract sophisticated consumers, as long as you live in profitable places for the service and you can pay for it. Renewal of communications equipment and devices is a business opportunity for manufacturers mainly, but not just the best ‘water’ to the unconnected, rural, … (non-premium clients), even a problem as investment from operators gets first pushed by the trend towards satisfying high paying urban customers and not to spread connectivity to low pay social/universal inclusion customers.” – IGF Dynamic Coalition on Community Networks, in communication with the author of this resource.

It is critical not to forget about previous-generation networks. 2G will continue to be important for providing broad coverage. 2G is already very present (around 95% in low- and middle-income countries), requires less data, and carries voice and SMS traffic well, which means it is a safe and reliable option for many situations. Also, upgrading existing 2G sites to 3G or 4G is less costly than building new sites.

5G and the private sector

The technology that 5G facilitates (the Internet of Things, smart cities, smart homes) will encourage the installation of chips and sensors in an increasing number of objects. The devices 5G proposes to connect are not primarily phones and computers, but sensors, vehicles, industrial equipment, implanted medical devices, drones, cameras, etc. Linking these devices raises a number of security and privacy concerns, as explored in the Risks section.

The actors that stand to benefit most from 5G are not citizens or democratic governments, but corporate actors. The business model powering 5G centers on industry access to connected devices: in manufacturing, in the auto industry, in transport and logistics, in power generation and efficiency monitoring, etc. 5G will boost the economic growth of those actors able to benefit from it, particularly those invested in automation, but it would be a leap to assume these benefits will be distributed across society.

The introduction of 5G will bring the private sector massively into public space through the network carriers, operators, and other third parties behind the many connected devices. This overtaking of public space by private actors (usually foreign private actors) should be carefully considered from the lens of democracy and fundamental rights. Though the private sector has already entered our public spaces (streets, parks, shopping malls) with previous cellular networks, 5G’s arrival, bringing with it more connected objects and more frequent cell towers, will increase this presence.

While 5G networks hold the promise of enhanced connectivity, there is growing concern about their misuse for anti-democratic practices. Governments in various regions have been observed using technology to obstruct transparency and suppress dissent, with instances of internet shutdowns during elections and surveillance of political opponents. From 2014 to 2016 for example, internet shutdowns were used in a third of the elections in sub-Saharan Africa.

These practices are often facilitated by collaborations with companies providing advanced surveillance tools, enabling the monitoring of journalists and activists without due process. The substantial increase in data transmission that 5G offers raises the stakes, potentially allowing for more pervasive surveillance and more significant threats to the privacy and rights of individuals, particularly those marginalized. Furthermore, as electoral systems become more technologically reliant, with initiatives to move voting online, the risk of cyberattacks exploiting 5G vulnerabilities could compromise the integrity of democratic elections, making the protection against such intrusions a critical priority.


Opportunities

The advertised benefits of 5G usually fall into three areas, as outlined below. A fourth area of benefits will also be explained—though less often cited in the literature, it would be the most directly beneficial for citizens. It should be noted that these benefits will not be available soon, and perhaps never available widely. Many of these will remain elite services, only available under precise conditions and for high cost. Others will require standardization, legal and regulatory infrastructure, and widespread adoption before they can become a social reality.

The chart below, taken from a GSMA report, shows the generally listed benefits of 5G. The benefits in the white section could be achieved on previous networks like 4G, and those in the purple section would require 5G. This further emphasizes the fact that many of the objectives of 5G are actually possible without it.

Benefits of 5G

Augmented Reality & Tactile Internet

5G has many potential uses in entertainment, especially in gaming. Low latency will allow massively multiplayer games, higher quality video conferencing, faster downloading of high-quality videos, etc. Augmented and virtual reality are advertised as ways to create immersive experiences in online learning. 5G's ability to connect devices will allow for wearable medical devices that can be controlled remotely (though not without cybersecurity risks). Probably the most exciting example of "tactile internet" is the possibility of remote surgery: an operation could be performed by a robot that is remotely controlled by a surgeon somewhere across the world. The systems necessary for this are very much in their infancy and will also depend on the development of other technology, as well as regulatory and legal standards and a viable business model.

Autonomous Vehicles

The major benefit of 5G will come in the automobile sector. It is hoped that the high speed of 5G will allow cars to coordinate safely with one another and with other infrastructure. For self-driving vehicles to be safe, they will need to be able to communicate with one another and with everything around them within milliseconds. The super speed of 5G is important for achieving this. (At the same time, 5G raises other security concerns for autonomous vehicles.)

Machine-to-machine connectivity (IoT/smart home/smart city)

Machine-to-machine connectivity, or M2M, already exists in many devices and services, but 5G would further facilitate it. This stands to benefit industrial players (manufacturers, logistics suppliers, etc.) most of all, but could arguably benefit individuals or cities that want to track their use of certain resources like energy or water. Installed sensors can collect data, which in turn can be analyzed for efficiency so the system can be optimized. Typical M2M applications in the smart home include thermostats and smoke detectors, consumer electronics, and healthcare monitoring. It should be noted that many such devices can operate on 4G, 3G, and even 2G networks.

5G-based Fixed-Wireless Access (FWA) Can Provide Gigabit Broadband to Homes

Probably the most relevant benefit of 5G to industrially developing contexts will be the potential of FWA. FWA is less often cited in the marketing literature because it does not deliver the full set of promised industrial benefits. Because it offers breadth of connectivity rather than revolutionary strength or intensity, it should be thought of as a different kind of "5G." (See the 5G Broadband / Fixed Wireless Access section.) As explained, FWA will still require infrastructure investments and will not necessarily be more affordable than broadband alternatives, given the latitude carriers have in setting rates.


Risks

The use of emerging technologies can also create risks in civil society programming. Read below to learn how to discern the possible dangers associated with 5G in DRG work, as well as how to mitigate unintended—and intended—consequences.

Personal Privacy

With 5G connecting more and more devices, the private sector will be moving further into public space through sensors, cameras, chips, etc. Many connected devices will be things we never expected to be connected to the internet before: washing machines, toilets, cribs, etc. Some will even be inside our bodies, like smart pacemakers. The placement of devices with chips into our homes and environments facilitates the collection of data about us, as well as other forms of surveillance.

A growing number of third-party actors have sophisticated methods for collecting and analyzing personal data. Some devices may only ultimately collect meta-data, but this can still seriously reduce privacy. Meta-data is information connected to our communications that does not include the content of those communications: for example, numbers called, websites visited, geographical location, or the time and date a call was made. The EU’s highest court has ruled that this kind of information can be considered just as sensitive as the actual contents of communications because of insights that the data can offer into our private lives. 5G will allow telecommunications operators and other actors access to meta-data that can be assembled for insights about us that reduce our privacy.

Lastly, 5G requires many small cell base stations, so these towers will be much closer to people's homes and workplaces, mounted on street lights, lamp posts, etc. This will make location tracking much more precise and location privacy nearly impossible.

Espionage

For most countries, 5G will be supplied by foreign companies. In the case of Huawei and ZTE, the government of the country in which these companies operate (the People's Republic of China) does not uphold human rights obligations or democratic values. For this reason, some governments are concerned about the potential abuse of data for foreign espionage. Several countries, including the United States, Australia, and the United Kingdom, have taken actions to limit the use of Chinese equipment in their 5G networks due to fears of potential spying. A 2019 report on the security risks of 5G by the European Commission and the European Union Agency for Cybersecurity (ENISA) warns against using a single supplier to provide 5G infrastructure because of espionage risks. The general argument against a single supplier (usually made against the Chinese supplier Huawei) is that if the supplier provides the core network infrastructure for 5G, the supplier's government (China) will gain immense surveillance capacity through meta-data or even through a "backdoor" vulnerability. Government spying through the private sector and telecom equipment is commonplace, and China is not the only culprit. But the massive network capacity of 5G and the many connected devices collecting personal information will enhance the information at stake and the risk.

Cybersecurity Risks

As a general rule, the more digitally connected we are, the more vulnerable we become to cyber threats. 5G aims to make us and our devices ultra-connected. If a self-driving car on a smart grid is hacked or breaks down, this could bring immediate physical danger, not just information leakages. 5G centralizes infrastructure around a core network, which makes it especially vulnerable. And because so many services would run over 5G, an internet shutdown or outage would endanger large parts of the network and the systems that depend on it.

5G infrastructure can simply have technical deficiencies. Because 5G technology is still in pilot phases, many of these deficiencies are not yet known. 5G advertises some enhanced security functions, but security holes remain because devices will still be connected to older networks.

Massive Investment Costs and Questionable Returns

As A4AI explains, “The rollout of 5G technology will demand significant investment in infrastructure, including in new towers capable of providing more capacity, and bigger data centres running on efficient energy.” These costs will likely be passed on to consumers, who will have to purchase compatible devices and sufficient data. 5G requires massive infrastructure investment—even in places with strong 4G infrastructure, existing fiber-optic cables, good last-mile connections, and reliable electricity. Estimates for the total cost of 5G deployment—including investment in technology and spectrum—are as high as $2.7 trillion USD. Due to the many security risks, regulatory uncertainties, and generally untested nature of the technology, 5G is not necessarily a safe investment even in wealthy urban centers. The high cost of introducing 5G will be an obstacle for expansion and prices are unlikely to fall enough to make 5G widely affordable.

Because this is such a complex new product, there is a risk of purchasing low-quality equipment. 5G is heavily reliant on software and services from third-party suppliers, which multiplies the chance of defects in parts of the equipment (poorly written code, poor engineering, etc.). The process of patching these flaws can be long, complicated, and costly. Some vulnerabilities may go unidentified for a long time but can suddenly cause severe security problems. Lack of compliance with industry or legal standards could cause similar problems. In some cases, new equipment may not be flawed or faulty, but it may simply be incompatible with existing equipment or with other purchases from other suppliers. Moreover, there will be large costs just to run the 5G network properly: securing it from cyberattacks, patching holes and addressing flaws, and keeping up the material infrastructure. Skilled and trusted human operators are needed for these tasks.

Foreign Dependency and Geopolitical Risks

Installing new infrastructure means dependency on private sector actors, usually from foreign countries. Over-reliance on foreign private actors raises multiple concerns, as mentioned, related to cybersecurity, privacy, espionage, excessive cost, compatibility, etc. Because there are only a handful of actors fully capable of supplying 5G, there is also the risk of becoming dependent on a foreign country. With current geopolitical tensions between the U.S. and China, countries trying to install 5G technology may get caught in the crossfire of a trade war. As Jan-Peter Kleinhans, a security and 5G expert at Stiftung Neue Verantwortung (SNV), explains, "The case of Huawei and 5G is part of a broader development in information and communications technology (ICT). We are moving away from a unipolar world with the U.S. as the technology leader, to a bipolar world in which China plays an increasingly dominant role in ICT development." The financial burdens of this bipolar world will be passed on to suppliers and customers.

Class/Wealth & Urban/Rural Divides

“Without a comprehensive plan for fiber infrastructure, 5G will not revolutionize Internet access or speeds for rural customers. So anytime the industry is asserting that 5G will revolutionize rural broadband access, they are more than just hyping it, they are just plainly misleading people.” — Ernesto Falcon, the Electronic Frontier Foundation.

5G is not a lucrative investment for carriers in more rural areas and developing contexts, where the density of potentially connected devices is lower. There is industry consensus, supported by the ITU itself, that the initial deployment of 5G will be in dense urban areas, particularly wealthy areas with industry presence. Rural and poorer areas with less existing infrastructure are likely to be left behind because they are not a good commercial investment for the private sector. For rural and even suburban areas, millimeter waves and cellular networks that require dense cell towers will likely not be a viable solution. As a result, 5G will not bridge the digital divide for lower-income and rural areas. It will reinforce it by giving super-connectivity to those who already have access and can afford even more expensive devices, while making the cost of connectivity high for others.

Energy Use and Environmental Impact

Huawei has shared that the typical 5G site has power requirements over 11.5 kilowatts, almost 70% more than sites deploying 2G, 3G, and 4G. Some estimate 5G technology will use two to three times more energy than previous mobile technologies. 5G will require more infrastructure, which means more power supply and more battery capacity, all of which will have environmental consequences. The most significant environmental issues associated with implementation will come from manufacturing the many component parts, along with the proliferation of new devices that will use the 5G network. 5G will encourage more demand and consumption of digital devices, and therefore the creation of more e-waste, which will also have serious environmental consequences. According to Peter Bloom, founder of Rhizomatica, most environmental damages from 5G will take place in the global south. This will include damage to the environment and to communities where the mining of materials and minerals takes place, as well as pollution from electronic waste. In the United States, the National Oceanic and Atmospheric Administration and NASA reported last year that the decision to open up high spectrum bands (24 gigahertz spectrum) would affect weather forecasting capabilities for decades.
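The power figures cited above can be turned into rough annual numbers. This sketch assumes continuous draw at the stated site power, and the electricity price is a placeholder assumption, not a sourced figure.

```python
# Rough annual-energy arithmetic using the figures cited above: a typical
# 5G site drawing ~11.5 kW, roughly 70% more than a legacy 2G/3G/4G site.
# The electricity price below is an assumed placeholder, not a sourced figure.

HOURS_PER_YEAR = 24 * 365

site_5g_kw = 11.5                 # per-site power cited by Huawei
site_legacy_kw = site_5g_kw / 1.7  # back out the "~70% more" claim
price_per_kwh = 0.12               # assumed USD/kWh, illustration only

for label, kw in [("legacy", site_legacy_kw), ("5G", site_5g_kw)]:
    kwh = kw * HOURS_PER_YEAR
    print(f"{label:>6}: {kwh:9,.0f} kWh/year  ~${kwh * price_per_kwh:,.0f}/year")
```

At these assumptions, a single 5G site consumes on the order of 100,000 kWh per year, roughly 40,000 kWh more than a legacy site, and that difference is multiplied across the much denser grid of sites 5G requires.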

Back to top

Questions

To understand the potential of 5G for your work environment or community, ask yourself these questions to assess whether 5G is the most appropriate, secure, cost-effective, and human-centric solution:

  1. Are people already able to connect to the internet sufficiently? Is the necessary infrastructure (fiber, internet access points, electricity) in place for people to connect to the internet through 3G or 4G, or through Wi-Fi?
  2. Are the conditions in place to effectively deploy 5G? That is, is there sufficient fiber backhaul and 4G infrastructure (recall that 5G is not yet a standalone technology)?
  3. What specific use case(s) do you have for 5G that would not be achievable using a previous generation network?
  4. What other plans are being made to address the digital divide through Wi-Fi deployment and mesh networks, digital literacy and digital training, etc.?
  5. Who stands to benefit from 5G deployment? Who will be able to access 5G? Do they have the appropriate devices and sufficient data? Will access be affordable?
  6. Who is supplying the infrastructure? How much can they be trusted regarding quality, pricing, security, data privacy, and potential espionage?
  7. Do the benefits of 5G outweigh the costs and risks (in relation to security, financial investment, and potential geopolitical consequences)?
  8. Are there sufficient skilled human resources to maintain the 5G infrastructure? How will failures and vulnerabilities be dealt with?

Back to top

Case Studies

Latin America and the Caribbean

5G: The Driver for the Next-Generation Digital Society in Latin America and the Caribbean

“Many countries around the world are in a hurry to adopt 5G to quickly secure the significant economic and social benefits that it brings. Given the enormous opportunities that 5G networks will create, Latin American and Caribbean (LAC) countries must actively adopt 5G. However, to successfully deploy 5G networks in the region, it is important to resolve the challenges that they will face, including high implementation costs, securing spectrum, the need to develop institutions, and issues around activation. For 5G networks to be successfully established and utilized, LAC governments must take a number of actions, including regulatory improvement, establishing institutions, and providing financial support related to investment in the 5G network.”

The United Kingdom

The United Kingdom was among the first markets to launch 5G globally in 2019. As UK operators have ramped up 5G investment, the market has been on par with other European countries in terms of performance, but still lags behind “5G pioneers” like South Korea and China. In 2020, the British government banned operators from using 5G equipment supplied by Chinese telecommunications company Huawei due to security concerns, setting a deadline of 2023 for the removal of Huawei’s equipment and services from core network functions and 2027 for complete removal. The Digital Connectivity Forum warned in 2022 that the UK was at risk of not fully tapping into the potential of 5G due to insufficient investment, which could hurt the development of new technology services like autonomous vehicles, automated logistics, and telemedicine.

The Gulf States

The Gulf states were among the first in the world to launch commercial 5G services and have invested heavily in 5G and advanced technologies. Local Arab service providers are partnering with ZTE and Nokia to expand their reach in Arab and Asian countries. In many Gulf countries, 5G and Internet service providers are predominantly government-owned, thus consolidating government influence over 5G-backed services or platforms. This could make government requests for data sharing, as well as Internet shutdowns, easier. Dubai is already deploying facial recognition technology developed by companies with ties to the CCP for its “Police Without Policemen” program. (Ahmed, R. et al., 13)

South Korea

South Korea established itself as an early market leader in 5G development, and its networks will be instrumental in the diffusion of 5G within Asia. South Korea’s Samsung is a major presence in the 5G device market and is under consideration as a replacement for Huawei in discussions by the “D10 Club,” a proposed telecoms supplier group put forward by the UK and consisting of the G7 members plus India, Australia, and South Korea. However, details of the D10 Club agenda have yet to be established. While South Korea and others attempt to expand their role in 5G, ICT decoupling from Huawei and security-trade tradeoffs are complicating the process. (Ahmed, R. et al., 14)

Africa

Which countries have rolled out 5G in Africa?

“Governments in Africa are optimistic that they will one day use 5G to do large-scale farming using drones, introduce autonomous cars into roads, plug into the metaverse, activate smart homes and improve cyber security. Some analysts predict that 5G will add an additional $2.2 trillion to Africa’s economy by 2034. But Africa’s 5G first movers are facing teething problems that stand to delay their 5G goals. The challenges have revolved around spectrum regulation clarity, commercial viability, deployment deadlines, and low citizen purchasing power of 5G enabled smartphones, and expensive internet.” As of mid-2022, Botswana, Egypt, Ethiopia, Gabon, Kenya, Lesotho, Madagascar, Mauritius, Nigeria, Senegal, Seychelles, South Africa, Uganda, and Zimbabwe were testing or had deployed 5G, though many of these countries faced delays in their rollout.

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Categories

Artificial Intelligence & Machine Learning

What is AI and ML?

Artificial intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can approximate human intelligence. There is no single, precise, universal definition of AI.

Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI that relies on algorithms trained to develop their own rules. This is an alternative to traditional computer programs, in which rules have to be hand-coded in. Machine learning extracts patterns from data and places that data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?

Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems, and evolutionary computation.

Diagram: the many different types of technology fields that comprise artificial intelligence.

The diagram above shows the many different types of technology fields that comprise AI. AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems. When referring to AI, one can be referring to any or several of these technologies or fields. Applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say to Siri, “Siri, show me a picture of a banana,” Siri utilizes natural language processing (question answering) to understand what you’re asking, and then uses vision (image recognition) to find a banana and show it to you.

As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI—from the fear that AI will take over the world by enslaving humans, to the hope that AI can one day be used to cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI.

Definitions

Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
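The second example above, finding the largest number in a set of randomly ordered numbers, can be written out as an unambiguous, step-by-step procedure in a few lines of Python (an illustrative sketch only):

```python
def largest(numbers):
    """Scan the list once, keeping track of the biggest value seen so far."""
    biggest = numbers[0]           # step 1: assume the first number is the largest
    for n in numbers[1:]:          # step 2: compare every remaining number
        if n > biggest:            # step 3: update when a bigger one is found
            biggest = n
    return biggest                 # step 4: report the result

print(largest([7, 42, 3, 19]))     # prints: 42
```

Each step is well-defined and finite, which is exactly what makes it an algorithm rather than a vague instruction.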

Algorithmic decision-making/Algorithmic decision system (ADS): Algorithmic decision systems use data and statistical analyses to make automated decisions, such as determining whether people are eligible for a benefit or a penalty. Examples of fully automated algorithmic decision systems include the electronic passport control check-point at airports or an automated decision by a bank to grant a customer an unsecured loan based on the person’s credit history and data profile with the bank. Driver-assistance features that control a vehicle’s brake, throttle, steering, speed, and direction are an example of a semi-automated ADS.

Big Data: There are many definitions of “big data,” but we can generally think of it as extremely large data sets that, when analyzed, may reveal patterns, trends, and associations, including those relating to human behavior. Big Data is characterized by the five V’s: the volume, velocity, variety, veracity, and value of the data in question. This video provides a short introduction to big data and the concept of the five V’s.

Class label: A class label is the category a machine learning system assigns to an input after classifying it; for example, “spam” or “not spam” for an email message.

Data mining: Data mining, also known as knowledge discovery in data, is the “process of analyzing dense volumes of data to find patterns, discover trends, and gain insight into how the data can be used.”

Generative AI[1]: Generative AI is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. See section on Generative AI for more details.

Label: A label is the thing a machine learning model is predicting, such as the future price of wheat, the kind of animal shown in a picture, or the meaning of an audio clip.

Large language model: A large language model (LLM) is “a type of artificial intelligence that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new content.” An LLM is a type of generative AI[2] that has been specifically architected to help generate text-based content.

Model: A model is the representation of what a machine learning system has learned from the training data.

Neural network: A biological neural network (BNN) is a system in the brain that makes it possible to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.

Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.

Robot: Robots are programmable, automated devices. Fully autonomous robots (e.g., self-driving vehicles) are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses and behaviors accordingly in order to perform complex tasks without human intervention.

Scoring: Scoring, also called prediction, is the process of a trained machine learning model generating values based on new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome. When used vis-a-vis people, scoring is a statistical prediction that determines whether an individual fits into a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.
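As an illustration of scoring, the sketch below applies a hypothetical trained model to new input data to produce a value between 0 and 1. The feature names and weights are invented for this example and are not drawn from any real credit-scoring system:

```python
import math

# Hypothetical parameters a trained model might have learned; illustrative only.
WEIGHTS = {"on_time_payments": 1.5, "debt_ratio": -2.0}
BIAS = 0.2

def score(applicant):
    """Scoring: apply the trained model's parameters to new input data."""
    z = BIAS + sum(WEIGHTS[feature] * value for feature, value in applicant.items())
    return 1 / (1 + math.exp(-z))  # squash into a probability-like score in (0, 1)

# A new applicant the model has never seen before:
print(round(score({"on_time_payments": 0.9, "debt_ratio": 0.3}), 2))
```

The output can be read as a predicted likelihood of fitting a category, which is how a statistical score comes to stand in for a decision about a person.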

Supervised learning: In supervised learning, ML systems are trained on well-labeled data. Using labeled inputs and outputs, the model can measure its accuracy and learn over time.

Unsupervised learning: Unsupervised learning uses machine learning algorithms to find patterns in unlabeled datasets without the need for human intervention.

Training: In machine learning, training is the process of determining the ideal parameters comprising a model.

 

How do artificial intelligence and machine learning work?

Artificial Intelligence

Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic, and economics to “understand, model, and replicate intelligence and cognitive processes.”

AI applications exist in every domain and industry, and across many aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:

  • Narrow AI or Artificial Narrow Intelligence (ANI) is an expert system in a specific task, like image recognition, playing Go, or asking Alexa or Siri to answer a question.
  • Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
  • Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications currently exist only in the “Narrow AI” field. Artificial general intelligence and artificial superintelligence have not yet been achieved and likely will not be for years or even decades.

Machine Learning

Machine learning is an application of artificial intelligence. Although we often find the two terms used interchangeably, machine learning is a process by which an AI application is developed. The machine learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses the pattern or correlation to make predictions. Most of the AI in use today is driven by machine learning.

Just as it is useful to break up AI into three categories, machine learning can also be thought of in terms of three different techniques: supervised learning, unsupervised learning, and deep learning.

Supervised Learning

Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a data set containing training examples with associated labels. Take the example of a spam-filtering system that is being trained using spam and non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email or IP address of the sender. Using this correlation, the system tries to predict the correct label (spam/not spam) to apply to all the future emails it processes.

“Spam” and “not spam” in this instance are called “class labels.” The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labeled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about—in this case, it is the “spaminess” of an email. The “correct answer,” so to speak, in the categorization of email is called the “desired outcome” or “outcome of interest.”
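A toy version of such a spam filter can be sketched in a few lines of Python. The training messages and the word-counting “model” below are deliberately simplistic stand-ins for a real system, but they show the supervised pattern: labeled inputs in, a predictive model out:

```python
from collections import Counter

# Toy training data: each message comes with a human-assigned class label.
training_data = [
    ("win a free prize now", "spam"),
    ("free money claim now", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch on friday?", "not spam"),
]

# "Training": count how often each word appears under each class label.
counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    counts[label].update(text.split())

def predict(text):
    """Pick the class label whose training-set words best match the new message."""
    scores = {label: sum(c[word] for word in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("claim your free prize"))   # prints: spam
```

Here the word counts play the role of the “model,” the labeled messages are the “training data,” and the predicted label is the “outcome of interest.” Real filters use far richer features and more robust statistics, but the workflow is the same.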

Unsupervised Learning

Unsupervised learning involves neural networks finding a relationship or pattern without access to previously labeled datasets of input-output pairs. The neural networks organize and group the data on their own, finding recurring patterns and detecting deviations from these patterns. These systems tend to be less predictable than those that use labeled datasets, and are most often deployed in environments that may change at some frequency and are unstructured or partially structured. Examples include:

  1. An optical character-recognition system that can “read” handwritten text, even if it has never encountered the handwriting before.
  2. The recommended products a user sees on retail websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference, and the prices of their previous purchases.
  3. The detection of fraudulent monetary transactions based on timing and location. For instance, if two consecutive transactions happened on the same credit card within a short span of time in two different cities.

A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small dataset with labels is available to train the neural network to act upon a larger, unlabeled dataset. An example of semi-supervised learning is software that creates deepfakes, or digitally altered audio, videos, or images.
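The fraud-detection example above, in which unusual transactions stand out from everyday ones without any labels being provided, can be sketched with a minimal clustering routine. This toy one-dimensional k-means with two groups is illustrative only; real systems consider far richer features than the transaction amount:

```python
def kmeans_1d(values, iters=10):
    """Minimal 1-D k-means (k=2): split values into two groups with no labels."""
    lo, hi = min(values), max(values)          # crude initial cluster centers
    for _ in range(iters):
        # Assign each value to whichever center it is closer to.
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        b = b or [hi]                          # guard against an empty group
        # Move each center to the mean of its assigned values.
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return a, b

# Hypothetical transaction amounts: everyday purchases plus two outliers.
amounts = [12, 9, 15, 11, 480, 510]
small, large = kmeans_1d(amounts)
print(sorted(large))                           # prints: [480, 510]
```

No one told the routine which transactions were suspicious; the grouping emerged from the structure of the data itself, which is the essence of unsupervised learning.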

Deep Learning

Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical-image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine, and sort huge amounts of data, which theoretically enables them to identify new solutions to existing problems.
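The “numerous layers of mathematical processes” can be sketched as stacked layers, each taking weighted sums of the previous layer’s outputs and applying a nonlinearity. The weights below are hand-picked for illustration; in a real deep neural network they are learned from data, and there are many more layers and units:

```python
import math

def layer(inputs, weights, biases):
    """One layer: weighted sums of the previous layer's outputs, then a nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical, hand-picked parameters (a real network learns these from data).
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
hidden_b = [0.1, -0.2]
output_w = [[1.2, -0.7]]
output_b = [0.05]

x = [0.6, 0.9]                      # one input example with two features
h = layer(x, hidden_w, hidden_b)    # the hidden layer extracts intermediate patterns
y = layer(h, output_w, output_b)    # the output layer combines them into a prediction
print(y)
```

Stacking many such layers is what puts the “deep” in deep learning: each layer builds increasingly abstract representations of the data beneath it.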

Generative AI

Generative AI[3] is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. The launch of OpenAI’s chatbot, ChatGPT, in late 2022 placed a spotlight on generative AI and created a race among companies to churn out alternate (and ideally superior) versions of this technology. Excitement over large language models and other forms of generative AI was also accompanied by concerns about accuracy, bias within these tools, data privacy, and how these tools can be used to spread disinformation more efficiently.

Although there are other types of machine learning, these three—supervised learning, unsupervised learning, and deep learning—represent the basic techniques used to create and train AI systems.

Bias in AI and ML

Artificial intelligence is built by humans, and trained on data generated by them. Inevitably, there is a risk that individual and societal human biases will be inherited by AI systems.

There are three common types of biases in computing systems:

  • Pre-existing bias has its roots in social institutions, practices, and attitudes.
  • Technical bias arises from technical constraints or considerations.
  • Emergent bias arises in a context of use.

Bias in artificial intelligence may affect, for example, the political advertisements one sees on the internet, the content pushed to the top of social media news feeds, the cost of an insurance premium, the results of a recruitment screening process, or the ability to pass through border-control checks in another country.

Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate can get compounded or magnified and greatly affect the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.

Bias in AI/ML systems can result in discriminatory practices, ultimately leading to the exacerbation of existing inequalities or the generation of new ones. For more information, see this explainer related to AI bias and the Risks section of this resource.

Back to top

How are AI and ML relevant in civic space and for democracy?

Elephant tusks pictured in Uganda. In wildlife conservation, AI/ML algorithms and past data can be used to predict poacher attacks. Photo credit: NRCN.

The widespread proliferation, rapid deployment, scale, complexity, and impact of AI on society is a topic of great interest and concern for governments, civil society, NGOs, human rights bodies, businesses, and the general public alike. AI systems may require varying degrees of human interaction or none at all. When applied in the design, operation, and delivery of services, AI/ML offers the potential to provide new services and improve the speed, targeting, precision, efficiency, consistency, quality, or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships, and patterns, and offering new solutions. By analyzing large amounts of data, ML systems save time, money, and effort. Some examples of the application of AI/ML in different domains include using AI/ML algorithms and past data in wildlife conservation to predict poacher attacks, and discovering new species of viruses.

Tuberculosis microscopy diagnosis in Uzbekistan. AI/ML systems aid healthcare professionals in medical diagnosis and the detection of diseases. Photo credit: USAID.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering, and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, and security, as well as in safety, crime prevention, policing, law enforcement, urban management, and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels, and waste in order to monitor and manage these issues and identify patterns in their generation, consumption, and handling.

Digital maps created in Mugumu, Tanzania. Artificial intelligence can support planning of infrastructure development and preparation for disaster. Photo credit: Bobby Neptune for DAI.

AI is also used in climate monitoring, weather forecasting, the prediction of disasters and hazards, and the planning of infrastructure development. In healthcare, AI systems aid professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, facial recognition systems, drones, and predictive policing for the safety and security of citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, privacy, security, mass surveillance, social inequality, and negative impacts on democracy (see the Risks section).

Fish caught off the coast of Kema, North Sulawesi, Indonesia. Facial recognition is used to identify species of fish to contribute to sustainable fishing practices. Photo credit: courtesy of USAID SNAPPER.

AI and ML have both positive and negative implications for public policy and elections, as well as democracy more broadly. While data may be used to maximize the effectiveness of a campaign through targeted messaging to help persuade prospective voters, it may also be used to deliver propaganda or misinformation to vulnerable audiences. During the 2016 U.S. presidential election, for example, Cambridge Analytica used big data and machine learning to tailor messages to voters based on predictions about their susceptibility to different arguments.

During elections in the United Kingdom and France in 2017, political bots were used to spread misinformation on social media and leak private campaign emails. These autonomous bots are “programmed to aggressively spread one-sided political messages to manufacture the illusion of public support” or even dissuade certain populations from voting. AI-enabled deepfakes (audio or video that has been fabricated or altered) also contribute to the spread of confusion and falsehoods about political candidates and other relevant actors. Though artificial intelligence can be used to exacerbate and amplify disinformation, it can also be applied in potential solutions to the challenge. See the Case Studies section of this resource for examples of how the fact-checking industry is leveraging artificial intelligence to more effectively identify and debunk false and misleading narratives.

Cyber attackers seeking to disrupt election processes use machine learning to effectively target victims and develop strategies for defeating cyber defenses. Although these tactics can be used to prevent cyber attacks, the level of investment in artificial intelligence technologies by malign actors in many cases exceeds that of legitimate governments or other official entities. Some of these actors also use AI-powered digital surveillance tools to track down and target opposition figures, human rights defenders, and other perceived critics.

As discussed elsewhere in this resource, “the potential of automated decision-making systems to reinforce bias and discrimination also impacts the right to equality and participation in public life.” Bias within AI systems can harm historically underrepresented communities and exacerbate existing gender divides and the online harms experienced by women candidates, politicians, activists, and journalists.

AI-driven solutions can help improve the transparency and legitimacy of campaign strategies, for example, by leveraging political bots for good to help identify articles that contain misinformation or by providing a tool for collecting and analyzing the concerns of voters. Artificial intelligence can also be used to make redistricting less partisan (though in some cases it also facilitates partisan gerrymandering) and prevent or detect fraud or significant administrative errors. Machine learning can inform advocacy by predicting which pieces of legislation will be approved based on algorithmic assessments of the text of the legislation, how many sponsors or supporters it has, and even the time of year it is introduced.

The full impact of the deployment of AI systems on the individual, society, and democracy is not known or knowable, which creates many legal, social, regulatory, technical, and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The European Union’s (EU) General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a whitepaper on AI in February 2020 as a prequel to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human rights impacts of algorithmic systems. Similarly, Germany, France, Japan, and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”

Back to top

Opportunities

Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights, and good governance. Read below to learn how to more effectively and safely think about artificial intelligence and machine learning in your work.

Detect and overcome bias

Although artificial intelligence can reproduce human biases, as discussed above, it can also be used to combat unconscious biases in contexts like job recruitment. Responsibly designed algorithms can bring hidden biases into view and, in some cases, nudge people into less-biased outcomes, for example, by masking candidates’ names, ages, and other bias-triggering features on a resume.

Improve security and safety

AI systems can be used to detect attacks on public infrastructure, such as a cyber attack or credit card fraud. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud quickly, or even prevent it before it occurs. Machine learning can help identify the agile, unusual patterns of fraud that evade traditional detection strategies.

Moderate harmful online content

Enormous quantities of content are uploaded to the internet and social media every second. There are simply too many videos, photos, and posts for humans to manually review. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen for content that violates their terms of service (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. More recently, the arrival of deepfakes and other computer-generated content has required similarly advanced identification tactics. Fact-checkers and other actors working to defuse the dangerous, misleading power of deepfakes are developing their own artificial intelligence to identify these media as false.

Web Search

Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the vast stretches of the internet. Search engines on the web (like Google and Bing) or within platforms and websites (like searches within Wikipedia or The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor higher-quality results that may be beneficial to society. For example, Google has an initiative to highlight original reporting, which prioritizes the first instance of a news story rather than sources that republish the information.

Translation

Machine learning has allowed for truly incredible advances in translation. For example, DeepL is a small machine-translation company that has surpassed even the translation abilities of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.

Back to top

Risks

The use of emerging technologies like AI can also create risks for democracy and in civil society programming. Read below to learn how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended—and intended—consequences.

Discrimination against marginalized groups

There are several ways in which AI may make decisions that can lead to discrimination, including how the “target variable” and the “class labels” are defined; during the process of labeling the training data; when collecting the training data; during feature selection; and when proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial recognition systems trained on racially biased data sets discriminate against people with dark skin, women, and gender-diverse people.

The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the size, the more accurate the system’s decisions are likely to be. However, women, Black people and people of color (PoC), disabled people, indigenous people, LGBTQ+ people, and other marginalized groups are less likely to be represented in a dataset because of structural discrimination, group size, or external attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to determine why AI makes certain decisions about some individuals or groups of people, or conclusively prove it has made a discriminatory decision. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status, or other protected characteristics. For instance, AI systems used in predictive policing, crime prevention, law enforcement, and the criminal justice system are, in a sense, tools for risk-assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized or minority groups.
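
To make the mechanics concrete, here is a deliberately simplified sketch of how bias in historical records propagates into a model’s “risk scores.” The district names, population figures, and offence counts are all invented for illustration; real predictive-policing systems are far more complex, but the underlying dynamic is the same:

```python
# Deliberately simplified sketch of bias propagation. District names,
# population figures, and offence counts are invented for illustration.

# Suppose the true underlying offence rate is identical in both districts.
TRUE_RATE = 0.05

# Historical records: district_B was patrolled twice as heavily, so twice
# as many offences were *recorded* there, even though the true rate is equal.
historical_records = {
    "district_A": {"population": 10_000, "recorded_offences": 500},
    "district_B": {"population": 10_000, "recorded_offences": 1_000},
}

def learned_risk_score(district: str) -> float:
    """A naive model 'learns' risk as recorded offences per capita."""
    d = historical_records[district]
    return d["recorded_offences"] / d["population"]

print(learned_risk_score("district_A"))  # 0.05 -- matches TRUE_RATE
print(learned_risk_score("district_B"))  # 0.10 -- double, reflecting over-policing, not crime
```

The model faithfully reproduces the recording bias rather than the true offence rate: the over-policed district receives double the risk score, which would then justify more patrols and more recorded offences there, closing the feedback loop described above.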

A study by the Royal Statistical Society notes that the “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation or software for credit-scoring, banking, insurance, healthcare, and the selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.

The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving vulnerable groups such as refugees, or about life or death circumstances, such as in medical care. A 2018 report by the University of Toronto’s Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.

Security vulnerabilities

Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems such as internet of things (IoT) devices and self-driving cars.

If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids, nuclear installations, healthcare facilities, and banking systems, among others, they “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”
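
A poisoning attack can be illustrated with a deliberately simplified, hypothetical example: a one-dimensional nearest-centroid classifier whose training set an attacker is able to contaminate. All the numbers below are invented for illustration:

```python
# Deliberately simplified, hypothetical sketch of a poisoning attack on a
# one-dimensional nearest-centroid classifier. All numbers are invented.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, benign, malicious):
    """Assign x to whichever class has the nearer training centroid."""
    if abs(x - centroid(benign)) <= abs(x - centroid(malicious)):
        return "benign"
    return "malicious"

benign_train = [1.0, 2.0, 3.0]      # centroid 2.0
malicious_train = [8.0, 9.0, 10.0]  # centroid 9.0

# Before the attack, a sample at 6.0 is (correctly) flagged as malicious:
print(classify(6.0, benign_train, malicious_train))  # malicious

# Poisoning: the attacker injects fake "benign" training points, dragging
# the benign centroid from 2.0 to 7.0 and flipping the decision.
poisoned_benign = benign_train + [12.0, 12.0, 12.0]
print(classify(6.0, poisoned_benign, malicious_train))  # benign
```

By corrupting only the training data, never the deployed model itself, the attacker causes a malicious sample to slip through as benign, which is what makes poisoning attacks hard to detect after the fact.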

Privacy and data protection

The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection. Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks. Criminals, illiberal governments, and people with malicious intent often target these data for economic or political gain. For instance, health data captured from smartphone applications and internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, etc. The issue is not only leaks, but also the data that people willingly give out without control over how it will be used down the road. This includes what we share with both companies and government agencies. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.

Chilling effect

AI systems used for surveillance, policing, criminal sentencing, legal purposes, etc. become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination, and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Many people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.

Opacity (Black box nature of AI systems)

Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, behind-the-scenes processing and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal or judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike decisions made by judges who are required to justify their legal order or judgment.

Technological unemployment

Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.

Loss of individual autonomy and personhood

Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect wellbeing, physical integrity, and quality of life. This affects what constitutes an individual’s consent (or lack thereof); the way consent is formed, communicated and understood; and the context in which it is valid. “[T]he dilution of the free basis of our individual consent—either through outright information distortion or even just the absence of transparency—imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation”. – Human Rights in the Era of Automation and Artificial Intelligence

Back to top

Questions

If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:

  1. Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
  2. Who is designing and overseeing the technology? Can they explain what is happening at different steps of the process?
  3. What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
  4. What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
  5. Are you confident the technology will work as intended when used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
  6. Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
  7. What measures do you have in place to identify and address potentially harmful biases in the technology?
  8. What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has been unfair to them or abused them in any way?
  9. Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
  10. Are you certain that the technology complies with relevant regulations and legal standards, including the GDPR?
  11. Is there a way that this technology may not discriminate against people by itself, but that it may lead to discrimination or other rights violations, for instance when it is deployed in different contexts or if it is shared with untrained actors? What can you do to prevent this?

Back to top

Case Studies

Leveraging artificial intelligence to promote information integrity

The United Nations Development Programme’s eMonitor+ is an AI-powered platform that helps “scan online media posts to identify electoral violations, misinformation, hate speech, political polarization and pluralism, and online violence against women.” Data analysis facilitated by eMonitor+ enables election commissions and media stakeholders to “observe the prevalence, nature, and impact of online violence.” The platform relies on machine learning to track and analyze content on digital media to generate graphical representations for data visualization. eMonitor+ has been used by Peru’s Asociación Civil Transparencia and Ama Llulla to map and analyze digital violence and hate speech in political dialogue, and by the Supervisory Election Commission during the 2022 Lebanese parliamentary election to monitor potential electoral violations, campaign spending, and misinformation. The High National Election Commission of Libya has also used eMonitor+ to monitor and identify online violence against women in elections.

How Nigeria’s fact-checkers are using AI to counter election misinformation

Ahead of Nigeria’s 2023 presidential election, the UK-based fact-checking organization Full Fact “offered its artificial intelligence suite—consisting of three tools that work in unison to automate lengthy fact-checking processes—to greatly expand fact-checking capacity in Nigeria.” According to Full Fact, these tools are not intended to replace human fact-checkers but rather assist with time-consuming, manual monitoring and review, leaving fact-checkers “more time to do the things they’re best at: understanding what’s important in public debate, interrogating claims, reviewing data, speaking with experts and sharing their findings.” The scalable tools, which include search, alerts, and live functions, allow fact-checkers to “monitor news websites, social media pages, and transcribe live TV or radio to find claims to fact check.”

Monitoring crop development: Agroscout

The growing impact of climate change could further cut crop yields, especially in the world’s most food-insecure regions, and our food systems are responsible for about 30% of greenhouse gas emissions. Israeli startup AgroScout envisions a world where food is grown in a more sustainable way. “Our platform uses AI to monitor crop development in real-time, to more accurately plan processing and manufacturing operations across regions, crops and growers,” said Simcha Shore, founder and CEO of AgroScout. “By utilizing AI technology, AgroScout detects pests and diseases early, allowing farmers to apply precise treatments that reduce agrochemical use by up to 85%. This innovation helps minimize the environmental damage caused by traditional agrochemicals, making a positive contribution towards sustainable agriculture practices.”

Machine Learning for Peace

The Machine Learning for Peace Project seeks to understand how civic space is changing in countries around the world using state-of-the-art machine learning techniques. By leveraging the latest innovations in natural language processing, the project classifies “an enormous corpus of digital news into 19 types of civic space ‘events’ and 22 types of Resurgent Authoritarian Influence (RAI) events which capture the efforts of authoritarian regimes to wield influence on developing countries.” Among the civic space “events” being tracked are activism, coups, election activities, legal changes, and protests. The civic space event data is combined with “high frequency economic data to identify key drivers of civic space and forecast shifts in the coming months.” Ultimately, the project hopes to serve as a “useful tool for researchers seeking rich, high-frequency data on political regimes and for policymakers and activists fighting to defend democracy around the world.”

Food security: Detecting diseases in crops using image analysis

“Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops.” As a first step toward supplementing existing solutions for disease diagnosis with a smartphone-assisted diagnosis system, researchers used a public dataset of 54,306 images of diseased and healthy plant leaves to train a “deep convolutional neural network” to automatically identify 14 different crop species and 26 unique diseases (or the absence of those diseases).

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Automation

What is automation?

A worker at the assembly line of a car-wiring factory in Bizerte, Tunisia. The automation of labor disproportionately affects women, the poor, and other vulnerable members of society. Photo credit: Alison Wright for USAID, Tunisia, Africa

Automation involves techniques and methods applied to enable machines, devices, and systems to function with minimal or no human involvement. Automation is used, for example, in applications for managing the operation of traffic lights in a city, navigating aircraft, running and configuring different elements of a telecommunications network, in robot-assisted surgeries, and even for automated storytelling (which uses artificial intelligence software to create verbal stories). Automation can improve efficiency and reduce error, but it also creates new opportunities for error and introduces new costs and challenges for government and society.

How does automation work?

Processes can be automated by programming certain procedures to be performed without human intervention (like a recurring payment for a credit card or phone app) or by linking electronic devices to communicate directly with one another (like self-driving vehicles communicating with other vehicles and with road infrastructure). Automation can involve the use of temperature sensors, light sensors, alarms, microcontrollers, robots, and more. Home automation, for example, may include home assistants such as Amazon Echo, Google Home, and OpenHAB. Some automation systems are virtual, for example, email filters that automatically sort incoming emails into different folders, and AI-enabled moderation systems for online content.
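
A minimal sketch of such rule-based automation, in the spirit of the email filters mentioned above, might look like the following; the rules and folder names are hypothetical:

```python
# Minimal sketch of rule-based automation in the spirit of the email filters
# mentioned above. The rules and folder names are hypothetical.

FILTER_RULES = [
    # (condition on the message, destination folder)
    (lambda msg: "invoice" in msg["subject"].lower(), "Finance"),
    (lambda msg: msg["sender"].endswith("@newsletter.example.com"), "Newsletters"),
]

def sort_message(msg: dict) -> str:
    """Apply each rule in order; messages matching no rule stay in the inbox."""
    for condition, folder in FILTER_RULES:
        if condition(msg):
            return folder
    return "Inbox"

print(sort_message({"subject": "Invoice #42", "sender": "billing@vendor.example.com"}))  # Finance
print(sort_message({"subject": "Hello", "sender": "friend@example.org"}))                # Inbox
```

Note that the rules here are fixed by the programmer and never change on their own, which is precisely what distinguishes plain automation from the learning systems discussed in the next paragraph.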

The exact architecture and functioning of automation systems depend on their purpose and application. However, automation should not be confused with artificial intelligence, in which an algorithm-led process ‘learns’ and changes over time: for instance, an algorithm that reviews thousands of job applications and studies and learns from patterns in the applications is using artificial intelligence, while a chatbot that replies to candidates’ questions is using automation.

For more information on the different components of automation systems, read also the resources about the Internet of Things and sensors, robots and drones, and biometrics.

Back to top

How is automation relevant in civic space and for democracy?

Automated processes can be built to increase transparency, accuracy, efficiency, and scale. They can help minimize effort (labor) and time; reduce errors and costs; improve the quality and/or precision in tasks/processes; carry out tasks that are too strenuous, hazardous, or beyond the physical capabilities of humans; and generally free humans of repetitive, monotonous tasks.

From a historical perspective, automation is not new: the first industrial revolution in the 1700s harnessed the power of steam and water; the technological revolution of the 1880s relied on railways and telegraphs; and the digital revolution in the 20th century saw the beginning of computing. Each of these transitions brought fundamental changes not only to industrial production and the economy, but to society, government, and international relations.

Now, the fourth industrial revolution, or the ‘automation revolution’ as it is sometimes called, promises to once again disrupt work as we know it as well as relationships between people, machines, and programmed processes.

When used by governments, automated processes promise to deliver government services with greater speed, efficiency, and coverage. These developments are often called e-government, e-governance, or digital government. E-government includes government communication and information sharing on the web (sometimes even the publishing of government budgets and agendas), facilitation of financial transactions online such as electronic filing of tax returns, digitization of health records, electronic voting, and digital IDs.

Additionally, automation can be used in elections to help count votes, register voters, and record voter turnout to increase trust in the integrity of the democratic process. Without automation, counting votes can take weeks or months and can lead to results being challenged by anti-democratic forces and to possible voter disenchantment with democratic systems. E-voting and automated vote counting have already become politicized in many countries like Kazakhstan and Pakistan, although many countries are increasingly adopting e-voting systems to help increase voter turnout and participation and hasten the election process.

A health worker receives information on a disease outbreak in Brewerville, Liberia. Automated processes promise to deliver government services with greater speed, efficiency, and coverage. Photo credit: Sarah Grile.

The benefits of automating government services are numerous, as the UK’s K4D helpdesk explains: lowering the cost of service delivery; improving quality and coverage (for example, through telemedicine or drones); strengthening communication, monitoring, and feedback; and, in some cases, encouraging citizen participation at the local level. In Indonesia, for example, the Civil Service Agency (BKN) introduced a computer-assisted testing system (CAT) to disrupt the previously long-standing manual testing system that created rampant opportunities for corruption in civil service recruitment by line ministry officials. With the new system, the database of questions is tightly controlled, and the results are posted in real time outside the testing center.

In India, an automated system relying on a specifically designed computer (an Advanced Virtual RISC) and the common telecommunications standard GSM (Global System for Mobile) is used to inform farmers about exact field conditions and to point to the necessary next steps with command functions such as irrigating, plowing, deploying seeds and carrying out other farming activities.

Drone used for irrigation scheduling in the southern part of Bangladesh. Automated systems have vast applications in agriculture. Photo credit: Alanuzzaman Kurishi.

As with previous industrial revolutions, automation changes the nature of work, and these changes could bring unemployment in certain sectors if not properly planned. The removal of humans from processes also brings new opportunities for error (such as ‘automation bias’) and raises new legal and ethical questions. See the Risks section below.

Back to top

Opportunities

Islamabad Electric Supply Company’s (IESCO) Power Distribution Control Center (PDC), Pakistan. Smart meters enable monitoring of power demand, supply, and load shedding in real-time. Photo credit: USAID.

Automation can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to more effectively and safely think about automation in your work.

Increase in productivity

Automation may improve output while reducing the time and labor required, thus increasing the productivity of workers and the demand for other kinds of work. For example, automation can streamline document review, cutting down on the time that lawyers need to search through documents or academics through sources, etc. In Azerbaijan, the government partnered with the private sector in the use of an automated system to reduce the backlog of relatively simple court cases, such as claims for unpaid bills. In instances where automation increases the quality of services or goods and/or brings down their cost, a greater demand for the goods or services can be met.

Improvements in processes and outputs

Automation can improve the speed, efficiency, quality, consistency, and coverage of service delivery and reduce human error, time spent, and costs. It can therefore allow activities to scale up. For example, the UNDP and the government of the Maldives used automation to create 3-D maps of the islands and chart their topography. Having this information on record would speed up further disaster relief and rescue efforts. The use of drones also reduced the time and money required to conduct this exercise: while mapping 11 islands would normally take almost a year, using a drone reduced the time to one day. See the Robots and Drones resource for additional examples.

Optimizing an automated task generally requires trade-offs among cost, precision, the permissible margin of error, and scale. Automation may sometimes require tolerating more errors in order to reduce costs or achieve greater scale. For more, see the section “Knowing when automation offers a suitable solution to the challenge at hand” in Automation of government processes.

For democratic processes, automation can help facilitate access for voters who cannot travel to polling stations via remote e-voting or using accessible systems at polling stations. Moreover, using automation for counting votes can help decrease user error in some cases and increase trust in the democratic process.

Increase transparency

Automation may increase transparency by making data and information easily available to the public, thus building public trust and aiding accountability. In India, the State Transport Department of Karnataka has automated driving test centers in the hope of eliminating bribery in the issuing of driver’s licenses. A host of high-definition cameras and sensors placed along the test track capture the movement of the vehicle, while a computerized system decides whether the driver has passed or failed the test. See also “Are emerging technologies helping win the fight against corruption in developing countries?”

Back to top

Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with automation in DRG work, as well as how to mitigate unintended – and intended – consequences.

Labor issues

When automation is used to replace human labor, the resulting loss of jobs causes structural unemployment known as “technological unemployment.” Structural unemployment disproportionately affects women, the poor, and other vulnerable members of society, unless they are re-skilled and provided with adequate protections. Automation also requires skilled labor that can operate, oversee or maintain automated systems, eventually creating jobs for a smaller section of the population. But the immediate impact of this transformation of work can be harmful to people and communities without social safety nets or opportunities for finding other work.

Additionally, links have been drawn between increased automation and a rise in support for populist politicians, as job losses particularly affect low-wage workers. A study published in the Proceedings of the National Academy of Sciences (PNAS) found a correlation between the impact of globalization and automation and increased vote shares for right-wing populist parties in several European countries. Although automation can have a positive impact on overall profits, low-wage, less-educated workers may feel particularly impacted as wages remain low and their tasks are replaced by automated systems.

Discrimination towards marginalized groups and minorities and increasing social inequality

Automation systems equipped with artificial intelligence (AI) may produce results that are discriminatory towards some marginalized and minority groups when the system has learned from biased learning patterns, from biased datasets, or from biased human decision-making. The outputs of AI-equipped automated systems may reflect real-life societal biases, prejudices, and discriminatory treatment towards some demographics. Biases can also occur from the human implementation of automated systems, for instance, when the systems do not function in the real world as they were able to function in a lab or theoretical setting, or when the humans working with the machines misinterpret or misuse the automated technology.

There are numerous examples of racial and other types of discrimination being either replicated or magnified by automation. To take an example from the field of predictive policing, ProPublica reported after conducting an investigation in 2016 that COMPAS, a data-driven AI tool meant to assist judges in the United States, was biased against Black people while determining if a convicted offender would commit more crimes in the future. For more on predictive policing see “How to Fight Bias with Predictive Policing” and “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.”

These risks exist in other domains as well. The University of Toronto and Citizen Lab report titled “Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system” notes that “[m]any [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims is lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Insufficient Legal Protections

Existing laws and regulations may not be applicable to automation systems and, in cases where they are, the application may not be well-defined. Not all countries have laws that protect individuals against these dangers. Under the GDPR (the European General Data Protection Regulation), individuals have the right not to be subject to a decision based only on automated processing, including profiling. In other words, humans must oversee important decisions that affect individuals. But not all countries have or respect such regulations, and even the GDPR is not upheld in all situations. Meanwhile, individuals would have to actively claim their rights and contest these decisions, usually by seeking legal assistance, which is beyond the means of many. Groups at the receiving end of such discrimination tend to have fewer resources and limited access to human rights protections to contest such decisions.

Automation Bias

People tend to have faith in automation and tend to believe that technology is accurate, neutral, and non-discriminating. This can be described as “automation bias”: when humans working with or overseeing automated systems tend to give up responsibility to the machine and trust the machine’s decision-making uncritically. Automation bias has been shown to have harmful impacts across automated sectors, including leading to errors in healthcare. Automation bias also plays a role in the discrimination described above.

Uncharted ethical concerns

The ever-increasing use of automation brings ethical questions and concerns that may not have been considered before the arrival of the technology itself. For example, who is responsible if a self-driving car gets into an accident? How much personal information should be given to health-service providers to facilitate automated health monitoring? In many cases, further research is needed to even begin to address these dilemmas.

Issues related to individual consent

When automated systems make decisions that affect people’s lives, they blur the formation, context, and expression of an individual’s consent (or lack thereof) as described in this quote: “…[T]he dilution of the free basis of our individual consent – either through outright information distortion or even just the absence of transparency – imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation.” See additional information about informed consent in the Data Protection resource.

High capital costs

Large-scale automation technologies require very high capital costs, which poses a risk if the technology becomes unviable in the long term or does not otherwise guarantee commensurate returns or recovery of costs. Hence, automation projects funded with public money (for example, some “smart city” infrastructure) require thorough feasibility studies for assessing needs and ensuring long-term viability. Initial costs may also be very high for individuals and communities. An automated solar-power installation or a rainwater-harvesting system is a large investment for a community. However, depending on the tariffs for grid power or water, the expenditure may be recovered in the long run.
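
A rough payback calculation illustrates how such a feasibility assessment might reason about cost recovery; every figure below is an invented assumption, not sourced data:

```python
# Hypothetical payback calculation for a community solar installation.
# Every figure below is an invented assumption, not sourced data.

installation_cost = 50_000.0    # one-time capital cost
grid_tariff_per_year = 8_000.0  # what the community would otherwise pay the grid
maintenance_per_year = 1_500.0  # running cost of the installation

annual_savings = grid_tariff_per_year - maintenance_per_year  # 6_500.0
payback_years = installation_cost / annual_savings

print(round(payback_years, 1))  # ~7.7 years under these assumptions
```

The same arithmetic cuts both ways: if tariffs fall or maintenance costs rise, the payback period stretches, which is exactly the long-term viability risk a feasibility study needs to quantify.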

Back to top

Questions

If you are trying to understand the implications of automation in your work environment, or are considering using aspects of automation as part of your DRG programming, ask yourself these questions:

  1. Is automation a suitable method for the problem you are trying to solve?
  2. What are the indicators or guiding factors that determine if automation is a suitable and required solution to a particular problem or challenge?
  3. What risks are involved regarding security, the potential for discrimination, etc.? How will you minimize these risks? Do the benefits of using automation or automated technology outweigh these risks?
  4. Who will work with and oversee these technologies? What is their training and what are their responsibilities? Who is liable legally in case of an accident?
  5. What are the long-term effects of using these technologies in the surrounding environment or community? What are the effects on individuals, jobs, salaries, social welfare, etc.? What measures are necessary to ensure that the use of these technologies does not aggravate or reinforce inequality through automation bias or otherwise?
  6. How will you ensure that humans are overseeing any important decisions made about individuals using automated processes? (How will you abide by the GDPR or other applicable regulations?)
  7. What privacy and security safeguards are necessary for applying these technologies in a given context regarding, for example, cybersecurity, protection of personal privacy, protecting operators from accidents, etc.? How will you build in these safeguards?


Case studies

Automated Farming Vehicles

“Forecasts of world population increases in the coming decades demand new production processes that are more efficient, safer, and less destructive to the environment. Industries are working to fulfill this mission by developing the smart factory concept. The agriculture world should follow industry leadership and develop approaches to implement the smart farm concept. One of the most vital elements that must be configured to meet the requirements of the new smart farms is the unmanned ground vehicles (UGV).”

Automated Voting Systems in Estonia

Since 2005, Estonia has allowed e-voting, wherein citizens are able to cast their ballots online. In each succeeding election, voters have increasingly chosen to cast online ballots to save time and participate in local and national elections with ease. Voters use digital IDs to verify their identity and prevent fraud, and ballots cast online are automatically cross-referenced with voter lists to ensure there is no duplication or voter fraud.

Automated Mining in South Africa

“Spiraling labour and energy costs are putting pressure on the financial performance of gold mines in South Africa, but the solution could be found in adopting digital technologies. By implementing automation operators can remove underground workers from harm’s way, and that is going to become an ever-bigger imperative if gold miners are to remain investable by international capital. This increased emphasis for the safety of the workforce and mines is motivating the development of the mining automation market. Earlier, old-style techniques of exploration and drilling compromised the security of mine labour force. Such examples have forced operators to develop smart resolutions and tools to confirm security of workers.”

Automating Processing of Uncontested Civil Cases to Reduce Court Backlogs in Azerbaijan, Case Study 14

“In Azerbaijan, the government developed a new approach to dealing with their own backlog of cases, one which addressed both supply side and demand side elements. Recognizing that much of the backlog stemmed from relatively simple civil cases, such as claims for unpaid bills, the government partnered with the private sector in the use of an automated system to streamline the handling of uncontested cases, thus freeing up judges’ time for more important cases.”

Reforming Civil Service Recruitment through Computerized Examinations in Indonesia, Case Study 6

“In Indonesia, the Civil Service Agency (BKN) succeeded in introducing a computer-assisted testing system (CAT) to disrupt the previously long-standing manual testing system that created rampant opportunities for corruption in civil service recruitment by line ministry officials. Now the database of questions is tightly controlled, and the results are posted in real time outside the testing center. Since its launch in 2013, CAT has become the de facto standard for more than 62 ministries and agencies.”

Real Time Automation of Indian Agriculture

“Real time automation of Indian agricultural system” using AVR (Advanced Virtual RISC) microcontroller and GSM (Global System for Mobile) is focused on making the agriculture process easier with the help of automation. The set up consists of processor which is an 8-bit microcontroller. GSM plays an important part by controlling the irrigation on field. GSM is used to send and receive the data collected by the sensors to the farmer. GSM acts as a connecting bridge between AVR microcontroller and farmer. Our study aims to implement the basic application of automation of the irrigation field by programming the components and building the necessary hardware. In our study different type of sensors like LM35, humidity sensor, soil moisture sensor, IR sensor used to find the exact field condition. GSM is used to inform the farmer about the exact field condition so that [they] can carry necessary steps. AT(Attention) commands are used to control the functions like irrigation, ploughing, deploying seeds and carrying out other farming activities.”

E-voting terminated in Kazakhstan

A study published in May 2020 on the discontinuation of e-voting in Kazakhstan highlights some of the political challenges around e-voting. Kazakhstan used e-voting between 2004 and 2011 and was considered a leading example; see “Kazakhstan: Voter registration Case Study (2006),” produced by the ACE Electoral Knowledge Network. However, the country returned to traditional paper ballots due to a lack of confidence from citizens and civil society in the government’s ability to ensure the integrity of e-voting procedures; see “Politicization of e-voting rejection: reflections from Kazakhstan,” by Maxat Kassen. It is important to note that Kazakhstan did not employ biometric voting, but rather electronic voting machines that operated via touch screens.

Big Data

What are big data?

“Big data” are also data, but involve far larger amounts than can usually be handled on a desktop computer or in a traditional database. Big data are not only huge in volume; they grow exponentially with time. Big data are so large and complex that none of the traditional data-management tools are able to store or process them efficiently. If you can process your data on your own computer, or in the database on your usual server without it crashing, you are likely not working with “big data.”

How does big data work?

The field of big data has evolved as technology’s ability to constantly capture information has skyrocketed. Big data are usually captured without being entered into a database by a human being, in real time: in other words, big data are “passively” captured by digital devices.

The internet provides countless opportunities to gather information, ranging from so-called meta-information or metadata (geographic location, IP address, time, etc.) to more detailed information about users’ behaviors, often drawn from social media activity or credit-card purchases. Cookies are one of the principal ways that web browsers gather information about users: they are essentially tiny pieces of data stored by a web browser, little bits of memory about something you did on a website. (For more on cookies, visit this resource.)
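As a minimal sketch of the mechanics, Python’s standard-library `http.cookies` module can parse the kind of small name-value pairs a site asks a browser to store (the header string and names here are hypothetical):

```python
from http.cookies import SimpleCookie

# A cookie is just a small named value a site asks the browser to keep.
# Parse a hypothetical cookie string a browser might send back to a site.
header = 'session_id=abc123; last_viewed=sku-42'
cookie = SimpleCookie()
cookie.load(header)

for name, morsel in cookie.items():
    print(f"{name} = {morsel.value}")
```

Each such value is tiny on its own; it is the aggregation of these traces across millions of users and sites that produces big data.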

Data sets can also be assembled from the Internet of Things, which involves sensors tied to other devices and networks. For example, sensor-equipped streetlights might collect traffic information that can then be analyzed to optimize traffic flow. The collection of data through sensors is a common element of smart city infrastructure.
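A toy simulation (the sensor names and counts are fabricated) illustrates the pattern: many small, passively captured readings, aggregated into a dataset a city could analyze:

```python
import random
from collections import defaultdict

random.seed(0)  # make the simulated readings reproducible

# Hypothetical streetlight sensors reporting vehicle counts each minute.
readings = [
    {"sensor": f"intersection-{i % 3}", "vehicles": random.randint(0, 20)}
    for i in range(90)  # 90 simulated one-minute readings
]

# Aggregate per intersection, as a traffic-flow analysis might.
totals = defaultdict(int)
for r in readings:
    totals[r["sensor"]] += r["vehicles"]

busiest = max(totals, key=totals.get)
```

No person entered any of these records; the dataset exists only because the devices were already in place and reporting.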

Healthcare workers in Indonesia. The use of big data can improve health systems and inform public health policies. Photo credit: courtesy of USAID EMAS.

Big data can also be medical or scientific data, such as DNA information or data related to disease outbreaks. This can be useful to humanitarian and development organizations. For example, during the Ebola outbreak in West Africa between 2014 and 2016, UNICEF combined data from a number of sources, including population estimates, information on air travel, estimates of regional mobility from mobile phone records and tagged social media locations, temperature data, and case data from WHO reports to better understand the disease and predict future outbreaks.

Big data are created and used by a variety of actors. In data-driven societies, most actors (private sector, governments, and other organizations) are encouraged to collect and analyze data to notice patterns and trends, measure success or failure, optimize their processes for efficiency, etc. Not all actors will create datasets themselves; often they will collect publicly available data or even purchase data from specialized companies. For instance, in the advertising industry, data brokers specialize in collecting and processing information about internet users, which they then sell to advertisers. Other actors will create their own datasets, like energy providers, railway companies, ride-sharing companies, and governments. Data are everywhere, and the actors capable of collecting and analyzing them intelligently are numerous.


How is big data relevant in civic space and for democracy?

In Tanzania, an open-source platform allows government and financial institutions to record all land transactions to create a comprehensive dataset. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.

From forecasting presidential elections to helping small-scale farmers deal with changing climate to predicting disease outbreaks, analysts are finding ways to turn big data into an invaluable resource for planning and decision-making. Big data are capable of providing civil society with powerful insights and the ability to share vital information. Big data tools have been deployed recently in civic space in a number of interesting ways, for example, to:

  • monitor elections and support open government (starting in Kenya with Ushahidi in 2008)
  • track epidemics like Ebola in Sierra Leone and other West African nations
  • track conflict-related deaths worldwide
  • understand the impact of ID systems on refugees in Italy
  • measure and predict agricultural success and distribution in Latin America
  • press forward with new discoveries in genetics and cancer treatment
  • make use of geographic information systems (GIS mapping applications) in a range of contexts, including planning urban growth and traffic flow sustainably, as has been done by the World Bank in various countries in South Asia, East Asia, Africa, and the Caribbean

The use of big data that are collected, processed, and analyzed to improve health systems or environmental sustainability, for example, can ultimately greatly benefit individuals and society. However, a number of concerns and cautions have been raised about the use of big datasets. Privacy and security concerns are foremost: big data are often captured without our awareness and used in ways to which we may not have consented, and sometimes sold many times through a chain of different companies we never interacted with, exposing the data to security risks such as breaches. It is also crucial to consider that supposedly anonymous data can still be used to “re-identify” people represented in the dataset – with 87% accuracy using as little as postal code, gender, and date of birth – conceivably putting them at risk (see discussion of “re-identification” below).

There are also power imbalances (divides) in who is represented in the data as opposed to who has the power to use them. Those who are able to extract value from big data are often large companies or other actors with the financial means and capacity to collect (sometimes purchase), analyze, and understand the data.

This means the individuals and groups whose information is put into datasets (shoppers whose credit card data are processed, internet users whose clicks are registered on a website) do not generally benefit from the data they have given. For example, data about what items shoppers buy in a store are more likely to be used to maximize profits than to help customers with their buying decisions. The extractive way that data are taken from individuals’ behaviors and used for profit has been called “surveillance capitalism,” which some believe is undermining personal autonomy and eroding democracy.

The quality of datasets must also be taken into consideration, as those using the data may not know how or where they were gathered, processed, or integrated with other data. And when storing and transmitting big data, security concerns are multiplied by the increased numbers of machines, services, and partners involved. It is also important to keep in mind that big datasets themselves are not inherently useful, but they become useful along with the ability to analyze them and draw insights from them, using advanced algorithms, statistical models, etc.

Last but not least, there are important considerations related to protecting the fundamental rights of those whose information appears in datasets. Sensitive, personally identifiable, or potentially personally identifiable information can be used by other parties or for other purposes than those intended, to the detriment of the individuals involved. This is explored below and in the Risks section, as well as in other primers.

Protecting anonymity of those in the dataset

Anyone who has done research in the social or medical sciences should be familiar with the idea that when collecting data on human subjects, it is important to protect their identities so that they do not face negative consequences from being involved in research, such as being known to have a particular disease, to have voted in a particular way, or to have engaged in stigmatized behavior (see the Data Protection resource). The traditional ways of protecting identities – removing certain identifying information, or only reporting statistics in aggregate – can and should also be used when handling big datasets to help protect those in the dataset. Data can also be hidden in multiple ways to protect privacy: methods include encryption (encoding), tokenization, and data masking. Talend identifies the strengths and weaknesses of the primary strategies for hiding data using these methods.
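As an illustrative sketch (the record and field names are hypothetical), masking and tokenization differ in whether and how the original value can be recovered:

```python
import secrets

# Hypothetical customer record with directly identifying fields.
record = {"name": "Amina Diallo", "phone": "+221771234567", "purchase": "solar lamp"}

def mask(value: str, keep: int = 3) -> str:
    """Data masking: hide all but the last few characters."""
    return "*" * (len(value) - keep) + value[-keep:]

# Tokenization: replace the real value with a random token; the mapping
# lives in a separately secured store, so the dataset alone reveals nothing.
vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)
    vault[token] = value
    return token

safe_record = {
    "name": tokenize(record["name"]),
    "phone": mask(record["phone"]),
    "purchase": record["purchase"],  # non-identifying field kept as-is
}
# Encryption (encoding values under a secret key) would use a real
# cryptography library with proper key management, omitted here.
```

Masking is irreversible by design; tokenization is reversible only by whoever controls the vault, which is why the vault must be secured and governed separately from the dataset itself.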

One of the biggest dangers involved in using big datasets is the possibility of re-identification: figuring out the real identities of individuals in the dataset, even if their personal information has been hidden or removed. To give a sense of how easy this can be, one study found that using only three fields of information – postal code, gender, and date of birth – it was possible to identify 87% of Americans individually, and then connect their identities to publicly available databases containing hospital records. With more data points, researchers have demonstrated a near-perfect ability to identify individuals in a dataset: four random pieces of credit-card transaction data were enough to re-identify 90% of individuals, and researchers were able to re-identify individuals with 99.98% accuracy using 15 data points.
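The mechanics of such an attack are simple: join the “anonymized” records to any public dataset that shares the same quasi-identifiers. A toy sketch with fabricated records:

```python
# "Anonymized" health records: names removed, but quasi-identifiers kept.
anonymized = [
    {"zip": "02139", "gender": "F", "dob": "1961-07-28", "diagnosis": "asthma"},
    {"zip": "02139", "gender": "M", "dob": "1945-02-11", "diagnosis": "diabetes"},
]

# A separate public dataset (e.g., a voter roll) with names attached.
public = [
    {"name": "J. Smith", "zip": "02139", "gender": "F", "dob": "1961-07-28"},
    {"name": "R. Jones", "zip": "02139", "gender": "M", "dob": "1945-02-11"},
]

QUASI = ("zip", "gender", "dob")

def reidentify(anon_rows, public_rows):
    # Index the public data by its quasi-identifiers, then look up each
    # "anonymous" row: whenever the combination is unique, the name comes back.
    index = {tuple(p[k] for k in QUASI): p["name"] for p in public_rows}
    return [
        {**row, "name": index.get(tuple(row[k] for k in QUASI))}
        for row in anon_rows
    ]

matched = reidentify(anonymized, public)
```

Nothing in the “anonymized” dataset was a name, yet every record is now linked to one, which is why removing direct identifiers alone is not sufficient protection.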

Ten simple rules for responsible big data research, quoted from a paper of the same name by Zook, Barocas, Boyd, Crawford, Keller, Gangadharan, et al., 2017

  1. Acknowledge that data are people and that data can do harm. Most data represent or affect people. Simply starting with the assumption that all data are people until proven otherwise places the difficulty of disassociating data from specific individuals front and center.
  2. Recognize that privacy is more than a binary value. Privacy may be more or less important to individuals as they move through different contexts and situations. Looking at someone’s data in bulk may have different implications for their privacy than looking at one record. Privacy may be important to groups of people (say, by demographic) as well as to individuals.
  3. Guard against the reidentification of your data. Be aware that apparently harmless, unexpected data, like phone battery usage, could be used to re-identify data. Plan to ensure your data sharing and reporting lowers the risk that individuals could be identified.
  4. Practice ethical data sharing. There may be times when participants in your dataset expect you to share (such as with other medical researchers working on a cure), and others where they trust you not to share their data. Be aware that other identifying data about your participants may be gathered, sold, or shared about them elsewhere, and that combining that data with yours could identify participants individually. Be clear about how and when you will share data and stay responsible for protecting the privacy of the people whose data you collect.
  5. Consider the strengths and limitations of your data; big does not automatically mean better. Understand where your large dataset comes from, and how that may evolve over time. Don’t overstate your findings and acknowledge when they may be messy or have multiple meanings.
  6. Debate the tough, ethical choices. Talk with your colleagues about these ethical concerns. Follow the work of professional organizations to stay current with concerns.
  7. Develop a code of conduct for your organization, research community, or industry and engage your peers in creating it to ensure unexpected or under-represented perspectives are included.
  8. Design your data and systems for auditability. This both strengthens the quality of your research and services and can give early warnings about problematic uses of the data.
  9. Engage with the broader consequences of data and analysis practices. Keep social equality, the environmental impact of big data processing, and other society-wide impacts in view as you plan big data collection.
  10. Know when to break these rules. With debate, code of conduct, and auditability as your guide, consider that in a public health emergency or other disaster, you may find there are reasons to put the other rules aside.

Gaining informed consent

Those providing their data may not be aware at the time that their data may be sold later to data brokers who may then re-sell them.

Unfortunately, data privacy consent forms are generally hard for the average person to read, even in the wake of the General Data Protection Regulation’s (GDPR) expansion of privacy protections. Terms of Service (ToS) documents are so notoriously difficult to read that one filmmaker even made a documentary on the subject. Researchers who have studied terms of service and privacy policies have found that users generally accept them without reading them because they are too long and complex. Moreover, users who need to access a platform or service for personal reasons (for example, to get in contact with a relative) or for their livelihood (to deliver their products to customers) may not be able to simply reject the ToS when they have no viable or immediate alternative.

Important work is being done to try to protect users of platforms and services from these kinds of abusive data-sharing situations. For example, Carnegie Mellon’s Usable Privacy and Security laboratory (CUPS) has developed best practices to inform users about how their data may be used. These take the shape of data privacy “nutrition labels” that are similar to FDA-specified food nutrition labels and are evidence-based.

In Chipata, Zambia, a resident draws water from a well. Big data offer invaluable insights for the design of climate change solutions. Photo credit: Sandra Coburn.


Opportunities

Big data can have positive impacts when used to further democracy, human rights, and good governance. Read below to learn how to think about big data in your work more effectively and safely.

Greater insight

Big datasets can present some of the richest, most comprehensive information that has ever been available in human history. Researchers using big datasets have access to information from a massive population. These insights can be much more useful and convenient than self-reported data or data gathered from logistically tricky observational studies. One major trade-off is between the richness of the insights gained through self-reported or carefully collected data and the generalizability of the insights gained from big data. Big data gathered from social-media activity or sensors can also allow for real-time measurement of activity at a large scale. Big data insights are very important in the field of logistics. For example, the United States Postal Service collects data from across its package deliveries using GPS and vast networks of sensors and other tracking methods, and then processes these data with specialized algorithms. These insights allow it to optimize deliveries for environmental sustainability.

Increased access to data

Making big datasets publicly available can begin to close divides in access to data. Apart from some public datasets, big data often end up as the property of corporations, universities, and other large organizations. Even though the data produced are about individual people and their communities, those individuals and communities may not have the money or technical skills needed to access those data and make productive use of them. This creates the risk of worsening existing digital divides.

Publicly available data have helped communities understand and act on government corruption, municipal issues, human-rights abuses, and health crises, among other things. When data are made public, though, it is particularly important to ensure strong privacy protections for those whose data are in the dataset. The work of the Our Data Bodies project provides additional guidance on how to engage with communities whose data are in the datasets. Their workshop materials can support community understanding and engagement in making ethical decisions around data collection and processing, and in monitoring and auditing data practices.


Risks

The use of emerging technologies to collect data can also create risks in civil society programming. Read below on how to discern the possible dangers associated with big data collection and use in DRG work, as well as how to mitigate unintended – and intended – consequences.

Surveillance

With the potential for re-identification as well as the nature and aims of some uses of big data, there is a risk that individuals included in a dataset will be subjected to surveillance by governments, law enforcement, or corporations. This may put the fundamental rights and safety of those in the dataset at risk.

The Chinese government is routinely criticized for the invasive surveillance of Chinese citizens through gathering and processing big data. More specifically, the Chinese government has been criticized for its system of social ranking of citizens based on their social media, purchasing, and education data, as well as the gathering of DNA of members of the Uighur minority (with the assistance of a US company, it should be noted). China is certainly not the only government to abuse citizen data in this way. Edward Snowden’s revelations about the US National Security Agency’s gathering and use of social media and other data were among the first public warnings about the surveillance potential of big data. Concerns have also been raised about partnerships involved in the development of India’s Aadhaar biometric ID system, a technology whose producers are eager to sell it to other countries. In the United States, privacy advocates have raised concerns about companies and governments gathering data at scale about students by using their school-provided devices, a concern that should also be raised in any international context when laptops or mobiles are provided for students.

It must be emphasized that surveillance concerns are not limited to the institutions originally gathering the data, whether governments or corporations. When data are sold or combined with other datasets, it is possible that other actors, from email scammers to abusive domestic partners, could access the data and track, exploit, or otherwise harm people appearing in the dataset.

Data security concerns

Because big data are collected, cleaned, and combined through long, complex pipelines of software and storage, they present significant challenges for security. These challenges are multiplied whenever the data are shared among many organizations. Any stream of data arriving in real time (for example, information about people checking into a hospital) will need to be specifically protected from tampering, disruption, or surveillance. Given that the data may present significant risks to the privacy and safety of those included in the datasets and may be very valuable to criminals, it is important to ensure sufficient resources are provided for security.

Existing security tools for websites are not enough to cover the entire big data pipeline. Major investments in staff and infrastructure are needed to provide proper security coverage and respond to data breaches. And unfortunately, within the industry, there are known shortages of big data specialists, particularly security personnel familiar with the unique challenges big data presents. Internet of Things sensors present a particular risk if they are part of the data-gathering pipeline; these devices are notorious for having poor security. For example, a malicious actor could easily introduce fake sensors into the network or fill the collection pipeline with garbage data in order to render your data collection useless.
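One basic mitigation against rogue sensors and garbage readings is to validate data at the point of ingestion. A minimal sketch (the sensor IDs and thresholds are hypothetical):

```python
# Hypothetical sanity checks applied before sensor data enter the pipeline,
# so unregistered devices and implausible readings are rejected early.
KNOWN_SENSORS = {"temp-001", "temp-002"}
VALID_RANGE = (-40.0, 60.0)  # plausible air temperatures, in Celsius

def validate(reading: dict) -> bool:
    if reading.get("sensor_id") not in KNOWN_SENSORS:
        return False  # device is not on the registered allowlist
    value = reading.get("value")
    if not isinstance(value, (int, float)):
        return False  # non-numeric payloads are garbage
    return VALID_RANGE[0] <= value <= VALID_RANGE[1]

stream = [
    {"sensor_id": "temp-001", "value": 21.5},
    {"sensor_id": "rogue-99", "value": 22.0},   # unregistered device
    {"sensor_id": "temp-002", "value": 999.0},  # out-of-range garbage
]
accepted = [r for r in stream if validate(r)]
```

Checks like these do not replace device authentication or encrypted transport, but they cheaply limit how much damage a fake or malfunctioning sensor can do to the collected dataset.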

Exaggerated expectations of accuracy and objectivity

Big data companies and their promoters often claim that big data can be more objective or accurate than traditionally gathered data, supposedly because human judgment does not come into play and because data gathered at such scale are richer. This picture downplays the fact that algorithms and computer code also bring human judgment to bear on data, including biases and decisions about which data to exclude. Human interpretation is also always necessary to make sense of patterns in big data; so again, claims of objectivity should be taken with healthy skepticism.

It is important to ask questions about data-gathering methods, the algorithms involved in processing, and the assumptions or inferences made by the data gatherers/programmers and their analyses, to avoid falling into the trap of assuming big data are “better.” For example, while data about the proximity of two cell phones tell you that two people were near each other, only human interpretation can tell you why those two people were near each other. How an analyst interprets that closeness may differ from what the people carrying the cell phones might tell you. This is a major challenge in using phones for “contact tracing” in epidemiology, for example. During the COVID-19 health crisis, many countries raced to build contact-tracing cellphone apps. The precise purposes and functioning of these apps vary widely (as has their effectiveness), but it is worth noting that major tech companies have preferred to call these apps “exposure-risk notification” apps rather than contact tracing: the apps can only tell you if you have been in proximity with someone carrying the coronavirus, not whether you have contracted the virus.

Misinterpretation

As with all data, there are pitfalls when it comes to interpreting and drawing conclusions. Because big data are often captured and analyzed in real time, they may be particularly weak in providing historical context for the current patterns they highlight. Anyone analyzing big data should also consider what the sources were, whether the data were combined with other datasets, and how they were cleaned. Cleaning refers to the process of correcting or removing inaccurate or extraneous data. This is particularly important with social-media data, which can have lots of “noise” (extra information) and are therefore almost always cleaned.
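A minimal cleaning sketch (the posts and rules are fabricated) shows the typical steps applied to noisy social-media text: strip platform noise, normalize, and de-duplicate:

```python
import re

# Raw social-media-style records: duplicates, noise, inconsistent casing.
raw = [
    "Great turnout at the polls today!!!",
    "great turnout at the polls today",
    "RT @user: Great turnout at the polls today!!!",
    "   ",
]

def clean(posts):
    seen, out = set(), []
    for post in posts:
        text = re.sub(r"^RT @\w+:\s*", "", post)   # drop retweet prefix
        text = re.sub(r"[^\w\s]", "", text)        # strip punctuation noise
        text = " ".join(text.lower().split())      # normalize case/whitespace
        if text and text not in seen:              # drop empties and duplicates
            seen.add(text)
            out.append(text)
    return out

cleaned = clean(raw)
```

Every one of these choices (what counts as noise, what counts as a duplicate) is a human judgment that shapes the resulting analysis, which is exactly why the cleaning process should be documented alongside the data.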


Questions

If you are trying to understand the implications of big data in your work environment, or are considering using aspects of big data as part of your DRG programming, ask yourself these questions:

  1. Is gathering big data the right approach for the question you’re trying to answer? How would your question be answered differently using interviews, historical research, or a focus on statistical significance?
  2. Do you already have these data, or are they publicly available? Is it really necessary to acquire these data yourself?
  3. What is your plan to make it impossible to identify individuals through their data in your dataset? If the data come from someone else, what kind of de-identification have they already performed?
  4. How could individuals be made more identifiable by someone else when you publish your data and findings? What steps can you take to lower the risk they will be identified?
  5. What is your plan for getting consent from those whose data you are collecting? How will you make sure your consent document is easy for them to understand?
  6. If your data come from another organization, how did they seek consent? Did that consent include consent for other organizations to use the data?
  7. If you are getting data from another organization, what is the original source of these data? Who collected them, and what were they trying to accomplish?
  8. What do you know about the quality of these data? Is someone inspecting them for errors, and if so, how? Did the collection tools fail at any point, or do you suspect that there might be some inaccuracies or mistakes?
  9. Have these data been integrated with other datasets? If data were used to fill in gaps, how was that accomplished?
  10. What is the end-to-end security plan for the data you are capturing or using? Are there third parties involved whose security propositions you need to understand?


Case Studies

Village resident in Tanzania. Big data analytics can pinpoint strategies that work for small-scale farmers. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.
Big Data for climate-smart agriculture

“Scientists at the International Center for Tropical Agriculture (CIAT) have applied Big Data tools to pinpoint strategies that work for small-scale farmers in a changing climate…. Researchers have applied Big Data analytics to agricultural and weather records in Colombia, revealing how climate variation impacts rice yields. These analyses identify the most productive rice varieties and planting times for specific sites and seasonal forecasts. The recommendations could potentially boost yields by 1 to 3 tons per hectare. The tools work wherever data is available, and are now being scaled out through Colombia, Argentina, Nicaragua, Peru and Uruguay.”

School-Issued Devices and Student Privacy, particularly the Best Practices for Ed Tech Companies section.

“Students are using technology in the classroom at an unprecedented rate…. Student laptops and educational services are often available for a steeply reduced price and are sometimes even free. However, they come with real costs and unresolved ethical questions. Throughout EFF’s investigation over the past two years, [they] have found that educational technology services often collect far more information on kids than is necessary and store this information indefinitely. This privacy-implicating information goes beyond personally identifying information (PII) like name and date of birth, and can include browsing history, search terms, location data, contact lists, and behavioral information…All of this often happens without the awareness or consent of students and their families.”

Big Data and Thriving Cities: Innovations in Analytics to Build Sustainable, Resilient, Equitable and Livable Urban Spaces.

This paper includes case studies of big data used to track changes in urbanization, traffic congestion, and crime in cities. “[I]nnovative applications of geospatial and sensing technologies and the penetration of mobile phone technology are providing unprecedented data collection. This data can be analyzed for many purposes, including tracking population and mobility, private sector investment, and transparency in federal and local government.”

Battling Ebola in Sierra Leone: Data Sharing to Improve Crisis Response.

“Data and information have important roles to play in the battle not just against Ebola, but more generally against a variety of natural and man-made crises. However, in order to maximize that potential, it is essential to foster the supply side of open data initiatives – i.e., to ensure the availability of sufficient, high-quality information. This can be especially challenging when there is no clear policy backing to push actors into compliance and to set clear standards for data quality and format. Particularly during a crisis, the early stages of open data efforts can be chaotic, and at times redundant. Improving coordination between multiple actors working toward similar ends – though difficult during a time of crisis – could help reduce redundancy and lead to efforts that are greater than the sum of their parts.”

Tracking Conflict-Related Deaths: A Preliminary Overview of Monitoring Systems.

“In the framework of the United Nations 2030 Agenda for Sustainable Development, states have pledged to track the number of people who are killed in armed conflict and to disaggregate the data by sex, age, and cause—as per Sustainable Development Goal (SDG) Indicator 16. However, there is no international consensus on definitions, methods, or standards to be used in generating the data. Moreover, monitoring systems run by international organizations and civil society differ in terms of their thematic coverage, geographical focus, and level of disaggregation.”

Balancing data utility and confidentiality in the US census.

Describes how the Census is using differential privacy to protect the data of respondents. “As the Census Bureau prepares to enumerate the population of the United States in 2020, the bureau’s leadership has announced that they will make significant changes to the statistical tables the bureau intends to publish. Because of advances in computer science and the widespread availability of commercial data, the techniques that the bureau has historically used to protect the confidentiality of individual data points can no longer withstand new approaches for reconstructing and reidentifying confidential data. … [R]esearch at the Census Bureau has shown that it is now possible to reconstruct information about and reidentify a sizeable number of people from publicly available statistical tables. The old data privacy protections simply don’t work anymore. As such, Census Bureau leadership has accepted that they cannot continue with their current approach and wait until 2030 to make changes; they have decided to invest in a new approach to guaranteeing privacy that will significantly transform how the Census Bureau produces statistics.”


Blockchain

What is Blockchain?

A blockchain is a distributed database existing on multiple computers at the same time, with a detailed and unchangeable transaction history secured by cryptography. Blockchain-based technologies, perhaps most famous for their use in “cryptocurrencies” such as Bitcoin, are also referred to as “distributed ledger technology” (DLT).

How does Blockchain work?

Unlike hand-written records, like this bed net distribution in Tanzania, data added to a blockchain can’t be erased or manipulated. Photo credit: USAID.

A blockchain is a constantly growing database: new sets of records, or ‘blocks,’ are added to it over time. Each block contains a timestamp and a link to the previous block, so the blocks form a chain. The resulting blockchain is not managed by any particular body; instead, everyone in the network has access to the whole database. Old blocks are preserved forever, and new blocks are added to the ledger irreversibly, making it impossible to erase or manipulate the database records.
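The chain structure can be sketched in a few lines of Python. This is an illustrative toy, not a real blockchain implementation: it simply shows how each block stores the hash of its predecessor, so that the blocks link into a chain.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block holding some data, a timestamp, and a link (hash) to its predecessor."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # The block's own hash covers all of its contents, including the backward link,
    # so changing anything in an earlier block changes every hash after it.
    payload = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
    block["hash"] = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return block

# Build a small chain, starting from a "genesis" block with no real predecessor.
chain = [make_block("genesis", previous_hash="0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

for block in chain:
    print(block["previous_hash"][:8], "->", block["hash"][:8])
```

In a real network, every participant holds a copy of this chain and independently verifies each new block before appending it.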

Blockchain can provide solutions for very specific problems. The most clear-cut use case is for public, shared data where all changes or additions need to be clearly tracked, and where no data will ever need to be redacted. Different uses require different inputs (computing power, bandwidth, centralized management), which need to be carefully considered based on each context. Blockchain is also an over-hyped concept applied to a range of different problems where it may not be the most appropriate technology, or in some cases, even a responsible technology to use.

There are two core concepts around Blockchain technology: the transaction history aspect and the distributed aspect. They are technically tightly interwoven, but it is worth considering them and understanding them independently as well.

'Immutable' Transaction History

Imagine stacking blocks. With increasing effort, one can continue adding more blocks to the tower, but once a block is in the stack, it cannot be removed without fundamentally and very visibly altering—and in some cases destroying—the tower of blocks. A blockchain is similar in that each “block” contains some amount of information—information that may be used, for example, to track currency transactions and store actual data. (You can explore the bitcoin blockchain, which itself has already been used to transmit messages and more, to learn about a real-life example.)

This is a core aspect of blockchain technology, generally called immutability: data, once stored, cannot be altered. In a practical sense, blockchain is immutable; although complete agreement among users could in principle permit changes, actually making those changes would be incredibly tedious.
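Why tampering is immediately visible can be shown with a minimal, hypothetical sketch: if any block's contents are altered, its stored hash no longer matches, and every later block's backward link breaks. The `block_hash` helper and the two-block chain below are invented for illustration, not a real protocol.

```python
import hashlib
import json

def block_hash(block):
    """Recompute a block's hash from its contents (excluding the stored hash itself)."""
    payload = {"data": block["data"], "previous_hash": block["previous_hash"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def is_chain_valid(chain):
    """Valid only if every block's stored hash matches its contents and its
    backward link points at the actual hash of the previous block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # the block's contents were altered after the fact
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False  # the link back to the predecessor is broken
    return True

# Build a tiny two-block chain, then try to rewrite history.
genesis = {"data": "genesis", "previous_hash": "0" * 64}
genesis["hash"] = block_hash(genesis)
second = {"data": "Alice pays Bob 5", "previous_hash": genesis["hash"]}
second["hash"] = block_hash(second)
chain = [genesis, second]

print(is_chain_valid(chain))             # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with an old block
print(is_chain_valid(chain))             # False: the alteration is immediately visible
```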

Blockchain is, at its simplest, a valuable digital tool that replicates online the value of a paper-and-ink logbook. While this can be useful for tracking a variety of sequential transactions or events (ownership of a specific item, a parcel of land, a step in a supply chain) and could in theory even be applied to concepts like voting or community ownership and management of resources, it comes with an important caveat: mistakes can never be truly unmade, and data tracked in a blockchain can never be corrected, only appended to.

Many of the potential applications of blockchain would rely on one of the pieces of data tracked being the identity of a person or legal organization. If that entity changes, their previous identity will be forever immutably tracked and linked to the new identity. Beyond being damaging to a person fleeing persecution or legally changing their identity (transgender individuals, for example), this is also a violation of the right to privacy established under international human rights law.

Distributed and Decentralized

The second core tenet of blockchain technology is the absence of a central authority or oracle of “truth.” Because transaction records are unchangeable, every stakeholder contributing to a blockchain tracks and verifies the data it contains. At scale, this provides powerful protection against problems common not only to NGOs but also to the private sector and other fields reliant on a single service to maintain a consistent data store. This feature can protect a central system from collapsing or being censored, corrupted, lost, or hacked, but at the cost of significant hurdles in developing the protocol and added requirements for those interacting with the data.

A common misconception is that blockchain is completely open and transparent. Blockchains may be private, with various forms of permissions applied. In such cases, some users have more control over the data and transactions than others. Privacy settings for blockchain can make for easier management, but also replicate some of the specific challenges that blockchains, in theory, are solving.

Permissionless vs Permissioned Blockchain

Permissionless blockchains are public, so anyone can interact with and participate in them. Permissioned blockchains, on the other hand, are closed networks that only specific actors can access and contribute to. As such, permissionless blockchains are more transparent and decentralized, while permissioned blockchains are governed by an entity or group of entities that can customize the platform, choosing who can participate, the level of transparency, and whether or not to use digital assets. Another key difference is that public blockchains tend to be anonymous, while private ones, by nature, cannot be. Because of this, permissioned blockchains are chosen in many human rights use cases, using identity to hold users accountable.


How is blockchain relevant in civic space and for democracy?

Blockchain technology has the potential to provide substantial benefits in the development sector broadly, as well as specifically for human rights programs. By providing a decentralized, verifiable source of data, blockchain technology can be a more transparent, efficient form of information and data management for improved governance, accountability, financial transparency, and even digital identities. While blockchain can be effective when used strategically on specific problems, practitioners who choose to use it must do so carefully. The decision to use DLTs should be based on detailed analysis and research into comparable technologies, including non-DLT options. As blockchains are used more and more for governance and in the civic space, irresponsible applications threaten human rights, especially data security and the right to privacy.

By providing a decentralized, verifiable source of data, blockchain technology can enable a more transparent, efficient form of information and data management. Practitioners should understand that blockchain technology can be applied to humanitarian challenges, but it is not a separate humanitarian innovation in itself.

Blockchain for the Humanitarian Sector – Future Opportunities

Blockchains lend themselves to some interesting tools being used by companies, governments, and civil society. Examples of how blockchain technology may be used in civic space include: land titles (necessary for economic mobility and preventing corruption), digital IDs (especially for displaced persons), health records, voucher-based cash transfers, supply chains, censorship-resistant publications and applications, digital currency, decentralized data management, recording votes, crowdfunding, and smart contracts. Some of these examples are discussed below. Specific examples of the use of blockchain technology may be found on this page under case studies.

A USAID-funded project used a mobile app and software to track the sale and transfer of land rights in Tanzania. Blockchain technology may also be used to record land titles. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.

Blockchain’s core tenets – an immutable transaction history and its distributed and decentralized nature – lend themselves to some interesting tools being used by companies, governments, and civil society. The risks and opportunities these present are explored more fully in the relevant sections below, and specific examples are given in the Case Studies section. At a high level, many actors are looking at leveraging blockchain in the following ways:

Smart Contracts

Smart contracts are agreements that execute automatic payments on the completion of a specific task or event. In civic space, for example, smart contracts could be used to execute agreements between NGOs and local governments to expedite transactions, lower costs, and reduce mutual suspicion. However, since these contracts are “defined” in code, any software bug can interfere with the intent of the contract or become a loophole through which the contract can be exploited. In one such case, an attacker exploited a software bug in The DAO, a smart contract-based investment fund, draining approximately $50 million.
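The logic of a smart contract can be illustrated with a short, hypothetical Python sketch. Real smart contracts run on-chain (for example, written in Solidity on Ethereum); the class and names below are invented purely for illustration. Note that the `if` checks are the entire “contract”: a bug in any of them would be an exploitable loophole, which is exactly the kind of flaw that undid The DAO.

```python
class EscrowContract:
    """Toy escrow 'smart contract': the payer locks funds, and the contract
    releases them to the payee automatically once the agreed verifier
    confirms the task is complete. Illustrative only."""

    def __init__(self, payer, payee, amount, verifier):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.verifier = verifier   # party trusted to confirm task completion
        self.funded = False
        self.released = False

    def deposit(self, sender, amount):
        # Funds are accepted only from the payer and only for the exact amount.
        if sender != self.payer or amount != self.amount:
            raise ValueError("deposit must come from the payer for the exact amount")
        self.funded = True

    def confirm_completion(self, sender):
        # The contract's terms are enforced in code: only the named verifier
        # can trigger the payment, and only once the contract is funded.
        if sender != self.verifier:
            raise PermissionError("only the verifier can confirm completion")
        if not self.funded:
            raise RuntimeError("cannot release funds from an unfunded contract")
        self.released = True
        return f"{self.amount} released to {self.payee}"

contract = EscrowContract(payer="NGO", payee="LocalGov", amount=1000, verifier="Auditor")
contract.deposit("NGO", 1000)
print(contract.confirm_completion("Auditor"))  # 1000 released to LocalGov
```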

Liquid Democracy

Liquid democracy is a form of democracy wherein, rather than simply voting for elected leaders, citizens also engage in collective decision making. While direct democracy (each individual having a say on every choice a country makes) is not feasible, blockchain could lower the barriers to liquid democracy, a system which would put more power into the hands of the people. Blockchain would allow citizens to register their opinions on specific subject matters or delegate votes to subject matter experts.

Government Transparency

Blockchain can be used to tackle governmental corruption and waste in common areas like public procurement. Governments can use blockchain to publicize the steps of procurement processes and build citizen trust as citizens know the transactions recorded cannot have been tampered with. The tool can also be used to automate tax calculation and collection.

Innovative Currency and Payment Systems

Many new cryptocurrency projects are considering ways to leverage blockchain for transactions without the volatility of Bitcoin, and with other properties, such as speed, low cost, stability, and anonymity. Cryptocurrencies are also occasionally combined with smart contracts to establish shared ownership through the funding of projects.

Potential for fund-raising

In addition, the digital-currency subset of blockchain is being used to establish shared ownership of projects (not dissimilar to stocks or shares in large companies).

Potential for election integrity

The transparency and immutability of blockchain could be used to increase public confidence in elections by integrating electronic voting machines with blockchain. However, there are privacy concerns with publicly tracking the tally of votes. Additionally, this system relies on electronic voting machines, which raise security concerns of their own, as computers can be hacked, and such machines have been met with mistrust in several societies where they were suggested. Online voting through blockchain faces similar distrust, but integrating blockchain into voting would make audits much easier and more reliable. This traceability would also be a useful feature for transparently transmitting results from polling places to tabulation centers.

Censorship-resistant technology

The decentralized, immutable nature of blockchain provides clear benefits to protecting speech, but not without significant risks. There have been high-visibility uses of blockchain to publish censored speech in China, Turkey, and Catalonia. Article 19 has written an in-depth report specifically on the interplay between freedom of expression and blockchain technologies, which provides a balanced view of the potential benefits and risks and guidance for stakeholders considering engaging in this facet.

Decentralized computation and storage

Micro-payments through a blockchain can be used to formalize and record actions. This can be useful when carrying out activities with multiple stakeholders where trust, transparency, and a permanent record are valuable, for example, automated auctions (to prevent corruption), voting (to build voter trust), signing contracts (to keep a record of ownership and obligations that will outlast crises that destroy paper or even digital systems), and even for copyright purposes and preventing manipulation of facts.

Ethereum is a blockchain platform with an associated cryptocurrency, focused on using smart contracts and digital payments to manage decentralized computation and storage. Ethereum encourages the development of “distributed apps” that are tied to transactions on the Ethereum blockchain. Examples of these apps include an X-like social media tool and apps that pay for content creation and sharing. See the case studies in the cryptocurrencies primer for more detail.

The vast majority of these applications presume some form of micro-payment as part of the transaction. However, this requirement has ramifications for equal access as internet accessibility, capital, and access to online payment systems are all barriers to usage. Furthermore, with funds involved, informed consent is even more essential and challenging to ensure.


Opportunities

Blockchain can have positive impacts when used to further democracy, human rights and governance issues. Read below to learn how to more effectively and safely think about blockchain in your work.

Proof of Digital Integrity

Data stored or tracked using blockchain technologies has a clear, sequential, and unalterable chain of verifications. Once data is added to the blockchain, there is ongoing mathematical proof that it has not been altered. This does not provide any assurance that the original data is valid or true, and it means that any data added cannot be deleted or changed, only appended to. Nevertheless, in civil society, this benefit has been applied to concepts such as creating records for land titles and ownership; improving voting security by ensuring one person matches with one unchangeable vote; and preventing fraud and corruption while enhancing transparency in international philanthropy. It has been used to keep records of digital identities to help people retain ownership over their identity and documents and, in humanitarian contexts, to make voucher-based cash transfers more efficient. As an enabler of digital currency, in some circumstances, blockchain facilitates cross-border funding of civil society. Blockchain could be used to preserve not only identification documents but qualifications and degrees as well.

A function such as this could provide a solution to the legal invisibility most often borne by refugees and migrants. Rohingya refugees in Bangladesh, for example, are often at risk of discrimination and exploitation because they are stateless. Proponents of blockchain argue that its distributed system can grant individuals “self-sovereign identity,” a concept by which ownership of identity documents is taken from authorities and put in the hands of individuals. This allows individuals to use their identity documents across a number of authorities, while authorities’ access requires a degree of consent. A self-sovereign identity model could also be a way to meet the requirements raised by the GDPR and similar privacy-rights-supporting legislation.

However, if blockchain architects do not secure transaction permissions and public/private state variables, governments could use machine-learning algorithms to monitor public blockchain activity and gain insight into whatever daily, lower-level activities of their citizens are linkable to their blockchain identities. This might include payments (both interpersonal and business) and services, be they health, financial, or other. Anywhere citizens needed to show their ID, their location and time would be tracked. While this is an infringement on privacy rights, it is especially problematic for marginalized groups whose legal status in a country can change rapidly and without warning. Furthermore, such a use of blockchain assumes that individuals would be prepared and able to adopt the technology, an unlikely prospect given the financial insecurity and the lack of access to information and the internet that many vulnerable groups, such as refugees, face. In this context, it is impossible to get meaningful informed consent from these target groups.

Blockchains promise anonymity, or at least pseudonymity, because limited information regarding individuals is stored in transaction logs. However, this does not guarantee that the platforms protect freedom of expression. For instance, the central internet regulator in China proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers.

Supply Chain Transparency

Blockchain has been used to create transparency in the supply chain and connect consumers directly with the producers of the products they are buying. This enables consumers to know companies are following ethical and sustainable production practices. For example, Moyee Coffee uses blockchain to track their supply chain, and makes this information available to customers, who can confirm the coffee beans were picked by paid, adult farmers and even tip those farmers directly.

Decentralized Store of Data

Around the world, blockchain technology helps displaced people regain IDs and access to other social services. Here, a CARD agent in the Philippines tracks IDs by hand. Photo credit: Brooke Patterson/USAID.

Blockchain is resistant to the traditional problems one central authority or data store faces when being attacked or experiencing outages. In a blockchain, data are constantly being shared and verified across all members—although blockchain has been criticized for requiring large amounts of energy, storage, and bandwidth to maintain a shared data store. This decentralization is most valued in digital currencies, which rely on the scale of their blockchain to balance not having a country or region “owning” and regulating the printing of the currency. Blockchain has also been explored to distribute data and coordinate resources without a reliance on a central authority in order to resist censorship.



Risks

The use of emerging technologies can also create risks in civil society programming. Read below to learn how to discern the possible dangers associated with blockchain in DRG work, as well as how to mitigate unintended – and intended – consequences.

Unequal Access

The minimal requirements for an individual or group to engage with blockchain present a challenge for many: connectivity, reliable and robust bandwidth, and local storage are all needed. Mobile phones are therefore often insufficient devices to host or download blockchains, and the infrastructure blockchain requires can be a barrier to access in areas where internet connectivity occurs primarily via mobile devices. Because every full node (host of a blockchain) stores a copy of the entire transaction log, blockchains only grow longer and larger with time and can be extremely resource-intensive to download on a mobile device. For instance, over the span of a few years, the blockchain underlying Bitcoin grew from several gigabytes to several hundred; for a cryptocurrency blockchain, this growth is a necessary sign of healthy economic activity. While the use of blockchain offline is possible, offline components are among the most vulnerable to cyberattacks, and this can put the entire system at risk.

Blockchains, whether fully independent or built on existing blockchains, require some percentage of actors to lend processing power to the network, which, especially as they scale, either becomes exclusionary or creates classes of privileged users.

Another problem that can undermine the intended benefits of the system is unequal access to opportunities to convert blockchain-based currencies into traditional currencies. This is especially a problem for philanthropy and for supporting civil society organizations in restrictive regulatory environments. For cryptocurrencies to have actual value, someone has to be willing to pay money for them.

Lack of digital literacy

Beyond these technical challenges, blockchain technology requires a strong baseline understanding of technology, and it is often proposed for situations where digital literacy itself is a challenge. Use of the technology without a baseline understanding of the consequences does not constitute real consent and could have dire results.

There are paths around some of these problems, but any blockchain use needs to reflect on what potential inequalities could be exacerbated by or with this technology.

Further, these technologies are inherently complex, and outside the atypical case where individuals possess the technical sophistication and means to install blockchain software and set up nodes, the question remains as to how the majority of individuals can effectively access them. This is especially true of individuals who may have added difficulty interfacing with technologies due to disability, literacy, or age. Ill-equipped users are at increased risk of their investments or information being exposed to hacking and theft.

Blockchain and freedom of expression

Breaches of Privacy

Account ledgers for Nepali Savings and Credit Cooperatives show the burden of paper. Blockchain replicates online the value of paper-and-ink records. Photo credit: Brooke Patterson/USAID.

Storing sensitive information on a blockchain – such as biometrics or gender – combined with the immutable aspects of the system, can lead to considerable risks for individuals when this information is accessed by others with the intention to harm. Even when specific personally identifiable information is not stored on a blockchain, pseudonymous accounts are difficult to protect from being mapped to real-world identities, especially if they are connected with financial transactions, services, and/or actual identities. This can erode rights to privacy and protection of personal data, as well as exacerbate the vulnerability of already marginalized populations and persons who change fundamental aspects of their person (gender, name). Data privacy rights, including explicit consent and the modification and deletion of one’s own data, are often protected through data protection and privacy legislation, such as the General Data Protection Regulation (GDPR) in the EU, which serves as a framework for many other policies around the world. An overview of legislation in this area around the world is kept up to date by the United Nations Conference on Trade and Development.

For example, in September 2017, concerns surfaced about the Bangladeshi government’s plans to create a ‘merged ID’ that would combine citizens’ biometric, financial, and communications data (Rahman, 2017). At that time, some local organizations had started exploring a DLT solution to identify and serve the needs of local Rohingya asylum-seekers and refugees. Because aid agencies are required to comply with national laws, any data recorded on a DLT platform could be subject to automatic data-sharing with government authorities. If these sets of records were to be combined, they would create an indelible, uneditable, untamperable set of records of highly vulnerable Rohingya asylum-seekers, ready for cross-referencing with other datasets. “As development and humanitarian donors and agencies rush to adopt new technologies that facilitate surveillance, they may be creating and supporting systems that pose serious threats to individuals’ human rights.”

These issues raise questions about meaningful, informed consent – how and to what extent do aid recipients understand DLTs and their implications when they receive assistance? […] Most experts agree that data protection needs to be considered not only in the realm of privacy, empowerment and dignity, but also in terms of potential physical impact or harm (ICRC and Brussels Privacy Hub, 2017; ICRC, 2018a)

Blockchain and distributed ledger technologies in the humanitarian sector

Environmental Impact

As blockchains scale, they require increasing amounts of computational power to stay in sync. In most digital-currency blockchains, this scale problem is balanced by rewarding the people who contribute the required processing power with currency. The University of Cambridge estimated in fall 2019 that Bitcoin alone used 0.28% of global electricity consumption; if Bitcoin were a country, it would rank as the 41st most energy-consuming country, just ahead of Switzerland. Further, the negative impact is demonstrated by research showing that each Bitcoin transaction takes as much energy as is needed to run a well-appointed house and all the appliances in it for an entire week.

Regulatory Uncertainty

As is often the case for emerging technology, the regulations surrounding blockchain are either ambiguous or nonexistent. In some cases, such as when the technology is used to publish censored speech, regulators overcorrect and block access to the entire system or remove the system’s pseudonymous protections in-country. In Western democracies, there are evolving financial regulations, as well as concerns around the immutable nature of the records stored in a blockchain: personally identifiable information (see Privacy, above) in a blockchain cannot be removed or changed as required by the GDPR’s right to be forgotten, and widely illegal content has already been inserted into the Bitcoin blockchain.

Trust, Control, and Management Issues

While a blockchain has no “central database” that could be hacked, it also has no central authority to adjudicate or resolve problems. A lost or compromised password is almost guaranteed to result in the loss of access to funds or, worse, digital identities. Compromised passwords or illegitimate use of the blockchain can harm the individuals involved, especially when personal information is accessed or when child sexual abuse images are stored forever. Building mechanisms to address this problem undermines the key benefits of the blockchain.

At the same time, an enormous amount of trust is inherently placed in the software-development process around blockchain technologies, especially those using smart contracts. Any flaw in the software, and any intentional “back door,” could enable an attack that undermines or subverts the entire goal of the project.

Stakeholders should consider where trust is being placed: whether it is in the coders, the developers, or those who design and govern mobile devices or apps, and whether trust is in fact being shifted from social institutions to private actors. All stakeholders should consider what implications this has and how these actors are accountable to human rights standards.

Blockchain and freedom of expression


Questions

If you are trying to understand the implications of blockchain in your work environment, or are considering using aspects of blockchain as part of your DRG programming, ask yourself these questions:

  1. Does blockchain provide specific, needed features that existing solutions with proven track records and sustainability do not?
  2. Do you really need blockchain, or would a database be sufficient?
  3. How will this implementation respect data privacy and control laws such as the GDPR?
  4. Do your intended beneficiaries have the internet bandwidth needed to use the product you are developing with blockchain?
  5. What external actors/partners will control critical aspects of the tool or infrastructure this project will rely on?
  6. What external actors/partners will have access to the data this project creates? What access conditions, limits, or ownership will they have?
  7. What level of transparency and trust do you have with these actors/partners?
  8. Are there ways to reduce dependency on these actors/partners?
  9. How are you conducting and measuring informed consent processes for any data gathered?
  10. How will this project mitigate technical, financial, and/or infrastructural inequalities and ensure they are not exacerbated?
  11. Will the use of blockchain in your project comply with data protection and privacy laws?
  12. Do other existing laws and policies address the risks and offer mitigating measures related to the use of blockchain in your context, such as anti-money-laundering regulation?
  13. Are there laws in development that may restrict your project or increase its costs?
  14. Do existing laws enable the benefits you have identified for the blockchain-enabled project?
  15. Are these laws aligned with international human rights law, such as the right to privacy, to freedom of expression and opinion, and to enjoy the benefits of scientific progress?

Case Studies

Blockchain and the supply chain

Blockchain has been used for supply chain transparency of products that are commonly not ethically sourced. For example, in 2018, the World Wildlife Fund collaborated with Sea Quest Fiji Ltd. (a tuna fishing and processing company), the tech company ConsenSys, and the implementer TraSeable to use blockchain to trace the origin of tuna caught in a Fijian longline fishery. Each fish was tagged when caught, and its entire journey was recorded on the blockchain. This methodology can also serve as a tool for sustainability and ethical business practices in other supply chains, including those that rely on child and forced labor.

Blockchain to combat corruption in registering land titles

A program was developed in Georgia to address corruption in land management. Land ownership is a sector particularly vulnerable to corruption, in part because ownership is recognized through titles, which can easily be lost or destroyed, making it easy for government officials to extract bribes to register land. Blockchain was introduced to provide a transparent and immutable record of each step of the registration process, so that the process could be tracked and there would be no danger of losing the record.

Blockchain for COVID-19 vaccine passports

After the COVID-19 vaccine was made public, many states considered implementing a vaccine passport system, whereby individuals would be required to show documentation proving they were vaccinated in order to enter certain countries or buildings. Blockchain was considered as a tool to more easily store vaccine records and track doses without negative consequences for individuals who lose their paper records. While such a system could have significant public health benefits, it raises serious data privacy concerns if individuals have no alternative to having their data stored on a blockchain. The episode also suggests that future identification documents may increasingly rely on blockchain.

Blockchain to facilitate transactions for humanitarian aid

Humanitarian aid is the sector where blockchain for human rights and democracy has been adopted the most. Blockchain has been embraced as a way to combat corruption and ensure money and aid reach their intended recipients, to allow access to donations in countries where crises have affected the banking system, and, in coordination with digital IDs, to allow donor organizations to better track funding and get money to people without traditional means of receiving it.

Sikka, a project of the Nepal Innovation Lab, operates through partnerships with local vendors and cooperatives within the community, sending value vouchers and digital tokens to individuals through SMS. Value vouchers can be used to purchase humanitarian goods from vendors, while digital tokens can be exchanged for cash. The initiative also supplies donors with data for monitoring and evaluation purposes. The International Federation of the Red Cross and Red Crescent Societies (IFRC) has a similar project, the Blockchain Open Loop Cash Transfer Pilot Project for cash transfer programming. The Kenya-based project utilized a mobile money transfer service operating in the country, Safaricom M-Pesa, to send payments to the mobile wallets of beneficiaries without the need for national ID documentation, and blockchain was used to track the payments. A management platform called “Red Rose” allowed donor organizations to manage data, and the program explored many of the ethics concerns around the use of blockchain.

The Start Network is another humanitarian aid organization that has experimented with using blockchain to disburse funds because of the reduced transfer fees, transparency, and speed benefits. Using Disberse, a distribution platform for foreign aid, the Start Network hoped to increase the humanitarian sector’s comfort with introducing new tech solutions.

AIDONIC is a private company with a donation management tool that incentivizes humanitarian donation with a platform allowing donors, even individuals, greater control over what their donations are used for. Small donors can choose specific initiatives, which will launch when fully funded, and throughout projects, donors can monitor, track, and trace their contributions.

Blockchain for collaboration

A similar humanitarian application of blockchain is collaboration. The World Food Program’s Building Blocks project allows organizations that work in the same region but offer different types of humanitarian aid to coordinate their efforts. All of the actions of the humanitarian organizations are recorded on a shared private blockchain, and all members of the network must be approved. The program has a data privacy policy: it records only required, non-sensitive data and releases pseudonymous data only to approved humanitarian organizations. Even so, humanitarian aid applications of blockchain raise substantial cybersecurity and data privacy concerns. The project has not been as successful as hoped; only UN Women and the World Food Program are full members. Still, the network makes it easier for beneficiaries to access aid from both organizations, and it gives aid organizations a clearer picture of what types of aid are being provided and what is missing.

Blockchain in electronic banking

In addition to its applications in humanitarian funding, blockchain has been used to address gaps in financial services outside of crisis zones. Project i2i provides a nontraditional solution for the unbanked population in the Philippines. While standing up the internet infrastructure necessary to establish traditional banking in rural areas is extremely challenging and resource intensive, with blockchain each participating bank only needs an iPad. With this, rural banks connect to the Ethereum network, and users gain access to a trustworthy and efficient system to process transactions. Though the system has successfully reduced the number of unbanked people in the Philippines, informed consent is a concern, as the majority of users have no other banking option, and data privacy rights remain an open issue.

Blockchain and data integrity

While data privacy is a serious concern, blockchain also has the potential to support democracy and human rights work through data collection and verification, and even through supporting data privacy. Chemonics’ 2018 Blockchain for Development Solutions Lab used blockchain to make the process of collecting and verifying the biodata of USAID professionals more efficient. The use of blockchain reduced incidents of error and fraud and improved data protection, both because blockchains are naturally resistant to tampering and because the program used encrypted keys instead of sharing ID documents through email.

Blockchain for fact checking images

Truepic is a company that provides fact checking solutions. The company supports information integrity by storing accurate information about pictures that have been verified. Truepic combines camera technology, which records pertinent details of every photo, with blockchain storage to create a database of verified imagery that cannot be tampered with. This database can then be used to fact check manipulated images.

Blockchain to permanently keep news articles

Civil.co was a journalism-supporting organization that harnessed the blockchain to keep news articles permanently online in the face of censorship. Civil’s use of blockchain aimed to encourage community trust in the news. First, articles were published using the blockchain itself, meaning a user with sufficient technical skills could theoretically verify that an article came from where it claimed to. Civil also supported trust with two non-blockchain “technologies”: a “constitution” that all its newsrooms adopted, and a ranking system through which its community of readers and journalists could vote up news and newsrooms they found trustworthy. Publishing on a peer-to-peer blockchain gave its publishing additional resistance to censorship, and readers could pay journalists for articles using Civil’s tokens. However, Civil struggled from the beginning to raise money, and its newsroom model failed to prove itself.

For more blockchain case studies, check out these resources:

  • New America keeps a Blockchain Impact Ledger with a database of blockchain projects and the people they serve.
  • The 2019 report “Blockchain and distributed ledger technologies in the humanitarian sector” provides multiple examples of humanitarian use of DLTs, including for financial inclusion, land titling, donation transparency, fraud reduction, cross-border transfers, cash programming, grant management and organizational governance, among others.
  • In “Blockchain: Can We Talk About Impact Yet?”, Shailee Adinolfi, John Burg and Tara Vassefi respond to a MERLTech blog post that not only failed to find successful applications of blockchain in international development, but was unable to identify companies willing to talk about the process. This article highlights three case studies of projects with discussion and links to project information and/or case studies.
  • In “Digital Currencies and Blockchain in the Social Sector,” David Lehr and Paul Lamb summarize work in international development leveraging blockchain for philanthropy, international development funding, remittances, identity, land rights, democracy and governance, and environmental protection.
  • Consensys, a company building and investing in blockchain solutions, including some in the civil sector, summarizes (successful) use cases in “Real-World Blockchain Case Studies.”

Cryptocurrency

What are cryptocurrencies?

Cryptocurrency is a type of digital or virtual currency that uses cryptography for secure and private transactions and for control of the creation of new units. Unlike traditional currencies issued by governments (like the US Dollar or Euro), cryptocurrencies are typically decentralized and operate on blockchain technology. The first cryptocurrency, Bitcoin, was created in the wake of the 2008 global financial crisis to decentralize the system of financial transactions. Cryptocurrency stands in almost direct contrast to the global financial system: no currency is attached to a state authority, it is unbound by geographic regulations, and, most importantly, maintenance of the system is community driven by a network of users. All transactions are logged pseudonymously on a public ledger, such as Bitcoin’s blockchain.

Definitions

Blockchain: Blockchain is a type of technology used in many digital currencies as a bank ledger. Unlike a normal bank ledger, copies of that ledger are distributed digitally, among computers all over the world, automatically updating with every transaction.
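
The distributed-ledger idea in this definition can be sketched in a few lines: each block stores the hash of the block before it, so altering any historical entry invalidates every later link. This is a minimal illustration only (no network, consensus, or signatures), not a real blockchain implementation:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Check that every block still points at the true hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, [{"from": "A", "to": "B", "amount": 5}])
add_block(chain, [{"from": "B", "to": "C", "amount": 2}])
print(is_valid(chain))  # True

# Tampering with an earlier block breaks every later link.
chain[0]["transactions"][0]["amount"] = 500
print(is_valid(chain))  # False
```

This tamper-evidence is what lets the many distributed copies of the ledger detect and reject an altered history.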

Cryptography: The practice of employing mathematical techniques to secure and protect information, transforming it into an unreadable format using encryption and hashing. In cryptocurrencies, cryptography safeguards transactions, privacy, and ownership verification using techniques like public-private keys and digital signatures on a blockchain.

Currency: A currency is a widely accepted system of money in circulation, usually designated by a nation or group of nations. Currency is commonly found in the form of paper and coins, but can also be digital (as this primer explores).

Fiat money: Government-issued currency, such as the USD. Sometimes referred to as Fiat currency.

Hashing: The process of running data through a mathematical function that produces a fixed-length output. Hashing is one-way: the output cannot feasibly be reversed to recover the input, and any change to the input produces a completely different output. Cryptocurrencies use hashing to verify transactions and to link blocks together.

Hash: The fixed-length output of a hashing function. In proof-of-work mining, the “mathematical problem” computers race to solve is finding an input whose hash meets a difficulty target.

Initial Coin Offering (ICO): The process by which a new cryptocurrency or digital “token” invites investment.

Mining: The process by which a computer solves a hash. The first computer to solve the hash permanently stores the transaction as a block on the blockchain and is rewarded with newly created coins. Solving a hash before other miners depends on how quickly a computer can compute hashes. In the early years of Bitcoin, for example, mining could be performed effectively using open-source software on standard desktop computers. More recently, only special-purpose machines known as application-specific integrated circuit (ASIC) miners can mine bitcoin cost-effectively, because they are optimized for the task. Mining pools (groups of miners) and companies now control most Bitcoin mining activity.
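
The mining race described above can be illustrated with a toy proof-of-work search: keep trying nonces until the hash of the block data plus the nonce starts with a required number of zeros. This is a simplified sketch (real Bitcoin mining double-hashes an 80-byte block header against an adjustable numeric target, not a leading-zeros string):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce is expensive; checking it is instant. That asymmetry
# is what makes proof-of-work useful.
nonce = mine("alice pays bob 1 BTC")
digest = hashlib.sha256(f"alice pays bob 1 BTC{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))  # True
```

Raising `difficulty` by one multiplies the expected work by 16, which is how real networks keep block times steady as hardware gets faster.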

How do cryptocurrencies work?

Money transfer agencies in Nepal. Cryptocurrencies potentially allow users to send and receive remittances and access foreign financial markets. Photo credit: Brooke Patterson/USAID.

Users purchase cryptocurrency with a credit card, debit card, or bank account, or acquire it through mining. They then store the currency in a digital “wallet,” either online, on a computer, or offline on a portable storage device such as a USB stick. Wallets are used to send and receive money through “public addresses,” or keys, that link the money to a specific type of cryptocurrency. These addresses are strings of characters that signify a wallet’s identity for transactions. A user’s public address can be shared with anyone to receive funds and can also be represented as a QR code. Anyone with whom a user transacts can see the balance in the public address that the user uses.

While transactions are publicly recorded, identifying user information is not. For example, on the Bitcoin blockchain, only a user’s public address appears next to a transaction—making transactions confidential but not necessarily anonymous.
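
The wallet-and-address mechanics described above can be sketched as follows. This is a deliberate simplification (real Bitcoin addresses are derived from elliptic-curve public keys via extra hashing and encoding steps), but the core point holds: the public ledger records addresses, never names.

```python
import hashlib
import secrets

def new_address() -> str:
    """Derive a pseudonymous address by hashing a random 32-byte secret.

    Simplification: real wallets hash an elliptic-curve *public* key; here we
    hash the secret directly just to produce an opaque identifier.
    """
    private_key = secrets.token_bytes(32)  # known only to the wallet owner
    return hashlib.sha256(private_key).hexdigest()[:40]

alice_addr, bob_addr = new_address(), new_address()

# The public ledger stores addresses and amounts, not identities.
ledger = [{"from": alice_addr, "to": bob_addr, "amount": 0.5}]
print(ledger[0]["to"] == bob_addr)  # True: anyone can follow the address
print("alice" in str(ledger))       # False: the name never appears
```

Anyone can watch funds move between addresses, which is why such transactions are confidential rather than truly anonymous.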

Cryptocurrencies have increasingly struggled with intense periods of volatility, most of which stems from the decentralized system of which they are a part. The lack of a central body means that cryptocurrencies are not legal tender, they are not regulated, there is little to no insurance if an individual’s digital wallet is hacked, and most payments are not reversible. As a result, cryptocurrencies are inherently speculative. In November 2021, Bitcoin peaked at a price of nearly $65,000 per coin, but it crashed almost a year later following the collapse of FTX, which triggered a domino effect across the crypto sector. Prior to the crash, new “meme coins” that gained popularity on social media saw substantial price increases as investors flocked to them. The crash that followed brought increased attention to tightening regulatory control over cryptocurrency and trading. Some cryptocurrencies, such as Tether, have attempted to offset volatility by tying their market value to an external reference like the USD or gold, but the industry overall has not yet reconciled how to maintain an autonomous, decentralized system with overall stability.

Types of Cryptocurrencies

The value of a certain cryptocurrency is heavily dependent on the faith of its investors, its integration into financial markets, public interest in using it, and its performance compared to other cryptocurrencies. Bitcoin, founded in 2008, was the first and only cryptocurrency until 2011 when “altcoins” began to appear. Estimates for the number of cryptocurrencies vary, but as of June 2023, there were about 23,000 different types of cryptocurrencies.

  • Bitcoin
    It has the largest user base and a market capitalization in the hundreds of billions. While Bitcoin initially attracted financial institutions like Goldman Sachs, the collapse of Bitcoin’s value (along with other cryptocurrencies) in 2018 has since increased skepticism towards its long-term viability.
  • Ethereum
    Ethereum is a decentralized software platform that enables Smart Contracts and Decentralized Applications (DApps) to be built and run without interference from a third party (like Bitcoin, it runs on blockchain technology). Ethereum launched in 2015 and is currently the second-largest cryptocurrency by market capitalization after Bitcoin.
  • Ripple (XRP)
    Ripple is a real-time payment-processing network that offers both instant and low-cost international payments, to compete with other transaction systems such as SWIFT or VISA. It is the third largest cryptocurrency.
  • Tether (USDT)
    Tether is one of the first and most popular of a group of “stablecoins” — cryptocurrencies that peg their market value to a currency or other external reference point to reduce volatility.
  • Monero
    Monero is the largest of what are known as privacy coins. Unlike Bitcoin, Monero transactions and account balances are not public by default.
  • Zcash
    Another anonymity-preserving cryptocurrency, Zcash, is operated under a foundation of the same name. It is branded as a mission-based, privacy-centric cryptocurrency that enables users “to protect their privacy on their own terms”, regarding privacy as essential to human dignity and to the healthy functioning of civil society.

Fish vendor in Indonesia. Women are disproportionately underbanked, and financial technologies can provide tools to address this gap. Photo credit: Afandi Djauhari/NetHope.

How are cryptocurrencies relevant in civic space and for democracy?

Cryptocurrencies are, in many ways, well suited to the needs of NGOs, humanitarians, and other civil society actors. Civic space actors who require blocking-resistant, low-fee transactions may find cryptocurrencies both convenient and secure. The use of cryptocurrencies in the developing world reveals their role not just as vehicles for aid, but also as tools that facilitate the development of small- and medium-sized enterprises (SMEs) looking to enter international trade. For example, UNICEF created a cryptofund in 2019 in order to receive and distribute funding in cryptocurrencies (ether and bitcoin). In June 2020, UNICEF announced its largest investment yet in startups in developing economies that are helping respond to the COVID-19 pandemic.

However, viewing cryptocurrencies through only a traditional development lens – i.e., that they may only be useful for refugees or countries with unreliable fiat currencies – oversimplifies the economic landscape of low- and middle-income countries. Many countries are home to a significant youth population poised to harness cryptocurrency in innovative ways: to send and receive remittances, to access foreign financial markets and investment opportunities, and even to encourage ecological or ethical purchasing behaviors (see the Case Studies section). During the coronavirus lockdown in India, and after the country’s reserve bank lifted its ban on cryptocurrencies, many young people started trading Indian cryptocurrencies and using cryptocurrencies to transfer money to one another. Still, the future of crypto in India and elsewhere is uncertain. The frontier nature of cryptocurrencies poses significant risks to users when it comes to insurance and, in some cases, security.

Moreover, as will be discussed below, the distributed technology (blockchain) underlying cryptocurrencies is seen as offering resistance to censorship, since the data are distributed over a large network of computers. The blockchain offers a high level of anonymity, which may be helpful for those living under autocratic regimes and for democracy activists conducting transactions that might otherwise be monitored. Cryptocurrencies could also give a broader range of people access to banking, an essential element of economic inclusion.

Opportunities

Cryptocurrencies can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to more effectively and safely think about cryptocurrencies in your work.

Accessibility

Cryptocurrencies are accessible to a broader range of users than regular cash currency transactions are: they are not subject to the same government regulation and do not carry high processing fees. Cross-border transactions in particular benefit from these features, since international banking fees and poor exchange rates can be extremely costly. In some cases, the value of cryptocurrencies may even be more stable than the local currency (see the volatile markets case study below). Note that cryptocurrencies that require participants to log in (on “permissioned” systems) necessitate that an organization controls participation in the system. In some cases, certain users also help run the system in other ways, such as operating servers. When this is the case, it is important to understand who those users are, how they are selected, and how their ability to use the system could be revoked if they turn out to be bad actors.

Additionally, Initial Coin Offerings (ICOs) lower the entry barrier to investing by cutting venture capitalists and investment banks out of the investing process, thereby democratizing it. While similar to Initial Public Offerings (IPOs), ICOs differ significantly in that they allow companies to interact directly with individual investors. This also poses a risk to investors, as the safeguards investment banks provide for traditional IPOs do not apply (see Lack of Governance and Regulatory Uncertainty below). The lack of regulatory bodies has also spurred the growth of scam ICOs: when an ICO or cryptocurrency has no legitimate strategy for generating value, it is typically a scam.

Still, broad accessibility has not yet been achieved, owing to a combination of factors including user knowledge gaps, internet and computing requirements, and incompatibility between traditional banking systems and cryptocurrency fintech. For an understanding of the usability and risk side of cryptocurrency use, and the disproportionate risks marginalized groups face, see the section on digital literacy and access requirements.

Anonymity and Censorship Resistance

The decentralized, peer-to-peer nature of cryptocurrencies may be of great comfort to those seeking anonymity, such as human rights defenders working in closed spaces or people simply seeking an equivalent to “cash” for online purchases (see the Cryptocurrencies in Volatile Markets case study, below). Cryptocurrencies can be useful for someone who wishes to donate anonymously to a foundation or organization when that donation could put them at risk if their identity were known, making them a powerful tool for activists. The anonymity of cryptocurrencies has also caused concern among advocacy groups who argue that, without open ledgers and tracking, crypto could be used by foreign illiberal actors to fund authoritarian campaigns.

Since the data supporting the currency are distributed over a large network of computers, it is more difficult for a bad actor to locate and target a transaction or system operation. But a currency’s ability to protect anonymity largely depends on the specific goal of the cryptocurrency. Zcash, for example, was specifically developed to hide transaction amounts and user addresses from public view. Zcash has also enabled more charitable giving, and several charities focused on research, journalism, and climate change advocacy are powered by Zcash. Cryptocurrencies with a large number of participants are also resistant to more benign, routine system outages, because some data stores in the network can continue operating even if others are breached.

Creating new governance systems

There have been few successful attempts at regulating cryptocurrency at the transnational level; most governance frameworks, where they exist, remain at the national level. There are therefore substantial opportunities for international cooperation on crypto governance, and efforts to create multilateral networks and partnerships between the private and public sectors are growing. The Digital Currency Governance Consortium, for example, is composed of 80 organizations across the globe and helps facilitate discussions around promoting competitiveness, financial stability and protections, and regulatory frameworks for cryptocurrency.

Risks

A user in the Philippines receives a transaction confirmation. Users purchase cryptocurrency with a credit card, debit card, or bank account, or through mining. Photo credit: Brooke Patterson/USAID.

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with cryptocurrencies in DRG work, as well as how to mitigate unintended – and intended – consequences.

Anonymity

While no central authority records cryptocurrency transactions, the public nature of the transactions does not prevent governments from recording them. An identity that can be associated with records on a blockchain is a particular problem under totalitarian surveillance regimes. The central internet regulator in China, for example, proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers. Moreover, in order to be traded or exchanged into an established fiat currency, a new digital currency would need to incorporate Know Your Customer (KYC), Anti-Money Laundering (AML), and Combating the Financing of Terrorism (CFT) regulations into its process for signing up new users and validating their identities. These processes pose a high barrier to undocumented migrants and anyone else not holding a valid government ID.

As described in the case study below, the partially anarchical environment of cryptocurrencies can also foster criminal activity.

Case Study: The Dark Side of the Anonymous User Bitcoin and other cryptocurrencies are praised for supporting financial transactions that do not reveal a user’s identity. But this has made them popular on “dark web” sites like Silk Road, where cryptocurrency can be exchanged for illegal goods and services like drugs, weapons, or sex work. The Silk Road was eventually taken down by the U.S. Federal Bureau of Investigation, when its founder, Ross Ulbricht, used the same name to advertise the site and seek employees in another forum, linking to a Gmail address. Google provided the contents of that address to the authorities when subpoenaed.

The lessons to take from the Silk Road case are that anonymity is rarely perfect and unbreakable, and cryptocurrency’s identity protection is not an ironclad guarantee; law enforcement officials and governments have steadily expanded the regulatory tools at their disposal and international cooperation on crimes involving cryptocurrency. On a public blockchain, a single identity slip (even in some other forum) can tie all of the transactions of that cryptocurrency account to one user. The owner of that wallet can then be connected to their subsequent purchases, as easily as a cookie tracks a user’s web browsing activity.
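
The linking described here is trivial to perform on a public ledger. In this hypothetical sketch (the addresses and amounts are invented for illustration), a single address-to-identity match exposes an account’s entire transaction history:

```python
# A public blockchain ledger: every transaction is visible to everyone.
public_ledger = [
    {"from": "addr_9f2c", "to": "addr_71ab", "amount": 1.2},
    {"from": "addr_71ab", "to": "addr_03de", "amount": 0.4},
    {"from": "addr_55e0", "to": "addr_71ab", "amount": 2.0},
]

# One identity slip in a forum post ties addr_71ab to a real person.
exposed_address = "addr_71ab"

# Every past (and future) transaction of that account is now attributable.
linked = [tx for tx in public_ledger
          if exposed_address in (tx["from"], tx["to"])]
print(len(linked))  # 3: the account's full visible history
```

In practice, investigators go further by clustering addresses that are spent together, which is why “pseudonymous” is not the same as “anonymous.”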

Lack of Governance

The lack of a central body greatly increases the risk of investing in a cryptocurrency. There is little to no recourse for users if the system is attacked digitally and their currency is stolen. In 2022, criminals hacked the FTX exchange and stole $415 million worth of cryptocurrency, one of the largest hacks in history, just hours after the company was rocked by an embezzlement scandal. The incident led government regulators to increase scrutiny of the sector, as users were left unable to recover much of the stolen funds.

Regulatory Uncertainty

The legal and regulatory frameworks for blockchain are developing at a slower pace than the technology. Each jurisdiction – whether a single country or a regional bloc such as the European Union – regulates cryptocurrencies differently, and there is not yet a global financial standard for regulating them. The Gulf States, for example, have enacted a number of different laws on cryptocurrencies, including outright bans in the United Arab Emirates and Saudi Arabia. Other countries have developed tax laws, anti-money-laundering laws, and anti-terrorism laws to regulate cryptocurrencies. In many places, cryptocurrency is taxed as property rather than as currency.

Cryptocurrency’s commitment to autonomy – that is, its separation from any fiat currency – has positioned it as an antagonist to many established regulatory bodies. Observers note that eliminating the ability of intermediaries (e.g., governments or banks) to collect transaction fees alters existing power balances and may trigger prohibitive regulations even as it temporarily decreases financial costs. Thus, there is always a risk that governments will develop policies unfavorable to financial technologies (fintech), rendering cryptocurrency and mobile money useless within their borders. The constantly evolving nature of fintech law makes the terrain difficult for any new digital currency.

Environmental Inefficiency

The larger a proof-of-work blockchain grows, the more computational power it requires. In late 2019, the University of Cambridge estimated that Bitcoin used 0.55% of global electricity consumption, roughly equivalent to the annual usage of a country like Malaysia or Sweden.

Digital Literacy and Access Requirements

The blockchain technology underlying cryptocurrencies requires access to the internet, so areas with inadequate infrastructure or capacity are generally not usable contexts for cryptocurrencies, although limited possibilities for using cryptocurrency without internet access do exist. “This digital divide also extends to technological understanding between those who know how to ‘operate securely on the Internet, and those who do not’”, as noted by the DH Network. Cryptocurrency apps are not usable on lower-end devices; users need a smartphone or computer. The apps themselves involve a steep learning curve. Additionally, the slow speed of transactions – which can take minutes or up to an hour – is a significant disadvantage, especially compared to the seconds-fast speed of standard Visa transactions. Lastly, using platforms like Bitcoin can be particularly tricky for groups with lower rates of digital literacy and for those with fewer resources, who are less financially resilient to the volatility of the crypto market. Given the lack of consumer protections and regulation around cryptocurrency in certain areas, and the lack of awareness of the existing risks, lower-income users and investors are more likely to face negative financial consequences during market fluctuations. Recently, however, some countries, like Ghana and the Gambia, have launched government initiatives to bridge the digital literacy divide and connect otherwise marginalized groups with the tools needed to use crypto and other forms of emerging tech effectively.

Back to top

Questions

If you are trying to understand the implications of cryptocurrencies in your work environment, or are considering using cryptocurrencies as part of your DRG programming, ask yourself these questions:

  1. Do the issues you or your organization are seeking to address require cryptocurrency? Can more traditional currency solutions apply to the problem?
  2. Is cryptocurrency an appropriate currency for the populations you are working with? Will it help them access the resources they need? Is it accepted by the other relevant stakeholders?
  3. Do you or your organization need an immutable database distributed across multiple servers? Would it be ok to have the currency and transactions connected to a central server?
  4. Is the cryptocurrency you wish to use viable? Do you trust the currency and have good reason to assume it will be sufficiently stable in the future?
  5. Is the currency legal in the areas where you will be operating? If not, will this pose problems for your organization?
  6. How will you obtain this currency? What risks are involved? What external actors will you be reliant on?
  7. Will the users of this currency be able to benefit from it easily and safely? Will they have the required devices and knowledge?

Back to top

Case Studies

Mobile money agency in Ghana. The use of cryptocurrencies in the developing world can facilitate the development of small- to medium-sized enterprises looking to enter international trade. Photo credit: John O’Bryan/ USAID.
Crypto is helping connect people in low-income countries to global markets

For many humanitarian actors, the ideal role for cryptocurrencies is to facilitate the transfer of remittances to families across borders. This is especially useful during conflicts, when traditional banking systems may shut down. Cross-border transfers can be costly and subject to complicated regulations, but apps like Strike are helping to ease the process. Strike and Bitnob partnered to allow people in Kenya, Nigeria, and Ghana to easily receive instant payments from U.S. bank accounts through the Bitcoin Lightning Network and convert them to local currency. Bitcoin apps and other fintech are highly useful for upper-middle-class entrepreneurs in lower-income countries who are building international businesses through trade and online commerce, and emerging apps like Strike may help bring banking accessibility to underbanked areas.

Using Crypto to increase accessibility in authoritarian regimes

Some human rights activists have argued that cryptocurrency helps those in authoritarian regimes maintain financial ties to the outside world. Given the anonymity associated with cryptocurrency transactions, the technology can enable trade and transactions where they may not otherwise be possible. In China and Russia, for example, financial transactions that would normally be monitored by the state can be circumvented by using cryptocurrency. Bitcoin and similar platforms also offer refugees and other persons without traditional forms of identification a way to access their finances. Conversely, critics have argued that various cryptocurrencies are often used to purchase black-market goods, which often involve exploitative industries like drug and sex trafficking, or may be used by widely sanctioned countries like North Korea. Still, in situations where people may be cut off from traditional forms of banking, crypto may fill an important gap.

Cryptocurrencies in Volatile Markets

In recent years, countries with volatile markets have been slowly incorporating cryptocurrency in response to financial crises as citizens search for new options. Bitcoin has been used to purchase medicine and Amazon gift cards and to send remittances. Cryptocurrency has also become increasingly adopted at the institutional level. In January 2023, two years after formally recognizing Bitcoin as legal currency, El Salvador introduced legislation to regulate it. Despite hopes that Bitcoin would ease the process of sending remittances and increase accessibility for underbanked people, widespread use of the currency has not caught on, as users cite high fees as a reason for avoiding it. Moreover, many still cite uncertainty and a lack of knowledge as reasons they have not switched from traditional forms of banking and exchange. The introduction of Bitcoin has also worsened El Salvador’s credit rating and reportedly caused further division with the International Monetary Fund (IMF). Additionally, Bitcoin is highly volatile, as its value depends on supply and demand rather than being pegged to an asset like most other currencies, although the government of El Salvador has introduced legislation to regulate crypto exchanges.

Venezuela, which has faced unprecedented inflation, has also turned to crypto. Between August 2014 and November 2016, the number of Bitcoin users in Venezuela rose from 450 to 85,000. The financial crisis in the country has prompted many of its citizens to search for new options. There are no laws regulating Bitcoin in Venezuela, which has emboldened people further. Some countries with financial markets that have experienced rates of inflation similar to Venezuela’s – such as South Sudan, Zimbabwe, and Argentina – have relatively active cryptocurrency markets.

Cryptocurrencies for Social Impact

Many new cryptocurrencies have attempted to monetize the social impact of their users. SolarCoin rewards people for installing solar panels. Tree Coin gathers resources for planting trees in the developing world (one way to fight climate change) and rewards local people for maintaining those trees. Impak Coin is “the first app to reward and simplify responsible consumption” by helping users find socially responsible businesses. The coin it offered is intended to be used to buy products and services from these businesses and to support users in microlending and crowdlending. It was part of an ecosystem of technologies that included ratings based on the UN’s Sustainable Development Goals and the Impact Management Project. True to its principles, Impak has proposed to begin assessing its own impact. In the future, the impact of SolarCoin may be limited, as its value remains relatively low compared to set-up costs, potentially deterring people from using it more widely. In contrast, Tree Coin may be having a more direct impact on local communities, as demonstrated by its mangrove restoration project.

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Data Protection

What is data protection?

Data protection refers to practices, measures, and laws that aim to prevent certain information about a person from being collected, used, or shared in a way that is harmful to that person.

Interview with fisherman in Bone South Sulawesi, Indonesia. Data collectors must receive training on how to avoid bias during the data collection process. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Data protection isn’t new. Bad actors have always sought to gain access to individuals’ private records. Before the digital era, data protection meant protecting individuals’ private data from someone physically accessing, viewing, or taking files and documents. Data protection laws have been in existence for more than 40 years.

Now that many aspects of peoples’ lives have moved online, private, personal, and identifiable information is regularly shared with all sorts of private and public entities. Data protection seeks to ensure that this information is collected, stored, and maintained responsibly and that unintended consequences of using data are minimized or mitigated.

What are data?

Data refer to digital information, such as text messages, videos, clicks, digital fingerprints, a bitcoin, search history, and even mere cursor movements. Data can be stored on computers, mobile devices, in clouds, and on external drives. It can be shared via email, messaging apps, and file transfer tools. Your posts, likes, and retweets, your videos about cats and protests, and everything you share on social media are data.

Metadata are a subset of data: information stored within a document or file, acting as an electronic fingerprint of that document or file. Take an email as an example. If you send an email to your friend, the text of the email is data. The email itself, however, carries all sorts of metadata, such as who created it, who the recipient is, the IP address of the author, and the size of the email.
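The email example can be made concrete with Python's standard-library `email` module. The addresses and subject below are hypothetical, purely for illustration; the point is only to show which parts of a message are data and which are metadata.

```python
# Sketch of the data/metadata distinction using Python's stdlib email module.
# All names and addresses here are hypothetical examples.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"      # metadata: who created the message
msg["To"] = "bob@example.org"          # metadata: the recipient
msg["Subject"] = "Meeting notes"       # metadata: describes the content
msg.set_content("See you at 10am.")    # data: the text of the email itself

# The headers travel with the message and can reveal a great deal on
# their own, even if the body is never read.
for header, value in msg.items():
    print(header, "->", value)
```

Real messages carry far more metadata than this sketch sets explicitly (routing servers, timestamps, client software), which is why metadata alone can be so revealing.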

Large amounts of data get combined and stored together. These large files containing thousands or millions of individual files are known as datasets. Datasets then get combined into very large datasets. These very large datasets, referred to as big data, are used to train machine-learning systems.

Personal Data and Personally Identifiable Information

Data can seem quite abstract, but the pieces of information are very often reflective of the identities or behaviors of actual persons. Not all data require protection, but some data, even metadata, can reveal a lot about a person. Such data are referred to as Personally Identifiable Information (PII), commonly called personal data. PII is information that can be used to distinguish or trace an individual’s identity, such as a name, passport number, or biometric data like fingerprints and facial patterns. PII is also information that is linked or linkable to an individual, such as date of birth and religion.

Personal data can be collected, analyzed and shared for the benefit of the persons involved, but they can also be used for harmful purposes. Personal data are valuable for many public and private actors. For example, they are collected by social media platforms and sold to advertising companies. They are collected by governments to serve law-enforcement purposes like the prosecution of crimes. Politicians value personal data to target voters with certain political information. Personal data can be monetized by people for criminal purposes such as selling false identities.

“Sharing data is a regular practice that is becoming increasingly ubiquitous as society moves online. Sharing data does not only bring users benefits, but is often also necessary to fulfill administrative duties or engage with today’s society. But this is not without risk. Your personal information reveals a lot about you, your thoughts, and your life, which is why it needs to be protected.”

Access Now’s ‘Creating a Data Protection Framework’, November 2018.

How does data protection relate to the right to privacy?

The right to protection of personal data is closely interconnected to, but distinct from, the right to privacy. The understanding of what “privacy” means varies from one country to another based on history, culture, or philosophical influences. Data protection is not always considered a right in itself. Read more about the differences between privacy and data protection here.

Data privacy is also a common way of speaking about sensitive data and the importance of protecting it against unintentional sharing and undue or illegal gathering and use of data about an individual or group. USAID’s Digital Strategy for 2020–2024 defines data privacy as ‘the right of an individual or group to maintain control over and confidentiality of information about themselves’.

How does data protection work?

Participant of the USAID WeMUNIZE program in Nigeria. Data protection must be considered for existing datasets as well. Photo credit: KC Nwakalor for USAID / Digital Development Communications

Personal data can and should be protected by measures that shield a person’s identity and other information about them from harm and that respect their right to privacy. Examples of such measures include determining which data are vulnerable based on privacy-risk assessments; keeping sensitive data offline; limiting who has access to certain data; anonymizing sensitive data; and only collecting necessary data.

A number of established principles and practices exist to protect sensitive data. In many countries, these measures are enforced via laws containing the key principles that guarantee data protection.

“Data Protection laws seek to protect people’s data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.”

Privacy International on data protection

A couple of important terms and principles are outlined below, based on The European Union’s General Data Protection Regulation (GDPR).

  • Data Subject: any person whose personal data are being processed, such as by being added to a contacts database or to a mailing list for promotional emails.
  • Processing: any operation performed on personal data, whether manual or automated.
  • Data Controller: the actor that determines the purposes for, and means by which, personal data are processed.
  • Data Processor: the actor that processes personal data on behalf of the controller, often a third-party external to the controller, such as a party that offers mailing lists or survey services.
  • Informed Consent: individuals understand and agree to how their personal data are collected, accessed, used, and/or shared, and know how they can withdraw their consent.
  • Purpose limitation: personal data are only collected for a specific and justified use and the data cannot be used for other purposes by other parties.
  • Data minimization: data collection is minimized and limited to essential details.


Healthcare provider in Eswatini. Quality data and protected datasets can accelerate impact in the public health sector. Photo credit: Ncamsile Maseko & Lindani Sifundza.

Access Now’s guide lists eight data-protection principles drawn largely from international standards, in particular the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (widely known as Convention 108) and the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines. These are considered “minimum standards” for the protection of fundamental rights by countries that have ratified international data-protection frameworks.

A development project that uses data, whether establishing a mailing list or analyzing datasets, should comply with laws on data protection. When there is no national legal framework, international principles, norms, and standards can serve as a baseline to achieve the same level of protection of data and people. Compliance with these principles may seem burdensome, but implementing a few steps related to data protection from the beginning of the project will help to achieve the intended results without putting people at risk.

The figure above shows how common practices of civil society organizations relate to the terms and principles of the data protection framework of laws and norms.

The European Union’s General Data Protection Regulation (GDPR)

The data protection law in the EU, the GDPR, went into effect in 2018. It is often considered the world’s strongest data protection law. The law aims to enhance how people can access their information and limits what organizations can do with personal data from EU citizens. Although coming from the EU, the GDPR can also apply to organizations that are based outside the region when EU citizens’ data are concerned. GDPR, therefore, has a global impact.

The obligations stemming from the GDPR and other data protection laws may have broad implications for civil society organizations. For information about the GDPR-compliance process and other resources, see the European Center for Not-for-Profit Law’s guide on data-protection standards for civil society organizations.

Notwithstanding its protections, the GDPR also has been used to harass CSOs and journalists. For example, a mining company used a provision of the GDPR to try to force Global Witness to disclose sources it used in an anti-mining campaign. Global Witness successfully resisted these attempts.

Personal or organizational protection tactics

How to protect your own sensitive information or the data of your organization will depend on your specific situation in terms of activities and legal environment. The first step is to assess your specific needs in terms of security and data protection. For example, which information could, in the wrong hands, have negative consequences for you and your organization?

Digital-security specialists have developed online resources you can use to protect yourself. Examples include the Security Planner, an easy-to-use guide with expert-reviewed advice for staying safer online, including recommendations on implementing basic online practices, and the Digital Safety Manual, which offers information and practical tips on enhancing digital security for government officials working with civil society and Human Rights Defenders (HRDs). The manual offers 12 cards tailored to common activities in the collaboration between governments (and other partners) and civil society organizations. The first card helps you assess your digital security.

Digital Safety Manual


The Digital First Aid Kit is a free resource for rapid responders, digital security trainers, and tech-savvy activists to better protect themselves and the communities they support against the most common types of digital emergencies. Global digital safety responders and mentors, for example the Digital Defenders Partnership and the Computer Incident Response Centre for Civil Society (CiviCERT), can help with specific questions or mentorship.

Back to top

How is data protection relevant in civic space and for democracy?

Many initiatives that aim to strengthen civic space or improve democracy use digital technology. There is a widespread belief that the increasing volume of data and the tools to process them can be used for good. And indeed, integrating digital technology and the use of data in democracy, human rights, and governance programming can have significant benefits; for example, they can connect communities around the globe, reach underserved populations better, and help mitigate inequality.

“Within social change work, there is usually a stark power asymmetry. From humanitarian work, to campaigning, documenting human rights violations to movement building, advocacy organisations are often led by – and work with – vulnerable or marginalised communities. We often approach social change work through a critical lens, prioritising how to mitigate power asymmetries. We believe we need to do the same thing when it comes to the data we work with – question it, understand its limitations, and learn from it in responsible ways.”

What is Responsible Data?

When quality information is available to the right people when they need it, the data are protected against misuse, and the project is designed with the protection of its users in mind, it can accelerate impact.

  • USAID’s funding of improved vineyard inspection using drones and GIS data in Moldova allows farmers to quickly inspect, identify, and isolate vines infected by a phytoplasma disease of the vine.
  • Círculo is a digital tool for female journalists in Mexico to help them create strong networks of support, strengthen their safety protocols and meet needs related to the protection of themselves and their data. The tool was developed with the end-users through chat groups and in-person workshops to make sure everything built into the app was something they needed and could trust.

At the same time, data-driven development brings a new responsibility to prevent the misuse of data when designing, implementing, or monitoring development projects. When the use of personal data is a means to identify people who are eligible for humanitarian services, privacy and security concerns are very real.

  • Refugee camps in Jordan have required community members to allow scans of their irises to purchase food and supplies and take out cash from ATMs. This practice has not integrated meaningful ways to ask for consent or allow people to opt out. Additionally, the use and collection of highly sensitive personal data like biometrics to enable daily purchasing habits is disproportionate, because other, less personal digital technologies are available and used in many parts of the world.

Governments, international organizations, and private actors can all – even unintentionally – misuse personal data for purposes other than those intended, negatively affecting the well-being of the people to whom the data relate. Some examples have been highlighted by Privacy International:

  • The case of Tullow Oil, the largest oil and gas exploration and production company in Africa, shows how a private actor commissioned extensive and detailed research by a micro-targeting research company into the behaviors of local communities in order to develop ‘cognitive and emotional strategies to influence and modify Turkana attitudes and behavior’ to Tullow Oil’s advantage.
  • In Ghana, the Ministry of Health commissioned a large study on health practices and requirements in the country. The data were then used at the order of the ruling political party to model future vote distribution within each constituency based on how respondents said they would vote, and to run a negative campaign aimed at discouraging opposition supporters from voting.

There are resources and experts available to help with this process. The Principles for Digital Development website offers recommendations, tips, and resources to protect privacy and security throughout a project lifecycle: during analysis and planning, when designing and developing projects, and when deploying and implementing them. Measurement and evaluation are also covered. The Responsible Data website offers the Illustrated Hand-Book of the Modern Development Specialist, with attractive, understandable guidance through all steps of a data-driven development project: designing the project, managing data (with specific information about collecting, understanding, and sharing them), and closing the project.

NGO worker prepares for data collection in Buru Maluku, Indonesia. When collecting new data, it’s important to design the process carefully and think through how it affects the individuals involved. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Back to top

Opportunities

Data protection measures advance democracy, human rights, and good governance. Read below to learn how to think about data protection in your work more effectively and safely.

Privacy respected and people protected

Implementing data-protection standards in development projects protects people against potential harm from abuse of their data. Abuse happens when an individual, company, or government accesses personal data and uses them for purposes other than those for which the data were collected. Intelligence services and law enforcement authorities often have legal and technical means to enforce access to datasets and abuse the data. Individuals hired by governments can access datasets by hacking the security of software or clouds. This has often led to intimidation, silencing, and arrests of human rights defenders and civil society leaders who criticize their government. Privacy International maps examples of governments and private actors abusing individuals’ data.

Strong protective measures against data abuse ensure respect for the fundamental right to privacy of the people whose data are collected and used. Protective measures allow positive development such as improving official statistics, better service delivery, targeted early warning mechanisms, and effective disaster response.

It is important to determine how data are protected throughout the entire life cycle of a project. Individuals should also be ensured of protection after the project ends, either abruptly or as intended, when the project moves into a different phase or when it receives funding from different sources. Oxfam has developed a leaflet to help anyone handling, sharing, or accessing program data to properly consider responsible data issues throughout the data lifecycle, from making a plan to disposing of data.

Back to top

Risks

The collection and use of data can also create risks in civil society programming. Read below on how to discern the possible dangers associated with collection and use of data in DRG work, as well as how to mitigate unintended – and intended – consequences.

Unauthorized access to data

Data need to be stored somewhere: on a computer or an external drive, in a cloud, or on a local server. Wherever the data are stored, precautions need to be taken to protect them from unauthorized access and to avoid revealing the identities of vulnerable persons. The level of protection needed depends on the sensitivity of the data, i.e., the extent to which negative consequences would follow if the information fell into the wrong hands.

Data can be stored on a nearby and well-protected server connected to drives with strong encryption and very limited access, a method that keeps you in control of the data you own. Cloud services offered by well-known tech companies typically provide only basic protection measures and wide access to the dataset in their free versions. More advanced security features, such as storage of data in certain jurisdictions with data-protection legislation, are available for paying customers. Guidelines on how to secure private data stored and accessed in the cloud help in understanding the various aspects of cloud services and in deciding what suits a specific situation.

Every system needs to be secured against cyberattacks and manipulation. One common challenge is finding a way to protect identities in the dataset, for example by removing all information that could identify individuals, i.e., anonymizing it. Proper anonymization is of key importance and harder than often assumed.
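A minimal sketch can show why anonymization is harder than it looks. The record layout and field names below are hypothetical; hashing the direct identifier (a common first step) yields only pseudonymization, because the remaining fields can still single a person out when combined with other datasets.

```python
# Minimal sketch, assuming a hypothetical record layout, of why simply
# hashing an identifier is pseudonymization rather than true anonymization.
import hashlib

record = {"name": "A. Example", "district": "Kampala", "service": "clinic visit"}

def pseudonymize(rec: dict, secret_salt: bytes) -> dict:
    """Replace the direct identifier with a salted hash."""
    out = dict(rec)
    out["name"] = hashlib.sha256(secret_salt + rec["name"].encode()).hexdigest()[:12]
    return out

safe = pseudonymize(record, secret_salt=b"keep-this-salt-offline")
# The direct identifier is gone, but the remaining fields ("district",
# "service") may still re-identify someone when cross-referenced with
# other data sources, so this alone is not proper anonymization.
print(safe)
```

Techniques such as aggregation, generalization, or k-anonymity address these residual re-identification risks, which is exactly why anonymization needs deliberate design rather than a single hashing step.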

One can imagine that a dataset of GPS locations of People Living with Albinism across Uganda requires strong protection. Persecution is based on the belief that certain body parts of people with albinism can transmit magical powers, or on the presumption that they are cursed and bring bad luck. A spatial-profiling project mapping the exact location of individuals belonging to a vulnerable group can improve outreach and delivery of support services to them. However, hacking of the database or other unlawful access to their personal data might put them at risk from people wanting to exploit or harm them.

One could also imagine that the people operating an alternative system to send out warning sirens for air strikes in Syria run the risk of being targeted by the authorities: while data collection and sharing by this group aims to prevent death and injury, it also diminishes the impact of air strikes by the Syrian authorities. The location data of the individuals running and contributing to the system therefore need to be protected against access or exposure.

Another risk is that private actors who run or cooperate in data-driven projects could be tempted to sell data if they are offered large sums of money. Such buyers could be advertising companies or politicians that aim to target commercial or political campaigns at specific people.

The Tiko system designed by social enterprise Triggerise rewards young people for positive health-seeking behaviors, such as visiting pharmacies and seeking information online. Among other things, the system gathers and stores sensitive personal and health information about young female subscribers who use the platform to seek guidance on contraceptives and safe abortions, and it tracks their visits to local clinics. If these data are not protected, governments that have criminalized abortion could potentially access and use that data to carry out law-enforcement actions against pregnant women and medical providers.

Unsafe collection of data

When you are planning to collect new data, it is important to carefully design the collection process and think through how it affects the individuals involved. It should be clear from the start what kind of data will be collected, for what purpose, and that the people involved agree with that purpose. For example, an effort to map people with disabilities in a specific city can improve services. However, the database should not expose these people to risks, such as attacks or stigmatization that can be targeted at specific homes. Also, establishing this database should answer to the needs of the people involved and not be driven by the mere wish to use data. For further guidance, see the chapter Getting Data in the Hand-book of the Modern Development Specialist and the OHCHR Guidance to adopt a Human Rights Based Approach to Data, focused on collection and disaggregation.

If data are collected in person by people recruited for this process, proper training is required. They need to be able to create a safe space to obtain informed consent from people whose data are being collected and know how to avoid bias during the data-collection process.

Unknowns in existing datasets

Data-driven initiatives can either gather new data, for example, through a survey of students and teachers in a school or use existing datasets from secondary sources, for example by using a government census or scraping social media sources. Data protection must also be considered when you plan to use existing datasets, such as images of the Earth for spatial mapping. You need to analyze what kind of data you want to use and whether it is necessary to use a specific dataset to reach your objective. For third-party datasets, it is important to gain insight into how the data that you want to use were obtained, whether the principles of data protection were met during the collection phase, who licensed the data and who funded the process. If you are not able to get this information, you must carefully consider whether to use the data or not. See the Hand-book of the Modern Development Specialist on working with existing data.

Benefits of cloud storage

A trusted cloud-storage strategy offers greater security and ease of implementation compared to securing your own server. While determined adversaries can still hack into individual computers or local servers, it is significantly more challenging for them to breach the robust security defenses of reputable cloud storage providers like Google or Microsoft. These companies deploy extensive security resources and have a strong business incentive to ensure maximum protection for their users. By relying on cloud storage, common risks such as physical theft, device damage, or malware can be mitigated, since most documents and data are securely stored in the cloud. In case of incidents, it is convenient to resynchronize and resume operations on a new or cleaned computer, with little to no valuable information accessible locally.

Backing up data

Regardless of whether data are stored on physical devices or in the cloud, having a backup is crucial. Physical device storage carries the risk of data loss due to various incidents such as hardware damage, ransomware attacks, or theft. Cloud storage provides an advantage in this regard, as it eliminates the reliance on specific devices that can be compromised or lost. Built-in backup solutions like Time Machine for Macs and File History for Windows devices, as well as automatic cloud backups for iPhones and Androids, offer some level of protection. However, even with cloud storage, the risk of human error remains, making it advisable to consider additional cloud backup solutions like Backupify or SpinOne Backup. For organizations using local servers and devices, secure backups become even more critical. It is recommended to encrypt external hard drives using strong passwords, utilize encryption tools like VeraCrypt or BitLocker, and keep backup devices in a separate location from the primary devices. Storing a copy in a highly secure location, such as a safe deposit box, can provide an extra layer of protection in case of disasters that affect both computers and their backups.
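One small, tool-agnostic piece of a backup routine is verifying that a restored copy matches the original. The sketch below, with hypothetical file names and contents, records SHA-256 checksums for that purpose; it complements, rather than replaces, the encryption tools named above.

```python
# Sketch of backup integrity verification: record SHA-256 checksums so a
# restored copy can be checked against the original. File names and
# contents are hypothetical examples.
import hashlib
import os
import tempfile

def checksum(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a sample file, record its checksum, then verify a "restored" copy.
with tempfile.TemporaryDirectory() as tmp:
    original = os.path.join(tmp, "notes.txt")
    with open(original, "w") as f:
        f.write("program data")
    manifest = {os.path.basename(original): checksum(original)}

    restored = os.path.join(tmp, "restored-notes.txt")
    with open(restored, "w") as f:
        f.write("program data")  # simulate restoring from backup
    ok = checksum(restored) == manifest["notes.txt"]
    print("backup verified:", ok)
```

Keeping such a manifest alongside (but separate from) the backup makes silent corruption or tampering detectable at restore time.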

Questions

If you are trying to understand the implications of lacking data protection measures in your work environment, or are considering using data as part of your DRG programming, ask yourself these questions:

  1. Are data protection laws adopted in the country or countries concerned? Are these laws aligned with international human rights law, including provisions protecting the right to privacy?
  2. How will the use of data in your project comply with data protection and privacy standards?
  3. What kind of data do you plan to use? Are personal or other sensitive data involved?
  4. What could happen to the persons related to that data if the government accesses these data?
  5. What could happen if the data are sold to a private actor for purposes other than those intended?
  6. What precautionary and mitigation measures are taken to protect the data and the individuals related to the data?
  7. How are the data protected against manipulation, unauthorized access, and misuse by third parties?
  8. Do you have sufficient expertise integrated during the entire course of the project to make sure that data are handled well?
  9. If you plan to collect data, what is the purpose of the collection? Is collecting data necessary to achieve this purpose?
  10. How are collectors of personal data trained? How is informed consent generated when data are collected?
  11. If you are creating or using databases, how is the anonymity of the individuals related to the data guaranteed?
  12. How is the data that you plan to use obtained and stored? Is the level of protection appropriate to the sensitivity of the data?
  13. Who has access to the data? What measures are taken to guarantee that data are accessed for the intended purpose?
  14. Which other entities – companies, partners – process, analyze, visualize, and otherwise use the data in your project? What measures are taken by them to protect the data? Have agreements been made with them to avoid monetization or misuse?
  15. If you build a platform, how are the registered users of your platform protected?
  16. Is the database, data-storage system, or platform open to audit by independent researchers?

Case Studies

People Living with HIV Stigma Index and Implementation Brief

The People Living with HIV Stigma Index is a standardized questionnaire and sampling strategy to gather critical data on intersecting stigmas and discrimination affecting people living with HIV. It monitors HIV-related stigma and discrimination in various countries and provides evidence for advocacy in those countries. The data in this project are the experiences of people living with HIV. The implementation brief provides insight into data protection measures. People living with HIV are at the center of the entire process, continuously linking the data collected about them back to the people themselves, from research design through implementation to using the findings for advocacy. Data are gathered through a peer-to-peer interview process, with people living with HIV from diverse backgrounds serving as trained interviewers. A standard implementation methodology has been developed, including the establishment of a steering committee with key stakeholders and population groups.

RNW Media’s Love Matters Program Data Protection

RNW Media’s Love Matters Program offers online platforms that foster discussion and information-sharing on love, sex, and relationships among 18-30 year-olds in areas where information on sexual and reproductive health and rights (SRHR) is censored or taboo. RNW Media’s digital teams introduced creative approaches to data processing and analysis, Social Listening methodologies, and Natural Language Processing techniques to make the platforms more inclusive, create targeted content, and identify influencers and trending topics. Governments have imposed restrictions such as license fees or registrations for online influencers as a way of monitoring and blocking “undesirable” content, and RNW Media has invested in the security of its platforms and the digital literacy of its users to protect their sensitive personal information from unauthorized access. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, p. 12-14.

Amnesty International Report

Thousands of democracy and human rights activists and organizations rely on secure communication channels every day to maintain the confidentiality of conversations in challenging political environments. Without such security practices, sensitive messages can be intercepted and used by authorities to target activists and break up protests. One prominent and well-documented example of this occurred in the aftermath of the 2010 elections in Belarus. As detailed in this Amnesty International report, phone recordings and other unencrypted communications were intercepted by the government and used in court against prominent opposition politicians and activists, many of whom spent years in prison. In 2020, another swell of post-election protests in Belarus saw thousands of protestors adopt user-friendly, secure messaging apps that were not as readily available just 10 years prior to protect their sensitive communications.

Norway Parliament Data

The Storting, Norway’s parliament, experienced a second cyberattack, this time involving the exploitation of recently disclosed vulnerabilities in Microsoft Exchange. These vulnerabilities, known as ProxyLogon, were addressed by emergency security updates released by Microsoft. The initial attacks were attributed to a state-sponsored hacking group from China called HAFNIUM, which used the vulnerabilities to compromise servers, establish backdoor web shells, and gain unauthorized access to the internal networks of various organizations. The repeated cyberattacks on the Storting and the involvement of various hacking groups underscore the importance of data protection, timely security updates, and proactive measures to mitigate cyber risks. Organizations must remain vigilant, stay informed about the latest vulnerabilities, and take appropriate action to safeguard their systems and data.

Girl Effect

Girl Effect, a creative non-profit working where girls are marginalized and vulnerable, uses media and mobile technology to empower girls. The organization embraces digital tools and interventions and acknowledges that any organization that uses data also has a responsibility to protect the people it talks to or connects online. Its ‘Digital safeguarding tips and guidance’ provides in-depth guidance on implementing data protection measures while working with vulnerable people. Citing Girl Effect as inspiration, Oxfam has developed and implemented a Responsible Data Policy and shares many supporting resources online. The publication ‘Privacy and data security under GDPR for quantitative impact evaluation’ details the data protection measures Oxfam implements while conducting quantitative impact evaluations through digital and paper-based surveys and interviews.

Digital Gender Divide

What is the digital gender divide?

The digital gender divide refers to the gap in access to and use of the internet between women* and men, which can perpetuate and exacerbate gender inequalities and leave women out of an increasingly digital world. Despite the rapid growth of internet access around the globe (95% of people in 2023 live within reach of a mobile cellular network), women are still 6% less likely to use the internet than men, a gap that is actually widening in many low- and middle-income countries (LMICs), where, in 2023, women are 12% less likely than men to own a mobile phone and 19% less likely to access the internet on a mobile device.

Civil society leader in La Paz, Honduras. The gender digital divide affects every aspect of women’s lives. Photo credit: Honduras Local Governance Activity / USAID Honduras.

Though 6% might seem like a relatively small gap, mobile phones and smartphones have surpassed computers as the primary way people access the internet, so that statistic translates to 310 million fewer women than men online in LMICs. Without access to the internet, women cannot fully participate in the economy, pursue educational opportunities, or make full use of legal and social support systems.

The digital gender divide does not stop at access to the internet, however; it also encompasses the gap in how women and men use the internet once they get online. Studies show that even when women own mobile phones, they tend to use them less frequently and less intensively than men, especially for more sophisticated services such as searching for information, looking for jobs, or engaging in civic and political spaces. Additionally, less locally relevant content is available to women internet users, because women themselves are more often content consumers than content creators. Furthermore, women face greater barriers to using the internet in innovative and recreational ways, owing to unwelcoming online communities and cultural expectations that the internet is not for women and that women should participate online only in the context of their duty to their families.

The digital gender divide is also apparent in the exclusion of women from leadership or development roles in the information and communications technology (ICT) sector. In fact, the proportion of women working in the ICT sector has been declining over the last 20 years. According to a 2023 report, in the United States alone, women only hold around 23% of programming and software development jobs, down from 37% in the 1980s. This contributes to software, apps, and tools rarely reflecting the unique needs that women have, further alienating them. Apple, for instance, whose tech employees were 75.1% male in 2022, did not include a menstrual cycle tracker in its Health app until 2019, five years after it was launched (though it did have a sodium level tracker and blood alcohol tracker during that time).

Ward nurses providing vaccines in Haiti. Closing the gender digital divide is key to global public health efforts. Photo credit: Karen Kasmauski / MCSP and Jhpiego

A NOTE ON GENDER TERMINOLOGY
All references to “women” (except those citing specific external studies or surveys, whose terminology was set by their respective authors) are gender-inclusive of girls, women, and any person or persons identifying as a woman.

Why is there a digital gender divide?

At the root of the digital gender divide are entrenched traditional gender inequalities, including gender bias, socio-cultural norms, lack of affordability and digital literacy, digital safety issues, and women’s lower (compared to men’s) comfort levels navigating and existing in the digital world. While all of these factors play a part in keeping women from achieving equity in their access to and use of digital technologies, the relative importance of each factor depends largely on the region and individual circumstances.

Affordability

In LMICs especially, the biggest barrier to access is simple: affordability. While the costs of internet access and of devices have been decreasing, they are often still too expensive for many people. This is true for both genders, but women tend to face secondary barriers that keep them from getting access, such as not being financially independent or being passed over by family members in favor of a male relative. Even when women have access to devices, the devices are often registered in a male relative’s name. The consequences of this range from reinforcing the idea that the internet is not a place for women to preventing women from accessing social support systems. In Rwanda, an evaluation of the Digital Ambassador Programme pilot phase found that the costs of data bundles and access to devices were prohibitive for a large number of potential women users, especially in rural areas.

Education

Education is another major barrier for women all over the world. According to 2015 data from the Web Foundation, women in Africa and Asia with at least some secondary education were six times more likely to be online than women with primary schooling or less.

Further, digital skills are required to engage meaningfully with the internet. While digital education varies widely by country (and even within countries), girls are still less likely to go to school overall, and those who do tend to have “lower self-efficacy and interest” in studying Science, Technology, Engineering, and Math (STEM) topics, according to a report by UNICEF and the ITU; STEM topics are often perceived as being ‘for men’ and are therefore less appealing to women and girls. While STEM subjects are not strictly required to use digital technologies, they can expose girls to ICTs and build the skills and confidence girls need to use new and emerging technologies. Studying these subjects is also the first step along the pathway to a career in the ICT field, which in turn is necessary to address inherent bias in technologies created and distributed largely by men. Without encouragement and confidence in their digital skills, women may shy away from opportunities that are perceived to be technologically advanced, even when those opportunities do not actually require a high level of digital knowledge.

Social Norms

Social norms have an outsized impact on many aspects of the digital gender divide because they also drive other barriers. Social norms look different in different communities: in places where women are round-the-clock caregivers, they often do not have time to spend online, while in other situations women are discouraged from pursuing STEM careers. In other cases, the barriers are more strictly cultural. For example, an OECD report indicated that, in India and Egypt, around one-fifth of women believed that the internet “was not an appropriate place for them” for cultural reasons.

Online social norms also play a part in preventing women, especially those from LMICs, from engaging fully with the internet. Much of the digital marketplace is dominated by English and other Western languages, which women may have fewer opportunities to learn due to education inequalities. Furthermore, many online communities, especially those traditionally dominated by men, such as gaming communities, are unfriendly to women, often reaching the extent that women’s safety is compromised.

Online Violence

Scarcity of content that is relevant and empowering for women, along with other barriers that prevent women from participating freely and safely online, is also a fundamental aspect of the digital gender divide. Even when women access online environments, they face a disproportionate risk of gender-based violence (GBV) online: digital harassment, cyberstalking, doxxing, and the non-consensual distribution of images (e.g., “revenge porn”). Gender minorities are also targets of online GBV. Trans activists, for example, have experienced increased vulnerability in digital spaces, especially as they have become more visible and vocal. Cyber harassment of women is so extreme that the UN’s High Commissioner for Human Rights has warned, “if trends continue, instead of empowering women, online spaces may actually widen sex and gender-based discrimination and violence.”

This barrier is particularly harmful to democracy, as the internet has become a key venue for political discussion and activism. Research conducted by the National Democratic Institute has demonstrated that women and girls at all levels of political engagement and in all democratic sectors, from the media to elected office, are affected by the “‘chilling effect’ that drives politically-active women offline and in some cases out of the political realm entirely.” Furthermore, women in the public eye, including women in politics and leadership positions, are more often targeted by this abuse, and in many cultures it is simply accepted as “the cost of doing business” for women who participate in the democratic conversation.

“…if trends continue, instead of empowering women, online spaces may actually widen sex and gender-based discrimination and violence.”

UN’s High Commissioner for Human Rights

How is the digital gender divide relevant in civic space and for democracy?

The UN recognizes the importance of women’s inclusion and participation in a digital society. The fifth Sustainable Development Goal (SDG) calls to “enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women.” Moreover, women’s digital inclusion and technological empowerment are relevant to achieving quality education, creating decent work and economic growth, reducing inequality, and building peaceful and inclusive institutions. While digital technologies offer unparalleled opportunities in areas ranging from economic development to health improvement, to education, cultural development, and political participation, gaps in access to and use of these technologies and heightened safety concerns exacerbate gender inequalities and hinder women’s ability to access resources and information that are key to improving their lives and the wellbeing of their communities.

Further, the ways in which technologies are designed and employed, and how data are collected and used, impact men and women differently, often because of existing disparities. Whether technologies are used to develop artificial intelligence systems, to implement data protection frameworks, or simply for everyday social media, gender considerations should be at the center of decision-making and planning in the democracy, rights, and governance space.

Students in Zanzibar. Without access to the internet, women and girls cannot fully engage in economies, participate in educational opportunities, or access legal systems. Photo credit: Morgana Wingard / USAID.

Initiatives that ignore gender disparities in access to the internet and in ownership and use of mobile phones and other devices will exacerbate existing gender inequalities, especially for the most vulnerable and marginalized populations. In the context of the Covid-19 pandemic and increasing GBV during lockdowns, technology provided some people with resources to address GBV, but it also created new ways to exploit women and chill online discourse. Millions of women and non-binary individuals who faced barriers to accessing the internet and online devices were left with limited avenues for help, whether via instant messaging services, calls to domestic abuse hotlines, or discreet apps that provide disguised support and information to survivors in case of surveillance by abusers. Furthermore, the shift to greater reliance on technology for work, school, medical attention, and other basic aspects of life further limited these women’s engagement in society and exposed women who were active online to more online GBV.

Most importantly, initiatives in the civic space must recognize women’s agency and knowledge and be gender-inclusive from the design stage. Women must participate as co-designers of programs and be engaged as competent members of society with equal potential to devise solutions rather than perceived as passive victims.

Opportunities

There are a number of different areas to engage in that can have a positive impact in closing the digital gender divide. Read below to learn how to more effectively and safely think about some areas that your work may already touch on (or could include).

Widening job and education opportunities

In 2018, the ITU projected that 90% of future jobs will require ICT skills, and employers are increasingly citing digital skills and literacy as necessary for future employees according to the World Economic Forum. As traditional analog jobs in which women are overrepresented (such as in the manufacturing, service, and agricultural sectors) are replaced by automation, it is more vital than ever that women learn ICT skills to be able to compete for jobs. While digital literacy is becoming a requirement for many sectors, new, more flexible job opportunities are also becoming more common, and are eliminating traditional barriers to entry, such as age, experience, or location. Digital platforms can enable women in rural areas to connect with cities, where they can more easily sell goods or services. And part-time, contractor jobs in the “gig economy” (such as ride sharing, food delivery, and other freelance platforms) allow women more flexible schedules that are often necessitated by familial responsibilities.

The internet also expands educational opportunities for girls and women. Online education programs, such as those for refugees, are reaching more and more learners, including girls. Online learning also gives those who missed out on education as children another chance to learn at their own pace, with the flexibility in time and location that women’s responsibilities may require, and it may allow women to participate in class in more equal proportion to men.

Increasing access to financial services

Women make up the majority of the world’s unbanked population; they are more likely than men to lack credit history and the mobility to go to a bank. As such, financial technologies can play a large equalizing role, not only in terms of access to tools but also in terms of how financial products and services can be designed to respond to women’s needs. In the MENA region, for example, where 54% of men but only 42% of women have bank accounts, and up to 14 million unbanked adults send or receive domestic remittances using cash or an over-the-counter service, opportunities to increase women’s financial inclusion through digital financial services are promising. Several governments have experimented with mobile technology for Government-to-People (G2P) payments. Research shows that this has reduced the time required to access payments, but the new method does not benefit everyone equally. When designing such programs, it is necessary to keep in mind the digital gender divide and how women’s unique positioning will affect the effectiveness of the initiative.

Policy change for legal protections

There are few legal protections for women and gender-diverse people who seek justice for the online abuse they face. According to a 2015 UN Broadband Commission report, only one in five women live in a country where online abuse is likely to be punished. In many countries, perpetrators of online violence act with impunity, as laws have not been updated for the digital world, even when online harassment results in real-world violence. In the Democratic Republic of Congo (DRC), for instance, there are no laws that specifically protect women from online harassment, and women who have brought related crimes to the police risk being prosecuted for “ruining the reputation of the attacker.” And when cyber legislation is passed, it is not always effective. Sometimes it even results in the punishment of victimized women: women in Uganda have been arrested under the Anti-Pornography Act after ex-partners released “revenge porn” (nude photos of them posted without their consent) online. As many of these laws are new, and technologies are constantly changing, there is a need for lawyers and advocates to understand existing laws and gaps in legislation to propose policies and amend laws to allow women to be truly protected online and safe from abuse.

The European Union’s Digital Services Act (DSA), adopted in 2022, is landmark legislation regulating platforms. The act may force platforms to thoroughly assess threats to women online and enact comprehensive measures to address those threats. However, the DSA is newly introduced and how it is implemented will determine whether it is truly impactful. Furthermore, the DSA is limited to the EU, and, while other countries and regions may use it as a model, it would need to be localized.

Making the internet safe for women requires a multi-stakeholder approach in which governments work in collaboration with the private sector and nonprofits. Technology companies have a responsibility to the public to provide solutions and support women who are attacked on their platforms or while using their tools. Not only is this a necessary pursuit for ethical reasons, but, since women make up a very significant audience for these tools, there is also consumer demand for solutions. Many of the interventions created to address this issue have come from private companies. For example, Block Party was a tool created by a private company to give users control over blocking harassment on Twitter. It was financially successful until Twitter drastically raised the cost of access to the Twitter API, forcing Block Party to close. Despite financial and ethical incentives to protect women online, platforms are currently falling short.

While most platforms ban online gender-based violence in their terms and conditions, there are rarely real punishments for violating this ban or effective solutions to protect those attacked. The best that can be hoped for is to have offending posts removed, and this is rarely done in a timely manner. The situation is even worse for non-English posts, which are often misinterpreted, with offensive slang ignored and common phrases censored. Furthermore, the way the reporting system is structured puts the burden on those attacked to sort through violent and traumatizing messages and convince the platform to remove them.

Nonprofits are uniquely placed to address online gendered abuse because they can and have moved more quickly than governments or tech companies to make and advocate for change. Nonprofits provide solutions, conduct research on the threat, facilitate security training, and develop recommendations for tech companies and governments. Furthermore, they play a key role in facilitating communication between all the stakeholders.

Digital security education and digital literacy training

Digital-security education can help women (especially those at higher risk, like human rights defenders and journalists) stay safe online and attain critical knowledge to survive and thrive politically, socially, and economically in an increasingly digital world. However, there are not enough digital-safety trainers who understand the context and challenges that at-risk women face, and few digital-safety resources provide contextualized guidance on the unique threats women face or offer usable solutions to the problems they need to solve. Furthermore, social and cultural pressures can prevent women from attending digital-safety trainings. Women can and will be content creators and build resources for themselves and others, but they must first be given the chance to learn about digital safety and security as part of a digital-literacy curriculum. Men and boys, too, need training on online harassment and digital-safety education.

Connecting and campaigning on issues that matter

Digital platforms enable women to connect with each other, build networks, and organize on justice issues. For example, the #MeToo movement against sexual misconduct in the media industry, which became a global movement, has allowed a multitude of people to participate in activism previously bound to a certain time and place. Read more about digital activism in the Social Media primer.

Beyond campaigning for women’s rights, the internet provides a low-cost way for women to get involved in the broader democratic conversation. Women can run for office, write for newspapers, and express their political opinions with only a phone and an internet connection. This is a much lower barrier than in the past, when reaching a large audience required a large financial investment (such as paying for TV advertising) and women had less control over the message being conveyed (for example, media coverage of women politicians disproportionately focuses on physical appearance). Furthermore, the internet is a resource for learning political skills: women with digital literacy skills can find courses, blogs, communities, and tools online to support any kind of democratic work.

Risks

Young women at a digital inclusion center in the Peruvian Amazon. Photo credit: Jack Gordon / USAID / Digital Development Communications.

There are many factors that threaten to widen the digital gender divide and prevent technology from being used to increase gender equality. Read below to learn about some of these elements, as well as how to mitigate the negative consequences they present for the digital gender divide.

Considering the digital gender divide a “women’s issue”

The gender digital divide is a cross-cutting and holistic issue, affecting countries, societies, communities, and families; it is not just a “women’s issue.” When people dismiss the digital gender divide as a niche concern, it limits the resources devoted to the issue and leads to ineffective solutions that do not address the full scope of the problem. Closing the gender gap in access, use, and development of technology demands the involvement of societies as a whole. Approaches to close the divide must be holistic, take into account context-specific power and gender dynamics, and include the active participation of men in the relevant communities to make a sustained difference.

Further, the gender digital divide should not be understood as restricted to the technology space, but as a social, political, and economic issue with far-reaching implications, including negative consequences for men and boys.

Disasters and crises intensify the education gap for women

Women’s and girls’ education opportunities are more tenuous during crises. Increasing domestic and caregiving responsibilities, a shift towards income generation, pressure to marry, and gaps in digital-literacy skills mean that many girls will stop receiving an education, even where access to the internet and distance-learning opportunities are available. In Ghana, for example, 16% of adolescent boys have digital skills compared to only 7% of girls. Similarly, lockdowns and school closures due to the Covid-19 pandemic had a disproportionate effect on girls, increasing the gender gap in education, especially in the most vulnerable contexts. According to UNESCO, more than 111 million girls who were forced out of school in March 2020 live in countries where gender disparities in education are already the highest. In Mali, Niger, and South Sudan, countries with some of the lowest enrollment and completion rates for girls, closures left over 4 million girls out of school.

Online violence increases self-censorship and chills political engagement

Online GBV has proven an especially powerful tool for undermining women and women-identifying human-rights defenders, civil society leaders, and journalists, leading to self-censorship, weakening women’s political leadership and engagement, and restraining women’s self-expression and innovation. According to a 2021 Economist Intelligence Unit (EIU) report, 85% of women have been the target of or witnessed online violence, and 50% of women feel the internet is not a safe place to express their thoughts and opinions. This violence is particularly damaging for those with intersecting marginalized identities. If these trends are not addressed, closing the digital divide will never be possible, as many women who do get online will be pushed off because of the threats they face there. Women journalists, activists, politicians, and other female public figures are the targets of threats of sexual violence and other intimidation tactics. Online violence against journalists leads to journalistic self-censorship, affecting the quality of the information environment and democratic debate.

Online violence chills women’s participation in the digital space at every level. In addition to its impact on women political leaders, online harassment affects how women and girls who are not direct victims engage online. Some girls, witnessing the abuse their peers face online, are intimidated into not creating content. This form of violence is also used as a tool to punish and discourage women who don’t conform to traditional gender roles.

Solutions include education (training women on digital security to feel comfortable using technology and training men and boys on appropriate behavior in online environments), policy change (advocating for the adoption of policies that address online harassment and protect women’s rights online), and technology change (addressing the barriers to women’s involvement in the creation of tech to decrease gender disparities in the field and help ensure that the tools and software that are available serve women’s needs).

Artificial intelligence systems exacerbate biases

The underrepresentation of women in leadership and in the development, coding, and design of AI and machine-learning systems leads to the reinforcement of gender inequalities through the replication of stereotypes and the maintenance of harmful social norms. For example, groups of predominantly male engineers have designed digital assistants such as Apple’s Siri and Amazon’s Alexa, which use women-sounding voices, reinforcing entrenched gender biases, such as the notion that women are more caring, sympathetic, cordial, and even submissive.

In 2019, UNESCO released “I’d blush if I could”, a research paper whose title was based on the response given by Siri when a human user addressed “her” in an extremely offensive manner. The paper noted that although the system was updated in April 2019 to reply to the insult more flatly (“I don’t know how to respond to that”), “the assistant’s submissiveness in the face of gender abuse remain[ed] unchanged since the technology’s wide release in 2011.” UNESCO suggested that by rendering the voices as women-sounding by default, tech companies were preconditioning users to rely on antiquated and harmful perceptions of women as subservient and failed to build in proper safeguards against abusive, gendered language.

Further, machine-learning systems rely on data that reflect larger gender biases. A group of researchers from Microsoft Research and Boston University trained a machine learning algorithm on Google News articles, and then asked it to complete the analogy: “Man is to Computer Programmer as Woman is to X.” The answer was “Homemaker,” reflecting the stereotyped portrayal and the deficit of women’s authoritative voices in the news. (Read more about bias in artificial intelligence systems in the Artificial Intelligence and Machine Learning Primer section on Bias in AI and ML).
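
The analogy test the researchers used is typically computed with vector arithmetic over word embeddings: the model returns the word whose vector is closest to vector(b) − vector(a) + vector(c). A minimal sketch, using hand-made toy vectors for illustration rather than real trained embeddings:

```python
from math import sqrt

# Toy, hand-made word vectors for illustration only;
# real embeddings (e.g., word2vec) are trained on large text corpora.
vecs = {
    "man":        [1.0, 0.0, 0.2],
    "woman":      [0.0, 1.0, 0.2],
    "programmer": [1.0, 0.1, 0.9],
    "homemaker":  [0.1, 1.0, 0.9],
    "doctor":     [0.6, 0.5, 0.8],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to X' by vector offset: X ~ b - a + c."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # with these toy vectors: homemaker
```

With real embeddings trained on biased text, the same arithmetic surfaces the stereotyped completions the researchers reported: the bias lives in the training data, not in the arithmetic itself.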

In addition to preventing the reinforcement of gender stereotypes, increasing the participation of women in tech leadership and development helps add a gendered lens to the field and enhances the ways new technologies can be used to improve women’s lives. For example, period tracking was initially left out of health applications, and tech companies were later slow to address the concerns of US users after Roe v. Wade was overturned and the privacy of period-tracking data became a concern in the US.

New technologies allow for the increased surveillance of women

Surveillance is of particular concern to those working in closed and closing spaces, whose governments see them as a threat due to their activities promoting human rights and democracy. Research conducted by Privacy International shows that there is a uniqueness to the surveillance faced by women and gender non-conforming individuals. From data privacy implications related to menstrual-tracker apps, which might collect data without appropriate informed consent, to the ability of women to privately access information about sexual and reproductive health online, to stalkerware and GPS trackers installed on smartphones and internet of things (IoT) devices by intimate partners, pervasive technology use has exacerbated privacy concerns and the surveillance of women.

Research conducted by the CitizenLab, for example, highlights the alarming breadth of commercial software that exists for the explicit purpose of covertly tracking another’s mobile device activities, remotely and in real-time. This could include monitoring someone’s text messages, call logs, browser history, personal calendars, email accounts, and/or photos. Education on digital security and the risks of data collection is necessary so women can protect themselves online, give informed consent for data collection, and feel comfortable using their devices.

Increased technological unemployment

Job losses caused by the replacement of human labor with automated systems lead to “technological unemployment,” which disproportionately affects women, the poor, and other vulnerable groups unless they are re-skilled and provided with adequate protections. Automation does require skilled labor to operate, oversee, and maintain automated systems, but this creates jobs for only a smaller segment of the population. The immediate impact of this transformation of work can be harmful for people and communities without social safety nets or opportunities for finding other work.

Questions

Consider these questions when developing or assessing a project proposal that works with women or girls (which is pretty much all of them):

  1. Have women been involved in the design of your project?
  2. Have you considered the gendered impacts and unintended consequences of adopting a particular technology in your work?
  3. How are differences in access and use of technology likely to affect the outcomes of your project?
  4. Are you employing technologies that could reinforce harmful gender stereotypes or fail the needs of women participants?
  5. Are women exposed to additional safety concerns (compared to men) brought about by the use of the tools and technologies adopted in your project?
  6. Have you considered gaps in sex- or gender-disaggregated data in the dataset used to inform the design and implementation of your project? How could these gaps be bridged through additional primary or secondary research?
  7. How can your project meaningfully engage men and boys to address the gender digital divide?
  8. How can your organization’s work help mitigate and eventually close the gender digital divide?

Case studies

There are many examples of programs that are engaging with women to have a positive effect on the digital gender divide. Find out more about a few of these below.

USAID’s WomenConnect Challenge

In 2018, USAID launched the WomenConnect Challenge to enable women’s access to, and use of, digital technologies. The first call for solutions brought in more than 530 ideas from 89 countries, and USAID selected nine organizations to receive $100,000 awards. In the Republic of Mozambique, the development-finance institution GAPI lowered barriers to women’s mobile access by providing offline Internet browsing, rent-to-own options, and tailored training in micro-entrepreneurship for women by region. Another first round awardee, AFCHIX, created opportunities for rural women in Kenya, Namibia, Sénégal, and Morocco to become network engineers and build their own community networks or Internet services. AFCHIX won another award in the third round of funding, which the organization used to integrate digital skills learning into community networks to facilitate organic growth of women using digital skills to create socioeconomic opportunities. The entrepreneurial and empowerment program helps women establish their own companies, provides important community services, and positions these individuals as role models.

Safe Sisters – Empowering women to take on digital security

In 2017, Internews and DefendDefenders piloted the Safe Sisters program in East Africa to empower women to protect themselves against online GBV. Safe Sisters is a digital-safety training-of-trainers program that provides women human rights defenders and journalists who are new to digital safety with techniques and tools to navigate online spaces safely, assume informed risks, and take control of their lives in an increasingly digital world. The program was created and run entirely by women, for women. In it, participants learn digital-security skills and get hands-on experience by training their own at-risk communities.

In building the Safe Sisters model, Internews has proven that, given the chance, women will dive into improving their understanding of digital safety, use this training to generate new job opportunities, and share their skills and knowledge in their communities. Women can also create context- and language-specific digital-safety resources and will fight for policies that protect their rights online and deter abuse. There is strong evidence of the lasting impact of the Safe Sisters program: two years after the program launched, 80% of the pilot cohort of 13 women were actively involved in digital safety; 10 had earned new professional opportunities because of their participation; and four had changed careers to pursue digital security professionally.

Internet Saathi

In 2015, Google India and Tata Trusts launched Internet Saathi, a program designed to equip women (known as Internet Saathis) in villages across the country with basic Internet skills and provide them with Internet-enabled devices. The Saathis then train other women in digital literacy skills, following the ‘train the trainer’ model. As of April 2019, there were more than 81,500 Internet Saathis who helped over 28 million women learn about the Internet across 289,000 villages. Read more about the Saathis here.

Girls in Tech

Girls in Tech is a nonprofit with chapters around the world. Its goal is to close the gender gap in the tech development field. The organization hosts events for girls, including panels and hackathons, which serve the dual purpose of encouraging girls to participate in developing technology and solving local and global issues, such as environmental crises and accessibility issues for people with disabilities. Girls in Tech gives girls the opportunity to get involved in designing technology through learning opportunities like bootcamps and mentorship. The organization hosts a startup pitch competition called AMPLIFY, which gives girls the resources and funding to make their designs a reality.

Women in Tech

Women in Tech is another international nonprofit and network with chapters around the globe that supports Diversity, Equity, and Inclusion in Science, Technology, Engineering, Arts, and Mathematics fields. It works through four focus areas: Education – training women for careers in tech, including internships, tech-awareness sessions, and scholarships; Business – mentoring programs for women entrepreneurs, workshops, and incubation and acceleration camps; Social Inclusion – ensuring digital-literacy programs reach marginalized groups and underprivileged communities; and Advocacy – raising awareness of the digital gender divide and how it can be closed.

EQUALS Global Partnership

The International Telecommunication Union (ITU), GSMA, the International Trade Centre, the United Nations University, and UN Women founded the EQUALS Global Partnership to tackle the digital gender divide through research, policy, and programming. EQUALS breaks the path to gender equality in technology into four core issue areas: Access, Skills, Leadership, and Research. The Partnership has a number of programs, some in collaboration with other organizations, that specifically target these issue areas. One research program, Fairness AI, examines bias in AI, while the Digital Literacy Pilot Programmes, a collaboration between the World Bank, GSMA, and the EQUALS Access Coalition, focus on teaching digital literacy to women in Rwanda, Uganda, and Nigeria. More information about the EQUALS Global Partnership’s projects can be found on its website.

Regional Coding Camps and Workshops

Many initiatives to address the digital gender divide use training to empower girls and women to feel confident in tech industries, because simply accessing technology is only one factor contributing to the divide. Because cultural obligations often play a key role, and because technology is more intimidating when taught in a non-native language, many of these educational programs are localized. One example is the African Girls Can Code Initiative (AGCCI), created by UN Women, the African Union Commission (AUC), and the ITU. The initiative trains women and girls between the ages of 17 and 25 in coding and information and communications technology (ICT) skills to encourage them to pursue an education and career in these fields. AGCCI works to close the digital gender divide both by increasing women’s and girls’ knowledge of the field and by mainstreaming women in these fields, tackling issues of social norms.

Mentorship Programs

Many interventions to encourage women’s engagement in technology also use mentorship programs. Some use direct peer mentorship, while others connect women with role models through interviews or conferences. Engaging successful women as mentors is effective because succeeding in tech as a woman requires more than technical skills: women must also navigate gender- and culture-specific barriers that only other women with the same lived experiences can fully understand. Furthermore, by elevating mentors, these interventions put women tech leaders in the spotlight, helping to shift norms and expectations around women’s authority in the tech field. The Women in Cybersecurity Mentorship Programme is one example. This initiative, created by the ITU, EQUALS, and the Forum of Incident Response and Security Teams (FIRST), elevates women leaders in the cybersecurity field and serves as a resource for women at all levels to share professional best practices. Google Summer of Code is another, broader (open to all genders) mentorship opportunity. Applicants apply for mentorship on a coding project; mentors introduce them to the norms and standards of the open-source community as they develop their projects as open source.

Outreachy is an internship program that aims to increase diversity in the open-source community. Applicants are considered if they are affected by underrepresentation in tech in the area where they live. The internships last three months, are conducted remotely, and include a USD 7,000 stipend to lower barriers to participation for marginalized groups; interns can choose among a number of different projects to work on.

USAID/Microsoft Airband Initiative

The USAID/Microsoft Airband Initiative takes localized approaches to addressing the digital gender divide. For each region, partner organizations, which are local technology companies, work in collaboration with local gender inequality experts to design a project to increase connectivity, with a focus on women’s connectivity and reducing the digital gender divide. Making tech companies the center of the program helps to address barriers like determining sustainable price points. The second stage of the program utilizes USAID and Microsoft’s resources to scale up the local initiatives. The final stage looks to capitalize on the first two stages, recruiting new partners and encouraging independent programs.

UN Women’s Second Chance Education (SCE) Programme

The UN Women’s Second Chance Education (SCE) Programme uses e-learning to increase literacy and digital literacy, especially among women and girls who missed out on traditional education opportunities. The program was piloted between 2018 and 2023 in six countries across different contexts, including humanitarian crises and middle-income settings, and among refugees, migrants, and indigenous peoples. The pilot has been successful overall, though internet access remains a challenge for vulnerable groups; blended learning (combining online and offline components) proved particularly successful, especially in adapting to the unique needs, schedules, and challenges participants faced.


*A NOTE ON GENDER TERMINOLOGY

All references to “women” (except those that reference specific external studies or surveys, which have been set by those respective authors) are gender-inclusive of girls, women, or any person or persons identifying as a woman.

While much of this article focuses on women, people of all genders are harmed by the digital gender divide, and marginalized gender groups that do not identify as women face some of the same challenges utilizing the internet and have some of the same opportunities to use the internet to address offline barriers.

Digital IDs

What are digital IDs?

Families displaced by Boko Haram violence in Maiduguri, Northeast Nigeria. Implementation of a digital ID system requires informed consent from participants. Photo credit: USAID.

Digital IDs are identification systems that rely on digital technology. Biometric technology is one kind of tool often used for digital identification: biometrics allow people to prove their identity based on a physical characteristic or trait (biological data). Other forms of digital identification include cards and mobile technologies. This resource, which draws on the work of The Engine Room, will look at different forms and the implications of digital IDs, with a particular focus on biometric IDs, including their integration with health systems and their potential for e-participation.

“Biometrics are not new – photographs have been used in this sector for years, but current discourse around ‘biometrics’ commonly refers to fingerprints, face prints and iris scans. As technology continues to advance, capabilities for capturing other forms of biometric data are also improving, such that voice prints, retinal scans, vein patterns, tongue prints, lip movements, ear patterns, gait, and of course, DNA, can be used for authentication and identification purposes.”

The Engine Room

Definitions

Biometric Data: automatically measurable, distinctive physical characteristics or personal traits used to identify or verify the identity of an individual.

Consent: Article 4(11) of the General Data Protection Regulation (GDPR) defines consent: “Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.” See also the Data Protection resource.

Data Subject: the individual whose data is collected.

Digital ID: an electronic identity-management system used to prove an individual’s identity or their right to access information or services.

E-voting: an election system that allows a voter to record their secure and secret ballot electronically.

Foundational Biometric Systems: systems that supply general identification for official uses, like national civil registries and national IDs.

Functional Biometric Systems: systems that respond to a demand for a particular service or transaction, like voter IDs, health records, or financial services.

Identification/One-to-Many Authentication: using the biometric identifier to identify the data subject from within a database of other biometric profiles.

Immutability: the quality of a characteristic that does not change over time (for example, DNA).

Portable Identity: an individual’s digital ID credentials may be taken with them beyond the initial issuing authority, to prove official identity for new user relationships/entities, without having to repeat verification each time.

Self-Sovereign Identity: a digital ID that gives the data subject full ownership over their digital identity, guaranteeing them lifetime portability, independent from any central authority.

Uniqueness: a characteristic that sufficiently distinguishes individuals from one another. Most forms of biometric data are singularly unique to the individual involved.

Verification/One-to-One Authentication: using the biometric identifier to confirm that the data subject is who they claim to be.

How do digital IDs work?

Young Iraqi woman pictured at the Harsham IDP camp in Erbil, Iraq. Digital IDs and biometrics have potential to facilitate the voting process. Photo credit: Jim Huylebroek for Creative Associates International.

There are three primary categories of technology used for digital identification: biometrics, cards, and mobile. Within each of these areas, a wide range of technologies may be used.

NIST (the U.S. National Institute of Standards and Technology, a leading authority on digital identity standards) identifies three parts of the digital ID process.

Part 1: Identity proofing and enrollment

This is the process of binding the data on the subject’s identity to an authenticator, which is a tool that is used to prove their identity.

  • With a biometric ID, this involves collecting the data (through an eye scan, fingerprinting, submitting a selfie, etc.), verifying that the person is who they claim to be, and connecting the individual to an identity account (profile).
  • With a non-biometric ID, this involves giving the individual a tool (an authenticator) they can use for authentication, like a password, a barcode, etc.
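
For the non-biometric case, binding an authenticator at enrollment can be as simple as storing a salted, slow hash of a password, which is later re-derived and checked during authentication. A minimal sketch using Python’s standard library (the iteration count and record layout are illustrative, not a prescribed design):

```python
import hashlib
import hmac
import os

def enroll(password: str) -> dict:
    """Identity proofing and enrollment: bind a password authenticator
    to the identity account by storing a salted, slow hash of it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "digest": digest}  # stored with the identity account

def authenticate(password: str, record: dict) -> bool:
    """Later, the data subject proves their identity by re-deriving the
    hash and comparing it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 100_000)
    return hmac.compare_digest(candidate, record["digest"])

record = enroll("correct horse battery staple")
print(authenticate("correct horse battery staple", record))  # True
print(authenticate("wrong guess", record))                   # False
```

The salt ensures two people with the same password get different stored digests, and the constant-time comparison avoids leaking information through timing.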

Part 2: Authentication

This is the process of using the digital ID to prove identity or access services.

Biometric authentication: There are two different types of biometric authentication.

  • Biometric Verification (or One-to-One Authentication) confirms that the person is who they say they are. This allows organizations to determine, for example, that a person is entitled to a particular food ration, vaccine, or housing benefit.
  • Biometric Identification (or One-to-Many Authentication) is used to identify an individual from within a database of biometric profiles. Organizations may use biometrics for identification to prevent fraudulent enrollments and to “de-duplicate” lists of people. One-to-many authentication systems pose more risks than one-to-one systems because they require a larger amount of data to be stored in one place and because they lead to more false matches. (Read more in the Risks section).
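
The difference between the two modes can be sketched in code. In this simplified model, each enrolled person has a numeric feature vector as their template, and a match is declared when a similarity score clears a threshold (real biometric matching is far more sophisticated; the vectors, names, and threshold here are illustrative):

```python
from math import sqrt

THRESHOLD = 0.95  # similarity required to declare a match (illustrative value)

def similarity(a, b):
    """Cosine similarity between two feature vectors (a simplification
    of real biometric template matching)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def verify(probe, enrolled_template):
    """One-to-one: does the probe match the single template the person claims?"""
    return similarity(probe, enrolled_template) >= THRESHOLD

def identify(probe, database):
    """One-to-many: search every enrolled template for the best match.
    Because each comparison carries a small false-match probability, the
    chance of at least one false match grows with the database size."""
    best_id, best_score = None, THRESHOLD
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if no template clears the threshold

database = {"alice": [0.10, 0.90, 0.30], "bob": [0.80, 0.20, 0.50]}
probe = [0.12, 0.88, 0.31]  # a fresh capture from Alice
print(verify(probe, database["alice"]))  # True
print(identify(probe, database))         # alice
```

Note that `verify` performs one comparison, while `identify` performs one per enrolled person, which is why one-to-many systems both require centralizing more data and accumulate more opportunities for false matches.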

The chart below synthesizes the advantages and disadvantages of different biometric authentication tools. For further details, see the World Bank’s “Technology Landscape for Digital Identification (2018).”

| Biometric Tool | Advantages | Disadvantages |
| --- | --- | --- |
| Fingerprints | Less physically/personally invasive; advanced and relatively affordable method | Not fully inclusive: some fingerprints are harder to capture than others |
| Iris Scan | Fast, accurate, inclusive, and secure | More expensive technology; verification requires precise positioning of data subject; can be misused for surveillance purposes (verification without data subject’s permission) |
| Face Recognition | Relatively affordable | Prone to error; can be misused for surveillance purposes (verification without data subject’s permission); not enough standardization among technology suppliers, which could lead to vendor lock-in |
| Voice Recognition | Relatively affordable; no concerns about hygiene (unlike some other biometrics that involve touch) | Collection process can be difficult and time-consuming; technology is difficult to scale |
| Behavior Recognition, also known as “Soft Biometrics” (i.e., a person’s gait, how they write their signature) | Can be used in real time | Prone to error; not yet a mature technology; can be misused for surveillance purposes (verification without data subject’s permission) |
| Vascular Recognition (a person’s distinct pattern of veins) | Secure, accurate, and inclusive technology | More expensive; not yet a mature technology and not yet widely understood; not interoperable/data are not easily portable |
| DNA Profiling | Secure; accurate; inclusive; useful for large populations | Collection process is long; technology is expensive; involves extremely sensitive information which can be used to identify race, gender, and family relationships, etc. that could put the individual at risk |

Non-biometric authentication: There are two common forms of digital ID that are not based on physical characteristics or traits, which also have authentication methods. Digital ID cards and digital ID applications on mobile devices can also be used to prove identity or to access services or aid (much like a passport, residence card, or driver’s license).

  • Cards: These are a common digital identifier, which can rely on many kinds of technology, from microchips to barcodes. Cards have been in use for a long time, which makes them a mature technology, but they are also less secure because they can be lost or stolen. “Smart cards” exist in the form of an embedded microchip combined with a password. Cards can also be combined with biometric systems. For example, Mastercard and Thales began offering cards with fingerprint sensors in January 2020.
  • Apps on mobile devices: Digital IDs can be used on mobile devices by relying on a password, a “cryptographic” (specially encoded) SIM card, or a “Smart ID” app. These methods are fairly accurate and scalable, but they carry security risks, as well as long-term risks from reliance on technology providers: the technology may not be interoperable or may become outdated (see Privatization of ID and Vendor Lock-In in the Risks section).

Part 3: Portability and interoperability

Digital IDs are usually generated by a single issuing authority (NGO, government entity, health provider, etc.) for an individual. However, portability means that digital ID systems can be designed to allow the person to use their ID elsewhere than with the issuing authority — for example with another government entity or non-profit organization.

To understand interoperability, consider different email providers, for instance, Gmail and Yahoo Mail: these are separate service providers, but their users can send emails to one another. Data portability and interoperability are critical from a fundamental rights perspective, but it is first necessary that different networks (providers, governments) be interoperable with one another to allow for portability. Interoperability is increasingly important for providing services within and across countries, as can be seen in the European Union and Schengen community, the East African community, and the West African ECOWAS community.

Self-Sovereign Identity (SSI) is an important, emerging type of digital ID that gives a person full ownership over their digital identity, guaranteeing them lifetime portability, independent from any central authority. The Self-Sovereign Identity model aims to remove the trust issues and power imbalances that generally accompany digital identity, by giving a person full control over their data.

How are digital IDs relevant in civic space and for democracy?

People across the world who are not identified by government documents face significant barriers to receiving government services and humanitarian assistance. Biometrics are widely used by donors and development actors to identify individuals and connect them with services. Biometric technology can increase access to finance, healthcare, education, and other critical services and benefits. It can also be used for voter registration and in facilitating civic participation.

Resident of the Garin Wazam site in Niger exchanges her e-voucher with food. Biometric technology can increase access to critical services and benefits. Photo credit: Guimba Souleymane, International Red Cross Niger.

The United Nations High Commissioner for Refugees (UNHCR) began its global Biometric Identity Management System (“BIMS”) in 2015, and the following year the World Food Program began using biometrics for multiple purposes, including refugee protection, cash-based interventions, and voter registration. In recent years, a growing preference in aid delivery for cash-based interventions has been part of the push towards digital IDs and biometrics, as these tools can facilitate monitoring and reporting of assistance distribution.

The automated nature of digital IDs brings many new challenges, from gathering meaningful informed consent, to guaranteeing personal security and organization-level security, to potentially harming human dignity and increasing exclusion. These technical and societal issues are detailed in the Risks section.

Ethical Principles for Biometrics

Founded in July 2001 in Australia, the Biometrics Institute is an independent and international membership organization for the biometrics community. In March 2019, it released seven “Ethical Principles for Biometrics.”

  1. Ethical behaviour: We recognise that our members must act ethically even beyond the requirements of law. Ethical behaviour means avoiding actions which harm people and their environment.
  2. Ownership of the biometric and respect for individuals’ personal data: We accept that individuals have significant but not complete ownership of their personal data (regardless of where the data are stored and processed) especially their biometrics, requiring their personal data, even when shared, to be respected and treated with the utmost care by others.
  3. Serving humans: We hold that technology should serve humans and should take into account the public good, community safety and the net benefits to individuals.
  4. Justice and accountability: We accept the principles of openness, independent oversight, accountability and the right of appeal and appropriate redress.
  5. Promoting privacy-enhancing technology: We promote the highest quality of appropriate technology use including accuracy, error detection and repair, robust systems and quality control.
  6. Recognising dignity and equal rights: We support the recognition of dignity and equal rights for all individuals and families as the foundation of freedom, justice and peace in the world, in line with the United Nations Universal Declaration of Human Rights.
  7. Equality: We promote planning and implementation of technology to prevent discrimination or systemic bias based on religion, age, gender, race, sexuality or other descriptors of humans.

Opportunities

Biometric voter registration in Kenya. Collection and storage of biometric data require strong data protection measures. Photo credit: USAID/Kenya Jefrey Karang’ae.

If you are trying to understand the implications of digital IDs in your work environment, or are considering using aspects of digital IDs as part of your DRG programming, consider the following opportunities.

Potential fraud reduction

Biometrics are frequently cited for their potential to reduce fraud and, more generally, to manage financial risk by facilitating due-diligence oversight and scrutiny of transactions. The Engine Room found these to be common justifications for the use of biometrics among development and humanitarian actors, but it also found a lack of evidence to support the claim. It should not be assumed that fraud occurs only at the beneficiary level: the real problems with fraud may occur elsewhere in an ecosystem.

Facilitate E-Voting

Beyond the distribution of cash and services, digital IDs and biometrics also have the potential to facilitate the voting process. The right to vote, and to participate in democratic processes more broadly, is a fundamental human right. Recently, biometric voter registration and biometric voting systems have become more widespread as a means of empowering civic participation, securing electoral systems, and protecting against voter fraud and multiple enrollments.

Advocates claim that e-voting can reduce the costs of participation and make the process more reliable, while critics claim that digital systems are at risk of failure, misuse, and security breaches. Electronic ballot manipulation, poorly written code, or any other kind of technical failure could compromise the democratic process, particularly when there is no backup paper trail. For more, see “Introducing Biometric Technology in Elections” (2017) by the International Institute for Democracy and Electoral Assistance, which includes detailed case studies on e-voting in Bangladesh, Fiji, Mongolia, Nigeria, Uganda, and Zambia.

Health Records

Securing electronic health records, particularly when care services are provided by multiple actors, can be very complicated, costly, and inefficient. Because biometrics link a unique verifier to a single individual, they are useful for patient identification, allowing doctors and health providers to connect someone to their health information and medical history. Biometrics have potential in vaccine distribution, for example, by being able to identify who has received specific vaccines (see the case study by The New Humanitarian about Gavi technology).

Access to healthcare can be particularly complicated in conflict zones, for migrants and displaced people, and for other groups without documented health records. With interoperable biometrics, patients who need to transfer from one facility to another, for whatever reason, can have their digital information travel with them. For more, see the World Bank Group ID4D report, “The Role of Digital Identification for Healthcare: The Emerging Use Cases” (2018).

Increased access to cash-based interventions

Digital ID systems have the potential to include the unbanked or those underserved by financial institutions in the local or even global economy. Digital IDs grant people access to regulated financial services by enabling them to prove their official identity. Populations in remote areas can benefit especially from digital IDs that permit remote, or non-face-to-face, identity proofing/enrollment for customer identification/verification. Biometrics can also make accessing banking services much more efficient, reducing the requirements and hurdles that beneficiaries would normally face. The WFP provides an example of a successful cash-based intervention: in 2017, it launched its first cash-based assistance for secondary school girls in northwestern Pakistan using biometric attendance data.

According to the Financial Action Task Force, by bringing more people into the regulated financial sector, biometrics further reinforce financial safeguards.

Improved distribution of aid and social benefits

Biometric systems can reduce much of the administrative time and human effort behind aid assistance, liberating human resources to devote to service delivery. Biometrics permit aid delivery to be tracked in real-time, which allows governments and aid organizations to respond quickly to beneficiary problems.

Biometrics can also reduce redundancies in social benefit and grant delivery. For instance, in 2015, the World Bank Group found that biometric digital IDs in Botswana achieved a 25 percent savings in pensions and social grants by identifying duplicated records and deceased beneficiaries. Indeed, the issue of “ghost” beneficiaries is a common problem. In 2019, the Namibian Government Institutions Pension Fund (GIPF) began requiring pension recipients to register their biometrics at their nearest GIPF office and return to verify their identity three times a year. Of course, social-benefit distribution can be aided by biometrics, but it also requires human oversight, given the possibility of glitches in digital service delivery and the critical nature of these services (see more in the Risks section).

Proof of identity

Migrants, refugees, and asylum seekers often struggle to prove and maintain their identity when they relocate. Many lose the proof of their legal identities and assets — for example, degrees and certifications, health records, and financial assets — when they flee their homes. Responsibly designed biometrics can help these populations reestablish and maintain proof of identity. For example, in Finland, a blockchain startup called MONI has worked since 2015 with the Finnish Immigration Service to provide refugees in the country with a prepaid credit card backed by a digital identity number stored on a blockchain. The design of these technologies is critical: data should be distributed rather than centralized to prevent the security risks and potential misuse or abuse that come with centralized ownership of sensitive information.

Back to top

Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible risks associated with the use of digital ID tools in DRG work.

Dehumanization of beneficiaries

The way that biometrics are regarded — bestowing an identity on someone as if they did not have an identity previously — can be seen as problematic and even dehumanizing.

As The Engine Room explains, “the discourse around the ‘identifiability’ benefits of biometrics in humanitarian interventions often tends to conflate the role that biometrics play. Aid agencies cannot ‘give’ a beneficiary an identity, they can only record identifying features and check those against other records. Treating the acquisition of biometric data as constitutive of identity risks dehumanising beneficiaries, most of whom are already disempowered in their relationship with humanitarian entities upon whom they rely for survival. This attitude is evident in the remarks of one Burmese refugee undergoing fingerprint registration in Malaysia in 2006 — ‘I don’t know what it is for, but I do what UNHCR wants me to do’ — and of a Congolese refugee in Malawi, who upon completing biometric registration told staff, ‘I can be someone now.’”

Lack of informed consent

It is critical to obtain individuals’ informed consent during biometric enrollment, but this rarely happens in humanitarian and development settings, given the technology’s many confusing technical aspects and the language and cultural barriers involved. An agreement that is potentially coerced does not constitute consent, as illustrated by Kenya’s biometric registration program, which was challenged in court after many Kenyans felt pressured into enrolling. It is difficult to guarantee, or even to evaluate, consent when the power imbalance between the issuing authority and the data subject is so substantial. “Refugees, for instance, could feel they have no choice but to provide their information, because they are in a vulnerable situation.”

Minors face a similar risk of coerced or uninformed consent. As The Engine Room pointed out in 2016, “UNHCR has adopted the approach that refusal to submit to biometric registration amounts to refusal to submit to registration at all. If this is true, this constrains beneficiaries’ right to contest the taking of biometric data and creates a considerable disincentive to beneficiaries voicing opposition to the biometric approach.”

For consent to be truly given, individuals must have an alternative method available to them, so that they feel able to refuse the procedure without being disproportionately penalized. Civil society organizations could play an important role in helping to remedy this power imbalance.

Security risks

Digital ID systems provide many important security features, but they also introduce new security risks, such as data leakage, data corruption, and data use or misuse by unauthorized actors. Digital ID systems can involve very detailed data about the behaviors and movements of vulnerable individuals, for example, their financial histories and their attendance at schools, health clinics, and religious establishments. In the hands of other actors (corrupt governments, marketers, criminals), this information could be used against them.

The loss, theft, or misuse of biometric data are some of the greatest risks for organizations deploying these technologies. By collecting and storing their biometric data in centralized databases, aid organizations could be putting their beneficiaries at serious risk, particularly if their beneficiaries are people fleeing persecution or conflict. In general, because digital IDs rely on the Internet or other open communications networks, there are multiple opportunities for cyberattacks and security breaches. The Engine Room also cites anecdotal accounts of humanitarian workers losing laptops, USB keys, and other digital files containing beneficiary data. See also the Data Protection resource.

Data Reuse and Misuse

Because biometrics are unique and immutable, once biometric data are out in the world, people are no longer the only owners of their identifiers. The Engine Room describes this as the “non-revocability” of biometrics. This means that biometrics could be used for purposes other than those originally intended. For instance, governments could require humanitarian actors to give them access to biometric databases for political purposes, or foreign countries could obtain biometric data for intelligence purposes. People cannot change their biometrics as they would a driver’s license or even their name: to escape facial recognition, for instance, they would need to undergo plastic surgery.

There is also the risk that biometrics will be put to use in future technologies that may be more intrusive or harmful than current usages. “Governments playing hosts to large refugee populations, such as Lebanon, have claimed a right to access to UNHCR’s biometric database, and donor States have supported UNHCR’s use of biometrics out of their own interest in using the biometric data acquired as part of the so-called ongoing ‘war on terror.’”

The Engine Room

For more on the potential reuse of biometric data for surveillance purposes, see also “Aiding surveillance: An exploration of how development and humanitarian aid initiatives are enabling surveillance in developing countries,” I&N Working Paper (2014).

Malfunctions and inaccuracies

Because digital ID systems are highly technical and rely on multiple steps and mechanisms, they are prone to error. Biometrics can return false matches, linking someone to the incorrect identity, or false non-matches, failing to link someone to their actual identity. Technology deployed within real communities does not always function as it does in the laboratory. Furthermore, some populations experience more errors than others: for instance, as has been widely documented, people of color are more often misidentified by facial recognition technology.
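The balance between these two error types is typically tuned with a similarity threshold. The sketch below uses made-up matcher scores (not the output of any real biometric system) to show how a false match rate (FMR) and a false non-match rate (FNMR) are computed, and why tightening the threshold trades one kind of error for the other:

```python
# Hypothetical similarity scores (0 to 1) from a biometric matcher.
# Genuine pairs compare a person against their own enrolled template;
# impostor pairs compare a person against someone else's template.
genuine_scores = [0.91, 0.88, 0.55, 0.95, 0.70]
impostor_scores = [0.10, 0.35, 0.62, 0.22, 0.41]

def error_rates(genuine, impostor, threshold):
    """Return (false match rate, false non-match rate) at a threshold."""
    # False non-match: a genuine pair is rejected (score below threshold).
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    # False match: an impostor pair is accepted (score at/above threshold).
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    return fmr, fnmr

# A lenient threshold admits impostors; a strict one locks out genuine users.
for t in (0.5, 0.7, 0.9):
    fmr, fnmr = error_rates(genuine_scores, impostor_scores, t)
    print(f"threshold={t}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

No threshold eliminates both error types at once: a system tuned aggressively to block impostors will also reject some legitimate beneficiaries, which is why error rates matter so much when the service at stake is life-saving aid.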

Some technologies are more error-prone than others: soft biometrics, which measure elements like a person’s gait, are less mature and less accurate than iris scans. Even fingerprints, though relatively mature and widely used, still have a high error rate. The performance of some biometrics can also diminish over time: aging can change a person’s facial features and even their irises in ways that impede biometric authentication. Digital IDs can also suffer from connectivity issues: a lack of reliable infrastructure can reduce the system’s functioning in a particular geographic area for significant periods of time. To mitigate this, digital ID systems should be designed to support both offline and online transactions.

When it comes to providing life-saving aid services, even a small mistake or malfunction during a single step in the process can cause severe harm. Unlike manual processes where humans are involved and can intervene in the case of error, automated processes bring the possibility that no one will notice a seemingly small technicality until it is too late.

Exclusionary potential

Biometrics may exclude individuals for several reasons, according to The Engine Room: “Individuals may be reluctant to submit to providing biometric samples because of cultural, gender or power imbalances. Acquiring biometric samples can be more difficult for persons of darker skin color or persons with disabilities. Fingerprinting, in particular, can be difficult to undertake correctly, particularly when beneficiaries’ fingerprints are less pronounced due to manual and rural labor. All of these aspects may inhibit individuals’ provision of biometric data and thus exclude them from the provision of assistance.”

The kinds of errors mentioned in the section above are more frequent with respect to minority populations who tend to be underrepresented in training data sets, for example, people of color, and persons with disabilities.

Lack of access to technology or lower levels of technology literacy can compound exclusion: for example, lack of access to smartphones or to cellphone data or coverage may increase exclusion in the case of smartphone-reliant ID systems. As mentioned, manual laborers typically have worn fingerprints that biometric readers can struggle to recognize; similarly, the elderly may experience match failures due to changes in facial characteristics like hair loss or other signs of aging or illness, all of which increase the risk of exclusion.

The World Bank ID4D program notes that it often observes differential rates of coverage for the following groups and their intersections: women and girls; orphans and vulnerable children; poor people; rural dwellers; ethnolinguistic minorities; migrants and refugees; stateless populations or populations at risk of statelessness; older people; persons with disabilities; and non-nationals. It bears emphasizing that these tend to be the most vulnerable populations in society, precisely those that biometric technology and digital IDs aim to include and empower. When considering which kind of ID or biometric technology to deploy, it is critical to assess all of these types of potential errors in relation to the population, and in particular how to mitigate against the exclusion of certain groups.

Insufficient regulation

“Technology is moving so fast that laws and regulations are struggling to keep up… Without clear international legislation, businesses in the biometrics world are often faced with the dilemma, ‘Just because we can, should we?’”

Isabelle Moeller, Chief Executive of the Biometrics Institute

Digital identification technologies exist in a continually evolving regulatory environment, which presents challenges to providers and beneficiaries alike. There are many efforts to create international standards for biometrics and digital IDs — for example, by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). But beyond the GDPR, there is not yet sufficient international regulation to enforce these standards in many of the countries where they are being implemented.

Privatization of ID and Vendor Lock-In

The technology behind digital identities and biometrics is almost always provided by private-sector actors, often in partnership with governments and international organizations and institutions. The major role played by the private sector in the creation and maintenance of digital IDs can put both the beneficiaries and aid organizations and governments at risk of vendor lock-in: if the cost of switching to a new service provider is too expensive or onerous, the organization/actor may be forced to stay with their original supplier. Overreliance on a private-sector supplier can also bring security risks (for instance, when the original supplier’s technology is insecure) and can pose challenges to partnering with other services and providers when the technology is not interoperable. For these reasons, it is important for technology to be interoperable and to be designed with open standards.

IBM’s Facial Recognition Ban
In June 2020, IBM withdrew its facial-recognition technology from use by law enforcement in the United States. Such one-off decisions by private actors should not replace legal judgments and regulations. Debbie Reynolds, data privacy officer for Women in Identity, believes that facial recognition will not disappear soon; considering the technology’s many current flaws, she argues, companies should focus on improving it rather than banning it. International regulation and enforcement are needed first and foremost, as they will give private actors guidelines and incentives to design responsible, rights-respecting technology over the long term.

Back to top

Questions

If you are considering using digital ID tools as part of your programming, ask yourself these questions to understand the possible implications for your work and for your community and partners.

  1. Has the beneficiary given their informed consent? How were you able to check their understanding? Was consent coerced in any way, perhaps due to a power dynamic or lack of alternative options?
  2. How does the community feel about the technology? Does the technology fit with cultural norms and uphold human dignity?
  3. How affordable is the technology for all stakeholders, including the data subjects?
  4. How mature is the technology? How long has the technology been in use, where, and with what results? How well is it understood by all stakeholders?
  5. Is the technology accredited? When and by whom? Is the technology based on widely accepted standards? Are these standards open?
  6. How interoperable is the technology with the other technologies in the identity ecosystem?
  7. How well does the technology perform? How long does it take to collect the data, to validate identity, etc? What is the error rate?
  8. How resilient is the digital system? Can it operate without internet access or without reliable internet access?
  9. How easy is the technology to scale and use with larger or other populations?
  10. How secure and accurate is the technology? Have all security risks been addressed? What methods do you have in terms of backup (for example, a paper trail for electronic voting)
  11. Is the collection of biometric data proportional in regards to the task at hand? Are you collecting the minimal amount of data necessary to achieve your goal?
  12. Where are all data being stored? What other parties might have access to this information? How are the data protected?
  13. Are any of the people who would receive biometric or digital IDs part of a vulnerable group? If digitally recording their identity could put them at risk, how could you mitigate against this? (for instance, avoiding a centralized database, minimizing the amount of data collected, taking cybersecurity precautions, etc.).
  14. What power does the beneficiary have over their data? Can they transfer their data elsewhere? Can they request that their data be erased, and can the data in fact be erased?
  15. If you are using digital IDs or biometrics to automate the fulfillment of fundamental rights or the delivery of critical services, is there sufficient human oversight?
  16. Who is a technological error most likely to exclude or harm? How will you address this potential harm or exclusion?

Back to top

Case studies

Aadhaar, India, the world’s largest national biometric system

Aadhaar is India’s national biometric ID program, and the largest in the world, making it an essential case study for understanding the potential benefits and risks of such a system. Aadhaar is controversial. Many have attributed hunger-related deaths to failures in the Aadhaar system, which lacks sufficient human oversight to intervene when the technology malfunctions and prevents individuals from accessing their benefits. However, in 2018, the Indian Supreme Court upheld the legality of the system, ruling that it does not violate Indians’ right to privacy and could therefore remain in operation. “Aadhaar gives dignity to the marginalized,” the judges asserted, and “Dignity to the marginalized outweighs privacy.” While the risks are substantial, digital IDs in India also present significant opportunities, including greater inclusivity and accessibility for otherwise unregistered individuals, enabling them to access social services and participate in society.

WFP Iris Scan Technology in Zaatari Refugee Camp

In 2016, the World Food Programme introduced biometric technology to the Zaatari refugee camp in Jordan. “WFP’s system relies on UNHCR biometric registration data of refugees. The system is powered by IrisGuard, the company that developed the iris scan platform, Jordan Ahli Bank, and its counterpart Middle East Payment Services. Once a shopper has their iris scanned, the system automatically communicates with UNHCR’s registration database to confirm the identity of the refugee, checks the account balance with Jordan Ahli Bank and Middle East Payment Services, and then confirms the purchase and prints out a receipt – all within seconds.” As of 2019, the program, which relies in part on blockchain technology, was supporting more than 100,000 refugees.

Kenya’s Huduma Namba

In January 2020, the New York Times reported that Kenya’s digital IDs may exclude millions of minorities. In February 2020, the Huduma Namba ID scheme was suspended by a High Court ruling, halting “the $60 million Huduma Namba scheme until adequate data protection policies are implemented. The panel of three judges ruled in a 500-page report that the National Integrated Identification Management System (NIIMS) scheme is constitutional, reports The Standard, but current laws are insufficient to guarantee data protection. […] Months after biometric capture began, the government passed its first data protection legislation in late November 2019, after the government tried to downgrade the role of data protection commissioner to a ‘semi-independent’ data protection agency with a chairperson appointed by the president. The data protection measures have yet to be implemented. The case was brought by civil rights groups including the Nubian Rights Forum and Kenya National Commission on Human Rights (KNCHR), citing data protection and privacy issues, that the way in which data protection legislation was handled in parliament prevented public participation, and how the NIIMs scheme is proving ethnically divisive in the country, particularly in border areas.”

Biometrics for child vaccination

As explored in The New Humanitarian in 2019: “A trial project is betting that biometric identification is the best way to help boost vaccination rates, linking children with their medical records. Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently. Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.”

Financial Action Task Force Case Studies

See also the case studies assembled by the Financial Action Task Force (FATF), the intergovernmental organization focused on combating money laundering and terrorist financing. In 2020, FATF released a comprehensive resource on digital identity that includes brief case studies.

Digital Identity in the Migration and Refugee Context

For migrants and refugees in Italy, identity data collection processes can “exacerbate existing biases, discrimination, or power imbalances.” One key challenge is obtaining meaningful consent. Often, biometric data are collected as soon as migrants and refugees arrive in a new country, at a moment when they may be vulnerable and overwhelmed. Language barriers exacerbate the issue, making it difficult to provide adequate context around the rights to privacy. Identity data are collected inconsistently by different organizations, all of whose data protection and privacy practices vary widely.

Using Digital IDs in Ukraine

In 2019, USAID, in partnership with the Ministry of Digital Transformation of Ukraine, helped launch the Diia app, which gives citizens access to digital forms of identification that, since August 2021, hold the same legal value as physical identification. Diia has about 18 million users in Ukraine and is the most frequently used app in the country. Support for the app is crucial to Ukraine’s digital development and has become increasingly important as the war has forced many to flee and damaged government buildings and existing infrastructure. The app allows users to store a digital passport along with 14 other digital documents and to access 25 public services online.

Back to top

References

Find below the works cited in this resource.

This primer draws from the work of The Engine Room, and the resource they produced in collaboration with Oxfam on Biometrics in the Humanitarian Sector, published in March 2018.

Back to top

Disinformation

What is information manipulation?

Information manipulation encompasses a set of tactics involving the collection and dissemination of information to influence or disrupt democratic decision making. Many different types of content can be involved in information manipulation, such as disinformation, misinformation, mal-information, propaganda, and hate speech. This content can be used to influence public attitudes or beliefs, persuade individuals to act or behave in a certain way—such as suppressing the vote of a particular group of people—or incite hate and violence. A variety of actors can engage in information manipulation, from domestic governments, political parties, and campaigns to commercial actors to foreign governments and extremist groups. Information manipulation can co-opt traditional information channels, like television broadcasting, print, and radio, as well as social media.

Disinformation

Disinformation is false or misleading information disseminated with the intent to deceive or cause harm. Disinformation is always purposeful, and is not necessarily composed strictly of outright lies or fabrications. The inclusion of some true facts or “half truths” stripped of context can make disinformation more believable and more difficult to recognize.

Disinformation = false information + intent to harm

Misinformation

Misinformation is the inadvertent sharing of false or misleading information. It differs from disinformation in the absence of an intent to deceive or cause harm: the person sharing the information generally believes it to be true. One example of misinformation is the rumors spread on social media during the COVID-19 pandemic about “cures” that had no basis in medical science.

Misinformation = false information + mistake

Misinformation, disinformation, malinformation

Claire Wardle & Hossein Derakhshan, 2017

Mal-information

Mal-information is truthful information presented without proper context in an attempt to deceive, mislead, or cause harm.

Mal-information = true information + intent to harm

Hate speech[2]

Hate speech is the use of discriminatory language with reference to a person or group on the basis of identity, including an individual’s religion, ethnicity, nationality, ability, gender or sexual orientation. Hate speech is often a part of broader information manipulation efforts.

Propaganda[3]

Propaganda is information designed to promote a political goal, action or outcome. Propaganda often involves disinformation, but can also make use of facts, stolen information or half-truths to sway individuals. It often makes emotional appeals, rather than focusing on rational thoughts or arguments. Propaganda is commonly associated with state-affiliated actors, but can also be spread by other groups and individuals.

Related terms[4]
Dangerous speech

According to the Dangerous Speech Project, dangerous speech is “any form of expression (speech, text, or images) that can increase the risk that its audience will condone or participate in violence against members of another group.” This concept provides a constructive framework for thinking about hate speech that is liable to cause violence. Hallmarks of dangerous speech include dehumanization (referring to people as insects, bacteria, etc.) and telling people they face a mortal threat from a disfavored minority group.

Fake News

The term “fake news” has no accepted definition and is often inaccurately used as a synonym for disinformation. Popularized in recent years, it is frequently used to discredit information that one finds unfavorable, regardless of its truthfulness. As such, the terms “misinformation,” “disinformation,” or “mal-information” should be used in place of “fake news.”

 

Definitions

Astroturfing: The attempt to create an impression of widespread grassroots support or interest in a policy or idea by using fake online accounts, such as networks of bots or fake pressure groups.

Bots: A bot is a software program that performs automated, repetitive tasks. In the case of social media, a bot can refer to an automated social media account.

Click Bait: A sensationalized or misleading article title, link, or thumbnail designed to entice users to view content.

Cyberviolence and Cyberbullying: Cyberviolence refers to acts of abuse using digital media. Cyberbullying refers to recurrent cyberviolence that is characterized by an imbalanced power dynamic.

Deepfake: Photos or videos that have been altered or entirely fabricated through machine learning to create a false depiction of something, such as a politician making an imagined statement. Using deepfakes to confuse voters during election periods by fabricating statements from candidates and election officials is a prime example of information manipulation.

Doxxing: Publishing private or identifying information, especially sensitive personal information, about a person online with malicious intent.

Information Warfare: The use of ICT such as social media to influence or weaken an opponent, including through disinformation and propaganda.

Trolling: Creating intentional discord in an online discussion space, starting quarrels between users, or generally upsetting people by posting inflammatory, insulting, or off-topic messages.

Upload Filter: Automated computer programs that scan content uploaded to an online platform before it is published. Upload filters are used by social media platforms to identify content that violates the companies’ Terms of Service, such as child sexual abuse material (CSAM).

User Generated Content (UGC): Any content such as images, videos, text, and audio that has been posted by users on online platforms.

Virality: The tendency of an image, video, or piece of information to be circulated rapidly and widely on online platforms. When content goes “viral,” important context around the information being shared such as the source, author, and publication date is often lost (known as “context collapse”), potentially distorting how the information is received and interpreted by viewers.

For additional definitions, see the EU Disinfo Lab’s disinformation glossary, a collection of 150+ terms to understand the information disorder.

How does information manipulation disrupt and distort the information ecosystem?

Information manipulation is not a new threat, but has been amplified in the digital era. Social media platforms and the algorithms that underpin them facilitate the spread of information—including false information—at unprecedented speeds. As citizens around the world come to rely on social media for personal communication, news consumption, and generally developing a day-to-day understanding of what is happening in the world, they become increasingly vulnerable to information manipulation on these platforms.

While popular social media platforms such as Facebook, X (formerly known as Twitter), and YouTube are often criticized for their role in facilitating information manipulation, this challenge is also prevalent on other platforms like Instagram, TikTok, Reddit, and even Pinterest. Information manipulation is also common on encrypted and non-encrypted messaging apps like LINE, Telegram, WhatsApp, Facebook Messenger, Signal, WeChat, and Viber. The rise of various commercial actors offering disinformation as a service makes it harder for social media companies to detect and take action against information manipulation, as trolls-for-hire are paid to pollute the information sphere. The distinction between automated bot accounts and human-curated content is also becoming less clear, with some studies suggesting that less than 60% of global web traffic is human. See the Social Media resource for more information on how the revenue model of many social media platforms can further disincentivize the proactive elimination of mis- and disinformation and inauthentic behavior.

Media training for journalists in Sviatohirsk, Ukraine. International human rights law provides a framework to balance freedom of expression and other rights. Photo credit: UCBI USAID.

Research has shown that false and misleading content tends to reach online audiences more quickly than fact-based information on the same topic. One study on the spread of falsehoods on X, for example, found that a false news story tended to reach an audience of 1,500 people six times faster than an accurate one. Why is this? Following initial assessments of disinformation that focused on its supply—including how the internet and social media have increased the reach, speed, and magnitude of disinformation—researchers turned their focus to the demand for disinformation. Human psychology was found to play a role in the consumption of information that reinforces existing views and biases, evokes a strong emotional response, and/or demonizes ‘out-groups’. Digital platforms that rank and recommend content to optimize engagement and time spent on the platform create “degenerate feedback loops” that amplify and compound these tendencies.

Newspaper pages cover a wall in Ethiopia. The disruption of the publishing business model has been a slow-motion disaster for news organizations around the world. Photo credit: Jessica Nabongo.

Innovations in generative artificial intelligence, including the launch of ChatGPT and similar advanced chatbots in late 2022, introduced additional opportunities for information manipulation by making disinformation cheaper and easier to produce for an even larger number of conspiracy theorists and malign actors. These increased risks stem in part from the ability of generative AI tools to “adapt language to match certain contexts and localize turns of phrases,” as well as to “multiply false narratives with the same message written in multiple ways, which could increase the amount of false content [online] and make it difficult to measure its virality.” Generative AI can also be used to make automated bot accounts sound more human, and to make fake profile pictures and other forms of synthetic media like “deepfakes” look even more realistic.

Online information manipulation campaigns can be facilitated and exacerbated by flawed content moderation practices and digital advertising-based revenue models.

Content moderation

The rampant spread of mis- and disinformation, hate speech, and harassment on social media platforms has placed a spotlight on their content moderation policies and procedures, which are often implemented with little to no oversight. Social media platforms have been criticized for removing too much content, for not removing enough content, for developing algorithms that fail to detect the nuances of hate speech and misinformation, and for employing human moderators that suffer from low pay, poor working conditions, and trauma induced by the harmful content they review. Amid a cacophony of voices with opinions on what content should and should not be allowed on platforms, social media companies struggle to find the right balance between absolute free speech and protecting users from information manipulation. Legislation like the European Union’s Digital Services Act provides lawmakers with a tool to hold platforms accountable for their role in facilitating information manipulation and presents “a new way of thinking about content moderation that is especially valuable for the counter-disinformation community.”

See the Social Media resource for more information on content moderation.

Digital Advertising Models

Digital advertising has enabled the self-financing of websites and blogs that share hateful, incendiary, and/or misleading content. Programmatic advertising, which relies on automated technology and algorithmic tools to buy and sell ads, is particularly problematic because advertisers may end up financing these outlets without ever being aware their ads have been placed on them.

The online movement Sleeping Giants emerged to tackle this challenge by alerting companies when their ads are placed next to inflammatory or controversial content. Check My Ads is another resource that offers services to help prevent brands from being associated with disinformation and dangerous speech. The Ads for News initiative supports local journalism by providing brands and media buyers with a curated, global inclusion list of trusted local news websites that have been screened to exclude disinformation and other content unsuitable for brands.

See the Social Media resource for more information on digital advertising.

Information manipulation is a common practice of authoritarian regimes which use social and traditional media to sow chaos and confusion and undermine democratic processes. Government entities within Russia previously engaged in information manipulation during elections in Europe and elsewhere by leveraging automated bot accounts and troll farms to share and amplify disinformation on social media with the goal of deepening existing social and political divides within the targeted countries.

The practice of astroturfing by foreign governments, interest groups, and even advertisers is a common form of information manipulation. Astroturfing refers to the use of multiple online identities (bots) and fake lobbying groups to create the false impression of widespread grassroots support for a policy, idea, or product. This technique can be used to divert media attention or establish a particular narrative around an event early on in the news cycle. Other information manipulation tactics may include search engine manipulation, fake websites, trolling, “hack-and-leak” operations, account takeovers, and censorship.

Information manipulation does not only occur in online spaces. It can also be spread through traditional media sources (like television, radio, and print outlets), as well as through academia. For example, traditional media outlets may inadvertently amplify content created as part of an information manipulation campaign if that content has been shared by an important political figure, if the content is particularly sensational and likely to attract audiences, or even through coverage intended to dispute the content of the manipulation campaign. A sophisticated actor that engages in information manipulation may capture prominent news outlets or give grants to research entities to produce analysis that supports their objectives.

How does information manipulation affect civic space and democracy?

News vendors ready their booths in Rangoon, Myanmar. Around the world, governments are pushing information manipulation campaigns targeting human rights defenders and marginalized communities. Photo credit: Richard Nyberg/USAID.

Information manipulation is as much an attack on trust as it is on truth. A contested information environment can contribute to decreasing levels of public trust as citizens become unable or unwilling to distinguish between legitimate and illegitimate sources of information. The hollowing out of the traditional media industry and the rise in consumption of news via social media contribute to this trend, particularly in contexts where information literacy is low and democratic institutions are weak. During health crises, natural disasters, and other emergencies, a lack of public trust combined with rampant mis- and disinformation can hinder response efforts and threaten citizens’ health and well-being.

Illiberal governments are increasingly investing in information manipulation campaigns targeting civil society, human rights defenders, and marginalized groups. In Myanmar, for example, military personnel engaged in a systematic campaign on Facebook to spread propaganda and incendiary comments about the country’s mostly Muslim Rohingya minority group, ultimately leading to widespread offline violence and the largest forced migration in recent history. The harassment and trolling of journalists and political candidates (particularly women) on digital platforms can lead to self-censorship or their departure from online spaces altogether, with a negative effect on the diversity of voices in the information space.

On the other hand, governments from Singapore to Kenya to Cambodia have also enacted and used “fake news” laws to restrict free expression and stifle dissent in the name of tackling information manipulation. During the COVID-19 pandemic, an Egyptian journalist who criticized the government’s response was charged with spreading false news, misusing social media, and joining a terrorist group—demonstrating how the overzealous application of such laws can facilitate censorship. These “fake news” laws can sometimes even extend beyond national borders, as in the case of a Malaysian NGO whose website was blocked in Singapore after publishing an article about the country’s death row prisoners. Legal provisions that enable the restriction of content may be written into a variety of laws on cybercrime, defamation, information technology, sedition, social media, etc.

As in the case of the Russia-Ukraine war (dubbed by some as the first full-blown “social media war”), conflicts now play out not only on the battlefield but on social media and other online spaces. Russia’s information campaign against Ukraine included the use of propaganda, fake social media accounts, and manipulated videos to sow division, generate confusion, and generally erode international support for Ukraine. These efforts have been more successful in swaying public opinion in some countries and regions than others, but generally hold important lessons for the role of disinformation in future international conflicts.

Information manipulation and elections

The strategic deployment of false, exaggerated, or contradictory information during elections is a potent tool for undercutting democratic principles. Deliberately blurred lines between truth and fiction amplify voter confusion and devalue fact-based political debate. Rumors, hearsay, and online harassment are used to damage political reputations, exacerbate social divisions, mobilize supporters, marginalize women and minority groups, and undermine the impact of change-makers.

Content intended to discourage or prevent citizens from voting can take the form of inaccurate information about the date of an election, or efforts to persuade people that an election is rigged and their vote won’t matter. Voter suppression content that questions the legitimacy of electoral processes or the security of voting systems can also lay the groundwork for disputing election results, as information manipulation degrades trust in election management bodies. This content does not require malicious intent, as someone who unknowingly shares incorrect information about the deadline for voter registration, for example, could still confuse other citizens and prevent them from being able to vote in the election.

Different actors have different goals for influencing narratives during an election. For example, political parties and campaigns may use mis- or disinformation, mal-information, or propaganda to discredit the opposition or manipulate political discourse in a way that serves their campaign agenda, while a foreign adversary might seek to influence the election outcome, advance national interests, or sow chaos. A coordinated information manipulation campaign on WhatsApp—which included doctored photos, manipulated audio clips, and fake “fact-checks” discrediting authentic news stories—played a troubling role in boosting far-right candidate Jair Bolsonaro to the Brazilian presidency in 2018 (see the case studies section for more on the role of information manipulation in Brazil).

Ultimately, information manipulation campaigns can destabilize political environments, exacerbate the potential for election-related violence, pervert the will of voters, entrench authoritarians, and undermine confidence in democratic systems more broadly. In many fragile democracies, strong democratic institutions that could help counter the impact of fake news and broader information manipulation campaigns—such as a robust independent media, agile political parties, and sophisticated civil society organizations—remain nascent.

What can be done to address information manipulation?

Residents listen to Ratego FM in Siaya County, Kenya. CSOs are uniquely situated to facilitate educational and outreach initiatives that empower individuals to recognize disinformation. Photo credit: Amunga Eshuchi.

Fact-checking

One tool that journalists, researchers, and civil society can use to counter information manipulation is fact-checking, or the process of verifying information and providing accurate, unbiased analysis of a claim. Hundreds of civil society fact-checking initiatives have sprung up in recent years around specific flashpoints, with the lessons learned and infrastructure built around those flashpoints then being applied to other issues that impact the same information ecosystems. During the 2018 Mexican general elections, the CSO-driven initiative Verificado 2018 partnered with Pop-Up News, Animal Político, AJ+ Español, and 80 other partners to fact-check and distribute election-related information, particularly among youth. Other successful fact-checking initiatives around the world include Africa Check, the Cyber News Verification Lab in Hong Kong, BOOM in India, Checazap in Brazil, the Centre for Democracy and Development’s Fact Check archive in West Africa, and Meedan’s Check initiative in Ukraine.

Social media companies have also invested in fact-checking teams and technologies, and have built partnerships with news agencies to counter the hoaxes, conspiracy theories, rumors, and propaganda that circulate on their platforms.

Prebunking

Fact-checks or “debunks” are effective, but they tend to be labor-intensive and are not read by everyone. “Prebunking” is an alternative approach based on the idea of inoculating people against false or misleading information by showing them examples of information manipulation so they are better equipped to spot and question it in the future. According to First Draft, there are three main types of prebunks:

  1. Fact-based: Correcting a specific false claim or narrative
  2. Logic-based: Explaining tactics used to manipulate
  3. Source-based: Pointing out bad sources of information

In 2022, Google launched a prebunking campaign in Poland, the Czech Republic, and Slovakia that used videos to dissect different techniques seen in false claims about Ukrainian refugees. Researchers have also created online games that let players pretend to be trolls spreading fake news with the goal of improving people’s understanding of how information manipulation campaigns are built, and ultimately increasing their skepticism of them.

Media literacy

Media literacy programs can help increase citizens’ ability to differentiate between factual and false or misleading content by encouraging the use of critical thinking skills while consuming traditional and online media content. These programs can also raise awareness about how information manipulation disproportionately harms women and marginalized groups. Media literacy is a life-long process, and should be but one part of a comprehensive toolkit to counter information manipulation.

IREX’s Learn to Discern (L2D) initiative aims to build communities’ resilience to disinformation, propaganda, and hate speech in traditional and online media. After piloting a media literacy curriculum in classrooms, libraries, and community centers in Ukraine, L2D was extended to other countries including Serbia, Tunisia, Jordan, and Indonesia. Global initiatives such as the Mozilla Foundation’s Web Literacy framework and Meta’s digital literacy library provide access to educational media literacy materials and offer an opportunity for users to learn how to effectively navigate the virtual world.

Platform interventions

Platforms can leverage design features to help mitigate information manipulation. For example, some platforms strategically send articles with corrective information to users who share false content. During the COVID-19 pandemic, Facebook showed users who engaged with false content messages that debunked the claims, and also redirected users who shared false information to authoritative sources like the World Health Organization (WHO) or the Centers for Disease Control and Prevention (CDC). These interventions, along with policies and features designed to limit the forwarding of viral content, aim to lessen the impact of information manipulation.

Open source intelligence (OSINT)

Open source intelligence is a method of gathering data and information from publicly available sources, including social media, websites, and news articles. Transparent, volunteer-led, crowdsourced information gathering and analysis can contribute to the debunking of falsehoods and myths, including in contexts like the Russia-Ukraine war. The investigative journalism group Bellingcat has established itself as a leader in leveraging OSINT to report on conflicts and human-rights abuses, like the use of chemical weapons in Syria’s civil war.

Counterspeech

Counterspeech generally includes any direct response to hateful or harmful language that seeks to undermine it. According to the Dangerous Speech Project, there are two types of counterspeech: organized counter-messaging campaigns and spontaneous, organic responses. Counterspeech can sometimes aim to educate the perpetrator of the harmful content, but more often it aims to reframe the online discussion for onlookers, change the tone in online public spaces, and crowd out harmful content with positive, inclusive messaging. One example of counterspeech in action is #jagärhär (#Iamhere), a Facebook group of about 75,000 people mostly based in Sweden that mobilized to add positive notes on comment sections where hatred and misinformation were being spread.

What can civil society do to limit information manipulation?

Reliable, authentic information is critical to transparent, inclusive, and accountable governance and to citizens’ ability to exercise their civic rights and responsibilities. Civil society, in particular, has an important role to play in limiting the effects of information manipulation and strengthening the resilience of local information ecosystems.

  1. First, civil society can act as a watchdog. By closely monitoring social media, civil society can identify and expose information manipulation campaigns affecting their local communities as they emerge. Continued access to social media data for both academic and non-academic researchers is critical for this work.
  2. Second, civil society is uniquely situated to implement educational and outreach initiatives, including media literacy programs, that empower individuals to recognize information manipulation. This may involve coordination with schools, libraries, community centers, and other stakeholders.
  3. Third, civil society can apply pressure to tech companies, businesses, and advertisers that wittingly or unwittingly host, support, or incentivize creators of false and misleading content.
  4. Fourth, civil society can work with governments to replace “anti-fake news laws” and other broad content restrictions with narrowly focused laws that combat disinformation while protecting the freedom of expression.

Questions

If you are trying to understand how to mitigate the risks of information manipulation in your work, ask yourself the following questions:

  1. How does my organization verify information? What internal controls does my organization have to prevent the inadvertent spread of false or misleading content?
  2. What internal trainings or programming should we undertake to better understand the risks associated with information manipulation?
  3. How might we respond to an information-manipulation campaign targeting our organization or partners?
  4. What content-distribution strategies beyond publishing might we consider to prevent and counter information manipulation?
  5. When we publish something in error, what is our process for issuing corrections?
  6. What security protocols should be in place in case a staff member, participant, or partner is a target of disinformation, online violence, harassment, doxxing, etc.?
  7. What programs or initiatives can we create and implement to improve media literacy in our community?

Case Studies

Russian Information Operations in Ukraine

Reality Built on Lies: 100 Days of Russia’s War of Aggression in Ukraine

“Perhaps the most significant characteristic of the Kremlin’s disinformation campaign and information manipulation targeting Ukraine throughout [the] Russian war of aggression is its adaptability to new realities. In other words, over the past hundred days, the Kremlin has been constantly moving the goalposts of its disinformation in an effort to redefine what the objectives of the ‘special operation’ are, and what ‘success’ might look like for the Russian Armed Forces invading Ukraine… Besides moving the goalposts for success, Russian state-controlled disinformation outlets also actively advance false ‘humanitarianism’ narratives for another purpose. As the world inevitably learned of the senseless atrocities Russia committed in Ukraine, increasingly amounting to war crimes, the pro-Kremlin disinformation ecosystem went into overdrive to deny, confuse, distract, dismay, and shift the blame.”

Gendered Disinformation Undermines Women’s Rights and Democracy

Monetizing Misogyny

The global initiative #ShePersisted interviewed over one hundred women political leaders and activists all over the world in an attempt to understand the patterns, impact, and modus operandi of gendered disinformation campaigns against women in politics. Case studies on Brazil, Hungary, India, Italy, and Tunisia explore how gendered disinformation has been used by political movements, and at times the government itself, to undermine women’s political participation, and to weaken democratic institutions and human rights. Crucially, the research also looks at the responsibilities and responses that both state actors and digital platforms have taken—or most often, failed to take—to address this issue.

COVID-19 Disinformation

Disinfodemic: Deciphering COVID-19 Disinformation

“In contaminating public understanding of different aspects of the pandemic and its effects, COVID-19 disinformation has harnessed a wide range of formats. Many have been honed in the context of anti-vaccination campaigns and political disinformation. They frequently smuggle falsehoods into people’s consciousness by focusing on beliefs rather than reason, and feelings instead of deduction. They rely on prejudices, polarization and identity politics, as well as credulity, cynicism and individuals’ search for simple sense-making in the face of great complexity and change. The contamination spreads in text, images, video and sound.”

Is China Succeeding at Shaping Global Narratives about COVID-19?


“Faced with criticism over its handling of the pandemic, the Chinese government and its proxies have leveraged social media—especially Twitter—to spread its narratives and propaganda abroad… In early 2021, Chinese media spread claims that the Pfizer and Moderna vaccines are risky and even deadly, highlighting extremely rare sudden deaths or illnesses from people who received the vaccine in France, Germany, Mexico, Norway and Portugal… Taiwan was another major target of Chinese Covid-19 disinformation tactics—an unsurprising development given Beijing’s persistent use of disinformation against the island. China repeatedly sought to cast doubt on Taipei’s success at curtailing the spread of the virus… In addition to criticizing and spreading disinformation about other countries’ handling of the pandemic, Chinese media outlets and diplomats amplified unfounded conspiracy theories that SARS-CoV-2 originated outside of China.”

The Spread of Climate Disinformation to Spanish-Speaking Communities

Los Eco-Ilógicos

Green Latinos, with support from Friends of the Earth, commissioned Graphika to study how false and misleading narratives about climate change reach Spanish-speaking communities online. The analysis aimed to understand how these narratives spread through the online ecosystem of Spanish-speaking internet users, the groups and individuals who seed and disseminate them, and the tactics these actors employ. Through the analysis, Graphika identified a sprawling online network of users across Latin America and Spain that consistently amplify climate misinformation narratives in Spanish. While some of these accounts focus specifically on climate-related conversations, the majority promote ideologically right-wing narratives, some of which touch on climate change. Many of the narratives identified also overlapped with existing online conversations unrelated to climate change, such as COVID-19 misinformation or conspiracy theories about a secret ruling organization of totalitarian, global elites.

Curbing Information Manipulation in African Elections

Fact from Fiction: Curbing Mis/disinformation in African Elections

“Election–related misinformation and disinformation seeks to, primarily, manipulate the decision-making of electorates, cast doubt in the electoral process, and delegitimize the outcome of the elections. This is a dangerous trend, particularly in fragile democracies where this is capable of inciting hate and stirring violent outbreak. Misinformation and disinformation were major factors in the escalation of post-election violence in Kenya following the 2017 general elections. Similarly, in 2020, the Central African Republic experienced deadly post-election violence as a result of a contested election, targeted disinformation efforts, and divisive language. The same happened in Cote d’Ivoire, where 50 people were killed in the political and intercommunal violence that plagued the presidential election on October 31, 2020.”

For additional context, see also the Atlantic Council’s June 2023 report on The Disinformation Landscape in West Africa and Beyond.
