5G Technology

What is 5G technology?

Digital inclusion project in the Peruvian Amazon. Rural areas with less existing infrastructure are likely to be left behind in 5G development. Photo credit: Jack Gordon for USAID / Digital Development Communications.

New generations of technology come along almost every 10 years. 5G, or the fifth generation of mobile technologies, is expected to be 100 times faster and have 1000 times more capacity than previous generations, facilitating fast and reliable connectivity, wider data flow, and machine-to-machine communications. 5G is not designed primarily to connect people, but rather to connect devices. 2G facilitated access to voice calls and texting, 3G drove video and social media services, and 4G realized digital streaming and data-heavy applications. 5G will support smart homes, 3D video, the cloud, remote medical services, virtual and augmented reality, and machine-to-machine communications for industry automation. However, even as the United States, Europe, and the Asia Pacific region transition from 4G to 5G, many other parts of the world still rely primarily on 2G and 3G networks, and further disparities exist between rural and urban connectivity. Watch this video for an introduction to 5G technology and both the excitement and caution surrounding it.

What do we mean by “G?”

“G” refers to generation and indicates a threshold for a significant shift in capability, architecture, and technology. These designations are made by the telecommunications industry through the standards-setting authority known as 3GPP. 3GPP creates new technical specifications approximately every 10 years, hence the use of the word “generation.” An alternate naming convention uses the acronym IMT (International Mobile Telecommunications) along with the year the standard became official; for example, 3G is also referred to as IMT-2000.

1G: Allowed analogue phone calls; brought mobile devices (mobility)
2G: Allowed digital phone calls and messaging; allowed for mass adoption and eventually enabled mobile data (2.5G)
3G: Allowed phone calls, messaging, and internet access
3.5G: Allowed stronger internet access
4G: Allowed faster internet (better video streaming)
5G: “The Internet of Things”; will allow devices to connect to one another
6G: “The Internet of Senses”; little is yet known

This video provides a simplified overview of 1G-4G.

Cellphone shop in Tanzania. 5G technology requires access to 5G-compatible smartphones and devices. Photo credit: Riaz Jahanpour for USAID Tanzania / Digital Development Communications.

There is a gap in many developing countries between the cellular standard users subscribe to and the standard they actually use: many subscribe to 4G but, because it does not perform as advertised, may switch back to 3G. This switch, or “fallback,” is not always evident to the consumer, and it may be harder to notice with 5G than with previous networks.

Even once 5G infrastructure is in place and users have access to it through capable devices, the technology is not guaranteed to work as promised; in fact, chances are it will not. 5G will still rely on 3G and 4G technologies, and carriers will continue operating their 3G and 4G networks in parallel.

How does 5G technology work?

There are several key performance indicators (KPIs) that 5G aims to achieve. In essence, 5G will strengthen cellular networks by using more radio frequencies along with new techniques that strengthen and multiply connection points. This means lower latency: cutting down the time between a click on your device and the moment that command is executed. It will also allow more devices to connect to one another through the Internet of Things.
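As a rough illustration of what lower latency means in practice, the sketch below compares round-trip budgets. The specific figures are commonly cited ballpark numbers, not from this resource and not guarantees (roughly 50 ms for 4G and a 1 ms design target for 5G's ultra-low-latency mode), so treat this as an order-of-magnitude comparison only.

```python
# Back-of-envelope comparison of round-trip latency budgets.
# The figures are commonly cited ballpark numbers, not guarantees:
# ~50 ms for 4G and a ~1 ms design target for 5G.

LATENCY_MS = {"4G": 50.0, "5G": 1.0}

def max_round_trips_per_second(network: str) -> int:
    """How many request/response cycles fit into one second."""
    return int(1000 / LATENCY_MS[network])

for network in ("4G", "5G"):
    print(f"{network}: up to {max_round_trips_per_second(network)} round trips per second")
```

A difference of this magnitude is what makes time-critical applications, such as vehicle coordination or remote control of machinery, plausible on 5G but not on earlier generations.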

Understanding Spectrum

To understand 5G, it is important to understand a bit about the electromagnetic radio spectrum. This video gives an overview of how cell phones use spectrum.

5G will bring faster speed and stronger services by using more spectrum. To establish a 5G network, it is necessary to secure spectrum for that purpose in advance. Governments and companies have to negotiate spectrum—usually by auctioning off “bands,” sometimes for huge sums. Spectrum allocation can be a very complicated and political process. Many experts fear that 5G, which requires lots of spectrum, threatens so-called “network diversity”—the idea that spectrum should be used for a variety of purposes across government, business, and society.

For more on spectrum allocation, see the Internet Society’s publication on Innovations in Spectrum Management (2019).

Millimeter Waves

5G hopes to tap into new, unused bands at the top of the radio spectrum, known as millimeter waves (mmWaves). These are much less crowded than the lower bands, allowing faster data transfers. But millimeter waves are tricky: their maximum range is approximately 1.6 km, and trees, walls, rain, and fog can limit the distance the signal travels to only 1 km. As a result, 5G will require a much denser grid of cell sites than the fewer, larger towers 4G relies on. 5G will need towers every 100 meters outdoors, and every 50 meters indoors, which is why 5G is best suited to dense urban centers (as discussed in more detail below). The theoretical potential of millimeter waves is exciting, but in practice, most 5G carriers are trying to deploy 5G in the lower parts of the spectrum.
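The tower-spacing figure above implies a striking deployment density. The sketch below estimates how many outdoor small cells a given area would need at the ~100 m spacing cited; the uniform square-grid layout is a simplifying assumption of ours (real deployments follow streets and demand), so this is an order-of-magnitude estimate only.

```python
import math

# Rough small-cell count for millimeter-wave 5G coverage, using the
# ~100 m outdoor site spacing cited above. The uniform square grid is
# a simplifying assumption; real deployments follow streets and demand.

def small_cells_needed(area_km2: float, spacing_m: float = 100.0) -> int:
    """Grid points at `spacing_m` intervals covering a square of `area_km2`."""
    side_m = math.sqrt(area_km2) * 1000           # side length in meters
    per_side = math.ceil(side_m / spacing_m) + 1  # fence-post count per side
    return per_side ** 2

# A single square kilometer of urban coverage at 100 m spacing:
print(small_cells_needed(1.0))  # 121 sites
```

Even one square kilometer requires over a hundred sites, which helps explain why millimeter-wave 5G is commercially attractive mainly in dense, wealthy urban centers.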

Don’t forget about fiber!

5G technology runs on fiber infrastructure. Fiber can be understood as the nervous system of a mobile network, connecting data centers to cell towers.

5G requires data centers, fiber, cell towers, and small cells

Mobile operators and international standards-setting bodies, including the International Telecommunication Union, believe fiber is the best connective material due to its long life, high capacity, high reliability, and ability to support very high traffic. But the initial investment is expensive (a 2017 Deloitte study estimated that 5G deployment in the United States would require at least a $130 billion investment in fiber) and often cost-prohibitive for suppliers and operators, especially in developing countries and rural areas. 5G is sometimes advertised as a replacement for fiber; however, fiber and 5G are complementary technologies.

The chart below is often used to explain the primary features that make up 5G technology (enhanced capacity, low latency, and enhanced connectivity) and the potential applications of these features.

Features that make up 5G technology: enhanced capacity, low latency, and enhanced connectivity, and the potential applications of these features

Who supplies 5G technology?

The market of 5G providers is very concentrated, even more so than for previous generations. Only a handful of companies are capable of supplying telecommunications operators with the necessary technology. Huawei (China), Ericsson (Sweden), and Nokia (Finland) have led the charge to expand 5G and typically interface with local telecom companies, sometimes providing end-to-end equipment and maintenance services.

In 2019, the United States government passed a defense authorization spending act, NDAA Section 889, that essentially prohibits U.S. agencies from using telecommunications equipment made by Chinese suppliers (for example, Huawei and ZTE). The restriction was put in place over fears that the Chinese government may use its telecommunications infrastructure for espionage (see more in the Risks section). NDAA Section 889 could apply to any contracts made with the U.S. government, and so it is critical for organizations considering partnerships with Chinese suppliers to keep in mind the legal challenges of trying to engage with both the U.S. and Chinese governments in relation to 5G.

Of course, this means that the choice of 5G manufacturers becomes much more limited. Chinese companies have by far the largest market share of 5G technology. Huawei has the most patents filed and the strongest lobbying presence within the International Telecommunication Union.

The 5G playing field is fiercely political, with strong tensions between China and the United States. Because 5G technology is closely connected to chip manufacturing, it is important to keep an eye on “the chip wars”. Suppliers reliant on American and Chinese companies are likely to get caught in the crossfire as the trade war between these countries worsens, because supply chains and manufacturing of equipment are often dependent on both countries. Peter Bloom, founder of Rhizomatica, points out that the global chip market is projected to grow to $22.41 billion by 2026. Bloom cautions: “The push towards 5G encompasses a plethora of interest groups, particularly governments, financing institutions, and telecommunications companies, that demands to be better analyzed in order to understand where things are moving, whose interests are being served, and the possible consequences of these changes.”


How is 5G relevant in civic space and for democracy?

Mobile money agency in Ghana. Roughly 50% of the world’s population is still not connected to the internet. Photo credit: John O’Bryan / USAID.

5G is the first generation that does not prioritize access and connectivity for humans. Instead, 5G provides a level of super-connectivity for luxury use cases and specific environments; for instance, for enhanced virtual reality experiences and massively multiplayer video games. Many of the use cases advertised, like remote surgery, are theoretical or experimental and do not yet exist widely in society. Indeed, telesurgery is one of the most-often-cited examples of the benefits of 5G, but it remains a prototype technology. Implementing it at scale would require working out many technical and legal issues, along with developing a global network.

Access to education, healthcare, and information are fundamental rights; but multiplayer video games, virtual reality, and autonomous vehicles—all of which would rely on 5G—are not. 5G is a distraction from the critical infrastructure needed to get people online to fully enjoy their fundamental rights and to allow for democratic functioning. The focus on 5G actually diverts attention away from immediate solutions to improving access and bridging the digital divide.

The percentage of the global population using the internet is on the rise, but a significant portion of the world is still not connected. 5G is not likely to address the divide in internet access between rural and urban populations, or between developed and developing economies. What is needed to improve internet access in industrially developing contexts is more fiber, more internet exchange points (IXPs), more cell towers, more internet routers, more wireless spectrum, and reliable electricity. In an industry white paper, only one out of 125 pages discusses a “scaled down” version of 5G that would address the needs of areas with extremely low average revenue per user (ARPU). These solutions include further limiting the geographic areas of service.

Digital trainers in Mugumu, Tanzania. 5G is not designed primarily to connect people, but rather to connect devices. Photo credit: Bobby Neptune for DAI.

This presentation by the American corporation Intel at an ITU regional forum in 2016 advertises the usual aspirations for 5G: autonomous vehicles (labeled as “smart transportation”), virtual reality (labeled as “e-learning”), remote surgery (labeled as “e-health”), and sensors to support water management and agriculture. Similar highly specific and theoretical future use cases—autonomous vehicles, industrial automation, smart homes, smart cities, smart logistics—were advertised during a 2020 webinar hosted by the Kenya ICT Action Network in partnership with Huawei.

In both presentations, the emphasis is on connecting objects, demonstrating how 5G is designed for big industries rather than for individuals. Even if 5G were accessible in remote rural areas, individuals would likely have to purchase the most expensive, unlimited data plans to access it, on top of acquiring 5G-compatible smartphones and devices. Telecommunications companies themselves estimate that only 3% of connections in sub-Saharan Africa will use 5G. It is estimated that by 2025, most people will still be using 3G (roughly 60%) or 4G (roughly 40%), a technology that has already existed for 10 years.


5G Broadband / Fixed Wireless Access (FWA)

Because most people in industrially developing contexts connect to the internet via cell-phone infrastructure and mobile broadband, the most useful form of 5G for them would be “5G broadband,” also called 5G Fixed Wireless Access (FWA). FWA is designed to replace “last mile” infrastructure with a wireless 5G network. Indeed, that “last mile”—the final distance to the end user—is often the biggest barrier to internet access across the world. But because the vast majority of these 5G networks will rely on physical fiber connections, FWA without fiber will not be of the same quality. These FWA networks will also be more expensive for network operators to maintain than traditional infrastructure or “standard fixed broadband.”

This article by one of the top 5G providers, Ericsson, asserts that FWA will be one of the main uses of 5G; but the article also shows that operators will have wide latitude to adjust their rates, and admits that many markets will still be served by 3G and 4G.

5G will not replace other kinds of internet connectivity for citizens

While 5G requires enormous investment in physical infrastructure, new generations of Wi-Fi access are becoming more accessible and affordable. There is also an increasing variety of “community network” solutions, including Wi-Fi meshnets and sometimes even community-owned fiber. For further reading, see 5G and the Internet of EveryOne: Motivation, Enablers, and Research Agenda, IEEE (2018). These are important alternatives to 5G that should be considered in any context (developed and developing, urban and rural).

“If we are talking about thirst and lack of water, 5G is mainly a new type of drink cocktail, a new flavor to attract sophisticated consumers, as long as you live in profitable places for the service and you can pay for it. Renewal of communications equipment and devices is a business opportunity for manufacturers mainly, but not just the best ‘water’ to the unconnected, rural, … (non-premium clients), even a problem as investment from operators gets first pushed by the trend towards satisfying high paying urban customers and not to spread connectivity to low pay social/universal inclusion customers.” – IGF Dynamic Coalition on Community Networks, in communication with the author of this resource.

It is critical not to forget about previous-generation networks. 2G will continue to be important for providing broad coverage. 2G is already very present (around 95% coverage in low- and middle-income countries), requires less data, and carries voice and SMS traffic well, which makes it a safe and reliable option for many situations. Also, upgrading existing 2G sites to 3G or 4G is less costly than building new sites.

5G and the private sector

The technology that 5G facilitates (the Internet of Things, smart cities, smart homes) will encourage the installation of chips and sensors in an increasing number of objects. The devices 5G proposes to connect are not primarily phones and computers, but sensors, vehicles, industrial equipment, implanted medical devices, drones, cameras, etc. Linking these devices raises a number of security and privacy concerns, as explored in the Risks section.

The actors that stand to benefit most from 5G are not citizens or democratic governments, but corporate actors. The business model powering 5G centers around industry access to connected devices: in manufacturing, in the auto industry, in transport and logistics, in power generation and efficiency monitoring, etc. 5G will boost the economic growth of those actors able to benefit from it, particularly those invested in automation, but it would be a leap to assume the distribution of these benefits across society.

The introduction of 5G will bring the private sector massively into public space through the network carriers, operators, and other third parties behind the many connected devices. This overtaking of public space by private actors (usually foreign private actors) should be carefully considered from the lens of democracy and fundamental rights. Though the private sector has already entered our public spaces (streets, parks, shopping malls) with previous cellular networks, 5G’s arrival, bringing with it more connected objects and more frequent cell towers, will increase this presence.

While 5G networks hold the promise of enhanced connectivity, there is growing concern about their misuse for anti-democratic practices. Governments in various regions have been observed using technology to obstruct transparency and suppress dissent, with instances of internet shutdowns during elections and surveillance of political opponents. From 2014 to 2016 for example, internet shutdowns were used in a third of the elections in sub-Saharan Africa.

These practices are often facilitated by collaborations with companies providing advanced surveillance tools, enabling the monitoring of journalists and activists without due process. The substantial increase in data transmission that 5G offers raises the stakes, potentially allowing for more pervasive surveillance and more significant threats to the privacy and rights of individuals, particularly those marginalized. Furthermore, as electoral systems become more technologically reliant, with initiatives to move voting online, the risk of cyberattacks exploiting 5G vulnerabilities could compromise the integrity of democratic elections, making the protection against such intrusions a critical priority.


Opportunities

The advertised benefits of 5G usually fall into three areas, as outlined below. A fourth area of benefits will also be explained—though less often cited in the literature, it would be the most directly beneficial for citizens. It should be noted that these benefits will not be available soon, and perhaps never available widely. Many of these will remain elite services, only available under precise conditions and for high cost. Others will require standardization, legal and regulatory infrastructure, and widespread adoption before they can become a social reality.

The chart below, taken from a GSMA report, shows the generally listed benefits of 5G. The benefits in the white section could be achieved on previous networks like 4G, and those in the purple section would require 5G. This further emphasizes the fact that many of the objectives of 5G are actually possible without it.

Benefits of 5G

Augmented Reality & Tactile Internet

5G has many potential uses in entertainment, especially in gaming. Low latency will allow massively multiplayer games, higher quality video conferencing, faster downloading of high-quality videos, etc. Augmented and virtual reality are advertised as ways to create immersive experiences in online learning. 5G’s ability to connect devices will allow for wearable medical devices that can be controlled remotely (though not without cybersecurity risks). Probably the most exciting example of “tactile internet” is the possibility of remote surgery: an operation could be performed by a robot that is remotely controlled by a surgeon somewhere across the world. The systems necessary for this are very much in their infancy and will also depend on the development of other technology, as well as regulatory and legal standards and a viable business model.

Autonomous Vehicles

The major benefit of 5G will come in the automobile sector. It is hoped that the high speed of 5G will allow cars to coordinate safely with one another and with other infrastructure. For self-driving vehicles to be safe, they will need to be able to communicate with one another and with everything around them within milliseconds. The super speed of 5G is important for achieving this. (At the same time, 5G raises other security concerns for autonomous vehicles.)

Machine-to-machine connectivity (IoT/smart home/smart city)

Machine-to-machine connectivity, or M2M, already exists in many devices and services, but 5G would further facilitate it. This stands to benefit industrial players (manufacturers, logistics suppliers, etc.) most of all, but could arguably benefit individuals or cities that want to track their use of certain resources like energy or water. Installed sensors can collect data, which can then be analyzed for efficiency so the system can be optimized. Typical M2M applications in the smart home include thermostats and smoke detectors, consumer electronics, and healthcare monitoring. It should be noted that many such devices can operate on 4G, 3G, and even 2G networks.

5G-based Fixed-Wireless Access (FWA) Can Provide Gigabit Broadband to Homes

Probably the most relevant benefit of 5G to industrially developing contexts will be the potential of FWA. FWA is less often cited in the marketing literature, because it does not allow the industrial benefits promised in full. Because it allows breadth of connectivity rather than revolutionary strength or intensity, it should be thought of as a different kind of “5G”. (See the 5G Broadband / Fixed Wireless Access section.) As explained, FWA will still require infrastructure investments, and will not necessarily be more affordable than broadband alternatives due to the increasing power given to the carriers.


Risks

The use of emerging technologies can also create risks in civil society programming. Read below to learn how to discern the possible dangers associated with 5G in DRG work, as well as how to mitigate unintended—and intended—consequences.

Personal Privacy

With 5G connecting more and more devices, the private sector will be moving further into public space through sensors, cameras, chips, etc. Many connected devices will be things we never expected to be connected to the internet before: washing machines, toilets, cribs, etc. Some will even be inside our bodies, like smart pacemakers. The placement of devices with chips into our homes and environments facilitates the collection of data about us, as well as other forms of surveillance.

A growing number of third-party actors have sophisticated methods for collecting and analyzing personal data. Some devices may only ultimately collect meta-data, but this can still seriously reduce privacy. Meta-data is information connected to our communications that does not include the content of those communications: for example, numbers called, websites visited, geographical location, or the time and date a call was made. The EU’s highest court has ruled that this kind of information can be considered just as sensitive as the actual contents of communications because of insights that the data can offer into our private lives. 5G will allow telecommunications operators and other actors access to meta-data that can be assembled for insights about us that reduce our privacy.

Finally, 5G requires many small-cell base stations, meaning these towers will be much closer to people’s homes and workplaces, mounted on street lights, lamp posts, etc. This will make location tracking much more precise and location privacy nearly impossible.

Espionage

For most countries, 5G will be supplied by foreign companies. In the case of Huawei and ZTE, the government of the country in which these companies operate (the People’s Republic of China) does not uphold human rights obligations or democratic values. For this reason, some governments are concerned about the potential abuse of data for foreign espionage. Several countries, including the United States, Australia, and the United Kingdom, have taken actions to limit the use of Chinese equipment in their 5G networks due to fears of potential spying. A 2019 report on the security risks of 5G by the European Commission and the European Agency for Cybersecurity warns against using a single supplier to provide 5G infrastructure because of espionage risks. The general argument against a single supplier (usually made against the Chinese supplier Huawei) is that if the supplier provides the core network infrastructure for 5G, the supplier’s government (China) will gain immense surveillance capacity through meta-data or even through a “backdoor” vulnerability. Government spying through the private sector and telecom equipment is commonplace, and China is not the only culprit. But the massive network capacity of 5G and the many connected devices collecting personal information will raise both the stakes and the risk.

Cybersecurity Risks

As a general rule, the more digitally connected we are, the more vulnerable we become to cyber threats. 5G aims to make us and our devices ultra-connected. If a self-driving car on a smart grid is hacked or breaks down, this could bring immediate physical danger, not just information leakage. 5G centralizes infrastructure around a core, which makes that core especially vulnerable. And because 5G-based networks will have such wide application, an internet shutdown would endanger large parts of the network and everything that depends on it.

5G infrastructure can simply have technical deficiencies. Because 5G technology is still in pilot phases, many of these deficiencies are not yet known. 5G advertises some enhanced security functions, but security holes remain because devices will still be connected to older networks.

Massive Investment Costs and Questionable Returns

As A4AI explains, “The rollout of 5G technology will demand significant investment in infrastructure, including in new towers capable of providing more capacity, and bigger data centres running on efficient energy.” These costs will likely be passed on to consumers, who will have to purchase compatible devices and sufficient data. 5G requires massive infrastructure investment—even in places with strong 4G infrastructure, existing fiber-optic cables, good last-mile connections, and reliable electricity. Estimates for the total cost of 5G deployment—including investment in technology and spectrum—are as high as $2.7 trillion USD. Due to the many security risks, regulatory uncertainties, and generally untested nature of the technology, 5G is not necessarily a safe investment even in wealthy urban centers. The high cost of introducing 5G will be an obstacle for expansion and prices are unlikely to fall enough to make 5G widely affordable.

Because this is such a complex new product, there is a risk of purchasing low-quality equipment. 5G is heavily reliant on software and services from third-party suppliers, which multiplies the chance of defects in parts of the equipment (poorly written code, poor engineering, etc.). The process of patching these flaws can be long, complicated, and costly. Some vulnerabilities may go unidentified for a long time but can suddenly cause severe security problems. Lack of compliance with industry or legal standards could cause similar problems. In some cases, new equipment may not be flawed or faulty, but it may simply be incompatible with existing equipment or with other purchases from other suppliers. Moreover, there will be large costs just to run the 5G network properly: securing it from cyberattacks, patching holes and addressing flaws, and keeping up the material infrastructure. Skilled and trusted human operators are needed for these tasks.

Foreign Dependency and Geopolitical Risks

Installing new infrastructure means dependency on private sector actors, usually from foreign countries. Over-reliance on foreign private actors raises multiple concerns, as mentioned, related to cybersecurity, privacy, espionage, excessive cost, compatibility, etc. Because there are only a handful of actors that are fully capable of supplying 5G, there is also the risk of becoming dependent on a foreign country. With current geopolitical tensions between the U.S. and China, countries trying to install 5G technology may get caught in the crossfire of a trade war. As Jan-Peter Kleinhans, a security and 5G expert at Stiftung Neue Verantwortung (SNV), explains, “The case of Huawei and 5G is part of a broader development in information and communications technology (ICT). We are moving away from a unipolar world with the U.S. as the technology leader, to a bipolar world in which China plays an increasingly dominant role in ICT development.” The financial burdens of this bipolar world will be passed onto suppliers and customers.

Class/Wealth & Urban/Rural Divides

“Without a comprehensive plan for fiber infrastructure, 5G will not revolutionize Internet access or speeds for rural customers. So anytime the industry is asserting that 5G will revolutionize rural broadband access, they are more than just hyping it, they are just plainly misleading people.” — Ernesto Falcon, the Electronic Frontier Foundation.

5G is not a lucrative investment for carriers in more rural areas and developing contexts, where the density of potentially connected devices is lower. There is industry consensus, supported by the ITU itself, that the initial deployment of 5G will be in dense urban areas, particularly wealthy areas with industry presence. Rural and poorer areas with less existing infrastructure are likely to be left behind because they are not a good commercial investment for the private sector. For rural and even suburban areas, millimeter waves and cellular networks that require dense cell towers will likely not be a viable solution. As a result, 5G will not bridge the digital divide for lower-income and rural areas. It will reinforce the divide by giving super-connectivity to those who already have access and can afford even more expensive devices, while keeping the cost of connectivity high for others.

Energy Use and Environmental Impact

Huawei has shared that the typical 5G site has power requirements over 11.5 kilowatts, almost 70% more than sites deploying 2G, 3G, and 4G. Some estimate 5G technology will use two to three times more energy than previous mobile technologies. 5G will require more infrastructure, which means more power supply and more battery capacity, all of which will have environmental consequences. The most significant environmental issues associated with implementation will come from manufacturing the many component parts, along with the proliferation of new devices that will use the 5G network. 5G will encourage more demand and consumption of digital devices, and therefore the creation of more e-waste, which will also have serious environmental consequences. According to Peter Bloom, founder of Rhizomatica, most environmental damages from 5G will take place in the global south. This will include damage to the environment and to communities where the mining of materials and minerals takes place, as well as pollution from electronic waste. In the United States, the National Oceanic and Atmospheric Administration and NASA reported last year that the decision to open up high spectrum bands (24 gigahertz spectrum) would affect weather forecasting capabilities for decades.
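The figures above can be turned into a back-of-envelope energy estimate. The sketch below uses the ~11.5 kW site power and the “almost 70% more” comparison cited in the text; the assumption of a constant 24/7 draw is ours (real sites scale power with traffic load), so the result is indicative only.

```python
# Back-of-envelope annual energy use of one 5G site, from the
# ~11.5 kW figure cited above. Constant 24/7 draw is a simplifying
# assumption; real sites scale power with traffic load.

SITE_POWER_KW = 11.5
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

annual_kwh = SITE_POWER_KW * HOURS_PER_YEAR
# "Almost 70% more" than a combined 2G/3G/4G site implies roughly:
implied_legacy_kw = SITE_POWER_KW / 1.7

print(f"annual energy per 5G site: {annual_kwh:,.0f} kWh")
print(f"implied legacy site power: {implied_legacy_kw:.1f} kW")
```

At roughly 100,000 kWh per site per year, multiplied across the dense grid of small cells 5G requires, the aggregate energy footprint grows quickly.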

Back to top

Questions

To understand the potential of 5G for your work environment or community, ask yourself these questions to assess whether 5G is the most appropriate, secure, cost-effective, and human-centric solution:

  1. Are people already able to connect to the internet sufficiently? Is the necessary infrastructure (fiber, internet access points, electricity) in place for people to connect to the internet through 3G or 4G, or through Wi-Fi?
  2. Are the conditions in place to effectively deploy 5G? That is, is there sufficient fiber backhaul and 4G infrastructure (recall that 5G is not yet a standalone technology)?
  3. What specific use case(s) do you have for 5G that would not be achievable using a previous generation network?
  4. What other plans are being made to address the digital divide through Wi-Fi deployment and mesh networks, digital literacy and digital training, etc.?
  5. Who stands to benefit from 5G deployment? Who will be able to access 5G? Do they have the appropriate devices and sufficient data? Will access be affordable?
  6. Who is supplying the infrastructure? How much can they be trusted regarding quality, pricing, security, data privacy, and potential espionage?
  7. Do the benefits of 5G outweigh the costs and risks (in relation to security, financial investment, and potential geopolitical consequences)?
  8. Are there sufficient skilled human resources to maintain the 5G infrastructure? How will failures and vulnerabilities be dealt with?

Back to top

Case Studies

Latin America and the Caribbean

5G: The Driver for the Next-Generation Digital Society in Latin America and the Caribbean

“Many countries around the world are in a hurry to adopt 5G to quickly secure the significant economic and social benefits that it brings. Given the enormous opportunities that 5G networks will create, Latin American and Caribbean (LAC) countries must actively adopt 5G. However, to successfully deploy 5G networks in the region, it is important to resolve the challenges that they will face, including high implementation costs, securing spectrum, the need to develop institutions, and issues around activation. For 5G networks to be successfully established and utilized, LAC governments must take a number of actions, including regulatory improvement, establishing institutions, and providing financial support related to investment in the 5G network.”

The United Kingdom

The United Kingdom was among the first markets to launch 5G globally in 2019. As UK operators have ramped up 5G investment, the market has been on par with other European countries in terms of performance, but still lags behind “5G pioneers” like South Korea and China. In 2020, the British government banned operators from using 5G equipment supplied by Chinese telecommunications company Huawei due to security concerns, setting a deadline of 2023 for the removal of Huawei’s equipment and services from core network functions and 2027 for complete removal. The Digital Connectivity Forum warned in 2022 that the UK was at risk of not fully tapping into the potential of 5G due to insufficient investment, which could hurt the development of new technology services like autonomous vehicles, automated logistics, and telemedicine.

The Gulf States

The Gulf states were among the first in the world to launch commercial 5G services and have invested heavily in 5G and advanced technologies. Local Arab service providers are partnering with ZTE and Nokia to expand their reach in Arab and Asian countries. In many Gulf countries, 5G and Internet service providers are predominantly government-owned, thus consolidating government influence over 5G-backed services or platforms. This could make government requests for data sharing or Internet shutdowns easier. Dubai is already deploying facial recognition technology developed by companies with ties to the CCP for its “Police Without Policemen” program. (Ahmed, R. et al., 13)

South Korea

South Korea established itself as an early market leader in 5G development, and its networks will be instrumental in the diffusion of 5G within Asia. South Korea’s Samsung is a major presence in the 5G devices market and is under consideration as a replacement for Huawei in discussions by the “D10 Club,” a proposed group of trusted telecoms suppliers put forward by the UK, consisting of the G7 members plus India, Australia, and South Korea. However, the details of the D10 Club’s agenda have yet to be established. While South Korea and others attempt to expand their role in 5G, ICT decoupling from Huawei and security-trade tradeoffs are complicating the process. (Ahmed, R. et al., 14)

Africa

Which countries have rolled out 5G in Africa?

“Governments in Africa are optimistic that they will one day use 5G to do large-scale farming using drones, introduce autonomous cars into roads, plug into the metaverse, activate smart homes and improve cyber security. Some analysts predict that 5G will add an additional $2.2 trillion to Africa’s economy by 2034. But Africa’s 5G first movers are facing teething problems that stand to delay their 5G goals. The challenges have revolved around spectrum regulation clarity, commercial viability, deployment deadlines, and low citizen purchasing power of 5G enabled smartphones, and expensive internet.” As of mid-2022, Botswana, Egypt, Ethiopia, Gabon, Kenya, Lesotho, Madagascar, Mauritius, Nigeria, Senegal, Seychelles, South Africa, Uganda, and Zimbabwe were testing or had deployed 5G, though many of these countries faced delays in their rollout.

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Categories

Artificial Intelligence & Machine Learning

What is AI and ML?

Artificial intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can approximate human intelligence. There is no single, precise, universal definition of AI.

Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI that relies on algorithms trained to develop their own rules. This is an alternative to traditional computer programs, in which rules have to be hand-coded in. Machine learning extracts patterns from data and places that data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?

Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems, and evolutionary computation.

artificial intelligence, types

The diagram above shows the many different types of technology fields that comprise AI. AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems. When referring to AI, one can be referring to any or several of these technologies or fields. Applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say to Siri, “Siri, show me a picture of a banana,” Siri utilizes natural language processing (question answering) to understand what you’re asking, and then uses vision (image recognition) to find a banana and show it to you.

As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI—from the fear that AI will take over the world by enslaving humans, to the hope that AI can one day be used to cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI.

Definitions

Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
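The largest-number procedure mentioned above can be written out as an explicit, step-by-step Python sketch (the function name and sample list are illustrative):

```python
def largest(numbers):
    """Step through the list, keeping the biggest value seen so far."""
    biggest = numbers[0]          # start with the first element
    for n in numbers[1:]:         # examine each remaining element
        if n > biggest:           # found a larger value?
            biggest = n           # remember it
    return biggest

print(largest([7, 2, 19, 4, 11]))  # → 19
```

Like the recipe example, each step is unambiguous and the procedure always terminates, which is what qualifies it as an algorithm.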

Algorithmic decision-making/Algorithmic decision system (ADS): Algorithmic decision systems use data and statistical analyses to make automated decisions, such as determining whether people are eligible for a benefit or a penalty. Examples of fully automated algorithmic decision systems include the electronic passport control check-point at airports or an automated decision by a bank to grant a customer an unsecured loan based on the person’s credit history and data profile with the bank. Driver-assistance features that control a vehicle’s brake, throttle, steering, speed, and direction are an example of a semi-automated ADS.

Big Data: There are many definitions of “big data,” but we can generally think of it as extremely large data sets that, when analyzed, may reveal patterns, trends, and associations, including those relating to human behavior. Big Data is characterized by the five V’s: the volume, velocity, variety, veracity, and value of the data in question. This video provides a short introduction to big data and the concept of the five V’s.

Class label: A class label is the category a machine learning system assigns to an input after classification; for example, “spam” or “not spam” for an email.

Data mining: Data mining, also known as knowledge discovery in data, is the “process of analyzing dense volumes of data to find patterns, discover trends, and gain insight into how the data can be used.”

Generative AI[1]: Generative AI is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. See section on Generative AI for more details.

Label: A label is the thing a machine learning model is predicting, such as the future price of wheat, the kind of animal shown in a picture, or the meaning of an audio clip.

Large language model: A large language model (LLM) is “a type of artificial intelligence that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new content.” An LLM is a type of generative AI[2] that has been specifically architected to help generate text-based content.

Model: A model is the representation of what a machine learning system has learned from the training data.

Neural network: A biological neural network (BNN) is a system in the brain that makes it possible to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.

Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.

Robot: Robots are programmable, automated devices. Fully autonomous robots (e.g., self-driving vehicles) are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses and behaviors accordingly in order to perform complex tasks without human intervention.

Scoring: Scoring, also called prediction, is the process of a trained machine learning model generating values based on new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome. When used vis-a-vis people, scoring is a statistical prediction that determines whether an individual fits into a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.

Supervised learning: In supervised learning, ML systems are trained on well-labeled data. Using labeled inputs and outputs, the model can measure its accuracy and learn over time.

Unsupervised learning: Unsupervised learning uses machine learning algorithms to find patterns in unlabeled datasets without the need for human intervention.

Training: In machine learning, training is the process of determining the ideal parameters comprising a model.

 

How do artificial intelligence and machine learning work?

Artificial Intelligence

Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic, and economics to “understand, model, and replicate intelligence and cognitive processes.”

AI applications exist in every domain and industry, and across many aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:

  • Narrow AI or Artificial Narrow Intelligence (ANI) is an expert system in a specific task, like image recognition, playing Go, or asking Alexa or Siri to answer a question.
  • Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
  • Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications exist only in the “Narrow AI” field at present. Artificial general intelligence and artificial superintelligence have not yet been achieved and likely will not be for years or decades.

Machine Learning

Machine learning is an application of artificial intelligence. Although we often find the two terms used interchangeably, machine learning is a process by which an AI application is developed. The machine learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses the pattern or correlation to make predictions. Most of the AI in use today is driven by machine learning.

Just as it is useful to break up AI into three categories, machine learning can also be thought of as three different techniques: supervised learning, unsupervised learning, and deep learning.

Supervised Learning

Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a data set containing training examples with associated labels. Take the example of a spam-filtering system that is being trained using spam and non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email or IP address of the sender. Using this correlation, the system tries to predict the correct label (spam/not spam) to apply to all the future emails it processes.

“Spam” and “not spam” in this instance are called “class labels.” The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labeled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about—in this case, it is the “spaminess” of an email. The “correct answer,” so to speak, in the categorization of email is called the “desired outcome” or “outcome of interest.”
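The spam-filtering workflow described above can be sketched as a toy supervised learner. The training emails, labels, and simple word-count scoring below are invented for illustration; real filters use far richer features and statistical models:

```python
from collections import Counter

# Hand-labeled training data (the "training data" in the text above).
training = [
    ("win cash prizes now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch plans this week", "not spam"),
]

# "Train": count how often each word appears under each class label.
counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training:
    counts[label].update(text.split())

def predict(text):
    """Score a new email by which class its words appeared in more often."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("free cash prize"))    # → spam
print(predict("agenda for lunch"))   # → not spam
```

Here the labeled examples are the training data, “spam”/“not spam” are the class labels, and the word counts stand in for the learned model.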

Unsupervised Learning

Unsupervised learning involves neural networks finding a relationship or pattern without access to previously labeled datasets of input-output pairs. The neural networks organize and group the data on their own, finding recurring patterns and detecting deviations from these patterns. These systems tend to be less predictable than those that use labeled datasets, and are most often deployed in environments that may change at some frequency and are unstructured or partially structured. Examples include:

  1. An optical character-recognition system that can “read” handwritten text, even if it has never encountered the handwriting before.
  2. The recommended products a user sees on retail websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference, and the prices of their previous purchases.
  3. The detection of fraudulent monetary transactions based on timing and location. For instance, if two consecutive transactions happened on the same credit card within a short span of time in two different cities.

A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small dataset with labels is available to train the neural network to act upon a larger, unlabeled dataset. An example of semi-supervised learning is software that creates deepfakes, or digitally altered audio, videos, or images.
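The pattern-finding behavior described in this section can be illustrated with a toy version of k-means clustering, a common unsupervised technique. The one-dimensional transaction amounts below are invented, and real systems would use multidimensional features and a library implementation:

```python
def kmeans_1d(points, k=2, iters=10):
    """Group unlabeled numbers into k clusters by repeatedly
    (1) assigning each point to its nearest center, then
    (2) moving each center to the mean of its assigned points."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Transaction amounts with no labels: the algorithm finds the groups itself.
print(kmeans_1d([2, 3, 4, 50, 52, 55]))  # → [[2, 3, 4], [50, 52, 55]]
```

No human ever labeled which transactions were “small” or “large”; the grouping emerges from the data, which is the defining property of unsupervised learning.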

Deep Learning

Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical-image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine, and sort huge amounts of data, which theoretically enables them to identify new solutions to existing problems.
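The layered computation at the heart of deep learning can be sketched as a forward pass through a tiny artificial neural network. The weights, biases, and layer sizes here are made up for illustration; in practice, networks have many layers and millions of parameters learned from data:

```python
import math

def forward(x, layers):
    """Pass an input through successive layers: each layer multiplies by
    weights, adds a bias, and applies a nonlinearity (here, tanh)."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# A tiny two-layer network with made-up weights: 2 inputs -> 3 hidden -> 1 output.
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.7, -0.5, 0.2]], [0.05]),                                 # output layer
]
print(forward([1.0, 2.0], layers))  # a single value between -1 and 1
```

Training a deep network amounts to adjusting these weights automatically so the outputs match desired results; stacking many such layers is what makes the network “deep.”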

Generative AI

Generative AI[3] is a type of deep-learning model that can generate high-quality text, images, and other content based on training data. The launch of OpenAI’s chatbot, ChatGPT, in late 2022 placed a spotlight on generative AI and created a race among companies to churn out alternate (and ideally superior) versions of this technology. Excitement over large language models and other forms of generative AI was also accompanied by concerns about accuracy, bias within these tools, data privacy, and how these tools can be used to spread disinformation more efficiently.

Although there are other types of machine learning, these three—supervised learning, unsupervised learning, and deep learning—represent the basic techniques used to create and train AI systems.

Bias in AI and ML

Artificial intelligence is built by humans, and trained on data generated by them. Inevitably, there is a risk that individual and societal human biases will be inherited by AI systems.

There are three common types of biases in computing systems:

  • Pre-existing bias has its roots in social institutions, practices, and attitudes.
  • Technical bias arises from technical constraints or considerations.
  • Emergent bias arises in a context of use.

Bias in artificial intelligence may affect, for example, the political advertisements one sees on the internet, the content pushed to the top of social media news feeds, the cost of an insurance premium, the results of a recruitment screening process, or the ability to pass through border-control checks in another country.

Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate can get compounded or magnified and greatly affect the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.

Bias in AI/ML systems can result in discriminatory practices, ultimately leading to the exacerbation of existing inequalities or the generation of new ones. For more information, see this explainer related to AI bias and the Risks section of this resource.

Back to top

How are AI and ML relevant in civic space and for democracy?

Elephant tusks pictured in Uganda. In wildlife conservation, AI/ML algorithms and past data can be used to predict poacher attacks. Photo credit: NRCN.

The widespread proliferation, rapid deployment, scale, complexity, and impact of AI on society are topics of great interest and concern for governments, civil society, NGOs, human rights bodies, businesses, and the general public alike. AI systems may require varying degrees of human interaction or none at all. When applied in design, operation, and delivery of services, AI/ML offers the potential to provide new services and improve the speed, targeting, precision, efficiency, consistency, quality, or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships, and patterns, and offering new solutions. By analyzing large amounts of data, ML systems save time, money, and effort. Some examples of the application of AI/ML in different domains include using AI/ML algorithms and past data in wildlife conservation to predict poacher attacks, and discovering new species of viruses.

Tuberculosis microscopy diagnosis in Uzbekistan. AI/ML systems aid healthcare professionals in medical diagnosis and the detection of diseases. Photo credit: USAID.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering, and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, and security, as well as in safety, crime prevention, policing, law enforcement, urban management, and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels, and waste in order to monitor and manage these issues and identify patterns in their generation, consumption, and handling.

Digital maps created in Mugumu, Tanzania. Artificial intelligence can support planning of infrastructure development and preparation for disaster. Photo credit: Bobby Neptune for DAI.

AI is also used in climate monitoring, weather forecasting, the prediction of disasters and hazards, and the planning of infrastructure development. In healthcare, AI systems aid professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, facial recognition systems, drones, and predictive policing for the safety and security of the citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, privacy, security, mass surveillance, social inequality, and negative impacts on democracy (see the Risks section).

Fish caught off the coast of Kema, North Sulawesi, Indonesia. Facial recognition is used to identify species of fish to contribute to sustainable fishing practices. Photo credit: courtesy of USAID SNAPPER.

AI and ML have both positive and negative implications for public policy and elections, as well as democracy more broadly. While data may be used to maximize the effectiveness of a campaign through targeted messaging to help persuade prospective voters, it may also be used to deliver propaganda or misinformation to vulnerable audiences. During the 2016 U.S. presidential election, for example, Cambridge Analytica used big data and machine learning to tailor messages to voters based on predictions about their susceptibility to different arguments.

During elections in the United Kingdom and France in 2017, political bots were used to spread misinformation on social media and leak private campaign emails. These autonomous bots are “programmed to aggressively spread one-sided political messages to manufacture the illusion of public support” or even dissuade certain populations from voting. AI-enabled deepfakes (audio or video that has been fabricated or altered) also contribute to the spread of confusion and falsehoods about political candidates and other relevant actors. Though artificial intelligence can be used to exacerbate and amplify disinformation, it can also be applied in potential solutions to the challenge. See the Case Studies section of this resource for examples of how the fact-checking industry is leveraging artificial intelligence to more effectively identify and debunk false and misleading narratives.

Cyber attackers seeking to disrupt election processes use machine learning to effectively target victims and develop strategies for defeating cyber defenses. Although these tactics can be used to prevent cyber attacks, the level of investment in artificial intelligence technologies by malign actors in many cases exceeds that of legitimate governments or other official entities. Some of these actors also use AI-powered digital surveillance tools to track down and target opposition figures, human rights defenders, and other perceived critics.

As discussed elsewhere in this resource, “the potential of automated decision-making systems to reinforce bias and discrimination also impacts the right to equality and participation in public life.” Bias within AI systems can harm historically underrepresented communities and exacerbate existing gender divides and the online harms experienced by women candidates, politicians, activists, and journalists.

AI-driven solutions can help improve the transparency and legitimacy of campaign strategies, for example, by leveraging political bots for good to help identify articles that contain misinformation or by providing a tool for collecting and analyzing the concerns of voters. Artificial intelligence can also be used to make redistricting less partisan (though in some cases it also facilitates partisan gerrymandering) and prevent or detect fraud or significant administrative errors. Machine learning can inform advocacy by predicting which pieces of legislation will be approved based on algorithmic assessments of the text of the legislation, how many sponsors or supporters it has, and even the time of year it is introduced.

The full impact of the deployment of AI systems on the individual, society, and democracy is not known or knowable, which creates many legal, social, regulatory, technical, and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The European Union’s (EU) General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a whitepaper on AI in February 2020 as a prequel to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human rights impacts of algorithmic systems. Similarly, Germany, France, Japan, and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”

Back to top

Opportunities

Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights, and good governance. Read below to learn how to more effectively and safely think about artificial intelligence and machine learning in your work.

Detect and overcome bias

Although artificial intelligence can reproduce human biases, as discussed above, it can also be used to combat unconscious biases in contexts like job recruitment. Responsibly designed algorithms can bring hidden biases into view and, in some cases, nudge people into less-biased outcomes, for example by masking candidates’ names, ages, and other bias-triggering features on a resume.

Improve security and safety

AI systems can be used to detect attacks on public infrastructure, such as a cyber attack or credit card fraud. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud quickly, or even prevent it before it occurs. Machine learning can help identify agile and unusual patterns that match or exceed traditional strategies used to avoid detection.

Moderate harmful online content

Enormous quantities of content are uploaded every second to the internet and social media. There are simply too many videos, photos, and posts for humans to review manually. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen for content that violates their terms of service (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. More recently, the arrival of deepfakes and other computer-generated content has required similarly advanced identification tactics. Fact-checkers and other actors working to defuse the dangerous, misleading power of deepfakes are developing their own artificial intelligence to identify these media as false.

Web Search

Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the vast stretches of the internet. Search engines on the web (like Google and Bing) or within platforms and websites (like searches within Wikipedia or The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor higher-quality results that may be beneficial to society. For example, Google has an initiative to highlight original reporting, which prioritizes the first instance of a news story rather than sources that republish the information.

Translation

Machine learning has enabled remarkable advances in translation. For example, DeepL is a small machine-translation company whose translations have surpassed even those of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.

Back to top

Risks

The use of emerging technologies like AI can also create risks for democracy and for civil society programming. Read below to learn how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended—and intended—consequences.

Discrimination against marginalized groups

There are several ways in which AI may make decisions that can lead to discrimination, including how the “target variable” and the “class labels” are defined; during the process of labeling the training data; when collecting the training data; during the feature selection; and when proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial recognition systems trained on racially biased data sets discriminate against people with dark skin, women, and gender-diverse people.

The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the size, the more accurate the system’s decisions are likely to be. However, women; Black, indigenous, and other people of color; disabled people; LGBTQ+ people; and members of other minority groups are less likely to be represented in a dataset because of structural discrimination, group size, or external attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to determine why AI makes certain decisions about some individuals or groups of people, or conclusively prove it has made a discriminatory decision. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status, or other protected characteristics. For instance, AI systems used in predictive policing, crime prevention, law enforcement, and the criminal justice system are, in a sense, tools for risk-assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized or minority groups.
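The effect of underrepresentation in training data can be sketched with a deliberately tiny toy model (all numbers and group labels below are invented for illustration, not drawn from any real system): when a single decision rule is fit to data dominated by one group, its errors concentrate on the smaller group.

```python
# Toy illustration of bias from underrepresentation: a single decision
# threshold is "trained" on historical data in which group B is both
# underrepresented and differently distributed, so the fitted rule
# makes proportionally more mistakes about group B. Invented data.

# (score, truly_qualified, group)
data = [(s, s >= 60, "A") for s in range(40, 85)] * 2    # group A: 90 rows
data += [(s, s >= 50, "B") for s in range(40, 85, 9)]    # group B: only 5 rows

def errors(threshold, rows):
    return sum((s >= threshold) != q for s, q, _ in rows)

# "Training": pick the cutoff with the fewest mistakes overall.
best = min(range(40, 85), key=lambda t: errors(t, data))

for grp in ("A", "B"):
    rows = [r for r in data if r[2] == grp]
    print(grp, round(errors(best, rows) / len(rows), 2))
# → A 0.0
# → B 0.2
```

The overall error rate looks excellent, which is exactly why such disparities can go unnoticed without per-group evaluation.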

A study by the Royal Statistical Society notes that the “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation or software for credit-scoring, banking, insurance, healthcare, and the selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.

The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving vulnerable groups such as refugees, or about life or death circumstances, such as in medical care. A 2018 report by the University of Toronto’s Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.

Security vulnerabilities

Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems such as internet of things (IoT) devices and self-driving cars.

If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids, nuclear installations, healthcare facilities, and banking systems, among others, they “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”
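The mechanics of a poisoning attack can be sketched in a few lines. The toy "detector" below, its packet sizes, and its labels are all invented for illustration; real attacks target far more complex models, but the principle is the same: corrupt the training data and the trained behavior shifts.

```python
# Sketch of a data-poisoning attack: a toy detector learns a cutoff
# from labeled training data; an attacker flips a few labels near the
# boundary so the retrained cutoff moves and malicious traffic slips
# through. All values are invented.

clean = [(size, size >= 500) for size in range(100, 1000, 50)]  # size -> "malicious?"

def fit(rows):
    # pick the cutoff that misclassifies the fewest training rows
    return min((r[0] for r in rows),
               key=lambda t: sum((s >= t) != y for s, y in rows))

# Poisoning: relabel boundary examples as benign before training.
poisoned = [(s, False if 500 <= s < 700 else y) for s, y in clean]

print(fit(clean), fit(poisoned))   # → 500 700
```

After poisoning, inputs in the 500–699 range that the clean model would have flagged are now accepted, without the attacker ever touching the deployed system itself.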

Privacy and data protection

The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection. Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks. Criminals, illiberal governments, and people with malicious intent often target these data for economic or political gain. For instance, health data captured from smartphone applications and internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, and others. The issue is not only leaks but also the data that people willingly give out without control over how it will be used down the road. This includes what we share with both companies and government agencies. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.

Chilling effect

AI systems used for surveillance, policing, criminal sentencing, and other legal purposes become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination, and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Many people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.

Opacity (Black box nature of AI systems)

Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, behind-the-scenes processing and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal or judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike decisions made by judges who are required to justify their legal order or judgment.

Technological unemployment

Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.

Loss of individual autonomy and personhood

Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect wellbeing, physical integrity, and quality of life. This affects what constitutes an individual’s consent (or lack thereof); the way consent is formed, communicated and understood; and the context in which it is valid. “[T]he dilution of the free basis of our individual consent—either through outright information distortion or even just the absence of transparency—imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation”. – Human Rights in the Era of Automation and Artificial Intelligence

Back to top

Questions

If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:

  1. Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
  2. Who is designing and overseeing the technology? Can they explain what is happening at different steps of the process?
  3. What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
  4. What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
  5. Are you confident the technology will work as intended when used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
  6. Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
  7. What measures do you have in place to identify and address potentially harmful biases in the technology?
  8. What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has been unfair to them or abused them in any way?
  9. Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
  10. Are you certain that the technology complies with relevant regulations and legal standards, including the GDPR?
  11. Is there a way that this technology may not discriminate against people by itself, but that it may lead to discrimination or other rights violations, for instance when it is deployed in different contexts or if it is shared with untrained actors? What can you do to prevent this?

Back to top

Case Studies

Leveraging artificial intelligence to promote information integrity

The United Nations Development Programme’s eMonitor+ is an AI-powered platform that helps “scan online media posts to identify electoral violations, misinformation, hate speech, political polarization and pluralism, and online violence against women.” Data analysis facilitated by eMonitor+ enables election commissions and media stakeholders to “observe the prevalence, nature, and impact of online violence.” The platform relies on machine learning to track and analyze content on digital media to generate graphical representations for data visualization. eMonitor+ has been used by Peru’s Asociación Civil Transparencia and Ama Llulla to map and analyze digital violence and hate speech in political dialogue, and by the Supervisory Election Commission during the 2022 Lebanese parliamentary election to monitor potential electoral violations, campaign spending, and misinformation. The High National Election Commission of Libya has also used eMonitor+ to monitor and identify online violence against women in elections.

“How Nigeria’s fact-checkers are using AI to counter election misinformation”

Ahead of Nigeria’s 2023 presidential election, the UK-based fact-checking organization Full Fact “offered its artificial intelligence suite—consisting of three tools that work in unison to automate lengthy fact-checking processes—to greatly expand fact-checking capacity in Nigeria.” According to Full Fact, these tools are not intended to replace human fact-checkers but rather assist with time-consuming, manual monitoring and review, leaving fact-checkers “more time to do the things they’re best at: understanding what’s important in public debate, interrogating claims, reviewing data, speaking with experts and sharing their findings.” The scalable tools, which include search, alerts, and live functions, allow fact-checkers to “monitor news websites, social media pages, and transcribe live TV or radio to find claims to fact check.”

Monitoring crop development: AgroScout

The growing impact of climate change could further cut crop yields, especially in the world’s most food-insecure regions, and our food systems are responsible for about 30% of greenhouse gas emissions. Israeli startup AgroScout envisions a world where food is grown in a more sustainable way. “Our platform uses AI to monitor crop development in real-time, to more accurately plan processing and manufacturing operations across regions, crops and growers,” said Simcha Shore, founder and CEO of AgroScout. “By utilizing AI technology, AgroScout detects pests and diseases early, allowing farmers to apply precise treatments that reduce agrochemical use by up to 85%. This innovation helps minimize the environmental damage caused by traditional agrochemicals, making a positive contribution towards sustainable agriculture practices.”

Machine Learning for Peace

The Machine Learning for Peace Project seeks to understand how civic space is changing in countries around the world using state-of-the-art machine learning techniques. By leveraging the latest innovations in natural language processing, the project classifies “an enormous corpus of digital news into 19 types of civic space ‘events’ and 22 types of Resurgent Authoritarian Influence (RAI) events which capture the efforts of authoritarian regimes to wield influence on developing countries.” Among the civic space “events” being tracked are activism, coups, election activities, legal changes, and protests. The civic space event data is combined with “high frequency economic data to identify key drivers of civic space and forecast shifts in the coming months.” Ultimately, the project hopes to serve as a “useful tool for researchers seeking rich, high-frequency data on political regimes and for policymakers and activists fighting to defend democracy around the world.”

Food security: Detecting diseases in crops using image analysis

“Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops.” As a first step toward supplementing existing solutions for disease diagnosis with a smartphone-assisted diagnosis system, researchers used a public dataset of 54,306 images of diseased and healthy plant leaves to train a “deep convolutional neural network” to automatically identify 14 different crop species and 26 unique diseases (or the absence of those diseases).
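The study itself trained a deep convolutional network on tens of thousands of labeled photographs; the drastically simplified sketch below (invented 2x2 "images" and a nearest-centroid rule, not the researchers' method) only illustrates the underlying idea of classifying an image by its similarity to labeled examples.

```python
# Hugely simplified sketch of image-based disease detection: real
# systems use deep convolutional networks; here a nearest-centroid
# classifier on 2x2 grayscale "images" illustrates the idea.
# All pixel values are invented.

healthy = [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1]]   # dark, even leaves
diseased = [[0.8, 0.9, 0.7, 0.8], [0.9, 0.8, 0.8, 0.9]]  # pale, spotted leaves

def centroid(images):
    # average each pixel position across the class's examples
    return [sum(px) / len(images) for px in zip(*images)]

def classify(image, centroids):
    # label of the closest class centroid (squared Euclidean distance)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(image, centroids[c])))

cents = {"healthy": centroid(healthy), "diseased": centroid(diseased)}
print(classify([0.15, 0.1, 0.2, 0.15], cents))  # → healthy
```

A convolutional network replaces the hand-built centroids with millions of learned parameters, which is what makes it accurate enough to distinguish 26 diseases across 14 crop species.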

Back to top

References

Find below the works cited in this resource.

Additional Resources

Back to top

Blockchain

What is Blockchain?

A blockchain is a distributed database existing on multiple computers at the same time, with a detailed and unchangeable transaction history leveraging cryptography. Blockchain-based technologies, perhaps most famous for their use in “cryptocurrencies” such as Bitcoin, are also referred to as “distributed ledger technology” (DLT).

How does Blockchain work?

Unlike hand-written records, like this bed net distribution in Tanzania, data added to a blockchain can’t be erased or manipulated. Photo credit: USAID.
Unlike hand-written records, like this bed net distribution in Tanzania, data added to a blockchain can’t be erased or manipulated. Photo credit: USAID.

A blockchain is a constantly growing database: new sets of records, or ‘blocks,’ are added to it over time. Each block contains a timestamp and a link to the previous block, so the blocks form a chain. The resulting blockchain is not managed by any particular body; instead, everyone in the network has access to the whole database. Old blocks are preserved forever, and new blocks are added to the ledger irreversibly, making it impossible to erase or manipulate the database records.
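The chain structure described above can be sketched in a few lines of code. This is a minimal illustration, not a real blockchain (there is no network, consensus, or proof-of-work): each block stores a timestamp, some data, and the hash of the previous block, so altering any block breaks the hashes that follow it.

```python
# Minimal sketch of a hash-linked chain of blocks. Illustrative only:
# real blockchains add networking, consensus, and mining on top.
import hashlib
import json

def block_hash(block):
    # hash only the block's contents, in a stable order
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash, timestamp):
    block = {"timestamp": timestamp, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid(chain):
    for i, cur in enumerate(chain):
        if cur["hash"] != block_hash(cur):                   # contents altered?
            return False
        if i and cur["prev_hash"] != chain[i - 1]["hash"]:   # link broken?
            return False
    return True

chain = [make_block("genesis", "0" * 64, 1)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"], 2))
print(valid(chain))                  # → True
chain[0]["data"] = "tampered"        # any edit invalidates the chain
print(valid(chain))                  # → False
```

In a real network, every participant runs the equivalent of `valid()` over the shared ledger, which is why no single actor can quietly rewrite history.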

Blockchain can provide solutions for very specific problems. The most clear-cut use case is for public, shared data where all changes or additions need to be clearly tracked, and where no data will ever need to be redacted. Different uses require different inputs (computing power, bandwidth, centralized management), which need to be carefully considered based on each context. Blockchain is also an over-hyped concept applied to a range of different problems where it may not be the most appropriate technology, or in some cases, even a responsible technology to use.

There are two core concepts behind blockchain technology: the transaction-history aspect and the distributed aspect. They are technically tightly interwoven, but it is worth considering and understanding them independently as well.

'Immutable' Transaction History

Imagine stacking blocks. With increasing effort, one can continue adding more blocks to the tower, but once a block is in the stack, it cannot be removed without fundamentally and very visibly altering—and in some cases destroying—the tower of blocks. A blockchain is similar in that each “block” contains some amount of information—information that may be used, for example, to track currency transactions and store actual data. (You can explore the bitcoin blockchain, which itself has already been used to transmit messages and more, to learn about a real-life example.)

This is a core aspect of blockchain technology, generally called immutability, meaning that data, once stored, cannot be altered. In a practical sense, blockchain is immutable: changes would require near-100% agreement among users, and actually making them would be incredibly tedious.

Blockchain is, at its simplest, a valuable digital tool that replicates online the value of a paper-and-ink logbook. While this can be useful to track a variety of sequential transactions or events (ownership of a specific item / parcel of land / supply chain) and could even be theoretically applied to concepts like voting or community ownership and management of resources, it comes with an important caveat: mistakes can never be truly unmade, and data tracked in a blockchain can never be updated, only appended to.

Many of the potential applications of blockchain would rely on one of the pieces of data tracked being the identity of a person or legal organization. If that entity changes, their previous identity will be forever immutably tracked and linked to the new identity. This is damaging to a person fleeing persecution or legally changing their identity (in the case of transgender individuals, for example), and it is also a violation of the right to privacy established under international human rights law.

Distributed and Decentralized

The second core tenet of blockchain technology is the absence of a central authority or oracle of “truth.” By nature of the unchangeable transaction records, every stakeholder contributing to a blockchain tracks and verifies the data it contains. At scale, this provides powerful protection against problems common not only to NGOs but to the private sector and other fields that are reliant on one service to maintain a consistent data store. This feature can protect a central system from collapsing or being censored, corrupted, lost, or hacked — but at the risk of placing significant hurdles in the development of the protocol and requirements for those interacting with the data.

A common misconception is that blockchain is completely open and transparent. Blockchains may be private, with various forms of permissions applied. In such cases, some users have more control over the data and transactions than others. Privacy settings for blockchain can make for easier management, but also replicate some of the specific challenges that blockchains, in theory, are solving.

Permissionless vs Permissioned Blockchain

Permissionless blockchains are public, so anyone can interact with and participate in them. Permissioned blockchains, on the other hand, are closed networks, which only specific actors can access and contribute to. As such, permissionless blockchains are more transparent and decentralized, while permissioned blockchains are governed by an entity or a group of entities that can customize the platform, choosing who can participate, the level of transparency, and whether or not to use digital assets. Another key difference is that public blockchains tend to be anonymous, while private ones, by nature, cannot be. Because of this, permissioned blockchain is chosen in many human-rights use cases, using identity to hold users accountable.
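The core difference between the two models comes down to access control at write time. The sketch below is a hypothetical illustration (the actor names and allowlist are invented, and real permissioned platforms use cryptographic identities rather than strings): a permissionless chain accepts writes from anyone, while a permissioned chain checks the writer first.

```python
# Illustrative sketch of permissioned vs. permissionless write access.
# Real platforms use cryptographic identity, not name strings.

ALLOWED_WRITERS = {"election_commission", "auditor"}   # invented allowlist

def can_write(actor, permissioned):
    # permissionless: anyone may write; permissioned: allowlist only
    return (not permissioned) or actor in ALLOWED_WRITERS

print(can_write("anyone", permissioned=False))   # → True
print(can_write("anyone", permissioned=True))    # → False
print(can_write("auditor", permissioned=True))   # → True
```

That single check is what lets a permissioned chain's governing entities hold identified users accountable, at the cost of the openness and anonymity that permissionless chains provide.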

Back to top

How is blockchain relevant in civic space and for democracy?

Blockchain technology has the potential to provide substantial benefits in the development sector broadly, as well as specifically for human rights programs. By providing a decentralized, verifiable source of data, blockchain technology can be a more transparent, efficient form of information and data management for improved governance, accountability, financial transparency, and even digital identities. While blockchain can be effective when used strategically on specific problems, practitioners who choose to use it must do so fastidiously. The decisions to use DLTs should be based on a detailed analysis and research on comparable technologies, including non-DLT options. As blockchains are used more and more for governance and in the civic space, irresponsible applications threaten human rights, especially data security and the right to privacy.

By providing a decentralized, verifiable source of data, blockchain technology can enable a more transparent, efficient form of information and data management. Practitioners should understand that blockchain technology can be applied to humanitarian challenges, but it is not a separate humanitarian innovation in itself.

Blockchain for the Humanitarian Sector – Future Opportunities

Blockchains lend themselves to some interesting tools being used by companies, governments, and civil society. Examples of how blockchain technology may be used in civic space include: land titles (necessary for economic mobility and preventing corruption), digital IDs (especially for displaced persons), health records, voucher-based cash transfers, supply chains, censorship-resistant publications and applications, digital currency, decentralized data management, recording votes, crowdfunding, and smart contracts. Some of these examples are discussed below. Specific examples of the use of blockchain technology may be found on this page under case studies.

A USAID-funded project used a mobile app and software to track the sale and transfer of land rights in Tanzania. Blockchain technology may also be used to record land titles. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.
A USAID-funded project used a mobile app and software to track the sale and transfer of land rights in Tanzania. Blockchain technology may also be used to record land titles. Photo credit: Riaz Jahanpour for USAID / Digital Development Communications.

Blockchain’s core tenets – an immutable transaction history and its distributed and decentralized nature – lend themselves to some interesting tools being used by companies, governments, and civil society. The risks and opportunities these present will be explored more fully in the relevant sections below, while specific examples will be given in the Case Studies section but, at a high level, many actors are looking at leveraging blockchain in the following ways:

Smart Contracts

Smart contracts are agreements that provide automatic payments on the completion of a specific task or event. For example, in civic space, smart contracts could be used to execute agreements between NGOs and local governments to expedite transactions, lower costs, and reduce mutual suspicions. However, since these contracts are “defined” in code, any software bugs can interfere with the intent of the contract or become potential loopholes through which the contract could be exploited. One such case occurred when an attacker exploited a software bug in The DAO, a smart-contract-based investment fund, for approximately $50M.
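The "pay automatically when the condition is met" logic can be sketched as follows. This is illustrative only: real smart contracts are deployed as code on a blockchain (for example, in Solidity on Ethereum), and the parties, amount, and method names here are invented.

```python
# Plain-Python sketch of escrow-style smart-contract logic: funds are
# released automatically once the agreed condition is confirmed.
# Illustrative only; real contracts run on-chain.

class EscrowContract:
    def __init__(self, payer, payee, amount):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.funded = False
        self.paid = False

    def deposit(self, amount):
        if amount != self.amount:
            raise ValueError("must fund the exact agreed amount")
        self.funded = True

    def confirm_completion(self):
        # the "automatic payment": no human approval step in between
        if self.funded and not self.paid:
            self.paid = True
            return f"released {self.amount} to {self.payee}"
        raise RuntimeError("contract not funded or already settled")

contract = EscrowContract("NGO", "local_supplier", 1000)
contract.deposit(1000)
print(contract.confirm_completion())   # → released 1000 to local_supplier
```

Note that every branch of this logic is enforced exactly as written: a bug in `confirm_completion` (say, forgetting the `not self.paid` check) would let funds be released repeatedly, which is precisely the class of flaw exploited in The DAO.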

Liquid Democracy

Liquid democracy is a form of democracy wherein, rather than simply voting for elected leaders, citizens also engage in collective decision making. While direct democracy (each individual having a say on every choice a country makes) is not feasible, blockchain could lower the barriers to liquid democracy, a system which would put more power into the hands of the people. Blockchain would allow citizens to register their opinions on specific subject matters or delegate votes to subject matter experts.

Government Transparency

Blockchain can be used to tackle governmental corruption and waste in common areas like public procurement. Governments can use blockchain to publicize the steps of procurement processes and build citizen trust as citizens know the transactions recorded cannot have been tampered with. The tool can also be used to automate tax calculation and collection.

Innovative Currency and Payment Systems

Many new cryptocurrencies are considering ways to leverage blockchain for transactions without the volatility of bitcoin, and with other properties, such as speed, cost, stability and anonymity. Cryptocurrencies are also occasionally combined with smart contracts, to establish shared ownership through funding of projects.

Potential for fund-raising

In addition, digital currencies built on blockchain are being used to establish shared ownership of projects, not dissimilar to stocks or shares in large companies.

Potential for election integrity

The transparency and immutability of blockchain could be used to increase public confidence in elections by integrating electronic voting machines and blockchain. However, there are privacy concerns with publicly tracking the tally of votes. Additionally, this system relies on electronic voting machines, which raise some security concerns, as computers can be hacked, and have been met by mistrust in several societies where they were suggested. Online voting through blockchain faces similar distrust, but integrating blockchain into voting would make audits much easier and more reliable. This traceability would also be a useful feature in transparently transmitting results from polling places to tabulation centers.

Censorship-resistant technology

The decentralized, immutable nature of blockchain provides clear benefits to protecting speech, but not without significant risks. There have been high-visibility uses of blockchain to publish censored speech in China, Turkey, and Catalonia. Article 19 has written an in-depth report specifically on the interplay between freedom of expression and blockchain technologies, which provides a balanced view of the potential benefits and risks and guidance for stakeholders considering engaging in this facet.

Decentralized computation and storage

Micro-payments through a blockchain can be used to formalize and record actions. This can be useful when carrying out activities with multiple stakeholders where trust, transparency, and a permanent record are valuable, for example, automated auctions (to prevent corruption), voting (to build voter trust), signing contracts (to keep a record of ownership and obligations that will outlast crises that destroy paper or even digital systems), and even for copyright purposes and preventing manipulation of facts.

Ethereum is a cryptocurrency focused on using the blockchain system to help manage decentralized computation and storage through smart contracts and digital payments. Ethereum encourages the development of “distributed apps” which are tied to transactions on the Ethereum blockchain. Examples of these apps include an X-like tool, and apps that pay for content creation and sharing. See case studies in the cryptocurrencies primer for more detail.

The vast majority of these applications presume some form of micro-payment as part of the transaction. However, this requirement has ramifications for equal access as internet accessibility, capital, and access to online payment systems are all barriers to usage. Furthermore, with funds involved, informed consent is even more essential and challenging to ensure.

Back to top

Opportunities

Blockchain can have positive impacts when used to further democracy, human rights and governance issues. Read below to learn how to more effectively and safely think about blockchain in your work.

Proof of Digital Integrity

Data stored or tracked using blockchain technologies have a clear, sequential, and unalterable chain of verifications. Once data is added to the blockchain, there is ongoing mathematical proof that it has not been altered. This does not provide any assurance that the original data is valid or true, and it means that any data added cannot be deleted or changed – only appended to. However, in civil society, this benefit has been applied to concepts such as creating records for land titles/ownership; improving voting security by ensuring one person matches with one unchangeable vote; and preventing fraud and corruption while enhancing transparency in international philanthropy. It has been used to keep records of digital identities to help people retain ownership over their identity and documents and, in humanitarian contexts, to make voucher-based cash transfers more efficient. As an enabler for digital currency, in some circumstances, blockchain facilitates cross-border funding of civil society. Blockchain could be used not only to preserve identification documents, but qualifications and degrees as well.

A function such as this can provide a solution to the legal invisibility most often borne by refugees and migrants. Rohingya refugees in Bangladesh, for example, are often at risk of discrimination and exploitation because they are stateless. Proponents of blockchain argue that its distributed system can grant individuals “self-sovereign identity,” a concept by which ownership of identity documents is taken from authorities and put in the hands of individuals. This allows individuals to use their identity documents across a number of authorities, while authorities’ access requires a degree of consent. A self-sovereign identity model could help meet requirements raised by the GDPR and similar privacy-rights-supporting legislation.

However, if blockchain architects do not secure transaction permissions and public/private state variables, governments could use machine-learning algorithms to monitor public blockchain activity and gain insight into whatever daily, lower-level activities of their citizens are linkable to their blockchain identities. This might include payments (both interpersonal and business) and services, be they health, financial, or other. Anywhere citizens need to show their ID, their location and time would be tracked. While this is an infringement on privacy rights for anyone, it is especially problematic for marginalized groups whose legal status in a country can change rapidly and with no warning. Furthermore, such a use of blockchain assumes that individuals would be prepared and able to adopt the technology, an unlikely possibility given the financial insecurity and lack of access to information and the internet that many vulnerable groups, such as refugees, face. In this context, it is impossible to get meaningful informed consent from these target groups.

Blockchains promise anonymity, or at least pseudonymity, because limited information regarding individuals is stored in transaction logs. However, this does not guarantee that the platforms protect freedom of expression. For instance, the central internet regulator in China proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers.

Supply Chain Transparency

Blockchain has been used to create transparency in the supply chain and connect consumers directly with the producers of the products they are buying. This enables consumers to know companies are following ethical and sustainable production practices. For example, Moyee Coffee uses blockchain to track their supply chain, and makes this information available to customers, who can confirm the coffee beans were picked by paid, adult farmers and even tip those farmers directly.

Decentralized Store of Data

Around the world, blockchain technology helps displaced people regain IDs and access to other social services. Here, a CARD agent in the Philippines tracks IDs by hand. Photo credit: Brooke Patterson/USAID.

Blockchain is resistant to the traditional problems one central authority or data store faces when being attacked or experiencing outages. In a blockchain, data are constantly being shared and verified across all members—although blockchain has been criticized for requiring large amounts of energy, storage, and bandwidth to maintain a shared data store. This decentralization is most valued in digital currencies, which rely on the scale of their blockchain to balance not having a country or region “owning” and regulating the printing of the currency. Blockchain has also been explored to distribute data and coordinate resources without a reliance on a central authority in order to resist censorship.
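For readers curious about the mechanics, the cross-copy verification described above can be sketched in a few lines of Python. This is an illustrative toy, not a real consensus protocol: each simulated node holds its own full copy of the ledger, and peers flag any copy whose hash disagrees with the majority.

```python
import hashlib
import json
from collections import Counter
from copy import deepcopy

def ledger_digest(transactions):
    """Hash a node's full copy of the transaction log."""
    serialized = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()

# Three nodes each hold their own full copy of the shared ledger.
ledger = [{"from": "a", "to": "b", "amount": 5}]
nodes = {name: deepcopy(ledger) for name in ("node1", "node2", "node3")}

# One node's copy is tampered with.
nodes["node2"][0]["amount"] = 500

# Peers compare digests; the majority digest wins and outliers are flagged.
digests = {name: ledger_digest(copy) for name, copy in nodes.items()}
majority_digest, _ = Counter(digests.values()).most_common(1)[0]
tampered = [name for name, d in digests.items() if d != majority_digest]
print(tampered)  # ['node2']
```

Real networks replace the simple majority vote with consensus mechanisms such as proof of work, but the underlying idea is the same: a tampered copy no longer matches the rest of the network and is rejected.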

Blockchains promise anonymity, or at least pseudonymity, because limited information regarding individuals is stored in transaction logs. However, this does not guarantee that the platforms protect freedom of expression. For instance, the central internet regulator in China proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers.
Blockchain and freedom of expression

Back to top

Risks

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with blockchain in DRG work, as well as how to mitigate unintended – and intended – consequences.

Unequal Access

The minimal requirements for an individual or group to engage with blockchain present a challenge for many: connectivity, reliable and robust bandwidth, and local storage are all needed. Mobile phones are therefore often insufficient devices for hosting or downloading blockchains, and this infrastructure requirement can be a barrier to access in areas where internet connectivity occurs primarily via mobile devices. Because every full node (host of a blockchain) stores a copy of the entire transaction log, blockchains only grow longer and larger with time and can be extremely resource-intensive to download on a mobile device. For instance, over the span of a few years, the blockchain underlying Bitcoin grew from several gigabytes to several hundred; for a cryptocurrency blockchain, this growth is a sign of healthy economic activity. While offline use of blockchain is possible, offline components are among the most vulnerable to cyberattacks, and this could put the entire system at risk.

Blockchains, whether fully independent or built on existing blockchains, require some share of participants to lend processing power to the network. Especially as blockchains scale, this requirement either becomes exclusionary or creates classes of privileged users.

Another problem that can undermine the intended benefits of the system is unequal access to opportunities to convert blockchain-based currencies into traditional currencies. This is especially a problem for philanthropy or for supporting civil society organizations in restrictive regulatory environments. For cryptocurrencies to have actual value, someone has to be willing to pay money for them.

Lack of digital literacy

Beyond these technical challenges, blockchain technology requires a strong baseline understanding of technology, and it is often deployed in situations where digital literacy itself is a challenge. Use of the technology without a baseline understanding of its consequences is not meaningful consent and could have dire results.

There are paths around some of these problems, but any blockchain use needs to reflect on what potential inequalities could be exacerbated by or with this technology.

Further, these technologies are inherently complex. Outside the atypical case where individuals possess the technical sophistication and means to install blockchain software and set up nodes, the question remains how the majority of individuals can effectively access them. This is especially true of individuals who may have added difficulty interfacing with technologies due to disability, literacy, or age. Ill-equipped users are at increased risk of their investments or information being exposed to hacking and theft.

Blockchain and freedom of expression

Breaches of Privacy

Account ledgers for Nepali savings and credit cooperatives show the burden of paper. Blockchain replicates online the value of paper-and-ink records. Photo credit: Brooke Patterson/USAID.

Storing sensitive information on a blockchain, such as biometrics or gender, combined with the immutable aspects of the system, can create considerable risks for individuals when this information is accessed by others with the intention to harm. Even when specific personally identifiable information is not stored on a blockchain, pseudonymous accounts are difficult to protect from being mapped to real-world identities, especially if they are connected with financial transactions, services, and/or actual identities. This can erode rights to privacy and protection of personal data, and it can exacerbate the vulnerability of already marginalized populations and of people who change fundamental aspects of their identity (such as gender or name). Data privacy rights, including explicit consent and the modification and deletion of one’s own data, are often protected through data protection and privacy legislation, such as the General Data Protection Regulation (GDPR) in the EU, which serves as a framework for many other policies around the world. The United Nations Conference on Trade and Development keeps an up-to-date overview of legislation in this area around the world.

For example, in September 2017, concerns surfaced about the Bangladeshi government’s plans to create a ‘merged ID’ that would combine citizens’ biometric, financial, and communications data (Rahman, 2017). At that time, some local organizations had started exploring a DLT solution to identify and serve the needs of local Rohingya asylum-seekers and refugees. Because aid agencies are required to comply with national laws, any data recorded on a DLT platform could be subject to automatic data-sharing with government authorities. If these sets of records were to be combined, they would create an indelible, uneditable, tamper-proof set of records of highly vulnerable Rohingya asylum-seekers, ready for cross-referencing with other datasets. “As development and humanitarian donors and agencies rush to adopt new technologies that facilitate surveillance, they may be creating and supporting systems that pose serious threats to individuals’ human rights.”

These issues raise questions about meaningful, informed consent – how and to what extent do aid recipients understand DLTs and their implications when they receive assistance? […] Most experts agree that data protection needs to be considered not only in the realm of privacy, empowerment and dignity, but also in terms of potential physical impact or harm (ICRC and Brussels Privacy Hub, 2017; ICRC, 2018a)

Blockchain and distributed ledger technologies in the humanitarian sector

Environmental Impact

As blockchains scale, they require increasing amounts of computational power to stay in sync. In most digital currency blockchains, this scale problem is addressed by rewarding people who contribute the required processing power with currency. The University of Cambridge estimated in fall 2019 that Bitcoin alone used 0.28% of global electricity consumption, which, if Bitcoin were a country, would make it the 41st most energy-consuming country, just ahead of Switzerland. Further, research has shown that each Bitcoin transaction takes as much energy as needed to run a well-appointed house and all the appliances in it for an entire week.

Regulatory Uncertainty

As is often the case for emerging technology, the regulations surrounding blockchain are either ambiguous or nonexistent. In some cases, such as when the technology may be used to publish censored speech, regulators overcorrect and block access to the entire system or remove its pseudonymous protections in-country. In Western democracies, there are evolving financial regulations as well as concerns around the immutable nature of the records stored in a blockchain. Personally identifiable information (see Breaches of Privacy, above) in a blockchain cannot be removed or changed as required by the GDPR right to be forgotten, and widely illegal content has already been inserted into the Bitcoin blockchain.

Trust, Control, and Management Issues

While a blockchain has no “central database” that could be hacked, it also has no central authority to adjudicate or resolve problems. A lost or compromised password is almost guaranteed to result in the loss of access to funds or, worse, digital identities. Compromised passwords or illegitimate use of the blockchain can harm the individuals involved, especially when personal information is accessed or when child sexual abuse images are stored forever. Building mechanisms to address this problem undermines the key benefits of the blockchain.

That said, an enormous amount of trust is inherently placed in the software-development process around blockchain technologies, especially those using smart contracts. Any flaw in the software, and any intentional “back door”, could enable an attack that undermines or subverts the entire goal of the project.

Where is trust being placed: is it in the coders, the developers, or those who design and govern mobile devices or apps, and is trust in fact being shifted from social institutions to private actors? All stakeholders should consider what implications this has and how these actors are accountable to human rights standards.

Blockchain and freedom of expression

Back to top

Questions

If you are trying to understand the implications of blockchain in your work environment, or are considering using aspects of blockchain as part of your DRG programming, ask yourself these questions:

  1. Does blockchain provide specific, needed features that existing solutions with proven track records and sustainability do not?
  2. Do you really need blockchain, or would a database be sufficient?
  3. How will this implementation respect data privacy and control laws such as the GDPR?
  4. Do your intended beneficiaries have the internet bandwidth needed to use the product you are developing with blockchain?
  5. What external actors/partners will control critical aspects of the tool or infrastructure this project will rely on?
  6. What external actors/partners will have access to the data this project creates? What access conditions, limits, or ownership will they have?
  7. What level of transparency and trust do you have with these actors/partners?
  8. Are there ways to reduce dependency on these actors/partners?
  9. How are you conducting and measuring informed consent  processes for any data gathered?
  10. How will this project mitigate technical, financial, and/or infrastructural inequalities and ensure they are not exacerbated?
  11. Will the use of blockchain in your project comply with data protection and privacy laws?
  12. Do other existing laws and policies address the risks and offer mitigating measures related to the use of blockchain in your context, such as anti-money-laundering regulation?
  13. Are there laws in the works that may mitigate your project or increase costs?
  14. Do existing laws enable the benefits you have identified for the blockchain-enabled project?
  15. Are these laws aligned with international human rights law, such as the right to privacy, to freedom of expression and opinion, and to enjoy the benefits of scientific progress?

Back to top

Case Studies

Blockchain and the supply chain

Blockchain has been used for supply chain transparency of products that are commonly not ethically sourced. For example, in 2018, the World Wildlife Fund collaborated with Sea Quest Fiji Ltd., a tuna fishing and processing company, and ConsenSys, a tech company with an implementer called TraSeable, to use blockchain to trace the origin of tuna caught in a Fijian longline fishery. Each fish was tagged when caught, and its entire journey was recorded on the blockchain. This methodology could serve as a tool for sustainability and ethical business practices in other supply chains as well, including those that rely on child and forced labor.

Blockchain to combat corruption in registering land titles

A program was developed in Georgia to address corruption in land management. Land ownership is a sector particularly vulnerable to corruption, in part because ownership is recognized through titles, which can easily be lost or destroyed, making it easy for government officials to extract bribes to register land. Blockchain was introduced to provide a transparent and immutable record of each step of the land registration process, so that the process could be tracked and there would be no danger of losing the record.

Blockchain for COVID-19 vaccine passports

After the COVID-19 vaccine was made public, many states considered implementing a vaccine passport system, whereby individuals would be required to show documentation proving they were vaccinated in order to enter certain countries or buildings. Blockchain was considered as a tool to more easily store vaccine records and track doses without negative consequences for individuals who lose their records. While a system in which individuals have no alternative to storing their data on a blockchain raises significant data privacy concerns, it could also bring significant public health benefits, and it suggests that future identification documents may increasingly rely on blockchain.

Blockchain to facilitate transactions for humanitarian aid

Humanitarian aid is the sector where blockchain for human rights and democracy has been adopted the most. Blockchain has been embraced as a way to combat corruption and ensure money and aid reach their intended recipients, to allow access to donations in countries where crises have affected the banking system, and, in coordination with digital IDs, to allow donor organizations to better track funding and get money to people who lack traditional means of receiving it.

Sikka, a project of the Nepal Innovation Lab, operates through partnerships with local vendors and cooperatives within the community, sending value vouchers and digital tokens to individuals through SMS. Value vouchers can be used to purchase humanitarian goods from vendors, while digital tokens can be exchanged for cash. The initiative also supplies donors with data for monitoring and evaluation purposes. The International Federation of the Red Cross and Red Crescent Societies (IFRC) has a similar project, the Blockchain Open Loop Cash Transfer Pilot Project for cash transfer programming. The Kenya-based project utilized a mobile money transfer service operating in the country, Safaricom M-Pesa, to send payments to the mobile wallets of beneficiaries without the need for national ID documentation, and blockchain was used to track the payments. A management platform called “Red Rose” allowed donor organizations to manage data, and the program explored many of the ethics concerns around the use of blockchain.

The Start Network is another humanitarian aid organization that has experimented with using blockchain to disburse funds because of the reduced transfer fees, transparency, and speed benefits. Using the Disperse platform, a distribution platform for foreign aid, the Start Network hoped to increase the humanitarian sector’s comfort with introducing new tech solutions.

AIDONIC is a private company with a donation management tool that incentivizes humanitarian donation with a platform allowing donors, even individuals, greater control over what their donations are used for. Small donors can choose specific initiatives, which will launch when fully funded, and throughout projects, donors can monitor, track, and trace their contributions.

Blockchain for collaboration

A similar humanitarian application of blockchain is collaboration. The World Food Program’s Building Blocks project allows organizations that work in the same region but offer different types of humanitarian aid to coordinate their efforts. All of the actions of the humanitarian organizations are recorded on a shared private blockchain, and all members of the network must be approved. The program has a data privacy policy: it records no data beyond what is required, releases pseudonymous data only to approved humanitarian organizations, and records no sensitive information. Even so, humanitarian applications of blockchain raise many cybersecurity and data privacy concerns. The project has not been as successful as hoped; only UN Women and the World Food Program are full members. Still, the network makes it easier for beneficiaries to access aid from both organizations, and it gives aid organizations a clearer picture of what types of aid are being provided and what is missing.

Blockchain in electronic banking

In addition to its applications in humanitarian funding, blockchain has been used to address gaps in financial services outside of crisis zones. Project i2i provides a nontraditional solution for the unbanked population in the Philippines. While standing up the internet infrastructure necessary to establish traditional banking in rural areas is extremely challenging and resource intensive, with blockchain, each bank only needs an iPad. With this, banks connect to the Ethereum network, and users gain access to a trustworthy and efficient system for processing transactions. Though the system has successfully reduced the number of unbanked people in the Philippines, informed consent remains an issue, as the majority of users have no other option, and questions about data privacy rights persist.

Blockchain and data integrity

While data privacy is a serious concern, blockchain also has the potential to support democracy and human rights work through data collection, verification, and even the protection of data privacy. Chemonics’ 2018 Blockchain for Development Solutions Lab used blockchain to make the process of collecting and verifying the biodata of USAID professionals more efficient. The use of blockchain reduced incidents of error and fraud and increased data protection, both because of the natural defense against hacking that blockchains provide and because, instead of sharing ID documents through email, the program used encrypted keys on Chemonics’ platform.

Blockchain for fact checking images

Truepic is a company that provides fact-checking solutions. The company supports information integrity by storing accurate information about pictures that have been verified. Truepic combines camera technology, which records pertinent details of every photo, with blockchain storage to create a database of verified imagery that cannot be tampered with. This database can then be used to fact-check manipulated images.
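The core of this approach can be illustrated with a short Python sketch. This is a simplified illustration, not Truepic’s actual system: it simply stores the SHA-256 fingerprint of a verified image and later checks whether a circulating copy matches the stored record.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """SHA-256 digest of the raw image file."""
    return hashlib.sha256(image_bytes).hexdigest()

# At capture time, the verified photo's digest is written to the ledger.
original = b"\x89PNG...raw bytes of the captured photo..."
ledger = {fingerprint(original)}  # stand-in for immutable blockchain storage

# Later, any circulating copy can be checked against the ledger.
doctored = original + b"edited"
print(fingerprint(original) in ledger)   # True: matches the verified record
print(fingerprint(doctored) in ledger)   # False: the image was altered
```

Because changing even one byte of the image produces a completely different digest, a match against the ledger is strong evidence the image is the one originally captured.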

Blockchain to permanently keep news articles

Civil.co was a journalism-supporting organization that harnessed the blockchain to keep news articles permanently online in the face of censorship. Civil’s use of blockchain aimed to encourage community trust in the news. First, articles were published using the blockchain itself, meaning a user with sufficient technical skills could theoretically verify that the articles came from where they claimed to. Civil also supported trust with two non-blockchain “technologies”: a “constitution” that all its newsrooms adopted, and a ranking system through which its community of readers and journalists could vote up news and newsrooms they found trustworthy. Publishing on a peer-to-peer blockchain gave articles additional resistance to censorship. Readers could also pay journalists for articles using Civil’s tokens. However, Civil struggled from the beginning to raise money, and its newsroom model failed to prove itself.

For more blockchain case studies, check out these resources:

  • New America keeps a Blockchain Impact Ledger with a database of blockchain projects and the people they serve.
  • The 2019 report “Blockchain and distributed ledger technologies in the humanitarian sector” provides multiple examples of humanitarian use of DLTs, including for financial inclusion, land titling, donation transparency, fraud reduction, cross-border transfers, cash programming, grant management and organizational governance, among others.
  • In “Blockchain: Can We Talk About Impact Yet?”, Shailee Adinolfi, John Burg and Tara Vassefi respond to a MERLTech blog post that not only failed to find successful applications of blockchain in international development, but was unable to identify companies willing to talk about the process. This article highlights three case studies of projects with discussion and links to project information and/or case studies.
  • In “Digital Currencies and Blockchain in the Social Sector,” David Lehr and Paul Lamb summarize work in international development leveraging blockchain for philanthropy, international development funding, remittances, identity, land rights, democracy and governance, and environmental protection.
  • Consensys, a company building and investing in blockchain solutions, including some in the civil sector, summarizes (successful) use cases in “Real-World Blockchain Case Studies.”

Back to top



Cryptocurrency

What are cryptocurrencies?

Cryptocurrency is a type of digital or virtual currency that uses cryptography for secure and private transactions and for controlling the creation of new units. Unlike traditional currencies issued by governments (like the US Dollar or Euro), cryptocurrencies are typically decentralized and operate on blockchain technology. The first cryptocurrency, Bitcoin, was created in the wake of the 2008 global financial crisis to decentralize the system of financial transactions. Cryptocurrency is almost a direct contrast to the global financial system: no currency is attached to state authority, it is unbound by geographic regulations, and, most importantly, maintenance of the system is community-driven by a network of users. All transactions are logged pseudonymously on a public ledger, such as Bitcoin’s blockchain.

Definitions

Blockchain: Blockchain is a type of technology used in many digital currencies as a bank ledger. Unlike a normal bank ledger, copies of that ledger are distributed digitally, among computers all over the world, automatically updating with every transaction.
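A minimal Python sketch can illustrate how such a ledger links entries together. This is a toy model, not any production blockchain: each block stores the hash of the previous block, so changing any historical transaction invalidates every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that records the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_valid(chain: list) -> bool:
    """Each block must reference the hash of the one before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 3}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 1}])
print(chain_valid(chain))  # True

chain[0]["transactions"][0]["amount"] = 3000  # tamper with history
print(chain_valid(chain))  # False: the link to the next block breaks
```

This linking is why the ledger copies distributed around the world can detect tampering: a rewritten transaction no longer matches the hash recorded in the following block.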

Cryptography: The practice of employing mathematical techniques to secure and protect information, transforming it into an unreadable format using encryption and hashing. In cryptocurrencies, cryptography safeguards transactions, privacy, and ownership verification using techniques like public-private keys and digital signatures on a blockchain.

Currency: A currency is a widely accepted system of money in circulation, usually designated by a nation or group of nations. Currency is commonly found in the form of paper and coins, but can also be digital (as this primer explores).

Fiat money: Government-issued currency, such as the USD. Sometimes referred to as Fiat currency.

Hashing: The process of transforming data into a fixed-length string of characters; in cryptocurrencies, hashing is central to how transactions are verified. When one person pays another using Bitcoin, for example, computers on the blockchain automatically check that the transaction is accurate.

Hash: The fixed-length output of a hash function. In mining, computers compete to find a hash value that meets the network’s requirements in order to add transactions to the blockchain.

Initial Coin Offering (ICO): The process by which a new cryptocurrency or digital “token” invites investment.

Mining: The process by which a computer solves a hash. The first computer to solve the hash permanently stores the transaction as a block on the blockchain. When a computer successfully adds a block to the blockchain, it is rewarded with a coin. Arriving at the right answer for a hash before another miner relates to how fast a computer can produce hashes. In the early years of Bitcoin, for example, mining could be performed effectively using open-source software on standard desktop computers. More recently, only special-purpose machines known as application-specific integrated circuit (ASIC) miners can mine bitcoin cost-effectively, because they are optimized for the task. Mining pools (groups of miners) and companies now control most Bitcoin mining activity.
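The hash-solving race described above can be sketched in Python. This is a simplified model, not Bitcoin’s actual algorithm (which double-hashes a binary block header against a numeric target); here “difficulty” is simply the number of leading zeros the hash must have.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try nonces until the hash has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("alice pays bob 2 BTC")
digest = hashlib.sha256(f"alice pays bob 2 BTC{nonce}".encode()).hexdigest()
print(digest[:4])  # '0000', the proof that work was done
```

There is no shortcut to finding a valid nonce other than trying values one after another, which is why mining speed comes down to how many hashes per second a machine can produce, and why special-purpose ASIC hardware now dominates.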

How do cryptocurrencies work?

Money transfer agencies in Nepal. Cryptocurrencies potentially allow users to send and receive remittances and access foreign financial markets. Photo credit: Brooke Patterson/USAID.

Users purchase cryptocurrency with a credit card, debit card, or bank account, or acquire it through mining. They then store the currency in a digital “wallet,” either online, on a computer, or offline, on a portable storage device such as a USB stick. These wallets send and receive money through “public addresses,” keys that link the money to a specific type of cryptocurrency. These addresses are strings of characters that identify a wallet for transactions. A user’s public address can be shared with anyone to receive funds and can also be represented as a QR code. Anyone with whom a user makes a transaction can see the balance in the public address that they use.
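As a loose illustration of the relationship between keys and addresses, consider the toy sketch below. It is emphatically not how real wallets work; real systems derive the public key from the private key using elliptic-curve mathematics and encode addresses in formats such as Base58Check. The sketch only shows the general idea that an address is a shareable string derived from key material, while the private key stays secret.

```python
import hashlib
import secrets

# Toy illustration only: the "public key" here is simulated by hashing,
# whereas real wallets derive it from the private key via elliptic curves.
private_key = secrets.token_hex(32)                             # kept secret by the wallet
public_key = hashlib.sha256(private_key.encode()).hexdigest()   # derived, shareable
address = hashlib.sha256(public_key.encode()).hexdigest()[:40]  # shortened digest

# The address is the string shared (or shown as a QR code) to receive funds.
print(len(address))  # 40
```

The key property being imitated is one-way derivation: anyone can check that an address corresponds to a public key, but no one can work backward from the address to the private key.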

While transactions are publicly recorded, identifying user information is not. For example, on the Bitcoin blockchain, only a user’s public address appears next to a transaction—making transactions confidential but not necessarily anonymous.

Cryptocurrencies have increasingly struggled with intense periods of volatility, most of which stem from the decentralized system of which they are part. The lack of a central body means that cryptocurrencies are not legal tender, they are not regulated, there is little to no insurance if an individual’s digital wallet is hacked, and most payments are not reversible. As a result, cryptocurrencies are inherently speculative. In November 2021, Bitcoin peaked at a price of nearly $65,000 per coin, but crashed about a year later following the collapse of FTX, which led to a domino effect in the crypto sector. Prior to the crash, new “meme coins” that gained popularity on social media saw substantial price increases as investors flocked to them. The crash that followed brought increased attention to tightening regulatory control over cryptocurrency and trading. Some cryptocurrencies, such as Tether, have attempted to offset volatility by tying their market value to an external reference like the USD or gold, but the industry overall has not yet reconciled how to maintain an autonomous, decentralized system with overall stability.

Types of Cryptocurrencies

The value of a certain cryptocurrency is heavily dependent on the faith of its investors, its integration into financial markets, public interest in using it, and its performance compared to other cryptocurrencies. Bitcoin, founded in 2008, was the first and only cryptocurrency until 2011 when “altcoins” began to appear. Estimates for the number of cryptocurrencies vary, but as of June 2023, there were about 23,000 different types of cryptocurrencies.

  • Bitcoin
    It has the largest user base and a market capitalization in the hundreds of billions. While Bitcoin initially attracted financial institutions like Goldman Sachs, the collapse of Bitcoin’s value (along with other cryptocurrencies) in 2018 has since increased skepticism towards its long-term viability.
  • Ethereum
    Ethereum is a decentralized software platform that enables Smart Contracts and Decentralized Applications (DApps) to be built and run without interference from a third party (like Bitcoin, it runs on blockchain technology). Ethereum launched in 2015 and is currently the second-largest cryptocurrency by market capitalization after Bitcoin.
  • Ripple (XRP)
    Ripple is a real-time payment-processing network that offers both instant and low-cost international payments, to compete with other transaction systems such as SWIFT or VISA. It is the third largest cryptocurrency.
  • Tether (USDT)
    Tether is one of the first and most popular of a group of “stablecoins” — cryptocurrencies that peg their market value to a currency or other external reference point to reduce volatility.
  • Monero
    Monero is the largest of what are known as privacy coins. Unlike Bitcoin, Monero transactions and account balances are not public by default.
  • Zcash
    Another anonymity-preserving cryptocurrency, Zcash, is operated under a foundation of the same name. It is branded as a mission-based, privacy-centric cryptocurrency that enables users “to protect their privacy on their own terms”, regarding privacy as essential to human dignity and to the healthy functioning of civil society.

Fish vendor in Indonesia. Women are the most underbanked sector, and financial technologies can provide tools to address this gap. Photo credit: Afandi Djauhari/NetHope.

Back to top

How are cryptocurrencies relevant in civic space and for democracy?

Cryptocurrencies are, in many ways, ideal for the needs of NGOs, humanitarians, and other civil society actors. Civic space actors who require blocking-resistant, low-fee transactions might find cryptocurrencies both convenient and secure. The use of cryptocurrencies in the developing world reveals their role not just as vehicles for aid, but also as tools that facilitate the development of small- to medium-sized enterprises (SMEs) looking to enter international trade. For example, UNICEF created a cryptofund in 2019 in order to receive and distribute funding in cryptocurrencies (ether and bitcoin). In June 2020, UNICEF announced its largest investment yet in startups in developing economies that were helping respond to the COVID-19 pandemic.

However, regarding cryptocurrencies through only a traditional development lens – i.e., that they are only useful for refugees or countries with unreliable fiat currencies – oversimplifies the economic landscape of low- and middle-income countries. Many countries are home to a significant youth population poised to harness cryptocurrency in innovative ways: to send and receive remittances, to access foreign financial markets and investment possibilities, and even to encourage ecological or ethical purchasing behaviors (see the Case Studies section). During the coronavirus lockdown in India, and after the country’s reserve bank lifted its ban on cryptocurrencies, many young people started trading in Indian cryptocurrencies and using cryptocurrencies to transfer money to one another. Still, the future of crypto in India and elsewhere is uncertain. The frontier nature of cryptocurrencies poses significant risks to users when it comes to insurance and, in some cases, security.

Moreover, as will be discussed below, the distributed technology (blockchain) underlying cryptocurrencies is seen as offering resistance to censorship, since the data are distributed over a large network of computers. The blockchain offers a high level of anonymity, which may be helpful for those living under autocratic regimes and for democratic activists conducting transactions that might otherwise be monitored. Cryptocurrencies could also give a broader range of people access to banking, an essential element of economic inclusion.

Back to top

Opportunities

Cryptocurrencies can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to think more effectively and safely about cryptocurrencies in your work.

Accessibility

Cryptocurrencies are accessible to a broader range of users than regular cash-currency transactions are; they are not subject to government regulation and they do not carry high processing fees. Cross-border transactions in particular benefit from these features, since international banking fees and poor exchange rates can be extremely costly. In some cases, the value of cryptocurrencies may even be more stable than that of the local currency (see the Cryptocurrencies in Volatile Markets case study below). Cryptocurrencies that require participants to log in (on “permissioned” systems) necessitate that an organization controls participation in its system. In some cases, certain users also help run the system in other ways, like operating servers. When this is the case, it is important to understand who those users are, how they are selected, and how their ability to use the system could be taken away if they turn out to be bad actors.

Additionally, Initial Coin Offerings (ICOs) lower the entry barrier to investing by cutting venture capitalists and investment banks out of the investing process, thereby democratizing it. While similar to Initial Public Offerings (IPOs), ICOs differ significantly in that they allow companies to interact directly with individual investors. This also poses a risk to investors, as the safeguards offered by investment banks for traditional IPOs do not apply (see Lack of Governance and Regulatory Uncertainty). The absence of regulatory bodies has also spurred the growth of scam ICOs: when an ICO or cryptocurrency has no legitimate strategy for generating value, it is typically a scam.

Still, broad accessibility has not yet been achieved, owing to a combination of factors including user knowledge gaps, internet and computing requirements, and incompatibility between traditional banking systems and cryptocurrency fintech. For an understanding of the usability and risk side of cryptocurrency use, and the disproportionate risks marginalized groups face, see the section on Digital Literacy and Access Requirements.

Anonymity and Censorship Resistance

The decentralized, peer-to-peer nature of cryptocurrencies may be of great comfort to those seeking anonymity, such as human rights defenders working in closed spaces or people simply seeking an equivalent to “cash” for online purchases (see the Cryptocurrencies in Volatile Markets case study below). Cryptocurrencies can be useful for someone who wishes to donate anonymously to a foundation or organization when that donation could put them at risk if their identity were known, making them a powerful tool for activists. The anonymity of cryptocurrencies has also caused concern among advocacy groups, who argue that, without open ledgers and tracking, crypto could be used by foreign illiberal actors to fund authoritarian campaigns.

Since the data that support the currency are distributed over a large network of computers, it is more difficult for a bad actor to locate and target a transaction or system operation. But a currency’s ability to protect anonymity largely depends on the specific goal of the cryptocurrency. Zcash, for example, was specifically developed to hide transaction amounts and user addresses from public view. Zcash has also played a role in enabling more charitable giving, and several charities tackling research, journalism, and climate change advocacy are powered by Zcash. Cryptocurrencies with a large number of participants are also resistant to more benign, routine system outages, because some data stores in the network can continue to operate if others fail.
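The tamper-resistance that underpins this censorship resistance comes from the way blockchain records are chained together by cryptographic hashes. The toy sketch below is illustrative only (real cryptocurrencies add digital signatures, consensus mechanisms, and thousands of independent nodes), but it shows why editing an old transaction is immediately detectable:

```python
import hashlib
import json

def block_hash(contents: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    encoded = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    """Link each new block to the hash of the previous one."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "tx": transaction}
    block["hash"] = block_hash({"prev_hash": prev, "tx": transaction})
    chain.append(block)

def verify(chain: list) -> bool:
    """Any edit to an earlier block breaks every later link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash(
            {"prev_hash": block["prev_hash"], "tx": block["tx"]}
        ):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
assert verify(chain)           # the untouched chain checks out

chain[0]["tx"] = "Alice pays Mallory 500"  # tamper with history
assert not verify(chain)       # the tampering is detected
```

Because each block stores the hash of its predecessor, altering any historical transaction changes that block’s hash and breaks every later link, so honest participants holding copies of the chain can reject the tampered version.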

Creating new governance systems

There have been few successful attempts at regulating cryptocurrency at the transnational level; most governance frameworks, where they exist at all, remain national. There are therefore substantial opportunities for international cooperation on crypto governance, and efforts to create multilateral networks and partnerships between the private and public sectors are growing. The Digital Currency Governance Consortium, for example, comprises 80 organizations across the globe and helps facilitate discussions around promoting competitiveness, financial stability and protections, and regulatory frameworks with regard to cryptocurrency.

Back to top

Risks

A user in the Philippines receives a transaction confirmation. Users purchase cryptocurrency with a credit card, debit card, or bank account, or through mining. Photo credit: Brooke Patterson/USAID.

The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with cryptocurrencies in DRG work, as well as how to mitigate unintended – and intended – consequences.

Anonymity

While no central authority records cryptocurrency transactions, the public nature of these transactions does not prevent governments from recording them, and an identity that can be associated with records on a blockchain is a particular problem under totalitarian surveillance regimes. The central internet regulator in China, for example, proposed regulations that would require local blockchain companies to register users with their real names and national identification card numbers. In order for users to trade or exchange a cryptocurrency into an established fiat currency, a new digital currency would need to incorporate Know Your Customer (KYC), Anti-Money Laundering (AML), and Combating the Financing of Terrorism (CFT) regulations into its process for signing up new users and validating their identities. These processes pose a high barrier to undocumented migrants and anyone else not holding a valid government ID.

As described in the case study below, the partially anarchical environment of cryptocurrencies can also foster criminal activity.

Case Study: The Dark Side of the Anonymous User Bitcoin and other cryptocurrencies are praised for supporting financial transactions that do not reveal a user’s identity. But this has made them popular on “dark web” sites like Silk Road, where cryptocurrency can be exchanged for illegal goods and services like drugs, weapons, or sex work. The Silk Road was eventually taken down by the U.S. Federal Bureau of Investigation, when its founder, Ross Ulbricht, used the same name to advertise the site and seek employees in another forum, linking to a Gmail address. Google provided the contents of that address to the authorities when subpoenaed.

The lessons to take from the Silk Road case are that anonymity is rarely perfect and unbreakable, and that cryptocurrency’s identity protection is not an ironclad guarantee. Law enforcement officials and governments have worked to increase the regulatory tools at their disposal and international cooperation on crimes involving cryptocurrency. On a public blockchain, a single identity slip (even in some other forum) can tie all of the transactions of that cryptocurrency account to one user. The owner of that wallet can then be connected to their subsequent purchases, as easily as a cookie tracks a user’s web browsing activity.

Lack of Governance

The lack of a central body greatly increases the risk of investing in a cryptocurrency, and there is little to no recourse for users if the system is attacked digitally and their currency is stolen. In 2022, criminals hacked the FTX exchange and stole $415 million worth of cryptocurrency, one of the largest such hacks in history, just hours after the company was rocked by an embezzlement scandal. The incident led government regulators to increase scrutiny of the sector, as users were left unable to recover much of the stolen funds.

Regulatory Uncertainty

The legal and regulatory frameworks for blockchain are developing more slowly than the technology. Each jurisdiction – whether a single country or a multi-country financial zone such as the European Union – regulates cryptocurrencies differently, and there is not yet a global financial standard for regulating them. The Arab nations bordering the Persian Gulf (the Gulf States), for example, have enacted a number of different laws on cryptocurrencies: they face an outright ban in the United Arab Emirates and Saudi Arabia. Other countries have developed tax laws, anti-money-laundering laws, and anti-terrorism laws to regulate cryptocurrencies. In many places, cryptocurrency is taxed as property rather than as currency.

Cryptocurrency’s commitment to autonomy – that is, its separation from any fiat currency – has cast it as an antagonist to many established regulatory bodies. Observers note that eliminating the ability of intermediaries (e.g., governments or banks) to claim transaction fees alters existing power balances and may trigger prohibitive regulations even as it temporarily decreases financial costs. Thus, there is always a risk that governments will develop policies unfavorable to financial technologies (fintech), rendering cryptocurrency and mobile money useless within their borders. The constantly evolving nature of fintech law makes compliance difficult for any new digital currency.

Environmental Inefficiency

The larger a blockchain grows, the more computational power it requires. In late 2019, the University of Cambridge estimated that Bitcoin used 0.55% of global electricity consumption, roughly equal to the annual usage of a country such as Malaysia or Sweden.

Digital Literacy and Access Requirements

The blockchain technology underlying cryptocurrencies requires access to the Internet, so areas with inadequate infrastructure or capacity are generally not usable contexts for cryptocurrencies, although limited possibilities of using cryptocurrency without Internet access do exist. “This digital divide also extends to technological understanding between those who know how to ‘operate securely on the Internet, and those who do not’”, as noted by the DH Network. Cryptocurrency apps are often not usable on lower-end devices, requiring users to own a smartphone or computer, and the apps themselves involve a steep learning curve. Additionally, the slow speed of transactions – which can take minutes or up to an hour – is a significant disadvantage, especially when compared to the seconds it takes to complete a standard Visa transaction. Lastly, using platforms like Bitcoin can be particularly tricky for groups with lower rates of digital literacy and for those with fewer resources, who are less financially resilient to the volatility of the crypto market. Given the lack of consumer protections and regulation of cryptocurrency in certain areas, and the lack of awareness about the existing risks, lower-income users and investors are more likely to face negative financial consequences during market fluctuations. Recently, however, some countries, like Ghana and the Gambia, have launched government initiatives to bridge the digital-literacy divide and connect otherwise marginalized groups with the tools necessary to use crypto and other forms of emerging tech effectively.

Back to top

Questions

If you are trying to understand the implications of cryptocurrencies in your work environment, or are considering using cryptocurrencies as part of your DRG programming, ask yourself these questions:

  1. Do the issues you or your organization are seeking to address require cryptocurrency? Can more traditional currency solutions apply to the problem?
  2. Is cryptocurrency an appropriate currency for the populations you are working with? Will it help them access the resources they need? Is it accepted by the other relevant stakeholders?
  3. Do you or your organization need an immutable database distributed across multiple servers? Would it be acceptable to have the currency and transactions connected to a central server?
  4. Is the cryptocurrency you wish to use viable? Do you trust the currency and have good reason to assume it will be sufficiently stable in the future?
  5. Is the currency legal in the areas where you will be operating? If not, will this pose problems for your organization?
  6. How will you obtain this currency? What risks are involved? What external actors will you be reliant on?
  7. Will the users of this currency be able to benefit from it easily and safely? Will they have the required devices and knowledge?

Back to top

Case Studies

Mobile money agency in Ghana. The use of cryptocurrencies in the developing world can facilitate the development of small- to medium-sized enterprises looking to enter international trade. Photo credit: John O’Bryan/ USAID.
Crypto is helping connect people in low-income countries to global markets

For many humanitarian actors, the ideal role for cryptocurrencies is to facilitate the transfer of remittances to families across borders, which is especially useful during conflicts, when traditional banking systems may shut down. Cross-border transfers can be costly and subject to complicated regulations, but apps like Strike are helping to ease the process. Strike and Bitnob partnered to allow people in Kenya, Nigeria, and Ghana to easily receive instant payments from U.S. bank accounts through the Bitcoin Lightning Network and convert those payments to local currency. Bitcoin apps and other fintech are highly useful for upper-middle-class entrepreneurs in lower-income countries who are building international businesses through trade and online commerce, and emerging apps like Strike may help bring banking accessibility to underbanked areas.

Using Crypto to increase accessibility in authoritarian regimes

Some human rights activists have argued that cryptocurrency has helped those in authoritarian regimes maintain financial ties to the outside world. Given the anonymity associated with cryptocurrency transactions, the technology can enable trade and transactions where they may not otherwise be possible. In China and Russia, for example, financial transactions that would normally be monitored by the state can be conducted using cryptocurrency instead. Bitcoin and similar platforms also offer refugees and other persons without traditional forms of identification a way to access their finances. Conversely, critics have argued that various cryptocurrencies are often used to purchase black-market goods, which often involve exploitative industries like drug and sex trafficking, or may be used by widely sanctioned countries like North Korea. Still, in situations where people may be cut off from traditional forms of banking, crypto may fill an important gap.

Cryptocurrencies in Volatile Markets

In recent years, countries with volatile markets have been slowly incorporating cryptocurrency in response to financial crises, as citizens search for new options. Bitcoin has been used to purchase medicine and Amazon gift cards and to send remittances. Cryptocurrency has also become increasingly adopted at the institutional level. In January of 2023, two years after formally recognizing it as legal currency, El Salvador introduced legislation to regulate Bitcoin. Despite hopes that Bitcoin would ease the process of sending remittances and increase accessibility for underbanked people, widespread use of the currency has not caught on, as users cite high fees as a reason for avoiding it. Moreover, many still cite uncertainty and a lack of knowledge as reasons they have not switched from traditional forms of banking and exchange. The introduction of Bitcoin has also worsened El Salvador’s credit rating and reportedly caused further friction with the International Monetary Fund (IMF). Additionally, Bitcoin is highly volatile, as its value depends on supply and demand rather than being pegged to an asset, although the government of El Salvador has introduced legislation to regulate crypto exchanges.

Venezuela, which has also faced unprecedented inflation, has also turned to crypto. Between August 2014 and November 2016, the number of Bitcoin users in Venezuela rose from 450 to 85,000. The financial crisis in the country has prompted many of its citizens to search for new options. There are no laws regulating Bitcoin in Venezuela, which has emboldened people further. Some countries whose financial markets have experienced rates of inflation similar to Venezuela’s – such as South Sudan, Zimbabwe, and Argentina – have relatively active cryptocurrency markets.

Cryptocurrencies for Social Impact

Many new cryptocurrencies have attempted to monetize the social impacts of their users. SolarCoin rewards people for installing solar panels. Tree Coin gathers resources for planting trees in the developing world (one way to fight climate change) and rewards local people for maintaining those trees. Impak Coin is “the first app to reward and simplify responsible consumption” by helping users find socially responsible businesses. The coin it offered is intended to be used to buy products and services from these businesses, and to support users in microlending and crowdlending. It was part of an ecosystem of technologies that included ratings based on the UN’s Sustainable Development Goals and the Impact Management Project. True to its principles, Impak has proposed to begin assessing its impact. In the future, the impact of SolarCoin may be limited, as its value remains relatively low in comparison to set-up costs, potentially deterring people from using it more widely. In contrast, Tree Coin may be having a more direct impact on local communities, as demonstrated in the Mangrove restoration project.

Back to top

Data Protection

What is data protection?

Data protection refers to practices, measures, and laws that aim to prevent certain information about a person from being collected, used, or shared in a way that is harmful to that person.

Interview with fisherman in Bone South Sulawesi, Indonesia. Data collectors must receive training on how to avoid bias during the data collection process. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.

Data protection isn’t new. Bad actors have always sought to gain access to individuals’ private records. Before the digital era, data protection meant protecting individuals’ private data from someone physically accessing, viewing, or taking files and documents. Data protection laws have been in existence for more than 40 years.

Now that many aspects of peoples’ lives have moved online, private, personal, and identifiable information is regularly shared with all sorts of private and public entities. Data protection seeks to ensure that this information is collected, stored, and maintained responsibly and that unintended consequences of using data are minimized or mitigated.

What are data?

Data refer to digital information, such as text messages, videos, clicks, digital fingerprints, a bitcoin, search history, and even mere cursor movements. Data can be stored on computers and mobile devices, in clouds, and on external drives, and can be shared via email, messaging apps, and file-transfer tools. Your posts, likes, and retweets, your videos about cats and protests, and everything else you share on social media are data.

Metadata are a subset of data: information stored within a document or file that acts as an electronic fingerprint of that document or file. Take an email as an example. If you send an email to your friend, the text of the email is the data. The email itself, however, carries all sorts of metadata, such as who created it, who the recipient is, the IP address of the author, and the size of the email.
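To make the distinction concrete, the sketch below uses Python’s standard email library (with hypothetical addresses) to build a message and then read back its headers – the metadata – separately from its body, the data:

```python
from email.message import EmailMessage
from email.parser import Parser

# Build a simple email; the body text is the "data"
msg = EmailMessage()
msg["From"] = "alice@example.org"   # hypothetical sender
msg["To"] = "bob@example.org"       # hypothetical recipient
msg["Subject"] = "Meeting"
msg.set_content("See you at the meeting.")

raw = msg.as_string()

# Parsing the raw message exposes the metadata (the headers)
# separately from the body
parsed = Parser().parsestr(raw)
metadata = dict(parsed.items())

print(sorted(metadata))              # header names: Content-Type, From, ...
print(parsed.get_payload().strip())  # the body text itself
```

Even this tiny example shows that real messages carry more metadata than the author typed: the library adds Content-Type and MIME-Version headers automatically, and real mail servers append further headers tracing the message’s route.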

Large amounts of data get combined and stored together. These large files, containing thousands or millions of individual records, are known as datasets. Datasets can in turn be combined into very large datasets, referred to as big data, which are used to train machine-learning systems.

Personal Data and Personally Identifiable Information

Data can seem to be quite abstract, but the pieces of information are very often reflective of the identities or behaviors of actual persons. Not all data require protection, but some data, even metadata, can reveal a lot about a person. This is referred to as Personally Identifiable Information (PII). PII is commonly referred to as personal data. PII is information that can be used to distinguish or trace an individual’s identity such as a name, passport number, or biometric data like fingerprints and facial patterns. PII is also information that is linked or linkable to an individual, such as date of birth and religion.

Personal data can be collected, analyzed and shared for the benefit of the persons involved, but they can also be used for harmful purposes. Personal data are valuable for many public and private actors. For example, they are collected by social media platforms and sold to advertising companies. They are collected by governments to serve law-enforcement purposes like the prosecution of crimes. Politicians value personal data to target voters with certain political information. Personal data can be monetized by people for criminal purposes such as selling false identities.

“Sharing data is a regular practice that is becoming increasingly ubiquitous as society moves online. Sharing data does not only bring users benefits, but is often also necessary to fulfill administrative duties or engage with today’s society. But this is not without risk. Your personal information reveals a lot about you, your thoughts, and your life, which is why it needs to be protected.”

Access Now’s ‘Creating a Data Protection Framework’, November 2018.

How does data protection relate to the right to privacy?

The right to protection of personal data is closely interconnected to, but distinct from, the right to privacy. The understanding of what “privacy” means varies from one country to another based on history, culture, or philosophical influences. Data protection is not always considered a right in itself. Read more about the differences between privacy and data protection here.

Data privacy is also a common way of speaking about sensitive data and the importance of protecting it against unintentional sharing and undue or illegal gathering and use of data about an individual or group. USAID’s Digital Strategy for 2020–2024 defines data privacy as ‘the right of an individual or group to maintain control over and confidentiality of information about themselves’.

How does data protection work?

Participant of the USAID WeMUNIZE program in Nigeria. Data protection must be considered for existing datasets as well. Photo credit: KC Nwakalor for USAID / Digital Development Communications

Personal data can and should be protected by measures that shield a person’s identity and other information about them from harm and that respect their right to privacy. Examples of such measures include determining which data are vulnerable based on privacy-risk assessments; keeping sensitive data offline; limiting who has access to certain data; anonymizing sensitive data; and only collecting necessary data.
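As a minimal sketch of two of these measures – data minimization and pseudonymization – the Python example below replaces a direct identifier with a keyed hash and keeps only the fields an analysis needs. All names and field values are hypothetical, and note that keyed hashing is pseudonymization, not true anonymization: whoever holds the key, or can guess likely inputs, can still link records back to people.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it must be randomly
# generated and stored separately from the dataset
SECRET_KEY = b"replace-with-a-randomly-generated-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same token, so records can
    still be linked for analysis without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"name": "Amina K.", "district": "North", "visits": 3}

# Data minimization: keep only the fields the analysis needs,
# and pseudonymize the identifier
safe_record = {
    "person_id": pseudonymize(record["name"]),
    "district": record["district"],
    "visits": record["visits"],
}
print(safe_record)
```

Keeping the key apart from the dataset (and rotating it per project) is what distinguishes this from a plain unsalted hash, which could be reversed by brute-forcing common names.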

There are a couple of established principles and practices to protect sensitive data. In many countries, these measures are enforced via laws, which contain the key principles that are important to guarantee data protection.

“Data Protection laws seek to protect people’s data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.”

Privacy International on data protection

A couple of important terms and principles are outlined below, based on The European Union’s General Data Protection Regulation (GDPR).

  • Data Subject: any person whose personal data are being processed, such as added to a contacts database or to a mailing list for promotional emails.
  • Processing data means that any operation is performed on personal data, manually or automated.
  • Data Controller: the actor that determines the purposes for, and means by which, personal data are processed.
  • Data Processor: the actor that processes personal data on behalf of the controller, often a third-party external to the controller, such as a party that offers mailing lists or survey services.
  • Informed Consent: individuals understand and agree that their personal data are collected, accessed, used, and/or shared and how they can withdraw their consent.
  • Purpose limitation: personal data are only collected for a specific and justified use and the data cannot be used for other purposes by other parties.
  • Data minimization: that data collection is minimized and limited to essential details.

 

Healthcare provider in Eswatini. Quality data and protected datasets can accelerate impact in the public health sector. Photo credit: Ncamsile Maseko & Lindani Sifundza.

Access Now’s guide lists eight data-protection principles that come largely from international standards – in particular, the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (widely known as Convention 108) and the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines – and are considered “minimum standards” for the protection of fundamental rights by countries that have ratified international data-protection frameworks.

A development project that uses data, whether establishing a mailing list or analyzing datasets, should comply with laws on data protection. When there is no national legal framework, international principles, norms, and standards can serve as a baseline to achieve the same level of protection of data and people. Compliance with these principles may seem burdensome, but implementing a few steps related to data protection from the beginning of the project will help to achieve the intended results without putting people at risk.


The figure above shows how common practices of civil society organizations relate to the terms and principles of the data protection framework of laws and norms.

The European Union’s General Data Protection Regulation (GDPR)

The data protection law in the EU, the GDPR, went into effect in 2018. It is often considered the world’s strongest data protection law. The law aims to enhance how people can access their information and limits what organizations can do with personal data from EU citizens. Although coming from the EU, the GDPR can also apply to organizations that are based outside the region when EU citizens’ data are concerned. GDPR, therefore, has a global impact.

The obligations stemming from the GDPR and other data protection laws may have broad implications for civil society organizations. For information about the GDPR-compliance process and other resources, see the European Center for Not-for-Profit Law’s guide on data-protection standards for civil society organizations.

Notwithstanding its protections, the GDPR also has been used to harass CSOs and journalists. For example, a mining company used a provision of the GDPR to try to force Global Witness to disclose sources it used in an anti-mining campaign. Global Witness successfully resisted these attempts.

Personal or organizational protection tactics

How to protect your own sensitive information or the data of your organization will depend on your specific situation in terms of activities and legal environment. The first step is to assess your specific needs in terms of security and data protection. For example, which information could, in the wrong hands, have negative consequences for you and your organization?

Digital-security specialists have developed online resources you can use to protect yourself. One example is the Security Planner, an easy-to-use guide with expert-reviewed advice for staying safer online, including recommendations on implementing basic online practices. The Digital Safety Manual offers information and practical tips on enhancing digital security for government officials working with civil society and Human Rights Defenders (HRDs). The manual comprises 12 cards tailored to common activities in the collaboration between governments (and other partners) and civil society organizations; the first card helps to assess digital security needs.

Digital Safety Manual

 

The Digital First Aid Kit is a free resource to help rapid responders, digital security trainers, and tech-savvy activists better protect themselves and the communities they support against the most common types of digital emergencies. Global digital-safety responders and mentors, such as the Digital Defenders Partnership and the Computer Incident Response Centre for Civil Society (CiviCERT), can help with specific questions or mentorship.

Back to top

How is data protection relevant in civic space and for democracy?

Many initiatives that aim to strengthen civic space or improve democracy use digital technology. There is a widespread belief that the increasing volume of data and the tools to process them can be used for good. And indeed, integrating digital technology and the use of data in democracy, human rights, and governance programming can have significant benefits; for example, they can connect communities around the globe, reach underserved populations better, and help mitigate inequality.

“Within social change work, there is usually a stark power asymmetry. From humanitarian work, to campaigning, documenting human rights violations to movement building, advocacy organisations are often led by – and work with – vulnerable or marginalised communities. We often approach social change work through a critical lens, prioritising how to mitigate power asymmetries. We believe we need to do the same thing when it comes to the data we work with – question it, understand its limitations, and learn from it in responsible ways.”

What is Responsible Data?

When quality information is available to the right people when they need it, the data are protected against misuse, and the project is designed with the protection of its users in mind, it can accelerate impact.

  • USAID’s funding of improved vineyard inspection using drones and GIS data in Moldova allows farmers to quickly inspect, identify, and isolate vines infected by a phytoplasma disease of the vine.
  • Círculo is a digital tool for female journalists in Mexico to help them create strong networks of support, strengthen their safety protocols and meet needs related to the protection of themselves and their data. The tool was developed with the end-users through chat groups and in-person workshops to make sure everything built into the app was something they needed and could trust.

At the same time, data-driven development brings a new responsibility to prevent misuse of data when designing, implementing, or monitoring development projects. When the use of personal data is a means of identifying people who are eligible for humanitarian services, privacy and security concerns are very real.

  • Refugee camps in Jordan have required community members to allow scans of their irises to purchase food and supplies and to take out cash from ATMs. This practice has not integrated meaningful ways to ask for consent or to allow people to opt out. Additionally, collecting and using highly sensitive personal data like biometrics to enable daily purchasing is disproportionate, because less invasive digital technologies are available and used in many parts of the world.

Governments, international organizations, and private actors can all, even unintentionally, misuse personal data for purposes other than those intended, negatively affecting the well-being of the people to whom the data relate. Privacy International has highlighted some examples:

  • The case of Tullow Oil, the largest oil and gas exploration and production company in Africa, shows how a private actor commissioned extensive and detailed research by a micro-targeting research company into the behaviors of local communities, seeking ‘cognitive and emotional strategies to influence and modify Turkana attitudes and behavior’ to Tullow Oil’s advantage.
  • In Ghana, the Ministry of Health commissioned a large study on health practices and requirements in the country. The ruling political party then used the data to model future vote distribution within each constituency based on how respondents said they would vote, and to run a negative campaign aimed at discouraging opposition supporters from voting.

There are resources and experts available to help with this process. The Principles for Digital Development website offers recommendations, tips, and resources for protecting privacy and security throughout a project lifecycle: analysis and planning, design and development, deployment and implementation, and measurement and evaluation. The Responsible Data website offers the Illustrated Hand-Book of the Modern Development Specialist, with attractive, understandable guidance through all steps of a data-driven development project: designing it; managing data, with specific information about collecting, understanding, and sharing it; and closing a project.

NGO worker prepares for data collection in Buru Maluku, Indonesia. When collecting new data, it’s important to design the process carefully and think through how it affects the individuals involved. Photo credit: Indah Rufiati/MDPI – Courtesy of USAID Oceans.


Opportunities

Data protection measures advance democracy, human rights, and governance goals. Read below to learn how to think about data protection in your work more effectively and safely.

Privacy respected and people protected

Implementing data-protection standards in development projects protects people against potential harm from abuse of their data. Abuse happens when an individual, company, or government accesses personal data and uses them for purposes other than those for which the data were collected. Intelligence services and law-enforcement authorities often have legal and technical means to force access to datasets and abuse the data. Individuals hired by governments can access datasets by breaching the security of software or cloud services. Such abuse has often led to intimidation, silencing, and arrests of human rights defenders and civil society leaders who criticize their governments. Privacy International maps examples of governments and private actors abusing individuals’ data.

Strong protective measures against data abuse ensure respect for the fundamental right to privacy of the people whose data are collected and used. Protective measures allow positive development such as improving official statistics, better service delivery, targeted early warning mechanisms, and effective disaster response.

It is important to determine how data are protected throughout the entire life cycle of a project. Individuals should also be protected after the project ends, whether it ends abruptly or as intended, moves into a different phase, or begins receiving funding from different sources. Oxfam has developed a leaflet to help anyone handling, sharing, or accessing program data to properly consider responsible data issues throughout the data lifecycle, from making a plan to disposing of data.


Risks

The collection and use of data can also create risks in civil society programming. Read below to discern the possible dangers associated with the collection and use of data in DRG work, as well as how to mitigate unintended (and intended) consequences.

Unauthorized access to data

Data need to be stored somewhere: on a computer or an external drive, in the cloud, or on a local server. Wherever the data are stored, precautions must be taken to protect them from unauthorized access and to avoid revealing the identities of vulnerable persons. The level of protection needed depends on the sensitivity of the data, that is, the extent to which negative consequences could follow if the information fell into the wrong hands.

Data can be stored on a nearby, well-protected server connected to drives with strong encryption and very limited access, a method that keeps you in control of the data you own. Cloud services offered by well-known tech companies often provide only basic protection measures in their free versions. More advanced security features, such as storage of data in jurisdictions with strong data-protection legislation, are available to paying customers. Guidelines on securing private data stored and accessed in the cloud can help you understand the various aspects of cloud services and decide what is appropriate for your specific situation.

Every system needs to be secured against cyberattacks and manipulation. One common challenge is finding a way to protect identities in the dataset, for example, by removing all information that could identify individuals from the data, i.e. anonymizing it. Proper anonymization is of key importance and harder than often assumed.
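To make that difficulty concrete, here is a minimal Python sketch of pseudonymization, a common first step that is often mistaken for full anonymization. All names, field names, and values are hypothetical. Direct identifiers are replaced with salted hashes, but as the final comment notes, quasi-identifiers can still single people out, which is exactly why proper anonymization is harder than often assumed.

```python
import hashlib
import secrets

# One random salt per project, stored separately from the data.
# Without a salt, common names or phone numbers could be re-identified
# simply by hashing guesses and comparing.
SALT = secrets.token_hex(16)

def pseudonymize(record, identifying_fields=("name", "phone")):
    """Replace direct identifiers with salted, truncated hashes."""
    out = dict(record)
    for field in identifying_fields:
        if field in out:
            digest = hashlib.sha256((SALT + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]  # stable pseudonym, not reversible without the salt
    return out

survey = [
    {"name": "Amina K.", "phone": "+256700000001", "district": "Gulu", "age": 34},
    {"name": "Joseph O.", "phone": "+256700000002", "district": "Gulu", "age": 35},
]
safe = [pseudonymize(r) for r in survey]
# Caveat: quasi-identifiers such as district + age can still single people
# out; pseudonymization alone is NOT anonymization.
```

Pseudonymization mainly protects against casual disclosure; defending against determined re-identification requires further techniques such as aggregation, k-anonymity, or differential privacy.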

One can imagine that a dataset of GPS locations of People Living with Albinism across Uganda requires strong protection. Persecution is based on the belief that certain body parts of people with albinism can transmit magical powers, or on the presumption that they are cursed and bring bad luck. A spatial-profiling project mapping the exact location of individuals belonging to a vulnerable group can improve outreach and delivery of support services to them. However, hacking of the database or other unlawful access to their personal data might put them at risk from people seeking to exploit or harm them.

One could also imagine that the people operating an alternative system to send out warning sirens for air strikes in Syria run the risk of being targeted by the authorities: while the group's data collection and sharing aims to prevent death and injury, it also diminishes the impact of air strikes by the Syrian authorities. The location data of the individuals running and contributing to the system need to be protected against access or exposure.

Another risk is that private actors who run or cooperate in data-driven projects could be tempted to sell data if they are offered large sums of money. Such buyers could be advertising companies or politicians who aim to target commercial or political campaigns at specific people.

The Tiko system designed by social enterprise Triggerise rewards young people for positive health-seeking behaviors, such as visiting pharmacies and seeking information online. Among other things, the system gathers and stores sensitive personal and health information about young female subscribers who use the platform to seek guidance on contraceptives and safe abortions, and it tracks their visits to local clinics. If these data are not protected, governments that have criminalized abortion could potentially access and use that data to carry out law-enforcement actions against pregnant women and medical providers.

Unsafe collection of data

When you are planning to collect new data, it is important to carefully design the collection process and think through how it affects the individuals involved. It should be clear from the start what kind of data will be collected, for what purpose, and that the people involved agree with that purpose. For example, an effort to map people with disabilities in a specific city can improve services. However, the database should not expose these people to risks, such as attacks or stigmatization targeted at specific homes. Establishing such a database should also respond to the needs of the people involved and not be driven by the mere wish to use data. For further guidance, see the chapter Getting Data in the Hand-book of the Modern Development Specialist and the OHCHR Guidance to adopt a Human Rights Based Approach to Data, focused on collection and disaggregation.

If data are collected in person by people recruited for this process, proper training is required. They need to be able to create a safe space to obtain informed consent from people whose data are being collected and know how to avoid bias during the data-collection process.

Unknowns in existing datasets

Data-driven initiatives can either gather new data, for example, through a survey of students and teachers in a school or use existing datasets from secondary sources, for example by using a government census or scraping social media sources. Data protection must also be considered when you plan to use existing datasets, such as images of the Earth for spatial mapping. You need to analyze what kind of data you want to use and whether it is necessary to use a specific dataset to reach your objective. For third-party datasets, it is important to gain insight into how the data that you want to use were obtained, whether the principles of data protection were met during the collection phase, who licensed the data and who funded the process. If you are not able to get this information, you must carefully consider whether to use the data or not. See the Hand-book of the Modern Development Specialist on working with existing data.

Benefits of cloud storage

A trusted cloud-storage strategy offers greater security and ease of implementation compared to securing your own server. While determined adversaries can still hack into individual computers or local servers, it is significantly more challenging for them to breach the robust security defenses of reputable cloud-storage providers like Google or Microsoft. These companies deploy extensive security resources and have a strong business incentive to ensure maximum protection for their users. By relying on cloud storage, common risks such as physical theft, device damage, or malware can be mitigated, since most documents and data are securely stored in the cloud. In case of an incident, it is convenient to resynchronize and resume operations on a new or cleaned computer, with little to no valuable information accessible locally.

Backing up data

Regardless of whether data are stored on physical devices or in the cloud, having a backup is crucial. Physical device storage carries the risk of data loss due to incidents such as hardware damage, ransomware attacks, or theft. Cloud storage provides an advantage in this regard, as it eliminates the reliance on specific devices that can be compromised or lost. Built-in backup solutions like Time Machine for Macs and File History for Windows devices, as well as automatic cloud backups for iPhones and Androids, offer some level of protection. However, even with cloud storage, the risk of human error remains, making it advisable to consider additional cloud backup solutions like Backupify or SpinOne Backup. For organizations using local servers and devices, secure backups become even more critical. It is recommended to encrypt external hard drives using strong passwords, utilize encryption tools like VeraCrypt or BitLocker, and keep backup devices in a separate location from the primary devices. Storing a copy in a highly secure location, such as a safe deposit box, can provide an extra layer of protection in case of disasters that affect both computers and their backups.


Questions

If you are trying to understand the implications of lacking data protection measures in your work environment, or are considering using data as part of your DRG programming, ask yourself these questions:

  1. Are data protection laws adopted in the country or countries concerned? Are these laws aligned with international human rights law, including provisions protecting the right to privacy?
  2. How will the use of data in your project comply with data protection and privacy standards?
  3. What kind of data do you plan to use? Are personal or other sensitive data involved?
  4. What could happen to the persons related to that data if the government accesses these data?
  5. What could happen if the data are sold to a private actor for purposes other than intended?
  6. What precaution and mitigation measures are taken to protect the data and the individuals related to the data?
  7. How are the data protected against manipulation, unauthorized access, and misuse by third parties?
  8. Do you have sufficient expertise integrated during the entire course of the project to make sure that data are handled well?
  9. If you plan to collect data, what is the purpose of the collection of data? Is data collection necessary to reach this purpose?
  10. How are collectors of personal data trained? How is informed consent generated when data are collected?
  11. If you are creating or using databases, how is the anonymity of the individuals related to the data guaranteed?
  12. How is the data that you plan to use obtained and stored? Is the level of protection appropriate to the sensitivity of the data?
  13. Who has access to the data? What measures are taken to guarantee that data are accessed for the intended purpose?
  14. Which other entities – companies, partners – process, analyze, visualize, and otherwise use the data in your project? What measures are taken by them to protect the data? Have agreements been made with them to avoid monetization or misuse?
  15. If you build a platform, how are the registered users of your platform protected?
  16. Is the database, the system used to store data, or the platform open to audit by independent researchers?


Case Studies

People Living with HIV Stigma Index and Implementation Brief

The People Living with HIV Stigma Index is a standardized questionnaire and sampling strategy to gather critical data on intersecting stigmas and discrimination affecting people living with HIV. It monitors HIV-related stigma and discrimination in various countries and provides evidence for advocacy. The data in this project are the experiences of people living with HIV. The implementation brief provides insight into data protection measures. People living with HIV are at the center of the entire process, continuously linking the data collected to the people themselves, from research design through implementation to using the findings for advocacy. Data are gathered through a peer-to-peer interview process, with people living with HIV from diverse backgrounds serving as trained interviewers. A standard implementation methodology has been developed, including the establishment of a steering committee with key stakeholders and population groups.

RNW Media’s Love Matters Program Data Protection

RNW Media’s Love Matters Program offers online platforms to foster discussion and information-sharing on love, sex, and relationships among 18- to 30-year-olds in areas where information on sexual and reproductive health and rights (SRHR) is censored or taboo. RNW Media’s digital teams introduced creative approaches to data processing and analysis, Social Listening methodologies, and Natural Language Processing techniques to make the platforms more inclusive, create targeted content, and identify influencers and trending topics. Governments have imposed restrictions such as license fees or registrations for online influencers as a way of monitoring and blocking “undesirable” content, and RNW Media has invested in the security of its platforms and the digital literacy of its users to protect their sensitive personal information from being accessed. Read more in the publication ‘33 Showcases – Digitalisation and Development – Inspiration from Dutch development cooperation’, Dutch Ministry of Foreign Affairs, 2019, p. 12-14.

Amnesty International Report

Thousands of democracy and human rights activists and organizations rely on secure communication channels every day to maintain the confidentiality of conversations in challenging political environments. Without such security practices, sensitive messages can be intercepted and used by authorities to target activists and break up protests. One prominent and well-documented example of this occurred in the aftermath of the 2010 elections in Belarus. As detailed in this Amnesty International report, phone recordings and other unencrypted communications were intercepted by the government and used in court against prominent opposition politicians and activists, many of whom spent years in prison. In 2020, another swell of post-election protests in Belarus saw thousands of protestors adopt user-friendly, secure messaging apps that were not as readily available just 10 years prior to protect their sensitive communications.

Norway Parliament Data

The Storting, Norway’s parliament, has experienced another cyberattack that involved the exploitation of recently disclosed vulnerabilities in Microsoft Exchange. These vulnerabilities, known as ProxyLogon, were addressed by emergency security updates released by Microsoft. The initial attacks were attributed to a state-sponsored hacking group from China called HAFNIUM, which utilized the vulnerabilities to compromise servers, establish backdoor web shells, and gain unauthorized access to internal networks of various organizations. The repeated cyberattacks on the Storting and the involvement of various hacking groups underscore the importance of data protection, timely security updates, and proactive measures to mitigate cyber risks. Organizations must remain vigilant, stay informed about the latest vulnerabilities, and take appropriate actions to safeguard their systems and data.

Girl Effect

Girl Effect, a creative non-profit working where girls are marginalized and vulnerable, uses media and mobile technology to empower girls. The organization embraces digital tools and interventions and acknowledges that any organization that uses data also has a responsibility to protect the people it talks to or connects online. Its ‘Digital safeguarding tips and guidance’ provides in-depth guidance on implementing data protection measures while working with vulnerable people. Citing Girl Effect as inspiration, Oxfam has developed and implemented a Responsible Data Policy and shares many supporting resources online. The publication ‘Privacy and data security under GDPR for quantitative impact evaluation’ details the data protection measures Oxfam implements while conducting quantitative impact evaluations through digital and paper-based surveys and interviews.


Generative AI

What is Generative AI?

Generative artificial intelligence (GenAI) refers to a class of artificial-intelligence techniques and models that create new, original content based on the data on which the models were trained. The output can be text, images, or videos that reflect or respond to the input. Much as artificial intelligence applications can span many industries, so too can GenAI. Many of these applications are in the area of art and creativity, as GenAI can be used to create art, music, video games, and poetry based on the patterns observed in training data. Its command of language also makes it well suited to facilitating communication, for example, through chatbots or conversational agents that can simulate human-like conversations, language translation, and realistic speech synthesis or text-to-speech. These are just a few examples. This article elaborates on the ways in which GenAI offers both opportunities and risks in civic space and to democracy and what government institutions, international organizations, activists, and civil society organizations can do to capitalize on the opportunities and guard against the risks.

How does GenAI work?

At the core of GenAI are generative models: algorithms or model architectures designed to learn the underlying patterns and statistics of their training data. Having captured the distribution of the training data, a model can use that learned knowledge to produce new outputs that resemble the original data, generating samples that belong to the same distribution.
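As a toy illustration of this idea, the Python sketch below, using entirely hypothetical numbers, "trains" on a small dataset by estimating its mean and standard deviation, then samples new values from that learned distribution. Real generative models learn vastly richer distributions using neural networks, but the principle of fitting a distribution and then sampling from it is the same.

```python
import random
import statistics

# Toy "training data": daily temperatures in degrees C (hypothetical values).
training_data = [21.5, 23.0, 22.1, 24.3, 21.9, 23.7, 22.8, 23.4]

# "Training": capture the statistics of the data distribution.
mu = statistics.mean(training_data)
sigma = statistics.stdev(training_data)

# "Generation": sample new values that belong to the same distribution.
random.seed(0)  # fixed seed so the demo is reproducible
generated = [round(random.gauss(mu, sigma), 1) for _ in range(5)]
print(generated)  # new samples that resemble, but do not copy, the training data
```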

Steps of the GenAI Process

As the figure above illustrates, GenAI models are developed through a process by which a curated database is used to train neural networks with machine learning techniques. These networks learn to identify patterns in the data, which allows them to generate new content or make predictions based on the learned information. From there, users can input commands in the form of words, numbers, or images into these algorithmic models, and the model produces content that responds based on the input and the patterns learned from the training data. As they are trained on ever-larger datasets, the GenAI models gain a broader range of possible content they can generate across different media, from audio to images and text.

Until recently, GenAI simply mimicked the style and substance of the input. For example, someone could input a snippet of a poem or news article into a model, and the model would output a complete poem or news article that sounded like the original content. An example you may have seen in your own email is predictive text along the lines of Google's Smart Compose, which completes a sentence based on a combination of the initial words you use and the probabilistic expectation of what could follow. For example, a machine studying billions of words from datasets would generate a probabilistic expectation for a sentence that starts with “please come ___.” In 95% of cases, the machine might have seen “here” as the next word, in 3% of cases “with me,” and in 2% of cases “soon.” Thus, when completing sentences or generating outputs, the algorithm that learned the language would use the sentence structures and combinations of words that it had seen previously. Because the models are probabilistic, they might sometimes make errors that do not reflect the nuanced intentions of the input.
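The "please come ___" illustration can be sketched as a toy bigram model: count which words follow each word in a corpus, then predict the most frequent continuation. The tiny corpus here is hypothetical; production systems learn from billions of words with neural networks, but the underlying probabilistic idea is the same.

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus standing in for "billions of words".
corpus = (
    "please come here . please come here . please come with me . "
    "please come here . please come soon . you can come here ."
).split()

# Count bigram frequencies: which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("come")
print(word, round(prob, 2))  # "here" is the most frequent continuation here
```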

GenAI now has far more expansive capabilities. Beyond text, GenAI can also produce images from text: tools such as DALL-E, Stable Diffusion, and MidJourney allow a user to input a text description that the model then uses to produce a corresponding image. These images vary in realism: some look like scenes from science fiction, others like paintings, and others more like photographs. These tools are also constantly improving, expanding the boundaries of what text-to-image generation can achieve.

Conversational AI

Recent models incorporate machine learning not only of language patterns but also of factual information about politics, society, and economics. They are also able to take input commands from images and voice, further expanding their versatility and utility in various applications.

Consumer-facing models that simulate human conversation, known as “conversational AI,” have proliferated recently and operate more as chatbots, responding to queries and questions much as a search engine would. Some examples include asking the model to do any of the following:

  • Provide a photo of a political leader playing a ukulele in the style of Salvador Dali.
  • Talk about Kenya’s capital, form of government, or character, or about the history of decolonization in South Asia.
  • Write and perform a song about adolescence that mimics a Drake song.

In other words, these newer models may function like a blend between a Google search and an exchange with a knowledgeable individual about their area of expertise. Much like a socially attentive individual, these models can be taught during a conversation. If you were to ask a question about the best restaurants in Manila, and the chatbot responded with a list that includes some Continental European restaurants, you could then follow up and express a preference for Filipino restaurants, prompting the chatbot to tailor its output to your specific preferences. The model learns based on feedback, although models such as ChatGPT will be quick to point out that they are only trained on data up to a certain date, which means some restaurants will have gone out of business and some award-winning restaurants may have cropped up. The example highlights a fundamental tension between keeping models up to date and the ability to refine and filter their outputs: if models learn from information as it is produced, they will generate up-to-date answers but will not be able to filter outputs for bad information, hate speech, or conspiracy theories.

Definitions

GenAI involves several key concepts:

Generative Models: Generative models are a class of machine learning models designed to create or generate new data outputs that resemble a given set of training data. These models learn underlying patterns and structures from the training data and use that knowledge to generate new, similar data outputs.

ChatGPT: ChatGPT is a Generative Pre-trained Transformer (GPT) model developed by OpenAI. While researchers had developed and used language models for decades, ChatGPT was the first to reach a mass consumer audience. Trained to understand and produce human-like text in a dialogue setting, it was specifically designed for generating conversational responses and engaging in interactive text-based conversations. As such, it is well suited for creating chatbots, virtual assistants, and other conversational AI applications.

Neural Network: A neural network is a computational model loosely inspired by the brain’s interconnected neurons. It is a core building block of deep learning: each neuron performs a simple calculation, and the strengths of the connections (weights) between neurons determine how information flows through the network and shape its output.

Training Data: Training data are the data used to train generative models. These data are crucial, since the model learns patterns and structures from them to create new content. For example, in the context of text generation, training data would consist of a large collection of text documents, sentences, or paragraphs. The quality and diversity of the training data have a significant impact on the performance of the GenAI model, because they help the model generate more relevant content.

Hallucination: In the context of GenAI, the term “hallucination” refers to a phenomenon where the AI model produces outputs that are not grounded in reality or accurate representations of the input data. In other words, the AI generates content that seems plausible but is entirely fabricated and has no basis in the actual data on which it was trained. For instance, a language model might produce paragraphs of text that seem coherent and factual but, upon closer inspection, include false information, events that never happened, or connections between concepts that are logically flawed. The problem results in part from noise in the training data and in part from the probabilistic way these models generate text. Addressing and minimizing hallucinations in GenAI is an ongoing research challenge. Researchers and developers strive to improve the models’ understanding of context, coherence, and factual accuracy to reduce the likelihood of generating content that can be considered hallucinatory.

Prompt: A GenAI prompt is a specific input or instruction provided to a GenAI model to guide it in producing a desired output. In image generation, a prompt might specify the style, content, or other attributes you want the generated image to have. The quality and relevance of the generated output often depend on the clarity and specificity of the prompt: a well-crafted prompt can lead to more accurate and desirable generated content.

Evaluation Metrics: Evaluating the quality of outputs from GenAI models can be challenging, but several evaluation metrics have been developed to assess various aspects of generated content. Metrics like Inception Score, Fréchet Inception Distance (FID), and Perceptual Path Length (PPL) attempt to measure aspects of model performance such as the diversity of responses (so that they do not all sound like copies of each other), relevance (so that responses are on topic), and coherence (so that responses stay on topic).

Prompt Engineering: Prompt engineering is the process of designing and refining prompts or instructions given to GenAI systems, such as chatbots or language models like GPT-3.5, to elicit specific and desired responses. It involves crafting the input text or query in such a way that the model generates outputs that align with the user’s intent or the desired task. It is useful for optimizing the benefits of GenAI but requires a deep understanding of the model’s behavior and capabilities as well as the specific requirements of the application or task. Well-crafted prompts can enhance the user experience by ensuring that the models provide valuable and accurate responses.


How is GenAI relevant in civic space and for democracy?

The rapid development and diffusion of GenAI technologies–across medicine, environmental sustainability, politics, and journalism, among many other fields–is creating and will create enormous opportunities. GenAI is being used for drug discovery, molecule design, medical-imaging analysis, and personalized treatment recommendations. It is being used to model and simulate ecosystems, predict environmental changes, and devise conservation strategies. It offers more accessible answers about bureaucratic procedures so citizens better understand their government, which is a fundamental change to how citizens access information and how governments operate. It is supporting the generation of written content such as articles, reports, and advertisements.

Across all of these sectors, GenAI also introduces potential risks. Governments, working with the private sector and civil society organizations, are taking different approaches to capitalizing on the opportunities while guarding against the risks. These approaches reflect different philosophies about risk and the role of innovation in their respective economies, as well as different legal precedents and political landscapes across countries. Many of the pioneering efforts are taking place in the countries where AI is used most, such as the United States and countries in the European Union, or in tech-heavy countries such as China. Conversations about regulation in other countries have lagged. In Africa, for example, experts at the Africa Tech Week conference in spring 2023 expressed concern about the lag in Africa’s access to AI and the need to catch up to reap the benefits of AI in the economy, medicine, and society, though they also gestured toward privacy issues and the importance of diversity in AI research teams to guard against bias. These conversations suggest that both access and regulation are developing at different rates across contexts, and the regions developing and testing regulations now may serve as role models, or at least provide lessons learned, for other countries as they regulate.

The European Union has moved quickly to regulate AI, using a tiered, risk-based approach: uses that pose unacceptable risk are prohibited outright, while “high risk” systems must meet requirements including risk-assessment and -mitigation plans, clear information for users, explainability, and activity logging. Most GenAI systems would not meet those standards, according to a 2021 Stanford University study. However, executives from 150 European companies have collectively pushed back against aggressive regulation, arguing that overly stringent AI rules will incentivize companies to establish headquarters outside of Europe and stifle innovation and economic development in the region. Their open letter acknowledges that some regulation may be warranted but contends that GenAI will be “decisive” and “powerful” and that “Europe cannot afford to stay on the sidelines.”

China has been one of the most aggressive countries when it comes to AI regulation. The Cyberspace Administration of China requires that AI be transparent and unbiased and that it not be used to generate misinformation or social unrest. Existing rules tightly regulate deepfakes, synthetic media in which a person’s likeness, including face and voice, is replaced with someone else’s, typically using AI. Any service provider that uses content produced by GenAI must obtain consent from deepfake subjects, label outputs, and counter any resulting misinformation. However, enacting such regulations does not mean that state actors will not themselves use AI for malicious purposes or influence operations, as we discuss below.

The United States has held a number of hearings to better understand the technology and its impact on democracy, but by September 2023 had not put in place any significant legislation to regulate GenAI. The Federal Trade Commission, responsible for promoting consumer protection, issued a 20-page letter to OpenAI, the creator of ChatGPT, requesting responses to its concerns about consumer privacy and security. In addition, the US government has worked with the major GenAI firms to establish voluntary transparency and safety safeguards as the risks and benefits of the technology evolve.

Going beyond regional or country-level regulatory initiatives, UN Secretary-General António Guterres has advocated for transparency, accountability, and oversight of AI. Mr. Guterres observed: “The international community has a long history of responding to new technologies with the potential to disrupt our societies and economies. We have come together at the United Nations to set new international rules, sign new treaties and establish new global agencies. While many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach.” The statement points to the fact that digital space knows no boundaries: software technologies innovated in one country inevitably cross over to others, so meaningful norms or constraints on GenAI will likely require a coordinated, international approach. To that end, some researchers have proposed an international artificial intelligence organization that would help certify compliance with international standards on AI safety, acknowledging the inherently international nature of AI development and deployment.

Back to top

Opportunities

Enhancing Representation

One of the main challenges in a democracy, and for civil society, is ensuring that constituent voices are heard and represented, which in part requires that citizens themselves participate in the democratic process. GenAI may give policymakers and citizens a way to communicate more efficiently and thereby enhance trust in institutions. GenAI can also provide data that give researchers and policymakers a clearer understanding of social, economic, and environmental issues and of constituents’ concerns about them. For example, GenAI could synthesize large volumes of incoming commentary from open comment lines or emails, surfacing the bottom-up concerns that citizens have about their democracy. These data-analysis tools must safeguard data privacy, but they can also provide data visualizations that help institutional leaders understand what people care about.
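As an illustrative sketch of the synthesis idea, the snippet below surfaces recurring themes in a batch of hypothetical citizen comments by counting salient terms. This is not a production approach, which would use a generative model for genuine summarization; the comments, stopword list, and thresholds are all invented for illustration.

```python
from collections import Counter
import re

# Hypothetical sample of incoming citizen comments (illustrative only).
COMMENTS = [
    "The potholes on Main Street need repair before winter.",
    "Please fund more after-school programs for our kids.",
    "Street repair is long overdue; potholes damaged my car.",
    "After-school programs kept my daughter engaged, expand them.",
    "When will the potholes near the school be fixed?",
]

# Common filler words to ignore when counting themes (illustrative list).
STOPWORDS = {"the", "on", "need", "before", "please", "more", "for", "our",
             "is", "long", "my", "them", "when", "will", "be", "near"}

def top_concerns(comments, k=2):
    """Count non-stopword terms across comments to surface recurring themes."""
    words = []
    for c in comments:
        words += [w for w in re.findall(r"[a-z\-]+", c.lower())
                  if w not in STOPWORDS and len(w) > 3]
    return [term for term, _ in Counter(words).most_common(k)]

print(top_concerns(COMMENTS))  # → ['potholes', 'street']
```

A real system would cluster semantically similar comments and have a generative model write a narrative summary of each cluster, but the core idea is the same: reduce a large inbound stream to a short, legible list of concerns.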

Easy Read Access

Many regulations and pieces of legislation are dense and difficult to comprehend for anyone outside the decisionmaking establishment. These accessibility challenges are magnified for individuals with disabilities such as cognitive impairments. GenAI can summarize long pieces of legislation and translate dense governmental publications into an easy-read format, with images and simple language. Civil society organizations can also use GenAI to develop social media campaigns and other content that is more accessible to people with disabilities.

Civic Engagement

GenAI can enhance civic engagement by generating personalized content tailored to individual interests and preferences through a combination of data analysis and machine learning. This could involve generating informative materials, news summaries, or visualizations that appeal to citizens and encourage them to participate in civic discussions and activities. The marketing industry has long capitalized on the insight that content tailored to individual consumers is more likely to elicit consumption or engagement, and the same holds in civil society: the more content is personalized and targeted to a specific individual or category of individuals, the more likely that individual is to respond. Classifying citizen preferences, however, inherently relies on user data, and not all societies will endorse this use of data. The European Union, for example, has shown a wariness about privacy, suggesting that one size will not fit all for this particular use of GenAI in civic engagement.

That being said, this tool could help dislodge voter apathy that can lead to disaffection and disengagement from politics. Instead of boilerplate communication urging young people to vote, for example, GenAI could produce clever content known to resonate with young women or marginalized groups, helping to counter some of the additional barriers to engagement that marginalized groups face. In an educational setting, personalized content could be used to cater to the needs of students in different regions and with different learning abilities, while also providing virtual tutors or language-learning tools.

Public Deliberation

Another way GenAI could enable public participation and deliberation is through GenAI-powered chatbots and conversational agents. These tools can facilitate public deliberation by engaging citizens in dialogue, addressing their concerns, and helping them navigate complex civic issues; they can provide information, answer questions, and stimulate discussion. Some municipalities have already launched AI-powered virtual assistants and chatbots that automate civic services, streamlining processes such as citizen inquiries, service requests, and administrative tasks, which can increase the efficiency and responsiveness of government operations. A lack of municipal resources, such as staff, can mean that citizens lack the information they need to be meaningful participants in their society. With relatively limited resources, a chatbot can be trained on local data to provide the specific information needed to narrow that gap.
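A minimal sketch of the retrieval step such a local chatbot might use, with a hypothetical municipal FAQ as its "local data"; a deployed system would layer a generative model on top of retrieval to phrase the matched answer conversationally:

```python
# Toy retrieval step for a municipal FAQ assistant. The FAQ entries and
# answers are hypothetical. A production chatbot would pair retrieval like
# this with a generative model that rephrases the answer conversationally.

LOCAL_FAQ = {
    "where do i register to vote": "Register at the city clerk's office, Room 12.",
    "how do i pay property taxes": "Pay online at the treasury portal or by mail.",
    "when is trash collected": "Trash is collected every Tuesday morning.",
}

def answer(question):
    """Return the FAQ answer whose question shares the most words with the query."""
    q_words = set(question.lower().replace("?", "").split())
    best = max(LOCAL_FAQ, key=lambda k: len(q_words & set(k.split())))
    return LOCAL_FAQ[best]

print(answer("When does trash get collected?"))  # → Trash is collected every Tuesday morning.
```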

Chatbots can be trained in multiple languages, making civic information and resources more accessible to diverse populations. They can assist people with disabilities by generating alternative formats for information, such as audio descriptions or text-to-speech conversions. GenAI can be trained on local dialects and languages, promoting indigenous cultures and making digital content more accessible to diverse populations.

It is important to note that the deployment of GenAI must be done with sensitivity to local contexts, cultural considerations, and privacy concerns. Adopting a human-centered design approach to collaborations among AI researchers, developers, civil society groups, and local communities can help to ensure that these technologies are adapted appropriately and equitably to address specific needs and challenges.

Predictive Analytics

GenAI can also be used for predictive analytics to forecast potential outcomes of policy decisions. For example, AI-powered generative models can analyze local soil and weather data to optimize crop yield and recommend suitable agricultural practices for specific regions. It can be used to generate realistic simulations to predict potential impacts and develop disaster response strategies for relief operations. It can analyze local environmental conditions and energy demand to optimize the deployment of renewable energy sources like solar and wind power, promoting sustainable power solutions.
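The crop-yield idea can be illustrated with a deliberately minimal model: fit a line to invented rainfall and yield data, then forecast yield for a new season. Real agricultural systems would use far richer data (soil, temperature, crop variety) and far more capable models; the figures here are assumptions for illustration only.

```python
# Minimal illustration of the predictive idea: fit a linear model of crop
# yield against seasonal rainfall, then forecast a new season. All data
# values are invented for illustration.

rainfall = [400, 500, 600, 700, 800]   # mm per season (hypothetical)
yields   = [1.8, 2.2, 2.6, 3.0, 3.4]   # tonnes per hectare (hypothetical)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = fit_line(rainfall, yields)
predicted = a + b * 650                # forecast for a 650 mm season
print(round(predicted, 2))             # → 2.8
```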

By analyzing historical data and generating simulations, policymakers can make more informed and evidence-based choices for the betterment of society. These same tools can assist not only policymakers but also civil society organizations in generating data visualizations or summarizing information about citizen preferences. This can aid in producing more informative and timely content about citizen preferences and the state of key issues, like the number of people who are homeless.

Environmental Sustainability

GenAI can be used in ways that lead to favorable environmental impacts. For example, it can be used in fields such as architecture and product design to optimize designs for efficiency. It can be used to optimize processes in the energy industry that can enhance energy efficiency. It also has potential for use in logistics where GenAI can optimize routes and schedules, thereby reducing fuel consumption and emissions.

Back to top

Risks

To harness the potential of GenAI for democracy and the civic space, a balanced approach is necessary: one that addresses ethical concerns, fosters transparency, promotes inclusive technology development, and engages multiple stakeholders. Collaboration among researchers, policymakers, civil society, and technology developers can help ensure that GenAI contributes positively to democratic processes and civic engagement. The ability to generate large volumes of credible content can create opportunities for policymakers and citizens to connect with each other, but those same capabilities of advanced GenAI models create possible risks as well.

Online Misinformation

Although GenAI has improved, the models still hallucinate: they produce convincing-sounding outputs, such as facts or stories that sound plausible but are not correct. Many of these hallucinations are benign, as with a scientific query about the age of the universe, but in other cases the consequences are politically or societally destabilizing.

Because GenAI is public facing, individuals can use these technologies without understanding their limitations. They could then inadvertently spread misinformation from an inaccurate answer to a question about politics or history, for example an inaccurate statement about a political leader that inflames an already acrimonious political environment. A flood of AI-generated misinformation can reduce trust in the information ecosystem as a whole, leading people to be skeptical of all facts and to conform to the beliefs of their social circles. The spread of misinformation may mean that members of society believe things that are not true about political candidates, election procedures, or wars.

GenAI can generate disinformation not just as text but also as deepfakes. While deepfakes have benign applications, such as entertainment or special effects, they can also be misused to create highly realistic videos of false statements or fabricated events, making it difficult for viewers to discern fake from real content, spreading misinformation, and eroding trust in the media. Relatedly, they can be used for political manipulation: videos of politicians or public figures can be altered to make them appear to say or do things that defame them, harm their reputations, or sway public opinion.

GenAI makes it more efficient to generate and amplify disinformation, intentionally created for the purposes of misleading a reader, because it can produce, in large quantities, seemingly original and seemingly credible but nonetheless inaccurate information. None of the stories or comments would necessarily repeat, which could then lead to an even more credible-seeming narrative. Foreign disinformation campaigns have often been identified on the basis of spelling or grammatical errors, but the ability to use these new GenAI technologies means the efficient creation of native-sounding content that can fool the usual filters that a platform might use to identify large-scale disinformation campaigns. GenAI may also proliferate social bots that are indistinguishable from humans and can micro-target individuals with disinformation in a tailored way.

Astroturfing Campaigns

Since GenAI technologies are public facing and easy to use, they can be used to manipulate not only the mass public, but also different levels of government elites. Political leaders are expected to engage with their constituents’ concerns, as reflected in communications such as emails that reveal public opinion and sentiment. But what if a malicious actor used ChatGPT or another GenAI model to create large volumes of advocacy content and distributed it to political leaders as if it were from real citizens? This would be a form of astroturfing, a deceptive practice that masks the source of content with an aim of creating a perception of grassroots support. Research suggests that elected officials in the United States have been susceptible to these attacks. Leaders could well allow this volume of content to influence their political agenda, passing laws or establishing bureaucracies in response to the apparent groundswell of support that in fact was manufactured by the ability to generate large volumes of credible-sounding content.

Bias

GenAI also raises discrimination and bias concerns. If the training data used to create a generative model contains biased or discriminatory information, the model will produce biased or offensive outputs. This can perpetuate harmful stereotypes and contribute to privacy violations for certain groups. A GenAI model trained on a dataset containing biased language patterns might produce text that reinforces gender stereotypes, for instance by associating certain professions or roles with a particular gender even where there is no inherent connection. A model trained on a dataset with skewed racial or ethnic representation can produce images that unintentionally depict certain groups in a negative or stereotypical manner, and models trained on biased or discriminatory datasets can produce content that is culturally insensitive or uses derogatory terms. Text-to-image GenAI, for example, mangles the features of a “Black woman” at high rates, which harms the groups misrepresented; one cause is the overrepresentation of non-Black groups in training datasets. One solution is more balanced, diverse datasets rather than predominantly Western, English-language data, which carry Western bias and omit other perspectives and languages. Another is to train the model so that users cannot “jailbreak” it into producing racist or otherwise inappropriate content.

However, the issue of bias extends beyond training data that is openly racist or sexist. AI models draw conclusions from patterns in data. An AI model might look at hiring data, see that the demographic group most successful at getting hired at a tech company is white men, and conclude that white men are the most qualified to work there. In reality, white men may be more successful because they do not face the structural barriers that affect other groups, such as being unable to afford a tech degree, facing sexism in classes, or encountering racism in the hiring department.
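The hiring example above can be sketched in a few lines: a toy scoring model that uses each group's historical hire rate as a proxy for "qualification" simply reproduces past disparities as future predictions. The data and group names are invented for illustration.

```python
# Toy illustration of how a naive model inherits structural bias: it scores
# applicants by the historical hire rate of their demographic group, so past
# disparities become future "predictions". All data are invented.

past_hiring = [  # (group, was_hired)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def group_hire_rate(data, group):
    """Fraction of past applicants from this group who were hired."""
    hired = [h for g, h in data if g == group]
    return sum(hired) / len(hired)

def naive_score(applicant_group):
    """A biased 'qualification' score: just the group's past hire rate."""
    return group_hire_rate(past_hiring, applicant_group)

# Otherwise-identical applicants receive different scores based only on
# group membership, because the model has learned the historical disparity.
print(naive_score("group_a"), naive_score("group_b"))  # → 0.75 0.25
```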

Privacy

GenAI raises several privacy concerns. One is that the datasets could contain sensitive or personal information. Unless that content is properly anonymized or protected, personal information could be exposed or misused. Because GenAI outputs are intended to be realistic-looking, generated content that resembles real individuals could be used to re-identify individuals whose data was intended to be anonymized, also undermining privacy protections. Further, during the training process, GenAI models may inadvertently learn and memorize parts of the training data, including sensitive or private information. This could lead to data leakage when generating new content. Policymakers and the GenAI platforms themselves have not yet resolved the concern about how to protect privacy in the datasets, outputs, or even the prompts themselves, which can include sensitive data or reflect a user’s intentions in ways that could be harmful if not secure.

Copyright and Intellectual Property

One of the fundamental concerns around GenAI is who owns the copyright for work that GenAI creates. Copyright law attributes authorship and ownership to human creators. In the case of AI-generated content, however, determining authorship, the cornerstone of copyright protection, becomes challenging: it is unclear whether the creator should be the programmer, the user, the AI system itself, or a combination of these parties. AI systems learn from existing copyrighted content to generate new work that can resemble existing copyrighted material. This raises questions about whether AI-generated content should be considered derivative work that infringes the original copyright holder’s rights, or whether the use of GenAI constitutes fair use, which allows limited use of copyrighted material without the copyright holder’s permission. Because the technology is still new, the legal frameworks for judging fair use versus copyright infringement are still evolving and may look different depending on the jurisdiction and its legal culture. As that body of law develops, it should balance innovation with treating creators, users, and AI developers fairly.

Environmental Impacts

Training GenAI models, and storing and transmitting data, consumes significant computational resources and energy, contributing to carbon emissions when that energy does not come from renewable sources. These impacts can be mitigated in part by using renewable energy and by optimizing algorithms to reduce computational demands.
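The scale of these impacts can be approximated with simple arithmetic: energy is accelerator count times average power draw times training hours, and emissions are energy times the grid's carbon intensity. All figures below are illustrative assumptions, not measurements of any real model.

```python
# Back-of-the-envelope estimate of training energy and emissions. Every
# input figure is an illustrative assumption, not a measurement.

gpus = 1000                 # accelerators used for training (assumed)
power_kw_per_gpu = 0.4      # average draw per accelerator, kW (assumed)
hours = 720                 # training duration: 30 days (assumed)
grid_kg_co2_per_kwh = 0.4   # grid carbon intensity, kg CO2 per kWh (assumed)

energy_kwh = gpus * power_kw_per_gpu * hours
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(round(energy_kwh), round(emissions_tonnes, 1))  # → 288000 115.2
```

The same arithmetic shows where mitigation helps: a lower-carbon grid shrinks the last factor, while algorithmic efficiency shrinks the hours or accelerator count.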

Unequal Access

Although access to GenAI tools is becoming more widespread, the emergence of the technology risks widening the digital divide between those with access to technology and those without. There are several reasons why unequal access, and its consequences, may be particularly germane in the case of GenAI:

  • The computing power required is enormous, which can strain the infrastructure of countries that have inadequate power supply, internet access, data storage, or cloud computing.
  • Low- and middle-income countries (LMICs) may lack the high-tech talent pool necessary for AI innovation and implementation. One report suggests that the entire continent of Africa has 700,000 software developers, compared with 630,000 in California alone. The problem is exacerbated by the fact that, once qualified, developers from LMICs often leave for countries where they can earn more.
  • Mainstream, consumer-facing models like ChatGPT were trained on a handful of languages, including English, Spanish, German, and Chinese, giving speakers of those languages access advantages unavailable to, say, Swahili speakers, let alone speakers of local dialects.
  • Localizing GenAI requires large amounts of data from the particular context, and low-resourced environments often rely on models developed by larger tech companies in the United States or China.

The ultimate result may be the disempowerment of marginalized groups who have fewer opportunities and means to share their stories and perspectives through AI-generated content. Because these technologies may enhance an individual’s economic prospects, unequal access to GenAI can in turn increase economic inequality as those with access are able to engage in creative expression, content generation, and business innovation more efficiently.

Back to top

Questions

If you are conducting a project and considering whether to use GenAI for it, ask yourself these questions:

  1. Are there cases where individual interactions between people might be more effective, more empathetic, and even more efficient than using AI for communication?
  2. What ethical concerns—whether from biases or privacy—might the use of GenAI introduce? Can they be mitigated?
  3. Can local sources of data and talent be employed to create localized GenAI?
  4. Are there legal, regulatory, or security measures that will guard against the misuses of GenAI and protect the populations that might be vulnerable to these misuses?
  5. Can sensitive or proprietary information be protected in the process of developing datasets that serve as training data for GenAI models?
  6. In what ways can GenAI technology bridge the digital divide and increase digital access in a tech-dependent society (or as societies become more tech-dependent)? How can we mitigate the tendency of new GenAI technologies to widen the digital divide?
  7. Are there forms of digital literacy for members of society, civil society, or a political class that can mitigate against the risks of deepfakes or large-scale generated misinformation text?
  8. How can you mitigate against the negative environmental impacts associated with the use of GenAI?
  9. Can GenAI be used to tailor approaches to education, access to government and civil society, and opportunities for innovation and economic advancement?
  10. Is the data your model is trained on accurate and representative of all identities, including marginalized groups? What inherent biases might the dataset carry?

Back to top

Case Studies

GenAI largely emerged in a widespread, consumer-facing way in the first half of 2023, which limits the number of real-world case studies. This section on case studies therefore includes cases where forms of GenAI have proved problematic in terms of deception or misinformation; ways that GenAI may conceivably affect all sectors, including democracy, to increase efficiencies and access; and experiences or discussions of specific country approaches to privacy-innovation tradeoffs.

Experiences with Disinformation and Deception

In Gabon, a possible deepfake played a significant role in the country’s politics. The president had reportedly suffered a stroke but had not been seen in public. On New Year’s Eve 2018, the government released a video intended to assuage concerns about the president’s health, but critics pointed to inauthentic-looking blinking patterns and facial expressions and suggested the video was a deepfake. Rumors of inauthenticity proliferated, leading many to conclude that the president was not in good health, and contributed to an attempted coup launched in the belief that a weakened president would be less able to withstand it. The example demonstrates the serious ramifications of a loss of trust in the information environment.

In March 2023, an AI-generated image of the Pope in a Balenciaga puffer coat went viral, fooling viewers because of its likeness to the real Pope. Balenciaga had faced backlash several months earlier over an ad campaign featuring children in harnesses and bondage gear, so the Pope seemingly wearing Balenciaga implied that he and the Catholic church endorsed those practices. Internet consensus eventually concluded that the image was fake after identifying telltale signs such as a blurry coffee cup and resolution problems with the Pope’s eyelid. Nonetheless, the incident illustrated how easily such images can be generated and can fool viewers, and how reputations can be tarnished through deepfakes.

In September 2023, the Microsoft Threat Analysis Center released a report pointing to numerous instances of online influence operations. Ahead of the 2022 US midterm elections, Microsoft identified Chinese Communist Party (CCP)-affiliated social media accounts impersonating American voters and responding to comments in order to shape opinions through exchange and persuasion. In 2023, Microsoft observed the use of AI-created visuals portraying American symbols such as the Statue of Liberty in a negative light. These images had hallmarks of AI, such as the wrong number of fingers on a hand, but were nonetheless provocative and convincing. In early 2023, Meta similarly found the CCP engaged in an influence operation, posting comments critical of American foreign policy, which Meta identified through characteristic spelling and grammatical mistakes combined with the time of day of posting (working hours in China rather than the US).

Current and Future Applications

As GenAI tools improve, they will become even more effective in online influence campaigns; on the other hand, applications with positive outcomes will also become more effective. GenAI, for example, will increasingly fill gaps in government resources. An estimated four billion people lack access to basic health services, a major constraint being the low number of health care providers. While GenAI is not a substitute for direct access to a health care provider, it can bridge some access gaps in certain settings. One healthcare chatbot, Ada Health, is powered by OpenAI and can correspond with individuals about their symptoms. ChatGPT has demonstrated an ability to pass medical qualification exams and should not be used as a stand-in for a doctor, but in resource-constrained environments it could at least provide an initial screening, saving costs, time, and resources. Analogous tools can be used in mental health settings. The World Economic Forum reported in 2021 that an estimated 100 million people in Africa have clinical depression, but there are only 1.4 health care providers per 100,000 people, compared with the global average of 9 per 100,000. People in need of care who lack better options are increasingly relying on mental health chatbots: the level of care they can provide is limited, but it may be better than nothing until a more comprehensive approach can be implemented. These GenAI-based resources are not without challenges, including potential privacy problems and suboptimal responses, and societies and individuals will have to determine whether such tools are better than the alternatives available in resource-constrained environments.

Other future scenarios involve using GenAI to increase government efficiency on a range of tasks. In one, a government bureaucrat trained in economics is assigned a policy brief related to the environment. The individual begins the brief, then puts the question to a GenAI tool, which helps draft an outline, surfaces points that had been missed, identifies relevant international legal guideposts, and translates the English-language brief into French. In another, an individual citizen uses GenAI to figure out where to vote or how to pay taxes, to clarify government processes, to make sense of candidates’ policy positions, or to explain policy concepts. These scenarios are already possible and accessible at all levels of society and will only become more prevalent as people grow more familiar with the technology. However, users need to understand the technology’s limitations and how to use it appropriately, lest they spread misinformation or fail to find accurate information.

In an electoral context, GenAI can help evaluate aspects of democracy such as electoral integrity. Manual tabulation of votes, for example, is slow and onerous. New AI tools have already played a role in assessing electoral irregularities: in Kenya, neural networks have been used to “read” paper forms submitted at the local level, enumerate irregularities, and correlate them with electoral outcomes to assess whether the irregularities were the result of fraud or human error. These technologies can alleviate some of the workload placed on electoral institutions, and future advances in GenAI will provide data visualization that further eases the cognitive load of adjudicating electoral integrity.
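One simple irregularity check in this spirit can be sketched with invented data: flag any result form whose candidate votes do not sum to its reported total. A real pipeline would first extract these numbers from scanned paper forms using OCR or neural networks; the station IDs and vote counts below are hypothetical.

```python
# Toy consistency check in the spirit of automated form auditing: flag
# polling-station result forms whose candidate votes do not add up to the
# reported total. All data are invented for illustration.

forms = [
    {"station": "A-01", "candidate_votes": [120, 95, 30], "reported_total": 245},
    {"station": "A-02", "candidate_votes": [210, 180, 15], "reported_total": 400},  # inconsistent
    {"station": "A-03", "candidate_votes": [90, 60, 10], "reported_total": 160},
]

def flag_irregular(forms):
    """Return station IDs where candidate votes do not sum to the total."""
    return [f["station"] for f in forms
            if sum(f["candidate_votes"]) != f["reported_total"]]

print(flag_irregular(forms))  # → ['A-02']
```

Flagged forms are not proof of fraud; as the Kenya example suggests, irregularities must still be analyzed to distinguish fraud from human error.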

Approaches to the Privacy-Innovation Dilemma

Countries such as Brazil have raised concerns about the potential misuses of GenAI. After the release of ChatGPT in November 2022, the Brazilian government received a detailed report, written by academic and legal experts as well as company leaders and members of a national data-protection watchdog, urging that these technologies be regulated. The report raised three main concerns:

  • That citizen rights be protected by ensuring that there be “non-discrimination and correction of direct, indirect, illegal, or abusive discriminatory biases” as well as clarity and transparency as to when citizens were interacting with AI.
  • That the government categorize risks and inform citizens of the potential risks. Based on this analysis, “high risk” sectors included essential services, biometric verification and job recruitment, and “excessive risk” included the exploitation of vulnerable peoples and social scoring (a system that tracks individual behavior for trustworthiness and blacklists those with too many demerits or equivalents), both practices that should be scrutinized closely.
  • That the government issue governance measures and administrative sanctions, first by determining how businesses that run afoul of the regulations would be penalized, and second by recommending a penalty of 2% of revenue for mild non-compliance and the equivalent of 9 million USD for more serious harms.

At the time of this writing in 2023, the government was debating next steps, but the report and deliberations are illustrative of the concerns and recommendations that have been issued with respect to GenAI in the Global South.  

In India, the government has approached AI in general and GenAI in particular with a less skeptical eye, which sheds light on the differences in how governments may approach these technologies and the basis for those differences. In 2018, the Indian government proposed a National Strategy for AI, which prioritized the development of AI in agriculture, education, healthcare, smart cities, and smart mobility. In 2020, the National Artificial Intelligence Strategy called for all systems to be transparent, accountable, and unbiased. In March 2021, the Indian government announced that it would use “light touch” regulation and that the bigger risk was not from AI but from not seizing on the opportunities presented by AI. India has an advanced technological research and development sector that is poised to benefit from AI. Advancing this sector is, according to the Ministry of Electronics and Information Technology, “significant and strategic,” although it acknowledged that it needed some policies and infrastructure measures that would address bias, discrimination, and ethical concerns.

Satellite Systems

Irvine03 CubeSat Source: https://ipsf.net/news/nasa-selects-irvine03-cubesat-for-launch-mission/

What is a satellite?

A satellite is an object that orbits a planet or star; it can be a natural body like the Moon orbiting Earth, or an artificial object deployed by humans for diverse functions, including communication, Earth observation, navigation, and scientific exploration.

While Earth has one natural satellite, the Moon, several thousand artificial satellites trace orbits around the Earth. These human-made satellites range from 10-centimeter cubes weighing about a kilogram, called CubeSats (the smallest class of SmallSats), to the International Space Station. Each carries instruments to perform specific tasks like connecting distant points through telecommunications links and observing Earth’s surface.

NASA & STS-132 Crew: Flyaround view of the International Space Station Source: https://images.nasa.gov/details-s132e012208

How do satellites work?

Satellites use specialized instruments to perform applications such as communication, Earth observation, navigation, and scientific research, collecting and transmitting relevant data back to ground stations, while being remotely managed and controlled.

At the most basic level, satellite systems have three component segments: the space segment, the terrestrial segment, and the data link between the two. In satellite systems comprising multiple space objects, there is often also a data link between the satellites. Since satellites in Earth’s orbit can be several thousand kilometers away from the nearest human, all of the instruments, tools, and fuel a satellite might need must be loaded into the machine at the start. This makes it difficult to change a satellite’s primary mission, although different end users may use the same satellite-derived data for varying purposes.

The terrestrial segment is most often a ground station that receives radiofrequency signals from satellites, but some systems have multiple ground stations or even transmit data to end users directly. For instance, while a ground station can span acres of antennas and data-processing facilities, a television satellite dish or a satellite phone are two types of personal ground stations.

What is an orbit?

Diagram of the orbits around the Earth Source: https://earthobservatory.nasa.gov/ContentFeature/OrbitsCatalog/images/orbits_schematic.png

Orbits are the result of two objects in space interacting with just the right balance of gravity and momentum. If a satellite has too much momentum, it will overcome Earth’s gravity and escape out of orbit and move into deep space. If a satellite has too little momentum, it will be pulled down into Earth’s atmosphere. As long as a satellite’s momentum remains constant, the object will travel in a predictable, infinitely repeating path around the Earth.

Not all satellites have the same momentum, and therefore different satellites orbit the Earth on different paths. These orbits are broadly grouped by their altitude above the Earth’s surface. These categories are, from lowest to highest altitude, low Earth orbit (LEO), medium Earth orbit (MEO), and geostationary or geosynchronous equatorial orbit (GEO). While there is no globally recognized “edge” of space, low Earth orbit is generally considered the region below 2,000 km above the Earth’s surface.

At the lowest altitudes, satellites must use onboard propulsion systems to overcome the effects of Earth’s atmosphere, which drags satellites out of orbit. When a satellite cannot overcome this drag, it de-orbits and often burns up upon reentry into Earth’s atmosphere. Sometimes, satellites or their component parts survive the reentry and crash into the surface of the Earth or into the ocean. Recent technological advancements have enabled satellite operators to achieve orbit at these very low altitudes. Typically, satellites in these low orbits take less than two hours to make one full trip around the globe. The amount of time a satellite takes to make one rotation around the Earth is called the “period.”

In contrast, geostationary or geosynchronous orbits take about 24 hours (one sidereal day) to circle the globe. Because their period keeps pace with the rotation of the Earth, these satellites appear to stay fixed in one spot above the Earth unless an operator maneuvers them. GEO orbits are about 36,000 km above the surface of the Earth. The MEO region encompasses the remaining space between LEO and GEO.
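
The altitude–period relationship described above follows directly from Kepler’s third law. The short sketch below illustrates the idea; the function name and the sample altitudes (a 500 km LEO and the standard GEO altitude of 35,786 km) are our own illustrative choices, not values from this resource.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6_371_000  # mean radius of the Earth, m

def orbital_period_hours(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude, via Kepler's
    third law: T = 2 * pi * sqrt(a^3 / (G * M))."""
    a = R_EARTH + altitude_km * 1000  # orbital radius, m
    return 2 * math.pi * math.sqrt(a**3 / (G * M_EARTH)) / 3600

print(f"LEO, 500 km:    {orbital_period_hours(500):.1f} hours")     # about 1.6 hours
print(f"GEO, 35,786 km: {orbital_period_hours(35_786):.1f} hours")  # about 23.9 hours
```

The result matches the text: low orbits complete a lap in well under two hours, while the GEO period matches the Earth’s rotation.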

Certain altitudes are better suited for certain types of tasks than others. For instance, because satellites in LEO are so close to the Earth, no single satellite can provide wide coverage of the Earth’s surface. Satellites in MEO and GEO can “see” more of the Earth at any one point in time, by virtue of their distance from the Earth. The area of the Earth that a satellite can observe or service is called the “field of regard.” The size of this field is an important factor in deciding how many satellites an operator needs to provide a service and how high those satellites should be in orbit.
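
How much altitude enlarges the field of regard can be estimated with simple geometry: the region a satellite can see is the spherical cap bounded by its horizon. This is an illustrative sketch under that idealized assumption (it ignores minimum elevation angles that real services require, which shrink usable coverage).

```python
R_EARTH_KM = 6371.0  # mean radius of the Earth

def visible_fraction(altitude_km: float) -> float:
    """Fraction of the Earth's surface geometrically visible from a
    satellite at the given altitude. The visible spherical cap covers
    h / (2 * (R + h)) of the total surface, approaching 1/2 as the
    altitude grows."""
    h = altitude_km
    return h / (2 * (R_EARTH_KM + h))

print(f"LEO, 500 km:    {visible_fraction(500):.1%}")     # only a few percent
print(f"GEO, 35,786 km: {visible_fraction(35_786):.1%}")  # roughly 42%
```

A single LEO satellite sees only a few percent of the globe at once, which is why wide coverage from LEO requires many satellites, while three or four GEO satellites can together view nearly all of the Earth’s surface.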

Satellite imagery of Mount Merapi, Indonesia Source: https://www.planet.com/gallery/#!/post/mount-merapi-fumes

Megaconstellations and Modern Advancements

Early satellites were relatively small machines that performed rudimentary tasks or demonstrated a capability. During the early days of space exploration, designing and building a satellite was an expensive and long-term undertaking. Launching the satellite into space was another expensive step along the way to deploying a satellite. As engineers gained expertise in building and launching satellites, these machines grew in size and sophistication. Engineers designed hulking satellites weighing thousands of kilograms to carry several instruments, many of which remain in space today.

The paradigm of building one large object has shifted toward building many small objects to accomplish the same mission. These small satellites support the same mission in concert, forming networks called constellations. The concept of operating constellations of satellites is not particularly new – ambitious business plans from the 1980s aimed to leverage dozens of satellites to offer global telecommunications services. Constellations of satellites are often designed to provide a baseline of regional coverage, with the potential to enlarge the service range later. For instance, Japan’s Quasi-Zenith Satellite System uses a constellation of four satellites working in concert to provide navigation services in Asia-Pacific. The principle of using many satellites in concert has become more popular over time.

The plummeting cost of satellite manufacturing and launch has facilitated more exotic designs that include thousands of satellites, called megaconstellations. Operating hundreds or thousands of coordinated satellites in a megaconstellation offers distinct benefits. Megaconstellations typically consist of thousands of satellites in LEO, where each satellite has a small field of regard and can only service a sliver of the Earth’s surface at any given time. Adding another satellite, or several satellites, increases the service area by expanding the combined field of regard. Megaconstellations take this principle to the extreme, knitting thousands of individual satellites’ fields of regard together to create a blanket of coverage. Coordinating and precisely positioning satellites ensures the network can send signals to any point on the Earth at any time.

Operating in LEO offers other benefits. Megaconstellations orbiting at relatively low altitudes can send and receive signals from the ground more quickly than those further away from the Earth’s surface. Because the signal does not have to travel as far, LEO megaconstellations reduce the time a signal is “in transit” between ground stations and satellite terminals, called “latency.” This facilitates faster communications with less lag. Megaconstellations with low latency can help organizations become more efficient and productive as they transition to 5G technologies.
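
The latency advantage can be quantified with a back-of-the-envelope calculation: radio signals travel at the speed of light, so the minimum round trip is twice the altitude divided by c. The sketch below is an idealized lower bound (it assumes a signal traveling straight up and down; the 550 km LEO altitude is an assumed, illustrative value).

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float) -> float:
    """Idealized minimum round-trip time (milliseconds) for a radio
    signal to travel straight up to a satellite and back down. Real
    latency is higher: slant paths, routing, and processing add delay."""
    return 2 * altitude_km / C_KM_PER_S * 1000

print(f"LEO, 550 km:    {min_round_trip_ms(550):.1f} ms")    # under 4 ms
print(f"GEO, 35,786 km: {min_round_trip_ms(35_786):.0f} ms") # roughly 239 ms
```

Even this idealized floor shows why GEO links cannot match LEO responsiveness: the GEO round trip is tens of times longer before any network overhead is counted.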

The greater the distance between a satellite and the Earth, the more onboard power the satellite needs to send a signal from space to Earth. Minimizing the distance between satellites and ground stations also minimizes the amount of onboard power needed to produce the signal. This in turn helps reduce the satellite’s size, and often the cost of manufacturing. Thus, although megaconstellations require hundreds if not thousands of satellites to provide global coverage, these satellites are generally cheaper per unit. This helps satellite owners stockpile replacement satellites in case any of the assets fail to reach orbit or break once they are in space.
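
The power penalty of distance follows from the inverse-square spreading of radio signals, usually expressed as free-space path loss. A minimal sketch using the standard formula (the 12 GHz carrier is an assumed, illustrative Ku-band value, and the altitudes are the same illustrative ones used above):

```python
import math

def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in decibels, standard formula with the
    distance in km and the carrier frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Same hypothetical 12 GHz carrier, two link distances:
leo = free_space_path_loss_db(550, 12)
geo = free_space_path_loss_db(35_786, 12)
print(f"Extra loss at GEO vs. LEO: {geo - leo:.1f} dB")  # about 36 dB
```

A roughly 36 dB gap means a GEO link sheds about 4,000 times more signal power over its path than an equivalent LEO link, which is why closer satellites can get by with smaller transmitters and antennas.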

The general trends of cheaper satellites and falling launch costs have enabled more than just megaconstellations. Reducing the costs of fabricating and placing a satellite in orbit has opened the playing field to new actors, especially those who may have been excluded from participating in satellite-systems developments based on price alone. Space is no longer restricted to high-income countries; now low- and middle-income countries (LMICs) can own the entire satellite-development lifecycle, including mission design, satellite fabrication, testing and validation, and operations. Relatively lower costs also allow prospective satellite operators to undertake missions that may not have been financially attractive to large or foreign corporations that did not have shared societal motivations.

Satellite Lifecycles/Environmental Issues/Debris Risks

In addition to the thousands of operational satellites in orbit, there are millions of pieces of space trash. Orbital debris is essentially anything in orbit that does not work – this includes everything from non-functional satellites to fragments of exploding bolts that are used to separate spacecraft from rocket boosters. Clouds of debris are generated when two space objects collide, independent of whether the collision was accidental or intentional. Even very small pieces of debris are dangerous – debris as small as a centimeter can be lethal in collisions with operational satellites. Some regions of space are more threatened than others, due to the density of the debris or the potential for debris-creating events.

There is an emerging movement to both reduce the amount of debris created by space activities and to remove the existing derelict objects. This emphasis on space sustainability bodes well for the future. Nevertheless, the current state of the orbital environment presents elevated debris risks. The increase in the amount of debris – originating from the nations most established in space – has imposed risks on new entrants.

Solar panels from the Hubble Space Telescope showing debris impacts Source: https://www.esa.int/var/esa/storage/images/esa_multimedia/images/2009/05/esa_built-solar_cells_retrieved_from_the_hubble_space_telescope_in_2002/10102613-2-eng-GB/ESA_built-solar_cells_retrieved_from_the_Hubble_Space_Telescope_in_2002_article.jpg

How are satellites relevant in civic space and for democracy?

Satellites provide services and collect data that greatly benefit society. Satellite systems provide broadband and telecommunications services that offer citizens another avenue for digital connectivity. Digital connectivity is an invaluable tool that can expand citizens’ access to civic space, support democratic processes, and empower freedom of expression. While the fundamental principles and physics underpinning these applications remain constant, novel paradigms in the satellite-system design, such as megaconstellations, have reduced the costs to access these services. Other types of satellites have experienced more linear, but nonetheless impactful, technology advancements. For example, better optical sensors allow satellites to collect more precise and clear imagery of Earth. These satellite-derived data are invaluable for both crisis response and long-term planning, enabling well-organized emergency response work as well as empowering efforts to strengthen democracy. Other sensors allow scientists to analyze the impact of climate change and design more appropriate remediation processes.

Internet connectivity has famously enabled activism and fostered communities of civic-minded individuals around the world. Satellite-enabled connectivity builds on these trends, helping link citizens to social services and each other. Satellite internet networks overcome many of the logistical challenges that prevent terrestrial broadband networks from serving rural or difficult-to-reach communities. Public-private partnerships have improved services in areas that suffered from poor or nonexistent broadband connectivity.

Other Earth observation tools can be used to improve democratic processes. Detailed maps derived from satellite imagery can help prepare for, execute, and analyze election results. Satellite data provides a clear view of electoral maps, allowing civil society to identify issues and propose meaningful changes. For instance, satellite maps can identify underserved populations and validate new polling stations in the runup to an election. Precise maps can also reveal voting trends and, when overlaid with socioeconomic or demographic information from other sources, can inform renewed efforts on voter outreach and campaign strategy. Satellite connectivity has a proven history in facilitating the collection and transmission of votes in a secure, transparent, and timely manner.

Satellite services directly support development work over a range of efforts, including agricultural development, environmental monitoring, and mapping socioeconomic indicators. These types of data support both project planning and monitoring and evaluation. In the past, large satellites used massive optical or other types of sensors to collect data while passing over the Earth. The miniaturization of these sensors allows operators to launch several satellites, reducing the amount of time it takes to revisit a site of interest. Emerging satellite-system-design paradigms like large constellations of Earth-observation satellites can revisit areas of the Earth more frequently, collecting data that allow researchers to monitor changes with more nuance and fidelity.

The Mulanje Massif, captured by the ISERV system aboard the International Space Station
Source: https://www.nasa.gov/image-article/servirs-iserv-image-of-mulanje-massif-malawi/

Satellite imagery and Earth-observation data go beyond playing a role in monitoring the impact of development efforts and can be used to plan responses to crises. In a post-pandemic world, good data on epidemiology and other public health issues have never been more valuable. Satellites are instrumental in collecting that data. Satellite data is increasingly leveraged for public health applications, including understanding the underlying factors that affect who is most at risk of illness. Recent advances in satellite data collection have helped researchers build a deeper and more nuanced understanding of public health issues. This in turn aids tailored responses, and in some cases can support preventative efforts. For instance, analysis of data collected by satellites can help identify where the next public health hazard might occur, enabling preventative action. This type of satellite service can be made even more powerful when used in concert with other emerging technologies like Artificial Intelligence and Machine Learning and Big Data projects.

A composite image of the Earth at night produced by imagery from the Moderate Resolution Imaging Spectroradiometer. This type of imagery has been used by public health researchers to better estimate at-risk populations

Current satellite technology and services are vulnerable to authoritarian or antidemocratic efforts. As satellites are, at their core, hardware, physical attacks remain a serious threat. Ground stations and terminals are often targeted in attempts to keep citizens from accessing satellite-enabled connectivity. Television antennas and satellite internet terminals are difficult to hide without reducing their efficacy, making them easy targets for police or antidemocratic security services who wish to limit citizens’ access. Designs for future systems have not yet addressed the vulnerabilities of current terminals. In some extreme cases, satellite signals might be jammed to prevent citizens from accessing a service. Domestic regulations pose another hurdle. States maintain jurisdiction over the radiofrequency spectrum within their borders, and can use licensing and regulatory processes to control what types of connectivity systems are available to their citizens and foreign visitors.

Opportunities

Satellites can have positive impacts when used to further democracy, human rights, and governance. Read below to learn how to think more effectively and safely about satellite use in your work.

Skip a Step

Many citizens in digital deserts can now bypass traditional connectivity methods and leapfrog directly to satellite-enabled connectivity. Improved internet connectivity provides a new avenue for citizens to benefit from civil services and engage in political discourse. Internet access can be expanded without expensive and intensive local infrastructure projects.

Digital Inclusion

Satellite data and services have agricultural uses beyond crop monitoring and resource optimization. Smallholder farmers, especially in LMICs without established banking infrastructure, are often excluded from traditional financial markets that only provide credit and not savings, loans, or other services. Women are also disproportionately affected by financial exclusion. Innovative lenders like the Harvesting Farmers Network use satellite technologies and remote sensing to address these gaps and help underserved agricultural producers. Earth-observation data can be used to assess agricultural productivity, helping lenders move beyond requiring a paper trail or other documentation and reducing barriers to financial market access.

Access to banking through satellite-enabled connectivity addresses populations beyond smallholder agricultural producers. Satellite connectivity helps geographically isolated populations utilize financial services. Satellites are helping un- or underserved populations across sub-Saharan Africa access banking, while Mexico has partnered with commercial satellite internet providers to achieve similar digital financial inclusion goals.

More Data, Less Hardware

Satellites may be expensive systems, but access to satellite services and data need not be a prohibitively large financial outlay. Satellite operators sometimes make data collected by their systems free to the public. This practice is common across government and industry. For example, the United States’ National Aeronautics and Space Administration provides a variety of free datasets to support an open and collaborative scientific culture around the world. Satellite industry actors take a slightly different approach to open data. Some commercial entities like Maxar have a long history of providing free and open data in times of crisis or after disasters to assist humanitarian responses.

 

Open sharing of satellite data across borders helps researchers tackle public health issues. Yet there is still room to improve both the collection of remote sensing data and how that satellite-derived data is used. It is important for end users to understand the effects of data preprocessing, which can both help and hinder analyses. Different techniques can affect the utility of satellite data, sometimes streamlining the analytical process and eliminating the need for in-house expertise. On the other hand, receiving preprocessed data could limit the sophistication of the final analysis. When available, raw data might be the best option if an organization has the technical capacity and time to process the data. Thus, it is important to use imagery and remote sensing data that fit both an organization’s purpose and technical expertise.

A color-enhanced image of phytoplankton in the Patagonian Shelf Break, taken by the Suomi National Polar Orbiting Partnership Satellite Source: https://www.nasa.gov/image-article/colorful-plankton-full-patagonian-waters/

South-South Cooperation and Rejecting Post-Colonial Expectations

More and more countries are participating in developing satellite technology or utilizing the data from satellites, including those in the Global South. Many of these governments are collaborating or partnering with established industrial actors or other more advanced spacefaring nations. As more LMICs develop their local capacities, they also expand the potential for deeper South-South cooperation. Furthermore, the Global South can push back against colonial narratives by investing in satellite and space systems. States with colonial histories can push past expectations that they should base their economies on natural-resource extraction or other rudimentary products by delivering highly technical assets like satellites on a global scale.

Amazonia-1, Brazil’s first satellite, launching from Sriharikota in India Source: http://www.inpe.br/amazonia1/img/galeria/66.jpg

Risks

The use of emerging technologies can also create risks in civil society programming. Read below about how to discern possible dangers associated with satellites in DRG work, as well as how to mitigate unintended – and intended – consequences.

Onerous Regulation

Arranging satellite connectivity is not as simple as turning a device on – broadcasters must receive specific authorizations and licenses from a country’s government to beam connectivity into their territory. Well-meaning but onerous government bureaucratic processes may delay when a population could start to benefit from satellite connectivity. In other cases, political interests might prevent satellite operators from serving a population in an attempt to control citizens’ access to information or opposition campaigns.

Signal Vulnerability

Satellite signals are vulnerable to interference, even if a satellite operator has full license to operate in a country. Signals are susceptible to both political and physical interference. Governments could choose to revoke licenses, effectively ending a satellite operator’s ability to legally provide connectivity services within a country’s borders with little to no warning. The bureaucratic hoops a service provider must jump through to receive a license are often more onerous than the process for a government to revoke a satellite connectivity provider’s right to broadcast a signal. There are few best practices or exemplary guidelines on what constitutes a reason to revoke a license, so each state is a unique case. It is not clear that many states have made thoughtful progress on understanding why and under what circumstances a satellite provider would lose a license to operate.

Overreliance

Just as a government could revoke a license, so too could a commercial company stop providing satellite services. Civil society must therefore be wary of becoming overly reliant on a single provider, lest this provider decide to cut service. A provider may cease serving a country for many reasons, including financial difficulties or political motivations. For example, Starlink connectivity was impeded, if not turned off entirely, in Ukraine during the war with Russia.

Actors in civil society who wish to work with other entities for satellite projects should also be careful to not become overly dependent on partners that hold overwhelming leverage over a project. The incentives that motivate technology transfer and the sharing of expertise are not always aligned among partners. Alignment issues can cause friction and affect the benefits of a project. This risk is also likely to be relevant in state-to-state interactions.

Unethical Data Access

In the wrong hands, satellite data could be used for a variety of malevolent purposes. Location data, maps, or logs of when a device was transmitting a signal to a satellite could be used by bad actors to erode one’s physical privacy. Satellite connectivity providers may sell users’ data, but some types of sensitive data could be obtained by third parties with sophisticated collection techniques. Few countries have established robust domestic regulations to limit the negative effects of electronic surveillance of satellite-enabled connectivity.

Financial Burden

Even with the attendant risks of overreliance, partnerships with commercial entities or foreign states may be necessary due to the high cost of developing and launching satellites. While advancements in manufacturing and launch have reduced the costs of deploying and operating a satellite, fit-for-purpose systems are still often prohibitively expensive. This is especially salient in light of states that have limited fiscal space and an obligation to address other social issues.

Talent Retention

States that do make a concerted effort to develop a satellite industry or provide state-supported satellite services for their citizens might also face challenges in retaining technical capacity. It is difficult for low- and middle-income countries to keep well-trained engineers and other professionals engaged in domestic satellite issues. These issues are even more acute when citizens are reliant on foreign partners and do not see pathways to growth and productivity at home. This problem is also exacerbated by the fact that government salaries cannot hope to match salaries in the private sector for tech experts. Without a domestic talent pool to draw from, states risk not being able to advocate for themselves in both negotiations for technical services and multilateral forums on space governance and norm setting.

Lack of Multilateral Governance

New paradigms like megaconstellations threaten future generations’ ability to benefit from technologies in Earth’s orbits. This risk of orbital overcrowding is similar to terrestrial-environmental-sustainability principles. Earth’s orbits may be massive in terms of total volume, but orbits are finite resources. There is a fine line between maximizing the uses of Earth’s orbits and launching so many objects into space that no satellite can operate safely. This overcrowding issue affects all of humanity, but is particularly acute for emerging or aspirational spacefaring states that may be forced to operate in a high-risk environment, having missed out on a window of opportunity to take their first steps in space during a relatively safer period of time. Such a situation has secondary effects – those states that are unable to safely commence their space activities are also less likely to be able to demonstrate and reinforce normative expectations for responsible behaviors. Pathways for participating in the current multilateral space governance processes are made more challenging by not having a demonstrated space capability.

There are few global rules that support sustainable and equitable uses of space. Some states have recently adopted more stringent regulations on how companies can use space, but the uncoordinated effort of a few states is unlikely to ensure humanity’s access to a low-risk orbital environment for generations to come. Achieving these space sustainability goals is a global endeavor that requires multilateral cooperation.

Questions

To understand the implications of satellite use in your work, ask yourself these questions:

  1. Are there barriers that prevent the benefits of satellites from being leveraged in your country? What are they? Funding? Expertise? Lack of local governance?
  2. Are satellite-derived data or services tailored to your specific needs?
  3. How competitive is the market for satellite services in your area, and how does this competition, or lack thereof, affect the cost of accessing satellite services?
  4. Are the connectivity-enabling satellites you plan to use up to date on cyber security measures?
  5. What types of ground station(s) does the space system use, and is that infrastructure sufficiently secured from seizure or tampering?
  6. Does the satellite owner or operator adhere to or promote sustainable uses of space?
  7. What structural or regulatory changes must be enacted within your country of interest to extract the greatest value from a satellite system?
  8. Have satellite systems been implemented in other states and, if so, are there ways to avoid or overcome challenges prior to implementation?
  9. How can your use of satellite services or data promote the adoption of nascent international behaviors that would conserve your ability to access space services over the long term?
  10. Are you creating risky dependencies? How trustworthy and stable are the organizations you are relying on? Do you have a backup plan?
  11. Are the applications you are accessing through satellite connectivity secure and safe?

Case Studies

Vanuatu voting registration

The United Nations Development Programme (UNDP) and the United Nations Satellite Centre (UNOSAT) partnered on an initiative to assist Vanuatu in registering voters ahead of its 2021 provincial elections. UNOSAT used satellite data to develop the first complete dataset representing all villages in the archipelago. This data was used in concert with measurements of voter turnout to quantify the impact of polling-station locations. Satellite data were used to locate difficult-to-reach populations and maximize voter turnout. Using satellite data helped streamline election-related work and reduced the burden on election officials.

Partnerships to Provide Imagery in Support of Peace

Satellites’ ability to capture overhead imagery is especially valuable in documenting human-rights violations in states that restrict activists’ and inspectors’ access. A recent partnership between Human Rights Watch and Planet, a US-based company that operates Earth-observation satellites, enables activist groups to hold national leadership accountable. In this case, Human Rights Watch analyzed satellite images of Myanmar provided by Planet to confirm the burning of ethnically Rohingya villages. The frequent collection of satellite imagery showed that several dozen villages were burned, contradicting Myanmar leadership’s declarations that the state-sponsored clearance operations had ended. Activists used this uncovered truth to call for an urgent cessation of violence and support the delivery of humanitarian aid.

Satellite Television

Satellites enable many forms of mass communication, including television. While television is a diversion or luxury in many places around the world, it is also a powerful tool for shaping political discourse. Satellite television can provide citizenry with programming from around the globe, expanding horizons beyond local programming. Satellite television came to India in 1991 after years of state control over broadcast media. On the one hand, receiving satellite television was a marker of modernism, while on the other hand, the programming it provided became a societal phenomenon. Satellite television brought more than 300 new channels to India, nurturing cultural engagement and reshaping how citizens engaged with each other and with the state. This was especially liberating in the post-colonial context, as Indian society now controlled its own media outlets and showcased considerations of social identity through satellite television. For more information, please see Television in India: Satellites, politics, and cultural change.

Servir Ecological Work

Through the Servir program, a collaborative initiative led by the United States Agency for International Development and the U.S. National Aeronautics and Space Administration, U.S. government agencies partner with local organizations in affected regions to use satellite data to design solutions to environmental challenges around the world. Among many other contributions, the Servir team is working with partners in Peru and Brazil to turn satellite and geospatial data into precise maps that help inform decisions about agricultural and environmental policies. This work supports stakeholder efforts to understand the complex interface between agricultural productivity and environmental sustainability. The results are used to design policy incentives that promote sustainable farming of cocoa and palm oil. Local stakeholders, including farming communities, can use the satellite-derived data to optimize their land use.

South-South Cooperation on Agricultural Monitoring

Satellites are invaluable tools in agricultural development. The CropWatch program, initiated by the Chinese Academy of Sciences, works to provide low- and middle-income countries (LMICs) with access to data collected by satellites and training in using these data for their specific needs. The CropWatch program supports agricultural monitoring and enables states to better prepare for food security challenges. States have been able to engage with each other through extensive training programs, allowing for South-South collaboration on shared issues. The data collected through CropWatch can be tailored to accommodate local requirements.

Access to a Voice

Clandestine use of satellite internet has given protesters in Iran an alternate way to get online. The Iranian government exercises close control over traditional methods of accessing the internet to stifle protests and civil activism. So far, these methods of controlling or limiting free speech, democratic activism, and civil organization have failed to cut off citizens’ access to satellite internet, provided by services like Starlink. The Iranian government still exercises some control over satellite internet in the country: ground-station terminals need to be smuggled across the border to provide service to activists.

Amnesty Decode Darfur Project

Satellites help confirm ground truths. Amnesty International has a long history of using satellite imagery to produce credible evidence of human-rights abuses. This project called for digital volunteers to map Darfur and identify potentially vulnerable populations. The next phase of the project compared satellite imagery of the same locations taken at different times to pinpoint evidence of attacks by the Sudanese government and associated security forces. Amnesty maintains its own in-house satellite-imagery-analysis team to corroborate on-the-ground accounts of violence, but this project showed that even amateur volunteer analysis of satellite imagery can be a viable way to investigate human-rights abuses and hold states accountable.

Social Media

What is social media?

Social media provides spaces for people and organizations to share and access news and information, communicate with beneficiaries, and advocate for change. Social media content includes text, photos, videos, infographics, or any other material placed on a blog, Facebook page, X (formerly known as Twitter) account, etc. for an audience to consume, interact with, and circulate. This content is curated by platforms and delivered to users according to what is most likely to attract their attention. There is an ever-expanding amount of content available on these platforms.

Digital inclusion center in the Peruvian Amazon. For NGOs, social media platforms can be useful to reach new audiences and to raise awareness of services. Photo credit: Jack Gordon for USAID / Digital Development Communications.

Theoretically, social media gives everyone a way to speak out and reach audiences across the world, which can be empowering and bring people together. At the same time, much of what is shared on social media can be misleading, hateful, and dangerous, which arguably imposes a level of responsibility on the owners of platforms to moderate content.

How does social media work?

Social media platforms are owned by private companies, with business models usually based on advertising and monetization of users’ data. This affects the way that content appears to users, and influences data-sharing practices. Moderating content on these social media spaces brings its own challenges and complications because it requires balancing multiple fundamental freedoms. Understanding the content moderation practices and business models of the platforms is essential to reap the benefits while mitigating the risks of using social media.

Business Models

Most social media platforms rely on advertising. Advertisers pay for engagement, such as clicks, likes, and shares, so sensational and attention-grabbing content is more valuable. This motivates platforms to use automated-recommendation technology that relies on algorithmic decision-making to prioritize content likely to grab attention. The main strategy, “user-targeted amplification,” shows users the content most likely to interest them based on the detailed data collected about them. See more in the Risks section under Data Monetization by social media companies and tailored information streams.
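The engagement-driven ranking described above can be sketched as a toy scoring function. Everything here is an illustrative assumption (the post fields, the weights, the formula), not any platform’s actual algorithm; the point is only that weighting predicted engagement and interest-matching pushes sensational, targeted content to the top of a feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int   # historical click count
    shares: int   # historical share count
    topic: str

def rank_feed(posts, user_interests, w_click=1.0, w_share=3.0, w_match=5.0):
    """Order posts by a toy engagement score.

    Shares are weighted more heavily than clicks, and posts matching the
    user's inferred interests get a large boost -- mirroring how
    engagement-optimized feeds favor attention-grabbing, targeted content.
    """
    def score(p):
        base = w_click * p.clicks + w_share * p.shares
        match = w_match if p.topic in user_interests else 0.0
        return base + match
    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [Post("Calm policy analysis", clicks=40, shares=2, topic="policy"),
     Post("Shocking claim!!", clicks=90, shares=30, topic="politics")],
    user_interests={"politics"},
)
# the sensational, interest-matched post ranks first
```

Note that nothing in the score rewards accuracy; any signal of quality would have to be added deliberately, which is exactly the tension the business model creates.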

The Emergence of Programmatic Advertising

The transition of advertising to digital systems has dramatically altered the advertising business. In an analog world, advertising placements were predicated on aggregate demographics, collected by publishers and measurement firms. These measurements were rough, capable at best of tracking subscribers and household-level engagement. Advertisers hoped their ads would be seen by enough of their target demographic (for example, men between 18 and 35 with income at a certain level) to be worth their while. Even more challenging was tracking the efficacy of the ads. Systems for measuring whether an ad resulted in a sale were limited largely to mail-in cards and special discount codes.

The emergence of digital systems changed all of that. Pioneered for the most part by Google and then supercharged by Facebook in the early 21st century, a new promise emerged: “Place ads through our platform, and we can put the right ad in front of the right person at the right time. Not only that, but we can report back to you (the advertiser) which users saw the ad, whether they clicked on it, and if that click led to a ‘conversion’ or a sale.”

But this promise has come with significant unintended consequences. The way that the platforms—and the massive ad tech industry that has rapidly emerged alongside them—deliver on this promise requires a level of data gathering, tracking, and individual surveillance unprecedented in human history. The tracking of individual behaviors, preferences, and habits powers the wildly profitable digital advertising industry, dominated by platforms that can control these data at scale.

Managing huge consumer data sets at the scale and speed required to deliver value to advertisers has come to mean a heavy dependence on algorithms to do the searching, sorting, tracking, placement, and delivery of ads. This development of sophisticated algorithms led to the emergence of programmatic advertising, which is the placement of ads in real time on websites with no human intervention. Programmatic advertising made up roughly two thirds of the $237 billion global ad market in 2019.
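As a rough illustration of how a single programmatic placement is decided in the milliseconds before a page loads, here is a toy second-price auction, a pricing model long common in real-time bidding. The advertiser names and bid values are invented, and real ad exchanges layer targeting, floors, and fees on top of this.

```python
def run_auction(bids):
    """Toy second-price (Vickrey) auction: the highest bidder wins the
    ad slot but pays the second-highest bid. `bids` maps advertiser
    name -> bid in dollars. Illustrative only."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]
    return winner, clearing_price

winner, price = run_auction({"brand_a": 2.50, "brand_b": 1.75, "brand_c": 0.90})
# brand_a wins the impression and pays 1.75
```

Second-price rules are used because they encourage advertisers to bid their true valuation, which is part of why exchanges can clear billions of such auctions automatically.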

The digitization of the advertising market, particularly the dominance of programmatic advertising, has resulted in a highly uneven playing field. The technology companies possess a significant advantage: they built the new structures and set the terms of engagement. What began as a value-add in the new digital space—“We will give advertisers efficiency and publishers new audiences and revenue streams”—has evolved to disadvantage both groups.

One of the primary challenges is in how audience engagement is measured and tracked. The primary performance indicators in the digital world are views and clicks. As mentioned above, an incentive structure based on views and clicks (engagement) tends to favor sensational and eye-catching content. In the race for engagement, misleading or false content with dramatic headlines and incendiary claims consistently wins out over more balanced news and information. See also the section on digital advertising in the disinformation resource.

Advertising-motivated content

Platforms leverage tools like hashtags and search engine optimization (SEO) to rank and cluster content around certain topics. Unfortunately, automated content curation motivated by advertising does not tend to prioritize healthful, educational, or rigorous content. Instead, conspiracy theories, shocking or violent content, and “click-bait” (misleading phrases designed to entice viewing) tend to spread more widely. Many platforms have features of upvoting (“like” buttons) which, similar to hashtags and SEO, influence the algorithmic moderation and promote certain content to circulate more widely. These features together cause “virality,” one of the defining features of the social-media ecosystem: the tendency of an image, video, or piece of information to be circulated rapidly and widely.

In some cases, virality can spark political activism and raise awareness (like the #MeToo movement), but it can also amplify tragedies and spread inaccurate information (anti-vaccine claims and other health rumors, for example). Additionally, the business models of the platforms reward quantity over quality (number of “likes,” “followers,” and views), encouraging a growth logic that has led to information saturation or information overload, overwhelming users with seemingly infinite content. Indeed, design decisions like the “infinite scroll,” intended to make our social media spaces ever larger and more entertaining, have been associated with impulsive behavior, increased distraction, attention-seeking behavior, and lower self-esteem.

Many digital advertising strategies raise risks regarding access to information, privacy, and discrimination, in part because of their pervasiveness and subtlety. Influencer marketing, for example, is the practice of sponsoring a social media influencer to promote or use a certain product by working it into their social-media content, while native advertising is the practice of embedding ads in or beside other non-paid content. Most consumers do not know what native advertising is and may not even know when they are being delivered ads.

It is not new for brands to strategically place their content. However, today there is much more advertising, and it is seamlessly integrated with other content. In addition, the design of platforms makes content from diverse sources—advertisers and news agencies, experts and amateurs—indistinguishable. Individuals’ right to information and basic guarantees of transparency are at stake if advertisements are placed on equal footing with desired content.

Content Moderation

Content moderation is at the heart of the services that social-media platforms provide: the hosting and curation of the content uploaded by their users. Content moderation is not just the review of content, but every design decision made by the platforms, from the Terms of Service and their Community Guidelines, to the algorithms used to rank and order content, to the types of content allowed and encouraged through design features (“like”, “follow”, “block”, “restrict”, etc.).

Content moderation is particularly challenging because of the issues it raises around freedom of expression. While it is necessary to address massive quantities of harmful content that circulate widely, educational, historic, or journalistic content is often censored by algorithmic moderation systems. In 2016, for example, Facebook took down a post with a Pulitzer Prize-winning image of a naked 9-year-old girl fleeing a napalm bombing and suspended the account of the journalist who had posted it.

Though nations differ in their stances on freedom of speech, international human rights provide a framework for how to balance freedom of expression against other rights, and against protections for vulnerable groups. Still, content-moderation challenges grow as content itself evolves, for instance through the rise of live streaming, ephemeral content, and voice assistants. Moderating internet memes is particularly challenging because of their ambiguity and ever-changing nature; and yet meme culture is a central tool used by the far right to share ideology and glorify violence. Some information manipulation is also intentionally difficult to detect, for example, “dog whistling” (sending coded messages to subgroups of the population) and “gaslighting” (psychological manipulation to make people doubt their own knowledge or judgment).

Automated moderation

Content moderation is usually performed by a mix of humans and artificial intelligence, with the precise mix dependent on the platform and the category of content. The largest platforms like Facebook and YouTube use automated tools to filter content as it is uploaded. Facebook, for example, claims it is able to detect up to 80% of hate speech content in some languages as it is posted, before it reaches the level of human review. Though the working conditions for the human moderators have been heavily criticized, algorithms are not a perfect alternative. Their accuracy and transparency have been disputed, and experts have warned of concerning biases stemming from algorithmic content moderation.
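The split between automatic action and escalation to human review can be sketched with a deliberately crude keyword filter. Real platforms use machine-learning classifiers rather than blocklists, and the terms and thresholds here are invented; the crudeness is the point, since a term match cannot tell journalism or education apart from abuse.

```python
def triage(post: str, blocklist: set[str], review_threshold: int = 1) -> str:
    """Toy moderation triage. Count blocklisted terms in a post:
    zero hits -> allow; up to the threshold -> route to human review;
    more -> remove automatically. Illustrative only."""
    hits = sum(1 for term in blocklist if term in post.lower())
    if hits == 0:
        return "allow"
    if hits <= review_threshold:
        return "human_review"
    return "auto_remove"
```

A news report quoting violent language would trip the same filter as the violence itself, which is how legitimate content ends up censored while cleverly phrased abuse passes through.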

The complexity of content-moderation decisions does not lend itself easily to automation, and the blurry line between legal and illegal, or permissible and impermissible, content means that legitimate content gets censored while harmful and illegal content (cyberbullying, defamation, etc.) slips through the filters.

The moderation of content posted to social media was increasingly important during the COVID-19 pandemic, when access to misleading and inaccurate information about the virus had the potential to result in severe illness or bodily harm. One characterization of Facebook described “a platform that is effectively at war with itself: the News Feed algorithm relentlessly promotes irresistible click-bait about Bill Gates, vaccines, and hydroxychloroquine; the trust and safety team then dutifully counters it with bolded, underlined doses of reality.”

Community moderation

Some social media platforms have come to rely on their users for content moderation. Reddit was one of the first social networks to popularize community-led moderation and allows subreddits to tack additional rules onto the company’s master content policy. These rules are then enforced by human moderators and, in some cases, automated bots. While the decentralization of moderation gives user communities more autonomy and decision-making power over their conversations, it also relies inherently on unpaid labor and exposes untrained volunteers to potentially problematic content.

Another approach to community-led moderation is X’s Community Notes, essentially a crowd-sourced fact-checking system. The feature allows users who are members of the program to add context to posts (formerly called tweets) that may contain false or misleading information; other members then rate whether that added context is helpful.
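A minimal sketch of this kind of crowd-rating aggregation might look like the following. The thresholds are invented, and X’s actual Community Notes algorithm is more sophisticated (it additionally requires agreement among raters who usually disagree with one another, which this sketch omits), so treat this as the general pattern only.

```python
def note_status(ratings, min_ratings=5, helpful_share=0.8):
    """Decide whether a crowd-sourced fact-checking note is displayed.

    ratings: list of booleans, True meaning a rater found the note
    helpful. The note is shown only once enough people have rated it
    and a large majority found it helpful; otherwise it stays hidden.
    """
    if len(ratings) < min_ratings:
        return "needs_more_ratings"
    share = sum(ratings) / len(ratings)
    return "show" if share >= helpful_share else "hide"
```

The design question for any such system is where to set the thresholds: too low, and coordinated groups can push notes onto posts; too high, and useful context never surfaces.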

Addressing harmful content

In some countries, local laws may address content moderation, but they relate mainly to child abuse images or illegal content that incites violence. Most platforms also have community standards or safety and security policies that state the kind of content allowed and set the rules for harmful content. Enforcement of legal requirements and the platforms’ own standards relies primarily on content being flagged by social media users; the platforms are generally only responsible for harmful content once it has been reported to them.

Some platforms have established mechanisms that allow civil society organizations (CSOs) to contribute to the reporting process by becoming so-called “trusted flaggers.” Facebook’s Trusted Partner program, for example, provides partners with a dedicated escalation channel for reporting content that violates the company’s Community Standards. However, even with programs like this in place, limited access to platforms to raise local challenges and trends remains an obstacle for CSOs, marginalized groups, and other communities, especially in the Global South.

Regulation

The question of how to regulate and enforce the policies of social media platforms remains far from settled. As of this writing, there are several common approaches to social-media regulation.

Self-regulation

The standard model of social-media regulation has long been self-regulation, with platforms establishing and enforcing their own standards for safety and equity. Incentives for self-regulation include avoiding the imposition of more restrictive government regulation and building consumer trust to broaden a platform’s user base (and ultimately boost profits). On the other hand, there are obvious limits to self-regulation when these incentives are outweighed by perceived costs. Self-regulation can also be contingent on the ownership of a company, as demonstrated by the reversal of numerous policy decisions in the name of “free speech” by Elon Musk after his takeover of X (known as Twitter at the time).

In 2020, the Facebook Oversight Board was established as an accountability mechanism for users to appeal decisions by Facebook to remove content that violates its policies against harmful and hateful posts. While the Oversight Board’s content decisions on individual cases are binding, its broader policy recommendations are not. For example, Meta was required to remove a video posted by Cambodian Prime Minister Hun Sen that threatened his opponents with physical violence, but it declined to comply with the Board’s recommendation to suspend the Prime Minister’s account entirely. Though the Oversight Board’s mandate and model are promising, there have been concerns about its capacity to respond to the volume of requests it receives in a timely manner.

Government Regulation

In recent years, individual governments and regional blocs have introduced legislation to hold social media companies accountable for the harmful content that spreads on their platforms, as well as to protect the privacy of citizens given the massive amounts of data these companies collect. Perhaps the most prominent and far-reaching example of this kind of legislation is the European Union’s Digital Services Act (DSA), which came into effect for “Very Large Online Platforms” such as Facebook and Instagram (Meta), TikTok, YouTube (Google), and X in late August of 2023. Under the rules of the DSA, online platforms risk significant fines if they fail to prevent and remove posts containing illegal content. The DSA also bans targeted advertising based on a person’s sexual orientation, religion, ethnicity, or political beliefs and requires platforms to provide more transparency on how their algorithms work.

With government regulation comes the risk of over-regulation via “fake news” laws and threats to free speech and online safety. In 2023, for example, security researchers warned that the draft legislation of the U.K.’s Online Safety Bill would compromise the security provided to users of end-to-end encrypted communications services, such as WhatsApp and Signal. Proposed Brazilian legislation to increase transparency and accountability for online platforms was also widely criticized—and received strong backlash from the platforms themselves—as negotiations took place behind closed doors without proper engagement with civil society and other sectors.


How is social media relevant in civic space and for democracy?

Social media encourages and facilitates the spread of information at unprecedented speeds, distances, and volumes. As a result, information in the public sphere is no longer controlled by journalistic “gatekeepers.” Rather, social media provide platforms for groups excluded from traditional media to connect and be heard. Citizen journalism has flourished on social media, enabling users from around the world to supplement mainstream media narratives with on-the-ground local perspectives that previously may have been overlooked or misrepresented. Read more about citizen journalism under the Opportunities section of this resource.

Social media can also serve as a resource for citizens and first responders during emergencies, humanitarian crises, and natural disasters, as described in more detail in the Opportunities section. In the aftermath of the deadly earthquake that struck Turkey and Syria in February 2023, for example, people trapped under the rubble turned to social media to alert rescue crews to their location. Social media platforms have also been used during this and other crises to mobilize volunteers and crowdsource donations for food and medical aid.

Digital inclusion center in the Peruvian Amazon. The business models and content moderation practices of social media platforms directly affect the content displayed to users. Photo Credit: Chandy Mao, Development Innovations.

However, like any technology, social media can be used in ways that negatively affect free expression, democratic debate, and civic participation. Profit-driven companies like X have in the past complied with content takedown requests from individual governments, prompting censorship concerns. When private companies control the flow of information, censorship can occur not only through such direct mechanisms, but also through the determination of which content is deemed most credible or worthy of public attention.

The effects of harassment, hate speech, and “trolling” on social media can spill over into offline spaces, presenting a unique danger for women, journalists, political candidates, and marginalized groups. According to UNESCO, 20% of respondents to a 2020 survey on online violence against women journalists reported being attacked offline in connection with online violence. Read more about online violence and targeted digital attacks in the Risks section of this resource, as well as the resource on the Digital Gender Divide.

Social media platforms have only become more prevalent in our daily lives (with the average internet user spending nearly 2.5 hours per day on social media), and those not active on the platforms risk missing important public announcements, information about community events, and opportunities to communicate with family and friends. Design features like the “infinite scroll,” which allows users to endlessly swipe through content without clicking, are intentionally addictive—and associated with impulsive behavior and lower self-esteem. The oversaturation of content in curated news feeds makes it ever more difficult for users to distinguish factual, unbiased information from the onslaught of clickbait and sensational narratives. Read about the intentional sharing of misleading or false information to deceive or cause harm in our Disinformation resource.

Social media and elections

Social media platforms have become increasingly important to the engagement of citizens, candidates, and political parties during elections, referendums, and other political events. On the one hand, lesser-known candidates can leverage social media to reach a broader audience by conducting direct outreach and sharing information about their campaign, while citizens can use social media to communicate with candidates about immediate concerns in their local communities. On the other hand, disinformation circulating on social media can amplify voter confusion, reduce turnout, galvanize social cleavages, suppress political participation of women and marginalized populations, and degrade overall trust in democratic institutions.

Social media companies like Google, Meta, and X do have a track record of adjusting their policies and investing in new products ahead of global elections. They also collaborate directly with electoral authorities and independent fact-checkers to mitigate disinformation and other online harms. However, these efforts often fall short. As one example, despite Facebook’s self-proclaimed efforts to safeguard election integrity, Global Witness found that the platform failed to detect election-related disinformation in ads ahead of the 2022 Brazilian presidential election (a similar pattern was also uncovered in Myanmar, Ethiopia, and Kenya). Facebook and other social media platforms were strongly criticized for their inaction in the lead up to and during the subsequent riots instigated by far-right supporters of former president Jair Bolsonaro. In fragile democracies, the institutions that could help counter the impact of fake news and disinformation disseminated on social media—such as independent media, agile political parties, and sophisticated civil society organizations—remain nascent.

Meanwhile, online political advertising has introduced new challenges to election transparency and accountability as the undeclared sponsoring of content has become easier through unofficial pages paid for by official campaigns. Social media companies have made efforts to increase the transparency of political ads by making “ad libraries” available in some countries and introducing new requirements for the purchase and identification of political ads. But these efforts have varied by country, with most attention directed to larger or more influential markets.

Social media monitoring can help civil society researchers better understand their local information environment, including common disinformation narratives during election cycles. The National Democratic Institute, for example, used Facebook’s social monitoring platform CrowdTangle to track the online political environment in Moldova following Maia Sandu’s victory in the November 2020 presidential elections. However, social media platforms have made this work more challenging by introducing exorbitant fees to access data or ceasing support for user interfaces that make analysis easier for non-technical users.


Opportunities

Students from the Kandal Province, Cambodia. Social media platforms have opened up new platforms for video storytelling. Photo credit: Chandy Mao, Development Innovations.

Social media can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to more effectively and safely think about social media use in your work.

Citizen Journalism

Social media has been credited with providing channels for citizens, activists, and experts to report instantly and directly—from disaster settings, during protests, from within local communities, etc. Citizen journalism, also referred to as participatory journalism or guerrilla journalism, does not have a definite set of principles and is an important supplement to (but not a replacement for) mainstream journalism. Collaborative journalism, the partnership between citizen and professional journalists, as well as crowdsourcing strategies, are additional techniques facilitated by social media that have enhanced journalism, helping to promote voices from the ground and to magnify diverse voices and viewpoints. The outlet France 24 has developed a network of 5,000 contributors, the “observateurs,” who are able to cover important events directly by virtue of being on scene at the time, as well as to confirm the accuracy of information.

Social media and blogging platforms have allowed for the decentralization of expertise, bridging elite and non-elite forms of knowledge. Without proper fact-checking or supplementary sources and proper context, citizen reporting carries risks—including security risks to the authors themselves—but it is an important democratizing force and source of information.

Crowdsourcing

In crowdsourcing, the public is mobilized to contribute data that together tell a larger story or accomplish a greater goal. Crowdsourcing can be a method for financing, for journalism and reporting, or simply for gathering ideas. Usually some kind of software tool or platform is put in place that the public can easily access and contribute to. Crisis mapping, for example, is a type of crowdsourcing through which the public shares data in real time during a crisis (a natural disaster, an election, a protest, etc.). These data are then ordered and displayed in a useful way. For instance, crisis mapping can be used in the wake of an earthquake to show first responders the areas that have been hit and need immediate assistance. Ushahidi is an open-source crisis-mapping software developed in Kenya after the outbreak of violence that followed the 2007 election. The tool was first created to allow Kenyans to flag incidents, form a complete and accurate picture of the situation on the ground, and share information with the media, outside governments, and relevant civil society and relief organizations. In Kenya, the tool gathered texts, posts, and photos and created crowdsourced maps of incidents of violence, election fraud, and other abuse. Ushahidi now has a global team with deployments in more than 160 countries and more than 40 languages.
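The core data step in crisis mapping, clustering incoming reports by location so hotspots stand out, can be sketched as follows. The grid size and the report format are invented for illustration, not Ushahidi’s actual data model.

```python
from collections import Counter

def crisis_map(reports, cell_deg=0.1):
    """Bucket crowdsourced incident reports into roughly 0.1-degree
    grid cells. reports: iterable of (lat, lon, category) tuples.
    Returns a Counter keyed by (cell, category) so responders can see
    where each kind of incident clusters."""
    counts = Counter()
    for lat, lon, category in reports:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        counts[(cell, category)] += 1
    return counts

# two violence reports near central Nairobi land in the same cell
reports = [(-1.28, 36.82, "violence"),
           (-1.29, 36.81, "violence"),
           (-1.95, 30.06, "fraud")]
hotspots = crisis_map(reports)
```

In a real deployment the counts would feed a map layer, and reports would first pass through verification, which is where the accuracy caveats about citizen reporting apply.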

Digital Activism

Social media has allowed local and global movements to spring up overnight, inviting broad participation and visibility. Twitter hashtags in particular have been instrumental for coalition building, coordination, and raising awareness among international audiences, media, and governments. Researchers began to take note of digital activism around the 2011 “Arab Spring,” when movements in Tunisia, Morocco, Syria, Libya, Egypt, and Bahrain, among other countries, leveraged social media to galvanize support. This pattern continued with the Occupy Wall Street movement in the United States, the Ukrainian Euromaidan movement in late 2013, and the Hong Kong protests in 2019.

In 2013, the acquittal of George Zimmerman in the death of unarmed 17-year-old Trayvon Martin inspired the creation of the #BlackLivesMatter hashtag. This movement grew stronger in response to the tragic killings of Michael Brown in 2014 and George Floyd in 2020. The hashtag, at the front of an organized national protest movement, provided an outlet for people to join an online conversation and articulate alternative narratives in real time about subjects that the media and the rest of the United States had not paid sufficient attention to: police brutality, systemic racism, racial profiling, inequality, etc.

The #MeToo movement against sexual misconduct in the media industry, which also became a global movement, allowed a multitude of people to participate in activism previously bound to a certain time and place.

Some researchers and activists fear that social media will lead to “slacktivism” by giving people an excuse to stay at home rather than mount a more active response. Others fear that social media is ultimately insufficient for enacting meaningful social change, which requires nuanced political arguments. (Interestingly, a 2018 Pew Research survey on attitudes toward digital activism showed that just 39% of white Americans believed social media was an important tool for expressing themselves, while 54% of Black Americans said that it was.)

Social media has enabled new online groups to gather together and to express a common sentiment as a form of solidarity or as a means to protest. Especially after the COVID-19 pandemic broke out, many physical protests were suspended or canceled, and virtual protests proceeded in their place.

Expansion and engagement with international audiences at low cost

Social media provides a valuable opportunity for CSOs to reach their goals and engage with existing and new audiences. A good social-media strategy is underpinned by a permanent staff member who builds a strong and consistent social media presence based on the organization’s purpose, values, and culture. This person should know how to seek information, be aware of both the risks and benefits of sharing information online, and understand the importance of using sound judgment when posting on social media. The USAID “Social Networking: A Guide to Strengthening Civil Society through Social Media” provides a set of questions as guidance to develop a sound social-media policy, asking organizations to think about values, roles, content, tone, controversy, and privacy.

Increased awareness of services

Social media can be integrated into programmatic activities to strengthen the reach and impact of programming, for example, by generating awareness of an organization’s services to a new demographic. Organizations can promote their programs and services while responding to questions and fostering open dialogue. Widely used social media platforms can be useful to reach new audiences for training and consulting activities through webinars or individual meetings designed for NGOs.

Opportunities for Philanthropy and Fundraising

Social-media fundraising presents an important opportunity for nonprofits. After the blast in Beirut’s harbor in the summer of 2020, many Lebanese people started online fundraising pages for their organizations. Social media platforms were used extensively to share funding suggestions to the global audience watching the disaster unfold, reinforced by traditional media coverage. However, organizations should carefully consider the type of campaign and platforms they choose. TechSoup, a nonprofit providing tech support for NGOs, offers advice and an online course on fundraising with social media for nonprofits.

Emergency communication

In some contexts, civic actors rely on social media platforms to produce and disseminate critical information, for example, during humanitarian crises or emergencies. Even in a widespread disaster, the internet often remains a significant communication channel, which makes social media a useful, complementary means for emergency teams and the public. Reliance on the internet, however, increases vulnerability in the event of network shutdowns.

Risks

In Kyiv, Ukrainian students share pictures at the opening ceremony of a Parliamentary Education Center. Photo credit: Press Service of the Verkhovna Rada of Ukraine, Andrii Nesterenko.

The use of social media can also create risks in civil society programming. Read below on how to discern the possible dangers associated with social media platforms in DRG work, as well as how to mitigate unintended – and intended – consequences.

Polarization and Ideological Segregation

The platforms’ business models shape how content flows and is presented on social media, and they risk limiting our access to information, particularly information that challenges our preexisting beliefs, by exposing us instead to content likely to attract our attention and support our views. The concept of the filter bubble refers to the filtering of information by online platforms to exclude information we as users have not already expressed an interest in. When paired with our own intellectual biases, filter bubbles worsen polarization by allowing us to live in echo chambers. This is easily witnessed in a YouTube feed: when you search for a song by an artist, you will likely be directed to more songs by the same artist or similar ones, because the algorithms are designed to prolong your viewing and assume you want more of something similar. The same trend has been observed with political content. Social media algorithms encourage confirmation bias, exposing us to content we will agree with and enjoy, often at the expense of the accuracy, rigor, or educational and social value of that content.

The massive and precise data amassed by advertisers and social media companies about our preferences and opinions facilitates the practice of micro-targeting, which involves the display of tailored content based on data about users’ online behaviors, connections, and demographics, as will be further explained below.

The increasingly tailored distribution of news and information on social media is a threat to political discourse, diversity of opinions, and democracy. Users can become detached even from factual information that disagrees with their viewpoints, and isolated within their own cultural or ideological bubbles.

Because the tailoring of news and other information on social media is driven largely by opaque algorithms owned by private companies, it is hard for users to avoid these bubbles. Taking in the very diverse information available on social media, with its many viewpoints, perspectives, ideas, and opinions, requires an explicit effort by the individual user to go beyond passive consumption of the content the algorithm presents to them.

Misinformation and Disinformation

The internet and social media provide new tools that amplify and alter the danger presented by false, inaccurate, or out-of-context information. The online space increasingly drives discourse and is where much of today’s disinformation takes root. Refer to the Disinformation resource for a detailed overview of these problems.

Online Violence and Targeted Digital Attacks

Social media facilitates a number of violent behaviors such as defamation, harassment, bullying, stalking, “trolling,” and “doxxing.” Cyberbullying among children, much like traditional offline bullying, can harm students’ performance in school and cause real psychological damage. Cyberbullying is particularly harmful because victims experience the violence alone, isolated in cyberspace. They often do not seek help from parents and teachers, who they believe are not able to intervene. Cyberbullying is also difficult to address because it can move across social-media platforms, beginning on one and moving to another. Like cyberbullying, cyber harassment and cyberstalking have very tangible offline effects. Women are most often the victims of cyber harassment and cyberviolence, sometimes through the use of stalkerware installed by their partners to track their movements. A frightening cyber-harassment trend accelerated in France during the COVID-19 pandemic in the form of “fisha” accounts, where bullies, aggressors, or jilted ex-boyfriends would publish and circulate naked photos of teenage girls without their consent.

Journalists, women in particular, are often subject to cyber harassment and threats. Online violence against journalists, particularly those who write about socially sensitive or political topics, can lead to self-censorship, affecting the quality of the information environment and democratic debate. Social media provides new ways to spread and amplify hate speech and harassment. The use of fake accounts, bots, and bot-nets (automated networks of accounts) allow perpetrators to attack, overwhelm, and even disable the social media accounts of their victims. Revealing sensitive information about journalists through doxxing is another strategy that can be used to induce self-censorship.

The 2014 case of Gamergate, when several women video-game developers were attacked by a coordinated harassment campaign that included doxxing and threats of rape and death, illustrates the strength and capacity of loosely connected hate groups online to rally together, inflict real violence, and even drown out criticism. Many of the actions of the most active Gamergate trolls were illegal, but their identities were unknown. Importantly, supporters of Gamergate themselves have suggested that the most violent trolls were a “smaller, but vocal minority” — evidence of the magnifying power of internet channels and their use for coordinated online harassment.

Online hoaxes, scams, and frauds, as in their traditional offline forms, usually aim to extract money or sensitive information from a target. The practice of phishing is increasingly common on social media: an attacker pretends to be a contact or a reputable source in order to send malware or extract personal information and account credentials. Spearphishing is a targeted phishing attack that leverages information about the recipient and details related to the surrounding circumstances to achieve this same aim.

Data monetization by social media companies and tailored information streams

Most social media platforms are free to use. They do not receive revenue directly from users, as in a traditional subscription service; rather, they generate profit primarily through digital advertising. Digital advertising is based on the collection of users’ data by social-media companies, which allows advertisers to target their ads to specific users and types of users. Social media platforms monitor their users and build detailed profiles that they sell to advertisers. The data tracked includes information about the user’s connections and behavior on the platform, such as friends, posts, likes, searches, clicks, and mouse movements. Data are also extensively collected outside the platforms, including information about users’ location, web pages visited, online shopping, and banking behavior. Additionally, many companies regularly request permission to access the contacts and photos of their users.

In the case of Facebook, this has led to a long-held and widespread conspiracy theory that the company listens to conversations to serve tailored advertisements. No one has ever been able to find clear evidence that this is actually happening. Research has shown that a company like Facebook does not need to listen in to your conversations, because it has the capacity to track you in so many other ways: “Not only does the system know exactly where you are at every moment, it knows who your friends are, what they are interested in, and who you are spending time with. It can track you across all your devices, log call and text metadata on phones, and even watch you write something that you end up deleting and never actually send.”

This massive and precise data permits the practice of micro-targeting: displaying targeted advertisements based on what you have recently purchased, searched for, or liked. But just as online advertisers can target us with products, political parties can target us with more relevant or personalized messaging. Studies have attempted to determine the extent to which political micro-targeting is a serious concern for the functioning of democratic elections. Researchers and digital rights activists have also raised the question of how micro-targeting may be interfering with our freedom of thought.

Government surveillance and access to personal data

The content shared on social media can be monitored by governments, who use social media for censorship, control, and information manipulation. Even democratic governments are known to engage in extensive social-media monitoring for law enforcement and intelligence-gathering purposes. These practices should be guided by robust legal frameworks and data protection laws to safeguard individuals’ rights online, but many countries have not yet enacted this type of legislation.

There are also many examples of authoritarian governments using personal and other data harvested through social media to intimidate activists, silence opposition, and bring development projects to a halt. The information shared on social media often allows bad actors to build extensive profiles of individuals, enabling targeted online and offline attacks. Through social engineering, a phishing email can be carefully crafted based on social media data to trick an activist into clicking on a malicious link that provides access to their device, documents, or social-media accounts.

Sometimes, however, a strong, real-time presence on social media can protect a prominent activist against threats by the government. A disappearance or arrest would be immediately noticed by followers or friends of a person who suddenly becomes silent on social media.

Market power and differing regulation

We rely on social-media platforms to help fulfill our fundamental rights (freedom of expression, assembly, etc.). However, these platforms are massive global monopolies and have been referred to as “the new governors.” This market concentration poses a challenge to national and international governance mechanisms. Simply breaking up the biggest platform companies will not fully solve the information disorders and social problems fueled by social media. Civil society and governments also need visibility into the design choices made by the platforms to understand how to address the harms they facilitate.

The growing influence of social-media platforms has given many governments reason to impose laws on online content. There is a surge in laws across the world regulating illegal and harmful content, such as incitement to terrorism or violence, false information, and hate speech. These laws often criminalize speech and impose punishments of jail terms or high fines for something as minor as a retweet on X. Even in countries where the rule of law is respected, legal approaches to regulating online content may be ineffective due to the many technical challenges of content moderation. They also risk violating internet users’ freedom of expression by reinforcing imperfect, non-transparent moderation practices and encouraging over-deletion. Lastly, such laws force social media companies to navigate between complying with local laws and defending international human rights law.

Impact on journalism

Social media has had a profound impact on the field of journalism. While it has enabled the emergence of the citizen-journalist, local reporting, and crowd-sourced information, social-media companies have disrupted the advertising model that sustained the traditional newspaper. In turn, this has created a reward system that privileges sensationalist, click-bait-style content over quality journalism that may be pertinent to local communities.

In addition, the way search tools work dramatically affects local publishers, as search is a powerful vector for news and information. Researchers have found that search rankings have a marked impact on our attention. Not only do we tend to think information that is ranked more highly is more trustworthy and relevant, but we tend to click on top results more often than lower ones. The Google search engine concentrates our attention on a narrow range of news sources, a trend that works against diverse and pluralistic media outlets. It also tends to undercut the advertising revenue of smaller and community publishers, which depends on user attention and traffic. In this downward spiral, search results favor larger outlets, and those results drive more user engagement; in turn, their inventory becomes more valuable in the advertising market, those publishers grow larger, and their growth drives still more favorable search results, continuing the cycle.

Questions

To understand the implications of social media information flows and choice of platforms used in your work, ask yourself these questions:

  1. Does your organization have a social-media strategy? What does your organization hope to achieve through social media use?
  2. Do you have staff who can oversee and ethically moderate your social-media accounts and content?
  3. Which platform do you intend to use to accomplish your organization’s goals? What is the business model of that platform? How does this business model affect you as a user?
  4. How is content ordered and moderated on the platforms you use (by humans, volunteers, AI, etc.)?
  5. Where is the platform legally headquartered? What jurisdiction and legal frameworks does it fall under?
  6. Do the platforms chosen have mechanisms for users to flag harassment and hate speech for review and possible removal?
  7. Do the platforms have mechanisms for users to dispute decisions on content takedowns or blocked accounts?
  8. What user data are the platforms collecting? Who else has access to collected data and how is it being used?
  9. How does the platform engage its community of users and civil society (for instance, in flagging dangerous content, in giving feedback on design features, in fact-checking information, etc.)? Does the platform employ local staff in your country or region?
  10. Do the platforms have privacy features like encryption? If so, what level of encryption do they offer and for what precise services (for example, only on the app, only in private message threads)? What are the default settings?

Case Studies

Everyone saw Brazil violence coming. Except social media giants

“When far-right rioters stormed Brazil’s key government buildings on January 8, social media companies were again caught flat-footed. In WhatsApp groups—many with thousands of subscribers—viral videos of the attacks quickly spread like wildfire… On Twitter, social media users posted thousands of images and videos in support of the attacks under the hashtag #manifestacao, or protest. On Facebook, the same hashtag garnered tens of thousands of engagements via likes, shares and comments, mostly in favor of the riots… In failing to clamp down on such content, the violence in Brazil again highlights the central role social media companies play in the fundamental machinery of 21st century democracy. These firms now provide digital tools like encrypted messaging services used by activists to coordinate offline violence and rely on automated algorithms designed to promote partisan content that can undermine people’s trust in elections.”

Crowdsourced mapping in crisis zones: collaboration, organization and impact

“Within a crisis, crowdsourced mapping allows geo-tagged digital photos, aid requests posted on Twitter, aerial imagery, Facebook posts, SMS messages, and other digital sources to be collected and analyzed by multiple online volunteers…[to build] an understanding of the damage in an area and help responders focus on those in need. By generating maps using information sourced from multiple outlets, such as social media…a rich impression of an emergency situation can be generated by the power of ‘the crowd’.” Crowdsourced mapping has been employed in multiple countries during natural disasters, refugee crises, and even election periods.

What makes a movement go viral? Social media, social justice coalesce under #JusticeForGeorgeFloyd

A 2022 USC study was among the first to measure the link between social media posts and participation in the #BlackLivesMatter protests after the 2020 death of George Floyd. “The researchers found that Instagram, as a visual content platform, was particularly effective in mobilizing coalitions around racial justice by allowing new opinion leaders to enter public discourse. Independent journalists, activists, entertainers, meme groups and fashion magazines were among the many opinion leaders that emerged throughout the protests through visual communications that went viral. This contrasts with text-based platforms like Twitter that allow voices with institutional power (such as politicians, traditional news media or police departments) to control the flow of information.”

Myanmar: The social atrocity: Meta and the right to remedy for the Rohingya

A 2022 Amnesty International report investigated Meta’s role in the serious human rights violations perpetrated during the Myanmar security forces’ brutal campaign of ethnic cleansing against Rohingya Muslims starting in August 2017. The report found that “Meta’s algorithms proactively amplified and promoted content which incited violence, hatred, and discrimination against the Rohingya – pouring fuel on the fire of long-standing discrimination and substantially increasing the risk of an outbreak of mass violence.”

How China uses influencers to build a propaganda network

“As China continues to assert its economic might, it is using the global social media ecosystem to expand its already formidable influence. The country has quietly built a network of social media personalities who parrot the government’s perspective in posts seen by hundreds of thousands of people, operating in virtual lockstep as they promote China’s virtues, deflect international criticism of its human rights abuses, and advance Beijing’s talking points on world affairs like Russia’s war against Ukraine. Some of China’s state-affiliated reporters have posited themselves as trendy Instagram influencers or bloggers. The country has also hired firms to recruit influencers to deliver carefully crafted messages that boost its image to social media users. And it is benefitting from a cadre of Westerners who have devoted YouTube channels and Twitter feeds to echoing pro-China narratives on everything from Beijing’s treatment of Uyghur Muslims to Olympian Eileen Gu, an American who competed for China in the [2022] Winter Games.”

Why Latin American Leaders Are Obsessed With TikTok

“Latin American heads of state have long been early adopters of new social media platforms. Now they have seized on TikTok as a less formal, more effective tool for all sorts of political messaging. In Venezuela, Nicolas Maduro has been using the platform to share bite-sized pieces of propaganda on the alleged successes of his socialist agenda, among dozens of videos of himself dancing salsa. In Ecuador, Argentina and Chile, presidents use the app to give followers a view behind the scenes of government. In Brazil, former President Jair Bolsonaro and his successor Luiz Inácio Lula da Silva have been competing for views in the aftermath of a contested election…In much of the West, TikTok is the subject of political suspicion; in Latin America, it’s a cornerstone of political strategy.”
