Digital Gender Divide

What is the digital gender divide?

The digital gender divide refers to the gap between women* and men in access to and use of the internet, a gap that can perpetuate and exacerbate gender inequalities and leave women out of an increasingly digital world. Despite the rapid growth of internet access around the globe (in 2023, 95% of people lived within reach of a mobile cellular network), women are still 6% less likely than men to use the internet, and the gap is actually widening in many low- and middle-income countries (LMICs), where in 2023 women were 12% less likely than men to own a mobile phone and 19% less likely to actually access the internet on a mobile device.

Civil society leader in La Paz, Honduras. The gender digital divide affects every aspect of women’s lives. Photo credit: Honduras Local Governance Activity / USAID Honduras.

Though it might seem like a relatively small gap, because mobile phones and smartphones have surpassed computers as the primary way people access the internet, that statistic translates to 310 million fewer women than men online in LMICs. Without access to the internet, women cannot fully participate in the economy, pursue educational opportunities, or make full use of legal and social support systems.

The digital gender divide does not stop at access to the internet, however; it also encompasses the gap in how women and men use the internet once they get online. Studies show that even when women own mobile phones, they tend to use them less frequently and less intensively than men, especially for more sophisticated services such as searching for information, looking for jobs, or engaging in civic and political spaces. Additionally, there is less locally relevant content available to women internet users, because women are more often content consumers than content creators. Women also face greater barriers to using the internet in innovative and recreational ways, due to unwelcoming online communities and cultural expectations that the internet is not for women, or that women should participate online only in the context of their duty to their families.

The digital gender divide is also apparent in the exclusion of women from leadership or development roles in the information and communications technology (ICT) sector. In fact, the proportion of women working in the ICT sector has been declining over the last 20 years. According to a 2023 report, in the United States alone, women only hold around 23% of programming and software development jobs, down from 37% in the 1980s. This contributes to software, apps, and tools rarely reflecting the unique needs that women have, further alienating them. Apple, for instance, whose tech employees were 75.1% male in 2022, did not include a menstrual cycle tracker in its Health app until 2019, five years after it was launched (though it did have a sodium level tracker and blood alcohol tracker during that time).

Ward nurses providing vaccines in Haiti. Closing the gender digital divide is key to global public health efforts. Photo credit: Karen Kasmauski / MCSP and Jhpiego

A NOTE ON GENDER TERMINOLOGY
All references to “women” (except those in specific external studies or surveys, whose terminology was set by their respective authors) are gender-inclusive of girls, women, and any person or persons identifying as a woman.

Why is there a digital gender divide?

At the root of the digital gender divide are entrenched traditional gender inequalities: gender bias, socio-cultural norms, lack of affordability and digital literacy, digital safety issues, and women’s lower comfort levels, compared to men’s, navigating and existing in the digital world. While all of these factors play a part in keeping women from achieving equity in their access to and use of digital technologies, the relative importance of each depends largely on the region and on individual circumstances.

Affordability

In LMICs especially, the biggest barrier to access is simple: affordability. While the costs of internet access and of devices have been decreasing, they are often still too expensive for many people. While this is true for both genders, women tend to face secondary barriers that keep them from getting access, such as not being financially independent, or being passed over by family members in favor of a male relative. Even when women have access to devices, they are often registered in a male relative’s name. The consequences of this can range from reinforcing the idea that the internet is not a place for women to preventing women from accessing social support systems. In Rwanda, an evaluation of The Digital Ambassador Programme pilot phase found that the costs of data bundles and/or access to devices were prohibitively expensive for a large number of potential women users, especially in the rural areas.

Education

Education is another major barrier for women all over the world. According to 2015 data from the Web Foundation, women in Africa and Asia with at least some secondary education were six times more likely to be online than women with primary schooling or less.

Further, digital skills are required to engage meaningfully with the internet. While digital education varies widely by country (and even within countries), girls are still less likely to go to school overall, and those who do tend to have “lower self-efficacy and interest” in studying Science, Technology, Engineering, and Math (STEM) topics, according to a report by UNICEF and the ITU. STEM topics are often perceived as being ‘for men’ and are therefore less appealing to women and girls. While STEM subjects are not strictly required to use digital technologies, they can expose girls to ICTs and build skills that give them confidence with new and emerging technologies. Studying these subjects is also the first step along the pathway to a career in the ICT field, which is necessary to address the inherent bias in technologies created and distributed largely by men. Without encouragement and confidence in their digital skills, women may shy away from opportunities that are perceived to be technologically advanced, even when those opportunities do not actually require a high level of digital knowledge.

Social Norms

Social norms have an outsized impact on many aspects of the digital gender divide because they can also be a driving factor vis-à-vis other barriers. Social norms look different in different communities; in places where women are round-the-clock caregivers, they often do not have time to spend online, while in other situations women are discouraged from pursuing STEM careers. In other cases, the barriers are more strictly cultural. For example, a report by the OECD indicated that, in India and Egypt, around one-fifth of women believed that the Internet “was not an appropriate place for them” due to cultural reasons.

Online social norms also play a part in preventing women, especially those from LMICs, from engaging fully with the internet. Much of the digital marketplace is dominated by English and other Western languages, which women may have fewer opportunities to learn due to education inequalities. Furthermore, many online communities, especially those traditionally dominated by men, such as gaming communities, are unfriendly to women, often reaching the extent that women’s safety is compromised.

Online Violence

Scarcity of content that is relevant and empowering for women and other barriers that prevent women from participating freely and safely online are also fundamental aspects of the digital gender divide. Even when women access online environments, they face a disproportionate risk of gender-based violence (GBV) online: digital harassment, cyberstalking, doxxing, and the non-consensual distribution of images (e.g., “revenge porn”). Gender minorities are also targets of online GBV. Trans activists, for example, have experienced increased vulnerability in digital spaces, especially as they have become more visible and vocal. Cyber harassment of women is so extreme that the UN’s High Commissioner for Human Rights has warned, “if trends continue, instead of empowering women, online spaces may actually widen sex and gender-based discrimination and violence.”

This barrier is particularly harmful to democracy, as the internet has become a key venue for political discussion and activism. Research conducted by the National Democratic Institute has demonstrated that women and girls at all levels of political engagement and in all democratic sectors, from the media to elected office, are affected by the “‘chilling effect’ that drives politically-active women offline and in some cases out of the political realm entirely.” Furthermore, women in the public eye, including women in politics and leadership positions, are more often targeted by this abuse; in many cultures it is considered “the cost of doing business” for women who participate in the democratic conversation, and it is simply accepted.

“…if trends continue, instead of empowering women, online spaces may actually widen sex and gender-based discrimination and violence.”

UN’s High Commissioner for Human Rights


How is the digital gender divide relevant in civic space and for democracy?

The UN recognizes the importance of women’s inclusion and participation in a digital society. The fifth Sustainable Development Goal (SDG) calls to “enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women.” Moreover, women’s digital inclusion and technological empowerment are relevant to achieving quality education, creating decent work and economic growth, reducing inequality, and building peaceful and inclusive institutions. While digital technologies offer unparalleled opportunities in areas ranging from economic development to health improvement, to education, cultural development, and political participation, gaps in access to and use of these technologies and heightened safety concerns exacerbate gender inequalities and hinder women’s ability to access resources and information that are key to improving their lives and the wellbeing of their communities.

Further, the ways in which technologies are designed and employed, and how data are collected and used impact men and women differently, often because of existing disparities. Whether using technologies to develop artificial intelligence systems and implement data protection frameworks or just for the everyday uses of social media, gender considerations should be at the center of decision–making and planning in the democracy, rights, and governance space.

Students in Zanzibar. Without access to the internet, women and girls cannot fully engage in economies, participate in educational opportunities, or access legal systems. Photo credit: Morgana Wingard / USAID.

Initiatives that ignore gender disparities in access to the internet and in ownership and use of mobile phones and other devices will exacerbate existing gender inequalities, especially for the most vulnerable and marginalized populations. In the context of the Covid-19 pandemic and the increase in GBV during lockdowns, technology provided some with resources to address GBV, but it also created new ways to exploit women and chill online discourse. Millions of women and non-binary individuals who faced barriers to accessing the internet and connected devices were left with limited avenues for help, whether via instant messaging services, calls to domestic abuse hotlines, or discreet apps that provide disguised support and information to survivors in case of surveillance by abusers. Furthermore, the shift to greater reliance on technology for work, school, medical care, and other basic aspects of life further limited these women’s engagement in society and exposed women who were active online to more online GBV.

Most importantly, initiatives in the civic space must recognize women’s agency and knowledge and be gender-inclusive from the design stage. Women must participate as co-designers of programs and be engaged as competent members of society with equal potential to devise solutions rather than perceived as passive victims.


Opportunities

There are a number of different areas to engage in that can have a positive impact in closing the digital gender divide. Read below to learn how to more effectively and safely think about some areas that your work may already touch on (or could include).

Widening job and education opportunities

In 2018, the ITU projected that 90% of future jobs will require ICT skills, and employers are increasingly citing digital skills and literacy as necessary for future employees according to the World Economic Forum. As traditional analog jobs in which women are overrepresented (such as in the manufacturing, service, and agricultural sectors) are replaced by automation, it is more vital than ever that women learn ICT skills to be able to compete for jobs. While digital literacy is becoming a requirement for many sectors, new, more flexible job opportunities are also becoming more common, and are eliminating traditional barriers to entry, such as age, experience, or location. Digital platforms can enable women in rural areas to connect with cities, where they can more easily sell goods or services. And part-time, contractor jobs in the “gig economy” (such as ride sharing, food delivery, and other freelance platforms) allow women more flexible schedules that are often necessitated by familial responsibilities.

The internet also expands education opportunities for girls and women. Online education programs, such as those for refugees, are reaching more and more learners, including girls. Online learning also gives those who missed out on education as children another chance to learn at their own pace, with the flexibility in time and location that women’s responsibilities may require, and may allow women’s participation in class to be more in proportion to that of men.

Increasing access to financial services

The majority of the world’s unbanked population is women. Women are more likely than men to lack credit history and the mobility to go to the bank. As such, financial technologies can play a large equalizing role, not only in terms of access to tools but also in terms of how financial products and services could be designed to respond to women’s needs. In the MENA region, for example, where 54% of men but only 42% of women have bank accounts, and up to 14 million unbanked adults in the region send or receive domestic remittances using cash or an over-the-counter service, opportunities to increase women’s financial inclusion through digital financial services are promising. Several governments have experimented with mobile technology for Government to People (G2P) payments. Research shows that this has reduced the time required to access payments, but the new method does not benefit everyone equally. When designing programs like this, it is necessary to keep in mind the digital gender divide and how women’s unique positioning will impact the effectiveness of the initiative.

Policy change for legal protections

There are few legal protections for women and gender-diverse people who seek justice for the online abuse they face. According to a 2015 UN Broadband Commission report, only one in five women live in a country where online abuse is likely to be punished. In many countries, perpetrators of online violence act with impunity, as laws have not been updated for the digital world, even when online harassment results in real-world violence. In the Democratic Republic of Congo (DRC), for instance, there are no laws that specifically protect women from online harassment, and women who have brought related crimes to the police risk being prosecuted for “ruining the reputation of the attacker.” And when cyber legislation is passed, it is not always effective. Sometimes it even results in the punishment of victimized women: women in Uganda have been arrested under the Anti-Pornography Act after ex-partners released “revenge porn” (nude photos of them posted without their consent) online. As many of these laws are new, and technologies are constantly changing, there is a need for lawyers and advocates to understand existing laws and gaps in legislation to propose policies and amend laws to allow women to be truly protected online and safe from abuse.

The European Union’s Digital Services Act (DSA), adopted in 2022, is landmark legislation regulating platforms. The act may force platforms to thoroughly assess threats to women online and enact comprehensive measures to address those threats. However, the DSA is newly introduced and how it is implemented will determine whether it is truly impactful. Furthermore, the DSA is limited to the EU, and, while other countries and regions may use it as a model, it would need to be localized.

Making the internet safe for women requires a multi-stakeholder approach in which governments work in collaboration with the private sector and nonprofits. Technology companies have a responsibility to the public to provide solutions and to support women who are attacked on their platforms or through their tools. Not only is this a necessary pursuit for ethical reasons but, because women make up a very significant share of these tools’ users, there is consumer demand for solutions. Many of the interventions created to address this issue have come from private companies. Block Party, for example, was a tool that gave users control to block harassment on Twitter. It was financially successful until Twitter drastically raised the cost of access to the Twitter API, forcing Block Party to close. Despite financial and economic incentives to protect women online, platforms are currently falling short.

While most platforms ban online gender-based violence in their terms and conditions, there are rarely real punishments for violating this ban or effective solutions to protect those attacked. The best that can be hoped for is to have offending posts removed, and this is rarely done in a timely manner. The situation is even worse for non-English posts, which are often misinterpreted: offensive slang is ignored while common phrases are censored. Furthermore, the way reporting systems are structured puts the burden on those attacked to sort through violent and traumatizing messages and convince the platform to remove them.

Nonprofits are uniquely placed to address online gendered abuse because they can and have moved more quickly than governments or tech companies to make and advocate for change. Nonprofits provide solutions, conduct research on the threat, facilitate security training, and develop recommendations for tech companies and governments. Furthermore, they play a key role in facilitating communication between all the stakeholders.

Digital security education and digital literacy training

Digital-security education can help women (especially those at higher risk, like human rights defenders and journalists) stay safe online and attain critical knowledge to survive and thrive politically, socially, and economically in an increasingly digital world. However, there are not enough digital-safety trainers who understand the context and challenges at-risk women face, and few digital-safety resources provide contextualized guidance around the unique threats women face or offer usable solutions to the problems they need to solve. Furthermore, social and cultural pressures can prevent women from attending digital-safety trainings. Women can and will be content creators and build resources for themselves and others, but they first must be given the chance to learn about digital safety and security as part of a digital-literacy curriculum. Men and boys, too, need training on online harassment and digital-safety education.

Connecting and campaigning on issues that matter

Digital platforms enable women to connect with each other, build networks, and organize on justice issues. For example, the #MeToo movement against sexual misconduct in the media industry, which became a global movement, has allowed a multitude of people to participate in activism previously bound to a certain time and place. Read more about digital activism in the Social Media primer.

Beyond campaigning for women’s rights, the internet provides a low-cost way for women to get involved in the broader democratic conversation. Women can run for office, write for newspapers, and express their political opinions with only a phone and an internet connection. This is a much lower barrier than the past, when reaching a large crowd required a large financial investment (such as paying for TV advertising), and women had less control over the message being expressed (for example, media coverage of women politicians disproportionately focusing on physical appearance). Furthermore, the internet is a resource for learning political skills. Women with digital literacy skills can find courses, blogs, communities, and tools online to support any kind of democratic work.


Risks

Young women at a digital inclusion center in the Peruvian Amazon. Photo credit: Jack Gordon / USAID / Digital Development Communications.

There are many factors that threaten to widen the digital gender divide and prevent technology from being used to increase gender equality. Read below to learn about some of these elements, as well as how to mitigate the negative consequences they present for the digital gender divide.

Considering the digital gender divide a “women’s issue”

The gender digital divide is a cross-cutting issue, affecting countries, societies, communities, and families; it is not just a “women’s issue.” When people dismiss it as a niche concern, they limit the resources devoted to it and produce ineffective solutions that do not address the full scope of the problem. Closing the gender gap in access to, use of, and development of technology demands the involvement of societies as a whole. Approaches to closing the divide must be holistic, take into account context-specific power and gender dynamics, and include the active participation of men in the relevant communities to make a sustained difference.

Further, the gender digital divide should not be understood as restricted to the technology space, but as a social, political, and economic issue with far-reaching implications, including negative consequences for men and boys.

Disasters and crises intensify the education gap for women

Women’s and girls’ education opportunities are more tenuous during crises. Increasing domestic and caregiving responsibilities, a shift towards income generation, pressure to marry, and gaps in digital-literacy skills mean that many girls will stop receiving an education, even where access to the internet and distance-learning opportunities are available. In Ghana, for example, 16% of adolescent boys have digital skills compared to only 7% of girls. Similarly, lockdowns and school closures due to the Covid-19 pandemic had a disproportionate effect on girls, increasing the gender gap in education, especially in the most vulnerable contexts. According to UNESCO, more than 111 million girls who were forced out of school in March 2020 live in countries where gender disparities in education are already the highest. In Mali, Niger, and South Sudan, countries with some of the lowest enrollment and completion rates for girls, closures left over 4 million girls out of school.

Online violence increases self-censorship and chills political engagement

Online GBV has proven an especially powerful tool for undermining women and women-identifying human-rights defenders, civil society leaders, and journalists, leading to self-censorship, weakening women’s political leadership and engagement, and restraining women’s self-expression and innovation. According to a 2021 Economist Intelligence Unit (EIU) report, 85% of women have been the target of or witnessed online violence, and 50% of women feel the internet is not a safe place to express their thoughts and opinions. This violence is particularly damaging for those with intersecting marginalized identities. If these trends are not addressed, closing the digital divide will never be possible, as many women who do get online will be pushed off because of the threats they face there. Women journalists, activists, politicians, and other female public figures are the targets of threats of sexual violence and other intimidation tactics. Online violence against journalists leads to journalistic self-censorship, affecting the quality of the information environment and democratic debate.

Online violence chills women’s participation in the digital space at every level. In addition to its impact on women political leaders, online harassment affects how women and girls who are not direct victims engage online. Some girls, witnessing the abuse their peers face online, are intimidated into not creating content. This form of violence is also used as a tool to punish and discourage women who don’t conform to traditional gender roles.

Solutions include education (training women on digital security to feel comfortable using technology and training men and boys on appropriate behavior in online environments), policy change (advocating for the adoption of policies that address online harassment and protect women’s rights online), and technology change (addressing the barriers to women’s involvement in the creation of tech to decrease gender disparities in the field and help ensure that the tools and software that are available serve women’s needs).

Artificial intelligence systems exacerbate biases

The underrepresentation of women in leadership and in the development, coding, and design of AI and machine-learning systems reinforces gender inequalities through the replication of stereotypes and the maintenance of harmful social norms. For example, digital assistants such as Apple’s Siri and Amazon’s Alexa, designed by groups of predominantly male engineers, use women-sounding voices, reinforcing entrenched gender biases, such as that women are more caring, sympathetic, cordial, and even submissive.

In 2019, UNESCO released “I’d blush if I could”, a research paper whose title was based on the response given by Siri when a human user addressed “her” in an extremely offensive manner. The paper noted that although the system was updated in April 2019 to reply to the insult more flatly (“I don’t know how to respond to that”), “the assistant’s submissiveness in the face of gender abuse remain[ed] unchanged since the technology’s wide release in 2011.” UNESCO suggested that by rendering the voices as women-sounding by default, tech companies were preconditioning users to rely on antiquated and harmful perceptions of women as subservient and failed to build in proper safeguards against abusive, gendered language.

Further, machine-learning systems rely on data that reflect larger gender biases. A group of researchers from Microsoft Research and Boston University trained a machine learning algorithm on Google News articles, and then asked it to complete the analogy: “Man is to Computer Programmer as Woman is to X.” The answer was “Homemaker,” reflecting the stereotyped portrayal and the deficit of women’s authoritative voices in the news. (Read more about bias in artificial intelligence systems in the Artificial Intelligence and Machine Learning Primer section on Bias in AI and ML).
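The analogy test described above can be sketched in a few lines: word embeddings represent words as vectors, and the query “a is to b as c is to ?” is answered by finding the word whose vector is closest to b − a + c. The tiny vectors below are invented purely for illustration (real embeddings are learned from large text corpora and have hundreds of dimensions), with the gender skew deliberately baked in to mirror the bias the researchers observed.

```python
import numpy as np

# Toy word vectors, invented for illustration only. The first dimension
# encodes a deliberate gender skew, mirroring the bias that real embeddings
# absorb from their training text.
vectors = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.9, 0.8, 0.3]),
    "homemaker":  np.array([-0.9, 0.8, 0.3]),
    "engineer":   np.array([ 0.8, 0.9, 0.2]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the nearest neighbor of b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]

    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Exclude the query words themselves, as standard analogy tools do.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # -> homemaker (with these toy vectors)
```

With unbiased embeddings, “programmer” would sit equally close to “man” and “woman” and the analogy would have no gendered answer; the study’s point is that embeddings trained on real news text encode this skew automatically.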

In addition to preventing the reinforcement of gender stereotypes, increasing the participation of women in tech leadership and development adds a gendered lens to the field and enhances the ways in which new technologies can improve women’s lives. For example, period tracking was at first left out of health applications, and later, tech companies were slow to address concerns from US users after Roe v. Wade was overturned and the privacy of period-tracking data became a concern in the US.

New technologies allow for the increased surveillance of women

Surveillance is of particular concern to those working in closed and closing spaces, whose governments see them as a threat due to their activities promoting human rights and democracy. Research conducted by Privacy International shows that there is a uniqueness to the surveillance faced by women and gender non-conforming individuals. From data privacy implications related to menstrual-tracker apps, which might collect data without appropriate informed consent, to the ability of women to privately access information about sexual and reproductive health online, to stalkerware and GPS trackers installed on smartphones and internet of things (IoT) devices by intimate partners, pervasive technology use has exacerbated privacy concerns and the surveillance of women.

Research conducted by the CitizenLab, for example, highlights the alarming breadth of commercial software that exists for the explicit purpose of covertly tracking another’s mobile device activities, remotely and in real-time. This could include monitoring someone’s text messages, call logs, browser history, personal calendars, email accounts, and/or photos. Education on digital security and the risks of data collection is necessary so women can protect themselves online, give informed consent for data collection, and feel comfortable using their devices.

Increased technological unemployment

Job losses caused by the replacement of human labor with automated systems lead to “technological unemployment,” which disproportionately affects women, the poor, and other vulnerable groups, unless they are re-skilled and provided with adequate protections. Automation also requires skilled labor that can operate, oversee, and/or maintain automated systems, eventually creating jobs for a smaller section of the population. But the immediate impact of this transformation of work can be harmful for people and communities without social safety nets or opportunities for finding other work.


Questions

Consider these questions when developing or assessing a project proposal that works with women or girls (which is pretty much all of them):

  1. Have women been involved in the design of your project?
  2. Have you considered the gendered impacts and unintended consequences of adopting a particular technology in your work?
  3. How are differences in access and use of technology likely to affect the outcomes of your project?
  4. Are you employing technologies that could reinforce harmful gender stereotypes or fail the needs of women participants?
  5. Are women exposed to additional safety concerns (compared to men) brought about by the use of the tools and technologies adopted in your project?
  6. Have you considered gaps in sex- or gender-disaggregated data in the dataset used to inform the design and implementation of your project? How could these gaps be bridged through additional primary or secondary research?
  7. How can your project meaningfully engage men and boys to address the gender digital divide?
  8. How can your organization’s work help mitigate and eventually close the gender digital divide?


Case studies

There are many examples of programs that are engaging with women to have a positive effect on the digital gender divide. Find out more about a few of these below.

USAID’s WomenConnect Challenge

In 2018, USAID launched the WomenConnect Challenge to enable women’s access to, and use of, digital technologies. The first call for solutions brought in more than 530 ideas from 89 countries, and USAID selected nine organizations to receive $100,000 awards. In the Republic of Mozambique, the development-finance institution GAPI lowered barriers to women’s mobile access by providing offline Internet browsing, rent-to-own options, and tailored training in micro-entrepreneurship for women by region. Another first round awardee, AFCHIX, created opportunities for rural women in Kenya, Namibia, Sénégal, and Morocco to become network engineers and build their own community networks or Internet services. AFCHIX won another award in the third round of funding, which the organization used to integrate digital skills learning into community networks to facilitate organic growth of women using digital skills to create socioeconomic opportunities. The entrepreneurial and empowerment program helps women establish their own companies, provides important community services, and positions these individuals as role models.

Safe Sisters – Empowering women to take on digital security

In 2017, Internews and DefendDefenders piloted the Safe Sisters program in East Africa to empower women to protect themselves against online GBV. Safe Sisters is a digital-safety training-of-trainers program that equips women human rights defenders and journalists who are new to digital safety with techniques and tools to navigate online spaces safely, take informed risks, and take control of their lives in an increasingly digital world. The program was created and is run entirely by women, for women. In it, participants learn digital-security skills and gain hands-on experience by training their own at-risk communities.

In building the Safe Sisters model, Internews has proven that, given the chance, women will dive into improving their understanding of digital safety, use this training to generate new job opportunities, and share their skills and knowledge in their communities. Women can also create context- and language-specific digital-safety resources and will fight for policies that protect their rights online and deter abuse. There is strong evidence of the lasting impact of the Safe Sisters program: two years after the program launched, 80% of the pilot cohort of 13 women were actively involved in digital safety; 10 had earned new professional opportunities because of their participation; and four had changed careers to pursue digital security professionally.

Internet Saathi

In 2015, Google India and Tata Trusts launched Internet Saathi, a program designed to equip women (known as Internet Saathis) in villages across the country with basic Internet skills and provide them with Internet-enabled devices. The Saathis then train other women in digital literacy skills, following the ‘train the trainer’ model. As of April 2019, more than 81,500 Internet Saathis had helped over 28 million women across 289,000 villages learn about the Internet.

Girls in Tech

Girls in Tech is a nonprofit with chapters around the world. Its goal is to close the gender gap in the tech development field. The organization hosts events for girls, including panels and hackathons, which serve the dual purpose of encouraging girls to participate in developing technology and solving local and global issues, such as environmental crises and accessibility issues for people with disabilities. Girls in Tech gives girls the opportunity to get involved in designing technology through learning opportunities like bootcamps and mentorship. The organization hosts a startup pitch competition called AMPLIFY, which gives girls the resources and funding to make their designs a reality.

Women in Tech

Women in Tech is another international nonprofit and network with chapters around the globe that supports diversity, equity, and inclusion in science, technology, engineering, arts, and mathematics (STEAM) fields. It does this through four focus areas: Education, training women for careers in tech through internships, tech awareness sessions, and scholarships; Business, including mentoring programs for women entrepreneurs, workshops, and incubation and acceleration camps; Social Inclusion, ensuring digital literacy programs reach marginalized groups and underprivileged communities; and Advocacy, raising awareness of the digital gender divide and how it can be closed.

EQUALS Global Partnership

The International Telecommunication Union (ITU), GSMA, the International Trade Centre, the United Nations University, and UN Women founded the EQUALS Global Partnership to tackle the digital gender divide through research, policy, and programming. EQUALS breaks the path to gender equality in technology into four core issue areas: Access, Skills, Leadership, and Research. The Partnership runs a number of programs, some in collaboration with other organizations, that specifically target these issue areas. One research program, Fairness AI, examines bias in AI, while the Digital Literacy Pilot Programmes, a collaboration among the World Bank, GSMA, and the EQUALS Access Coalition, teach digital literacy to women in Rwanda, Uganda, and Nigeria. More information about the EQUALS Global Partnership’s projects can be found on its website.

Regional Coding Camps and Workshops

Many initiatives to address the digital gender divide use training to build girls’ and women’s confidence in tech industries, because access to technology alone is only one factor contributing to the divide. Because cultural obligations often play a key role, and because technology is more intimidating when taught in a non-native language, many of these educational programs are localized. One example is the African Girls Can Code Initiative (AGCCI), created by UN Women, the African Union Commission (AUC), and the ITU. The initiative trains women and girls between the ages of 17 and 25 in coding and information and communications technology (ICT) skills to encourage them to pursue education and careers in these fields. AGCCI works to close the digital gender divide both by increasing women’s and girls’ knowledge of the field and by mainstreaming women within it, tackling underlying norms.

Mentorship Programs

Many interventions to encourage women’s engagement in technology also use mentorship programs. Some use direct peer mentorship, while others connect women with role models through interviews or conferences. Drawing on successful women as mentors is effective because succeeding in tech requires more than technical skills: women must navigate gender- and culture-specific barriers that only other women with the same lived experiences can fully understand. Furthermore, by elevating mentors, these interventions put women tech leaders in the spotlight, helping to shift norms and expectations around women’s authority in the tech field. The Women in Cybersecurity Mentorship Programme, created by the ITU, EQUALS, and the Forum of Incident Response and Security Teams (FIRST), is one example. It elevates women leaders in the cybersecurity field and serves as a resource for women at all levels to share professional best practices. Google Summer of Code is another, broader mentorship opportunity (open to all genders): applicants propose a coding project they want to develop, and mentors introduce them to the norms and standards of the open-source community as they build the project as open source.

Outreachy is an internship program that aims to increase diversity in the open-source community. Applicants are considered if they are affected by underrepresentation in tech where they live. Interns choose from a number of projects; internships last three months, are conducted remotely, and include a stipend of USD 7,000 to lower barriers to participation for marginalized groups.

USAID/Microsoft Airband Initiative

The USAID/Microsoft Airband Initiative takes localized approaches to addressing the digital gender divide. For each region, partner organizations, which are local technology companies, work in collaboration with local gender inequality experts to design a project to increase connectivity, with a focus on women’s connectivity and reducing the digital gender divide. Making tech companies the center of the program helps to address barriers like determining sustainable price points. The second stage of the program utilizes USAID and Microsoft’s resources to scale up the local initiatives. The final stage looks to capitalize on the first two stages, recruiting new partners and encouraging independent programs.

UN Women’s Second Chance Education (SCE) Programme

UN Women’s Second Chance Education (SCE) Programme uses e-learning to build literacy and digital literacy, especially among women and girls who missed out on traditional education opportunities. The program was piloted between 2018 and 2023 in six countries spanning different contexts, from humanitarian crises to middle-income settings, and reached refugees, migrants, and indigenous peoples. The pilot was successful overall, though internet access remains a challenge for vulnerable groups; blended learning (combining online and offline components) proved particularly effective, adapting well to participants’ unique needs, schedules, and challenges.




*A NOTE ON GENDER TERMINOLOGY

All references to “women” (except those that reference specific external studies or surveys, which have been set by those respective authors) are gender-inclusive of girls, women, or any person or persons identifying as a woman.

While much of this article focuses on women, people of all genders are harmed by the digital gender divide, and marginalized gender groups that do not identify as women face some of the same challenges utilizing the internet and have some of the same opportunities to use the internet to address offline barriers.


Extended Reality / Augmented Reality / Virtual Reality (XR/AR/VR)

What is Extended Reality (XR)?

Extended Reality (XR) is a collective term encompassing Augmented Reality (AR) and Virtual Reality (VR), technologies that transform our interaction with the world by either enhancing or completely reimagining our perception of reality.

Utilizing advancements in computer graphics, sensors, cameras, and displays, XR creates immersive experiences that range from overlaying digital information onto our physical surroundings in AR, to immersing users in entirely digital environments in VR. XR represents a significant shift in how we engage with and perceive digital content, offering intuitive and natural interfaces for a wide range of applications in various sectors, including democracy, human rights, and governance.

What is Virtual Reality (VR)?

Virtual Reality (VR) is a technology that immerses users in a simulated three-dimensional (3D) environment, allowing them to interact with it in a way that simulates real-world experiences, engaging senses like sight, hearing, and touch. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds.

VR uses a specialized headset, known as a VR Head Mounted Display (HMD), to create a 3D, computer-generated world that fully encompasses your vision and hearing. This immersive technology not only visualizes but also enables interaction through hand controllers. These controllers provide haptic feedback, a feature that simulates the sense of touch, enhancing the realism of the virtual experience. VR’s most notable application is in immersive gaming, where it allows players to fully engage in complex fantasy worlds.

What is Augmented Reality (AR)?

Augmented Reality (AR) is a technology that overlays digital information and objects onto the real world, enhancing what we see, hear, and feel. For instance, it can be used as a tourist application to help a user find her way through an unfamiliar city and identify restaurants, hotels, and sights. Rather than immersing the user in an imaginary or distant virtual world, the physical world is enhanced by augmenting it in real time with digital information and entities.

AR became widely popular in 2016 with the game Pokémon Go, in which players found virtual characters in real locations, and Snapchat, which adds playful filters like funny glasses to users’ faces. AR is used in more practical applications as well, such as aiding surgeries, enhancing car displays, and visualizing furniture in homes. Its seamless integration with the real world and its focus on enhancing, rather than replacing, reality position AR as a potential key player in future web and metaverse technologies, replacing traditional computing interfaces like phones and desktops by accurately blending real and virtual elements in real time.

What is Mixed Reality (MR)?

Mixed Reality (MR) is a technology that merges real-world and digital elements. It combines elements of Virtual Reality (VR), which creates a completely computer-generated environment, with Augmented Reality (AR), which overlays digital information onto the real world. In MR, users can seamlessly interact with both real and virtual objects in real time. Digital objects in MR are designed to respond to real-world conditions like light and sound, making them appear more realistic and integrated into our physical environment. Unlike VR, MR does not fully replace the real world with a digital one; instead, it enhances your real-world experience by adding digital elements, providing a more interactive and immersive experience.

MR has diverse applications, such as guiding surgeons in minimally invasive procedures using interactive 3D images and models through MR headsets. MR devices are envisioned as versatile tools poised to deliver value across multiple domains.

What is the Metaverse?

The Metaverse, a term first coined in the 1992 novel “Snow Crash,” is an immersive, interconnected virtual world in which people use avatars to interact with each other and digital environments via the internet. It blends the physical and digital realms using Extended Reality (XR) technologies like AR and VR, creating a space for diverse interactions and community building. Gaining traction through advancements in technology and investments from major companies, the Metaverse offers a platform that mirrors real-world experiences in a digitally enhanced environment, allowing simultaneous connections among numerous users.

How the metaverse leverages XR: a metaverse can be built using VR technology to create a virtual metaverse, or using AR technology to create an augmented metaverse. (Figure adapted from Ahmed et al., 2023)

A metaverse can span the spectrum of virtuality and may incorporate a “virtual metaverse” or an “augmented metaverse” as shown above. Features of this technology range from employing avatars within virtual realms to utilizing smartphones for accessing metaverse environments, and from wearing AR glasses that superimpose computer-generated visuals onto reality, to experiencing MR scenarios that flawlessly blend elements from both the physical and virtual domains.

Spectrum ranging from Reality to Virtuality [Milgram and Kishino (1994) continuum.] Figure adapted from “Reality Media” (Source: Bolter, Engberg, & MacIntyre, 2021)
The above figure illustrates a spectrum from the real environment (physical world) at one end to a completely virtual environment (VR) at the other. Augmented Reality (AR) and Augmented Virtuality (AV) are placed in between, with AR mostly showing the physical world enhanced by digital elements, and AV being largely computer-generated but including some elements from the real world. Mixed Reality (MR) is a term for any combination of the physical and virtual worlds along this spectrum.


How is AR/VR relevant in civic space and for democracy?

In the rapidly evolving landscape of technology, the potential of AR/VR stands out, especially in its relevance to democracy, human rights, and governance (DRG). These technologies are not just futuristic concepts; they are tools that can reshape how we interact with the world and with each other, making them vital for the DRG community.

At the forefront is the power of AR/VR to transform democratic participation. These technologies can create immersive and interactive platforms that bring the democratic process into the digital age. Imagine participating in a virtual town hall from your living room, debating policies with avatars of people from around the world. This is not just about convenience; it’s about enhancing engagement, making participation in governance accessible to all, irrespective of geographic or physical limitations.

Moreover, AR/VR technologies offer a unique opportunity to give voice to marginalized communities. Through immersive experiences, people can gain a visceral understanding of the challenges faced by others, fostering empathy and breaking down barriers. For instance, VR experiences that simulate the life of someone living in a conflict zone or struggling with poverty can be powerful tools in human rights advocacy, making abstract issues concrete and urgent.

Another significant aspect is the global collaboration facilitated by AR/VR. These technologies enable DRG professionals to connect, share experiences, and learn from each other across borders. Such collaboration is essential in a world where human rights and democratic values are increasingly interdependent. The global exchange of ideas and best practices can lead to more robust and resilient strategies in promoting democracy and governance.

The potential of AR/VR in advocacy and awareness is significant. Traditional methods of raising awareness about human rights issues can be complemented and enhanced by the immersive nature of these technologies. They bring a new dimension to storytelling, allowing people to experience rather than just observe. This could be a game-changer in how we mobilize support for causes and educate the public about critical issues.

However, navigating the digital frontier of AR/VR technology calls for a vigilant approach to data privacy, security, and equitable access, recognizing these as not only technical challenges but also human rights and ethical governance concerns.

The complexity of governing these technologies necessitates the involvement of elected governments and representatives to address systemic risks, foster shared responsibility, and protect vulnerable populations. This governance extends beyond government oversight, requiring the engagement of a wide range of stakeholders, including industry experts and civil society, to ensure fair and inclusive management. The debate over governance approaches ranges from advocating government regulation to protect society, to promoting self-regulation for responsible innovation. A potentially effective middle ground is co-regulation, where governments, industry, and relevant stakeholders collaborate to develop and enforce rules. This balanced strategy is crucial for ensuring the ethical and impactful use of AR/VR in enhancing democratic engagement and upholding human rights.


Opportunities

AR/VR offers a diverse array of applications in the realms of democracy, human rights, and governance. The following section delves into various opportunities that AR/VR technology brings to civic and democracy work.

Augmented Democracy

Democracy is much more than just elections and governance by elected officials. A fully functional democracy is characterized by citizen participation in the public space; participatory governance; freedom of speech and opportunity; access to information; due legal process and enforcement of justice; and protection from abuse by the powerful. Chilean physicist César Hidalgo, formerly director of the Collective Learning group at the MIT Media Lab, has worked on an ambitious project he calls “Augmented Democracy.” Augmented Democracy banks on the idea of using technology such as AR/VR, along with other digital tools including AI and digital twins, to expand people’s ability to participate directly in a large volume of democratic decisions. Citizens can be represented in the virtual world by a digital twin, an avatar, or a software agent, and through such technology can participate more fully in public policy issues in a scalable, convenient fashion. Hidalgo asserts that democracy can be enhanced by using technology to automate many tasks of government; in the future, politicians and citizens will be supported by algorithms and specialist teams, fostering a collective intelligence that serves the people more effectively.

Participatory Governance

AR/VR opens up enhanced opportunities to participate in governance. Used in conjunction with other technologies such as AI, participatory governance becomes feasible at scale, with the voices of citizens and their representatives incorporated into decisions on public policy and welfare. However, “a participatory public space” is only one possibility. As we shall see later in the Risks section, we cannot deterministically ascribe outcomes to technology, because the intent and purpose of deployment matter a great deal. If due care is not exercised, the use of technology in public spaces may instead produce less desirable scenarios such as “autocratic augmented reality” or “big-tech monopoly” (Gudowsky et al.). On the other hand, a well-structured metaverse could enable greater democratic participation and offer citizens new ways to engage in civic affairs, leading to more inclusive governance. For instance, virtual town hall meetings, debates, and community forums could bring together people from diverse backgrounds, overcoming geographical barriers and promoting democratic discussion. AR/VR could also facilitate virtual protests and demonstrations, providing a safe platform for expression in regions where physical gatherings are restricted.

AR/VR in Healthcare

Perhaps the most well-known applications of AR/VR in the civic space pertain to the healthcare and education industries. The benefit of AR/VR for healthcare is well-established and replicated through multiple scientific studies. Even skeptics, typically doubtful of AR/VR/metaverse technology’s wider benefits, acknowledge its proven effectiveness in healthcare, as noted by experts like Bolter et al. (2021) and Bailenson (2018). These technologies have shown promise in areas such as therapeutics, mental therapy, emotional support, and specifically in exposure therapy for phobias and managing stress and anxiety. Illustrating this, Garcia-Palacios et al. (2002) demonstrated the successful use of VR in treating spider phobia through a controlled study, further validating the technology’s therapeutic potential.

AR/VR in Education

Along with healthcare, education and training provide the most compelling use cases of immersive technologies such as AR/VR. The primary value of AR/VR is that it provides a unique first-person immersive experience that can enhance human perception and educate or train learners in the relevant environment. Thus, with AR/VR, education is not reading about a situation or watching it, but being present in that situation. Such training can be useful in a wide variety of fields. For instance, Boeing presented results of a study that suggested that training performed through AR enabled workers to be more productive and assemble plane wings much faster than when instructions were provided using traditional methods. Such training has also been shown to be effective in diversity training, where empathy can be engendered through immersive experiences.

Enhanced Accessibility and Inclusiveness

AR/VR technology allows for the creation of interactive environments that can be customized to meet the needs of individuals with various abilities and disabilities. For example, virtual public spaces can be adapted for those with visual impairments by focusing on other senses, using haptic (touch-based) or audio interfaces for enhanced engagement. People who are colorblind can benefit from a ‘colorblind mode’ – a feature already present in many AR/VR applications and games, which adjusts colors to make them distinguishable. Additionally, individuals who need alternative communication methods can utilize text-to-speech features, even choosing a unique voice for their digital avatars. Beyond these adaptations, AR/VR technologies can help promote workplace equity, through offering people with physical disabilities equal access to experiences and opportunities that might otherwise be inaccessible, leveling the playing field in both social and professional settings.

Generating Empathy and Awareness

AR/VR presents a powerful usability feature through which users can experience what it is like to be in the shoes of someone else. Such perspective-enhancing use of AR/VR can be used to increase empathy and promote awareness of others’ circumstances. VR expert Jeremy Bailenson and his team at Stanford Virtual Human Interaction Lab have worked on VR for behavior change and have created numerous first-person VR experiences to highlight social problems such as racism, sexism, and other forms of discrimination (see some examples in Case Studies). In the future, using technology in real time with AR/VR-enabled, wearable and broadband wireless communication, one may be able to walk a proverbial mile in someone else’s shoes in real time, raising greater awareness of the difficulties faced by others. Such VR use can help in removing biases and in making progress on issues such as poverty and discrimination.

Immersive Virtual Communities and Support Systems

AR/VR technologies offer a unique form of empowerment for marginalized communities, providing a virtual space for self-expression and authentic interaction. These platforms enable users to create avatars and environments that truly reflect their identities, free from societal constraints. This digital realm fosters social development and offers a safe space for communities often isolated in the physical world. By connecting these individuals with broader networks, AR/VR facilitates access to educational and support resources that promote individual and communal growth. Additionally, AR/VR serves as a digital archive for diverse cultures and traditions, aiding in the preservation and celebration of cultural diversity. As highlighted in Jeremy Bailenson’s “Experience on Demand,” these technologies also provide therapeutic benefits, offering emotional support to those affected by trauma. Through virtual experiences, individuals can revisit cherished memories or envision hopeful futures, underscoring the technology’s role in emotional healing and psychological wellbeing.

Virtual Activism

Virtual reality, unlike traditional media, does not provide merely a mediated experience. When it is done well, explains Jeremy Bailenson, it is an actual experience. Therefore, VR can be the agent of long-lasting behavior change and can be more engaging and persuasive than other types of traditional media. This makes AR/VR ideally suited for virtual activism, which seeks to bring actual changes to the life of marginalized communities. For instance, VR has been used by UN Virtual Reality to provide a new lens on an existing migrant crisis; create awareness around climate change; and engender humanitarian empathy. Some examples are elaborated upon in the Case Studies.

Virtual Sustainable Economy

AR/VR and the metaverse could enable new, more sustainable economic models. Decentralized systems like blockchain technology can be used to support digital ownership of virtual assets, economically empower the disenfranchised, and challenge traditional, centralized power structures. Furthermore, since AR/VR and the metaverse promise to be the next evolution of the Internet, one that is more immersive and multi-sensory, individuals may be able to participate in many activities and experiences virtually. This could reduce the need for physical travel and infrastructure, enabling more economical and sustainable living, reducing carbon footprints, and mitigating climate change.


Risks

The use of AR/VR in democracy, human rights, and governance work carries various risks. The following sections will explore these risks in a little more detail. They will also provide strategies on how to mitigate these risks effectively.

Limited applications and Inclusiveness

For AR/VR technologies to be effectively and inclusively used in democratic and other applications, it is essential to overcome several key challenges. Currently, these technologies fall short in areas like advanced tactile feedback, comprehensive sign language support, and broad accessibility for various disabilities. To truly have a global impact, AR/VR must adapt to diverse communication methods, including visual, verbal, and tactile approaches, and cater to an array of languages, from spoken to sign. They should also be designed to support different cognitive abilities and neurodiversity, in line with the principles set by the IEEE Global Initiative on Ethics of Extended Reality. There is a pressing need for content to be culturally and linguistically localized as well, along with the development of relevant skills, making AR/VR applications more applicable and beneficial across various cultural and linguistic contexts.

Furthermore, access to AR/VR technologies and the broader metaverse and XR ecosystem is critically dependent on advanced digital infrastructure, such as strong internet connectivity, high-performance computing systems, and specialized equipment. As noted by Matthew Ball in his 2022 analysis, significant improvements in computational efficiency are necessary to make these technologies widely accessible and capable of delivering seamless, real-time experiences, which is particularly crucial in AR to avoid disruptive delays. Without making these advancements affordable, AR/VR applications at scale remain limited.

Concentration of Power & Monopolies of Corporations

The concept of the Metaverse, as envisioned by industry experts, carries immense potential for shaping the future of human interaction and experience. However, the concentrated control of this expansive digital realm by a single dominant corporation raises critical concerns over the balance of power and authority. As Matthew Ball (2022) puts it, the Metaverse’s influence could eclipse that of governments, bestowing unprecedented authority upon the corporation that commands it. The concentration of power within this immersive digital ecosystem brings forth apprehensions about accountability, oversight, and the potential implications for personal freedoms.

Another significant concern is how companies gather and use our data. While companies can use data to improve their products and people’s lives in many ways, the World Bank (2021) warns that amassing vast amounts of data can give companies outsized economic and political power, which could be used to harm citizens. The more data is reused, the more chances there are for it to be misused. Especially in situations characterized by concentrations of power, such as authoritarian regimes or corporate monopolies, the risks of privacy violations, surveillance, and manipulation become much higher.

Privacy Violation with Expanded Intrusive Digital Surveillance

The emergence of AR/VR technologies has revolutionized immersive experiences but also raises significant privacy concerns due to the extensive data collection involved. These devices collect a wide range of personal data, including biometric information like blood pressure, pulse oximetry, voice prints, facial features, and even detailed body movements. This kind of data gathering poses specific risks, particularly to vulnerable and marginalized groups, as it goes much further than simple identification. Current regulatory frameworks are not adequately equipped to address these privacy issues in the rapidly evolving XR environment. This situation underscores the urgent need for updated regulations that can protect individual privacy in the face of such advanced technological capabilities.

Moreover, AR/VR technologies bring unique challenges in the form of manipulative advertising and potential behavior modification. Using biometric data, these devices can infer users’ deepest desires, leading to highly targeted and potentially invasive advertising that taps into subconscious motivations. Such techniques blur the line between personal privacy and corporate interests, necessitating robust privacy frameworks. Additionally, the potential of AR/VR to influence or manipulate human behavior is a critical concern. As these technologies can shape our perceptions and choices, it is essential to involve diverse perspectives in their design and enforce proactive regulations to prevent irreversible impacts on their infrastructure and business models. Furthermore, the impact of XR technology extends to bystanders, who may unknowingly be recorded or observed, especially with the integration of technologies like facial recognition, posing further risks to privacy and security.

Unintended Harmful Consequences of AR/VR

When introducing AR/VR technology into democracy-related programs or other social initiatives, it is crucial to consider the broader, often unintended, consequences these technologies might have. AR/VR offers immersive experiences that can enhance learning and engagement, but these very qualities also bear risks. For example, while VR can create compelling simulations of real-world scenarios, promoting empathy and understanding, it can also lead to phenomena like “VR Fatigue” or “VR Hangover.” Users might experience a disconnection from reality, feeling alienated from their physical environment or their own bodies. Moreover, the prevalence of “cybersickness,” akin to motion sickness, caused by discrepancies in sensory inputs, can result in discomfort, nausea, or dizziness, detracting from the intended positive impacts of these technologies.

Another significant concern is the potential for AR/VR to shape users’ perceptions and behaviors in undesirable ways. The immersive nature of these technologies can intensify the effects of filter bubbles and echo chambers, isolating users within highly personalized, yet potentially distorted, information spheres. This effect can exacerbate the fragmentation of shared reality, impeding constructive discourse in democratic contexts. Additionally, the blending of virtual and real experiences can blur the lines between factual information and fabrication, making users more susceptible to misinformation. Furthermore, the perceived anonymity and detachment in VR environments might encourage anti-social behavior, as people might engage in actions they would avoid in real life. There is also the risk of empathy, generally a force for good, being manipulated for divisive or exploitative purposes. Thus, while AR/VR holds great promise for enhancing democratic and social programs, potential negative impacts call for careful, ethically guided implementation.

“Too True to Be Good”: Disenchantment with Reality & Pygmalion Effect

In our era of augmented and virtual realities, where digital escapism often seems more enticing than the physical world, there is a growing risk to our shared understanding and democracy as people become disenchanted with reality and retreat into virtual realms (Turkle, 1996; Bailenson, 2018). AR/VR introduces the prospect of individuals gravitating toward virtual worlds at the expense of engaging with physical surroundings they have come to consider “too true to be good.” The use of VR by disadvantaged and exploited populations may provide relief from the hardships of their lived experience, but it also diminishes the likelihood of their resisting those conditions. Moreover, as AR/VR advances and becomes integrated with advanced AI in the metaverse, the lines between the virtual and real worlds risk blurring. Human beings tend to anthropomorphize machines and bots that have humanistic features (e.g., eyes or language) and to treat them as humans (Reeves & Nass, 1996). We might treat AI and virtual entities as if they were human, leading to confusion and challenges in our interactions. There are also severe risks associated with overindulgence in highly realistic immersive experiences (Greengard, 2019). VR expert Denny Unger, CEO of Cloudhead Games, cautions that extreme immersion could extend beyond discomfort to far more severe outcomes, including heart attacks and fatal incidents.

Neglect of Physical Self and Environment

Jeremy Bailenson’s (2018) observation that being present in virtual reality (VR) often means being absent from the real world is a crucial point for those considering VR in democracy and other important work. When people immerse themselves in VR, they can become so focused on the virtual world that they lose touch with what is happening around them. In his book “Experience on Demand,” Bailenson explains how this deep engagement can lead users to neglect their own physical needs and environment, similar to the disconnection from self and surroundings seen in certain psychological conditions. There is also a worry that VR companies might design their products to be addictive, making it hard for users to pull away. This raises important questions about the long-term effects of heavy VR use and highlights the need for strategies to prevent these harms.

Safety and Security

In the realm of immersive technologies, safety is a primary concern. There is a notable lack of understanding about the impact of virtual reality (VR), particularly on young users. Ensuring the emotional and physical safety of children in VR environments requires well-defined guidelines and safety measures. The enticing nature of VR must be balanced with awareness of the real world to protect younger users. Discussions about age restrictions and responsible use of VR are critical in this rapidly advancing technological landscape. Spiegel (2018) argues that age restrictions are needed to shield young users from the potential negative effects of prolonged VR exposure.

On another front, the lack of strong identity verification in virtual spaces raises concerns about identity theft and avatar misuse, particularly affecting children, who could be victims of fraud or wrongly accused of offenses. The absence of effective identity protection increases users’ vulnerability, highlighting the need for advanced security measures. Additionally, virtual violence, like the harassment incidents reported in VR games, poses a significant risk. These are not new issues; Julian Dibbell’s 1993 article “A Rape in Cyberspace” brought attention to the challenge of preventing virtual sexual assault. This underlines the urgent need for policies to address and prevent harassment and violence in VR, ensuring these spaces are safe and inclusive for all users.

Alignment with Values and Meaning-Making

When incorporating AR/VR technologies into programs, it is crucial to be mindful of their significant impact on culture and values. As Neil Postman pointed out, technology invariably shapes culture, often creating a mix of winners and losers. Each technological tool carries inherent biases, whether political, social, or epistemological. These biases subtly influence our daily lives, sometimes without our conscious awareness. Hence, when introducing AR/VR into new environments, consider how these technologies align or conflict with local cultural values. As Nelson and Stolterman (2014) observed, culture is dynamic, caught between tradition and innovation. Engaging the community in the design process can enhance the acceptance and effectiveness of your project.

In the context of democracy, human rights, and governance, it is essential to balance individual desires with the collective good. AR/VR can offer captivating, artificial experiences, but as philosopher Robert Nozick’s (2018) “Experience Machine” thought experiment illustrates, these cannot replace the complexities and authenticity of real-life experiences. People often value reality, authenticity, and the freedom to make life choices over artificial pleasure. In deploying AR/VR, the goal should be to empower individuals, enhancing their participation in democratic processes and enriching their lives, rather than offering mere escapism. Ethical guidelines and responsible design practices are key in ensuring the conscientious use of virtual environments. By guiding users towards more meaningful and fulfilling experiences, AR/VR can be used to positively impact society while respecting and enriching its cultural fabric.


Questions

If you are trying to understand the implications of AR/VR in your DRG work, you should consider the following questions:

  1. Does the AR/VR use enhance human engagement with the physical world and real-world issues, or does it disengage people from the real world? Will the AR/VR tool being developed create siloed spaces that will alienate people from each other and from the real world? What steps have been taken to avoid such isolation?
  2. Can this project be done in the real world and is it really needed in virtual reality? Does it offer any benefit over doing the same thing in the real world? Does it cause any harm compared to doing it in the real world?
  3. In deploying AR/VR technology, consider if it might unintentionally reinforce real-world inequalities. Reflect on the digital and economic barriers to access: Is your application compatible with various devices and networks, ensuring wide accessibility? Beware of creating a “reality divide,” in which marginalized groups are pushed towards virtual alternatives while others enjoy physical experiences. Always consider offering real-world options for those less engaged with AR/VR, promoting inclusivity and broad participation.
  4. Have the system-level repercussions of AR/VR technology usage been considered? What will be the effect of the intervention on the community and the society at large? Might the proposed technology foster technology addiction? Can unintended consequences be anticipated and such negative risks mitigated?
  5. Which policy and regulatory frameworks are being followed to ensure that AR/VR-related technologies, or more broadly XR and metaverse-related technologies, do not violate human rights and contribute positively to human development and democracy?
  6. Have the necessary steps been taken to accommodate and promote diversity, equity and inclusion so that the technology is appropriate for the needs and sensitivities of different groups? Have the designers and developers of AR/VR taken on board input from underrepresented and marginalized groups to ensure participatory and inclusive design?
  7. Is there transparency in the system regarding what data are collected, who they are shared with and how they are used? Does the data policy comply with international best practices regarding consumer protection and human rights?
  8. Are users given significant choice and control over their privacy, autonomy, and access to their information and online avatars? What measures are in place to limit unauthorized access to data containing private and sensitive information?
  9. If biometric signals are being monitored, or sensitive information such as eye-gaze detection is performed, what steps and frameworks have been followed to ensure that they are used for purposes that are ethical and pro-social?
  10. Is there transparency in the system about the use of AI entities in the Virtual Space such that there is no deception or ambiguity for the user of the AR/VR application?
  11. What steps and frameworks have been followed to ensure that any behavior modification or nudging performed by the technology is guided by ethics and law and is culturally sensitive and pro-social? Is the AR/VR technology complying with the UN Human Rights principles applied to communication surveillance and business?
  12. Has appropriate consideration been given to safeguarding children’s rights if the application is intended for use by children?
  13. Is the technology culturally appropriate? Which cultural effects are likely when this technology is adopted? Will these effects be welcomed by the local population, or will the technology face opposition?


Case Studies

AR/VR can have positive impacts when used to further DRG issues. Read below to learn how to think about AR/VR use more effectively and safely in your work.

UN Sustainable Development Goals (SDG) Action Campaign

Starting in January 2015, the UN SDG Action Campaign has overseen the United Nations Virtual Reality Series (UN VR), aiming to make the world’s most urgent issues resonate with decision makers and people worldwide. By pushing the limits of empathy, this initiative delves into the human narratives underlying developmental struggles. Through the UN VR Series, individuals in positions to effect change gain a more profound insight into the daily experiences of those who are at risk of being marginalized, thereby fostering a deeper comprehension of their circumstances.

UN Secretary-General Ban Ki-moon and WHO Director-General Margaret Chan. © David Gough. Credit: UN VR: https://unvr.sdgactioncampaign.org/

As a recent example of advocacy and activism and of immersive storytelling used to brief decision makers: in April 2022, the United Nations Department of Political and Peacebuilding Affairs (UN DPPA), together with the Government of Japan, released the VR experience “Sea of Islands,” which takes viewers to the Pacific islands, allowing them to witness the profound ramifications of the climate crisis in the Asia-Pacific region. Through this medium, the urgency, magnitude, and critical nature of climate change become tangible and accessible.

Poster of the VR Film: Sea of Islands. Source: https://media.un.org/en/asset/k1s/k1sbvxqll2

VR For Democratizing Access to Education

AR/VR technologies hold great promise in the field of educational technology (“edtech”) due to their immersive capabilities, engaging nature, and potential to democratize access and address issues such as cost and distance (Dick, 2021a). AR/VR can play a crucial role in making abstract concepts understandable and enabling hands-on practice within safe virtual environments, particularly benefiting STEM courses, medical simulations, and arts and humanities studies. Additionally, by incorporating gamified, hands-on learning approaches across various subjects, these technologies enhance cognitive development and classroom engagement. Another advantage is their capacity to offer personalized learning experiences, benefiting all students, including those with cognitive and learning disabilities. An illustrative instance is Floreo, which employs VR-based lessons to teach social and life skills to young people with autism spectrum disorder (ASD). The United Nations Children’s Fund (UNICEF) runs a series of initiatives under its AR/VR for Good Initiative. For example, Nigerian start-up Imisi 3D, founded by Judith Okonkwo, aims to bring VR into the classroom. Imisi 3D’s solution promises to provide quality education tools through VR, enrich children’s learning experiences, and make education accessible to more people.

Source: UNICEF Nigeria/2019/Achirga

Spotlight on Refugees and Victims of War

A number of projects have turned to VR to highlight the plight of refugees and those affected by war. One of UN VR’s first documentaries, released in 2015, is Clouds Over Sidra, the story of a 12-year-old girl, Sidra, who has lived in the Zaʿatari Refugee Camp since the summer of 2013. The storyline follows Sidra around the camp, where some 80,000 Syrians, roughly half of them children, have taken refuge from conflict and turmoil. Through the VR film, Sidra takes audiences on a journey through her daily existence, providing insights into activities such as eating, sleeping, learning, and playing within the expansive desert landscape of tents. By immersing viewers in a world that would otherwise remain distant, the UN strives to offer existing donors a tangible glimpse of the impact of their contributions and, for potential donors, an understanding of the areas that still require substantial support.

The Life of Migrants in a Refugee Camp in VR (UN VR Project Clouds over Sidra) Source: http://unvr.sdgactioncampaign.org/cloudsoversidra/

Another UN VR project, My Mother’s Wing, offers an unparalleled perspective of the war-torn Gaza Strip, presenting a firsthand account of a young mother’s journey as she grapples with the heart-wrenching loss of two of her children during the bombardment of the UNRWA school in July 2014. This poignant film sheds light on the blend of sorrow and hope that colors her daily existence, showcasing her pivotal role as a beacon of strength within her family. Amid the process of healing, she emerges as a pillar of support, nurturing a sense of optimism that empowers her family to persevere with renewed hope.

Experience of a War-Torn Area (UN VR Project My Mother’s Wing) Source: https://unvr.sdgactioncampaign.org/a-mother-in-gaza/

Improving Accessibility in the Global South with AR

In various parts of the world, millions of adults struggle to read everyday items such as bus schedules or bank forms. AR technology paired with a phone camera can help. Google Lens, for example, offers translation support and can read text aloud when the camera is pointed at it, highlighting each word as it is spoken so users can follow along and grasp the full context. Users can also tap a specific word to search for it and learn its definition. Google Lens is designed to work not only on expensive smartphones but also on inexpensive camera-equipped phones.

Google Translate with Google Lens for Real-Time Live Translation of Consumer Train Tickets Source: https://storage.googleapis.com/gweb-uniblog-publish-prod/original_images/Consumer_TrainTicket.gif

Another AR app, IKEA Place, shows the power of AR-driven spatial design and consumer engagement. The app employs AR technology to let users virtually place furniture in their living spaces, improving decision-making and customer satisfaction. The same approach can be applied to civic planning: by rendering authentic representations of objects in real-world environments, such apps can help urban planners and architects simulate design elements within public spaces, supporting informed decisions about cityscapes and communal areas.

IKEA Place: Using AR to visualize furniture within living spaces. Source: Ikea.com

More examples of how AR/VR technology can be used to enhance accessibility are noted in Dick (2021b).

Spotlight on Gender and Caste Discrimination

The presence of women at the core of our democratic system marks a significant stride toward realizing both gender equality (SDG 5) and robust institutions (SDG 16). The VR film “Compliment,” created by Lucy Bonner, a graduate student at Parsons School of Design, draws attention to the harassment and discrimination endured by women in unsafe environments, which regrettably remains a global issue. Through the film, viewers step into the shoes of a woman navigating the streets, gaining a firsthand perspective on the distressing spectrum of harassment many women experience daily.

View of a Scene from the VR Movie, “Compliment.”
Source: http://lucymbonner.com/compliment.html

There are other forms of systemic discrimination, including caste-based discrimination. The VR-based film “Courage to Question,” produced by Novus Select in collaboration with UN Women and Vital Voices and supported by Google, offers a glimpse into the struggles of activists combating caste-based discrimination. The film highlights the plight of Dalit women, who continue to occupy the lowest rungs of caste, color, and gender hierarchies. Formerly known as “untouchables,” the Dalits are members of the lowest caste in India and are fighting back against systems of oppression that systematically deprive them of fundamental human rights, including access to basic necessities like food, education, and fair labor.

Scene from UN Women’s VR movie “Courage to Question,” highlighting discrimination faced by Dalit women. Snapshot from https://www.youtube.com/watch?v=pJCl8FNv22M

Maternal Health Training

The UN Population Fund, formerly the UN Fund for Population Activities (UNFPA), pioneered a VR innovation in 2022 to improve maternal-health training, the first project to implement VR in Timor-Leste and possibly in the Asia-Pacific region. The program’s VR modules deliver Emergency Obstetric and Newborn Care (EmONC) skills and procedures through VR goggles. The project aims to create digitally mediated learning environments in which real medical situations are visualized for trainees, boosting learning experiences and outcomes and helping to “refresh the skills of hundreds of trained doctors and midwives to help them save lives and avoid maternal deaths.”

Source: https://timor-leste.unfpa.org/en/news/unfpa-develop-novel-innovation-help-reduce-maternal-deaths-timor-leste-using-virtual-reality © UNFPA Timor-Leste.

Highlighting Racism and Dire Poverty

The immersive VR experience, “1000 Cut Journey,” takes participants on a profound exploration. They step into the shoes of Michael Sterling, a Black male, and traverse through pivotal moments of his life, gaining firsthand insight into the impact of racism. This journey guides participants through his experiences as a child facing disciplinary measures in the classroom, as an adolescent dealing with encounters with the police, and as a young adult grappling with workplace discrimination (Cogburn et al., 2018).

View from 1000 Cut Journey, a VR film on Racism.
Source: https://www.virtualrealitymarketing.com/case-studies/1000-cut-journey/

1000 Cut Journey serves as a powerful tool to foster a significant shift in perspective. By immersing individuals in the narrative of Michael Sterling, it facilitates a deeper, more authentic engagement with the complex issues surrounding racism.

In another project from Stanford University’s Virtual Human Interaction Lab, one can experience firsthand the lives of people experiencing homelessness and walk in the shoes of those who can no longer afford a home. Through this, the researchers aim to raise awareness and to study the effect of VR experiences on empathy. They have found that, compared with other perspective-taking exercises, a VR experience engenders longer-lasting behavior change.

View of Becoming Homeless — A Human Experience VR Film.
Source: https://xrgigs.com/offered/becoming-homeless-a-human-experience/

Participatory Governance using XR

The MIT Media Lab and HafenCity University Hamburg teamed up to create CityScope, an innovative tool blending AI, algorithms, and human insight for participatory governance. This tool utilizes detailed data, including demographics and urban planning information, to encourage citizen involvement in addressing community issues and collective decision-making. It allows users to examine various scenarios, fostering informed dialogue and collaborative solutions. This project highlights how combining technology and human creativity can enhance citizen engagement in urban development.

District leaders and residents meet to explore possible sites for refugee communities. Credit: Walter Schiesswohl. Source: https://medium.com/mit-media-lab/

Another example is vTaiwan, an innovative approach to participatory governance that brings together government ministries, scholars, citizens, and business leaders to redefine modern democracy. The process converges online and offline consultations through platforms such as vtaiwan.tw, which supports proposals, opinion gathering, reflection, and legislation. Taiwan has also used VR to highlight its response to the COVID-19 crisis through the VR film The Three Crucial Steps, which showcases how three steps (prudent action, rapid response, and early deployment) played a critical role in Taiwan’s successful COVID-19 response.

Taiwanese Deputy Minister of Foreign Affairs watches the Ministry’s VR film, Three Crucial Steps, about Taiwan’s response to COVID-19. Photo: Louise Watt.
Source: https://topics.amcham.com.tw/2020/12/taiwan-new-trails-with-extended-reality/

Taiwan also harnesses open-source, real-time systems such as Pol.is, which uses statistical analysis and machine learning to decode the sentiments of an extensive user base exceeding 200,000 participants. Through the pioneering integration of 3D cameras, participants can even join live-streamed dialogues immersively in VR. This movement, born in 2014 and still ongoing, serves as a model of technology-enhanced 21st-century democratic governance.
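The opinion-mapping behind tools like Pol.is can be sketched in a few lines: participants’ agree/disagree votes form a matrix, which is projected into a low-dimensional “opinion space” and then clustered into like-minded groups. The sketch below is a minimal, illustrative reconstruction of that published approach (two groups, two principal components, a deterministic k-means), not Pol.is’s actual code.

```python
import numpy as np

def cluster_opinions(votes, iters=20):
    """Split participants into two opinion groups from a vote matrix.

    votes: participants x statements array of +1 (agree), -1 (disagree),
    0 (pass). Mirrors the PCA-then-k-means approach Pol.is describes,
    stripped to its bare bones.
    """
    centered = votes - votes.mean(axis=0)
    # Project each participant into a 2-D "opinion space" via SVD (PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:2].T

    # Deterministic k-means (k=2): seed with two maximally distant voters.
    c0 = coords[0]
    c1 = coords[np.argmax(np.linalg.norm(coords - c0, axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        dists = np.linalg.norm(coords[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):  # guard against an empty cluster
                centers[j] = coords[labels == j].mean(axis=0)
    return labels
```

Run on a vote matrix containing two clear voting blocs, this returns one cluster label per participant, with each bloc assigned its own group; visualizing the 2-D coordinates is what produces the familiar Pol.is opinion map.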

Clinical VR and Virtual Therapy

VR holds significant potential for application in clinical settings, particularly for virtual therapy and rehabilitation, as noted by Rizzo et al. (2023). To exemplify the clinical utility of VR, consider the treatment of burn pain, which medical professionals often describe as excruciating and which frequently leaves patients susceptible to post-traumatic stress. For more than two decades, VR has provided a measure of solace to burn patients through innovative solutions like the VR game SnowWorld, developed by researchers at the University of Washington.

A patient uses SnowWorld VR during treatment for burns.
Photo and Copyright: Hunter G. Hoffman. Credit: University of Washington.
Source: https://depts.washington.edu/hplab/research/virtual-reality/

During the operative care of burn wounds, patients immerse themselves in the SnowWorld VR experience. Remarkably, this immersive engagement has proven successful in drowning out or mitigating patients’ pain signals. SnowWorld’s design leverages imagery of snow and ice, whose coldness stands in stark contrast to the heat of burn wounds, and is intended to divert patients’ thoughts away from their accidents and injuries. The effectiveness of VR in managing pain highlights the great potential of AR/VR technology in clinical therapy and healthcare.


References

Find below the works cited in this resource.

Additional Resources

Social Media

What is social media?

Social media provides spaces for people and organizations to share and access news and information, communicate with beneficiaries, and advocate for change. Social media content includes text, photos, videos, infographics, or any other material placed on a blog, Facebook page, X (formerly known as Twitter) account, etc. for an audience to consume, interact with, and circulate. This content is curated by platforms and delivered to users according to what is most likely to attract their attention. There is an ever-expanding amount of content available on these platforms.

Digital inclusion center in the Peruvian Amazon. For NGOs, social media platforms can be useful to reach new audiences and to raise awareness of services. Photo credit: Jack Gordon for USAID / Digital Development Communications.

Theoretically, social media gives everyone a way to speak out and reach audiences across the world, which can be empowering and bring people together. At the same time, much of what is shared on social media can be misleading, hateful, and dangerous, which arguably imposes a responsibility on platform owners to moderate content.

How does social media work?

Social media platforms are owned by private companies, with business models usually based on advertising and monetization of users’ data. This affects the way that content appears to users, and influences data-sharing practices. Moderating content on these social media spaces brings its own challenges and complications because it requires balancing multiple fundamental freedoms. Understanding the content moderation practices and business models of the platforms is essential to reap the benefits while mitigating the risks of using social media.

Business Models

Most social media platforms rely on advertising. Advertisers pay for engagement, such as clicks, likes, and shares, so sensational and attention-grabbing content is more valuable. This motivates platforms to use automated-recommendation technology that relies on algorithmic decision-making to prioritize content likely to grab attention. The main strategy, “user-targeted amplification,” shows users the content most likely to interest them based on the detailed data collected about them. See more in the Risk section under Data Monetization by social media companies and tailored information streams.
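As a caricature of the “user-targeted amplification” strategy described above, a feed ranker can be sketched as a scoring function that rewards both topical overlap with a user’s inferred interests and a post’s prior engagement. All field names and weights below are hypothetical, chosen only to illustrate the incentive structure, not any platform’s real system.

```python
def rank_feed(posts, user_interests):
    """Order posts by a crude predicted-engagement score.

    A post scores higher the more its topics overlap the user's
    inferred interests and the more prior engagement (clicks, likes,
    shares) it has already attracted -- so attention begets attention.
    """
    def score(post):
        topic_match = len(set(post["topics"]) & user_interests)
        prior_engagement = post["clicks"] + 2 * post["likes"] + 3 * post["shares"]
        return topic_match * (1 + prior_engagement)

    return sorted(posts, key=score, reverse=True)
```

Even this toy version shows the dynamic the text describes: a sensational on-topic post with heavy prior engagement outranks a calmer one, while content outside the user’s profile is buried regardless of quality.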

The Emergence of Programmatic Advertising

The transition of advertising to digital systems has dramatically altered the advertising business. In an analog world, advertising placements were predicated on aggregate demographics, collected by publishers and measurement firms. These measurements were rough, capable at best of tracking subscribers and household-level engagement. Advertisers hoped their ads would be seen by enough of their target demographic (for example, men between 18 and 35 with income at a certain level) to be worth their while. Even more challenging was tracking the efficacy of the ads. Systems for measuring whether an ad resulted in a sale were limited largely to mail-in cards and special discount codes.

The emergence of digital systems changed all of that. Pioneered for the most part by Google and then supercharged by Facebook in the early 21st century, a new promise emerged: “Place ads through our platform, and we can put the right ad in front of the right person at the right time. Not only that, but we can report back to you (the advertiser) which users saw the ad, whether they clicked on it, and if that click led to a ‘conversion’ or a sale.”

But this promise has come with significant unintended consequences. The way that the platforms—and the massive ad tech industry that has rapidly emerged alongside them—deliver on this promise requires a level of data gathering, tracking, and individual surveillance unprecedented in human history. The tracking of individual behaviors, preferences, and habits powers the wildly profitable digital advertising industry, dominated by platforms that can control these data at scale.

Managing huge consumer data sets at the scale and speed required to deliver value to advertisers has come to mean a heavy dependence on algorithms to do the searching, sorting, tracking, placement, and delivery of ads. This development of sophisticated algorithms led to the emergence of programmatic advertising, which is the placement of ads in real time on websites with no human intervention. Programmatic advertising made up roughly two thirds of the $237 billion global ad market in 2019.
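The core of programmatic placement is an automated auction run in the milliseconds a page takes to load. Below is a minimal sketch of the second-price mechanism that real-time-bidding exchanges have historically used; the names, data shapes, and floor price are illustrative, not any exchange’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float  # price offered per thousand impressions

def run_auction(bids, floor_cpm=0.5):
    """Pick a winner for one ad slot via a second-price auction.

    The highest bidder above the floor wins but pays the runner-up's
    price (or the floor, if no runner-up clears it) -- the mechanism
    that lets exchanges fill slots in real time with no human involved.
    """
    eligible = sorted((b for b in bids if b.cpm >= floor_cpm),
                      key=lambda b: b.cpm, reverse=True)
    if not eligible:
        return None, 0.0  # slot goes unfilled
    winner = eligible[0]
    clearing = eligible[1].cpm if len(eligible) > 1 else floor_cpm
    return winner.advertiser, clearing
```

In production this logic sits behind a bid request carrying the very user-tracking data discussed above, which is what lets each bidder decide, per impression, how much that particular viewer is worth.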

The digitization of the advertising market, particularly the dominance of programmatic advertising, has resulted in a highly uneven playing field. The technology companies possess a significant advantage: they built the new structures and set the terms of engagement. What began as a value-add in the new digital space—“We will give advertisers efficiency and publishers new audiences and revenue streams”—has evolved to disadvantage both groups.

One of the primary challenges is in how audience engagement is measured and tracked. The primary performance indicators in the digital world are views and clicks. As mentioned above, an incentive structure based on views and clicks (engagement) tends to favor sensational and eye-catching content. In the race for engagement, misleading or false content with dramatic headlines and incendiary claims consistently wins out over more balanced news and information. See also the section on digital advertising in the disinformation resource.

Advertising-motivated content

Platforms leverage tools like hashtags and search engine optimization (SEO) to rank and cluster content around certain topics. Unfortunately, automated content curation driven by advertising does not tend to prioritize healthy, educational, or rigorous content. Instead, conspiracy theories, shocking or violent content, and “click-bait” (misleading phrases designed to entice viewing) tend to spread more widely. Many platforms also have upvoting features (“like” buttons) which, like hashtags and SEO, influence algorithmic moderation and promote certain content more widely. Together these features produce “virality,” one of the defining features of the social media ecosystem: the tendency of an image, video, or piece of information to be circulated rapidly and widely.

In some cases, virality can spark political activism and raise awareness (like the #MeToo movement), but it can also amplify tragedies and spread inaccurate information (anti-vaccine information and other health rumors, for example). Additionally, the business models of the platforms reward quantity over quality (number of “likes,” “followers,” and views), encouraging a growth logic that has led to the problem of information saturation or information overload, overwhelming users with seemingly infinite content. Indeed, design decisions like the “infinite scroll,” intended to make our social media spaces ever larger and more entertaining, have been associated with impulsive behavior, increased distraction, attention-seeking, and lower self-esteem.

Many digital advertising strategies raise risks regarding access to information, privacy, and discrimination, in part because of their pervasiveness and subtlety. Influencer marketing, for example, is the practice of sponsoring a social media influencer to promote or use a certain product by working it into their social-media content, while native advertising is the practice of embedding ads in or beside other non-paid content. Most consumers do not know what native advertising is and may not even know when they are being delivered ads.

It is not new for brands to strategically place their content. However, today there is much more advertising, and it is seamlessly integrated with other content. In addition, the design of platforms makes content from diverse sources—advertisers and news agencies, experts and amateurs—indistinguishable. Individuals’ right to information and basic guarantees of transparency are at stake if advertisements are placed on equal footing with desired content.

Content Moderation

Content moderation is at the heart of the services that social-media platforms provide: the hosting and curation of the content uploaded by their users. Content moderation is not just the review of content, but every design decision made by the platforms, from the Terms of Service and their Community Guidelines, to the algorithms used to rank and order content, to the types of content allowed and encouraged through design features (“like”, “follow”, “block”, “restrict”, etc.).

Content moderation is particularly challenging because of the issues it raises around freedom of expression. While it is necessary to address massive quantities of harmful content that circulate widely, educational, historic, or journalistic content is often censored by algorithmic moderation systems. In 2016, for example, Facebook took down a post with a Pulitzer Prize-winning image of a naked 9-year-old girl fleeing a napalm bombing and suspended the account of the journalist who had posted it.

Though nations differ in their stances on freedom of speech, international human rights provide a framework for how to balance freedom of expression against other rights, and against protections for vulnerable groups. Still, content-moderation challenges grow as content itself evolves, for instance with the rise of live streaming, ephemeral content, and voice assistants. Moderating internet memes is particularly challenging because of their ambiguity and ever-changing nature; and yet meme culture is a central tool used by the far right to share ideology and glorify violence. Some information manipulation is also intentionally difficult to detect; for example, “dog whistling” (sending coded messages to subgroups of the population) and “gaslighting” (psychological manipulation to make people doubt their own knowledge or judgment).

Automated moderation

Content moderation is usually performed by a mix of humans and artificial intelligence, with the precise mix dependent on the platform and the category of content. The largest platforms like Facebook and YouTube use automated tools to filter content as it is uploaded. Facebook, for example, claims it is able to detect up to 80% of hate speech content in some languages as it is posted, before it reaches the level of human review. Though the working conditions for the human moderators have been heavily criticized, algorithms are not a perfect alternative. Their accuracy and transparency have been disputed, and experts have warned of some concerning biases stemming from algorithmic content moderation.

The complexity of content-moderation decisions does not lend itself easily to automation, and the blurry line between legal and illegal, or permissible and impermissible, content leads to legitimate content being censored while harmful and illegal content (cyberbullying, defamation, etc.) passes through the filters.

The moderation of content posted to social media was increasingly important during the COVID-19 pandemic, when access to misleading and inaccurate information about the virus had the potential to result in severe illness or bodily harm. One characterization of Facebook described “a platform that is effectively at war with itself: the News Feed algorithm relentlessly promotes irresistible click-bait about Bill Gates, vaccines, and hydroxychloroquine; the trust and safety team then dutifully counters it with bolded, underlined doses of reality.”

Community moderation

Some social media platforms have come to rely on their users for content moderation. Reddit was one of the first social networks to popularize community-led moderation and allows subreddits to tack additional rules onto the company’s master content policy. These rules are then enforced by human moderators and, in some cases, automated bots. While the decentralization of moderation gives user communities more autonomy and decision-making power over their conversations, it also relies inherently on unpaid labor and exposes untrained volunteers to potentially problematic content.

Another approach to community-led moderation is X’s Community Notes, which is essentially a crowd-sourced fact-checking system. The feature allows users who are members of the program to add additional context to posts (formerly called tweets) that may contain false or misleading information, which other users then vote on if they find the context to be helpful.

Addressing harmful content

In some countries, local laws may address content moderation, but they relate mainly to child abuse images or illegal content that incites violence. Most platforms also have community standards or safety and security policies that state the kind of content allowed and set the rules for harmful content. Enforcement of legal requirements and the platforms’ own standards relies primarily on content being flagged by social media users. The social-media platforms are only responsible for harmful content shared on their platforms once it has been reported to them.

Some platforms have established mechanisms that allow civil society organizations (CSOs) to contribute to the reporting process by becoming so-called “trusted flaggers.” Facebook’s Trusted Partner program, for example, provides partners with a dedicated escalation channel for reporting content that violates the company’s Community Standards. However, even with programs like this in place, limited access to platforms to raise local challenges and trends remains an obstacle for CSOs, marginalized groups, and other communities, especially in the Global South.

Regulation

The question of how to regulate and enforce the policies of social media platforms remains far from settled. As of this writing, there are several common approaches to social-media regulation.

Self-regulation

The standard model of social-media regulation has long been self-regulation, with platforms establishing and enforcing their own standards for safety and equity. Incentives for self-regulation include avoiding the imposition of more restrictive government regulation and building consumer trust to broaden a platform’s user base (and ultimately boost profits). On the other hand, there are obvious limits to self-regulation when these incentives are outweighed by perceived costs. Self-regulation can also be contingent on the ownership of a company, as demonstrated by the reversal of numerous policy decisions in the name of “free speech” by Elon Musk after his takeover of X (known as Twitter at the time).

In 2020, the Facebook Oversight Board was established as an accountability mechanism for users to appeal decisions by Facebook to remove content that violates its policies against harmful and hateful posts. While the Oversight Board’s content decisions on individual cases are binding, its broader policy recommendations are not. For example, Meta was required to remove a video posted by Cambodian Prime Minister Hun Sen that threatened his opponents with physical violence, but it declined to comply with the Board’s recommendation to suspend the Prime Minister’s account entirely. Though the Oversight Board’s mandate and model are promising, there have been concerns about its capacity to respond to the volume of requests it receives in a timely manner.

Government Regulation

In recent years, individual governments and regional blocs have introduced legislation to hold social media companies accountable for the harmful content that spreads on their platforms, as well as to protect the privacy of citizens given the massive amounts of data these companies collect. Perhaps the most prominent and far-reaching example of this kind of legislation is the European Union’s Digital Services Act (DSA), which came into effect for “Very Large Online Platforms” such as Facebook and Instagram (Meta), TikTok, YouTube (Google), and X in late August of 2023. Under the rules of the DSA, online platforms risk significant fines if they fail to prevent and remove posts containing illegal content. The DSA also bans targeted advertising based on a person’s sexual orientation, religion, ethnicity, or political beliefs and requires platforms to provide more transparency on how their algorithms work.

With government regulation comes the risk of over-regulation via “fake news” laws and threats to free speech and online safety. In 2023, for example, security researchers warned that the draft legislation of the U.K.’s Online Safety Bill would compromise the security provided to users of end-to-end encrypted communications services, such as WhatsApp and Signal. Proposed Brazilian legislation to increase transparency and accountability for online platforms was also widely criticized—and received strong backlash from the platforms themselves—as negotiations took place behind closed doors without proper engagement with civil society and other sectors.

Back to top

How is social media relevant in civic space and for democracy?

Social media encourages and facilitates the spread of information at unprecedented speeds, distances, and volumes. As a result, information in the public sphere is no longer controlled by journalistic “gatekeepers.” Rather, social media provides platforms for groups excluded from traditional media to connect and be heard. Citizen journalism has flourished on social media, enabling users from around the world to supplement mainstream media narratives with on-the-ground local perspectives that previously may have been overlooked or misrepresented. Read more about citizen journalism under the Opportunities section of this resource.

Social media can also serve as a resource for citizens and first responders during emergencies, humanitarian crises, and natural disasters, as described in more detail in the Opportunities section. In the aftermath of the deadly earthquake that struck Turkey and Syria in February 2023, for example, people trapped under the rubble turned to social media to alert rescue crews to their location. Social media platforms have also been used during this and other crises to mobilize volunteers and crowdsource donations for food and medical aid.

Digital inclusion center in the Peruvian Amazon. The business models and content moderation practices of social media platforms directly affect the content displayed to users. Photo Credit: Chandy Mao, Development Innovations.

However, like any technology, social media can be used in ways that negatively affect free expression, democratic debate, and civic participation. Profit-driven companies like X have in the past complied with content takedown requests from individual governments, prompting censorship concerns. When private companies control the flow of information, censorship can occur not only through such direct mechanisms, but also through the determination of which content is deemed most credible or worthy of public attention.

The effects of harassment, hate speech, and “trolling” on social media can spill over into offline spaces, presenting a unique danger for women, journalists, political candidates, and marginalized groups. According to UNESCO, 20% of respondents to a 2020 survey on online violence against women journalists reported being attacked offline in connection with online violence. Read more about online violence and targeted digital attacks in the Risks section of this resource, as well as the resource on Digital Gender Divide[1].

Social media platforms have only become more prevalent in our daily lives (with the average internet user spending nearly 2.5 hours per day on social media), and those not active on the platforms risk missing important public announcements, information about community events, and opportunities to communicate with family and friends. Design features like the “infinite scroll,” which allows users to endlessly swipe through content without clicking, are intentionally addictive—and associated with impulsive behavior and lower self-esteem. The oversaturation of content in curated news feeds makes it ever more difficult for users to distinguish factual, unbiased information from the onslaught of clickbait and sensational narratives. Read about the intentional sharing of misleading or false information to deceive or cause harm in our Disinformation[2] resource.

Social media and elections

Social media platforms have become increasingly important to the engagement of citizens, candidates, and political parties during elections, referendums, and other political events. On the one hand, lesser-known candidates can leverage social media to reach a broader audience by conducting direct outreach and sharing information about their campaign, while citizens can use social media to communicate with candidates about immediate concerns in their local communities. On the other hand, disinformation circulating on social media can amplify voter confusion, reduce turnout, deepen social cleavages, suppress political participation of women and marginalized populations, and degrade overall trust in democratic institutions.

Social media companies like Google, Meta, and X do have a track record of adjusting their policies and investing in new products ahead of global elections. They also collaborate directly with electoral authorities and independent fact-checkers to mitigate disinformation and other online harms. However, these efforts often fall short. As one example, despite Facebook’s self-proclaimed efforts to safeguard election integrity, Global Witness found that the platform failed to detect election-related disinformation in ads ahead of the 2022 Brazilian presidential election (a similar pattern was also uncovered in Myanmar, Ethiopia, and Kenya). Facebook and other social media platforms were strongly criticized for their inaction in the lead-up to and during the subsequent riots instigated by far-right supporters of former president Jair Bolsonaro. In fragile democracies, the institutions that could help counter the impact of fake news and disinformation disseminated on social media—such as independent media, agile political parties, and sophisticated civil society organizations—remain nascent.

Meanwhile, online political advertising has introduced new challenges to election transparency and accountability as the undeclared sponsoring of content has become easier through unofficial pages paid for by official campaigns. Social media companies have made efforts to increase the transparency of political ads by making “ad libraries” available in some countries and introducing new requirements for the purchase and identification of political ads. But these efforts have varied by country, with most attention directed to larger or more influential markets.

Social media monitoring can help civil society researchers better understand their local information environment, including common disinformation narratives during election cycles. The National Democratic Institute, for example, used Facebook’s social monitoring platform CrowdTangle to track the online political environment in Moldova following Maia Sandu’s victory in the November 2020 presidential elections. However, social media platforms have made this work more challenging by introducing exorbitant fees to access data or ceasing support for user interfaces that make analysis easier for non-technical users.

Back to top

Opportunities

Students from the Kandal Province, Cambodia. Social media platforms have opened up new platforms for video storytelling. Photo credit: Chandy Mao, Development Innovations.

Social media can have positive impacts when used to further democracy, human rights, and governance issues. Read below to learn how to more effectively and safely think about social media use in your work.

Citizen Journalism

Social media has been credited with providing channels for citizens, activists, and experts to report instantly and directly—from disaster settings, during protests, from within local communities, etc. Citizen journalism, also referred to as participatory journalism or guerrilla journalism, does not have a definite set of principles and is an important supplement to (but not a replacement for) mainstream journalism. Collaborative journalism, the partnership between citizen and professional journalists, as well as crowdsourcing strategies, are additional techniques facilitated by social media that have enhanced journalism, helping to promote voices from the ground and to magnify diverse voices and viewpoints. The outlet France 24 has developed a network of 5,000 contributors, the “observateurs,” who are able to cover important events directly by virtue of being on the scene at the time, as well as to confirm the accuracy of information.

Social media and blogging platforms have allowed for the decentralization of expertise, bridging elite and non-elite forms of knowledge. Without proper fact-checking or supplementary sources and proper context, citizen reporting carries risks—including security risks to the authors themselves—but it is an important democratizing force and source of information.

Crowdsourcing

In crowdsourcing, the public is mobilized to share data together to tell a larger story or accomplish a greater goal. Crowdsourcing can be a method for financing, for journalism and reporting, or simply for gathering ideas. Usually some kind of software tool or platform is put in place that the public can easily access and contribute to. Crisis mapping, for example, is a type of crowdsourcing through which the public shares data in real time during a crisis (a natural disaster, an election, a protest, etc.). These data are then ordered and displayed in a useful way. For instance, crisis mapping can be used in the wake of an earthquake to show first responders the areas that have been hit and need immediate assistance. Ushahidi is an open-source crisis-mapping software developed in Kenya after the post-election violence of 2007. The tool was first created to allow Kenyans to flag incidents, form a complete and accurate picture of the situation on the ground, and share information with the media, outside governments, and relevant civil society and relief organizations. In Kenya, the tool gathered texts, posts, and photos and created crowdsourced maps of incidents of violence, election fraud, and other abuse. Ushahidi now has a global team with deployments in more than 160 countries and more than 40 languages.

Digital Activism

Social media has allowed local and global movements to spring up overnight, inviting broad participation and visibility. Twitter hashtags in particular have been instrumental for coalition building, coordination, and raising awareness among international audiences, media, and governments. Researchers began to take note of digital activism around the 2011 “Arab Spring,” when movements in Tunisia, Morocco, Syria, Libya, Egypt, and Bahrain, among other countries, leveraged social media to galvanize support. This pattern continued with the Occupy Wall Street movement in the United States, the Ukrainian Euromaidan movement in late 2013, and the Hong Kong protests in 2019.

In 2013, the acquittal of George Zimmerman in the death of unarmed 17-year-old Trayvon Martin inspired the creation of the #BlackLivesMatter hashtag. This movement grew stronger in response to the tragic killings of Michael Brown in 2014 and George Floyd in 2020. The hashtag, at the front of an organized national protest movement, provided an outlet for people to join an online conversation and articulate alternative narratives in real time about subjects that the media and the rest of the United States had not paid sufficient attention to: police brutality, systemic racism, racial profiling, inequality, etc.

The #MeToo movement against sexual misconduct in the media industry, which also became a global movement, allowed a multitude of people to participate in activism previously bound to a certain time and place.

Some researchers and activists fear that social media will lead to “slacktivism” by giving people an excuse to stay at home rather than engage in more active forms of protest. Others fear that social media is ultimately insufficient for enacting meaningful social change, which requires nuanced political arguments. (Interestingly, a 2018 Pew Research survey on attitudes toward digital activism showed that just 39% of white Americans believed social media was an important tool for expressing themselves, while 54% of Black people said that it was an important tool for them.)

Social media has enabled new online groups to gather together and to express a common sentiment as a form of solidarity or as a means to protest. Especially after the COVID-19 pandemic broke out, many physical protests were suspended or canceled, and virtual protests proceeded in their place.

Expansion and engagement with international audience at low costs

Social media provides a valuable opportunity for CSOs to reach their goals and engage with existing and new audiences. A good social-media strategy is underpinned by a permanent staff position to grow a strong and consistent social media presence based on the organization’s purpose, values, and culture. This person should know how to seek information, be aware of both the risks and benefits of sharing information online, and understand the importance of using sound judgment when posting on social media. The USAID “Social Networking: A Guide to Strengthening Civil Society through Social Media” provides a set of questions as guidance to develop a sound social-media policy, asking organizations to think about values, roles, content, tone, controversy, and privacy.

Increased awareness of services

Social media can be integrated into programmatic activities to strengthen the reach and impact of programming, for example, by generating awareness of an organization’s services to a new demographic. Organizations can promote their programs and services while responding to questions and fostering open dialogue. Widely used social media platforms can be useful to reach new audiences for training and consulting activities through webinars or individual meetings designed for NGOs.

Opportunities for Philanthropy and Fundraising

Social-media fundraising presents an important opportunity for nonprofits. After the blast in Beirut’s harbor in the summer of 2020, many Lebanese people started online fundraising pages for their organizations. Social media platforms were used extensively to share funding suggestions to the global audience watching the disaster unfold, reinforced by traditional media coverage. However, organizations should carefully consider the type of campaign and platforms they choose. TechSoup, a nonprofit providing tech support for NGOs, offers advice and an online course on fundraising with social media for nonprofits.

Emergency communication

In some contexts, civic actors rely on social media platforms to produce and disseminate critical information, for example, during humanitarian crises or emergencies. Even in a widespread disaster, the internet often remains a significant communication channel, which makes social media a useful, complementary means for emergency teams and the public. Reliance on the internet, however, increases vulnerability in the event of network shutdowns.

Back to top

Risks

In Kyiv, Ukrainian students share pictures at the opening ceremony of a Parliamentary Education Center. Photo credit: Press Service of the Verkhovna Rada of Ukraine, Andrii Nesterenko.

The use of social media can also create risks in civil society programming. Read below on how to discern the possible dangers associated with social media platforms in DRG work, as well as how to mitigate unintended – and intended – consequences.

Polarization and Ideological Segregation

The ways in which content flows and is presented on social media due to the platforms’ business models risk limiting our access to information, particularly to information that challenges our preexisting beliefs, by exposing us to content likely to attract our attention and support our views. The concept of the filter bubble refers to the filtering of information by online platforms to exclude information we as users have not already expressed an interest in. When paired with our own intellectual biases, filter bubbles worsen polarization by allowing us to live in echo chambers. This is easily witnessed in a YouTube feed: when you search for a song by an artist, you will likely be directed to more songs by the same artist, or similar ones—the algorithms are designed to prolong your viewing, and assume you want more of something similar. The same trend has been observed with political content. Social media algorithms encourage confirmation bias, exposing us to content we will agree with and enjoy, often at the expense of the accuracy, rigor, or educational and social value of that content.
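The narrowing effect described above can be illustrated with a toy recommender. This is a hypothetical sketch with invented data, not any platform's actual algorithm: it scores candidate items purely by topic overlap with what the user has already engaged with, so content outside the user's existing interests never surfaces.

```python
# Toy similarity-based recommender (hypothetical data) illustrating the
# filter-bubble dynamic: recommendations are drawn only from topics the
# user has already engaged with.

def recommend(history, catalog, top_n=2):
    """Rank catalog items by topic overlap with the user's viewing history."""
    seen_topics = {t for item in history for t in item["topics"]}

    def overlap(item):
        return len(seen_topics & set(item["topics"]))

    candidates = [c for c in catalog if c not in history]
    return sorted(candidates, key=overlap, reverse=True)[:top_n]

history = [{"id": "a", "topics": ["politics-left"]}]
catalog = [
    {"id": "b", "topics": ["politics-left"]},
    {"id": "c", "topics": ["politics-left", "activism"]},
    {"id": "d", "topics": ["politics-right"]},  # zero overlap: never recommended
]
print([c["id"] for c in recommend(history, catalog)])
```

Because the scoring function rewards only similarity to past behavior, the item from a different viewpoint scores zero and is pushed out of the top results entirely, which is the echo-chamber mechanism in miniature.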

The massive and precise data amassed by advertisers and social media companies about our preferences and opinions facilitates the practice of micro-targeting, which involves the display of tailored content based on data about users’ online behaviors, connections, and demographics, as will be further explained below.

The increasingly tailored distribution of news and information on social media is a threat to political discourse, diversity of opinions, and democracy. Users can become detached even from factual information that disagrees with their viewpoints, and isolated within their own cultural or ideological bubbles.

Because tailoring news and other information on social media is driven largely by opaque, proprietary algorithms, it is hard for users to avoid these bubbles. Access to and intake of the very diverse information available on social media, with its many viewpoints, perspectives, ideas, and opinions, requires an explicit effort by the individual user to go beyond passive consumption of the content presented to them by the algorithm.

Misinformation and Disinformation

The internet and social media provide new tools that amplify and alter the danger presented by false, inaccurate, or out-of-context information. The online space increasingly drives discourse and is where much of today’s disinformation takes root. Refer to the Disinformation  resource for a detailed overview of these problems.

Online Violence and Targeted Digital Attacks

Social media facilitates a number of violent behaviors such as defamation, harassment, bullying, stalking, “trolling,” and “doxxing.” Cyberbullying among children, much like traditional offline bullying, can harm students’ performance in school and cause real psychological damage. Cyberbullying is particularly harmful because victims experience the violence alone, isolated in cyberspace. They often do not seek help from parents and teachers, who they believe are not able to intervene. Cyberbullying is also difficult to address because it can move across social-media platforms, beginning on one and moving to another. Like cyberbullying, cyber harassment and cyberstalking have very tangible offline effects. Women are most often the victims of cyber harassment and cyberviolence, sometimes through the use of stalkerware installed by their partners to track their movements. A frightening cyber-harassment trend accelerated in France during the COVID-19 pandemic in the form of “fisha” accounts, where bullies, aggressors, or jilted ex-boyfriends would publish and circulate naked photos of teenage girls without their consent.

Journalists, women in particular, are often subject to cyber harassment and threats. Online violence against journalists, particularly those who write about socially sensitive or political topics, can lead to self-censorship, affecting the quality of the information environment and democratic debate. Social media provides new ways to spread and amplify hate speech and harassment. The use of fake accounts, bots, and bot-nets (automated networks of accounts) allow perpetrators to attack, overwhelm, and even disable the social media accounts of their victims. Revealing sensitive information about journalists through doxxing is another strategy that can be used to induce self-censorship.

The 2014 case of Gamergate, when several women video-game developers were attacked by a coordinated harassment campaign that included doxxing and threats of rape and death, illustrates the strength and capacity of loosely connected hate groups online to rally together, inflict real violence, and even drown out criticism. Many of the actions of the most active Gamergate trolls were illegal, but their identities were unknown. Importantly, it has been suggested by supporters of Gamergate that the most violent trolls were a “smaller, but vocal minority” — evidence of the magnifying power of internet channels and their use for coordinated online harassment.

Online hoaxes, scams, and frauds, like in their traditional offline forms, usually aim to extract money or sensitive information from a target. The practice of phishing is increasingly common on social media: an attacker pretends to be a contact or a reputable source in order to send malware or extract personal information and account credentials. Spearphishing is a targeted phishing attack that leverages information about the recipient and details related to the surrounding circumstances to achieve this same aim.

Data monetization by social media companies and tailored information streams

Most social media platforms are free to use. Social media platforms do not receive revenue directly from users, as in a traditional subscription service; rather, they generate profit primarily through digital advertising. Digital advertising is based on the collection of users’ data by social-media companies, which allows advertisers to target their ads to specific users and types of users. Social media platforms monitor their users and build detailed profiles that they sell to advertisers. The data tracked include information about the user’s connections and behavior on the platform, such as friends, posts, likes, searches, clicks, and mouse movements. Data are also extensively collected outside platforms, including information about users’ location, web pages visited, online shopping, and banking behavior. Additionally, many companies regularly request permission to access the contacts and photos of their users.

In the case of Facebook, this has led to a long-held and widespread conspiracy theory that the company listens to conversations to serve tailored advertisements. No one has ever been able to find clear evidence that this is actually happening. Research has shown that a company like Facebook does not need to listen in to your conversations, because it has the capacity to track you in so many other ways: “Not only does the system know exactly where you are at every moment, it knows who your friends are, what they are interested in, and who you are spending time with. It can track you across all your devices, log call and text metadata on phones, and even watch you write something that you end up deleting and never actually send.”

The massive and precise data amassed by advertisers and social-media companies about our preferences and opinions permit the practice of micro-targeting: displaying targeted advertisements based on what you have recently purchased, searched for, or liked. But just as online advertisers can target us with products, political parties can target us with more relevant or personalized messaging. Studies have attempted to determine the extent to which political micro-targeting threatens the functioning of democratic elections. Researchers and digital rights activists have also asked how micro-targeting may be interfering with our freedom of thought.
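In principle, micro-targeting amounts to filtering tracked user profiles against an advertiser's criteria. The toy sketch below illustrates that core idea; all user profiles, attributes, and criteria are invented for this example and are far simpler than the models real platforms use.

```python
# A deliberately simplified illustration of micro-targeting: select the
# audience whose tracked attributes match an ad's criteria.
# All profiles and criteria here are invented toy data.

profiles = {
    "user_a": {"age": 34, "interests": {"running", "politics"}, "region": "north"},
    "user_b": {"age": 52, "interests": {"gardening"},           "region": "south"},
    "user_c": {"age": 29, "interests": {"politics", "gaming"},  "region": "north"},
}

def target_audience(profiles, min_age, max_age, interest, region):
    """Return the ids of users whose profile matches every criterion."""
    return sorted(
        uid for uid, p in profiles.items()
        if min_age <= p["age"] <= max_age
        and interest in p["interests"]
        and p["region"] == region
    )

# A hypothetical political ad aimed at younger, politically engaged
# users in the north reaches exactly the matching users:
print(target_audience(profiles, 18, 40, "politics", "north"))
# → ['user_a', 'user_c']
```

Real systems infer thousands of such attributes per user and score audiences probabilistically rather than filtering on exact matches, which is what makes the practice both powerful and opaque.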

Government surveillance and access to personal data

The content shared on social media can be monitored by governments, which may use it for censorship, control, and information manipulation. Even democratic governments are known to engage in extensive social-media monitoring for law enforcement and intelligence-gathering purposes. These practices should be governed by robust legal frameworks and data protection laws that safeguard individuals' rights online, but many countries have not yet enacted such legislation.

There are also many examples of authoritarian governments using personal and other data harvested through social media to intimidate activists, silence opposition, and bring development projects to a halt. The information shared on social media often allows bad actors to build extensive profiles of individuals, enabling targeted online and offline attacks. Through social engineering, a phishing email can be carefully crafted based on social media data to trick an activist into clicking on a malicious link that provides access to their device, documents, or social-media accounts.

Sometimes, however, a strong, real-time presence on social media can protect a prominent activist against threats by the government. A disappearance or arrest would be immediately noticed by followers or friends of a person who suddenly becomes silent on social media.

Market power and differing regulation

We rely on social-media platforms to help fulfill our fundamental rights (freedom of expression, assembly, etc.). However, these platforms are massive global monopolies and have been referred to as "the new governors." This market concentration poses a serious challenge to national and international governance mechanisms. Simply breaking up the biggest platform companies will not fully solve the information disorders and social problems fueled by social media. Civil society and governments also need visibility into the design choices made by the platforms to understand how to address the harms they facilitate.

The growing influence of social-media platforms has given many governments reason to impose laws on online content. Laws regulating illegal and harmful content, such as incitement to terrorism or violence, false information, and hate speech, are surging across the world. These laws often criminalize speech, imposing jail terms or heavy fines for acts as minor as a retweet on X. Even in countries where the rule of law is respected, legal approaches to regulating online content may be ineffective given the many technical challenges of content moderation. They also risk violating internet users' freedom of expression by reinforcing imperfect, non-transparent moderation practices and over-deletion. Lastly, such laws force social media companies to navigate between complying with local laws and defending international human rights law.

Impact on journalism

Social media has had a profound impact on the field of journalism. While it has enabled the emergence of the citizen-journalist, local reporting, and crowd-sourced information, social-media companies have disrupted the relationship between advertising and the traditional newspaper. In turn, this has created a reward system that privileges sensationalist, clickbait-style content over quality journalism relevant to local communities.

In addition, the way search tools work dramatically affects local publishers, as search is a powerful vector for news and information. Researchers have found that search rankings have a marked impact on our attention. Not only do we tend to regard highly ranked information as more trusted and relevant, but we also click on top results more often than lower ones. The Google search engine concentrates our attention on a narrow range of news sources, a trend that works against diverse and pluralistic media outlets. It also tends to work against the advertising revenue of smaller and community publishers, which depends on user attention and traffic. In this downward spiral, search results favor larger outlets; those results drive more user engagement; that engagement makes the larger outlets' inventory more valuable in the advertising market; and those publishers grow larger, earning still more favorable search results, and so the cycle continues.

Questions

To understand the implications of social media information flows and choice of platforms used in your work, ask yourself these questions:

  1. Does your organization have a social-media strategy? What does your organization hope to achieve through social media use?
  2. Do you have staff who can oversee and ethically moderate your social-media accounts and content?
  3. Which platform do you intend to use to accomplish your organization’s goals? What is the business model of that platform? How does this business model affect you as a user?
  4. How is content ordered and moderated on the platforms you use (by humans, volunteers, AI, etc.)?
  5. Where is the platform legally headquartered? What jurisdiction and legal frameworks does it fall under?
  6. Do the platforms chosen have mechanisms for users to flag harassment and hate speech for review and possible removal?
  7. Do the platforms have mechanisms for users to dispute decisions on content takedowns or blocked accounts?
  8. What user data are the platforms collecting? Who else has access to collected data and how is it being used?
  9. How does the platform engage its community of users and civil society (for instance, in flagging dangerous content, in giving feedback on design features, in fact-checking information, etc.)? Does the platform employ local staff in your country or region?
  10. Do the platforms you have chosen offer privacy features like encryption? If so, what level of encryption do they offer and for which services (for example, only in the app, only in private message threads)? What are the default settings?

Case Studies

Everyone saw Brazil violence coming. Except social media giants

“When far-right rioters stormed Brazil’s key government buildings on January 8, social media companies were again caught flat-footed. In WhatsApp groups—many with thousands of subscribers—viral videos of the attacks quickly spread like wildfire… On Twitter, social media users posted thousands of images and videos in support of the attacks under the hashtag #manifestacao, or protest. On Facebook, the same hashtag garnered tens of thousands of engagements via likes, shares and comments, mostly in favor of the riots… In failing to clamp down on such content, the violence in Brazil again highlights the central role social media companies play in the fundamental machinery of 21st century democracy. These firms now provide digital tools like encrypted messaging services used by activists to coordinate offline violence and rely on automated algorithms designed to promote partisan content that can undermine people’s trust in elections.”

Crowdsourced mapping in crisis zones: collaboration, organization and impact

“Within a crisis, crowdsourced mapping allows geo-tagged digital photos, aid requests posted on Twitter, aerial imagery, Facebook posts, SMS messages, and other digital sources to be collected and analyzed by multiple online volunteers…[to build] an understanding of the damage in an area and help responders focus on those in need. By generating maps using information sourced from multiple outlets, such as social media…a rich impression of an emergency situation can be generated by the power of ‘the crowd’.” Crowdsourced mapping has been employed in multiple countries during natural disasters, refugee crises, and even election periods.

What makes a movement go viral? Social media, social justice coalesce under #JusticeForGeorgeFloyd

A 2022 USC study was among the first to measure the link between social media posts and participation in the #BlackLivesMatter protests after the 2020 death of George Floyd. “The researchers found that Instagram, as a visual content platform, was particularly effective in mobilizing coalitions around racial justice by allowing new opinion leaders to enter public discourse. Independent journalists, activists, entertainers, meme groups and fashion magazines were among the many opinion leaders that emerged throughout the protests through visual communications that went viral. This contrasts with text-based platforms like Twitter that allow voices with institutional power (such as politicians, traditional news media or police departments) to control the flow of information.”

Myanmar: The social atrocity: Meta and the right to remedy for the Rohingya

A 2022 Amnesty International report investigated Meta’s role in the serious human rights violations perpetrated during the Myanmar security forces’ brutal campaign of ethnic cleansing against Rohingya Muslims starting in August 2017. The report found that “Meta’s algorithms proactively amplified and promoted content which incited violence, hatred, and discrimination against the Rohingya – pouring fuel on the fire of long-standing discrimination and substantially increasing the risk of an outbreak of mass violence.”

How China uses influencers to build a propaganda network

“As China continues to assert its economic might, it is using the global social media ecosystem to expand its already formidable influence. The country has quietly built a network of social media personalities who parrot the government’s perspective in posts seen by hundreds of thousands of people, operating in virtual lockstep as they promote China’s virtues, deflect international criticism of its human rights abuses, and advance Beijing’s talking points on world affairs like Russia’s war against Ukraine. Some of China’s state-affiliated reporters have posited themselves as trendy Instagram influencers or bloggers. The country has also hired firms to recruit influencers to deliver carefully crafted messages that boost its image to social media users. And it is benefitting from a cadre of Westerners who have devoted YouTube channels and Twitter feeds to echoing pro-China narratives on everything from Beijing’s treatment of Uyghur Muslims to Olympian Eileen Gu, an American who competed for China in the [2022] Winter Games.”

Why Latin American Leaders Are Obsessed With TikTok

“Latin American heads of state have long been early adopters of new social media platforms. Now they have seized on TikTok as a less formal, more effective tool for all sorts of political messaging. In Venezuela, Nicolas Maduro has been using the platform to share bite-sized pieces of propaganda on the alleged successes of his socialist agenda, among dozens of videos of himself dancing salsa. In Ecuador, Argentina and Chile, presidents use the app to give followers a view behind the scenes of government. In Brazil, former President Jair Bolsonaro and his successor Luiz Inácio Lula da Silva have been competing for views in the aftermath of a contested election…In much of the West, TikTok is the subject of political suspicion; in Latin America, it’s a cornerstone of political strategy.”
