
Disinformation

What is disinformation? What is misinformation?

Disinformation is false or misleading information that is disseminated with an intent to deceive, mislead or cause harm. The information is presented in such a way as to purposely mislead, or is created with the intent to mislead. Put another way, disinformation is false or manipulated information that is knowingly shared to cause harm, or that is created with reckless disregard for the harm it is likely to cause.

Often, disinformation includes some truthful components or “half-truths.” This makes it more difficult for the audience to recognize the content as disinformation and makes it more believable.

Political disinformation, a subset of disinformation, is false information that is disseminated with an intent to shape perceptions around some aspect of political discourse.

Disinformation = false information + intent to harm

Misinformation is false or misleading information that is disseminated without an intent to deceive, mislead or cause harm. People spread it in error, without intending to deceive others and often without realizing that the content is false or misleading. Misinformation does not need to be wholly false; it can include information whose inaccuracy is unintentional, i.e., information reported in error.

Misinformation = false information + mistake

Closely related concepts

Mal-information

Mal-information is truthful information presented in deceptive ways in an attempt to mislead. Although the information is genuine, it is presented and shared in a manner intended to cause harm.

Mal-information = true information + intent to harm

Dangerous speech

Dangerous speech is “any form of expression (speech, text, or images) that can increase the risk that its audience will condone or participate in violence against members of another group.” Developed by the Dangerous Speech Project, this concept provides a constructive framework for approaching hate speech and for thinking about speech that is liable to cause violence.

Fake News

The term “fake news,” although widely used, has no accepted definition. It has become a catch-all phrase for news or information that people do not like, and it is often deployed to mislead. The term has also been appropriated by politicians to undermine the press and political opposition. Because it is imprecise and overused, this resource avoids the term; it is better to describe the content in question as disinformation, misinformation or mal-information.

Relationship between Disinformation and Misinformation

Diagram: the relationship between misinformation, disinformation, and mal-information. Credit: Claire Wardle & Hossein Derakhshan, 2017.

Often, purveyors of disinformation design campaigns to appeal to a wide audience so that readers will continue to spread the disingenuous messages. The people passing the content along may have no intent to cause harm and may not even know that the information they are spreading is false; at that point, disinformation circulates as misinformation. Studies in the United States have shown that people older than 65 are the most likely to spread disinformation and misinformation.

Actors behind disinformation campaigns also use bots and “astroturfing” techniques to mask their intentions and give disinformation a veneer of credibility.

Definitions

Astroturfing: the attempt to create an impression of widespread grassroots support for a policy or idea, using fake online accounts (often networks of bots) and fake pressure groups.

Bots (Coordinated Inauthentic Behavior): a bot can refer generally to a software program that performs repetitive tasks; in the context of social media, it refers to an automated social media account.

Click Bait: an article title, link, or thumbnail designed to entice users to view the content, which is usually sensationalized and misleading.

Cyberviolence, Cyberbullying: Cyberviolence refers to acts of abuse using digital media. Cyberbullying refers to cyberviolence that is characterized by an imbalanced power dynamic and is recurrent.

Deepfake: a photo or video that has been altered or entirely fabricated, typically using machine-learning techniques, to create a false depiction of something or someone.

Disinformation: information that is false and deliberately created to harm a person, social group, organization or country.

Doxxing: researching and publicly revealing private or identifying information, especially sensitive personal information (from the word “documents”).

Information Warfare: also referred to as Information Operations or Influence Operations; the use of information and communications technologies (ICTs), including social media, to influence or weaken an opponent, including with disinformation and propaganda.

Harmful Content (Toxic Content): precisely what qualifies as harmful content, sometimes called toxic content, depends on the platform’s definition and interpretation. It can refer to content that is illegal under national laws (defamation, terrorist content, etc.) or to content that is legal but unwelcome under community guidelines (slurs, stereotypes, etc.). As a helpful supplement to these slippery terms, the Dangerous Speech Project provides a framework for thinking about speech that is liable to cause violence.

Mal-information: genuine (factually correct) information that is reconfigured or shared publicly with the intention of causing harm (can include doxxing, defamation, and forms of trolling and harassment).

Misinformation: Information that is false or factually inaccurate, but not created with the intention of causing harm (rumors, myths).

Trolling: creating discord in an online discussion space, starting quarrels or upsetting people by posting inflammatory or off-topic messages.

Upload Filter: automated computer programs that scan content when it is uploaded to an online platform, before it is published. It is a strategy used by larger platforms to locate illegal or prohibited content (often to spot copyright infringement or Child Sexual Abuse Material).

User Generated Content (UGC): any form of content, such as images, videos, text, and audio, that has been posted by users on online platforms.

Virality: the tendency of an image, video, or piece of information to be circulated rapidly and widely.

How do disinformation and misinformation spread?

Disinformation, misinformation, mal-information, and dangerous speech – none of these is a new threat. What has changed are the media through which they are produced and spread. The internet and dominant online platforms provide new tools that amplify and alter the danger presented by these old challenges. Social media platforms, like Facebook and Twitter, in particular, spread and amplify disinformation, misinformation and dangerous speech at breakneck speed. Social media, blogs and other online spaces increasingly drive discourse and are where much of today’s disinformation takes root. And while networked communication is largely driving this change, traditional media also play a critical role in amplifying disinformation – often inadvertently serving the purposes of disinformation actors even when seeking to provide a countering force.

Media training for journalists in Sviatohirsk, Ukraine. International human rights law provides a framework to balance freedom of expression and other rights. Photo credit: UCBI USAID.

Large digital platforms rely on advertising revenue and so prioritize content that maximizes time spent ‘on platform’; the rise of ‘behavioral surplus’ as a revenue model further commodifies ‘engagement’ by transforming it into data. The incentives to manipulate a model that rewards engagement are clear – as are the results. Studies suggest that less than 60% of global web traffic is human, and in some years more than half the world’s internet activity comes from automated ‘bots’. As much as half of all YouTube views in a given year may have been bots ‘masquerading as human’. Platforms are not incentivized to eliminate fake traffic, as it earns them real revenue. The same models that reward ‘inauthentic’ behavior online also advantage content that generates engagement, be it intentionally deceptive, inflammatory, or simple clickbait looking for easy advertising-revenue dollars.

Research also suggests that human psychology may advantage information that reinforces existing views and biases, plays to emotions, or demonizes ‘out-groups’ such as minorities, over more fact-based, balanced information. Digital platforms that rank and recommend content algorithmically, often ‘optimized’ for ‘engagement’ and time spent on the platform, create ‘degenerate feedback loops’ that amplify and compound these effects. Research has shown that online disinformation and misinformation reach audiences more quickly than accurate information on the same subjects; on Twitter, a false news story will reach an audience of 1,500 people six times faster, on average, than an accurate one.

It is challenging to determine intentionality and therefore to distinguish between misinformation and disinformation. Indeed, disinformation tactics often involve tricking others into unknowingly forwarding inaccurate or harmful information. Despite a public-policy emphasis on media-literacy education for young people, those most at risk of believing and relaying misinformation are actually older people. Dangerous speech, rumors, myths, and propaganda are also nothing new. It is rather the architecture, interfaces and algorithms of the web and of social media — things like pseudo-anonymity, international networks, and virality — that have made these phenomena more potent.

Social media has nourished and empowered conspiracy theories in new ways, magnified and accelerated by network effects and virality. First, when content goes viral, the context around that information falls away (“context collapse”), and important qualifying aspects like the source, the author, even the date, fail to reach the viewer, distorting the way that information is received.

Further, the algorithms used by social media platforms have been shown to reinforce conspiracy theories, exposing people to, and encouraging them to interact with, increasingly sensationalist and unsupported content. YouTube in particular has been criticized for this, which may be partly attributable to the inherent differences between video and text content, and to the fact that Google did not anticipate that YouTube would become a critical source of news for people around the world.

Newspaper pages cover a wall in Ethiopia. The disruption of the publishing business model has been a slow-motion disaster for news organizations around the world. Photo credit: Jessica Nabongo.

The COVID-19 “infodemic,” as the World Health Organization has declared it, has seen a proliferation of conspiracy theories related to the virus that are gaining support far beyond the radical fringes of society. In the UK, theories connecting COVID-19 to 5G technology have inspired people to set fire to cell phone towers and led to the stabbing of an engineer. According to expert Claire Wardle, co-founder of First Draft, the coronavirus has created the “perfect conditions for conspiracy theories.”

Disinformation and misinformation campaigns are created and amplified through distinct yet overlapping techniques. There are three key aspects and mechanisms to the spread of disinformation and misinformation:

  • Creators of disinformation and misinformation engage in Information Manipulation.
  • Poor, unaccountable and unregulated Content Moderation means that disinformation and misinformation remain online.
  • Digital Advertising Models incentivize the creation and publication of incendiary, click-bait content.

Information Manipulation

Web content can be weaponized, altered and distorted to cause harm. Deepfakes are a prime example, in which photos or videos are altered or entirely fabricated to create a false depiction of something. Visual formats are particularly challenging to decipher. Deepfakes can be playful and artistic, or they can be used for political purposes, to harass and blackmail people, or to confuse and create chaos.

Deepfakes and other information-manipulation techniques are essentially a game of cat and mouse: as the technology becomes more advanced, less expensive, and more widespread, specialists and journalists are developing techniques to detect them.

There is concern that states will use information-manipulation techniques against one another. It was revealed that Russia interfered in the US and European elections by pushing content through fake and often automated social-media accounts and blogs to deepen existing political and social chasms. This “information warfare” is not a new concept, but it can certainly leverage new techniques in the digital age. The practice of using fake accounts and “bots” (sometimes called “coordinated inauthentic behavior”) is particularly associated with the Russian government and its “troll farms.” The concrete effect of these practices on election outcomes is difficult to measure, though some scholars consider it to be negligible.

The practice of astroturfing is also a common form of information manipulation, whether by foreign governments, interest groups, or even advertisers. The term comes from the word astroturf, or fake grass, and refers to the use of multiple online identities (bots) and fake lobbying groups to create the false impression of widespread grassroots support for a policy, idea, or product. Astroturfing can also be used as a technique to divert media attention or to establish a particular narrative around an event early on in the news cycle.

The threat of information manipulation has given many governments cause to impose sweeping laws on online content. There is a surge in laws across the world regulating terrorist content, false information, and hate speech. Not only do these laws risk being ineffective due to the many technical challenges of content moderation, they also threaten freedom of expression, either by reinforcing imperfect and non-transparent moderation practices or, in some cases, by directly limiting political opposition.

Content Moderation

Currently, only the social media companies can moderate the content they host and remove the networks of ‘bad actors’ that exploit the tools they provide – yet they do so with almost no accountability. Regulation of online discourse is already a highly contested space, with arguments ranging from freedom of expression to jurisdiction and beyond. These issues will only grow as disinformation actors adapt their tactics and target less visible spaces, including private and encrypted groups.

The content-moderation debate is so critical to disinformation because technical issues are colliding with philosophical questions, with fundamental freedoms and democratic mechanisms caught in the balance. Particularly important is the tension between freedom of expression and disinformation. Where is the line between freedom of speech and the potential harm of certain speech to individuals or to society? The idea of “free speech” on the web brings new risks. Though nations differ in their stances on freedom of speech, international human rights law provides a framework for balancing freedom of expression against other rights, and against protections for vulnerable groups. Still, content moderation may become more challenging as speech and content itself evolves, for instance through increased live streaming, ephemeral content, voice assistants, etc. For more, see also the Social Media resource.

Digital Advertising Models

The advertising models of the web have also made it possible for hateful, incendiary, or misleading websites and blogs to finance themselves. The system of Programmatic Advertising, explained in the next paragraph, is particularly problematic because advertisers in this system do not know where their ads will be displayed and end up financing outlets or sites that may be counter to their values.

Programmatic Advertising is the automated bidding on advertising space in “real time” for the opportunity to show an ad to a specific audience; it is conducted by algorithms and takes place in milliseconds, in the time it takes a webpage to load. Programmatic advertising made up roughly two-thirds of the $237 billion global ad market in 2019, with the Google/Facebook duopoly earning $174 billion, or 61%, of the global digital advertising market. This means that local, independent news organizations around the world are seeing their ad revenues decimated, as advertising brands and their money go towards an algorithm that prioritizes sensational and eye-catching content. The primary performance indicators in digital advertising are views and clicks, measured by the common metric CPM (cost per mille, or cost per thousand impressions). In the race for engagement, “fake news”—with its dramatic headlines and incendiary claims—consistently wins out over more balanced news and information.
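To illustrate how directly engagement converts to income under this model, consider a simple, purely illustrative calculation (the figures here are hypothetical, not drawn from this resource): revenue ≈ (page views ÷ 1,000) × CPM. At a $2 CPM, a sensationalized article that attracts 500,000 views earns roughly $1,000, while a carefully reported piece that draws 50,000 views earns only about $100, even if the latter cost far more to produce.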

The online movement Sleeping Giants has emerged in response to this problem, alerting companies when their ads are placed next to content they might not want to support. Check My Ads offers services to help keep brands away from fake news, disinformation, and hate speech. Another example of a response is the United for News “Inclusion List,” which provides media buyers with a global list of reputable local news sites.


How do disinformation and misinformation affect civic space and democracy?

News vendors ready their booths in Rangoon, Myanmar. Around the world, governments are pushing disinformation campaigns targeting human rights defenders and marginalized communities. Photo credit: Richard Nyberg/USAID.

Disinformation, mal-information, and dangerous speech campaigns are used by powerful interests and actors to manipulate elections; to target, harass, and discredit individual journalists and activists; to defend corporate interests; and to mobilize communities to support discrimination and violence. Misinformation, which may or may not have malicious origins, can be equally devastating; in the Philippines, for example, the percentage of people strongly agreeing that vaccines are important dropped from 93% in 2015 to just 32% in 2018, while in Ebola-affected areas of the Congo in 2018, more than 25% of people believed Ebola was a hoax.

The Erosion of Public Trust

A contested information environment contributes to increasing levels of public distrust. When the public is unable or unwilling to distinguish between legitimate and illegitimate or unreliable sources of information, basic democratic and societal foundations themselves come under threat.

This erosion of trust, whether or not the result of deliberate action, presents much broader, fundamental challenges, including to democracy, to the media, and to the very concept of a shared consensus around reality. Disinformation must be seen as an attack on trust as much as on truth. Information attacks often support both sides of an argument in order to increase discord; Russian bot-nets associated with the 2016 US election were subsequently used to promote both anti-vaccine and pro-vaccine information. Emerging trends such as deepfakes will likely further this untethering from reality, facilitating the dismissal of any contested claim and offering more plausible deniability to real perpetrators.

The hollowing out of the news media industry facilitates this trend. When people no longer have information sources they recognize as representative, they seek authenticity elsewhere. These pressures are being felt in countries and contexts with a strong history of independent media; but where institutions and information literacy start from a weaker base, the challenges are even greater.

Restricting Speech on the Internet

In response to the proliferation of disinformation and misinformation, governments are increasingly enacting “anti-fake news laws” and other laws that prohibit “false” information from being published on the internet. At present, approximately 10 countries have enacted legislation prohibiting “fake news” or false information. These include Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), Kenya’s Cybercrime Law, and Cambodia’s Social Media Directive. Several other countries have proposed laws that would prohibit sharing “fake news,” including Brazil’s proposed “fake news” law and the Kyrgyz Republic’s “false information” law, as well as proposals in Thailand, Nigeria, and the Philippines. These laws are part of a global trend towards undercutting the right to freedom of expression under the guise of tackling disinformation or harmful online speech, and they are often used to stifle dissent and criticism of governments and their policies.

Civil society organizations need to be aware of the relevant laws in the countries where they work to ensure that anything posted or published online does not run afoul of these types of “anti-fake news” laws. The jurisdictional reach of these laws also needs to be considered. For example, Singapore’s law creates global jurisdiction, meaning that a publication emanating from anywhere in the world, but which discusses, refers to, or references Singapore, must comply with Singaporean law. A Malaysian NGO had its website blocked and may face prosecution in Singapore for publishing an article about Singapore’s death row prisoners.

Laws aimed at combating disinformation and misinformation can take a variety of forms and names. For example, these types of content restrictions may be included in cybercrime laws, media/press laws, defamation laws, information technology laws, sedition laws, social media laws or others.

Political interference

The perception of political interference is backed up by alarming examples. Looking just at elections taking place between October 2018 and May 2019 – in the world’s four largest democracies (India, the US, Indonesia and Brazil) and in countries as diverse as Australia, Israel, Nigeria and Ukraine – there is evidence of: massive coordinated ‘bot’ campaigns; deceptive targeted political advertising; interference by geopolitical actors; disinformation and dangerous speech spread by encrypted messenger groups; manipulated or decontextualized viral videos; attacks aimed at harassing and discrediting journalists and candidates; troll farms; and astroturfing. It is hard to imagine a contemporary election without disinformation and misinformation playing some role. In Gabon, a disputed deepfake video contributed to an attempted coup.

Traditional media can reinforce disinformation, misinformation and dangerous speech, even when trying to provide accurate and balanced coverage. In the 2016 US election, while manipulation of social media has received most of the attention, it was mainstream media that played the greatest role in amplifying Russian interference, through largely unreflective coverage of hacked information drip-fed to exploit the media cycle. Media approaches explicitly intended to counter disinformation, such as fact-checking, are largely post-hoc and narrow in reach; the impact of fact-checking is incompletely understood and poorly monitored. Once people have formed an opinion, it is hard to change their minds.

Disinformation Campaigns Against Civil Society

Governments are increasingly developing and pushing disinformation campaigns that target civil society, human-rights defenders and marginalized communities. One of the most significant examples was Myanmar’s army using Facebook to push disinformation in order to incite violence against the Rohingya. Jair Bolsonaro utilized a widespread disinformation campaign on WhatsApp as part of his strategy to win the Brazilian presidency. Similarly, Georgian civil society finds itself constantly battling disinformation campaigns from Russia.

Governments are investing more time and resources to discredit civil society. A study from 2019 found that more than 70 countries have invested in tools to create and publish disinformation and misinformation. Countries use cyber troops, bots and private, purchased accounts to either create or disseminate disinformation in an attempt to shape public opinion.

Online Violence and Targeted Digital Attacks

The blurring of online and offline space can be dangerous for individuals and for democracy, as harassment, dangerous speech, and “trolling” behaviors offer new methods for violence, including organized violence. Online violence has been used to target journalists, human-rights defenders, and political minorities or opponents.

Journalists are often subject to cyber harassment and threats, particularly those who write about socially sensitive or political topics. Women journalists bear the brunt of digital harassment, through threats of sexual violence and other intimidation tactics. Online violence against journalists can lead to journalistic self-censorship, affecting the quality of the information environment and democratic debate.

The tools of the web provide new ways to spread and amplify hate speech and harassment. The use of fake accounts, bots, and even bot-nets (automated networks of accounts) allow perpetrators to attack, overwhelm, and even disable the social media accounts of their victims. Doxxing, by revealing sensitive information about journalists, is another strategy that can be used for censorship. For more, see also the Social Media resource.


What can be done to address disinformation and misinformation?

Residents listen to Ratego FM in Siaya County, Kenya. CSOs are uniquely situated to provide educational and outreach initiatives that empower individuals to recognize disinformation. Photo credit: Amunga Eshuchi.

Fact checking

In addition to citizen journalists who can confirm information from the ground, many digital tools allow journalists and media outlets to verify the accuracy of stories. The larger platforms are investing major resources in fact-checking teams and technologies, building partnerships with news agencies, and supporting the growing fact-checking industry, which is increasingly necessary to counter the hoaxes, conspiracy theories, rumors, and propaganda that appear on their platforms.

“Information detoxing”

Platforms can also leverage design strategies that work alongside fact checking to help address the problem of misinformation. Many platforms use a practice of “information detoxing,” strategically sending articles with corrective information to users who share false content. In the context of COVID-19, platforms like Facebook have begun telling users when they have viewed false information, as well as redirecting users who share false information to authoritative sources like the World Health Organization (WHO) or the Centers for Disease Control and Prevention (CDC). These types of interventions, along with design changes like limiting the forwarding of viral content, aim to slow the spread of misinformation and reduce the harm of dangerous rumors.

Open source intelligence (OSINT)

The mass of user-generated content and data available on the web, combined with technological advancements for collecting and analyzing this information, has allowed the field of open source intelligence (OSINT) to emerge. Open source intelligence combines long-form investigative journalism and online forensic research; investigators analyze publicly available information, gathered both on the surface web and from the deep web (pages that exist on the World Wide Web but whose contents are not indexed by standard search engines). The website Bellingcat has established itself as a leader in this space through its reporting on war zones and human-rights abuses, such as the use of chemical weapons in the Syrian Civil War and the truth behind the downing of Malaysia Airlines Flight 17.

Rumor Tracking

Rumor tracking is another strategy, made possible by social media, to address the dangerous proliferation of misinformation. Internews has developed a rumor-tracking methodology that has been successfully applied in many contexts, including recently in the COVID-19 response. Rumor tracking is conducted by monitoring social media and other public discussion platforms to understand people’s concerns and identify what false information is being shared. It is critical to understand the information context and to use local languages; Internews has partnered with Translators without Borders and Standby Task Force to analyze rumors around COVID-19, collecting data in six languages. This is particularly important given that the major social-media platforms tend to focus their content moderation and fact-checking on the most widely spoken languages and neglect local languages. Social media can also provide effective channels for knowledge sharing and the dissemination of accurate information. A recent study in Zimbabwe found that carefully crafted WhatsApp messages, shared in local languages, can change beliefs and behaviors in response to COVID-related misinformation.

Counterspeech

In response to forms of publicly visible harassment, trolling, and dangerous speech, the practice of counterspeech is also gaining traction, particularly in activist communities. Counterspeech, the online response by activists to hateful or harmful language, may aim to persuade the perpetrator or author of the hateful content, but more often it aims to positively change the online discussion for onlookers, to change the tone of online public space, and to counter stereotypes and harmful messages with inclusive, civic messaging. One example is the #Jagärhär (#Iamhere) group mobilization strategy begun by the Swedish journalist Mina Dennert to rally a massive response to abusive online trolls. The Swedish initiative has inspired similar online mobilization groups in other countries.


What can civil society do to limit disinformation?

Civil society has an important role to play in limiting the spread of disinformation, misinformation and dangerous content.

  1. First, civil society can act as a watchdog. By closely following social media in their communities, civil society can identify and expose disinformation campaigns as they emerge.
  2. Second, civil society is uniquely situated to provide educational and outreach initiatives that empower individuals to recognize disinformation. Civil society can also work with schools and universities to design and implement media-literacy programs.
  3. Third, civil society can apply pressure to tech companies, businesses, and advertisers that wittingly or unwittingly host, support, or incentivize creators of false and misleading content.
  4. Fourth, civil society can work with governments to replace “anti-fake news laws” and other broad content restrictions with narrowly focused laws that combat disinformation while protecting freedom of expression.


Questions

If you are trying to understand how to mitigate the risks of disinformation and misinformation in your work, ask yourself these questions:
  1. How does my organization verify information? What internal controls does my organization have to prevent the inadvertent spreading of disinformation or misinformation?
  2. What internal trainings or programming should we undertake to better understand the risks associated with disinformation?
  3. What are our potential responses to a disinformation campaign targeted at us or our partners?
  4. To stop disinformation, what strategies of distribution beyond publishing might we consider?
  5. When we publish something in error, what is our process for issuing corrections?
  6. What security protocols should be in place in case a staff member, participant or partner is threatened, doxxed, etc.?
  7. What programs or initiatives can we create and implement to improve media literacy in our community?


Case Studies

How the 5G COVID-19 conspiracy spread

“The level of interest in the coronavirus pandemic – and the fear and uncertainty that comes with it – has caused tired, fringe conspiracy theories to be pulled into the mainstream. From obscure YouTube channels and Facebook pages, to national news headlines, baseless claims that 5G causes or exacerbates coronavirus are now having real-world consequences. People are burning down 5G masts in protest. Government ministers and public health experts are now being forced to confront this dangerous balderdash head-on, giving further oxygen and airtime to views that, were it not for the major technology platforms, would remain on the fringe of the fringe. ‘Like anti-vax content, this messaging is spreading via platforms which have been designed explicitly to help propagate the content which people find most compelling; most irresistible to click on,’ says Smith from Demos.”

James Temperton, Wired UK Article, 2020.

Why the arrest of a journalist in Manila will echo around the world

Investigative journalist and founder of the news site Rappler, Maria Ressa warns that the mass manipulation of social-media accounts in the Philippines was a testing ground for changing power structures globally and is a threat to democracy. “In the aftermath of Duterte’s election in 2016, Ressa and The Rappler were among the first to sound the alarm on how fake news, particularly fake news on Facebook, shaped the Philippine election — a line of coverage that proved particularly prescient. The outlet has also been at the forefront of covering Duterte’s call to shoot and kill suspected drug users and dealers. Her team of reporters has investigated allegations of police misconduct and highlighted the lack of justice for victims of the police-led campaign… In his 2017 State of the Union address, [Duterte] called out the company by name, implying, without citing evidence, that it was foreign owned. Not long after, the country’s Securities and Exchange Commission opened an investigation into the company’s ownership structure. The commission later revoked Rappler’s license, a decision that was denounced by journalists and rights groups…”

Maria Ressa, The Washington Post, 2019.

Taxing dissent: Uganda’s social media dilemma

“Since July 2018, when the law went into effect, Ugandans have to pay $0.05 (USD) per day to access over 50 over-the-top media services (OTTs) — streaming media offered directly through the internet. The taxes apply to social media platforms and apps such as WhatsApp, Facebook, Twitter, Skype and Viber. Given that these social media platforms have been main news distribution sources, journalists noted a significant decline in the level of engagement with readers. With one-third of Ugandans living below the poverty line, surviving on $1.90 USD per day, the new tax drove thousands offline and off social media to meet other basic needs… Uganda’s finance ministry said the aim of the tax was to raise revenue, but President Yoweri Museveni also called for the tax to regulate ‘gossip.’ Activists slammed it as an attempt to restrict free speech and crack down on dissent…”

Sandra Aceng, Global Voices, 2019.

How Facebook can Flatten the Curve of the Coronavirus Infodemic

“Facebook CEO Mark Zuckerberg and other company executives launched a media blitz to publicize the company’s expanded efforts to stop the spread of COVID-19 misinformation. Facebook announced that these efforts had been “quick,” “aggressive,” and “executed…quite well.” In February, our team began detecting and monitoring widespread misinformation about COVID-19 online. In March, our investigative team set out to analyse and assess the efficacy of Facebook’s efforts to combat this “infodemic” on its main platform. For this study, of the thousands of pieces of coronavirus-related misinformation content being shared on Facebook, we decided to examine over 100 pieces of misinformation content in six different languages about the virus that were rated false and misleading by reputable, independent fact-checkers and could cause public harm. We found that millions of the platform’s users are still being put at risk of consuming harmful misinformation on coronavirus at a large scale. Representing only the tip of the misinformation iceberg, we found that the pieces of content we sampled and analyzed were shared over 1.7 million times on Facebook, and viewed an estimated 117 million times. Even when taking into consideration the commendable efforts Facebook’s anti-misinformation team has applied to fight this infodemic, the platform’s current policies were insufficient and did not protect its users.”

Avaaz, April 15, 2020. (Study) 
