Poison or Medicine? How Facebook Algorithms Facilitated a Genocide

“True medicine if you know how to use it, poison if you don’t.” It is with these words that a young interviewee from Myanmar described Facebook to researcher Whitten-Woodring. The quote perfectly encapsulates the dual nature of social media: while social platforms can be extremely beneficial to the flourishing of societies and democracy, they may also, in certain cases, favor social fragmentation and the rise of authoritarianism. The role that Facebook has played in Myanmar over the last decade is a textbook example of how social platforms can exacerbate social division.

As the world grows digital, media landscapes are drastically changing. With digitalization and the transformation of media ecosystems came great hopes that social networks would foster meaningful political participation, enlarge the public sphere, and advance democratic values. Yet social platforms, while encouraging the expression of oppositional voices, have also been used to crack down on those very dissenting forces. This has been referred to as “the paradox of social media”. In certain instances, social networks facilitate the establishment or maintenance of authoritarian rule. On top of that, laws crafted to limit the spread of harmful information are often weaponized to stifle contestation. Information manipulation has become a matter of growing political and human rights concern globally. The issue is particularly acute in Southeast Asia, where digital dictatorship is rising as civic space shrinks. Social media and the economic models behind them provide fertile ground for “information disorders”: the spread of false, misleading, or inflammatory content.

Social platforms have mainly been accused of accelerating societal polarization and inciting violence, leading in the most extreme cases to genocide and mass killings. As the Pew Research Center writes, digital tools “are used to exploit people’s frailties, stoke their rage and drive them apart”. Information disorders amplify social fragmentation; one of their primary effects is that people become increasingly isolated, cutting themselves off from information they dislike or that contradicts their prior assumptions. Social media algorithms may also consolidate ethnic identities and reinforce “us versus them” mentalities. Researchers analyzing the use of Facebook by armed groups in Myanmar have observed several instances of vilification of the other. Fabricated content that dehumanizes certain individuals or groups may encourage, and even legitimize, foreseeable acts of violence against them. Research has shown that misinformation, disinformation, and hate speech on social media have become key drivers of conflict dynamics, violence, and harm.

This partly has to do with the business models social networks rely on. Social media are profit-driven enterprises. Platforms’ revenues grow with user engagement, namely the time users spend on the platform and their interactions with the content they view. To boost engagement, social media platforms use algorithms to curate content that appeals to users’ interests, appetites, and fears. To this end, algorithms are fed user data and deliver more of the type of content users have already interacted with. Because social media structures are designed to prioritize content shared by “friends” or people with strong connections to one’s social circle, members of online communities end up seeing similar sets of information. This creates echo chambers, or “filter bubbles”, in which users are surrounded by information confirming their pre-existing beliefs.
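The feedback loop described above can be illustrated with a short sketch. This is a deliberately simplified, hypothetical model of engagement-based ranking (the function name, topics, and scoring are illustrative, not Facebook’s actual system): topics the user has engaged with before score higher, so the feed converges toward content that confirms what the user already consumes.

```python
from collections import Counter

def rank_feed(posts, engagement_history):
    """Rank posts by predicted engagement (toy model).

    posts: list of (post_id, topic) tuples.
    engagement_history: list of topics the user previously interacted with.
    Returns post ids ordered from most to least 'engaging'.
    """
    # More past interactions with a topic -> higher weight for that topic.
    topic_weight = Counter(engagement_history)
    # Score each post by how often the user engaged with its topic.
    scored = sorted(posts, key=lambda p: topic_weight[p[1]], reverse=True)
    return [post_id for post_id, _ in scored]

posts = [("p1", "sports"), ("p2", "politics"),
         ("p3", "politics"), ("p4", "weather")]
history = ["politics", "politics", "sports"]
print(rank_feed(posts, history))  # -> ['p2', 'p3', 'p1', 'p4']
```

Note the self-reinforcing dynamic: the politics posts rise to the top, the user engages with them, the weight for "politics" grows, and content on other topics is pushed ever further down the feed.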

In the case of Myanmar, disinformation on social media has helped justify violence perpetrated against minority populations. The country has a history of misleading and inflammatory content being spread by the military, affiliated groups, and religious actors. Due to six decades of military dictatorship, Myanmar was one of the most closed-off countries in the world until the early 2010s. This long period of authoritarianism had drastically restricted outside information and new technologies from permeating Myanmar society. The opening of the country led to media reforms and a liberalization of the information ecosystem under the presidency of Thein Sein. Censorship was abolished in 2012, and increased telecom competition led to a drop in communication costs. Smartphone use subsequently exploded. Facebook rapidly developed a Burmese-language version of its platform offering a broad range of services, from news to weather and health information to job ads.

Mobile phones were sold with Facebook pre-installed, and the network became the dominant social platform in the country, with online public discourse increasingly moving there. Facebook became the primary source of information for much of the population. This over-reliance on social platforms for news contributed greatly to the spread of harmful information. As explained earlier, Facebook’s features enabled such content to go viral. The most notable example is the role the tech giant played in inciting genocide against the Rohingya, a Muslim ethnic minority. Rohingya people were dehumanized online and depicted as aggressive outsiders seeking to outnumber Buddhists, thereby posing a threat to Myanmar’s national identity. Facebook posts, with their wide reach, interactivity, and viral potential, generated an atmosphere of heightened anxiety.

Hateful content helped justify violent acts against Muslim communities, portraying such acts as normal responses to an evil other. The military also spread rumours claiming that both Buddhist and Muslim groups were about to carry out attacks against the other. Such content contributed to escalation, with perceived threats materializing and potential harm turning all too real. Anti-Rohingya posts were more popular and could therefore spread more rapidly. Algorithms prioritized incendiary information, such as extreme speech and disinformation targeting a particular group: it is indeed the most graphic or violent content that attracts the most user engagement on Facebook. Such messages are thus likely to be amplified through Facebook’s newsfeed algorithm. Echo effects also played an important role here, as those who engage with extreme speech are served even more hate speech. Content mediated through the pages of religious and social actors further reinforced echo chambers and group boundaries.

The diffusion of anti-Rohingya hate speech and misinformation on Facebook ultimately resulted in the displacement of more than a million people, with thousands of others killed, sometimes in the most horrendous ways. Today, social media are being weaponized by Myanmar’s military junta in the brutal war it has been waging against any form of opposition to its outrageous February 2021 coup. Online platforms, however, have also allowed the resistance to organize itself, leading to the formation of a National Unity Government in April 2021 and the establishment of several opposition groups throughout the country. Let’s hope that in the very near future, history books will recognize the role that social media played in fostering solidarity and unity in the face of oppression and injustice in Myanmar. True medicine, if you know how to use it.

By: Mélina Froidure

Image: Saw Wunna