<p>In July, an anonymous Twitter handle that purportedly offers ‘unpopular unapologetic truths’ distastefully advised its male followers to "only marry virgins". A quick Twitter search suggests that this wasn't the first time the account had engaged in such rhetoric, nor was it the last – but on this particular occasion it broke out from its regular set of followers to garner wider attention.</p>.<p>Understandably, there was outrage. Some of the account's past content was called out, its regular followers were called out, and both the tweets in question and the account itself were reported in unison by multiple users. However, two days later the account declared victory, stating that interest in its content had increased and 'weak' followers had been cleared out.</p>.<p>Earlier in the year, efforts by the campaign 'Stop Funding Hate' led to a movie streaming service, a business school and an ad-network excluding a far-right Indian website from their ad programs. However, the website itself claimed an increase in voluntary contributions of 'upto 700 per cent' and also stated that there was no drop in advertising revenues.</p>.<p>And in an ongoing instance, in late August, a news anchor tweeted a ‘teaser’ video of an upcoming series that claimed it would unearth a conspiracy enabling minorities to occupy a disproportionate number of civil services posts in the country. An indicative analysis using the tool Hoaxy seemed to show that much of the initial engagement came from quote tweets that were meant to call out the nature of the content.</p>.<p>Often, many of the accounts doing the calling out had large followings themselves.</p>.<p>Around the same time, an <a href="https://twitter.com/katestarbird/status/1299029965845880832" target="_blank">analysis</a> by Kate Starbird, an eminent crisis informatics researcher, showed a misleading tweet by Donald Trump spreading “much farther” through quote tweets than through retweets. She also pointed out that many of the early quote tweets were critical in nature, calling on the platform to take action.</p>.<p>While the matter of this particular series is sub judice, let’s focus on the days just after the tweet in question. In four days, the anchor’s follower count grew by nearly five per cent. In the ensuing period, there have also been multiple hashtag campaigns professing support for both the anchor and the channel.</p>.<p>What is common in each of these situations is that efforts to call out problematic content may have inadvertently benefitted the content creators by galvanising their supporters (in-group), propagating the content on digital platforms (algorithmic reward) and perhaps even recruiting new supporters who were inclined to agree with the content but chose to participate only as a result of the amplification and/or perceived attacks on their points of view or beliefs (disagreement with the out-group).</p>.<p>Undoubtedly, in each of these cases, the actions also had several benefits: signalling that such content is not kosher has immense value, and when that signal comes from influential individuals, even more so. It can also strengthen in-group ties and expand the group by encouraging more people to participate.
The melding of group dynamics and amplification, or ‘algorithmic reward’, highlights the challenges faced even when people want to “do the right thing”. While these challenges are admittedly not new, the affordances made possible by social media platforms add several degrees to the difficulty. The frequency of these events also means that we are now locked in a continuous cycle of conflict. Many of these conflicts may seem ephemeral, but they often bleed into each other, and even when they don’t, their effects are rarely as fleeting as the events themselves. The end result is the long-term degradation of the information ecosystem we inhabit.</p>.<p>It should be noted that the intention is not to advocate against pushback. To quote Whitney Phillips, Assistant Professor at Syracuse University: “Pushback is important, it is also deeply fraught.”</p>.<h4><strong>Moving towards solutions</strong></h4>.<p>The question that now arises is how to tame the beast without feeding the monster. Admittedly, this isn’t made any easier by the ongoing clash between what are derisively referred to as ‘cancel culture’ and ‘both-sidesism’.</p>.<p>There are no silver bullets, but there are two frames that can aid a better understanding of the dynamics at work.</p>.<p>First is the idea of the internet's ambivalence, proposed by Whitney Phillips and Ryan Milner in their 2017 book, The Ambivalent Internet. It espouses the idea that situations like those described here, the responses to them and their effects are all ambivalent. The authors argue that the same set of behaviours “that can wound can be harnessed for social justice”. Embracing this ambivalence at multiple levels is a step towards breaking out of the ‘cancel culture’ vs ‘both-sidesism’ allegiance and considering events case by case. It exhorts one to consider who the speaker is, who is listening, and whether the message is reinforcing existing power structures or challenging them.</p>.<p>Second is the <a href="http://dangerousspeech.org/" target="_blank">Dangerous Speech framework</a> developed by Susan Benesch, which considers five elements – the speaker, the message, the audience, the social and historical context, and the medium – to arrive at a qualitative assessment of whether a message could increase people's willingness to commit violence.</p>.<p>The former helps contextualise a given situation, and the latter helps assess the possibility of real-world harm. Together, they can guide choices regarding the necessity, level and nature of pushback, as well as an understanding of what to expect as potential consequences.</p>.<p><em>(Prateek Waghre is a research analyst at The Takshashila Institution and writes MisDisMal-Information, a newsletter on information disorder from an Indian perspective)</em></p>.<p><em>Disclaimer: The views expressed above are the author’s own. They do not necessarily reflect the views of DH.</em></p>