A plan to document and discredit disinformation campaigns


The Media Manipulation Casebook

Nieman Reports article by:

Brian Friedberg, Emily Dreyfuss, Gabrielle Lim, and Joan Donovan

A tool to help journalists, researchers and legislators know how and when to respond to misinformation in all its forms.

In 2020, amid a pandemic, protests, and a presidential election, disinformation lurks everywhere. It's on our social media, it comes out of the mouths of our politicians, and it's printed on flyers mailed to our doorsteps, intermingled indistinguishably with the facts. The World Health Organization has called it an infodemic. Part of this is the result of intentional media manipulation campaigns, scams, hoaxes, and tricks crafted by people with an agenda. This misinformation, like a virus, is contagious and life-threatening, for individuals and for democracy itself.

It didn't start out this way. The advent of online communication, and the great possibility of connection that came with it, allowed people to find one another based on interest and affinity like never before, and gave new tools to those engaged in cultural production. Innovative scientists, advocacy groups, and independent media thrived on new advances in networked communication and broadband technology, establishing their communities on the open web and social media.

But as the naivety of the techno-utopian age fades into the horrors of the infodemic, we now see platforms playing defense after knowingly allowing radicalization to flourish. The direct damage of ransomware attacks on our vital institutions, the cyber-soldiers of oppressive regimes, for-profit disinformation teams, harmful conspiracy theories rooted in antisemitism and medical disinformation, and the celebration of extremist violence are wearing down our institutions, which have little or no ability to identify the origin of these attacks.

We, the Technology and Social Change team at Harvard's Shorenstein Center on Media, Politics and Public Policy, are publishing the Media Manipulation Casebook to help cut through this noise. The Casebook is a database of case studies of media manipulation campaigns, some old, some ongoing, which we hope will provide a framework for analyzing this phenomenon. We intend this research platform to be both a resource for academics and a tool to help researchers, technologists, politicians, civil society organizations, and journalists know how and when to respond to the real threat of media manipulation.

The heart of the Casebook is the media manipulation life cycle, which presents a methodology for understanding the origins and impacts of media manipulation campaigns, both national and international, and their relationship to the broader information ecosystem. Situated in the emerging field of critical internet studies, it is the product of three years of research on how journalists, civil society groups, and technologists deal with media manipulation and misinformation campaigns. We take seriously the need for a set of cross-sector definitions to help us make sense of manipulators' tactics and the communication strategies they employ to mislead the public.

The stages of the media manipulation life cycle. Technology and Social Change project, Shorenstein Center on Media, Politics and Public Policy

Here, we break down how each stage of the life cycle works and the ways that different groups of people trying to fight back can be most helpful. Media manipulation not only affects journalists and social media companies; it presents a collective challenge for all of us who believe that knowledge is power. Like a hammer in a world full of nails, the Casebook offers a way to analyze interactions in our media ecosystem that is consistent with current investigative and journalistic practices seeking to bring us closer to the truth.

Stage 1: Campaign planning


Media manipulation campaigns are a product of our culture and of Silicon Valley. As the tech industry's products spread globally, driven by a technocratic, for-profit machine, pre-existing social problems were reproduced and amplified along with them. In many of the media manipulation campaigns we catalog in the Casebook, you see small groups of motivated actors, often driven by these toxic social forces, opportunistically using technology to scale and amplify their impact.

Establishing who these people are and why they act is extremely difficult. Social media platforms, the main targets of extremists and media manipulators, are increasingly opaque and difficult to study critically. This makes establishing the intent behind, and attribution of, misinformation artifacts and harmful propaganda a time-consuming and emotionally draining process for journalists and researchers. Behind every visible campaign plan is another layer of communication invisible to outsiders, another new platform used to evade regulation and oversight.

But the opacity of content moderation around these materials is exactly what makes critical outside research and journalism such a necessary part of pushing for change.

Uncovering evidence of campaign planning and coordination requires domain expertise, which takes time to build. This information can be collected in real time by an observer dedicated to understanding the dynamics of subcultural spaces online, but it is often only available forensically. We know the extent to which the far right organized ahead of Unite the Right, for example, because of the chat logs leaked by Unicorn Riot. In our case studies, where possible, we illustrate what the beginning of a campaign looks like and explain how other researchers and journalists can cultivate that domain expertise themselves. Our case studies on the phenomenon of fake Antifa accounts on social media and the digital blackface campaign "Operation Blaxit" show how planning and coordination can be detectable for those who know where to look.

Discovering campaign planning and establishing intent is impossible without qualitative research that contextualizes how and why a campaign was created. Rather than relying on large anonymized data sets handed over by platforms, or on increasingly restricted access to information, our research methods incorporate ethnographic, sociological, and anthropological insights into human communication to make sense of the mess. Included as part of our methodological package is the “Investigative Digital Ethnography,” a guide for academics and journalists seeking to design social media investigations that lead to deep insight into the communities targeted by disinformation and those that reliably produce it. While there will always be another layer to a disinformation campaign that we cannot see, we, as journalists and researchers, must conduct clear and reproducible investigations to collectively address the many online harms we face today.


Stage 2: Seeding the campaign on social platforms and the Web


Stage 2 is when a campaign moves from planning to execution, when memes, hashtags, forgeries, and false or misleading information are seeded on social media, fringe news sites, blogs, and forums. This stage documents the earliest point at which a campaign spreads beyond its original creators, often with the help of willing participants, online influencers, and networked factions. If the messages and calls to action are engaging enough, the campaign grows and reaches new audiences who often have no idea of the origins or motivations behind what they are now seeing.

Whether and how to intervene at this stage is unclear. At what point do you intervene? How egregious is the content? What is the likely outcome? Will the intervention backfire? This is where civil society organizations (CSOs) play an important role. Because of their domain expertise and their connections with the individuals and groups most likely to be affected by an ill-motivated influence operation, CSOs not only know where to look, but better understand the attack vectors, the wedge issues likely to be exploited, and the context and nuance needed to discern what action, if any, should be taken. CSOs with the ability to monitor such activities thus become invaluable actors in preventing a potentially dangerous influence operation from moving to the next stage.

Often faster to act and armed with more technical knowledge, CSOs can counter messages before they reach key audiences, dispel potential misconceptions about an issue, and press platforms to respond. Here, humor and creativity are assets that activists can harness to counter mis- and disinformation. CSOs are often the first to notice when something seems questionable and can be a trusted resource. Tech companies and researchers should also take note, as the most effective interventions will likely involve all parties.

Stage 3: Responses from industry, activists, politicians and journalists
Stage 3 of the life cycle model documents how highly visible people and organizations outside of a manipulation campaign react and respond to it. These individuals or institutions can be politicians, government agencies, celebrities, influencers, civil society organizations, or journalists. It is almost always after the reactions of these culturally powerful people that a manipulation campaign becomes more visible and dangerous. Stage 3 is a turning point. What happens during this critical period determines whether the campaign receives undue amplification and attention, or fails.

It is at this stage that journalistic judgment matters most. Media manipulators crave attention. If the goal of Stage 2 is to set an internet trap for attention, Stage 3 is where the campaign catches it.

Journalists are often the ones who find these traps, since their job is to search out important information that the public needs to know. Journalists are on the hunt. And that's why they have to think of media manipulation campaigns as snares, laid across the internet to catch them. When they find evidence of a campaign that is still in Stage 1 or 2, journalists must carefully balance the need to report true events against the need not to fall victim to a manipulation campaign. Sometimes it is not in the public interest to report on incipient campaigns.

To determine whether reporting in Stage 3 will do more good than harm, journalists must ask themselves: Does this nascent media manipulation campaign have the potential to cause real harm? Are influencers responding to it and spreading the word? Does it seem like a lot of people are falling for it and embracing its harmful messages? If the answer to these questions is yes, then reporting is warranted. If the answers are less clear, they should use their best judgment.

As some of our case studies show, the worst thing journalists can do in Stage 3 is report on a media manipulation campaign at face value, repeating the campaign's misinformation and framing. In that case, the journalists have been misled. That is an obvious victory for the manipulators.

But journalists can still amplify a manipulation campaign even if they get the reporting right, which makes Stage 3 extremely difficult to navigate. If a disinformation campaign is sputtering along on social media, an article in the mainstream press, even one that accurately points out how false or misleading the campaign is, can be the spark that sets the operation alight.

In that situation, the correct move may be not to write a story at all, to exercise strategic silence.

But if it's too late for strategic silence, because other news organizations are already amplifying the campaign, social media platforms are serving it to large audiences that are already acting on it, or high-profile people are already responding to it, then you are already in Stage 3 and it is appropriate, even necessary, to report on it.

One way to think about this is: as a journalist, you rarely want to be the one to kick off Stage 3. You only want to start Stage 3 with your reporting if a campaign has already gained so much traction out of public view that harm is occurring or imminent.

In that case, the most important thing is to report critically. This means practicing "strategic amplification." It means following the rubric of the truth sandwich: start with what is true, quickly debunk what is false, and then return to what is known. What is known can be things like who is behind the campaign, where it was planned, who it hurts, and how it fits into the current news cycle and the media manipulation life cycle. Do not link directly to campaign operators' handles and websites if you can avoid it. Don't make it easy for readers to use your reporting as a way to find, spread, or join the campaign.

Journalists also have a crucial role to play in Stage 4: Mitigation. By Stage 4, a campaign has reached a viral tipping point, such that a media corrective is clearly necessary. Whether that corrective is effective depends on the situation, but such reporting is always justified because the campaign has reached a certain level of public awareness.

Stage 4: Mitigation


Once a campaign is amplified into public consciousness, a number of stakeholders must act to mitigate its damage. Journalism plays a crucial role here too, actively fact-checking and debunking individual disinformation campaigns and bringing the actions and impacts of malicious actors on social media platforms to the attention of civil society, technologists, and legislators.

As newsrooms adapted over the past four years to the normalization of misinformation on social media, they established regular fact-checking and debunking beats. Fact-checkers have written thousands of articles debunking misinformation and conspiracies because they see audiences being repeatedly targeted with sensational and outrageous content online. It is a waste of resources that could be far better spent sustaining journalism rather than doing the platforms' content moderation for them. Dedicated fact checks are a form of mitigation, dominating search results for confirmed manipulation campaigns.

Mitigation efforts often fall to civil society, which suffers the long tail of manipulation for years as misinformation spreads. Journalists, medical and public health professionals, civil society leaders, and law enforcement personnel are bearing the true cost of responding to ongoing misinformation.

The evidence they collect adds up and can help pressure platforms to change their systems or terms of service. Civil society coalitions, like Change the Terms, have lobbied for years to force platform companies to take responsibility for the harm that proliferates on their sites. Content moderation should not be the job of civil society or of the communities being harmed.

Platform companies are the ones wielding the power of content moderation in Stage 4. They can take down accounts, remove content, and ban terms; in short, they can shut down media manipulation campaigns if they take the right actions at the right time. Deplatforming manipulators and hate mongers works. But these mitigation efforts often come too late, as with the takedown of the white supremacists who planned the murderous Unite the Right rally or the long, slow growth of the QAnon movement. One example in our Casebook is the case of targeted harassment of an alleged whistleblower, in which some social media companies followed the lead of mainstream journalism and blocked the use of a specific name on their platforms to protect that person from harm.

But platform companies often respond too late or don't respond at all. We know that platforms like Facebook have knowingly allowed radicalization to spread with deadly results. Although they have policy departments focused on minimizing harm and have committed time and again to making their platforms a safe and equitable environment, they often do not take action until they are compelled to do so by civil society and journalists. Their disparate mitigation efforts are neither coordinated nor standardized, allowing manipulators to take advantage of an asymmetric media environment to execute attacks.

In the regulatory vacuum, we repeatedly see platforms fall back on brand protection, acting only once a campaign has ended or adapted. In January 2020, Facebook released a statement: “In the absence of regulation, Facebook and other companies are left to design their own policies. We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all.”

This reveals that in Stage 4, the missing power brokers are regulators, who could create standardized rules for platforms but have so far largely abdicated that duty or found it too difficult.

Stage 5: Adapting the campaign


As many of the cases in the Casebook reveal, despite some mitigation, media manipulation campaigns often find ways to continue. In Stage 5, campaigns adapt where they can, sometimes overnight and sometimes over the course of several years, as in the case study of the Operation Blaxit digital campaign or the enduring Pizzagate conspiracy theory. Operators often know the best ways to exploit sociotechnical systems, using anonymity to avoid attribution and relying on edited materials and coded language to evade automated content flagging. While these individuals or groups may be out of reach, the major social media platforms remain the primary attack vector for such campaigns and have a responsibility to curb the impact of this behavior.

Successful platform mitigation is the only way to curb the impact of manipulators' adaptation. The movie "Plandemic," which claimed that the Covid-19 virus was deployed by powerful elites to create a new world order, went super viral in the spring of 2020. It was pulled after receiving nearly two million views, but it continued to circulate on smaller video platforms. Prior to mitigation, this disinformation campaign operated in the open, even announcing a follow-up movie, "Indoctornation," in advance. When that movie was released, the platforms were ready. By taking proactive steps, the major platforms did much to stop the documentary's spread and were able to prevent a repeat of "Plandemic's" virality. As a result of cross-sector coordination, "Indoctornation" received far less attention. Motivated manipulators will continue to adapt, but without the amplification capabilities of social media at their disposal, their audiences will shrink dramatically.

Keeping track of this ecosystem is difficult. Campaigns are hard to find and hard to identify while they are being seeded, a challenge for journalists, our institutions, and civil society. The uneven and halfhearted mitigation practices of platforms leave room for manipulators to adapt. But at the Technology and Social Change project we present this model, open to many disciplines and research practices, as a means to detect, document, and debunk disinformation in all its forms. It is a framework for policymakers seeking to understand the impact media manipulation has on and off platforms, and how those platforms' designs remain open to continued exploitation. And we hope it is a blueprint for journalists and researchers seeking standards for how to address the current information crisis.