The political world is awash in a growing sea of social media-fed misinformation, loosely called fake news. Each week brings eyebrow-raising reports of a threat poised to upend America’s already dysfunctional political landscape, or reports that those at the helm of online information ecosystems delight in distorting reality and disrupting societal norms.
Last week, the New York Times reported that unnamed programmers used open-source Google code to create an app putting Michelle Obama’s face on an actress in a porn video. Earlier Times reports said faked videos could be coming to campaigns. This week, the New Yorker profiled Reddit, an anarchic site with 1 million subgroups (where Times reporter Kevin Roose discovered tips about forging political porn videos). Reddit CEO Steve Huffman even confessed he considers himself “a troll at heart… Making people bristle, being a little outrageous in order to add some spice to life—I get that. I’ve done that.”
There’s nothing new about political distortions or rabble-rousing in American culture and politics. But just as social media is revolutionizing and accelerating aspects of the way people and campaigns communicate, these frontline dispatches heralding a disinformation dystopia are frequently missing a key element: context, or magnitude, so readers know what matters—and doesn’t—about the purported threats or trends. This omission is significant, because as the March issue of Science noted, “about 47 percent of Americans overall report getting news from social media often or sometimes, with Facebook as, by far, the dominant source.”
What is known about how social media platforms trigger the brain and stimulate behavior is changing. While campaign consultants almost always say they cannot know if their partisan messaging on social media affects how targeted voters behave on Election Day, scholars are starting to publish research tracing how and why social media is radicalizing politics.
Last week, Science published a study that analyzed 126,000 rumors spread on Twitter and traced how propaganda spreads further and faster than facts do. People are drawn to falsities and like to share them, and social media super-charges that process, its authors said. In a separate article, 15 social scientists warned that this dynamic is fanning political extremism.
“Our call is to promote interdisciplinary research to reduce the spread of fake news and to address the underlying pathologies it has revealed,” the co-authored article, “The Science of Fake News,” concluded. “We must redesign our information ecosystem… We must answer a fundamental question: How can we create a news ecosystem and culture that values and promotes truth?”
That call is not unique, but it highlights how sophisticated the challenge is. Social media uses brain-mimicking artificial intelligence that serves up the content people see. That targeting is based on advanced computing that profiles every online user’s keystrokes. This technology was developed for advertisers to provoke sales. But once it is imported into political campaigns, where agendas, candidates and smears are the products being sold, the result is an outbreak of propaganda or provocations that blur lines between perception and reality; between personal prejudices and more objective truths.
Ask political consultants about this dynamic and they’ll reply that one can’t predict how people receiving their messaging will react. On the other hand, social scientists, academic media analysts and former social media executives are saying that’s not entirely so. They say human behavior is frequently predictable, and some academics have begun to connect the dots and produce an evidence trail.
Who Knows What’s Really Going On?
Take what we know about how Donald Trump’s campaign used Facebook in 2016. It identified and targeted 13.5 million persuadable voters in 16 states with 100,000 different messaging “variants”—the equivalent of having 100,000 campaign ads at your disposal. Of course, top Trump staffers in October 2016 bragged to Bloomberg.com that using the platform to encourage and suppress voters would elect Trump. But did Trump’s use of Facebook tip the race?
“We have precisely no evidence that the Russia stuff or anything that Trump’s campaign did moved any votes. If they did, we don’t know,” said Colin Delany, the founder of Epolitics.com and a columnist specializing in online campaigns for Campaigns and Elections magazine. “We know Trump’s people spent whatever he spent on Facebook. We know they ran all these variants. But campaigns do a lot of things that don’t work all the time.”
How can advertising experts and political consultants know so much about voters they repeatedly target before Election Day—by merging voter profiles compiled by political parties with the personal profiles generated by Facebook’s advertising-driven supercomputers—yet not know what will likely happen when their micro-targeted audiences actually vote?
“You’re never going to be able to get metrics on it. It’s not new,” said John Zogby, a nationally known pollster and Forbes contributor. “Number one, there’s the old cliche we always use. This is about the focus groups, where people say, ‘Oh, I don’t pay any attention to television advertising or jingles,’ and they are humming the jingle for Crest going down the toothpaste aisle.”
“I did eight focus groups in 2004 in battleground states, and when I’d ask people about the impact of negative advertising, they’d say, ‘I just shut it off,’ [or] ‘I don’t pay any attention to negative advertising,’ [or] ‘I don’t pay any attention to it whatsoever.’ I said, ‘Well, let’s go around, what’s the first thing that comes to mind when you hear these names: John Kerry?’ ‘I was for him before I was against him.’ ‘He changes his mind all the time.’ There you go. [They recite the attack ads.] How do you get a metric on that?”
Zogby is saying it’s all but impossible to measure what people are subconsciously processing, or what’s motivating their behaviors during the campaign season, especially on Election Day when they leave their digital devices and fill out a ballot. That’s different from tracking what’s happening online before then, as Trump’s campaign did, watching as its Facebook targets liked, commented on, or shared its political messaging—or took another action like submitting an email address, volunteering or agreeing to attend a rally.
But Zogby also fears the technologists behind Silicon Valley’s top social media platforms are in the dark in a different way about what they have unleashed on the political world. They don’t realize that devices, and what’s presented on screens designed to exploit human nature, are extremely powerful in political contexts. (For example, the MIT study of Twitter reported in Science said people, not computer robots, are mostly responsible for spreading inflammatory content. Why? Because human nature is more drawn to what’s unusual, conspiratorial and echoes one’s beliefs and biases than to sharing facts.)
“The folks at Facebook and Google are amazingly brilliant people, but they have no idea what they have created,” Zogby said. “And that to me is the scariest piece. They can’t ultimately rein it in, because they don’t even understand [how it invites abuse and the impact]. But in the final analysis, is it anywhere as big as people think it is? Or is thinking it’s big itself the power that it has?”
Zogby’s final point is what some academics are now grappling with—tracing, with precision, how social media is distorting political communications. In the same week the New York Times reported that public figures (or any of us) may now have to worry about being pasted into porn videos, the University of Michigan School of Information announced the formation of a new Center for Social Media Responsibility, and hired one of President Obama’s former social media managers to run it.
“We’re looking at some slices of this,” said Garlin Gilchrist II, the center’s executive director, referring to social media’s positive and nefarious uses in politics. “We’ve done some work on what is the proportion of news that was shared online over the last couple of years that came from an unreliable source, specifically Facebook.” That soon-to-be published report, which is expected to be striking, is one example. Another research project is tracking how members of Congress use Twitter, sharing information or opinions that mirror their views. In both instances, the goal is arriving at a more specific understanding of how online echo chambers affect the intersection of the personal and the political.
“One of the reasons that we have launched this effort at the U.M. School of Information is to put in place infrastructure to be able to answer those types of questions, to get toward those answers,” Gilchrist said. “The better we understand networks and characteristics of information that flows through them, and how people are engaging and using them, the better recommendations can be made—both for users, and for the platforms, to be able to say, if you want to optimize your network for something different, here are the choice points that will have the most impact. That is where we are starting from.”
Virtually every major political campaign in 2018 is going to be using social media, especially Facebook, YouTube and Twitter, even if they don’t quite understand it, Zogby said, saying the trend is the latest example of the mutually assured destruction dynamic that’s long been a part of campaigns. “Candidates say, ‘It’s out there. It’s a capability. I better pay attention to it if the other guy’s got it.’”
The Emerging Research Landscape
Needless to say, those who know the most about how social media platforms engage and provoke behavior, and who have worked with campaigns—such as with Trump in 2016—and presumably have assessed their performance, are the companies themselves. So far, the industry’s public response generally has been to help some media outlets brand their coverage as more credible and not to tinker with the underlying machinery. But these institutions may be heading toward a reckoning, as experts increasingly raise red flags about intentionally addictive platforms and business models propelling disinformation.
“There’s a collective freakout going on regarding the effects of social media on society as a whole,” Ethan Zuckerman, director of the Center for Civic Media at MIT, whose research focuses on media and social change, said in an email. “I’d classify the concerns I’ve heard into four general areas: Social media is addictive and bad for us; social media platforms are killing journalism; social media is being manipulated by bad actors to spread propaganda; and social media leads to ideological isolation and polarization.”
“Tristan Harris, a former Google design ethicist now with the non-profit Time Well Spent, is leading the charge on the first issue, and he’s got some good arguments,” he said. “His critique is mostly individual: Too much screen time is bad for you and the folks who’ve designed slot machines are the same folks designing social media. Fair enough, and worth researching, but less interesting to me as social/civic phenomenon: screwed-up, addicted citizens make up a dysfunctional body politic, yeah… But this is really about individual impacts, and about the weird phenomenon of people who built these tools now declaring they don’t want their kids using them.”
“The second subject is over 10 years old now, but still inspires debate, if only because it’s a real problem and one that’s very hard to solve,” Zuckerman continued, referring to how the journalism world has lost its impact and reach because social media often gives more credible content the same weight as uninformed opinion and propaganda. “We will lose something important if we lose local accountability and investigative media. I think this question is incredibly important, but I’m also at a loss for new ideas for solving it.”
Facebook and Google’s response to this trend has been to try to grade their content—an approach embraced by mainstream media outlets, in part, because it might help them regain audiences and bolster their standing with advertisers. Notably, one takeaway from Science’s March cover story is that this strategy may make the platforms and news media feel better, but it’s not likely to work because it’s not addressing the underlying human psychology exploited by the platforms’ algorithms.
“Fact checking might even be counterproductive under certain circumstances,” Science wrote. “Research on fluency—the ease of information recall—and familiarity bias in politics shows that people tend to remember information, or how they feel about it, while forgetting the context within which they encountered it. Moreover, they are more likely to accept familiar information as true. There is thus a risk that repeating false information, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true.”
Thus, in 2018, we are at a frustrating crossroads where the political world and social media converge. On one hand, the features that allow disinformation to be spread seem to be identifiable. On the other hand, how those features incite political behaviors, especially voting, resists generalizations. Here’s the way Zuckerman described this landscape:
“These systems we’ve built—specifically a system in which audience action in response to content, either by amplifying (or dampening) that content, or by creating content in response—has all sorts of vulnerabilities that can be preyed on by bad actors. Bots can amplify topics that otherwise would have gone unnoticed. Propagandists can introduce fake news, disinformation, etc. into our newsfeeds and rely on the dynamics of the ecosystem to amplify them. Dark ads can sway us in ways invisible to folks who were not targeted.
“The trick, though, is that we don’t know how well these things work, which means we don’t know how much we should worry about them. When I teach people about the spread of ideas in media, I tell them to look at reach, influence and impact. Reach is pretty simple: how many people saw your message. Influence is harder: did that message impact them and change their thinking? With Media Cloud, we measure to see if ideas and phrases expressed in one article are picked up in other media, suggesting that an idea or a reframing of a topic has had influence. [But] I’ve not seen much good work on how influential these disinformation campaigns have been, beyond anecdotal work that traces Russian botnets into getting topics to trend, and then watching mainstream media pick up those frames. Finally, you’d want to study impact, which, in this case, has to do with people mobilizing or voting.”
However, Zuckerman does not think it’s too late for social media institutions to rein in the outbreak of disinformation in politics.
“The problems that may be inherent to social networks—echo chambers, filter bubbles, partisan polarization—are worthy of more study, and also may be solvable,” he said. “My worry is that we will end up over-focusing on the bad actor problem and under-focusing on what, to me, seems like the much harder problem, which is the way these systems can damage democracy even when being used in good faith.”
That imperative is also what’s behind the University of Michigan’s push to create its new Center for Social Media Responsibility.
“I am working on the spread of hyper-partisan, clickbait-y, fake and otherwise dubious news on social media, especially as it relates to the 2016 election,” said Ceren Budak, a UM professor who will soon publish her findings on what percentage of information on Facebook is dubious.
Libby Hemphill, another University of Michigan researcher and director of its Resource Center for Minority Data, will soon launch a website that “lets users see what Congress has been up to on Twitter, including the sites they share most often… My work shows that social media influences traditional news coverage of political issues, and that’s one way social media can indirectly impact voter behavior. Even if a voter doesn’t engage with disinformation on social media directly, that story could travel and find them elsewhere. That means journalists are one important source of hope in that they vet sources and stories.”
These efforts may inject some much-needed perspective into what Silicon Valley calls the “attention economy” and the UM release announcing its new center calls “an infocalypse, or information apocalypse, a state where fake news and altered videos on social media and elsewhere on the web effectively end social reality as we know it—something they warn is not far off.”
“Good luck,” said pollster Zogby. “I am a numbers guy. I don’t know how you track it. That’s the truth. Ultimately, it has to make somebody pull a lever [in a voting booth]. It has to deal with a behavior of some sort. There are just so many [neurological] variables out there. Hillary’s campaign had an enormous amount of tools. The problem is, they had Hillary.”
But the Clinton campaign, as many political insiders and ex-Facebook executives have said, didn’t use Facebook as aggressively as Trump did. In an intriguing series of recent tweets, these insiders said Democrats in general rely more on TV ads than online messaging (their consultants earn more). They said Clinton relied on dated internal assessments by Facebook (from 2010) about how its platform affects voter participation, and overall, the Democrats discounted—that is, they didn’t sufficiently heed—“observational political science,” or watch how social media provokes behaviors and responses.
In other words, what academics are now studying and unmasking surrounding social media’s political power is part of what caught the Clinton campaign off guard in 2016.