After the controversial 2016 presidential election, the public frenzy over "fake news" and who's to blame for it led academics to investigate its genesis and proliferation. As reported in the Chronicle of Higher Education, Andrew M. Guess analyzed web traffic data collected from the computers of over 2,500 consenting citizens during the final month of the 2016 presidential election. The study found that just 10 percent of the people accounted for 60 percent of the fake news consumption. Guess's analysis identified two groups of news consumers at greatest risk of being exposed to false information: people who lacked a high level of media literacy, and supporters of Donald Trump.
A related study examined the impact of Facebook's algorithms. Mark Hachman, senior editor of PC World, conducted an experiment to test how effectively partisan fake news gets disseminated through those algorithms. The experiment was simple. Hachman set up two Facebook accounts, one as a Clinton supporter by the name of Chris Smith, and the other as a Trump supporter named Todd White. Then he sat back and let Facebook recommend a series of news pages, effectively asking Facebook to be his news provider.
After two days, Hachman found four notable characteristics of how Facebook's algorithms operate when human behavior is taken out of the mix: (1) He counted ten fake news stories in the Trump supporter's feed, and zero in his Democratic counterpart's. (2) The Trump supporter saw nearly three times more posts over the course of two days than the Clinton supporter did, suggesting that Facebook users identifying as conservative are more likely to be flooded with posts. (3) The majority of posts on both sides were neither fake nor overtly partisan, but somewhat slanted to favor the beliefs of the consumer. (4) Facebook doesn't just show users posts from pages they like: it includes a section titled "People Also Shared," which highlights similar posts that people shared after viewing the current post, as well as "Related Posts," which algorithmically reinforce the content of the current post.
This story was not covered by any corporate media source. However, corporate media outlets do routinely take stories like these, cherry-picking convenient data points to craft misleading and inflammatory narratives about the spread of fake news online in order to damn social media sites and Russian bots. Few critically analyze the implications of these findings or offer solutions to increase media literacy among at-risk populations. Hollow journalism at its finest.
Mark Hachman, "Just How Partisan Is Facebook's Fake News? We Tested It," PC World, September 7, 2017, www.pcworld.com/article/3142412/windows/just-how-partisan-is-facebooks-fake-news-we-tested-it.html.
Steve Kolowich, “Some Real Data on Fake News,” Chronicle of Higher Education, January 4, 2018, https://www.chronicle.com/article/Some-Real-Data-on-Fake-News/242153.
Student Researcher: Bethany Surface (San Francisco State University)
Faculty Evaluator: Kenn Burrows (San Francisco State University)