Facebook’s Alternative Facts

Volume 105

105 Va. L. Rev. Online 18

“[W]e show related articles next to [content flagged by fact-checkers] so people can see alternative facts.”

 

      -Sheryl Sandberg, Sept. 5, 2018

 

Nearly two years have passed since Kellyanne Conway, Counselor to President Donald J. Trump, coined the term “alternative facts” during a television interview. At the time, Conway’s language provoked a sharp response. “Alternative facts are not facts,” her interviewer replied. “They’re falsehoods.”[1] Commentators mostly agreed: Alternative facts were “an assault on foundational concepts of truth”[2] and “the new way of disregarding unpalatable evidence.”[3] Even a year later, one writer likened alternative facts to “reality denial” and claimed that the term had been “mocked out of existence.”[4]

In September 2018, alternative facts roared back into relevance when Facebook’s chief operating officer, Sheryl Sandberg, told a Senate committee that Facebook deploys alternative facts in its fight against misinformation.[5] In Facebook’s strategy, Sandberg explained, potentially false content is presented in users’ News Feeds alongside related articles “so people can see alternative facts.”[6] “The fundamental view is that bad speech can often be countered by good speech,” she said,[7] possibly meaning to evoke Louis Brandeis’s concurrence in Whitney v. California.[8] Thus, she explained, Facebook’s “Related Articles” feature literally places “good speech” (fact-checked content) beside “bad speech” (false content) in users’ scrolling feeds. To Sandberg, alternative facts did not describe reality denial but nearly its opposite: a strategy for evidence-based course correction.

Facebook’s use of Related Articles to fight misinformation, together with the articles’ public characterization as “alternative facts,” provides a case study for exploring the company’s private ordering of speech. Together, they highlight Facebook’s power to control the communicative content of speech in digital space;[9] Facebook’s highly experimental approach to behavioral modification of users; Facebook’s lack of accountability for its speech-regulating choices beyond its economic relationships; Facebook’s selective neutrality in speech-related disputes; the complex relationship between speech practices that suppress misinformation and those that increase user engagement; and the tension that exists between Facebook’s role as a governor of others’ speech and its role as a corporate political speaker in its own right.

None of these factors justifies regulating Facebook as a state actor—a question that may weigh on the minds of the Supreme Court justices who hear Manhattan Community Access Corp. v. Halleck this term.[10] Permitting the government to regulate platforms like Facebook as state actors would, among other things, promote the “both sides” approach that I criticize in this essay. Competition among platforms obviates the need for content-based regulation, so long as users can choose from among an array of providers. Some of them might, however, justify legal constraints on matters of corporate structure, such as dual class stock, that limit managerial accountability, corrode corporate democracy, and, at Facebook, indirectly but powerfully influence how political discourse gets structured.[11]

In this short essay, I argue that Facebook’s adoption of the alternative-facts frame potentially contributes to the divisiveness that has made social media misinformation a powerful digital tool. Facebook’s choice to present information as “facts” and “alternative facts” endorses a binary system in which all information can be divided into moral or tribal categories—“bad” versus “good” speech, as Sandberg put it in her testimony to Congress. As we will see, Facebook’s related-articles strategy adopts this binary construction, offering a both-sides News Feed that encourages users to view information as cleaving along natural moral or political divisions.

In addition, the company’s adoption of alternative facts reflects its strong adherence to both-sides capitalism, in which corporate actors claim that they must be value neutral and politically impartial in order to mitigate business risks or satisfy fiduciary obligations to their investors. The fallacy of both-sides capitalism is its promise that neutrality in commerce—like Facebook’s claim to be a “platform for all ideas”—results in neutral outcomes. The alternative-facts frame demonstrates this. Though it has been presented, by both executive-branch officials and Facebook’s leadership, as politically neutral, the alternative-facts frame advances an ideological bias against evidence-based reasoning. As I show, Conway herself conceived alternative facts to demonstrate how contestation undermines evidence-based reasoning.[12] Because this is true, Facebook’s alternative facts may unwittingly reinforce the post-truth and politically charged notion that once content is contested, resorting to more information won’t help the user distinguish truth from falsity.

If so, Facebook’s alternative facts provide an example of how the superficial neutrality of both-sides capitalism creates new, digitally enhanced threats to democratic discourse. Broadly, the danger is that businesses will adopt tactics that appear neutral but, at least where the democratic process has been commercialized, produce biased results. Facebook’s embrace of alternative facts raises the specific concern that, in order to mitigate the business risks involved in challenging misinformation, the company is deploying platform features that undermine fact-based reasoning and, as a result, strengthening the political hand of one set of actors.

I. Facebook and Political Misinformation

Facebook, Inc., generates “substantially all” of its revenue from advertising.[13] This includes not only traditional advertisements for products and services but also enhanced content distribution for a fee. Although the company does not disclose the proportion of its ad revenue that comes from political expression, we know that political expression generates value for the company, and that Facebook has actively sought to build engagement around political expression on its platform in the U.S. since at least 2006.[14] In both 2015 and 2016, the upcoming U.S. presidential election was the number one “most talked-about global [topic]” on Facebook.[15]

Key to political discourse on Facebook is the News Feed, which presents users with an updating list of posts by the user’s friends and others.[16] Created in 2006 and initially unpopular with many users, “News Feed” has become the platform’s core feature.[17] In 2012, to compete with Twitter, Facebook made changes to News Feed to promote news articles using author bylines and headlines, enabling Facebook to become the leading social media gateway to news publishers’ web sites.[18] Facebook quickly found innovative ways to monetize News Feed. It began allowing users to pay to boost their posts to the top of their friends’ News Feeds.[19] By 2014, Mark Zuckerberg was proclaiming that Facebook’s goal was to make News Feed the “perfect personalized newspaper for every person in the world,” by populating each individual’s News Feed with a customized mix of content.[20]

Yet by January 2015—the start of the 2016 election cycle—Facebook announced self-regulatory reform to counter misinformation: It would reduce distribution of posts that users had reported as hoaxes.[21] It was around this time that Facebook added a specific option for users to report news as false.[22]

In May 2016, just a few months before the election, Gizmodo published charges by an anonymous former Facebook employee that the editors of Facebook’s “Trending” feature censored topics “of interest to conservative readers.”[23] Trending used both an algorithm and an editorial team to populate a running list of popular topics at the top of the Facebook dashboard. Stories “covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories.”[24] This was essentially true; Facebook’s Trending editorial team had been curating trending topics with attention to the judgments of well-established news outlets.

A backlash followed; the Republican Party issued a statement accusing Facebook of liberal bias and using its influence “to silence view points.”[25] Facebook’s own data analysis showed that conservative and liberal topics were approved as trending topics “at virtually identical rates.”[26] Nonetheless, it initiated a major policy change, terminating its Trending editorial team in August 2016 and relying exclusively on algorithms to produce the Trending list. Almost immediately, false news stories began to proliferate in the Trending list.[27] To this day, critics trace Facebook’s amplification of false news stories in the lead-up to the November 2016 election to this change from human curators to algorithms. In January 2017, after the election, Facebook modified its Trending algorithm so that it no longer reflected only a story’s popularity among users, but took into account its recognition by content publishers, a change meant to incorporate a measure of credibility; in June 2018, as the U.S. midterm elections approached, Facebook eliminated the Trending feature altogether.[28]
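The January 2017 change can be expressed as a simple ranking rule. The sketch below is purely illustrative and not Facebook’s code; the threshold and function names are my assumptions, but it captures the shift from ranking by popularity alone to surfacing only topics covered by credible publishers (see note 28).

```python
def trending_score_2016(user_engagement: int) -> float:
    """Before January 2017: popularity alone determines what trends (illustrative)."""
    return float(user_engagement)

def trending_score_2017(user_engagement: int, credible_publishers: int,
                        min_publishers: int = 3) -> float:
    """After the change: a topic surfaces only if enough credible publishers cover it."""
    if credible_publishers < min_publishers:
        return 0.0
    return float(user_engagement)

# A heavily shared hoax with no credible coverage trends under the old rule
# but is excluded under the new one.
print(trending_score_2016(50_000))                         # 50000.0
print(trending_score_2017(50_000, credible_publishers=0))  # 0.0
```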

II. Facebook’s Strategy to Fight Misinformation

In the days after the 2016 election, Mark Zuckerberg claimed it was a “pretty crazy idea” that fake news on Facebook had influenced the election “in any way.”[29] He followed this up by writing that Facebook would strive to improve its efforts to combat fake news, but added the caveat that “[i]dentifying the ‘truth’ is complicated.”[30] These statements by the company’s CEO and controlling shareholder—under an uncommon arrangement, Facebook’s dual-class stock vests Zuckerberg with voting control of the company—suggest that reducing misinformation was not a priority at the time. Nonetheless, by the end of 2016, Facebook had begun experimenting with new features to reduce misinformation.

Several themes run through Facebook’s efforts. First, the company says it does not want misinformation on its platform. However, its executives have consistently emphasized that Facebook shouldn’t be “the arbiter of what’s true and what’s false.”[31] Thus, a major tension exists at the heart of Facebook’s efforts: it wishes to preserve the appearance of neutrality, but Facebook does convey the true–false judgments of fact-checkers to its users, and it suppresses purportedly false content through down-ranking. Facebook may not issue a final judgment about the truth or falsity of content, but it has created a distribution system that relies on assessments of truth and falsity to determine the scope of a message’s distribution. The company is an arbiter of truth and falsity in the practical sense that it chokes off distribution of purportedly false content.

A second theme is the tension between Facebook’s interest in encouraging user engagement and its interest in censoring false but engaging content. Facebook insists on delivering content that users want, even if what users want is misinformation. “We don’t favor specific kinds of sources — or ideas,” Facebook proclaims in its News Feed Values:

Our aim is to deliver the types of stories we’ve gotten feedback that an individual person most wants to see. We do this not only because we believe it’s the right thing but also because it’s good for our business. When people see content they are interested in, they are more likely to spend time on News Feed and enjoy their experience.[32]

This may be why Zuckerberg was reluctant to ascribe bad motives to Holocaust deniers in a July 2018 interview, when he said that he believed Holocaust deniers were not “intentionally getting it wrong.”[33] If Facebook’s users demand content that denies the Holocaust occurred—and some do—Facebook wants to give it to them. Facebook’s business goal of keeping users engaged is thus sometimes in conflict with its professed desire to get misinformation off its platform. This conflict seems to be at the heart of Facebook’s selective embrace of neutrality as a guiding principle.

A third theme is Facebook’s willingness to experiment with behavioral modification of its users. In the year and a half that followed the 2016 election, Facebook experimented with several behavioral interventions around false news. The purpose of these experiments seems to have been to reduce circulation of obviously false content. Although Facebook has disclosed information about these experiments, it has been markedly less transparent about down-ranking, the practice by which it suppresses content. As a result, we know little about how down-ranking is used by the company to suppress misinformation.

A. Facebook’s First Experiment: Disputed Flags

By late November 2016, Zuckerberg was describing to journalists a new “product” that would address concerns about misinformation.[34] This was “Disputed Flags,” a feature employed by Facebook from roughly December 2016 to December 2017. The company marked content in user News Feeds with red icons to signal it had been disputed by fact-checkers or users.[35] Facebook ended the experiment after finding, among other things, that the flags “could sometimes backfire.”[36] It told users that research had shown that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs—the opposite effect to what [Facebook] intended.”[37]

B. Facebook’s Second Experiment: A Revamped Related Articles Feature

In 2013, Facebook began offering users who read an article “new articles they may find interesting about the same topic.”[38] In this early feature, called “Related Articles,” Facebook supplied additional, recommended content after the user clicked on a link.[39] Its purpose was to increase user engagement and to enhance content customization. Immediately following the 2016 election, Mark Zuckerberg identified “raising the bar for stories that appear in related articles” as one of seven publicly featured “projects” the company had undertaken to address misinformation.[40] This suggests that Facebook eventually came to believe that the original Related Articles feature amplified low-quality content to users before the election.

In spring 2017, while it was experimenting with Disputed Flags, Facebook began testing a different version of Related Articles. The new Related Articles supplied additional content to a user before the user read an article shared in News Feed, and was specifically designed to address misinformation.[41] A few months later, the company told users that it had received feedback that “Related Articles help [sic] give people more perspectives and additional information, and helps them determine whether the news they are reading is misleading or false,” and announced it was expanding the feature.[42]

Related Articles works like this: When someone flags content on Facebook as potentially false, Facebook sends it to third-party fact-checkers. In the United States, Facebook currently uses five fact-check organizations certified by the International Fact-Checking Network: the Associated Press, Factcheck.org, PolitiFact, Snopes.com, and The Weekly Standard Fact Check.[43] Some of these organizations are paid by Facebook for their fact-checking work, but others reportedly reject payment.[44]

If a fact-checker rates the content false, Facebook “typically” reduces the article’s traffic by 80%.[45] This is down-ranking, which Zuckerberg has said “destroys the economic incentives that most spammers and troll farms have to generate these false articles in the first place.”[46] Facebook also warns users who are about to share or have shared the false content, and shows Related Articles—short headlines with links to longer articles—next to the false content. For at least some subject matter, Related Articles are not culled from different sources around the internet, but are created by Facebook’s partner fact-check organizations specifically for the purpose of being appended to flagged Facebook content.[47]
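The workflow just described can be summarized in a short, purely illustrative sketch. Nothing here is Facebook’s actual code; the class and function names are hypothetical, and the down-ranking factor simply reflects the “typical” 80% reduction Zuckerberg described. The point is only to show how flagging, third-party fact-checking, down-ranking, Related Articles, and sharing warnings fit together.

```python
from dataclasses import dataclass, field
from typing import List, Optional

DOWNRANK_FACTOR = 0.2  # a "typical" 80% reduction in traffic, per Zuckerberg

@dataclass
class Post:
    url: str
    distribution_score: float                # relative reach in News Feed ranking
    flagged: bool = False                    # reported as potentially false
    fact_check_rating: Optional[str] = None  # e.g., "false", set by a fact-checker
    related_articles: List[str] = field(default_factory=list)

def apply_fact_check(post: Post, rating: str, fact_check_urls: List[str]) -> None:
    """Apply the workflow described above to a flagged post (illustrative only)."""
    post.fact_check_rating = rating
    if rating == "false":
        post.distribution_score *= DOWNRANK_FACTOR     # down-rank rather than remove
        post.related_articles.extend(fact_check_urls)  # append fact-checkers' articles

def warn_before_share(post: Post) -> Optional[str]:
    """Warning shown to users who try to share (or have shared) disputed content."""
    if post.fact_check_rating == "false":
        return "Independent fact-checkers have disputed this story."
    return None

# Example: a flagged post rated false loses roughly 80% of its projected reach
# and gains two Related Articles created by fact-checking partners.
post = Post(url="example.com/story", distribution_score=100.0, flagged=True)
apply_fact_check(post, "false", ["politifact.com/check", "snopes.com/check"])
print(post.distribution_score, post.related_articles, warn_before_share(post))
```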

This is a screen shot from a video Facebook posted on December 20, 2017, titled “How Facebook Addresses False News,” which shows the “Related Articles” approach:[48]

[Image: screenshot from the video, showing two Related Articles from fact-checkers appended beneath a false post about aliens.]

Although Facebook’s example shows the two Related Articles clearly disputing a false article about aliens, some Related Articles do not clearly reject the flagged content. In response to an actual October 2018 post titled “Republicans Vote to Make It Legal Nationwide to Ban Gays & Lesbians from Adopting,” for example, Facebook appended these two related articles[49]:

[Image: screenshot of the two Related Articles, from Politifact.com and Snopes.com, that Facebook appended to the post.]

These two actual Related Articles are unlike the examples that Facebook provided above, insofar as they lack headlines that refute the false content; the user must click through to the linked content and read the respective articles to understand what (if anything) Politifact.com and Snopes.com believed was false about the original article. It is quite likely that Facebook has data about click-through rates that would tell us something about the success of the Related Articles strategy. The fact that it has not published any data since beginning the Related Articles experiment more than eighteen months ago might suggest that the data doesn’t support the feature’s efficacy.

Facebook has continued to experiment with new tweaks and features to address political misinformation. In the summer of 2018, it revealed plans to create its own news content: news programs on its video service, Watch, produced for a fee by established news companies such as CNN and Fox News.[50] In September 2018, Facebook’s fact-checking product manager, Tessa Lyons, revealed that Facebook had begun using technology to “predict articles that are likely to contain misinformation and prioritiz[ing] those for fact-checkers to review.”[51] According to Lyons, the company uses predictive signals such as reader comments on the post that question its veracity, and the post’s source. If a Facebook Page sharing content has “a history of sharing things that have been rated false by fact-checkers,” it triggers review.[52]
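Lyons’s description suggests a simple prioritization rule: score each post using its predictive signals and send the highest-scoring posts to fact-checkers first. The sketch below is an assumption-laden illustration rather than Facebook’s method; the weights and field names are invented, but the two signals are the ones Lyons identifies (comments questioning a post’s veracity and the sharing Page’s history of false ratings).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidatePost:
    post_id: str
    disbelief_comment_ratio: float  # share of comments questioning the post's veracity (0-1)
    page_false_history: int         # prior posts from the sharing Page rated false

def review_priority(post: CandidatePost) -> float:
    """Higher score means earlier review; the weights are invented placeholders."""
    history_signal = min(post.page_false_history, 10) / 10.0
    return 0.6 * post.disbelief_comment_ratio + 0.4 * history_signal

def build_review_queue(posts: List[CandidatePost]) -> List[CandidatePost]:
    """Order candidates so the likeliest misinformation reaches fact-checkers first."""
    return sorted(posts, key=review_priority, reverse=True)

queue = build_review_queue([
    CandidatePost("a", disbelief_comment_ratio=0.70, page_false_history=4),
    CandidatePost("b", disbelief_comment_ratio=0.10, page_false_history=0),
])
print([p.post_id for p in queue])  # post "a" is prioritized for human review
```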

III. Facebook’s Alternative Facts

A. The News Feed’s Binary Construction

More than a year passed between Facebook’s roll-out of the new Related Articles feature and Sheryl Sandberg’s description of related articles as “alternative facts.”[53] Her remarks may have been intended to evoke Louis Brandeis, the icon of free speech: “The fundamental view,” Sandberg said, “is that bad speech can often be countered by good speech, and if someone says something’s not true and they say it incorrectly, someone else has the opportunity to say, ‘Actually, you’re wrong, this is true.’”[54]

Justice Brandeis’s concurrence in Whitney v. California likewise associated false information with moral wrong: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education,” he wrote, “the remedy to be applied is more speech, not enforced silence.”[55] Of course, Brandeis wasn’t advocating a closed universe of “more speech” provided exclusively by the State, the way that Facebook’s closed universe of News Feed posts presents an exclusive set of curated content. Brandeis’s moral gloss on the solution of “more speech” was grounded, at least in part, on the assumption that citizens, not a single State or a State-like entity, would provide the counter-speech to avert “evil.”

Brandeis also believed that context mattered. “More speech” was the remedy for misinformation only “if there be time.”[56] More speech may not be a viable remedy for misinformation where the context tends to discourage active listening or to discredit the speech. Brandeis’s famous endorsement of “more speech” doesn’t translate easily to social media’s curated feed, especially in light of new insights in behavioral and decision science.

Brandeis conceived of an active speaker and an active listener engaged in “public discussion.”[57] But that assumption does not hold up on social media platforms. In Facebook’s News Feed, information is presented to please the recipient, as determined by Facebook’s customizing algorithms. The “facts” versus “alternative facts” frame of Related Articles interrupts this pleasing data stream’s flow and introduces a binary construction in which content divides between that which conforms to customized specifications (“bad” speech, in Sandberg’s depiction) and Related Articles that don’t (“good” speech). However, if the algorithms got the original assessment correct, the reader actually may experience Related Articles more like “bad” speech interrupting the flow of “good” misinformation. The decision to present point and counterpoint in this format not only sends users the simplistic message that information itself is binary, but also twists the user’s intuitive sense about which information is “good” versus “bad.”

In fact, empiricists have tested the extent to which Facebook’s Related Articles are likely to mitigate “motivated reasoning” and stem the influence of false information disseminated on Facebook. The work of two researchers, Leticia Bode of Georgetown University and Emily K. Vraga of George Mason University, is directly on point.

In the first of two studies, they found that corrective Related Articles successfully reduced misperceptions for individuals who previously held a false belief about GMOs and were shown false information about GMOs in a simulated Facebook News Feed.[58] However, they found no effect in a similar study of subjects who held a false belief about the link between vaccines and autism.[59] Bode and Vraga concluded that the length of time a misperception lingered in public discourse affected its debunk-ability, and that correction was more effective “when false beliefs are not deeply ingrained among the public consciousness.”[60]

In a follow-on study, Bode and Vraga explored how a subject’s conspiracist ideation affected his or her capacity for correction. Research has shown that individuals high in conspiracist ideation—those who endorse multiple unrelated conspiracy theories—are particularly vulnerable to misinformation.[61] Bode and Vraga measured subjects’ conspiracist ideation and then asked them to view a simulated Facebook News Feed, where they were exposed to a post, purportedly from USA Today (but in fact fake), which contained false information.[62] Some subjects were then shown two related articles that debunked the fake story, and others were shown debunking comments by Facebook users.[63] Individuals high in conspiracist ideation tended to rate both types of correction as “equally (not) credible.”[64] Although the study’s authors concluded that correction worked, the corrective effects were “relatively small in size.”[65]

Together, these studies suggest that the more “deeply ingrained” health-related misperceptions are, the less likely it is that Related Articles can debunk them. Individuals with conspiracist ideation simply did not trust Related Articles. If this is true, political misinformation that connects to deeply ingrained partisan commitments might be particularly difficult to debunk through Related Articles. Facebook may discover, as it did with Disputed Flags, that its assumptions about how people respond to its behavioral interventions are erroneous.

As I have argued elsewhere, “alternative facts” are a rhetorical trick.[66] The frame suggests that, in a controversy, each side presents information in its favor. The two sides can’t agree on the facts because facts are a matter of perspective.[67] Ultimately the post-truth reasoner suggests that facts and alternative facts aren’t particularly helpful for resolving a dispute: the greater the controversy, the greater the cacophony of facts bombarding us from both sides. In such a situation, the post-truth reasoner tells us, other inputs—a gut check, tribal affiliation, or trust in a group leader—can provide a superior basis for decision making. In a post-truth world, where one finds alternative facts, one should use alternative decision-making processes.[68]

As this suggests, Facebook’s “alternative facts” may contribute to, rather than ameliorate, the toxicity of social media discourse. The binary construction of a “both sides” News Feed is part of the problem, not part of the solution.

B. “Both Sides” Capitalism

Fundamentally, Facebook’s both-sides News Feed is evidence of its broader adherence to both-sides capitalism, in which for-profit businesses claim impartiality not as a moral virtue, but as a business imperative. Like other adherents to both-sides capitalism, Facebook treats viewpoint neutrality as key to its economic prospects.

There are many reasons that a platform for political discourse might pledge allegiance to both-sides capitalism. The company might perceive that its monopolistic ambitions do not allow it to cede market share to competitors catering to different political affiliations. It might also see a commercial benefit to presenting “both sides” of controversies: It could encourage users to spend more time on Facebook, or to click through to a broader range of links. Facebook has a business interest in remaining free from regulation. If the company is perceived as partisan, this could encourage the opposing political party to pursue laws that reduce Facebook’s profits or prospects. Finally, Facebook is a political actor in its own right, and an active participant in campaign finance and lobbying. It may view both-sides neutrality as a means to deflect criticism when it spends money to influence politics in its own favor.

Facebook took the both-sides approach so far that it formed a fact-checking partnership with a partisan news source, The Weekly Standard, resulting in a new round of controversy. In September 2018, Facebook came under fire when The Weekly Standard flagged as false an article published by ThinkProgress because of its title, “Brett Kavanaugh Said He Would Kill Roe v. Wade Last Week and Almost No One Noticed.”[69] The title was meant to be hyperbolic rather than literal; the article did not falsely attribute any statements to Kavanaugh. Judd Legum, who later became the publisher of ThinkProgress, captured the critique in a tweet alleging that the purpose behind Facebook’s fact-checking program is “to appease the right wing.”[70]

But both-sides capitalism, as implemented by Facebook, is about more than appeasement. It is the claim that, in order to satisfy its obligations to investors and customers, a company must provide services to anyone who can pay for them, promote any ideology regardless of substance, and treat all ideas equally. Increasingly, Silicon Valley tech companies like Facebook present both-sides capitalism, wrongly, as neutral in operation and neutral in outcome.

Finally, we might ask whether Facebook has a real incentive to foster critical thinking in its users. In other words, perhaps Facebook or its CEO and controlling shareholder, Mark Zuckerberg, benefit by advancing an ideological agenda through the alternative-facts frame. Brand loyalty can be a form of post-truth reasoning, and Facebook has nurtured a valuable brand of social media service. Facebook might believe that it does not benefit by sharpening its users’ critical-thinking skills. Considering all the problems the platform has had with privacy, for example, company managers may worry that well-informed users will delete Facebook and move on to a competitor.

IV. Conclusion

Facebook’s attempt to rehabilitate “alternative facts” during Sheryl Sandberg’s testimony to the Senate Select Committee on Intelligence drew little attention, but it underscores important tensions in the way the company fights misinformation. It also exposed the company’s commitment to “both sides” capitalism on a national stage.

Facebook’s Related Articles strategy adopts the binary frame of “alternative facts,” and thus conditions users to accept a two-sided view of information that may increase polarization and partisanship, not defuse it. Facebook may have adopted this binary approach because it fits comfortably within the News Feed format, or because the company views political discourse as a series of simple, binary disagreements that can be staged as for-profit entertainment. Either way, information on Facebook reaches up to 185 million people in North America every day. It seems unlikely that Facebook is serious about behavioral intervention given the research suggesting its difficulty, and more likely that Facebook’s evolving features result from the company’s profit motive.

It’s also possible that Related Articles has become a minor strategy, with down-ranking of false content doing most of the work. In preparing this short essay, I went looking for Related Articles in the News Feeds of students and associates, but found few examples. Some avid Facebook users could not recall ever seeing Related Articles in their own Feeds. Is this because Facebook had successfully suppressed false content through down-ranking? It’s hard to know. Without more transparency from Facebook, users and researchers are left in the dark.

 


[1] See Rebecca Sinderbrand, How Kellyanne Conway Ushered in the Era of ‘Alternative Facts,’ Wash. Post (Jan. 22, 2017), https://www.washingtonpost.com/news/the-fix/wp/2017/01/22/how-kellyanne-conway-ushered-in-the-era-of-alternative-facts/ (providing video and transcript of the January 22, 2017, interview) [http://perma.cc/TW6P-YABP].

[2] Bret Stephens, Trump: The Reader’s Guide, Wall St. J. (Jan. 23, 2017), https://www.wsj.com/articles/trump-the-readers-guide-1485216078 [http://perma.cc/N5HV-Z3YC].

[3] Stefan Kyriazis, George Orwell’s 1984 Explains Trump: Doublespeak, Alternative Facts and Reality Control, Express (Jan. 26, 2017), https://www.express.co.uk/entertainment/books/759436/Trump-George-Orwell-1984-Doublespeak-alternative-facts-crimestop-reality-control [http://perma.cc/8DTK-WQVR].

[4] Louis Menand, Words of the Year, New Yorker (Jan. 8, 2018), https://www.newyorker.com/magazine/2018/01/08/words-of-the-year [http://perma.cc/2WKU-RCBZ].

[5] Foreign Influence Operations and Their Use of Social Media Platforms: Hearing Before the S. Select Comm. on Intelligence, 115th Cong., at 1:34:54–1:35:14 (2018) [hereinafter Sandberg Senate Testimony], video available at https://www.intelligence.senate.gov/hearings/open-hearing-foreign-influence-operations’-use-social-media-platforms-company-witnesses [http://perma.cc/7J39-ULU7] (testimony of Sheryl Sandberg, Chief Operating Officer, Facebook).

[6] Id.

[7] Id.

[8] See Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”).

[9] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1599 (2017) (arguing that private content platforms are systems of governance “responsible for shaping and allowing participation in our new digital and democratic culture”).

[10] Halleck v. Manhattan Cmty. Access Corp., 882 F.3d 300 (2d Cir. 2018), cert. granted, 2018 WL 3127413 (U.S. Oct. 12, 2018) (No. 17-1702).

[11] See, e.g., Chris Hughes, The Problem With Dominant Mark Zuckerberg Types, Bloomberg (Dec. 9, 2018), https://www.bloomberg.com/opinion/articles/2018-12-10/the-problem-with-dominant-mark-zuckerberg-types (describing a growing “international campaign” against super-voting rights for founders).

[12] See Sarah C. Haan, The Post-Truth First Amendment, 94 Ind. L. J. (forthcoming 2019) (manuscript at 6–7), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209366 [http://perma.cc/Z7NY-YGTD].

[13] Facebook, Inc., Quarterly Report (Form 10-Q) at 28 (Jul. 26, 2018).

[14] See, e.g., Christine B. Williams and Girish J. ‘Jeff’ Gulati, Social Networks in Political Campaigns: Facebook and the Congressional Elections of 2006 and 2008, 15 New Media & Soc’y 52, 56 (2012).

[15] Betsy Cameron and Brittany Darwell, 2015 Year in Review, Facebook Newsroom (Dec. 9, 2015), https://newsroom.fb.com/news/2015/12/2015-year-in-review/ [http://perma.cc/YXY9-657Z]; Sheida Neman, 2016 Year in Review, Facebook Newsroom (Dec. 8, 2016), https://newsroom.fb.com/news/2016/12/facebook-2016-year-in-review/ [http://perma.cc/GYF5-52B2].

[16] See Mark Zuckerberg, Facebook (Sep. 5, 2016), https://www.facebook.com/zuck/posts/10103084921703971 [http://perma.cc/7YFV-F66V] (explaining the thought process behind News Feed in a September 2016 post marking its tenth anniversary). Zuckerberg wrote that “News Feed has been one of the big bets we’ve made in the past 10 years that has shaped our community and the whole internet the most.” Id.

[17] See Farhad Manjoo, Can Facebook Fix Its Own Worst Bug?, N.Y. Times: N.Y. Times Mag. (Apr. 25, 2017), https://www.nytimes.com/2017/04/25/magazine/can-facebook-fix-its-own-worst-bug.html [http://perma.cc/986J-HXSW] (describing the Facebook News Feed as “the most influential source of information in the history of civilization”).

[18] See Nicholas Thompson and Fred Vogelstein, Inside the Two Years that Shook Facebook—and the World, Wired (Feb. 12, 2018), https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/ [http://perma.cc/68B5-FBYN]; Niall Ferguson, What Is To Be Done? Safeguarding Democratic Governance In The Age Of Network Platforms, Hoover Institution (Nov. 13, 2018), https://www.hoover.org/research/what-be-done-safeguarding-democratic-governance-age-network-platforms [http://perma.cc/3VV3-N79Y] (“Facebook and Google are now responsible for nearly 80 percent of news publishers’ referral traffic.”).

[19] See Hayley Tsukayama, Would You Pay to Promote a Facebook Post?, Wash. Post (May 11, 2012), https://www.washingtonpost.com/business/technology/would-you-pay-to-promote-a-facebook-post/2012/05/11/gIQA1nlSIU_story.html [http://perma.cc/4GGR-FBW2].

[20] Eugene Kim, Mark Zuckerberg Wants To Build The ‘Perfect Personalized Newspaper’ For Every Person In The World, Bus. Insider (Nov. 6, 2014), https://www.businessinsider.com/mark-zuckerberg-wants-to-build-a-perfect-personalized-newspaper-2014-11 [http://perma.cc/7C5C-WEMJ].

[21] Erich Owens & Udi Weinsberg, Showing Fewer Hoaxes, Facebook Newsroom (Jan. 20, 2015), https://newsroom.fb.com/news/2015/01/news-feed-fyi-showing-fewer-hoaxes/ [http://perma.cc/9MG4-MNL4].

[22] Id.

[23] Michael Nunez, Former Facebook Workers: We Routinely Suppressed Conservative News, Gizmodo (May 9, 2016, 9:10 AM), https://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006 [http://perma.cc/AJU4-F2TT]. According to the blog, the former employee had worked as a curator of Trending Topics sometime between mid-2014 and December 2015, was “politically conservative,” and “asked to remain anonymous, citing fear of retribution from the company.” Id.

[24] Id.

[25] Team GOP, #MakeThisTrend: Facebook Must Answer for Conservative Censorship, GOP.com: Liberal Media Bias (May 9, 2016), https://gop.com/makethistrend-facebook-must-answer-for-liberal-bias/ [http://perma.cc/6H5R-238K].

[26] Colin Stretch, Response to Chairman John Thune’s Letter on Trending Topics, Facebook Newsroom (May 23, 2016), https://newsroom.fb.com/news/2016/05/response-to-chairman-john-thunes-letter-on-trending-topics/ [http://perma.cc/Z3XK-MD25].

[27] See, e.g., Caitlin Dewey, Facebook Has Repeatedly Trended Fake News Since Firing Its Human Editors, Wash. Post (Oct. 12, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/10/12/facebook-has-repeatedly-trended-fake-news-since-firing-its-human-editors/ [http://perma.cc/EAM2-BUBS] (reporting a study from Aug. 31 to Sept. 22 that identified “five trending stories that were indisputably fake,” including a “tabloid story claiming that the Sept. 11 attacks were a ‘controlled demolition’”); Abby Ohlheiser, Three Days After Removing Human Editors, Facebook Is Already Trending Fake News, Wash. Post (Aug. 29, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/08/29/a-fake-headline-about-megyn-kelly-was-trending-on-facebook/ [http://perma.cc/FV5A-MU5T].

[28] Will Cathcart, Continuing Our Updates to Trending, Facebook Newsroom (Jan. 25, 2017), https://newsroom.fb.com/news/2017/01/continuing-our-updates-to-trending/ [http://perma.cc/G4UW-VEDV]; Jacob Kastrenakes, Facebook Will Remove the Trending Topics Section Next Week, The Verge (June 1, 2018, 11:48 AM), https://www.theverge.com/2018/6/1/17417428/facebook-trending-topics-being-removed [http://perma.cc/TFX9-LMR3]; Nathan Olivarez-Giles & Deepa Seetharaman, Facebook Moves to Curtail Fake News on ‘Trending’ Feature, Wall St. J. (Jan. 25, 2017), https://www.wsj.com/articles/facebook-moves-to-curtail-fake-news-on-trending-feature-1485367200 [http://perma.cc/W4C4-CALM] (“Facebook’s software will surface only topics that have been covered by a significant number of credible publishers.”).

[29] Deepa Seetharaman, Zuckerberg Defends Facebook Against Charges It Harmed Political Discourse, Wall St. J. (Nov. 10, 2016), https://www.wsj.com/articles/zuckerberg-defends-facebook-against-charges-it-harmed-political-discourse-1478833876 [http://perma.cc/H224-ZE3Z].

[30] Mark Zuckerberg, Facebook (Nov. 12, 2016), https://www.facebook.com/zuck/posts/10103253901916271 [http://perma.cc/9J8F-53J7].

[31] Sandberg Senate Testimony, supra note 5, at 1:34:19–1:34:42.

[32] News Feed Values, Facebook News Feed, https://newsfeed.fb.com/values/ [http://perma.cc/B3W2-ZRSY] (last visited Nov. 8, 2018).

[33] Kara Swisher, Full Transcript: Facebook CEO Mark Zuckerberg on Recode Decode, Recode: Recode Decode (Jul. 18, 2018, 11:01 AM), https://www.recode.net/2018/7/18/17575158/mark-zuckerberg-facebook-interview-full-transcript-kara-swisher [http://perma.cc/QU3Y-JMHN].

[34] Deepa Seetharaman, Mark Zuckerberg Explains How Facebook Plans to Fight Fake News, Wall St. J. (Nov. 20, 2016), https://www.wsj.com/articles/mark-zuckerberg-explains-how-facebook-plans-to-fight-fake-news-1479542069 [http://perma.cc/UY7F-3836]; Mark Zuckerberg, Facebook (Nov. 19, 2016), https://www.facebook.com/zuck/posts/10103269806149061 [http://perma.cc/4CVD-CDPS].

[35] Tessa Lyons, Replacing Disputed Flags with Related Articles, Facebook Newsroom (Dec. 20, 2017), https://newsroom.fb.com/news/2017/12/news-feed-fyi-updates-in-our-fight-against-misinformation/ [http://perma.cc/3BU2-VD6D]; Barbara Ortutay, Facebook Gets Serious About Fighting Fake News, Associated Press (Dec. 15, 2016), https://www.apnews.com/22e0809d20264498bece040e85b96935 [http://perma.cc/X9T2-TLCS].

[36] Jeff Smith, Grace Jackson & Seetha Raj, Designing Against Misinformation, Medium (Dec. 20, 2017), https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2 [http://perma.cc/MKM8-YNCA].

[37] Lyons, supra note 35.

[38] Sara Su, New Test With Related Articles, Facebook Newsroom (Apr. 25, 2017), https://newsroom.fb.com/news/2017/04/news-feed-fyi-new-test-with-related-articles/ [http://perma.cc/8XEW-8AJ3].

[39] Id.

[40] Zuckerberg, supra note 34.

[41] Su, supra note 38.

[42] Id.

[43] Third-Party Fact-Checking on Facebook, Facebook Business, https://www.facebook.com/help/publisher/182222309230722 [http://perma.cc/BN42-NBYJ] (last updated Nov. 7, 2018).

[44] In April 2018, a journalist conducted a study of Facebook’s partnership with these fact-checking organizations for the Tow Center for Digital Journalism at Columbia University. The journalist, Mike Ananny, noted previous reports that the fact-checking partners were paid about $100,000 per year from Facebook for their work. However, Ananny reported that unidentified individuals at several of the organizations told him their organizations had rejected the money. Mike Ananny, The Partnership Press: Lessons for Platform-Publisher Collaborations as Facebook and News Outlets Team to Fight Misinformation, Colum. Journalism Rev.: Tow Ctr. Rep. (Apr. 4, 2018), https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php [http://perma.cc/WM5W-L82X].

[45] Facebook, Inc. Fourth Quarter and Full Year 2017 Earnings Call Transcript, at 3 (Jan. 31, 2018), https://s21.q4cdn.com/399680738/files/doc_financials/2017/Q4/Q4-17-Earnings-call-transcript.pdf [http://perma.cc/82SE-FAJ4] (remarks of Mark Zuckerberg, Chief Executive Officer, Facebook).

[46] Id.

[47] Expanding Our Policies on Voter Suppression, Facebook Newsroom (Oct. 15, 2018), https://newsroom.fb.com/news/2018/10/voter-suppression-policies/ [http://perma.cc/2L2F-YV2J] (describing this process with respect to articles containing information about how to vote).

[48] Dan Zigmond, How Facebook Addresses False News, Facebook, at 1:02 (Dec. 20, 2017), https://www.facebook.com/facebook/videos/10156900476581729/ [http://perma.cc/ZS8G-3C4S].

[49] This screenshot, shared with me by a student, shows Related Articles that appeared in the student’s Facebook News Feed in October 2018. E-mail from student to Sarah C. Haan, Assoc. Professor of Law, Wash. & Lee (Oct. 23, 2018, 6:40 PM EST) (on file with author).

[50] David Ingram, Facebook Enlists Anchors From CNN, Fox News, Univision for News Shows, Reuters (Jun. 6, 2018, 10:03 AM), https://www.reuters.com/article/us-facebook-media/facebook-enlists-anchors-from-cnn-fox-news-univision-for-news-shows-idUSKCN1J21SM [http://perma.cc/3BL4-JCQE].

[51] Seeing the Truth, Facebook Newsroom (Sep. 13, 2018), https://newsroom.fb.com/news/2018/09/inside-feed-tessa-lyons-photos-videos/ [http://perma.cc/U7NH-NXLH].

[52] Id.

[53] Sandberg Senate Testimony, supra note 5, at 1:34:54–1:35:14.

[54] Id.

[55] Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring).

[56] Id.

[57] Id. at 375–76 (“Those who won our independence believed that the final end of the State was to make men free to develop their faculties; and that in its government the deliberative forces should prevail over the arbitrary. . . . Believing in the power of reason as applied through public discussion, they eschewed silence coerced by law—the argument of force in its worst form. . . . It is the function of speech to free men from the bondage of irrational fears.”).

[58] Leticia Bode & Emily K. Vraga, In Related News, That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media, 65 J. of Comm. 619, 624–27 (2015).

[59] Id. at 628.

[60] Leticia Bode & Emily K. Vraga, See Something, Say Something: Correction of Global Health Misinformation on Social Media, 33 Health Commc’n 1131, 1132 (2018).

[61] Id. at 1133.

[62] Id. at 1134.

[63] Id.

[64] Id. at 1137.

[65] Id.

[66] Haan, supra note 12, at 15–17.

[67] Id.

[68] Id.

[69] Mathew Ingram, The Weekly Standard and the Flaws in Facebook’s Fact-Checking Program, Colum. Journalism Rev. (Sep. 18, 2018), https://www.cjr.org/the_new_gatekeepers/the-weekly-standard-facebook.php [http://perma.cc/NQV7-M9R5].

[70] Id.
