Closing the Barn Door? Fact-Checkers as Retroactive Gatekeepers of the COVID-19 “Infodemic”

Based on a study of U.S.-tagged items in a global database of fact-checked statements about the novel coronavirus throughout the first year of the pandemic, this article explores the nature of fact-checkers’ “retroactive gatekeeping.” This term is introduced here to describe the process of assessing the veracity of information after it has entered the public domain rather than before. Although an overwhelming majority of statements across 16 thematic categories were deemed false and debunked, often repeatedly, misinformation continued to circulate freely and widely.

The widespread circulation of information that is inaccurate, misleading, malicious, or otherwise harmful to civil society is hardly new, and its connections with eroding public trust and intensifying social polarization are well-documented (Ekström et al., 2020). But its rampant proliferation in an age of digital and, especially, social media is striking (Tandoc et al., 2018), aided significantly by the structural imperatives of platforms such as Facebook, whose commercial success depends on continuous user engagement (Gray et al., 2020; Pickard, 2019). Amid a global pandemic, this tsunami of misinformation endangers public health, as protective strategies are undermined by distrust, and ineffective or even harmful treatments gain leverage over ones that can save lives (Evanega et al., 2020). The role of fact-checkers as a counter-force has attracted scholarly attention; Luengo and Garcia-Marin (2020), for example, credit their ability not only to address hoaxes of various kinds but also to apply an empirical approach to offset false COVID-19 narratives. Brennen and his colleagues (2020) noted as early as April 2020 the rapid increase in time and resources devoted to countering false claims about the pandemic.
However, scholarship connecting these contemporary phenomena with traditional understandings of the proactive role of information gatekeepers in moderating the flow of information to audiences has been scarce. The purpose of this exploratory study is to examine fact-checkers' alternative approach, which involves assessing the veracity of information after it has been published rather than before. Focusing on U.S.-tagged database items, it examines the volume, scope, and nature of COVID-19 misinformation and fact-checkers' efforts to defuse it.

Literature Review
This article draws on three sets of literature. The first two, concerning the nature of misinformation in a digital environment and the role of dedicated "fact-checkers" in combating it, are considered in the first part of this section. The third focuses on the evolution of gatekeeping theory, as a once-narrowly bounded media environment is transformed into a vast ecosystem whose boundaries, to the extent they exist at all, are controlled largely by algorithms.

Literature: Misinformation and Fact-Checking
Although something recognizable as "the internet" has existed for well over 50 years, it was not until the creation and rapid popular uptake of web browsers in the 1990s (Ryan, 2010) that a truly communal digital space took shape. Technological advances in the early 21st century facilitated what became known as "Web 2.0," enabling internet users to become content producers, "first class entities" in a media world that previously had relegated them to more passive consumption roles (Cormode & Krishnamurthy, 2008). With the advent of social media in the mid-2000s, a "culture of connectivity" (van Dijck, 2013) roared into existence; by the 2010s, the exponentially greater transmissibility of information in this digital-native variant made it a dominant form of populist communication (de Zúñiga et al., 2020), raising myriad questions about its effects on, and role in, democratic society (Vraga, 2019).
Extensive documentation highlights the strong connection between social media and misinformation, along with its potential threat to civil society (Allcott et al., 2019). As reliance on social media to deliver information grows, the ability of shared misinformation to influence others' opinions and beliefs also expands, helped along by such factors as confirmation bias, selective exposure, and social ties (Tandoc, 2019). Social media platforms have been found to incubate belief in blatantly false information, with repetition increasing the perception of accuracy even for posts with minimal plausibility (Pennycook et al., 2018); its spread is seen as central to the growth of populist sentiment around the world (Iosifidis & Nicoli, 2020).
Do post hoc attempts to correct this misinformation help? Research suggests that they can, but also that practitioners should step carefully. For instance, general warnings about fake information can have a "spillover" effect, making people more distrustful of the truth (Clayton et al., 2020). Even explicit warnings that a headline has been disputed by a third-party fact-checker do not necessarily weaken the effects of repeat exposure to misinformation (Pennycook et al., 2018). A user's cognitive ability also affects acceptance of the countering information in a fact-check; those on the lower end of the scale are unlikely to adjust their attitudes even after explicit rebuttal of the false information (De keersmaecker & Roets, 2017). In general, research suggests that such post-publication initiatives as fact-checking can help address misinformation, but their impact is not universal, is often contingent on other factors that are difficult to address or assess, and in some cases may even be negative, serving to reinforce belief in the original false claim even when the misinformation is corrected immediately (Thorson, 2016).
Other work has offered a bit more hope. For instance, Vraga and Bode (2020) found that despite the relative "stickiness" of misinformation, corrections do seem to reduce belief in its veracity, though their effect diminishes as issues become more polarized or as people integrate dubious ideas into their own self-concept. Pointing out that corrective messages on social media are aimed at the person sharing the information as well as its recipients, the researchers indicate that effectiveness can be improved through such communication tactics as stating what is false and providing an explanation why, a strategy in line with the approach of fact-checkers in the database analyzed here (Vraga & Bode, 2020).
Although much of the research into misinformation has focused on the political realm, a considerable body of work has examined it in the context of health communication, a field in which increased action from researchers and practitioners is seen as urgently needed (Chou et al., 2020). As Broniatowski and his colleagues (2018) point out, social media have the potential to facilitate fast and far-reaching dissemination of factual health information-but instead are frequently abused to spread harmful content, notably but not exclusively around vaccines. In a meta-analysis of research published between 2012 and 2018, Wang et al. (2019) showed the extent to which social media enable communities of denialists to thrive, with unfounded reports, conspiracy theories, and pseudoscience all having profound consequences for public health, from hostility against health workers to skepticism about medical guidelines, as well as the widely documented problem of vaccine hesitancy (Chou et al., 2018).
Recent work on COVID-related misinformation has underscored the negative impact of hoaxes spread via social media. Lee and his colleagues (2022), for example, found that use of social media for news about the pandemic was negatively related to factual knowledge and positively related to knowledge "miscalibration." However, Kreps and Kriner (2020) found that although simply flagging fake headlines had little effect on assessment of accuracy or propensity to share, corrections that explicitly countered false claims with factual information were somewhat more effective. Additional grounds for optimism come from a study by Bode and Vraga (2021), who showed that exposure to misinformation about COVID-19 on social media was common, but most people who said they saw such content also said they saw a correction of it.
These findings underlie the premises that motivate many fact-checkers: that facts matter, that citizens need and deserve to be properly informed, and that strengthening trust in quality information is crucial (Singer, 2021a). Ensuring the accuracy of information has long been a hallmark of quality journalism, and media outlets have historically had editors devoted to checking facts (Cortada & Aspray, 2020; Graves & Amazeen, 2019; Sivek & Bloyd-Peshkin, 2018). But fact-checking as a distinct journalistic form in its own right dates only to the late 2000s, emerging in the United States around the time that social media was becoming prominent; by early 2023, nearly 400 fact-checkers operated around the world (Duke Reporters' Lab, n.d.). Roughly a quarter of them, including organizations contributing to the database studied here, are certified as signatories to the International Fact-checking Network (IFCN, n.d.) Code of Principles; these fact-checkers have provided explicit evidence to satisfy independent assessors of their adherence to non-partisanship, fairness, and transparency of sources, funding, organization, and methodology.
Like journalists, fact-checkers see equipping citizens with truthful information about civic society as a core function (Singer, 2021a). Like journalists again, fact-checkers value truthfulness (Graves, 2017); unlike journalists, though, they overtly adjudicate veracity through a transparent process of weighing evidence, claims, and counterclaims (Graves, 2016). Another crucial distinction with theoretical implications is that while journalistic work is largely proactive, involving decisions by reporters and editors about what information to pursue and to publish, fact-checkers are primarily reactive: They respond to information (often though not exclusively bad information) that is already in the public sphere. Journalists and fact-checkers thus can be seen as standing on different sides of a metaphorical gate.
The extent to which journalists' attitudes about what information to let through the gate jibe with fact-checkers' attitudes about how to respond once it is published has not been extensively studied, but a quick look at practitioner norms can offer preliminary insights. The "quest for truthfulness" is the most dominant theme in journalistic codes of ethics around the world (Cooper, 1990, p. 11), and accurately conveying the truth is central to fact-checkers' work, as well (Graves, 2017; Singer, 2021a). However, their approaches to truth differ: While journalists tend to focus on accurately reporting what was said or done, fact-checkers are more interested in judging the veracity of the statement itself. FactCheck.org founder and long-time journalist Brooks Jackson summed up the distinction a decade ago, suggesting the need to evolve quickly away from a gatekeeper model of dwindling relevance into "more of an umpire/referee" (quoted in Amazeen, 2013, p. 19). Implications related to fact-checkers' role in building media literacy have begun to attract scholarly attention (see Çömlekçi, 2022; Frau-Meigs, 2022). Hameleers (2022), for instance, found that fact-checking works best when combined with media literacy interventions, though these may or may not be part of a fact-checker's arsenal.
Another point of distinction lies in fact-checkers' foregrounding of accountability and transparency. While traditional journalistic transparency has been described as "limited and strategic" (Chadha & Koliska, 2015, p. 215), these two norms are high on fact-checkers' list of priorities (Singer, 2021a). Published fact-checks typically offer links to sources used in reaching a judgment; they also commonly indicate the steps taken to investigate a statement or claim and to make a decision about it. Fact-checkers believe this provision of information showing audiences how they reached a judgment is crucial in distinguishing their practices from those of traditional journalists (Singer, 2018).
In fact, many fact-checkers explicitly see themselves as correcting some shortcomings of journalism (Amazeen, 2019), describing fact-checking as "good medicine" for an ailing occupation (Graves, 2018, p. 623). Recent work suggests fact-checkers tend to believe they are more trustworthy, credible, and non-partisan information providers than legacy media in their coverage area (Singer, 2021a). Providing an impetus to move away from old media habits and understandings is part of the mission for those who see themselves as "both an extension to traditional journalism and in many respects as a correction of it" (Singer, 2018, p. 1079). In summary, research into misinformation and the emergence of fact-checkers as a counter-force suggests that once a falsehood has entered today's unfettered and interconnected information sphere, attempts to "correct" it yield effects that can be positive-but that also are significantly and sometimes negatively influenced by the context in which those debunking attempts are produced and perceived. The volume and seeming indestructibility of misinformation, particularly on social media, compounds the difficulty. In the words of Heidi Larson (2021), founding director of the Vaccine Confidence Project at the London School of Hygiene and Tropical Medicine, fact-checking can resemble clipping the head off a weed: You can do it repeatedly, but the roots just continue to grow.
To place fact-checkers' attempts at dealing with misinformation in a more theoretical perspective, this article turns now to a necessarily brief consideration of the nature of more traditional forms of information gatekeeping as well as its ongoing reconfiguration in the digital and social world in which fact-checkers primarily operate.

Literature: The Past, Present, and Possible Future of Gatekeeping
Gatekeeping theory highlights the role of journalists as information mediators (Shoemaker, 1991), encompassing the ways in which they navigate that role amid influences at levels ranging from the individual to the societal, with news production routines holding particular power (Shoemaker et al., 2001; Shoemaker & Reese, 2013). First applied to a newspaper editor in the mid-20th century (White, 1950) and updated over the years (Shoemaker, 1991; Shoemaker & Vos, 2009; Vos & Heinderyckx, 2015), it has been both reasserted and challenged in a world of digital (Bro & Wallberg, 2015; Thorson & Wells, 2015), participatory (Bruns, 2005; Domingo et al., 2008), and social media (Coddington & Holton, 2014). Calls for a rethink date back at least 30 years (see Berkowitz, 1990), but over the past decade, a consensus seems to have emerged that the concept can be most aptly described as "in transition," as Vos and Heinderyckx (2015) characterize it.
In a fluid and inclusive media space, even journalists readily acknowledge that while they may retain a degree of editorial oversight, they no longer exert control over what enters the information sphere (Vos & Thomas, 2019); gatekeeping, like everything else online, has become a networked process (Meraz & Papacharissi, 2013). There is widespread agreement that the metaphor, if not necessarily the occupational norms undergirding it (Walters, 2021), is past due for an overhaul. Schwalbe and her colleagues (2015) opt for "gatecheckers," particularly in relation to visual content. Pearson and Kosicki (2017) suggest a better metaphor is "way-finding," placing the focus on the paths that news users take to find their way to a particular piece of information. Thorson and Wells (2015) propose a framework of "curative flows," encompassing not only the journalists involved in content production and dissemination but also media consumers, social others within digital networks, strategic communicators, and algorithms "designed to shape the discovery and presentation of content" (p. 31).
This idea of algorithmic gatekeepers deserves highlighting. Algorithms pose a significant challenge to classic gatekeeping theory, which Wallace (2018) suggests is no longer adequate to describe contemporary digital news selection processes involving a complex interplay among users, platforms, and news creators. Compounding the challenge is that algorithmic gatekeepers are "the ultimate incarnation of opacity" (Heinderyckx, 2015, p. 257; Pasquale, 2015)-and they of course play an outsized role in social media (Thorson, 2020), where so much coronavirus misinformation has circulated freely and widely (Ferrara et al., 2020; Rosenberg et al., 2020).
Algorithms external to news organizations and their human gatekeepers increasingly influence how journalists assess, research, and publish stories, as well as how users consume them (Brake, 2017). For instance, editors make extensive and increasing use of algorithms, particularly in their reliance on audience metrics for decisions about coverage and placement (Blanchett, 2021; Tandoc, 2014), part of their adaptation to what Carlson (2018) calls "measurable journalism." As AI permeates all stages of the news-making process, Simon (2022) warns, platform companies may increasingly control not only the channels of distribution but also the means of content production. Diakopoulos (2019) goes further, warning that a loss of editorial control is inevitable: "By introducing algorithms into news curation, journalists and editors are delegating decision-making about what content to include, where to include it, and when to post it" (p. 200). Newly emerging AI permutations such as ChatGPT, which responds to text prompts by rapidly generating copy drawn from machine learning (Pavlik, 2023), will raise fresh questions and concerns.
Recent gatekeeping scholarship has taken these considerations about roles, processes, and technologies on board and begun to tackle the notion of what Hermida (2020) calls "post-publication gatekeeping," a consideration that combines the human aspect of the process with "the materiality of the technological infrastructures and products, and the subsequent emergent social habits of news consumption and circulation" (p. 470). Scholarly attention is beginning to turn to this still underexplored concept, largely in relation to the impact users have on journalists' subsequent news selection (Ai et al., 2022; Blanchett, 2021; Salonen et al., 2022).
The current research seeks to expand these understandings by focusing on the particular form of quality control exercised by fact-checkers in retroactively assessing information that is already published. Given the near-impossibility of keeping any piece of content out of circulation, it suggests that contemporary gatekeeping entails identifying which bits of published information are true and which are not, and attempting to rebut, refute, or otherwise counter the latter. Fact-checkers thus epitomize an approach to gatekeeping born of the social media age, in which the historic understanding of the role (that audiences could "hear as a fact only those events which the newsman . . . believes to be true"; White, 1950, p. 390) has lost its validity.
The core research question of this exploratory study of fact-checkers as gatekeepers interrogates the characteristics of a massive fact-checking database constructed in an attempt to deal with misinformation circulating in the United States during the first year of the COVID-19 pandemic. Following an empirical examination of the scale, scope, and thematic nature of database posts, the article concludes by considering what the fact-checkers' approach might signify in relation to gatekeeping theory in the current media ecosystem.

Method
To address this research question, this study draws on a subset of 2020 #CoronaVirusFacts Alliance database posts: those tagged for the United States. Each post included the name of the fact-checking organization (e.g., PolitiFact); the date of the post; the country or countries tagged by the fact-checker; a rating (false, misleading, and so on); a summary of the statement (e.g., "Boiled orange peels with cayenne pepper are a cure for coronavirus"); and a link to "read more," offering an explanation of the rating and the ability to access a longer discussion on the organization's own website. Each organization decided whether to repost a fact-check from its own site to the database, formatting the relevant information into a template created by the IFCN tech team to support this collaborative effort. This article focuses on the 1,218 posts tagged "United States" (including two erroneously tagged as "USA" or "United"); associated links were not followed.
The researcher first logged all 10,885 posts in an Excel file, organized by month and day, then created a separate file for the U.S.-tagged posts. Information recorded included the country or countries associated with each post; the date of the post; the fact-checker who provided the post; the rating given the item; and the full text of the post itself. Because the analysis focused on the contents of the database, it was not possible to identify visual or textual characteristics of the original material that fact-checkers had selected for assessment. Nor was the original publisher of the content analyzed; that said, other work based on this database suggests social media in general, and Facebook in particular, was a source of a significant majority of misinformation fact-checked here (Singer, 2021b).
To facilitate analysis, a supplemental file was created, providing a count of the number of all U.S.-tagged posts per month and the number of cross-tagged posts per month (e.g., those tagged for both "France" and "United States"), as well as a log of countries included in any cross-tag. In considering the volume of COVID-19 misinformation in 2020, this article reports descriptive statistics based on a simple count of items.
A thematic textual analysis of the posts was then conducted to understand the topics represented across this body of misinformation and any fluctuations in emphasis over time. Textual analysis enables researchers to perform what McKee (2003, p. 1) describes as an "educated guess" about the most likely interpretations of a text, facilitating analysis of "how texts produce potential meanings and what those potential meanings are" (Hughes, 2007, p. 249). This qualitative approach is useful for considering samples of textual material, as here, rather than large bodies of text, and it can further be focused on a few selected features of that material (Fairclough, 2003). It enables researchers to discern patterns and versions of reality present in a text (Fürsich, 2009). In this study, textual analysis of the posts was applied to track the nature of misinformation debunked by the fact-checkers through various stages of the pandemic, including its emergence in early 2020, the virus' nearly unchecked rampage across America in the following months, and the increased attention to vaccines as the year drew to a close.
To carry out this analysis, the author re-read each post and made a preliminary thematic designation. After considering all 1,218 U.S.-tagged posts in this way, another pass through the data set enabled modification and refinement of these themes to reflect patterns more accurately. New themes-such as "economy" for posts related to stimulus checks and other economic matters, and "tests" to highlight posts that were specifically about testing capacity and processes-were added. Others that initially were considered separately were combined; for example, safety precautions (e.g., use of face masks) and risk factors (e.g., that having a beard increases the chance of infection) were originally interpreted as distinct themes but proved difficult to differentiate and were therefore collapsed into a single "risk" theme. Ultimately, a final list of 16 themes was derived and is provided in Appendix A.
As shown in Table 1, a total of 1,155 items-94.8% of the total-were rated as false, partially false, or misleading. In addition, 29 items received a "no evidence" rating; these were included in the thematic analysis, as were a total of six without a rating and another four with unique ratings indicating at least some degree of falsity. Only 22 posts, 1.8% of the total, were rated as even partially true; none was rated as wholly true. Because these 22 items did not constitute misinformation in the assessment of the fact-checkers, they were set aside for purposes of the thematic analysis. Two April posts labeled as "explanatory" also were omitted from the thematic categorization. It should be emphasized that as this study focused only on a single database of fact-checks, these data do not indicate that accurate information about COVID-19 was unavailable to American media consumers in 2020. Fact-checkers have significantly limited time and resources (Micallef et al., 2022; Singer, 2021a), making them more likely to focus on debunking potentially harmful false information than on bolstering a truthful statement.

Findings: Misinformation by the Numbers
The first of the 1,218 "United States" items was posted on January 21, 2020, by Agence France-Presse (AFP), labeling as false the statement that "the coronavirus was created in a lab and patented"; the last appeared on December 31, with Spanish fact-checker Newtral.es refuting the claim that a nurse in Tennessee had contracted Bell's Palsy from a COVID-19 vaccine. In between, a large majority of the posts (943, or 77.4%) were contributed by seven U.S.-based fact-checkers (see Appendix B), with a total of 34 fact-checking enterprises based in other countries providing the rest. More than 80% of these items were rated as completely untrue, with most of the others assessed as either partially false or misleading (see Table 1). Few fact-checkers bothered posting to the database items that were even partially true. Of the 22 items rated as containing some truth, 20 were posted by PolitiFact. Database tools facilitated cross-tagging of multiple countries, for instance when a hoax referenced one country but circulated more widely. Of the items tagged for the United States, 176 (14.4%) were also tagged for one or more other locations. However, U.S.-based fact-checkers rarely exercised this option, doing so only 7 times, all in April. Of the items cross-tagged with the United States in the database, 94% were provided by fact-checkers from other countries.

Notes to Table 1: (a) The total of 79 posts in this category had ratings of "mostly false" (59), "partly false" (15), "mainly false" (1), "inaccurate" (1), or "two Pinocchios" (3), a rating used by the Washington Post Fact Checker to indicate "significant omissions and/or exaggerations." (b) The total of 22 posts in this category had ratings of "mostly true" (10), "half true" (9), "partially true" (1), "partly true" (1), or "partially correct" (1). In addition, a rating of "no evidence" was used 29 times in posts tagged for the United States. Other ratings applied but not indicated above were "explanatory" (2), used for posts that provided an explanation or context about a statement, and "in dispute," "missing context," "mixed," and "unlikely," each used once. One fact-check was missing its rating, and "organization doesn't apply ratings" was used 5 times early in the year for FactCheck.org, which later changed its usual no-rating policy for database entries. Altogether, there were 41 of these "other" ratings, for a total of 3.4% of all U.S.-tagged posts.
Roughly half of the U.S.-tagged database entries were posted in March (345 posts) or April (315 posts); altogether, more than 80% of the posts (985 of 1,218) were added to the database in the first half of the year. Although this study cannot assess the reasons for such patterns, one possibility is that U.S.-based fact-checkers diverted their energies to the political campaign in the latter part of 2020; fewer than 100 items were added to the coronavirus database in August, September, and October combined. In March, an average of more than 11 posts were collectively contributed every day; in September, the average plummeted to just one per day.
Although the number of posts added to the database decreased over time-with a bit of an uptick in December, as misinformation about the emerging vaccines spread across the internet-the percentage that contained false or misleading information did not. On the contrary, the last item rated as even "half true" appeared on April 3, and both explanatory items also appeared in the first half of April. From then on, virtually every post was identified as either false or misleading by the fact-checkers. Again, the reasons for these patterns can only be inferred from a textual analysis; it may be, for instance, that fact-checkers decided to dedicate their limited time to addressing false information about the coronavirus rather than highlighting material that was at least partially correct. Whatever the explanation, it is clear that debunking false claims did not make them go away; not only did new ones continuously appear but old ones also recirculated repeatedly. To take one example: Claims by then-talk show host Rush Limbaugh and others that the coronavirus was no more serious than the common cold appeared in February, the month the first U.S. death from the virus was recorded; the pandemic was still being termed a hoax 8 months, hundreds of thousands of U.S. deaths, and millions of symptomatic infections later.

Findings: Misinformation themes
A total of 1,194 items of misinformation that appeared in the database across 2020 were subjected to textual analysis. As shown in Table 2, each of the 16 thematic categories represented at least 2% of these posts, ranging from 28 bits of misinformation related to technology to 136 related to politics. Adding in the 70 posts related to Donald Trump, a theme unto himself, indicates more than 17% of the items of misinformation debunked by these fact-checkers were political in nature, perhaps indicative of the overall emphasis placed on holding public figures accountable.

Note to Table 2: The 22 posts rated as at least partially true (see Table 1) are omitted from the count here. Two explanatory posts, which both appeared in April, also are omitted. Themes are presented in order of prevalence in the dataset, from high to low. Appendix A provides an alphabetical listing of themes and a description of each.
No fact-checker with multiple posts tagged for the United States confined its contributions to a single thematic category. The only clear distinction among their contributions was the heavy political leaning of the Washington Post Fact Checker, with all 21 of its contributions related either to President Trump or to politics more broadly. Otherwise, there were no discernible patterns to the selection of the fact-checkers, who each worked independently in identifying statements to investigate. For example, although 15 of the 125 posts from Science Feedback related to vaccines, it also contributed posts that fell into 13 of the other thematic categories; to take another example, AFP's 108 posts covered every theme identified here.
The only theme to recur in every month of the year involved vaccinations, referencing the availability, use, or effects of coronavirus vaccines (and, in a few cases, of other vaccines such as those for seasonal flu). As early as April, fact-checkers were countering false reports that a volunteer for a vaccine trial had died. Summer brought claims that a vaccine would "permanently alter our DNA," contained "dangerous amounts of fetal DNA," and resulted in "a 33% death rate." Microsoft billionaire and global philanthropist Bill Gates was repeatedly linked to nefarious plots related to vaccination, from using vaccines to implant microchips and thus "geolocate the population" to admitting "his [sic] COVID-19 vaccine might kill nearly 1 million people." In the early part of the year, there were claims that a vaccine already existed, typically in some distant location such as Russia or Australia; by the end of the year, when vaccines had cleared their clinical trials and actually were becoming available, they were greeted with a surge of misinformation about their safety and side effects, from sterility to facial paralysis to outright death (along with a revival of the microchip claim). Hoaxes related to vaccines accounted for more than half of all posts in December; this was the only theme to be more prevalent late in 2020 than it was in earlier months.
In contrast, misinformation about the origins of the virus was most prevalent early in the year; more than a third of the debunked statements in January, when the database launched, involved claims that the coronavirus was manufactured, notably that it was "a weapon developed in a Chinese lab." Other false statements during this period asserted that various unlikely entities had foreseen the outbreak, from late Libyan strongman Muammar Gaddafi to characters in movies or TV shows; indeed, the creators of the Simpsons cartoon were apparently so prescient that they even "predicted Tom Hanks would contract coronavirus in 2007." The early months of the year also saw dangerous misinformation about the severity of the illness, notably that it was "simply the common cold" or "just the damn flu." On the last day of February 2020, what is believed to be the first U.S. death from coronavirus was reported. Within a month, the United States had become the hardest-hit country in the world, with more than 81,000 infections and 1,000 deaths (Taylor, 2021). Fact-checkers were kept busy debunking statements that either exaggerated the progression of the disease ("Florida hospital reports a coronavirus 'infestation'") or, conversely, denied its prevalence and impact ("Pictures and reports of 'empty hospitals' prove COVID-19 spread is 'fake crisis'").
By March, the database was awash in misinformation about supposed cures for coronavirus; by the end of April, 75 such claims had been debunked, yet they continued to appear in every month except September. Fake cures ranged from the infamous hydroxychloroquine to assorted foods and beverages (bananas, coffee, garlic, lemons . . .) to blowing air from a hair dryer up one's nose. Cocaine was said to be a cure, and so was alcohol. On the other hand, a measure that does help curtail spread of the disease, the use of face masks, was said to be useless (a claim often falsely attributed to an authority figure) or even harmful, notably the false statement that masks "cause carbon dioxide toxicity." Overall, false statements about safety measures and risk factors represented the second-most prominent theme, an obvious obstacle to any efforts to encourage Americans to take simple precautionary steps. There were claims that hand sanitizer could burst into flames near a gas stove, that there is "no real scientific basis" to support social distancing, that "stay-at-home orders are illegal in the United States and can be disregarded with impunity." And more.
That the "politics" theme was the most prominent in this dataset is perhaps not surprising during a presidential election year. The nature of this political misinformation was diverse. There were a great many false claims about Democratic politicians, notably House Speaker Nancy Pelosi (claimed to have delayed coronavirus funding as a campaign tactic, among other misdeeds) and reviled Michigan Governor Gretchen Whitmer, accused of banning the purchase of everything from baby car seats to "vegetable seeds and fruit" to American flags. There were allegations of secret visits to the Wuhan lab (to which former President Barack Obama, in cahoots with Bill Gates and perhaps Anthony Fauci, allegedly channeled funds); accusations that Obama not only mishandled the swine flu crisis but also then left Trump "with a cupboard that was bare"; and reports that a whole raft of Democratic politicians, including Joe Biden and Kamala Harris, were blatantly flouting safety measures. Political claims appeared throughout the year, but were most prevalent in March, April, and May; they did not show an uptick as the election neared, as might have been expected. Again, reasons for this finding cannot be identified from a textual analysis, but one viable explanation could be that fact-checkers were simply not adding them to the database but rather dealing with them exclusively on their own websites.
And a few final words about Trump. Although the then-president was himself the source of a considerable amount of pandemic misinformation (Evanega et al., 2020), much of it refuted by the fact-checkers, there also were a sizable number of false claims made about him or attributing to him statements he did not make. He did not, for instance, "urge sick people to get out and vote during COVID-19." He did not say "people are dying that have never died before." He did not, at least as far as fact-checkers could ascertain, try to steal a vaccine from Germany or refuse testing kits provided by the WHO. In other words, despite the accusations of political bias sometimes leveled at U.S. fact-checkers (see Marietta et al., 2015), they were relatively even-handed in their assessment of statements about his role in the pandemic. Again, though, assessments of false statements by Trump (the Washington Post's fact-checker tallied an average of 39 untrue statements each and every day during his final year in office; Kessler, 2021) may simply not have been added to this database; indeed, in the last half of 2020, only 10 posts appeared in this thematic category, and six of them came from fact-checkers located outside the United States.
To summarize, fact-checkers devoted enormous time and energy to combating coronavirus misinformation, particularly in the first few months of 2020 but continuing straight through to Christmas and beyond. In all, an astounding 1,155 statements over the course of the year were assessed as false or misleading (the great majority as wholly untrue) and assiduously debunked. The Discussion now turns to what these attempts might signify in relation to contemporary gatekeeping theory.

Discussion
What does this small yet depressing sample suggest about the nature of coronavirus misinformation? For starters, it is persistent. Similar claims appeared, and were refuted, over and over. From discredited "cures" to vaccines ridiculously portrayed as microchip implantation devices, fact-checkers' efforts seemed to have little effect on the recurrence of such hoaxes. Second, it is global. This article focuses on the United States, but the larger dataset shows the same claims appeared repeatedly around the world, and were repeatedly countered by fact-checkers in country after country. And third, misinformation is responsive: It reflects reality even as it seeks to reinvent it. Database posts largely tracked the course of the disease in the United States, from its outbreak and rapid initial spread to attempts to combat it to the emergence of effective vaccines. The hoaxes were all the more dangerous because they mirrored the real world, albeit in a grossly distorted way.
More broadly, this study proposes an alternative understanding of gatekeeping in a digital and social media environment. As innumerable other scholars have pointed out through empirical and conceptual work, those who assess and verify information no longer can exert control over what enters the public domain in the ways they once did. Users, as well as the algorithms that minutely track and shape online activity, have gained a fundamental and virtually unassailable role. The best that humans exercising journalistic oversight can offer is an antidote to misinformation: either through the further reporting and analysis that reporters provide, or through an exploration of the veracity of claims, and a debunking of those found to be untrue, that is the fact-checkers' specialty. This debunking is itself the result of what also can be seen as a gatekeeping function: the fact-checker's selection from among the myriad bits of nonsense ricocheting around the ether on any given day.
Although they do many journalistic things, such as investigating and assessing the information they receive, fact-checkers are unlike journalists in important ways. Perhaps most centrally, fact-checkers neither put newly reported information into the marketplace nor keep flawed information out-a gatekeeping function that has become all but impossible to enact. In ruling on the veracity of previously published material rather than determining what should be published in the first place, the core of journalistic gatekeeping as traditionally understood, fact-checkers stand on the other side of a metaphorical gate to exercise an alternative approach that is both reactive and retroactive.
Indeed, fact-checkers may be ideally, maybe even uniquely, suited to performing a human gatekeeping role in a social media age in which algorithm-driven false or misleading information circulates widely, freely, and constantly. This exploratory study in the context of attempts to combat coronavirus misinformation does, however, support earlier calls for a new metaphor (Pearson & Kosicki, 2017; Schwalbe et al., 2015; Thorson & Wells, 2015). Perhaps we might add to the metaphorical mix the idea of "circuit breaker," which captures fact-checkers' attempts to interrupt, at least temporarily, the flow of misinformation. Despite its mechanical connotation, a circuit-breaking role seems, at least for now, best suited to human actors, particularly given the emphasis on expository transparency that undergirds the guiding principles of fact-checking (IFCN, n.d.; Singer, 2021a).
This study has numerous limitations, which in turn suggest wide-ranging opportunities for future study. For starters, the focus here was only on fact-checks relevant to the United States, despite the global nature of the pandemic and the "infodemic" that surrounded it. Another crucial limitation is that only what fact-checkers decided to post to the database was available for analysis, meaning other claims, including truthful ones, could not be assessed. The fact-checkers' rationales for selecting one bit of misinformation to debunk rather than another could not be incorporated here; nor could the ways in which database contributors interacted, leaving open important questions about the potential for intermedia agenda-setting in the context of a fact-checking network. Follow-up work that incorporates interview or survey data, which might usefully interrogate the place of human actors in an information space increasingly delineated by non-human ones, can address many of these shortcomings.
In the meantime, the insights afforded by this exploratory study potentially can inform how scholars (and practitioners) think about the contested concept of gatekeeping. They suggest ways that traditional understandings of influences on gatekeepers (Shoemaker et al., 2001;Shoemaker & Reese, 2013) are being challenged and reshaped by users, algorithms, and all the other intersecting actors within the contemporary news ecosystem. Perhaps most important, further studies of the effects of fact-checks on audiences' information assessments are needed to better understand whether debunking already-circulating falsehoods is a valuable extension of gatekeeping theory and a meaningful contribution to truth-telling in the public interest . . . or is akin to closing the barn door after the horse has galloped away.

ORCID iD
Jane B. Singer https://orcid.org/0000-0002-5777-9065

Notes

1. The database can be accessed at: https://www.poynter.org/ifcn-covid-19-misinformation/. Additional information about the creation and contents of the database is available at: https://www.poynter.org/coronavirusfactsalliance/. Note that although the database providers offered their own categorizations, these encompassed the database as a whole and ultimately covered a period of around two years. The themes discussed in this article, which is focused on the subset of U.S. posts in 2020 alone, therefore vary accordingly.

2. The IFCN code of principles is available at: https://ifcncodeofprinciples.poynter.org/

Appendix A

Coronavirus Database Themes, U.S.-Tagged Posts

Theme: Descriptions and examples

Culture: Posts referencing celebrities and other famous people (outside politics or science), plus cultural artifacts alleged to have predicted the pandemic. Example: "Cristiano Ronaldo bought private island to escape coronavirus"

Cure: Posts providing misinformation about preventing or treating Covid infection. Example: "Breathing air from a hair dryer or a sauna can prevent or cure COVID-19"

Economy: Posts referencing the United States or global economy, including misinformation about stimulus checks and effects on business(es). Example: "Economic stimulus payments to U.S. citizens will either reduce future tax refunds or will have to be paid back"

Exaggeration: Posts exaggerating the spread or impact of the coronavirus, including false statements about the rise or prevalence of cases. Example: "Coronavirus hits a 15% fatality rate"

Gates: Posts about Microsoft billionaire and global philanthropist Bill Gates . . . so common that he gets a category all to himself. Example: "Bill Gates has access to your DNA and ownership in WHO"

International: Posts referencing things that were happening only outside the United States. Example: "Belgium's health minister banned sex inside or with 3 or more people"

Manufactured: Posts claiming coronavirus is man-made or a direct result of other human activity, including misinformation about patents. Example: "The coronavirus is 'a military bio-weapon developed by China's Army'"

Media: Posts stating that the news media lie, exaggerate, or are otherwise untrustworthy. Example: "News outlets are misusing a boy's image to report the same child died of COVID-19 in three different countries"

Politics: Posts referencing United States political actions or actors other than Donald Trump. Example: "U.S. Vice President Mike Pence delivered empty boxes of PPE to a hospital as a publicity stunt"

Risk: Posts referencing misinformation about risk factors and safety precautions. Example: "Wearing face masks can cause carbon dioxide toxicity; can weaken immune system"

Seriousness: Posts claiming that Covid-19 is not a serious disease. Example: "COVID-19 is a bacterium that is easily treated with aspirin or a coagulant"

Technology: Posts connecting coronavirus to conspiracy theories rooted in the misuse of technology, primarily but not exclusively related to 5G. Example: "The coronavirus outbreak is caused by 5G technology"

Tests: Posts containing misinformation related to coronavirus tracing or testing. Example: "Nasal swabs used to obtain samples for COVID-19 tests reach the blood-brain barrier and may damage it"

Trump: Posts containing misinformation about the president, as well as from him. Example: "President Donald Trump said, 'People are dying who have never died before'"

Vaccines: Posts referencing the availability, use, or effects of coronavirus vaccines. Example: "Aborted fetal cells are in the COVID-19 vaccine"

Xenophobia: Posts containing misinformation about people of diverse nationalities (particularly Chinese), religions, or ethnicities. Example: "Coronavirus patients are being 'cremated alive' in China"

Other/Multi: Posts that do not fit any of the other thematic categories, along with a total of 10 that contain multiple misinformation themes in a single post. Example: "Supermarkets are recalling coronavirus-infected toilet paper"