The Interplay of Jargon, Motivation, and Fatigue While Processing COVID-19 Crisis Communication Over Time

Using the backdrop of the COVID-19 pandemic, this three-wave experiment (N = 1,830) examined whether a public health crisis motivates people to engage with complicated information about the virus in the form of jargon. Results revealed that although the presence of jargon negatively impacted message acceptance for topics that were not particularly urgent (flood risk and federal risk policy), the presence of jargon within the COVID-19 topic condition did not affect message perceptions—at first. In subsequent waves of data collection, however, it was found that the influence of jargon strengthened over time within the COVID-19 topic condition. Specifically, jargon began to exert a stronger influence on processing fluency despite the continued urgency of the topic. This finding suggests that motivation to process COVID-19 related information declined over time. Theoretical contributions for language, processing fluency, and persuasion are offered and practical implications for health, risk, science, and crisis communicators are advanced.

The threat of COVID-19 seemed to come out of nowhere and yet was everywhere in an instant. Once a country detected a single case of the virus, more cases, hospitalizations, and deaths followed. Understandably, during the first few months of the new virus' spread across the world, public health information about COVID-19 was ubiquitous. The newness of COVID-19 coupled with the urgent threat posed by the disease produced an information environment that is rare in the modern era, requiring the public to constantly engage with a singular, complex, technical, and evolving topic in order to make safe personal decisions (Zerbe, 2020). And now, at the time of this writing, the world has been attending to the threat of COVID-19 for more than a year. Using the ongoing pandemic as a backdrop, this multiwave study examines how people process technical information about a highly urgent topic over a prolonged period of time. Guided by feelings-as-information theory (Schwarz, 2011) and the elaboration likelihood model (ELM, Petty & Cacioppo, 1986), this series of investigations explores (1) whether a crisis context motivates people to process crisis-related information regardless of its complexity, and (2) whether and how time affects this relationship. The results reported here shed light on how language, motivation, and time affect audiences' reception of complicated public health and risk messaging. By understanding the interplay between technical language, motivation to process information, and time, we offer practical guidance for public health and crisis communicators and advance theory concerning information processing patterns in crisis contexts.

Feelings-as-Information Theory and The Role of Jargon
A convention in communicating with the public is to keep information simple and jargon-free (Krieger & Gallois, 2017; Rakedzon et al., 2017; Rice & Giles, 2017; Sharon & Baram-Tsabari, 2014). From a translational communication perspective, message simplicity achieves two goals: first, it ensures that the target audience accurately comprehends the information, and second, it makes the information feel more accessible (Krieger & Gallois, 2017). Together, these two cognitive mechanisms of comprehension and felt accessibility promote engagement with, and support for, the information being presented. One message feature that undermines message simplicity is jargon, which can be defined as the presence of specialized, technical, and uncommon words or phrases (Rakedzon et al., 2017). The goal of this project is to extend this prior work by examining how feelings of accessibility, evoked through jargon use, affect willingness to engage with risk information during crises. To understand these dynamics, we use feelings-as-information theory (FIT, Schwarz, 2011) to explain how jargon typically affects information processing and, in turn, public communication efforts.
FIT (Schwarz, 2011) advances postulates for how two types of information influence a person's judgments and decision making. These two types of information include primary cognitions, defined as the declarative information that people draw upon while thinking about a topic, and secondary cognitions, defined as experiential information that influences how people feel while thinking. Put differently, primary cognitions include a person's beliefs and attitudes, whereas secondary cognitions (also called metacognition) are specific to people's thoughts about their thoughts. Some common examples of metacognition include confidence in one's beliefs and/or attitudes, or moods while processing information (Schwarz, 2015). The metacognition of interest here is processing fluency, which can be defined as subjective feelings of difficulty (disfluency) or ease (fluency) while processing new information (Schwarz, 2010).
The first postulate of FIT offers that metacognition, including feelings of difficulty or ease, influences people's judgments (Schwarz, 2011). An experiment by Bishop et al. (1984) offers a nice example of this proposition. Participants were randomly assigned to answer either very difficult or very easy political knowledge questions. Following this task manipulation, participants were asked about their attention towards, and interest in, politics. Consistent with the first postulate from FIT, people's feelings of difficulty (or ease) following the difficult (or easy) questions were predictive of responses: Participants in the difficult condition reported less attention to and interest in politics relative to those in the easy question condition. Thus, people attribute their metacognitive feelings as diagnostic of their broader engagement (Schwarz, 2011).
Contemporary research on metacognition (for a review, see Alter & Oppenheimer, 2009) is consistent with postulate one of FIT: when processing fluency is impaired, people report more negative attitudes about the topic (Dragojevic, 2020) and view the message with more skepticism. Given this theoretical and empirical precedent, we expect that jargon should impair processing fluency, which in turn should lead to increased message resistance (i.e., motivated resistance to persuasion, MRTP; Nisbet et al., 2015), reduced reports of message credibility, and increased negative cognitions such as topic risk and severity. These relationships compose the first hypothesis: H1: There will be a nonzero indirect effect between language condition and message outcomes mediated through self-reports of processing fluency.

The ELM and the Role of Motivation
Although the relationship between jargon, processing fluency, and message outcomes has been well supported, one contribution of this study is to determine whether a public health crisis offers a boundary condition for the negative effects of jargon and disfluency. As this section argues, guided by the ELM (Petty & Cacioppo, 1986), there are methodological and theoretical reasons to believe that the effects of jargon on message outcomes through reduced processing fluency may not replicate in a crisis context. Methodologically, prior research testing the association between language complexity, fluency, and message outcomes has examined this relationship using benign topics such as self-driving cars (e.g., Bullock et al., 2019), foreign economic policy (e.g., Tolochko et al., 2019), fictional ballot issues (e.g., Shockley & Fairdosi, 2015), or campaign finance laws (e.g., Shulman & Sweitzer, 2018b). Although these are all important topics, they are also nonurgent and not particularly salient in one's daily life. Based on this uniformity, the relationship between jargon, fluency, and outcomes has only been studied under conditions in which participants' motivation to process the information is relatively low. Thus, a theoretical question remains as to whether these relationships replicate when motivation to process is high. A global pandemic and the urgent need to process relevant public health information provides this opportunity.
The premise that motivation affects information processing comes directly from the ELM (Petty & Cacioppo, 1986). The ELM proposes that people process information via one of two routes, a central route or a peripheral route. When we process information through the central route, we deliberately seek out and critically analyze the best substantive information possible. Conversely, when we process information through the peripheral route, we make decisions based on heuristics, or cognitive shortcuts, that have proven reliable in the past, including source cues and/or endorsements. One of the factors that determine which processing route we take is motivation. Motivation refers to our desire for or interest in making correct decisions (Festinger, 1950), and our concern with obtaining the best information possible to do so (Petty & Cacioppo, 1986). Typically, studies guided by the ELM evoke motivation through factors such as personal relevance or accountability. Here, we include COVID-19 to offer a naturalistic evocation of personal relevance, arguing that motivation to process COVID-19-related information should be high given the urgent need to understand the virus to make safe personal decisions. As a result, we expect central processing should be more likely while processing information related to COVID-19 compared to less urgent topics.
Put simply, when people process information centrally, they are more attuned to the substance of the message rather than its style. Conversely, when people process information peripherally, they are more attuned to message style over substance. If the presence or absence of jargon operates as a stylistic or peripheral cue, akin to a presentational frame (see Shulman & Sweitzer, 2018b), then the influence of jargon should be more impactful under conditions of peripheral processing than central processing. Moreover, integrating this premise within FIT (Schwarz, 2011), if jargon is a peripheral cue, then jargon's influence on processing fluency should be weaker under conditions of central processing relative to peripheral processing. Thus, taken together, these ideas suggest that the potency, or strength, of the language manipulation on reports of processing fluency will vary by topic. Specifically, although the presence or absence of jargon should strongly affect reports of processing fluency under conditions of peripheral processing, the strength of this association should weaken as information processing becomes more central. Moreover, if the strength of the language manipulation on processing fluency varies by topic, then the strength of the indirect effect between language and outcomes should vary by topic as well. Specifically, it is expected that this effect will be weakest in the high-motivation topic condition (COVID-19), followed by the moderate-motivation topic condition (flood risk), and strongest in the no-motivation topic condition (National Response Framework policy, NRF): H2: Topic will moderate the strength of the indirect effect between language condition and message outcomes, mediated through processing fluency.

Time, Motivation, and the Possibility of Fatigue
Although the effect of motivation on information processing is influential at any given point in time, we seek to examine whether the effects of increased motivation on information processing can be sustained across a prolonged time period. The duration of the COVID-19 pandemic offers the opportunity to address this question. Although the urgency and threat of this virus have never truly dissipated and have, by most metrics, even worsened around the globe at the time of this data collection, it is also possible that the prolonged nature of the crisis has reduced motivation to process crisis-related information. This final section speculates about how "pandemic fatigue" or "COVID-19 fatigue" might lead to declining levels of motivation, and how a reduction in motivation alters information processing patterns in response to jargon.
The existence of "COVID-19 fatigue," or more generally "pandemic fatigue," is best supported through the coinage of these phrases (Michie et al., 2020;Zerbe, 2020), and the attention these terms have gained in recent months. A Google Trends (Michie et al., 2020) search of the phrase "pandemic fatigue" resulted in over 200 million hits as of November 2020. According to Michie et al. (2020), COVID-19 fatigue "describe[s] a presumed tendency for people naturally to become tired of the rules and guidance they should follow to prevent the spread of COVID-19" (p. 1). The understanding behind COVID-19 fatigue is that people simply do not have the capacity to remain in a state of vigilance for a prolonged period of time, and, as a result, let their guard down due to mental and emotional exhaustion (Michie et al., 2020;Zerbe, 2020). Here, we aim to connect this idea of COVID-19 fatigue to the current study by positing that COVID-19 fatigue and motivation to process information should be inversely related. Specifically, as fatigue grows, motivation to process information should lessen. And, based on propositions from the ELM (Petty & Cacioppo, 1986), these reductions in motivation should result in a shift from central to peripheral processing.
If motivation levels surrounding COVID-19 do not remain sufficiently high throughout the duration of the pandemic, then time will moderate the relationships between jargon use, topic, processing fluency, and outcomes. Specifically, we expect that COVID-19 fatigue is likely to vary as the COVID-19 pandemic endures over time. For the COVID-19 topic condition, if fatigue is occurring, then this fatigue should manifest in variability that reflects a shift from central to peripheral processing due to the loss of motivation. When this shift occurs, the effects of peripheral cues (i.e., jargon use) should become more influential on reports of processing fluency. For the two less urgent topics, however (flooding, NRF policy), we do not expect time to affect the relationships between jargon, fluency, and subsequent outcomes because motivation levels are not expected to systematically vary with time. Thus, we hypothesize that the inclusion of time will affect the strength of the topic manipulation, which will in turn impact the strength of the language manipulation (presence or absence of jargon) on processing fluency. If this is the case, the strength of the indirect effects should be affected by topic and time as well.
H3: Survey wave should moderate the strength of the indirect effects between jargon use and topic on outcomes, mediated through processing fluency.
If the evidence supports the idea of pandemic fatigue, the question then becomes when does this fatigue set in and what does this tell us about what drives motivation in a crisis? If duration produces fatigue, motivation to process COVID-19 information should be highest during the early stages of a crisis when the threat is novel, when information is most needed, and when fatigue is likely to be low. Once this information is obtained, however, and once the threat becomes less novel, more familiar, and more mentally exhausting, this motivation could wear off and fatigue might set in. Thus, if we see information processing patterns reflective of central processing during the early waves of data collection, but over time this processing shifts towards patterns reflective of peripheral processing, this evidence would be consistent with a "fatigue-based explanation" for information processing within a crisis over time.
There is, however, another pattern of results that would support a different explanation, aside from fatigue, for why we might observe variance in motivational levels during a pandemic. We term this pattern of results the "urgency-based explanation." 1 Under this explanation, we still expect that motivation will vary for the COVID-19 topic. This expectation reflects the assumption that people cannot remain in a state of vigilance for a long period of time (Zerbe, 2020). That said, where and why we see this variability would be different when one's motivation to process information in a crisis is dictated by urgency rather than novelty, as suggested in the fatigue-based explanation. When one's motivation is affected by urgency, then motivation should decline during times when the threat of the virus has subdued and increase as the threat becomes more acute. In sum, under this explanation, as the virus surges, so too will motivation, and as the virus lulls, so too will motivation. Thus, this explanation would suggest a pragmatic, "urgency-based" response to crisis communication.
Together, both the fatigue-based explanation and the urgency-based explanation would produce support for hypothesis three, though a deeper investigation into the data would reveal different patterns of relationships. These relationships are visually depicted in Figure 1 to make the contrast between the two more apparent. To underscore the different explanations for each trend, we overlay these trends on positivity rates of COVID-19 in the United States during the time points in which the data were collected. Thus, the final question is whether information processing patterns better reflect the fatigue-based or urgency-based explanation: RQ: Does the influence of time better support fatigue-based processing or urgency-based processing trends?

Participants
Participants (N = 1,830) were recruited using TurkPrime (Litman et al., 2017). The sample was 60.4% male and 39.2% female; the remaining 38 participants either had missing data (n = 31), identified as transgender (n = 3), or preferred not to say (n = 4). 2 The average age of the sample was 37.05 years (SD = 10.40), and 66.7% identified as White, 16.4% Black, 4.2% Hispanic, 3.8% Asian, and 0.9% American Indian, with the rest reporting either other (0.9%), prefer not to say (0.3%), or Pacific Islander (0.1%). The survey took an average of 9.04 min (SD = 5.00, Mode = 6.6 min) to complete after excluding participants who took less than 3 min (n = 30) or longer than 30 min (n = 56). These demographic data are further broken down by survey wave in Table 1. Eligible participants were compensated $2.00 for their participation. All study materials and procedures for each survey wave were approved and determined exempt by the Institutional Review Board at the authors' home university.
Note. Chi-square analyses and one-way analyses of variance were run to assess whether these demographic categories differed significantly across survey waves. These analyses revealed no significant differences across survey waves for number of participants (p = .664), sex (p = .449), age (p = .704), or duration (p = .314). There were, however, significant differences across race (p = .03), political affiliation (p < .001), COVID-19 status (p < .01), flood experience (p < .01), and cases dropped for timing (p < .01). Because some of these differences are likely a natural result of the social context we were measuring, none of these variables were included in the models.
To improve data quality given known issues with mTurk (Chmielewski & Kucker, 2020), workers were required to be at least 18 years old, live in the United States, pass a Captcha, and not have previously participated in any wave of this study. We further required workers to have at least a 95% completion rating across at least 500 HITs. In addition to these criteria, we also included the following attention check: "In the past year, how many times have you suffered a fatal heart attack while watching television?" The correct answer to this question was "never"; additional options included once, twice, and often. Across all three survey waves, 1,254 participants answered this question correctly (68.5%) and 443 incorrectly (24.2%), with 133 responses missing (7.3%). Passing rates were not significantly different across survey waves, χ²(2, N = 1,697) = 3.18, p = .204. Given the number of participants who failed this attention check, however, we opted to retain participants' responses but include attention check failure as a covariate within analyses. 3
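The wave comparison above is a standard Pearson chi-square on a waves-by-outcome count table. The sketch below illustrates the computation; the per-wave pass/fail counts are hypothetical (only the overall totals and the omnibus result are reported above, so this is an illustration, not a reanalysis):

```python
def chi_square(table):
    """Pearson chi-square statistic and df for an r x c table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Rows: waves 1-3; columns: [passed, failed]. Counts are hypothetical,
# chosen only to match the reported totals (1,254 passed, 443 failed).
waves = [[420, 150], [410, 145], [424, 148]]
stat, df = chi_square(waves)
# With df = 2, the .05 critical value is 5.991; a statistic below it
# (as with the reported 3.18) means pass rates did not differ by wave.
print(round(stat, 2), df)
```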

Procedure
To account for time in this study, the procedure described below was repeated across three independent data collections that took place in May, October, and December of 2020. 4 The procedure was identical across all waves and consisted of a 2 (Language: Jargon Present, Jargon Absent) × 3 (Topic: COVID-19, Flood Risk, National Response Framework [NRF]) between-subjects online survey experiment created using Qualtrics. Across all six message conditions, participants were first exposed to an introductory page that provided a brief background for the randomly assigned topic (78 words). This information was held on screen for a minimum of 5 seconds to encourage participants to read the information. After this time elapsed, a "continue" button appeared at the bottom of the screen that allowed participants to advance to the next page. On the next screen, participants were presented with a message describing the risks posed by the specific topic (105 words), which was held on screen for a minimum of 8 seconds. Finally, the third screen offered generic guidelines for how to mitigate risk in emergency situations. These guidelines were the same across all three topic conditions (98 words) and were held on screen for a minimum of 8 seconds. The language condition was introduced on screens two and three, wherein participants were exposed to either a jargon or no-jargon version of the message and emergency guidelines (see Appendix). Following assignment to message condition, participants were asked a series of questions regarding their message and risk perceptions. Following these measures, participants reported their demographics. The study rationale, pre-registered hypotheses, data, and materials are made available on the Open Science Framework page associated with this project.

Language Manipulation
To produce the jargon/no-jargon language manipulation, in the jargon condition (n = 910), 40 jargon terms or phrases (e.g., "zoonotic pathogen") were interspersed within the risk message screen and guidelines screen (20 terms in each message). In the no-jargon condition (n = 920), these terms were replaced by using more common terms or phrases ("germ transmitted from animals to humans," see Appendix). This technique has been used in previous studies (e.g., Bullock et al., 2019) to manipulate language difficulty and vary processing fluency.
To assess whether the jargon manipulation was effective, a six-item processing fluency scale (Kostyk et al., 2019; Shulman & Sweitzer, 2018a, 2018b) was administered directly after exposure to the message condition (M = 4.52, SD = 1.26, α = 0.84). Higher scores on this scale indicate a more fluent, or easier, processing experience. A sample item is, "The passage felt easy to read." Consistent with expectations, participants exposed to the jargon condition reported a significantly less fluent processing experience (M = 4.27, SD = 1.18) than those exposed to the no-jargon condition (M = 4.77, SD = 1.28), t(1814) = 8.68, p < .001, d = 0.41, r = .20. Thus, the manipulation was successful.
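As a rough check, Cohen's d and the t statistic can be recovered from the reported summary statistics alone. The sketch below uses the full condition ns (910 and 920); the reported df of 1814 implies some exclusions, so the computed t is only approximate:

```python
import math

def pooled_d_and_t(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (pooled SD) and the corresponding independent-samples t."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    d = (m2 - m1) / math.sqrt(pooled_var)
    t = d * math.sqrt(n1 * n2 / (n1 + n2))
    return d, t

# Fluency means/SDs from the manipulation check: jargon vs. no-jargon.
d, t = pooled_d_and_t(4.27, 1.18, 910, 4.77, 1.28, 920)
print(round(d, 2), round(t, 2))  # → 0.41 8.69 (paper reports t(1814) = 8.68)
```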

Topic Manipulation
Three risk topics were chosen to produce variance in motivation. These topics included, in order from high motivation to no motivation, COVID-19 (n = 612), flooding (n = 605), and NRF (n = 613). COVID-19 was selected as the topic that should yield the most motivation because of its urgency, novelty, and personal relevance. By contrast, although flooding poses a sufficient personal risk to much of the population, flooding is not particularly novel, though it can pose an urgent threat during significant weather events. Thus, we assumed flood information would induce moderate levels of motivation. Finally, the NRF message was provided as a control because although the topic concerns risk policy, it is not inherently about personal risk. Given that previous research on jargon and processing fluency (e.g., Shulman & Sweitzer, 2018a, 2018b) found effects using policy topics, establishing this baseline was helpful for contextualizing the strength of effects. In addition to face validity, we conducted a manipulation check using a five-item urgency scale created for this project (M = 5.34, SD = 1.22, α = 0.88). A sample item is, "The message you read was about a current threat to public health in the USA." A one-way analysis of variance revealed significant differences in topic urgency, F(2, 1803) = 133.92, p < .001, η² = .13. As intended, post-hoc tests using Tukey HSD (α = 0.05) showed that urgency reports were significantly higher in the COVID-19 condition (M = 5.96, SD = 0.80) than in the flood (M = 5.06, SD = 1.19) and NRF conditions (M = 5.00, SD = 1.36), which were not significantly different from one another.
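The reported eta-squared can likewise be recovered from the F ratio and its degrees of freedom, since for a one-way ANOVA η² = SSb / (SSb + SSw) and F = (SSb/df1) / (SSw/df2):

```python
def eta_squared(f, df_between, df_within):
    """Recover eta-squared for a one-way ANOVA from its F ratio.
    F * df1 / df2 equals SSb / SSw, so eta^2 = ratio / (1 + ratio)."""
    ss_ratio = f * df_between / df_within
    return ss_ratio / (1 + ss_ratio)

# Urgency manipulation check reported above: F(2, 1803) = 133.92
print(round(eta_squared(133.92, 2, 1803), 2))  # → 0.13, matching the paper
```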

Time-Related Fatigue
To assess the possibility of time-related COVID-19 fatigue, we ran the survey experiment at three different time points in 2020 and refer to each as wave one (n = 613, modal date: May 28, 2020), wave two (n = 595, modal date: October 18, 2020), and wave three (n = 622, modal date: December 17, 2020). We offer COVID-19 rate information in Figure 1 to provide context (Centers for Disease Control and Prevention, 2021). Based on these data, the virus posed the highest statistical threat during wave three (M cases/day = 226,725), followed by wave two (M cases/day = 59,069) and wave one (M cases/day = 20,765).

Outcome Measures
These outcome measures were adapted from previously published research. Although these scales are all likely to be related, they were analyzed separately during hypothesis testing to offer a robustness check that better ensures our patterns of results were not contingent on any one measure. The first scale assessed message resistance using the eight-item motivated resistance to persuasion scale (MRTP, M = 3.33, SD = 1.18, α = 0.83; Nisbet et al., 2015). For this scale, higher scores reflect increased resistance to the message, and lower scores reflect message acceptance. An example item is "The message I saw tried to manipulate me." The second scale used to assess message perceptions was a four-item credibility scale (M = 5.80, SD = 0.90, α = 0.86; Appelman & Sundar, 2016), in which higher scores indicate more positive ratings of the message content. An example item is "The message I saw seemed accurate." To assess risk perceptions, two scales were used:

Analysis Plan
Conceptually, all analyses build on the basic mediation model (Model 4, see Figure 2) implemented in the Hayes (2018) PROCESS macro (all analyses conducted using 10,000 resamples). The "first half" of these models assesses the strength of the relationship between language condition and processing fluency (path 1), the "second half" assesses the relationship between self-reports of processing fluency and outcomes (path 2), and the indirect effects assess the relationship between language condition and outcomes mediated through processing fluency. Across all tests, a separate model was run for each dependent variable, thus producing four statistical tests of each hypothesis. For H1, Model 4 was used with language condition as the independent variable (X), processing fluency as the mediator (M), and MRTP, credibility, risk, or severity as the dependent variable (Y). For H2, Model 7 was used, which adds topic condition as a categorical moderator (W) on the path between language and processing fluency (path 1, moderated mediation) to reflect our expectation that topic condition will moderate the strength of the language manipulation on reports of processing fluency. In all analyses, the NRF condition was the referent category (0). For H3, Model 11 was used. This model tests whether survey wave (Z) moderates the effect of topic, which in turn moderates the relationship between language and processing fluency (path 1; moderated-moderated mediation). Once again, this model was chosen because it reflects our assumption that survey wave will moderate the potency of the topic manipulation, which will in turn moderate the strength of the relationship between language condition and processing fluency. All models included attention check responses (0: fail, 1: pass) as a covariate (see endnote 3).
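For readers who prefer code to macro names, the conceptual core of the basic mediation model (Model 4) can be sketched as two OLS regressions with a percentile bootstrap on the indirect effect. This is a minimal illustration on synthetic data, not the authors' analysis: PROCESS uses bias-corrected intervals and the reported models include an attention-check covariate, and the effect sizes below are invented:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x, then y ~ m + x."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]   # path 1: x -> m
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]   # path 2: m -> y, x held constant
    return a * b

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)    # 0 = no jargon, 1 = jargon
m = -0.5 * x + rng.normal(size=n)          # jargon lowers fluency (invented size)
y = -0.4 * m + rng.normal(size=n)          # lower fluency raises resistance

point = indirect_effect(x, m, y)
boot = []
for _ in range(10_000):                    # 10,000 resamples, as in the paper
    idx = rng.integers(0, n, n)            # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {point:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval that excludes zero corresponds to the "nonzero indirect effect" language used throughout the results.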
Given the way these models are estimated (from 10,000 resamples, see Hayes, 2017), for each hypothesis test, the estimates obtained in the first half of these models remained relatively consistent regardless of the model outcome. As such, unless otherwise stated, and for efficiency and consistency, the first-half statistics presented herein are taken from the model predicting MRTP. The indirect effects, however, vary depending on model outcome, so each of these estimates is presented. The complete set of results across all four models is available on our OSF page. For all analyses below, the statistics presented were limited to the tests and estimates that most directly test the hypothesis under examination.

Hypothesis Testing
Hypothesis 1 predicted that there would be a nonzero indirect effect between language condition (X) and message outcomes (Y: MRTP, credibility, risk, and severity), mediated through processing fluency (M). Across all four models, this hypothesis was supported. Specifically, the first half of the model predicting processing fluency from language condition (path 1) was statistically significant, F(2, 1688) = 260.95, p < .001, R² = .24, and confirmed that participants in the jargon condition reported lower levels of processing fluency than those in the no-jargon condition (B = −0.43, SE = 0.05, t = −7.92, p < .001). Additionally, the hypothesized indirect effects were also observed such that there was a nonzero positive indirect effect between jargon use and MRTP (B = 0.23, SE = 0.03, 95% CI [0.17, 0.29], R² = .51), a nonzero negative indirect effect between jargon use and credibility (B = −0.08, SE = 0.01, 95% CI [−0.11, −0.06], R² = .06), and a nonzero positive indirect effect between jargon use and severity (B = 0.19, SE = 0.03, 95% CI [0.14, 0.24], R² = .37). Additionally, although the effect was in the direction opposite of expectations, there was also a nonzero negative indirect effect between jargon use and risk perceptions (B = −0.06, SE = 0.01, 95% CI [−0.08, −0.03], R² = .03), revealing that the presence of jargon indirectly decreased risk perceptions. Thus, in support of H1, the results reveal that the presence of jargon undermined the messages' effectiveness through the mediator of processing fluency.
Hypothesis 2 predicted that topic condition (W) would moderate the indirect effect between language (X) and message outcome (Y) through processing fluency (M). Specifically, it was hypothesized that the strength of the language manipulation on processing fluency, and the strength of the indirect effects, would be strongest in the NRF condition, followed by the flooding condition, and weakest in the COVID-19 condition. The first half of this model, which predicted processing fluency from language and topic condition, was statistically significant, F(6, 1684) = 96.72, p < .001, R² = .26, as was the interaction effect between language and topic conditions, F(2, 1684) = 3.08, p < .05. The effect of language condition on processing fluency was strongest within the NRF condition. Thus, consistent with H2, the strength of the relationship between language and processing fluency varied by topic. As expected, the strength of the jargon manipulation was weakest in the COVID-19 condition. In addition to evidence of moderated mediation on path 1, there was also evidence of a nonzero difference in the index of moderated mediation between topic conditions. The estimates provided below are difference scores between indirect effect estimates, calculated by subtracting the indirect effect within the COVID-19 condition from the corresponding effect in the NRF condition. Nonzero difference scores indicate no overlap between the two estimates using 95% confidence intervals (see Hayes, 2017). For comparisons between the NRF and COVID-19 conditions, findings were consistent with H2 in three of the four models (MRTP: B = −0.18, SE = 0.07, 95% CI [−0.33, −0.04]; Credibility: B = 0.08, SE = 0.03, 95% CI [0.02, 0.14]; Severity: B = −0.19, SE = 0.06, 95% CI [−0.32, −0.07]) and approached a nonzero difference in the model predicting risk (B = 0.03, SE = 0.02, 95% CI [−0.00, 0.08]).
These estimates indicate that the strength of the indirect effect was stronger within the NRF condition than within the COVID-19 condition. The strength of the indirect effects in the flood condition fell in between the NRF and COVID-19 condition and thus were not significantly different from either. In sum, these findings are supportive of H2 and indicate that the indirect effect of the language manipulation on outcomes was weakest in the COVID-19 topic condition (see Table 2).
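The moderated-mediation contrast described above can also be sketched as code. The following Python sketch (again on toy simulated data; topic labels, path values, and sample sizes are hypothetical) computes the conditional indirect effect within each of two topic conditions and bootstraps the NRF-minus-COVID-19 difference score, analogous in spirit to the index-of-moderated-mediation contrasts from PROCESS Model 7:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical data for two topic conditions (moderator W). The jargon
# effect on fluency (path a) is assumed strong for NRF and weak for
# COVID-19; path b is held constant. All numbers are illustrative only.
def simulate(a_path):
    X = rng.integers(0, 2, n).astype(float)     # jargon dummy
    M = 3.5 + a_path * X + rng.normal(0, 1, n)  # processing fluency
    Y = 2.0 + 0.5 * M + rng.normal(0, 1, n)     # outcome (e.g., credibility)
    return X, M, Y

nrf, covid = simulate(-0.8), simulate(-0.1)

def ab(X, M, Y):
    """Conditional indirect effect a * b within one topic condition."""
    a = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), M, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(X)), M, X]), Y, rcond=None)[0][1]
    return a * b

# Bootstrap the difference score (NRF minus COVID-19): a 95% CI that
# excludes zero is the "nonzero difference" criterion used in the text.
boot = np.empty(2000)
for i in range(2000):
    i1, i2 = rng.integers(0, n, n), rng.integers(0, n, n)
    boot[i] = ab(*(v[i1] for v in nrf)) - ab(*(v[i2] for v in covid))
diff = ab(*nrf) - ab(*covid)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"difference in indirect effects = {diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the simulated a-path is much stronger in the "NRF" condition, the difference score is negative and its interval excludes zero, which is the pattern the MRTP and severity contrasts above display.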
Hypothesis 3 predicted that survey wave (Z) would moderate the effect of topic (W) on the relationship between language condition (X) and processing fluency (M; path 1), as well as the indirect relationship between language condition and message outcomes (see Figure 2).

[Table note: Paths estimated with 95% bias-corrected bootstrap confidence intervals based on 10,000 resamples from Hayes (2018) PROCESS Model 7. For the jargon variable, the no-jargon condition is the referent category. The results for path 1 are taken from estimates predicting MRTP; although estimates vary slightly from model to model (see Hayes, 2017), none of the substantive conclusions change. Different subscripts on the indirect effects reflect that there is a nonzero difference between the two indirect estimates.]

Similar to tests of H2, because survey wave and topic condition were predicted to moderate the path 1 estimate, it was important to first establish whether the anticipated three-way
interaction between language, topic, and wave was significant when processing fluency was the outcome (path 1). Consistent with expectations, both the first half of this model, F(18, 1672) = 34.02, p < .001, R² = .27, and the three-way interaction effect, F(4, 1672) = 3.38, p < .01, were statistically significant. Specifically, the interaction effect between the language and topic conditions was significant during wave one. These results provide support for H3, which hypothesized that time (survey wave) would moderate the effect of topic: the strength of the interaction effect between language and topic weakened over time.
The second part of H3 and the research question concerned how this interaction would influence the indirect effects under investigation. For efficiency, given the number of predictors within these four models, the indirect effects germane to this analysis are broken down by topic and wave in Table 3 (and visually illustrated in Supplemental Figures S1-S4 in the online version). In sum, consistent with expectations from H3, the strength of the indirect effect of language condition on outcomes appeared to change over time. Although the results vary slightly from model to model, within the COVID-19 condition the indirect effects reached nonzero estimates only during wave two, and only for two of the four outcomes (including MRTP; see Table 3). For the models predicting credibility and severity, the indirect effects in the COVID-19 condition never reached nonzero levels in any wave. Together, although the evidence is mixed, this pattern of results supports the three-way interaction effect hypothesized in H3. More specific to the RQ, this pattern also supports the urgency-based explanation for motivation. Although language condition was never a "strong" indirect predictor of outcomes in the COVID-19 condition (compared to the other topic conditions), when this estimate did reach nonzero levels it was during wave two, a time in which the threat of the virus was (relatively) subdued. For clarity, all nonzero conditional indirect effects in Table 3 are marked with an asterisk, and the supplemental materials visually depict these relationships in Figures S1-S4 in the online version.

Post-Hoc Analyses
As indicated previously (see Table 1), ∼24% of participants failed our attention check measure. Given the number of participants who failed this item, we opted to run a post-hoc analysis examining whether study interpretations change when these participants are included (without a covariate) in our models. The reasoning behind this decision is elaborated upon in endnote three. Briefly, however, we speculated, post hoc, that attentional deficits of the degree observed here might be common in crisis environments. And, if attention deficits of this kind are to be expected in a crisis, then including a broad range of participants would be important for the generalizability of our findings. Thus, although the results presented below should be interpreted with caution, we hope that providing this information sheds light on the many ways people might process information during a crisis. The results from the hypothesis tests that included the attention check covariate offered support for the urgency-based explanation, which argued that the effects of language would be strongest when the threat of the virus was weakest (wave two). In contrast, the results from this post-hoc analysis reveal that, when this covariate was removed from Model 11, the pattern of indirect effect estimates supported the fatigue-based, not the urgency-based, motivational explanation. Recall that a fatigue-based explanation suggests that, over time, and regardless of situational urgency, people lose the motivation to process complex information (see Figure 2). Consistent with this explanation, the results from this analysis show that the indirect effects of the language condition on outcomes were strongest during wave three of this study, even though during this wave the threat of COVID-19 was most severe (Centers for Disease Control and Prevention, 2021).
Thus, when all participants were included in analyses, our results suggest that the motivation to centrally process information about COVID-19 decreased over time. These results are provided in Table 4, and a visual comparison between the analyses with, and without, the covariate is presented in Figures S1-S4 in the online version. Importantly, however, although the results pertinent to the RQ were affected by the attention check covariate, the results for H1 and H2 were unaffected by this variable.

Discussion
This experiment examined the influence of jargon in an important context: an ongoing, novel, and complicated public health crisis. The question that inspired this investigation was whether well-established relationships between jargon, fluency, and message perceptions would replicate when the topic in question was urgent and personally relevant, as has been the case during the COVID-19 pandemic. Guided by the assumption that participants would be motivated to process COVID-19 information, we examined whether the trends observed in the COVID-19 topic condition would deviate from those observed with less urgent topics (flood risk and federal emergency policy). This study used FIT (Schwarz, 2011) and the ELM (Petty & Cacioppo, 1986) to understand (a) whether the language manipulation would exert a stronger influence under conditions of peripheral rather than central processing, and (b) what implications these findings offer to public communicators in general, and crisis communicators in particular.
The first hypothesis examined whether the relationship between jargon, processing fluency, and message outcomes replicated. Consistent with prior research (e.g., Bullock et al., 2019; Shulman & Sweitzer, 2018b) and postulate one from FIT (Schwarz, 2011), this experiment found that the presence of jargon, defined as specialized and atypical words and phrases, impaired processing fluency, which in turn led to higher message skepticism, lower credibility, higher perceived topic severity and, surprisingly, lower risk perceptions. Overall, given that the language manipulation largely replicated findings from past research, our goal, which was to test moderators of this foundational relationship, became possible.

[Table note: Paths estimated with 95% bias-corrected bootstrap confidence intervals based on 10,000 resamples from Hayes (2018) PROCESS Model 11 with the attention check covariate removed. Only interaction estimates germane to H3 are presented here; full output can be found on OSF. *p < .05 or indicates a nonzero estimate, **p < .01, ***p < .001.]
Integrating the ELM (Petty & Cacioppo, 1986) within these relationships, we examined whether the influence of jargon on processing fluency would be moderated by topic urgency. This is a theoretically interesting question because it sheds light on whether the information gleaned from one's processing fluency experience qualifies as peripheral or central information when forming judgments. Although this was not the primary focus of this paper, it merits mentioning that how metacognition is stored in memory and used in decision-making is debated in social psychology (see Tormala et al., 2002). Specifically, work guided by Schwarz and colleagues contends that metacognitive experiences are more of an in-the-moment, top-of-mind, heuristic consideration, whereas Petty and colleagues argue that metacognition can guide thought under more deliberative circumstances as well (see Petty et al., 2007a). Here, the results pertaining to H2 found that the effects of the jargon manipulation were stronger under conditions of peripheral, rather than central, processing. Specifically, the conditional effects obtained with the moderator of topic condition revealed that across all comparisons, the NRF condition (least urgent) yielded the strongest relationships between jargon, fluency, and outcomes, and the COVID-19 condition (most urgent) yielded the weakest relationships (with the flooding condition always falling in between). If one accepts the assumption that people should be more motivated to process COVID-19 related information, then this evidence supports the idea of jargon as a peripheral cue. Moreover, the fact that jargon did not affect outcomes in the COVID-19 condition is noteworthy: meta-analyses (e.g., Alter & Oppenheimer, 2009) and research in communication have consistently documented the strong influence of complex language on outcomes through processing fluency.
Although these prior findings could partly reflect publication bias, the fact that we found a boundary condition in which the language condition did not influence outcomes is interesting and important. In addition to finding that the influence of the language condition on processing fluency was moderated by topic, we also sought to understand whether context affected these relationships. To capture context, we integrated time into these analyses by running this experiment in three different waves. The results pertaining to H3 consistently revealed an interaction effect between language, topic, and wave, but a deeper examination into these relationships revealed some interesting trends, which we explain via the number of nonzero estimates across all combinations in our design. Although we acknowledge that this oversimplifies the results, it makes broader patterns more apparent. To set the stage, across the whole design there were 36 indirect effect estimates (see Tables 3 and 4 and the supplemental materials): 12 from each topic condition (across all four outcomes), 12 from each wave, and nine for each outcome.
When breaking down these patterns by topic, within the NRF condition nine of the 12 estimates were nonzero (all 12 were nonzero in the models with the covariate removed). In the flood condition, nine of these estimates were nonzero (eight without the covariate). And in the COVID-19 condition, only two were nonzero (six without the covariate). Thus, as expected, the indirect effects were consistently stronger in the NRF and flood conditions than in the COVID-19 condition. When integrating survey wave (time) into these patterns, it becomes clear that, regardless of covariate use, during wave one the conditional indirect estimates were always nonzero across the NRF and flood conditions. For the COVID-19 condition, however, during wave one these estimates approximated zero across all four outcomes (regardless of covariate). Interestingly, and unexpectedly, there was variance in the indirect estimates over time for the NRF and flooding topics. Specifically, for the NRF topic (particularly when the covariate was included), the strength of the indirect effect was weakest in wave three relative to the other two waves. For the flooding topic, the indirect effects were consistently weaker in wave two. Thus, although we expected the NRF and flooding topics to be stable, there was variability between waves in ways that cannot be conveniently explained by processing motivation.
Beyond the unexpected variance in the NRF and flooding conditions, there was also variance in the COVID-19 condition over time. As observed from the post-hoc analysis, the interpretation of this variance largely depends on whether the attention check covariate was included. If this variable was included, then across all 12 indirect effect estimates in the COVID-19 condition, only two were nonzero, and both were obtained during wave two. Given that wave two was associated with a combination of plateaued COVID-19 rates in the United States and familiarity with the virus, this pattern of results supports the urgency-based explanation for motivation. This explanation proffered that when the threat or urgency of the topic was relatively low, the effect of jargon on outcomes should strengthen. Indeed, this is what we observed. Moreover, even though there were only two nonzero effects, this pattern could still be suggestive of the urgency-based explanation. Namely, because the threat of COVID-19 never truly dissipated, this pattern of results suggests that relatively high levels of motivation were sustained throughout the duration of this study, which is why most of the estimates never exceeded zero. This pattern is the opposite of what we would expect if COVID-19 fatigue were occurring and demonstrates the resiliency of this issue in participants' minds.
Although the models that included the attention check covariate yielded support for the urgency-based explanation, the exclusion of this covariate revealed a different pattern. Because it is not our intention to speculate which set of results is "most valid," and because we did not want to exclude such a large proportion of our sample (∼25%), we opted to present both sets of results for transparency. When the attention check covariate was excluded from the models germane to H3, it was revealed that of the 12 estimates in the COVID-19 condition, six were nonzero, with four of the six coming in wave three, and two during wave two. Thus, the results from the post-hoc analysis suggest that, over time, the strength of the language manipulation increased. If jargon use is a peripheral cue, this pattern further suggests that motivation to process decreased over time. Furthermore, it is interesting that the influence of jargon strengthened over time in the COVID-19 condition even though the jargon terms used in the study (e.g., social distancing) likely became increasingly familiar. Thus, even though the language manipulation (in the COVID-19 condition alone) likely became less potent over time, the effects strengthened. This provides compelling support for the fatigue-based explanation.
Despite support for these hypotheses, there are important limitations to acknowledge. The first concerns our decision to use TurkPrime workers, given known issues with data quality (Chmielewski & Kucker, 2020) and representativeness. Although we understood these limitations going in, the need to collect data quickly and on a budget informed this decision. Moreover, given that this was a survey experiment, our goal was primarily internal validity, so the lack of representativeness or generalizability was not a pressing concern. A related sample issue was the overrepresentation of males. Sex quotas cost an additional fee, and we could not afford to add this component while paying participants the same amount across all waves. Fortunately, however, this imbalance was consistent across all three waves of the study, rendering comparisons between waves possible. Another limitation was the attention check. The fact that so many people missed the attention check, coupled with the systematic differences in results produced by this response, was concerning. That said, it is possible that participants' inability to pass this check was symptomatic of the crisis environment itself. For instance, some participants might have been relatively new to the platform or operating under diminished resources due to the mental, physical, and/or financial stress brought on by the pandemic. Although this is speculation, it is possible that in other crisis situations participants would be similarly unable to pass checks of this sort. If this were the case, then the data that exclude the covariate would be more 'valid' in times of crisis. Thus, given our uncertainty about survey practices and survey integrity in these unprecedented times, we opted to present both sets of analyses.
That said, there were no substantive differences in attention check passing rates between waves, and the presence or absence of this covariate did not influence results related to hypothesis one or two.
Finally, related to both study limitations and conclusions, one of the features (and bugs) of this research was that it occurred during a, hopefully, once-in-a-lifetime global pandemic. Therefore, justifiable questions remain as to whether these findings could or should replicate or generalize to "normal times." That said, this backdrop also provided a rare opportunity to test theoretical processes within an inherently interesting and critically important social context. It is hoped that the insights gained here regarding language, motivation, and time serve to inform public communication efforts in general and crisis communication in particular. Namely, in the midst of a crisis, when motivation to process information is high, the most precise information possible (regardless of its complexity) should be dispensed. Absent these preconditions, however, public communicators would be well advised to follow convention and keep it simple.

Notes

3. … contributed "invalid" data within this particular social context, we decided to err on the side of caution by presenting results both ways.
4. There was another wave of data collected in March of 2020. These results are not presented here because they have already been published (see ). The pattern of results from this previous wave was identical to the results from the first wave of data reported here. For those interested, the OSF page affiliated with our previous project is linked to our current OSF page.

Appendix: Experimental Stimuli

Topic introduction (flooding)

The United States is preparing for a new season of natural disasters caused by melting snow and heavy rain. These events are common across the world and in every state in the United States. These natural hazards are called "hydraulic exchange flows" and the disasters they cause are called "floods." On the next page you will receive more detailed information about this type of disaster along with guidelines for how to protect yourself from this public health emergency. (word count = 78)

Topic introduction (NRF)

The United States is always preparing for natural disasters like hurricanes, tornadoes, earthquakes, tsunamis, volcanoes, and floods. These events are common across the world and in every state in the United States. The U.S. policy that determines response to these events is called the "National Response Framework," or the "NRF." On the next page you will receive more detailed information about these types of policies along with guidelines for how to protect yourself from a public health emergency. (word count = 78)

Jargon condition (Note: jargon words are underlined here only for emphasis; they were not underlined in the experimental stimuli.)

COVID-19: SARS-CoV-2, a novel coronavirus from the family Coronaviridae, is a zoonotic pathogen and the causative agent of COVID-19. COVID-19 is an infectious disease that causes inflammation of the respiratory tract and bronchial tubes. COVID-19 is transmitted through respiratory aerosols or secretions on contaminated surfaces and has an incubation period of 2 to 14 days. During this time individuals may be asymptomatic. According to medical virologists, COVID-19 has the highest lethality for people who are …

Flooding: Hydraulic exchange flows occur when the topography of an area that is usually above water becomes saturated and submerged. These events are caused by excessive precipitation, tidal surges, or ice jams that lead to downstream overbank flow. Hydraulic exchange flow is dangerous for densely populated areas, because the land is impervious and water cannot dissipate. In this …

NRF: The NRF determines how the United States responds to major adverse events. The NRF and its support annexes provide guidance about hazards and define roles for inter-agency and inter-organization collaboration. NRF focuses on maintaining community lifeline stabilization and resilient capabilities for hazard identification and risk assessment and is used by critical infrastructure sector leadership to help follow the United States' national security strategy.
To stay protected, experts recommend taking the following precautions.
• Individuals should maintain good hygiene and keep a supply of ethanol-based sanitizers and decontaminants that are certified for emerging viral pathogens claims and can be used on high-touch surfaces.
• Individuals should be prepared to shelter in place, engage in social distancing or quarantine, and be able to telecommute.
• Keep nonperishable supplies, including items with electrolytes like UHT milk.
• Visit an automated teller machine or financial institution and gather vital documents, like immunization records and other EHRs.
• Pay attention to advice from FEMA and follow their guidelines about community NPIs.