Abstract
Although accusations of editorial slant are ubiquitous in the contemporary media environment, recent advances in journalism, such as news-writing algorithms, may hold the potential to reduce readers’ perceptions of media bias. Informed by the Modality-Agency-Interactivity-Navigability (MAIN) model and the principle of similarity attraction, an online experiment (n = 612) was conducted to test whether news attributed to an automated author is perceived as less biased and more credible than news attributed to a human author. Results reveal that perceptions of bias are attenuated when news is attributed to a journalist and an algorithm in tandem, with positive downstream consequences for perceived news credibility.
The contemporary media consumer has unprecedented access to news, but only limited time and ability to evaluate the information that they find (e.g., Wise, Bolls, & Shaefer, 2008). To aid in the news evaluation process, readers often prefer attitude-consistent information (e.g., Knobloch-Westerwick & Meng, 2009) or rely upon mental rules of thumb known as heuristics (e.g., Chen & Chaiken, 1999) to make judgments about the credibility of news (e.g., Metzger, Flanagin, & Medders, 2010). In terms of the latter, one well-documented phenomenon is the tendency for partisans to view media as biased against their beliefs (e.g., Giner-Sorolla & Chaiken, 1994). This “hostile media bias” (e.g., Vallone, Ross, & Lepper, 1985) has significant theoretical and practical implications for the evaluation of news, as the perception that media are biased can affect variables critical to positive democratic outcomes, such as message credibility (e.g., Eveland & Shah, 2003).
Although journalists are often perceived as biased by partisans, do the same effects apply to news purportedly produced by nonhuman actors? Data-driven algorithms from companies such as Automated Insights publish thousands of articles a day on a variety of topics for wire services and news outlets (e.g., Anderson, 2013; Coddington, 2015; Cohen, Hamilton, & Turner, 2011). The growing presence of automated content has the potential to shape the credibility of the news that is produced. For example, the Modality-Agency-Interactivity-Navigability (MAIN) model theorizes that audiences are less likely to perceive news attributed to machines as biased due to the perception that if a machine is the source of a story, “then it must be objective in its selection and free from bias” (Sundar, 2008, p. 83). Support for this prediction has been equivocal in past work. Some studies show evidence that consumers prefer news purportedly selected (Sundar & Nass, 2001) or written via automation (Graefe, Haim, Haarmann, & Brosius, 2016) along certain formative predictors of credibility, whereas other work suggests that audiences may not discern between the two sources (Clerwall, 2014; Edwards, Edwards, Spence, & Shelton, 2014).
One way to build upon the empirical foundation offered by past work is to identify the theoretical mechanisms through which news automation either bolsters or hinders perceived message credibility. To that end, an online experiment (n = 612) was conducted that tested the effect of source attribution (human vs. algorithm vs. human and algorithm) on message credibility via theoretical mechanisms identified by the MAIN model and the principle of similarity attraction. The following sections review relevant research on media bias, information processing, and machine attribution, followed by an overview of methods, a report of results, and a discussion of theoretical implications.
Message Credibility and Automated Sources
A long tradition of research has examined the factors that contribute to evaluations of message credibility (e.g., Carter & Greenberg, 1965; Hovland & Weiss, 1951; McCroskey, 1966; Metzger, Flanagin, Eyal, Lemus, & McCann, 2003). Recent explications of the concept offer the definition that message credibility “is an individual’s judgement of the veracity of the content of communication” (Appleman & Sundar, 2015, p. 5). Although a variety of factors can influence perceptions of message credibility, information processing theories such as the heuristic systematic model (e.g., Chen & Chaiken, 1999) predict that credibility judgments are formed through a combination of systematic, message-based processing, which involves deep, effortful evaluation of a message, and heuristic, cue-based processing, which renders judgments based on mental rules of thumb that aid processing of information with relatively little effort (e.g., Todorov, Chaiken, & Henderson, 2002). Humans tend to be cognitive misers (e.g., Moskowitz, Skurnik, & Galinsky, 1999), relying more frequently on heuristic-based processing to make judgments, particularly when the ability or motivation to process information is relatively low (e.g., Chaiken, Liberman, & Eagly, 1989). Such situations are quite common in the online news environment, which limits a receiver’s ability to process news in a careful or deliberative manner. When faced with such conditions of cognitive load, the heuristic systematic model assumes that news readers are predisposed to rely upon heuristics to judge the credibility of information.
In addition to mental rules of thumb derived from the formal features of a message like length or source attractiveness (Chen & Chaiken, 1999), the affordances of digital media that operate on the periphery of media can also activate heuristics that influence readers’ perceptions of message credibility (e.g., Metzger et al., 2010; Sundar, Jia, Waddell, & Huang, 2015). For example, the MAIN model theorizes that digital media are accompanied by a variety of features that shape how media are subsequently evaluated (Sundar, 2008). One mental shortcut identified by the MAIN model that is particularly germane to the current investigation is the machine heuristic. Specifically, the MAIN model predicts that news attributed to automation can activate the heuristic that if a machine selected this information, then it must be free of bias. This prediction was originally derived from the work of Sundar and Nass (2001), who observed that news purportedly selected by a computer was evaluated more favorably than news purportedly selected by a journalist. In practice, algorithms do hold the potential to be biased, due to the nature of the data that inform automation or the choices made by human programmers in the design of news automation software (e.g., Graefe, 2017). Nonetheless, audience familiarity with computational journalism remains relatively low, and audiences do not typically orient toward the programmer when using new forms of technology (e.g., Sundar & Nass, 2001). As a result, lay perceptions of algorithms as a relatively impartial source are likely to be observed among the average news reader, in line with the tenets of the MAIN model (Sundar, 2008).
Although some theory suggests that audiences should respond favorably to machine automated content, a review of relevant past work shows that evidence for the persuasive appeal of news automation has been mixed. For example, several studies have found that news purportedly selected by a Twitter bot (Edwards et al., 2014) or produced via software (Clerwall, 2014) is perceived as relatively similar to news attributed to humans. Positive effects of machine attribution have also been observed, however: one study (Graefe et al., 2016) found that news purportedly written by a machine was perceived as higher in journalistic expertise than news assumed to be written by a human author. More broadly, studies of how machine automation affects the processing of news are part of an ongoing discussion regarding the practical and ethical dilemmas that have accompanied the proliferation of software generated news (e.g., Carlson, 2016; Dörr, 2015; Lokot & Diakopoulos, 2015; Young & Hermida, 2014).
If some studies have found positive effects of machine attribution but others have not, what factors might account for the equivocality of previous findings? One possibility is that automation cues affect readers’ perceptions of message credibility through dual, opposing explanatory mechanisms. Identifying these theoretical pathways is of practical and conceptual interest, as an understanding of the factors that mediate a prospective psychological effect can be leveraged to foster positive deliberative outcomes while minimizing those factors that lead to negative outcomes. In terms of positive effects, the present study employs the MAIN model (Sundar, 2008) as an explanatory framework to hypothesize that news attributed to a machine will be perceived as more credible than news attributed to human journalists through the indirect pathway of perceived bias, an outcome that would reflect activation of the machine heuristic (e.g., Bellur & Sundar, 2014).
As for negative effects, one reason why audiences may prefer human rather than machine sources is the principle of similarity attraction (e.g., Byrne, 1997). A large body of work from social psychology (e.g., Montoya & Horton, 2013) has revealed that individuals tend to prefer others who are similar to the self along factors such as appearance, opinion, or personality (e.g., Simons, Berkowitz, & Moyer, 1970). Principles of human–human communication such as similarity attraction are often generalizable to nonhuman actors (e.g., K. M. Lee, Peng, Jin, & Yan, 2006; Nass & Moon, 2000; Reeves & Nass, 1996). Similarity attraction may produce negative effects of machine automation because even subtle cues, such as a humanlike appearance, can trigger the effect (Nowak & Biocca, 2003; Nowak, Hamilton, & Hammond, 2009; Nowak & Rauh, 2008). Applied to the current study, the preference for human-produced news may thus be explained in part by variations in perceptions of source anthropomorphism, which would be expected to be greater for human sources than for automated sources. If this is the case, then news attributed to a human should be perceived as more credible than news attributed to a machine via the indirect pathway of source anthropomorphism, per the tenets of the similarity attraction effect observed in past work (e.g., Simons et al., 1970).
In sum, it is theorized that machine-authored news (relative to human-authored news) will be perceived as less biased, which will subsequently be related to more favorable perceptions of news credibility. Second, it is theorized that human authors (relative to machine authors) will be perceived as more anthropomorphic, which is also expected to lead to more favorable perceptions of message credibility. However, as noted above, positive and negative effects that co-occur would be expected to produce null total effects, given their competing influence on credibility. To that end, the present study tests not only the effects of machine and human cues in isolation but also the influence of “tandem authorship”: news attributed to both human journalists and news algorithms. Specifically, if humans and machines bolster message credibility through competing psychological mechanisms, then news attributed in tandem to a journalist and machine should be evaluated more positively than news attributed to a sole author, given that tandem authorship would be expected to benefit from both heightened anthropomorphism (due to the presence of a human author) and lower perceived bias (due to the presence of a machine author). Put another way, it is theorized that indirect effects will be greatest when human and machine authors operate in tandem, given their dual positive influence on bias and anthropomorphism. More formally, the following hypotheses are proposed:
H1: Machine attribution (relative to human attribution) will decrease perceptions of media bias.
H2: Machine attribution (relative to human attribution) will decrease perceptions of source anthropomorphism.
H3: Machine attribution (relative to human attribution) will have a positive indirect effect on message credibility via perceived bias.
H4: Machine attribution (relative to human attribution) will have a negative indirect effect on message credibility via source anthropomorphism.
H5: Attribution to human and machine authors in tandem will have a more favorable impact on message credibility than human authors in isolation via the indirect pathway of bias and anthropomorphism.
Method
An online experiment was conducted to test a 3 (author attribution: journalist vs. algorithm vs. tandem authorship) × 2 (news outlet: MSNBC vs. Fox News) × 2 (story topic: Khan Conflict vs. Paris Accord) between-subjects design. Participants (n = 612) were asked to read a news article that was purportedly written by either a journalist, an algorithm, or a journalist and algorithm in tandem. Following the news article, participants were asked to complete a questionnaire that measured perceptions of article credibility, perceived bias, and source anthropomorphism.
Participants
An a priori power analysis determined that a total sample of at least 600 participants would be needed to achieve 80% power, assuming α = .05 and a minimum effect size of interest of f = .15. As a result, 612 participants from the United States were recruited to participate in the study via the online crowdsourcing website Amazon Mechanical Turk (MTurk). MTurk is an increasingly popular platform for online data collection, with a population of workers that is typically more diverse in age and racial distribution than American college samples (e.g., Buhrmester, Kwang, & Gosling, 2011). Numerical and qualitative attention checks were used throughout the survey to exclude careless responding (e.g., selecting answers without reading the stem of the question). Participants were paid 75 cents for their participation in the study, which required approximately 4 min for most subjects to complete. The study was approved by the institutional review board at the primary investigator’s university.
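A power computation of this kind can be sketched in Python with statsmodels; the article does not report which software or which omnibus test was assumed, so the three-group one-way comparison below is an illustrative assumption, not a reproduction of the authors' analysis:

```python
from statsmodels.stats.power import FTestAnovaPower

# Cohen's f = .15, alpha = .05, target power = .80; k_groups = 3 reflects
# the three-level attribution factor (an assumption for illustration).
n_required = FTestAnovaPower().solve_power(
    effect_size=0.15, alpha=0.05, power=0.80, k_groups=3
)
print(f"minimum total N for a three-group omnibus test: {n_required:.0f}")
```

Note that the required N grows with the number of cells and degrees of freedom assumed, so a threshold of 600 is consistent with powering a more complex design than the single factor sketched here.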
The average age of participants was 37.83 years (SD = 11.96), and the sex distribution was 53% male (n = 329). When asked to self-report their race, 81% reported “White/Caucasian” (n = 494), 8% reported “Black/African American” (n = 47), 6% reported “Asian/Asian-American” (n = 39), 4% reported “Hispanic/Latino/Latina” (n = 24), and 1% reported “Bi-racial” or “Other” (n = 8). Participants were also asked to report their political ideology (1 = strong liberal, 5 = strong conservative) and party affiliation (1 = strongly Democratic, 5 = strongly Republican) for the purpose of compiling descriptive statistics about the sample. On average, participants leaned liberal (M = 2.78, SD = 1.26) and Democratic (M = 2.71, SD = 1.22). To better visualize the distribution of subjects according to political party, an additional scale was created that trichotomized the original continuous variables: subjects scoring at the midpoint of the scale were reclassified as “Independent” (26%, n = 161), those scoring below 3 were assigned to “Democrat” (46%, n = 270), and those scoring above 3 were assigned to “Republican” (28%, n = 172).
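The trichotomization described above amounts to a simple recoding rule; a minimal sketch (the function name is illustrative):

```python
def party_group(score: float) -> str:
    """Collapse a 1-5 party-affiliation score into three categories.

    Category labels follow the article; the recoding rule is: below the
    midpoint -> Democrat, above -> Republican, at the midpoint -> Independent.
    """
    if score < 3:
        return "Democrat"
    if score > 3:
        return "Republican"
    return "Independent"

print([party_group(s) for s in (1, 2, 3, 4, 5)])
# -> ['Democrat', 'Democrat', 'Independent', 'Republican', 'Republican']
```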
Procedure
Participants were recruited for a study entitled “Online News.” After reading a consent form and agreeing to complete the study, participants completed a preexposure questionnaire measuring their demographics and political affiliation. Afterward, participants were randomly assigned to read a news article that varied the declared author, news outlet, and topic of the story. Participants were given minimal instruction throughout the experiment to avoid sensitizing them to the purpose of the study. Finally, participants concluded the study by completing a postexposure questionnaire that measured the outcome variables of interest (see online appendix file for the preexposure and postexposure questionnaires).
Stimuli
Although nascent forms of computational journalism are primarily limited to topics such as sports, weather, and finance, journalists also employ algorithms to complement their own work, a collaboration some scholars have termed the “man–machine marriage” (e.g., Graefe, 2017). This collaboration between journalists and algorithms expands the conceivable scope of news that may involve some form of automation. To that end, participants were asked to read one of two short news articles based on current events that were topical at the time the study was conducted (Trump dispute with Khan; Trump rejection of Paris Climate Accord; see online appendix). Multiple news articles were employed so that any effects observed would not be idiosyncratic to the specific context of the study, per the tenets of the stimulus sampling approach (O’Keefe, 2015). Both stories were of comparable length (approximately 120 to 150 words) and were pretested with an independent sample of participants from MTurk (see “Pretest” sample in the appendix section) to ensure that the two stories were similar in terms of perceived credibility, t(159) = .79, p = .43, and perceived bias, t(159) = .10, p = .92.
Given the focus of the present work on media bias, the news article assigned to participants was attributed to either Fox News or MSNBC, two of the most prominent news outlets widely recognized for the ideological slant of their media coverage (Iyengar & Hahn, 2009). These media outlets were chosen to increase the salience of media bias among participants in the study. As expected, pretesting revealed that liberals perceived Fox News as more biased than MSNBC whereas conservatives perceived MSNBC as more biased than Fox News, F(1, 157) = 22.04, p < .0001. In short, participants were exposed to stimuli that were likely to heighten perceived bias. More broadly, the use of multiple news outlets allows the effects studied to be generalized across multiple news sources, which was intended to bolster the external validity of the study’s findings.
Independent Variables
Source attribution
The present study focused on differences elicited by varying the declared source of the article while holding the writing style of the article constant across conditions. Specifically, participants were randomly assigned to read a news article attributed to one of three possible sources: (a) Kelly Richards, reporter; (b) Automated Insights; or (c) the two sources in tandem (Kelly Richards, reporter, and Automated Insights). The source attribution manipulation appeared in the byline of the news article at the top of the page. Automated Insights was selected as the source for the algorithm condition because it is among the most popular automated news services currently employed by a variety of media outlets. Although the byline varied between conditions, the content of the article itself was identical, to avoid confounding the declared author of the article with effects elicited by writing styles that might vary between sources.
Past studies suggest that consumers often do not recall the source of news stories that they read (e.g., Amazeen & Muddiman, 2018). Given the critical role played by source attribution in the present work, it was necessary to ensure that subjects were able to recall the listed author after exposure to the news article in an experimental, short-term context. To that end, an additional sample of participants (n = 165; see “Independent Sample 1” in the appendix section) was recruited from MTurk and asked to read a news article purportedly written by either Automated Insights (the same machine author used in the main study), a human author (“Kelly Richards”; the same human author used in the main study), or Quill (a common software program used for automated reporting). Afterward, participants were asked in the postexposure questionnaire whether they could recall the name of the author listed in the byline, with the options “Automated Insights,” “Quill,” “Kelly Richards,” or “I don’t remember.” Results supported the efficacy of the manipulation, χ2(6) = 283.04, p < .0001: 92% of participants correctly identified the source in the Automated Insights condition, 92% in the human condition, and 94% in the Quill condition. Given that this sample was drawn from the same population of workers as the main study (and was relatively similar in demographic traits; see the appendix section), this pattern of results, along with the findings of the main study, supports the assumption that respondents were aware of the attribution manipulation.
News outlet
Participants were randomly assigned to read a news article attributed to either MSNBC or Fox News, as described above. A chi-square test with news outlet type as the independent variable and news outlet recall as the dependent variable revealed the efficacy of the manipulation, χ2(4) = 712.82, p < .0001. Participants in the Fox News condition correctly identified Fox News as the source 95.38% of the time (n = 289), whereas participants in the MSNBC condition correctly identified MSNBC as the source 92.23% of the time (n = 285).
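A manipulation check of this kind can be computed as a chi-square test of independence between the assigned and the recalled outlet. The sketch below uses scipy with illustrative cell counts (the article reports only the diagonal frequencies, so the off-diagonal values here are placeholders, not the study's data):

```python
from scipy.stats import chi2_contingency

# Rows: assigned outlet; columns: recalled outlet (Fox, MSNBC, other/don't remember).
# Diagonal counts follow the article; off-diagonal counts are illustrative.
observed = [
    [289,   5,   9],   # assigned to the Fox News condition
    [  6, 285,  18],   # assigned to the MSNBC condition
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```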
Story topic
The topic of the news story was varied, such that participants were randomly assigned to read a story about a political dispute or international policy, as described above.
Mediating Variables
Source anthropomorphism
Four seven-point semantic differential items were adapted from prior research (Bartneck, Kulić, Croft, & Zoghbi, 2007) to measure perceived source anthropomorphism. Sample items included the extent to which the listed author for the article was perceived as “fake/natural,” “artificial/life-like,” and “unconscious/conscious.” The four items were averaged to form an index, which was reliable (M = 4.99, SD = 1.58, Cronbach’s α = .95).
Article bias
Four Likert-type items (1 = strongly disagree, 7 = strongly agree) adapted from prior research (Eveland & Shah, 2003; Houston, Hansen, & Nisbett, 2011; E.-J. Lee, 2012) were used to measure perceptions of media bias. Items included the extent to which the article from the study was perceived as “biased,” “slanted,” “distorted,” and “skewed.” An index was formed by taking the average of the four items, which was reliable (M = 3.26, SD = 1.69, Cronbach’s α = .96).
Dependent Variable
Article credibility
Three Likert-type items (1 = strongly disagree, 7 = strongly agree) adapted from prior research (Appleman & Sundar, 2016) were used to measure reflective indicators of article credibility. Items included the extent to which the article was perceived as “accurate,” “authentic,” and “believable.” An index was formed by taking the average of the three items, which was reliable (M = 5.11, SD = 1.50, Cronbach’s α = .92).
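The index construction and reliability estimates reported for this and the preceding scales follow a standard recipe: average the items per respondent and summarize internal consistency with Cronbach's alpha. A minimal sketch with illustrative data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative data: three perfectly consistent items yield alpha = 1.0.
responses = np.array([[7, 7, 7], [5, 5, 5], [3, 3, 3], [6, 6, 6]], dtype=float)
index = responses.mean(axis=1)     # per-participant credibility index
alpha = cronbach_alpha(responses)
print(f"index M = {index.mean():.2f}, alpha = {alpha:.2f}")
```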
Other Measured Variables
Demographics
Participants were asked to report their sex, race, and age for the purpose of compiling descriptive statistics about the sample.
Results
Main Analyses
H1 predicted that machine attribution (relative to human attribution) would decrease perceived media bias. To test this hypothesis, a one-way ANOVA was conducted with author attribution (human vs. algorithm vs. tandem) as the independent variable and perceived media bias as the dependent variable. Results revealed that the effect of author attribution on perceived media bias was statistically significant, F(2, 609) = 5.63, p = .004, such that both tandem authors (M = 3.03, SE = .12) and machine authors (M = 3.18, SE = .12) were perceived as less biased than human authors (M = 3.57, SE = .12). Given this pattern of results, H1 was supported.
H2 predicted that machine attribution (relative to human attribution) would decrease perceived source anthropomorphism. To test this hypothesis, a one-way ANOVA was conducted with author attribution (human vs. algorithm vs. tandem) as the independent variable and perceived source anthropomorphism as the dependent variable. Results revealed that the effect of author attribution on perceived source anthropomorphism was statistically significant, F(2, 609) = 30.42, p < .0001, such that human authors (M = 5.46, SE = .10) were perceived as more anthropomorphic than machine authors (M = 4.33, SE = .11). Along similar lines, tandem authors (M = 5.15, SE = .11) were also perceived as more anthropomorphic than machine authors. Given this pattern of results, H2 was supported.
H3 predicted that machine attribution (relative to human attribution) would have a positive indirect effect on article credibility via perceived bias, whereas H4 predicted that machine attribution (relative to human attribution) would have a negative indirect effect on article credibility via source anthropomorphism. To test these hypotheses, a series of parallel mediation models were run using Model 4 of the PROCESS macro (Hayes, 2013) with 5,000 bootstrapped samples and 95% bias-adjusted confidence intervals (CIs). Given that the key independent variable was nominal with three categories, dummy coding was used to statistically control for one level of the independent variable while the other two levels were compared, per the categorical approach to mediation analysis described by Hayes and Preacher (2014).
To test H3 and H4, the indirect effect of machine attribution (relative to human attribution) was estimated while dummy coding and statistically controlling for participants in the tandem author condition. As shown in Figure 1, machine attribution decreased perceptions of bias, which were subsequently related to message credibility. Machine attribution also decreased perceptions of source anthropomorphism, which were likewise related to message credibility. Both indirect effects were significant, as the 95% bias-adjusted CI excluded zero for both bias, a1b1 = .09, 95% CI = [.01, .17], and source anthropomorphism, a2b2 = –.13, 95% CI = [–.19, –.08]. Given these findings, H3 and H4 were supported.
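The bootstrapped indirect effects reported above can be illustrated outside of PROCESS. The sketch below re-implements the a × b estimate with a percentile bootstrap on simulated data; all values, coefficients, and variable names are illustrative, not the study's data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.integers(0, 2, n).astype(float)      # treatment dummy (e.g., 1 = machine author)
m = 0.8 * x + rng.normal(size=n)             # mediator (e.g., reverse-coded bias)
y = 0.8 * m + 0.1 * x + rng.normal(size=n)   # outcome (e.g., credibility)

def indirect_effect(x, m, y):
    """a path (m on x) times b path (y on m, controlling for x), via OLS."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
```

With a true indirect effect built into the simulation, the percentile CI excludes zero, mirroring the inferential logic used for H3 and H4.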

Figure 1. Comparison of effect of tandem attribution (relative to control) and machine attribution (relative to control) on perceived bias, source anthropomorphism, and article credibility.
Note. Two models are displayed in figure; estimates for machine appear on top line followed by estimates for tandem in parentheses. In both cases, contrasts are made between treatment (coded as 1) and human condition (coded as 0).
*p < .05 **p < .01 ***p < .001
H5 hypothesized that news attributed to tandem authors would be perceived as more credible than news attributed to a journalist in isolation via the indirect pathways of bias and source anthropomorphism. To test this hypothesis, the tandem condition was compared against the human author condition while dummy coding and controlling for the machine-author-in-isolation condition. As shown in Figure 1, news attributed to tandem authors (relative to human authors) was perceived as less biased, leading to a positive indirect effect on news credibility, a1b1 = .26, 95% CI = [.10, .44]. Negative indirect effects were also observed, such that tandem authors were perceived as less anthropomorphic than human authors, leading to a negative indirect effect on credibility, a2b2 = –.07, 95% CI = [–.14, –.01]. In short, tandem authors were perceived as more credible than human authors via the indirect pathway of bias, but less credible than human authors via source anthropomorphism. As a result, H5 was only partially supported.
Finally, while study hypotheses offered predictions regarding the indirect influence of machine attribution, supplemental ANOVAs were also conducted to probe the total effect of source attribution on the mediators and dependent variables of interest. Results generally mirrored the findings in the main analyses, as shown in Table 1.
Table 1. Univariate ANOVA, Effect of Source Attribution on Perceived Bias, Perceived Anthropomorphism, and Perceived Message Credibility.

Supplemental Analyses: Moderation by Topic Type or News Outlet
Given that two possible news stories served as the context of the study, supplemental analyses were conducted to ensure that the effects observed were invariant between the two articles that were sampled. Model 7 of the PROCESS macro was run, which revealed no evidence of moderation by topic type for either of the two tested models, as the confidence interval for the index of moderated mediation included zero for all effects. Put simply, additional analyses revealed no evidence that study effects were moderated by the story context in which the effects of source attribution were investigated.
The news outlet for the article (Fox News vs. MSNBC) also varied between conditions. To test whether effects were invariant between these two outlets, Model 7 of the PROCESS macro was again employed with 5,000 bootstrapped samples and 95% bias-adjusted CIs. Again, no evidence of moderation by outlet was observed, given that the confidence intervals for the indices of moderated mediation for bias and anthropomorphism included zero across all models.
Supplemental Analyses: Addressing Possible Confounds
Effects observed in the main study could be attributed to one of two possible explanations: (a) an effect elicited by machine and human authorship in tandem or, more generally, (b) an effect elicited by any form of tandem authorship. To address this possibility, an independent sample of respondents (see “Independent Sample 2” in the appendix section) was asked to read one of three possible news articles: two purportedly written by a single human source and a third purportedly written by the two human sources together, so that the possibility of multiple-author effects could be tested. Results revealed that tandem human authorship (relative to sole human authorship) had no statistically significant effect on either perceived news credibility, F(2, 450) = 1.68, p = .19, or perceived news bias, F(2, 450) = 1.92, p = .15. Given these findings, it appears that our results are more likely attributable to the combination of human and machine sources than to a multiple-source effect in general.
Supplemental Analyses: Attribution to Company Versus Attribution to Algorithm
When audiences encounter news attributed to a source such as “Automated Insights,” it is possible that readers may orient toward the company itself rather than the underlying algorithm provided by the company. If this is the case, then perceptions of the source’s machine-like quality should vary based on whether the source is described by a company title or an algorithm title. To probe this possibility, an independent sample of participants (see “Independent Sample 1” in the appendix section) was exposed to an article purportedly written by Automated Insights, Quill, or a human author. Afterward, participants were asked to evaluate the extent to which the author was perceived as “machine-like” on a 7-point semantic differential scale (1 = human-like, 7 = machine-like). A one-way ANOVA with author type as the independent variable and perceived machine-ness as the dependent variable was significant, F(2, 162) = 17.10, p < .0001. Post hoc comparisons revealed that human authors (M = 2.40, SE = .26) were perceived as less machine-like than either Automated Insights (M = 4.13, SE = .26; p < .0001) or Quill (M = 4.24, SE = .22; p < .0001), whereas the difference in perceived machine-ness between Quill and Automated Insights was nonsignificant (p > .05). Given this finding, both Quill and Automated Insights were perceived as more machine-like than a human author, an outcome that would be unlikely if readers had psychologically oriented toward Automated Insights as a corporation rather than as an algorithm.
Discussion
Although advances in natural language generation have broadened the role played by automation in the creation of news, guidelines for how audiences should be informed that algorithms contributed to the production of news are still in nascent stages of development. The psychological effects of perceived machine authorship have also been equivocal, as past work has found that declared and/or actual machine authorship can trigger both positive and negative effects on news-relevant variables. One gap in past work is how audiences respond to news purportedly written through collaboration between machine and human sources, an area that is critical to understand given that automated news still retains varying degrees of human intervention in both programming and production. Furthermore, studying the effects of attribution to human and machine authors in tandem might help to reconcile the contradictory effects observed in past work if machine and human sources benefit from competing psychological pathways.
The results of the present study revealed two theoretical mechanisms through which machine attribution affects news credibility. First, machine attribution increased perceptions of news credibility through the indirect pathway of reduced perceived bias. This outcome contributes to theory by providing support for the MAIN model (Sundar, 2008), which theorizes that automation cues affect news credibility through activation of the machine heuristic and the resultant perception that news selected via automation is unbiased. Notably, the persuasive effects of the machine heuristic were observed both in isolation and in tandem, which suggests that interface features that indicate automation are not undermined by the co-occurrence of human sources. More broadly, this finding offers tentative optimism that automation services may help to reduce reactance to news perceived as biased, an outcome which scholars recognize as critical for reducing selective exposure to ideologically consistent information.
Given that some past studies have found null or negative effects of news automation (Clerwall, 2014; Edwards et al., 2014; Graefe et al., 2016), the current work also offered the hypothesis that news attributed to a human would be perceived as more credible than news attributed to a machine due to differences in source anthropomorphism. This hypothesis was also supported, which is consistent with past work suggesting that individuals tend to apply the principle of similarity attraction to their interactions with automated agents (e.g., Nowak et al., 2009). If machine sources are perceived as less biased than humans but also as less anthropomorphic, the presence of two coefficients with opposite signs would produce a null total effect, which may account for the inconsistent outcomes observed in past work (Clerwall, 2014; Edwards et al., 2014; Graefe et al., 2016; Sundar & Nass, 2001). In support of this interpretation, the findings in Figure 1 show two indirect pathways with opposite signs, which would obscure prospective total effects of machine automation if indirect effects were not considered. Such a finding highlights the critical importance of studying not just the total effect elicited by attribution manipulations, but also the psychological mechanisms through which such manipulations affect downstream news-related variables.
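The cancellation logic described above can be illustrated numerically. All path coefficients below are hypothetical and chosen only for illustration; they are not estimates from the present study or from Figure 1:

```python
# Toy illustration: two indirect pathways with opposite signs can sum to a
# near-zero total effect, masking both mechanisms if only the total effect
# is examined. Coefficients are hypothetical.

# Pathway 1: machine attribution -> lower perceived bias -> higher credibility
a1, b1 = -0.40, -0.50              # attribution->bias, bias->credibility
indirect_via_bias = a1 * b1        # positive product: a credibility gain

# Pathway 2: machine attribution -> lower anthropomorphism -> lower credibility
a2, b2 = -0.50, 0.40               # attribution->anthro, anthro->credibility
indirect_via_anthro = a2 * b2      # negative product: a credibility loss

direct_effect = 0.0                # assume no direct effect, for simplicity
total_effect = direct_effect + indirect_via_bias + indirect_via_anthro
print(indirect_via_bias, indirect_via_anthro, total_effect)
```

With these made-up numbers, the two indirect effects (+0.20 and -0.20) cancel exactly, so a researcher inspecting only the total effect would conclude that machine attribution "does nothing," even though two substantively important mechanisms are operating.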
In addition to hypotheses regarding the impact of sole automated authorship, the present study advanced the hypothesis that news perceived to be written by machine and human sources in tandem would elicit more favorable credibility outcomes than a human or machine source in isolation. Consistent with this hypothesis, news attributed to tandem authors was perceived as less biased than news attributed to human authors in isolation, which was subsequently associated with higher message credibility. Furthermore, supplemental tests with an independent sample collected after the main study revealed that the tandem author effect was not confounded with a preference for multiple authors in general: attributing a news article to two human authors did not reduce perceived bias in the manner elicited by the combination of human and machine authors in the main study. Not all effects of tandem authorship were positive, however, as tandem authors were still perceived as less anthropomorphic than solo human authors, which elicited negative effects on message credibility.
One critical issue to highlight is that the present study’s effects presuppose that readers attended to purported authorship, yet extant literature on audience consumption of digital media suggests that many readers are not able to retroactively recall the source of information they consume. For example, the Media Insights Project (2016) recently found that only two in 10 subjects were able to accurately recall the source of a news article following exposure, and Amazeen and Muddiman (2018) similarly found that recall for the news organization behind a story is relatively low among consumers. Recall and awareness of authorship is a likely prerequisite for psychological effects of purported automation to be observed. With these considerations in mind, what might account for an awareness of source in the present study, given prior research suggesting that audience recall for source is relatively low? One possibility is that the context of news consumption may serve as a moderator for source recall: the Media Insights Project examined source recall in the context of news retrieved from social media, whereas the present study examined responses to news articles retrieved from the original news portal. In the case of news on social media, past work suggests that audience recall and psychological responses vary when multiple sources are provided (e.g., Kang, Bae, Zhang, & Sundar, 2011), which may be one reason for the disparity in recall between the two studies. A similar conclusion is offered by the Digital News Report in 2016, which suggested that news accessed via “social networks, portals, and mobile apps means that the originating news brand gets clearly noticed less . . .” With these observations in mind, a valuable avenue for future research would be to examine responses to automated journalism in contexts such as social media where multiple sources are present, which might inhibit the recall of the source and, by extension, the psychological influence of perceived automation.
The findings of the present study highlight a variety of theoretical and practical considerations for the field of automated journalism. Attribution practices for acknowledging the role played by automation in the production of news are inconsistent across news outlets, ranging from no attribution at all to taglines such as “powered by Automated Insights” that briefly appear at the end of an article. The current work offers tentative evidence that direct forms of attribution, such as acknowledging automation on the byline of a news article, can have positive effects on message credibility when listed alongside traditional human authors. With that said, it is also important to note that perceptions of human-ness bolster message credibility. Given this consideration, a second attribution practice worthy of further consideration is the use of labels that anthropomorphize automated authors while still serving as a cue for the machine heuristic. Further empirical work is necessary to formally test such possibilities, particularly given that anthropomorphic labels implying the presence of a physical entity are misleading and may heighten social expectations of automation that news attributed to algorithms may find difficult to satisfy (e.g., Reeves & Nass, 1996).
Another relevant issue raised by the present study is how audiences psychologically orient toward news attributed to a corporation such as Automated Insights, which supplies an automated service but is not itself a machine actor. To address this issue, an independent sample of participants evaluated the perceived machine-ness of news written by one of three possible sources: Automated Insights, a corporation that supplies algorithm-based news; Quill, an algorithm that generates news; or a sole human author. Results revealed that Quill and Automated Insights were perceived as equally machine-like and as statistically distinct in machine-ness from a sole human author. Given this finding, it appears that audiences who encounter algorithm-generated news hosted by a news organization, but supplied by a corporate entity such as Automated Insights, psychologically orient toward the algorithm as source rather than toward the corporation that supplied the algorithm service. If this were not the case, machine authors (such as Quill) and corporate authors (such as Automated Insights) would likely have differed along the dimension of machine-ness, particularly in light of marketing trends that continue to frame corporations as public actors with human motives and values (e.g., Schwartz, 2017). That said, it remains possible that readers perceive automated content as produced via collective effort rather than as the sole product of an algorithm’s function. Therefore, our results should be contextualized as focusing on a specific form of attribution, namely, attribution to the company that purportedly produced the article rather than to the underlying algorithm. Finally, one related point to consider is how attribution to the algorithm versus attribution to the company supplying the algorithm might also affect news-related variables, such as perceived bias or news credibility.
If credibility varies as a function of the label or metaphor used to describe automation, then it would be critical for journalists to adopt consistent labeling practices that both accurately describe the nature of automation as well as heighten the perceived credibility of the news products they supply. In short, further work is necessary to test whether attribution to corporate entities such as Automated Insights rather than the algorithm itself undermines the prospective benefits of heuristics cued by automation, particularly if terms such as Automated Insights imply the presence of collective work.
Limitations and Future Directions
There are a variety of promising avenues for future work to pursue that could address the limitations of this work. First, the present study examined the effects of news attributed to different declared authors, but did not manipulate actual differences in content between conditions. Although content was held constant for the sake of experimental control, other scholars have found that automation effects vary between declared sources and actual automation (Graefe et al., 2016), so our results should be considered applicable to the former rather than the latter. In terms of the context of the study, our stimuli pertained to politically motivated current events prone to perceptions of bias. To expand the generalizability of our results, future studies should consider replicating these findings with other news topics, such as weather or finance, that are less likely to be politically polarized. As for possible moderators, additional studies should evaluate not just the effects of automation overall, but also probe specific variables that might condition the effects of purported machine authorship, such as technological expertise or familiarity with automation. Furthermore, it would also be valuable for multiple author names to be used within each condition to ensure that any effects observed are not idiosyncratic to the specific manipulation in question.
Finally, it is important to call attention to the distribution of ideology and political party in the main study’s sample, which underrepresented participants with conservative beliefs or who identified as members of the Republican Party. This skew in ideology and political party affiliation is common for samples recruited from MTurk, which tend to be more educated and more liberal than the general population (e.g., Clifford, Jewell, & Waggoner, 2015). Given that politically motivated reasoning may vary between conservatives and liberals, it is possible that a different pattern of results might be observed with a sample that includes a higher proportion of conservative participants. For example, conservative respondents may exhibit systematically different responses to Fox News and MSNBC than liberal respondents, a possibility that was not testable with the present sample given its ideological skew. To that end, replication of the present study with purposive sampling intended to mirror the distribution of ideological beliefs in the general population would be a valuable avenue for future work.
In sum, while anxiety over the deployment of technology into new domains is common, the rise of automated news may hold utility for bolstering perceptions of news credibility. News attributed to a machine appears to be more positively evaluated when it is accompanied by the intervention of a human agent than when algorithms are assumed to be the sole author. For journalists coping with questions of media bias, it appears that highlighting the role played by automation in their work may augment the credibility of the news products that they create.
Appendix: Demographics by Sample.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplemental Material
Supplemental material for this article is available online.
References
Amazeen, M. A., Muddiman, A. R. (2018). Saving media or trading on trust? The effects of native advertising on audience perceptions of legacy and online news publishers. Digital Journalism, 6, 176-195. doi:10.1080/21670811.2017.1293488
Anderson, C. W. (2013). Towards a sociology of computational and algorithmic journalism. New Media & Society, 15, 1005-1021. doi:10.1177/1461444812465137
Appleman, A., Sundar, S. S. (2016). Measuring message credibility: Construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly, 93(1), 59-79. doi:10.1177/1077699015606057
Bartneck, C., Suzuki, T., Kanda, T., Nomura, T. (2007). The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI & Society, 21, 217-230. doi:10.1007/s00146-006-0052-7
Bellur, S., Sundar, S. S. (2014). How can we tell when a heuristic has been used? Design and analysis strategies for capturing the operation of heuristics. Communication Methods and Measures, 8, 116-137. doi:10.1080/19312458.2014.903390
Buhrmester, M., Kwang, T., Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3-5. doi:10.1177/1745691610393980
Byrne, D. (1997). An overview (and underview) of research and theory within the attraction paradigm. Journal of Social and Personal Relationships, 14, 417-431. doi:10.1177/0265407597143008
Carlson, M. (2016). Automated journalism: A posthuman future for digital news? In Franklin, B., Eldridge, S. (Eds.), The Routledge companion to digital journalism studies (pp. 226-234). New York, NY: Routledge.
Carter, R. F., Greenberg, B. S. (1965). Newspapers or television: Which do you believe? Journalism & Mass Communication Quarterly, 42, 29-34. doi:10.1177/107769906504200104
Chaiken, S., Liberman, A., Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In Uleman, J. S., Bargh, J. A. (Eds.), Unintended thought (pp. 212-252). New York, NY: Guilford Press.
Chen, S., Chaiken, S. (1999). The heuristic-systematic model in its broader context. In Chaiken, S., Trope, Y. (Eds.), Dual-process theories in social psychology (pp. 73-96). New York, NY: Guilford Press.
Clerwall, C. (2014). Enter the robot journalist: Users’ perceptions of automated content. Journalism Practice, 8, 519-531. doi:10.1080/17512786.2014.883116
Clifford, S., Jewell, R. M., Waggoner, P. D. (2015). Are samples drawn from Mechanical Turk valid for research on political ideology? Research & Politics, 2(4). doi:10.1177/2053168015622072
Coddington, M. (2015). Clarifying journalism’s quantitative turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting. Digital Journalism, 3, 331-348. doi:10.1080/21670811.2014.976400
Cohen, S., Hamilton, J. T., Turner, F. (2011). Computational journalism. Communications of the ACM, 54(10), 66-71. doi:10.1145/2001269.2001288
Dörr, K. N. (2015). Mapping the field of algorithmic journalism. Digital Journalism, 4, 700-722. doi:10.1080/21670811.2015.1096748
Edwards, C., Edwards, A., Spence, P. R., Shelton, A. K. (2014). Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372-376. doi:10.1016/j.chb.2013.08.013
Eveland, W. P., Shah, D. V. (2003). The impact of individual and interpersonal factors on perceived news media bias. Political Psychology, 24, 101-117. doi:10.1111/0162-895X.00318
Giner-Sorolla, R., Chaiken, S. (1994). The causes of hostile media judgments. Journal of Experimental Social Psychology, 30, 165-180. doi:10.1006/jesp.1994.1008
Graefe, A. (2017). Guide to automated journalism. Tow Center for Digital Journalism. Retrieved from http://towcenter.org/research/guide-to-automated-journalism/
Graefe, A., Haim, M., Haarmann, B., Brosius, H.-B. (2016). Readers’ perceptions of computer-generated news: Credibility, expertise, and readability. Journalism. Advance online publication. doi:10.1177/1464884916641269
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: Guilford Press.
Hayes, A. F., Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent variable. British Journal of Mathematical and Statistical Psychology, 67, 451-470. doi:10.1111/bmsp.12028
Houston, J. B., Hansen, G. J., Nisbett, G. S. (2011). Influence of user comments on perceptions of media bias and third-person effect in online news. Electronic News, 5, 79-92. doi:10.1177/1931243111407618
Hovland, C. I., Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15, 635-650. doi:10.1086/266350
Iyengar, S., Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59, 19-39. doi:10.1111/j.1460-2466.2008.01402.x
Kang, H., Bae, K., Zhang, S., Sundar, S. S. (2011). Source cues in online news: Is the proximate source more powerful than distal sources? Journalism & Mass Communication Quarterly, 88, 719-736. doi:10.1177/107769901108800403
Knobloch-Westerwick, S., Meng, J. (2009). Looking the other way: Selective exposure to attitude-consistent and counterattitudinal political information. Communication Research, 36, 426-448. doi:10.1177/00936502093330
Lee, E.-J. (2012). That’s not the way it is: How user-generated comments on the news affect perceived media bias. Journal of Computer-Mediated Communication, 18, 32-45. doi:10.1111/j.1083-6101.2012.01597.x
Lee, K. M., Peng, W., Jin, S.-A., Yan, C. (2006). Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human-robot interaction. Journal of Communication, 56, 754-772. doi:10.1111/j.1460-2466.2006.00318.x
Lokot, T., Diakopoulos, N. (2015). News bots: Automating news and information dissemination on Twitter. Digital Journalism, 4, 682-699. doi:10.1080/21670811.2015.1081822
McCroskey, J. C. (1966). Scales for the measurement of ethos. Speech Monographs, 33, 65-72. doi:10.1080/03637756609375482
Media Insights Project. (2016). Who shared it? How Americans decide what news to trust on social media. Retrieved from http://mediainsight.org/Pages/%27Who-Shared-It%27-How-Americans-Decide-What-News-to-Trust-on-Social-Media.aspx
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., McCann, R. M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Annals of the International Communication Association, 27, 293-335. doi:10.1080/23808985.2003.11679029
Metzger, M. J., Flanagin, A. J., Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60, 413-439. doi:10.1111/j.1460-2466.2010.01488.x
Montoya, R. M., Horton, R. S. (2013). A meta-analytic investigation of the processes underlying the similarity attraction effect. Journal of Social and Personal Relationships, 30, 64-94. doi:10.1177/0265407512452989
Moskowitz, G. B., Skurnik, I., Galinsky, A. D. (1999). The history of dual-process notions, and the future of preconscious control. In Chaiken, S., Trope, Y. (Eds.), Dual-process theories in social psychology (pp. 12-40). New York, NY: Guilford Press.
Nass, C., Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81-103. doi:10.1111/0022-4537.00153
Nowak, K. L., Biocca, F. (2003). The effect of the agency and anthropomorphism on users’ sense of telepresence, copresence, and social presence in virtual environments. Presence: Teleoperators and Virtual Environments, 12, 481-494. doi:10.1162/105474603322761289
Nowak, K. L., Hamilton, M. A., Hammond, C. C. (2009). The effect of image features on judgments of homophily, credibility, and intention to use as avatars in future interactions. Media Psychology, 12, 50-76. doi:10.1080/15213260802669433
Nowak, K. L., Rauh, C. (2008). Choose your “buddy icon” carefully: The influence of avatar androgyny, anthropomorphism and credibility in online interactions. Computers in Human Behavior, 24, 1473-1493. doi:10.1016/j.chb.2007.05.005
O’Keefe, D. J. (2015). Trends and prospects in persuasion theory and research. In O’Keefe, D. J. (Ed.), Persuasion: Theory and research (pp. 214-240). Thousand Oaks, CA: SAGE.
Reeves, B., Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York, NY: Cambridge University Press.
Schwartz, M. S. (2017). Corporate social responsibility. London, England: Routledge.
Simons, H. W., Berkowitz, N. N., Moyer, J. R. (1970). Similarity, credibility, and attitude change: A review and a theory. Psychological Bulletin, 73, 1-16. doi:10.1037/h0028429
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In Metzger, M. J., Flanagin, A. J. (Eds.), Digital media, youth, and credibility (pp. 73-100). Cambridge, MA: The MIT Press.
Sundar, S. S., Jia, H., Waddell, T. F., Huang, Y. (2015). Toward a theory of interactive media effects: Four models for explaining how interface features affect user psychology. In Sundar, S. S. (Ed.), Handbook of psychology of communication technology (pp. 47-86). Malden, MA: Wiley Blackwell.
Sundar, S. S., Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51, 57-72. doi:10.1111/j.1460-2466.2001.tb02872.x
Todorov, A., Chaiken, S., Henderson, M. D. (2002). The heuristic-systematic model of social information processing. In Dillard, J. P., Pfau, M. (Eds.), The persuasion handbook: Developments in theory and practice (pp. 195-212). Thousand Oaks, CA: SAGE.
Vallone, R. P., Ross, L., Lepper, M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut massacre. Journal of Personality and Social Psychology, 49, 577-585. doi:10.1037/0022-3514.49.3.577
Wise, K., Bolls, P. D., Shaefer, S. R. (2008). Choosing and reading online news: How available choice affects cognitive processing. Journal of Broadcasting & Electronic Media, 52, 69-85. doi:10.1080/08838150701820858
Young, M. L., Hermida, A. (2014). From Mr. and Mrs. Outlier to central tendencies: Computational journalism and crime reporting at the Los Angeles Times. Digital Journalism, 3, 381-397. doi:10.1080/21670811.2014.976409
Author Biography
T. Franklin Waddell (PhD, Penn State) is an assistant professor in the College of Journalism and Communication at the University of Florida. His research focuses on the effects of new media that either afford the opportunity for self-expression (such as avatars) or allow individuals to monitor the collective opinion of others (such as online comments).
