Editorial
First published online August 30, 2018

When the “Best Available Evidence” Doesn’t Win: How Doubts About Science and Scientists Threaten the Future of Evidence-Based Management

Abstract

The well-documented academic-practice gap has frequently been viewed as a problem to be solved via evidence-based management. Evidence-based management focuses heavily on aggregating and evaluating research evidence to address practical questions via meta-analysis and other forms of systematic review, as well as educating managers and management students on how to differentiate strong from weak research methods. The assumption has been that if researchers produce stronger research findings and disseminate them in manager- and student-accessible venues, the results will be believed and implemented if appropriate to the context. However, these strategies alone are woefully insufficient, as a growing body of evidence suggests that even when individuals are aware of research findings supported by a vast majority of studies, they often choose not to believe them. Sources of this disbelief include growing public distrust of academics, scientific research, and professional expertise in general, as well as negative emotional reactions to specific research findings that threaten people’s cherished beliefs, self-image, self-interest, or social identity. This editorial takes an interdisciplinary approach to explain why people might not believe management research findings and offers strategies for increasing public trust of academics and reducing resistance to self-threatening research findings.
Despite having more access to information than ever before—and despite being better educated than at any time in history—the number of people who maintain beliefs lying outside the scientific consensus remains stubbornly high.
A recent study of the academic-practice (A-P) gap in management (Banks, Pollack, Bochantin, Kirkman, Whelpley, & O’Boyle, 2016) revealed that the two biggest “grand challenges” from the perspective of 828 surveyed management academics are to implement management best practices and to reduce the A-P gap. These goals are also central to the evidence-based management (EBMgt) movement. EBMgt focuses heavily on aggregating and evaluating research evidence to address practical questions via meta-analysis and other forms of systematic review (Rousseau, Manning, & Denyer, 2008). Overall, the assumption has been that if researchers produce stronger, more replicable research findings and disseminate them in practitioner-accessible venues, the results will be believed and implemented if appropriate to the context.
However, an abundance of prior research indicates that some of the strongest management research findings have not been widely adopted by managers (e.g., Highhouse, 2008; Johns, 1993). Rynes (2012) argues that there are three major categories of reasons for this: (1) people are unaware of research findings; (2) even if they are aware of findings, they don’t believe them; or (3) even if they are aware of findings and believe them, they don’t bother to implement them. In this editorial, we focus primarily on the second phenomenon—disbelief of research findings.
A growing body of evidence suggests that even when individuals are aware of research findings supported by a vast majority of studies, they often choose not to believe them. For example, despite strong scientific consensus that human activity is a driver of rising global temperatures, 39% of surveyed Americans do not even believe there is solid evidence that the earth has been warming (Pew Research Center, 2014). Similarly, while Pew research shows that 98% of scientists believe that natural selection plays a role in evolution, 34% of Americans entirely reject evolution (Masci, 2017).
Closer to the field of management, Rynes, Colbert, and Brown (2002) found a number of large discrepancies between research findings and the beliefs of 959 human resource (HR) professionals.1 For example, the HR managers in their study did not believe that goal setting is more effective than employee participation for improving organizational performance, that most errors in performance appraisal cannot be eliminated by error-reduction training, that intelligence is a better predictor of performance than either values or conscientiousness, or that intelligence improves performance even on low-skilled jobs. Similar results have been found using samples of HR managers from Holland, Finland, South Korea, and Spain (Sanders, van Riemsdijk, & Groen, 2008; Tenhiälä, Giluk, Kepes, Simón, Oh, & Kim, 2016). Additionally, Highhouse (2008) showed that even when HR professionals are aware of research findings that demonstrate tests and actuarial selection models are superior to unstructured interviews, they still believe that this is not true as it personally applies to them. Turning to more macro topics, many institutional investors believe there is a negative relationship between corporate social responsibility and corporate financial performance (Jay & Grant, 2017), even though meta-analyses show a positive relationship (Orlitzky, Schmidt, & Rynes, 2003). Similarly, Welbourne and Andrews (1996) found that firms undergoing initial public offerings (IPOs) had a significantly larger chance of 5-year survival if they placed a higher value on human resources and used more organizational performance–based compensation for a wide range of employees. However, examination of investors’ stock price premia for IPOs showed that they evaluated organizational performance–based pay negatively rather than positively and ignored the value placed on human resources.
Of course, disregarding scientists and disbelieving scientific research is neither new nor confined to management. However, there are reasons for growing alarm about the disbelief of scientific findings across a wide range of professional domains because it seems to reflect a much broader drop in the credibility of academics and scientists.

Growing Distrust and Reduced Credibility of Academics

Some of the reduced credibility and increased distrust of academics and research stems from the rapid rise in studies suggesting that existing research findings are not nearly as robust as previously believed. Reasons for this range from relatively innocent causes (e.g., undetected analytical errors) to questionable research practices (such as excluding outlier data or hypothesizing after the results are known, or HARKing; Kerr, 1998) to out-and-out falsification of data or results. These problems have been exposed in nearly all scientific fields (Ioannidis, 2005), including management and psychology (O’Boyle, Banks, & Gonzalez-Mulé, 2016) and strategic management (Bergh, Sharp, Aguinis, & Li, 2017). As such, at least part of the blame for reduced credibility of academic research lies within the academic community.
However, researchers themselves are hardly the only ones to blame for the public’s growing distrust of research. For example, extensive investigations reveal that there have been many well-funded, concerted efforts to discredit solid scientific research for self-interested political, ideological, or economic ends (Mayer, 2017; Mooney, 2006; Oreskes & Conway, 2010). These campaigns have painstakingly worked not only to discredit research findings but also to smear the reputations of the scientists who produced them. Although most of the external attacks on research and researchers have been leveled in fields other than management, they have dealt a heavy blow to the credibility and perceived trustworthiness of science and scientists in general, as well as the universities and other organizations that employ them.
For example, one survey showed that 24% of Americans feel “cold” or “very cold” toward professors (Pew Research Center, 2017a). Another Pew survey showed that although 72% of Democrats believe colleges and universities have a positive effect on the country, 58% of Republicans believe they have a negative effect (up 21% from 2015; Sullivan & Jordan, 2017). Yet another Pew survey showed that 39% of Americans do not believe that climate scientists provide full and accurate information about climate, with 36% believing that their findings mostly reflect their desire for career advancement and 27% their political leanings (Pew Research Center, 2016). Taken together, these findings suggest that although professors are still held in high esteem by many members of the U.S. citizenry, to a substantial minority they are perceived as social out-groups, with all the attendant negative consequences of that characterization (Tajfel & Turner, 1979).
A further illustration of the effects of political polarization on perceptions of experts was provided in a carefully controlled study by Marks, Copland, Loh, Sunstein, and Sharot (2018). Marks et al. showed that people performing an online task preferred to seek the advice of politically like-minded collaborators (actually a computer algorithm) and to believe those collaborators had higher abilities on the task, despite evidence presented to the contrary. Marks et al. conclude that “knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement” (2). In the present context, this suggests that the knowledge and expertise of conservative academics are likely to be discounted or discredited by liberal audiences and vice versa. While examples of this phenomenon are ubiquitous in the current political environment in the United States, similar trends have also been documented in Europe (Trilling, van Klingeren, & Tsfati, 2016) and parts of East Asia (Lee, 2007).
Put into an even larger context, Nichols (2017) argues that professors are just one of many types of professionals—including doctors, lawyers, and realtors—who increasingly find their expertise challenged by patients or clients or by students who do not value their many years of study and specialized expertise. The decline in respect for people who have acquired deep knowledge in specialized areas was described nearly 40 years ago by Asimov as “a strain of anti-intellectualism [that] has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’” (1980: 19). Nichols believes that anti-intellectualism has now evolved much further, into what he calls the Death of Expertise:
a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laypeople, students and teachers, knowers and wonderers . . . not just a rejection of existing knowledge, [but rather] fundamentally a rejection of science and dispassionate rationality, which are the foundations of modern civilization. (3-5)
Again, while this attitude may not describe the majority of the U.S. population, it does describe a substantial minority.

The Role of Motivated Reasoning in Rejecting Specific Research Findings

To this point, we have argued that the reduced credibility and perceived expertise of academics among a sizeable segment of the population threatens the likelihood that managers will look to academic research for advice or apply empirically validated best practices. However, sometimes skepticism or dismissiveness comes not from attitudes toward the messenger but, rather, from reactions to specific messages. That people are not dispassionately rational in their evaluations of arguments or data has been well demonstrated (Tversky & Kahneman, 1974). Rather, humans pursue motivated reasoning designed to arrive at particular, favored conclusions that are primed by our deeper underlying values, worldviews, vested interests, fears, and self-identities and social identities (Haidt, 2001). However, having arrived at our judgments emotionally (and with split-second alacrity), we then justify them rationally and close ourselves off from disconfirming evidence.
Take, for example, the strongly disbelieved research finding that intelligence is the single best predictor of job performance (Schmidt & Hunter, 1998). There are several reasons to suspect that motivated reasoning is at play here. First, the fact that intelligence has a substantial genetic component means that low intelligence cannot be completely overcome by hard work, a fact that may threaten people’s sense of fairness and feelings of control. Second, the fact that for a variety of reasons, there are average differences in intelligence scores across racial and ethnic groups means that findings about the importance of intelligence can also threaten one’s self-identity or social identity. Third, research suggesting a strong link between intelligence and performance may also threaten the self-image of those whose past experiences (especially academic ones) may have caused them to feel insecure about their intelligence. Similarly, the widely disbelieved findings that both tests and algorithmic employee selection methods are more accurate than subjective interviews (Highhouse, 2008) are likely to threaten managers’ sense of autonomy, control, and self-image as competent people.
Indeed, an experiment by Caprar, Do, Rynes, and Bartunek (2016) revealed clear evidence of motivated cognition in management students’ agreement, or disagreement, with three research essays related to predictors of job performance. The first essay, excerpted from Schmidt (2009), argued that employers should test for intelligence because it is the best predictor of job performance. The second essay, excerpted from Goleman (1998), contrarily argued that emotional intelligence is the best predictor and that intelligence is a weak predictor, explaining only 10% to 25% of the variance in performance (implying that emotional intelligence explains the rest). The third, taken from Pfeffer (1998), argued that “fit” is the best predictor of performance. After reading all three essays (balanced by order across subjects), the students agreed most strongly with the emotional intelligence argument (3.73 on a six-item, 5-point scale), followed by the fit argument (3.69), and trailed considerably by the intelligence argument (3.35). Moreover, analyses showed a significant positive correlation between students’ grade point averages (GPAs) and their agreement with the intelligence essay but not their agreement with the other two essays. Finally, students were asked whether employers should use intelligence tests in hiring (yes/no) and why. Low-GPA students were more likely not only to say “no” but also to use self-protecting reasons to explain their beliefs (e.g., “I don’t have the highest GPA in the world but I am a very hard worker and strive to get my work done all the time,” or “Just because someone is intelligent doesn’t make them a good worker or employee. . . . I mostly base this opinion on personal experience”; Caprar et al.: 219).
Although there are not many studies like Highhouse (2008) or Rynes et al. (2002) that directly examine the extent to which practitioners’ beliefs differ from management research findings, we believe there are a number of management topics for which people are likely to react to research findings with motivated reasoning. Take, for example, field experiments demonstrating the continued existence of discrimination in hiring (e.g., Pager, Bonikowski, & Western, 2009). Whether people believe this research is likely to depend on their demographic characteristics, social identities, political affiliation, and worldviews about whether people get what they deserve in life (Hornsey & Fielding, 2017; Pew Research Center, 2017b). Similarly, research suggesting the benefits of diversifying the labor force or promoting women or minorities into leadership positions (e.g., Herrin, 2009) is likely to threaten the vested interests of members of currently overrepresented groups while raising the hopes and aspirations of others. Research on immigration and globalization may also trigger peoples’ in-group identities, leading to derogatory stereotypes of the out-groups championed by such research (Petriglieri, 2011).
Many people are also likely to use motivated reasoning when evaluating research-based claims about the causes and consequences of pay inequality, the reduction of which was the number one “grand challenge” noted by the combined group of academics and practitioners (n = 1,767) in the Banks et al. (2016) study. For example, those with strong hierarchical, “just world,” and social dominance worldviews are more likely to accept privilege based on existing social strata and to view such hierarchies as both natural and valuable (Hornsey & Fielding, 2017). As such, they are more likely to embrace “trickle-down” economic policies (i.e., reduced taxes on the wealthy) than attempts to reduce income inequality via more egalitarian pay policies or government regulation, while those with opposing worldviews feel otherwise. Similarly, research suggests that conservative managers with strong needs for cognitive closure are more likely to prefer shareholder versus multistakeholder models of governance and hierarchical structuring that reduces the need for argumentation or negotiation with those lower in the hierarchy (Tetlock, 2000). More generally, topics such as corporate social responsibility, corporate governance, and business policy are likely to be responded to differently on the basis of people’s beliefs about the relative value of individualism versus collectivism, as well as their social identification with political ideologies such as conservatism, liberalism, or libertarianism (Haidt, 2012). Although the particular underlying beliefs that influence motivated reasoning may differ across cultures, the phenomenon of motivated reasoning is universal and, thus, likely to shape beliefs about research findings globally.
The fact that people do not respond to research findings as rational “blank slates” poses serious challenges for EBMgt. First, it means that just having a strong body of evidence may be insufficient to convince many people of its validity, particularly if the topic is one about which passions run high. When people emotionally reject research findings, they do not do so based on evidence; rather, they do so based on intuitive judgments or “gut instincts” reflecting their values, fears, personal experiences, vested interests, need to preserve self-esteem, or desire to maintain autonomy or control. In such cases, it will be ineffective to simply continue to present the scientific message. As Hornsey and Fielding note, “If people are motivated to reject a scientific message, then continually presenting the scientific message represents a misunderstanding of what is causing the incomprehension, one that is likely to lead to frustration” (2017: 468). Furthermore, even adding to an already-solid research base generally won’t do the trick because people often dig in even further when presented with additional contrary evidence (Festinger, Riecken, & Schachter, 1956). This means that more effective methods of persuasion must be found.
In sum, there are quite a large number of topics that we research and teach about in management that are likely to arouse skepticism or even dismissiveness among some practitioners and students. As educators, we should both expect and invite skepticism because skepticism builds critical thinking and is likely to help us improve our arguments. Still, skeptical audiences are more challenging to persuade than inherently supportive ones, and cynical or dismissive audiences2 are more challenging still (Hoffman, 2015; Hornsey & Fielding, 2017).

Potential Solutions

To this point, we have argued that to the extent that our audiences include people who are skeptical or distrustful of academics or motivated to resist particular research-based messages, standard EBMgt recommendations—to conduct systematic reviews, replicate findings, reduce sources of error, make findings more easily available, and provide high levels of transparency—may be insufficient. Rather, when audience members distrust the messenger and/or disagree with particular messages, we need to think much more carefully about how to overcome skepticism or resistance. Given that resistance is based on emotions, values, fears, identities, and worldviews, we have to find ways to address those underlying issues. To fail to do so is to leave us vulnerable to an ineffective type of shadowboxing where “each contestant lands heavy blows to the opponent’s shadow, then wonders why she doesn’t fall down” (Haidt, 2001: 823). Below, we focus on two general strategies for improving the credibility of management academics and increasing the likelihood that people will accept our research, as summarized in Table 1.
Table 1 Strategies and Tactics for Increasing Public Trust and Academic Credibility and Addressing Resistance to Specific Findings

Strategy: Increase public trust and academic credibility

Illustrative tactics: Improve research creation
• Focus on bigger, more important problems
• Cocreate research with practitioners
• Improve research quality, replicability, and transparency

Illustrative tactics: Improve research dissemination and communication
• Disseminate research in alternative media (e.g., professional and bridge journals, TED talks, online forums, MOOCs)
• Grab attention via narratives, metaphors and analogies, graphics, more translatable statistics, and “sticky” communication principles

Strategy: Anticipate and address resistance to specific findings

Illustrative tactics:
• Use dialectic methods and two-sided arguments with refutation
• Use the best available evidence and explain research methods along with findings
• Use experiential methods
• Use jiu jitsu persuasion

Note: MOOC = massive open online course.

Strategies for Increasing Public Trust and Academic Credibility

Improve research creation

Most discussions of the A-P gap have focused mainly on transmitting research findings to practitioners. As such, they do not focus very directly on the question of whether our research is “worth” transmitting. However, this is beginning to change. For example, one obvious recommendation has been for researchers to focus on bigger, more important problems (Bennis & O’Toole, 2005) rather than problems that primarily fill gaps in the academic research literature. A related suggestion is to broaden the range of stakeholders whose interests are considered, moving from an overemphasis on shareholders to broader considerations of customers, employees, local communities, taxpayers, the environment, and society as a whole (Community for Responsible Research in Business and Management, 2018).
Calls have also escalated for research that is cocreated between academics and practitioners. In most management research, practitioners simply serve as data sources, providing surveys, interviews, or archival records for academics to use in testing hypotheses. This may reinforce the perception that academics are “other” and deepen the A-P gap. However, when academics cocreate with practitioners, the questions that are posed and the methods by which they are addressed are more likely to produce research that is both relevant and translatable to practitioners (Bansal, Bertels, Ewart, MacConnachie, & O’Brien, 2012).
In addition, building high-quality connections while pursuing joint research increases the likelihood that the outcome will be more interesting and significant to both parties (Dutton & Dukerich, 2006). This is because high-quality connections foster greater emotional involvement, incorporate more give-and-take, and open interaction partners up to new ideas and influence. Indeed, Bartunek (2007) argues that simply creating strong relationships with practitioners is likely to yield many mutual learning benefits and increase trust, even if those relationships do not develop into joint research projects.
Finally, at the same time that we take these positive steps to make our research more important, relevant, and useful to stakeholders other than ourselves, we also need to improve the quality, replicability, and transparency of our research so that we avoid the negative publicity created by embarrassing failures to replicate and retractions of research involving outright fraud (Aguinis, Ramani, & Alabduljader, 2018; O’Boyle et al., 2016). Several journals have recently tightened their ethics codes and/or changed their submission procedures to deal with these issues. For example, Personnel Psychology has instituted CrossCheck, a plagiarism detection tool, for all new submissions. In an effort to disincentivize HARKing, the Journal of Business and Psychology now allows authors to submit manuscripts for results-blind review, in which manuscripts are initially evaluated based on the introduction and methodology alone. Other outlets such as Strategic Management Journal now desk reject manuscripts that rely on cutoff values (e.g., p < .05) for statistical support as a means to reduce “p-hacking” (i.e., the process of continually re-analyzing data with slight changes until statistical significance is achieved). The Journal of Applied Psychology now requires authors to conform to the American Psychological Association’s Journal Article Reporting Standards, which call for greater transparency and replicability. The Journal of Management, along with a number of other journals in the field, is now a member of the Committee on Publication Ethics, or COPE, which establishes a code of conduct to discourage unethical practices and creates clear guidelines on how to handle allegations of author, reviewer, and editor misconduct.
Taken together, making progress on the preceding suggestions is likely to increase the credibility of management academics as well as the perceived relevance of our research. In the terminology of Shapiro, Kirkman, and Courtney (2007), these strategies will help to reduce the problem that much management research is “lost before translation” because of unimportant or uninteresting topics and inadequate input from practitioners in both problem selection and research design. We now address the problem of research being “lost in translation.”

Improve research dissemination and communication

To outsiders, the current publishing model of academic research is likely to appear strange, counterintuitive, and wasteful. Academics working in publicly funded schools are paid to produce research, which is then handed over to journals free of charge. The journals then sell the research back to those same publicly funded schools, and others, for a premium. Nonacademics are unlikely to see this as the most efficient use of their tax dollars. To make matters worse, even if academic journal articles were made more widely available, the writing is largely uninterpretable without substantial research training. Thus, even research that practitioners might find interesting, important, and useful generally remains little known and underutilized.
Given that the current publishing model is unlikely to change dramatically in the short term, how do we make our research more accessible and interpretable? Experts have long recommended publishing findings in outlets that are accessible to practitioners (e.g., practitioner and bridge journals), but researchers may struggle to learn the distinct style of communication needed for such articles and may be uncertain about the rewards of doing so. Perhaps even more challenging, many practitioners, students, and members of the general population now get much of their information from sources that were barely in use little more than a decade ago, such as blogs, online videos, and various forms of social media. However, the best opportunities to humanize management academics and get research evidence to the public may lie in these alternative forums.
For example, many of the most watched TED talks (e.g., The Power of Vulnerability, The Power of Introverts) are based on social science research and have been viewed millions of times—considerably more than the original work discussed in the videos. Indeed, TED recently began a regular podcast series, WorkLife With Adam Grant, focusing explicitly on work-related issues of interest to both managers and employees. Additionally, online communities such as Reddit can be leveraged to gain a larger audience for research findings. A doctoral candidate at Indiana University recently had her Journal of Applied Psychology article discussed on Reddit’s front page. The discussion of the central finding that women experienced more workplace incivility from other women than men (i.e., Queen Bee Syndrome) stayed on the front page for almost the entire day and generated over 4,000 comments and more than 60,000 votes (the metric by which readers indicate importance). To give that additional context, according to Statista (https://www.statista.com/statistics/443332/reddit-monthly-visitors/), Reddit is viewed approximately 1.6 billion times per month. Other opportunities for reaching practitioner audiences include research-based business school websites such as Knowledge@Wharton and massive open online courses such as Scott DeRue’s Leading Teams. Many business schools and the Academy of Management have long employed research publicists, a step recently taken by the Journal of Management as well. Another advance has been the creation of the Behavioral Science and Policy Association, which produces a weekly digital newsletter and a peer-reviewed journal, Behavioral Science and Policy, featuring short, accessible articles describing actionable policy applications of behavioral science research. These new types of media help to address practitioners’ expressed desire for continuing education (Banks et al., 2016).
No matter what media are used to share research findings, researchers should also consider how best to grab attention in ways that will increase interest in research evidence. Perhaps the most universally successful technique for doing so is to open with a compelling story that hooks the reader, rather than diving right into data or beginning with a theoretical exposition. This tactic has been used in popular research-based books by academic authors such as Thaler and Sunstein’s (2009) Nudge, Heath and Heath’s (2010) Switch, and Grant’s (2013) Give and Take and can easily be used by academics who give media interviews or speak to practitioner audiences.
Storytelling is effective for a number of reasons. For example, people are less likely to counterargue evidence that is presented in narrative format (Niederdeppe, Shapiro, & Porticella, 2011), perhaps because it frames the discussion in a particular way and puts the audience in a confirmatory mindset. Narratives also reduce reactance to persuasive messages (Moyer-Gusé & Nabi, 2010) and have been shown to evoke retrospective reflection, which increases the likelihood that people will recall memories consistent with the narrative that strengthen their belief (Hamby, Brinberg, & Daniloski, 2017). Stories further address the well-established finding that most people are more convinced by a compelling story than by large-sample statistical evidence (Rynes, 2012).
Yet another way of improving the credibility of scientific findings might be to use graphics, rather than text-based messages, and statistics that better convey practical importance to nonscientists than correlations and (especially) the coefficient of determination (Kuncel & Rigdon, 2013). Examples of more effective means of communication include risk ratios, converting contingency tables into graphics, stacked bar charts, and binomial effect size displays that convert correlations into percentages comparing different groups (e.g., outcomes for those scoring above average vs. those scoring below).
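To make the last of these concrete, the binomial effect size display (BESD) can be sketched in a few lines of code. This is an illustrative sketch of the standard BESD formulation, not code from Kuncel and Rigdon (2013); the function name is ours.

```python
def binomial_effect_size_display(r):
    """Convert a correlation r into a BESD: the percentage 'success rate'
    of the above-average group vs. the below-average group, using the
    standard 50 +/- 100r/2 formulation."""
    above = 50 + 50 * r  # success rate (%) for those scoring above average
    below = 50 - 50 * r  # success rate (%) for those scoring below average
    return above, below

# A validity coefficient of r = .30 translates into a 65% vs. 35%
# success-rate comparison, which lay audiences generally find far more
# intuitive than "r-squared = .09".
print(binomial_effect_size_display(0.30))  # → (65.0, 35.0)
```

Framed this way, even a "modest" correlation reads as a 30-percentage-point difference in outcomes between the two groups, which is precisely the kind of practical-importance framing the paragraph above recommends.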
Finally, based on Heath and Heath’s (2007) research-based book Made to Stick, Rousseau and Boudreau (2011) offered a set of recommendations to make research findings “sticky.” Two recommendations are to transmit core principles in plain language and to use familiar analogies. Rousseau and Boudreau’s examples include using the core principle “losses hurt more than gains feel good” (277) to summarize prospect theory or the core principle “socially distant groups tend to have negative perceptions of each other” (275) to explain the main thrust of intergroup research. Another principle, “embed findings within existing practitioner decision frameworks” (Rousseau & Boudreau, 2011: 278), is exemplified by showing how the monetary value of recruitment, selection, and retention processes can be demonstrated using processes and frameworks similar to those employed in supply chain analysis. Additional principles include presenting findings in ways that are deliberately targeted toward practitioners, such as framing research according to end-users’ interests, starting with a relevant story, providing general “dos and don’ts,” explicating known boundary conditions, and including opinion leaders’ testimonies.

Strategies for Anticipating and Addressing Resistance to Specific Findings

To this point, we have addressed general persuasive strategies for more effectively presenting evidence-based messages where there is not a particular reason to expect much resistance. As such, the above strategies are likely to be effective where the topic is not strongly emotional and the audience is either “friendly” or a mix of sympathetic and skeptical readers or listeners. However, as indicated earlier, there are many management topics about which we should expect at least some members of our audiences to have strongly held views (e.g., discrimination, diversity, employment testing, socially responsible investing). When addressing such topics, additional tactics that anticipate and address resistance more directly are needed.

Use dialectic methods or two-sided arguments

When researchers have the opportunity to engage directly with an audience, they may be able to shift beliefs about research evidence by helping people reach their own evidence-based conclusions. One way this can be done is through the use of dialectic (rather than didactic) methods. With dialectic methods, researchers actively engage in discourse among people holding (or role-playing) different points of view with the goal of establishing truth through evidence. For example, Latham (2007) has his executive master of business administration students vigorously debate the likely effectiveness of different methods of performance appraisal before revealing the evidence-based answer. By encouraging debate with and among students rather than simply talking at them, professors begin to create trust and reduce the chances of being perceived as condescending or “other” (e.g., Trank, 2014). This type of debate can also shift the audience to deeper information processing, reducing the effects of motivated reasoning.
A similar approach can be used in speaking to or writing articles for practitioners about potentially contentious issues. For example, one might begin an article about employment discrimination by asking the reader to consider whether discrimination still exists in hiring and then laying out various arguments in favor of each case (yes vs. no)—a technique known as two-sided arguing (Knowles & Linn, 2004). Presentation of both cases can then be followed by the introduction of meta-analytic evidence from field experiments showing clear evidence of persistent discrimination to the present time period (e.g., Pager, 2007; Quillian, Pager, Hexel, & Midtbøen, 2017). This strategy of “two-sided arguing with refutation” has been found to be more successful than presenting only a single position (Allen, 1991; O’Keefe, 1999), particularly in cases where those holding the weaker position would be likely to come up with counterarguments on their own if only the evidence-based side were presented.

Use the best available evidence and explain research methods along with findings

In the preceding example regarding the continued existence of discrimination in hiring, it is possible to present research evidence on both sides of the debate. For example, one can find many surveys showing that employers and members of the public report less discriminatory attitudes than in the past or that companies have increasingly recognized diversity as a goal and revamped hiring procedures to attempt to lessen discrimination. However, the research methods behind such studies (mostly attitude or self-reported practice surveys) are weaker in both internal and external validity than the meta-analyses of field experiments cited above (Blank, Dabady, & Citro, 2004). Because of the vast superiority of some methods over others, practitioners of EBMgt discuss procedures along with results as a way of building greater understanding of the value of strong research methods and increasing the confidence of those without strong research backgrounds.
The potential benefits of teaching audiences about stronger versus weaker methodologies have been demonstrated by Sagarin and Cialdini (2004) in the context of marketing advertisements. They found that training students to be critical of advertisement content and to identify credible versus noncredible sources for messages made them subsequently less resistant to legitimate and appropriate sources and less persuaded by ads with illegitimate uses of heuristics than were people not so trained. According to Knowles and Linn,
It is as if a general wariness that untrained participants applied to all advertisements was lifted from the legitimate ads after training. People who are provided with a sense of power, efficacy, control, and competence seem to have less need to be wary. (2004: 128)

Use experiential methods

Experiential methods can be a terrific way of getting an audience engaged in a topic. In addition, they can be used to reduce people’s well-known overconfidence about their ability to evaluate other people or make correct managerial decisions. For example, Latham and Wexley (1981) describe a training session they conducted with middle managers who had several years of experience in conducting performance appraisals. They gave these managers a detailed job description for a position in their unit and then had them view one of two job interviews. Everything about the two interviews was identical, except that in the first condition, the applicant said that he had 2 brothers, a father with a Ph.D. in physics, and a mother with a master’s degree in social work. In the second video, the tape was spliced so that the (same) applicant said he had 12 siblings, his father was a bus driver, and his mother was a maid. The first group rated the applicant a 9 on a 9-point qualifications scale, while the second group gave this same person 5s and 6s—a perfect illustration of the “similar-to-me” rating error. Such an experience can help reduce overconfidence and increase motivation to improve.
Although it is easiest to think of how to apply experiential methods in face-to-face situations, they can also be used in written contexts. For example, writers of research-based books sometimes include links to surveys, questionnaires, or self-assessments that help people see where they have the biggest opportunities for changing or improving their current situations in an evidence-guided way (e.g., Fredrickson, 2009; Grant, 2013). In addition, they give explicit guidance about how to put certain research findings into action. Similar approaches can be found in research-based articles, such as Amabile and Kramer’s (2011) piece on how to take advantage of the emotional and productivity benefits of small wins.

Use jiu jitsu persuasion to deal with resistance to specific issues

Broadly speaking, all of the above strategies require researchers to understand the underlying factors that cause resistance to research findings. The most effective persuasion strategies will vary, however, depending on the specific forces that are shaping beliefs. For example, changing the minds of people who resist research findings because they are inconsistent with the beliefs of their identity groups will require different strategies than changing the minds of people who question the credibility of research in general.
Hornsey and Fielding refer to these targeted persuasion strategies as “jiu jitsu persuasion,” explaining that
rather than taking on people’s surface attitudes directly (which causes people to tune out or rebel), the goal of jiu jitsu persuasion is to identify the underlying motivation, and then to tailor the message so that it aligns with that motivation. (2017: 469)
An example of jiu jitsu persuasion was offered by Jay and Grant (2017) in their book on the role of conversation in overcoming conflict that stems from value differences. They described two investment advisors who kept running into a brick wall when trying to convince pension fund managers and other institutional investors to consider socially responsible investments. Despite their argument that social investing is compatible with “doing well while doing good” (which is also consistent with meta-analytic evidence; e.g., Orlitzky et al., 2003), it fell on deaf ears because fund managers held an implicit theory that the two goals were locked in an inevitable tradeoff.
Jay and Grant (2017) suggest a four-step solution for honoring conflicting values based on embracing the tension between them. The first step, to “move beyond factual debates and clarify values, as well as associated hopes and fears” (Jay & Grant, 2017: 151), had already been accomplished by uncovering the clash between implicit “win-win” and “zero-sum” theories of the issue. Their second step, “own the polarization” (151), involves acknowledging your own ambivalence and the concern you have for the other person’s values or worldview. For example, the investment advisors acknowledged that there are indeed some trade-offs, particularly in the short term (e.g., higher wages might dampen short-term financial results), and expressed appreciation and concern for the fund managers’ fiduciary responsibility. In Jay and Grant’s third step, “expand the landscape” (151), the investment advisors drew a graph of financial performance versus social impact, but then also drew a line suggesting that the downward-sloping (i.e., trade-off) line could be shifted upwards through innovation (e.g., “We could imagine shifting this line outward—finding clever investment strategies that could break the trade-offs. We could do that by paying attention to information that other investors aren’t paying attention to”; 151). And in their step four, “dancing in the new terrain” (151), the parties worked together to generate options that moved from “either-or” to “both-and” solutions. Jay and Grant report that “the results were significant: the team’s clients responded positively to this conversation and began seriously considering their (social impact) approach. The conversation created an opening where one had not existed before” (145).

Conclusion

In the end, if research findings are to be implemented, researchers need to consider not only how to summarize research findings and communicate them in practitioner-facing outlets but also how to reduce potential resistance to believing the findings. Given questions about the credibility of research and researchers, as well as the fact that consumers of research may be motivated to reach specific conclusions based on their worldviews, identities, and self-interest, researchers cannot automatically assume that research findings that make it across the A-P gap will be believed. Our tool kit needs to expand beyond conducting systematic reviews and factually communicating results to practitioners. We need to consider how to communicate evidence to reduce resistance, help people draw their own evidence-based conclusions, and actively repair public trust in science and scientists. We also need more empirical evidence about what strategies work best to shape research-related beliefs. Armed with such evidence, researchers will have the tools necessary to convince skeptics and rebuild public trust in science and scientists.

Acknowledgments

We would like to thank Jean Bartunek, Nancy Hauserman, and Bodi Vasi for their extremely helpful comments in the preparation of this manuscript as well as David Allen and two anonymous reviewers for valuable feedback during the revision process.

Footnotes

1. Although Rynes et al. (2002) were able to establish that these beliefs were inconsistent with research findings, they could not determine how many of the managers in their study actually “knew” the research on these topics yet still did not believe it. Given that very few of them reported reading research journals, it is likely that they were unaware of the actual research yet nonetheless held beliefs inconsistent with it. Still, subsequent research (e.g., Caprar, Do, Rynes, & Bartunek, 2016) has found the same results with respect to disbelief about the validity of intelligence in predicting performance even after people have read the research.
2. Skeptics display questioning attitudes or doubt towards items of putative knowledge or belief. Real skeptics question things, consider all the evidence, and have open minds. (Indeed, one prominent climate scientist says that scientists are the true skeptics: “We demand evidence, we kick the tires . . . the entire system of peer review is based on you making the best argument for the result that you’ve found, and then multiple colleagues who are not involved in your research then tear it apart and try to find all the holes possible”; Katharine Hayhoe, director of Texas Tech University’s Climate Science Center, quoted in Grandoni, 2018: para. 6). Cynics have a sneering disbelief in sincerity, virtue, or selflessness, tending to believe that all acts are selfish. Cynics generally apply these attitudes to a wide variety of people and ideas and are unlikely to be convinced by evidence. Dismissives are those who are actively opposed to a particular scientific finding for reasons that are based on ideology, self-image, group identification, or vested interests. They generally will not be convinced by evidence.

References

Aguinis H., Ramani R. S., Alabduljader N. 2018. What you see is what you get? Enhancing methodological transparency in management research. The Academy of Management Annals, 12: 83-110.
Allen M. 1991. Meta-analysis comparing the persuasiveness of one-sided and two-sided messages. Western Journal of Speech Communication, 55: 390-404.
Amabile T. M., Kramer S. J. 2011. The power of small wins. Harvard Business Review, 89(5): 70-80.
Asimov I. 1980. A cult of ignorance. Newsweek, January 21: 19.
Banks G., Pollack J. M., Bochantin J. E., Kirkman B. L., Whelpley C. E., O’Boyle E. H. 2016. Management’s science-practice gap: A grand challenge for all stakeholders. Academy of Management Journal, 59: 2205-2231.
Bansal P., Bertels S., Ewart T., MacConnachie P., O’Brien J. 2012. Bridging the research-practice gap. Academy of Management Perspectives, 26(1): 73-92.
Bartunek J. M. 2007. Academic-practitioner collaboration need not require joint or relevant research: Toward a relational scholarship of integration. Academy of Management Journal, 50: 1323-1333.
Bennis W., O’Toole J. 2005. How business schools lost their way. Harvard Business Review, 83(5): 96-104, 154.
Bergh D. D., Sharp B. M., Aguinis H., Li M. 2017. Is there a credibility crisis in strategic management research? Evidence on the reproducibility of study findings. Strategic Organization, 15: 423-436.
Blank R. M., Dabady M., Citro C. F. (Eds.). 2004. Measuring racial discrimination [National Research Council Panel on Methods for Assessing Discrimination]. Washington, DC: National Academies Press.
Caprar D. V., Do B., Rynes S. L., Bartunek J. M. 2016. It’s personal: An exploration of students’ (non)acceptance of management research. Academy of Management Learning & Education, 15: 207-231.
Community for Responsible Research in Business and Management. 2018. A vision of responsible research in business and management: A necessary conversation. Retrieved from http://humanisticmanagement.international/wp-content/uploads/2018/03/HMNC-Responsible-Research-March52018-v4.pdf. Accessed July 21, 2018.
Dutton J. E., Dukerich J. M. 2006. The relational foundation of research: An under-appreciated dimension of interesting research. Academy of Management Journal, 49: 21-26.
Festinger L., Riecken H. W., Schachter S. 1956. When prophecy fails. Minneapolis: University of Minnesota Press.
Fredrickson B. L. 2009. Positivity. New York: Three Rivers Press.
Goleman D. 1998. Working with emotional intelligence. New York: Bantam Books.
Grandoni D. 2018. The Energy 202: Why climate scientists want to be thought of as the real “climate skeptics.” Washington Post. Retrieved from https://www.washingtonpost.com/news/powerpost/paloma/the-energy-202/2018/05/18/the-energy-202-why-climate-scientists-want-to-be-thought-of-as-the-real-climate-skeptics/5afded6330fb042588799589/?utm_term=.03b185167632. Accessed August 15, 2018.
Grant A. 2013. Give and take: A revolutionary approach to success. New York: Viking.
Haidt J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.
Haidt J. 2012. The righteous mind: Why good people are divided by politics and religion. New York: Penguin Books.
Hamby A., Brinberg D., Daniloski K. 2017. Reflecting on the journey: Mechanisms in narrative persuasion. Journal of Consumer Psychology, 27: 11-22.
Heath C., Heath D. 2007. Made to stick: Why some ideas survive and others die. New York: Random House.
Heath C., Heath D. 2010. Switch: How to change things when change is hard. New York: Broadway Books.
Herring C. 2009. Does diversity pay? Race, gender, and the business case for diversity. American Sociological Review, 74: 208-224.
Highhouse S. A. 2008. Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1: 333-342.
Hoffman A. J. 2015. How culture shapes the climate debate. Stanford, CA: Stanford University Press.
Hornsey M. J., Fielding K. S. 2017. Attitude roots and jiu jitsu persuasion: Understanding and overcoming the motivated rejection of science. American Psychologist, 72: 459-473.
Ioannidis J. P. 2005. Why most published research findings are false. PLoS Medicine, 2(8): e124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
Jay J., Grant G. 2017. Breaking through gridlock: The power of conversation in a polarized world. Oakland, CA: Berrett-Koehler.
Johns G. 1993. Constraints on the adoption of psychology-based personnel practices: Lessons from organizational innovation. Personnel Psychology, 46: 569-592.
Kerr N. L. 1998. HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2: 196-217.
Knowles E. S., Linn J. A. 2004. Approach-avoidance model of persuasion: Alpha and omega strategies for change. In Knowles E. S., Linn J. A. (Eds.), Resistance and persuasion: 117-148. Mahwah, NJ: Erlbaum.
Kuncel N. R., Rigdon J. 2013. Communicating research findings. In Weiner I. B. (Ed.), Handbook of psychology (2nd ed.): 43-58. Hoboken, NJ: John Wiley & Sons.
Latham G. P. 2007. A speculative perspective on the transfer of behavioral science findings to the workplace: “The times they are a-changin’.” Academy of Management Journal, 50: 1027-1032.
Latham G. P., Wexley K. N. 1981. Increasing productivity through performance appraisal. Reading, MA: Addison-Wesley.
Lee A-R. 2007. Value cleavages, issues, and partisanship in East Asia. Journal of East Asian Studies, 7: 251-274.
Marks J., Copland E., Loh E., Sunstein C. R., Sharot T. 2018. Epistemic spillovers: Learning others’ political views reduces the ability to assess and use their expertise in nonpolitical domains. Harvard Public Law working paper no. 18-22, Harvard University, Cambridge, MA. Retrieved from https://ssrn.com/abstract=3162009. Accessed August 15, 2018.
Masci D. 2017. For Darwin Day, 6 facts about the evolution debate. Retrieved from http://www.pewresearch.org/fact-tank/2017/02/10/darwin-day/. Accessed March 9, 2018.
Mayer J. 2017. Dark money. New York: Anchor.
Mooney C. 2006. The Republican war on science. New York: Basic Books.
Moyer-Gusé E., Nabi R. L. 2010. Explaining the effects of narrative in an entertainment television program: Overcoming resistance to persuasion. Human Communication Research, 36: 26-52.
Nichols T. 2017. The death of expertise: The campaign against established knowledge and why it matters. New York: Oxford University Press.
Niederdeppe J., Shapiro M. A., Porticella N. 2011. Attributions of responsibility for obesity: Narrative communication reduces reactive counterarguing among liberals. Human Communication Research, 37: 295-323.
O’Boyle E. H., Banks G., Gonzalez-Mulé E. 2016. The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43: 376-399.
O’Keefe D. J. 1999. How to handle opposing arguments in persuasive messages: A meta-analytic review of the effects of one-sided and two-sided messages. Annals of the International Communication Association, 22: 209-249.
Oreskes N., Conway E. M. 2010. Merchants of doubt. New York: Bloomsbury.
Orlitzky M. O., Schmidt F., Rynes S. L. 2003. Corporate social and financial performance: A meta-analysis. Organization Studies, 24: 403-441.
Pager D. 2007. The use of field experiments for studies of employment discrimination: Contributions, critiques, and directions for the future. The Annals of the American Academy of Political and Social Science, 609: 104-133.
Pager D., Bonikowski B., Western B. 2009. Discrimination in a low-wage labor market: A field experiment. American Sociological Review, 74: 777-799.
Petriglieri J. L. 2011. Under threat: Responses to the consequences of threats to individuals’ identities. Academy of Management Review, 36: 641-662.
Pew Research Center. 2014. Beyond red vs. blue: The political typology. Retrieved from http://www.people-press.org/2014/06/26/section-7-global-warming-environment-and-energy/. Accessed July 25, 2018.
Pew Research Center. 2016. The politics of climate. Retrieved from http://www.pewinternet.org/2016/10/04/the-politics-of-climate/. Accessed August 15, 2018.
Pew Research Center. 2017a. Partisans differ widely in views of police officers, college professors. Retrieved from http://www.people-press.org/2017/09/13/partisans-differ-widely-in-views-of-police-officers-college-professors/. Accessed January 7, 2018.
Pew Research Center. 2017b. The partisan divide on political values grows even wider: Race, immigration, and discrimination. Retrieved from http://www.people-press.org/2017/10/05/4-race-immigration-and-discrimination/. Accessed May 28, 2018.
Pfeffer J. 1998. The human equation. Boston: Harvard Business School Press.
Quillian L., Pager D., Hexel O., Midtbøen A. H. 2017. Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proceedings of the National Academy of Sciences of the United States of America, 114: 10870-10875.
Rousseau D. M., Boudreau J. W. 2011. Sticky findings: Research evidence practitioners find useful. In Mohrman S. A., Lawler III E. E. (Eds.), Useful research: Advancing theory and practice: 269-288. San Francisco: Berrett-Koehler.
Rousseau D. M., Manning J., Denyer D. 2008. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals, 2: 475-515.
Rynes S. L. 2012. The research-practice gap in I/O psychology and related fields: Challenges and potential solutions. In Kozlowski S. J. W. (Ed.), The Oxford handbook of organizational psychology, vol. 1: 409-452. New York: Oxford University Press.
Rynes S. L., Colbert A. E., Brown K. G. 2002. HR professionals’ beliefs about effective human resource practices: Correspondence between research and practice. Human Resource Management, 41: 149-174.
Sagarin B. J., Cialdini R. B. 2004. Creating critical consumers: Motivating receptivity by teaching resistance. In Knowles E. S., Linn J. A. (Eds.), Resistance and persuasion: 259-282. Mahwah, NJ: Erlbaum.
Sanders K., van Riemsdijk M., Groen B. 2008. The gap between research and practice: A replication study on the HR professionals’ beliefs about effective human resource practices. The International Journal of Human Resource Management, 19: 1976-1988.
Schmidt F. L. 2009. Select on intelligence. In Locke E. A. (Ed.), Handbook of principles of organizational behavior: 3-17. Chichester, England: John Wiley & Sons.
Schmidt F. L., Hunter J. E. 1998. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124: 262-274.
Shapiro D. L., Kirkman B. L., Courtney H. G. 2007. Perceived causes and solutions of the translation problem in management research. Academy of Management Journal, 50: 249-266.
Sullivan K., Jordan M. 2017. Elitists, crybabies and junky degrees: A Trump supporter explains rising conservative anger at American universities. Washington Post. Retrieved from http://www.washingtonpost.com/sf/national/2017/11/25/elitists-crybabies-and-junky-degrees/?utm_term=.d2b7c34a480a. Accessed March 7, 2018.
Tajfel H., Turner J. 1979. An integrative theory of intergroup conflict. In Austin W. G., Worchel S. (Eds.), The social psychology of group relations: 33-47. Monterey, CA: Brooks/Cole.
Tenhiälä A., Giluk T. L., Kepes S., Simón C., Oh I. S., Kim S. 2016. The research-practice gap in human resource management: A cross-cultural study. Human Resource Management, 55: 179-200.
Tetlock P. E. 2000. Cognitive biases and organizational correctives: Do both disease and cure depend on the politics of the beholder? Administrative Science Quarterly, 45: 293-326.
Thaler R. H., Sunstein C. R. 2009. Nudge: Improving decisions about health, wealth, and happiness. New York: Penguin.
Trank C. Q. 2014. “Reading” evidence-based management: The possibilities of interpretation. Academy of Management Learning & Education, 13: 381-395.
Trilling D., van Klingeren M., Tsfati Y. 2016. Selective exposure, political polarization, and possible mediators: Evidence from the Netherlands. International Journal of Public Opinion Research, 29: 189-213.
Tversky A., Kahneman D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185: 1124-1130.
Welbourne T. M., Andrews A. O. 1996. Predicting the performance of initial public offerings: Should human resource management be in the equation? Academy of Management Journal, 39: 891-919.



Published In

Article first published online: August 30, 2018
Issue published: November 2018

Keywords

  1. evidence-based management
  2. academic-practice gap
  3. motivated reasoning
  4. persuasion
  5. decision making

Rights and permissions

© The Author(s) 2018.

Authors

Affiliations

Amy E. Colbert

Notes

Sara L. Rynes, University of Iowa, 108 Pappajohn Business Building, Iowa City, IA 52242, USA. E-mail: [email protected]

This article was published in Journal of Management.

