Closer to the field of management,
Rynes, Colbert, and Brown (2002) found a number of large discrepancies between research findings and the beliefs of 959 human resource (HR) professionals.
For example, the HR managers in their study did not believe that goal setting is more effective than employee participation for improving organizational performance, that most errors in performance appraisal cannot be eliminated by error-reduction training, that intelligence is a better predictor of performance than either values or conscientiousness, or that intelligence improves performance even on low-skilled jobs. Similar results have been found using samples of HR managers from Holland, Finland, South Korea, and Spain (
Sanders, van Riemsdijk, & Groen, 2008;
Tenhiälä, Giluk, Kepes, Simón, Oh, & Kim, 2016). Additionally,
Highhouse (2008) showed that even when HR professionals are aware of research findings demonstrating that tests and actuarial selection models are superior to unstructured interviews, they still believe the findings do not apply to them personally. Turning to more macro topics, many institutional investors believe there is a negative relationship between corporate social responsibility and corporate financial performance (
Jay & Grant, 2017), even though meta-analyses show a positive relationship (
Orlitzky, Schmidt, & Rynes, 2003). Similarly,
Welbourne and Andrews (1996) found that firms undergoing initial public offerings (IPOs) had a significantly larger chance of 5-year survival if they placed a higher value on human resources and used more organizational performance–based compensation for a wide range of employees. However, examination of investors’ stock price premia for IPOs showed that they evaluated organizational performance–based pay
negatively rather than positively and ignored the value placed on human resources.
Of course, disregarding scientists and disbelieving scientific research is neither new nor confined to management. However, there are reasons for growing alarm about the disbelief of scientific findings across a wide range of professional domains because it seems to reflect a much broader drop in the credibility of academics and scientists.
Growing Distrust and Reduced Credibility of Academics
Some of the reduced credibility and increased distrust of academics and research stems from the rapid rise in studies suggesting that existing research findings are not nearly as robust as previously believed. Reasons for this range from relatively innocent causes (e.g., undetected analytical errors) to questionable research practices (such as excluding outlier data or hypothesizing after the results are known, or HARKing;
Kerr, 1998) to out-and-out falsification of data or results. These problems have been exposed in nearly all scientific fields (
Ioannidis, 2005), including management and psychology (
O’Boyle, Banks, & Gonzalez-Mulé, 2016) and strategic management (
Bergh, Sharp, Aguinis, & Li, 2017). As such, at least part of the blame for reduced credibility of academic research lies within the academic community.
However, researchers themselves are hardly the only ones to blame for the public’s growing distrust of research. For example, extensive investigations reveal that there have been many well-funded, concerted efforts to discredit solid scientific research for self-interested political, ideological, or economic ends (
Mayer, 2017;
Mooney, 2006;
Oreskes & Conway, 2010). These campaigns have painstakingly worked not only to discredit research findings but also to smear the reputations of the scientists who produced them. Although most of the external attacks on research and researchers have been leveled in fields other than management, they have dealt a heavy blow to the credibility and perceived trustworthiness of science and scientists in general, as well as the universities and other organizations that employ them.
For example, one survey showed that 24% of Americans feel “cold” or “very cold” toward professors (
Pew Research Center, 2017a). Another Pew survey showed that although 72% of Democrats believe colleges and universities have a positive effect on the country, 58% of Republicans believe they have a negative effect (up 21 percentage points from 2015;
Sullivan & Jordan, 2017). Yet another Pew survey showed that 39% of Americans do not believe that climate scientists provide full and accurate information about climate, with 36% believing that their findings mostly reflect their desire for career advancement and 27% their political leanings (
Pew Research Center, 2016). Taken together, these findings suggest that although professors are still held in high esteem by many members of the U.S. citizenry, to a substantial minority they are perceived as social out-groups, with all the attendant negative consequences of that characterization (
Tajfel & Turner, 1979).
A further illustration of the effects of political polarization on perceptions of experts was provided in a carefully controlled study by
Marks, Copland, Loh, Sunstein, and Sharot (2018). Marks et al. showed that people performing an online task preferred to seek the advice of politically like-minded collaborators (actually a computer algorithm) and believed that those collaborators had higher ability on the task, despite evidence presented to the contrary. Marks et al. conclude that “knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement” (2). In the present context, this suggests that the knowledge and expertise of conservative academics are likely to be discounted or discredited by liberal audiences and vice versa. While examples of this phenomenon are ubiquitous in the current political environment in the United States, similar trends have also been documented in Europe (
Trilling, van Klingeren, & Tsfati, 2016) and parts of East Asia (
Lee, 2007).
Put into an even larger context,
Nichols (2017) argues that professors are just one of many types of professionals—including doctors, lawyers, and realtors—who increasingly find their expertise challenged by patients or clients or by students who do not value their many years of study and specialized expertise. The decline in respect for people who have acquired deep knowledge in specialized areas was described nearly 40 years ago by Asimov as “a strain of anti-intellectualism [that] has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’” (
1980: 19). Nichols believes that anti-intellectualism has now evolved much further, into what he calls the Death of Expertise:
a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laypeople, students and teachers, knowers and wonderers . . . not just a rejection of existing knowledge, [but rather] fundamentally a rejection of science and dispassionate rationality, which are the foundations of modern civilization. (3-5)
Again, while this attitude may not describe the majority of the U.S. population, it does describe a substantial minority.
The Role of Motivated Reasoning in Rejecting Specific Research Findings
To this point, we have argued that the reduced credibility and perceived expertise of academics among a sizeable segment of the population threaten the likelihood that managers will look to academic research for advice or apply empirically validated best practices. However, sometimes skepticism or dismissiveness comes not from attitudes toward the messenger but, rather, from reactions to specific messages. That people are not dispassionately rational in their evaluations of arguments or data has been well demonstrated (
Tversky & Kahneman, 1974). Rather, humans pursue motivated reasoning designed to arrive at particular, favored conclusions that are primed by our deeper underlying values, worldviews, vested interests, fears, and self-identities and social identities (
Haidt, 2001). Moreover, having arrived at our judgments emotionally (and with split-second alacrity), we then
justify them rationally and close ourselves off from disconfirming evidence.
Take, for example, the strongly disbelieved research finding that intelligence is the single best predictor of job performance (
Schmidt & Hunter, 1998). There are several reasons to suspect that motivated reasoning is at play here. First, the fact that intelligence has a substantial genetic component means that low intelligence cannot be completely overcome by hard work, a fact that may threaten people’s sense of fairness and feelings of control. Second, the fact that, for a variety of reasons, there are average differences in intelligence scores across racial and ethnic groups means that findings about the importance of intelligence can also threaten one’s self-identity or social identity. Third, research suggesting a strong link between intelligence and performance may also threaten the self-image of those whose past experiences (especially academic ones) may have caused them to feel insecure about their intelligence. Similarly, the widely disbelieved findings that both tests and algorithmic employee selection methods are more accurate than subjective interviews (
Highhouse, 2008) are likely to threaten managers’ sense of autonomy, control, and self-image as competent people.
Indeed, an experiment by
Caprar, Do, Rynes, and Bartunek (2016) revealed clear evidence of motivated cognition in management students’ agreement or disagreement with three research essays related to predictors of job performance. The first essay, excerpted from
Schmidt (2009), argued that employers should test for intelligence because it is the best predictor of job performance. The second essay, excerpted from
Goleman (1998), argued to the contrary that emotional intelligence is the best predictor and that intelligence is a weak predictor, explaining only 10% to 25% of the variance in performance (implying that emotional intelligence explains the rest). The third, taken from
Pfeffer (1998), argued that “fit” is the best predictor of performance. After reading all three essays (presented in counterbalanced order across subjects), the students agreed most strongly with the emotional intelligence argument (3.73 on a six-item, 5-point scale), followed by the fit argument (3.69), with the intelligence argument trailing considerably (3.35). Moreover, analyses showed a significant positive correlation between students’ grade point averages (GPAs) and their agreement with the intelligence essay but not their agreement with the other two essays. Finally, students were asked whether employers should use intelligence tests in hiring (yes/no) and why. Low-GPA students were more likely not only to say “no” but also to use self-protecting reasons to explain their beliefs (e.g., “I don’t have the highest GPA in the world but I am a very hard worker and strive to get my work done all the time,” or “Just because someone is intelligent doesn’t make them a good worker or employee. . . . I mostly base this opinion on personal experience”; Caprar et al.: 219).
Although there are not many studies like
Highhouse (2008) or
Rynes et al. (2002) that directly examine the extent to which practitioners’ beliefs differ from management research findings, we believe there are a number of management topics for which people are likely to react to research findings with motivated reasoning. Take, for example, field experiments demonstrating the continued existence of discrimination in hiring (e.g.,
Pager, Bonikowski, & Western, 2009). Whether people believe this research is likely to depend on their demographic characteristics, social identities, political affiliation, and worldviews about whether people get what they deserve in life (
Hornsey & Fielding, 2017;
Pew Research Center, 2017b). Similarly, research suggesting the benefits of diversifying the labor force or promoting women or minorities into leadership positions (e.g.,
Herrin, 2009) is likely to threaten the vested interests of members of currently overrepresented groups while raising the hopes and aspirations of others. Research on immigration and globalization may also trigger people’s in-group identities, leading to derogatory stereotypes of the out-groups championed by such research (
Petriglieri, 2011).
Many people are also likely to use motivated reasoning when evaluating research-based claims about the causes and consequences of pay inequality, the reduction of which was the number one “grand challenge” noted by the combined group of academics and practitioners (
n = 1,767) in the
Banks et al. (2016) study. For example, those with strong hierarchical, “just world,” and social dominance worldviews are more likely to accept privilege based on existing social strata and to view such hierarchies as both natural and valuable (
Hornsey & Fielding, 2017). As such, they are more likely to embrace “trickle-down” economic policies (i.e., reduced taxes on the wealthy) than attempts to reduce income inequality via more egalitarian pay policies or government regulation, while those with opposing worldviews feel otherwise. Similarly, research suggests that conservative managers with strong needs for cognitive closure are more likely to prefer shareholder over multistakeholder models of governance and hierarchical structuring that reduces the need for argumentation or negotiation with those lower in the hierarchy (
Tetlock, 2000). More generally, topics such as corporate social responsibility, corporate governance, and business policy are likely to be responded to differently on the basis of people’s beliefs about the relative value of individualism versus collectivism, as well as their social identification with political ideologies such as conservatism, liberalism, or libertarianism (
Haidt, 2012). Although the particular underlying beliefs that influence motivated reasoning may differ across cultures, the phenomenon of motivated reasoning is universal and, thus, likely to shape beliefs about research findings globally.
The fact that people do not respond to research findings as rational “blank slates” poses serious challenges for EBMgt. First, it means that just having a strong body of evidence may be insufficient to convince many people of its validity, particularly if the topic is one about which passions run high. When people emotionally reject research findings, they do not do so based on evidence; rather, they do so based on intuitive judgments or “gut instincts” reflecting their values, fears, personal experiences, vested interests, need to preserve self-esteem, or desire to maintain autonomy or control. In such cases, it will be ineffective to simply continue to present the scientific message. As Hornsey and Fielding note, “If people are
motivated to reject a scientific message, then continually presenting the scientific message represents a misunderstanding of what is causing the incomprehension, one that is likely to lead to frustration” (2017: 468). Furthermore, even adding to an already-solid research base generally won’t do the trick because people often dig in even further when presented with additional contrary evidence (
Festinger, Riecken, & Schachter, 1956). This means that more effective methods of persuasion must be found.
In sum, there are many topics that we research and teach about in management that are likely to arouse skepticism or even dismissiveness among some practitioners and students. As educators, we should both expect and invite skepticism because skepticism builds critical thinking and is likely to help us improve our arguments. Still, skeptical audiences are more challenging to persuade than inherently supportive ones, and cynical or dismissive audiences are more challenging still (
Hoffman, 2015;
Hornsey & Fielding, 2017).
Potential Solutions
To this point, we have argued that to the extent that our audiences include people who are skeptical or distrustful of academics or motivated to resist particular research-based messages, standard EBMgt recommendations—to conduct systematic reviews, replicate findings, reduce sources of error, make findings more easily available, and provide high levels of transparency—may be insufficient. Rather, when audience members distrust the messenger and/or disagree with particular messages, we need to think much more carefully about how to overcome skepticism or resistance. Given that resistance is based on emotions, values, fears, identities, and worldviews, we have to find ways to address those underlying issues. To fail to do so is to leave us vulnerable to an ineffective type of shadowboxing where “each contestant lands heavy blows to the opponent’s shadow, then wonders why she doesn’t fall down” (
Haidt, 2001: 823). Below, we focus on two general strategies for improving the credibility of management academics and increasing the likelihood that people will accept our research, as summarized in
Table 1.
Strategies for Increasing Public Trust and Academic Credibility
Improve research creation
Most discussions of the A-P gap have focused mainly on transmitting research findings to practitioners. As such, they do not focus very directly on the question of whether our research is “worth” transmitting. However, this is beginning to change. For example, one obvious recommendation has been for researchers to focus on bigger, more important problems (
Bennis & O’Toole, 2005) rather than problems that primarily fill gaps in the academic research literature. A related suggestion is to broaden the range of stakeholders whose interests are considered, moving from an overemphasis on shareholders to broader considerations of customers, employees, local communities, taxpayers, the environment, and society as a whole (
Community for Responsible Research in Business and Management, 2018).
Calls have also escalated for research that is cocreated between academics and practitioners. In most management research, practitioners simply serve as data sources, providing surveys, interviews, or archival records for academics to use in testing hypotheses. This may reinforce the perception that academics are “other” and deepen the A-P gap. However, when academics cocreate with practitioners, the questions that are posed and the methods by which they are addressed are more likely to produce research that is both relevant and translatable to practitioners (
Bansal, Bertels, Ewart, MacConnachie, & O’Brien, 2012).
In addition, building high-quality connections while pursuing joint research increases the likelihood that the outcome will be more interesting and significant to both parties (
Dutton & Dukerich, 2006). This is because high-quality connections foster greater emotional involvement, incorporate more give-and-take, and open interaction partners up to new ideas and influence. Indeed,
Bartunek (2007) argues that simply creating strong relationships with practitioners is likely to yield many mutual learning benefits and increase trust, even if those relationships do not develop into joint research projects.
Finally, at the same time that we take these positive steps to make our research more important, relevant, and useful to stakeholders other than ourselves, we also need to improve the quality, replicability, and transparency of our research so that we avoid the negative publicity created by embarrassing failures to replicate and retractions of research involving outright fraud (
Aguinis, Ramani, & Alabduljader, 2018;
O’Boyle et al., 2016). Several journals have recently tightened their ethics codes and/or changed their submission procedures to deal with these issues. For example,
Personnel Psychology has instituted CrossCheck, a plagiarism detection tool, for all new submissions. In an effort to disincentivize HARKing, the
Journal of Business and Psychology now allows authors to submit manuscripts for results-blind review where manuscripts are initially evaluated based on the introduction and methodology alone. Other outlets such as
Strategic Management Journal now desk reject manuscripts that rely on cutoff values (e.g.,
p < .05) for statistical support as a means to reduce “p-hacking” (i.e., repeatedly re-running analyses with slight changes until statistical significance is achieved). The
Journal of Applied Psychology now requires authors to conform to the American Psychological Association’s Journal Article Reporting Standards, which call for greater transparency and replicability. The
Journal of Management, along with a number of other journals in the field, is now a member of the Committee on Publication Ethics, or COPE, that establishes a code of conduct to discourage unethical practices and creates clear guidelines on how to handle allegations of author, reviewer, and editor misconduct.
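To make the “p-hacking” problem described above concrete, the following toy simulation (our own illustrative sketch, not from the article; sample sizes, the .05 threshold, and the normal approximation are arbitrary choices) shows how one common form of it, adding observations and re-testing until significance appears, inflates the false-positive rate well beyond the nominal 5% even when no true effect exists:

```python
# Illustrative sketch (not from the article): simulate "peeking" at data
# and adding observations until p < .05, under a true null effect.
import math
import random
import statistics

def p_value(sample):
    """Two-sided one-sample test of mean 0 (normal approximation, for simplicity)."""
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def hacked_study(start_n=10, max_n=100):
    """Draw data from a true null, but stop and report as soon as p < .05."""
    data = [random.gauss(0, 1) for _ in range(start_n)]
    while len(data) < max_n:
        if p_value(data) < .05:
            return True  # "significant" result reported
        data.append(random.gauss(0, 1))
    return p_value(data) < .05

random.seed(1)  # for reproducibility
false_positive_rate = sum(hacked_study() for _ in range(1000)) / 1000
print(false_positive_rate)  # well above the nominal .05
```

Because each additional "peek" offers a fresh chance of a spurious rejection, the realized Type I error rate climbs far above the nominal level, which is one rationale behind journal policies that discourage reliance on significance cutoffs.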
Taken together, making progress on the preceding suggestions is likely to increase the credibility of management academics as well as the perceived relevance of our research. In the terminology of
Shapiro, Kirkman, and Courtney (2007), these strategies will help to reduce the problem that much management research is “lost
before translation” because of unimportant or uninteresting topics and inadequate input from practitioners in both problem selection and research design. We now address the problem of research being “lost
in translation.”
Improve research dissemination and communication
To outsiders, the current publishing model of academic research is likely to appear strange, counterintuitive, and wasteful. Academics working in publicly funded schools are paid to produce research, which is then handed over to journals free of charge. The journals then sell the research back to those same publicly funded schools, and others, for a premium. Nonacademics are unlikely to see this as the most efficient use of their tax dollars. To make matters worse, even if academic journal articles were made more widely available, the writing is largely uninterpretable without substantial research training. Thus, even research that practitioners might find interesting, important, and useful generally remains little known and underutilized.
Given that the current publishing model is unlikely to change dramatically in the short term, how do we make our research more accessible and interpretable? Experts have long recommended publishing findings in outlets that are accessible to practitioners (e.g., practitioner and bridge journals), but researchers may struggle to learn the distinct style of communication needed for such articles and are uncertain about the rewards of doing so. Perhaps even more challenging, many practitioners, students, and members of the general population now get much of their information from sources that were barely in use little more than a decade ago, such as blogs, online videos, and various forms of social media. However, the best opportunities to humanize management academics and get research evidence to the public may lie in these alternative forums.
For example, many of the most watched TED talks (e.g.,
The Power of Vulnerability, The Power of Introverts) are based on social science research and have been viewed millions of times—considerably more than the original work discussed in the videos. Indeed, TED recently began a regular podcast series,
WorkLife With Adam Grant, focusing explicitly on work-related issues of interest to both managers and employees. Additionally, online communities such as Reddit can be leveraged to gain a larger audience for research findings. A doctoral candidate at Indiana University recently had her
Journal of Applied Psychology article discussed on Reddit’s front page. The discussion of the central finding, that women experienced more workplace incivility from other women than from men (i.e., Queen Bee Syndrome), stayed on the front page for almost the entire day and generated over 4,000 comments and more than 60,000 votes (the metric by which readers indicate importance). For additional context, according to Statista (
https://www.statista.com/statistics/443332/reddit-monthly-visitors/), Reddit is visited approximately 1.6 billion times monthly. Other opportunities for reaching practitioner audiences include research-based business school websites such as Knowledge@Wharton and massive open online courses such as Scott DeRue’s
Leading Teams. Many business schools and the Academy of Management have long employed research publicists, a step recently taken by the
Journal of Management as well. Another advance has been the creation of the Behavioral Science and Policy Association, which produces a weekly digital newsletter and a peer-reviewed journal,
Behavioral Science and Policy, featuring short, accessible articles describing actionable policy applications of behavioral science research. These new types of media help to address practitioners’ expressed desire for continuing education (
Banks et al., 2016).
No matter what media are used to share research findings, researchers should also consider how best to grab attention in ways that will increase interest in research evidence. Perhaps the most universally successful technique for doing so is to open with a compelling story that hooks the reader, rather than diving right into data or beginning with a theoretical exposition. This tactic has been used in popular research-based books by academic authors such as
Thaler and Sunstein’s (2009) Nudge,
Heath and Heath’s (2010) Switch, and
Grant’s (2013) Give and Take and can easily be used by academics who give media interviews or speak to practitioner audiences.
Storytelling is effective for a number of reasons. For example, people are less likely to counterargue evidence that is presented in narrative format (
Niederdeppe, Shapiro, & Porticella, 2011), perhaps because it frames the discussion in a particular way and puts the audience in a confirmatory mind-set. Narratives also reduce reactance to persuasive messages (
Moyer-Gusé & Nabi, 2010) and have been shown to evoke retrospective reflection, which increases the likelihood that people will recall memories consistent with the narrative that strengthen their belief (
Hamby, Brinberg, & Daniloski, 2017). Stories further address the well-established finding that most people are more convinced by a compelling story than by large-sample statistical evidence (
Rynes, 2012).
Yet another way of improving the credibility of scientific findings might be to use graphics, rather than text-based messages, and statistics that better convey practical importance to nonscientists than correlations and (especially) the coefficient of determination (
Kuncel & Rigdon, 2013). Examples of more effective means of communication include risk ratios, graphical displays of contingency tables, stacked bar charts, and binomial effect size displays that convert correlations into percentages comparing different groups (e.g., outcomes for those scoring above average vs. those scoring below).
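As an illustration of the binomial effect size display mentioned above (a standard technique from Rosenthal and Rubin), a correlation r can be re-expressed as the difference in “success” rates between two groups, each assumed to have a 50% base rate. The sketch below is our own; the function name and example r value are arbitrary choices:

```python
# Illustrative sketch of a binomial effect size display (BESD):
# a correlation r becomes the gap in "success" rates between an
# above-average and a below-average group, centered on 50%.
def besd(r):
    """Return (above-average group, below-average group) success rates in %."""
    return (50 + 100 * r / 2, 50 - 100 * r / 2)

# e.g., a validity coefficient of r = .30 ...
high, low = besd(0.30)
print(f"{high:.0f}% vs. {low:.0f}%")  # -> 65% vs. 35%
```

Framing an r of .30 as “65% versus 35%” conveys its practical importance far more vividly to nonscientists than reporting that it “explains 9% of the variance.”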
Finally, based on
Heath and Heath’s (2007) research-based book
Made to Stick,
Rousseau and Boudreau (2011) offered a set of recommendations to make research findings “sticky.” Two recommendations are to transmit core principles in plain language and to use familiar analogies. Rousseau and Boudreau’s examples include using the core principle “losses hurt more than gains feel good” (277) to summarize prospect theory or the core principle “socially distant groups tend to have negative perceptions of each other” (275) to explain the main thrust of intergroup research. Another principle, “embed findings within existing practitioner decision frameworks” (
Rousseau & Boudreau, 2011: 278), is exemplified by showing how the monetary value of recruitment, selection, and retention processes can be estimated using processes and frameworks similar to those employed in supply chain analysis. Additional principles include presenting findings in ways that are deliberately targeted toward practitioners, such as framing research according to end-users’ interests, starting with a relevant story, providing general “dos and don’ts,” explicating known boundary conditions, and including opinion leaders’ testimonies.
Strategies for Anticipating and Addressing Resistance to Specific Findings
To this point, we have addressed general persuasive strategies for more effectively presenting evidence-based messages where there is not a particular reason to expect much resistance. As such, the above strategies are likely to be effective where the topic is not strongly emotional and the audience is either “friendly” or a mix of sympathetic and skeptical readers or listeners. However, as indicated earlier, there are many management topics about which we should expect at least some members of our audiences to have strongly held views (e.g., discrimination, diversity, employment testing, socially responsible investing). When addressing such topics, additional tactics that anticipate and address resistance more directly are needed.
Use dialectic methods or two-sided arguments
When researchers have the opportunity to engage directly with an audience, they may be able to shift beliefs about research evidence by helping people reach their own evidence-based conclusions. One way this can be done is through the use of dialectic (rather than didactic) methods. With dialectic methods, researchers actively engage in discourse among people holding (or role-playing) different points of view with the goal of establishing truth through evidence. For example,
Latham (2007) has his executive master of business administration students vigorously debate the likely effectiveness of different methods of performance appraisal before revealing the evidence-based answer. By encouraging debate with and among students rather than simply talking at them, professors begin to create trust and reduce the chances of being perceived as condescending or “other” (e.g.,
Trank, 2014). This type of debate can also shift the audience to deeper information processing, reducing the effects of motivated reasoning.
A similar approach can be used in speaking to or writing articles for practitioners about potentially contentious issues. For example, one might begin an article about employment discrimination by asking the reader to consider whether discrimination still exists in hiring and then laying out various arguments in favor of each case (yes vs. no)—a technique known as two-sided arguing (
Knowles & Linn, 2004). Presentation of both cases can then be followed by the introduction of meta-analytic evidence from field experiments showing clear evidence of persistent discrimination to the present time period (e.g.,
Pager, 2007;
Quillian, Pager, Hexel, & Midtbøen, 2017). This strategy of “two-sided arguing with refutation” has been found to be more successful than presenting only a single position (
Allen, 1991;
O’Keefe, 1999), particularly in cases where those holding the weaker position would be likely to come up with counterarguments on their own if only the evidence-based side were presented.
Use the best available evidence and explain research methods along with findings
In the preceding example regarding the continued existence of discrimination in hiring, it is possible to present research evidence on both sides of the debate. For example, one can find many surveys showing that employers and members of the public report less discriminatory attitudes than in the past or that companies have increasingly recognized diversity as a goal and revamped hiring procedures to attempt to lessen discrimination. However, the research methods behind such studies (mostly attitude or self-reported practice surveys) are weaker in both internal and external validity than the meta-analyses of field experiments cited above (
Blank, Dabady, & Citro, 2004). Because of the vast superiority of some methods over others, practitioners of EBMgt discuss procedures along with results as a way of building greater understanding of the value of strong research methods and increasing the confidence of those without strong research backgrounds.
The potential benefits of teaching audiences about stronger versus weaker methodologies have been demonstrated by
Sagarin and Cialdini (2004) in the context of marketing advertisements. They found that training students to be critical of advertisement content and to identify credible versus noncredible sources for messages made them subsequently less resistant to legitimate and appropriate sources and less persuaded by ads with illegitimate uses of heuristics than were people not so trained. According to Knowles and Linn,
It is as if a general wariness that untrained participants applied to all advertisements was lifted from the legitimate ads after training. People who are provided with a sense of power, efficacy, control, and competence seem to have less need to be wary. (2004: 128)
Use experiential methods
Experiential methods can be a terrific way of getting an audience engaged in a topic. In addition, they can be used to reduce people’s well-known overconfidence about their ability to evaluate other people or make correct managerial decisions. For example,
Latham and Wexley (1981) describe a training session they conducted with middle managers who had several years of experience in conducting performance appraisals. They gave these managers a detailed job description for a position in their unit and then had them view one of two job interviews. Everything about the two interviews was identical, except that in the first condition, the applicant said that he had 2 brothers, a father with a Ph.D. in physics, and a mother with a master’s degree in social work. In the second video, the tape was spliced so that the (same) applicant said he had 12 siblings, his father was a bus driver, and his mother was a maid. The first group rated the applicant a 9 on a 9-point qualifications scale, while the second group gave this same person 5s and 6s—a perfect illustration of the “similar-to-me” rating error. Such an experience can help reduce overconfidence and increase motivation to improve.
Although it is easiest to think of how to apply experiential methods in face-to-face situations, they can also be used in written contexts. For example, writers of research-based books sometimes include links to surveys, questionnaires, or self-assessments that help people see where they have the biggest opportunities for changing or improving their current situations in an evidence-guided way (e.g.,
Fredrickson, 2009;
Grant, 2013). In addition, they give explicit guidance about how to put certain research findings into action. Similar approaches can be found in research-based articles, such as
Amabile and Kramer’s (2011) article on how to take advantage of the emotional and productivity benefits of small wins.
Use jiu jitsu persuasion to deal with resistance to specific issues
Broadly speaking, all of the above strategies require researchers to understand the underlying factors that cause resistance to research findings. The most effective persuasion strategies will vary, however, depending on the specific forces that are shaping beliefs. For example, changing the minds of people who resist research findings because they are inconsistent with the beliefs of their identity groups will require different strategies than changing the minds of people who question the credibility of research in general.
Hornsey and Fielding refer to these targeted persuasion strategies as “jiu jitsu persuasion,” explaining that
rather than taking on people’s surface attitudes directly (which causes people to tune out or rebel), the goal of jiu jitsu persuasion is to identify the underlying motivation, and then to tailor the message so that it aligns with that motivation. (2017: 469)
An example of jiu jitsu persuasion was offered by
Jay and Grant (2017) in their book on the role of conversation in overcoming conflict that stems from value differences. They described two investment advisors who kept running into a brick wall when trying to convince pension fund managers and other institutional investors to consider socially responsible investments. Their argument that social investing is compatible with “doing well while doing good” (which is also consistent with meta-analytic evidence; e.g.,
Orlitzky et al., 2003) fell on deaf ears because fund managers held an implicit theory that the two goals were locked in an inevitable tradeoff.
Jay and Grant (2017) suggest a four-step solution for honoring conflicting values based on embracing the tension between them. The first step, to “move beyond factual debates and clarify values, as well as associated hopes and fears” (
Jay & Grant, 2017: 151), had already been accomplished by uncovering the clash between implicit “win-win” and “zero-sum” theories of the issue. Their second step, “own the polarization” (151), involves acknowledging your own ambivalence and the concern you have for the other person’s values or worldview. For example, the investment advisors acknowledged that there are indeed some trade-offs, particularly in the short term (e.g., higher wages might dampen short-term financial results), and expressed appreciation and concern for the fund managers’ fiduciary responsibility. In Jay and Grant’s third step, “expand the landscape” (151), the investment advisors drew a graph of financial performance versus social impact, but then also drew a line suggesting that the downward-sloping (i.e., trade-off) line could be shifted upward through innovation (e.g., “We could imagine shifting this line outward—finding clever investment strategies that could break the trade-offs. We could do that by paying attention to information that other investors aren’t paying attention to”; 151). In their fourth step, “dancing in the new terrain” (151), the parties worked together to generate options that moved from “either-or” to “both-and” solutions. Jay and Grant report that “the results were significant: the team’s clients responded positively to this conversation and began seriously considering their (social impact) approach. The conversation created an opening where one had not existed before” (145).