Open Access (CC BY 4.0 license). Published by De Gruyter Mouton, June 6, 2020

Two-sided science: Communicating scientific uncertainty increases trust in scientists and donation intention by decreasing attribution of communicator bias

Mickey J. Steijaert, Gabi Schaap and Jonathan Van’t Riet
From the journal Communications


Previous research has shown that uncertainty communication by scientists (i. e., expressing reservations towards their own research) increases the public’s trust in their work. The reasons for this have not been elucidated, however. In the present study, we provide a theoretical explanation for this phenomenon. Specifically, we expected that attributed communicator bias would mediate the effect of uncertainty communication on trust. Results from a mixed-design experiment (N = 88), using modified science news articles, revealed support for this hypothesis. Positive effects of uncertainty communication on trust and donation intention were both mediated by attributed communicator bias.

1 Introduction

Trust in scientists is an important factor in determining informed judgments on issues such as technology, climate, or health (Retzbach, Otto, and Maier, 2016). There is, however, growing concern about decreasing public trust in science (Weingart, 2012). As a result, many scholars have called for a more honest communication of scientific research, most notably to openly transmit scientific uncertainty to the public as a way to retain public trust (e. g., Jensen, 2008; Leshner, 2003; Zehr, 1999).

Uncertainty and tentativeness are core features of the scientific endeavor: They ensure open and evidence-based debate within both the scientific and the public realm. In news media, however, scientific information is largely stripped of uncertainty (Pellechia, 1997; Singer, 1990; Stocking, 1999; Stocking and Holstein, 1993, 2009; Weiss and Singer, 1988). This may be the result of scientists’ own reluctance to communicate uncertainty. Although there is sometimes an incentive to communicate uncertainty in circumstances where it increases the chances of further research funding (Mellor, 2010), scientists also fear a loss of authority when they express uncertainty about their work (Post, 2016; Post and Maier, 2016). These fears may be unwarranted: There is no compelling evidence that communicating scientific uncertainty negatively affects public perceptions of science. In fact, the evidence is quite mixed. Empirical research on the effects of uncertainty communication on perceptions of science or scientists sometimes yields positive effects (Crismore and Vande Kopple, 1997; Jensen, 2008; Retzbach et al., 2016) and sometimes no effects (Binder, Hillback, and Brossard, 2016; Jensen et al., 2011; Ratcliff, Jensen, Christy, Crossley, and Krakow, 2018; Thiebach, Mayweg-Paus, and Jucks, 2015; Winter, Kramer, Rosner, and Neubaum, 2015). These contradictory findings suggest that uncertainty has different effects in different contexts. How context influences the effect of uncertainty is unclear, as the cognitive processes whereby individuals perceive and judge scientific uncertainty remain largely unidentified. The current study contributes to the literature by probing cognitive processes that may explain the effect of uncertainty communication on trust. To do so, the study applies concepts from attribution theory.
Where scientists presumably fear that uncertainty communication damages their authority (Post and Maier, 2016), in this study we take an alternative position, assuming that scientists communicating uncertainty may be seen as more objective and less biased (Eisend, 2007). Thus, the current research explores to what extent the effect on trust of communicating uncertainty in science communication can be explained by attribution of communicator bias.

In addition, whereas prior research has been limited to perceptions of science, the perpetuation of science hinges in part on public willingness to devote financial resources to research. Therefore, in addition to measuring the effects on trust, this study goes one step further to investigate the behavioral consequences of uncertainty communication, focusing on the intention to donate to research as a dependent variable.

Trust in science and uncertainty

In the current study, trust is conceptualized as the belief, resulting from cognitive evaluations, that a scientific communicator is inclined to communicate the truth as she or he actually sees it (Hovland, Janis, and Kelley, 1953; Jensen, 2008; O’Keefe, 2016; Whitehead, 1968). Trust is often seen as one dimension of communicator credibility, the other being a communicator’s expertise or competence (O’Keefe, 2016).

The predictor of trust in this study is uncertainty communication in science news: the communication of any limitation of, or tentativeness about, a given study in the news. For example, researchers may point to the limited generalizability of their research, to limitations in the study design, or note that more research is needed before any definitive claims can be made (Peters and Dunwoody, 2016). While this may be sound scientific practice, it is the exception rather than the rule in science communication to a lay public (Lai and Lane, 2009; Pellechia, 1997; Stocking, 1999). Previous research on uncertainty communication has investigated its effects both on trust and on the broader concept of credibility – which includes perceived expertise. This research has yielded conflicting results, suggesting differential effects on trust, on the one hand, and credibility, on the other. Jensen (2008) found a positive effect of uncertainty communication on trust, Retzbach et al. (2016) reported a marginally positive effect on trust in scientists in general (i. e., not a scientist in a specific message), and Crismore and Vande Kopple (1997) found that readers perceived the author of a scientific text containing uncertainty as more believable.

In contrast, the perceived expertise of specific scientists, trust in medical professions, argument credibility, and personal certainty about technological risks appear not to be affected by uncertainty communication (Binder et al., 2016; Jensen, 2008; Jensen et al., 2011; Thiebach et al., 2015; Winter et al., 2015). Most notably, Ratcliff et al. (2018) failed to replicate Jensen’s preliminary results in a larger sample, finding no effect of uncertainty on scientist credibility in health news. These conflicting results may in part be explained by the fact that, as Retzbach et al. (2016) argue, admitting uncertainties may enhance a scientist’s perceived trustworthiness, as it conveys a willingness to tell the truth. At the same time, it might reduce the perceived competence (i. e., expertise) of scientists. Indeed, Hendriks, Kienhues, and Bromme (2016a) found that admitting a study’s flaws damaged an expert blogger’s perceived expertise, while increasing levels of perceived integrity and benevolence. This has led some authors to conclude that trust is the aspect of a communicator’s credibility most likely to be positively affected by communicating uncertainty, as it conveys a sense of objectivity or unbiased communication (Crismore and Vande Kopple, 1997; Jensen, 2008; Retzbach et al., 2016). However, these assumptions are in need of substantiation.

In the current study, we focus on the impact of uncertainty communication on trust in a scientist in the news. Participants receive messages that are identical except for the level of uncertainty that is communicated. We expect that beliefs about a scientist’s objectivity and willingness to tell the truth are positively affected by communicating uncertainty.

H1: Communicating uncertainty by a scientist in science news will increase recipients’ trust compared to communicating no uncertainty.

Trust in a scientist can be expected to increase positive behavior towards that scientist. Indeed, Wheeler (2009) found a significant positive effect of source credibility on the intention to donate money to the source. Therefore, we expect that uncertainty communication will positively affect the intention to donate, and that this relation is mediated by the degree of trust.

H2: Communicating uncertainty by a scientist will increase a recipient’s intention to donate money to this scientist compared to communicating no uncertainty.

H3: Trust in a scientist will mediate the effect of uncertainty communication on donation intention.

Underlying mechanism: Attribution of communicator bias

The above hypotheses rest on the assumption that trust in a scientist will increase because recipients will judge a scientist communicating uncertainty as being more objective and willing to tell the truth. It is, however, as yet unclear why uncertainty communication would lead to increased trust (Jensen et al., 2011). Ratcliff et al. (2018) argue that “sensing that an article has incomplete or undisclosed data can give the impression of an underlying attempt to persuade rather than inform” (p. 116). We substantiate this argument by drawing on insights from attribution theory. Specifically, we propose that the recipient’s attribution of communicator bias constitutes the causal mechanism explaining the effect of uncertainty on trust.

Attribution theory entails the study of perceived causation, or the perception or inference of a cause (Kelley and Michela, 1980). Its basic assumption is that individuals interpret behavior in terms of its causes and determine their reaction to the behavior accordingly (Crowley and Hoyer, 1994). A key concept in attribution theory in communication is the concept of communicator bias. This concept refers to different attributed motivations that explain why a given communicator would fail to convey a truthful message. Attribution theory predicts that a claim made by a source perceived as biased will be discounted (Kelley, 1973).

An important source of communicator bias is an unwillingness to convey accurate information. If a communicator is perceived to be unwilling to convey information properly (e. g., because of external pressure or personal benefit), this leads to the attribution of communicator bias. Attribution of communicator bias, in turn, negatively affects trust. A communicator is more likely to be trusted if the position she or he advocates is inconsistent with what would be expected from biased sources (Eagly, Wood, and Chaiken, 1978; Kelley and Michela, 1980; O’Keefe, 2016; Wiener and Mowen, 1986).

Attribution theory has frequently been applied in persuasion research, especially in research on two-sided messages in marketing (Eisend, 2007). Two-sided messages are persuasive messages in which, alongside positive claims about a given product or service, negative facts are provided. Researchers have found this technique to be a useful marketing tool in several contexts (Crowley and Hoyer, 1994; Pechmann, 1990).

According to Eisend (2007), attribution theory best explains the positive consequences of two-sided messages. Consumers can attribute claims “either to the advertiser’s desire to sell the product or to actual characteristics of the product” (Eisend, 2007, p. 616). The first of these options would lead to attributed communicator bias, since the advertiser is unwilling to convey a truthful message in order to sell the product. However, if an advertiser includes negative information in the message, this removes grounds for attributing communicator bias, as it would signify the communicator acting against his or her own interest. Subsequently, the credibility of the advertiser is positively affected, increasing advertisement effectiveness (Eisend, 2007).

Science news with uncertain elements can be seen as an example of a two-sided message, and might take away reasons to attribute communicator bias to the scientist. There are, after all, no perceived external pressures or possible benefits to the scientist that could motivate him or her to communicate the limitations. In short, the recipient is less likely to attribute communicator bias to a scientist communicating uncertainty, increasing trust (Eisend, 2007; Kelley and Michela, 1980). While the mediating effect of communicator bias might well explain the positive effect of uncertainty communication on trust found by Jensen (2008), Retzbach et al. (2016), and Crismore and Vande Kopple (1997), to date this hypothesis has not been tested.

Thus, this study expects communication of uncertainty in science news to increase trust. Following the line of reasoning from attribution theory, this effect is mediated by a decrease in the attribution of communicator bias by the recipient:

H4: The positive effect on trust of communicating uncertainty by a scientist in science news is mediated by communicator bias.

2 Method


An online experiment was conducted using modified Dutch online science news articles and an online questionnaire following a 2 (within: uncertainty) x 2 (between: condition order) x 3 (between: article topic) mixed design.[1] Participants read two articles with different levels of uncertainty and completed a questionnaire after reading each article. To reduce the risk of order effects, the uncertainty conditions were counterbalanced: One half of participants started with the high-uncertainty article, the other half with the low-uncertainty article. In addition, to increase the generalizability of the results, three articles with different topics were created, each with both a high-uncertainty and a low-uncertainty version. Thus, the final design included twelve experimental groups and two measurements, in which every possible combination of uncertainty, condition order, and article topic was incorporated.
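The way the twelve groups arise from the 2 x 2 x 3 design can be made concrete with a short enumeration. The sketch below is our illustrative reading of the assignment procedure (the topic labels are taken from the stimulus materials): each participant is assigned a pair of topics, an assignment of those topics to the high- and low-uncertainty versions, and a condition order.

```python
from itertools import combinations, permutations

# Topic labels from the stimulus material.
topics = ["Bechterew", "Particulate matter", "Rhubarb"]

groups = []
for pair in combinations(topics, 2):                  # 3 possible topic pairs
    for high_topic, low_topic in permutations(pair):  # 2 topic-to-condition assignments
        for order in ("high first", "low first"):     # 2 condition orders
            groups.append((high_topic, low_topic, order))

# 3 pairs x 2 assignments x 2 orders = 12 experimental groups
print(len(groups))  # 12
```

Counterbalancing then simply means distributing participants evenly over these twelve cells.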


A sample of university students was employed to test the theoretical assumptions. After approval by the faculty’s ethics committee, participants were recruited using an online survey system at the university. They could read the articles and fill in the questionnaire from their laptop or desktop at home and were rewarded with study credits or a €5.– coupon. Eleven participants who did not meet the requirements of the study (proficient in Dutch; student or recently graduated; age <30) were omitted from the analysis. The final sample (N = 88) was therefore relatively homogeneous in terms of language, education, and age, and is sufficient to detect within-subject effects of Cohen’s f = 0.34.[2] Participants were asked about their gender (92 % female) and age (M = 20.69, SD = 2.65). Whereas there is no reason to suspect that gender substantially influences the cognitive mechanism under scrutiny, the fact that our sample consists of students may well influence the results. Being more attuned to the scientific process, students might react differently (i. e., more positively) to scientific uncertainty in the news. To check to what extent knowledge of the scientific enterprise influences the effect of uncertainty communication, participants were asked how many years in total they had spent in scientific education (M = 2.42, SD = 1.73). The consequences for generalizing this study’s results are further discussed in the discussion section.


All participants were asked for their written consent. Participants were then randomly assigned to one of the twelve groups. A randomization check showed that participants in different groups did not differ significantly in age, gender and years of university education.

After reading the first article, participants filled in a questionnaire measuring their scores on the relevant variables. To minimize risk of item bias, questions were rotated and filler questions were added to each list of questions. Subsequently, to decrease the risk of transfer errors between the two measurements, the participants completed a short cognitive test, consisting of ten easy mathematical calculations and a word-order assignment. Finally, the participants proceeded to the second article and a second questionnaire, and were debriefed with a short text stating the research goals and manipulations.

Stimulus material

The three articles were modified versions of short medical items from several Dutch online news sources, which were selected to have minor relevance to the study’s population. The first author of this study, who is also a professional science journalist, rewrote the articles. The first article, ‘Bechterew’, dealt with a study on a new treatment that could possibly aid patients with Bechterew disease. This article was adapted from a Dutch national business news website article discussing a study by Feagan et al. (2013) on Crohn’s disease. ‘Particulate matter’, the second article, was adapted from a piece on a well-known Dutch science website concerning a study by Pedersen et al. (2013) and discussed a study that showed the negative influence of particulate matter on infants’ health. Finally, the third article, ‘Rhubarb’, discussed a study that discarded claims about the negative effect of rhubarb on bones and joints. This article was adapted from a piece in a Dutch national newspaper that rejected claims about the harmful influence of coffee, and which was based on a study by Larsson, Drea, Jensen-Urstad, and Wolk (2015).

Two versions of each article were created, with different levels of uncertainty. Each introduced the lead author of the study discussed: a male doctor, who commented on the findings in the last paragraph of the article. Following Jensen (2008), the discussion sections of the original research articles were scanned for remarks on study limitations. In the high-uncertainty condition, the doctor was quoted as stating two of these limitations and notifying the reader that the study’s results should be interpreted with caution. In contrast, in the low-uncertainty condition, the doctor provided two positive remarks on the study and called it a breakthrough. Besides topic and uncertainty, great care was taken to create articles that were completely similar in design, length, and structure. Each of the six articles was presented in the format of a lifestyle news article on the largest Dutch online news website. The stimulus material is published on the Open Science Framework.

As a manipulation check, three 0–100 visual analog scale items measured the extent to which participants felt the researchers’ communicated uncertainty: ‘Dr. [name of the researcher] pointed to the shortcomings of his research.’; ‘Dr. [name of the researcher] did not doubt his study results.’; ‘Dr. [name of the researcher] was uncertain about his study’s results.’, with ‘totally disagree’ and ‘totally agree’ as lower and upper bounds. Manipulation check indices were created for both high (M = 68.25, SD = 17.01) and low (M = 14.50, SD = 14.13) uncertainty conditions by taking the mean of the three items. A paired-samples t-test showed that the mean uncertainty index score was significantly higher in the high-uncertainty condition than in the low-uncertainty condition (ΔM = 53.75, t(87) = 21.04, p < .001), indicating that the uncertainty manipulation was effective.
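The manipulation check boils down to a paired-samples t-test on the two per-participant index scores. A minimal sketch with hypothetical data (the real indices are not reproduced here; n = 5 is for illustration only):

```python
import numpy as np

# Hypothetical manipulation-check indices (mean of three 0–100 items) for
# five participants, once per uncertainty condition.
high = np.array([70.0, 65.0, 80.0, 60.0, 75.0])
low = np.array([15.0, 10.0, 20.0, 12.0, 18.0])

# Paired-samples t-test: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1
d = high - low
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(round(d.mean(), 2), round(t, 2))  # 55.0 27.85
```

In the study itself, the same computation on the two 88-participant index scores yields the reported t(87) = 21.04.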


Communicator bias. To measure attribution of communicator bias, a scale of four 0–100 visual analog scale items was developed to cover the varying causal attributions participants could apply regarding the quotes by the researcher in the article. One item was adopted from Sen and Lerman (2007), and several other items were developed to capture the richness of possible attributions. The items were tested in a pilot study (N = 50), in which four items were found to form a reliable scale, which correlated strongly with relevant related variables. In the final measure, the following four items were used: ‘In the article, Dr. (name of the researcher) mainly tries to [(lower bound:) promote his study]/[(upper bound:) inform the public correctly]’; ‘While making his remarks, Dr. (name of the researcher) mainly thinks about [his own interests]/[the interests of the public]’; ‘In the article, Dr. (name of the researcher) projects his research [as positively as possible]/[as realistically as possible]’; ‘Dr. (name of the researcher) bases his remarks on his study [on the study’s quality]/[on reasons that have nothing to do with the study’s quality]’. Two 0–100 communicator bias scales were created for high (M = 25.70, SD = 13.66) and low (M = 52.45, SD = 17.78) uncertainty conditions by taking the mean of the four items. While for the high-uncertainty condition the scale’s reliability was satisfactory (α = .741), for the low-uncertainty condition it was somewhat low (α = .656).
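Scale construction here follows a standard pattern: average the four items into one 0–100 score per participant, then check internal consistency with Cronbach’s α. A minimal sketch with hypothetical responses (the item values below are illustrative, not the study’s data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 0–100 responses to the four bias items for five participants.
responses = np.array([
    [20.0, 25.0, 30.0, 22.0],
    [55.0, 60.0, 50.0, 58.0],
    [30.0, 35.0, 28.0, 33.0],
    [70.0, 65.0, 72.0, 68.0],
    [40.0, 45.0, 38.0, 44.0],
])
scale = responses.mean(axis=1)   # 0–100 communicator bias score per participant
alpha = cronbach_alpha(responses)
```

With highly consistent items like these, α approaches 1; the study reports α = .741 and α = .656 for the two conditions.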

Trust. To measure trust, a scale was constructed using seven items from Gaziano and McGrath (1986). For every item, participants had to complete the sentence ‘I think the lead scientist in the article, … (name of the researcher), …’: ‘… is fair/… is unfair’, ‘… is biased/… is unbiased’, ‘… tells the whole story/… doesn’t tell the whole story’, ‘… is accurate/… is inaccurate’, ‘… separates fact and opinion/… doesn’t separate fact and opinion’, ‘… can be trusted/… cannot be trusted’, ‘… is grounded in facts/… is not grounded in facts’. Respondents used 0–100 visual analog scales to indicate their position between the two options for every item. Two 0–100 trust item scales were constructed for both high (M = 75.58, SD = 12.92) and low (M = 55.05, SD = 17.13) uncertainty conditions by taking the mean of the seven items. Both trust item scales reached satisfactory reliability (α = .882; α = .868).

Donation intention. To measure donation intention, respondents were asked one single question: ‘If you had ten euros to spend on scientific research, how much would you spend on Dr. (name of the researcher)’s research?’ They were given a blank space to specify the amount of euros they would spend. To ease comparison of effect sizes, donation intention was recoded into a 0–100 interval, with 0 indicating no intention to donate any money whatsoever, and 100 indicating the intention to donate all available money (€10.-). The variable was measured for both high (M = 45.68, SD = 25.53) and low (M = 33.01, SD = 25.71) uncertainty conditions. Descriptive statistics of all relevant variables can be found in Table 1.
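The recoding of the donation amount to the 0–100 interval is a simple linear rescaling; a one-line sketch (the function name is ours):

```python
def recode_donation(euros: float, max_euros: float = 10.0) -> float:
    """Rescale a euro amount on [0, max_euros] to a 0–100 donation-intention score."""
    return euros / max_euros * 100.0

score = recode_donation(3.5)  # ≈ 35.0: donating €3.50 of €10 scores 35
```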

Table 1:

Descriptive statistics for relevant variables (N = 88).

                                         M       SD
Dependent variables
  Communicator bias, high uncertainty    25.70   13.66
  Communicator bias, low uncertainty     52.45   17.78
  Trust, high uncertainty                75.58   12.92
  Trust, low uncertainty                 55.05   17.13
  Donation intention, high uncertainty   45.68   25.53
  Donation intention, low uncertainty    33.01   25.71

Gender (female)                          92.0 %
Years of academic education              2.42    1.73

3 Results

Main effects

First, Hypothesis 1 was tested, which states that uncertainty in science news articles positively influences trust in the scientist mentioned in the article. To this end, a three-way, mixed-model Analysis of Variance (ANOVA) was conducted, with trust as the dependent variable and uncertainty condition as random factor. To check for possible influences of the different experimental conditions, order (high-uncertainty article first versus low-uncertainty article first) and topic (which two out of the three health articles) were included as fixed factors. The analysis revealed a significant main effect of uncertainty on trust (F(1, 76) = 109.38, p < .001, ηG² = 0.34). Estimated marginal means analysis showed participants in the high-uncertainty condition scored higher on trust (M = 75.61, SE = 1.37) than participants in the low-uncertainty condition (M = 54.88, SE = 1.87).

Both interactions with uncertainty order and topic revealed no significant effects, indicating that the effect is stable over conditions. In conclusion, the results show that the first hypothesis is supported by the experimental data.

Likewise, Hypothesis 2 expected a positive effect of uncertainty on donation intention. A similar mixed-model ANOVA, with donation intention as dependent variable, uncertainty condition as random factor and uncertainty order and topic as fixed factors, showed a significant difference in donation intention score between the uncertainty conditions (F(1, 76) = 29.78, p < .001, ηG² = 0.07). Participants in the high-uncertainty group (M = 45.71, SE = 2.65) were willing to donate on average €1.30 more to the researcher in the article than participants in the low-uncertainty group (M = 32.88, SE = 2.83). While the interaction with topic showed no significance (F(5, 76) = 1.60, p = .171, ηG² = 0.02), the interaction with uncertainty order did (F(1, 76) = 5.29, p = .024, ηG² = 0.01). This indicates that the difference in donation intention between uncertainty conditions is influenced by the order of these conditions.

To gain insight into the interaction between uncertainty and uncertainty order, Table 2 shows the mean donation intention for the different uncertainty conditions and uncertainty order groups. The group means indicate that in both uncertainty order groups, participants in the high-uncertainty condition are willing to donate significantly more money to the researcher in the article than participants in the low-uncertainty condition.

This finding provides empirical support for the second hypothesis. The difference between conditions, though, is higher when the high-uncertainty article is read second (ΔM = 18.24) than when the high-uncertainty article is read first (ΔM = 7.43). Thus, the influence of uncertainty on donation intention is strongest when participants read a high-uncertainty article after reading a low-uncertainty article.

Table 2:

Mean donation intention scores for different experimental conditions (N = 88).

                                                 High uncertainty   Low uncertainty
High uncertainty first, low uncertainty second   39.52 (3.69)       32.10 (3.95)
Low uncertainty first, high uncertainty second   51.90 (3.79)       33.65 (4.05)

Note. Standard errors between brackets.

Mediation analyses

Hypothesis 3 states that the effect of uncertainty on donation intention is mediated by trust. To test this hypothesis, we used the MEMORE macro for SPSS (Montoya and Hayes, 2017). MEMORE estimates a complete path-analytic framework for repeated measures in a single analysis, using 10,000 bootstrap samples to generate 95 % confidence intervals for the direct, indirect, and total effects of uncertainty on donation intention. The results of this analysis are presented in Table 3; specific path coefficients are given in Figure 1.
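MEMORE itself is an SPSS/SAS macro, but the core logic for a two-condition within-subject design can be sketched with difference scores: path a is the mean change in the mediator, path b the slope of the outcome change on the mediator change, and the indirect effect a·b is bootstrapped over participants. The sketch below uses simulated data and omits MEMORE’s mean-centered mediator-average term, so it illustrates the percentile-bootstrap logic rather than reimplementing the macro:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-participant difference scores (high minus low uncertainty);
# effect sizes loosely inspired by those reported in the text.
n = 88
m_diff = rng.normal(20.5, 15.0, n)                  # change in trust (mediator)
y_diff = 0.7 * m_diff + rng.normal(0.0, 20.0, n)    # change in donation intention

def indirect_effect(m, y):
    a = m.mean()                             # path a: mean change in the mediator
    b = np.cov(m, y)[0, 1] / m.var(ddof=1)   # path b: OLS slope of y-change on m-change
    return a * b

# Percentile bootstrap: resample participants with replacement.
boot = np.empty(10_000)
for i in range(10_000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(m_diff[idx], y_diff[idx])

ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(m_diff, y_diff)
```

If the resulting 95 % interval excludes zero, the indirect effect is deemed significant, which is exactly the criterion applied to the indirect effects in Tables 3 and 4.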

Table 3:

Mediation model effect sizes and 95 % confidence intervals on donation intention. X: uncertainty, M: trust (N = 88).

                  Effect size   CI 95 %
Total effect      12.67***      7.56 – 17.78
Direct effect     –1.51         –8.52 – 5.51
Indirect effect   14.17         8.84 – 19.37

*p < .05; **p < .01; ***p < .001

Figure 1: Effect of uncertainty (0 = low uncertainty, 1 = high uncertainty) on donation intention (0–100) mediated through trust (0–100, N = 86). Coefficients are unstandardized estimates and standard errors are reported between parentheses. The estimated indirect effect ab is 14.17 [8.84 – 19.38], and the total effect c is 12.67 (2.57), p < .001.

We examine each path in the mediation model separately. Both the effect of uncertainty on trust (path a, b = 20.53, SE = 1.91, p < .001) and the effect of trust on donation intention (path b, b = .69, SE = .13, p < .001) are significant. In addition, Table 3 shows that the path coefficient for the indirect effect ab significantly deviates from zero (b = 14.17 [8.84 – 19.37]). This indicates that trust significantly mediates the effect of uncertainty on donation intention. The direct effect c’ does not significantly differ from zero (b = –1.51, SE = 3.53, p = .67), suggesting that trust completely mediates the effect of uncertainty on donation intention. The empirical evidence thus supports Hypothesis 3.

Hypothesis 4 states that the influence of uncertainty on trust is mediated by communicator bias. To test this, we again used the MEMORE macro, generating 95 % confidence intervals with 10,000 bootstrap samples. The results of this analysis are presented in Table 4; specific path coefficients are given in Figure 2.

Table 4:

Mediation model effect sizes and 95 % confidence intervals on trust. X: uncertainty, M: communicator bias (N = 88).

                  Effect size   CI 95 %
Total effect      20.53***      16.74 – 24.33
Direct effect     5.37*         .63 – 10.11
Indirect effect   15.16         10.66 – 20.04

*p < .05; **p < .01; ***p < .001

Figure 2: Effect of uncertainty (0 = low uncertainty, 1 = high uncertainty) on trust (0–100) mediated through communicator bias (0–100, N = 86). Coefficients are unstandardized estimates and standard errors are reported between parentheses. The estimated indirect effect ab is 15.16 [10.72 – 20.12], and the total effect c is 20.53 (1.91), p < .001.

Both the effect of uncertainty on communicator bias (path a, b = –26.75, SE = 2.18, p < .001) and the effect of communicator bias on trust (path b, b = –.57, SE = .07, p < .001) are significant. Table 4 shows that the indirect effect size ab significantly deviates from zero (b = 15.16 [10.66 – 20.04]). This indicates that communicator bias significantly mediates the effect of uncertainty on trust. The direct effect remains significant when communicator bias is taken into account, but is reduced substantially (b = 5.37 [.63 – 10.11]). The statistical evidence thus suggests that, in line with Hypothesis 4, communicator bias largely mediates the effect of uncertainty on trust.

4 Discussion

Trust is an important factor in holding the fabric of society together. As science has an important role in feeding public debate with relevant and reliable knowledge, trust in science is especially pertinent (Peters and Dunwoody, 2016). For this reason, it is important to investigate the mechanisms that determine how people assign trust to a scientific communicator, especially in a time when many scientists worry about waning trust in science as an institution (Nerlich, 2017).

This research shows that trust in science benefits from the communication of uncertainty. After reading an article in which a scientist is quoted making clear statements about the uncertainty of his results, participants perceive the expert as more trustworthy than after reading an article in which such uncertainty is absent. Thus, this study confirms the finding by Jensen (2008) that uncertainty communication positively affects scientists’ trustworthiness, and contrasts with studies indicating that uncertainty may have no effect on related variables such as trust in scientific professions in general, argument credibility, or scientists’ expertise (Jensen, 2008; Jensen et al., 2011; Thiebach et al., 2015). Moreover, for the first time, this study provides evidence that trusting a scientist leads to greater willingness to donate money to his or her research.

As we discussed in the introduction, news representations of science often neglect to communicate uncertainties. This might be caused by scientists’ own reservations, as they fear that communicating uncertainties weakens their authority. Other explanations include the perceived low news value of uncertainty: News consumers are seemingly interested only in clear-cut truths, not in the ‘ifs and buts’ of science. Journalists are not solely responsible for this. Earlier studies have found press officers and scientists themselves leaving out uncertainties when communicating about science in order to accommodate journalists (Schat, Bossema, Numans, Smeets, and Burger, 2018; Sumner et al., 2014). Whatever the cause may be, the cognitive mechanism proposed in our study indicates that communicating doubts or nuances can have a positive effect on trust, and this might be a reason to include uncertainties in science communication.

This study aimed to provide a theoretical contribution by proposing a causal mechanism that explains the effect of uncertainty on trust. Attribution theory, which has been proposed as an explanation for the positive effect of two-sided marketing messages on persuasion (Eisend, 2007), was assumed to also explain the positive effect of communicating uncertainty in science news. Our results show that uncertainty communication significantly decreases the attribution of communicator bias to a scientist, that is, the perception that he or she is unwilling to convey a truthful message about his or her study – just as a salesperson is considered less biased when he or she gives negative information about a product. Attributed bias, in turn, determines the researcher’s trustworthiness.
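The mediation estimate underlying this claim rests on the two-condition within-participant path-analytic framework of Montoya and Hayes (2017). Its core logic can be sketched as follows; the data below are simulated purely for illustration (they are not the study’s data), with the mediator playing the role of attributed bias and the outcome that of trust:

```python
import random
from statistics import mean

def solve3(A, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def within_mediation(m1, m2, y1, y2):
    """Point estimate of the indirect effect in a two-condition
    within-participant mediation (Montoya & Hayes, 2017):
    a = mean mediator difference; b = slope of the outcome difference
    on the mediator difference, controlling for the centered mediator sum."""
    mdiff = [x2 - x1 for x1, x2 in zip(m1, m2)]
    ydiff = [x2 - x1 for x1, x2 in zip(y1, y2)]
    msum = [(x1 + x2) / 2 for x1, x2 in zip(m1, m2)]
    msumc = [s - mean(msum) for s in msum]
    a = mean(mdiff)  # a-path
    # OLS of ydiff on [1, mdiff, msumc] via the normal equations
    X = [[1.0, md, mc] for md, mc in zip(mdiff, msumc)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, ydiff)) for i in range(3)]
    b = solve3(XtX, Xty)[1]  # b-path
    return a * b             # indirect effect a*b

# Simulated data: condition 2 shifts the mediator by ~1 unit (a-path),
# and the mediator affects the outcome with slope ~0.5 (b-path).
random.seed(1)
n = 200
m1 = [random.gauss(3, 1) for _ in range(n)]
m2 = [m + 1.0 + random.gauss(0, 0.5) for m in m1]
y1 = [0.5 * m + random.gauss(0, 0.5) for m in m1]
y2 = [0.5 * m + random.gauss(0, 0.5) for m in m2]

indirect = within_mediation(m1, m2, y1, y2)
print(f"indirect effect a*b ≈ {indirect:.2f}")  # population value is 0.5 by construction
```

In practice one would add a bootstrap confidence interval around the indirect effect, as the MEMORE approach does; the point estimate above only illustrates the path-analytic decomposition.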

One notable limitation of this study concerns the sample. As in Jensen’s (2008) research, all participants were at university or had recently graduated. There is reason to suspect that scientific education influences the effect of uncertainty communication, since those familiar with scientific practice will probably place greater value on thorough research reporting – including its uncertainties. In this study, the number of years of scientific education had no significant effect on trust. However, this tells us little about those who have never seen a university from the inside: Do they react as positively to scientific uncertainty as the student participants in this study? Ratcliff et al. (2018) found no link between uncertainty and researcher credibility in a larger, more representative sample, further substantiating the notion that scientific education affects one’s reaction to uncertainty communication. We encourage future research in this field to test our assumptions in a sample that better represents the general science communication audience. That said, the goal of the current study was to investigate whether a mechanism exists that can explain the possibly counterintuitive finding from prior research that trust in science and scientists increases when they communicate uncertainty about their research. Our findings indicate that such a mechanism does exist in recipients of science communication. While this contributes to theoretical development, future research must ascertain whether this mechanism is universal to all recipients or specific to certain social groups.

A further limitation is that experimental methods may provoke effects of the experimental setting itself, such as more intensive reading of the news articles. We encourage researchers to test our results in more naturalistic settings, where participants are not explicitly instructed to read a particular science news text.

Furthermore, the sample size (N = 88) was not ideally suited to detecting small or moderately sized effects. The main effect of uncertainty communication on trust was strong (ηG² = 0.34) and therefore easily detected, even with a sample of limited size. Care must be taken, though, when interpreting the control analyses: the interactions with uncertainty order and article topic. As the sample was underpowered to detect these interaction effects, we cannot entirely rule out the possibility of undetected order or topic effects.
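The size of this main effect can be put in perspective with the standard conversion from eta-squared to Cohen’s f; the comparison value below is merely Cohen’s conventional benchmark for a “small” effect, not a figure from this study:

```python
import math

def cohens_f(eta_sq: float) -> float:
    """Convert (generalized) eta-squared into Cohen's f: f = sqrt(eta2 / (1 - eta2))."""
    return math.sqrt(eta_sq / (1.0 - eta_sq))

f_main = cohens_f(0.34)   # reported main effect of uncertainty on trust
f_small = cohens_f(0.01)  # a conventionally "small" effect, for comparison

print(f"f (main effect):  {f_main:.2f}")   # ~0.72, well above Cohen's 'large' benchmark of 0.40
print(f"f (small effect): {f_small:.2f}")  # ~0.10
```

An effect of f ≈ 0.72 is detectable even with N = 88, whereas interaction effects closer to f ≈ 0.10 would require samples several times larger, which is the power concern raised above.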

Finally, causal claims must be handled with care in this study. Whereas the causal direction of the effects of uncertainty communication is solid (after all, trust and communicator bias were measured after the manipulation of uncertainty communication), the causal relation between communicator bias and trust remains a matter of inference. Though it seems theoretically valid to assume an effect of communicator bias on trust, this causal effect cannot be derived from the empirical data.

In addition to the findings derived from our hypotheses, the order effects in this study are worth considering. When participants encountered the low-uncertainty scientist first and the high-uncertainty scientist second, they judged the high-uncertainty scientist as significantly less biased and more trustworthy. The effects of uncertainty communication thus seem stronger when the article without limitations comes first. This finding makes sense from the perspective of attribution theory. People hold expectations about how others act – such as the expectation that a scientist will express certainty about his or her own research findings (Kelley and Michela, 1980). A first encounter with a scientist who meets these expectations strengthens expectations about scientists’ general biases (i. e., a tendency to be certain about their own results) and increases the contrast with the uncertain scientist in the second article. Consequently, the decrease in perceived communicator bias and the increase in trust grow even larger in this condition order.

In defining trust, we built on the traditional conceptualization of trust as a dimension of credibility. Recently, scholars have proposed the notion of epistemic trust in the context of science communication (Hendriks, Kienhues, and Bromme, 2016b). Epistemic trust indicates the extent to which receivers trust scientists as conveyors of knowledge. Interestingly, Hendriks et al. (2016b) incorporate the notion that the audience should be able to “vigilantly identify sources whose intentions might lead to a loss of benevolence or of integrity” (p. 154). This notion closely resembles our concept of attributed communicator bias. Further research might benefit from a more extensive theoretical exploration of trust in the context of science communication and of the role that perceived integrity and communicator bias play in it.

This study focused on the lead scientist in science news. In many news items, however, it is not the lead scientist who reacts to his or her own work but rather an unaffiliated, supposedly objective scientist criticizing or complimenting a colleague (Stocking, 1999). In fact, providing ‘balance’ to a science news story by introducing an independent scientist is sometimes regarded as sound journalistic practice – even if this added balance in fact unjustly undermines the lead scientist’s credibility (Boykoff and Boykoff, 2004). Jensen (2008) did test the effect of uncertainty communicated by an unaffiliated scientist, finding that it actually decreased perceived credibility, whereas Binder, Hillback, and Brossard (2016) found no such effect in the context of nanotechnology research. It would be interesting to extend the logic of attribution theory to the unaffiliated scientist: Does uncertainty communicated by an unaffiliated scientist perhaps increase the bias attributed to the lead scientist, since it may feel like proof that the lead scientist is holding something back?

It is important to note the difference between trust in the scientist and the actual persuasive power of the message. While uncertainty communication might increase trust in the quoted scientist, it might at the same time diminish the persuasiveness of the message itself, simply because the expressions of uncertainty convey to recipients that the study’s results are unreliable. The same paradox occurs in two-sided marketing messages, where scholars have found that communicating negative product traits increases message credibility but simultaneously decreases buying intention (Eisend, 2007). It would be interesting for future scholars to explore how this interplay between uncertainty communication and persuasion works in science communication. Is this relationship linear? Or is there a ‘tipping point’ beyond which the rise in trust no longer offsets the negative influence of communicated uncertainty on the persuasiveness of the message? The fact that uncertainty communication positively influenced donation intention indicates that it has persuasive value with regard to the scientific enterprise, but it is possible that uncertainty decreases persuasive power with regard to the actual finding (e. g., the negative effects of consuming rhubarb). More research is certainly needed to provide conclusive answers to these questions.

This study represents one of the few investigations into the effectiveness of two-sidedness in science communication. As far as we know, it is the first study to do so using attribution theory and communicator bias as mediating variables. Moreover, it is the first to test the effects on behavioral intentions in a scientific context.


Binder, A. R., Hillback, E. D., & Brossard, D. (2016). Conflict or caveats? Effects of media portrayals of scientific uncertainty on audience perceptions of new technologies. Risk Analysis, 36(4), 831–846. doi:10.1111/risa.12462

Boykoff, M. T., & Boykoff, J. M. (2004). Balance as bias: Global warming and the US prestige press. Global Environmental Change, 14(2), 125–136. doi:10.1016/j.gloenvcha.2003.10.001

Crismore, A., & Vande Kopple, W. J. (1997). Hedges and readers: Effects on attitudes and learning. In R. Markkanen, & H. Schröder (Eds.), Hedging and discourse: Approaches to the analysis of a pragmatic phenomenon in academic texts (pp. 83–114). Berlin: Walter de Gruyter. doi:10.1515/9783110807332.83

Crowley, A. E., & Hoyer, W. D. (1994). An integrative framework for understanding two-sided persuasion. Journal of Consumer Research, 20(4), 561–574. doi:10.1086/209370

Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about communicators and their effect on opinion change. Journal of Personality and Social Psychology, 36(4), 424–435. doi:10.1037/0022-3514.36.4.424

Eisend, M. (2007). Understanding two-sided persuasion: An empirical assessment of theoretical approaches. Psychology & Marketing, 24(7), 615–640. doi:10.1002/mar.20176

Feagan, B. G., Rutgeerts, P., Sands, B. E., Hanauer, S., Colombel, J. F., Sandborn, W. J., …, & Fox, I. (2013). Vedolizumab as induction and maintenance therapy for ulcerative colitis. New England Journal of Medicine, 369(8), 699–710. doi:10.1056/NEJMoa1215734

Gaziano, C., & McGrath, K. (1986). Measuring the concept of credibility. Journalism and Mass Communication Quarterly, 63(3), 451–462. doi:10.1177/107769908606300301

Hendriks, F., Kienhues, D., & Bromme, R. (2016a). Disclose your flaws! Admission positively affects the perceived trustworthiness of an expert science blogger. Studies in Communication Sciences, 16(2), 124–131. doi:10.1016/j.scoms.2016.10.003

Hendriks, F., Kienhues, D., & Bromme, R. (2016b). Trust in science and the science of trust. In B. Blöbaum (Ed.), Trust and communication in a digitized world (pp. 143–159). Cham: Springer. doi:10.1007/978-3-319-28059-2_8

Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion: Psychological studies of opinion change. New Haven, CT: Yale University Press.

Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists’ and journalists’ credibility. Human Communication Research, 34(3), 347–369. doi:10.1111/j.1468-2958.2008.00324.x

Jensen, J. D., Carcioppolo, N., King, A. J., Bernat, J. K., Davis, L., Yale, R., & Smith, J. (2011). Including limitations in news coverage of cancer research: Effects of news hedging on fatalism, medical skepticism, patient trust, and backlash. Journal of Health Communication, 16(5), 486–503. doi:10.1080/10810730.2010.546491

Kelley, H. H. (1973). The processes of causal attribution. American Psychologist, 28(2), 107–128. doi:10.1037/h0034225

Kelley, H. H., & Michela, J. L. (1980). Attribution theory and research. Annual Review of Psychology, 31, 457–501.

Lai, W. Y. Y., & Lane, T. (2009). Characteristics of medical research news reported on front pages of newspapers. PLoS ONE, 4(7), e6103. doi:10.1371/journal.pone.0006103

Larsson, S. C., Drca, N., Jensen-Urstad, M., & Wolk, A. (2015). Coffee consumption is not associated with increased risk of atrial fibrillation: Results from two prospective cohorts and a meta-analysis. BMC Medicine, 13, 207–214. doi:10.1186/s12916-015-0447-8

Leshner, A. I. (2003). Public engagement with science. Science, 299(5609), 977. doi:10.1126/science.299.5609.977

Mellor, F. (2010). Negotiating uncertainty: Asteroids, risk and the media. Public Understanding of Science, 19(1), 16–33. doi:10.1177/0963662507087307

Montoya, A. K., & Hayes, A. F. (2017). Two-condition within-participant statistical mediation analysis: A path-analytic framework. Psychological Methods, 22(1), 6–27. doi:10.1037/met0000086

Nerlich, B. (2017). Public trust in science: Myths and realities. Retrieved February 17, 2018 from

O’Keefe, D. J. (2016). Persuasion: Theory and research (3rd ed.). Thousand Oaks, CA: Sage Publications.

Pechmann, C. (1990). How do consumer inferences mediate the effectiveness of two-sided messages? In M. E. Goldberg et al. (Eds.), Advances in consumer research (pp. 337–341). Ann Arbor, MI: Association for Consumer Research.

Pedersen, M., Giorgis-Allemand, L., Bernard, C., Aguilera, I., Andersen, A. M. N., Ballester, F., …, & Dedele, A. (2013). Ambient air pollution and low birthweight: A European cohort study (ESCAPE). The Lancet Respiratory Medicine, 1(9), 695–704. doi:10.1016/S2213-2600(13)70192-9

Pellechia, M. G. (1997). Trends in science coverage: A content analysis of three U.S. newspapers. Public Understanding of Science, 6(1), 49–68. doi:10.1088/0963-6625/6/1/004

Peters, H. P., & Dunwoody, S. (2016). Scientific uncertainty in media content: Introduction to this special issue. Public Understanding of Science, 25(8), 893–908. doi:10.1177/0963662516670765

Post, S. (2016). Communicating science in public controversies: Strategic considerations of the German climate scientists. Public Understanding of Science, 25(1), 61–70. doi:10.1177/0963662514521542

Post, S., & Maier, M. (2016). Stakeholders’ rationales for representing uncertainties of biotechnological research. Public Understanding of Science, 25(8), 944–960. doi:10.1177/0963662516645039

Ratcliff, C. L., Jensen, J. D., Christy, K., Crossley, K., & Krakow, M. (2018). News coverage of cancer research: Does disclosure of scientific uncertainty enhance credibility? In H. D. O’Hair (Ed.), Risk and health communication in an evolving media environment. New York, NY: Routledge.

Retzbach, J., Otto, L., & Maier, M. (2016). Measuring the perceived uncertainty of scientific evidence and its relationship to engagement with science. Public Understanding of Science, 25(6), 638–655. doi:10.1177/0963662515575253

Schat, J., Bossema, F. G., Numans, M. E., Smeets, I., & Burger, J. P. (2018). Overdreven gezondheidsnieuws. Relatie tussen overdrijving in academische persberichten en in nieuwsmedia [Exaggerated health news. Relation between exaggeration in academic press releases and news media]. Nederlands Tijdschrift voor Geneeskunde, 162, 5.

Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews on the web. Journal of Interactive Marketing, 21(4), 76–94. doi:10.1002/dir.20090

Singer, E. (1990). A question of accuracy: How journalists and scientists report research on hazards. Journal of Communication, 40, 102–116. doi:10.1111/j.1460-2466.1990.tb02284.x

Stocking, S. H. (1999). How journalists deal with scientific uncertainty. In S. M. Friedman et al. (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 23–42). New York, NY: Routledge.

Stocking, S. H., & Holstein, L. W. (1993). Constructing and reconstructing scientific ignorance. Science Communication, 15(2), 186–210. doi:10.1177/107554709301500205

Stocking, S. H., & Holstein, L. W. (2009). Manufacturing doubt: Journalists’ roles and the construction of ignorance in a scientific controversy. Public Understanding of Science, 18(1), 23–42. doi:10.1177/0963662507079373

Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C. A., Davies, A., … & Boy, F. (2014). The association between exaggeration in health-related science news and academic press releases: Retrospective observational study. BMJ, 349, g7015. doi:10.1136/bmj.g7015

Thiebach, M., Mayweg-Paus, E., & Jucks, R. (2015). “Probably true” says the expert: How two types of lexical hedges influence students’ evaluation of scientificness. European Journal of Psychology of Education, 30(3), 369–384. doi:10.1007/s10212-014-0243-4

Weingart, P. (2012). The lure of the mass media and its repercussions on science. In S. Rödder et al. (Eds.), The sciences’ media connection – public communication and its repercussions (pp. 17–32). Dordrecht, The Netherlands: Springer. doi:10.1007/978-94-007-2085-5_2

Weiss, C. H., & Singer, E. (1988). Reporting of social science in the national media. New York: Russell Sage Foundation.

Wheeler, R. T. (2009). Nonprofit advertising: Impact of celebrity connection, involvement and gender on source credibility and intention to volunteer time or donate money. Journal of Nonprofit & Public Sector Marketing, 21(1), 80–107. doi:10.1080/10495140802111984

Whitehead, J. L., Jr. (1968). Factors of source credibility. Quarterly Journal of Speech, 54, 59–63. doi:10.1080/00335636809382870

Wiener, J. L., & Mowen, J. C. (1986). Source credibility: On the independent effects of trust and expertise. In R. J. Lutz (Ed.), Advances in consumer research, volume 13 (pp. 306–310). Provo, UT: Association for Consumer Research.

Winter, S., Kramer, N. C., Rosner, L., & Neubaum, G. (2015). Don’t keep it (too) simple: How textual representations of scientific uncertainty affect laypersons’ attitudes. Journal of Language and Social Psychology, 34(3), 251–272. doi:10.1177/0261927X14555872

Zehr, S. C. (1999). Scientists’ representations of uncertainty. In S. M. Friedman et al. (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 3–21). New York, NY: Routledge.

Published Online: 2020-06-06
Published in Print: 2021-06-25

© 2020 Steijaert et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
