Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of it, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.
Keywords: public understanding of science, science communication, uncertainty, replicability, reproducibility
In 2019, the National Academies of Sciences, Engineering, and Medicine (NASEM) released a consensus report on Reproducibility and Replicability in Science. The report provided an overview of scientific reproducibility and replicability, and discussed how they affect understandings of the uncertainty inherent in scientific research. Uncertainty, while distinct from reproducibility and replicability, is inextricably tied to both. In some cases it can be quantified with varying levels of precision via probabilities and confidence intervals; it can also be communicated qualitatively through rhetoric, including the use of words such as 'could,' 'possibly,' or 'unlikely,' or through the provision of diverse, conflicting information and opinions.
Within scientific research, reproducibility and replicability are part of the processes that researchers use to reduce and set bounds on uncertainty. Definitions of reproducibility and replicability are contested and vary (see Goodman et al., 2016; Plesser, 2018). Generally, however, reproducibility refers to the ability to rerun the analyses from another study, using the same data and methods, and achieve the same results as the original study (NASEM, 2019), while replicability refers to the ability to achieve findings similar to those of an earlier study using different data and analyses (NASEM, 2019). Reproducibility can testify to the reliability of previous findings, to the extent that the results can be consistently produced by others. Replicability can provide insight into the validity of results by adding to the body of evidence that a particular phenomenon exists beyond the bounds of a given data set or research method.
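To make the distinction concrete, the following minimal sketch (with hypothetical data and a toy mean-difference 'analysis'; it is illustrative only and not drawn from the NASEM report) contrasts a reproducibility check, which reruns the same analysis on the same data, with a replication attempt, which collects new data and asks whether a similar effect appears.

```python
# Minimal, illustrative sketch of the reproducibility/replicability distinction
# (hypothetical data and function names; not from the NASEM report itself).
import numpy as np

rng = np.random.default_rng(seed=42)

def analysis(treatment, control):
    """The 'analysis pipeline': here, simply the difference in group means."""
    return treatment.mean() - control.mean()

# Original (hypothetical) study data.
original_treatment = rng.normal(loc=0.5, scale=1.0, size=100)
original_control = rng.normal(loc=0.0, scale=1.0, size=100)
original_effect = analysis(original_treatment, original_control)

# Reproducibility: rerun the SAME analysis on the SAME data.
# A reproducible result returns the same number (up to numerical precision).
reproduced_effect = analysis(original_treatment, original_control)
assert np.isclose(original_effect, reproduced_effect)

# Replicability: collect NEW data (possibly with different methods) and ask
# whether a similar effect appears. Sampling variability means the estimate
# will differ; the question is whether it is consistent with the original.
new_treatment = rng.normal(loc=0.5, scale=1.0, size=100)
new_control = rng.normal(loc=0.0, scale=1.0, size=100)
replicated_effect = analysis(new_treatment, new_control)

print(f"Original effect:   {original_effect:.2f}")
print(f"Reproduced effect: {reproduced_effect:.2f}  (identical by construction)")
print(f"Replicated effect: {replicated_effect:.2f}  (similar, not identical)")
```

In this sketch the reproduced estimate is identical by construction, while the replicated estimate varies with sampling and is judged by whether it is consistent with the original result.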
Here, I use the 2019 NASEM report, Reproducibility and Replicability in Science, to construct an overview summarizing existing research on the factors that shape public perceptions of uncertainty related to scientific information and on how to communicate issues of reproducibility and replicability effectively given those factors. The overview also highlights that much more research is needed on how to communicate these topics effectively. Throughout this article, I assume that communication is 'effective' to the extent that it makes credible and relevant information accessible to broader publics without creating confusion, harmful (in)action on scientific information, misleading conclusions, or undue mistrust or misperceptions of science, scientists, or a piece of scientific information. This article ends with recommendations on ways to communicate uncertainty related to science, as well as its causes and implications.
The summaries here focus primarily on the small body of research concerning public perceptions of uncertainty in scientific results and on the handful of studies on public perceptions related to reproducibility or replicability. These studies generally concern true scientific uncertainty—where something is known to be unknown, either broadly or within a certain measurable range, whether within a single study, across studies, or because of a lack of research in an area. Uncertainty related to reproducibility and replicability would fall into this category.
With respect to science as it figures in public discourse, other types of uncertainty also affect public perceptions and science communication. One of these is uncertainty in decision making about science related to the questions of how we should act on scientific information as individuals and with respect to society. This can pertain to questions about what is ethically right to do with a given technology, such as artificial intelligence, or how to address a global issue, such as climate change. Such uncertainty may also pertain to everyday questions such as how to incorporate the recommendations of nutritional science into one’s diet.
When uncertainty is high, either within the science itself or with respect to decision making informed by science, that uncertainty can also be misleadingly portrayed through misinformation and disinformation campaigns (Dixon & Clarke, 2012; Westervelt, 2019), as has been the case during the COVID-19 pandemic. Less nefariously, uncertainty may be generated by unintentionally misleading portrayals of science across news articles or other information sources. These kinds of manufactured uncertainty are the product of misinformation about scientific findings and science-related issues, or of accounts that misrepresent or only partially represent complex issues—intentionally or not—without giving context to explain (apparent) discrepancies or conflicts across information sources.
True scientific uncertainty can create greater space for uncertainty in decision making and for manufactured uncertainty. Similarly, an argument about an area of true scientific uncertainty might emerge because of deep divisions concerning public decision making related to that science. All three of these broad types of uncertainty related to science are also influenced by the factors described in this summary.
For clarity, I sort some of the key factors shaping public perceptions of science-related uncertainty—and its forms, causes, and potential implications—into two categories: 1) how uncertainty appears both within and across information sources, and 2) how recipients process information, depending upon their motivations, abilities, values, and beliefs.
Only a small number of studies explicitly focus on communication and perceptions of true uncertainty in science, let alone on perceptions of reproducibility and replicability. Hence, this overview also looks at the literature on perceptions of uncertainty in science-related issues, particularly uncertainty in decision making on science, as addressed in the literature on risk and decision making (Fischhoff & Davis, 2014; Funtowicz & Ravetz, 2003; Renn, 1992). It draws as well on the extensive literatures in communication, psychology, and political science on media and public opinion, particularly on misinformation and manufactured uncertainty.
Most of the research on public perceptions of uncertainty focuses on how certain factors within a news article or piece of information influence perceptions of the uncertainty and the science and actors involved. This section starts with an overview of those studies. Studies focus primarily on uncertainty as it appears through hedging language, or caveats, through showing two sides of an argument related to a science issue, and through positive and negative or gain and loss frames. While these studies largely rely on experiments in controlled settings, most people receive information, particularly science information, in the highly complex, uncontrolled settings of online and social media. Given this phenomenon, this summary then considers how media can influence the reception and mental processing of scientific uncertainty.
In the literature on public perceptions of uncertainty, true scientific uncertainty generally appears through language that expresses caveats, or hedges. Hedges (also called “powerless language”) include using words such as ‘might,’ ‘in some cases,’ and ‘possibly.’ They are key to communicating scientific information accurately because they communicate the uncertainty present in scientific results.
Although research on persuasive communication, such as editorials and legal arguments, found that hedges increase negative views of the speaker (Hosman, 1989) and the messages (Blankenship & Holtgraves, 2016), research in science communication often finds the opposite effect: that hedges can increase the credibility of the information or of the sources associated with it (Durik et al., 2008; Jensen, 2008; Ratcliff et al., 2018; Winter et al., 2014). In one study, the version of scientific information with the fewest hedges was the least effective in gaining credibility among readers (Winter et al., 2014). Other studies have similarly found that hedges in coverage of cancer research increased the perceived trustworthiness of the scientists and journalists involved (Jensen, 2008), while a replication of that study found that hedges did not affect views of scientists (which stayed high) but did increase the credibility of the journalists associated with a story (Ratcliff et al., 2018). The findings vary with respect to exact effects, such as whether the increase affects the perceived credibility of the story, the scientists, or the journalists involved. Overall, however, these findings suggest that people perceive scientific information differently than other types of information, in that they expect and accept more uncertainty in scientific information.
Importantly, however, research also suggests that hedges are more effective when researchers use them to discuss the results of their own work than when scientists criticize or add caveats to the work of other researchers (Jensen, 2008; Jensen et al., 2017). This distinction in who provides the caveats can be more important than the caveats themselves (Jensen et al., 2017), and it aligns with research on the effects of what is typically called 'two-sided information.' Two-sided information shows the pros and cons of a particular approach, line of research or technology, or argument on a particular science-related issue. It stands in contrast to 'one-sided information,' which has a clear positive or negative slant (described further in the section on positive and negative frames below).
Two-sided science information in experiments usually features either a) scientists disagreeing with each other on results or interpretations (conflict), or b) scientists providing counterinterpretations and caveats for their own work (caveats). When two-sided information takes the former form, through contradictions or conflicts between scientists or the addition of caveats to another researcher's work rather than to one's own, the information can increase the perceived tentativeness of scientific results (Chang, 2015; Flemming et al., 2015, 2017; Kortenkamp & Basten, 2015; Nagler, 2014), decrease the perceived credibility of the associated scientists and journalists (Chang, 2015; Kortenkamp & Basten, 2015), decrease support for the research or related technology (Chang, 2015; Flemming et al., 2017), and increase fatalistic views toward preventing or alleviating sources of risk and uncertainty (Jensen et al., 2017). Uncertainty in the studies cited here arose through exposure to conflict-focused two-sided information, either from several news articles that seem to contradict each other (Chang, 2015; Flemming et al., 2015, 2017; Nagler, 2014) or from a single article containing conflict or contradictions between researchers' statements (Jensen et al., 2017; Kortenkamp & Basten, 2015). The studies did not specify the source of the conflict, however, such as whether it arose from disagreements about methods, results, or conclusions or from more personal conflicts between rival scientists or labs.
The good news is that two-sided information can also increase the credibility of the scientists involved when all researchers in the story provide both pro- and contra-arguments, or when all share caveats for understanding research results. In such cases, presenting the true uncertainty can increase the perceived credibility of the scientists (Mayweg-Paus & Jucks, 2017), or at least not decrease credibility the way conflict-focused two-sided information can (Kortenkamp & Basten, 2015). It seems, then, that presenting true uncertainty through two-sided stories is not a problem per se. It is the uncertainty stemming from conflict between scientists in particular that can lead to negative perceptions. A recent review article by Gustafson and Rice (2020) supports this conclusion as well. Across 48 studies that each included a single message conveying uncertainty, what the authors called 'consensus uncertainty' (uncertainty stemming from conflict between scientists) almost always led to negative results, meaning that readers were less likely to follow the recommendations of the information (Gustafson & Rice, 2020). In comparison, information about what the authors called 'technical uncertainty,' such as that conveyed through error bars, confidence intervals, or probabilities, had positive or null effects (Gustafson & Rice, 2020).
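As an illustration of what 'technical uncertainty' can look like in practice, the brief sketch below (using hypothetical measurements, not data from any of the studies cited) reports a point estimate together with a 95% confidence interval, one of the quantified forms of uncertainty that the review associates with positive or null effects.

```python
# Illustrative sketch of reporting 'technical uncertainty': a point estimate
# with a 95% confidence interval (hypothetical measurements, not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
sample = rng.normal(loc=2.0, scale=1.5, size=60)  # hypothetical measurements

mean = sample.mean()
sem = stats.sem(sample)                           # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)   # two-sided 95% critical value
ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem

print(f"Estimated effect: {mean:.2f} (95% CI: {ci_low:.2f} to {ci_high:.2f})")
```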
The effects of conflict-focused information vary across studies. One study, for example, found that uncertainty stemming from disagreement between experts on the effects of hypothetical flooding from climate change made participants see flooding as more likely (and, hence, less uncertain) than did uncertainty stemming from climate models (Patt, 2007). Another study found that uncertainty arising from caveats by particular scientists versus uncertainty arising from conflict between scientists only affected participants’ views of uncertainty and risk if those participants were more deferential to the authority of scientists (Binder et al., 2016). Those who were more deferential had higher risk perceptions when they read articles characterized by conflict-based uncertainty, while other participants had no difference in perceptions depending on the presentation of uncertainty (Binder et al., 2016).
As the Binder et al. (2016) study highlights in showing how deference shaped perceptions of uncertainty in information, the mixed findings regarding the effects of conflict-based, or consensus, uncertainty are likely due in part to how a particular scientific issue variably affects recipients of information concerning that issue. For example, some recipients prefer two-sided information despite the increased uncertainty that it creates, especially recipients who are motivated to work through and understand the information because of their connection to the issue (Winter & Krämer, 2012). Both the Binder et al. (2016) and Winter and Krämer (2012) studies found that the characteristics of the individuals in the studies, as well as the specific topic of the information, shape reactions to the uncertainty involved.
A study comparing the effects of conflicting information in articles on two different issues—one on dioxin in sewage and one on wolf reintroduction—illustrates well how the effects of uncertainty vary across particular issues. In the article on dioxin in sewage, conflict between researchers' interpretations of scientific results decreased recipients' certainty in their own prior beliefs and increased the perceived credibility of the scientists involved. In the wolf article, however, conflict between researchers increased personal certainty and decreased the perceived credibility of the scientists (Jensen & Hurley, 2012).
These mixed results suggest differences in how much people generally rely on scientific information to understand a particular issue. For example, dioxin in sewage can have personal health effects that would be difficult to see or understand without some scientific information. In other words, the topic might be one in which technical uncertainty is perceived as a more relevant part of the issue for most people. For wolf reintroduction, in contrast, scientific information on the impacts of wolves on ecosystems might not be as salient in opinion formation. Instead, people’s personal views on the value of the wilderness, the role of humans in nature, perceptions of wolves as a personal risk or benefit, and other concerns could play a larger role than scientific information and associated uncertainty.
Some of the literature on public perceptions of uncertainty in science focuses on the influences of positive and negative frames or, similarly, gain and loss frames. Famous studies by psychologists Tversky and Kahneman (1981, 1984) examined how people made different decisions depending on whether choices highlighted gains or losses. For example, in the gain-framed options, participants could choose between a sure gain or a gamble of equivalent expected value that offered a 25% chance of gaining more but a 75% chance of gaining nothing. In the loss frame, participants received the reverse: a choice between a sure loss or a gamble with a 75% chance of losing more but a 25% chance of losing nothing. In the gain frame, people tended to choose the sure thing—the 100% chance of gain. But in the loss frame, when faced with what seems like a sure loss, people were less risk averse: they were more likely to choose the gamble (a 25% chance of losing nothing but a 75% chance of losing more), becoming willing to take risks to try to avoid or minimize the loss (Kahneman & Tversky, 1984; Tversky & Kahneman, 1981).
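The pattern is easiest to see with concrete numbers. The short sketch below uses hypothetical dollar amounts (not the original experimental stimuli) chosen so that, within each frame, the gamble has the same expected value as the sure option; the framing, not the arithmetic, drives the different choices.

```python
# Worked numbers for the framing pattern described above (hypothetical dollar
# amounts chosen so each gamble matches its sure option in expected value;
# not the exact stimuli from Tversky and Kahneman's experiments).

def expected_value(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

# Gain frame: a sure gain of $250 vs. a 25% chance of gaining $1,000
# (and a 75% chance of gaining nothing).
sure_gain = expected_value([(1.00, 250)])
gamble_gain = expected_value([(0.25, 1000), (0.75, 0)])

# Loss frame: a sure loss of $750 vs. a 75% chance of losing $1,000
# (and a 25% chance of losing nothing).
sure_loss = expected_value([(1.00, -750)])
gamble_loss = expected_value([(0.75, -1000), (0.25, 0)])

print(f"Gain frame: sure option EV = {sure_gain:.0f}, gamble EV = {gamble_gain:.0f}")
print(f"Loss frame: sure option EV = {sure_loss:.0f}, gamble EV = {gamble_loss:.0f}")
# Although each pair is equivalent in expected value, people tend to pick the
# sure option in the gain frame and the gamble in the loss frame.
```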
Similar to gain and loss framing, growing bodies of work examine positive and negative framing, in which positive frames highlight benefits (or potential gains) and negative frames highlight risks (or potential losses). Only a few studies seem to focus on how those positive and negative frames affect perceptions of uncertainty in particular and on how people act on that information. One study found that negative frames can increase perceived uncertainty in scientific information (Kimmerle et al., 2015), perhaps by making people more attentive to uncertainty. Others have found that negative frames increase tolerance for conflicting information and change intentions to act on uncertainty (Morton et al., 2011; Smithson, 1999). For example, one study found that under negative frames people wanted precise information, even with conflict-based uncertainty (from scientists disagreeing), rather than vague, agreed-upon scientific interpretations (Smithson, 1999). Another found that negative frames on uncertainty information related to climate change made participants less likely to take action to avoid the risks of climate change (Morton et al., 2011). The authors of that study suggested that the effect could reflect that negative frames increased participants' feelings of powerlessness to act in the face of uncertainty (Morton et al., 2011).
The variety across the few studies on the effects of positive and negative frames on perceptions of uncertainty fits a larger pattern of inconsistent results from health and risk communication research on those effects (see, for meta-analyses and reviews of gain/loss frames, Batteux et al., 2019; Gallagher & Updegraff, 2012; O'Keefe & Jensen, 2008, 2009; O'Keefe & Nan, 2012; O'Keefe & Wu, 2012). As the Morton et al. (2011) study indicates, most of the research on positive and negative frames comes from risk communication contexts in which a clear action is desired. Researchers are interested in how portraying information in positive or negative ways can make people more likely to act in the desired way.
Most communication contexts involving scientific uncertainty might not have such clear-cut choices or desired outcomes. In many decision-making contexts, one must balance risks and benefits even as one's choices are possibly changing, all while grappling with unquantified uncertainties (Funtowicz & Ravetz, 2003; Renn, 1992). Moving to communication on less clear-cut areas of uncertainty, such as how we should develop and use self-driving cars in the United States, we find that not only the science itself but also the decision stakes have many more (unquantifiable) uncertainties attached. Relative to caveats and two-sided information, positive and negative frames are harder to apply, unless communicators have clear behavioral outcomes and bounded choices that they want people to make (which raises ethical questions about whether persuading people to make those choices is the appropriate approach).
Alongside the small but growing bodies of research on public perceptions of uncertainty summarized here, a handful of studies focus specifically on how information about reproducibility or replicability affects trust in the science and scientists involved. These studies tend to not position themselves within the work on perceptions of uncertainty in science. Similar to the studies of perceptions of uncertainty, though, almost all of these are experiments. Most focus only on views of psychological sciences and find that, in experiments, learning about problems of reproducibility in psychology reduced participants’ trust in existing psychological science research (Anvari & Lakens, 2019; Chopik et al., 2018) or in the psychology research community (Wingen et al., 2019).
The source of the lack of reproducibility or replicability seemed to influence participants' trust levels. Wingen et al. (2019), running experiments with respondents on Amazon Mechanical Turk, found that when the lack of reproducibility in psychological studies was attributed to questionable research practices—such as selective reporting—trust in the psychological science research community to 'do what was right' was lower than when it was attributed to the possibility of hidden moderators not controlled for in the study designs. However, people who received information only about the lack of reproducibility versus about the lack of reproducibility due to questionable practices did not have significantly different levels of trust (Wingen et al., 2019). This finding aligns with Anvari and Lakens's (2019) studies, which found that explicitly connecting replication failures to questionable research practices did not seem to result in different levels of trust than when people only heard about replication failures without an explanation of the cause. Both studies found that including information about researchers' efforts to improve transparency and open science did not raise levels of trust in either past or future psychological research (Anvari & Lakens, 2019) or in the research community (Wingen et al., 2019).
Each study also found several important caveats, however. Anvari and Lakens (2019), in an online experiment with participants primarily from Europe, found that learning about replication failures in the psychological sciences decreased trust in the reliability of past research but not reported trust in future results. Support for public funding of psychological sciences research also remained high regardless of the information participants received (Anvari & Lakens, 2019). Chopik et al. (2018), in a pre/post survey of undergraduates who attended a lecture on problems of reproducibility in the psychological sciences, found that after the lecture students were less likely to trust psychological research results. The authors stated that this result was not especially surprising, but emphasized that students were not any less likely to report wanting to enter the field for graduate school and had a significantly greater appreciation for study design features aimed at enhancing reproducibility (Chopik et al., 2018).
These studies do not examine how these views fit into broader views of science, nor how likely people are to be exposed to information about reproducibility and replicability in science and to what effect (see Rutjens et al., 2018, for an overview). Only one study, using a nationally representative survey, starts to give some insight into these questions. Asking a representative sample of Germans how news about issues of lack of reproducibility or replicability shaped their views of science, Mede et al. (2020) found that the vast majority of Germans had not heard about issues of reproducibility or replicability. Most, upon learning about the issues, believed they were indicators of science's processes for quality control and self-correction (Mede et al., 2020). Most also did not indicate that a lack of reproducibility and replicability at a given moment meant one could not trust science overall (Mede et al., 2020). The exception was respondents who identified as members of the populist far-right party Alternative for Germany (in German: Alternative für Deutschland, or AfD), who tended to take the lack of reproducibility and replicability as an indicator that scientific research is not trustworthy (Mede et al., 2020). This result is an example of how individuals' values and worldviews can shape their perceptions of true scientific uncertainty, and of how such true uncertainty can create space for, or add fuel to, disagreements about what to do with scientific information.
As fewer science journalists publish in traditional print outlets, more alternative sources for scientific information emerge through online and social media platforms (Brossard, 2013; Brossard & Scheufele, 2013; Brumfiel, 2009; Newman et al., 2017; H. P. Peters et al., 2014). In many ways, this development has democratized science communication—making it easier for more people to produce content and for more people to find and access it (Benkler, 2006; Cacciatore et al., 2014; Funk et al., 2017). This includes scientists and science and health institutions, who can now more directly communicate with publics, rather than relying solely on communication mediated through journalists (Bik & Goldstein, 2013; Broniatowski et al., 2016; Collins et al., 2016; Colson & Allan, 2011; Eysenbach, 2011; Guidry et al., 2017). People who are interested in science or a scientific issue can also more easily find nuanced, specialized, and catered information and even opportunities to discuss such information with scientists. People who do not actively seek out science information are also likely to come across it incidentally when they are on social media (Fletcher & Nielsen, 2017; Funk et al., 2017).
It is very easy for that exposure to include misleading or conflicting accounts of science information, as suggested by recent work on online misinformation and its spread across social media platforms (Garrett, 2017; Lewandowsky, Ecker et al., 2017; Vosoughi et al., 2018). Headlines written to capture attention and generate emotion get more clicks (Gardiner, 2015), and they are likely to be one of the few parts of an article that people see (Gabielkov et al., 2016). Scientists communicating directly through social media may air what traditionally might have been 'in-house' conversations in the scientific community, in which researchers debate theories, methods, and results (Yeo et al., 2017) or joke about mishaps in science (Simis-Wilkinson et al., 2018), with unknown implications for publics' perceptions of science and uncertainty. Many of these features—such as misleading headlines and misinformation—existed before social media (Scheufele & Krause, 2019), but the quantity, reach, speed, and personalization of communication and information can change the type and size of their effects.
One way in which the media can increase perceptions of uncertainty in a negative or misleading way is by increasing the likelihood that people will receive conflicting information (Dunwoody, 1999; Purcell et al., 2012) or misinformation (Garrett et al., 2016), whether the manufactured uncertainty described earlier or conflict-based examples of true scientific uncertainty. While no studies, to my knowledge, have examined either how prevalent this is or what its implications are for people's perceptions of science and uncertainty, research on public perceptions of fields like nutrition indicates that exposure to multiple conflicting stories can, understandably, overwhelm people and leave them unsure as to how they should act on the information obtained (Nagler, 2014; Ramondt & Ramirez, 2019; Ward et al., 2011). Online environments could easily exacerbate such 'whiplash' when reporting of scientific results seems to contradict other stories, and thereby heighten perceptions of uncertainty. It is not clear how or if contradiction across information sources might affect perceptions of the science itself, however, or if, in such cases, people would instead attribute their possible confusion to problems with reporting or with media more generally.
As some of the studies on perceptions of uncertainty conveyed through hedges and two-sided information illustrated, people distinguish between the scientists and journalists connected to a piece of information (Chang, 2015; Jensen, 2008; Ratcliff et al., 2018). National-level data from the Pew Research Center also indicate that Americans overwhelmingly attribute problems in science news coverage (i.e., inaccurate, untrustworthy, or confusing representations of the science) to journalists rather than to scientists (Funk et al., 2017). Journalists are seen as the bigger problem by 73% of Americans, while 24% think that scientists carry the brunt of the blame for problems in science coverage (Funk et al., 2017).
In addition to the potential for exposure to conflicting news stories, information on online platforms comes surrounded by many different cues that can shape perceptions of the story apart from the content of any given article itself. Comments, for example, can contain contradictory or misleading information and change the frame around an article. One study found that uncivil comments following an otherwise neutral article on nanotechnology increased risk perceptions of nanotechnology for those individuals who were already leaning toward not supporting the technology, further polarizing audiences (Anderson et al., 2014). This effect could have occurred as a result of an increase in perceptions of uncertainty around the information. In that case, uncertainty would provide those individuals with a rationale for more heavily relying on their prior views of the science issue rather than on information provided in the article itself. The potential for exposure to uncivil comments and to contradictory articles in online environments could mean that credible information ends up surrounded by conflict-producing cues that can increase perceptions of uncertainty and decrease perceptions of scientific credibility related to the article itself.
Of course, social media has the power to spread false and misleading information, especially information designed to breed conflict, as we have seen with recent disinformation campaigns, particularly during the COVID-19 pandemic (Barnes & Sanger, 2020; Garrett, 2017; Scheufele & Krause, 2019). The bad news is that not only are these sources of misinformation easy to create and share through online and social media, but they are also often designed specifically to spread and stick (Lazer et al., 2018; Vosoughi et al., 2018). Misinformation, once it enters someone's mind, is incredibly difficult to remove (Green & Donahue, 2011; Lewandowsky, Ecker et al., 2012; Seifert, 2002). It is possible, however, as the next section describes, to correct misinformation through many of the same features that allowed it to spread.
Exposure to multiple different sources and cues for information on a scientific topic can facilitate the effective communication of true scientific uncertainty. Through many of the same features that help spread misinformation, online and social media also offer outlets for alleviating misperceptions. Research on reducing misperceptions of science issues finds that context cues on social media, for example, can provide corrections that help people more accurately understand uncertainty in issues (Bode & Vraga, 2015; Vraga & Bode, 2017). In experiments, when a story with misinformation on genetically modified (GM) foods or vaccines on Facebook was followed by a “related stories” link to articles with more accurate information on the issue, readers had significantly reduced (and more scientifically accurate) perceptions of the uncertainty present in those fields (Bode & Vraga, 2015). Expert sources, such as the Centers for Disease Control and Prevention (CDC), can also effectively correct misperceptions through comments or posts following posts that contain misinformation (Bode & Vraga, 2015; Guidry et al., 2017; Vraga & Bode, 2017). One experiment modeling Twitter posts found that when the CDC tweeted a refutation to a tweet that incorrectly attributed the spread of the Zika virus to the release of GM mosquitoes, viewers of both tweets had significantly reduced misperceptions, especially viewers who were most likely to initially believe the misinformation (Vraga & Bode, 2017). Such tweets did not affect the credibility of the CDC (Vraga & Bode, 2017), suggesting that corrective tweets and messages could be an effective way to alleviate misperceptions of the uncertainty associated with science-related issues.
Misperceptions are notoriously hard to undo, especially when they align with strongly held values and worldviews (see, for examples, Hart & Nisbet, 2011; Nyhan & Reifler, 2010; Nyhan et al., 2014), but there are several ways to ensure that corrections will be more likely to work in the intended direction. Corrections are most effective when they offer a coherent story to fill the gap left by misperceptions or misinformation. The correction has to be as satisfactory in explaining the situation or issue as the misinformation was (Lewandowsky, Ecker, et al., 2012; Seifert, 2002; Walter & Murphy, 2018) and can be even more effective if it also explains why the original, misinformed claim emerged (Seifert, 2002). Evidence suggests that corrections should include a direct rebuttal of the misinformation (Bode & Vraga, 2015; Vraga & Bode, 2017; Walter & Murphy, 2018). It also appears that highlighting scientific consensus can be effective for some issues (Lewandowsky, Gignac, & Vaughan, 2012). Corrections are more likely to work if they align with or do not contradict respondents’ worldviews, especially the worldviews that made the misinformation attractive and easily understandable (Lewandowsky, Ecker, et al., 2012). Lewandowsky, Ecker, et al. (2012) and Walter & Murphy (2018) offer more detailed overviews and recommendations for effective corrections.
Social media has the potential not only to expose users to corrections that effectively communicate science and uncertainty, but also to other opportunities to learn through greater exposure to news about science. Many people have ‘incidental’ exposure to news when they go onto social media platforms, even if they went online for purposes that did not include newsgathering (Feezell, 2017; Kim et al., 2013; Lee & Kim, 2017). This phenomenon applies to science-related news as well (Funk et al., 2017). Only one in six people report actively seeking out science information, but 55% of Americans (and 79% of social media users) report coming across science information incidentally on social media (Funk et al., 2017). While no studies focus on the implications of incidental exposure online for science-related issues, the available evidence suggests that incidental exposure can increase knowledge, as well as the willingness to engage with issues and information.
Similarly, the opportunity to receive information incidentally or from multiple different information sources could also mean that people are more receptive to that information when it reaches them. Social cues and the multiple pathways that information moves through in the modern media environment also make it possible for information to reach people who would otherwise be less receptive to scientific information or less trusting of scientific sources. One way that this can occur is through cues that can make it less likely that individuals’ values and worldviews will lead them to avoid particular information. For example, user-generated social cues such as shares and likes can increase the likelihood that individuals will attend to information, even if that information does not align with their prior beliefs on the issue, by helping to bypass other, more ideological cues triggered by the article itself (Messing & Westwood, 2012).
In the realm of science information and uncertainty in particular, a study on the NASEM consensus report on the effects of genetically engineered crops (NASEM, 2017) found that after the report's release and its movement through news outlets and social media, public perceptions of genetically modified organisms (GMOs) moved more in line with the findings of the report, with the public seeing less risk but not more or less benefit (Howell et al., 2018). More interesting for understanding the role of the media environment in shaping perceptions, however, was that risk perceptions around GMOs decreased most among people who were least likely to be regular recipients of such information. Although the study did not pinpoint a cause or pathway, the results suggest that the scientific consensus information could have reached those individuals through other outlets that they found more credible or trustworthy than a scientific report from a committee of researchers.
Similarly, although scientists and scientific institutions often communicate with publics through interviews with journalists from general news outlets, the variety of platforms also means scientists have many more opportunities to communicate directly with publics, particularly through niche platforms. Of course, this can have mixed effects, as many scientists might not be particularly good at communicating beyond an audience similar to themselves. Given the many context cues that appear in online environments, and the potential for people variably to perceive or manipulate the context around a piece of information online, limits also exist on the extent to which scientists and other communicators can control their messages. But the opportunity for scientists and science communicators to directly engage with audiences through different platforms could improve the communication of science to the public, as surveys indicate that Americans find niche science sources to be more trustworthy sources of science information than general news sources (Funk et al., 2017). As more and more individuals rely on online and social media for information in general and on science in particular (Newman et al., 2017; Shearer & Gottfried, 2017), this mixture of effects will continue to play an increasingly important role in understanding public views of scientific uncertainty.
As the previous section suggests, people perceive uncertainty differently depending on their personal characteristics. Those characteristics fall largely into two categories: 1) the motivation and ability to find and process information on scientific uncertainty, and 2) value- and belief-based pathways that affect openness to and subsequent processing of information. I focus primarily on individual factors for which we have national-level data on their prevalence in the United States, such as trust in scientists, because of the NASEM committee's focus on communicating to a national audience. For readers who want to do more targeted communication with specific publics, however, and for researchers interested in these areas, I also include some discussion of other individual-level factors that shape different people's perceptions of and tolerance for uncertainty information.
Individuals' motivation and ability to seek out and understand information related to scientific uncertainty influence both their perceptions of that information and the actions they take based on those perceptions. The exact effects of these different characteristics in different communication and decision-making contexts will vary, but I mention them briefly here to highlight some of the key factors and available research, especially for those interested in more targeted communication. These factors include numeracy and science literacy (Kahan et al., 2012; NASEM, 2016; E. Peters et al., 2007), statistical reasoning abilities (Dunwoody & Griffin, 2013; Tversky & Kahneman, 1974), and self-efficacy, or one's perception of one's ability to do a certain task (Flemming et al., 2015; Morton et al., 2011; Vardeman & Aldoory, 2008). Perceived and actual ability shape the extent to which a given person can and will work to understand and act on information (Budescu et al., 2009; Einsiedel & Thorne, 1999; Fung et al., 2018; Griffin, 2016; E. Peters et al., 2007; Winter & Krämer, 2012).
One factor with potential implications for how people understand scientific information and uncertainty is their knowledge of scientific processes. According to the National Science Board's (NSB) Science & Engineering Indicators (2018), the majority of Americans have some understanding of probability and experimental design (64% and 51% qualify as understanding these, respectively) but not of what a scientific study is (only 23% qualify as understanding). The data come from items on the General Social Survey, an omnibus survey of a nationally representative sample of U.S. adults, and are not collected by the NSB itself. As with all surveys, however, there could be nonresponse bias. For example, if people who lack characteristics relevant to the survey questions, such as scientific knowledge, are less likely to respond to those particular items or to take the survey at all, the true rates of understanding might be even lower than the responses reflect. Overall, however, the numbers seem to indicate that the U.S. public has less understanding of what distinguishes scientific information from other forms of knowledge, which could affect how people perceive scientific uncertainty.
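A toy calculation can make the nonresponse concern concrete. The sketch below uses entirely hypothetical shares and response rates (none of these numbers come from the NSB or the General Social Survey) to show how an observed rate among respondents can overstate the true population rate when less knowledgeable people are less likely to respond.

```python
# Toy illustration (hypothetical numbers only) of how differential nonresponse
# can inflate an observed rate of understanding relative to the true rate.
pop_high_knowledge = 0.30   # hypothetical population share with high knowledge
pop_low_knowledge = 0.70

understands_if_high = 0.60  # hypothetical rates of 'qualifying as understanding'
understands_if_low = 0.10

true_rate = (pop_high_knowledge * understands_if_high
             + pop_low_knowledge * understands_if_low)

# Suppose high-knowledge people are twice as likely to respond to the survey.
response_high = 0.60
response_low = 0.30

responders_high = pop_high_knowledge * response_high
responders_low = pop_low_knowledge * response_low
observed_rate = ((responders_high * understands_if_high
                  + responders_low * understands_if_low)
                 / (responders_high + responders_low))

print(f"True rate of understanding:     {true_rate:.0%}")   # 25%
print(f"Observed rate among responders: {observed_rate:.0%}")  # ~33%
```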
The coding that the NSB applied for defining “understanding of scientific study” is somewhat conservative, and it is important not to take this data on its own as proof of a scientifically ignorant public. The U.S. public might have a sense of scientific studies that is not captured by the NSB measure but is relevant to how they understand and tolerate uncertainty in scientific information and their trust in that information. Additionally, knowledge in general, while often significant, rarely plays a large role on its own in shaping individuals’ opinions on an issue, in part because of the effects of information presentation and individuals’ experiences and values on how people act on their knowledge (Allum et al., 2008). The NSB indicators could suggest, however, that discussing uncertainty in the context of how these concepts and practices fit into scientific processes could be more effective with respect to building upon existing public knowledge of the scientific process than assuming that individuals will understand what makes a study scientific.
Even if people do not start with high knowledge or understanding of a science-related issue, they might have increased motivation to understand the uncertainty involved because of a number of issue-specific factors. These include factors such as individual interest, personal investment, or the perceived relevance of a particular issue, which may be heightened by direct involvement (as for example, in the case of a particular illness), with associated uncertainty (Blankenship & Holtgraves, 2016; Vardeman & Aldoory, 2008).
A growing body of work studies how emotions—particularly fear and anger—shape reactions and motivations related to uncertainty in information (e.g., Tannenbaum et al., 2015; Tiedens & Linton, 2001; Weeks, 2015). Fear and anger, while both negatively valenced emotions, appear to have opposite relationships to uncertainty, with fear associated with heightened uncertainty and a drive to seek out more information, and anger with heightened certainty and less attention to detail or nuance (Tiedens & Linton, 2001).
Further, some people have higher and lower tolerances for uncertainty regardless of the situation. Some people have a higher need for cognition, or need to work through the complexity of a topic, which can mean preferring more information even if it means more uncertainty (Winter & Krämer, 2012). Others tend to have low tolerance for uncertainty and low openness to information that could increase uncertainty, such as people with more dogmatic and authoritarian tendencies (Altemeyer, 1996; Rokeach, 1960). This discomfort with uncertainty becomes especially strong when that uncertainty seems to disrupt closely held values and beliefs.
When people come across scientific information, they interpret it in ways that are shaped by values related to the information. Much of this processing is called directional motivated reasoning (Kunda, 1990). People are motivated to interpret information in ways biased by their held beliefs to help prevent belief-incongruent information from triggering discomfort, or cognitive dissonance (see, for overviews, Festinger, 1957; Kunda, 1990). In general, the more strongly held the belief about one’s self or the way that the world does or should work, the more uncomfortable and threatening information contrary to that belief is.
Because the specific beliefs that are relevant across issues and individuals will vary, this overview focuses on two more general beliefs that can apply across science communication settings and play a role in reactions to scientific uncertainty: trust and epistemic beliefs.
Trust and Confidence in Scientists. Trust can shape how individuals view information from different sources as well as the opinions that they form about issues related to that information (Freudenburg, 1993; Freudenburg & Pastor, 1992; Kasperson et al., 1992; R. G. Peters et al., 1997). It plays a key role in how people navigate uncertainty (Engdahl & Lidskog, 2014; Kasperson, 1992), and trust in scientists can affect perceived risk in the face of uncertainty, typically by reducing risk perceptions (Ho et al., 2011; Siegrist, 2000; Siegrist et al., 2012; Wachinger et al., 2013). Additionally, levels of trust in scientists relative to trust in other relevant actors—or the 'trust gap' between actors—for an issue can play a significant role in views of that issue (Priest et al., 2003).
In the United States, high trust in scientists has remained stable over the past few decades (Krause et al., 2019). Around 40% of Americans indicate that they have a “great deal of confidence” in the scientific community (Krause et al., 2019). Compared to all other institutions in the United States, this level of confidence is especially high: comparable to confidence in the medical community and below only confidence in the military in the past two decades. This confidence has been more stable than has confidence in other institutions, as well (Krause et al., 2019). If trust gaps play a role in how people respond to uncertainty in scientific information, then it is particularly important that the scientific community ranks ahead of other relevant institutions people look to when forming opinions on science-related issues.
Trust is also shaped by issue-relevant experiences and values that affect information processing, such as those captured by political ideology and religious views. Because of these interactions, despite overall high trust levels at the national level, trust within particular groups varies depending on the values held by individuals (Funk et al., 2019; Krause et al., 2019). For example, those who identify as members of the Republican Party today are less confident in the scientific community than are other people (Krause et al., 2019). Similar gaps exist between Americans who identify as highly religious and those who do not (those who are not religious indicating greater confidence), across people of different religions, and between those who live in urban versus rural areas (urban residents indicating greater confidence) (Krause et al., 2019). Despite some movement and gaps within groups, however, the U.S. public generally has consistent and high confidence in the scientific community.
Briefly, on a related note, in addition to possibly relating to different levels of trust on certain science issues, political ideology can also affect what kinds of news people see and pay attention to (Garrett, 2009a, 2009b; Garrett et al., 2016; Stroud, 2010). It is less clear, however, how this selection and these differing levels of trust play out in how people with different political ideologies receive and reason through science-related information and uncertainty in science in general. Evidence suggests that Republicans and Democrats are similarly likely to be active consumers of science information, to report being very or somewhat interested in science news, and to say they sometimes or often consume science-related entertainment (Funk et al., 2017). Only 32% of Republicans and 27% of Democrats say that they see news reporting on disagreement among scientific experts (or the conflict-based uncertainty described earlier), and 22% of Republicans and 16% of Democrats say they see news reports about scientific research that seem made up. Republicans are more likely than Democrats, however, to believe that news media do a poor job of covering science, which is in line with partisan differences in ratings of news media in general (Funk et al., 2017).
As we see with the current pandemic, although partisan and other divides can appear on beliefs related to the issues and levels of certainty involved, especially when it comes to belief in misinformation (Schaeffer, 2020), broad public consensus in perceptions of the scientific uncertainty involved also exists. For example, recent data show that when it comes to research concerning COVID-19, most Democrats and Republicans think both that the core scientific understanding of the virus is sound (74% and 66%, respectively) and that it makes sense that studies show conflicting evidence as the research improves (83% and 72%, respectively) (Tyson & Spencer, 2020). The evidence suggests, then, that communicators should not assume that Americans have widely different interest in, exposure to, or ability to reason through science-related and uncertainty information in general depending on partisan identity, unless there is evidence of partisan-related, value-based divisions on particular aspects of the issue or information being communicated.
Epistemic Beliefs and Views of Science. Despite the wide range of data on trust in scientists, there do not appear to be measures capturing general trust in science. Data on views of the nature of scientific knowledge, however, often called epistemic beliefs, do exist, and research finds that such beliefs significantly relate to how people process information on scientific uncertainty. Epistemic beliefs capture whether one sees scientific knowledge as absolute truth, relative, or contingent, with such beliefs typically being seen as a progression leading toward increasingly sophisticated epistemic beliefs (Sinatra et al., 2014). Individuals who hold more sophisticated beliefs are more likely to critically evaluate and perceive uncertainty in scientific information (Feinkohl et al., 2016; Kimmerle et al., 2015; Rabinovich & Morton, 2012) but also more likely to view information containing uncertainty as persuasive (Rabinovich & Morton, 2012). More important for the communication of uncertainty, however, is that epistemic beliefs can change over shorter time spans, depending on communication. For example, more sophisticated beliefs can be induced by explaining the aims of science and of the scientific process (Rabinovich & Morton, 2012) prior to communicating uncertainty information.
As these findings highlight, the perception of uncertainty in scientific results is not necessarily undesirable, and it may reflect accurate views of the strengths and limitations of scientific information. Research on exposure to conflicting information, for example, found that such exposure decreased beliefs that it is possible to find one best solution to health issues (Kienhues et al., 2011), which is likely a more accurate view of the context-dependence of decision making in health care. Participants in that study did not become 'relativized' (seeing scientific knowledge as all relative or opinion-based) or helpless, however, and were able to gain knowledge about health and medical information (Kienhues et al., 2011). This also aligns with Anvari and Lakens's (2019) findings that learning about replication failures in psychological research decreased trust in past research but not in future research.
Further, communicating uncertainty in news stories about a particular science topic does not appear to change beliefs about the nature of science in general (A. Retzbach & Maier, 2015). Altogether, these findings suggest that uncertainty in a particular story or area of science will not necessarily bleed into views of science overall, and that people, at least those in these studies, expect uncertainty in science and do not view it negatively, per se. People appear to expect a level of uncertainty in scientific information. This is counter to the belief of many scientists that the American public has a primarily risk-focused view of science (Braun et al., 2015) and that they are unable to understand or ‘correctly’ handle scientific information (Besley & Nisbet, 2011; Davies, 2008; Ecklund et al., 2012). It is also commonly thought that providing the public with information on uncertainty will trigger distrust, panic, and confusion (Frewer et al., 2003).
In fact, those who are most interested in science and hold positive attitudes toward science in general can also be those most likely to perceive scientific uncertainty (Kohl et al., 2016; J. Retzbach et al., 2016), and exposure to scientific uncertainty can sometimes increase interest in a particular science issue (A. Retzbach & Maier, 2015). These relationships between uncertainty in science and interest in science illustrate how the perception of uncertainty need not entail being ‘anti-science.’ As the studies above indicate, often the opposite is true—perhaps because those with the greatest familiarity with science are more likely to hold more complicated epistemic beliefs, rather than viewing scientific information as absolute or relative.
With respect to public perceptions of uncertainty related to scientific information, we can draw several conclusions based on the existing literature described in this article. Because of the highly context-dependent nature of perceptions of uncertainty, views of uncertainty in a given field are unlikely to bleed over fully into broader perceptions of scientific uncertainty as such. Additionally, people often hold different views regarding the perceived rigidity of different fields (e.g., mathematics is seen as most structured and social fields as less structured; Buehl & Alexander, 2001) and have varying levels of trust in different scientific actors depending on the issue or field (e.g., Funk et al., 2019). Therefore, views of 'science' are likely better understood as views of multiple scientific issues, fields, and actors, which vary in saliency across different populations, over time, and depending on context.
Even with the variation across individuals, issues, and information sources, however, the literature reviewed above suggests several considerations that can aid individuals who want to communicate information related to uncertainty in science, including uncertainty related to reproducibility and replicability or the lack thereof. The most relevant sections of the article that each recommendation draws from are listed after each consideration, for reference.
People appear to expect uncertainty in science, and communicating through hedges/caveats can increase the perceived credibility of that information (see Hedges, Two-Sided Information, and Conflicts Versus Caveats).
Conflict between researchers in stories (consensus uncertainty) can unnecessarily increase perceptions of uncertainty. If conflict and disagreement are a necessary part of the story, however, communicators can present existing disagreements effectively by explaining the sources of uncertainty and disagreement and the steps that researchers are taking to address them (see Hedges).
By addressing common sources of disagreement within one story, communicators can provide context for people when they encounter additional, potentially conflicting, information.
If misperceptions of the levels, sources, or implications of uncertainty arise, corrections are most effective when they coherently ‘fill the gap’ in the story that the misperception originally filled and explain why the confusion arose. Corrections are also more effective when they a) focus on the misperception itself (rather than blaming the actors who spread it) and b) align with, or at least do not clash with, the beliefs and worldviews of those most likely to hold the misperception (see Benefits of the Media Environment).
Additionally, although the spread of information through traditional and online sources is difficult to predict or control, online and social media in particular offer outlets for effective corrections of misinformation, such as tweets from expert sources and links to related stories.
Finally, returning to concerns about communicating reproducibility and replicability in science, the last and more general recommendation is as follows:
Effective communication of uncertainty related to reproducibility and replicability should address a) the role of reproducibility and replicability in scientific research and b) why attempts to reproduce or replicate certain studies sometimes fail, and what a failed replication can mean.
The last recommendation above points to an area where we need much more research, especially on how information related to reproducibility and replicability affects perceptions of uncertainty and of science more broadly. In particular, the two studies that tested whether explaining the steps researchers are taking to address problems of reproducibility would restore trust did not find that such explanations increased the trust that studies from the field received (Anvari & Lakens, 2019; Wingen et al., 2019). Why this is the case, how long such effects hold, and whether different information formats are more effective are just some of the questions that require further research. Such studies should also continue to distinguish between the causes of a lack of reproducibility and replicability. As mentioned in the introduction, failures of reproducibility and replicability can stem from a wide range of causes, including methods that are not sensitive enough to capture small effects, rare conditions that are difficult to recreate, human error, and, in the worst case, fraud. Each could have very different impacts on how people perceive a specific failure of reproducibility or replicability, and each has a different relationship to uncertainty and to the steps needed to address its causes.
Because of its context-dependence, uncertainty rarely seems to function as the sole or determining factor in views of a particular science-related issue. As research on science in various policy arenas has highlighted, uncertainty may create space for other values, considerations, and goals to come into play as stakeholders debate whether to tolerate the uncertainty, which may entail temporary inaction or delay (as, for example, with respect to COVID-19), or to act on the information at hand (Campbell, 1985; Post & Maier, 2016; Renn, 1992).
The space for discussion and interpretation that uncertainty creates is part of why it is necessary to communicate scientific uncertainty effectively, so that actors can make well-informed decisions. It is also why it is important to understand how perceptions of uncertainty are shaped by the presentation of information (both within and across stories and media) as well as by individual values, goals, and beliefs concerning scientific information and issues. The literature suggests that uncertainty in scientific information does not inevitably translate into a lack of credibility. For many members of the public, uncertainty in science is expected, and information about uncertainty can serve as an indicator of the credibility of the messenger as well as of the information itself. Information on replicability and reproducibility in particular can also be seen as part of science’s process of self-correction and quality control (Mede et al., 2020).
Communicating uncertainty appears to be most effective when it includes caveats from scientists involved in the research and avoids presenting the issue as conflict between scientists. In the case of disagreements concerning reproducibility and replicability, discussion of the sources of uncertainty and conflict will be more effective if it provides context for the uncertainty as it relates to the particular study or topic and offers insight into how reproducibility, replicability, and uncertainty relate to scientific research and processes in general. This can be done not only by explicitly acknowledging uncertainty, its sources, and its potential implications, but also by describing the steps that researchers are taking to address the uncertainty and how reproducibility and replicability fit into that process.
Addressing issues of replicability and reproducibility and their larger implications for science will help us to develop a clear and accurate picture of public perceptions of science and of what those perceptions mean for how we view, discuss, and act on scientific information. While uncertainty can be manufactured, manipulated, and weaponized to argue for a particular course of (in)action, true uncertainty, whether in scientific information itself or in our understanding of its implications for society, opens up space for disagreement and dialogue about how we want to move forward as individuals and as a society.
This manuscript is based on a report commissioned by the National Academies of Sciences, Engineering, and Medicine, Committee on Reproducibility and Replicability in Science.
This article is based on work funded by the National Academies of Sciences, Engineering, and Medicine (NASEM). Any views expressed are those of the author and do not necessarily represent the views of the NASEM.
Allum, N., Sturgis, P., Tabourazi, D., & Brunton-Smith, I. (2008). Science knowledge and attitudes across cultures: A meta-analysis. Public Understanding of Science, 17(1), 35–54. https://doi.org/10.1177/0963662506070159
Altemeyer, B. (1996). The authoritarian specter. Harvard University Press.
Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2014). The “Nasty Effect”: Online incivility and risk perceptions of emerging technologies. Journal of Computer-Mediated Communication, 19(3), 373–387. https://doi.org/10.1111/jcc4.12009
Anvari, F., & Lakens, D. (2019). The replicability crisis and public trust in psychological science. Comprehensive Results in Social Psychology, 3(3), 266–286. https://doi.org/10.1080/23743603.2019.1684822
Barnes, J. E., & Sanger, D. E. (2020, July 28). Russian intelligence agencies push disinformation on pandemic. The New York Times. https://www.nytimes.com/2020/07/28/us/politics/russia-disinformation-coronavirus.html
Batteux, E., Ferguson, E., & Tunney, R. J. (2019). Do our risk preferences change when we make decisions for others? A meta-analysis of self-other differences in decisions involving risk. PLoS One, 14(5), Article e0216566. https://doi.org/10.1371/journal.pone.0216566
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. Yale University Press.
Besley, J. C., & Nisbet, M. C. (2011). How scientists view the public, the media, and the political process. Public Understanding of Science, 22(6), 644–659. https://doi.org/10.1177/0963662511418743
Bik, H. M., & Goldstein, M. C. (2013). An introduction to social media for scientists. PLoS Biology, 11(4), Article e1001535. https://doi.org/10.1371/journal.pbio.1001535
Binder, A. R., Hillback, E. D., & Brossard, D. (2016). Conflict or caveats? Effects of media portrayals of scientific uncertainty on audience perceptions of new technologies. Risk Analysis, 36(4), 831–846. https://doi.org/10.1111/risa.12462
Blankenship, K. L., & Holtgraves, T. (2016). The role of different markers of linguistic powerlessness in persuasion. Journal of Language and Social Psychology, 24(1), 3–24. https://doi.org/10.1177/0261927x04273034
Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619–638. https://doi.org/10.1111/jcom.12166
Braun, M., Starkbaum, J., & Dabrock, P. (2015). Safe and sound? Scientists' understandings of public engagement in emerging biotechnologies. PLoS One, 10(12), Article e0145033. https://doi.org/10.1371/journal.pone.0145033
Broniatowski, D. A., Hilyard, K. M., & Dredze, M. (2016). Effective vaccine communication during the Disneyland measles outbreak. Vaccine, 34(28), 3225–3228. https://doi.org/10.1016/j.vaccine.2016.04.044
Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences, 110(Suppl. 3), 14096–14101. https://doi.org/10.1073/pnas.1212744110
Brossard, D., & Scheufele, D. A. (2013). Science, new media, and the public. Science, 339(6115), 40–41. https://doi.org/10.1126/science.1232329
Brumfiel, G. (2009). Science journalism: Supplanting the old media? Nature, 458(7236), 274–277. https://doi.org/10.1038/458274a
Budescu, D. V., Broomell, S., & Por, H.-H. (2009). Improving communication of uncertainty in the reports of the intergovernmental panel on climate change. Psychological Science, 20(3), 299–308. https://doi.org/10.1111/j.1467-9280.2009.02284.x
Buehl, M. M., & Alexander, P. A. (2001). Beliefs about academic knowledge. Educational Psychology Review, 13(4), 385–418. https://doi.org/10.1023/A:1011917914756
Cacciatore, M. A., Scheufele, D. A., & Corley, E. A. (2014). Another (methodological) look at knowledge gaps and the internet's potential for closing them. Public Understanding of Science, 23(4), 376–394. https://doi.org/10.1177/0963662512447606
Campbell, B. L. (1985). Uncertainty as symbolic action in disputes among experts. Social Studies of Science, 15(3), 429–453. https://doi.org/10.1177/030631285015003002
Chang, C. (2015). Motivated processing: How people perceive news covering novel or contradictory health research findings. Science Communication, 37(5), 602–634. https://doi.org/10.1177/1075547015597914
Chopik, W. J., Bremner, R. H., Defever, A. M., & Keller, V. N. (2018). How (and whether) to teach undergraduates about the replication crisis in psychological science. Teaching of Psychology, 45(2), 158–163. https://doi.org/10.1177/0098628318762900
Collins, K., Shiffman, D., & Rock, J. (2016). How are scientists using social media in the workplace? PLoS One, 11(10), Article e0162680. https://doi.org/10.1371/journal.pone.0162680
Colson, V., & Allan, S. (2011). Science blogs as competing channels for the dissemination of science news. Journalism: Theory, Practice & Criticism, 12(7), 889–902. https://doi.org/10.1177/1464884911412834
Davies, S. R. (2008). Constructing communication: Talking to scientists about talking to the public. Science Communication, 29(4), 413–434. https://doi.org/10.1177/1075547008316222
Dixon, G. N., & Clarke, C. E. (2012). Heightening uncertainty around certain science. Science Communication, 35(3), 358–382. https://doi.org/10.1177/1075547012458290
Dunwoody, S. (1999). Scientists, journalists, and the meaning of uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 59–80). Lawrence Erlbaum.
Dunwoody, S., & Griffin, R. J. (2013). Statistical reasoning in journalism education. Science Communication, 35(4), 528–538. https://doi.org/10.1177/1075547012475227
Durik, A. M., Britt, M. A., Reynolds, R., & Storey, J. (2008). The effects of hedges in persuasive arguments: A nuanced analysis of language. Journal of Language and Social Psychology, 27(3), 217–234. https://doi.org/10.1177/0261927X08317947
Ecklund, E. H., James, S. A., & Lincoln, A. E. (2012). How academic biologists and physicists view science outreach. PLoS One, 7(5), Article e36240. https://doi.org/10.1371/journal.pone.0036240
Einsiedel, E., & Thorne, B. (1999). Public responses to uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 43–58). Lawrence Erlbaum.
Engdahl, E., & Lidskog, R. (2014). Risk, communication and trust: Towards an emotional understanding of trust. Public Understanding of Science, 23(6), 703–717. https://doi.org/10.1177/0963662512460953
Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13(4), Article e123. https://doi.org/10.2196/jmir.2012
Feezell, J. T. (2017). Agenda setting through social media: The importance of incidental news exposure and social filtering in the digital era. Political Research Quarterly, 71(2), 482–494. https://doi.org/10.1177/1065912917744895
Feinkohl, I., Flemming, D., Cress, U., & Kimmerle, J. (2016). The impact of epistemological beliefs and cognitive ability on recall and critical evaluation of scientific information. Cognitive Process, 17(2), 213–223. https://doi.org/10.1007/s10339-015-0748-z
Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
Fischhoff, B., & Davis, A. L. (2014). Communicating scientific uncertainty. Proceedings of the National Academy of Sciences, 111(Suppl. 4), 13664–13671. https://doi.org/10.1073/pnas.1317504111
Flemming, D., Feinkohl, I., Cress, U., & Kimmerle, J. (2015). Individual uncertainty and the uncertainty of science: The impact of perceived conflict and general self-efficacy on the perception of tentativeness and credibility of scientific information. Frontiers in Psychology, 6, Article 1859. https://doi.org/10.3389/fpsyg.2015.01859
Flemming, D., Feinkohl, I., Cress, U., & Kimmerle, J. (2017). User comments about research findings: How conflictual information in online science journalistic articles influences laypeople’s understanding of scientific tentativeness. Communications, 42(4), 465–480. https://doi.org/10.1515/commun-2017-0037
Fletcher, R., & Nielsen, R. K. (2017). Are people incidentally exposed to news on social media? A comparative analysis. New Media & Society, 20(7), 2450–2468. https://doi.org/10.1177/1461444817724170
Freudenberg, W. R., & Pastor, S. K. (1992). Public responses to technological risks: Toward a sociological perspective. The Sociological Quarterly, 33(3), 389–412. https://doi.org/10.1111/j.1533-8525.1992.tb00381.x
Freudenburg, W. R. (1993). Risk and recreancy: Weber, the division of labor, and the rationality of risk perceptions. Social Forces, 71(4), 909–932. https://doi.org/10.1093/sf/71.4.909
Frewer, L. J., Hunt, S., Brennan, M., Kuznesof, S., Ness, M., & Ritson, C. (2003). The views of scientific experts on how the public conceptualize uncertainty. Journal of Risk Research, 6(1), 75–85. https://doi.org/10.1080/1366987032000047815
Fung, T. K. F., Griffin, R. J., & Dunwoody, S. (2018). Testing links among uncertainty, affect, and attitude toward a health behavior. Science Communication, 40(1), 33–62. https://doi.org/10.1177/1075547017748947
Funk, C., Gottfried, J., & Mitchell, A. (2017). Science news and information today. Pew Research Center. http://www.journalism.org/2017/09/20/science-news-and-information-today/
Funk, C., Hefferon, M., Kennedy, B., & Johnson, C. (2019). Trust and mistrust in Americans' views of scientific experts. Pew Research Center. https://www.pewresearch.org/science/2019/08/02/trust-and-mistrust-in-americans-views-of-scientific-experts/
Funtowicz, S., & Ravetz, J. (2003). Post-normal science. In Internet Encyclopaedia of Ecological Economics (pp. 1–10). International Society for Ecological Economics.
Gabielkov, M., Ramachandran, A., Chaintreau, A., & Legout, A. (2016). Social clicks: What and who gets read on Twitter. Paper presented at ACM SIGMETRICS / IFIP Performance 2016, Antibes Juan-les-Pins, France.
Gallagher, K. M., & Updegraff, J. A. (2012). Health message framing effects on attitudes, intentions, and behavior: A meta-analytic review. Annals of Behavioral Medicine, 43(1), 101–116. https://doi.org/10.1007/s12160-011-9308-7
Gardiner, B. (2015, December 18). You'll be outraged at how easy it was to get you to click on this headline. WIRED. https://www.wired.com/2015/12/psychology-of-clickbait/
Garrett, R. K. (2009a). Echo chambers online? Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication, 14(2), 265–285. https://doi.org/10.1111/j.1083-6101.2009.01440.x
Garrett, R. K. (2009b). Politically motivated reinforcement seeking: Reframing the selective exposure debate. Journal of Communication, 59(4), 676–699. https://doi.org/10.1111/j.1460-2466.2009.01452.x
Garrett, R. K. (2017). The "echo chamber" distraction: Disinformation campaigns are the problem, not audience fragmentation. Journal of Applied Research in Memory and Cognition, 6(4), 370–376. https://doi.org/10.1016/j.jarmac.2017.09.011
Garrett, R. K., Weeks, B. E., & Neo, R. L. (2016). Driving a wedge between evidence and beliefs: How online ideological news exposure promotes political misperceptions. Journal of Computer-Mediated Communication, 21(5), 331–348. https://doi.org/10.1111/jcc4.12164
Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 1–12. https://doi.org/10.1126/scitranslmed.aaf5027
Green, M. C., & Donahue, J. K. (2011). Persistence of belief change in the face of deception: The effect of factual stories revealed to be false. Media Psychology, 14(3), 312–331. https://doi.org/10.1080/15213269.2011.598050
Griffin, R. J. (2016). Scientific uncertainty in media content: Some reflections on this special issue. Public Understanding of Science, 25(8), 1009–1013. https://doi.org/10.1177/0963662516674649
Guidry, J. P. D., Jin, Y., Orr, C. A., Messner, M., & Meganck, S. (2017). Ebola on Instagram and Twitter: How health organizations address the health crisis in their social media engagement. Public Relations Review, 43(3), 477–486. https://doi.org/10.1016/j.pubrev.2017.04.009
Gustafson, A., & Rice, R. E. (2020). A review of the effects of uncertainty in public science communication. Public Understanding of Science, 29(6), 1–20. https://doi.org/10.1177/0963662520942122
Hart, P. S., & Nisbet, E. C. (2011). Boomerang effects in science communication. Communication Research, 39(6), 701–723. https://doi.org/10.1177/0093650211416646
Ho, S. S., Scheufele, D. A., & Corley, E. A. (2011). Value predispositions, mass media, and attitudes toward nanotechnology: The interplay of public and experts. Science Communication, 33(2), 167–200. https://doi.org/10.1177/1075547010380386
Hosman, L. A. (1989). The evaluative consequences of hedges, hesitations, and intensifiers: Powerful and powerless speech styles. Human Communication Research, 15(3), 383–406. https://doi.org/10.1111/j.1468-2958.1989.tb00190.x
Howell, E. L., Wirz, C. D., Brossard, D., Jamieson, K. H., Scheufele, D. A., Winneg, K. M., & Xenos, M. A. (2018). National Academy of Sciences report on genetically engineered crops influences public discourse. Politics and the Life Sciences, 37(2), 250–261. https://doi.org/10.1017/pls.2018.12
Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists and journalists credibility. Human Communication Research, 34(3), 347–369. https://doi.org/10.1111/j.1468-2958.2008.00324.x
Jensen, J. D., & Hurley, R. J. (2012). Conflicting stories about public scientific controversies: Effects of news convergence and divergence on scientists' credibility. Public Understanding of Science, 21(6), 689–704. https://doi.org/10.1177/0963662510387759
Jensen, J. D., Pokharel, M., Scherr, C. L., King, A. J., Brown, N., & Jones, C. (2017). Communicating uncertain science to the public: How amount and source of uncertainty impact fatalism, backlash, and overload. Risk Analysis, 37(1), 40–51. https://doi.org/10.1111/risa.12600
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732–735. https://doi.org/10.1038/nclimate1547
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350. https://doi.org/10.1037/0003-066X.39.4.341
Kasperson, R. E. (1992). The social amplification of risk: Progress in developing an integrative framework. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 153–178). Praeger.
Kasperson, R. E., Golding, D., & Tuler, S. (1992). Social distrust as a factor in siting hazardous facilities and communicating risks. Journal of Social Issues, 48(4), 161–187. https://doi.org/10.1111/j.1540-4560.1992.tb01950.x
Kienhues, D., Stadtler, M., & Bromme, R. (2011). Dealing with conflicting or consistent medical information on the web: When expert information breeds laypersons' doubts about experts. Learning and Instruction, 21(2), 193–204. https://doi.org/10.1016/j.learninstruc.2010.02.004
Kim, Y., Chen, H.-T., & Gil de Zúñiga, H. (2013). Stumbling upon news on the Internet: Effects of incidental news exposure and relative entertainment use on political engagement. Computers in Human Behavior, 29(6), 2607–2614. https://doi.org/10.1016/j.chb.2013.06.005
Kimmerle, J., Flemming, D., Feinkohl, I., & Cress, U. (2015). How laypeople understand the tentativeness of medical research news in the media. Science Communication, 37(2), 173–189. https://doi.org/10.1177/1075547014556541
Kohl, P. A., Kim, S. Y., Peng, Y., Akin, H., Koh, E. J., Howell, A., & Dunwoody, S. (2016). The influence of weight-of-evidence strategies on audience perceptions of (un)certainty when media cover contested science. Public Understanding of Science, 25(8), 976–991. https://doi.org/10.1177/0963662515615087
Kortenkamp, K. V., & Basten, B. (2015). Environmental science in the media: Effects of opposing viewpoints on risk and perceptions of uncertainty. Science Communication, 37(3), 287–313. https://doi.org/10.1177/1075547015574016
Krause, N. M., Brossard, D., Scheufele, D. A., Xenos, M. A., & Franke, K. (2019). Trends: Americans' trust in science and scientists. Public Opinion Quarterly, 83(4), 817–836. https://doi.org/10.1093/poq/nfz041
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Lee, J. K., & Kim, E. (2017). Incidental exposure to news: Predictors in the social media setting and effects on information gain online. Computers in Human Behavior, 75, 1008–1015. https://doi.org/10.1016/j.chb.2017.02.018
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the "post-truth" era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2012). The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change, 3(4), 399–404. https://doi.org/10.1038/nclimate1720
Mayweg-Paus, E., & Jucks, R. (2017). Conflicting evidence or conflicting opinions? Two-sided expert discussions contribute to experts’ trustworthiness. Journal of Language and Social Psychology, 37(2), 203–223. https://doi.org/10.1177/0261927x17716102
Mede, N. G., Schäfer, M. S., Ziegler, R., & Weisskopf, M. (2020). The "replication crisis" in the public eye: Germans' awareness and perceptions of the (ir)reproducibility of scientific research. Public Understanding of Science, 30(1), 91–102. https://doi.org/10.1177/0963662520954370
Messing, S., & Westwood, S. J. (2012). Selective exposure in the age of social media. Communication Research, 41(8), 1042–1063. https://doi.org/10.1177/0093650212466406
Morton, T. A., Rabinovich, A., Marshall, D., & Bretschneider, P. (2011). The future that may (or may not) come: How framing changes responses to uncertainty in climate change communications. Global Environmental Change, 21(1), 103–109. https://doi.org/10.1016/j.gloenvcha.2010.09.013
Nagler, R. H. (2014). Adverse outcomes associated with media exposure to contradictory nutrition messages. Journal of Health Communication, 19(1), 24–40. https://doi.org/10.1080/10810730.2013.798384
National Academies of Sciences, Engineering, and Medicine. (2016). Science literacy: Concepts, contexts, and consequences. The National Academies Press. https://doi.org/10.17226/23595
National Academies of Sciences, Engineering, and Medicine. (2017). Genetically engineered crops: Experiences and prospects. The National Academies Press. https://doi.org/10.17226/23395
National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. The National Academies Press. https://doi.org/10.17226/25303
National Science Board. (2018). Science & engineering indicators. https://www.nsf.gov/statistics/2018/nsb20181/
Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2017). Reuters Institute digital news report. http://www.digitalnewsreport.org/
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2
Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), 835–842. https://doi.org/10.1542/peds.2013-2365
O'Keefe, D. J., & Jensen, J. D. (2008). Do loss-framed persuasive messages engender greater message processing than do gain-framed messages? A meta-analytic review. Communication Studies, 59(1), 51–67. https://doi.org/10.1080/10510970701849388
O'Keefe, D. J., & Jensen, J. D. (2009). The relative persuasiveness of gain-framed and loss-framed messages for encouraging disease detection behaviors: A meta-analytic review. Journal of Communication, 59(2), 296–316. https://doi.org/10.1111/j.1460-2466.2009.01417.x
O'Keefe, D. J., & Nan, X. (2012). The relative persuasiveness of gain- and loss-framed messages for promoting vaccination: A meta-analytic review. Health Communication, 27(8), 776–783. https://doi.org/10.1080/10410236.2011.640974
O'Keefe, D. J., & Wu, D. (2012). Gain-framed messages do not motivate sun protection: A meta-analytic review of randomized trials comparing gain-framed and loss-framed appeals for promoting skin cancer prevention. International Journal of Environmental Research and Public Health, 9(6), 2121–2133. https://doi.org/10.3390/ijerph9062121
Patt, A. (2007). Assessing model-based and conflict-based uncertainty. Global Environmental Change, 17(1), 37–46. https://doi.org/10.1016/j.gloenvcha.2006.10.002
Peters, E., Hibbard, J., Slovic, P., & Dieckmann, N. (2007). Numeracy skill and the communication, comprehension, and use of risk-benefit information. Health Affairs, 26(3), 741–748. https://doi.org/10.1377/hlthaff.26.3.741
Peters, H. P., Dunwoody, S., Allgaier, J., Lo, Y.-Y., & Brossard, D. (2014). Public communication of science 2.0: Is the communication of science via the "new media" online a genuine transformation or old wine in new bottles? EMBO Reports, 15(7), 749–753. https://doi.org/10.15252/embr.201438979
Peters, R. G., Covello, V. T., & McCallum, D. B. (1997). The determinants of trust and credibility in environmental risk communication: An empirical study. Risk Analysis, 17(1), 43–54. https://doi.org/10.1111/j.1539-6924.1997.tb00842.x
Plesser, H. E. (2018). Reproducibility vs. replicability: A brief history of a confused terminology. Frontiers in Neuroinformatics, 11, Article 76. https://doi.org/10.3389/fninf.2017.00076
Post, S., & Maier, M. (2016). Stakeholders' rationales for representing uncertainties of biotechnological research. Public Understanding of Science, 25(8), 944–960. https://doi.org/10.1177/0963662516645039
Priest, S. H., Bonfadelli, H., & Rusanen, M. (2003). The "trust gap" hypothesis: Predicting support for biotechnology across national cultures as a function of trust in actors. Risk Analysis, 23(4), 751–766. https://doi.org/10.1111/1539-6924.00353
Purcell, K., Brenner, J., & Rainie, L. (2012). Search engine use 2012. Pew Research Center. https://www.pewinternet.org/2012/03/09/search-engine-use-2012/
Rabinovich, A., & Morton, T. A. (2012). Unquestioned answers or unanswered questions: Beliefs about science guide responses to uncertainty in climate change risk communication. Risk Analysis, 32(6), 992–1002. https://doi.org/10.1111/j.1539-6924.2012.01771.x
Ramondt, S., & Ramirez, A. S. (2019). Assessing the impact of the public nutrition information environment: Adapting the cancer information overload scale to measure diet information overload. Patient Education and Counseling, 102(1), 37–42. https://doi.org/10.1016/j.pec.2018.07.020
Ratcliff, C. L., Jensen, J. D., Christy, K., Crossley, K., & Krakow, M. (2018). News coverage of cancer research: Does disclosure of scientific uncertainty enhance credibility? In H. D. O'Hair (Ed.), Risk and health communication in an evolving media environment (pp. 156–175). Routledge.
Renn, O. (1992). The social arena concept of risk debates. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 179–195). Praeger.
Retzbach, A., & Maier, M. (2015). Communicating scientific uncertainty. Communication Research, 42(3), 429–456. https://doi.org/10.1177/0093650214534967
Retzbach, J., Otto, L., & Maier, M. (2016). Measuring the perceived uncertainty of scientific evidence and its relationship to engagement with science. Public Understanding of Science, 25(6), 638–655. https://doi.org/10.1177/0963662515575253
Rokeach, M. (1960). The open and closed mind. Basic Books.
Rutjens, B. T., Heine, S. J., Sutton, R. M., & van Harreveld, F. (2018). Attitudes towards science. In J. M. Olson (Ed.), Advances in experimental social psychology, Vol. 57 (pp. 125–165). Academic Press.
Schaeffer, K. (2020). A look at the Americans who believe there is truth to the conspiracy theory that COVID-19 was planned. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/07/24/a-look-at-the-americans-who-believe-there-is-some-truth-to-the-conspiracy-theory-that-covid-19-was-planned/
Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, 116(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115
Seifert, C. M. (2002). The continued influence of misinformation in memory: What makes a correction effective? The Psychology of Learning and Motivation, 41, 265–292. https://doi.org/10.1016/S0079-7421(02)80009-3
Shearer, E., & Gottfried, J. (2017). News use across social media platforms 2017. Pew Research Center. http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/
Siegrist, M. (2000). The influence of trust and perceptions of risk and benefits on the acceptance of gene technology. Risk Analysis, 20(2), 195–203. https://doi.org/10.1111/0272-4332.202020
Siegrist, M., Connor, M., & Keller, C. (2012). Trust, confidence, procedural fairness, outcome fairness, moral conviction, and the acceptance of GM field experiments. Risk Analysis, 32(8), 1394–1403. https://doi.org/10.1111/j.1539-6924.2011.01739.x
Simis-Wilkinson, M., Madden, H., Lassen, D., Su, L. Y.-F., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2018). Scientists joking on social media: An empirical analysis of #overlyhonestmethods. Science Communication, 40(3), 314–339. https://doi.org/10.1177/1075547018766557
Sinatra, G. M., Kienhues, D., & Hofer, B. K. (2014). Addressing challenges to public understanding of science: epistemic cognition, motivated reasoning, and conceptual change. Educational Psychologist, 49(2), 123–138. https://doi.org/10.1080/00461520.2014.916216
Smithson, M. (1999). Conflict aversion: Preference for ambiguity vs conflict in sources and evidence. Organizational Behavior and Human Decision Processes, 79(3), 179–198. https://doi.org/10.1006/obhd.1999.2844
Stroud, N. J. (2010). Polarization and partisan selective exposure. Journal of Communication, 60(3), 556–576. https://doi.org/10.1111/j.1460-2466.2010.01497.x
Tannenbaum, M. B., Hepler, J., Zimmerman, R. S., Saul, L., Jacobs, S., Wilson, K., & Albarracin, D. (2015). Appealing to fear: A meta-analysis of fear appeal effectiveness and theories. Psychological Bulletin, 141(6), 1178–1204. https://doi.org/10.1037/a0039729
Tiedens, L. Z., & Linton, S. (2001). Judgment under emotional certainty and uncertainty: The effects of specific emotions on information processing. Journal of Personality and Social Psychology, 81(6), 973–988. https://doi.org/10.1037/0022-3514.81.6.973
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Tyson, A., & Spencer, A. (2020). Most Americans say despite ongoing research, ways to limit spread of COVID-19 are well understood. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/07/08/most-americans-say-despite-ongoing-research-ways-to-limit-spread-of-covid-19-are-well-understood/
Vardeman, J. E., & Aldoory, L. (2008). A qualitative study of how women make meaning of contradictory media messages about the risks of eating fish. Health Communication, 23(3), 282–291. https://doi.org/10.1080/10410230802056396
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vraga, E. K., & Bode, L. (2017). Using expert sources to correct health misinformation in social media. Science Communication, 39(5), 621–645. https://doi.org/10.1177/1075547017731776
Wachinger, G., Renn, O., Begg, C., & Kuhlicke, C. (2013). The risk perception paradox—Implications for governance and communication of natural hazards. Risk Analysis, 33(6), 1049–1065. https://doi.org/10.1111/j.1539-6924.2012.01942.x
Walter, N., & Murphy, S. T. (2018). How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs, 85(3), 423–441. https://doi.org/10.1080/03637751.2018.1467564
Ward, P. R., Henderson, J., Coveney, J., & Meyer, S. (2011). How do South Australian consumers negotiate and respond to information in the media about food and nutrition? Journal of Sociology, 48(1), 23–41. https://doi.org/10.1177/1440783311407947
Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719. https://doi.org/10.1111/jcom.12164
Westervelt, A. (2019, January 10). How the fossil fuel industry got the media to think climate change was debatable. The Washington Post. https://www.washingtonpost.com/outlook/2019/01/10/how-fossil-fuel-industry-got-media-think-climate-change-was-debatable/?utm_term=.f340b9de8444
Wingen, T., Berkessel, J. B., & Englich, B. (2019). No replication, no trust? How low replicability influences trust in psychology. Social Psychological and Personality Science, 11(4), 454–463. https://doi.org/10.1177/1948550619877412
Winter, S., & Krämer, N. C. (2012). Selecting science information in web 2.0: How source cues, message sidedness, and need for cognition influence users' exposure to blog posts. Journal of Computer-Mediated Communication, 18(1), 80–96. https://doi.org/10.1111/j.1083-6101.2012.01596.x
Winter, S., Krämer, N. C., Rösner, L., & Neubaum, G. (2014). Don’t keep it (too) simple: How textual representations of scientific uncertainty affect laypersons' attitudes. Journal of Language and Social Psychology, 34(3), 251–272. https://doi.org/10.1177/0261927x14555872
Yeo, S. K., Liang, X., Brossard, D., Rose, K. M., Korzekwa, K., Scheufele, D. A., & Xenos, M. A. (2017). The case of #arseniclife: Blogs and Twitter in informal peer review. Public Understanding of Science, 26(8), 937–952. https://doi.org/10.1177/0963662516649806
©2020 Emily L. Howell. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.