Science Communication in the Context of Reproducibility and Replicability: How Non-scientists Navigate Scientific Uncertainty

Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research. But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty? To gain insight into this issue, we would need to know how those views are shaped by media coverage of the issue, but none of the emergent research on public views of reproducibility and replicability in science considers that question. We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science. Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science. It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

RUNNING HEAD: Uncertainty in the public eye

2019), as has been the case during the COVID-19 pandemic. Less nefariously, uncertainty may be generated by unintentionally misleading portrayals of science in news articles or other information sources. These kinds of manufactured uncertainty are the product of misinformation about science and science-related issues, or of accounts that misrepresent or only partially represent complex issues, intentionally or not, without giving context to explain (apparent) discrepancies or conflicts across information sources.
True scientific uncertainty can create greater space for uncertainty in decision-making and for manufactured uncertainty. Similarly, an argument about an area of true scientific uncertainty might emerge because of deep divisions concerning public decision-making related to that science. All three of these broad types of uncertainty related to science are also influenced by the factors described in this summary.
For clarity, I sort some of the key factors shaping public perceptions of science-related uncertainty (its forms, causes, and potential implications) into two categories: 1) how uncertainty appears both within and across information sources, and 2) how recipients process information, depending upon their motivations, abilities, values, and beliefs.
Only a small number of studies explicitly focus on communication and perceptions of true uncertainty in science, let alone perceptions of reproducibility and replicability. Hence, this overview also looks at the literature on perceptions of uncertainty in science-related issues, particularly uncertainty in decision-making on science, as addressed in literature on risk and decision-making (Fischhoff & Davis, 2014;Funtowicz & Ravetz, 2003;Renn, 1992). It draws as well on the extensive literatures from communication, psychology, and political science fields on media and public opinion, particularly on misinformation and manufactured uncertainty.
Most of the research on public perceptions of uncertainty focuses on how certain factors within a news article or piece of information influence perceptions of the uncertainty and of the science and actors involved. This section starts with an overview of those studies. Studies focus primarily on uncertainty as it appears through hedging language, or caveats; through showing two sides of an argument related to a science issue; and through positive and negative, or gain and loss, frames. While these studies largely rely on experiments in controlled settings, most people receive information, particularly science information, in the highly complex, uncontrolled settings of online and social media. Given this phenomenon, this summary then considers how media can influence the reception and mental processing of scientific uncertainty.

Hedges and credibility from caveats
In the literature on public perceptions of uncertainty, true scientific uncertainty generally appears through language that expresses caveats, or hedges. Hedges (also called "powerless language") include using words such as "might," "in some cases," and "possibly." They are key to communicating scientific information accurately because they communicate the uncertainty present in scientific results.
Although research on persuasive communication, such as editorials and legal arguments, found that hedges increase negative views of the speaker (Hosman, 1989) and the message (Blankenship & Holtgraves, 2016), research in science communication often finds the opposite effect: hedges can increase the credibility of the information or of sources associated with it (Durik, Britt, Reynolds, & Storey, 2008; Jensen, 2008; Ratcliff, Jensen, Christy, Crossley, & Krakow, 2018; Winter, Krämer, Rösner, & Neubaum, 2014). In one study, the most hedge-free version of scientific information was least effective in gaining credibility among readers (Winter et al., 2014). Other studies have similarly found that hedges in coverage of cancer research increased the perceived trustworthiness of the scientists and journalists involved (Jensen, 2008), while a replication of that study found that hedges did not affect views of scientists (which stayed high) but did increase the credibility of journalists associated with a story (Ratcliff et al., 2018). The findings vary with respect to exact effects, such as whether the increase affects the perceived credibility of the story, the scientists, or the journalists involved. Overall, however, these findings suggest that people perceive scientific information differently than other types of information, in that they expect and accept more uncertainty in scientific information.

The effects of conflict versus caveats in "two-sided" information
Importantly, however, research also suggests that hedges are more effective when researchers use them to discuss the results of their own work than when scientists criticize or add caveats to the work of other researchers (Jensen, 2008; Jensen et al., 2017). This distinction in who provides the caveats can be more important than the caveats themselves (Jensen et al., 2017), and it aligns with research on the effects of what is typically called "two-sided information." Two-sided information refers to information that shows the pros and cons of a particular approach, line of research or technology, or argument on a particular science-related issue. It contrasts with "one-sided information," which has a clear positive or negative slant (as described more in the section on positive and negative frames, below). Two-sided science information in experiments usually has either a) scientists disagreeing with each other on results or interpretations (conflict), or b) scientists providing counter-interpretations and caveats for their own work (caveats). When two-sided information has the former, through contradictions or conflicts between scientists or the addition of caveats to another researcher's work rather than to their own, the information can increase the perceived tentativeness of scientific results (Chang, 2015; Kortenkamp & Basten, 2015; Nagler, 2014), decrease the perceived credibility of the associated scientists and journalists (Chang, 2015; Kortenkamp & Basten, 2015), decrease support for the research or related technology (Chang, 2015; Flemming et al., 2017), and increase fatalistic views toward preventing or alleviating sources of risk and uncertainty (Jensen et al., 2017).
Uncertainty in the studies cited here arose through exposure to conflict-focused two-sided information, either from exposure to several news articles that seem to contradict each other (Chang, 2015; Flemming et al., 2015, 2017; Nagler, 2014) or from exposure to a single article containing conflict or contradictions between researchers' statements (Jensen et al., 2017; Kortenkamp & Basten, 2015). The studies did not specify the source of the conflict, however, such as whether it arose from disagreements about methods, results, or conclusions or from more personal conflicts between rival scientists or labs.
The good news is that two-sided information can also increase the credibility of the scientists involved when all researchers in the story provide both pro- and contra-arguments, or when all share caveats for understanding research results. In such cases, presenting the true uncertainty can increase the perceived credibility of the scientists (Mayweg-Paus & Jucks, 2017), or at least not decrease credibility the way conflict-focused two-sided information can (Kortenkamp & Basten, 2015). It seems, then, that presenting true uncertainty through two-sided stories is not a problem per se. It is uncertainty stemming from conflict between scientists in particular that can lead to negative perceptions. A recent review article by Gustafson and Rice (2020) supports this conclusion as well. Across 48 studies in which a single message conveyed uncertainty, what the authors called consensus uncertainty (uncertainty stemming from conflict between scientists) almost always led to negative results, meaning that readers were less likely to follow the recommendations in the information (Gustafson & Rice, 2020). In comparison, information about what the authors called technical uncertainty, such as error bars, confidence intervals, or probabilities, had positive or null effects (Gustafson & Rice, 2020).
The effects of conflict-focused information vary across studies. One study, for example, found that uncertainty stemming from disagreement between experts on the effects of hypothetical flooding from climate change made participants see flooding as more likely (and hence, less uncertain) than did uncertainty stemming from climate models (Patt, 2007). Another study found that uncertainty arising from caveats by particular scientists versus uncertainty arising from conflict between scientists only affected participants' views of uncertainty and risk if those participants were more deferential to the authority of scientists (Binder, Hillback, & Brossard, 2016). Those who were more deferential had higher risk perceptions when they read articles characterized by conflict-based uncertainty, while other participants had no difference in perceptions depending on the presentation of uncertainty (Binder et al., 2016).

How characteristics of particular individuals and issues prompt different responses
As the Binder, Hillback, and Brossard (2016) study highlights in showing how deference shaped perceptions of uncertainty in information, the mixture of findings regarding the effects of conflict-based, or consensus, uncertainty is likely due in part to how a particular scientific issue variably affects recipients of information concerning that issue. For example, some recipients prefer two-sided information despite the increased uncertainty that it creates, especially recipients who are motivated to work through and understand the information because of their connection to the issue (Winter & Krämer, 2012). In both the Binder et al. (2016) and the Winter and Krämer (2012) studies, characteristics of individuals, as well as the specific topic of information, shaped reactions to the uncertainty involved.

A study comparing the effects of conflicting information in articles on two different issues, one on dioxin in sewage and one on wolf reintroduction, illustrates well how the effects of uncertainty vary across particular issues. In the article on dioxin in sewage, conflict between researchers' interpretations of scientific results decreased recipients' certainty in their own prior beliefs and increased the perceived credibility of the scientists involved. In the wolf article, however, conflict between researchers increased personal certainty and decreased the perceived credibility of scientists (Jensen & Hurley, 2012).
These mixed results suggest differences in how much people generally rely on scientific information to understand a particular issue. For example, dioxin in sewage can have personal health effects that would be difficult to see or understand without some scientific information. In other words, the topic might be one in which technical uncertainty is perceived as a more relevant part of the issue for most people. For wolf reintroduction, in contrast, scientific information on the impacts of wolves on ecosystems might not be as salient in opinion formation. Instead, people's personal views on the value of the wilderness, the role of humans in nature, perceptions of wolves as a personal risk or benefit, and other concerns could play a larger role than scientific information and associated uncertainty.

Positive versus negative frames
Some of the literature on public perceptions of uncertainty in science focuses on the influences of positive and negative frames or, similarly, gain and loss frames. Famous studies by psychologists Tversky and Kahneman (1981; 1984) examined how people made different decisions depending on whether choices highlighted gains or losses. For example, in the gain-framed options, participants could choose between a sure gain or a gamble of equivalent expected value that offered a 25 percent chance of gaining more at the risk of a 75 percent chance of gaining nothing. In the loss frame, participants received the reverse: the choice between a sure loss or a 75 percent chance of losing more with a 25 percent chance of losing nothing. In the gain frame, people tended to choose the sure thing, the 100 percent chance of gain. But in the loss frame, when faced with what seemed like a sure loss, people were less risk averse. They were more likely to choose the gamble (25 percent chance of losing nothing but 75 percent chance of losing more), becoming willing to take risks to try to avoid or minimize the loss (Kahneman & Tversky, 1984; Tversky & Kahneman, 1981).
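The structure of these framing experiments can be sketched with a small calculation. The dollar amounts below are hypothetical (the original studies used various stakes), chosen only so that the two options within each frame carry equal expected value, as the setup requires:

```python
# Illustrative sketch of the gain/loss framing choices described above.
# Amounts are hypothetical, chosen only so that the two options within
# each frame have the same expected value.

def expected_value(outcomes):
    """Expected value of a lottery given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Gain frame: a sure gain of 100 vs. a 25% chance of gaining 400
# (and a 75% chance of gaining nothing).
sure_gain = expected_value([(1.0, 100)])
gamble_gain = expected_value([(0.25, 400), (0.75, 0)])

# Loss frame: a sure loss of 300 vs. a 75% chance of losing 400
# (and a 25% chance of losing nothing).
sure_loss = expected_value([(1.0, -300)])
gamble_loss = expected_value([(0.75, -400), (0.25, 0)])

# Because the expected values match within each frame, any systematic
# preference (the sure thing in the gain frame, the gamble in the loss
# frame) reflects the framing itself, not the payoffs.
print(sure_gain, gamble_gain)   # 100.0 100.0
print(sure_loss, gamble_loss)   # -300.0 -300.0
```

The point of holding expected value constant is that a purely "rational" chooser should be indifferent within each frame; the observed switch from risk aversion to risk seeking is therefore attributable to the frame alone.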
Similar to gain and loss framing, growing bodies of work examine positive and negative framing. Positive or negative framing highlights either more benefits or more risks (potential gains or losses), respectively. Only a few studies seem to focus on how those positive and negative frames affect perceptions of uncertainty in particular and how people act on that information. One study found that negative frames can increase perceived uncertainty in scientific information, perhaps by making people more attentive to uncertainty. Others have found that negative frames increase tolerance for conflicting information and change intentions to act on uncertainty (Morton, Rabinovich, Marshall, & Bretschneider, 2011; Smithson, 1999). For example, one study found that, under negative frames, people wanted precise information, even with conflict-based uncertainty (from scientists disagreeing), rather than vague, agreed-upon scientific interpretations (Smithson, 1999). Another found that negative frames on uncertainty information related to climate change made participants less likely to take action to avoid the risks of climate change (Morton et al., 2011).
The authors of that study suggested that the effect could reflect how negative frames increased participants' feelings of powerlessness to act in the face of uncertainty (Morton et al., 2011). Most communication contexts involving scientific uncertainty might not have such clear-cut choices or desired outcomes. In many decision-making contexts, one must balance risks and benefits even as one's choices are possibly changing, while grappling with unquantified uncertainties (Funtowicz & Ravetz, 2003; Renn, 1992). Moving to communication on less clear-cut areas of uncertainty, such as how we should develop and use self-driving cars in the U.S., we find that not only the science itself but also the decision stakes have many more (unquantifiable) uncertainties attached. Relative to caveats and two-sided information, positive and negative frames are harder to apply, unless communicators have clear behavioral outcomes and bounded choices that they want people to make (which raises ethical questions about whether persuading people to make those choices is the appropriate approach).

Effects of information on reproducibility and replicability in particular
Alongside the small but growing bodies of research on public perceptions of uncertainty summarized above, a handful of studies focus specifically on how information about reproducibility or replicability affects trust in the science and scientists involved. These studies tend not to position themselves within the work on perceptions of uncertainty in science. Like the studies of perceptions of uncertainty, though, almost all are experiments. Most focus only on views of psychological sciences and find that, in experiments, learning about problems of reproducibility in psychology reduced participants' trust in existing psychological science research (Anvari & Lakens, 2019; Chopik, Bremner, Defever, & Keller, 2018) or in the psychology research community (Wingen, Berkessel, & Englich, 2019).
The source of the lack of reproducibility or replicability seemed to influence participants' trust levels. Wingen et al. (2019), running experiments with respondents on Amazon Mechanical Turk, found that when the lack of reproducibility in psychological studies was attributed to questionable research practices, such as selective reporting, trust in the psychological science research community to "do what was right" was lower than when it was attributed to the possibility of hidden moderators not controlled for in the study designs. However, people who received information only about lack of reproducibility versus about lack of reproducibility due to questionable practices did not have significantly different levels of trust (Wingen et al., 2019).
This finding aligns with Anvari and Lakens's (2019) studies, which found that explicitly connecting replication failures to questionable research practices did not result in different levels of trust than when people heard only about replication failures without an explanation of the cause.
Both studies found that including information about researchers' efforts to improve transparency and open science did not raise levels of trust in either past or future psychological research (Anvari & Lakens, 2019) or in the research community (Wingen et al., 2019).
Each study also found several important caveats, however. Anvari and Lakens (2019), in an online experiment with primarily participants from Europe, found that learning about replication failures in psychological sciences decreased trust in the reliability of past research, but not reported trust in future results. Support for public funding of psychological sciences research also remained high regardless of the information participants received (Anvari & Lakens, 2019). Chopik et al. (2018), in a pre/post survey of undergraduates who attended a lecture on problems of reproducibility in psychological sciences, found that after the lecture students were less likely to trust psychological research results. The authors stated this result was not especially surprising, but they also emphasized that students were no less likely to report wanting to enter the field for graduate school and had a significantly greater appreciation for study design features aimed at enhancing reproducibility (Chopik et al., 2018).
These studies do not examine how these views fit into broader views of science, nor how likely people are to be exposed to information about reproducibility and replicability in science, or to what effect (see Rutjens, Heine, Sutton, & van Harreveld, 2018 for an overview).
Only one study, using a nationally representative survey, starts to give some insight into these questions. Asking a representative sample of Germans how news about lack of reproducibility or replicability shaped their views of science, Mede et al. (2020) found that the vast majority of Germans had not heard about issues of reproducibility or replicability. Most, upon learning about the issues, however, believed they were indicators of science's processes for quality control and self-correction (Mede, Schafer, Ziegler, & Weisskopf, 2020). Most also did not indicate that lack of reproducibility and replicability at a given moment meant one could not trust science overall (Mede et al., 2020). The exception was respondents who identified as members of the populist far-right party AfD, who tended to take lack of reproducibility and replicability as indicators that scientific research is not trustworthy (Mede et al., 2020). This result is an example of how individuals' values and worldviews can shape their perceptions of true scientific uncertainty, and of how such true uncertainty can create space for, or add fuel to, disagreements about what to do with scientific information.

How the media affects the presentation and perception of uncertainty
As fewer science journalists publish in traditional print outlets, more alternative sources of scientific information emerge through online and social media platforms (Brossard, 2013; Brossard & Scheufele, 2013; Brumfiel, 2009; Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2017; H. P. Peters, Dunwoody, Allgaier, Lo, & Brossard, 2014). In many ways, this development has democratized science communication, making it easier for more people to produce content and for more people to find and access it (Benkler, 2006; Cacciatore, Scheufele, & Corley, 2014; Funk, Gottfried, & Mitchell, 2017). This includes scientists and science and health institutions, who can now communicate more directly with publics, rather than relying solely on communication mediated through journalists (Bik & Goldstein, 2013; Broniatowski, Hilyard, & Dredze, 2016; Collins, Shiffman, & Rock, 2016; Colson & Allan, 2011; Eysenbach, 2011; Guidry, Jin, Orr, Messner, & Meganck, 2017). People who are interested in science or a scientific issue can also more easily find nuanced, specialized, and catered information, and even opportunities to discuss such information with scientists. People who do not actively seek out science information are also likely to come across it incidentally when they are on social media (Funk et al., 2017).
It is very easy for that exposure to include misleading and/or conflicting accounts of science information, as suggested by recent work concerning online misinformation and its spread across social media platforms (Garrett, 2017; Lewandowsky, Ecker, & Cook, 2017; Vosoughi, Roy, & Aral, 2018). Headlines written to capture attention and generate emotion get more clicks (Gardiner, 2015), and they are likely to be one of the few parts of an article that people see (Gabielkov, Ramachandran, Chaintreau, & Legout, 2016). Scientists communicating directly through social media may air what traditionally might have been "in-house" conversations in the scientific community, in which researchers debate theories, methods, and results (Yeo et al., 2017) or joke about mishaps in science (Simis-Wilkinson et al., 2018), with unknown implications for publics' perceptions of science and uncertainty. Many of these features, such as misleading headlines and misinformation, existed before social media, but the quantity, reach, speed, and personalization of communication and information can change the type and size of their effects.

Negative effects of exposure to conflicting or inaccurate stories
One way in which the media can increase perceptions of uncertainty in a negative or misleading way is by increasing the likelihood that people will receive conflicting information (Dunwoody, 1999;Purcell, Brenner, & Rainie, 2012) and/or misinformation (Garrett, Weeks, & Neo, 2016) concerning the manufactured uncertainty described earlier, or conflict-based examples of true scientific uncertainty. While no studies, to my knowledge, have examined either how prevalent this is or what its implications are for people's perceptions of science and uncertainty, research on public perceptions of fields like nutrition indicates that exposure to multiple conflicting stories can, understandably, overwhelm people and leave them unsure as to how they should act on the information obtained (Nagler, 2014;Ramondt & Ramirez, 2019;Ward, Henderson, Coveney, & Meyer, 2011). Online environments could easily exacerbate such "whiplash" if reporting of scientific results seems to contradict other stories, and heighten perceptions of uncertainty. It is not clear how or if contradiction across information sources might affect perceptions of the science itself, however, or if, in such cases, people would instead attribute their possible confusion to problems with reporting or with media more generally.
As some of the studies on perceptions of uncertainty conveyed through hedges and two-sided information illustrated, people distinguish between the scientists and the journalists connected to a piece of information (Chang, 2015; Jensen, 2008; Ratcliff et al., 2018). National-level data from the Pew Research Center also indicates that Americans overwhelmingly attribute problems in science news coverage (i.e., inaccurate, untrustworthy, or confusing representations of the science) to journalists rather than to scientists (Funk et al., 2017). Journalists are seen as the bigger problem by 73 percent of Americans, while 24 percent think that scientists carry the brunt of the blame for problems in science coverage (Funk et al., 2017).
In addition to the potential for exposure to conflicting news stories, information on online platforms comes surrounded by many different cues that can shape perceptions of the story apart from the content of any given article itself. Comments, for example, can contain contradictory or misleading information and change the frame around an article. One study found that uncivil comments following an otherwise neutral article on nanotechnology increased risk perceptions of nanotechnology for those individuals who were already leaning toward not supporting the technology, further polarizing audiences (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2014). This effect could have occurred as a result of an increase in perceptions of uncertainty around the information. In that case, uncertainty would provide those individuals with a rationale for more heavily relying on their prior views of the science issue rather than on information provided in the article itself. The potential for exposure to uncivil comments and to contradictory articles in online environments could mean that credible information ends up surrounded by conflict-producing cues that can increase perceptions of uncertainty and decrease perceptions of scientific credibility related to the article itself.
Of course, social media has the power to spread false and misleading information, especially information designed to breed conflict, as we see with recent disinformation campaigns, particularly during the COVID-19 pandemic (Barnes & Sanger, 2020; Garrett, 2017). The bad news is that not only are these sources of misinformation easy to create and share through online and social media, they are particularly designed to spread and stick (Lazer et al., 2018; Vosoughi et al., 2018). Misinformation, once it enters someone's mind, is incredibly difficult to remove (Green & Donahue, 2011; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Seifert, 2002). It is possible, however, as the next section describes, to correct misinformation through many of the same features that allowed it to spread.

Benefits of the media environment: incidental exposure, corrections, and trusted sources
Exposure to multiple different sources and cues for information on a scientific topic can facilitate the effective communication of true scientific uncertainty. Through many of the same features that help spread misinformation, online and social media also offer outlets for alleviating misperceptions. Research on reducing misperceptions of science issues finds that context cues on social media, for example, can provide corrections that help people more accurately understand uncertainty in issues (Bode & Vraga, 2015; Vraga & Bode, 2017). In experiments, when a story with misinformation about GM foods or vaccines on Facebook was followed by a "related stories" link to articles with more accurate information on the issue, readers had significantly reduced (and more scientifically accurate) perceptions of the uncertainty present in those fields (Bode & Vraga, 2015). Expert sources, such as the CDC, can also effectively correct misperceptions through comments or posts following posts that contain misinformation (Bode & Vraga, 2015; Guidry et al., 2017; Vraga & Bode, 2017). One experiment modeling Twitter posts found that when the CDC tweeted a refutation of a tweet that incorrectly attributed the spread of the Zika virus to the release of GM mosquitoes, viewers of both tweets had significantly reduced misperceptions, especially viewers who were most likely to initially believe the misinformation (Vraga & Bode, 2017). Such tweets did not affect the credibility of the CDC (Vraga & Bode, 2017), suggesting that corrective tweets and messages could be an effective way to alleviate misperceptions of the uncertainty associated with science-related issues.
Misperceptions are notoriously hard to undo, especially when they align with strongly held values and worldviews (see, for example, Hart & Nisbet, 2011; Nyhan & Reifler, 2010; Nyhan, Reifler, Richey, & Freed, 2014), but there are several ways to make corrections more likely to work in the intended direction. Corrections are most effective when they offer a coherent story to fill the gap left by misperceptions or misinformation. The correction has to be as satisfactory in explaining the situation or issue as the misinformation was (Lewandowsky, Ecker, et al., 2012; Seifert, 2002; Walter & Murphy, 2018), and it can be even more effective if it also explains why the original, misinformed claim emerged (Seifert, 2002). Evidence suggests that corrections should include a direct rebuttal of the misinformation (Bode & Vraga, 2015; Vraga & Bode, 2017; Walter & Murphy, 2018). It also appears that highlighting scientific consensus can be effective for some issues (Lewandowsky, Gignac, & Vaughan, 2012).
Corrections are more likely to work if they align with or do not contradict respondents' worldviews, especially the worldviews that made the misinformation attractive and easily understandable (Lewandowsky, Ecker, et al., 2012).

Social media has the potential not only to expose users to corrections that effectively communicate science and uncertainty, but also to other opportunities to learn through greater exposure to news about science. Many people have "incidental" exposure to news when they go onto social media platforms, even if they went online for purposes that did not include newsgathering (Feezell, 2017; Kim, Chen, & Gil de Zúñiga, 2013; Lee & Kim, 2017). This phenomenon applies to science-related news as well (Funk et al., 2017). Only one in six people report actively seeking out science information, but 55 percent of Americans (and 79 percent of social media users) report coming across science information incidentally on social media (Funk et al., 2017). While no studies focus on the implications of incidental exposure online for science-related issues, the available evidence suggests that incidental exposure can increase knowledge, as well as the willingness to engage with issues and information.
Similarly, the opportunity to receive information incidentally and/or from multiple different information sources could also mean that people are more receptive to that information when it reaches them. Social cues and the multiple pathways that information moves through in the modern media environment also make it possible for information to reach people who would otherwise be less receptive to scientific information or less trusting of scientific sources. One way that this can occur is through cues that make it less likely that individuals' values and worldviews will lead them to avoid particular information. For example, user-generated social cues such as shares and likes can increase the likelihood that individuals will attend to information, even if that information does not align with their prior beliefs on the issue, by helping to bypass other, more ideological cues triggered by the article itself (Messing & Westwood, 2012).
In the realm of science information and uncertainty in particular, a study on the National Academies of Sciences, Engineering, and Medicine's (NASEM) consensus report on the effects of genetically-engineered crops (National Academies of Sciences, 2017) found that after the report's release and its movement through news outlets and social media, public perceptions of GMOs moved more in line with the findings of the report, with respondents perceiving less risk but no change in perceived benefit (Howell et al., 2018). More interesting for understanding the role of the media environment in shaping perceptions, however, was that risk perceptions around GMOs decreased most among people who were least likely to be the regular recipients of such information.

Although the study did not pinpoint a cause or pathway, the results suggest that the scientific consensus information could have reached those individuals through other outlets that they found more credible or trustworthy than a scientific report from a committee of researchers.
Similarly, although scientists and scientific institutions often communicate with publics through interviews with journalists from general news outlets, the variety of platforms also means scientists have many more opportunities to communicate directly with publics, particularly through niche platforms. Of course, this can have mixed effects, as many scientists might not be particularly good at communicating beyond an audience similar to themselves.
Given the many context cues that appear in online environments, and the potential for people to variably perceive or manipulate the context around a piece of information online, limits also exist on the extent to which scientists and other communicators can control their messages. But the opportunity for scientists and science communicators to directly engage with audiences through different platforms could improve the communication of science to the public, as surveys indicate that Americans find niche science sources to be more trustworthy sources of science information than general news sources (Funk et al., 2017). As more and more individuals rely on online and social media for information in general and on science in particular (Newman et al., 2017; Shearer & Gottfried, 2017), this mixture of effects will continue to play an increasingly important role in understanding public views of scientific uncertainty.

How people process (uncertainty) information
As the previous section suggests, perceptions of uncertainty depend on personal characteristics that shape how people perceive uncertainty. Those characteristics fall largely into two categories: 1) the motivation and ability to find and process information on scientific uncertainty, and 2) value- and belief-based pathways that affect openness to and subsequent processing of information. I focus primarily on individual factors for which we have national-level data on their prevalence in the U.S., such as trust in scientists, because of the NASEM Committee's focus on communicating to a national audience. For readers who want to do more targeted communication with specific publics, however, and for researchers interested in these areas, I also include some discussion of other individual-level factors that shape different people's perceptions of and tolerance for uncertainty information.

Ability to understand scientific information
Individuals' motivation and competency to seek out and understand information related to scientific uncertainty influence perceptions of uncertainty-related information and actions related to those perceptions. The exact results of these different characteristics in different communication and decision-making contexts will vary, but I mention them briefly here to highlight some of the key factors and available research, especially for those interested in more targeted communication. These factors include numeracy and science literacy (Kahan et al., 2012; National Academies of Sciences, Engineering, and Medicine, 2016; E. Peters, Hibbard, Slovic, & Dieckmann, 2007), statistical reasoning abilities (Dunwoody & Griffin, 2013; Tversky & Kahneman, 1974), and self-efficacy, or one's perception of their ability to do a certain task (Morton et al., 2011; Vardeman & Aldoory, 2008). Perceived and actual ability shape the extent to which a given person can and will work to understand and act on information (Budescu, Broomell, & Por, 2009; Einsiedel & Thorne, 1999; Fung, Griffin, & Dunwoody, 2018; Griffin, 2016; E. Peters et al., 2007; Winter & Krämer, 2012). Indicators from the National Science Board (NSB) suggest that majorities of Americans have some understanding of probability and experimental design (64% and 51% qualify as understanding these, respectively) but not of what a scientific study is (only 23% qualify as understanding). The data come from items on the General Social Survey, an omnibus survey of a nationally representative sample of U.S. adults, and are not collected by the NSB itself. As with all surveys, however, there could be non-response bias. For example, if fewer people with certain characteristics relevant to the survey questions, such as scientific knowledge, respond to those particular items or take the survey at all, the true rates of understanding might be even lower than reflected in the responses. Overall, however, the numbers seem to indicate that the U.S. public has less understanding of what distinguishes scientific information from other forms of knowledge, which could affect how people perceive scientific uncertainty.
The coding that the NSB applied for defining "understanding of scientific study" is somewhat conservative, and it is important not to take this data on its own as proof of a scientifically ignorant public. The U.S. public might have a sense of scientific studies that is not captured by the NSB measure but is relevant to how they understand and tolerate uncertainty in scientific information and their trust in that information. Additionally, knowledge in general, while often significant, rarely plays a large role on its own in shaping individuals' opinions on an issue, in part because of the effects of information presentation and individuals' experiences and values on how people act on their knowledge (Allum, Sturgis, Tabourazi, & Brunton-Smith, 2008). The NSB indicators could suggest, however, that discussing uncertainty in the context of how these concepts and practices fit into scientific processes could be more effective with respect to building upon existing public knowledge of the scientific process than assuming that individuals will understand what makes a study scientific.

Emotional and personal motivations to understand uncertainty
Even if people do not start with high knowledge or understanding of a science-related issue, they might have increased motivation to understand the uncertainty involved because of a number of issue-specific factors. These include individual interest, personal investment, or the perceived relevance of a particular issue and its associated uncertainty, which may be heightened by direct involvement (as, for example, in the case of a particular illness) (Blankenship & Holtgraves, 2016; Vardeman & Aldoory, 2008).
A growing body of work studies how emotions, particularly fear and anger, shape reactions and motivations related to uncertainty in information (e.g., Tannenbaum et al., 2015; Tiedens & Linton, 2001; Weeks, 2015). Fear and anger, while both negatively valenced emotions, appear to have opposite relationships to uncertainty: fear is associated with heightened uncertainty and with seeking out more information, while anger is associated with heightened certainty and less attention to detail or nuance (Tiedens & Linton, 2001).
Further, some people have higher or lower tolerances for uncertainty regardless of the situation. Some have a higher need for cognition, or need to work through the complexity of a topic, which can mean preferring more information even if it means more uncertainty (Winter & Krämer, 2012). Others tend to have low tolerance for uncertainty and low openness to information that could increase uncertainty, such as people with more dogmatic and authoritarian tendencies (Altemeyer, 1996; Rokeach, 1960). This discomfort with uncertainty becomes especially strong when that uncertainty seems to disrupt closely held values and beliefs.

Value-& belief-based processing: trust and views of science
When people come across scientific information, they interpret it in ways that are shaped by values related to the information. Much of this processing is called directional motivated reasoning (Kunda, 1990). People are motivated to interpret information in ways biased by their held beliefs to help prevent belief-incongruent information from triggering discomfort, or cognitive dissonance (see, for overviews, Festinger, 1957; Kunda, 1990). In general, the more strongly held the belief about one's self or the way that the world does or should work, the more uncomfortable and threatening information contrary to that belief is.
Because the specific beliefs that are relevant across issues and individuals will vary, this overview focuses on two more general beliefs that can apply across science communication settings and play a role in reactions to scientific uncertainty: trust and epistemic beliefs.

Trust and confidence in scientists
Trust can shape how individuals view information from different sources as well as the opinions that they form about issues related to information (Freudenberg & Pastor, 1992; Freudenburg, 1993; Kasperson, Golding, & Tuler, 1992; R. G. Peters, Covello, & McCallum, 1997). It plays a key role in how people navigate uncertainty (Engdahl & Lidskog, 2014; Kasperson, 1992), and trust in scientists can affect perceived risk in the face of uncertainty, typically by reducing risk perceptions (Ho, Scheufele, & Corley, 2011; Siegrist, 2000; Siegrist, Connor, & Keller, 2012; Wachinger, Renn, Begg, & Kuhlicke, 2013). Additionally, levels of trust in scientists relative to trust in other relevant actors, or the "trust gap" between actors, for an issue can play a significant role in views of that issue (Priest, Bonfadelli, & Rusanen, 2003).
In the U.S., high trust in scientists has remained stable over the past few decades (Krause, Brossard, Scheufele, Xenos, & Franke, 2019). Around 40 percent of Americans indicate that they have a "great deal of confidence" in the scientific community. Compared to all other institutions in the U.S., this level of confidence is especially high: comparable to confidence in the medical community and below only confidence in the military in the past two decades. This confidence has been more stable than has confidence in other institutions as well. If trust gaps play a role in how people respond to uncertainty in scientific information, then it is particularly important that the scientific community ranks ahead of other relevant institutions people look to when forming opinions on science-related issues.
Trust is also shaped by issue-relevant experiences and values that affect information processing, such as those captured by political ideology and religious views. Because of these interactions, despite overall high trust levels at the national level, trust within particular groups varies depending on the values held by individuals (Funk, Hefferon, Kennedy, & Johnson, 2019; Krause et al., 2019). For example, those who identify as members of the Republican party today are less confident in the scientific community than are other people. Similar gaps exist between Americans who identify as highly religious and those who do not (those who are not religious indicating greater confidence), across people of different religions, and between those who live in urban versus rural areas (urban residents indicating greater confidence). Despite some movement and gaps within groups, however, the U.S. public generally has consistent and high confidence in the scientific community.
Briefly, on a related note, in addition to possibly relating to different levels of trust for certain science issues, political ideology can also affect what kinds of news people see and pay attention to (Garrett, 2009a, 2009b; Garrett et al., 2016; Stroud, 2010). It is less clear, however, how this selection and differing levels of trust play out in how people with different political ideologies receive and reason through science-related information and uncertainty in science in general. Evidence suggests that Republicans and Democrats are similarly likely to be active consumers of science information, to report being very or somewhat interested in science news, and to say they sometimes or often consume science-related entertainment (Funk et al., 2017).
Only 32% of Republicans and 27% of Democrats say that they see news reports on disagreement among scientific experts (or the conflict-based uncertainty described earlier), and 22% of Republicans and 16% of Democrats say they see news reports about scientific research that seem made up. Republicans are more likely than are Democrats, however, to believe that news media do a poor job of covering science, which is in line with partisan differences on ratings of news media in general (Funk et al., 2017).
As we see with the current pandemic, although partisan and other divides can appear on beliefs related to the issues and levels of certainty involved, especially when it comes to belief in misinformation (Schaeffer, 2020), broad public consensus in perceptions of the scientific uncertainty involved also exists. For example, recent data show that when it comes to research concerning COVID-19, most Democrats and Republicans think both that the core scientific understanding of the virus is well established (74% and 66%, respectively) and that it makes sense that studies show conflicting evidence as the research improves (83% and 72%, respectively) (Tyson & Spencer, 2020). The evidence suggests, then, that communicators should not assume that Americans have widely different interest in, exposure to, or ability to reason through science-related and uncertainty information in general depending on partisan identity, unless there is evidence to suggest partisan-related value-based divisions on particular aspects of the issue or information being communicated.

Epistemic beliefs and views of science
Despite the wide range of data on trust in scientists, there do not appear to be measures capturing general trust in science. Data on views of the nature of scientific knowledge, however, often called epistemic beliefs, do exist, and research finds that such beliefs significantly relate to how people process information on scientific uncertainty. Epistemic beliefs capture whether one sees scientific knowledge as absolute truth, relative, or contingent, with such beliefs typically being seen as a progression leading toward increasingly sophisticated epistemic beliefs (Sinatra, Kienhues, & Hofer, 2014). Individuals who hold more sophisticated beliefs are more likely to critically evaluate and perceive uncertainty in scientific information (Feinkohl, Flemming, Cress, & Kimmerle, 2016; Kimmerle et al., 2015; Rabinovich & Morton, 2012) but also more likely to view information containing uncertainty as persuasive (Rabinovich & Morton, 2012). More important for the communication of uncertainty, however, is that epistemic beliefs can change over shorter time spans, depending on communication. For example, more sophisticated beliefs can be induced by explaining the aims of science and of the scientific process (Rabinovich & Morton, 2012) prior to communicating uncertainty information.
As these findings highlight, the perception of uncertainty in scientific results is not necessarily undesirable, and it may reflect accurate views of the strengths and limitations of scientific information. Research on exposure to conflicting information, for example, found that such exposure decreased beliefs that it is possible to find one best solution to health issues (Kienhues, Stadtler, & Bromme, 2011), which is likely a more accurate view of the context-dependence of decision-making in health care. Participants in that study did not become "relativized" (seeing scientific knowledge as all relative or opinion-based), however, or helpless, and were able to gain knowledge about health and medical information (Kienhues et al., 2011).
This also aligns with Anvari and Lakens's (2019) findings that learning about replication failures in psychological research decreased trust in past research but not future research.
Further, communicating uncertainty in news stories about a particular science topic does not appear to change beliefs about the nature of science in general (A. Retzbach & Maier, 2015).
Altogether, these findings suggest that uncertainty in a particular story or area of science will not necessarily bleed into views of science overall, and that people, at least those in these studies, expect uncertainty in science and do not view it negatively, per se. People appear to expect a level of uncertainty in scientific information. This is counter to the belief of many scientists that the American public has a primarily risk-focused view of science (Braun, Starkbaum, & Dabrock, 2015) and that they are unable to understand or "correctly" handle scientific information (Besley & Nisbet, 2011; Davies, 2008; Ecklund, James, & Lincoln, 2012). It is also commonly thought that providing the public with information on uncertainty will trigger distrust, panic, and confusion (Frewer et al., 2003).
In fact, those who are most interested in science and hold positive attitudes toward science in general can also be those most likely to perceive scientific uncertainty (Kohl et al., 2016; J. Retzbach, Otto, & Maier, 2016), and exposure to scientific uncertainty can sometimes increase interest in a particular science issue (A. Retzbach & Maier, 2015). These relationships between uncertainty in science and interest in science illustrate how the perception of uncertainty need not entail being "anti-science." As the studies above indicate, often the opposite is true, perhaps because those with the greatest familiarity with science are more likely to hold more complicated epistemic beliefs, rather than viewing scientific information as absolute or relative.

Recommendations for communicating uncertainty
With respect to public perceptions of uncertainty related to scientific information, we can draw several conclusions based on the existing literature described in this article. Because of the highly context-dependent nature of perceptions of uncertainty, views of uncertainty in a given field are unlikely to bleed over into broader perceptions of scientific uncertainty as such.
Additionally, people often hold different views regarding the perceived rigidity of different fields (e.g., mathematics is seen as most structured and social fields as less structured; Buehl & Alexander, 2001) and have varying levels of trust for different scientific actors depending on the issue or field (e.g., Funk et al., 2019). Therefore, views of "science" are likely better understood as views of multiple scientific issues, fields, and actors, which vary in saliency across different populations over time and depending on context.
Even with the variation across individuals, issues, and information sources, however, based on the literature review above, several considerations can aid individuals who want to communicate information related to uncertainty in science, including uncertainty related to reproducibility and replicability or the lack thereof. The most relevant section of the article that each recommendation draws from is listed after each consideration, for reference.
1. People appear to expect uncertainty in science, and communicating through hedges/caveats can increase the perceived credibility of that information (see Hedges, two-sided information, and conflicts versus caveats).
2. Conflict between researchers in stories (consensus uncertainty) can unnecessarily increase perceptions of uncertainty. If conflict and disagreement are a necessary part of the story, however, communicators can effectively communicate existing disagreements by explaining sources of uncertainty and disagreement and the steps that researchers are taking to address them (see Hedges).
By addressing common sources of disagreement within one story, communicators can provide context for people when they encounter additional, potentially conflicting, information.
3. If misperceptions of the levels, sources, and implications of uncertainty arise, corrections are most effective when they coherently "fill the gap" in the story that the misperception originally filled and explain why the confusion arose. Corrections should also do so in a way that a) focuses on the misperception itself (rather than blaming any actors who spread it) and b) aligns with, or at least does not clash with, the beliefs or worldviews of those more likely to hold the misperception (see Benefits of the media environment).
Additionally, although the spread of information through traditional and online sources is difficult to predict or control, it is important to remember that online and social media in particular offer outlets for effective corrections of misinformation, such as through tweets from expert sources and links to related stories.
Finally, returning to concerns of communicating reproducibility and replicability in science, the last, more general, recommendation is as follows: 4. Effective communication of uncertainty related to reproducibility and replicability should include: a) the role of reproducibility and replicability in scientific research, and b) why reproducibility and replicability sometimes fail for certain studies, or what it can mean that a replication fails.
The last recommendation above indicates an area where we need much more research, especially on how information related to reproducibility and replicability affects perceptions of uncertainty and of science more broadly. In particular, the two studies that tested explanations of the steps researchers were taking to address reproducibility found that including such explanations did not increase the trust that studies from the field received (Anvari & Lakens, 2019; Wingen et al., 2019). Why this is the case, how long such effects hold, and whether different information formats are more effective are just some of the areas where we need more research. Such studies should also continue to allow for distinctions between the causes of lack of reproducibility and replicability. As mentioned in the introduction, the lack of reproducibility and replicability can stem from a wide range of causes, including methods that are not strong enough to capture small effects, rare instances that are difficult to recreate, human error, and, in the worst case, fraud.
Each could have very different impacts on how people perceive a specific case of lack of reproducibility and replicability, and each has different relationships to uncertainty and to the steps for moving forward to address the causes.

Conclusion
Because of its context-dependence, uncertainty seems rarely to function as the sole or determining factor in views concerning a particular science-related issue. As research on science in various policy arenas has highlighted, uncertainty may create space for other values, considerations, and goals to come into play as stakeholders debate whether to tolerate uncertainty, which may entail temporary inaction or delay (as, for example, with respect to COVID), or to act on information (Campbell, 1985; Post & Maier, 2016; Renn, 1992).
The space for discussion and interpretation created by uncertainty is part of why it is necessary to communicate scientific uncertainty effectively, so that actors can make well-informed decisions. It is also why it is important to understand how perceptions of uncertainty will be shaped by information presentation (both within and across stories and mediums) as well as by individual values, goals, and beliefs concerning scientific information and issues. The literature suggests that uncertainty in scientific information does not inevitably translate to a lack of credibility. For many members of the public, uncertainty in science is expected. Scientific information on uncertainty can serve as an indicator of the credibility of the messenger as well as of the information itself. Information on replicability and reproducibility in particular can also be seen as part of science's process of self-correction and quality control (Mede et al., 2020).
Communicating uncertainty appears to be most effective when it includes caveats from scientists involved in the research and avoids conflict between scientists. In the case of communicating disagreements concerning reproducibility and replicability, discussion of any sources of uncertainty and conflict will be more effective if it gives context for the uncertainty as it relates to the particular study or topic and provides insight as to how reproducibility, replicability, and uncertainty relate to scientific research and processes in general. This could be done not only by explicitly acknowledging uncertainty, its sources, and its potential implications, but also by including the steps that researchers are taking to address uncertainty and how reproducibility and replicability fit in that process.

Addressing issues of replicability and reproducibility and their larger implications for science and perceptions of science will help us to develop a clear and accurate picture of public perceptions of science and what those perceptions mean for how we view, discuss, and act on information about science. While uncertainty can be manufactured, manipulated, and weaponized to argue for a particular course of (in)action, true uncertainty in scientific information itself or in our understanding of its implications for society opens up space for disagreement and dialogue about how we want to move forward as individuals and as a society.