Fake news has created a skeptical and misinformed public


One study says coffee is good for you, while another study says that it’s not. They’re both right, within context.

This dichotomy together with an environment of distrust spurred by anecdotes, fake news, and to a large extent, social media, has created a skeptical and misinformed public.

As a result, researchers from Florida Atlantic University’s Schmidt College of Medicine and collaborators say society is rejecting the facts.

Now more than ever, medical researchers must help the public understand the rigorous process of science, which has been around for thousands of years.

In return, the public has to pay attention, realize that one size doesn’t fit all, and understand that the answers are not simply black or white. Lives depend on it.

In an article published in the American Journal of Medicine, the researchers highlight opportunities for academic institutions to achieve and maintain research integrity. Research integrity encompasses accountability for all scientific and financial issues, including protection of human subjects and animals; investigator accountability; grant submission; study design, conduct, analysis, and interpretation of findings; oversight of colleagues and students; and environmental health and safety, among others. It focuses on the many positive attributes that academic institutions, as well as their faculty, staff, and trainees, should seek and maintain, including transparency, rigor, and reproducibility.

The best way for medical researchers to meet this challenge is by continuing to ensure integrity, rigor, reproducibility and replication of their science and to earn the public’s trust by being morally responsible and completely free of any influences.

Medical researchers have a passion for truth and discovery; therefore, integrity and trust are essential attributes.

“The reason that the public has lost trust and confidence in science is multifaceted and complicated,” said Janet Robishaw, Ph.D., senior author, senior associate dean for research, and chair of the Department of Biomedical Science in FAU’s Schmidt College of Medicine, and a member of the FAU Brain Institute (I-BRAIN), one of the University’s four research pillars.

“One of the main reasons is anecdotal stories, which can be very powerful, and are being given too much weight.

There’s so much news coming out from so many sources including social media.

That’s why it’s imperative for the public to discern an anecdote from scientific results in a peer-reviewed journal.

This is how the premise that vaccinations cause autism evolved along with fabricated results that pushed the anti-vaccination movement.”

Robishaw and corresponding author Charles H. Hennekens, M.D., Dr.PH, first Sir Richard Doll Professor and senior academic advisor in FAU’s Schmidt College of Medicine, stress that research integrity starts with investigators who share the guiding principles of honesty, openness, and accountability and who provide scientific and ethical mentorship to their trainees.

As researchers compete for increasingly limited resources and face growing challenges with evolving technologies, broad consensus is required across the research enterprise, including funding agencies, medical journals as well as all academic institutions, to address these increasingly major clinical, ethical and legal challenges.

“Our common goal should be to return public trust in our research enterprise, which has done so much good for so many,” said Robishaw.

“The more we can do as scientists to promote our guiding principles of rigor, transparency, honesty and reproducibility and to provide the best evidence possible and get people to understand them, the greater the likelihood that they will listen to the message and follow it.”

The opportunities the authors identify for enhancing research integrity include identifying the best benchmarking practices, establishing a research compliance infrastructure, and implementing a quality assurance plan.

These priorities should include assessing the research climate, developing policies and responsibilities for ethics investigations, and providing a process for resolution of formal disputes.

In addition, establishing lists of independent experts to conduct periodic reviews of institutional procedures could be helpful.

Reinforcing existing regulatory policies, including those governing grant routing, and providing both formal and informal training to faculty, staff, and trainees are other suggestions the authors provide.

“We should not allow research misconduct committed by a very small minority of researchers to detract from the growing focus on efforts to improve the overall quality of the research process carried out by the vast majority,” said Hennekens.

“I continue to believe that the overwhelming majority of researchers strive for and achieve scientific excellence and research integrity.”


In conclusion, the authors, who include David L. DeMets, Ph.D., professor emeritus, University of Wisconsin School of Medicine and Public Health; Sarah K. Wood, M.D., senior associate dean for medical education; and Phillip Boiselle, M.D., dean, both in FAU’s Schmidt College of Medicine, emphasize that research integrity requires synchronicity and collaboration between, as well as within, all academic institutions.

“If we fail to maintain research integrity we will lose public trust and it will lead to avoidable consequences of substantial penalties, financial and otherwise, adverse publicity and reputational damage,” said Robishaw. “Scientists must strive to self-regulate and earn public trust to advance health.”

Reducing the spread of misinformation, especially on social media, is a major challenge.

We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy.

To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories:

(i) mainstream media outlets,

(ii) hyperpartisan websites, and

(iii) websites that produce blatantly false content (“fake news”).

Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources.

Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans.

Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers.

We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources.

Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
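The politically balanced crowd rating described above can be sketched in a few lines: average trust within each party, weight the two party means equally, and correlate the result with fact-checker ratings. This is a minimal illustration with entirely made-up ratings and source names, not the study's data; the r = 0.90 figure comes from the paper's real ratings.

```python
from math import sqrt
from statistics import mean

# Hypothetical data: ratings[source] = ([Democrat ratings], [Republican ratings])
ratings = {
    "mainstream-a.example":    ([4.2, 3.9, 4.5], [3.1, 2.8, 3.4]),
    "hyperpartisan-b.example": ([1.4, 1.9, 1.2], [2.6, 3.0, 2.2]),
    "fakenews-c.example":      ([1.1, 1.0, 1.3], [1.5, 1.8, 1.2]),
}

def balanced_score(dem, rep):
    """Equally weight the mean trust rating of each party."""
    return 0.5 * mean(dem) + 0.5 * mean(rep)

crowd = {src: balanced_score(d, r) for src, (d, r) in ratings.items()}

# Hypothetical professional fact-checker ratings for the same sources.
fact_checkers = {"mainstream-a.example": 4.6,
                 "hyperpartisan-b.example": 1.8,
                 "fakenews-c.example": 1.1}

def pearson(xs, ys):
    """Standard Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sources = sorted(crowd)
r = pearson([crowd[s] for s in sources], [fact_checkers[s] for s in sources])
print(round(r, 2))  # → 0.99 on this toy data
```

Equal weighting of the two parties prevents whichever group happens to be overrepresented in the sample from dominating the crowd score.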

The emergence of social media as a key source of news content (1) has created a new ecosystem for the spreading of misinformation.

This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate (2).

So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.

Fake news, as it is presently being discussed, spreads largely via social media sites (and, in particular, Facebook; ref. 3).

As a result, understanding what can be done to discourage the sharing of—and belief in—false or misleading stories online is a question of great importance.

A natural approach to consider is using professional fact-checkers to determine which content is false, and then engaging in some combination of issuing corrections, tagging false content with warnings, and directly censoring false content (e.g., by demoting its placement in ranking algorithms so that it is less likely to be seen by users).

Indeed, correcting misinformation and replacing it with accurate information can diminish (although not entirely undo) the continued influence of misinformation (4, 5), and explicit warnings diminish (but, again, do not entirely undo) later false belief (6).

However, because fact-checking necessarily takes more time and effort than creating false content, many (perhaps most) false stories will never get tagged.

Beyond just reducing the intervention’s effectiveness, failing to tag many false stories may actually increase belief in the untagged stories because the absence of a warning may be seen to suggest that the story has been verified (the “implied truth effect”) (7).

Furthermore, professional fact-checking primarily identifies blatantly false content, rather than biased or misleading coverage of events that did actually occur. Such “hyperpartisan” content also presents a pervasive challenge, although it often receives less attention than outright false claims (8, 9).

Here, we consider an alternative approach that builds off the large literature on collective intelligence and the “wisdom of crowds” (10, 11): using crowdsourcing (rather than professional fact-checkers) to assess the reliability of news websites (rather than individual stories), and then adjusting social media platform ranking algorithms such that users are more likely to see content from news outlets that are broadly trusted by the crowd (12).

This approach is appealing because rating at the website level, rather than focusing on individual stories, does not require ratings to keep pace with the production of false headlines; and because using laypeople rather than experts allows large numbers of ratings to be easily (and frequently) acquired.

Furthermore, this approach is not limited to outright false claims but can also help identify websites that produce any class of misleading or biased content.
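A platform could apply such website-level trust scores in ranking roughly as follows. This is a hypothetical sketch, not any real platform's algorithm: the sources, scores, and the blending weight are all illustrative assumptions.

```python
# Hypothetical trust-weighted ranking: blend each post's base relevance
# score with the crowdsourced trust rating of its source, so content from
# broadly trusted outlets is up-ranked.

# Politically balanced crowd trust per source, rescaled to [0, 1] (assumed).
source_trust = {"trusted-news.example": 0.9,
                "hyperpartisan.example": 0.3,
                "fake-news.example": 0.1}

posts = [
    {"id": 1, "source": "fake-news.example",     "relevance": 0.95},
    {"id": 2, "source": "trusted-news.example",  "relevance": 0.70},
    {"id": 3, "source": "hyperpartisan.example", "relevance": 0.80},
]

TRUST_WEIGHT = 0.5  # how strongly trust influences ranking (assumption)

def ranking_score(post):
    """Linear blend of the post's relevance and its source's trust score."""
    trust = source_trust.get(post["source"], 0.5)  # neutral default if unrated
    return (1 - TRUST_WEIGHT) * post["relevance"] + TRUST_WEIGHT * trust

feed = sorted(posts, key=ranking_score, reverse=True)
print([p["id"] for p in feed])  # → [2, 3, 1]: the fake-news post drops last
```

Because the trust score attaches to the source rather than to individual stories, a new false headline is demoted the moment it is published, with no per-story fact-check required.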

Naturally, however, there are factors that may undermine the success of this approach. First, it is not at all clear that laypeople are well equipped to assess the reliability of news outlets.

For example, studies on perceptions of news accuracy revealed that participants routinely (and incorrectly) judge around 40% of legitimate news stories as false, and 20% of fabricated news stories as true (7, 13–15).

If laypeople cannot effectively identify the quality of individual news stories, then they may also be unable to identify the quality of news sources.

Second, news consumption patterns vary markedly across the political spectrum (16) and it has been argued that political partisans are motivated consumers of misinformation (17). By this account, people believe misinformation because it is consistent with their political ideology.

As a result, sources that produce the most partisan content (which is likely to be the least reliable) may be judged as the most trustworthy. Rather than the wisdom of crowds, therefore, this approach may fall prey to the collective bias of crowds.

Recently, however, this motivated account of misinformation consumption has been challenged by work showing that greater cognitive reflection is associated with better truth discernment regardless of headlines’ ideological alignment—suggesting that falling for misinformation results from lack of reasoning rather than politically motivated reasoning per se (13).

Thus, whether politically motivated reasoning will interfere with news source trust judgments is an open empirical question.

Third, other research suggests that liberals and conservatives differ on various traits that might selectively undermine the formation of accurate beliefs about the trustworthiness of news sources.

For example, it has been argued that political conservatives show higher cognitive rigidity, are less tolerant of ambiguity, are more sensitive to threat, and have a higher personal need for order/structure/closure (see ref. 18 for a review). Furthermore, conservatives tend to be less reflective and more intuitive than liberals (at least in the United States) (19)—a particularly relevant distinction given that lack of reflection is associated with susceptibility to fake news headlines (13, 15).

However, there is some debate about whether there is actually an ideological asymmetry in partisan bias (20, 21). Thus, it remains unclear whether conservatives will be worse at judging the trustworthiness of media outlets and whether any such ideological differences will undermine the effectiveness of a politically balanced crowdsourcing intervention.

Finally, it also seems unlikely that most laypeople keep careful track of the content produced by a wide range of media outlets.

In fact, most social media users are unlikely to have even heard of many of the relevant news websites, particularly the more obscure sources that traffic in fake or hyperpartisan content.

If prior experience with an outlet’s content is necessary to form an accurate judgment about its reliability, this means that most laypeople will not be able to appropriately judge most outlets.

For these reasons, in two studies we investigate whether the crowdsourcing approach is effective at distinguishing between low- versus high-quality news outlets.

In the first study, we surveyed n = 1,010 Americans recruited from Amazon Mechanical Turk (MTurk; an online recruiting source that is not nationally representative but produces similar results to nationally representative samples in various experiments related to politics; ref. 22).

For a set of 60 news websites, participants were asked if they were familiar with each domain, and how much they trusted each domain. We included 20 mainstream media outlet websites (e.g., “cnn.com,” “npr.org,” “foxnews.com”), 22 websites that mostly produce hyperpartisan coverage of actual facts (e.g., “breitbart.com,” “dailykos.com”), and 18 websites that mostly produce blatantly false content (which we will call “fake news,” e.g., “thelastlineofdefense.org,” “now8news.com”).

The set of hyperpartisan and fake news sites was selected from aggregations of lists generated by Buzzfeed News (hyperpartisan list, ref. 8; fake news list, ref. 23), Melissa Zimdars (9), Politifact (24), and Grinberg et al. (25), as well as websites that generated fake stories (as indicated by snopes.com) used in previous experiments on fake news (7, 13, 15, 26).

In the second study, we tested the generalizability of our findings by surveying an additional n = 970 Americans recruited from Lucid, providing a subject pool that is nationally representative on age, gender, ethnicity, and geography (27). To select 20 mainstream sources, we used a past Pew report on the news sources with the most US online traffic (28). This list has the benefit of containing a somewhat different set of sources than was used in study 1 while also clearly fitting into the definition of mainstream media. To determine which of the many fake and hyperpartisan sites that appeared on at least two of the lists described above to use, we selected the domains that had the largest number of unique URLs on Twitter between January 1, 2018, and July 20, 2018. We also included the three low-quality sources that were most familiar to individuals in study 1 (i.e., ∼50% familiarity): Breitbart, Infowars, and The Daily Wire.

See SI Appendix, section 1 for a full list of sources for both studies.

Finally, we sought to establish a more objective rating of news source quality by having eight professional fact-checkers provide their opinions about the trustworthiness of each outlet used in study 2. These ratings allowed us both to support our categorization of mainstream, hyperpartisan, and fake news sites, and to directly compare trust ratings of laypeople with experts.

For further details on design and analysis approach, see Methods. Our key analyses were preregistered (analyses labeled as “post hoc” were not preregistered).

Florida Atlantic University
Media Contact: Gisele Galoustian – Florida Atlantic University

Original Research: Closed access
“Establishing and Maintaining Research Integrity at Academic Institutions: Challenges and Opportunities”. Janet D. Robishaw, David L. DeMets, Sarah K. Wood, Phillip M. Boiselle, Charles H. Hennekens.
American Journal of Medicine doi:10.1016/j.amjmed.2019.08.036.

