We optimize learning when we fail 15% of the time


To learn new things, we must sometimes fail. But what’s the right amount of failure?

New research led by the University of Arizona proposes a mathematical answer to that question.

Educators and educational scholars have long recognized that there is something of a “sweet spot” when it comes to learning.

That is, we learn best when we are challenged to grasp something just outside the bounds of our existing knowledge. When a challenge is too simple, we don’t learn anything new; likewise, we don’t enhance our knowledge when a challenge is so difficult that we fail entirely or give up.

So where does the sweet spot lie?

According to the new study, to be published in the journal Nature Communications, it’s when failure occurs 15% of the time.

Put another way, it’s when the right answer is given 85% of the time.

“These ideas that were out there in the education field – that there is this ‘zone of proximal difficulty,’ in which you ought to be maximizing your learning – we’ve put that on a mathematical footing,” said UArizona assistant professor of psychology and cognitive science Robert Wilson, lead author of the study, titled “The Eighty Five Percent Rule for Optimal Learning.”

Wilson and his collaborators at Brown University, the University of California, Los Angeles and Princeton came up with the so-called “85% Rule” after conducting a series of machine-learning experiments in which they taught computers simple tasks, such as classifying different patterns into one of two categories or classifying photographs of handwritten digits as odd versus even numbers, or low versus high numbers.

The computers learned fastest in situations in which the difficulty was such that they responded with 85% accuracy.

“If you have an error rate of 15% or accuracy of 85%, you are always maximizing your rate of learning in these two-choice tasks,” Wilson said.

When researchers looked at previous studies of animal learning, they found that the 85% Rule held true in those instances as well, Wilson said.

When we think about how humans learn, the 85% Rule would most likely apply to perceptual learning, in which we gradually learn through experience and examples, Wilson said.

Imagine, for instance, a radiologist learning to tell the difference between images of tumors and non-tumors.

“You get better at figuring out there’s a tumor in an image over time, and you need experience and you need examples to get better,” Wilson said.

“I can imagine giving easy examples and giving difficult examples and giving intermediate examples.

If I give really easy examples, you get 100% right all the time and there’s nothing left to learn.

If I give really hard examples, you’ll be 50% correct and still not learning anything new, whereas if I give you something in between, you can be at this sweet spot where you are getting the most information from each particular example.”
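Wilson's intuition about easy, hard, and intermediate examples can be illustrated with a toy simulation. The sketch below is not the study's actual model: the `train` function, its logistic learner, and all parameters are invented for this example. A simulated teacher holds the learner's accuracy at a fixed target while the learner improves a single "skill" parameter by stochastic gradient ascent; running it at different targets lets you compare how much skill accumulates when training is easy, hard, or in between.

```python
import math
import random

def sigmoid(t):
    """Numerically stable logistic function."""
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def train(target_acc, n_trials=2000, lr=0.05, seed=0):
    """Toy two-choice learner trained at a fixed target accuracy.

    Skill is a single precision parameter beta. On each trial the
    stimulus strength delta is chosen so that the learner's predicted
    accuracy sigmoid(beta * delta) equals target_acc, standing in for a
    teacher that holds the error rate constant. beta is then nudged by
    stochastic gradient ascent on the log-likelihood of the correct
    response. Returns the final skill beta.
    """
    rng = random.Random(seed)
    beta = 0.1
    for _ in range(n_trials):
        # sigmoid(beta * delta) = target_acc  =>  delta = logit(target_acc) / beta
        delta = math.log(target_acc / (1.0 - target_acc)) / beta
        z = delta + rng.gauss(0.0, 1.0)      # noisy evidence for the true answer
        p_correct = sigmoid(beta * z)
        beta += lr * (1.0 - p_correct) * z   # gradient of log sigmoid(beta * z)
        beta = max(beta, 1e-3)               # keep skill positive
    return beta

for target in (0.65, 0.85, 0.95):
    print(f"accuracy held at {target:.0%}: final skill = {train(target):.2f}")
```

At a very high target the per-trial gradient is small because the learner is rarely surprised; at a target near chance the evidence itself is weak; intermediate targets supply the largest expected update, which is the mechanism the study formalizes.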

Since Wilson and his collaborators were looking only at simple tasks in which there was a clear correct and incorrect answer, Wilson won’t go so far as to say that students should aim for a B average in school.

However, he does think there might be some lessons for education that are worth further exploration.

“If you are taking classes that are too easy and acing them all the time, then you probably aren’t getting as much out of a class as someone who’s struggling but managing to keep up,” he said. “The hope is we can expand this work and start to talk about more complicated forms of learning.”

“Every failure is a step to success. Every detection of what is false directs us towards what is true: every trial exhausts some tempting form of error. Not only so; but scarcely any attempt is entirely a failure; scarcely any theory, the result of steady thought, is altogether false; no tempting form of Error is without some latent charm derived from Truth.”

–W. Whewell (1)

Contemporary society cherishes success. Success defines the person, the organization, the culture. It is a clear goal for every initiative that has an outcome. It is a gauge by which one measures impact, influence, and consequence. Success is defined as much by tangible achievement of a predefined goal as it is by its polar opposite, failure. The commonly held view is that failure is to be avoided because success is to be achieved, and both cannot coexist. In this editorial, I will dwell on failure and make the case that failure has at least as important a role in our experience, education, and professional development as success—if we would only learn from it.

We are first exposed to the concept of failure in elementary school, quickly realizing how it can affect our educational progress. My generation of students lived in fear of failing tests, subjects, and, ultimately, grades, thereby being left behind to repeat the school year. This early, first experience with failure obviously colors our perception of the concept with great negativity. Defined in this way, failure is simply the opposite of success, a notion that sets the stage for the role of failure and its interpretation throughout one’s life.

This is particularly true for physicians and scientists, who, as consummate overachievers, strive to succeed and implicitly strive to avoid failure with each challenge. Nothing but perfection will suffice, as failure renders our professional efforts, view of accomplishment, and sense of ourselves imperfect.

In the early 19th Century, the term failure was commonly used to describe a “breaking in business,” or going broke or bankrupt (2). Over time, this purely commercial definition evolved to pertain to personal deficiency, as well as tangible accomplishment or moral behavior. How did this change occur? How did one’s financial inadequacy morph into personal inadequacy? Sandage, in his book Born Losers: A History of Failure in America, argues that, in part, this evolution reflects the interpretation of the American dream in 19th Century America. He points out that the failures among us “embody the American fear that our fondest hopes and our worst nightmares may be one and the same….the [American] dream that equates freedom with success…could neither exist nor endure without failure.” We need failure, “the word and the person…to sort out our own defeats and dreams” (3). Put in contemporary Darwinian terms, “It is not enough to succeed. Others must fail” (4).

Contemporary American education has taken this interpretation to an extreme, doing whatever it can to eliminate failure. Driven by the view that discouraging criticism (in the best case) and objective failure (in the worst case) will impair effective learning, the educational establishment has evolved to minimize the likelihood that students can fail—a course, or a grade, or a program. Failure in this educational ideal is considered a reflection of institutional inadequacy rather than a true learning limitation of the individual.

Maintaining the student’s self-esteem at all costs has been the mantra of American public education for some time now. While this approach has had its benefits, especially for students who would be severely defeated by even modest failure, it has also had its disadvantages, creating a self-affirming culture of narcissism among many students.

Over the past decade, this change in educational philosophy has ‘trickled up’ from elementary school to professional school and postgraduate medical education. Rather than assume the old ‘sink or swim’ philosophy of past generations, we as medical educators must now be ever-sensitive to the needs of students, residents, and fellows, providing critical feedback at just the right time in just the right way.

Sensitivity to the needs of trainees has also influenced the process of identifying candidates for residency and fellowship, even at the most elite institutions: residents and fellows are now recruited rather than selected as they were in the past. In this way, we soften the sting of rejection, viewing it now as a failure of the program to recruit the applicant rather than a failure of the applicant to be chosen by the program.

While encouragement is clearly important in early education and constructive, positive criticism essential for optimizing the learning experience, there comes a time in each person’s development at which clear criticism and the risk of true failure need to be conveyed. The complexity of life, biologically and experientially, is rife with uncertainty, and in the course of its execution, rich with the possibility of failure—to meet an aspiration or achieve a goal.

As parents, educators, and role models, we are not meeting our obligations to trainees unless we instruct them in the importance of failure, how to react to it, and, most importantly, how to learn from it. Failure is imperative for developing the tenacity and self-control necessary to interact effectively with our complex environment, and as such, is the true “secret to success,” as Paul Tough pointed out in his New York Times piece in 2011 (5).

A moment’s reflection will lead one to appreciate how important and pervasive failure is in the normal course of one’s personal and professional life. In fact, among some highly specialized goal-oriented professions, failure is a dominant and expected outcome. In the pharmaceutical industry, the clinical failure rate for drugs entering phase II testing was reported to be 81% for 50 illustrative compounds that entered clinical testing in 1993-2004 (6). Major league batters fail to get a hit ~75% of the time (the overall major league baseball batting average for 2013 was 0.253) (7).

Meteorologists have an overall error rate for predicting precipitation over three days of 15%, with precipitation predicted but not observed 43% of the time, and precipitation observed but not predicted 10% of the time (for San Francisco, the last three months of 2011) (8). Health economists’ cost-prediction algorithms are notoriously inaccurate, with absolute prediction errors ranging from 79% (10) to 98% (9) of the actual means.
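The relationship between those three forecast-error numbers is easier to see with a concrete tally. The counts below are hypothetical, chosen only to be consistent with the quoted San Francisco rates; the underlying data are not given in the source.

```python
# Hypothetical 153-day forecast tally (invented for illustration), chosen
# to be consistent with the quoted rates: ~15% overall error, ~43% of rain
# forecasts unverified, ~10% of rain events missed.
hits = 27            # rain predicted and observed
misses = 3           # rain observed but not predicted
false_alarms = 20    # rain predicted but not observed
correct_negs = 103   # no rain predicted, none observed
total = hits + misses + false_alarms + correct_negs

overall_error = (misses + false_alarms) / total           # wrong either way
false_alarm_ratio = false_alarms / (hits + false_alarms)  # predicted, not observed
miss_rate = misses / (hits + misses)                      # observed, not predicted

print(f"overall error    : {overall_error:.0%}")     # prints 15%
print(f"false alarm ratio: {false_alarm_ratio:.0%}") # prints 43%
print(f"miss rate        : {miss_rate:.0%}")         # prints 10%
```

The three rates use different denominators (all days, forecast-rain days, observed-rain days), which is why a modest 15% overall error can coexist with a 43% false-alarm ratio: rain is rare, so even a few unverified forecasts loom large among the rain predictions.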

Failure is, of course, part of the scientific method. All well-designed experiments are framed in terms of the null hypothesis, which more often turns out to hold than its alternative. Exploring the bases of biological phenomena or diseases requires the same approach and has the same low yield of positive results.

Every failed experiment changes the researcher’s perspective, helps reframe the experimental design, and leads to an increasingly refined approach to the problem, iteratively narrowing the possibilities for fruitful study over time. Engineering disciplines have taken this process to an extreme, developing the field of failure analysis using forensic engineering approaches (11).

No matter how insightful an investigator is believed to have been in retrospect, the scientific approach is one of informed trial and error in the best of circumstances, and, therefore, invariably subject to the play of chance.

Given the importance of failure in scientific research and problem solving, it should come as no surprise that the current educational strategy of ego preservation by limiting challenging course work to minimize harm to self-esteem runs counter to optimal scientific education. Research and innovation in science require failure, which must be taught (12), nurtured, understood, and incorporated in one’s scientific paradigm.

The scientific community can only achieve this level of understanding if it is aware of both positive results (i.e., those in which the null hypothesis is rejected) as well as negative results (i.e., those in which the null hypothesis is not rejected). Hence, publishing negative trial outcomes is essential for the scientific enterprise, a fact that had not been widely accepted by major journals until relatively recently.

Preconceived bias driven by the desire to achieve an hypothesized outcome can lead to significant type I errors, misleading future experimentation in that scientific community. With ever more riding on the success of an experiment and the scientist ever less equipped to handle a negative result (which equates with a failed experiment), is it any wonder that published experimental results are increasingly recognized as incorrect or false (13)?

Add to this the great importance placed on publishing papers in journals of the greatest impact, which invariably require proving an hypothesis in multiple systems using a variety of internally consistent methodologies, and any reasonable experimentalist can easily understand how this growth in false outcomes has occurred.

As anyone who has performed scientific experiments in different systems (cellular, animal, or human) can attest, it is invariably the case that complete consistency is extraordinarily uncommon owing to experimental imprecision, biological variability, and the play of chance.

Thus, from reading these elegant studies in which everything works, one can conclude either that some of the data must be overinterpreted or, worse, manipulated, or that inconsistent experimental results were excluded from the published study. This point again illustrates how fear of failure in an ever-competitive scientific landscape with increasingly limited resources misguides the scientific enterprise.

Scientific failure has more challenging dimensions that affect all of us, including the most accomplished investigators. Risk taking and innovation are particularly dependent on failure, but in a fiscally conservative environment such as that which currently exists in the US economy, true innovation is rarely rewarded.

NIH grant proposals are more likely to be funded if the proposed research is viewed as likely to meet the specific aims than are proposals that address high-risk hypotheses with limited supportive data.

Even in industry where, it can be argued, high-risk research can more readily be performed than in academics, the fiscal environment has taken a toll, with the research and innovation units of many major pharmaceutical companies having been downsized or eliminated.

Biomedical investigators—even the most academically successful—are painfully aware of common failures of other sorts, as well.

Rejections of manuscripts for publication, unsuccessful or delayed promotions, and the lack of success of trainees to secure desirable positions all are part and parcel of our academic lives. Ideally, each of these failures pushes us to improve, serving as a barometer against which to compare progress through the profession.

Failure in clinical medicine is particularly troublesome to trainees and established practitioners alike. As medical students, residents, and fellows, we continue to aim for perfection in our educational experience, viewing failure as a personal shortcoming.

Bias enters into our scientific mindset in this realm, as well, by clinically framing a case within the context of diagnoses with which we are familiar.

We are forever attempting to force square pegs into round holes, rather than focusing on the features of a clinical case that do not fit the mould. We do so to gain comfort in an uncertain clinical landscape, avoiding failure and perceived inadequacy in the process.

Yet, as clinicians, we worry that we have not made the best decision for our patients, concerned that we may have missed a diagnosis, misread a laboratory result, or chosen a therapeutic strategy that is suboptimal or, worse, incorrect. None of us wishes to be perceived as the physician who failed his patient.

Failure is, however, a great teacher, especially in clinical medicine. I can clearly remember many of the clinical mistakes I have made, learning from each of them; clinical successes are far less memorable and far less instructive.

Osler, in A Way of Life, wrote of keeping a journal of one’s mistakes, stating that one should “[b]egin early to make a threefold category—clear cases, doubtful cases, mistakes ….[as] it is only by getting your cases grouped in this way that you can make any real progress….only in this way can you gain wisdom with experience” (14).

From an even broader, more philosophical perspective, failure has great importance for a number of reasons. Bradatan highlights some of them with keen insight (15). Failure gives us the opportunity to see our existence close-up.

It is a lens through which we begin to see the flaws in our otherwise perfect and perfectly predictable being. When failure gives us this insight, it emphasizes for us the existential threat that constantly stalks our lives, giving us pause to consider how extraordinary life is.

For this reason, failure can be therapeutic, forcing us to realize that the world does not revolve around us, guarding against unbridled arrogance, offering the comfort of humility in its stead.

I believe we should celebrate failure. In fact, we should exhort our students, trainees, and offspring to be alert to failure and its potential as a learning experience, rather than guard against it at all costs. We cannot avoid failure, and if we view it through the constructive lens of self-improvement, we need not cushion its blow to self-esteem. “The only real mistake” one can make “is the one from which we learn nothing” (16).


Original Research: Open access
“The Eighty Five Percent Rule for optimal learning”. Robert C. Wilson, Amitai Shenhav, Mark Straccia & Jonathan D. Cohen.
Nature Communications doi:10.1038/s41467-019-12552-4.

