It may seem too obvious to need saying, but it bears repeating – experts are people too, subject to a variety of influences that may distort their thinking, approach, and conclusions without their realizing it. This understanding of expert non-neutrality is nothing new – the 2009 report Strengthening Forensic Science in the United States: A Path Forward emphasized the point.

Some initial and striking research has uncovered the effects of some biases in forensic science procedures, but much more must be done to understand the sources of bias and to develop countermeasures…The traps created by such biases can be very subtle, and typically one is not aware that his or her judgment is being affected…Decisions regarding what analyses need to be performed and in what order also can be influenced by bias and ultimately have the potential to skew results.

Id., 184-185.  And the audience for expert testimony – often, the courts – is sometimes oblivious to this as well.  This arises from the perception of neutrality and objectivity that understandably accompanies the entry of science into the courtroom.

A new article offers a comprehensive paradigm for grasping and potentially responding to expert bias. “Cognitive and Human Factors in Expert Decision Making: Six Fallacies and the Eight Sources of Bias” (Anal. Chem. 2020, 92, 7998−8004) is among the latest from researcher and cognitive psychologist Dr. Itiel Dror.

Dror begins with his list of prevalent fallacies, ones he has identified in years of studying and training experts and the consumers of expert knowledge – judges and lawyers:

  • Bias is a problem only with “corrupt and unscrupulous individuals” and thus is a matter of personal integrity.
  • Bias occurs only among the “bad apples” of the community, people who have yet or do not care to learn “how to do their job properly.”
  • Experts are immune to bias as long as they perform competently and with integrity.
  • When forensic analysis relies on technology, instrumentation, or other non-human machinery, there can be no bias.
  • The “blind spot” phenomenon of seeing other experts as being biased but not oneself.
  • What Dror calls the “Illusion of Control: ‘I am aware that bias impacts me, and therefore, I can control and counter its effect. I can overcome bias by mere willpower.’”

The list is not merely anecdotal; for each fallacy, Dror identifies supporting sources.

Why is understanding these fallacies essential?  Without them as a starting point, experts will remain blind to their own limitations, and those who retain, rely on, or challenge experts will be unable to critically assess their work.

Dror offers more.  Having identified the fallacies that impede fair assessment of whether a particular expert’s approach or conclusion was skewed by bias, he then identifies eight sources of bias to test for.  This is his illustration.

[Figure: Dror’s taxonomy of the eight sources of bias, organized into three categories – case-specific circumstances; environment, culture, and experience; and human nature.]
The article walks the reader through how each level has a risk of biasing the examination and/or the resulting decision.

  • Case-specific circumstances, such as the data/material being examined; the reference material [e.g., a “target suspect” whose features can affect what is looked for in a crime scene sample or a latent print]; and contextual, domain-irrelevant information.
  • Environment, culture, and experience include base rate [e.g., what the ‘normal’ conclusion is when certain features are found]; organizational factors such as the “allegiance effect” and “myside bias”; and education and training that may predispose an examiner to view evidence from only one or a limited set of perspectives.
  • Human nature is the last confounding source of bias, ranging from purely individual motivations and belief systems to general features of human decision-making, such as top-down thinking.

To keep these from being mere labels, Dror provides illustrations.  How might reference materials bias an examination?  “[T]his source of bias is not limited to circumstances that have a “target” suspect per se, but can also arise from pre-existing templates and patterns, such as in the interpretation of blood pattern analysis or a crime scene. It can even impact what color is observed.”

Contextual information can have its own ramifications.

In toxicology, for example, contextual information can bias testing strategies. Consider, for instance, when a post-mortem case is provided with contextual information, such as “drug overdose” and/or that “the deceased was known to have a history of heroin use.” Such information can impact the established testing strategies, such as to go straight to the confirmation and quantification of a limited range of opiate-type drugs (morphine, codeine, 6-monoacetylmorphine, and other heroin markers), without running the other standard testing, such as an immunoassay or looking for other opioids that may have been present (e.g., fentanyl). Hence, the contextual information caused a confirmation bias approach and deviation from the standard testing and screening protocols.

So, too, can “base rate,” illustrated by an example from forensic pathology.  A hanging resulting in cerebral hypoxia correlates primarily with suicide; strangulation resulting in the same condition correlates highly with homicide.  But occasionally there are homicides by hanging, and failure to consider this can skew not only the ultimate determination but “other stages of the analysis, even the sampling and data collection, as well as detection of relevant marks, items, or signals, or even verification.”
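The danger in base-rate reasoning can be made concrete with Bayes’ theorem. The numbers below are entirely hypothetical, invented only for illustration – they are not forensic statistics from Dror’s article or anywhere else. The point is structural: a rare outcome (homicide by hanging) is not ruled out by its rarity, and an examiner who lets the base rate pre-empt the analysis will never see the evidence that should shift the conclusion.

```python
# Hypothetical illustration of base-rate reasoning in a hanging death.
# All numbers are invented for demonstration, not real forensic data.

def posterior_homicide(prior_homicide, p_evidence_given_homicide, p_evidence_given_suicide):
    """Bayes' theorem for a two-hypothesis case: P(homicide | evidence)."""
    prior_suicide = 1.0 - prior_homicide
    numerator = p_evidence_given_homicide * prior_homicide
    denominator = numerator + p_evidence_given_suicide * prior_suicide
    return numerator / denominator

# Suppose (hypothetically) only 2% of hanging deaths are homicides.
prior = 0.02

# An ambiguous finding (say, an atypical ligature mark) assumed to be far
# more common in homicidal hangings than in suicidal ones:
p_given_homicide = 0.60
p_given_suicide = 0.05

print(posterior_homicide(prior, p_given_homicide, p_given_suicide))
```

Under these made-up figures the posterior probability of homicide rises from 2% to roughly 20% – still a minority, but no longer negligible. An examiner who treats every hanging as a suicide is “right” 98% of the time yet structurally incapable of detecting the rare homicide; the base rate should inform the analysis, not replace it.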

Are there solutions, or at least mitigating steps to take?  Some are clear – preventing exposure to domain-irrelevant material, using “linear sequential unmasking” – and others are more difficult, as they require overcoming examiners’ defensiveness and their conviction that bias is not a problem for them.  But without a fundamental understanding of bias and its sources, and a corresponding system for checking and correcting for it, the risk of error in core forensic analysis will persist.
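For readers who think in procedural terms, the ordering principle behind linear sequential unmasking can be sketched schematically. The class and method names below are invented for illustration; this is a sketch of the core idea – document the evidence on its own before unmasking any reference material, and withhold domain-irrelevant context entirely – not an operational laboratory protocol.

```python
# Schematic sketch (not an operational protocol) of the ordering principle
# in linear sequential unmasking. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    trace_evidence: str      # e.g., a latent print from the scene
    reference_material: str  # e.g., the suspect's exemplar print
    context: str             # domain-irrelevant details (never shown)

@dataclass
class Examination:
    notes: list = field(default_factory=list)

    def examine(self, case: CaseFile) -> list:
        # Stage 1: analyze and document the evidence alone, before any
        # exposure to the reference material.
        self.notes.append(("evidence_only", f"features of {case.trace_evidence}"))
        # Stage 2: only after documentation, unmask the reference material
        # for comparison against the already-recorded features.
        self.notes.append(("comparison",
                           f"{case.trace_evidence} vs {case.reference_material}"))
        # Domain-irrelevant context (case.context) is never unmasked.
        return self.notes
```

The design choice the sketch captures is that the examiner’s record of the evidence is committed to before the potentially biasing reference material can shape what is “seen” in it – the same logic behind blinding in other sciences.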