The waning days of the Trump administration saw a fervid, if not frenzied, rush to put out policy statements or decisions meant to tie the hands of the incoming Biden administration, including declaring Cuba a state sponsor of terrorism, labeling a Yemeni group a terrorist organization, recognizing Moroccan sovereignty over the Western Sahara, allowing oil leasing in Alaska, and limiting the reach of the Endangered Species Act.
With that approach to governance, it is unsurprising that in the last days of December 2020 the Department of Justice issued an unsigned, unattributed attack on the PCAST report Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, four years after its publication.
The PCAST report questioned the validity of various forensic disciplines. Put simply, the DOJ’s express intent for its “Statement on the PCAST Report” was to show the PCAST report to be incorrect because “a number of recent federal and state court opinions have cited the Report as support for limiting the admissibility of firearms/toolmarks evidence in criminal cases.”
It is apparent that the DOJ response was rushed in order to be released before the change of administrations; indeed, it is labeled “pre-decisional draft.” The attack on PCAST, from the same administration that decommissioned the National Commission on Forensic Science (where this author was a member), contains a multitude of questionable averments. Three in particular warrant exposure.
The first, an illogical argument, is that the PCAST report cannot be accurate because it classifies forensic disciplines such as latent print examination, firearms comparison, and bitemark examination as forms of “metrology.” In a somewhat tautological fashion, the DOJ reasons that:
- Metrology measures quantity;
- Quantity is expressed numerically;
- “forensic pattern comparison methods compare the features/characteristics and overall patterns of a questioned sample to a known source; they do not measure them[;]” and
- Therefore, these disciplines fall outside metrology, and as a result the PCAST report is wrong in many of its conclusions.
The problems here are simple. PCAST defines metrology as “measuring and comparing features” and, more importantly, simply applies scientific standards to disciplines that purport to be and present themselves in court as science. None of those standards – such as “scientific validity” and the need for “reliable principles and methods” – is limited to “quantity” measurement. So whether the nomenclature of “metrology” meets the definition cited by DOJ is of no moment.
The second error is that of ‘blaming’ the PCAST report for restrictions on firearm matching testimony in “recent federal and state court opinions[.]” Even a cursory review shows otherwise. In one case, the report was cited only in regard to “error rates” and was not relied on as a ground for restricting the scope of testimony; that restriction actually came in response to the firearms examiner’s inability to articulate how conclusions were reached.
In at least one case, the Department of Justice presented its own experts to support firearms opinions, and those experts were deemed inadequate even after the court acknowledged that “the PCAST report is hardly beyond critique, and the government’s experts stated many valid criticisms of it throughout the hearing[.]” Finally, in one case the report was cited only in a concurrence; the court upheld the firearms testimony as generally accepted and rejected only one unsustainable opinion of the expert – that the projectiles came from the same gun “to the practical exclusion of all other guns.” Ironically, this ruling echoes the DOJ position on firearms “match” testimony that
[a]n examiner shall not assert that two toolmarks originated from the same source to the exclusion of all other sources. This may wrongly imply that a ‘source identification’ conclusion is based upon a statistically-derived or verified measurement or an actual comparison to all other toolmarks in the world, rather than an examiner’s expert opinion.
Approved ULTR for the Forensic Firearms/Toolmarks Discipline – Pattern Match (effective 1.24.19 to 8.14.20)
In other cases, the PCAST report was but one of many bases for limiting examiner conclusions, and in no instance did it cause or support banning such testimony in its entirety. The Government’s intent here is clear: to devalue these cases as precedent so that, as a consequence, the limitations on testimony will be lessened or discarded entirely.
Perhaps most concerning is the DOJ Statement’s selective excerpting of reports on error rates. DOJ purports to challenge only the PCAST contention that “black box” studies are appropriate for assessing error rates, but then spends much time disparaging all error rate studies as having no utility. Two instances confirm this.
A report by the American Association for the Advancement of Science is quoted by DOJ as concluding that “[i]t is unreasonable to think that the ‘error rate’ of latent fingerprint examination can meaningfully be reduced to a single number or even a single set of numbers.” But the AAAS report emphatically supports the analysis and use of error rates, and simply calls for more and better studies.
This occurs again when the DOJ excerpts language from a recent article on error rates by Professor Itiel Dror. According to the excerpted language, Dror concludes that error rates may be “misleading” and “say very little about a specific examiner’s decision in a particular case.” But the same article actually says much more:
an average error rate for an average expert, in an average case, may not be informative (may even be misleading) for evaluating a specific expert examiner, doing a specific case. However, providing ranges, confidence intervals, standard deviations, and other information about “the error rate,” as well as comparative error rates of others, may all be helpful in understanding and evaluating a general error rate.
Dror, The Error in “Error Rate”: Why Error Rates Are So Needed, Yet So Elusive, 65 JOURNAL OF FORENSIC SCIENCES 5, 15-16 (2020). That current error rate research is deficient in some regards in no way undercuts the relevance of such information or the thrust of the PCAST report.
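Dror’s caution is easier to see with numbers in hand. The sketch below (in Python, using purely hypothetical study figures, not data from any study or case discussed here) computes a Wilson score confidence interval around a black-box false-positive rate, illustrating why a range is more informative than the single number Dror warns against:

```python
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96):
    """Wilson score confidence interval for a proportion (z = 1.96 gives ~95%).

    Returns (point_estimate, lower_bound, upper_bound).
    """
    p = errors / trials
    denom = 1 + z ** 2 / trials
    center = (p + z ** 2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return p, max(0.0, center - margin), min(1.0, center + margin)

# Hypothetical black-box study: 6 false positives in 3,500 nonmated comparisons.
rate, low, high = wilson_interval(errors=6, trials=3500)
print(f"Point false-positive rate: {rate:.3%} (about 1 in {round(1 / rate)})")
print(f"95% interval: {low:.3%} to {high:.3%} (upper bound about 1 in {round(1 / high)})")
```

On these made-up figures, a headline rate of roughly 1 in 583 carries a 95% upper bound closer to 1 in 268 – precisely the gap between a single number and the “ranges, confidence intervals, standard deviations” that Dror recommends reporting.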
This author contacted Professor Dror for comments on the DOJ stance. Here is what he shared:
- “Attacking use of error rates is attacking scientific measurement.”
- “My article points out difficulties not to give up on the science but to point out challenges and limitations.”
- “If [the forensic disciplines] are not metrology, then they are giving up on science.”
There are several bottom lines here. Selective excerpting is not a valid form of argument, and focusing on the label “metrology” in no way undercuts the principles of the PCAST report. Most importantly, with the change of administration and the appointment of Eric Lander, lead author of the PCAST report, to the cabinet-level position of director of the White House Office of Science and Technology Policy, it can be hoped that the Department of Justice will take a less science-phobic and more informed and accurate stance in the future.
– – – – – –
The 2016 PCAST report can be found at https://obamawhitehouse.archives.gov/blog/2016/09/20/pcast-releases-report-forensic-science-criminal-courts
The December 2020 DOJ response can be found at https://www.justice.gov/opa/pr/justice-department-publishes-statement-2016-presidents-council-advisors-science-and