In 2009, the National Academy of Sciences of the United States published its Congressionally commissioned report, “Strengthening Forensic Science in the United States: A Path Forward.” Chapter 5 of the report reviews a number of forensic disciplines and their shortcomings. A qualitative summary (by this author) of the Chapter 5 findings is presented in the following chart:
I have “graded” each of the forensic disciplines on the following attributes:
1) Statistical reliability for “class inclusion” of a suspect.
2) Statistical reliability for “individual identification” of a suspect.
3) Statistical reliability for “class and individual exclusion” of a suspect.
4) Verified scientific validity with documented statistics.
5) Clear, non-ambiguous terminology related to the statistical validity of results.
6) Does not rely on the competence, training, experience, or judgment of individual examiners.
Of great concern is all the red, pink, and yellow on the chart. Here is a link to a downloadable (and legible) copy of the chart:
The National Academies were given their Congressional “charge” in the fall of 2005 to investigate and report on the state of forensics in the US. By the fall of 2006, a panel of 52 scientists, academics, and experts had been assembled and had begun work. Two and a half years later, after an exhaustive review of the evidence, the report was published. What it had to say about forensics in the US (and, by extension, the world) was not good. In summary, the panel found that forensics (with the exception of DNA) lacks scientific rigor and statistical validation.
It can be said that every forensic discipline (with the exception of DNA) fails the test of “show me the data from which I can compute a probability of occurrence.” This is, of course, echoed in the Daubert standard’s requirement for a “known or potential error rate.”
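To make the DNA exception concrete: DNA is the one discipline where published population data let an examiner actually compute a probability of occurrence, via a random match probability. The sketch below illustrates the standard calculation (Hardy-Weinberg genotype frequencies multiplied across independent loci); the locus names are real STR loci, but the allele frequencies are invented for illustration, not real population data.

```python
from math import prod

# Illustrative evidence profile: (locus, allele freq p, allele freq q).
# q is None for a homozygous locus. Frequencies are made up for this sketch.
profile = [
    ("D3S1358", 0.25, 0.20),   # heterozygous
    ("vWA",     0.10, None),   # homozygous
    ("FGA",     0.15, 0.05),   # heterozygous
]

def genotype_frequency(p, q):
    """Hardy-Weinberg genotype frequency: p^2 if homozygous, 2pq if heterozygous."""
    return p * p if q is None else 2 * p * q

# Random match probability: the product of per-locus genotype frequencies,
# assuming the loci are statistically independent (linkage equilibrium).
rmp = prod(genotype_frequency(p, q) for _, p, q in profile)

print(f"Random match probability: {rmp:.2e}")
# For this toy profile: 0.10 * 0.01 * 0.015 = 1.5e-05, i.e. about 1 in 67,000
```

Real casework uses many more loci and validated population databases, driving the probability far lower; the point is that the underlying data exist and the arithmetic is auditable, which is exactly what the other disciplines in the chart cannot offer.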
Has this lack of scientific rigor and statistical validity led to wrongful convictions? ABSOLUTELY.
But … more about the validity of forensics in future posts.