Calculating Bad Math’s Contribution to Wrongful Conviction

This is my second stab at responding to an opinion piece (here) in the New York Times written by Leila Schneps, a mathematician and mystery writer, and her daughter Coralie Colmez, who have co-authored “Math on Trial: How Numbers Get Used and Abused in the Courtroom.” (I deleted yesterday’s post to give this more thought and expand on the issues raised in the op-ed piece.) I’ll first briefly address the authors’ troubling and rather contrived tie-in to the Italian High Court’s overturning of the acquittal of Amanda Knox. The NY Times piece otherwise makes a point worth stressing.

The op-ed writers suggest that, in the Knox case, the judge's lack of understanding of how retesting confirms a result may have prompted his refusal to order a retest of an important piece of DNA evidence. The example is offered in support of the piece's titular opinion, "Justice Flunks Math."

Phil Locke, a scientist and contributing editor on this blog, who has closely followed the Knox case, wrote a comprehensive article (here) on the DNA in question, which offers a more plausible reason for the judge’s decision. The real math in this matter indicates that the evidence falls far short of recognized thresholds for a reliable test sample.

According to Locke’s article, Professor Greg Hampikian of Boise State University, an international expert on DNA and Director of the Idaho Innocence Project, “advised Amanda’s legal team about the DNA issues. The Italian court appointed two independent, Italian DNA experts to review the evidence, and their conclusion was that the DNA evidence presented at trial was not scientifically supportable, and consequently not valid.”

But getting past the Knox case reference, the opinion piece makes a point worth repeating. It cautions against the “overbearing impressiveness” of numbers in a courtroom. The quoted words are from Harvard law professor Laurence H. Tribe, who decades ago warned of the compelling nature of expert testimony on frequencies and probabilities in a courtroom.

Anyone who has followed wrongful convictions knows that flawed forensics—unreliable expert forensic testimony based upon unreliable science—has been a major contributor to DNA-proven wrongful convictions. But the important point made in the New York Times opinion is that unsubstantiated frequencies often delivered in mathematical probabilities by forensic experts make bad science overwhelmingly compelling.

An example provided is that of Sally Clark, a young lawyer who lost two of her children (their deaths initially attributed to crib death) and was wrongfully convicted of murdering them after a forensic expert testified that the likelihood of two such deaths occurring by chance was “one in 73 million.” That figure was reached by squaring the estimated rate of a single crib death, which wrongly treats the two deaths as independent events; because genetic and environmental risk factors are shared within a family, double crib deaths actually occur far more frequently, as the writers note. While Clark’s conviction was eventually overturned, the tragedy of the deaths and the wrongful conviction presumably contributed to her death from alcohol poisoning just four years later.
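To see how a figure like "1 in 73 million" goes wrong, here is a minimal sketch. It assumes the widely reported single-family crib-death rate of roughly 1 in 8,543 that underlay the testimony; the dependence factor is purely hypothetical, chosen only to show the direction of the error:

```python
# Sketch of the statistical error behind the "1 in 73 million" figure.
# The expert squared a single-family crib-death rate (roughly 1 in
# 8,543), which treats the two deaths as independent events.

p_single = 1 / 8543  # approx. chance of one crib death in such a family

# Naive (wrong) calculation: assume the second death is independent
# of the first.
p_naive = p_single ** 2
print(f"Naive odds: 1 in {round(1 / p_naive):,}")  # ~1 in 73 million

# Genetic and environmental risk factors are shared within a family,
# so a second death is far likelier once a first has occurred. The
# factor of 100 below is a hypothetical illustration, not a measured
# value.
dependence_factor = 100
p_dependent = p_single * (p_single * dependence_factor)
print(f"With dependence: 1 in {round(1 / p_dependent):,}")
```

The point is not the particular corrected number but that the headline figure is exquisitely sensitive to an independence assumption the expert never justified.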

In our book “False Justice,” we suggest that Michael Green probably would have been convicted of a 1988 rape in Cleveland on the strength of eyewitness misidentification alone. Green, a black former dishwasher at the hotel where the rape occurred, roughly matched the description of the perpetrator, and the victim identified him in a flawed photo lineup procedure. While the positive identification by the frail young white woman, who was dying of cancer, probably would have been enough to send Green to prison, her misidentification was bolstered by bad science made particularly compelling by unfounded math.

A forensic expert said that a hair found on the washcloth the rapist used to clean himself after the crime was a head hair “consistent with” Michael Green’s. When challenged by the prosecutor to estimate the likelihood of two hairs matching like this but coming from two different persons, the expert said that he recalled a study that would put that likelihood at about 1 in 40,000.
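A figure like "1 in 40,000" invites what statisticians call the prosecutor's fallacy: the chance that a random person's hair would coincidentally match is not the chance that a matching defendant is innocent. The sketch below makes the distinction concrete; the suspect-pool size is purely hypothetical, chosen only for illustration:

```python
# Sketch of the prosecutor's fallacy. A 1-in-40,000 coincidental-match
# rate does NOT mean a 1-in-40,000 chance the defendant is innocent.

match_prob = 1 / 40000  # claimed chance a random person's hair matches
pool = 500_000          # hypothetical number of plausible suspects

# Expected number of innocent people in the pool whose hair would
# also "match" the crime-scene sample.
expected_matches = pool * match_prob
print(f"Expected coincidental matches: {expected_matches:.1f}")  # 12.5

# If the true perpetrator is somewhere in the pool, a person who
# matches is only one of (1 + expected_matches) candidates, so the
# match alone puts the chance of guilt at roughly:
p_guilty_given_match = 1 / (1 + expected_matches)
print(f"P(perpetrator | match) ~ {p_guilty_given_match:.1%}")
```

With these hypothetical numbers the match alone implies only about a 7 percent chance of guilt, not near-certainty; the impressive-sounding "1 in 40,000" measures something quite different from what a jury is likely to hear in it.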

With that testimony, the jury must have felt extremely confident in its guilty verdict. But after Green had served 13 years in prison, DNA testing proved the expert wrong on both counts: the incriminating hair was not a head hair but a pubic hair, and it did not belong to Michael Green. It belonged to Rodney Rhines, who belatedly came forward and confessed to the crime; DNA testing of the hair confirmed his confession.

The opinion piece provides other tragic examples of math gone awry in the courtroom. Unfortunately, most lawyers, judges, and jurors are not well schooled in the forensic sciences, so an expert's testimony often carries great weight. Moreover, in the vast majority of DNA cases in which faulty forensic testimony contributed to wrongful conviction, defense attorneys, perhaps constrained by limited resources, did not present experts who might have given opposing or different forensic testimony.

Just as we’ve learned that an eyewitness’s confidence in an identification bears little relationship to whether the identification is correct, we must guard against allowing unsubstantiated math to bolster confidence in bad science. Given the risk of wrongful conviction, the criminal justice system must take all possible steps to remove bogus math from the courtroom and to make the true meaning of frequencies and probabilities clear to jurors, who are generally unschooled in science and statistics, even when the math and science rest on sound quantitative methods.
