Morrison Bonpasse has encouraged discussion on this blog of his study, “Polygraphs and Exonerations — A Promising Relationship,” which he presented at last month’s Innocence Network Conference. Bonpasse and I had several exchanges as he finalized his study, and he made some corrections and adjustments as a result.
After his presentation, Bonpasse said in an email that “the facts in my article speak for themselves” about the value of the polygraph in innocence investigations. But do they?
One key fact Bonpasse uses in his paper, which is available here, says that “the 2003 National Research Council report, The Polygraph and Lie Detection, found an 86% accuracy rate for polygraphs on single issue testing.” But in an analysis of the polygraph’s reliability by a U.S. District Court judge in Atlanta in the case U.S. v. Ricardo C. Williams, the judge took issue with that 86% accuracy claim. The court noted that the Research Council went on to say that the quality of the polygraph studies it reviewed “falls far short of what is desirable” and that the accuracy rates that resulted are “highly likely to overestimate real-world polygraph accuracy.”
Part of the problem with many polygraph studies is that they are conducted by people who directly or indirectly are on the payroll of the polygraph industry, whose first interest is profit, not truth.
McClatchy Newspapers Washington bureau reporter Marisa Taylor provided a good example of that in a May 20 article here. The article reported that “police departments and federal agencies across the country are using a type of polygraph despite evidence of a technical problem that could label truthful people as liars or the guilty as innocent” because they haven’t been notified of the issue.
Taylor said the technical glitch in question produced errors in the computerized measurements of sweat in one of the most popular polygraphs, the Lafayette Instrument Co.’s LX4000. “Although polygraphers first noticed the problem a decade ago, many government agencies hadn’t known about the risk of inaccurate measurements until McClatchy recently raised questions about it,” Taylor wrote.
The story noted that polygraphs, unlike medical or other computerized equipment, aren’t required to meet any independent testing standards to verify the accuracy of their measurements.
Although the LX4000’s problem has long been known, the article said, the experts or decision makers who should have been spreading the word or acting on it didn’t. One reason for that, Taylor reported in a separate story, might be that those experts — including full-time law-enforcement officers — are being paid by the machine’s manufacturer as consultants or dealers.
This can lead to serious conflicts of interest. Consider the two experts who developed the American Polygraph Association’s highly critical response to McClatchy’s findings about Lafayette’s LX4000, which accused McClatchy of exaggerating the problem and working for a competitor. McClatchy said both experts are on Lafayette’s payroll. While Lafayette’s competitors have used the LX4000’s problems to their advantage, they have done it quietly, lest someone start taking a closer look at potential flaws in their own instruments or raise more questions about the polygraph in general.
While Bonpasse’s study is interesting and he makes some good recommendations on how to improve polygraph testing, the polygraph still doesn’t pass scientific muster. For that reason, courts are unlikely to accept polygraph results as valid evidence — which is exactly what the court declined to do in the Ricardo Williams case mentioned above. In fact, in the two cases Bonpasse mentions in which he used the polygraph in an attempt to prove inmates’ innocence, both men remain in prison. So the polygraph is likely to remain a secondary investigative tool at best.