eDiscovery Daily Blog
“Not Me”, The Fallibility of Human Review – eDiscovery Best Practices
When I talk with attorneys about using technology to assist with review (whether via techniques such as predictive coding or merely advanced searching and culling mechanisms), most of them still seem to question whether these techniques can measure up to good, old-fashioned human attorney review. Despite several studies that question the accuracy of human review, many attorneys still feel that their review capability is as good as or better than technical approaches. Here is perhaps the best explanation I’ve seen yet as to why that may not be the case.
In Craig Ball’s latest blog post on his Ball in Your Court blog (The ‘Not Me’ Factor), Craig provides a terrific explanation as to why predictive coding is “every bit as good (and actually much, much better) at dealing with the overwhelming majority of documents that don’t require careful judgment—the very ones where keyword search and human reviewers fail miserably.”
“It turns out that well-designed and -trained software also has little difficulty distinguishing the obviously relevant from the obviously irrelevant. And, again, there are many, many more of these clear cut cases in a collection than ones requiring judgment calls.
So, for the vast majority of documents in a collection, the machines are every bit as capable as human reviewers. A tie. But giving the extra point to humans as better at the judgment call documents, HUMANS WIN! Yeah! GO HUMANS! Except….
Except, the machines work much faster and much cheaper than humans, and it turns out that there really is something humans do much, much better than machines: they screw up.
The biggest problem with human reviewers isn’t that they can’t tell the difference between relevant and irrelevant documents; it’s that they often don’t. Human reviewers make inexplicable choices and transient, unwarranted assumptions. Their minds wander. Brains go on autopilot. They lose their place. They check the wrong box. There are many ways for human reviewers to err and just one way to perform correctly.
The incidence of error and inconsistent assessments among human reviewers is mind-boggling. It’s unbelievable. And therein lies the problem: it’s unbelievable. People I talk to about reviewer error might accept that some nameless, faceless contract reviewer blows the call with regularity, but they can’t accept that potential in themselves. ‘Not me,’ they think, ‘If I were doing the review, I’d be as good as or better than the machines.’ It’s the ‘Not Me’ Factor.”
While Craig acknowledges that “there is some cause to believe that the best trained reviewers on the best managed review teams get very close to the performance of technology-assisted review”, he notes that they “can only achieve the same result by reviewing all of the documents in the collection, instead of the 2%-5% of the collection needed to be reviewed using predictive coding”. He asks “[i]f human review isn’t better (and it appears to generally be far worse) and predictive coding costs much less and takes less time, where’s the rational argument for human review?”
Good question. Having worked with some large review teams of experienced, proficient document reviewers at an eDiscovery provider that performed a follow-up QC check of reviewed documents, I can still recall how often even those well-trained reviewers were surprised by some of the classification mistakes they had made. And I worked on one project where over a hundred reviewers spent several months reviewing, so you can imagine how expensive that was.
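As an aside for the technically curious, the core mechanics behind what Craig describes amount to supervised text classification: train a model on a small attorney-coded seed set, let it score the rest of the collection, and route only the uncertain middle band to human eyes. Here is a minimal sketch of that idea in Python using scikit-learn; the document snippets, labels, and cutoff thresholds are invented for illustration, and real predictive coding platforms layer active learning and statistical validation on top of this basic concept.

```python
# Minimal predictive-coding sketch (illustrative only -- real TAR tools add
# active learning, validation sampling, and defensibility reporting).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical attorney-coded seed set: 1 = relevant, 0 = irrelevant.
seed_docs = [
    "Q3 pricing agreement with Acme re: widget contract terms",
    "Lunch order for the team holiday party",
    "Draft amendment to the Acme supply contract",
    "Fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]

# The unreviewed remainder of the collection (normally far larger).
collection = [
    "Acme contract renewal and pricing schedule attached",
    "Reminder: parking garage closed on Friday",
]

# Turn text into term-weight vectors and fit a classifier on the seed set.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score every unreviewed document; only the uncertain middle band
# (thresholds here are made up) would go to human reviewers.
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, p in zip(collection, scores):
    if p >= 0.8:
        bucket = "likely relevant"
    elif p <= 0.2:
        bucket = "likely irrelevant (cull)"
    else:
        bucket = "route to human review"
    print(f"{p:.2f}  {bucket}:  {doc}")
```

The takeaway matches Craig’s point: most documents score near the extremes, where the machine is as reliable as (and far faster than) a human, and only the thin uncertain band genuinely calls for attorney judgment.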
BTW, Craig is no stranger to this blog – in addition to referencing several of his articles, we’ve also conducted thought leader interviews with him at LegalTech New York in each of the past three years. Here’s a link if you want to check those out.
So, what do you think? Do you think human review is better than technology-assisted review? If so, why? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.