
Cooperation in Predictive Coding Exercise Fails to Avoid Disputed Production: eDiscovery Case Law

In Dynamo Holdings v. Commissioner of Internal Revenue, Docket Nos. 2685-11, 8393-12 (U.S. Tax Ct. July 13, 2016), Tax Court Judge Ronald Buch denied the respondent’s Motion to Compel Production of Documents Containing Certain Terms, finding that there is “no question that petitioners satisfied our Rules when they responded using predictive coding”.

Case Background

In this case involving various transfers from one entity to a related entity, the respondent determined that the transfers were disguised gifts to the petitioners’ owners, while the petitioners asserted that the transfers were loans.  The parties previously disputed the use of predictive coding for this case and, in September 2014 (covered by us here), Judge Buch ruled that “[p]etitioners may use predictive coding in responding to respondent’s discovery request. If, after reviewing the results, respondent believes that the response to the discovery request is incomplete, he may file a motion to compel at that time.”

At the outset of this ruling, Judge Buch noted that “[t]he parties are to be commended for working together to develop a predictive coding protocol from which they worked”.  As indicated by the parties’ joint status reports, the parties agreed to and followed a framework for producing the electronically stored information (ESI) using predictive coding: (1) restoring and processing backup tapes, (2) selecting and reviewing seed sets, (3) establishing and applying the predictive coding algorithm, and (4) reviewing and returning the production set.

While the petitioners were restoring the first backup tape, the respondent asked the petitioners to conduct a Boolean search and provided a list of 76 search terms for the petitioners to run against the processed data.  That search yielded over 406,000 documents, from which two samples of 1,000 documents each were drawn and provided to the respondent for review.  After the model was run against the second 1,000 documents, the petitioners’ technical professionals reported that the model was not performing well, so the parties agreed that the petitioners would select an additional 1,000 documents that the algorithm had ranked high for likely relevancy, and the respondent reviewed those as well.  The respondent declined to review one more validation sample of 1,000 documents after the petitioners’ technical professionals explained that the additional review would be unlikely to improve the model.
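For readers curious about the mechanics, here is a minimal Python sketch of that kind of term search and sampling workflow. The documents, terms, and sizes below are hypothetical stand-ins, not the parties’ actual data or tooling.

```python
# Illustrative sketch only -- not the parties' actual tooling. It shows the kind
# of workflow described above: run a list of Boolean search terms against
# processed documents, then draw a fixed-size random sample for opposing
# counsel to review.
import random

# Hypothetical stand-ins for the processed ESI and the 76 agreed search terms.
documents = {
    "DOC-0001": "quarterly transfer recorded as intercompany loan",
    "DOC-0002": "board meeting agenda and travel arrangements",
    "DOC-0003": "promissory note covering the disputed transfers",
}
search_terms = ["transfer", "loan", "promissory note"]  # in practice, 76 terms

def matches_any_term(text: str, terms: list[str]) -> bool:
    """Simple OR-style Boolean match: keep a document if any term appears."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in terms)

# Step 1: the Boolean search (in the case, this yielded over 406,000 documents).
hits = [doc_id for doc_id, text in documents.items()
        if matches_any_term(text, search_terms)]

# Step 2: draw a fixed-size sample of the hits for the requesting party to review
# (the parties used 1,000-document samples; this toy corpus is tiny, so we cap it).
sample_size = min(1000, len(hits))
review_sample = random.sample(hits, sample_size)
print(f"{len(hits)} documents hit the terms; sampling {sample_size} for review")
```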

Ultimately, using the respondent’s selected recall rate of 95 percent, the petitioners ran the algorithm against the 406,000 documents to identify documents to produce (followed by a second algorithm to identify privileged materials).  Between January and March 2016, the petitioners delivered a production set of approximately 180,000 documents on a portable device for the respondent to review, including a relevancy score for each document.  In the end, the respondent found only 5,796 of those documents to be responsive (barely over 3 percent of the production) and returned the rest.
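As a rough illustration of the math behind those figures, the short sketch below computes the responsive share of the production from the numbers reported above, and shows what a 95 percent recall target means using assumed counts of relevant documents (the recall figures are hypothetical, not from the opinion).

```python
# Back-of-the-envelope arithmetic from the figures reported above -- a sketch,
# not the parties' actual metrics.
produced = 180_000          # approximate size of the production set
found_responsive = 5_796    # documents the respondent ultimately kept
responsive_rate = found_responsive / produced
print(f"Responsive share of the production: {responsive_rate:.1%}")  # ~3.2%

# The respondent selected a 95 percent recall target: for every 100 truly
# relevant documents in the collection, the model should capture at least 95.
# Hypothetical illustration with assumed numbers (not from the opinion):
truly_relevant = 6_000
captured = 5_700
recall = captured / truly_relevant
print(f"Recall: {recall:.0%}  (target was 95%)")
```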

On June 17, 2016, the respondent filed a motion to compel production of the documents identified in the Boolean search that were not produced in the production set (the respondent claimed that 1,353 of the 1,645 documents containing those terms had not been produced), asserting that those documents were “highly likely to be relevant.”  Ten days later, the petitioners filed an objection to the respondent’s motion to compel, challenging the respondent’s calculations by noting that only 1,360 documents actually contained those terms, that 440 of those had actually been produced, and that many of the remaining documents predated or postdated the relevant time period.  They also argued that the documents were selected by the predictive coding algorithm based on selection criteria set by the respondent.
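To see why the two sides could arrive at different counts, here is a toy reconciliation in Python using the petitioners’ figures and hypothetical document IDs (not the case data): the documents hitting the terms, minus those already produced, leave the set actually in dispute.

```python
# Toy reconciliation with hypothetical IDs, using the petitioners' figures above.
docs_hitting_terms = {f"DOC-{i:04d}" for i in range(1, 1361)}  # 1,360 hit the terms
already_produced = {f"DOC-{i:04d}" for i in range(1, 441)}     # 440 of those were produced

unproduced_hits = docs_hitting_terms - already_produced
print(f"Hits not yet produced: {len(unproduced_hits)}")  # 920 in this toy example

# In the actual dispute, the petitioners also argued that many of the remaining
# documents fell outside the relevant time period, narrowing the set further.
```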

Judge’s Ruling

Judge Buch noted that “[r]espondent’s motion is predicated on two myths”: 1) the myth that “manual review by humans of large amounts of information is as accurate and complete as possible – perhaps even perfect – and constitutes the gold standard by which all searches should be measured”, and 2) the myth of a perfect response to the respondent’s discovery request, which the Tax Court Rules don’t require.  Judge Buch cited Rio Tinto where Judge Andrew Peck stated:

“One point must be stressed – it is inappropriate to hold TAR [technology assisted review] to a higher standard than keywords or manual review.  Doing so discourages parties from using TAR for fear of spending more in motion practice than the savings from using TAR for review.”

Stating that “[t]here is no question that petitioners satisfied our Rules when they responded using predictive coding”, Judge Buch denied the respondent’s Motion to Compel Production of Documents Containing Certain Terms.

So, what do you think?  If parties agree to the predictive coding process, should they accept the results no matter what?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscovery Daily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Daily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
