Searching

Court Plays Referee in Search Term Dispute Between Parties: eDiscovery Case Law

In Digital Ally, Inc. v. Taser Int’l, Inc., No. 16-cv-2032-CM-TJJ (D. Kan. Sept. 11, 2018), Kansas Magistrate Judge Teresa J. James granted in part and denied in part the defendant’s Motion to Compel ESI Discovery, sustaining in part the plaintiff’s overbreadth and relevance objections to specific defendant ESI Requests and ordering a compromise scope between the defendant’s proposed searches (deemed to be overbroad) and the plaintiff’s proposed searches (most of which were deemed to be too narrow).

Case Background

In this patent infringement case, the parties agreed to limit each party’s email production requests to five custodians and to a total of five search terms per custodian per party.  The parties could not come to an agreement on the scope of four of the five search term Requests in Defendant’s Second Set of Requests for E-Mail Production directed to the plaintiff’s Chief Financial Officer.  The parties subsequently conferred and exchanged proposed revisions to the defendant’s ESI requests.  Unable to reach an agreement, the defendant filed the instant motion; the plaintiff objected, arguing that the terms proposed by the defendant were overbroad.

Judge’s Ruling

In each of the four disputed ESI requests, Judge James sustained the plaintiff’s overbreadth objection, but also rejected or modified the plaintiff’s suggested terms in most cases as too narrow.  Here are the rulings on the four requests:

Request 2:

Defendant proposed:

“Compet!” and (“vievu” or “Safariland” or “watchguard” or “(watch /2 guard)” or “WG” or “ICOP” or “20!” or “vault” or “ivault” or “wireless” or “(wc /2 1000)” or “(wc /2 2000)” or “WMIC” or “MIC” or “extreme” or “TACOM” or “(signal! /2 device)” or “SPPM” or “flex” or “fleet” or “DVM” or “FirstVu” or “microvu” or “vulink” or “FleetVu” or “trigger” or “(auto /2 activat!)” or “strateg!” or “evidenc!” or “hardware” or “software” or “cloud” or “system” or “(law /2 enforcement)” or “military” or “advantage!” or “success!” or “fail!” or “win!” or “los!” or “bid” or “rfp” or “bundl!”)

Plaintiff proposed:

“Compet!” and (“vievu” or “Safariland” or “watchguard” or “(watch /2 guard)” or “WG”)

Judge James ruled: “Defendant’s ESI Request 2 shall be limited insofar as Plaintiff shall not be required to search CFO Heckman’s ESI for the term “Compet!” combined with any of the following other commonly used words: “extreme,” “trigger,” “strateg!,” “evidenc!,” “hardware,” “software” or “cloud” or “system” or “advantage!” or “success!” or “fail!” or “win!” or “los!” or “bid” or “rfp” or “bundl!””

Request 3:

Defendant proposed:

(“Invest!” or “shareholder!” or “stock!” or “Roth” or “Gibson” or “Truelock” or “Searle” or “Fortress” or “FIG” or “Palmer” or “Shtein” or “Eriksmith” or “Aegis” or “Lubitz” or “Rockowitz” or “Fidelity”) AND (“acqui!” or “financ!” or “secur!” or “monet!” or “loan!” or “offer!” or “merg!” or “buy” or “sell” or “valuation!” or “patent!” or “licens!”)

Plaintiff proposed:

(“Invest!” or “shareholder!” or “stock!” or “Roth” or “Gibson” or “Truelock” or “Searle” or “Fortress” or “FIG” or “Palmer” or “Shtein” or “Eriksmith” or “Aegis” or “Lubitz” or “Rockowitz” or “Fidelity”) AND (“vulink” or “DVM” or “FirstVu” or “microvu” or “FleetVu”)

Judge James ruled: “The Court finds Plaintiff’s counterproposal is too restrictive of the terms following the “AND” in Request 3. Instead the Court will limit Request 3 to the following search-term combinations:

(“Roth” or “Gibson” or “Truelock” or “Searle” or “Fortress” or “FIG” or “Palmer” or “Shtein” or “Eriksmith” or “Aegis” or “Lubitz” or “Rockowitz” or “Fidelity”) AND (“acqui!” or “financ!” or “secur!” or “monet!” or “loan!” or “offer!” or “merg!” or “buy” or “sell” or “valuation!” or “patent!” or “licens!”).”

Request 4:

Defendant proposed:

(“cam!” or “!cam” or “product” or “vault” or “ivault” or “DVM” or “FirstVu” or “microvu” or “vulink” or “FleetVu” or “trigger” or “(auto /2 activat!)” or “bundl!”) AND (“financ!” or “net” or “revenue!” or “cost!” or “profit!” or “margin!” or “sale!” or “sell!” or “sold” or “royalt!” or “licens!” or “unit!” or “period” or “quarter” or “annual” or “monet!” or “balance” or “income” or “cash!”)

Plaintiff proposed:

“DVM” or “FirstVu” or “microvu” or “vulink” or “FleetVu” or “VuVault” AND (“financ!” or “net” or “revenue!” or “cost!” or “profit!” or “margin!” or “sale!” or “sell!” or “sold” or “unit!” or “period” or “quarter” or “annual” or “monet!” or “balance” or “income” or “cash!”)

Judge James ruled: “The Court sustains Plaintiff’s overbreadth objections to Defendant’s ESI Request 4 for the terms “product” and “bundl!” combined with generic and commonly used finance and business search terms after the “AND.” The Court finds these search-term combinations are overly broad and Plaintiff shall not be required to search its CFO’s ESI using these search-term combinations. The Court rejects Plaintiff’s counterproposal, but modifies Defendant’s ESI Request 4 so that Plaintiff shall not be required to search Heckman’s ESI for the terms “product” or “bundl!” combined with any terms after the “AND.””

Request 5:

Defendant proposed:

“Patent!” AND (“Cam!” or “!Cam” or “(auto /2 activat!)” or “compet!” or “strateg!” or “(auto /2 activat!)” or “auto-activat!” or “automatic-activat!” or “automatically-activat!” or “auto! activat!” or “9,253,452” or “9253452” or “452” or “vulink”)

Plaintiff: Objected to the proposed connector terms “cam!,” “compet!,” and “strategy”, arguing that the “cam!” term would capture any patent about cameras (far broader than the subject matter of this case) and that the terms “compet!” and “strategy” would return any email mentioning patents and competition or strategy of any type.

Sustaining the plaintiff’s objections that the defendant’s request was overbroad, Judge James ruled: “Defendant’s Request 5 shall be limited to the following search-term combinations:

“Patent!” AND (“(auto /2 activat!)” or “(auto /2 activat!)” or “auto-activat!” or “automatic-activat!” or “automatically-activat!” or “auto! activat!” or “9,253,452” or “9253452” or “452” or “vulink”)”
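
For readers less familiar with this search syntax, the “!” character is a truncation (wildcard) operator and “/2” is a within-two-words proximity operator.  The following is a minimal, hypothetical Python sketch of how a combination like “Compet!” AND (“vievu” or “(watch /2 guard)”) might be approximated with regular expressions; actual review platforms implement these operators natively, and the exact semantics vary by tool.

    import re

    def truncation(term):
        # "compet!" -> any word beginning with "compet"
        return re.compile(r"\b" + re.escape(term.rstrip("!")) + r"\w*", re.IGNORECASE)

    def proximity(a, b, distance=2):
        # "watch /2 guard" -> "watch" within `distance` words of "guard", in either order
        gap = rf"(?:\W+\w+){{0,{distance - 1}}}\W+"
        pat = (rf"\b{re.escape(a)}\w*{gap}{re.escape(b)}\w*|"
               rf"\b{re.escape(b)}\w*{gap}{re.escape(a)}\w*")
        return re.compile(pat, re.IGNORECASE)

    def matches_request_2(text):
        # Approximation of: "Compet!" AND ("vievu" OR "(watch /2 guard)")
        left = truncation("compet!").search(text)
        right = (re.search(r"\bvievu\b", text, re.IGNORECASE)
                 or proximity("watch", "guard").search(text))
        return bool(left and right)

    print(matches_request_2("Our competitive position against Watch Guard is strong."))  # True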

So, what do you think?  When should courts step in and rule on search term disputes between parties?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case opinion link courtesy of eDiscovery Assistant.

Sponsor: This blog is sponsored by CloudNine, which is a data and legal discovery technology company with proven expertise in simplifying and automating the discovery of data for audits, investigations, and litigation. Used by legal and business customers worldwide including more than 50 of the top 250 Am Law firms and many of the world’s leading corporations, CloudNine’s eDiscovery automation software and services help customers gain insight and intelligence on electronic data.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscovery Daily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Daily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Plaintiffs Granted Discovery Extension Due to Defendant’s TAR Review Glitch: eDiscovery Case Law

In the case In Re Domestic Airline Travel Antitrust Litigation, MDL Docket No. 2656, Misc. No. 15-1404 (CKK), (D.D.C. Sept. 13, 2018), District of Columbia District Judge Colleen Kollar-Kotelly granted the Plaintiffs’ Motion for an Extension of Fact Discovery Deadlines for six months (over the defendants’ objections), citing defendant “United’s production of core documents that varied greatly from the control set in terms of the applicable standards for recall and precision and included a much larger number of non-responsive documents than was anticipated” (United’s core production of 3.5 million documents contained only 600,000 documents that were responsive).

Case Background

In this multidistrict class action brought by the plaintiffs (purchasers of air passenger transportation for domestic travel), who alleged that the defendant airlines willingly conspired to engage in unlawful restraint of trade, the plaintiffs filed the instant Motion for Extension of Time to Complete Discovery, requesting a six-month extension predicated on an “issue with United’s ‘core’ document production.”  They asserted that defendant United produced more than 3.5 million [core] documents to the Plaintiffs, but that “due to United’s technology assisted review process (‘TAR’), only approximately 17%, or 600,000, of the documents produced are responsive to Plaintiffs’ requests,” and that the plaintiffs (despite having staffed their discovery review with 70 attorneys) required additional time to sort through them.

Both defendants (Delta and United) opposed the plaintiffs’ request for an extension, questioning whether the plaintiffs had staffed the document review with 70 attorneys and suggesting the Court review the plaintiffs’ counsel’s monthly time sheets to verify that statement.  Delta also questioned why it would take the plaintiffs so long to review the documents and tried to extrapolate how long it would take to review the entire set of documents based on a review rate of 3 documents per minute (an analysis that the plaintiffs called “preposterous”).  United indicated that it engaged “over 180 temporary contract attorneys to accomplish its document production and privilege log process within the deadlines” set by the Court, so the plaintiffs should be expected to engage in the same expenditure of resources.  But, the plaintiffs contended that they “could not have foreseen United’s voluminous document production made up [of] predominantly non-responsive documents resulting from its deficient TAR process when they jointly proposed an extension of the fact discovery deadline in February 2018.”
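
As a purely illustrative back-of-the-envelope calculation (using Delta’s assumed review rate of 3 documents per minute and the 70-attorney staffing figure the plaintiffs cited; the parties’ actual analyses were more involved), the scale of the review looks roughly like this:

    # Illustrative review-time estimate; all figures are assumptions drawn from the parties' arguments.
    documents = 3_500_000      # United's "core" production
    docs_per_minute = 3        # Delta's assumed per-reviewer rate
    reviewers = 70             # plaintiffs' claimed staffing
    hours_per_week = 40

    total_hours = documents / docs_per_minute / 60
    hours_per_reviewer = total_hours / reviewers
    weeks = hours_per_reviewer / hours_per_week

    print(f"Total review hours: {total_hours:,.0f}")          # ~19,444
    print(f"Hours per reviewer: {hours_per_reviewer:,.0f}")   # ~278
    print(f"Calendar weeks at 40 hours/week: {weeks:.1f}")    # ~6.9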

Judge’s Ruling

Judge Kollar-Kotelly noted that “Plaintiffs contend that a showing of diligence involves three factors — (1) whether the moving party diligently assisted the Court in developing a workable scheduling order; (2) that despite the diligence, the moving party cannot comply with the order due to unforeseen or unanticipated matters; and (3) that the party diligently sought an amendment of the schedule once it became apparent that it could not comply without some modification of the schedule.”  She noted that “there is no dispute that the parties diligently assisted the Court in developing workable scheduling orders through their preparation of Joint Status Reports prior to the status conferences in which discovery issues and scheduling were discussed, and in their meetings with the Special Master, who is handling discovery matters in this case.”

Judge Kollar-Kotelly also observed that “United’s core production of 3.5 million documents — containing numerous nonresponsive documents — was unanticipated by Plaintiffs, considering the circumstances leading up to that production” and that “Plaintiffs devoted considerable resources to the review of the United documents prior to filing this motion seeking an extension”.  Finding also that “Plaintiffs’ claim of prejudice in not having the deadlines extended far outweighs any inconvenience that Defendants will experience if the deadlines are extended”, Judge Kollar-Kotelly found “that Plaintiffs have demonstrated good cause to warrant an extension of deadlines in this case based upon Plaintiffs’ demonstration of diligence and a showing of nominal prejudice to the Defendants, if an extension is granted, while Plaintiffs will be greatly prejudiced if the extension is not granted.”  As a result, she granted the requested extension.

So, what do you think?  Was the court right to have granted the extension?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case opinion link courtesy of eDiscovery Assistant.

Also, if you’re going to be in Houston on Thursday, September 27, just a reminder that I will be speaking at the second annual Legal Technology Showcase & Conference, hosted by the Women in eDiscovery (WiE), Houston Chapter, South Texas College of Law and the Association of Certified E-Discovery Specialists (ACEDS).  I’ll be part of the panel discussion AI and TAR for Legal: Use Cases for Discovery and Beyond at 3:00pm and CloudNine is also a Premier Platinum Sponsor for the event (as well as an Exhibitor, so you can come learn about us too).  Click here to register!

Survey Says! Predictive Coding Technologies and Protocols Survey Results: eDiscovery Trends

Last week, I discussed the predictive coding survey that Rob Robinson was conducting on his Complex Discovery site (along with an overview of key predictive coding-related terms).  The results are in and here are some of the findings.

As Rob notes in the results post here, the Predictive Coding Technologies and Protocols Survey was initiated on August 31 and concluded on September 15.  It’s a non-scientific survey of data discovery and legal discovery professionals within the eDiscovery ecosystem, designed to provide a general understanding of the use of predictive coding technologies and protocols, with two primary educational objectives:

  • To provide a consolidated listing of potential predictive coding technology and protocol definitions. While not all-inclusive or comprehensive, the listing was vetted with selected industry predictive coding experts for completeness and accuracy, so it should be useful for educational efforts.
  • To ask eDiscovery ecosystem professionals about their usage and preferences of predictive coding platforms, technologies, and protocols.

There were 31 total respondents in the survey.  Here are some of the more notable results:

  • More than 80% of responders (80.64%) shared that they did have a specific primary platform for predictive coding versus just under 20% (19.35%), who indicated they did not.
  • There were 12 different platforms noted as primary predictive platforms by responders, but only three platforms received more than one vote and they accounted for more than 50% of responses (61%).
  • Active Learning was the most used predictive coding technology, with more than 70% of responders (70.96%) reporting that they use it in their predictive coding efforts.
  • Just over two-thirds of responders (67.74%) use more than one predictive coding technology in their predictive coding efforts, while just under one-third (32.25%) use only one.
  • Continuous Active Learning (CAL) was (by far) the most used predictive coding protocol, with more than 87% of responders (87.09%) reporting that they use it in their predictive coding efforts.

Rob has reported several other results and provided graphs for additional details.  To check out all of the results, click here.

So, what do you think?  Do any of the results surprise you?  Please share any comments you might have or if you’d like to know more about a particular topic.

Image Copyright (C) FremantleMedia North America, Inc.

Houston, We Have an Adverse Inference Finding: eDiscovery Case Law

In Hernandez, et al. v. City of Houston, No. 4:16-CV-3577 (S.D. Tex. Aug. 30, 2018), Texas District Judge Kenneth M. Hoyt, finding that the defendant “intentionally destroyed” evidence by wiping the hard drives of several custodians no longer employed by the City, determined “that entering an adverse inference finding is appropriate” against the defendant.

Case Background

In this case regarding alleged illegal detainment of the plaintiffs in the City’s jail, where each plaintiff contends that he was held for more than 48 hours without a judicial determination of probable cause, the Court entered an agreed ESI order in November 2017 that promoted cooperation between the parties (including agreement on search terms) and designated thirteen specific custodians whose records the plaintiffs were seeking.  Weeks after the ESI Order, the defendant had still not supplemented missing metadata from an earlier production to bring that production into compliance with the Court’s Order and, after several meet and confers by phone, defendant’s counsel requested an in-person meeting.

On December 13, 2017, during that in-person meeting, the defendant represented that (i) it had not interviewed any of the custodians listed in the ESI Order, (ii) it had not collected documents from any of the custodians listed in the ESI Order and (iii) it had “wiped” the hard drives of six of those custodians no longer employed by the defendant.  At that meeting, the plaintiffs offered to provide names of vendors to help with document processing and review and offered to pay a substantial portion, if not all, of the costs that might be incurred. The defendant refused this offer and missed its December 15, 2017 deadline to certify document production was complete.

In January 2018, the defendant represented that it had collected 72,000 documents, but had yet to review them, despite the passage of the discovery deadline. By February 28, 2018, when the plaintiffs moved to compel production, the defendant had only produced 126 files from the Mayor’s office – all of which were unresponsive to the plaintiffs’ document requests.  In April 2018, the defendant claimed it had collected 2.6 million documents by running “word searches based on the ESI Protocol” and it would take 17,000 hours to review all of those documents.  Based on these representations, the plaintiffs agreed to provide a narrower set of search terms.  On April 10, 2018, the Court ordered the defendant to “produce all non-privileged documents responsive to the plaintiffs’ requests for production nos. 1-4, 8 and 9 in accordance with the Court’s November 8, 2017, ESI Order” and also notified the defendant that “[f]ailure to comply with this Order will result in sanctions, including but not limited to monetary sanctions and an adverse inference instruction”.

When the defendant ran the plaintiffs’ narrowed search terms, it retrieved 48,976 documents.  However, it then proceeded to unilaterally apply its own search terms, which retrieved 9,992 documents, which were reviewed for responsiveness.  The defendant produced only 368 responsive documents in response to the April 10 court order.

Judge’s Ruling

With regard to the wiped drives for the six custodians no longer employed by the defendant, Judge Hoyt stated: “Those hard drives contained ESI that should have been preserved by the City as soon as it anticipated litigation, and definitely after the instant lawsuit was filed. The City acknowledged its “clear obligation” to preserve all responsive documents after the litigation was pending. Yet the City failed to take reasonable steps to preserve the data on the hard drives and intentionally wiped the drives. The Court determines that the information on the hard drives cannot be restored or replaced through additional discovery.”

Judge Hoyt also found that the defendant had “Made Misrepresentations to the Court About Its Flawed Discovery Process”, indicating that it: 1) “represented that it needed to review 2.6 million documents”, 2) “did not review the 78,702 documents generated by the plaintiff’s April 2018 search terms”, 3) “represented that it had issued a litigation hold” and 4) “obfuscated the status of the hard drives”.

As a result, Judge Hoyt ruled, as follows:

“Federal Rule of Civil Procedure 37(b)(2) provides that an order establishing contested facts as true is an appropriate remedy when a party violates a discovery order. See Rule 37(b)(2)(i)-(ii). This type remedy cures the violation without inflicting additional costs on the parties, and for that reason, the Court determines, in its discretion that entering an adverse inference finding is appropriate…

Therefore, the Court HOLDS that the following inference is appropriate based on the City’s conduct:

It is established that (a) throughout the class period, the City of Houston had a policy of not releasing warrantless arrestees who had not received neutral determinations of probable cause within the constitutionally required period of time; (b) throughout the class period, the City’s policymakers were aware of this policy; and (c) the City’s policymakers acted with deliberate indifference to the unconstitutional policy and the constitutional violations that resulted.”

So, what do you think?  Was the adverse inference sanction appropriate in this case?  Please share any comments you might have or if you’d like to know more about a particular topic.

If You’re an eDiscovery Professional Interested in Predictive Coding, Here is a Site You May Want to Check Out: eDiscovery Trends

On his Complex Discovery site, Rob Robinson does a great job of analyzing trends in the eDiscovery industry and often uses surveys to gauge sentiment within the industry for things like industry business confidence.  Now, Rob is providing an overview and conducting a survey regarding predictive coding technologies and protocols for representatives of leading eDiscovery providers that should prove interesting.

On his site at Predictive Coding Technologies and Protocols: Overview and Survey, Rob notes that “it is increasingly more important for electronic discovery professionals to have a general understanding of the technologies that may be implemented in electronic discovery platforms to facilitate predictive coding of electronically stored information.”  To help with that, Rob provides working lists of predictive coding technologies and TAR protocols that are worth a review.

You probably know what Active Learning is.  Do you know what Latent Semantic Analysis is? What about Logistic Regression?  Or a Naïve Bayesian Classifier?  If you don’t, Rob discusses definitions for these different types of predictive coding technologies and others.
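
As a concrete (and intentionally tiny) illustration of one of those technologies, the sketch below trains a Naïve Bayesian classifier on a handful of hand-coded example documents and predicts whether a new document is responsive.  It uses scikit-learn and invented sample text purely to show the flavor of the technique; it is not a representation of how any particular review platform implements predictive coding.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical seed set: a few documents already coded by human reviewers.
    train_docs = [
        "license agreement for the patented auto activation feature",
        "pricing and royalty discussion for the camera product line",
        "lunch plans for Friday",
        "company picnic signup sheet",
    ]
    labels = ["responsive", "responsive", "non-responsive", "non-responsive"]

    # Bag-of-words features feeding a multinomial Naive Bayes model.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_docs, labels)

    new_doc = ["draft royalty terms for the auto activation patent license"]
    print(model.predict(new_doc))        # likely ['responsive']
    print(model.predict_proba(new_doc))  # class probabilities, useful for ranking documents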

Then, Rob also provides a list of general TAR protocols that includes Simple Passive Learning (SPL), Simple Active Learning (SAL), Continuous Active Learning (CAL) and Scalable Continuous Active Learning (S-CAL), as well as the Hybrid Multimodal Method used by Ralph Losey.

Rob concludes with a link to a simple three-question survey designed to help electronic discovery professionals identify the specific machine learning technologies and protocols used by eDiscovery providers in delivering the technology-assisted review feature of predictive coding.  It literally takes 30 seconds to complete.  To find out the questions, you’ll have to check out the survey.  ;o)

So far, Rob has received 19 responses (mine was one of those).  It will be interesting to see the results when he closes the survey and publishes the results.

So, what do you think?  Are you an expert in predictive coding technologies and protocols?  Please share any comments you might have or if you’d like to know more about a particular topic.

Court Sides with Plaintiff’s Proposal, Orders Random Sample of the Null Set: eDiscovery Case Law

In City of Rockford v. Mallinckrodt ARD Inc., No. 17 CV 50107, No. 18 CV 379 (N.D. Ill. Aug. 7, 2018), Illinois Magistrate Judge Iain D. Johnston adopted the parties’ proposed order establishing the production protocol for ESI with the inclusion of the plaintiffs’ proposal that a random sample of the null set will occur after the production and that any responsive documents found as a result of that process will be produced.

Case Background

In this case involving alleged breach of contract, racketeering and antitrust violations related to the defendant’s prescription medication, the parties agreed on several aspects of discovery, including a plan to use keyword searching and a protocol for agreeing on search terms, date restrictions, and custodian restrictions.  The protocol also addressed the steps to be taken if a party were to dispute a specific term as being overly broad, with the producing party to review a statistically valid sample of documents to determine if the term is returning mostly responsive documents, followed by negotiation as to any modifications to the term, with a plan to submit to the Court if they could not agree.

However, the parties could not agree on what to do after the production.  The defendants proposed that if “the requesting party reasonably believes that certain categories of requested documents exist that were not included in the production, the parties will meet and confer to discuss whether additional terms are necessary.”  On the other hand, the plaintiffs proposed a random sample of the null set (the documents not returned via search), with the following specific provision:

“The producing party agrees to quality check the data that does not hit on any terms (the Null Set) by selecting a statistically random sample of documents from the Null Set. The size of the statistically random sample shall be calculated using a confidence level of 95% and a margin of error of 2%. If responsive documents are found during the Null Set review, the producing party agrees to produce the responsive documents separate and apart from the regular production. The parties will then meet and confer to determine if any additional terms, or modifications to existing terms, are needed to ensure substantive, responsive documents are not missed.”

Judge’s Ruling

While noting that “the parties have agreed to use key word searching”, Judge Johnston evaluated the “pros and cons” of keyword searching as compared to technology assisted review (TAR), but ultimately decided that he “will not micromanage the litigation and force TAR onto the parties.”

As for the proposal in dispute, Judge Johnston ruled that sampling the null set is reasonable under Rule 26(g), stating that “Defendants provide no reason establishing that a random sampling of the null set cannot be done when using key word searching. Indeed, sampling the null set when using key word searching provides for validation to defend the search and production process, and was commonly used before the movement towards TAR.”

Judge Johnston also ruled that sampling the null set is proportionate under Rule 26(b)(1), stating: “The Court’s experience and understanding is that a random sample of the null set will not be unreasonably expensive or burdensome. Moreover and critically, Defendants have failed to provide any evidence to support their contention…Indeed, the Court’s experience and understanding is that the random sample will not be voluminous in the context of a case of this magnitude.”  Judge Johnston also cited the issues at stake, the potential amount in controversy, asymmetrical discovery (with the defendants having access to the vast majority of the relevant information), the “substantial resources” of the defendant and that “the burden and expense of a random sampling of the null set does not outweigh its likely benefit of ensuring proper and reasonable – not perfect – document disclosure” all as reasons as to why sampling was proportionate in this case.

As a result, Judge Johnston ordered a random sample of the null set, determining that “Plaintiffs’ proposed 95% confidence level with +/-margin of 2% is acceptable.”

Editor’s Note: It’s worth noting that if you plug the proposed confidence level and margin of error into the Raosoft sample size calculator, you get no more than 2,401 documents that need to be sampled — even if the size of the null set is as large as 10 million documents.  Conducting a random sample is one of the most proportionate activities associated with eDiscovery review.
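
For those who want to check the math, the required sample size can be computed directly from the standard formula with a finite population correction (the same general approach calculators like Raosoft use).  A minimal sketch, assuming a worst-case 50% response distribution:

    import math

    def sample_size(population, z=1.960, margin=0.02, p=0.5):
        """Sample size for a given confidence level (z-score) and margin of error,
        with a finite population correction."""
        n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size (2,401 at 95% / 2%)
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    for null_set in (50_000, 500_000, 10_000_000):
        print(f"{null_set:,} documents in the null set -> sample {sample_size(null_set):,}")
    # 50,000 -> 2,292; 500,000 -> 2,390; 10,000,000 -> 2,401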

So, what do you think?  Should random sampling of the null set always be required in cases like this to help confirm a comprehensive search result?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case opinion link courtesy of eDiscovery Assistant.

TAR Rules for the New York Commercial Division: eDiscovery Trends

File this one under stories I missed until yesterday.  We’ve seen plenty of cases where the use of Technology Assisted Review (TAR) has been approved and even one this year where a protocol for TAR was ordered by the court.  But, here is a case of a jurisdiction that has proposed and adopted a rule to encourage use of the most efficient means to review documents, including TAR.

As reported in the New York Law Journal (NY Commercial Division Gives Fuller Embrace to E-Discovery Under New Rule, written by Andrew Denney), the New York Commercial Division has adopted a new rule to support the use of technology-assisted document review in appropriate cases.

As the author notes, plenty of commercial litigants are already using technology to help them breeze through potentially labor-intensive tasks such as weeding out irrelevant documents via predictive coding or threading emails for easier reading.  But unlike the U.S. District Court for the Southern District of New York, which has developed a substantial volume of case law bringing eDiscovery proficiency to the bar (much of it authored by recently retired U.S. Magistrate Judge Andrew Peck), New York state courts have provided little guidance on the topic.

Until now.  The new rule, proposed last December by the Commercial Division Advisory Council and approved last month by Lawrence Marks, the state court system’s chief administrative judge and himself a former Commercial Division jurist, would fill the gap in the rules, said Elizabeth Sacksteder, a Paul, Weiss, Rifkind, Wharton & Garrison partner and member of the advisory council.  That rule, to be incorporated as a subpart of current Rule 11-e of the Rules of the Commercial Division, reads as follows:

The parties are encouraged to use the most efficient means to review documents, including electronically stored information (“ESI”), that is consistent with the parties’ disclosure obligations under Article 31 of the CPLR and proportional to the needs of the case.  Such means may include technology-assisted review, including predictive coding, in appropriate cases.

Muhammad Faridi, a commercial litigator and a partner at Patterson Belknap Webb & Tyler, said that using technology-assisted review is nothing new to most practitioners in the Commercial Division, but it is “revolutionary” for the courts to adopt a rule encouraging its use.  Maybe so!

So, what do you think?  Are you aware of any other rules out there supporting or encouraging the use of TAR?  If so, let us know about them!  And, as always, please share any comments you might have or if you’d like to know more about a particular topic.

Court Denies Defendant’s Motion for Protective Order in Broiler Chicken Case: eDiscovery Case Law

In the In re Broiler Chicken Antitrust Litigation, No. 16 C 8637 (N.D. Ill. July 26, 2018), Illinois Magistrate Judge Jeffrey T. Gilbert denied defendant Agri Stats’ Motion for Protective Order, ruling that the defendant “Has Not Made a Threshold Showing” that the information requested by the End User Consumer Plaintiffs (“EUCPs”) was not reasonably accessible because of undue burden or cost (and, even if it had, the EUCPs showed good cause for requesting custodial searches of ESI throughout the time frame set forth in the ESI Protocol) and that Agri Stats “Does Not Satisfy the Rule 26(b)(2)(C) Factors” to limit discovery.

Case Background

Prior to this class action lawsuit involving broiler chicken prices, defendant Agri Stats was the subject of a DOJ investigation and claimed it “searched for and produced to the DOJ documents and information like what the EUCPs are requesting”.  Agri Stats ran custodial searches for designated custodians for the period between September 17, 2008 through September 17, 2010, and it produced to the DOJ responsive documents it collected with those searches. But, the time frame for discovery in this case was much broader, extending from January 1, 2007 until September 2, 2016.

Agri Stats argued that it should not be required to run custodial searches of ESI created prior to October 3, 2012 (the date the DOJ investigation closed) for the agreed upon 12 custodians because it ran similar searches for most of those custodians during the DOJ investigation and “requiring it to re-run expensive searches with the EUCPs’ search terms for those same custodians for a broader time period than it already ran is burdensome, disproportionate to the needs of this case, and unreasonable when viewed through the filter of Federal Rule of Civil Procedure 26(b)(2).”

The EUCPs disagreed and contended that Agri Stats should be required, like every other Defendant in this case, to perform the requested searches with the EUCPs’ proposed search terms for the time frame stated in the ESI Protocol, contending that both were broader than what Agri Stats produced for the DOJ investigation.

Judge’s Ruling

Considering the arguments, Judge Gilbert stated:

“The Court agrees with EUCPs. Although Agri Stats conducted custodial searches for a limited two-year period in connection with the DOJ’s investigation of possible agreements to exchange competitively sensitive price and cost information in the broiler, turkey, egg, swine, beef and dairy industries, that investigation focused on different conduct than is at the heart of EUCPs’ allegations in this case, which cover a broader time period than was involved in the DOJ’s investigation. The Court finds that a protective order is not warranted under these circumstances.”

While noting that “Agri Stats says that it already has produced in this case more than 296,000 documents, including approximately 155,000 documents from before October 2012” and that “Agri Stats represents that the estimated cost to run the custodial searches EUCPs propose and to review and produce the ESI is approximately $1.2 to $1.7 million”, Judge Gilbert observed that the “estimated cost, however, is not itemized nor broken down for the Court to understand how it was calculated”.  Judge Gilbert also noted that “EUCPs say they already have agreed, or are working towards agreement, that 2.5 million documents might be excluded from Agri Stats’s review. That leaves approximately 520,000 documents that remain to be reviewed. In addition, EUCPs say they have provided to Agri Stats revised search terms, but Agri Stats has not responded.”

As a result, Judge Gilbert determined that “Agri Stats falls woefully short of satisfying its obligation to show that the information EUCPs are seeking is not reasonably accessible because of undue burden or cost.”  In denying the defendant’s motion, he also ruled that “Even if Agri Stats Had Shown Undue Burden or Cost, EUCPs Have Shown Good Cause for the Production of the Requested ESI and Agri Stats Does Not Satisfy the Rule 26(b)(2)(C) Factors”.

So, what do you think?  Could the defendant have done a better job of showing undue burden and cost?  Please share any comments you might have or if you’d like to know more about a particular topic.

Don’t Be “Chicken”! Consider Having a Good Protocol for Handling eDiscovery: eDiscovery Case Week

We’re getting a head start on next week’s shark week, er, case week here on the blog!  We’re catching up on a couple of cases leading up to our webcast on Wednesday, where Tom O’Connor and I will be talking about key eDiscovery case law for the first half of 2018.  So, let’s discuss the most notable search methodology order having to do with broiler chicken litigation ever!

In the In re Broiler Chicken Antitrust Litigation, No. 1:16-cv-08637 (N.D. Ill. Jan. 3, 2018), Illinois Magistrate Judge Jeffrey Gilbert appointed a special master (noted Technology Assisted Review expert Maura Grossman) to help the parties resolve eDiscovery disputes.  Judge Gilbert and Special Master Grossman issued a very detailed procedure (Order Regarding Search Methodology for Electronically Stored Information) for how the parties were to conduct TAR, including search, validation and document sourcing approaches, split into three primary sections: (1) how the parties will act, (2) what search technologies will be used, and (3) an outline of a document review validation protocol.

In this class action lawsuit filed in September 2016, the plaintiffs alleged that companies in the broiler chicken industry were colluding to limit the supply of chickens to raise, by almost 50%, the prices consumers would need to pay for chicken.  In February 2017, the plaintiffs filed their first set of requests for production.  With 3 putative plaintiff classes, nearly 30 defendants, multiple theories of liability, and activity covering close to ten years in a $20 billion-plus industry, Judge Gilbert appointed Special Master Grossman in October 2017 to address and resolve disputes regarding eDiscovery, which led to this order right after the first of the year.

The order set forth expectations with regard to:

  1. Transparency and the use of culling technologies prior to search, including de-duplication, email threading, email domains, targeted collections, exception reporting and other culling;
  2. Search methods, divided into “TAR/CAL” (Technology Assisted Review/Continuous Active Learning) and Keyword Search Processes;
  3. Document review validation protocol involving specifications for QC sampling, regardless of whether TAR or “exhaustive manual review” was used.

The Order also included an appendix, detailing the recall estimation method for a review process involving TAR as well as the method for manual review.
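
The appendix spells out the specific procedure adopted in the order; as a generic illustration of the underlying idea, recall can be estimated by comparing the responsive documents the review found with an estimate of the responsive documents left behind, derived from a random sample of the discard pile.  A minimal sketch with entirely hypothetical numbers (the actual protocol in the order differs in its details):

    # Generic recall estimate from a random ("elusion") sample of the discard pile.
    # All figures below are hypothetical.
    produced_responsive = 40_000   # responsive documents found and produced by the review
    discard_pile_size = 900_000    # documents the process set aside as non-responsive
    sample_size = 2_401            # random sample drawn from the discard pile
    responsive_in_sample = 24      # responsive documents found in that sample

    elusion_rate = responsive_in_sample / sample_size
    estimated_missed = elusion_rate * discard_pile_size
    estimated_recall = produced_responsive / (produced_responsive + estimated_missed)

    print(f"Elusion rate: {elusion_rate:.2%}")                                # ~1.00%
    print(f"Estimated responsive documents missed: {estimated_missed:,.0f}")  # ~8,996
    print(f"Point estimate of recall: {estimated_recall:.1%}")                # ~81.6%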

As a model protocol covering not only how to conduct TAR and/or keyword search but also how to manage eDiscovery in general, this is a terrific order that will certainly be referenced for some time to come.

So, what do you think?  Have you been involved in a case where the court ordered a protocol for managing eDiscovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

Court Rules Search Terms Overly Broad Under Rule 26 in Convertible Top Patent Case: eDiscovery Case Law

In Webastro Thermo & Comfort v. BesTop, Inc., No. 16-13456 (E.D. Mich. June 29, 2018), Michigan Magistrate Judge R. Steven Whalen ruled in favor of the plaintiff’s motion for a protective order, which requested the narrowing of search terms for ESI production in this patent dispute.

Case Background

The plaintiff manufactures an automobile roof and also a roof-opening mechanism for which it has a patent, and claimed that the defendant manufactures a roof-opening mechanism under the name “Sunrider for Hartop” that infringes on its patent. The defendant contended that its Sunrider product is based on prior art, invalidating the plaintiff’s patent.

The defendant requested emails during discovery, but the plaintiff claimed that the emails generated and received by these companies were voluminous and many would encompass matters having nothing to do with this lawsuit. The ESI order from the court contemplated that search terms should be narrowed to exclude extraneous and irrelevant information and that production requests should be limited to eight key custodians and ten search terms on each side.

However, the plaintiff contended that the defendant’s proposed search terms were “overbroad, indiscriminate, and contrary to BesTop’s obligations under the Court’s ESI Order,” and despite pre-motion communication between counsel, the parties were at an impasse, leading to the plaintiff seeking a protective order “sparing Webasto from unduly burdensome email discovery, until such time as BesTop propounds reasonable email search requests containing appropriate narrowing criteria.”  The plaintiff also requested an order requiring the defendant to cover costs associated with the plaintiff’s production.

Judge’s Ruling

Judge Whalen stated in his discussion, “The majority of BesTop’s search terms are overly broad, and in some cases violate the ESI Order on its face. For example, the terms ‘throwback’ and ‘swap top’ refer to Webasto’s product names, which are specifically excluded under…the ESI Order. The overbreadth of other terms is obvious, especially in relation to a company that manufactures and sells convertible tops: ‘top,’ ‘convertible,’ ‘fabric,’ ‘fold,’ ‘sale or sales. Using ‘dwg’ as an alternate designation for ‘drawing’ (which is itself a rather broad term) would call into play files with common file extension .dwg.”

Judge Whalen continued: “Apart from the obviously impermissible breadth of BesTop’s search terms, their overbreadth is borne out by Mr. Carnevale’s [plaintiff’s attorney] declarations, which detail a return of multiple gigabytes of ESI potentially comprising tens of millions of pages of documents, based on only a partial production. In addition, the search of just the first 100 records produced using BesTop’s search terms revealed that none were related to the issues in this lawsuit.  Contrary to BesTop’s contention that Webasto’s claim of prejudice is conclusory, I find that Webasto has sufficiently ‘articulate[d] specific facts showing clearly defined and serious injury resulting from the discovery sought….’”
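
As a side note on the statistics behind that “first 100 records” observation: finding zero relevant documents in a sample of 100 does not prove the responsiveness rate is zero, but the well-known “rule of three” gives a rough upper bound.  A quick sketch (noting that the first 100 records produced were apparently not a random sample, so this is only indicative):

    # "Rule of three": if 0 of n randomly sampled documents are responsive, the upper end of a
    # one-sided 95% confidence interval on the responsiveness rate is approximately 3/n.
    n = 100
    upper_bound = 3 / n
    print(f"0 responsive in {n} sampled -> responsiveness rate likely below {upper_bound:.0%}")  # ~3%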

Counsel for the parties was ordered to meet and confer in order to show a good-faith effort in focusing and narrowing the defendant’s search terms, so that the plaintiff’s production of ESI would remain relevant within the meaning of Rule 26 and exclude ESI that would have no relationship to this case.  The defendant was also ordered to submit an amended discovery request with the narrowed search terms within 14 days, after which, a new deadline for production of the ESI would be determined.

Because the opportunity was granted to the defendant to reformulate its discovery request to conform to the ESI Order, the plaintiff’s request for cost-shifting was denied, but Judge Whalen indicated the court “may reconsider” if the defendant “does not reasonably narrow its requests”.

So, what do you think? Is this ruling within the correct interpretation of proportionality under FRCP 26? Please share any comments you might have or if you’d like to know more about a particular topic.

P.S. – The case style refers to the plaintiff as “Webastro”, while the body of the order correctly refers to the plaintiff as “Webasto”.

Case opinion link courtesy of eDiscovery Assistant.
