
eDiscovery Trends: EDRM and Statistical Sampling

 

I’ve been proud to be a member of the Electronic Discovery Reference Model (EDRM) for the past six years (all but its first year), and I’m always keen to report on the activities and accomplishments of the various working groups within EDRM.  Since this blog was founded, we’ve reported on 1) the unveiling of the EDRM Data Set, which has become a standard for useful eDiscovery test and demo data, 2) the EDRM Metrics Privilege Survey (which I helped draft), designed to collect typical volumes and percentages of privileged documents throughout the industry, 3) the Model Code of Conduct, which focuses on the ethical duties of eDiscovery service providers, and 4) the collaboration between EDRM and ARMA and their subsequent joint Information Governance white paper.  EDRM’s latest announcement, made yesterday, is a new guide, Statistical Sampling Applied to Electronic Discovery, which is now available for review and comment.

As EDRM notes in their announcement, “The purpose of the guide is to provide guidance regarding the use of statistical sampling in e-discovery contexts. Most of the material is definitional and conceptual, and is intended for a broad audience. The later material and the accompanying spreadsheet provide additional information, particularly technical information, to people in e-discovery roles who become responsible for developing further expertise in this area.”

The Guide comprises six sections, as follows:

  1. Introduction: Covers basic concepts and definitions, previews mathematical techniques to be discussed in more detail in subsequent sections, identifies potential eDiscovery situations where sampling techniques may be useful, and notes areas not covered in this initial guide.
  2. Estimating Proportions within a Binary Population: Provides some common sense observations as to why sampling is useful, along with a straightforward explanation of statistical terminology and the interdependence of sample size, margin of error/confidence range and confidence level.
  3. Guidelines and Considerations: Provides guidelines for effective statistical sampling, such as cull prior to sampling, account for family relationships, simple vs. stratified random sampling and use of sampling in machine learning, among others.
  4. Additional Guidance on Statistical Theory: Covers mathematical concepts such as binomial distribution, hypergeometric distribution, and normal distribution.  Bring your mental “slide-rule”!
  5. Examples Using the Accompanying Excel Spreadsheet: Describes an attached workbook (EDRM Statistics Examples 20120427.xlsm) that contains six sheets that include a notes section as well as basic, observed and population normal approximation models and basic and observed binomial methods to assist in learning these different sampling methods.
  6. Validation Study: References a Daegis article that provides an empirical study of sampling in the eDiscovery context.  In addition to that article, consider reading our previous posts on determining an appropriate sample size to test your search, how to generate a random selection and a practical example to test your search using sampling.
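For readers who want to see the section 2 relationship in action, here’s a minimal Python sketch (my own illustration, not taken from the guide or its spreadsheet) of the standard normal-approximation formula relating confidence level and margin of error to sample size:

```python
import math
from statistics import NormalDist

def required_sample_size(confidence: float, margin_of_error: float,
                         expected_proportion: float = 0.5) -> int:
    """Sample size needed to estimate a proportion in a binary population
    (e.g., responsive vs. non-responsive documents), using the normal
    approximation for an effectively infinite population."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-tailed z-score
    p = expected_proportion  # 0.5 is the most conservative assumption
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# 95% confidence, +/- 5% margin of error -> the familiar 385 documents
print(required_sample_size(0.95, 0.05))   # 385
# Tightening to 99% confidence, +/- 2% grows the sample quickly
print(required_sample_size(0.99, 0.02))   # 4147
```

Note how the sample size depends on the confidence level and margin of error but (for large collections) not on the size of the collection itself, which is a big part of why sampling is so cost-effective in eDiscovery.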

Comments can be posted at any of the EDRM Statistical Sampling pages, or emailed to the group at mail@edrm.net.  As a big proponent of statistical sampling as an effective and cost-effective method for verifying results, I’m very interested to see where this guide goes and how people will use it.  BTW, EDRM’s Annual Kickoff Meeting is next week (May 16 and 17) in St. Paul, MN – it’s not too late to become a member and help shape the future of eDiscovery with other industry leaders!

So, what do you think?  Do you perform statistical sampling to verify results within your eDiscovery process?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery BREAKING Case Law: Judge Carter Upholds Judge Peck’s Predictive Coding Order

A few weeks ago, in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012), Magistrate Judge Andrew J. Peck of the U.S. District Court for the Southern District of New York issued an opinion making it likely the first case in which a court approved the use of computer-assisted review of electronically stored information (“ESI”).  However, on March 13, District Court Judge Andrew L. Carter, Jr. granted plaintiffs’ request to submit additional briefing on their February 22 objections to the ruling.  In that briefing (filed on March 26), the plaintiffs claimed that the protocol approved for predictive coding “risks failing to capture a staggering 65% of the relevant documents in this case” and questioned Judge Peck’s relationship with defense counsel and with the selected vendor for the case, Recommind.  Then, on April 5, Judge Peck issued an order in response to Plaintiffs’ letter requesting his recusal, directing plaintiffs to indicate whether they would file a formal motion for recusal or ask the Court to consider the letter as the motion.  On April 13 (Friday the 13th, that is), the plaintiffs did just that, formally requesting the recusal of Judge Peck.

Now, on April 25 (signed two days ago and filed yesterday), Judge Carter has issued an opinion and order in the case, upholding Judge Peck’s opinion approving computer-assisted review.  In the opinion and order, Judge Carter noted:

“[T]he Court adopts Judge Peck’s rulings because they are well reasoned and they consider the potential advantages and pitfalls of the predictive coding software. The Court has thoroughly reviewed the ESI protocol along with the parties’ submissions.  At the outset, the Court notes that Plaintiffs and Judge Peck disagree about the scope of Plaintiffs’ acquiescence concerning the use of the method. Judge Peck’s written order states that Plaintiffs have consented to its use, (Opinion and Order at 17 (“The decision to allow computer-assisted review in this case was relatively easy – the parties agreed to its use (although disagreed about how best to implement such review.”))), while Plaintiffs argue that Judge Peck’s order mischaracterizes their position (Pl. Reply, dated March 19, 2012, at 4-5). Nevertheless, the confusion is immaterial because the ESI protocol contains standards for measuring the reliability of the process and the protocol builds in levels of participation by Plaintiffs. It provides that the search methods will be carefully crafted and tested for quality assurance, with Plaintiffs participating in their implementation. For example, Plaintiffs’ counsel may provide keywords and review the documents and the issue coding before the production is made. If there is a concern with the relevance of the culled documents, the parties may raise the issue before Judge Peck before the final production. Further, upon the receipt of the production, if Plaintiffs determine that they are missing relevant documents, they may revisit the issue of whether the software is the best method. At this stage, there is insufficient evidence to conclude that the use of the predictive coding software will deny Plaintiffs access to liberal discovery. “

“Plaintiffs’ arguments concerning the reliability of the method are also premature. It is difficult to ascertain that the predictive software is less reliable than the traditional keyword search. Experts were present during the February 8 conference and Judge Peck heard from these experts. The lack of a formal evidentiary hearing at the conference is a minor issue because if the method appears unreliable as the litigation continues and the parties continue to dispute its effectiveness, the Magistrate Judge may then conduct an evidentiary hearing. Judge Peck is in the best position to determine when and if an evidentiary hearing is required and the exercise of his discretion is not contrary to law. Judge Peck has ruled that if the predictive coding software is flawed or if Plaintiffs are not receiving the types of documents that should be produced, the parties are allowed to reconsider their methods and raise their concerns with the Magistrate Judge. The Court understands that the majority of documentary evidence has to be produced by MSLGroup and that Plaintiffs do not have many documents of their own. If the method provided in the protocol does not work or if the sample size is indeed too small to properly apply the technology, the Court will not preclude Plaintiffs from receiving relevant information, but to call the method unreliable at this stage is speculative.”

“There simply is no review tool that guarantees perfection. The parties and Judge Peck have acknowledged that there are risks inherent in any method of reviewing electronic documents. Manual review with keyword searches is costly, though appropriate in certain situations. However, even if all parties here were willing to entertain the notion of manually reviewing the documents, such review is prone to human error and marred with inconsistencies from the various attorneys’ determination of whether a document is responsive. Judge Peck concluded that under the circumstances of this particular case, the use of the predictive coding software as specified in the ESI protocol is more appropriate than keyword searching. The Court does not find a basis to hold that his conclusion is clearly erroneous or contrary to law. Thus, Judge Peck’s orders are adopted and Plaintiffs’ objections are denied.”

So, what do you think?  Will this settle the issue?  Or will the plaintiffs attempt another strategy to derail the approved predictive coding plan?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Case Law: Is the Third Time the Charm for Technology Assisted Review?

 

A few weeks ago, in Da Silva Moore v. Publicis Groupe & MSL Group, Magistrate Judge Andrew J. Peck issued an opinion making it likely the first case in which a court approved the use of computer-assisted review of electronically stored information (“ESI”).  Or so we thought.  Now, the plaintiffs have objected to the plan and even formally requested the recusal of Judge Peck.  Meanwhile, in Kleen Products LLC v. Packaging Corporation of America, et al., the plaintiffs have asked Magistrate Judge Nan Nolan to require the producing parties to employ a technology assisted review approach (referred to as "content-based advanced analytics," or CBAA) in their production of documents for discovery purposes, and that request is currently being considered.  And now there’s a third case, where the use of technology assisted review has actually been approved in an order by the judge.

In Global Aerospace Inc., et al, v. Landow Aviation, L.P. dba Dulles Jet Center, et al, Virginia State Circuit Court Judge James H. Chamblin ordered that the defendants can use predictive coding for discovery in this case, despite the plaintiffs' objections that the technology is not as effective as human review.  The order was issued after the defendants filed a motion requesting either that predictive coding technology be allowed in the case or that the plaintiffs pay any additional costs associated with traditional review.  The defendants have an 8 terabyte data set that they hope to reduce to a few hundred gigabytes through advanced culling techniques.

In ruling, Judge Chamblin noted: “Having heard argument with regard to the Motion of Landow Aviation Limited Partnership, Landow Aviation I, Inc., and Landow Company Builders, Inc., pursuant to Virginia Rules of Supreme Court 4:1(b) and (c) and 4:15, it is hereby ordered Defendants shall be allowed to proceed with the use of predictive coding for purposes of processing and production of electronically stored information.”

Judge Chamblin’s order specified 60 days for processing, and another 60 days for production and noted that the receiving party will still be able to question "the completeness of the contents of the production or the ongoing use of predictive coding."  (Editor’s note: I would have included the entire quote, but it’s handwritten and Judge Chamblin has handwriting almost as bad as mine!)

As in the other cases, it will be interesting to see what happens next.  Will the plaintiff attempt to appeal or even attempt a Da Silva-like push for recusal of the Judge?  Or will they accept the decision and gear their efforts toward scrutinizing the resulting production?  Stay tuned.

So, what do you think?  Will this be the landmark case that becomes the first court-approved use of technology assisted review?  Or will the parties continue to “fight it out”?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Case Law: Friday the 13th Is Unlucky for Judge Peck

 

A few weeks ago, in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012), Magistrate Judge Andrew J. Peck of the U.S. District Court for the Southern District of New York issued an opinion making it likely the first case in which a court approved the use of computer-assisted review of electronically stored information (“ESI”).  However, on March 13, District Court Judge Andrew L. Carter, Jr. granted plaintiffs’ request to submit additional briefing on their February 22 objections to the ruling.  In that briefing (filed on March 26), the plaintiffs claimed that the protocol approved for predictive coding “risks failing to capture a staggering 65% of the relevant documents in this case” and questioned Judge Peck’s relationship with defense counsel and with the selected vendor for the case, Recommind.  Then, on April 5, Judge Peck issued an order in response to Plaintiffs’ letter requesting his recusal, directing plaintiffs to indicate whether they would file a formal motion for recusal or ask the Court to consider the letter as the motion.

This past Friday, April 13, the plaintiffs filed their formal motion, which included a Notice of Motion for Recusal or Disqualification, Memorandum of Law in Support of Plaintiffs’ Motion for Recusal or Disqualification and Declaration of Steven L. Wittels in Support of Plaintiffs’ Motion for Recusal or Disqualification.

In the 28 page Memorandum of Law, the plaintiffs made several arguments that they contended justified Judge Peck’s recusal in this case.  They included:

  • In the first conference over which Judge Peck presided on December 2, 2011, he remarked that Defendants “must have thought they died and went to Heaven” to have him assigned to this case and he subsequently repeated that remark in at least two public panels afterward.  In one of the panel appearances, he also (according to the plaintiffs) acknowledged that the plaintiffs’ only alternative was to ask him to recuse himself (in that same panel discussion, Judge Peck also quoted the plaintiff as saying “Oh no no, we’re ok with using computer-assisted review; we just had some questions about the exact process”).
  • In the second status conference held before Judge Peck on January 4, the plaintiffs noted that he encouraged the defendants to enlist the assistance of their eDiscovery counsel, Ralph Losey – whom Judge Peck claimed to know “very well.”  During the next four weeks, Judge Peck served on three public panels with defense counsel Losey about predictive coding, which the plaintiffs referred to as “ex parte contacts” of which they were not informed.  Judge Peck also wrote an article last year entitled Search Forward, where, according to the plaintiffs, he “cited favorably to defense counsel Losey's blog post Go Fish” and Losey responded “in kind to Judge Peck's article by posting a blog entry, entitled Judge Peck Calls Upon Lawyers to Use Artificial Intelligence and Jason Barn[sic] Warns of a Dark Future of Information Burn-Out If We Don’t, where he embraced Judge Peck's position on predictive coding”.
  • One week after the LegalTech trade show, on February 8, the plaintiffs contended that “Judge Peck adopted Defendant MSL’s predictive coding protocol wholesale from the bench” and, on February 24 (link above), he issued the written order “[f]or the benefit of the Bar”.  Some of the materials cited were authored by Judge Peck, Ralph Losey, and Maura R. Grossman, eDiscovery counsel at Wachtell, Lipton, Rosen & Katz, all of whom served together on the panel at LegalTech.
  • The plaintiffs also noted that Judge Peck “confirms that he received, at a minimum, transportation, lodging, and meals free of cost for no less than 10 appearances at eDiscovery conferences in 2010” and did not disclose this compensation (or compensation for similar appearances in 2011 and 2012) to the plaintiffs.  They also noted that Judge Peck failed to inform them of Recommind’s sponsorship of the LegalTech conference where Judge Peck participated on panel discussions regarding predictive coding.

Regardless of whether Judge Peck is in fact partial, the plaintiffs argued in the Memorandum that “§ 455(a) requires a judge's recusal for the mere appearance of impropriety or partiality – i.e. if a reasonable outsider might entertain a plausible suspicion or doubt as to the judge's impartiality”.

In his order on April 5, Judge Peck noted that the “defendants will have 14 days to respond”, so it will be interesting to see whether they respond and what that response entails.  The plaintiffs have certainly given them some bold statements to address.

So, what do you think?  Do the plaintiffs make a valid argument for recusal?  Or is this just a case of “sour grapes” on their part for disagreeing, not with predictive coding in general, but the specific approach to predictive coding addressed in Judge Peck’s order of February 24?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Case Law: The Other Technology Assisted Review Case

 

We’ve covered the Da Silva Moore case quite a bit over the past few weeks (with posts here, here, here and here), but that’s not the only case where technology assisted review is currently being considered and debated.  On February 21, in Kleen Products LLC v. Packaging Corporation of America, et al., the plaintiffs asked Magistrate Judge Nan Nolan to require the producing parties to employ a technology assisted review approach (referred to as "content-based advanced analytics," or CBAA) in their production of documents for discovery purposes.

In their filing, the plaintiffs claimed that “[t]he large disparity between the effectiveness of [the computer-assisted coding] methodology and Boolean keyword search methodology demonstrates that Defendants cannot establish that their proposed [keyword] search methodology is reasonable and adequate as they are required.”  Citing studies conducted between 1994 and 2011 that they claimed demonstrate the superiority of computer-assisted review over keyword approaches, the plaintiffs asserted that computer-assisted coding retrieved for production “70 percent (worst case) of responsive documents rather than no more than 24 percent (best case) for Defendants’ Boolean, keyword search.”

In their filing, the defendants contended that the plaintiffs "provided no legitimate reason that this Court should deviate here from reliable, recognized, and established discovery practices" in favor of their "unproven" CBAA methods. The defendants also emphasized that they have "tested, independently validated, and implemented a search term methodology that is wholly consistent with the case law around the nation and that more than satisfies the ESI production guidelines endorsed by the Seventh Circuit and the Sedona Conference." Having (according to their briefing) already produced more than one million pages of documents using their search methods, the defendants conveyed outrage that the plaintiffs would ask the court to "establish a new and radically different ESI standard for cases in this District."

The defendants also cited Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, a 2007 publication from The Sedona Conference (available for download here), which includes a quote from a 2004 federal district court opinion, saying "by far the most commonly used search methodology today is the use of 'keyword searches.'"  The defendants also stated that the plaintiffs cited no case with a ruling approving the use of computer-assisted review.  That was true at the time, but the Da Silva Moore ruling by Judge Andrew Peck approving the use of technology assisted review was issued just three days later.

The hearing was continued to April, and it will be interesting to see whether Magistrate Judge Nolan will require, over objection, the use of computer-assisted review for the review and production of electronically stored information in this case.  Based on the disputes we’ve seen in the first two cases contemplating the use of technology assisted review (Da Silva Moore and Kleen Products), it appears that the road to acceptance of technology assisted review processes will be a rocky one.

So, what do you think?  Should Judge Nolan rule in favor of the plaintiffs, or have the defendants done enough to ensure a complete and accurate production?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Best Practices: First Name Searches Are Not Always Proper

I’ve worked with numerous clients over the years, providing guidance on search best practices to maximize recall without sacrificing precision, including the use of fuzzy and synonym searches to identify additional potentially responsive files and sampling to test the effectiveness of searches.  In several cases, the client’s initial list of proposed search terms has included the first names of several individuals as standalone terms.  Unfortunately, first names don’t always make the best search terms.

Why?  Because, in many cases, first names are so common that they can apply to several people, not just the desired individuals.  Depending on the size of the collection, searching for names like “Bob”, “Bill”, “Cathy”, “Jim”, “Karen” or “Pat” could retrieve many additional files associated with numerous individuals other than those specifically sought, potentially driving up review costs unnecessarily.

Another issue with first name searches is the potential variations in first names that must be included to ensure that retrieval is complete.  Take this name, for example:

“Billy Bob Byrd”

To adequately perform a first name search, your search might need to include the following: “Billy”, “Bill”, “William”, “WR” (for “William Robert”), “Bob”, “Bobby”, “Robert” and maybe even “BB” (or “BBB”).  Searching for all these terms could yield many additional hits that are probably not responsive, costing time and money to review.  While emails and other informal communications may just refer to him as “Billy Bob”, more formalized communications such as financial documents would probably refer to his name differently.  So, it’s important to include all potential variations, several of which could add considerably more false hits.

There’s also the potential that the name has another meaning entirely.  For example, “Bill” can be a person’s name, but “bill” is another word for invoice (keep in mind that most search engines are case insensitive, so capitalization doesn’t matter).  So, searching for “bill” as a person would also yield every instance where an invoice is referred to as a “bill”.
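To make the “bill” ambiguity concrete, here’s a small Python sketch (the document snippets are invented for illustration) of how a typical case-insensitive, whole-word search behaves:

```python
import re

# Hypothetical document snippets to illustrate the ambiguity
docs = [
    "Please ask Bill to approve the contract.",   # the person
    "The vendor sent a bill for $4,500.",         # an invoice
    "Billing disputes go to accounts payable.",   # no whole-word hit
]

# Typical search-engine behavior: case-insensitive, whole-word matching
pattern = re.compile(r"\bbill\b", re.IGNORECASE)
hits = [d for d in docs if pattern.search(d)]

print(len(hits))  # 2 -- both the person and the invoice are retrieved
```

Both the person and the invoice come back as hits, and nothing in the search itself can tell them apart; only review (or additional qualifying criteria) can.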

With that in mind, it’s important to get the complete names of the people you’re searching for, as well as any known nicknames, so that you can then make decisions on the best terms to use to retrieve the most hits for each person.  Consider these names:

  • Terry Bradshaw: “Terry” is a fairly common name, so I might opt to search for “Bradshaw” first and see what I get.  Or, to limit further, retrieve only documents where “Terry” and “Bradshaw” are both mentioned.
  • Jay Leno: Same here, “Jay” is common, “Leno” is more unique.
  • Jennifer Lopez: “Jennifer” is more common than “Lopez”, though both are fairly common.  I would search for “Lopez” first, but assuming that the client provided the nickname “JLo”, I would search for that alternative also (if not, that would hopefully fall out during review as an additional term to search for).
  • Shaquille O’Neal: This is one case where the first name is actually more unusual than the last name, so I might prefer to search for “Shaquille” and would also search for the nickname of “Shaq”.

Of course, there may be occasions where only the first name is mentioned in a document without the last name.  If you can, try to combine with some other criteria to refine the broad search for the first name, such as email address of the individual in question or email addresses of those most likely to be talking about that individual.

What about the instances where both the first and last names are common?  What about my name, “Doug Austin”?  “Doug” isn’t an extremely common first name, but it’s somewhat common, and “Austin” is the name of a city.  Searching for either term by itself could be overbroad.  So, it makes sense to try to combine them.  To do so in a phrase search, however, could be limiting as searching for “Doug Austin” could miss occurrences of “Austin, Doug”.  Conducting the search as a proximity search (e.g., “Doug within 3 words of Austin”) will catch variations, regardless of order.
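Here’s a minimal Python sketch of such an order-insensitive proximity test (my own illustration; real search engines implement proximity operators natively, so this just shows the idea):

```python
import re

def within_proximity(text: str, term_a: str, term_b: str,
                     distance: int = 3) -> bool:
    """True if term_a and term_b occur within `distance` word positions
    of each other, in either order (case-insensitive)."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    return any(abs(a - b) <= distance for a in pos_a for b in pos_b)

print(within_proximity("Doug Austin wrote the post.", "doug", "austin"))       # True
print(within_proximity("Austin, Doug - author", "doug", "austin"))             # True (order reversed)
print(within_proximity("Doug flew from Dallas to Austin.", "doug", "austin"))  # False (too far apart)
```

Because the test uses word positions rather than a fixed phrase, “Doug Austin” and “Austin, Doug” both match, while an incidental co-occurrence of the two words far apart in a document does not.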

This is just one example of why keyword searching isn’t an exact science.  These aren’t hard and fast rules, and each situation is different.  It’s important to randomly sample and test search terms to ensure an appropriate balance of recall and precision.  Of course, parties sometimes agree to include first names as standalone terms, even when they are common and may retrieve a high number of additional non-responsive files; testing those terms before meeting with opposing counsel can help you prepare to negotiate a more favorable set of terms.
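As an illustration of that sampling step, the following Python sketch draws a simple random sample from a set of search hits and estimates precision with a margin of error (the `is_responsive` reviewer function and the document IDs are hypothetical stand-ins for attorney review):

```python
import math
import random
from statistics import NormalDist

def precision_estimate(hit_ids, sample_size, confidence, is_responsive):
    """Draw a simple random sample from search hits, 'review' it with
    is_responsive, and return (observed precision, margin of error)
    using the normal approximation."""
    sample = random.sample(hit_ids, sample_size)
    responsive = sum(1 for doc in sample if is_responsive(doc))
    p = responsive / sample_size
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical: pretend doc IDs divisible by 3 are the responsive ones
random.seed(42)
hits = list(range(10_000))
p, moe = precision_estimate(hits, 385, 0.95, lambda d: d % 3 == 0)
print(f"estimated precision ~ {p:.1%} +/- {moe:.1%}")
```

If the estimated precision comes back too low, the term is overbroad and worth refining (or combining with other criteria) before it goes into a negotiated list.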

So, what do you think?  Do your search term lists include standalone first names?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Case Law: Judge Peck Responds to Plaintiff’s Request for Recusal

 

Normally, we make one post per business day to the blog.  However, we decided to make a second post today for this important case (which has been discussed so intensely in the industry), as we couldn’t wait until after the holiday to report on it.

A few weeks ago, in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012), Magistrate Judge Andrew J. Peck of the U.S. District Court for the Southern District of New York issued an opinion making it likely the first case in which a court approved the use of computer-assisted review of electronically stored information (“ESI”).  However, on March 13, District Court Judge Andrew L. Carter, Jr. granted plaintiffs’ request to submit additional briefing on their February 22 objections to the ruling.  In that briefing (filed on March 26), the plaintiffs claimed that the protocol approved for predictive coding “risks failing to capture a staggering 65% of the relevant documents in this case” and questioned Judge Peck’s relationship with defense counsel and with the selected vendor for the case, Recommind.

On Monday, Judge Peck issued an order in response to Plaintiffs’ request for his recusal, which, according to Judge Peck, was contained in a letter dated March 28, 2012 (not currently publicly available).  Here is what the Order said:

“The Court is in receipt of plaintiffs' March 28, 2012 letter requesting my recusal.  Plaintiffs shall advise as to whether they wish to file a formal motion or for the Court to consider the letter as the motion (in which case defendants will have 14 days to respond, from the date of plaintiffs' confirmation that the letter constitutes their motion).”

“The Court notes that my favorable view of computer assisted review technology in general was well known to plaintiffs before I made any ruling in this case, and I have never endorsed Recommind's methodology or technology, nor received any reimbursement from Recommind for appearing at any conference that (apparently) they and other vendors sponsored, such as Legal Tech. I have had no discussions with Mr. Losey about this case, nor was I aware that he is working on the case. It appears that after plaintiffs' counsel and vendor represented to me that they agreed to the use of predictive coding, plaintiffs now claim that my public statements approving generally of computer assisted review make me biased. If plaintiffs were to prevail, it would serve to discourage judges (and for that matter attorneys) from speaking on educational panels about ediscovery (or any other subject for that matter). The Court suspects this will fall on deaf ears, but I strongly suggest that plaintiffs rethink their "scorched earth" approach to this litigation.”

So, what do you think?  What will happen next?  Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Case Law: Two Pages Inadvertently Disclosed Out of Two Million May Still Waive Privilege

 

In Jacob v. Duane Reade, Inc., 11 Civ. 0160 (JPO) (THK), Magistrate Judge Theodore Katz of the US District Court for the Southern District of New York found that attorney-client privilege over an inadvertently produced, privileged two-page email had been waived, and that the email did not have to be returned, because the producing party, Duane Reade, had failed to request its return in a timely manner.  According to Defendants' counsel, the ESI production involved the review of over two million documents in less than a month; that review was accomplished with the assistance of an outside vendor and a document review team.

The Plaintiffs in this matter are Assistant Store Managers pursuing a collective action for overtime wages, under the Fair Labor Standards Act ("FLSA"), against the Defendant, Duane Reade.  The email that was inadvertently produced (on November 8, 2011 and subsequently used in deposition) related to a meeting among several individuals within Human Resources, including an in-house attorney at Duane Reade (assumed to be Julie Ko). The defendants discovered the inadvertent production on January 17 of this year when Duane Reade’s HR Manager (an attendee at the meeting) was noticed for deposition.  The defendants argued that the email was inadvertently produced because it was neither from nor to an attorney, and only included advice received at a meeting from an in-house attorney, identified in the email only by the first name “Julie.”

With regard to whether the email was privileged, the court examined the email and found that the first half, where Ko received information from business managers and, in her role as legal counsel, gave legal advice on the requirements of the FLSA, was privileged.  However, the second half of the email, consisting of proposals that came out of the meeting to get the Store Managers and Assistant Store Managers to view and treat the ASMs as managers, contained no legal advice and, therefore, was not privileged.

As to whether the Defendants waived attorney-client privilege when inadvertently producing the email, the Court referenced a summary of the law on this subject provided by Judge Shira Scheindlin, as follows:

“Although the federal courts have differed as to the legal consequences of a party's inadvertent disclosure of privileged information, the general consensus in this district is that the disclosing party may demonstrate, in appropriate circumstances, that such production does not constitute a waiver of the privilege or work-product immunity and that it is entitled to the return of the mistakenly produced documents. In determining whether an inadvertent disclosure waives privilege, courts in the Second Circuit have adopted a middle of the road approach. Under this flexible test, courts are called on to balance the following factors: (1) the reasonableness of the precautions to prevent inadvertent disclosure; (2) the time taken to rectify the error; (3) "the scope of the discovery;" (4) the extent of the disclosure; and (5) an over[arching] issue of fairness.”

The Court ruled that the production of the email was inadvertent and that Duane Reade had employed reasonable precautions to prevent inadvertent disclosures (such as drafting lists of attorney names, employing search filters and conducting quality control reviews). However, given that the Defendants took more than two months to request return of the email, the Court determined that the privilege was waived, as the Defendants did not act “promptly to rectify the disclosure of the privileged email.”

So, what do you think?  Was waiver of privilege fair for this document?  Or should the Defendants have been able to claw it back?  Please share any comments you might have, or let us know if you’d like to know more about a particular topic.


eDiscovery Case Law: Da Silva Moore Plaintiffs Question Predictive Coding Proposal, Judge Peck’s Activities

 

A few weeks ago, in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012), Magistrate Judge Andrew J. Peck of the U.S. District Court for the Southern District of New York issued an opinion in what is likely the first case to judicially approve the use of computer-assisted review of electronically stored information (“ESI”).  However, on March 13, District Court Judge Andrew L. Carter, Jr. granted plaintiffs’ request to submit additional briefing on their February 22 objections to the ruling.  In that briefing (filed last Monday, March 26), the plaintiffs claimed that the protocol approved for predictive coding “risks failing to capture a staggering 65% of the relevant documents in this case” and questioned Judge Peck’s relationship with defense counsel and with the selected vendor for the case, Recommind.

While the plaintiffs noted that “the use of predictive coding may be appropriate under certain circumstances”, they made several contentions in their brief, including the following:

  • That the protocol approved for predictive coding “was adopted virtually wholesale from Defendant MSLGroup”;
  • That “Judge Peck authored an article and made no fewer than six public appearances espousing the use of predictive coding” during “the ten months between the filing of the Amended Complaint and the February 24 written opinion”;
  • That Judge Peck appeared on several of these panels (three alone with Ralph Losey, Jackson Lewis’ ediscovery counsel in this case and a previous thought leader on this blog), whom the plaintiffs refer to as “another outspoken predictive coding advocate whom Judge Peck ‘know[s] very well’”;
  • That “defense counsel Losey and Judge Peck cited each other’s positions on predictive coding with approval in their respective articles, which came out just four months before Judge Peck issued his ESI opinion”;
  • That, to promote its predictive coding technology, “Recommind is a frequent sponsor of the e-discovery panels on which Judge Peck and Defense counsel Losey sit” and “Judge Peck’s February 24 e-discovery ruling is expected to be a boon not only to the predictive coding industry, but also to Recommind’s bottom line”;
  • That, with regard to the defendants’ proposed protocol, “Judge Peck failed to hold an evidentiary hearing or obtain expert testimony as to its reliability and accuracy”; and
  • That, “in the same preliminary study MSL relies on to tout the quality of the technology to be used in its predictive coding protocol, the technology’s “recall” was very low, on average 35%”, so the defendants’ proposed protocol “risks failing to capture up to 65% of the documents material to Plaintiffs’ case”.
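The plaintiffs’ 65% figure follows directly from the recall statistic in that last contention: recall is the fraction of all relevant documents a review process actually retrieves, so the fraction it misses is simply one minus recall. A minimal sketch of that arithmetic (the function name is ours, purely illustrative):

```python
def missed_fraction(recall: float) -> float:
    """Fraction of relevant documents a review process fails to capture,
    given its recall (the fraction of relevant documents it retrieves)."""
    if not 0.0 <= recall <= 1.0:
        raise ValueError("recall must be between 0 and 1")
    return 1.0 - recall

# A reported average recall of 35% implies up to 65% of relevant
# documents could go uncaptured, which is the plaintiffs' claim.
print(missed_fraction(0.35))  # 0.65
```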

In a declaration supplementing the plaintiffs’ filing, Paul J. Neale, chief executive officer at DOAR Litigation Consulting and the plaintiffs’ eDiscovery consultant, contended that Judge Peck approved a predictive coding process that “does not include a scientifically supported method for validating the results”. He also contended that Judge Peck relied on “misstatements” by two Recommind employees (Eric Seggebruch and Jan Puzicha) that misrepresent the effectiveness and accuracy of the Recommind predictive coding process, and noted that Recommind did not perform as well at the 2011 Text Retrieval Conference (TREC) as its marketing materials and experts assert.

Now, the ball is back in Judge Carter’s court.  Will he hold an evidentiary hearing on the eDiscovery issues raised by the plaintiffs?  Will he direct Judge Peck to do so?  It will be interesting to see what happens next.

So, what do you think?  Do the plaintiffs’ objections have merit?  Will Judge Carter give the defendants a chance to respond?  Please share any comments you might have, or let us know if you’d like to know more about a particular topic.


eDiscovery Daily Is Eighteen! (Months Old, That Is)

 

Eighteen months ago yesterday, eDiscovery Daily was launched.  A lot has happened in the industry in eighteen months.  We thought we might be crazy to commit to a daily blog each business day.  We may be crazy indeed, but we still haven’t missed a business day.

The eDiscovery industry has grown quite a bit over the past eighteen months and is expected to continue to do so.  So, there has not been a shortage of topics to address; instead, the challenge has been selecting which ones to address.

Thanks for noticing us!  We’ve more than doubled our readership since the first six-month period, had two of our biggest “hit count” days in the last month and have more than quintupled our subscriber base since those first six months!  We appreciate the interest you’ve shown in the topics and will do our best to continue to provide interesting and useful eDiscovery news and analysis.  And, as always, please share any comments you might have, or let us know if you’d like to know more about a particular topic!

We also want to thank the blogs and publications that have linked to our posts and raised our public awareness, including Pinhawk, The Electronic Discovery Reading Room, Unfiltered Orange, Atkinson-Baker (depo.com), Litigation Support Technology & News, Next Generation eDiscovery Law & Tech Blog, InfoGovernance Engagement Area, Justia Blawg Search, Learn About E-Discovery, Ride the Lightning, Litigation Support Blog.com, ABA Journal, Law.com and any other publication that has picked up at least one of our posts for reference (sorry if I missed any!).  We really appreciate it!

As we’ve done in the past, we like to take a look back every six months at some of the important stories and topics during that time.  So, here are some posts over the last six months you may have missed.  Enjoy!

eDiscovery Trends: Is Email Still the Most Common Form of Requested ESI?

eDiscovery Trends: Sedona Conference Provides Guidance for Judges

eDiscovery Trends: Economy Woes Not Slowing eDiscovery Industry Growth

eDiscovery Law: Model Order Proposes to Limit eDiscovery in Patent Cases

eDiscovery Case Law: Court Rules 'Circumstantial Evidence' Must Support Authorship of Text Messages for Admissibility

eDiscovery Best Practices: Cluster Documents for More Effective Review

eDiscovery Best Practices: Could This Be the Most Expensive eDiscovery Mistake Ever?

eDiscovery 101: Simply Deleting a File Doesn’t Mean It’s Gone

eDiscovery Case Law: Facebook Spoliation Significantly Mitigates Plaintiff’s Win

eDiscovery Best Practices: Production is the “Ringo” of the eDiscovery Phases

eDiscovery Case Law: Court Grants Adverse Inference Sanctions Against BOTH Sides

eDiscovery Trends: ARMA International and EDRM Jointly Release Information Governance White Paper

eDiscovery Trends: The Sedona Conference International Principles

eDiscovery Trends: Sampling within eDiscovery Software

eDiscovery Trends: Small Cases Need Love Too!

eDiscovery Case Law: Court Rules Exact Search Terms Are Limited

eDiscovery Trends: DOJ Criminal Attorneys Now Have Their Own eDiscovery Protocols

eDiscovery Best Practices: Perspective on the Amount of Data Contained in 1 Gigabyte

eDiscovery Case Law: Computer Assisted Review Approved by Judge Peck in New York Case

eDiscovery Case Law: Not So Fast on Computer Assisted Review
