eDiscoveryDaily

eDiscovery Trends: Christine Musil of Informative Graphics Corporation (IGC)

 

This is the second of the 2012 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What do you consider to be the emerging trends in eDiscovery that will have the greatest impact in 2012?
  2. Which trend(s), if any, haven’t emerged to this point like you thought they would?
  3. What are your general observations about LTNY this year and how it fits into emerging trends? (Note: Christine was interviewed the night before the show, so there were obviously no observations at that point)
  4. What are you working on that you’d like our readers to know about?

Today’s thought leader is Christine Musil.  Christine has a diverse career in engineering and marketing spanning 18 years. Christine has been with IGC since March 1996, when she started as a technical writer and a quality assurance engineer. After moving to marketing in 2001, she has applied her in-depth knowledge of IGC's products and benefits to marketing initiatives, including branding, overall messaging, and public relations. She has also been a contributing author to a number of publications on archiving formats, redaction, and viewing technology in the enterprise.

What do you consider to be the emerging trends in eDiscovery that will have the greatest impact in 2012?  And which trend(s), if any, haven’t emerged to this point like you thought they would?

That's a hard question.  Especially for us, because we're somewhat tangential to the market and not as deeply enmeshed in it as a lot of the other vendors are.  I think the number of acquisitions in the industry was what we expected, though maybe the M&A players themselves were surprising.  For example, I didn't personally see the recent ADI acquisition (Applied Discovery acquired by Siris Capital) coming.  And while we weren’t surprised that Clearwell was acquired, we thought that their being acquired by Symantec was an interesting move.

So, we expect the consolidation to continue.  We watched the major content management players like EMC and OpenText to see if they would acquire additional, targeted eDiscovery providers to round out some of their solutions, but through 2011 they didn’t seem to have decided whether they're “all in”, despite some previous acquisitions in the space.  We had wondered whether some of them had decided they're out again, though EMC is here in force for Kazeon this year.  So, I think that’s some of what surprised me about the market.

Other trends that I see are potentially more changes in the FRCP (Federal Rules of Civil Procedure) and probably a continued push towards project-based pricing.  We have certainly felt the pressure to do more project-based pricing, so we're watching that.  Escalating data volumes have caused cost increases and, obviously, something's going to have to give there.  That's where I think we’re going to see more regulations come out through new FRCP rules to provide more proportionality to the discovery process, or clients will simply dictate more pricing alternatives.

What are you working on that you’d like our readers to know about?

We just announced a new release of our Brava!® product, version 7.1, at the show.  The biggest additions to Brava are in the Enterprise version, and we’re debuting the new Brava Changemark® Viewer for smartphones as well as an upcoming Brava HTML client for tablets.  iPads have been a bigger game changer than I think a lot of people even anticipated.  So, we’re excited about it. Also new with Brava 7.1 is video collaboration and improved enterprise readiness and performance for very large deployments.

We also just announced the results of our Redaction Survey, which we conducted to gauge user adoption of electronic redaction software. Nearly 65% of the survey respondents were from law firms, so that was a key indicator of the importance of redaction within the legal community.  Of the respondents, 25% indicated that they are still doing redaction manually, with markers or redaction tape, 32% are redacting electronically, and nearly 38% are using a combined approach with paper-based and software-driven redaction.  Of those that redact electronically, the reasons they prefer electronic redaction included the professional look of the redactions, time savings, efficiency, and the “environmental friendliness” of doing it electronically.

For us, it's exciting moving into those areas, and our partnerships continue to be strong as well.  We have partnerships with LexisNexis and Clearwell, both of which are unaffected by the recent acquisitions.  So, that's what's new at IGC.

Thanks, Christine, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

eDiscovery Case Law: Predictive Coding Considered by Judge in New York Case

In Da Silva Moore v. Publicis Groupe, No. 11 Civ. 1279 (ALC) (S.D.N.Y. Feb. 8, 2012), Magistrate Judge Andrew J. Peck of the U.S. District Court for the Southern District of New York instructed the parties to submit proposals to adopt a protocol for e-discovery that includes the use of predictive coding, perhaps the first known case where a technology assisted review approach was considered by the court.

In this case, the plaintiff, Monique Da Silva Moore, filed a Title VII gender discrimination action against advertising conglomerate Publicis Groupe, on her behalf and the behalf of other women alleged to have suffered discriminatory job reassignments, demotions and terminations.  Discovery proceeded to address whether Publicis Groupe:

  • Compensated female employees less than comparably situated males through salary, bonuses, or perks;
  • Precluded or delayed selection and promotion of females into higher level jobs held by male employees; and
  • Disproportionately terminated or reassigned female employees when the company was reorganized in 2008.

Consultants provided guidance to the plaintiffs and the court to develop a protocol to use iterative sample sets of 2,399 documents from a collection of 3 million documents to yield a 95 percent confidence level and a 2 percent margin of error (see our previous posts here, here and here on how to determine an appropriate sample size, randomly select files and conduct an iterative approach). In all, the parties expect to review between 15,000 and 20,000 files to create the “seed set” to be used to predictively code the remainder of the collection.
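The arithmetic behind those numbers is straightforward. Here's a minimal Python sketch using Cochran's sample size formula with a finite population correction; the exact rounding convention used in the case protocol is an assumption on our part, but the corrected result lands right at the 2,399 figure above:

```python
def sample_size(z=1.96, margin=0.02, p=0.5, population=None):
    """Cochran's sample size formula, with an optional finite
    population correction.  p=0.5 is the most conservative choice
    when the true proportion is unknown."""
    n = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population:
        n = n / (1 + (n - 1) / population)
    return round(n)

# 95 percent confidence (z ≈ 1.96) and a 2 percent margin of error:
print(sample_size())                      # 2401 for an unbounded population
print(sample_size(population=3_000_000))  # 2399 with the correction applied
```

Note how little the finite population correction matters against a 3 million document collection: the required sample size barely moves, which is why sample sizes stay small even for very large collections.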

The parties were instructed to submit their draft protocols by February 16th, which is today(!).  The February 8th hearing was attended by counsel and their respective ESI experts.  It will be interesting to see what results from the draft protocols submitted and the opinion from Judge Peck that results.

So, what do you think?  Should courts order the use of technology such as predictive coding in litigation?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Trends: George Socha of Socha Consulting

 

This is the first of the 2012 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What do you consider to be the emerging trends in eDiscovery that will have the greatest impact in 2012?
  2. Which trend(s), if any, haven’t emerged to this point like you thought they would?
  3. What are your general observations about LTNY this year and how it fits into emerging trends?
  4. What are you working on that you’d like our readers to know about?

Today’s thought leader is George Socha.  A litigator for 16 years, George is President of Socha Consulting LLC, offering services as an electronic discovery expert witness, special master and advisor to corporations, law firms and their clients, and legal vertical market software and service providers in the areas of electronic discovery and automated litigation support. George has also been co-author of the leading survey on the electronic discovery market, The Socha-Gelbmann Electronic Discovery Survey; last year he and Tom Gelbmann converted the Survey into Apersee, an online system for selecting eDiscovery providers and their offerings.  In 2005, he and Tom Gelbmann launched the Electronic Discovery Reference Model project to establish standards within the eDiscovery industry – today, the EDRM model has become a standard in the industry for the eDiscovery life cycle and there are nine active projects with over 300 members from 81 participating organizations.  George has a J.D. from Cornell Law School and a B.A. from the University of Wisconsin – Madison.

What do you consider to be the emerging trends in eDiscovery that will have the greatest impact in 2012?

I may have said this last year too, but it holds true even more this year – if there's an emerging trend, it's the trend of people talking about the emerging trend.  It started last year, and this year every person in the industry seems to have an emerging trend to deliver.  Not to be too crass about it, but often the message is, "Buy our stuff", a message that is not especially helpful.

Regarding actual emerging trends, each year we all try to sum up legal tech in two or three words.  The two words for this year can be “predictive coding.”  Use whatever name you want, but that's what everyone seems to be hawking and talking about at LegalTech this year.  This does not necessarily mean they really can deliver.  It doesn't mean they know what “predictive coding” is.  And it doesn't mean they've figured out what to do with “predictive coding.”  Having said that, expanding the use of machine assisted review capabilities as part of the e-discovery process is an important step forward.  It also has been a while coming.  The earliest I can remember working with a client, doing what's now being called predictive coding, was in 2003.  A key difference is that at that time they had to create their own tools.  There wasn't really anything they could buy to help them with the process.

Which trend(s), if any, haven’t emerged to this point like you thought they would?

One thing I don't yet hear is discussion about using predictive coding capabilities as a tool to assist with determining what data to preserve in the first place.  Right now the focus is almost exclusively on what do you do once you’ve “teed up” data for review, and then how to use predictive coding to try to help with the review process.

Think about taking the predictive coding capabilities and using them early on to make defensible decisions about what to and what not to preserve and collect.  Then consider continuing to use those capabilities throughout the e-discovery process.  Finally, look into using those capabilities to more effectively analyze the data you're seeing, not just to determine relevance or privilege, but also to help you figure out how to handle the matter and what to do on a substantive level.

What are your general observations about LTNY this year and how it fits into emerging trends?

Well, LegalTech continues to be dominated by electronic discovery.  As a result, we tend to overlook whole worlds of technologies that can be used to support and enhance the practice of law. It is unfortunate that in our hyper-focus on e-discovery, we risk losing track of those other capabilities.

What are you working on that you’d like our readers to know about?

With regard to EDRM, we recently announced that we have hit key milestones in five projects.  Our EDRM Enron Email Data Set has now officially become an Amazon public dataset, which I think will mean wider use of the materials.

We announced the publication of our Model Code of Conduct, which was five years in the making.  We have four signatories so far, and are looking forward to seeing more organizations sign on.

We announced the publication of version 2.0 of our EDRM XML schema.  It's a tightened-up schema, reorganized so that it should be a bit easier to use and more efficient in operation.

With the Metrics project, we are beginning to add information to a database that we've developed to gather metrics, the objective being to be able to make available metrics with an empirical basis, rather than the types of numbers bandied about today, where no one seems to know how they were arrived at. Also, last year the Uniform Task Billing Management System (UTBMS) code set for litigation was updated.  The codes to use for tracking e-discovery activities were expanded from a single code that covered not just e-discovery but other activities, to a number of codes based on the EDRM Metrics code set.

On the Information Governance Reference Model (IGRM) side, we recently published a joint white paper with ARMA.  The paper cross-maps EDRM's IGRM with ARMA's Generally Accepted Recordkeeping Principles (GARP).  We look forward to more collaborative materials coming out of the two organizations.

As for Apersee, we continue to allow consumers to search the data on the site for free, but we are no longer charging providers a fee for their information to be available.  Instead, we now have two sponsors and some advertising on the site.  This means that any provider can put information in, and everyone can search that information.  The more data that goes in, the more useful the searching process becomes.  All this fits our goal of creating a better way to match consumers with the providers who have the services, software, skills and expertise that the consumers actually need.

And on the consulting and testifying side, I continue to work with a broad array of law firms; corporate and governmental consumers of e-discovery services and software; and providers offering those capabilities.

Thanks, George, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

eDiscovery Trends: “Assisted” is the Key Word for Technology Assisted Review

 

As noted in our blog post entitled 2012 Predictions – By The Numbers, almost all of the sets of eDiscovery predictions we reviewed (9 out of 10) predicted a greater emphasis on Technology Assisted Review (TAR) in the coming year.  It was one of our predictions, as well.  And, during all three days at LegalTech New York (LTNY) a couple of weeks ago, sessions were conducted that addressed technology assisted review concepts and best practices.

While some equate technology assisted review with predictive coding, other technology approaches such as conceptual clustering are also increasing in popularity.  They qualify as TAR approaches, as well.  However, for purposes of this blog post, we will focus on predictive coding.

Over a year ago, I attended a Virtual LegalTech session entitled Frontiers of E-Discovery: What Lawyers Need to Know About “Predictive Coding” and wrote a blog post from that entitled What the Heck is “Predictive Coding”?  The speakers for the session were Jason R. Baron, Maura Grossman and Bennett Borden (Jason and Bennett are previous thought leader interviewees on this blog).  The panel gave the best descriptive definition that I’ve seen yet for predictive coding, as follows:

“The use of machine learning technologies to categorize an entire collection of documents as responsive or non-responsive, based on human review of only a subset of the document collection. These technologies typically rank the documents from most to least likely to be responsive to a specific information request. This ranking can then be used to “cut” or partition the documents into one or more categories, such as potentially responsive or not, in need of further review or not, etc.”

It’s very cool technology and capable of efficient and accurate review of the document collection, saving costs without sacrificing quality of review (in some cases, it yields even better results than traditional manual review).  However, there is one key phrase in the definition above that can make or break the success of the predictive coding process: “based on human review of only a subset of the document collection”. 

Key to the success of any review effort, whether linear or technology assisted, is knowledge of the subject matter.  For linear review, knowledge of the subject matter usually results in preparation of high quality review instructions that (assuming the reviewers competently follow those instructions) result in a high quality review.  In the case of predictive coding, use of subject matter experts (SMEs) to review a core subset of documents (typically known as a “seed set”) and make determinations regarding that subset is what enables the technology in predictive coding to “predict” the responsiveness and importance of the remaining documents in the collection.  The more knowledgeable the SMEs are in creating the “seed set”, the more accurate the “predictions” will be.
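To make the mechanics concrete, here is a hypothetical, stripped-down Python sketch of the idea: a tiny naive Bayes scorer learns from an SME-coded “seed set” and then ranks unreviewed documents from most to least likely to be responsive, just as the definition above describes. The documents, terms and scoring function are all invented for illustration; commercial predictive coding tools are far more sophisticated than this.

```python
import math
from collections import Counter

def train(seed_docs, seed_labels):
    """Tally word counts per class from the SME-coded seed set."""
    counts = {0: Counter(), 1: Counter()}
    for doc, label in zip(seed_docs, seed_labels):
        counts[label].update(doc.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, vocab

def score(doc, counts, vocab):
    """Naive Bayes log-odds that a document is responsive (class 1),
    with add-one smoothing; higher means more likely responsive.
    Words never seen in the seed set are ignored."""
    totals = {c: sum(counts[c].values()) for c in (0, 1)}
    log_odds = 0.0
    for word in doc.lower().split():
        if word in vocab:
            p1 = (counts[1][word] + 1) / (totals[1] + len(vocab))
            p0 = (counts[0][word] + 1) / (totals[0] + len(vocab))
            log_odds += math.log(p1 / p0)
    return log_odds

# The SME-coded seed set (1 = responsive, 0 = not responsive).
seed_docs = [
    "salary and bonus figures for female employees",
    "promotion rankings for female candidates",
    "cafeteria lunch menu for next week",
    "holiday party planning thread",
]
seed_labels = [1, 1, 0, 0]
counts, vocab = train(seed_docs, seed_labels)

# Unreviewed documents, ranked from most to least likely responsive.
unreviewed = [
    "memo on salary adjustments after the reorganization",
    "parking garage access badge request",
]
ranked = sorted(unreviewed, key=lambda d: score(d, counts, vocab), reverse=True)
print(ranked[0])  # the salary memo ranks first
```

The sketch also illustrates why SME quality matters so much: every “prediction” is derived entirely from the words the SMEs coded in the seed set, so a poorly coded seed set propagates its mistakes across the whole collection.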

And, as is the case with other processes such as document searching, sampling the results (by determining the appropriate sample size of responsive and non-responsive items, randomly selecting those samples and reviewing both groups – responsive and non-responsive – to test the results) will enable you to determine how effective the process was in predictively coding the document set.  If sampling shows that the process yielded inadequate results, take what you’ve learned from the sample set review and apply it to create a more accurate “seed set” for re-categorizing the document collection.  Sampling will enable you to defend the accuracy of the predictive coding process, while saving considerable review costs.
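As a simple illustration of the selection step, here's a Python sketch that draws a random sample from each predicted category for QC review. The document IDs and sample sizes are made up for the example; the fixed seed keeps the selection repeatable, which helps when defending the process.

```python
import random

def draw_validation_sample(doc_ids, sample_size, seed=42):
    """Randomly select documents for QC review.  A fixed seed makes
    the selection repeatable, and therefore easier to defend."""
    return random.Random(seed).sample(doc_ids, sample_size)

# Hypothetical results of the predictive coding pass.
predicted_responsive = [f"DOC-{i:06d}" for i in range(40_000)]
predicted_non_responsive = [f"DOC-{i:06d}" for i in range(40_000, 100_000)]

# Sample both groups, as described above, so that errors in each
# direction (false positives and false negatives) can be estimated.
sample_r = draw_validation_sample(predicted_responsive, 400)
sample_nr = draw_validation_sample(predicted_non_responsive, 400)

# Reviewers would now code each sample by hand; the observed error
# rates estimate how well the process performed, within the sample's
# margin of error.
print(len(sample_r), len(sample_nr))
```

Sampling the non-responsive pile is the step most often skipped, and it's the one that catches responsive documents the model wrongly excluded.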

So, what do you think?  Have you utilized predictive coding in any of your reviews?  How did it work for you?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Trends: International Trade Commission Considers Proportionality Proposal

 

As eDiscovery costs continue to escalate, proposals to bring proportionality to the eDiscovery process have become increasingly popular, such as this model order to limit eDiscovery in patent cases proposed by Federal Circuit Chief Judge Randall Rader last year (which was adopted for use in this case).  In January, Chief Judge Rader and three members of the Council (Council Chairman Ed Reines of Weil, Tina Chappell of Intel Corporation, and John Whealan, Associate Dean of Intellectual Property Studies at the George Washington University School of Law) presented a proposal to the U.S. International Trade Commission (USITC) to streamline eDiscovery in section 337 investigations.

Under Section 337 of the Tariff Act of 1930 (19 U.S.C. § 1337), the USITC conducts investigations into allegations of certain unfair practices in import trade. Section 337 declares the infringement of certain statutory intellectual property rights and other forms of unfair competition in import trade to be unlawful practices. Most Section 337 investigations involve allegations of patent or registered trademark infringement.

The proposal tracks the approach of the district court eDiscovery model order that is being adopted in several district courts and under consideration in others. Chairman Reines described the proposal as flexible, reasonably simple, and easy to administer. Under the proposal, litigants would:

  • Indicate whether ESI such as email is being sought or not;
  • Presumptively limit the number of custodians whose files will be searched, the locations of those documents, and the search terms that will be used (if litigants exceed the specified limits, they would assume the additional costs);
  • Use focused search terms limited to specific contested issues; and
  • Allow privileged documents to be exchanged without losing privilege.

For more regarding the USITC proposal to streamline eDiscovery in section 337 investigations, including reactions from USITC members, click to see the USITC press release here.

So, what do you think?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Case Law: Court Rules Exact Search Terms Are Limited

 

In Custom Hardware Eng’g & Consulting v. Dowell, 2012 U.S. Dist. LEXIS 146, 7-8 (E.D. Mo. Jan. 3, 2012), the plaintiff and defendant could not agree on search terms to be used for discovery on defendant’s forensically imaged computers.  The court directed each party to submit a proposed list of search terms and indicated that each party would be permitted to file objections to the opposing party's proposed list.  After reviewing the proposals, and the defendant’s objections to the plaintiff’s proposed list, the court found that the defendant’s proposed list was “problematic and inappropriate” and that their objections to the plaintiff’s proposed terms were “without merit”, and ordered use of the plaintiff’s search terms in discovery.

Plaintiff alleged the defendants formed a competing company by “illegally accessing, copying, and using Plaintiff's computer software and data programming source code systems” and sued defendants for copyright infringement, trade secret misappropriation, breach of contract and other claims.  The court ordered discovery of ESI on defendants' computers through use of a forensic process to recover and then search the ESI.  In July 2011, the plaintiffs provided a request for production to defendants that requested “any and all documents which contain, describe, and/or relate in any manner to any of the words, phrases and acronyms, or derivatives thereof, contained in the list [provided], irrespective of whether exact capitalization, alternative spelling, or any other grammatical standard was used.”  The defendants submitted their own proposed list, which “excludes irrelevant information by requiring precise matches between search terms and ESI”.

Referencing Victor Stanley (previous blog posts regarding that case here, here, here and here), Missouri District Court Judge Richard Webber noted in his ruling that “While keyword searches have long been recognized as appropriate and helpful for ESI search and retrieval, there are well-known limitations and risks associated with them.”  Quoting from The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, the court noted that keyword searches “capture many documents irrelevant to the user's query…but at the same time exclude common or inadvertently misspelled instances” of the search terms.

The defendant issued three objections to the plaintiff’s terms, which the court addressed as follows:

  • Plaintiffs’ Terms would Include an Unreasonable Number of Irrelevant Results: Assuming that the argument was based on a contention by the defendants that the discovery would be overly burdensome, the court noted that the “burden or expense of conducting such a search must be low, and Defendants have presented the Court with no evidence that suggests otherwise.”
  • Plaintiffs' Terms would Produce Privileged Results: The Court noted that a producing party can create a privilege log to exclude documents that would otherwise fit the search term results.
  • Some of Plaintiffs' terms will Encompass Only Irrelevant Information: Noting that the defendants' “objection is a conclusory statement, stated without any argumentation or other support”, the Court found that a search of these terms may produce "matter that is relevant to any party's claim or defense”.

The Court also found that the defendants' proposed list would be “problematic and inappropriate” and “would fail to produce discoverable ESI simply because of an inexact match in capitalization or phrasing between a search term and the ESI” and rejected that list, ordering use of the plaintiff’s list for searching.
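To illustrate the difference the Court was pointing at, here is a small hypothetical Python example (the documents and search term are invented) showing how an exact-match search misses documents that a case-insensitive, punctuation-tolerant search finds:

```python
import re

documents = {
    "email_017.txt": "See the attached Source Code archive.",
    "email_042.txt": "the source-code went out yesterday",
    "email_099.txt": "Lunch at noon?",
}

term = "source code"

# Exact matching, as in the defendants' rejected proposal: any
# difference in capitalization or punctuation excludes the document.
exact_hits = [name for name, text in documents.items() if term in text]

# A more forgiving search: case-insensitive and tolerant of hyphens
# or whitespace between the words, closer to the plaintiff's request.
pattern = re.compile(r"source[\s-]*code", re.IGNORECASE)
loose_hits = [name for name, text in documents.items() if pattern.search(text)]

print(exact_hits)   # []  ("Source Code" is capitalized in email_017)
print(loose_hits)   # ['email_017.txt', 'email_042.txt']
```

The exact search returns nothing at all here, even though two of the three documents plainly discuss source code, which is precisely the "inexact match in capitalization or phrasing" failure the Court described.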

So, what do you think?  Was that the right call, or was the plaintiff’s request overbroad?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Trends: Announcing Second Annual LTNY Thought Leader Series!

 

In our efforts to continue to bring our readers perspectives from various thought leaders throughout the eDiscovery community, eDiscoveryDaily has published several thought leader interviews over the nearly 1 1/2 years since our inception.  Last year at LegalTech New York (LTNY), we were able to conduct interviews with several eDiscovery industry thought leaders and announced the schedule for those interviews after the show.  Click here to see the schedule for last year’s interviews with links to each interview we conducted.

It appears that the LTNY Thought Leader interviews have become a tradition, as we were able to conduct interviews again with several industry thought leaders!  We’re pleased to introduce the schedule for the series, which will begin next Wednesday, February 15.  Something to look forward to after Valentine’s Day!

Here are the interviews that we will be publishing over the next few weeks:

Wednesday, February 15: George Socha, President of Socha Consulting LLC and co-founder of the Electronic Discovery Reference Model (EDRM).  As President of Socha Consulting LLC, George offers services as an eDiscovery expert witness, special master and advisor to corporations, law firms and their clients, and legal vertical market software and service providers in the areas of electronic discovery and automated litigation support.

Friday, February 17: Christine Musil, Director of Marketing of Informative Graphics Corporation (IGC).  Christine has applied her in-depth knowledge of IGC's products and benefits to marketing initiatives, including branding, overall messaging, and public relations. She has also been a contributing author to a number of publications on archiving formats, redaction, and viewing technology in the enterprise.

Monday, February 20: Jim McGann, Vice President of Index Engines.  Jim has extensive experience with eDiscovery and Information Management in the Fortune 2000 sector and has worked for leading software firms that provided financial services and information management solutions.

Wednesday, February 22: Tom Gelbmann, Principal Analyst of Gelbmann & Associates and co-founder of the Electronic Discovery Reference Model (EDRM).  Since 1993, Tom has helped law firms and Corporate Law Departments realize the full benefit of their investments in Information Technology.

Friday, February 24: Brian Schrader, Co-Founder and President of Business Intelligence Associates, Inc. (BIA).  Brian is an expert and frequent writer and speaker on eDiscovery and computer forensics topics, particularly those addressing the collection, preservation and processing functions of the eDiscovery process.

Monday, February 27: Ralph Losey, Partner and National eDiscovery Counsel for Jackson Lewis, LLP.  Ralph is also an Adjunct Professor at the University of Florida College of Law teaching eDiscovery, a prolific author of eDiscovery books and articles and the principal author and publisher of the popular e-Discovery Team® Blog.

Wednesday, February 29: Craig Ball, Law Offices of Craig D. Ball, P.C.  Craig has delivered over 600 presentations and papers to continuing legal and professional education programs throughout the United States.  Craig’s articles on forensic technology and electronic discovery frequently appear in the national media.  He also writes a monthly column on computer forensics and eDiscovery for Law Technology News and publishes a blog called "Ball in your Court".

Thanks to everyone for their time in participating in these interviews, especially during a busy LegalTech week!

So, what do you think?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Best Practices: Preparing Your 30(b)(6) Witnesses

 

When it comes to questions and potential issues that the receiving party may have about the discovery process of the producing party, one of the most common and direct methods for conducting “discovery about the discovery” is a deposition under Federal Rule 30(b)(6). This rule enables a party to serve a deposition notice on the entity involved in the litigation rather than an individual. The notice identifies the topics to be covered in the deposition, and the entity being deposed must designate one or more people qualified to answer questions on the identified topics.

While those designated to testify may not necessarily have day-to-day responsibility related to the identified topics, they must be educated enough in those issues to sufficiently address them during the testimony. Serving a deposition notice on the entity under Federal Rule 30(b)(6) saves the deposing party from having to identify specific individual(s) to depose while still enabling the topics to be fully explored in a single deposition.

Topics to be covered in a 30(b)(6) deposition can vary widely, depending on the facts and circumstances of the case. However, there are some typical topics that the deponent(s) should be prepared to address.

Legal Hold Process: Perhaps the most common area of focus in a 30(b)(6) deposition is the legal hold process. Spoliation of data can occur when the legal hold process is unsound, and data spoliation is the most common cause of sanctions resulting from the eDiscovery process.  Issues to address include:

  • General description of the legal hold process, including all details of that policy and the specific steps taken in this case to effectuate the hold.
  • Timing of issuing the legal hold and to whom it was issued.
  • Substance of the legal hold communication (if the communication is not considered privileged).
  • Process for selecting sources for legal hold, identification of sources that were eliminated from legal hold, and a description of the rationale behind those decisions.
  • Tracking and follow-up with the legal hold sources to ensure understanding and compliance with the hold process.
  • Whether any processes are in place in the company to automatically delete data and, if so, what steps were taken to disable them and when those steps were taken.

Collection Process: Logically, the next eDiscovery step discussed in the 30(b)(6) deposition is the process for collecting preserved data:

  • Method of collecting ESI for review, including whether the method preserved all relevant metadata intact.
  • Chain of custody tracking from origination to destination.

Searching and Culling: Once the ESI is collected, the methods for conducting searches and culling the collection down for review must be discussed:

  • Method used to cull the ESI prior to review, including the tools used, the search criteria for inclusion in review, and how the search criteria were developed (including potential use of subject matter experts to flesh out search terms).
  • Process for testing and refining search terms used.

Review Process: The 30(b)(6) witness(es) should be prepared to fully describe the review process, including:

  • Methods to conduct review of the ESI including review application(s) used and workflow associated with the review process.
  • Use of technology to assist with the review, such as clustering, predictive coding, duplicate and near-duplicate identification.
  • To the extent the process can be described, the methodology for identifying and documenting privileged ESI on the privilege log (this methodology may be important if the producing party requests to “claw back” any inadvertently produced privileged ESI).
  • Personnel employed to conduct ESI review, including their qualifications, experience, and training.

Production Process: Information regarding the production process, including:

  • Methodology for organizing and verifying the production, including confirmation of file counts and spot QC checks of produced files for content.
  • The total volume of ESI collected, reviewed, and produced.

Depending on the specifics of the case and discovery efforts, there may be further topics to be addressed to ensure that the producing party has met its preservation and discovery obligations.

So, what do you think?  Have you had to prepare 30(b)(6) witnesses for deposition?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Case Law: KPMG Loses Another Round to Pippins

 

As discussed previously in eDiscovery Daily, KPMG sought a protective order in Pippins v. KPMG LLP, No. 11 Civ. 0377 (CM)(JLC), 2011 WL 4701849 (S.D.N.Y. Oct. 7, 2011) to limit preservation to a random sample of 100 hard drives from among those it had already preserved for this and other litigation, or to shift the cost of any preservation beyond that scope.  Lawyers for Pippins won a ruling last November from Magistrate Judge James Cott requiring preservation of all available drives, though Judge Cott encouraged the parties to continue to meet and confer to reach agreement on sampling.  When the parties were unable to agree, KPMG appealed to the District Court.

Last Friday, District Court Judge Colleen McMahon upheld the lower court ruling, noting:

"It smacks of chutzpah (no definition required) to argue that the Magistrate failed to balance the costs and benefits of preservation when KPMG refused to cooperate with that analysis by providing the very item that would, if examined, demonstrate whether there was any benefit at all to preservation.”

“KPMG could have established [that producing all the drives was unnecessary] by producing several hard drives to Plaintiffs and Magistrate Judge Cott. … But KPMG has established nothing of the sort,” McMahon added.

“Even assuming that KPMG’s preservation costs are both accurate and wholly attributable to this litigation — which I cannot verify — I cannot possibly balance the costs and benefits of preservations when I’m missing one side of the scale (the benefits).”

“I gather that KPMG takes the position that the only Audit Associates who are presently ‘parties’ are the named plaintiffs, and so only the named plaintiffs’ hard drives really need to be preserved. But that is nonsense,” she continued. “Under Zubulake IV, the duty to preserve all relevant information for ‘key players’ is triggered when a party ‘reasonably anticipates litigation.’ … At the present moment, KPMG should ‘reasonably anticipate’ that every Audit Associate who will be receiving opt-in notice is a potential plaintiff in this action,” McMahon concluded.

Outten & Golden partner Justin Swartz, representing Pippins, commented after the ruling: "All we're asking for is the chance to look at a few hard drives, find out what's on them, and negotiate a resolution."  Steven Catlett of Sidley Austin, representing KPMG, did not provide a comment.

So, what do you think?  Was this a ruling against proportionality in eDiscovery or is KPMG’s refusal to provide any hard drives defeating their proportionality argument?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Trends: Deadly Sins of Document Review

With all of the attention on Technology Assisted Review (TAR) during the LegalTech New York (LTNY) show, you would think that no one conducts manual review anymore.  However, even the staunchest advocates of TAR that I spoke to last week indicated that manual review is still a key part of an effective review process: technology is used to identify potentially responsive and privileged ESI, and manual review enables the final determinations to be made.

There are “dos” and “don’ts” for conducting an effective manual review.  There was an interesting article in Texas Lawyer (via Law Technology News) entitled The 7 Deadly Sins of Document Review by Dalton Young that focuses on the “don’ts”.  As review is the most expensive phase of the eDiscovery process and legal budgets are stretched to the limit, it’s important to get the most out of your spend on manual review.  With that in mind, here are the seven deadly sins of document review (along with a few of my observations):

  1. Hiring overqualified reviewers: Although there are many qualified lawyers available due to the recent recession, those lawyers often don’t have as much experience as seasoned review paralegals, who are also less expensive and less likely to leave for another offer.
  2. Failing to establish a firm time commitment: If lead counsel doesn’t clearly establish the expected review timeline up front and expect reviewers to commit to that time frame, turnover of reviewers can drive up costs and delay project completion.
  3. Failing to provide reviewers with thorough training on the review tools: Train beyond just the basics so that reviewers can take advantage of advanced software features; that training starts with lead counsel.  I would adjust this point a bit: lead counsel needs to become fully proficient with the review tools, then develop a workflow that manages the efficiency of the reviewers and train the reviewers according to that workflow.  While it may be nice for reviewers to know all of the advanced search features, a full understanding of searching best practices isn’t something that can be accomplished in a single training session; it should be managed by someone with considerable experience using advanced searching capabilities in an efficient and defensible manner.
  4. Failing to empower reviewers with sufficient background on the case: Providing reviewers with not just a list of expected key words, but also an understanding of the issues of the case enables them to recognize important documents that might not fit within the key words identified.  I would also add that it’s important to have regular “huddles” so that learned knowledge by selected reviewers can be shared with the entire team to maximize review effectiveness.
  5. Failing to foster bonds within the review team: Just like any other team member, reviewers like to know that they’re an important part of the cause and that their work is appreciated, so treating them to lunch or an occasional happy hour can foster a more enjoyable work environment and increase reviewer retention.
  6. Failing to predetermine tags and codes before the project begins: A lead member of the law firm should complete an overview of the discovery to identify the process and establish tags up front instead of “on the fly” as review progresses (even though that tag list will often need to be supplemented regardless of how well the upfront overview is conducted).  I would add inclusion of one or more subject matter experts in that upfront process to help identify those tags.
  7. Providing reviewers with a too-structured work environment: The author indicates that counsel should “consider providing a relaxed, somewhat self-directed work environment”.  The key word here is “somewhat”: flexibility in start and stop work times and break/lunch times can enable you to keep good reviewers who may need that flexibility.  Regular monitoring of reviewer metrics will enable the review manager to confirm that reviewer performance is not adversely affected by the increased flexibility, or to adjust accordingly if the review environment becomes too lax.

So, what do you think?  Are there other “deadly sins” that the author doesn’t mention?  Please share any comments you might have or if you’d like to know more about a particular topic.
