
eDiscovery Trends: Cloud Covered by Ball

 

What is the cloud, why is it becoming so popular and why is it important to eDiscovery? These are the questions being addressed—and very ably answered—in the recent article Cloud Cover (via Law Technology News) by computer forensics and eDiscovery expert Craig Ball, a previous thought leader interviewee on this blog.

Ball believes that the fears about cloud data security are easily dismissed when considering that “neither local storage nor on-premises data centers have proved immune to failure and breach”. And as far as the cloud's importance to the law and to eDiscovery, he says, "the cloud is re-inventing electronic data discovery in marvelous new ways while most lawyers are still grappling with the old."

What kinds of marvelous new ways, and what do they mean for the future of eDiscovery?

What is the Cloud?

First we have to understand just what the cloud is.  The cloud is more than just the Internet, although it's that, too. In fact, what we call "the cloud" is made up of three on-demand services:

  • Software as a Service (SaaS) covers web-based software that performs tasks you once carried out on your computer's own hard drive, without requiring you to perform your own backups or updates. If you check your email virtually on Hotmail or Gmail or run a Google calendar, you're using SaaS.
  • Platform as a Service (PaaS) happens when companies or individuals rent virtual machines (VMs) to test software applications or to run processes that take up too much hard drive space to run on real machines.
  • Infrastructure as a Service (IaaS) encompasses the use and configuration of virtual machines or hard drive space in whatever manner you need to store, sort, or operate your electronic information.

These three models combine to make up the cloud, a virtual space where electronic storage and processing is faster, easier and more affordable.

How the Cloud Will Change eDiscovery

One reason processing is faster is distributed processing, which Ball calls “going wide”.  Here’s his analogy:

“Remember that scene in The Matrix where Neo and Trinity arm themselves from gun racks that appear out of nowhere? That's what it's like to go wide in the cloud. Cloud computing makes it possible to conjure up hundreds of virtual machines and make short work of complex computing tasks. Need a supercomputer-like array of VMs for a day? No problem. When the grunt work's done, those VMs pop like soap bubbles, and usage fees cease. There's no capital expenditure, no amortization, no idle capacity. Want to try the latest concept search tool? There's nothing to buy! Just throw the tool up on a VM and point it at the data.”

Because the cloud is entirely virtual, operating on servers whose locations are unknown and mostly irrelevant, it throws the rules for eDiscovery right out the metaphorical window.

Ball also believes that everything changes once discoverable information goes into the cloud. "Bringing ESI beneath one big tent narrows the gap between retention policy and practice and fosters compatible forms of ESI across web-enabled applications".

"Moving ESI to the cloud," Ball adds, "also spells an end to computer forensics." Where there are no hard drives, there can be no artifacts of deleted information—so, deleted really means deleted.

What's more, “[c]loud computing makes collection unnecessary”. Where discovery requires that information be collected to guarantee its preservation, putting a hold on ESI located in the cloud will safely keep any users from destroying it. And because cloud computing allows for faster processing than can be accomplished on a regular hard drive, the search for discovery documents will move to where they're located, in the cloud. Not only will this approach be easier, it will also save money.

Ball concludes his analysis with the statement, "That e-discovery will live primarily in the cloud isn't a question of whether but when."

So, what do you think? Is cloud computing the future of eDiscovery? Is that future already here? Please share any comments you might have or if you'd like to know more about a particular topic.

eDiscovery Trends: More On the Recommind Patent Controversy

 

Perhaps the most controversial story discussed in the eDiscovery community in quite some time is the patent for Predictive Coding that Recommind recently announced via a press release entitled Recommind Patents Predictive Coding, issued on June 8.  I haven’t seen this much backlash against a company or individual since last summer, when LeBron James decided to leave the Cleveland Cavaliers for the Miami Heat (and he and his new teammates held a championship-like celebration before the season even started).  How did that turn out?  😉

Since that announcement, there have been several articles and blog posts about it, including:

  • This one, from Monica Bay of Law Technology News, asking the question “Is Recommind Blowing Smoke?”, which discusses the buzz over Recommind’s announcement;
  • This one, from Evan Koblentz (also of Law Technology News), entitled “Recommind Intends to Flex Predictive Coding Muscles”, which includes responses from Catalyst and Valora Technologies;
  • This one, also from Evan Koblentz, a blog post from EDD Update, where Recommind General Counsel and Vice President Craig Carpenter acknowledges that Recommind failed to obtain a trademark for the term Predictive Coding (though Recommind is still using the ™ symbol on the term Predictive Coding on this page);
  • Three blog posts in four days from Sharon D. Nelson of Ride the Lightning blog, which debate the enforceability of the patent and include a response from OrcaTec, noting that Recommind’s implied threat of litigation is “nothing more than an attempt to bully the market place”.

There are several other articles and blog posts regarding the topic, but if I listed them all, I’d have no room left for anything new!  Sorry that I couldn’t include them all.

I reached out to Bill Dimm, founder of Hot Neuron LLC, makers of Clustify, which clusters documents into groups for effective, expedited review, and asked him for his thoughts about the Recommind press release and patent.  Here are his comments:

"Recommind's press release would have been accurately titled 'Recommind Patents a Method for Predictive Coding,' but it went with the much more provocative title 'Recommind Patents Predictive Coding,' implying  that its patent covers every conceivable way of doing predictive coding.  The only way I can see that being accurate is if you DEFINE predictive coding to be exactly the procedure outlined in claim 1 of Recommind's patent.  Of course, 'predictive coding' is a relatively new term, so the definition is up for debate.  The patent itself says:

'Predictive coding refers to the capability to use a small set of coded documents (or partially coded documents) to predict document coding of a corpus.' That sure sounds like it allows for a lot of possibilities beyond the procedure in claim 1 of the patent.  The press release goes on to say: 'ONLY [emphasis is mine] Recommind's patented, iterative, computer-assisted approach can 'bend the cost curve' of document review.'  Really?  So, Recommind has the ONLY product in the industry that works?  A few of us disagree.  Even clustering, which Recommind claims does not qualify as predictive coding, will bend the cost curve because the efficiency boost it provides increases with the size of the document set.

Moving on from the press release to the patent itself, I would recommend reading claim 1 if you are interested in such things.  It is the most general method that the USPTO allowed Recommind to claim –  the other claims are all dependent claims that describe more specific embodiments of claim 1, presumably so that Recommind would have a leg left to stand on if prior art was found to invalidate claim 1.  Claim 1 describes a procedure for predictive coding that involves quite a few steps.  It is my understanding (I am NOT a lawyer) that the patent is irrelevant for any predictive coding procedure that does not include every single one of the steps listed in claim 1.  Since claim 1 includes things like identification cycles, rolling loads, and random sampling, it seems unlikely that existing products would accidentally infringe on the patent.

As far as Clustify is concerned, Recommind's patent is irrelevant since our procedure for predictive coding is different.  In fact, I explained in a presentation at a recent conference why random sampling is a very inefficient approach (something that has been known for decades in other fields), so I wouldn't even be tempted to follow Recommind's procedure."

So, what do you think?  Will the Recommind predictive coding patent allow them to rule predictive coding?  Or only their specific approach?  Will LeBron James ever win a championship?  Please share any comments you might have or if you’d like to know more about a particular topic.

Full disclosure: Hot Neuron is a partner of Trial Solutions, which has used their product, Clustify, in various client projects.

eDiscovery Case Law: Downloading Confidential Information Leads to Motion to Compel Production

The North Dakota District Court has recently granted a motion to compel production of electronic evidence, requiring imaging of computer hard drives, in a case involving the possible electronic theft of trade secrets.

In Weatherford U.S., L.P. v. Chase Innis and Noble Casings Inc., No. 4:09-cv-061, 2011 WL 2174045 (D.N.D. June 2, 2011), the court ruled to allow the plaintiff to select and hire a forensic expert at its own expense to conduct imaging of the defendants’ hard drives. The purpose of this investigation was to discern whether or not confidential data that was downloaded from the plaintiff’s computers was, in fact, used in the building of the defendants’ own oil services firm.

Although the judge noted that courts are generally “cautious” in authorizing such hard drive imaging, this motion was substantiated by the defendant Innis’s “acknowledgment that he downloaded [plaintiff’s] files to a thumb drive without permission.”  The court believed that the circumstances of the case warranted further investigation into the defendant’s computer history:

  • The plaintiff, Weatherford US LP, had previously alleged that Chase Innis, a former employee, had downloaded confidential and proprietary information and used it to his advantage in starting his own competing company, Noble Casing Inc.
  • Innis had admitted to returning to Weatherford US offices late in the evening of the day he was terminated and downloading files onto a thumb drive without permission. Two weeks later, he launched his own competing oil services company, the co-defendant in this case, Noble Casing Inc. However, Innis maintains that he did not later access the files stored on his thumb drive and never used them in the process of starting his own company.
  • Contrary to these assertions, forensic examination of the thumb drive showed that the files were later accessed; whether or not they were instrumental in the startup of Noble Casing Inc. remains in question.
  • The plaintiff requested access to the defendant’s computers in the pursuit of previously subpoenaed documents, proposing that they select, hire, and pay for the services of a forensic investigator to image the defendants’ hard drives.
  • The defendants objected, proposing instead that an expert be chosen in agreement by all parties.
  • The court ruled in favor of the plaintiff’s motion in this instance, with the stipulation that all imaged materials would be shown to the defendants to screen for privilege before being shared with the plaintiff.
  • The court maintained that it is not unusual for imaging of hard drives to be allowed by the court in cases such as this, “particularly in cases where trade secrets and electronic evidence are both involved.”

So, what do you think?  Do you agree that Weatherford should have been allowed to examine images of the defendants’ hard drives, or should Innis’ privacy and that of his company have been protected?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Avoiding eDiscovery Nightmares: 10 Ways CEOs Can Sleep Easier

 

I found this article in the CIO Central blog on Forbes.com from Robert D. Brownstone – it’s a good summary of issues for organizations to consider so that they can avoid major eDiscovery nightmares.  The author counts down his top ten list David Letterman style (clever!) to provide a nice, easy-to-follow summary of the issues.  Here’s a summary recap, with my ‘two cents’ on each item:

10. Less is more: The U.S. Supreme Court ruled unanimously in 2005 in the Arthur Andersen case that a “retention” policy is actually a destruction policy.  It’s important to routinely dispose of old data that is no longer needed, so that less data is subject to discovery, and it’s just as important to know where the remaining data resides.  My two cents: A data map is a great way to keep track of where the data resides.

9. Sing Kumbaya: They may speak different languages, but you need to find a way to bridge the communication gap between Legal and IT to develop an effective litigation-preparedness program.  My two cents: Require cross-training so that each department can understand the terms and concepts important to the other.  And, don’t forget the records management folks!

8. Preserve or Perish: Assign the litigation hold protocol to one key person, either a lawyer or a C-level executive, to decide when a litigation hold must be issued.  Ensure an adequate process and memorialize steps taken – and not taken.  My two cents: The emphasis on memorializing is deliberate, because an organization that has a defined process and the documentation to back it up is much more likely to be given leeway in the courts than a company that doesn’t document its decisions.

7. Build the Three-Legged Stool: A successful eDiscovery approach involves knowledgeable people, great technology, and up-to-date written protocols.  My two cents: Up-to-date written protocols are the first thing to slide when people get busy – don’t let it happen.

6. Preserve, Protect, Defend: Your techs need the knowledge to avoid altering metadata, maintain chain-of-custody information and limit access to a working copy for processing and review.  My two cents: A good review platform will assist greatly in all three areas.

5. Natives Need Not Make You Restless: Consider exchanging files to be produced in their original/”native” formats to avoid huge out-of-pocket costs of converting thousands of files to image format.  My two cents: Be sure to address how redactions will be handled as some parties prefer to image those while others prefer to agree to alter the natives to obscure that information.

4. Get M.A.D.?  Then Get Even: Apply the Mutually Assured Destruction (M.A.D.) principle to agree with the other side to take costly volumes of data, such as digital voicemails and back-up data created down the road, off the table.  My two cents: That’s assuming, of course, you have the same levels of data.  If one party has a lot more data than the other party, there may be no incentive for that party to agree to concessions.

3. Cooperate to Cull Aggressively and to Preserve Clawback Rights: Setting expectations regarding culling efforts and reaching a clawback agreement with opposing counsel enables each side to cull more aggressively to reduce eDiscovery costs.  My two cents: Some parties will agree on search terms up front while others will feel that gives away case strategy, so the level of cooperation may vary from case to case.

2. QA/QC: Employ Quality Assurance (QA) tests throughout review to ensure a high accuracy rate, then perform Quality Control (QC) testing before the data goes out the door, building time in the schedule for that QC testing.  Also, consider involving a search-methodology expert.  My two cents: I cannot stress that last point enough – the ability to illustrate how you got from the large collection set to the smaller production set will be imperative to responding to any objections you may encounter to the produced set.

1. Never Drop Your Laptop Bag and Run: Dig in, learn as much as you can and start building repeatable, efficient approaches.  My two cents: It’s the duty of your attorneys and providers to demonstrate competency in eDiscovery best practices.  How will you know whether they have or not unless you develop that competency yourself?

So, what do you think?  Are there other ways for CEOs to avoid eDiscovery nightmares?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Message Thread Review Saves Costs and Improves Consistency

 

Insanity is doing the same thing over and over again and expecting a different result.  But, in ESI review, it can be even worse when you get a different result.

One of the biggest challenges when reviewing ESI is identifying duplicates so that your reviewers aren’t reviewing the same files again and again.  Not only does that drive up costs unnecessarily, but it could lead to problems if the same file is categorized differently by different reviewers (for example, inadvertent production of a duplicate of a privileged file if it is not correctly categorized).

Of course, there are a number of ways to identify duplicates.  Exact duplicates (files that contain the exact same content in the same file format) can be identified through hash values, which act as a digital fingerprint of the content of the file.  MD5 and SHA-1 are the most popular hashing algorithms; files with matching hash values are exact duplicates and can be removed from the review population.  Since many of the same emails are sent to multiple parties and the same files are stored on different drives, deduplication through hashing can save considerable review costs.
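For the technically inclined, here is a minimal sketch (in Python, independent of any particular review platform) of what hash-based deduplication boils down to: compute an MD5 or SHA-1 digest of each file's content and keep only the first file seen for each digest.  The folder path in the example is hypothetical.

```python
import hashlib
from pathlib import Path

def file_hash(path, algorithm="sha1", chunk_size=1024 * 1024):
    """Return the hex digest of a file, reading it in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def deduplicate(root):
    """Group files under 'root' by hash; return unique files and duplicates."""
    seen = {}          # digest -> first file seen with that content
    duplicates = []    # (duplicate file, original it matches)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = file_hash(path)
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path
    return list(seen.values()), duplicates

# Example (hypothetical collection folder):
# unique, dupes = deduplicate("collection/custodian_a")
# print(f"{len(unique)} unique files, {len(dupes)} exact duplicates removed")
```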

Sometimes, files are not exact duplicates but contain the same (or almost the same) information.  One example is a Word document published to an Adobe PDF file – the content is the same, but the file format is different, so the hash value will be different.  Near-deduplication can be used to identify files where most or all of the content matches so they can be verified as duplicates and eliminated from review.
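Commercial near-deduplication engines use more sophisticated techniques, but the core idea can be sketched as comparing extracted text and flagging pairs whose similarity exceeds a threshold.  The snippet below is an illustration only, using Python's difflib; it assumes the text has already been extracted from the Word and PDF files, and the document names and text are made up.

```python
from difflib import SequenceMatcher

def text_similarity(text_a, text_b):
    """Return a 0.0-1.0 similarity ratio between two extracted texts."""
    return SequenceMatcher(None, text_a, text_b).ratio()

def find_near_duplicates(docs, threshold=0.9):
    """docs: dict of doc_id -> extracted text.  Returns pairs above threshold.

    Comparing every pair is quadratic; production tools use smarter indexing
    to scale, but the principle is the same.
    """
    pairs = []
    ids = list(docs)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if text_similarity(docs[a], docs[b]) >= threshold:
                pairs.append((a, b))
    return pairs

# The Word document and its published PDF share the same text, so their
# extracted contents score near 1.0 even though their hash values differ.
docs = {
    "report.docx": "Quarterly drilling results for the leased acreage ...",
    "report.pdf":  "Quarterly drilling results for the leased acreage ...",
}
print(find_near_duplicates(docs))   # [('report.docx', 'report.pdf')]
```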

Then, there is message thread analysis.  Of course, most email messages are part of a larger discussion, which could be just between two parties, or include a number of parties in the discussion.  To review each email in the discussion thread would result in much of the same information being reviewed over and over again.  Instead, message thread analysis pulls those messages together and enables them to be reviewed as an entire discussion.  That includes any side conversations within the discussion that may or may not be related to the original topic (e.g., a side discussion about lunch plans or last night’s episode of American Idol).
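Review tools generally do this grouping against Outlook's internal conversation identifiers, but the underlying idea can be sketched with standard email headers: messages that trace back to the same root Message-ID (via the References and In-Reply-To headers) belong to the same thread.  This is a simplification for illustration, not how any particular product implements it.

```python
from email import message_from_string

def thread_root(msg):
    """Return an identifier for the first message in the thread.

    The References header lists ancestor Message-IDs oldest-first, so its
    first entry (when present) points at the root of the conversation.
    Otherwise fall back to In-Reply-To (the immediate parent) or the
    message's own Message-ID (the message starts its own thread).
    """
    refs = msg.get("References", "").split()
    if refs:
        return refs[0]
    return msg.get("In-Reply-To", msg.get("Message-ID", "")).strip()

def group_into_threads(raw_messages):
    """raw_messages: list of RFC 5322 message strings.  Returns root -> messages."""
    threads = {}
    for raw in raw_messages:
        msg = message_from_string(raw)
        threads.setdefault(thread_root(msg), []).append(msg)
    return threads

# Reviewing the last message in each thread (plus any side branches) covers
# the whole discussion without re-reading every intermediate reply.
```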

FirstPass®, powered by Venio FPR™, is one example of an application that provides a mechanism for message thread analysis of Outlook emails that pulls the entire thread into one conversation for review as one big “tree”.  The “tree” representation gives you the ability to see all of the conversations within the discussion and focus your review on the last emails in each conversation to see what is said without having to review each email.  Side conversations are “branches” of the tree and FirstPass enables you to tag individual messages, specific branches or the entire tree as responsive, non-responsive, privileged or some other designation.  Also, because of the way that Outlook tracks emails in the thread, FirstPass identifies messages that are missing from the collection with a red X, enabling you to investigate and determine if additional collection is needed and avoiding potential spoliation claims.

With message thread analysis, you can minimize review of duplicative information within emails, saving time and cost and ensuring consistency in the review.

So, what do you think?  Does your review tool support message thread analysis?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Competency Ethics – It’s Not Just About the Law Anymore

 

A few months ago at LegalTech New York, I conducted a thought leader interview with Tom O’Connor of Gulf Coast Legal Technology Center, who didn’t exactly mince words when talking about the trend for attorneys to “finally tak[e] technology seriously”.  As he noted, “lawyers are finally trying to take some time to try to get up to speed – whining and screaming pitifully all the way about how it’s not fair, and the sanctions are too high and there’s too much data.  Get a life, get a grip.  Use the tools that are out there that have been given to you for years.” 

Strong words, indeed.  The American Bar Association (ABA) Model Rules of Professional Conduct (Model Rules) require that an attorney possess and demonstrate a certain requisite level of knowledge in order to be considered competent to handle a given matter.  Specifically, Model Rule 1.1 states that, "[a] lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation."

Preparation not only means understanding a specific area of the law (for example, antitrust or patent law, both highly specialized).  It also means having the technical knowledge and skills necessary to serve the client in the area of discovery.

The ethical responsibilities of counsel these days include competently directing and managing the identification, preservation, collection, processing, analysis, review and production of electronically stored information (ESI) required to be produced pursuant to lawful discovery requests.  If counsel does not have that level of competency in a particular area, he or she is obligated either to acquire the knowledge or skill necessary to support those needs, or to include someone else who does have the requisite skills as part of the representation.

Not too long ago, I met with an attorney and discussed how they handled preservation obligations with their clients.  The attorney indicated that he expected his clients to self-manage their own preservation and collection.  When I asked him why he didn’t try to get more involved to make sure it was being handled properly, he said, “I don’t want to alarm them.  They might decide they need a bigger firm.”

Recent case law is full of cases where counsel didn’t fully understand their eDiscovery obligations, and got themselves and their clients “burned” in the process.  If your organization gets involved in litigation, make sure to include eDiscovery competence among the factors you consider when determining counsel qualifications to represent you.

So, what do you think?  Is your counsel eDiscovery savvy?  If not, do they use a provider that is?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: 4 Steps to Effective eDiscovery With Software Analytics

 

I read an interesting article from Texas Lawyer via Law.com entitled “4 Steps to Effective E-Discovery With Software Analytics” that has some interesting takes on project management principles related to eDiscovery, and I’ve interjected some of my thoughts into the analysis below.  A copy of the full article is located here.  The steps are as follows:

1. With the vendor, negotiate clear terms that serve the project's key objectives.  The article notes the importance of tying each collection and review milestone (e.g., collecting and imaging data; filtering data by file type; removing duplicates; processing data for review in a specific review platform; processing data to allow for optical character recognition (OCR) searching; and converting data into a tag image file format (TIFF) for final production to opposing counsel) to contract terms with the vendor.

The specific milestones will vary – for example, conversion to TIFF may not be necessary if the parties agree to a native production – so it’s important to know the size and complexity of the project, and choose only an experienced eDiscovery vendor who can handle the variations.

2. Collect and process data.  Forensically sound data collection and culling of obviously unresponsive files (such as system files) to drastically decrease the overall review costs are key services that a vendor provides in this area.  As we’ve noted many times on this blog, effective culling can save considerable review costs – each gigabyte (GB) culled can save $16-$18K in attorney review costs.
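To see why culling matters so much, here is a quick back-of-the-envelope calculation using the $16-$18K per GB figure above; the collection size and culling percentage in the example are hypothetical.

```python
def culling_savings(collected_gb, culled_fraction, cost_per_gb=17_000):
    """Estimate attorney review dollars saved by culling before review.

    cost_per_gb uses the midpoint of the $16K-$18K per GB range cited above.
    """
    culled_gb = collected_gb * culled_fraction
    return culled_gb, culled_gb * cost_per_gb

# Hypothetical matter: 100 GB collected, 60% removed as system files,
# exact duplicates, and clearly non-responsive file types.
gb, dollars = culling_savings(collected_gb=100, culled_fraction=0.60)
print(f"Culling {gb:.0f} GB avoids roughly ${dollars:,.0f} in review costs")
# -> Culling 60 GB avoids roughly $1,020,000 in review costs
```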

The article notes that a hidden cost is the OCR process of translating extracted text into a searchable form and that it’s an optimal negotiation point with the vendor.  This may have been true when most collections were paper-based, but as most collections today are electronic, the percentage of documents requiring OCR is considerably less than it used to be.  However, it is important to be prepared for the fact that some native files will be “image only”, such as TIFFs and scanned PDFs – those will require OCR to be effectively searched.

3. Select a data and document review platform.  Factors such as ease of use, robustness, and reliability of analytic tools, support staff accessibility to fix software bugs quickly, monthly user and hosting fees, and software training and support fees should be considered when selecting a document review platform.

The article notes that a hidden cost is selecting a platform with which the firm’s litigation support staff has no experience as follow-up consultation with the vendor could be costly.  This can be true, though a good vendor training program and an intuitive interface can minimize or even eliminate this component.

The article also notes that to take advantage of the vendor’s more modern technology “[a] viable option is to use a vendor's review platform that fits the needs of the current data set and then transfer the data to the in-house system”.  I’m not sure why the need exists to transfer the data back – there are a number of vendors that provide a cost-effective solution appropriate for the duration of the case.

4. Designate clear areas of responsibility.  By doing so, you minimize or eliminate inefficiencies in the project and the article mentions the RACI matrix to determine who is responsible (individuals responsible for performing each task, such as review or litigation support), accountable (the attorney in charge of discovery), consulted (the lead attorney on the case), and informed (the client).

Managing these areas of responsibility effectively is probably the biggest key to project success and the article does a nice job of providing a handy reference model (the RACI matrix) for defining responsibility within the project.

So, what do you think?  Do you have any specific thoughts about this article?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Trends: Sedona Conference Database Principles

 

A few months ago, eDiscovery Daily posted about discovery of databases and how few legal teams understand database discovery and know how to handle it.  We provided a little pop quiz to test your knowledge of databases, with the answers here.

Last month, The Sedona Conference® Working Group on Electronic Document Retention & Production (WG1) published the Public Comment Version of The Sedona Conference® Database Principles – Addressing the Preservation & Production of Databases & Database Information in Civil Litigation to provide guidance and recommendations to both requesting and producing parties to simplify discovery of databases and information derived from databases.  You can download the publication here.

As noted in the Executive Overview of the publication, some of the issues that make database discovery so challenging include:

  • More enterprise-level information is being stored in searchable data repositories, rather than in discrete electronic files,
  • The diverse and complicated ways in which database information can be stored has made it difficult to develop universal “best-practice” approaches to requesting and producing information stored in databases,
  • Retention guidelines that make sense for archival databases (databases that add new information without deleting past records) rapidly break down when applied to transactional databases where much of the system’s data may be retained for a limited time – as short as thirty days or even thirty seconds.

The commentary is broken into three primary sections:

  • Section I: Introduction to databases and database theory,
  • Section II: Application of The Sedona Principles, designed for all forms of ESI, to discovery of databases,
  • Section III: Proposal of six new Principles that pertain specifically to databases with commentary to support the Working Group’s recommendations.  The principles are stated as follows:
    • Absent a specific showing of need or relevance, a requesting party is entitled only to database fields that contain relevant information, not the entire database in which the information resides or the underlying database application or database engine.
    • Due to differences in the way that information is stored or programmed into a database, not all information in a database may be equally accessible, and a party’s request for such information must be analyzed for relevance and proportionality.
    • Requesting and responding parties should use empirical information, such as that generated from test queries and pilot projects, to ascertain the burden to produce information stored in databases and to reach consensus on the scope of discovery.
    • A responding party must use reasonable measures to validate ESI collected from database systems to ensure completeness and accuracy of the data acquisition.
    • Verifying information that has been correctly exported from a larger database or repository is a separate analysis from establishing the accuracy, authenticity, or admissibility of the substantive information contained within the data.
    • The way in which a requesting party intends to use database information is an important factor in determining an appropriate format of production.

To submit a public comment, you can download a public comment form here, complete it and fax (yes, fax) it to The Sedona Conference® at 928-284-4240.  You can also email a general comment to them at tsc@sedona.net.

eDiscovery Daily will be delving into this document in more detail in future posts.  Stay tuned!

So, what do you think?  Do you have a need for guidelines for database discovery?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Your ESI Collection May Be Larger Than You Think

 

Here’s a sample scenario: You identify custodians relevant to the case and collect files from each.  Roughly 100 gigabytes (GB) of Microsoft Outlook email PST files and loose “efiles” is collected in total from the custodians.  You identify a vendor to process the files to load into a review tool, so that you can perform first pass review and, eventually, linear review and produce the files to opposing counsel.  After processing, the vendor sends you a bill – and they’ve charged you to process over 200 GB!!  What happened?!?

Did the vendor accidentally “double-bill” you?  That would be great – but no.  There’s a much more logical explanation and, unfortunately, you may wind up paying a lot more to process these files than you expected.

Many of the files in most ESI collections are stored in what are known as “archive” or “container” files.  For example, as noted above, Outlook emails are typically saved for each custodian in a personal storage (.PST) file format, which is an expanding container file. For most custodians, all of their email (and the corresponding attachments, if present) resides in a few PST files.  The scanned size for the PST file is the size of the file on disk.

Did you ever see one of those vacuum bags that you store clothes in and then suck all the air out so that the clothes won’t take up as much space?  The PST file is like one of those vacuum bags – it typically stores the emails and attachments in a compressed format to save space.  When the emails and attachments are processed into a review tool, they are expanded to their normal size.  This expanded size can be 1.5 to 2 times larger than the scanned size (or more).  And, that’s what many vendors will bill on – the expanded size.

There are other types of archive container files that compress their contents – .zip and .rar files are two examples of compressed container files.  These files are used not only to compress files for storage on hard drives, but also to compact or group a set of files when transmitting them, usually in – you guessed it – email.  With email comprising a majority of most ESI collections and the popularity of other archive container files for compressing file collections, the expanded size of your collection may be considerably larger than it appears when stored on disk.  It’s important to be prepared for that and know your options when processing that data, so you can effectively anticipate those processing costs.
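Here is a rough sketch of that arithmetic, assuming the 1.5x to 2x expansion factor mentioned above (actual ratios vary from collection to collection):

```python
def expanded_size_estimate(scanned_gb, low_factor=1.5, high_factor=2.0):
    """Estimate the post-processing ("expanded") size of a compressed collection."""
    return scanned_gb * low_factor, scanned_gb * high_factor

# The scenario above: 100 GB of PSTs and loose efiles as scanned on disk.
low, high = expanded_size_estimate(100)
print(f"Expect to be billed for roughly {low:.0f}-{high:.0f} GB after processing")
# -> Expect to be billed for roughly 150-200 GB after processing
```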

So, what do you think?  Have you ever been surprised by processing costs of your ESI?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Testing Your Search Using Sampling

Friday, we talked about how to determine an appropriate sample size to test your search results as well as the items NOT retrieved by the search, using a site that provides a sample size calculator.  Yesterday, we talked about how to make sure the sample size is randomly selected.

Today, we’ll walk through an example of how you can test and refine a search using sampling.

TEST #1: Let’s say in an oil company we’re looking for documents related to oil rights.  To try to be as inclusive as possible, we will search for “oil” AND “rights”.  Here is the result:

  • Files retrieved with “oil” AND “rights”: 200,000
  • Files NOT retrieved with “oil” AND “rights”: 1,000,000

Using the sample size calculator site we identified before, we determine a sample size of 662 for the retrieved files and 664 for the non-retrieved files to achieve a 99% confidence level with a margin of error of 5%.  We then use this site to generate random numbers and proceed to review each item in the retrieved and NOT retrieved item sets to determine responsiveness to the case.  Here are the results:

  • Retrieved Items: 662 reviewed, 24 responsive, 3.6% responsive rate.
  • NOT Retrieved Items: 664 reviewed, 661 non-responsive, 99.5% non-responsive rate.

Nearly every item in the NOT retrieved category was non-responsive, which is good.  But, only 3.6% of the retrieved items were responsive, which means our search was WAY over-inclusive.  At that rate, 192,800 out of 200,000 files retrieved will be NOT responsive and will be a waste of time and resources to review.  Why?  Because, as we determined during the review, almost every published and copyrighted document in our oil company contains the phrase “All Rights Reserved” and will be retrieved.
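For those curious what a sample size calculator is doing under the hood, the figures used throughout this example (including the later tests below) are consistent with the standard formula for estimating a proportion at a given confidence level and margin of error, with a finite population correction.  The sketch below is an assumption about the calculation, since the site itself isn't named here:

```python
import math
import random

def sample_size(population, confidence_z=2.576, margin_of_error=0.05, p=0.5):
    """Sample size for estimating a proportion, with finite population correction.

    confidence_z=2.576 corresponds to a 99% confidence level; p=0.5 is the
    most conservative assumption about the underlying proportion.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(200_000))     # 662  (retrieved set in TEST #1)
print(sample_size(1_000_000))   # 664  (NOT retrieved set)
print(sample_size(1_500))       # 461  (retrieved set in TEST #2)
print(sample_size(5_700))       # 595  (retrieved set in TEST #3)

# Selecting which documents to review is then just a random draw:
review_ids = random.sample(range(1, 200_001), sample_size(200_000))
```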

TEST #2: Let’s try again.  This time, we’ll conduct a phrase search for “oil rights” (which requires those words as an exact phrase).  Here is the result:

  • Files retrieved with “oil rights”: 1,500
  • Files NOT retrieved with “oil rights”: 1,198,500

This time, we determine a sample size of 461 for the retrieved files and (again) 664 for the NOT retrieved files to achieve a 99% confidence level with a margin of error of 5%.  Even though we still have a sample size of 664 for the NOT retrieved files, we generate a new list of random numbers to review those items, as well as the 461 randomly selected retrieved items.  Here are the results:

  • Retrieved Items: 461 reviewed, 435 responsive, 94.4% responsive rate.
  • NOT Retrieved Items: 664 reviewed, 523 non-responsive, 78.8% non-responsive rate.

Nearly every item in the retrieved category was responsive, which is good.  But, only 78.8% of the NOT retrieved items were non-responsive, which means over 20% of the NOT retrieved items were actually responsive to the case (we also failed to retrieve 8 of the items identified as responsive in the first iteration).  So, now what?

TEST #3: If you saw this previous post, you know that proximity searching is a good alternative for finding hits that are close to each other without requiring the exact phrase.  So, this time, we’ll conduct a proximity search for “oil within 5 words of rights”.  Here is the result:

  • Files retrieved with “oil within 5 words of rights”: 5,700
  • Files NOT retrieved with “oil within 5 words of rights”: 1,194,300

This time, we determine a sample size of 595 for the retrieved files and (once again) 664 for the NOT retrieved files, generating a new list of random numbers for both sets of items.  Here are the results:

  • Retrieved Items: 595 reviewed, 542 responsive, 91.1% responsive rate.
  • NOT Retrieved Items: 664 reviewed, 655 non-responsive, 98.6% non-responsive rate.

Over 90% of the items in the retrieved category were responsive AND nearly every item in the NOT retrieved category was non-responsive, which is GREAT.  Also, all but one of the items previously identified as responsive was retrieved.  So, this is a search that appears to maximize recall and precision.

Had we proceeded with the original search, we would have reviewed 200,000 files – 192,800 of which would have been NOT responsive to the case.  By testing and refining, we only had to review 8,815 files – 3,710 sample files reviewed plus the remaining retrieved items from the third search (5,700 – 595 = 5,105) – most of which ARE responsive to the case.  We saved tens of thousands of dollars in review costs while still retrieving most of the responsive files, using a defensible approach.
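A quick check of that arithmetic, using the figures above:

```python
sample_reviews = 662 + 664 + 461 + 664 + 595 + 664   # the six samples reviewed
remaining_retrieved = 5_700 - 595                     # TEST #3 hits not yet sampled
total_with_sampling = sample_reviews + remaining_retrieved

print(sample_reviews)                  # 3710
print(total_with_sampling)             # 8815
print(200_000 - total_with_sampling)   # 191185 fewer documents than reviewing
                                       # every hit from the original search
```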

Keep in mind that this is a simple example — we’re not taking into account misspellings and other variations we may want to include in our criteria.

So, what do you think?  Do you use sampling to test your search results?   Please share any comments you might have or if you’d like to know more about a particular topic.