
Plaintiffs Take the Supreme Step in Da Silva Moore – eDiscovery Case Law

As mentioned in Law Technology News (‘Da Silva Moore’ Goes to Washington), attorneys representing lead plaintiff Monique Da Silva Moore and five other employees have filed a petition for certiorari with the Supreme Court, arguing that New York Magistrate Judge Andrew Peck, who approved an eDiscovery protocol agreed to by the parties that included predictive coding technology, should have recused himself given his previous public statements expressing strong support of predictive coding.

Da Silva Moore and her co-plaintiffs argued in the petition that the Second Circuit Court of Appeals was too deferential to Judge Peck when denying the plaintiffs’ petition to recuse him, asking the Supreme Court to order the Second Circuit to use the less deferential “de novo” standard.  As noted in the LTN article:

“The employees also cited a circuit split in how appellate courts reviewed judicial recusals, pointing out that the Seventh Circuit reviews disqualification motions de novo. Besides resolving the circuit split, the employees asked the Supreme Court to find that the Second Circuit’s standard was incorrect under the law. Citing federal statute governing judicial recusals, the employees claimed that the law required motions for disqualification to be reviewed objectively and that a deferential standard flew in the face of statutory intent. “Rather than dispelling the appearance of a self-serving judiciary, deferential review exacerbates the appearance of impropriety that arises from judges deciding their own cases and thus undermines the purposes of [the statute],” wrote the employees in their cert petition.”

This battle over predictive coding and Judge Peck’s participation has continued for 15 months.  For a recap of the events during that time, click here.

So, what do you think?  Is this a “hail mary” for the plaintiffs and will it succeed?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

In False Claims Act Case, Reimbursement of eDiscovery Costs Awarded to Plaintiff – eDiscovery Case Law

In United States ex rel. Becker v. Tools & Metals, Inc., No. 3:05-CV-0627-L (N.D. Tex. Mar. 31, 2013), a qui tam False Claims Act litigation, the plaintiffs sought, and the court awarded, costs for, among other things, uploading ESI, creating a Relativity index, and processing data over the objection that expenses should be limited to “reasonable out-of-pocket expenses which are part of the costs normally charged to a fee-paying client.” The court also approved electronic hosting costs, rejecting a defendant’s claim that “reasonableness is determined based on the number of documents used in the litigation.” However, the court refused to award costs for project management and for extracting data from hard drives where the plaintiff could have used better means to conduct a “targeted extraction of information.”

One of the defendants, Lockheed Martin, appealed the magistrate judge’s award of costs on the grounds that the recovery of expenses should be limited to “reasonable out-of-pocket expenses which are part of the costs normally charged to a fee-paying client,” as allowed under 42 U.S.C. § 1988. As part of its argument, Lockheed suggested the following:

“(1) Spencer’s request to be reimbursed for nearly $1 million in eDiscovery services is unreasonable and the magistrate’s recommendation does not cite any authority holding that a request for expenses in the amount sought by Spencer for eDiscovery is reasonable and reimbursable; (2) an award of $174,395.97 for uploading ESI and creating a search index is unfounded and arbitrary because it requires Lockheed to pay for Spencer’s decision to request ESI in a format that was different from the format that his vendor actually wanted; and (3) the recommended award punishes Lockheed for Spencer’s failure to submit detailed expense records because the actual cost of uploading and creating a search index “may have been substantially less” than the magistrate judge’s $174,395.97 estimate.” {emphasis added}

The district judge found that the “FCA does not limit recovery of expenses to those normally charged to a fee-paying client”: instead, 31 U.S.C. § 3730(d)(1)-(2) provides that “a qui tam plaintiff ‘shall . . . receive an amount for reasonable expenses which the court finds to have been necessarily incurred, plus reasonable attorneys’ fees and costs. All such expenses, fees, and costs shall be awarded against the defendant.’” The district judge agreed with the magistrate’s finding, which allowed the recovery of these expenses. Although the defendant offered an affidavit of an expert eDiscovery consultant that suggested the amount the plaintiff requested was unreasonable, the magistrate found the costs of data processing and uploading and the creation of a Relativity index permissible; however, she denied the recovery of the more than $38,000 attributable to repairing and reprocessing allegedly broken or corrupt files produced by Lockheed because Lockheed had produced the documents in the requested format. She also found that Spencer “could have and should have simply requested Lockheed to reproduce the data files at no cost rather than embarking on the expensive undertaking of repairing and reprocessing the data.”

Because the plaintiff’s “billing records did not segregate the costs for reprocessing and uploading the data and creating a searchable index,” the magistrate judge apportioned the vendor’s expenses evenly between reprocessing, uploading, and creating an index. The district court agreed and rejected Lockheed’s argument that the actual cost “may have been substantially less” as “purely speculative.”

Lockheed also complained about the magistrate judge’s award of more than $271,000 for electronic hosting costs, arguing that the plaintiff failed to show that the expenses were “reasonable and necessarily incurred” and that the magistrate’s report did not cite any authority showing that this expense was recoverable. Lockheed also argued that the vendor’s bill of “$440,039 for hosting of and user access to the documents produced in the litigation is unreasonable under the circumstances because Spencer used only five of these documents during the litigation and did not notice a single deposition.” {emphasis added}

The district judge found that the data-hosting expenses were recoverable because the FCA does not limit the types of recoverable expenses. The district judge also agreed with the magistrate judge’s reduction of the requested hosting fees by nearly 40 percent (over Spencer’s objection) by limiting the time frame of recovery to the period before settlement was on the table and the number of database user accounts requested. He rejected Lockheed’s “contention that reasonableness is determined based on the number of documents used in the litigation.” He noted that in this data-intensive age, many documents collected and reviewed may not be responsive or used in the litigation; however, this “does not necessarily mean that the documents do not have to be reviewed by the parties for relevance by physically examining them or through the use of litigation software with searching capability to assist parties in identifying key documents.”

The district court also agreed with the magistrate’s decision to uphold Lockheed’s objection to the amount Spencer spent on extracting ESI from hard drives and related travel costs. The magistrate found that Spencer did not need to review everything on the hard drives; instead, he should have conducted a “targeted extraction of information” as Lockheed did or conducted depositions “to determine how best to conduct more limited discovery” to save time and expense. The magistrate deducted nearly $65,000 from Spencer’s request, awarding him $20,000. The district court opined:

“With the availability of technology and the capability of eDiscovery vendors today in this area, the court concludes that it was unreasonable for Spencer to simply image all of the hard drives without at least first considering or attempting a more targeted and focused extraction. Also, lack of familiarity with technology in this regard is not an excuse and does not relieve parties or their attorneys of their duty to ensure that the services performed and fees charged by third party vendors are reasonable, particularly when recovery of such expenses is sought in litigation. The court therefore overrules this objection.”

Finally, the district court upheld the magistrate judge’s determination that Spencer was not entitled to recover his project management costs. Spencer argued that the “IT management of the electronic database is critical, especially when poor quality electronic evidence is produced. All complex cases of this magnitude require professional IT support.” Because Spencer failed to adequately describe the services provided and because the record did not support the need for a project manager, the magistrate declined to reimburse this expense.

Ultimately, the court reduced the costs by $1,650 and the fees by $85,883, awarding the plaintiffs more than $1.6 million in fees and nearly $550,000 in costs. In closing, the district judge warned the parties that if they filed a motion for reconsideration or to amend the judgment without good cause, he would impose monetary sanctions against them.

So, what do you think?  Were the right cost reimbursements awarded?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

More Updates from the EDRM Annual Meeting – eDiscovery Trends

Yesterday, we shared some general observations from the Annual Meeting of the Electronic Discovery Reference Model (EDRM) group and discussed some significant efforts and accomplishments by the (suddenly heavily talked about) EDRM Data Set project.  Here are some updates from other projects within EDRM.

It should be noted that these are summary updates, and that the focus here is on accomplishments for the past year and deliverables that are imminent.  Over the next few weeks, eDiscovery Daily will cover each project in more depth, with more details regarding planned activities for the coming year.

Model Code of Conduct (MCoC)

The MCoC was introduced in 2011 and became available for organizations to subscribe to last year.  To learn more about the MCoC, you can read the code online here, or download it as a 22-page PDF file here.  Subscribing is easy!  To voluntarily subscribe to the MCoC, you can register on the EDRM website here.  Identify your organization, provide information for an authorized representative and answer four verification questions (truthfully, of course) to affirm your organization’s commitment to the spirit of the MCoC, and your organization is in!  You can also provide a logo for EDRM to include when adding you to the list of subscribing organizations.  Pending a survey of EDRM members to determine whether any changes are needed, this project has been completed.  Team leaders include Eric Mandel of Zelle Hofmann, Kevin Esposito of Rivulex and Nancy Wallrich.

Information Governance Reference Model (IGRM)

The IGRM team has continued to make strides and improvements on an already terrific model.  Last October, they unveiled the release of version 3.0 of the IGRM.  As their press release noted, “The updated model now includes privacy and security as primary functions and stakeholders in the effective governance of information.”  IGRM continues to be one of the most active EDRM projects, with some of the broadest participation.  This year, the early focus – as quoted from Judge Andrew Peck’s keynote speech at Legal Tech this past year – is “getting rid of the junk”.  Project leaders are Aliye Ergulen from IBM, Reed Irvin from Viewpointe and Marcus Ledergerber from Morgan Lewis.

Search

One of the best examples of the new, more agile process for creating deliverables within EDRM comes from the Search team, which released its new draft Computer Assisted Review Reference Model (CARRM), depicting the flow of a successful Computer Assisted Review project. The entire model was created in only a matter of weeks.  Early focus for the Search project for the coming year includes adjustments to CARRM (based on feedback at the annual meeting).  You can also still send your comments regarding the model to mail@edrm.net or post them on the EDRM site here.  A webinar regarding CARRM is also planned for late July.  Kudos to the Search team, including project leaders Dominic Brown of Autonomy and Jay Lieb of kCura, who took merciless ribbing for insisting (jokingly, I think) that TIFF files, unlike Generalissimo Francisco Franco, are still alive.  🙂

Jobs

In late January, the Jobs Project announced the release of the EDRM Talent Task Matrix diagram and spreadsheet, which is available in XLSX or PDF format. As noted in their press release, the Matrix is a tool designed to help hiring managers better understand the responsibilities associated with common eDiscovery roles. The Matrix maps responsibilities to the EDRM framework, so that associated eDiscovery duties can be assigned to the appropriate parties.  Project leader Keith Tom noted that next steps include surveying EDRM members regarding the Matrix, requesting and co-authoring case studies and white papers, and creating a short video on how to use the Matrix.

Metrics

In today’s session, the Metrics project team unveiled the first draft of the new Metrics model to EDRM participants!  Feedback was provided during the session and the team will make the model available for additional comments from EDRM members over the next week or so, with a goal of publishing for public comments in the next two to three weeks.  The team is also working to create a page to collect Metrics measurement tools from eDiscovery professionals that can benefit the eDiscovery community as a whole.  Project leaders Dera Nevin of TD Bank and Kevin Clark noted that June is “budget calculator month”.

Other Initiatives

As noted yesterday, there is a new project to address standards for working with native files in the different EDRM phases, led by Eric Mandel from Zelle Hofmann, as well as a new initiative to establish collection guidelines, spearheaded by Julie Brown from Vorys.  There is also an effort underway to refocus the XML project as it works to complete the 2.0 version of the EDRM XML model.  In addition, there was quite a spirited discussion as to where EDRM is heading as it approaches ten years of existence, and it will be interesting to see how the group continues to evolve over the next year or so.  As you can see, a lot is happening within the EDRM group – there’s a lot more to it than just the base Electronic Discovery Reference Model.

So, what do you think?  Are you a member of EDRM?  If not, why not?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Reporting from the EDRM Annual Meeting and a Data Set Update – eDiscovery Trends

The Electronic Discovery Reference Model (EDRM) Project was created in May 2005 by George Socha of Socha Consulting LLC and Tom Gelbmann of Gelbmann & Associates to address the lack of standards and guidelines in the electronic discovery market.  Now, beginning its ninth year of operation with its annual meeting in St. Paul, MN, EDRM is accomplishing more than ever to address those needs.  Here are some highlights from the meeting, and an update regarding the (suddenly heavily talked about) EDRM Data Set project.

Annual Meeting

Twice a year, in May and October, eDiscovery professionals who are EDRM members meet to continue the process of working together on various standards projects.  This will be my eighth year participating in EDRM at some level and, oddly enough, I’m assisting with PR and promotion (how am I doing so far?).  eDiscovery Daily has referenced EDRM and its phases many times in the blog’s more than 2 1/2 years of history – this is our 144th post that relates to EDRM!

Some notable observations about today’s meeting:

  • New Participants: More than half the attendees at this year’s annual meeting are attending for the first time.  EDRM is not just a core group of “die-hards”; it continues to find appeal with eDiscovery professionals throughout the industry.
  • Agile Approach: EDRM has adopted an Agile approach to shorten the time to complete and publish deliverables, a change in philosophy that facilitated several notable accomplishments from working groups over the past year including the Model Code of Conduct (MCoC), Information Governance Reference Model (IGRM), Search and Jobs (among others).  More on that tomorrow.
  • Educational Alliances: For the first time, EDRM has formed some interesting and unique educational alliances.  In April, EDRM teamed with the University of Florida Levin College of Law to present a day and a half conference entitled E-Discovery for the Small and Medium Case.  And, this June, EDRM will team with Bryan University to provide an in-depth, four-week E-Discovery Software & Applied Skills Summer Immersion Program for Law School Students.
  • New Working Group: A new working group, to be led by Eric Mandel of Zelle Hofmann, was formed to address standards for working with native files in the different EDRM phases.

Tomorrow, we’ll discuss the highlights for most of the individual working groups.  Given the recent amount of discussion about the EDRM Data Set group, we’ll start with that one today!

Data Set

The EDRM Enron Data Set has been around for several years and has been a valuable resource for eDiscovery software demonstration and testing (we covered it here back in January 2011).  The data in the EDRM Enron PST Data Set files is sourced from the FERC Enron Investigation release made available by Lockheed Martin Corporation.  It was reconstituted as PST files with attachments for the EDRM Data Set Project.  So, in essence, EDRM took data that was already available in the public domain and made it much more usable.  Initially, the data was made available for download on the EDRM site, then subsequently moved to Amazon Web Services (AWS).

In the past several days, there has been much discussion about the personally identifiable information (“PII”) available within the FERC release (and, consequently, the EDRM Data Set), including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers.  Consequently, the EDRM Data Set has been taken down from the AWS site.
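
As a purely illustrative aside, the most common categories of PII listed above can be flagged with simple pattern matching.  The sketch below (in Python, with simplified, hypothetical patterns) shows the general idea; note that it is not the approach the Data Set team actually took, which, as described below, relies on predictive coding technology.

```python
import re

# Illustrative-only patterns for a first-pass PII scan.  These are simplified
# (they will miss valid formats and flag false positives) and are NOT the
# predictive coding process the EDRM Data Set team used.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "date_of_birth": re.compile(r"\b(?:DOB|date of birth)[:\s]+\S+", re.IGNORECASE),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return any pattern hits found in a single document's text."""
    return {label: pattern.findall(text)
            for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

sample = "Please update my records. SSN 123-45-6789, phone (713) 555-0142."
print(scan_for_pii(sample))  # {'ssn': ['123-45-6789'], 'phone': ['(713) 555-0142']}
```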

The Data Set team, led by Michael Lappin of Nuix and Eric Robi of Elluma Discovery, has been working on a process (using predictive coding technology) to identify and remove the PII data from the EDRM Data Set.  Discussions about this process began months ago, prior to the recent discussions about the PII data contained within the set.  The team has completed this iterative process for V1 of the data set (which contains 1,317,158 items), identifying and removing 10,568 items with PII, HIPAA-protected health information and other sensitive information.  This version of the data set will be made available within the EDRM community shortly for peer review testing.  The data set team will then repeat the process for the larger V2 version of the data set (2,287,984 items).  A timetable for republishing both sets should be available soon, and the team’s efforts should pay dividends in developing and standardizing processes that eDiscovery professionals can use to identify and eliminate sensitive data in their own data sets.

The team has also implemented a Forensic Files Testing Project site where users can upload their own “modern”, non-copyrighted file samples that are typically encountered during electronic discovery processing to provide a more diverse set of data than is currently available within the Enron data set.

So, what do you think?  How has EDRM impacted how you manage eDiscovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Skip the HASH When Deduping Outlook MSG Files – eDiscovery Best Practices

As we discussed recently in this blog, Microsoft® Outlook emails can take many forms.  One of those forms is the MSG file extension, which is used to represent a self-contained unit for an individual message “family” (an email and its attachments).  MSG files can exist on your computer in the same folders as Word, Excel and other data files.  But, when it comes to deduping those MSG files, the typical approach is different.

A few years ago, I was assisting a client and collecting emails from their email archiving system for discovery, outputting the selected emails to individual MSG files (per their request).  Because this was an enterprise-wide search of email archives, the searches that I performed found the same emails again and again in different custodian folders.  There were literally hundreds of thousands of duplicate emails in this collection.  Of course, this is typical – anytime you send an email to three co-workers, all four of you have a copy of the email (assuming none of you deleted it).  If the email is responsive and your goal is to dedupe across custodians, you only want to review and produce one copy, not four.

However, had I performed a HASH value identification of duplicates on those output MSG files, I would have found no duplicates.  Why is that?

That’s because each MSG file contains a field which stores the Creation Date and Time. Because this value will be set at the date and time the MSG is saved, two emails with otherwise identical content will not be considered duplicates based on the HASH value.  Remember how “drag and drop” sets the Creation Date and Time of the copy to the current date and time?  The same thing happens when an MSG file is created.
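
To see the effect in concrete terms, here is a small sketch.  The two payloads below are simplified stand-ins (real MSG files are binary OLE compound documents), but the principle is the same: identical message content plus different stored creation times yields different hash values.

```python
import hashlib

# Simplified stand-ins for two MSG files: identical message content,
# different embedded creation timestamps.  (Real MSG files are binary
# OLE compound documents, but the effect on the hash is the same.)
msg_a = b"From: alice@example.com\nSubject: Q2 forecast\nBody: See attached.\nCreated: 2013-05-01T09:15:00Z"
msg_b = b"From: alice@example.com\nSubject: Q2 forecast\nBody: See attached.\nCreated: 2013-05-02T14:47:00Z"

print(hashlib.sha256(msg_a).hexdigest())
print(hashlib.sha256(msg_b).hexdigest())  # different digest, so file-level hashing misses the duplicate
```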

Hmmm, what to do?  Typically, the approach for MSG files is to use key metadata fields to identify duplicates.  Many processing vendors use a combination of fields typically consisting of: From, To, CC, BCC, Subject, Attachment Name, Sent Date/Time and Body of the email.  Some use those fields only on MSG files; others use them on all emails (to dedupe individual emails within MSG files against those same emails within an OST or a PST file).
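
Here is a minimal sketch of that metadata-based approach, assuming the listed fields have already been extracted from each message during processing.  The field names and normalization choices are illustrative, not any particular vendor’s implementation.

```python
import hashlib

# Metadata fields commonly combined to identify duplicate emails when the
# file-level hash can't be trusted (names and normalization are illustrative).
DEDUPE_FIELDS = ("from", "to", "cc", "bcc", "subject",
                 "attachment_names", "sent_datetime", "body")

def dedupe_key(email: dict) -> str:
    """Build a fingerprint from normalized metadata fields instead of raw file bytes."""
    parts = []
    for field in DEDUPE_FIELDS:
        value = email.get(field, "")
        if isinstance(value, (list, tuple)):  # e.g. multiple recipients or attachments
            value = ";".join(sorted(str(v).strip().lower() for v in value))
        else:
            value = " ".join(str(value).split()).lower()  # collapse whitespace, ignore case
        parts.append(value)
    return hashlib.sha256("\x1f".join(parts).encode("utf-8")).hexdigest()

def dedupe(emails: list[dict]) -> list[dict]:
    """Keep the first occurrence of each unique metadata fingerprint."""
    seen, unique = set(), []
    for email in emails:
        key = dedupe_key(email)
        if key not in seen:
            seen.add(key)
            unique.append(email)
    return unique
```

Because the stored Creation Date and Time never enters the fingerprint, the same email saved as MSG files in four different custodian folders (at four different times) produces one key and gets reviewed once.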

So, if you’re hungry to eliminate duplicates from your collection of MSG files, skip the HASH and use the metadata fields.  It’s much more (ful)filling.

So, what do you think?  Have you encountered any challenges when it comes to deduping emails?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Appeals Court Upholds Decision Not to Recuse Judge Peck in Da Silva Moore – eDiscovery Case Law

As reported by IT-Lex, the US Court of Appeals for the Second Circuit rejected the plaintiffs’ request for a writ of mandamus recusing Magistrate Judge Andrew J. Peck from Da Silva Moore v. Publicis Groupe SA.

The entire opinion is stated as follows:

“Petitioners, through counsel, petition this Court for a writ of mandamus compelling the recusal of Magistrate Judge Andrew J. Peck. Upon due consideration, it is hereby ORDERED that the mandamus petition is DENIED because Petitioners have not ‘clearly and indisputably demonstrate[d] that [Magistrate Judge Peck] abused [his] discretion’ in denying their district court recusal motion, In re Basciano, 542 F. 3d 950, 956 (2d Cir. 2008) (internal quotation marks omitted) (quoting In re Drexel Burnham Lambert Inc., 861 F.2d 1307, 1312-13 (2d Cir. 1988)), or that the district court erred in overruling their objection to that decision.”

Now, the plaintiffs have been denied in their recusal efforts in three courts.

Since it has been a while, let’s recap the case for those who may have not been following it and may be new to the blog.

Last year, back in February, Judge Peck issued an opinion that made this likely the first case to accept the use of computer-assisted review of electronically stored information (“ESI”).  However, on March 13, District Court Judge Andrew L. Carter, Jr. granted the plaintiffs’ request to submit additional briefing on their February 22 objections to the ruling.  In that briefing (filed on March 26), the plaintiffs claimed that the protocol approved for predictive coding “risks failing to capture a staggering 65% of the relevant documents in this case” and questioned Judge Peck’s relationship with defense counsel and with the selected vendor for the case, Recommind.

Then, on April 5, Judge Peck issued an order in response to the plaintiffs’ letter requesting his recusal, directing plaintiffs to indicate whether they would file a formal motion for recusal or ask the Court to consider the letter as the motion.  On April 13 (Friday the 13th, that is), the plaintiffs did just that, formally requesting the recusal of Judge Peck (the defendants issued a response in opposition on April 30).  But, on April 25, Judge Carter issued an opinion and order in the case, upholding Judge Peck’s opinion approving computer-assisted review.

Not done, the plaintiffs filed an objection on May 9 to Judge Peck’s rejection of their request to stay discovery pending the resolution of outstanding motions and objections (including the recusal motion, which had yet to be ruled on).  Then, on May 14, Judge Peck issued a stay, stopping defendant MSLGroup’s production of electronically stored information.  On June 15, in a 56-page opinion and order, Judge Peck denied the plaintiffs’ motion for recusal.  Judge Carter ruled on the plaintiffs’ recusal request on November 7, denying the request and stating that “Judge Peck’s decision accepting computer-assisted review … was not influenced by bias, nor did it create any appearance of bias”.

So, what do you think?  Will this finally end the recusal question in this case?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Court Says Scanning Documents to TIFF and Loading into Database is Taxable – eDiscovery Case Law

Awarding reimbursement of eDiscovery costs continues to be a mixed bag.  Sometimes, reimbursement of costs is awarded, such as in this case and this case.  Other times, those requests have been denied (or reversed) by the courts, including this case, this case and this case.  This time, reimbursement of eDiscovery costs was approved.

In Amana Society, Inc. v. Excel Engineering, Inc., No. 10-CV-168-LRR, (N.D. Iowa Feb. 4, 2013), Iowa District Judge Linda R. Reade found that “scanning [to TIFF format] for Summation purposes qualifies as ‘making copies of materials’ and that these costs are recoverable”.

With regard to the plaintiff’s claims of negligent misrepresentation and professional negligence, the defendant obtained partial summary judgment from the court on one claim and prevailed at trial on the other claim. The defendant subsequently filed a bill of costs asking the court to tax $51,233.51 in fees against the plaintiff, including “fees and disbursements for printing.” Last October, the plaintiff filed an objection to the bill of costs; in its response, the defendant withdrew its requests for certain costs and reduced the total amount requested to $50,050.61.

The requested costs included $6,000 in copying costs, of which almost $5,000 was for uploading documents to Summation, the popular litigation support software application. The plaintiff claimed the costs were not taxable because “(1) the costs were incurred for the convenience of counsel; and (2) the costs were discovery related and were not necessary for use at trial.” On the other hand, the defendant asserted “‘[t]he electronic scanning of documents is the modern-day equivalent of exemplification and copies of paper and therefore can be taxed pursuant to § 1920(4).’”  Taxable costs under 28 U.S.C. § 1920 include “[f]ees for exemplification and the costs of making copies of any materials where the copies are necessarily obtained for use in the case.”

Judge Reade cited Race Tires America, Inc. v. Hoosier Racing Tire Corp. (where the winning defendants were originally awarded $367,000 as reimbursement for eDiscovery costs, but that amount was reduced to $30,370 on appeal), and found “the conversion of native files to TIFF . . . and the scanning of documents to create digital duplicates are generally recognized as the taxable ‘making copies of material.’”  Approving reimbursement for these expenses “in light of the facts and document-intensive nature of this case”, the judge rejected the plaintiff’s claim that $2,435.68 of the Summation costs awarded should be disallowed because “they were incurred for discovery purposes”, noting that “[t]here is no absolute bar to recovering costs for discovery-related copying and scanning.”

Judge Reade refused to reimburse some other document related costs, noting that “Bates match, OCR and document utilization are used to organize documents and make them searchable, activities that would traditionally be done by attorneys or support staff, and therefore, are not taxable.”

So, what do you think?  Should the costs have been awarded?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Four More Tips to Quash the Cost of eDiscovery – eDiscovery Best Practices

Thursday, we covered the first four tips from Craig Ball’s informative post on his blog (Ball in your Court) entitled Eight Tips to Quash the Cost of E-Discovery with tips on saving eDiscovery costs.  Today, we’ll discuss the last four tips.

5. Test your Methods and Know your ESI: Craig says that “Staggering sums are spent in e-discovery to collect and review data that would never have been collected if only someone had run a small scale test before deploying an enterprise search”.  Knowing your ESI will, as Craig notes, “narrow the scope of collection and review with consequent cost savings”.  In one of the posts on our very first day of the blog, I relayed an actual example from a client regarding a search that included a wildcard of “min*” to retrieve variations like “mine”, “mines” and “mining”.  Because there are 269 words in the English language that begin with “min”, that overly broad search retrieved over 300,000 files with hits in an enterprise-wide search.  Unfortunately, the client had already agreed to the search term before finding that out, which resulted in considerable negotiation (and embarrassment) to get the other side to agree to modify the term.  That’s why it’s always a good idea to test your searches before the meet and confer.  The better you know your ESI, the more you save.  (One simple way to test a wildcard term is sketched after these tips.)

6. Use Good Tools: Craig provides another great analogy in observing that “If you needed to dig a big hole, you wouldn’t use a teaspoon, nor would you hire a hundred people with teaspoons.  You’d use the right power tool and a skilled operator.”  Collection and review tools must fit your requirements and workflow, so, guess what?  You need to understand those requirements and your workflow to pick the right tool.  If you’re putting together a wooden table, you don’t have to learn how to operate a blowtorch if all you need is a hammer and some nails, or a screwdriver and some screws for the job.  The better that the tools fit your workflow, the more you save.

7. Communicate and Cooperate: Craig says that “Much of the waste in e-discovery grows out of apprehension and uncertainty.  Litigants often over-collect and over-review, preferring to spend more than necessary instead of giving the transparency needed to secure a crucial concession on scope or methodology”.  A big part of communication and cooperation, at least in Federal cases, is the Rule 26(f) conference (which is also known as the “meet and confer”, here are two posts on the subject).  The more straightforward you make discovery through communication and cooperation, the more you save.

8. Price is What the Seller Accepts: Craig notes that there is much “pliant pricing” for eDiscovery tools and services and relayed an example where a vendor initially quoted $43.5 million to complete a large expedited project, only to drop that quote all the way down to $3.5 million after some haggling.  Yes, it’s important to shop around.  It’s also important to be able to know the costs going in, through predictable pricing.  If you have 10 gigabytes or 1 terabyte of data, providers should be able to tell you exactly what it will cost to collect, process, load and host that data.  And, it’s always good if the provider will let you try their tools for free, on your actual data, so you know whether those tools are worth the price.  The more predictable price and value of the tools and services are, the more you save.
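
Returning to tip 5, one cheap way to test a proposed wildcard before the meet and confer is to expand it against a word list and run a hit count on a sample of the collection.  The sketch below assumes you already have extracted text for a sample of documents and a plain word list on disk; the file path and sample size are hypothetical.

```python
from pathlib import Path

def expand_wildcard(prefix: str, wordlist_path: str) -> set[str]:
    """List every dictionary word a trailing-wildcard term like 'min*' would reach."""
    words = Path(wordlist_path).read_text().split()
    return {w.lower() for w in words if w.lower().startswith(prefix.lower())}

def count_hits(prefix: str, sample_docs: list[str]) -> int:
    """Count sample documents containing at least one word starting with the prefix."""
    prefix = prefix.lower()
    return sum(
        any(token.startswith(prefix) for token in doc.lower().split())
        for doc in sample_docs
    )

# Hypothetical usage against a 1,000-document sample pulled from the collection:
# variants = expand_wildcard("min", "/usr/share/dict/words")
# print(len(variants), "dictionary words begin with 'min'")  # mine, mines, mining, minute, minor...
# print(count_hits("min", sample_docs), "of", len(sample_docs), "sample documents hit on 'min*'")
```

If the hit rate on the sample looks wildly overbroad (as it would for “min*”), you know to narrow the term before you agree to it, not after.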

So, what do you think?  What are you doing to keep eDiscovery costs down?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Eight Tips to Quash the Cost of eDiscovery – eDiscovery Best Practices

By now, Craig Ball needs no introduction to our readers, as he has been a thought leader interview participant for the past three years.  Two years ago, we published his interview in a single post, his interview last year was split into a two-part series, and this year’s interview was split into a three-part series.  Perhaps next year, I will be lucky enough to interview him for an hour and we can simply have a five-part “Ball Week” (like the Discovery Channel has “Shark Week”).  Hmmm…

Regardless, I’m a regular reader of his blog, Ball in your Court, as well, and, last week, he published a very informative post entitled Eight Tips to Quash the Cost of E-Discovery with tips on saving eDiscovery costs.  I thought we would cover those tips here, with some commentary:

  1. Eliminate Waste: Craig notes that “irrational fears [that] flow from lack of familiarity with systems, tools and techniques that achieve better outcomes at lower cost” result in waste.  Over-preservation and over-collection of ESI, conversion of ESI, failing to deduplicate and reviewing unnecessary files all drive the cost up.  Last September, we ran a post regarding quality control and making sure the numbers add up when you subtract filtered, NIST/system, exception, duplicate and culled (during searching) files from the collected total.  In that somewhat hypothetical example based on Enron data sets, after removing those files, only 17% of the collected files were actually reviewed (which, in many cases, would still be too high a percentage).  The fewer files that require attorney “eyes on”, the more you save.  (A sketch of that kind of count reconciliation appears after this list.)
  2. Reduce Redundancy and Fragmentation: While, according to the Compliance, Governance and Oversight Council (CGOC), information volume in most organizations doubles every 18-24 months, Craig points out that “human beings don’t create that much more unique information; they mostly make more copies of the same information and break it into smaller pieces.”  Insanity is doing the same thing over and over and expecting different results; insane review is reviewing the same documents over and over and (potentially) getting different results, which is not only inefficient, but could lead to inconsistencies and even inadvertent disclosures.  Most collections not only contain exact duplicates in the exact format (which can be identified through hash-based deduplication), but also “near” duplicates that include the same content in different file formats (and at different sizes) or portions of the content in eMail threads.  The less duplicative content that requires review, the more you save.
  3. Don’t Convert ESI: In addition to noting the pitfalls of converting ESI to page-like image formats like TIFF, Craig also wrote a post about it, entitled Are They Trying to Screw Me? (discussed in this blog here).  ‘Nuff said.  The less ESI you convert, the more you save.
  4. Review Rationally: Craig discussed a couple of irrational approaches to review, including reviewing attachments without hits when the eMail has been determined to be non-responsive and the tendency to “treat information in any form from any source as requiring privilege review when even a dollop of thought would make clear that not all forms or sources of ESI are created equal when it comes to their potential to hold privileged content”.  For the latter, he advocates using technology to “isolate privileged content” as well as clawback agreements and Federal Rule of Evidence 502 for protection against inadvertent disclosure.  It’s also important to be able to adjust during the review process if certain groups of documents are identified as needing to be excluded or handled differently, such as the “All Rights Reserved” documents that I previously referenced in the “oil” AND “rights” search example.  The more intelligent the review process, the more you save.
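
As mentioned in tip 1 above, here is a minimal sketch of that kind of reconciliation.  The counts are entirely hypothetical (chosen so that roughly 17% of the collection reaches review, echoing the example above); the point is simply that the excluded categories plus the reviewed files must add back up to the collected total.

```python
# Hypothetical counts only; the point is the reconciliation, not the numbers.
collected_total = 1_000_000

excluded = {
    "filtered (file type/date range)": 250_000,
    "NIST/system files": 150_000,
    "exceptions (corrupt/unreadable)": 10_000,
    "duplicates": 300_000,
    "culled by search terms": 120_000,
}

reviewed = 170_000  # files that actually get attorney "eyes on" (17% of collected)

accounted_for = sum(excluded.values()) + reviewed

print(f"Collected: {collected_total:,}")
for category, count in excluded.items():
    print(f"  excluded - {category}: {count:,}")
print(f"Reviewed: {reviewed:,} ({reviewed / collected_total:.0%} of collected)")

# If this fails, files were dropped or double-counted somewhere in processing.
assert accounted_for == collected_total, f"Unaccounted for: {collected_total - accounted_for:,} files"
```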

There is too much to say about these eight tips to limit them to one blog post, so on Monday (after the Good Friday holiday) we’ll cover tips 5 through 8.  The waiting is the hardest part.

So, what do you think?  What are you doing to keep eDiscovery costs down?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Daily Is Thirty! (Months Old, That Is)

Thirty months ago yesterday, eDiscovery Daily was launched.  It’s hard to believe that it has been 2 1/2 years since our first three posts debuted on our first day.  635 posts later, a lot has happened in the industry that we’ve covered.  And, yes, we’re still crazy after all these years for committing to a post each business day – and we haven’t missed one yet.  Twice a year, we like to take a look back at some of the important stories and topics during that time.  So, here are just a few of the posts over the last six months you may have missed.  Enjoy!

In addition, Jane Gennarelli has been publishing an excellent series to introduce new eDiscovery professionals to the litigation process and litigation terminology.  Here is the latest post, which includes links to the previous twenty-one posts.

Thanks for noticing us!  We’ve nearly quadrupled our readership since the first six-month period and almost septupled (that’s grown 7 times in size!) our subscriber base since those first six months!  We appreciate the interest you’ve shown in the topics and will do our best to continue to provide interesting and useful eDiscovery news and analysis.  And, as always, please share any comments you might have or if you’d like to know more about a particular topic!

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.