Searching

Data May Be Doubling Every Couple of Years, But How Much of it is Original? – eDiscovery Best Practices

According to the Compliance, Governance and Oversight Council (CGOC), information volume in most organizations doubles every 18-24 months. However, just because it doubles doesn’t mean that it’s all original. Like a bad cover band singing Free Bird, the rendition may be unique, but the content is the same. The key is limiting review to unique content.

When reviewers review the same files again and again, it not only drives up costs unnecessarily but can also lead to problems if the same file is categorized differently by different reviewers (for example, inadvertent production of a duplicate of a privileged file that was not correctly categorized).

Of course, we all know the importance of identifying exact duplicates (files with exactly the same content in the same file format), which can be detected through MD5 or SHA-1 hash values so that they can be removed from the review population, saving considerable review costs.
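
To make that concrete, here’s a minimal sketch in Python of a hash-based deduplication pass (the “collection” folder path and the choice of MD5 are illustrative assumptions; SHA-1 works the same way via hashlib):

```python
import hashlib
from pathlib import Path

def file_hash(path, algorithm="md5", chunk_size=65536):
    """Hash a file's contents in chunks, avoiding loading large files into memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

seen = {}  # hash value -> first file seen with that content
for path in Path("collection").rglob("*"):
    if path.is_file():
        digest = file_hash(path)
        if digest in seen:
            print(f"Exact duplicate: {path} matches {seen[digest]}")
        else:
            seen[digest] = path
```

Files flagged this way have byte-identical content, so only one copy needs review (though in practice you’d log the suppressed duplicates for defensibility).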

Identifying near-duplicates that contain the same (or almost the same) information also reduces redundant review and saves costs. A common example is a Word document published to an Adobe PDF file: the content is the same, but the file format is different, so the hash values will not match.
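
Hash values can’t catch those, so near-duplicate detection typically compares extracted text instead. One common approach (a sketch, not the only technique) breaks each document’s text into overlapping word sequences (“shingles”) and measures their overlap:

```python
def shingles(text, k=5):
    """Break normalized text into overlapping k-word sequences."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

def jaccard(a, b):
    """Overlap of two shingle sets: 1.0 means identical text."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Illustrative: text extracted from a Word file and from its PDF rendition
# (PDF extraction often differs only in whitespace, which split() normalizes)
word_text = "The quarterly forecast shows revenue growth of ten percent over plan."
pdf_text = "The quarterly  forecast shows revenue growth of ten percent over plan."

if jaccard(shingles(word_text), shingles(pdf_text)) > 0.8:  # threshold is a tunable assumption
    print("Likely near-duplicates: review one, suppress the other")
```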

Then, there is message thread analysis. Many email messages are part of a larger discussion, sometimes just between two parties and other times among a number of parties. Reviewing each email in the discussion thread individually would mean reviewing much of the same information over and over again. Pulling those messages together and enabling them to be reviewed as an entire discussion can eliminate that redundant review. That includes any side conversations within the discussion that may or may not be related to the original topic (e.g., a side discussion about the latest misstep by Anthony Weiner).
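
Commercial tools do this with more sophisticated analytics, but the core idea can be sketched with standard email headers: replies carry References and In-Reply-To headers pointing back to the thread root. A toy illustration in Python (the two sample messages are invented for the example):

```python
from collections import defaultdict
from email import message_from_string

# Two toy RFC 2822 messages: a thread starter and a reply to it
raw_messages = [
    "Message-ID: <1@example.com>\nSubject: Budget\n\nDraft attached.",
    "Message-ID: <2@example.com>\nIn-Reply-To: <1@example.com>\n"
    "References: <1@example.com>\nSubject: Re: Budget\n\nLooks good.",
]

def thread_key(msg):
    """Group by the first Message-ID in References (the thread root);
    thread starters fall back to their own Message-ID."""
    refs = (msg.get("References") or "").split()
    return refs[0] if refs else msg.get("Message-ID")

threads = defaultdict(list)
for raw in raw_messages:
    msg = message_from_string(raw)
    threads[thread_key(msg)].append(msg)

for root, msgs in threads.items():
    print(f"Thread rooted at {root}: {len(msgs)} message(s) to review together")
```

Real threading also has to handle missing headers, normalized subjects and branching side conversations, which is where the commercial implementations earn their keep.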

Clustering is a process that pulls similar documents together based on content, so duplicative information can be identified more quickly and eliminated. With clustering, you can minimize review of duplicative information within documents and emails, saving time and cost and ensuring consistency in the review. As a result, even if the data in your organization doubles every couple of years, the cost of your review shouldn’t.
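
For the curious, here’s a minimal sketch of content-based clustering in Python, assuming the open-source scikit-learn library is available (review platforms use their own, more elaborate implementations; the documents, cluster count and vectorizer settings here are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "merger agreement draft for your review",
    "revised draft of the merger agreement",
    "lunch order for the team meeting",
    "team meeting lunch menu options",
]

# Represent each document by weighted term frequencies, then group by similarity
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in sorted(zip(labels, docs)):
    print(f"cluster {label}: {doc}")
```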

So, what do you think? Does your review tool support clustering technology to pull similar content together for review? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Plaintiffs Take the Supreme Step in Da Silva Moore – eDiscovery Case Law

As mentioned in Law Technology News (‘Da Silva Moore’ Goes to Washington), attorneys representing lead plaintiff Monique Da Silva Moore and five other employees have filed a petition for certiorari with the Supreme Court arguing that New York Magistrate Judge Andrew Peck, who approved an eDiscovery protocol agreed to by the parties that included predictive coding technology, should have recused himself given his previous public statements expressing strong support of predictive coding.

Da Silva Moore and her co-plaintiffs argued in the petition that the Second Circuit Court of Appeals was too deferential to Judge Peck when denying the plaintiffs’ petition to recuse him, asking the Supreme Court to order the Second Circuit to use the less deferential “de novo” standard.  As noted in the LTN article:

“The employees also cited a circuit split in how appellate courts reviewed judicial recusals, pointing out that the Seventh Circuit reviews disqualification motions de novo. Besides resolving the circuit split, the employees asked the Supreme Court to find that the Second Circuit’s standard was incorrect under the law. Citing federal statute governing judicial recusals, the employees claimed that the law required motions for disqualification to be reviewed objectively and that a deferential standard flew in the face of statutory intent. “Rather than dispelling the appearance of a self-serving judiciary, deferential review exacerbates the appearance of impropriety that arises from judges deciding their own cases and thus undermines the purposes of [the statute],” wrote the employees in their cert petition.”

This battle over predictive coding and Judge Peck’s participation has continued for 15 months.  For a recap of the events during that time, click here.

So, what do you think? Is this a “Hail Mary” for the plaintiffs, and will it succeed? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Word’s Stupid “Smart Quotes” – Best of eDiscovery Best Practices

Even those of us at eDiscoveryDaily have to take an occasional vacation day; however, instead of “going dark” for today, we thought we would republish a post from the early days of the blog (when we didn’t have many readers yet).  So, chances are, you haven’t seen this post yet!  Enjoy!

I have run into this issue more times than I can count.

A client sends me, in a Microsoft® Word document, a list of search terms that they want to use to cull a set of data for review.  I copy the terms into the search tool and then, all hell breaks loose!!  Either:

The search indicates there is a syntax error

OR

The search returns some obviously odd results

And, then, I remember…

It’s those stupid Word “smart quotes”.  For many versions now, Microsoft Word has, by default, automatically changed straight quotation marks ( ' or " ) to curly “smart” quotes as you type. This is fine for display of a document in Word, but when you copy that text to a format that doesn’t support the smart quotes (such as HTML or a plain text editor), the quotes will show up as garbage characters because they are not standard ASCII characters.  So:

“smart quotes”

will look like this…

âsmart quotesâ

As you can imagine, that doesn’t look so “smart” when you feed it into a search tool and you get odd results (if the search even runs).  So, you’ll need to fix those characters to make sure that the quotes are handled correctly when searching for phrases with your search tool.

To disable the automatic changing of straight quotes to Microsoft Word smart quotes: Click the Microsoft Office icon button at the top left of Word, and then click the Word Options button to open options for Word.  Click Proofing along the side of the pop-up window, then click AutoCorrect Options.  Click the AutoFormat As You Type tab and uncheck the Replace “Straight quotes” with “smart quotes” check box.  Then, click OK.

Often, however, the file you’ve received already has smart quotes in it.  If you’re going to use the terms in that file, you’ll need to copy them to a text editor first (e.g., Notepad, or WordPad in plain text document mode, should be fine).  Highlight the beginning quote and copy it to the clipboard (Ctrl+C), then press Ctrl+H to open the Find and Replace dialog, put your cursor in the Find box and press Ctrl+V to paste it in.  Type the straight quote character on the keyboard into the Replace box, then press Replace All to replace all beginning smart quotes with straight ones.  Repeat the process for the ending smart quotes.  You’ll also have to do this if you have any single quotes, double hyphens, or fraction characters (e.g., Word converts “1/2” to “½”) that impact your terms.
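
If you run into this regularly, the cleanup is easy to script.  Here’s a small sketch in Python that straightens the usual suspects (the mapping covers the characters mentioned above; extend it as your terms require):

```python
# Map common Word AutoFormat characters back to plain ASCII equivalents
REPLACEMENTS = {
    "\u201c": '"',    # left double ("smart") quote
    "\u201d": '"',    # right double quote
    "\u2018": "'",    # left single quote
    "\u2019": "'",    # right single quote / apostrophe
    "\u2013": "-",    # en dash (from Word's hyphen replacement)
    "\u2014": "--",   # em dash (from Word's double-hyphen replacement)
    "\u00bd": "1/2",  # fraction character
}

def straighten(text):
    """Replace smart punctuation so search syntax parses correctly."""
    for smart, plain in REPLACEMENTS.items():
        text = text.replace(smart, plain)
    return text

print(straighten("\u201csmart quotes\u201d"))  # -> "smart quotes"
```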

So, what do you think? Have you ever run into issues with Word smart quotes or other AutoFormat options? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Self-Collecting? Don’t Forget to Check for Image-Only Files – eDiscovery Best Practices

Yesterday, we talked about the importance of tracking chain of custody in order to be able to fight challenges to electronically stored information (ESI) by opposing parties.  Today, let’s talk about a common mistake that organizations make when collecting their own files to turn over for discovery purposes.

I’ve worked with a number of attorneys who have turned over the collection of potentially responsive files to the individual custodians of those files, or to someone in the organization responsible for collecting those files (typically, an IT person).  Self-collection by custodians, unless managed closely, can be a wildly inconsistent process (at best).  In some cases, those attorneys have instructed those individuals to perform various searches to turn “self-collection” into “self-culling”.  Self-culling can cause at least two issues:

  1. You have to go back to the custodians and repeat the process if additional search terms are identified.
  2. Potentially responsive image-only files will be missed with self-culling.

Unless search terms are agreed to by the parties up front, it’s not unusual to identify additional searches to be performed; even with up-front agreement, terms can often be renegotiated during the case.  It’s also common to have a number of image-only files within any collection, especially if the custodians frequently scan executed documents or use fax software to receive documents from other parties.  In those cases, image-only PDF or TIFF files can often make up as much as 20% of the collection.  When custodians are asked to perform “self-culling” by performing their own searches of their data, these files will typically be missed.

For these reasons, I usually advise against self-culling by custodians and also don’t recommend that IT perform self-culling, unless they have the ability to process that data to identify image-only files and perform Optical Character Recognition (OCR) to capture text from them.  If your IT department has the capabilities and experience to do so (and the process and chain of custody are well documented), then that’s great.  Many internal IT departments don’t have the capabilities or the expertise, in which case it’s best to collect all potentially responsive files from the custodians and turn them over to a qualified eDiscovery provider to perform the culling (performing OCR as needed to include responsive image-only files in the resulting responsive document set).  With the full data set available, there is also no need to go back to the custodians to collect additional data (unless the case requires supplemental productions).
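
Spotting the image-only files is straightforward once the collection is processed.  As a rough illustration, here’s a sketch in Python using the open-source pypdf library (an assumption; any text-extraction tool works, and the “collection” path and 25-character threshold are invented for the example):

```python
from pathlib import Path
from pypdf import PdfReader  # assumed available: pip install pypdf

def looks_image_only(pdf_path, min_chars=25):
    """Flag PDFs with little or no extractable text as likely image-only."""
    text = ""
    for page in PdfReader(pdf_path).pages:
        text += page.extract_text() or ""
    return len(text.strip()) < min_chars

for pdf in Path("collection").rglob("*.pdf"):
    if looks_image_only(pdf):
        print(f"{pdf}: no text layer found; route to OCR before keyword culling")
```

Anything flagged this way is invisible to a keyword search until it has been OCRed, which is exactly why self-culling misses these files.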

So, what do you think? Do you self-collect data for discovery purposes? If so, how do you account for image-only files? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

“Not Me”, The Fallibility of Human Review – eDiscovery Best Practices

When I talk with attorneys about using technology to assist with review (whether via techniques such as predictive coding or merely advanced searching and culling mechanisms), most of them still seem to question whether these techniques can measure up to good, old-fashioned human attorney review.  Despite several studies that question the accuracy of human review, many attorneys still feel that their review capability is as good as or better than technical approaches.  Here is perhaps the best explanation I’ve seen yet of why that may not be the case.

In Craig Ball’s latest blog post on his Ball in Your Court blog (The ‘Not Me’ Factor), Craig provides a terrific explanation as to why predictive coding is “every bit as good (and actually much, much better) at dealing with the overwhelming majority of documents that don’t require careful judgment—the very ones where keyword search and human reviewers fail miserably.”

“It turns out that well-designed and -trained software also has little difficulty distinguishing the obviously relevant from the obviously irrelevant.  And, again, there are many, many more of these clear cut cases in a collection than ones requiring judgment calls.

So, for the vast majority of documents in a collection, the machines are every bit as capable as human reviewers.  A tie.  But giving the extra point to humans as better at the judgment call documents, HUMANS WIN!  Yeah!  GO HUMANS!   Except….

Except, the machines work much faster and much cheaper than humans, and it turns out that there really is something humans do much, much better than machines:  they screw up.

The biggest problem with human reviewers isn’t that they can’t tell the difference between relevant and irrelevant documents; it’s that they often don’t.  Human reviewers make inexplicable choices and transient, unwarranted assumptions.  Their minds wander.  Brains go on autopilot.  They lose their place.  They check the wrong box.  There are many ways for human reviewers to err and just one way to perform correctly.

The incidence of error and inconsistent assessments among human reviewers is mind boggling.  It’s unbelievable.  And therein lays the problem: it’s unbelievable.    People I talk to about reviewer error might accept that some nameless, faceless contract reviewer blows the call with regularity, but they can’t accept that potential in themselves.  ‘Not me,’ they think, ‘If I were doing the review, I’d be as good as or better than the machines.’  It’s the ‘Not Me’ Factor.”

While Craig acknowledges that “there is some cause to believe that the best trained reviewers on the best managed review teams get very close to the performance of technology-assisted review”, he notes that they “can only achieve the same result by reviewing all of the documents in the collection, instead of the 2%-5% of the collection needed to be reviewed using predictive coding”.  He asks “[i]f human review isn’t better (and it appears to generally be far worse) and predictive coding costs much less and takes less time, where’s the rational argument for human review?”

Good question.  Having worked with some large review teams with experienced and proficient document reviewers at an eDiscovery provider that employed a follow-up QC check of reviewed documents, I can still recall how often those well-trained reviewers were surprised at some of the classification mistakes they made.  And, I worked on one project with over a hundred reviewers working several months, so you can imagine how expensive that was.

BTW, Craig is no stranger to this blog – in addition to several of his articles we’ve referenced, we’ve also conducted thought leader interviews with him at LegalTech New York the past three years.  Here’s a link if you want to check those out.

So, what do you think? Do you think human review is better than technology assisted review? If so, why? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Motion to Compel Dismissed after Defendant Agrees to Conditional Meet and Confer – eDiscovery Case Law

In Gordon v. Kaleida Health, No. 08-CV-378S(F) (W.D.N.Y. May 21, 2013), New York Magistrate Judge Leslie G. Foschio dismissed (without prejudice) the plaintiffs’ motion to compel the defendants to meet and confer to establish an agreed protocol for implementing the use of predictive coding software, after the defendants stated that they were prepared to meet and confer with the plaintiffs and their non-disqualified ESI consultants regarding the defendants’ predictive coding process.

For over a year, the parties unsuccessfully attempted to agree on how to achieve a cost-effective review of the defendants’ 200,000 to 300,000 emails using a keyword search methodology.  Eventually, in June 2012, the court expressed dissatisfaction with the parties’ lack of progress toward resolving the issues and pointed to the availability of predictive coding, citing its approval in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012) (much more on that case here).

In a September 2012 email, after informing the plaintiffs that they intended to use predictive coding, the defendants objected to the plaintiffs’ ESI consultants participating in discussions with the defendants relating to the use of predictive coding and establishing a protocol.  Later that month, despite the plaintiffs’ requests for discussion of numerous search issues to ensure a successful predictive coding outcome, the defendants sent their ESI protocol to the plaintiffs and indicated they would also send a list of their email custodians.  In October 2012, the plaintiffs objected to the defendants’ proposed ESI protocol and filed the motion to compel, also citing Da Silva Moore and noting several technical issues “which should be discussed with the assistance of Plaintiffs’ ESI consultants and cooperatively resolved by the parties”.

Complaining that the defendants refused to discuss issues other than the defendants’ custodians, the plaintiffs claimed that “the defendants’ position excludes Plaintiffs’ access to important information regarding Defendants’ selection of so-called ‘seed set documents’ which are used to ‘train the computer’ in the predictive coding search method.”  The defendants responded, indicating they had no objection to a meet and confer with the plaintiffs and their consultants, except for those consultants that were the subject of the defendants’ motion to disqualify (because they had previously provided services to the defendants in the case). With regard to sharing seed set document information, the defendants stated that “courts do not order parties in ESI discovery disputes to agree to specific protocols to facilitate a computer-based review of ESI based on the general rule that ESI production is within the ‘sound discretion’ of the producing party” and noted that the defendants in Da Silva Moore weren’t required to provide the plaintiffs with their seed set documents, but volunteered to do so.

Because the defendants stated that “they are prepared to meet and confer with Plaintiffs and Plaintiffs’ ESI consultants, who are not disqualified”, Judge Foschio ruled that “it is not necessary for the court to further address the merits of Plaintiffs’ motion at this time” and dismissed the motion without prejudice.  It will be interesting to see if the parties can ultimately agree on sharing the protocol or if the question regarding sharing information about seed set documents will come back before the court.

So, what do you think? Should producing parties be required to share information regarding selection of seed set documents? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Important Considerations when Negotiating Search Terms with Opposing Counsel – eDiscovery Best Practices

Negotiating search terms with opposing counsel has become a commonplace way to agree on the scope of discovery.  However, when you negotiate terms with the other side, you could be agreeing to produce more than you think.  Craig Ball’s latest article in Law Technology News discusses the issues and tries to answer the question: Are Keywords Just Filters?

Many attorneys still consider attorney eyes-on linear review as the final step to decide relevance of the document collection, but Craig notes that “requesting parties frequently believe that by agreeing to the use of a set of keywords as a proxy for attorney review, those agreed searches serve as a de facto request for production and define responsiveness per se, requiring production if not privileged.”

While producing parties may object to keyword search as a proxy for attorney review, Craig notes that “there’s sufficient ambiguity surrounding the issue to prompt prudent counsel to address the point explicitly when negotiating keyword search protocols and drafting memorializing agreements.”

Craig states what more and more people have come to accept: “Objective culling, keyword search, and emerging technologies such as predictive coding make clear that the idealized view of counsel as ultimate arbiter of relevance is mostly myth.”  We discussed a study regarding the reliability of review attorneys in a post here.  “Consequently, as more parties forge detailed agreements establishing objective evidentiary identifiers such as dates, sources, custodians, circulation, data types, and lexical content, litigants and courts grow impatient with the cost and time required for attorney review and reluctant to give it deference.”

Craig’s article discusses the issue in greater depth and even provides a couple of examples of agreed upon language – one where keyword search would be considered as a filter for attorney review, the other where it would be considered as a replacement for review.  His advice to producing parties: “In effect, requesting parties regard an agreement to use queries as an agreement to treat those queries as requests for production. Producing parties who reject this thinking would nevertheless be wise to plan for opponents (and judges) who embrace it.”

It’s a terrific article and I don’t want to steal all his thunder, so click here to check it out.

So, what do you think? Do you negotiate search terms with opposing counsel? If so, do you use the terms as a filter or a proxy for attorney review? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Never Mind! Plaintiffs Not Required to Use Predictive Coding After All – eDiscovery Case Law

Remember EORHB v. HOA Holdings, where, in a surprise ruling, both parties were instructed by the judge to use predictive coding?  Well, the judge has changed his mind.

As reported by Robert Hilson on the Association of Certified E-Discovery Specialists® (ACEDS) web site (subscription required), Delaware Chancery Court Vice Chancellor J. Travis Laster has revised his decision in EORHB, Inc. v. HOA Holdings, LLC, No. 7409-VCL (Del. Ch. May 6, 2013).  The new order enables the defendants to continue to utilize computer assisted review with their chosen vendor, no longer requires both parties to use the same vendor, and enables the plaintiffs, “based on the low volume of relevant documents expected to be produced,” to perform document review “using traditional methods.”

Here is the text of this very short order:

WHEREAS, on October 15, 2012, the Court entered an Order providing that, “[a]bsent a modification of this order for good cause shown, the parties shall (i) retain a single discovery vendor to be used by both sides, and (ii) conduct document review with the assistance of predictive coding;”

WHEREAS, the parties have proposed that HOA Holdings LLC and HOA Restaurant Group LLC (collectively, “Defendants”) retain ediscovery vendor Kroll OnTrack for electronic discovery;

WHEREAS, the parties have agreed that, based on the low volume of relevant documents expected to be produced in discovery by EORHB, Inc., Coby G. Brooks, Edward J. Greene, James P. Creel, Carter B. Wrenn and Glenn G. Brooks (collectively, “Plaintiffs”), the cost of using predictive coding assistance would likely be outweighed by any practical benefit of its use;

WHEREAS, the parties have agreed that there is no need for the parties to use the same discovery review platform;

WHEREAS, the requested modification of the Order will not prejudice any of the parties;

NOW THEREFORE, this –––– day of May 2013, for good cause shown, it is hereby ORDERED that:

(i) Defendants may retain ediscovery vendor Kroll OnTrack and employ Kroll OnTrack and its computer assisted review tools to conduct document review;

(ii) Plaintiffs and Defendants shall not be required to retain a single discovery vendor to be used by both sides; and

(iii) Plaintiffs may conduct document review using traditional methods.

Here is a link to the order from the article by Hilson.

So, what do you think? Should a party ever be ordered to use predictive coding? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

Google Compelled to Produce Search Terms in Apple v. Samsung – eDiscovery Case Law

In Apple v. Samsung, Case No. 12-cv-00630, (N.D. Cal., May 9, 2013), California Magistrate Judge Paul S. Grewal granted Apple’s motion to compel third party Google to produce the search terms and custodians used to respond to discovery requests and ordered the parties to “meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

In August of last year, a jury of nine found that Samsung infringed all but one of the seven patents at issue and found all seven of Apple’s patents valid – despite Samsung’s attempts to have them thrown out. They also determined that Apple didn’t violate any of the five patents Samsung asserted in the case.  Apple had been requesting $2.5 billion in damages.  Apple later requested additional damages of $707 million to be added to the $1.05 billion jury verdict, which was subsequently reduced to nearly $599 million, with a new trial being ordered on damages for 14 products.  This case was notable from an eDiscovery perspective due to the adverse inference instruction issued by Judge Grewal against Samsung just prior to the start of trial for spoliation of data, though it appears that the adverse inference instruction did not have a significant impact in the verdict.

Google’s Involvement

As part of the case, Apple subpoenaed Google to request discovery, though the parties did not discuss search terms or custodians during the meet and confer.  After Google responded to the discovery requests, Apple asked Google for the search terms and custodians it had used in responding, but Google refused, arguing that the search terms and choice of custodians were “privileged under the work-product immunity doctrine.”  Instead, Google asked Apple to suggest search terms and custodians, but Apple refused and filed a motion to compel Google to provide the search terms and custodians used to respond to the discovery requests.

Judge Grewal noted that Google’s arguments opposing Apple’s request “have shifted”, but that “[a]t the heart of its opposition, however, is Google’s belief that its status as a third party to this litigation exempts it from obligations parties may incur to show the sufficiency of their production, at least absent a showing by Apple that its production is deficient.”  Google complained that “the impact of requiring non-parties to provide complete ‘transparency’ into their search methodology and custodians in responding to non-party subpoenas whenever unsubstantiated claims of production deficiencies are made would be extraordinary.”

Judge’s Ruling

Referencing DeGeer v. Gillis, Judge Grewal noted that, in that case, it was ruled that the third party’s “failure to promptly disclose the list of employees or former employees whose emails it proposed to search and the specific search terms it proposed to be used for each individual violated the principles of an open, transparent discovery process.”

Therefore, while acknowledging that “Apple likewise failed to collaborate in its efforts to secure proper discovery from Google”, Judge Grewal ruled that “production of Google’s search terms and custodians to Apple will aid in uncovering the sufficiency of Google’s production and serves greater purposes of transparency in discovery.  Google shall produce the search terms and custodians no later than 48 hours from this order. Once those terms and custodians are provided, no later than 48 hours from the tender, the parties shall meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

So, what do you think? Should a third party be held to the same standard of transparency, absent a showing of deficient discovery response? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.

More Updates from the EDRM Annual Meeting – eDiscovery Trends

Yesterday, we discussed some general observations from the Annual Meeting for the Electronic Discovery Reference Model (EDRM) group and discussed some significant efforts and accomplishments by the (suddenly heavily talked about) EDRM Data Set project.  Here are some updates from other projects within EDRM.

It should be noted these are summary updates and that most of the focus on these updates is on accomplishments for the past year and deliverables that are imminent.  Over the next few weeks, eDiscovery Daily will cover each project in more depth with more details regarding planned activities for the coming year.

Model Code of Conduct (MCoC)

The MCoC was introduced in 2011 and became available for organizations to subscribe to last year.  To learn more about the MCoC, you can read the code online here, or download it as a 22-page PDF file here.  Subscribing is easy!  To voluntarily subscribe to the MCoC, you can register on the EDRM website here.  Identify your organization, provide information for an authorized representative and answer four verification questions (truthfully, of course) to affirm your organization’s commitment to the spirit of the MCoC, and your organization is in!  You can also provide a logo for EDRM to include when adding you to the list of subscribing organizations.  Pending a survey of EDRM members to determine if any changes are needed, this project has been completed.  Team leaders include Eric Mandel of Zelle Hofmann, Kevin Esposito of Rivulex and Nancy Wallrich.

Information Governance Reference Model (IGRM)

The IGRM team has continued to make strides and improvements on an already terrific model.  Last October, they unveiled version 3.0 of the IGRM.  As their press release noted, “The updated model now includes privacy and security as primary functions and stakeholders in the effective governance of information.”  IGRM continues to be one of the most active EDRM projects, with some of the broadest participation.  This year, the early focus – as quoted from Judge Andrew Peck’s keynote speech at Legal Tech this past year – is “getting rid of the junk”.  Project leaders are Aliye Ergulen from IBM, Reed Irvin from Viewpointe and Marcus Ledergerber from Morgan Lewis.

Search

One of the best examples of the new, more agile process for creating deliverables within EDRM comes from the Search team, which released its new draft Computer Assisted Review Reference Model (CARRM), depicting the workflow of a successful computer assisted review project. The entire model was created in only a matter of weeks.  Early focus for the Search project for the coming year includes adjustments to CARRM (based on feedback at the annual meeting).  You can also still send your comments regarding the model to mail@edrm.net or post them on the EDRM site here.  A webinar regarding CARRM is also planned for late July.  Kudos to the Search team, including project leaders Dominic Brown of Autonomy and Jay Lieb of kCura, who got unmerciful ribbing for insisting (jokingly, I think) that TIFF files, unlike Generalissimo Francisco Franco, are still alive.  🙂

Jobs

In late January, the Jobs Project announced the release of the EDRM Talent Task Matrix diagram and spreadsheet, which is available in XLSX or PDF format. As noted in their press release, the Matrix is a tool designed to help hiring managers better understand the responsibilities associated with common eDiscovery roles. The Matrix maps responsibilities to the EDRM framework, so the eDiscovery duties associated with each role can be assigned to the appropriate parties.  Project leader Keith Tom noted that next steps include surveying EDRM members regarding the Matrix, requesting and co-authoring case studies and white papers, and creating a short video on how to use the Matrix.

Metrics

In today’s session, the Metrics project team unveiled the first draft of the new Metrics model to EDRM participants!  Feedback was provided during the session and the team will make the model available for additional comments from EDRM members over the next week or so, with a goal of publishing for public comments in the next two to three weeks.  The team is also working to create a page to collect Metrics measurement tools from eDiscovery professionals that can benefit the eDiscovery community as a whole.  Project leaders Dera Nevin of TD Bank and Kevin Clark noted that June is “budget calculator month”.

Other Initiatives

As noted yesterday, there is a new project to address standards for working with native files in the different EDRM phases led by Eric Mandel from Zelle Hofmann and also a new initiative to establish collection guidelines, spearheaded by Julie Brown from Vorys.  There is also an effort underway to refocus the XML project, as it works to complete the 2.0 version of the EDRM XML model.  In addition, there was quite a spirited discussion as to where EDRM is heading as it approaches ten years of existence and it will be interesting to see how the EDRM group continues to evolve over the next year or so.  As you can see, a lot is happening within the EDRM group – there’s a lot more to it than just the base Electronic Discovery Reference Model.

So, what do you think? Are you a member of EDRM? If not, why not? Please share any comments you might have, or let us know if you’d like to know more about a particular topic.
