Analysis

Alon Israely, Esq., CISSP of BIA – eDiscovery Trends

This is the sixth of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Alon Israely.  Alon is a Manager of Strategic Partnerships at Business Intelligence Associates (BIA) and currently leads the Strategic Partner Program at BIA.  Alon has over seventeen years of experience in a variety of advanced computing-related technologies and has consulted with law firms and their clients on a variety of technology issues, including expert witness services related to computer forensics, digital evidence management and data security.  Alon is an attorney and a Certified Information Systems Security Professional (CISSP).

What are your general observations about LTNY this year and how it fits into emerging trends?

{Interviewed on the second afternoon}  Looking at the show and walking around the exhibit hall, I feel like the show is less chaotic than in the past.  It seems like there are fewer vendors, though I don’t know that for a fact.  However, the vendors that are here appear to have accomplished quite a bit over the last twelve months to better clarify their messaging, as well as to fine-tune their offerings and the way they present those offerings.  It’s actually more enjoyable for me to walk through the exhibit hall this year – last year felt so chaotic and it was really difficult to differentiate the offerings.  That has been a problem in the legal technology business – no one really knows what the different vendors actually do, and they all seem to do the same thing.  Because of better messaging, I think this is the first year I started to truly feel that I can differentiate vendor offerings, probably because some of the vendors that entered the industry in the past few years have reached a maturity level.

So, it’s not that I am not seeing new technologies, methods or ways of doing things in eDiscovery; instead, I am seeing better ways of doing things, as well as vendors simply getting better at their own pitch and messaging.  And, by that, I mean everything involved in the messaging – the booth, the sales reps in the booth, the product being offered, everything.

If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

I think this year’s “next big thing” follows the same theme as last year’s “next big thing”, only you’re going to see more mature Technology Assisted Review (TAR) solutions and more mature predictive coding.  It won’t be just that people provide a predictive coding solution; they will also provide a workflow around the solution and a UI around the solution, as well as a method, a process, testing and even certification.  So, what will happen is that the trend will still be technology assisted review and predictive coding and analytics, just that it won’t be so “bleeding edge”.  The key is presentation of data such that it helps attorneys get through the data in a smarter way – not necessarily just culling, but understanding the data that you have and how to get through it faster and more accurately.  I think that the delivery of those approaches through solution providers, software providers and even service providers seems to be more mature and more focused.  Now, there is an actual tangible “thing” that I can touch that shows it is not just a bullet point – “Hey, we do predictive coding!” – instead, there is actually a method in which it is deployed to you, and to your case or your matter.

What are you working on that you’d like our readers to know about?

BIA is really redefining eDiscovery with respect to how the corporate customer looks at it.  How does the corporation look at eDiscovery?  They look at it as part of information security and information management, and we find that IT departments are much more involved in the decision making process.  Having information security roots, BIA is leveraging our preservation technology and bringing in an eDiscovery tool kit and platform that a company can use that will get them where they need to be with respect to compliance, defensibility and efficiency.  We also offer the only corporate, per-seat license model for eDiscovery in the business.  We are saying “look, we have been doing this for 10 years and we know exactly what we are doing”.  We use cutting edge technology and, while other cloud providers have claimed that they are leveraging utility computing, we are not only saying that, we are actually doing it.  If you don’t believe us, check it out and bring your best technology people and they will see we are telling the truth on that.  We are leveraging our technology for what happens from the corporate perspective.

We are not a review tool and you cannot produce documents out of our software, but that is why clients have software products like OnDemand®; with it, they can do all the different types of review they want and batch it out and use 100 reviewers or 10 reviewers or whatever.  BIA supports the corporations who care about legal hold and preservation and collections and ensuring that they are not sending millions of gigs over for costly review.  We support from the corporate perspective, whether you want to call it the left side of the EDRM model or not, what the GC needs.  GCs want to make sure that they have not deleted some piece of data that will be needed in court.  Notifying clients of that requirement, taking a “snapshot” of that data, locking it down, collecting that data and then ensuring that our clients are following the right workflow is basically what we bring to the table.  We have also automated about 80% of the manual tasks with TotalDiscovery, which makes the GC happy and brings that protection to the organization at the right price.  Between TotalDiscovery and a review application like OnDemand, you don’t need anything else.  You don’t need twenty applications for a full solution – two applications are all you need.

Thanks, Alon, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Laura Zubulake, Author of “Zubulake's e-Discovery” – eDiscovery Trends

This is the fifth of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Laura Zubulake.  Laura worked on Wall Street for 20 years in institutional equity departments and, in 1991, authored the book The Complete Guide to Convertible Securities Worldwide. She was the plaintiff in the Zubulake v. UBS Warburg case, which resulted in several landmark opinions related to eDiscovery and counsel’s obligations for the preservation of electronically stored information. The December 2006 amendments to the Federal Rules of Civil Procedure were influenced, in part, by the Zubulake case. Last year, Laura published a book titled Zubulake’s e-Discovery: The Untold Story of my Quest for Justice, previously discussed on this blog here, and she speaks professionally about eDiscovery topics and her experiences related to the case.

What are your general observations about LTNY this year and how it fits into emerging trends?

{Interviewed the second day of the show}  The crowd is similar in size to last year’s conference.  As always, there is that buzz of activity.  There is a diversity of speakers and panels.  The judges’ panels should be informative as usual, and Ted Olson’s keynote was an interesting and different introduction to the conference.  I’m also looking forward to the Thursday Closing Plenary Address on cyber security by Mary Galligan from the FBI.  As far as trends are concerned, based on the agenda it is clear that information governance is becoming a more important topic.  Cyber security is also more of a focus.  Next year, I think cyber security, information governance, and big data will continue to be trends.  I think that by next year, predictive coding will be less of a hot topic.

Speaking of predictive coding, if last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

At this point, I think that predictive coding has moved along the learning curve. Personally, I like to use the word algorithms with regard to predictive coding.  For years, algorithms have been used in government, law enforcement, and Wall Street.  It is not a new concept.  I think there will be an increasing acceptance of using them.  A key to acceptance will be to get cases where both parties agree to use algorithms voluntarily (instead of being forced to use them) and both sides are comfortable with the results.

As for the next big thing, as I said earlier, there will probably be increased attention on information governance.  As the eDiscovery industry matures, information governance will become more of a focus for corporations.  They will realize that, while they have legal obligations (with regard to electronic information), they also need to proactively manage that information. This will not only mitigate costs and risk but also allow them to leverage that information for business purposes.  So far, I have found the panel discussions regarding information governance to be most interesting.

What are you working on that you’d like our readers to know about?

My goal this past year was to publish my book.  Reviews have been good and I’m very thankful for that – especially given that I worked on it for several years.  The feedback has been rewarding in two aspects.  First, those in the eDiscovery industry are appreciating the book, because they are getting the background story to the making of the precedents.  Second, and even more rewarding to me personally, are reactions from readers who are not in the industry and not familiar with eDiscovery.  They appreciate the human-interest side of the story.  There are two stories in the book.  The broader audience finds the legal story interesting, but finds the human-interest story compelling.  I am also encouraged that readers are recognizing my story is really more about information governance than eDiscovery.  It was my understanding of the value of information and desire to search for it that resulted in the eDiscovery opinions.  As I state in my book, Zubulake I was the most important opinion because it gave me the opportunity to search for information.

Going forward, I will continue to market the book, plan events to market it and work towards getting more reviews in what I would call the broader media, not just in eDiscovery or legal media outlets.  Another one of my goals for this year and next year is to get back into the workforce in the area of information governance.  I think my Wall Street background and eDiscovery experiences are a perfect combination for information governance.  I also hope to use my book as a platform for my job search.

Thanks, Laura, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Tom Gelbmann of Gelbmann & Associates, LLC – eDiscovery Trends

This is the third of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Tom Gelbmann. Tom is Principal of Gelbmann & Associates, LLC.  Since 1993, Gelbmann & Associates, LLC has advised law firms and corporate law departments on realizing the full benefit of their investments in information technology.  Tom has also been co-author of the leading survey on the electronic discovery market, The Socha-Gelbmann Electronic Discovery Survey; in 2011 he and George Socha converted the Survey into Apersee, an online system for selecting eDiscovery providers and their offerings.  In 2005, he and George Socha launched the Electronic Discovery Reference Model project to establish standards within the eDiscovery industry – today, the EDRM model has become a standard in the industry for the eDiscovery life cycle.

What are your general observations about LTNY this year and how it fits into emerging trends?

{Interviewed the first morning of LegalTech}  The most notable trend I have seen leading up to LegalTech is the rush to jump on the computer assisted review bandwagon.  There are several sessions here at the show related to computer assisted review.  In addition, many in the industry seem to have a tool now, and some are promoting it as an “easy” button.  There is no “easy” button and, if I can mention a plug for EDRM, that’s one of the things the Search group was concerned with, so the group published the Computer Assisted Review Reference Model (CARRM) (our blog post about CARRM here).

To help people understand what computer assisted review is all about: it’s great technology and, if well used, it can really deliver great results, save time and save money, but it has to be understood that it’s a tool.  It’s not a substitute for a process.  The good news is the technology is helping and, as I have been seeing for years, the more technology is intelligently used, the more you can start to bend the cost curve down for electronic discovery.  So, what I think it has started to do, and will continue to do, is level off those costs on the right hand side of the model.

If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

I think one of the “next big things”, which has already started, is the whole left side of the model, which I would characterize as information governance.  Information governance is on the rise, and a lot of people in the industry believe that information governance today might be where electronic discovery was in about 2005 or 2006.  We need a lot of understanding, standards and education on effective approaches to information governance because that’s really where the problems are.  There are significant expenditures by organizations trying to work with too much data and not being able to find their data.  Associated with that will be technology that helps, and I also anticipate a significant increase in consulting services to help organizations develop effective policies and procedures.  The consulting organizations that can get it right and communicate it effectively will be able to capitalize on this aspect of the market.  Related to that, from a preservation standpoint, we have been seeing more software tools to help with litigation hold as more organizations get serious about preservation.

Another big trend is education.  EDRM is involved with the programs at Bryan University and the University of Florida (Bill Hamilton deserves a lot of credit for what is happening there).  I think you are going to see that continue to expand as more universities and educational facilities will be providing quality programs in the area of electronic discovery and perhaps information governance along the way.

The last trend I want to mention is a greater focus on marketing.  From a provider’s standpoint, it seems that there has been a flood of announcements about organizations that have hired a new marketing director, either overall or for a specific region (west coast, east coast, South America, etc.).  Marketing is really expanding in the community, so it seems that providers are realizing they really have to intelligently go after business.  I don’t believe we saw that level of activity even two or three years ago.

What are you working on that you’d like our readers to know about?

With regard to EDRM, we had a very productive mid-year meeting where we asked our participants to help us plan for the future of EDRM.  As a result, we came up with several changes we are immediately implementing. One change is that projects are going to be much smaller and of shorter duration, with as few as one to five people working on a particular item to get it done and get it out to the community more quickly for feedback.  One example of that, which we discussed above, is CARRM.  We just announced another project yesterday, the Talent Task Matrix (our blog post about it here).  We already have 91 downloads of the diagram and 87 downloads of the spreadsheet in less than a day. The matrix was very good work done by a very small group of EDRM folks.  We also dropped the prices for EDRM participation, and there are also going to be additional changes in terms of sponsorships and advertising, so we are changing as we are gearing up for our 10th year.

Also, we’re very excited about the additions we have made to Apersee in the last six months.  One addition is the calendar, which we believe is the most comprehensive calendar around for eDiscovery events.  If it is happening in the eDiscovery world globally, it’s probably on the Apersee calendar.  For conferences and webinars, the participating organizations will be listed, with a link back to their profile within Apersee.  We are also tracking press releases related to eDiscovery, enabling users to view press releases chronologically and also see the press releases associated with an organization, to see what it has said about itself through its press releases.  These are examples of what Apersee is doing to build the comprehensive view of eDiscovery organizations, to show what is happening, what they are doing and what services and products they offer.

Thanks, Tom, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Brad Jenkins of CloudNine Discovery – eDiscovery Trends

This is the first of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Brad Jenkins of CloudNine Discovery.  Brad has over 20 years of experience as an entrepreneur, as well as 14 years leading customer focused companies in the litigation support arena. Brad also writes the Litigation Support Industry Blog, which covers news about litigation support and e-discovery companies’ funding activities, acquisitions & mergers and notable business successes. He has authored many articles on document management and litigation support issues, and has appeared as a speaker before national audiences on document management practices and solutions.  He’s also my boss!   🙂

What are your general observations about LTNY this year and how it fits into emerging trends?

Well, clearly the technology assisted review/predictive coding wave is the most popular topic here at the show.  I think I counted at least six sessions discussing the topic and numerous vendors touting their tools.  And, this blog covered it, and the cases using it, quite a bit last year.  I’m sure you’ll hear that from a lot of the folks you’re interviewing.

Another trend that I’m seeing is integration of applications to make the discovery process more seamless, especially the integration of cloud-based collection and review applications.  We have an alliance with BIA and their TotalDiscovery legal hold and collection tool, which can export data into our review application, OnDemand®, which our clients are using quite successfully to collect data and move it along the process.  I think the “best of breed” approach between an application that’s focused on the left side of the EDRM model and one that’s focused on the right side is an approach that makes sense for a lot of organizations.

If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

I’m not sure that I see just one thing as the “next big thing”.  I certainly see the continued focus on integration of applications as one big thing.  Another big thing that I see is a broadening acceptance of technology assisted review from more than just predictive coding.  For example, clustering similar documents together can make review more efficient and more accurate and we provide that in OnDemand through our partnership with Hot Neuron’s Clustify™.

Perhaps the biggest thing that I see is education and adoption of the technology.  Many lawyers still don’t actively use the technology and don’t find the applications intuitive.  We’ve worked hard to make OnDemand easy to use, requiring minimal or no training.  A lot of vendors tout their products as easy to use, but we’re backing our claim with our free no risk trial of OnDemand that includes free data assessment, free native processing, free data load and free first month hosting for the first data set on any new OnDemand project.  We feel that we have a team of “Aces” and a hand full of aces is almost impossible to beat.  So, the free no risk trial reflects our confidence that clients that try OnDemand will embrace its ease-of-use and self-service features and continue to use it and us for their discovery needs.

What are you working on that you’d like our readers to know about?

In addition to our integration success with BIA, our partnership with Clustify and our free no risk trial, we’re also previewing the initial release of the mobile version of OnDemand.  The first mobile version will be designed for project administrators to add users and maintain user rights, as well as obtain key statistics about their projects.  It’s our first step toward our 2013 goal of making OnDemand completely platform independent and we are targeting a third quarter release of a new redesigned version of OnDemand that will support that goal.

Thanks, Brad, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Is 31,000 Missed Relevant Documents an Acceptable Outcome? – eDiscovery Case Law

It might be, if the alternative is 62,000 missed relevant documents.

Last week, we reported on the first case for technology assisted review to be completed, Global Aerospace Inc., et al, v. Landow Aviation, L.P. dba Dulles Jet Center, et al, in which predictive coding was approved last April by Virginia State Circuit Court Judge James H. Chamblin.  Now, as reported by the ABA Journal (by way of the Wall Street Journal Law Blog), we have an idea of the results from the predictive coding exercise.  Notable numbers:

  • Attorneys “coded a sample of 5,000 documents out of 1.3 million as either relevant or irrelevant” to serve as the seed set for the predictive coding process,
  • The predictive coding “program turned up about 173,000 documents deemed relevant”,
  • The attorneys “checked a sample of about 400 documents deemed relevant by the computer program. About 80 percent were indeed relevant. The lawyers then checked a sample of the documents deemed irrelevant. About 2.9 percent were possibly relevant”,
  • Subtracting the 173,000 documents deemed relevant from the 1.3 million total document population yields 1,127,000 documents not deemed relevant.  Extrapolating the 2.9 percent sample of missed potentially relevant documents to the rest of the documents deemed non relevant yields 32,683 potentially relevant documents missed.

“For some this may be hard to stomach,” the WSJ Law Blog says in the article. “The finding suggests that more than 31,000 documents may be relevant to the litigation but won’t get turned over to the other side. What if the smoking gun is among them?”

However, the defendants, in arguing for the use of predictive coding in this case, asserted that “manual review of the approximately two million documents at issue would be extremely costly while locating only about 60 percent of potentially relevant documents”.  Of course, the rise in popularity of technology assisted review is not only due to the cost savings but also the growing belief of increased accuracy over human review as concluded in the oft-cited Richmond Journal of Law and Technology white paper from Maura Grossman and Gordon Cormack, Technology-Assisted Review in e-Discovery can be More Effective and More Efficient than Exhaustive Manual Review.

Assuming that the defendants’ effectiveness estimate of manual review is reasonable, then it could be argued that more than 62,000 relevant documents could have been missed using manual review at a much higher cost for review.  While we don’t know what the actual number of missed documents would have been, it’s certainly fair to conclude that the predictive coding effort saved considerable review costs in this case with comparable, if not better, accuracy.
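The arithmetic behind those estimates is simple enough to reproduce.  The sketch below (Python) uses the figures reported in the article; note that the article does not explain how the “more than 62,000” manual-review figure was derived, so the second calculation is only one plausible, hedged reconstruction:

```python
# Figures reported in the article
total_docs = 1_300_000
deemed_relevant = 173_000     # flagged by the predictive coding program
precision_sample = 0.80       # ~80% of a ~400-doc sample were indeed relevant
miss_rate_sample = 0.029      # ~2.9% of the "irrelevant" sample were possibly relevant

# Documents the program set aside as not relevant
not_deemed_relevant = total_docs - deemed_relevant   # 1,127,000

# Extrapolate the sampled miss rate across the entire discard pile
missed_by_tar = round(not_deemed_relevant * miss_rate_sample)
print(missed_by_tar)  # 32683 -- the "more than 31,000" figure

# Assumption (not from the article): estimate total relevant documents as the
# true positives found (173,000 x 80%) plus those missed, then apply the
# defendants' claim that manual review locates only ~60% (misses ~40%).
est_total_relevant = deemed_relevant * precision_sample + missed_by_tar
missed_by_manual = round(est_total_relevant * 0.40)
print(missed_by_manual)  # ~68,000 under these assumptions, consistent with
                         # the article's "more than 62,000"
```

Under these assumptions, manual review would miss roughly twice as many relevant documents as the predictive coding process did, which is the comparison the post is making.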

There will be several sessions at Legal Tech® New York 2013 starting tomorrow discussing aspects of predictive coding.  For a preview of LegalTech, click here.

So, what do you think?  What do you think of the results?  Please share any comments you might have or if you’d like to know more about a particular topic.


First Case for Technology Assisted Review to be Completed – eDiscovery Trends

As reported in Law Technology News by Evan Koblentz, it appears we have our first case in which predictive coding has been completed.

Last April, as reported in this blog, in Global Aerospace Inc., et al, v. Landow Aviation, L.P. dba Dulles Jet Center, et al, Virginia State Circuit Court Judge James H. Chamblin ordered that the defendants can use predictive coding for discovery in this case, despite the plaintiff’s objections that the technology is not as effective as human review.  The order was issued after the defendants filed a motion requesting either that predictive coding technology be allowed in the case or that the plaintiffs pay any additional costs associated with traditional review.  The defendants had an 8 terabyte data set that they were hoping to reduce to a few hundred gigabytes through advanced culling techniques.

According to the Law Technology News article, defense counsel at Schnader Harrison Segal & Lewis, and also at Baxter, Baker, Sidle, Conn & Jones, used OrcaTec’s Document Decisioning Suite technology and that OrcaTec will announce that the process is finished after plaintiff’s counsel at Jones Day did not object to the results by a recent deadline.

As reported in the article, eDiscovery analyst David Horrigan of 451 Research expressed his surprise that Global Aerospace didn’t head in a different direction and wondered aloud why plaintiff’s counsel did not object to the results after initially objecting to the technology itself.

“It’s disappointing this issue has apparently been resolved on [plaintiff’s] missed procedural deadline,” he said. “Not unlike the predictive coding vs. keyword search debate in Kleen Products being postponed, if this court deadline has really been missed, we’ve lost an opportunity for a court ruling on predictive coding being decided on the merits.”

For more about what predictive coding is and its effectiveness, here are a couple of previous posts on the subject.  For other cases where predictive coding and other technology assisted review mechanisms have been discussed, check out this year end case summary from last week.

So, what do you think?  Does this pave the way for more cases to use technology assisted review?  Please share any comments you might have or if you’d like to know more about a particular topic.


2012 eDiscovery Year in Review: eDiscovery Case Law, Part 2

As we noted yesterday, eDiscoveryDaily published 98 posts related to eDiscovery case decisions and activities over the past year, covering 62 unique cases!  Yesterday, we looked back at cases related to proportionality and cooperation, privilege and inadvertent disclosures, and eDiscovery cost reimbursement.  Today, let’s take a look back at cases related to social media and, of course, technology assisted review(!).

We grouped those cases into common subject themes and will review them over the next few posts.  Perhaps you missed some of these?  Now is your chance to catch up!

SOCIAL MEDIA

Requests for social media data in litigation continue.  Unlike last year, however, not all requests for social media data were granted as some requests were deemed overbroad.  However, Twitter fought “tooth and nail” (unsuccessfully, as it turns out) to avoid turning over a user’s tweets in at least one case.  Here are six cases related to social media data:

Class Action Plaintiffs Required to Provide Social Media Passwords and Cell Phones.  Considering proportionality and accessibility concerns in EEOC v. Original Honeybaked Ham Co. of Georgia, Colorado Magistrate Judge Michael Hegarty held that where a party had shown that certain of its adversaries’ social media content and text messages were relevant, the adversaries must produce usernames and passwords for their social media accounts, usernames and passwords for e-mail accounts and blogs, and cell phones used to send or receive text messages to be examined by a forensic expert as a special master in camera.

Another Social Media Discovery Request Ruled Overbroad.  As was the case in Mailhoit v. Home Depot previously, Magistrate Judge Mark R. Abel ruled in Howell v. The Buckeye Ranch that the defendant’s request (to compel the plaintiff to provide her user names and passwords for each of the social media sites she uses) was overbroad.

Twitter Turns Over Tweets in People v. Harris.  As reported by Reuters, Twitter has turned over Tweets and Twitter account user information for Malcolm Harris in People v. Harris, after their motion for a stay of enforcement was denied by the Appellate Division, First Department in New York and they faced a finding of contempt for not turning over the information. Twitter surrendered an “inch-high stack of paper inside a mailing envelope” to Manhattan Criminal Court Judge Matthew Sciarrino, which will remain under seal while a request for a stay by Harris is heard in a higher court.

Home Depot’s “Extremely Broad” Request for Social Media Posts Denied.  In Mailhoit v. Home Depot, Magistrate Judge Suzanne Segal ruled that three of the defendant’s four discovery requests failed Federal Rule 34(b)(1)(A)’s “reasonable particularity” requirement, were therefore not reasonably calculated to lead to the discovery of admissible evidence, and were denied.

Social Media Is No Different than eMail for Discovery Purposes.  In Robinson v. Jones Lang LaSalle Americas, Inc., Oregon Magistrate Judge Paul Papak found that social media is just another form of electronically stored information (ESI), stating “I see no principled reason to articulate different standards for the discoverability of communications through email, text message, or social media platforms. I therefore fashion a single order covering all these communications.”

Plaintiff Not Compelled To Turn Over Facebook Login Information.  In Davids v. Novartis Pharm. Corp., the Eastern District of New York ruled against the defendant on whether the plaintiff in her claim against a pharmaceutical company could be compelled to turn over her Facebook account’s login username and password.

TECHNOLOGY ASSISTED REVIEW

eDiscovery vendors everywhere had been “waiting with bated breath” for the first case law pertaining to acceptance of technology assisted review within the courtroom.  Not only did they get their case, they got a few others – and, in one case, the judge actually required both parties to use predictive coding.  And, of course, there was a titanic battle over the use of predictive coding in Da Silva Moore – easily the most discussed case of the year.  Here are five cases where technology assisted review was at issue:

Louisiana Order Dictates That the Parties Cooperate on Technology Assisted Review.  In the case In re Actos (Pioglitazone) Products Liability Litigation, a case management order applicable to pretrial proceedings in a multidistrict litigation consolidating eleven civil actions, the court issued comprehensive instructions for the use of technology-assisted review (“TAR”).

Judge Carter Refuses to Recuse Judge Peck in Da Silva Moore.  This is only the final post of the year in eDiscovery Daily related to Da Silva Moore v. Publicis Groupe & MSL Group.  There were at least nine others (linked within this final post) detailing New York Magistrate Judge Andrew J. Peck’s original opinion accepting computer assisted review, the plaintiff’s objections to the opinion, their subsequent attempts to have Judge Peck recused from the case (alleging bias) and, eventually, District Court Judge Andrew L. Carter’s orders upholding Judge Peck’s original opinion and refusing to recuse him in the case.

Both Sides Instructed to Use Predictive Coding or Show Cause Why Not.  Vice Chancellor J. Travis Laster in Delaware Chancery Court – in EORHB, Inc., et al v. HOA Holdings, LLC, – has issued a “surprise” bench order requiring both sides to use predictive coding and to use the same vendor.

No Kleen Sweep for Technology Assisted Review.  For much of the year, proponents of predictive coding and other technology assisted review (TAR) concepts have been pointing to three significant cases where the technology based approaches have either been approved or are seriously being considered. Da Silva Moore v. Publicis Groupe and Global Aerospace v. Landow Aviation are two of the cases; the third is Kleen Products v. Packaging Corp. of America. However, in the Kleen case, the parties have now reached an agreement to drop the TAR-based approach, at least for the first request for production.

Is the Third Time the Charm for Technology Assisted Review?  In Da Silva Moore v. Publicis Groupe & MSL Group, Magistrate Judge Andrew J. Peck issued an opinion making it the first case to accept the use of computer-assisted review of electronically stored information (“ESI”). Or, so we thought. Meanwhile, in Kleen Products LLC v. Packaging Corporation of America, et al., the plaintiffs have asked Magistrate Judge Nan Nolan to require the producing parties to employ a technology assisted review approach in their production of documents. Now, there’s a third case where the use of technology assisted review is actually being approved in an order by the judge.

Tune in tomorrow for more key cases of 2012 and one of the most common themes of the year!

So, what do you think?  Did you miss any of these?  Please share any comments you might have or if you’d like to know more about a particular topic.


Problems with Review? It’s Not the End of the World – eDiscovery Best Practices

If you’re reading this, the Mayans were wrong… 🙂

If 2012 will be remembered for anything from an eDiscovery standpoint, it will be remembered for the arrival of Technology Assisted Review (TAR), aka Computer Assisted Review (CAR), as a court accepted method for conducting eDiscovery review.  Here are a few of the recent TAR cases reported on this blog.

Many associate TAR with predictive coding, but that’s not the only form of TAR to assist with review.  How the documents are organized for review can make a big difference in the efficiency of review, not only saving costs, but also improving accuracy by assigning similar documents to the same reviewer.  Organizing documents with similar content into “clusters” enables each reviewer to make quicker review decisions (for example, by looking at one document to determine responsiveness and applying the same categorization to duplicates or mere variations of that first document).  It also promotes consistency by enabling the same reviewer to review all similar documents in a cluster, avoiding potential inadvertent disclosures where one reviewer marks a document as privileged while another reviewer fails to mark a copy of that same document as such and that document gets produced.

Hot Neuron’s Clustify™ is an example of clustering software that examines the text in your documents, determines which documents are related to each other, and groups them into clusters, labeling each cluster with a set of keywords which provides a quick overview of the cluster, as well as a “representative document” against which all other documents in the cluster are compared.
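The core mechanic behind this kind of clustering can be illustrated with a toy sketch (a hypothetical illustration, not Clustify’s actual algorithm): represent each document as a word-frequency vector, measure cosine similarity, and group a document with an existing cluster when it is similar enough to that cluster’s representative document.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(docs: dict, threshold: float = 0.6) -> list:
    """Greedy clustering: the first document in each cluster serves as its
    'representative'; a new document joins the first cluster whose
    representative it resembles closely enough, else starts a new cluster."""
    vectors = {name: Counter(text.lower().split()) for name, text in docs.items()}
    clusters = []
    for name, vec in vectors.items():
        for c in clusters:
            if cosine_similarity(vec, vectors[c[0]]) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

docs = {
    "draft_v1": "quarterly accounts receivable report for the eastern region",
    "draft_v2": "quarterly accounts receivable report for the western region",
    "memo":     "lunch menu for the company holiday party",
}
print(cluster(docs))  # the two drafts cluster together; the memo stands alone
```

Production tools use far more sophisticated text analysis, but the principle is the same: near-identical documents land in the same group so one reviewer can decide them consistently.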

Clustering can make review more efficient and effective for these types of documents:

  • Email Message Threads: The ability to group messages from a thread into a cluster enables the reviewer to quickly identify the email(s) containing the entire conversation, categorize those and either apply the same categorization to the rest or dismiss as duplicative (if so instructed).
  • Routine Reports: Periodic reports – such as a weekly accounts receivable report – that are generated can be grouped together in a cluster to enable a single reviewer to make a relevancy determination and quickly apply it to all documents in the cluster.
  • Versions of Documents: The content of each draft of a document is often similar to the previous version, so categorizing one version of the document could be quickly applied to the rest of the versions.
  • Published Documents: Publishing a file to Adobe PDF format generates a copy of the original file (from Word, Excel or another application) that is identical in content but different in format, so these documents won’t be identified as “dupes” based on their HASH values.  With clustering, those documents still get grouped together, so those non-HASH dupes are identified and addressed.
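The published-documents point can be demonstrated with a small hypothetical sketch: hashing the raw bytes of a Word file and its PDF rendering yields different values (so byte-level deduplication misses them), while hashing the normalized extracted text identifies them as content duplicates, which is conceptually what clustering and near-dupe tools exploit.

```python
import hashlib

def byte_hash(data: bytes) -> str:
    """HASH value of the raw file bytes, as standard dedupe tools compute it."""
    return hashlib.sha256(data).hexdigest()

def content_hash(extracted_text: str) -> str:
    """Hash of the extracted text, normalized for case and whitespace,
    so purely formatting-level differences disappear."""
    normalized = " ".join(extracted_text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Stand-ins for the raw files: same content, different container formats.
word_bytes = b"PK\x03\x04...Quarterly Report..."  # .docx is a ZIP container
pdf_bytes  = b"%PDF-1.7...Quarterly Report..."    # PDF has its own structure

# Text as a (hypothetical) extraction tool would pull it from each file.
word_text = "Quarterly  Report\nRevenue up 4%"
pdf_text  = "quarterly report revenue up 4%"

print(byte_hash(word_bytes) == byte_hash(pdf_bytes))      # byte-level dupes: no match
print(content_hash(word_text) == content_hash(pdf_text))  # content dupes: match
```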

Within the parameters of a review tool like OnDemand®, which manages the review process and delivers documents quickly and effectively for review, clustering documents can speed decision making during review, saving considerable time and review costs while improving consistency of document classifications.

So, what do you think?  Have you used clustering software to organize documents for review?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Daily will take a break for the holidays and will return on Wednesday, January 2, 2013. Happy Holidays from all of us at CloudNine Discovery and eDiscovery Daily!


Baby, You Can Drive My CARRM – eDiscovery Trends

Full disclosure: this post is NOT about the Beatles’ song, but I liked the title.

There have been a number of terms applied to using technology to aid in eDiscovery review, including technology assisted review (often referred to by its acronym “TAR”) and predictive coding.  Another term is Computer Assisted Review (which lends itself to the obvious acronym of “CAR”).

Now, the Electronic Discovery Reference Model (EDRM) is looking to provide an “owner’s manual” to that CAR with its new draft Computer Assisted Review Reference Model (CARRM), which depicts the flow for a successful CAR project.  The CAR process depends on, among other things, a sound approach for identifying appropriate example documents for the collection, ensuring educated and knowledgeable reviewers appropriately code those documents, and testing and evaluating the results to confirm success.  That’s why the “A” in CAR stands for “assisted” – regardless of how good the tool is, a flawed approach will yield flawed results.

As noted on the EDRM site, the major steps in the CARRM process are:

Set Goals

The process of deciding the outcome of the Computer Assisted Review process for a specific case. Some of the outcomes may be:

  • reduction and culling of not-relevant documents;
  • prioritization of the most substantive documents; and
  • quality control of the human reviewers.

Set Protocol

The process of building the human coding rules that take into account the use of CAR technology. CAR technology must be taught about the document collection by having the human reviewers submit documents to be used as examples of a particular category, e.g. Relevant documents. Creating a coding protocol that can properly incorporate the fact pattern of the case and the training requirements of the CAR system takes place at this stage. An example of a protocol determination is to decide how to treat the coding of family documents during the CAR training process.

Educate Reviewer

The process of transferring the review protocol information to the human reviewers prior to the start of the CAR Review.

Code Documents

The process of human reviewers applying subjective coding decisions to documents in an effort to adequately train the CAR system to “understand” the boundaries of a category, e.g. Relevancy.

Predict Results

The process of the CAR system applying the information “learned” from the human reviewers and classifying a selected document corpus with pre-determined labels.

Test Results

The process of human reviewers using a validation process, typically statistical sampling, in an effort to create a meaningful metric of CAR performance. The metrics can take many forms; they may include estimates of defect counts in the classified population, or information retrieval metrics like Precision, Recall and F1.
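For readers unfamiliar with those information retrieval metrics, here is a short sketch of how they are computed from a validated sample (the counts are hypothetical):

```python
def review_metrics(true_pos: int, false_pos: int, false_neg: int) -> dict:
    """Precision: of the documents the CAR system marked relevant, how many
    really were. Recall: of the truly relevant documents, how many the
    system found. F1: the harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical validation sample: 80 relevant documents correctly flagged,
# 20 non-relevant documents flagged in error, 20 relevant documents missed.
print(review_metrics(true_pos=80, false_pos=20, false_neg=20))
# precision 0.8, recall 0.8, f1 0.8
```

High precision with low recall means the system is conservative (it misses relevant documents); the reverse means it over-includes, which is why both are typically reported together.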

Evaluate Results

The process of the review team deciding if the CAR system has achieved the goals anticipated by the review team.

Achieve Goals

The process of ending the CAR workflow and moving to the next phase in the review lifecycle, e.g. Privilege Review.

The diagram does a good job of depicting the linear steps (Set Goals, Set Protocol, Educate Reviewer and, at the end, Achieve Goals) and using a circle to represent the iterative steps (Code Documents, Predict Results, Test Results and Evaluate Results) that may need to be performed more than once to achieve the desired results.  It’s a very straightforward model to represent the process.  Nicely done!

Nonetheless, it’s a draft version of the model and EDRM wants your feedback.  You can send your comments to mail@edrm.net or post them on the EDRM site here.

So, what do you think?  Does the CARRM model make computer assisted review more straightforward?  Please share any comments you might have or if you’d like to know more about a particular topic.


Percentage of eDiscovery Sanctions Cases Declining – eDiscovery Trends

According to Kroll Ontrack, the percentage of eDiscovery cases addressing sanctions “dropped by approximately ten percent” compared to 2011, while “cases addressing procedural issues more than doubled”.  Let’s take a closer look at the numbers and look at some cases in each category.

As indicated in their December 4 news release, in the past year, Kroll Ontrack experts summarized 70 of the most significant state and federal judicial opinions related to the preservation, collection, review and production of electronically stored information (ESI). The breakdown of the major issues that arose in these eDiscovery cases is as follows:

  • Thirty-two percent (32%) of cases addressed sanctions regarding a variety of issues, such as preservation and spoliation, noncompliance with court orders and production disputes.  Out of 70 cases, that would be about 22 cases addressing sanctions this past year.  Here are a few of the recent sanction cases previously reported on this blog.
  • Twenty-nine percent (29%) of cases addressed procedural issues, such as search protocols, cooperation, production and privilege considerations.  Out of 70 cases, that would be about 20 cases.  Here are a few of the recent procedural issues cases previously reported on this blog.
  • Sixteen percent (16%) of cases addressed discoverability and admissibility issues.  Out of 70 cases, that would be about 11 cases.  Here are a few of the recent discoverability / admissibility cases previously reported on this blog.
  • Fourteen percent (14%) of cases discussed cost considerations, such as shifting or taxation of eDiscovery costs.  Out of 70 cases, that would be about 10 cases.  Here are a few of the recent eDiscovery costs cases previously reported on this blog.
  • Nine percent (9%) of cases discussed technology-assisted review (TAR) or predictive coding.  Out of 70 cases, that would be about 6 cases.  Here are a few of the recent TAR cases previously reported on this blog – how many did you get?
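The approximate per-category counts above follow from simple arithmetic on the 70 summarized cases:

```python
TOTAL_CASES = 70

# Kroll Ontrack's reported category percentages for the year.
categories = {
    "sanctions": 0.32,
    "procedural issues": 0.29,
    "discoverability/admissibility": 0.16,
    "cost considerations": 0.14,
    "technology-assisted review": 0.09,
}

# Convert each percentage to an approximate case count.
for name, pct in categories.items():
    print(f"{name}: ~{round(TOTAL_CASES * pct)} cases")
```

Note that the rounded counts (22, 20, 11, 10 and 6) sum to 69 rather than 70, a normal artifact of rounding each category independently.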

While it’s nice and appreciated that Kroll Ontrack has been summarizing the cases and compiling these statistics, I do have a couple of observations/questions about their numbers (sorry if they appear “nit-picky”):

  • Sometimes Cases Belong in More Than One Category: The case percentage totals add up to 100%, which would make sense except that some cases address issues in more than one category.  For example, In re Actos (Pioglitazone) Products Liability Litigation addressed both cooperation and technology-assisted review, and Freeman v. Dal-Tile Corp. addressed both search protocols and discoverability/admissibility.  It appears that Kroll classified each case in only one group, which makes the numbers add up, but could be somewhat misleading.  In theory, some cases belong in multiple categories, so the total should exceed 100%.
  • Did Cases Addressing Procedural Issues Really Double?: Kroll reported that “cases addressing procedural issues more than doubled”; however, here is how they broke down the category last year: 14% of cases addressed various procedural issues such as searching protocol and cooperation, 13% of cases addressed various production considerations, and 12% of cases addressed privilege considerations and waivers.  That’s a total of 39% for three separate categories that now appear to be described as “procedural issues, such as search protocols, cooperation, production and privilege considerations” (29%).  So, it looks to me like the percentage of cases addressing procedural issues actually dropped ten percentage points.  Actually, the two biggest category jumps appear to be discoverability and admissibility issues (2% last year to 16% this year) and TAR (0% last year to 9% this year).

So, what do you think?  Has your organization been involved in any eDiscovery opinions this year?  Please share any comments you might have or if you’d like to know more about a particular topic.
