eDiscoveryDaily

200,000 Visits on eDiscovery Daily! – eDiscovery Milestones

While we may be “just a bit behind” Google in popularity (900 million visits per month), we’re proud to announce that yesterday eDiscoveryDaily reached the 200,000 visit milestone!  It took us a little over 21 months to reach 100,000 visits and just over 11 months to get to 200,000 (don’t tell my boss, he’ll expect 300,000 in 5 1/2 months).  When we reach key milestones, we like to take a look back at some of the recent stories we’ve covered, so here are some recent eDiscovery items of interest.

EDRM Data Set “Controversy”: Including last Friday, we have covered the discussion related to the presence of personally-identifiable information (PII) data (including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers) within the Electronic Discovery Reference Model (EDRM) Enron Data Set and the “controversy” regarding the effort to clean it up (additional posts here and here).

Minnesota Implements Changes to eDiscovery Rules: States continue to be busy with changes to eDiscovery rules. One such state is Minnesota, which has amended its rules to emphasize proportionality, collaboration, and informality in the discovery process.

Changes to Federal eDiscovery Rules Could Be Coming Within a Year: Another major set of amendments to the discovery provisions of the Federal Rules of Civil Procedure is getting closer and could be adopted within the year.  The United States Courts’ Advisory Committee on Civil Rules voted in April to send a slate of proposed amendments up the rulemaking chain, to its Standing Committee on Rules of Practice and Procedure, with a recommendation that the proposals be approved for publication and public comment later this year.

I Tell Ya, Information Governance Gets No Respect: A new report from 451 Research has indicated that “although lawyers are bullish about the prospects of information governance to reduce litigation risks, executives, and staff of small and midsize businesses, are bearish and ‘may not be placing a high priority’ on the legal and regulatory needs for litigation or government investigation.”

Is it Time to Ditch the Per Hour Model for Document Review?: Some of the recent stories involving alleged overbilling by law firms for legal work – much of it for document review – beg the question: is it time to ditch the per hour model for document review in favor of a per document rate?

Fulbright’s Litigation Trends Survey Shows Increased Litigation, Mobile Device Collection: According to Fulbright’s 9th Annual Litigation Trends Survey released last month, companies in the United States and United Kingdom continue to deal with, and spend more on, litigation.  From an eDiscovery standpoint, the survey showed an increase in requirements to preserve and collect data from employee mobile devices, a high reliance on self-preservation to fulfill preservation obligations and a decent percentage of organizations using technology assisted review.

We also covered Craig Ball’s Eight Tips to Quash the Cost of E-Discovery (here and here) and interviewed Adam Losey, the editor of IT-Lex.org (here and here).

Jane Gennarelli has continued her terrific series on Litigation 101 for eDiscovery Tech Professionals – 32 posts so far, here is the latest.

We’ve also had 15 posts about case law, just in the last 2 months (and 214 overall!).  Here is a link to our case law posts.

On behalf of everyone at CloudNine Discovery who has worked on the blog over the last 32+ months, thanks to all of you who read the blog every day!  In addition, thanks to the other publications that have picked up and either linked to or republished our posts!  We really appreciate the support!  Now, on to 300,000!

And, as always, please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Some Additional Perspective on the EDRM Enron Data Set “Controversy” – eDiscovery Trends

Sharon Nelson wrote a terrific post about the “controversy” regarding the Electronic Discovery Reference Model (EDRM) Enron Data Set in her Ride the Lightning blog (Is the Enron E-Mail Data Set Worth All the Mudslinging?).  I wanted to repeat some of her key points here and offer some of my own perspective directly from sitting in on the Data Set team during the EDRM Annual Meeting earlier this month.

But First, a Recap

To recap, the EDRM Enron Data Set, sourced from the FERC Enron Investigation release made available by Lockheed Martin Corporation, has been a valuable resource for eDiscovery software demonstration and testing (we covered it here back in January 2011).  Initially, the data was made available for download on the EDRM site and was subsequently moved to Amazon Web Services (AWS).  However, after much recent discussion about personally-identifiable information (PII) data (including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers) available within FERC (and consequently the EDRM Data Set), the EDRM Data Set was taken down from the AWS site.

Then, a couple of weeks ago, EDRM, along with Nuix, announced that they have republished version 1 of the EDRM Enron PST Data Set (which contains over 1.3 million items) after cleansing it of private, health and personal financial information. Nuix and EDRM have also published the methodology Nuix’s staff used to identify and remove more than 10,000 high-risk items, including credit card numbers (60 items), Social Security or other national identity numbers (572), individuals’ dates of birth (292) and other personal data.  All personal data gone, right?

Not so fast.

As noted in this Law Technology News article by Sean Doherty (Enron Sandbox Stirs Up Private Data, Again), “Index Engines (IE) obtained a copy of the Nuix-cleansed Enron data for review and claims to have found many ‘social security numbers, legal documents, and other information that should not be made public.’ IE evidenced its ‘find’ by republishing a redacted version of a document with PII” (actually, a handful of them).  IE and others were quite critical of the effort by Nuix/EDRM and the extent of the PII data still remaining.

As he does so well, Rob Robinson has compiled a list of articles, comments and posts related to the PII issue; here is the link.

Collaboration, not criticism

Sharon’s post had several observations regarding the data set “controversy”, some of which are repeated here:

  • “Is the legal status of the data pretty clear? Yes, when a court refused to block it from being made public apparently accepting the greater good of its release, the status is pretty clear.”
  • “Should Nuix be taken to task for failure to wholly cleanse the data? I don’t think so. I am not inclined to let perfect be the enemy of the good. A lot was cleansed and it may be fair to say that Nuix was surprised by how much PII remained.”
  • “The terms governing the download of the data set made clear that there was no guarantee that all the PII was removed.” (more on that below in my observations)
  • “While one can argue that EDRM should have done something about the PII earlier, at least it is doing something now. It may be actively helpful to Nuix to point out PII that was not cleansed so it can figure out why.”
  • “Our expectations here should be that we are in the midst of a cleansing process, not looking at the data set in a black or white manner of cleansed or uncleansed.”
  • “My suggestion? Collaboration, not criticism. I believe Nuix is anxious to provide the cleanest version of the data possible – to the extent that others can help, it would be a public service.”

My Perspective from the Data Set Meeting

I sat in on part of the Data Set meeting earlier this month and there were a couple of points discussed during the meeting that I thought were worth relaying:

1.     We understood that there was no guarantee that all of the PII data was removed.

As with any process, we understood that there was no effective way to ensure that all PII data had been removed once the process was complete, and we discussed the need for a mechanism for people to continue to report PII data that they find.  On the download page for the data set, there was a link to the legal disclaimer page, which states in section 1.8:

“While the Company endeavours to ensure that the information in the Data Set is correct and all PII is removed, the Company does not warrant the accuracy and/or completeness of the Data Set, nor that all PII has been removed from the Data Set. The Company may make changes to the Data Set at any time without notice.”

With regard to a mechanism for reporting persistent PII data, there is this statement on the Data Set page on the EDRM site:

“PII: These files may contain personally identifiable information, in spite of efforts to remove that information. If you find PII that you think should be removed, please notify us at mail@edrm.net.”

2.     We agreed that any documents with PII data should be removed, not redacted.

Because the original data set, with all of the original PII data, is available via FERC, we agreed that any documents containing sensitive personal information should be removed from the data set – NOT redacted.  In essence, redacting those documents puts a beacon on them, making them easier to find in the FERC set or in downloaded copies of the original EDRM set, so the published redacted examples of missed PII only serve to facilitate finding those documents in the original sets.

Conclusion

Regardless of how effective the “cleansing” of the data set was perceived to be by some, it did result in removing over 10,000 items with personal data.  Yet, some PII data evidently remains.  While some people think (and they may have a point) that the data set should not have been published until after an independent audit for remaining PII data, it seems impractical (to me, at least) to wait until it is “perfect” before publishing the set.  So, when is it good enough to publish?  That appears to be open to interpretation.

Like Sharon, my hope is that we can move forward to continue to improve the Data Set through collaboration and that those who continue to find PII data in the set will notify EDRM, so that they can remove those items and continue to make the set better.  I’d love to see the Data Set page on the EDRM site reflect a history of each data set update, with the revision date, the number of additional PII items found and removed and who identified them (to give credit to those finding the data).  As Canned Heat would say, “Let’s Work Together”.

And, we haven’t even gotten to version 2 of the Data Set yet – more fun ahead!  🙂

So, what do you think?  Have you used the EDRM Enron Data Set?  If so, do you plan to download the new version?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Hard Drive Turned Over to Criminal Defendant – Eight Years Later – eDiscovery Case Law

If you think discovery violations by the other side can cause you problems, imagine being this guy.

As reported by WRAL.com in Durham, North Carolina, the defense in State of North Carolina v. Raven S. Abaroa, No. 10 CRS 1087 filed a Motion to Dismiss the Case for Discovery Violations after the state produced a forensic image of a hard drive (in the middle of trial) that had been locked away in the Durham Police Department for eight years.

After the state responded to the defendant’s March 2010 discovery request, the defendant filed a Motion to Compel Discovery in October 2012, alleging that the state had failed to disclose all discoverable “information in the possession of the state, including law enforcement officers, that tends to undermine the statements of or reflects negatively on the credibility of potential witnesses”.  At the hearing on the motion, the Assistant DA stated that all emails had been produced and the court agreed.

On April 29 of this year, the defendant filed another Motion to Compel Specific Items of Discovery “questioning whether all items within the state’s custody had been revealed, including information with exculpatory or impeachment value”.  Once again, the state assured the court it had met its discovery obligations and the court again denied the motion.

During pre-trial preparation with a former forensic examiner of the Durham Police Department (DPD) and testimony of detectives in the case, it became apparent that an image of the victim’s hard drive had never been turned over to the defense.  On May 15, representatives of the DPD located the image of the victim’s hard drive, which had been locked away in a cabinet for eight years.  Once defense counsel obtained a copy of the drive, their forensic examiner retrieved several emails exchanged between the victim and her former boyfriend within a few weeks of the murder that belied the prosecution’s portrayal of the defendant as an unfaithful, verbally abusive and controlling husband feared by his wife.  The defendant’s forensic examiner also testified that, had he known about the hard drive in 2005, steps could have been taken to preserve the emails on the email server, which could have provided a better snapshot of the victim’s email and Internet activity.

This led to the filing of the Motion to Dismiss the Case for Discovery Violations by the defense (link to the filing here).

As reported by WTVD, Judge Orlando Hudson – who had recently been ruled against by the North Carolina Court of Appeals in another murder case that he dismissed based on discovery violations by Durham prosecutors – denied the defense’s requests for a dismissal or a mistrial.  That sounds like interesting grounds for appeal if the defendant is convicted.

So, what do you think?  Should the judge have granted the defense’s request for a dismissal, or at least a mistrial?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Never Mind! Plaintiffs Not Required to Use Predictive Coding After All – eDiscovery Case Law

Remember EORHB v. HOA Holdings, where, in a surprise ruling, both parties were instructed to use predictive coding by the judge?  Well, the judge has changed his mind.

As reported by Robert Hilson in the Association of Certified E-Discovery Specialists® (ACEDS) web site (subscription required), Delaware Chancery Court Vice Chancellor J. Travis Laster has revised his decision in EORHB, Inc. v. HOA Holdings, LLC, No. 7409-VCL (Del. Ch. May 6, 2013).  The new order enables the defendants to continue to utilize computer assisted review with their chosen vendor, no longer requires both parties to use the same vendor, and enables the plaintiffs, “based on the low volume of relevant documents expected to be produced”, to perform document review “using traditional methods.”

Here is the text of this very short order:

WHEREAS, on October 15, 2012, the Court entered an Order providing that, “[a]bsent a modification of this order for good cause shown, the parties shall (i) retain a single discovery vendor to be used by both sides, and (ii) conduct document review with the assistance of predictive coding;”

WHEREAS, the parties have proposed that HOA Holdings LLC and HOA Restaurant Group LLC (collectively, “Defendants”) retain ediscovery vendor Kroll OnTrack for electronic discovery;

WHEREAS, the parties have agreed that, based on the low volume of relevant documents expected to be produced in discovery by EORHB, Inc., Coby G. Brooks, Edward J. Greene, James P. Creel, Carter B. Wrenn and Glenn G. Brooks (collectively, “Plaintiffs”), the cost of using predictive coding assistance would likely be outweighed by any practical benefit of its use;

WHEREAS, the parties have agreed that there is no need for the parties to use the same discovery review platform;

WHEREAS, the requested modification of the Order will not prejudice any of the parties;

NOW THEREFORE, this –––– day of May 2013, for good cause shown, it is hereby ORDERED that:

(i) Defendants may retain ediscovery vendor Kroll OnTrack and employ Kroll OnTrack and its computer assisted review tools to conduct document review;

(ii) Plaintiffs and Defendants shall not be required to retain a single discovery vendor to be used by both sides; and

(iii) Plaintiffs may conduct document review using traditional methods.

Here is a link to the order from the article by Hilson.

So, what do you think?  Should a party ever be ordered to use predictive coding?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Just a Reminder to Think Before You Hit Send – eDiscovery Best Practices

With Anthony Weiner’s announcement that he is attempting a political comeback by running for mayor of New York City, it’s worth remembering the “Twittergate” story that ultimately cost him his congressional seat in the first place – not to bash him, but to remind all of us how important it is to think before you hit send (even if he did start his campaign by using a picture of Pittsburgh’s skyline instead of NYC’s – oops!).  Here is another reminder of that fact.

Chili’s Waitress Fired Over Facebook Post Insulting ‘Stupid Cops’

As noted on jobs.aol.com, a waitress at an Oklahoma City Chili’s posted a photo of three Oklahoma County Sheriff’s deputies on her Facebook page along with the comment: “Stupid Cops better hope I’m not their server FDP.” (A handy abbreviation for F*** Da Police.)

The woman, Ashley Warden, might have had reason to hold a grudge against her local police force. Last year she made national news when her potty-training toddler pulled down his pants in his grandmother’s front yard, and a passing officer handed Warden a public urination ticket for $2,500. (The police chief later apologized and dropped the charges, while the ticketing officer was fired.)

Nonetheless, Warden’s Facebook post quickly went viral on law enforcement sites and Chili’s was barraged with calls demanding that she be fired. Chili’s agreed. “With the changing world of digital and social media, Chili’s has Social Media Guidelines in place, asking our team members to always be respectful of our guests and to use proper judgement when discussing actions in the work place …,” the restaurant chain said in a statement. “After looking into the matter, we have taken action to prevent this from happening again.”

Best Practices and Social Media Guidelines

Another post on jobs.aol.com discusses some additional examples of people losing their jobs over Facebook posts, along with six tips for making posts that should keep you from getting fired by ensuring they would be protected by the National Labor Relations Board (NLRB), the federal agency tasked with protecting employees’ rights to association and union representation.

Perhaps so, though, as the article notes, the NLRB “has struggled to define how these rights apply to the virtual realm”.  It’s worth noting that, in their statement, Chili’s referred to violation of their social media guidelines as a reason for the termination.  As we discussed on this blog some time ago, having a social governance policy in place – one that covers what employees should and should not do when using outside email, chat and social media – is a good idea (and the post identified several factors that such a policy should address).

Thinking before you hit send in these days of pervasive social media means, among other things, being familiar with your organization’s social media policies and ensuring compliance with them – especially if you’re going to post anything related to your job.  It also means educating yourself about what you should and should not do when posting to social media sites.

Of course it’s also important to remember that social media factors into discovery more than ever these days, as these four cases (just from the first few months of this year) illustrate.

So, what do you think?  Does your organization have social media guidelines?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Daily will return after the Memorial Day Holiday.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Google Compelled to Produce Search Terms in Apple v. Samsung – eDiscovery Case Law

In Apple v. Samsung, Case No. 12-cv-00630, (N.D. Cal., May 9, 2013), California Magistrate Judge Paul S. Grewal granted Apple’s motion to compel third party Google to produce the search terms and custodians used to respond to discovery requests and ordered the parties to “meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

In August of last year, a jury of nine found that Samsung infringed all but one of the seven patents at issue and found all seven of Apple’s patents valid – despite Samsung’s attempts to have them thrown out. They also determined that Apple didn’t violate any of the five patents Samsung asserted in the case.  Apple had been requesting $2.5 billion in damages.  Apple later requested additional damages of $707 million to be added to the $1.05 billion jury verdict, which was subsequently reduced to nearly $599 million, with a new trial being ordered on damages for 14 products.  This case was notable from an eDiscovery perspective due to the adverse inference instruction for spoliation of data issued by Judge Grewal against Samsung just prior to the start of trial, though it appears that the adverse inference instruction did not have a significant impact on the verdict.

Google’s Involvement

As part of the case, Apple subpoenaed Google to request discovery, though they did not discuss search terms or custodians during the meet and confer.  After Google responded to the discovery requests, Apple asked Google for the search terms and custodians it used in responding, but Google refused, arguing that the search terms and choice of custodians were “privileged under the work-product immunity doctrine.”  Instead, Google asked Apple to suggest search terms and custodians, but Apple refused and filed a motion to compel Google to provide the search terms and custodians used to respond to the discovery requests.

Judge Grewal noted that Google’s arguments opposing Apple’s request “have shifted”, but that “[a]t the heart of its opposition, however, is Google’s belief that its status as a third party to this litigation exempts it from obligations parties may incur to show the sufficiency of their production, at least absent a showing by Apple that its production is deficient.”  Google complained that “the impact of requiring non-parties to provide complete ‘transparency’ into their search methodology and custodians in responding to non-party subpoenas whenever unsubstantiated claims of production deficiencies are made would be extraordinary.”

Judge’s Ruling

Referencing DeGeer v. Gillis, Judge Grewal noted that, in that case, it was ruled that the third party’s “failure to promptly disclose the list of employees or former employees whose emails it proposed to search and the specific search terms it proposed to be used for each individual violated the principles of an open, transparent discovery process.”

Therefore, while acknowledging that “Apple likewise failed to collaborate in its efforts to secure proper discovery from Google”, Judge Grewal ruled that “production of Google’s search terms and custodians to Apple will aid in uncovering the sufficiency of Google’s production and serves greater purposes of transparency in discovery.  Google shall produce the search terms and custodians no later than 48 hours from this order. Once those terms and custodians are provided, no later than 48 hours from the tender, the parties shall meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

So, what do you think?  Should a third party be held to the same standard of transparency, absent a showing of deficient discovery response?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Welcome to LegalTech West Coast 2013! – eDiscovery Trends

Today is the start of LegalTech® West Coast 2013 (LTWC) and eDiscoveryDaily is here to report about the latest eDiscovery trends being discussed at the show.  Today, we will provide a description of some of the sessions related to eDiscovery to give you a sense of the topics being covered.  If you’re in the Los Angeles area, come check out the show – there are a number of sessions available and 62 exhibitors providing information on their products and services, including (shameless plug warning!) my company, CloudNine Discovery, which just announced yesterday that we will be previewing a brand new, browser-independent version of our linear review application, OnDemand®.

OnDemand’s completely new interface also includes several new analytics and filtering capabilities, and we will be exhibiting at booth #111 along with our partners, First Digital Solutions.  Come by and say hi!  End of shameless plug!  🙂

Perform a “find” on today’s LTWC conference schedule for “discovery” and you’ll get 24 hits.  So, there is plenty to talk about!  Sessions in the main conference tracks include:

10:30 AM – 12:00 PM:

International Privacy and its Impact on e-Discovery

As technology makes the world smaller and smaller, the US market is running into significant challenges when it comes to e-discovery. Other countries, specifically those in Asia, have much more lax rules on privacy and this can prove an ED nightmare. Simply put, what are the rules for international privacy and how will they impact your e-Discovery work? This panel of industry experts will examine the privacy implications you must consider when conducting e-discovery globally. With a special focus on Asia and those in the Southern California market, panelists will provide guidance and substantive information to address privacy and security concerns in relation to international e-Discovery.

Speakers are: Aaron Crews, Shareholder, Littler Mendelson; Therese Miller, Of Counsel, Shook, Hardy & Bacon and Cameron R. Krieger, eDiscovery Attorney, Latham & Watkins LLP.

A Panel of Experts: A Candid Conversation

A panel of expert judges and lawyers will discuss cutting edge ediscovery challenges. Bring your questions for prestigious members of the bench and bar.

Speakers are: Honorable Suzanne H. Segal, United States Chief Magistrate Judge, Central District of California; Honorable Jay C. Gandhi, United States Magistrate Judge, Central District of California; Jeffrey Fowler, Partner, O’Melveny & Myers LLP.  Moderator: David D. Lewis, Ph.D., IR Consultant.

2:00 – 3:15 PM:

The E-Discovery Debate

Is there a silver bullet when it comes to Technology Assisted Review (TAR)? What about Predictive Coding? There are those in the market who feel there is a one-stop solution for all organizations; others believe it is a case-by-case decision. This session will allow you to be the judge. Hear both sides of the equation to better apply the lessons learned to your own e-Discovery. The debaters will also cover the various points of view when it comes to the cloud, social media and security policies with regards to e-Discovery.

Speaker is: Jack Halprin, Head of Ediscovery, Enterprise, Google Inc.  Moderator: Hunter W. McMahon, JD, Senior Consultant, Driven, Inc.

Creative Ediscovery Problem Solving

There is no right answer. There is no wrong answer. There is only the BEST answer. This session will help you find the best possible solution to your ediscovery problems. In this brainstorm power session, you will:

  • Tackle the latest ediscovery problems
  • Develop action plans
  • Discuss meaningful ways to implement solutions

Speakers are: Linda Baynes, Associate Operations Director, Orrick, Herrington & Sutcliffe and Adam Sand, Associate General Counsel, Ancestry.com.  Moderator: John Reikes, Account Executive, Kroll Ontrack.

3:45 – 5:00 PM:

Judges’ Panel: The Current State of the ED Market

Join us for the always informative judges’ panel at LegalTech West Coast. We’ve assembled a panel of judges from the west coast to discuss their views of the ED market today and give their insight into where they see the market going. Make sure you are up on the current issues in the market and prepare your team for future complications and concerns by hearing what the bench is considering today.

5 Daunting Ediscovery Challenges: A Live Deliberation

This exercise will involve audience participation through a panel-led discussion. Be prepared to deliberate some of the most complex challenges facing ediscovery including:

  • Ever-changing technology rules
  • Unpredictable costs
  • Underutilization of Technology-assisted Review (TAR)
  • Primitive case analytics
  • Transactional ediscovery

Panelists are: Jack Halprin, Head of Ediscovery, Enterprise, Google Inc. and Adam Sand, Associate General Counsel, Ancestry.com.  Moderator: Chris Castaldini, Account Executive, Kroll Ontrack.

In addition to these, there are other eDiscovery-related sessions today.  For a complete description of all of today’s sessions, click here.

So, what do you think?  Are you planning to attend LTWC this year?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Adding Evidence Items with FTK Imager – eDiscovery Best Practices

A couple of weeks ago, we talked about the benefits and capabilities of Forensic Toolkit (FTK) Imager, which is a computer forensics software application provided by AccessData, as well as how to download your own free copy.  Then, last week, we discussed how to create a disk image.  This week, let’s discuss how to add evidence items with FTK Imager for the purpose of reviewing the contents of evidence items, such as physical drives or images that you’ve created.

Adding Evidence Items Using FTK Imager

Last week, I created an image of one of my flash drives to illustrate the process of creating an image.  Let’s take a look at that image as an evidence item.

From the File menu, you can select Add Evidence Item to add a single evidence item to the evidence tree.  You can also select Add All Attached Devices to add all of the attached physical and logical devices (if no media is present in an attached device, such as a CD- or DVD-ROM or a DVD-RW, the device is skipped).  In this case, we’ll add a single evidence item.

Source Evidence Type: The first step is to identify the source type that you want to review.  You can select Physical Drive or Logical Drive (as we noted before, a physical device can contain more than one logical drive).  You can also select an Image File to view an image file you created before or Contents of a Folder, to look at a specific folder.  In this example, we’ll select Image File to view the image of the flash drive we created and locate the source path of the image file.

The evidence tree will then display the item – you can keep adding evidence items if you want to look at more than one at once.  The top node is the selected item, from which you can drill down to the contents of the item.  This includes partitions and unpartitioned space, folders from the root folder on down and unallocated space, which could contain recoverable data.  Looking at the “Blog Posts” folder, you see a list of files in the folder, along with file slack.  File slack is the space between the end of a file and the end of the disk cluster in which it is stored.  It’s common because data rarely fills clusters exactly, and residual data can remain when a smaller file is written into the same cluster as a previous, larger file, leaving potentially meaningful remnants behind.
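
To make the file slack arithmetic concrete, here is a minimal sketch in Python; the 4,096-byte cluster size is an assumed value for illustration, not something reported by FTK Imager.

```python
import math

def file_slack_bytes(file_size: int, cluster_size: int = 4096) -> int:
    """Estimate file slack: the space between the end of a file and the
    end of its last allocated cluster (cluster_size is an assumed value)."""
    if file_size == 0:
        return 0
    clusters_allocated = math.ceil(file_size / cluster_size)
    return clusters_allocated * cluster_size - file_size

# Example: a 10,000-byte file on a volume with 4 KB clusters occupies
# 3 clusters (12,288 bytes), leaving 2,288 bytes of slack.
print(file_slack_bytes(10_000))  # 2288
```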

You’ll also notice that some of the files have an “X” on them – these are files that have been deleted, but not overwritten.  So, with FTK Imager, you can view not only active data but also inactive data in deleted files, file slack or unallocated space!  When you click on a file, you can view the bit-by-bit contents of the file in the lower right window.  You can also right-click on one or more files (or even an entire folder) to display a pop-up menu that enables you to export a copy of the file(s) and review them with the native software.  You can also select Add to Custom Content Image to begin compiling a list of files to put into an image, enabling you to selectively include specific files (instead of all of the files from the device) in the image file you create.
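
For readers who prefer a scripted analogue to the point-and-click workflow described above, here is a minimal sketch using The Sleuth Kit’s Python bindings (pytsk3), not FTK Imager itself.  It walks a raw (dd-style) image and flags entries whose metadata is unallocated, which is roughly what the “X” markers indicate; the image path is hypothetical, and a raw image (rather than an E01) is assumed.

```python
import pytsk3

IMAGE_PATH = "flash_drive.dd"  # hypothetical raw image created earlier

img = pytsk3.Img_Info(IMAGE_PATH)  # open the image file
fs = pytsk3.FS_Info(img)  # open its file system (pass offset= if the image has a partition table)

def walk(directory, path="/"):
    """Print every entry, marking deleted-but-not-overwritten files."""
    for entry in directory:
        name = entry.info.name.name.decode("utf-8", "replace")
        if name in (".", ".."):
            continue
        meta = entry.info.meta
        deleted = bool(meta and (meta.flags & pytsk3.TSK_FS_META_FLAG_UNALLOC))
        size = meta.size if meta else 0
        print(f"{'[DELETED] ' if deleted else ''}{path}{name} ({size} bytes)")
        if meta and meta.type == pytsk3.TSK_FS_META_TYPE_DIR:
            try:
                walk(entry.as_directory(), f"{path}{name}/")
            except OSError:
                pass  # directory contents may not be recoverable

walk(fs.open_dir(path="/"))
```

A script like this is just a quick way to inventory what an image contains; for actual collection and export, a forensic tool such as FTK Imager remains the more defensible path.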

Next time, we’ll discuss Add to Custom Content Image in more detail and discuss creating the custom content image of specific files you select.

For more information, go to the Help menu to access the User Guide in PDF format.

So, what do you think?  Have you used FTK Imager as a mechanism for eDiscovery collection?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Defendant Compelled by Court to Produce Metadata – eDiscovery Case Law

Remember when we talked about the issue of metadata spoliation resulting from “drag and drop” to collect files?  Here’s a case where it appears that method may have been used, resulting in a ruling against the producing party.

In AtHome Care, Inc. v. The Evangelical Lutheran Good Samaritan Society, No. 1:12-cv-053-BLW (D. ID. Apr. 30, 2013), Idaho District Judge B. Lynn Winmill granted the plaintiff’s motion to compel documents, ordering the defendant to identify and produce metadata for the documents in this case.

In this pilot project contract dispute between two health care organizations, the plaintiff filed a motion to compel after failing to resolve some of the discovery disputes with the defendant “through meet and confers and informal mediation with the Court’s staff”.  One of the disputes was related to the omission of metadata in the defendant’s production.

Judge Winmill stated that “Although metadata is not addressed directly in the Federal Rules of Civil Procedure, it is subject to the same general rules of discovery…That means the discovery of metadata is also subject to the balancing test of Rule 26(b)(2)(C), which requires courts to weigh the probative value of proposed discovery against its potential burden.” {emphasis added}

“Courts typically order the production of metadata when it is sought in the initial document request and the producing party has not yet produced the documents in any form”, Judge Winmill continued, but noted that “there is no dispute that Good Samaritan essentially agreed to produce metadata, and would have produced the requested metadata but for an inadvertent change to the creation date on certain documents.”

The plaintiff claimed that the system metadata was relevant because its claims focused on the unauthorized use and misappropriation of its proprietary information and whether the defendant used the plaintiff’s proprietary information to create their own materials and model, contending “that the system metadata can answer the question of who received what information when and when documents were created”.  The defendant argued that the plaintiff “exaggerates the strength of its trade secret claim”.

Weighing the value against the burden of producing the metadata, Judge Winmill ruled that “The requested metadata ‘appears reasonably calculated to lead to the discovery of admissible evidence.’ Fed.R. Civ.P. 26(b)(1). Thus, it is discoverable.” {emphasis added}

“The only question, then, is whether the burden of producing the metadata outweighs the benefit…As an initial matter, the Court must acknowledge that Good Samaritan created the problem by inadvertently changing the creation date on the documents. The Court does not find any degree of bad faith on the part of Good Samaritan — accidents happen — but this fact does weight in favor of requiring Good Samaritan to bear the burden of production…Moreover, the Court does not find the burden all that great.”

Therefore, the plaintiff’s motion to compel production of the metadata was granted.

So, what do you think?  Should a party be required to produce metadata?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Version 1 of the EDRM Enron Data Set NOW AVAILABLE – eDiscovery Trends

Last week, we reported from the Annual Meeting for the Electronic Discovery Reference Model (EDRM) group and discussed some significant efforts and accomplishments by each of the project teams within EDRM.  That included an update from the EDRM Data Set project, where an effort was underway to identify and remove personally-identifiable information (“PII”) data from the EDRM Data Set.  Now, version 1 of the Data Set is completed and available for download.

To recap, the EDRM Enron Data Set, sourced from the FERC Enron Investigation release made available by Lockheed Martin Corporation, has been a valuable resource for eDiscovery software demonstration and testing (we covered it here back in January 2011).  Initially, the data was made available for download on the EDRM site and was subsequently moved to Amazon Web Services (AWS).  However, after much recent discussion about PII data (including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers) available within FERC (and consequently the EDRM Data Set), the EDRM Data Set was taken down from the AWS site.

Yesterday, EDRM, along with Nuix, announced that they have republished version 1 of the EDRM Enron PST Data Set (which contains over 1.3 million items) after cleansing it of private, health and personal financial information. Nuix and EDRM have also published the methodology Nuix’s staff used to identify and remove more than 10,000 high-risk items.

As noted in the announcement, Nuix consultants Matthew Westwood-Hill and Ady Cassidy used a series of investigative workflows to identify the items (see the illustrative sketch after the list below), which included:

  • 60 items containing credit card numbers, including departmental contact lists that each contained hundreds of individual credit cards;
  • 572 items containing Social Security or other national identity numbers—thousands of individuals’ identity numbers in total;
  • 292 items containing individuals’ dates of birth;
  • 532 items containing information of a highly personal nature such as medical or legal matters.
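
For a general sense of what pattern-based PII identification can look like (a rough, hypothetical illustration, not a reproduction of the methodology Nuix published; the simplified regular expressions below would produce both false positives and misses against real data), here is how a first-pass scan of an item’s extracted text for credit card numbers, U.S. Social Security numbers and dates of birth might look in Python:

```python
import re

# Simplified, illustrative patterns -- NOT the actual Nuix methodology.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date_of_birth": re.compile(
        r"\b(?:DOB|date of birth)\b[:\s]*\d{1,2}/\d{1,2}/\d{2,4}", re.IGNORECASE
    ),
}

def scan_for_pii(text: str) -> dict:
    """Return candidate PII matches, grouped by category, for one item."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

# Example: an item matching any category would be flagged for review and removal.
sample = "Contact list -- DOB: 04/12/1965, SSN 123-45-6789"
print(scan_for_pii(sample))
```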

While the personal data was (and still is) available via FERC long before the EDRM version was created, completion of this process means that the many in the eDiscovery industry who rely on this highly useful data set for testing and software demonstration can now use a version that should be free of sensitive personal information!

For more information regarding the announcement, click here. The republished version 1 of the Data Set, as well as the white paper discussing the methodology is available at nuix.com/enron.  Nuix is currently applying the same methodology to the EDRM Enron Data Set v2 (which contains nearly 2.3 million items) and will publish to the same site when complete.

So, what do you think?  Have you used the EDRM Enron Data Set?  If so, do you plan to download the new version?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.