
eDiscovery Best Practices: 4 Steps to Effective eDiscovery With Software Analytics

 

I read an article from Texas Lawyer via Law.com entitled “4 Steps to Effective E-Discovery With Software Analytics” that has some interesting takes on project management principles related to eDiscovery, and I’ve interjected some of my thoughts into the analysis below.  A copy of the full article is located here.  The steps are as follows:

1. With the vendor, negotiate clear terms that serve the project's key objectives.  The article notes the importance of tying each collection and review milestone (e.g., collecting and imaging data; filtering data by file type; removing duplicates; processing data for review in a specific review platform; processing data to allow for optical character recognition (OCR) searching; and converting data into a tag image file format (TIFF) for final production to opposing counsel) to contract terms with the vendor.

The specific milestones will vary – for example, conversion to TIFF may not be necessary if the parties agree to a native production – so it’s important to know the size and complexity of the project, and choose only an experienced eDiscovery vendor who can handle the variations.
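Just as an illustration (this is my own sketch, not anything from the article – the milestone names, contract terms and field names are hypothetical), the negotiated milestones can be captured in a simple checklist that ties each phase to a contract term and tracks sign-off:

```python
# Hypothetical sketch: representing negotiated eDiscovery milestones as a simple
# checklist so each phase can be tied to a contract term and tracked to completion.
# Milestone names and contract terms are illustrative only, not from the article.

milestones = [
    {"phase": "Collection and imaging",  "contract_term": "Completed within 10 business days", "done": False},
    {"phase": "Filtering by file type",  "contract_term": "Filter report delivered to counsel", "done": False},
    {"phase": "De-duplication",          "contract_term": "Dedupe rate documented per custodian", "done": False},
    {"phase": "Processing for review",   "contract_term": "Loaded to the agreed review platform", "done": False},
    {"phase": "OCR of image-only files", "contract_term": "Searchable text for TIFFs and scanned PDFs", "done": False},
    {"phase": "TIFF conversion",         "contract_term": "Only if a native production is not agreed", "done": False},
]

def outstanding(items):
    """Return the phases that have not yet been signed off."""
    return [m["phase"] for m in items if not m["done"]]

print(outstanding(milestones))
```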

2. Collect and process data.  Key services that a vendor provides in this area include forensically sound data collection and culling of obviously unresponsive files (such as system files), which can drastically decrease overall review costs.  As we’ve noted many times on this blog, effective culling can save considerable review costs – each gigabyte (GB) culled can save $16-$18K in attorney review costs.
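To put a rough number on that, here is a minimal back-of-the-envelope sketch that simply reuses the $16K–$18K per GB range quoted above (the 100 GB collection size and 60% cull rate are hypothetical):

```python
# Back-of-the-envelope sketch of review-cost savings from culling, using the
# $16K-$18K per GB range cited above. The collection size and cull rate are hypothetical.

collection_gb = 100            # size of the collected data set
cull_rate = 0.60               # portion removed by file-type filtering and de-duplication
savings_per_gb = (16_000, 18_000)

culled_gb = collection_gb * cull_rate
low, high = (culled_gb * s for s in savings_per_gb)
print(f"Culling {culled_gb:.0f} GB could avoid roughly ${low:,.0f} to ${high:,.0f} in attorney review costs")
```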

The article notes that a hidden cost is the OCR process of translating extracted text into a searchable form and that it’s an optimal negotiation point with the vendor.  This may have been true when most collections were paper based, but as most collections today are electronic, the percentage of documents requiring OCR is considerably lower than it used to be.  However, it is important to be aware that some native files will be “image only”, such as TIFFs and scanned PDFs – those will require OCR to be effectively searched.
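For what it’s worth, here is a hedged sketch of how a collection might be triaged for OCR – extension-based only, and the extension list is my assumption; a real workflow would also check whether each PDF actually contains an embedded text layer:

```python
from pathlib import Path

# Rough sketch: flag files that are likely "image only" and will need OCR before they
# can be searched. This is extension-based triage only; a real workflow would also
# inspect PDFs for an embedded text layer. The extension list is an assumption.

IMAGE_ONLY_EXTENSIONS = {".tif", ".tiff", ".jpg", ".jpeg", ".png"}

def needs_ocr(path: Path) -> bool:
    return path.suffix.lower() in IMAGE_ONLY_EXTENSIONS

def ocr_candidates(root: str) -> list[Path]:
    """Walk a collection folder and return files that appear to need OCR."""
    return [p for p in Path(root).rglob("*") if p.is_file() and needs_ocr(p)]

if __name__ == "__main__":
    for f in ocr_candidates("./collection"):   # hypothetical collection folder
        print(f)
```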

3. Select a data and document review platform.  Factors such as ease of use, robustness, and reliability of analytic tools, support staff accessibility to fix software bugs quickly, monthly user and hosting fees, and software training and support fees should be considered when selecting a document review platform.

The article notes that a hidden cost is selecting a platform with which the firm’s litigation support staff has no experience as follow-up consultation with the vendor could be costly.  This can be true, though a good vendor training program and an intuitive interface can minimize or even eliminate this component.

The article also notes that to take advantage of the vendor’s more modern technology “[a] viable option is to use a vendor's review platform that fits the needs of the current data set and then transfer the data to the in-house system”.  I’m not sure why the need exists to transfer the data back – there are a number of vendors that provide a cost-effective solution appropriate for the duration of the case.

4. Designate clear areas of responsibility.  Doing so minimizes or eliminates inefficiencies in the project, and the article mentions the RACI matrix for determining who is responsible (the individuals performing each task, such as review or litigation support), accountable (the attorney in charge of discovery), consulted (the lead attorney on the case), and informed (the client).

Managing these areas of responsibility effectively is probably the biggest key to project success and the article does a nice job of providing a handy reference model (the RACI matrix) for defining responsibility within the project.
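For those who like to see the model in working form, here is a minimal sketch of a RACI matrix as a simple lookup table – the tasks and assignments are illustrative, loosely following the roles described above:

```python
# Minimal sketch of a RACI matrix as a lookup table. Tasks and assignments are
# illustrative, loosely following the roles described above (R = Responsible,
# A = Accountable, C = Consulted, I = Informed).

raci = {
    "Collect and image data":         {"R": "Litigation support", "A": "Discovery attorney", "C": "Lead attorney", "I": "Client"},
    "Cull and process data":          {"R": "Vendor",             "A": "Discovery attorney", "C": "Lead attorney", "I": "Client"},
    "First-pass document review":     {"R": "Review team",        "A": "Discovery attorney", "C": "Lead attorney", "I": "Client"},
    "Production to opposing counsel": {"R": "Vendor",             "A": "Discovery attorney", "C": "Lead attorney", "I": "Client"},
}

def who_is(role: str, task: str) -> str:
    """Return the person or group holding a given RACI role for a task."""
    return raci[task][role]

print(who_is("A", "Cull and process data"))   # -> Discovery attorney
```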

So, what do you think?  Do you have any specific thoughts about this article?   Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Trends: George Socha of Socha Consulting

 

This is the seventh of the LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and asked each of them the same three questions:

  1. What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?
  2. Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is George Socha.  A litigator for 16 years, George is President of Socha Consulting LLC, offering services as an electronic discovery expert witness, special master and advisor to corporations, law firms and their clients, and legal vertical market software and service providers in the areas of electronic discovery and automated litigation support. George has also been co-author of the leading survey on the electronic discovery market, The Socha-Gelbmann Electronic Discovery Survey.  In 2005, he and Tom Gelbmann launched the Electronic Discovery Reference Model project to establish standards within the eDiscovery industry – today, the EDRM model has become a standard in the industry for the eDiscovery life cycle and there are eight active projects with over 300 members from 81 participating organizations. George has a J.D. from Cornell Law School and a B.A. from the University of Wisconsin – Madison.

What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?

On the very “flip” side, the number one trend to date in 2011 is predictions about trends in 2011.  They are part of a consistent and long-term pattern, which is that many of these trend predictions are not trend predictions at all – they are marketing material and the prediction is “you will buy my product or service in the coming year”.

That said, there are a couple of things of note.  Since I understand you talked to Tom about Apersee, it’s worth noting that corporations are struggling with working through a list of providers to find out who provides what services.  You would figure that there is somewhere in the range of 500 or so total providers.  But, my ever-growing list, which includes both external and law firm providers, is at more than 1,200.  Of course, some of those are probably not around anymore, but I am confident that there are at least 200-300 that I do not yet have on the list.  My guess when the list shakes out is that there are roughly 1,100 active providers out there today.  If you look at information from the National Center for State Courts and the Federal Judicial Center, you’ll see that there are about 11 million new lawsuits filed every year.  I saw an article in the Cornell Law Forum a week or two ago which indicated that there are roughly 1.1 million lawyers in the country.  So, there are 11 million lawsuits, 1.1 million lawyers and 1,100 providers.  Most of those lawyers have no experience with eDiscovery and most of those lawsuits have no provider involved, which means eDiscovery is still very much an emerging market, not even close to being a mature market.  As fast as providers disappear, through attrition or acquisition, new providers enter the market to take their place.

Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?

{Interviewed on the second afternoon of LTNY}  Maybe this is overly optimistic, but part of what I’m seeing leading up to the conference, on various web sites and at the conference itself, is that a series of incremental changes taking place over a long period is finally leading to some radical differences.  One of those differences is that we finally are reaching a point where a number of providers can make the claim to being “end-to-end providers” with some legitimacy.  For as long as we’ve had the EDRM model, we’ve had providers that have professed to cover the full EDRM landscape, by which they generally have meant Identification through Production.  A growing number of providers not only cover that portion of the EDRM spectrum but have some ability to address Information Management, Presentation, or both.  By and large, those providers are getting there by building their software and services based on experience and learning over the past 8 to 10 to 12 years, introducing new offerings at the show that reflect that learned experience.

A couple of days ago, I only half-jokingly issued “the Dyson challenge” (as in the Dyson vacuum cleaner).  Every year, come January, our living room carpet is strewn with pine tree needles and none of the vacuum cleaners that we have ever had have done a good job of picking up those needles.  The Dyson vacuum cleaner claims its cyclones capture more dirt than anything, but I was convinced that could not include those needles.  Nonetheless I tried it, and to my surprise it worked like a charm!  I want to see the providers offering products able to perform at that high level, not just meeting but exceeding expectations.

I also see a feeling of excitement and optimism that wasn’t apparent at last year’s show.

What are you working on that you’d like our readers to know about?

As I mentioned, we have launched the Apersee web site, designed to allow consumers to find providers and products that fit their specific needs.  The site is in beta and the link is live.  It’s in beta because we’re still working on features to make it as useful as possible to customers and providers.  We’re hoping it’s a question of weeks, not months, before those features are implemented.  Once we go fully live, we will go two months with the system “wide open” – where every consumer can see all the provider and product information that any provider has put in the system.  After that, consumers will be able to see full provider and product profiles for providers who have purchased blocks of views.  Even if a provider does not purchase views, all selection criteria it enters are searchable, but search results will display only the provider’s name and website name.  Providers will be able to get stats on queries and how many times their information is viewed, but not detailed information as to which customers are connecting and performing the queries.

As for EDRM, we continue to make progress with an array of projects and a growing number of collaborative efforts, such as the work the Data Set group has done with TREC Legal and the work the Metrics group has done with the LEDES Committee. We not only want to see membership continue to grow, but we also want to continue to push for more active participation so we can make progress in the various working groups.  We’ve just met at the show here regarding the EDRM Testing pilot project to address testing standards.  There are very few guidelines for testing of electronic discovery software and services, so the Testing project will become a full EDRM project as of the EDRM annual meeting this May to begin to address the need for those guidelines.

Thanks, George, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

eDiscovery Trends: Jim McGann of Index Engines

 

This is the third of the LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and asked each of them the same three questions:

  1. What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?
  2. Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Jim McGann.  Jim is Vice President of Information Discovery at Index Engines.  Jim has extensive experience with eDiscovery and Information Management in the Fortune 2000 sector. He has worked for leading software firms, including Information Builders and the French-based engineering software provider Dassault Systemes.  In recent years he has worked for technology-based start-ups that provided financial services and information management solutions.

What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?

What we’re seeing is that companies are becoming a bit more proactive.  Over the past few years we’ve seen companies that have simply been reacting to litigation and it’s been a very painful process because ESI collection has been a “fire drill” – a very last minute operation.  Not because lawyers have waited and waited, but because the data collection process has been slow, complex and overly expensive.  But things are changing. Companies are seeing that eDiscovery is here to stay, ESI collection is not going away and the argument that it’s too complex or expensive to collect is not holding water. So, companies are starting to take a proactive stance on ESI collection and understanding their data assets up front.  We’re talking to companies that are not specifically responding to litigation; instead, they’re building a defensible policy that they can apply to their data sources and make data available on demand as needed.

Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?

{Interviewed on the first morning of LTNY}  Well, in walking the floor as people were setting up, you saw a lot of early case assessment last year; this year you’re seeing a lot of information governance.  That’s showing that eDiscovery is really rolling into the records management/information governance area.  On the CIO and General Counsel level, information governance is getting a lot of exposure and there’s a lot of technology that can solve the problems.  Litigation support’s role will be to help the executives understand the available technology and how it applies to information governance and records management initiatives.  You’ll see more information governance messaging, which is really a higher level records management message.

As for other trends, one that I’ll tie Index Engines into is ESI collection and pricing.  Per GB pricing is going down as the volume of data is going up.  Years ago, prices were a thousand dollars per GB, then hundreds of dollars per GB, etc.  Now the cost is close to tens of dollars per GB. To really manage large volumes of data more cost-effectively, the collection price had to become more affordable.  Because Index Engines can make data on backup tapes searchable very cost-effectively, for as little as $50 per tape, data on tape has become as easy to access and search as online data. Perhaps even easier, because it’s not on a live network.  Backup tapes have a bad reputation because people think of them as complex or expensive, but if you take away the complexity and expense (which is what Index Engines has done), then they really become “full point-in-time” snapshots.  So, if you have litigation from a specific date range, you can request that data snapshot (which is a tape) and perform discovery on it.  Tape is really a natural litigation hold when you think about it, and there is no need to perform the hold retroactively.

So, what does the ease with which information can be indexed from tape do to address the “inaccessible” argument for tape retrieval?  That argument has been eroding over the years, thanks to technology like ours.  And, you see decisions from judges like Judge Scheindlin saying “if you cannot find data in your primary network, go to your backup tapes”, indicating that they consider backup tapes the next source right after online networks.  You also see people like Craig Ball writing that backup tapes may be the most convenient and cost-effective way to get access to data.  If you had a choice between doing a “server crawl” in a corporate environment or just asking for a backup tape of that time frame, tape is the much more convenient and less disruptive option.  So, if your opponent goes to the judge and says it’s going to take millions of dollars to get the information off of twenty tapes, you must know enough to be able to stand in front of the judge and say “that’s not accurate”.  Those are old numbers.  There are court cases where parties have been instructed to use tapes as a cost-effective means of getting to the data.  Technology removes the inaccessible argument by making it easier, faster and cheaper to retrieve data from backup tapes.

The erosion of the accessibility burden is sparking the information governance initiatives. We’re seeing companies come to us for legacy data remediation or management projects, basically getting rid of old tapes. They are saying “if I’ve got ten years of backup tapes sitting in offsite storage, I need to manage that proactively and address any liability that’s there” (that they may not even be aware exists).  These projects reflect a proactive focus towards information governance by remediating those tapes and getting rid of data they don’t need.  Ninety-eight percent of the data on old tapes is not going to be relevant to any case.  The remaining two percent can be found and put into the company’s litigation hold system, and then they can get rid of the tapes.

How do incremental backups play into that?  Tapes are very incremental and repetitive.  If you’re backing up the same data over and over again, you may have 50+ copies of the same email.  Index Engines technology automatically gets rid of system files and applies a standard MD5 hash to dedupe.  Also, by using tape cataloguing, you can read the header and say “we have a Saturday full backup and five incrementals during the week, then another Saturday full backup”. You can ignore the incremental tapes and just go after the full backups.  That’s a significant percentage of the tapes you can ignore.
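To illustrate those two ideas – hashing to dedupe and skipping incremental tapes in favor of the fulls – here is a rough sketch; the catalog structure and field names are my assumptions, not Index Engines’ actual interfaces:

```python
import hashlib
from pathlib import Path

# Sketch of two ideas from the interview: de-duplicating restored files by MD5 hash,
# and keeping only the full backups from a tape catalog. The catalog structure and
# field names are assumptions for illustration, not Index Engines' actual interfaces.

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def dedupe(files: list[Path]) -> list[Path]:
    """Keep one copy of each unique file, as identified by its MD5 hash."""
    seen, unique = set(), []
    for f in files:
        digest = md5_of(f)
        if digest not in seen:
            seen.add(digest)
            unique.append(f)
    return unique

# Hypothetical tape catalog: ignore the incrementals and go after the weekly fulls.
catalog = [
    {"tape": "T001", "type": "full"},
    {"tape": "T002", "type": "incremental"},
    {"tape": "T003", "type": "incremental"},
    {"tape": "T004", "type": "full"},
]
full_backups = [t["tape"] for t in catalog if t["type"] == "full"]
print(full_backups)   # -> ['T001', 'T004']
```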

What are you working on that you’d like our readers to know about?

Index Engines just announced today a partnership with LeClairRyan. This partnership combines legal expertise for data retention with the technology that makes applying the policy to legacy data possible.  For companies that want to build policy for the retention of legacy data and implement the tape remediation process, we have advisors like LeClairRyan that can provide legacy data consultation and oversight.  By proactively managing the potential liability of legacy data, you are also saving the IT costs of exploring that data.

Index Engines  also just announced a new cloud-based tape load service that will provide full identification, search and access to tape data for eDiscovery. The Look & Learn service, starting at $50 per tape, will provide clients with full access to the index of their tape data without the need to install any hardware or software. Customers will be able to search the index and gather knowledge about content, custodians, email and metadata all via cloud access to the Index Engines interface, making discovery of data from tapes even more convenient and affordable.

Thanks, Jim, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

eDiscovery Trends: Alon Israely, Esq., CISSP of BIA

 

This is the second of the LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and asked each of them the same three questions:

  1. What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?
  2. Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Alon Israely.  Alon is a Senior Advisor in BIA’s Advisory Services group and when he’s not advising clients on e-discovery issues he works closely with BIA’s product development group for its core technology products.  Alon has over fifteen years of experience in a variety of advanced computing-related technologies and has consulted with law firms and their clients on a variety of technology issues, including expert witness services related to computer forensics, digital evidence management and data security.

What do you consider to be the current significant trends in eDiscovery on which people in the industry are, or should be, focused?

I think one of the important trends for corporate clients and law firms is cost control, whether it’s trying to minimize the amount of project management hours that are being billed or the manner in which the engagement is facilitated.  I’m not suggesting going full-bore necessarily, but taking baby steps to help control costs is a good approach.  I don’t think it’s only about bringing prices down, because I think that the industry in general has been able to do that naturally well.  But, I definitely see a new focus on the manner in which costs are managed and outsourced.  So, very specifically, scoping correctly is key: making sure you’re using the right tool for the right job, and keeping efficiencies (whether that’s on the vendor side or the client side) by doing things such as not having five phone calls for a meeting to figure out what the key words are for field searching, or not going out and imaging every drive before deciding what’s really needed. Bringing simple efficiencies to the mechanics of doing e-discovery saves tons of money in unnecessary legal, vendor and project management fees.  You can do things that are about creating efficiencies, but are not necessarily changing the process or changing the pricing.

I also see trends in technology, using more focused tools and different tools to facilitate a single project.  Historically, parties would hire three or four different vendors for a single project, but today it may be just one or two vendors or maybe even no vendors (just the law firm); but, it’s the use of the right technologies for the right situations – maybe not just one piece of software, but leveraging several for different parts of the process.  Overall, I foresee fewer vendors per project, but more vendors increasing their stable of tools.  So, whereas a vendor may have had a review tool and one way of doing collection, now they may have two or three review tools, including an ECA tool, and one or two ways of doing collections. They have a toolkit from which they can choose the best set of tools to bring to the engagement.  Because they have more tools to market, vendors can have the right tool in their back pocket, whereas before a tool belonged to just one service provider, so you either bought from them or you just didn’t have it.

Which of those trends are evident here at LTNY, which are not being talked about enough, and/or what are your general observations about LTNY this year?

{Interviewed on the first morning of LTNY} I think there is either a little or a lot of disconnect – depending on how aggressive I want to be with my opinion – between what they’re speaking about in the panels and what we’re seeing on the floor.  But, I think that’s OK in that the conference itself is usually a little bit ahead of the curve with respect to topics, and the technology will catch up.  You have topics such as predictive coding and social networking related issues – those are two big ones that you’ll see.  I think, for example, there are very few companies that have a solution for social networking, though we happen to have one.  And, predictive coding is the same scenario.  You have a lot of providers that talk about it, but you have a handful that actually do it, and you have probably even fewer than that who do it right.  I think that next year you’ll see many predictive coding solutions and technologies and many more tools that have that capability built into them.  So, on the conference side, there is one level of information and on the floor side, a different level.

What are you working on that you’d like our readers to know about?

BIA has a new product called TotalDiscovery.com, the industry’s first SaaS (software-as-a-service), on-demand collection technology that provides defensible collections.  We just rolled it out, we’re introducing it here at LegalTech and we’re starting a technology preview and signing up people who want to use the application or try it.  It’s specifically for attorneys, corporations, service providers – anyone who’s in the business and needs a tool for defensible data collection performed with agility (always hard to balance) – so without having to buy software or have expert training, users simply login or register and can start immediately.  You don’t have to worry about the traditional business processes to get things set up and started.  If you think about it on the collections side of e-discovery, it means that the client’s CEO or VP of Marketing can call you up and say “I’m leaving, I have my PST here, can you just come get it?” and you can facilitate that process through the web, download an application, walk through a wizard, collect it defensibly, encrypt it and then deliver a filtered set, as needed, for review.

The tool is designed to collect defensibly and to move the collected data – or some subset of that data – to delivery; from there, you select your review tool of choice and we hand the data off to it.  So, we’re not trying to be everything; we’re focused on automating the left side of the EDRM.  We can load to certain tools, having been a service provider for ten years, and we’re connecting with partners so that we can do the handoff, so when the client says “I’m ready to deliver my data”, they can choose OnDemand or Concordance or another review tool, and then either directly send it or the client can download and ship it.  We’re not trying to be a review tool and not trying to be an ECA tool that helps you find the needle in the haystack; instead, we’re focused on collecting the data, normalizing it, cataloguing it and handing it off for the attorneys to do their work.

Thanks, Alon, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

eDiscovery Trends: 2011 Predictions — By The Numbers

 

Comedian Nick Bakay always ends his Tale of the Tape skits where he compares everything from Married vs. Single to Divas vs. Hot Dogs with the phrase “It's all so simple when you break things down scientifically.”

The late December/early January time frame is always when various people in eDiscovery make their annual predictions as to what trends to expect in the coming year.  We’ll have some of our own in the next few days (hey, the longer we wait, the more likely we are to be right!).  However, before stating those predictions, I thought we would take a look at other predictions and see if we could spot some common trends among them, so I “googled” for 2011 eDiscovery predictions and organized the predictions into common themes.  I found serious predictions here, here, here, here and here.  Oh, also here and here.

A couple of quick comments: 1) I had NO IDEA how many times predictions are re-posted by other sites, so it took some work to isolate each unique set of predictions.  I even found two sets of predictions from ZL Technologies, one with twelve predictions and another with seven, so I had to pick one set and I chose the one with seven (sorry, eWEEK!). If I have failed to accurately attribute the original source for a set of predictions, please feel free to comment.  2) This is probably not an exhaustive list of predictions (I have other duties in my “day job”, so I couldn’t search forever), so I apologize if I’ve left anybody’s published predictions out.  Again, feel free to comment if you’re aware of other predictions.

Here are some of the common themes:

  • Cloud and SaaS Computing: Six out of seven “prognosticators” indicated that adoption of Software as a Service (SaaS) “cloud” solutions will continue to increase, which will become increasingly relevant in eDiscovery.  No surprise here, given last year’s IDC forecast for SaaS growth and many articles addressing the subject, including a few posts right here on this blog.
  • Collaboration/Integration: Six out of seven “augurs” also had predictions related to various themes associated with collaboration (more collaboration tools, greater legal/IT coordination, etc.) and integration (greater focus by software vendors on data exchange with other systems, etc.).  Two people specifically noted an expectation of greater eDiscovery integration within organization governance, risk management and compliance (GRC) processes.
  • In-House Discovery: Five “pundits” forecasted eDiscovery functions and software will continue to be brought in-house, especially on the “left-side of the EDRM model” (Information Management).
  • Diverse Data Sources: Three “soothsayers” presaged that sources of data will continue to be more diverse, which shouldn’t be a surprise to anyone, given the popularity of gadgets and the rise of social media.
  • Social Media: Speaking of social media, three “prophets” (yes, I’ve been consulting my thesaurus!) expect social media to continue to be a big area to be addressed for eDiscovery.
  • End to End Discovery: Three “psychics” also predicted that there will continue to be more single-source end-to-end eDiscovery offerings in the marketplace.

The “others receiving votes” category (two predicting each of these) included maturing and acceptance of automated review (including predictive coding), early case assessment moving toward the Information Management stage, consolidation within the eDiscovery industry, more focus on proportionality, maturing of global eDiscovery and predictive/disruptive pricing.

Predictive/disruptive pricing (via the respective blogs of Kriss Wilson of Superior Document Services and Charles Skamser of eDiscovery Solutions Group) is a particularly intriguing prediction to me because data volumes are continuing to grow at an astronomical rate, and greater volumes lead to greater costs.  Creativity will be key in how companies deal with the larger volumes effectively, and pressures will become greater for providers (even, dare I say, review attorneys) to price their services more creatively.

Another interesting prediction (via ZL Technologies) is that “Discovery of Databases and other Structured Data will Increase”, which is something I’ve expected to see for some time.  I hope this is finally the year for that.

Finally, I said that I found serious predictions and analyzed them; however, there are a couple of not-so-serious sets of predictions here and here.  My favorite prediction is from The Posse List, as follows: “LegalTech…renames itself “EDiscoveryTech” after Law.com survey reveals that of the 422 vendors present, 419 do e-discovery, and the other 3 are Hyundai HotWheels, Speedway Racers and Convert-A-Van who thought they were at the Javits Auto Show.”

So, what do you think?  Care to offer your own “hunches” from your crystal ball?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Case Law: Pension Committee

This holiday week, we’re taking a look back at some of the cases which have had the most significance (from an eDiscovery standpoint) of the year.  The first case we will look at is The Pension Committee of the University of Montreal Pension Plan v. Banc of America Securities, LLC, 2010 U.S. Dist. LEXIS 4546 (S.D.N.Y. Jan. 15, 2010) (as amended May 28, 2010), commonly referred to as “Pension Committee”.

In “Pension Committee”, New York District Court Judge Shira Scheindlin defined negligence, gross negligence, and willfulness from an eDiscovery standpoint, cementing her status as the most famous “Judge Scheindlin” in New York (as opposed to “Judge Judy” Sheindlin, who spells her last name without a “c”).  Judge Scheindlin titled her 85-page opinion Zubulake Revisited: Six Years Later.

This case addresses the preservation obligations of the plaintiffs and the information that should have been preserved after the lawsuit was filed. Judge Scheindlin addresses, in considerable detail, the levels of culpability — negligence, gross negligence, and willfulness — in the electronic discovery context.

Issues that constituted negligence according to Judge Scheindlin’s opinion included:

  • Failure to obtain records from all employees (some of whom may have had only a passing encounter with the issues in the litigation), as opposed to key players;
  • Failure to take all appropriate measures to preserve ESI;
  • Failure to assess the accuracy and validity of selected search terms.

Issues that constituted gross negligence or willfulness according to Judge Scheindlin’s opinion included:

  • Failure to issue a written litigation hold;
  • Failure to collect information from key players;
  • Destruction of email or backup tapes after the duty to preserve has attached;
  • Failure to collect information from the files of former employees that remain in a party’s possession, custody, or control after the duty to preserve has attached.

The opinion also addresses 1) responsibility to establish the relevance of evidence that is lost as well as responsibility to prove that the absence of the missing material has caused prejudice to the innocent party, 2) a novel burden-shifting test in addressing burden of proof and severity of the sanction requested and 3) guidance on the important issue of preservation of backup tapes.

The result: spoliation sanctions against 13 plaintiffs based on their alleged failure to timely issue written litigation holds and to preserve certain evidence before the filing of the complaint.

Judge Scheindlin based sanctions on the conduct and culpability of the spoliating party, regardless of the relevance of the documents destroyed, which has caused some to label the opinion as “draconian”.  In at least one case, Orbit One Communications Inc. v. Numerex Corp., 2010 WL 4615547 (S.D.N.Y. Oct. 26, 2010), Magistrate Judge James C. Francis concluded that sanctions for spoliation must be based on the loss of at least some information relevant to the dispute.  It will be interesting to see how other cases refer to the Pension Committee case down the road.

So, what do you think?  Is this the most significant eDiscovery case of 2010?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Tips: SaaS and eDiscovery – More Top Considerations

Friday, we began talking about the article regarding Software as a Service (SaaS) and eDiscovery entitled Top 7 Legal Things to Know about Cloud, SaaS and eDiscovery on CIO Update.com, written by David Morris and James Shook from EMC.  The article, which relates to storage of ESI within cloud and SaaS providers, can be found here.

The article looks at key eDiscovery issues that must be addressed for organizations using public cloud and SaaS offerings for ESI, and Friday’s post looked at the first three issues.  Here are the remaining four issues from the article (requirements in bold are quoted directly from the article):

4. What if there are technical issues with e-discovery in the cloud?  The article discusses how identifying and collecting large volumes of data can have significant bandwidth, CPU, and storage requirements and that the cloud provider may have to do all of this work for the organization.  It pays to be proactive, determine potential eDiscovery needs for the data up front and, to the extent possible, negotiate eDiscovery requirements into the agreement with the cloud provider.

5. If the cloud/SaaS provider loses or inadvertently deletes our information, aren’t they responsible? As noted above, if the agreement with the cloud provider includes eDiscovery requirements for the cloud provider to meet, then it’s easier to enforce those requirements.  Currently, however, these agreements rarely include these types of requirements.  “Possession, custody or control” over the data points to the cloud provider, but courts usually focus their efforts on the named parties in the case when deciding on spoliation claims.  Sounds like a potential for third party lawsuits.

6. If the cloud/SaaS provider loses or inadvertently deletes our information, what are the potential legal ramifications?  If data was lost because of the cloud provider, the organization will probably want to establish that they’re not at fault. But it may take more than establishing who deleted the data – the organization may need to demonstrate that it acted diligently in selecting the provider, negotiating terms with established controls and notifying the provider of hold requirements in a timely manner.  Even then, there is no case law guidance as to whether demonstrating such diligence would shift that responsibility, and most agreements with cloud providers will limit potential damages for loss of data or data access.

7. How do I protect our corporation from fines and sanction for ESI in the cloud?  The article discusses understanding what ESI is potentially relevant and where it’s located.  This can be accomplished, in part, by creating a data map for the organization that covers data in the cloud as well as data stored within the organization.  Again, covering eDiscovery and other compliance requirements with the provider when negotiating the initial agreement can make a big difference.  As always, be proactive to minimize issues when litigation strikes.

Let’s face it, cloud and SaaS solutions are here to stay and they are becoming increasingly popular for organizations of all sizes to avoid the software and infrastructure costs of internal solutions.  Being proactive and including corporate counsel up front in decisions related to SaaS selections will enable your organization to avoid many potential problems down the line.

So, what do you think?  Does your company have mechanisms in place for discovery of your cloud data?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Tips: SaaS and eDiscovery – Top Considerations

 

There was an interesting article this week regarding Software as a Service (SaaS) and eDiscovery entitled Top 7 Legal Things to Know about Cloud, SaaS and eDiscovery on CIO Update.com, written by David Morris and James Shook from EMC.  The article, which relates to storage of ESI within cloud and SaaS providers, can be found here.

The authors note that “[p]roponents of the cloud compare it to the shift in electrical power generation at the turn of the century [1900’s], where companies had to generate their own electric power to run factories.  Leveraging expertise and economies of scale, electric companies soon emerged and began delivering on-demand electricity at an unmatched cost point and service level,” which is what cloud proponents argue the SaaS model is doing for IT services.

However, the decision to move to SaaS solutions for IT services doesn’t just affect IT – there are compliance and legal implications to consider as well.  Because the parties to a case have a duty to identify, preserve and produce relevant electronically stored information (ESI), information for those parties stored in a cloud infrastructure or SaaS application is subject to those same requirements, even though it isn’t necessarily in their total control.  With that in mind, the article looks at key eDiscovery issues that must be addressed for organizations using public cloud and SaaS offerings for ESI, as follows (requirements in bold are quoted directly from the article):

  1. Where is ESI actually located when it is in the ethereal cloud or SaaS application?  It’s important to know where your data is actually stored.  Because SaaS providers are expected to deliver data on demand at any time, they may store your data in more than one data center for redundancy purposes.  Data centers could be located outside of the US, so different compliance and privacy requirements may come into play if there is a need to produce data from these locations.
  2. What are the legal implications of e-discovery in the cloud? Little case law exists on the subject, but it is expected that the responsibility for timely preservation, collection and production of the data remains with the organization that is a party to the lawsuit, even though that data may be in the direct control of the cloud provider.
  3. What happens if a lawsuit is in the US but one company’s headquarters is in another country? Or what if the data is in a country where the privacy rules are different?  The article references one case – AccessData Corp. v. ALSTE Technologies GMBH, 2010 WL 318477 (D. Utah Jan. 21, 2010) – where the German company ALSTE cited German privacy laws as preventing it from collecting relevant company emails that were located in Germany (the US court compelled production anyway).  So, jurisdictional factors can come into play when cloud data is housed in a foreign jurisdiction.

This is too big a topic to cover in one post, so we’ll cover the other four eDiscovery issues to address in Monday’s post.  Let the anticipation build!

So, what do you think?  Does your company have ESI hosted in the cloud?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Data Mapping for Litigation Readiness

 

Federal Rule 26(f)–the Meet and Confer rule–requires the parties in litigation to meet at an early stage to discuss the information they have and what they will share.   The parties must meet “at least 21 days before a scheduling conference is to be held or a scheduling order is due under Rule 16(b)”, which states that the “judge must issue the scheduling order…within the earlier of 120 days after any defendant has been served with the complaint or 90 days after any defendant has appeared.”.

That means the meet and confer is generally required no later than about 99 days after a defendant is served (120 days minus the 21-day buffer) – and possibly sooner if the 90-day appearance trigger applies – and, at that meeting, the parties must disclose to each other “a copy of, or a description by category and location of, all documents, electronically stored information and tangible things that are in the possession, custody or control of the party and that the disclosing party may use to support its claims or defenses” (Rule 26(a)(1)(A)(ii)).  That’s not much time to develop a thorough understanding of what data may be potentially responsive to the case.

The best way for organizations to address this potential issue is proactively, before litigation even begins, by preparing a data map.  As the name implies, a data map simply provides a guide for legal and IT to the location of data throughout the company and important information about that data, such as the business units, processes and technology responsible for maintaining the data, as well as retention periods for that data.  An effective data map should enable in-house counsel to identify the location, accessibility and format of potentially responsive electronically stored information (ESI).
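To make that concrete, here is a minimal sketch of what a couple of data map entries might capture – the fields and values are hypothetical, simply mirroring the kinds of information described above:

```python
# Minimal sketch of data map entries. Fields and values are hypothetical; they mirror
# the kinds of information described above (owner, location, format, accessibility
# and retention for each data source).

data_map = [
    {
        "source": "Exchange email",
        "business_unit": "All",
        "it_owner": "Messaging team",
        "location": "On-premises mail servers, primary data center",
        "format": "Native mailbox / PST export",
        "accessibility": "Online",
        "retention": "3 years, then archived",
    },
    {
        "source": "Accounting system",
        "business_unit": "Finance",
        "it_owner": "ERP administrators",
        "location": "Hosted SaaS provider",
        "format": "Structured database",
        "accessibility": "Online, export on request",
        "retention": "7 years",
    },
]

def sources_for(unit: str) -> list[str]:
    """List the data sources relevant to a business unit, including company-wide sources."""
    return [e["source"] for e in data_map if e["business_unit"] in (unit, "All")]

print(sources_for("Finance"))   # -> ['Exchange email', 'Accounting system']
```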

Four tips to creating and maintaining an effective data map:

  • Obtain Early “Buy-In”: Various departments within the organization have key information about their data, so it’s important to obtain early “buy-in” with each of them to ensure full cooperation and a comprehensive data map,
  • Document and Educate: It’s important to develop logical and comprehensive practices for managing data and provide regular education to employees (especially legal) about the organization’s data management policies so that data is where it is supposed to be,
  • Communicate Regularly: Groups need to communicate regularly so that new initiatives that may affect existing data stores or create new ones are known by all,
  • Update Periodically: Technology is constantly evolving, employees come and go and terminologies change.  Data maps must be reviewed and updated regularly to stay accurate.  If you created a data map two years ago and haven’t updated it, it probably doesn’t address new social media sources.

Preparing and maintaining a data map for your organization puts you in a considerably better position to respond quickly when litigation hits.

So, what do you think?  Does your organization have a data map?  Please share any comments you might have or if you’d like to know more about a particular topic.

Thought Leader Q&A: Alon Israely of BIA

 

Tell me about your company and the products you represent.  BIA is a full solution E-Discovery provider. Our core competencies are around E-Discovery Collections and Processing, but we offer the full spectrum of services around E-Discovery.   For almost a decade, BIA has been developing and implementing defensible, technology driven solutions that reduce the costs and risks related to litigation, regulatory compliance and internal audits.  BIA provides software and services to Fortune 1000, Global 2000 companies and Am Law 100 law firms. We are headquartered in New York City, and have offices in San Francisco, Seattle, Washington DC and in Southwest Michigan. We also maintain digital evidence response units throughout the United States, Europe, Asia, and the Middle East.

BIA’s products are defensible and cost effective, offering remote collections with DiscoveryBOT™, fast e-discovery processing with our TD Grid system and automated and secure legal hold software with Solis™.  For more about BIA’s products, click here.

What is the best way for lawyers and litigation support professionals to take control of their eDiscovery?  The best way for litigation support professionals to take control of their e-discovery is to scope projects correctly.  It is important to understand that one size does not fit all in e-discovery.  That is, there are many tools and service providers out there – it is important to focus (at the beginning) on what needs to be accomplished from a legal and IT perspective first and then to determine which technologies and methods fit that strategy best.

What is a good way to achieve predictability in eDiscovery costs?  Most of the cost analysis that exists in e-discovery today is focused on the Review side, where the data has already been collected and perhaps culled. Yet, at that point there are still too many documents, most of which are not responsive. With a focus on the left side of the EDRM, e-discovery costs are visible early on in the process.  For example, using a good (light-touch) collection tool and method to lock data down is one of the best ways to control e-discovery costs – that is, doing the right collection early on and getting the right metrics from those collections allows you to analyze that data (even at a high level, without incurring processing and other costs), which can then help the attorneys and the institutional client determine costs early in the process, and in a more predictable manner.

Is there a way to perform self collection in a defensible manner?  Yes.  Use the right tools and methods and, importantly, have those tools and methods vetted (reviewed and approved) by e-discovery collection professionals.  Defensible self-collections do NOT mean that the custodian or the IT people are left to perform the collection on their own without the right plan behind them.  There are best practices that should be followed and there are some tools that maintain the integrity of the data.  Make sure that those best practices and tools are used (having been scoped correctly – see the response above) by professionals, or at least used by staff and peer-reviewed or monitored by professionals.  Also, rely on custodians for good ESI identification – that is, the custodians (users) usually know better than anyone where they maintain records – so, using custodian questionnaires early on will help identify the systems most likely to be relevant, which goes to diligence (an important factor in defensible collections).  The professional can then work in tandem with the custodian to gather the data in a manner which will ensure the evidentiary integrity of the data.  At BIA we have been following those methods for years and have been very successful with our clients, the Courts and opposing parties at defending those ways of identifying and collecting ESI.
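One common way collection tools help “maintain the integrity of the data” is by hashing every file at collection time so the copies can later be verified against that record. Here is a minimal sketch of the idea – the manifest format is my own assumption, not BIA’s actual method:

```python
import hashlib
import json
from pathlib import Path

# Sketch of one way collection tools help preserve evidentiary integrity: record a
# hash of every file at collection time, then verify the copies against that manifest
# later. The manifest format is an assumption for illustration, not BIA's method.

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str, manifest_path: str) -> None:
    """Hash every file under root and write the results to a JSON manifest."""
    manifest = {str(p): sha256_of(p) for p in Path(root).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify(manifest_path: str) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [p for p, digest in manifest.items() if sha256_of(Path(p)) != digest]

# Usage (hypothetical paths):
# build_manifest("./custodian_data", "manifest.json")
# print(verify("manifest.json"))   # an empty list means nothing has changed
```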

What is the importance of the left side of the EDRM model?  The left side is where it all starts with e-discovery – that is, ESI collections are usually the most affordable parts of the overall e-discovery process and are arguably the most important – that is, “garbage in/garbage-out.”  Because the subsequent parts of the e-discovery process (i.e., the “right-side of the EDRM”) rely on the data identified and gathered in the early parts of the process, it is imperative that those tasks and activities performed for the “left side of EDRM” are done in the correct manner – that is, maintaining the evidentiary integrity of the data collected.  Also, the left side of the EDRM includes preserving data and notifying custodians of their obligations to preserve – which is a piece critical to defensible e-discovery – especially in light of Pension Committee and some other recent cases.  As for the money piece, the left side of the EDRM is an area where much of the planning can occur for the rest of the process without incurring substantial costs – that planning goes a long way to ascertaining the real costs and timing with respect to the remainder of the e-discovery process.

About Alon Israely

Alon Israely has over fifteen years of experience in a variety of advanced computing-related technologies. Alon is a Senior Advisor in BIA’s Advisory Services group and currently oversees BIA’s product development for its core technology products. Prior to BIA, Alon consulted with law firms and their clients on a variety of technology issues, including expert witness services related to computer forensics, digital evidence management and data security. Prior to that, he was a senior member of several IT teams working on projects for Fortune 500 companies related to global network architecture and data migration projects for enterprise information systems. As a pioneer in the field of digital evidence collection and handling, Alon has worked on a wide variety of matters, including several notable financial fraud cases; large-scale multi-party international lawsuits; and corporate matters involving the SEC, FTC, and international regulatory boards.  Alon holds a B.A. from UCLA and received his J.D. from New York Law School with an emphasis in Telecommunications Law. He is a member of the New York State Bar as well as several legal and computer forensic associations.