
eDiscovery Daily is Now an Education Partner of EDRM!

If you’re a regular reader of this blog, you know that we have frequently covered EDRM announcements, ranging from new practical tools (such as those here, here and here) to new partnerships (such as this one here). We love EDRM because they regularly have something interesting to announce, which gives us plenty of topic ideas for this blog. Now, EDRM’s latest announcement includes eDiscovery Daily: we are an Education partner of EDRM!

Having participated in EDRM since 2006, I have seen firsthand its rise to become the leading standards organization in eDiscovery, and I have had the pleasure of attending several EDRM annual and mid-year meetings since then. The Electronic Discovery Reference Model has become the most recognized framework guide in eDiscovery, but it’s not the only model that EDRM members have collaborated to create: five other models have been developed and updated over the years, along with considerable other resources, including an industry standard data set, budget calculators and a Model Code of Conduct for providers and attorneys (to name a few). Think EDRM has been busy over the 4+ years that eDiscovery Daily has existed? We have published 186 posts (and counting) related to EDRM activities and work product.

As mentioned in the announcement, eDiscovery Daily will be published daily on the EDRM site – here’s the link – and you can already find the last few posts there. We are also listed as one of EDRM’s partners here, along with ACEDS and fellow Education partners, Bryan University and University of Florida Levin College of Law.

As part of the new partnership, eDiscovery Daily will also provide exclusive content to EDRM, including articles sharing real-life examples of organizations using EDRM resources in their own eDiscovery workflows. Look for those to appear soon on the EDRM site. Given our commitment to education at eDiscovery Daily, we are excited to team with EDRM to promote best practices and standards and to continue educating the legal community on managing ESI more effectively in discovery.

By the way, as we noted a couple of weeks ago, EDRM also provides several free webinars per year. Tomorrow, they are presenting another one: Getting Cloud Data from the New Big Three: Google, iCloud & MS Office 365, sponsored by Zapproved. Click here for more information and for the link to register.

So, what do you think? Is an alliance between EDRM and eDiscovery Daily good for the eDiscovery industry? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscoveryDaily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

EDRM Publishes Updated Statistical Sampling Guide with Public Comments: eDiscovery Trends

In 2012, we covered EDRM’s initial announcement of a new guide called Statistical Sampling Applied to Electronic Discovery and we covered the release of the updated guide (Release 2) back in December. That version of the guide has now been updated with feedback from the comment period.

The public comment period for EDRM’s Statistical Sampling Applied to Electronic Discovery, Release 2, published on the EDRM website here, concluded on January 9, 2015, and EDRM announced the release of the updated guide today.

The guide ranges from introductory explanations of basic statistical terms (such as sample size, margin of error and confidence level) to more advanced concepts such as the binomial distribution and the hypergeometric distribution. Bring your brain.

The guide includes an accompanying Excel spreadsheet, EDRM Statistics Examples 20150123.xlsm, which can be downloaded from the page and which implements relevant calculations supporting Sections 7, 8 and 9 of the 10-section guide. The spreadsheet was developed using Microsoft Excel 2013 and is an .xlsm file, meaning that it contains VBA code (macros), so you may have to adjust your security settings in order to view and use them. You’ll also want to read the guide first (especially sections 7 through 10), as the Excel workbook is a bit cryptic.
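
If you want a feel for what those calculations do before opening the workbook, the core sample-size math is compact enough to sketch in a few lines of code. Here’s a minimal Python example (my own illustration, not code from the guide or its spreadsheet) of the standard calculation for estimating a proportion, with a finite population correction:

```python
# A minimal sketch (mine, not the EDRM spreadsheet) of the classic
# sample-size calculation for estimating a proportion at a given
# confidence level and margin of error.
import math
from statistics import NormalDist

def sample_size(population: int, confidence: float = 0.95,
                margin_of_error: float = 0.02, p: float = 0.5) -> int:
    """p=0.5 is the most conservative assumption about prevalence."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95% confidence
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction: smaller collections need smaller samples.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(100000))  # roughly 2,345 documents at 95% confidence, +/-2%
```

The guide itself goes well beyond this simple formula (including when approximations like this are and aren’t appropriate), which is exactly why it’s worth reading before you rely on the numbers.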

Even though the public comment period has ended, comments can still be posted at the bottom of the EDRM Statistical Sampling Release 2 page, or emailed to the group at sampling@edrm.net or you can fill out their comment form here.

As I noted back in December, the old guide, from April of 2012, is still on the EDRM site. You’ll want to make sure you go to the new updated guide, located here.

So, what do you think? Do you perform statistical sampling to verify results within your eDiscovery process? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscovery Daily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Here’s Another Resource from EDRM That You May Not Know About: eDiscovery Webinars

If you’re like me, you get a lot of email invites to webinars for all sorts of topics. Most are free and I wish I could attend them all, but I have a day job (beyond my role as editor of eDiscoveryDaily, I’m also VP of Professional Services for CloudNine), so I don’t have a lot of free time and have to pass on most of them (including many that I’d like to attend). If that’s true for you too and the webinar that you’re missing is provided by EDRM, you might be happy to know that you can probably still view it, whenever you have time.

EDRM schedules and hosts its webinars via BrightTALK, a technology media company that provides professional webinar and video solutions to a variety of industries, including eDiscovery. Via the BrightTALK site, you can register to attend upcoming EDRM webinars that have been scheduled – such as the one coming up tomorrow (February 18) titled Cross Border Issues in eDiscovery, sponsored by UBIC, from 1:00 to 2:00 pm Central time. Here is the link to sign up.

Busy tomorrow and can’t attend? You can still catch the webinar later on – via that same link. At the bottom of the page, you’ll see two tabs: “Live and Recorded” and “Upcoming”. Currently, there is a 29 next to the “Live and Recorded” tab, indicating 29 previously recorded webinars by EDRM.

Did you miss the webinar earlier this month titled Assembling the Team to Complete the eMSAT? You can still catch it here, as well as the first two webinars in the eMSAT series (Understanding the EDRM eDiscovery Maturity Self-Assessment Tool and Building the Business Case for the eMSAT-1). So, if you want to learn plenty about the new eMSAT tool provided by EDRM’s Metrics team (and previously covered by this blog here), you can do so at your own pace.

In fact, that’s true for 11 webinars held by EDRM in 2014, and for webinars as far back as May 30, 2012 (nearly three years ago), when EDRM conducted a webinar, sponsored by AccessData, about early data assessment, titled Early and Often. Each of the webinars also includes an Attachments button to enable you to download presentation materials, including PowerPoint slides (when available) for later reference.

All you’ll have to do is register for a free BrightTALK account – if you’ve attended a webinar in the past, you probably already have one – and log in when you want to catch a webinar. It’s that easy.

So, what do you think? Are you like me and can’t always find time to attend webinars during the work day? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscoveryDaily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

EDRM Publishes Clarification to its Model Code of Conduct – eDiscovery Trends

Yesterday, we discussed an update to the Cooperation Proclamation: Resources for the Judiciary from The Sedona Conference®. Today, another titan of eDiscovery standards and best practices, EDRM, has an update of its own.

The EDRM Model Code of Conduct (MCoC) (previously covered by this blog here and here) focuses on the ethical duties of service providers associated with five key principles and also provides a corollary for each principle to illustrate ethical duties of their clients. Yesterday, EDRM announced a proposed clarification to the language in Principle 3 – Conflicts of Interest of the MCoC as well as a clarification to the language in the corollary to Principle 3.

As noted in their press release, the “clarification distinguishes between (a) entities that are “members of the team,” i.e., participants in shaping legal strategy; and (b) technology providers that, while delivering capabilities to the team, are not on the team; i.e., they are not privy to or helping to shape case strategy. Principle 3 of the MCoC is intended to apply to the former, not the latter.”

The revised Principle 3 now reads:

“When (a) a Service Provider is engaged primarily to provide consulting services in connection with the broad range of activities covered by the EDRM and (b) as a material part of that engagement the Service Provider receives information about case strategy or assists in developing case strategy, then the Service Provider should employ reasonable proactive measures to identify potential conflicts of interest, as defined and discussed below. In the event that an actual or potential conflict of interest is identified, the Service Provider should disclose any such conflict and take immediate steps to resolve it in accordance with the Guidelines set forth below.”

The revised corollary to Principle 3 now reads:

“Clients should furnish Service Providers subject to Principle 3 with sufficient information at the commencement of each engagement to enable each Service Provider to identify potential conflicts of interest. If an actual or potential conflict of interest is identified and disclosed and the Client elects to proceed with the engagement, the Client should work in good faith with the Service Provider and other parties to facilitate a resolution to any such conflict in accordance with the Guidelines set forth below.”

You can read the full code, download it, comment on the proposed changes or subscribe to the code here. You can also see all the subscribing organizations and individuals (CloudNine has been a subscriber to the MCoC since it was officially released in 2012).

So, what do you think? Is this a necessary revision to the code? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscoveryDaily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

EDRM Updates Statistical Sampling Applied to Electronic Discovery Guide – eDiscovery Trends

Over two years ago, we covered EDRM’s initial announcement of a new guide called Statistical Sampling Applied to Electronic Discovery.  Now, they have announced an updated version of the guide.

The release of EDRM’s Statistical Sampling Applied to Electronic Discovery, Release 2, announced last week and published on the EDRM website, is open for public comment until January 9, 2015, after which any input received will be reviewed and considered for incorporation before the updated materials are finalized.

As EDRM notes in their announcement, “The updated materials provide guidance regarding the use of statistical sampling in e-discovery. Much of the information is definitional and conceptual and intended for a broad audience. Other materials (including an accompanying spreadsheet) provide additional information, particularly technical information, for e-discovery practitioners who are responsible for developing further expertise in this area.”

The expanded Guide comprises ten sections (most of which have several sub-sections), as follows:

  1. Introduction
  2. Estimating Proportions within a Binary Population
  3. Acceptance Sampling
  4. Sampling in the Context of the Information Retrieval Grid – Recall, Precision and Elusion
  5. Seed Set Selection in Machine Learning
  6. Guidelines and Considerations
  7. Additional Guidance on Statistical Theory
  8. Calculating Confidence Levels, Confidence Intervals and Sample Sizes
  9. Acceptance Sampling
  10. Examples in the Accompanying Excel Spreadsheet

The guide ranges from introductory explanations of basic statistical terms (such as sample size, margin of error and confidence level) to more advanced concepts such as the binomial distribution and the hypergeometric distribution.  Bring your brain.

As section 10 indicates, there is also an accompanying Excel spreadsheet, EDRM Statistics Examples 20141023.xlsm, which can be downloaded from the page and which implements relevant calculations supporting Sections 7, 8 and 9. The spreadsheet was developed using Microsoft Excel 2013 and is an .xlsm file, meaning that it contains VBA code (macros), so you may have to adjust your security settings in order to view and use them.  You’ll also want to read the guide first (especially sections 7 through 10), as the Excel workbook is a bit cryptic.
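
To give a flavor of one of those calculations, here’s an illustrative Python sketch (my own, not taken from the guide or its spreadsheet) of the acceptance sampling idea covered in sections 3 and 9: the hypergeometric probability of finding no more than c defective documents in a sample of n drawn from a population of N that contains D defectives.

```python
# An illustrative sketch (mine, not the EDRM spreadsheet) of acceptance
# sampling: the hypergeometric probability of seeing at most `c` defective
# documents in a sample of `n` drawn without replacement from a population
# of `N` documents containing `D` defectives.
from math import comb

def prob_accept(N: int, D: int, n: int, c: int) -> float:
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

# Hypothetical example: if a 4,000-document set contained 40 miscoded
# documents, a 300-document sample with an acceptance number of 0 would
# pass only about 4% of the time; passing that test is therefore strong
# evidence that the true error rate is low.
print(prob_accept(N=4000, D=40, n=300, c=0))
```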

Comments can be posted at the bottom of the EDRM Statistical Sampling page, or emailed to the group at mail@edrm.net or you can fill out their comment form here.

One thing that I noticed is that the old guide, from April of 2012, is still on the EDRM site.  It might be a good idea to archive that page to avoid confusion with the new guide.

So, what do you think?  Do you perform statistical sampling to verify results within your eDiscovery process?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

How Mature is Your Organization in Handling eDiscovery? – eDiscovery Best Practices

A new self-assessment resource from EDRM helps you answer that question.

A few days ago, EDRM announced the release of the EDRM eDiscovery Maturity Self-Assessment Test (eMSAT-1), the “first self-assessment resource to help organizations measure their eDiscovery maturity” (according to their press release linked here).

As stated in the press release, eMSAT-1 is a downloadable Excel workbook containing 25 worksheets (actually 27 worksheets when you count the Summary sheet and the List sheet of valid choices at the end) organized into seven sections covering various aspects of the e-discovery process. Complete the worksheets and the assessment results are displayed in summary form at the beginning of the spreadsheet.  eMSAT-1 is the first of several resources and tools being developed by the EDRM Metrics group, led by Clark and Dera Nevin, with assistance from a diverse collection of industry professionals, as part of an ambitious Maturity Model project.

The seven sections covered by the workbook are:

  1. General Information Governance: Contains ten questions to answer regarding your organization’s handling of information governance.
  2. Data Identification, Preservation & Collection: Contains five questions to answer regarding your organization’s handling of these “left side” phases.
  3. Data Processing & Hosting: Contains three questions to answer regarding your organization’s handling of processing, early data assessment and hosting.
  4. Data Review & Analysis: Contains two questions to answer regarding your organization’s handling of search and review.
  5. Data Production: Contains two questions to answer regarding your organization’s handling of production and protecting privileged information.
  6. Personnel & Support: Contains two questions to answer regarding your organization’s hiring, training and procurement processes.
  7. Project Conclusion: Contains one question to answer regarding your organization’s processes for managing data once a matter has concluded.

Each question is a separate sheet, with five answers ranked from 1 to 5 to reflect your organization’s maturity in that area (with descriptions to associate with each level of maturity).  The default value for each question is 1.  The five answers are:

  • 1: No Process, Reactive
  • 2: Fragmented Process
  • 3: Standardized Process, Not Enforced
  • 4: Standardized Process, Enforced
  • 5: Actively Managed Process, Proactive

Once you answer all the questions, the Summary sheet shows your overall average, as well as your average for each section.  It’s an easy workbook to use, with input areas defined by cells in yellow.  The whole workbook is editable, so perhaps the next edition could lock down the calculated-only cells.  Nonetheless, the workbook is intuitive and provides a nice exercise for an organization to grade its level of eDiscovery maturity.
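
The scoring model itself is simple. Assuming the Summary sheet does straightforward averaging (which is how the results read), here’s a toy recreation of the math in Python, with hypothetical sample answers; this is illustrative only, and the real tool is the Excel workbook itself:

```python
# A toy recreation (mine, not EDRM's) of the eMSAT-1 scoring approach,
# assuming the Summary sheet simply averages the 1-5 maturity answers
# per section and overall. The answer values below are made up.
scores = {
    "General Information Governance": [3, 2, 4, 1, 3, 2, 2, 3, 1, 2],
    "Data Identification, Preservation & Collection": [2, 3, 3, 2, 1],
    "Data Processing & Hosting": [4, 3, 3],
    "Data Review & Analysis": [3, 4],
    "Data Production": [4, 3],
    "Personnel & Support": [2, 2],
    "Project Conclusion": [1],
}

for section, answers in scores.items():
    print(f"{section}: {sum(answers) / len(answers):.2f}")

all_answers = [a for answers in scores.values() for a in answers]
print(f"Overall maturity: {sum(all_answers) / len(all_answers):.2f}")
```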

You can download a copy of the eMSAT-1 Excel workbook from here, as well as get more information on how to use it (the page also describes how to provide feedback to make the next iterations even better).

The EDRM Maturity Model Self-Assessment Test is the fourth release in recent months by the EDRM Metrics team. In June 2013, the new Metrics Model was released; in November 2013, a supporting glossary of terms for the Metrics Model was published; and in November 2013, the EDRM Budget Calculators project kicked off (with four calculators covered by us here, here, here and here).  They’ve been busy.

So, what do you think?  How mature is your organization in handling eDiscovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

When Preparing Production Sets, Quality is Job 1 – Best of eDiscovery Daily

OK, I admit I stole that line from an old Ford commercial 😉

France Strikes Back!  Today, we’re heading back to Paris for one final evening before heading home (assuming the Air France pilots let us).  For the next two weeks, except for Jane Gennarelli’s Throwback Thursday series, we will be re-publishing some of our more popular and frequently referenced posts.  Today’s post is a best practice topic for preparing production sets.  Enjoy!

Yesterday, we talked about addressing parameters of production up front to ensure that those requirements make sense and avoid foreseeable production problems well before the production step.  Today, we will talk about quality control (QC) mechanisms to make sure that the production is complete and accurate.

Quality Control Checks

There are a number of checks that can and should be performed on the production set, prior to producing it to the requesting party.  Here are some examples:

  • File Counts: The most obvious check you can perform is to ensure that the count of files matches the count of documents or pages you have identified to be produced.  However, depending on the production, there may be multiple file counts to check:
    • Image Files: If you have agreed with opposing counsel to produce images for all documents, then there will be a count of images to confirm.  If you’re producing multi-page image files (typically, PDF or TIFF), the count of images should match the count of documents being produced.  If you’re producing single-page image files (usually TIFF), then the count should match the number of pages being produced.
    • Text Files: When producing image files, you may also be producing searchable text files.  Again, the count should match either the documents (multi-page text files) or pages (single-page text files), with one possible exception.  If a document or page has no searchable text, are you still producing an empty file for those?  If not, you will need to be aware of how many of those instances there are and adjust the count accordingly to verify for QC purposes.
    • Native Files: Native files (if produced) are typically at the document level, so you would want to confirm that one exists for each document being produced.
    • Subset Counts: If the documents are being produced in a certain organized manner (e.g., a folder for each custodian), it’s a good idea to identify subset counts at those levels and verify those counts as well.  Not only does this provide an extra level of count verification, but it helps to find the problem more quickly if the overall count is off.
    • Verify Counts on Final Production Media: If you’re verifying counts of the production set before copying it to the media (which is common when burning files to CD or DVD), you will need to verify those counts again after copying to ensure that all files made it to the final media.
    • Sampling of Results: Unless the production is relatively small, it may be impractical to open every last file to be produced to confirm that it is correct.  If so, employ accepted statistical sampling procedures (such as those described here and here for searching) to identify an appropriate sample size and randomly select that sample to open and confirm that the correct files were selected, HASH values of produced native files match the original source versions of those files, images are clear and text files contain the correct text.
    • Redacted Files: If any redacted files are being produced, each of these (not just a sample subset) should be reviewed to confirm that redactions of privileged or confidential information made it to the produced file.  Many review platforms overlay redactions which have to be burned into the images at production time, so it’s easy for mistakes in the process to cause those redactions to be left out or burned in at the wrong location.  Very Important! – You also need to confirm that the redacted text has been removed from any text files that have been produced.
    • Inclusion of Logs: Depending on agreed upon parameters, the production may include log files such as:
      • Production Log: Listing of all files being produced, with an agreed upon list of metadata fields to identify those files.
      • Privilege Log: Listing of responsive files not being produced because of privilege (and possibly confidentiality as well).  This listing often identifies the privilege being asserted for each file in the privilege log.
      • Exception Log: Listing of files that could not be produced because of a problem with the file.  Examples of types of exception files are included here.

Each production will have different parameters, so the QC requirements will differ; these are examples, but not necessarily a comprehensive list of all potential QC checks to perform.
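
To make a couple of the checks above concrete, here’s a minimal Python sketch of a count-and-hash verification against a production log. The directory layout and manifest columns are my own assumptions for illustration, not a standard format:

```python
# A minimal QC sketch (hypothetical paths and manifest format are my own
# assumptions): verify that the count of produced native files matches the
# production log and that each file's hash matches its source version.
import csv
import hashlib
from pathlib import Path

def md5(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

produced_dir = Path("production/natives")          # hypothetical layout
manifest = Path("production/production_log.csv")   # columns: bates,filename,source_path

rows = list(csv.DictReader(manifest.open()))
produced = {p.name: p for p in produced_dir.iterdir() if p.is_file()}

# Count check: one produced native per entry in the production log.
assert len(produced) == len(rows), \
    f"count mismatch: {len(produced)} files vs {len(rows)} log entries"

# Hash check: each produced native should match its original source version.
for row in rows:
    assert md5(produced[row["filename"]]) == md5(Path(row["source_path"])), \
        f"hash mismatch for {row['bates']}"

print("Counts and hashes verified.")
```

For image clarity, text content and redactions, you would still fall back to the sampling and per-file reviews described above; hashes only prove the files are the right bytes, not that the content is correct.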

So, what do you think?  Can you think of other appropriate QC checks to perform on production sets?  If so, please share them!  As well as any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Production is the “Ringo” of the eDiscovery Phases – Best of eDiscovery Daily


God Save the Queen!  Today is our last full day in London and we’re planning to visit Westminster Abbey, which is where all of England’s kings and queens are crowned.  For the next two weeks, except for Jane Gennarelli’s Throwback Thursday series, we will be re-publishing some of our more popular and frequently referenced posts.  Today’s post is a topic where people frequently make mistakes, causing production delays and costly rework.  Enjoy!

Most of the “press” associated with eDiscovery ranges from the “left side of the EDRM model” (i.e., Information Management, Identification, Preservation, Collection) through the stages to prepare materials for production (i.e., Processing, Review and Analysis).  All of those phases lead to one inevitable stage in eDiscovery: Production.  Yet, few people talk about the actual production step.  If Preservation, Collection and Review are the “John”, “Paul” and “George” of the eDiscovery process, Production is “Ringo”.

It’s the final crucial step in the process, and if it’s not handled correctly, all of the due diligence spent in the earlier phases could mean nothing.  So, it’s important to plan for production up front and to apply a number of quality control (QC) checks to the actual production set to ensure that the production process goes as smoothly as possible.

Planning for Production Up Front

When discussing the production requirements with opposing counsel, it’s important to ensure that those requirements make sense, not only from a legal standpoint, but from a technical standpoint as well.  Involve support and IT personnel in the process of deciding those parameters, as they will be the people who have to meet them.  Issues to be addressed include, but are not limited to:

  • Format of production (e.g., paper, images or native files);
  • Organization of files (e.g., organized by custodian, legal issue, etc.);
  • Numbering scheme (e.g., Bates labels for images, sequential file names for native files);
  • Handling of confidential and privileged documents, including log requirements and stamps to be applied;
  • Handling of redactions;
  • Format and content of production log;
  • Production media (e.g., CD, DVD, portable hard drive, FTP, etc.).

I was involved in a case a couple of years ago where opposing counsel was requesting an unusual production format in which the names of the files would be the subject line of the emails being produced (for example, “Re: Completed Contract, dated 12/01/2011”).  Two issues with that approach: 1) the proposed format only addressed emails, and 2) Windows file names don’t support certain characters, such as colons (:) or slashes (/).  I provided that feedback to the attorneys so that they could address it with opposing counsel and hopefully agree on a revised format that made more sense.  So, let the tech folks confirm the feasibility of the production parameters.
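
As a small illustration of that second problem, any subject-line-based naming scheme would at minimum have to strip the characters Windows forbids. Here’s a hypothetical Python sketch (the replacement rules are my own, not the format proposed in that case):

```python
# A hypothetical sketch (my own rules, not the format from that case):
# turning an email subject line into a legal Windows file name requires
# replacing reserved characters such as : / \ * ? " < > |
import re

def subject_to_filename(subject: str, date: str) -> str:
    raw = f"{subject}, dated {date}.msg"
    return re.sub(r'[:/\\*?"<>|]', "_", raw).strip()

print(subject_to_filename("Re: Completed Contract", "12/01/2011"))
# -> Re_ Completed Contract, dated 12_01_2011.msg
```

Even with sanitization, such a scheme invites collisions between emails with identical subjects, which is another reason sequential file names are the safer convention.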

The workflow throughout the eDiscovery process should also keep in mind the end goal of meeting the agreed upon production requirements.  For example, if you’re producing native files with metadata, you may need to take appropriate steps to keep the metadata intact during the collection and review process so that the metadata is not inadvertently changed. For some file types, metadata is changed merely by opening the file, so it may be necessary to collect the files in a forensically sound manner and conduct review using copies of the files to keep the originals intact.
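
One common way to demonstrate that the originals stayed intact is to hash each file at collection time and re-verify the hash later. Here’s a minimal sketch of that general idea (my own illustration with hypothetical paths, not any specific collection tool’s method):

```python
# A minimal sketch (illustrative; hypothetical paths, not a specific tool's
# method): hash the collected original, give reviewers a working copy, and
# verify later that the original never changed.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

original = Path("collected/contract.xlsx")
working_copy = Path("review/contract.xlsx")
working_copy.parent.mkdir(parents=True, exist_ok=True)

baseline = sha256(original)            # recorded at collection time
shutil.copy2(original, working_copy)   # review happens against the copy

# ... later, before production ...
assert sha256(original) == baseline, "original changed after collection!"
```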

Tomorrow, we will talk about preparing the production set and performing QC checks to ensure that the ESI being produced to the requesting party is complete and accurate.

So, what do you think?  Have you had issues with production planning in your cases?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Our 1,000th Post! – eDiscovery Milestones

When we launched nearly four years ago on September 20, 2010, our goal was to be a daily resource for eDiscovery news and analysis.  Now, after doing so each business day (except for one), I’m happy to announce that today is our 1,000th post on eDiscovery Daily!

We’ve covered the gamut in eDiscovery, from case law to industry trends to best practices.  Here are some of the categories that we’ve covered and the number of posts (to date) for each:

We’ve also covered every phase of the EDRM life cycle (177 posts), including:

Every post we have published is still available on the site for your reference, which has made eDiscovery Daily into quite a knowledge base!  We’re quite proud of that.

Comparing our first three months of existence to now, we have seen traffic on our site grow an amazing 474%!  Our subscriber base has more than tripled in the last three years!  We want to take this time to thank you, our readers and subscribers, for making that happen.  Thanks for making the eDiscoveryDaily blog a regular resource for your eDiscovery news and analysis!  We really appreciate the support!

We also want to thank the blogs and publications that have linked to our posts and raised our public awareness, including Pinhawk, Ride the Lightning, Litigation Support Guru, Complex Discovery, Bryan University, The Electronic Discovery Reading Room, Litigation Support Today, Alltop, ABA Journal, Litigation Support Blog.com, InfoGovernance Engagement Area, EDD Blog Online, eDiscovery Journal, e-Discovery Team ® and any other publication that has picked up at least one of our posts for reference (sorry if I missed any!).  We really appreciate it!

I also want to extend a special thanks to Jane Gennarelli, who has provided some serial topics, ranging from project management to coordinating review teams to what litigation support and discovery used to be like back in the ’80s (to which some of us “old timers” can relate).  Her contributions are always well received and appreciated by the readers – and also especially by me, since I get a day off!

We always end each post with a request: “Please share any comments you might have or if you’d like to know more about a particular topic.”  And, we mean it.  We want to cover the topics you want to hear about, so please let us know.

Tomorrow, we’ll be back with a new, original post.  In the meantime, feel free to click on any of the links above and peruse some of our 999 previous posts.  Now is your chance to catch up!  😉

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Throwback Thursdays – Collecting Documents


In 1978, I took my first job in litigation, with the law department of a Fortune 100 corporation headquartered in New York City. I was one of a team assembled to collect responsive documents to be produced in a major antitrust litigation. The documents were located in the corporation’s office and warehouse facilities around the country. While the process of collecting documents varied from case to case, this project was representative of the general approach to collecting documents in large-scale litigation. Let me describe how it worked.

Before travelling to a facility, reviews of custodian files were scheduled, central files to be reviewed were identified, and indices of boxes of archived materials in warehouses were reviewed.  We coordinated with an on-site administrator to ensure that the supplies we needed would be ready (boxes, manila file folders, pads, stickers, markers, etc.), and we made arrangements for a temporary photocopy operation to be set up on site.

Upon arrival at an office facility, we each went to our first assigned custodian office — and with empty archive boxes on hand, we’d start the review. We’d put numbered stickers on every file cabinet drawer and desk drawer to be reviewed, and we’d label the outside of an archive box with the custodian’s name. When we found a responsive document, we pulled it out and put an “out card” in its place in the original file (writing on the out card the number of pages of the document that was removed).  We created a file folder labeled with the number of the file cabinet/desk drawer, followed by the same title as the file from which the document was removed, and we placed the document in the folder, which went into the archive box.  An entire office was reviewed like this, and when we finished an office, we labeled each box with “1 of N”, “2 of N” and so on.

The next step was photocopying, which was quite an involved operation.  Most of this was ‘glass-work’ – that is, stacks of paper were not fed into the machine for bulk photocopying; rather, documents were photocopied one by one, by hand. This was necessary because staples, paper clips, and binder clips had to be removed, post-it notes had to be photocopied separately, spiral-bound materials needed to be un-bound, and so on. A photocopy operator removed a document from an archive box, did the required preparation, made a photocopy, reassembled the original and put it back in the archive box, assembled the photocopy to match the original and placed it in a second archive box labeled the same as the first and, within the box, in a folder labeled the same as the original.  You get the picture.

After photocopying, a second operator did a quality control review to ensure that everything copied properly and nothing was missed.  Originals were returned to the document reviewer to re-file, and the copies – which were now the ‘original working copy’ for purposes of litigation – were sent on to the next step… document numbering.  A sequential number was applied to every page using either a Bates stamp machine or a number label.  After numbering, documents were boxed for shipping.

After reviewing the office files, we usually moved on to a warehouse facility at which we used the same approach.  For the most part, the warehouse reviews were unpleasant.  Very often there was inadequate heat or air conditioning, poor ventilation, uncomfortable furniture, and lots of dust.  On the bright side, we got to wear jeans and t-shirts to work, which was unheard of in the days before ‘business casual’ and ‘casual Friday’ were the norm.

This operation was a pretty routine document collection project.  There was, however, one thing about this case that wasn’t routine at all.  After numbering, these documents were shipped to a litigation support service provider for document coding.  This was one of those rare, bet-your-company cases for which a database was built. In the next few blog posts in this series, I’ll describe the litigation support industry and a typical litigation support database.

Please let us know if there are eDiscovery topics you’d like to see us cover in eDiscoveryDaily.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.