eDiscovery

Thought Leader Q&A: Kirke Snyder

 

Tell me about yourself and your experience.  I am a professor of law and ethics at Regis University College for Professional Studies in Denver, Colorado. The opinions expressed in this article are mine and are based upon my 25 years of experience consulting with public and private organizations.

Why is records management so important within the scope of eDiscovery?  Records management is a subset of an organization’s overall information management. Take a look at the Electronic Discovery Reference Model (EDRM): information management sits on the far left-hand side of the model. An effective records/information management program is the most effective way for a company to reduce the volume of data that gets snagged in litigation hold, collection, production, and attorney review.

What are the most important concerns about corporate records and information management?  Organizations should be concerned about managing their corporate records for two main reasons: (1) the risk associated with regulatory compliance and litigation hold requirements, and (2) the cost of reviewing data to identify potentially relevant documents associated with litigation or an investigation.

What are the main risks associated with regulatory compliance and litigation hold requirements?  There are thousands of recordkeeping laws and regulations. A sound corporate records and information management (RIM) program must be based upon legal research that identifies the applicable regulatory requirements (federal, state, and industry-specific). Retention or destruction requirements apply to commonly encountered corporate records, such as job applications, employee medical records, and tax returns, as well as to the distinctive recorded information associated with specific industries, such as banking, insurance, pharmaceuticals, healthcare, energy, and telecommunications. Further, certain business records are subject to privacy legislation and regulations that protect personal information from unauthorized disclosure or use. Examples of U.S. laws with such privacy provisions include the Fair Credit Reporting Act (1970), the Health Insurance Portability and Accountability Act (1996), and the Gramm-Leach-Bliley Act (1999).

In addition to retaining corporate information to support a regulatory requirement, organizations must hold information that may be potentially relevant to litigation or an investigation. In fact, it is illegal for any organization to knowingly and intentionally destroy records relevant to pending or ongoing litigation or government investigations, even if its document management policies would otherwise permit such destruction. For public companies, the Sarbanes-Oxley Act of 2002 includes additional recordkeeping provisions and mandated retention requirements for certain types of records. It also criminalizes, and provides severe penalties for, obstruction of justice by executives and employees who destroy or tamper with corporate accounting records. Most notably, the Sarbanes-Oxley Act created a new federal crime for the destruction, mutilation, or alteration of corporate records with the intent to impede or influence a government investigation or other official proceeding, whether “in relation to or contemplation of any such matter or case.” This provision expands upon previous laws relating to the destruction of records with presumed intent to obstruct justice.

How do you justify the cost of a good records and information management program?  With regard to litigation, size does matter. The cost of the litigation discovery process correlates directly with the volume of potentially relevant documents related to the matter. The smaller the population of potentially relevant data, the lower the costs will be from vendors to process the data into a searchable database and the lower the fees will be from outside counsel to review each email or document. Most organizations do not have an automated means to identify, collect, and preserve electronically stored information (ESI) based upon search criteria (key words, document type, document date or author). We hear the terms megabyte, gigabyte, and terabyte used with regard to the storage capacity of network servers, computer hard drives, and even portable “thumb drives.” To cost-justify a budget for a new records/information management program, it’s important to convert the MBs, GBs, and TBs into something to which management can relate. One megabyte of user documents is approximately one ream of paper. One ream of paper wouldn’t take the lawyers too long to review. However, one gigabyte of user documents, if printed, would be a stack approximately the length of a basketball court and would require a team of lawyers to review. One terabyte of user documents, if printed, would be a stack approximately the length of Long Island. It’s easy to see the economic and strategic advantage for an organization to be able to identify the smallest legally defensible data population (without duplicates) prior to handing over the data to vendors for processing or outside counsel for review.
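
To make that conversion concrete, here is a minimal back-of-the-envelope sketch using the ream-per-megabyte rule of thumb from this answer; the review rate and hourly fee are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope review-cost estimate using the rules of thumb above.
# ASSUMPTIONS (illustrative only): 1 MB of user documents ~ one 500-page ream,
# a reviewer covers ~50 pages per hour, and review time costs ~$200 per hour.
PAGES_PER_MB = 500
PAGES_PER_REVIEW_HOUR = 50
COST_PER_REVIEW_HOUR = 200  # dollars

def estimated_review_cost(size_gb: float) -> tuple[float, float]:
    """Return (review_hours, review_cost_dollars) for a data set of size_gb."""
    pages = size_gb * 1024 * PAGES_PER_MB
    hours = pages / PAGES_PER_REVIEW_HOUR
    return hours, hours * COST_PER_REVIEW_HOUR

for size_gb in (0.001, 1.0, 1024.0):  # roughly 1 MB, 1 GB, 1 TB
    hours, cost = estimated_review_cost(size_gb)
    print(f"{size_gb:>8} GB -> {hours:>12,.0f} review hours, ~${cost:>15,.0f}")
```

Even under generous assumptions, the cost grows linearly with volume, which is the economic argument for culling the data population before review.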

About Kirke Snyder

Kirke holds a law degree and a master’s degree in legal administration. He is an expert in document retention and litigation electronic discovery issues. He can be reached at KSnyder@Regis.edu.

Thought Leader Q&A: Chris Jurkiewicz of Venio Systems

 

Tell me about your company and the products you represent.  Venio Systems is an electronic discovery software solution provider specializing in early case assessment and first pass review.  Our product, Venio FPR™, allows forensic units, attorneys and litigation support teams to process, analyze, search, report on, interact with and export responsive data for linear review or production.

What do you consider to be the reason for the enormous growth of early case assessment/first pass review tools in the industry?  I believe much of the growth we’ve seen in the past few years can be attributed to several factors, the primary one being the exponential growth of data within organizations.  Inexpensive data storage makes it easier for organizations to keep unnecessary data on their systems.  Companies that handle litigation or work with litigation data are seeking quick, cost-effective methods of separating the necessary data from all the unnecessary data stored in these vast systems, making early case assessment/first pass review tools not only appealing but necessary.

Are there other areas where first pass review tools can be useful during eDiscovery?  Clients have found creative ways to use first pass review/ECA technology; recently a client used it to analyze a production received from opposing counsel.  They were able to determine that the email information produced was not complete, and then to compel opposing counsel to fill in the missing email gaps.

There have been several key cases related to search defensibility in the past couple of years.  How will those decisions affect organizations’ approach to ESI searching?  More organizations will have to adopt a defensible process for searching and use tools that support that process.  Venio’s software has many key features focused on search defensibility, including Search List Analysis, Wild Card Variation searching, Search Audit Reporting and Fuzzy Searching.  All searches run in Venio FPR™ are audited by user, date and time, terms, scope, and frequency.  By using these tools, clients have been able to find additional responsive files that would otherwise be missed, and to easily document their search approach and refinement.
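
As a generic illustration of the kind of audit trail described above – not Venio FPR™’s actual implementation – here is a minimal sketch that records user, timestamp, terms, scope and hit frequency for every search:

```python
# Generic sketch of search auditing; not Venio FPR's actual implementation.
# Every search appends a record of user, timestamp, terms, scope and hit
# frequency, so the search methodology can be documented and defended later.
import json
from datetime import datetime, timezone

AUDIT_LOG = "search_audit.jsonl"

def audited_search(user: str, terms: list[str], scope: str,
                   documents: dict[str, str]) -> list[str]:
    """Run a simple term search over {doc_id: text} and log an audit record."""
    hits = [doc_id for doc_id, text in documents.items()
            if any(term.lower() in text.lower() for term in terms)]
    record = {
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "terms": terms,
        "scope": scope,
        "frequency": len(hits),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return hits
```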

How do you think the explosion of data and technology will affect the review process in the future?  I believe that technology will continue to evolve and provide innovative tools that allow for more efficient review of ESI.  In the past few years the industry has already seen several new technologies, such as near-duplicate detection (“near-deduping”), concept searching and clustering, which have significantly improved the speed of review.  Legal teams will have to continue to make greater use of these technologies to provide efficient and cost-effective review, as their clients will demand it.

About Chris Jurkiewicz
Chris graduated in 2000 with a Bachelor of Science in Computer Information Systems from Marymount University in Arlington, Virginia.  He began working for On-Site Sourcing while still an intern at Marymount and, within three years, became the youngest director on On-Site’s management team as Director of its Electronic Data Discovery Division.  In 2009, Chris co-founded Venio Systems to fill a void in Early Case Assessment (ECA) technology with Venio FPR™, giving law firms, corporations and government entities the ability to gain a comprehensive picture of their data set at the front end and thereby save precious time and money on the back end.  Chris is an industry-recognized expert in the field of eDiscovery, having spoken on several eDiscovery panels and served as an eDiscovery expert witness.

Thought Leader Q&A: Alon Israely of BIA

 

Tell me about your company and the products you represent.  BIA is a full-solution E-Discovery provider. Our core competencies are E-Discovery collections and processing, but we offer the full spectrum of E-Discovery services.  For almost a decade, BIA has been developing and implementing defensible, technology-driven solutions that reduce the costs and risks related to litigation, regulatory compliance and internal audits.  BIA provides software and services to Fortune 1000 and Global 2000 companies and Am Law 100 law firms. We are headquartered in New York City and have offices in San Francisco, Seattle, Washington DC and Southwest Michigan. We also maintain digital evidence response units throughout the United States, Europe, Asia, and the Middle East.

BIA’s products are defensible and cost-effective: remote collections with DiscoveryBOT™, fast e-discovery processing with our TD Grid system, and automated, secure legal hold software with Solis™.

What is the best way for lawyers and litigation support professionals to take control of their eDiscovery?  The best way for litigation support professionals to take control of their e-discovery is to scope projects correctly.  It is important to understand that one size does not fit all in e-discovery.  That is, there are many tools and service providers out there – it is important to focus at the beginning on what needs to be accomplished from a legal and IT perspective, and then to determine which technologies and methods fit that strategy best.

What is a good way to achieve predictability in eDiscovery costs?  Most of the cost analysis in e-discovery today is focused on the review side, where the data has already been collected and perhaps culled – yet there are still too many documents, most of which are not responsive. With a focus on the left side of the EDRM, e-discovery costs become visible early in the process.  For example, using a good (light-touch) collection tool and method to lock data down is one of the best ways to control e-discovery costs – that is, doing the right collection early on and getting the right metrics from those collections allows you to analyze that data (even at a high level, without incurring processing and other costs), which can help the attorneys and the institutional client determine costs early in the process, and in a more predictable manner.

Is there a way to perform self-collection in a defensible manner?  Yes.  Use the right tools and methods and, importantly, have those tools and methods vetted (reviewed and approved) by e-discovery collection professionals.  Defensible self-collection does NOT mean that the custodian or the IT people are left to perform the collection on their own without the right plan behind them.  There are best practices that should be followed, and there are tools that maintain the integrity of the data.  Make sure that those best practices and tools are used (having been scoped correctly – see the response above) by professionals, or at least used by staff and peer-reviewed or monitored by professionals.  Also, rely on custodians for good ESI identification – the custodians (users) usually know better than anyone where they maintain records – so using custodian questionnaires early on will help identify the systems that will be most relevant, which goes to diligence (an important factor in defensible collections).  The professional can then work in tandem with the custodian to gather the data in a manner that ensures the evidentiary integrity of the data.  At BIA we have been following these methods for years and have been very successful with our clients, the courts and opposing parties in defending these ways of identifying and collecting ESI.

What is the importance of the left side of the EDRM model?  The left side is where it all starts in e-discovery: ESI collections are usually the most affordable part of the overall e-discovery process and are arguably the most important – “garbage in, garbage out.”  Because the subsequent parts of the e-discovery process (the “right side of the EDRM”) rely on the data identified and gathered in the early parts of the process, it is imperative that the tasks and activities performed on the left side of the EDRM are done correctly – that is, in a way that maintains the evidentiary integrity of the data collected.  The left side of the EDRM also includes preserving data and notifying custodians of their obligations to preserve, which is critical to defensible e-discovery – especially in light of Pension Committee and some other recent cases.  As for the money piece, the left side of the EDRM is where much of the planning for the rest of the process can occur without incurring substantial costs – and that planning goes a long way toward ascertaining the real costs and timing of the remainder of the e-discovery process.

About Alon Israely

Alon Israely has over fifteen years of experience in a variety of advanced computing-related technologies. Alon is a Senior Advisor in BIA’s Advisory Services group and currently oversees BIA’s product development for its core technology products. Prior to BIA, Alon consulted with law firms and their clients on a variety of technology issues, including expert witness services related to computer forensics, digital evidence management and data security. Before that, he was a senior member of several IT teams working on projects for Fortune 500 companies related to global network architecture and data migration projects for enterprise information systems. As a pioneer in the field of digital evidence collection and handling, Alon has worked on a wide variety of matters, including several notable financial fraud cases; large-scale multi-party international lawsuits; and corporate matters involving the SEC, FTC, and international regulatory boards.  Alon holds a B.A. from UCLA and received his J.D. from New York Law School with an emphasis in Telecommunications Law. He is a member of the New York State Bar as well as several legal and computer forensic associations.

Reporting from the EDRM Mid-Year Meeting

 

Launched in May 2005, the Electronic Discovery Reference Model (EDRM) Project was created to address the lack of standards and guidelines in the electronic discovery market.  Now in its sixth year of operation, EDRM has become the gold standard for…well…standards in eDiscovery.  Most discussions of the eDiscovery industry these days use the EDRM model as a representation of the eDiscovery life cycle.

At the first meeting in May 2005, there were 35 attendees, according to Tom Gelbmann of Gelbmann & Associates, co-founder of EDRM along with George Socha of Socha Consulting LLC.  Check out the preliminary first draft of the EDRM diagram – it has evolved a bit!  Most participants were eDiscovery providers and, according to Gelbmann, they asked “Do you really expect us all to work together?”  The answer was “yes”, and the question hasn’t been asked again.  Today, there are over 300 members from 81 participating organizations including eDiscovery providers, law firms and corporations (as well as some individual participants).

This week, the EDRM Mid-Year meeting is taking place in St. Paul, MN.  Twice a year, in May and October, eDiscovery professionals who are EDRM members meet to continue the process of working together on various standards projects.  EDRM has eight currently active projects, as follows:

  • Data Set: provides industry-standard, reference data sets of electronically stored information (ESI) and software files that can be used to test various aspects of eDiscovery software and services,
  • Evergreen: ensures that EDRM remains current, practical and relevant and educates about how to make effective use of the Model,
  • Information Management Reference Model (IMRM): provides a common, practical, flexible framework to help organizations develop and implement effective and actionable information management programs,
  • Jobs: develops a framework for evaluating pre-discovery and discovery personnel needs or issues,
  • Metrics: provides an effective means of measuring the time, money and volumes associated with eDiscovery activities,
  • Model Code of Conduct: evaluates and defines acceptable boundaries of ethical business practices within the eDiscovery service industry,
  • Search: provides a framework for defining and managing various aspects of Search as applied to eDiscovery workflow,
  • XML: provides a standard format for e-discovery data exchange between parties and systems, reducing the time and risk involved with data exchange (see the illustrative sketch after this list).
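
To give a feel for what a single interchange format buys you, here is a purely illustrative sketch that builds one production record; the element and attribute names are invented for this example and are not the actual EDRM XML schema.

```python
# Purely illustrative: a hypothetical single-format production record built
# with the standard library. The element and attribute names are invented
# for this sketch and are NOT the actual EDRM XML schema.
import xml.etree.ElementTree as ET

production = ET.Element("Production")
doc = ET.SubElement(production, "Document", id="DOC-000001")
ET.SubElement(doc, "File", path="natives/DOC-000001.msg")
ET.SubElement(doc, "Field", name="Custodian").text = "Jane Smith"
ET.SubElement(doc, "Tag", name="Responsive").text = "true"

# One well-defined layout replaces ad hoc load files and separate image,
# text and metadata deliverables when exchanging data between systems.
print(ET.tostring(production, encoding="unicode"))
```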

This is my fourth year participating in the EDRM Metrics project and it has been exciting to see several accomplishments made by the group, including creation of a code schema for measuring activities across the EDRM phases, glossary definitions of those codes and tools to track early data assessment, collection and review activities.  Today, we made significant progress in developing survey questions designed to gather and provide typical metrics experienced by eDiscovery legal teams in today’s environment.

So, what do you think?  Has EDRM impacted how you manage eDiscovery?  If so, how?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Project Management: Tips for Creating Effective Procedures

Yesterday, we talked about why written procedures are important in eDiscovery and the types of procedures you should be writing.  Today I’m going to give you some tips for creating effective procedures.

First, let me say that writing procedures is easy.  In fact, it’s probably the easiest writing task you’ll ever do.  You don’t need to be creative.  You don’t need to develop an elegant writing style. In fact, the best procedures are simple and to the point.  All that’s required to write good procedures is knowledge of how to do the task and some guidelines.  Here are the guidelines:

  • When possible, break a task down into its subcomponents and draft procedures for each subcomponent.  It’s likely that different parts of a task will be handled by different people and done at different times.  Each subcomponent, therefore, should have its own set of procedures.  For example, your procedures for collecting responsive data may have components for notifying custodians, interviewing custodians, copying the data, maintaining records, and preparing and transporting media.
  • Use simple, clear language.  Keep sentences short and simple.  Use simple words.  If you are writing instructions to be used by attorneys, avoid using technical terms and acronyms with which they may not be familiar.
  • Make the procedures detailed.  Assume your reader doesn’t know anything about the task.
  • Make sure the steps are well organized and in the right order.
  • Format the procedures so that they are easy to read.  Use bullets, numbered points, and outline formats.  It’s much easier to follow instructions that are clearly laid out in steps than it is to follow procedures written in paragraphs.  This, incidentally, also makes procedures easier to write: you don’t need to worry about the flow of sentences or paragraphs – you really just need to put together a set of clear bullet points.
  • When possible, use illustrations.  If you are providing instructions for using a technology tool, include screenshots, and mark up those screenshots with annotations such as arrows and circles to emphasize the instructions.

It’s always a good idea to test your procedures before you apply them.  Ask someone who hasn’t done the task before to apply the procedures to a sample of the work.  Holes in the procedures will surface quickly.

So, what do you think?  Do you have any good tips for drafting procedures in eDiscovery?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

eDiscovery Project Management: The Importance of Good Written Procedures

 

Even for simple eDiscovery tasks, good written procedures are critical.  They will:

  • Ensure that everyone doing the work understands the task.
  • Cut down or eliminate inconsistencies in the work product.
  • Cut down or eliminate the need for re-work.
  • Foster efficiencies that will help prevent cost over-runs and missed deadlines.
  • Eliminate time spent “reinventing the wheel” each time a task is done.

Written procedures are a good idea for all tasks, but they are especially important for work done by multiple people.  Often procedures are overlooked for simple tasks.  It’s easy to feel comfortable that everyone will do a simple task well.  The problem is that it’s very easy for two people to interpret a task differently.  When you have a large group of people working on a task – for example, a group doing a review of an electronic document collection – the potential for inconsistent work is enormous.

Let me give you some examples of the types of procedures you should be creating:

  • Procedures for gathering potentially responsive documents from your client:  These procedures should include instructions for notifying custodians, for interviewing custodians, for the tools that are to be used, for the types of copies that are to be made, for the storage media to be used, for keeping records of the collection effort, and for delivering data for subsequent processing.
  • Procedures for a document review:  These procedures should include clear, objective criteria for responsiveness and privilege, instructions for using the review tool, instructions for retrieving batches of documents to review, and instructions for resolving questions.

In a perfect world, you would have detailed, written procedures for all of the tasks that you do yourself, and for all of the tasks done by those who report to you.  Unfortunately, most organizations aren’t there yet.  If you don’t have a complete set of procedures yet, create them whenever a task is at hand.  Over time, you will build a library of procedures for the tasks that you handle.  Procedures are not hard to write.  Tomorrow I’ll give you some tips that will serve as a guideline for creating effective procedures.

So, what do you think?  Have you worked on eDiscovery projects where written procedures would have helped?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

Thought Leader Q&A: Jim McGann of Index Engines

 

Tell me about your company and the products you represent.  Businesses today face a significant challenge organizing their files and email to ensure timely and cost-efficient access, while also maintaining compliance with regulations governing electronic data. Index Engines was founded in 2003 with a mission to organize enterprise data assets and make them immediately accessible, searchable and easy to manage.

Index Engines’ discovery platform is the only solution on the market to offer a complete view of electronic data assets. Online data is indexed in-stream at wire speed using native enterprise storage protocols, enabling high-speed, efficient indexing of proprietary backup and transfer formats. For offline records, our unique approach scans backup tapes, indexes the contents and extracts relevant data, eliminating the time-consuming restoration process. Index Engines provides the only comprehensive discovery platform across both online and offline data, saving time and money when managing enterprise information.

What has caused backup tapes to become so relevant in eDiscovery?  Tape discovery actually appeared on the map after the renowned Zubulake case in 2003, and was reinforced by the FRCP amendments in 2006 and then again last year with the adoption of California’s eDiscovery act AB-5. Each of these milestones propelled tape discovery further into the eDiscovery market. These days, tapes are as common as any other container to discover relevant electronically stored information (ESI).

What can companies proactively do to address tape storage?  Needlessly storing old backup tapes is both a potential liability and a wasted expense. The liability comes from not knowing what information the tapes contain. The cost of offsite tape storage – even if it is only a few dollars a month per tape – quickly adds up. Tape remediation is the process of proactively discovering data contained on legacy backup tapes, and then applying a corporate retention policy to this tape data. Once the relevant data has been identified and archived accordingly, the tapes can be destroyed or recycled.

How can a legal or litigation support professional substantiate claims of processing speed made by eDiscovery vendors?  Without an industry-standard, vendor-neutral benchmarking process, this is a difficult challenge. I would recommend performing a proof of concept to see the performance in action. Another approach is to question the components of the technology: is it simply off-the-shelf freeware that has been repackaged, or is it something more powerful?
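
One way to structure such a proof of concept is to time the tool against a fixed sample corpus and convert the result to GB per hour. Here is a minimal harness sketch; index_file is a stand-in that simply reads each file, and you would replace it with a call into the tool being evaluated.

```python
# Minimal proof-of-concept benchmark harness. `index_file` is a stand-in;
# replace it with a call into the indexing tool under test.
import os
import time

def index_file(path: str) -> None:
    """Stand-in for the tool under test: reading the file simulates the I/O."""
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass

def benchmark(corpus_dir: str) -> float:
    """Index every file under corpus_dir and return throughput in GB/hour."""
    total_bytes = 0
    start = time.perf_counter()
    for root, _dirs, files in os.walk(corpus_dir):
        for name in files:
            path = os.path.join(root, name)
            total_bytes += os.path.getsize(path)
            index_file(path)
    elapsed_hours = (time.perf_counter() - start) / 3600
    return (total_bytes / 1024**3) / elapsed_hours

print(f"{benchmark('sample_corpus'):,.1f} GB/hour")  # hypothetical corpus dir
```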

You have recently had patents approved for your technology. Can you explain this in greater detail?  Index Engines has engineered a platform that performs sequential processing of data. We received both US and European patents for this unique approach towards the processing of enterprise data, which makes the data searchable and discoverable across both primary and secondary (backup) storage. Our patented approach enables the indexing of electronic data as it flows to backup, as well as documented high speed indexing of network data at 1TB per hour per node.

About Jim McGann
Jim is Vice President of Information Discovery for Index Engines. He has extensive experience in eDiscovery and information management, and is currently contributing to the Sedona working group addressing electronic document retention and production. Jim is also a frequent speaker for industry organizations such as ARMA and ILTA, and has authored multiple articles for legal technology and information management publications.  In recent years, Jim has worked for technology start-ups that provided financial services and information management solutions. Prior to Index Engines, he worked for leading software firms, including Information Builders and the French-based engineering software provider Dassault Systemes. Jim was responsible for business development of Scopeware at Mirror Worlds Technologies, the knowledge management software firm founded by Dr. David Gelernter of Yale University. Jim graduated from Villanova University with a degree in Mechanical Engineering.

Thought Leader Q&A: Christine Musil of Informative Graphics Corporation

 

Tell me about your company and the products you represent.  Informative Graphics Corp. (IGC) is a leading developer of commercial software to view, collaborate on, redact and publish documents. Our products are used by corporations, law firms and government agencies around the world to access and safely share content without altering the original document.

What are some examples of how electronic redaction has been relevant in eDiscovery lately?  Redaction walks the line between being responsive and protecting privilege and privacy. A recent example of a redaction mistake with fairly broad implications involves the lawyers for former Illinois governor Rod Blagojevich requesting a subpoena of President Obama. The court filing included areas that had been improperly redacted by Blagojevich’s lawyers. While nothing new or shocking was revealed, the snafu put his reputation up for public inspection and opinion once again.

What are some of the pitfalls in redacting PDFs?  The big pitfall is not understanding what a redaction is and why it is important to do it correctly. People continue to make the mistake of using a drawing tool to cover text and then publishing the document to PDF. The drawing shape visually blocks the text, but someone can use the Text tool in Acrobat to highlight the text and paste it into Notepad.  Using a true electronic redaction tool like Redact-It and being properly trained to use it is essential. 
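
The pitfall is easy to demonstrate. Here is a minimal sketch, assuming the open-source pypdf library and a hypothetical “cover-up.pdf” in which a black rectangle was drawn over sensitive text; extraction recovers the “hidden” words because the underlying text layer is untouched.

```python
# pip install pypdf
# "cover-up.pdf" is a hypothetical document where a black rectangle was drawn
# over sensitive text instead of the text being truly redacted.
from pypdf import PdfReader

reader = PdfReader("cover-up.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    # The drawing shape hides the text visually, but the text layer is
    # intact, so extraction recovers the "redacted" content.
    print(f"--- page {page_number} ---")
    print(page.extract_text())
```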

Is there such a thing as native redaction?  This is such a hot topic that I recently wrote a white paper on the subject titled “The Reality of Native Format Production and Redaction.” The answer is: it depends on whom you ask. From a realistic perspective, no, there is no such thing as native redaction. No tool supports multiple formats and gives you back the document in the same format as the original. Even if there were such a tool, it seems dangerous and ripe for abuse (what else might “accidentally” get changed while they are at it?).

You recently joined EDRM’s XML section. What are you currently working on in that endeavor, to the extent you can talk about, and why do you think XML is an important part of the EDRM?  The EDRM XML project is all about creating a single, universal format for eDiscovery. The organization’s goal is really to eliminate issues around the multitude of formats in the world and streamline review and production. Imagine never again receiving a CD full of flat TIFF files with separate text files! This whole issue of how users control and see document content is at the core of what IGC does, which makes this project a great fit for IGC’s expertise.  

About Christine Musil

Christine Musil is Director of Marketing for Informative Graphics Corporation, a viewing, annotation and content management software company based in Arizona. Informative Graphics makes several products including Redact-It, an electronic redaction solution used by law firms, corporate legal departments, government agencies and a variety of other professional service companies.

eDiscovery Project Management: Data Gathering Plan, Schedule Collection

We’ve already covered the first step of the data gathering plan:  preparing a list of data sources of potentially relevant materials and identifying custodians.  Now let’s fill out the plan.  Here’s a step-by-step approach:

  • Determine who will gather the data.  You need an experienced computer expert who has specialized tools that collect data in a way that preserves its integrity (see the hashing sketch after this list) and who can testify – if needed – regarding the processes and tools that were used.
  • For each data source on your list, identify where the data is located.  You should interview custodians to find out what computers, storage devices, communications devices and third party service providers they use.
  • For each data source on your list, identify what type of data exists.  You should interview custodians to find out what software programs they use to generate documents and the types of files they receive.  This list will get filled out further as you start looking at data, but getting this information early will give you a good feel for what to expect and will also give you a heads up on what may be required for processing and reviewing data.
  • Next, put together a schedule for the collection effort.  Determine the order in which data will be collected and assign dates to each data source.  Work with your client to build a schedule that causes minimal disruption to business operations.
  • Notify custodians in advance of when you’ll be working with their data and what you’ll need from them.
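
On the integrity point in the first step above, collection tools typically record a cryptographic hash of each file at acquisition time so the copies can later be verified against the originals. Here is a minimal sketch of that idea – an illustration only, not a substitute for a forensic collection tool:

```python
# Minimal sketch of a collection manifest: hash each file at acquisition time
# so copies can later be verified against the originals. An illustration only,
# not a substitute for a forensic collection tool.
import csv
import hashlib
import os

def sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(source_dir: str, manifest_path: str) -> None:
    """Walk source_dir and record path, size and hash for every file."""
    with open(manifest_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size_bytes", "sha256"])
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                writer.writerow([path, os.path.getsize(path), sha256(path)])

write_manifest("collected_data", "collection_manifest.csv")  # hypothetical paths
```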

Once your schedule is in place, you’ll be able to start planning and scheduling subsequent tasks such as processing the data.

In our next eDiscovery Project Management blog, we’ll talk about documented procedures.  We’ll cover why they are important and I’ll give you some tips for preparing effective procedures.

So, what do you think?  What do you include in your data gathering plans?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

eDiscovery Project Management: Data Gathering Plan, Identify Data Sources

 

One of the first electronic discovery tasks you’ll do for a case is to collect potentially responsive electronic documents from your client.  Before you start that collection effort, you should prepare a data-gathering plan to ensure that you are covering all the bases.  That plan should identify the locations from which data will be collected, who will collect the data, and a schedule for the collection effort.

Learn about Your Client

First, you need information from your client that is aimed at identifying all the possible locations and custodians of responsive data.  Some of this information may be available in written form, and some is best gleaned by interviewing client employees.   

Start by looking at:

  • Organization charts to identify potential custodians.
  • Organization charts for the IT and Records Management departments so you’ll know what individuals have knowledge of the technology that is used and how and where data is stored.
  • Written policies on computer use, backups, record retention, disaster recovery, and so on.

To identify all locations of potentially relevant data, interview client employees to find out about:

  • The computer systems that are used, including hardware, software, operating systems and email programs.
  • Central databases and central electronic filing systems.
  • Devices and secondary computers that are used by employees.
  • Methods that employees use for communicating, including cell phones, instant messaging, and social networking.
  • Legacy programs and how and where legacy data is stored.
  • What happens to the email and documents of employees who have left the organization.
  • Third party providers that store company information.

Once you’ve done your homework and learned what you can from your client, compile a list of data sources of potentially relevant materials.  To compile that list, you should get input from:

  • Attorneys who are familiar with the issues in the case and the rules of civil procedure.
  • Technical staff who understand how data is accessed and how and where data is stored.
  • Records management staff who are familiar with the organization’s record retention policies.
  • Client representatives who are experts in the subject matter of the litigation and familiar with the operations and business units at issue.

Once you’ve got your list of data sources, you’re ready to put together the data-gathering plan. 

So, what do you think?  Do you routinely prepare a data-gathering plan?  Have you had problems when you didn’t?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.