eDiscoveryDaily

Thought Leader Q&A: Alon Israely of BIA

 

Tell me about your company and the products you represent.  BIA is a full-solution e-discovery provider.  Our core competencies are e-discovery collections and processing, but we offer the full spectrum of e-discovery services.  For almost a decade, BIA has been developing and implementing defensible, technology-driven solutions that reduce the costs and risks related to litigation, regulatory compliance and internal audits.  BIA provides software and services to Fortune 1000 and Global 2000 companies and Am Law 100 law firms.  We are headquartered in New York City, with offices in San Francisco, Seattle, Washington DC and Southwest Michigan.  We also maintain digital evidence response units throughout the United States, Europe, Asia, and the Middle East.

BIA’s products are cost-effective and defensible: DiscoveryBOT™ for remote collections, our TD Grid system for fast e-discovery processing, and Solis™ for automated, secure legal holds.

What is the best way for lawyers and litigation support professionals to take control of their eDiscovery?  The best way for litigation support professionals to take control of their e-discovery is to scope projects correctly.  It is important to understand that one size does not fit all in e-discovery.  There are many tools and service providers out there, so it is important to focus first on what needs to be accomplished from a legal and IT perspective, and then to determine which technologies and methods best fit that strategy.

What is a good way to achieve predictability in eDiscovery costs?  Most of the cost analysis that exists in e-discovery today is focused on the review side, where the data has already been collected and perhaps culled.  Yet at that point there are still too many documents, most of them not responsive.  With a focus on the left side of the EDRM, e-discovery costs become visible early in the process.  For example, using a good (light-touch) collection tool and method to lock data down is one of the best ways to control e-discovery costs.  Doing the right collection early on and capturing the right metrics from it allows you to analyze that data at a high level, without incurring processing and other costs, which in turn helps the attorneys and the institutional client determine costs early in the process, and in a more predictable manner.
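To put rough numbers behind that idea, here is a minimal sketch of how early collection metrics can drive a cost projection.  All of the rates and volumes below are hypothetical assumptions for illustration, not BIA figures or industry benchmarks.

```python
# Sketch: projecting downstream eDiscovery costs from early collection metrics.
# All numbers are hypothetical assumptions for illustration, not vendor pricing.

def project_costs(gb_collected, docs_per_gb=5000, cull_rate=0.75,
                  processing_per_gb=150.0, review_docs_per_hour=50,
                  reviewer_hourly_rate=60.0):
    """Estimate processing and review costs from collection-stage metrics."""
    total_docs = gb_collected * docs_per_gb
    docs_for_review = total_docs * (1 - cull_rate)   # docs surviving culling
    processing_cost = gb_collected * processing_per_gb
    review_hours = docs_for_review / review_docs_per_hour
    review_cost = review_hours * reviewer_hourly_rate
    return processing_cost, review_cost

processing, review = project_costs(gb_collected=100)
print(f"Estimated processing: ${processing:,.0f}")
print(f"Estimated review:     ${review:,.0f}")
```

The point is less the specific numbers than the fact that every input comes from the collection stage, which is why locking those metrics down early makes the rest of the budget predictable.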

Is there a way to perform self collection in a defensible manner?  Yes.  Use the right tools and methods and, importantly, have those tools and methods vetted (reviewed and approved) by e-discovery collection professionals.  Defensible self-collection does NOT mean that the custodian or the IT staff are left to perform the collection on their own without the right plan behind them.  There are best practices that should be followed, and there are tools that maintain the integrity of the data.  Make sure those best practices and tools are used (having been scoped correctly – see the response above) by professionals, or at least used by staff and peer-reviewed or monitored by professionals.  Also, rely on custodians for good ESI identification: the custodians (users) usually know better than anyone where they maintain records, so using custodian questionnaires early on will help identify the systems most likely to be relevant – which goes to diligence (an important factor in defensible collections).  The professional can then work in tandem with the custodian to gather the data in a manner that ensures its evidentiary integrity.  At BIA we have followed these methods for years and have been very successful with our clients, the courts and opposing parties at defending these ways of identifying and collecting ESI.
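One concrete piece of that integrity story: many collection tools record a cryptographic hash of every file at collection time, so the evidence can be verified later.  The sketch below illustrates the idea; it is a generic illustration, not BIA’s actual tooling.

```python
# Sketch: recording SHA-256 hashes at collection time so the integrity of
# collected files can be verified later. Illustrative only -- not BIA's tooling.
import csv
import hashlib
import os

def hash_file(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(source_dir, manifest_path):
    """Walk a custodian's source directory and write a hash manifest."""
    with open(manifest_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size_bytes", "sha256"])
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                writer.writerow([path, os.path.getsize(path), hash_file(path)])

# Example (hypothetical paths):
# build_manifest("/collections/custodian_smith", "smith_manifest.csv")
```

Re-running the hashes against the manifest at any later point demonstrates that nothing changed between collection and production.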

What is the importance of the left side of the EDRM model?  The left side is where it all starts in e-discovery.  ESI collections are usually the most affordable part of the overall e-discovery process and arguably the most important: “garbage in, garbage out.”  Because the subsequent parts of the process (the “right side of the EDRM”) rely on the data identified and gathered early on, it is imperative that the “left side” tasks be performed correctly – that is, in a way that maintains the evidentiary integrity of the data collected.  The left side of the EDRM also includes preserving data and notifying custodians of their obligations to preserve – a piece critical to defensible e-discovery, especially in light of Pension Committee and other recent cases.  As for the money piece, the left side of the EDRM is where much of the planning for the rest of the process can occur without incurring substantial costs, and that planning goes a long way toward ascertaining the real costs and timing of the remainder of the e-discovery process.

About Alon Israely

Alon Israely has over fifteen years of experience in a variety of advanced computing-related technologies. Alon is a Senior Advisor in BIA’s Advisory Services group and currently oversees BIA’s product development for its core technology products. Prior to BIA, Alon consulted with law firms and their clients on a variety of technology issues, including expert witness services related to computer forensics, digital evidence management and data security. Prior to that, he was a senior member of several IT teams working on projects for Fortune 500 companies related to global network architecture and data migration projects for enterprise information systems. As a pioneer in the field of digital evidence collection and handling, Alon has worked on a wide variety of matters, including several notable financial fraud cases; large-scale multi-party international lawsuits; and corporate matters involving the SEC, FTC, and international regulatory boards.  Alon holds a B.A. from UCLA and received his J.D. from New York Law School with an emphasis in Telecommunications Law. He is a member of the New York State Bar as well as several legal and computer forensic associations.

Reporting from the EDRM Mid-Year Meeting

 

Launched in May 2005, the Electronic Discovery Reference Model (EDRM) Project was created to address the lack of standards and guidelines in the electronic discovery market.  Now, in its sixth year of operation, EDRM has become the gold standard for…well…standards in eDiscovery.  Most references to the eDiscovery industry these days refer to the EDRM model as a representation of the eDiscovery life cycle.

At the first meeting in May 2005, there were 35 attendees, according to Tom Gelbmann of Gelbmann & Associates, co-founder of EDRM along with George Socha of Socha Consulting LLC.  Check out the preliminary first draft of the EDRM diagram – it has evolved a bit!  Most participants were eDiscovery providers and, according to Gelbmann, they asked “Do you really expect us all to work together?”  The answer was “yes”, and the question hasn’t been asked again.  Today, there are over 300 members from 81 participating organizations including eDiscovery providers, law firms and corporations (as well as some individual participants).

This week, the EDRM Mid-Year meeting is taking place in St. Paul, MN.  Twice a year, in May and October, eDiscovery professionals who are EDRM members meet to continue the process of working together on various standards projects.  EDRM has eight currently active projects, as follows:

  • Data Set: provides industry-standard, reference data sets of electronically stored information (ESI) and software files that can be used to test various aspects of eDiscovery software and services,
  • Evergreen: ensures that EDRM remains current, practical and relevant and educates about how to make effective use of the Model,
  • Information Management Reference Model (IMRM): provides a common, practical, flexible framework to help organizations develop and implement effective and actionable information management programs,
  • Jobs: develops a framework for evaluating pre-discovery and discovery personnel needs or issues,
  • Metrics: provides an effective means of measuring the time, money and volumes associated with eDiscovery activities,
  • Model Code of Conduct: evaluates and defines acceptable boundaries of ethical business practices within the eDiscovery service industry,
  • Search: provides a framework for defining and managing various aspects of Search as applied to eDiscovery workflow,
  • XML: provides a standard format for eDiscovery data exchange between parties and systems, reducing the time and risk involved with data exchange.

This is my fourth year participating in the EDRM Metrics project and it has been exciting to see several accomplishments made by the group, including creation of a code schema for measuring activities across the EDRM phases, glossary definitions of those codes and tools to track early data assessment, collection and review activities.  Today, we made significant progress in developing survey questions designed to gather and provide typical metrics experienced by eDiscovery legal teams in today’s environment.
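For a flavor of what phase-coded metrics look like in practice, here is a hypothetical illustration.  The field names and activity codes below are invented for this sketch and are not the actual EDRM Metrics code schema.

```python
# Hypothetical illustration of phase-coded eDiscovery metrics -- the field
# names and codes here are invented for this sketch, not the EDRM schema.
from dataclasses import dataclass

@dataclass
class ActivityMetric:
    phase: str          # e.g. "Collection", "Processing", "Review"
    activity_code: str  # hypothetical code, e.g. "C-01"
    volume_gb: float
    item_count: int
    hours_spent: float
    cost_usd: float

metrics = [
    ActivityMetric("Collection", "C-01", 120.0, 600_000,    40.0,   8_000.0),
    ActivityMetric("Processing", "P-01", 120.0, 600_000,    25.0,  18_000.0),
    ActivityMetric("Review",     "R-01",  30.0, 150_000, 3_000.0, 180_000.0),
]

total_cost = sum(m.cost_usd for m in metrics)
print(f"Total tracked cost: ${total_cost:,.0f}")
```

Once activities are coded consistently across phases, comparing projects (and answering survey questions like the ones we drafted today) becomes a matter of simple aggregation.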

So, what do you think?  Has EDRM impacted how you manage eDiscovery?  If so, how?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

eDiscovery Project Management: Tips for Creating Effective Procedures

Yesterday, we talked about why written procedures are important in eDiscovery and the types of procedures you should be writing.  Today I’m going to give you some tips for creating effective procedures.

First, let me say that writing procedures is easy.  In fact, it’s probably the easiest writing task you’ll ever do.  You don’t need to be creative.  You don’t need to develop an elegant writing style. In fact, the best procedures are simple and to the point.  All that’s required to write good procedures is knowledge of how to do the task and some guidelines.  Here are the guidelines:

  • When possible, break a task down into its subcomponents and draft procedures for each subcomponent.  It’s likely that different parts of a task will be handled by different people and done at different times.  Each component, therefore, should have its own set of procedures.  For example, your procedures for collecting responsive data may have components for notifying custodians, interviewing custodians, copying the data, maintaining records, and preparing and transporting media.
  • Use simple, clear language.  Keep sentences short and simple.  Use simple words.  If you are writing instructions to be used by attorneys, avoid using technical terms and acronyms with which they may not be familiar.
  • Make the procedures detailed.  Assume your reader doesn’t know anything about the task.
  • Make sure the steps are well organized and in the right order.
  • Format the procedures so that they are easy to read.  Use bullets, numbered points, and outline formats.  It’s much easier to follow instructions that are clearly laid out in steps than it is to follow procedures written in paragraphs.  This, incidentally, makes it easier to write procedures.  You don’t need to worry about the flow of sentences or paragraphs.  You just really need to put together a set of clear bullet points.
  • When possible, use illustrations.  If you are providing instructions for using a technology tool, include screenshots, and mark up those screenshots with annotations such as arrows and circles to emphasize the instructions.

It’s always a good idea to test your procedures before you apply them.  Ask someone who hasn’t done the task before to apply the procedures to a sample of the work.  Holes in the procedures will surface quickly.

So, what do you think?  Do you have any good tips for drafting procedures in eDiscovery?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

eDiscovery Project Management: The Importance of Good Written Procedures

 

Even for simple eDiscovery tasks, good written procedures are critical.  They will:

  • Ensure that everyone doing the work understands the task.
  • Cut down or eliminate inconsistencies in the work product.
  • Cut down or eliminate the need for re-work.
  • Foster efficiencies that will help prevent cost over-runs and missed deadlines.
  • Eliminate time spent “reinventing the wheel” each time a task is done.

Written procedures are a good idea for all tasks, but they are especially important for work done by multiple people.  Often procedures are overlooked for simple tasks.  It’s easy to feel comfortable that everyone will do a simple task well.  The problem is that it’s very easy for two people to interpret a task differently.  When you have a large group of people working on a task – for example, a group doing a review of an electronic document collection – the potential for inconsistent work is enormous.

Let me give you some examples of the types of procedures you should be creating:

  • Procedures for gathering potentially responsive documents from your client:  These procedures should include instructions for notifying custodians, for interviewing custodians, for the tools that are to be used, for the types of copies that are to be made, for the storage media to be used, for keeping records of the collection effort, and for delivering data for subsequent processing.
  • Procedures for a document review:  These procedures should include clear, objective criteria for responsiveness and privilege, instructions for using the review tool, instructions for retrieving batches of documents to review, and instructions for resolving questions.

In a perfect world, you would have detailed, written procedures for all of the tasks that you do yourself, and for all of the tasks done by those who report to you.  Unfortunately, most organizations aren’t there yet.  If you don’t have a complete set of procedures yet, create them whenever a task is at hand.  Over time, you will build a library of procedures for the tasks that you handle.  Procedures are not hard to write.  Tomorrow I’ll give you some tips that will serve as a guideline for creating effective procedures.

So, what do you think?  Have you worked on eDiscovery projects where written procedures would have helped?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

Thought Leader Q&A: Jim McGann of Index Engines

 

Tell me about your company and the products you represent.  Businesses today face a significant challenge organizing their files and email to ensure timely and cost-efficient access, while also maintaining compliance with regulations governing electronic data.  Founded in 2003, Index Engines’ mission is to organize enterprise data assets and make them immediately accessible, searchable and easy to manage.

Index Engines’ discovery platform is the only solution on the market to offer a complete view of electronic data assets. Online data is indexed in-stream at wire speed in native enterprise storage protocols, enabling high-speed, efficient indexing of proprietary backup and transfer formats. Our unique approach to offline records scans backup tapes, indexes the contents and extracts relevant data, eliminating the time-consuming restoration process. Index Engines provides the only comprehensive discovery platform across both online and offline data, saving time and money when managing enterprise information.

What has caused backup tapes to become so relevant in eDiscovery?  Tape discovery really appeared on the map after the renowned Zubulake case in 2003, and was reinforced by the FRCP amendments in 2006 and again last year with California’s adoption of its eDiscovery act, AB 5.  Each of these milestones propelled tape discovery further into the eDiscovery market.  These days, tapes are as common as any other container from which to discover relevant electronically stored information (ESI).

What can companies proactively do to address tape storage?  Needlessly storing old backup tapes is both a potential liability and a wasted expense.  The liability comes from not knowing what information the tapes contain.  The cost of offsite tape storage – even if it is only a few dollars a month per tape – quickly adds up.  Tape remediation is the process of proactively discovering data contained on legacy backup tapes and then applying a corporate retention policy to that tape data.  Once the relevant data has been identified and archived accordingly, the tapes can be destroyed or recycled.
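Schematically, the remediation decision for each indexed tape record reduces to a retention test plus a legal hold test.  The sketch below assumes a simple date-based retention policy and a custodian hold list, both hypothetical.

```python
# Sketch: applying a (hypothetical) retention policy to an indexed tape catalog.
# Records outside the retention window and not on legal hold are flagged so the
# underlying tapes can be recycled once relevant data has been archived.
from datetime import date

RETENTION_CUTOFF = date(2004, 1, 1)          # assumed 7-year retention window
LEGAL_HOLD_CUSTODIANS = {"jsmith", "mdoe"}   # assumed hold list

tape_catalog = [  # hypothetical index records
    {"tape": "T-0012", "custodian": "jsmith", "last_modified": date(2001, 5, 3)},
    {"tape": "T-0458", "custodian": "rlee",   "last_modified": date(2000, 2, 9)},
    {"tape": "T-0999", "custodian": "rlee",   "last_modified": date(2006, 7, 1)},
]

def disposition(record):
    if record["custodian"] in LEGAL_HOLD_CUSTODIANS:
        return "ARCHIVE (legal hold)"
    if record["last_modified"] >= RETENTION_CUTOFF:
        return "ARCHIVE (within retention)"
    return "RECYCLE TAPE"

for rec in tape_catalog:
    print(rec["tape"], "->", disposition(rec))
```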

How can a legal or litigation support professional substantiate claims of processing speed made by eDiscovery vendors?  Without an industry-standard, vendor-neutral benchmarking process, this is a difficult challenge.  I would recommend performing a proof of concept to see the performance in action.  Another approach is to question the components of the technology: is it simply off-the-shelf freeware that has been repackaged, or is it something more powerful?
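A proof of concept can be as simple as timing the tool over a corpus of known size and computing the throughput yourself.  A minimal timing harness follows; the command line shown is a placeholder, not a real vendor CLI.

```python
# Sketch: measuring indexing throughput during a proof of concept.
# "vendor_index_tool" is a placeholder command, not a real product CLI.
import subprocess
import time

CORPUS_SIZE_GB = 250.0   # known size of the test corpus

start = time.monotonic()
subprocess.run(["vendor_index_tool", "--source", "/poc/corpus"], check=True)
elapsed_hours = (time.monotonic() - start) / 3600

print(f"Measured throughput: {CORPUS_SIZE_GB / elapsed_hours:.1f} GB/hour")
```

Running the same harness over the same corpus for each vendor gives you the vendor-neutral comparison the industry standard doesn’t yet provide.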

You have recently had patents approved for your technology. Can you explain this in greater detail?  Index Engines has engineered a platform that performs sequential processing of data. We received both US and European patents for this unique approach towards the processing of enterprise data, which makes the data searchable and discoverable across both primary and secondary (backup) storage. Our patented approach enables the indexing of electronic data as it flows to backup, as well as documented high speed indexing of network data at 1TB per hour per node.

About Jim McGann
Jim is Vice President of Information Discovery for Index Engines.  Jim has extensive experience in eDiscovery and information management.  He is currently contributing to the Sedona working group addressing electronic document retention and production.  Jim is also a frequent speaker for industry organizations such as ARMA and ILTA, and has authored multiple articles for legal technology and information management publications.  In recent years, Jim has worked for technology-based start-ups providing financial services and information management solutions.  Prior to Index Engines, he worked for leading software firms, including Information Builders and the France-based engineering software provider Dassault Systemes.  Jim was responsible for business development of Scopeware at Mirror Worlds Technologies, the knowledge management software firm founded by Dr. David Gelernter of Yale University.  Jim graduated from Villanova University with a degree in Mechanical Engineering.

Thought Leader Q&A: Christine Musil of Informative Graphics Corporation

 

Tell me about your company and the products you represent.  Informative Graphics Corp. (IGC) is a leading developer of commercial software to view, collaborate on, redact and publish documents. Our products are used by corporations, law firms and government agencies around the world to access and safely share content without altering the original document.

What are some examples of how electronic redaction has been relevant in eDiscovery lately?  Redaction is walking the line between being responsive and protecting privilege and privacy.  A great recent example of a redaction mistake with broad implications involved the lawyers for former Illinois governor Rod Blagojevich requesting a subpoena of President Obama.  Their court filing included areas that had been improperly redacted.  While nothing new or shocking was revealed, the snafu put his reputation up for public inspection and opinion once again.

What are some of the pitfalls in redacting PDFs?  The big pitfall is not understanding what a redaction is and why it is important to do it correctly. People continue to make the mistake of using a drawing tool to cover text and then publishing the document to PDF. The drawing shape visually blocks the text, but someone can use the Text tool in Acrobat to highlight the text and paste it into Notepad.  Using a true electronic redaction tool like Redact-It and being properly trained to use it is essential. 
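The failure is easy to demonstrate: text “covered” by a drawing shape is still present in the PDF and comes back out with any text extractor, not just Acrobat’s Text tool.  Here is a short sketch using the open-source pypdf library (the file name is a placeholder):

```python
# Sketch: text "hidden" behind a drawn rectangle is still extractable, because
# the drawing shape sits on top of the text rather than removing it.
# Requires: pip install pypdf. The file name is a placeholder.
from pypdf import PdfReader

reader = PdfReader("improperly_redacted.pdf")
for page in reader.pages:
    # Any supposedly redacted passages appear here in plain text.
    print(page.extract_text())
```

A true redaction tool rewrites the document so the underlying text is actually removed, which is why nothing an extractor can reach survives the process.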

Is there such a thing as native redaction?  This is such a hot topic that I recently wrote a white paper on the subject titled “The Reality of Native Format Production and Redaction.”  The answer is: it depends on whom you ask.  From a realistic perspective, no, there is no such thing as native redaction.  There is no tool that supports multiple formats and gives you back the document in the same format as the original.  Even if there were such a tool, it seems dangerous and ripe for abuse (what else might “accidentally” get changed along the way?).

You recently joined EDRM’s XML project. What are you currently working on in that endeavor, to the extent you can talk about it, and why do you think XML is an important part of the EDRM?  The EDRM XML project is all about creating a single, universal format for eDiscovery.  The organization’s goal is to eliminate the issues around the multitude of formats in the world and to streamline review and production.  Imagine never again receiving a CD full of flat TIFF files with separate text files!  How users control and see document content is at the core of what IGC does, which makes this project a great fit for IGC’s expertise.
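For a sense of what a single interchange format buys you, here is a toy example of generating such a file programmatically.  The element and attribute names below are invented placeholders for illustration; they are not the official EDRM XML schema.

```python
# Sketch: generating a tiny eDiscovery interchange document with xml.etree.
# Element and attribute names are invented placeholders for illustration;
# they are NOT the official EDRM XML schema.
import xml.etree.ElementTree as ET

root = ET.Element("ProductionSet", case="Acme v. Widget")  # hypothetical case
doc = ET.SubElement(root, "Document", id="DOC-000001")
ET.SubElement(doc, "NativeFile").text = "natives/DOC-000001.msg"
ET.SubElement(doc, "ExtractedText").text = "text/DOC-000001.txt"
tags = ET.SubElement(doc, "Tags")
ET.SubElement(tags, "Tag", name="Responsive").text = "true"

ET.ElementTree(root).write("production.xml", xml_declaration=True,
                           encoding="utf-8")
```

Natives, extracted text and tags travel together in one self-describing file, instead of a CD of flat TIFFs with separate text files.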

About Christine Musil

Christine Musil is Director of Marketing for Informative Graphics Corporation, a viewing, annotation and content management software company based in Arizona. Informative Graphics makes several products including Redact-It, an electronic redaction solution used by law firms, corporate legal departments, government agencies and a variety of other professional service companies.

eDiscovery Project Management: Data Gathering Plan, Schedule Collection

We’ve already covered the first step of the data gathering plan:  preparing a list of data sources of potentially relevant materials and identifying custodians.  Now let’s fill out the plan.  Here’s a step-by-step approach:

  • Determine who will gather the data.  You need an experienced computer expert who has specialized tools that collect data in a way that preserves its integrity and who can testify – if needed – regarding the processes and tools that were used.
  • For each data source on your list, identify where the data is located.  You should interview custodians to find out what computers, storage devices, communications devices and third party service providers they use.
  • For each data source on your list, identify what type of data exists.  You should interview custodians to find out what software programs they use to generate documents and the types of files they receive.  This list will get filled out further as you start looking at data, but getting this information early will give you a good feel for what to expect and will also give you a heads up on what may be required for processing and reviewing data.
  • Next, put together a schedule for the collection effort.  Determine the order in which data will be collected and assign dates to each data source.  Work with your client to build a schedule that causes minimal disruption to business operations.
  • Notify custodians in advance of when you’ll be working with their data and what you’ll need from them.

Once your schedule is in place, you’ll be able to start planning and scheduling subsequent tasks such as processing the data.
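A collection schedule doesn’t need to be elaborate; even a simple structured list, ordered by date, works.  Here is a minimal sketch with hypothetical custodians, sources and dates.

```python
# Sketch: a simple collection schedule, ordered to minimize business disruption.
# Custodians, sources, and dates are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class CollectionTask:
    custodian: str
    source: str          # e.g. laptop, file share, mail server
    scheduled: date
    notified: bool = False   # custodian notified in advance?

schedule = [
    CollectionTask("J. Smith", "laptop",            date(2010, 11, 1)),
    CollectionTask("J. Smith", "department share",  date(2010, 11, 2)),
    CollectionTask("M. Doe",   "mail server store", date(2010, 11, 3)),
]

for task in sorted(schedule, key=lambda t: t.scheduled):
    print(f"{task.scheduled}  {task.custodian:10s} {task.source}")
```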

In our next eDiscovery Project Management blog, we’ll talk about documented procedures.  We’ll cover why they are important and I’ll give you some tips for preparing effective procedures.

So, what do you think?  What do you include in your data gathering plans?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

eDiscovery Project Management: Data Gathering Plan, Identify Data Sources

 

One of the first electronic discovery tasks you’ll do for a case is to collect potentially responsive electronic documents from your client.  Before you start that collection effort, you should prepare a data-gathering plan to ensure that you are covering all the bases.  That plan should identify the locations from which data will be collected, who will collect the data, and a schedule for the collection effort.

Learn about Your Client

First, you need information from your client that is aimed at identifying all the possible locations and custodians of responsive data.  Some of this information may be available in written form, and some is best gleaned by interviewing client employees.   

Start by looking at:

  • Organization charts to identify potential custodians.
  • Organization charts for the IT and Records Management departments so you’ll know what individuals have knowledge of the technology that is used and how and where data is stored.
  • Written policies on computer use, back-ups, record-retention, disaster recovery, and so on.

To identify all locations of potentially relevant data, interview client employees to find out about:

  • The computer systems that are used, including hardware, software, operating systems and email programs.
  • Central databases and central electronic filing systems.
  • Devices and secondary computers that are used by employees.
  • Methods that employees use for communicating including cell phones, instant messaging, and social networking.
  • Legacy programs and how and where legacy data is stored.
  • What happens to the email and documents of employees who have left the organization.
  • Third party providers that store company information.

Once you’ve done your homework and learned what you can from your client, compile a list of data sources of potentially relevant materials.  To compile that list, you should get input from:

  • Attorneys who are familiar with the issues in the case and the rules of civil procedure.
  • Technical staff who understand how data is accessed and how and where data is stored.
  • Records management staff who are familiar with the organization’s record retention policies.
  • Client representatives who are experts in the subject matter of the litigation and familiar with the operations and business units at issue.

Once you’ve got your list of data sources, you’re ready to put together the data-gathering plan. 

So, what do you think?  Do you routinely prepare a data-gathering plan?  Have you had problems when you didn’t?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.

Announcing eDiscovery Thought Leader Q&A Series!

 

eDiscovery Daily is excited to announce a new blog series of Q&A interviews with various eDiscovery thought leaders.  Over the next three weeks, we will publish interviews conducted with six individuals with unique and informative perspectives on various eDiscovery topics.  Mark your calendars for these industry experts!

Christine Musil is Director of Marketing for Informative Graphics Corporation, a viewing, annotation and content management software company based in Arizona.  Christine will be discussing issues associated with native redaction and redaction of Adobe PDF files.  Her interview will be published this Thursday, October 14.

Jim McGann is Vice President of Information Discovery for Index Engines. Jim has extensive experience in eDiscovery and information management.  Jim will be discussing issues associated with tape backup and retrieval.  His interview will be published this Friday, October 15.

Alon Israely is a Senior Advisor in BIA’s Advisory Services group and currently oversees BIA’s product development for its core technology products.  Alon will be discussing best practices associated with “left side of the EDRM model” processes such as preservation and collection.  His interview will be published next Thursday, October 21.

Chris Jurkiewicz is Co-Founder of Venio Systems, which provides Venio FPR™, enabling legal teams to analyze data and conduct early case assessment and first pass review of any size data set.  Chris will be discussing current trends associated with early case assessment and first pass review tools.  His interview will be published next Friday, October 22.

Kirke Snyder is Owner of Legal Information Consultants, a consulting firm specializing in eDiscovery Process Audits to help organizations lower the risk and cost of e-discovery.  Kirke will be discussing best practices associated with records and information management.  His interview will be published on Monday, October 25.

Brad Jenkins is President and CEO of Trial Solutions, an electronic discovery software and services company that assists litigators in the collection, processing and review of electronic information.  Brad will be discussing trends associated with SaaS eDiscovery solutions.  His interview will be published on Tuesday, October 26.

We thank all of our guests for participating!

So, what do you think?  Is there someone you would like to see interviewed for the blog?  Are you an industry expert with some information to share from your “soapbox”?  If so, please share any comments or contact me at daustin@trialsolutions.net.  We’re looking to assemble our next group of interviews now!

eDiscovery Case Study: Term List Searching for Deadline Emergencies!

 

A few weeks ago, I was preparing to conduct a Friday morning training session for a client – to show them how to use FirstPass™, powered by Venio FPR™, for a first pass review of their data – when I received a call from the client.  “We thought we were going to have a month to review this data, but because of a judge’s ruling in the case, we now have to start depo prep for two key custodians on Monday for depositions scheduled next week”, said Megan Moore, attorney with Steele Sturm, PLLC, in Houston.  “We have to complete our review of their files this weekend.”

So, what do you do when you have to conduct both a first pass and final review of the data in a weekend?

Steele Sturm had to complete first pass review that Friday so that we could prepare the potentially responsive files for attorney review starting Saturday morning.  Steele Sturm identified a list of responsive search terms, and Trial Solutions worked with the attorneys to add variations of the terms (such as proximity searches and synonyms) to finalize the list to apply to the data.  Because FirstPass provides the ability to import and search an entire term list at once, we were able to identify potentially responsive files in a simple, two-step process.  “Using FirstPass, Trial Solutions helped us cull out 75% of the collection as non-responsive, enabling our review team to focus review on the remaining 25%”, said Moore.
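Conceptually, importing a term list and applying it in a single pass looks something like the sketch below.  This is a generic illustration of term-list culling, not the FirstPass implementation; the terms and paths are hypothetical.

```python
# Sketch: applying an imported term list across a document set in one pass to
# cull non-responsive files. Generic illustration only -- not the FirstPass
# implementation. Terms and paths are hypothetical.
import os
import re

term_list = ["merger", "acquisition", "buyout"]   # hypothetical term list
pattern = re.compile("|".join(re.escape(t) for t in term_list), re.IGNORECASE)

potentially_responsive, non_responsive = [], []
for root, _dirs, files in os.walk("/case/custodian_files"):
    for name in files:
        path = os.path.join(root, name)
        with open(path, errors="ignore") as f:   # skip undecodable bytes
            if pattern.search(f.read()):
                potentially_responsive.append(path)
            else:
                non_responsive.append(path)

total = len(potentially_responsive) + len(non_responsive)
print(f"{len(non_responsive)} of {total} files culled as non-responsive")
```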

Once the potentially responsive files were identified, they were imported into OnDemand™, powered by ImageDepot™, for linear attorney review.  During review, the attorneys determined that some of the terms used in identifying potentially responsive files were overbroad, so additional searches were performed in OnDemand to “group tag” those files as non-responsive.  “Trial Solutions provided training and support throughout the weekend to enable our review team to quickly ‘tag’ each file using OnDemand as to responsiveness and privilege, helping us meet our deadline”, said Moore.

So, what do you think?  Do you have any “emergency” war stories to share?  Please share any comments you might have or tell us if you’d like to know more about a particular topic.