eDiscoveryDaily

eDiscovery Best Practices: When is it OK to Produce without Linear Review?

 

At eDiscoveryDaily, the title of our daily post usually reflects some eDiscovery news and/or analysis that we are providing our readers.  However, based on a comment I received from a colleague last week, I thought I would ask a thought-provoking question for this post.

There was an interesting post in the EDD Update blog a few days ago entitled Ediscovery Production Without Review, written by Albert Barsocchini, Esq.  The post noted that due to “[a]dvanced analytics, judicial acceptance of computer aided coding, claw back/quick-peek agreements, and aggressive use of Rule 16 hearings”, many attorneys are choosing to produce responsive ESI without spending time and money on a final linear review.

A colleague of mine sent me an email with a link to the post and stated, “I would not hire a firm if I knew they were producing without a doc by doc review.”

Really?  What if:

  • You collected the equivalent of 10 million pages* and still had 1.2 million potentially responsive pages after early data assessment/first pass review? (an 88% reduction of the population, which is a very high culling percentage in most cases)
  • And your review team could review 60 pages per hour, requiring 20,000 hours to complete the responsiveness review?
  • And their average rate was a very reasonable $75 per hour to review, resulting in a total cost of $1.5 million to perform a doc by doc review?
  • And you had a clawback agreement in place so that you could claw back any inadvertently produced privileged files?
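For readers who want to check the math, the bullets above reduce to a few lines of arithmetic (a back-of-the-envelope sketch, not a pricing model):

```python
# Back-of-the-envelope linear review cost model from the bullets above.
collected_pages = 10_000_000
responsive_pages = 1_200_000            # remaining after early data assessment / first pass review

culling_rate = 1 - responsive_pages / collected_pages    # 0.88, i.e., 88% culled

pages_per_hour = 60                     # reviewer throughput
rate_per_hour = 75                      # reviewer billing rate, in dollars

review_hours = responsive_pages / pages_per_hour         # 20,000 hours
review_cost = review_hours * rate_per_hour               # $1,500,000

print(f"Culled: {culling_rate:.0%}")
print(f"Hours:  {review_hours:,.0f}")
print(f"Cost:   ${review_cost:,.0f}")
```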

“Would you insist on a doc by doc review then?” I asked.

Let’s face it, $1.5 million is a lot of money to spend on linear review, and the data volume in some large cases may be so great that an effective argument can be made for relying on technology to identify the files to produce.

On the other hand, if you’re a company like Google and you inadvertently produce a document in a case potentially worth billions of dollars, $1.5 million doesn’t seem nearly as big an amount to spend given the risk associated with potential mistakes.  Also, as the Google case and this case illustrate, there are no guarantees with regard to the ability to claw back inadvertently produced files.  The cost of linear review, especially in larger cases, needs to be weighed against the potential risk of not conducting that review so that the organization can determine the best approach for its situation.

So, what do you think?  Do you produce in cases where not all of the responsive documents are reviewed before production? Are there criteria that you use to determine when to conduct or forego linear review?  Please share any comments you might have or if you’d like to know more about a particular topic.

*I used pages in the example to provide a frame of reference to which most attorneys can relate.  While 10 million pages may seem like a large collection, at an average of 50,000 pages per GB, that is only 200 total GB.  Many laptops and desktops these days have a drive that big, if not larger.  Depending on your review approach, most, if not all, original native files would probably never be converted to a standard paginated document format (e.g., TIFF or PDF).  So, it is unlikely that the total page count of the collection would ever be truly known.
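The page-to-gigabyte conversion in that footnote is easy to verify (using the rough 50,000 pages per GB average cited above):

```python
pages = 10_000_000
pages_per_gb = 50_000        # rough industry rule of thumb used in the footnote

gigabytes = pages / pages_per_gb
print(gigabytes)             # 200.0 -- a collection that fits on a single laptop drive
```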

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery Trends: Announcing Holiday Thought Leader Series!

 

eDiscoveryDaily thought quite a bit about what to get for our readers to celebrate these holidays, and what better to give you than interviews with some of the most influential thought leaders in eDiscovery today!  We haven’t had this much fun since the last round of thought leader interviews we conducted at Legal Tech New York earlier this year!  For a recap of those interviews, click here.

Jason Krause has been working hard and has “chased” down several well-respected individuals and, as a result, we’re pleased to introduce the schedule for the series, which will begin this Wednesday, December 14.

Here are the interviews that we will be publishing over the next two weeks:

Wednesday, December 14: Jason Baron, National Archives' Director of Litigation since 2000 and Co-Chair of the Working Group on Electronic Document Retention and Production for the Sedona Conference.  Jason is also one of the founding coordinators of the TREC Legal Track, a search project organized through the National Institute of Standards and Technology to evaluate search protocols used in eDiscovery. This year, Jason was awarded the Emmett Leahy Award for Outstanding Contributions and Accomplishments in the Records and Information Management Profession.

Thursday, December 15: Bennett Borden, Co-Chair of Williams Mullen’s eDiscovery and Information Governance Section.  Based in Richmond, Va., Bennett’s practice is focused on Electronic Discovery and Information Law. Bennett has published several papers on the use of predictive coding in litigation and is a frequent speaker on eDiscovery topics.

Friday, December 16: John Simek, Vice President of Sensei Enterprises, a computer forensics firm in Fairfax, Va., where he has worked since 1997. He is an EnCase Certified Examiner and is a nationally known testifying expert in computer forensic issues.

Monday, December 19: Joshua Poje, Research Specialist with the American Bar Association’s Legal Technology Resource Center, which publishes the Annual Legal Technology Survey. He is a graduate of DePaul University College of Law and Augustana College.

Tuesday, December 20: Joseph Collins, co-founder and president of VaporStream, which provides recordless communications. Collins previously worked in the energy marketplace, but has become an advocate for private communication in business, even within the legal community.

Wednesday, December 21: Sharon Nelson, President of Sensei Enterprises, where she has worked on the front lines of computer forensics and EDD – topics she also discusses on her blog, Ride the Lightning (one of my favorites!).  She is a graduate of the Georgetown University Law Center and is the president-elect of the Virginia Bar Association.

Thanks to everyone for their time in participating in these interviews!  And, thanks to Jason for securing interviews with these key individuals for eDiscoveryDaily.

So, what do you think?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Case Law: Another Losing Plaintiff Taxed for eDiscovery Costs

As noted yesterday and back in May, prevailing defendants are becoming increasingly successful in obtaining awards against plaintiffs for reimbursement of eDiscovery costs.

An award of costs to the successful defendants in a patent infringement action included $64,295 in costs for conversion of data to TIFF format and $5,950 for an eDiscovery project manager in Jardin v. DATAllegro, Inc., No. 08-CV-1462-IEG (WVG) (S.D. Cal. Oct. 12, 2011).

The defendants obtained summary judgment of non-infringement and submitted bills of costs that included those eDiscovery expenses.  Plaintiff contended that the costs should be denied because he had litigated the action and its difficult issues in good faith and because there was a significant economic disparity between him and the corporate parent of one of the defendants.

The court concluded that plaintiff had failed to rebut the presumption in Fed. R. Civ. P. 54 in favor of awarding costs. The action was resolved through summary judgment rather than a complicated trial, and there was no case law suggesting that the assets of a parent corporation should be considered in assessing costs. The financial position of the party having to pay the costs might be relevant, but it appeared plaintiff was the founder of a company that had been sold for $500 million.

Taxing of costs for converting files to TIFF format was appropriate, according to the court, because the Federal Rules required production of electronically stored information and “a categorical rule prohibiting costs for converting data into an accessible, readable, and searchable format would ignore the practical realities of discovery in modern litigation.” The court stated: “Therefore, where the circumstances of a particular case necessitate converting e-data from various native formats to the .TIFF or another format accessible to all parties, costs stemming from the process of that conversion are taxable exemplification costs under 28 U.S.C. § 1920(4).”

The court also rejected plaintiff’s argument that costs associated with an eDiscovery “project manager” were not taxable because they related to the intellectual effort involved in document production:

Here, the project manager did not review documents or contribute to any strategic decision-making; he oversaw the process of converting data to the .TIFF format to prevent inconsistent or duplicative processing. Because the project manager’s duties were limited to the physical production of data, the related costs are recoverable.

So, what do you think?  Will more prevailing defendants seek to recover eDiscovery costs from plaintiffs? Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.


eDiscovery Case Law: Plaintiff Responsible for Taxation of eDiscovery Costs

Back in May, we discussed a case where the plaintiff, after losing its lawsuit, was responsible for repaying the defendant more than $367,000 in eDiscovery costs.  It appears that making plaintiffs responsible for eDiscovery costs when they lose is becoming a trend.

In re Aspartame Antitrust Litig., No. 2:06-CV-1732-LDD (E.D. Pa. Oct. 5, 2011), a case with a “staggering” volume of discovery, successful defendants were awarded about $500,000 of their electronic discovery costs for a litigation database, imaging hard drives, keyword searches, de-duplication, and data extraction that allowed for cost-effective discovery. However, the court refused to award costs for defendants’ use of an eDiscovery program that provided visual clustering of documents and went beyond necessary keyword search and filtering functions.

Defendants in an artificial sweetener market allocation and price fixing class action obtained summary judgment against two representative plaintiffs that had not purchased the sweetener within the four-year statute of limitations. Defendants filed bills of costs, and the plaintiffs asked the court to deny or reduce those costs.

The court granted about $500,000 in disputed costs, most of which were incurred by defendants during electronic discovery. The volume of discovery was “staggering,” according to the court, and “in cases of this complexity, eDiscovery saves costs overall by allowing discovery to be conducted in an efficient and cost-effective manner.” Defendants’ use of third party vendors for keyword searches and culling of duplicates allowed one defendant to reduce over 366 gigabytes of potentially responsive data by 85%. The court stated:

“We therefore award costs for the creation of a litigation database, storage of data, imaging hard drives, keyword searches, de-duplication, data extraction and processing. Because a privilege screen is simply a keyword search for potentially privileged documents, we award that cost as well. In addition, we award costs associated with hosting data that accrued after defendants produced documents to plaintiffs because, as the plaintiffs themselves acknowledged earlier in the proceedings, discovery was ongoing in this case up until summary judgment was issued.”

The court also awarded costs for technical support and the creation of load files. However, it would “draw the line” at awarding costs for use of a “sophisticated eDiscovery program” that provided concept-based visual clustering of document collections. Such a service was “undoubtedly helpful,” but it was “squarely within the realm of costs that are not necessary for litigation but rather are acquired for the convenience of counsel.”

So, what do you think?  Should plaintiffs have to reimburse eDiscovery costs to defendants if they lose? Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.


eDiscovery Project Management: “Belt and Suspenders” Approach for Effective Communication

 

eDiscovery Daily has published 57 posts to date related to Project Management principles (including this one).  Those include two excellent series by Jane Gennarelli, one covering a range of eDiscovery Project Management best practice topics from October through December last year, and another covering management of a contract review team, which ran from January to early March this year.

Effective communication is a key part of effective project management, whether that communication is internally within the project team or externally with your client.  It is so easy for miscommunications to occur that can derail your project and cause deadlines to be missed, or work product to be incomplete or not meet the client’s expectations.

I like to employ a “belt and suspenders” approach to communication with clients as much as possible: discussing requirements or issues with the client and then following up with documentation to confirm the understanding.  That seems obvious, and many project managers start out that way – they discuss project requirements and services with a client and then formally document them in a contract or other binding agreement.  However, as time progresses, many PMs become lax about following up to document changes to scope, or approaches to handling specific exceptions, that they have discussed with clients.  Often, it’s the little day-to-day discussions and decisions that aren’t documented that come back to haunt you.  Other PMs communicate solely via email and keep the project team waiting for the client to respond to the latest message.  Unless there is a critical decision for which documented agreement is required to proceed, discussing first and documenting afterward keeps the project moving while ensuring each decision gets recorded.

I can think of several instances where this approach helped avoid major issues, especially with the follow-up agreement or email.  If nothing else, it gives you something to point back to if miscommunication occurs.  Years ago, I met with a client and reviewed a set of hard copy documents that they wanted scanned, processed and loaded into a database (we had a Master Services Agreement in place to cover those services).  The client said they had “sticky notes” on the documents that they wanted.  I took the time to go through those, ask questions and verbally confirm my understanding of which documents they wanted processed.  I then documented in an email what services they wanted and the ranges of documents they requested to be processed and they confirmed the services and those documents in their response (evidently without looking too closely at the list of document ranges).

What the client didn’t know is that one of their paralegals had removed “sticky notes” from some of the documents, so I didn’t have all of the document ranges they intended to process.  When they later started asking questions why certain documents weren’t processed, I was able to point back to the email showing their approval of the document ranges to process, verifying that we had processed the documents as instructed.  The client realized the mistake was theirs, not ours, and we helped them get the remaining documents processed and loaded.  Our reputation with that client remained strong – thanks to the “belt and suspenders” approach!

So, what do you think?  Have you had miscommunications with clients because of inadequate documentation? Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Search “Gotchas” Still Get You

 

A few days ago, I reviewed search syntax that one of my clients had prepared and noticed a couple of “gotchas” that typically cause problems.  While we’ve discussed them on this blog before, it was over a year ago (when eDiscovery Daily was still in its infancy and had a fraction of the readers it has today), so it bears covering them again.

Letting Your Wildcards Run Wild

This client liberally used wildcards to catch variations of words in their hits.  As noted previously, sometimes you can retrieve WAY more with your wildcards than you expect.  In this case, one of the wildcard terms was “win*” (presumably to catch win, wins, winner, winning, etc.).  Unfortunately, there are 253 words that begin with “win”, including wince, winch, wind, windbag, window, wine, wing, wink, winsome, winter, etc.

How do I know that there are 253 words that begin with “win”?  Am I an English professor?  No.  But, I did stay at a Holiday Inn Express last night.  Just kidding.

Actually, there is a site that shows a list of words beginning with your search string: Morewords.com (e.g., to get all 253 words beginning with “win”, go here – simply substitute any characters for “win” in the URL to see the words that start with those characters).  This site enables you to test your wildcard terms before using them in searches and substitute the specific variations you want if the wildcard is likely to retrieve too many false hits.  Or, if you use an application like FirstPass™, powered by Venio FPR™, for first pass review, you can type the wildcard string in the search form, display all the words – in your collection – that begin with that string, and select the variations on which to search.  Either way, you avoid retrieving a lot of false hits you don’t want.
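The same prefix test is trivial to run yourself against any word list.  Here is a minimal sketch (the word list is a hypothetical stand-in; in practice you would use a dictionary file or the indexed terms from your review platform):

```python
def expand_prefix(prefix, words):
    """Return every word in `words` that the wildcard search `prefix*` would match."""
    return sorted(w for w in words if w.startswith(prefix))

# Hypothetical word list for illustration only.
words = ["win", "wins", "winner", "winning", "wince", "winch",
         "window", "wine", "wink", "winter", "wallet", "wonder"]

hits = expand_prefix("win", words)
print(hits)   # everything starting with "win", including likely false hits like "winter"
```

Scanning the expanded list before searching makes it obvious which variations are false hits you should exclude.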

Those Stupid Word “Smart” Quotes

As many attorneys do, this client used Microsoft Word to prepare his proposed search syntax.  The last few versions of Microsoft Word, by default, automatically change straight quotation marks ( ' or " ) to curly quotes as you type. When you copy that text to a format that doesn’t support the smart quotes (such as HTML or a plain text editor), the quotes will show up as garbage characters because curly quotes are not standard ASCII characters.  So:

“smart quotes” aren’t very smart

will look like this…

âsmart quotesâ arenât very smart

And, your search will either return an error or some very odd results.

To learn how to disable the automatic changing of quotes to smart quotes, or to replace smart quotes already in a file, refer to this post from last year.  And be careful – there are a lot of “gotchas” out there that can cause search problems.  That’s why it’s always best to be a “STARR” and test your searches, refine and repeat them until they yield expected results.
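If you routinely receive search syntax prepared in Word, it can also be worth normalizing the quotes programmatically before running the search.  A minimal sketch:

```python
# Map of common Word "smart" punctuation to plain ASCII equivalents.
SMART_TO_PLAIN = {
    "\u201c": '"',   # left double quote
    "\u201d": '"',   # right double quote
    "\u2018": "'",   # left single quote
    "\u2019": "'",   # right single quote / apostrophe
}

def normalize_quotes(text):
    """Replace curly quotes with straight quotes before using text as search syntax."""
    return text.translate(str.maketrans(SMART_TO_PLAIN))

print(normalize_quotes("\u201csmart quotes\u201d aren\u2019t very smart"))
# "smart quotes" aren't very smart
```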

So, what do you think?  Have you run into these “gotchas” in your searches? Please share any comments you might have or if you’d like to know more about a particular topic.

LitigationWorld Pick of the Week: Could This Be the Most Expensive eDiscovery Mistake Ever?

 

We’re pleased to announce that our blog post “eDiscovery Best Practices: Could This Be the Most Expensive eDiscovery Mistake Ever?”, regarding Google’s inadvertent disclosure during its litigation with Oracle, was selected as the Pick of the Week from TechnoLawyer in the November 21, 2011 issue of LitigationWorld.  LitigationWorld is a free weekly email newsletter that provides helpful tips regarding electronic discovery, litigation strategy, and litigation technology.  It’s also a great source of ideas for blog posts!  😉

In each issue, the editorial team at LitigationWorld links to the most noteworthy articles on the litigation Web published during the previous week. From these articles, they then select one as their Pick of the Week.

Thanks to the folks at TechnoLawyer for this recognition.  We appreciate it!

eDiscovery Case Law: New York Supreme Court Requires Production of Software to Review Files

The petitioner in TJS of New York, Inc. v. New York State Dep’t of Taxation and Fin., 932 N.Y.S.2d 243 (N.Y. App. Div. Nov. 3, 2011) brought an Article 78 proceeding to compel the Department of Taxation and Finance to produce records responsive to its request under the Freedom of Information Law (FOIL) for records related to a sales tax audit.  Some of the records, however, could not be reviewed without a copy of the Department’s Audit Framework Extension software, which the Department refused to provide.  The petitioner then moved to compel production of the software program in order to install it on its computer and view the electronic files.  The trial court denied petitioner’s motion, concluding that the software program was exempt from disclosure, and also denied the petitioner’s subsequent motion to renew.

Under FOIL, the term “record” is broadly defined as “any information kept, held, filed, produced or reproduced by, with or for an agency …, in any physical form whatsoever, including, but not limited to, reports, statements, examinations, memoranda, opinions, folders, files, books, manuals, pamphlets, forms, papers, designs, drawings, maps, photos, letters, microfilms, computer tapes or discs, rules, regulations or codes”.  The petitioner argued that the software fell within that definition, “citing the Department’s own description of the software as well as advisory opinions in which the Committee on Open Government concludes that software can constitute a record under FOIL”.

The court agreed with that argument, noting:

  • “The description of the software submitted by the Department and the reasoning and analysis contained in the advisory opinions relied on by petitioner lead us to conclude that the software at issue contains information and, thus, constitutes a record for FOIL purposes.”
  • “Specifically, the affidavit submitted by the Department from an auditor involved in the design and development of the software program, as well as the attached training manual for the software, reveals that the software is the means for conducting an audit and that, based on data entered by an auditor, the program does reconciliations, creates letters, produces forms, determines taxes due or refunds owed and creates a comprehensive audit report.  The June 1998 advisory opinion cited by petitioner concludes that software that enables an agency to manipulate data is a record pursuant to FOIL in the same way that a written manual describing a series of procedures would be subject to disclosure under FOIL”.
  • “The 2001 advisory opinion references a definition of software as ‘a series of instructions designed to produce information that can be seen on a screen, printed, stored, transferred and transmitted’ and concludes that it is a record subject to FOIL”
  • “Given these opinions and the Department’s own description of the capabilities of the program, we conclude that it is more than just a delivery system or data warehouse and, instead, falls within FOIL’s broad definition of a record subject to disclosure”

So, what do you think?  Should producing parties be required to produce specialized software to review produced records? Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: When Preparing Production Sets, Quality is Job 1

 

OK, I admit I stole that line from an old Ford commercial 😉

Yesterday, we talked about addressing parameters of production up front to ensure that those requirements make sense and avoid foreseeable production problems well before the production step.  Today, we will talk about quality control (QC) mechanisms to make sure that the production is complete and accurate.

Quality Control Checks

There are a number of checks that can and should be performed on the production set, prior to producing it to the requesting party.  Here are some examples:

  • File Counts: The most obvious check you can perform is to ensure that the count of files matches the count of documents or pages you have identified to be produced.  However, depending on the production, there may be multiple file counts to check:
    • Image Files: If you have agreed with opposing counsel to produce images for all documents, then there will be a count of images to confirm.  If you’re producing multi-page image files (typically, PDF or TIFF), the count of images should match the count of documents being produced.  If you’re producing single-page image files (usually TIFF), then the count should match the number of pages being produced.
    • Text Files: When producing image files, you may also be producing searchable text files.  Again, the count should match either the documents (multi-page text files) or pages (single-page text files) with one possible exception.  If a document or page has no searchable text, are you still producing an empty file for those?  If not, you will need to be aware of how many of those instances there are and adjust the count accordingly to verify for QC purposes.
    • Native Files: Native files (if produced) are typically at the document level, so you would want to confirm that one exists for each document being produced.
    • Subset Counts: If the documents are being produced in a certain organized manner (e.g., a folder for each custodian), it’s a good idea to identify subset counts at those levels and verify those counts as well.  Not only does this provide an extra level of count verification, but it helps to find the problem more quickly if the overall count is off.
    • Verify Counts on Final Production Media: If you’re verifying counts of the production set before copying it to the media (which is common when burning files to CD or DVD), you will need to verify those counts again after copying to ensure that all files made it to the final media.
  • Sampling of Results: Unless the production is relatively small, it may be impractical to open every last file to be produced to confirm that it is correct.  If so, employ accepted statistical sampling procedures (such as those described here and here for searching) to identify an appropriate sample size and randomly select that sample to open and confirm that the correct files were selected, HASH values of produced native files match the original source versions of those files, images are clear and text files contain the correct text.
  • Redacted Files: If any redacted files are being produced, each of these (not just a sample subset) should be reviewed to confirm that redactions of privileged or confidential information made it to the produced file.  Many review platforms overlay redactions which have to be burned into the images at production time, so it’s easy for mistakes in the process to cause those redactions to be left out or burned in at the wrong location.
  • Inclusion of Logs: Depending on agreed upon parameters, the production may include log files such as:
    • Production Log: Listing of all files being produced, with an agreed upon list of metadata fields to identify those files.
    • Privilege Log: Listing of responsive files not being produced because of privilege (and possibly confidentiality as well).  This listing often identifies the privilege being asserted for each file in the privilege log.
    • Exception Log: Listing of files that could not be produced because of a problem with the file.  Examples of types of exception files are included here.
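Several of the count checks above lend themselves to a simple script.  Here is a sketch of automated count verification, assuming a hypothetical production layout of single-page TIFF images, page-level text files, and document-level native files (adjust the file patterns and expectations to your own production parameters):

```python
from pathlib import Path

def count_files(root, pattern):
    """Count files matching a glob pattern anywhere under the production root."""
    return sum(1 for _ in Path(root).rglob(pattern))

def verify_production(root, expected_pages, expected_docs):
    """Compare actual file counts against expected page and document counts.

    Returns a list of discrepancies; an empty list means the counts check out.
    Assumes single-page images and text files, and document-level natives.
    """
    problems = []
    images = count_files(root, "*.tif")
    texts = count_files(root, "*.txt")
    natives = count_files(root, "*.msg")   # adjust patterns to your file formats
    if images != expected_pages:
        problems.append(f"image count {images} != expected pages {expected_pages}")
    if texts != expected_pages:            # assumes no empty-text exceptions
        problems.append(f"text count {texts} != expected pages {expected_pages}")
    if natives != expected_docs:
        problems.append(f"native count {natives} != expected docs {expected_docs}")
    return problems
```

Run the same check once against the staging folder and again against the final production media to catch files lost during copying.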

Each production has different parameters, so the QC requirements will differ as well; these are examples, not necessarily a comprehensive list of all potential QC checks to perform.

So, what do you think?  Can you think of other appropriate QC checks to perform on production sets?  If so, please share them!  As well as any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Production is the “Ringo” of the eDiscovery Phases

 

Since eDiscovery Daily debuted over 14 months ago, we’ve covered a lot of case law decisions related to eDiscovery.  65 posts related to case law to date, in fact.  We’ve covered cases associated with sanctions related to failure to preserve data, issues associated with incomplete collections, inadequate searching methodologies, and inadvertent disclosures of privileged documents, among other things.  We’ve noted that 80% of the costs associated with eDiscovery are in the Review phase and that volume of data and sources from which to retrieve it (including social media and “cloud” repositories) are growing exponentially.  Most of the “press” associated with eDiscovery ranges from the “left side of the EDRM model” (i.e., Information Management, Identification, Preservation, Collection) through the stages to prepare materials for production (i.e., Processing, Review and Analysis).

All of those phases lead to one inevitable stage in eDiscovery: Production.  Yet, few people talk about the actual production step.  If Preservation, Collection and Review are the “John”, “Paul” and “George” of the eDiscovery process, Production is “Ringo”.

It’s the final crucial step in the process, and if it’s not handled correctly, all of the due diligence spent in the earlier phases could mean nothing.  So, it’s important to plan for production up front and to apply a number of quality control (QC) checks to the actual production set to ensure that the production process goes as smoothly as possible.

Planning for Production Up Front

When discussing the production requirements with opposing counsel, it’s important to ensure that those requirements make sense, not only from a legal standpoint, but from a technical standpoint as well.  Involve support and IT personnel in the process of deciding those parameters, as they will be the people who have to meet them.  Issues to be addressed include, but are not limited to:

  • Format of production (e.g., paper, images or native files);
  • Organization of files (e.g., organized by custodian, legal issue, etc.);
  • Numbering scheme (e.g., Bates labels for images, sequential file names for native files);
  • Handling of confidential and privileged documents, including log requirements and stamps to be applied;
  • Handling of redactions;
  • Format and content of production log;
  • Production media (e.g., CD, DVD, portable hard drive, FTP, etc.).

I was involved in a case recently where opposing counsel was requesting an unusual production format in which the names of the files would be the subject lines of the emails being produced (for example, “Re: Completed Contract, dated 12/01/2011”).  Two issues with that approach: 1) the proposed format only addressed emails, and 2) Windows file names don’t support certain characters, such as colons (:) or slashes (/).  I provided that feedback to the attorneys so that they could address it with opposing counsel and hopefully agree on a revised format that made more sense.  So, let the tech folks confirm the feasibility of the production parameters.
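To illustrate the file-name problem, here is a sketch of the kind of sanitization that would have been needed for that request (the naming scheme itself is the hypothetical one opposing counsel proposed, not a recommended practice):

```python
import re

# Characters Windows forbids in file names: \ / : * ? " < > |
_FORBIDDEN = r'[\\/:*?"<>|]'

def safe_filename(subject):
    """Turn an email subject line into a Windows-safe file name."""
    return re.sub(_FORBIDDEN, "_", subject).strip()

print(safe_filename("Re: Completed Contract, dated 12/01/2011"))
# Re_ Completed Contract, dated 12_01_2011
```

Note that the sanitized names no longer match the original subject lines exactly, which is itself an argument against the proposed format.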

The workflow throughout the eDiscovery process should also keep in mind the end goal of meeting the agreed upon production requirements.  For example, if you’re producing native files with metadata, you may need to take appropriate steps to keep the metadata intact during the collection and review process so that the metadata is not inadvertently changed. For some file types, metadata is changed merely by opening the file, so it may be necessary to collect the files in a forensically sound manner and conduct review using copies of the files to keep the originals intact.
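One common way to verify that originals stayed intact while review proceeded on copies is to compare hash values.  A minimal sketch (the file paths in the comment are hypothetical):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large native files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Review should proceed on copies; matching hashes confirm the collected
# originals were never modified.  Example (hypothetical paths):
# assert sha256_of("originals/contract.docx") == sha256_of("review_copies/contract.docx")
```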

Tomorrow, we will talk about preparing the production set and performing QC checks to ensure that the ESI being produced to the requesting party is complete and accurate.

So, what do you think?  Have you had issues with production planning in your cases?  Please share any comments you might have or if you’d like to know more about a particular topic.