Electronic Discovery

Word’s Stupid “Smart Quotes” – Best of eDiscovery Best Practices

Even those of us at eDiscoveryDaily have to take an occasional vacation day; however, instead of “going dark” for today, we thought we would republish a post from the early days of the blog (when we didn’t have many readers yet).  So, chances are, you haven’t seen this post yet!  Enjoy!

I have run into this issue more times than I can count.

A client sends me, in a Microsoft® Word document, a list of search terms that they want to use to cull a set of data for review.  I copy the terms into the search tool and then all hell breaks loose!  Either:

The search indicates there is a syntax error

OR

The search returns some obviously odd results

And, then, I remember…

It’s those stupid Word “smart quotes”.  Starting with Office 2003, Microsoft Word, by default, automatically changes straight quotation marks ( ' or " ) to curly quotes as you type. This is fine for displaying a document in Word, but when you copy that text to a format that doesn’t support smart quotes (such as HTML or a plain text editor), the quotes can show up as garbage characters because they are not standard ASCII characters.  So:

“smart quotes”

will look like this…

â€œsmart quotesâ€

As you can imagine, that doesn’t look so “smart” when you feed it into a search tool and you get odd results (if the search even runs).  So, you’ll need to address those to make sure that the quotes are handled correctly when searching for phrases with your search tool.

To disable the automatic changing of straight quotes to Microsoft Word smart quotes: Click the Microsoft Office icon button at the top left of Word, and then click the Word Options button to open options for Word.  Click Proofing along the side of the pop-up window, then click AutoCorrect Options.  Click the AutoFormat tab and uncheck the Replace “Straight quotes” with “smart quotes” check box.  Then, click OK.

Often, however, the file you’ve received already has smart quotes in it.  If you’re going to use the terms in that file, you’ll need to copy them to a text editor first (e.g., Notepad, or WordPad in plain text document mode).  Highlight the beginning quote and copy it to the clipboard (Ctrl+C), then press Ctrl+H to open the Find and Replace dialog, put your cursor in the Find box and press Ctrl+V to paste it in.  Type the straight quote character into the Replace box, then press Replace All to replace all beginning smart quotes with straight ones.  Repeat the process for the ending smart quotes.  You’ll also have to do this for any single quotes, double hyphens or fraction characters (e.g., Word converts “1/2” to “½”) that impact your terms.
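If you’d rather script the cleanup than run repeated Find and Replace passes, here’s a minimal Python sketch that normalizes the usual offenders in one pass.  The character mappings are standard Unicode; the file names are hypothetical:

```python
# Minimal sketch: normalize Word "smart" punctuation in a search term list.
# The character mappings are standard Unicode; the file names are hypothetical.

REPLACEMENTS = {
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u2013": "-",   # en dash (Word's conversion of hyphens)
    "\u2014": "--",  # em dash (Word's conversion of double hyphens)
    "\u00bd": "1/2", # vulgar fraction one half
}

def normalize_terms(text: str) -> str:
    """Replace smart punctuation with plain ASCII equivalents."""
    for smart, plain in REPLACEMENTS.items():
        text = text.replace(smart, plain)
    return text

with open("search_terms.txt", encoding="utf-8") as f:
    cleaned = normalize_terms(f.read())

with open("search_terms_clean.txt", "w", encoding="utf-8") as f:
    f.write(cleaned)
```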

So, what do you think?  Have you ever run into issues with Word smart quotes or other auto formatting options?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Judge Rules Against Spoliation Sanctions when the Evidence Doesn’t Support the Case – eDiscovery Case Law

In Cottle-Banks v. Cox Commc’ns, Inc., No. 10cv2133-GPC (WVG) (S.D. Cal. May 21, 2013), California District Judge Gonzalo P. Curiel denied the plaintiff’s motion for spoliation sanctions because the plaintiff was unable to show that the deleted recordings of customer calls would likely have been relevant and supportive of her claim.

The defendant provides services and products, such as set-top cable boxes, which customers call in to order.  The plaintiff alleged a practice of charging customers for boxes without disclosing, and obtaining approval for, the equipment charges – a violation of the Communications Act of 1934, 47 U.S.C. § 543(f).  The plaintiff’s discovery requests included copies of recordings of her own calls with the defendant, but the defendant did not begin preserving tapes until the plaintiff notified it that she would seek call recordings in discovery.  As a result, the plaintiff filed a motion for spoliation sanctions, requesting an adverse inference and requesting that the defendant be precluded from introducing evidence that its call recordings complied with 47 U.S.C. § 543(f).

From the call recordings still available, a sample of recordings was provided to the plaintiff – in those calls, it was evident that the defendant did, in fact, get affirmative acceptance of the additional charges as a matter of practice.

Judge Curiel ruled that the defendant “had an obligation to preserve the call recordings when the complaint was filed in September 2010” and that the defendant “had an obligation to preserve the call recording, [so] Defendant was negligent in failing to preserve the back up tapes. Thus, Defendant had a culpable state of mind.”  However, because the “Plaintiff cited only two call recordings out of 280 call recordings produced to support her position”, the judge concluded “that the deleted call recordings would not have been supportive of Plaintiff’s claim.”  Because “Plaintiff has not demonstrated all three factors to support an adverse inference sanction”, Judge Curiel denied the plaintiff’s motion as to adverse inference and preclusion.

So, what do you think?  Should the sanction request have been denied?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

EDRM Publishes New Metrics Model – eDiscovery Trends

When I attended the Annual Meeting for the Electronic Discovery Reference Model (EDRM) last month, one of the projects that was close to a major deliverable was the Metrics project – a project that I worked on during my first two years as a participant in EDRM.  Now, EDRM has announced and published that deliverable: a brand new Metrics model.

As their press release notes, the “EDRM Metrics Model provides a framework for planning, preparation, execution and follow-up of e-discovery matters and projects by showing the relationship between the e-discovery process and how information, activities and outcomes may be measured.”  It consists of two inter-dependent elements: (a) The Center, which includes the key metrics variables of Volume, Time and Cost, and (b) The outside nodes, which identify work components that affect the outcome associated with the elements at the Center.  There is no indicated starting node on the Metrics Wheel, because any of the seven nodes could be a starting point or factor in an eDiscovery project.

Information at the Center

The model depicts Volume, Time, and Cost at its center, and all of the outside nodes impact each of these three major variables.  Volume, Time, and Cost are interrelated variables that fluctuate with each project.

Outside Nodes

Here is a brief description of each of the seven nodes (a simple illustrative sketch of the model follows the list):

Activities: Things that are happening or being done by either people or technology; examples can include: collecting documents, designing a search, interviewing a custodian, etc.

Custodians: A person having administrative control of a document, electronic file or system; for example, the custodian of an email is the owner of the mailbox that contains the message.

Systems: The places, technologies, tools and locations in which electronic information is created, stored or managed; examples of systems include shared drives, email, computer applications, databases, cloud sources and archival sources such as back-up tapes.

Media: The storage devices for electronic information; examples include: CDs, DVDs, floppy disks, hard drives, tapes and paper.

Status: A unique point in time in a project or process that relates to the performance or completion of the project or process; measured qualitatively in reference to a desired outcome.

Formats: The way information is arranged or set out; for example, a file’s format determines which applications are required to view, process, and store it.

Quality Assurance (“QA”): Ongoing methods to ensure that reasonable results are being achieved; an example of QA would be ensuring that no privileged documents are released in a production by performing an operation such as checking for privilege tags within the production set.
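For those who like to see a model in code, here is a hypothetical sketch of the wheel as a simple data structure (purely illustrative, not an official EDRM schema):

```python
# Hypothetical sketch of the Metrics Model as a simple data structure; purely
# illustrative of the relationships described above, not an official EDRM schema.

CENTER = ("Volume", "Time", "Cost")  # the interrelated variables at the center

NODES = (
    "Activities", "Custodians", "Systems", "Media",
    "Status", "Formats", "Quality Assurance",
)

# There is no fixed starting node: any node can begin a project, and every
# node impacts all three center variables.
impacts = {node: list(CENTER) for node in NODES}

for node, variables in impacts.items():
    print(f"{node} affects {', '.join(variables)}")
```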

A complete explanation of the model, including graphics, descriptions, glossary and downloadable content is available here.  Kudos to the team, led by Kevin Clark and Dera Nevin (TD Bank Group)!

So, what do you think?  Do you think the model will be useful to help your team better understand the activities and how they impact volume, time and cost for the project?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Spoliation of Data Can Get You Sent Up the River – eDiscovery Case Law

Sometimes, eDiscovery can literally be a fishing expedition.

I got a kick out of Ralph Losey’s article on E-Discovery Law Today (Fishing Expedition Discovers Laptop Cast into Indian River), where the defendant employee in a RICO case, Simon Property Group, Inc. v. Lauria, 2012 U.S. Dist. LEXIS 184638 (M.D. Fla. 2012), threw her laptop into a river.  Needless to say, given the intentional spoliation of evidence, the court struck all of the defenses raised by the defendant and scheduled the case for trial on the issue of damages.  Magistrate Judge Karla Spaulding summarized the defendant’s actions in the ruling:

“This case has all the elements of a made-for-TV movie: A company vice president surreptitiously awards lucrative business deals to a series of entities that she and her immediate family members control. To cover up the egregious self-dealing, she fabricates multiple fictitious personas and then uses those fictitious personas to “communicate” with her employer on behalf of the entities she controls. She also cut-and-pastes her supervisor’s signature onto service agreements in an attempt to make it seem as if her activities have been approved. After several years, a whistleblower exposes the scheme to the company. The company then tells the vice president that she is being investigated and warns her not to destroy any documents or evidence. Sensing that her scheme is about to collapse around her and wanting to cover her tracks, the vice president then travels to the East Coast of Florida and throws her laptop computer containing information about these activities into a river.”

At least she didn’t deny it when deposed as noted in the ruling:

“When asked why she threw the laptop away, Lauria testified as follows:

Q: Okay. Why did you throw the laptop away?

A: Because I knew that something was coming down and I just didn’t want all the stuff around.

Q: So you were trying to get rid of documentation and e-mails and things?

A: Uh-huh, yes.

Q: That directly related to the lawsuit?

A: Yes. Now, they do, yes.”

Maybe she should have used the George Costanza excuse and stated that she didn’t know it was “frowned upon”.

So, what do you think?  Was that wrong?  Just kidding.  Please share any comments you might have or if you’d like to know more about a particular topic.

BTW, Ralph is no stranger to this blog – in addition to several of his articles we’ve referenced, we’ve also conducted thought leader interviews with him at LegalTech New York the past two years.  Here’s a link if you want to check those out.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Self-Collecting? Don’t Forget to Check for Image Only Files – eDiscovery Best Practices

Yesterday, we talked about the importance of tracking chain of custody in order to be able to fight challenges to electronically stored information (ESI) by opposing parties.  Today, let’s talk about a common mistake that organizations make when collecting their own files to turn over for discovery purposes.

I’ve worked with a number of attorneys who have turned over the collection of potentially responsive files to the individual custodians of those files, or to someone in the organization responsible for collecting those files (typically, an IT person).  Self-collection by custodians, unless managed closely, can be a wildly inconsistent process (at best).  In some cases, those attorneys have instructed those individuals to perform various searches to turn “self-collection” into “self-culling”.  Self-culling can cause at least two issues:

  1. You have to go back to the custodians and repeat the process if additional search terms are identified.
  2. Potentially responsive image-only files will be missed with self-culling.

Unless search terms are agreed to by the parties up front, it’s not unusual to identify additional searches to be performed – and even with up front agreement, terms can often be renegotiated during the case.  It’s also common to have a number of image-only files within any collection, especially if the custodians frequently scan executed documents or use fax software to receive documents from other parties.  In those cases, image-only PDF or TIFF files can often make up as much as 20% of the collection.  When custodians are asked to perform “self-culling” by running their own searches of their data, these files will typically be missed.

For these reasons, I usually advise against self-culling by custodians and also don’t recommend that IT perform self-culling, unless they have the ability to process that data to identify image-only files and perform Optical Character Recognition (OCR) to capture text from them.  If your IT department has the capabilities and experience to do so (and the process and chain of custody is well documented), then that’s great.  Many internal IT departments either don’t have the capabilities or expertise, in which case it’s best to collect all potentially responsive files from the custodians and turn them over to a qualified eDiscovery provider to perform the culling (performing OCR as needed to include responsive image-only files in the resulting responsive document set).  With the full data set available, there is also no need to go back to the custodians to collect additional data (unless the case requires supplemental productions).
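If you want a quick sense of how big the image-only problem might be in a given collection, a minimal sketch like this one can inventory the formats that keyword self-culling would miss.  It assumes a local folder of already-collected files; the folder path and extension lists are illustrative only:

```python
# Minimal sketch, assuming a local folder of already-collected files: inventory
# the formats that keyword self-culling would miss. The folder path and
# extension lists are illustrative only.
import os
from collections import Counter

IMAGE_ONLY_SUSPECTS = {".tif", ".tiff", ".jpg", ".jpeg", ".png"}
NEEDS_TEXT_CHECK = {".pdf"}  # PDFs may be searchable or image-only

def inventory(root: str) -> Counter:
    """Count files by extension under the given folder."""
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            counts[os.path.splitext(name)[1].lower()] += 1
    return counts

counts = inventory("collected_files")
total = sum(counts.values())
flagged = sum(n for ext, n in counts.items()
              if ext in IMAGE_ONLY_SUSPECTS | NEEDS_TEXT_CHECK)
if total:
    print(f"{flagged} of {total} files ({flagged / total:.0%}) need OCR or a "
          f"text check before keyword culling can be trusted.")
```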

So, what do you think?  Do you self-collect data for discovery purposes?  If so, how do you account for image-only files?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Chain, Chain, Chain: Chain of Custody – eDiscovery Best Practices

If you’re a baseball fan, you probably remember Ryan Braun and the reported failed test for performance enhancing drugs, which he successfully challenged by attacking the chain of custody associated with his blood sample.  When it comes to electronically stored information (ESI), proper chain of custody tracking is also an important part of handling that ESI through the eDiscovery process, enabling you to fight challenges to the ESI by opposing parties.  An insufficient chain of custody is a chain, chain, chain of fools.

Information to Track for Chain of Custody

ESI can be provided by a variety of sources and in a variety of media, so you need a standardized way of recording chain of custody for the ESI that you collect within your organization or from your clients.  At CloudNine Discovery, we use a standard form for capturing chain of custody information.  Because we never know when a client will call and ask us to pick up data, our client services personnel typically have a supply of blank forms either in their briefcase or in their car (maybe even both).

Our chain of custody tracking form includes the following (an illustrative version of the record follows the list):

  • Date and Time: The date and time that the media containing ESI was provided to us.
  • Pick Up or Delivery Location: Information about the location where the ESI was provided to us, including the company name, address, physical location within the facility (e.g., a specific employee’s office) and any additional information important to note where the data was received.
  • Delivering Party: Name of the company and the name of the representative providing the media, with a place for that representative to sign for tracking purposes.
  • Delivery Detail (Description of Items): A detailed description of the item(s) being received.  Portable hard drives are one typical example of the media used to provide ESI to us, so we like to describe the brand and type of hard drive (e.g., Western Digital My Passport drive) and the serial number, if available.  Record whatever information is necessary to uniquely identify the item(s).
  • Receiving Party: Name of the company and the name of the representative receiving the media, with a place for that representative to sign for tracking purposes.  In our form, that’s usually somebody from CloudNine Discovery, but it can be a third party if they are receiving the data from the original source – in that case, another chain of custody form gets completed when they deliver it to us.
  • Comments: Any general comments about the transfer of media not already addressed above.
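For illustration, here’s a hypothetical sketch of such a record as a simple data structure.  The field names mirror the form described above, but this is not CloudNine’s actual form:

```python
# Hypothetical sketch: a chain of custody record mirroring the form fields
# described above. Field names are illustrative, not CloudNine's actual form.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChainOfCustodyRecord:
    received_at: datetime        # date and time the media was provided
    location: str                # company, address and physical location
    delivering_party: str        # company and representative providing the media
    delivering_signature: str
    item_description: str        # brand, type, serial number of the media
    receiving_party: str         # company and representative receiving the media
    receiving_signature: str
    comments: str = ""

record = ChainOfCustodyRecord(
    received_at=datetime(2013, 6, 1, 14, 30),
    location="Acme Corp., 123 Main St., office of J. Smith (hypothetical)",
    delivering_party="Acme Corp. / J. Smith",
    delivering_signature="J. Smith",
    item_description="Western Digital My Passport drive, S/N WX1234 (hypothetical)",
    receiving_party="CloudNine Discovery / client services",
    receiving_signature="C. Jones",
)
```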

I’ve been involved in several cases where the opposing party, to try to discredit damaging data against them, has attacked the chain of custody of that data to raise the possibility that the data was spoliated during the process and mitigate its effect on the case.  In these types of cases, you should be prepared to have an expert ready to testify about the chain of custody process to counteract those attacks.  Otherwise, you might be singing like Aretha Franklin.

So, what do you think?  How does your organization track chain of custody of its data during discovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Another Case where Reimbursement of eDiscovery Costs is Denied – eDiscovery Case Law

When it comes to reimbursement of eDiscovery costs, sometimes courts feel like a nut and sometimes they don’t.  In other words, there appears to be no consistency.

In The Country Vintner of North Carolina, LLC v. E. & J. Gallo Winery, Inc., No. 12-2074, 2013 U.S. App. (4th Cir. Apr. 29, 2013), when deciding which costs are taxable, the Fourth Circuit chose to follow the Third Circuit’s reasoning in Race Tires America, Inc. v. Hoosier Racing Tire Corp., 674 F.3d 158 (3d Cir. 2012), which read 28 U.S.C. § 1920(4) narrowly. Specifically, the court approved taxation of file conversion and transferring files onto CDs as “[f]ees for exemplification and the costs of making copies of any materials where the copies are necessarily obtained for use in the case” but not of other tasks related to electronically stored information (ESI).

In this case, the defendant balked at the plaintiff’s discovery requests, arguing that the requests created an undue burden because the documents sought were inaccessible. The plaintiff filed a motion to compel, which the district court granted. The defendant then collected more than 62 gigabytes of data for review.

After prevailing on a motion to dismiss several claims and having the rest dismissed at summary judgment, the defendant filed a bill of costs seeking $111,047.75 for e-discovery-related charges, including the following:

  • $71,910 for ‘flattening’ and ‘indexing’ ESI;
  • $15,660 for ‘Searching/Review Set/Data Extraction’;
  • $178.59 for ‘TIFF Production’ and ‘PDF Production’;
  • $74.16 for electronic ‘Bates Numbering’;
  • $40 for copying images onto a CD or DVD; and
  • $23,185 for ‘management of the processing of the electronic data,’ ‘quality assurance procedures,’ ‘analyzing corrupt documents and other errors,’ and ‘preparing the production of documents to opposing counsel.’

Following the Third Circuit’s reasoning in Race Tires America, the district court found that the defendant was entitled to costs for tasks that were the equivalent of copying or duplicating files, but not for “any other ESI-related expenses.” Here, the only reimbursable tasks were converting files to TIFF and PDF format and transferring files to CDs. Therefore, the court taxed $218.59.
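The district court’s math is easy to reproduce: only the conversion and disc-copying line items survived.  A toy sketch, with the amounts taken from the bill of costs above:

```python
# Toy sketch of the court's math: under § 1920(4), only file conversion and
# copying to disc were taxable. Amounts are from the bill of costs above.
charges = {
    "flattening and indexing ESI": 71910.00,
    "searching / review set / data extraction": 15660.00,
    "TIFF and PDF production": 178.59,
    "electronic Bates numbering": 74.16,
    "copying images onto CD or DVD": 40.00,
    "processing management, QA, error analysis, production prep": 23185.00,
}
taxable = ("TIFF and PDF production", "copying images onto CD or DVD")
print(f"Taxed: ${sum(charges[item] for item in taxable):,.2f}")  # Taxed: $218.59
```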

On appeal, the defendant claimed its ESI-related charges were taxable because “ESI has ‘unique features’: ESI is ‘more easily and thoroughly changeable than paper documents,’ it contains metadata, and it often has searchable text.” The defendant argued converting native files to PDF and TIFF formats resulted in “‘static, two-dimensional images that, by themselves, [we]re incomplete copies of dynamic, multi-dimensional ESI,’” such that other processing was required to copy “‘all integral features of the ESI.’”

The Fourth Circuit rejected the defendant’s argument.  The court noted that “the presumption is that the responding party must bear the expense of complying with discovery requests.” Moreover, the U.S. Supreme Court has opined that “‘costs almost always amount to less than the successful litigant’s total expenses’” and that Section 1920 is “‘limited to relatively minor, incidental expenses.’” Finally, the appellate court also relied on Race Tires in finding that Section 1920(4) limited the taxable costs to file conversion and burning files onto discs. The ESI-related charges were not taxable as “fees for exemplification” under the statute because they did not involve the authentication of public records, exhibits, or demonstrative aids.

Although the court’s reasoning meant the defendant would be reimbursed for only a fraction of its costs, it did not mean its interpretation of the statute was “too grudging in an age of unforeseen innovations in litigation-support technology.” Rather, the court suggested that where parties believe costs are excessive, they can file a motion seeking a protective order.

So, what do you think?  Should the costs have been awarded?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

“Not Me”, The Fallibility of Human Review – eDiscovery Best Practices

When I talk with attorneys about using technology to assist with review (whether via techniques such as predictive coding or merely advanced searching and culling mechanisms), most of them still seem to question whether these techniques can measure up to good, old-fashioned human attorney review.  Despite several studies that question the accuracy of human review, many attorneys still feel that their review capability is as good as or better than technical approaches.  Here is perhaps the best explanation I’ve seen yet of why that may not be the case.

In Craig Ball’s latest blog post on his Ball in Your Court blog (The ‘Not Me’ Factor), Craig provides a terrific explanation as to why predictive coding is “every bit as good (and actually much, much better) at dealing with the overwhelming majority of documents that don’t require careful judgment—the very ones where keyword search and human reviewers fail miserably.”

“It turns out that well-designed and -trained software also has little difficulty distinguishing the obviously relevant from the obviously irrelevant.  And, again, there are many, many more of these clear cut cases in a collection than ones requiring judgment calls.

So, for the vast majority of documents in a collection, the machines are every bit as capable as human reviewers.  A tie.  But giving the extra point to humans as better at the judgment call documents, HUMANS WIN!  Yeah!  GO HUMANS!   Except….

Except, the machines work much faster and much cheaper than humans, and it turns out that there really is something humans do much, much better than machines:  they screw up.

The biggest problem with human reviewers isn’t that they can’t tell the difference between relevant and irrelevant documents; it’s that they often don’t.  Human reviewers make inexplicable choices and transient, unwarranted assumptions.  Their minds wander.  Brains go on autopilot.  They lose their place.  They check the wrong box.  There are many ways for human reviewers to err and just one way to perform correctly.

The incidence of error and inconsistent assessments among human reviewers is mind boggling.  It’s unbelievable.  And therein lays the problem: it’s unbelievable.    People I talk to about reviewer error might accept that some nameless, faceless contract reviewer blows the call with regularity, but they can’t accept that potential in themselves.  ‘Not me,’ they think, ‘If I were doing the review, I’d be as good as or better than the machines.’  It’s the ‘Not Me’ Factor.”

While Craig acknowledges that “there is some cause to believe that the best trained reviewers on the best managed review teams get very close to the performance of technology-assisted review”, he notes that they “can only achieve the same result by reviewing all of the documents in the collection, instead of the 2%-5% of the collection needed to be reviewed using predictive coding”.  He asks “[i]f human review isn’t better (and it appears to generally be far worse) and predictive coding costs much less and takes less time, where’s the rational argument for human review?”

Good question.  Having worked with some large review teams with experienced and proficient document reviewers at an eDiscovery provider that employed a follow-up QC check of reviewed documents, I can still recall how often those well-trained reviewers were surprised at some of the classification mistakes they made.  And, I worked on one project with over a hundred reviewers working several months, so you can imagine how expensive that was.
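To put rough numbers on why the cost difference is so stark, here’s a back-of-the-envelope sketch with purely hypothetical figures (a one-million-document collection, reviewers averaging 50 documents per hour at $60 per hour), comparing full linear review against reviewing the roughly 5% that Craig cites for predictive coding:

```python
# Back-of-the-envelope sketch with hypothetical numbers: compare the cost of
# full human review against reviewing only ~5% of the collection.
docs = 1_000_000            # hypothetical collection size
docs_per_hour = 50          # hypothetical reviewer throughput
cost_per_hour = 60.0        # hypothetical reviewer billing rate

def review_cost(fraction: float) -> float:
    """Cost for humans to review the given fraction of the collection."""
    return (docs * fraction / docs_per_hour) * cost_per_hour

print(f"Full linear review:     ${review_cost(1.00):>12,.0f}")  # $1,200,000
print(f"Predictive coding (5%): ${review_cost(0.05):>12,.0f}")  # $60,000
```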

BTW, Craig is no stranger to this blog – in addition to several of his articles we’ve referenced, we’ve also conducted thought leader interviews with him at LegalTech New York the past three years.  Here’s a link if you want to check those out.

So, what do you think?  Do you think human review is better than technology assisted review?  If so, why?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Adverse Inference Sanction for Defendant who Failed to Stop Automatic Deletion – eDiscovery Case Law

Remember the adverse inference instructions in the Zubulake v. UBS Warburg and Apple v. Samsung cases?  This case has characteristics of both of those.

In Pillay v. Millard Refrigerated Servs., Inc., No. 09 C 5725 (N.D. Ill. May 22, 2013), Illinois District Judge Joan H. Lefkow granted the plaintiff’s motion for an adverse inference jury instruction due to the defendant’s failure to stop automatic deletion of employee productivity tracking data used as a reason for terminating a disabled employee.

Case Background

The plaintiff alleged that the defendant was liable for retaliation under the Americans with Disabilities Act (“ADA”) for terminating his employment after he opposed the defendant’s decision to terminate another employee because of a perceived disability.  The defendant used a labor management system (“LMS”) to track its warehouse employees’ productivity and performance.  Shortly after hiring that other employee and telling him that his LMS numbers were great, the defendant fired him when it determined that a prior work injury had left him with a disability rating of 17.5 percent from the Illinois Industrial Commission, a finding that prompted the senior vice president to send an email to the general manager stating “We have this all documented right? … Let’s get him out asap.”  That employee (and the plaintiff, for objecting to the termination) was terminated in August 2008, and the defendant contended that the termination resulted from the employee’s unacceptable LMS performance rating of 59 percent.

Deletion of LMS Data

In August 2009, the raw data used to create the employee’s LMS numbers were deleted because the LMS software automatically deleted the underlying data after a year. Before the information was deleted, the plaintiff and the other terminated employee provided several notices of the duty to preserve this information, including:

  • A demand letter from the plaintiff in September 2008;
  • Preservation notices from the plaintiff and the other terminated employee in December 2008 reminding the defendant of its obligations to preserve evidence; and
  • Charges filed by both terminated employees with the Equal Employment Opportunity Commission (“EEOC”) in January 2009.

Also, the defendant’s 30(b)(6) witness testified that supervisors could lower an LMS performance rating by deleting the underlying data showing that an employee worked a certain number of jobs for a given period of time, which the plaintiff contended happened in this case.  As a result, the plaintiff filed a motion for the adverse inference jury instruction.

Judge’s Ruling

Noting that the defendant “relied on this information when responding to the EEOC charges, which occurred before the deletion of the underlying LMS data” and that “[i]nformation regarding the underlying LMS data would have been discoverable to challenge Millard’s explanation for Ramirez’s termination”, Judge Lefkow found that the defendant had a duty to preserve the LMS data (“A party must preserve evidence that it has notice is reasonably likely to be the subject of a discovery request, even before a request is actually received.”).

With regard to the defendant’s culpability in deleting the data, Judge Lefkow stated “[t]hat Millard knew about the pending lawsuit and that the underlying LMS data would be deleted but failed to preserve the information was objectively unreasonable. Accordingly, even without a finding of bad faith, the court may craft a proper sanction based on Millard’s failure to preserve the underlying LMS data.”

So, Judge Lefkow granted the plaintiff’s request for an adverse inference sanction with the following instruction to be given to the jury:

“Pillay contends that Millard at one time possessed data documenting Ramirez’s productivity and performance that was destroyed by Millard. Millard contends that the loss of the data was accidental. You may assume that such evidence would have been unfavorable to Millard only if you find by a preponderance of the evidence that (1) Millard intentionally or recklessly caused the evidence to be destroyed; and (2) Millard caused the evidence to be destroyed in bad faith.”

So, what do you think?  Should the adverse inference sanction have been awarded?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Capturing Memory and Obtaining Protected Files with FTK Imager – eDiscovery Best Practices

Over the past few weeks, we have talked about the benefits and capabilities of Forensic Toolkit (FTK) Imager from AccessData (and obtaining your own free copy), how to create a disk image, how to add evidence items for the purpose of reviewing the contents of those evidence items (such as physical drives or images that you’ve created) and how to export files and create a custom content image of a targeted collection of files with FTK Imager.  This week, let’s discuss how to Capture Memory and Obtain Protected Files to collect a user’s account information and possible passwords to other files.

Capture Memory

If you’re trying to access the contents of memory on a system that’s currently running, you can use a runtime version of FTK Imager from a flash drive to capture that memory.  From the File menu, select Capture Memory to capture the data stored in memory within the system.

Capturing memory can be useful for a number of reasons.  For example, if TrueCrypt is running to encrypt the contents of the drive, the password could be stored in memory – if it is, Capture Memory enables you to capture the contents of memory (including the password) before it is lost.

Simply specify the destination path and filename, and memory will be captured to the specified file.  You can also include the contents of pagefile.sys, a Windows system file that acts as a swap file for memory and, hence, can contain useful memory information as well.  The Create AD1 file option saves the memory contents as an AD1 image, which you can then add as an evidence item to review its contents.
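Whatever tool you use to capture memory, it’s good forensic practice to hash the captured file so it can be verified later for chain of custody purposes.  Here’s a minimal sketch using Python’s standard library (the file name is hypothetical):

```python
# Minimal sketch: compute MD5 and SHA-1 digests of a captured memory file so
# the capture can be verified later. The file name is hypothetical.
import hashlib

def file_digests(path: str, chunk_size: int = 1 << 20):
    """Hash the file in chunks so large captures aren't loaded into RAM at once."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

md5_hex, sha1_hex = file_digests("memdump.mem")
print(f"MD5:  {md5_hex}\nSHA1: {sha1_hex}")
```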

Obtain Protected Files

Because Windows does not allow you to copy or save live Registry files, you would have to image the hard drive and then extract the Registry files, or boot the computer from a boot disk and copy the Registry files from the inactive operating system on the drive. From the File menu, you can select Obtain Protected Files to circumvent the Windows operating system and its file locks, thus allowing you to copy the live Registry files.  If the user allows Windows to remember his or her passwords, that information can be stored within the registry files.

Specify the destination path for the obtained files, then select the option for which files you would like to obtain.  The Minimum files for login recovery option retrieves Users, System, and SAM files from which you can recover a user’s account information.  The Password recovery and all Registry files option is more comprehensive, retrieving Users, System, SAM, NTUSER.DAT, Default, Security, Software, and Userdiff files from which you can recover account information and possible passwords to other files, so it’s the one we tend to use.

For more information, go to the Help menu to access the User Guide in PDF format.

So, what do you think?  Have you used FTK Imager as a mechanism for eDiscovery collection?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.