Just a Reminder to Think Before You Hit Send – eDiscovery Best Practices

With Anthony Weiner’s announcement that he is attempting a political comeback by running for mayor of New York City, it’s worth remembering the “Twittergate” story that ultimately cost him his congressional seat in the first place – not to bash him, but to remind all of us how important it is to think before you hit send (even if he did start his campaign by using a picture of Pittsburgh’s skyline instead of NYC’s – oops!).  Here is another reminder of that fact.

Chili’s Waitress Fired Over Facebook Post Insulting ‘Stupid Cops’

As noted on jobs.aol.com, a waitress at an Oklahoma City Chili’s posted a photo of three Oklahoma County Sheriff’s deputies on her Facebook page along with the comment: “Stupid Cops better hope I’m not their server FDP.” (A handy abbreviation for F*** Da Police.)

The woman, Ashley Warden, might have had reason to hold a grudge against her local police force. Last year she made national news when her potty-training toddler pulled down his pants in his grandmother’s front yard, and a passing officer handed Warden a public urination ticket for $2,500. (The police chief later apologized and dropped the charges, while the ticketing officer was fired.)

Nonetheless, Warden’s Facebook post quickly went viral on law enforcement sites and Chili’s was barraged with calls demanding that she be fired. Chili’s agreed. “With the changing world of digital and social media, Chili’s has Social Media Guidelines in place, asking our team members to always be respectful of our guests and to use proper judgement when discussing actions in the work place …,” the restaurant chain said in a statement. “After looking into the matter, we have taken action to prevent this from happening again.”

Best Practices and Social Media Guidelines

Another post on jobs.aol.com discusses some additional examples of people losing their jobs over Facebook posts, along with six tips for making posts that should keep you from getting fired by ensuring they would be protected by the National Labor Relations Board (NLRB), the federal agency tasked with protecting employees’ rights to association and union representation.

Perhaps so, though, as the article notes, the NLRB “has struggled to define how these rights apply to the virtual realm”.  It’s worth noting that, in its statement, Chili’s cited violation of its social media guidelines as a reason for the termination.  As we discussed on this blog some time ago, it’s a good idea to have a social governance policy in place governing the use of outside email, chat and social media that covers what employees should and should not do (and that post identified several factors such a policy should address).

Thinking before you hit send in these days of pervasive social media means, among other things, being familiar with your organization’s social media policies and complying with them – especially important if you’re going to post anything related to your job.  It also means educating yourself about what you should and should not do when posting to social media sites.

Of course it’s also important to remember that social media factors into discovery more than ever these days, as these four cases (just from the first few months of this year) illustrate.

So, what do you think?  Does your organization have social media guidelines?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Daily will return after the Memorial Day Holiday.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Google Compelled to Produce Search Terms in Apple v. Samsung – eDiscovery Case Law

In Apple v. Samsung, Case No. 12-cv-00630 (N.D. Cal. May 9, 2013), California Magistrate Judge Paul S. Grewal granted Apple’s motion to compel third party Google to produce the search terms and custodians used to respond to discovery requests and ordered the parties to “meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

In August of last year, a jury of nine found that Samsung infringed all but one of the seven patents at issue and found all seven of Apple’s patents valid – despite Samsung’s attempts to have them thrown out.  The jury also determined that Apple didn’t violate any of the five patents Samsung asserted in the case.  Apple had requested $2.5 billion in damages.  Apple later requested additional damages of $707 million to be added to the $1.05 billion jury verdict, which was subsequently reduced to nearly $599 million, with a new trial ordered on damages for 14 products.  This case was notable from an eDiscovery perspective due to the adverse inference instruction issued by Judge Grewal against Samsung just prior to the start of trial for spoliation of data, though it appears that the adverse inference instruction did not have a significant impact on the verdict.

Google’s Involvement

As part of the case, Apple subpoenaed Google to request discovery, though the parties did not discuss search terms or custodians during the meet and confer.  After Google responded to the discovery requests, Apple asked for the search terms and custodians Google used in responding, but Google refused, arguing that its search terms and choice of custodians were “privileged under the work-product immunity doctrine.”  Instead, Google asked Apple to suggest search terms and custodians, but Apple refused and filed a motion to compel Google to provide the search terms and custodians it used to respond to the discovery requests.

Judge Grewal noted that Google’s arguments opposing Apple’s request “have shifted”, but that “[a]t the heart of its opposition, however, is Google’s belief that its status as a third party to this litigation exempts it from obligations parties may incur to show the sufficiency of their production, at least absent a showing by Apple that its production is deficient.”  Google complained that “the impact of requiring non-parties to provide complete ‘transparency’ into their search methodology and custodians in responding to non-party subpoenas whenever unsubstantiated claims of production deficiencies are made would be extraordinary.”

Judge’s Ruling

Referencing DeGeer v. Gillis, Judge Grewal noted that, in that case, it was ruled that the third party’s “failure to promptly disclose the list of employees or former employees whose emails it proposed to search and the specific search terms it proposed to be used for each individual violated the principles of an open, transparent discovery process.”

Therefore, while acknowledging that “Apple likewise failed to collaborate in its efforts to secure proper discovery from Google”, Judge Grewal ruled that “production of Google’s search terms and custodians to Apple will aid in uncovering the sufficiency of Google’s production and serves greater purposes of transparency in discovery.  Google shall produce the search terms and custodians no later than 48 hours from this order. Once those terms and custodians are provided, no later than 48 hours from the tender, the parties shall meet and confer in person to discuss the lists and to attempt to resolve any remaining disputes regarding Google’s production.”

So, what do you think?  Should a third party be held to the same standard of transparency, absent a showing of deficient discovery response?  Please share any comments you might have or if you’d like to know more about a particular topic.

Welcome to LegalTech West Coast 2013! – eDiscovery Trends

Today is the start of LegalTech® West Coast 2013 (LTWC) and eDiscoveryDaily is here to report about the latest eDiscovery trends being discussed at the show.  Today, we will provide a description of some of the sessions related to eDiscovery to give you a sense of the topics being covered.  If you’re in the Los Angeles area, come check out the show – there are a number of sessions available and 62 exhibitors providing information on their products and services, including (shameless plug warning!) my company, CloudNine Discovery, which just announced yesterday that we will be previewing a brand new, browser-independent version of our linear review application, OnDemand®.

OnDemand’s completely new interface also includes several new analytics and filtering capabilities, and we will be exhibiting at booth #111 along with our partners, First Digital Solutions.  Come by and say hi!  End of shameless plug!  🙂

Perform a “find” on today’s LTWC conference schedule for “discovery” and you’ll get 24 hits.  So, there is plenty to talk about!  Sessions in the main conference tracks include:

10:30 AM – 12:00 PM:

International Privacy and its Impact on e-Discovery

As technology makes the world smaller and smaller, the US market is running into significant challenges when it comes to e-discovery. Other countries, specifically those in Asia, have much more lax rules on privacy, and this can prove an ED nightmare. Simply put, what are the rules for international privacy, and how will they impact your e-Discovery work? This panel of industry experts will examine the privacy implications you must consider when conducting e-discovery globally. With a special focus on Asia and those in the Southern California market, panelists will provide guidance and substantive information to address privacy and security concerns in relation to international e-Discovery.

Speakers are: Aaron Crews, Shareholder, Littler Mendelson; Therese Miller, Of Counsel, Shook, Hardy & Bacon and Cameron R. Krieger, eDiscovery Attorney, Latham & Watkins LLP.

A Panel of Experts: A Candid Conversation

A panel of expert judges and lawyers will discuss cutting edge ediscovery challenges. Bring your questions for prestigious members of the bench and bar.

Speakers are: Honorable Suzanne H. Segal, United States Chief Magistrate Judge, Central District of California; Honorable Jay C. Gandhi, United States Magistrate Judge, Central District of California; Jeffrey Fowler, Partner, O’Melveny & Myers LLP.  Moderator: David D. Lewis, Ph.D., IR Consultant.

2:00 – 3:15 PM:

The E-Discovery Debate

Is there a silver bullet when it comes to Technology Assisted Review (TAR)? What about Predictive Coding? There are those in the market who feel there is a one-stop solution for all organizations; others believe it must be decided on a case-by-case basis. This session will allow you to be the judge. Hear both sides of the debate to better apply the lessons learned to your own e-Discovery. The debaters will also cover the various points of view when it comes to the cloud, social media and security policies with regard to e-Discovery.

Speaker is: Jack Halprin, Head of Ediscovery, Enterprise, Google Inc.  Moderator: Hunter W. McMahon, JD, Senior Consultant, Driven, Inc.

Creative Ediscovery Problem Solving

There is no right answer. There is no wrong answer. There is only the BEST answer. This session will help you find the best possible solution to your ediscovery problems. In this brainstorm power session, you will:

  • Tackle the latest ediscovery problems
  • Develop action plans
  • Discuss meaningful ways to implement solutions

Speakers are: Linda Baynes, Associate Operations Director, Orrick, Herrington & Sutcliffe and Adam Sand, Associate General Counsel, Ancestry.com.  Moderator: John Reikes, Account Executive, Kroll Ontrack.

3:45 – 5:00 PM:

Judges’ Panel: The Current State of the ED Market

Join us for the always informative judges’ panel at LegalTech West Coast. We’ve assembled a panel of judges from the west coast to discuss their views of the ED market today and give their insight into where they see the market going. Make sure you are up on the current issues in the market and prepare your team for future complications and concerns by hearing what the bench is considering today.

5 Daunting Ediscovery Challenges: A Live Deliberation

This exercise will involve audience participation through a panel-led discussion. Be prepared to deliberate some of the most complex challenges facing ediscovery including:

  • Ever-changing technology rules
  • Unpredictable costs
  • Underutilization of Technology-assisted Review (TAR)
  • Primitive case analytics
  • Transactional ediscovery

Panelists are: Jack Halprin, Head of Ediscovery, Enterprise, Google Inc. and Adam Sand, Associate General Counsel, Ancestry.com.  Moderator: Chris Castaldini, Account Executive, Kroll Ontrack.

In addition to these, there are other eDiscovery-related sessions today.  For a complete description of all sessions today, click here.

So, what do you think?  Are you planning to attend LTWC this year?  Please share any comments you might have or if you’d like to know more about a particular topic.

Adding Evidence Items with FTK Imager – eDiscovery Best Practices

A couple of weeks ago, we talked about the benefits and capabilities of Forensic Toolkit (FTK) Imager, which is a computer forensics software application provided by AccessData, as well as how to download your own free copy.  Then, last week, we discussed how to create a disk image.  This week, let’s discuss how to add evidence items with FTK Imager so you can review their contents, whether physical drives or images that you’ve created.

Adding Evidence Items Using FTK Imager

Last week, I created an image of one of my flash drives to illustrate the process of creating an image.  Let’s take a look at that image as an evidence item.

From the File menu, you can select Add Evidence Item to add a single evidence item to the evidence tree.  You can also select Add All Attached Devices to add all of the attached physical and logical devices (If no media is present in an attached device such as a CD- or DVD-ROM or a DVD-RW, the device is skipped).  In this case we’ll add a single evidence item.

Source Evidence Type: The first step is to identify the source type that you want to review.  You can select Physical Drive or Logical Drive (as we noted before, a physical device can contain more than one logical drive).  You can also select an Image File to view an image file you created before or Contents of a Folder, to look at a specific folder.  In this example, we’ll select Image File to view the image of the flash drive we created and locate the source path of the image file.

The evidence tree will then display the item – you can keep adding evidence items if you want to look at more than one at once.  The top node is the selected item, from which you can drill down to the contents of the item.  This includes partitions and unpartitioned space, folders from the root folder on down and unallocated space, which could contain recoverable data.  Looking at the “Blog Posts” folder, you see a list of files in the folder, along with file slack.  File slack is the space between the end of a file and the end of the disk cluster in which it is stored.  It’s common because data rarely fills clusters exactly, and when a smaller file is written into a cluster that previously held a larger file, the residual data left behind can be meaningful.
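To make the slack arithmetic concrete, here is a minimal Python sketch (our own illustration, not an FTK Imager feature; the 4,096-byte cluster size is just an assumed example) that computes how much slack a file leaves in its final cluster:

```python
import math

def slack_bytes(file_size: int, cluster_size: int = 4096) -> int:
    """Return the number of slack bytes between the end of a file
    and the end of its last allocated cluster."""
    if file_size == 0:
        return 0
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 10,000-byte file on a volume with 4,096-byte clusters occupies
# 3 clusters (12,288 bytes), leaving 2,288 bytes of slack that may
# still hold fragments of whatever occupied the cluster before.
print(slack_bytes(10_000))  # 2288
```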

You’ll also notice that some of the files have an “X” on them – these are files that have been deleted, but not overwritten.  So, with FTK Imager, you can not only view active data, you can also view inactive data in deleted files, file slack or unallocated space!  When you click on a file, you can view the bit-by-bit contents of the file in the lower right window.  You can also right-click on one or more files (or even an entire folder) to display a pop-up menu that enables you to export a copy of the file(s) and review them with their native software.  You can also select Add to Custom Content Image to begin compiling a list of files to put into an image, enabling you to selectively include specific files (instead of all of the files from the device) in the image file you create.

Next time, we’ll discuss Add to Custom Content Image in more detail and discuss creating the custom content image of specific files you select.

For more information, go to the Help menu to access the User Guide in PDF format.

So, what do you think?  Have you used FTK Imager as a mechanism for eDiscovery collection?  Please share any comments you might have or if you’d like to know more about a particular topic.

Defendant Compelled by Court to Produce Metadata – eDiscovery Case Law

Remember when we talked about the issue of metadata spoliation resulting from using “drag and drop” to collect files?  Here’s a case where it appears that method may have been used, resulting in a ruling against the producing party.

In AtHome Care, Inc. v. The Evangelical Lutheran Good Samaritan Society, No. 1:12-cv-053-BLW (D. Idaho Apr. 30, 2013), Idaho District Judge B. Lynn Winmill granted the plaintiff’s motion to compel documents, ordering the defendant to identify and produce metadata for the documents in this case.

In this pilot project contract dispute between two health care organizations, the plaintiff filed a motion to compel after failing to resolve some of the discovery disputes with the defendant “through meet and confers and informal mediation with the Court’s staff”.  One of the disputes was related to the omission of metadata in the defendant’s production.

Judge Winmill stated that “Although metadata is not addressed directly in the Federal Rules of Civil Procedure, it is subject to the same general rules of discovery…That means the discovery of metadata is also subject to the balancing test of Rule 26(b)(2)(C), which requires courts to weigh the probative value of proposed discovery against its potential burden.” {emphasis added}

“Courts typically order the production of metadata when it is sought in the initial document request and the producing party has not yet produced the documents in any form”, Judge Winmill continued, but noted that “there is no dispute that Good Samaritan essentially agreed to produce metadata, and would have produced the requested metadata but for an inadvertent change to the creation date on certain documents.”

The plaintiff claimed that the system metadata was relevant because its claims focused on the unauthorized use and misappropriation of its proprietary information and whether the defendant used the plaintiff’s proprietary information to create their own materials and model, contending “that the system metadata can answer the question of who received what information when and when documents were created”.  The defendant argued that the plaintiff “exaggerates the strength of its trade secret claim”.

Weighing the value against the burden of producing the metadata, Judge Winmill ruled that “The requested metadata ‘appears reasonably calculated to lead to the discovery of admissible evidence.’ Fed.R. Civ.P. 26(b)(1). Thus, it is discoverable.” {emphasis added}

“The only question, then, is whether the burden of producing the metadata outweighs the benefit…As an initial matter, the Court must acknowledge that Good Samaritan created the problem by inadvertently changing the creation date on the documents. The Court does not find any degree of bad faith on the part of Good Samaritan — accidents happen — but this fact does weigh in favor of requiring Good Samaritan to bear the burden of production…Moreover, the Court does not find the burden all that great.”

Therefore, the plaintiff’s motion to compel production of the metadata was granted.

So, what do you think?  Should a party be required to produce metadata?  Please share any comments you might have or if you’d like to know more about a particular topic.

Version 1 of the EDRM Enron Data Set NOW AVAILABLE – eDiscovery Trends

Last week, we reported from the Annual Meeting for the Electronic Discovery Reference Model (EDRM) group and discussed some significant efforts and accomplishments by each of the project teams within EDRM.  That included an update from the EDRM Data Set project, where an effort was underway to identify and remove personally-identifiable information (“PII”) data from the EDRM Data Set.  Now, version 1 of the Data Set has been completed and is available for download.

To recap, the EDRM Enron Data Set, sourced from the FERC Enron Investigation release made available by Lockheed Martin Corporation, has been a valuable resource for eDiscovery software demonstration and testing (we covered it here back in January 2011).  Initially, the data was made available for download on the EDRM site, then subsequently moved to Amazon Web Services (AWS).  However, after much recent discussion about the PII data (including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers) available within the FERC release (and consequently the EDRM Data Set), the EDRM Data Set was taken down from the AWS site.

Yesterday, EDRM, along with Nuix, announced that they have republished version 1 of the EDRM Enron PST Data Set (which contains over 1.3 million items) after cleansing it of private, health and personal financial information. Nuix and EDRM have also published the methodology Nuix’s staff used to identify and remove more than 10,000 high-risk items.

As noted in the announcement, Nuix consultants Matthew Westwood-Hill and Ady Cassidy used a series of investigative workflows to identify the items, which included the following (a simplified illustration follows the list):

  • 60 items containing credit card numbers, including departmental contact lists that each contained hundreds of individual credit cards;
  • 572 items containing Social Security or other national identity numbers—thousands of individuals’ identity numbers in total;
  • 292 items containing individuals’ dates of birth;
  • 532 items containing information of a highly personal nature such as medical or legal matters.
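As a rough illustration of what a first-pass scan for identifiers like those above might look like, here is a short Python sketch (our own simplification, not Nuix’s actual methodology; the patterns are deliberately naive):

```python
import re

# Deliberately simplistic patterns for a first-pass PII scan; a real
# workflow needs validation (e.g., Luhn checks on card numbers) and
# human review to weed out false positives.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "date_of_birth": re.compile(r"\b(?:DOB|date of birth)[:\s]+[\d/.-]+", re.IGNORECASE),
}

def flag_pii(text: str) -> dict:
    """Return candidate PII hits found in a document's text, keyed by type."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits

sample = "Contact John, DOB: 04/12/1961, SSN 123-45-6789."
print(flag_pii(sample))
# {'ssn': ['123-45-6789'], 'date_of_birth': ['DOB: 04/12/1961']}
```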

While the personal data was (and still is) available via FERC long before the EDRM version was created, the completion of this process means that the many in the eDiscovery industry who rely on this highly useful data set for testing and software demonstration can now use a version that should be free of sensitive personal information!

For more information regarding the announcement, click here. The republished version 1 of the Data Set, as well as the white paper discussing the methodology, is available at nuix.com/enron.  Nuix is currently applying the same methodology to the EDRM Enron Data Set v2 (which contains nearly 2.3 million items) and will publish it to the same site when complete.

So, what do you think?  Have you used the EDRM Enron Data Set?  If so, do you plan to download the new version?  Please share any comments you might have or if you’d like to know more about a particular topic.

Plaintiff Granted Access to Defendant’s Database – eDiscovery Case Law

Last week in the EDRM Annual Meeting, one of our group discussion sessions was centered on production and presentation of native files – a topic which has led to the creation of a new EDRM project to address standards for working with native files in these areas.  This case provides an example of a unique form of native production.

In Advanced Tactical Ordnance Systems, LLC v. Real Action Paintball, Inc., No. 1:12-CV-296 (N.D. Ind. Feb. 25, 2013), Indiana Magistrate Judge Roger B. Cosbey took the unusual step of allowing the plaintiff direct access to a defendant company’s database under Federal Rule of Civil Procedure 34 because the plaintiff made a specific showing that the information in the database was highly relevant to the plaintiff’s claims, the benefit of producing it substantially outweighed the burden of producing it, and there was no prejudice to the defendant.

In this case involving numerous claims, including trademark infringement and fraud, Advanced Tactical Ordnance Systems LLC (“ATO”) sought expedited discovery after it obtained a temporary restraining order against the defendants. One of its document requests sought the production of defendant Real Action Paintball’s OS Commerce database to search for responsive evidence. Real Action objected, claiming that the request asked for confidential and sensitive information from its “most important asset” that would give the plaintiff a competitive advantage and that the request amounted to “an obvious fishing expedition.”

To decide the issue, Judge Cosbey looked to Federal Rule of Civil Procedure 34(a)(1)(A), which allows parties to ask to “inspect, copy, test, or sample . . . any designated documents or electronically stored information . . . stored in any medium from which information can be obtained either directly or, if necessary, after translation by the responding party into a reasonably usable form.” The advisory committee notes to this rule explain that the testing and sampling does not “create a routine right of direct access to a party’s electronic information system, although such access might be justified in some circumstances.” Judge Cosbey also considered whether the discovery request was proportionate under Federal Rule of Civil Procedure 26(b)(2)(C)(iii), comparing the “burden or expense” of the request against its “likely benefit, considering the needs of the case, the amount in controversy, the parties’ resources, the importance of the issues at stake in the action, and the importance of the discovery in resolving the issues.”

Based on this analysis, Judge Cosbey granted ATO’s request. The benefits of allowing the plaintiff to access the defendant’s OS Commerce database outweighed the burden of producing data from it, especially because the parties had entered into a protective order. The information was particularly important to the plaintiff’s argument that the defendant was using hidden metatags referencing ATO’s product to improve its results in search engines, thereby stealing the plaintiff’s customers.

Despite the defendant company’s claims that the information the database contained was proprietary and potentially harmful to the business’s competitive advantage, the court found the company failed to establish how the information in the database constituted a trade secret or how its disclosure could harm the company, especially where much of the information had already been produced or was readily available on the company’s website. Moreover, the company could limit the accessibility of the database to “‘Attorneys’ Eyes Only.’”

So, what do you think?  Was it appropriate to grant the plaintiff direct access to the defendant’s database?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.

How to Create an Image Using FTK Imager – eDiscovery Best Practices

A few days ago, we talked about the benefits and capabilities of Forensic Toolkit (FTK) Imager, which is a computer forensics software application provided by AccessData, as well as how to download your own free copy.  Now, let’s discuss how to create a disk image.

Before we begin, it’s important to note that best practices when creating a disk image includes the use of a write blocker.  Write blockers are devices that allow data to be acquired from a drive without creating the possibility of accidentally damaging the drive contents. They allow read commands to pass but block write commands, protecting the drive contents from being changed.  Tableau and FireFly are two examples of write blockers.

It’s also important to note that while we’re showing you how to “try this at home”, use of a certified forensic collection specialist is recommended when collecting data forensically that could require expert testimony on the collection process.

Create an Image Using FTK Imager

I’m going to create an image of one of my flash drives to illustrate the process.  To create an image, select Create Disk Image from the File menu.

Source Evidence Type: To image an entire device, select Physical Drive (a physical device can contain more than one Logical Drive).  You can also create an image of an Image File, which seems silly, but it could be desirable if, say, you want to create a more compressed version of the image.  You can also image the specific Contents of a Folder or of a Fernico Device (which is ideal for creating images of multiple CDs or DVDs with the same parameters).  In this example, we’ll select Physical Drive to create an image of the flash drive.

Source Drive Selection: Based on our selection of physical drive, we then have a choice of the current physical drives we can see, so we select the drive corresponding to the flash drive.

Create Image: Here is where you can specify where the image will be created.  We also always choose Verify images after they are created as a way to run a hash value check on the image file.  You can also Create directory listings of all files in the image after they are created, but be prepared that this will be a huge listing for a typical hard drive with hundreds of thousands of entries.

Select Image Type: This indicates the type of image file that will be created – Raw is a bit-by-bit uncompressed copy of the original, while the other three alternatives are designed for use with a specific forensics program.  We typically use Raw or E01, which is an EnCase forensic image file format.  In this example, we’re using Raw.

Evidence Item Information: This is where you can enter key information about the evidence item you are about to create to aid in documenting the item.  This information will be saved as part of the image summary information once the image is complete.

Select Image Destination: We’ll browse to a folder that I’ve created called “FTKImage” on the C: drive and give the image a file name.  Image Fragment Size indicates the size of each fragment when you want to break a larger image file into multiple parts.  Compression indicates the level of compression of the image file, from 0 (no compression) to 9 (maximum compression – and a slower image creation process).  For Raw uncompressed images, compression is always 0.  Use AD Encryption indicates whether to encrypt the image – we don’t typically select that, instead choosing to put an image on an encrypted drive (when encryption is desired).  Click Finish to begin the image process and a dialog will be displayed throughout the image creation process.  Because it is a bit-by-bit image of the device, it will take the same amount of time regardless of how many files are currently stored on the device.

Drive/Image Verify Results: When the image is complete, this popup window will appear to show the name of the image file, the sector count, computed (before image creation) and reported (after image creation) MD5 and SHA1 hash values with a confirmation that they match and a list of bad sectors (if any).  The hash verification is a key check to ensure a valid image and the hash values should be the same regardless which image type you create.
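Incidentally, you can independently reproduce the hash side of this verification with a few lines of Python; here is a minimal sketch (the image file name is a hypothetical example) that computes the MD5 and SHA1 values of a completed raw image in memory-friendly chunks:

```python
import hashlib

def hash_image(path: str, chunk_size: int = 1024 * 1024) -> tuple[str, str]:
    """Compute MD5 and SHA1 of an image file, reading in chunks so
    even very large images don't have to fit in memory."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# Hypothetical image file created above; compare these values to the
# ones FTK Imager reports to confirm the image has not changed.
md5_value, sha1_value = hash_image(r"C:\FTKImage\flashdrive.001")
print("MD5: ", md5_value)
print("SHA1:", sha1_value)
```

If the values match what FTK Imager reported, the image is intact; any difference means the image was altered after creation.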

Image Summary: When the image is complete, click the Image Summary button to view a summary of the image that was created, including the evidence item information you entered, drive information, hash verification information, etc.  This information is also saved as a text file.

Directory Listing: If you selected Create directory listings of all files in the image, the results will be stored in a CSV file, which can be opened with Excel.
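If the listing is too large to work with comfortably in Excel, a short script can filter it instead. The sketch below uses hypothetical file and column names (“Filename” and “Size”); check the header row of your own CSV and adjust accordingly:

```python
import csv

# Hypothetical file and column names; match them to your own listing.
with open("flashdrive.001.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

def size_of(row: dict) -> int:
    """Parse the Size column, tolerating thousands separators or blanks."""
    raw = (row.get("Size") or "0").replace(",", "")
    return int(raw) if raw.isdigit() else 0

# Example: show the ten largest entries in the listing.
for row in sorted(rows, key=size_of, reverse=True)[:10]:
    print(size_of(row), row.get("Filename"))
```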

And, there you have it – a bit-by-bit image of the device!  You’ve just captured everything on the device, including deleted files and slack space data.  Next time, we’ll discuss Adding an Evidence Item to look at the contents of drives or images (including the image we created here).

For more information, go to the Help menu to access the User Guide in PDF format.

So, what do you think?  Have you used FTK Imager as a mechanism for eDiscovery collection?  Please share any comments you might have or if you’d like to know more about a particular topic.

More Updates from the EDRM Annual Meeting – eDiscovery Trends

Yesterday, we discussed some general observations from the Annual Meeting for the Electronic Discovery Reference Model (EDRM) group and discussed some significant efforts and accomplishments by the (suddenly heavily talked about) EDRM Data Set project.  Here are some updates from other projects within EDRM.

It should be noted these are summary updates and that most of the focus on these updates is on accomplishments for the past year and deliverables that are imminent.  Over the next few weeks, eDiscovery Daily will cover each project in more depth with more details regarding planned activities for the coming year.

Model Code of Conduct (MCoC)

The MCoC was introduced in 2011 and became available for organizations to subscribe last year.  To learn more about the MCoC, you can read the code online here, or download it as a 22 page PDF file here.  Subscribing is easy!  To voluntarily subscribe to the MCoC, you can register on the EDRM website here.  Identify your organization, provide information for an authorized representative and answer four verification questions (truthfully, of course) to affirm your organization’s commitment to the spirit of the MCoC, and your organization is in!  You can also provide a logo for EDRM to include when adding you to the list of subscribing organizations.  Pending a survey of EDRM members to determine if any changes are needed, this project has been completed.  Team leaders include Eric Mandel of Zelle Hofmann, Kevin Esposito of Rivulex and Nancy Wallrich.

Information Governance Reference Model (IGRM)

The IGRM team has continued to make strides and improvements on an already terrific model.  Last October, they unveiled the release of version 3.0 of the IGRM.  As their press release noted, “The updated model now includes privacy and security as primary functions and stakeholders in the effective governance of information.”  IGRM continues to be one of the most active and well-participated EDRM projects.  This year, the early focus – as quoted from Judge Andrew Peck’s keynote speech at Legal Tech this past year – is “getting rid of the junk”.  Project leaders are Aliye Ergulen from IBM, Reed Irvin from Viewpointe and Marcus Ledergerber from Morgan Lewis.

Search

One of the best examples of the new, more agile process for creating deliverables within EDRM comes from the Search team, which released its new draft Computer Assisted Review Reference Model (CARRM), depicting the flow for a successful Computer Assisted Review project. The entire model was created in only a matter of weeks.  Early focus for the Search project for the coming year includes adjustments to CARRM (based on feedback at the annual meeting).  You can also still send your comments regarding the model to mail@edrm.net or post them on the EDRM site here.  A webinar regarding CARRM is also planned for late July.  Kudos to the Search team, including project leaders Dominic Brown of Autonomy and Jay Lieb of kCura, who got unmerciful ribbing for insisting (jokingly, I think) that TIFF files, unlike Generalissimo Francisco Franco, are still alive.  🙂

Jobs

In late January, the Jobs Project announced the release of the EDRM Talent Task Matrix diagram and spreadsheet, which is available in XLSX or PDF format. As noted in their press release, the Matrix is a tool designed to help hiring managers better understand the responsibilities associated with common eDiscovery roles. The Matrix maps responsibilities to the EDRM framework, so the eDiscovery duties associated with each role can be assigned to the appropriate parties.  Project leader Keith Tom noted that next steps include surveying EDRM members regarding the Matrix, requesting and co-authoring case studies and white papers, and creating a short video on how to use the Matrix.

Metrics

In today’s session, the Metrics project team unveiled the first draft of the new Metrics model to EDRM participants!  Feedback was provided during the session and the team will make the model available for additional comments from EDRM members over the next week or so, with a goal of publishing for public comments in the next two to three weeks.  The team is also working to create a page to collect Metrics measurement tools from eDiscovery professionals that can benefit the eDiscovery community as a whole.  Project leaders Dera Nevin of TD Bank and Kevin Clark noted that June is “budget calculator month”.

Other Initiatives

As noted yesterday, there is a new project to address standards for working with native files in the different EDRM phases, led by Eric Mandel from Zelle Hofmann, as well as a new initiative to establish collection guidelines, spearheaded by Julie Brown from Vorys.  There is also an effort underway to refocus the XML project as it works to complete the 2.0 version of the EDRM XML model.  In addition, there was quite a spirited discussion as to where EDRM is heading as it approaches ten years of existence, and it will be interesting to see how the EDRM group continues to evolve over the next year or so.  As you can see, a lot is happening within the EDRM group – there’s a lot more to it than just the base Electronic Discovery Reference Model.

So, what do you think?  Are you a member of EDRM?  If not, why not?  Please share any comments you might have or if you’d like to know more about a particular topic.

Reporting from the EDRM Annual Meeting and a Data Set Update – eDiscovery Trends

The Electronic Discovery Reference Model (EDRM) Project was created in May 2005 by George Socha of Socha Consulting LLC and Tom Gelbmann of Gelbmann & Associates to address the lack of standards and guidelines in the electronic discovery market.  Now, beginning its ninth year of operation with its annual meeting in St. Paul, MN, EDRM is accomplishing more than ever to address those needs.  Here are some highlights from the meeting, and an update regarding the (suddenly heavily talked about) EDRM Data Set project.

Annual Meeting

Twice a year, in May and October, eDiscovery professionals who are EDRM members meet to continue the process of working together on various standards projects.  This will be my eighth year participating in EDRM at some level and, oddly enough, I’m assisting with PR and promotion (how am I doing so far?).  eDiscovery Daily has referenced EDRM and its phases many times in the blog’s 2 1/2-plus year history – this is our 144th post that relates to EDRM!

Some notable observations about today’s meeting:

  • New Participants: More than half the attendees at this year’s annual meeting are attending for the first time.  EDRM is not just a core group of “die-hards”; it continues to find appeal with eDiscovery professionals throughout the industry.
  • Agile Approach: EDRM has adopted an Agile approach to shorten the time to complete and publish deliverables, a change in philosophy that facilitated several notable accomplishments from working groups over the past year including the Model Code of Conduct (MCoC), Information Governance Reference Model (IGRM), Search and Jobs (among others).  More on that tomorrow.
  • Educational Alliances: For the first time, EDRM has formed some interesting and unique educational alliances.  In April, EDRM teamed with the University of Florida Levin College of Law to present a day and a half conference entitled E-Discovery for the Small and Medium Case.  And, this June, EDRM will team with Bryan University to provide an in-depth, four-week E-Discovery Software & Applied Skills Summer Immersion Program for Law School Students.
  • New Working Group: A new working group, to be led by Eric Mandel of Zelle Hofmann, was formed to address standards for working with native files in the different EDRM phases.

Tomorrow, we’ll discuss the highlights for most of the individual working groups.  Given the recent amount of discussion about the EDRM Data Set group, we’ll start with that one today!

Data Set

The EDRM Enron Data Set has been around for several years and has been a valuable resource for eDiscovery software demonstration and testing (we covered it here back in January 2011).  The data in the EDRM Enron PST Data Set files is sourced from the FERC Enron Investigation release made available by Lockheed Martin Corporation.  It was reconstituted as PST files with attachments for the EDRM Data Set Project.  So, in essence, EDRM took data that was already in the public domain and made it much more usable.  Initially, the data was made available for download on the EDRM site, then subsequently moved to Amazon Web Services (AWS).

In the past several days, there has been much discussion about the personally-identifiable information (“PII”) available within the FERC release (and consequently the EDRM Data Set), including social security numbers, credit card numbers, dates of birth, home addresses and phone numbers.  Consequently, the EDRM Data Set has been taken down from the AWS site.

The Data Set team led by Michael Lappin of Nuix and Eric Robi of Elluma Discovery has been working on a process (using predictive coding technology) to identify and remove the PII data from the EDRM Data Set.  Discussions about this process began months ago, prior to the recent discussions about the PII data contained within the set.  The team has completed this iterative process for V1 of the data set (which contains 1,317,158 items), identifying and removing 10,568 items with PII, HIPAA and other sensitive information.  This version of the data set will be made available within the EDRM community shortly for peer review testing.  The data set team will then repeat the process for the larger V2 version of the data set (2,287,984 items).  A timetable for republishing both sets should be available soon and the efforts of the Data Set team on this project should pay dividends in developing and standardizing processes for identifying and eliminating sensitive data that eDiscovery professionals can use in their own data sets.
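For readers curious about the general shape of such an iterative, classifier-driven process, here is a heavily simplified Python sketch using scikit-learn. To be clear, this is our own illustration of the concept, not the team’s actual implementation: reviewer-labeled seed documents train a model, the model scores the unreviewed remainder, the highest-scoring items go to human review, and the loop repeats.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative seed set: documents a reviewer has already labeled
# (1 = contains PII/sensitive info, 0 = clean).
seed_docs = [
    "employee ssn 123-45-6789 attached to payroll record",
    "medical leave request with diagnosis details",
    "meeting moved to 3pm, agenda attached",
    "q3 gas trading forecast for the west desk",
]
seed_labels = [1, 1, 0, 0]

unreviewed = [
    "departmental contact list with card numbers and dates of birth",
    "lunch order for the team offsite",
]

# Train a simple text classifier on the seed set...
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# ...then rank the unreviewed documents by predicted probability of
# containing sensitive information. In an iterative workflow, the
# top-ranked items are reviewed by a human, their labels are added to
# the seed set, and the cycle repeats until few new items surface.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```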

The team has also implemented a Forensic Files Testing Project site where users can upload their own “modern”, non-copyrighted file samples that are typically encountered during electronic discovery processing to provide a more diverse set of data than is currently available within the Enron data set.

So, what do you think?  How has EDRM impacted how you manage eDiscovery?  Please share any comments you might have or if you’d like to know more about a particular topic.
