Review

Is it Time to Ditch the Per Hour Model for Document Review? – eDiscovery Trends

Some of the recent stories involving alleged overbilling by law firms for legal work – much of it for document review – raise the question: is it time to ditch the per hour model for document review in favor of a per document rate?

As discussed by D. Casey Flaherty in Law Technology News (DLA Piper Is Not Alone: Why Law Firms Overbill), DLA Piper has been sued by its client – to the tune of over $22 million – for overbilling.  When DLA Piper produced some 250,000 documents in response to its client’s eDiscovery requests, some embarrassing internal emails were included in that production.  For example:

  • “I hear we are already 200K over our estimate – that’s Team DLA Piper!”
  • “DLA seems to love to low ball the bills and with the number of bodies being thrown at this thing, it’s going to stay stupidly high and with the absurd litigation POA has been in for years, it does have lots of wrinkles.”
  •  “It’s a Thomson project, he goes full time on whatever debtor case he has running. Full time, 2 days a week.”
  • “[N]ow Vince has random people working full time on random research projects in standard ‘churn that bill, baby!’ mode. That bill shall show no limits.”
  • “Didn’t you use three associates to prepare for a first day hearing where you filed three documents?”

In his article, Flaherty provides two other examples of (at least) perceived overbilling:

  • In the Madoff case, the government “used 6,000 hours of attorney time to procure a $140 million settlement offer (more than $23,000 delivered per hour spent)”.  Your federal tax dollars hard at work!  However, the plaintiffs’ law firms “expended 118,000 additional attorney hours on the same matter to deliver the final version of that settlement at $219 million” and seek $40 million for delivering $39 million in incremental value (once you subtract their proposed $40 million in fees).  “It appears that most of the 110 lawyers are contract attorneys performing basic document review; the plaintiffs firms are just marking them up at many, many multiples of their actual cost.”
  • In the Citigroup derivatives class action settlement, plaintiffs firms “reached a $590 million settlement from which they now seek almost $100 million in fees for 87,000 hours of billable time (average, $1,150 per hour). The bulk of the hours were spent on low-level document review work” where contract attorneys were paid $40 to $60 per hour and “the plaintiffs firms are seeking $550 to $1,000 plus per hour for those services”.
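The effective rates in those two examples are easy to sanity-check. Here is a quick verification of the arithmetic, using only the figures quoted above (Python is used purely for illustration):

```python
# Sanity-checking the effective hourly rates quoted above.

def effective_hourly_rate(dollars, hours):
    """Average dollars per attorney hour."""
    return dollars / hours

# Madoff example: $140 million settlement for 6,000 attorney hours.
madoff = effective_hourly_rate(140_000_000, 6_000)
print(f"Madoff: ${madoff:,.0f} delivered per hour")     # more than $23,000

# Citigroup example: ~$100 million in fees sought for 87,000 hours.
citigroup = effective_hourly_rate(100_000_000, 87_000)
print(f"Citigroup: ${citigroup:,.0f} billed per hour")  # the ~$1,150 average cited

# Markup when contract reviewers paid $40-$60/hour are billed at $550-$1,000+.
print(f"Markup: {550 / 60:.0f}x to {1000 / 40:.0f}x")
```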

While the DLA Piper example isn’t specifically about document review overbilling, it does reflect how cavalier some firms (or at least some attorneys at those firms) can be about the subject of overbilling.  For the other two examples above, document review overbilling appears to be at the core of those disputes.  There are admittedly different levels of document review, depending on whether the attorneys are performing a straightforward responsiveness review, a privilege review, or a more detailed subject matter/issue coding review.  Nonetheless, the number of documents in the collection is finite and the cost for review should be somewhat predictable, regardless of the level of review being conducted.

Why don’t more firms offer a per document rate for document review?  Or, perhaps a better question: why don’t more organizations insist on one?  That seems like a better way to make document review costs more predictable and more consistent.  I’m not sure why it hasn’t become more prevalent, other than “that’s the way we’ve always done it”.  Knowing the per document rate and the number of documents to be reviewed up front would seem to eliminate overbilling disputes, at least for document review.
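To illustrate the predictability argument, here is a minimal sketch comparing the two billing models; all rates and volumes are hypothetical, chosen only to show that one total is fixed the moment the collection is counted while the other floats with the hours billed:

```python
# Hypothetical comparison of per-document vs. per-hour review billing.

def per_document_cost(num_docs, rate_per_doc):
    """Total is known up front, once the collection is counted."""
    return num_docs * rate_per_doc

def per_hour_cost(hours_billed, hourly_rate):
    """Total isn't known until the hours are actually billed."""
    return hours_billed * hourly_rate

docs = 250_000
print(per_document_cost(docs, 1.00))   # fixed before review starts

# Under hourly billing, the same review could land almost anywhere,
# depending on reviewer speed and how many "bodies" are thrown at it.
for hours in (2_500, 5_000, 10_000):
    print(per_hour_cost(hours, 100))
```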

So, what do you think?  Is it time to ditch the per hour model for document review?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Four More Tips to Quash the Cost of eDiscovery – eDiscovery Best Practices

Thursday, we covered the first four tips from Craig Ball’s informative post on his blog (Ball in your Court) entitled Eight Tips to Quash the Cost of E-Discovery.  Today, we’ll discuss the last four tips.

5. Test your Methods and Know your ESI: Craig says that “Staggering sums are spent in e-discovery to collect and review data that would never have been collected if only someone had run a small scale test before deploying an enterprise search”.  Knowing your ESI will, as Craig notes, “narrow the scope of collection and review with consequent cost savings”.  In one of the posts on our very first day of the blog, I relayed an actual example from a client regarding a search that included a wildcard of “min*” to retrieve variations like “mine”, “mines” and “mining”.  Because there are 269 words in the English language that begin with “min”, that overly broad search retrieved over 300,000 files with hits in an enterprise-wide search.  Unfortunately, the client had already agreed to the search term before finding that out, which resulted in considerable negotiation (and embarrassment) to get the other side to agree to modify the term.  That’s why it’s always a good idea to test your searches before the meet and confer.  The better you know your ESI, the more you save.
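Testing a wildcard before committing to it can be as simple as expanding it against a word list or a sample of the collection. A minimal sketch of the idea (the word list below is a tiny hypothetical stand-in, not real collection data):

```python
# Expand a proposed wildcard against sample terms before the meet and confer.
import fnmatch

# Hypothetical stand-in for a dictionary or sampled collection vocabulary.
sample_terms = ["mine", "mines", "mining", "mineral", "minute", "minutes",
                "minimum", "minimal", "minor", "minority", "ministry", "mint"]

hits = [w for w in sample_terms if fnmatch.fnmatch(w, "min*")]
print(len(hits), hits)

# Every term above matches "min*", including "minute", "minimum" and
# "minor", which have nothing to do with mining.  Listing the explicit
# variations ("mine OR mines OR mining") avoids the false hits.
```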

6. Use Good Tools: Craig provides another great analogy in observing that “If you needed to dig a big hole, you wouldn’t use a teaspoon, nor would you hire a hundred people with teaspoons.  You’d use the right power tool and a skilled operator.”  Collection and review tools must fit your requirements and workflow, so, guess what?  You need to understand those requirements and your workflow to pick the right tool.  If you’re putting together a wooden table, you don’t have to learn how to operate a blowtorch if all you need is a hammer and some nails, or a screwdriver and some screws for the job.  The better that the tools fit your workflow, the more you save.

  7. Communicate and Cooperate: Craig says that “Much of the waste in e-discovery grows out of apprehension and uncertainty.  Litigants often over-collect and over-review, preferring to spend more than necessary instead of giving the transparency needed to secure a crucial concession on scope or methodology”.  A big part of communication and cooperation, at least in Federal cases, is the Rule 26(f) conference (also known as the “meet and confer”).  The more straightforward you make discovery through communication and cooperation, the more you save.

8. Price is What the Seller Accepts: Craig notes that there is much “pliant pricing” for eDiscovery tools and services and relayed an example where a vendor initially quoted $43.5 million to complete a large expedited project, only to drop that quote all the way down to $3.5 million after some haggling.  Yes, it’s important to shop around.  It’s also important to be able to know the costs going in, through predictable pricing.  If you have 10 gigabytes or 1 terabyte of data, providers should be able to tell you exactly what it will cost to collect, process, load and host that data.  And, it’s always good if the provider will let you try their tools for free, on your actual data, so you know whether those tools are worth the price.  The more predictable price and value of the tools and services are, the more you save.
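The “predictable pricing” point above lends itself to a simple sketch: given a data volume and a provider’s per-gigabyte rates, the quote should be an exact, knowable number. All rates below are hypothetical, for illustration only:

```python
# Hypothetical provider rate card: with per-GB pricing, the total cost
# of collecting, processing, loading and hosting is knowable up front.

RATES_PER_GB = {
    "collect": 25.00,
    "process": 100.00,
    "load": 50.00,
    "host_per_month": 15.00,
}

def quote(gigabytes, hosting_months):
    one_time = gigabytes * (RATES_PER_GB["collect"] +
                            RATES_PER_GB["process"] +
                            RATES_PER_GB["load"])
    hosting = gigabytes * RATES_PER_GB["host_per_month"] * hosting_months
    return one_time + hosting

print(quote(10, 12))   # 10 GB hosted for a year: a fixed number, up front
```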

So, what do you think?  What are you doing to keep eDiscovery costs down?  Please share any comments you might have or if you’d like to know more about a particular topic.


Eight Tips to Quash the Cost of eDiscovery – eDiscovery Best Practices

By now, Craig Ball needs no introduction to our readers, as he has been a thought leader interview participant for the past three years.  Two years ago, we published his interview in a single post, his interview last year was split into a two part series and this year’s interview was split into a three part series.  Perhaps next year, I will be lucky enough to interview him for an hour and we can simply have a five-part “Ball Week” (like the Discovery Channel has “Shark Week”).  Hmmm…

Regardless, I’m a regular reader of his blog, Ball in your Court, as well, and, last week, he published a very informative post entitled Eight Tips to Quash the Cost of E-Discovery with tips on saving eDiscovery costs.  I thought we would cover those tips here, with some commentary:

  1. Eliminate Waste: Craig notes that “irrational fears [that] flow from lack of familiarity with systems, tools and techniques that achieve better outcomes at lower cost” result in waste.  Over-preservation and over-collection of ESI, conversion of ESI, failing to deduplicate and reviewing unnecessary files all drive the cost up.  Last September, we ran a post regarding quality control and making sure the numbers add up when you subtract filtered, NIST/system, exception, duplicate and culled (during searching) files from the collected total.  In that somewhat hypothetical example based on Enron data sets, after removing those files, only 17% of the collected files were actually reviewed (which, in many cases, would still be too high a percentage).  The fewer files that require attorney “eyes on”, the more you save.
  2. Reduce Redundancy and Fragmentation: While, according to the Compliance, Governance and Oversight Council (CGOC), information volume in most organizations doubles every 18-24 months, Craig points out that “human beings don’t create that much more unique information; they mostly make more copies of the same information and break it into smaller pieces.”  Insanity is doing the same thing over and over and expecting different results; insane review is reviewing the same documents over and over and (potentially) getting different results, which is not only inefficient but could lead to inconsistencies and even inadvertent disclosures.  Most collections not only contain exact duplicates in the exact format (which can be identified through hash-based deduplication), but also “near” duplicates that include the same content in different file formats (and at different sizes) or portions of the content in eMail threads.  The less duplicative content that requires review, the more you save.
  3. Don’t Convert ESI: In addition to noting the pitfalls of converting ESI to page-like image formats like TIFF, Craig also wrote a post about it, entitled Are They Trying to Screw Me? (discussed in this blog here).  ‘Nuff said.  The less ESI you convert, the more you save.
  4. Review Rationally: Craig discussed a couple of irrational approaches to review, including reviewing attachments without hits when the eMail has been determined to be non-responsive and the tendency to “treat information in any form from any source as requiring privilege review when even a dollop of thought would make clear that not all forms or sources of ESI are created equal when it comes to their potential to hold privileged content”.  For the latter, he advocates using technology to “isolate privileged content” as well as clawback agreements and Federal Rule of Evidence 502 for protection against inadvertent disclosure.  It’s also important to be able to adjust during the review process if certain groups of documents are identified as needing to be excluded or handled differently, such as the “All Rights Reserved” documents that I previously referenced in the “oil” AND “rights” search example.  The more intelligent the review process, the more you save.
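Hash-based deduplication, mentioned in tip 2 above, is conceptually simple: identical content yields identical digests, so only one copy needs attorney review. A minimal sketch (file names and contents are hypothetical); note that it catches only exact duplicates, while “near” duplicates still require similarity analysis:

```python
# Minimal hash-based deduplication: identical bytes -> identical digest.
import hashlib

def file_digest(data: bytes) -> str:
    # SHA-256 here; MD5 and SHA-1 are also commonly used by eDiscovery tools.
    return hashlib.sha256(data).hexdigest()

collection = {
    "custodian_a/report.docx": b"Q3 revenue summary ...",
    "custodian_b/report_copy.docx": b"Q3 revenue summary ...",       # exact duplicate
    "custodian_c/report_v2.docx": b"Q3 revenue summary (revised) ...",
}

seen, unique = set(), []
for path, data in collection.items():
    digest = file_digest(data)
    if digest not in seen:
        seen.add(digest)
        unique.append(path)

print(unique)  # the exact duplicate drops out; the near-duplicate does not
```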

There is too much to say about these eight tips to fit into one blog post, so on Monday (after the Good Friday holiday) we’ll cover tips 5 through 8.  The waiting is the hardest part.

So, what do you think?  What are you doing to keep eDiscovery costs down?  Please share any comments you might have or if you’d like to know more about a particular topic.


Fulbright’s Litigation Trends Survey Shows Increased Litigation, Mobile Device Collection – eDiscovery Trends

According to Fulbright’s 9th Annual Litigation Trends Survey released last month, companies in the United States and United Kingdom continue to deal with, and spend more on, litigation.  From an eDiscovery standpoint, the survey showed an increase in requirements to preserve and collect data from employee mobile devices, a high reliance on self-preservation to fulfill preservation obligations and a decent percentage of organizations using technology assisted review.

Here are some interesting statistics from the report:

PARTICIPANTS

Here is a breakdown of the participants in the survey.

  • There were 392 total participants from the US and UK, 96% of which were either General Counsel (82%) or Head of Litigation (14%).
  • About half (49%) of the companies surveyed were billion dollar companies, with $1 billion or more in gross revenue.  36% of the total companies have revenues of $10 billion or more.

LITIGATION TRENDS

The report showed increases in both the number of cases being encountered by organizations, as well as the total expenditures for litigation.

Increasing Litigation Cases

  • This year, 92% of respondents anticipate either the same amount or more litigation, up from 89% last year.  26% of respondents expect litigation to increase, while 66% expect litigation to stay the same.  Among the larger companies, 33% of respondents expect more disputes, and 94% expect either the same number or an increase.
  • The number of respondents reporting at least one lawsuit rose to 86% this year, compared with 73% last year.  Those reporting 21 or more lawsuits rose to 33% from 22% last year.
  • Companies facing at least one $20 million lawsuit rose to 31% this year, from 23% the previous year.

Increasing Litigation Costs

  • The percentage of companies spending $1 million or more on litigation has increased for the third year in a row to 54%, up from 51% in 2011 and 46% in 2010, primarily due to a sharp rise in $1 million+ cases in the UK (rising from 38% in 2010 up to 53% in 2012).
  • In the US, 53% of organizations spend $1 million or more on litigation and 17% spend $10 million or more.
  • 33% of larger companies spent $10 million or more on litigation, way up from 19% the year before (and 22% in 2010).

EDISCOVERY TRENDS

The report showed an increase in requirements to preserve and collect data from employee mobile devices, a high reliance on self-preservation to fulfill preservation obligations and a decent percentage of organizations using technology assisted review.

Mobile Device Preservation and Collection

  • 41% of companies had to preserve and/or collect data from an employee mobile device because of litigation or an investigation in 2012, up from 32% in 2011.
  • Similar increases were reported by respondents from larger companies (38% in 2011, up to 54% in 2012) and midsized companies (26% in 2011, up to 40% in 2012).  Only respondents from smaller companies reported a drop (from 26% to 14%).

Self-Preservation

  • 69% of companies rely on individuals preserving their own data (i.e., self-preservation) in any of their disputes or investigations.  Larger and mid-sized companies are more likely to utilize self-preservation (73% and 72% respectively) than smaller companies (52%).
  • 41% of companies use self-preservation in all of their matters, and 73% use it for half or more of all matters.
  • When not relying on self-preservation, 72% of respondents say they depend on the IT function to collect all data sources of pertinent custodians.
  • Reasons that respondents gave for not relying on self-preservation included: more cost effective and efficient not to rely on custodians (29%); lack of compliance by custodians (24%); high profile matter (23%); high monetary or other exposure (22%); need to conduct forensics (20%); some or all custodians may have an incentive to improperly delete potentially relevant information (18%); case law does not support self-preservation (14%); and high profile custodian (11%).

Technology Assisted Review

  • 35% of all respondents are using technology assisted review for at least some of their matters.  U.S. companies are more likely to employ technology-assisted review than their U.K. counterparts (40% versus 23%).
  • 43% of larger companies surveyed use technology assisted review, compared with 32% of mid-sized companies and 23% of the smaller companies.
  • Of those companies utilizing technology assisted review, 21% use it in all of their matters and 51% use it for half or more of their matters.

There are plenty more interesting stats and trends in the report, which is free(!).  To download your own copy of the report, click here.

So, what do you think?  Do any of those trends surprise you?  Please share any comments you might have or if you’d like to know more about a particular topic.


eDiscovery Daily Is Thirty! (Months Old, That Is)

Thirty months ago yesterday, eDiscovery Daily was launched.  It’s hard to believe that it has been 2 1/2 years since our first three posts debuted on our first day.  635 posts later, a lot has happened in the industry that we’ve covered.  And, yes, we’re still crazy after all these years for committing to a post each business day, but we haven’t missed one yet.  Twice a year, we like to take a look back at some of the important stories and topics during that time.  So, here are just a few of the posts over the last six months you may have missed.  Enjoy!

In addition, Jane Gennarelli has been publishing an excellent series to introduce new eDiscovery professionals to the litigation process and litigation terminology.  Here is the latest post, which includes links to the previous twenty-one posts.

Thanks for noticing us!  We’ve nearly quadrupled our readership since the first six month period and almost septupled (that’s grown 7 times in size!) our subscriber base since those first six months!  We appreciate the interest you’ve shown in the topics and will do our best to continue to provide interesting and useful eDiscovery news and analysis.  And, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Five Common Myths About Predictive Coding – eDiscovery Best Practices

During my interviews with various thought leaders (a list of which can be found here, with links to each interview), we discussed various aspects of predictive coding and some of the perceived myths that exist regarding predictive coding and what it means to the review process.  I thought it would be a good idea to recap some of those myths and how they compare to the “reality” (at least as some of us see it).  Or maybe just me.  🙂

1.     Predictive Coding is New Technology

Actually, with all due respect to each of the various vendors that have their own custom algorithm for predictive coding, the technology for predictive coding as a whole is not new technology.  Ever heard of artificial intelligence?  Predictive coding, in fact, applies artificial intelligence to the review process.  With all of the acronyms we use to describe predictive coding, here’s one more for consideration: “Artificial Intelligence for Review” or “AIR”.  May not catch on, but I like it.

Maybe attorneys would be more receptive to it if they understood it as artificial intelligence.  As Laura Zubulake pointed out in my interview with her, “For years, algorithms have been used in government, law enforcement, and Wall Street.  It is not a new concept.”  With that in mind, Ralph Losey predicts that “The future is artificial intelligence leveraging your human intelligence and teaching a computer what you know about a particular case and then letting the computer do what it does best – which is read at 1 million miles per hour and be totally consistent.”

2.     Predictive Coding is Just Technology

Treating predictive coding as just the algorithm that “reviews” the documents is shortsighted.  Predictive coding is a process that includes the algorithm.  Without a sound approach for identifying appropriate example documents for the collection, ensuring educated and knowledgeable reviewers to appropriately code those documents and testing and evaluating the results to confirm success, the algorithm alone would simply be another case of “garbage in, garbage out” and doomed to fail.

As discussed by both George Socha and Tom Gelbmann during their interviews with this blog, EDRM’s Search project has published the Computer Assisted Review Reference Model (CARRM), which has taken steps to define that sound approach.  Nigel Murray also noted that “The people who really understand computer assisted review understand that it requires a process.”  So, it’s more than just the technology.

3.     Predictive Coding and Keyword Searching are Mutually Exclusive

I’ve talked to some people who think that predictive coding and keyword searching are mutually exclusive, i.e., that you wouldn’t perform keyword searching on a case where you plan to use predictive coding.  Not necessarily.  Ralph Losey advocates a “multimodal” approach, noting it as: “more than one kind of search – using predictive coding, but also using keyword search, concept search, similarity search, all kinds of other methods that we have developed over the years to help train the machine.  The main goal is to train the machine.”

4.     Predictive Coding Eliminates Manual Review

Many people think of predictive coding as the death of manual review, with all attorney reviewers being replaced by machines.  Actually, manual review is a part of the predictive coding process in several aspects, including: 1) Subject matter knowledgeable reviewers are necessary to perform review to create a training set of documents for the technology, 2) After the process is performed, both sets (the included and excluded documents) are sampled and the samples are reviewed to determine the effectiveness of the process, and 3) The resulting responsive set is generally reviewed to confirm responsiveness and also to determine whether the documents are privileged.  Without manual review to train the technology and verify the results, the process would fail.
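Step 2 of that process, sampling the excluded (or “null”) set to verify that responsive documents weren’t left behind, can be sketched in a few lines. The collection size and responsiveness rate below are hypothetical, for illustration only:

```python
# Hypothetical sketch: estimate responsive documents missed by sampling
# the excluded set, then projecting the sample's rate to the whole set.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Pretend excluded set: 100,000 docs, of which 2% are secretly responsive.
excluded = [1] * 2_000 + [0] * 98_000   # 1 = responsive, 0 = not

sample = random.sample(excluded, 1_500)  # attorneys review this sample
rate = sum(sample) / len(sample)         # observed responsive rate
estimated_missed = rate * len(excluded)  # projected to the full excluded set
print(f"Estimated responsive docs missed: {estimated_missed:,.0f}")
```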

5.     Predictive Coding Has to Be Perfect to Be Useful

Detractors of predictive coding note that predictive coding can miss plenty of responsive documents and is nowhere near 100% accurate.  In one recent case, the producing party estimated as many as 31,000 relevant documents may have been missed by the predictive coding process.  However, they also estimated that a much more costly manual review would have missed as many as 62,000 relevant documents.

Craig Ball’s analogy about the two hikers that encounter the angry grizzly bear is appropriate – the one hiker doesn’t have to outrun the bear, just the other hiker.  Craig notes: “That is how I look at technology assisted review.  It does not have to be vastly superior to human review; it only has to outrun human review.  It just has to be as good or better while being faster and cheaper.”

So, what do you think?  Do you agree that these are myths?  Please share any comments you might have or if you’d like to know more about a particular topic.


Craig Ball of Craig D. Ball, P.C. – eDiscovery Trends, Part 3

This is the tenth (and final) of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Craig Ball.  A frequent court appointed special master in electronic evidence, Craig is a prolific contributor to continuing legal and professional education programs throughout the United States, having delivered over 1,000 presentations and papers.  Craig’s articles on forensic technology and electronic discovery frequently appear in the national media, and he writes a monthly column on computer forensics and eDiscovery for Law Technology News called Ball in your Court, as well as blogs on those topics at ballinyourcourt.com.

Craig was very generous with his time again this year and our interview with Craig had so much good information in it, we couldn’t fit it all into a single post.  Wednesday was part 1 and yesterday was part 2.  Today is the third and last part.  A three-parter!

Note: I asked Craig the questions in a different order and, since the show had not started yet when I interviewed him, instead asked about the sessions in which he was speaking.

What are you working on that you’d like our readers to know about?

I’m really trying to make 2013 the year of distilling the extensive but idiosyncratic body of work that I’ve amassed through years of writing and bringing it together into a more coherent curriculum.  I want to develop a no-cost casebook for law students and to structure my work so that it can be more useful for people in different places and phases of their eDiscovery education.  So, I’ll be working on that in the first six or eight months of 2013 as both an academic and a personal project.

I’m also trying to go back to roots and rethink some of the assumptions that I’ve made about what people understand.  It’s frustrating to find lawyers talking about, say, load files when they don’t really know what a load file is; they’ve never looked at one.  They’ve left it to somebody else and, so, the resolution of difficulties has gone through so many hands and is plagued by so much miscommunication.  I’d like to put some things out there that will enable lawyers, in a non-threatening and accessible way, to gain comfort in having a dialog about the fundamentals of eDiscovery that you and I take for granted, so that we don’t have to have this reliance upon vendors for the simplest issues.  I don’t mean that vendors won’t do the work, but I don’t think we should have to bring a technical translator in for every phone call.

There should be a corpus of competence that every litigator brings to the party, enabling them to frame basic protocols and agreements that aren’t merely parroting something that they don’t understand, but enabling them to negotiate about issues in ways that the resolutions actually make sense.  Saying “I won’t give you 500 search terms, but I’ll give you 250” isn’t a rational resolution.  It’s arbitrary.

There are other kinds of cases where you can identify search terms “all the live long day” and they’re really never going to get you that much closer to the documents you want.  The best example in recent years was the Pippins v. KPMG case.  KPMG was arguing that they could use search terms against samples to identify forensically significant information about work day and work responsibility.  That didn’t make any sense to me at all.  The kinds of data they were looking for weren’t going to be easily found by using keyword search.  It was going to require finding data of a certain character and bringing a certain kind of analysis to it, not an objective culling method like search terms.  Search terms have become like the expression “if you have a hammer, the whole world looks like a nail”.  We need to get away from that.

I think a little education made palatable will go a long way.  We need some good solid education and I’m trying to come up with something that people will borrow and build on.  I want it to be something that’s good enough that people will say “let’s just steal his stuff”.  That’s why I put it out there – it’s nice that they credit me and I appreciate it; but if what you really want to do is teach people, you don’t do it for the credit, you do it for the education.  That’s what I’m about, more this year than ever before.

Thanks, Craig, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Craig Ball of Craig D. Ball, P.C. – eDiscovery Trends, Part 2

This is the tenth (and final) of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Craig Ball.  A frequent court appointed special master in electronic evidence, Craig is a prolific contributor to continuing legal and professional education programs throughout the United States, having delivered over 1,000 presentations and papers.  Craig’s articles on forensic technology and electronic discovery frequently appear in the national media, and he writes a monthly column on computer forensics and eDiscovery for Law Technology News called Ball in your Court, as well as blogs on those topics at ballinyourcourt.com.

Craig was very generous with his time again this year and our interview with Craig had so much good information in it, we couldn’t fit it all into a single post.  Yesterday was part 1.  Today is part 2 and part 3 will be published in the blog on Friday.  A three-parter!

Note: I asked Craig the questions in a different order and, since the show had not started yet when I interviewed him, instead asked about the sessions in which he was speaking.

I noticed that you are speaking at a couple of sessions here.  What would you like to tell me about those sessions?

{Interviewed the evening before the show}  I am on a Technology Assisted Review panel with Maura Grossman and Ralph Losey that should be as close to a barrel of laughs as one can have talking about technology assisted review.  It is based on a poker theme – which was actually Matt Nelson’s (of Symantec) idea.  I think it is a nice analogy, because a good poker player is a master or mistress of probabilities, whether intuitively or by overtly performing mental arithmetic that is essentially a set of statistical and probability calculations.  Such calculations are key to quality assurance and quality control in modern review.

We have to be cautious not to require the standards for electronic assessments to be dramatically higher than the standards applied to human assessments.  It is one thing with a new technology to demand more of it to build trust.  That’s a pragmatic imperative.  It is another thing to demand so exalted a level of scrutiny that you essentially void all advantages of the new technology, including the cost savings and efficiencies it brings.  You know the old story about the two hikers that encounter the angry grizzly bear?  They freeze, and then one guy pulls out running shoes and starts changing into them.  His friend says “What are you doing? You can’t outrun a grizzly bear!” The other guy says “I know.  I only have to outrun you”.  That is how I look at technology assisted review.  It does not have to be vastly superior to human review; it only has to outrun human review.  It just has to be as good or better while being faster and cheaper.

We cannot let vague uneasiness about the technology cause it to implode.  If we have to examine essentially everything in the discard pile, we not only pay for the new technology but also back it up with the old, and that’s not going to work.  It will take a few pioneers who get the “arrows in the back” early on—people who spend more to build the trust around the technology that is missing at this juncture.  Eventually, people are going to say “I’ve looked at the discard pile for the last three cases and this stuff works.  I don’t need to look at all of that any more.”
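(Editor’s note: the “look at the discard pile” check Craig describes is often formalized as an elusion sample – review a random sample of the documents the tool marked non-responsive and measure how many responsive documents slipped through.  A minimal sketch in Python; the document counts and 2% responsiveness rate below are invented for illustration:)

```python
import random

def elusion_estimate(discard_pile, is_responsive, sample_size=400, seed=42):
    """Estimate the fraction of responsive documents hiding in the
    discard pile by reviewing a simple random sample of it."""
    rng = random.Random(seed)
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))
    misses = sum(1 for doc in sample if is_responsive(doc))
    return misses / len(sample)

# Hypothetical example: 10,000 discarded docs, 2% secretly responsive.
discards = [{"id": i, "responsive": i % 50 == 0} for i in range(10000)]
rate = elusion_estimate(discards, lambda d: d["responsive"])
print(f"Estimated elusion rate: {rate:.1%}")
```

A low estimated elusion rate across a few matters is exactly the kind of evidence that lets a party stop re-reviewing the whole discard pile.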

Even the best predictive coding systems are not going to be anywhere near 100% accurate.  They start from human judgment where we’re not even sure what “100% accurate” is, in the context of responsiveness and relevance.  There’s no “gold standard”.  Two different qualified people can look at the same document and give a different assessment and approximately 40% of the time, they do.  And, the way we decide who’s right is that we bring in a third person.  We indulge the idea that the third person is the “topic authority” and what they say goes.  We define their judgment as right; but, even their judgments are human.  Being human, they’re going to make misjudgments based on assumptions, fatigue, inattention, whatever.
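(Editor’s note: the reviewer-disagreement point can be made concrete with a chance-corrected agreement metric such as Cohen’s kappa.  A sketch – the ten responsiveness calls below are invented for illustration:)

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two reviewers' calls."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder labeled at their marginal rates.
    pa, pb = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(pa[lab] * pb[lab] for lab in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical: two reviewers agree on 6 of 10 responsive/non calls.
a = ["R", "R", "R", "N", "N", "R", "N", "N", "R", "N"]
b = ["R", "N", "R", "N", "R", "R", "N", "R", "N", "N"]
print(f"kappa = {cohens_kappa(a, b):.2f}")   # → kappa = 0.20
```

Raw agreement here is 60%, but kappa of 0.20 shows how little of that agreement exceeds what coin-flipping would produce – which is why a “gold standard” built from human calls is shaky ground.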

So, getting back to the topic at hand, I do think that the focus on quality assurance is going to prompt a larger and long overdue discussion about the efficacy of human review.  We’ve kept human review in this mystical world of work product for a very long time.  Honestly, the rationale for work product doesn’t naturally extend to decisions about responsiveness and relevance, though most of my colleagues would disagree with me out of hand.  They don’t want anybody messing with privilege or work product.  It’s like religion or gun control—you can’t even start a rational debate.

Things are still so partisan and bitter.  The notions of cooperation, collaboration, transparency, translucency, communication – they’re not embedded yet.  People come to these processes with animosity so deeply seated that you’re not really starting on a level playing field with an assessment of what’s best for our system of justice.  Justice is someone else’s problem.  The players just want to win.  That will be tough to change.

We “dinosaurs” will die off, and we won’t have to wait for the glaciers to advance.  I think we will have some meteoric events that will change the speed at which the dinosaurs die.  Technology assisted review is one.  We’ve seen a meteoric rise in the discussion of the topic, the interest in the topic, and I think it will have a meteoric effect in terms of more rapidly extinguishing very bad and very expensive practices that don’t carry with them any more superior assurance of quality.

More from Craig tomorrow!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Craig Ball of Craig D. Ball, P.C. – eDiscovery Trends, Part 1

This is the tenth (and final) of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Craig Ball.  A frequent court appointed special master in electronic evidence, Craig is a prolific contributor to continuing legal and professional education programs throughout the United States, having delivered over 1,000 presentations and papers.  Craig’s articles on forensic technology and electronic discovery frequently appear in the national media, and he writes a monthly column on computer forensics and eDiscovery for Law Technology News called Ball in your Court, as well as blogs on those topics at ballinyourcourt.com.

Craig was very generous with his time again this year and our interview with Craig had so much good information in it, we couldn’t fit it all into a single post.  So, today is part 1.  Parts 2 and 3 will be published in the blog on Thursday and Friday.  A three-parter!

Note: I asked Craig the questions in a different order and, since the show had not started yet when I interviewed him, instead asked about the sessions in which he was speaking.

If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

I think this is the first year where I do not have a ready answer to that question.  It’s  like the wonderful movie Groundhog Day.  I am on the educational planning board for the show, and as hard as we try to find and present fresh ideas, technology assisted review is once again the dominant topic.

This year, we will see the marketing language reposition the (forgive the jargon) “value proposition” for the tools being sold, continuing the move toward the concept of information governance.  If knowledge management had a “hook up” here at LTNY with eDiscovery, their offspring would be information governance.  Information governance represents a way to spread the cost of eDiscovery infrastructure among different budgets.  It’s not a made up value proposition.  Security and regulatory people do have a need, and many departments can ultimately benefit from more granular and regimented management of their unstructured and legacy information stores.

I remain something of a skeptic about what has come to be called “defensible deletion.”  Most in-house IT people do not understand that, even after you purchase a single instance de-duplication solution, you’re still going to have as much as 40% “bloat” in your collection of data across local stores, embedded and encoded attachments, etc.  So, there are marked efficiencies we can achieve by implementing sensible de-duplication and indexing mechanisms that are effective, ongoing and systemic.  Consider enterprise indexing models that basically let your organization and its information face an indexing mechanism in much the same way as the internet faces Google.  Almost all of us interact with the internet through Google, and often get the information we are seeking from the Google index or synopsis of the data without actually proceeding to the indexed site.  The index itself becomes the resource, and the document indexed a distinct (and often secondary) source.  We must ask ourselves: “if a document is indexed, does it ever leave our collection?”
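(Editor’s note: single-instance storage of the kind Craig mentions is typically driven by content hashing – identical byte streams hash to the same digest, so only the first copy is kept.  A minimal sketch; the file names and contents are invented for illustration:)

```python
import hashlib

def dedupe(documents):
    """Keep one instance per unique content hash."""
    seen, unique = set(), []
    for name, content in documents:
        digest = hashlib.sha256(content).hexdigest()
        if digest not in seen:          # first instance wins
            seen.add(digest)
            unique.append((name, content))
    return unique

docs = [
    ("report.docx", b"quarterly numbers"),
    ("report-copy.docx", b"quarterly numbers"),   # duplicate content
    ("memo.docx", b"meeting notes"),
]
kept = dedupe(docs)
print(f"kept {len(kept)} of {len(docs)}")   # → kept 2 of 3
```

Note that this only catches byte-identical copies – the “bloat” Craig describes survives exactly because near-duplicates, embedded attachments and re-encoded copies hash differently.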

I also think eDiscovery education is changing and I am cautiously optimistic.  But, people are getting just enough better information about eDiscovery to be dangerous.  And, they are still hurting themselves by expecting there to be some simple “I don’t really need to know it” rule of thumb that will get them through.  And, that’s an enormous problem.  You can’t cross examine from a script.  Advocates need to understand the answers they get and know how to frame the follow up and the kill.  My cautious optimism respecting education is a function of my devoting so much more of my time to education at the law school and professional levels as well as for judicial organizations.  I am seeing a lot more students interested in the material at a deeper level, and my law class that just concluded in December impressed me greatly.  The level of enthusiasm the students brought to the topic and the quality and caliber of their questions were as good as any I get from my colleagues in the day to day practice of eDiscovery.  Not just from lawyers, but also from people like you who are deeply immersed in this topic.

That is not so much a credit to my teaching (although I hope it might be).  The greatest advantage that students have is that they haven’t yet acquired bad habits and don’t come with preconceived notions about what eDiscovery is supposed to be.  Conversely, many lawyers literally do not want to hear about certain topics–they “glaze” and immediately start looking for a way to say “this cannot be important, I cannot have to know this”.  Law students don’t waste their energy that way.  If the professor says “you need to know this”, then they make it their mission to learn.  Yesterday, I had a conversation with a student where she said “I really wish we could have learned more about search strategies and more ways to apply sophisticated tools hands on”.  That’s exactly what I wish lawyers would say.

I wish lawyers were clamoring to better understand things like search or de-duplication or the advantages of one form of production over another.  Sometimes, I feel like I am alone in my assessment that these are crucial issues. If I am the only one thinking that settling on forms of productions early and embracing native forms of production is crucial to quality, what is wrong with me?

I am still surprised at how many people TIFF most of their collection or production.

They have no clue how really bad that is, not just in terms of cost but also in terms of efficiency.  I am hoping the dialogue about TAR will bring us closer to a serious discussion about quality in eDiscovery.  We never had much of a dialogue about the quality of human review or the quality of paper production.  Either we didn’t have the need or, more likely, we were so immersed in what we were doing that we did not have the language to even begin the conversation.

I wrote in a blog post recently about an experiment discussed in my college Introductory Psychology course, in which kittens were raised so that they could only see for a few hours a day, in an environment composed entirely of horizontals or verticals.  Apparently, if you are raised from birth only seeing verticals, you do not learn to see horizontals, and vice-versa.  So, if I raise a kitten among the horizontals and take a black rod and put it in front of them, they see it when it is horizontal.  But, if I orient it vertically, it disappears in their brain.  That is kind of how we are with lawyers and eDiscovery.

There are just some topics that you and I and our colleagues see the importance of, but lawyers have been literally raised without the ability to see why those things matter.  They see what has long been presented to them in, say, Summation or Concordance, as an assemblage of lousy load files and error ridden OCR and colorless images stripped of embedded commentary.  They see this information so frequently and so exclusively that they think that’s the document and, since they only have paper document frames of reference (which aren’t really that much better than TIFFs), they think this must be what electronic evidence looks like.  They can’t see the invisible plane they’ve been bred to overlook.

You can look at a stone axe and appreciate the merits of a bronze axe – if all that you’re comparing it to are prehistoric tools, a bronze axe looks pretty good.  But, today we have chainsaws.  I want lawyers demanding chainsaws to deal with electronic information and to throw away those incredibly expensive stone axes; but, unfortunately, they make more money using stone axes.  But, not for long.  I am seeing the “house of cards” start to shake, and the house of cards I am talking about is the $100 to $300 (or more) per gigabyte pricing for eDiscovery.  I think that model is not only going to be short lived, but will soon be seen as negligence on the part of lawyers who go that route and as exploitive gouging by service providers, like selling a bottle of water for $10 after Hurricane Sandy.  There is a point at which price gouging will be called out.  We can’t get there fast enough.

More from Craig tomorrow!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!


Ralph Losey of Jackson Lewis, LLP – eDiscovery Trends, Part 2

This is the ninth of the 2013 LegalTech New York (LTNY) Thought Leader Interview series.  eDiscoveryDaily interviewed several thought leaders at LTNY this year and generally asked each of them the following questions:

  1. What are your general observations about LTNY this year and how it fits into emerging trends?
  2. If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?
  3. What are you working on that you’d like our readers to know about?

Today’s thought leader is Ralph Losey. Ralph is an attorney in private practice with the law firm of Jackson Lewis, LLP, where he is a Partner and the firm’s National e-Discovery Counsel. Ralph is also an Adjunct Professor at the University of Florida College of Law teaching eDiscovery and advanced eDiscovery. Ralph is also a prolific author of eDiscovery books and articles, the principal author and publisher of the popular e-Discovery Team® Blog, founder and owner of an intensive online training program, e-Discovery Team Training, with attorney and technical students all over the world and founder of the new Electronic Discovery Best Practices (EDBP) lawyer-centric work flow model.

Our interview with Ralph had so much good information in it, we couldn’t fit it all into a single post.  Friday was part 1.  Here’s the rest of the interview!

If last year’s “next big thing” was the emergence of predictive coding, what do you feel is this year’s “next big thing”?

{continued}

I am currently conducting an experiment on my own where, while I wouldn’t call it scientific, I am following the fully automated approach.  I am following the “Borg” as I call it, because I don’t want to criticize something unless I have personal experience.  I will be writing about the results of it later this year.  Last year, if you remember, I wrote about a 50,000 word narrative that demonstrated the multimodal approach using keyword search with predictive coding.  Now, I am performing a monomodal, fully automated approach to compare the differences between the approaches.  While I don’t want to give away the results too soon, I will provide just a little clue and say “so far, no surprises”, but I will write that up.

So, I don’t think “what is this year’s big thing” is the right question.  I think it is “CAR, CAR, CAR” for the next several years, because it has totally changed everything.  To give you an example of how it has changed everything, I am for the first time in many years able to do my own review (which I actually like to do).  While I bill at a fairly high rate (as you would expect from someone with my experience), it now makes sense for me to do the review.  I can do the work of 50 linear reviewers, because I have this “steam shovel” and I think we are seeing the “death of John Henry”.
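(Editor’s note: the “steam shovel” Ralph describes is, at bottom, a text classifier trained on the lawyer’s own coding decisions.  Real TAR tools use far more sophisticated models, but a toy stdlib-only sketch conveys the idea – every document and label below is invented for illustration:)

```python
from collections import Counter

def train(labeled_docs):
    """Build per-label word-frequency profiles from coded documents."""
    profiles = {}
    for text, label in labeled_docs:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def predict(profiles, text):
    """Score a new document against each label's profile; best wins."""
    words = Counter(text.lower().split())
    def score(counts):
        total = sum(counts.values())
        return sum(counts[w] * n for w, n in words.items()) / total
    return max(profiles, key=lambda label: score(profiles[label]))

# Hypothetical seed set coded by the reviewing lawyer.
coded = [
    ("merger agreement purchase price", "responsive"),
    ("board approved the merger terms", "responsive"),
    ("lunch menu for friday", "non-responsive"),
    ("parking garage closed friday", "non-responsive"),
]
model = train(coded)
print(predict(model, "draft merger purchase agreement"))   # → responsive
```

The lawyer’s judgment goes in as training labels; the machine then applies that judgment consistently, at speed, across the whole collection – which is the leverage Ralph is pointing to.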

We are seeing the dying gasp of vendors saying “no, no, no” and, in a couple of years, we are going to see a very big shake-out in the industry again.  I think this is the trend of this decade and it will be all about machine learning, because that is the future.  The future is artificial intelligence leveraging your human intelligence, teaching a computer what you know about a particular case and then letting the computer do what it does best – which is read at 1 million miles per hour and be totally consistent.

What are you working on that you’d like our readers to know about?

I want people to know about my new initiative that I started last year to try to come up with a model of eDiscovery that is just for lawyers.  There are two kinds of eDiscovery going on, and what the vendors do is very important, but vendors cannot practice law, and cannot give legal advice or legal opinions on what is reasonable and what is not reasonable.  Vendors sometimes do so, not because they are bad people, but because the lawyers don’t have a clue, so they count on them for more than they should.

So, that’s why I created eDiscovery Best Practices (EDBP.com, our blog post about it here) as a guide for lawyers.  I was happy to get that domain, and I had to pay a little bit for it, but I got a good deal on it and it’s a long term investment, so obviously I am in this for the long term.  One of the happiest things about my life is that my son is following in my footsteps and already doing better than I am.  What we are here to do is to provide guidance to lawyers and there was no real guide to what a lawyer does in eDiscovery.  It came out of a frustration of teaching law school for many years and the only model we had was EDRM.  It’s an excellent model, I love it and have taught it and know it backwards and forwards, but it only takes you so far.  That model isn’t focused on what lawyers and law students are going to do, so EDBP is a similar model, with ten steps, but focused on what lawyers do.

We are not going to attempt to define minimum standards – that is a court function and has to do with malpractice.  Instead, the EDBP model is about best practices.  Obviously, the best practices for a billion dollar case are going to be very different from those for a hundred thousand dollar case.  The fundamental best practice is proportionality, so you will do a lot more for the big case.  EDBP is meant to be a crowd-sourced model, but the crowd is limited to practicing lawyers who actually deal with discovery.

Thanks, Ralph, for participating in the interview!

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!
