
Jason R. Baron of Drinker Biddle & Reath LLP, Part 1: eDiscovery Trends

This is the fifth in our 2017 Legaltech New York (LTNY) Thought Leader Interview series.  eDiscovery Daily interviewed several thought leaders at LTNY (aka Legalweek) this year to get their observations regarding trends at the show and generally within the eDiscovery industry.

Today’s thought leader is Jason R. Baron of Drinker Biddle & Reath LLP.  Jason is a member of Drinker Biddle’s Information Governance and eDiscovery practice and co-chair of the Information Governance Initiative.  An internationally recognized speaker and author on the preservation of electronic documents, Jason previously served as the first Director of Litigation for the U.S. National Archives and Records Administration, and as a trial lawyer and senior counsel at the Department of Justice.  He also was a founding co-coordinator of the National Institute of Standards and Technology TREC Legal Track, a multi-year international information retrieval project devoted to evaluating search issues in a legal context.  He served as lead editor of the recently published ABA book, Perspectives on Predictive Coding and Other Advanced Search Methods for the Legal Practitioner.

Jason provided so much good information that we decided to publish his interview in two parts.  The remainder of his interview will be published on Monday and Craig Ball’s interview (also in two parts) will be published on Wednesday and Thursday of next week.

What are your observations about LTNY this year and how it compared to other LTNY shows that you have attended?

It certainly has had a different look and feel this year, given that it’s now Legalweek in its branding.  (Although some of us old-timers will always refer to it as Legaltech.)  I was very impressed with the lineup of speakers, despite the fact that several of the names that have routinely appeared year after year, myself included, were not speaking this year.  Instead, there seems to have been a broader reach that goes beyond eDiscovery, including a whole set of people who are in the data analysis and forensics world.

I really liked the keynote on day one.  I had previously read Andrew McAfee’s book, The Second Machine Age, and I certainly agreed with his observation in the keynote that we are not only in an era of accelerating change but that the pace of acceleration is itself accelerating.  Part of McAfee’s presentation was about how, up until about a year ago, the prediction was that it would take another decade or two for a software program to beat the world’s best Go player.  He showed articles from 2015, from publications like The Wall Street Journal, that talked up the complexity of the game.  He noted that Go is intuitive, that the game builds in complexity as it progresses, and that the really interesting thing is that, unlike chess, the best Go players in the world really have no idea how they do it.  They simply intuit a winning strategy by looking at the 19 x 19 grid.  And yet, remarkably, this past year, a machine did beat the world’s best Go player – a decade or more ahead of the predictions!

McAfee went on to highlight another element, true in both Go and chess, which will be increasingly true in every domain in which machines are learning, namely: that machines are full of surprises.  Not only do they now do better in some domains than the best humans, but they go about tasks in ways that are different from the ways humans do.  McAfee illustrated a move the machine made in Go that no human would ever make; it so surprised the best player in the world that he got up out of his chair and walked around the room, as he just couldn’t believe it (in part because it seemed like a move that a novice would make).  And yet the machine won that game.  So, we’re actually at an inflection point where humans are learning from software how to play these games both better and differently (i.e., more like a computer).

Now, these examples illustrate a larger point of special interest to the lawtech crowd: that we are closer and closer to experiencing a “Turing test” moment in a number of domains, where it is increasingly difficult to distinguish AI from human responses.  Because we are living in a world where things are happening at such an accelerated pace, it wouldn’t surprise me if, in five years, Legaltech (oops, Legalweek) is mostly about the law of AI and robots – including the ethics of handling extremely smart robots that mimic human behavior and then some.  I am not a believer that we will soon be in peril from a world taken over by super-intelligent machines.  But I do believe that we will be increasingly reliant on software, and that software will perform at a level where the Alexas and Siris of the future will seem to be our buddies, not just limited automated personal assistants.  We won’t even need screens anymore; we’ll simply be giving verbal instructions to these devices.  You already see that increasingly, not only with Amazon Echo’s Alexa but also in smart dolls and a range of other products.  But all this also means that Alexa and the other devices are accumulating data (ESI) from the people using them, all of which is grist for the e-discovery mill.  This world of IoT, smart devices, and smart analytics is what McAfee and others are talking about: the acceleration in technological change is itself accelerating!

I think all of this means an even more interesting Legaltech of the future.  Predictive coding technology was the hot topic at Legaltech about four years ago, after Judge Peck issued his ruling in Da Silva Moore.  (I think there were a dozen or more panels on technology assisted review and predictive coding that year.)  More recently we’ve seen a wave of panels on information governance and data analytics – which I plead guilty to being a part of.  As I said, I think we are now looking at a world of smart devices, IoT, AI, and robotics that will soon dominate the conversation, raising lots of ethical issues.  Indeed, I just read that in the EU an effort has been initiated to have a committee look into the ethics of robots and human interaction with robots.  So, we are living in very, very interesting and exciting times.  That’s what you get when you’re living in an exponentially growing world of data.

Last year, there were a few notable case law decisions related to the use of Technology Assisted Review.  How do you think those cases impacted the use and acceptance of TAR?

I think you’re seeing a more sophisticated level of engagement in predictive coding cases on the part of a greater slice of the judiciary; it isn’t simply the same cadre of judges providing the rulings.  There are also new rulings in the UK, Ireland, and Australia, and that is all good.  I’m not going to talk in detail about any one case, but I think there is a visible trend line: the lurking question in complex, document-intensive e-discovery cases is whether a party acted reasonably in not using some form of advanced search and review techniques, like technology assisted review.  As Judge Peck said in the Hyles case, we’re not there yet, but it seems to me that’s where the hockey puck will be soon.

If I’m right, and the burden will be to explain why one didn’t use advanced search methods, it follows that clients should be demanding the greater efficiencies that can be obtained through such methods.  Granted, you have to get past a certain level of financial risk in a case to justify the use of advanced search methods.  Obviously, employing keywords and even manual searching in very small collections is still perfectly viable.  But when you’re in complex litigation of a certain size, it is unfathomable to me that a major Fortune 500 corporation wouldn’t at least game out the cost of using traditional manual search methods supplemented by keywords versus the use of some advanced software to supplement those older, “tried and true” methods.  As you know, I am a very big advocate for all of us looking into these issues, not just to benefit clients in eDiscovery but also across all kinds of legal engagements.
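To make that “game out the cost” exercise concrete, here is a minimal back-of-envelope sketch in Python.  Every figure in it (the collection size, keyword hit rate, reviewer throughput and hourly rate, the fraction of documents a TAR workflow leaves for human review, and the software cost) is a hypothetical placeholder rather than anything drawn from the interview; only the shape of the comparison matters.

```python
# A back-of-envelope model for "gaming out" linear review versus a
# TAR-assisted workflow.  Every number below is a hypothetical
# assumption for illustration only, not a figure from the interview.

DOCS_COLLECTED = 1_000_000    # documents collected (hypothetical)
KEYWORD_HIT_RATE = 0.30       # fraction surviving keyword culling (hypothetical)
DOCS_PER_HOUR = 50            # reviewer throughput (hypothetical)
REVIEW_RATE_USD = 60.00       # blended hourly review cost (hypothetical)

# Scenario 1: keyword culling followed by linear (eyes-on-every-document) review.
linear_docs = DOCS_COLLECTED * KEYWORD_HIT_RATE
linear_cost = linear_docs / DOCS_PER_HOUR * REVIEW_RATE_USD

# Scenario 2: a TAR workflow that ranks the culled set so humans review only
# the likely-responsive slice plus training and validation samples.  The 20%
# review fraction and the software cost are placeholders, not vendor figures.
TAR_REVIEW_FRACTION = 0.20
TAR_SOFTWARE_COST = 100_000.00
tar_docs = linear_docs * TAR_REVIEW_FRACTION
tar_cost = tar_docs / DOCS_PER_HOUR * REVIEW_RATE_USD + TAR_SOFTWARE_COST

print(f"Linear review:       {linear_docs:>9,.0f} docs  ${linear_cost:>12,.2f}")
print(f"TAR-assisted review: {tar_docs:>9,.0f} docs  ${tar_cost:>12,.2f}")
print(f"Estimated savings:   ${linear_cost - tar_cost:,.2f}")
```

On these made-up numbers, the TAR-assisted path puts 60,000 documents in front of reviewers instead of 300,000; in a real matter, where the crossover falls depends entirely on collection size, richness, review rates, and software and consulting costs.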

I realize I have been an evangelist for advanced search techniques.  So let me just quote, for the record here, a couple of sentences I’ve written as part of an Introduction to the book Perspectives on Predictive Coding and Other Advanced Search Methods for the Legal Practitioner (link above):  “As the book goes to print, there appear to be voices in the profession questioning whether predictive coding has been oversold or overhyped and pointing to resistance in some quarters to wholesale embrace of the types of algorithmics and analytics on display throughout this volume.  Notwithstanding these critics, the editors of this volume remain serene in their certainty that the chapters of this book represent the future of eDiscovery and the legal profession as it will come to be practiced into the foreseeable future by a larger and larger contingent of lawyers.  Of course, for some, the prospect of needing to be technically competent in advanced search techniques may lead to considerations of early retirement. For others, the idea that lawyers may benefit by embracing predictive coding and other advanced technologies is exhilarating.  We hope this book inspires the latter feeling on the part of the reader.”

Since you have mentioned your book, tell us more about its contents.

This book was a labor of love, as no one will be getting any royalties!  Michael Berman originally suggested to me that this volume would be a useful addition to the legal literature, and over the next two-plus years he and I, with the able assistance of Ralph Losey, managed to pull off getting the best minds in the profession to contribute content and to work towards publication.  I think this is a volume that speaks not only to practitioners “inside the bubble” (i.e., at Legaltech or at places like The Sedona Conference®), but also to a larger contingent of lawyers who don’t know about the subject in any depth and wish to learn.  These are lawyers who earnestly want to be technologically competent under ABA Model Rule 1.1, and who are aware of a growing body of bar guidance, including the recent California Bar opinion on e-discovery competence.   I think more and more, especially in complex cases, such competency means being at least aware of emerging, advanced search and document review techniques.  That may not exactly be easy for some lawyers (especially in my age cohort), but I am sure it will be easier for the generation succeeding us. 

As for some specifics: Judge Andrew Peck wrote the book’s Foreword, and Maura Grossman and Gordon Cormack were very generous in not only submitting an expert, original chapter (“A Tour of TAR”) but also in allowing us to reprint their glossary of TAR terms.  Phil Favro provided a supplement to his leading article with Judge John Facciola on seed sets and privilege, and Judge Waxse’s important (and controversial) law review article on courts being called upon to apply a Daubert test to advanced search is included.

Most of the 20 chapters in the book are original.  There is a really excellent chapter about antitrust law and predictive coding by Robert Keeling and Jeffrey Sharer.  There is a wonderful chapter on emerging e-discovery standards by Gil Keteltas, Karin Jenson and James Sherer.  Ronni Solomon and her colleagues at King & Spalding wrote a chapter on the defensibility of TAR for a big firm on the defense side.  The late Bill Butterfield and Jeannine Kenney wrote a chapter spelling out, from the plaintiff’s side, considerations about how to use predictive coding in a fair way.  William Hamilton supplied a much-needed chapter discussing predictive coding for small cases.  Vincent Catanzano, Sandra Rampersaud, and Samantha Greene contributed a chapter on setting up TAR protocols.  There are several other chapters on information governance written by Sandy Serkes and Leigh Isaacs, along with a reprint of Bennett Borden’s and my law review article on “Finding the Signal in the Noise” (with a new coda).  Part of the book provides perspectives from other leading PhDs with whom I’ve worked during the TREC Legal Track and at other workshops, including Doug Oard, Dave Lewis, and William Webber.  Bruce Hedin and colleagues at H5 supplied a thought-provoking chapter on standards in the field for the use of advanced search.  Kathryn Hume educates us on deep learning.  Michael Berman and I (with co-authors), and Ralph Losey, each supplied additional articles rounding out the volume.

Although no one expects the book to be a best-seller on Amazon, I really believe the 650 pages of text will be of interest to readers of your column, Doug, and so I do recommend checking it out while supplies last (kidding)!

Part 2 of Jason’s interview will be published on Monday.

And to the readers, as always, please share any comments you might have or if you’d like to know more about a particular topic!

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine. eDiscovery Daily is made available by CloudNine solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Daily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
