eDiscoveryDaily


Optimizing Your Infrastructure for LAW & Explore eDiscovery

By: Joshua Tucker

It’s safe to say Microsoft isn’t going out of business anytime soon. Last year alone they grew 18 percent, reaching $168 billion in revenue*. They are continuously updating their software, improving their products and functionality, and purchasing emerging software. They want to empower every person and organization on the planet to achieve more*, but the power you get from the software is up to you. Microsoft does not know your intended purpose or use of their software; all they can do is provide the software and the bare-minimum requirements to make it run.

CloudNine software is no different. Let’s take a deep dive into your infrastructure and how you can optimize it with the CloudNine on-premise processing platforms.

We see that several of our clients run their environments with the most minimal recommended resources. Just like Microsoft can’t know how large your SQL server needs to be, we don’t know the level of demand your client’s data is putting on your workstation. What we DO know is that the number of files per case is growing, the complexity of files is growing, and resources are sparse.

We will cover the areas where we can make vast improvements in the efficiency in the way you are using your CloudNine software.

Your Local Area Network

Let’s use the common “business triangles” as a frame of reference. Examples would be “people, technology, and process”; “team, leadership, and mission”; or, my favorite, “price, speed, and quality”. The more balanced your business triangle, the better. Too much or too little emphasis on one side and that balance will start to wane.

The eDiscovery version of the business triangle is called the ‘Local Area Network’. The first side of this ‘Local Area Network’ is the hardware or the backbone of your infrastructure. The second side would be the software, or the muscle needed to use that backbone. The third side is your network file server or the brain’s storage area, which will hold all the knowledge that our software is going to discover for you. And finally, the three sides are then connected, like sinew, with your local network speed.

You want to find the sweet spot that balances cost, throughput demands, speed to review, and hardware budget. Let us go ahead and call this the “Goldilocks Zone”.

Real-life case study: About 8 years ago, we were working with a client that had a few virtual machines and a few physical machines. The virtual machines were 4 core and 8GB of RAM. The physical machines were 8 core and 16GB of RAM.  IT wanted to get rid of the physical machines, but there was resistance to letting them go because they were able to process so much faster than the virtual machines. We conducted some testing to find the Goldilocks Zone between the amount of data being processed, the expected speed, and the cost. We created a few virtual machines with 4, 8, and 12 cores and ran tests to determine the correct core count for our company. We determined that an 8-core box with 16GB of RAM was able to process data much faster than a 4-core box with only 8GB of RAM.

After we completed optimizing the processing machines, we ventured forth into the other areas of our infrastructure.

Next, we reached out to our SQL team to see what would happen if we added more RAM and more SQL cores. We saw the same result. As we added more resources, we found that we were able to increase the speed on LAW’s communication with SQL. Faster communication equals a faster read/write, which equated to a faster processing speed. During this testing we also found that the more SQL cores, the more we could horizontally spread out the processing tasks on our LAW machines (i.e., we could have more machines writing to the same database).

Note: Today, I have a simple equation to determine the correct size of SQL:  Take the total number of read/write instances that can be communicating or interacting with SQL. Divide that number by three. The resulting number is the SQL cores needed. For RAM, take the same number of instances and multiply it by four.
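That rule of thumb can be sketched as a quick calculation. This is just the heuristic above expressed in Python, not an official CloudNine formula, and the function name is illustrative:

```python
import math

def sql_sizing(read_write_instances):
    """Rough SQL Server sizing from the rule of thumb above:
    cores = instances / 3 (rounded up), RAM in GB = instances * 4."""
    cores = math.ceil(read_write_instances / 3)
    ram_gb = read_write_instances * 4
    return cores, ram_gb

# Example: 12 LAW/Explore instances reading and writing to SQL
cores, ram_gb = sql_sizing(12)
print(f"SQL cores: {cores}, RAM: {ram_gb} GB")  # SQL cores: 4, RAM: 48 GB
```

So a shop with a dozen machines hitting SQL would budget roughly 4 cores and 48GB of RAM for the SQL server, then adjust from there based on testing.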

After we completed this environment review, we had larger machines, faster read/write capability, and more machines to process on each matter. The Goldilocks Zone for SQL ensures that you have the right number of SQL cores and RAM per instances that have read/write work with SQL.

(For LAW workstations, 8 cores and 16GB of RAM are highly suggested; for Explore, 8 cores and 32GB of RAM.)

Note: Your LAN does not have to be local to your office, but SQL, the LAW database folder structure and the workstations all need to be in close proximity to each other. The closer the better.

Software and Upgrades

Let’s go back to our Microsoft analogy. Microsoft keeps improving their product and each version of the operating system has the potential of changing the location or how certain files work. It is imperative that the operating system that is installed on your workstations is supported by the version of the product that you are going to use. If it isn’t, the software could act in a way that is completely unexpected – or worse.

The data we process can be a threat to our organization (and this goes for everyone!), and the best way to protect yourself is to stay up to date on patches and antivirus software. I highly suggest that you first patch in a test environment, testing each part of the tool and making sure that the patching will not interfere with your work. The more current your tested environment, the more secure your, and your client’s, data will be.

One thing I like about the right test environment is that once your testing is done, you can make an image and deploy that image to the rest of your workstations. It is fast and efficient.

How your processing engine gets metadata to you matters. For instance, there are engines, like LAW, that will expand the files and harvest all the metadata. This type of processing is slower in getting the data in review, but much faster in the final export. There are also engines, like CloudNine Explore, that will hold off on expanding the data but harvest all the text and metadata extremely quickly. This workflow is great for ECA purposes.

How deep these tools dig into your data is also important. You never want a privileged document produced because your processing engine did not discover it. Find out whether your engine is collecting all the natives, text, and metadata that you need for these legal matters, and then come up with a workflow that accentuates the strengths of your tool.

Having an Investment in your File Storage

The price of data storage has been coming down for years, which is great news considering that discoverable data keeps growing and will continue to grow at an astounding pace. It is estimated that this past year, each person on the planet created 1.7 megabytes of information every second. Every matter’s data size has increased and, with it, the speed to review. All of this must run efficiently, all of it must be backed up, and all of it must be in your disaster recovery plans.

Network speed matters. It ties your infrastructure together. If the processing machine can’t talk to the SQL machines quickly, or to the network storage efficiently, then it won’t perform at top speed, no matter how many cores you have. Network speed should be considered not only for the processing department but for your whole company. We highly suggest a gigabit network, and if you are a firm or legal service provider, you might want to look at a 10-gigabit network.

Even with a gigabit network, your workstations, SQL server, and file server need to be local to each other. Having one data center or central location helps keep those resources working more effectively, getting you a higher return on investment on your machines.

Pro tip! There is a quick and easy way to test your network speed without having to contact IT. Find a photo that is near 1MB and put it in the source location. Log into one of your workstations, open a window to that source location, and drag that image to your desktop. Then, drag it back. Both moves should feel instantaneous. If either move takes more than one second, then your network speed needs to be improved.
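The same drag-and-drop check can be scripted. This is a minimal sketch: the file here is generated locally so the demo is self-contained, but in practice you would point the source path at your network share (e.g. a UNC path) and the destination at your desktop:

```python
import shutil
import tempfile
import time
from pathlib import Path

def timed_copy(src, dst):
    """Copy a file and return the elapsed seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

# Demo uses a locally generated ~1 MB file as a stand-in for the photo;
# replace `src` with your network source location for a real test.
workdir = Path(tempfile.mkdtemp())
src = workdir / "test_photo.jpg"
src.write_bytes(b"\x00" * 1_000_000)
dst = workdir / "copy_of_photo.jpg"

pull = timed_copy(src, dst)   # source -> desktop
push = timed_copy(dst, src)   # desktop -> source
for name, elapsed in (("pull", pull), ("push", push)):
    status = "OK" if elapsed <= 1.0 else "SLOW - check your network"
    print(f"{name}: {elapsed:.3f}s ({status})")
```

If either direction takes more than a second against the real network share, that matches the “needs improvement” threshold described above.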

RECAP

It is our responsibility to figure out what we need to get full capacity out of outside tools. To run CloudNine’s LAW, we need workstations with at least 8 cores and 16GB of RAM. For CloudNine Explore workstations, we need 8 cores and 32GB of RAM, and a SQL environment that scales to the number of instances interacting with it.

Ensure that your software matches up with the recommended versions for your processing engine. If you are on, or working with, an operating system that isn’t on that processing engine’s supported list, you could get unexpected results – or worse, bad data. Line up the programs, test before you deploy, and stay up to date.

Know where your data is stored and the speed at which your systems talk to each other. Keep your environment in close proximity.

All in all, in order to get the top speed and performance out of CloudNine’s tools (or any third-party software you purchase), you must invest in the right resources.

Keep working towards your “Goldilocks Zone” – the sweet spot between speed, price, and quality.

If you are interested in having a CloudNine expert analyze your environment and provide recommendations for efficiencies, please contact us for a free Health Check.

 

*https://www.statista.com/statistics/267805/microsofts-global-revenue-since-2002/

* https://www.priceintelligently.com/blog/subscription-revenue-adobe-gopro-microsoft-gillette

* https://www.comparably.com/companies/microsoft/mission

* https://docs.microsoft.com/en-us/sql/sql-server/install/hardware-and-software-requirements-for-installing-sql-server-2019?view=sql-server-ver15

 

Have you considered the implications of time zones when it comes to your litigation needs?

by: Trent Livingston, Chief Technology Officer

Most of today’s legal technology platforms require that a time zone be selected at the time of data ingestion. Or, in the case of forensic software, the time stamp is displayed with a time zone offset based upon the device’s time zone setting. However, when conducting a review, the de facto time zone setting for your litigation is often determined ahead of time, based upon subjective information – likely the region in which the primary custodian resides. Once that time zone is selected, everything is adjusted to it. It is “set in stone,” so to speak. In some cases this is fine, but in others it can complicate things, especially if you want to alter your time zone mid-review.

Let’s start by understanding time zones, which immediately raises the question: “How many time zones are there in the world?” After all, it can’t be that many, right? Well, don’t start up your time machine just yet! To summarize a Quora answer (https://www.quora.com/How-many-timezones-do-we-have-in-the-world), we arrive at the following confusing mess.

Spanning our globe, there are a total of 41 different time zones. Given the number of time zones, “shifting time” (so to speak) can be of the utmost importance when examining evidentiary data.

If everything is set to Eastern Standard Time but does not properly allocate for time zone changes, a software application could arbitrarily alter a time stamp inconsistently, and consistency is what really matters! What happens if two of the parties to a matter are in New York while two of the parties are in Arizona? Arizona does not observe Daylight Saving Time. This could result in a set of timestamps being thrown off by an hour spanning approximately five months of the data set (based upon Daylight Saving Time rules). Communication responses that may have happened within minutes now seemingly occur an hour later (or earlier depending on how to look at it). Forensic records could fall out of sync with other evidentiary data and communications or, worse yet, sworn testimony. The key is to ensure consistency to avoid confusion.

CloudNine’s ESI Analyst (ESIA) normalizes everything to Coordinated Universal Time (UTC) upon ingestion, leveraging the original time zone or offset. By doing this, ESIA can display the time zone of the project manager’s choosing (either set at the project level or by the specific user’s account time zone setting). This allows for the time stamp display of any evidence to be changed at any time to the desired time zone across an entire project, allowing for the dynamic view of time stamps. Not only can it be changed during a review, but also set at export. All original metadata is stored, and available during export so that the adjusted time stamp can be leveraged for timelines, while the original time stamp and time zone settings are preserved for evidentiary purposes.
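The normalize-then-display approach can be illustrated with Python’s standard zoneinfo module. This is a minimal sketch of the same idea, not ESIA’s actual implementation; the timestamp is hypothetical:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A message sent at 2:30 PM Eastern on a summer day (DST in effect in NY)
sent = datetime(2021, 7, 1, 14, 30, tzinfo=ZoneInfo("America/New_York"))

# Normalize to UTC at "ingestion", leveraging the original offset
utc_stamp = sent.astimezone(timezone.utc)
print(utc_stamp.isoformat())  # 2021-07-01T18:30:00+00:00

# Display in the reviewer's chosen time zone at any point later
for tz in ("America/New_York", "America/Phoenix"):
    local = utc_stamp.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%Y-%m-%d %H:%M %Z"))
# New York shows 14:30 EDT; Phoenix (no DST) shows 11:30 MST
```

Because the stored value is UTC, the displayed zone can be swapped at any time without touching the underlying evidence, and the New York/Arizona discrepancy described above falls out automatically.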

When performing analysis of disparate data sets, this methodology allows users to adjust data to see relative time stamps to a particular party involved in that specific investigation. For example, an investigation may involve multiple parties that are all located in different time zones. Additionally, these users may be traveling to different countries. Adjusting everything to Eastern Time may show text messages arriving and being responded to in the late hours of the day not accounting for the fact that perhaps the user was abroad and was actually responding during normal business hours.

While seemingly innocuous, it can make a big difference in how a jury perceives the action of the party, depending on the nature of the investigation.

As they say… “timing is everything!” especially when it comes to digital evidence in today’s modern era.

Now, where did I leave my keys to my DeLorean?

Learn more about CloudNine ESI Analyst and its ability to deduplicate, search, filter, and adjust time zones across all data types at once here.

Private and Privileged Data: Public Records and FOIA Requests

By:  Julia Romero-Peter, Esq

Information requested from a government agency through a local public records request or the federal Freedom of Information Act (FOIA) may be considered private, personally identifiable information (PII), or privileged. These designations can apply in an ongoing investigation when personal information about an individual is disclosed. And, in some cases, these designations can be appealed.

What Is Private and Privileged Information?

Private information considered personal in nature can be designated as PII. This can include medical records, financial information, or personal correspondence. Private information is typically exempt from public disclosure laws, meaning government agencies are not required to release such data in response to a public records request.

Privileged information is not subject to disclosure under the law. Examples of this can include attorney-client privilege, work product, matters of national security, or data related to an ongoing criminal investigation. Privileged information is typically exempt from public disclosure laws, which again, means government agencies are not required to release it in response to a public records request.

If the data requested contains private, privileged information, it may be redacted before being released to a requesting party to prevent the disclosure of national security information, for example.

Tools to Prepare Data for a Public or FOIA Request

CloudNine’s cloud-based solutions can help you locate relevant information for a public records or FOIA request. CloudNine’s simplified review automation platform can help you manage, review, classify, redact, and prepare productions across all types of digital information. Your team can optimize your workflow and analyze data with precision using the CloudNine Suite, which includes CloudNine ESI Analyst – the industry’s only investigation platform built to handle today’s modern data types, such as chat, text, social media, geotracking, and more.

To see CloudNine software in action and learn how to save time and costs with an integrated, cloud-based review platform, contact us to schedule a consultation today.

 

To learn about the rise of modern data, including social media, SMS, geolocation, and corporate chat applications such as Slack and Teams, click the link to request our newest eBook:  Modern Data Blueprint: Including All Data Sources in Your eDiscovery

 

Three Things to Consider When Moving to the Cloud

By:  Kyle Taylor

Cloud computing is trending today, and for good reasons. Reports from Flexera show that 50% of decision-makers in organizations believe that migration to the cloud will continue to increase.

While some consider it a risky move for data security, others think it’s necessary for business in many ways. What benefits do companies stand to enjoy by moving to the cloud?

Reduce Internal Infrastructure Demands and Hardware Costs

The traditional on-premise Concordance platform has many demands, especially when a company wants to scale upward. It must incur the cost of acquiring additional infrastructure when new employees come on board or it expands operations.

Cloud infrastructure is easier to grow, with a business only having to pay for other resources as required. The cloud environment requires no hardware investment.

Eliminate Time-Consuming Installs, Upgrades, And System Downtime

Migrating legacy systems to a cloud computing solution saves a company time rolling out new software and training. The team has no data centers to update regularly, saving time for more crucial activities. Cloud-based solutions also experience less downtime.

Routine Backups and Disaster Recovery Process

Cloud solutions provide data encryption, regular automatic backups, and speedy data recovery. Cloud hosting providers regularly update security features based on the newest technology to keep your data protected at all times.

Other benefits of moving from the on-premise Concordance platform to the cloud include:

  • No database corruption and data integrity concerns
  • Data migration assistance from professional cloud service providers
  • Access new features, performance improvements, and bug fixes as soon as they are released
  • Unlimited data storage and processing space
  • Flexibility in using the software from anywhere with an internet connection
  • Easily collaborate with internal and external parties
  • Optional overflow services and consulting are available
  • Easy to use modern interface designed for a positive user experience
  • Automated seamless workflows
  • Customizable tag options and formats
  • Cloud-based databases support modern data formats
  • Reviewer statistics
  • Flexible database customization at the user level

The benefits of moving from an on-premise platform (like Concordance) are endless. If you would like to start the migration or get support for your cloud solution, contact us today to schedule a consultation.

Click to Download: Moving to the Cloud: Lessons from the Experts

JSON is not a document, it is data… and lots of it!

By:  Trent Livingston

Modern eDiscovery deals with much more than just documents. In 2020, each person created an estimated 1.7 MB of data every second, and most of that data was likely stored in a database. Now, newer applications like Facebook, Twitter, and Slack store copious amounts of data ranging from tweets to wall posts to chats.

Many of these artifacts are stored with complementary data that can include file links, reactions (such as “likes”), and even geolocation information. Accessing this modern data in order to leverage it for a discovery request often requires some sort of archive process from the originating application, and that is when JSON enters the picture.

JSON stands for “JavaScript Object Notation”, but that doesn’t mean you need to know how to write JavaScript (or any code for that matter)!  If you’ve ever dealt with discovery surrounding any of the aforementioned applications, you’ve likely come across a few JSON files. In a nutshell, think of JSON as a relational spreadsheet where one column of data in one tab of your spreadsheet is defined by another column of data in another tab of the same spreadsheet.

For example, you might have a column called “Address” in a spreadsheet and that column contains a series of numeric “id” values that reference another tab in that same spreadsheet. In this secondary tab, each address is broken down into values for that address that may include things like “country”, “zip”, or “street”.  All those values that have the same “id” belong to the “address” reference in the previous tab. Simply put, this is structured data. JSON data is no different. However, JSON can contain a multitude of data structures varying from simple to complex.
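That spreadsheet analogy maps directly onto JSON. A hypothetical “address” record might look like the following, parsed here with Python’s standard json module (the field names and values are invented for illustration):

```python
import json

# Hypothetical record: the "address" field nests the values that a
# spreadsheet would spread across a second tab keyed by "id"
raw = """
{
  "id": 42,
  "name": "Jane Custodian",
  "address": {
    "street": "100 Main St",
    "zip": "77002",
    "country": "US"
  }
}
"""

record = json.loads(raw)
print(record["name"])            # Jane Custodian
print(record["address"]["zip"])  # 77002
```

The nested object plays the role of the second spreadsheet tab: every value under "address" belongs to the parent record without needing a separate lookup by "id".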

The problem with JSON is while there are multiple JSON viewers and formatters online, they do not understand the defined data structures within.  Each platform defines these data structures differently, and while the vehicle may be the same, the defined structure is usually different from application to application (as well as the application’s version).  Therefore, the data within the JSON often comes out in an unexpected format when using a generic formatting tool, and the relationships between the data are often lost or jumbled. (By the way, you should never use any online “free” tool to format potentially confidential or privileged information).

Therefore, it is important to work with someone who understands JSON as an eDiscovery data source.  A single JSON file just a few megabytes in size may represent hundreds, if not thousands, of messages; contain numerous links to files; and hold key data relevant to your investigation or litigation. Contained within a JSON file could be any number of nested data formats, including:

  • Strings: a sequence of zero or more Unicode characters, which could include emojis in a Unicode format, usernames, or an actual text message.
  • Numbers: a numeric value that can represent a date, intrinsic value, id, or potentially a true/false value represented as “1” or “0”.
  • Objects: a series or collection of one or more data points represented as named value pairs that create meaning as a whole, such as longitude, latitude, elevation, rate of speed, and direction that make up the components of a device’s location.
  • Arrays: a collection of values or elements of the same type that can be individually referenced by using an index or a unique identifier, such as the color choices for a car on a website or a set of canned response values listed in a software chat application.

The thing to remember is the JSON file is actually one big “object”, which is the parent to all of the named value pairs beneath it.  Within this object, you can have more objects that contain numbers, strings, arrays, and yet even more objects. Confused yet?
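All four value types can sit inside that one parent object. A small, hypothetical chat-export fragment shows how Python’s json module maps each to a native type (the keys and values are invented for illustration):

```python
import json

# Hypothetical chat-export fragment: a string, a number, a nested
# "location" object, and an array of reaction strings
raw = """
{
  "message": "Meet at the site at 9",
  "timestamp": 1614556800,
  "location": {"latitude": 29.7604, "longitude": -95.3698},
  "reactions": ["thumbsup", "eyes"]
}
"""

msg = json.loads(raw)
print(type(msg["message"]).__name__)    # str  (JSON string)
print(type(msg["timestamp"]).__name__)  # int  (JSON number)
print(type(msg["location"]).__name__)   # dict (JSON object)
print(type(msg["reactions"]).__name__)  # list (JSON array)
```

A processing platform does essentially this at scale, except it must also know what each key means in the source application so the relationships survive into review.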

Not to worry! It is understandable that all this JSON data can quickly become a source of frustration!  The question that remains is, “How do I make sense of all of this structured data and present it for review in a reasonably usable format?” 

Here are some tips:

  1. Make sure you do not overlook JSON as part of your electronic discovery protocol
  2. Leverage an experienced team to help you understand the JSON output
    • Document the source application whenever possible (some JSON include access keys that can expire or be terminated at any time, such as Slack)
    • Preserve the JSON file as you would any other evidentiary data source
    • Document the chain of custody for the JSON file (originating application and version of that application, who conducted the export, as well as any access keys that may be transitory or temporary and their date of expiration)
    • Treat each JSON file and associated content as a potential source of PII, confidential, and/or privileged information given the breadth of data that each may contain
  3. Work with a team and a product that can parse, ingest, and subsequently present JSON data in a usable format for review and production

While there is not an off-the-shelf solution for every JSON file in existence, CloudNine ESI Analyst is a platform designed for the multitude of data types that can be extracted from just about any JSON file out there. Many of these can be easily mapped to a data type construct within our SaaS application that allows for presentation, review, and production in a reasonably usable format.

Contact us today for a demonstration and further detail!

Three Use Cases to Navigate Modern Data in eDiscovery

In litigation, knowing the full picture is the only way to effectively represent your clients. The only problem is most of the story is often stored on electronic devices like smartphones, laptops, or tablets.

While eDiscovery can be dated back to 1981 and the first substantial use of email in litigation (Governors of the United States Postal Service v. United States Postal Rate Commission), integrating newer, modern data types like text messages, computer activity, and financial data has been a bit more challenging. These challenges relate back to how eDiscovery has historically worked and why modern data sources don’t fit nicely into that process.

When eDiscovery was introduced, with email and electronic documents as the primary sources of information, simple messages and documents were sufficient to tell the story in a linear document review workflow. But with sophisticated technology like Slack, chat applications, and smartphone messaging, where communications occur in real time, the conversion to documents for review hinders proper evidence analysis.

Making sense of all that data only works when it is presented in the way it was originally communicated. The old documentation process simply doesn’t provide the insight you need to leverage modern data in litigation.

Case Study 1: Tackling Disparate Modern Data from Multiple Sources

No matter how small or large your case is, reviewing modern data can be challenging. Between smartphones, laptops, social media apps, and other connected devices, there’s a plethora of data to sift through to find the evidence you need to support your case. This process becomes even more complicated when the data is presented through the lens of traditional eDiscovery, meaning in traditional document format. What once worked for simple electronic communications no longer tells the whole story within complex, real-time, editable messaging technologies like WhatsApp, Slack, and social media.

So what happens when you have to produce data from hundreds of international sources, and need it to tell the story of what actually happened? Let’s look at a case study of a construction company that had to do exactly that.

The Problem:

One of the largest construction companies in the world required the collection of modern data from 300 international sources. While the sheer number of sources was a challenge in itself, the real difficulty was working with disparate data from so many different sources.

Each business unit within the company used different technology so data had to be collected from a vast number of sources – local desktops, laptops, smartphones, tablets, and backup servers.

Plus, because so many BYOD devices were used, legal, data privacy, IT, and risk & compliance departments had to be consulted throughout the collection and review process to ensure no U.S. or international privacy laws were broken.

The Solution:

Ultimately, our client needed to understand who said or did what, and when. Basic documents with communications from readily available sources wouldn’t be enough, because they couldn’t easily identify the critical timeline of events or the intentions of each party. In the end, CloudNine ESI Analyst’s actor normalization function was the key to finding the evidence needed.

The Results:

By matching specific individuals to different aliases and phone numbers, attorneys were able to identify a handful of photos shared from the vast amount of data collected that proved the construction company was at fault. These photos were presented to the court in the form of inline bubble messaging that was easy to read and view.  To learn more about this use case, click here.

Case Study 2: Overcoming the Personal Device eDiscovery Challenge

There’s an expected, inherent trust between a company and its employees, which means employees typically won’t do work that conflicts with their current employer’s interests. Unfortunately, that trust is sometimes broken by bad actors who take advantage of their position.

The Problem:

When a heavy equipment manager became the target of a moonlighting case that cost his company money, attorneys were discouraged when they couldn’t find any evidence of wrongdoing. There were no documents, emails, or invoices to be found through traditional eDiscovery.

The Solution:

Fortunately, the key to the case was the manager’s smartphone.

By gaining access to his phone, attorneys were able to secure tens of thousands of text messages directly related to the case. This helped them discover how he operated his illegal side hustle. They also learned he was sharing confidential, copyrighted, and proprietary information through photos sent via text message.

The Results:

Through CloudNine ESI Analyst, attorneys were able to create conversation threads which were easy to review and produce. These threads not only helped them identify other involved parties, but let them produce messages, including embedded images, videos, GIFs, and emojis.

Without the ability to review and analyze the bad actor’s smartphone data, the case likely would not have gone forward.  

To learn more about this use case, click here.

Case Study 3: Protecting Company Data with Modern eDiscovery

The average American holds 12 jobs in their lifetime, so it’s safe to say you will lose employees from time to time. Whether purposely or accidentally, the odds are good their personal devices will contain confidential or proprietary information when they walk through the door for the final time.

So what if you could examine their devices and remove all sensitive material before they left?

The Problem:

An employee spent six months working from home on a personal laptop before announcing his resignation to work for a competitor. If the employee was allowed to leave without a device review, he’d likely be leaving with documents that would benefit both him and his new employer.

Whether he would ever have used those documents or not, chances are he’d find himself in the middle of a long, expensive trade secrets case, which would also impact his new employer.

The Solution:

The representatives of the employee’s current company needed a way to access his personal laptop and identify any confidential or proprietary documents to be destroyed before he went to work for their competitor. They sought out a solution to easily identify these risks to protect their company, the employee and the employee’s future company.

The Results:

With CloudNine ESI Analyst, company representatives were able to access his personal laptop, create a chain of custody and review the data found on it. This allowed them to find confidential and proprietary data and remove it before the employee left for his new position, protecting all parties involved.

By doing this in advance, you preserve the data and protect it without relying on your employees to remember if they have sensitive information on their devices or not.

To learn more about this use case, click here

CloudNine ESI Analyst – A Modern eDiscovery Solution for Modern Data Review

Most law firms, corporations, and LSPs that try to review modern data through traditional eDiscovery tools struggle to pull the true value out of the data. Each text and corporate chat message is recreated as a stand-alone document, leaving you to piece them all together like a giant legal jigsaw puzzle with missing pieces and others that simply don’t fit the storyline.

With a robust and flexible modern eDiscovery tool like CloudNine ESI Analyst, you have access to alternative data to help you put the puzzle together through linear storytelling that creates a digital trail of evidence, including some of these popular sources:

  • Text messages (SMS and MMS)
  • Call logs
  • Voicemails
  • Messenger applications (Slack, Teams, WhatsApp, Messenger, etc.)
  • Computer activity
  • Financial transactions
  • Geolocation

Modernize your investigations and litigation by effectively managing the data in a single platform instead of wasting time managing a menagerie of documents in siloed systems.

Every case is unique and requires you to find the facts and context to tell the complete story. Contact CloudNine to learn how we can help you leverage all the data to get to the truth of the matter.

Modern Data Discussions by Leading Experts at The Master’s Conference in DC

Last week, CloudNine Senior Director Rick Clark, VP Rob Lekowski and industry thought leaders convened in Washington, DC for The Master’s Conference’s first in-person event since early 2020. The two-day event tackled the latest challenges in eDiscovery, cybersecurity, and information governance. Managing modern data was the most popular recurring topic, with four distinct panels on the subject.

Smartphones, collaboration apps, and social media platforms all store a plethora of relevant information and are where today’s most important evidence resides. By avoiding or mismanaging this evidence, lawyers miss out on critical insights. So, how should legal teams incorporate modern data into investigations? Rick Clark addressed this question and more during the panel titled “Telling The Full Story: Leveraging the Data Between the Documents” with panelists Dave Rogers, Kroll; Kevin Albert, PAE; and Sonya Judkins, T-Mobile.

Here are three key takeaways from the panel discussion:

Modern data types can no longer be ignored – Emails and traditional documents will always be part of discovery and investigations, but key data has moved away from these platforms. Conversations in Slack, MS Teams, device chat applications, and text messages are additional communications needed to follow the conversation. Nowadays, modern data extends beyond communications into geolocation, social media posts, and user activities, which often offer the largest insights. Recent case decisions have proven that judges are open to admitting modern data so long as the evidence is relevant and properly authenticated.

Data shouldn’t be treated like documents – One reason legal teams avoid modern data is linear document review. Imaging and exporting modern data leads to issues like missing metadata, families, threads, and file types. This method also requires legal teams to review evidence without any threading, deduplication, or link-analysis tools; instead, large volumes of data are analyzed document by document. Linear document review is tedious, inefficient, and time-consuming, but most importantly, legal teams who use it run the risk of overlooking important details. By opting for link analysis, litigants can connect various data points and find relevant information faster.
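To make the threading and deduplication point concrete, here is a minimal, illustrative Python sketch — not CloudNine’s actual implementation, and the field names are assumptions — showing how the same text message collected from two devices can be collapsed to one record and grouped into a conversation thread rather than reviewed twice as separate “documents”:

```python
from dataclasses import dataclass
from hashlib import sha256
from collections import defaultdict

@dataclass(frozen=True)
class Message:
    sender: str
    recipient: str
    timestamp: str   # ISO 8601, e.g. "2022-04-01T09:00:00"
    body: str

def fingerprint(msg: Message) -> str:
    """Hash the identifying fields, so a message collected from both
    the sender's and the recipient's phones collapses to one record."""
    key = f"{msg.sender}|{msg.recipient}|{msg.timestamp}|{msg.body.strip()}"
    return sha256(key.encode("utf-8")).hexdigest()

def dedupe_and_thread(messages):
    """Drop exact duplicates, then group the surviving messages into
    conversation threads keyed by the unordered participant pair."""
    seen = set()
    threads = defaultdict(list)
    for msg in sorted(messages, key=lambda m: m.timestamp):
        fp = fingerprint(msg)
        if fp in seen:
            continue
        seen.add(fp)
        participants = tuple(sorted((msg.sender, msg.recipient)))
        threads[participants].append(msg)
    return threads
```

In a flat, document-by-document export, each of those duplicate records would be a separate page to review; threading by participants keeps the conversation’s context intact.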

“There just isn’t a good solution out there” – NOT TRUE – Four panels at the Master’s Conference discussed the challenges of modern data, but only Rick Clark’s panel offered a solution. CloudNine ESI Analyst is the only software that renders modern data in a near-native state. The platform uniquely offers users the ability to ingest and investigate multiple data sources within a single platform. Attendees of the CloudNine breakout session were given a full demonstration of ESI Analyst’s capabilities. Through timelines and 24-hour threads, ESI Analyst enables native analysis of communications, transactions, and computer activities.

Missed the Washington, DC event? Join us May 18th, 2022 for the next Master’s Conference in Chicago, Illinois.

Click here to learn more about how CloudNine ESI Analyst can help you manage your modern data.

BlueStar Accelerates Modern eDiscovery with ESI Analyst – CloudNine Podcasts

It’s a challenge to produce relevant evidence for large cases, especially when they feature non-traditional data types. JSON and PST formats simply don’t do modern data justice. The unwieldy files don’t possess threading or deduplication options. Instead, large amounts of irrelevant data are stretched across a multitude of pages and folders. Finding a team to manually review that data slows production speeds and raises discovery costs. It’s time to stop forcing a square peg into a round hole.

As the CTO and Managing Partner at BlueStar, Jeremy Schaper has seen an uptick in non-traditional data over the last five years. He and his team found CloudNine’s ESI Analyst while searching for an eDiscovery solution to process both traditional and modern data types. Jeremy joined Rick Clark for our CloudNine 360 Innovate Podcast to discuss how BlueStar leveraged ESI Analyst in large cases involving SMS, Slack, and Microsoft Teams data.
Click here to listen to the podcast and learn more.

The Challenges of Modern Discovery – CloudNine Webinar

In the past, attorneys relied on printed pages and forensic imaging to produce and review traditional documents. Neither method is capable of telling the whole story. By treating data as documents, legal teams are unable to draw context from metadata, families, and threads. Static documents also don’t permit the deduplication or isolation of messages at the individual level. These shortcomings lead to slowed review speeds, excessive redactions, and the loss of relevant information.

Since then, the eDiscovery landscape has broadened to encompass modern data types such as text, geolocation, and social media data. When exported by other eDiscovery providers, the data is displayed in unwieldy spreadsheets and JSON files. Lawyers who opt to export modern data face many of the same challenges as they would with traditional discovery measures. As an alternative, a lawyer may try to produce modern ESI in the form of screenshots. Screenshots are mistakenly viewed as an easier production form because they offer clear images of conversations and provide details such as contact names and messaging times. However, several judges have rejected their admission in court due to authenticity concerns. Nowadays, it’s very easy to fabricate text conversations.

As the amount of modern ESI grows, developing an efficient and defensible discovery process becomes paramount. Rob Lekowski and Rick Clark from the CloudNine team joined Kevin Thompson from the Chicago Bar Association to discuss how CloudNine ESI Analyst uniquely tackles the pitfalls of modern discovery. Rob and Rick also provided insight into common questions such as:

  • How should legal teams deal with deleted documents?
  • What are the estimated costs and duration times for small, medium, and big cases?
  • Can you still collect evidence with CloudNine ESI Analyst if you don’t have access to the physical device?
  • How should an attorney negotiate keyword searches with opposing counsel?

To learn how to process and review modern data types in a single platform, watch the webinar here.

CloudNine’s LegalWeek 2022 Recap

Last week, the CloudNine team visited New York City to provide virtual and in-person demos during LegalWeek 2022. Rick Clark, Robert Lekowski, Clint Lehew, and Jess Moore were able to share the capabilities of CloudNine ESI Analyst with over 50 attendees.

As the industry’s only near-native investigative platform, CloudNine ESI Analyst simplifies the discovery of mobile, chat, social, and geolocation data from collection to production. Through our platform, users can filter, search, and tag items at the individual level. Messages can also be viewed through our 24-hour thread feature which increases review speeds by permitting the numbering, tagging, and production of individual messages while displaying the full context of a conversation. The key difference between ESI Analyst and traditional review is our way of processing modern data. Traditional review turns all data into documents without providing sufficient means for filtering and searching conversations. Each message or thread must be reviewed page by page. By rendering messages and media inline, ESI Analyst allows legal teams to piece evidence together at a faster rate.

During LegalWeek, the CloudNine team was able to speak to our clients directly and learn how our platform changed the way they approach data and case strategies. Clients like Phil Hodgkins raved about the user-based pricing and the clarity that ESI Analyst offers in telling the whole story. While conducting an internal investigation, Phil’s team found that ESI Analyst saved time and yielded better insights than two traditional review platforms. Since then, Phil has encouraged lawyers to stop cramming mobile data into document-based spaces.

“If [our clients] start talking about mobile data [and getting] into the nuts and bolts of a laptop or any device, we immediately start talking about your tool. It allows you to gain insights much more quickly than putting the data into a typical review space” – Phil Hodgkins, Director of Data Insights and Forensics at Kroll. To learn more about Phil’s experiences with ESI Analyst, click here.

Missed out on CloudNine’s LegalWeek demos? Book a demo today to learn how our software simplifies and accelerates modern data discovery.