Industry-Leading eDiscovery Insight

Learn from renowned eDiscovery thought leaders


Developing A Clear Picture With Early Case Timelines

Grasping the strengths and weaknesses of one's case early becomes both more important and more challenging when substantial ESI is involved. Creating early case timelines can help keep your view of the case accurate.

Assessing one’s case early is as important as ever. Keyword selections, used for discovery requests and Rule 26 meet-and-confer agreements, depend on accurate identification of the important case issues. If the critical issues of a case are not identified until later, it becomes necessary to request additional documents based on new keywords or from different custodians. This can add substantial cost to discovery collections and try the patience of opponents and, potentially, the court. At some point in the discovery process the cost of additional collections will shift to the requestor, or it will simply be too late to pursue newly identified issues. A related concern is identifying potential claims and defenses early enough in the discovery process to include them in the pleadings.

ESI-intensive cases are even more difficult to assess early because distilling information about the case from large amounts of data can be expensive and depends both on knowing the issues well enough to generate keywords (a sort of ‘catch-22’) and on being in possession of the data. If one is trying to get the important facts early in the case from the ESI (as opposed to interviews with witnesses and other case participants), then the review of ESI will be an iterative process of running proposed keyword searches, manually reviewing a sample of the documents returned, and then revising the search terms.

As part of the early case assessment process, counsel can identify what discovery is needed, identify key witnesses and data custodians, construct factual timelines, and determine the strengths and weaknesses of the case. The goal of this proactive approach is to develop an early strategy for the case, realistically evaluate the case, determine the cost and budget to prosecute/defend, and identify business practices that might be modified to minimize litigation exposure in the future.

This probably sounds like ‘mom and apple pie,’ and it is certainly not a new idea. So why isn’t it done in every case? There are several real-life barriers to conducting a comprehensive early case assessment. One is the need for client support. An early case assessment will cost money, and a client may view it as ‘over-working’ the case if they do not understand the process and its benefits. Those benefits include a better early understanding of the case, which can create opportunities for settlement and reduce unnecessary eDiscovery costs.

A more subtle objection is the concern with being the bearer of bad news. A legitimate worry is that the client may not be ready to hear bad news about the case and may be inclined to ‘shoot the messenger.’ Some clients think they need a gung-ho, ‘junkyard dog’ type of litigator and might interpret an early, balanced analysis of case strengths and weaknesses as a lack of confidence from their attorney. This problem is compounded when the litigator has been recently hired and does not yet have experience with the client. For these types of clients, some lawyers take the approach of letting the adverse facts of the case sink in over time. Eventually the client begins to accept the shortcomings in their case and becomes more inclined to settle.

The cost of this ‘wait and see’ approach of forgoing early case assessment is greater today, and will continue to grow, because of increased volumes of ESI. Some cases may have so much potentially relevant ESI that the cost to review it outweighs the benefits of litigating. In reality, if the opposition’s case has enough merit to warrant extensive eDiscovery, that alone justifies an early evaluation of the cost and raises the case’s settlement value. Not understanding one’s case early can also greatly increase the cost of conducting eDiscovery by requiring repeated requests and reviews, because key issues and keywords are not understood, and custodians and data repositories are not identified, until relatively late in the case.

When chronologies are constructed very early in the case, they may have to be developed from limited available sources: facts from initial client interviews, review of key documents forwarded by the client, and alleged facts from the opponent’s pleadings. Even these limited sources, however, can begin to give a good picture of where the strengths and weaknesses of the case lie. Early case assessment tools, like those found in the Lexbe eDiscovery Platform, can help you construct timelines at the earliest stages. Certainly all lawyers construct timelines as a case progresses toward trial, but there are advantages to doing it as early as possible in an environment that is integrated with the case data. The most important is that, as iterative ESI collections arrive, early case timelines can be updated automatically and in real time.

Early case timelines can also include facts that are needed to support a claim or defense, based on the anticipated jury instructions for the claims involved. These facts remain ‘orphaned’ until associated with one or more documents or portions of deposition testimony, and they serve as red flags for evidence that needs to be developed during the discovery stage. Similarly, tracking the facts one’s opponent must prove provides focal points for developing opposing evidence and for summary judgment motions aimed at elements that have not been successfully developed during discovery.
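
As a rough illustration of how such a timeline entry might be modeled in a review tool, here is a short Python sketch. It is a hypothetical data structure, not a description of any particular product’s data model; the field names and sample facts are invented for illustration only.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TimelineFact:
        """A single fact on an early case timeline (hypothetical model)."""
        event_date: date
        description: str
        favorable: bool                  # True if the fact supports our claim or defense
        supporting_evidence: list[str] = field(default_factory=list)  # Bates numbers, deposition cites, etc.

        @property
        def orphaned(self) -> bool:
            # A fact with no linked documents or testimony still needs
            # evidence to be developed during discovery.
            return not self.supporting_evidence

    facts = [
        TimelineFact(date(2021, 3, 1), "Contract executed", True, ["ABC-000123"]),
        TimelineFact(date(2021, 6, 15), "Alleged breach communicated", True),  # orphaned: flag for discovery
    ]

    for f in facts:
        flag = "NEEDS EVIDENCE" if f.orphaned else "supported"
        print(f"{f.event_date}  {f.description}  [{flag}]")

In a sketch like this, the ‘orphaned’ flag is what turns the timeline into a discovery to-do list: any fact without linked evidence stands out for follow-up.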

Timelines also serve as a good way to begin educating clients about the potential holes or weaknesses in their case. A client can begin to digest a truer picture of his or her case through periodic review of timelines showing positive and negative facts backed up with documentary evidence or deposition testimony. Such chronologies can also help the client to develop supporting evidence, both documentary and oral, by jogging the client’s memory for missing or contrary evidence. As the case develops and more ESI comes in, the issue timelines can be supplemented more easily. With functional case timelines, an attorney is also in a better position to quickly develop and defend summary judgment motions and to identify specific facts backed with evidence.

Understanding Precision and Recall

Technology-assisted review (TAR) is a powerful tool for controlling review costs and workflows. But to maximize its benefits, we must be able to understand the results.

Predictive coding has, for years, promised to reduce the time and expense of increasingly large-scale litigation reviews. For attorneys and project managers assessing different methodologies, it has been challenging to understand which evaluative metrics are relevant. F-scores are often inappropriately interpreted as measures of review quality when evaluating predictive coding results. To get a better understanding of how an application of predictive coding has performed, and to manage the defensibility of your review, the component elements of the F-score, precision and recall, should be examined. But how do precision and recall scores relate? And, more importantly, what do these results tell you about your production?

In the context of TAR and predictive coding, precision is a measure of how often the algorithm correctly predicts a document to be responsive: in other words, what percentage of the produced documents are actually responsive. A low precision score tells us that many documents were produced that were not actually responsive, a potential indication of over-delivery. A high precision score on its own does not mean much, either. One could deliver just 10 documents to opposing counsel, and if all 10 were responsive we would have 100% precision, but we would almost certainly have failed to deliver a very significant percentage of the responsive documents in the collection.

To give our precision score any context relative to the overriding goal of predictive coding, which is to quickly and defensibly deliver responsive documents to opposing counsel, we need to look at recall. Recall is a measure of what percentage of the responsive documents in a data set have been classified correctly by the TAR/predictive coding algorithm. When recall is 100%, the algorithm has correctly identified all of the responsive documents in a collection. A low recall score indicates that the algorithm has incorrectly marked many responsive documents as non-responsive.
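
To make the two measures concrete, here is a minimal Python sketch that computes precision and recall for a hypothetical production set against a notional ground-truth set of responsive documents. The document IDs and counts are illustrative only, not drawn from any actual review.

    # Hypothetical example: documents the algorithm marked responsive and produced,
    # versus the documents that are actually responsive in the collection.
    produced = {"DOC-001", "DOC-002", "DOC-003", "DOC-004", "DOC-005"}
    responsive = {"DOC-002", "DOC-003", "DOC-004", "DOC-006", "DOC-007", "DOC-008"}

    true_positives = len(produced & responsive)    # responsive documents that were produced
    false_positives = len(produced - responsive)   # produced documents that were not responsive
    false_negatives = len(responsive - produced)   # responsive documents that were missed

    precision = true_positives / (true_positives + false_positives)  # 3 / 5 = 60%
    recall = true_positives / (true_positives + false_negatives)     # 3 / 6 = 50%

    print(f"Precision: {precision:.0%}   Recall: {recall:.0%}")

In this made-up example the production looks reasonably precise, yet half of the truly responsive documents were never identified, which is exactly the kind of gap a precision score alone will not reveal.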

To get an idea of how a predictive coding application has performed, we need to look at precision and recall relative to each other. Given the fundamental limitations of predictive coding technology, it would be very difficult ever to achieve perfect precision and perfect recall on a collection; there is ultimately a trade-off between optimizing the two measures. To improve precision, that is, to reduce the proportion of false positives, we are likely to give up some true positives and therefore reduce recall as well. Similarly, to improve recall by reducing the proportion of false negatives, we are likely to increase the percentage of false positives and hurt precision. Because of this interrelation, much of what can be understood about TAR results is obscured by looking only at the F-score and accepting the result if it exceeds some arbitrary threshold. Evaluating precision and recall in relation to each other tells a much more detailed story about TAR results.
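
As an illustration of why the F-score by itself can hide this trade-off, the short sketch below uses made-up precision and recall pairs to show that the balanced F1 score, the harmonic mean of precision and recall, can be nearly identical for reviews that behave very differently.

    # Illustration with hypothetical numbers: the balanced F1 score is the harmonic
    # mean of precision and recall, so very different reviews can share an F-score.
    def f1(precision: float, recall: float) -> float:
        return 2 * precision * recall / (precision + recall)

    scenarios = [
        (0.90, 0.50),  # high precision, low recall: many responsive documents missed
        (0.50, 0.90),  # low precision, high recall: substantial over-delivery
        (0.65, 0.65),  # balanced result
    ]

    for p, r in scenarios:
        print(f"precision={p:.0%}  recall={r:.0%}  F1={f1(p, r):.2f}")

The first two scenarios produce essentially the same F1 (about 0.64) as the balanced one (0.65), even though one misses half of the responsive documents and the other over-delivers heavily, which is why precision and recall should be read together rather than collapsed into a single number.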

Given what we know about recall scores, it may be unsettling that predictive coding gives us an explicit measure of how many responsive documents we did not deliver. How can we look at predictive coding results that indicate 80% recall and not be entirely focused on the 20% of responsive documents that have not been produced? The answer is that 80% recall may be a far better result than a massively more expensive manual review of the documents would have achieved. Though this may seem controversial, it is a notion shared by The Sedona Conference, the TREC Legal Track, and the judges who have been approving the use of TAR.

Are You Still Paying Too Much for eDiscovery Processing?

“Free eDiscovery Processing” sounds too good to be true. Until now, you may have spent many hundreds of dollars per GB to process native documents like Outlook Email and Microsoft Office files into paginated reviewable formats like TIFF or PDF. So how can those charges simply disappear? The revolutionary answer lies with Lexbe’s secure, scalable cloud technology.

Key Points

  • eDiscovery Processing Background
  • Rising Costs of eDiscovery
  • Leveraging New Technologies to Reduce Costs
  • Case Study: Dramatic Cost Savings by Utilizing Lexbe’s New Technologies
  • Lexbe Hosting and Review Technology Overview

About the Speaker

Stu Van Dusen is an eDiscovery solutions consultant with Lexbe and a frequent speaker and writer on litigation technology. He holds an MS in Technology Commercialization from UT Austin and a BS in Business Administration & Management from Trinity University.


Best Practices: Managed Review

Review costs continue to be the dominant portion of discovery expenditure for corporate legal departments and law firms involved in large-scale litigation and government investigations. While the number of documents to be reviewed in any given case continues to grow exponentially, the time available to review them has not. The challenge of finding cost-efficient solutions to complete large review assignments on time and within budget becomes more pressing each year. Outsourced managed review is a favored option in many large document cases, bringing specialized review expertise and staffing to bear on large-scale productions, privilege review, redactions, and issue coding.

Key Points

  • About Managed Review
  • Importance of Efficient Review
  • Traditional Staffing v. Managed Review
  • Project Planning and Management
  • Selecting and Training Your Review Team
  • Process/Workflow Design
  • Quality Control and Testing
  • Communicating With Counsel
  • Creating Review Reports
  • Custom Production and Privilege Logs

About the Speaker

Bob Roberts leads Black Letter Discovery’s Cincinnati office and manages all aspects of the managed document review process. Prior to joining BLD, Bob managed a portion of the civil division of the Prosecutor’s Office for one of Ohio’s largest counties. He also spent several years representing private clients in state and federal courts throughout Ohio.


10 Mistakes to Avoid When Running Productions

Whether producing in Native, TIFF, PDF, or a blended format, discovery productions are fraught with potential challenges and obstacles that can foil your ability to meet deadlines and satisfy discovery requirements. This webinar explores strategies you can use to avoid these common production problems before they unfold, plus a variety of methods for getting back on track when unavoidable delays do occur.

Key Points

  • Being unaware of the rules (FRCP/state/local)
  • Neglecting to match review requests with your review approach
  • Not knowing the common file deliverables in productions
  • Missing the opportunity to use ‘Meet & Confer’ (Rule 26) to your advantage
  • Failing to request specific file types and metadata as needed
  • Not tracking custodians, causing deduplication nightmares
  • Not addressing placeholders, databases, and unusual file types
  • Negotiating incomplete discovery orders in complicated cases
  • Stepping into redaction traps
  • Decreasing privilege review accuracy by failing to apply near-duplicate checks

About the Speaker

Gene Albert is the CEO of Lexbe and a frequent speaker and writer on litigation technology and eDiscovery topics. He is on the Planning Committee of the Texas State Bar eDiscovery Program. Gene holds a JD from Southern Methodist University and an MBA from the University of Texas at Austin.

