The Future of Disclosure in the UK – And What This Means
Questions have recently been raised about the future of disclosure in the UK. The decision at issue is significant for rejecting the Disclosure Pilot Scheme as unworkable, with Judge Smith instead issuing a bespoke order at the polar opposite extreme: “massive overdisclosure.”
There have been several criticisms of Judge Smith’s process, with the Judge himself acknowledging that part of the process would be “quite an intrusive and possibly expensive obligation.”
The decision introduced conditions which Judge Smith relied upon to trigger his disclosure order. Practitioners interested in the potential burden of overdisclosure should pay special attention to the conditions which, if met, would trigger a similar order: (i) a risk of relevant documents being missed; (ii) no danger of a party being oppressed as a result of the order; (iii) the risk of disclosing privileged material is equivalent to the risk expected under “standard” disclosure; and (iv) confidential material is adequately protected.
As discussed more fully below, the first condition offers parties the best chance of avoiding an overdisclosure order. The odds favour those using a tool recently developed to counter this same risk, which has been used with great success to revise electronic filters and improve precision.
Massive Overdisclosure – Process
Judge Smith's bespoke disclosure process requires parties to first identify the scope of documents subject to disclosure by referencing custodians, date ranges, etc., followed by an electronic review to filter out irrelevant documents using the Peruvian Guano test.
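The scoping step described above can be sketched in a few lines of code. The records, field names, custodians, and search terms below are purely illustrative assumptions, not details from the decision:

```python
from datetime import date

# Hypothetical document records; field names are illustrative only.
documents = [
    {"id": 1, "custodian": "A. Jones", "sent": date(2019, 3, 1), "text": "re: project budget"},
    {"id": 2, "custodian": "B. Patel", "sent": date(2021, 6, 5), "text": "lunch on friday?"},
    {"id": 3, "custodian": "A. Jones", "sent": date(2020, 1, 15), "text": "draft contract attached"},
]

CUSTODIANS = {"A. Jones"}
DATE_FROM, DATE_TO = date(2019, 1, 1), date(2020, 12, 31)
TERMS = ["budget", "contract"]

def in_scope(doc):
    """Scope by custodian and date range before any term filtering."""
    return doc["custodian"] in CUSTODIANS and DATE_FROM <= doc["sent"] <= DATE_TO

def hits_terms(doc):
    """Crude electronic filter: keep documents hitting any search term."""
    text = doc["text"].lower()
    return any(term in text for term in TERMS)

scoped = [d for d in documents if in_scope(d)]
candidates = [d for d in scoped if hits_terms(d)]
print([d["id"] for d in candidates])  # [1, 3]
```

In practice the relevance judgment itself (the Peruvian Guano test) is a legal determination; the electronic filter only narrows the pool that reviewers must consider.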
Parties then filter out privileged documents by means of “a robust process,” likely entailing an electronic search followed by human review to confirm the results.
Rather than any filtering for confidential documents, they will instead be disclosed with safeguards in place. This will include the “intrusive and possibly expensive obligation” of the receiving party to retain King’s Counsel to authorise access to specific documents.
Judge Smith’s bespoke disclosure process was no doubt conceived to allow for speedy and transparent exchanges. However, the process would compel parties to review not one but two massive overdisclosures: the one they receive, and the one they produce without the opportunity to review for relevance.
Massive Overdisclosure – Conditions
After discussing the deficiencies of “standard” disclosure, Judge Smith concluded that “massive overdisclosure … ought to be adopted provided the four conditions are satisfied.” While the Judge does not provide specific guidance on how these conditions should be applied in other cases, the second condition would be relevant only in those rare cases where the Receiving Party does not have access to a review platform with search functionality. The third and fourth criteria would be answered affirmatively in nearly every case, given that Judge Smith’s process is designed to provide the necessary safeguards.
The outcome of the first condition, on the other hand, would depend entirely on parties’ ability to employ targeted search strategies along with a powerful search tool that removes false positive hits.
False Positives – Pivot on the Noise
As Judge Smith correctly states, when revising search terms, it is not enough to find a clever combination of brackets, connectors, and proximities that returns an agreeably low hit count. Instead, search results must be targeted from both ends, i.e., results to review straight away and results to analyse and revise.
By identifying documents to review straight away, parties can set up predictive coding and other AI tools to inform the ongoing search revisions. One effective strategy is to employ family-level search term reporting, including counts of distinct terms and aggregate term hits per family. For example, a family that hits on 10 different terms with a total of 50 hits is unlikely to ever fall out of the search results and should be batched for review immediately.
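A family-level report of this kind can be sketched as follows. The family IDs, terms, hit counts, and thresholds are hypothetical, chosen only to mirror the example above:

```python
from collections import defaultdict

# Hypothetical per-document hit data: (family_id, term, hit_count).
hit_rows = [
    ("FAM-001", "contract", 12), ("FAM-001", "budget", 8), ("FAM-001", "invoice", 30),
    ("FAM-002", "budget", 2),
    ("FAM-003", "contract", 1), ("FAM-003", "invoice", 1),
]

# Aggregate distinct terms and total hits per family.
families = defaultdict(lambda: {"distinct_terms": set(), "total_hits": 0})
for family_id, term, count in hit_rows:
    families[family_id]["distinct_terms"].add(term)
    families[family_id]["total_hits"] += count

# Families rich in distinct terms and aggregate hits are unlikely to ever
# fall out of revised search results; batch them for review immediately.
MIN_DISTINCT, MIN_HITS = 3, 40  # illustrative thresholds
review_now = [
    fid for fid, stats in families.items()
    if len(stats["distinct_terms"]) >= MIN_DISTINCT and stats["total_hits"] >= MIN_HITS
]
print(review_now)  # ['FAM-001']
```

Here FAM-001 hits on three distinct terms with 50 aggregate hits, so it is batched for immediate review while the sparser families remain in the revision pool.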
The other end of the revision strategy – identifying terms to revise – should focus on identifying false positives.
Forensic Risk Alliance has developed a process that accomplishes this by reporting text strings for individual search term hits, much like Google search results that show the context of a search hit with the actual terms in bold.
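A minimal sketch of that kind of context reporting is shown below. This is not Forensic Risk Alliance's actual process, just an illustration of the idea, with a hypothetical helper and marker syntax:

```python
import re

def term_contexts(text, term, window=30):
    """Report the text surrounding each hit of `term`, with the hit itself
    marked, much like a search-engine snippet showing the term in bold."""
    snippets = []
    for m in re.finditer(re.escape(term), text, re.IGNORECASE):
        start = max(m.start() - window, 0)
        end = min(m.end() + window, len(text))
        # Wrap the hit in ** markers to stand in for bold formatting.
        snippet = text[start:m.start()] + "**" + m.group(0) + "**" + text[m.end():end]
        snippets.append(snippet.strip())
    return snippets

doc = "The strike price was agreed on Monday. A transit strike delayed the courier."
for s in term_contexts(doc, "strike"):
    print(s)
```

Reading the snippets side by side makes it immediately obvious which hits are financial jargon ("strike price") and which concern the industrial action actually at issue.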
By analysing text strings that drive the search results, attorneys can “pivot on the noise” to identify false positive patterns, which can then be fed back into the search term in the form of negative proximities. Note that these negative proximities do not exclude documents with a given pattern, but rather define patterns that should not be counted as hits.
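The distinction matters: a negative proximity suppresses noisy hits without suppressing the document. A hedged sketch, using hypothetical terms and a simple word-distance model of proximity:

```python
import re

def count_effective_hits(text, term, noise_patterns, proximity=3):
    """Count hits of `term`, excluding any hit that falls within `proximity`
    words of a known false-positive pattern. The document itself is never
    excluded; only the noisy hits stop counting."""
    words = re.findall(r"\w+", text.lower())
    noise_idx = {i for i, w in enumerate(words) if w in noise_patterns}
    effective = 0
    for i, w in enumerate(words):
        if w == term and not any(abs(i - j) <= proximity for j in noise_idx):
            effective += 1
    return effective

text = "The strike price rose. Workers voted to strike on Friday."
# In this hypothetical matter, "price" drives false positives for "strike".
print(count_effective_hits(text, "strike", {"price"}))  # 1
```

The first "strike" sits one word from "price" and is suppressed; the second, which concerns the relevant subject matter, still counts, so the document stays in the results.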
With detailed documentation of false positives, corresponding term revisions, and resulting decreases in hit counts, parties should be able to provide sufficient evidence that relevant documents have not been missed to avoid a “massive overdisclosure” order.