ARTIFICIAL INTELLIGENCE AND EVIDENCE

Authors: Daniel SENG LLB (Singapore), BCL (Oxford), JSM (Stanford), JSD (Stanford); Advocate and Solicitor (Singapore); Director, Centre for Technology, Robotics, Artificial Intelligence & the Law; Director, LLM Programme in IP and Technology Law; Associate Professor, Faculty of Law, National University of Singapore. Stephen MASON BA (Hons) (CNAA), MA (City), LLM (London), PGCE (FE) (Greenwich); Barrister (Middle Temple); Associate Research Fellow at the Institute of Advanced Legal Studies, School of Advanced Study, University of London.
Publication year: 2021
Citation: (2021) 33 SAcLJ 10241
Published date: 1 December 2021
I. Introduction

1 With all the publicity regarding artificial intelligence (“AI”) and the fourth industrial revolution, any discussion of AI must start with a definition, because the term can mean different things to different people. “AI” is defined here as a system that acts “intelligently” by doing what is appropriate for the circumstances and the purposes assigned to it, including behaving flexibly in changing environments and objectives, learning from experience and making appropriate choices given perceptual limitations and finite computation.1 AI can be further categorised as:2

(a) Narrow AI: AI developed as an aid to human thought, typically through the use of a system that solves tightly constrained problems;

(b) Strong AI: AI that attempts to mechanise human-level intelligence, typically when a system is used as a general purpose problem solver; and

(c) Artificial General Intelligence (“AGI”): an AI system that greatly exceeds the cognitive performance of humans in virtually all domains of interest.

2 In spite of the many significant advancements in the sophistication and application of AI today, AGI and even Strong AI systems do not yet exist.3 That AI remains narrow, however, does not mean that the AI systems we have built are not powerful or cannot perform useful tasks. It means that the AI systems we have designed and are currently using are of narrow application. This includes their widespread use in automatic application processing, anomaly detection, and speech, language and visual pattern recognition for business and public processes. For instance, smart banking systems review and approve credit card and loan applications. Image recognition systems and cameras collect information about individuals and vehicles and track our movements. Electronic commerce platforms process our shopping preferences to generate advertisements and product recommendations. In fact, today, AI systems gather, process and produce much of the information that is integral to the functioning of a modern society.

II. AI and electronic evidence in context

3 This article concerns evidence gathered and processed using AI systems. AI evidence is first and foremost evidence in electronic form. Because electronic evidence is by its very nature capable of being altered or even deleted, deliberately or inadvertently, the main consideration is to ensure that such evidence, and the systems that store, process and analyse it, are trustworthy and reliable.4 This consideration pervades the very nature of the evidence itself — from its inception to its use in legal proceedings — and encompasses the systems in which it is stored, processed and analysed. To take law enforcement as an example, today, police officers equipped with body-worn video cameras and patrol cars equipped with in-car cameras record crucial evidence in real time.5 For investigative work, police use cell phone tracking software and case-management software to streamline the collection and analysis of evidence.6 And prosecutors, lawyers and judges use case tracking and management systems to manage case filing, information, caseloads and dockets.7 Even as investigative agencies move beyond merely automated systems to AI systems for policing,8 intelligence gathering9 and evidential analysis,10 and the courts do so for administration, dispute resolution and decision-making purposes,11 the significant consideration for prosecutors and the courts will always be the trustworthiness and reliability of the evidence and its systems.

4 Lawyers also use AI systems for contract analysis, document review and electronic discovery or disclosure, for legal research and analysis, and in practice management applications, including electronic billing and programs for drafting documents and for redaction. Through legal analytics, lawyers are also using AI systems to predict judicial decisions with a view to maximising their success rates before particular courts on specific matters.12 However, aside from the matter of electronic discovery, a distinction can be made between the use of information, including information generated by AI systems, to prove matters of dispute in legal proceedings, and the use of such information to manage and administer various litigation, investigation and legal processes. As the latter does not involve issues of evidence, no further reference to such use cases will be made in this article.

III. Machine learning

5 The bulk of the AI systems today are built on machine learning (“ML”) algorithms. Rather than following pre-programmed rules to carry out complex processes, ML works by “allow[ing] systems to learn directly from examples, data, and experience”.13 There are three main permutations of ML:14

(a) supervised learning: where the AI system is trained with data that has been labelled, learns how that data is structured and uses this to predict the categories of new data (a brief code sketch of this permutation follows this list);

(b) unsupervised learning: where the AI system aims to detect characteristic similarities between data points; and

(c) reinforcement learning: where the AI system learns the rules of its environment and the consequences of its actions from experience, by interacting with that environment, and tries to develop strategies for optimising a given reward (the reward function).
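To make the supervised learning permutation concrete, the following is a minimal sketch in Python using the scikit-learn library. The “loan application” features, labels and figures are invented for illustration and are not drawn from this article or from any real system; a production system would involve far larger datasets and more careful feature design.

    # A minimal supervised learning sketch: a classifier is trained on
    # human-labelled examples and then predicts categories for new data.
    # All features and labels below are hypothetical.
    from sklearn.linear_model import LogisticRegression

    # Each row is an application: [income (thousands), debt (thousands)]
    X_train = [[80, 10], [30, 25], [95, 5], [20, 30], [60, 15], [25, 40]]
    # Labels assigned by a human reviewer: 1 = approve, 0 = reject
    y_train = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)   # the system "learns" from labelled examples

    # The trained model predicts a category for unseen applications
    print(model.predict([[70, 12], [22, 35]]))   # expected output: [1 0]

The structure of the labelled data, not any pre-programmed rule, is what determines the prediction; this is the sense in which such a system “learns” from examples.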

6 Although ML has enabled many impressive advancements and powered many of the innovative uses of AI systems today, it can give results that are unexpected or incorrect.15 ML is also subject to a number of further limitations:

(a) Many AI systems, especially supervised ML systems, rely on large amounts of training data, each item of which has been given a label by a human being.

(b) ML will learn any biases that are contained in the training data, so (for example) an ML system for determining whether a prisoner should be released by the parole board will exhibit racial bias if it has been trained on data that contains such bias.16 (A brief sketch of this effect appears after this list.) And correlations discovered through ML do not equate to causality.17

(c) Datasets will invariably contain hidden biases, as will the choice and use of ML algorithms.18 This is because the development of datasets and algorithms involves decisions by humans who, apart from their own qualifications (or lack thereof) and inherent biases, will have to consider compromises and trade-offs.19

(d) There are many constraints on the real world that we know from natural laws (such as those of physics) or from mathematics and logic. It is not straightforward to incorporate these constraints into ML methods.20

(e) When our expertise fails, humans fall back on “common sense”. But current ML systems do not define or encode this behaviour.21 This means that when they fail, they may fail in a serious or brittle manner. In particular, an ML system may be unstable when presented with novel combinations of data, so even if it has been trained on past decisions that have been separately verified by experts, that may not be enough to justify high confidence in a subsequent decision.22

(f) Humans are good at transferring ideas from one problem domain to another. This remains challenging for computers even with the latest “transfer learning” ML techniques.23

(g) Related to this is the challenge of interpretability — the need to represent knowledge encoded in the learning system in a form that is comprehensible by humans. For instance, it is not easy even for the programmers and analysts who train the neural networks24 typically used in deep learning ML algorithms to identify or explain the factors or weights that the networks have used to arrive at a decision in a particular case.25 (The second sketch after this list illustrates the point.)
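As flagged in sub-paragraph (b) above, the following minimal Python sketch shows how a classifier trained on biased labels simply reproduces that bias. All of the data is synthetic and deliberately skewed for illustration; it is not drawn from any real parole or sentencing system.

    # Bias sketch: a classifier trained on biased decisions reproduces them.
    # All data below is hypothetical and deliberately skewed.
    from sklearn.tree import DecisionTreeClassifier

    # Features per prisoner: [group membership (0 or 1), prior offences]
    X_train = [[0, 1], [0, 3], [0, 2], [1, 1], [1, 3], [1, 2]]
    # Historical decisions: group 1 was always refused release, even on
    # records identical to group 0 cases (1 = release, 0 = refuse)
    y_train = [1, 1, 1, 0, 0, 0]

    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Two prisoners with identical records, differing only by group:
    print(model.predict([[0, 2], [1, 2]]))   # output: [1 0]

Because group membership is the only feature that separates the historical decisions, the model latches onto it; the bias in the training data becomes the rule the system applies.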
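On the interpretability point in sub-paragraph (g), the following sketch contrasts a simple linear model, whose one-coefficient-per-feature weights can at least be read off, with a small neural network, whose learned weights are raw matrices between layers with no direct per-feature meaning. The data is again invented for illustration, and real deep learning networks are vastly larger than this toy example.

    # Interpretability sketch: readable linear coefficients versus
    # opaque neural network weight matrices. Synthetic data: the label
    # depends only on the first of three features.
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
    y = [1, 1, 1, 0, 0, 0]

    linear = LogisticRegression().fit(X, y)
    print(linear.coef_)   # one coefficient per input feature, human-readable

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000).fit(X, y)
    print([w.shape for w in net.coefs_])   # e.g. [(3, 8), (8, 1)]: weight
    # matrices between layers, with no per-feature meaning to read off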

7 All these special characteristics of AI translate into real and substantial issues regarding the admission of electronic evidence, whether in the form of real evidence or of records produced by AI systems. They challenge presumptions in evidence about the reliability of automated systems, call into question the characterisation of records from AI systems as real evidence or as hearsay, deepen the analysis of such evidence on grounds of authenticity, and even go to the issue of whether such evidence can be the proper subject of legal disclosure or discovery. It is to an examination of these issues that this article now turns.

IV. Presumption of reliability

8 At common law, a rule of significant concern that governs the admissibility of electronic evidence is the presumption that computer systems are “reliable”. In England and Wales, this presumption states that: “In the absence of evidence to the contrary, the courts will presume that mechanical instruments were in order at the material time.”26

9 While Commonwealth jurisprudence has gradually abandoned the requirement that computer systems are “reliable” as a precondition for the admissibility of electronic evidence,27 and by corollary, evidence produced by AI systems, the requirement that computer systems be shown to be “reliable” still underpins many exclusionary rules of evidence such as the best evidence rule,28 the business records exception to the hearsay rule,29 and the authentication evidence rule.30 Who has the burden of proving (or disproving) the reliability of the computer system? In the context of AI systems, this issue is especially pertinent because AI systems in general, and certain ML algorithms in particular, can operate in ways that are opaque and not obvious, even to their programmers and operators.

10 In practice, in considering the “reliability” of computers, judges have not always taken relevant expert advice, or consulted the technical literature on the “reliability” of computers, in reaching their conclusions on this issue.31 These judges conclude that, because the systems did what was expected of them notwithstanding the opponent's challenge, the systems are “reliable”.32 In so doing, the Bench has erroneously...
