NON-DETERMINISTIC ARTIFICIAL INTELLIGENCE SYSTEMS AND THE FUTURE OF THE LAW ON UNILATERAL MISTAKES IN SINGAPORE

Citation: (2022) 34 SAcLJ 91
Published: 1 March 2022
I. Introduction

1 As artificial intelligence (“AI”) continues to transform industries, public and private life,2 courts and legislators face the unenviable task of adapting the law to meet new challenges posed by AI. Sophisticated traders and insurers, for instance, deploy AI to execute trades, calculate premiums and potentially conclude policies.3

2 However, using AI as an instrument to conclude contracts has highlighted, in some areas, the inadequacy of existing contractual principles. Of interest is the doctrine of unilateral mistakes as a vitiating factor in contract law. To rely on the doctrine of unilateral mistakes, the mistaken party (“MP”) must prove that its counterparty knew of the relevant mistake when the contract was formed. However, if the non-mistaken party (“NMP”) contracts via a non-deterministic AI, without contemporaneous human involvement at the time of contract formation, whose mental state becomes relevant under the defence of unilateral mistake (the “Knowledge Issue”)?

3 While the Singapore Court of Appeal in Quoine Pte Ltd v B2C2 Ltd4 (“Quoine”) provided much-needed clarity on the Knowledge Issue where contracts are concluded by deterministic AI, there is a lacuna in the law vis-à-vis contracts concluded by non-deterministic AI. This article examines the Knowledge Issue in the context of contracts concluded by non-deterministic AI before proposing reforms for analysing the Knowledge Issue and the related requirement of unconscionability in equitable unilateral mistakes. Determining the type or class of mistake (eg, what is a fundamental mistake as to the terms of the contract) that allows a party to invoke the unilateral mistake doctrine in law or equity is an issue beyond the scope of this article.

II. Non-deterministic artificial intelligence and the Black Box Problem

4 AI is an emerging technology that eludes precise definition.5 This article will focus on non-deterministic algorithms deployed with humans out-of-the-loop.

5 AI systems are, generally, either deterministic or non-deterministic algorithms. Deterministic algorithms produce the same output given the same inputs. To solve a problem, the algorithm merely applies pre-programmed rules to reach pre-set conclusions.6 As noted in B2C2 Ltd v Quoine Pte Ltd7 (“B2C2”), deterministic algorithms function “as they have been programmed to operate once activated”. Each step leading to the AI's conclusion is attributable to a decision made by the programmer.8

6 In contrast, a non-deterministic algorithm has “a mind of its own” — given the same inputs, the output may vary.9 AIs trained by certain machine learning (“ML”) algorithms are non-deterministic. ML involves conditioning software to recognise statistical patterns in the data it is fed, including a training data set.10 While ML algorithms using decision trees and Bayesian classifiers remain deterministic, others, such as artificial neural networks (“ANN”), do not.11
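
To make the distinction concrete, the following Python sketch contrasts a deterministic pricing rule, which always returns the same output for the same input, with an adaptive pricer that updates its internal state as it observes new data, so that identical inputs may yield different outputs over time. The pricing rules and figures are invented purely for illustration and are not drawn from any source cited above.

```python
# Purely illustrative sketch of the deterministic/non-deterministic contrast.

def deterministic_price(market_price: float) -> float:
    # Pre-programmed rule: always quote 1% below the current market price.
    # The same input always yields the same output.
    return market_price * 0.99


class AdaptivePricer:
    """Simplified stand-in for a system that keeps adjusting its internal
    weight after deployment as it observes feedback."""

    def __init__(self) -> None:
        self.weight = 0.99  # initial discount factor

    def price(self, market_price: float, observed_fill_rate: float) -> float:
        # Update the weight based on feedback before quoting, so repeated
        # calls with the same market_price can produce different outputs.
        self.weight += 0.01 * (observed_fill_rate - 0.5)
        return market_price * self.weight


pricer = AdaptivePricer()
print(deterministic_price(100.0))   # always 99.0
print(pricer.price(100.0, 0.9))     # depends on accumulated updates
print(pricer.price(100.0, 0.9))     # same inputs, different output
```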

A. Artificial neural networks and deep learning

7 ANNs consist of artificial neurons arranged in layers. After the first layer receives raw inputs, neurons in subsequent layers weight the inputs to produce an intermediate output which is passed to downstream neurons. Assuming the inputs are x₁, …, xₙ with weights w₁, …, wₙ, the intermediate output passed on is f(w₁x₁ + … + wₙxₙ).12 The above process repeats until the output layer generates the final output. Deep learning networks typically comprise thousands of layers, each containing numerous neurons.13
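
As a simplified illustration of the weighted-sum computation described above, the following Python sketch passes raw inputs through a hidden layer and an output layer. The layer sizes, weights and choice of activation function f are invented for this example; in a real ANN the weights are learned rather than hand-picked.

```python
import math

def f(z):
    # Activation function applied to the weighted sum; a sigmoid is used here
    # only as an example of "f".
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights):
    # Intermediate output of a single neuron: f(w1*x1 + ... + wn*xn).
    return f(sum(w * x for w, x in zip(weights, inputs)))

def layer(inputs, neuron_weights):
    # Every neuron in the layer weights the same inputs with its own weight vector.
    return [neuron(inputs, w) for w in neuron_weights]

x = [0.2, 0.7, 0.1]                                    # raw inputs to the first layer
hidden_weights = [[0.5, -0.3, 0.8], [0.1, 0.9, -0.4]]  # two hidden neurons
output_weights = [[1.2, -0.7]]                         # one output neuron

hidden = layer(x, hidden_weights)       # intermediate outputs passed downstream
final = layer(hidden, output_weights)   # the output layer generates the final output
print(final)
```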

8 Non-determinism results from the ANN's ability to adapt and learn on the go.14 Assume that the network is tasked with issuing an offer price for a particular stock. Possible inputs include the current market price, trading volume and the peak and trough price for a given period. However, as it encounters new data, the ANN may adjust each input's weight. Such adjustments are made pursuant to an optimisation algorithm, such as a gradient descent algorithm, which alters the weights to minimise the ANN's “error” (the difference between the network's offer price and the programmer's desired offer price).15 Continuous adjustments lead to non-determinism as the ANN may respond differently to the same inputs at different points in time. While an ANN may be restrained from learning post-deployment, this article does not consider such ANNs non-deterministic. Once deployed, such ANNs are unable to vary each input's weight. These weights would have been determined when the ANN was being trained by its programmer and fixed from that time onwards. In theory, once deployed, the ANN would provide the same output given the same input.
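
The following sketch illustrates, in deliberately simplified form, the kind of weight adjustment described above: each gradient descent step shrinks the “error” between the model's offer price and a desired offer price. The inputs, weights, target and learning rate are invented for illustration and do not reflect any actual trading system.

```python
# Toy gradient descent update for a single-neuron "offer price" model.
# All numbers are invented for illustration.

inputs = [100.0, 0.8, 1.2]    # e.g. market price, volume signal, trend signal
weights = [0.9, 2.0, -1.0]    # current weights
target = 95.0                 # the "desired" offer price
learning_rate = 1e-5

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

for step in range(100):
    offer = predict(weights, inputs)
    error = offer - target     # the ANN's "error"
    # The gradient of the squared error 0.5 * error**2 with respect to each
    # weight is error * x_i; step each weight downhill to shrink the error.
    weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]

print(predict(weights, inputs))  # much closer to 95.0 than the starting prediction
```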

B. The Black Box Problem

9 Non-determinism results from continuous adjustments to inputs' weights. Such non-deterministic ANNs are black boxes16 — while humans may observe the data inputted and the conclusions outputted, the machine's intervening logic, viz, why certain weights were assigned to each input and thus why a particular output was rendered, remains “in the dark” (“Black Box Problem”).17 Such opacity places the AI's future behaviour beyond the control and reasonable foreseeability of its programmers and users.18

10 Quite apart from the ANN's adaptive learning ability, the weights assigned to each input are themselves determined by non-linear relationships, discerned by the ANN, among the inputs.19 Unfortunately, while humans can envisage the relationship among two or three variables plotted on a plane or in space, complex geometric relationships among more variables elude human comprehension.20 Taking non-determinism and non-linearity together, it comes as no surprise that ANNs have been denounced as “completely unintelligible” even to their own programmers.21

11 With regard to the Knowledge Issue, a question arises as to whether the black box may be “opened” to explain the algorithm's reasoning and reveal what it “knew” or, failing which, how the law should determine if a party, who entered into a contract using a non-deterministic AI, had knowledge of its counterparty's unilateral mistake.

12 The Black Box Problem may arise regardless of whether the AI received supervised or unsupervised training. Even if learning is supervised, ie, the training data is labelled to indicate the desired outputs and thereby provide the foundation for the AI's decision-making, the AI can still adjust the weights it gives to each input if allowed to do so, thereby obscuring its logic. For unsupervised learning, the AI's logic is even more opaque as the system analyses unlabelled training data to identify outliers or correlations without any guidance — the programmer is not privy to why the AI clusters certain data together, or treats others as outliers.22
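
The contrast may be illustrated as follows. In the supervised sketch below, labels tell the system what output is desired; in the unsupervised sketch, the system groups unlabelled points without being told what any grouping “means”. The data, labels and clustering rule are invented and deliberately simplistic.

```python
# Purely illustrative: toy "supervised" and "unsupervised" procedures on invented data.

# Supervised: each training example carries a label indicating the desired output,
# so the programmer has at least specified what the system should aim for.
labelled_data = [([100.0, 0.8], "buy"), ([101.5, 0.2], "sell"), ([99.0, 0.9], "buy")]

def supervised_fit(data):
    # Learn a crude threshold on the first feature from the labels.
    buys = [x[0] for x, label in data if label == "buy"]
    sells = [x[0] for x, label in data if label == "sell"]
    return (max(buys) + min(sells)) / 2  # decision boundary implied by the labels

# Unsupervised: no labels at all; the system groups points by proximity,
# and the programmer is not told why any point ends up in a given cluster.
unlabelled_data = [100.0, 101.5, 99.0, 250.0]

def unsupervised_cluster(points, gap=10.0):
    ordered = sorted(points)
    clusters, current = [], [ordered[0]]
    for p in ordered[1:]:
        if p - current[-1] > gap:   # a large gap starts a new cluster / flags an outlier
            clusters.append(current)
            current = [p]
        else:
            current.append(p)
    clusters.append(current)
    return clusters

print(supervised_fit(labelled_data))          # threshold around 100.75
print(unsupervised_cluster(unlabelled_data))  # [[99.0, 100.0, 101.5], [250.0]]
```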

13 For clarity, an autonomous AI may be subject to varying degrees of human supervision. At one extreme, the AI may be deployed with humans out-of-the-loop, ie, without human intervention before the AI performs an action in the real world. Contrastingly, a human in-the-loop will make the final decision under the AI's “advisement”. A middle ground involves a human over-the-loop overseeing the task but only intervening when necessary.23 This article focuses on out-of-the-loop non-deterministic AIs as the Knowledge Issue is most nettlesome in this context. To some extent, this article is premature as most ANNs continue to receive human supervision.24 However, as the growth of AI computational power continues to outpace Moore's Law, this author seeks to pre-empt unfair reliance on non-determinism (and the Black Box Problem) to circumvent the unilateral mistakes doctrine in contexts such as algorithmic trading.25 To this end, the following sections will consider whether it is appropriate to broaden the grounds on which constructive knowledge and unconscionability may be established for the purposes of the unilateral mistakes doctrine and, if so, how this may be done.

III. The Knowledge Issue in unilateral mistakes

14 Singapore recognises the doctrine of unilateral mistakes in common law and equity as a vitiating factor in contract law.26 This article focuses on unilateral mistakes as to the terms of the contract.27 The tests for unilateral mistakes in law and equity will first be outlined before considering why the Knowledge Issue is problematic when the NMP contracts via a non-deterministic AI.

A. Unilateral mistake in law

15 The rationale underpinning unilateral mistakes at common law is best elucidated by the “promisee objectivity” theory endorsed by the majority in Quoine. Namely, there is no contract to enforce if the NMP knows that the MP did not intend to contract on the present terms; objectively, there is no consensus ad idem.28

16 Accordingly, to render a contract void ab initio for a unilateral mistake as to the contract's terms at common law, the MP must prove that:29

(a) it made a fundamental mistake as to a term; and

(b) the NMP had actual knowledge of this mistake.

17 The Knowledge Issue arises under element (b) where the MP must prove that the NMP actually knew of its mistake “from all the surrounding circumstances, including the experiences and idiosyncrasies of the [NMP]”.30 Further, actual knowledge is satisfied if the NMP was wilfully blind. Such Nelsonian knowledge arises if the NMP had a “real reason” to suppose the existence of a mistake, yet deliberately refused to make an inquiry he ought to have made — the NMP is deemed to know what he feared he might find.31

18 However, this author subsequently demonstrates that MPs will find it onerous to prove actual knowledge of the mistake when an NMP contracts via a non-deterministic AI. As a result, the court's equitable jurisdiction to declare a contract voidable will prove pivotal.

B. Unilateral mistake in equity

19 Where the contract is void under common law, equity cannot intervene.32 However, if the contract survives in law, equity still may render the contract voidable. As compared to its counterpart in common law, the rationale behind the doctrine...
