
University of New South Wales Law Journal Student Series



Koo, Katie --- "Navigating Legal Liability in AI-Driven Contracts: Who Should Bear the Error of AI Contracting Issues?" [2024] UNSWLawJlStuS 18; (2024) UNSWLJ Student Series No 24-18


NAVIGATING LEGAL LIABILITY IN AI-DRIVEN CONTRACTS: WHO SHOULD BEAR THE ERROR OF AI CONTRACTING ISSUES?

KATIE HEI-TING KOO

I INTRODUCTION

The commercial adoption of advanced technologies has incentivised businesses to incorporate Machine Learning and Artificial Intelligence (‘AI’) in automating their contracting processes. Despite its efficiency, adopting AI raises issues of unpredictability and explainability owing to its ‘black box’ decision-making process. Mik argues that liability should ultimately be attributed to the operator, as the operator can control and program the algorithmic system, which outwardly manifests their intent in the algorithmic contracts formed with users.[1]

This essay will critique Mik’s argument as it applies to generative AI, which can create unexpected contractual outcomes and complicate the attribution of liability for errors. It will first examine the limitations of Mik’s position in addressing the unpredictability of generative AI, whose machine-learning nature challenges traditional contract principles. It will then comment on the practical and moral pitfalls of attributing sole liability to operators, emphasising the continuing tension between legal protection and innovation. Acknowledging the limited scope of common law protection, the essay will explore the possibility of securing a party’s interests through an equitable approach. Subsequently, it will discuss the benefit of Australia’s lack of rigid AI-specific regulation in promoting flexibility in contract formation. Lastly, the essay will argue for legal reform that creates guidelines and standards favouring a hybrid approach of AI and human involvement when attributing liability. The reformed system should give weight to multiple factors, such as each party’s degree of control, the reasonable steps taken, and compliance with government guidelines, to balance the operator’s and users’ interests and risks.

II MIK AND AI

Transitioning contracts from physical to digital or autonomous platforms aims to enhance efficiency in resource allocation and repetitive document preparation, and to reduce human error. AI systems automate complex activities by recognising patterns using encoded rules, datasets and computer-processable information managed by operators.[2] Mik therefore explores three intriguing regulatory concepts in this changing legal landscape as AI and automated machines are adopted.

This essay supports Mik’s argument that AI and machines should be treated as tools within contracts rather than recognised as legal persons. Granting AI legal personhood is arguably impossible as AI lacks the legal characteristics of a natural person.[3] An AI system cannot bear the legal consequences of paying damages or obeying court orders as it has no ‘assets capable of sequestration’.[4] Hence, this essay’s analysis proceeds from viewing AI as the operator’s transaction tool, facilitating the contracting process.[5] Mik acknowledges that integrating AI can complicate the contracting process, particularly because of the unpredictability arising from its ‘black-box’ decision-making. However, Mik argues that unexpected outcomes are unlikely as the operator’s intention is ultimately embedded within the AI’s programming.[6] Building upon this, Mik advances a third argument asserting that operators should be liable for algorithmic errors as they are the controllers and creators of the AI model.[7]

Mik’s perspective appears cogent when considering machines that still require a certain degree of human involvement, enabling programmers and operators to limit the operational scope through computer code. However, Mik’s assertion becomes contentious as society moves into the generative AI era, with AI that modifies its output by collecting user data and refining its output through reinforcement learning.[8] The issues of intent and liability attribution become ambiguous when AI autonomously executes decisions using machine-learning algorithms. The subsequent sections address the shortcomings of Mik’s argument concerning generative AI, discussing unexpected outcomes and the complexity of liability attribution.

III THE UNEXPECTED OUTCOME

Wholesale delegation of the contract drafting and transaction process to generative modelling AI (‘generative AI’) can significantly heighten the probability of unexpected outcomes and contractual errors, causing distrust and ambiguity between contracting parties.

A Generative AI

Generative AI employs machine learning to train an algorithm that learns from new content through data collection and practical software application, generating and predicting new outcomes.[9] This can challenge traditional contractual elements because of the unpredictable nature of generative AI’s decision-making process.

With the full implementation of generative AI in the commercial realm, operators’ control over the generative AI progressively diminishes. An AI model has six main elements: the model itself, goals, training data, input data, output data and the operating environment.[10] The model and goal components allow the operator to establish parameters, mathematical functions and code that enable the generative AI to produce satisfactory outputs.[11] The model’s algorithms then run on training datasets that facilitate calibration and pattern recognition, which can later be applied to unknown values provided by users.[12] The operator can then configure the operating environment for the generative AI once it is deployed for practical use, determining the source and validity of incoming data.[13] The generative AI then learns from interacting with users through reinforcement learning, repetition, and trial and error.[14] Recent examples of generative AI include OpenAI’s GPT-3.5 model and GitHub Copilot (a generative code-completion tool), which can generate human-like output upon user request.
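For illustration only, the short Python sketch below shows one way the six elements described above might be represented in code. It is a minimal, hypothetical illustration (the class name, fields and the simplistic ‘learning’ step are the author’s assumptions, not drawn from any cited system); its purpose is simply to show how operator-defined goals and training data sit alongside user-supplied input and an operating environment that the operator only partly controls.

# Minimal, illustrative sketch of the six elements of an AI model discussed above
# (model, goals, training data, input data, output data, operating environment).
# All names are hypothetical; real generative systems are far more complex.
from dataclasses import dataclass, field


@dataclass
class GenerativeContractModel:
    goals: list[str]                      # operator-defined objectives
    training_data: list[str]              # operator-curated calibration data
    operating_environment: set[str]       # permitted incoming data sources
    learned_examples: list[str] = field(default_factory=list)  # grows with use

    def generate(self, user_input: str) -> str:
        """Produce a draft clause from operator rules plus everything learned so far."""
        basis = self.training_data + self.learned_examples
        return f"Clause drafted from {len(basis)} examples for: {user_input}"

    def learn(self, user_input: str, accepted_output: str, source: str) -> None:
        """Reinforcement-style update: user interactions feed back into the model,
        which is why the operator's original intent gradually ceases to determine
        every generated term."""
        if source in self.operating_environment:
            self.learned_examples.append(f"{user_input} -> {accepted_output}")


model = GenerativeContractModel(
    goals=["draft supply agreements"],
    training_data=["template clause A", "template clause B"],
    operating_environment={"user_negotiation", "past_agreements"},
)
print(model.generate("delivery terms for Party X"))
model.learn("delivery terms for Party X", "30-day delivery window", "user_negotiation")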

B Mik and Generative AI

Mik’s argument that the operator’s intention is embedded within the AI appears persuasive when AI engages in contract automation to a lesser extent. Such applications include ‘gap-filler’ algorithmic contracts that rely on generative AI to fill gaps in a standardised set of contractual terms, or agreements that only require the user’s signature on a pre-determined contract.[15] In this process, AI serves as a tool for sending offers of a finalised contract to the other party and executing the obligations in response to the user’s acceptance. Operators retain considerable control in determining the remaining contract terms, despite delegating specific gaps such as party identity, dates or other details to the algorithm. As such, the operator’s intention within these algorithmic contracts is obvious, as the operator retains most of the control during the drafting phase. However, the certainty and validity of contractual terms become ambiguous with the rise of negotiator algorithmic contracts, where generative AI actively dominates the negotiation phase and determines contract content or acceptance.[16]

C Loss of Operator Control

Given the lack of transparency regarding the sources and prior datasets that generative AI draws on in its decision-making, legal uncertainty arises as to whether the generated outcome truly encapsulates the parties’ original intent in the algorithmic contract. The generative AI’s capacity to learn implies a loss of operator control over the generated outcome despite pre-programmed limits. The data sources available in the operating environment are extensive: they can range from the program itself, past agreements produced by the algorithm, and announcements about legal changes, to information from stakeholders such as the government, company staff, organisations and academics.

Indeed, operators can control the output data range by asking the programmer to limit the code or the data sources the generative AI is authorised to learn from. It is also possible for operators to prescribe a set of legal terminology and essential clauses to be incorporated into the algorithmic contracts. These may include the contracting parties, contract objectives, rights and obligations of the parties, termination, and dispute resolution clauses. However, the loss of human oversight means there is no guarantee of the precision of the algorithm’s wording when drafting, or even that clauses tailored to specific users will be included. Whether the negotiating algorithm can precisely manage the flexibility required to determine specific details that maximise both parties’ interests is contentious. It is challenging to predict the appropriateness of the algorithm’s drafting language since the other party’s background can vary in expertise, profession, industry and cultural region. If the contract specifics are unpredictable and shaped by the loss of human control, is it still reasonable and valid to claim that the operator’s intentions are manifested in the AI-generated contract? The answer is arguably ‘no’.

D Risks of Unpredictability

The unpredictability of AI algorithms, and the resulting absence of intention, can challenge traditional contract principles and create legal inefficiency in the algorithmic system. Unpredictability can undermine the traditional principles of contract formation if the generated terms exceed the operators’ expectations and their intention to form legal relations. If the operator’s intent is absent from the contract, its validity and enforceability are debatable for want of essential contractual elements. It can spark legal disputes over unpredictable terms, potentially rendering the contract voidable or requiring the interpretation of ambiguous terms, which in turn burdens the court’s resources and time. Similarly, generative AI can generate supplementary clauses that impose unintended obligations on parties or include outdated information, inadvertently escalating litigation. The long-term repercussions can extend to reputational damage to the parties, lower public confidence in algorithmic contracts, and challenges to the general legality of AI-induced contracts. Hence, the unpredictability of ‘black-box’ decision-making creates serious legal tensions concerning accountability and available remedies when operators contend they lacked legal intention with respect to unexpected outcomes.

IV THE ATTRIBUTION OF LIABILITY

The inherent unpredictability of generative AI can further complicate the attribution of liability among contracting parties. Mik argued that operators should be primarily liable for deploying a system they cannot fully comprehend or control, so as to offer legal protection to users.[17] Indeed, applying strict liability can foster system safety and internalise economic risks to operators, protecting users who typically possess weaker bargaining power.[18] However, as Pasquale argues, attributing liability solely to operators may be insufficient because of the evolving nature of AI algorithms.[19] With generative AI actively determining contract terms through machine learning, operators need not preconceive and fully anticipate every possible state in advance.[20] Generative AI’s inherent unpredictability thus complicates society’s understanding of its decision-making process and data inputs. In traditional contracts, the objective theory forms the foundational common law principle for ascertaining the parties’ intention from their observable actions, such as words and conduct, rather than their underlying mental states.[21] It is thus impractical and unfair to view generative AI contracts solely through the lens of the objective theory, as explainability is important in attributing liability to the ‘right’ parties.

As Mik has identified, the objective theory interprets a party’s intention from the expressed terms or actions, yet the generated terms can be affected by multiple parties, such as the operators, consumers, or the generative AI’s own algorithm.[22] Given generative AI’s machine-learning nature and the possibility of consumers influencing the contract output during negotiation, there may be no transparency in the decision-making process that identifies which party influenced specific phrases or terms. With generative AI’s lowered explainability and lack of personhood, it is harder to manifest the operator’s intention in every contract except for standardised contracts with minimal AI assistance. It would therefore be unfair to apply this traditional theory, with its emphasis on the ‘reasonable person and mind’, to modern contracts when attributing liability to parties.[23] Generative AI’s lowered explainability thus complicates the apportionment of liability for contracting errors, which is heavily associated with the degree of control.

A Establishing Causation

Contrary to Mik’s argument, liability should be proportionate to the degree of control each party exerts during the contracting process, so operators who lack full control over generative AI should not be held strictly responsible for all errors. Reflecting the liability models proposed by organisations and academics, the vantage point should be determined by the amount of influence each party has over the generative AI leading to the unexpected outcome, on a case-by-case basis. This is especially important as there is unlikely to be a ‘one-size-fits-all’ solution or framework for generative AI automating contract formation across different subject matters and contents.

In particular, Zech and the European Commission have advocated a negligence-based liability model that depends on the degree of risk control exerted over the AI system.[24] The Commission has suggested a two-tiered approach to attributing liability. First, for generative AI whose design does not pose serious risks or disadvantages to other parties, the user should be responsible for choosing the appropriate AI and is liable for breach under a ‘fault-based’ regime. Second, if the generative AI carries increased risks to others, the operator should be strictly liable for the damage arising from its operation.[25] Indeed, apportioning liability for generative AI is problematic given its evolving character, which creates structural uncertainty about unknown risks whose fault cannot be pinned on specific parties.[26] This requires the claimant to establish causation between the unexpected outcome and the system error, and the fault-based liability to be determined by the extent of influence each party can exert over the system.[27] The categorisation of exerted influence can be broken down into multiple phases, namely (i) the internal design and model functioning phase; (ii) the machine learning phase; and (iii) the contract negotiating phase. Depending on the relevant phase, the court can give more weight, when establishing causation, to the parties with greater control at that stage.
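As a purely illustrative reading of this phased framework, the sketch below maps each phase to the party most plausibly in control at that stage. The phase labels, the parties listed and the numerical weights are hypothetical assumptions made for illustration; they are not the Commission’s or Zech’s actual scheme.

# Hypothetical sketch of phase-based causation weighting. The phases, parties
# and weights are illustrative assumptions only, not a proposed legal rule.
PHASE_CONTROL = {
    "design_and_model_functioning": {"operator": 0.7, "programmer": 0.3},
    "machine_learning": {"operator": 0.4, "data_provider": 0.4, "user": 0.2},
    "contract_negotiation": {"user": 0.6, "operator": 0.2, "algorithm": 0.2},
}


def weight_causation(phase: str) -> dict[str, float]:
    """Return the parties ordered by their assumed control in the given phase,
    reflecting the idea that causation is easier to establish against whoever
    held greater control when the error arose."""
    return dict(sorted(PHASE_CONTROL[phase].items(), key=lambda kv: -kv[1]))


print(weight_causation("contract_negotiation"))
# {'user': 0.6, 'operator': 0.2, 'algorithm': 0.2}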

This essay supports Mik’s argument to the extent that operators should be strictly liable only for errors resulting from the generative AI’s design and model functioning.[28] Operators’ control over generative AI remains relatively high at this stage, with strong causal links, given their ability to restrict the modelling of the generative AI and to regulate its data sources and operating environment. At this stage, viewing the ‘analytical vantage point’ from the users’ perspective is reasonable, as users are unlikely to exert any influence or control to safeguard their interests. The automated generative AI process has also shifted control from the operator to other parties, such as programmers, users, and external professionals providing the primary and secondary data sources.[29] Given this shifting nature of control, operators can contract with each of these parties and attribute liability subject to that contract or its liability clauses. Echoing the Commission’s and Zech’s models, liability can be attributed to the relevant parties based on the risks and the types of control they exercise, with the contract determining the types of errors for which each party is liable. For example, programmers can be liable for an inaccurate algorithm that caused programming errors, while external professionals can be liable for providing unauthorised or unverified data sources that compromised the accuracy of the generative AI’s training data during calibration. This can increase assurance and protection for individuals or parties using generative AI services. Hence, imposing liability on operators to the extent of system modelling can promote system maintenance and safety.

B Strict Liability – Legal and Ethical Tensions

However, attributing strict liability to operators for every unexpected output can lead to legal and ethical tensions. Where unidentified or unpredictable errors were not addressed in the contract, the opacity and reduced explainability of generative AI’s decision-making process complicate pinpointing the exact cause of the error within the sequential contracting process. The error may result from the generative AI’s inherent functions, the embedded database developed by programmers, or the user’s input during negotiation. Consequently, establishing legal causation between the operator and the output error becomes challenging beyond the agreed liability terms. Indeed, strict liability ‘casts a wider net’ than negligence liability by making a party, in this case the operator, responsible for all harm even where it has performed beyond the alleged duty and standard of care.[30] Although strict liability can offer more protection and assurance to individual consumers with weak bargaining power and limited resources for litigation, it is impractical and unethical to attribute strict liability to operators when unexpected errors occur during the implementation and operational phases.

1 Legal Tension

Strict liability ultimately generates conflicting interests between legal protection and innovation, causing economic inefficiency. Unjustified attribution of liability to operators can engender legal tensions such as increased litigation, user abuse of the generative AI system, and legal uncertainty. Mik’s argument favours users, who arguably have less bargaining power over generative AI, emphasising the importance of legal protection. However, imposing strict liability on the operator regardless of the error’s nature can pose significant risks to operators, especially in transactions involving critical commercial decisions and large monetary values. Where generative AI commercial agreements are adopted between large corporations, strict liability potentially allows a party to exploit the doctrine by recasting unfavourable contract terms as the operator’s fault in order to escape the agreement, even where the operator has performed beyond its reasonable level of care.

Given the nature of AI-induced contracts, operators might face both tortious liability for breaching their duty of care and contractual liability, attracting substantial penalties and damaging their reputations. This approach can increase litigation, costing both parties time and resources. From the perspective of civil practice and standards, the courts might also face a sharp increase in claims, especially where large corporations face serious legal or financial consequences should their strict liability be established, prolonging case management and hindering access to justice. Various reports also illustrate users’ struggle to obtain evidence about the system error and its security in order to discharge the burden of proving causation, which can disadvantage users in litigation.[31] In most instances, multiple operators will participate in system development, so the apportionment of liability can be complex when many defendants are involved.[32]

2 Moral Tension

Moreover, there could be moral tensions in attributing sole liability to the operator when errors are user-incurred. As Gerhart has argued, strict liability might not provide a transparent mode of legal reasoning in court, with judges mainly providing a basis for justifying their decision to apply strict liability.[33] This model might disrupt individuals’ trust in the judicial integrity of the legal system. It could also disincentivise individuals from seeking legal assistance when they face unjust treatment by another party. Imposing strict liability might instead deter operators from adopting advanced technologies in order to shield themselves from potential legal liabilities, ultimately stifling legal innovation across industries. This could leave a backward impression of the legal industry, causing economic inefficiency and reverting the dynamic to traditional processes that discourage technology and reduce productivity within the industry. Therefore, an extreme application of strict liability can exacerbate legal and practical inefficiencies despite the legal certainty it promotes.

C Explainability and Reasonable Care

To support a fairer attribution of liability among parties, emphasising explainability in generative AI-induced contracts is imperative so that liability can be assessed from both parties’ perspectives. Some industries are developing explainable AI initiatives, which aim to explain generative AI’s decision-making process and provide clear, supporting evidence to users.[34] Hypothetically, explainable AI can document output data and decisions, significantly assisting parties in comprehending the decision-making process. Consequently, it becomes easier to understand the interplay between the hardware, the data input and the operating environment, allowing liability to be attributed to the parties with the higher probability of having caused the error.[35] Likewise, the European Commission also suggested considering factors such as ‘the likelihood that the technology contributed to the harm; the risk of a known defect; the degree of traceability and intelligibility of the technology’s processes in collecting and generating data; and the kind and degree of harm’ when determining causation.[36] Encouraging the adoption of explainable AI in generative AI-induced contracts across industries can increase overall explainability between the parties. This can assist courts in analysing a case with tracked evidence between the parties, increasing the accuracy and reasonableness of attributing liability to the relevant parties. This approach arguably provides more flexibility and fairness in attributing liability for algorithmic errors.
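To give a concrete sense of the kind of documentation contemplated here, the sketch below shows a hypothetical decision record that an explainable AI layer might keep for each generated clause. The field names, file format and the log_decision helper are illustrative assumptions, not any existing explainable AI standard; the point is simply that recording the prompt, data sources and influencing party for each output would produce the ‘tracked evidence’ a court or party could later inspect.

# Hypothetical sketch of a per-clause decision record for explainability.
# The structure and names are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ClauseDecisionRecord:
    clause_text: str          # the generated output
    prompt: str               # the user input that triggered generation
    data_sources: list[str]   # sources drawn on (training set, past agreements, user data)
    influencing_party: str    # operator, programmer, user, or external provider
    timestamp: str


def log_decision(record: ClauseDecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record as one JSON line, building an auditable trail of how
    each clause was produced and which party most influenced it."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(ClauseDecisionRecord(
    clause_text="Delivery within 30 days of order confirmation.",
    prompt="Add a delivery deadline acceptable to Party X.",
    data_sources=["past_agreements", "user_negotiation"],
    influencing_party="user",
    timestamp=datetime.now(timezone.utc).isoformat(),
))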

Apart from referring to the contractual agreement between operators and other parties when attributing liability, courts should also consider the exercise of reasonable care by operators and users during the contracting process. Echoing Mik’s argument, users should be liable where they elect to accept the contract despite apparent faults in the algorithmic contract.[37] They should take reasonable measures to select, interact with and use the generative AI carefully. Users can bear partial liability if they fail to communicate noticeable faults to the generative AI or the operator before accepting the contract.[38] As such, despite the unpredictability of the black-box decision-making process, weight can be given to additional factors when apportioning liability, depending on the parties’ control and behaviour. This ensures fairer and more balanced accountability for both parties, promoting efficiency in contract automation. Hence, this essay argues that the analytical ‘vantage point’ should not be taken entirely from the users’ perspective. Instead, it should be determined according to the circumstances of the dispute on a case-by-case basis.

V COMMON LAW VERSUS EQUITABLE APPROACH IN REMEDY

Reflecting upon the unpredictability and liability attribution within generative AI-induced contracts, it is helpful to analyse whether an equitable approach can supplement common law by offering a more flexible remedial framework to mitigate the risks and liability of both parties.

A Common Law Approach

Parties will likely wish for the contract to be discharged or rescinded if the generative AI incorporated unintended clauses through its ‘black-box’ decision-making process. Generally, contracts can be discharged or rescinded at common law in a few circumstances: (i) termination for breach of an essential term or serious breach of an intermediate term;[39] (ii) repudiation where a party is unwilling or unable to perform its contractual obligations;[40] and (iii) rescission for the presence of vitiating factors. However, the limited common law remedial options hinder operators and users from obtaining adequate compensation. First, parties cannot rely on termination where there is no serious breach of a term or an express right to terminate.[41] Second, parties cannot rely on unwillingness or inability to perform the contract, as the repudiating party would be obliged to compensate the other party. Third, it is rare for parties to be able to raise other vitiating factors in these circumstances. Although parties can argue unilateral mistake, a party mistaken as to the contract’s nature can only seek the equitable remedy of rescission.[42] Likewise, raising other vitiating factors, such as misrepresentation and common mistake, is challenging. It is difficult to argue ‘common mistake’ regarding the contract’s subject matter when any modelling error is already attributed to the operator, as discussed.[43] It is also hard to prove fraudulent misrepresentation if both parties act in good faith, or innocent misrepresentation when the operator is not actively involved in the contracting process. Lastly, the limited common law remedies, such as damages, might not adequately compensate the parties’ losses: damages address a party’s monetary loss but not the potential impact on reputation, fairness and missed business opportunities. Hence, the inadequacy of common law remedies suggests the need for equitable remedies to intervene with a more flexible avenue for compensation.

B Equitable Approach – Undue Influence

This essay argues that parties may be able to raise the equitable doctrine of undue influence to expand the variety of available remedies. Undue influence concerns circumstances where a ‘party’s decision to confer a benefit on another is affected by the excessive influence and ascendancy of another over the party’.[44] It comprises two common types, namely actual and presumed undue influence. In Thorne v Kennedy, the court held that undue influence may be established by a presumption that the transaction was not the exercise of the party’s free will, having regard to the existence of a particular relationship or the involvement of a ‘substantial benefit’ in the transaction.[45] Proof of a particular relationship outside the court’s recognised categories usually requires ascendancy, dependency or trust.[46] However, the peculiar, novel relationship between algorithms and users challenges this core element of establishing undue influence, as the operators are technically not involved in the automated contracting process.

In response, Rizzi and Skead argue that the trust and deference a contracting party places in the generative AI to produce an appropriate and fair output is arguably analogous to a traditional relationship of trust, dependence and deference.[47] As such, owing to the trust both parties have deferred to the generative AI, the determination of certain unpredictable contract terms can arguably be unduly influenced,[48] allowing individuals to seek equitable remedies. Likewise, since actual undue influence does not depend upon an established relationship, it may be transaction-specific.[49] Parties can also raise actual undue influence if it arises from an external source, beyond both parties’ control, that led to the errors. If the parties satisfy the other required elements of undue influence, they can seek equitable remedies such as rescinding the contract or seeking an order for specific performance requiring a party to perform particular obligations.[50]

C Common Law versus Equity

Therefore, the equitable framework can offer greater flexibility in securing parties’ interests and reducing operators’ and users’ liability for unpredictable errors. Equity offers remedies beyond monetary measures, such as rescission and specific performance, that can better address the unpredictability arising from generative AI’s decision-making process. Compared to common law remedies, such as damages, which do not address a party’s non-monetary loss, equity allows parties more options to compromise their agreement. Parties can elect to set aside or affirm the contract, since the contract will be voidable upon proof of undue influence. This allows liability attribution to be justified on the basis of fairness and mitigates the risks that operators and users bear when unexpected errors are likely beyond the parties’ control. The flexibility of equitable remedies thus offers parties more options to compromise and resolve disputes arising from generative AI.

VI AUSTRALIA’S LEGISLATION

The lack of AI-specific Australian legislation has undoubtedly raised concerns about legal protection, yet this approach arguably promotes flexibility in contract-related interactions. Australia has yet to adopt AI-specific legislation, but some AI development and use falls within a patchwork of general laws. AI production and operation can raise copyright infringement issues under the Copyright Act 1968 (Cth)[51] or data privacy issues under the Privacy Act 1988 (Cth).[52] However, there is as yet no Australian legislation governing generative AI in the context of contract law and liability. Although parties can potentially frame their claims in equity through undue influence, as discussed, no Australian court has so far accepted this argument and applied an equitable approach to AI-induced errors. Indeed, strict legislation promotes legal certainty and prevents operators from abusing the operation of AI in contracts to gain unfair advantages. However, Australia’s lack of statutory limitation on generative AI in contracting arguably preserves the flexibility both parties want, and the opportunity to bargain for contract terms that best secure their interests.

A soft statutory approach can facilitate party negotiation, allowing parties to utilise generative AI fully to achieve their objectives. This flexibility can favour AI contracts of a commercial nature, enabling individuals to adapt and tailor their contracts to the changing commercial landscape. It can be counterintuitive to impose strict rules and a heavier regulatory burden on the use of AI when the aim of adopting AI is to increase economic efficiency, particularly as each contract’s goals are subjective to the contracting parties. Furthermore, adopting soft law such as AI-specific guidelines and standards can supplement the general contract law principles currently in place without overcomplicating the legal field. The Australian government has published eight AI Ethics Principles that provide a voluntary framework to promote AI safety, security and reliability.[53] The principles concern AI operations in fairness, privacy, safety, transparency and accountability.[54] As such, individuals and developers have guidelines against which to model their generative AI systems, which can drive public confidence and incentivise legal innovation in business operations.

VII RECOMMENDATION: HYBRID APPROACH

Therefore, Australia should adopt a hybrid approach to regulating the AI industry that reduces the risks and uncertainty around liability attribution and generative AI’s explainability. To tackle unpredictability, Australia should mandate, through hard law, the adoption of explainable AI practices that document the data source pool, facilitating higher explainability in AI products. This can effectively reduce the legal complexities in liability caused by generative AI’s unpredictable decision-making.

On the other hand, Australia should promote a hybrid approach to human intervention by requiring humans to sign and finalise the contract, given the serious commercial consequences of frequently voidable contracts. There could be further voluntary AI guidelines for the model-building process that outline the reasonable steps both parties should take when monitoring, controlling and framing contracts. The government can incentivise compliance with the voluntary framework by giving weight to these factors when attributing liability for unpredicted errors, potentially limiting operators’ liability where they have fully complied with the guidelines. Where the cause of an AI-related error cannot be traced or predicted, operators should develop a no-fault compensation scheme to compensate claimants for their loss, promoting user loyalty.[55] This flexible approach strives to balance the competing interests of legal protection and innovation in the industry. It allows parties to negotiate their contract terms freely while complying with the AI guidelines and current contract law.

VIII CONCLUSION

In conclusion, the recent development of generative AI in contract automation gives rise to contentious issues around its unpredictability and explainability, owing to its machine-learning and ‘black box’ decision-making processes. These issues create ambiguity in the contract formation process by removing human control, allowing generative AI to potentially automate contracts beyond the operators’ embedded intentions. This unpredictability further causes ethical and legal tensions in attributing strict liability for AI-induced errors to operators, as other stakeholders, including users, can cause the error in a process involving multiple parties. This can ultimately hinder the development of legal innovation and discourage parties’ confidence in the legal system and access to justice. The vantage point for attributing liability should therefore depend on the parties’ degree of control over the generative AI process when establishing causation, while the wide adoption of explainable AI instruments should be advocated to raise explainability and allow fairer attribution of liability to the relevant parties. This essay therefore argues that equitable remedies can expand the remedial options available beyond the common law approach: the doctrine of undue influence can arguably offer more protection to parties affected by AI errors, reducing the parties’ liability risks and promoting fairness in AI-induced contracts. Reflecting on the benefits of Australia’s current soft statutory approach to regulating AI in contract law, this essay proposes a hybrid approach that encourages human intervention and compliance with voluntary guidelines when operating generative AI contracts. This can offer more appropriate measures for regulating the Australian industry, promoting user safety and legal certainty while encouraging legal innovation for efficiency.


[1] Eliza Mik, ‘From Automation to Autonomy: Some Non-Existent Problems in Contract Law’ (2020) 36 JCL 205, 229.

[2] Harry Surden, ‘Artificial Intelligence and Law: An Overview’ (2019) 35(4) Georgia State University Law Review 1306, 1307–8.

[3] Mik (n 1) 212–3.

[4] Ibid 212.

[5] Ibid 223–4.

[6] Ibid 216–7.

[7] Ibid 225.

[8] University of Gothenburg (Sweden) et al, ‘Challenges of Explaining the Behavior of Black-Box AI Systems’ (2020) MIS Quarterly Executive 259, 265.

[9] Jiao Sun et al, ‘Investigating Explainability of Generative AI for Code through Scenario-Based Design’ in 27th International Conference on Intelligent User Interfaces (ACM, 2022) 212, 212 <https://dl.acm.org/doi/10.1145/3490099.3511119>.

[10] University of Gothenburg (Sweden) et al (n 8) 262–5.

[11] Ibid 262.

[12] Ibid 264.

[13] Ibid 266.

[14] Ibid.

[15] Marco Rizzi and Natalie Skead, ‘Algorithmic Contracts and the Equitable Doctrine of Undue Influence: Adapting Old Rules to a New Legal Landscape’ (2020) 14 Journal of Equity 301, 309.

[16] Ibid.

[17] Mik (n 1) 229.

[18] Herbert Zech, ‘Liability for AI: Public Policy Considerations’ (2021) 22(1) ERA Forum 147, 152 (‘Liability for AI’).

[19] Frank Pasquale, ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society Lecture’ (2017) 78(5) Ohio State Law Journal 1244, 1251.

[20] Zech (n 18) 148.

[21] Mik (n 1) 220; Joseph M Perillo, ‘The Origins of the Objective Theory of Contract Formation and Interpretation’ (2000) 69(2) Fordham Law Review 427, 427.

[22] Mik (n 1) 220–1.

[23] Perillo (n 21) 470.

[24] Zech (n 18) 154; Andrea Bertolini and Francesca Episcopo, ‘The Expert Group’s Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: A Critical Assessment’ (2021) 12(3) European Journal of Risk Regulation 644, 645.

[25] Bertolini and Episcopo (n 24) 645.

[26] Ibid 650.

[27] Ibid.

[28] Pasquale (n 19) 1248.

[29] Zech (n 18) 149.

[30] Peter M Gerhart, ‘The Death of Strict Liability’ (2008) 56(1) Buffalo Law Review 245, 246.

[31] Bertolini and Episcopo (n 24) 650.

[32] Ibid 652.

[33] Gerhart (n 30) 248.

[34] Bartosz Brożek et al, ‘The Black Box Problem Revisited. Real and Imaginary Challenges for Automated Legal Decision Making’ (2023) Artificial Intelligence and Law 1, 3 <https://doi.org/10.1007/s10506-023-09356-9>.

[35] Pasquale (n 19) 1252.

[36] Bertolini and Episcopo (n 24) 653.

[37] Mik (n 1) 225.

[38] Bertolini and Episcopo (n 24) 645.

[39] Hong Kong Fir Shipping Co Ltd v Kawasaki Kisen Kaisha Ltd [1961] EWCA Civ 7; (1962) 2 QB 26.

[40] Rawson v Hobbs [1961] HCA 72; (1961) 107 CLR 466, 481.

[41] Perri v Coolangatta Investments Pty Ltd [1982] HCA 29.

[42] Taylor v Johnson [1983] HCA 5; (1983) 151 CLR 422.

[43] McRae v Commonwealth Disposals Commission [1951] HCA 79; (1951) 84 CLR 377; Leaf v International Galleries (1950) 2 KB 86; Cooper v Phibbs [1867] UKHL 1.

[44] Rizzi and Skead (n 15) 302.

[45] Thorne v Kennedy [2017] HCA 49; (2017) 263 CLR 85, 109.

[46] Ibid.

[47] Rizzi and Skead (n 15) 322.

[48] Ibid 323.

[49] Ibid 324.

[50] Johnson v Buttress [1936] HCA 41; (1936) 56 CLR 113.

[51] Copyright Act 1968 (Cth).

[52] Privacy Act 1988 (Cth).

[53] Department of Industry, Science and Resources, ‘Australia’s AI Ethics Principles’ Australia’s Artificial Intelligence Ethics Framework (Web Page, 5 October 2022) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles>.

[54] Ibid.

[55] Bertolini and Episcopo (n 24) 657.

