
Law, Technology and Humans



Cesta, Will --- "The Regulation of Judicial Analytics: Towards a New Research Agenda" [2024] LawTechHum 12; (2024) 6(2) Law, Technology and Humans 69


The Regulation of Judicial Analytics:

Towards a New Research Agenda

Will Cesta

Sydney Law School, University of Sydney, Australia

Harvard Law School, Harvard University, United States

Abstract

Keywords: Judicial analytics; judicial statistics; judicial data; legal technology; regulation.

1. Introduction

In 2015, Judge Street of the Federal Circuit Court of Australia rejected an application for judicial review of a migration decision.[1] The unsuccessful applicant appealed to the Full Federal Court of Australia, alleging that the first-instance decision was biased. Such applications are common in administrative law, but this one had an evidentiary twist: the applicant adduced an affidavit – sworn by the editor of the Federal Court Reports – claiming that Judge Street had rejected 252 review applications out of 254 (or 99.21 per cent).[2] The appeal failed, but the statistic was reported widely throughout the Australian media.[3]

Four years later, the same judge was once again under the spotlight of statisticians. A research group based at Macquarie University found that the judge had rejected 830 of 844 applications for judicial review of migration decisions, and that 91 of these verdicts were overturned on appeal.[4] These statistics were again reported widely, generating public conversation about how the performance of judges is evaluated.[5] This time, the Federal Circuit Court intervened, assuring the public that ‘Judge Street is receiving mentoring to assist and support him to fulfil his duties’.[6]

This was not, however, the end of the matter. In 2022, Ghezelbash and colleagues published an article analysing the performance of 30 judges – including Judge Street – and calculating the success of applicants who argued before them.[7] The judge in question was said to have deviated from the median by 75.15 per cent. Yet again, these findings were thrust into the public domain.[8]

The practice of using quantitative methods to interrogate judicial data is not risk-free. As Appleby and Opeskin argued in rebuttal to Silbert’s statistical critique of the Victorian Court of Appeal, quantitative analysis can offer an ‘inaccurate and misleading account’ of judicial phenomena.[9] Their work highlighted ‘the dangers to public confidence in the judicial system of quantitative analysis that lacks methodological rigor’.[10]

Yet at no point did the Australian Government signal an imminent regulatory intervention designed to manage the risks of using quantitative methods to scrutinise judicial data.[11] Even in the United States and Canada, where a multi-million dollar ‘judicial insights’ industry has emerged, the practice has stayed off the regulatory agenda.[12]

Amid the Judge Street saga, however, one jurisdiction arrived at a markedly different position. In 2019, the French parliament criminalised the reuse of judicial identity data for analytical purposes. This move came in response to growing concerns about the abrogation of judicial independence and the unwanted strategic behaviour of litigants.[13] One might have expected France’s decisive regulatory intervention, which occurred against the backdrop of a broader push for artificial intelligence regulation, to have caused other governments to doubt the viability of regulatory silence, but this did not occur. One might also have expected a large body of scholarship on the regulation of the practice to have emerged, but again this did not occur. Work undertaken to date has focused on the benefits of analysing judicial data,[14] although a handful of exceptions have emerged.[15]

This article contributes to the emerging field concerned with the regulation of a family of practices that will be described as ‘judicial analytics’,[16] defined as ‘the use of data to monitor, understand and predict judicial behaviour’.[17] Given the field’s nascence, there are many available areas for study. There would be value in creating empirical evidence on the impacts of judicial analytics on individuals and societies, or in generating insights into how international regulatory frameworks ‒ like the European Union’s incoming Artificial Intelligence Act[18] – might apply to the practice. But a more foundational opportunity beckons: the field lacks a clear research agenda, and it is worth taking stock of what has been said and suggesting a way forward.

Part 2 traces the development of judicial analytics and its regulation. The aim of this section is to leave the reader with a clear sense of the varieties of judicial analytics and its history as an object of regulation. Part 3 undertakes a meta-analysis that evaluates the current state of knowledge, focusing on risks identified and regulatory interventions proposed. It finds that, while the literature has succeeded in ventilating key risks and generating conversation about potential regulatory strategies, it has not yet convincingly explained how judicial analytics should be regulated. The major critique presented here is that the existing literature has an epistemic problem: as there is so little empirical research on the impact of judicial analytics, it is difficult to appraise the magnitude of risks it creates, and therefore difficult to evaluate the appropriateness of regulatory strategies proposed by commentators.

Part 4 asks how researchers can help diverse jurisdictions move towards the informed and intentional regulation (or non-regulation) of judicial analytics. It proposes three priority actions: ensuring that the field self-identifies using more consistent terminology (at least ten aliases for ‘judicial analytics’ are in use, making relevant material difficult to find); augmenting sociotechnical speculative ethics with empirical analysis; and theorising regulatory success across diverse jurisdictions. These actions would lay the foundations for longer-term projects, such as analysing the adequacy of existing regulatory arrangements and experimenting with novel regulatory strategies.

2. The Varieties of Judicial Analytics and Its Largely Unregulated Expansion

There is nothing new about the practice of using mathematical methods to extract insights from data produced by or about judiciaries. The progenitors of modern forms of judicial analytics date back to 1805, when ‘tables of trials and assizes and quarter sessions’ started being tabled in English parliamentary papers.[19] Other jurisdictions soon caught on. By 1827, France had begun publishing detailed descriptive statistics on judicial decisions.[20] Throughout the nonage of judicial analytics, analysts principally used simple arithmetic operations to address simple questions, such as whether criminal convictions increased or decreased over a given period[21] and what the ratio of convictions to acquittals was in certain courts.[22]

The modern field of mathematical statistics – the foundations of which were laid between 1890 and 1930[23] – drove the diversification and increased the sophistication of judicial analytics. Rather than merely ‘juxtaposing statistics’,[24] analysts could use more precise techniques – such as calculating correlation coefficients – to answer increasingly complex questions. Jacoby, for instance, investigated ‘whether there are any definite correlations between the frequency of certain types of criminal litigations and the general character of the respective communities, ascertained by extrinsic data concerning the economic, educational, and welfare standard of the cities’.[25]
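To make the shift concrete, the sketch below computes a correlation coefficient of the kind used to relate litigation frequency to extrinsic community characteristics. It is illustrative only: every figure is fabricated for this example, and no real city data is used.

```python
# Illustrative sketch only: all figures below are fabricated.
# Computes a Pearson correlation coefficient of the kind used to relate
# litigation frequency to extrinsic community characteristics.
import numpy as np

# Hypothetical per-city figures: criminal filings per 10,000 residents
# and a composite economic-welfare index.
filings_per_10k = np.array([42, 55, 38, 61, 47, 70, 33, 58])
welfare_index = np.array([7.1, 5.4, 7.8, 4.9, 6.5, 4.2, 8.0, 5.1])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry
# is the coefficient between the two series.
r = np.corrcoef(filings_per_10k, welfare_index)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```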

By 1935, an ambitious vision for judicial analytics had crystallised – one in which data and mathematics informed court administration.[26] A large body of academic work had also built up around the concept of ‘judicial statistics’ (also known as ‘nomostatistics’).[27] This work was routinely praised for its capacity to generate valuable insights about the operation of law. The Dean of Yale Law School described reliable statistical insights into judicial affairs as ‘invaluable’.[28] Three years later, the Dean of Harvard Law School argued that ‘compiling and publishing ... intelligently gathered and adequately organised judicial statistics is an important item in a program of improving our administration of justice’.[29] Soon after, the Chief of the Division of Procedural Studies and Statistics within the Administrative Office of the United States Courts described statistics as ‘useful in improving the administration of justice, both in an individual court and in a court system’.[30]

However, twentieth-century scholars and commentators did not see the practice as entirely risk-free. For Clark, misleading statistics were ‘worse than none at all’,[31] while Pound expressed concerns about ‘bureaus seeking to make a showing in order to obtain increased appropriations’ and ‘well-meaning blunderers preparing tables with no clear idea of what use is to be made of them’.[32] Still, no one was calling for regulation – even when, as the twenty-first century approached, digital tools diminished the friction associated with analogue computation and sped up the rate at which judicial data could be analysed.

By the twenty-first century, computing power had grown significantly, enabling the digital implementation of complex mathematical principles. Perhaps the most important development was the use of machine learning – a process by which computers draw inferences from data rather than execute human-defined rules[33] – as an instrument of judicial analytics. Today, we can ‘yield insights that would be inaccessible using human cognition or traditional technologies’,[34] determining – for instance – how likely a plaintiff’s claim is to succeed[35] and how extrinsic legal factors, such as economic climate or the time of day, influence legal outcomes.[36]
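As a minimal sketch of machine learning in this setting – not a description of any vendor’s actual system – the following fits a logistic regression to fabricated case features and estimates the probability that a new claim succeeds. The feature names and all data are invented for illustration.

```python
# Minimal sketch with fabricated data; not any vendor's actual system.
# The model learns a decision rule from historical cases instead of
# executing human-defined rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical encoded features per past case:
# [log of claim amount, key precedent cited (0/1), hearing hour].
X = np.array([[4.2, 1, 9], [5.1, 0, 16], [3.8, 1, 10],
              [6.0, 0, 15], [4.9, 1, 11], [5.5, 0, 14]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = claim succeeded

model = LogisticRegression().fit(X, y)

# Estimated success probability for a new, hypothetical claim.
print(model.predict_proba([[4.5, 1, 10]])[0, 1])
```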

Another critical twenty-first century development for judicial analytics was the identification of its significant economic value and potential for commercialisation. What was once a tool used by governments to understand and optimise the administration of justice became a commercial service rendered by firms such as Lex Machina.[37] Insights into the judiciary are now for sale throughout the United States, Canada and Europe, and some scholars have anticipated the emergence of ‘mainstreamed judicial analytics’.[38]

Many of these emerging services are described as ‘predictive’ rather than ‘descriptive’.[39] However, the line between these modes of analysis should not be drawn too clearly: to say that a judge consistently finds a particular type of argument or precedent compelling ‒ which is formally descriptive ‒ has some predictive utility.
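The point can be illustrated simply. In the sketch below (hypothetical counts only), the same number serves first as a description of past behaviour and then as a naive forecast:

```python
# Illustrative only: hypothetical counts for a single judge.
accepted, rejected = 37, 13

# Descriptive claim: the judge has accepted this type of argument
# 74 per cent of the time.
rate = accepted / (accepted + rejected)
print(f"Historical acceptance rate: {rate:.0%}")

# Predictive reuse of the same statistic: treat the historical rate as
# a crude estimate that the argument will succeed in the next case.
print(f"Naive forecast for the next case: {rate:.0%}")
```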

The most important regulatory event in the history of judicial analytics is France’s intervention in 2019. The French parliament decided, based on concerns about unwanted strategic behaviour and pressure on judges, that the practice of reusing judicial identity data for analytical purposes should be criminalised.[40] Calls for other forms of regulation – although certainly not of the French kind – soon followed.[41]

3. On the Regulation of Judicial Analytics: A Meta-Analysis

Most of what has been written about judicial analytics extols its virtues or explains how it can be utilised for the benefit of legal service providers and their advisees. In other words, it is not concerned with the regulation of judicial analytics, but rather its utility. There are, however, a handful of scholars who have started thinking about how the practice can be guided and constrained by the state or other regulatory actors. These scholars include McGill and Salyzyn,[42] Stewart and Stuhmcke,[43] and Steponenaite and Valcke.[44] This section asks whether the literature has distilled the opportunities and risks that need to be incorporated into regulatory frameworks, given adequate consideration to the exportability of France’s bold regulatory strategy, and proposed viable alternatives.

3.1 Opportunities

Legal knowledge is not distributed uniformly throughout societies: some people are in a position of epistemic legal inferiority relative to others. Those who can pay for high-quality legal advice, for example, are likely to identify the strongest legal arguments available to them; and those who spend their working lives in and around courts are better placed to anticipate case outcomes. Judicial analytics is often portrayed as a tool for reducing this epistemic gap by creating useful proxies for traditional forms of legal knowledge.

Contributors to the literature on the regulation of judicial analytics have recognised that the practice could ‘provide an opportunity for the public to better critique and more effectively operate within the justice system’[45] by exposing bias[46] and revealing how extra-legal factors influence decision-making.[47] It could also ‘help litigants decide whether to bring cases’[48] and provide judges with a way of evaluating their own performance.[49]

There is a vast opportunity-oriented literature that could be reviewed. However, a more pressing task for an article on the regulation of judicial analytics is to consider salient risks, which are markedly more contentious.

3.2 Risks

One of the focus areas of the emerging field on the regulation of judicial analytics is identifying the risks created by the technology. Concerns ventilated include misinformation, pressure on judicial independence, threats to the wellbeing of judges, the gamification of law, forum shopping, consumer vulnerability and inequity. This section will evaluate commentary on these risks.

Risk 1: Misinformation

One of the central risks identified in the literature is ‘the propagation of inaccurate or misleading information about judges’.[50] Its potential causes include incomplete and low-quality data,[51] errors in programming[52] and user misinterpretation.[53]

Two main ways of conceptualising this problem have emerged. One assumes that misinformation is inherently bad (or that the truth has inherent value and should prevail where possible). The other focuses on the risk of unjustifiably engendering mistrust in the judiciary.[54] It has been contemplated that the public, armed with a ‘multitude of data points’, could ‘become skeptical about whether the legal system is yielding legally correct or fair decisions’.[55] Unless these data points are accurate, this loss of trust would be unwarranted.

Some might argue that there is no real risk here – that misinformation is ubiquitous, and that it is incumbent upon each member of a society to verify the truth of what they read or hear. Libraries, to say nothing of web servers, are littered with claims now accepted to be false, and this is not always a bad thing: the path towards truth is paved with disproven ‘facts’.

Yet there are domains in which misinformation is seen as a risk worthy of regulatory attention. Consider, for instance, the law of misleading and deceptive conduct. It could be argued that misinformation produced by judicial analytics systems is analogous to this example of regulated misinformation. For one thing, the stakes are high: confidence in public institutions – like confidence in markets – matters. For another, ‘self-checking’ is not always possible. This might be because claims are based on outputs from notoriously opaque machine learning systems that rely on abstract and voluminous calculations that cannot be held in the human mind,[56] or because it would be a monumental task for a concerned citizen to scrutinise the data underpinning the claim, even assuming that the data has been made publicly available. However, the literature must do more to explain why misinformation is undesirable, ideally empirically demonstrating theoretical concerns. It must also say more about whether judicial analytics significantly augments the existing problem of ubiquitous misinformation.

Risk 2: Threats to Judicial Independence

It is widely accepted that performance metrics can ‘modify organizational behaviour and influence key decisions’.[57] Should we be concerned about such metrics exerting unwanted pressure on the judiciary? While the tenure of judges is generally not contingent upon them satisfying official key performance indicators, perhaps publicly available metrics generated through judicial analytics have a similar effect. Indeed, the literature has presented some evidence that judges ‘alter their judicial decision-making in the name of advancing their careers’,[58] and perhaps the increased transparency created by judicial analytics could increase the pressure to conform.

There are, however, a few rebuttals worth considering. One is that the empirical evidence base is thin: to the author’s knowledge, there is no empirical work on the extent to which judicial analytics increases pressure to conform. This work would also be difficult to produce. Isolating the impact of judicial analytics would be challenging given that there are numerous other pressures on the independence of judges. As Shetreet points out, ‘It is now commonplace for the media, politicians and commentators to attack judges when they disagree with the results of judicial decisions, such as sentences handed down by the courts, particularly in highly publicized trials.’[59]

Further, there is some empirical evidence that casts doubt on the theory. In Australia, a recent national survey of 142 judges indicates that, from the perspective of the judiciary, the most significant issue relating to judicial independence is the appointment of acting judges (no concerns were raised about ‘judicial analytics’).[60] Yet if similar studies were run in the United States – where websites rank judicial performance, the Chamber of Commerce ranks courts (and judges) by performance, and legal reform institutes run radio ads about ‘ranking drops’ of particular courts[61] – we might expect judicial analytics to be on the list of judges’ concerns.

It is worth adding that not all metric-induced pressure on the judiciary is necessarily bad. Consider, for instance, Patrick’s widely publicised finding that a particular judge ‘takes an average of 108 days to write his judgments’:[62] it could be argued that this exerted a useful form of pressure that serves to hasten the administration of justice.[63]

Pressure from metrics is not, however, the only threat to judicial independence contemplated in the literature. Steponenaite and Valcke point out that evidence adduced about the most likely outcome of the case at hand could undermine the impartiality of judges. One potential source of pressure is the ‘automation bias’, which they define as ‘the propensity to uncritically favor automated decision-making systems’.[64] They suggest that the presentation of statistical evidence could ‘[cast] doubt on the impartiality of a judge both in terms of his or her mind and his or her appearance’.[65] But like claims about the other means by which judicial independence could be abrogated, this one lacks an empirical foundation.

The idea that judicial analytics could threaten judicial independence is plausible and should be taken seriously, but it is not supported by strong evidence. Empirical study of the impacts of the practice in different jurisdictions is overdue.

Risk 3: Reducing Judicial Wellbeing

McGill and Salyzyn argue that judicial analytics could diminish the wellbeing of judges. After reminding readers that the Bench is a workplace, they consider whether the practice could create ‘a new form of workplace surveillance’.[66] A particular area of concern is the weaponisation of judicial analytics tools against ‘outsider judges’:

In the social sciences, there is a rich literature demonstrating the many ways that differently-situated people are exposed differently to the dangers of surveillance. There are numerous examples of intense scrutiny of racialized judges who reference or challenge racism in the legal system.[67]

A further concern is whether judicial analytics can increase existing scrutiny and augment existing pressure on judges. An undesirable scenario would be that the intense scrutiny that follows politicians becomes an inevitability of judicial life. This shift could disincentivise lawyers from pursuing judicial careers and diminish the wellbeing of those who do. Not all will sympathise, but this consideration should not be overlooked. Once again, further empirical study must be undertaken across different jurisdictions to explain this risk more precisely.

Risk 4: Non-Normative Thinking

Judicial analytics encourages a non-normative way of conceptualising legal disputes. The promise of ‘litigation analysts’ such as Premonition and Lex Machina is not to identify the best available legal arguments but to provide insights that enable litigants to make optimal strategic decisions. Game theorists have repeatedly demonstrated that tactical behaviour is already a salient feature of legal practice,[68] and some have gone so far as to encourage ‘a rigorous focus on strategic behavior’.[69] However, as contributors to the conversation about the regulation of judicial analytics note, the notion that law should be understood as a game – and that we should create tools that augment the gamification of law – is not beyond reproach.

The risk at hand is that substituting statistical proxies for normative modes of reasoning could harm certain participants in the legal system – or as Zalnieriute and Bell put it, there is a risk that ‘individual differences or nuances of a case are overlooked in pursuit of a machine-generated pattern’.[70] Of course, the performance of models has become more nuanced over time; but even so, models are trained on historical data and cannot be designed to anticipate every deviation from the norm that judges – in the exercise of discretion – may make.

But who or what, specifically, stands to be harmed by this practice? There are some ambiguities in the literature. One plausible answer is the legal system as a whole: if cases are approached by litigants as a game of probabilities and this encourages settlement, the opportunity to develop or modify norms in the pursuit of justice may be compromised. However, the notion that litigation exists for the benefit of the community rather than the individual seeking justice is open to doubt.

Another concern is that if predictive systems were used as a proxy for real decisions, litigants would be receiving a sub-par form of adjudication. However, it is not clear that such systems are in fact being used. While it is often said that autonomous data-driven decision-making systems are being used to decide cases in Estonia,[71] the Estonian government has expressly denied such claims.[72] Other kinds of predictive systems are in use – such as the COMPAS recidivism predictor[73] – but they are not trained on judicial data, so fall beyond the scope of ‘judicial analytics’.

Once again, more work is needed to explain the ostensible risk of non-normative thinking and why regulators should care about it.

Risk 5: Unwanted Strategic Behaviour

France’s decision to ban judicial analytics was justified by reference to two core concerns. One was threats to judicial independence; the other was ‘strategic behavior by litigants’.[74] The principal concern here appears to be forum shopping: using insights gleaned through judicial analytics to determine the optimal jurisdiction in which to file a statement of claim. Some commentators accept without hesitation that ‘forum shopping’ is undesirable insofar as it undermines the authority of state legal systems and clogs up certain courts.

However, whether judicial analytics augments forum shopping is open to doubt. Many courts randomly allocate cases, reducing the strategic advantage of knowing which judge is likely to be sympathetic to one’s case (although this insight may provide leverage in settlement negotiations once cases have been allocated). Further, similar knowledge might already be possessed – legal practitioners typically have a strong working knowledge of the jurisdictions in which a litigant’s claims are more likely to succeed. As Stewart and Stuhmcke note, ‘It might be argued that this is simply a more reliable evidence-based way of doing what has always been done.’[75]

Complicating matters further, it could be said that some instances of forum shopping are ethically justifiable: perhaps an instance in which a litigant avoids a demonstrably biased judge by filing in a more favourable jurisdiction is consistent with principles of justice, not in opposition to them.[76] More work needs to be done to explain how judicial analytics could increase unwanted strategic behaviour and what makes it ‘unwanted’.

Risk 6: Harm to Consumers

One of the central promises of traditional professions like law – indeed, the justification for the exclusive service provision rights they are granted – is consumer safety. As Susskind and Susskind observe:

Professionals are licensed to undertake particular categories of work. This effective monopoly is granted by law and is generally justified by reference to the protection it affords members of the public. Only doctors can prescribe certain medicines, so that patients can be assured the drugs they consume are not dangerous. Only auditors are authorized to assure the accuracy of the financial statements of public companies, so that shareholders can confidently make investment decisions.[77]

Statistical advisers and computer scientists are not, however, ‘professionals’ in the strict sense of the word: there is no ‘Society of Judicial Analysts’ that is empowered to revoke a person’s right to render analytical services. Despite this, the risks of low-quality analytics services are similar to the risks of low-quality traditional legal services. Poor litigation decisions can result in significant financial detriment irrespective of whether they are based on quantitative analysis or the intuitions of a lawyer.

Some scholars have predicted that ‘if judicial analytics tools become mainstreamed [and] cheaper to create and deliver, there is more risk of poorly developed tools’.[78] In order to understand the extent of this risk, we must determine whether the law could remedy harm suffered. If a law firm provides wildly inaccurate advice based on a faulty judicial analytics system, presumably it will be accountable under ordinary principles of legal profession law. However, if a start-up comprising non-lawyers offers a similar service – while insisting that it is not offering ‘legal advice’ but mere statistical information – do consumers have the same kind of remedy they would have against a legal service provider? It is not clear. These details must be clarified because the answer will shape the risk of consumer harm: the availability of legal remedies may deter high-risk ventures and will shape our appraisal of the cost of the risk materialising.

Risk 7: Inequity

Well before the commercialisation of advanced analytical tools, strategic advantages were enjoyed by some litigants but not others.[79] Concerns about the injustice of judicial analytics should therefore be understood as the potential augmentation – not creation – of resource inequality. The question is therefore whether it is unfair that one client can use their resources to access analytical services that may increase existing disparities. Some will argue that everyone should be free to apply their resources however they like, and that if one wishes to spend their own money on the advice of a judicial analytics firm, that is their prerogative. But this is not the only way of seeing the problem.

Stewart and Stuhmcke worry that ‘predictive judicial analytics will likely become a tool used by the most seasoned and advantaged litigants who have access to sophisticated data analytics systems’.[80] Judicial analytics may become ‘yet another factor advantaging the most capable litigants over others’.[81] Similarly, Steponenaite and Valcke worry that if judicial analytics were to offer a ‘procedural advantage’, this might abrogate the right to a fair trial.[82]

On the other hand, it could reasonably be said that judicial analytics could – if its costs came down over time – decrease resource disparities by providing a different (and cheap) way of conceptualising legal problems. Indeed, some suggest that ‘tools that were previously the domain of “high-end” segments of the legal industry and pockets of the academy will become more easily accessible to the public’.[83] If so, the real problem is how we deal with the transition period while costs are decreasing. Yet again, it is difficult to assess the ostensible risks identified. Further study of the impact of resource inequality in specific jurisdictions is required before we can draw firm conclusions.

Limitations of Risk Analysis Undertaken to Date

Before turning to commentary on France’s regulatory approach, it is worth noting three themes that emerged from the above analysis. The first is that study of the regulation of judicial analytics has an epistemic problem. Scholars have so far relied on a mode of analysis described in a recent Google DeepMind collaboration as ‘sociotechnical speculative ethics’, which involves ‘evaluat[ing] potential paths and outcomes in light of the best available evidence about the current state of affairs’.[84] The trouble is that the ‘best available evidence’ is limited. All that can reasonably be said is that there is a rational basis for believing that judicial analytics is a cause for concern.

The second is that research has largely been carried out at the trans-jurisdictional scale. While the literature occasionally differentiates between the risks in different jurisdictions, it has not gone far enough to recognise the deeply context-dependent nature of these risks (or opportunities). And the third is that this risk analysis has not factored in how existing regulation shapes risk. There is, in sum, more work to be done.

3.3 Should Judicial Analytics be Criminalised?

Before turning to the regulatory arrangements recommended by contributors to the conversation about the regulation of judicial analytics, it is worth reviewing commentary on the most significant regulatory intervention to date: France’s passage of Article L111-13 of the Justice Reform Act in 2019. It provides that:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices’, with a maximum penalty of five years’ imprisonment.[85]

While some commentators have described Article L111-13 as being concerned with the ‘publication’ of judicial analytics,[86] this is imprecise. The provision does not only constrain the dissemination of information; it prohibits any ‘réutilisation’ of judicial identity data. The word ‘réutilisation’, which roughly translates to ‘reuse’, distinguishes between two kinds of use: the first is identifying who wrote a judgment; the second is using that information for any other purpose, such as building predictive models and selling insights. The effect of the provision is that analysts must ignore – that is, treat as non-existent – any data that reveals the identity of the judge when conducting analysis on judicial data. It would be unlawful, for example, to determine the extent to which the gender or political persuasion of judges influences decisions, because this insight could only be generated if the identity data of the judges were ascertained and incorporated into a model.

For the avoidance of doubt, not all judicial analysis is prohibited. Article L111-13 permits analysis of the rate at which a particular court finds in favour of plaintiffs because this insight does not rely on the ‘reuse’ of a particular judge’s identity data. However, the provision captures the overwhelming majority of practices, including prediction services offered by firms such as Lex Machina and academic work on correlations between judicial demographics and decisions.[87]
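The boundary the provision draws can be expressed schematically. The sketch below contrasts a permitted court-level aggregation with the prohibited judge-level equivalent; the records are invented, and this is an illustration of the distinction as described above, not a statement of French law.

```python
# Schematic illustration of the permitted/prohibited boundary described
# above. Records are invented; this is not a statement of French law.
cases = [
    {"court": "Tribunal A", "judge": "Judge X", "plaintiff_won": True},
    {"court": "Tribunal A", "judge": "Judge Y", "plaintiff_won": False},
    {"court": "Tribunal A", "judge": "Judge X", "plaintiff_won": True},
]

# Permitted: court-level aggregation that never reuses judge identity.
court_rate = sum(c["plaintiff_won"] for c in cases) / len(cases)
print(f"Tribunal A plaintiff success rate: {court_rate:.0%}")

# Prohibited: the same aggregation keyed to a named judge, because it
# 'reuses' judicial identity data to evaluate or predict behaviour:
#   judge_x = [c for c in cases if c["judge"] == "Judge X"]
#   judge_x_rate = sum(c["plaintiff_won"] for c in judge_x) / len(judge_x)
```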

Do scholars think this approach should be exported? Their resounding answer is ‘no’. Concerns about the viability of exporting France’s scheme include legal incompatibility and disproportionality. Taking Canada as an example, McGill and Salyzyn are dubious that the French ban would be permissible given its ‘constitutional protection of freedom of expression’, not to mention its ‘strong open courts principle’.[88] Further, most scholars do not want to see judicial analytics disappear, implicitly accepting Lawrence Friedman’s oft-quoted remark that, ‘Law is a massive, vital presence in the United States. It is too important to be left to the lawyers – or even to the realm of pure thought.’[89] As Stewart and Stuhmcke summarise, the French approach is ‘extreme’, and ‘Given that data analytics is a useful tool and in terms of the open justice principle, upholds the rule of law’, prohibition is not a viable regulatory response.[90]

3.4 Alternative Regulatory Strategies

Four main regulatory strategies have been proposed as alternatives to criminalisation: ethics frameworks, trustmarks, government-created judicial analytics models, and judicial education. This section explores the strengths and weaknesses of these proposals.

Ethics Frameworks

One method of regulating judicial analytics is to promote ethical guidelines, which involves ‘set[ting] standards for the creation, development and use of judicial predictive analytics by academics, publishers, legal commentators and government’ in order to ‘heighten awareness of the use of data analytics and the responsibility that accompanies it in law as much as in any other discipline or area of social interaction’.[91]

A strength of this approach is that it would not restrict forms of judicial analytics that have the potential to increase the transparency of the administration of justice and enhance public confidence in the judicial process, such as academic research on judicial decision-making.[92] Further, the basis for pursuing it does not depend on the resolution of epistemic concerns raised above: the initiative would not restrict the use of technologies or create any other mandatory obligations. It is therefore difficult to see why it should not be pursued.

However, one might doubt whether this approach alone is sufficient. Consider the impact of ethical standards on a journalist who is interested in breaking a story about an ostensibly biased court. Let us assume that they have done some rough calculations on correlations between sentence lengths and the ethnicity of offenders and are aware of ethical guidelines stating that any quantitative analysis on judicial matters should ‘avoid disproportionate risks’. Would legally optional standards deter careless or inflammatory reporting?[93]

A further problem is that ethical guidelines often afford significant scope for rationalisation. Consider a data science firm that becomes aware of the guidelines and takes seriously its ethical obligations to avoid unreasonable risks: the firm might argue that it believes so deeply in the value of transparency that it sees all externalities of its work as being proportionate (just as Meta, X and other platforms ostensibly believe so strongly in the value of connecting people that they deem the spread of misinformation by their users to be proportionate).

There is little doubt that ethical guidelines would be beneficial, but they may not be a sufficient regulatory strategy for balancing the opportunities and risks of judicial analytics.

Trustmarks

A strategy endorsed by McGill and Salyzyn is creating ‘trustmarks’, which they describe as follows:

This method would convene a group of experts to develop appropriate standards and procedures to evaluate the quality of judicial analytics tools. Providers of judicial analytics tools could be incentivized to participate in the certification process with the promise of being able to use a trustmark if they meet the required standards. The value of a trustmark to legal technology providers is the ability to easily signal to the public that they are providing a high-quality tool. The public would also benefit from this signalling: they could quickly distinguish which judicial analytics tools have met certain standards and which have not.[94]

Testing algorithmic systems prior to deploying them has precedent. It is now undertaken in the Netherlands[95] and Canada[96] for systems deployed in the public sector and is one of the cornerstones of the incoming European Union approach to regulating high-risk systems deployed by private entities.[97]

However, there are several limitations of the scheme described. First, unlike the Dutch, Canadian and European Union examples, the approach envisaged is an opt-in one. It could therefore be seen as an initiative that lacks ‘teeth’. As the proponents of the framework acknowledge, private certification is expensive, so unless it is mandatory it is unclear why providers would pursue the option.

The second limitation is that it does not address all the issues outlined – its focus is squarely on improving the performance of the technology. Certification may partially address consumer vulnerability and misinformation. Yet concerns like unwanted strategic behaviour and non-normative dispute resolution would be augmented, not reduced, by the validation of systems’ accuracy: the better they perform, the more advantageous they are to litigants and the more likely they are to be used as a proxy for legal advice.

Non-Profit Models

A further idea suggested by McGill and Salyzyn is developing non-profit judicial analytics tools. This is an interesting idea that has since been pursued in other domains. Bello y Villarino, for instance, has called for ‘the development of an education-specific foundation AI model driven and directed by public authorities’.[98] The rationale for this is clear: foundation models like BERT, GPT-4 and Claude were designed to be useful across a variety of domains and may perform sub-optimally in educational settings.[99]

However, the case for a state-built judicial analytics model is weaker. Unlike in education, commercial incentives for building high-quality judicial analytics systems exist and the major companies that have chased these incentives – such as Lex Machina – would be hard to beat.[100] On the other hand, a free, publicly available judicial analytics model would at least reduce potentially detrimental reliance on cheap, second-tier models. Still, it is not yet clear why governments would invest in this project. Access to justice, like education, is of critical importance – but the proposition here is not to invest in access to justice per se, but rather access to a proxy for legal advice.

Judicial Education

Some scholars are concerned that judges may not be prepared for the task of critically evaluating the outputs of judicial analytics systems.[101] A solution proposed by Steponenaite and Valcke is judicial training.[102] The strength of the proposal is that its putative causal nexus to the problem is clear: in theory, more education would reduce misinterpretation of statistical evidence. However, yet again the evidence base for this proposal is weak. Would training in fact ensure that judges avoid making erroneous presumptions? Scholars working at the intersection of psychology and law may be able to answer this question, but they have not yet contributed to the conversation.

The Need for Further Research

Until we strengthen the theoretical and empirical foundations of our study of judicial analytics ‒ at both trans-jurisdictional and intra-jurisdictional scales ‒ it is hard to say what would constitute an optimal regulatory framework. As scholars on the regulation of judicial analytics have acknowledged, compelling answers to fundamental regulatory questions – such as whether a restriction on the freedom to deploy a technology is proportionate to risk reduction – depend on a stronger evidence base than we presently possess.[103]

4. Towards a New Research Agenda

Foundational work on the regulation of judicial analytics has been undertaken by a small network of scholars working across North America, Europe and Oceania. This work has yielded a catalogue of risks worth taking seriously and regulatory strategies worth exploring. However, this remains a nascent area of research, and more work needs to be done before firm conclusions about the risks and regulation of judicial analytics can be drawn. This section suggests three priority actions for the field – steps that will lay the foundation for a more robust debate about how to regulate judicial analytics.

4.1 Priority Action 1: Adopt Consistent Terminology

The practice of using mathematical methods to extract insights from data produced by or about judiciaries has been described in remarkably diverse language: ‘judicial statistics’,[104] ‘jurimetrics’,[105] ‘legal analytics’,[106] ‘arbitral analytics’,[107] ‘judicial profiling’,[108] ‘litigation analytics’,[109] ‘nomostatistics’,[110] ‘judiciary statistics’[111] and ‘judge analytics’.[112] A few patterns of usage can be discerned. One is that the term ‘statistics’ was used more commonly in the pre-digital era, while ‘analytics’ came into use after the advent of software-driven analysis. Another is that some of these terms differ in scope: ‘legal analytics’ is a parent concept encompassing various kinds of quantitative analytical practices. However, beyond that, the terminological diversity is inexplicable.

The field should adopt a consistent label, thereby ensuring that its future contributors are not confronted with the herculean task of trying to understand the relationships between inconsistently labelled but conceptually overlapping work (to say nothing of locating it). One possible way forward would be to adopt an umbrella term – such as ‘judicial data science’ – and include it in article metadata (like titles and keywords). But rather than add to the long list of terms in use, it may be preferable to use ‘judicial analytics’ – which is slightly more common than its relatives – as an umbrella term denoting a family of practices involving the application of mathematical methods to judicial data. That approach has been adopted throughout this article.
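Until such an umbrella term takes hold, researchers must search across all of the aliases at once. A minimal sketch of what that entails follows; the Boolean OR query is illustrative and not tied to any particular database’s syntax.

```python
# Illustrative sketch: composing one search query from the aliases
# listed above, so inconsistently labelled work can still be found.
aliases = [
    "judicial analytics", "judicial statistics", "jurimetrics",
    "legal analytics", "arbitral analytics", "judicial profiling",
    "litigation analytics", "nomostatistics", "judiciary statistics",
    "judge analytics",
]

# Boolean OR query of the kind accepted by most scholarly databases.
query = " OR ".join(f'"{term}"' for term in aliases)
print(query)
```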

4.2 Priority Action 2: Empirical Study of Opportunities and Risks

Key Principles

It has been said that ‘time will tell how the rise of judicial analytics will change the legal system and impact its stakeholders’.[113] But, strictly speaking, changes and impacts will only become clear to the extent that they are measured. How should we do this? This article proposes three guiding principles for analysing the relationships between judicial analytics, society and regulation.

The first is employing empirical methods rather than over-relying on the dominant method used to date (‘sociotechnical speculative ethics’ based on weak empirical foundations).[114] This is not a new idea. Consider, for instance, Loevinger’s call for the embrace of scientific methods in legal studies in 1949:

The next step forward in the long path of man’s progress must be from jurisprudence (which is mere speculation about law) to jurimetrics, which is the scientific investigation of legal problems. In the field of social control (which is law) we must at least begin to use the same approach and the same methods that have enabled us to progress toward greater knowledge and control in every other field.[115]

Of course, forecasting opportunities and risks inevitably has a speculative dimension, but speculation can be more or less grounded in empirically validated assumptions about the world. If, for example, it is widely known that consumers of legal analytics services are in fact consistently being misled to their detriment, we have a stronger case for asserting the existence of an ongoing risk.

The second is ensuring that this empirical work does not conflate technological specifications with technological impact. The study of machines often focuses on technical benchmarks or thresholds.[116] However, as Rahwan and colleagues note, properly understanding technology requires us to look beyond technical standards:

Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate.[117]

The third is ensuring that future opportunity and risk assessments are not ‘jurisdiction-blind’. Both opportunities and risks are shaped by regulatory norms (legal and otherwise), and risk analysis should seek to incorporate the normative influence of these contextual features. For example, strong defamation laws provide a disincentive to publish disinformation produced by judicial analytics systems, at least to the extent that the existence and effect of those laws is known.

Avenues for Study

Regulatory interventions typically engage with problems, not theories about problems. Building a more robust evidence base about the impact of judicial analytics on societies – one that focuses on the impact of machines in their environment and factors in the influence of existing regulation – is therefore an important step towards regulating (or consciously not regulating) judicial analytics. It will be difficult to precisely quantify societal harms occasioned by judicial analytics, but even a more modest contribution – such as demonstrating that there is some empirical evidence supporting theoretical concerns – is worth making.

Let us now consider potential avenues for empirical research.

One avenue to explore is the extent to which judicial analytics is prone to disseminating misinformation. The most intuitive approach might be to attempt to validate high-profile claims through replication. However, there are three principal challenges with this. The first is that outputs of judicial analytics systems can be very difficult to verify. In the case of predictions, we must wait for the anticipated result to materialise – or fail to materialise – before we can say whether it was accurate; and in the case of descriptive claims, assessing accuracy involves completely rerunning resource-intensive research projects. A further problem is selection bias and small sample sizes: even assuming that one could ascertain the accuracy of some outputs, these outputs may not be representative of all outputs thrust into the public domain. Finally, even if we could overcome these challenges, there is more to the story than the accuracy of system outputs: it is one thing to determine whether a claim based on judicial analytics is inaccurate, and it is another to say whether individuals are misled by it. Perhaps misinformation that enters the public domain is treated with scepticism and therefore fails to misinform the public.

A better approach, therefore, might be to run experiments whereby participants are shown an article containing known errors and asked to respond to questions designed to elicit the extent to which it shapes their opinions. This kind of research would not establish the error rate of claims made by analysts, but it would demonstrate the susceptibility of participants to inaccurate claims based on judicial analytics. One could also test, by way of control, reactions to erroneous claims that are not backed by ‘hard data’. We may find that all inaccurate claims about the judiciary – irrespective of whether they are based on judicial analytics – are apt to mislead, meaning that the problem is not judicial analytics per se; or we might find that claims based on judicial analytics are more believable.
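A minimal sketch of how responses from such an experiment might be compared is set out below. The scores are fabricated, and a real study would need validated instruments and far larger samples.

```python
# Fabricated response data for illustration only. Scores measure how
# far an erroneous claim shifted participants' opinions (higher = more
# misled); one group saw an analytics-backed claim, the control group
# saw the same claim without 'hard data'.
from scipy.stats import ttest_ind

analytics_backed = [6.1, 5.8, 7.0, 6.5, 5.9, 6.8, 6.2, 7.1]
unbacked_control = [4.2, 5.0, 4.8, 4.5, 5.1, 4.3, 4.9, 4.6]

# Two-sample t-test on the mean opinion shift of the two groups.
t_stat, p_value = ttest_ind(analytics_backed, unbacked_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A statistically significant gap between the groups would lend empirical weight to the claim that data-backed misinformation is more believable; a null result would point the other way.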

Another avenue concerns whether judicial analytics can be used to create unofficial ‘performance metrics’ that exert pressure on judges. There is now a large body of research showing that performance metrics can shape behaviour.[118] But are judges susceptible to this effect? One way to answer this question would be to survey judges about the extent to which they believe that their independence is undermined by judicial analytics. However, judges may underestimate the extent to which their own decision-making is influenced. It might therefore be worth asking whether they are concerned about the extent to which judicial analytics puts pressure on the judiciary, which does not require them to admit that they are personally susceptible to external pressures.

What could we infer from a finding that they are indeed concerned? It depends on how strong the finding is. Overwhelming evidence that judges are concerned about pressure on the judiciary, when read together with broader research on the behaviour-altering effects of performance metrics, might increase cause for concern. On the other hand, if we show that judges are not at all concerned about judicial analytics – or that they are scarcely aware of the practice – it would be difficult to sustain an argument that judicial analytics is a threat to judicial independence.
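If such a survey were run, even a simple summary of the proportion of concerned judges, with an interval conveying sampling uncertainty, would be informative. The sketch below uses a normal-approximation confidence interval; the counts are entirely hypothetical, and the sample size merely echoes the Australian survey mentioned above.

```python
# Hypothetical counts only; the sample size echoes the Australian
# survey cited above, but the responses are invented.
import math

n_respondents = 142
n_concerned = 31  # judges expressing concern about judicial analytics

# Normal-approximation 95% confidence interval for a proportion.
p = n_concerned / n_respondents
se = math.sqrt(p * (1 - p) / n_respondents)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"Concerned: {p:.0%} (95% CI {low:.0%} to {high:.0%})")
```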

One would expect findings to vary between jurisdictions. For instance, we might expect to find that judges are deeply concerned in the United States (where websites rank judicial performance,[119] the Chamber of Commerce ranks courts and judges by performance[120] and legal reform institutes run radio ads about ‘ranking drops’ of particular courts),[121] but not in jurisdictions where judicial analytics is uncommon.

Yet another avenue for study is empirically testing whether the consumers of judicial analytics services are effectively being ‘ripped off’. The ideal way of evaluating this concern would be to identify a large group of users of these services and run surveys on their experiences. However, it is not clear how we would assemble a sufficiently large sample to generate statistically significant findings; it is not as if one can ask Lex Machina for a list of its former customers. A further problem is that it may prove difficult to say whether the advice was in fact ‘bad’ without having access to the data and analytical methods used. An alternative would be to rely on small focus groups and seek to identify gross negligence. Of course, this method would not enable us to quantify the risk of consumer harm, but it might generate a more complete picture of how consumer risks can materialise in practice.

4.3 Priority Action 3: Define Regulatory Success

Once we have a more nuanced understanding of how judicial analytics impacts societies, we can then move into the normative domain, considering how the risks and opportunities should be balanced through regulatory interventions. The foundation of this work is a normative benchmark – a standard against which existing law and regulatory interventions can be assessed. This section considers principles for developing a normative benchmark for the regulation of judicial analytics.

Focus on Impact, not only Technical Standards

Definitions of regulatory success should focus on how machines interact with their environment, not – or at least not exclusively – whether they meet certain technical specifications. It is one thing to say that a system should pass a bias audit, and another to say that it does not materially increase inequality or inequity. Numerous national ethics frameworks now recognise that the former metric is not always an adequate proxy for the latter. Australia’s AI Ethics Principles, for instance, engage with the kinds of effects technology should – and should not – have on human beings and societies. The principles do not only demand that bias audits take place, but that the systems are not in fact causing ‘unfair discrimination against individuals, communities or groups’.[122] Further, they do not only require systems to meet technical transparency requirements, but stress the importance of ensuring that ‘people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them’.[123]

Consider the Applicability of Human Rights Law

Even if we accept that regulatory success must be defined context-sensitively and focus on impact rather than technical standards, a question remains: which concepts should we use to define a normative benchmark that judicial analytics should meet?

Part of the answer must surely be that ‘it depends on the society’. Indeed, when devising the standards against which both the status quo and regulatory interventions should be measured, one must pay close attention to the values of a particular jurisdiction. For example, in Canada – where the Supreme Court has declared that ‘publicity is the very soul of justice’[124] – banning judicial analytics is likely to be deemed culturally untenable, even assuming that such a move would be constitutional.

However, a degree of universality must also be pursued – there must surely be an area of consensus among the disagreement. This is the premise of human rights approaches to regulating emerging technologies, which are increasingly common internationally.[125] These approaches emphasise ‘the importance of human rights law in analysing the social impact of technology’.[126]

Consider, for instance, how the right of ‘equality of arms’ might apply to judicial analytics. This right, which is embedded within Article 14(1) of the International Covenant on Civil and Political Rights, provides that ‘all parties to a proceeding must have a reasonable opportunity of presenting their case under conditions that do not disadvantage them as against other parties to the proceedings’.[127] As Steponenaite and Valcke point out, differential access to analytics resources creates an epistemic advantage that arguably undermines this right.[128]

It is worth noting that human rights approaches are compatible with the idea that scholars should study judicial analytics as a sociotechnical phenomenon rather than as a closed system. As Grover and Oever argue, a system’s impact ‘cannot solely be deduced from its design, but its usage and implementation should also be studied to form a full [assessment of its] human rights impact’.[129]

Incorporate Proportionality

The concept of legal proportionality should be embedded within definitions of regulatory success. Legal proportionality is the idea that the actions taken by a government – or other entities – are appropriate and necessary in relation to the objectives of those actions. There are many formulations of proportionality. One used in human rights contexts in European law is the ‘Huang test’, which asks:

1. Does the policy (or measure) in question pursue a sufficiently important objective?

2. Is the rule or decision under review rationally connected with that objective?

3. Are the means adopted no more than necessary to achieve that objective?

4. Does the measure achieve a fair balance between the interests of the individual(s) affected and the wider community (i.e. a question of whether a measure constitutes a proportionate means of achieving a legitimate aim)?[130]

This mode of reasoning has become central to technology regulation in some jurisdictions. For instance, the European Union’s incoming Artificial Intelligence Act states that its proposed regulatory measures are:

limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.[131]

The applicability of this approach should be explored by researchers working on the regulation of judicial analytics. Its promise is to frame – and to make transparent – the means by which we analyse trade-offs.

The difficulty with proportionality assessments, however, is that they are value dependent. In some parts of the United States, the freedom to bear arms is valued to such a significant extent by some members of society that even minimally invasive policies – such as background checks – cause controversy, while in Australia, a far stricter regime is deemed proportionate to the avoidance of gun violence. In other areas, consensus has been found. For instance, international aviation regulators agree that onerous burdens are proportionate to the goal of air safety. Though the concept of proportionality has proven interminably slippery, there is still good sense in the idea that the regulation of judicial analytics should fairly balance the competing interests at play when pursuing a legitimate objective with which it is rationally connected.

5. Closing Remarks: Beyond Priority Actions

This article has argued that the emerging field concerned with the regulation of judicial analytics should pursue three short-term priority actions: (1) make relevant material easier to locate, and the field more coherent, by describing contributions to it uniformly; (2) augment sociotechnical speculative ethics with empirical analysis; and (3) develop jurisdiction-sensitive definitions of ‘regulatory success’. These proposed actions do not comprise a comprehensive long-term research agenda for judicial analytics, but rather a path to a strong foundation for further work. These closing remarks reflect on what could be built upon that foundation.

To assess the adequacy of present regulatory arrangements, we could use our normative benchmark (developed under Priority Action 3) to consider how well the regulatory arrangements of a given jurisdiction balance the risks and opportunities of judicial analytics (identified under Priority Action 2). This involves, among other things, asking whether any breaches of the benchmark’s standards are proportionate.

We may determine that, in certain jurisdictions, existing regulatory arrangements are inadequate. If so, we must ask whether existing laws could be incrementally extended to solve these problems or, failing that, which novel regulatory arrangements are needed.

It is often presumed that new regulation is needed before the applicability – or extendibility – of existing regulation has been fully understood. As Santow notes in the foreword to a discussion paper on technology and human rights:

Sometimes it’s said that the world of new technology is unregulated space; that we need to dream up entirely new rules for this new era. However, our laws apply to the use of AI, as they do in every other context. The challenge is that AI can cause old problems – like unlawful discrimination – to appear in new forms.[132]

Scholars working at the intersection of regulation and judicial analytics should not make this mistake. Numerous bodies of law might be extended to the problems of judicial analytics, including the law of human rights, product liability and defamation. For instance, we must ask whether, as McGill and Salyzyn suggest, existing consumer protection laws ‘provide some defense against particularly egregious issues arising from poor-quality analytics tools’.[133] This work cannot be undertaken from a global vantage point; each area of inquiry demands intra-jurisdictional thinking.

We might find that the law can indeed be incrementally extended to novel problems, and that increasing awareness of this applicable law could solve them – for example, by deterring the dissemination of misinformation. However, we could also find that the law falls short. By analogy, in considering the applicability of tort and product safety law to autonomous software systems, Beckers and Teubner found that liability gaps arise because the ‘law insists on responding to the new digital realities exclusively with traditional concepts that have been developed for human actors’.[134]

Having carried out the Priority Actions suggested in this article and determined how far old law can be extended to new problems, we would be in a much stronger position to devise novel regulatory interventions. Not only would the impetus to act be stronger, but any initiative could be designed in a more targeted way.

The trouble is that the impacts of a novel regulatory strategy can never be forecast perfectly. What, then, is the justification for implementing a strategy? This is a persistent problem for regulators, and one that has kept regulatory theorists preoccupied. In the field of competition law, some have suggested taking an ‘experimentalist’ approach to regulating markets, which involves establishing mechanisms for monitoring and making adjustments in response to effects as they become apparent.[135] Interest has also grown in the related concept of ‘temporary legislation’ (although there remains a ‘lacuna’ in both theoretical and empirical study of its efficacy).[136] Work on the regulation of judicial analytics may benefit from drawing upon these approaches: rather than seeking to devise ‘the’ solution to risks posed by judicial analytics, temporary or experimental regulatory initiatives could be deployed and monitored. If a temporary approach proves to be the wrong course of action, it can more easily be reversed, reducing the risk of long-term over-regulation. The reversibility of such actions may also reduce the risk of under-regulation: regulators will be less averse to trying out approaches that can be flexibly wound back as needed.

Whatever direction research on the regulation of judicial analytics takes in the long term, there is a strong case for ensuring that imminent work embraces consistent terminology, uses empirical methods to address the epistemic problems identified and develops a robust theory of regulatory success.

Acknowledgment

The author would like to thank José-Miguel Bello y Villarino for his comments on the manuscript and for his ongoing support.

Bibliography

Legal materials

ALA15 v Minister for Immigration and Border Protection [2015] FCCA 2047.

ALA15 v Minister for Immigration and Border Protection [2016] FCAFC 30.

Attorney General of Nova Scotia v MacIntyre (1982) 1 SCR 175.

Secondary sources

Appleby, Gabrielle, Suzanne Le Mire, Andrew Lynch, and Brian Opeskin. “Contemporary Challenges Facing the Australian Judiciary: An Empirical Interruption.” Melbourne University Law Review 42, no 1 (2019): 299–369.

Aprile, J Vincent II. “Judicial Profiling: Another Perspective.” Criminal Justice Matters 37 (2022): 48–49.

Artificial Lawyer. “France Bans Judge Analytics, 5 Years in Prison for Rule Breakers.” Artificial Lawyer, June 4, 2019. https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers.

Ashley, Kevin D. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge: Cambridge University Press, 2017.

Australian Government Attorney-General’s Department. “Fair Trial and Fair Hearing Rights,” 2024. https://www.ag.gov.au/rights-and-protections/human-rights-and-anti-discrimination/human-rights-scrutiny/public-sector-guidance-sheets/fair-trial-and-fair-hearing-rights.

Australian Government Department of Industry, Science and Resources. “Australia’s AI Ethics Principles,” 2019. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles.

Australian Human Rights Commission. “Human Rights and Technology: Discussion Paper,” January 1, 2019. https://humanrights.gov.au/sites/default/files/document/publication/techrights_2019_discussionpaper_0.pdf.

Australian Human Rights Commission. Human Rights and Technology: Final Report. Canberra: Attorney-General’s Department, 2021.

Baird, Douglas G, Randal C Picker, and Robert H Gertner. Game Theory and the Law. Cambridge, MA: Harvard University Press, 2003.

Bar‐Siman‐Tov, Ittai. “Temporary Legislation, Better Regulation, and Experimentalist Governance: An Empirical Study.” Regulation & Governance 12, no 2 (2018): 192–219. https://doi.org/10.1111/rego.12148.

Bathurst, Tom. “Who Judges the Judges, and How Should They Be Judged?” (2021). https://www.judcom.nsw.gov.au/publications/benchbks/judicial_officers/who_judges_the_judges.html.

Beckers, Anna and Gunther Teubner. Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds. Oxford: Hart, 2021.

Bell, Felicity. “Empirical Research in Law.” Griffith Law Review 25, no 2 (2016): 262–82. https://doi.org/10.1080/10383441.2016.1236440.

Bello y Villarino, José-Miguel. “An AI Foundation Model for Education.” In Artificial Intelligence for Human-Centric Society: The Future is Here, edited by Nina Tomaževič, Dejan Ravšelj and Aleksander Aristovnik, 114–33. Brussels: European Liberal Forum, 2023. https://liberalforum.eu/wp-content/uploads/2023/12/Artificial-Intelligence-for-human-centric-society.pdf.

Capper, John. “Remarks on the Collection of Statistical Information in Ceylon.” The Journal of the Ceylon Branch of the Royal Asiatic Society of Great Britain and Ireland 1, no 1 (1845): 86.

Chen, Daniel. “Judicial Analytics and the Great Transformation of American Law.” Artificial Intelligence and Law 27 (2019): 15–42. https://doi.org/10.1007/s10506-018-9237-x.

Chen, Daniel. “Machine Learning and the Rule of Law.” Revista Forumul Judecatorilor 1 (2019): 19–25.

Chen, Laurie. “China Vows Judicial Disclosure After Outcry over Plan to Curb Access to Rulings.” Reuters, January 22, 2024. https://www.reuters.com/world/china/china-vows-judicial-disclosure-after-outcry-over-plan-curb-access-rulings-2024-01-22.

Choi, Stephen, Mitu Gulati, and Eric Posner. “Judicial Evaluations and Information Forcing: Ranking State High Courts and Their Judges.” Duke Law Journal 58, no 7 (2009): 1313–81.

Clark, Charles. “The Present State of Judicial Statistics.” Journal of the American Judicature Society 14, no 3 (1930): 84.

Cohen, Hager. “Almost 99 per cent of Protection Visa Review Applications Fail When Heard by Controversial Judge, New Figures Reveal.” ABC News, September 6, 2019. https://www.abc.net.au/news/2019-09-06/almost-99-per-cent-fail-when-heard-by-judge/11457114.

Cohen, Hager. “Who Watches Over Our Judges?” Podcast. ABC Listen, September 8, 2019. https://www.abc.net.au/listen/programs/backgroundbriefing/judge-street-under-scrutiny-again-v2/11480818.

Council of the State of New York. “Report No 1.” January 1, 1935.

Curtice, Martin, Fareed Bashir, Sanjay Khurmi, Juli Crocombe, Tim Hawkins, and Tim Exworthy. “The Proportionality Principle and What It Means in Practice.” The Psychiatrist 35, no 3 (2011): 111–16. https://doi.org/10.1192/pb.bp.110.032458.

Davidson, Helen. “Snap Judgment: Why Sandy Street’s Record on Asylum Cases Stands Out.” The Guardian, September 22, 2019. https://www.theguardian.com/australia-news/2019/sep/21/snap-judgment-why-sandy-streets-record-on-asylum-cases-stands-out.

Davies, Benjamin. “Arbitral Analytics: How Moneyball Based Litigation/Judicial Analytics Can Be Used to Predict Arbitration Claims and Outcomes.” Pepperdine Dispute Resolution Law Journal 22, no 2 (2022): 321–75.

Dotan, Yoav. “Do the ‘Haves’ Still Come Out Ahead? Resource Inequalities in Ideological Courts: The Case of the Israeli High Court of Justice.” Law and Society Review 33 (1999): 1059–80. https://doi.org/10.2307/3115159.

European Parliament. “EU AI Act: First Regulation on Artificial Intelligence,” 2023. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

European Union. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.

Franceschini, Fiorenzo, Maurizio Galetto, and Domenico A Maisano. Management by Measurement: Designing Key Indicators and Performance Measurement Systems. Berlin: Springer, 2007.

Friedman, Lawrence M. “The Law and Society Movement.” Stanford Law Review 38, no 3 (1986): 763–80. https://doi.org/10.2307/1228563.

Gabriel, Iason, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Tomašev et al. “The Ethics of Advanced AI Assistants,” arXiv (2024). https://doi.org/10.48550/ARXIV.2404.16244.

Ghassemi, Marzyeh, Luke Oakden-Rayner, and Andrew L. Beam. “The False Hope of Current Approaches to Explainable Artificial Intelligence in Health Care.” The Lancet Digital Health 3, no 11 (2021): 745–50. https://doi.org/10.1016/S2589-7500(21)00208-9.

Ghezelbash, Daniel and Keyvan Dorostkar. “New Data Reveals Decisions About an Asylum Seeker’s Visa Can Vary Depending on Which Judge They Get.” ABC News, August 4, 2022. https://www.abc.net.au/news/2022-08-04/law-report-is-there-bias-in-judgments-of-asylum-seekers-visas/101291192.

Ghezelbash, Daniel, Keyvan Dorostkar, and Shannon Walsh. “A Data Driven Approach to Evaluating and Improving Judicial Decision-Making: Statistical Analysis of the Judicial Review of Refugee Cases in Australia.” University of New South Wales Law Journal 45, no 4 (2022). https://doi.org/10.53637/TCNQ8226.

Government of Canada. “Algorithmic Impact Assessment Tool,” 2024. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html.

Government of the Netherlands. “Impact Assessment: Fundamental Rights and Algorithms,” 2022. https://www.government.nl/binaries/government/documenten/reports/2021/07/31/impact-assessment-fundamental-rights-and-algorithms/fundamental-rights-and-algorithms-impact-assessment-fraia.pdf.

Grover, Gurshabad and Niels ten Oever. “Guidelines for Human Rights Protocol and Architecture Considerations,” May 13, 2024. https://datatracker.ietf.org/doc/draft-irtf-hrpc-guidelines.

Haynie, Stacia. “Resource Inequalities and Litigation Outcomes in the Philippine Supreme Court.” The Journal of Politics 48, no 2 (1995): 371–80. https://doi.org/10.2307/2132191.

Jacoby, Sidney. “Some Realism about Judicial Statistics.” Virginia Law Review 25, no 5 (1939): 528–58.

Jaffin, George H. “Prologue to Nomostatistics.” Columbia Law Review 35, no 1 (1935): 1–32.

Jenkins, Kirk. “Making Sense of the Litigation Analytics Revolution.” California Supreme Court Review (2017). https://www.californiasupremecourtreview.com/2017/10/making-sense-of-the-litigation-analytics-revolution.

Lahav, Alexandra. “Symmetry and Class Action Litigation.” UCLA Law Review 60 (2013): 1494–1523.

Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin. “How We Analyzed the COMPAS Recidivism Algorithm” (2016). https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

Lex Machina. “Legal Analytics: The Winning Edge for Law Firms” (2024). https://lexmachina.com/law-firms.

Loevinger, Lee. “Jurimetrics: The Next Step Forward.” Minnesota Law Review 33, no 5 (1949): 455–93.

Maclure, Jocelyn. “AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.” Minds and Machines 31, no 3 (2021): 421–38. https://doi.org/10.1007/s11023-021-09570-x.

McGill, Jena, and Amy Salyzyn. “Judging by Numbers: Judicial Analytics, the Justice System and Its Stakeholders.” Dalhousie Law Journal 44, no 1 (2021): 249–84.

Mott, Rodney. “Judicial Influence.” American Political Science Review 30, no 2 (1936): 295–315. https://doi.org/10.2307/1947260.

Niler, Eric. “Can AI Be a Fair Judge in Court? Estonia Thinks So.” WIRED, March 25, 2019. https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so.

Ognyanova, Katherine, David Lazer, Ronald E. Robertson, and Christo Wilson. “Misinformation in Action: Fake News Exposure is Linked to Lower Trust in Media, Higher Trust in Government When Your Side Is in Power.” Harvard Kennedy School Misinformation Review 1, no 4 (2020): 1. https://doi.org/10.37016/mr-2020-024.

Opeskin, Brian, and Gabrielle Appleby. “Responsible Jurimetrics: A Reply to Silbert’s Critique of the Victorian Court of Appeal.” Australian Law Journal 94, no 12 (2020): 923.

Parliament of France. “Art. L111-13, Code de l’organisation Judiciaire.” LexBase, 25 March 2019. https://www.lexbase.fr/texte-de-loi/109002415-autre-version.

Patrick, Aaron. “In the Federal Court, Speed of Justice Depends on the Judge.” Financial Review, October 26, 2018. https://www.afr.com/companies/professional-services/in-the-federal-court-speed-of-justice-depends-on-the-judge-20181014-h16mk9.

Petit, Jacques-Guy. “La Justice en France, 1789–1939. Une Étatisation Modèle?” Crime, Histoire & Sociétés 6, no 1 (2002): 85–103. https://doi.org/10.4000/chs.238.

Porter, Theodore. The Rise of Statistical Thinking, 1820–1900. Princeton, NJ: Princeton University Press, 1986.

Pound, Roscoe. “What Use Can Be Made of Judicial Statistics?” Oregon Law Review 12, no 2 (1933): 89–95.

Rahwan, Iyad, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A Christakis, and Iain D Couzin. “Machine Behaviour.” Nature 568 (2019): 477–84. https://doi.org/10.1038/s41586-019-1138-y.

Republic of Estonia Ministry of Justice. “Estonia Does Not Develop AI Judge.” Media Release. Republic of Estonia, February 16, 2022. https://www.just.ee/en/news/estonia-does-not-develop-ai-judge.

Robinson, Natasha. “Federal Circuit Court Judge Alexander Street Accused of Bias after Rejecting Hundreds of Migration Cases.” ABC News, September 10, 2015. https://www.abc.net.au/news/2015-09-10/federal-court-judge-alexander-street-accused-of-bias/6764704.

Robinson, Natasha. “Federal Circuit Court Judge Alexander Street Not Biased Over Asylum Seeker Decisions, Court Rules.” ABC News, March 10, 2016. https://www.abc.net.au/news/2016-03-10/judge-rejected-almost-all-asylum-seeker-cases-not-biased/7237666.

Saxe, Leonard. “Civil Judicial Statistics in New York.” St John’s Law Review 10 (1935): 1–25.

Shafroth, Will. “Federal Judicial Statistics.” Law and Contemporary Problems 13, no 1 (1948): 200–15.

Shetreet, Shimon. “Judicial Independence and Accountability.” In Judiciaries in Comparative Perspective, edited by Hoong Phun Lee, 3–24. Cambridge: Cambridge University Press, 2011.

Steponenaite, Kristina Vilte, and Peggy Valcke. “Judicial Analytics on Trial: An Assessment of Legal Analytics in Judicial Systems in Light of the Right to a Fair Trial.” Maastricht Journal of European and Comparative Law 27, no 6 (2020): 759–73. https://doi.org/10.1177/1023263X20981472.

Stewart, Pamela, and Anita Stuhmcke. “Judicial Analytics and Australian Courts: A Call for National Ethical Guidelines.” Alternative Law Journal 45, no 2 (2020): 82–87. https://doi.org/10.1177/1037969X19899674.

Sunstein, Cass R. Conformity and Dissent. Chicago: University of Chicago Law School, 2002.

Susskind, Richard E., and Daniel Susskind. The Future of the Professions: How Technology Will Transform the Work of Human Experts, 2nd ed. Oxford: Oxford University Press, 2022.

Svetiev, Yane. Experimentalist Competition Law and the Regulation of Markets. Oxford: Hart, 2020.

Turner, Sydney. “Decrease of Crime in Ireland During the Past Year.” The Irish Quarterly Review 1 (1859): 24.

Wills, William. “A Paper on Judiciary Statistics.” The Analyst 10 (1840): 205.

Zalnieriute, Monika, and Felicity Bell. “Technology and the Judicial Role.” In The Judge, the Judiciary and the Court, edited by Gabrielle Appleby and Andrew Lynch, 116–42. Cambridge: Cambridge University Press, 2021.

Ződi, Zsolt. “Algorithmic Explainability and Legal Reasoning.” The Theory and Practice of Legislation 10, no 1 (2022): 67–92. https://doi.org/10.1080/20508840.2022.2033945.


[1] ALA15 v Minister for Immigration and Border Protection [2015] FCCA 2047.

[2] ALA15 v Minister for Immigration and Border Protection [2016] FCAFC 30, [11].

[3] See, for example, Robinson, “Federal Circuit Court Judge Alexander Street Accused of Bias”; Robinson, “Federal Circuit Court Judge Alexander Street Not Biased.”

[4] Cohen, “Almost 99 per cent of Protection Visa Review Applications Fail.”

[5] See, for example, Cohen, “Who Watches over Our Judges?”

[6] Davidson, “Snap Judgment.”

[7] Ghezelbash, “A Data Driven Approach.”

[8] Ghezelbash, “New Data Reveals Decisions.”

[9] Opeskin, “Responsible Jurimetrics,” 923.

[10] Opeskin, “Responsible Jurimetrics,” 923.

[11] On the contrary, the Victorian opposition leader made an impassioned promise to increase the use of judicial statistics: Bathurst, “Who Judges the Judges?”

[12] There are many plausible explanations for this. One is that the risks of judicial analytics have been insignificant compared with other issues on the regulatory agenda. Another is that the risks posed by judicial analytics were poorly understood – invisible, perhaps. Yet another is that even when these risks became apparent, it was very difficult to find consensus on what constitutes proportionate regulation of technologies that present both opportunities and risks. Another still is that the opportunities of judicial analytics have distracted us from the risks. Perhaps a combination of these factors contributed to the persistent regulatory silence.

[13] For a summary of the reforms, see McGill, “Judging by Numbers,” 250–51.

[14] Chen, “Judicial Analytics and the Great Transformation of American Law.”

[15] See especially McGill, “Judging by Numbers”; Stewart, “Judicial Analytics and Australian Courts.”

[16] As noted in Part 4, the practices in question have at least ten aliases. However, ‘judicial analytics’ is used throughout this paper as an umbrella term, capturing a variety of both digital and non-digital practices.

[17] Stewart, “Judicial Analytics and Australian Courts,” 82.

[18] European Parliament, “EU AI Act: First Regulation on Artificial Intelligence.”

[19] Clark, “The Present State of Judicial Statistics,” 84.

[20] Jaffin, “Prologue to Nomostatistics,” 19–20. See also Petit, “La Justice en France, 1789–1939,” paras 28–29.

[21] Turner, “Decrease of Crime in Ireland During the Past Year.”

[22] Wills, “A Paper on Judiciary Statistics.”

[23] Porter, The Rise of Statistical Thinking, 3. As Porter notes, ‘The foundations of mathematical statistics were laid between 1890 and 1930, and the principal families of techniques for analyzing numerical data were established during the same period.’

[24] Capper, “Remarks on the Collection of Statistical Information in Ceylon.” He writes: “It would be interesting in the extreme, to peruse tables showing the number of schools and scholars, in each district, in juxtaposition to returns of the extent and nature of crime in the same places.”

[25] Jacoby, “Some Realism about Judicial Statistics,” 541.

[26] Council of the State of New York, “Report No 1,” 17.

[27] See, for example, Saxe, “Civil Judicial Statistics in New York”; Clark, “The Present State of Judicial Statistics”; Jaffin, “Prologue to Nomostatistics”; Jacoby, “Some Realism about Judicial Statistics”; Mott, “Judicial Influence.”

[28] Clark, “The Present State of Judicial Statistics,” 84.

[29] Pound, “What Use Can Be Made of Judicial Statistics?” 102.

[30] Shafroth, “Federal Judicial Statistics,” 200.

[31] Clark, “The Present State of Judicial Statistics,” 84.

[32] Pound, “What Use Can Be Made of Judicial Statistics?” 102.

[33] Chen, “Machine Learning and the Rule of Law”; Stewart and Stuhmcke, “Judicial Analytics and Australian Courts.”

[34] McGill, “Judging by Numbers,” 256.

[35] Discussed in Stewart and Stuhmcke, “Judicial Analytics and Australian Courts,” 83.

[36] Davies, “Arbitral Analytics.”

[37] Lex Machina, “The Winning Edge for Law Firms.”

[38] McGill, “Judging by Numbers,” 252–55.

[39] See, for example, McGill, “Judging by Numbers,” 253; Stewart, “Judicial Analytics and Australian Courts,” 82. The latter authors write: “Importantly, our focus is not upon the responsible use of data to measure past judicial efficiency or the use of metadata in civil proceedings or analysis of types and volumes of cases heard as presented in Courts’ Annual Reports. Instead, we are most concerned with the use of data to predict judicial behaviour and case outcomes.”

[40] Artificial Lawyer, “France Bans Judge Analytics.”

[41] Stewart, “Judicial Analytics and Australian Courts”; McGill, “Judging by Numbers.”

[42] McGill, “Judging by Numbers.”

[43] Stewart, “Judicial Analytics and Australian Courts.”

[44] Steponenaite, “Judicial Analytics on Trial.”

[45] McGill, “Judging by Numbers,” 252.

[46] McGill, “Judging by Numbers,” 265; Stewart, “Judicial Analytics and Australian Courts,” 85; Chen, “Judicial Analytics and the Great Transformation of American Law,” 15.

[47] Chen, “Judicial Analytics and the Great Transformation of American Law,” 17.

[48] Stewart, “Judicial Analytics and Australian Courts,” 84.

[49] Chen, “Machine Learning and the Rule of Law,” 7. He writes: “The first goal would be to expose judges to findings concerning the effects of legally relevant and legally irrelevant factors on decisions, with the goal of general rather than specific debiasing.”

[50] McGill, “Judging by Numbers,” 249.

[51] McGill, “Judging by Numbers,” 260; Stewart and Stuhmcke, “Judicial Analytics and Australian Courts,” 85.

[52] McGill, “Judging by Numbers,” 269.

[53] McGill, “Judging by Numbers,” 258–70.

[54] For a summary of the kinds of misinformation envisaged, see Ognyanova, “Misinformation in Action.”

[55] McGill, “Judging by Numbers,” 268.

[56] Even assuming that one could obtain the algorithm and data used to develop a machine learning model, it would be difficult to identify which factors the model is using to generate outputs (the problem of “explainability” or “interpretability”). For an extended explanation of this phenomenon, see Maclure, “AI, Explainability and Public Reason”; Ghassemi, “The False Hope of Current Approaches”; Ződi, “Algorithmic Explainability and Legal Reasoning.”

[57] Franceschini, Management by Measurement, 1.

[58] McGill, “Judging by Numbers,” 271–72.

[59] Shetreet, “Judicial Independence and Accountability,” 1, 6–10.

[60] Appleby, “Contemporary Challenges Facing the Australian Judiciary,” 327–29. Note, however, that this study did not rule out the possibility that judges were simply not forthcoming about the issue.

[61] Choi, “Judicial Evaluations and Information Forcing,” 1342.

[62] Patrick, “In the Federal Court, Speed of Justice Depends on the Judge.”

[63] Patrick, “In the Federal Court, Speed of Justice Depends on the Judge.”

[64] Steponenaite, “Judicial Analytics on Trial,” 770.

[65] Steponenaite, “Judicial Analytics on Trial,” 770.

[66] McGill, “Judging by Numbers,” 272.

[67] McGill, “Judging by Numbers,” 273.

[68] See, in particular, Baird, Game Theory and the Law.

[69] Baird, Game Theory and the Law, 1.

[70] Zalnieriute, “Technology and the Judicial Role,” 136.

[71] Niler, “Can AI Be a Fair Judge in Court?”

[72] Republic of Estonia Ministry of Justice, “Estonia Does Not Develop AI Judge.”

[73] Larson, “COMPAS.”

[74] McGill, “Judging by Numbers,” 251.

[75] Stewart, “Judicial Analytics and Australian Courts,” 84.

[76] Discussed in McGill, “Judging by Numbers,” 265.

[77] Susskind, The Future of the Professions, 17.

[78] McGill, “Judging by Numbers,” 269.

[79] A litigant with the financial resources to spend $15,000 per day on legal fees will typically obtain better legal representation than someone reliant on legal aid. Discussed in Dotan, “Do the ‘Haves’ Still Come Out Ahead?”; Haynie, “Resource Inequalities and Litigation Outcomes in the Philippine Supreme Court”; Lahav, “Symmetry and Class Action Litigation.”

[80] Stewart, “Judicial Analytics and Australian Courts,” 85.

[81] Stewart, “Judicial Analytics and Australian Courts,” 85.

[82] Steponenaite, “Judicial Analytics on Trial,” 768.

[83] McGill, “Judging by Numbers,” 263.

[84] Gabriel, “The Ethics of Advanced AI Assistants,” 6.

[85] Parliament of France, “Art. L111-13, Code de l’organisation Judiciaire.”

[86] Artificial Lawyer, “France Bans Judge Analytics.”

[87] See, for example, Chen, “Judicial Analytics and the Great Transformation of American Law.”

[88] McGill, “Judging by Numbers,” 251.

[89] Friedman, “The Law and Society Movement,” 780.

[90] Stewart, “Judicial Analytics and Australian Courts,” 86.

[91] Stewart, “Judicial Analytics and Australian Courts,” 84.

[92] Stewart, “Judicial Analytics and Australian Courts,” 85.

[93] Stewart and Stuhmcke allude to this problem when they acknowledge the ‘lack of formal sanction’ achieved by this approach: Stewart, “Judicial Analytics and Australian Courts,” 85.

[94] McGill, “Judging by Numbers,” 279.

[95] Government of the Netherlands, “Impact Assessment.”

[96] Government of Canada, “Algorithmic Impact Assessment Tool.”

[97] European Parliament, “EU AI Act.”

[98] Bello y Villarino, “An AI Foundation Model for Education,” 115.

[99] Bello y Villarino, “An AI Foundation Model for Education,” 115.

[100] Unless, of course, the state controls the availability of judicial data: see, for example, Chen, “China Vows Judicial Disclosure After Outcry.”

[101] Steponenaite, “Judicial Analytics on Trial,” 771.

[102] Steponenaite, “Judicial Analytics on Trial,” 771.

[103] See Steponenaite, “Judicial Analytics on Trial,” 773; Steponenaite notes that ‘further research is essential’.

[104] Clark, “The Present State of Judicial Statistics.”

[105] Loevinger, “Jurimetrics.”

[106] Ashley, Artificial Intelligence and Legal Analytics.

[107] Davies, “Arbitral Analytics.”

[108] Aprile, “Judicial Profiling.”

[109] Jenkins, “Making Sense of the Litigation Analytics Revolution.”

[110] Jaffin, “Prologue to Nomostatistics.”

[111] Wills, “A Paper on Judiciary Statistics.”

[112] Artificial Lawyer, “France Bans Judge Analytics.”

[113] McGill, “Judging by Numbers,” 284.

[114] Gabriel, “The Ethics of Advanced AI Assistants,” 6.

[115] Loevinger, “Jurimetrics,” 456.

[116] Rahwan, “Machine Behaviour,” 477.

[117] Rahwan, “Machine Behaviour,” 477.

[118] Franceschini, Management by Measurement, 1.

[119] Appleby, “Contemporary Challenges Facing the Australian Judiciary,” 301.

[120] Choi, “Judicial Evaluations and Information Forcing,” 1326.

[121] Choi, “Judicial Evaluations and Information Forcing,” 1342.

[122] Australian Government Department of Industry, Science and Resources, “Australia’s AI Ethics Principles.”

[123] Australian Government Department of Industry, Science and Resources, “Australia’s AI Ethics Principles.”

[124] Attorney General of Nova Scotia v MacIntyre (1982) 1 SCR 175, 183.

[125] Australian Human Rights Commission, Human Rights and Technology: Final Report, 10.

[126] Australian Human Rights Commission, Human Rights and Technology: Final Report, 10.

[127] Australian Government Attorney-General’s Department, “Fair Trial and Fair Hearing Rights.”

[128] Steponenaite, “Judicial Analytics on Trial,” 767–68.

[129] Grover, “Guidelines for Human Rights Protocol and Architecture Considerations.”

[130] Curtice, “The Proportionality Principle and What It Means in Practice,” 111–12.

[131] European Union, “Artificial Intelligence Act.”

[132] Australian Human Rights Commission, “Human Rights and Technology: Discussion Paper,” 8.

[133] McGill, “Judging by Numbers,” 278.

[134] Beckers, Three Liability Regimes for Artificial Intelligence, 6–7.

[135] Svetiev, Experimentalist Competition Law and the Regulation of Markets.

[136] Bar‐Siman‐Tov, “Temporary Legislation.”

