We argue that the ways in which machine learning mediates how law is done -- according to statistical notions of relevance or optimality -- ultimately affect what law *is*.
The issue isn't some mythical #AGI 'robot judge', but the much more subtle (mis)direction of legal practice by statistical means. This often happens upstream, effected by design decisions that might seem remote from the day-to-day use of #legaltech.
For certain legal tech applications, notably search and document generation, this can become problematic if it restricts the litigator's creative capacities -- capacities that we argue are at the core of the #RuleOfLaw.
To exercise those capacities, and to discharge their professional duties as officers of the court, lawyers need access to 'lossless law': law uncompressed by statistical methods that in many cases bear little or no relation to legal methods and commitments.
This means paying attention to the front- and back-ends of legal tech systems, which ultimately frame the practices lawyers engage in. As in other domains, legal tech is not neutral; but unlike in other domains, it is the de facto means by which law is performed and thus made real.
Those designs mediate access to 'lossless law', and the onus ought therefore to be on their providers to demonstrate that the mediation enhances, or at least doesn't undermine, the practices necessary to uphold the Rule of Law.
Speaking of 'providers', we suggest that Annex III of the draft EU AI Act ought to extend beyond 'judicial authorities' to include all legal practitioners who make use of AI systems. Applying the 'high-risk' classification to use by state organs alone is not nearly enough.