Jason Braier
Employment law barrister at @42BR_employment. Dad to 2 amazing children. Love a good #ukemplaw thread. All views my own, etc etc etc.

May 5, 2021, 15 tweets

1/ Some fascinating insights in this (unsurprisingly) brilliant and thought-provoking paper on the benefits and dangers of the online publication of the employment tribunal register. #ukemplaw

2/ The authors express a particularly prescient concern about unscrupulous employers using automated screening tools to weed out job applicants who've previously brought claims - raising fascinating issues for potential victimisation & discrimination claims.
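
To make the concern concrete, here's a deliberately crude sketch (my own illustration, not anything from the paper - all names invented) of the kind of screening tool the authors warn about: matching job applicants' names against claimant names drawn from published tribunal judgments.

```python
# Hypothetical sketch of automated applicant screening against the ET register.
# All names and register entries below are invented for illustration.

def normalise(name: str) -> str:
    """Lower-case and strip punctuation so 'J. Smith' and 'j smith' compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def flag_prior_claimants(applicants, judgment_claimants):
    """Return the applicants whose names appear as claimants in the register."""
    known = {normalise(n) for n in judgment_claimants}
    return [a for a in applicants if normalise(a) in known]

# Case names in the register typically take the form "Claimant v Respondent".
register = ["J. Bloggs v Acme Ltd", "Jane Doe v Widgets plc"]
claimants = [entry.split(" v ")[0] for entry in register]

flagged = flag_prior_claimants(["Jane Doe", "John Roe"], claimants)
# 'Jane Doe' would be flagged; 'John Roe' would not.
```

Even something this simplistic shows why the victimisation concern bites: the screening happens before any human decision-maker sees the application.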

3/ Relatedly, there are concerns about employers using algorithmic analysis of judgments to determine the likelihood of an applicant taking the employer to tribunal in due course. Unsurprisingly, reliance on such analysis would further entrench discriminatory hiring practices.

4/ The authors also point to the danger of algorithmic analysis influencing how companies draft contractual documents to avoid employment status - hopefully, though, that's now a busted flush following the Supreme Court's Uber judgment.

5/ Fascinatingly, the authors explain that algorithms are already applied to analyse judgment data to predict outcomes on a given set of facts, and are also moving towards elucidation of legal reasoning (at which point I will be overtaken by tweeting machines).
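
For a sense of how outcome prediction from judgment data works at its most basic, here's a toy sketch (my own, invented for illustration - the tools the paper describes will be far more sophisticated): estimate a claim's chance of success from the outcomes of past cases sharing the most factual features with it.

```python
# Toy nearest-neighbour style outcome predictor over past judgment data.
# The feature labels and case data are invented for illustration only.

# Each past case: (frozenset of factual features, claimant_won)
past_cases = [
    (frozenset({"dismissal", "no_procedure"}), True),
    (frozenset({"dismissal", "fair_procedure"}), False),
    (frozenset({"dismissal", "no_procedure", "long_service"}), True),
]

def predict_success(features: set, cases) -> float:
    """Average the outcomes of the past cases sharing the most
    features with the new fact pattern."""
    overlap = [(len(features & f), won) for f, won in cases]
    best = max(n for n, _ in overlap)
    matched = [won for n, won in overlap if n == best]
    return sum(matched) / len(matched)

p = predict_success({"dismissal", "no_procedure"}, past_cases)
```

The step towards "elucidation of legal reasoning" the authors describe is what this sketch lacks entirely: it predicts *what* tribunals decide without modelling *why*.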

6/ The authors conclude that blacklisting, anonymity orders under ET r.50, data protection & equality law all fall short of offering protection against the misuse of algorithmic analysis of the register.

7/ In respect of equality law, the authors note a number of problems with proving that a decision not to hire, resulting from algorithmic decision-making, was because of a particular protected characteristic.

8/ 1st, there's the IP protection of algorithms' decision-rules. 2nd, even if identified, the rules are generally obfuscatory - pointing not to a single characteristic but to a multifactorial approach, which is problematic under EqA s.13 (but would be less so if s.14 were in force).

9/ Whilst indirect discrimination may be a more promising route, the authors are concerned about the evidential difficulty in identifying group disadvantage at the time of the discrimination, given that algorithms continuously process data & update predictions.

10/ I'm not sure that's necessarily so problematic. I suspect expert evidence wouldn't be too hard to come by to suggest that using algorithmic hiring with litigious risk as a factor places certain groups (women, ethnic minorities, disabled people) at a particular disadvantage.

11/ The authors' primary recommendation is systematic anonymisation of published data, a recommendation necessitating a recalibration of the balance between data protection & open justice.
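
As a toy illustration of what systematic anonymisation might look like at its very crudest (my own sketch, not the paper's proposal - names invented), here is name substitution over a judgment's text. Real anonymisation needs far more than string replacement, since addresses, job titles and unusual facts can still identify parties.

```python
import re

def anonymise(text: str, parties: list) -> str:
    """Replace each known party name with a neutral token before publication."""
    for i, name in enumerate(parties, start=1):
        text = re.sub(re.escape(name), f"Party {i}", text, flags=re.IGNORECASE)
    return text

judgment = "Jane Doe brought a claim against Widgets plc. Jane Doe succeeded."
out = anonymise(judgment, ["Jane Doe", "Widgets plc"])
# "Party 1 brought a claim against Party 2. Party 1 succeeded."
```

The gap between this and robust anonymisation is exactly why the next tweet notes it helps mainly against the crudest forms of data mining.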

12/ Anonymisation assists against the crudest form of data mining (& also against hiring managers googling candidates) though it would likely be of limited impact against the discrimination concerns.

13/ To address that, the authors recommend extending the EqA's protected characteristics to 'analogous grounds', as well as making explicit that legal responsibility for use of the algorithm vests in the user/employer.

14/ Finally, the authors recommend more comprehensive regulation of the use of judgment data as part of AI processes.

15/ A fascinating paper on 1 element of the #ukemplaw impacts of the increasing use of AI by employers for hiring & firing decisions. Until these issues are resolved, ironically this paper could be usefully deployed by employers as ammunition to encourage employees to settle.
