This anti-robot rights argument isn't very good, but there's a lot going on and it's hard to keep everything straight.

Let's break down the main claims. I'll save my criticisms for the end. #AIEthics #robotrights #botALLY #humansupremacy

noemamag.com/a-misdirected-…
1. The article begins by associating robot rights with science fiction scenarios and singularity theorists like Kurzweil, ideas that are grounded in "endless optimism".
2. The authors then associate robot rights with "the reductionist mainstream tradition of cognitive science" based on the "computer metaphor," in which values, affection, and interpersonal human connection are "reduced to a kind of formalism".
3. Against this reductionist view, the authors claim "We are not just complicated information processing machines". Instead, they endorse "embodied and enactive approaches to cognition" in which artifacts exist as "constituent parts of our milieu".
4. The authors argue that on the enactive perspective, robots aren't entities in their own right; they aren't "out there" but instead are "an inherent aspect of our own being".
5. The authors then turn to a discussion of #AIEthics as an alternative to science fiction scenarios. They discuss how AI algorithms are used to oppress and marginalize people. They say human rights are the real challenge of AI Ethics.
6. These reflections on the imbalance of power and abuses of big tech lead the authors to claim that "Robotic and AI systems are inherently conservative forces that are inseparable from power and wealth."
7. Furthermore, the authors acknowledge how the appeal to "humanity" in tech ethics often ignores or marginalizes certain vulnerable groups, citing a case where a wheelchair user became trapped by a service delivery robot. pittnews.com/article/151679…
8. The authors argue that prioritizing the rights of the robot over the rights of the human amounts to the "dehumanization of marginalized individuals and communities."
9. The authors claim that the robot rights debate is a "first-world preoccupation with abstract conceptions," and that theoretical debates over agency or consciousness are "detestable" in light of real threats "on the ground".
10. The authors then raise a challenge to the robot rights debate: the difficulty of drawing boundaries around the entities that would be granted rights. They note that AI depends heavily on so-called "microworkers" who are often exploited.
11. The authors conclude the article by returning to the worry that robot rights simply allows corporations to evade responsibility. "Treating AI systems... as separate... from these corporations, or as 'autonomous'... is not ethics — it is irresponsible and harmful..."
I hope my breakdown of the article above is fair and constructive! I don't want to misrepresent their position in my comments below.
On a claim by claim basis, I agree with a lot of their view!

- Singularity theory is a doomsday cult & shouldn't be taken seriously
- Dynamical, enactive approaches to cognition are good
- Big tech is evil and must be held accountable
- Human rights have priority in #AIEthics
These four points aren't trivial! From my perspective, agreement on these four points puts me and the authors on "the same side" of this debate; any disagreements we might have beyond these points are relatively minor in comparison.
The authors see things differently. They are drawing sharper lines in the sand, ones that condemn work on robot rights (of the sort I do). From their perspective, I'm on the wrong side of this debate.

Before talking about our disagreements, let's talk about their argument.
Their argument works by association: they're grouping singularity theory, mainstream cognitive science, and corporate tech as the "robot rights" side, and positioning enactivism and human rights as the ethical alternative.

This framing of the debate is a strawman.
The major proponents of robot rights in the literature (Kate Darling, David Gunkel) don't ground their arguments in singularity theory or mainstream cognitive science, and they aren't just corporate shills.

The article doesn't touch any of this work.
Instead, the authors align their views on robot rights with debates in cognitive science between dynamical and computational approaches to the mind.

This framing of the debate is a red herring. The computationalism vs enactivism debate is basically irrelevant for robot rights.
This requires some unpacking.

For one thing, computationalist cognitive scientists can still presumably care about human rights. Computationalism about the mind is not itself an ethical stance about the value of humans relative to computers.
There's no reason why a computationalist can't be sincerely committed to human rights, stand against corporate abuses, reject Kurzweil as a loony, etc. Computationalists aren't generally committed to defending corporate robots.
Similarly, there's no principled reason why enactivists can't think about robot rights, or take seriously the idea that machines deserve some social status or recognition. Plenty of enactivists take an ecological view of agency where machines (like thermostats) can be agents.
For that matter, the article gives no reasons for thinking that enactivism will better deal with corporate abuses in tech ethics. Perhaps enactive views will make it easier for corporations to evade responsibility by emphasizing the dynamics of complex systems. Why not?
More importantly, there's a rich literature in which people take enactive approaches to computation, or computational approaches to dynamical organization found in enactive systems.

In other words, enactivism and computationalism are not starkly opposed views.
If we believe that computationalism and enactivism are ultimately both useful and compatible frameworks for understanding dynamical complexity, then this distinction is completely unhelpful for informing our views on robot rights.
Put another way: the rejection of "mainstream computationalism" is clearly leftover rhetoric from the representation wars in the late 20th century. But those battles have cooled off in the last few decades. Modern cog sci is both dynamical and computational.
But isn't this discussion of dynamical complexity in enactivism part of the same "first world preoccupation with abstraction" as mainstream cog sci?

If the archaic representation-wars partisanship informing their argument isn't idle "first world" abstraction, I dunno what is.
I think pinning their position on an enactivist rejection of computationalism is a red herring, an appeal to academic obscurity that most reading the argument simply won't be equipped to evaluate. It's doing none of the ethical or theoretical work.
My sense is that they're trying to leverage the existing animosity to computationalism among radical anti-representationalist enactivists in opposition to robot rights.

There are lots of self-righteous anti-representationalists, but that doesn't make anti-representationalism the most ethical view.
So on their own terms, I don't see any clear conceptual relationship between the computationalism/enactivism debates and robot (or human) rights.

Unfortunately, the article is explicitly hostile to attempts at exploring these relationships further.
After skimming the surface of these debates, the article then rejects idle theory and turns to actual cases of AI ethics abuses.

They don't say how computationalism or singularity theory makes these abuses possible, or what they have to do with robot rights.
I don't know any tech CEOs making public comments on enactivism or robot rights to defend their abuses. I do know that Ben Goertzel (Sophia's tech lead) holds views on the mind informed by an autopoietic account. He's a (computational) enactivist. bit.ly/3hsro6M
The upshot is that the alignments the article constructs around robot rights, tech ethics, and enactivism don't really hold up to any scrutiny.

If the authors want to argue that enactivism provides a better grounding for thinking about ethics and rights, perhaps they should start by explaining what "rights" are in an enactivist framing. If humans are more like hurricanes than computers, do hurricanes deserve rights?
An enactivist account of human and robot rights would be interesting! But the authors don't even hint at what that account might look like. They jump straight from enactivist psychology to the ethical priority of human rights without showing any of their work.
Presumably, an enactive account would recognize some forms of agency in the machines around us. A computer server is not just a formal system. It is also an embodied dynamical system, subject to changes in temperature, maintenance, fluctuations in internet activity, etc.
Yes, an artifact is a constituent of the world I'm engaging. But so are you! That something exists in my world-view doesn't mean it has no agency independent of me. Insisting that artifacts don't exist "out there" is a frightening view tbh.
It's not clear to me how to square this view, that "artifacts are part of us", with the condemnation of corporate behavior, where it's unethical to attribute the responsibility to anything but corporations.

Are corporations part of us or not?

Another useful enactivist discussion would be how responsibility breaks down in complex dynamical systems like corporations. To what extent can (ethical? agential?) responsibility be attributed to the whole, and to what extent the parts? These are big open questions!
Obviously, corporations are made of people who are to some extent responsible for what the whole does. But the legal framework of corporate personhood is designed to shield those responsible from certain forms of liability.

So, what's the enactivist take on corporate personhood?
Part of the challenge here is that enactivism is an explanatory framework, designed to explain the dynamical self-organizing structures found in the biological sciences.

Enactivism doesn't really have direct normative implications for the idea of rights.
Consider the ethics of mosquito eradication. The enactivist might recognize the harm caused by mosquitoes, but also the delicate dependence of ecological dynamics on mosquitoes. But what does this imply about eradication? Maybe nothing? medium.com/@eripsa/extinc…
Enactivists can tell us why and how mosquitoes are alive, but they can't really tell us whether it's better to keep them around or eradicate them.

Seems to me that enactivism as such is also silent on both human and robot rights.
"Rights" are not just about identifying and classifying all the individuals or communities that deserve them.

At their core, rights are about protecting an idea of justice, of a world we want to live in. Enactivism can't tell us what justice is, it's not trying to do that.
We protect rights like free speech or assembly not because there's something special about persons that grants them these rights, but simply because we want to live in social systems where people are free to speak and assemble. We're all better off when these rights are protected.
On this framing, the question of robot rights doesn't depend on any comparison with people. It only depends on the question "what roles do we want robots to play in society?" Or better: "in a world populated with robots, what does justice look like?"
I can easily imagine a just world where corporations are held responsible for their robots in transparent ways, where marginalized communities and individuals have strong legal protections from systemic abuses, and where robots have some protected social status.
In fact, I'd argue that protecting certain communities from abuses like microwork requires some detailed technical and legal theory about collective agency and corporate responsibility, and how semi-autonomous robots fit into these structures.
If we admit that even people aren't fully autonomous, then the fact that robots aren't fully autonomous shouldn't count against their agency or rights either.

The only reason to think that robots should be denied social status is the theory that rights are zero-sum, and that protecting the rights of robots can only come at the expense of the rights of others.

But what if protecting robot rights *helped us* protect human rights?
The authors don't show that robot rights creates situations where robots infringe on human rights. They don't discuss why robot rights would exacerbate existing harms and abuses.

They simply offer some cases where robots and AI are misused.
For instance, consider the Emily Ackerman case. Perhaps a more thorough legal framework for discussing the operations of service robots in public spaces (e.g., right-of-way policies) might have prevented this situation?

Of course, public infrastructure is already hostile to disabled persons. The wheelchair ramp on sidewalks only allows for one set of wheels at a time. Mindlessly throwing robots into the mix only makes the situation worse.

But rejecting robot rights doesn't follow from this.
It seems to me that the Emily Ackerman case is a great example of why we need robot rights NOW, because robots aren't some fictional abstraction, they're populating our world, operating with and alongside us today. Their operations are *already* a matter for justice.
If we're advocating for a system that accommodates the full range of pedestrian mobility needs, why not consider the use and activity of artificial agents among those needs?

Why not take seriously the legal and political complications of a world populated by robots?
If corporations have already mastered the art of evading accountability, this is not due to recent AI techniques or robot rights advocacy. Condemning robot rights for the egregious abuses of capitalism that long predate the literature is simply unfair.
To robot rights advocates, taking the legal and political implications of artificial agents seriously provides an opportunity for rethinking agency and responsibility across all scales. It opens ways to challenge corporate personhood, not simply to expand its reach.