.@WestworldHBO makes an excellent (STS) point about artificial intelligence and autonomous systems: it's not the technology that is scary, what is scary is what technology says about us.
Luckily this also creates space for reflexivity, in the way art, literature and science can help us understand ourselves and our societies.
'We shape our tools and thereafter our tools shape us'.
.@WestworldHBO also clearly reiterates that we should not fall into the trap of technological determinism, but that doesn't mean that technology doesn't have its own materiality.
'Technology is a very human activity - and so is the history of technology.'
Our fears about technology are maybe a projection of other fears we have: lack of agency in a complex world, rising power of opaque transnational companies, diminishing trust in govts to protect us, our inability to stop contributing to these cycles that hurt us (b/c of comfort).
Technology cannot hide or alleviate the basic premises of the human condition: we are alone and we are weak. The only way to overcome this is to meaningfully and empathetically engage with each other - which is really hard.
Technology makes collaborating, organizing and communicating quicker - but not necessarily deeper because time is a crucial component in building ties, trust and understanding (as pointed out by @zeynep).
This is where we can have a discussion about technology and what we think it should look like. Currently the Internet is largely shaped by companies - but it doesn't need to be that way. In the end it is the public space in which we perform our lives. We can (re-)shape it.
Which doesn't mean it's easy to get it right, but we don't need to wait for companies or governments to fix it for us; they cannot do that without us.
We need to reclaim technology as a conversation and an experimentation ground.
One of the main characteristics of technology is that it allows us to change our environment, and thus produce a new ordering - this is what makes technology inherently political.
That is why with every technology we need to ask: who controls it, and what change does it effect?
Another way to ask that question is: who gains something with the use of this technology, and who loses something? How does this affect power relations? What vision of people and society does this imprint and/or strengthen?
This often is a hard question to ask because technology has many layers, and many actors who can produce, use, abuse and influence it. To make it even harder: technologies interact and change over time.
But just because it is hard doesn't mean we should not do it!
Over the past few days I have been researching the European draft law on artificial intelligence. I am not an expert on AI, but I do know a thing or two about standards, and standard-setting features prominently in this act!
What the European Commission proposes is that standards will be created for AI by European Standards Organisations. AI implementations that are compliant with the standards are legal in the EU.
There are only three European Standards Organisations, namely CEN, CENELEC, and ETSI. AI standards-setting would probably take place within CEN/CENELEC's Joint Technical Committee (JTC) 21 ‘Artificial Intelligence’.
In this work, I examine the role of norms in the governance of the Internet infrastructure.
Based on extensive qualitative and quantitative analysis of different Internet governance bodies, namely @ICANN, @ietf, and @ripencc, I developed a theory.
Norms only get introduced and maintained in the governance of the Internet infrastructure if they: 1) Are translated into the social worlds of the significantly represented groups 2) Increase voluntary interconnection and interoperation between independent networks.
Another new draft in response to the discussion of the removal of racist language in the IETF.
This new draft practically says: when there is contention about whether a concept is racist or not, the IETF should err on the side of racism and the status quo.
The draft asserts that this would be to the benefit of the Internet community.
It goes on to state that removing racist language might be too much work and would not really contribute much - that it is more important not to disrupt the current ways of working.
Also, the draft says that no one should be forced to remove racist language because, again, it would disrupt normal work.
When @MalloryKnodel and I started this work, we thought removing racist language would be kind of a no-brainer.
I think a European normative technical system should seek to leverage the human right to science and with that overcome the patents and copyrights of the infrastructure of the information society.
This would leverage the knowledge production of universities, and re-involve public research institutions with the development of the Internet as public utility.