this idea comes up frequently and in practice is much more difficult than it seems. the reasons why are illustrative of the software/hardware disconnect in dealing with the real world
teleoperation seems like both a great and simple idea; if you can replace the “””AI””” part of robotics with a human you get a big gain for free right?
not simple in practice, and about as intractable as the robotics problem on its own in many cases. here are some reasons
the “””AI””” part of robotics is seldom the hard part in many applications. you don’t need much of a sophisticated on-line planning algo to drive a tractor around a field.
you hit the first problems in control and actuation. electromechanical actuators are not perfect. they do not respond linearly to control inputs. their sensors are noisy; the linkage is not necessarily in the position the sensor says it’s in
controllers are not perfect. the command your controller actually sends the actuator may not be the one you meant to give, and the sensors that tell you what was sent aren’t perfect either.
so even assuming perfect perception and planning, in a zero latency environment, teleoperation has to contend with getting the machine to do what you want it to do
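to make that concrete, here’s a toy sketch (the actuator model and numbers are made up, not from any real machine) of how a nonlinear actuator plus a noisy position sensor makes even a clean command misbehave:

```python
# toy model: deadband + saturation in the actuator, gaussian noise in the
# sensor, so commanded, actual, and reported positions all disagree
import random

def actuator_step(position: float, command: float) -> float:
    # nonlinear response: deadband below 0.05, saturation above 0.5
    effective = 0.0 if abs(command) < 0.05 else max(-0.5, min(0.5, command))
    return position + effective

def sensor(position: float) -> float:
    return position + random.gauss(0.0, 0.02)  # noisy measurement

random.seed(0)
target, actual = 1.0, 0.0
for step in range(15):
    measured = sensor(actual)            # what the operator sees
    command = 0.3 * (target - measured)  # naive proportional control
    actual = actuator_step(actual, command)
    print(f"step {step:2d}  measured={measured:+.3f}  actual={actual:+.3f}")
```

the deadband alone guarantees the machine stalls short of the target, and the noisy sensor means the operator can’t even tell exactly where it stalled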
to some degree a human can compensate for these facts and indeed they do; when you’re in the tractor you’re dealing with these same problems to some extent
but when you’re in the tractor you have human level perception. you don’t get to have that over teleoperation. even before latency, you immediately lose proprioception and inertial sensing, as well as haptic feedback. you can build haptics in, but then we’re back where we started
so let’s say eventually we rig something up that’s fine for humans to drive a tractor around a field remotely. fine, but you still have every other part of farm operations to automate, plus the teleoperation overhead you’ve sunk a lot of resources into
Moravec’s paradox cruelly strikes again; the stuff we can do teleoperation on we can just automate, the stuff we can’t automate we can’t do teleoperation on either
this is not universally true; there are useful operating points for teleoperation. supervision of autonomous operation in environments with a lot of long-tail cases is one
detecting anomalous environments and calling for help is a simpler problem than just solving autonomy
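a minimal sketch of that operating point, under toy assumptions: run autonomously while the model’s own confidence is high, and page a remote human when it drops. model_confidence and the threshold are hypothetical stand-ins for whatever a real perception stack actually produces:

```python
# toy supervised-autonomy loop: autonomous when confident, escalate when not
CONFIDENCE_THRESHOLD = 0.8  # tune against your false-alarm budget

def model_confidence(observation: dict) -> float:
    # stand-in: real systems might use calibrated scores, ensemble
    # disagreement, or reconstruction error as the anomaly signal
    return observation.get("familiarity", 1.0)

def step(observation: dict) -> str:
    if model_confidence(observation) < CONFIDENCE_THRESHOLD:
        return "ESCALATE: pause and request a teleoperator"
    return "CONTINUE: autonomous operation"

print(step({"familiarity": 0.95}))  # normal field row -> keep driving
print(step({"familiarity": 0.40}))  # downed tree in the row -> call a human
```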
but for many of the “””obvious””” cases where we ‘just’ replace the “””AI””” with a human, it’s either not feasible or else not interesting
dealing with the physical world is much more difficult than dealing with any software environment, regardless of the verisimilitude of your simulation. this is not because of something special about the physical world but because of the shortcomings of software
this is before we get into the economics of any of this
it leads neither to the best nor even to good government, nor does it allow you to be rid of a bad government. elections justify the transfer of power without bloodshed, and this is what makes representative democracy historically unique.
you will hopefully notice how this also explains why we still have daylight savings time.
responses to this post are emblematic of the broader cultural divide within and around tech, and split into roughly four types: 1. 'yes, this is a hard and important problem that will likely take a lot of time and difficult work to solve, let's try to figure that out'
2. 'you fucking idiot, you stupid moron, don't you know how hard this problem is? the one that you said is hard? how dare you even think about trying to change or improve anything you goddamn tech bro'
3. 'those asml people must be stupid and their machines are obviously too complicated, a fact that is obvious to me, a person who has never thought about this technology before encountering this post, and in fact does not know what a semiconductor is or does'
yesterday evening i gave a presentation to founders, investors, and the ai community at @aixventureshq on how to think about ai application development. it was well received so i'm going to reproduce it in full here on x the everything app (which is also now a slide deck app).
1. buzzwords are mind killers. you must empty your head of all buzzwords. the temptation with any new technology is to use existing concepts as crutches as much as possible, but this kills the creativity necessary to explore the full capabilities of this new technology.
new abstractions which don't carry meaning are unnecessary / harmful:
- 'rag' is meaningless as a concept. imagine calling a web app 'database augmented programming'.
- 'agent' probably just means 'run an llm in a loop' (minimal sketch below)
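to make 'run an llm in a loop' concrete, here is a minimal sketch. call_llm and the single tool are hypothetical stand-ins, faked with canned replies so the loop actually runs; no real model api is assumed:

```python
# 'agent' as a loop: call the model, act on its reply, feed results back
_canned = iter(["TOOL: search cheapest flight to nyc",
                "FINAL: book the 6am flight"])

def call_llm(prompt: str) -> str:
    return next(_canned)  # hypothetical stand-in for a real model call

def run_tool(name: str, arg: str) -> str:
    # single hypothetical tool; real agents dispatch over a tool registry
    return f"search results for {arg!r}" if name == "search" else "unknown tool"

def agent(task: str, max_steps: int = 10) -> str:
    history = f"task: {task}"
    for _ in range(max_steps):
        reply = call_llm(history)
        if reply.startswith("FINAL:"):   # model says it's done
            return reply[len("FINAL:"):].strip()
        if reply.startswith("TOOL:"):    # model asks for a tool call
            name, _, arg = reply[len("TOOL:"):].strip().partition(" ")
            history += f"\n{reply}\nresult: {run_tool(name, arg)}"
        else:
            history += f"\n{reply}"
    return "step budget exhausted"

print(agent("book me a flight"))
```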
phd-sounding terms for simple concepts mostly let everyone know how cool you are for reading the arxiv, but they don't help anyone get work done:
- 'prompting' means 'text input'
- 'system prompt' means 'the instruction part of the text input' (sketch below)
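a minimal sketch of that point: the 'system prompt' and the 'prompt' are just pieces of one string of text the model consumes. the template below is illustrative, not any particular model's actual chat format:

```python
system = "you are a terse assistant. answer in one sentence."  # 'system prompt'
user = "why is teleoperation hard?"                            # 'prompt'

# what the model actually consumes is one text input
text_input = f"{system}\n\nuser: {user}\nassistant:"
print(text_input)
```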
it is far too early to spend time adopting the abstractions imposed by existing tools and frameworks. until you understand what you want to build and how to build it, these can only slow you down
finally, ai application development is still just programming, with the same social and technological implications as all programming since we invented the stored-program computer
2. chatbots / assistants are the geocities (static websites if you're too young to remember the true glory days) of ai
- people think 'ai' means 'chatbot' because their first experience of ai is chatgpt / claude / gemini etc, and because people reason about new technologies by analogy. this is the same principle as how at launch, the web was thought of as a new publishing medium (hence web 'page'). it takes time to break out of the very first analogies we apply.
- chatbots and assistants are very useful, very flexible, and apply across almost every industry and job function. it's great to be able to interact with large text corpora non-linearly and conversationally, and 95% of chroma's users are building chatbots on top of their data. this is real and very useful and people should do that! (a minimal sketch of the pattern follows this list)
- ... but they just represent static data in a new way. even assistants without retrieval are essentially a kind of conversational index with basic reasoning over a corpus stored in the weights of the model (this is too many phd words)
- over-indexing on chatbots as the most important or most valuable application of this technology is a mistake, but so is dismissing their utility
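a minimal sketch of the 'chatbot on top of your data' pattern, assuming the chromadb python client; call_llm is a hypothetical stand-in for whatever model you actually use, and the documents are toy examples:

```python
import chromadb

# index a couple of toy documents with the chromadb client
client = chromadb.Client()
docs = client.create_collection("docs")
docs.add(
    ids=["1", "2"],
    documents=[
        "teleoperation loses proprioception and haptic feedback.",
        "actuators respond nonlinearly and their sensors are noisy.",
    ],
)

def call_llm(prompt: str) -> str:
    return "stand-in answer"  # hypothetical; swap in an actual model call

def chat(question: str) -> str:
    hits = docs.query(query_texts=[question], n_results=2)  # retrieve
    context = "\n".join(hits["documents"][0])
    return call_llm(f"context:\n{context}\n\nquestion: {question}")

print(chat("why is teleoperation hard?"))
```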
before any of this gets fixed it's necessary to understand why men in tech dress like shit. hacker culture and the web allowed people to rebel against the stifling business culture of the 90's. in software, especially foss, people were judged by their work, not their appearance.
the first wave of truly successful new-breed software companies was founded on, and succeeded through, the hacker ethos. in contrast to 90's / 2000's microsoft, nobody gave a shit what you wore to work at early google as long as you did the job. the best flocked to these co's.
but then the hackers got rich and famous. as in every society since the dawn of time, people wanted to emulate the elites, and the elites dressed like shit. new founders wanted to 'look like' founders and 'fit in', so they too, dressed like shit.
the only things in the new testament about jesus’ life after age 12, before starting his ministry at age 30, are the verses mark 6:3 and matthew 13:55
in most english translations, they say jesus was (the son of) a carpenter, but this isn’t quite right
in the original greek, the word is τέκτων, tekton, a general term for artisans, craftsmen, and builders - indeed, this word has the same root as τεχνολογία - technologia, the study and discourse of craft
jesus spent his early adult life as a builder, making useful things
because there are no stories in the new testament about jesus’ life during this time, one might think it’s unimportant. this would be missing the point.
now that the dust has settled on the first round of the openai debacle, it’s time to start asking some questions
two of the board members who voted altman out, helen toner and tasha mccauley, are deeply enmeshed in ‘effective altruism’
‘effective altruism’ as a movement has a lot wrong with its ideas, some of which i’ve documented before, but this isn’t about that - it’s about basic competence
the people who tell you over and over that they’re most concerned with the “long term future of humanity” have demonstrated over and over again that they don’t understand even the short term consequences of their actions