this idea comes up frequently and in practice is much more difficult than it seems. the reasons why are illustrative of the software/hardware disconnect in dealing with the real world
teleoperation seems like both a great and simple idea; if you can replace the “””AI””” part of robotics with a human you get a big gain for free right?
not simple in practice, and about as intractable as the robotics problem on its own in many cases. here are some reasons
the “””AI””” part of robotics is seldom the hard part. you don’t need a sophisticated on-line planning algo to drive a tractor around a field.
you hit the first problems in control and actuation. electromechanical actuators are not perfect. they do not respond linearly to control inputs. their sensors are noisy; the linkage is not necessarily in the position the sensor says it’s in
controllers are not perfect either. the input the controller actually delivers to your actuator may not be the one it was supposed to give, and the sensors that tell you this aren’t perfect.
so even assuming perfect perception and planning, in a zero latency environment, teleoperation has to contend with getting the machine to do what you want it to do
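to make the point concrete, here’s a minimal sketch of that control/actuation gap, assuming a toy first-order actuator with a saturating (nonlinear) response and a noisy position sensor. every name here is hypothetical, not any real control stack:

```python
import math
import random

random.seed(0)

def actuator_step(position, command, dt=0.01):
    # nonlinear response: effective drive saturates via tanh,
    # so large commands don't produce proportionally large motion
    effective = math.tanh(command)
    return position + effective * dt

def read_sensor(position, noise_std=0.005):
    # the sensor reports position plus gaussian noise; the linkage
    # is not necessarily where the sensor says it is
    return position + random.gauss(0.0, noise_std)

# naive teleoperation loop: the operator commands proportional to
# the *sensed* error, so both noise and saturation bite
target, position = 1.0, 0.0
for _ in range(1000):
    sensed = read_sensor(position)
    command = 5.0 * (target - sensed)   # simple P-control by the operator
    position = actuator_step(position, command)

# the machine ends up near the target, never exactly on it,
# and the path there depended on sensor noise
print(abs(target - position) < 0.05)
```

even in this zero-latency toy, the loop only converges approximately - exactly the "getting the machine to do what you want" problem, before perception or planning enter the picture.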
to some degree a human can compensate for these facts and indeed they do; when you’re in the tractor you’re dealing with these same problems to some extent
but when you’re in the tractor you have human level perception. you don’t get to have that over teleoperation. even before latency, you immediately lose proprioception and inertial sensing, as well as haptic feedback. you can build haptics in but we’re back where we started then
so let’s say eventually we rig something up that’s fine for humans to drive a tractor around a field remotely. fine, but you still have every other part of farm operations to automate, plus the teleoperation overhead you’ve sunk a lot of resources into
Moravec’s paradox cruelly strikes again; the stuff we can do teleoperation on we can just automate, the stuff we can’t automate we can’t do teleoperation on either
this is not universally true; there are useful operating points for teleoperation. supervision of autonomous operation in environments with a lot of long tail cases is one
detecting anomalous environments and calling for help is a simpler problem than just solving autonomy
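that operating point can be sketched in a few lines - detect-and-escalate rather than full autonomy. all the names and the anomaly score here are hypothetical stand-ins for a real stack:

```python
# autonomy handles the common case; a teleoperator is paged only
# when the scene looks anomalous

ANOMALY_THRESHOLD = 0.8

def anomaly_score(observation):
    # stand-in: a real system might use reconstruction error,
    # out-of-distribution detectors, planner confidence, etc.
    return observation.get("novelty", 0.0)

def step(observation):
    if anomaly_score(observation) > ANOMALY_THRESHOLD:
        return "call_teleoperator"  # escalate the long-tail case
    return "autonomous_action"      # the easy 99% stays automated

print(step({"novelty": 0.1}))   # → autonomous_action
print(step({"novelty": 0.95}))  # → call_teleoperator
```

the hard part moves into `anomaly_score` - but scoring "this looks weird" is still a much easier target than acting correctly in every weird case.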
but for many of the “””obvious””” cases where we ‘just’ replace the “””AI””” with a human, it’s either not feasible or else not interesting
dealing with the physical world is much more difficult than dealing with any software environment, regardless of the verisimilitude of your simulation - not because of something special about the physical world but because of the shortcomings of software
this is before we get into the economics of any of this
before any of this gets fixed it's necessary to understand why men in tech dress like shit. hacker culture and the web allowed people to rebel against the stifling business culture of the 90's. in software, especially foss, people were judged by their work, not their appearance.
the first wave of truly successful new-breed software companies was founded on, and succeeded through, the hacker ethos. in contrast to 90's / 2000's microsoft, nobody gave a shit what you wore to work at early google as long as you did the job. the best flocked to these co's.
but then the hackers got rich and famous. as in every society since the dawn of time, people wanted to emulate the elites, and the elites dressed like shit. new founders wanted to 'look like' founders and 'fit in', so they too, dressed like shit.
the only things in the new testament about jesus’ life after age 12, before starting his ministry at age 30, are the verses mark 6:3 and matthew 13:55
in most english translations, they say jesus was (the son of) a carpenter, but this isn’t quite right
in the original greek, the word is τέκτων, tekton, a general term for artisans, craftsmen, and builders - indeed, this word has the same root as τεχνολογία - technologia, the study and discourse of craft
jesus spent his early adult life as a builder, making useful things
because there are no stories in the new testament about jesus’s life during this time, one might think it’s unimportant. this would be missing the point.
now that the dust has settled on the first round of the openai debacle, it’s time to start asking some questions
two of the board members who voted altman out, helen toner and tasha mccauley, are deeply enmeshed in ‘effective altruism’
‘effective altruism’ as a movement has a lot wrong with its ideas, some of which i’ve documented before, but this isn’t about that - it’s about basic competence
the people who tell you over and over that they’re most concerned with the “long term future of humanity” have demonstrated over and over again that they don’t understand even the short term consequences of their actions
regardless of how exactly this shakes out - and it’s looking increasingly likely that this was a poorly executed power grab by mental defectives - it’s important to remember that ‘effective altruists’ have been consistently wrong about literally everything
they completely failed to predict the dominant paradigm for ai (llms), instead building castles out of sand on the idea of recursively self-improving agents and ideas like “instrumental convergence” and “orthogonality”, none of which apply because an llm has no agency
they have predicted catastrophe with every single model release, bringing down the perfectly harmless ‘galactica’ from meta - months later, the much more capable llama2 is out, with zero harm produced
i spent the weekend working to try to get more data about the question of whether or not gpt has world models
my results seem to suggest it doesn't, but i found some interesting things along the way. writeup below.
the entire question really kicked off for me when people were using gpt's performance on chess under certain conditions as strong evidence for the idea that it has world models
at the same time, it couldn't play well from random positions
the claim that gpt played chess well because it has a world model of chess is a very very strong claim, which entails that gpt knows the rules of chess, can consistently apply them, and knows good strategy
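the "knows the rules" half of that claim is directly testable: sample positions, ask the model for a move, and check legality. here's a toy harness in that shape - to stay dependency-free it covers only a lone knight on an empty board (a real test would use a full rules engine), and `query_model` is a hypothetical stand-in, not an actual gpt call:

```python
import random

random.seed(0)

# knight move offsets on a 0..7 x 0..7 board
KNIGHT_DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
                 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def legal_knight_moves(square):
    f, r = square
    return {(f + df, r + dr) for df, dr in KNIGHT_DELTAS
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

def query_model(square):
    # stand-in for the model under test: here it guesses a random
    # board square, which is what "no world model" would look like
    return (random.randrange(8), random.randrange(8))

# measure the legal-move rate over random positions
positions = [(random.randrange(8), random.randrange(8)) for _ in range(1000)]
legal = sum(query_model(sq) in legal_knight_moves(sq) for sq in positions)
print(f"legal move rate: {legal / 1000:.2%}")
```

a model that genuinely had the rules would score near 100% on a harness like this regardless of how the position was reached; high legality on common positions but not random ones points at memorized move distributions instead.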
making retrieval practical for ai applications takes a lot more than repackaging a vector database built for semantic search, or adding vector search to an existing database.
in the first place, the core vector search and storage architecture for ai is fundamentally different from that made for semantic search
in semantic search / recommender systems, all the data in a single very large index is accessible to every user
additionally, the data is updated only rarely - most mutations are additions, with relatively few and infrequent updates and deletions
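the contrast in access patterns can be sketched with two toy stores - these class names and shapes are illustrative only, not any real database's API:

```python
from collections import defaultdict

class SharedIndex:
    """semantic search shape: one big index visible to every user;
    writes are rare and append-mostly."""
    def __init__(self):
        self.vectors = []  # (doc_id, vector) pairs, rarely mutated

    def add(self, doc_id, vector):
        self.vectors.append((doc_id, vector))

class PerUserStore:
    """ai-application shape: many small per-user stores where
    updates and deletions are routine, not exceptional."""
    def __init__(self):
        self.by_user = defaultdict(dict)  # user -> doc_id -> vector

    def upsert(self, user, doc_id, vector):
        self.by_user[user][doc_id] = vector

    def delete(self, user, doc_id):
        self.by_user[user].pop(doc_id, None)

shared = SharedIndex()
shared.add("doc1", [0.1, 0.2])

store = PerUserStore()
store.upsert("alice", "note1", [0.3, 0.4])
store.upsert("alice", "note1", [0.5, 0.6])  # in-place update: common here, rare above
store.delete("alice", "note1")
print(len(store.by_user["alice"]))  # → 0
```

an index structure optimized for the first shape (batch-built, read-mostly, globally shared) is a poor fit for the second, which is the architectural point.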