This research note is part of the Mediocre Computing series
I’m going to set aside my longer serialized projects for the next month or so. There’s too much random disruption happening in my life to work on them, so I’m going to do a bunch of one-off exploratory pieces on miscellaneous themes that have been on my mind. One such theme is robots.
The reason robots are on my mind is not because Amazon has released its first one (to decidedly mixed reviews), but because Foundation has started airing on Apple TV, and that got me thinking of Asimov and real robots.[1] The show by the way is pretty good, two episodes in (here's my evolving Twitter thread on the show), and most of the changes improve both plot and character development without messing with the spirit of the original.
In the tech world, we seem to have oddly schizoid attitudes about robots. On the one hand, we have people (including skilled roboticists) reluctant to even recognize them as a distinct category of technology, and looking to subsume them in adjacent categories like appliances, automation, or AI. On the other, we have people conceptualizing them primarily in tediously and uninterestingly anthropomorphic ways.
Category-Denying Robots
As an example of the former attitude, consider some of the responses I was getting while shitposting about robots earlier this week:
Here are three category-denying types of responses I got:
“Robots don’t look like robots when they arrive. They look like dishwashers.”
“A robot is just anything with a sensor and an actuator”
“Robots are already here, working in factories.”
Some of what’s going on here is analogous to what we talked about last time — the goalposts-moving phenomenon that robotics shares with its simpler, stupider sibling, AI (more on why robotics > AI later). But there’s also a part that’s just shallow analysis.
The first type of response does not make a meaningful distinction between just any random type of machine and the idea of a robot. It’s like saying any kind of computer program is an AI.
This is not so much moving the goalposts as denying the game of football exists.
It's a peculiarly American attitude. It's as if this country (unlike, say, Japan) wants to reduce robots to appliances when clearly they can be so much more. When the Roomba first came out, there was a lot of commentary about this strange cultural bias.
The second type of response is reductive in an unhelpful way. A thermostat comprises a sensor and an actuator, as does the light inside your refrigerator, but it is not very interesting to consider those things robots unless your agenda is to dismiss the category altogether. People who make this sort of argument are often technically skilled but rather tasteless (in design terms) types who can’t see wholes that are greater than parts. They’ll never build interesting robots.
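To make the reductiveness concrete, here is a minimal sketch (in Python, with made-up setpoint and stand-in sensor/actuator functions) of a complete "sensor plus actuator" machine. It fully satisfies that definition, which is exactly the problem: by this standard, ten lines of glue code around a relay is a robot.

```python
# A complete "machine with a sensor and an actuator": a thermostat.
# All values and device functions are hypothetical stand-ins.

SETPOINT_C = 20.0

def read_temperature() -> float:
    # stand-in for a real temperature sensor
    return 18.5

def set_heater(on: bool) -> None:
    # stand-in for a real relay/actuator
    print("heater", "on" if on else "off")

def thermostat_step() -> None:
    # the entire "intelligence": one comparison
    set_heater(read_temperature() < SETPOINT_C)

thermostat_step()  # -> heater on
```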
The third type is perhaps the most serious kind of shallow analysis: conflating robotics with industrial automation. Yes, robots can be used to do a distinctive pattern of industrial automation (marked, for example, by greater flexibility than typical machine tools). But that's merely an important use case, not a definition. Robotics is a larger, more interesting category than automation. Pretending robots are only about automation is a bit like pretending that blue-collar factory workers are the only kinds of humans.
There are already robots bumming around in the wild as far away as Mars, far from anything that looks like a factory, doing things that look nothing like assembly line work.
Bullshit Hollywood Robots
The opposite category of bad robot conceptualization is the tediously and uninterestingly anthropomorphic kind favored by the more “serious” kind of film-maker with angsty humanist conceits.
These are the humanoid robots featured in a certain kind of self-satisfied Hollywood movie, for some reason depicted with unnecessarily exposed innards capped by an unnecessarily creepy death-mask plastic face and a determinedly uncanny-valley affect. The more tropey versions usually have eyes that glow red when they turn evil.
As Adam Elkus has argued in a recent deconstruction of Ex Machina, this sort of movie is not really about robots at all, but about murky projections that have nothing to do with the technology, and everything to do with narcissistic explorations of what it means to be human, and how human relationships work. Mannequins or puppets would serve well enough in this kind of narrative (in fact, a movie like Lars and the Real Girl, featuring a sex doll, does a better job than most ponderous “can robots feel love?” type bullshit movies).
Not to beat up too much on this kind of thing — clearly great stories can result — but it's not a good way to think about actual robots. As Bender from Futurama[2] might say, bite my shiny metal ass.
The weakness of the conceptualization is clear from what’s actually interesting about real bleeding-edge humanoid robots like Boston Dynamics’ Atlas — the amazing acrobatics, and the demonstration of challenging technologies like high-speed hydraulics and machine vision. Not made-up subtle emotional dramas that don’t get at anything interesting about robotics qua robotics.
Okay, so that’s two ways to get robots wrong. How do we go about getting them right?
Asimovian Robots
Asimov doesn’t get enough credit for the sophistication of his conceptualization of robots. His work is dated in many ways, but he got many basic things right that more modern storytellers still get annoyingly wrong.
Besides the three laws (plus a zeroth law added in later books), which are genuinely interesting both as narrative devices and as robotics thought experiments, he also offered one of the better justifications for pursuing humanoid robotics at all: to adapt to existing human-built environments, and to explore space as high-fidelity human proxies and prepare it for human habitation.
Not only is this a better reason to make humanoid robots than mere narcissism, it is actually a pretty good reason in general. There are other reasons too, of course. Some are already being explored: companionship, sex, and so on. But Asimov's reason is better than most that are routinely discussed, and has only gotten better.
A robot that has roughly the same body size and shape (and strength and speed) as a human will likely solve problems in ways that humans can imitate, making humans and robots interchangeable in useful ways.
Body morphology, properly understood, induces a kind of language of knowing with which you comprehend your environment.
A creature that uses color stereo vision and opposable thumbs to explore and manipulate an environment speaks the same “language of manipulation” as another such creature, and both are different from one that uses wings, beak and claws, or one that uses a strong sense of smell, four legs, and strong jaws.
More subtly, the medium being the message, function following form, and so on, humanoid robots will likely see the world in similar ways to humans, and develop similar emergent understandings of it.
For example, we all unconsciously see the world of objects in terms of “handles” that we can use to lift them. That’s an affordance of the world as viewed from the perspective of a body with a hand (do dogs see “bitables”?). When you have a hammer in your hand, everything looks like a nail. But when you have a hand, everything looks like it has a handle. A humanoid robot equipped with a deep-learning computer is likely to discover and name the concept of a handle, and be able to communicate with humans in terms of words that translate well to human language.
With the rise of deep learning, this justification has gotten much stronger. A robot that looks like a human has a vast store of data to learn from. It can learn a "language model" of primate kinesiology from the zillions of hours of footage we already have of humans and apes moving. As VR and AR capabilities mature, and human movement is captured in 3D via motion capture, the training data will get even better. The argument generalizes to any kind of animal for which we have, or can cheaply generate, extensive movement data.
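As a toy illustration of what a "language model of kinesiology" might mean mechanically, here is a sketch that fits a next-pose predictor to a pose sequence, treating whole poses as tokens. Everything here is assumed for illustration: the skeleton size, the synthetic sinusoidal joint angles standing in for real mocap, and the linear least-squares fit standing in for the deep sequence models a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints = 17   # assumed skeleton parameterization, purely illustrative
T = 5000        # frames of stand-in "mocap"

# Synthetic stand-in for motion capture: smooth joint-angle trajectories.
t = np.linspace(0.0, 50.0, T)[:, None]
phases = rng.uniform(0.0, 2.0 * np.pi, n_joints)
poses = np.sin(t + phases) + 0.01 * rng.standard_normal((T, n_joints))

# "Next-token prediction," where the tokens are whole poses:
# predict frame t+1 from frames t-1 and t by least squares.
X = np.hstack([poses[:-2], poses[1:-1]])
Y = poses[2:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

mse = np.mean((X @ W - Y) ** 2)
print(f"one-step pose prediction MSE: {mse:.5f}")
```

The point of the framing, not the toy model: movement data has sequence structure that the same machinery behind language models can exploit, and a humanoid body is the form factor with by far the most such data available.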
That actually provides a natural justification for Asimov’s Three Laws of Robotics. The specifics of the laws don’t matter, but it is interesting to speculate that perhaps if function follows form, then behavior follows function, and values follow behavior as well. After all, many human moral ideas are couched in body-allegory terms: turn the other cheek, an eye for an eye, left-handed compliment.
There is a larger point here about embodiment that I’ll get to.
A humanoid robot might naturally develop basic operating values that are in harmony with other humanoids, a kind of deep empathy based on living in a similar bodily configuration space and seeing the world through an internal, emergent language close to that of humans.
Would a robot in the form factor of, say, a vast swarm of gray goo, or the shape-shifting metal in Terminator 2, even see the world in a way that allows the three laws to make sense? Would it ever learn the concept of a "human" to "not harm"?
Asimov for whatever reason ended up building a world that was nearly exclusively based on humanoid robots (though some early stories feature robots in other form factors such as cars and farm equipment), but that’s not necessary of course. There’s no reason we can’t envision and try to build all sorts of robots.
Perhaps the more humanoid ones will be capable of being governed by something like the Three Laws, while the weirder ones will require other kinds of governance mechanisms.
Robots as Artificial Biology
Too many people fail to see the huge design space between the uninspired poles of "a dishwasher is a robot" and "humanoid imagined by a narcissistic anthropocentric humanist."
The trick to robotics is to be loosely inspired by biology without being constrained by it, while also staying open to non-biological sources of inspiration of the right kind.
Though anthropocentric conceits and narcissistic objectification are poor reasons to pursue any sort of robotics (and make for bad movies), there is, in my opinion, a valuable and generative, rather than constraining, biological aspect to robots. One that goes beyond the basic (and good) Asimovian justifications.
Elsewhere in the Twitter thread linked earlier, I made up a definition:
A robot is a sufficiently complex, loosely biomorphic machine with a domain-adapted universal computing capability.
It's a nebulous category, but it captures what's interesting about the design direction represented by biology, and its tradeoffs. For example: how to solve physical problems without significantly shaping or specializing the environment to suit the machine. A simple illustration: a hand can twist-and-turn a wide range of shapes, but in a torque-limited way. A fixed-size spanner can only handle a single size of nut, but can apply a great deal more torque. An environment where the spanner is truly useful needs to have nuts of the right size in it, but almost any environment is one where a hand's twisting-and-turning ability is useful.
This gets at the key difference between robots and traditional factory automation.
A high-end CNC milling machine is vastly more complex than a low-end robot, and will likely have a more powerful computer and richer feedback loops. But it needs a highly controlled and closed factory environment, has no autonomy, and a narrow, fragile intelligence. It is useful, but not very biomorphic.
Biomorphic is a compact way of referencing a particular part of design space where you make minimal assumptions about the environment, which drives the design of the machine itself towards generality and autonomy.
But you only need to be loosely biomorphic, since you needn't imitate a particular organism literally. Seek strategies inspired by biology, but don't literally design steel parts to be limited to stress levels that bone can support.
Unlike a typical machine — whether you’re thinking of a dishwasher, or a CNC milling machine in a factory — a robot is in principle designed to inhabit a wilder, less scripted environment. This naturally implies more generalized capabilities and higher autonomy. Since you can’t predict or control all the environmental conditions the robot might encounter, you have to design it to be more general purpose and autonomous than an ordinary machine.
So high autonomy and general capabilities in unscripted and open (but not necessarily natural) environments are another way to get at the essence of robots.
This is my “better reason” for biomorphic (including anthropomorphic) design. Biology supplies our main class of reference designs that work in wild, unscripted environments with a lot of unpredictability and ambiguity, and enforce minimum levels of autonomy. We could do worse than to start off where biology has landed after millennia.
We actually have very few mechanisms in the history of mechanical engineering that can compete with biological evolved ones in terms of their ability to support general and autonomous behaviors.
One of the few is, you guessed it, the wheel.
Though true wheels are not known in biology (except for some dubious molecular mechanisms), and are not in principle impossible to evolve (a plot device in Philip Pullman's His Dark Materials trilogy), it is fair to say that wheels are definitely artificial.
That does not mean they don't harmonize well with biology. A good example of wheels and biological design elements coming together is a Mars rover. Though the mobility of currently operating rovers is based on wheels rather than limbs, they use a special kind of chassis called a rocker-bogie that works more like a hip joint than a car suspension[3] and is surprisingly versatile and limbed-chassis-like in its capabilities.
Rovers also illustrate autonomy constraints well. Though presently operating rovers are micromanaged to death, to the point that they're barely robots, the hard constraint of the speed-of-light roundtrip time to Mars imposes a floor on the level of autonomy you need. The older Curiosity rover moved extremely slowly in part to allow for round-trip human-in-the-loop remote control. The newer Perseverance rover moves significantly faster, and as a result has more sophisticated autonomy. Missions that explore more distant parts of the solar system are necessarily more autonomous, because the roundtrip time keeps going up.
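That floor is easy to compute. Light-time to Mars varies with orbital geometry; using the commonly cited approximate range of 55 to 400 million km:

```python
C_KM_S = 299_792.458  # speed of light, km/s

# Earth-Mars distance range (commonly cited approximate figures)
for label, dist_km in [("closest approach", 55e6), ("near farthest", 400e6)]:
    rtt_minutes = 2 * dist_km / C_KM_S / 60
    print(f"{label}: ~{rtt_minutes:.0f} minutes round trip")

# closest approach: ~6 minutes round trip
# near farthest:    ~44 minutes round trip
```

Anything that has to happen on a timescale shorter than six to forty-four minutes simply cannot be decided from Earth; that entire band of behavior must be handled onboard.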
Embodiment and AI
I want to close with a brief note on a point I won't argue in much detail at present: robotics is harder than AI because you're aiming at general intelligence contained in a specific body that must live in an open world. Computing has universal Turing machines, but mechanical engineering has no such thing as a "universal" mechanism[4] that can do everything. When designing a robot, you have to commit to a specific physical form factor, and to a broad, open class of environments.
Robotics is not just slightly harder than AI, and definitely not a matter of a few afterthought physical design elements tacked on to a disembodied AI. It is like a couple of orders of magnitude harder. So much harder in fact that focusing on the disembodied aspects of intelligence is almost like working on the trivial bits. You could say AI is the spherical cow problem of robotics.
Robotics proper often gets uninterestingly sucked into either AI or automation because people underestimate the significance of embodiment. I think it's actually richer and more interesting than either. Robotics is open-world, situated, specifically embodied general intelligence.
Ordinary AI has a disembodiment degeneracy, automation has a designed-environment (or closed-world) degeneracy.
In AI you enjoy the advantages and simplifications of not having a body. In automation you enjoy the advantages and simplifications of being able to design and close off the environment to overcome the limitations of the machine.
A superficial view would conclude that being constrained to a specific "body" makes intelligence weaker. Actually it makes it stronger. It takes a smarter AI to live in a particular body than in no-body, while remaining a general intelligence.
Incarnations are smarter than gods. Phenomena eat noumena for lunch.
This isn’t about atoms versus bits or even about “messy” physical phenomena like friction, stiff wires, leaking lubricants, and so on, though those do need handling in ways today’s AI is wildly incapable of doing. That stuff will gradually get solved.
It isn’t even about the difficulty of hardware engineering over software, which is something of a fake distinction to begin with. Yes, there are important hardware problems in say hydraulics and battery management and so on, but that’s not what’s hard about robotics. That’s merely the schleppy part. And you’ll probably mostly solve them with code anyway.
The hard part of robotics is the simple fact of embodiment itself. Being in a body rather than being a disembodied intelligence in the cloud means you are in a closed-loop relation with the open world through sensors and actuators, and have to fundamentally live in the world of behaving rather than knowing. And you have to do so within the limitations of a specific body, while living with the consequences that body creates for itself through its actions.
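Here is a toy rendering of that point: an agent whose every action changes state (position, battery) that it must then live with. All the numbers are made up; the structure of the loop is what matters.

```python
# A disembodied planner would just "know" the path to the goal.
# An embodied agent must also pay the bodily costs of getting there,
# and manage the consequences (a draining battery) mid-behavior.

def embodied_episode(goal: int = 10) -> None:
    position, battery = 0, 6.0
    while position < goal:
        if battery <= 1.0:
            # consequence of its own earlier actions: stop and recharge
            print(f"at {position}, recharging")
            battery = 6.0
            continue
        position += 1      # act in the world...
        battery -= 1.0     # ...and pay the cost of acting
    print(f"reached {position}, battery left: {battery}")

embodied_episode()
```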
Being a general intelligence is easy, it’s being a general intelligence in a specific body that’s hard.
The thinking is still done by computers, and the software is still going to take 10x-100x as much time to design as the hardware (a heuristic commonly used in estimating effort for robotics projects), but what makes it a much harder kind of software to write is that it is software that must live in a particular body. The generalizable bits (such as are accommodated in things like ROS) are fairly limited.
I’ll have more to say about robotics in the future, especially as my own experiments progress, but for now I’ll leave you with this thought.
For a variety of reasons, we're on the cusp of a golden age of real, honest-to-Asimovness robotics, so it's good to think hard about what a robot actually is. And if you go around thinking it's just a fancy dishwasher, a thermostat, a kind of machine tool, or worst of all, a jumped-up mannequin-sex-doll that might "learn to love," you're going to go wrong navigating this robotic new world.
[1] Though of course there aren't many robots in the Foundation series, except for one really important one.
[2] Futurama is a show that imagines a robotic future in surprisingly interesting ways despite being mostly satirical. The robots are not sad excuses for human psychological projections. They are often mechanically interesting, behave in ways shaped by their mechanical interestingness, and have both person-like and object-like traits. Bender himself has a design that sustains all sorts of gags that are interesting from a robotics point of view. For example, he is self-re-assembling when torn apart. He is gyroscopically stabilized. He is a popcorn machine. His hands work as a variety of end-effectors.
[3] I can say this with some authority because I'm building a rocker-bogie chassis for my model rover. The hip-like aspect is due to a differential bar that connects the left and right sides of the chassis, which shifts weight around much the way you shift your weight as you walk. The three wheels on each side are also connected in the eponymous rocker-bogie mechanism that allows them to rise and fall relatively independently, without need for a spring-based suspension. The result is a surprisingly organic-looking six-wheeled "gait" that is eerily close to six-legged crawling. Mars rovers kinda clamber around on wheels rather than roll around.
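The differential-bar constraint is simple enough to state in a couple of lines: in the idealized geometry, the body's pitch is held at the average of the two rocker angles, so a bump under one side tilts the body only half as much. A sketch, with angles in degrees and the geometry idealized:

```python
def body_pitch(left_rocker_deg: float, right_rocker_deg: float) -> float:
    """Body pitch enforced by an idealized differential bar:
    the average of the left and right rocker angles."""
    return (left_rocker_deg + right_rocker_deg) / 2.0

# Left wheels climbing a 10-degree obstacle, right side on flat ground:
print(body_pitch(10.0, 0.0))  # -> 5.0: the body tilts only half as much
```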
[4] The only thing that comes close is the various proposals for self-replicating robots capable of evolving like organisms. These are "universal constructors," which are to physical construction roughly what universal Turing machines are to computation. But nobody has yet managed to actually build a usable, general universal-constructor scheme.