I ran into this Bluesky post, and while a lot of the argument resonated with me, I think the author is missing something important.
Shannon Vallor is a philosopher of technology at the University of Edinburgh. She spoke recently at a meeting honoring the 75th anniversary of the Turing Test. The core of her argument, recapped in the Bluesky post, is that artificial general intelligence, or AGI, represents an outdated scientific concept, like phlogiston. While some researchers in the past thought of humans as having a kind of “general” intelligence that a machine would need to replicate, scientists today break down intelligence into a range of capabilities that can be present in different ways. From that perspective, searching for artificial general intelligence doesn’t make much sense: instead, researchers should focus on the particular capabilities they’re interested in.
I have a lot of sympathy for Vallor’s argument, though perhaps from a different direction than what she had in mind. I don’t know enough about intelligence in a biological context to comment there. But from a computer science perspective, intelligence obviously is composed of distinct capabilities. Something that computes, like a human or a machine, can have different amounts of memory, different processing speeds, different input and output rates. In terms of ability to execute algorithms, it can be a Turing machine, or something less than a Turing machine. In terms of the actual algorithms it runs, they can have different scaling for large inputs, and different overhead for small inputs. In terms of learning, it can have better data, or priors that are closer to the ground truth.
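The scaling-versus-overhead distinction can be made concrete with a toy sketch of my own (not something from Vallor’s talk): insertion sort scales quadratically in the worst case but has almost no overhead, so by comparison count it beats merge sort on small or nearly-sorted inputs, while merge sort’s better scaling wins as inputs grow.

```python
def insertion_sort(items):
    """O(n^2) worst case, but very low overhead; near-linear on
    already-sorted input. Returns (sorted_list, comparison_count)."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comparisons

def merge_sort(items):
    """O(n log n) regardless of input, but with splitting and merging
    overhead the simpler algorithm avoids. Returns (sorted_list, comparison_count)."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, c_left = merge_sort(items[:mid])
    right, c_right = merge_sort(items[mid:])
    merged, comparisons = [], c_left + c_right
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons
```

With this implementation, an already-sorted list of 16 items costs insertion sort 15 comparisons and merge sort 32; on a reversed list of 200 items the ranking flips decisively. Neither algorithm dominates on every input.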
These days, none of these machine-and-algorithm capabilities is really what people interested in AGI are after. We already have them in currently-existing computers, after all. Instead, people who pursue AGI, and AI researchers more generally, are interested in heuristics. Humans do certain things without reliable algorithms: we do them faster, but unreliably. And while some human heuristics seem pretty general, it’s widely understood that in the heuristics world there is no free lunch. No heuristic is good for everything, and no heuristic is bad for everything.
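A textbook illustration of that tradeoff (my own example, not the author’s): greedy coin change is a fast heuristic that happens to give the fewest coins for US-style denominations, but on other coin systems it returns a worse answer than the slower exact method. Fast, unreliable, and good only for some inputs.

```python
def greedy_change(amount, coins):
    """Heuristic: repeatedly take the largest coin that fits.
    Fast, but not guaranteed to use the fewest coins."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

def optimal_change(amount, coins):
    """Exact dynamic programming: always finds the minimum number
    of coins, at the cost of examining every sub-amount."""
    best = [0] + [None] * amount  # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins
                   if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]
```

With coins {1, 3, 4} and amount 6, the greedy heuristic hands back 4+1+1 (three coins) while the optimum is 3+3 (two coins); with US denominations like {1, 5, 10, 25}, the two methods agree.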
So is “general intelligence” a mirage, like phlogiston?
If you think about it as a scientific goal, sure. But as a product, not so much.
Consider a word processor.
Obviously, from a scientific perspective, there are lots of capabilities that involve processing words. Some were things machines could do well before the advent of modern computers: consider typewriters, for instance. Others are still out of reach: after all, we still pay people to write. (I myself am such a person!)
But at the same time, if I say that a computer program is a word processor, you have a pretty good idea of what that means. There was a time when processing words involved an enormous amount of labor, work done by a large number of specialized people (mostly women). Watch a workplace documentary from the 1960s and compare it to a workplace today, and you’ll see that word processor technology has radically changed what tasks people do.
AGI may not make sense as a scientific goal, but it’s perfectly coherent in these terms.
Right now, a lot of tasks are done by what one could broadly call human intelligence. Some of these tasks have already fallen to technology; others will fall one by one. But it’s not unreasonable to think of a package deal, a technology that covers enough of such tasks that human intelligence stops being economically viable. That’s not because there will be some scientific general intelligence that the technology would then have, but because a decent number of intellectual tasks do seem to come bundled together. And you don’t need to cover 100% of human capabilities to radically change workplaces, any more than you needed to cover 100% of the work of a 1960s secretary with a word processor for modern secretarial work to have a dramatically different scope and role.
It’s worth keeping in mind what is and isn’t scientifically coherent, to be aware that you can’t just extrapolate the idea of general intelligence to any future machine. (For one, it constrains what “superintelligence” could look like.) But that doesn’t mean we should be complacent, and assume that AGI is impossible in principle. AGI, like a word processor, would be a machine that covers a set of tasks well enough that people use it instead of hiring people to do the work by hand. It’s just a broader set of tasks.
