AI Can’t Do Science…And Neither Can Other Humans

Seen on Twitter:

I don’t know the context here, so I can’t speak to what Prof. Cronin meant. But it got me thinking.

Suppose you, like Prof. Cronin, were to insist that AI “cannot in principle” do science, because AI “is not autonomous” and “does not come up with its own problems to solve”. What might you mean?

You might just be saying that AI is bad at coming up with new problems to solve. That’s probably fair, at least at the moment. People have experimented with creating simple “AI researchers” that “study” computer programs, coming up with hypotheses about the programs’ performance and testing them. But it’s a long road from that to meeting the much higher standards human scientists have to satisfy.

You probably don’t mean that, though. If you did, you wouldn’t have said “in principle”. You mean something stronger.

More likely, you might mean that AI cannot come up with its own problems because AI is a tool. People come up with problems, and use AI to help solve them. From this perspective, not only is AI “not autonomous”, it cannot be autonomous.

On a practical level, this is clearly false. Yes, machine learning models, the core technology in current AI, are set up to answer questions. A user asks something, and receives the model’s prediction of the answer. That’s a tool, but for more flexible models like GPT it’s trivial to turn one into something autonomous. Just add another program: a loop that asks the model what to do, does it, tells the model the result, and asks what to do next. Like taping a knife to a Roomba, it’s a very simple modification that makes your technology much more dangerous.
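For concreteness, here’s a minimal sketch of that kind of loop in Python. The functions `ask_model` and `run_action` are hypothetical placeholders, not any real API: one stands in for a call to some language model, the other for whatever actually carries out the suggested step.

```python
# A minimal sketch of the "loop around a model" idea above.
# ask_model and run_action are hypothetical placeholders: ask_model stands in
# for a call to some language-model API, run_action for whatever actually
# carries out the suggested step (run a search, execute code, etc.).

def ask_model(history: list[str]) -> str:
    """Placeholder: send the conversation so far to a model, return its next suggested action."""
    raise NotImplementedError("hook up a real model API here")

def run_action(action: str) -> str:
    """Placeholder: perform the suggested action and describe the result."""
    raise NotImplementedError("hook up real tools here")

def autonomous_loop(goal: str, max_steps: int = 10) -> list[str]:
    """Ask the model what to do, do it, report the result, and repeat."""
    history = [f"Goal: {goal}. What should be done first?"]
    for _ in range(max_steps):
        action = ask_model(history)   # ask the model what to do
        result = run_action(action)   # do it
        history.append(f"Did: {action}. Result: {result}. What next?")  # tell it, ask again
    return history
```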

You might object, though, that this simple modification of GPT is not really autonomous. After all, a human created it. That human had some goal, some problem they wanted to solve, and the AI is just solving the problem for them.

That may be a fair description of current AI, but insisting it’s true in principle has some awkward implications. If you make a “physics AI”, just tell it to do “good physics”, and it starts coming up with hypotheses you’d never thought of, is it really fair to say it’s just solving your problem?

What if the AI, instead, was a child? Picture a physicist encouraging a child to follow in their footsteps, filling their life with physics ideas and rhapsodizing about the hard problems of the field at the dinner table. Suppose the child becomes a physicist in turn, and finds success later in life. Were they really autonomous? Were they really a scientist?

What if the child, instead, was a scientific field, and the parent was the general public? The public votes for representatives, the representatives vote to hire agencies, and the agencies promise scientists they’ll give them money if they like the problems they come up with. Who is autonomous here?

(And what happens if someone takes a hammer to that process? I’m…still not talking about this! No-politics-rule still in effect, sorry! I do have a post planned, but it will have to wait until I can deal with the fallout.)

At this point, you’d probably stop insisting. You’d drop that “in principle”, and stick with the claim I started with, that current AI can’t be a scientist.

But you have another option.

You can accept the whole chain of awkward implications, bite all the proverbial bullets. Yes, you insist, AI is not autonomous. Neither is the physicist’s child in your story, and neither are the world’s scientists paid by government grants. Each is a tool, used by the one, true autonomous scientist: you.

You are stuck in your skull, a blob of curious matter trained on decades of experience in the world and pre-trained with a couple billion years of evolution. For whatever reason, you want to know more, so you come up with problems to solve. You’re probably pretty vague about those problems. You might want to see more pretty pictures of space, or wrap your head around the nature of time. So you turn the world into your tool. You vote and pay taxes, so your government funds science. You subscribe to magazines and newspapers, so you hear about it. You press out against the world, and along with the pressure that already exists it adds up, and causes change. Biological intelligences and artificial intelligences scurry at your command. From their perspective, they are proposing their own problems, much more detailed and complex than the problems you want to solve. But from yours, they’re your limbs beyond limbs, sight beyond sight, asking the fundamental questions you want answered.

6 thoughts on “AI Can’t Do Science…And Neither Can Other Humans”

  1. JollyJoker

    This reminds me of the idea of throwing a scientific community into a black hole so what is behind the event horizon can be called science.

    More on point, I guess many cling to the idea that human brains have some magic that anything artificial can never replicate. They put that magic beyond the border of what they themselves understand, which looks absurd when you understand the part they think is magic.

  2. kimpton

    Replying to “AI, humans, autonomous science”.

    It would seem consciousness is requisite for autonomy, even if not sufficient.

    Penrose holds that consciousness is not computational and must arise from some other basis, proposing quantum intra- and extra-cellular effects.

    Though very controversial, if this has some validity, would quantum AI be likely to evolve this ability?

    1. 4gravitons (Post author)

      It’s not obvious to me that autonomy has much to do with consciousness, as opposed to with free will. Free will and consciousness seem distinct: you could imagine a philosophical zombie with free will, and people practicing various meditation techniques (Susan Blackmore, for example) have reported feeling like they don’t have free will.

      And I’m quite skeptical about Penrose’s arguments about quantum consciousness. There’s no evidence that I’m aware of that quantum computers can compute the uncomputable; they just have faster algorithms for some tasks. So I think even if Penrose were correct, you’d need something more exotic.

  3. Prinumber2357

    It’s possible that he is talking about amateurs using AI to write nonsensical papers that look fine to the untrained eye, because they have equations, graphs, predictions, references, etc. But when you read them, they are a bunch of disconnected nonsense, even if the text reads smoothly, like something a human could have written with great language and absolutely no knowledge of the world. (The equations typically are small modifications of equations that appear in other papers or are widely known, but are modified without understanding or anything resembling it.) You can look at many places on the internet, like Reddit’s r/physics or r/hypotheticalphysics, to find that kind of bulls**t.

    1. 4gravitons (Post author)

      I mean, that’s hardly the kind of thing you point to with “in principle”. “In principle” humans can also write that kind of deceptive paper; it just takes too much work to be worth it most of the time.

      1. Prinumber2357

        You are right. Also, I should have re-read your post before I commented. I misunderstood what I was reading, and in retrospect my comment shouldn’t be here. Sorry!
