Kernighan’s Law states, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” People sometimes make a similar argument about philosophy of mind: “The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.”
Both points operate on a shared kind of logic. They picture understanding something as modeling it in your mind, with every detail clear. If you’ve already used all your mind’s power to design the code, you won’t have anything left over to model what happens when it goes wrong. And modeling your own mind is clearly nonsense: you would need an even larger mind to hold the model.
The trouble is, this isn’t really how understanding works. To understand something, you don’t need to hold a perfect model of it in your head. Instead, you translate it into something you can more easily work with. Like explanations, these translations can be different for different people.
To understand something, I need to know the algorithm behind it. I want to know how to calculate it, the pieces that go in and where they come from. I want to code it up, to test it out on odd cases and see how it behaves, to get a feel for what it can do.
Others need a more physical picture. They need to know where the particles are going, or how energy and momentum are conserved. They want entropy to be increased, action to be minimized, scales to make sense dimensionally.
Others in turn are more mathematical. They want to start with definitions and axioms. To understand something, they want to see it as an example of a broader class of thing, groups or algebras or categories, to fit it into a bigger picture.
Each of these is a kind of translation, turning something into code-speak or physics-speak or math-speak. These translations don’t require modeling every detail, but when done well they can still explain every detail.
So while yes, it is good practice not to write code that is too “smart” and too hard to debug, it’s not impossible to debug your smartest code. And while you can’t hold an entire mind inside of yours, you don’t actually need to do that to understand the brain. In both cases, all you need is a translation.
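As a toy illustration (my own example, nothing from Kernighan), here are two versions of the same small task in Python: a compressed, “clever” one-liner and a plainer translation that’s easier to step through when something looks wrong.

```python
def count_powers_of_two_clever(numbers):
    # "Clever" version: relies on the bit trick that n is a power of two
    # exactly when n > 0 and n & (n - 1) == 0.
    return sum(1 for n in numbers if n > 0 and not n & (n - 1))

def count_powers_of_two_plain(numbers):
    # Plainer translation: divide out factors of two and check whether 1 is left.
    count = 0
    for n in numbers:
        if n <= 0:
            continue
        while n % 2 == 0:
            n //= 2
        if n == 1:
            count += 1
    return count

# Same answer either way; the point is which one you'd rather debug.
assert count_powers_of_two_clever([1, 2, 3, 8, 0, -4]) == 3
assert count_powers_of_two_plain([1, 2, 3, 8, 0, -4]) == 3
```

The clever line is shorter, but the plain version reads like the kind of translation I’m talking about.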
If I were able to model my own mind, would that model be me?
Good question!
Is there any proof that to model your own mind you would need an even larger mind to hold the model?
Depends on what you mean by modeling. If you’re trying to represent every detail of your mind, then sure, by definition you’d need at least as much space as the mind itself. But as I’m trying to argue in my post, understanding something doesn’t have to mean representing every detail like that.
I think you’re making the assumption that the brain is the best compression of the knowledge and function of the mind.
I was neglecting that possibility, but yes, you’re quite correct that it could be possible to faithfully represent everything in the brain in a more compressed form. My point is that even if the representation is not actually a faithful one, you can probably still represent everything you care about.
How do you know that “everything you care about” can give you accurate understanding?
This reminds me of the error often made in statistical analysis where a very large number of predictors is filtered down to a few, and then those few are used to make predictions. The error in those predictions is measured by “cross-validation” using only the few selected predictors, instead of doing the “cross-validation” on the full, very large set of predictors.
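Here’s a rough sketch of the kind of thing I mean, using scikit-learn and made-up noise data (a toy example, not from any real analysis): with pure noise, selecting predictors before cross-validating makes a useless model look predictive, while doing the selection inside each fold gives the honest answer of roughly chance.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))    # 1000 pure-noise predictors, 50 samples
y = rng.integers(0, 2, size=50)    # labels with no relation to X

# Wrong: filter the predictors down to a few using ALL the data,
# then cross-validate on just those few.
X_few = SelectKBest(f_classif, k=10).fit_transform(X, y)
score_wrong = cross_val_score(LogisticRegression(), X_few, y, cv=5).mean()

# Right: keep the filtering step inside the cross-validation,
# so each fold selects predictors from its own training data only.
pipeline = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
score_right = cross_val_score(pipeline, X, y, cv=5).mean()

print("selection outside CV:", round(score_wrong, 2))  # looks far better than chance
print("selection inside CV: ", round(score_right, 2))  # hovers around 0.5
```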