[Photo: a hand-painted wooden model airplane parked on grass]

We use models all the time to help make sense of the world. You might not understand metallurgy, suspension geometry, combustion chemistry, impedance matching, lubrication, seating design, production line management, and the million other disciplines that went into designing and building your car, but you have a mental model that allows you to operate it nonetheless.

You might not know another person in molecular detail, but you can form a mental model of her, filed under a name (“Janet”), with quick or detailed notes on her personality, her relationship to you and others in your circle, her family and social history, and other characteristics that let you communicate with her.

Unless you’re me, when you put the kettle on for tea you don’t think of subatomic particles responding to applied heat, driving molecular motion that appears grossly as convection and warms the water. And even if you do, you are still just modeling the subatomic particles and the phenomenon of “heat.”

Science uses models all the time. Good scientific models have predictive value: you can manipulate a model of the solar system to assess the effect of a ninth planet or a passing comet without having to run the experiment on the actual solar system.

Models can be useful: if the big, ugly kid down the street beats you up and steals your lunch money every few days, a mental model that suggests you avoid big, ugly kids down the street might serve you well. If, however, you extend that model beyond its useful context and start avoiding all big, ugly kids, you might miss out on some valuable relationships.

We also have a tendency to forget that we’re working with models, and fail to distinguish the model from the reality. Have you ever heard arguments about whether light is a wave or a particle?

IT IS NEITHER.

Light is what it is. One model for understanding light treats it as a wave. Another treats it as a particle. In different situations, one model or the other may have superior predictive power. But the models are not the reality. Light is what it is, and it is not one or the other just because we’ve used particles or waves to model it.

Large language models (LLMs) are all the rage, but they are models of human language. They are deliberately designed to have predictive value. But their predictions are often wrong. Are they useful? Sometimes. Will they only get better? Hard to say… computing power will continue to increase, but training data will get worse and worse as the web fills up with the models’ own output. I doubt they’ll ever live up to the hype.

—2p
