I was doing some random web surfing the other day and ran across a couple of pages of text that, superficially, seemed meaningful and useful. As I read them, however, I noticed that my anxiety was increasing. I kept expecting them to start making real sense, but they seemed to be right on the edge of actual meaning without ever getting there.
It didn’t take me long to realize that I was reading AI slop. What surprised me is how familiar the experience felt.
Long, long ago, I was running the engineering department at a small hardware and software development shop. The owner brought a couple of people to me on the shop floor and said something along the lines of “These guys have some interesting ideas. See what you think.” I listened politely for a while, and it did sound interesting at first, but then as time went on I started to have the anxious feeling that I wasn’t getting it. I made the default assumption that I had just missed something important, or misunderstood a key bit, or maybe I just wasn’t smart enough to understand. But as I listened and listened I got more and more confused. Like Shrimp Jesus, above, it seemed as though what they were saying should make sense, but it just didn’t, no matter how hard I tried to fill in the gaps.
It turned out they were simply hucksters, trying to sell a product by making vague claims about its possible role in our business.
Later, studying psychiatry, I would learn about pareidolia, the “tendency for perception to impose a meaningful interpretation on a nebulous stimulus” (Wikipedia). I would interview quite a few patients who — due to brain trauma, thought disorders, or psychoactive drug use — would speak in a kind of word salad that would almost, but not quite, make sense. I came to realize that a lot of scams and hucksterism rely on exactly this kind of indistinct patter, which an eager listener can flesh out into a coherent narrative.
I played with ELIZA in the very early 1970s and thought that this extremely rudimentary artificial psychotherapist could be pretty convincing… for a few minutes. ELIZA used a tiny, tiny fraction of the computing resources that a modern large language model (LLM) uses, so it isn’t that surprising that LLMs can keep up the illusion for much longer.
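For anyone who never met ELIZA: it worked by keyword spotting and simple substitution rules, with no understanding behind them at all. Here is a minimal Python sketch of that style of pattern-and-reflection trick; the particular patterns and canned responses are my own illustrative inventions, not Weizenbaum’s actual DOCTOR script.

    import re
    import random

    # Pronoun reflections so "I am sad" comes back as "you are sad".
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my",
    }

    # A few keyword rules in the spirit of ELIZA (illustrative only).
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your family."]),
        (r"(.*)", ["Please go on.", "I see.", "How does that make you feel?"]),
    ]

    def reflect(fragment: str) -> str:
        """Swap first- and second-person words in the captured fragment."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(statement: str) -> str:
        """Return a canned response for the first rule whose pattern matches."""
        cleaned = statement.lower().strip(" .!?")
        for pattern, responses in RULES:
            match = re.match(pattern, cleaned)
            if match:
                groups = [reflect(g) for g in match.groups()]
                return random.choice(responses).format(*groups)
        return "Please go on."

    if __name__ == "__main__":
        print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
        print(respond("I am anxious"))        # e.g. "How long have you been anxious?"

That is essentially the whole trick: match a keyword, echo part of the input back, and let the human supply the meaning.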
Having spent decades in the tech startup universe, I can easily imagine entrepreneurial huckster types first seeing an early LLM and thinking “we could really fool people with this,” then drinking their own Kool-Aid and believing that these plausibility engines could truly be revolutionary. Or at least be sold as such.
—2p