61 Comments

Long story short: The current round of AI is, intellectually, on the level of parlor tricks. There's no intellectual, theoretical, or scientific there there. Really, there isn't. (This is easy for me to say: I have an all-but-thesis in AI from Roger Schank.)

AI, originally, was defined to be a branch of psychology in which computation (the mathematical abstraction) would be used to analyze and theorize about various human capabilities, and computers (real devices) would be used to verify that analysis and the resultant theories. But AI grew a monstrous other half whose goal was to make computers do interesting things, with no intellectual basis or theoretical/academic concerns whatsoever.

It is this other half of the field that has taken over.

Slightly longer story: It turns out that we humans really do think; we actually do logical reasoning about things in the real world quite well. But we (1980s AI types) couldn't figure out how to persuade computers to do this logical reasoning. Simple logic was simply inadequate, and it turns out that people really are seriously amazing. (Animals are far more limited than most people think: there is no animal other than humans that understands that sex causes pregnancy and that pregnancy leads to childbirth. Animals can "understand" that the tiny thing in front of them is cute and needs care, but realizing that it has a father is not within their intellectual abilities.)

So the field gave up on even trying, and reverted to statistical computation with no underlying models. The dream is that if you can find the statistical relationship, you don't need to understand what's going on. This leaves the whole game at essentially a parlor trick level: when the user thinks that the "object recognition" system has recognized a starfish, the neural net has actually recognized the common textures that occurred in all the starfish images.

So here's the "parlor trick" bit: the magician doesn't actually do the things he claims to do, and neither do the AI programs. (The 1970s and 1980s programs from Minsky and Schank and other labs actually tried to do the things they claimed to do.) And just as getting better and better at making silver dollars appear in strange places doesn't lead to an ability to actually make silver dollars, better and better statistics really isn't going to lead to understanding how intelligent beings build quite good and effective models of the world in their heads and reason using those models.

So while AI isn't incomprehensible (it's a collection of parlor tricks), it is an intellectual horror show.


At this point I am willing to give the AI aliens a chance.

Sep 22, 2022 · Liked by Noah Smith

Well, I started this article generally an optimist about artificial intelligence, and now you have me looking under the bed for monsters.


"Predictable results from an inexplicable mechanism" is actually the best definition of magic I have ever heard.


I'd argue the difference between sci-fi and magic is about whether it responds to human-level concerns like emotion, concentration, desire, etc. I mean, quantum mechanics, or even Newtonian mechanics if you reduce it to its most basic level, produces predictable results for no understandable reason. At the end of the day there isn't anything 'under' the postulates of QM or the Schrödinger equation. Sure, you can point to particular properties they have, but there isn't any reason that reality is well described by wave functions that obey these symmetries and not those, other than that it 'just does'.
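For concreteness, the time-dependent Schrödinger equation in question (in LaTeX notation):

i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{x}, t) = \hat{H} \, \Psi(\mathbf{x}, t)

The formalism tells you how the wave function Ψ evolves; it never tells you why this equation holds rather than some other.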

What's different is that the electron and its wave function don't respond to your human concerns or issues. You can't summon up a well of strength and will the plutonium core not to go critical... it does what it does based on factors that aren't related to our beliefs, wants, and desires, while spells are all about responding to your will, emotions, dedication, and study.

It's why sci-fi shows struggle more to deliver satisfying narrative arcs that don't frustrate us with plot holes. In a sci-fi show the dumbest dropout can point your fancy special weapon and vaporize your highly skilled veteran (in theory), and plot holes spring up when you need to explain why that didn't happen. OTOH, in fantasy it's just the nature of the universe that Elrond and Gandalf are hugely more powerful because of their age and roles.


Everyone who's ever tried art knows that hands are hard.

Sep 22, 2022 · Liked by Noah Smith

AI as incomprehensible is how Zachary Mason described it in his elegant neo-cyberpunk novel, Void Star.


I have to put up a link to this piece by Erik Hoel, "We need a Butlerian Jihad against AI: A proposal to ban AI research by treating it like human-animal hybrids", which seems to resonate with your Lovecraftian horror notion:

https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against


AI safety may be like drug safety.

You don't know if a new drug has fatal side effects. You just know how much testing it went through, and how many inspections the regulators did of the drug factory. Usually that's enough.

We won't know if our shiny AI black boxes have hidden Lovecraft modes. But we can know under what conditions the AI boxes were trained, and what tests they passed.

Maybe that will be enough?


"I quit this “Hitler or Lovecraft?” quiz halfway through after getting every question wrong!"

If it includes the words "gibbering," "noisome," or "putrescent," it's probably Lovecraft. :)


The worst case of autonomous military drones -- in particular, drones that have been equipped to repair themselves and build new swarm members to replace fallen comrades -- is depicted in "Horizon Zero Dawn".


I tried prompting the language model GPT-J with

"Artificial Intelligence: a poem" by H.P. Lovecraft

I have found the best results occur with "temperature" in the 0.9-0.99 range.

Starting at 0.9, I got a free verse talking about the prehistoric origins of art; promising, but then it never made its way into more recent eras.

A second try produced an essay about "the mind of the universe" and "the intelligence of the cosmos" and how they are different things.

For a third attempt, I set temperature higher, to 0.98, figuring that this would be better for poetic composition. The result:

https://pastebin.com/3cFeGdAS
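For anyone who wants to reproduce this: a minimal sketch using the Hugging Face transformers library and the public EleutherAI/gpt-j-6B checkpoint (assumed here; the runs above may have gone through a different interface):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Temperature T rescales the logits before the softmax: p_i ∝ exp(logit_i / T).
# Higher T flattens the distribution, making sampling more adventurous,
# which is why ~0.9-0.99 suits loose poetic composition.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = '"Artificial Intelligence: a poem" by H.P. Lovecraft'
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, temperature=0.98, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))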


The most dangerous person one will meet is the lunatic who doesn't rave.

Have enjoyed Ellison, Clarke, Lovecraft, and others; thanks for reminding me of them.


The way I've always explained "is this a task a modern AI is suited for" is this: what are the consequences of a 1-in-10 failure rate? That rate is supremely optimistic, but it feels nice and round. AI art generator fails? Laugh and post it to #Synthetic-Horrors. AI autonomous car fails? People die.

Needless to say, while AI will find its niche in "high failure tolerance" areas, the Singularity is not soon upon us. For my money, it'll take a paradigm shift away from purely neural-network-based AIs before we get anything acceptable for other cases.
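To make the heuristic concrete, a toy expected-cost calculation (the failure rate is the 1-in-10 figure above; the per-failure costs are purely illustrative placeholders, not real estimates):

failure_rate = 0.10  # the "1 in 10" rate from above

# Illustrative cost of one failure, in arbitrary dollar-ish units.
cost_per_failure = {
    "AI art generator": 0,            # a laugh and a #Synthetic-Horrors post
    "AI autonomous car": 10_000_000,  # rough order of magnitude for a fatal crash
}

for task, cost in cost_per_failure.items():
    print(f"{task}: expected cost per use = {failure_rate * cost:,.0f}")

The same failure rate is tolerable or catastrophic depending entirely on that second factor.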


re: why sociopaths creep us out.

It is similar to what you write about unpredictability but as a possibly interesting tangent...the philosopher David Livingstone Smith, in his book On Inhumanity, argues that humans naturally fall into essentialist thinking where things are expected to fit into neat categories. Think how casually we say things like "My cat is bad at being a cat, he doesn't even try to catch mice". We'd cringe if we said something remotely like that about a group of humans. Or how we see things as either pets OR food but never both. But essentialist thinking seems to come very naturally to us. And things that don't fit into essentialist categories make us squirm a bit. At least, that's his argument about why dehumanisers always talk about their targets in transgressive terms. They are both people AND cockroaches. Beasts AND people. After all, we don't get that mad about a deer that eats our shrubs. We don't torture the bird that poops on our car. And sociopaths likewise don't fit into neat categories. Bad people are supposed to be obviously bad. We're supposed to be able to spot them a mile away. They're supposed to be like a Disney villain, that obviously bad.

And while I'm on vaguely related tangents to your post: The Mountain in the Sea by Ray Nayler is another great, recent entry in the canon of extrahuman intelligence (in this case cephalopod intelligence) that a lot of people reckon is a surefire Hugo winner this year. You might enjoy it.


I feel like, to fill out your uncanny valley / psychopathy notion of how horror operates, you should look into and appreciate the concept of "abject art," which is defined by ordinary biological forms and barriers being violated, causing unease and disturbing expectations. AI doesn't get what a human sees; it outputs a correlation/approximation from what's been input. Since it doesn't know the referent image from the actual form (and humans should be very careful about assuming THEY understand the full context of any image!), nor has any procedure to comprehend the distinction, it merely outputs a reference of references, which accidentally violates our understanding of our bodies and thus disturbs us.

It's not just our bodies but also perspective, spacetime stuff. Ordinary things become warped.

Fun follow up question: why do our dreams do that to us? Coming from a human brain, shouldn't dreams not violate our sense of self and perspective as humans?
