220 Comments

I think this article is pushing back against a strawman position ("LLMs are going to destroy the world") that basically no one in the AI Safety/Alignment community holds. What is true is that the recent surge in worry about bad AI outcomes was triggered by the unexpected achievements of LLMs. But in that story the LLMs are just a milestone on the way to more powerful AIs, and possibly a trigger for more investment in the field.

I'm sort of confused as to why you'd write so many words arguing against a position that few people are taking. LLMs are concerning to people worried about AI safety because they've illustrated a) how rapidly progress is being made in artificial intelligence, b) how race dynamics are pushing large actors to make risky moves in pursuit of rapidly expanding capabilities, and c) how difficult it is to control the behavior of an artificial agent. I don't think any serious AI safety thinker is worried about the current generation of AI tools.

So don't worry about AGI because we haven't invented AGI yet? Are we only supposed to worry about it after we've invented it?

LLMs can't end humanity, no. But no one is arguing that they can. The speed at which AI models (LLMs, Stable Diffusion, etc.) are improving is why people worry about what will happen when we do create an intelligence sophisticated enough to be a danger. To say that we're not near AGI is not a reason not to worry. It's just kicking the can down the road.

Mar 8, 2023 · edited Mar 8, 2023 · Liked by Noah Smith

Presenting a direct descendant of an LLM that can operate a robot, follow directions, and do a bunch of other things that chatbot LLMs can't do:

https://palm-e.github.io/

"The most prominent of these voices is Eliezer Yudkowsky, who has been saying disturbingly doomer-ish things lately"

*lately*??? Yudkowsky has been saying doomer-ish things for the past fifty billion years!

Well, OK, I binged him (and that doesn't mean what your filthy mind is thinking!!!) and he was born in 1979, so only the past 40 years or so.

But it's really good to finally read a non-insane take on AGI. We are NOWHERE CLOSE to AGI, and everything we know about NGI (humans) tells us that nothing being done now will get there. We humans don't even understand why essentially all vertebrates need to sleep. Will AGIs sleep? If not, what is the mechanism (that millions of years of evolution did not find) that eliminates the need for sleep? No one has any fucking clue!!! No one in AI even thinks about these things!

*sigh*

Mar 8, 2023 · Liked by Noah Smith

"First, like humans, sci-fi AIs are typically autonomous โ€” they just sit around thinking all the time, which allows them to decide when to take spontaneous action. LLMs, in contrast, only fire up their processors when a human tells them to say stuff. To make an LLM autonomous, youโ€™d have to leave it running all the time, which would be pretty expensive. But even if you did that, itโ€™s not clear what the LLM would do except just keep producing reams and reams of text."

I really like this argument. The counterargument is that autonomous AI systems will be more useful than AI you have to babysit. AI homework helper becomes AI tutor becomes AI teacher. AI medical chatbot becomes AI pharmacist and AI physician. AI coding assistant becomes AI research engineer and starts writing papers of its own, advancing the field of AI. This has already happened in finance, where algorithmic trading cannot be overseen by humans in real time, so we give the algorithms autonomy, with occasionally disastrous consequences (e.g., the 2010 Flash Crash).
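
To make the "leave it running" point concrete, the gap between a prompt-driven model and an "autonomous" one can be as thin as a wrapper loop. A minimal sketch, where llm_generate and execute are hypothetical stand-ins rather than any real API:

```python
import time

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to some LLM API (not a real library)."""
    return "placeholder next step"

def execute(action: str) -> str:
    """Hypothetical stand-in for acting on the model's output (send an email, place a trade, ...)."""
    return "placeholder result"

# A prompt-driven model becomes "autonomous" with nothing more than a loop.
goal = "monitor the news and flag anything relevant"
history = ""
for _ in range(3):  # a deployed agent would use `while True`
    action = llm_generate(f"Goal: {goal}\nHistory:{history}\nNext step:")
    result = execute(action)          # side effects happen here, with no human in the loop
    history += f"\n{action} -> {result}"
    time.sleep(1)                     # wait, then act again
```

Nothing in the loop requires new capabilities; it just removes the human from between generation and action.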

But in the short term, it seems like AI deployment will be bottlenecked by human oversight. Maybe this will continue for decades: humans know more about human values, and have different strengths and weaknesses that can complement AI systems. In this world, the Baumol effect goes crazy, as capital automates an increasing share of tasks yet labor remains a key bottleneck. There would be massive profits from more complete automation that takes humans out of the loop and allows growth to feed back into itself, driving more growth. Whether the labor bottleneck is strong enough to prevent a singularity seems like one of the most important questions in AI forecasting.
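
As a toy numerical illustration of that contrast (my own sketch with made-up parameters, not a reproduction of any of the models linked below): when a fixed stock of human labor stays essential, capital accumulation runs into diminishing returns, while letting output feed back into its own production with even mildly increasing returns produces explosive growth.

```python
def simulate(alpha: float, steps: int = 60, s: float = 0.3, L: float = 1.0) -> list[float]:
    """Toy economy: output Y = K**alpha * L**(1 - alpha), capital accumulates as K += s * Y.

    alpha < 1 means human labor remains an essential input (the Baumol bottleneck);
    alpha > 1 is a crude stand-in for automated output improving its own production.
    """
    K, path = 1.0, []
    for _ in range(steps):
        Y = K**alpha * L**(1 - alpha)
        K += s * Y
        path.append(Y)
    return path

bottlenecked = simulate(alpha=0.6)   # humans still do a big share of tasks
runaway = simulate(alpha=1.05)       # full automation with mild increasing returns

growth = lambda p: p[-1] / p[-2] - 1  # per-step growth rate at the end of the run
print(f"with a labor bottleneck: {growth(bottlenecked):.1%} per step")
print(f"with full automation:    {growth(runaway):.1%} per step")
```

Obviously nothing this crude settles the question; it just shows why the assumption about whether labor stays essential drives the takeoff-vs.-Baumol disagreement explored in the links below.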

Here's a macro model simulating how full automation results in a sudden singularity: https://takeoffspeeds.com/playground.html

On the other hand, here's how Baumol could prevent the singularity: https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf

Section 6.2 has a wonderful 4-page overview of the key theoretical models and outstanding questions: https://www.nber.org/system/files/working_papers/w29126/w29126.pdf

Also, a more CS, less Econ argument against fully autonomous AI: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case#A__Contra__superhuman_AI_systems_will_be__goal_directed__

And a very CS argument predicting more autonomy in AI: https://gwern.net/tool-ai

Appreciate your thoughtful consideration of many topics, hoping you find some of this stuff worth your time to think about more.

Mar 8, 2023 · Liked by Noah Smith

I think a lot of the rationalist community has really lost the capacity for self-reflection on these things, in a way that makes me wonder if I was mistaken to ever read them at all. Their argument basically boils down to 'AGI would have such serious potential consequences that any argument against it, no matter how implausible or impossible to evaluate, is valid.'

Imagine if you took this approach to life. Is there a sniper waiting to shoot me if I open my window blinds? It's not technically impossible! And the consequences are dire if it's true - I would die! Your arguments that no one is trying to kill me, and if they were they'd do it differently, and also that there's no evidence at all for it, pale in comparison to the risk of my total annihilation! You'd be immobilised. In some ways I risk my life every day, and humanity does things that could imperil it in the future, but that's just life in a world of unknowns.

The doomers need to draw some kind of reasonable line from where we are to a real AGI, the danger it poses, and how we're supposed to evaluate the plausibility of each step. Otherwise, it's just people indulging their favourite pastime of philosophising about AGI. Which is fine, have fun, but it doesn't matter to the rest of us.

To me, the whole subject of AI, so popular right now, suffers from an excessive interest in details that obscures the bottom line. The whole reason AI is being developed is that we want more power. But we can't handle the power we already have. It's the simplest thing, and all this expert posturing is getting in the way.

Imagine you're the parent of a teenager. Your teenager wants a car. But they keep crashing the moped you bought them. And so you would say, prove to me you can handle the moped, and then we'll talk about the car.

So kids, come up with credible solutions to nukes and climate change, and then we can talk about AI. Until then, forget it.

Mar 8, 2023 · Liked by Noah Smith

Good book on the ways nuclear weapons systems are prone to cyberattack: https://www.amazon.com/Hacking-Bomb-Threats-Nuclear-Weapons/dp/1626165645

Three mechanisms:

1. Direct detonation is probably not possible via cyberattack. Command-and-control arrangements are secret, but in all likelihood they include human operators of air-gapped systems. So the simplest story is likely bunk.

2. False attacks that simulate a nuclear first strike are likely possible. Israel's Operation Orchard took over Syria's air defense systems, feeding them false information to mask the ongoing attack. While it's never been done, it could be possible to hack into missile defense systems and fake an incoming attack in order to provoke a retaliatory nuclear strike. This is the better story for AGI doomers.

3. The spread of information via hacking would be a systemic risk factor in nuclear war. Better hacking could expose secrets such as missile locations, launch codes, and response plans, undermining the deterrence logic of MAD. This isn't nearly as direct a threat as the previous two, and is less directly caused by AI.

100% correct. Even worse for the doomers: even if AGI could accomplish everything on this list (except for the asteroids), none of these would end humanity. They could put a dent in civilization, but actually eliminating humanity is so hard that even if we made it our job, we couldn't do it. Our will to survive generally exceeds the will to destroy.

Kind of a genius post. Admiration if you're right, and if you're wrong...

Good take. My sense is that LLMs will be useful in three domains: 1) adding a conversational element to boring speech (customer support), think of it as the Windows for language; 2) productivity in software engineering; and 3) kindling for creative pursuits.

None of these seem to be close to ending the world.

This piece was disappointing in its ignorance of what Yudkowsky and others are actually worried about. If you don't understand their argument, maybe that makes them bad communicators, but it doesn't make you a good refuter, either. Sorry.

Mar 8, 2023 · edited Mar 8, 2023 · Liked by Noah Smith

Makes sense.

See also Pirate Wires/Solana on the same theme a couple of weeks ago.

Mar 8, 2023 · Liked by Noah Smith

I worry more that someone will release MAGA World, Fox-like chatbots to keep the ignorant and hateful part of our society in constant outrage, or worse. It's working with Fox - can you imagine if it's magnified?

Mar 8, 2023 · Liked by Noah Smith

How much would your post change if you read about Toolformer (https://arxiv.org/abs/2302.04761; the abstract alone is enough to give you an idea) and then imagined that some of those tools can generate and execute code? Of course you can argue that's not just an LLM, but I would say it's a small enough step away that it should have been talked about in this piece.
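
To make that concrete, here's a toy sketch of the tool-call pattern (my own illustration under assumed formats, not Toolformer's actual code): the model emits text containing marked-up calls, a dispatcher executes them, and the results are spliced back into the text.

```python
import re

# Toy tool registry; a real deployment might register a code-execution sandbox here,
# which is exactly the step that raises the stakes.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # illustrative only
    "Search": lambda query: f"[top result for '{query}']",             # placeholder
}

CALL = re.compile(r"\[(\w+)\((.*?)\)\]")  # matches markers like [Calculator(3*(5+2))]

def run_tool_calls(model_output: str) -> str:
    """Replace each [Tool(args)] marker in the model's output with that tool's result."""
    def dispatch(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        return tool(args) if tool else match.group(0)  # leave unknown tools untouched
    return CALL.sub(dispatch, model_output)

# Example of text a tool-using model might emit:
print(run_tool_calls("The answer is [Calculator(3*(5+2))], per [Search(GDP of France)]."))
# -> "The answer is 21, per [top result for 'GDP of France']."
```

The point of the wrapper is that once "execute the call" includes running generated code, the model's outputs stop being just text.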
