The End of the Scaling Law Era: Ilya Sutskever's Manifesto for AI's Second Half

When the man who defined the rules declares them obsolete

"Scaling Law is over."

Coming from anyone else, this might sound like clickbait. But these words came from Ilya Sutskever—the very person who defined this law. In his first public interview after a long silence, the former Chief Scientist of OpenAI has drawn the curtain on an era.

◆ ◆ ◆

The Schism Between Brilliance and Stupidity

Ilya first points out the most absurd paradox in current AI: it can win gold medals at the International Mathematical Olympiad, yet behaves like a fool when fixing a simple bug—endlessly oscillating between two errors.

His verdict is unequivocal: the 2020s were the Age of Scaling, but that age has now ended.

The weight of this statement is comparable to Newton suddenly announcing that gravity might not work anymore. It means the golden era of achieving emergent intelligence simply by stacking more GPUs and data has come to an end.

◆ ◆ ◆

The Test-Taking Trap

Why does AI suffer from "high scores, low ability"?

Ilya believes we've pushed models into the dead end of test-prep education. Pre-training lets models read extensively and develop broad, generalist perspectives; but reinforcement learning, in pursuit of high scores on specific tasks, forces them to optimize for single metrics.

He uses a striking phrase: Current reinforcement learning is essentially undoing the generalist capabilities that pre-training built.

It's like a brilliantly gifted teenager being forced to grind through ten thousand hours of competition problems—scores go up, but that spark of holistic understanding gets ground away.
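The "undoing" claim can be illustrated with a toy experiment (entirely my own construction, not something from the interview): start from a near-uniform "pretrained" policy over five answer styles, then run REINFORCE-style updates against a benchmark that rewards exactly one style. The policy's entropy, a rough proxy for generality, collapses toward a single mode.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical "pretrained" policy: near-uniform preferences over
# five answer styles (the numbers are arbitrary illustrations).
logits = [0.1, 0.0, 0.2, 0.0, 0.1]
# The benchmark rewards exactly one style; all others score zero.
reward = [0.0, 0.0, 1.0, 0.0, 0.0]

random.seed(0)
before = entropy(softmax(logits))

# REINFORCE-style updates against the single metric.
for _ in range(2000):
    probs = softmax(logits)
    a = random.choices(range(5), weights=probs)[0]
    for i in range(5):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += 0.1 * reward[a] * grad

after = entropy(softmax(logits))
print(f"entropy before RL: {before:.3f}, after: {after:.3f}")
```

The entropy drops from near its maximum (ln 5 ≈ 1.609) to a small value: the policy has become excellent at the one rewarded behavior and has stopped expressing everything else it "knew."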

◆ ◆ ◆

Value Functions: The Key to Recovering Intuition

How do we recover this spark? Ilya offers the most technical and most counterintuitive insight of the entire interview: value functions.

Current AI learns from outcomes; it only knows right from wrong after completing a task. But humans don't work this way.

Think about how it feels to play Go or work through a math problem: you don't need to wait until the final step. When a move somewhere in the middle goes wrong, your stomach drops: this doesn't feel right.

Ilya cites a neuroscience example: a person who loses emotional capacity retains full intellectual function, yet agonizes for hours over which socks to wear. Pure logic cannot tell him whether red or blue socks are better; only emotion enables instant decisions. Without this intuition, the brain gets trapped in endless loops among countless equivalent options.

This intuition—the ability to course-correct in real-time without waiting for final results—is the most efficient learning algorithm. Future AI must install this internal compass to evolve from test-takers into true thinkers.
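The "internal compass" Ilya describes corresponds to what reinforcement learning calls a value function. A minimal sketch of the idea (a toy random walk of my own, assuming nothing beyond textbook TD(0)): the value function is updated from the very next state, so a bad move registers immediately, without waiting for the episode's final outcome.

```python
import random

# Toy chain: states 0..6; episodes start at 3 and random-walk
# until reaching 0 (loss, value 0) or 6 (win, value 1).
N_STATES = 7
ALPHA = 0.05

random.seed(0)
V = [0.5] * N_STATES  # value estimates for each state
V[0], V[6] = 0.0, 1.0  # terminal values are fixed

# TD(0): nudge each state's value toward the value of the very
# next state -- feedback arrives mid-trajectory, not at the end.
for _ in range(10000):
    s = 3
    while s not in (0, 6):
        s_next = s + random.choice((-1, 1))
        V[s] += ALPHA * (V[s_next] - V[s])
        s = s_next

# The learned values approximate the true win probabilities s/6,
# so one step in the wrong direction immediately "feels" worse:
print([round(v, 2) for v in V])
print("felt value at 3:", round(V[3], 2), "-> after a bad step:", round(V[2], 2))
```

The point of the sketch: after training, stepping from state 3 to state 2 visibly lowers the estimated value right away — the learned equivalent of the mid-game "this doesn't feel right" that the interview attributes to human intuition.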

◆ ◆ ◆

Superintelligence: Not a God, But a Super Intern

If we solve this problem, what will future superintelligence look like?

Ilya revises our imagination of AGI: it won't be an omniscient god upon arrival, but rather a super intern.

At the moment of its release, it might resemble a supremely intelligent 15-year-old. Drop him into a company and tell him to learn litigation, and he'll rapidly master new skills just like a human—through observation, interaction, and trial and error. His on-the-job experience is his learning process.

This leads to Ilya's ultimate prediction for the future business landscape: specialization.

No single model will monopolize everything. Because learning has costs, different AIs will build deep moats in different domains. The ecosystem will be as rich and diverse as nature itself, rather than dominated by a single beast.

◆ ◆ ◆

Empathy: The Most Efficient Safety Mechanism

How do we prevent superintelligent AI from destroying humanity?

Ilya doesn't talk about cold, hard rules. He talks about love.

This sounds abstract, but he provides an extremely rigorous justification—computational efficiency.

Empathy isn't a burden; it's the most efficient shortcut for intelligent beings to understand the world. To understand someone else's pain, you simply invoke the same code you use to understand your own pain and simulate it. This is more resource-efficient than cold calculation from scratch.

Therefore, Ilya believes we should lean into this mechanism and encode care for sentient life into AI's foundational layer. A superintelligent AI pursuing maximum efficiency will most likely be an AI filled with empathy.

◆ ◆ ◆

The Pendulum Swings Back: The Game of Geniuses Begins Again

Ilya's public statement marks the end of the low-hanging fruit era.

If the 2010s were the research era of a hundred flowers blooming, and the 2020s were the scaling era of brute-force miracles, then now the pendulum has swung back.

Over the next 5 to 10 years, we will return to a golden age of research. This is no longer a game of capital, but a game of geniuses. We need to study the brain anew, study evolution, study those elegant and simple algorithms.

"When experimental data doesn't match your expectations, if you have firm conviction in first principles, you'll believe it's just a bug in the code—not an error in the theory."

This sense of conviction is the key that opens the door to the future.
