The Economist: The Sam Altman drama points to a deeper split in the tech world

There is little doubting the dedication of Sam Altman to OpenAI, the firm at the forefront of an artificial-intelligence (AI) revolution. As co-founder and boss he appeared to work as tirelessly for its success as at a previous startup, where his single-mindedness led to a bout of scurvy, a disease more commonly associated with mariners of a bygone era who remained too long at sea without access to fresh food. So his sudden sacking on November 17th was a shock. The reasons why the firm’s board lost confidence in Mr Altman are unclear. Rumours point to disquiet about his side-projects, and fears that he was moving too quickly to expand OpenAI’s commercial offerings without considering the safety implications, in a firm that has also pledged to develop the tech for the “maximal benefit of humanity”.

The company’s investors and some of its employees are now seeking Mr Altman’s reinstatement. Whether they succeed or not, it is clear that the events at OpenAI are the most dramatic manifestation yet of a wider divide in Silicon Valley. On one side are the “doomers”, who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulations. Opposing them are “boomers”, who play down fears of an AI apocalypse and stress its potential to turbocharge progress. The camp that proves more influential could either encourage or stymie tighter regulations, which could in turn determine who will profit most from AI in the future.

OpenAI’s corporate structure straddles the divide. Founded as a non-profit in 2015, the firm carved out a for-profit subsidiary three years later to finance its need for expensive computing capacity and brainpower in order to propel the technology forward. Satisfying the competing aims of doomers and boomers was always going to be difficult.

The split in part reflects philosophical differences. Many in the doomer camp are influenced by “effective altruism”, a movement that is concerned by the possibility of AI wiping out all of humanity. The worriers include Dario Amodei, who left OpenAI to start up Anthropic, another model-maker. Other big tech firms, including Microsoft and Amazon, are also among those worried about AI safety.

Boomers espouse a worldview called “effective accelerationism” which counters that not only should the development of AI be allowed to proceed unhindered, it should be speeded up. Leading the charge is Marc Andreessen, co-founder of Andreessen Horowitz, a venture-capital firm. Other AI boffins appear to sympathise with the cause. Meta’s Yann LeCun and Andrew Ng and a slew of startups including Hugging Face and Mistral AI have argued for less restrictive regulation.

Mr Altman seemed to have sympathy with both groups, publicly calling for “guardrails” to make AI safe while simultaneously pushing OpenAI to develop more powerful models and launching new tools, such as an app store for users to build their own chatbots. Its largest investor, Microsoft, which has pumped over $10bn into OpenAI for a 49% stake without receiving any board seats in the parent company, is said to be unhappy, having found out about the sacking only minutes before Mr Altman did. If he does not return, it seems likely that OpenAI will side more firmly with the doomers.

Yet there appears to be more going on than abstract philosophy. As it happens, the two groups are also split along more commercial lines. Doomers are early movers in the AI race, have deeper pockets and espouse proprietary models. Boomers, on the other hand, are more likely to be firms that are catching up, are smaller and prefer open-source software.

Start with the early winners. OpenAI’s ChatGPT added 100m users in just two months after its launch, closely trailed by Anthropic, founded by defectors from OpenAI and now valued at $25bn. Researchers at Google wrote the original paper on large language models, software that is trained on vast quantities of data and which underpins chatbots including ChatGPT. The firm has been churning out bigger and smarter models, as well as a chatbot called Bard.

Microsoft’s lead, meanwhile, is largely built on its big bet on OpenAI. Amazon plans to invest up to $4bn in Anthropic. But in tech, moving first does not always guarantee success. In a market where both technology and demand are advancing rapidly, new entrants have ample opportunities to disrupt incumbents.

This may give added force to the doomers’ push for stricter rules. In testimony to America’s Congress in May, Mr Altman expressed fears that the industry could “cause significant harm to the world” and urged policymakers to enact specific regulations for AI. In the same month a group of 350 AI scientists and tech executives, including from OpenAI, Anthropic and Google, signed a one-line statement warning of a “risk of extinction” posed by AI on a par with nuclear war and pandemics. Despite the terrifying prospects, none of the companies that backed the statement paused their own work on building more potent AI models.

Politicians are scrambling to show that they take the risks seriously. In July President Joe Biden’s administration nudged seven leading model-makers, including Microsoft, OpenAI, Meta and Google, to make “voluntary commitments”, to have their AI products inspected by experts before releasing them to the public. On November 1st the British government got a similar group to sign another non-binding agreement that allowed regulators to test their AI products for trustworthiness and harmful capabilities, such as endangering national security. Days beforehand Mr Biden issued an executive order with far more bite. It compels any AI company that is building models above a certain size—defined by the computing power needed by the software—to notify the government and share its safety-testing results.

image: The Economist

Another fault line between the two groups is the future of open-source AI. LLMs have been either proprietary, like the ones from OpenAI, Anthropic and Google, or open-source. The release in February of Llama, a model created by Meta, spurred activity in open-source AI (see chart). Supporters argue that open-source models are safer because they are open to scrutiny. Detractors worry that making these powerful AI models public will allow bad actors to use them for malicious purposes.

But the row over open source may also reflect commercial motives. Venture capitalists, for instance, are big fans of it, perhaps because they spy a way for the startups they back to catch up to the frontier, or gain free access to models. Incumbents may fear the competitive threat. A memo written by insiders at Google that was leaked in May admits that open-source models are achieving results on some tasks comparable to their proprietary cousins and cost far less to build. The memo concludes that neither Google nor OpenAI has any defensive “moat” against open-source competitors.

So far regulators seem to have been receptive to the doomers’ argument. Mr Biden’s executive order could put the brakes on open-source AI. The order’s broad definition of “dual-use” models, which can have both military and civilian purposes, imposes complex reporting requirements on the makers of such models, which may in time capture open-source models too. The extent to which these rules can be enforced today is unclear. But they could gain teeth over time, say, if new laws are passed.

Not every big tech firm falls neatly on either side of the divide. The decision by Meta to open-source its AI models has made it an unexpected champion of startups by giving them access to a powerful model on which to build innovative products. Meta is betting that the surge in innovation prompted by open-source tools will eventually help it by generating newer forms of content that keep its users hooked and its advertisers happy. Apple is another outlier. The world’s largest tech firm is notably silent about AI. At the launch of a new iPhone in September the company paraded numerous AI-driven features without mentioning the term. When prodded, its executives lean towards extolling “machine learning”, another term for AI.

That looks smart. The meltdown at OpenAI shows just how damaging the culture wars over AI can be. But it is these wars that will shape how the technology progresses, how it is regulated—and who comes away with the spoils.
