The Economist: The Sam Altman drama points to a deeper split in the tech world
There is little doubting the dedication of Sam Altman to OpenAI, the firm at the forefront of an artificial-intelligence (AI) revolution. As co-founder and boss he appeared to work as tirelessly for its success as at a previous startup, where his single-mindedness led to a bout of scurvy, a disease more commonly associated with mariners of a bygone era who remained too long at sea without access to fresh food. So his sudden sacking on November 17th was a shock. The reasons why the firm’s board lost confidence in Mr Altman are unclear. Rumours point to disquiet about his side-projects, and fears that he was moving too quickly to expand OpenAI’s commercial offerings without considering the safety implications, in a firm that has also pledged to develop the tech for the “maximal benefit of humanity”.
The company’s investors and some of its employees are now seeking Mr Altman’s reinstatement. Whether or not they succeed, it is clear that the events at OpenAI are the most dramatic manifestation yet of a wider divide in Silicon Valley. On one side are the “doomers”, who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulations. Opposing them are “boomers”, who play down fears of an AI apocalypse and stress its potential to turbocharge progress. The camp that proves more influential could either encourage or stymie tighter regulations, which could in turn determine who will profit most from AI in the future.
OpenAI’s corporate structure straddles the divide. Founded as a non-profit in 2015, the firm carved out a for-profit subsidiary three years later to finance its need for expensive computing capacity and brainpower in order to propel the technology forward. Satisfying the competing aims of doomers and boomers was always going to be difficult.
The split in part reflects philosophical differences. Many in the doomer camp are influenced by “effective altruism”, a movement that is concerned by the possibility of AI wiping out all of humanity. The worriers include Dario Amodei, who left OpenAI to start up Anthropic, another model-maker. Other big tech firms, including Microsoft and Amazon, are also among those worried about AI safety.
Boomers espouse a worldview called “effective accelerationism” which counters that not only should the development of AI be allowed to proceed unhindered, it should be speeded up. Leading the charge is Marc Andreessen, co-founder of Andreessen Horowitz, a venture-capital firm. Other AI boffins appear to sympathise with the cause. Meta’s Yann LeCun and Andrew Ng and a slew of startups including Hugging Face and Mistral AI have argued for less restrictive regulation.
Mr Altman seemed to have sympathy with both groups, publicly calling for “guardrails” to make AI safe while simultaneously pushing OpenAI to develop more powerful models and launching new tools, such as an app store for users to build their own chatbots. Its largest investor, Microsoft, which has pumped over $10bn into OpenAI for a 49% stake without receiving any board seats in the parent company, is said to be unhappy, having found out about the sacking only minutes before Mr Altman did. If he does not return, it seems likely that OpenAI will side more firmly with the doomers.
Yet there appears to be more going on than abstract philosophy. As it happens, the two groups are also split along more commercial lines. Doomers are early movers in the AI race, have deeper pockets and espouse proprietary models. Boomers, on the other hand, are more likely to be firms that are catching up, are smaller and prefer open-source software.
Start with the early winners. OpenAI’s ChatGPT added 100m users in just two months after its launch, closely trailed by Anthropic, founded by defectors from OpenAI and now valued at $25bn. Researchers at Google wrote the original paper on large language models, software that is trained on vast quantities of data and which underpins chatbots including ChatGPT. The firm has been churning out bigger and smarter models, as well as a chatbot called Bard.
Microsoft’s lead, meanwhile, is largely built on its big bet on OpenAI. Amazon plans to invest up to $4bn in Anthropic. But in tech, moving first doesn’t always guarantee success. In a market where both technology and demand are advancing rapidly, new entrants have ample opportunities to disrupt incumbents.
This may give added force to the doomers’ push for stricter rules. In testimony to America’s Congress in May Mr Altman expressed fears that the industry could “cause significant harm to the world” and urged policymakers to enact specific regulations for AI. In the same month a group of 350 AI scientists and tech executives, including from OpenAI, Anthropic and Google, signed a one-line statement warning of a “risk of extinction” posed by AI on a par with nuclear war and pandemics. Despite the terrifying prospects, none of the companies that backed the statement paused their own work on building more potent AI models.
Politicians are scrambling to show that they take the risks seriously. In July President Joe Biden’s administration nudged seven leading model-makers, including Microsoft, OpenAI, Meta and Google, to make “voluntary commitments” to have their AI products inspected by experts before releasing them to the public. On November 1st the British government got a similar group to sign another non-binding agreement that allowed regulators to test their AI products for trustworthiness and harmful capabilities, such as endangering national security. Days beforehand Mr Biden issued an executive order with far more bite. It compels any AI company that is building models above a certain size—defined by the computing power needed by the software—to notify the government and share its safety-testing results.
Another fault line between the two groups is the future of open-source AI. LLMs have been either proprietary, like the ones from OpenAI, Anthropic and Google, or open-source. The release in February of Llama, a model created by Meta, spurred activity in open-source AI (see chart). Supporters argue that open-source models are safer because they are open to scrutiny. Detractors worry that making these powerful AI models public will allow bad actors to use them for malicious purposes.
But the row over open source may also reflect commercial motives. Venture capitalists, for instance, are big fans of it, perhaps because they spy a way for the startups they back to catch up to the frontier, or gain free access to models. Incumbents may fear the competitive threat. A memo written by insiders at Google that was leaked in May admits that open-source models are achieving results on some tasks comparable to their proprietary cousins and cost far less to build. The memo concludes that neither Google nor OpenAI has any defensive “moat” against open-source competitors.
So far regulators seem to have been receptive to the doomers’ argument. Mr Biden’s executive order could put the brakes on open-source AI. The order’s broad definition of “dual-use” models, which can have both military and civilian purposes, imposes complex reporting requirements on the makers of such models, which may in time capture open-source models too. The extent to which these rules can be enforced today is unclear. But they could gain teeth over time, say if new laws are passed.
Not every big tech firm falls neatly on either side of the divide. The decision by Meta to open-source its AI models has made it an unexpected champion of startups by giving them access to a powerful model on which to build innovative products. Meta is betting that the surge in innovation prompted by open-source tools will eventually help it by generating newer forms of content that keep its users hooked and its advertisers happy. Apple is another outlier. The world’s largest tech firm is notably silent about AI. At the launch of a new iPhone in September the company paraded numerous AI-driven features without mentioning the term. When prodded, its executives lean towards extolling “machine learning”, another term for AI.
That looks smart. The meltdown at OpenAI shows just how damaging the culture wars over AI can be. But it is these wars that will shape how the technology progresses, how it is regulated—and who comes away with the spoils.■