Pause Giant AI Experiments: An Open Letter

2023-03-30 06:00:56

A few days ago, an open letter appeared on the futureoflife.org website calling for a pause on giant AI experiments, and anyone can add their signature. So far, many prominent figures, including Elon Musk and Turing Award winners, have already signed.

AI really is developing too fast. If money were no object and its learning were unconstrained, that pace would grow exponentially. Given that we are nowhere near fully prepared, whether we should keep pushing forward is a genuine question.

If development is to continue, there should at least be a reasonably complete and prudent system in place first. As everyone knows, new things grow wildly when they first emerge (P2P and cryptocurrency, for example), and the endings are often not good.

I have seen claims online that OpenAI's CEO also signed, but I could not find his name on the signatory list.

Of course, people's motives are hard to read. We are just ordinary people, and the big players think very differently from us. Viewed from another angle, might this be a form of commercial attack and competition?

The full text of the letter follows; the original link is at the end of this post for anyone who wants to check it out.

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Signatories (partial list):

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: a Modern Approach"

Elon Musk, CEO of SpaceX, Tesla & Twitter

Steve Wozniak, Co-founder, Apple

...

Original letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/