OpenAI Makes a Major Announcement: A New Model Is in Training!
Source: 逆向獅

Early this morning, OpenAI posted two major announcements on Twitter.

The first announcement: a new model is now in training; the specific model name was not disclosed. (GPT-5?)

The original text: OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI


The second announcement: OpenAI has formed a Safety and Security Committee.

The original text follows:

                 Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.


               OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.


               A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.


               OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.


               Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials, Rob Joyce, who advises OpenAI on security, and John Carlin.



The committee's membership is as follows:

Committee leadership: Bret Taylor (Chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO).

Committee members: Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist), with outside advisors including Rob Joyce and John Carlin.