Revealing OpenAI's plan to create AGI by 2027

In this document I will be revealing information I have gathered regarding OpenAI's (delayed) plans to create human-level AGI by 2027. Not all of it will be easily verifiable, but hopefully there's enough evidence to convince you.

Summary: OpenAI started training a 125 trillion parameter multimodal model in August of 2022. The first stage was Arrakis, also called Q*. The model finished training in December of 2023, but the launch was canceled due to high inference cost. This is the original GPT-5, which was planned for release in 2025. Gobi (GPT-4.5) has been renamed to GPT-5 because the original GPT-5 has been canceled. The next stage of Q*, originally GPT-6 but since renamed to GPT-7 (originally for release in 2026), has been put on hold because of the recent lawsuit by Elon Musk. Q* 2025 (GPT-8) was planned to be released in 2027, achieving full AGI.

Q* 2023 = 48 IQ
Q* 2024 = 96 IQ (delayed)
Q* 2025 = 145 IQ (delayed)

Elon Musk caused the delay because of his lawsuit. This is why I'm revealing the information now, because no further harm can be done.

I've seen many definitions of AGI – artificial general intelligence – but I will define AGI simply as an artificial intelligence that can do any intellectual task a smart human can. This is how most people define the term now.

2020 was the first time I was shocked by an AI system – that was GPT-3. GPT-3.5, an upgraded version of GPT-3, is the model behind ChatGPT. When ChatGPT was released, I felt as though the wider world was finally catching up to something I had been interacting with two years prior. I used GPT-3 extensively in 2020 and was shocked by its ability to reason.

GPT-3, and its half-step successor GPT-3.5 (which powered the now-famous ChatGPT, before it was upgraded to GPT-4 in March 2023), were a massive step towards AGI in a way that earlier models weren't. The thing to note is that earlier language models like GPT-2 (and basically all chatbots since ELIZA) had no real ability to respond coherently at all. So why was GPT-3 such a massive leap?

Parameter Count

"Deep learning" is a concept that essentially goes back to the beginning of AI research in the 1950s. The first neural network was created in the 50s, and modern neural networks are just "deeper", meaning they contain more layers – they're much, much bigger and trained on lots more data. Most of the major techniques used in AI today are rooted in basic 1950s research, combined with a few minor engineering solutions like backpropagation and transformer models. The overall point is that AI research hasn't fundamentally changed in 70 years. So there are only two real reasons for the recent explosion of AI capabilities: size and data.

A growing number of people in the field are beginning to believe we've had the technical details of AGI solved for many decades, but merely didn't have enough computing power and data to build it until the 21st century. Obviously, 21st century computers are vastly more
