Every week, we try to feature diverse perspectives on the AI boom (or bubble). The division is increasingly clear – those calling it a bubble are old-school value investors who have seen many a hyped-up mania in history go bust, whilst the tech bros driving the boom believe that the tangible prospects of AI creating a world of abundance justify the trillions spent on datacentres. Here’s Tomas Pueyo presenting the case for the latter:
“Hyperscalers believe they might build God within the next few years. That’s one of the main reasons they’re spending billions on AI, soon trillions. They think it will take us just a handful of years to get to AGI—Artificial General Intelligence, the moment when an AI can do nearly all virtual human tasks better than nearly any human. They think it’s a straight shot from there to superintelligence—an AI that is so much more intelligent than humans that we can’t even fathom how it thinks. A God.
…If they’re right, you should conclude that we’re not in an AI bubble. If anything, we’re underinvested. You should double down on your investments, and more importantly, you should be making intense preparations, because your life and those of all your loved ones are about to be upended. If they’re wrong, it would be useful to know, for you could become a millionaire shorting their stock—or at least not lose your money when the bubble pops.”
The rest of the piece puts forward the case for why hyperscalers believe building ‘God’ is within humanity’s reach, and in the not-too-distant future.
The crux of the piece is how AI is helping AI get better, thereby accelerating the rate of improvement to exponential levels and making AGI a reality sooner rather than later. It pegs the definition of AGI to the job of an AI researcher, using a quote from Leopold Aschenbrenner: “The jobs of AI researchers and engineers at leading labs can be done fully virtually and don’t run into real-world bottlenecks [like robots]. And the job of an AI researcher is fairly straightforward, in the grand scheme of things: read ML literature and come up with new questions or ideas, implement experiments to test those ideas, interpret the results, and repeat.”
“The thing that’s special about AI researchers is not just that they seem highly automatable, but also that:
- AI researchers know how to automate tasks with AI really really well
- AI labs have an extremely high incentive to automate as much of that job as possible
Once you automate AI researchers, you can speed up AI research, which will make AI better much faster, accelerating our path to superintelligence, and automating many other disciplines along the way. This is why hyperscalers believe there’s a straight shot from AGI to superintelligence.”
It then tackles the question of how far we are from automating AI researchers, looking at the rate of improvement along two dimensions: first, the error rate of current LLMs, and second, the length of tasks AIs can complete over time. The article shows some startling data on the improvement on both fronts:
““Test loss” is a way to measure the mistakes that Large Language Models (LLMs) make. The lower, the better. What this is telling you is that predictions get linearly better as you add more orders of magnitude of compute, data, and parameters to an LLM. And we can predict how this will keep going, because it has remained valid over seven orders of magnitude!
So, the jump here would simply be: Let’s just throw more resources at these models and they’ll eventually get there. We don’t need magic, just more and more efficient resources.
The previous graph is from a 2020 paper, but we are witnessing something tangible in the wild akin to that: The length of tasks AIs are doing is improving very consistently.”
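To make the scaling-law claim concrete, here is a minimal sketch (ours, not the author’s) of what “loss improves steadily per order of magnitude” means in practice. The power-law form echoes the 2020 scaling-laws result the quote alludes to, but the constants `c_ref` and `alpha`, and the ~7-month task-length doubling time, are illustrative assumptions rather than figures quoted in the piece.

```python
# Illustrative sketch only: the power-law form mirrors the 2020 scaling-laws
# idea referenced in the article; the constants are assumptions, not numbers
# quoted by the author.

def test_loss(compute: float, c_ref: float = 3.1e8, alpha: float = 0.05) -> float:
    """Test loss as a power law of training compute: loss = (c_ref / compute) ** alpha."""
    return (c_ref / compute) ** alpha

# Each extra order of magnitude of compute cuts the loss by the same fixed
# ratio (10 ** -alpha, roughly 0.89 here), which is why the trend looks like
# a straight line on a log-log plot and invites extrapolation.
for compute in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"compute = {compute:.0e} -> test loss ~ {test_loss(compute):.3f}")

# The task-length trend works the same way: if the length of tasks AIs can
# finish doubles every ~7 months (an assumed doubling time, for illustration),
# a 1-hour task horizon today grows to roughly 100+ hours within about 4 years.
doubling_months = 7
horizon_hours_today = 1
print(horizon_hours_today * 2 ** (48 / doubling_months), "hour horizon after 4 years")
```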
The author then shows evidence of this with AI’s performance on various scholastic tests designed for humans, such as the SAT and the Math Olympiad, which has improved dramatically in a short span of time.
“It’s not like AIs improving AIs is a pipe dream, they’ve been doing it for years, and every year it accelerates:
- Neural Architecture Search (NAS) uses AI to optimize AI neural networks
- AutoML automates the entire neural network creation process, including tasks like data preprocessing, feature engineering, and hyperparameter tuning.
- AlphaProof (2024) does math at the International Mathematical Olympiad level, generating proofs that inform AI algorithm design.
- And of course, coding:
  - Anthropic: AI agents now write 90% of code!
  - OpenAI: “Almost all new code written at OpenAI today is written by Codex users”
  - Google: AI agents write 50% of code characters.
  - Meta and Microsoft: ~50% next year and 20-30% today respectively”
In conclusion, “AI expertise is growing inexorably. Threshold after threshold, discipline after discipline, it masters it, and then beats humans at it. We’re now tackling the PhD level. In the current trajectory, we should reach AI Researcher levels soon. Once we do, we can automate AI research and turbo-boost it. If we do that, superintelligence should be around the corner.”
Finally, how soon do the hyperscalers believe we will get to AGI? “Elon Musk thinks AGI will be reached at the end of this year, or the beginning of next year; Dario Amodei, CEO of Anthropic, thinks it’s going to be in 2026-2027…Sam Altman, CEO of OpenAI, believes the path to AGI is solved, and that we will reach it in 2028.”
If you want to read our other published material, please visit https://marcellus.in/blog/
Note: The above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Additionally, Marcellus is registered with the US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.