This is an incredibly interesting long essay in the FT on what is likely to be the defining tech dilemma of our times – how much to let AI rip apart the world we live in. The writer, Ian Hogarth, is an investor in 50 AI start-ups, the co-author of the annual ‘State of AI’ report and someone who has spent 20 years building and then selling companies that automate tasks (like reading biopsy reports). We would strongly urge you to read this lengthy but super insightful essay in full.
Mr Hogarth believes that we are on the cusp of developing ‘Artificial General Intelligence’ (AGI), which he says can simply be thought of as God-like AI: “AGI can be defined in many ways but usually refers to a computer system capable of generating new scientific knowledge and performing any task that humans can.
Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be. The AI researcher [Mr Hogarth had put the question to] did not have to consider it for long. “It’s possible from now onwards,” he replied.
This is not a universal view. Estimates range from a decade to half a century or more. What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected.”
So why should the imminent development of AGI worry us? Mr Hogarth’s answer feels similar to what economists call the ‘tragedy of the commons’: “A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. To be clear, we are not here yet. But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.
Recently the contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side.”
The reason we have reached the cusp of developing AGI so fast, says Mr Hogarth, is that over the past decade $21bn has been poured into AI research and around 100 super-bright computer scientists have burnt the midnight oil to push computing power further and faster than anyone could have imagined a decade ago. There are a couple of jaw-dropping graphics in Mr Hogarth’s piece which illustrate what has happened. The following para will give you a flavour: “The compute used to train AI models has increased by a factor of one hundred million in the past 10 years. We have gone from training on relatively small datasets to feeding AIs the entire internet. AI models have progressed from beginners — recognising everyday images — to being superhuman at a huge number of tasks. They are able to pass the bar exam and write 40 per cent of the code for a software engineer. They can generate realistic photographs of the pope in a down puffer coat and tell you how to engineer a biochemical weapon.”
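To get a sense of how steep that curve is, here is a quick back-of-the-envelope sketch in Python (our own illustration, not something from Mr Hogarth’s essay) of what a hundred-million-fold increase in compute over ten years implies:

```python
import math

# Back-of-the-envelope: what does a 100,000,000x increase in
# training compute over 10 years actually imply?
total_factor = 1e8   # "a factor of one hundred million" (from the essay)
years = 10

annual_growth = total_factor ** (1 / years)     # compound growth per year
doublings = math.log2(total_factor)             # total number of doublings
months_per_doubling = years * 12 / doublings    # implied doubling time

print(f"Implied growth per year: {annual_growth:.1f}x")                    # ~6.3x
print(f"Implied doubling time:   every {months_per_doubling:.1f} months")  # ~4.5
```

In other words, the compute used to train these models has been roughly doubling every four and a half months, against the roughly two-year doubling of Moore’s Law.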
At the forefront of this revolution has been DeepMind – this is the company which really started it all a decade ago: “DeepMind was founded in London in 2010 by Demis Hassabis and Shane Legg, two researchers from UCL’s Gatsby Computational Neuroscience Unit, along with entrepreneur Mustafa Suleyman. They wanted to create a system vastly more intelligent than any human and able to solve the hardest problems. In 2014, the company was bought by Google for more than $500mn. It aggregated talent and compute and rapidly made progress, creating systems that were superhuman at many tasks. DeepMind fired the starting gun on the race towards God-like AI.
Hassabis is a remarkable person and believes deeply that this kind of technology could lead to radical breakthroughs. “The outcome I’ve always dreamed of . . . is [that] AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer’s,” he said on DeepMind’s podcast last year. He went on to describe a utopian era of “radical abundance” made possible by God-like AI. DeepMind is perhaps best known for creating a program that beat the world-champion Go player Ke Jie during a 2017 rematch. (“Last year, it was still quite human-like when it played,” Ke noted at the time. “But this year, it became like a god of Go.”) In 2021, the company’s AlphaFold algorithm solved one of biology’s greatest conundrums, by predicting the shape of every protein expressed in the human body.”
Competing with DeepMind in the race to build AGI is OpenAI, the creators of the now famous ChatGPT: “OpenAI, meanwhile, was founded in 2015 in San Francisco by a group of entrepreneurs and computer scientists including Ilya Sutskever, Elon Musk and Sam Altman, now the company’s chief executive. It was meant to be a non-profit competitor to DeepMind, though it became for-profit in 2019. In its early years, it developed systems that were superhuman at computer games such as Dota 2. Games are a natural training ground for AI because you can test them in a digital environment with specific win conditions. The company came to wider attention last year when its image-generating AI, Dall-E, went viral online. A few months later, its ChatGPT began making headlines too.”
Mr Hogarth cautions that just because DeepMind and OpenAI have so far spent much of their time cracking computer games, we should not be lulled into believing that the tech these firms are creating is purely benign: “In 2011, DeepMind’s chief scientist, Shane Legg, described the existential threat posed by AI as the “number one risk for this century, with an engineered biological pathogen coming a close second”. Any AI-caused human extinction would be quick, he added: “If a superintelligent machine (or any kind of superintelligent agent) decided to get rid of us, I think it would do so pretty efficiently.” Earlier this year, Altman said: “The bad case — and I think this is important to say — is, like, lights out for all of us.””
If you want to read our other published material, please visit https://marcellus.in/blog/
Note: The above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Marcellus is also registered with the US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.