The chorus is building up about AI not living up to its hype – at least about the use cases not yet being commensurate with the hundreds of billions invested in GPU capacity by the hyperscalers – Microsoft, Google, Amazon and Meta. The comparison with the internet boom of the late 90s (and the eventual bust) seems uncanny, i.e., the world didn’t take to the internet as quickly as was envisioned, or at least as implied by the soaring stock prices. But just as we can’t imagine a world without the internet today, it is very likely that AI will become mainstream one day, though not before the inevitable dose of rationality typically associated with hype cycles. This article, though, talks about a different AI-related hype – the one that says AI will take over the world. In this brilliant and well-balanced essay, Navneet brings out all that AI can achieve and where it is likely to fall short. For those interested in the topic, the whole piece is worth a read. Some excerpts below.

One reason the large language models behind generative AI drive us to believe the hype is that we have historically associated intelligence with language:

“…for decades, the standard test for whether technology was approaching intelligence was the Turing test, named after its creator Alan Turing, the British mathematician and second world war code-breaker. The test involves a human interrogator who poses questions to two unseen subjects – a computer and another human – via text-based messages to determine which is the machine. A number of different people play the roles of interrogator and respondent, and if a sufficient proportion of interviewers is fooled, the machine could be said to exhibit intelligence. ChatGPT can already fool at least some people in some situations.

Such tests reveal how closely tied to language our notions of intelligence are. We tend to think that beings that can “do language” are intelligent: we marvel at dogs that appear to understand more complex commands, or gorillas that can communicate in sign language, precisely because such acts are closer to our mechanism of rendering the world sensible.

…But being able to do language without also thinking, feeling, willing or being is probably why writing done by AI chatbots is so lifeless and generic. Because LLMs are essentially looking at massive sets of patterns of data and parsing how they relate to one another, they can often spit out perfectly reasonable-sounding statements that are wrong or nonsensical or just weird. That reduction of language to just a collection of data is also why, for example, when I asked ChatGPT to write a bio for me, it told me I was born in India, went to Carleton University and had a degree in journalism – about which it was wrong on all three counts (it was the UK, York University and English). To ChatGPT, it was the shape of the answer, expressed confidently, that was more important than the content, the right pattern mattering more than the right response.”
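The “right pattern, not the right response” point can be made concrete with a deliberately tiny sketch. The toy bigram model below (the corpus and every name in it are invented for illustration; real LLMs learn vastly richer statistics) only counts which word tends to follow which, yet it emits fluent-shaped sentences with the facts shuffled:

```python
import random
from collections import defaultdict

# Toy bigram "language model": a deliberately tiny stand-in for the
# pattern-matching the essay describes. The corpus is invented for
# illustration; real LLMs learn far richer statistics, but the principle
# is the same: predict what tends to come next, with no notion of truth.
corpus = (
    "he was born in india and went to york university . "
    "she was born in the uk and went to carleton university . "
    "he has a degree in english . she has a degree in journalism ."
).split()

# Count which word follows which in the training data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12, seed=0):
    """Emit a fluent-looking sequence by sampling the learned patterns."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("he"))
# Plausible in shape ("he was born in the uk and went to ...") but the
# facts get shuffled: the right pattern rather than the right response.
```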

But pattern recognition, especially when gleaned from vast sets of data, isn’t too different from human intelligence:

“Today, AI can take previously unconnected, even random things, such as the skyline of Toronto and the style of the impressionists, and join them to create what hasn’t existed before. But there is a sort of discomforting or unnerving implication here. Isn’t that also, in a way, how we think? Raphaël Millière, an assistant professor at Macquarie University in Sydney, says that, for example, we know what a pet is (a creature we keep with us at home) and we also know what a fish is (an animal that swims in large water bodies); we combine those two in a way that keeps some characteristics and discards others to form a novel concept: a pet fish. Newer AI models boast this capacity to amalgamate into the ostensibly new – and it is precisely why they are called “generative.”

Even comparatively sophisticated arguments can be seen to work this way. The problem of theodicy has been a topic of debate among theologians for centuries. It asks: if an absolutely good God is omniscient, omnipotent and omnipresent, how can evil exist when God knows it will happen and can stop it? It radically oversimplifies the theological issue, but theodicy, too, is in some ways a kind of logical puzzle, a pattern of ideas that can be recombined in particular ways. I don’t mean to say that AI can solve our deepest epistemological or philosophical questions, but it does suggest that the line between thinking beings and pattern recognition machines is not quite as hard and bright as we may have hoped.”
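Millière’s “pet fish” example can be sketched in the same spirit. The snippet below is a hypothetical toy illustration (the features and the combination rule are invented here): concepts as small attribute sets, blended by keeping some characteristics and discarding others. Generative models do something analogous, but in high-dimensional vector spaces learned from data rather than with hand-written rules:

```python
# Toy "concept blending", illustrating the pet-fish example: concepts as
# feature sets; blending keeps some attributes of each parent concept and
# discards the ones that no longer fit the combination.
pet = {"kept_at_home": True, "has_fur": True, "lives_on_land": True}
fish = {"swims": True, "lives_in_water": True, "lives_on_land": False}

def blend(head, modifier):
    """Head-concept features win on conflicts; the rest are inherited."""
    combined = dict(modifier)      # start from "pet"
    combined.update(head)          # "fish" overrides on clashes
    combined.pop("has_fur", None)  # drop attributes that no longer apply
    return combined

pet_fish = blend(fish, pet)
print(pet_fish)
# {'kept_at_home': True, 'lives_on_land': False, 'swims': True,
#  'lives_in_water': True}  -- a novel concept neither parent contains
```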

But to take it to the extent of claiming that AI can solve all of humanity’s problems, as many VCs and entrepreneurs in Silicon Valley would have us believe, might be taking it a bit too far:

“More often, the problems are the lack of resources, the absence of political will, the power of entrenched interests and, more plainly, money.

This is what the utopian vision of the future so often misses: if and when change happens, the questions at play will be about if and how certain technology gets distributed, deployed, taken up. It will be about how governments decide to allocate resources, how the interests of various parties affected will be balanced, how an idea is sold and promulgated, and more. It will, in short, be about political will, resources, and the contest between competing ideologies and interests. The problems facing the world – not just climate breakdown but the housing crisis, the toxic drug crisis, or growing anti-immigrant sentiment – aren’t problems caused by a lack of intelligence or computing power. In some cases, the solutions to these problems are superficially simple. Homelessness, for example, is reduced when there are more and cheaper homes. But the fixes are difficult to implement because of social and political forces, not a lack of insight, thinking, or novelty. In other words, what will hold progress on these issues back will ultimately be what holds everything back: us.

The idea of an exponentially greater intelligence, so favoured by big tech, is a strange sort of fantasy that abstracts out intelligence into a kind of superpower that can only ever increase. In this view, problem-solving is like a capacity on a dial that can simply be turned up and up. To assume this is what’s called “tech solutionism”, a term coined a decade ago by the writer Evgeny Morozov. He was among the first to point to how Silicon Valley tended to see tech as the answer to everything.

…An AI model can be trained on billions of data points, but it can’t tell you if any of those things is good, or if it has value to us, and there’s no reason to believe it will. We arrive at moral evaluations not through logical puzzles but through consideration of what is irreducible in us: subjectivity, dignity, interiority, desire – all the things AI doesn’t have.

To say that AI on its own will be able to produce art misunderstands why we turn to art in the first place. We crave things made by humans because we care about what humans say and feel about their experience of being a person and a body in the world.

There’s also a question of quantity. In dropping the barriers to content creation, AI will also flood the world with dreck. Already, Google is becoming harder to use because the web is being flooded with AI-crafted content designed to get clicks.”

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: The above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Additionally, Marcellus is also registered with the US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.


