Here’s another perspective on the emerging thinking about Artificial Intelligence (AI). As discourse on the risks of AI models becoming sentient and potentially posing a threat to humanity gains ground, this piece from a couple of years ago takes a shot at understanding what we mean by intelligence and its inherent link to consciousness, the reason our species dominates the planet we inhabit. The author begins by helping us agree on what we mean by consciousness:

“It seems that conscious experiences often involve interplay or feedback between planning, memory, models, and emotions. All of these can be shown to have value in survival and reproduction as evolutionary drivers. For example, emotions are a form of future reward. If something makes you sad, you can form a persistent memory of that sadness that you wish to avoid in the future.

…Humans are made of living cells that seem to produce consciousness when assembled in the right pattern. We would certainly be alive without consciousness, but would such a life carry any meaning? Does a Roomba robot really care if it is turned off? It seems not. Clearly, there is some evolutionary driver to proliferate consciousness, as it seems abundant to varying degrees in animals. Perhaps it is fair to say that consciousness attaches meaning to our lives. We can experience, we can feel, we can hope, we can dream.

So we all have it, we can play around with it a little, and it’s all in the brain.”

So can AI models gain consciousness, as opposed to simple intelligence? The author continues:
“If information is the whole and only game in town, then you would think neural networks are relevant to consider. After all, they are rudimentary representations of how the neurons of our own brains work, as far as we know, that is. The great irony of A.I. is that we don’t know how those artificial neurons work, either. Really. It works, spectacularly well, in fact, but we’re not sure exactly how or why. There seems to be some magical capability attributed to a network of simple arithmetic operations with non-linearity repeated thousands of times. The artificial brains don’t learn as broadly or as quickly as a human brain, but they still learn a lot. Just by showing lots of samples labeled with possible outputs, even a small neural network can learn very intricate relationships from input to output. Sometimes beyond human-level, even.
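(An aside from us, not the article: the “simple arithmetic operations with non-linearity” can be made concrete in a few lines. The sketch below is a minimal toy network learning from labeled samples; the XOR-style data, layer sizes and learning rate are purely illustrative assumptions.)

```python
# A toy two-layer network: weighted sums (simple arithmetic) passed through a
# non-linearity, repeated, and fitted to labeled samples by gradient descent.
# The XOR data, layer sizes and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Labeled samples: inputs X and desired outputs y (XOR of the two inputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: multiply, add, squash -- nothing more exotic than that.
    h = sigmoid(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # predicted output

    # Backward pass: push the prediction error back to every weight.
    dp = (p - y) * p * (1 - p)        # error signal at the output
    dh = (dp @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    W2 -= lr * h.T @ dp;  b2 -= lr * dp.sum(axis=0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(axis=0)

print(np.round(p, 2))  # after training, typically close to the labels [0, 1, 1, 0]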

Currently, the major limitation of progress is around generalization. That means that if you learn one thing, you don’t need to learn every example to be effective in the real world. Babies don’t need to see all types of dogs to recognize a dog, but computers do. Humans can learn to play any type of game, from tennis to chess, but computers struggle to learn many things. They can be superhuman in one category, but the best we can do right now is learning across different Atari games or going from Go to Chess. That’s it. No tennis. No poetry. What’s missing?

Surprisingly few people are actively working on this problem. Most of the commercially viable uses of A.I. don’t require anything remotely like human intelligence. In fact, if you just need to control valves in a chemical plant, it’s better not to do a bit of sudoku or jazz composition on the side. Focused algorithms are incredibly useful commercially.

One of the few people actively thinking about it is Yann LeCun, one of the early pioneers of modern machine learning techniques in image recognition. His proposal is that Reinforcement Learning is the right path. For context, that’s the type of algorithm behind AlphaZero, for example. It’s learning by carrot and stick. If you do good, you get a carrot. If you do bad, you get a stick. That alone, left to do lots of learning, can produce spectacular intelligence like world-beating chess algorithms. So that gets us pretty far. What then?
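(Again an aside from us: the “carrot and stick” idea can be sketched with a few lines of tabular Q-learning. The tiny corridor environment, reward values and hyper-parameters below are our own illustrative assumptions; AlphaZero pairs the same kind of reward signal with self-play and tree search at vastly larger scale.)

```python
# Tabular Q-learning on a tiny corridor: the agent only ever sees a numeric
# reward (carrot = +1 at the goal, stick = -0.01 per step), yet it learns to
# walk right. Environment, rewards and hyper-parameters are illustrative.
import random

N_STATES, GOAL = 6, 5            # states 0..5, the carrot waits at state 5
ACTIONS = [-1, +1]               # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly act greedily, occasionally explore at random.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else -0.01          # carrot or stick
        # Standard Q-learning update toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Greedy policy after training: expect "right" in every state before the goal.
print(["left" if Q[s][0] > Q[s][1] else "right" for s in range(GOAL)])
```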

Well, LeCun suggests we need a world view. A framework to represent the real world out there. For example, when humans play chess we see the pieces and the board. They are objects with context and meaning. The algorithm doesn’t see any of this. No shape to the knight. No color to the board. It just sees numbers. Knight D4. Checkmate.

The computer can learn the rules, but nothing about the objects. There is no meaning to moving a piece on the board that can be reused in moving an apple. You might even argue that, even though the computer can play chess, it doesn’t understand chess. At all. To be precise, the computer can learn a function approximation that happens to have a strong correlation with good moves in chess. It’s not chess. If there were no human carrot and stick, it would in fact learn nothing whatsoever. The computer brain must be spoon-fed. This must change. The computer must be given a framework for modeling the world, just like humans have. We understand deeply the environment we live in: how to move, how to find food, how to climb stairs, how to open doors, how to ask for directions, how to hunt for jobs, how to write emails, and so on.

What if we got there? What if a computer could do all that? An exact functional replica of a human, but made from silicon and copper wire?”

The article is worth a read in its entirety as it asks probing questions about what it would take for AI to reach full sentience.

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: the above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India as a provider of Portfolio Management Services. Marcellus Investment Managers is also regulated in the United States as an Investment Advisor.

Copyright © 2022 Marcellus Investment Managers Pvt Ltd, All rights reserved.


