Notwithstanding the hammering tech stocks have taken this year, there is no denying the massive strides the tech industry has made over the past decade, most notably in artificial intelligence and its practical applications. This piece acknowledges the contribution to that field of Geoffrey Hinton, a cognitive psychologist and computer scientist best known for his work on artificial neural networks and deep learning. A Turing Award winner, Hinton now divides his time between Google and the University of Toronto. In this piece, Jaspreet Bindra highlights how Hinton’s fascination with the human brain and how it works inspired him to try to recreate it in software, driving great progress in artificial intelligence.
“After studying psychology at Cambridge and AI at the University of Edinburgh, Hinton went back to something that had fascinated him even as a child: how the human brain stored memories, and how it worked. He was one of the first researchers to work on ‘mimicking’ the human brain with computer hardware and software, thus constructing a newer and purer form of AI, which we now call ‘deep learning’. He started doing this in the 1980s, along with an intrepid bunch of students. His landmark 2012 paper, Deep Neural Networks for Acoustic Modelling in Speech Recognition, demonstrated how deep neural networks outclassed older machine learning models such as hidden Markov models and Gaussian mixture models at identifying speech patterns. He helped pioneer ‘backpropagation’, the training technique at the heart of modern neural networks, which was reportedly one of the concepts that inspired Google’s BackRub search algorithm, the core of its exemplary service.
“I get very excited when we discover a way of making neural networks better—and when that’s closely related to how the brain works,” says Hinton. By mimicking the brain, he sought to get rid of traditional machine learning techniques, where humans would label pictures, words and objects; instead, his work copied the brain’s self-learning techniques. He and his team built “artificial neurons from interconnected layers of software modelled after the columns of neurons in the brain’s cortex. These neural nets can gather information, react to it, build an understanding of what something looks or sounds like” (bit.ly/3LRJwWo). The AI community did not trust this new approach; Hinton told Sky News that it was “an idea that almost no one on Earth believed in at that point—it was pretty much a dead idea, even among AI researchers”.
Well, that sentiment has changed. Deep learning has been harnessed by Google, Meta, Microsoft, DeepMind, Baidu and almost every other tech firm to build driverless cars, predict protein folding and beat humans at Go. Of Hinton’s students and collaborators, Yann LeCun now leads Meta’s AI efforts, Yoshua Bengio is doing seminal work at the University of Montreal, and Ilya Sutskever co-founded OpenAI, famous for GPT-3. Hinton himself works part-time for Google, the result of a frenzied bidding war between Google, Microsoft and Baidu in which he auctioned his company (and his services) to Google for $44 million, the stuff of legend in itself. Deep learning is now considered one of the most exciting developments in AI, and is regarded by many as the surest bet that AI will achieve artificial general intelligence, or AGI. As Hinton put it: “We ceased to be the lunatic fringe. We’re now the lunatic core.””
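For readers who want a concrete sense of what the ‘backpropagation’ and layered artificial neurons described above actually do, below is a minimal illustrative sketch, not Hinton’s code or any production system: a tiny two-layer neural network, written in Python with NumPy, trained by backpropagation to learn the XOR function (a classic toy problem a single-layer network cannot solve). The network size, learning rate and number of training steps are arbitrary choices made purely for illustration.

# A minimal sketch (illustrative only): a tiny two-layer neural network
# trained with backpropagation on the XOR problem, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR is not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backward pass (backpropagation): push the error gradient from the
    # output layer back towards the earlier layer's weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # predictions approach [[0], [1], [1], [0]]

Production deep learning systems rest on the same basic idea of propagating error gradients backwards through the layers, only across far more layers and parameters, and with libraries that automate the gradient computation.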

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: the above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India as a provider of Portfolio Management Services. Marcellus Investment Managers is also regulated in the United States as an Investment Advisor.

Copyright © 2022 Marcellus Investment Managers Pvt Ltd, All rights reserved.


