Even as discussions with our investee companies point to the inevitability of AI taking over roles which are currently performed by well-paid humans (well-paid in the Indian context), it is becoming clear that there are still flaws even in the most advanced AI models. Quoting from the AP article: “Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near ‘human level robustness and accuracy.’

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text – known in the industry as hallucinations – can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in ‘high-risk domains.’…

A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model.”

The article then goes on to give a range of examples of the hallucinations that researchers came across as they went through transcripts, e.g.: “In an example they uncovered, a speaker said, ‘He, the boy, was going to, I’m not sure exactly, take the umbrella.’

But the transcription software added: ‘He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.’”
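To make the reporting concrete: below is a minimal sketch of how a transcription pipeline typically invokes OpenAI’s open-source whisper Python package (the audio file name and the confidence thresholds are our illustrative assumptions, not anything prescribed by the AP article). The point to note is that the model returns per-segment confidence signals but no explicit flag for invented text, which is how hallucinated passages can slip into finished transcripts:

    # Minimal sketch of transcribing audio with the open-source
    # `whisper` package (pip install openai-whisper). The file name
    # and the thresholds below are illustrative assumptions.
    import whisper

    model = whisper.load_model("base")          # smaller checkpoints tend to hallucinate more
    result = model.transcribe("interview.wav")  # hypothetical audio file

    print(result["text"])  # the full transcript, hallucinations included

    # Whisper reports per-segment confidence signals, but nothing in the
    # output marks invented text as such; fabricated spans can score well.
    for seg in result["segments"]:
        if seg["no_speech_prob"] > 0.6 or seg["avg_logprob"] < -1.0:
            print(f"[low confidence] {seg['start']:.1f}s: {seg['text']}")

In practice, segments that fail such heuristic checks are worth a human re-listen before the transcript is used anywhere consequential.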

These hallucinations will reduce as the bots are trained better in the months & years to come, but the question that then arises for firms like OpenAI is which use cases to focus their resources on. Given that bots can be trained for an almost infinite number of use cases, OpenAI will have to play god and decide which use cases are prioritised for minimising the risk of hallucinations.

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: The above material is neither investment research nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Marcellus is also registered with the US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.


