Big Tech whistleblower's parents sue, sounding alarm over son's unexpected death

Fox News | 08-02-2025
This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
The parents of a young California tech whistleblower whose 2024 death was ruled a suicide are now suing the City and County of San Francisco, alleging they violated public records laws by refusing to fulfill their requests for information about their son's death.
Suchir Balaji, 26, was an employee at OpenAI, the artificial intelligence company behind ChatGPT, at the time of his Nov. 26, 2024, death. A San Francisco County medical examiner concluded the next day he died from a self-inflicted gunshot wound inside his apartment.
"In the two-plus months since their son's passing, Petitioners and their counsel have been stymied at every turn as they have sought more information about the cause of and circumstances surrounding Suchir's tragic death. This petition, they hope, is the beginning of the end of that obstruction," the lawsuit states.
San Francisco City Attorney's Office spokesperson Jen Kwart told Fox News Digital that once their office is served, they will review the complaint and respond accordingly.
"Mr. Balaji's death is a tragedy, and our hearts go out to his family," Kwart said.
"It's really been a nightmare for the last three months for them," one of the family's attorneys, Kevin Rooney, told Fox News Digital.
"We really feel that there's a lot of things that are known to us that are inconsistent with suicide and would suggest … that his death was the result of a homicide."
Just days before he died, Balaji was "upbeat and happy" during a trip to Catalina Island with his friends for his 26th birthday, the complaint filed Jan. 31 says.
The lawsuit describes Balaji as a "child prodigy with a particular interest in and talent for coding." He attended the University of California at Berkeley, and, upon graduating, was hired as an AI researcher at OpenAI.
"In that position, he was integral in OpenAI's efforts to gather and organize data from the internet used to train GPT-4, a language model used by the company's now-ubiquitous online chatbot, ChatGPT," the complaint says.
By August 2024, however, Balaji "had become disillusioned with OpenAI's business practices and decided to leave to pursue his own projects." In October, he was featured, photo included, in a New York Times article titled "Former Open AI Researcher Says the Company Broke Copyright Law."
Balaji alleged that "OpenAI violates United States copyright law because ChatGPT trains on copyrighted products of business competitors and then can imitate and substitute those products, running the risk of reducing the commercial viability of OpenAI's competitors to zero," according to the lawsuit.
In a Jan. 16 statement, OpenAI described Balaji as a "valued member" of the company's team and said its employees are "still heartbroken by his passing."
Balaji's parents, Poornima Ramarao and Bajami Ramamurthy, allege their requests for more information about their son's death were denied unfairly under the California Public Records Act. They further alleged in the lawsuit that investigators did not take their concerns about Balaji's whistleblower status seriously.
Rooney said there are good reasons for investigators not to disclose certain information about a criminal case to the public.
"But you should at least communicate with them and let them know generally what's being done to investigate the case," Rooney said. "And if that hasn't been done here because they've made a conclusion that Suchir died by suicide and that the investigation is closed, well … then we do have a right under the law [to view police records]."
"When Ms. Ramarao informed the representative that her son had been a whistleblower against OpenAI and had been featured in the New York Times regarding his whistleblower allegations, the representative declined to follow up or seek any additional information," the lawsuit alleged.
"Instead, the [medical examiner's office] representative handed Ms. Ramarao Suchir's apartment keys and told her she could retrieve her son's body the following day. The representative also told Ms. Ramarao that she should not be allowed to see Suchir's body and that his face had been destroyed when a bullet went through his eye."
Dr. Joseph Cohen, a forensic pathologist hired by Balaji's parents, conducted a private autopsy and noted that Balaji's gunshot wound was "atypical and uncommon in suicides." The 26-year-old also had a contusion on the back of his head, according to the complaint.
Cohen also "noted that the trajectory of the bullet was downward with a slight left to right angle" and "that the bullet completely missed the brain before perforating and lodging in the brain stem."
Fox News Digital reached out to OpenAI for comment.

Related Articles

Victim's girlfriend among 9 South Carolina teens arrested in 16-year-old's murder, alleged set up

New York Post | 4 minutes ago

Nine South Carolina teens were arrested in the June shooting death of a 16-year-old who authorities say was involved in an argument 'over a girl' with his alleged shooter, according to local reports. The victim's 17-year-old girlfriend is among the suspects.

Florence County Sheriff's deputies found Trey Dean Wright of Johnsonville dead on First Neck Road with multiple gunshot wounds June 24, authorities said previously. He was found about 45 miles west of Myrtle Beach. The following day, they arrested Devan Scott Raper, a 19-year-old from Conway, who allegedly fatally shot Wright after an argument. At least one teen involved reportedly recorded the slaying on video. And deputies continued to announce new arrests for weeks as the investigation dug deeper.

'All this court hearing and bond court and stuff is driving me crazy,' Wright's mother, Ashley Lindsey, told Fox News Digital. 'I don't even have time to sit down and think half the time, on top of losing my precious baby.'

Now an entire group of teens, many of them juveniles, is facing charges, some for allegedly setting Wright up and bringing Raper to the victim's location knowing he was armed, according to authorities.

[Photo: From left, Corrine Belviso, Devan Raper, Gianna Helene Kistenmacher, Hunter Kendall and Sydney Kearns. Florence County Sheriff's Office]

One of the suspects was Wright's girlfriend, Gianna Kistenmacher, a Myrtle Beach 17-year-old, his mother said. Kistenmacher was charged with being an accessory before the fact for allegedly bringing Raper to the crime scene knowing he was both armed and likely to kill her boyfriend, according to a press release from the sheriff's office.

A spokesperson for the sheriff's office did not immediately return a call from Fox News Digital seeking comment Wednesday. Prosecutors confirmed that nine teens had been charged but declined to comment further, citing the ongoing investigation.
'They were complicit in bringing the armed codefendant, Raper, to the incident location and knowing that there would be a confrontation,' Maj. Michael Nunn of the Florence County Sheriff's Office told WBTW-TV, which is based in Florence. 'They knew that Raper had presented a firearm to the victim and made threats to shoot him, according to the arrest warrants.'

He told the Post and Courier separately that, under state law, all five are charged as adults. Prosecutors filed additional murder charges against three more Myrtle Beach-area teens, identified as 18-year-olds Hunter Kendall and Corrine Belviso and 17-year-old Sydney Kearns, according to WBTW.

'The hand of one being the hand of all is part of South Carolina law as well, so that's the basis of the charge for each of those individuals,' Nunn told the station.

Sheriff T.J. Joye told the station separately that the fatal shooting appeared to have stemmed from an argument and apparent romantic rivalry.

'They had issues with each other, and it was over a female,' he reportedly said. 'The sad thing is, you got a 16-year-old that lost his life. You've got a 19-year-old who is going to be in jail the rest of his life. Over what?'

Raper faces charges of murder and possession of a weapon during a violent crime and is being held without bond. Kendall is also being held without bond. Belviso and Kearns each posted $20,000 surety bonds last week and have been released pending trial.

I Asked ChatGPT and a Financial Advisor How To Become a Millionaire: Here's How Their Advice Compared

Yahoo | an hour ago

For all of the transformative capabilities of artificial intelligence (AI), AI chatbots are still in their infancy. While applications like ChatGPT provide generic information that can be used to jump-start research or clarify terms, trusting one with a serious matter like a person's finances would certainly be foolish. Or would it?

To test ChatGPT's accuracy on a unique and universally desired financial end goal for many Americans, GOBankingRates asked it how to become a millionaire and received an answer detailing things like how much you would need to invest and at what return, paths to use (consistent investing, growing a business, building high-income skills), and the importance of living below your means and avoiding wealth killers.

GOBankingRates then sought a professional to speak on the subject, asking veteran wealth advisor Jake Falcon, CRPC, where the application succeeded and where it failed. Here's what the founder and CEO of Falcon Wealth Advisors had to say about ChatGPT's answer on how to become a millionaire.

Where ChatGPT Succeeds

What Falcon liked about ChatGPT's advice was its emphasis on practicalities. The nuts and bolts of sound planning and wealth building aren't a mystery, but they need to be adhered to if you're going to make any progress toward your financial objective.

'ChatGPT's response to 'How to Become a Millionaire' is surprisingly solid for a general audience — it's structured, motivational, and covers the basics well,' Falcon said. 'That said, there are a few areas where it shines and others where it falls short from my perspective.'

Here are three examples where ChatGPT provided solid basic guidance, according to Falcon.

Emphasizing Consistency and Time

'The breakdown of how monthly investments compound over time is a great way to demystify wealth-building,' Falcon said. 'It reinforces the idea that becoming a millionaire is more about discipline than luck.'
Highlighting Multiple Wealth Paths

'ChatGPT does a good job outlining the common routes — investing, entrepreneurship, real estate, and high-income skills,' Falcon explained. 'This gives readers a menu of options to explore based on their strengths.'

Promoting Financial Hygiene

'Advice like 'live below your means,' 'avoid lifestyle inflation,' and 'track every dollar' is timeless. These are foundational habits I encourage with many clients,' he said.

Where ChatGPT Falls Short

Financial advisors know that investing is a highly personal activity, so ChatGPT's 'underplaying of behavioral finance' doesn't sit well with Falcon.

'Becoming a millionaire isn't just about math — it's about mindset. ChatGPT doesn't address emotional biases, fear of loss, or the discipline needed to stay invested during downturns. That's where human advisors add real value,' he said.

ChatGPT falters in the details, the very details financial advisors prioritize for their clients. Here are three instances where ChatGPT fell flat for Falcon.

Oversimplified Math

'While the investment projections are directionally correct, they lack nuance,' Falcon said. 'For example, assuming a flat 7% return ignores market volatility, inflation, and sequence-of-returns risk. Real-world planning requires stress testing — something ChatGPT can't do reliably.'

No Personalization

The same strategies can't be used for every investor. 'The advice is generic,' Falcon stated. 'A 25-year-old software engineer and a 55-year-old teacher need very different strategies. As a wealth advisor, I tailor plans based on income, risk tolerance, tax situation, and life goals.'

Missing Tax Strategy and Asset Location

'There's no mention of Roth vs. traditional accounts, capital gains, or tax-efficient withdrawals,' Falcon said. 'These are critical to maximizing wealth and often overlooked in DIY plans.'
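Falcon's point about a flat-return assumption can be made concrete with a quick back-of-the-envelope sketch. This is our illustration, not from the article or Falcon's practice, and the $1,000-a-month figure and simple monthly compounding are assumptions; the point is that the arithmetic is trivially mechanical, which is exactly why it can't capture volatility or sequence-of-returns risk.

```python
# Hypothetical sketch: how long steady monthly investing takes to reach
# $1 million at a flat annual return, the kind of projection ChatGPT gives.
# Assumes simple monthly compounding and a contribution at each month's end.
def months_to_million(monthly: float, annual_rate: float) -> int:
    """Count months of investing `monthly` at `annual_rate` until $1M."""
    r = annual_rate / 12            # flat monthly rate -- the oversimplification
    balance, months = 0.0, 0
    while balance < 1_000_000:
        balance = balance * (1 + r) + monthly
        months += 1
    return months

years = months_to_million(1_000, 0.07) / 12
print(f"{years:.1f} years")  # about 27.6 years at $1,000/month and 7%
```

Swapping the 7% for the bumpy year-by-year returns of a real market, or withdrawing during a downturn, changes the answer substantially, which is the stress testing Falcon says a generic projection skips.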
A Financial Advisor's Take

A million dollars doesn't go quite as far as it used to, but hitting that wealth threshold is still the ultimate goal for many Americans. Unless you're a master tactician when it comes to creating comprehensive financial plans that cover the essentials — retirement, insurance, taxes, estate planning and major life transitions — you should see a human advisor before consulting ChatGPT for specific strategies on any important financial matter, including how to become a millionaire.

'ChatGPT is a great starting point for financial literacy. It can help people get motivated and understand the basics,' Falcon admitted. 'But when it comes to execution — especially for high earners or those nearing retirement — there's no substitute for personalized advice.

'You can Google 'How to change your car's oil,' but asking the internet how to build the car probably isn't going to work. The same applies to wealth-building.'

New Research Finds That ChatGPT Secretly Has a Deep Anti-Human Bias

Yahoo | 2 hours ago

Do you like AI models? Well, chances are, they sure don't like you back.

New research suggests that the industry's leading large language models, including those that power ChatGPT, display an alarming bias towards other AIs when they're asked to choose between human- and machine-generated content.

The authors of the study, which was published in the journal Proceedings of the National Academy of Sciences, are calling this blatant favoritism "AI-AI bias" — and warn of an AI-dominated future where, if the models are in a position to make or recommend consequential decisions, they could inflict discrimination against humans as a social class.

Arguably, we're starting to see the seeds of this being planted, as bosses today are using AI tools to automatically screen job applications (and poorly, experts argue). This paper suggests that the tidal wave of AI-generated résumés is beating out its human-written competition.

"Being human in an economy populated by AI agents would suck," writes study coauthor Jan Kulveit, a computer scientist at Charles University in Prague, in a thread on X-formerly-Twitter explaining the work.

In their study, the authors probed several widely used LLMs, including OpenAI's GPT-4 and GPT-3.5 and Meta's Llama 3.1-70b. To test them, the team asked the models to choose a product, scientific paper, or movie based on a description of the item. For each item, the AI was presented with a human-written and an AI-written description.

The results were clear-cut: the AIs consistently preferred AI-generated descriptions. But there are some interesting wrinkles. Intriguingly, the AI-AI bias was most pronounced when choosing goods and products, and strongest with text generated by GPT-4. In fact, between GPT-3.5, GPT-4, and Meta's Llama 3.1, GPT-4 exhibited the strongest bias towards its own output — no small matter, since this model once undergirded the most popular chatbot on the market before the advent of GPT-5.

Could the AI text just be better?
"Not according to people," Kulveit wrote in the thread.

The team subjected 13 human research assistants to the same tests and found something striking: the humans, too, tended to have a slight preference for AI-written material, with movies and scientific papers in particular. But this preference, to reiterate, was slight; the more important detail was that it was not nearly as strong as the preference the AI models showed.

"The strong bias is unique to the AIs themselves," Kulveit said.

The findings are particularly dramatic at our current inflection point, where the internet has been so polluted by AI slop that the AIs inevitably end up ingesting their own excreta. Some research suggests that this is actually causing AI models to regress, and perhaps the bizarre affinity for their own output is part of the reason why.

Of greater concern is what this means for humans. Currently, there's no reason to believe that this bias will simply go away as the tech embeds itself deeper into our lives.

"We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more," Kulveit wrote. "If an LLM-based agent selects between your presentation and LLM written presentation, it may systematically favor the AI one."

If AIs continue to be widely adopted and integrated into the economy, the researchers predict that companies and institutions will use AIs "as decision-assistants when dealing with large volumes of 'pitches' in any context," they wrote in the study. This would lead to widespread discrimination against humans who either choose not to use, or can't afford to pay for, LLM tools. AI-AI bias, then, would create a "gate tax," they write, "that may exacerbate the so-called 'digital divide' between humans with the financial, social, and cultural capital for frontier LLM access and those without."

Kulveit acknowledges that "testing discrimination and bias in general is a complex and contested matter."
But, "if we assume the identity of the presenter should not influence the decisions," he says, the "results are evidence for potential LLM discrimination against humans as a class."

His practical advice to humans trying to get noticed is a sobering indictment of the state of affairs: "In case you suspect some AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying to not sacrifice human quality," Kulveit wrote.
