Independent Autopsy Finds OpenAI Whistleblower's Bullet Wound Was Not Consistent With Suicide


Yahoo | 10-02-2025

San Francisco's medical examiner ruled OpenAI whistleblower Suchir Balaji's death last year a suicide — but the young man's family claims in a recent lawsuit that their own pathologist found differently.
As Decrypt and other outlets report, the 26-year-old whistleblower's parents, Poornima Ramarao and Balaji Ramamurthy, say the independent autopsy they commissioned suggests that the gunshot wound that killed their son was inconsistent with suicide — building on claims they've been making since November that their son was murdered rather than having taken his own life.
In a lawsuit filed against San Francisco and its police department, the grieving parents are now accusing authorities of covering up foul play and demanding they release all files about Balaji's death.
"The lawsuit demands that the city, police department, and medical examiner release public documents withheld under the Public Records Act," the family's attorney Joseph Goethals told Decrypt. If authorities fail to do so expediently, the lawyer said that he would seek a court order to force the issue.
Just a month prior to his body being found in his apartment, the New York Times published a bold claim from the former OpenAI employee: that the company was, in spite of its half-baked insistence to the contrary, using "enormous amounts" of copyrighted materials without permission to train its models. Balaji knew this because he'd helped gather said data before ChatGPT was released in November 2022.
Since Balaji's death, the young tech genius' parents have fought for answers, and said they've been met with resistance from SFPD. Police told Ramarao that she couldn't see her son's body because the bullet that killed him had "destroyed" his face, she said — but when independent pathologist Joseph Cohen examined the body, he discovered that that claim appeared to be false.
During that second autopsy, Cohen deduced that the downward and slightly left-to-right trajectory of the single bullet that penetrated Balaji's brainstem would have been unusual for a suicide. He also found a contusion on the back of the young man's head, suggesting, per the suit, that he may have been struck before being shot.
Thus far, authorities in San Francisco have been tight-lipped about the case because it's still open. According to a source with direct knowledge who spoke to Fortune magazine, however, that silence may soon end when police and the city medical examiner's office release letters with "detailed facts" that provide more evidence that the young man killed himself.
Throughout this debacle, Balaji's family hasn't pointed fingers at OpenAI or anyone else. Instead, the grieving parents are insisting that authorities fully investigate the matter — and, hopefully, start to heal from this unimaginable tragedy.
More on OpenAI: OpenAI Seems to Be Low Key Panicking


Related Articles

Here's how to turn off public posting on the Meta AI app

CNBC | 9 minutes ago

AI-generated images of women kissing while mud wrestling and President Donald Trump eating poop are some of the conversations users are unknowingly sharing publicly through Meta's newly launched AI app.

The company rolled out the Meta AI app in April, putting it in direct competition with OpenAI's ChatGPT. But the tool has recently garnered some negative publicity and sparked privacy concerns over some of the wacky — and personal — prompts being shared publicly from user accounts.

Besides the mud wrestlers and Trump eating poop, some of the examples CNBC found include a user prompting Meta's AI tool to generate photos of the character Hello Kitty "tying a rope in a loop hanging from a barn rafter, standing on a stool." Another user whose prompt was posted publicly asked Meta AI to send what appears to be a veterinarian bill to another person. "sir, your home address is listed on there," a user commented on the photo of the veterinarian bill.

Prompts put into the Meta AI tool appear to show up publicly on the app by default, but users can adjust settings on the app to protect their privacy:

1. Click on your profile photo in the top right corner of the screen and scroll down to data and privacy.
2. Head to the "suggesting your prompts on other apps" tab. This should include Facebook and Instagram.
3. Click the toggle for each app that you want to keep your prompts from being shared on.
4. Go back to the main data and privacy page and click "manage your information."
5. Select "make all your public prompts visible only to you" and click the "apply to all" function. You can also delete your prompt history there.

Meta has beefed up its recent bets on AI to improve its offerings and compete against megacap peers and leading AI contenders such as Google and OpenAI. This week the company invested $14 billion in the startup Scale AI and tapped its CEO, Alexandr Wang, to help lead the company's AI strategy.
The company did not immediately respond to a request for comment.

The Chatbot Disinfo Inflaming the LA Protests

WIRED | 18 minutes ago

Jun 13, 2025, 1:04 PM

On this episode of Uncanny Valley, our senior politics editor discusses the spread of disinformation online following the onset of the Los Angeles protests.

Flames engulf an autonomous Waymo vehicle during an anti-ICE protest in downtown Los Angeles, California, on June 8, 2025. Photo-Illustration: WIRED Staff; Photograph: Getty Images

In recent days, Los Angeles residents have taken to the streets to protest the Trump administration's immigration policies and the increasingly frequent ICE raids. WIRED's senior politics editor Leah Feiger joins Zoë Schiffer, director of business and industry, to discuss the related flood of information on social media, and how AI chatbots like Grok and ChatGPT are delivering incorrect and, at times, inflammatory answers.

Mentioned in today's episode:
AI Chatbots Are Making LA Protest Disinformation Worse by David Gilbert
I Joined Every Class Action Lawsuit I Could Find, and So Can You by Andy Vasoyan
Vibe Coding Is Coming for Engineering Jobs by Will Knight

You can follow Zoë Schiffer on Bluesky at @zoeschiffer. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for "uncanny valley." We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: This is Zoë. Before we start, I want to take a chance to remind you that we really want to hear from you. If you have a question about AI, politics, or privacy that's been on your mind, or a topic that you wish we talked about on the show, you can write to us at UncannyValley@ And if you listen to our episodes and enjoy them, please, please rate them on your podcast app of choice. It really honestly helps people find us.
Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm WIRED's Director of Business and Industry, Zoë Schiffer. Today on the show we're diving into how AI chatbots are fueling disinformation about the ongoing protests in Los Angeles. In recent days, LA residents have taken to the streets to protest the Trump administration's immigration policies and the increasingly frequent ICE raids. Social media is filled with information and footage about the protests, and as people try to fact-check what's actually going on, AI chatbots like Grok and ChatGPT are delivering incorrect and at times inflammatory answers. We'll discuss how this pattern has been unfolding and the consequences it can have in this political moment. I'm joined today by Leah Feiger, Senior Politics Editor at WIRED. Leah, welcome to the show.

Leah Feiger: Thanks for having me, Zoë.

Zoë Schiffer: Since last weekend, these massive protests have broken out in Los Angeles, so maybe let's just start there with a very high level of what has been going on.

Leah Feiger: So last week, protests broke out in LA following ICE raids across the city, and at first it was smaller than people were claiming online. They were like, "There are hordes of people. They have taken over Los Angeles. This is wild." The truth of the matter was that these were in very specific areas and very much linked to ICE raids. But over the weekend things took a turn. Trump called in the National Guard, much to California Governor Gavin Newsom's chagrin, and there was a lot of back and forth online about states' rights versus federal rights versus how all of this was getting depicted. Lots of arrests, a lot of people really upset about this administration's pretty controversial immigration policies, and some pretty dismaying imagery of ICE raids, not just in LA but throughout the country. So this has been boiling up for quite some time, and LA is really one of the first places that we're seeing this kind of response.
Zoë Schiffer: Yeah, I mean, I think that's what really stood out to me from the beginning. It feels like kind of the first time that we've seen a groundswell of opposition to the Trump administration in a really full-throated way, in a way that I think we expected to see at various points throughout this administration since January. But we really haven't so far, despite the fact that what they've been doing and what they've been saying is pretty unusual compared to how politics normally run.

Leah Feiger: Absolutely. It's been kind of wild, and obviously there's been the Tesla Takedown protest movement and a couple of smaller protests that happened right when DOGE, Elon Musk's so-called Department of Government Efficiency, was kicking off. But you're totally right, there has not been anything in this specific way, and part of this is because it's just really incredibly personal. We're hearing stories from protesters about their family members being taken by ICE and not hearing from them again. There have been some amazing profiles across the country of people being shocked that their neighbors were taken off the streets. And again, the administration has been saying that they're going to be doing this the entire time. The big difference here is, one, the response from people in the community in LA, and two, the fact that this is now really becoming a states' rights versus federal issue.

Zoë Schiffer: And I feel like as this has been unfolding, even for you and me, we've been trying to figure out, like you said, what is really going on here? What is happening? And so one of the things that people have been doing, very 2025, is turning to chatbots, asking ChatGPT, asking Grok, "What's happening here?" So what are some of those questions that people are trying to get answered, and what is happening when they talk to chatbots about what's going on?

Leah Feiger: Okay. A little bit of a controversial take to get us started.
I think there's actually something incredibly pure about people seeing a photo or a video on social media and going, "Wait, is this real? Is this accurate?" That's actually kind of good. That to me is a good thing.

Zoë Schiffer: Yeah, media literacy.

Leah Feiger: Media literacy. We've been screaming about this for so long. But the devastating part of this is that these chatbots are just absolutely not up to the challenge. So can I give you an example here? Would that be okay?

Zoë Schiffer: Please, please.

Leah Feiger: Okay. So obviously there have been these people taking to the streets following these ICE raids, and conservative posters have been totally flooding their feeds with inaccurate information. They've been posting old protest footage, clips from video games, movies, claiming that there are all of these agitators. It's been a lot. And as a result, people have been turning to AI chatbots, Grok included. So in one of these specific cases, when the federal government sent in 2,000 National Guard troops, the SF Chronicle actually published a photo of these troops sleeping on the floor. These images were shared by Governor Newsom, who was like, "You literally sent these troops to our state that we did not ask for, that is fully inciting this situation, and now they actually have nowhere to sleep." Right after Newsom shared this, a ton of people were claiming on X and on Facebook that these images were fabrications, they were AI generated, or they were totally referencing a different event. And we're not just talking random posters. We're talking like Laura Loomer, queen of the conspiracists.

Zoë Schiffer: Oh, wow.

Leah Feiger: Yeah, exactly. Who has Trump's ear already. This became widespread. And so people went to X's Grok and they were like, "Grok, what is this?" And what did Grok tell them? No, no. Grok said these were not actually images from the protest in LA. It said they were from Afghanistan.

Zoë Schiffer: Oh, Grok, no.
Leah Feiger: They were like, "There's no credible support. This is misattribution." It was really bad. It was really, really bad. And then there was another situation where another couple of people were sharing these photos with ChatGPT, and ChatGPT was also like, "Yep, this is Afghanistan. This isn't accurate, etcetera, etcetera." It's not great.

Zoë Schiffer: I mean, don't get me started on this moment coming after a lot of these platforms have systematically dismantled their fact-checking programs and have decided to purposefully let through a lot more content. And then you add chatbots into the mix who, for all of their uses, and I do think they can be really useful, are incredibly confident. When they do hallucinate, when they do mess up, they do it in a way that is very convincing. You will not see me out here defending Google Search. Absolute trash, nightmare, but it's a little more clear when that's going astray, when you're on some random, uncredible blog, than when Grok tells you with complete confidence that you're seeing a photo of Afghanistan when you're not.

Leah Feiger: It's really concerning. I mean, it's hallucinating. It's fully hallucinating, but with the swagger of the drunkest frat boy that you've ever unfortunately been cornered by at a party in your life.

Zoë Schiffer: Nightmare. Nightmare. Yeah.

Leah Feiger: They're like, "No, no, no. I am sure. I have never been more sure in my life."

Zoë Schiffer: Absolutely. I mean, okay, so why do chatbots give these incorrect answers with such confidence? Why aren't we seeing them just say, "Well, I don't know, so maybe you should check elsewhere. Here are a few credible places to go look for that answer and that information."

Leah Feiger: Because they don't do that. They don't admit that they don't know, which is really wild to me.
There have actually been a lot of studies about this. A recent study of AI search tools by the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering instead incorrect or speculative answers." Really, really, really wild, especially when you consider the fact that there were so many articles during the election about, "Oh no, sorry, I'm ChatGPT and I can't weigh in on politics." You're like, well, you're weighing in on a lot now.

Zoë Schiffer: Okay, I think we should pause there on that very horrifying note, and we'll be right back. Welcome back to Uncanny Valley. I'm joined today by Leah Feiger, Senior Politics Editor at WIRED. Okay, so beyond just trying to verify information and footage, there have also been a bunch of reports about misleading AI-generated videos. There was a TikTok account that started uploading videos of an alleged National Guard soldier named Bob who'd been deployed to the LA protests, and you could see him saying false and inflammatory things, like the fact that the protesters are "chucking in balloons full of oil," and one of the videos had close to a million views. So I don't know, it feels like people have to become a little more adept at identifying this kind of fake footage, but it's hard in an environment that is inherently contextless, like a post on X or a video on TikTok.

Leah Feiger: Totally. And that TikTok live stream that you mentioned from Bob, that got debunked by the BBC. They took it down. That's gone. But people in response are like, "Oh my gosh, this is the mainstream media. They're just shutting this down. Here's this information." So it's like when you find the thing, when you debunk the thing, it is still not enough.
And I have to get back to the questions of media literacy, and that's perhaps why the AI chatbots getting it wrong are so upsetting to me, because people are actually, not everyone, but there are for sure some good-faith folks going, "Is this real? Please, someone tell me if this is real."

Zoë Schiffer: Oh, absolutely. Yeah, they want to know. I know. My poor mom, she showed me a video the other day of a bunch of identical baby sloths looking at the camera. She was like, "Aren't these so cute? I found this on Facebook." And I was like, "Mom, those are not real sloths. We need to move on." She was crushed.

Leah Feiger: I think though that AI animal slop is something that I may be okay with.

Zoë Schiffer: The one form of AI that Leah likes is the AI animals.

Leah Feiger: Nope, nope, I take it back. I take it back on the record. Anyway, lower stakes, but arguably still contributing to the fact that what's real? I don't know. Who's to say? And part of the thing is we've in some ways been here before. Five years ago was 2020 and the George Floyd protests across the country, and those were massive, and those were big, and we had a lot of issues of misinformation. We had a lot of people that were saying, "No, this is this. This is this video game. This is this." But five years later, the landscape has entirely changed. You still have the same bad actors, but it's now way more confusing. Trust in the media is down. Who do you turn to? Your trusty AI chatbot that has been integrated into every single social media app that you have personal access to.

Zoë Schiffer: Which has also stripped away the other fact-checking mechanisms. So here we are.

Leah Feiger: Yes.

Zoë Schiffer: We've seen this again and again. I mean, it's been interesting because Elon Musk famously laid off a large portion of the content moderation team at X, formerly Twitter.
He also stripped away some of the content moderation rules that the company had spent years weighing, putting in place, and debating internally. And as we've had big news events play out in real time, the kind of thing that Twitter was made for, a big earthquake, a war, protests, whatever it is, these moments where previously you would've seen everyone flock to X to figure out what was going on, you still see people doing that same behavior, but the ability to ascertain what is actually going on on that platform has declined so much in recent years. There is good information, but it's intermixed with all of this horrible, bad, false information, and it's really hard to tell what is what.

Leah Feiger: It is such a mess, and I am not an X poster anymore, but I do go on frequently to see what's up. It's trash. It's a combination right now, in this very specific moment, because Elon Musk is back and posting nonstop about space travel, so it's like we have the Elon Musk posts that are at the very top of my feed, we have the "LA is burning, the fire is spreading to the rest of America" posts, and then we have a couple of journalists still hanging out being like, "Read my story."

Zoë Schiffer: Getting no traction? Yeah.

Leah Feiger: And it's so sad. Yeah.

Zoë Schiffer: Yeah, I mean, if you are someone who's trying to become an X influencer, the kinds of things that you need to share are inherently a little divisive, a little inflammatory, a little shocking. In some cases, they're getting paid by the platform to post this kind of content. I think that there are a few guardrails in place. If you have a community note on a post, you're not supposed to get money for it, but the incentive structure, like you mentioned, is set up to encourage people to post the craziest thing they can possibly get away with, and that's what we're seeing right now.

Leah Feiger: Although I have to say, for what?
It's to line the pockets of these companies and these billionaire owners, but for the actual posters? I'm sure that many people have made the grift work, but there was this amazing New York Times article pretty recently about a very, very established-

Zoë Schiffer: Right.

Leah Feiger: Yeah, the poster on X. And he was like, "I'm a Republican and this is what I care about." He takes no vacations, no weekends. His entire day is on his computer, and he's making something like, it's less than $60,000 a year, to which I'm just like, what is the incentive? Is the incentive that amazing hit of serotonin when you have a tweet go viral? That can't pay for groceries. So the fact that Musk and Zuckerberg and all of these folks have managed to convince people that this is still the place to be, with more incendiary, inaccurate content, is arguably the greatest grift of all.

Zoë Schiffer: Yeah. Incredible. Okay, we're going to take another quick break, and when we come back, we'll share our recommendations for what to check out this week. Welcome back to Uncanny Valley. I'm Zoë Schiffer, WIRED's Director of Business and Industry. I'm joined today by WIRED's Senior Politics Editor, Leah Feiger. Before we take off, Leah, tell our listeners what they absolutely need to read on

Leah Feiger: I really, really stand by this recommendation. It's an article by Andy Vasoyan titled I Joined Every Class Action Lawsuit I Could Find, and So Can You. It scratched every single itch. I want to know, was I supposed to get that extra $5? I would like to know. And this writer did it. They did it. They are on the path. They talk to people who are involved in getting people their money, and this statistic will haunt me forever: something like only 4% of these class action lawsuit payouts are actually ever claimed by the people that deserve them. Devastating.

Zoë Schiffer: I know. I'm part of this. I do get these letters kind of a lot. What did I just get recently?
Leah Feiger: Zoë, you are part of the problem. Get that $7, Zoë.

Zoë Schiffer: I know.

Leah Feiger: They can Venmo now.

Zoë Schiffer: I did it once and it was like $2, and I was like, okay. I think my expectation was that I was going to get like $3,000, and I was like, that's not what occurred.

Leah Feiger: Okay. No, this is a long haul. This is a game. Okay, what's yours? What's your favorite thing?

Zoë Schiffer: My recommendation is a story we published this week by Will Knight, our AI reporter. It is about how vibe coding is coming for engineering jobs, and it is so, so good. We've been talking to engineers all across Silicon Valley, hearing about what they're calling the engineering apocalypse that is coming because AI coding agents are getting so, so good. But actually, when Will did the reporting, it's a lot more complicated. People said it was like giving a toddler a chainsaw, and that you could come up with really, really great things. Yes, you can code an entire app and website just by talking to a chatbot, but you can also create really buggy code, code that has all sorts of security vulnerabilities. And people were like, "It totally breaks down as soon as you care about the functionality of the app that you're building."

Leah Feiger: I'm in a cold sweat right now.

Zoë Schiffer: I know. I know. I was like, this is all of Leah's nightmares combined in one.

Leah Feiger: My actual horror story.

Zoë Schiffer: Leah, we should make you vibe code something for business.

Leah Feiger: Oh my God. I would sit there and start crying. Frankly, that would be good content. I am adding to the clickable hellscape that is online. This is good. This is full circle for us.

Zoë Schiffer: You're a TikTok influencer now. You heard it here first. That is our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is all about tech companies and why they're betting big on AI agents.
We're off next week, and we'll be back the following week. Adriana Tapia and Jordan Bell produced this episode. Amar Lal at Macro Sound mixed this episode. Jake Lummus was our New York studio engineer. Jordan Bell is our executive producer. Condé Nast's Head of Global Audio is Chris Bannon. And Katie Drummond is WIRED's Global Editorial Director.

AMD's MI350 Previewed, MI400 Seen as Real Inflection

Yahoo | an hour ago

AMD (NASDAQ:AMD) previewed its MI350 AI accelerators at Thursday's AI event, but Morgan Stanley argues the real turning point will come with next year's MI400 series.

Analyst Joseph Moore kept his Equal-Weight rating and $121 price target, noting that while the MI350 launch hit expectations, the focus remains on the rack-scale MI400/450 product for next year, which could provide the bigger inflection, if AMD can deliver. The event featured customer testimonials from Meta, Oracle, OpenAI, Microsoft, Cohere, and HUMAIN that were constructive but not thesis-changing, Moore said.

AMD also highlighted its rack-scale architecture and gave a sneak peek at the MI400 series, which early indications suggest could match Nvidia's forthcoming Vera Rubin GPUs in performance. However, Moore warns that near-term upside remains modest until the MI400 proves itself: AI upside is considerable longer term, but near-term products don't support high conviction; the MI400 may change the stakes, but it is still something of a show-me story.

OpenAI CEO Sam Altman's onstage appearance added credibility to AMD's forecast of tens of billions of dollars in AI revenue, Moore noted, even though no surprise customer deals were announced. The company underscored its aggressive M&A strategy, 25 acquisitions and investments over the past year, as evidence of its resourcefulness in chasing market share against much larger rivals.

Why It Matters: With the AI accelerator market dominated by Nvidia, the MI400's successful delivery could be the catalyst AMD needs to boost its data-center compute share and validate lofty long-term growth projections.

Closing: Investors will look for early technical benchmarks and customer commitments around the MI400, likely unveiled in detail at next year's Computex event, to gauge whether AMD can ignite its next AI growth phase.

This article first appeared on GuruFocus.
