Diddy's comeback odds? Slim. But some humility can help, crisis PR experts say.
Shortly after Thursday's bombshell verdict that allowed Sean "Diddy" Combs to dodge a possible life sentence, his lead defense attorney, Marc Agnifilo, called the decision a "great victory" for the hip-hop mogul.
But some crisis PR experts and an entertainment attorney told Business Insider that Combs' reputation may be beyond rehabilitation.
The often-wrenching courtroom testimony — including the ugly details of his relationship with R&B singer Cassie Ventura and the drug-fueled "freak off" sex performances at the center of the trial — will be hard for the public to forget.
"Racketeering dodge or not, the pimp transport convictions lock him in society's septic tank," Eric Schiffer, the CEO of Reputation Management Consultants, said.
Combs was acquitted of the top sex-trafficking and racketeering conspiracy charges, but the crimes he was convicted of — two Mann Act charges related to transporting his victims for prostitution — will still "make him a legal leper," said Schiffer. (Combs faces up to 20 years behind bars, though legal experts expect his sentence to be much less.)
"Rehabbing his image will be an uphill battle," said William DiAntonio, the CEO of reputation management firm Reputation911. "The court of public opinion often operates differently than a court of law, and the details that emerged during the trial, even those not leading to convictions, can be incredibly damaging."
Part of the problem for Combs will be the visuals connected with the case, in particular the hotel security footage of him physically assaulting Ventura, the government's key sex-trafficking trial witness.
The day after the verdict, the New York Post's front page labeled Combs the "Notorious P.I.G." and a "baby oil-obsessed woman beater," over a still from the video, and a number of celebrities, including Kesha and Mariska Hargitay, expressed support for Ventura.
"That video is going to be hard to forget," Evan Nierman, the CEO of crisis-PR firm Red Banyan, said. "It's seared into the consciousness of the public."
Associating with that image is a risk that businesses and celebrities — including those who once saw Combs as a powerful kingmaker — won't want to take.
"Industry A-listers won't risk a photo-op; one Getty watermark equals a brand bleed," Schiffer said. "Sync licensors will keep treating his catalog like a radioactive dirty bomb."
Combs' income streams have largely dried up since sex abuse allegations against him began to surface in 2023, with his former business partner Diageo saying that the accusations "make it impossible for him to continue to be the 'face' of anything."
The civil lawsuits piling up against him won't help any reputational rehab, either. Combs faces more than 50 suits accusing him of sexual assault, rape, drugging, and other forms of violence. Beyond denying the criminal charges the government brought, Combs has also denied any allegations of sexual assault.
"Sean Combs isn't chasing headlines — he's focused on what matters: his life, his family, and the challenging road ahead," a spokesperson for Combs told Business Insider. "Commentators can weigh in, but history has proven he's never been one to count out."
From criminal to family man
A comeback for Combs may not be impossible.
"If he'd been convicted on the more serious charges, there would be no point in even making an attempt to rehabilitate his image," Nierman said, but "if you look at what the convictions cover, there's no reason why he can't actually rebound from this."
Part of the reason, he said, was that Combs' reputation never revolved around him being a "nice sweet guy."
"He was the founder of Bad Boy Records, not Altar Boy Records," he added.
Even Combs' defense attorneys painted the music tycoon as a "flawed" man with a violent side, arguing that domestic violence is not sex trafficking.
Nierman pointed to the trial between Johnny Depp and Amber Heard, in which the former spouses sued each other for defamation, as analogous.
The trial revealed "really horrific things" and "really embarrassing details of messy lives," he said, "yet, when Johnny Depp got a good outcome in court, he was instantly able to jump back into the Hollywood limelight."
Depp is starring in the thriller "Day Drinker," which is set to be released later this year.
Erasing the image of Combs as a sex-obsessed, violent abuser will be crucial, and replacing that image with one of a family man may be the best way forward, Nierman said.
It's an image Combs may already be working on: His mother and children were often in court, showing their unwavering support, and he traded in his flashy bling and dark sunglasses for gray hair and sweaters, projecting an image of a run-of-the-mill dad.
Combs' reaction in the aftermath of the verdict will also be key to any sort of revival, other experts say.
"If he stays under the radar and attempts to run his businesses with a different figure heading his various companies, he can avoid further backlash," Camron Dowlatshahi, a Los Angeles entertainment attorney with Mills Sadat Dowlat LLP, said. It will never be a full recovery, though, he added.
DiAntonio agreed, saying that if "Combs were to publicly gloat or position the verdict as a full exoneration, it would likely backfire."
"This is not a time for celebration," he added. "Moving forward with humility, acknowledgment, and a visible commitment to change would be far more effective."
Combs' supporters, however, would disagree.
After the verdict was announced, people jailed with him at the federal detention center in Brooklyn greeted Combs with a standing ovation, while outside the courtroom, fans celebrated with cheers and — in a nod to the thousands of bottles found in Combs' homes — baby oil.