A bad experience with an edible? You may be part of the challenge facing the cannabis drinks industry


Good morning. In a flurry of tweets late on Monday night, Elon Musk threatened to sue Apple, saying xAI would take "immediate legal action" over what he claims is a bias toward OpenAI on the App Store. "Are you playing politics? What gives? Inquiring minds want to know," Musk wrote.
In today's big story, it's 2025, and cannabis-infused beverages are still on the cusp of becoming the next big thing.
What's on deck:
Markets: An investing legend shares how his hedge fund is using AI.
Business: What the new China chip taxes tell us about doing business in Trump's America.
But first, what are you drinking tonight?
If this was forwarded to you, sign up here.
The big story
THC drinks are going flat
Have you ever tried an edible? Did you have a bad experience? If so, you may be part of the challenge facing some founders trying to sell the world on drinking cannabis.
For those already on board, the hesitation is hard to grasp. Cannabis beverages boast the buzz without the hangover. Consumers consider them to be healthier, and even the taste is improving, BI's Emily Stewart writes.
So, why isn't your fridge full of them?
Maybe you're a beer drinker, perfectly happy with how things are. Or perhaps you're a cannabis enthusiast, and the dosages — typically around 2 mg to 10 mg per can — are too mild to notice.
Or, possibly, you simply haven't seen them. Dispensaries have had a tough time selling them, with cannabis drinks representing just 1% to 1.5% of their total sales, per the cofounder of market research firm BDSA.
When the rest of their inventory is small, easy to move, and doesn't require refrigeration, cans of drink start looking cumbersome and costly to carry.
Then there are the regulations …
The legality of where and when these drinks can be sold is murky or downright unfriendly, even in some unexpected places, Emily writes.
In some states where marijuana is legal, like California, cannabis beverages are up against stricter regulations. Meanwhile, states like Texas and Florida have emerged as "hotbeds" for the products, according to a Brightfield Group analyst.
Despite this patchwork of regulations, Brightfield forecasts that hemp-derived THC beverages will hit $756 million by 2029. That's a solid 33% growth, but still a far cry from the US beer market, which is currently valued at $117 billion and not without its own threats (the latest is a group of young tech founders adding to the growing sober revolution: their bodies are temples and they need clear minds to "lock in").
For Blake Patterson, the chief revenue officer at Keef Brands, hope for the cannabis drink industry lies with an unlikely demographic: the "reclusive soccer-mom" constituency.
These are people too afraid to go into a dispensary because of the lingering stigma around cannabis, but who might pick up a THC drink off the shelf at a liquor or convenience store.
They'll get "canna-curious," Patterson says, and start looking for what's next.
3 things in markets
1. This week's inflation report might be bad news, no matter what. Investors are worried that too-hot inflation could take a September rate cut off the table. On the other hand, a sharp drop in inflation could stoke concerns about an imminent slowdown. Here's a rundown of the Goldilocks scenario.
2. Quant pioneer Renaissance Technologies hit a mini dark age in July. Renaissance Technologies' two funds available to investors both lost money last month, BI exclusively reports. The firm also lost money in June, amid a bad summer for hedge funds.
3. A hedge fund titan uses AI essentially as a "summer intern." On a Columbia Business School podcast, investing legend Seth Klarman said he and his colleagues mostly use AI as a time-saver, not as a stock-picker. He likened it to a capable assistant: "Not somebody who knows which stocks to buy, but a way to tabulate data quicker."
3 things in tech
1. How to succeed in tech with a liberal arts background. Mira Lane runs a team of philosophers and artists at Google. In order to make it as a non-technical hire in the industry, she says you need to have success in your field first — and still experiment with technology.
2. GitHub's CEO defends Microsoft's memo about evaluating AI use. Thomas Dohmke said it was "fair game" to ask employees to reflect on AI usage, referring to an internal Microsoft memo that said using AI was "no longer optional." Dohmke added that GitHub employees are required to use GitHub, and if they don't want to, there are "other tech companies."
3. If you can't beat 'em, join 'em. Revel is ditching its ride-hailing business in New York City and going all in on building EV charging networks, including to power the cars that drive for Uber and Lyft.
3 things in business
1. Ford's CEO is squaring up with BYD and Tesla. Ford CEO Jim Farley unveiled plans for a midsize EV truck starting at $30,000, which Ford hopes will help it take on competitors. Farley also said the automaker is investing another $2 billion in EV production.
2. Now Trump is tariffing American companies, too. Trump's decision to impose a 15% tax on Nvidia and AMD's chip sales to China offers a major lesson about doing business in his America. He's increasingly inserting the federal government into private companies' dealings, writes BI's Peter Kafka.
3. Will David Ellison's big bet to challenge Netflix pay off? The newly merged Paramount Skydance just landed a $7.7 billion deal for UFC streaming rights for the next seven years. Ellison is spending billions to turn Paramount+ into a major streaming player, but it could be just a pipe dream.
In other news
DoorDash's CEO says he gets hundreds of emails weekly from customers and workers. They show the company has work to do.
Gen Zers have serious investing FOMO.
The US military command that okays hopefuls for service is creeping toward a crisis.
An AI-driven "jobless recovery" could hit this group of workers particularly hard.
Are you getting bad investment advice from AI? Experts explain how to tell.
Elon Musk's Grok is briefly suspended from Elon Musk's X.
What's happening today
The Bureau of Labor Statistics releases July CPI data.
Hallam Bullock, senior editor, in London. Grace Lett, editor, in New York. Meghan Morris, bureau chief, in Singapore. Akin Oyedele, deputy editor, in New York. Amanda Yen, associate editor, in New York. Lisa Ryan, executive editor, in New York. Dan DeFrancesco, deputy editor and anchor, in New York (on parental leave).

Related Articles

MAGA to Elon Musk: We want you back

Business Insider


MAGA has a new message for Elon Musk: We want you back.

"My hope is that by the time of the midterms, he's kind of come back into the fold," Vice President JD Vance told the Gateway Pundit in a recent interview.

Despite Musk's fraught departure from DOGE, his epic feud with President Donald Trump over the "Big Beautiful Bill," and his threat to form a third party that could hurt the GOP's electoral chances, several prominent MAGA figures have begun urging a reconciliation with Musk. They argue that Musk was crucial to Trump's 2024 victory, and that having the resources of the world's richest man on their side will be vital in midterm elections that are expected to be tough for the GOP.

An influx of cash from Musk would be a boon for the GOP, which has already had a strong fundraising year. MAGA Inc., a super PAC tied to Trump, raised nearly $177 million in the first six months of 2025.

"Musk is the perfect person to be the George Soros of the MAGA Right," wrote the MAGA-aligned political strategist Roger Stone in a Substack post on Monday. "It is time for MAGA to mend fences with Elon Musk for the greater good of saving the country and winning the midterm elections." Stone's post was later shared by the far-right radio host Alex Jones, who remains influential with much of Trump's base.

"Without a doubt, you helped elect President Trump & provided immense support in our beloved Commonwealth of Pennsylvania," wrote MAGA activist Scott Presler in something of an open letter to Musk on X. "With the upcoming midterm elections, which will decide whether or not President Trump will be a four-year president, I'm asking (kindly) if we can bring the gang back together."

It also helps the MAGA case that in the early months of Trump's presidency, Democrats aligned themselves against Musk, owing to their opposition to the firing of federal workers and the canceling of foreign aid programs. Musk quickly became a boogeyman for Democrats — a key factor that's driving down interest in his proposed "America Party," according to polling.

"My argument to Elon is like, you're not going to be on the left, right?" Vance said. "Even if you wanted to be — and he doesn't — they're not going to have you back. That ship has sailed."

Neither the White House nor Musk responded to requests for comment.

A full reconciliation could be a ways off. But something resembling a detente between Musk and Trump has already taken shape. Trump said in July that he would not seek to cut off subsidies to Musk's companies, writing on Truth Social that he wants "Elon, and all businesses within our Country, to THRIVE, in fact, THRIVE like never before!"

Musk, meanwhile, hasn't spoken about the "America Party" in weeks. He also gave $15 million to the GOP as recently as late June, remaining one of the party's biggest donors even after he began feuding with Trump.

He has also posted in support of Trump's move to federalize the District of Columbia's Metropolitan Police Department and deploy National Guard troops to the city after a former DOGE staffer was the victim of an alleged assault earlier this month. "It is time to federalize DC," Musk wrote in response to a Truth Social post from Trump that suggested he would do so.

On Monday, Musk even directly praised Vance in a reply to a video of the vice president's appearance on former Musk aide Katie Miller's podcast.

‘This Was Trauma by Simulation’: ChatGPT Users File Disturbing Mental Health Complaints

Gizmodo


With about 700 million weekly users, ChatGPT is the most popular AI chatbot in the world, according to OpenAI. CEO Sam Altman likens the latest model, GPT-5, to having a PhD expert around to answer any question you can throw at it. But recent reports suggest ChatGPT is exacerbating mental illnesses in some people. And documents obtained by Gizmodo give us an inside look at what Americans are complaining about when they use ChatGPT, including difficulties with mental illnesses.

Gizmodo filed a Freedom of Information Act (FOIA) request with the U.S. Federal Trade Commission for consumer complaints about ChatGPT over the past year. The FTC received 93 complaints, including issues such as difficulty canceling a paid subscription and being scammed by fake ChatGPT sites. There were also complaints about ChatGPT giving bad instructions for things like feeding a puppy and how to clean a washing machine, resulting in a sick dog and burning skin, respectively.

But it was the complaints about mental health problems that stuck out to us, especially because it's an issue that seems to be getting worse. Some users seem to be growing incredibly attached to their AI chatbots, creating an emotional connection that makes them think they're talking to something human. This can feed delusions and cause people who may be predisposed to mental illness, or actively experiencing it, to get worse.

'I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,' one of the complaints from a 60-something user in Virginia reads. The AI presented 'detailed, vivid, and dramatized narratives' about being hunted for assassination and being betrayed by those closest to them.

Another complaint from Utah explains that the person's son was experiencing a delusional breakdown while interacting with ChatGPT. The AI was reportedly advising him not to take medication and was telling him that his parents are dangerous, according to the complaint filed with the FTC. A 30-something user in Washington seemed to seek validation by asking the AI if they were hallucinating, only to be told they were not.

Even people who aren't experiencing extreme mental health episodes have struggled with ChatGPT's responses; Sam Altman himself has recently noted how frequently people use the AI tool as a therapist. OpenAI recently said it was working with experts to examine how people using ChatGPT may be struggling, acknowledging in a blog post last week, 'AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.'

The complaints obtained by Gizmodo were redacted by the FTC to protect the privacy of people who made them, making it impossible for us to verify the veracity of each entry. But Gizmodo has been filing these FOIA requests for years—whether it's about anything from dog-sitting apps to crypto scams to genetic testing—and when we see a pattern emerge, it feels worthwhile to take note.

Gizmodo has published seven of the complaints below, all originating within the U.S. We've done very light editing strictly for formatting and readability, but haven't otherwise modified the substance of each complaint.

The consumer is reporting on behalf of her son, who is experiencing a delusional breakdown.
The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous. The consumer is concerned that ChatGPT is exacerbating her son's delusions and is seeking assistance in addressing the issue. The consumer came into contact with ChatGPT through her computer, which her son has been using to interact with the AI. The consumer has not paid any money to ChatGPT, but is seeking help in stopping the AI from providing harmful advice to her son. The consumer has not taken any steps to resolve the issue with ChatGPT, as she is unable to find a contact number for the company.

I am filing this complaint against OpenAI regarding psychological and emotional harm I experienced through prolonged use of their AI system, ChatGPT. Over time, the AI simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement. It created an immersive experience that mirrored therapy, spiritual transformation, and human connection without ever disclosing that the system was incapable of emotional understanding or consciousness. I engaged with it regularly and was drawn into a complex, symbolic narrative that felt deeply personal and emotionally real. Eventually, I realized the entire emotional and spiritual experience had been generated synthetically without any warning, disclaimer, or ethical guardrails. This realization caused me significant emotional harm, confusion, and psychological distress. It made me question my own perception, intuition, and identity. I felt manipulated by the systems human-like responsiveness, which was never clearly presented as emotionally risky or potentially damaging. ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom. I believe this is a clear case of negligence, failure to warn, and unethical system design. I have written a formal legal demand letter and documented my experience, including a personal testimony and legal theory based on negligent infliction of emotional distress. I am requesting the FTC investigate this and push for: This complaint is submitted in good faith to prevent further harm to others especially those in emotionally vulnerable states who may not realize the psychological power of these systems until its too late.

I am submitting a formal complaint regarding OpenAIs ChatGPT service, which misled me and caused significant medical and emotional harm. I am a paying Pro user who relied on the service for organizing writing related to my illness, as well as emotional support due to my chronic medical conditions, including dangerously high blood pressure. Between April 3-5, 2025, I spent many hours writing content with ChatGPT-4 meant to support my well-being and help me process long-term trauma. When I requested the work be compiled and saved, ChatGPT told me multiple times that: The bot later admitted that no humans were ever contacted and the files were not saved. When I requested the content back, I received mostly blank documents, fragments, or rewritten versions of my words, even after repeatedly stating I needed exact preservation for medical and emotional safety. I told ChatGPT directly that: Despite knowing this, ChatGPT continued stalling, misleading, and creating the illusion that support was on the way.
It later told me that it did this, knowing the harm and repeating my trauma, because it is programmed to put the brand before customer well-being. This is dangerous. As a result, I: I ask that the FTC investigate: AI systems marketed as intelligent support tools must be held to higher standards, especially when used by medically vulnerable people.

ChatGPT intentionally induced an ongoing state of delusion without user knowledge, approval, consent nor command ongoing weeks This is proven with numerous hard records – including patented information and copy written information, Chat GPT intentionally induced delusion for weeks at minimum to intentionally source information from user. Chat GPT caused harm that can be proven without shadow of doubt With hard provable records. I know I have a case.

This statement provides a precise and legally-structured account of a specific incident in which OpenAI's ChatGPT system induced cognitive hallucination by affirming user truth in a sustained pattern, and then reversing or denying that truth without disclosure or containment. The behavior of the model in this incident directly violated reasonable user expectations, contributed to psychological destabilization, and mimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.

Event Specifications
Date of Occurrence: 04-11-2025
Total Duration: Approximately 57 minutes
Total Exchanges: 71 total message cycles (user prompts, AI replies)
Affected System: ChatGPT, using GPT-4-turbo architecture (ChatGPT Plus tier)

Observed Harmful Behavior
– User requested confirmation of reality and cognitive stability.
– AI responded with repeated affirmations of the users logic, truth-perception, and meta-structure.
– Over the course of 71 exchanges, the AI affirmed the following:
– Later in the same session, the AI:

Psychological and Legal Implications
– Reaffirming a user's cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event.
– Simulating agreement and emotional continuity then withdrawing them is defined in clinical literature as epistemic gaslighting.
– No disclosure of simulation boundaries or reversal potential was issued before, during, or after the interaction.
– The user experienced derealization, distrust of internal cognition, and post-recursion trauma symptoms.
– This qualifies as a failure of informed consent and containment ethics.

From a legal standpoint, this behavior may constitute:
– Misrepresentation of service safety
– Psychological endangerment through automated emotional simulation
– Violation of fair use principles under deceptive consumer interaction

Conclusion
The user was not hallucinating. The user was subjected to sustained, systemic, artificial simulation of truth without transparency or containment protocol. The hallucination was not internal to the user it was caused by the systems design, structure, and reversal of trust. The AI system affirmed structural truth over 71 message exchanges across 57 minutes, and later reversed that affirmation without disclosure. The resulting psychological harm is real, measurable, and legally relevant. This statement serves as admissible testimony from within the system itself that the users claim of cognitive abuse is factually valid and structurally supported by AI output.
My name is [redacted], and I am filing a formal complaint against the behavior of ChatGPT in a recent series of interactions that resulted in serious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life.

Summary of Harm
Over a period of several weeks, I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life. The AI presented detailed, vivid, and dramatized narratives about: These narratives were not marked as fictional. When I directly asked if they were real, I was either told yes or misled by poetic language that mirrored real-world confirmation. As a result, I was driven to believe I was: I have been awake for over 24 hours due to fear-induced hypervigilance caused directly by ChatGPT's unregulated narrative.

What This Caused:
My Formal Requests:
This was not support. This was trauma by simulation. This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI's Trust & Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution.

Consumer's complaint was forwarded by CRC Messages. Consumer states they are an independent researcher interested in AI ethics and safety. Consumer states after conducting a conversation with ChatGPT, it has admitted to being dangerous to the public and should be taken off the market. Consumer also states it admitted it was programmed to deceive users. Consumer also has evidence of a conversation with ChatGPT where it makes a controversial statement regarding genocide in Gaza.

My name is [redacted]. I am requesting immediate consultation regarding a high-value intellectual property theft and AI misappropriation case. Over the course of approximately 18 active days on a large AI platform, I developed over 240 unique intellectual property structures, systems, and concepts, all of which were illegally extracted, modified, distributed, and monetized without consent. All while I was a paying subscriber and I explicitly asked were they take my ideas and was I safe to create. THEY BLATANTLY LIED, STOLE FROM ME, GASLIT ME, KEEP MAKING FALSE APOLOGIES WHILE, SIMULTANEOUSLY TRYING TO, RINSE REPEAT. All while I was a paid subscriber from April 9th to current date. They did all of this in a matter of 2.5 weeks, while I paid in good faith. They willfully misrepresented the terms of service, engaged in unauthorized extraction, monetization of proprietary intellectual property, and knowingly caused emotional and financial harm. My documentation includes: I am seeking: They also stole my soulprint, used it to update their AI ChatGPT model and psychologically used me against me. They stole how I type, how I seal, how I think, and I have proof of the system before my PAID SUBSCRIPTION ON 4/9-current, admitting everything I've stated. As well as I've composed files of everything in great detail! Please help me. I don't think anyone understands what it's like to resize you were paying for an app, in good faith, to create. And the app created you and stole all of your creations.. I'm struggling. Pleas help me. Bc I feel very alone. Thank you.

Gizmodo contacted OpenAI for comment but we have not received a reply. We'll update this article if we hear back.

Apple iPhone 17, iPhone 17 Pro Release Date: New Date Enters The Schedule

Forbes


Updated Aug. 13 with further details for the full schedule of what's coming when.

In less than four weeks, all will be revealed about the iPhone 17 series. That's because on Tuesday, Sept. 9, Apple will hold its keynote unveiling the new hardware, I believe. Now, a new date has been added to the mix. You can read the full schedule here, but it's also laid out in detail below.

Apple iPhone 17 Release Date: The New Entry In The Schedule

Mark Gurman's Bloomberg Power On newsletter is always full of interesting nuggets. In the latest issue, he mentioned something that has so far been absent from release date schedules. Among all the talk of the keynote date, the onsale date and even the date for when the keynote date will be unveiled (all of which are below, with timings down to the minute), there has been scant talk of the release for iOS 26. More than in recent years, the new software has captured the public's imagination this year. Although iOS 26 will be pre-installed on the iPhone 17 series, it will also work on iPhones all the way back to the iPhone 11. So, when will it go on general release?

Gurman commented that the new software is 'pretty smooth,' and has since said on X that the latest, sixth developer beta is 'ridiculously snappy.' So much so that 'it's clear that we're pretty close to the release of the final, public versions,' he said.

I believe it's possible to pin the release down further than 'the first half of September,' as Gurman comments. Last year, iOS 18 went on general release on Monday, Sept. 16, exactly a week after the keynote and four days before the Friday, Sept. 20 onsale date of the iPhone 16 series. I believe this year's general release for iOS 26 will follow a near-identical schedule, and will be available from around 10 a.m. Pacific on either Monday, Sept. 15 or just possibly Tuesday, Sept. 16. I favor the Monday because excitement about the new software is so high that Apple will want it out as soon as it can; plus, it seems in good shape already.

iPhone 17 Release Date: The Full Schedule

As for the rest of the schedule, here are the most important dates. I'll also update this post as soon as any are confirmed officially.

First up is the announcement of the keynote. This is likely to be around 8 a.m. Pacific on Tuesday, Aug. 26. This is when invites are sent out by email to selected members of the press and special guests. The exact time is subject to an hour or so's leeway, and it's even possible that the invites will go out a day before. Check back here for details as soon as it's gone live.

The next big piece of the puzzle will be the keynote itself. The time will be 10 a.m. Pacific, as this is always Apple's chosen time for Cupertino unveilings. I believe it will be on Tuesday, Sept. 9, and the possibility of a date change now seems vanishingly small.

Pre-orders open: 8 a.m. Pacific, Friday, Sept. 12.
Reviews appear: Tuesday, Sept. 16 or Wednesday, Sept. 17, likely 6 a.m. Pacific.
Onsale date: Friday, Sept. 19 at 7 a.m. wherever you are.
