Synthesia: The AI Avatar Generator Rethinking Corporate Communication

Bloomberg · 18 hours ago
London-based Synthesia uses AI to create lifelike video avatars based on real humans that can be deployed in a range of scenarios, including customer support, sales, and staff training. The company is valued at over $2 billion and serves 80 percent of Fortune 100 companies. Bloomberg's Tom Mackenzie spoke exclusively to Victor Riparbelli, Synthesia's co-founder and CEO, about the company's tech and what it means for jobs. (Source: Bloomberg)

Related Articles

HIVE Digital Technologies Ltd (HIVE) Q1 2026 Earnings Call Highlights: Record Revenue and ...

Yahoo · an hour ago

Release Date: August 14, 2025

Positive Points
- HIVE Digital Technologies Ltd (NASDAQ:HIVE) reported a record quarter with over $45 million in total revenue, primarily driven by Bitcoin mining operations.
- Earnings per share grew 206% year over year.
- The company's strategic expansion in Paraguay has been transformative, allowing it to rapidly scale its Bitcoin mining operations.
- HIVE maintains a strong balance sheet with $24.6 million in cash and $47.3 million in digital currencies.
- A focus on renewable energy and sustainable practices positions the company well for future growth, particularly in the AI and HPC sectors.

Negative Points
- Bitcoin price volatility poses a risk to HIVE's financial performance, as evidenced by the significant non-cash revaluation of Bitcoin on its balance sheet.
- High depreciation charges from the purchase of new GPU and ASIC chips for the AI and Bitcoin buildout could weigh on profitability.
- Expansion and scaling efforts require significant capital investment, which could strain financial resources if not managed carefully.
- The growth strategy involves complex operations across multiple countries, which may present logistical and regulatory challenges.
- Competition in the Bitcoin mining and AI sectors is intensifying, which could pressure HIVE's market position and margins.

Q&A Highlights

Q: Can you provide an overview of HIVE's financial performance for Q1 2026?
A: Aiden Killick, President and CEO, highlighted that HIVE had a record quarter with over $45 million in total revenue, 90% of which came from Bitcoin mining operations and 10% from its HPC AI business. The company achieved a gross operating margin of 38%, yielding about $15.8 million in cash flow from operations, and reported net income of $35 million with $44.6 million in adjusted EBITDA.

Q: How is HIVE managing its Bitcoin holdings, and what strategies are in place for future growth?
A: Killick explained that HIVE ended the quarter with 435 Bitcoin on the balance sheet and has a Bitcoin pledge strategy that allows it to buy Bitcoin back at zero interest. This strategy has enabled HIVE to scale its Bitcoin mining business without dilution or taking on debt, effectively deploying $200 million worth of CapEx.

Q: What are the key developments in HIVE's expansion efforts, particularly in Paraguay?
A: Killick noted that HIVE has significantly expanded its operations in Paraguay, completing phase one of the expansion ahead of schedule. The company is currently operating at over 15 exahash and is fully funded to reach 25 exahash by American Thanksgiving. The expansion is part of a strategy to maintain a 440 megawatt green-energy footprint for Bitcoin mining.

Q: How does HIVE's AI and HPC business contribute to its overall strategy?
A: Craig Tavares, President of Buzz HPC, explained that the AI and HPC business is rapidly scaling, with a target of reaching $100 million in ARR. The company operates over 5,000 GPUs and is focused on providing a full suite of AI infrastructure services, leveraging its existing data centers and renewable energy sources.

Q: What are HIVE's future plans for data center expansion and AI infrastructure?
A: Tavares said HIVE is expanding its data center footprint with recent acquisitions in Toronto and Sweden. These facilities will support the company's sovereign AI strategy and are expected to go live next year. The Toronto data center, in particular, will be a tier 3 facility using liquid cooling infrastructure to support high-density GPU clusters.

For the complete transcript of the earnings call, please refer to the full earnings call transcript. This article first appeared on GuruFocus.
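Taken together, the call's headline figures imply a rough segment split. Below is a minimal back-of-the-envelope sketch in Python; the only inputs are the numbers quoted above, and since revenue was "over $45 million," the derived figures are approximate lower bounds rather than reported segment data.

# Back-of-the-envelope check on the segment split quoted in the call.
# Inputs are the figures stated above; "over $45 million" means the
# derived numbers are approximate lower bounds, not reported figures.
total_revenue = 45_000_000          # USD, "over $45 million" total revenue
btc_share, hpc_share = 0.90, 0.10   # split stated on the call

btc_revenue = total_revenue * btc_share   # ~$40.5M implied from Bitcoin mining
hpc_revenue = total_revenue * hpc_share   # ~$4.5M implied from HPC AI

gross_margin = 0.38                          # gross operating margin from the call
gross_profit = total_revenue * gross_margin  # ~$17.1M at exactly $45M revenue
# The call reports ~$15.8M cash flow from operations, so the 38% margin
# likely applies to a somewhat different base than the headline revenue.

print(f"Bitcoin mining (implied): ${btc_revenue:,.0f}")
print(f"HPC AI (implied):         ${hpc_revenue:,.0f}")
print(f"38% of $45M revenue:      ${gross_profit:,.0f}")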

White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days

Yahoo · 2 hours ago

The White House AI advisor discussed "AI psychosis" on a recent podcast. David Sacks said he doubted the validity of the concept. He compared it to the "moral panic" that surrounded earlier tech leaps, like social media.

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown?

David Sacks, the White House official spearheading America's AI policies, doesn't think so. President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday.

While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to professional therapists. A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, had used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks doubted the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI."

Sacks then referred to a recent article featuring a psychiatrist who said they didn't believe using a chatbot inherently induced "AI psychosis" unless other risk factors, including social and genetic ones, were involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said. "I think it's fair to say we're in the midst of a mental health crisis in this country." Sacks attributed the crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot."

Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

Read the original article on Business Insider.

4 Things Schools Need To Consider When Designing AI Policies

Forbes · 2 hours ago

Artificial intelligence has moved from Silicon Valley boardrooms into homes and classrooms across America. A recent Pew Research Center study reveals that 26% of American teenagers now use AI tools for schoolwork, twice the number from two years prior. Many schools are rushing to establish AI policies. The result? Some are creating more confusion than clarity by focusing solely on preventing cheating while ignoring the broader educational opportunities AI presents.

The challenge shouldn't be whether to allow AI in schools; it should be how to design policies that strike a balance between academic integrity and practical preparation for an AI-driven future. Here are four essential considerations for effective school AI policies.

1. Address Teacher AI Use, Not Just Student Restrictions

The most significant oversight in current AI policies? They focus almost exclusively on what students can't do while completely ignoring teacher usage. This creates confusion and sends mixed messages to students and families.

Most policies spend paragraphs outlining student restrictions but fail to answer basic questions about educator usage: Can teachers use AI to create lesson plans? Are educators allowed to use AI for generating quiz questions or providing initial feedback on essays? What disclosure requirements exist when teachers use AI-generated content?

When schools prohibit students from using AI while allowing teachers unrestricted access, the message becomes hypocritical. Students notice when their teacher presents an AI-generated quiz while simultaneously forbidding them from using AI for research. Parents wonder why their children face strict restrictions while educators operate without clear guidelines.

If students are required to disclose AI usage in assignments, teachers should identify when they've used AI for lesson materials. This consistency builds trust and models responsible AI integration.

2. Include Students in AI Policy Development

Most AI policies are written by administrators who haven't used ChatGPT for homework or witnessed peer collaboration with AI tools. This top-down approach creates rules that students either ignore or circumvent entirely.

When we built AI guidelines for WITY, our AI teen entrepreneurship platform at WIT - Whatever It Takes, we worked directly with students. The result? Policies that teens understand and respect because they helped create them.

Students bring critical information about real-world AI use that administrators often miss. They know which platforms their classmates use, how AI supports various subjects, and where current rules create confusion. When students participate in policy creation, compliance increases significantly because the rules feel collaborative rather than punitive.

3. Balance AI Guardrails With Innovation Opportunities

Many AI policies resemble legal warnings more than educational frameworks. Fear-based language teaches students to view AI as a threat rather than a powerful tool requiring responsible use.

Effective policies reframe restrictions as learning opportunities. Instead of "AI cannot write your essays," try "AI can help you brainstorm and organize ideas, but your analysis and voice should drive the final work." Schools that impose blanket bans on AI miss opportunities to prepare students for careers where AI literacy will be essential.

AI access can also vary dramatically among students. While some students have premium ChatGPT subscriptions and access to the latest tools, others may rely solely on free versions or school-provided resources. Without addressing this gap, AI policies can inadvertently increase educational inequality.

4. Build AI Literacy Into Curriculum and Family Communication

In an AI-driven economy, rules alone don't prepare students for a future where AI literacy is necessary. Schools must teach students to think critically about AI outputs, understand the bias in AI systems, and recognize the appropriate applications of AI across different contexts.

Parents often feel excluded from AI conversations at school, creating confusion about expectations. This is why schools should explain their AI policies in plain language, provide examples of responsible use, and offer resources for parents who want to support responsible AI use at home. When families understand the educational rationale behind AI integration, including teacher usage and transparency requirements, they become partners in developing responsible use habits rather than obstacles to overcome.

AI technology changes rapidly, making static policies obsolete within months. Schools should schedule annual policy reviews that include feedback from students, teachers, and parents about both student and teacher AI usage.

AI Policy Assessment Checklist

School leaders should evaluate their current policies against these seven criteria:

- Teacher Guidelines: Do policies clearly state when and how teachers can use AI? Are disclosure requirements consistent between students and educators?
- Student Input: Have students participated in creating these policies? Do rules reflect actual AI usage patterns among teens?
- Equitable Access: Can all students access the same AI tools, or do policies create advantages for families with premium subscriptions?
- Family Communication: Can parents easily understand the policies? Are expectations clear for home use? Are there opportunities for parent workshops?
- Innovation Balance: Do policies encourage responsible experimentation, or do they focus only on restrictions? Does the policy prepare students for the AI-driven workforce?
- Regular Updates: Is there a scheduled review process as AI technology evolves? Does the school welcome feedback from students, teachers, and parents?
- Skills Development: Do policies include plans for teaching AI literacy alongside restrictions? Who teaches this class or workshop?

Moving Forward: AI Leadership

The most effective approach treats students as partners, not adversaries. When teens help create the rules they'll follow, when teachers model responsible usage, and when families understand the educational reasoning behind policies, AI becomes a learning tool rather than a source of conflict. Schools that embrace this collaborative approach will produce graduates who understand how to use AI ethically and effectively, exactly the capabilities tomorrow's economy demands.
