Prime Focus Technologies to showcase CLEAR® AI Agents and AI Applications for Postproduction and Content Supply Chain at Broadcast Asia 2025

Yahoo · May 21, 2025

LOS ANGELES, May 21, 2025 /PRNewswire/ -- Prime Focus Technologies (PFT), the creator of CLEAR®, is excited to announce its participation at Broadcast Asia (BCA) in Singapore, where it will showcase its CLEAR® AI Agents and AI Applications—designed to deliver real results in postproduction and content supply chain workflows. CLEAR® AI is purpose-built for video content, and PFT will demonstrate how these innovations enable visible ROI on AI investments today.
"In a region as diverse as APAC, AI must do more than automate—it must understand context, culture, and workflow nuances, said Anupam Sharma, Senior Vice President & Head of Sales – APAC, PFT. "At PFT, our AI Agents and AI Applications empower media companies to scale operations, accelerate time-to-market, and tap into new audience segments—without ballooning costs. This is AI that drives reach and revenue across your content portfolio."
At BCA 2025, visitors at Meeting Room #5J3-8 will see:
Content Discovery AI Agents: CLEAR® AI's Search & Metadata Agents go beyond keywords to empower teams to retrieve assets for powerful storytelling. Our AI Agents make it easier than ever to repurpose content, enabling new monetization channels and driving additional revenue.
Content Studio AI Agents: CLEAR® AI's Content Creation Action Agents automate the time-consuming tasks of snackable content creation—from generating highlights to automating the editing process. This accelerates time-to-market and drives higher viewership and engagement. With proven success, including a 4X YouTube viewership increase and 22X revenue growth for a major content network, our AI Agents are designed to turn your content into a revenue generator.
Content Automation AI Agents: CLEAR® AI Automation Agents take the heavy lifting out of operations—whether automating segmentation, ensuring compliance, or seamlessly adapting assets for global distribution. Experience a 25% increase in viewer retention, a 15% boost in ad revenue, and a reduction in manual compliance review time by 80%.
Beyond AI Agents, PFT will feature applications like AI-powered video collaboration, smart MAM, and supply chain automation, empowering teams to achieve more with fewer resources and driving productivity while cutting operational costs.
Media companies don't have time for experiments that don't move the needle. Visit us at BCA, Meeting Room #5J3-8, and see how CLEAR® AI delivers real results—no fluff.
About Prime Focus Technologies
Prime Focus Technologies (PFT) is the creator of CLEAR®. It offers streaming platforms, studios, and broadcasters AI-led technology and media services powered by the cloud—enabling creativity, agility, and most importantly, revenue growth. PFT partners with major players such as Channel 4, ITV, Sinclair Broadcast Group, A&E Networks, Warner Bros. Discovery, Paramount, Crunchyroll, Insight TV, JioStar, BCCI, Tegna, Amazon MGM Studios, and more.
For more information, visit: www.primefocustechnologies.com.
Press Contact
T Shobhana
Senior Vice President & Global Head of Marketing
Prime Focus Technologies
Los Angeles, CA
t.shobhana@primefocus.com
View original content: https://www.prnewswire.com/apac/news-releases/prime-focus-technologies-to-showcase-clear-ai-agents-and-ai-applications-for-postproduction-and-content-supply-chain-at-broadcast-asia-2025-302461030.html
SOURCE Prime Focus Technologies

Related Articles

AI Safety: Beyond AI Hype To Hybrid Intelligence

Forbes · 9 minutes ago

The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o hallucinates on 61% of questions in SimpleQA, a benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025, with artificial intelligence systems confidently generating false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem: in the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers using evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to people without a driver's license.

The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it seeks to promote scientist AI as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely towards human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. 47% of business leaders cite people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded. Organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by enabling practitioners to spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology, four interconnected practices that create accountability without stifling innovation:

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.

Genpact Limited Board Declares Quarterly Cash Dividend

Yahoo · 14 minutes ago

NEW YORK, June 6, 2025 /PRNewswire/ -- Genpact Limited (NYSE: G), a global advanced technology services and solutions company, today announced that its board of directors has declared a cash dividend of $0.17 per common share for the second quarter of 2025. The dividend is payable on June 30, 2025 to shareholders of record as of the close of business on June 18, 2025. The declaration of any future dividends will be at the discretion of the board of directors.

About Genpact
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Safe Harbor
Statements in this press release regarding Genpact's intention to pay dividends on its common shares from time to time are forward-looking statements. There are a number of important factors that could cause actual events to differ materially from those suggested or indicated by such forward-looking statements. These include, among others, Genpact's cash flows from operations, macroeconomic uncertainty, U.S. and global trade and tariff policies and general economic conditions, any deterioration in the global economic environment and its impact on our clients, our ability to develop and successfully execute our business strategies, technological innovation, including AI technology and future uses of generative AI and large language models, and our ability to invest in new technologies and adapt to industry developments at sufficient speed and scale, our ability to effectively price our services and maintain pricing and employee utilization rates, general inflationary pressures and our ability to share increased costs with our clients, wage increases in locations in which we have operations, our ability to attract and retain skilled professionals, the economic and other impacts of geopolitical conflicts, as well as any potential expansion or escalation of these conflicts or resulting economic disruption beyond the current scope, and other factors identified in our most recent Annual Report on Form 10-K and other reports filed with the SEC. Genpact undertakes no obligation to update any forward-looking statements that may be made from time to time by or on behalf of Genpact.

Contacts:
Investors: Tyra Whelton, +1 (908)
Media: Alexia Taxiarchos, +1 (617) 259-8172

View original content to download multimedia:
SOURCE Genpact

Why Lululemon (LULU) Stock Is Nosediving

Yahoo · 14 minutes ago

Shares of athletic apparel retailer Lululemon (NASDAQ:LULU) fell 19.4% in the afternoon session after the company reported underwhelming Q1 2025 results. Its EPS guidance for next quarter missed, and its full-year EPS guidance fell short of Wall Street's estimates. Sales and earnings were roughly in line with expectations, with the softness resulting from slowing growth in the Americas, where comparable sales declined, even as international markets continued to deliver significantly better growth. Overall, this was a weaker quarter. The shares closed the day at $265.29, down 19.6% from the previous close.

The stock market overreacts to news, and big price drops can present good opportunities to buy high-quality stocks. Is now the time to buy Lululemon? Access our full analysis report here; it's free.

Lululemon's shares are somewhat volatile and have had 13 moves greater than 5% over the last year. But moves this big are rare even for Lululemon and indicate this news significantly impacted the market's perception of the business. The biggest move we wrote about over the last year was 6 months ago, when the stock gained 18.4% on the news that the company reported strong third-quarter results, which surpassed analysts' revenue expectations on a convincing same-store sales beat. The result revealed strong double-digit top-line growth in China and the rest of the world. The company also observed record engagement metrics at the top of the funnel during the Black Friday event, mostly from its online channels. This signaled a strong start to the holiday shopping season. Looking ahead, guidance was roughly in line, which was encouraging. Overall, that quarter had some key positives.

Lululemon is down 28.7% since the beginning of the year, and at $265.41 per share, it is trading 37% below its 52-week high of $421.16 from January 2025. Investors who bought $1,000 worth of Lululemon's shares 5 years ago would now be looking at an investment worth $841.77.

Here at StockStory, we certainly understand the potential of thematic investing. Diverse winners from Microsoft (MSFT) to Alphabet (GOOG), Coca-Cola (KO) to Monster Beverage (MNST) could all have been identified as promising growth stories with a megatrend driving the growth. So, in that spirit, we've identified a relatively under-the-radar profitable growth stock benefiting from the rise of AI, available to you FREE via this link.
