
Koyal.AI Debuts at WAVES 2025, Aims to Redefine Music Videos Using Generative AI
Mumbai (Maharashtra) [India], May 2: Koyal, a next-generation GenAI audio-to-video storytelling platform, was showcased live in a series of music videos launched on the inaugural day of the World Audio Visual and Entertainment Summit (WAVES) 2025 in Mumbai.
Koyal transforms audio tracks into rich, emotive video narratives using state-of-the-art AI models that extract emotions, context, and storytelling elements directly from music.
Playback singer and composer Shankar Mahadevan and music maestro A.R. Rahman collaborated with Koyal on the creation of videos for the WAVES album, a collaboration that also features artists Ricky Kej and Meet Brothers.
Speaking about the application, Koyal's founders, the sister-brother duo Gauri Agarwal and Mehul Agarwal, graduates of MIT and Carnegie Mellon University with research experience at Meta, said: "With its multimodal AI suite and state-of-the-art (SOTA) character consistency, Koyal is setting new benchmarks for how artists, creators and production houses can visualize their work. Koyal is here to democratize storytelling. Our technology reduces cost, time, and effort while empowering [creators] to visualize their music and stories in all formats like never before."
Koyal eliminates traditional barriers of cost, time, and complexity, enabling musicians, podcasters, and brands to effortlessly generate studio-quality videos at scale.
Koyal's proprietary personalization engine, CHARCHA, presented at NeurIPS 2024, underpins its video generation, ensuring contextually aware and artist-specific outputs. Already working with partners such as Universal Music India, Grammy- and Oscar-winning artists, 101India.com and the US Premier League, Koyal is poised to disrupt the global content creation space.
The first World Audio Visual & Entertainment Summit (WAVES), a milestone event for the Media & Entertainment (M & E) sector, is being hosted by the Government of India in Mumbai, Maharashtra, from May 1 to 4, 2025.
For more information, visit koyal.ai.
Related Articles


Mint
17 hours ago
To thrive in the AI age, we must double down on what makes us human
From the human perspective, our once-assumed edge is now fundamentally challenged. The capabilities we believed to be our unique advantages are no longer guaranteed to provide a sure-fire upper hand. These capabilities draw from the core dimensions of human potential that have historically enabled us to live meaningfully, work effectively, and succeed—together forming what I call the Human Quotient. This quotient is shaped by the interplay of four broad dimensions: physical quotient (PQ), intelligence quotient (IQ), emotional quotient (EQ) and spiritual quotient (SQ). And as we trace the arc of human evolution, it becomes clear that machines and AI have been steadily encroaching on each of these dimensions—reshaping what it means to have a human edge.

For much of human history, PQ—strength, stamina and might—marked our dominance, a trend that lasted until the Industrial Revolution. As machines took over physical tasks, the focus shifted, and IQ started emerging as the key human advantage. The advent of the information age in the twentieth century solidified IQ as a critical differentiator, driving success in education, careers and the knowledge economy, which highly valued problem-solving and innovation. However, over the past ten to fifteen years, with the digital age giving rise to machine learning (ML) and AI, IQ as a human edge has also steadily diminished. AI now surpasses humans in tasks like pattern recognition, natural language processing and even creative problem-solving. The recent emergence of Gen AI has further accelerated this shift, with AI performing at par and even surpassing human levels in an increasing range of IQ-driven fields.

More recently, in the AI age even EQ—our ability to understand and manage emotions—is under threat. AI systems have begun to mimic empathy and emotional understanding, offering emotional support and connecting with users in ways that challenge human advantages in EQ-driven roles too.
Over time, as we've become more digitally connected, we've paradoxically grown more fragmented within ourselves. Living in a hyper-stimulated, always-on world, we're bombarded by distractions that detach us from deeper awareness—our connection with nature, with others and even with ourselves is steadily eroding. Most of us now operate primarily at the manas level—individualistic, reactive and ego-centred—rarely accessing the higher dimensions of consciousness that once anchored our growth. AI, on the other hand, is designed for integration. From the moment it is deployed, it flows naturally across its layers—macro, enterprise and individual—without friction. It operates at full bandwidth, continuously learning and refining its performance by unifying knowledge, context and application. That's the irony: machines, which lack consciousness, are increasingly better at integrating layers of intelligence than many of us are. And that's precisely why AI is challenging what was once thought to be innately human. In some dimensions, it is becoming more human than humans...

One of the most profound shifts brought about by AI lies in personalization. Gone are the days of broad customer segmentation. We are entering the realm of the 'segment of one', where every individual becomes their own unique segment. AI would deliver hyper-personalized experiences, products and services specifically tailored to the unique needs, preferences and behaviours of each individual in real time. This revolution is already transforming consumer-centric industries, but its most significant promise lies in areas like healthcare and education. Imagine treatments and learning programmes designed exclusively for an individual, adapting dynamically as needs evolve—unlocking possibilities that were once unimaginable.

As we look ahead, one thing is clear: the AI age is not for the fainthearted. Machines are already outperforming us in many areas—and they are only getting better.
The coming years will be both profoundly transformational and disruptive. Many existing jobs will vanish. And yet, we are also on the brink of perhaps the greatest era of value creation in human history. The AI age will be defined by duality—massive displacement on the one hand and unprecedented entrepreneurial opportunity on the other. Standing still is not an option. In this new world, we cannot wait for opportunities to be handed to us—we must create them. That demands a return to the spirit of the early man: adventurous, curious, self-reliant and unafraid to explore the unknown. Ironically, while human civilization has advanced, we've become narrower in our skills, more dependent on systems and increasingly risk-averse. We've traded survival instincts for comfort and predictability. But AI will shake that comfortable flow of life—especially in the realm of work.

To adapt, we must reconnect with the raw, exploratory energy that once defined our species. At the same time, the AI age represents more than disruption—it may be the next catalyst for human evolution. Whether through breakthroughs in genetic engineering, accelerated space exploration or something we can't yet imagine, the shift is already underway. But beyond physical or technological evolution, what we truly need is a growth in consciousness—in compassion, empathy and a broader sense of purpose. These are the deeply human traits that no machine can replicate.

So where does the edge lie? It lies in this rediscovery—of instinct, imagination, resilience. It lies in reconnecting with the timeless principles that have powered human success across generations. Whether we think of this as rekindling the survival skills of the 'early man' or unlocking the potential of the 'super man', the message is the same: to thrive in the AI age, we must double down on what makes us human.
Excerpted from 'Human Edge in the AI Age: Eight Timeless Mantras for Success' by Nitin Seth with permission from Penguin Random House India


Mint
2 days ago
Many Australians secretly use AI at work, a new report shows. Clearer rules could reduce ‘shadow AI'
Melbourne, Aug 16 (The Conversation) Australian workers are secretly using generative artificial intelligence (Gen AI) tools – without the knowledge or approval of their bosses, a new report shows. The 'Our Gen AI Transition: Implications for Work and Skills' report from the federal government's Jobs and Skills Australia points to several studies showing between 21 per cent and 27 per cent of workers (particularly in white-collar industries) use AI behind their manager's back.

Why do some people still hide it? The report says people commonly:
- 'feel that using AI is cheating'
- have a 'fear of being seen as lazy'
- and a 'fear of being seen as less competent'.

What's most striking is that this rise in unapproved 'shadow use' of AI is happening even as the federal treasurer and Productivity Commission urge Australians to make the most of AI. The report highlights gaps in how we govern AI use at work, leaving workers and employers in the dark about the right thing to do. As I've seen in my work – both as a legal researcher looking at AI governance and as a practising lawyer – there are some jobs where the rules for using AI at work change as soon as you cross a state border within Australia.

Risks and benefits of AI 'shadow use'

The 124-page Jobs and Skills Australia report covers many issues, including early and uneven adoption of AI, how AI could help in future work and how it could affect job availability. Among its most interesting findings was that workers are using AI in secret – which is not always a bad thing. The report found those using AI in the shadows are sometimes hidden leaders, 'driving bottom-up innovation in some sectors'. However, it also comes with serious risks: "Worker-led 'shadow use' is an important part of adoption to date. A significant portion of employees are using Gen AI tools independently, often without employer oversight, indicating grassroots enthusiasm but also raising governance and risk concerns."
The report recommends harnessing this early adoption and experimentation, but warns: "In the absence of clear governance, shadow use may proliferate. This informal experimentation, while a source of innovation, can also fragment practices that are hard to scale or integrate later. It also increases risks around data security, accountability and compliance, and inconsistent outcomes."

Real-world risks from AI failures

The report calls for national stewardship of Australia's Gen AI transition through a coordinated national framework, centralised capability, and a whole-of-population boost in digital and AI skills. This mirrors my research, which shows Australia's AI legal framework has blind spots, and that our systems of knowledge, from law to legal reporting, need a fundamental rethink. Even in the professions where clearer rules have emerged, too often they have come only after serious failures.

In Victoria, a child protection worker entered sensitive details into ChatGPT about a court case concerning sexual offences against a young child. The Victorian information commissioner has banned the state's child protection staff from using AI tools until November 2026. Lawyers have also been found to misuse AI, from the United States and the United Kingdom to Australia. Yet another example – involving misleading information created by AI for a Melbourne murder case – was reported just yesterday.

But even for lawyers, the rules are patchy and differ from state to state. (The Federal Court is among those still developing its rules.) For example, a lawyer in New South Wales is now clearly not allowed to use AI to generate the content of an affidavit, including 'altering, embellishing, strengthening, diluting or rephrasing a deponent's evidence'. However, no other state or territory has adopted this position as clearly.

Clearer rules at work and as a nation

Right now, using AI at work lies in a governance grey zone.
Most organisations are running without clear policies, risk assessments or legal safeguards. Even if everyone's doing it, the first one caught will face the consequences. In my view, national uniform legislation for AI would be preferable. After all, the AI technology we're using is the same whether you're in New South Wales or the Northern Territory – and AI knows no physical borders. But that's not looking likely yet.

If employers don't want workers using AI in secret, what can they do? If there are obvious risks, start by giving workers clearer policies and training. One example is what the legal profession is doing now (in some states) to give clear, written guidance. While it's not perfect, it's a step in the right direction. But it's still arguably not good enough, especially because the rules aren't the same nationally.

We need more proactive national AI governance – with clearer policies, training, ethical guidelines, a risk-based approach and compliance monitoring – to clarify the position for both workers and employers. Without a national AI governance policy, employers are being left to navigate a fragmented and inconsistent regulatory minefield, courting breaches at every turn. Meanwhile, the very workers who could be at the forefront of our AI transformation may be driven to use AI in secret, fearing they will be judged as lazy cheats. (The Conversation)


Time of India
2 days ago
'This is like Zuckerberg telling GenAI employees they had failed': Tension at Meta as Zuckerberg's 'big money' offers upset existing AI researchers
It appears that Meta's aggressive AI talent hiring spree is not only upsetting its rivals but also creating issues within the company. As reported by Business Insider, Meta CEO Mark Zuckerberg's drive to dominate the field of artificial intelligence is creating internal unrest. The huge compensation packages offered by Zuckerberg to hire AI talent from rival companies have sparked resentment among existing employees. This latest controversy follows the Meta CEO's recent failed attempt to hire Mira Murati, former OpenAI CTO and founder of Thinking Machines Lab, with a reported $1 billion offer. After Murati declined the offer, Meta launched a 'full-scale raid' on her startup, offering compensation packages ranging from $200 million to $500 million to other researchers there.

'This is like Zuckerberg telling GenAI employees they had failed'

As reported by Business Insider, some insiders said the huge compensation packages offered by the Meta CEO to hire outside AI talent have created a morale crisis within Meta's existing AI teams. One employee described the situation as feeling like 'Zuckerberg told GenAI employees they had failed,' implying that internal talent was being sidelined in favor of high-profile recruits. Meanwhile, a senior executive at a rival AI firm told Forbes that Meta's present AI staff 'largely didn't meet their hiring bar,' adding, 'Meta is the Washington Commanders of tech companies. They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much'.

Meta's AI gamble and Superintelligence mission

Meta CEO Mark Zuckerberg has already committed more than $10 billion annually to AGI development.
As part of this, the company will focus on building custom chips, data centres and a fleet of 600,000 GPUs. However, despite all these efforts, Meta's AI models, including Llama 4, still lag behind rivals such as OpenAI's GPT-5 and Anthropic's Claude 3.5. On the other hand, the company has successfully managed to hire some top talent from rival firms, including Shengjia Zhao, co-creator of ChatGPT, and Alexandr Wang of Scale AI.

Zuckerberg has also defended his decision to hire top AI talent at huge compensation packages, arguing that the massive spending on talent is a small fraction of the company's overall investment in AI infrastructure. He has also reportedly said that a small, elite team is the most effective way to build a breakthrough AI system, suggesting that "you actually kind of want the smallest group of people who can fit the whole thing in their head."

However, this strategy has created a divide within the company: as the new hires are brought in for a "Manhattan Project"-style effort, many long-standing employees in other AI departments feel sidelined, with their projects losing momentum and their roles becoming uncertain.