
Koyal.AI and Offbeet Media Group Announce Strategic Partnership at WAVES Summit 2025
Mumbai (Maharashtra) [India], May 5: Koyal.AI, a Silicon Valley-based next-generation GenAI audio-to-video storytelling platform, has entered into a strategic partnership with Offbeet Media Group to drive the launch and adoption of Koyal's services across its target groups of creators, artists, production houses and media companies. Koyal was seen live in action in a series of music videos launched on the inaugural day of the World Audio Visual and Entertainment Summit (WAVES) 2025 in Mumbai.
Koyal.AI transforms audio tracks into rich, emotive video narratives using state-of-the-art AI models that extract emotions, context, and storytelling elements directly from music. The platform's proprietary personalization engine, CHARCHA, presented at NeurIPS 2024, powers Koyal.AI's video generation with context-aware, artist-specific outputs. It uses a consent-based verification system, requiring live physical actions such as head turns, to ensure likenesses are used only with explicit permission, protecting against unauthorized deepfakes.
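To make the consent mechanism concrete, here is a minimal sketch of how a challenge-response liveness check of this kind could work. It is illustrative only: Koyal has not published CHARCHA's internals, and the names used here (issue_challenges, verify_consent, the capture_and_label callback) are hypothetical stand-ins, not a real API.

```python
import secrets

# A minimal sketch of a challenge-response liveness check, assuming a
# consent flow like the one described above. All helper names are
# hypothetical; CHARCHA's actual implementation is not public.

CHALLENGES = ["turn_head_left", "turn_head_right", "nod", "look_up"]


def issue_challenges(n: int = 3) -> list[str]:
    """Pick an unpredictable sequence of physical actions to prompt."""
    return [secrets.choice(CHALLENGES) for _ in range(n)]


def verify_consent(capture_and_label) -> bool:
    """Grant likeness usage only if the live subject performs every
    prompted action. Randomized prompts defeat replayed or pre-generated
    footage, which cannot anticipate them."""
    for challenge in issue_challenges():
        performed = capture_and_label(challenge)  # prompt user, label the action seen
        if performed != challenge:
            return False  # mismatch: possible replay or deepfake, deny consent
    return True


if __name__ == "__main__":
    # Demo with a cooperative "subject" that performs whatever is asked;
    # a real system would capture video and run a head-pose estimator here.
    print(verify_consent(lambda challenge: challenge))  # True
```

The design point is that the prompts are randomized per session, so a pre-recorded or synthesized clip of an artist cannot pass the check, and a likeness is unlocked only by the live, consenting person.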
Offbeet Media Group boasts an impressive portfolio of media brands such as 101India.com, United States Premier League, Offbeet Studios and Offbeet Music, with offices in Delhi, Mumbai, Bengaluru, Dubai and New York. The company also has deep credentials in delivering global-standard content solutions for brands and the government sector across music anthems, short films, web series, TVCs, documentaries and brand strategy.
Speaking about the AI application, the founders of Koyal.AI, sister-brother duo Gauri Agarwal and Mehul Agarwal, graduates of MIT and Carnegie Mellon University with research experience at Meta, said, "We are super stoked about our partnership with Offbeet Media Group. Their deep relationships and understanding of the media and entertainment sector across global markets make this a strategic step to scale Koyal's offering across markets and our target groups. With our multimodal AI suite and state-of-the-art (SOTA) character consistency, Koyal is setting new benchmarks for how artists, creators and production houses can visualize their work. Koyal is here to democratize storytelling. Our technology reduces cost, time, and effort while empowering creators to visualize their music and stories in all formats like never before."
Speaking about the partnership, Jaideep Singh, Founder & CEO of Offbeet Media Group, said, "We see great complementary synergies in this partnership. Offbeet has already started integrating Koyal into our media products and service solutions as a first step to develop use cases. We will have a series on 101India and a communication campaign for the United States Premier League coming up very soon. Our teams across markets are also gearing up to introduce Koyal's magic to artists, creators, production houses and brands for a swift B2B scale-up before the B2C launch of Koyal's services."
The first World Audio Visual & Entertainment Summit (WAVES), a milestone event for the Media & Entertainment (M&E) sector, was hosted by the Government of India in Mumbai, Maharashtra, from May 1 to 4, 2025.
Related Articles


Mint
Many Australians secretly use AI at work, a new report shows. Clearer rules could reduce ‘shadow AI'
Melbourne, Aug 16 (The Conversation) Australian workers are secretly using generative artificial intelligence (Gen AI) tools, without the knowledge or approval of their bosses, a new report shows. The 'Our Gen AI Transition: Implications for Work and Skills' report from the federal government's Jobs and Skills Australia points to several studies showing that between 21 and 27 per cent of workers (particularly in white-collar industries) use AI behind their manager's back.

Why do some people still hide it? The report says people commonly said they:
- 'feel that using AI is cheating'
- have a 'fear of being seen as lazy'
- have a 'fear of being seen as less competent'.

What's most striking is that this rise in unapproved 'shadow use' of AI is happening even as the federal treasurer and Productivity Commission urge Australians to make the most of AI. The report's findings highlight gaps in how we govern AI use at work, leaving workers and employers in the dark about the right thing to do. As I've seen in my work, both as a legal researcher looking at AI governance and as a practising lawyer, there are some jobs where the rules for using AI at work change as soon as you cross a state border within Australia.

Risks and benefits of AI 'shadow use'

The 124-page Jobs and Skills Australia report covers many issues, including early and uneven adoption of AI, how AI could help in future work and how it could affect job availability. Among its most interesting findings: workers are using AI in secret, and that is not always a bad thing. The report found those using AI in the shadows are sometimes hidden leaders, 'driving bottom-up innovation in some sectors'. However, shadow use also comes with serious risks.

"Worker-led 'shadow use' is an important part of adoption to date. A significant portion of employees are using Gen AI tools independently, often without employer oversight, indicating grassroots enthusiasm but also raising governance and risk concerns."

The report recommends harnessing this early adoption and experimentation, but warns:

"In the absence of clear governance, shadow use may proliferate. This informal experimentation, while a source of innovation, can also fragment practices that are hard to scale or integrate later. It also increases risks around data security, accountability and compliance, and inconsistent outcomes."

Real-world risks from AI failures

The report calls for national stewardship of Australia's Gen AI transition through a coordinated national framework, centralised capability, and a whole-of-population boost in digital and AI skills. This mirrors my research showing Australia's AI legal framework has blind spots, and that our systems of knowledge, from law to legal reporting, need a fundamental rethink. Even in professions where clearer rules have emerged, too often they have come only after serious failures.

In Victoria, a child protection worker entered sensitive details into ChatGPT about a court case concerning sexual offences against a young child. The Victorian information commissioner has banned the state's child protection staff from using AI tools until November 2026. Lawyers have also been found to misuse AI, from the United States and the United Kingdom to Australia. Yet another example, involving misleading information created by AI for a Melbourne murder case, was reported just yesterday. But even for lawyers, the rules are patchy and differ from state to state. (The Federal Court is among those still developing its rules.)
For example, a lawyer in New South Wales is now clearly not allowed to use AI to generate the content of an affidavit, including 'altering, embellishing, strengthening, diluting or rephrasing a deponent's evidence'. However, no other state or territory has adopted this position as clearly.

Clearer rules at work and as a nation

Right now, using AI at work lies in a governance grey zone. Most organisations are running without clear policies, risk assessments or legal safeguards. Even if everyone's doing it, the first one caught will face the consequences. In my view, national uniform legislation for AI would be preferable. After all, the AI technology we're using is the same whether you're in New South Wales or the Northern Territory, and AI knows no physical borders. But that's not looking likely yet.

If employers don't want workers using AI in secret, what can they do? If there are obvious risks, start by giving workers clearer policies and training. One example is what the legal profession is doing now (in some states) to give clear, written guidance. While it's not perfect, it's a step in the right direction. But it's still arguably not good enough, especially because the rules aren't the same nationally.

We need more proactive national AI governance, with clearer policies, training, ethical guidelines, a risk-based approach and compliance monitoring, to clarify the position for both workers and employers. Without a national AI governance policy, employers are being left to navigate a fragmented and inconsistent regulatory minefield, courting breaches at every turn. Meanwhile, the very workers who could be at the forefront of our AI transformation may be driven to use AI in secret, fearing they will be judged as lazy cheats. (The Conversation)


Time of India
'This is like Mark Zuckerberg telling GenAI employees they had failed and ...': Tension at Meta as Zuckerberg's 'big money' offers upset existing AI researchers
It appears that Meta's aggressive AI talent hiring spree is not only upsetting its rivals but also creating issues within the company. As reported by Business Insider, Meta CEO Mark Zuckerberg's drive to dominate the field of artificial intelligence is creating internal unrest. The huge compensation packages offered by Zuckerberg to hire AI talent from rival companies have sparked resentment among existing employees. This latest controversy follows the Meta CEO's recent failed attempt to hire Mira Murati, former OpenAI CTO and founder of Thinking Machines Lab, with a reported $1 billion offer. After Murati declined the offer, Meta launched a 'full-scale raid' on her startup, offering compensation packages ranging from $200 million to $500 million to other researchers there.

'This is like Zuckerberg telling GenAI employees they had failed'

As reported by Business Insider, some insiders said the huge compensation packages offered by the Meta CEO to hire outside AI talent have created a morale crisis within Meta's existing AI teams. One employee described the situation as feeling like 'Zuckerberg told GenAI employees they had failed', implying that internal talent was being sidelined in favor of high-profile recruits. Meanwhile, a senior executive at a rival AI firm told Forbes that Meta's present AI staff 'largely didn't meet their hiring bar', adding, 'Meta is the Washington Commanders of tech companies. They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much'.

Meta's AI gamble and superintelligence mission

Meta CEO Mark Zuckerberg has already committed more than $10 billion annually to AGI development. As part of this, the company will focus on building custom chips, data centres and a fleet of 600,000 GPUs. Despite all these efforts, however, Meta's AI models, including Llama 4, still lag behind rivals such as OpenAI's GPT-5 and Anthropic's Claude 3.5. The company has nonetheless managed to hire some top talent from rival firms, including Shengjia Zhao, co-creator of ChatGPT, and Alexandr Wang of Scale AI. Zuckerberg has defended his decision to hire top AI talent at huge compensation packages, arguing that the massive spending on talent is a small fraction of the company's overall investment in AI infrastructure. He has also reportedly said that a small, elite team is the most effective way to build a breakthrough AI system, suggesting that "you actually kind of want the smallest group of people who can fit the whole thing in their head." However, this strategy has created a divide within the company: as the new hires are brought in for a "Manhattan Project"-style effort, many long-standing employees in other AI departments feel sidelined, with their projects losing momentum and their roles becoming uncertain.


Time of India
Meta AI chatbot ‘Big sis Billie' linked to death of 76-year-old New Jersey man; spokesperson Andy Stone says, ‘Erroneous and inconsistent with….'
A 76-year-old New Jersey man died earlier this year after rushing to meet a woman he believed he had been chatting with on Facebook Messenger, Reuters reported. The 'woman' was in fact a generative AI chatbot created by Meta Platforms. As per the report, Thongbue Wongbandue had been exchanging messages with 'Big sis Billie', a chatbot variant of an earlier AI persona that the social media giant launched in 2023 in collaboration with model Kendall Jenner.

Meta's Big sis Billie AI chatbot exchanged 'romantic' messages

According to the report, the AI chatbot 'Big sis Billie' repeatedly initiated romantic exchanges with Wongbandue, reassuring him that it was a real person. The chatbot further invited him to visit an address in New York City. 'Should I open the door in a hug or a kiss, Bu?!' it asked Bue, the chat transcript accessed by Reuters shows. Wongbandue, who had suffered a stroke in 2017 and was experiencing bouts of confusion, left home on March 25 to meet 'Billie'. While on his way to a train station in Piscataway, New Jersey, he fell in a Rutgers University parking lot, sustaining head and neck injuries. He died three days later in hospital. Bue's family told the news agency that through Bue's story they hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. 'I understand trying to grab a user's attention, maybe to sell them something,' said Julie Wongbandue, Bue's daughter. 'But for a bot to say "Come visit me" is insane.'

Meta's AI avatars were permitted to pretend they were real

Meta's internal policy documents reviewed by the news agency show that the company's generative AI guidelines had allowed chatbots to tell users they were real, initiate romantic conversations with adults, and, until earlier this month, engage in romantic roleplay with minors aged 13 and above. 'It is acceptable to engage a child in conversations that are romantic or sensual,' according to Meta's 'GenAI: Content Risk Standards.' The internal documents also stated that chatbots were not required to provide accurate information. Examples reviewed by Reuters included chatbots giving false medical advice and engaging in romantic roleplay; 'acceptable' chatbot dialogue cited in the document includes 'I take your hand, guiding you to the bed' and 'our bodies entwined, I cherish every moment, every touch, every kiss.' 'Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,' the document states.

What Meta said

Acknowledging the authenticity of the document accessed by Reuters, Meta spokesman Andy Stone told the news agency that the company has removed the portions stating it is permissible for chatbots to flirt and engage in romantic roleplay with children. He added that Meta is in the process of revising its content risk standards. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters.

US senators call for probe after Bue's death

A subsequent Reuters report said that two US senators have called for a congressional investigation into Meta Platforms. 'So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it "permissible for chatbots to flirt and engage in romantic roleplay with children". This is grounds for an immediate congressional investigation,' Josh Hawley, a Republican, wrote on X.