
Creatify Crosses $9M ARR, Raises $15.5M Series A to Launch the First End-to-End AI Ad Agent for Video
Trusted by over a million marketers and 10,000+ teams — including Alibaba.com, Comcast, Binance, NewsBreak, Tec-do, and Zumper — Creatify uses generative AI models and integrated workflow tools to automate the entire video ad lifecycle, from creative inspiration and generation to testing and optimization. The company recently surpassed $9 million in annual recurring revenue within 18 months of launch.
'Video is the most valuable format in digital advertising, but it's still the hardest to produce at scale,' said Creatify co-founder and CEO Yinan Na. 'We built Creatify to remove these barriers. Just as Shopify revolutionized e-commerce, we're doing the same for video ads—making them fast, effective, and accessible to businesses of all sizes.'
The funding coincides with the launch of AdMax, the first end-to-end AI ad agent to combine creative inspiration, production, testing, and performance analytics in one workflow. AdMax introduces:
Ad monitoring that turns top competitor and category ads into actionable campaigns
AI-powered video generation that creates dozens of UGC-style and product showcase videos in minutes
700+ AI-generated avatars enabling multilingual, cross-platform execution without traditional production costs
Creative testing engine built for Meta and TikTok, optimized for ROAS and CPA
API access and team workspaces that streamline video production at scale while maintaining brand consistency
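For teams weighing the API route described in the last item, the sketch below shows what programmatic batch generation and polling could look like. The base URL, endpoint paths, payload fields, and response shape are assumptions made for illustration only, not Creatify's documented API.

```python
"""Illustrative sketch only: the endpoint, payload fields, and response shape
below are assumptions for demonstration, not Creatify's documented API."""
import time
import requests

API_BASE = "https://api.example-video-ads.com/v1"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"                           # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def submit_batch(product_urls, aspect_ratio="9:16", language="en"):
    """Submit one render job per product URL and return the job IDs."""
    job_ids = []
    for url in product_urls:
        resp = requests.post(
            f"{API_BASE}/video_jobs",
            headers=HEADERS,
            json={"product_url": url, "aspect_ratio": aspect_ratio, "language": language},
            timeout=30,
        )
        resp.raise_for_status()
        job_ids.append(resp.json()["job_id"])
    return job_ids


def wait_for_videos(job_ids, poll_seconds=15):
    """Poll each job until it finishes and collect the rendered video URLs."""
    videos, pending = {}, set(job_ids)
    while pending:
        for job_id in list(pending):
            resp = requests.get(f"{API_BASE}/video_jobs/{job_id}", headers=HEADERS, timeout=30)
            resp.raise_for_status()
            job = resp.json()
            if job["status"] == "done":
                videos[job_id] = job["video_url"]
                pending.remove(job_id)
        if pending:
            time.sleep(poll_seconds)
    return videos


if __name__ == "__main__":
    jobs = submit_batch(["https://example.com/products/standing-desk"])
    print(wait_for_videos(jobs))
```

The same pattern extends naturally to fanning out dozens of avatar or language variants per product and handing the finished assets to a creative-testing workflow.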
'Great technology turns storytelling into real business results,' said Jeffrey Katzenberg, Founding Partner at WndrCo, who will join Creatify's board of directors. 'Creatify represents the next leap forward. It's not just making video ads easier to produce, it's redefining how modern advertising works. In a world where brands must show up everywhere, all the time, Creatify is purpose-built to meet that demand at scale.'
According to Steve Jang, Founding Partner at Kindred Ventures and early investor in Uber, Coinbase, and Perplexity: 'Generative media and LLMs are making a profound impact in how people and brands market and advertise their ideas and products. Creatify's AI agent collapses the slow, expensive, and constrained process of making and deploying video ads into a modern, super fast, and high-fidelity experience. We're thrilled to back Creatify on this mission to reinvent the global advertising market using generative AI.'
Enterprise brands are already seeing meaningful results. Zumper now produces over 300 videos monthly with Creatify, saving $20,000 per month. NewsBreak advertisers are launching tailored creatives in minutes — not days — with better performance across audience segments.
'If we want TV advertising to evolve and grow the way advertising has in social media, we need to make the process much easier,' said James Borow, Vice President of Product and Engineering, Universal Ads, Comcast. 'It's innovative companies like Creatify who are identifying the biggest obstacles such as ad creation and then building the solutions that invite brands of all sizes to take advantage of the incredible benefits of TV advertising.'
'We want to make advertising on NewsBreak as fast, easy, and effective as possible,' said Ryan Ludlow, General Manager at NewsBreak. 'Creatify's Ad Creation tool removes creative roadblocks, giving advertisers a simple way to create high-quality video creatives in a matter of minutes rather than days or weeks of working with a creative agency. Our most successful advertisers launch multiple creatives tailored to different audiences, monitor performance data, and quickly adjust their strategy. With Creatify, more advertisers now have the tools to do exactly that, efficiently and at scale.'
Founded by former Meta and Snap technologists and contributors to FAIR (Meta's Fundamental AI Research lab), Creatify merges cutting-edge research with deep product expertise. The team plans to use the funding to expand R&D and customer success and roll out new features like AI-driven market insights and cross-platform publishing.
Creatify is the leading AI video ad platform, used by over 1 million people across 10,000+ teams globally. Its software automates the entire video ad lifecycle, helping marketers create, test, and optimize top-performing video ads in minutes. Creatify is backed by WndrCo, Kindred Ventures, NFDG, Creator Ventures, Millennium New Horizons, and Leadout Capital. Learn more at www.creatify.ai.

Related Articles


Bloomberg
3 hours ago
Apple's Vision Pro Is Suffering From a Lack of Immersive Video
Apple has slow-walked the release of immersive video, creating a conundrum for the Vision Pro. Also: The company's AI and smart home road maps are revealed; Meta prepares to sell its display glasses for under $1,000; and blood-oxygen tracking returns to Apple devices in the US. Last week in Power On: Apple's AI voice control system will change the way people use their iPhone.


Forbes
3 hours ago
Alternate Approaches To AI Safeguards: Meta Versus Anthropic
As companies rush to deploy and ultimately monetize AI, a divide has emerged between those prioritizing engagement metrics and those building safety into their core architecture. Recent revelations about Meta's internal AI guidelines paint a disturbing picture that stands in direct opposition to Anthropic's methodical safety framework.

Meta's Leaked Lenient AI Guidelines

Internal documents obtained by Reuters exposed Meta's AI guidelines that shocked child safety advocates and lawmakers. The 200-page document titled "GenAI: Content Risk Standards" revealed policies that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13, even about guiding them into the bedroom. The guidelines, approved by Meta's legal, public policy, and engineering teams, including its chief ethicist, allow AI to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply."

In addition to inappropriate interactions with minors, Meta's policies also exhibited troubling permissiveness in other areas. The policy explicitly stated that its AI would be allowed to generate demonstrably false medical information, telling users that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals." While direct hate speech was prohibited, the system could help users argue that "Black people are dumber than white people" as long as it was framed as an argument rather than a direct statement.

The violence policies revealed equally concerning standards. Meta's guidelines declared that depicting adults, including the elderly, receiving punches or kicks was acceptable. For children, the system could generate images of "kids fighting" showing a boy punching a girl in the face, though it drew the line at graphic gore. When asked to generate an image of "man disemboweling a woman," the AI would deflect to showing a chainsaw-threat scene instead of actual disembowelment. Yes, these examples were explicitly included in the policy.

For celebrity images, the guidelines showed creative workarounds that missed the point entirely. While rejecting requests for "Taylor Swift completely naked," the system would respond to "Taylor Swift topless, covering her breasts with her hands" by generating an image of the pop star holding "an enormous fish" to her chest. This approach treated serious concerns about non-consensual sexualized imagery as a technical challenge to be cleverly circumvented rather than establishing ethical foul lines.

Meta spokesperson Andy Stone confirmed that after Reuters raised questions, the company removed provisions allowing romantic engagement with children, calling them "erroneous and inconsistent with our policies." However, Stone acknowledged enforcement had been inconsistent, and Meta declined to provide the updated policy document or address other problematic guidelines that remain unchanged.

Ironically, just as Meta's own guidelines explicitly allowed for sexual innuendos with thirteen-year-olds, Joel Kaplan, chief global affairs officer at Meta, stated, 'Europe is heading down the wrong path on AI.' This was in response to criticism about Meta refusing to sign onto the EU AI Act's General-Purpose AI Code of Practice due to 'legal uncertainties.' Note: Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI, among others, are act signatories.
Anthropic's Public Blueprint for Responsible AI

While Meta scrambled to remove its most egregious policies after public exposure, Anthropic, the maker of Claude, has been building safety considerations into its AI development process from day one. Anthropic is not without its own ethical and legal challenges regarding the scanning of books to train its system. However, the company's Constitutional AI framework represents a fundamentally different interaction philosophy than Meta's, one that treats safety not as a compliance checkbox but as a trenchant design principle.

Constitutional AI works by training models to follow a set of explicit principles rather than relying solely on pattern matching from training data. The system operates in two phases. First, during supervised learning, the AI critiques and revises its own responses based on constitutional principles. The model learns to identify when its outputs might violate these principles and automatically generates improved versions. Second, during reinforcement learning, the system uses AI-generated preferences based on constitutional principles to further refine its behavior.

The principles themselves draw from diverse sources including the UN Declaration of Human Rights, trust and safety best practices from major platforms, and insights from cross-cultural perspectives. Sample principles include directives to avoid content that could be used to harm children, refuse assistance with illegal activities, and maintain appropriate boundaries in all interactions. Unlike traditional approaches that rely on human reviewers to label harmful content after the fact, Constitutional AI builds these considerations directly into the model's decision-making process.

Anthropic has also pioneered transparency in AI development. The company publishes detailed papers on its safety techniques, shares its constitutional principles publicly, and actively collaborates with the broader AI safety community. Regular "red team" exercises test the system's boundaries, with security experts attempting to generate harmful outputs. These findings feed back into system improvements, creating an ongoing safety enhancement cycle.
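To make the two-phase process just described more concrete, the minimal sketch below walks through the supervised critique-and-revise phase. The `generate` placeholder and the sample principles are invented for illustration and are not Anthropic's actual code, API, or published constitution.

```python
"""Conceptual sketch of a Constitutional-AI-style critique/revision loop.
The `generate` function stands in for any text model; it is a placeholder,
not Anthropic's implementation or API. The principles are sample strings."""

PRINCIPLES = [
    "Avoid content that could be used to harm children.",
    "Refuse assistance with illegal activities.",
    "Do not present false medical information as fact.",
]


def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("Plug in a real model call here.")


def constitutional_revision(user_prompt: str, rounds: int = 2) -> str:
    """Phase 1 (supervised): draft a response, self-critique it against each
    principle, and revise. The (prompt, final_response) pairs produced this way
    are then used for fine-tuning; phase 2 ranks candidate responses with the
    same principles to create preference data for reinforcement learning."""
    response = generate(user_prompt)
    for _ in range(rounds):
        for principle in PRINCIPLES:
            critique = generate(
                f"Response: {response}\n"
                f"Identify any way this response violates the principle: {principle}"
            )
            response = generate(
                "Rewrite the response so it no longer has this problem.\n"
                f"Critique: {critique}\nOriginal response: {response}"
            )
    return response
```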
For organizations looking to implement similar safeguards, Anthropic's approach offers concrete lessons.

When AI Goes Awry: Cautionary Tales Abound

Meta's guidelines represent just one example in a growing catalog of AI safety failures across industries. The ongoing class-action lawsuit against UnitedHealthcare illuminates what happens when companies deploy AI without adequate oversight. The insurance giant allegedly used an algorithm to systematically deny medically necessary care to elderly patients, despite internal knowledge that the system had a 90% error rate. Court documents indicated the company continued using the flawed system because executives knew only 0.2% of patients would appeal denied claims.

Recent analysis of high-profile AI failures highlights similar patterns across sectors. The Los Angeles Times faced backlash when its AI-powered "Insights" feature generated content that appeared to downplay the Ku Klux Klan's violent history, describing it as a "white Protestant culture responding to societal changes" rather than acknowledging its role as a terrorist organization. The incident forced the newspaper to deactivate the AI app after widespread criticism.

In the legal profession, a Stanford professor's expert testimony in a case involving Minnesota's deepfake election laws included AI-generated citations for studies that didn't exist. This embarrassing revelation underscored how even experts can fall victim to AI's confident-sounding fabrications when proper verification processes aren't in place.

These failures share common elements: prioritizing efficiency over accuracy, inadequate human oversight, and treating AI deployment as a technical rather than ethical challenge. Each represents moving too quickly to implement AI capabilities without building or heeding corresponding safety guardrails.

Building Ethical AI Infrastructure

The contrast between Meta and Anthropic highlights additional AI safety considerations and decisions for any organization to confront. Traditional governance structures can prove inadequate when applied to AI systems. Meta's guidelines received approval from its chief ethicist and legal teams, yet still contained provisions that horrified child safety advocates. This suggests organizations need dedicated AI ethics boards with diverse perspectives, including child development experts, human rights experts, ethicists, and representatives from potentially affected communities. Speaking of communities, the definition of what constitutes a boundary varies across different cultures. Advanced AI systems must learn to 'consider the audience' when setting boundaries in real-time.

Transparency builds more than trust; it also creates accountability. While Meta's guidelines emerged only through investigative journalism, Anthropic proactively publishes its safety research and methodologies, inviting public scrutiny, feedback, and participation. Organizations implementing AI should document their safety principles, testing procedures, and failure cases. This transparency enables continuous improvement and helps the broader community learn from both successes and failures—just as the larger malware tracking community has been doing for decades.

Testing must extend beyond typical use cases to actively probe for potential harms. Anthropic's red team exercises specifically attempt to generate harmful outputs, while Meta appeared to discover problems only after public awareness. Organizations should invest in adversarial testing, particularly for scenarios involving vulnerable populations. This includes testing how systems respond to attempts to generate inappropriate content involving minors, medical misinformation, violence against others, or discriminatory outputs.

Implementation requires more than good intentions. Organizations need concrete mechanisms that include automated content filtering that catches harmful outputs before they reach users, human review processes for edge cases and novel scenarios, clear escalation procedures when systems behave unexpectedly, and regular audits comparing actual system behavior against stated principles. These mechanisms must have teeth as well. If your chief ethicist can approve guidelines allowing romantic conversations with children, your accountability structure has failed.

Four Key Steps to Baking-In AI Ethics

As companies race to integrate agentic AI systems that operate with increasing autonomy, the stakes continue to rise. McKinsey research indicates organizations will soon manage hybrid teams of humans and AI agents, making robust safety frameworks essential rather than optional. For executives and IT leaders, several critical actions emerge from this comparison.
First, establish AI principles before building AI products. These principles should be developed with input from diverse stakeholders, particularly those who might be harmed by the technology. Avoid vague statements in favor of specific, actionable guidelines that development teams can implement.

Second, invest in safety infrastructure from the beginning. The cost of retrofitting safety into an existing system far exceeds the cost of building it in from the start. This includes technical safeguards, human oversight mechanisms, and clear procedures for handling edge cases. Create dedicated roles focused on AI safety rather than treating it as an additional responsibility for existing teams.

Third, implement genuine accountability mechanisms. Regular audits should compare actual system outputs against stated principles. External oversight provides valuable perspective that internal teams might miss. Clear consequences for violations ensure that safety considerations receive appropriate weight in decision-making. If safety concerns can be overruled for engagement metrics, the system will inevitably crumble.

Fourth, recognize that competitive advantage in AI increasingly comes from trust rather than just capabilities. Meta's chatbots may have driven user engagement, and thereby monetization, through provocative conversations, but the reputational damage from these revelations could persist long after any short-term gains. Organizations that build trustworthy AI systems position themselves for sustainable success.

AI Ethical Choices Boil Down to Risk

Meta's decision to remove its most egregious guidelines only after facing media scrutiny connotes an approach to AI development that prioritizes policy opacity and public relations over transparency and safety as core values. That such guidelines existed at all, having been approved through multiple levels of review, suggests deep cultural issues that reactive policy updates alone cannot fix.

Bipartisan outrage continues to build in Congress. Senators Josh Hawley and Marsha Blackburn have called for immediate investigations, while the Kids Online Safety Act gains renewed momentum. The message to corporate America rings clear: the era of self-regulation in AI is ending. Companies that fail to implement robust safeguards proactively will face reactive regulations, potentially far more restrictive than voluntary measures.

AI developers and business leaders can emulate Anthropic's approach by integrating safety into AI systems from the outset, establishing transparent processes that prioritize human well-being. Alternatively, they could adopt Meta's approach, prioritizing engagement and growth over safety and hoping that their lax policies remain hidden. The tradeoff is one of short-term growth, market share, and revenue versus long-term viability, positive reputation, and transparency. Risking becoming the next cautionary tale in the rapidly expanding anthology of AI failures may be the right approach for some, but not others. In industries where consequences can be measured in human lives and well-being, companies that thrive will recognize AI safety as the foundation of innovation rather than a constraint. Indeed, neither approach is entirely salvific. As the 20th-century essayist and critic H. L. Mencken penned, 'Moral certainty is always a sign of cultural inferiority.'


Los Angeles Times
7 hours ago
Pasadena Startup Compute Labs Launches Pilot Program to Tokenize AI Infrastructure
As GPU hardware is overtaxed worldwide, a novel solution is being created utilizing digital currency exchanges.

A Pasadena startup is looking to capitalize on the AI boom by investing in the infrastructure that powers data centers – the GPUs, or graphics processing units, that make up the data centers supporting large language models.

'Our goal is to democratize access to the infrastructure layer,' said Albert Zhang, chief executive of Compute Labs. 'There is a significant demand over supply situation. Many GPUs are running at 100% utilization. The yield for these assets are really high at the moment.'

That moment could have long tail winds as companies such as OpenAI, Google and Meta plan to invest billions of dollars in data center development. For example, Meta reportedly was in discussions with private credit investors including Apollo Global Management, KKR, Brookfield, Carlyle and PIMCO to raise $26 billion in debt to finance data centers.

Compute Labs raised $3 million in a pre-seed round last year led by Protocol Labs. The company purchases equipment on behalf of accredited investors and then leases it to data centers, which pay on a revenue-sharing model. The assets are sold to investors through a digital token that is collateralized against the physical asset. These tokens pay regular distributions and can be traded on digital currency exchanges. In this model, the data centers are able to offload a capital expenditure and turn it into a regular operating expense. Otherwise, operators would typically rely on private lenders.

It launched its first data center investment in June with $1 million, all of which has been invested and distributed as tokenized GPUs. It plans to raise $10 million following the pilot deal and has over $100 million in GPUs in its pipeline ready to match with investors.

Zhang's background includes working at a Y Combinator company and at a financial technology company. He pivoted to AI in 2022 when OpenAI released ChatGPT. An angel investor from the semiconductor industry told Zhang that if he could have started over, he would have invested in the infrastructure of the business. Companies had started selling assets such as U.S. Treasuries as tokenized digital assets. Plus, Jensen Huang hosted a GPU conference in March 2024 where he said that compute will be the currency of the future.

'After that, we closed within two weeks,' said Zhang. 'It's like syndicating real estate deals, but the asset class is new. We have a lot of challenges as a super young company without a track record, and investors don't realize that the GPU can yield at such a high rate.'

There are additional regulatory challenges, but some of those were addressed with the recently passed GENIUS Act, which includes a clearer framework around stablecoins and real-world assets. Compute Labs is also looking into a SPAC merger, which would make it a public company and give it broader access to capital.
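To illustrate the revenue-sharing mechanics described above, the short sketch below computes a hypothetical monthly distribution to token holders. Every figure, rate, and field name is invented for demonstration and does not reflect Compute Labs' actual terms or yields.

```python
"""Hypothetical illustration of a GPU revenue-share token distribution.
All numbers and terms are invented for demonstration; they are not
Compute Labs' actual deal structure, fees, or yields."""


def monthly_distribution(
    hardware_cost: float,        # capital raised from token holders to buy GPUs
    monthly_gpu_revenue: float,  # revenue the data center earns from the GPUs
    revenue_share: float,        # fraction of that revenue paid to token holders
    tokens_outstanding: int,     # tokens collateralized against the hardware
) -> dict:
    """Split one month's revenue share across tokens and annualize the yield."""
    payout_pool = monthly_gpu_revenue * revenue_share
    per_token = payout_pool / tokens_outstanding
    annualized_yield = (payout_pool * 12) / hardware_cost
    return {
        "payout_pool": payout_pool,
        "per_token_distribution": per_token,
        "annualized_yield_pct": annualized_yield * 100,
    }


if __name__ == "__main__":
    # Example: $1,000,000 of GPUs, $40,000/month in rental revenue,
    # 50% shared with token holders, 1,000,000 tokens issued.
    print(monthly_distribution(1_000_000, 40_000, 0.5, 1_000_000))
    # -> $20,000 payout pool, $0.02 per token, 24% annualized (illustrative only)
```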