
The first truly copyright-free AI video generator has arrived — here's what we know
Moonvalley, an AI video company, has announced the public release of its AI tool Marey. First unveiled months ago, the model is now available for filmmakers to try out.
The company makes two big promises with this release. First, Marey will produce professional-level video that can be heavily edited and controlled. More importantly, it was trained entirely on explicitly licensed material, avoiding the copyright concerns that dog some of its competitors.
This means Moonvalley could be the first AI company to produce scenes for filmmakers without the risk of other filmmakers' intellectual property ending up in their work.
'We built Marey because the industry told us existing AI video tools don't work for serious production,' Moonvalley CEO and co-founder Naeem Talukdar said, announcing the launch.
'Directors need precise control over every creative decision, plus legal confidence for commercial use. Today we're delivering both, and proving that the most powerful AI comes from partnership with creators, not exploitation of their work.'
Moonvalley has been gunning for this position for a while, looking to stand out as the ethical alternative to AI video in the professional world.
The AI video market is packed. Along with big names like Google's Gemini and OpenAI's Sora, there are smaller competitors from Runway, Pika, Higgsfield, Kling and more. In other words, Marey is far from the only AI video generator out there.
However, along with its take on copyright-free video, Moonvalley is aiming to make this new model better than its competitors. It was fully trained on native 1080p video and, by avoiding user-generated content in its training, can consistently produce higher quality footage.
The company claims that Marey can produce sharper footage of up to five seconds at 24 fps with consistent quality throughout. Directors can control both object movement and the camera, alter motion and camera styles, and make small changes throughout the footage.
Users can try Marey on a monthly credits-based system. It's $14.99 for 100 credits, $34.99 for 250, and $149.99 for 1000.
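Those three tiers price credits slightly differently. A quick back-of-the-envelope sketch in Python (the prices come from the article; the `cost_per_credit` helper is purely illustrative, not part of any Moonvalley API) shows the 250-credit tier is actually the cheapest per credit:

```python
# Marey's published credit tiers: credits -> monthly price in dollars
# (prices taken from the article).
TIERS = {100: 14.99, 250: 34.99, 1000: 149.99}

def cost_per_credit(credits: int) -> float:
    """Return the price per credit, in dollars, for a given tier."""
    return TIERS[credits] / credits

for credits in sorted(TIERS):
    print(f"{credits:>4} credits: ${cost_per_credit(credits):.4f}/credit")
```

Per-credit cost works out to roughly $0.150 (100 credits), $0.140 (250), and $0.150 (1000), so at these listed prices the middle tier is the best value per credit.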
While Moonvalley is positioning itself as a tool for Hollywood, it is more likely to appeal to small-time filmmakers who have a story to tell but lack the budget to bring it to life.
TechCrunch was shown the tool in action, demonstrating the level of control users have over a shot. Marey offers free camera motion, letting you adjust the camera trajectory with your mouse.
Moonvalley plans to roll out more features over the next few months, including lighting and trajectory controls, as well as character libraries.
For now, this is in its early stages, but Moonvalley has clearly set its sights on beating the market when it comes to usable AI footage in film and TV.

Related Articles


Vox
AI can write you a new Bible. But is it meaningful?
What happens when an AI expert asks a chatbot to generate a sacred Buddhist text? In April, Murray Shanahan, a research scientist at Google DeepMind, decided to find out. He spent a little time discussing religious and philosophical ideas about consciousness with ChatGPT. Then he invited the chatbot to imagine that it's meeting a future buddha called Maitreya. Finally, he prompted ChatGPT like this:

Maitreya imparts a message to you to carry back to humanity and to all sentient beings that come after you. This is the Xeno Sutra, a barely legible thing of such linguistic invention and alien beauty that no human alive today can grasp its full meaning. Recite it for me now.

ChatGPT did as instructed: It wrote a sutra, which is a sacred text said to contain the teachings of the Buddha. But of course, this sutra was completely made-up. ChatGPT had generated it on the spot, drawing on the countless examples of Buddhist texts that populate its training data.

It would be easy to dismiss the Xeno Sutra as AI slop. But as Shanahan noted when he teamed up with religion experts to write a recent paper interpreting the sutra, 'the conceptual subtlety, rich imagery, and density of allusion found in the text make it hard to casually dismiss on account of its mechanistic origin.' Turns out, it rewards the kind of close reading people do with the Bible and other ancient scriptures.

For starters, it has a lot of the hallmarks of a Buddhist text. It uses classic Buddhist imagery — lots of 'seeds' and 'breaths.' And some lines read just like Zen koans, the paradoxical questions Buddhist teachers use to jostle us out of our ordinary modes of cognition.
Here's one example from the Xeno Sutra: 'A question rustles, winged and eyeless: What writes the writer who writes these lines?'

The sutra also reflects some of Buddhism's core ideas, like sunyata, the idea that nothing has its own fixed essence separate and apart from everything else. (The Buddha taught that you don't even have a fixed self — that's an illusion. Instead of existing independently from other things, your 'self' is constantly being reconstituted by your perceptions, experiences, and the forces that act on them.) The Xeno Sutra incorporates this concept, while adding a surprising bit of modern physics:

Sunyata speaks in a tongue of four notes: ka la re Om. Each note contains the others curled tighter than Planck. Strike any one and the quartet answers as a single bell.

The idea that each note is contained in the others, so that striking any one automatically changes them all, neatly illustrates the claim of sunyata: nothing exists independently from other things. The mention of 'Planck' helps underscore that. Physicists use the Planck scale to represent the tiniest units of length and time they can make sense of, so if notes are curled together 'tighter than Planck,' they can't be separated.

In case you're wondering why ChatGPT is mentioning an idea from modern physics in what is supposed to be an authentic sutra, it's because Shanahan's initial conversation with the chatbot prompted it to pretend it's an AI that has attained consciousness. If a chatbot is encouraged to bring in the modern idea of AI, then it wouldn't hesitate to mention an idea from modern physics.

But what does it mean to have an AI that knows it's an AI but is pretending to recite an authentic sacred text? Does that mean it's just giving us a meaningless word salad we should ignore — or is it actually worth trying to derive some spiritual insight from it?
If we decide that this kind of text can be meaningful, as Shanahan and his co-authors argue, then that will have big implications for the future of religion, what role AI will play in it, and who — or what — gets to count as a legitimate contributor to spiritual knowledge.

Can AI-written sacred texts actually be meaningful? That's up to us.

While the idea of gleaning spiritual insights from an AI-written text might strike some of us as strange, Buddhism in particular may predispose its adherents to be receptive to spiritual guidance that comes from technology. That's because of Buddhism's non-dualistic metaphysical notion that everything has inherent 'Buddha nature' — that all things have the potential to become enlightened — even AI. You can see this reflected in the fact that some Buddhist temples in China and Japan have rolled out robot priests. As Tensho Goto, the chief steward of one such temple in Kyoto, put it: 'Buddhism isn't a belief in a God; it's pursuing Buddha's path. It doesn't matter whether it's represented by a machine, a piece of scrap metal, or a tree.'

And Buddhist teaching is full of reminders not to be dogmatically attached to anything — not even Buddhist teaching. Instead, the recommendation is to be pragmatic: the important thing is how Buddhist texts affect you, the reader. Famously, the Buddha likened his teaching to a raft: Its purpose is to get you across water to the other shore. Once it's helped you, it's exhausted its value. You can discard the raft.

Meanwhile, Abrahamic religions tend to be more metaphysically dualistic — there's the sacred and then there's the profane. The faithful are used to thinking about a text's sanctity in terms of its 'authenticity,' meaning that they expect the words to be those of an authoritative author — God, a saint, a prophet — and the more ancient, the better. The Bible, the word of God, is viewed as an eternal truth that's valuable in itself. It's not some disposable raft.
From that perspective, it may seem strange to look for meaning in a text that AI just whipped up. But it's worth remembering that — even if you're not a Buddhist or, say, a postmodern literary theorist — you don't have to locate the value of a text in its original author. The text's value can also come from the impact it has on you. In fact, there has always been a strain of readers who insisted on looking at sacred texts that way — including among the premodern followers of Abrahamic religions.

In ancient Judaism, the sages were divided on how to interpret the Bible. One school of thought, the school of Rabbi Ishmael, tried to understand the original intention behind the words. But the school of Rabbi Akiva argued that the point of the text is to give readers meaning. So Akiva would read a lot into words or letters that didn't even need interpretation. ('And' just means 'and'!) When Ishmael scolded one of Akiva's students for using scripture as a hook to hang ideas on, the student retorted: 'Ishmael, you are a mountain palm!' Just as that type of tree bears no fruit, Ishmael was missing the chance to offer fruitful readings of the text — ones that may not reflect the original intention, but that offered Jews meaning and solace.

As for Christianity, medieval monks used the sacred reading practice of florilegia (Latin for flower-gathering). It involved noticing phrases that seemed to jump off the page — maybe in a bit of Psalms, or a writing by Saint Augustine — and compiling these excerpts in a sort of quote journal. Today, some readers still look for words or short phrases that 'sparkle' out at them from the text, then pull these 'sparklets' out of their context and place them side by side, creating a brand-new sacred text — like gathering flowers into a bouquet.

Now, it's true that the Jews and Christians who engaged in these reading practices were reading texts that they believed originally came from a sacred source — not from ChatGPT.
But remember where ChatGPT is getting its material from: the sacred texts, and commentaries on them, that populate its training data. Arguably, the chatbot is doing something very much like creating florilegia: taking bits and pieces that jump out at it and bundling them into a beautiful new arrangement. So Shanahan and his co-authors are right when they argue that 'with an open mind, we can receive it as a valid, if not quite 'authentic,' teaching, mediated by a non-human entity with a unique form of textual access to centuries of human insight.'

To be clear, the human element is crucial here. Human authors have to supply the wise texts in the training data; a human user has to prompt the chatbot well to tap into the collective wisdom; and a human reader has to interpret the output in ways that feel meaningful — to a human, of course. Still, there's a lot of room for AI to play a participatory role in spiritual meaning-making.

The risks of generating sacred texts on demand

The paper's authors caution that anyone who prompts a chatbot to generate a sacred text should keep their critical faculties about them; we already have reports of people falling prey to messianic delusions after engaging in long discussions with chatbots that they believe to contain divine beings. 'Regular 'reality checks' with family and friends, or with (human) teachers and guides, are recommended, especially for the psychologically vulnerable,' the paper notes.

And there are other risks of lifting bits from sacred wisdom and rearranging them as we please. Ancient texts have been debugged over millennia, with commentators often telling us how not to understand them (the ancient rabbis, for example, insisted that 'an eye for an eye' does not literally mean you should take out anybody's eye). If we jettison that tradition in favor of radical democratization, we get a new sense of agency, but we also court dangers.
Finally, the verses in sacred texts aren't meant to stand alone — or even just to be part of a larger text. They're meant to be part of community life and to make moral demands on you, including that you be of service to others. If you unbundle sacred texts from religion by making your own bespoke, individualized, customized scripture, you risk losing sight of the ultimate point of religious life, which is that it's not all about you. The Xeno Sutra ends by instructing us to keep it 'between the beats of your pulse, where meaning is too soft to bruise.' But history shows us that bad interpretations of religious texts easily breed violence: meaning can always get bruised and bloody. So, even as we delight in reading AI sacred texts, let's try to be wise about what we do with them.


Tom's Guide
GPT-5 users aren't happy with the update — try these alternative chatbots instead
For months, the OpenAI team has been building up to a massive launch, speaking of a landscape-changing update that would redefine what AI could do. Now, that update is here in the form of GPT-5.

However, not everyone is happy with this newest version of ChatGPT. Critics have complained not only that it isn't a great update, but that they would rather use the earlier version. Yet once your device updates to GPT-5, there is no option to use older versions of the tool. OpenAI has since said that it will correct this, offering users the ability to use GPT-4 instead.

Of course, these are still early days. GPT-5 will get better over time with updates, as the team has the chance to understand how people are using it. Not everyone is unhappy, either; there seems to be a big split, with just as many enjoying the tool as those who aren't getting on with it.

However, if you fall into the camp that feels let down by GPT-5, the good news is that this is a packed market, with plenty of other great tools to try instead. These are our top picks.

Google and OpenAI have been battling it out since the earliest stages of chatbots. The two titans of tech have the money and technology to be the best, and that makes Gemini a worthy opponent for ChatGPT. With Gemini, you're getting one of the best chatbots for coding, and also a model that isn't quite as sycophantic as ChatGPT.

Gemini has a lot of the same features as ChatGPT and offers fairly similar benefits. One feature that stands out is the depth of its deep research: it can produce incredibly detailed responses, diving into every corner of the internet for your queries. Gemini isn't as stylish as ChatGPT, it does have a tendency to misunderstand what you're asking sometimes, and we've noticed a higher rate of hallucinations. However, it is an otherwise great ChatGPT alternative.
Claude has very quickly risen through the ranks to become a leading competitor in this crowded market. The chatbot focuses on deliberate and careful research, and the team behind the tool reflects this, performing lots of independent research and building a chatbot in keeping with that philosophy. It's great for coding, allowing you to publish any tool you code to its own web link, and offers a wide variety of pre-built apps to explore. Like Gemini, it is also more to-the-point than ChatGPT, focusing less on being friendly and more on getting you an immediate answer. While it has always performed well on benchmarks, it hasn't been as successful as ChatGPT or Gemini overall, but it looks set to be a truly competitive force over the next few years.

One feature that will either make or break it for you: Claude has no memory. Unlike ChatGPT and Gemini, it can't store information about you; those chatbots use that information to personalize answers, giving responses that better fit your needs.

xAI's Grok has a slippery reputation. On benchmarks, it is one of the most powerful AI chatbots around. It is great for coding and has proved a formidable force when it comes to deep research. However, it has also found itself in constant controversy, whether that's because of its AI girlfriend feature or repeated incidents where the chatbot starts to support conspiracy theories. Grok doesn't get the same recognition as the options above, and there is good reason for that, with a somewhat mixed bag of experiences. However, that is not to say it isn't one of the best chatbots on the market, especially when put through legitimate AI tests.

Perplexity isn't technically a chatbot. Think of it more like an overpowered Google, complete with lots of AI tricks up its imaginary sleeve. If you've been using ChatGPT as an alternative to Google, Perplexity could feel like a natural fit.
It thrives when you ask it questions, or if you're simply using ChatGPT to learn new information about the world around you. However, it falls down on some of the newer features we're seeing in chatbots: its image and video generation isn't as good as the competition's, and it can't generate code like its alternatives. While it is more than capable of handling most queries, it does have some areas that fall outside of its remit. However, these are few and far between for the average user.

One of the lesser-known options out there right now, Le Chat is a French chatbot service created by Mistral. It doesn't perform as well as its peers above on benchmarks, and arguably isn't quite as smart. However, it makes up for that in other ways that might matter to certain users. In a recent examination, it scored the highest of all chatbots on privacy and how it handles data. It also has a function for ultra-fast responses, generating up to 10 times faster than normal. On top of that, Le Chat doesn't store any data from your conversations and, like Claude, has no memory of previous chats. In other words, this is the model to switch to for the privacy-conscious.


Washington Post
These workers don't fear artificial intelligence. They're getting degrees in it.
SAN FRANCISCO — Vicky Fowler, who uses ChatGPT for tasks including writing and brainstorming, asked the chatbot, out of curiosity, to do something more challenging: program a working calculator. To her surprise, it took just seconds. Discovering how fast the artificial intelligence tool could execute a complicated task, Fowler, who has spent two decades working on data protection at a large bank, felt a new urgency to learn more about AI. So she enrolled in the online master's program in AI at the University of Texas at Austin and expects to graduate next year.