We Made a Film With AI. You'll Be Blown Away—and Freaked Out.
Welcome to the premiere of 'My Robot & Me.' Please silence your phones, chew your popcorn quietly and remember: Every visual you're about to see was generated with AI. Most of the audio too, except my voice.
Some of it's totally wild. You won't believe that no real cameras were used. Some of it you'll laugh at, because it's clearly not real. I promise you, I did not have facial reconstructive surgery between scenes.
But enough from me. Hopefully by now you've watched the film above, complete with its behind-the-scenes look. Just come back—we've got some lessons to share.
Yes, we. To make this film, I teamed up with Jarrard Cole, a real human producer. We met over a decade ago here at WSJ, experimenting with helmet cams and new video formats like VR. These days, he's become obsessed with AI video tools.
So I challenged him to make a totally AI video. How hard could it be?
Very hard.
Over a thousand clips, days of work and who knows how much data-center computing power later, we ended up with a three-minute film—about my life with a new kind of efficiency robot. Even if you don't care about camera angles or storyboards, you might care about what this says about using AI in any job.
Just a few years ago, an AI-generated clip of Will Smith eating spaghetti went viral because it was terrifyingly bad. Now, these tools can pop out scenes that look nearly flawless—at least at first glance.
After testing a bunch of options, we landed on Google's Veo and a tool from startup Runway AI. They gave us the best mix of quality and control. OpenAI's Sora was nowhere near as good. On May 20, Google dropped Veo 3, which adds AI-powered audio, including dialogue and sound effects. Check out Will Smith now!
Yep, if you can think it, you can generate it. A podcasting baby, a SWAT team storming a bounce house, a futuristic Mayan metropolis. But we weren't going for sight gags—we wanted to tell a story, something with characters, humor and meaning. That proved to be a lot harder.
Think you can paste in a script and out pops a Netflix hit? Cute. Every shot of ours was the result of many prompts and generation attempts. And to keep characters and sets consistent from scene to scene, Jarrard invented a whole production pipeline.
The quick version: We used the AI image generator Midjourney to create our sets (a suburban neighborhood, a newsroom) and to design our robot star. Then we used photos of real me to create AI me. We uploaded those to Runway or Veo, where we wrote prompts. Here's a short one:
Low angle shot: Joanna does push-ups at a brisk pace, maintaining a straight line from head to heel. The robot stands above, monitoring and guiding.
That careful, specific wording made a huge difference. As a filmmaker, Jarrard could break down scenes beat by beat, specifying camera angles, lighting styles and movement. That nail-biter ending? Every shot was carefully described to build suspense.
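For the curious, that beat-by-beat prompt discipline can be sketched in a few lines of code. This is a hypothetical illustration, not the filmmakers' actual pipeline: the `Shot` structure and its field names are assumptions, meant only to show how fixing the order of angle, action and detail keeps every generation attempt starting from the same careful wording.

```python
# Hypothetical sketch of structured prompt-building for AI video shots.
# The Shot fields and to_prompt() helper are illustrative assumptions,
# not part of any real Runway or Veo tool.
from dataclasses import dataclass

@dataclass
class Shot:
    angle: str   # camera direction, e.g. "Low angle shot"
    action: str  # what the characters do in this beat
    detail: str  # consistency notes the model needs every time

    def to_prompt(self) -> str:
        # Join the beats in a fixed order so each regeneration
        # attempt begins from identical, specific wording.
        return f"{self.angle}: {self.action} {self.detail}"

shot = Shot(
    angle="Low angle shot",
    action=("Joanna does push-ups at a brisk pace, "
            "maintaining a straight line from head to heel."),
    detail="The robot stands above, monitoring and guiding.",
)
print(shot.to_prompt())
```

Running this reproduces the example prompt above; the point is that the structure, not the code, is what made retries consistent.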
And it still took us over 1,000 clips. Some were complete disasters, with anatomical nightmares and random new characters. Even in 'good' scenes, my face looks different in almost every shot.
There's a popular term for AI-generated content these days: slop. And yeah, our film has some slop vibes. Some shots are overly smooth and parts look undeniably fake. But if we did our jobs right, it still accomplishes what a film should. Maybe it made you laugh, maybe it made you think.
And we did it without a massive budget, a props department or a special-effects team. The total cost would have been a few thousand dollars for Google and Runway's AI video tools. (We paid for some of it, and the companies gave us special access to the rest.)
As two seasoned video producers, we can say AI opens up new ways to create things we never could before. But it can't replace the craft.
These tools are nothing without human input, creativity and original ideas. As the film hopefully reminded you, we are not robots. Live a little.
Write to Joanna Stern at joanna.stern@wsj.com