Colleen Hoover's ‘Reminders Of Him' Moves Ahead Of Margot Robbie & Jacob Elordi's ‘Wuthering Heights' In Pre-Valentine's Day 2026 Frame
Reminders of Him stars Maika Monroe, Lauren Graham, Bradley Whitford, Lainey Wilson and Nicholas Duvernay. The source material was a No. 1 New York Times bestselling tome about motherhood, forgiveness and the power of one love to heal even the most shattered heart.
Hoover adapted her own novel with Lauren Levine. Pic is produced by Hoover and Levine for their production company, Heartbones Entertainment. Gina Matthews is producing through her Little Engine Productions. Robin Fisichella is executive producing.
First published in 2022, Reminders of Him has sold more than 6 million copies in the United States and has been translated into 45 languages. The feature take of Hoover's It Ends With Us was a surprise box office sensation last summer, grossing over $351M and netting $207M after all global ancillaries and expenses. In the wake of the success of that Justin Baldoni and Blake Lively movie, Hollywood jumped on the Hoover bandwagon much as it did with authors such as Stephen King in the 1980s and John Grisham in the 1990s.
Reminders of Him will share the Feb. 6-8 weekend next year with the Angel Studios comedy Solo Mio.
Left on Valentine's Day weekend with Wuthering Heights are an untitled NEON horror movie, the Sony Animation feature Goat, and the Chris Hemsworth-Barry Keoghan Amazon MGM Studios movie Crime 101.
Related Articles


Gizmodo
3 hours ago
The Big ‘Superman' Speech Happened After This Emotional Behind-the-Scenes Moment
Sometimes the most magical thing about a movie is seeing how it came together. We can watch the final product and feel however it makes us feel, but that's usually disconnected from all the work that went into it. You rarely think about the different takes, different conversations, and intense work that go into every single second, especially a film's biggest, most important emotional moment.

James Gunn's Superman is now available to watch at home, and part of the release is a 60-minute special feature called 'Adventures in Making Superman,' which features footage from on set. To show just how intimate the feature gets, Gunn shared a nearly seven-minute clip on his social media from the filming of the film's emotional climax, when Superman gives Lex Luthor the speech about his flaws and humanity. Actor David Corenswet didn't quite connect with the words at first, so he and Gunn really dug into them on the day. Their conversation is almost as heartwarming as the resulting performance.

That's just movie-making magic right there: actor and director challenging each other to get on the same page, making sure that the delivery of the speech tracks with everything that happened before, each totally open to what the other is saying. Even though Gunn wrote the script and directed, he's genuinely interested in Corenswet's thoughts. And when they do get on the same page, it nearly brings tears to the eyes of the director and his producer and fellow DC Studios head, Peter Safran.

The rest of the documentary, and several other features, are on the digital release, which is available now. Later, people who purchase it will have their download updated with a Gunn director commentary and more.
Yahoo
4 hours ago
New Harry Potter doughnuts, Golden Snitch Latte at Krispy Kreme
Harry Potter fans, the hot light has been illuminated at Krispy Kreme in anticipation of your arrival to taste the new Houses of Hogwarts doughnut collection.

New Houses of Hogwarts, Harry Potter menu at Krispy Kreme
Starting Monday, Aug. 18, a new partnership with Warner Bros. Discovery is hitting Krispy Kreme for a limited time. The collaborative doughnut menu is inspired by the four iconic Hogwarts houses and also includes a specialty Sorting Hat Doughnut with a mystery-colored Kreme filling to represent one of the four school houses. The Sorting Hat Doughnut is dipped in chocolate-flavored icing, sprinkled with shimmering gold stars and gold sugar, and topped with a Sorting Hat piece.

In addition to the Sorting Hat Doughnut, the menu includes:
- The Gryffindor Doughnut, an unglazed shell doughnut filled with cookie butter-flavored Kreme, dipped in red icing and Biscoff cookie crumble, and topped with golden icing drizzles and the Gryffindor crest.
- The Slytherin Doughnut, an Original Glazed doughnut topped with chocolate and green buttercreme-flavored swirls, a chocolate cookie sugar blend, and the Slytherin crest.
- The Hufflepuff Doughnut, an unglazed shell doughnut filled with brown butter toffee-flavored custard, dipped in golden yellow icing, and topped with black chocolate drizzle, cookie crunch, and the Hufflepuff crest.
- The Ravenclaw Doughnut, an Original Glazed doughnut dipped in blueberry-flavored icing and topped with Ravenclaw sprinkles and the crest.

In addition to the sweet pastries, Krispy Kreme also crafted a new Golden Snitch Latte: a caramel toffee-inspired latte topped with whipped cream, Biscoff cookie crumbles, and a sprinkle of golden shimmer sugar.

The Harry Potter: Houses of Hogwarts doughnuts are available individually as well as in custom-designed dozens boxes in-shop, for pickup or delivery. A six-pack box from the collection will also be delivered to select retailers.

Free Krispy Kreme for Harry Potter fans
Additionally, on Aug. 23, guests who proudly represent their favorite Harry Potter house at participating shops can enjoy one free Original Glazed doughnut, no purchase necessary.


Atlantic
4 hours ago
Don't Believe What AI Told You I Said
John Scalzi is a voluble man. He is the author of several New York Times best sellers and has been nominated for nearly every major award that the science-fiction industry has to offer, some of which he's won multiple times. Over the course of his career, he has written millions of words, filling dozens of books and 27 years' worth of posts on his personal blog. All of this is to say that if one wants to cite Scalzi, there is no shortage of material.

But this month, the author noticed something odd: He was being quoted as saying things he'd never said. 'The universe is a joke,' reads a meme featuring his face. 'A bad one.' The lines are credited to Scalzi and were posted, atop different pictures of him, to two Facebook communities boasting almost 1 million collective members. But Scalzi never wrote or said those words. He also never posed for the pictures that appeared with them online. The quote and the images that accompanied them were all 'pretty clearly' AI-generated, Scalzi wrote on his blog.

'The whole vibe was off,' Scalzi told me. Although the material bore a superficial similarity to something he might have said ('it's talking about the universe, it's vaguely philosophical, I'm a science-fiction writer'), it was not something he agreed with. 'I know what I sound like; I live with me all the time,' he noted.

Bogus quotations on the internet are not new, but AI chatbots and their hallucinations have multiplied the problem at scale, misleading many more people and misrepresenting the beliefs not just of big names such as Albert Einstein but also of lesser-known individuals. In fact, Scalzi's experience caught my eye because a similar thing had happened to me.

In June, a blog post appeared on the Times of Israel website, written by a self-described 'tech bro' working in the online public-relations industry.
Just about anyone can start a blog at the Times of Israel (the publication generally does not edit or commission the contents), which is probably why no one noticed that this post featured a fake quote, sourced to me and The Atlantic. 'There's nothing inherently nefarious about advocating for your people's survival,' it read. 'The problem isn't that Israel makes its case. It's that so many don't want it made.'

As with Scalzi, the words attributed to me were ostensibly adjacent to my area of expertise. I've covered the Middle East for more than a decade, including countless controversies involving Israel, most recently the corrupt political bargain driving Prime Minister Benjamin Netanyahu's actions in Gaza. But like Scalzi, I'd never said, and never would say, something so mawkish about the subject.

I wrote to the Times of Israel, and an editor promptly apologized and took the article down. (Miriam Herschlag, the opinion and blogs editor at the paper, later told me that its blogging platform 'does not have an explicit policy on AI-generated content.')

Getting the post removed solved my immediate problem. But I realized that if this sort of thing was happening to me, a little-known literary figure in the grand scheme of things, it was undoubtedly happening to many more people. And though professional writers such as Scalzi and I have platforms and connections to correct falsehoods attributed to us, most people are not so lucky.

Last May, my colleagues Damon Beres and Charlie Warzel reported on 'Heat Index,' a magazine-style summer guide that was distributed by the Chicago Sun-Times and The Philadelphia Inquirer. The insert included a reading list with fake books attributed to real authors, and quoted one Mark Ellison, a nature guide, not a professional writer, who never said the words credited to him. When contacted, the author of 'Heat Index' admitted to using ChatGPT to generate the material.
Had The Atlantic never investigated, there likely would have been no one to speak up for Ellison.

The negative consequences of this content go well beyond the individuals misquoted. Today, chatbots have replaced Google and other search engines as many people's primary source of online information. Everyday users are employing these tools to inform important life decisions and to make sense of politics, history, and the world around them. And they are being deceived by fabricated content that can leave them worse off than when they started.

This phenomenon is obviously bad for readers, but it's also bad for writers, Gabriel Yoran told me. A German entrepreneur and author, Yoran recently published a book about the degradation of modern consumer technology called The Junkification of the World. Ironically, he soon became an object lesson in a different technological failure.

Yoran's book made the Der Spiegel best-seller list, and many people began reviewing and quoting it, and also, Yoran soon noticed, misquoting it. An influencer's review on XING, the German equivalent of LinkedIn, included a passage that Yoran never wrote. 'There's quotes from the book that are mine, and then there is at least one quote that is not in the book,' he recalled. 'It could have been. It's kind of on brand. The tone of voice is fitting. But it's not in the book.'

After this and other instances in which he received error-ridden AI-generated feedback on his work, Yoran told me that he 'felt betrayed in a way.' He worries that in the long run, the use of AI in this manner will degrade the quality of writing by demotivating those who produce it. If material is just going to be fed into a machine that will then regurgitate a sloppy summary, 'why weigh every word and think about every comma?'

Like other online innovations such as social media, large language models do not so much create problems as supercharge preexisting ones.
The internet has long been awash with fake quotations attributed to prominent personalities. As Abraham Lincoln once said, 'You can't trust every witticism superimposed over the image of a famous person on the internet.' But the advent of AI interfaces churning out millions of replies to hundreds of millions of people (ChatGPT and Google's Gemini have more than 1 billion active users combined) has turned what was once a manageable chronic condition into an acute infection that is metastasizing beyond all containment.

The process by which this happens is simple. Many people do not know when LLMs are lying to them, which is unsurprising given that the chatbots are very convincing fabulists, serving up slop with unflappable confidence to their unsuspecting audience. That compromised content is then pumped at scale by real people into their own online interactions.

The result: Meretricious material from chatbots is polluting our public discourse with Potemkin pontification, derailing debates with made-up appeals to authority and precedent, and in some cases, defaming living people by attributing things to them that they never said and do not agree with. More and more people are having the eerie experience of knowing that they have been manipulated or misled, but not being sure by whom.

As with many aspects of our digital lives, responsibility is too diffuse for accountability. AI companies can chide users for trusting the outputs they receive; users can blame the companies for providing a service, and charging for it, that regularly lies. And because LLMs are rarely credited for the writing that they help produce, victims of chatbot calumny struggle to pinpoint which model did the deed after the fact.

You don't have to be a science-fiction writer to game out the ill effects of this progression, but it doesn't hurt. 'It is going to become harder and harder for us to understand what things are genuine and what things are not,' Scalzi told me.
'All that AI does is make this machinery of artifice so much more automated,' especially because the temptation for many people is 'to find something online that you agree with and immediately share it with your entire Facebook crowd' without checking to see if it's authentic. In this way, Scalzi said, everyday people uncritically using chatbots risk becoming a 'willing route of misinformation.'

The good news is that some AI executives are beginning to take the problems with their products seriously. 'I think that if a company is claiming that their model can do something,' OpenAI CEO Sam Altman told Congress in May 2023, 'and it can't, or if they're claiming it's safe and it's not, I think they should be liable for that.'

The bad news is that Altman never actually said this. Google's Gemini just told me that he did.