
Latest news with #Scalzi

Don't Believe What AI Told You I Said

Atlantic

a day ago


John Scalzi is a voluble man. He is the author of several New York Times best sellers and has been nominated for nearly every major award that the science-fiction industry has to offer—some of which he's won multiple times. Over the course of his career, he has written millions of words, filling dozens of books and 27 years' worth of posts on his personal blog. All of this is to say that if one wants to cite Scalzi, there is no shortage of material. But this month, the author noticed something odd: He was being quoted as saying things he'd never said.

'The universe is a joke,' reads a meme featuring his face. 'A bad one.' The lines are credited to Scalzi and were posted, atop different pictures of him, to two Facebook communities boasting almost 1 million collective members. But Scalzi never wrote or said those words. He also never posed for the pictures that appeared with them online. The quote and the images that accompanied it were all 'pretty clearly' AI generated, Scalzi wrote on his blog.

'The whole vibe was off,' Scalzi told me. Although the material bore a superficial similarity to something he might have said—'it's talking about the universe, it's vaguely philosophical, I'm a science-fiction writer'—it was not something he agreed with. 'I know what I sound like; I live with me all the time,' he noted.

Bogus quotations on the internet are not new, but AI chatbots and their hallucinations have multiplied the problem at scale, misleading many more people and misrepresenting the beliefs not just of big names such as Albert Einstein but also of lesser-known individuals. In fact, Scalzi's experience caught my eye because a similar thing had happened to me. In June, a blog post appeared on the Times of Israel website, written by a self-described 'tech bro' working in the online public-relations industry.
Just about anyone can start a blog at the Times of Israel—the publication generally does not edit or commission the contents—which is probably why no one noticed that this post featured a fake quote, sourced to me and The Atlantic. 'There's nothing inherently nefarious about advocating for your people's survival,' it read. 'The problem isn't that Israel makes its case. It's that so many don't want it made.'

As with Scalzi, the words attributed to me were ostensibly adjacent to my area of expertise. I've covered the Middle East for more than a decade, including countless controversies involving Israel, most recently the corrupt political bargain driving Prime Minister Benjamin Netanyahu's actions in Gaza. But like Scalzi, I'd never said, and never would say, something so mawkish about the subject. I wrote to the Times of Israel, and an editor promptly apologized and took the article down. (Miriam Herschlag, the opinion and blogs editor at the paper, later told me that its blogging platform 'does not have an explicit policy on AI-generated content.')

Getting the post removed solved my immediate problem. But I realized that if this sort of thing was happening to me—a little-known literary figure in the grand scheme of things—it was undoubtedly happening to many more people. And though professional writers such as Scalzi and myself have platforms and connections to correct falsehoods attributed to us, most people are not so lucky.

Last May, my colleagues Damon Beres and Charlie Warzel reported on 'Heat Index,' a magazine-style summer guide that was distributed by the Chicago Sun-Times and The Philadelphia Inquirer. The insert included a reading list with fake books attributed to real authors, and it quoted one Mark Ellison, a nature guide who is not a professional writer and who never said the words credited to him. When contacted, the author of 'Heat Index' admitted to using ChatGPT to generate the material.
Had The Atlantic never investigated, there likely would have been no one to speak up for Ellison. The negative consequences of this content go well beyond the individuals misquoted. Today, chatbots have replaced Google and other search engines as many people's primary source of online information. Everyday users are employing these tools to inform important life decisions and to make sense of politics, history, and the world around them. And they are being deceived by fabricated content that can leave them worse off than when they started.

This phenomenon is obviously bad for readers, but it's also bad for writers, Gabriel Yoran told me. A German entrepreneur and author, Yoran recently published a book about the degradation of modern consumer technology called The Junkification of the World. Ironically, he soon became an object lesson in a different technological failure. Yoran's book made the Der Spiegel best-seller list, and many people began reviewing and quoting it—and also, Yoran soon noticed, misquoting it. An influencer's review on XING, the German equivalent of LinkedIn, included a passage that Yoran never wrote. 'There's quotes from the book that are mine, and then there is at least one quote that is not in the book,' he recalled. 'It could have been. It's kind of on brand. The tone of voice is fitting. But it's not in the book.'

After this and other instances in which he received error-ridden AI-generated feedback on his work, Yoran told me that he 'felt betrayed in a way.' He worries that in the long run, the use of AI in this manner will degrade the quality of writing by demotivating those who produce it. If material is just going to be fed into a machine that will then regurgitate a sloppy summary, 'why weigh every word and think about every comma?'

Like other online innovations such as social media, large language models do not so much create problems as supercharge preexisting ones.
The internet has long been awash with fake quotations attributed to prominent personalities. As Abraham Lincoln once said, 'You can't trust every witticism superimposed over the image of a famous person on the internet.' But the advent of AI interfaces churning out millions of replies to hundreds of millions of people—ChatGPT and Google's Gemini have more than 1 billion active users combined—has turned what was once a manageable chronic condition into an acute infection that is metastasizing beyond all containment.

The process by which this happens is simple. Many people do not know when LLMs are lying to them, which is unsurprising given that the chatbots are very convincing fabulists, serving up slop with unflappable confidence to their unsuspecting audience. That compromised content is then pumped at scale by real people into their own online interactions. The result: Meretricious material from chatbots is polluting our public discourse with Potemkin pontification, derailing debates with made-up appeals to authority and precedent, and in some cases, defaming living people by attributing things to them that they never said and do not agree with.

More and more people are having the eerie experience of knowing that they have been manipulated or misled, but not being sure by whom. As with many aspects of our digital lives, responsibility is too diffuse for accountability. AI companies can chide users for trusting the outputs they receive; users can blame the companies for providing a service—and charging for it—that regularly lies. And because LLMs are rarely credited for the writing that they help produce, victims of chatbot calumny struggle to pinpoint which model did the deed after the fact.

You don't have to be a science-fiction writer to game out the ill effects of this progression, but it doesn't hurt. 'It is going to become harder and harder for us to understand what things are genuine and what things are not,' Scalzi told me.
'All that AI does is make this machinery of artifice so much more automated,' especially because the temptation for many people is 'to find something online that you agree with and immediately share it with your entire Facebook crowd' without checking to see if it's authentic. In this way, Scalzi said, everyday people uncritically using chatbots risk becoming a 'willing route of misinformation.'

The good news is that some AI executives are beginning to take the problems with their products seriously. 'I think that if a company is claiming that their model can do something,' OpenAI CEO Sam Altman told Congress in May 2023, 'and it can't, or if they're claiming it's safe and it's not, I think they should be liable for that.'

The bad news is that Altman never actually said this. Google's Gemini just told me that he did.

Manchego moon

Winnipeg Free Press

10-05-2025


This is a very silly book. John Scalzi is an award-winning writer who has produced many entertaining novels and stories, endearing him to many readers. His work is always off-beat, often sardonic and witty, and his tales are some of the most inventive in modern science fiction. His latest novel, When the Moon Hits Your Eye, is perhaps the most far-out he has ventured, and his fan base will definitely love it. Did we mention it was silly?

Scalzi 'workshopped' the novel's premise at a convention, and attendees had such a tremendous reaction that he was encouraged to go all-out with his idea. It's one of those 'what if' questions that you might find you and your friends asking whilst completely wasted at a party. In essence: 'What if, one day, the Moon suddenly turned to cheese?' That's it. That's the entire premise for the book. Silly, right?

When the Moon Hits Your Eye is basically a collection of vignettes exploring the consequences of the Moon's unexpected transformation. Scalzi imagines how people in a wide variety of jobs, lifestyles and positions are affected by the new lunar reality. Sure, there's the scientific impact: a cheese Moon with the same mass as the original would be much larger, since cheese is far less dense than lunar regolith, and cheese of that size isn't particularly stable. Physics dictates that it would undergo some changes, with drastic consequences for Earth's residents.

Also, NASA wouldn't be happy. At the time Scalzi wrote the book, the space agency was fully prepared to send astronauts back to the Moon as a stepping stone to the stars. (That may not happen now, as NASA funding is being reallocated.) Scalzi's astronauts won't get the chance, especially since narcissist trillionaire Jody Bannon, who has his own vanity space program (yes, he mentions a real one), is going anyway. (Was Scalzi psychic?)

Beyond the scientific and technical impact, Scalzi imagines how the Moon's alteration would affect many others.
How would organized religion deal with imminent catastrophe? Was this a miracle, or a sign from a benevolent or a malevolent god? Would we look to spiritual leaders for reassurance or blissful acceptance of our fate? On the opposite side of morality, what about Hollywood? How would sensational institutions capitalize on disaster? And in terms of capital, would financial markets be able to cope with runs on banks? What would be the most secure form of monetary exchange? Would the military establishment have any role in world affairs anymore?

Politics would be in an apoplectic state trying to maintain the status quo following an event of this magnitude. The White House would hold press conferences to spin its complete control of the situation, despite its total inability to do so. The White House chief of staff, to his colleagues: 'So let me summarize. Sometime yesterday afternoon the moon was replaced by a globe of cheese…' And then, reassuring the public by lying completely: 'There is no danger at this time, nor do we anticipate any danger from it in the near future.'

Scalzi includes scenes of conflict even in cheese shops, where rioters threaten the lives of retailers simply trying to make a living. And because cheese is itself now the enemy, nothing is sacred. Not even Moon rocks brought back to Earth by Apollo astronauts are safe — they too have been turned to cheese, and are now sought by the wealthy elite who will stop at nothing to make a lunar grilled cheese sandwich.

Scalzi's keen sense of social media and pop culture allows him to show the impact of the Moon's demise through the lens of Reddit, Slack, the publishing industry itself and even Saturday Night Live (where skits bomb as much as they seem to in real life anyway).

This ain't your grandpa's science fiction, to be sure. When the Moon Hits Your Eye is a prime example of absurdist literature, reminiscent of Kafka's The Metamorphosis, which explored how an impossible event might be viewed by society. It even smacks of sci-fi author Larry Niven's classic question 'What can you say about chocolate covered manhole covers?', posed decades ago in the story of the same name, challenging readers' already-stretched imaginations. Fans of Douglas Adams will find Scalzi's work delightful.

Scalzi concludes When the Moon Hits Your Eye with chapters illustrating the fragility of belief and the way in which factual events are regarded across time, challenging our notions of history and truth. His characters seem to be living in a world very much like the one we find ourselves in today. And just as silly.

Chris Rutkowski is a Winnipeg science writer and sci-fi fan.

Henderson State University launches aviation courses for non-degree-seeking pilots

Yahoo

24-04-2025


ARKADELPHIA, Ark. — A change made last week at Henderson State University is ensuring more people are 'Reddie' to fly. The aviation flight training program's transformation into the Arkansas Aviation Academy is more than a name change: it expands flight education beyond those enrolled at the school.

Three courses are now open to non-degree students: a commercial multi-engine add-on course, a tailwheel course, and a Certified Flight Instructor spin-training course. They are available only to people who already hold a commercial pilot certificate.

Shannon Clardy, Henderson State University Dean of the College of Aviation, Science, and Nursing, said the 16-plane fleet allows more room for advanced training. 'That's what we're focused on right now. As our fleet capacity grows, then our offerings will also grow,' Clardy said. The change also fills a hiring need in the aviation industry. 'That is all experience that pilots will need to move out into industry, whether they are flying charter airplanes, flying freight, or flying passengers,' Clardy explained.

Taylor Scalzi went from student to instructor. She completed the commercial multi-engine add-on course last week. 'It really broadens my horizons of where I can go, and what I can do,' Scalzi explained.

Each of the three initial courses can be completed in less than a week. Accommodations will be available on campus for participating pilots during their week-long training. Scalzi said she is excited to see who the academy brings in. 'It is a very dynamic group. It could be anybody from 18 years old to in their 50s. It doesn't matter,' Scalzi said.

The creation of an aviation advisory board is underway, and the academy is working on alumni connections, along with pursuing potential partnerships with airlines, which could help with landing a job. Chad Cocroft is a junior HSU aviation major.
He said the course is his 'next step' in achieving his dream of becoming a career pilot. 'I enjoy it. I want to do it for the rest of my life,' Cocroft said.

Henderson's long-established professional pilot bachelor's degree program is the only public university program of its kind in the state. For additional information, call (870) 230-5585 to schedule training.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
