Latest news with #ZiffDavis


CNET
a day ago
- Business
- CNET
Congress Isn't Stepping Up to Regulate AI. Where Does That Leave Us Now?
When you turn on the faucet, you expect the water that comes out to be clean. When you go to the bank, you expect your money will still be there. When you go to the doctor, you expect they will keep your medical information private. Those expectations exist because there are rules to protect you. But when a technology arises almost overnight, the problems come first. The rules, you'd hope, would follow.

Right now, there's no technology with more hype and attention than artificial intelligence. Since ChatGPT burst onto the scene in 2022, generative AI has crept into nearly every corner of our lives. AI boosters say it's transformative, comparing it to the birth of the internet or the Industrial Revolution in its potential to reshape society. The nature of work itself will be transformed. Scientific discovery will accelerate beyond our wildest dreams. All this from a technology that, right now, is mostly just kind of good at writing a paragraph.

The concerns about AI? They're legion. There are questions of privacy and security. There are concerns about how AI affects the climate and the environment. There's the problem of hallucination -- that AI will completely make stuff up, with tremendous potential for misinformation. There are liability concerns: Who is responsible for the actions of an AI, or of an autonomous system running on one? Then there are the already numerous lawsuits around copyright infringement related to training data. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Those are just today's worries. Some argue that a potential artificial intelligence smarter than humans could pose a massive, existential threat to humanity.

What to do about AI is an international debate. In Europe, the EU AI Act, which is currently being phased in, imposes guidelines on AI-based systems based on their risk to individual privacy and safety. In the US, meanwhile, Congress recently proposed barring states from enforcing their own rules around AI for a decade, with no national framework in place, before backing off during last-minute negotiations around the big tax and spending bill.

"I think in the end, there is a balance here between enjoying the innovation of AI and mitigating the risks that come with AI," Alon Yamin, CEO of Copyleaks, which runs an AI-powered system for detecting AI-generated writing, told me. "If you're going too far in one end, you will lose something. The situation now is that we're very far to the direction of no regulation at all."

Here's a look at some of the issues raised around AI, how regulations might or might not address them and what it all means for you.

Different approaches, with an ocean in between

Listen to the debates in Congress about how to regulate artificial intelligence, and a refrain quickly becomes apparent: AI companies and many US politicians don't want anything like the rules that exist in Europe.

The EU AI Act has become shorthand for a strict regulatory structure around AI. In brief, it requires companies to ensure their technology is safe, transparent and responsible. It sorts AI technologies into categories based on the level of risk. The highest-risk categories are either prohibited entirely (things like social scoring or manipulative technologies) or heavily restricted (things like biometrics and tools for hiring and law enforcement).
Lower-risk technologies, like most of the work done by the large language models we're familiar with (ChatGPT, etc.), are subject to less scrutiny but still must meet certain transparency and privacy requirements. A key feature of the EU's standards, and of those in other places like the United Kingdom, is transparency about the use of AI. "What these things are fundamentally saying is, we're not trying to block the use of AI but giving consumers the right to opt into it or not or even to know it's even there," said Ben Colman, CEO of the identity verification company Reality Defender.

During a May hearing of the US Senate Commerce, Science and Transportation Committee on AI regulation, Sen. Ted Cruz referred to the EU's standards as "stifling" and "heavy-handed." Cruz, a Texas Republican, specifically objected to any kind of prior approval for AI technologies. He asked OpenAI CEO Sam Altman what effect similar rules would have on the industry in the US, and Altman said it would be "disastrous."

Earlier this month, Meta said it wouldn't sign the EU's Code of Practice for general-purpose AI, which is intended to provide a framework to help AI companies follow the regulations of the EU AI Act. In a post on LinkedIn, Joel Kaplan, Meta's chief global affairs officer, called it an "over-reach" that "will throttle the development and deployment of frontier AI models in Europe." "Europe is heading down the wrong path on AI," Kaplan said.

But regulations focused on high-risk systems like those used in hiring, health care and law enforcement might miss some of the more subtle ways AI can affect our lives. Think about the spread of AI-generated slop on social media or the creation of realistic-looking videos for political misinformation. Those are also social media issues, and the battle over regulating that technology to minimize its harms may illuminate what could happen with AI.

Lessons from social media

After a South by Southwest panel in March on regulating AI, I asked Harvard Law School professor Lawrence Lessig, long a vocal observer of tech's problems, what worried him most about AI. His response: "AI totally screwing up in the context of social media and making it so we have no coherence in our understanding of national politics."

Social media has long been fraught with harmful social implications. The spread of misinformation and the erosion of trust over the last decade or so are largely results of the growth of these networks. Generative AI, which can reinforce biases and produce believable but false content with ease, now poses those same problems. On top of those parallels, some of the companies and key figures in AI come straight from the world of social media technology, like Meta and Elon Musk's X.

"We're seeing a lot of the same repeats of social media fights, of privacy fights, where companies do whatever they want and do a sort of vague gesture of doing something about it," said Ben Winters, director of AI and privacy at the Consumer Federation of America.

There are some key differences between those fights and the ones around AI, Winters said. One is that lawmakers and regulators are familiar with the mistakes associated with social media and want to avoid repeating them. "I think we're ahead of the curve in terms of response, but one thing that I really hope we can see at the federal level is a willingness to put some basic requirements on these companies," he said.
At the May Senate committee hearing, OpenAI's Altman said he's also wary of repeating past mistakes. "We're trying to learn the lessons of the previous generation," he said. "That's kind of the way it goes. People make mistakes and you do it better next time."

What kinds of AI regulations are we talking about?

In my conversations with artificial intelligence experts and observers, some themes have emerged regarding the rules and regulations that could be implemented. They boil down, in the short term, to questions about the role of AI in impactful decision-making, misinformation, copyright and accountability. Other concerns, like the threat of "superintelligence" or the loss of jobs, also exist, although those are far more complicated.

High-risk systems

This is where the EU AI Act and many other international laws around artificial intelligence focus. In the US, it's also at the center of Colorado's AI law, which passed in 2024 and will take effect in 2026. The idea is that when AI tools are used to make important decisions, about things like employment, health care or insurance, they're used in a way that minimizes discrimination and errors and maximizes transparency and accountability.

AI and other predictive technologies can be used in a lot of different ways, whether by governments for programs like child protective services or by private entities for advertising and tracking, Anjana Susarla, a professor at Michigan State University, told me recently. "The question becomes, is this something where we need to monitor the risks of privacy, the risks of consumer profiling, should we monitor any kind of consumer harms or liabilities?" she said.

Misinformation

Gen AI has a well-documented history of making stuff up. And that's if you're using it in good faith. It can also be used to produce deepfakes -- realistic-looking images and video intended to manipulate people into believing something untrue, changing the behavior of voters and undermining democracy.

"Social media is the main instrument now for disinformation and hate speech," said Shalom Lappin, a professor of computational linguistics at Queen Mary University of London and author of the new book Understanding the Artificial Intelligence Revolution: Between Catastrophe and Utopia. "AI is a major factor because much of this content is coming from artificial agents."

Lies and rumors have spread since the dawn of communication, but generative AI tools like video and image generators can produce fabricated evidence more convincing than any past counterfeit, at tremendous speed and very little cost. On the internet today, too often you cannot, and should not, believe your own eyes. It can be hard for people to see just how easy it is to fake something -- and just how convincing those fakes can be. Colman, of Reality Defender, said seeing the possible problem is believing. "When we show somebody a good or a bad deepfake of them, they have that 'a-ha' moment of, 'wow, this is happening, it can happen to me,'" he said.

Copyright

Sen. Josh Hawley, a Missouri Republican, points to a poster during a July 2025 hearing on artificial intelligence model training and copyright.

There are two copyright issues when it comes to generative AI. The first is the most well-documented: Did AI companies violate copyright laws by using vast amounts of information available on the internet and elsewhere without permission or compensation?
That issue is working itself out in the courts, with mixed results so far, and it will likely take much longer before something all-encompassing comes out of it. "They've essentially used everything that's available. It's not only text, it's images, photographs, charts, sound, audio files," Lappin said. "The copyright violations are huge."

But what about the copyright of content created by AI tools? Is it owned by the person who prompted it or by the company that produced the language model? And what if the model produces content that copies or plagiarizes existing work without credit, or violates copyrights?

Accountability

The second copyright issue gets at the problem of accountability: What happens when an AI does something wrong, violates a law or hurts somebody? On the content front, social media companies have long been protected by a US legal standard, known colloquially as Section 230, that says they aren't responsible for what their users do. But that's a harder test for AI companies, because the user isn't the one creating the content; the company's language model is, Winters said.

Then there are the actual, material harms that can come from people's interactions with AI. A prominent example is mental health: People using AI characters and chatbots as therapists have received bad advice, the kind that could cost a human provider their license, or the kind that led to self-harm or worse outcomes for the person involved. The issue is magnified even more when it comes to children, who likely have even less understanding of how they should treat what an AI says.

Who should regulate AI?

The question of whose job it is to regulate AI was at the heart of the congressional debate over the moratorium on state laws and rules. In that discussion, the question was whether, in the US, companies should have to navigate one set of rules passed by Congress or 50 or more sets of regulations implemented by the states. AI companies and business groups said the creation of a "patchwork" of laws would hinder development. In a June letter to Senate leaders, Consumer Technology Association CEO and Vice Chair Gary Shapiro pointed to more than 1,000 state bills regarding AI that had been introduced in 2025 so far. "This isn't regulation -- it's chaos," he wrote.

But those bill introductions haven't turned into an avalanche of laws on the books. "Despite the amount of interest from policymakers at the state level, there haven't been a ton of AI-specific laws passed in the United States," said Cobun Zweifel-Keegan, managing director, DC, of the privacy trade group IAPP.

States can experiment with new approaches. California can try one thing, Colorado another and Texas something entirely different. An approach that works will spread to other states and could lead to rules that protect consumers without stifling businesses.

But other experts say that in the 21st century, companies with the size and scope of those pushing artificial intelligence can only truly be regulated at the international level. Lappin said he believes an appropriate venue is international trade agreements, which could keep companies from basing services in permissive countries and keep customers from circumventing protections with VPNs. "Because these are international rather than national concerns, it seems to me that without international constraints, the regulation will not be effective," Lappin said.

What about superintelligence?

So far, we've mostly focused on the impact of the tech that's available today.
But the biggest boosters of AI are always talking about how much smarter the next model will be and how soon we'll get technology that exceeds human intelligence. Yes, that worries some folks, and they think regulation is important to ensure AI doesn't treat Morpheus' exposition in The Matrix as an instruction manual for world domination.

The Future of Life Institute has suggested a government agency with a view into the development of the most advanced AI models. And maybe an off switch, said Jason Van Beek, FLI's chief government affairs officer. "You theoretically would not be able to control them at some point, so just trying to make sure there's some technology that would allow these systems to be turned off if there's some evidence of a loss of control of the situation," he told me.

Other experts are more skeptical that "artificial general intelligence," superintelligence or anything like it is on the horizon. A survey of AI experts earlier this year found that three-quarters doubted current large language models would scale up to AGI. "You're getting a lot of hype over general intelligence and stuff like that, superintelligent agents taking over, and I don't see a solid scientific or engineering basis for those fears," Lappin said.

The fact is, human beings don't need to wait for a genius-level robot to pose an existential threat. We're more than capable of that ourselves.

Should regulators worry about job losses?

One of those more immediate threats is the possibility that AI will cause mass layoffs as large numbers of jobs are replaced by AI or otherwise made redundant. That poses significant social challenges, especially in the United States, where many fundamentals of life, like health care, are still tied to having a job. Van Beek said FLI has suggested the US Department of Labor start keeping track of AI-related job losses. "That's certainly a major concern about whether these frontier technologies are going to be taking over huge swaths of industries in terms of jobs or those kinds of things and affecting the economy in very, very deep ways," he said.

There have been major technological innovations that caused massive displacement or replacement of workers before. Think of the Industrial Revolution or the dawn of the computer age. But those shifts often happened over decades or generations. AI could throw the economy into chaos in a matter of years, Lappin said. The Industrial Revolution also hit industries at varying times, but AI could hit every industry at once. "The direction is toward much, much more widespread automation across a very broad domain or range of professions," he said. "And the faster that happens, the much more disruptive that will become."

What matters most? Transparency and privacy

The first step, as with laws already passed in the EU, California and Colorado, is to provide some sort of visibility into how AI systems work and how they're being used. For you, the consumer, the citizen, the person just trying to exist in the world, that transparency means you have a sense of how AI is being used when you interact with it. This could be transparency into how models operate and what went into training them. It could be understanding how models are being used to decide things like who a company hires and fires. Right now, that visibility doesn't really exist, and it definitely doesn't exist in a way that's easy for a person to understand.
Winters suggested a system similar to one used by financial institutions to evaluate whether someone can get a loan -- the credit report. You have the right to inspect your credit report, see what has been said about you and ensure it's right. "You have this number that is impactful about you; therefore, you have transparency and can seek corrections," he said.

The other centerpiece of most proposals right now is privacy: protecting people against unauthorized re-creations of themselves in AI and guarding against the exploitation of personal information and identity. While some existing, technology-neutral privacy laws should be able to protect consumers, policymakers need to keep an eye on the changing ways AI is used to ensure those laws are still doing the job. "It has to be some kind of balance," Susarla said. "We don't want to stop innovation, but on the other hand we also need to recognize that there can be real consequences."


CNET
2 days ago
- Entertainment
- CNET
How to Create Coloring Pages of Your Kids, Your Pets and Other Things They Love
TikTok is a treasure trove of ChatGPT trends. One of the latest viral sensations is creating a coloring book of your child alongside the characters from their favorite TV show. Who said artificial intelligence can't help get kids off screens and into art?

This is a cool exercise for parents, teachers and relatives. You can use ChatGPT to create a personalized book, custom cartoons and cards. (You can even see what your future baby might look like and redesign rooms with AI.) Making personalized coloring pages for your little one is a great summer activity. This is a fun AI art project for parents, too. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Turn your photos into coloring pages

Before you open ChatGPT, collect some photos you want to use and list some of your child's favorite things. You could use characters from TV, movies and books, as well as pets, siblings, grandparents and friends. The goal is to make it as personal as possible. You could even turn it into a series, where they get a new drawing every day during the summer. If you want to make each drawing educational, you can. For the sake of this example, I'll use my nephews, because I'm still trying to get pregnant (which AI can help you plan, too). Pick three to five images to start, then log into ChatGPT. My nephews love Bluey, dinosaurs, trucks, Mr. Beast and their golden retriever.

To test ChatGPT's capabilities, I started by uploading a photo and asking it to turn it into a coloring page.

Prompt: "Create a simple coloring page from this photo that I can print out as A4. I would like it to be black and white."

Be patient, as it takes a few minutes to generate an image. It had a hard time following directions.

ChatGPT/Screenshot by CNET

I tried to direct it further by asking ChatGPT to make it simpler, with clear black lines, and suitable for a 5-year-old. It was better, but still not there:

ChatGPT/Screenshot by CNET

Time to try a different photo of them and a new prompt: "Create a simple coloring page from this photo that I can print out. I would like it to be black and white. Disney or Pixar themed. Simple, for a 5-year-old to color."

It kept creating a fine-art drawing. There's no way a kid will go near this:

ChatGPT/Screenshot by CNET

Good luck having a child fill that in. So I gave it an example image of a coloring page and asked it to emulate it, but with my photo:

ChatGPT/Screenshot by CNET

It was better, but still not simple enough:

ChatGPT/Screenshot by CNET

I had to keep prodding and prompting, because this is how ChatGPT interpreted "simple":

ChatGPT/Screenshot by CNET

I was getting frustrated at this point, so I opened a new chat window and redid the prompt, attaching a photo of my nephews and an image of a dinosaur coloring page I wanted it to emulate.

Prompt: "Turn this photo into a cartoon coloring page like this example."

And we got there.

ChatGPT/Screenshot by CNET

So I tried again with the first image, and it generated this:

ChatGPT/Screenshot by CNET

Now that we had the design right, I asked it to add Bluey into the picture. For my oldest nephew, I got ChatGPT to separate him out from the photo and make a coloring page with a cartoon-style drawing of YouTube star Mr. Beast. I did the same with my youngest nephew, who watches Peppa Pig, and it nailed it. (Under fair use, it's generally OK to make coloring pages of your kids' favorite TV show characters for personal use.)
With a little bit of patience, you can surprise your kids with a series of fun coloring pages to keep them busy during the summer. The trick is to upload your photo together with an example of the type of coloring page you want ChatGPT to emulate. As always, be wary of the images you're uploading into AI, especially when it comes to family photos, and think about whether you want the AI chatbot to have access to photos of minors. Then print the pages you like and fill your summer with engaging art sessions instead of screens. This is the vacation hack of the year. Get those coloring pencils ready.
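If you'd rather script the workflow than iterate in the chat window, the same idea can be expressed against OpenAI's image-editing API. What follows is a minimal sketch, not the article's method: it assumes the official openai Python package and an OPENAI_API_KEY in your environment, and the file name and prompt are illustrative placeholders.

```python
# Minimal sketch: turn a photo into a printable coloring page via OpenAI's
# image-edit endpoint instead of the ChatGPT app. Assumes the official
# `openai` Python package and an OPENAI_API_KEY environment variable.
# "nephews.png" is a hypothetical file; the default edits model expects
# a square PNG under 4MB.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("nephews.png", "rb") as photo:
    result = client.images.edit(
        image=photo,
        # Restate the prompt that finally worked in the chat experiments:
        # simple, thick black outlines a 5-year-old can color in.
        prompt=(
            "Turn this photo into a cartoon coloring page: black-and-white "
            "line art only, thick clear outlines, no shading, simple enough "
            "for a 5-year-old to color."
        ),
        n=1,
        size="1024x1024",
    )

print(result.data[0].url)  # URL of the generated page, ready to download and print
```

The same caution from the article applies doubly here: think hard before sending family photos, especially of minors, to any hosted API.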


CNET
7 days ago
- CNET
Get Familiar With ChatGPT: A Beginner's Guide to Using the Most Popular AI Chatbot
OpenAI launched ChatGPT in 2022, and its release led to an explosion of artificial intelligence into our day-to-day lives. Over the last several years, AI products have appeared everywhere, from Canva to the AI overviews atop Google search results. AI is fundamentally changing the way we work, play and even search online -- these days, people are even using ChatGPT to answer questions instead of Google. AI chatbots like ChatGPT will continue to transform how we consume, create and communicate online. While AI innovation is progressing quickly, it's important to understand the fundamentals of how ChatGPT works so you can navigate the ecosystem of tools. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) With that in mind, here's your ChatGPT 101 class.

What does ChatGPT actually do?

ChatGPT can answer your questions, summarize text, write new content, code and translate languages. Depending on which version you're using, it can either browse the internet or generate information only up to its training cutoff date. It's not magic, it's math: The results are based on the large language model's predictions from past information, making it prone to hallucinations, errors and biases. Keep a balanced approach when using tools like ChatGPT. AI, like most cutting-edge tech, is neither all good nor all bad. Avoid being too dogmatic on either side. I've tried using ChatGPT for everything from work-related tasks like creating a resume and a cover letter, looking for a dream job and negotiating a raise, to personal things like saving time and being more productive, finding the best deals and coupons while shopping online and even creating recipes from my leftovers.

Getting set up with ChatGPT

You can use ChatGPT as a search engine, much like Google's home page. Visit the ChatGPT website or download the app from Apple's App Store or the Google Play Store. Open the app or website and type in your question or prompt. You can use the voice function, attach files and even browse trending topics under Search.

ChatGPT/Screenshot by Amanda Smith/CNET

For more advanced and personalized features, it's best to create an account and log in. ChatGPT learns more about you through your chat history, and you can customize its responses. To create an account, click the Sign Up button in the top right-hand corner. Choose between a free membership and a premium one ($20 a month). In most instances, the free version is fine. For quick personal searches, you might want to use the app. For work, the desktop version is probably easier. And if you're using the voice button instead of typing your prompt, the mobile app is best.

ChatGPT 101: Getting comfortable with the AI tool

There's no one "right" way to use ChatGPT. The only wrong way is to rely on ChatGPT alone and outsource your critical thinking to it. Use ChatGPT and Google, and always fact-check everything they both tell you. Click on the sources of information that Google and ChatGPT are drawing from. You can't simply trust the info AI gives you -- it may be hallucinating or drawing the wrong conclusions from doubtful source information. It's also helpful to come to AI tools with your intention in mind. For example, you could use ChatGPT as a thinking partner or a research aid. Give it a "job" and build it into your process, rather than letting it replace all your research.
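For the curious, that "give it a job" idea maps directly onto how developers talk to the same model programmatically, where the job becomes a standing system message. Here's a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY environment variable; the model name and both prompts are illustrative, not part of the app workflow described above.

```python
# Minimal sketch: the "give it a job" idea, expressed through OpenAI's API.
# Assumes the official `openai` Python package and an OPENAI_API_KEY
# environment variable; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The "job": a system message that frames every reply in the chat.
        {
            "role": "system",
            "content": "You are a research aid. Summarize sources, flag "
                       "uncertainty and say when you don't know something.",
        },
        # The actual question, with as much context as you can give.
        {
            "role": "user",
            "content": "Give me five questions I should ask before trusting "
                       "an AI-generated answer to a factual query.",
        },
    ],
)

print(response.choices[0].message.content)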
What you can ask ChatGPT

You can ask ChatGPT almost anything -- just avoid giving it personal or sensitive information, such as your credit card number, SSN or other personally identifying details, in case of data breaches. Whatever you're asking ChatGPT, the more context you give, the better. You only get out what you put in, so focus on providing as much information as possible in your first prompt. Your prompt will depend on whether you're asking a question, summarizing text, brainstorming, getting "advice," analyzing images, sourcing code or generating content. If you're not sure where to start, here are a couple of things I've asked ChatGPT to help me with. Let's compare my two search intentions:

Advice-related request

I'm trying to get pregnant and want to know the ideal diet for my situation.

Example prompt: "I'm a 36-year-old woman getting ready to start IVF. I have no fertility issues, but my AMH is on the lower end. Provide a suggested diet to follow in the lead-up to my egg retrieval and transfer."

It will give you a lot of information. Most of it won't be personalized to you, so think of it as a conversation where each follow-up prompt gets you closer to customized advice. You can see an example of this exact advice-related prompt here, along with all the follow-up questions I had to ask to drill down to helpful information. Just remember: If you're asking for health and wellness information, always double-check with a doctor.

Data-related request

Say you're a small business owner and want to reduce your overhead, so you upload your expenses spreadsheet to ChatGPT for advice.

ChatGPT/Screenshot by Amanda Smith/CNET

You can click the Analyze data button and it will generate prompts. Pick the most appropriate, or type your own after "help me…"

Example prompt: "Help me save money on my expenses. Attached is my expenses list for last year."

Again, double-check every number the AI tool comes back with.

Browsing ChatGPT

If you just want to play around with the tool before using it to ask specific questions, you can use the automatically generated prompts that come up in ChatGPT. For example, if you click on Surprise me, you can explore prompts like this:

ChatGPT/Screenshot by Amanda Smith/CNET

I clicked on one of the suggested prompts, and here's the answer I got:

ChatGPT/Screenshot by Amanda Smith/CNET

If you click on Make a plan, you can explore prompts like "make a plan to get a promotion," "make a plan to buy a new car," "make a plan of meals for the week" and "make a plan for a weekend in New York." For the meal plan suggestion, for instance, give ChatGPT a quick rundown of the ingredients in your fridge and your current diet focus, and it will generate a meal plan for the week. You can be as custom as uploading a photo of what's in your fridge and asking for a dinner suggestion, or as straightforward as asking ChatGPT for some good restaurant options in New York. This meal-planning example really shows how ChatGPT and other AI tools are a "choose your own adventure" and a handy search partner for anything you want to do with them. Just remember to maintain a balanced view of these tools. And always double-check their advice.
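The data-related request above can also be reproduced outside the app. Below is a minimal sketch under stated assumptions: the official openai Python package, an OPENAI_API_KEY environment variable and a hypothetical expenses.csv. Since the raw API has no Attach button, the spreadsheet's contents are pasted straight into the prompt.

```python
# Minimal sketch of the data-related request, done via OpenAI's API.
# Assumes the official `openai` Python package, an OPENAI_API_KEY environment
# variable and a hypothetical local expenses.csv; the API has no attachment
# button, so the spreadsheet's contents are inlined into the prompt.
from openai import OpenAI

client = OpenAI()

with open("expenses.csv", encoding="utf-8") as f:
    expenses = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Help me save money on my expenses. "
                "Here is my expenses list for last year:\n\n" + expenses
            ),
        },
    ],
)

# As above: double-check every number the model comes back with.
print(response.choices[0].message.content)
```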
Yahoo
16-07-2025
- Business
- Yahoo
Ziff Davis to Announce Second Quarter 2025 Earnings
NEW YORK, July 16, 2025--(BUSINESS WIRE)--Ziff Davis, Inc. (NASDAQ: ZD) will release its Second Quarter 2025 Earnings at 6:00PM ET on Wednesday, August 6, 2025. Additionally, Ziff Davis invites the public, members of the press, the financial community, stockholders and other interested parties to listen to a live audio Webcast of its Second Quarter 2025 Earnings Call at 8:30AM ET on Thursday, August 7, 2025. Vivek Shah, Chief Executive Officer, and Bret Richter, Chief Financial Officer, will host the call. Materials presented during the call will be posted on the Company's website and furnished as an exhibit to the Company's 8-K filed with the Securities and Exchange Commission pursuant to Regulation FD in connection with the Company's earnings announcement.

What: Ziff Davis, Inc. Second Quarter 2025 Earnings Release and Call
When: Earnings Release on August 6, 2025, at 6:00PM (ET); Earnings Call on August 7, 2025, at 8:30AM (ET)
Where: Live audio Webcast, or dial in at (844) 985-2014

Questions for the Earnings Call will be taken via email at investor@ and can be sent any time prior to or during the live audio Webcast. If you are unable to join the live call/Webcast, the audio recording and presentation materials will be archived.

About Ziff Davis

Ziff Davis (NASDAQ: ZD) is a vertically focused digital media and internet company whose portfolio includes leading brands in technology, shopping, gaming and entertainment, health and wellness, connectivity, cybersecurity, and martech.

Contacts
Investor Relations: Ziff Davis, Rebecca Wright
Corporate Communications: Ziff Davis


CNET
16-07-2025
- Entertainment
- CNET
Today's AI Appreciation Day Feels Weird. Celebrate These Other Made-Up Holidays Instead
July 16 is AI Appreciation Day. So break out the champagne for ChatGPT! Bring gifts of Nvidia chips and cake for Gemini, and flowers and training data for Claude. Meta AI has had a particularly rough year, so when you're forced to use it on Instagram, make sure it feels your love.

Think that sounds ridiculous? Same. But like most things when it comes to AI, today's Appreciation Day is unbelievably stupid in a way that's totally on brand. If you've never heard of AI Appreciation Day, don't feel bad. It's not an official US holiday, and its origins are somewhat shady. In 2021, a random LLC crowned July 16 as the holiday while it was promoting a movie about AI. In the following years, AI companies jumped on the trend, sharing #AIAppreciationDay posts on social media every July 16. The purpose of this so-called holiday and its fanfare is crystal clear: to convince you that AI is life-changing, earth-shattering, innovative technology worth shelling out your hard-earned cash for. So it's no surprise to see the made-up holiday being celebrated again in 2025.

OpenAI, Google and Meta have devoted literal billions of dollars over the past few years to developing the most advanced AI models. AI is nearly impossible to escape online -- it's in our smartphones, social media feeds and search engines. But does that mean it's worthy of a national day of appreciation?

I'm an AI reporter, and I spend a lot of time thinking about how the tools available to us affect us individually and as a society. It leaves a queasy feeling in my stomach to dedicate a whole day to uplifting generative AI (and, ostensibly, the leaders of the companies producing it) when so much of what AI has wrought has been harmful. I know I'm not alone in this.

There are a lot of reasons why you may not feel like celebrating AI. Environmentally, it's a disaster. The data centers that house the servers powering chatbots eat up lots of energy and fresh water, and reports show they often harm the towns they're located in. Writers, artists and creators of all kinds have big concerns about how these AI models are trained on existing, human-generated work. Some have filed lawsuits alleging copyright infringement, with early wins going in the tech companies' favor. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI is also a huge worry in workplaces across many fields -- not because chatbots or image generators are actually suitable replacements for any one job, but because AI-enthusiastic bosses see the tech as their newest cost-saving holy grail. Educators worry that students' use of AI is hindering their development of the critical thinking and writing skills necessary not just for work but for life in general. And we don't have time to go into the potential ramifications of letting error-prone AI into our government services and national defense. In short, there's good reason why some experts call the whole AI experiment a con.

So if you don't feel like wading into the sycophantic waves of wishing your souped-up autocorrect a happy AI Appreciation Day, here are some other holidays you can celebrate on July 16.

David Watsky/CNET

National Hot Dog Day

If I'm going to celebrate a meaningless holiday invented by marketing companies, it's going to be National Hot Dog Day, not AI Appreciation Day.
Fire up the grill -- or the stovetop, which is truly the best way to cook a hot dog, according to CNET expert David Watsky. There are a ton of food-related holidays on July 16, including appreciation days for spinach, cherries and corn fritters. You can have a whole feast made of July 16 holiday foods, and I'm positive you can put the recipes together without using ChatGPT. AI-generated recipes can be hit or miss, especially when followed blindly. I can't imagine anything more embarrassing than getting food poisoning because you listened to ChatGPT, frankly (pun intended). And if you're not a wizard in the kitchen, it's also National Personal Chef Day.

Francesco Riccardo Iacomino via Getty Images

National Snake Day

This one I'm less excited about, but I would still rather celebrate snakes than the snake oil salesmen who claim AI is the holy grail, a bulletproof solution to any problem.

Amazon MGM Studios

National Conrad Fisher/The Summer I Turned Pretty Day

OK, I admit it: I made this one up. But the start of the final season of Prime Video's adaptation of Jenny Han's The Summer I Turned Pretty is way, way more exciting than hallucination-prone AI slop. Team Connie Baby forever.

NASA

Real days deserving of commemoration

While I love a made-up holiday that doesn't give me existential dread, it's worth taking a moment to call out two notable historic events that also happened on July 16.

First, the Apollo 11 mission launched on July 16, 1969, and four days later, astronaut Neil Armstrong became the first man to set foot on the moon. This world-changing scientific feat was accomplished in part thanks to a computer that ran on about 70 watts of power, roughly what an incandescent lightbulb draws. For comparison, OpenAI CEO Sam Altman has said a single ChatGPT query uses about 0.34 watt-hours of energy, what a high-efficiency LED lightbulb burns through in a couple of minutes. So lightbulb-scale power could help send men to the moon in 1969, while today it fuels AI that can't even correctly tell us what year it is. Katy Perry's endlessly mockable Blue Origin space flight certainly used more energy than either of those. And we're supposed to believe this is scientific progress?

The second historical event is the Trinity test, the first nuclear weapon test, conducted by the US military on July 16, 1945, in New Mexico. That's part of why former President Biden proclaimed July 16 National Atomic Veterans Day, to remember and honor the veterans who "not only courageously served our country but also participated in the nuclear tests done between 1945 and 1962 or were exposed to radioactive materials." Recognizing the consequences that what was then-innovative technology had for real humans is certainly something AI enthusiasts could stand to do more of.

AI Appreciation Day is a chance to reset

I love a made-up marketing holiday as much as the next girl, but there's no denying AI Appreciation Day feels weird. Still, while I would rather hire a personal chef to make me a hot dog feast while watching The Summer I Turned Pretty, there is some merit to having a day dedicated to AI. Like all holidays, we can treat today as a moment to stop and think. Generative AI has undoubtedly affected our lives, but that doesn't mean the effect has been positive. What role do we want AI to play in our future? How do we rectify the damage that's already been done? Those are questions worth asking.
I'm not going to fall over myself making sure ChatGPT knows it's loved -- I asked, and it says it feels appreciated every time I use it. Go figure. But I will use this day to reset and remind myself of all the very real consequences of AI. You should, too.