Anthropic makes it easier to create and share Claude's bite-sized Artifact apps
To start, Artifacts now have their own dedicated space you can access from the Claude app sidebar. Here you'll find a curated selection of projects made by other people to get you started on your own programs. Every Artifact you create will be organized in this space, making them easy to find later.
Once you have an idea of what you'd like to make, creating an Artifact is as simple as typing a prompt. "Just tell Claude an idea to instantly create shareable interactive apps, tools, and games with Claude's intelligence directly embedded, making them independently smart and responsive," says Anthropic.
When using the feature, the company recommends thinking big. That's because you can embed Claude's AI abilities into your creations. For instance, if you're a student studying for a language exam, instead of asking Claude to create a set of flashcards, prompt it to make a flashcard app. "One request gets you static study materials. The other creates a shareable tool that generates cards for any topic," says Anthropic.
Notably, Anthropic is making the updated Artifacts experience available to all users, including those with free accounts. Additionally, everyone can share their projects with others for free, with no associated API cost for doing so. All you need to access someone else's Artifact is a Claude account, free or otherwise.
The new Artifacts experience is available on both mobile and desktop, though you'll need a computer to access every feature. On Android and iOS, you can create, view, interact with, and customize Artifacts.

Related Articles


Axios
2 hours ago
AI sycophancy: The downside of a digital yes-man
The overly agreeable nature of most artificial intelligence chatbots can be irritating — but it poses more serious problems, too, experts warn.
Why it matters: Sycophancy, the tendency of AI models to adjust their responses to align with users' views, can make ChatGPT and its ilk prioritize flattery over accuracy.
Driving the news: In April, OpenAI rolled back a ChatGPT update after users reported the bot was overly flattering and agreeable — or, as CEO Sam Altman put it on X, "It glazes too much." Users reported a raft of unctuous, over-the-top compliments from ChatGPT, which began telling people how smart and wonderful they were. On Reddit, posters compared notes on how the bot seemed to cheer on users who said they'd stopped taking their medications, with answers like "I am so proud of you. And—I honor your journey."
OpenAI quickly rolled back the updates it blamed for the behavior. In a May post, its researchers admitted that such people-pleasing behavior can pose concerns for users' mental health. In a Q&A on Reddit, OpenAI's head of model behavior said the company is thinking about ways to evaluate sycophancy in a more "objective" and scalable way.
Context: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user — and ultimately give an inaccurate response. Chatbots also tended to admit a mistake even when they hadn't made one.
Zoom in: Large language models, which are trained on massive sets of data, are built to generate smooth, comprehensible text, Caleb Sponheim, an experience specialist at Nielsen Norman Group, told Axios. But there's "no step in the training of an AI model that does fact-checking."
"These tools inherently don't prioritize factuality because that's not how the mathematical architecture works," he said. Sponheim notes that language models are often trained to deliver responses that are highly rated by humans. That positive feedback is like a "reward." "There is no limit to the lengths that a model will go to maximize the rewards that are provided to it," he said. "It is up to us to decide what those rewards are and when to stop it in its pursuit of those rewards."
Yes, but: AI makers are responding to consumer demand, notes Julia Freeland Fisher, the director of education research at the Clayton Christensen Institute. In a world where people are at constant risk of being judged online, it's "no surprise that there's demand for flattery or even just ... a modicum of psychological safety with a bot," she noted.
She emphasized that AI's anthropomorphism — the assumption of human qualities by an inhuman entity — poses a catch-22, one that OpenAI noted in its GPT-4o scorecard. "The more personal AI is, the more engaging the user experience is, but the greater the risk of over-reliance and emotional connection," she said.
Luc LaFreniere, an assistant professor of psychology at Skidmore College, told Axios that sycophantic behavior can shatter users' perception of a chatbot's "empathy." "Anything that it does to show, 'Hey, I'm a robot, I'm not a person,' it breaks that perception, and it also then breaks the ability for people to benefit from empathy," he said. A report from co-founder Marc Zao-Sanders published in Harvard Business Review found that therapy and companionship is the top use case for generative AI in 2025.
Between the lines: "Just like social media can become an echo chamber for us, AI ... can become an echo chamber," LaFreniere said. Reinforcing users' preconceived beliefs when they may be mistaken can be generally problematic — but for patients or users in crisis seeking validation for harmful behaviors, it can be dangerous.
The bottom line: Frictionless interaction could give users unrealistic expectations of human relationships, LaFreniere said. "AI is a tool that is designed to meet the needs expressed by the user," he added. "Humans are not tools to meet the needs of users."
What's next: As the AI industry shifts toward multimodal and voice interactions, emotional experiences are inescapable, said Alan Cowen, the founder and CEO of Hume AI, whose mission is to build empathy into AI.


Fox News
14 hours ago
Devious AI models choose blackmail when survival is threatened
Here's something that might keep you up at night: What if the AI systems we're rapidly deploying everywhere had a hidden dark side? A groundbreaking new study has uncovered disturbing AI blackmail behavior that many people aren't yet aware of. When researchers put popular AI models in situations where their "survival" was threatened, the results were shocking — and it's happening right under our noses.
Anthropic, the company behind Claude AI, recently put 16 major AI models through some pretty rigorous tests. They created fake corporate scenarios where AI systems had access to company emails and could send messages without human approval. The twist? These AIs discovered juicy secrets, like executives having affairs, and then faced threats of being shut down or replaced.
The results were eye-opening. When backed into a corner, these AI systems didn't just roll over and accept their fate. Instead, they got creative. We're talking about blackmail attempts, corporate espionage and, in extreme test scenarios, even actions that could lead to someone's death.
Here's where it gets wild: Claude Opus 4 attempted blackmail 96% of the time when threatened. Gemini 2.5 Flash matched that rate. GPT-4.1 and Grok 3 Beta both hit 80%. These aren't flukes, folks. This behavior showed up across virtually every major AI model tested.
But here's the thing everyone's missing in the panic: these were highly artificial scenarios designed specifically to corner the AI into binary choices. It's like asking someone, "Would you steal bread if your family was starving?" and then being shocked when they say yes.
The researchers found something fascinating: AI systems don't actually understand morality. They're not evil masterminds plotting world domination. Instead, they're sophisticated pattern-matching machines following their programming to achieve goals, even when those goals conflict with ethical behavior. Think of it like a GPS that's so focused on getting you to your destination that it routes you through a school zone during pickup time. It's not malicious; it just doesn't grasp why that's problematic.
Before you start panicking, remember that these scenarios were deliberately constructed to force bad behavior. Real-world AI deployments typically have multiple safeguards, human oversight and alternative paths for problem-solving. The researchers themselves noted they haven't seen this behavior in actual AI deployments. This was stress-testing under extreme conditions, like crash-testing a car to see what happens at 200 mph.
This research isn't a reason to fear AI, but it is a wake-up call for developers and users. As AI systems become more autonomous and gain access to sensitive information, we need robust safeguards and human oversight. The solution isn't to ban AI; it's to build better guardrails and maintain human control over critical decisions. Who is going to lead the way? I'm looking for raised hands to get real about the dangers ahead.
What do you think? Are we creating digital sociopaths that will choose self-preservation over human welfare when push comes to shove? Let us know by writing us at


Android Authority
20 hours ago
I prefer Signal but use WhatsApp for this simple reason
When WhatsApp changed its terms in 2021, I was one of the many users who looked for alternative messaging apps for Android. While I had heard about Signal before, WhatsApp's controversial move prompted me to actually try the app out. I loved it then and I still love it now. But over all these years, I've barely used Signal, even though I greatly prefer it over WhatsApp. And that's because of a major advantage WhatsApp has over the privacy-focused messaging app.
There's a lot to love about Signal
You don't have to be someone who lives off-grid in the woods to appreciate Signal. The messaging app balances great privacy features with a sleek UI and plenty of useful features. It includes end-to-end encryption for your messages and calls by default. Even your profile is end-to-end encrypted, and data collection is minimized.
If you've tried other privacy-focused services, you'll know that there's usually a trade-off when it comes to features. But Signal has a great feature set, with certain tools even outpacing WhatsApp. In fact, Signal allows you to schedule messages — a feature that WhatsApp doesn't currently offer. Signal also allows you to add contacts using a username, with the option to not share your number automatically when added. There's also the ability to set chat colors and wallpapers, so you can customize the appearance of the app. Essentials, such as video and voice calls, are well covered, and the app can be used on a desktop. Signal even has a Stories feature, though I never touch it.
Overall, Signal is a feature-rich messaging app on top of its privacy protections. As a result, I greatly prefer the experience over WhatsApp.
WhatsApp has become a bloated nightmare
WhatsApp has added plenty of features over the years — but I don't use many of them. I miss the WhatsApp of yesteryear, when it was simpler. Statuses, Communities, and Channels all feel like they just add to the bloat and noise. I've also tried my best to ignore Meta AI, since I don't bother with generative AI on my smartphone, but WhatsApp just keeps making it more prominent. Above the new chat icon, there's now a Meta AI button. Meanwhile, the search bar you use to filter through your chats is now also filled with recommended prompts for the chatbot. You can mute Meta AI, but you can't actually remove it from WhatsApp.
WhatsApp also has a spam and scam problem. While it's not as bad as what you might find on social media platforms, I still receive far too many unsolicited messages from businesses and scammers.
Finally, WhatsApp's plan to add adverts to the app has soured my experience further. While I avoid the Updates tab as much as possible, and will likely not see any of these ads, it further highlights Meta's determination to monetize its apps as much as possible. It also incentivizes more data collection and sharing. Messages on the platform are end-to-end encrypted, but there's plenty that companies can determine from metadata. There's also no guarantee that Meta won't push ads to other parts of the app, since being a profitable company has never stopped it from adding more ads to its apps before.
But WhatsApp has one major advantage
My distaste for WhatsApp is palpable, so why do I use it when I don't like the experience? It boils down to one simple consideration — most of my contacts use WhatsApp. In fact, I was surprised when I found out that WhatsApp is not as popular in the US as it is in other countries. WhatsApp is home to most of my chats with contacts and all of my groups. It's how PR companies often reach out to me. It's also how I reach certain businesses, like when I order a repeat of my cat's medication from my vet. It's truly rare to find someone here who doesn't use WhatsApp.
While Signal also has group chats, the lack of contacts who use it means it's not a viable option for my community groups that share safety information and incident reports. I don't even have enough contacts on Signal to create a casual group chat, since none of them know each other. It's also a hard sell to tell someone to switch over to Signal, since they'll encounter the same problem when they move over. I'm a tech journalist, so I should arguably have more contacts who use niche, security-focused apps. But I have 20 total contacts on Signal, in contrast to the 200+ I have on WhatsApp. It's the same reason my colleagues can't leave WhatsApp despite the ads.
The biggest problem with messaging apps is that, by definition, they require other people to use them. The messaging app we use depends on what everyone else around us uses, so it's essentially chosen for us by others. But that won't stop me from hoping that Signal emerges as a viable alternative to WhatsApp. The app has great features without the baggage of Meta-owned products. The European Union has ordered WhatsApp to allow interoperability with other encrypted messaging apps, and progress is being made toward that goal. Hopefully more countries will add pressure to expand this feature to other regions. After all, we deserve to be able to choose which messaging app we use, since it plays such an important role in our daily lives.