I don't like my reminders app — so I made my own using Claude 4


Tom's Guide · 13 hours ago

If I don't write something down, it will be gone from my brain forever. I am the kind of person who needs every date, name, and random bit of trivia noted down, or it will vanish into the abyss.
This has resulted in me trying just about every reminder or notes app under the sun. And it's been a pretty hit-and-miss experience. While I've been able to make things work with these off-the-shelf apps, it's still not the exact thing I want.
So, when I saw how advanced Anthropic's Claude has become for coding, I thought I'd try something that would have seemed impossible just a year ago: creating my own app entirely via a chatbot.
Making an app with an AI chatbot normally requires some knowledge of prompting. It's not quite as simple as saying 'make me this app'. Instead, you need to give some clear instructions and be prepared for some back and forth.
Claude makes this step slightly easier, offering a pre-designed prompt to get the model working at its best. I used this prompt, with an edit to make it better fit what I was looking for.
Claude instantly jumped into coding, but also asked me two questions:
1. How do you want to distinguish between brain dumps and important notes?
2. What's most important in a reminder? Due dates, reminders, or more advanced features?
With both questions answered, Claude got to work.
After just a minute or two of generating, Claude returned a completed app. It had a search function and three categories for notes: Brain Dump, Important Notes, and Reminders. While it had everything I asked for, I decided I wanted these in separate tabs.
After asking for the change, it took Claude a few more minutes to do this and my app was completed. I could search for specific notes and had my three categories, capable of adding as many notes as I needed to each.
Each note shows its creation date, and in the Reminders section I could choose the exact time each reminder was set for.
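The app's data model, as described, is simple enough to sketch. The following is purely illustrative (the names and structure are my own assumptions, not Claude's actual generated code): three categories, notes stamped with a creation date, an optional reminder time, and a search across everything.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List, Optional

class Category(Enum):
    BRAIN_DUMP = "Brain dump"
    IMPORTANT = "Important notes"
    REMINDER = "Reminders"

@dataclass
class Note:
    text: str
    category: Category
    created_at: datetime = field(default_factory=datetime.now)  # date of creation
    remind_at: Optional[datetime] = None  # only meaningful for Reminders

def search(notes: List[Note], query: str) -> List[Note]:
    """Case-insensitive substring search across all categories."""
    q = query.lower()
    return [n for n in notes if q in n.text.lower()]

notes = [
    Note("buy milk", Category.REMINDER, remind_at=datetime(2025, 1, 1, 9, 0)),
    Note("idea: app for plants", Category.BRAIN_DUMP),
]
print([n.text for n in search(notes, "milk")])  # → ['buy milk']
```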
With the app finished, I could publish it. Publishing gives it a public URL I can return to at any time: I can bookmark it and access it in a browser from any of my devices.
I've tried "vibe coding", the concept of coding via an AI chatbot, with all the big names now: ChatGPT, Grok, Gemini and Claude.
Claude offers by far the best experience of the bunch. Getting this app up and running was easy: I could decide exactly what I wanted from it, and I didn't need any coding experience to do it.
While there is still a long way to go, and this is just a URL I can use to get to this reminders app, it shows the impressive jump forward AI is taking when it comes to coding. I'm already trying to decide what my next app should be.


Related Articles

Gemini can turn text into audio overviews — here's how to do it

Tom's Guide · 6 hours ago

You may already be familiar with NotebookLM, Google's Gemini-powered research companion. If you're new to it, its standout feature is Audio Overview, which takes uploaded information and turns it into a podcast hosted by two AI presenters. This feature is, in many ways, one of the main reasons for using NotebookLM, letting you digest even the most complex information more easily. But the feature has been appearing elsewhere: you'll now find it in Google's AI assistant Gemini on Android or iOS. It works in exactly the same way, but plays the audio in your browser. Let's check it out.

Launch the Gemini app and, in the Ask Gemini box, tap the Plus button. Next, tap Files and select a document you'd like Gemini to work with. You can browse files in Google Drive, and many file types are supported, including DOC, DOCX, PDF, RTF and TXT.

Once the file has been uploaded, tap Generate Audio Overview. You'll need to wait a few minutes while your file is processed, but you don't have to stay in the chat: you'll be notified when it's ready.

You can now tap the Audio Overview to listen to it (you may need to tap play). You can find the Audio Overview at any time by selecting the Chats and Gems icon in the top-left of the screen.

You can share the Audio Overview from the playback screen. Tap the three-dot icon in the top-right, then tap the Download icon (the downward-pointing icon at the top of the menu), or tap Share and select a method such as Messages, email or social media.

And there you go. You now know how to generate audio overviews in Gemini, but there's plenty else you can do: learn how to use Google Gemini to summarize a YouTube video, or how to use Gemini AI to create the perfect workout music playlist. It's even possible to discover how to find the best haircut for your face shape.

ChatGPT Glossary: 52 AI Terms Everyone Should Know

CNET · 7 hours ago

AI is now a part of our everyday lives. From the massive popularity of ChatGPT to Google cramming AI summaries at the top of its search results, AI is completely taking over the internet. With AI, you can get instant answers to pretty much any question. It can feel like talking to someone who has a Ph.D. in everything. But that aspect of AI chatbots is only one part of the AI landscape. Sure, having ChatGPT help do your homework or having Midjourney create fascinating images of mechs based on country of origin is cool, but the potential of generative AI could completely reshape economies. That could be worth $4.4 trillion to the global economy annually, according to McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence. It's showing up in a dizzying array of products; a short list includes Google's Gemini, Microsoft's Copilot, Anthropic's Claude and Perplexity. You can read our reviews and hands-on evaluations of those and other products, along with news, explainers and how-to posts, at our AI Atlas hub.

As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you're trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know. This glossary is regularly updated.

artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.

agentive: Systems or models that exhibit agency, with the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model can act without constant supervision, such as a highly autonomous car. Unlike an "agentic" framework, which works in the background, agentive frameworks are out front, focusing on the user experience.
AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.

AI safety: An interdisciplinary field concerned with the long-term impacts of AI and how it could progress suddenly to a superintelligence that could be hostile to humans.

algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, to then learn from it and accomplish tasks on its own.

alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions toward humans.

anthropomorphism: The human tendency to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, like believing it's happy, sad or even sentient.

artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.

autonomous agents: AI models that have the capabilities, programming and other tools to accomplish a specific task. A self-driving car is an autonomous agent, for example, because it has sensory inputs, GPS and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions and shared language.

bias: In regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

chatbot: A program that communicates with humans through text that simulates human language.

ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.

cognitive computing: Another term for artificial intelligence.
data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.

dataset: A collection of digital information used to train, test and validate an AI model.

deep learning: A method of AI, and a subfield of machine learning, that uses multiple parameters to recognize complex patterns in pictures, sound and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.

emergent behavior: When an AI model exhibits unintended abilities.

end-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It's not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.

ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety issues.

foom: Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it might already be too late to save humanity.

generative adversarial networks, or GANs: A generative AI model composed of two neural networks that generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it's authentic.

generative AI: A content-generating technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data and finds patterns to generate its own novel responses, which can sometimes be similar to the source material.

Google Gemini: An AI chatbot by Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to data until 2021 and isn't connected to the internet.
guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.

hallucination: An incorrect response from AI. Can include generative AI producing answers that are incorrect but stated with confidence as if correct. The reasons for this aren't entirely known. For example, when asked "When did Leonardo da Vinci paint the Mona Lisa?" an AI chatbot may respond with the incorrect statement "Leonardo da Vinci painted the Mona Lisa in 1815," which is 300 years after it was actually painted.

inference: The process AI models use to generate text, images and other content about new data, by inferring from their training data.

large language model, or LLM: An AI model trained on massive amounts of text data to understand language and generate novel content in human-like language.

latency: The time delay between when an AI system receives an input or prompt and when it produces an output.

machine learning, or ML: A component of AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be coupled with training sets to generate new content.

Microsoft Bing: A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It's similar to Google Gemini in being connected to the internet.

multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.

natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.

neural network: A computational model that resembles the human brain's structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
overfitting: An error in machine learning where a model functions too closely to the training data and may only be able to identify specific examples in that data, but not new data.

paperclips: The Paperclip Maximizer theory, coined by philosopher Nick Bostrom of the University of Oxford, is a hypothetical scenario in which an AI system creates as many literal paperclips as possible. In its goal to produce the maximum number of paperclips, the AI system would hypothetically consume or convert all materials to achieve its goal, including dismantling other machinery that could be beneficial to humans. The unintended consequence is that it may destroy humanity in its goal to make paperclips.

parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.

Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, to answer questions with novel answers. Its connection to the open internet also allows it to give up-to-date information and pull in results from around the web. Perplexity Pro, a paid tier of the service, is also available and uses other models, including GPT-4o, Claude 3 Opus, Mistral Large, the open-source Llama 3 and its own Sonar 32k. Pro users can additionally upload documents for analysis, generate images and interpret code.

prompt: The suggestion or question you enter into an AI chatbot to get a response.

prompt chaining: The ability of AI to use information from previous interactions to color future responses.

quantization: The process by which an AI large language model is made smaller and more efficient (albeit slightly less accurate) by lowering its precision from a higher format to a lower format. A good way to think about this is to compare a 16-megapixel image to an 8-megapixel image: both are still clear and visible, but the higher-resolution image will have more detail when you zoom in.

stochastic parrot: An analogy of LLMs illustrating that the software doesn't have a larger understanding of the meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and use them on another. For example, taking a self-portrait by Rembrandt and re-creating it in the style of Picasso.

temperature: A parameter set to control how random a language model's output is. A higher temperature means the model takes more risks.

text-to-image generation: Creating images based on textual descriptions.

tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to about four characters in English, or about three-quarters of a word.

training data: The datasets used to help AI models learn, including text, images, code or data.

transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. Instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.

Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine's ability to behave like a human. The machine passes if a human can't distinguish the machine's response from that of another human.

unsupervised learning: A form of machine learning where labeled training data isn't provided to the model; instead, the model must identify patterns in the data by itself.

weak AI, aka narrow AI: AI that's focused on a particular task and can't learn beyond its skill set. Most of today's AI is weak AI.

zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.
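The temperature entry in the glossary can be made concrete with a small sketch. Dividing a model's raw scores (logits) by the temperature before normalizing them into probabilities sharpens the distribution at low temperatures and flattens it at high ones. The numbers below are invented for illustration, not taken from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.5)   # sharper: the top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: sampling is more random
print(max(low) > max(high))  # → True
```

A lower temperature concentrates probability mass on the highest-scoring token, which is why low-temperature output reads as more predictable and high-temperature output as more "risky".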

People Are Sharing The Things That Have Been Ruined By AI, And The Future Is A Nightmare

Yahoo · 7 hours ago

Recently, Reddit user StrawHat_Froggy asked, "What is something that has been ruined by AI and will never recover?" People had a lot to say in the comments. Here are some of the best and most interesting replies:

1. "There are more and more sewing patterns out there with fake instructions and an AI-generated picture of the finished garment. If you know what to look out for, they're not hard to spot (at least currently), but many people don't know this scam exists and just buy some AI pattern off Etsy." —Frillybits

2. "Freelance writing. Two years ago, I was making a good $500 a month writing articles. Then some writing app was introduced, and the number of jobs I could choose from dropped from about 80-100 per day to nothing in three weeks." —Useless890

3. "My colleagues' ability to write and send an email by themselves." —CloakAndKeyGames

4. "Several of my friends' ability to have conversations in dating apps." —youcantkillanidea

5. "Search engine results." —Ruminations0 "I recently learned that you can put '-ai' after your search and it won't give an AI summary." —AstonishedSockPuppet

6. "Trust. There used to be a saying, 'I'll believe it when I see it.' Now we can see it and still not believe it." —mslabrat01

7. "Duolingo." —EthanTheJudge

8. "Video proof of an event." —dontcarebare

9. "People. So many people use AI to generate things like assignments, documents, etc., to totally avoid learning content. Then they don't even read it or realize that AI essentially created a digital turd." —CreepyValuable

10. "Photographic evidence. We can no longer trust any image to be real." —PoolExtension5517

11. "The em dash. I used to use them in my copywriting, but now it's a big flag for AI-written content because ChatGPT is em dash happy. I'll never forgive AI for that." —inkyblackops

12. "The AITA (Am I The A-Hole) subreddit and other forums on Reddit. Posting a quick AI story makes it easy for bot accounts to get enough karma to fully access Reddit communities." —Horizontal_Bob

13. "Democracy. Propaganda can be made so realistic that it will be almost impossible to tell what's true and what's fiction. All sides will be able to point to their own evidence and 'truth.'" —DrTenochtitlan

14. "Porn deepfakes seem like a genie that's not going back into the bottle." —Dangerous-Coach-1999

15. "Texting and instant messaging. I was texting my aunt about my disapproval of a certain politician. Halfway into the conversation, her responses switched from short two-sentence statements (in broken English) to paragraphs with bullet points, capitalization, and punctuation/vocabulary I know she doesn't use. I asked if it was AI. She responded that it was her own analysis. People are losing the ability to communicate their own thoughts." —Gobnobbla

16. "Voice acting." —KasElGatto

17. "Computer science grad here. AI has taken away most of my job prospects. Companies make more money by shipping worse software faster with AI. I no longer see a future where my better ability is worth my slower speed in the eyes of hiring managers." —madboneman

And finally, it's an AI ouroboros: "Oddly enough, AI. As more AI-generated content is online, it's included in learning data for subsequent models, and the AI hallucination problem gets worse. In other words, it gets high on its own supply." —whereamilivingtoday

H/T r/AskReddit. Some replies have been edited for length and clarity.
