Canva integration in Claude AI enables text prompt-based design creations

Anthropic's Claude AI is getting a new ability that lets users create, edit, and manage Canva designs simply by describing what they want in natural language. This integration brings Canva's design tools directly into Claude's chat interface, eliminating the need to switch between apps.
Alongside Canva, Claude is also gaining new connectors for popular services like Notion and Stripe, as well as desktop apps such as Figma, Socket, and Prisma. These connectors allow Claude to interact with both web-based and local tools, enabling it to perform tasks, fetch data, and respond with greater contextual awareness.
Canva integration within Claude AI
Users with paid subscriptions to both Claude and Canva can now ask Claude to create presentations, resize images, or fill out templates just by using text prompts. Claude can also search for keywords within Canva Docs, Presentations, and brand templates, and can even summarise design content.

Related Articles

Perplexity macOS app now supports Anthropic's MCP for system tasks: What it does and how you can use it

Mint • 14 minutes ago

Perplexity now works more like a personal helper than a chatbot. The company has added support for Anthropic's Model Context Protocol (MCP), a framework that lets AI tools connect with system-level apps and services on your device. This change allows Perplexity to do more than respond to questions: it can check your Apple Calendar, add reminders, create notes, or even look up files from your Google Drive. Instead of jumping between tabs or digging through menus, you can now ask Perplexity to handle small but important tasks, and it will respond inside the chat window, just like a human assistant would.

Before diving further, let's look at what MCP actually is. MCP, short for Model Context Protocol, is a system designed by Anthropic to give AI assistants a way to work with apps securely and in context. That means the assistant does not just respond to your questions anymore; it can now interact with your actual apps, with permission. Think of it this way: instead of explaining what is on your calendar, you can let the assistant check it directly. Rather than typing a to-do list into your notes, you can just ask it to add the tasks for you. MCP allows this by creating safe connections between the AI and your apps. It helps turn conversations into real actions while keeping you in control and protecting your privacy.

Let's break this down. Suppose you ask Perplexity to do something that involves an app, like creating a calendar event or retrieving a file. It checks to see if it has access to that app through MCP. If it does, the action is completed inside the chat. If it does not, it will ask you to approve the connection. For example, if you say, 'Add a note saying check client email,' the assistant will prompt you to enable Notes access. Once allowed, it handles the task and confirms within the chat thread. It gives you updates right away and walks you through each step. There is no guessing, no complicated setup, and no silent background activity.
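The permission-gated flow described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names here (`Assistant`, `request_approval`, and so on) are assumptions for the sketch, not Perplexity's or Anthropic's actual API.

```python
# Minimal sketch of an MCP-style permission-gated action flow.
# All names here are hypothetical, illustrating the behaviour
# described in the article, not a real API.

class Assistant:
    def __init__(self):
        self.connectors = {}  # app name -> approved connection

    def request_approval(self, app: str) -> bool:
        # In the real app this is an interactive prompt shown to the
        # user; here we simply auto-approve for the sketch.
        print(f"Assistant asks: allow access to {app}?")
        return True

    def run(self, app: str, action: str) -> str:
        # 1. Check whether a connection to the app already exists.
        if app not in self.connectors:
            # 2. If not, ask the user to approve it first.
            if not self.request_approval(app):
                return f"Access to {app} denied"
            self.connectors[app] = True
        # 3. With access granted, perform the action and confirm in-chat.
        return f"Done: {action!r} in {app}"

assistant = Assistant()
print(assistant.run("Notes", "Add a note saying check client email"))
```

The key point the sketch captures is that nothing runs without an approved connection: the approval check always happens before the action, and the confirmation comes back in the same conversation.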
Perplexity's integration with MCP brings several new tools to the table. They're small changes, but they genuinely help with things you'd normally do manually.

Connects with system apps: Perplexity can now easily access Apple Notes, Reminders, Calendar, and more. It can add reminders, send emails, or summarise a message in simple words.

Works with online storage: If you use Google Drive, the assistant can search, preview, or pull up documents as needed, saving time and reducing clicks.

Understands context: Because of MCP, Perplexity understands what you are asking in relation to your tools. If you say, 'Remind me to email the report,' it can place that task in your Reminders app without needing a full command.

No actions without approval: The app will always ask for your permission before accessing any system or cloud service. Nothing is connected unless you say so.

Getting started with MCP on the Perplexity Mac app is very simple, but there are a few key steps to follow. Here's how you can set it up:

1) Install the Perplexity/XPC helper app. This is needed to run the MCP server.
2) Open Perplexity settings. Go to your account, click on Connectors, then select Add Connector.
3) In the Simple tab, choose MCP Connector and give it any server name.
4) Copy the command from the MCP server README and paste it in the command box.
5) Install any needed tools. Follow the README to install requirements. Perplexity may also help with this.
6) Click Save. Make sure the MCP server shows as Running in the connector list.
7) Go back to the homepage and turn on MCP under Sources.

Instead of just giving answers, Perplexity can now actually take action on your behalf. It interacts with your apps directly, which means fewer clicks, faster responses, and less switching between tools. It is not meant to replace everything just yet, but it shows that assistants like Perplexity are starting to move beyond the browser and into your real workflow.
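As an illustration of the "copy the command from the MCP server README" step, the command is typically a one-liner that launches an MCP server. The example below uses the open-source MCP filesystem reference server; the exact package and path depend on whichever server's README you follow, so treat it as a plausible sample rather than the required command:

```shell
# Hypothetical example of a command pasted into the connector's command box.
# Launches an MCP filesystem server with access limited to ~/Documents.
npx -y @modelcontextprotocol/server-filesystem ~/Documents
```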
For many users, this change could lead to a simpler way of handling daily digital tasks.

Claude AI to get new weekly usage limits as Anthropic cracks down on 24x7 use, account sharing

India Today • 3 hours ago

Anthropic has announced new weekly usage restrictions for Claude AI, aimed at preventing people from running its coding tool non-stop or sharing accounts with others. The fresh limits will be introduced from August 28 and will apply to users across all paid plans, including the $20 Pro tier and the higher-priced $100 and $200 Max plans.

In an email to users and a post on social media, Anthropic said some people were keeping Claude Code running continuously in the background or violating its rules by reselling access or sharing login details. The company now plans to introduce two new weekly caps: one on total usage, and another specifically for the Claude Opus 4 model, its most advanced.

The company clarified that the current usage limits (which refresh every five hours) will remain unchanged. However, users on the Max plans will now have an option to buy extra access once they hit their weekly cap, using standard API pricing.

These changes come at a time when Claude Code, the AI coding assistant, has been seeing increased demand among developers. But this growing popularity has also brought some challenges. According to Anthropic's system status page, the tool has faced several outages in the past month, possibly due to some users running it around the clock. "Claude Code has experienced unprecedented demand since launch," Anthropic spokesperson Amie Rotherham told TechCrunch in an email. She also said that "most users won't notice a difference," and that the new limits are expected to affect fewer than 5 per cent of users.

As per the updated plans, subscribers of the Pro tier can expect 40 to 80 hours of Claude Code using Sonnet 4 each week. Those on the $100 Max plan will get 140 to 280 hours of Sonnet 4 and 15 to 35 hours of Opus 4. Users on the $200 Max plan will be allowed 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4 in a week.

Anthropic didn't explain how exactly usage is being tracked, whether by number of tokens used, compute time, or hours spent.
While the company earlier claimed the $200 plan offers 20 times more access than the Pro tier, the latest numbers suggest the increase may be closer to six times in actual usage.

The move follows a trend among AI tool providers, who are reworking their pricing and usage models to prevent abuse. In June, Anysphere, the team behind Cursor, made similar changes for its Pro plan, which led to confusion and criticism after users were unexpectedly charged more. Around the same time, another company, Replit, also adjusted its pricing structure.
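The roughly six-times figure follows directly from the Sonnet 4 weekly hour ranges reported above, as a quick calculation shows:

```python
# Sanity check of the access multiple between the Pro and $200 Max plans,
# using the Sonnet 4 weekly hour ranges reported in the article.
pro = (40, 80)          # Pro tier: 40-80 hours of Sonnet 4 per week
max_200 = (240, 480)    # $200 Max plan: 240-480 hours of Sonnet 4 per week

low_ratio = max_200[0] / pro[0]    # 240 / 40 = 6.0
high_ratio = max_200[1] / pro[1]   # 480 / 80 = 6.0
print(low_ratio, high_ratio)       # prints: 6.0 6.0, well below the advertised 20x
```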

Humans are just slightly better than a coin toss at spotting AI pics

Indian Express • 4 hours ago

As AI-generated images continue to improve every year, one of the key questions is how human beings can distinguish between real and generated images. And while many of us may think that it's fairly easy to spot images generated by AI tools like ChatGPT, Gemini and Claude, researchers think otherwise. According to researchers from the Microsoft AI for Good Lab, the chances of being able to identify AI-generated images are 'just slightly better than flipping a coin.'

Researchers say they collected data from the online game 'Real or Not Quiz', where participants were asked to distinguish AI-generated images from real ones. The study, which involved the analysis of approximately 287,000 images by over 12,500 people from around the world, found that participants had an overall success rate of just 62 per cent, only slightly better than a coin flip. Researchers say they used some of the best AI image generators available to create the quiz, and that the game was not designed to compare the photorealism of images generated by these models.

As it turns out, people who played this online quiz were fairly accurate at differentiating between real and AI-generated human portraits, but struggled when it came to natural and urban landscapes. Humans had a success rate of around 65 per cent when it came to identifying people, but could only identify nature photos 59 per cent of the time. Researchers noted that people mostly had trouble with 'images without obvious artifacts or stylistic cues', but accuracy on human portraits was much higher because of our brain's ability to recognise faces. These findings are in line with a recent study by the University of Surrey, which discussed how our brains are 'drawn to spot faces everywhere.'
The study also found that AI detection tools are somewhat more reliable than humans at identifying AI-generated images, though they too were prone to mistakes. The team behind the study emphasised the need for transparency tools like watermarks and robust AI detection to prevent the spread of misinformation, and said they were working on a new AI image detection tool, which they claim has a success rate of over 95 per cent on both real and generated images.
