Get an all-in-one AI tool for life for just £23

Yahoo · a day ago

The following content is brought to you by Mashable partners. If you buy a product featured here, we may earn an affiliate commission or other compensation.
TL;DR: Put all your AI tools like ChatGPT, Gemini Pro, and Leonardo.AI in one place with a lifetime subscription to 1minAI, an all-in-one AI app, on sale for just £22.24 (reg. £173.67).
Credit: 1minAI
1minAI: Lifetime Subscription
£22.27 at The Mashable Shop (reg. £173.70) · Save £151.43
Get Deal
The free version of some AI models like ChatGPT can get the job done, but if you want the good stuff, you should consider opting for a paid subscription.
1minAI consolidates many of the most popular AI models — like ChatGPT, Gemini, and Meta AI — into one platform where you can use them all seamlessly, whether you need to generate text, images, or even full videos. A lifetime subscription to 1minAI is on sale now for £22.24.
1minAI gives you one platform where you can access AI models like ChatGPT, Gemini, Meta AI, Mistral AI, and more. You don't just get the baseline version, either — 1minAI users can chat with GPT-4, GPT-4 Turbo, Gemini 1.5 Pro, and Llama 2 or Llama 3.
Like a ton of other AI platforms, 1minAI has a limit to how much you can generate every month. Unlike other platforms, the limit is incredibly high. Here's what your lifetime subscription gets you every month.
First, you get a baseline of 1,000,000 credits per month. Then, every day you log in, you get an additional 15,000 free credits. These will roll over if you don't use them all, too.
In practice, those huge numbers mean you could generate up to 362,500 words, research nearly 2,000 SEO keywords, generate 386 images, upscale 241 images, remove 24 backgrounds, convert text to speech for over 120,000 characters, and generate up to 12 videos.
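As a rough sketch of the credit arithmetic above — the per-word cost is inferred from the article's own numbers (1,450,000 credits against the 362,500-word ceiling), not official 1minAI pricing, and the function name is illustrative:

```python
# Rough monthly-credit estimate from the deal copy: 1,000,000 base credits
# per month, plus 15,000 bonus credits for each day you log in.
def monthly_credits(login_days: int = 30) -> int:
    base = 1_000_000
    daily_bonus = 15_000
    return base + daily_bonus * login_days

total = monthly_credits(30)
print(total)  # 1450000 credits if you log in every day

# Dividing by the quoted 362,500-word ceiling implies roughly 4 credits
# per generated word (an inference from the article, not official pricing).
print(total / 362_500)  # 4.0
```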
Put all your go-to AI tools in one place. Get a lifetime subscription to 1minAI for £22.24.
StackSocial prices subject to change.


Related Articles

WWDC 2025 is make or break for Apple Intelligence — here's why

Tom's Guide · 16 minutes ago

WWDC 2025 is going to be a big deal for Apple users. Not only are we expecting to see a big redesign for iOS 19 (or iOS 26), but it also marks one year since Apple went all in on AI and announced Apple Intelligence.

Of course, Apple Intelligence hasn't really been the resounding success that Apple probably hoped for. It's not been a disaster, but WWDC 2024 turned out to be the one thing Apple typically tries to avoid — overpromising and underdelivering. Nearly a year later, many of the promised Siri features are still missing in action. Considering Apple was already late to the party with AI, and the troubles it's had since, the pressure is on at WWDC 2025. It's make or break, and if Apple doesn't ease the biggest concerns about Apple Intelligence, it risks the feature ending up like Siri did 10 years ago.

The biggest issue with Apple Intelligence is that Apple realized AI was going to be a big deal much later than everyone else. Apple wasn't ignoring AI, but in the years before ChatGPT exploded in popularity, the company wasn't that interested in investing large amounts of money in AI development — especially with no clear end goal. According to a report from Bloomberg, it wasn't until after ChatGPT arrived that Apple's software chief Craig Federighi used generative AI for himself and realized how useful a tool it could be. But by that point Apple was seriously far behind its rivals, and wouldn't be able to catch up easily.

This is apparently where the main problems with Siri come in, since Apple attempted to catch up by tacking the new LLM-powered Siri onto the older voice assistant. This hasn't worked out, not only because of the delays but also because it apparently caused a slew of problems that have been described as "whack-a-mole." All of that inevitably made the controversial rollout of Apple Intelligence even more problematic — not because the features that were released were bad, though things like news summaries proved too problematic to keep around.
Apple Intelligence itself didn't land until iOS 18.1 arrived in late October, a month after iOS 18 and the iPhone 16 were released. iOS 18.2 was where the real improvements came into play, and that didn't arrive until late December. iOS 18.3 and 18.4 landed throughout the first few months of 2025, but by that point the number of useful new features had dropped dramatically.

The problem wasn't so much the state of Apple Intelligence as how Apple handled it. Simply put, it looked like Apple didn't want to be seen lagging behind its rivals, then overestimated what it could accomplish.

WWDC is where Apple tells us what's going on with all its software, and it would be a mistake not to give Apple Intelligence the attention it needs. This is the first anniversary of its reveal, and despite all the problems, Apple can't afford to be seen ignoring it. I'm not saying that WWDC needs to be an all-Apple Intelligence show. Google I/O did that, and it was far too much AI for any normal person to handle. But that doesn't mean Apple can brush AI to the wayside and treat it like Siri was treated for so many years. If that happens, Apple might as well be throwing in the towel on the AI race.

We all know the company is behind the likes of Google and OpenAI, but that doesn't mean its AI ambitions are dead. There's plenty of time to improve, and potentially catch up. In a best-case scenario, Apple would admit that it dropped the ball with Apple Intelligence and pledge to do better going forward. I don't see that happening; Apple is not known for willingly admitting its mistakes. But I also don't see Apple spending a great deal of time on AI either — not just because it has a bunch of major design revamps to get through in a keynote that can only be so long, but also because Apple surely doesn't want to risk making the same mistakes as last year.
No doubt we'll be hearing a lot of impressive stats about Apple Intelligence and its adoption, and maybe some reveals of smaller features that may be on the way. And that should be enough. AI isn't the focus of this year's releases, based on what we've heard, and it shouldn't dominate the show. But it does still need attention and improvements so it can continue to grow.

Apple has already made plenty of mistakes with AI, from jumping on the bandwagon late to botching the launch of features once they were ready. So it's imperative that the company get itself into gear and come up with an adequate strategy for future updates and AI features. WWDC is going to be the starting point for all of that, and the attention Apple Intelligence gets at the show is going to lay the groundwork for the next few years of Apple AI rollouts. And while we can't expect Apple to roll out another wave of announcements like the ones we saw last year, it needs to avoid ignoring the topic completely. Otherwise, if AI is just going to get tossed to the side because of some early hurdles, then Apple probably shouldn't have bothered investing in it in the first place.

ChatGPT is making us weird

Yahoo · an hour ago

Artificial intelligence chatbots like OpenAI's ChatGPT models are changing the way we think. Beyond our workflows and creative processes, the bots are disrupting our social cues and intimate lives. Business Insider spoke to a range of professionals to ask: Is ChatGPT making us weird?

The other day, my family group chat lit up when I posed a question about whether it's important to say "please" and "thank you" to ChatGPT when asking it to conduct a niche search or plan out an itinerary. My mother, ever a stickler for manners, said she makes a conscious choice to behave this way, a choice she said she makes to "keep myself human." Another loved one later admitted she's been leaning on the chatbot for guidance as she navigates a tricky moment in her marriage. And I couldn't resist the temptation to ask ChatGPT to evaluate how attractive I am after The Washington Post reported that people were asking it for beauty advice. (It said I have "strong, expressive features," then told me to stand up straighter and smile more.)

But I know it's not just my immediate circle: ChatGPT is making everyone behave a little strangely. As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren't only mimicking human interaction but quietly altering the expectations and norms that govern it. Business Insider spoke with four professionals who interact with chatbots like OpenAI's GPT models in radically different ways — a sociologist, a psychologist, a digital etiquette coach, and a sex therapist — to explore how the rise of AI is changing how we see each other and how we view ourselves, as well as how it's disrupting our manners and intimate lives.
The conversations centered on ChatGPT, since OpenAI's chatbot is quickly becoming the AI world's equivalent of what Google is to search engines, but the professionals said similar conclusions could be drawn for Meta AI, Microsoft Copilot, Anthropic's Claude, or any other large language model on the market today.

Digital etiquette consultant and author Elaine Swann said that society has needed to adapt to new social cues as each wave of technology has changed our lives. While we've largely collectively agreed that it's all right to use shorthand in personal email correspondence and rude to take a cellphone call on speakerphone in public, we're still establishing a social code for how to interact with AI bots and agents.

Kelsey Vlamis, a senior reporter at Business Insider, said she's started seeing a chatbot-related change in her personal life. While on vacation in Italy, she said her husband found himself impatient with their tour guide, consciously having to keep himself from interrupting with questions "since that's how he talks to ChatGPT when he is trying to learn something." Of course, he had to hold himself back, Vlamis added, "since that is not, in fact, how we talk to human beings."

Since AI has gained momentum, social media is full of posts asking whether it's appropriate for a spouse to use ChatGPT to write a love note to their partner, or for a worker to rely on an AI agent to fill out a job application on their behalf. The jury's still out on situations like these. "AI is certainly smarter now, which is great for us, but at the same time, we have to be very careful that it doesn't substitute basically our judgment or empathy," Swann said. "We have to be careful with it, not just utilizing it as our sole source of information, but also making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."
Maintaining our baseline levels of respect — not just for each other, but the world around us — is also key, Swann said. After OpenAI CEO Sam Altman posted on X in late April that it costs "tens of millions of dollars" for the company to process niceties like "please" and "thank you" directed toward ChatGPT, she stressed that it's up to the company to make processing those statements more cost-effective, not up to users to stop being polite. "This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us." Altman, for his part, said the massive amount of funds used on polite requests toward ChatGPT is money "well spent."

Laura Nelson, an associate professor of sociology at the University of British Columbia, said that because the world's most popular chatbots are created by American companies, written by US-based programmers, and trained primarily on content written in the English language, they have deeply entrenched biases that are often seen in Western cultures. "It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.

So if you ask ChatGPT to draw you a picture of a plate of breakfast, it'll conjure typical North American foods: bacon, eggs, sausage, and toast. It describes a bottle of wine as a "classic and thoughtful gift," though in many cultures, alcohol is rarely consumed, and a bottle would make a tone-deaf present. While those examples are relatively harmless, the bots also exacerbate more insidious and potentially damaging biases. A 2021 study published in Psychology & Marketing found that people prefer AI to be anthropomorphized as female in their devices, like it is in most pop culture representations, because it makes the technology seem more human.
However, the study found that preference may be inadvertently entrenching the objectification of women. There have also been numerous reports that lonely, mostly male, users may verbally abuse or degrade their AI companions. Business Insider previously reported that artificial intelligence is also rife with discriminatory bias due to the data it's trained on, and ChatGPT in particular showed racial bias when screening résumés for jobs, over-selecting Asian women candidates and under-selecting Black men.

While these biases may not immediately change our behavior, they can impact our thinking and the ways we operate as a society, Nelson said. And if ChatGPT or other AI applications are implemented into our decision-making, whether in our personal lives, in the workplace, or at the legal level, they'll have wide-reaching effects we haven't even considered yet. "There's just no question that AI is going to reflect our biases — our collective biases — back to us," Nelson said. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term. It's a tricky thing to get a handle on."

Concrete data about the societal shift caused by AI is hard to come by, but the companies behind the tech know something is happening. Many of them have dedicated teams to figure out what effect their technology has on users, but their publicly available findings aren't peer-reviewed like a typical scientific study would be. OpenAI announced that a recent update to the GPT-4o model had a hiccup: it was "noticeably more sycophantic" than prior models, the company said in a press release. While it passed OpenAI's self-described "vibe check" and safety testing, the company rolled it back after realizing its programming to please the user could fuel anger, urge impulsive actions, or reinforce negative emotions "in ways that were not intended."
The company's announcement highlighted that OpenAI is keenly aware that the various AI applications gaining momentum online — from digital romantic partners to study buddies to gift-suggesting elves — have also started to have creeping effects on human emotions and behavior. When reached for comment, a spokesperson for OpenAI directed Business Insider to the company's recent statements on sycophancy in GPT-4o and an early study of emotional well-being.

OpenAI's research, conducted with users over the age of 18, found that emotional engagement with the chatbot is rare. However, heavy users were more likely to report an emotional connection to the bot, and those who had personal conversations with ChatGPT were more likely to report feelings of loneliness. An Anthropic spokesperson said the company has a dedicated research team, Societal Impacts, which is analyzing Claude usage, how AI is being used across jobs, and what values AI models have. Representatives for Meta and Microsoft did not respond to requests for comment.

Nick Jacobson, an associate professor of psychiatry at Dartmouth's Center for Technology and Behavioral Health, conducted the first trial study delivering psychotherapy to clinical populations using generative AI. His research found that a carefully programmed chatbot can be a helpful therapeutic tool for people suffering from depression, anxiety, and eating disorders. Engagement among patients in the study rivaled that of in-person therapy, patients saw a significant reduction in the severity of their symptoms, and, when measured using the same test as human providers, they reported bonding with their therapeutic chatbot with a similar intensity as with a human therapist. "Folks were really developing this strong, working bond with their bot," Jacobson said, a factor which is key to a productive therapeutic relationship.
However, most bots aren't programmed with the care and precision that Jacobson's was, so those emotional bonds could be formed with an AI that doesn't have the skills to handle its users' emotional needs in a productive way. "Nearly every foundational model will act in ways that are profoundly unsafe to mental health, in various ways, shapes, and forms, at rates that are totally unacceptable," Jacobson said. "But there are so many people that are using them for things like therapy and just plain companionship that it's becoming a real problem — I think folks should handle this with greater care than I think they are."

Emma J. Smith, a relationship and sex therapist, said she believes in-person therapy comes with unique benefits that can't be replicated by AI, but she sometimes recommends chatbots for anxious clients to practice social interactions in a low-stakes environment, "so if it goes badly, or you get stuck, there's no pressure." "But some of the drawbacks are, like anything really, if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world," Smith said. "Video games are probably fine for a lot of people, and then there are some people that it takes over, and then they're missing out on their non-virtual life because they're too involved. I can see that that would be a problem with these bots, but because this is so new, we don't know what we don't know."

While the results of his trial were promising, Jacobson warned that the large language model used in his study was carefully trained for years by some of the most prominent scholars in the psychiatric field, unlike most "therapy" bots available online. "This has inherently got a lot more danger than a lot of folks are necessarily aware of," Jacobson said.
"There's probably a great deal of good that can happen from this, but there's a great deal we don't know, like for example, when folks are turning to these things for companionship, does that actually enhance their ability to practice in social settings and build human bonds, or do folks actually further withdraw and replace what would be otherwise human relationships with these parasocial relationships with these chatbots?"

Jacobson is particularly concerned about AI's impact on developmental processes among younger people who haven't grown up with old-school social norms and habits. While testifying before the Senate Commerce Committee in early May about child safety in the AI era, Altman said he would not want his son to have a best friend bond with an AI bot, adding that children require "a much higher level of protection" than adults using AI tools.

"We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson said. "And in my mind, that's acting quite irresponsibly. You know, a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."

Read the original article on Business Insider

There's one bright spot for San Francisco's office space market

Los Angeles Times · 2 hours ago

SAN FRANCISCO — In recent years, San Francisco's image as a welcoming place for businesses has taken a hit. Major tech companies such as Dropbox and Salesforce reduced their footprints in the city by subleasing office space, while retailers including Nordstrom and Anthropologie pulled out of downtown. Social media firm X, formerly Twitter, vacated its Mid-Market headquarters for Texas, after owner Elon Musk complained about 'dodging gangs of violent drug addicts just to get in and out of the building.'

While the city remains on the defensive, one bright spot has been a boom in artificial intelligence startups. San Francisco's 35.4% vacancy rate in the first quarter — among the highest in the nation — is expected to drop one to three percentage points in the third quarter thanks to AI companies expanding or opening new offices in the city, according to real estate brokerage firm JLL. The last time San Francisco's vacancy rate dropped was in the fourth quarter, when it declined 0.2% — the first such decline since the COVID-19 pandemic, according to JLL.

'People wanted to count us out, and I think that was a bad bet,' said Mayor Daniel Lurie. 'We're seeing all of this because the ecosystem is better here in San Francisco than anywhere else in the world, and it's really an exciting time.'

Five years ago, AI leases in San Francisco's commercial real estate market were relatively sparse, with just two leases in 2020, according to JLL. But that's since soared to 167 leases in the first quarter of 2025. The office footprint for AI companies has also surged, making up 4.8 million square feet in 2024, up from 2.6 million in 2022, JLL said. 'You need the talent base, you need the entrepreneur ecosystem, and you need the VC ecosystem,' said Alexander Quinn, senior director of economic research for JLL's Northwest region. 'So all those three things exist within the greater Bay Area, and that enables us to be the clear leader.'
AI firms are attracted to San Francisco because of the concentration of talent in the city, analysts said. The city is home to AI companies including ChatGPT maker OpenAI and Anthropic, known for the chatbot Claude, which in turn attract businesses that want to collaborate. The Bay Area is also home to universities that attract entrepreneurs and researchers, including UC Berkeley, UC San Francisco and Stanford University.

Venture capital companies are pouring money into AI, fueling office and staff growth. Last quarter, OpenAI landed the world's largest venture capital deal, raising $40 billion, according to research firm CB Insights. OpenAI leases about 1 million square feet of space across five different locations in the city and employs roughly 2,000 people in San Francisco. The company earlier this year opened its new headquarters in Mission Bay, leasing the space from Uber.

OpenAI began as a nonprofit research lab in 2015, and the people involved found their way to San Francisco for the same reason earlier generations of technologists and frontier-pushers in the United States were drawn to the city, said Chris Lehane, OpenAI's vice president of global affairs, in an interview. 'It is a place where, when you put out an idea, no matter how crazy it may seem at the time, or how unorthodox it may seem ... San Francisco is the city where people don't say, 'That's crazy,'' Lehane said. 'They say, 'That's a really interesting idea. Let's see if we can do it.''

Databricks, valued at $62 billion, is also expanding in San Francisco. Databricks in March announced it will move to a larger space in the Financial District next year, boosting its office footprint to 150,000 square feet and more than doubling its San Francisco staff in the next two years. It pledged to hold its annual Data + AI Summit in the city for five more years.
The company holds 57,934 square feet at its current San Francisco office in the Embarcadero, according to CoStar, which tracks real estate trends. 'San Francisco is a real talent magnet for AI talent,' said Databricks co-founder and vice president of engineering Patrick Wendell. 'It's a beautiful city for people to live and work in and so we really are just following where the employees are.'

Several years ago, Wendell said, his company was considering whether to expand in San Francisco. At the time, it was unclear whether people would return to offices after the pandemic, and some businesses raised concerns about the safety and cleanliness of San Francisco's streets. Wendell said his company decided to invest more in the city after getting reassurances from city leaders.

'People are seeing an administration that is focused on public safety, clean streets and creating the conditions that also says that we're open for business,' said Lurie, who defeated incumbent Mayor London Breed last November by campaigning on public safety. 'We've said from day one, we have to create the conditions for our arts and culture, for our small businesses and for our innovators and our entrepreneurs to thrive here.'

Laurel Arvanitidis, director of business development for San Francisco's Office of Economic and Workforce Development, said that the city's policy and tax reforms have helped attract and retain businesses in recent years, including an office tax credit that gives up to a $1-million credit for businesses that are new or relocating to San Francisco. On Thursday, Lurie announced on social media that cryptocurrency exchange Coinbase is opening an office in San Francisco after leaving the city four years ago. 'We are excited to reopen an office in SF,' Coinbase Chief Executive Brian Armstrong wrote in response to the mayor's social media post.
'Still lots of work to do to improve the city (it was so badly run for many years), but your excellent work has not gone unnoticed, and we greatly appreciate it.'

Santa Clara-based Nvidia is also looking for San Francisco office space, according to a person familiar with the matter who declined to be named. The news was first reported by the San Francisco Chronicle. Nvidia, which also has California offices in San Dimas and Sunnyvale, declined to comment. 'It's because of AI that San Francisco is back,' Nvidia Chief Executive Jensen Huang said last month on the Hill & Valley Forum podcast. 'Just about everybody evacuated San Francisco. Now it's thriving again.'

But San Francisco still has challenges ahead, as companies continue to push workers to return to the office. While the street environment has improved, it will be critical for the city to keep up the progress. Lurie said his administration inherited the largest budget deficit in the city's history and has to get it under control. His administration's task is to make sure streets and public spaces are clean, safe and inviting, he said. 'We have work to do, there's no question, but we are a city on the rise, that's for sure,' Lurie said.

Times staff writer Roger Vincent contributed to this report.
