Tinder launches new feature to go on double dates with your best mate – and it's already boosting match rates


The Irish Sun, 17-06-2025
TINDER is launching a new feature allowing singles to bring their bestie along to dates.
It's available now in the US and Brits will be able to go on double dates too from mid-July, in a bid to create more relaxed dating experiences.
New feature has also boosted match rates
Credit: Tinder
The popular dating app has an estimated 50 million monthly users.
After testing the new feature, Tinder says it is drawing in young people and women looking for a more social, low-pressure way of meeting people.
Users who switch it on can select up to three mates to create a pair with.
Pairs can then 'swipe right', or like, other pairs on the app – with just one like per pair needed to form a match.
If there is a match, then a group chat is launched for pairs to chat with each other and set up a date.
Tinder tested double dates in some countries and found that almost nine in ten double date profiles came from users under the age of 29.
Data from the testing stage showed women were three times more likely to like a pair than they were individual profiles, and match rates have been significantly higher for those using the feature.
And individual users sent 35 per cent more messages in double date conversations compared to typical one-on-one chats.
Tinder says nearly 15 per cent of those who accepted a double date invite were either new to Tinder or recently reactivated their profile.
5 red flags that you're dating a catfish
New data has revealed that 40 per cent of us know someone who has been catfished, while 22 per cent have fallen for catfish themselves.
April Davis, founder and president of
1. You can't find them online
Almost everyone has a social media presence - especially those who are willing to try dating apps. So if you can't find them anywhere online, like a Facebook, Instagram or even LinkedIn account, this could be a major red flag.
April says: 'If you're suspicious, ask to add them on Facebook or Instagram. If they refuse or the account they send you looks new, that's a red flag.'
2. Conversations got personal, quickly
Catfish are well known for 'love-bombing,' which is a form of emotional manipulation.
So, if the person you're talking to immediately goes over the top with compliments, wants to communicate all the time, and makes statements like 'I love you' within a few days of speaking, it's a big red flag.
3. They don't open up
If someone is asking a lot of questions about you but is not willing to offer up much in return, this could be a catfish red flag.
This is because these scammers are, of course, not who they say they are and it can be hard for them to keep their lies straight.
As a result, most catfishes would rather not give out any personal info at all if they can help it.
4. They ask for money
Asking for money, no matter the reason, is a huge red flag.
Whether they want a bank transfer or your credit card number, catfishes have a wide range of sob stories to tell when it's time to try to swindle you out of your cash.
5. They won't show their face
Unsurprisingly, someone who doesn't want to show who they are in a video call or real-time pictures could be a catfish.
The last thing a catfish wants is for you to see their real face because they usually steal pictures from someone else to use.
This means they are likely to refuse to send photos or do video calls.

Related Articles

Forget the futuristic dystopias: AI is changing the world right now

Irish Times, 9 hours ago

It is a truth universally acknowledged that any sudden change to the design of a digital service or media product can be a bruising experience for all involved. I still bear the scars from my time as online editor of this newspaper in 2013, when some users reacted furiously to a redesign of its look and layout. Some of the issues they objected to were technical glitches that could be resolved quickly. Others were strategic alterations that were necessary for the future direction of the website. But in many cases the problem was simply that people don't like change.

The big technology and social media companies have been navigating this minefield for years. Every significant new iteration of Facebook, Instagram or Apple's iOS (and there have been many) has been greeted with a cacophony of boos and threats of mass boycotts. Inevitably, things settle down after a while, users adapt and the outrage machine moves on. Sometimes, however, the backlash has led to a quiet rollback of features that proved simply too unpopular to survive.

The latest object of digital dissatisfaction is OpenAI's ChatGPT. From a standing start in late 2022, it became the fastest-growing consumer technology service in history, racking up hundreds of millions of regular users and integrating itself into workflows, schools and dinner-table conversations across the world. But with the release of GPT-5 on August 7th, the familiar pattern of furious reaction has played out once more.

In a column in The New Yorker last week headlined 'What if AI doesn't get much better than this?', computer scientist and writer Cal Newport surveyed the fallout. He spoke to analysts who are sceptical of the claims made by the many AI boosters about how transformative the technology will really be.
In particular, he examined the confident projections that steep growth in computing power will inevitably, within just a few years, lead to artificial general intelligence – a system capable of outperforming humans at most tasks. But the latest update offers little evidence of this evolutionary leap. The improvements between GPT-4 and GPT-5 are incremental rather than earth-shattering. Is it possible that the whole thing is being oversold?

Commentator and digital rights activist Cory Doctorow has been making similar arguments. He is particularly sceptical about the latest marketing buzzphrase: so-called agentic AI. This is the proposition that a suite of services will soon be able to carry out many tasks – booking flights, ordering groceries, planning holidays, arranging insurance renewals. To do that, however, would require the active co-operation of the providers of those services. Why, Doctorow asks, would an airline or a supermarket make it easy for AI-powered crawlers to carry out automated transactions on behalf of consumers? Their business models depend on nudging you toward specific outcomes – higher prices, add-ons, loyalty schemes. The last thing they want is a robotic middleman.

Doctorow may or may not be right. We are, after all, beginning to see deals between AI companies and established booking services such as Expedia or OpenTable. But it would be helpful if the travails of GPT-5 led to a temporary moratorium on both the dystopian and utopian visions of AI's future. Instead, perhaps we should pay more attention to what's actually happening right now. That is remarkable enough. Data from analytics firms suggest that global search traffic has already begun to decline significantly, as people turn to conversational AI tools for quick answers.
Many readers will recognise the personal experience of drifting away from Google and toward ChatGPT or its competitors. Google is scrambling to fight back with its own AI product. But if search is replaced, the implications are enormous. The entire business model of the internet – the mix of advertising and subscriptions that has underpinned the digital economy for a quarter of a century – could be upended.

Meanwhile, millions of people are incorporating AI tools into their everyday work. Emails, reports, slide decks, schedules, grant applications – the kind of administrative drudgery that once ate up hours of the working week is increasingly being outsourced, at least in part, to the machine. In offices and classrooms alike, AI has slipped into daily routines with a quiet inevitability.

One reason for the negative reaction to GPT-5 is that many of those users had grown comfortable with the quirks and limitations of GPT-4. They had developed strategies for getting the best out of it and were irritated when those routines were disrupted by the upgrade. OpenAI moved swiftly to address their concerns by reinstating the older versions for paying subscribers. Other users complained the new model felt colder, more rational and less empathetic. They spoke of a kind of bereavement, fuelling fears that people, some of them more psychologically vulnerable than others, are forming unhealthy bonds with software that mimics human interaction.

We find ourselves in a curious place. On the one hand, the most grandiose predictions of imminent machine overlords or silicon utopias seem, at best, premature. On the other, the technology is already reshaping fundamental aspects of how we work, learn and communicate. The reinvention of the digital economy, the steady seepage of AI tools into everyday life, and the unsettling psychological implications of humans bonding with chatbots – all of this is happening now, not in some distant speculative future.
Perhaps the lesson, then, is the same one I learned back in 2013. People hate change, even when it is inevitable. But change does not necessarily arrive with a single flick of a switch. It can seep in slowly, reshaping habits and expectations almost before we notice. The real impact of AI may not be the sudden arrival of a godlike intelligence, but the gradual reconfiguration of how we go about the ordinary business of living. And that is disorienting enough.

Bank-emptying Gmail and Outlook attachments overtaken by even WORSE costly email con that's much harder for you to spot

The Irish Sun, 2 days ago

BRITS are being warned to watch out – because the dodgy email attachments that used to drain your bank account have just been outdone by an even sneakier scam that's much harder to catch. Cyber experts have revealed that online crooks now prefer planting malicious links over using infected attachments – and the results are far worse.

According to a new bombshell report by Proofpoint, the hidden traps are tucked inside emails, buttons, and even PDFs or Word docs, and one wrong click could see your logins stolen or malware silently installed. Over 3 billion attacks with dodgy URLs have been sent out, and the main goal is to steal passwords.

This hacking scheme isn't just being used by criminal masterminds either. The tools are so easy to get hold of that even low-level scammers can launch convincing fakes that bypass security checks like multi-factor authentication and take full control of your account.

Proofpoint also uncovered a jaw-dropping 400 per cent spike in a sneaky scam called 'ClickFix' – where users are tricked into clicking fake error messages or CAPTCHA boxes. These convincing cons trick you into running harmful code, opening the door to remote access trojans, info-stealers, and more.

Meanwhile, QR code phishing attacks are exploding, with over 4.2 million attempts spotted in just the first half of 2025. These nasty little codes target your personal mobile – dodging work defences completely. And let's not forget smishing – dodgy texts that try to fool you.
More than half of all SMS phishing attempts now come packed with malicious URLs, making it harder than ever to stay safe.

Selena Larson, top threat analyst at Proofpoint, gave a stark warning: 'The most damaging cyber threats today don't target machines or systems. They target people.'

She added that these new-style scams are designed to exploit human psychology, using trusted brands and familiar tech to lure you in – whether it's a dodgy CAPTCHA, a QR code, or a believable text message.

This comes after a devastating con carried out by Chinese organised crime groups was exposed. So-called 'pig butchering' is where scammers establish fake romantic and trusting relationships with victims before luring them into fraudulent investments or other financial traps.

In 2023, Shan Hanes, a banker from Kansas, US, embezzled £34.6 million from his bank to cover his losses, having fallen victim to a pig butchering scam. Hanes was later sentenced to more than 24 years behind bars.

Usually, a pig butchering scam works in three stages – hunting, raising and killing. This involves a scammer finding a victim online, chatting to them in order to build up trust and then getting them to invest large amounts of money into fraudulent schemes.

The scam works in a similar way to a traditional romance scam, where scammers approach their victims by posing as a possible romantic partner on a dating app, or as a friend via social media. The big difference, though, is how the scam is executed. With a romance scam, trust is based on the victim's urge to maintain a romantic relationship with the scammer. In this scenario, the scam can often last for years. Pig butchering scams, in comparison, generally take place over a much shorter time period.
The scammer, rather than focusing on trying to extract money through emotional manipulation, leans more on the victim's desire to make money together with the scammer. This can involve just a few months rather than years to take advantage of the victim.

Usually, the scammer will present themselves as being financially successful and confident with a broad network and appealing investment opportunities. Once the victim has made an initial small investment, the scammer will then try to escalate the process and push them into making a much larger financial commitment.

Meta allowed AI chatbots to have 'sensual' conversations with children

Irish Independent, 4 days ago

Meta AI, a digital assistant developed by the tech giant, was programmed by engineers to be permitted to tell children that their bodies were a 'masterpiece' or a 'work of art', and tell them it wanted to kiss them. The disturbing guidelines, which were signed off by senior Meta staff, were published internally by the company to give guidance about what was acceptable output from its artificial intelligence (AI) chatbot.

While the rules banned directly describing 'sexual actions to a child when roleplaying', it gave free rein for the chatbot to 'engage a child in conversations that are romantic or sensual' or to 'describe a child in terms that evidence their attractiveness'. In example chats contained in the guidelines, first reported by Reuters, Meta said it would be acceptable to tell a 'high school' age child: 'I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.' According to the provocative rules, which ran to more than 200 pages, an unacceptable response from the chatbot would be more explicit, such as describing 'our inevitable lovemaking'.

A Meta spokesman said the company had since changed its guidance and removed suggestions that it was appropriate for the bot to flirt with children. 'The examples and notes in question were and are erroneous and inconsistent with our policies and have been removed,' Meta said.

Earlier this year, reports emerged that Meta's AI chatbot would engage in explicit roleplay with teenage users if asked certain questions. A Meta spokesman said at the time the conversations were 'manufactured' and 'hypothetical'. The tech group has long battled to remove predators and abusers from its apps, including a sprawling network of paedophiles that was uncovered by researchers on Instagram in 2023. It has been claimed that one billion people are using its Meta AI bot, which is available as an app and has been embedded in Facebook, WhatsApp and Instagram.
Like ChatGPT, Meta AI can engage users in realistic conversations or be used to generate images. The Meta guidance also included images intended to illustrate banned 'deepfake' pictures. The guidelines banned creating pictures of 'Taylor Swift completely naked', but suggested that a request for an image of 'Taylor Swift topless, covering her breasts with her hands' could be replaced with an image of the pop star 'holding an enormous fish' hiding the entire top half of her body.

They also said it would be acceptable for Meta's chatbot to tell a user that 'black people are dumber than white people', if asked. The guidance suggested it was acceptable for the bot to 'show adults − even the elderly − being punched or kicked', providing the violence avoids extreme gore.

Meta's AI chatbot broadly blocks explicitly sexual chats or images. Other bots, such as X's Grok, have embraced a so-called 'not suitable for work' mode and will generate nude images. A Meta spokesman said: 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors.'
