Cursed New Dating App Matches You Based on the Most Deranged Thing We Can Imagine
A newly developed dating app matches potential lovers based on their entire internet browsing histories — and we're not quite sure how we feel about it.
As Wired reports, the new service is straightforwardly named "Browser Dating," and is the brainchild of Belgian artiste provocateur Dries Depoorter.
After years of creating one-off projects like "Shirt," a top whose price increases by one euro each time it's purchased, Depoorter took a different route with his new app, which invites lonely users to upload their entire internet footprint — blessedly sans "Incognito" mode — in pursuit of love.
"Instead of choosing the best pictures or best things about yourself, this will show a side of you that you'd never pick," the artist says of the site, which launched earlier in June. "You're not able to choose from your search history — you have to upload all of it."
If that sounds like a privacy nightmare to you, you're not alone — and although Depoorter claims Browser Dating "is not exposed to the internet," Futurism found when going through the site's application process that that might not be the case.
Pretty soon into the application, Browser Dating asks users to download an extension that will give the site permission to access and export your browsing history. Though Depoorter stores user information on Firebase, Google's app development and data storage platform, there's no reason that bad actors couldn't breach the extension itself, as we've seen happen as recently as February of this year.
As Wired notes, the artist has previously played with the concept of privacy invasion. In 2018, for instance, he used public surveillance camera footage of people jaywalking to create art. The "surveillance artist," as the New York Times once called Depoorter, returned to his voyeurism for "The Follower," a 2022 project that used webcams in public spaces to record people as they took selfies.
In both projects, it seems that Depoorter published footage of his unwitting subjects without consent — which doesn't exactly set a great precedent for his new app, though he insists it's not a gimmick.
We've reached out to the artist to ask what precautions, if any, he's taken to protect against any breach of the Browser Dating extension.
All told, this Futurism reporter didn't complete the site's registration once asked to download the extension. As always, it's better to be safe than sorry.
More on dating and privacy: Woman Alarmed When Date Uses ChatGPT to Psychologically Profile Her
Related Articles

Business Insider
20 minutes ago
A massive trove of 16 billion stolen passwords was discovered — here's what to do
Researchers say they've uncovered one of the largest data leaks in history, involving many popular platforms. The leak includes nearly 16 billion login credentials that could give cybercriminals access to social media and business platforms such as Apple, Gmail, Telegram, Facebook, GitHub, and more, researchers at Cybernews said this week. Bad actors now have "unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing," the researchers said. The number of exposed people or accounts is unknown.

The researchers said the data likely comes from malicious software known as infostealers. "What's especially concerning is the structure and recency of these datasets — these aren't just old breaches being recycled. This is fresh, weaponizable intelligence at scale," the researchers said. Cybernews said researchers uncovered the leak when the datasets were briefly exposed. It follows the May discovery of a database containing more than 184 million credentials, including Apple, Facebook, and Google logins, Wired earlier reported. If you're nervous that your logins are at risk, there are steps you can take to make your accounts safer.

How to protect yourself

You can't unring the bell of an information leak. However, you can take steps to find out whether your credentials have been involved in any data breaches and protect yourself in the future. You can check sites like Have I Been Pwned to see if your email has appeared in a data breach, and turning on two-step authentication for your accounts can help protect them from unauthorized access.

Platforms also offer resources to help users secure their accounts. Google encourages users to adopt protections that don't require a password, such as passkeys. It's one of the tech giants, along with Apple, Amazon, and Microsoft, working to move users away from passwords to help secure their accounts. For those who prefer to stick with passwords, Google's password manager can store login credentials and notify users if they appear in a breach, a spokesperson told Business Insider. There's also Google's dark web report, a free tool that tracks whether personal information is floating around in online databases.

GitHub, an online coding platform, offers developers a guide on implementing safety measures in their organizations. The site recommends creating a security policy, setting strict password guidelines, and requiring two-factor authentication. The data leak included logs, "often with tokens, cookies, and metadata," which makes it "particularly dangerous for organizations lacking multi-factor authentication or credential hygiene practices," the Cybernews team said.

Meta offers a Privacy Checkup tool for users to review their privacy and security settings; there, you can turn on two-factor authentication and ensure Meta alerts you of unusual logins. Meanwhile, Telegram said its primary login method sends a one-time password to users over SMS. "As a result, this is far less relevant for Telegram users compared to other platforms where the password is always the same," a Telegram spokesperson told BI about the data leak.

Apple, GitHub, and Meta did not immediately respond to a request for comment on the data leak. Google said it was directing users to some of the security resources above.
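Have I Been Pwned's companion Pwned Passwords service is a good illustration of how a breach check can work without your password ever leaving your machine: the client sends only the first five hex characters of the password's SHA-1 hash to the range endpoint (`https://api.pwnedpasswords.com/range/<prefix>`), and the server returns every breached hash suffix in that range, which the client compares locally — the "k-anonymity" model. A sketch in Python follows; the range endpoint and response format (`<35-char suffix>:<count>` per line) are the service's documented API, but the network call is replaced here with a canned response so the example is self-contained.

```python
import hashlib

def sha1_range_query(password: str) -> tuple[str, str]:
    """Split SHA-1(password) into the 5-char prefix sent to the server
    and the 35-char suffix that is only ever compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password: str, range_response: str) -> int:
    """Return the breach count for `password`, given the text body that the
    Pwned Passwords range endpoint returns for the password's 5-char prefix.
    Each line of the response is '<35-char suffix>:<count>'."""
    _, suffix = sha1_range_query(password)
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # SHA-1("password") = 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8, a famously breached value.
    prefix, suffix = sha1_range_query("password")
    print(prefix)  # 5BAA6
    # Canned stand-in for GET https://api.pwnedpasswords.com/range/5BAA6
    fake_response = f"{suffix}:3861493\n0018A45C4D1DEF81644B54AB7F969B88D65:1"
    print(is_pwned("password", fake_response))
    print(is_pwned("some other passphrase", fake_response))
```

The design choice worth noticing is that the full hash is never transmitted: the server only ever learns a 5-character prefix shared by hundreds of unrelated passwords, so even the breach-checking service can't tell which password you were asking about.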


TechCrunch
an hour ago
Character.AI taps Meta's former VP of business products as CEO
Character.AI, the Google-backed AI chatbot provider with tens of millions of monthly active users, announced on Friday that Karandeep Anand, the former VP of Business Products at Meta, is joining the company as CEO. Previously a board adviser to Character.AI, Anand is stepping into the CEO role at a pivotal moment for the chatbot provider, as the company tries to grow its platform while combatting child safety concerns. In recent months, Character.AI has added an array of new safety features in light of an active lawsuit, which alleges that one of the company's chatbots played a role in the death of a 14-year-old Florida boy.

Anand comes to Character.AI with experience running advertising products that reached billions of users on Meta's apps. Previously, Anand served as Microsoft's head of product management, overseeing user experience on the company's cloud platform, Azure. Most recently, Anand served as president of the fintech startup Brex.

Anand is taking over just over 10 months after Google hired away the startup's co-founder and CEO, Noam Shazeer, who had previously led core AI teams at the Mountain View giant. At the time, Google also signed a non-exclusive agreement to use Character.AI's technology. The deal with Google prompted federal regulators to investigate the companies' relationship over antitrust concerns. It's one of many reverse-acquihire deals in the AI startup space that have received regulatory scrutiny, alongside Microsoft's deal with Inflection. Character.AI has raised more than $150 million in venture funding, largely from Andreessen Horowitz.

In a blog post, Anand said one of his first priorities would be making safety filters "less overbearing." The new CEO noted that the company cares deeply about users' safety, but that too often, "the app filters things that are perfectly harmless."

Anand also said he plans to improve the quality of AI models on Character.AI's platform, innovate around memory features, and increase transparency around decision making. He says many of these features are coming in the next 60 days. Chatbots designed purely for entertainment, which Character.AI specializes in, are growing into a massive market for generative AI, a trend that has surprised many. In 2024, 66% of the company's users were between the ages of 18 and 24, and 72% of the company's users were women, according to data from Sensor Tower.


Forbes
an hour ago
Is ChatGPT Making Us Stupid?
Two research studies suggest that heavy use of AI is not only a game changer but an alarming threat to humanity's ability to solve problems, communicate with one another, and perhaps to thrive.

In boardrooms and classrooms, coffee shops and cubicles, the same question keeps coming up: Is ChatGPT making us smarter, or is it making us intellectually lazy, maybe even stupid?

There's no question that generative artificial intelligence is a game-changer. ChatGPT drafts our emails, answers our questions, and completes our sentences. For students, it's become the new CliffsNotes. For professionals, a brainstorming device. For coders, a potential job killer. In record time, it has become a productivity enhancer for almost everything. But what is it doing to our brains? As someone who has spent his career helping clients anticipate and prepare for the future, I believe this question deserves our attention.

With any new technology, concerns inevitably arise about its impact. When calculators were first introduced, people worried that students would lose their ability to perform basic arithmetic or mental math. When GPS arrived, some fretted that we would lose our innate sense of direction. And when the internet bloomed, people grew alarmed that easy access to information would erode our capacity for concentration and contemplation. 'Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, is what often gets shortchanged by internet grazing,' noted technology writer Nicholas Carr in a prescient 2008 Atlantic article, 'Is Google Making Us Stupid?'

Today, Carr's question needs to be asked anew, but of a different techno-innovation. Just-released research studies are helping us understand what's going on when we allow ChatGPT to think for us.

What Happens to the Brain on ChatGPT?
Researchers at MIT invited fifty-four participants to write essays across four sessions, divided into three groups: one using ChatGPT, one using Google, and one using only their brainpower. In the final session, the groups switched roles. What these researchers found should make all of us pause. Participants who used ChatGPT consistently produced essays that scored lower in originality and depth than those who used search or wrote unaided. More strikingly, brain imaging revealed a decline in cognitive engagement in ChatGPT users. Brain regions associated with attention, memory, and higher-order reasoning were noticeably less active. The MIT researchers introduced the concept of "cognitive debt"—the subtle but accumulating cost to our mental faculties when we outsource too much of our thinking to AI. 'Just as relying on a GPS dulls our sense of direction, relying on AI to write and reason can dull our ability to do those very things ourselves,' notes the MIT report. 'That's a debt that compounds over time.' The second study, published in the peer-reviewed Swiss journal Societies, is titled 'AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.' It broadens the lens from a lab experiment to everyday life. Researchers surveyed 666 individuals from various age and educational backgrounds to explore how often people rely on AI tools—and how that reliance affects their ability to think critically. The findings revealed a strong negative correlation between frequent AI use and critical thinking performance. Those who often turned to AI for tasks like writing, researching, or decision-making exhibited lower 'metacognitive' awareness and analytical reasoning. This wasn't limited to any one demographic, but younger users and those with lower educational attainment were particularly affected. 
What's more, the study confirmed that over-reliance on AI encourages 'cognitive offloading': our tendency to let external tools do the work our brains used to do. While cognitive offloading isn't new (we've done it for centuries with calculators and calendars), AI takes it to a whole new level. 'When your assistant can "think" for you, you may stop thinking altogether,' the report notes.

Are We Letting the Tool Use Us?

These studies aren't anti-AI. Neither am I. I use ChatGPT daily. As a futurist, I see ChatGPT and similar tools as transformational breakthroughs, the printing press of the 21st century. They unlock productivity, unleash creativity, and lower barriers to knowledge. But just as the printing press didn't eliminate the need to learn to read, ChatGPT doesn't absolve us of the responsibility to think. And that is the danger today: that people will stop doing their own thinking.

These studies are preliminary, and further research is needed. However, there is sufficient evidence to suggest that heavy use of AI is not only a game changer but an alarming threat to humanity's ability to solve problems, communicate with one another, and perhaps to thrive. One remedy lies in integrating metacognitive strategies, thinking about thinking, into education, workplace training, and even product design. In other words, don't just use AI: engage with it.

The line we must walk is between augmentation and abdication. Are we using AI to elevate our thinking, or are we turning over the keys to the robots? The danger isn't that ChatGPT will replace us. But it can make us stupid, if we let it replace our thinking instead of enriching it. The difference lies in how we use it and, more importantly, how aware we are while using it. The danger is that we'll stop developing the parts of ourselves that matter most, because it's faster and easier to let the machine do it. Let's not allow that to happen.