Want your partner to be a certain height? Tinder will now let you filter by it

Economic Times · 2 days ago

Tinder is testing a feature that lets paid users set a height preference for potential matches. The setting will not block profiles, but it will influence recommendations. The move comes as the dating app tries to offer more tailored experiences, and potentially appeal more to women, amid a decline in paid users. The height preference is apparently being trialled with Tinder's Gold and Premium subscribers, and it adds to growing concerns about superficiality on dating apps.
The launch of the test comes after Match, the parent company of Tinder, reported a 5% decline in paying users in its most recent earnings. In the first quarter, Match's dating apps had 14.2 million paid subscribers, down from 14.9 million a year earlier, as per a report by TechCrunch.
The test leans into the perception that dating apps are shallow. A company representative confirmed the discovery setting is being run as a worldwide test after a Reddit user shared a screenshot of the updated height setting in the Tinder app, as quoted in a report by TechCrunch. Given that the app's user base is typically dominated by men, both in the US and abroad, the company may be hoping that a height setting will entice more women to use and pay for it.
According to the company, the setting will express a preference rather than acting as a "hard filter." In other words, it will inform recommendations rather than block or exclude profiles.

In an emailed statement, Phil Price Fry, VP of communications at Tinder, said: "We're always listening to what matters most to our Tinder users, and testing the paid height preference is a great example of how we're building with urgency, clarity, and focus. This is a component of a larger initiative to encourage more deliberate Tinder connections. Every decision we make with our new product is guided by certain principles, and this one speaks directly to a few of them. Although not all tests are implemented permanently, they all teach us how to provide more intelligent, pertinent experiences and advance the category."

The height setting is likely to generate more controversy and worry, even though dating apps normally allow users to filter for people based on more conventional criteria, such as age, whether they want a long-term relationship, or whether they have children.
With the rise of online dating, judging someone largely on physical attributes, such as height or appearance, has become increasingly common. Tinder's user interface, which places a strong emphasis on browsing through people's photos, helped accelerate this trend. The popularity of dating apps not only revealed people's preference for the conventionally attractive, but also fostered a culture in which prejudice against shorter men was treated as the norm.

Can I filter Tinder matches based on height? Only if you are a Gold or Premium subscriber in the test group; Tinder is currently testing the height preference setting.
Does the height setting exclude shorter or taller users? No, it only helps guide who Tinder shows you; it does not completely block anyone.

Related Articles

Hey chatbot, is this true? AI 'factchecks' sow misinformation

The Hindu

an hour ago


As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool. With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok, now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries, wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."

Hey chatbot, is this true? AI 'factchecks' sow misinformation

Time of India

an hour ago


During India and Pakistan's recent conflict, social media users turned to AI chatbots like Grok, ChatGPT, and Gemini for fact-checking, only to receive misinformation. Experts warn these tools often produce false or biased content, raising concerns as tech firms cut human fact-checkers and users increasingly rely on AI for news.

Girls in schools in Japan always wear short skirt due to...., reason will leave you shocked

India.com

an hour ago


In schools and colleges, where uniforms are usually a symbol of uniformity and discipline, Japan has a completely different take. Rather than being purely a marker of equality or identity, the Japanese school uniform says as much about style and pop culture. Walk past any Japanese school and you will see girls wearing a look that is steeped in pop culture yet impractical: a short, pleated skirt. Whether it is sunny or freezing outside, girls wear miniskirts.

In trend for decades, short and mini skirts in Japan started as a pop culture fad and later became a cultural norm. Back in the 1990s, Japanese pop star Namie Amuro rose to fame and was widely hailed for her fashion sense, which featured short skirts. Teenage girls, including school students, soon gravitated toward the trend and tailored their uniforms accordingly, so much so that the look has effectively become part of the school dress code today.

More Than Just a Trend
According to reports, teenage schoolgirls embraced the trend because it made them feel cuter and more stylish. What began as a fashion inspiration slowly became a matter of confidence and identity for them.

Comfort vs. Climate
The trend is so entrenched that schoolgirls stick with it regardless of the weather. Even in freezing temperatures, you will find them pairing the miniskirt with scarves and jackets rather than compromising on its length.

A Tradition Cemented by Pop Culture
Beyond the school uniform, the look has become iconic, with many anime, dramas, K-pop-inspired videos, and international films embracing it. Today, it is not just an outfit but a global symbol of Japanese youth culture.
