There's a specific reason why short men try to 'appear more powerful': study confirms
'Napoleon complex,' 'short man syndrome,' 'short king': they're all nicknames for short men. You've heard them, you know them, and you can probably think of a few people who possess the overcompensating, arrogant, cocky behavior this category of guys so often displays.
Some might say it's a stereotype — but according to a study by the American Psychological Association, an arrogant attitude isn't the only thing these men are showing.
Researchers found that short men are more likely than their taller peers to show signs of jealousy and competitiveness.
'Psychological perceptions of height significantly influence social dynamics and behaviors,' the study pointed out.
'Understanding these associations can inform strategies for promoting positive body image and mental well-being, particularly among individuals who may feel marginalized by societal height standards.'
Another study revealed that men who lack height are also prone to narcissistic tendencies and try to appear more powerful than they probably are.
'Shorter people with traits such as psychopathy [lack of empathy and antisocial behaviors] can use them to demand respect, impose costs on others and impress romantic partners,' said lead researcher Monika Koslowska of the University of Wrocław in Poland, as originally reported by Men's Health.
'Appearing more powerful may, in turn, make other people perceive them as taller than they really are.'
Men are not only overcompensating for their lack of height, they're also being deceitful on dating apps by lying about or exaggerating it, and single women are wising up by using ChatGPT to expose these short frauds.
'The girls are using ChatGPT to see if men are lying about their height on dating apps,' Justine Moore, a venture capitalist from San Francisco, California, revealed to 361,000 X users.
'Upload 4 pictures [to ChatGPT]. It uses proportions and surroundings to estimate height,' she instructed in her shocking tweet.
'I tested it on 10 friends & family members,' Moore proudly wrote. 'All estimates were within 1 inch of their real height.'
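For anyone curious how the same check might look outside the ChatGPT app, the snippet below is a minimal sketch using the OpenAI Python SDK. The photo URLs and the "gpt-4o" model choice are illustrative assumptions; Moore's tip itself simply means uploading the pictures in the chat interface, and her "within 1 inch" figure comes from her own informal test of 10 people, not from any stated accuracy guarantee.

# Minimal sketch: ask an OpenAI vision-capable model to estimate height from photos.
# The URLs below are hypothetical placeholders, and "gpt-4o" is an assumed model choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

photo_urls = [
    "https://example.com/date-photo-1.jpg",
    "https://example.com/date-photo-2.jpg",
    "https://example.com/date-photo-3.jpg",
    "https://example.com/date-photo-4.jpg",
]

# Build one user message containing the text prompt plus all four images.
content = [{
    "type": "text",
    "text": (
        "Estimate this man's height from these photos. Use body proportions "
        "and surrounding objects (doorways, chairs, cars) as reference points, "
        "and give a single estimate in feet and inches."
    ),
}]
content += [{"type": "image_url", "image_url": {"url": url}} for url in photo_urls]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": content}],
)

print(response.choices[0].message.content)  # e.g. an estimate such as "around 5'9\""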
You almost can't blame men for telling a white lie on their dating profiles, considering researchers at Texas A&M International University found that 'Women considered taller men with larger SHRs [shoulder-to-hip ratios] as more attractive, masculine, dominant, and higher in fighting ability.'
Their findings also pointed out that '…these sexually dimorphic features [height and a larger SHR] are a reflection of men's genetic quality.'
Researchers found that women view men with these physical qualities as having 'the ability to provide direct benefits' such as 'protection, resource provisioning.'