
New research challenges 50-50 myth: Biological sex of children may not always be random
A new study is turning our understanding of biological sex determination on its head. Long taught in biology class as a 50-50 genetic lottery between X and Y chromosomes, the sex of a child may not be entirely random. According to new findings published in Science Advances on July 18, individual families may have skewed odds toward consistently having either boys or girls — and age, genetics, and environmental factors may be involved.
Led by epidemiologist Dr. Jorge Chavarro of the Harvard T.H. Chan School of Public Health, the study analyzed data from the historic Nurses' Health Study, one of the largest and longest-running fertility databases in the world. Examining more than 58,000 pregnancies between 1956 and 2015, the researchers looked at patterns of sibling sex within families, maternal age, and genetic data.
While the broader population showed the expected close-to-even split between male and female births, something surprising appeared at the family level: some couples appeared more likely to have children of the same sex — and not by random chance.

Approximately one in three families in the study had children all of the same sex — all boys or all girls — a figure notably higher than simple probability would predict, especially in families with three or more children. Under a true 50-50 draw, for example, a three-child family would be all boys or all girls only 25 percent of the time (2 × 0.5³).

The researchers developed a statistical model suggesting that each couple may have its own "unique probability" of producing a child of one sex or the other. While these individual biases balance out across large populations, they could significantly affect outcomes within families. "It's not that boys or girls are more common overall, but that the odds may not actually be 50-50 for everyone," said Dr. Chavarro.
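To see why family-level bias changes the picture, here is a minimal simulation sketch. It compares a pure 50-50 model with one in which each couple first draws its own probability of having a boy; the Beta(5, 5) distribution below is an illustrative assumption, not the distribution the study estimated. The point is that per-couple bias raises the share of all-same-sex families even while the overall population stays near a 50-50 split.

import random

def share_same_sex(n_families=100_000, kids=3, family_bias=False, seed=1):
    """Fraction of families whose children are all the same sex.

    family_bias=False: every birth is an independent 50-50 draw.
    family_bias=True:  each couple first draws its own probability of
    having a boy from Beta(5, 5) (an illustrative assumption), then
    uses that probability for all of its births.
    """
    rng = random.Random(seed)
    same = 0
    for _ in range(n_families):
        p_boy = rng.betavariate(5, 5) if family_bias else 0.5
        sexes = [rng.random() < p_boy for _ in range(kids)]  # True = boy
        if all(sexes) or not any(sexes):
            same += 1
    return same / n_families

print(share_same_sex(family_bias=False))  # ~0.25 for three children
print(share_same_sex(family_bias=True))   # noticeably higher, ~0.32 here

Under these toy numbers the fixed 50-50 model gives about 25 percent all-same-sex three-child families, while the per-couple model gives roughly a third, close to the share the study reports, though the match to the paper's actual estimates is not implied.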
The effect appeared even stronger in women who had their first child later in life, suggesting possible biological shifts over time that influence which type of sperm — X-carrying (girl) or Y-carrying (boy) — is more likely to fertilize the egg.

As women age, for example, vaginal pH and cervical mucus composition change, potentially favoring sperm carrying one chromosome over the other. X-bearing sperm tend to be larger and more resilient, possibly giving them a better chance of success in the slightly more acidic environment that can develop later in life.
The study also identified two genetic loci that may be linked to the likelihood of having children of the same sex. These genes do not appear to be directly involved in known reproductive pathways, however, and their role is not yet understood.
"These are just initial hints," Chavarro explained. "They point to a potential genetic influence on sex bias, but don't yet explain the mechanism." The finding opens new doors for genetic and evolutionary biologists to explore how heritable factors may interact with environmental and physiological variables to influence offspring sex.

The findings, while intriguing, are not without controversy. Australian psychologist and behavioral geneticist Dr. Brendan Zietsch remains skeptical. Zietsch previously worked on a large Swedish study analyzing millions of birth records, which found no evidence of consistent sex patterns within families. He believes that the U.S.-based cohort (95% white, mostly nurses and health professionals) may not be representative enough to support broader claims.

Still, the study could reshape how we understand fundamental reproductive biology. For couples puzzled by having multiple same-sex children — and others praying for variety — the research may provide both insight and reassurance. While conception still carries an element of chance, it may not be as random as previously believed.

It also raises big questions for future research: could IVF or fertility treatments one day account for these biological skews? Could genetics or maternal physiology be guided to increase the chances of balancing the family tree?
Related Articles


NDTV
'Cannibal' Robot That Can Grow, Heal By Consuming Other Bots Developed
In what appears to be a scene straight out of a dystopian sci-fi movie, scientists at Columbia University in New York have developed a robot capable of growing and healing by consuming other robots. Researchers say the development marks a big step in the creation of self-sustaining machines and could open an entirely new dimension of autonomy for artificial intelligence. "It is possible to form machines that can grow physically and become more capable within their lifetime by consuming and recycling material from their immediate surroundings and other machines," the study published in the journal Science Advances highlighted.

The so-called cannibal robot works through a process called "robot metabolism", which allows it to self-assemble and improve its performance by adding new parts, including discarded old ones that have run out of battery or no longer serve a function. The researchers showed how Truss Links -- robotic magnetic sticks inspired by the Geomag toy -- self-assembled into two-dimensional shapes, which could later morph into three-dimensional robots. These robots then further improved themselves by integrating new parts, effectively "growing" into more capable machines. "Truss Links can be used to build modular robots. Modular robot systems comprise multiple parts called modules, links, or cells that can self-assemble or be assembled to achieve an objective. The Truss Link is the basic building block of our modular robot system," the study noted.

Though the development of self-sustaining robots is still at a nascent stage, the researchers say the creation of Truss Links is a step toward a future where robots can adapt and improvise "instead of being purpose-built with the vain hope of anticipating all use cases". "True autonomy means robots must not only think for themselves but also physically sustain themselves," Philippe Martin Wyder, lead author and researcher, said in a statement. "Just as biological life absorbs and integrates resources, these robots grow, adapt, and repair using materials from their environment or from other robots."

'What can go wrong'

Reacting to the news of self-sustaining robots that might become the norm in the future, social media users were divided. "Why we need this? What is the purpose of that in our world?" wrote one user, while another added: "Oh lovely, real life replicators! What can go wrong?"


Hans India
AI model trained to respond to online political posts found impressive
Researchers who trained a large language model to respond to online political posts by people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language. Polite, evidence-based counterarguments by the AI system -- trained prior to performing the experiments -- were found to nearly double the chances of a high-quality online conversation and "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances. Being open to other perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, said author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen. "To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said. Hansika Kapoor, research author at the department of psychology at Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, said, "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. This was countered by ChatGPT -- presented to participants as a "fictitious social media user" -- which tailored its argument "on the fly" according to the text's position and reasoning. The participants then responded as if replying to a social media comment; a sketch of this counterargument step appears after this article. "An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study. Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."

AI-powered models have been critiqued and scrutinised for varied reasons, including inherent bias -- political, and even racial at times -- and for being a "black box", whereby the internal processes used to arrive at a result cannot be traced. Kapoor, who was not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet; the study itself relied on humans to rate responses as well, she said. Additionally, context, culture, and timing would need to be considered for such regulation, she added. Eady, too, is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways."
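For readers curious how such a setup looks in practice, below is a minimal sketch of the counterargument step described above, assuming access to OpenAI's official Python client; the prompt wording and model name are hypothetical stand-ins, not the study's actual materials.

# Hypothetical sketch of the study's counterargument condition.
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evidence_based_counter(post: str) -> str:
    """Generate a polite, evidence-based counterargument to a post,
    mirroring the 'fictitious social media user' condition."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system",
             "content": ("You are a social media user. Reply to the post "
                         "with a respectful, evidence-based counterargument "
                         "tailored to its position and reasoning.")},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(evidence_based_counter("My stance on this issue is ..."))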
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the "partisan" nature of texts and responses was straightforward. Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India. Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible."

Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."

Another study, recently published in the journal Humanities and Social Sciences Communications, found that dark personality traits -- such as psychopathy and narcissism -- as well as a fear of missing out (FoMO) and cognitive ability can shape online political engagement. Findings from researchers at Singapore's Nanyang Technological University suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement." Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.

Describing the study as "interesting", Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs, and aspects such as voting behaviour. Her team, which has developed a scale to measure political ideology in India (published in a preprint paper), found that dark personality traits were associated with a disregard for norms and hierarchies.

