Latest news with #math
Yahoo
3 hours ago
- Business
- Yahoo
OpenAI and Google outdo the mathletes, but not each other
AI models from OpenAI and Google DeepMind achieved gold medal scores in the 2025 International Math Olympiad (IMO), one of the world's oldest and most challenging high school level math competitions, the companies independently announced in recent days. The result underscores just how fast AI systems are advancing, and yet, how evenly matched Google and OpenAI seem to be in the AI race. AI companies are competing fiercely for the public perception of being ahead in the AI race: an intangible battle of 'vibes' that can have big implications for securing top AI talent. A lot of AI researchers come from backgrounds in competitive math, so benchmarks like IMO mean more than others. Last year, Google scored a silver medal at IMO using a 'formal' system, meaning it required humans to translate problems into a machine‑readable format. This year, both OpenAI and Google entered 'informal' systems into the competition, which were able to ingest questions and generate proof‑based answers in natural language. Both companies claim their AI models correctly answered five out of six questions on IMO's test, scoring higher than most high school students and Google's AI model from last year, without requiring any human-machine translation. In interviews with TechCrunch, researchers behind OpenAI and Google's IMO efforts claimed that these gold medal performances represent breakthroughs around AI reasoning models in non-verifiable domains. While AI reasoning models tend to do well on questions with straightforward answers, such as simple math or coding tasks, these systems struggle on tasks with more ambiguous solutions, such as buying a great chair or helping with complex research. However, Google is raising questions around how OpenAI conducted and announced its gold medal IMO performance. After all, if you're going to enter AI models into a math contest for high schoolers, you might as well argue like teenagers. 
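The "five out of six questions" claim maps onto a gold medal through simple arithmetic. Below is a minimal sketch of that calculation, assuming the standard IMO format (six problems, each graded 0–7, for a 42-point maximum) and the reported 2025 gold-medal cutoff of 35 points; the cutoff value is an assumption based on public reports, not something stated in this article.

```python
# IMO scoring sketch: six problems, each graded on a 0-7 scale, 42 points maximum.
# GOLD_CUTOFF_2025 is an assumption based on publicly reported 2025 results.
POINTS_PER_PROBLEM = 7
TOTAL_PROBLEMS = 6
GOLD_CUTOFF_2025 = 35

def imo_score(problems_fully_solved: int) -> int:
    """Score if the given number of problems earn full marks and the rest earn zero."""
    if not 0 <= problems_fully_solved <= TOTAL_PROBLEMS:
        raise ValueError("must solve between 0 and 6 problems")
    return problems_fully_solved * POINTS_PER_PROBLEM

score = imo_score(5)                 # five of six problems fully solved
print(score)                         # 35 of a possible 42
print(score >= GOLD_CUTOFF_2025)     # meets the reported gold threshold
```

Under these assumptions, full marks on five problems yields exactly the reported gold-medal line, which is why partial credit on the sixth problem was not needed for either model's claim.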
Shortly after OpenAI announced its feat on Saturday morning, Google DeepMind's CEO and researchers took to social media to slam OpenAI for announcing its gold medal prematurely — shortly after IMO announced which high schoolers had won the competition on Friday night — and for not having its model's test officially evaluated by IMO. Thang Luong, a Google DeepMind senior researcher and lead for the IMO project, told TechCrunch that Google waited to announce its IMO results to respect the students participating in the competition. Luong said that Google has been working with IMO's organizers since last year in preparation for the test and wanted to have the IMO president's blessing and official grading before announcing its official results, which it did on Monday morning. 'The IMO organizers have their grading guideline,' Luong said. 'So any evaluation that's not based on that guideline could not make any claim about gold-medal level [performance].' Noam Brown, a senior OpenAI researcher who worked on the IMO model, told TechCrunch that IMO reached out to OpenAI a few months ago about participating in a formal math competition, but the ChatGPT-maker declined because it was working on natural language systems that it thought were more worth pursuing. Brown says OpenAI didn't know IMO was conducting an informal test with Google. OpenAI says it hired third-party evaluators — three former IMO medalists who understood the grading system — to grade its AI model's performance. After OpenAI learned of its gold medal score, Brown said the company reached out to IMO, which then told the company to wait to announce until after IMO's Friday night award ceremony. IMO did not respond to TechCrunch's request for comment. Google isn't necessarily wrong here — it did go through a more official, rigorous process to achieve its gold medal score — but the debate may miss the bigger picture: AI models from several leading AI labs are improving quickly.
Countries from around the world sent their brightest students to compete at IMO this year, and just a few percent of them scored as well as OpenAI and Google's AI models did. While OpenAI used to have a significant lead over the industry, it certainly feels as though the race is more closely matched than any company would like to admit. OpenAI is expected to release GPT-5 in the coming months, and the company hopes to give off the impression that it still leads the AI industry.


Yahoo
a day ago
- Entertainment
- Yahoo
40 People Who Were So Wrong But So Confident When They Posted Something On The Internet
The roundup includes:
- person who didn't know where Spain was
- person who attempted to sell their dryer
- person who was proud of their flowers
- person who tried to correct the Merriam-Webster dictionary
- person who couldn't do elementary school math
- person who misunderstood some important geography
- person who tried to make an argument that the earth is flat
- person who misspelled a word and doubled down with an explanation
- person who was trying to get to know someone
- person who had a message for baristas
- person who didn't understand what "theory" meant
- person who didn't know how time worked
- person who wanted people to remember their worth
- parent who needed to get back to school themself
- person's answer to a Hinge prompt
- person explaining a game
- person who realized what "news" meant
- person trying to find their son's glasses
- person opening up about their insecurities
- person who told someone what they were having for breakfast
- person who was trying to be sexy
- person who didn't know what the sun was
- person who didn't understand simple fractions
- person who stan'ed Big Dairy
- person who left a review about how fresh a restaurant's food was
- person who shared their goal for graduating
- person whose grammar rules made no sense
- person's message about actors
- person who corrected someone and still made a mistake
- person's passionate rant about cats' diets
- person who just needed to sort out their stomach issues
- person who found Washington very scenic
- person who was protecting their food
- person who thought someone misspelled a word
- person who insisted that space was fake
- person who planned to travel outside of the US based on the election outcome
- person who said blood was blue
- person who had a hot take about certain wings
- person who was describing someone's boyfriend
- person who gave financial advice
Yahoo
4 days ago
- Science
- Yahoo
Study confirms there's no innate difference in aptitude between boys and girls in math
Classroom teaching may be driving a gender gap in math performance, and the effect starts from the moment children begin school, a new study finds. The study, published July 11 in the journal Nature, included data on the math skills of more than 2.5 million first-grade children in France. It revealed that, while girls and boys started school with a similar level of math skills, within four months, boys performed significantly better than girls. That gap quadrupled in size by the end of the first year of formal education. Gender gaps in math performance have been documented the world over, and the origin of this disparity has long been blamed on supposedly inherent differences between the genders — "boys are better at math" and "girls are better at language" — that are actually just stereotypes without scientific backing. But the new study — and previous studies conducted in the U.S. — throws a wrench in those ideas, and instead suggests that something about formal math education spurs the gap to form. "I was very surprised, not by the fact that there was a gender gap, but that it emerges at the time when formal math instruction in school begins," study coauthor Elizabeth Spelke, a professor of psychology at Harvard University, told Live Science.
Formal education widens gaps
The new study leveraged an initiative by the French Ministry of Education to boost national math standards, which was launched after several years of disappointing performances in international assessments, and which uncovered the disturbing extent of the math skills gender gap in the country.
With the aid of cognitive scientists and educators, the French government implemented a universal program of testing for all French children to help teachers better understand the needs of each class and inform updated national standards. Since 2018, every child's math and language skills have been assessed upon entry into first grade, the first mandatory year of schooling in France. They were tested again after four months of formal education and then after one complete year of learning. These tests revealed no notable differences between girls' and boys' mathematical ability when starting school. However, within four months, a sizable gap opened up between them, placing boys ahead, and that gap only grew as schooling progressed, suggesting that classroom activities had created the disparity, the study authors proposed. Spelke and her team's analysis covered four national cohorts whose data were collected between 2018 and 2022, and included demographic data to probe the role of external social factors — such as family structure and socioeconomic status (SES) — on school performance. But they found that the emergence of the math gender gap was universal and transcended every parameter investigated: regardless of SES, family structure or type of school, on average, boys performed substantially better in the third assessment than did girls. This bolstered the hypothesis that an aspect of the schooling itself was to blame. And that idea was further supported by data from the cohort impacted by COVID-related school closures, Spelke added. "When schools were closed during the pandemic, the gender gap got narrower and then they reopened and it got bigger again," she said. "So there are lots of reasons to think that the gender gap is linked in some way that we don't understand to the onset and progress of formal math instruction." 
Causes of the math performance gap
For Jenefer Golding, a pedagogy specialist at University College London who was not involved in the study, the research raises worrying questions about attitudes or behaviors in the classroom that could be creating this disparity. "Gendered patterns are widespread but they're not inevitable," Golding told Live Science. "It's about equity of opportunity. We need to be quite sure that we're not putting avoidable obstacles in the way of young people who might thrive in these fields." However, separating these educational factors from possible social or biological contributors remains a complex issue, she said. As a purely observational study, the research does not allow any firm conclusions to be drawn about why this gender gap becomes so pronounced upon starting school. But the alarming findings are already prompting discussion among educational experts. Educational analyst Sabine Meinck of the International Association for the Evaluation of Educational Achievement drew on her own research, noting that "our data suggest early gendered patterns in parental engagement, [so] gender stereotypes may begin to take root through early childhood play." For example, "parents report engaging girls significantly more in early literacy activities, while boys are more often involved with building blocks and construction toys," she told Live Science in an email. That may be laying a foundation for how kids engage with reading and math learning in school. These differences in early childhood play have previously correlated with differing levels of scholastic achievement down the line.
The next step requires more research in classrooms, Spelke said, where researchers should gather data to develop interventions that could be useful to students, then test them. "And when we find that something is working, then it can be implemented across the board."
Yahoo
6 days ago
- Science
- Yahoo
Cultivating the next generation of scientists, engineers and energy experts
These STEM students are busy trying to beat the clock, putting their science, math, tech, and engineering skills to work building model homes out of simple materials like cardboard, wood, foam planks, and felt for insulation. 'The roof is currently made out of layers of aluminum foil and the same kind of packaging material.' The model home challenge is one of several projects students in Constellation's third annual Youth Energy Summit are tackling.