In Seattle, a Convergence of 5,444 Mathematical Minds


New York Times, Jan. 28, 2025

The world's largest gathering of mathematicians convened in Seattle from Jan. 8 to Jan. 11 — 5,444 mathematicians, 3,272 talks. This year the program diverged somewhat from its traditional kaleidoscopic panorama. An official theme, 'Mathematics in the Age of A.I.,' was set by Bryna Kra, the president of the American Mathematical Society, which hosts the event in collaboration with 16 partner organizations. In one configuration or another, the meeting, called the Joint Mathematics Meetings, or the J.M.M., has been held more or less annually for over a century.
Dr. Kra intended the A.I. theme as a 'wake-up call.' 'A.I. is something that is in our lives, and it's time to start thinking about how it impacts your teaching, your students, your research,' she said in an interview with The New York Times. 'What does it mean to have A.I. as a co-author? These are the kinds of questions that we have to grapple with.'
On the second evening, Yann LeCun, the chief A.I. scientist at Meta, gave a keynote lecture titled 'Mathematical Obstacles on the Way to Human-Level A.I.' Dr. LeCun got a bit into the technical weeds, but there were digestible tidbits.
'The current state of machine learning is that it sucks,' he said during the lecture, to much chortling. 'Never mind humans, never mind trying to reproduce mathematicians or scientists; we can't even reproduce what a cat can do.'
Instead of the generative large language models powering chatbots, he argued, a 'large-scale world model' would be the better bet for advancing and improving the technology. Such a system, he said in an interview after the lecture, 'can reason and plan because it has a mental model of the world that predicts consequences of its action.' But there are obstacles, he admitted — some mathematically intractable problems, their solutions nowhere in sight.
Deirdre Haskell, the director of the Fields Institute for Research in Mathematical Sciences in Toronto and a mathematician at McMaster University, said she appreciated Dr. LeCun's reminder that, as she recalled, 'the way we use the term A.I. today is only one way of possibly having an 'artificial intelligence.''
Dr. LeCun had noted in his lecture that the term artificial general intelligence, or A.G.I. — a machine with human-level intelligence — was a misnomer. Humans 'do not have general intelligence at all,' he said. 'We're extremely specialized.' The preferred term at Meta, he said, is 'advanced machine intelligence,' or AMI — 'we pronounce it 'ami,' which means friend in French.'
Dr. Haskell was already sold on the importance of 'using A.I. to do math, and the huge problem of understanding the math of A.I.' An expert in mathematical logic, she is working on the equivalent of a textbook: a collection of results that can be used by A.I. systems to generate and verify more complex mathematical research and proofs.
For Kenny Banks, an undergraduate at the University of North Carolina at Greensboro who attended the J.M.M., artificial intelligence does not appeal as a tool for guiding exploration. 'I think the mathematics that people currently love is driven by human curiosity, and what computers find interesting cannot be the same as what humans find interesting,' he said in an email. Nevertheless, he regretted not squeezing any A.I.-related talks into his itinerary. 'The math + A.I. theme was definitely of interest, it just ended up not working with all the things I had planned!'
Here are some other highlights from the mathapalooza in Seattle:
Day 1
At 6 p.m. on Wednesday, Jan. 8, after a ribbon-cutting and awards ceremony, attendees stampeded to the grand-opening reception in an exhibit hall. The draw was a) free food, and b) exhibitor booths occupied by publishers and purveyors of various mathy wares. At Booth 337, Robert Fathauer was selling an impressive inventory of dice — including the new '5-Player Go First Dice,' a colorful set of five 60-sided dice with no numbers in common, giving five players an equal shot when they roll to determine who goes first. Dr. Fathauer, who is based in Arizona, was also co-organizer of the meeting's art exhibit and contributed two ceramic sculptures of his own, 'Hyperbolic Helicoid' and 'Cubic Squeeze.'
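The defining property of go-first dice is easy to state: no two dice share a face value, and when the dice are rolled together, every possible finish order for the players is equally likely. Here is a minimal brute-force fairness check in Python; the two four-sided dice are a small illustrative pair (the article does not list the faces of the 60-sided set), and a complete go-first set must also pass the test for every subset of its dice:

```python
import math
from itertools import product
from collections import Counter

def is_permutation_fair(dice):
    """True if every finish order of the players is equally likely when
    each player rolls one die. Assumes no face value appears on two
    dice, so ties are impossible."""
    orderings = Counter()
    for rolls in product(*dice):
        # Rank the players by their roll, highest first.
        order = tuple(sorted(range(len(dice)), key=lambda p: -rolls[p]))
        orderings[order] += 1
    return (len(orderings) == math.factorial(len(dice))
            and len(set(orderings.values())) == 1)

# A fair two-player pair with no shared faces: each of the two
# orderings occurs in exactly 8 of the 16 possible rolls.
print(is_permutation_fair([[1, 4, 6, 7], [2, 3, 5, 8]]))  # True
```

Brute force remains viable even at full scale: checking all five 60-sided dice at once means enumerating 60^5, or about 778 million, rolls.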
The exhibit's award-winning art submissions were 'Saddle Monster,' crocheted in wool, copper and nylon, by Shiying Dong of Greenwich, Conn., a mathematical artist with a Ph.D. in physics, and 'Twisted' and 'Untwisted,' created using a vector graphics app on an iPad, by Rashmi Sunder-Raj, a mathematical artist in Waterloo, Ontario.
Rebecca Lin, a Ph.D. student in computer science at M.I.T., received an honorable mention for a laser-cut engraving on paper titled 'Disintegrating (State of Mind).'
Day 2
On Thursday, Jon Wild, a music theorist at McGill University in Montreal who does math on the side, was invited to a session on applied mathematics to discuss his investigations into 'counting arrangements of circles' in the plane. Given certain constraints, there is one way to draw one circle, three ways to draw two circles, 14 ways to draw three, 173 ways for four, and 16,951 ways to draw five. (The enumeration of six circles is yet to be computed.) Dr. Wild was surprised to learn that this research was relevant to 3-D printing: that is, to how multiple printer heads could each trace circular arcs while avoiding collisions. 'I was tickled,' Dr. Wild said.
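The three arrangements of two circles are the generic ways a pair of circles can sit in the plane: disjoint, nested, or properly overlapping. Here is a minimal sketch of that three-way classification from centers and radii, treating tangent configurations as degenerate and ignoring them; the hard part of Dr. Wild's census, telling apart the topologically distinct ways many such pairwise relations can coexist, only begins at three circles:

```python
import math

def relation(center1, r1, center2, r2):
    """Classify a pair of circles into one of the three generic
    two-circle arrangements (tangencies are ignored)."""
    d = math.dist(center1, center2)
    if d > r1 + r2:
        return "disjoint"     # neither circle touches the other
    if d < abs(r1 - r2):
        return "nested"       # the smaller circle lies inside the larger
    return "overlapping"      # the boundaries cross at two points

print(relation((0, 0), 1, (3, 0), 1))  # disjoint
print(relation((0, 0), 3, (1, 0), 1))  # nested
print(relation((0, 0), 1, (1, 0), 1))  # overlapping
```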
During a session on mathematics and the arts, Susan Goldstine, a mathematician at St. Mary's College of Maryland, lectured about her 'Poincaré Blues' craft project. Named for the French mathematician Henri Poincaré, the project involved making a patchwork denim skirt from old jeans. As she described in a write-up: 'After noodling around with different patterns, I settled on the tiling of the Poincaré disk model of the hyperbolic plane by 30°-45°-90° triangles,' which was familiar to her from an illustration by the classical geometer H.S.M. Coxeter (and which also inspired the Dutch artist M.C. Escher).
Day 3
At midday, the undergraduate poster session buzzed with expositions on topics including lunar time synchronization; the math of piano tuning; loops in four-dimensional space; and a model for wildfire containment, smoke spread and their public health consequences.
During another session on mathematics and the arts, Barry Cipra, a mathematician from Minnesota, gave a talk about 'gelbes feld' ('yellow field'), a painting by the Bauhaus-trained Swiss artist Max Bill.
It may appear to be a solid canvas of color, Dr. Cipra said, but there is a faint pattern of contrasting dots, or, more precisely, squares. 'Let's look at an abstract version of Bill's abstract,' he said. 'Can you spot what Bill is up to?'
By Dr. Cipra's analysis, the artist encoded in the painting a classic 3-by-3 magic square — a square array of the numbers 1 through 9 in which every row, column and diagonal sums to 15.
Another peculiarity was that each row, column and diagonal of the 9-by-9 grid of pip positions had five pips (as on dice or dominoes).
Dr. Cipra noted, 'It looks like Bill posed and solved an original mathematics problem and hid it in a painting: Can you place the pips within each square of the 3-by-3 magic square so that there are exactly five pips along each row, column and main diagonal of the 9-by-9 subgrid?' The same question could be asked for 5-by-5 and larger magic squares of odd sizes, he said. 'But it's far from clear what the answer is going to be.'
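The 3-by-3 version of Bill's puzzle is small enough to hand to a computer. Below is a minimal backtracking sketch in Python, assuming one standard orientation of the magic square and reading 'diagonal' as both main diagonals of the 9-by-9 grid: each magic-square cell becomes a 3-by-3 block of pip positions, blocks are filled one at a time, and partial row, column and diagonal counts prune the search.

```python
from itertools import combinations

# One orientation of the essentially unique 3-by-3 magic square; the
# block at (i, j) must receive exactly MAGIC[i][j] pips.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]
TARGET = 5  # required pips per row, column and diagonal of the 9x9 grid

def solve():
    grid = [[0] * 9 for _ in range(9)]

    def prune_ok(i, j):
        # Every 9x9 line must still be able to hit exactly TARGET:
        # placed pips may not exceed it, and each unfilled block can
        # add at most 3 pips to any line it crosses.
        for r in range(3 * i, 3 * i + 3):        # rows this block touches
            got = sum(grid[r][:3 * j + 3])
            if got > TARGET or got + 3 * (2 - j) < TARGET:
                return False
        for c in range(3 * j, 3 * j + 3):        # columns this block touches
            got = sum(grid[r][c] for r in range(3 * i + 3))
            if got > TARGET or got + 3 * (2 - i) < TARGET:
                return False
        if i == j:                               # main diagonal
            got = sum(grid[t][t] for t in range(3 * i + 3))
            if got > TARGET or got + 3 * (2 - i) < TARGET:
                return False
        if i + j == 2:                           # anti-diagonal
            got = sum(grid[t][8 - t] for t in range(3 * i + 3))
            if got > TARGET or got + 3 * (2 - i) < TARGET:
                return False
        return True

    def place(k):
        if k == 9:
            return True  # exactness was enforced as each block closed its lines
        i, j = divmod(k, 3)
        spots = [(3 * i + a, 3 * j + b) for a in range(3) for b in range(3)]
        for chosen in combinations(spots, MAGIC[i][j]):
            for r, c in spots:
                grid[r][c] = 1 if (r, c) in chosen else 0
            if prune_ok(i, j) and place(k + 1):
                return True
        return False

    return grid if place(0) else None

solution = solve()
if solution:
    for row in solution:
        print(" ".join("o" if pip else "." for pip in row))
```

Dr. Cipra's reading of the painting implies at least one solution exists in the 3-by-3 case; whether the construction extends to 5-by-5 and larger odd magic squares is, as he said, far from clear.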
Dr. Goldstine found Dr. Cipra's discovery compelling. 'I am always excited when math turns up in a place where you wouldn't expect it,' she said in an email. 'I often use these surprising connections to get students who might be afraid of or bored by math to see some of its beauty.'
Day 4
The final day offered a number of public events, including a mini math festival with hands-on puzzles and games.
'Why is it math?' asked Aleksandra Upton, 7, of a geometric puzzle.
'Because we can count all the different ways that we put the shapes together,' said her mother, Karolina Sarnowska-Upton, a software engineering manager at Microsoft in Redmond, Wash.
In one public lecture, Ravi Vakil, a mathematician at Stanford and the incoming president of the American Mathematical Society, explored the simultaneously playful and profound 'mathematics of doodling.'
In another, Eugenia Cheng, a mathematician and pianist at the School of the Art Institute of Chicago, addressed 'Math, Art, Social Justice.' One of her salient messages: 'Pure mathematics is a framework for agreeing on things.' She sang some of the lecture alongside a recorded video of herself playing the piano.
And there was a world premiere of a documentary film, 'Creating Pathways,' the second in the 'Journeys of Black Mathematicians' series by the director George Csicsery. (It airs on public television stations in February.) The film's senior consultant was Johnny Houston, an emeritus professor at Elizabeth City State University in North Carolina. After the screening, Dr. Houston remarked on the timeliness of the 2025 premiere: In 1925, Elbert Frank Cox became the first African American — and first Black person in the world — to receive a Ph.D. in mathematics. Of his own journey, and that of many Black mathematicians, Dr. Houston said that with exposure, experience and opportunity, 'we can do as well as any mathematician in earning a Ph.D. and beyond.'
The last of the talks wound down that evening. By 3 a.m. the next morning, as some attendees headed to the airport, two mathematicians were just heading to bed, but not before riding the elevator down to the hotel lobby to ask reception for a late checkout.

