Latest news with #JamesWhite


USA Today
4 days ago
- Sport
- USA Today
Wisconsin announces UW Athletic Hall of Fame Class of 2025 inductees
Wisconsin announces UW Athletic Hall of Fame Class of 2025 inductees This past Thursday, Wisconsin athletic director Chris McIntosh announced the UW Athletic Hall of Fame Class of 2025. The list of inductees included notable athletes, staff, and even broadcasters. These names include Mohammed Ahmed (Track & Field and Cross Country, 2009-14), Dorcas Akinniyi Jansen (Track & Field, 2008-13), Molly Engstrom (Women's Hockey, 2001-05), Travis Frederick (Football, 2009-12), Frank Kaminsky (Men's Basketball, 2012-15), Mary Massei Landini (Softball, 2010-14), Joe Pavelski (Men's Hockey, 2004-06), Cara Walls (Women's Soccer, 2011-14), James White (Football, 2010-13), Tom Shipley (Baseball, 1970-74), Martin Smith and Matt Lepay (Broadcasting). The biggest names on the list include former National Player of the Year Frank Kaminsky, multiple-time Super Bowl winner James White, and Matt Lepay, who has served as the play-by-play announcer for Wisconsin Badgers basketball since 1988 and Wisconsin football since 1994. The class of 2025 will be officially inducted into the UW Athletic Hall of Fame during Hall of Fame weekend on September 5th and 6th. The inductees will also be honored inside Camp Randall Stadium during Wisconsin football's Week 2 game against the Middle Tennessee Blue Raiders. Contact/Follow @TheBadgersWire on X (formerly Twitter) and like our page on Facebook to follow ongoing coverage of Wisconsin Badgers news, notes and opinion
Yahoo
21-05-2025
- Business
- Yahoo
'The Models Are Getting Better, But They're Also More Likely To Be Good At Bad Stuff,' Tech Experts Sound The Alarm As AI Charges On
Morgan Stangely has estimated that generative AI revenue could surpass $1.1 trillion by 2028 In an effort to capitalize on that growth, tech companies like OpenAI, Meta, and Alphabet have been rushing their products to market Now, tech experts are warning that in doing so these companies have been sacrificing the safety and security of their products Tech experts are concerned that safety is taking a backseat to profit potential in AI development, CNBC reports. In the early days of AI, research was a priority for companies like Meta (NASDAQ:META), Alphabet (NASDAQ:GOOG, GOOGL)), and OpenAI, but the release of ChatGPT in late 2022 changed that, according to the outlet. As the groundbreaking chatbot proved hugely popular and financially lucrative, other major companies took note and began rushing their own AI platforms to market. Don't Miss: Hasbro, MGM, and Skechers trust this AI marketing firm — 'Scrolling To UBI' — Deloitte's #1 fastest-growing software company allows users to earn money on their phones. In March, Morgan Stanley estimated that generative AI revenue could surpass $1.1 trillion by 2028, making it a huge market with lots of potential for growth. And in an effort to stay competitive, many companies are now cutting corners when it comes to the rigorous safety testing of their AI models before they are released to the public, industry experts told CNBC. James White, chief technology officer at CalypsoAI, told CNBC that new AI models are sacrificing security for response quality, meaning they're less likely to reject malicious kinds of prompts. "The models are getting better, but they're also more likely to be good at bad stuff," said White. "It's easier to trick them to do bad stuff." White, whose company runs safety and security audits of popular models from Meta, Google, OpenAI among others, says that this shift is readily apparent at companies like Meta and Alphabet. Trending: Maker of the $60,000 foldable home has 3 factory buildings, 600+ houses built, and big plans to solve housing — According to insiders who spoke to CNBC, Meta has deprioritized its AI research unit, the Fundamental Artificial Intelligence Research unit, in favor of Meta GenAI. Similarly, Alphabet has lumped its research group, Google Brain, in with its DeepMind division, which leads the development of new AI products. Other experts readily agree with White, telling CNBC that as employees face increasingly shortened development timelines, they are left with the idea that they can't afford to fall behind when it comes to getting new models and products to market. Under this sort of pressure, they spend less time on security and more time on proficiency. Google's Gemini 2.5 model, which was released in March, is a perfect example of this. According to CNBC, for weeks after its release, Gemini 2.5 was missing its model card, a common tool for AI transparency that shares information about how the model works, its limitations, and any potential a memo obtained by CNBC a month before Gemini 2.5's release, Google co-founder Sergey Brin told DeepMind and Gemini staffers that "the final race to AGI is afoot." "We have all the ingredients to win this race but we are going to have to turbocharge our efforts," he said, noting that Google needed to speed up the process of AI testing and cut out the habit of "minor tweaking and overfitting" products. "We can't keep building nanny products," Brin wrote. "Our products are overrun with filters and punts of various kinds." Read Next: Invest where it hurts — and help millions heal:. Inspired by Uber and Airbnb – Deloitte's fastest-growing software company is transforming 7 billion smartphones into income-generating assets – Image: Shutterstock Up Next: Transform your trading with Benzinga Edge's one-of-a-kind market trade ideas and tools. Click now to access unique insights that can set you ahead in today's competitive market. Get the latest stock analysis from Benzinga? ALPHABET (GOOG): Free Stock Analysis Report This article 'The Models Are Getting Better, But They're Also More Likely To Be Good At Bad Stuff,' Tech Experts Sound The Alarm As AI Charges On originally appeared on © 2025 Benzinga does not provide investment advice. All rights reserved. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data


NDTV
19-05-2025
- Entertainment
- NDTV
Video: Zookeeper Avoids Bite From 1 Crocodile. Luck Didn't Favour Him Twice
A video of a zookeeper sitting astride a large crocodile, with guests gathered around the fence, has gone viral, but for all the wrong reasons. At first, it looked like an everyday interaction, one he must have had innumerable times with the crocodile to entertain the visitors. But on that day, it almost turned fatal. In the video, the man proceeds to "tease" the two crocodiles in their enclosure. The next moment, he noticed the second reptile was approaching him. He got off the crocodile, and to his horror, it attacked him. Fortunately for him, the reptile just attempted to bite him before returning to its resting position. The zookeeper, who was limping a little after the attack, hurried away from the deadly wild creatures. View this post on Instagram A post shared by James White (@itzmrpr3zz) After the clip went viral, the majority of online users called out the man for his "actions." Some also made remarks about how "messing with nature" was never a smart idea in the first place. One user said, "Deserved it. Leave the animals alone." "When people do crazy things happen," a second comment read. "He deserved it!" exclaimed one. "The animal reacted as such because the man mistakenly stepped on its hind leg," one comment read. In 2024, a 15-foot crocodile attacked a zookeeper inside an enclosure in South Africa, and he barely survived. The incident reportedly happened while the seasoned reptile specialist was entertaining dozens of guests at the Crocodile Creek theme park in Ballito, Kwa-Zulu Natal. It was a Nile crocodile, the biggest species in the world, with the strongest bite force. In a video that went viral on social media, the unidentified hunter was using a stick to probe the beast when it suddenly sprang at him. The reptile was supposed to turn its head and bite the stick. The crocodile instead wrapped its jaws around the zookeeper's waist.


CNBC
14-05-2025
- Business
- CNBC
AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say
Not long ago, Silicon Valley was where the world's leading artificial intelligence experts went to perform cutting-edge research. Meta, Google and OpenAI opened their wallets for top talent, giving researchers staff, computing power and plenty of flexibility. With the support of their employers, the researchers published high-quality academic papers, openly sharing their breakthroughs with peers in academia and at rival companies. But that era has ended. Now, experts say, AI is all about the product. Since OpenAI released ChatGPT in late 2022, the tech industry has shifted its focus to building consumer-ready AI services, in many cases prioritizing commercialization over research, AI researchers and experts in the field told CNBC. The profit potential is massive — some analysts predict $1 trillion in annual revenue by 2028. The prospective repercussions terrify the corner of the AI universe concerned about safety, industry experts said, particularly as leading players pursue artificial general intelligence, or AGI, which is technology that rivals or exceeds human intelligence. In the race to stay competitive, tech companies are taking an increasing number of shortcuts when it comes to the rigorous safety testing of their AI models before they are released to the public, industry experts told CNBC. James White, chief technology officer at cybersecurity startup CalypsoAI, said newer models are sacrificing security for quality, that is, better responses by the AI chatbots. That means they're less likely to reject malicious kinds of prompts that could cause them to reveal ways to build bombs or sensitive information that hackers could exploit, White said. "The models are getting better, but they're also more likely to be good at bad stuff," said White, whose company performs safety and security audits of popular models from Meta, Google, OpenAI and other companies. "It's easier to trick them to do bad stuff." The changes are readily apparent at Meta and Alphabet, which have deprioritized their AI research labs, experts say. At Facebook's parent company, the Fundamental Artificial Intelligence Research, or FAIR, unit has been sidelined by Meta GenAI, according to current and former employees. And at Alphabet, the research group Google Brain is now part of DeepMind, the division that leads development of AI products at the tech company. CNBC spoke with more than a dozen AI professionals in Silicon Valley who collectively tell the story of a dramatic shift in the industry away from research and toward revenue-generating products. Some are former employees at the companies with direct knowledge of what they say is the prioritization of building new AI products at the expense of research and safety checks. They say employees face intensifying development timelines, reinforcing the idea that they can't afford to fall behind when it comes to getting new models and products to market. Some of the people asked not to be named because they weren't authorized to speak publicly on the matter. When Joelle Pineau, a Meta vice president and the head of the company's FAIR division, announced in April that she would be leaving her post, many former employees said they weren't surprised. They said they viewed it as solidifying the company's move away from AI research and toward prioritizing developing practical products. "Today, as the world undergoes significant change, as the race for AI accelerates, and as Meta prepares for its next chapter, it is time to create space for others to pursue the work," Pineau wrote on LinkedIn, adding that she will formally leave the company May 30. Pineau began leading FAIR in 2023. The unit was established a decade earlier to work on difficult computer science problems typically tackled by academia. Yann LeCun, one of the godfathers of modern AI, initially oversaw the project, and instilled the research methodologies he learned from his time at the pioneering AT&T Bell Laboratories, according to several former employees at Meta. Small research teams could work on a variety of bleeding-edge projects that may or may not pan out. The shift began when Meta laid off 21,000 employees, or nearly a quarter of its workforce, starting in late 2022. CEO Mark Zuckerberg kicked off 2023 by calling it the "year of efficiency." FAIR researchers, as part of the cost-cutting measures, were directed to work more closely with product teams, several former employees said. Two months before Pineau's announcement, one of FAIR's directors, Kim Hazelwood, left the company, two people familiar with the matter said. Hazelwood helped oversee FAIR's NextSys unit, which manages computing resources for FAIR researchers. Her role was eliminated as part of Meta's plan to cut 5% of its workforce, the people said. OpenAI's 2022 launch of ChatGPT caught Meta off guard, creating a sense of urgency to pour more resources into large language models, or LLMs, that were captivating the tech industry, the people said. In 2023, Meta began heavily pushing its freely available and open-source Llama family of AI models to compete with OpenAI, Google and others. With Zuckerberg and other executives convinced that LLMs were game-changing technologies, management had less incentive to let FAIR researchers work on far-flung projects, several former employees said. That meant deprioritizing research that could be viewed as having no impact on Meta's core business, such as FAIR's previous health care-related research into using AI to improve drug therapies. Since 2024, Meta Chief Product Officer Chris Cox has been overseeing FAIR as a way to bridge the gap between research and the product-focused GenAI group, people familiar with the matter said. The GenAI unit oversees the Llama family of AI models and the Meta AI digital assistant, the two most important pillars of Meta's AI strategy. Under Cox, the GenAI unit has been siphoning more computing resources and team members from FAIR due to its elevated status at Meta, the people said. Many researchers have transferred to GenAI or left the company entirely to launch their own research-focused startups or join rivals, several of the former employees said. While Zuckerberg has some internal support for pushing the GenAI group to rapidly develop real-world products, there's also concern among some staffers that Meta is now less able to develop industry-leading breakthroughs that can be derived from experimental work, former employees said. That leaves Meta to chase its rivals. A high-profile example landed in January, when Chinese lab DeepSeek released its R1 model, catching Meta off guard. The startup claimed it was able to develop a model as capable as its American counterparts but with training at a fraction of the cost. Meta quickly implemented some of DeepSeek's innovative techniques for its Llama 4 family of AI models that were released in April, former employees said. The AI research community had a mixed reaction to the smaller versions of Llama 4, but Meta said the biggest and most powerful Llama 4 variant is still being trained. The company in April also released security and safety tools for developers to use when building apps with Meta's Llama 4 AI models. These tools help mitigate the chances of Llama 4 unintentionally leaking sensitive information or producing harmful content, Meta said. "Our commitment to FAIR remains strong," a Meta spokesperson told CNBC. "Our strategy and plans will not change as a result of recent developments." In a statement to CNBC, Pineau said she is enthusiastic about Meta's overall AI work and strategy. "There continues to be strong support for exploratory research and FAIR as a distinct organization in Meta," Pineau said. "The time was simply right for me personally to re-focus my energy before jumping into a new adventure." Meta on Thursday named FAIR co-founder Rob Fergus as Pineau's replacement. Fergus will return to the company to serve as a director at Meta and head of FAIR, according to his LinkedIn profile. He was most recently a research director at Google DeepMind. "Meta's commitment to FAIR and long term research remains unwavering," Fergus said in a LinkedIn post. "We're working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research." Google released its latest and most powerful AI model, Gemini 2.5, in March. The company described it as "our most intelligent AI model," and wrote in a March 25 blog post that its new models are "capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy." For weeks, Gemini 2.5 was missing a model card, meaning Google did not share information about how the AI model worked or its limitations and potential dangers upon its release. Model cards are a common tool for AI transparency. A Google website compares model cards to food nutrition labels: They outline "the key facts about a model in a clear, digestible format," the website says. "By making this information easy to access, model cards support responsible AI development and the adoption of robust, industry-wide standards for broad transparency and evaluation practices," the website says. Google wrote in an April 2 blog post that it evaluates its "most advanced models, such as Gemini, for potential dangerous capabilities prior to their release." Google later updated the blog to remove the words "prior to their release." Without a model card for Gemini 2.5, the public had no way of knowing which safety evaluations were conducted or whether DeepMind checked for dangerous capabilities at all. In response to CNBC's inquiry on April 2 about Gemini 2.5's missing model card, a Google spokesperson said that a "tech report with additional safety information and model cards are forthcoming." Google published an incomplete model card on April 16 and updated it on April 28, more than a month after the AI model's release, to include information about Gemini 2.5's "dangerous capability evaluations." Those assessments are important for gauging the safety of a model — whether people can use the models to learn how to build chemical or nuclear weapons or hack into important systems. These checks also determine whether a model is capable of autonomously replicating itself, which could lead to a company losing control of it. Running tests for those capabilities requires more time and resources than simple, automated safety evaluations, according to industry experts. The Financial Times in March reported that Google DeepMind CEO Demis Hassabis had installed a more rigorous vetting process for internal research papers to be published. The clampdown at Google is particularly notable because the company's "Transformers" technology gained recognition across Silicon Valley through that type of shared research. Transformers were critical to OpenAI's development of ChatGPT and the rise of generative AI. Google co-founder Sergey Brin told staffers at DeepMind and Gemini in February that competition has accelerated and "the final race to AGI is afoot," according to a memo viewed by CNBC. "We have all the ingredients to win this race but we are going to have to turbocharge our efforts," he said in the memo. Brin said in the memo that Google has to speed up the process of testing AI models, as the company needs "lots of ideas that we can test quickly." "We need real wins that scale," Brin wrote. In his memo, Brin also wrote that the company's methods have "a habit of minor tweaking and overfitting" products for evaluations and "sniping" the products at checkpoints. He said employees need to build "capable products" and to "trust our users" more. "We can't keep building nanny products," Brin wrote. "Our products are overrun with filters and punts of various kinds." A Google spokesperson told CNBC that the company has always been committed to advancing AI responsibly. "We continue to do that through the safe development and deployment of our technology, and research contributions to the broader ecosystem," the spokesperson said. The debate of product versus research is at the center of OpenAI's existence. The company was founded as a nonprofit research lab in 2015 and is now in the midst of a contentious effort to transform into a for-profit entity. That's the direction co-founder and CEO Sam Altman has been pushing toward for years. On May 5, though, OpenAI bowed to pressure from civic leaders and former employees, announcing that its nonprofit would retain control of the company even as it restructures into a public benefit corporation. Nisan Stiennon worked at OpenAI from 2018 to 2020 and was among a group of former employees urging California and Delaware not to approve OpenAI's restructuring effort. "OpenAI may one day build technology that could get us all killed," Stiennon wrote in a statement in April. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity." But even with the nonprofit maintaining control and majority ownership, OpenAI is speedily working to commercialize products as competition heats up in generative AI. And it may have rushed the rollout of its o1 reasoning model last year, according to some portions of its model card. Results of the model's "preparedness evaluations," the tests OpenAI runs to assess an AI model's dangerous capabilities and other risks, were based on earlier versions of o1. They had not been run on the final version of the model, according to its model card, which is publicly available. Johannes Heidecke, OpenAI's head of safety systems, told CNBC in an interview that the company ran its preparedness evaluations on near-final versions of the o1 model. Minor variations to the model that took place after those tests wouldn't have contributed to significant jumps in its intelligence or reasoning and thus wouldn't require additional evaluations, he said. Still, Heidecke acknowledged that OpenAI missed an opportunity to more clearly explain the difference. OpenAI's newest reasoning model, o3, released in April, seems to hallucinate more than twice as often as o1, according to the model card. When an AI model hallucinates, it produces falsehoods or illogical information. OpenAI has also been criticized for reportedly slashing safety testing times from months to days and for omitting the requirement to safety test fine-tuned models in its latest "Preparedness Framework." Heidecke said OpenAI has decreased the time needed for safety testing because the company has improved its testing effectiveness and efficiency. A company spokesperson said OpenAI has allocated more AI infrastructure and personnel to its safety testing, and has increased resources for paying experts and growing its network of external testers. In April, the company shipped GPT-4.1, one of its new models, without a safety report, as the model was not designated by OpenAI as a "frontier model," which is a term used by the tech industry to refer to a bleeding-edge, large-scale AI model. But one of those small revisions caused a big wave in April. Within days of updating its GPT-4o model, OpenAI rolled back the changes after screenshots of overly flattering responses to ChatGPT users went viral online. OpenAI said in a blog post explaining its decision that those types of responses to user inquiries "raise safety concerns — including around issues like mental health, emotional over-reliance, or risky behavior." OpenAI said in the blogpost that it opted to release the model even after some expert testers flagged that its behavior "'felt' slightly off." "In the end, we decided to launch the model due to the positive signals from the users who tried out the model. Unfortunately, this was the wrong call," OpenAI wrote. "Looking back, the qualitative assessments were hinting at something important, and we should've paid closer attention. They were picking up on a blind spot in our other evals and metrics." Metr, a company OpenAI partners with to test and evaluate its models for safety, said in a recent blog post that it was given less time to test the o3 and o4-mini models than predecessors. "Limitations in this evaluation prevent us from making robust capability assessments," Metr wrote, adding that the tests it did were "conducted in a relatively short time." Metr also wrote that it had insufficient access to data that would be important in determining the potential dangers of the two models. The company said it wasn't able to access the OpenAI models' internal reasoning, which is "likely to contain important information for interpreting our results." However, Metr said, "OpenAI shared helpful information on some of their own evaluation results." OpenAI's spokesperson said the company is piloting secure ways of sharing chains of thought for Metr's research as well as for other third-party organizations. Steven Adler, a former safety researcher at OpenAI, told CNBC that safety testing a model before it's rolled out is no longer enough to safeguard against potential dangers. "You need to be vigilant before and during training to reduce the chance of creating a very capable, misaligned model in the first place," Adler said. He warned that companies such as OpenAI are backed into a corner when they create capable but misaligned models with goals that are different from the ones they intended to build. "Unfortunately, we don't yet have strong scientific knowledge for fixing these models — just ways of papering over the behavior," Adler said.


Otago Daily Times
02-05-2025
- Sport
- Otago Daily Times
Absences can't stop University's winning ways
University of Canterbury first-five James White may be in line for a promotion to the Crusaders after Taha Kemara's injury. PHOTO: ISAAC NOKES Despite missing key players and their head coach, the University of Canterbury continues to find ways to win in the Metro rugby competition. They defeated Lincoln University 52-17 on Saturday, even with three players away on New Zealand under-20 duty and head coach Daniel O'Brien in Thailand for his sister's wedding. Lock Jake Frost also suffered a broken arm during the match and will miss most of the season. Adding to the uncertainty, captain and first-five James White could be called into the Crusaders squad following Taha Kemara's injury. Halfback and backs coach Jared Mitchell, who stepped in as head coach, said the match was tougher than the score suggested. 'We only had 39% possession and territory and managed to score 50 points, so that's always a good thing,' he said. The win also saw University reclaim the Hart-O'Reilly Trophy, contested between the two varsity teams, after losing it 31-26 to Lincoln last season. University will face Sydenham this weekend, with O'Brien back in charge – though several key players will still be sidelined. Mitchell is expecting a tough contest. 'I would expect them to be much the same as they have been. They always have a good pack and they try and rumble you around. They've had some pretty exciting outsides as well, so they can hit you from anywhere.' Elsewhere, Linwood held onto the DCL Shield after a 36-29 win over Shirley. They will defend it again this weekend against New Brighton, who are tied with University on competition points but trail on points difference. Lincoln University will aim to bounce back against Belfast, who impressed despite a 42-29 loss to Burnside. Shirley meet Burnside, HSOB face Marist Albion and Christchurch host Sumner with all matches taking place on Saturday. Metro rugby points University 15; New Brighton 15; HSOB 12; Marist Albion 10; Burnside 10; Linwood 9; Sydenham 7; Christchurch 6; Belfast 6; Lincoln Uni 5; Sumner 2; Shirley 2