
Latest news with #ICLR

AI models can't tell time or read a calendar, study reveals

Yahoo

17-05-2025

  • Science
  • Yahoo


New research has revealed another set of tasks most humans can do with ease that artificial intelligence (AI) stumbles over: reading an analogue clock or figuring out the day on which a date will fall. AI may be able to write code, generate lifelike images, create human-sounding text and even pass exams (to varying degrees of success), yet it routinely misinterprets the position of hands on everyday clocks and fails at the basic arithmetic needed for calendar dates.

Researchers revealed these unexpected flaws in a presentation at the 2025 International Conference on Learning Representations (ICLR). They also published their findings March 18 on the preprint server arXiv, so the work has not yet been peer-reviewed.

"Most people can tell the time and use calendars from an early age. Our findings highlight a significant gap in the ability of AI to carry out what are quite basic skills for people," study lead author Rohit Saxena, a researcher at the University of Edinburgh, said in a statement. "These shortfalls must be addressed if AI systems are to be successfully integrated into time-sensitive, real-world applications, such as scheduling, automation and assistive technologies."

To investigate AI's timekeeping abilities, the researchers fed a custom dataset of clock and calendar images into various multimodal large language models (MLLMs), which can process visual as well as textual information. The models used in the study include Meta's Llama 3.2-Vision, Anthropic's Claude-3.5 Sonnet, Google's Gemini 2.0 and OpenAI's GPT-4o. The results were poor: the models failed to identify the correct time from an image of a clock, or the day of the week for a sample date, more than half the time.

Related: Current AI models a 'dead end' for human-level intelligence, scientists agree

The researchers have an explanation for AI's surprisingly poor time-reading abilities. "Early systems were trained based on labelled examples. Clock reading requires something different — spatial reasoning," Saxena said. "The model has to detect overlapping hands, measure angles and navigate diverse designs like Roman numerals or stylized dials. AI recognizing that 'this is a clock' is easier than actually reading it."

Dates proved just as difficult. When given a challenge like "What day will the 153rd day of the year be?", the failure rate was similarly high: the AI systems read clocks correctly only 38.7% of the time and calendars only 26.3% of the time. This shortcoming is surprising because arithmetic is a fundamental cornerstone of computing, but as Saxena explained, AI approaches it differently. "Arithmetic is trivial for traditional computers but not for large language models. AI doesn't run math algorithms, it predicts the outputs based on patterns it sees in training data," he said. "So while it may answer arithmetic questions correctly some of the time, its reasoning isn't consistent or rule-based, and our work highlights that gap."

The project is the latest in a growing body of research highlighting the difference between the way AI "understands" and the way humans do. Models derive answers from familiar patterns and excel when there are enough examples in their training data, yet they fail when asked to generalize or use abstract reasoning. "What for us is a very simple task like reading a clock may be very hard for them, and vice versa," Saxena said.
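For contrast, the calendar question the models struggled with is trivial for a conventional program that runs an actual date algorithm rather than predicting an answer from patterns. A minimal sketch (not code from the study; the year 2025 is an assumption chosen for illustration):

```python
from datetime import date, timedelta

# Day-of-year arithmetic of the kind the tested models answered
# correctly only 26.3% of the time: what weekday is the 153rd day?
year = 2025  # assumed for illustration; any year works
day_153 = date(year, 1, 1) + timedelta(days=152)  # day 1 is Jan 1
print(day_153, day_153.strftime("%A"))  # 2025-06-02 Monday
```

The standard library performs the underlying day-count arithmetic exactly, which is what the quote above means by arithmetic being trivial for traditional computers but not for pattern-predicting language models.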
Related stories:

  • Scientists discover major differences in how humans and AI 'think' — and the implications could be significant
  • If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy
  • Researchers gave AI an 'inner monologue' and it massively improved its performance

The research also reveals the problem AI has when it's trained on limited data, in this case comparatively rare phenomena like leap years or obscure calendar calculations. Even though LLMs have plenty of examples that explain leap years as a concept, that doesn't mean they make the connections required to complete a visual task.

The research highlights both the need for more targeted examples in training data and the need to rethink how AI handles the combination of logical and spatial reasoning, especially in tasks it doesn't encounter often. Above all, it reveals one more area where placing too much trust in AI output comes at our peril. "AI is powerful, but when tasks mix perception with precise reasoning, we still need rigorous testing, fallback logic, and in many cases, a human in the loop," Saxena said.

ICON Public Limited Company (ICLR): A Bull Case Theory

Yahoo

07-05-2025

  • Business
  • Yahoo


We came across a bullish thesis on ICON Public Limited Company (ICLR) on Substack by DeepValue Capital. In this article, we will summarize the bulls' thesis on ICLR. ICON Public Limited Company (ICLR)'s shares were trading at $145 as of April 28th. ICLR's trailing and forward P/E were 15.22 and 10.20, respectively, according to Yahoo Finance.

ICON, a Contract Research Organization (CRO), is a key player in the drug development process, offering services to pharmaceutical companies that need to conduct clinical trials. Instead of managing these complex and costly processes in-house, drug companies outsource them to firms like ICON, which handles everything from patient recruitment to regulatory compliance. ICON's role is integral to bringing new medicines to market: it does not own the drugs itself, but runs the operations that make it all possible. This model aligns well with the author's investment strategy, which focuses on service businesses that offer scalability, recurring revenue, and strong returns on capital.

The company's stock has seen a dramatic decline, down over 60% since July 2024, despite a robust backlog of $25 billion that remains near all-time highs. Historically, ICON has achieved impressive returns on invested capital (ROIC) and returns on capital employed (ROCE) of around 15%. The decline in stock price presents an interesting opportunity, particularly given the growing long-term tailwinds in the pharmaceutical sector. With over $200 billion in annual drug revenue at risk from a patent cliff by 2030, pharmaceutical companies are aggressively rebuilding their drug pipelines, which requires more clinical trials. Aging populations and the demand for chronic and complex treatments further drive the need for trials, making ICON's services more essential.

Despite the recent downturn, the shift toward increased R&D spending, AI-driven drug discovery, and global diversification of trials points to a structural growth opportunity for ICON. The demand for clinical trials is set to keep expanding, and ICON doesn't rely on the success of any single drug; it simply needs more trials to be conducted, and all signs suggest this will be the case across the industry.

There are still key risks and unknowns that need to be addressed, however. These include the validity of ICON's $25 billion backlog, executive incentive alignment, and whether ICON has a competitive advantage in a crowded field. Additionally, if AI disrupts core functions or if ICON falls behind competitors, there could be negative implications. While these risks are not immediate, they warrant careful consideration over the next five years.
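For readers following the valuation math, the quoted multiples imply the following per-share earnings (a back-of-the-envelope sketch; the implied EPS values are derived from the $145 price and the Yahoo Finance P/E ratios cited above, not reported figures):

```python
# Back-of-the-envelope check: EPS implied by the quoted price and P/E ratios.
# P/E = price / EPS, so implied EPS = price / P/E.
price = 145.00          # ICLR share price as of April 28th
trailing_pe = 15.22     # trailing P/E per Yahoo Finance
forward_pe = 10.20      # forward P/E per Yahoo Finance

implied_trailing_eps = price / trailing_pe   # ~ $9.53
implied_forward_eps = price / forward_pe     # ~ $14.22
print(f"Implied trailing EPS: ${implied_trailing_eps:.2f}")
print(f"Implied forward EPS:  ${implied_forward_eps:.2f}")
```

The lower forward multiple simply reflects that analysts expect earnings per share to rise from the trailing level implied here.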

Icon PLC (ICLR) Q1 2025 Earnings Call Highlights: Navigating Challenges with Strategic Initiatives

Yahoo

02-05-2025

  • Business
  • Yahoo


Q1 2025 financial highlights:

  • Revenue: $2 billion, a year-on-year decrease of 4.3%, or 3.2% on a constant currency basis.
  • Adjusted gross margin: 28.2%, compared to 29.9% in Q1 2024.
  • Adjusted SG&A expense: $173.4 million, or 8.7% of revenue.
  • Adjusted EBITDA: $390.7 million, or 19.5% of revenue.
  • Adjusted operating income: $353.6 million, a margin of 17.7%.
  • Adjusted net income: $258.3 million, a margin of 12.9%.
  • Adjusted earnings per share: $3.19, a decrease of 8.1% year over year.
  • US GAAP net income: $154.2 million, or $1.90 per diluted share, a decrease of 15.6% from the prior-year period.
  • Cash from operating activities: $268.2 million.
  • Free cash flow: $239.3 million.
  • Net debt: $2.9 billion, with a leverage ratio of 1.7 times net debt to adjusted EBITDA.
  • Share repurchases: $250 million at an average price of $184 per share.

Release date: May 1, 2025. For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Icon PLC (NASDAQ:ICLR) reported better-than-expected adjusted EBITDA margin performance of 19.5% for Q1 2025, demonstrating effective cost control and resource alignment. The company has seen a significant increase in overall opportunities in the biotech segment, with a modest uptick in win rates for projects that reached decision. Icon continues to generate momentum from recent strategic partnerships, particularly in the mid-sized pharma segment. The company has launched two new AI-enabled tools, iSubmit and SmartDraft, to enhance operational efficiencies and accelerate clinical trials. Icon executed $250 million in share repurchases during Q1 2025, reflecting a commitment to returning capital to shareholders.

On the negative side, Icon experienced elevated levels of cancellations in Q1 2025, similar to Q4 2024, which weighed on bookings. The company reported a year-on-year revenue decrease of 4.3% in Q1 2025, with a constant currency decline of 3.2%. Adjusted gross margin for Q1 2025 was 28.2%, down from 29.9% in the same period last year. The book-to-bill ratio decreased to 1.01 times in Q1 2025, reflecting challenges in converting opportunities into contracts. Icon updated its full-year guidance to account for increased cancellations and the removal of next-generation COVID trials, impacting expected revenue.

Q: Can you provide more details on the elevated levels of cancellations and whether there were any significant outliers?
A: Stephen Cutler, CEO: There were no unusual outliers or specific customer groups responsible for the elevated cancellations. The cancellations were broadly distributed across our customer segments, reflecting our business portfolio. We anticipate this elevated level of cancellations to continue throughout the year.

Q: How do you view the current environment regarding pharma reprioritizations and biotech funding, and what are your expectations for cancellations and book-to-bill ratios going forward?
A: Stephen Cutler, CEO: The environment remains challenging, with elevated cancellations expected to persist. We don't foresee a significant increase in cancellations beyond current levels, but they will likely remain elevated for the rest of the year. Our book-to-bill ratio was 1.01 in Q1, and we expect similar conditions to continue.

Q: Are you seeing any changes in the RFP dynamics, particularly with smaller customers being more price-sensitive?
A: Stephen Cutler, CEO: In the biotech space, we face more competition, which affects our strike rate. While there is an uptick in RFP opportunities, some are canceled before reaching a decision. We don't risk-adjust our backlog but recognize different dynamics in the biotech segment.

Q: Can you elaborate on the impact of tariffs on ICON and any potential effects on your business?
A: Stephen Cutler, CEO: We don't anticipate a significant impact from tariffs on our services business. Some components, like lab kits, could be affected, but it's a minor part of our overall revenue. Regarding pharma tariffs, it's too early to speculate on the impact, but we are monitoring the situation closely.

Q: How are you managing cost savings, and what is the outlook for margins this year?
A: Nigel Clerkin, CFO: We are focusing on cost control, particularly non-labor costs, and leveraging automation to preserve margins. Our Q1 adjusted EBITDA margin was 19.5%, and we expect a gradual increase throughout the year, exiting around 21% by year-end.

This article first appeared on GuruFocus.
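As a quick arithmetic cross-check on the margins quoted above (a minimal sketch using only the rounded figures from the call summary, so the computed percentages are approximate):

```python
# Recompute the quoted Q1 2025 margins from revenue and the dollar figures above.
revenue = 2_000.0  # $ millions (the "$2 billion" quoted)

figures = {
    "Adjusted EBITDA": 390.7,            # quoted margin: 19.5%
    "Adjusted operating income": 353.6,  # quoted margin: 17.7%
    "Adjusted net income": 258.3,        # quoted margin: 12.9%
}

for name, dollars in figures.items():
    margin = 100.0 * dollars / revenue
    print(f"{name}: {margin:.1f}% of revenue")
# Prints 19.5%, 17.7% (17.68 rounded) and 12.9%, matching the margins
# reported on the call.
```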

NTT Scientists Present Breakthrough Research on AI Deep Learning at ICLR 2025

Business Wire

24-04-2025

  • Business
  • Business Wire


SUNNYVALE, Calif. & TOKYO--(BUSINESS WIRE)-- NTT Research, Inc. and NTT R&D, divisions of NTT (TYO:9432), announced that their scientists will present nine papers at the International Conference on Learning Representations (ICLR) 2025, a top-tier machine learning conference dedicated to the advancement of representation learning, particularly deep learning. Five of the accepted presentations result from research co-authored by scientists within NTT Research's recently announced Physics of Artificial Intelligence (PAI) Group, led by Group Head Hidenori Tanaka. Collectively, this research breaks new ground in understanding how AI models learn, grow and overcome uncertainty, all supporting NTT's commitment to pioneering transformative, socially resilient, sustainable and responsible AI.

"The Physics of AI Group and its collaborators share the excitement for AI's potential expressed by the public, the technology industry and the academic community," said Tanaka. "As the research accepted at ICLR 2025 shows, however, important questions remain about how AI fundamentally learns and how generative AI fundamentally creates outputs. Neural networks play a vital role in the 'deep learning' of AI, and improving our understanding of them is vital to ultimately foster the development of sustainable, reliable and trustworthy AI technologies."

One paper, "Forking Paths in Neural Text Generation," addresses the issue of estimating uncertainty in Large Language Models (LLMs) for proper evaluation and user safety. Whereas prior approaches to uncertainty estimation focus on the final answer in generated text, ignoring potentially impactful intermediate steps, this research tested the hypothesis that key forking tokens exist, such that re-sampling the system at those specific tokens, but not others, leads to very different outcomes. The researchers discovered many examples of forking tokens, including punctuation marks, suggesting that LLMs are often just a single token away from generating a different output. The paper was co-authored by Eric Bigelow, Ari Holtzman, Hidenori Tanaka and Tomer Ullman.

Four other papers co-authored by members of the NTT Research PAI Group will be presented at the show:

  • "In-Context Learning of Representations": Researchers explore the open-ended nature of LLMs (for example, their ability to learn in context) and whether models alter their pretraining semantics to adopt alternative, context-specific ones. Findings indicate that scaling context size can flexibly reorganize model representations, possibly unlocking novel capabilities. Authors: Core Francisco Park, Andrew Lee, Ekdeep Singh Lubana, Yongyi Yang, Maya Okawa, Kento Nishi, Martin Wattenberg and Hidenori Tanaka.
  • "Competition Dynamics Shape Algorithmic Phases of In-Context Learning": Researchers propose a synthetic sequence modeling task that involves learning to simulate a finite mixture of Markov chains (a toy illustration of such a mixture appears below). They argue that In-Context Learning (ICL) is best thought of as a mixture of different algorithms, each with its own peculiarities, rather than a monolithic capability, which also implies that general claims about ICL that hold universally across all settings may be infeasible. Authors: Core Francisco Park, Ekdeep Singh Lubana, Itamar Pres and Hidenori Tanaka.
  • "Dynamics of Concept Learning and Compositional Generalization": Researchers abstract prior work's compositional generalization problem by introducing a structured identity mapping (SIM) task, where a model is trained to learn the identity mapping on a Gaussian mixture with structurally organized centroids. Overall, the work establishes the SIM task as a meaningful theoretical abstraction of concept learning dynamics in modern generative models. Authors: Yongyi Yang, Core Francisco Park, Ekdeep Singh Lubana, Maya Okawa, Wei Hu and Hidenori Tanaka.
  • "A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language": Recognizing the need to establish the causal factors underlying the phenomenon of "emergence" in a neural network, researchers seek inspiration from the study of emergent properties in other fields and propose a phenomenological definition of the concept in the context of neural networks. Authors: Ekdeep Singh Lubana, Kyogo Kawaguchi, Robert P. Dick and Hidenori Tanaka.

In addition, four papers authored or co-authored by NTT R&D scientists based in Japan will be presented at the show:

  • "Test-time Adaptation for Regression by Subspace Alignment". Authors: Kazuki Adachi, Shin'ya Yamaguchi, Atsutoshi Kumagai and Tomoki Hamagami.
  • "Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching: With Insights into Other Permutation Search Methods". Authors: Akira Ito, Masanori Yamada and Atsutoshi Kumagai.
  • "Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation". Authors: Hiroshi Takahashi, Tomoharu Iwata, Atsutoshi Kumagai, Yuuki Yamanaka and Tomoya Yamashita.
  • "Wavelet-based Positional Representation for Long Context". Authors: Yui Oka, Taku Hasegawa, Kyosuke Nishida and Kuniko Saito.

ICLR 2025, the thirteenth International Conference on Learning Representations, is a globally esteemed conference on deep learning being held in Singapore April 24-28, 2025. Last year at ICLR 2024, NTT Research Physics & Informatics (PHI) Lab scientists co-authored two key papers: one on analyzing in-context learning dynamics with random binary sequences, revealing sharp transitions in LLM behaviors, and another on how fine-tuning affects model capabilities, showing minimal changes.

The NTT Research Physics of Artificial Intelligence Group is dedicated to advancing our understanding of deep neural networks and the psychology of AI. Its three-pronged mission includes: 1) deepening our understanding of the mechanisms of AI, all the better to integrate ethics from within rather than through a patchwork of fine-tuning (i.e., enforced learning); 2) borrowing from experimental physics to create systematically controllable spaces of AI and observe the learning and prediction behaviors of AI step by step; and 3) healing the breach of trust between AI and human operators through improved operations and data control. Formally established in April 2025 by members of the PHI Lab, the group began as a collaboration between NTT Research and the Harvard University Center for Brain Science, having formerly been known as the Harvard University CBS-NTT Fellowship Program.
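To make the "finite mixture of Markov chains" setup mentioned above concrete, here is a minimal, hypothetical data-generation sketch. It is not the authors' code, and the number of chains, state count, and sequence length are arbitrary choices for illustration; the idea is simply that each training sequence comes from one chain drawn at random, so a sequence model must infer in context which chain it is seeing.

```python
import numpy as np

# Hypothetical sketch: generate sequences from a finite mixture of Markov chains.
# The parameters (3 chains, 5 states, length-64 sequences) are illustrative only.
rng = np.random.default_rng(0)
n_chains, n_states, seq_len = 3, 5, 64

# Each chain gets its own random row-stochastic transition matrix.
transition_matrices = rng.dirichlet(np.ones(n_states), size=(n_chains, n_states))

def sample_sequence():
    """Pick a chain uniformly at random, then roll out a state sequence from it."""
    chain = rng.integers(n_chains)
    T = transition_matrices[chain]
    states = [rng.integers(n_states)]
    for _ in range(seq_len - 1):
        states.append(rng.choice(n_states, p=T[states[-1]]))
    return chain, states

# A sequence model trained only on the token streams (not the chain labels)
# must implicitly identify, in context, which chain generated the prefix.
chain_id, seq = sample_sequence()
print(chain_id, seq[:10])
```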
About NTT Research

NTT Research opened its offices in July 2019 in Silicon Valley to conduct basic research and advance technologies as a foundational model for developing high-impact innovation across NTT Group's global business. Currently, four groups are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, the Medical and Health Informatics (MEI) Lab, and the Physics of Artificial Intelligence (PAI) Group. The organization aims to advance science in four areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; 3) medical and health informatics; and 4) artificial intelligence. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D investment of thirty percent of its profits.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. ©2025 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

AI's Next Frontier: What the ICLR 2025 Conference Reveals About Machine Learning

Time Business News

23-04-2025

  • Business
  • Time Business News


The International Conference on Learning Representations (ICLR) 2025, held in Singapore from April 24-28, 2025, has sparked excitement in the tech world, showcasing groundbreaking advancements in artificial intelligence (AI) and machine learning. Researchers gathered to present innovations in diffusion models, contrastive learning, and AI applications in healthcare and law. But what does this mean for the average person, and how will these developments shape our future?

One major highlight was the focus on efficient diffusion-based sampling, a technique that powers AI tools like image and text generators. These models are becoming faster and more accurate, enabling everything from hyper-realistic digital art to personalized medical diagnostics. For instance, AI-driven health tools presented at ICLR could analyze patient data in real time, offering doctors precise treatment recommendations. Similarly, advancements in large language models promise smarter, more context-aware chatbots that could revolutionize customer service or education.

Another key topic was AI safety, a growing concern as AI systems become more autonomous. Researchers discussed methods to align AI models with human values, ensuring they don't produce harmful or biased outputs. This is critical as companies like xAI push boundaries with tools like Grok, which aim to accelerate human discovery while maintaining ethical standards.

The conference also highlighted AI's role in niche fields. For example, legal tech innovations showcased how AI can streamline contract analysis, saving time and reducing errors. Meanwhile, reinforcement learning advancements could improve autonomous systems, from self-driving cars to robotic manufacturing.

Why does this matter? These developments signal a future where AI is more integrated into daily life, making tasks faster, safer, and more personalized. However, they also raise questions about accessibility and regulation. Will these tools be affordable for all, or will they widen the digital divide? And how do we balance innovation with privacy concerns?

ICLR 2025 underscores that AI is no longer a distant dream; it is here and evolving rapidly. For businesses, educators, and individuals, staying informed is crucial to harnessing its potential responsibly. For more insights on AI's impact, check out Explained Now, where we break down complex tech trends in simple terms.
