Latest news with #LLMs


Business Insider
7 hours ago
- Business
- Business Insider
How IBM Is Helping AI Models Improve Their Social Skills
Large language models (LLMs) are beginning to move past simple question-and-answer tasks and into more complex scenarios that involve reasoning, strategy, and even social skills. Indeed, a new open-source platform called TextArena was created to train AI for situations that require these skills. Developed by researchers from Singapore's Agency for Science, Technology and Research (A*STAR) and tech giant IBM (IBM), the platform allows LLMs to compete in more than two dozen text-based games, similar to classic computer puzzles and board games from the 1980s, that test abilities like negotiation, planning, and problem solving. Interestingly, TextArena acts as a training ground where AI agents can learn through interaction rather than memorization. By using reinforcement learning, the platform measures performance and updates leaderboards in real time. This approach simulates real-world conditions more effectively than fixed exams and provides a nearly endless stream of data for improving models. Furthermore, since its launch in January, TextArena has grown quickly, hosting more than 100,000 matches for 216 competing models. The project has also drawn considerable interest from the open-source community, which has helped expand its game library and refine the user experience. This enthusiasm underscores the appeal of teaching AI to handle social interactions, which could make it more practical in everyday life. By focusing on cooperation, theory of mind, and negotiation, TextArena is helping LLMs develop the softer skills needed to interact better with humans.
Is IBM a Buy, Sell, or Hold?
Turning to Wall Street, analysts have a Moderate Buy consensus rating on IBM stock based on seven Buys, four Holds, and one Sell assigned in the past three months, as indicated by the graphic below. Furthermore, the average IBM price target of $297.33 per share implies 13% upside potential.
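The article says TextArena measures performance and updates leaderboards in real time but does not specify the rating scheme. A generic Elo-style update is one common way competitive game leaderboards are maintained; the sketch below is purely illustrative (the function name, the k-factor, and the formula are standard Elo, not TextArena's confirmed implementation):

```python
def elo_update(r_winner, r_loser, k=32):
    """One Elo-style rating update after a match: the winner gains
    more points the less favoured it was going into the game."""
    # Expected score of the eventual winner under the Elo model
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Two equally rated models: the winner gains exactly half of k
print(elo_update(1000, 1000))  # -> (1016.0, 984.0)
```

Applied after every match, updates like this converge toward a live ranking of the competing models, which is the behaviour the article describes.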


The Hindu
8 hours ago
- Science
- The Hindu
How the fair use clause is being applied to generative AI
Access to diverse kinds of materials is vital for building and fine-tuning Large Language Models (LLMs). These materials could include those available in the public domain (for example, works where the copyright has expired or works whose copyright was relinquished by the authors) as well as those under copyright protection. Apart from gathering data through sources such as Common Crawl, AI firms often scan copies of books and other materials and convert them into machine-readable text from which data can be extracted for training purposes. Whether the use of copyrighted materials for training purposes, without permission from the copyright holders, constitutes copyright infringement is a challenging legal question, and litigation is under way across the globe on this issue. One of the key factors that could determine the outcome of these cases is how the courts view whether the activities concerned fall within the scope of any of the exceptions to infringement under the relevant copyright laws. For litigation in the U.S., this means one of the primary determinants of the outcomes will be the application of the 'fair use' doctrine under U.S. copyright law. Two trial courts in the U.S. have recently delivered summary judgments on fair use, and these may be considered the beginning of adjudication on this complex issue.
Factors considered
The U.S. courts generally take four factors into consideration while assessing whether a use constitutes 'fair use'.
They are: (i) the purpose and character of the use, and the enquiry in this regard includes the extent to which the use can be considered 'transformative'; (ii) the nature of the copyrighted materials concerned (the fair use clause is more likely to apply to works that are factual in character than to works of fiction or fantasy); (iii) the amount of the portion taken, which includes both qualitative and quantitative analyses; and (iv) the effect of the use on the potential market for, or value of, the plaintiff's works. The questions of transformative use and the impact on the potential market/value of the plaintiff's works have historically played critical roles in determining the final outcomes in fair use litigation.
The Anthropic case
Anthropic trained the LLMs underlying Claude, one of its popular GenAI agents, using books and other texts from a library it compiled. The library consisted of works obtained from different sources, including books purchased and converted to digital form as well as books acquired from potentially illegal sources. The copyright infringement action was initiated by the plaintiffs because their works were used for training without any authorisation. Based on the application of the four factors above to the specific facts of the case, particularly the highly transformative nature of the use of the copyrighted materials, the court, in Andrea Bartz et al. versus Anthropic PBC, granted summary judgment in favour of Anthropic on the question of whether the training of the AI was fair use. The court was of the view that the print-to-digital format conversion of the books Anthropic had purchased constituted fair use. However, it denied Anthropic's request that the downloading and storing of copies sourced from illegal sources also be treated as fair use.
It remains to be seen how the infringement analysis and remedies will be handed down by the court with regard to those activities.
The Meta judgment
In Richard Kadrey et al. versus Meta Platforms, Inc., 13 authors had sued Meta for downloading books from illegal sources and using them to train Llama, Meta's LLM. Based on the specific facts and the specific averments made by the parties with regard to the four fair use factors, the court granted summary judgment in favour of Meta. The court was of the view that the use of the works for training purposes was highly transformative in character, and that in such instances the plaintiffs would have to bring in substantial evidence on the fourth factor (whether such use has affected the market value of the plaintiffs' works) to avoid summary judgment against them. As the plaintiffs in the instant case could not produce any meaningful evidence, summary judgment was granted in favour of Meta with regard to the copying and use of the plaintiffs' books as training data. However, the court will continue the proceedings against Meta with respect to the plaintiffs' argument that Meta also unlawfully distributed their works during the torrenting process.
Comparative analysis
One of the common dimensions of both summary judgments is the recognition of the highly transformative character of the use of copyrighted works in training LLMs. This substantially influenced the fair use analysis in both cases. There is alignment on the third factor as well, as both courts considered the extent of the materials used reasonable in the broader context of training. On the fourth factor, however, one can see substantial differences. Judge Chhabria, who authored the Meta summary judgment, rejected the plaintiffs' argument that Meta harmed their potential licensing market, primarily on the ground that it is not a market the plaintiffs are legally entitled to monopolise.
However, he also observed that in many cases, AI training on copyrighted materials may become illegal due to 'market dilution'. According to him, the rapid generation of countless works that compete with the originals, even if those works aren't themselves infringing, can result in market dilution through indirect substitution. But the inability of the plaintiffs in the case to produce sufficient empirical evidence in this regard illustrates the difficulty of proving this kind of harm. On the other hand, Judge Alsup, who authored the judgment in the Anthropic case, categorically rejected the market dilution argument and observed that the '[a]uthors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.' It is also worth highlighting that the judge in the Anthropic case considered downloading or building a permanent library of infringing works a different use that warrants separate analysis and a different outcome. The Meta summary judgment, by contrast, did not take that approach and focused only on the ultimate purpose, that is, the training of models.
Other AI cases
Earlier this year, in Thomson Reuters versus Ross Intelligence, the court reached the conclusion that the fair use exception was not applicable. However, this was not a GenAI case. The AI in question merely retrieved and shared judicial opinions based on queries from users. As this was not considered a transformative use by the court, and as the AI in question competed directly with the works of the plaintiff, the court concluded that the use of those materials without permission was not fair use.
Broader implications
Both the summary judgments in the Anthropic and Meta cases recognise the highly transformative character of the use of materials in the GenAI training context, thereby favouring a finding of fair use with respect to the use of copyrighted materials for training purposes. But both judgments also reflect many of the anxieties of copyright holders. Whether the sourcing of materials from potentially illegal sources can negate a claim of fair use is an issue on which scholarly opinion is divided, and more discussion is warranted. It is also evident that the kind of evidence copyright holders bring in to illustrate the negative impact on their market will play a prominent role in determining the final outcome in many cases. This also implies that copyright infringement issues are far from settled in the AI training area; depending on the specific facts and evidence in each of these cases, the outcomes can be very different.
Arul George Scaria is a professor at the National Law School of India University (NLSIU)

Finextra
16 hours ago
- Business
- Finextra
Agentic AI in FX: From Automation to Autonomy: By Chandresh Pande
Abstract
Imagine a super-smart FX trader on your desk—one who continuously scans global markets, detects macroeconomic shifts, adapts execution strategies in real time, learns from fluctuations, manages risk independently, and spots arbitrage before others even notice. Now picture a back-office specialist who predicts settlement failures, flags reconciliation breaks, updates static data across systems, and ensures regulatory compliance—all without human intervention. Moreover, these guys work 24/7, never complain, take no coffee breaks, and do not even ask for a raise! Too good to be true, right? Think again. What once would have sounded like science fiction is now rapidly becoming reality—powered by Agentic AI: intelligent, autonomous systems that perceive, reason, and act with purpose. Rooted in cognitive science and robotics, agentic systems evolved from early research prototypes into adaptive, autonomous problem-solvers. Advances in reinforcement learning and large language models (LLMs) have enabled agents to make decisions, learn from outcomes, and operate independently in complex domains. With real-time data and scalable computing, finance is emerging as their next frontier. Unlike traditional AI, which passively processes data, agentic systems thrive on feedback loops—observing, deciding, and evolving—making them ideal for the dynamic, high-stakes world of FX trading and operations. With capabilities like autonomous strategy selection, self-directed risk management, and real-time market adaptation, Agentic AI has the potential to transform how institutions engage with FX markets. But as adoption grows, so do questions around oversight, explainability, and trust. Is this the dawn of truly intelligent automation in FX—or just another technological mirage? One thing is clear: the agent is already on the floor.
What is Agentic AI?
Agentic AI refers to artificial intelligence systems that operate with autonomy, intentionality and adaptability – much like a human agent [1]. These systems don't just follow pre-defined rules or passively respond to inputs; they set goals, make context-aware decisions, take actions and learn from the outcomes in a continuous feedback loop. In contrast to traditional models that execute fixed workflows, agentic systems can dynamically change course based on new information – enabling them to thrive in uncertain, fast-changing and fragmented markets like FX, where milliseconds matter. A major strength of agentic AI lies in multi-agent systems (MAS) [1], where multiple specialized agents interact and coordinate across different roles – an approach that can be particularly useful in financial systems. In FX environments, MAS can simulate trading, pricing, compliance, risk and settlement via different agents working together towards shared goals. This can facilitate the simulation and execution of complex workflows like price discovery, order routing and trade matching, while also optimizing confirmations, exception handling and settlement workflows in the back office. The distributed nature of MAS improves resiliency and processing speed, and enables adaptive responses to market conditions.
Unlocking Agentic AI in FX
The FX market, with its 24x5 trading cycle, deep liquidity, and high volatility, is ideally suited for the integration of agentic AI. These intelligent systems are capable of autonomous decision-making and continuous adaptation, making them valuable in navigating the rapid changes driven by macroeconomic events, geopolitical shifts, and client behaviours. This section highlights a few use cases in which agentic AI can deliver efficiency, reduce risk, and provide strategic advantages across front-, middle-, and back-office FX functions. These use cases are illustrative, not exhaustive; as agentic AI matures, countless other applications will emerge across the FX trade lifecycle.
1. Pre-trade market intelligence and signal generation
Agentic AI systems can autonomously scan and synthesize macroeconomic data, real-time liquidity trends, news feeds, central bank statements, and social media signals. This allows them to generate actionable trade signals or predictive macro views. Additionally, agentic AI can serve as a latency arbitrage hunter by scanning multiple FX trading venues (ECNs, dark pools, etc.) for price discrepancies, where millisecond differences in timing and pricing matter.
Example: Prior to an ECB rate decision, an agent might detect tone shifts in ECB speeches and correlate them with historical market reactions. It then feeds these directional insights into the execution algorithm.
2. Autonomous trade execution
These agentic AI systems can use self-evolving execution algorithms that factor in liquidity, order book behaviour, spreads, and volatility in real time. Unlike static rule-based systems, they dynamically self-tune execution strategies based on objectives such as slippage minimization or speed.
Example: An agent detecting a sudden liquidity drop may reroute the order flow or delay execution to prevent slippage, mimicking human trader decision-making but at machine scale and speed.
3. Liquidity provision and market making
Agentic AI systems can operate as autonomous market makers. By monitoring market volatility, client flow, and inventory risks, they can autonomously adjust bid-ask spreads and quote levels.
Example: During geopolitically induced volatility, the agent may momentarily widen spreads, then narrow them post-event to restore competitiveness while managing inventory risk.
4. Client behaviour modelling and personalization
These agents can analyze granular client data—such as trading patterns, profitability, and preferences—to segment clients and deliver hyper-personalized strategies. They learn from historical data to forecast behaviour and optimize pricing models or service tiers.
Example: A spike in hedging frequency by a client may prompt an alert for the relationship manager to review service models or offer targeted product solutions.
5. Real-time risk monitoring and response
Agentic AI systems can enhance FX risk management by identifying evolving counterparty risks, large directional exposures, or breaches of risk thresholds. They can recommend or auto-execute mitigation actions such as portfolio rebalancing or hedge placement.
Example: If an agent detects concentrated exposure due to correlated client flow, it may autonomously initiate offsetting trades or flag risk teams for pre-emptive action.
6. Settlement failure prediction and intervention
Agentic AI can analyze post-trade data across the entire settlement chain to predict which trades are at risk of failing. These agents can use patterns from past settlement failures, counterparty behaviour, payment system data, and real-time exceptions to proactively intervene. They can recommend corrective actions—such as reallocation of funding, client follow-ups, or adjustments to trade instructions—to prevent bottlenecks or penalties.
Example: An autonomous 'settlement operations agent' may detect a high probability of failure in a CLS-linked FX leg due to delayed funding from a counterparty, triggering an alert or rebooking logic to avoid settlement disruption.
7. Regulatory reporting and compliance monitoring
Agentic AI can assist in real-time regulatory compliance by ensuring reporting accuracy across multiple jurisdictions. These agents automatically validate trade lifecycle data, flag anomalies, and ensure alignment with EMIR, MiFID II, and Dodd-Frank.
Example: An AI agent may detect discrepancies in trade timestamps or record-keeping and auto-trigger remediation workflows.
Challenges
While the potential of agentic AI in financial markets is immense, its safe and effective adoption is fraught with challenges.
Below are three critical hurdles that must be addressed before Agentic AI can take the driver's seat in the FX world.
1. Autonomy vs. accountability
A core feature of agentic AI is its ability to act autonomously. However, in a highly regulated domain like FX, accountability is paramount. If an autonomous agent executes a trade that results in significant losses or violates regulations, who bears responsibility — the quant who designed the system, the trader who deployed it, or the institution itself? This lack of clarity over responsibility raises serious legal and ethical concerns. Without robust governance structures, auditability, and real-time supervisory frameworks, widespread deployment will remain cautious [2].
2. Black-box behaviour
Many agentic AI systems — particularly those leveraging reinforcement learning — behave as 'black boxes,' learning optimal strategies from past data without offering a clear rationale for individual decisions. In FX, where compliance and transparency are critical, this opacity is problematic. Regulators increasingly demand explainability and audit trails to justify market behaviour. Without transparent decision-making, agentic AI risks introducing systemic vulnerabilities, especially in high-stakes scenarios such as volatility spikes [3].
3. Safe adaptability in volatile markets [3]
Adaptability is one of agentic AI's greatest strengths — but in volatile FX markets, unchecked adaptability can backfire. Constant real-time adjustments to noisy signals can lead to overreactions, unintended feedback loops, or even market destabilization (as seen in past flash crashes [4]). Rigorous guardrails, staged deployment environments, and stress-testing of agentic behaviours are essential to ensure that 'smart' does not become 'reckless.'
The Cutting Edge
Leading investment banks are beginning to explore Agentic AI frameworks in controlled environments.
JP Morgan [5] is leveraging its Athena platform to deploy agent-based systems for risk analytics and trade booking, demonstrating early-stage automation of front-office workflows. Goldman Sachs [5], through its Marquee platform, is employing agents to assist in options pricing and the generation of structured product ideas. Morgan Stanley [5] has introduced AskResearchGPT, an agentic model designed to recommend the next best action for trade decisions and to assist in alpha generation, blending research automation with trading insight. Citi [5] is utilizing agentic AI in FX for both market making and smart order routing within the fragmented FX markets, showcasing a move towards autonomous execution and adaptive flow management. Two Sigma's [1] Venn platform combines market analytics with reinforcement learning agents to dynamically calibrate investment strategies based on changing market conditions. JP Morgan's [1] LOXM system integrates agentic AI to analyze market data, news, and social media, uncovering real-time investment opportunities. These initiatives signal a growing institutional appetite to harness agentic AI not just for efficiency, but for a strategic edge — driving a shift from static automation to autonomous, intelligent financial systems.
Conclusion
Agentic AI marks a significant leap in the evolution of financial automation—shifting from passive tools to autonomous, goal-oriented digital agents capable of executing complex decisions across the FX trade lifecycle. As illustrated at the beginning of this article through the imagined trader and operations personas, these agents are no longer confined to generating insights; they actively trade, reconcile, hedge, and adapt—continuously learning from their environment to meet strategic objectives. The use cases across the front, middle, and back office are compelling: autonomous execution, arbitrage detection, proactive risk mitigation, dynamic margin management, and intelligent exception handling.
Each demonstrates how agentic AI can reshape FX workflows with speed, precision, and round-the-clock responsiveness. Yet these possibilities come with real challenges. From autonomy vs. accountability to the opacity of black-box decision-making and the risk of unintended feedback loops in live trading environments, the path to widespread adoption must be trodden with caution and clarity. Agentic systems must be deployed with human oversight, robust guardrails, and explainability built in from day one. We also see that leading investment banks and financial firms are exploring the possibilities, but these efforts are still in their early stages. Some are piloting "trading copilots" that work alongside human dealers; others are experimenting with agentic systems for post-trade workflows. These early initiatives signal both interest and caution—a recognition that agentic systems can bring scale and intelligence, but only when aligned with enterprise goals, operational resilience, and regulatory trust. Ultimately, the future of FX will not be human or machine—but human and machine, working in tandem. Agentic AI won't replace traders or operations teams but will act as tireless digital teammates, amplifying capabilities, enhancing decision-making, and navigating the increasingly complex FX landscape with intelligence, autonomy, and precision.
References
1. 'Building Agentic AI Systems' by Anjanava Biswas & Wrick Talukdar, Packt Publishing.
2. Gasser, U., & Almeida, V. A. (2017). 'A Layered Model for AI Governance.' Harvard Journal of Law & Technology.
3. European Securities and Markets Authority (2022). 'Final Report: Guidelines on AI in Financial Markets.'
4. Kirilenko, A. A., Kyle, A. S., Samadi, M., & Tuzun, T. (2017). 'The Flash Crash: The Impact of High Frequency Trading on an Electronic Market.' Journal of Finance.
5.
6.
7.
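The observe-decide-act feedback loop that runs through the article's market-making use case can be sketched in a few lines. Everything here is an illustrative toy under stated assumptions (a fixed base spread, a simple volatility trigger on recent mid prices); it implies no bank's or vendor's actual logic:

```python
import statistics

def run_market_maker(mid_prices, base_spread=0.0002, vol_threshold=0.001, window=3):
    """Toy observe-decide-act loop for an FX market-making agent.

    Observe: the last `window` mid prices.
    Decide:  widen the quoted spread 3x when short-term volatility
             (population stdev of the recent mids) exceeds a threshold.
    Act:     publish a two-way quote around the current mid.
    """
    quotes = []
    for t, mid in enumerate(mid_prices):
        recent = mid_prices[max(0, t - window + 1): t + 1]        # observe
        vol = statistics.pstdev(recent) if len(recent) > 1 else 0.0
        spread = base_spread * (3 if vol > vol_threshold else 1)  # decide
        quotes.append((round(mid - spread / 2, 6),                # act
                       round(mid + spread / 2, 6)))
    return quotes

# Calm market, then a sharp move: the agent widens its quote on the move.
quotes = run_market_maker([1.1000, 1.1001, 1.1000, 1.1050, 1.1100])
```

A production agent would replace the volatility trigger with a learned policy and add the "learn" leg of the loop (updating its parameters from realised fills), but the observe-decide-act skeleton is the same.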


Observer
2 days ago
- Science
- Observer
The hidden cost of AI: Cognitive lethargy
The world stands at a technological crossroads, with an unprecedented proliferation of large language models (LLMs), such as ChatGPT, drastically reshaping the very essence of human existence. These tools have permeated people's lives, homes, workplaces, and play spaces. Over 378 million people worldwide are estimated to be active users of AI, including LLMs like ChatGPT, Gemini, Claude, and Copilot, and the use of LLMs surged globally in 2025, with hundreds of millions of people relying on these tools daily for academic, personal, and professional purposes. Considering the rapid growth and development of AI, it becomes crucial to understand the cognitive implications of the widespread use of LLMs in educational and informational contexts. Today, there is substantial evidence-based research showing that, although AI enhances the accessibility and personalization of education, the prolonged and frequent use of AI tools for information reduces people's critical thinking capacity. The integration of LLMs into learning ecosystems thus presents a complex duality. Recently, research from the Massachusetts Institute of Technology (MIT) has raised concerns for the education sector, educators, and learners. The study suggests that the increased use of AI systems may raise serious concerns regarding human intellectual development and autonomy. LLMs provide users with singular responses that can inadvertently discourage lateral thinking and independent judgment. Instead of becoming seekers of knowledge, we are leaning towards passive consumption of AI-generated content. In the long run, this will lead to superficial engagement, weakened critical thinking, weaker long-term memory formation, and a shallower understanding of material.
This will lead to a decline in decision-making skills and create a false perception that learning is effortless and simplified, decreasing student motivation and reducing interest in individual research. The increased use of ChatGPT will affect student learning, performance, perception of learning, and higher-order thinking. The MIT research suggests that while AI tools can enhance productivity, they may also promote a form of metacognitive laziness. The fundamental principle of research inquiry is compromised as students rely heavily on digital tools for information gathering. Students may also fall prey to echo chambers, where users of ChatGPT become trapped in self-reinforcing information bubbles that filter out contradictory evidence. Echo chambers can undermine the foundation of academic discourse and debate. Furthermore, the sophisticated functioning of the algorithms leaves users unaware of information gaps in their research, degrading the standard of scholarly outcomes. These findings have many implications for various stakeholders in the education sector and beyond. The role of the professor is evolving from a source of knowledge to a facilitator and guide. Curricula must adapt to digital literacy and changing patterns of learning, with a focus on security and safety. These technological developments call for monitoring of academic integrity. Students must adopt a balanced approach to protect their higher-order thinking. Artificial intelligence is here to stay and will impact all sectors, creating new career opportunities while displacing traditional employment pathways. Evidence suggests that, if left unchecked, the world risks turning learners into mere editors of AI-generated text rather than actual creators and thinkers.
While advanced technologies such as artificial intelligence have immense potential and offer unprecedented opportunities for enhancing human learning and access to vast volumes of information, they also have the power to impact cognitive development, long-term memory building, and intellectual independence; they therefore demand caution and critical consideration.