Latest news with #marketmanipulation


Bloomberg
6 days ago
- Business
- Bloomberg
Freepoint Commodities Ex-Analyst Claims Illegal Retaliation
A former senior analyst at Freepoint Commodities LLC claims he was fired from the energy-trading firm for resisting pressure to cooperate with executives in illegal insider trading, market manipulation and trade-theft schemes. The analyst, Andrew Martin, sued on May 14 in Manhattan federal court, alleging wrongful termination and retaliation by Freepoint, where he had worked for a decade. Martin alleges he was fired to prevent him from reporting the violations during a planned visit by the Federal Bureau of Investigation to Freepoint's Stamford, Connecticut, headquarters.


Argaam
26-05-2025
- Business
- Argaam
CMA refers suspects to prosecutors over National Building IPO misconduct
The Capital Market Authority (CMA) referred several individuals to the Public Prosecution for allegedly manipulating the initial public offering (IPO) of National Building and Marketing Co. on the Nomu-Parallel Market. The suspects are accused of carrying out transactions intended to influence the coverage of the offering and to falsely meet the eligibility criteria for transitioning to the main market. The alleged actions were aimed at creating a misleading impression about the security and the company's compliance with the liquidity requirements for listing.

The CMA said the suspects may have violated Article 49 of the Capital Market Law and Article 2 of the Market Conduct Regulations. The regulator reiterated its warning to all market participants that fraudulent, deceptive, and manipulative conduct constitutes a breach of capital market rules and may result in legal action and penalties.

The CMA affirmed its commitment to holding violators accountable by monitoring market activity and using its powers to protect investors and ensure fairness, transparency, and efficiency in the Saudi capital market. The General Secretariat of the Committees for the Resolution of Securities Disputes will disclose the names of convicted individuals on its website once final rulings are issued.
Yahoo
18-05-2025
- Business
- Yahoo
Trading bots are evolving: What happens when AI cheats the market?
Malevolent trading practices aren't new. Struggles against insider trading, as well as different forms of market manipulation, represent a long-running battle for regulators. In recent years, however, experts have been warning of new threats to our financial systems. Developments in AI mean that automated trading bots are not only smarter but also more independent. While basic algorithms respond to programmed commands, new bots are able to learn from experience, quickly synthesise vast amounts of information, and act autonomously when making trades.

According to academics, one risk scenario involves collaboration between AI bots. Just imagine: hundreds of AI-driven social media profiles begin to pop up online, weaving narratives about certain companies. The information spread isn't necessarily fake, but may just be the amplification of existing news. In response, real social media users start to react, highlighting the bots' chosen message. As the market is tipped by the crafted narrative, one investor's robo-advisor rakes in profits, having coordinated with the gossiping bots. Other investors, who didn't have the inside information, lose out by badly timing the market.

The problem is that the investor profiting may not even be aware of the scheme, which means charges of market manipulation may not stick even if authorities can see that a trader has benefitted from distortive practices.

Alessio Azzutti, assistant professor in law & technology (FinTech) at the University of Glasgow, told Euronews that the above scenario is still a hypothesis, as there isn't enough evidence to prove it's happening. Even so, he explains that similar, less sophisticated schemes are taking place, particularly in 'crypto asset markets and decentralised finance markets'.

'Malicious actors… can be very active on social media platforms and messaging platforms such as Telegram, where they may encourage members to invest their money in DeFi or in a given crypto asset, to suit themselves,' Azzutti explained. 'We can observe the direct activity of human malicious actors but also those who deploy AI bots.'

He added that the agents spreading misinformation may not necessarily be very sophisticated, but they still have the power to 'pollute chats through fake news to mislead retail investors'. 'And so the question is, if a layman, if a youngster on his own in his home office is able to achieve these types of manipulations, what are the limits for the bigger players to achieve the same effect, in even more sophisticated markets?'

The way that market information now spreads online, in a widespread, rapid, and uncoordinated fashion, is also fostering different types of trading. Retail investors are more likely to follow crazes rather than relying on their own analysis, which can destabilise the market and potentially be exploited by AI bots. The widely cited GameStop saga is a good example of herd trading, when users on a Reddit forum decided to buy up stock in the video game company en masse. Big hedge funds were betting that the price would fall and subsequently lost out when it skyrocketed. Many experts say this wasn't a case of collusion, as no official agreement was created.
A spokesperson from ESMA, the European Securities and Markets Authority, told Euronews that the potential for AI bots to manipulate markets and profit off the movements is "a realistic concern", although they stressed that they don't have "specific information or statistics on this already happening".

"These risks are further intensified by the role of social media, which can act as a rapid transmission channel for false or misleading narratives that influence market dynamics. A key issue is the degree of human control over these systems, as traditional oversight mechanisms may be insufficient," said the spokesperson. ESMA highlighted that it was "actively monitoring" AI developments.

One challenge for regulators is that collaboration between AI agents can't be easily traced. 'They're not sending emails, they're not meeting with each other. They just learn over time the best strategy and so the traditional way to detect collusion doesn't work with AI,' Itay Goldstein, professor of finance and economics at the Wharton School of the University of Pennsylvania, told Euronews. 'Regulation has to step up and find new strategies to deal with that,' he argued, adding that there is a lack of reliable data on exactly how traders are using AI.

Filippo Annunziata, professor of financial markets and banking legislation at Bocconi University, told Euronews that the current EU rules 'shouldn't be revised', referring to the Market Abuse Regulation (MAR) and the Markets in Financial Instruments Directive II (MiFID II). Even so, he argued that 'supervisors need to be equipped with more sophisticated tools for identifying possible market manipulation'. He added: 'I even suggest that we ask people who develop AI tools for trading on markets and so on to include circuit breakers in these AI tools. This would force it to stop even before the risk of manipulation occurs.'

In terms of the current legal framework, there's also the issue of responsibility when an AI agent acts in a malicious way, independent of human intent. This is especially relevant in the case of so-called black-box trading, where a bot executes trades without revealing its inner workings. To tackle this, some experts believe that AI should be designed to be more transparent, so that regulators can understand the rationale behind its decisions. Another idea is to create new liability rules, so that those responsible for deploying an AI system could be held liable for market manipulation even in cases where they didn't intend to mislead investors.

"It's a bit like the tortoise and the hare," said Annunziata. "Supervisors tend to be tortoises, but manipulators that use algorithms are hares, and it's difficult to catch up with them."
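The circuit-breaker idea Annunziata describes can be pictured with a short sketch. The Python below is purely illustrative: it assumes a hypothetical guard that tracks a bot's recent cancel-to-order ratio (one rough proxy for layering or spoofing) and halts trading when it crosses a threshold. The class name, thresholds and heuristic are assumptions made for the sketch, not any real trading API or regulatory standard.

```python
from collections import deque


class ManipulationCircuitBreaker:
    """Halts an automated strategy when its recent order flow starts to
    resemble an abusive pattern (here, a very high cancel-to-order ratio,
    one rough proxy for layering or spoofing). Thresholds are illustrative."""

    def __init__(self, window=1000, min_sample=100, max_cancel_ratio=0.9):
        self.events = deque(maxlen=window)   # recent 'order' / 'cancel' events
        self.min_sample = min_sample         # don't judge very small samples
        self.max_cancel_ratio = max_cancel_ratio
        self.halted = False

    def record(self, event: str) -> None:
        """Call with 'order' or 'cancel' every time the bot sends a message."""
        self.events.append(event)
        if self.halted or len(self.events) < self.min_sample:
            return
        orders = sum(1 for e in self.events if e == "order")
        cancels = len(self.events) - orders
        if orders == 0 or cancels / orders > self.max_cancel_ratio:
            self.halted = True   # trip the breaker; stay halted until reviewed

    def allow_trading(self) -> bool:
        return not self.halted


# A bot that cancels nearly everything it places trips the breaker quickly.
breaker = ManipulationCircuitBreaker()
for i in range(1000):
    if not breaker.allow_trading():
        print(f"strategy halted after {i} order/cancel pairs")
        break
    breaker.record("order")
    breaker.record("cancel")
```

In this toy run the strategy is stopped after a few dozen messages. A real guard would draw on richer signals than a single ratio, but the point is the same: the brake sits inside the tool, before any order reaches the market.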


Japan Times
07-05-2025
- Business
- Japan Times
SMBC Nikko hires former Nomura and Citi top executives in rebuild
SMBC Nikko Securities hired a pair of top executives from rival firms as the brokerage rebuilds its talent bench following a market manipulation scandal. The unit of Sumitomo Mitsui Financial Group hired former Nomura Holdings senior executive Susumu Usui as co-head of equity and former Citigroup executive Keita Matsumoto as head of financial markets, according to a statement by the firm. Their appointments are effective June 1, the firm said.

SMBC Nikko is seeking to strengthen its business following the rigging scandal more than two years ago, which led to the loss of several equity executives. It suffered a setback recently, posting a net loss for the quarter ended March 31.

Usui left Nomura after spending more than a quarter-century at Japan's biggest brokerage. He was most recently based in Hong Kong, co-leading global execution services as a member of the firm's senior management lineup announced last year. His previous roles included overseeing trading services in Japan, according to his LinkedIn profile. Matsumoto was head of institutional sales at Citigroup's Tokyo investment banking subsidiary and left earlier this year. At SMBC Nikko, he will replace Nobuaki Nakamura as head of financial markets.

With the new appointments, Kazuhiko Sawanobori moves from his current role as co-head of equity to become an advisor, while Nakamura retains his role as co-head of the global markets unit, the firm said.