Trading bots are evolving: What happens when AI cheats the market?


Malevolent trading practices aren’t new. Struggles against insider trading, as well as different forms of market manipulation, represent a long-running battle for regulators.

In recent years, however, experts have been warning of new threats to our financial systems. Developments in AI mean that automated trading bots are not only smarter but also more independent. While basic algorithms respond to programmed commands, newer bots are able to learn from experience, quickly synthesise vast amounts of information, and act autonomously when making trades.

According to academics, one risk scenario involves collaboration between AI bots. Just imagine: hundreds of AI-driven social media profiles begin to pop up online, weaving narratives about certain companies. The information spread isn’t necessarily fake, but may just be the amplification of existing news. In response, real social media users start to react, highlighting the bots’ chosen message.

As the market is tipped by the crafted narrative, one investor’s robo-advisor rakes in profits, having coordinated with the gossiping bots. Other investors, who didn’t have the inside information, lose out by badly timing the market. The problem is that the profiting investor may not even be aware of the scheme, meaning charges of market manipulation may not stick, even if authorities can see that a trader has benefited from distortive practices.

Social platforms are changing trading

Alessio Azzutti, assistant professor in law & technology (FinTech) at the University of Glasgow, told Euronews that the above scenario is still a hypothesis, as there isn’t yet enough evidence to prove it’s happening. Even so, he explains that similar, less sophisticated schemes are taking place, particularly in “crypto asset markets and decentralised finance markets”.

“Malicious actors… can be very active on social media platforms and messaging platforms such as Telegram, where they may encourage members to invest their money in DeFi or in a given crypto asset, to suit themselves,” Azzutti explained.

“We can observe the direct activity of human malicious actors but also those who deploy AI bots.”

He added that the agents spreading misinformation may not necessarily be very sophisticated, but they still have the power to “pollute chats through fake news to mislead retail investors”.

“And so the question is, if a layman, if a youngster on his own in his home office is able to achieve these types of manipulations, what are the limits for the bigger players to achieve the same effect, in even more sophisticated markets?”

The way that market information now spreads online, rapidly, widely, and without coordination, is also fostering different types of trading. Retail investors are more likely to follow crazes than to rely on their own analysis, which can destabilise the market and potentially be exploited by AI bots.

The widely cited GameStop saga is a good example of herd trading: users on a Reddit forum decided to buy up stock in the video game company en masse. Big hedge funds had bet that the price would fall, and lost out when it skyrocketed. Many experts say this wasn’t a case of collusion, as no formal agreement was made.

A spokesperson for ESMA, the European Securities and Markets Authority, told Euronews that the potential for AI bots to manipulate markets and profit from the resulting movements is “a realistic concern”, although they stressed that they don’t have “specific information or statistics on this already happening”.

“These risks are further intensified by the role of social media, which can act as a rapid transmission channel for false or misleading narratives that influence market dynamics. A key issue is the degree of human control over these systems, as traditional oversight mechanisms may be insufficient,” said the spokesperson.

ESMA highlighted that it was “actively monitoring” AI developments.

Is regulation ready?

One challenge for regulators is that collaboration between AI agents can’t be easily traced.

“They’re not sending emails, they’re not meeting with each other. They just learn over time the best strategy, and so the traditional way to detect collusion doesn’t work with AI,” Itay Goldstein, professor of finance and economics at the Wharton School of the University of Pennsylvania, told Euronews.

“Regulation has to step up and find new strategies to deal with that,” he argued, adding that there is a lack of reliable data on exactly how traders are using AI.
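To make Goldstein’s point concrete: researchers studying algorithmic pricing have shown that simple learning agents can converge on coordinated behaviour without ever communicating. Below is a minimal, hypothetical Python sketch of that dynamic: two independent Q-learning agents in a toy repeated pricing game. The price grid, demand function, and parameters are illustrative assumptions, not code from any firm or study quoted here.

```python
# Hypothetical sketch: two independent Q-learning agents in a repeated
# pricing game can drift toward a coordinated, supra-competitive outcome
# without exchanging a single message.
import random

PRICES = [1, 2, 3, 4, 5]       # toy price levels; 1 = competitive price
EPISODES = 50_000
ALPHA, GAMMA = 0.1, 0.95       # learning rate and discount factor

def profit(own: int, rival: int) -> float:
    """Toy demand: undercutting wins the whole market, ties split it."""
    if own < rival:
        return own * 1.0
    if own == rival:
        return own * 0.5
    return 0.0

# Each agent's state is simply the rival's last observed price.
q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / EPISODES)        # decaying exploration
    acts = []
    for i in range(2):
        state = last[1 - i]
        if random.random() < eps:
            acts.append(random.choice(PRICES))  # explore
        else:
            acts.append(max(q[i][state], key=q[i][state].get))  # exploit
    for i in range(2):
        state, nxt = last[1 - i], acts[1 - i]
        r = profit(acts[i], acts[1 - i])
        best_next = max(q[i][nxt].values())
        # Standard Q-learning update from each agent's own reward only.
        q[i][state][acts[i]] += ALPHA * (r + GAMMA * best_next - q[i][state][acts[i]])
    last = acts

print("final prices:", last)   # often settles above the competitive price of 1
```

Because neither agent exchanges a message, there are no emails or chat logs for a supervisor to subpoena; any coordination that emerges exists only in the learned behaviour.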

Filippo Annunziata, professor of financial markets and banking legislation at Bocconi University, told Euronews that the current EU rules “shouldn’t be revised”, referring to the Regulation on Market Abuse (MAR) and the Markets in Financial Instruments Directive II (MiFID II).

Even so, he argued that “supervisors need to be equipped with more sophisticated tools for identifying possible market manipulation”.

He added: “I even suggest that we ask people who develop AI tools for trading on markets and so on to include circuit breakers in these AI tools. This would force it to stop even before the risk of manipulation occurs.”
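Annunziata doesn’t specify what such a circuit breaker would look like in practice. A minimal sketch, assuming hypothetical thresholds on a bot’s own order rate and price impact, might wrap every order in a check like this:

```python
# Hypothetical sketch of a built-in circuit breaker for a trading bot.
# All thresholds, names, and logic here are illustrative assumptions,
# not a regulatory or industry standard.
from collections import deque
from dataclasses import dataclass, field
import time

@dataclass
class CircuitBreaker:
    max_orders_per_min: int = 60      # cap on the bot's own order rate
    max_price_impact: float = 0.02    # halt if own activity moves price > 2%
    tripped: bool = False
    _orders: deque = field(default_factory=deque)

    def allow(self, ref_price: float, last_price: float) -> bool:
        """Return True if the bot may keep trading; trip the breaker otherwise."""
        now = time.time()
        self._orders.append(now)
        # Keep only the last 60 seconds of order timestamps.
        while self._orders and now - self._orders[0] > 60:
            self._orders.popleft()
        impact = abs(last_price - ref_price) / ref_price
        if len(self._orders) > self.max_orders_per_min or impact > self.max_price_impact:
            self.tripped = True       # stop before the pattern becomes distortive
        return not self.tripped

breaker = CircuitBreaker()
if not breaker.allow(ref_price=100.0, last_price=100.5):
    print("breaker tripped: halting and flagging for human review")
```

The thresholds here are placeholders; in practice they would come from a venue’s or regulator’s rulebook rather than the developer’s own judgment.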

In terms of the current legal framework, there’s also the issue of responsibility when an AI agent acts maliciously, independently of human intent.

This is especially relevant in the case of so-called black box trading, where a bot executes trades without revealing its inner workings. To tackle this, some experts believe that AI should be designed to be more transparent, so that regulators can understand the rationale behind its decisions.
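What that transparency might look like is still an open question. One hypothetical form is an append-only audit trail, with the bot recording, for every order, the signals that drove the decision. The fields and format below are illustrative, not a regulatory standard:

```python
# Hypothetical sketch of a machine-readable audit trail for a trading bot;
# field names and values are illustrative assumptions.
import json
import time

def log_decision(order: dict, signals: dict, path: str = "audit.jsonl") -> None:
    """Append one rationale record per trade decision to an append-only log."""
    record = {
        "ts": time.time(),
        "order": order,      # what the bot did
        "signals": signals,  # the inputs that drove the decision
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")   # auditable, line-delimited JSON

log_decision(
    order={"symbol": "XYZ", "side": "buy", "qty": 100},
    signals={"sentiment_score": 0.82, "momentum": 0.4},
)
```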

Another idea is to create new liability rules, so that the actors who deploy AI could be held responsible for market manipulation, even in cases where they didn’t intend to mislead investors.

“It’s a bit like the tortoise and the hare,” said Annunziata.

“Supervisors tend to be tortoises, but manipulators that use algorithms are hares, and it’s difficult to catch up with them.”


