AI in Capital Markets: IOSCO’s latest report signals greater scrutiny ahead
Artificial intelligence is no longer just an emerging trend in capital markets—it’s becoming embedded across trading, risk management, compliance, and operations. Recognising the speed of this transformation, the International Organization of Securities Commissions (IOSCO) released a consultation report in March 2025, titled “Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges.”
This global report explores how AI is being used across financial markets, the risks it introduces, and what regulators might do next. For futures commission merchants (FCMs), proprietary trading firms, central counterparties (CCPs) and exchanges, the findings point to both opportunities and growing expectations for oversight.
Why IOSCO Is Looking at AI Now
IOSCO aims to build a shared understanding among global securities regulators about AI’s use in financial markets—especially its implications for market integrity, investor protection, and financial stability. The report is part of a two-phase effort:
- Phase one: map current AI use cases and risks (the focus of this consultation report).
- Phase two: explore whether regulators need new tools, guidance, or rules to address these risks.
With AI adoption accelerating—especially with new technologies like large language models and generative AI—IOSCO is asking whether existing regulations are enough, or if new standards are needed to govern AI in capital markets.
Key AI Use Cases for Derivatives Market Participants
IOSCO’s survey found that nearly half of market participants have already deployed or are piloting AI systems. While AI is being used across many areas, several use cases stand out for firms active in exchange-traded derivatives:
✅ Algorithmic Trading & Strategy Development: Firms are using AI and machine learning to power signal generation, market pattern recognition, trade execution strategies, and post-trade analysis. Proprietary trading desks in particular are integrating AI across the trading lifecycle—from pre-trade market impact analysis to optimising execution algorithms.
Some exchanges are also adopting AI. For example, Nasdaq has launched an AI-powered dynamic order type that uses reinforcement learning to adjust order holding times in real time, aiming to improve fill rates while reducing adverse price moves.
✅ Risk Management & Analytics: AI is being deployed to enhance internal risk monitoring, stress testing, and margin optimisation. Firms are leveraging machine learning to identify emerging risk patterns, simulate market stress scenarios, and improve anomaly detection in trading activity. Recent advances, such as generative AI, are also being explored for automating internal reporting, data extraction, and risk documentation.
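To make "anomaly detection in trading activity" concrete, here is a minimal sketch of one classical approach: flagging trades whose size deviates sharply from a trailing window. The function name, window size, and threshold are illustrative assumptions, not anything prescribed by the IOSCO report; production systems typically use far richer machine-learning models.

```python
import statistics

def flag_anomalies(values, window=20, threshold=3.0):
    """Flag values whose z-score against the trailing window exceeds the threshold.

    Illustrative only: window and threshold are arbitrary example parameters.
    """
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 2:
            flags.append(False)  # not enough history to score yet
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        flags.append(stdev > 0 and abs(v - mean) / stdev > threshold)
    return flags

# A run of routine trade sizes followed by one outsized print.
sizes = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 500]
print(flag_anomalies(sizes))  # only the final 500-lot is flagged
```

Even this toy version shows why governance matters: the choice of window and threshold directly determines what gets flagged, so those parameters need to be documented and validated like any other model input.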
✅ Surveillance, Compliance & AML: AI plays an increasing role in surveillance systems to detect market manipulation, insider trading, and fraud. It’s also widely used for anti-money laundering (AML) monitoring and countering the financing of terrorism (CFT), scanning transactions to flag suspicious activity faster and more accurately than manual methods.
✅ Post-Trade Operations & Clearing: For CCPs and exchanges, AI is supporting transaction processing, trade validation, and settlement optimisation. Some clearinghouses are using AI to predict settlement failures and automate trade reporting. These tools can improve operational efficiency and help reduce errors in critical post-trade workflows.
Operational Risks & Challenges Identified by IOSCO
While AI brings clear benefits, IOSCO highlights several risks that firms—and regulators—need to manage:
🔍 Model Governance & Validation: AI models, particularly those using deep learning, can be complex “black boxes.” IOSCO warns that without strong governance, it’s hard to know whether an AI-driven decision (e.g., a flagged trade or a margin call) is reliable. Firms need clear validation, testing, and documentation processes to govern AI models—especially those impacting trading and risk.
🔍 Explainability & Data Quality: Regulators are concerned that AI outputs can’t always be explained or audited. This lack of explainability is a problem in regulated environments, where decisions must be justifiable. IOSCO also flags risks of data bias—if AI is trained on incomplete or skewed data, it could embed unfairness or errors in risk models and trading algorithms.
🔍 Human Oversight & Accountability: IOSCO stresses the need for “human-in-the-loop” oversight to avoid blind reliance on AI outputs. Firms need controls to ensure humans can challenge or override AI decisions when necessary. There’s also a growing need for AI expertise in risk and compliance teams to properly supervise these technologies.
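One common pattern for "human-in-the-loop" oversight is a confidence gate: the model acts autonomously only above a set confidence level, and everything else is escalated to a human reviewer. The sketch below is a simplified illustration; the `Alert` structure, `route_alert` function, and 0.9 threshold are assumptions for the example, not a design from the IOSCO report.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    trade_id: str
    label: str        # model's classification, e.g. "suspicious"
    confidence: float  # model's self-reported confidence, 0.0-1.0

def route_alert(alert: Alert, threshold: float = 0.9) -> str:
    """Auto-action only high-confidence alerts; escalate the rest for human review."""
    if alert.confidence >= threshold:
        return "auto-flag"
    return "human-review"

print(route_alert(Alert("T-1", "suspicious", 0.97)))  # auto-flag
print(route_alert(Alert("T-2", "suspicious", 0.55)))  # human-review
```

The gate itself is trivial; the governance work lies in choosing and defending the threshold, logging every routing decision for auditability, and ensuring reviewers can override the model's label rather than rubber-stamp it.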
🔍 Third-Party & Concentration Risks: Many firms are sourcing AI tools from external vendors, raising outsourcing and concentration risks. IOSCO notes that reliance on a small number of AI providers or platforms could create systemic vulnerabilities—especially if an outage or cyberattack affects multiple market participants at once.
Regulatory Outlook: What Might Come Next?
The report signals that regulators worldwide are actively assessing how to respond to AI adoption. Some jurisdictions are applying existing rules (like those for algorithmic trading or outsourcing) to AI activities. Others are considering whether bespoke AI regulations are needed—especially around governance, explainability, and accountability.
IOSCO’s next step is to evaluate whether new recommendations or tools should be developed for its member regulators. Meanwhile, firms are encouraged to align with emerging best practices for AI governance, documentation, oversight, and transparency.
Why This Matters for Risk & Operations Leaders
The IOSCO consultation underscores that AI is no longer optional or niche in capital markets—it’s a growing reality across trading, clearing, and risk functions. For firms in derivatives markets, the report points to both opportunity and rising regulatory expectations:
- Stay ahead of regulatory scrutiny by strengthening AI model governance and documentation today.
- Ensure explainability and auditability of AI systems used in trading, risk, and compliance workflows.
- Assess third-party AI vendor risks and plan for vendor failure scenarios.
- Prepare talent and oversight structures to supervise AI responsibly.
For risk managers, operations leaders, and executives at FCMs, proprietary trading firms, CCPs and exchanges, this report is a timely call to action: AI adoption is advancing, and so is global regulatory focus. Firms that proactively address AI risks and governance will be better positioned as oversight evolves.
Get in touch to find out more about the latest AI developments at Cumulus9.