Why AI Should Be a Risk Radar, Not a Crystal Ball
For as long as I have worked in markets, people have been looking for a crystal ball. When they hear “AI in trading”, many imagine exactly that: a machine that finally tells them whether prices go up or down tomorrow.
That fantasy is one of the most dangerous things you can bring into a live market.
In my own work, and in the way I teach at VERAXIS Global Business School, AI is not treated as a fortune teller. It is closer to a risk radar: something that helps you notice what is changing around your positions before your P&L delivers the bad news.
Prediction obsession versus risk awareness
Most retail traders start with a prediction question:
“Is this going to rally or dump?”
The more time you spend in professional environments, the more that question fades into the background. The focus shifts toward things like:
- What regime are we in right now: trend, chop, squeeze, or outright stress?
- How does my portfolio behave across several different scenarios, not just the one I hope will happen?
- What is my real downside if volatility spikes or liquidity dries up?
AI becomes genuinely useful when you point it at those questions. It can help you map out where your risk is clustered, where your assumptions are fragile, and where you might be too confident in a single narrative.
Used that way, AI is less exciting in the short term, but far more valuable in the long run.
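To make the scenario question concrete, here is a minimal sketch of the kind of check I mean: stressing the same small book under a few hand-picked regimes instead of betting everything on one forecast. The asset names, position sizes, and shock values are invented for illustration, not a recommendation.

```python
# A minimal sketch, not a production tool: stress a toy two-asset book
# under a few hand-picked scenarios. All numbers are made up.

positions = {"equity_index": 100_000, "gold": 40_000}  # notional in USD

# Each scenario is a set of assumed percentage moves per asset.
scenarios = {
    "quiet_trend":      {"equity_index": 0.02,  "gold": 0.00},
    "risk_off_spike":   {"equity_index": -0.08, "gold": 0.03},
    "liquidity_stress": {"equity_index": -0.15, "gold": -0.05},
}

for name, shocks in scenarios.items():
    pnl = sum(size * shocks[asset] for asset, size in positions.items())
    print(f"{name:>18}: simulated P&L {pnl:+,.0f} USD")
```

Even a toy exercise like this forces you to look at the outcomes you were not hoping for, which is exactly where prediction-obsessed thinking tends to go blind.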
What AI is actually good at in markets
When people talk about “AI trading systems”, they often jump straight to entry and exit signals. In practice, some of the most powerful uses of AI sit in the background, quietly scanning the environment.
An AI system can:
- sift through a large amount of cross-asset data and highlight patterns that would take a human much longer to notice;
- monitor changes in volatility structure and warn you when the market is no longer behaving like last month;
- watch correlations and flag situations where assets that usually move together are suddenly breaking apart, or where unrelated markets start moving in sync;
- keep an eye on liquidity, depth, and slippage risk in a more systematic way than most individuals ever do manually.
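As one concrete illustration of the correlation point, here is a minimal sketch. It assumes you already have daily returns in a pandas DataFrame with one column per asset; the column names, window lengths, and threshold are placeholders you would tune to your own market.

```python
# A minimal sketch: flag days where the short-term correlation between two
# assets drifts far from its longer-term level -- the "assets that usually
# move together are breaking apart" situation described above.
import numpy as np
import pandas as pd

def correlation_break_alerts(returns: pd.DataFrame,
                             a: str, b: str,
                             short_window: int = 20,
                             long_window: int = 120,
                             threshold: float = 0.4) -> pd.Series:
    """Boolean Series marking days where short-term correlation deviates
    from long-term correlation by more than `threshold`."""
    short_corr = returns[a].rolling(short_window).corr(returns[b])
    long_corr = returns[a].rolling(long_window).corr(returns[b])
    return (short_corr - long_corr).abs() > threshold

# Example with synthetic data, just to show the shape of the output.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(500, 2)), columns=["asset_x", "asset_y"])
alerts = correlation_break_alerts(data, "asset_x", "asset_y")
print(alerts.tail())
```

A real system would scan many pairs and regimes at once, but the principle is the same: the machine watches the environment continuously, and you decide what to do when it raises its hand.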
None of this guarantees that your next trade will be a winner. What it does is improve the odds that you will not be surprised by risks that were visible in the data but invisible to your intuition.
The limits that never go away
Because AI is so powerful in some areas, it is easy to forget its limits. Those limits matter more than the model architecture or the buzzwords attached to it.
AI cannot decide how much you are willing to lose.
It cannot set your maximum drawdown for you.
It cannot stop you from ignoring your own rules.
Position sizing, leverage choices, and exit discipline are human responsibilities. A model can show you scenarios where your exposure is dangerous. It can tell you that a certain combination of trades creates a very fat tail. But it cannot force you to walk away from that exposure.
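To illustrate the fat-tail point, here is a small sketch of the kind of number a model can put in front of you: a historical expected shortfall for a combined position. The weights and the simulated return series are placeholders, and nothing in the code decides whether that tail is acceptable; that part remains yours.

```python
# A minimal sketch: estimate the average loss in the worst 5% of days for a
# combined position. The weights and returns are placeholders; the model
# reports the tail, but the decision to carry it (or not) stays with you.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily returns for two instruments, deliberately fat-tailed.
returns = rng.standard_t(df=3, size=(2_000, 2)) * 0.01
weights = np.array([0.7, 0.3])  # assumed portfolio weights

portfolio_returns = returns @ weights
worst_5pct = np.quantile(portfolio_returns, 0.05)
expected_shortfall = portfolio_returns[portfolio_returns <= worst_5pct].mean()

print(f"5% historical VaR:       {worst_5pct:.2%}")
print(f"Expected shortfall (5%): {expected_shortfall:.2%}")
```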
Nor can AI turn a fundamentally reckless decision into a professional one. If your process is built on impulse, revenge trading, or the hope of “making it back in one shot”, wrapping that in sophisticated code does not fix the core problem. It just makes the story sound smarter.
How I think about human–AI collaboration
The most productive way I have found to think about AI in trading and investing is to imagine it as another colleague at the desk.
That colleague is very good at scanning data, never gets tired, and does not care about your ego. It will happily tell you that your favorite position is the one adding the most tail risk to your portfolio. It will highlight that your exposure looks fine in a quiet regime but becomes fragile under stress.
What it will not do is reach across and hit the sell button for you. It will not say “no” when you decide to double down after a bad day. It will not protect you from ignoring everything it shows you.
In the classroom and in practice, the most interesting conversations happen exactly at that boundary: the point where the model has done its job, and the decision is still yours.
A suggestion for anyone learning to use AI in markets
If you are trying to bring AI into your trading or investment process, start by changing the question you ask of it.
Instead of “How can this model predict the next move?”, try:
“How can this model help me see my risk more honestly than I do on my own?”
That shift alone changes the way you design your tools, the way you read their output, and the way you react when the market does something uncomfortable.
AI can be a very sharp risk radar. It can help you notice smoke before you see flames. But it will never replace the need for a clear framework, realistic sizing, and the willingness to accept responsibility for your own decisions.
Those parts are still human work.
Disclaimer:
This article is for educational and informational purposes only. It does not constitute investment advice, trading recommendations, or an offer to buy or sell any financial instrument. All views expressed here are personal and should not be interpreted as the official position of any institution. You should make your own decisions or consult a qualified professional before acting on any financial information.
