On October 10th, 2025, the cryptocurrency markets experienced a seismic dislocation. In a matter of minutes, a liquidation cascade wiped out billions in open interest, leaving standard trading algorithms paralyzed. It wasn’t just a price drop; it was a structural failure of predictive models. Strategies that had printed money for months suddenly faced a market state that did not exist in their training data.
This event served as a brutal reminder: in the high-stakes world of quantitative finance, reliance on Machine Learning (ML) has become absolute, yet its blind spots remain fatal. From High-Frequency Trading (HFT) algorithms executing in nanoseconds to complex DeFi oracles, the industry is locked in an arms race for data supremacy. But when a “Black Swan” hits, models trained on historical data don’t just underperform – they break.
This creates a paradox for modern trading firms: how do you build resilient systems when your primary tools are blind to the most significant risks?
To answer this, we sat down with Grigory Chikishev, a Team Lead and Quantitative Trader at Quantum Brains. With over nine years of experience building infrastructure solutions for markets – ranging from HFT algorithms and ML models to graph-based flow evaluation systems – Grigory has spent his career at the intersection of execution speed and systemic resilience. At Quantum Brains, he has transformed market processes into scalable architectures designed to withstand the very volatility that breaks standard models.
Here is his perspective on why the industry needs to move beyond the “black box” and how to engineer true antifragility.
The Zen of the Unpredictable
When the discussion turns to the failure of risk models during events like the recent October crash, the COVID-19 pandemic, or the 2008 financial crisis, the standard critique is that the models “failed” to predict the event. Grigory challenges this premise entirely. He argues that expecting an ML model to predict a singularity (an event with no precedent in its training distribution) is mathematically flawed, and that the solution lies not in better prediction but in better acceptance.
“I’d like to point out right away that I don’t see a problem with the existence of black swans. They are, by definition, events that are impossible to predict. And there’s nothing we can do about it. For example, a comet colliding with Earth: we can almost certainly say it won’t happen in the coming weeks or even years, but no one knows what’s going on in the unseen part of the galaxy…
The word ‘fail’ may be an exaggeration. If we know in advance of our inability to predict event A, then we should accept its occurrence with Buddhist calmness.”
However, accepting unpredictability does not mean ignoring consequences. Grigory points out that while a model cannot predict the timing of a crisis, human domain experts must architect systems that understand the consequences of the worst-case scenario – something purely data-driven models often miss because the data points simply aren’t there.
“Somewhere between these two numbers lies the critical point that separates a predictable event from an unpredictable one (a black swan). And the fundamental flaw of any model is that it can’t calculate this point… We can only prepare for the worst-case scenario, which the model DOESN’T account for.”
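The practical takeaway is that the worst-case bound has to live outside the model. Here is a minimal sketch of that idea in Python; the function, prices, and limits are hypothetical illustrations, not Quantum Brains code. A human picks a catastrophic scenario the data has never shown, and a hard cap enforces it no matter what the model proposes.

```python
# Hypothetical sketch: a hard risk cap that sits OUTSIDE the model.
# The model may propose any position it likes; the guard enforces a
# worst-case loss bound the model was never trained to "see".

def cap_position(model_size: float, price: float,
                 worst_case_drop: float, max_loss: float) -> float:
    """Shrink a model-proposed position so that even worst_case_drop
    (e.g. 0.8 = an 80% crash, chosen by a human, not fitted from data)
    cannot lose more than max_loss."""
    worst_loss_per_unit = price * worst_case_drop
    hard_cap = max_loss / worst_loss_per_unit  # most units we may ever hold
    return min(model_size, hard_cap)

# The ML model says "buy 500 units"; the human-set guard says otherwise.
size = cap_position(model_size=500, price=100.0,
                    worst_case_drop=0.8, max_loss=10_000)
print(size)  # 125.0 -> even an 80% crash costs at most $10,000
```

The design point is that the cap is parameterized by human judgment rather than by the training set, so it keeps working in exactly the regime where the model has nothing to say.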
The Myth of the Transparency Trade-Off
A significant debate in quantitative finance is the tension between Explainable AI (XAI) and profit. The prevailing wisdom holds that “Black Box” models (opaque models, often deep networks, whose internal logic resists interpretation) are more profitable precisely because of that complexity, and that forcing them to be explainable (for regulatory compliance) slows execution and blunts their edge.
Grigory vehemently disagrees with this dichotomy. For him, transparency is not a regulatory burden; it is a debugging tool.
“I highly doubt that an unsupervised or black box approach will ultimately be more successful than a white box approach when directly compared… Therefore, any efforts toward ‘regulatory-level interpretability’ are only for the better. If your newborn child could explain what hurts, it would be very convenient and would clearly help with their upbringing.”
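His framing of interpretability as a debugging tool is easy to demonstrate. The sketch below is a toy example with invented feature names, not a real strategy: it fits a white-box linear model with plain least squares and reads off the weights. A large weight on a feature that should carry no information is precisely the kind of bug report a black box never files.

```python
import numpy as np

# Toy demonstration: transparency as a debugging tool.
rng = np.random.default_rng(0)
n = 1_000
spread    = rng.normal(size=n)   # plausible signal
imbalance = rng.normal(size=n)   # plausible signal
row_id    = np.arange(n) / n     # a leaked index: should mean nothing

# Simulated returns depend only on spread and imbalance...
y = 0.5 * spread - 0.3 * imbalance + rng.normal(scale=0.1, size=n)
# ...but a hypothetical pipeline bug leaks the row index into the target:
y += 2.0 * row_id

X = np.column_stack([spread, imbalance, row_id])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, w in zip(["spread", "imbalance", "row_id"], weights):
    print(f"{name:10s} {w:+.3f}")
# A large weight on row_id is immediately suspicious: the "alpha" is
# look-ahead leakage, not a market effect. A black box would happily
# trade it; a white box confesses on inspection.
```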
He suggests that opacity in trading strategies is often a mask for luck rather than genius – specifically, survivorship bias.
“If you see a successful ML strategy that ‘is unclear how it works,’ then one of two things is most likely true:
- Either its creators actually understand everything, but prefer to keep their cards close to their chest.
- Or we’re dealing with survivorship bias… If 1,024 people make a chain of 10 binary predictions, precisely one of them will be absolutely correct in each prediction.
Unfortunately, sometimes both reasons are correct. So always demand an explanation from your AI agent!”
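The arithmetic behind Grigory’s 1,024-forecaster example is worth making concrete. The snippet below enumerates every possible chain of ten binary calls: whatever the realized outcome, exactly one “forecaster” ends up with a perfect record, purely by construction.

```python
from itertools import product

# 2**10 = 1,024 forecasters, each committed to a distinct chain of
# ten binary predictions, together cover every possible outcome.
forecasters = list(product((0, 1), repeat=10))
print(len(forecasters))  # 1024

outcome = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1)  # any realized sequence

perfect = [f for f in forecasters if f == outcome]
print(len(perfect))  # exactly 1, by construction

# The survivor's track record is flawless yet carries zero skill:
# any other outcome would simply have crowned a different forecaster.
```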
Engineering Antifragility
If prediction is impossible, the only viable strategy is antifragility – the ability of a system to gain from disorder, a concept popularized by Nassim Taleb. However, implementing this in hardware and infrastructure is notoriously difficult. Building a system that can handle 100x the normal market load during a crash is often cost-prohibitive.
Grigory’s approach to infrastructure at Quantum Brains prioritizes flexibility over brute force capacity.
“You can’t prepare your infrastructure for a black swan event. For example, if you calculate your server’s peak load and allow for a 100x increase, then you’re burning money on unused resources almost 100% of the time… But you can prepare a flexible system to reduce resource costs. For example, simply shutting down one trading setup after another. What’s the point anyway if everything goes to hell?”
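As a rough illustration of that “shut down one setup after another” logic, here is a hypothetical load-shedding sketch; the setup names, priorities, and the simplistic load model are all invented for the example. Rather than provisioning 100x capacity, the system pauses its least critical setups first once load exceeds what the hardware can serve.

```python
from dataclasses import dataclass

@dataclass
class Setup:
    name: str
    priority: int        # lower = shed first
    running: bool = True

def shed_load(setups: list[Setup], load: float, capacity: float) -> None:
    """Pause setups, least important first, until estimated load fits."""
    per_setup = load / max(sum(s.running for s in setups), 1)
    for s in sorted(setups, key=lambda s: s.priority):
        if load <= capacity:
            break
        if s.running:
            s.running = False
            load -= per_setup
            print(f"shedding {s.name}, load now ~{load:.0f}")

setups = [Setup("stat-arb", 1), Setup("market-making", 2), Setup("hedger", 3)]
shed_load(setups, load=300.0, capacity=120.0)
# stat-arb and market-making are paused; the hedger keeps the book safe.
```

The trade-off mirrors the quote: spare capacity is bought with flexibility (graceful degradation) rather than with idle hardware.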
This flexibility allows a firm to survive the initial shock. But to actually profit from the dislocation – to be truly antifragile – requires a shift in mindset. It requires recognizing that when others’ algorithms fail, the market is no longer efficient.
“I repeat, we’re talking about a situation that our models didn’t predict… This formulation also contains some good news: we can assume that other market participants are experiencing the same ‘difficult’ scenario. On October 10th, cryptocurrencies experienced a significant shock, prompting many positions to be liquidated. Some participants literally left the market: either they chose the second option (shutdown) or simply didn’t have time to do so (RIP).
This was a good moment to exploit inefficiencies or realize opportunities that would usually be closed… In a sense, this is also Taleb’s way: to avoid being a turkey, you simply have to not be one.”
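One way to picture “opportunities that would usually be closed”: when participants vanish, quoted spreads blow out relative to their recent baseline. The toy detector below, with entirely illustrative numbers, flags that regime with a simple z-score; it is a caricature of the idea, not a production signal.

```python
from statistics import mean, stdev

def spread_zscore(history: list[float], current: float) -> float:
    """How abnormal the current bid-ask spread is vs. recent history."""
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma > 0 else 0.0

calm_spreads = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1]  # ticks
z = spread_zscore(calm_spreads, current=15.0)
print(f"z = {z:.1f}")  # ~ +104: quoting here is suddenly very well paid
if z > 10:
    print("liquidity vacuum: quote wide, size small, collect the spread")
```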
The Human Element in a Zero-Sum Game
As AI continues to dominate trade execution, many question the future role of the human quantitative trader. If machines handle the flow, the risk, and the execution, is the human obsolete?
Grigory believes the very nature of the market safeguards the human element: it is a zero-sum game driven by the desire to win, an emotion that algorithms do not possess. While AI can execute, it lacks the drive to “beat” the market that fuels true innovation.
“Trading differs from many other fields where AI is actively developing, because it’s a zero-sum game… Let’s imagine an extreme: there are no living participants left in the market… Is there a place for humans here? In my opinion, there isn’t.
But fortunately… in the real world, there will always be living participants… Another human factor is overconfidence. The idea, ‘I’m human, I’ll be more inventive and original than AI,’ will never leave our minds.”
Ultimately, the future of quantitative trading isn’t about replacing humans with AI, but about humans using AI to compete against other humans. The algorithm is the weapon, not the soldier.
“As I said, it’s a zero-sum game. But an algorithm has no interest in making money in such conditions. Only homo sapiens will always have the desire to ‘beat’ others.”