
One of the most common mistakes when creating a trading robot is over-tuning it to historical data. This is called over-optimization (or curve fitting). It can make the robot look great in backtests, but fail in live trading.
In other words, the robot learns to “memorize” what happened before, but it doesn’t know how to react well when market conditions change.
How do you know if a robot is over-optimized?
A clear sign is when a strategy works very well in one specific period but underperforms in other periods or markets. This indicates that it's tailored to a particular case rather than to a general market logic.
This often happens when parameters are tested and changed just to improve the profit curve, without considering whether those changes make sense.
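One rough way to detect this is to run the same, unchanged rules over several separate historical periods and compare the results. The sketch below is only illustrative, assuming a simple long-only moving-average crossover on a synthetic random-walk price series (none of these names come from any particular library):

```python
import random

def sma(prices, n):
    """Simple moving average of the last n prices (None until enough data)."""
    return [
        sum(prices[i - n + 1 : i + 1]) / n if i >= n - 1 else None
        for i in range(len(prices))
    ]

def strategy_return(prices, fast=10, slow=30):
    """Total return of a long-only SMA crossover: hold while fast SMA > slow SMA."""
    f, s = sma(prices, fast), sma(prices, slow)
    ret = 1.0
    for i in range(1, len(prices)):
        if f[i - 1] is not None and s[i - 1] is not None and f[i - 1] > s[i - 1]:
            ret *= prices[i] / prices[i - 1]
    return ret - 1.0

# Synthetic price series (random walk), split into three equal periods.
random.seed(42)
prices = [100.0]
for _ in range(899):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

third = len(prices) // 3
for k in range(3):
    chunk = prices[k * third : (k + 1) * third]
    print(f"period {k + 1}: return = {strategy_return(chunk):+.2%}")
```

If the same parameters shine in one period and lose money in the others, that is the warning sign described above.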
Be careful with precise numbers
Sometimes very specific values are used in indicators, such as an RSI threshold of 37 or a 19.5-period moving average. These numbers may have worked before, but often only by chance.
Using such exact values ties the strategy to patterns that are unlikely to repeat, making it fragile and unstable.
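A quick robustness check is to look at the neighboring parameter values: if 37 performs wonderfully but 36 and 38 perform poorly, the result is probably a fluke. A minimal sketch, where `neighborhood_stable` and the score table are hypothetical, not part of any real backtesting API:

```python
def neighborhood_stable(score_by_param, best, tol=0.5):
    """True if the parameters next to the best one keep at least `tol`
    (as a fraction) of the best score -- a rough robustness check."""
    best_score = score_by_param[best]
    neighbors = [p for p in (best - 1, best + 1) if p in score_by_param]
    return all(score_by_param[p] >= tol * best_score for p in neighbors)

# Hypothetical backtest scores from an RSI-threshold sweep.
scores = {35: 0.10, 36: 0.12, 37: 0.95, 38: 0.11, 39: 0.09}
print(neighborhood_stable(scores, best=37))  # isolated spike -> False (fragile)
```

A genuinely robust parameter usually sits on a plateau of similar scores, not on an isolated spike.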
Too many rules don’t always help
If a strategy uses too many indicators, filters, or very complicated conditions, it may appear perfect in tests… but it isn’t.
The problem is that, instead of capturing how the market behaves, the robot is forced to fit what has already happened. That doesn't help it adapt to what comes next.
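One concrete reason extra rules hurt: every additional filter shrinks the number of trades the backtest is judged on, and a small sample makes a good-looking result easy to get by luck. A toy illustration, assuming (purely for the sake of the example) that each filter passes about half of the remaining trades:

```python
n_trades = 1000
keep_rate = 0.5  # assumption: each extra filter passes about half the trades

for n_filters in range(6):
    kept = int(n_trades * keep_rate ** n_filters)
    print(f"{n_filters} filters -> about {kept} trades in the backtest")
```

After five such filters, only about 31 trades remain, far too few to tell skill from chance.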
Does what the robot does make sense?
A good way to check if a strategy is well thought out is to ask yourself: Can I easily explain why each rule is there?
If you can’t do this, that rule may only be helping to improve past results, but without a solid foundation. And that’s not reliable for the future.
Test on new data
A good strategy should also work on data that wasn’t used during its creation. This is called out-of-sample testing. If it fails this type of test, it’s likely over-optimized.
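In practice this means splitting the history chronologically, tuning only on the first part, and judging the strategy on the held-out part. A minimal sketch, where both helper functions and the 50% drop threshold are illustrative assumptions, not a standard:

```python
def in_out_split(series, frac=0.7):
    """Split a series chronologically: the first part is for optimization,
    the remainder is held out for out-of-sample validation."""
    cut = int(len(series) * frac)
    return series[:cut], series[cut:]

def looks_overfit(in_sample_score, out_sample_score, max_drop=0.5):
    """Flag a strategy whose out-of-sample score drops more than
    `max_drop` (as a fraction) below its in-sample score."""
    return out_sample_score < (1 - max_drop) * in_sample_score

train, test = in_out_split(list(range(1000)))
print(len(train), len(test))      # 700 points to optimize on, 300 held out
print(looks_overfit(1.8, 0.3))    # large in/out gap -> True (likely over-optimized)
```

Parameters must never be adjusted using the held-out part; the moment they are, it stops being out-of-sample.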