Inferences: Check Before You Assume the Outputs Are Accurate
November 23, 2015
Predictive software works really well as long as the software does not have to deal with horse races, the stock market, and the actions of a single person and his closest pals.
“Inferences from Backtest Results Are False Until Proven True” offers a useful reminder to those who want to depend on algorithms someone else set up. The notion is helpful when the data processed are unchecked, unfamiliar, or just assumed to be spot on.
The write up says:
the primary task of quantitative traders should be to prove specific backtest results worthless, rather than proving them useful.
What throws backtests off the track? The write up provides a useful list of reminders:
- Data-mining and data snooping bias
- Use of non-tradable instruments
- Unrealistic accounting of frictional effects
- Use of the market close to enter positions instead of the more realistic open
- Use of dubious risk and money management methods
- Lack of effect on actual prices
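To see how two of these items (fill prices and frictional effects) can flatter a result, here is a minimal, hypothetical sketch. It is not from the cited write up: the prices are fabricated, the strategy is a toy moving-average rule, and the cost figure is invented. The only point is how much the bookkeeping assumptions move the final number.

```python
import random

# Toy sketch (not the article's method): a moving-average backtest on
# synthetic prices, run once with idealized same-day close fills and no
# costs, and once with next-day open fills plus a per-trade charge.

random.seed(42)

# Fabricated daily bars: (open, close) pairs from a simple random walk.
bars = []
price = 100.0
for _ in range(500):
    open_ = price * (1 + random.gauss(0, 0.01))
    close = open_ * (1 + random.gauss(0, 0.01))
    bars.append((open_, close))
    price = close

def backtest(bars, fill="close", cost_per_trade=0.0, lookback=20):
    """Long when the close is above its moving average, flat otherwise."""
    closes = [c for _, c in bars]
    equity, position, entry = 1.0, 0, 0.0
    for i in range(lookback, len(bars) - 1):
        ma = sum(closes[i - lookback:i]) / lookback
        signal = 1 if closes[i] > ma else 0
        if signal != position:
            # Idealized fill at today's close vs. more realistic fill
            # at tomorrow's open.
            fill_price = closes[i] if fill == "close" else bars[i + 1][0]
            if signal == 1:
                entry = fill_price
            else:
                equity *= fill_price / entry
            equity *= (1 - cost_per_trade)  # commission plus slippage per trade
            position = signal
    return equity

print("close fills, no friction :", round(backtest(bars), 4))
print("open fills,  10 bps/trade:", round(backtest(bars, fill="open", cost_per_trade=0.001), 4))
```

The gap between the two printed numbers is the whole point: the strategy did not change, only the accounting did.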
The author is concerned about financial applications, but the advice may be helpful to those who just want to click a link, output a visualization, and assume the big spikes are really important to the decision they will influence in one hour.
One point I highlighted was:
Widely used strategies lose any edge they might have had in the past.
Degradation occurs just like the statistical drift in Bayesian-based systems. Exciting if you make decisions on outputs known to be flawed. How are those automatic indexing, business intelligence, and predictive analytics systems working?
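A toy illustration of that drift idea, again my own rather than the article's: a classifier's decision threshold is fixed on old data, then evaluated as the underlying distribution shifts. The data and shifts are fabricated; the only point is that accuracy erodes even though the model itself never changes.

```python
import random

# Hypothetical drift sketch: a frozen threshold classifier loses accuracy
# as the data move away from the conditions it was fitted on.

random.seed(7)

# "Training" regime: positive class centered at 1.0, negative at -1.0,
# so a threshold of 0.0 is the sensible boundary.
threshold = 0.0

def accuracy(samples, threshold):
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

# Evaluate as the positive class drifts downward over time.
for shift in (0.0, 0.5, 1.0, 1.5):
    test = [(random.gauss(1.0 - shift, 0.5), 1) for _ in range(500)] + \
           [(random.gauss(-1.0, 0.5), 0) for _ in range(500)]
    print(f"drift {shift:.1f}: accuracy {accuracy(test, threshold):.2f}")
```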
Stephen E Arnold, November 23, 2015