Training a strategy

Just like an athlete, a strategy should be trained, ideally at regular intervals. Training means optimizing the parameters, but its main purpose is not getting the highest profit out of the strategy; sometimes training can even reduce the backtest profit. The main purposes of training are described below.

Training a strategy involves testing a set of strategy parameters with historical or recent price data, and selecting the most stable parameter values. For example, imagine a trading system that buys and sells a position when the price curve crosses its n-bar Simple Moving Average. This strategy can be trained to find the optimal value of n between 20 and 200. For this purpose, some trading platforms come with an optimizer for finding the highest performance peak in the parameter space. Zorro's optimize function does not seek performance peaks; instead it looks for performance ranges and places the parameters in their centers. This does not necessarily result in the maximum backtest performance, but in the highest likelihood of reproducing the hypothetical performance in real trading.

For preparing a strategy to be trained, set the PARAMETERS flag and assign optimize calls to all optimizable parameters. The code for training the Simple Moving Average system in the above example would look like this:

// n-bar Moving Average system (just for example - don't trade this!)
function run() 
{
  set(PARAMETERS);
  var TimePeriod = optimize(100,20,200,10);
  var* Price = series(price());
  var* Average = series(SMA(Price,TimePeriod));
  if(crossOver(Price,Average)) 
    enterLong();
  else if(crossUnder(Price,Average)) 
    exitLong();
}

Now click [Train]. Zorro will run through several optimization cycles to find the most robust parameter combination. Zorro optimizes very fast; still, depending on the number of parameters, assets, and trade algorithms, optimization can take from a few seconds up to several hours for a complex portfolio strategy. The progress bar indicates how long you have to wait until it's finished:

The displayed values per line are the loop and parameter number, the optimization step, the parameter value, the result of the objective function, and the number of winning and losing trades.

The optimal parameters are stored in a *.par file in the Data folder, and are automatically used by Zorro when testing or trading the system. Parameters are optimized and stored separately per asset and algorithm in a portfolio system; therefore all optimize calls must be located inside the asset/algo loop if there is any. When the FACTORS flag is set, capital allocation factors are also generated and stored in a *.fac file in the Data folder. When the strategy contains AI functions and the RULES flag is set, decision tree or perceptron rules are generated and stored in a *.c file in the Data folder. By setting flags accordingly, parameters, factors, and rules can be generated either together or independently of each other.
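
For instance, the Moving Average example could be extended to a small portfolio like this (a sketch assuming the usual asset/loop pattern; the asset names are placeholders):

```c
// Portfolio sketch: optimize() inside the asset loop, so a separate
// TimePeriod is trained and stored per asset.
function run() 
{
  set(PARAMETERS);
  while(asset(loop("EUR/USD","USD/JPY"))) // select next asset per cycle
  {
    var TimePeriod = optimize(100,20,200,10); // per-asset parameter
    vars Prices = series(price());
    vars Averages = series(SMA(Prices,TimePeriod));
    if(crossOver(Prices,Averages)) 
      enterLong();
    else if(crossUnder(Prices,Averages)) 
      exitLong();
  }
}
```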

Training shows if the strategy is robust, i.e. insensitive to small parameter changes. Therefore it goes hand in hand with strategy development. When training a strategy with the LOGFILE flag set, Zorro shows the performance variance over all parameter ranges on an HTML page, like this:

The charts are stored in the Log folder and end with the number of the parameter, f.i. _p1 for the first, _p2 for the second parameter and so on. The red bars are the result of the objective function, which is by default the profit factor of the training period, corrected by a factor for preferring a higher number of trades. The dark blue bars show the number of losing trades and the light blue bars show the number of winning trades. In walk forward optimization, the charts display the average results over all WFO cycles.

Training is critical and should not be omitted even when a strategy appears to be already profitable with its default parameters. Most strategies only work within certain parameter ranges. Those ranges are usually different for every asset and also vary with the market situation - a bullish or bearish market has different optimal parameters than a market with no trend. Due to the asymmetry of most markets, long and short trades often also require different parameters. Naturally, parameters can only be optimized to a specific historical price curve; therefore they perform best with that particular price curve and will not perform as well with any other data - such as real time prices in live trading. This effect is called overfitting. It varies with the strategy, and is discussed below.

Select carefully the parameters that you train. Parameters that are fundamental to the strategy - for instance, the market opening hour when the strategy exploits the overnight gap - must never be trained, even if training improves the test result. Otherwise you risk replacing a real market inefficiency with a random training effect.

For avoiding overfitting bias when testing a strategy, parameter values are typically optimized on a training data set (the "in-sample" data), and then tested on a different data set (the "out-of-sample" or "unseen" data) to simulate the process of trading the system. If the optimized parameters do not perform well on out-of-sample data, the strategy is not suited for trading. Zorro offers three different methods for optimizing and testing a strategy with unseen data:

Horizontal split optimization

This method divides the price data into one-week segments (a different segment length can be set up through DataSkip). Usually two of every three weeks are then used for the optimization, and every third week is skipped by the optimization and used for the strategy test. The LookBack period is the part of the price curve required by the strategy before the first trade can be entered, for functions that use past price data as input, such as moving averages or frequency filters. Thus a preceding lookback period must be cut off from the start of the data for both training and testing.

[Figure: horizontal split - the simulation period begins with a LookBack segment; the remaining weeks alternate between training segments (Parameters + AI Rules) and Test segments.]

For setting up this method, set different SKIP1, SKIP2, SKIP3 flags for training and testing. Usually, SKIP3 is set in train mode and SKIP1+SKIP2 in test mode; any other combination will do as well, as long as two of every three weeks are used for training and the third for testing. A script setup for horizontal split optimization looks like this:

function run() // horizontal split optimization
{
  if(Train) set(PARAMETERS+SKIP3);
  if(Test) set(PARAMETERS+SKIP1+SKIP2);
  ...
}

Horizontal split optimization is the preferred training method for quick tests while developing the strategy, or for testing when you don't have much price data. It is the fastest method, uses the largest possible data set for optimization and test, and usually gives quite realistic test results. For checking the consistency you can also train and test with different split combinations, f.i. SKIP1 for training and SKIP2+SKIP3 for testing. The disadvantage is that the test, although it uses different price data, runs through the same market situation as the training, which can make the result too optimistic.

Vertical split optimization

Alternatively, you can select only a part of the data period - such as 75% - for training. The rest of the data is then used exclusively for testing. This separates the test completely from the training period. Again, a lookback period is split off for functions that need past price data.

[Figure: vertical split - a LookBack segment, followed by the training period (Parameters + AI Rules), followed by the Test period.]

For setting up this method, use DataSplit to set up the length of the training period. A script setup for vertical split optimization looks like this:

function run() // vertical split optimization
{
  set(PARAMETERS);
  DataSplit = 75; // 75% training, 25% test period
  ...
}

Vertical split optimization has the advantage that the test happens after the training period, just as in live trading. The disadvantage is that this restricts testing to a small time window, which makes the test quite inaccurate and subject to fluctuations. It also does not use the full simulation period for optimization, which reduces the quality of the parameters and rules.

Walk Forward Analysis and Optimization

For combining (almost all) the advantages of the two previous methods, Zorro offers an analysis and optimization algorithm called Walk Forward Optimization (WFO for short; sometimes also called "Time Series Cross-Validation"). This method trains and tests the strategy in several cycles using a data frame that "walks" over the simulation period:

[Figure: Rolling Walk Forward Optimization - a frame of LookBack + training (Parameters + Rules) + Test segments walks forward over the simulation period; cycle 1 is tested on Test1, cycle 2 on Test2, and so on. The last cycle has no test segment; an OOS Test can use the data preceding the first training period. The WFO Test is the concatenation of all test segments.]

[Figure: Anchored Walk Forward Optimization - all training periods start at the beginning of the simulation period and grow with every cycle; each cycle is tested on the segment following its training period.]

The parameters and rules are stored separately for every cycle in files ending with "_n.par" and "_n.c" respectively, where n is the number of the cycle. This way the strategy can be tested repeatedly without having to generate all the values again. The test is divided into (n-1) test periods that each use the parameters and factors generated in the preceding cycle. The last training cycle has no test period, but can be used to generate parameters and rules for real trading.

For setting up this method, use DataSplit to set up the length of the training period in percent, and set either NumWFOCycles to the number of cycles, or WFOPeriod to the length of one cycle. A script setup for Walk Forward Optimization looks like this:

function run() // walk forward optimization
{
  set(PARAMETERS);
  DataSplit = 90; // 90% training, 10% test
  NumWFOCycles = 10; // 10 cycles, 9 test periods
  ...
}

The WFO test splits the test period into several segments according to the WFO cycles, and tests every segment separately. The test result in the performance report is the sum of all segments. The parameters from the last cycle are not included in the WFO test as they are only used for real trading. A special OOS test can be used for out-of-sample testing those parameters with price data from before the training period.

WFO can be used to generate a 'parameter-free' strategy - a strategy that does not rely on an externally optimized parameter set, but optimizes itself at regular intervals while trading. This makes optimization a part of the strategy. WFO simulates this method and thus tests not only the strategy, but also the quality of the optimization process. Overfitted strategies - see below - will show bad WFO performance, while profitable and robust strategies will show good WFO performance. The disadvantage of WFO is that it's slow because it normally runs over many cycles. Like vertical split optimization, it uses only a part of the data for optimizing parameters and factors. Depending on the strategy, this causes sometimes worse, sometimes better results than using the full data range. Still, WFO is the best method for generating robust and market independent strategies.

Another advantage of WFO is that the training time can be strongly reduced by using several CPU cores.

Combining rules and parameters

If a strategy requires training of both rules and parameters in a more or less complex way, it is important to know which depends on which. Often several training runs in a certain order are required. With Zorro S, multiple training runs can be automated with a batch file that starts training instances with different -d command line options for different script flags and behavior. The following cases are possible:

Training results

Before training a system the first time, make sure to run a [Test] with the raw, unoptimized script. Examine the log and check it for error messages, such as skipped trades or wrong parameters. This way you can identify and fix script problems before training the script. Training won't produce error messages - trades are just not executed when trade parameters are wrong.

After training, files are generated which contain the rules, parameters, and factors. A Zorro strategy can thus consist of up to 4 files that determine its trading behavior:

All 4 files, except for the executable *.x, are normal text files and can be edited with any plain text editor. Note that executable strategies only work with the Zorro version for which they were compiled, as they rely on the format of the data structures contained in include\trading.h. If those data structures change - which can happen from one Zorro version to the next - the strategies must be compiled again.

Overfitting

The optimizing process naturally selects parameters that fare best with the used price data. Therefore, an optimized strategy is always more or less 'curve fitted' to the used price curve. This is normal, but bears a danger when a strategy uses too many parameters that have an impact on the result, or reacts too sensitively to certain parameter values. Optimizing can then generate a parameter combination that produces high profit with the used price curve and time period, but is otherwise not profitable. That means the strategy works perfectly with the given price data, but fails miserably when used in real trading.

For instance, imagine the n-bar Moving Average trading system described above. It can turn out that a 40-bar Moving Average produced the highest return over the test period, but the results for the neighboring 20-, 30-, and 50-bar Moving Averages were all negative. The 40-bar result was just a peak - an anomaly of the price data set that will not be reproduced in real trading. Such anomalies exist very often in price data. If the parameters are adapted to such an anomaly, the strategy is overfitted and won't work well. Especially neural networks, genetic algorithms, or brute force optimization methods tend to overfit strategies. They produce 'panda bears' that are perfectly adapted to a specific data set, but can't survive anywhere else.

For this reason, many strategy developers recommend that parameters should not be optimized at all. The problem is that all strategy scripts normally have internal constants and values that affect the result and have to be selected somehow - and this selection process already constitutes an optimization and a possible overfitting. Aside from that, unoptimized strategies cannot adapt to different assets and market situations. They are unsuited for a regular income. Zorro therefore takes another approach: it attempts to find not the best, but the most robust strategy parameters.

Imagine that in the mentioned n-bar Moving Average strategy, all time periods between 100 and 150 bars produced positive results, although they were less profitable than the 40-bar peak. It can be expected that a time period parameter from the 100..150 range is much more likely to be successful in real trading. Therefore Zorro's optimize function does not attempt to find the parameter value with the highest return. Instead it looks for parameter ranges with positive results, and selects a value from the center of such a range.

Seven suggestions to reduce overfitting

See also:

Testing, Trading, Result, WFO, DataSplit, optimize, Workshop 5, Workshop 7

 
