To avoid overfitting, it is common practice to keep a holdout set of data.
The request is to add UI for partitioning the data into a backtest dataset and a holdout dataset.
Users tune their algo against the backtest dataset, then run it once more on the holdout dataset to confirm it still performs.
Currently, our UI always starts backtesting from January 1, 2020. If we let users specify the start and end dates of a simulation,
they could achieve backtest/holdout testing themselves by manually running the simulation twice on two contiguous date ranges.
Perhaps it's a feature, not a bug, that running the algo on the holdout dataset is inconvenient: friction discourages repeated runs against the holdout, which would quietly turn it into a second backtest set.
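The two contiguous date ranges described above could be computed like this. This is a minimal sketch, not our actual UI code; the function name, the 80/20 default split, and the tuple return shape are all assumptions for illustration.

```python
from datetime import date, timedelta

def split_backtest_holdout(start: date, end: date, holdout_fraction: float = 0.2):
    """Split [start, end] into a backtest range and a contiguous holdout range.

    Hypothetical helper: the 20% holdout default is an assumption, not a
    documented product decision. Returns ((bt_start, bt_end), (ho_start, ho_end)).
    """
    total_days = (end - start).days
    holdout_days = int(total_days * holdout_fraction)
    cutoff = end - timedelta(days=holdout_days)
    backtest = (start, cutoff)                   # tune the algo on this range
    holdout = (cutoff + timedelta(days=1), end)  # final validation run only
    return backtest, holdout
```

For example, splitting January 1, 2020 through December 31, 2024 with the default fraction keeps roughly the last year as holdout; the user would run the simulation once on each returned range.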