Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/26956
Appears in Collections: Computing Science and Mathematics Technical Reports
Peer Review Status: Unrefereed
Title: Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning (Detailed Experiments and Results)
Author(s): Christie, Lee A; Brownlee, Alexander; Woodward, John R
Contact Email: alexander.brownlee@stir.ac.uk
Citation: Christie LA, Brownlee A & Woodward JR (2018) Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning (Detailed Experiments and Results). Stirling: University of Stirling.
Keywords: benchmarks; BBOB; ranking; differential evolution; continuous optimisation; parameter tuning; automated design of algorithms
Issue Date: 30-Apr-2018
Date Deposited: 11-Apr-2018
Publisher: University of Stirling
Abstract: Benchmarks are important for demonstrating the utility of optimisation algorithms, but the practice of benchmarking is controversial: we could select instances that present our algorithm favourably and dismiss those on which it under-performs. Several papers highlight the pitfalls of benchmarking, some in the context of the automated design of algorithms, where a set of problem instances (benchmarks) is used to train the algorithm. As in machine learning, if the training set does not reflect the test set, the algorithm will not generalise. This raises open questions about the use of test instances to automatically design algorithms. We use differential evolution, sweeping its parameter settings, to investigate the practice of benchmarking on the BBOB benchmarks. We make three key findings. Firstly, several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general when in fact it performs poorly on a few key instances, possibly introducing unwanted bias into a resulting automatically designed algorithm. Secondly, the number of evaluations can have a large effect on the conclusions drawn. Finally, a systematic sweep of the parameters shows how performance varies over time across the space of algorithm configurations. The data sets, including all computed features, the evolved policies and their performances, and the visualisations for all feature sets, are available from http://hdl.handle.net/11667/109.
Type: Technical Report
URI: http://hdl.handle.net/1893/26956
Affiliation: Computing Science; Computing Science; Queen Mary, University of London
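The abstract above reports that several BBOB functions are highly correlated when used to rank algorithm configurations. A minimal, hypothetical sketch of how such pairwise rank correlations between benchmark functions might be computed is given below; the performance data, number of configurations, and number of functions are illustrative assumptions, not values taken from the report.

```python
# Sketch only (not the report's actual pipeline): given a matrix of
# performance scores for a set of DE configurations on several BBOB
# functions, check how strongly pairs of functions agree on the
# ranking of configurations.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical performance matrix: rows = DE configurations (e.g. from a
# sweep over F and CR), columns = benchmark functions; lower is better.
n_configs, n_functions = 50, 5
scores = rng.random((n_configs, n_functions))

# Pairwise Spearman rank correlation between benchmark functions:
# a high value means two functions rank the configurations similarly,
# so they contribute largely redundant information to a benchmark suite.
rho, _ = spearmanr(scores)  # (n_functions x n_functions) correlation matrix
for i in range(n_functions):
    for j in range(i + 1, n_functions):
        print(f"f{i + 1} vs f{j + 1}: rho = {rho[i, j]:+.2f}")
```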
Files in This Item:
File | Description | Size | Format
---|---|---|---
investigating-benchmark-correlations-techreport.pdf | Fulltext - Accepted Version | 986.56 kB | Adobe PDF
This item is protected by original copyright.