# Organized by:

• Danial Yazdani, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China (email: danial.yazdani@gmail.com)
• Juergen Branke, Operational Research and Management Sciences Group, Warwick Business School, University of Warwick, Coventry CV4 7AL, UK (email: juergen.branke@wbs.ac.uk)
• Mohammad Nabi Omidvar, School of Computing, University of Leeds, and Leeds University Business School, Leeds LS2 9JT, UK (email: m.n.omidvar@leeds.ac.uk)
• Xiaodong Li, School of Science (Computer Science), RMIT University, GPO Box 2476, Melbourne, 3001, Australia (email: xiaodong.li@rmit.edu.au)
• Changhe Li, School of Automation, China University of Geosciences, Wuhan 430074, China (email: changhe.lw@gmail.com)
• Michalis Mavrovouniotis, KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus (email: mavrovouniotis.michalis@ucy.ac.cy)
• Trung Thanh Nguyen, Liverpool Logistics, Offshore and Marine (LOOM) Research Institute, Faculty of Engineering and Technology, School of Engineering, Liverpool John Moores University, Liverpool, UK (email: t.t.nguyen@ljmu.ac.uk)
• Shengxiang Yang, Center for Computational Intelligence (CCI), School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, UK (email: syang@dmu.ac.uk)
• Xin Yao, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China, and Center of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, University of Birmingham, Birmingham, UK (email: xiny@sustc.edu.cn)

# Background

The search spaces of many real-world optimization problems are dynamic in terms of the objective function, the number of variables, and/or the constraints. Optimization problems that change over time and need to be solved online by an optimization method are referred to as dynamic optimization problems (DOPs). To solve DOPs, algorithms must not only find desirable solutions but also react to environmental changes in a timely manner and find a new solution when the previous one becomes suboptimal. In this competition, we focus on evolutionary dynamic optimization (EDO) methods for unconstrained single-objective continuous DOPs. EDO methods are an important class of algorithms, as they form the foundation for more complex methods, such as those used in robust optimization over time, as well as in large-scale and constrained dynamic optimization. This competition aims at providing a common platform for fair and easy comparison of EDO methods. Competitors run their own EDO method on problem instances generated by the generalized moving peaks benchmark (GMPB) with different characteristics and levels of difficulty.

# Generalized Moving Peaks Benchmark [1]

The landscapes generated by GMPB are constructed by assembling several promising regions with a variety of controllable characteristics ranging from unimodal to highly multimodal, symmetric to highly asymmetric, smooth to highly irregular, and various degrees of variable interaction and ill-conditioning. An example of a 2-dimensional landscape generated by GMPB is shown in Figure 1.

For a detailed description of GMPB, the reader is referred to the competition support document (download here).

The MATLAB source code of the GMPB (where mQSO [2] is used as a sample EDO) can be downloaded from here.
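The full GMPB transform (irregularity, asymmetry, rotation, and ill-conditioning) is specified in the support document and the MATLAB code. For intuition only, a heavily simplified landscape in the spirit of the classic moving peaks benchmark, where the fitness is the upper envelope of several cone-shaped peaks, can be sketched as follows (Python for illustration; the function and variable names are ours, not the benchmark's):

```python
import numpy as np

def peaks_fitness(x, centers, heights, widths):
    """Simplified conic moving-peaks landscape (classic MPB form, NOT the
    full GMPB transform): the fitness at x is the best height-minus-penalty
    value over all peaks, where the penalty grows with distance to a peak."""
    x = np.asarray(x, dtype=float)
    dists = np.linalg.norm(centers - x, axis=1)  # distance to each peak center
    return np.max(heights - widths * dists)      # upper envelope of all peaks

# Two peaks in 2-D; at a peak center the fitness equals that peak's height.
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
heights = np.array([50.0, 40.0])
widths  = np.array([1.0, 1.0])
print(peaks_fitness([0.0, 0.0], centers, heights, widths))  # 50.0
```

In a DOP, the centers, heights, and widths of these peaks change at every environmental change, which is what makes previously found solutions become suboptimal.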

# Problem instances

Twelve problem instances with different characteristics are used in this competition. To generate them, participants need to set the four parameters PeakNumber, ChangeFrequency, Dimension, and ShiftSeverity in “main.m” according to the values shown in Table 1.

Table 1: Parameter setting of the 12 problem instances used in this competition.
| Parameter | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PeakNumber | 5 | 10 | 25 | 50 | 100 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
| ChangeFrequency | 5000 | 5000 | 5000 | 5000 | 5000 | 2500 | 1000 | 500 | 5000 | 5000 | 5000 | 5000 |
| Dimension | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 10 | 20 | 5 | 5 |
| ShiftSeverity | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 5 |

For all problem instances, RunNumber is 31 and EnvironmentNumber is 100.
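For convenience, the settings of Table 1 can be encoded programmatically, e.g., to loop over all twelve instances in an experiment script (Python here for illustration; the parameter names match those in “main.m”, but this dictionary is our own convenience structure, not part of the benchmark code):

```python
# Table 1 settings, keyed by problem instance.
# Tuple order: (PeakNumber, ChangeFrequency, Dimension, ShiftSeverity)
INSTANCES = {
    "F1":  (5,   5000, 5,  1),
    "F2":  (10,  5000, 5,  1),
    "F3":  (25,  5000, 5,  1),
    "F4":  (50,  5000, 5,  1),
    "F5":  (100, 5000, 5,  1),
    "F6":  (10,  2500, 5,  1),
    "F7":  (10,  1000, 5,  1),
    "F8":  (10,  500,  5,  1),
    "F9":  (10,  5000, 10, 1),
    "F10": (10,  5000, 20, 1),
    "F11": (10,  5000, 5,  2),
    "F12": (10,  5000, 5,  5),
}
RUN_NUMBER = 31           # independent runs per instance
ENVIRONMENT_NUMBER = 100  # environments (changes) per run

for name, (peaks, freq, dim, shift) in INSTANCES.items():
    print(name, peaks, freq, dim, shift)
```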

# Evaluation Criteria

Offline error is used as the performance indicator in this competition:

$$
E_O = \frac{1}{T\vartheta}\sum_{t=1}^{T}\sum_{c=1}^{\vartheta}\left(f^{(t)}\left(\vec{x}^{\circ(t)}\right)-f^{(t)}\left(\vec{x}^{*\left((t-1)\vartheta+c\right)}\right)\right)
$$

where $\vec{x}^{\circ(t)}$ is the global optimum position in the $t$-th environment, $T$ is the number of environments, $\vartheta$ is the change frequency, $c$ is the fitness-evaluation counter within each environment, and $\vec{x}^{*((t-1)\vartheta+c)}$ is the best position found by the $c$-th fitness evaluation of the $t$-th environment. The offline error is thus the average of the current error values over the whole optimization process. In the source code, the current error values of each run are stored in “Problem.CurrentError”; at the end of the run, the offline error is obtained by averaging them.
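Since “Problem.CurrentError” records one current-error value per fitness evaluation, computing the offline error of a run reduces to taking the mean of that vector. A minimal sketch of this computation (Python here for illustration; the variable names are ours, not the benchmark's API):

```python
import numpy as np

def offline_error(current_error):
    """Offline error of one run: the mean of the current-error values
    recorded at every fitness evaluation (T * theta entries in total)."""
    return np.asarray(current_error, dtype=float).mean()

# Example: T = 2 environments, change frequency theta = 3,
# so 6 current-error values in total.
errors = [4.0, 2.0, 1.0, 5.0, 3.0, 1.0]
print(offline_error(errors))  # mean of the 6 values
```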

# Rules

1. Participants are NOT allowed to change the random number generator seeds in the code.
2. Participants are NOT allowed to modify “BenchmarkGenerator.m” and “fitness.m” files in the source code.
3. Participants are NOT allowed to tune the parameters of their algorithm for individual problem instances. In other words, the parameter values of the algorithm must be identical across all problem instances.
4. The algorithm can be informed about environmental changes, so participants do not need to include a change detection mechanism in their algorithms; however, using a change detection component is also accepted.
5. Each participant can submit more than one algorithm.
6. Since this is the first competition on GMPB, competitors can participate with either newly proposed algorithms (unpublished) or a previously published algorithm.
7. We require the winner to share their source code with us. The code will not be published and is solely used to reproduce the reported results.

# Submission instructions

For each algorithm, competitors must provide a compressed folder (named after the algorithm) containing 13 files. The first file is a document containing the following information:

1. Title
2. Names, affiliations, and emails of participants
3. A short description of the algorithm, including the used optimizer (e.g., PSO or DE), how the population is managed and controlled (e.g., bi-population, multi-population with fixed number of sub-populations, or multi-population with clustering), and whether explicit memory (archive) is used or not.
4. The following table, showing the best, worst, average, median, and standard deviation of the offline error values obtained over 31 runs for each problem instance:
| Offline error | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | | | | | | | | | | | | |
| Worst | | | | | | | | | | | | |
| Average | | | | | | | | | | | | |
| Median | | | | | | | | | | | | |
| Standard deviation | | | | | | | | | | | | |

In addition, the folder must also contain 12 text files, where each file contains the 31 offline error values obtained by the algorithm in the 31 runs of one problem instance. Each text file must be named after the corresponding problem instance (e.g., “F10.dat” contains the 31 values obtained by solving F10).
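The per-instance result files and the statistics requested in point 4 above can be produced with a short script. The sketch below (Python for illustration; the function name is ours) writes one value per line following the “F10.dat” naming convention and returns the five required statistics:

```python
import numpy as np

def save_and_summarize(instance_name, offline_errors):
    """Write the 31 per-run offline errors to '<instance_name>.dat'
    (one value per line) and return the statistics for the results table."""
    values = np.asarray(offline_errors, dtype=float)
    assert values.size == 31, "one offline error per run is expected"
    np.savetxt(f"{instance_name}.dat", values)
    return {
        "best":    values.min(),   # smaller offline error is better
        "worst":   values.max(),
        "average": values.mean(),
        "median":  np.median(values),
        "std":     values.std(ddof=1),  # sample standard deviation
    }
```

For example, `save_and_summarize("F1", errors_f1)` with the 31 offline errors of F1 produces “F1.dat” and the corresponding row entries for the table.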

Submit your files directly to Danial Yazdani (email to danial.yazdani@gmail.com), no later than 15 July 2022.

# References

[1] D. Yazdani, M. N. Omidvar, R. Cheng, J. Branke, T. T. Nguyen, and X. Yao, “Benchmarking continuous dynamic optimization: Survey and generalized test suite,” IEEE Transactions on Cybernetics, pp. 1 – 14, 2020.
[2] T. Blackwell and J. Branke, “Multiswarms, exclusion, and anticonvergence in dynamic environments,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 459–472, 2006.