Abstract
Many real-world problems involve optimizing numerous decision variables and are expensive to evaluate; these are known as large-scale expensive optimization problems (LSEOPs). While surrogate-assisted evolutionary algorithms have proven effective for expensive problems, training accurate surrogate models for LSEOPs remains challenging due to insufficient training data. In this paper, we adopt a divide-and-conquer approach, decomposing an LSEOP into lower-dimensional sub-problems and constructing a surrogate model for each sub-problem, and introduce a multi-view synthetic sampling technique for selecting new samples. Specifically, we propose sorting all evaluated solutions in ascending order and dividing them into intervals, from which data are sampled to obtain informative training data for the models. The population for the LSEOP is updated by applying cooperative environmental selection to a combined population, formed by recombining the updated populations of all sub-problems, so as to balance exploration and exploitation. Finally, a solution is selected from the current population for true evaluation based on its multi-view performance predicted across all sub-problems. Results on the CEC'2013 benchmark problems show the effectiveness and efficiency of the proposed method compared with three prevalent large-scale expensive optimization algorithms. Moreover, results on 2000-dimensional CEC'2010 benchmark problems and a 1200-dimensional real-world problem demonstrate the encouraging scalability and robustness of the proposed method on higher-dimensional problems.