Abstract
Decomposition-based evolutionary algorithms have proven able to strike a good trade-off between convergence and diversity when handling multi-objective or many-objective optimization problems with regular Pareto fronts. However, they become less effective on many-objective problems with irregular Pareto fronts (referred to as irregular problems hereafter for simplicity). In this thesis, we aim to solve irregular problems within the framework of decomposition-based evolutionary algorithms, which usually adopt a set of fixed reference vectors covering the whole objective space to guide the search process.
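For background, decomposition-based algorithms turn a many-objective problem into a set of scalar subproblems, one per reference vector. A widely used scalarizing function (shown here only as a standard illustration, not as the adaptive function proposed later in this thesis) is the weighted Tchebycheff function:

```latex
g^{\mathrm{tch}}(\mathbf{x} \mid \boldsymbol{\lambda}, \mathbf{z}^{*})
  = \max_{1 \le i \le m} \; \lambda_i \left| f_i(\mathbf{x}) - z_i^{*} \right|,
```

where $\boldsymbol{\lambda}$ is a reference (weight) vector, $\mathbf{z}^{*}$ is the ideal point, and $m$ is the number of objectives; minimizing $g^{\mathrm{tch}}$ pushes a solution toward the Pareto front along the direction associated with $\boldsymbol{\lambda}$.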
To ensure that the adaptation of the reference vectors matches the solutions found during the search, in our first work we propose to adjust the distribution of the reference vectors using an improved growing neural gas network to solve irregular problems. By training the improved growing neural gas network on the solutions found during the search, the algorithm can adapt the reference vectors effectively without explicitly determining when and where to adapt them, and largely without slowing down convergence. For decomposition-based algorithms, the scalarizing function is as critical as the reference vectors, and its design should take both convergence and diversity into consideration, since the scalarizing function is eventually used to rank the solutions associated with each reference vector before selection. If the reference vectors are adjusted frequently, a solution in the population may also change its associated reference vector frequently, reducing the selection pressure along a particular search direction and slowing down convergence. Therefore, in the second work, we propose a new adaptive scalarizing function tailored to adaptive reference vectors, thereby better balancing the trade-off between convergence and diversity.
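To give a flavor of how a growing neural gas network can track the distribution of solutions, the sketch below shows one heavily simplified update step: the node nearest to a sampled solution (and its topological neighbors) is pulled toward the sample, and an edge links the two closest nodes. This is a minimal illustration under assumed learning rates (`eps_b`, `eps_n`), not the improved variant proposed in the thesis, which additionally decides node insertion and removal.

```python
import numpy as np

def gng_step(nodes, edges, sample, eps_b=0.2, eps_n=0.006):
    """One simplified GNG update (illustrative only).

    nodes : (n, m) array of node positions in objective space
    edges : dict mapping node index -> set of neighbor indices
    sample: (m,) array, one (normalized) solution from the population
    """
    dists = np.linalg.norm(nodes - sample, axis=1)
    s1, s2 = np.argsort(dists)[:2]              # winner and runner-up
    nodes[s1] += eps_b * (sample - nodes[s1])   # move winner toward sample
    for nb in edges.get(s1, set()):
        nodes[nb] += eps_n * (sample - nodes[nb])  # drag topological neighbors
    edges.setdefault(s1, set()).add(s2)         # connect the two closest nodes
    edges.setdefault(s2, set()).add(s1)
    return nodes, edges

# Toy usage: two nodes, one sampled solution
nodes = np.array([[0.0, 0.0], [1.0, 1.0]])
edges = {}
nodes, edges = gng_step(nodes, edges, np.array([0.8, 0.1]))
```

After training on the population, the (normalized) node positions can serve as the adapted reference vectors, so the reference-vector distribution follows the shape of the front the search has discovered.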
In the third and fourth works, we aim to handle expensive irregular problems, which is very challenging. For irregular problems, hundreds of thousands of real function evaluations are usually needed to learn the shape of the irregular Pareto front, whereas only several hundred real function evaluations can be afforded for expensive problems. In addition, model management must not only balance exploitation and exploration but also enhance the sampling of infill solutions in the region of the irregular Pareto front. In the third work, to solve expensive irregular problems, we propose to adapt the reference vectors with the improved growing neural gas network, using as training data both the solutions estimated by Gaussian process models and the solutions that have been evaluated with the real objective functions. In this way, the algorithm can learn the irregularity of the Pareto front to some extent. We also propose a new model management strategy that balances diversity and convergence according to the adaptive reference vectors tuned by the growing neural gas network. It is well known that effective model management relies heavily on the design and optimization of the acquisition function. For expensive many-objective problems, to obtain a set of solutions trading off between different objectives, the acquisition function must balance convergence and diversity well. Moreover, optimizing the acquisition function can be difficult because of its multi-modality. Therefore, in the last piece of work, we propose a reference vector based adaptive model management strategy to balance convergence and diversity, in which two optimization processes built on two sets of reference vectors are adopted to optimize an amplified upper confidence bound acquisition function.
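For reference, the standard upper confidence bound acquisition on which such strategies build combines the Gaussian process posterior mean and uncertainty (the amplified variant proposed in the thesis modifies this form and is not reproduced here):

```latex
\alpha_{\mathrm{UCB}}(\mathbf{x}) = \mu(\mathbf{x}) + \kappa \, \sigma(\mathbf{x}),
```

where $\mu(\mathbf{x})$ and $\sigma(\mathbf{x})$ are the posterior mean and standard deviation of the Gaussian process model, and $\kappa \ge 0$ trades off exploitation (small $\kappa$) against exploration (large $\kappa$); for minimization problems the lower confidence bound $\mu(\mathbf{x}) - \kappa\,\sigma(\mathbf{x})$ is used instead.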