Abstract
Neural architecture search (NAS) has emerged as a promising technique for automating neural network design. However, many existing NAS methods rely on manually constructed architecture features, which often fail to capture the intricate structural relationships within network architectures. These limitations can mislead the performance predictors that depend on such features, hindering the search process. To address this challenge, we propose a novel architecture feature extraction framework that automatically learns and optimizes feature representations. Specifically, we utilize graph convolutional networks (GCNs) to extract architecture features and employ contrastive learning to refine them, producing graph embeddings that faithfully represent each architecture. In this embedding space, architectures with similar performance lie close together, while those with differing performance lie farther apart. Building on this representation, we develop an architecture embedding comparator that qualitatively predicts the performance relationship between two architectures. We refer to the proposed method as Embedded Comparator for Evolutionary Neural Architecture Search (EmCENAS). Experimental results across several widely used NAS search spaces demonstrate the effectiveness of our framework, and ablation studies further reveal its advantages and the insights it provides, highlighting its potential to enhance NAS.
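The pipeline described above — GCN-based graph embeddings shaped by a contrastive objective, plus a pairwise comparator — can be illustrated with a minimal NumPy sketch. The layer structure, mean pooling, margin value, and linear comparator head here are illustrative assumptions for exposition, not the paper's exact design.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One GCN layer: add self-loops, symmetrically normalize the adjacency,
    # then apply a linear map and ReLU (illustrative, untrained weights).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def embed(A, X, W1, W2):
    # Two-layer GCN followed by mean pooling over nodes,
    # yielding a single graph embedding for the architecture.
    H = gcn_layer(A, X, W1)
    H = gcn_layer(A, H, W2)
    return H.mean(axis=0)

def contrastive_loss(z1, z2, similar, margin=1.0):
    # Pull embeddings of similar-performance architectures together;
    # push dissimilar pairs at least `margin` apart (hinge on distance).
    d = np.linalg.norm(z1 - z2)
    return d ** 2 if similar else max(0.0, margin - d) ** 2

def comparator(z1, z2, w):
    # Hypothetical comparator head: predicts 1 if the first architecture
    # is judged better, based on a linear score of the embedding difference.
    return 1 if (z1 - z2) @ w > 0 else 0
```

For example, two identical embeddings incur zero loss as a similar pair but the full margin penalty as a dissimilar pair, which is what drives similar-performance architectures together and different-performance ones apart.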