Abstract
The objective of this work is to address one of the recent ITU AI/machine learning challenges, specifically RF power amplifier (PA) behavioral modelling. This paper presents a novel model, the real-valued time-delay temporal convolutional network (RVTDTCN), to learn the long-term memory effects of PAs in 5G base-station transmitters. The proposed model employs dilated convolutions to incorporate information from multiple time steps and capture intricate temporal dependencies, without requiring additional coefficients. In our experiments, the ITU ML5G-PS-007 datasets, collected from commercial 5G base stations, are employed for both training and testing. The learned PA behavior is evaluated using both the normalized mean square error (NMSE) and the adjacent channel error power ratio (ACEPR). The results show that the RVTDTCN model achieves performance competitive with state-of-the-art feedforward neural network (FNN) models while reducing the computational complexity by 60%. Moreover, at similar complexity, the proposed model outperforms the state-of-the-art convolutional neural network (CNN) model by 2-3 dB in both NMSE and ACEPR.

Index Terms—Power amplifier (PA), digital pre-distortion (DPD), neural network (NN), temporal convolutional network (TCN), real-valued time-delay temporal convolutional network (RVTDTCN), long-term memory effects, 5G.
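The abstract's core mechanism is the use of dilated convolutions to widen the temporal receptive field without adding coefficients. A minimal sketch of this idea, assuming a toy causal 1-D convolution and arbitrary illustrative kernel weights (not the paper's RVTDTCN architecture or trained parameters):

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution with a given dilation factor.

    x: (T,) real-valued input sequence
    w: (K,) kernel taps (hypothetical toy weights for illustration)
    Output at time t depends only on x[t], x[t - d], x[t - 2d], ...
    """
    y = np.zeros(len(x))
    for t in range(len(x)):
        for k in range(len(w)):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += w[k] * x[idx]
    return y

# Stacking three layers with dilations 1, 2, 4 and kernel size K = 3
# yields a receptive field of 1 + (K - 1) * (1 + 2 + 4) = 15 samples,
# while the parameter count stays at 3 layers * 3 taps = 9 coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = np.array([0.5, 0.3, 0.2])  # illustrative weights, not trained values
h = x
for d in (1, 2, 4):
    h = dilated_causal_conv1d(h, w, d)
```

Doubling the dilation per layer grows the memory depth exponentially with stack depth, which is how TCN-style models capture long-term PA memory effects at a fixed coefficient budget.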