Abstract
Our earlier work demonstrated that a sufficiently trained recurrent neural network (RNN) can effectively detect base station performance degradations. However, we encountered a performance limit: the accuracy gain diminishes as the RNN deepens. In this paper, we investigate the performance limit of a well-trained RNN by visualising its processes and modelling its internal operation. We first illustrate how inputs following a certain probability density are transformed within the RNN. By linearising the RNN process, we then develop a linear model to analyse this transformation. Using the model, we not only gain insight into the RNN's operational behaviour but also explain the diminishing gains observed in deeper RNNs. Finally, we validate our model and demonstrate that it accurately predicts the performance of a well-trained RNN.