Abstract
Multi-user multiple-input, multiple-output (MU-MIMO) technology has been central to the evolution of wireless networks, since it can provide substantial network gains by enabling the concurrent transmission of a large number of information streams over the same frequency. However, reliably detecting these mutually interfering streams comes at a very high computational cost that increases exponentially with the number of concurrently transmitted streams, making the corresponding MU-MIMO systems highly inefficient in terms of power consumption and processing latency. To unlock the full MU-MIMO potential, alternative computing architectures are therefore required that can detect a large number of information streams in a power-efficient manner. In this context, NeuroMIMO is the first attempt to apply the principles of neuromorphic computing to achieve highly efficient MIMO detection. NeuroMIMO proposes and evaluates two different ways to translate the MIMO detection problem into a neuromorphic one. The first, Massive-NeuroMIMO, is appropriate for massive MIMO systems, where the number of receive (base-station/access-point) antennas is much larger than the number of information streams. The second, Highly-Efficient-NeuroMIMO, is appropriate for the case where the number of transmitted streams approaches the number of base-station antennas, and can reach the performance of the optimal Maximum-Likelihood detector. We discuss the trade-offs between the two NeuroMIMO approaches and show that both can provide substantial power gains compared to their traditional counterparts, even when accounting for the preprocessing overhead required to translate the MIMO detection problem into a neuromorphic one. In addition, despite the limited "speed" of existing neuromorphic chips, we discuss how real-time detection can be achieved, even for a 5G NR system with a 100 MHz operating bandwidth.