Special Topic on Computational Radio Intelligence: One Key for 6G Wireless

Release Date: 2020-03-20 | Author: JIANG Wei and LUO Fa-Long

Editorial on Special Topic:

Computational Radio Intelligence: One Key for 6G Wireless

JIANG Wei and LUO Fa-Long

        The year 2019 saw the first deployments of fifth generation (5G) mobile communications. As we write the editorial for this special issue, countries such as South Korea, the United States, China, Switzerland, the United Kingdom, and Spain have launched commercial 5G services for the general public, and this list is growing quickly and is expected to become much longer in the near future. In recent months, 5G has been a persistent buzzword in the news, attracting broad attention from society at large. It even goes beyond technical and commercial scopes, becoming a frontline of geopolitical contention and conflict. As a revolutionary technology, 5G will penetrate all aspects of society, not only human daily life but also manufacturing, education, health care, and scientific activities, generating tremendous economic and societal benefits. From the perspective of technology research, however, it is already time to start considering what future beyond-5G or sixth generation (6G) mobile networks should be, in order to satisfy the demands for communications and networking in 2030. Although a discussion is ongoing within the wireless community about whether the counting should stop at 5, following the Microsoft Windows approach in which Windows 10 is the ultimate version, several pioneering works on next-generation wireless networks have been initiated. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Focus Group Technologies for Network 2030 (FG NET-2030) was established in July 2018. The Focus Group intends to study the capabilities of networks for the year 2030 and beyond, when networks are expected to support novel forward-looking scenarios, such as holographic-type communications, extremely fast response in critical situations, and the high-precision communication demands of emerging market verticals. The European Commission has started to sponsor beyond-5G research activities, such as its recent Horizon 2020 call, 5G Long Term Evolution, under which a number of pioneering projects will be kicked off at the beginning of 2020. In Finland, the University of Oulu has begun ground-breaking 6G research as part of the Academy of Finland’s flagship program, 6G-Enabled Wireless Smart Society and Ecosystem (6Genesis), which focuses on several challenging research areas, including reliable near-instant unlimited wireless connectivity, distributed computing and intelligence, and materials and antennas for use in future circuits and devices.

        Among the short list of 6G enabling technologies that can currently be envisioned, such as Terahertz communications, visible light communications, photonics-defined radio, holographic radio, super-massive multiple-input multiple-output (MIMO), quantum communications, and dense satellite constellations, artificial intelligence (AI) is the most widely recognized candidate, as it can provide computational radio and network intelligence from the fundamental physical layer up to the network management layer. Owing to its powerful nonlinear mapping and distributed processing capability, machine learning based on deep neural networks is considered a very promising tool for tackling the big challenges imposed on wireless communications and networks by explosively increasing demands in terms of capacity, coverage, latency, efficiency (power, frequency spectrum, and other resources), flexibility, compatibility, quality of experience, and silicon convergence. Mainly categorized into supervised learning, unsupervised learning, and reinforcement learning, machine learning algorithms can be used to provide better channel modelling and estimation in millimeter-wave and terahertz bands; to select a more adaptive modulation (waveform, coding rate, bandwidth, and filtering structure) in massive MIMO; to design more efficient front-end and RF processing (pre-distortion for power amplifier compensation, beamforming, and crest-factor reduction); to deliver a better compromise in self-interference cancellation for full-duplex transmission and device-to-device communications; and to offer more practical solutions for intelligent network optimization, orchestration and management, mobile edge and fog computing, network slicing, and radio resource management related to wireless big data, mission-critical communications, massive machine-type communications, and the tactile internet.
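
        As a concrete, if simplified, taste of the supervised learning category above, the following minimal sketch trains a one-layer softmax classifier to detect noisy QPSK symbols. It is our own toy illustration, not a method from any article in this issue; the constellation, noise level, and training parameters are all hypothetical.

```python
# Toy, hypothetical sketch: supervised learning for a simple physical-layer
# task -- classifying noisy QPSK symbols -- standing in for the deep-network
# detectors discussed in this issue. NumPy only; all parameters are made up.
import numpy as np

rng = np.random.default_rng(0)

# QPSK constellation and a labelled training set (symbols plus AWGN).
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
labels = rng.integers(0, 4, size=20000)
noise = 0.3 * (rng.standard_normal(20000) + 1j * rng.standard_normal(20000))
rx = constellation[labels] + noise
X = np.stack([rx.real, rx.imag], axis=1)        # features: received I/Q

# One-layer softmax classifier trained by batch gradient descent.
W, b = np.zeros((2, 4)), np.zeros(4)
onehot = np.eye(4)[labels]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)                # cross-entropy gradient
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

# Detection accuracy on fresh noisy symbols.
test_labels = rng.integers(0, 4, size=5000)
test_rx = constellation[test_labels] + 0.3 * (
    rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
Xt = np.stack([test_rx.real, test_rx.imag], axis=1)
accuracy = np.mean((Xt @ W + b).argmax(axis=1) == test_labels)
print(f"symbol detection accuracy: {accuracy:.3f}")
```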

        From the perspectives of practical application and research development, this special issue aims to be the first single forum to provide a comprehensive and highly coherent treatment of all the technology aspects of machine learning for wireless communications and networks, covering multipath fading channels, channel coding, physical-layer design, network slicing, resource management, mobile edge architecture, fog computing, and autonomous network management. The call for papers of this special issue attracted excellent submissions in both quality and quantity. After rigorous review, six excellent articles have been selected for publication and organized into the following three groups.

        Consisting of two articles, the first group of this special issue explores replacing conventional model-based statistical methods with data-driven learning approaches in spatial-temporal-spectral radio signal processing, in order to simplify the physical-layer implementation or boost transmission performance. As its title “To Learn or Not to Learn: Deep Learning Assisted Wireless Modem Design” suggests, the first article, by XUE Songyan et al., fundamentally rethinks wireless modem design to answer a frequently asked question: what additional value could artificial intelligence bring to the physical layer? Three case studies are presented: deep learning assisted parallel decoding of convolutional codes for a substantial reduction of decoding latency, deep learning aided multi-user frequency synchronization, and deep learning based coherent multi-user multi-antenna signal detection. By adapting transmission parameters such as the constellation size, coding rate, and transmit power to instantaneous fading channel conditions, adaptive wireless communications can potentially achieve great performance. To realize this potential, accurate channel state information (CSI) is required at the transmitter. However, unless the mobile speed is very low, the obtained CSI quickly becomes outdated owing to the rapid channel variation caused by multipath fading. The second article, “A Machine Learning Method for Prediction of Multipath Channels” by Julian AHRENS et al., investigates the feasibility of predicting fading channels by means of a convolutional neural network. The numerical results verify the effectiveness of machine learning based channel prediction in the presence of outdated CSI. Channel prediction is envisioned to be applicable to a wide variety of adaptive transmission techniques, such as pre-coding and multi-user scheduling in MIMO systems, massive MIMO, beamforming, interference alignment, closed-loop transmit diversity, transmit antenna selection, opportunistic relaying, orthogonal frequency-division multiplexing (OFDM), coordinated multi-point transmission (CoMP), mobility management, and physical-layer security.
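
        To make the channel prediction idea tangible, here is a minimal sketch under stated assumptions: a synthetic sum-of-sinusoids (Jakes-like) fading trace is generated, a least-squares linear predictor, substituted for the article’s convolutional network to keep the sketch self-contained, is fitted to past samples, and its error is compared with naively reusing outdated CSI. The Doppler, model order, and prediction horizon are hypothetical.

```python
# Minimal sketch (hypothetical parameters): predict a fading channel a few
# samples ahead from its history, versus reusing the last known (outdated) CSI.
# A linear least-squares predictor stands in for the article's CNN.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flat-fading trace from a sum-of-sinusoids (Jakes-like) model.
n, f_d, paths = 4000, 0.01, 16                 # samples, normalized Doppler
t = np.arange(n)
phases = rng.uniform(0, 2 * np.pi, paths)
h = sum(np.exp(1j * (2 * np.pi * f_d * np.cos(2 * np.pi * k / paths) * t
                     + phases[k])) for k in range(paths)) / np.sqrt(paths)

order, horizon = 8, 5                          # history length, prediction lag
X = np.array([h[i:i + order] for i in range(n - order - horizon)])
y = np.array([h[i + order - 1 + horizon] for i in range(n - order - horizon)])

# Least-squares fit on the first half of the trace; test on the second half.
m = len(X) // 2
w, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
pred = X[m:] @ w

mse_pred = np.mean(np.abs(pred - y[m:]) ** 2)
mse_outdated = np.mean(np.abs(X[m:, -1] - y[m:]) ** 2)  # reuse last known CSI
print(f"predictor MSE: {mse_pred:.4f}   outdated-CSI MSE: {mse_outdated:.4f}")
```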

        Troubleshooting in mobile networks (system failures, cyberattacks, performance optimization, etc.) still cannot avoid manual operations. A mobile operator has to maintain an operational group with a large number of highly skilled network administrators, leading to a costly operational expenditure (OPEX) that is currently three times the capital expenditure (CAPEX) and keeps rising. 5G and next-generation networks are more complicated and heterogeneous than previous systems, which inevitably imposes a great challenge on manual and semi-automatic network management that is already costly, vulnerable, and time-consuming. Therefore, the second group of this special issue addresses the application of machine learning approaches to realize intelligent and autonomous network management that can keep OPEX at an affordable level, improve system Quality of Service (QoS) and end users’ Quality of Experience (QoE), and shorten the time-to-market of new services. In the third article, entitled “A Case Study on Intelligent Operation System for Wireless Networks”, LIU Jianwei et al. propose a comprehensive and flexible framework for an intelligent operation system. Two use cases are studied to illustrate how machine learning algorithms can automate the anomaly detection and fault diagnosis of key performance indicators in wireless networks. The effectiveness of the proposed machine learning algorithms is demonstrated by experiments on real data. Next, HAN Bin et al. provide a comprehensive overview of machine learning for network slicing resource management in their article “Machine Learning for Network Slicing Resource Management: A Comprehensive Survey”. Two resource management problems in network slicing, namely slice admission control and cross-slice resource management, are discussed, illustrating the benefits of machine learning techniques in improving service flexibility and network resource efficiency.
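
        As a toy flavor of KPI anomaly detection, independent of the framework proposed in the third article, the sketch below flags deviations in a synthetic throughput series with a rolling z-score; the KPI model, window length, and threshold are all hypothetical choices.

```python
# Toy, hypothetical sketch: flag anomalies in a wireless KPI time series with
# a rolling z-score, a simple stand-in for learned anomaly detectors.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic KPI: periodic cell throughput (Mbit/s) with noise and two faults.
t = np.arange(1000)
kpi = 50 + 10 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 1.5, 1000)
kpi[400:410] -= 30              # sudden degradation, e.g., a cell outage
kpi[700:705] += 30              # abnormal spike, e.g., a traffic storm

window, threshold = 48, 4.0     # history window and z-score threshold
anomalies = []
for i in range(window, len(kpi)):
    ref = kpi[i - window:i]
    z = (kpi[i] - ref.mean()) / (ref.std() + 1e-9)
    if abs(z) > threshold:
        anomalies.append(i)

first = anomalies[0] if anomalies else None
print(f"flagged {len(anomalies)} anomalous samples, first at t={first}")
```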

        The demanding new features of advanced networks, e.g., mobile edge computing and fog computing, foster novel services and applications that never emerged in previous networks, such as unmanned aerial vehicles (UAVs), the Internet of Things, connected and automated cars, and the tactile internet. These will in turn impose new technical challenges on cellular networks, challenges that 6G networks are expected to overcome. Organized into the third group, the fifth and sixth articles of this special issue focus on cross-layer optimization for novel network architectures and new services by taking advantage of machine learning techniques. More specifically, the fifth article, “Machine Learning Based Unmanned Aerial Vehicle Enabled Fog-Radio Access Network and Edge Computing” by Mohammed SEID et al., presents the use of machine learning in a UAV-enabled fog-radio access network with an edge computing architecture. Moreover, this article also addresses future research directions for the roles of machine learning in UAV-connected cellular networks. Last but not least, the sixth article of this special issue, “A Survey on Machine Learning Based Proactive Caching” by Stephen ANOKYE et al., provides an overview of smart and efficient mobile edge caching relying on machine learning approaches. Issues affecting edge caching, such as caching entities, policies, and algorithms, are discussed, followed by a summary of challenges and future research directions.
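
        To hint at how learned popularity estimates can drive proactive caching, the following sketch, our own illustration rather than any scheme surveyed in the sixth article, pre-loads an edge cache from an exponential-moving-average popularity estimate over a synthetic Zipf request stream and compares its hit rate with a least-recently-used (LRU) baseline; the workload and parameters are hypothetical.

```python
# Toy, hypothetical sketch: proactive edge caching driven by an exponential-
# moving-average (EMA) popularity estimate, versus a reactive LRU cache.
import numpy as np
from collections import OrderedDict

rng = np.random.default_rng(3)

n_items, cache_size, n_requests = 500, 25, 20000
# Zipf-like request stream over a static content catalogue.
p = 1.0 / np.arange(1, n_items + 1)
p /= p.sum()
requests = rng.choice(n_items, size=n_requests, p=p)

# Proactive cache: every 100 requests, pre-load the items with the
# highest EMA popularity score.
ema, alpha = np.zeros(n_items), 0.05
cache, hits = set(), 0
for i, item in enumerate(requests):
    if item in cache:
        hits += 1
    ema *= (1 - alpha)
    ema[item] += alpha
    if i % 100 == 0:
        cache = set(np.argsort(ema)[-cache_size:])
print(f"proactive hit rate: {hits / n_requests:.3f}")

# Reactive LRU baseline for comparison.
lru, lru_hits = OrderedDict(), 0
for item in requests:
    if item in lru:
        lru_hits += 1
        lru.move_to_end(item)
    else:
        lru[item] = True
        if len(lru) > cache_size:
            lru.popitem(last=False)
print(f"LRU hit rate:       {lru_hits / n_requests:.3f}")
```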

        In concluding this introduction to the special issue and its six articles, we would like to thank all the authors for their valuable contributions. We also express our sincere gratitude to all the reviewers for their timely and insightful comments on the submitted articles. We hope that this special issue proves informative and useful across the many aspects of applying machine learning to next-generation wireless networks.
