Ioannidis to Lead $1M NSF Grant for Real-Time Learning for Next Generation Wireless Systems
ECE Assistant Professor Stratis Ioannidis is leading a $1M NSF grant, together with Professors Jennifer Dy and Tommaso Melodia, Associate Professor Kaushik Chowdhury, and Assistant Professor Yanzhi Wang, to develop “Efficient and Adaptive Real-Time Learning for Next Generation Wireless Systems.”
Abstract Source: NSF
Emerging wireless standards and the promise of 5G communication are driven by the need to attain faster data rates and ultra-low latency. Many bleeding-edge applications, such as community/shared virtual reality experiences and self-driving cars, crucially rely on the ubiquitous availability and real-time reconfigurability of high-speed wireless links, which in turn strongly rely on the ability of next-generation wireless devices to perform a broad variety of inference tasks in real time. The latency requirements associated with these applications imply the need for improved and accelerated machine learning through dedicated hardware. Moreover, due to the unpredictable nature of the wireless channel, inference algorithms must be able to adapt and evolve in the presence of an unfamiliar environment. This project seeks to solve this foundational challenge, and successful outcomes could yield unprecedented efficiency improvements in next generation wireless systems. Ideas and findings from the project are incorporated into a number of accessible seminar talks geared toward high-school and undergraduate students, to encourage further interest in engineering and science, and into multi-disciplinary tutorials aimed at both the wireless networking and machine learning communities.
The project, executed by a multidisciplinary team of machine learning, systems, and networking researchers, advances the state of the art through novel deep learning architectures tailored to inference tasks pertinent to next generation wireless devices. It also incorporates novel model compression techniques, producing a hardware-friendly structured pruning approach for fully-connected and convolutional layers of deep neural networks, combined with a novel quantization scheme learned jointly during training. The project’s quantization scheme and its hyper-parameter tuning are co-designed with a field-programmable gate array (FPGA) hardware implementation and determined via deep reinforcement learning. The adaptation of parts of the network in the presence of new samples is enabled by blending lifelong learning approaches, such as dynamic networks and complementary learning, into the training objectives.
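To illustrate the two compression ideas mentioned above, here is a minimal NumPy sketch of hardware-friendly structured pruning (removing whole convolutional output channels by L2 norm) and uniform symmetric weight quantization. This is not the project’s actual method; the function names, the magnitude-based channel-selection criterion, and the fixed bit width are illustrative assumptions for a standalone example.

```python
import numpy as np

def prune_channels(conv_w, keep_ratio):
    """Structured pruning sketch: keep only the output channels of a
    convolutional weight tensor (out_ch, in_ch, kH, kW) with the largest
    L2 norms. Removing whole channels keeps dense, hardware-friendly shapes."""
    norms = np.sqrt((conv_w ** 2).reshape(conv_w.shape[0], -1).sum(axis=1))
    k = max(1, int(round(keep_ratio * conv_w.shape[0])))
    keep = np.sort(np.argsort(norms)[-k:])  # indices of the k strongest channels
    return conv_w[keep], keep

def quantize(w, num_bits=8):
    """Uniform symmetric quantization sketch: map weights to num_bits signed
    integer levels and return the dequantized weights plus the scale factor."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, scale

# Illustrative usage on a random 8-channel 3x3 convolution layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.5)   # 8 channels -> 4 channels
dequantized, scale = quantize(pruned, num_bits=8)  # round-trip through int8 levels
```

In a training-time version of this idea, the pruning mask and quantization scale would be updated jointly with the weights rather than applied once after the fact, and the bit width would be a tunable hyper-parameter rather than fixed at 8.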