Spin-Out Company from ECE Lab Receives NSF SBIR Phase I Funding


ECE Assistant Professor Yanzhi Wang is co-founder of the startup CoCoPIE, LLC, which was awarded a $250K NSF Small Business Innovation Research (SBIR) Phase I grant for “Enabling Real-Time AI on End Devices through Compression-Compilation Co-Design.” CoCoPIE's core technology grew out of research conducted in his lab and a collaborating lab. That technology is a proprietary compression-compilation optimization, which jointly optimizes model compression for deep learning and automatic code generation in the compiler to achieve the best deep neural network acceleration performance on edge devices. The CoCoPIE framework enables large-scale deep neural networks for tasks such as object detection, pose estimation, activity detection, video resolution upscaling, and natural language processing to run on tens of billions of existing mobile devices through a pure software solution.


Abstract Source: NSF

The broader impact of this Small Business Innovation Research (SBIR) Phase I project lies in the array of new opportunities it creates for expanding the use of machine intelligence. By providing an efficient way to transform deep learning models to best fit the constraints of end devices and real-time applications, this project will shorten the time to market for artificial intelligence applications by orders of magnitude, and hence significantly accelerate the development and deployment of intelligent software in health, commerce, finance, defense, social networks, and many other areas.

This Small Business Innovation Research (SBIR) Phase I project aims to address important barriers to the efficient development of real-time artificial intelligence applications on end devices (smartphones, drones, etc.). It does this through a breakthrough technology, compression-compilation co-design. Compression and compilation are the two key steps in fitting a deep learning model onto hardware for efficient execution. Model compression reduces the size of deep learning models; compilation generates executable code from a given deep learning model. The principle of compression-compilation co-design is to design the two components hand in hand. The technology synergizes a set of novel model compression methods with compression-aware code compilation techniques. The result is a technology that achieves several-fold higher model compression rates than the state of the art, several times faster speed, better energy efficiency, and satisfactory accuracy.
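To make the co-design idea concrete, the following is a minimal, self-contained Python sketch, not CoCoPIE's actual implementation. The "compression" step is a simple block-pruning heuristic, and the "compilation" step generates a kernel that exploits the known pruning pattern to skip zeroed blocks entirely. The function names (block_prune, compile_sparse_matmul), the block size, and the pruning heuristic are all illustrative assumptions.

```python
# Illustrative sketch of compression-compilation co-design (hypothetical names,
# NOT the CoCoPIE implementation). Compression: block pruning of a weight matrix.
# Compilation: a kernel specialized to the surviving blocks, so pruned blocks
# cost nothing at run time.

import numpy as np

BLOCK = 4  # block size shared by the pruner and the generated kernel


def block_prune(weights, keep_ratio=0.5):
    """Compression step: zero out whole BLOCK x BLOCK blocks with the
    smallest norm, keeping only `keep_ratio` of the blocks."""
    rows, cols = weights.shape
    blocks = []
    for i in range(0, rows, BLOCK):
        for j in range(0, cols, BLOCK):
            norm = np.linalg.norm(weights[i:i + BLOCK, j:j + BLOCK])
            blocks.append((norm, i, j))
    blocks.sort(reverse=True)
    kept = blocks[:int(len(blocks) * keep_ratio)]
    pruned = np.zeros_like(weights)
    for _, i, j in kept:
        pruned[i:i + BLOCK, j:j + BLOCK] = weights[i:i + BLOCK, j:j + BLOCK]
    return pruned, [(i, j) for _, i, j in kept]


def compile_sparse_matmul(kept_blocks):
    """Compilation step: because the pruning pattern is known at compile time,
    the generated kernel iterates only over the surviving blocks."""
    def kernel(weights, x):
        y = np.zeros(weights.shape[0])
        for i, j in kept_blocks:  # pruned blocks are skipped entirely
            y[i:i + BLOCK] += weights[i:i + BLOCK, j:j + BLOCK] @ x[j:j + BLOCK]
        return y
    return kernel


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((16, 16))
    x = rng.standard_normal(16)
    W_pruned, kept = block_prune(W, keep_ratio=0.5)
    fast_matmul = compile_sparse_matmul(kept)  # pattern-aware "compiled" kernel
    assert np.allclose(fast_matmul(W_pruned, x), W_pruned @ x)
    print(f"kept {len(kept)} of {(16 // BLOCK) ** 2} blocks")
```

The point of the sketch is that the compression and compilation steps share information: because the compiler knows the exact sparsity pattern produced by pruning, the generated kernel needs no runtime checks for zeroed weights, which is the kind of synergy the co-design approach described above aims for.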
