Deep Neural Networks (DNNs) can solve extremely challenging tasks by stacking a large number of (convolutional) layers with thousands of neurons and millions of learnable parameters. Their success comes mainly from their ability to learn complex non-linear functions from examples by minimizing a target loss function. However, DNNs are often over-parameterized with respect to the computational resources available at deployment time, such as memory or bandwidth in embedded and mobile systems.
Over the past decade, considerable research effort has been devoted to compressing neural networks, and ISO's MPEG group has even finalized a standard, testifying to the industrial interest in the problem. DNN compression is typically achieved via a combination of pruning, i.e. dropping parameters or neurons altogether, and quantization, i.e. representing parameters over fewer bits. Among the reported benefits are a reduced memory footprint, computational speedups, and, in some contexts, somewhat improved learning ability. However, there is a lack of consensus on the practical yields of such compression techniques when it comes to their implementation on hardware platforms such as FPGAs and ASICs.
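To make the two techniques concrete, the following is a minimal illustrative sketch (not any standardized method): magnitude pruning zeroes the smallest-magnitude weights, and uniform quantization snaps each weight to one of 2^bits evenly spaced levels. Function names and parameters here are our own, for illustration only.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the given fraction of smallest-magnitude weights (illustrative)."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def uniform_quantize(w, bits=8):
    """Map weights onto 2**bits evenly spaced levels over their range (illustrative)."""
    lo, hi = float(w.min()), float(w.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    codes = np.round((w - lo) / scale)   # integer codes in [0, levels]
    return codes * scale + lo            # dequantized approximation of w

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
w_pruned = magnitude_prune(w, sparsity=0.5)
w_quant = uniform_quantize(w, bits=8)
print((w_pruned == 0).mean())            # fraction of zeroed weights, roughly 0.5
```

In practice such operations are applied per layer during or after training, and the pruned/quantized model is usually fine-tuned to recover accuracy; the hardware yield of these savings is exactly the open question the workshop targets.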
The aim of SCENA is to attract top researchers and practitioners from the worlds of both algorithmic design and hardware design, bridging the gap between these two communities.
The topics covered within SCENA include, but are not limited to:
- Efficient deep learning solutions for industry
- Pruning and quantization for deep learning
- Advances in knowledge distillation for efficient deep learning
- On-device efficiency measurements for deep neural networks
- New frontiers for evaluation metrics in efficient deep learning
- Deployment of deep models on portable devices
- Deployment of deep models on FPGAs
- High-performance deep learning on the edge
- Integrated hardware/digital design for efficient deep learning
- New methods for frugal AI
For all submission details, please refer to the official ICIP’22 call for papers.
Submission deadline for invited papers: 16 February 2022.