Publications

Each entry below lists the paper's description, the organization behind the work, and the publication source.

Security Domain

This paper presents a countermeasure against logical side-channel attacks in which the protection mechanisms are placed in the routers of a Network-on-Chip rather than in the nodes/IPs. This placement has several benefits.

Organization: TU Delft
Source: ISVLSI 2020

This paper presents a new countermeasure for AES that is based on confusion.

Organization: TU Delft
Source: SAMOS 2020

This paper presents a constant-time hardware implementation of the lattice-based algorithm Streamlined NTRU Prime. A short sketch of the constant-time coding pattern follows this entry.

Organization: NXP GE
Source: CARDIS 2020
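Constant-time implementations avoid secret-dependent branches and memory accesses so that execution time leaks no information about the key. The paper's implementation is in hardware; the minimal Python sketch below only illustrates the branch-free selection pattern such designs rely on. The function name and word width are assumptions for illustration, not taken from the paper.

```python
def ct_select(bit, a, b, width=16):
    """Branch-free select: return a if bit == 1 else b.

    A constant-time style sketch: the result is computed with masks instead of
    an if/else, so control flow does not depend on the secret bit. (In the
    paper this discipline is enforced in hardware, not in Python.)"""
    mask = -bit & ((1 << width) - 1)           # all ones if bit == 1, else all zeros
    return (a & mask) | (b & ~mask & ((1 << width) - 1))


# Example: both calls execute exactly the same sequence of operations.
assert ct_select(1, 0x1234, 0x5678) == 0x1234
assert ct_select(0, 0x1234, 0x5678) == 0x5678
```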

In this paper we present a new technique that could be used in asymmetric cryptography to protect against side-channel attacks. The technique is based on exploring multiple bits of the key simultaneously and comes with zero overhead. This paper was submitted in September 2020.

Organization: TU Delft
Source: VTS 2021

Machine Learning Domain

Activation sparsity improves performance in sparsity-aware neural network accelerators. Since the dominant operation in a DNN is the multiply-accumulate of activations and weights, skipping operations with at least one zero operand enables efficient deep learning inference. While spatial sparsification of activations is a well-known method, introducing spatiotemporal sparsity in DNNs is less explored; temporal sparsity exploitation, on the other hand, is one of the main features of neuromorphic processing and bio-inspired spiking neural networks. In this paper, we introduce a new layer, called the delta_activation layer, to induce temporal sparsity in any DNN model during training, and we report an almost 5x improvement in activation sparsity. As the application domain of DNNs moves from static signal processing (e.g. image processing) toward streaming signal processing (e.g. video and audio), we believe it makes sense to adopt temporal sparsity techniques in conventional DNN frameworks. (status: submitted, pending review) An illustrative sketch of such a delta-style layer follows this entry.

Organization: IMEC
Source: Frontiers in Neuroscience, Neuromorphic Engineering section
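As a rough illustration of the idea described above, the sketch below shows a hypothetical delta-style activation layer in NumPy: it forwards only the change in activation between consecutive time steps and zeroes small changes, so streaming inputs with little frame-to-frame variation produce mostly zero activations. The class name, threshold value, and state handling are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

class DeltaActivation:
    """Illustrative delta-style activation layer (a sketch, not the paper's layer).

    Forwards only the change relative to the previous time step and rounds
    small changes to zero, inducing temporal sparsity in the activations."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold  # hypothetical sparsity threshold
        self.prev = None            # activation state from the previous time step

    def __call__(self, x):
        if self.prev is None:
            self.prev = np.zeros_like(x)
        delta = x - self.prev
        delta[np.abs(delta) < self.threshold] = 0.0  # suppress small changes
        self.prev = self.prev + delta                # downstream layers can rebuild x
        return delta


# Example: two nearly identical "frames" -> the second delta is almost all zeros,
# so a sparsity-aware accelerator could skip most multiply-accumulate operations.
layer = DeltaActivation()
frame = np.random.rand(8)
print(np.mean(layer(frame) == 0.0))          # first frame: mostly non-zero deltas
print(np.mean(layer(frame + 0.01) == 0.0))   # near-duplicate frame: mostly zeros
```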

The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) on the edge is a candidate solution that can meet strict energy and cost reduction constraints in application areas of smart sensing technologies. Towards this goal, we present μBrain: an event-driven digital architecture that exploits event-based processing to reduce the overall energy consumption of an always-on system. We present an instantiation of the μBrain architecture in 40 nm Complementary Metal Oxide Semiconductor (CMOS) digital technology and demonstrate its efficiency in an Internet of Things (IoT) radar signal classification application, with a power consumption of 70 μW and an energy consumption of 340 pJ per classification. We demonstrate that μBrain offers near-sensor processing, is fully synthesizable, and allows for a fast development-to-deployment cycle in Application Specific Integrated Circuits (ASICs). The architecture permits low development costs while taking advantage of digital scaling in advanced node technologies. An illustrative sketch of event-driven processing follows this entry.

Organization: IMEC
Source: Frontiers in Neuroscience, Neuromorphic Engineering section (published in 2021)
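To make the event-driven idea concrete, the sketch below is a purely conceptual Python model of event-based processing, not the μBrain RTL or its actual datapath: neuron potentials are updated only when an input event arrives, so the amount of computation (and, in a hardware realization, the energy) scales with activity rather than with layer size. All names, shapes, and parameters are assumptions for illustration.

```python
import numpy as np

def event_driven_layer(events, weights, thresholds):
    """Conceptual event-driven layer update (a sketch, not the μBrain design).

    events     : list of (time, input_index) spikes, assumed time-ordered
    weights    : array of shape (n_inputs, n_neurons)
    thresholds : firing thresholds, shape (n_neurons,)

    Only neurons connected to an arriving event do any work; when no events
    arrive, nothing is computed, which is what keeps an always-on system cheap.
    """
    potentials = np.zeros(weights.shape[1])
    out_events = []
    for t, src in events:
        potentials += weights[src]                 # update triggered by this event only
        fired = np.flatnonzero(potentials >= thresholds)
        out_events.extend((t, int(dst)) for dst in fired)
        potentials[fired] = 0.0                    # reset neurons that spiked
    return out_events


# Example: a sparse spike stream into a tiny 4-input, 3-neuron layer.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(4, 3))
spikes = [(0, 1), (2, 3), (5, 1)]
print(event_driven_layer(spikes, w, thresholds=np.full(3, 1.5)))
```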