Exploring multi-task learning in the context of two masked AES implementations

Abstract. This paper investigates several ways of applying multi-task learning to two masked AES implementations (via the ASCAD-r and ASCAD-v2 databases). Building on multi-task learning, we propose novel ideas based on encoding the relationships between the multiple learning tasks. We provide a wide range of experiments to understand the performance of multi-task strategies against the current state of the art. We show that multi-task learning benefits from the accumulation of constraints that guide the propagation of the gradient. These strategies reach new milestones against protected implementations when knowledge of the randomness is not assumed. We establish a new state of the art on ASCAD-r and ASCAD-v2, including models that, for the first time, defeat all masks of the affine masking on ASCAD-v2.