An Embarrassingly Simple Approach for Intellectual Property Rights Protection on Recurrent Neural Networks

Capitalising on deep learning models to offer Natural Language Processing (NLP) solutions as part of Machine Learning as a Service (MLaaS) has generated handsome revenues. At the same time, it is well known that creating these lucrative deep models is non-trivial. Therefore, protecting the intellectual property rights (IPR) of these inventions from being abused, stolen and plagiarized is vital. This paper proposes a practical approach for IPR protection on recurrent neural networks (RNN) without all the bells and whistles of existing IPR solutions. In particular, we introduce the Gatekeeper concept, which mirrors the recurrent nature of the RNN architecture to embed keys. We also design the model training scheme such that the protected RNN model retains its original performance iff a genuine key is presented. Extensive experiments show that our protection scheme is robust and effective against ambiguity and removal attacks, in both white-box and black-box protection settings, on different RNN variants. Code is available at https://github.com/zhiqin1998/RecurrentIPR.
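
To make the key-embedding idea concrete, below is a minimal sketch, in the spirit of the Gatekeeper, of how an owner-defined key could gate the recurrent update so that task performance depends on presenting the genuine key. This assumes a PyTorch implementation; the class name KeyGatedLSTMCell, the sigmoid key gate, and the point at which it is applied are illustrative assumptions rather than the authors' exact formulation (see the linked repository for the actual method).

```python
# Hypothetical sketch of a key-gated recurrent cell (not the authors' exact design).
import torch
import torch.nn as nn


class KeyGatedLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # Projects the owner's key to a per-unit gate over the hidden state.
        self.key_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, state, key):
        h, c = self.cell(x, state)
        # Gate the hidden state with the key at every time step, so the
        # recurrent dynamics (and hence task performance) depend on the key.
        gate = torch.sigmoid(self.key_proj(key))
        return h * gate, c


# Usage sketch: training with the genuine key preserves performance;
# running with a random (forged) key is expected to degrade it.
cell = KeyGatedLSTMCell(input_size=32, hidden_size=64)
x = torch.randn(8, 32)                                  # one time step, batch of 8
state = (torch.zeros(8, 64), torch.zeros(8, 64))
genuine_key = torch.randn(64)                           # owner-defined key (assumed form)
h, c = cell(x, state, genuine_key)
```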
