Tensor-Train Decomposed Synaptic Interconnections for Compact and Scalable Photonic Neural Networks

We propose a compact and scalable photonic neural network architecture based on tensor-train decomposed synaptic interconnections, with Mach-Zehnder interferometers (MZIs) as the building blocks. At the 1024×1024 scale, our architecture requires 1164× fewer MZIs and 10.2× fewer cascaded stages than a conventional MZI-mesh implementation.
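To give a sense of where the parameter saving comes from, the following sketch compares the parameter count of a dense 1024×1024 weight matrix against a tensor-train (TT) factorization of it. The mode sizes (1024 = 4⁵, five modes of size 4) and the uniform TT-rank are illustrative assumptions, not the paper's exact design:

```python
# Illustrative sketch: TT factorization stores a d-mode weight tensor as a
# chain of cores G_k of shape (r_{k-1}, m_k, n_k, r_k) with boundary ranks
# r_0 = r_d = 1, so the parameter count grows linearly in d instead of
# quadratically in the matrix size.

def tt_param_count(in_modes, out_modes, rank):
    """Total number of parameters in the TT cores, assuming a uniform
    internal TT-rank (a hypothetical choice for this sketch)."""
    d = len(in_modes)
    ranks = [1] + [rank] * (d - 1) + [1]
    return sum(ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
               for k in range(d))

dense = 1024 * 1024                                  # 1,048,576 weights
tt = tt_param_count([4] * 5, [4] * 5, rank=4)        # 896 core parameters
print(f"dense={dense}, tt={tt}, compression={dense / tt:.0f}x")
```

With these assumed shapes the compression ratio is on the order of 1000×, in the same regime as the MZI-count reduction quoted above; the exact figure depends on the mode factorization and TT-ranks chosen.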
