Compliance with the GDPR (General Data Protection Regulation) and similar regulations is an emerging issue, now and for the future, for any database holding personal or proprietary data. The most secure arrangement is to keep all data handling standalone for the duration of processing, but it is then difficult to obtain the relatively high processing performance required for AI workloads such as deep learning. This paper presents one development approach for that purpose. Deep learning consists of two functions: training on large databases and inference for the target application. Inference can be carried out on simple hardware, so a standalone module is straightforward to build. Training, by contrast, is built from enormous numbers of repeated calculations and therefore demands heavy hardware or a cloud interface, making a mobile, standalone module difficult to realize. Our development approach introduces two novel functions to resolve this heavy training workload. The first problem is how to access SSD storage at high speed; the second is how to reduce the power and calculation time consumed by the massive repeated processing. Our approach is based on lookup tables (LUTs), replacing arithmetic with "zero calculation" to cut power and shorten calculation time. The architecture employs dynamic reconfiguration through a Memory Logic Conjugated System (MLCS), a non-von Neumann processor that is readily realized for high-speed execution. Our trial demonstration module achieves 1 W of effective power and 400 Mops of processing performance on a commercially available FPGA evaluation board; implemented as a custom-designed SoC, the same architecture is estimated to reach 0.5 W and 2 Gops. Both figures are fully sufficient for standalone modules executing mid-range deep learning with seamless training and inference in real-time sequences. Moreover, this dynamically reconfigurable operation delivers a level of performance that has not been realized in any processing system to date.
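To illustrate the "zero calculation" idea in the simplest possible terms, the sketch below replaces the multiplications of a multiply-accumulate step with lookups into a precomputed product table. This is only a conceptual software analogy of the LUT-based approach, not the MLCS hardware itself; the 4-bit quantization width, the table layout, and all names are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: multiply-accumulate by table lookup ("zero calculation").
# Assumes 4-bit quantized activations and weights (an assumption, not a
# parameter stated in the paper).
import numpy as np

BITS = 4
LEVELS = 1 << BITS  # 16 quantization levels

# Precompute every possible product once; at run time each multiply
# becomes a single table lookup.
PRODUCT_LUT = np.fromfunction(
    lambda a, w: a * w, (LEVELS, LEVELS), dtype=np.int32
)

def mac_via_lut(activations, weights):
    """Dot product using only lookups and additions (no multiplies)."""
    return int(PRODUCT_LUT[activations, weights].sum())

# Usage example: compare against an ordinary dot product.
rng = np.random.default_rng(0)
acts = rng.integers(0, LEVELS, size=8)
wts = rng.integers(0, LEVELS, size=8)
assert mac_via_lut(acts, wts) == int(np.dot(acts, wts))
```

In hardware, such a table can live directly in on-chip memory next to the logic that consumes it, which is the kind of memory-logic coupling the MLCS architecture exploits for power and latency reduction.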