Optimal calibration of the learning rate in closed-loop adaptive brain-machine interfaces

Closed-loop decoder adaptation (CLDA) can improve brain-machine interface (BMI) performance. CLDA methods use batches of data to refit the decoder parameters during closed-loop operation. Recently, dynamic state-space algorithms have also been designed to fit the parameters of a point process filter (PPF) decoder. A main design parameter that needs to be selected in any CLDA algorithm is the learning rate, i.e., how fast the decoder parameters should be updated on the basis of new neural observations. So far, the learning rate of CLDA algorithms has been selected empirically using ad hoc methods. Here we develop a principled framework to calibrate the learning rate in adaptive state-space algorithms. The learning rate introduces a trade-off between the convergence rate and the steady-state error covariance of the estimated decoder parameters. Hence our algorithm first finds an analytical upper bound on the steady-state error covariance as a function of the learning rate. It then finds the inverse mapping to select the optimal learning rate based on the maximum allowable steady-state error. Using numerical BMI experiments, we show that the calibration algorithm selects a learning rate that meets the requirement on the steady-state error level while achieving the fastest convergence rate possible for that level.
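The calibration idea can be illustrated with a minimal scalar sketch; this is an illustrative analogue under simplifying assumptions, not the paper's state-space algorithm, and all names and parameters below are hypothetical. Assuming an adaptive update w ← w + η(y − w) with noisy observations y of a fixed true parameter (noise variance σ²), the steady-state error variance is ησ²/(2 − η); inverting this mapping for a maximum allowable error level yields the largest, and hence fastest-converging, learning rate η that still meets the requirement.

```python
import numpy as np

def steady_state_error_var(eta, noise_var):
    # For the scalar update w[t+1] = w[t] + eta * (y[t] - w[t]) with
    # y[t] = w_true + n[t], n[t] ~ N(0, noise_var), the error
    # e[t] = w[t] - w_true follows e[t+1] = (1 - eta) * e[t] + eta * n[t],
    # whose stationary variance is eta * noise_var / (2 - eta).
    return eta * noise_var / (2.0 - eta)

def calibrate_learning_rate(max_error_var, noise_var):
    # Inverse mapping: solve eta * noise_var / (2 - eta) = max_error_var
    # for eta. This is the largest (fastest-converging) rate whose
    # steady-state error variance does not exceed the allowable level.
    return 2.0 * max_error_var / (noise_var + max_error_var)

# Numerical check of the trade-off between convergence speed and
# steady-state error in this toy setting.
rng = np.random.default_rng(0)
w_true, noise_var, n_steps = 1.0, 0.5, 20000
eta = calibrate_learning_rate(max_error_var=0.01, noise_var=noise_var)

w = 0.0
errors = np.empty(n_steps)
for t in range(n_steps):
    y = w_true + rng.normal(scale=np.sqrt(noise_var))
    w += eta * (y - w)          # adaptive update with the calibrated rate
    errors[t] = w - w_true

print(f"calibrated eta = {eta:.4f}")
print(f"predicted steady-state error var = {steady_state_error_var(eta, noise_var):.5f}")
print(f"empirical error var (second half) = {errors[n_steps // 2:].var():.5f}")
```

In this sketch the error mean decays as (1 − η)^t, so a larger η converges faster but settles at a larger steady-state variance; calibrating η against the allowable error level resolves this trade-off, mirroring the role the learning rate plays in the full matrix-valued CLDA setting.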