A new method of Bayesian causal inference in non-stationary environments

Bayesian inference is the process of narrowing down the hypotheses (causes) to the one that best explains the observational data (effects). To estimate a cause accurately, a considerable amount of data must be observed over as long a period as possible. However, the object of inference is not always constant. In such non-stationary cases, a method such as the exponential moving average (EMA) with a discounting rate is used; to respond quickly to a sudden change, the discounting rate must be increased. This establishes a trade-off: increasing the discounting rate improves followability but reduces accuracy. Here, we propose an extended Bayesian inference (EBI) that incorporates human-like causal inference. We show that incorporating this causal inference introduces both learning and forgetting effects into Bayesian inference. We evaluate the estimation performance of the EBI on the task of learning a dynamically changing Gaussian mixture model, comparing it with the EMA and a sequential discounting expectation-maximization algorithm. The EBI is shown to modify the trade-off observed in the EMA.
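To make the EMA trade-off concrete, below is a minimal sketch in Python (the stream, the jump point, and the gamma values are illustrative assumptions, not taken from the paper). The update x̂_t = (1 − γ)·x̂_{t−1} + γ·x_t discounts past observations at rate γ: a large γ tracks a sudden change quickly (followability) but leaves the estimate noisy, while a small γ averages out noise but lags after the change.

```python
import random

def ema_estimate(stream, gamma):
    """Exponential moving average with discounting rate gamma in (0, 1]."""
    estimate = None
    for x in stream:
        estimate = x if estimate is None else (1 - gamma) * estimate + gamma * x
        yield estimate

# Non-stationary stream: the true mean jumps from 0.0 to 5.0 at t = 500.
random.seed(0)
stream = [random.gauss(0.0 if t < 500 else 5.0, 1.0) for t in range(1000)]

for gamma in (0.01, 0.1, 0.5):
    est = list(ema_estimate(stream, gamma))
    # Accuracy: mean absolute error over the last 50 steps before the jump.
    accuracy_err = sum(abs(e) for e in est[450:500]) / 50
    # Followability: absolute error 20 steps after the jump.
    follow_err = abs(est[519] - 5.0)
    print(f"gamma={gamma}: pre-jump error={accuracy_err:.3f}, "
          f"error 20 steps after jump={follow_err:.3f}")
```

Running this, the smallest gamma yields the lowest pre-jump error but the largest post-jump error, and vice versa; this is the trade-off the EBI is proposed to modify.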
