Information-Gain Computation

Despite large incentives, correctness in software remains an elusive goal. Declarative programming, in which algorithms are derived from a specification of the desired behavior, offers hope: programming in terms of specifications rather than algorithms yields a combinatorial reduction in complexity, and arbitrary desired properties can be expressed and enforced directly in the specification. However, poor performance has kept programming with declarative specifications from becoming a mainstream technique for general-purpose programming. To address the performance bottleneck in deriving an algorithm from a specification, I propose information-gain computation, a framework in which an adaptive evaluation strategy efficiently searches for the derivations that provide information about a query most directly. Within this framework, opportunities to compress the search space present themselves, suggesting that information-theoretic bounds on the performance of such a system might be articulated and a system designed to achieve them. In a preliminary empirical study of adaptive evaluation for a simple test program, the evaluation strategy successfully adapts to evaluate a query efficiently.
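To make the idea of an adaptive evaluation strategy concrete, the following is a minimal illustrative sketch, not the paper's actual system: it treats the candidate evaluation branches of a query as arms of a multi-armed bandit and uses the UCB1 rule to concentrate evaluation effort on the branch whose steps have yielded the most information so far. The names `adaptive_search`, `gain_fn`, and the per-branch reward model are assumptions introduced for this example.

```python
import math

def ucb1_select(stats, t, c=math.sqrt(2)):
    """Pick the branch with the highest UCB1 score (mean gain + exploration bonus)."""
    best, best_score = None, -float("inf")
    for branch, (pulls, total_gain) in stats.items():
        if pulls == 0:
            return branch  # try every branch at least once
        score = total_gain / pulls + c * math.sqrt(math.log(t) / pulls)
        if score > best_score:
            best, best_score = branch, score
    return best

def adaptive_search(branches, gain_fn, budget=1000):
    """Repeatedly evaluate the branch expected to yield the most information.

    gain_fn(branch) returns the information gain (a hypothetical, observed
    quantity here) from performing one evaluation step on that branch.
    """
    stats = {b: (0, 0.0) for b in branches}  # branch -> (pulls, total gain)
    for t in range(1, budget + 1):
        b = ucb1_select(stats, t)
        pulls, total = stats[b]
        stats[b] = (pulls + 1, total + gain_fn(b))
    # Report the branch with the highest average gain per evaluation step.
    return max(stats, key=lambda b: stats[b][1] / max(stats[b][0], 1))
```

Under this toy model, a branch that consistently delivers more information per step is evaluated far more often than the alternatives, while the logarithmic exploration bonus keeps the other branches from being starved entirely.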
