Abstract. Despite large incentives, correctness in software remains an elusive goal. Declarative programming techniques, in which algorithms are derived from a specification of the desired behavior, offer hope of addressing this problem: programming in terms of specifications rather than algorithms yields a combinatorial reduction in complexity, and arbitrary desired properties can be expressed and enforced directly in specifications. However, performance limitations have kept programming with declarative specifications from becoming a mainstream technique for general-purpose programming, because no strategy for deriving algorithms from specifications yet exists that is both efficient and fully general. To address this bottleneck, I propose information-gain computation, a framework in which an adaptive evaluation strategy searches for algorithms that provide information about a query via the most efficient routes. Within this framework, opportunities arise to compress the search space, suggesting that information-theoretic bounds on the performance of such a system might be articulated and a system designed to achieve them. Computing the information measures on which this strategy rests depends crucially on a probabilistic semantics for the relations represented by predicates, which may either already be present in a probabilistic logic language or be superimposed on a pure logic language. I describe a prototype implementation of Fifth, a system that implements these techniques, and a preliminary empirical study of adaptive evaluation on a simple test program. In the test, the evaluation strategy adapts successfully, efficiently evaluating a query whose pathological features would prevent its evaluation by standard general-purpose strategies.
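
The adaptive evaluation strategy sketched in the abstract amounts to a sequential choice among alternative derivation routes under uncertainty, which can be framed in multi-armed-bandit terms. The Python sketch below illustrates that framing only; it is not the Fifth implementation. Each hypothetical Route stands for one way of deriving information about a query (for instance, one clause ordering), its outcomes are simulated rather than derived, and a UCB1-style score trades off exploiting routes with high observed information gain per unit cost against exploring under-sampled ones. All names, ranges, and costs here are illustrative assumptions.

    import math
    import random

    class Route:
        """One alternative derivation route with unknown gain/cost statistics."""
        def __init__(self, name, gain_range, cost):
            self.name = name
            self.gain_range = gain_range  # simulated information yield, in bits
            self.cost = cost              # simulated cost of one evaluation step
            self.pulls = 0
            self.sum_gain_per_cost = 0.0

        def mean(self):
            # Observed average information gain per unit cost.
            return self.sum_gain_per_cost / self.pulls if self.pulls else 0.0

    def ucb_score(route, total_pulls, c=1.4):
        # UCB1: exploit the observed mean, but keep exploring under-sampled
        # routes via the confidence term.
        if route.pulls == 0:
            return float("inf")  # try every route at least once
        return route.mean() + c * math.sqrt(math.log(total_pulls) / route.pulls)

    def evaluate(route):
        # Stand-in for running one derivation step along `route`; returns the
        # information gained about the query (bits) and the cost incurred.
        return random.uniform(*route.gain_range), route.cost

    def adaptive_query(routes, budget):
        # Adaptively allocate a fixed evaluation budget across the routes.
        total = 0
        for _ in range(budget):
            total += 1
            route = max(routes, key=lambda r: ucb_score(r, total))
            gain, cost = evaluate(route)
            route.pulls += 1
            route.sum_gain_per_cost += gain / cost
        return max(routes, key=Route.mean)

    if __name__ == "__main__":
        routes = [
            Route("pathological", gain_range=(0.0, 0.1), cost=10.0),
            Route("direct", gain_range=(0.5, 1.0), cost=1.0),
        ]
        best = adaptive_query(routes, budget=200)
        print("preferred route:", best.name)  # expected: direct

In a full system, the simulated evaluate step would instead perform an actual evaluation step under the probabilistic semantics the abstract describes, with the gain measured as the reduction in uncertainty about the query's answer; the bandit mechanics above would be unchanged.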