This Springer Brief presents a basic algorithm that correctly computes an optimal state change attempt, as well as an enhanced algorithm built on top of the well-known trie data structure. It establishes correctness and algorithmic-complexity results for both algorithms and reports experiments comparing their performance on real-world and synthetic data. Topics addressed include optimal state change attempts, state change effectiveness, different kinds of effect estimators, planning under uncertainty, and experimental evaluation. These topics will help researchers analyze tabular data, even when the data contains states (of the world) and events (taken by an agent) whose effects are not well understood. Event databases (event DBs) are omnipresent in the social sciences and may cover scenarios ranging from political events and the state of a country to education-related actions and their effects on a school system. With a wide range of applications in computer science and the social sciences, this Springer Brief is valuable for professionals and researchers dealing with tabular data, artificial intelligence, and data mining. The applications are also useful for advanced-level students of computer science.
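To give a flavor of the trie-based approach mentioned above, the sketch below indexes event-DB rows (tuples of discrete attribute values) in a trie so that rows sharing a prefix of values share nodes and prefix counts come for free. This is a minimal illustration under assumed names (`RowTrie`, `count_prefix`) and an assumed row encoding; it is not the book's actual algorithm.

```python
# Illustrative sketch only: a trie over event-DB rows, where each row is a
# tuple of discrete attribute values. Rows sharing leading attribute values
# share trie nodes, so counting rows that match a prefix is a single walk.
# All names here are assumptions for illustration, not the book's API.

class TrieNode:
    def __init__(self):
        self.children = {}  # attribute value -> child TrieNode
        self.count = 0      # number of inserted rows passing through this node


class RowTrie:
    """Index event-DB rows (tuples of attribute values) in a trie."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, row):
        """Insert one row, incrementing counts along its path."""
        node = self.root
        for value in row:
            node = node.children.setdefault(value, TrieNode())
            node.count += 1

    def count_prefix(self, prefix):
        """Count inserted rows whose leading attributes match `prefix`."""
        node = self.root
        for value in prefix:
            if value not in node.children:
                return 0
            node = node.children[value]
        return node.count


# Hypothetical rows: (event, state, follow-up action)
trie = RowTrie()
trie.insert(("protest", "high_unrest", "curfew"))
trie.insert(("protest", "high_unrest", "talks"))
trie.insert(("strike", "low_unrest", "talks"))
print(trie.count_prefix(("protest", "high_unrest")))  # 2
```

Because shared prefixes are stored once, queries over many candidate state changes can reuse partial walks instead of rescanning the full table, which is the kind of saving a trie-based enhancement over a basic scan can offer.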