Algorithm 972: jMarkov: An Integrated Framework for Markov Chain Modeling

Markov chains (MCs) are a powerful tool for modeling complex stochastic systems. While a number of tools exist for solving different types of MC models, the first step in MC modeling is to define the model parameters, a step that is error prone and far from trivial when modeling complex systems. In this article, we introduce jMarkov, a framework for MC modeling that lets the user define MC models from the basic rules underlying the system dynamics. From these rules, jMarkov automatically obtains the MC parameters and solves the model to determine steady-state and transient performance measures. The jMarkov framework comprises four modules: (i) the main module supports MC models with a finite state space; (ii) the jQBD module enables the modeling of Quasi-Birth-and-Death (QBD) processes, a class of MCs with an infinite state space; (iii) the jMDP module offers the capability to determine optimal decision rules based on Markov Decision Processes; and (iv) the jPhase module supports the manipulation and inclusion of phase-type random variables, which represent more general behaviors than the standard exponential distribution. In addition, jMarkov is highly extensible, allowing users to introduce new modeling abstractions and solvers.
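The core idea of the abstract, generating the MC parameters automatically from transition rules and then solving for steady-state measures, can be illustrated with a small sketch. This is not jMarkov's actual Java API; it is a hypothetical Python analogue in which a model is a list of (guard, destination, rate) rules, here instantiated for an M/M/1/K queue, from which the generator matrix Q is built and the stationary distribution is obtained by solving pi Q = 0 with the normalization sum(pi) = 1:

```python
# Sketch (assumed names, not the jMarkov API): build a CTMC generator from
# event rules and solve for the stationary distribution in pure Python.

LAM, MU, K = 1.0, 2.0, 3          # arrival rate, service rate, buffer size
states = list(range(K + 1))       # number in system: 0, 1, ..., K

# Each rule: (event name, guard "is the event active in state n?",
#             destination state, rate). This mirrors the "basic rules
# underlying the system dynamics" from which parameters are derived.
rules = [
    ("arrival",   lambda n: n < K, lambda n: n + 1, LAM),
    ("departure", lambda n: n > 0, lambda n: n - 1, MU),
]

def build_generator(states, rules):
    """Assemble the infinitesimal generator Q from the transition rules."""
    n = len(states)
    Q = [[0.0] * n for _ in states]
    for i in states:
        for _, active, dest, rate in rules:
            if active(i):
                Q[i][dest(i)] += rate
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q

def steady_state(Q):
    """Solve pi Q = 0, sum(pi) = 1: take Q^T, replace the last equation
    with the normalization row, and run Gaussian elimination."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose
    A[n - 1] = [1.0] * n                                 # normalization
    b = [0.0] * (n - 1) + [1.0]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]     # augmented matrix
    for col in range(n):                                 # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    pi = [0.0] * n                                       # back substitution
    for r in range(n - 1, -1, -1):
        pi[r] = (M[r][n] - sum(M[r][c] * pi[c] for c in range(r + 1, n))) / M[r][r]
    return pi

pi = steady_state(build_generator(states, rules))
```

For this birth-death example the result can be checked against the closed form pi_n proportional to (LAM/MU)^n; the point of the sketch is only that the modeler writes the rules, while the generator matrix and its solution are produced mechanically, which is the separation jMarkov provides.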
