Abstract: The patrolling problem considered in this paper has the following characteristics: patrol units conduct preventive patrolling and respond to calls for service, and the patrol locations (nodes) have different priorities and varying incident rates. We design a patrolling scheme in which locations are visited according to their importance and incident rates. The solution proceeds in two steps. First, we partition the set of nodes of interest into subsets of nodes, called sectors, and assign each sector to one patrol unit. Second, for each sector, we adopt a preemptive call-for-service response strategy and design multiple sub-optimal off-line patrol routes. The net effect of randomized patrol routes with immediate call-for-service response is that limited patrol resources can respond promptly to random requests while effectively covering nodes of different priorities and incident rates. To obtain multiple routes, we design a novel learning algorithm (Similar State Estimate Update) within a Markov Decision Process (MDP) framework and apply the softmax action selection method. The resulting patrol routes and patrol-unit visibility appear unpredictable to insurgents and criminals, creating the impression of a virtual police presence and potentially mitigating large-scale incidents.
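The Similar State Estimate Update algorithm itself is not reproduced here, but the softmax (Boltzmann) action selection the abstract mentions is a standard method: each action's learned value estimate is exponentiated and normalized, so higher-valued patrol moves are chosen more often while every move keeps nonzero probability, which is what randomizes the routes. A minimal sketch, where `q_values` (per-action value estimates) and the temperature `tau` are illustrative placeholders, not quantities from the paper:

```python
import math
import random


def softmax_probabilities(q_values, tau=1.0):
    """Boltzmann distribution over actions given their value estimates.

    Larger tau -> more uniform (more random routes);
    smaller tau -> more greedy (closer to the single best route).
    """
    # Subtract the max before exponentiating for numerical stability.
    m = max(q_values)
    exps = [math.exp((q - m) / tau) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]


def select_action(q_values, tau=1.0, rng=random):
    """Sample an action index from the softmax distribution."""
    probs = softmax_probabilities(q_values, tau)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point round-off
```

Sampling next nodes this way, rather than always taking the argmax, is what makes the generated sub-optimal routes vary from run to run while still favoring high-priority, high-incident-rate nodes.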