My research focuses on using continuous-state partially observable Markov decision processes (POMDPs) to perform object manipulation tasks with a robotic arm. During object manipulation, object dynamics can be extremely complex, non-linear, and challenging to specify. To avoid modeling the full complexity of the possible dynamics, I instead use a model that switches among a discrete number of simple dynamics models. By learning these models and extending Porta's continuous-state POMDP framework (Porta et al., 2006) to incorporate this switching dynamics model, I hope to handle tasks that involve both absolute and relative dynamics within a single framework. This dynamics model may be applicable not only to object manipulation tasks but also to a number of other problems, such as robot navigation. By using an explicit model of uncertainty, I hope to produce solutions to object manipulation tasks that more robustly handle the noisy sensory information received by physical robots.
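To make the switching idea concrete, the following is a minimal sketch, not the actual learned models: it assumes two hypothetical modes for a simple 1-D manipulation setting ("free", where only the gripper moves, and "grasped", where the object moves rigidly with the gripper), each with its own linear-Gaussian dynamics and a mode-transition distribution. All mode names, parameters, and probabilities below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modes (illustrative parameters only):
#   "free"    - the gripper moves, the object stays put (absolute dynamics)
#   "grasped" - the object moves rigidly with the gripper (relative dynamics)
MODES = {
    "free":    {"A": np.eye(2), "B": np.array([[1.0], [0.0]]), "noise": 0.01},
    "grasped": {"A": np.eye(2), "B": np.array([[1.0], [1.0]]), "noise": 0.02},
}

# Illustrative mode-transition probabilities P(mode' | mode).
TRANSITIONS = {
    "free":    {"free": 0.9, "grasped": 0.1},
    "grasped": {"free": 0.1, "grasped": 0.9},
}


def step(state, mode, action):
    """Sample (next_state, next_mode): first draw the next discrete mode,
    then propagate the continuous state [gripper_pos, object_pos] through
    that mode's linear-Gaussian dynamics."""
    modes = list(TRANSITIONS[mode])
    probs = [TRANSITIONS[mode][m] for m in modes]
    next_mode = rng.choice(modes, p=probs)

    params = MODES[next_mode]
    next_state = params["A"] @ state + params["B"] @ action
    next_state += rng.normal(0.0, params["noise"], size=state.shape)
    return next_state, next_mode


# Example rollout: push the gripper to the right and watch the object
# follow only when the sampled mode is "grasped".
state, mode = np.array([0.0, 0.5]), "free"
for _ in range(5):
    state, mode = step(state, mode, np.array([0.1]))
    print(mode, state)
```

The design point this sketch illustrates is that each individual mode stays simple (here, linear-Gaussian), while the combination of modes and stochastic mode switching captures qualitatively different contact situations that a single global model would struggle to represent.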
[1] Alexei Makarenko et al. Parametric POMDPs for planning in continuous state spaces. Robotics and Autonomous Systems, 2006.
[2] Leslie Pack Kaelbling et al. Planning and Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 1998.
[3] Thomas L. Griffiths et al. Hierarchical Topic Models and the Nested Chinese Restaurant Process. NIPS, 2003.
[4] Nikos A. Vlassis et al. Perseus: Randomized Point-based Value Iteration for POMDPs. Journal of Artificial Intelligence Research, 2005.
[5] Pascal Poupart et al. Point-Based Value Iteration for Continuous POMDPs. Journal of Machine Learning Research, 2006.