Motivating Time-Inconsistent Agents: A Computational Approach

We study the complexity of motivating time-inconsistent agents to complete long-term projects in the graph-based planning model proposed by Kleinberg and Oren [5]. Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present bias. We consider two strategies for guiding the agent. In the first, a single reward is placed at t and arbitrary edges may be removed from G. In the second, rewards may be placed at arbitrary nodes of G, but no edges may be deleted. In both cases we show that it is NP-complete to decide whether a given budget is sufficient to guide the agent. For the first setting, we give complementary upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a $$1+\sqrt{n}$$-approximation algorithm and prove NP-hardness for approximation ratios greater than $$\sqrt{n}/3$$. Finally, we argue that the second setting does not admit any efficient approximation unless $$\mathrm{P}=\mathrm{NP}$$.
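To make the underlying model concrete, the following is a minimal sketch of a present-biased agent walking a task graph, following the standard convention of the Kleinberg–Oren model cited above: with bias parameter b > 1, the agent evaluates the cost of the edge it would traverse *now* inflated by b, while all later costs are taken at face value, and it re-plans at every node. The graph, edge costs, and parameter values below are purely illustrative.

```python
# Sketch of a present-biased agent in a task graph (Kleinberg-Oren style).
# The bias b > 1 inflates the cost of the immediate edge; all later costs
# are evaluated at face value. Re-planning at every node is what makes the
# agent time-inconsistent. Graph, costs, and parameters are illustrative.

from math import inf

def shortest_cost(graph, u, t):
    """Face-value cheapest cost from u to t in a DAG (memoized recursion)."""
    memo = {}
    def d(v):
        if v == t:
            return 0.0
        if v in memo:
            return memo[v]
        memo[v] = min((c + d(w) for w, c in graph.get(v, [])), default=inf)
        return memo[v]
    return d(u)

def walk(graph, s, t, b, reward):
    """Simulate the biased agent; return the path taken, or None if it quits."""
    path, v = [s], s
    while v != t:
        # Perceived cost of continuing via (v, w): b * c(v, w) + dist(w, t).
        options = [(b * c + shortest_cost(graph, w, t), w)
                   for w, c in graph.get(v, [])]
        if not options:
            return None
        best, w = min(options)
        if best > reward:  # perceived cost exceeds the reward: agent abandons
            return None
        path.append(w)
        v = w
    return path

# From node "a", the direct edge to "t" (cost 4) is cheaper at face value
# than the detour via "b" (cost 1 + 5 = 6), yet the biased agent takes the
# detour because its inflated immediate cost looks smaller.
G = {"s": [("a", 1.0)], "a": [("t", 4.0), ("b", 1.0)], "b": [("t", 5.0)]}
print(walk(G, "s", "t", b=2.0, reward=20.0))  # -> ['s', 'a', 'b', 't']
```

In the example, the agent's face-value plan at node "a" is the direct edge, but the bias makes the seemingly cheap detour edge more attractive (2·1 + 5 = 7 versus 2·4 = 8), so it ends up paying more overall; with a smaller reward it may abandon the project entirely. This is exactly the behavior that edge removal or intermediate rewards are meant to correct.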