Human-Augmented Prescriptive Analytics With Interactive Multi-Objective Reinforcement Learning

The rise of Artificial Intelligence (AI) enables enterprises to manage large amounts of data in order to derive predictions about future performance and to gain meaningful insights. In this context, descriptive and predictive analytics have gained significant research attention; prescriptive analytics, however, has only recently started to emerge as the next step towards greater data analytics maturity, enabling optimized decision making ahead of time. Although machine learning for decision making has been identified as one of the most important applications of AI, prescriptive analytics has so far been addressed mainly with domain-specific optimization models. Moreover, the existing literature lacks generalized prescriptive analytics models that can be dynamically adapted to human preferences. Reinforcement learning, the third machine learning paradigm alongside supervised and unsupervised learning, has the potential to deal with dynamic, uncertain and time-variant environments, the huge state space of sequential decision making processes, and incomplete knowledge. In this paper, we propose a human-augmented prescriptive analytics approach using Interactive Multi-Objective Reinforcement Learning (IMORL) in order to cope with the complexity of real-life environments and the need for optimized human-machine collaboration. The decision making process is modelled in a generalized way to ensure scalability and applicability to a wide range of problems and applications. We deployed the proposed approach in a stock market case study in order to evaluate proactive trading decisions that yield the maximum return and the minimum risk achievable given the user's experience and the available data in combination.
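To make the core idea more concrete, the sketch below illustrates one common way to realize interactive multi-objective reinforcement learning: a vector-valued reward (return, negative risk) is scalarized by user-supplied preference weights, and the policy is re-optimized whenever the user adjusts the trade-off. This is a minimal, hypothetical illustration, not the paper's implementation; the toy environment, tabular Q-learning, state discretization and linear scalarization are all assumptions made for brevity.

```python
# Minimal illustrative sketch (assumed, not the paper's method): preference-weighted
# scalarization of a two-objective reward (return vs. risk) inside tabular Q-learning.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 3          # toy discretized market states; actions: sell/hold/buy
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical environment: returns next state and a 2-d reward vector
    [expected return, negative risk]; real market data would replace this."""
    next_state = rng.integers(N_STATES)
    ret = rng.normal(0.01 * (action - 1), 0.02)       # action shifts expected return
    risk = abs(action - 1) * rng.uniform(0.0, 0.03)   # trading adds risk exposure
    return next_state, np.array([ret, -risk])

def train(pref, episodes=200, horizon=50):
    """Learn a greedy policy for a fixed user preference vector (weights sum to 1)."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(horizon):
            a = rng.integers(N_ACTIONS) if rng.random() < EPSILON else int(Q[s].argmax())
            s2, r_vec = step(s, a)
            r = float(pref @ r_vec)                   # linear scalarization by user preferences
            Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# "Interactive" loop: the user adjusts the return/risk trade-off between rounds,
# and the policy is re-optimized under the new preference vector.
for pref in (np.array([0.8, 0.2]), np.array([0.4, 0.6])):
    Q = train(pref)
    print(f"preferences {pref}: greedy action per state -> {Q.argmax(axis=1)}")
```

Under this scalarization view, shifting weight from the return objective to the risk objective typically pushes the learned policy towards more conservative (hold-like) actions, which is the kind of human-in-the-loop adaptation the proposed approach targets.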