A Reinforcement Learning Approach to Feature Model Maintainability Improvement

Software Product Lines (SPLs) evolve when their core assets (e.g., feature models and reference architectures) change. Various approaches address core asset evolution by applying evolution operations (e.g., adding a feature to a feature model or removing a constraint). Improving quality attributes (e.g., maintainability and flexibility) of core assets is a promising direction in SPL evolution. Providing an approach built around a decision maker to support this activity is a challenge that grows over time: a decision maker helps the human (e.g., a domain expert) choose suitable evolution scenarios (sequences of change operations) that improve the quality attributes of a core asset. To tackle this challenge, we propose a reinforcement learning approach to improve the maintainability of an SPL feature model (FM). By learning from various evolution operations and relying on its decision maker, the approach provides the best evolution scenarios for improving the maintainability of an FM. In this paper, we present our reinforcement learning approach, illustrated by a running example based on the feature model of a Graph Product Line.
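To make the idea concrete, the loop of an RL-based decision maker can be sketched as tabular Q-learning over evolution operations. This is a minimal illustration under assumptions not stated in the abstract: the paper does not specify the RL algorithm, and the states (maintainability levels), actions, and reward (the change in a toy maintainability score) here are hypothetical placeholders.

```python
import random

# Hypothetical setting (not from the paper): states are discrete
# maintainability levels 0..4 of a feature model; actions are abstract
# evolution operations applied to the FM.
ACTIONS = ["add_feature", "remove_constraint", "merge_features"]
STATES = range(5)

def step(state, action):
    """Toy environment: one operation improves maintainability, one degrades
    it, one is neutral. Reward is the change in the maintainability score
    (an assumption made purely for illustration)."""
    if action == "remove_constraint" and state < 4:
        next_state = state + 1
    elif action == "merge_features" and state > 0:
        next_state = state - 1
    else:
        next_state = state
    return next_state, next_state - state

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn Q-values; a greedy readout of Q then yields an evolution
    scenario (sequence of operations) expected to improve maintainability."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(10):  # bounded scenario length per episode
            if rng.random() < epsilon:  # explore
                action = rng.choice(ACTIONS)
            else:  # exploit current estimates
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = q_learning()
# In this toy setup the learned policy prefers the maintainability-improving
# operation from the initial state.
best = max(ACTIONS, key=lambda a: q[(0, a)])
```

In practice, the state would encode structural FM metrics and the reward would come from a maintainability measure of the evolved feature model; the sketch only shows the shape of the learning loop behind the decision maker.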