Detecting Judgment Inconsistencies to Encourage Model Iteration in Interactive i* Analysis

Model analysis procedures that prompt stakeholder interaction and continuous model improvement are especially useful in Early RE elicitation. Previous work has introduced qualitative, interactive forward and backward analysis procedures for i* models. Studies with experienced modelers in complex domains have shown that this type of analysis prompts beneficial iterative revisions to the models. However, studies of novice modelers applying this type of analysis show no difference between semi-automatic analysis and ad-hoc analysis (analysis that follows no systematic procedure). In this work, we encode knowledge of the modeling syntax (modeling expertise) into the analysis procedure by performing consistency checks over the interactive judgments provided by users. We believe such checks will encourage beneficial model iteration as part of interactive analysis for both experienced and novice i* modelers.
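To make the idea of a judgment consistency check concrete, the following is a minimal, hypothetical sketch. The label names and the specific check are illustrative assumptions only, not the procedure defined in this work: it flags a human judgment that contradicts all of the qualitative evidence propagated to an intention.

```python
# Hypothetical sketch of a judgment consistency check for interactive i* analysis.
# The label vocabulary and the check below are illustrative assumptions, not the
# actual checks proposed in this work.

FULLY_POSITIVE = "satisfied"
FULLY_NEGATIVE = "denied"

def judgment_is_consistent(incoming_labels, human_judgment):
    """Return False when a human judgment contradicts all incoming evidence.

    incoming_labels: qualitative labels propagated to an intention over its
                     incoming links during interactive forward analysis.
    human_judgment:  the label the user chose when resolving those labels.
    """
    if not incoming_labels:
        return True  # no propagated evidence, nothing to contradict
    # Uniformly negative evidence judged fully positive is suspicious.
    if all(l == FULLY_NEGATIVE for l in incoming_labels) and human_judgment == FULLY_POSITIVE:
        return False
    # Symmetrically, uniformly positive evidence judged fully negative.
    if all(l == FULLY_POSITIVE for l in incoming_labels) and human_judgment == FULLY_NEGATIVE:
        return False
    return True

print(judgment_is_consistent(["denied", "denied"], "satisfied"))   # False
print(judgment_is_consistent(["satisfied", "denied"], "satisfied"))  # True
```

In an interactive tool, a flagged judgment would not be rejected automatically; it would instead prompt the user to reconsider either the judgment or the model, encouraging the kind of beneficial iteration described above.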