Abstract In this study, we focus on ethical education as a means to improve artificial companions' conceptualization of moral decision-making processes in human users. In particular, we focus on automatically determining whether changes in ethical education have influenced core moral values in humans over the past century. We analyze ethics as taught in Japan before WWII and today in order to verify how much pre-WWII moral attitudes have in common with those of contemporary Japanese, to what degree the ethics taught at school overlaps with the general population's understanding of ethics, and whether the major reform of the guidelines for teaching the school subject of "ethics" after 1946 changed the way ordinary people approach core moral questions (such as those concerning the sacredness of human life). We selected textbooks used in teaching ethics at school between 1935 and 1937, as well as those used in junior high schools today (2019), and analyzed the emotional and moral associations their contents generated. The analysis was performed with an automatic moral and emotional reasoning agent and was based on the largest available text corpus in Japanese as well as on the resources of a Japanese digital library. We found that, despite changes in the stereotypical view of Japan's moral sentiments, due especially to historical events, past and contemporary Japanese share a similar moral evaluation of certain basic moral concepts. However, a large discrepancy remains: both groups perceive some actions as beneficial to society as a whole, yet are inconclusive in assessing the same actions' outcomes for the individuals performing them and their emotional consequences. Moreover, some ethical categories that were assessed positively before the war, being associated with a nationalistic trend in education, have disappeared from the scope of interest of post-war society.
The findings of this study support suggestions proposed by others that the development of personal AI systems requires supplementation with moral reasoning. Moreover, the paper builds upon this idea and further suggests that AI systems need to treat ethics not as a constant, but as a function that accounts for historical and cultural changes in moral reasoning.