Editorial: Safe and Trustworthy Machine Learning

Machine learning (ML) offers tremendous opportunities to answer some of the most important and difficult questions across a wide range of applications. However, ML systems often face a major challenge in the real world: the conditions under which a system is deployed can differ from those under which it was developed. Recent examples have shown that ML methods are highly susceptible to minor changes in image orientation, minute adversarial perturbations, and bias in the training data. This susceptibility to test-time shift remains a major hurdle to the universal acceptance of ML solutions in high-regret applications. To address this challenge, this Research Topic, "Safe and Trustworthy Machine Learning", brings together a wide range of contributions that offer potentially viable solutions to the trust, safety, and security issues faced by ML methods.