An Investigation of Bounded Misclassification for Operational Security of Deep Neural Networks

Deep Neural Networks are known to make incomprehensible mistakes on the inputs they misclassify. However, from the perspective of an end-to-end system built on top of a classifier, additional layers of decision making may be immune to particular kinds of misclassification. For example, if a drone misclassifies a yellow school bus as something similar, such as a cab, rather than as, say, an enemy tank, then the underlying decision to ignore the object as a possible target remains the same and is hence unaffected. In this brief abstract, we discuss this notion of robustness, called “bounded misclassification”, which is domain-specific and operational, and is specifically predicated on the overall functionality of a particular application.
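
To make the idea concrete, the sketch below illustrates one way bounded misclassification might be checked; the label names, decision map, and the helper is_bounded are hypothetical assumptions for illustration, not the method proposed here. Each class label is assigned to the operational decision it triggers downstream, and a misclassification counts as bounded when the predicted and true labels map to the same decision.

```python
# Hypothetical illustration: the labels, decision categories, and helper
# below are assumptions, not an implementation from the abstract.

# Map each class label to the operational decision it triggers downstream.
DECISION_MAP = {
    "school_bus": "ignore",  # civilian vehicle -> not a target
    "cab": "ignore",         # civilian vehicle -> not a target
    "tank": "engage",        # enemy vehicle -> possible target
}

def is_bounded(true_label: str, predicted_label: str) -> bool:
    """A misclassification is bounded if the predicted label leads to the
    same operational decision as the true label."""
    return DECISION_MAP[true_label] == DECISION_MAP[predicted_label]

# Mistaking a school bus for a cab leaves the decision unchanged (bounded) ...
assert is_bounded("school_bus", "cab")
# ... while mistaking it for a tank changes the decision (unbounded).
assert not is_bounded("school_bus", "tank")
```

Under this view, robustness is evaluated with respect to the partition of the label space induced by the application's decisions rather than over the raw labels themselves.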