Hardness Results for Structured Learning and Inference with Multiple Correct Outputs

In many domains of structured output prediction, multiple outputs can be considered correct. Several existing results show that polynomial-time computation, at both training and test time, is possible when there is a single correct output. In this work, we show that such guarantees do not hold when multiple outputs are correct. We establish this through three main results indicating that multiple correct outputs lead to NP-hard computation with existing convex surrogates for (i) learning with a supermodular loss function, (ii) learning with a submodular loss function, and (iii) test-time inference with a diversity penalty term. These theoretical results highlight the importance of identifying sufficient conditions for tractable learning and inference with multiple correct outputs in practice.