Inference after model selection has been an active research topic in the past few years, with numerous works offering different approaches to addressing the problems associated with the reuse of data. In particular, major progress has been made recently on large and useful classes of problems by harnessing the general theory of hypothesis testing in exponential families, but these methods have their limitations. Perhaps the most immediate is the gap between theory and practice: implementing the exact theoretical prescription in realistic situations---for example, when new data arrives and inference needs to be adjusted accordingly---may turn out to be prohibitive.
In this paper we develop methods for carrying out inference conditional on selection that are more flexible, in the sense that they naturally accommodate different models for the data instead of requiring case-by-case treatment. Our methods come at the price of offering only approximate inference, but we provide both theory and simulation examples showing that our specific approximation has competitive performance.