Exploration in Linear Bandits with Rich Action Sets and Its Implications for Inference