Embracing Uncertainty

Sources of uncertainty abound. Noisy sensor data. Machine learning methods. Hardware and software failures. The physical world. Human behavior. In the past, computer science handled uncertainty by abstracting it away or avoiding it. In the future, computer science instead needs to embrace uncertainty as a first-class entity. How do we represent uncertainty in our computational models? Probabilities. Thus, we need to make sure that every computer science student learns probability and statistics. Data science, where data drives discovery and decision-making in all fields of study, underscores the importance of having a command of probability and statistics. At the heart of data science is data analytics, whose methods, such as machine learning, rely on probabilistic and statistical reasoning. And since data serve as the currency of any data analytics workflow, explicit representation of probability distributions can help us calculate degrees of uncertainty throughout the flow. Programming and software engineering courses will need to elevate the status of such data flows to that given to algorithms, data structures, and modular design. In this talk I will discuss the implications of embracing uncertainty for undergraduate computer science curricula.
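As a minimal sketch of what explicit uncertainty representation in a data flow might look like, the following Python example models a noisy sensor reading as a probability distribution and propagates it through two pipeline stages via Monte Carlo sampling; the stage names, parameters, and the sampling approach are illustrative assumptions, not a method prescribed by the talk.

```python
# Hypothetical sketch: represent a noisy sensor reading as samples from a
# distribution and propagate the whole distribution through a small data flow,
# so the final result carries an uncertainty estimate rather than a point value.
import random
import statistics


def sensor_reading(true_value: float, noise_std: float, n_samples: int = 10_000):
    """Model a noisy temperature sensor as samples from a Normal distribution."""
    return [random.gauss(true_value, noise_std) for _ in range(n_samples)]


def calibrate(samples):
    """First pipeline stage: a simple linear calibration (illustrative)."""
    return [1.02 * x - 0.5 for x in samples]


def to_fahrenheit(samples):
    """Second pipeline stage: unit conversion from Celsius to Fahrenheit."""
    return [x * 9 / 5 + 32 for x in samples]


if __name__ == "__main__":
    # Propagate the distribution, not just a point estimate, through the flow.
    temps_c = sensor_reading(true_value=21.0, noise_std=0.8)
    result = to_fahrenheit(calibrate(temps_c))
    print(f"estimate: {statistics.mean(result):.2f} degF "
          f"+/- {statistics.stdev(result):.2f} (1 sigma)")
```

Because every stage operates on samples of a distribution, the degree of uncertainty is visible at each step of the flow, which is one way such data flows could be given the same curricular weight as algorithms and data structures.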