How should statistical procedures be designed so as to be scalable computationally to the massive datasets that are increasingly the norm? When coupled with the requirement that an answer to an inferential question be delivered within a certain time budget, this question has significant repercussions for the field of statistics. With the goal of identifying “time-data tradeoffs,” we investigate some of the statistical consequences of computational perspectives on scalability, in particular divide-and-conquer methodology and hierarchies of convex relaxations.

The fields of computer science and statistics have undergone mostly separate evolutions during their respective histories. This is changing, due in part to the phenomenon of “Big Data.” Indeed, science and technology are currently generating very large datasets, and the gatherers of these data have increasingly ambitious inferential goals, trends which point toward a future in which statistics will be forced to deal with problems of scale in order to remain relevant. Currently the field seems little prepared to meet this challenge. To the key question “Can you guarantee a certain level of inferential accuracy within a certain time budget even as the data grow in size?” the field is generally silent. Many statistical procedures either have unknown runtimes or runtimes that render the procedure unusable on large-scale data. Although the field of sequential analysis provides tools to assess risk after a certain number of data points have arrived, this is different from an algorithmic analysis that predicts a relationship between time and risk. Faced with this situation, gatherers of large-scale data are often forced to turn to ad hoc procedures that perhaps do provide algorithmic guarantees but which may provide no statistical guarantees and which in fact may have poor or even disastrous statistical properties.

On the other hand, the field of computer science is also currently poorly equipped to provide solutions to the inferential problems associated with Big Data. Database researchers rarely view the data in a database as noisy measurements on an underlying population about which inferential statements are desired. Theoretical computer scientists are able to provide analyses of the resource requirements of algorithms (e.g., time and space), and are often able to provide comparative analyses of different algorithms for solving a given problem, but the problems analyzed rarely refer to inferential goals. In particular, the notion that it may be possible to save on computation because of the growth of statistical power as problem instances grow in size is not (yet) a common perspective in computer science.

In this paper we discuss some recent research initiatives that aim to draw computer science and statistics closer together, with particular reference to “Big Data” problems. There are two main underlying perspectives driving these initiatives, both of which present interesting conceptual challenges for statistics. The first is that large computational problems are often usefully addressed via some notion of “divide-and-conquer”: the large problem is divided into subproblems that are hopefully simpler than the original problem, these subproblems are solved, and the solutions are then combined to yield a solution to the original problem, as sketched below.
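To make the divide-and-conquer template concrete in a statistical setting, the following minimal sketch (our own illustration, not a procedure from the paper) splits a large sample into disjoint subsets, applies an estimator to each subset, and combines the subset estimates by averaging. The subset count, the simulated data, and the use of the sample mean as the estimator are illustrative assumptions.

```python
import numpy as np

def divide_and_conquer_estimate(data, num_subsets, estimator):
    """Split `data` into disjoint subsets, apply `estimator` to each,
    and combine the per-subset estimates by averaging.

    This mirrors the generic divide-and-conquer template: solve smaller
    subproblems independently, then piece the solutions together.
    """
    subsets = np.array_split(data, num_subsets)
    subset_estimates = [estimator(subset) for subset in subsets]
    return np.mean(subset_estimates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative data: a large i.i.d. sample from a normal distribution.
    data = rng.normal(loc=2.0, scale=1.0, size=1_000_000)

    # Each subproblem touches only 1% of the data and could be solved on a
    # separate machine; the combination step is a cheap average.
    estimate = divide_and_conquer_estimate(data, num_subsets=100,
                                           estimator=np.mean)
    print(f"divide-and-conquer estimate of the mean: {estimate:.4f}")
```

For a linear statistic such as the mean, averaging the subset estimates recovers the full-data answer exactly; for nonlinear estimators the naive average can be biased, and characterizing when such pieced-together solutions retain good statistical properties is part of what makes the analysis of divide-and-conquer procedures interesting.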