We thank the commentators for the care and attention they have given to this topic and for the new insights they bring to what we all regard as an important and engaging area: the study of the structure and function of human social networks. Most of the commentators note the same limitations, and the same promise, that we ourselves have noted in prior papers regarding the use of generalized estimating equation (GEE) models applied to longitudinal network data. Like us, they see no superior alternative for analyzing such data, especially in networks of the size we analyze, given the statistical tools currently at our disposal. As a scientific principle, we take it that some observation of the world is better than none. The fact that we can imagine a space-based telescope superior to the terrestrial variety does not mean that the latter, with all the limitations imposed by light pollution and atmospheric interference, offers no value and no information about the natural world. We think the current state of network statistics is like an earth-based telescope: the methods are not perfect, nor free of all biases or assumptions, but they are much better than nothing. We are grateful for Wasserman’s deft highlighting of this fundamental point. Wasserman has been a pioneer in network statistics, and he knows how hard it is to reap a harvest in this terrain. Like him, we see no point in packing our bags and going home, because network phenomena are so important.

Like virtually all social scientists, we have made use of observational data in many of our papers, and so we must cope with the limitations of extant methods. But we have also published a number of experiments exploring both the structure and function of networks ranging in size from thousands to tens of millions of subjects [1–3], and we note with interest the experimental work of others [4–6]. It is worth noting, however, that experiments have their own limitations, including that they are thinned-out versions of reality: what one gains in robust causal inference, one loses in verisimilitude. So we think that both approaches will play an important role in network science in the coming years. We just need to invent better statistical tools, a sentiment in which we are joined by all four commentators.

Thomas, as usual, thinks deeply about the limitations of current network models. Yet he does not suggest anything specific that one could have done differently, given our data and the current state of statistical knowledge. And, in many cases, he identifies limitations in current methods that we, and others, have previously pointed out; the challenge is what to do about them. In his discussion, Thomas appears to overlook the fact that, for most of our papers, we have published results with both dichotomous and continuous versions of the same outcome – for example, dichotomous obesity and continuous body mass index [7, 8], dichotomous smoking and continuous number of cigarettes smoked per day [9], dichotomous and continuous versions of a happiness index [10], and dichotomous and continuous versions of the number of lonely days per week [11]. In all cases, the two types of models have led to the same conclusions, a fact we have noted in our papers.
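To make concrete what such paired specifications look like, the following is a minimal sketch of fitting a logistic GEE for a dichotomous outcome and a Gaussian GEE for the corresponding continuous outcome on ego–alter panel data. This is not our actual analysis code: the column names, the synthetic data, and the choice of an independence working correlation clustered on the ego are illustrative assumptions only.

    # Minimal sketch (not the authors' actual code): paired dichotomous and
    # continuous GEE specifications for an ego-alter panel. All column names
    # and the synthetic data below are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000  # ego-alter-wave observations
    df = pd.DataFrame({
        "ego_id":         rng.integers(0, 200, n),   # repeated egos form the GEE clusters
        "ego_bmi_prev":   rng.normal(27, 4, n),      # ego BMI at the prior exam (t-1)
        "alter_bmi_prev": rng.normal(27, 4, n),      # alter BMI at the prior exam (t-1)
        "alter_bmi_now":  rng.normal(27, 4, n),      # alter BMI at the current exam (t)
        "age":            rng.integers(30, 70, n),
        "male":           rng.integers(0, 2, n),
    })
    df["ego_bmi_now"] = df["ego_bmi_prev"] + rng.normal(0, 2, n)   # ego BMI at t
    for col in ["ego_bmi_now", "ego_bmi_prev", "alter_bmi_now", "alter_bmi_prev"]:
        df[col.replace("bmi", "obese")] = (df[col] >= 30).astype(int)

    # Dichotomous outcome: logistic GEE, clustered on the ego.
    logit_fit = smf.gee(
        "ego_obese_now ~ ego_obese_prev + alter_obese_prev + alter_obese_now + age + male",
        groups="ego_id", data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Independence(),
    ).fit()

    # Continuous outcome: Gaussian GEE with the same lag structure.
    linear_fit = smf.gee(
        "ego_bmi_now ~ ego_bmi_prev + alter_bmi_prev + alter_bmi_now + age + male",
        groups="ego_id", data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Independence(),
    ).fit()

    print(logit_fit.summary())
    print(linear_fit.summary())

In practice, of course, one would fit such models to the observed ego–alter waves rather than to simulated data, and compare the sign and significance of the alter terms across the two specifications.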
Moreover, we have also previously explored double-lagged models of exactly the sort he (and others) have suggested, in which the alter’s change in status from t−2 to t−1 is used as a predictor of the ego’s change in status from t−1 to t, a fact mentioned in the supplements to our papers (and sketched below). Those analyses also confirmed our main results, as VanderWeele has previously noted [12]. Moreover, as Thomas notes, other approaches – for example, propensity score matching, as implemented by Aral [13] (and with which we are quite familiar [14]), or SIENA models [15] – do not by any means resolve the generic critique that Shalizi and Thomas offered in a prior paper regarding the difficulty of causal inference with observational network data [16]. This is true even with the implementation of further lags, although further lags do increase our confidence in causal interpretations and solve certain other statistical problems [12].

The simulation study involving network geometry that Thomas reports is a clever contribution. We are glad to see that Thomas availed himself of the public-use version of the FHS-Net data for this purpose, and we encourage others to explore that dataset. Still, it is unclear how to modify current
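As noted above, here is a minimal sketch of the double-lagged, change-on-change specification, in which the alter’s change from t−2 to t−1 predicts the ego’s change from t−1 to t. Again, the column names and synthetic data are hypothetical, and the Gaussian GEE with an independence working correlation is an illustrative assumption rather than a description of our published models.

    # Minimal sketch (not the authors' actual code) of the double-lagged model:
    # alter's change from t-2 to t-1 as a predictor of ego's change from t-1 to t.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 1000
    d = pd.DataFrame({
        "ego_id":       rng.integers(0, 200, n),  # repeated egos form the GEE clusters
        "ego_bmi_t1":   rng.normal(27, 4, n),     # ego BMI at t-1
        "ego_bmi_t":    rng.normal(27, 4, n),     # ego BMI at t
        "alter_bmi_t2": rng.normal(27, 4, n),     # alter BMI at t-2
        "alter_bmi_t1": rng.normal(27, 4, n),     # alter BMI at t-1
        "age":          rng.integers(30, 70, n),
        "male":         rng.integers(0, 2, n),
    })

    # Outcome: ego's change over (t-1, t); exposure: alter's change over the
    # earlier, non-overlapping interval (t-2, t-1).
    d["ego_change"] = d["ego_bmi_t"] - d["ego_bmi_t1"]
    d["alter_prior_change"] = d["alter_bmi_t1"] - d["alter_bmi_t2"]

    double_lag_fit = smf.gee(
        "ego_change ~ alter_prior_change + age + male",
        groups="ego_id", data=d,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Independence(),
    ).fit()
    print(double_lag_fit.summary())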
[1] Aram Galstyan, et al. Statistical Tests for Contagion in Observational Social Network Studies. AISTATS, 2012.
[2] Cameron Marlow, et al. A 61-million-person experiment in social influence and political mobilization. Nature, 2012.
[3] Michael Lawrence Barnett, et al. Variation in patient-sharing networks of physicians across the United States. JAMA, 2012.
[4] T. Valente. Network Interventions. Science, 2012.
[5] Eric J. Tchetgen Tchetgen, et al. Why and When "Flawed" Social Network Analyses Still Yield Valid Tests of No Contagion. Statistics, Politics, and Policy, 2012.
[6] Nell Sedransk, et al. Data, Statistics, and Controversy: Making Science Research Data Intelligible. 2012.
[7] David G. Rand, et al. Dynamic social networks promote cooperation in experiments with humans. Proceedings of the National Academy of Sciences, 2011.
[8] Susan K. Walker. Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives. 2011.
[9] T. VanderWeele. Sensitivity Analysis for Contagion Effects in Social Networks. Sociological Methods & Research, 2011.
[10] Dylan Walker, et al. Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks. ICIS, 2010.
[11] C. Steglich, et al. Dynamic Networks and Behavior: Separating Selection from Influence. 2010.
[12] Cosma Rohilla Shalizi, et al. Homophily and Contagion Are Generically Confounded in Observational Social Network Studies. Sociological Methods & Research, 2010.
[13] Arun Sundararajan, et al. Distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks. Proceedings of the National Academy of Sciences, 2009.
[14] Nicholas A. Christakis, et al. Cooperative behavior cascades in human social networks. Proceedings of the National Academy of Sciences, 2009.
[15] N. Christakis, et al. Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study. BMJ: British Medical Journal, 2008.
[16] N. Christakis, et al. Alone in the Crowd: The Structure and Spread of Loneliness in a Large Social Network. Journal of Personality and Social Psychology, 2008.
[17] N. Christakis, et al. Material for: The Collective Dynamics of Smoking in a Large Social Network. 2008.
[18] N. Christakis, et al. The health impact of health care on families: a matched cohort study of hospice use by decedents and mortality outcomes in surviving, widowed spouses. Social Science & Medicine, 2003.
[19] Damon Centola. The spread of behavior in an online social network experiment. Science, 2010.
[20] J. Stockman. The Spread of Obesity in a Large Social Network over 32 Years. 2009.
[21] Cohen-Cole, et al. Estimating peer effects on health in social networks: A response to. 2008.