Today's Recommendations

1993

Modifying the t test for assessing the correlation between two spatial processes

Clifford, Richardson, and Hémon (1989) proposed a modified t test for assessing the correlation between two spatially autocorrelated processes; the test requires the estimation of an effective sample size that takes into account the spatial structure of both processes. Clifford et al. developed their method on the basis of an approximation of the variance of the sample correlation coefficient and assessed it by Monte Carlo simulations for lattice and non-lattice networks of moderate to large size. In the present paper, the variance of the sample covariance is computed for a finite number of locations, under the multinormality assumption, and the mathematical derivation of the definition of effective sample size is given. The theoretically expected number of degrees of freedom for the modified t test is compared with that computed on the basis of equation (2.9) of Clifford et al. (1989). The largest differences are observed for small numbers of locations and high autocorrelation, in particular when the latter is present with opposite sign in the two processes. Basic references that were missing in Clifford et al. (1989) are given and inherent ambiguities are discussed.
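The idea of the modified t test can be sketched in code: correlate the two processes as usual, but replace the nominal sample size with an effective sample size estimated from the spatial autocovariances of each process. The sketch below is a simplified illustration, not the exact estimator of Clifford et al. (1989); the binning scheme, the lag-product formula for the variance of the sample covariance, and the small-sample guard are all assumptions made for the example.

```python
import numpy as np

def modified_t_test(x, y, coords, n_bins=10):
    """Illustrative modified t test for two spatial processes x, y at `coords`.

    The effective sample size m_hat is built from lag-wise products of the
    two sample autocovariances (a simplified stand-in for equation (2.9)
    of Clifford et al., 1989).
    """
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    # pairwise distances between sampling locations
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    bins = np.linspace(0.0, d.max() + 1e-9, n_bins + 1)
    idx = np.digitize(d, bins) - 1
    cov_x = np.zeros(n_bins)
    cov_y = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for k in range(n_bins):
        mask = idx == k
        counts[k] = mask.sum()
        if counts[k] > 0:
            # sample autocovariance of each process within distance class k
            cov_x[k] = np.outer(x, x)[mask].mean()
            cov_y[k] = np.outer(y, y)[mask].mean()
    # approximate variance of the sample covariance from lag-wise products
    var_xy = (counts * cov_x * cov_y).sum() / n**2
    # effective sample size; guard against degenerate small/negative values
    m_hat = max((x.var() * y.var()) / var_xy, 3.0)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((m_hat - 2.0) / (1.0 - r**2))
    return r, m_hat, t
```

Under positive autocorrelation in both processes, m_hat falls below n, so the test statistic is referred to a t distribution with fewer degrees of freedom than the nominal n - 2, which is what corrects the inflated type I error of the ordinary t test.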

2003

Treatment Effect Heterogeneity in Theory and Practice


Instrumental Variables (IV) methods identify internally valid causal effects for individuals whose treatment status is manipulable by the instrument at hand. Inference for other populations requires homogeneity assumptions. This paper outlines a theoretical framework that nests causal homogeneity assumptions. These ideas are illustrated using sibling-sex composition to estimate the effect of childbearing on economic and marital outcomes. The application is motivated by American welfare reform. The empirical results generally support the notion of reduced labour supply and increased poverty as a consequence of childbearing, but evidence on the impact of childbearing on marital stability and welfare use is more tenuous.

Empirical research often focuses on causal inference for the purpose of prediction, yet it seems fair to say that most prediction involves a fair amount of guesswork. The relevance or ‘external validity’ of a particular set of empirical results is always an open question. As Karl Pearson (1911, p. 157) observed in an early discussion of the use of correlation for prediction, ‘Everything in the universe occurs but once, there is no absolute sameness of repetition.’ This practical difficulty notwithstanding, empirical research is almost always motivated by a belief that estimates for a particular context provide useful information about the likely effects of similar programmes or events in the future. Our investment of time and energy in often-discouraging empirical work reveals that empiricists like me are willing to extrapolate. The basis for extrapolation is a set of assumptions about the cross-sectional homogeneity or temporal stability of causal effects. As a graduate student, I learned about parameter stability as ‘the Lucas critique’, while my own teaching and research focuses on the identification possibilities for average causal effects in models with heterogeneous potential outcomes.
Applied micro-econometricians devote considerable attention to the question of whether homogeneity and stability assumptions can be justified and to the implications of heterogeneity for alternative parameter estimates. Regrettably, this sort of analysis sometimes comes at the expense of a rigorous examination of the internal validity of estimates, i.e., whether the estimates have a causal interpretation for the population under study. Clearly, however, even internally valid estimates are less interesting if they are completely local, i.e., have no predictive value for populations other than the directly affected group.
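The point that IV identifies an effect only for instrument-manipulable individuals can be made concrete with a small simulation. This is a generic illustration, not the paper's estimator: a binary instrument (standing in for something like sibling-sex composition), a 30% complier share, and heterogeneous treatment effects are all assumed values chosen for the example. The Wald ratio recovers the compliers' effect, not the population average.

```python
import numpy as np

def iv_wald(y, d, z):
    """Wald/IV estimate of the effect of treatment d on outcome y,
    using a binary instrument z: reduced form over first stage."""
    z1, z0 = z == 1, z == 0
    reduced_form = y[z1].mean() - y[z0].mean()  # effect of z on y
    first_stage = d[z1].mean() - d[z0].mean()   # effect of z on d
    return reduced_form / first_stage

# simulated example with heterogeneous causal effects
rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, n)                # binary instrument (assumed)
complier = rng.random(n) < 0.3           # 30% whose treatment follows z
d = np.where(complier, z, rng.integers(0, 2, n))
effect = np.where(complier, 2.0, 0.5)    # compliers: 2.0, others: 0.5
y = effect * d + rng.normal(size=n)

late = iv_wald(y, d, z)  # ≈ 2.0, the complier-specific (local) effect
```

Because non-compliers' treatment status does not respond to the instrument, their effect (0.5 here) is invisible to the estimator; extrapolating the IV estimate to them is exactly the homogeneity assumption the paper scrutinises.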

Paper Keywords

genetic algorithm; positioning system; process control; sample size; solar cell; visible light; DNA sequence; learning object; indoor positioning; received signal strength; statistical process control; indoor localization; quantum dot; indoor positioning system; count data; Hecke algebra; factorial design; IEEE standard; binding site; Escherichia coli; weighted moving average; knowledge structure; statistical quality control; Poisson structure; cell cycle; choice behavior; econometric model; quality level; exponentially weighted moving average; fractional factorial design; Saccharomyces cerevisiae; selection bias; affine Weyl group; statistical process monitoring; power conversion efficiency; dye-sensitized solar cell; charge transport; uniform resource identifier; learning object metadata; embryonic stem cell; moving average control; object class; reusable learning object; linkage disequilibrium; quantity discount; spatial process; spatial econometric model; population parameter; reusable learning object metadata; heterojunction solar cell; DNA repair; location fingerprinting; cell development; indoor positioning technique; radiation tolerance; genetic linkage; signal peptide; bulk heterojunction solar cell; DNA segment; recombination rate; DNA recombination; WiFi-based indoor localization; surface recombination; low-density lipoprotein; indoor positioning solution; proposed positioning system; surface recombination velocity;
Neisseria meningitidis; genetic heterogeneity; learning object review; DNA break; XRCC5 wt allele; XRCC5 gene; T cell receptor; V(D)J recombination; V(D)J recombination-activating protein 1; excretory function; experimental autoimmune neuritis; B-cell leukemia; DNA sequence rearrangement; immunoglobulin class switch recombination; immunoglobulin class switching; lipoprotein receptor; double-stranded DNA breaks; telomere maintenance; genome-encoded entity; VDJ recombinase; genetic recombination; crossover (genetic algorithm); meiotic recombination; homologous recombination