Two Quantitative Approaches for Estimating Content Validity
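Judging from the works cited below — Lynn's content validity index (CVI) methodology [9] together with the kappa literature of Cohen [7], Fleiss [1], and Landis and Koch [31] — the two quantitative approaches appear to be a proportion-agreement index (the item-level CVI) and a chance-corrected, multirater kappa coefficient. The Python sketch below is only an illustration of those two indices under that assumption, not the authors' own procedure; the function names (`item_cvi`, `multirater_kappa`) and the sample panel data are hypothetical, and the kappa shown is the Fleiss-style coefficient rather than whatever exact variant the paper reports.

```python
# Illustrative sketch only: an item-level content validity index (I-CVI)
# and a Fleiss-style multirater kappa for a panel of experts rating item
# relevance on the usual 4-point scale (1 = not relevant ... 4 = highly relevant).
# Function names and the sample panel below are hypothetical.

from typing import List, Sequence


def item_cvi(ratings: Sequence[int]) -> float:
    """Proportion of experts rating the item 3 or 4 (proportion-agreement CVI)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)


def multirater_kappa(item_ratings: List[List[int]],
                     categories: Sequence[int] = (1, 2, 3, 4)) -> float:
    """Fleiss-style kappa: chance-corrected agreement across items,
    each item rated by the same number of experts."""
    n_items = len(item_ratings)
    n_raters = len(item_ratings[0])

    # Overall proportion of ratings falling in each category.
    totals = {c: 0 for c in categories}
    for ratings in item_ratings:
        for r in ratings:
            totals[r] += 1
    p_j = {c: totals[c] / (n_items * n_raters) for c in categories}

    # Observed agreement per item: pairs of raters who chose the same category.
    per_item = []
    for ratings in item_ratings:
        agree_pairs = sum(ratings.count(c) * (ratings.count(c) - 1) for c in categories)
        per_item.append(agree_pairs / (n_raters * (n_raters - 1)))

    p_bar = sum(per_item) / n_items          # mean observed agreement
    p_e = sum(p ** 2 for p in p_j.values())  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Five items, each rated for relevance by the same four experts (made-up data).
    panel = [
        [4, 4, 3, 4],
        [3, 4, 4, 4],
        [2, 3, 3, 4],
        [4, 4, 4, 4],
        [1, 2, 3, 3],
    ]
    for i, ratings in enumerate(panel, start=1):
        print(f"Item {i}: I-CVI = {item_cvi(ratings):.2f}")
    print(f"Multirater (Fleiss-style) kappa: {multirater_kappa(panel):.3f}")
```

The contrast between the two numbers is the point made repeatedly in the kappa literature cited here [7], [25], [31]: the CVI is a raw proportion of agreement, while kappa discounts the agreement a panel would reach by chance alone, so it is typically the more conservative of the two estimates.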
[1] J. Fleiss. Measuring nominal scale agreement among many raters. 1971.
[2] L. Davis. Instrument review: Getting the most from a panel of experts. 1992.
[3] P. Prescott, et al. Issues in the Use of Kappa to Estimate Reliability. Medical Care, 1986.
[4] M. Topf, et al. Three estimates of interrater reliability for nominal data. Nursing Research, 1986.
[5] E. G. Carmines, et al. Reliability and Validity Assessment. 1979.
[6] S. E. Fienberg, et al. Discrete Multivariate Analysis: Theory and Practice. 1976.
[7] J. Cohen. A Coefficient of Agreement for Nominal Scales. 1960.
[8] D. Cicchetti. On a model for assessing the security of infantile attachment: Issues of observer reliability and validity. Behavioral and Brain Sciences, 1984.
[9] M. Lynn. Determination and quantification of content validity. Nursing Research, 1986.
[10] P. Prescott, et al. Nursing Intensity: Going Beyond Patient Classification. The Journal of Nursing Administration, 1992.
[11] B. Wildman, et al. A probability-based formula for calculating interobserver agreement. Journal of Applied Behavior Analysis, 1977.
[12] J. A. Wakefield. Relationship Between Two Expressions of Reliability: Percentage Agreement and Phi. 1980.
[13] C. Waltz, et al. Nursing Research: Design, Statistics, and Computer Analysis. 1981.
[14] C. Antonakos, et al. Using measures of agreement to develop a taxonomy of passivity in dementia. Research in Nursing & Health, 2001.
[15] R. L. Anders, et al. Development of a scientifically valid coordinated care path. The Journal of Nursing Administration, 1997.
[16] D. Weiss, et al. Interrater reliability and agreement of subjective judgments. 1975.
[17] B. Garvin, et al. Reliability in Category Coding Systems. Nursing Research, 1988.
[18] The osteoporosis risk assessment tool: establishing content validity through a panel of experts. Applied Nursing Research, 2002.
[19] S. Siegel, et al. Nonparametric Statistics for the Behavioral Sciences.
[20] S. Summers. Establishing the reliability and validity of a new instrument: pilot testing. Journal of Post Anesthesia Nursing, 1993.
[21] D. P. Hartmann. Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 1977.
[22] J. R. Landis, et al. An application of kappa-type analyses to interobserver variation in classifying chest radiographs for pneumoconiosis. Statistics in Medicine, 1984.
[23] Interpreting kappa values for two-observer nursing diagnosis data. Research in Nursing & Health, 1997.
[24] V. Martuza. Applying norm-referenced and criterion-referenced measurement in education. 1977.
[25] W. Willett, et al. Misinterpretation and misuse of the kappa statistic. American Journal of Epidemiology, 1987.
[26] C. Waltz, et al. Measurement in nursing research. 1984.
[27] A. House, et al. Measures of interobserver agreement: Calculation formulas and distribution effects. 1981.
[28] H. K. Suen, et al. Analyzing Quantitative Behavioral Observation Data. 1989.
[29] D. P. Hartmann. Child behavior analysis and therapy. 1975.
[30] T. P. Hutchinson. Focus on Psychometrics. Kappa muddles together two sources of disagreement: tetrachoric correlation is preferable. Research in Nursing & Health, 1993.
[31] J. R. Landis, et al. The measurement of observer agreement for categorical data. Biometrics, 1977.
[32] P. Brennan, et al. The kappa statistic for establishing interrater reliability in the secondary analysis of qualitative clinical data. Research in Nursing & Health, 1992.