In recent decades, Educational and Psychological Measurement has been a very active field of academic research. Numerous new methods and procedures have been developed, and many of them are now used on a regular basis and/or are implemented in statistical software packages. But even though a solid state of knowledge has been established in many areas of Educational and Psychological Measurement, new demands and requirements are calling for new methodological answers and specific analysis procedures. Several of these demands and requirements stem from Large-Scale Assessments (LSAs). In LSAs, very large samples are examined, often with the objectives of deriving sound comparisons between quite different populations, such as countries, and of drawing far-reaching inferences. The general objective of the Programme for International Student Assessment (PISA), for example, is to answer the rather general question of how well prepared students are to participate in society. The combination of examining very large samples, the desire to compare rather different populations, and the aim to draw far-reaching interpretations creates several demanding methodological challenges. Important methodological challenges that have not yet been answered sufficiently concern aspects of complex test designs used to distribute test items to participants, the handling of unwanted item context effects on both item parameter estimates and test performance, the calibration of data sets assessed with complex study designs, and the application of computerized adaptive testing (CAT) to meet specific diagnostic needs.

The special topic "Current issues in Educational and Psychological Measurement: Design, calibration, and adaptive testing" of Psychological Test and Assessment Modeling assembles a series of research papers addressing current issues in these areas. The general methodological approach used in all papers is Item Response Theory (IRT). The special topic is spread over two issues of Psychological Test and Assessment Modeling; this issue is the first part and includes five papers.

In the first paper, entitled "Principles and procedures of considering item sequence effects in the development of calibrated item pools: Conceptual analysis and empirical illustration," Yousfi and Bohme (2012) concentrate on item context effects due to the position and the sequence in which items are presented in test booklets. After introducing a taxonomy of booklet designs, different booklet designs are compared with regard to the bias and efficiency of item parameter estimates for CAT within two simulation studies.

The second paper, entitled "On the importance of using balanced booklet designs in PISA" by Frey and Bernhardt (2012), focuses on the balanced booklet design used in PISA from the year 2003 on. The effects of a systematic distortion of the balanced booklet design structure on estimates of reading performance in different sub-populations are examined. Additionally, the question of whether students with particular characteristics are more likely to be advantaged or disadvantaged by a balanced booklet design compared to an unbalanced one is analyzed.

The third paper, entitled "A multilevel item response model for item position effects and individual persistence" by Hartig and Buchholz (2012), explicitly examines item position effects using student responses from different countries assessed in PISA 2006.
In contrast to Yousfi and Bohme (2012), who compare different booklet designs with regard to item parameter estimates within simulation studies, Hartig and Buchholz investigate individual differences in item position effects and their relationship with student performance in science. …
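To give a concrete sense of how item position effects can be modeled within IRT, consider a minimal sketch: a Rasch model extended with a linear position term and a person-specific slope. This is an illustrative specification only, not necessarily the exact model used by Hartig and Buchholz (2012):

\[
\operatorname{logit} \Pr(X_{pi} = 1) = \theta_p - \beta_i + (\gamma + \delta_p)\,\mathrm{pos}_{pi},
\]

where \(\theta_p\) denotes the ability of person \(p\), \(\beta_i\) the difficulty of item \(i\), \(\mathrm{pos}_{pi}\) the (e.g., centered) position at which person \(p\) encounters item \(i\), \(\gamma\) the average position effect, and \(\delta_p\) a person-specific deviation capturing individual persistence. A negative value of \(\gamma + \delta_p\) for a given person indicates declining performance toward the end of the test.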
[1] Andreas Frey et al. (2012). On the Importance of Using Balanced Booklet Designs in PISA.
[2] Effect of Item Order on Item Calibration and Item Bank Construction for Computer Adaptive Tests (2013).
[3] Johannes Hartig et al. (2012). A multilevel item response model for item position effects and individual persistence.
[4] Andreas Frey et al. (2013). The Sequential Probability Ratio Test for Multidimensional Adaptive Testing with Between-Item Multidimensionality.
[5] Andreas Frey et al. (2013). Too hard, too easy, or just right? The relationship between effort or boredom and ability-difficulty fit.
[6] Biased (conditional) parameter estimation of a Rasch model calibrated item pool administered according to a branched testing design (2012).
[7] Safir Yousfi et al. (2012). Principles and procedures of considering item sequence effects in the development of calibrated item pools: Conceptual analysis and empirical illustration.
[8] Jeffrey M. Patton et al. (2012). Capitalization on chance in variable-length classification tests employing the Sequential Probability Ratio Test.