Collecting and analyzing data in multidimensional scaling experiments: A guide for psychologists using SPSS

Gyslain Giguère
Université du Québec à Montréal

This paper aims to provide a quick and simple guide to using a multidimensional scaling procedure to analyze experimental data. First, the operations of data collection and preparation are described. Next, instructions for data analysis using the ALSCAL procedure (Takane, Young, & de Leeuw, 1977), found in SPSS, are detailed. Overall, a description of useful commands, measures, and graphs is provided. Emphasis is placed on experimental designs and program use, rather than on describing the techniques in algebraic or geometric terms.

Author's note: The author would like to thank Sébastien Hélie for comments on this paper, as well as the Fonds québécois pour la recherche sur la nature et les technologies (NATEQ) for financial support in the form of a scholarship.

In science, being able to synthesize data using a smaller number of descriptors constitutes the first step toward understanding. Hence, when one must extract useful information from a complex situation involving many hypothetical variables and a large database, it is convenient to be able to rely on statistical methods that help make sense of the data by extracting the structure hidden in it (Kruskal & Wish, 1978). Torgerson (1952), among others, proposed such a method, called multidimensional scaling (MDS). At the time, he believed that while psychophysical measures were appropriate for certain experimental situations in which comparing dimension values is fairly objective (Weber's law and the just noticeable difference paradigm, for example), most situations encountered by experimental psychologists are ones in which neither the identity nor the number of psychologically relevant dimensions underlying the data set is known beforehand.

In essence, MDS is a technique used to determine an n-dimensional space, and corresponding coordinates for a set of objects, strictly from matrices of pairwise dissimilarities between these objects. When only one matrix is used, the problem is akin to eigenvector or singular value decomposition in linear algebra, and there is an exact solution. When several matrices are used, there is no unique solution, and the complexity of the model calls for an algorithm based on numerical analysis. This algorithm finds a set of orthogonal dimensions in an iterative fashion, gradually transforming the space to reduce the discrepancies between the inter-object distances in the proposed space and the corresponding scaled original pairwise dissimilarities between these objects. A classic example, found in virtually all introductory books on multidimensional scaling (see for …
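To make the single-matrix case described above more concrete, the following minimal sketch applies the classical (Torgerson) solution in Python with NumPy; it lies outside the SPSS workflow that is the focus of this paper, and the dissimilarity matrix and the choice of two retained dimensions are purely hypothetical illustrations. The squared dissimilarities are double-centered and the resulting matrix is eigendecomposed; the leading eigenvectors, scaled by the square roots of their eigenvalues, give the object coordinates.

import numpy as np

# Hypothetical symmetric matrix of pairwise dissimilarities among 4 objects
D = np.array([[0.0, 2.0, 5.0, 6.0],
              [2.0, 0.0, 4.0, 5.0],
              [5.0, 4.0, 0.0, 2.0],
              [6.0, 5.0, 2.0, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centered squared dissimilarities

# Eigendecomposition gives the exact solution for this single-matrix case
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]     # sort eigenvalues in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                 # number of retained dimensions (illustrative)
X = eigvecs[:, :k] * np.sqrt(np.clip(eigvals[:k], 0, None))  # object coordinates
print(np.round(X, 3))

The multiple-matrix case, for which no such exact solution exists, is precisely the situation handled by the iterative ALSCAL procedure whose use in SPSS is detailed later in this paper.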