Open access
Research article
First published online February 3, 2025

Multidimensional Scaling of the Wechsler Intelligence Scales for Children in a Clinical Sample Assessed for Fetal Alcohol Spectrum Disorder (FASD)

Abstract

Fetal Alcohol Spectrum Disorder (FASD) is a significant public health concern arising from prenatal alcohol exposure. This study examines the clinical utility of Wechsler intelligence tests in assessing cognition in 108 children with confirmed prenatal alcohol exposure. Data were analysed using multidimensional scaling and Guttman’s Structural Model of Intelligence, with a view to assessing the application of the Wechsler Intelligence Scale for Children 4th Edition (WISC-IV) and 5th Edition (WISC-V) in characterising cognitive ability for this clinical population. WISC-IV and WISC-V subtests exhibited distinct clustering patterns within the sample compared to normative populations. Subtests appeared to cluster based on response modality, aligning with Guttman’s Structural Model of Intelligence. The findings demonstrate an alternative interpretation approach for intelligence tests in children with prenatal alcohol exposure, which may complement existing FASD diagnostic frameworks. The clustering patterns underscore the importance of considering response modality in understanding cognitive abilities.

Introduction

Alcohol is a potent teratogen that readily crosses the placenta to disrupt prenatal development (Mattson et al., 2019). Fetal Alcohol Spectrum Disorder (FASD) is a diagnostic term used to encapsulate the various behavioural, physiological, and neurodevelopmental challenges arising from prenatal alcohol exposure (Shelton et al., 2018). FASD is a significant public health concern and a leading cause of developmental disability in the Western world (Astley, 2004), affecting approximately 7.7 per 1000 children in the general population (Lange et al., 2017). Although children with prenatal alcohol exposure may present with a range of physical, neurological, and psychological abnormalities, cognitive deficits are the most consistent feature of FASD, with impairment most frequently observed in working memory, attentional processes, executive functions, visuospatial reasoning, language, and general intelligence (Coriale et al., 2013; Shelton et al., 2018). Despite this, the occurrence and severity of cognitive impairment vary significantly between cases, posing an obstacle to the clinical assessment of FASD (Benz et al., 2009).
FASD is underrecognised globally, with inconsistent assessment approaches and limited access to diagnostic services posing barriers to identification (Hayes et al., 2023). As a consequence, many children with prenatal alcohol exposure may be at risk of receiving insufficient support for the neurodevelopmental challenges associated with FASD. Establishing sophisticated diagnostic considerations for the clinical assessment of FASD is therefore imperative to better equip clinicians to disentangle the complex phenotypes associated with prenatal alcohol exposure. International guidelines currently inform clinical practice (Astley, 2004; Bower et al., 2016; Cook et al., 2016), though they are not without criticism (Hayes et al., 2022; McLennan & Braunberger, 2017). For instance, stakeholders have identified a disconnect between the conceptualisations of cognitive functions described by international diagnostic guidelines and the neuropsychological theory underpinning the assessment tools recommended to assess latent cognitive constructs in children with prenatal alcohol exposure (Hayes et al., 2022; McLennan & Braunberger, 2017). The result of this divide is the underutilisation of rich psychometric research that could better inform the assessment of FASD-related cognitive deficits. Bridging the gap between psychometric theory and current clinical guidelines is critical to improving the consistency and accuracy of FASD diagnosis. Such changes may reduce reliance on clinical intuition, allowing for a standardised and systematic approach to patient-centred care.
The Cattell-Horn-Carroll (CHC) theory of human cognition is a psychometric taxonomy that emphasises the covariance between aspects of intelligence and cognitive functionality (Caemmerer et al., 2020). The CHC model is organised hierarchically, with stratum III representing the general factor of intelligence or ‘g’, stratum II representing 16 broad cognitive abilities, and stratum I representing over 80 narrow abilities (Jewsbury et al., 2017; McGill & Dombrowski, 2019). The CHC model is largely congruent with many existing intelligence tests (Flanagan & Alfonso, 2017; Flanagan et al., 2013), including the Wechsler Intelligence Scales for Children (WISC), commonly recommended as a cognitive assessment tool for FASD diagnostic purposes (Bower et al., 2016; Cook et al., 2016).
While the fourth edition of the WISC (WISC-IV) is typically interpreted through four primary indices (Wechsler, 2003), CHC-derived models propose a five-factor structure, with the primary subtests purported to measure fluid reasoning (Gf), visuospatial processing (Gv), crystallised intelligence (Gc), short-term memory (Gsm), and processing speed (Gs) (Flanagan et al., 2013; Keith et al., 2006). Despite deviating from the standard WISC-IV interpretation, the invariance of the CHC-derived five-factor structure is supported cross-contextually (Chen et al., 2009; Golay et al., 2013; McGill & Canivez, 2018) and is suggested to retain validity irrespective of neurodevelopmental and intellectual disability (Weiss et al., 2013). Meanwhile, the fifth edition of the WISC (WISC-V) organises the test into five primary indices representing visual processing (Gv), fluid reasoning (Gf), crystallised intelligence (Gc), short-term memory (Gsm), and processing speed (Gs) CHC abilities (Chen et al., 2015; Wechsler, 2014). The invariance of this structure in neurodevelopmentally disordered populations is supported by the WISC-V administration manual, which details the test’s utility in a normative sample that includes a proportion of children with various special education classifications (i.e. Intellectual Disability, Specific Learning Disorder, Autism Spectrum Disorder, Attention-Deficit/Hyperactivity Disorder, and children considered intellectually gifted; Wechsler, 2014). Table 1 outlines the proposed congruency between the CHC model and the WISC primary indices/subtests.
Table 1. Proposed Alignment Between WISC Primary Indices, Subtests and CHC Factors.
Standard WISC-IV indices    | WISC-IV subtests         | CHC stratum II domain
Verbal Comprehension Index  | Similarities             | Gc
                            | Vocabulary               | Gc
                            | Comprehension            | Gc
Perceptual Reasoning Index  | Block design             | Gv
                            | Picture concepts         | Gf
                            | Matrix reasoning         | Gf
Working Memory Index        | Digit-span               | Gsm
                            | Letter-number sequencing | Gsm
Processing Speed Index      | Coding                   | Gs
                            | Symbol search            | Gs

Standard WISC-V indices     | WISC-V subtests          | CHC stratum II domain
Verbal Comprehension Index  | Similarities             | Gc
                            | Vocabulary               | Gc
Visual Spatial Index        | Block design             | Gv
                            | Visual puzzles           | Gv
Fluid Reasoning Index       | Matrix reasoning         | Gf
                            | Figure Weights           | Gf
Working Memory Index        | Digit-span               | Gsm
                            | Picture-span             | Gsm
Processing Speed Index      | Coding                   | Gs
                            | Symbol search            | Gs
Note. Abbreviations represent stratum II cognitive domains, where Gc = Crystallised intelligence, Gf = Fluid intelligence, Gv = Visual processing, Gsm = Short-term memory, Gs = Processing speed. For WISC-IV subtest alignment to CHC factors, see Keith et al. (2006) and Weiss et al. (2013). For WISC-V subtest alignment to CHC factors, see Wechsler (2014).
Since both WISC-IV and WISC-V CHC-derived models assert construct validity among neurodevelopmentally disordered populations, the purported structure of these tests should be retained when examining children with prenatal alcohol exposure, in whom diverse and complex impairments across cognitive functions are common. To assess this, a robust investigation into the CHC-derived factor structure of the WISC-IV and WISC-V models in children with prenatal alcohol exposure is warranted. Such research will help clarify whether the current diagnostic paradigm, grounded in a CHC theoretical framework, is valid, reliable, and useful to clinical practice.

Multidimensional Scaling and the WISC Models

While factor analytic (FA) techniques are typically employed to examine the factor structure of hierarchical intelligence tests, they require large, stratified samples that are not generally available for discrete clinical conditions. Multidimensional scaling (MDS) has been proposed as a complementary methodology to explore interrelationships between latent variables in psychometric assessments (Frisby & Kim, 2008; Joshanloo & Weijers, 2019). MDS techniques are particularly useful in small samples that depart from typical assumptions of normality and variance. MDS converts multivariate correlational data into Cartesian coordinates that graphically represent the relative proximities between pairs of correlated variables in geometric space (Davison & Sireci, 2000; Jaworska & Chupetlovska-Anastasova, 2009). Variable pairs with stronger correlations are represented in closer proximity, while weakly correlated variables are more distal in geometric space (Groenen & Borg, 2013; Joshanloo & Weijers, 2019).
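To make the geometry concrete, the short sketch below (written in Python with scikit-learn, rather than the SPSS PROXSCAL procedure used in this study) embeds a small, entirely hypothetical correlation matrix among four subtests in two dimensions; the subtest labels and correlation values are illustrative assumptions only.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical correlations among four subtests (symmetric, unit diagonal).
subtests = ["SIM", "VOC", "BD", "MR"]
R = np.array([
    [1.00, 0.70, 0.35, 0.40],
    [0.70, 1.00, 0.30, 0.38],
    [0.35, 0.30, 1.00, 0.65],
    [0.40, 0.38, 0.65, 1.00],
])

# Convert correlations to dissimilarities: strongly correlated subtests get
# small distances and should therefore land close together in the solution.
D = 1.0 - R

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for name, (x, y) in zip(subtests, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

In this toy example the two strongly intercorrelated verbal subtests (SIM, VOC) and the two strongly intercorrelated visual subtests (BD, MR) form separate pairs in the resulting configuration, which is the kind of proximity pattern the analyses below interpret.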
Guttman’s Structural Model of Intelligence provides an interpretive framework for understanding MDS coordinate configurations derived from intelligence tests (Adler & Guttman, 1982; Cohen et al., 2006; L. Guttman & Levy, 1991). This model graphically partitions the MDS visual output into a circle (radex) or cylinder, depending on whether the solution is two- or multidimensional, respectively (Cohen et al., 2006; Guttman & Levy, 1991). Guttman initially proposed that the radex, irrespective of dimensionality, was interpretable along two covarying components, the simplex and the circumplex (Guttman & Levy, 1991; Marshalek et al., 1983).
The simplex portrays a linear distribution of variables ordered according to inherent task complexity; the closeness of a task to the centre of the simplex implies increased recruitment of differing task-related cognitive abilities. Abstract tasks requiring a higher level of inferential ability employ a higher number of cognitive skills, and as a result, display a stronger correlation to other cognitive abilities, placing them closer to the centre of the simplex (Marshalek et al., 1983; Meyer & Reynolds, 2018). In hierarchical models of intelligence, test components sharing greater common variance with psychometric g are considered higher in complexity and exhibit higher centrality along the simplex, while tasks that exhibit more unique variance are considered less complex and are located toward the periphery of the radex (Marshalek et al., 1983; Meyer & Reynolds, 2018). In both the WISC-IV and WISC-V, subtests that measure Gf, Gv, and Gc domains generally exhibit higher correlations to ‘g’ and would be expected to appear closest to the centre of the radex, while subtests that measure Gsm and Gs domains generally present with a lower association to ‘g’, suggesting that these points would appear toward the periphery of the radex (Chen et al., 2015; Keith et al., 2006; Weiss et al., 2013).
The circumplex, on the other hand, portrays the clustering of variables around the geometric centre of the radex according to shared characteristics or content (Davison & Sireci, 2000; Meyer, 2021). In the context of intelligence tests, subtests that are highly correlated will cluster into a similar region of geometric space. When using statistical analyses such as smallest space analysis, WISC subtest clusters have historically conformed to surface content features such as test administration format (numerical, geometric/pictorial, or verbal) or response modality (oral, manual, or pencil) (Cohen et al., 2006; L. Guttman & Levy, 1991; R. Guttman & Greenbaum, 1998). However, a recent MDS investigation using the WISC-V standardisation sample demonstrated a robust two-dimensional MDS solution with subtest clustering around the circumplex consistent with a CHC factor structure (Meyer & Reynolds, 2018). This study demonstrated that MDS in conjunction with the radex model could provide a meaningful interpretive framework for replicating FA findings. Such a framework holds particular importance for clinical research, where samples are small and may present with abnormal characteristics that violate parametric FA assumptions.
The present study replicated the methodology outlined in Meyer and Reynolds (2018), applying MDS to examine subtest complexity and clustering patterns of the WISC-IV and WISC-V in a cohort of children with prenatal alcohol exposure. MDS outputs were examined within the context of the radex model and prior literature to provide a preliminary overview of WISC-IV and WISC-V test structure in children with prenatal alcohol exposure. Given the purported validity of the WISC models among neurodevelopmentally impaired populations, we hypothesise that the linear distribution of primary subtests along the simplex will generally correspond to ‘g’ loadings outlined within the existing psychometric literature. We also hypothesise that the clustering of primary subtests around the circumplex will conform to the clinically invariant CHC structure of the WISC-IV and WISC-V (outlined in Table 1).

Methodology

The present study utilised archival data collected by a tertiary FASD clinic located in Queensland, Australia, between 2014 and 2018. Data were extracted from a de-identified research database containing information recorded in medical records, parallel files, and computer scoring programs as part of the standard clinical care model. The sample included children referred to the clinic due to behavioural problems and/or suspected neurodevelopmental problems, and a background of confirmed prenatal alcohol exposure. Demographic and background information were collected during the detailed clinical intake interview. A standardised assessment of cognition, utilising either the WISC-IV or the WISC-V A&NZ, was administered by a clinical neuropsychologist or clinical psychologist, consistent with recommendations in the Australian Guide to the Diagnosis of FASD (Bower et al., 2016). The ten core subtests of the WISC-IV and WISC-V were administered following the standardised administration procedures and scored using standard scoring procedures.
When required, WISC administration was adjusted in line with standardised procedures specified in the administration manuals to accommodate attention, language, motor, and other delays. A range of common accommodations were also used when necessary, including regular breaks or compliance-reward activities between subtests (e.g. playing a quick card game). Less common adjustments included the presence of support persons for emotional regulation, adjustable furniture for musculoskeletal difficulties, and adjustments to lighting and ambient sound for sensory issues. The full WISC was administered to all children. However, in accordance with standardised administration and scoring procedures, subtests were awarded a raw score of zero in cases where children engaged with a subtest but could not complete items, and no score was recorded when a child could not validly engage with a subtest. Supplementary subtests were administered only when clinically necessary; these data were used to inform clinical decision-making but were not included in the current study.

Materials

The WISC-IV and WISC-V A&NZ were standardised on normative samples of Australian children between the ages of 6:0 and 16:11. The normative samples included a proportion of children from various special education classifications, including those with Intellectual Disability, Specific Learning Disorder (Reading and/or Written Expression), Autism Spectrum Disorder with Language Disorder, Attention-Deficit/Hyperactivity Disorder, and Gifted or Talented children (Wechsler, 2003, 2014). The WISC-IV comprises 10 primary subtests, which form four primary indices; this structure is reported to have robust internal reliability and construct validity (Wechsler, 2003). The WISC-V comprises 10 primary subtests, which form five primary indices; this structure is reported within the WISC-V technical and interpretive manual to have robust internal reliability and construct validity (Wechsler, 2014).

Statistical Analysis

The PROXSCAL function in SPSS version 28.0.1.0 was used for the MDS analysis to examine the primary subtests for each battery. Initial PROXSCAL specifications were configured to replicate the process and solution of Meyer and Reynolds (2018). A Torgerson start was used to generate the initial MDS configuration. Squared Euclidean distances were selected as the proximity measure to generate the dissimilarity matrix for coordinate positioning in multidimensional geometric space (Meyer & Reynolds, 2018). An iterative approach to data rescaling and dimensionality, using both metric (interval) and non-metric (ordinal) proximity transformations, was utilised (Borg & Mair, 2017). To ensure the most robust solution was found, scree plots illustrating dimensionality by raw stress were used to determine the optimal n-dimensional solution, with the maximum number of dimensions plotted being the total number of variables used in the analysis minus one (n - 1) (Jaworska & Chupetlovska-Anastasova, 2009).
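As a rough illustration of this dimensionality search, the sketch below reproduces the same logic in Python with scikit-learn rather than PROXSCAL, so implementation details such as the Torgerson start and the exact stress normalisation differ. The data frame of scaled scores is a randomly generated stand-in, not the clinical data.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

# Hypothetical frame of scaled scores, one column per WISC-IV primary subtest.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(8, 3, size=(87, 10)),
                  columns=["SIM", "VOC", "COM", "BD", "PC",
                           "MR", "DS", "LNS", "CD", "SS"])

# Dissimilarities derived from the subtest intercorrelations.
D = 1.0 - df.corr().to_numpy()

# Fit ordinal (non-metric) solutions in 1 to n - 1 dimensions and record stress.
n_items = D.shape[0]
dims = range(1, n_items)
stress_by_dim = []
for k in dims:
    mds = MDS(n_components=k, metric=False, dissimilarity="precomputed",
              n_init=10, max_iter=500, random_state=0)
    mds.fit(D)
    stress_by_dim.append(mds.stress_)

# Scree plot of stress against dimensionality, used to choose the simplest
# solution that still fits adequately.
plt.plot(list(dims), stress_by_dim, marker="o")
plt.xlabel("Number of dimensions")
plt.ylabel("Stress")
plt.title("Scree plot of stress by dimensionality")
plt.show()
```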
Model fit and interpretability are two key considerations for determining dimensional suitability (Davison & Sireci, 2000). Increasing the number of dimensions in the model typically improves model fit but sacrifices interpretative parsimony (Davison & Sireci, 2000; Groenen & Borg, 2013). Hence, the simplest statistically robust dimensional solution is optimal for interpretation (Joshanloo & Weijers, 2019). To determine model fit, ‘goodness of fit’ and ‘badness of fit’ metrics were utilised (Davison & Sireci, 2000; Jaworska & Chupetlovska-Anastasova, 2009). Kruskal’s Stress 1, provided by PROXSCAL, was used as the loss function to determine ‘badness of fit’, with Stress 1 ≤ .10 indicating optimal model fit, ≤ .15 indicating acceptable model fit, and ≥ .20 indicating poor model fit (Kruskal & Wish, 1978). To maintain consistency with Meyer and Reynolds (2018), Stress 1 ≤ .15 was selected as the criterion for model fit. Dispersion Accounted For (DAF) was used to assess ‘goodness of fit’, with DAF > .60 selected as the criterion for acceptable model fit (Borg & Mair, 2017).
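Continuing the sketch above, the fragment below shows one way these two indices can be approximated from a fitted configuration. PROXSCAL reports both directly and normalises the disparities internally, so the values here are only indicative; for an ordinal fit, monotonically transformed disparities would replace the raw dissimilarities.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def fit_indices(dissim, coords):
    """Approximate Kruskal's Stress 1 and Dispersion Accounted For (DAF),
    treating the input dissimilarities as the disparities."""
    d_hat = squareform(dissim, checks=False)   # target disparities (condensed)
    d_fit = pdist(coords)                      # distances in the configuration
    sq_err = np.sum((d_hat - d_fit) ** 2)
    stress1 = np.sqrt(sq_err / np.sum(d_fit ** 2))
    daf = 1.0 - sq_err / np.sum(d_hat ** 2)    # 1 minus normalised raw stress
    return stress1, daf

# Two-dimensional metric solution for the dissimilarity matrix D defined above.
mds = MDS(n_components=2, metric=True, dissimilarity="precomputed",
          n_init=10, max_iter=500, random_state=0)
coords = mds.fit_transform(D)
s1, daf = fit_indices(D, coords)
print(f"Stress 1 = {s1:.3f}, DAF = {daf:.2f}")   # compare with <= .15 and > .60
```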

Results

The WISC-IV sample (n = 87) consisted of 56 males (64.4%) and 31 females (35.6%), with ages at the time of assessment ranging from 6.92 to 13.17 years (M = 9.05, SD = 1.64). The WISC-V sample (n = 21) consisted of 16 males (76.2%) and 5 females (23.8%), with ages at the time of assessment ranging from 7.25 to 13.80 years (M = 9.07, SD = 1.66). A missing value analysis identified approximately 5% missing data across all WISC-IV/V subtests, indicating that the dataset was suitable for analysis. PROXSCAL employs listwise exclusion of cases with missing data, and Little’s Missing Completely at Random (MCAR) test was used to validate this exclusion. The MCAR test was not significant (p > .05) for either the WISC-IV or the WISC-V sample, indicating that data were missing completely at random. Tables 2 and 3 outline descriptive and demographic statistics for the WISC-IV and WISC-V samples.
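A minimal pandas sketch of this screening step is given below; the file name and column layout are hypothetical, and Little's MCAR test itself (run in SPSS for this study) is not reproduced.

```python
import pandas as pd

# Hypothetical file holding one column of scaled scores per WISC-IV subtest.
df = pd.read_csv("wisc_iv_subtests.csv")

# Percentage of missing observations per subtest.
print((df.isna().mean() * 100).round(1))

# Listwise exclusion, mirroring how PROXSCAL handles cases with missing data.
complete_cases = df.dropna()
print(f"{len(complete_cases)} of {len(df)} cases retained for the MDS analysis")
```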
Table 2. WISC-IV and WISC-V Subtest Descriptive Statistics and Abbreviations Utilised in Analysis.
WISC-IV subtests         | Abbreviation | Mean | SD   | Min | Max
Similarities             | SIM          | 7.25 | 3.67 | 1   | 16
Vocabulary               | VOC          | 6.98 | 2.54 | 1   | 14
Comprehension            | COM          | 7.05 | 2.52 | 1   | 13
Block design             | BD           | 8.42 | 2.87 | 3   | 14
Picture concepts         | PC           | 7.81 | 3.08 | 1   | 15
Matrix reasoning         | MR           | 8.63 | 3.02 | 2   | 16
Digit-span               | DS           | 7.16 | 3.20 | 1   | 15
Letter-number sequencing | LNS          | 7.11 | 2.88 | 1   | 14
Coding                   | CD           | 6.99 | 3.44 | 1   | 17
Symbol search            | SS           | 7.67 | 3.29 | 1   | 16

WISC-V subtests          | Abbreviation | Mean | SD   | Min | Max
Similarities             | SIM          | 7.95 | 3.25 | 1   | 14
Vocabulary               | VOC          | 7.81 | 3.40 | 3   | 14
Block design             | BD           | 7.90 | 3.30 | 1   | 13
Visual puzzles           | VP           | 9.10 | 3.53 | 2   | 14
Matrix reasoning         | MR           | 8.52 | 2.94 | 1   | 12
Figure Weights           | FW           | 8.19 | 2.96 | 4   | 14
Digit-span               | DS           | 7.10 | 2.97 | 1   | 15
Picture-span             | PS           | 7.71 | 3.44 | 1   | 15
Coding                   | CD           | 8.14 | 3.64 | 1   | 19
Symbol search            | SS           | 8.43 | 2.82 | 1   | 14
Note: Minimum and maximum scores are subtest scaled scores. Means and standard deviations were calculated from scaled scores.
Table 3. WISC-IV and WISC-V Diagnostic Data.
                                                    | WISC-IV sample, Frequency (%) | WISC-V sample, Frequency (%)
FASD diagnosis                                      |                               |
  FASD positive                                     | 72 (82.8)                     | 15 (71.4)
  FASD negative                                     | 15 (17.2)                     | 6 (28.6)
Medicated at the time of assessment                 |                               |
  Yes                                               | 45 (51.7)                     | 13 (61.9)
  No                                                | 42 (48.3)                     | 8 (38.1)
Identified comorbidities                            |                               |
  Yes                                               | 64 (73.6)                     | 16 (76.2)
  No                                                | 22 (25.3)                     | 5 (23.8)
  Unknown                                           | 1 (1.1)                       | --
Identified genetic disorder                         |                               |
  Genetic assessment positive                       | 6 (6.9)                       | 1 (4.8)
  Genetic assessment negative                       | 15 (17.2)                     | 3 (14.3)
  Not assessed                                      | 66 (75.9)                     | 17 (81.0)
Attention deficit hyperactivity disorder diagnosis  |                               |
  Yes                                               | 19 (21.8)                     | 16 (76.2)
  No                                                | 67 (77.0)                     | 5 (23.8)
  Unknown                                           | 1 (1.1)                       | --
Diagnosed language disorder                         |                               |
  Yes                                               | 3 (3.4)                       | 5 (23.8)
  No                                                | 83 (95.4)                     | 16 (76.2)
  Unknown                                           | 1 (1.1)                       | --
Other diagnosed disorder                            |                               |
  Yes                                               | 24 (27.6)                     | 10 (47.6)
  No                                                | 62 (71.3)                     | 11 (52.4)
  Unknown                                           | 1 (1.1)                       | --
Risk of prenatal alcohol exposure                   |                               |
  Confirmed exposure                                | 39 (44.8)                     | 11 (52.4)
  Confirmed high-risk exposure                      | 41 (47.1)                     | 10 (47.6)
  Unknown exposure (a)                              | 7 (8.0)                       | --
Delayed developmental milestones                    |                               |
  Yes                                               | 49 (56.3)                     | 8 (38.1)
  No                                                | 28 (32.2)                     | 10 (47.6)
  Unknown                                           | 10 (11.5)                     | 3 (14.3)
Level of cognitive function                         |                               |
  Normal                                            | 25 (28.7)                     | 9 (42.9)
  Moderate impairment                               | 23 (26.4)                     | 6 (28.6)
  Severe impairment                                 | 39 (44.8)                     | 6 (28.6)
(a) Unknown exposure, in accordance with the Australian Diagnostic Guidelines for FASD (2016), represents confirmed prenatal alcohol exposure where the quantity, type, and duration of exposure are not formally quantifiable.

Multidimensional Scaling Analysis

Scree plots confirmed suitable two- and three-dimensional solutions for both WISC-IV and WISC-V metric and non-metric analyses. All solutions and configurations met the a priori specified model-fit statistics (Stress 1 ≤ .15; DAF >.60). In this instance, three-dimensional non-metric solutions provided better model-fit statistics; however, two-dimensional non-metric solutions were selected for interpretation as they represented a more parsimonious explanation.

WISC-IV: Interpretation of Subtest Positioning in Relation to CHC-Derived Domains

Model fit for the WISC-IV two-dimensional non-metric MDS solution was deemed acceptable according to the a priori criteria (Stress 1 = .09), with 99% dispersion accounted for (DAF = .99). The WISC-IV MDS output revealed a simplex representation of subtests that was inconsistent with expectations of CHC-derived WISC-IV subtest complexity. Subtests representing Gc (VOC, COM), Gsm (LNS), Gf (PC), Gv (BD), and Gs (SS) appeared in the centre of the distribution, indicating greater cognitive complexity of these tasks within the sample. Meanwhile, SIM (Gc), DS (Gsm), MR (Gf), and CD (Gs) appeared toward the periphery of the distribution, indicating comparatively lower complexity of these tasks within the sample.
Subtest clustering around the circumplex was also inconsistent with expectations of CHC-derived subtest interrelations. The sample demonstrated unique clustering for Gf and Gv subtests, with BD (a Gv subtest) in close proximity to the Gf subtests (MR and PC). Clustering of Gc (VOC and COM) and Gsm (LNS) subtests was also observed; however, this was not consistent, with SIM (Gc) and DS (Gsm) subtests demonstrating some dissociation from this cluster. Gs subtests (CD and SS) did not cluster together, with CD appearing distant from all other subtests. Figure 1 below illustrates the two-dimensional non-metric MDS output of WISC-IV subtests.
Figure 1. Two-dimensional non-metric MDS output of WISC-IV subtests. Note: Circles orient subtest complexity (inner circle = most complex, outer circle = moderately complex), lines illustrate within-domain constellations of subtests, ellipses illustrate between-domain clusters.

WISC-V: Interpretation of Subtest Positioning in Relation to CHC-derived Domains

Model fit for the WISC-V two-dimensional non-metric MDS solution was deemed acceptable according to the a priori criteria (Stress 1 = .15), with 98% dispersion accounted for (DAF = .98). The WISC-V MDS output revealed a simplex representation of subtests that was inconsistent with expectations of WISC-V subtest complexity. Subtests representing Gsm (PS, DS), Gf (MR), and Gs (SS) appeared at the centre of the distribution, indicating greater cognitive complexity of these tasks within this sample. Meanwhile, subtests representing Gc (VOC, SIM), Gf (FW), Gs (CD), and Gv (VP, BD) appeared toward the periphery of the distribution, indicating comparatively lower complexity of these tasks within this sample.
Subtest clustering around the circumplex was also inconsistent with expectations of CHC-derived subtest interrelations. The sample demonstrated clustering of subtests related to Gsm (PS, DS), Gf (MR, FW), Gv (BD), and Gc (SIM) domains, indicating relatively poor differentiation between domains. Subtests within each of these domains were also dispersed, indicating relatively weak within-domain subtest relationships. An exception to this was the Gs subtests (SS, CD), which demonstrated a close within-domain relationship in a unique sector of the distribution. Gv (VP) and Gc (VOC) subtests exhibited distant relations to all other subtests, suggesting a dissociation of these tasks from all domain clusters. Figure 2 below illustrates the two-dimensional non-metric MDS output of WISC-V subtests.
Figure 2. Two-dimensional non-metric MDS output of WISC-V subtests. Note: Circles orient subtest complexity (inner circle = most complex, outer circle = moderately complex), lines illustrate within-domain constellations of subtests, ellipses illustrate between-domain clusters.

Alternative Interpretation: Subtest Positioning in Relation to Task Response Modality

Figure 3 demonstrates the MDS output of WISC-IV subtests partitioned by item response modality. WISC-IV subtests with an oral response modality cluster in close proximity within one sector of the distribution (particularly COM, VOC, and LNS), implying a close relationship between verbal expression tasks in children with prenatal alcohol exposure. Meanwhile, subtests with manual (PC, MR, and BD) and pencil/paper (CD and SS) modalities appear in separate sectors of the distribution.
Figure 3. MDS output of WISC-IV subtests partitioned by response modality.
Figure 4 demonstrates the MDS output of WISC-V subtests partitioned by response modality. WISC-V subtests also generally clustered according to oral (DS, VOC, and SIM), manual (MR, FW, BD, and VP), and pencil/paper (CD and SS) modalities. Interestingly, picture-span (PS), which appeared as the most central and cognitively complex task in the WISC-V output, possesses a dual oral/manual response modality. Such a finding may reflect alignment with Guttman’s initial explanation of cognitive complexity, where recruitment of more output processes results in heightened task difficulty (L. Guttman, 1954).
Figure 4. MDS output of WISC-V subtests partitioned by response modality.
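For readers who wish to reproduce this style of plot, the sketch below colours a two-dimensional configuration by response modality. The coordinate array is a random placeholder standing in for a fitted MDS solution, and the modality assignments for subtests not explicitly named above are assumptions for demonstration, not the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder coordinates; in practice these would be the (10, 2) array of
# WISC-IV subtest coordinates returned by the fitted MDS solution.
rng = np.random.default_rng(0)
coords = rng.normal(size=(10, 2))

subtests = ["SIM", "VOC", "COM", "BD", "PC", "MR", "DS", "LNS", "CD", "SS"]
modality = {"SIM": "oral", "VOC": "oral", "COM": "oral", "DS": "oral",
            "LNS": "oral", "BD": "manual", "PC": "manual", "MR": "manual",
            "CD": "pencil/paper", "SS": "pencil/paper"}
colours = {"oral": "tab:blue", "manual": "tab:orange", "pencil/paper": "tab:green"}

# Plot each subtest at its coordinates, coloured by its response modality.
for name, (x, y) in zip(subtests, coords):
    plt.scatter(x, y, color=colours[modality[name]])
    plt.annotate(name, (x, y))
plt.title("WISC-IV MDS configuration partitioned by response modality")
plt.show()
```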

Discussion

The present study offers a novel perspective on the standardised assessment of intellectual ability in a cohort of children with prenatal alcohol exposure. While preliminary, due to sample size and methodology, our results indicate that a CHC-oriented interpretation of the WISC-IV and WISC-V may be unsuitable within this clinical context, with the relative positioning of subtests deviating from expectations of test complexity and structure reported in prior literature.
CHC-derived WISC-IV short-term memory (Gsm) and crystallised intelligence (Gc) subtests clustered in close proximity to each other and were positioned towards the centre of the radex, indicating shared content features and greater cognitive complexity of these tasks within the sample. Clustering between CHC-derived visuospatial processing (Gv) and fluid reasoning (Gf) subtests was also observed in the WISC-IV model, suggesting poor discrimination between these domains. Interestingly, the standard four-factor WISC-IV index framework is structured so that Gf and Gv subtests represent a single perceptual reasoning domain. Our findings indicated that the standard interpretation of the WISC-IV perceptual reasoning index may be more suitable for this clinical cohort than a CHC-derived assessment.
WISC-V subtest clustering indicated general associations between fluid reasoning (Gf) and working memory (Gsm) subtests, crystallised intelligence (Gc) and working memory (Gsm) subtests, and fluid reasoning (Gf) and visuospatial reasoning (Gv) subtests, suggesting interdependencies between these domains within the sample. The within-domain dispersion of subtests within this sample also indicated poor relationships between tasks purported to measure the same domain. The exception to this was processing speed (Gs) tasks, which appeared to have a close within-domain relationship and a general dissociation from the other tasks administered to the sample.
Working memory tasks appeared more centrally than typically expected across both WISC-IV and WISC-V configurations (Chen et al., 2015; Keith et al., 2006; Weiss et al., 2013). Contextually, this implies elevated task complexity of Gsm subtests within this sample and may represent a deficit in working memory commonly reported for children exposed to alcohol in utero (Coriale et al., 2013). Such findings, however, should be interpreted with caution, as collectively, outcomes suggested that the WISC models may be limited in discriminating between CHC domains within this cohort.
While sample-specific characteristics may underpin the observed discrepancies with prior findings and reflect the variable cognitive profile often observed in children with prenatal alcohol exposure (Akison et al., 2024), considerable debate persists regarding the overall utility of index scores beyond higher-order ‘g’ for understanding intelligence profiles in both normative and clinical populations. Higher-order ‘g’ is often noted as the predominant source of variance within WISC models (Canivez et al., 2017; Canivez & Kush, 2013; Dombrowski et al., 2015; Watkins, 2006, 2010). On this basis, it has been claimed that WISC assessments psychometrically detect overall IQ more accurately than domain-specific cognitive abilities. Dombrowski et al. (2018) describe index-level interpretation of the WISC as heavily dependent on the gradual differentiation of cognitive constructs that occurs as children age. As neurodevelopmental disorders often limit the maturation and diversification of cognitive functions, it is conceivable that such conditions could impinge upon the efficacy of the WISC in accurately detecting latent cognitive abilities among these populations. However, solely interpreting overall IQ scores, while psychometrically robust, severely diminishes the clinical utility of these tools in determining the extent of domain-specific functional impairment among disordered groups (Courville et al., 2016; Fiorello et al., 2007; Hayes et al., 2022; McGill, 2016). Furthermore, some evidence supports the use of index-level interpretation above overall IQ in clinical groups, emphasising the importance of individualised assessment in neurodevelopmentally delayed children (Fiorello et al., 2007). As such, it is crucial to consider alternative interpretation frameworks that may complement and enhance standard psychometric approaches to facilitate a stronger understanding of functional deficits.
The ‘Boston Process Approach’ is a philosophical orientation within clinical neuropsychology that emphasises the importance of the process taken in responding to intelligence tests, particularly within clinical settings, where the extent of an individual’s functional impairment fundamentally defines the interaction between input and output features of cognitive tasks (Bruno-Golden et al., 2013; White & Rose, 1997). Hence, understanding the patient’s ability (process-success) or inability (process-failure) to complete a task may provide insight into the practical implications of neurocognitive impairment, offering a richer understanding of how cognitive deficits may manifest in everyday functioning (Bruno-Golden et al., 2013). In this context, the administration modality of a task (or what Guttman initially referred to as ‘The Facet of Format of Communication’) becomes particularly important, as the ease or difficulty with which an individual responds to each modality may be informative for understanding functional abilities.
Our results (illustrated in Figures 3 and 4) demonstrated the utility of such an alternative interpretation framework, with WISC-IV and WISC-V MDS outputs illustrating a pattern of subtest clustering that adhered to response modality. Upon interpretation, WISC-IV subtest clustering within this sample may imply an elevated complexity of tasks with an oral modality, suggesting that this clinical group may have difficulties with verbal communication. Meanwhile, a dual-response task was implied to be the most complex within the WISC-V output, adhering to Guttman’s explanation of cognitive complexity, which suggests that the recruitment of more output processes results in heightened task difficulty (L. Guttman, 1954).

Limitations, Future Directions and Conclusions

Although comparable as methodological approaches for examining the structure of test batteries, MDS and FA compute results differently: FA techniques statistically derive distinct but correlated categories from intelligence test data, while MDS techniques spatially arrange correlated tasks to geometrically represent inter-item relationships (Tucker-Drob & Salthouse, 2009). Differences between these techniques can lead to discrepant outcomes, which can be further accentuated through the use of largely confirmatory FA approaches that aim to detect expected structural frameworks rather than represent organically occurring item relationships (Tucker-Drob & Salthouse, 2009). While MDS circumvents this issue, unveiling variable relationships that are potentially obscured by confirmatory approaches, the subjective nature of output interpretation may limit the consistency and generalisability of such findings. Considering this, MDS and FA techniques may be most effectively applied in tandem to explore subtest relations in intelligence batteries, where a combined approach may better illuminate data structures that would typically go unseen by either technique alone.
To our knowledge, this study is the first attempt to explore the structure of the WISC models in children exposed to alcohol in utero. Our findings provide an opportunity to reconsider the interpretation of standardised psychometric outcomes in the diagnostic assessment of FASD. Through the Boston Process Approach, essential clinical information is obtained by examining how difficulties with behaviour, attention, mood, language, motor control, and executive function influence test engagement. This wealth of understanding is often lost in standardised assessment procedures that focus solely on psychometric outcomes. As such, assessments of neurodevelopmental impairment that account for process-success and process-failure may address conventional barriers to psychometric testing by providing insight into how severe functional deficits limit a child’s ability to interact with standardised assessment tools. This insight could provide a basis for individualised assessment procedures that improve upon diagnostic accuracy and better inform clinical decision-making surrounding outpatient care.
The current findings, while providing unique, clinically meaningful insight, must at present be considered preliminary, given constraints in sample size and methodology. Future endeavours may benefit from combining both FA and MDS techniques to examine structural variance and invariance, particularly within clinical cohorts that differ significantly in developmental trajectories from normative samples used to develop intelligence tests. Regardless, our findings may complement and enrich established interpretive frameworks for FASD assessment and provide an impetus to redirect the current diagnostic protocol towards a paradigm informed by the ‘process approach’, which allows clinicians to define functional cognitive deficits through the observable impact of impairment in everyday behaviour. In combination with discrete outcomes in neurocognitive domains, such a framework may encourage a holistic approach to the diagnosis of FASD that better supports interventional and rehabilitative goals, establishing a patient-centred approach to clinical care.

Acknowledgements

We thank Bond University and the Child Development Service, Gold Coast University Hospital, for supporting this research.

Ethical Approval

Data collection was conducted in accordance with Children’s Health Queensland Hospital and Health Human Research Ethics Committee (CHQHREC) approval (HREC/18/QCHQ/45026).

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Adler N., Guttman R. (1982). The radex structure of intelligence: A replication. Educational and Psychological Measurement, 42(3), 739–748. https://doi.org/10.1177/001316448204200303
Akison L. K., Hayes N., Vanderpeet C., Logan J., Munn Z., Middleton P., Moritz K. M., Reid N., Australian FASD Guidelines Development Group, on behalf of the Australian FASD Guidelines Consortium. (2024). Prenatal alcohol exposure and associations with physical size, dysmorphology and neurodevelopment: A systematic review and meta-analysis. BMC Medicine, 22(1), 467. https://doi.org/10.1186/s12916-024-03656-w
Astley S. J. (2004). Diagnostic guide for fetal alcohol spectrum disorders: The 4-digit diagnostic code. University of Washington.
Benz J., Rasmussen C., Andrew G. (2009). Diagnosing fetal alcohol spectrum disorder: History, challenges and future directions. Paediatrics and Child Health, 14(4), 231–237. https://doi.org/10.1093/pch/14.4.231
Borg I., Mair P. (2017). The choice of initial configurations in multidimensional scaling: Local minima, fit, and interpretability. Austrian Journal of Statistics, 46(2), 19–32. https://doi.org/10.17713/ajs.v46i2.561
Bower C., Elliott E. J., on behalf of the Steering Group. (2016). Report to the Australian Government Department of Health: “Australian guide to the diagnosis of fetal alcohol spectrum disorder (FASD)”. Australian Government Department of Health.
Bruno-Golden B. F., Ashendorf L., Swenson R., Libon D. (2013). The integration of process analysis into the clinical assessment of children: A personal perspective. In The Boston Process Approach to neuropsychological assessment: A practitioner’s guide (pp. 314–328).
Caemmerer J. M., Keith T. Z., Reynolds M. R. (2020). Beyond individual intelligence tests: Application of Cattell-Horn-Carroll Theory. Intelligence, 79, 101433. https://doi.org/10.1016/j.intell.2020.101433
Canivez G. L., Kush J. C. (2013). WAIS-IV and WISC-IV Structural Validity: Alternate Methods, Alternate Results. Commentary on Weiss et al. (2013a) and Weiss et al. (2013b). Journal of Psychoeducational Assessment, 31(2), 157–169. https://doi.org/10.1177/0734282913478036
Canivez G. L., Watkins M. W., Dombrowski S. C. (2017). Structural validity of the wechsler intelligence scale for children–fifth edition: Confirmatory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 29(4), 458–472. https://doi.org/10.1037/pas0000358
Chen H., Zhang O., Raiford S. E., Zhu J., Weiss L. G. (2015). Factor invariance between genders on the Wechsler intelligence scale for children–fifth edition. Personality and Individual Differences, 86, 1–5. https://doi.org/10.1016/j.paid.2015.05.020
Chen H.-Y., Keith T. Z., Yung-Hwa C., Ben-Sheng C. (2009). What does the WISC-IV measure? Validation of the scoring and CHC-based interpretative approaches. Journal of Research in Education Sciences, 54(3), 85.
Cohen A., Fiorello C. A., Farley F. H. (2006). The cylindrical structure of the wechsler intelligence scale for children—IV: A retest of the guttman model of intelligence. Intelligence, 34(6), 587–591. https://doi.org/10.1016/j.intell.2006.05.003
Cook J. L., Green C. R., Lilley C. M., Anderson S. M., Baldwin M. E., Chudley A. E., Conry J. L., LeBlanc N., Loock C. A., Lutke J., Mallon B. F., McFarlane A. A., Temple V. K., Rosales T. (2016). Fetal alcohol spectrum disorder: A guideline for diagnosis across the lifespan. Canadian Medical Association Journal, 188(3), 191–197. https://doi.org/10.1503/cmaj.141593
Coriale G., Fiorentino D., Di Lauro F., Marchitelli R., Scalese B., Fiore M., Maviglia M., Ceccanti M. (2013). Fetal alcohol spectrum disorder (FASD): Neurobehavioral profile, indications for diagnosis and treatment. Rivista Di Psichiatria, 48(5), 359–369. https://doi.org/10.1708/1356.15062
Courville T., Coalson D., Kaufman A., Raiford S. (2016). Does WISC-V scatter matter? In Intelligent testing with the WISC-V (pp. 209–225). https://doi.org/10.1002/9781394259397.ch7
Davison M. L., Sireci S. G. (2000). Multidimensional scaling. In Handbook of applied multivariate statistics and mathematical modeling (pp. 323–352). Elsevier. https://doi.org/10.1016/b978-012691360-6/50013-6
Dombrowski S. C., Canivez G. L., Watkins M. W. (2018). Factor structure of the 10 WISC-V primary subtests across four standardization age groups. Contemporary School Psychology, 22(1), 90–104. https://doi.org/10.1007/s40688-017-0125-2
Dombrowski S. C., Canivez G. L., Watkins M. W., Beaujean A. A. (2015). Exploratory bifactor analysis of the Wechsler intelligence scale for children—fifth edition with the 16 primary and secondary subtests. Intelligence, 53, 194–201. https://doi.org/10.1016/j.intell.2015.10.009
Fiorello C. A., Hale J. B., Holdnack J. A., Kavanagh J. A., Terrell J., Long L. (2007). Interpreting intelligence test results for children with disabilities: Is global intelligence relevant? Applied Neuropsychology, 14(1), 2–51. https://doi.org/10.1080/09084280701280338
Flanagan D. P., Alfonso V. C. (2017). Essentials of WISC-V assessment. John Wiley & Sons. https://ebookcentral.proquest.com/lib/bond/detail.action?docID=4815062
Flanagan D. P., Ortiz S. O., Alfonso V. C., Kaufman A. S., Kaufman N. L., Kaufman N. L. (2013). Essentials of cross-battery assessment. John Wiley & Sons. https://ebookcentral.proquest.com/lib/bond/detail.action?docID=832573
Golay P., Reverte I., Rossier J., Favez N., Lecerf T. (2013). Further insights on the French WISC–IV factor structure through Bayesian structural equation modeling. Psychological Assessment, 25(2), 496–508. https://doi.org/10.1037/a0030676
Groenen P. J., Borg I. (2013). The past, present, and future of multidimensional scaling. Econometric Institute.
Guttman L. (1954). An outline of some new methodology for social research. Public Opinion Quarterly, 18(4), 395–404. https://doi.org/10.1086/266532
Guttman L., Levy S. (1991). Two structural laws for intelligence tests. Intelligence, 15(1), 79–103. https://doi.org/10.1016/0160-2896(91)90023-7
Guttman R., Greenbaum C. W. (1998). Facet theory: Its development and current status. European Psychologist, 3(1), 13–36. https://doi.org/10.1027/1016-9040.3.1.13
Hayes N., Akison L. K., Goldsbury S., Hewlett N., Elliott E. J., Finlay-Jones A., Shanley D. C., Bagley K., Crawford A., Till H., Crichton A., Friend R., Moritz K. M., Mutch R., Harrington S., Webster A., Reid N. (2022). Key stakeholder priorities for the review and update of the Australian guide to diagnosis of fetal alcohol spectrum disorder: A qualitative descriptive study. International Journal of Environmental Research and Public Health, 19(10), 5823. https://doi.org/10.3390/ijerph19105823
Hayes N., Bagley K., Hewlett N., Elliott E. J., Pestell C. F., Gullo M. J., Munn Z., Middleton P., Walker P., Till H., Shanley D. C., Young S. L., Boaden N., Hutchinson D., Kippin N. R., Finlay-Jones A., Friend R., Shelton D., Crichton A., Reid N. (2023). Lived experiences of the diagnostic assessment process for fetal alcohol spectrum disorder: A systematic review of qualitative evidence. Alcohol, Clinical and Experimental Research, 47(7), 1209–1223. https://doi.org/10.1111/acer.15097
Jaworska N., Chupetlovska-Anastasova A. (2009). A review of multidimensional scaling (MDS) and its utility in various psychological domains. Tutorials in Quantitative Methods for Psychology, 5(1), 1–10. https://doi.org/10.20982/tqmp.05.1.p001
Jewsbury P. A., Bowden S. C., Duff K. (2017). The cattell–horn–carroll model of cognition for clinical assessment. Journal of Psychoeducational Assessment, 35(6), 547–567. https://doi.org/10.1177/0734282916651360
Joshanloo M., Weijers D. (2019). A two-dimensional conceptual framework for understanding mental well-being. PloS One, 14(3), e0214045. https://doi.org/10.1371/journal.pone.0214045
Keith T. Z., Fine J. G., Taub G. E., Reynolds M. R., Kranzler J. H. (2006). Higher order, multisample, confirmatory factor analysis of the Wechsler Intelligence Scale for Children—fourth Edition: What does it measure. School Psychology Review, 35(1), 108–127. https://doi.org/10.1080/02796015.2006.12088005
Kruskal J. B., Wish M. (1978). Multidimensional scaling. Sage Publications, Inc. https://doi.org/10.4135/9781412985130
Lange S., Probst C., Gmel G., Rehm J., Burd L., Popova S. (2017). Global prevalence of fetal alcohol spectrum disorder among children and youth: A systematic review and meta-analysis. JAMA Pediatrics, 171(10), 948–956. https://doi.org/10.1001/jamapediatrics.2017.1919
Marshalek B., Lohman D. F., Snow R. E. (1983). The complexity continuum in the radex and hierarchical models of intelligence. Intelligence, 7(2), 107–127. https://doi.org/10.1016/0160-2896(83)90023-5
Mattson S. N., Bernes G. A., Doyle L. R. (2019). Fetal alcohol spectrum disorders: A review of the neurobehavioral deficits associated with prenatal alcohol exposure. Alcoholism: Clinical and Experimental Research. https://doi.org/10.1111/acer.14040
McGill R. J. (2016). Invalidating the full scale IQ score in the presence of significant factor score variability: Clinical acumen or clinical illusion? Archives of Assessment Psychology, 6(1), 1.
McGill R. J., Canivez G. L. (2018). Confirmatory factor analyses of the WISC-IV Spanish core and supplemental subtests: Validation evidence of the Wechsler and CHC models. International Journal of School & Educational Psychology, 6(4), 239–251. https://doi.org/10.1080/21683603.2017.1327831
McGill R. J., Dombrowski S. C. (2019). Critically reflecting on the origins, evolution, and impact of the Cattell-Horn-Carroll (CHC) model. Applied Measurement in Education, 32(3), 216–231. https://doi.org/10.1080/08957347.2019.1619561
McLennan J. D., Braunberger P. (2017). A critique of the new Canadian fetal alcohol spectrum disorder guideline. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 26(3), 179–183.
Meyer E. M. (2021). Multidimensional scaling with intelligence and Academic Achievement scores. University of Kansas.
Meyer E. M., Reynolds M. R. (2018). Scores in space: Multidimensional scaling of the WISC-V. Journal of Psychoeducational Assessment, 36(6), 562–575. https://doi.org/10.1177/0734282917696935
Shelton D., Reid N., Till H., Butel F., Moritz K. (2018). Responding to fetal alcohol spectrum disorder in Australia. Journal of Paediatrics and Child Health, 54(10), 1121–1126. https://doi.org/10.1111/jpc.14152
Tucker-Drob E. M., Salthouse T. A. (2009). Methods AND MEASURES: Confirmatory factor analysis and multidimensional scaling for construct validation of cognitive abilities. International Journal of Behavioral Development, 33(3), 277–285. https://doi.org/10.1177/0165025409104489
Watkins M. W. (2006). Orthogonal higher order structure of the Wechsler intelligence scale for children—fourth edition. Psychological Assessment, 18(1), 123–125. https://doi.org/10.1037/1040-3590.18.1.123
Watkins M. W. (2010). Structure of the Wechsler intelligence scale for children—fourth edition among a national sample of referred students. Psychological Assessment, 22(4), 782–787. https://doi.org/10.1037/a0020043
Wechsler D. (2003). Wechsler intelligence scale for children (4th ed.). Psychological Corporation.
Wechsler D. (2014). Wechsler intelligence scale for children (5th ed.). PsychCorp.
Weiss L. G., Keith T. Z., Zhu J., Chen H. (2013). WISC-IV and clinical validation of the four-and five-factor interpretative approaches. Journal of Psychoeducational Assessment, 31(2), 114–131. https://doi.org/10.1177/0734282913478032
White R. F., Rose F. E. (1997). The Boston Process approach. In Goldstein G., Incagnoli T. M. (Eds.), Contemporary approaches to neuropsychological assessment (pp. 171–211). Springer US. https://doi.org/10.1007/978-1-4757-9820-3_6


Keywords

  1. Fetal Alcohol Spectrum Disorders
  2. Wechsler Scales
  3. intelligence tests
  4. multidimensional scaling analysis

Rights and permissions

© The Author(s) 2025.
Creative Commons License (CC BY 4.0)
This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Authors

Affiliations

Lee Wolff
School of Psychology, Bond University, Robina, QLD, Australia
Haydn Till
Gold Coast University Hospital, Southport, QLD, Australia
School of Applied Psychology, Griffith University, Southport, QLD, Australia
Bruce Watt
School of Psychology, Bond University, Robina, QLD, Australia

Notes

Lee Wolff, School of Psychology, Bond University, 14 University Drive, Robina, QLD 4226, Australia. Email: [email protected]

