This study investigated cognitive patterns of strengths and weaknesses (PSW) and their relationship to patterns of math errors on the Kaufman Test of Educational Achievement (KTEA-3). Participants, ages 5 to 18, were selected from the KTEA-3 standardization sample if they met one of two PSW profiles: high crystallized ability (Gc) paired with low processing speed/long-term retrieval (Gs/Glr; n = 375) or high Gs/Glr paired with low Gc (n = 309). Estimates of Gc and Gs/Glr were based on five KTEA-3 subtests that measure either Gc (e.g., Listening Comprehension) or Gs/Glr (e.g., Object Naming Facility). The two groups were then compared on math error factors. Significant differences favored the High-Gc group for factors that measure math calculation, basic math concepts, and complex computation. However, the two groups did not differ in their errors on factors that measure geometry/measurement or simple addition. Results indicated that students with different PSW profiles also differed in the kinds of errors they made on math tests.
To tackle the complex task of interpreting results of intelligence testing, practitioners generally report the level of performance and usually the pattern of performance as well. Some researchers have advocated only interpreting a child’s level of performance and have argued for the use and reporting of an individual’s global cognitive composite alone (Canivez & Kush, 2013; Ceci & Williams, 1997; Gottfredson, 1997; Jensen, 1998; Neisser et al., 1996). Others have insisted that analysis of a pattern of performance allows for understanding an individual’s educational needs. As an example, Hale, Fiorello, Dumont, and Willis (2008) have found that the predictive validity of intellectual tests is “significantly reduced when one interprets global IQ over subcomponent scores” (p. 850). Those who advocate interpreting a child’s pattern of performance typically carry out a profile analysis. Among those, patterns of strengths and weaknesses (PSW) methods are chosen by those who take an examinee’s level and pattern of performance into account. PSW models have been supported and defended by many researchers (Hale & Fiorello, 2004; A. S. Kaufman, Raiford, & Coalson, 2016), and especially the authors of the Cattell–Horn–Carroll (CHC) Cross-Battery Approach (Flanagan, Ortiz, & Alfonso, 2013).
Error analysis in mathematics is an area of research and practice with a long history in the educational psychology literature. Radatz (1979) conducted some of the first modern investigations into math error analysis, noting a renewed interest in the study of error analysis in mathematics in the late 1970s, in part because the individualization and differentiation of mathematics instruction necessitated a clear understanding of students’ specific areas of difficulty. This ability to more effectively instruct and assist students in mathematics may be one of the more compelling reasons to engage in the process of error analysis.
The Value of Error Analysis
Radatz (1979) emphasized the importance of understanding the underlying causes of errors. Although there is value in simply identifying common errors and error patterns, he posited that without understanding the causes of those errors, it may be impossible to make informed decisions about math instruction or to understand the process by which students learn. Students may make the same types of errors on a math task but have different underlying difficulties. Therefore, the cause of a given error, and thus the treatment or remediation, may be very different. Understanding these and other causes of errors can then contribute to increased effectiveness in instruction—taking a PSW approach may help to understand the root causes of these errors.
Ketterlin-Geller and Yovanoff (2009) wrote about the value of error analysis in their article on the use of diagnostic assessments in mathematics to inform instructional decisions. They noted that error analysis is a commonly used method among educators to identify students’ misconceptions in math. They described error analysis as a process of reviewing a student’s responses to identify a pattern of misunderstanding. They also drew the distinction between what they describe as “slips,” which are random errors in procedural knowledge, and “bugs,” which represent more persistent, underlying misconceptions. With respect to instruction and remediation, Ketterlin-Geller and Yovanoff noted that error analysis has value for its ability to provide timely information to teachers that can be used to adjust instruction to meet students’ individual needs.
Other researchers have focused on the value of error analysis in math not just as a tool for researchers and educators, but as a process by which students can develop a deeper understanding of mathematics. Borasi (1994) proposed that the explicit analysis of mathematical errors by students could serve as a stimulus or “springboard” for inquiry in mathematics, thus enabling students to better understand the conceptual underpinnings of mathematical operations and procedures. Borasi noted that through the process of analyzing their own mathematical errors, students gained numerous benefits, including opportunities to engage in authentic problem solving as well as exploration and communication around mathematics. The process of error analysis has clear value for educators, researchers, and students.
Classifying Math Errors
Since the earliest studies of math errors in the early 1900s, researchers have conceptualized mathematics errors in different ways. Working from an information processing perspective, Radatz (1979) posited that the math errors described in the literature could be synthesized into five broad areas: language problems, difficulties with spatial analysis, failure to master foundational concepts, rigidity of thinking, and the use of irrelevant rules or strategies. Peng and Luo (2009) took a similar approach, summarizing the available literature and proposing a framework of error analysis comprising two levels: student errors and teachers’ ability to interpret those errors. They described the nature of student mathematical errors in four categories: mathematical, logical, strategic, and psychological. Notably, several of the error categories suggested by Radatz and by Peng and Luo are not actually related to math computation ability but to other areas such as language and visual-spatial skills. In addition, Clements (1980) noted that difficulty with written or word problems may sometimes have less to do with math difficulty than with reading and language comprehension difficulties. Problem solving, unlike simple arithmetic and computation, may be influenced by factors other than math ability, such as language, working memory, and visual-spatial skills. This suggests a role for crystallized ability (Gc) among the factors that underlie math performance.
Rather than relying on these broad math error classifications, other researchers have created error categories based on specific types of math problems. For example, Fiori and Zuccheri (2005) identified patterns of errors made by 9- to 12-year-olds on multi-digit subtraction problems. Their error classifications included errors made when borrowing, not borrowing when necessary to solve the problem, and computation errors in the context of correct calculation strategies/borrowing techniques (e.g., 5 − 2 = 2 in the right-most column of digits but the rest of the problem is completed correctly). Another study classified specific errors made by middle schoolers when completing fraction problems, such as incorrectly adding/subtracting the numerator and denominator or adding all numerators and denominators resulting in an incorrect answer (Bottge, Ma, Gassaway, Butler, & Toland, 2014).
For older students, one study analyzed the errors made by high school students and found five error categories: inserting or omitting some aspect of the problem, incorrect interpretation of math word problems into mathematical expressions, logic errors, incorrect application of a concept or theory, and basic computation errors (Movshovitz-Hadar, Zaslavsky, & Inbar, 1987). Another study categorized algebraic misconceptions/errors made by sixth through 12th graders according to three categories: misunderstanding the concept of a variable, misconceptions of equality (i.e., conducting the same operation on both sides of the equal sign), and difficulty with graphing (Russell, O’Dwyer, & Miranda, 2009).
Relationships Between Math Errors and Cognitive Skills
Studies utilizing structural equation modeling have found that measuring specific cognitive abilities may help explain academic performance above and beyond the large effect of general intelligence, or g (Hale, Fiorello, Kavanagh, Hoeppner, & Gaither, 2001; Taub, Floyd, Keith, & McGrew, 2008). However, some variability in cognitive-achievement relationships may result when different measures of the same broad abilities are used. For example, research has consistently found g to be the strongest predictor of quantitative knowledge—studies using the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV) found that g accounted for the majority of variance in quantitative knowledge (Glutting, Watkins, Konold, & McDermott, 2006; Parkin & Beaujean, 2012); yet, studies using the Woodcock–Johnson, 3rd edition (WJ III) found that short-term working memory and processing speed also explained additional variance (Taub et al., 2008).
S. B. Kaufman, Reynolds, Liu, Kaufman, and McGrew (2012) explored the relationships between the general intelligence underlying cognitive ability tests (cognitive g) and the general ability underlying achievement tests (achievement g) using two large samples tested on different test batteries. Results indicated that cognitive and achievement abilities are highly related but distinct constructs, especially among school-age children, and that specific cognitive factors are important for explaining academic achievement. In addition, McGrew and Wendling (2010) reviewed the extant CHC research on cognitive-achievement relations, relying primarily on studies pertaining to the WJ III Cognitive and Achievement test batteries. For both math computation and math problem solving, three broad abilities were consistently significant in predicting scores at one or more age groups: comprehension-knowledge (Gc), fluid reasoning (Gf), and processing speed (Gs). Short-term working memory (Gwm) was a consistently significant, albeit low, predictor of math problem solving ability among high school students. However, Swanson and Beebe-Frankenberger (2004) found a moderate correlation between working memory and math problem solving that was stable across grades, and working memory contributed unique variance to problem solving even when the influence of measures such as processing speed and phonological processing was partialed from the analysis.
Although long-term retrieval (Glr) was not a significant predictor in the McGrew and Wendling (2010) study, several narrow Glr abilities were shown to be important predictors of mathematics achievement. Naming automaticity was consistently predictive of math computation, and associative memory and meaningful memory were predictive of math computation and problem solving at one or more age levels. The relationship between Glr and math achievement has not been consistently supported by CHC-based studies (Flanagan et al., 2013); however, other research has shown Glr to be important for rapidly retrieving math facts (e.g., Geary, Hoard, & Bailey, 2011) and facilitating calculation ability by retrieving mathematical knowledge of algorithms and strategies (Swanson & Beebe-Frankenberger, 2004).
Correlational studies have demonstrated significant relationships between cognitive and achievement measures. Analysis of the Kaufman Test of Educational Achievement (3rd ed.; KTEA-3; A. S. Kaufman & Kaufman, 2014) standardization sample indicated that the math composites showed moderate correlations with measures of Gc across different measures such as the Differential Ability Scales, Second Edition (DAS-II); the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V); and the Kaufman Assessment Battery for Children, Second Edition (KABC-II). In addition, math measures were also correlated with measures of Gf such as nonverbal and fluid reasoning. Consistent with theoretical expectations, correlations with measures of Gwm were higher for KTEA-3 Math Concepts & Applications than for Math Computation. Correlations between the KTEA-3 Math composite and measures of Glr ranged from weak to moderate. Correlations with these measures of Gc and Gf were higher for Math Concepts & Applications than for Math Computation.
Gs-Math correlations, however, were similar across the two math subtests and did not show a consistent developmental trend. Unlike the consistently significant relationship between measures of Gs and math abilities reported by McGrew and Wendling (2010), correlations were generally modest between the KTEA-3 Math composite and the concurrent measures of Gs (perceptual speed) from the DAS-II and the WISC-V, and the KTEA-3 Object Naming Facility (ONF) and Letter Naming Facility (LNF) subtests. The ONF and LNF subtests are classified as Glr because they involve naming automaticity and lexical access, but these subtests load more strongly on Gs because they are speeded (D. P. Flanagan, personal communication, August 12, 2015).
Aside from the undeniable importance of g for predicting mathematics achievement, research on cognitive-achievement relations suggests the consideration of several broad abilities, including fluid reasoning, comprehension-knowledge, processing speed, and short-term working memory. However, very little is known about whether differences in broad ability cognitive profiles affect the kinds of errors that children make in either math computation or math problem solving. The present study investigated the relationship between cognitive strengths and weaknesses and specific patterns of errors made on the math subtests of the KTEA-3—specifically, whether two cognitive profiles, high crystallized ability paired with low processing speed/long-term retrieval (High-Gc) and high processing speed/long-term retrieval paired with low crystallized ability (High-Gs/Glr), were related to distinct patterns of errors on the math achievement subtests. We hypothesized that there would be significant differences between the two groups on math errors for both the Math Concepts & Applications and Math Computation subtests.
Participants
Participants in this study were a subsample of the standardization and validation sample for the KTEA-3 (A. S. Kaufman & Kaufman, 2014). The combined standardization and validation samples (N = 3,843) included 1,987 females and 1,856 males in Grades 1 to 12 (median grade = 5) who ranged in age from 4 to 19 years (M age = 10 years and 5 months, SD = 3 years and 11 months). These data were collected between August 2012 and 2013; approximately half of the sample was tested using KTEA-3 Form A, and the other half was tested using Form B. More information about the standardization sample can be found in the KTEA-3 Technical and Interpretive Manual (A. S. Kaufman, Kaufman, & Breaux, 2014).
Participants in the present study were selected if they met one of two cognitive profiles. We created two different groups based on expected PSW by examining crystallized intelligence (Gc), cognitive processing speed (Gs), and long-term storage and retrieval (Glr). An estimate of Gc was created by averaging standard scores on the KTEA-3 Listening Comprehension and Oral Expression subtests, both excellent measures of this CHC broad ability (S. B. Kaufman et al., 2012). Gs/Glr scores were generated by taking an average of the standard scores from the Associational Fluency, LNF, and ONF subtests. The first group comprised participants with high Gc but low Gs/Glr, and the second group had the reverse profile.
The rules for selecting PSW cases were as follows:
High Gc, Low Gs/Glr: Gc is >90 and at least 15 points higher than Gs/Glr. Exclude cases if either Listening Comprehension or Oral Expression is equal to or less than the Gs/Glr average score.
High Gs/Glr, Low Gc: Gc is <110 and at least 15 points lower than Gs/Glr. Exclude cases if Associational Fluency, Object Naming Facility, or Letter Naming Facility is equal to or less than the Gc average score.
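The selection rules above can be sketched as a simple classifier. The field names below are hypothetical; the actual standardization-file variable names are not given in the text.

```python
# Sketch of the PSW selection rules. Field names are illustrative
# assumptions, not the actual KTEA-3 data-file variables.
def classify_psw(scores):
    gc = (scores["listening_comprehension"] + scores["oral_expression"]) / 2
    gs_glr = (scores["assoc_fluency"] + scores["lnf"] + scores["onf"]) / 3
    # High Gc, Low Gs/Glr: Gc > 90 and at least 15 points above Gs/Glr,
    # with both Gc subtests exceeding the Gs/Glr average.
    if (gc > 90 and gc - gs_glr >= 15
            and scores["listening_comprehension"] > gs_glr
            and scores["oral_expression"] > gs_glr):
        return "High-Gc"
    # High Gs/Glr, Low Gc: Gc < 110 and at least 15 points below Gs/Glr,
    # with all three Gs/Glr subtests exceeding the Gc average.
    if (gc < 110 and gs_glr - gc >= 15
            and scores["assoc_fluency"] > gc
            and scores["lnf"] > gc
            and scores["onf"] > gc):
        return "High-Gs/Glr"
    return None  # case does not meet either PSW profile
```

A case meeting neither profile is simply excluded from the sample, which is why the two groups together (n = 684) are far smaller than the full standardization sample.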
The present sample consisted of 684 students between the ages of 5 and 18 in kindergarten through 12th grade. Table 1 provides demographic information for the two PSW groups.
Table 1. Sample Demographics.

Measures
KTEA-3
The KTEA-3 (A. S. Kaufman & Kaufman, 2014) is a measure of academic achievement for children ages 4 through 25. The KTEA-3 also includes an error analysis component that allows the examiner to identify not only whether an answer is wrong but also the type of error made and whether there is a consistent pattern of errors within and across subtests. The present study utilized subtest and composite scores measuring math abilities in addition to subtests used to estimate Gc, Gs, and Glr abilities. Table 2 provides the mean scores and standard deviations for subtests and composite scores of interest by group. Specific error factors were created for the Math Concepts & Applications and Math Computation subtests using exploratory factor analysis. (For more information on the KTEA-3 error factor creation, see O’Brien et al., 2017.) Table 3 provides the mean error scores for error factors of interest by group. Factor scores were originally created using z-scores; however, they were converted to scaled scores with a mean of 10 and a standard deviation of 3 for ease of interpretation.
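The z-to-scaled-score conversion described above is a simple linear transformation:

```python
# Convert a z-score factor score to a scaled score with M = 10, SD = 3.
def z_to_scaled(z):
    return 10 + 3 * z

z_to_scaled(0.0)   # mean performance -> 10.0
z_to_scaled(-1.0)  # one SD below the mean -> 7.0
```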
Table 2. Group Means and Standard Deviations on Subtest and Composite Scores of Interest.

Table 3. Mean Error Scores by Group.

Analysis
A multi-step process was used to investigate the relationship between students’ cognitive profiles on KTEA-3 cognitive tasks and their corresponding KTEA-3 error scores in math. The first analytic step in this process was to identify the two PSW samples, a procedure described in the “Participants” section. The second step was the derivation of factor scores.
The KTEA-3 utilizes a unique error analysis methodology based on the specific subskills measured by a given subtest. For 10 of the KTEA-3 subtests, curriculum experts identified the different categories of errors students are likely to make on each subtest. For each category of error on a given subtest, students received a grade-level, normative performance label of weakness, average, or strength based on a comparison of the student’s total errors with the average number of errors made by individuals in the same grade and working at the same level (determined by the highest-numbered item administered or the item set used with each individual) in the KTEA-3 normative sample (A. S. Kaufman et al., 2014). A student with a low overall subtest score might still show a strength in a skill compared with other students in the same grade working at the same low level; conversely, a student with a high overall subtest score might show a weakness in a skill compared with other students in the same grade working at the same high level. This performance label is called the skill status. Based on this error analysis system, students received multiple skill status error scores within each subtest. To facilitate the use of these skill status error scores in the analyses conducted in this study, exploratory factor analysis of the two KTEA-3 math subtests was conducted to create a reduced error score variable set (O’Brien et al., 2017).
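The skill-status assignment can be sketched as follows. The one-standard-deviation cutoffs are illustrative assumptions; the KTEA-3's actual normative criteria are not specified here.

```python
# Hedged sketch of the skill-status idea: a student's error total in a
# category is labeled relative to the errors of same-grade peers working
# at the same item level. The one-SD cutoffs below are assumptions for
# illustration, not the KTEA-3's actual scoring rules.
def skill_status(student_errors, norm_mean, norm_sd):
    if student_errors > norm_mean + norm_sd:
        return "weakness"   # notably more errors than comparable peers
    if student_errors < norm_mean - norm_sd:
        return "strength"   # notably fewer errors than comparable peers
    return "average"
```

Because the comparison group is matched on grade and item level, a low-scoring student can still earn a "strength" label in a particular skill, exactly as described above.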
To create the factor scores, polychoric correlation matrices were generated for Math Concepts & Applications and Math Computation. An exploratory factor analysis using unweighted least squares extraction was conducted for each math subtest using data from the entire sample between Grades 1 and 12 (O’Brien et al., 2017). A combination of parallel analysis (PA; Horn, 1965), visual inspection of the scree plot (Cattell, 1966), and content review of the factor structure was used to determine the number of factors to extract. For the subtests related to the current study, three factors were extracted for Math Concepts & Applications and two for Math Computation. R version 3.2.3 was used to generate Bartlett factor scores for each of the extracted factors.
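Horn's parallel analysis retains factors whose observed eigenvalues exceed those expected from random data of the same dimensions. A minimal sketch follows; it uses Pearson rather than polychoric correlations and numpy rather than the R tooling the study used, so it is illustrative only.

```python
# Minimal Horn (1965) parallel analysis: keep factors whose observed
# correlation-matrix eigenvalues exceed the mean eigenvalues obtained
# from random normal data of the same shape.
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues over n_iter random data sets.
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eig /= n_iter
    # Number of factors whose observed eigenvalue beats the random mean.
    return int(np.sum(obs_eig > rand_eig))
```

In practice PA is combined with a scree plot and a content review of the loadings, as the study did, rather than applied mechanically.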
The final analytic step was to investigate whether students with different PSW profiles have different mean error factor scores on the math subtests. One-way MANOVAs were conducted with subtest error factor scores as dependent variables and the profile of strengths and weaknesses as the independent variable. Prior to conducting the analyses, each set of subtest factor scores was examined for univariate normality issues and outliers. Any extreme cases were analyzed to verify their impact on the distributional properties of each subtest. Using a criterion of |skewness| < 2 and |kurtosis| < 6 (Lix, Keselman, & Keselman, 1996), no violations of normality were observed. To examine the assumption of homogeneity of within-group covariance matrices, a two-step analysis process was utilized (Huberty & Petoskey, 2000). First, for each analysis, Box’s M test was calculated. The Box test was statistically significant for Math Concepts & Applications. However, as noted by Huberty and Petoskey (2000), the Box test is an extremely powerful test. Therefore, as a follow-up analysis, the natural log of the determinant of the covariance matrix for each level of the independent variable was compared with the natural log of the determinant of the pooled matrix (Huberty & Petoskey, 2000; Olejnik, 2010) for each subtest. In the judgment of the researchers, the differences were relatively small, with the largest difference between a given group and the pooled natural log determinant equal to −.71.
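The log-determinant follow-up can be sketched as follows, assuming each group's factor scores are available as a numeric array (a simplified numpy sketch, not the study's actual analysis code):

```python
# Compare each group's ln|covariance| with that of the pooled matrix
# (Huberty & Petoskey, 2000). Small absolute differences suggest the
# within-group covariance matrices are similar enough for MANOVA.
import numpy as np

def log_det_comparison(groups):
    # groups: list of (n_g x p) arrays of factor scores, one per group.
    pooled_df = sum(len(g) - 1 for g in groups)
    pooled = sum((len(g) - 1) * np.cov(g, rowvar=False)
                 for g in groups) / pooled_df
    ln_pooled = np.linalg.slogdet(pooled)[1]
    # Difference between each group's log determinant and the pooled one.
    return [np.linalg.slogdet(np.cov(g, rowvar=False))[1] - ln_pooled
            for g in groups]
```

Under this check the study's largest difference was −.71, which the researchers judged acceptably close.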
In this analysis, we examined whether students with different cognitive PSW profiles have different mean error factor scores on the KTEA-3 math subtests. To examine this hypothesis, two one-way MANOVAs were conducted with the cognitive profiles serving as the independent variable and the error factor scores serving as dependent variables. The first MANOVA was conducted with the three Math Concepts & Applications error factor scores serving as dependent variables and the second with the two Math Computation error factor scores as dependent variables. Both MANOVAs yielded significant results: Math Concepts & Applications, Wilks’s λ = 0.8599, F(3, 511) = 27.76, p < .0001; Math Computation, Wilks’s λ = 0.9262, F(2, 234) = 9.32, p < .0001. Furthermore, 14% of the variance in the Math Concepts & Applications error factor scores can be explained by differences in the cognitive profiles; for Math Computation, the corresponding value is 7%. Consequently, the MANOVAs were followed up with five univariate ANOVAs, three for Math Concepts & Applications and two for Math Computation.
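The variance-explained figures follow directly from Wilks's Λ: with two groups there is a single discriminant function, so multivariate eta-squared is simply 1 − Λ.

```python
# Multivariate effect size from Wilks's lambda (exact for two groups).
def eta_squared(wilks_lambda):
    return 1 - wilks_lambda

eta_squared(0.8599)  # Math Concepts & Applications: ~0.14
eta_squared(0.9262)  # Math Computation: ~0.07
```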
Mean error scores by cognitive group are listed in Table 3, and ANOVA results for comparisons by error factors are listed in Table 4. As noted earlier, three error factor scores were generated for Math Concepts & Applications. Based upon an expert review (N. Mather, personal communication, March 21, 2016; J. Willis & R. Dumont, personal communication, March 26, 2016), the three factors were named Math Calculation (simple math calculations such as single-step operations and computations, fractions, and algebra), Geometric Concepts (problems that utilize visual-spatial information), and Complex Math Problems (problems that involve multiple steps to reach a solution and use abstract concepts). Although the overall model indicated that error scores on Math Concepts & Applications differed for students with different cognitive profiles, the separate ANOVAs revealed significant differences on only two of the three error factors. Mean differences were present for Math Calculation, F(1, 514) = 73.18, p < .001, and Complex Math Problems, F(1, 514) = 32.44, p < .001. Specifically, students in the High-Gc group committed fewer Math Calculation and Complex Math Problems errors than the High-Gs/Glr group. However, there were no significant mean differences between groups for Geometric Concepts, F(1, 514) = 0.54, p = .461. Although different cognitive profiles had an impact on both Math Calculation and Complex Math Problems, differences in cognitive profiles explained slightly more variance in Math Calculation error scores (R2 = .125) than Complex Math Problems (R2 = .059).
Table 4. MANOVA Results and Comparisons by Error Factors.

Two additional error factor scores, Basic Math Concepts (problems using logic and basic math principles) and Addition, were calculated for the Math Computation subtest. ANOVAs revealed that those in the High-Gc group made significantly fewer errors in Basic Math Concepts than the High-Gs/Glr group, F(1, 236) = 16.30, p < .001. The two cognitive profiles accounted for approximately 7% of the total variance in the Basic Math Concepts error factor. However, the PSW groups did not differ significantly in the number of Addition errors they made on the Math Computation subtest, F(1, 236) = 3.39, p = .067.
The present study investigated the relationship between specific cognitive profiles and patterns of math errors. These results support our hypothesis that particular patterns of cognitive strengths and weaknesses differentially predict performance on tests of math achievement. Specifically, students with a cognitive profile characterized by high crystallized abilities (Gc) combined with low cognitive processing speed (Gs) and long-term retrieval (Glr) outperformed students with a cognitive profile characterized by low Gc combined with high Gs and Glr in several areas of mathematics, including basic and complex math problem solving. One reason for this may be that successful math problem solving, especially for more complex problems, requires higher level reasoning abilities to a greater degree than rote memorization or speed and, as such, is more affected by one’s intelligence, as reflected by Gc in the present study. In addition, successfully computing complex problems may require a greater fund of acquired and procedural knowledge, and thus may also be affected by an individual’s crystallized abilities. This finding is in keeping with previous research and is supported by the work of S. B. Kaufman and colleagues (2012), which also showed Gc to be a significant predictor of math computation and problem solving abilities. Hale et al. (2008) found that Gc was the strongest predictor and suggested that children with math SLD are inclined to rely on what they know (knowledge of math facts and procedures) and do not attempt to solve more complex problems because the higher level math requires skills more associated with Gf.
In contrast, the High-Gc and High-Gs/Glr groups did not differ on math errors related to geometric concepts. It may be argued that tasks within this realm rely more heavily upon visual-spatial skills and fluid reasoning, and thus are not as dependent upon Gc. It may also be that students’ high Gs and Glr abilities counteract the effects of low Gc with respect to these specific tasks; previous research has shown that Gs/Glr abilities may be helpful for math problems with a visual component. As Taub et al. (2008) found, Gf and Gs explained additional variance in quantitative skills over and above general intelligence. Interestingly, the same finding concerning the geometry factor in relation to the difference in high versus low Gc was also observed among the Mild Intellectual Disability sample of the KTEA-3 standardization sample (Root et al., 2017).
With respect to math computation abilities, we also found significant differences for Basic Math Concepts in favor of the High-Gc group, but not for simple addition problems. One reason for this finding may be that simple addition is more reliant on Gs and rote memorization as opposed to the more complex reasoning skills required for the more complicated skills of subtracting and multiplying multi-digit numbers. Supporting this idea, Floyd et al. (2003) found that Gs was more significantly related to math calculation than math reasoning, so it follows that performance on addition tasks may be related to the presence of high Gs abilities. Indeed, children with math SLD tend to use inadequate calculation strategies because they are weak in the Gs ability that would allow for automaticity (Geary & Hoard, 2001). Perhaps children with math SLD over-rely on Gc, and every procedural step can be an effortful and time-consuming task because the computation does not easily become automatic for them. In addition, J. Willis (personal communication, March 21, 2016) explained that the Addition factor might reflect the abilities of those students who mastered addition, but did not progress to more difficult processes.
Consistent with previous PSW studies, results of our current research support the conclusion that particular patterns of cognitive strengths and weaknesses differentially predict performance on tests of math achievement. The predictive validity of intelligence tests considerably increases when a PSW model is used to analyze subtypes of learning disabilities (Decker, Hale, & Flanagan, 2013). Hale and Fiorello (2004) recommend that practitioners take an “idiographic approach” to understand a child’s psychological processes and explain that an idiographic approach involves emphasizing a child’s pattern of performance and allows practitioners to develop interventions that are tailored to individual needs. Our present study demonstrates a significant role of Gc in the development, maintenance, and advancement of math skills.
In addition, it is important to note that utilizing this idiographic PSW approach to understanding a student’s learning profile is compatible with the Response to Intervention (RtI) framework now commonly used in schools. For struggling students who have received evidence-based intervention and are not responding to it, this approach to cognitive assessment can provide educators with new and different information about a student’s specific learning processes and needs, which can then guide the development of different intervention strategies and individualized educational goals. Rather than relying upon IQ testing alone as a source of information about a child’s learning abilities, the analysis of PSW has far greater utility in that there is a strong evidence base supporting the relationship between students’ PSW and specific academic skills. As such, this analytic process has direct benefit for practitioners who are working with struggling students.
Limitations and Future Directions
These findings must be interpreted in light of limitations of the present study and analytic approach. The initial plan for the analysis of error categories involved conducting a content analysis. However, due to limitations caused by missing data, this was not feasible. In addition, the completed analysis of the error categories in this study resulted in the need to combine multiple error categories, complicating evidence of reliability in the main variables. Future research controlling for the effects of missing data may want to consider utilizing content analysis. Issues with missing data additionally resulted in limitations regarding measures of cognitive domains. Attempts to control for missing data resulted in a significant reduction of sample size and power, leading to the inability to use data from the standardization sample with scores from cognitive measures (such as the WISC-V, DAS-II, and KABC-II) of Gc, Glr, and Gs in this study. In the future, this research should be replicated using scores from cognitive assessments. Finally, given the significant mathematics achievement error patterns demonstrated in this study, the relationship between error patterns and other broad cognitive domains should be explored. In particular, potential PSW profiles in cognitive areas with strong empirically demonstrated contributions to achievement in mathematics, such as fluid reasoning (Gf) and short-term/working memory (Gsm), should be investigated further.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Borasi, R. (1994). Capitalizing on errors as “Springboards for Inquiry”: A teaching experiment. Journal for Research in Mathematics Education, 25, 166-208. doi:10.2307/749507
Bottge, B. A., Ma, X., Gassaway, L., Butler, M., Toland, M. D. (2014). Detecting and correcting fractions computation error patterns. Exceptional Children, 80, 237-255. doi:10.1177/001440291408000207
Canivez, G. L., Kush, J. C. (2013). WISC-IV and WAIS-IV structural validity: Alternate methods, alternate results. Commentary on Weiss et al. (2013a) and Weiss et al. (2013b). Journal of Psychoeducational Assessment, 31, 157-169. doi:10.1177/0734282913478036
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276. doi:10.1207/s15327906mbr0102_10
Ceci, S., Williams, S. (1997). Schooling, intelligence, and income. American Psychologist, 52, 1051-1058. doi:10.1037/0003-066X.52.10.1051
Clements, M. K. (1980). Analyzing children’s errors on written mathematical tasks. Educational Studies in Mathematics, 11, 1-21. doi:10.1007/BF00369157
Decker, S. L., Hale, J. B., Flanagan, D. P. (2013). Professional practice issues in the assessment of cognitive functioning for educational applications. Psychology in the Schools, 50, 300-313. doi:10.1002/pits.21675
Fiori, C., Zuccheri, L. (2005). An experimental research on error patterns in written subtraction. Educational Studies in Mathematics, 60, 323-331. doi:10.1007/s10649-005-7530-6
Flanagan, D. P., Ortiz, S. O., Alfonso, V. (2013). Essentials of cross-battery assessment (3rd ed.). New York, NY: Wiley.
Floyd, R. G., Evans, J. J., McGrew, K. S. (2003). Relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and mathematics achievement across the school-age years. Psychology in the Schools, 40, 155-171. doi:10.1002/pits.10083
Geary, D. C., Hoard, M. K. (2001). Numerical and arithmetical deficits in learning-disabled children: Relation to dyscalculia and dyslexia. Aphasiology, 15, 635-647. doi:10.1080/02687040143000113
Geary, D. C., Hoard, M. K., Bailey, D. H. (2011). Fact retrieval deficits in low achieving children and children with mathematical learning disability. Journal of Learning Disabilities, 45, 291-307. doi:10.1177/0022219410392046
Glutting, J. J., Watkins, M. W., Konold, T. R., McDermott, P. A. (2006). Distinctions without a difference: The utility of observed versus latent factors from the WISC-IV in estimating reading and math achievement on the WIAT-II. Journal of Special Education, 40, 103-114. doi:10.1177/00224669060400020101
Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79-132. doi:10.1016/S0160-2896(97)90014-3
Hale, J. B., Fiorello, C. A. (2004). School neuropsychology: A practitioner’s handbook. New York, NY: Guilford Press.
Hale, J. B., Fiorello, C. A., Dumont, R., Willis, J. O. (2008). Differential Ability Scales–Second Edition: (Neuro)psychological predictors of math performance for typical children and children with math disabilities. Psychology in the Schools, 45, 838-858. doi:10.1002/pits.20330
Hale, J. B., Fiorello, C. A., Kavanagh, J. A., Hoeppner, J. A. B., Gaither, R. A. (2001). WISC-III predictors of academic achievement for children with learning disabilities: Are global and factor scores comparable? School Psychology Quarterly, 16, 31-55. doi:10.1521/scpq.16.1.31.19158
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185. doi:10.1007/BF02289447
Huberty, C. J., Petoskey, M. D. (2000). Multivariate analysis of variance and covariance. In Tinsley, H., Brown, S. (Eds.), Handbook of applied multivariate statistics and mathematical modeling (pp. 183-208). New York, NY: Academic Press.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Kaufman, A. S., Kaufman, N. (2014). Kaufman Test of Educational Achievement (3rd ed.). Minneapolis, MN: NCS Pearson.
Kaufman, A. S., Kaufman, N. L., Breaux, K. C. (2014). Kaufman Test of Educational Achievement, Third Edition (KTEA-3) technical & interpretive manual. Bloomington, MN: Pearson.
Kaufman, A. S., Raiford, S. E., Coalson, D. L. (2016). Intelligent testing with the WISC-V. Hoboken, NJ: John Wiley.
Kaufman, S. B., Reynolds, M. R., Liu, X., Kaufman, A. S., McGrew, K. S. (2012). Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests. Intelligence, 40, 123-138. doi:10.1016/j.intell.2012.01.009
Ketterlin-Geller, L. R., Yovanoff, P. (2009). Diagnostic assessments in mathematics to support instructional decision making. Practical Assessment, Research & Evaluation, 14, 1-11.
Lix, L. M., Keselman, J. C., Keselman, H. J. (1996). Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance. Review of Educational Research, 66, 579-619. doi:10.3102/00346543066004579
McGrew, K. S., Wendling, B. J. (2010). Cattell–Horn–Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47, 651-675. doi:10.1002/pits.20497
Movshovitz-Hadar, N., Zaslavsky, O., Inbar, S. (1987). An empirical classification model for errors in high school mathematics. Journal for Research in Mathematics Education, 18, 3-14.
Neisser, U., Boodoo, G., Bouchard, T. J., Boykin, A. W., Brody, N., Ceci, S., . . . Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101. doi:10.1037/0003-066X.51.2.77
O’Brien, R., Pan, X., Courville, T., Bray, M. A., Breaux, K. C., Avitia, M., . . . Choi, D. (2017). Exploratory factor analysis of reading, spelling, and math errors. Journal of Psychoeducational Assessment, 35(1-2), 8-24.
Olejnik, S. (2010). Multivariate analysis of variance. In Hancock, G., Mueller, R. (Eds.), The reviewer’s guide to quantitative methods in the social sciences (pp. 315-328). New York, NY: Routledge.
Parkin, J. R., Beaujean, A. A. (2012). The effects of Wechsler Intelligence Scale for Children–Fourth Edition cognitive abilities on math achievement. Journal of School Psychology, 50, 113-128. doi:10.1016/j.jsp.2011.08.003
Peng, A., Luo, Z. (2009). A framework for examining mathematics teacher knowledge as used in error analysis. For the Learning of Mathematics, 29, 22-25.
Radatz, H. (1979). Error analysis in mathematics education. Journal for Research in Mathematics Education, 10, 163-172. doi:10.2307/748804
Root, M. M., Marchis, L., White, E., Courville, T., Choi, D., Bray, M. A., Pan, X., . . . Wayte, J. (2017). How achievement error patterns of students with mild intellectual disability differ from low IQ and low achievement students without diagnoses. Journal of Psychoeducational Assessment, 35(1-2), 95-111.
Russell, M., O’Dwyer, L. M., Miranda, H. (2009). Diagnosing students’ misconceptions in algebra: Results from an experimental pilot study. Behavior Research Methods, 41, 414-424. doi:10.3758/BRM.41.2.414
Swanson, H. L., Beebe-Frankenberger, M. (2004). The relationship between working memory and mathematical problem solving in children at risk and not at risk for serious math difficulties. Journal of Educational Psychology, 96, 471-491. doi:10.1037/0022-0663.96.3.471
Taub, G. E., Floyd, R. G., Keith, T. Z., McGrew, K. S. (2008). Effects of general and broad cognitive abilities on mathematics achievement. School Psychology Quarterly, 23, 187-198. doi:10.1037/1045-3830.23.2.187