Norm-referenced error analysis is useful for understanding individual differences in students’ academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of Educational Achievement–Third Edition (KTEA-3) through exploratory factor analyses (EFAs). The EFA results supported models with two or three factors for each of the five subtests. Significant inter-factor correlations were identified within all subtests, except between two factors of the Math Concepts and Applications (MCA) subtest. There was also consistency in the covariance patterns of some error categories across subtests, particularly within the Nonsense Word Decoding (NWD) and Spelling (SP) subtests, which supported the proposed factor structures. The factor structures yielded by these analyses serve as the bases for the other articles in this special issue.
Students’ errors provide valuable information in educational evaluation. Errors in reading and mathematics have been studied extensively, but such studies have largely been qualitative in nature and tied to a specific concept or area of knowledge (Jordan & Hanich, 2000; Pennington et al., 1986). Such research has provided great insight into common student (mis)understandings and is, therefore, valuable for instructional planning. However, such analyses are not sufficient for diagnostic evaluations, which require not only information on what error(s) a student is making but also how the student’s errors compare with peers’ errors (normative comparison) and how they compare with the student’s own performance in other areas (personal strength/weakness). The use of norm-referenced scores allows for a multidimensional interpretation of a student’s scores (or errors) so that reliable distinctions can be made between students’ strengths and weaknesses (Gronlund, 2006).
The Kaufman Test of Educational Achievement–Third Edition (KTEA-3) is an individually administered test for students in grades pre-kindergarten (PK) through 12 and ages 4 through 25 years that provides a theory-based and norms-based error analysis system. Errors are classified in terms of the types of processing required, elements of comprehension, content aspects of the problem, or a combination of these. This theoretically based systematic analysis provides an in-depth look at the potential underlying reasons for an examinee’s errors, which better informs the development of appropriate interventions. The norms-based analysis allows for a comparison of the number and types of errors made to determine a student’s strengths and weaknesses relative to those of similar students in the norming population.
Five subtests (three reading/language and two math tests) from KTEA-3 standardization projects are involved in the current study. The language subtests include Nonsense Word Decoding (NWD), Letter and Word Recognition (LWR), and Spelling (SP). The math subtests include Math Computation (MC) and Math Concepts and Applications (MCA). LWR is a measure of word recognition skills that requires students to read aloud from a list of regular and irregular words. The NWD subtest is a measure of phonological decoding skills that requires students to read aloud from a list of phonetically regular pseudowords. The SP subtest is a measure of written spelling of single words from dictation, requiring the use of phonological and orthographic processes. The MC subtest is a measure of untimed written math computation skills. The MCA subtest is a measure of a student’s ability to apply mathematical computation and reasoning skills to solve meaningful problems read aloud to the student and accompanied by a printed copy of the problem or an illustration (Kaufman, Kaufman, & Breaux, 2014).
The current study has three main goals. The first goal is to examine the underlying relationship between students’ errors on selected KTEA-3 language and math subtests. The second goal is to identify those error categories that are more salient than others in the selected language and math subtests. Finally, the current study aims to reduce data to a smaller set of summary variables, which will serve as the foundation for other articles in this special issue.
Exploratory factor analysis (EFA) was used to achieve these goals. EFA is commonly used to explore the latent constructs underlying observed variables (McCoach, Gable, & Madura, 2013). In this study, EFA was used to determine what patterns were present in the types of errors students make in each of five KTEA-3 subtests. Identifying the factor structure of errors will help in determining how errors across the various categories are related and whether errors in particular categories are indicative of developmental and cognitive weaknesses.
Participants
The sample used in this study included students from PK to 12th grade who participated in the standardization phase of the KTEA-3 (Kaufman & Kaufman, 2014) between August 2012 and July 2013. The distributions of gender, ethnicity, parental education level, and region closely match U.S. Census data, as reported in the KTEA-3 Technical and Interpretive Manual (Kaufman et al., 2014). See Table 1 for descriptions of the samples for each KTEA-3 subtest used in this study.
Table 1. Demographic Characteristics of the Sample.

The sample for the five KTEA-3 subtests used in the current study—LWR, NWD, SP, MCA, and MC—is a subset of the larger KTEA-3 standardization sample, limited to students with error analysis data available. Sample sizes for the five subtests differ, ranging from 1,732 (NWD) to 3,842 (MCA). According to the KTEA-3 Technical & Interpretive Manual (Kaufman et al., 2014), trained scorers classified the errors made by students on each subtest, and the total number of errors per category was then calculated for each student.
Measures
Five subtests (three language and two math subtests) from the KTEA-3 were included in the current study. The three language subtests are NWD, LWR, and SP. The two math subtests are MC and MCA. Each of the five subtests includes 14 to 17 error categories. The error categories for each of the five subtests that were included in this study are listed in Table 2.
Table 2. Error Categories of Five KTEA-3 Subtests.

LWR
The LWR subtest assesses a student’s ability to recognize letters and words. The initial items of LWR focus on knowledge of letter names and letter sounds, whereas the latter items focus on word knowledge using both regular words (those that are read correctly by applying phonological decoding principles) and irregular words (those that can be read correctly only if the student is familiar with the word or related words). Seventeen error types were identified during the KTEA-3 standardization phase (see Table 2). For LWR, errors are analyzed within each item, where examiners make error classifications based on a qualitative analysis of the examinee’s response on each item.
NWD
The NWD subtest assesses an examinee’s ability to transform printed letters and letter patterns into sounds and integrate those sounds into a pronunciation that conforms to rules of standard American English. “The items are built from commonly occurring letter patterns, such as suffixes and inflections, in combinations that have predictable pronunciations” (Kaufman et al., 2014, p. 122). The difficulty level of the nonsense word items is primarily determined by suffixes; silent letters; unusual vowel and vowel-team constructions; hard/soft C, G, and S; and syllabication. Similar to LWR, error analysis is conducted at a within-item level, in which examiners make error classifications based on a qualitative analysis of the examinee’s response on each item. A total of 17 error categories were identified for the NWD subtest (see Table 2).
SP
The SP subtest assesses a test-taker’s ability in spelling, including fundamental phonological processing skills, knowledge of phonics (how to convert sounds to print), and mental representations of orthographic patterns through the written spelling of dictated words. This subtest includes words with predictable and unpredictable patterns. The error analysis provides for differentiating phonetic from non-phonetic spelling errors, thereby enabling the identification of possible root causes for spelling difficulties. Similar to LWR and NWD, students’ errors on the SP subtest are analyzed at the within-item level, with a total of 16 error categories identified (see Table 2).
MCA
The MCA subtest examines a student’s understanding of mathematical concepts, mathematical reasoning ability, and conceptual knowledge and reasoning application. Aspects non-central to problem solving were simplified so that each item corresponds to a particular error type. A total of 14 error types were defined for the MCA subtest (see Table 2).
MC
The MC subtest measures numeration, the basic operations, computations with zero (e.g., subtracting from zero, or multiplying by zero as part of a multiple digit problem), fractions, decimals, algebra, roots and exponents, signed numbers, binomials, and factorial expansion (Kaufman et al., 2014). The focus of each item is on whether the student can demonstrate a specific skill, fact, or process, which in turn corresponds to a particular error category. Seventeen error categories were identified based on item-level analysis (see Table 2). This item-error correspondence was possible because the aspects of an item that are not central to the focal skill are kept simple so as not to influence or cause an incorrect answer.
For each subtest, the total number of an examinee’s errors per category was transformed into one of three descriptive categorizations (weakness, average, or above average) based on a normative comparison. Each student’s total number of errors per category was compared with that of other students in their grade who completed the same items on the same form, and was then dichotomized as either a weakness (0) or average/above average (1). Tables 3 through 7 show the distribution of skill categorization per error type for each subtest.
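The within-grade normative coding described above can be sketched as follows. This is only an illustration that flags students whose error counts exceed a grade-level percentile cutoff; the actual KTEA-3 cutoffs come from the normative tables in the test manual, and the function name and threshold here are hypothetical.

```python
import numpy as np

def dichotomize_errors(error_counts, grades, cutoff_pct=75):
    """Code each student's error count as weakness (0) or
    average/above average (1) relative to grade-level peers.

    A student whose count exceeds the grade-level cutoff percentile
    is flagged as a weakness. The percentile rule is illustrative
    only; the KTEA-3 uses published normative tables.
    """
    error_counts = np.asarray(error_counts, dtype=float)
    grades = np.asarray(grades)
    status = np.ones_like(error_counts, dtype=int)  # default: average/above
    for g in np.unique(grades):
        mask = grades == g
        threshold = np.percentile(error_counts[mask], cutoff_pct)
        status[mask & (error_counts > threshold)] = 0  # weakness
    return status
```

Applying this per error category yields the binary skill-status matrix that the factor analyses below operate on.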
Table 3. LWR Subtest: n Count Per Skill Categorization Per Error Type.

Table 4. NWD Subtest: n Count Per Skill Categorization Per Error Type.

Table 5. SP Subtest: n Count Per Skill Categorization Per Error Type.

Table 6. MCA Subtest: n Count Per Skill Categorization Per Error Type.

Table 7. MC Subtest: n Count Per Skill Categorization Per Error Type.

EFA Procedure
Pre-processing
Due to the binary nature of the data, polychoric correlation matrices were used as the basis of EFA. Error types with more than half of the data missing were dropped from the analysis. The following error types were, therefore, removed from the MC analysis: decimal, exponent or root, algebra, add or subtract numerator and denominator, equivalent fraction/common denominator, multiply/divide fraction, mixed number, and incorrect sign. In addition, the following errors were removed from the MCA analysis: decimals and percents, and data investigation. For all subtests, error types such as uncodable or unpredictable pattern were also eliminated.
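For binary variables, the polychoric correlation reduces to the tetrachoric correlation. The study used full estimation in dedicated software; as a rough sketch of this pre-processing step, the code below uses the classical cosine-pi approximation to the tetrachoric correlation (an assumption for illustration, not the study's method) together with the missing-data screen. Function names are hypothetical.

```python
import numpy as np

def tetrachoric_approx(x, y):
    """Cosine-pi approximation to the tetrachoric correlation of two
    binary variables (concordant cells a, d; discordant cells b, c).
    A rough stand-in for full maximum likelihood estimation."""
    a = np.sum((x == 1) & (y == 1))
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))
    if b * c == 0:  # no discordant pairs: perfect association (or none)
        return 1.0 if a * d > 0 else 0.0
    return np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))

def drop_high_missing(data, max_missing=0.5):
    """Drop columns (error types) with more than half the values
    missing (NaN), as in the screening step described above."""
    missing_rate = np.mean(np.isnan(data), axis=0)
    return data[:, missing_rate <= max_missing], missing_rate
```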
Moreover, error types that correlated highly with each other were aggregated; a polychoric correlation of .6 or higher was treated as indicative of collinearity between a pair of error types. Both variables in such a pair were dropped, and their average was used instead in further analysis. The following pairs were aggregated: subtraction and regrouping with subtraction on the MC subtest; multiplication with fact or computation on the MC subtest; and division with word problems on the MCA subtest. All categories remaining in the analyses are denoted with superscript “a” in Table 2.
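The aggregation step above can be sketched as a column merge: each flagged pair is replaced by the mean of its two columns. The function and category names below are illustrative assumptions, not the study's code.

```python
import numpy as np

def aggregate_collinear(data, names, pairs):
    """Replace each pair of collinear error types (polychoric r >= .6)
    with the mean of the two columns, mirroring the aggregation step
    described above. `pairs` holds (name_i, name_j) tuples chosen by
    inspecting the correlation matrix."""
    data = np.asarray(data, dtype=float)
    names = list(names)
    for n1, n2 in pairs:
        i, j = names.index(n1), names.index(n2)
        merged = data[:, [i, j]].mean(axis=1)          # average of the pair
        keep = [k for k in range(data.shape[1]) if k not in (i, j)]
        data = np.column_stack([data[:, keep], merged])
        names = [names[k] for k in keep] + [f"{n1}+{n2}"]
    return data, names
```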
Extraction method
The unweighted least squares method was used to extract factors. This method is recommended when the assumption of multivariate normality is violated (Fabrigar, Wegener, MacCallum, & Strahan, 1999), which was the case for the current data set, as most variables showed skewed distributions.
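Unweighted least squares extraction is normally done with dedicated statistical software. As an illustrative stand-in, the sketch below uses iterated principal-axis factoring, which fits factor loadings to a correlation matrix by the closely related criterion of minimizing unweighted squared residuals; it is not the study's implementation.

```python
import numpy as np

def principal_axis(corr, n_factors, n_iter=100, tol=1e-6):
    """Iterated principal-axis factoring on a correlation matrix.
    Starts from squared multiple correlations as communality
    estimates and iterates the eigendecomposition of the reduced
    matrix until the communalities stabilize."""
    R = np.array(corr, dtype=float)
    # initial communalities: squared multiple correlations
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)                 # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        h2_new = np.sum(loadings**2, axis=1)
        if np.max(np.abs(h2_new - h2)) < tol:
            h2 = h2_new
            break
        h2 = h2_new
    return loadings, h2
```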
Number of factors to retain
Parallel analysis (PA; Horn, 1965) was used in combination with the scree plot (Cattell, 1966) to determine the number of factors to retain. PA compares the observed eigenvalues extracted from the correlation matrix to be analyzed with those obtained from uncorrelated normal variables. In this method, a factor is considered significant if the associated eigenvalue exceeds the 95th percentile of those obtained from the random uncorrelated data (Cota, Longman, Holden, Fekken, & Xinaris, 1993; Glorfeld, 1995). In addition to PA, the scree plots allowed for a visual examination of where the eigenvalues drop sharply. The two methods were used in tandem to prevent both over- and under-extraction. The scree plots of PA for each subtest are shown in Figures 1 through 5. The numbers of factors retained per subtest are as follows: three factors for LWR, two factors for NWD, two factors for SP, three factors for MCA, and two factors for MC.
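Horn's procedure can be sketched generically as follows: simulate correlation matrices from uncorrelated normal data of the same size, take the 95th percentile of each ordered eigenvalue, and retain factors whose observed eigenvalues exceed that threshold. This is a generic implementation, not the software used in the study.

```python
import numpy as np

def parallel_analysis(corr, n_obs, n_reps=200, percentile=95, seed=0):
    """Horn's parallel analysis with Glorfeld's 95th-percentile rule.
    Returns the number of factors to retain, plus the observed
    eigenvalues and the random-data thresholds for a scree plot."""
    rng = np.random.default_rng(seed)
    n_vars = corr.shape[0]
    obs_eigs = np.sort(np.linalg.eigvalsh(corr))[::-1]
    rand_eigs = np.empty((n_reps, n_vars))
    for r in range(n_reps):
        x = rng.standard_normal((n_obs, n_vars))       # uncorrelated data
        rand_eigs[r] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    thresholds = np.percentile(rand_eigs, percentile, axis=0)
    n_retain = 0
    for o, t in zip(obs_eigs, thresholds):             # leading eigenvalues only
        if o > t:
            n_retain += 1
        else:
            break
    return n_retain, obs_eigs, thresholds
```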
Rotation
After the completion of the preliminary factor analysis and determination of the appropriate number of factors to retain, a promax rotation was applied, which allowed the factors to correlate with each other. The factor structures after rotation are shown in Tables 8 through 10.
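Promax starts from an orthogonal varimax rotation and then relaxes orthogonality by regressing a powered target on the varimax solution, yielding an oblique pattern matrix and a factor correlation matrix. The sketch below follows the standard Kaiser (varimax) and Hendrickson–White (promax) algorithms; it is a generic illustration, not the study's software.

```python
import numpy as np

def varimax(loadings, eps=1e-6, max_iter=1000):
    """Kaiser's varimax rotation (SVD algorithm, no row normalization)."""
    x = np.asarray(loadings, dtype=float)
    p, k = x.shape
    T = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        z = x @ T
        B = x.T @ (z**3 - z @ np.diag((z**2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(B)
        T = u @ vt
        d_new = s.sum()
        if d_new < d_old * (1 + eps):   # criterion stopped improving
            break
        d_old = d_new
    return x @ T, T

def promax(loadings, power=4):
    """Promax: varimax first, then fit an oblique transformation to a
    powered (element-wise |z|^(power-1) * z) target."""
    z, _ = varimax(loadings)
    Q = z * np.abs(z) ** (power - 1)                 # powered target
    U, *_ = np.linalg.lstsq(z, Q, rcond=None)        # least-squares transform
    d = np.diag(np.linalg.inv(U.T @ U))
    U = U @ np.diag(np.sqrt(d))                      # rescale columns
    pattern = z @ U                                  # oblique pattern matrix
    phi = np.linalg.inv(U.T @ U)                     # factor correlations
    return pattern, phi
```

For a loading matrix with clean simple structure, the returned `phi` approaches the identity; the inter-factor correlations reported in Tables 8 through 10 correspond to the off-diagonal elements of this matrix.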
Table 8. Factor Structure Coefficients for Language and Spelling Subtests.

Table 9. MCA: Factor Structure Coefficients.

Table 10. MC: Factor Structure Coefficients.

We reviewed the error groups in each factor, compared commonalities of those categories within each factor, and aligned those findings with current literature on cognitive development. Input from psychoeducational experts with experience in assessment development was also incorporated. Together, these qualitative analyses yielded the factor descriptions outlined below.
LWR
The three-factor solution explained a total of 46% of the variance in the polychoric correlation matrix, with the three factors explaining 23%, 12%, and 11% of the total variance, respectively. As shown in Table 8, the first factor has the following variables loading on it: suffix/inflection, syllable insertion/omission, long vowel, short vowel, R-controlled vowel, wrong vowel, and prefix/word beginning. In addition, whole word error/misplaced accent, silent letter, and initial/final sounds error categories cross-load onto Factor 1. This factor, labeled as Contextual Vowel Pronunciation, is focused on not only the regular vowel sounds (long and short vowels) but also the more contextually dependent regular and irregular vowel pronunciations, as in the case of the R-controlled vowel and wrong vowel categories (D. Kilpatrick, personal communication, April 2, 2016). The suffix and prefix categories may not seem to fit neatly into this factor, but they lend support to the argument that the pronunciation of vowels should be interpreted with respect to their contexts. Three error categories load onto this factor and Factor 2, indicating that whole word errors, silent letter knowledge, and skills in initial/final sounds involve contextual vowel pronunciations and intermediate letter–sound knowledge.
The second factor, labeled as Intermediate Letter–Sound Knowledge, is made up of consonant blend and vowel team/diphthong, with initial/final sound, silent letter, and whole word error/misplaced accent cross-loading onto this factor as well. Learning of consonant blends and vowel diphthong sounds follows the development of single-letter sound knowledge, making them more intermediate than basic skills (D. Kilpatrick, personal communication, April 2, 2016). The initial/final sound category loaded at above 0.40 onto this factor and Factor 1, indicating this type of error is dependent on intermediate phonics and contextual pronunciation knowledge.
The remaining error categories—hard/soft C, G, S; single/double consonant; misordered sounds; and consonant digraph—make up Factor 3, labeled as Consonant Pattern Knowledge. As in Factor 1, hard/soft letter sounds are dependent on their contexts but can be interpreted by learning the underlying patterns (D. Kilpatrick, personal communication, April 2, 2016). The consonant digraphs and double consonants are letters that are interpreted together as single sounds, having similarities with, but not cross-loading onto, Factor 2. Significant inter-factor correlations were present: .46 between Factors 2 and 3, .58 between Factors 1 and 2, and .63 between Factors 1 and 3. These inter-factor correlations are suggestive of the interdependency between contextual vowel pronunciation, letter–sound knowledge, and consonant/orthographic patterns.
NWD
A two-factor solution explained 39% of total variance, with the two factors explaining 29% and 10% of total variance, respectively. The first factor, labeled as Letter–Sound Knowledge, consists of the following error types: single/double consonant, initial blend, medial/final blend, consonant digraph, short vowel, long vowel, vowel team/diphthong, R-controlled vowel, wrong vowel, prefix/word beginning and suffix/inflection, misordered sounds, and whole word error. The majority of these skills can be classified as basic phonics skills, which are generally the focus of this assessment.
The second factor, labeled Basic Phonic Decoding, is made up of the silent letter and syllable insertion/omission error categories, with initial/final sound cross-loading. These error categories may have loaded together because students with high numbers of errors in these categories lack basic phonic decoding skills (D. Kilpatrick, personal communication, April 2, 2016). In addition, mistakenly pronouncing a silent letter will often insert an extra syllable into a word, so these error categories may load together because they tend to co-occur. The only cross-loading error category in the NWD subtest was initial/final sound, which is not surprising because most silent letters occur at the beginning or end of words. A significant inter-factor correlation of .58 suggested the presence of interdependency between letter–sound knowledge and basic phonic decoding.
SP
The error categories in the SP subtest can be summarized in a two-factor structure. The two-factor solution explained 36% of total variance, with the two factors explaining 22% and 14% of total variance, respectively. The first factor, Sound to Letter Mapping, consists of the following error types: silent letter, long vowel, single/double consonant, consonant digraph, vowel team/diphthong, R-controlled vowel, short vowel, whole word error, initial blend, and hard/soft C, G, S. This factor is characterized by basic spelling skills that are indicative of a student’s ability to determine the type of letter that represents a pronounced sound (D. Kilpatrick, personal communication, April 2, 2016; N. Mather, personal communication, February 22, 2016). Suffix/inflection also cross-loads onto this factor.
The second factor, Phonological Awareness, consists of error categories that are blends and more advanced spelling skills. These error groups are medial/final blend, syllable insertion/omission, and non-phonetic, with suffix/inflection cross-loading. Students with low phonological awareness tend to make more errors in these categories than those with strong skills in this area. The prefix/word beginning error category had low loadings on both factors, indicating that it did not share enough in common with the other error categories to load onto either factor. A significant inter-factor correlation of .58 indicated the existence of interdependency between sound-to-letter mapping and phonological awareness.
MCA
Three factors were identified for this subtest. The three-factor solution explained 43% of total variance, with the three factors explaining 20%, 12%, and 11% of the total variance, respectively. The first factor, Math Calculation, consists of the following error categories: subtraction, addition, division and word problems, multiplication, algebra, and fractions. The error categories in this factor are basic math skills and operations used across grade levels. The second factor, Geometric Concepts, is made up of the geometry and measurement error categories, which are visual–spatial math skills. These concepts also overlap with science curricula and are taught in both content areas in U.S. schools, which might explain their linking together (D. Kilpatrick, personal communication, April 2, 2016). The remaining error types—multi-step problems and time and money—make up Factor 3, Complex Math Problems. The error categories in this factor rely on the use of working memory, which might help explain their connection. The error categories of number concepts and tables and graphs did not have factor loadings high enough to support their assignment to one factor over another. Moderate correlations were present between Factors 1 and 3 (r = .46) and Factors 2 and 3 (r = .38). The commonalities between these two pairs could be explained by the influence of working memory.
MC
The types of errors on this subtest fit into two factors that explain 49% of total variance, with the two factors explaining 12% and 37% of total variance, respectively. The first factor, labeled as Basic Math Concepts, contains the following error categories: multiplication and fact or computation; division; fractions; subtraction and regrouping; subtraction; and subtraction by smaller from larger. These error categories are all types of mathematical skills that are taught in curricula after addition, which is the most basic form of arithmetic, and require the use of logic and basic math principles (N. Mather, personal communication, March 26, 2016).
The second factor, Addition, consists of addition and addition with regrouping errors. The connection between these two categories is obvious—the ability to arrive at a sum or total by combining addends (N. Mather, personal communication, March 26, 2016). The Wrong Operation category cross-loads onto both factors, which makes sense as the wrong operation may be used during different kinds of mathematical calculations. This error can occur when a student has not mastered a new procedure and carries out the known procedure regardless of the operation, or when a student knows how to carry out a procedure, but lacks the knowledge of when to carry out such procedure (Star, 2005). A moderate inter-factor correlation of .38 is present between the two factors, indicating interdependency between addition and other arithmetic operations.
Summary of Findings
EFA results were suggestive of a three-factor model for the LWR subtest that included factors of contextual vowel pronunciation, intermediate letter–sound knowledge, and consonant pattern knowledge. A two-factor model was proposed for the NWD subtest that included factors of letter–sound knowledge and basic phonic decoding. Similarly, a two-factor model was proposed for the SP subtest that included factors of sound to letter mapping and phonological awareness. On math subtests, EFA results were supportive of a three-factor model for the MCA subtest that yielded factors of math calculation, geometric concepts, and complex math problems. A two-factor model for the MC subtest included the factors of basic math concepts and addition.
These results have implications for understanding the relationships between students’ errors in the areas of word recognition, decoding, spelling, and math. The LWR, NWD, and SP subtests have most of the same error categories in common, yet the composition of error factors differed across subtests. This finding suggests that although the skill categories are similar, different error patterns emerge due to differences in the task demands (e.g., reading vs. spelling) and stimuli (e.g., real words vs. pseudowords). Hence, students with a skill weakness in a particular area may make different patterns of errors in reading, decoding, and spelling.
The MC and MCA subtests share some of the basic operations error categories but generally have less overlap in error categories than the language subtests. Factor 1 for the MCA subtest included the basic operations error categories of addition, subtraction, multiplication, and division, yet the basic operations errors were split for the MC subtest with addition and addition-regrouping errors forming a separate factor. Hence, specific computation weaknesses may result in a different pattern of errors across math computation and math problem solving tasks.
Inter-Correlation Between Factors Within Subtests
Within each subtest, moderate-to-strong correlations were present between factors, with the exception of math calculation and geometric concepts on the MCA subtest. Within the LWR subtest, the strongest inter-factor correlation was between Factors 1 (Contextual Vowel Pronunciation) and 3 (Consonant Pattern Knowledge). The overlap between these two factors is most likely due to the context dependency of the types of errors. Another strong inter-factor correlation existed between Factors 1 (Contextual Vowel Pronunciation) and 2 (Intermediate Letter–Sound Knowledge). This connection is a reflection of the three cross-loading error categories between these two factors. The inter-factor correlation between Factors 2 (Intermediate Letter–Sound Knowledge) and 3 (Consonant Pattern Knowledge) was also relatively strong and is reflective of the similarities between digraphs, blends, and double consonants as letters that must be interpreted as single sounds. There was a relatively strong correlation between the two factors of the NWD subtest—Letter–Sound Knowledge and Basic Phonic Decoding. This connection is representative of the overlap between the mapping and decoding skills that are the focus of this subtest. The strong correlation between the two SP factors (Sound to Letter Mapping and Phonological Awareness) can be attributed to the relationship between the skill of determining which letters represent which sounds and the contextual awareness necessary to apply those patterns.
The highest correlation among the MCA subtest factors was between Factors 1 (Math Calculation) and 3 (Complex Math Problems). This overlap may be due to the continued instruction on these skills across grade levels. The moderate correlation between MCA Factors 2 (Geometric Concepts) and 3 (Complex Math Problems) is reflective of the more advanced (as opposed to basic) nature of these skills. The lack of significant correlation between Factors 1 (Math Calculation) and 2 (Geometric Concepts) could be attributable to the dominant focus on visual–spatial skills required in Factor 2 and the lack of that focus in Factor 1 (Debnath, 2016). The overlap of the MC subtest factors (Basic Math Concepts and Addition) is supportive of the general connection between the different forms of math computation.
Similarity of Covariance Pattern Across Subtests
Similarities in the covariance patterns across subtests were revealed in the results. On the math subtests, errors in arithmetic calculation share communality regardless of subtest: for example, multiplication, division, fractions, and subtraction errors load on the same factor in both the MC (Basic Math Concepts) and MCA (Math Calculation) subtests. On the language subtests, errors in sound–letter mapping likewise share communality: for example, consonant digraph, short vowel, long vowel, vowel team/diphthong, R-controlled vowel, wrong vowel, and single/double consonant errors load on the same factor in both the NWD and SP subtests.
The consistency with which some error categories load together across subtests is indicative of validity for the proposed factor structures. There are groups of errors that share more communality among themselves than with other errors, regardless of the subtest. From this, we can conclude that there are clear similarities in the underlying constructs of the factors, and that the factor structures proposed here show stability, at least within the KTEA-3 context. The relative stability of the factor structure may also reflect students’ errors occurring independent of context. When types of errors occur independently of context, we can expect a high number of errors in one category on the NWD subtest to predict a high number of errors in the same category on the SP subtest. This context independency of errors aids the overarching goals of error analysis in achievement testing—evaluation and intervention.
Limitations and Future Directions
There are two noteworthy limitations to the current study. First, although the skill status coding system (0 for weakness and 1 for average and above average) provided information on whether a student is weaker than his or her peers, it lacked information on the degree to which the student is weaker. That degree can be estimated by aggregating skill status across related domains, for example, by creating factor scores from an EFA. Second, although we focused on extracting commonalities in the current study, we do not intend to devalue the uniqueness of certain skill domains. Some error categories failed to load onto any factor due to a lack of communality with other categories across the entire age band; yet, they may occupy a unique position in the development of academic skills and warrant follow-up study.
Nonetheless, the current study provided a possible way of grouping students’ errors across domains and subtests, and identified latent structures, which will serve subsequent studies in this special issue. Subsequent studies will include closer examination of the patterns of factor presence across demographic groups and a review of the pattern of strengths and weaknesses across clinical groups.
Acknowledgements
The authors thank NCS Pearson for providing the standardization and validation data for the Kaufman Test of Educational Achievement–Third Edition (KTEA-3). Copyrights by NCS Pearson, Inc. used with permission. They also thank Alan and Nadeen Kaufman for their supervision of the comprehensive error analysis research program.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276.
Cota, A. A., Longman, R. S., Holden, R. R., Fekken, G. C., & Xinaris, S. (1993). Interpolating 95th percentile eigenvalues from random data: An empirical example. Educational and Psychological Measurement, 53, 585-596.
Debnath, L. (2016). A brief history of partitions of numbers, partition functions and their modern applications. International Journal of Mathematical Education in Science and Technology, 47, 329-355.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
Glorfeld, L. W. (1995). An improvement on Horn’s parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55, 377-393.
Gronlund, N. E. (2006). Assessment of student achievement (8th ed.). Boston, MA: Allyn & Bacon.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185.
Jordan, N. C., & Hanich, L. B. (2000). Mathematical thinking in second-grade children with different forms of LD. Journal of Learning Disabilities, 33, 567-578.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman Test of Educational Achievement–Third Edition (KTEA-3). Bloomington, MN: Pearson.
Kaufman, A. S., Kaufman, N. L., & Breaux, K. C. (2014). Kaufman Test of Educational Achievement–Third Edition (KTEA-3) technical & interpretive manual. Bloomington, MN: Pearson.
McCoach, D. B., Gable, R. K., & Madura, J. P. (2013). Instrument development in the affective domain: School and corporate applications (3rd ed.). New York, NY: Springer.
Pennington, B. F., McCabe, L. L., Smith, S. D., Lefly, D. L., Bookman, M. O., Kimberling, W. J., & Lubs, H. A. (1986). Spelling errors in adults with a form of familial dyslexia. Child Development, 57, 1001-1013.
Star, J. R. (2005). Reconceptualizing procedural knowledge. Journal for Research in Mathematics Education, 36, 404-411.