This study investigated differences in error factor scores on the Kaufman Test of Educational Achievement–Third Edition among individuals with mild intellectual disability (Mild ID), those with low achievement scores but average intelligence, and those with low intelligence but without a Mild ID diagnosis. The two control groups were matched with the Mild ID clinical cases on demographic variables including age, gender, and parental education. Results showed significant differences between the groups on several error factors, particularly between the Mild ID group and the two control groups, and no significant differences among the three groups on six error factors. In addition, the two control groups differed significantly on four error factors. Implications for intervention selection, diagnostic considerations, and future directions for achievement test creation are discussed.
Since its founding, the American Association on Intellectual and Developmental Disability (AAIDD) has been the primary organization involved in defining intellectual disability (ID). AAIDD currently defines intellectual disability by significant limitations both in intellectual functioning (reasoning, learning, and problem solving) and adaptive behavior (conceptual, social, and practical adaptive skills) originating before the age of 18 years (Schalock et al., 2010). Similarly, intellectual disability (ID; intellectual developmental disorder) is defined by the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association [APA], 2013) as having an intelligence quotient (IQ) score that is approximately two standard deviations or more below the mean (i.e., 70 and below), beginning before the age of 18 years, along with poor adaptive functioning.
Mild intellectual disability (Mild ID) is a subcategory within ID; however, there is some debate over the exact IQ range for the “mild” categorization. Individuals with Mild ID still face significant challenges in daily living and experience comorbid diagnoses, but usually have better overall life outcomes (e.g., in work, school, and social settings) than those in more severe IQ categories (Kitkanj & Georgieva, 2013). The worldwide prevalence of ID is estimated to be 3% (World Health Organization, 2001); this estimate includes moderate and severe ID (IQ < 50) as well as Mild ID (IQ between 50 and 69).
Although there is no universally accepted theory of cognitive abilities, Cattell-Horn-Carroll (CHC) theory is perhaps the most widely accepted, unifying empirical theory of cognition to date, especially pertaining to the abilities measured by clinical tests of intelligence and achievement (Flanagan & Harrison, 2012). CHC theory merges Horn and Cattell’s Gf–Gc theory of fluid and crystallized intelligence (Horn & Cattell, 1966) with Carroll’s (1993) three-stratum theory of general intelligence, broad abilities, and narrow abilities. CHC theory thereby represents an integration of the last century of theory and empirical testing on human cognitive abilities (Flanagan & Harrison, 2012).
The CHC theory is a three-level model of human cognitive abilities that includes general intelligence (g), as many as 16 broad cognitive abilities, and more than 100 narrow cognitive abilities (McGrew & Flanagan, 1998; Flanagan & Harrison, 2012). The most common CHC abilities measured by cognitive instruments are long-term retrieval (Glr), auditory processing (Ga), fluid reasoning (Gf), processing speed (Gs), short-term memory (Gsm), visual–spatial thinking (Gv), comprehension knowledge (Gc), reading–writing (Grw), and quantitative knowledge (Gq).
Also referred to as comprehension knowledge or crystallized intelligence, Gc is a store of acquired knowledge that includes both declarative and procedural knowledge and is the CHC ability examined in this study. It includes lexical knowledge, general information, language development, and listening ability (Mather & Wendling, 2008). Similar definitions of Gc state that it measures an individual’s breadth and depth of general knowledge of a culture, including verbal communication and reasoning with previously learned procedures (Woodcock, Mather, & McGrew, 2001), or that it represents the depth and breadth of knowledge and skills that are valued by one’s culture (Flanagan & Harrison, 2012). Evidence has consistently shown that although crystallized intelligence depends upon the acquisition of knowledge through life experience and formal education, it tends to remain stable, or “maintain,” over the lifetime (Kaufman, Reynolds, & McLean, 1989; McArdle, Ferrer-Caja, Hamagami, & Woodcock, 2002).
“Reading is a set of skills that allow individuals to extract linguistic meaning from orthographic representations of speech” (Barker, Sevcik, Morris, & Romski, 2013, p. 365). Reading ability is a pivotal skill that allows people to interact positively with society. This foundational ability enables academic success, access to job opportunities, and a sustained quality of life. It is, however, not an innate ability but rather an outcome of intentional learning and direct instruction (Cohen et al., 2001).
The literature concerning the reading achievement factors of students with ID is sparse (van Tilborg, Segers, van Balkom, & Verhoeven, 2014). With some exceptions, dyslexia being a notable case (Tanaka et al., 2011), lower IQ scores have generally been associated with low reading achievement; this includes both decoding skills (the ability to translate letters or words into sounds) and comprehension (literal and inferential understanding; Gottfredson, 1997). Similarly, those with ID demonstrate difficulties in reading (Fajardo, Tavares, Ávila, & Ferrer, 2013; van den Bos, Nakken, Nicolay, & van Houten, 2007).
Among all educational disability categories, those with ID demonstrated the lowest performance in reading comprehension and letter–word identification—a measure of decoding skill (Wei, Blackorby, & Schiller, 2011). Even though many studies show people with ID can make progress in reading after intensive intervention instruction (Allor, Mathes, Roberts, Cheatham, & Champlin, 2010; Allor, Mathes, Roberts, Cheatham, & Otaiba, 2014; Lundberg & Reichenberg, 2013; van den Bos et al., 2007), this potential has often been overlooked by educators and researchers (Joseph & Seery, 2004). Consequently, many children with ID have mainly been taught through “sight–word instruction” to recognize survival words. This strategy results from the false assumption that individuals with ID do not benefit from other forms of reading instruction, such as phonics, because of their limited language and cognitive abilities (Hua, Woods-Groves, Kaldenberg, & Scheidecker, 2013).
Wise, Sevcik, Romski, and Morris (2010) noted that phonological processing skills are highly correlated with reading achievement in children with Mild ID, similar to typically developing readers (Sermier Dessemontet & de Chambrier, 2015), suggesting the use of early phonological processing interventions (Channell, Loveall, & Conners, 2013). One longitudinal study showed that elementary students with mild and moderate intellectual disabilities made significant progress in all reading measures after 2 or 3 years of intensive reading interventions (Allor et al., 2014). In that study, the researchers argue that relatively long-term reading interventions can help students with Mild ID develop their skills. Another study reported that adults with Mild ID successfully progressed from isolated linguistic skills (such as phonological processing and decoding) to reading comprehension after explicit reading instruction (van den Bos et al., 2007).
Written expression is a complex process requiring a multifaceted approach, which entails a basic awareness of grammar and mechanics, vocabulary, knowledge of text structure, and the organization and planning of thoughts to produce finished products (Joseph & Konrad, 2009). Because research is limited, it is uncertain what level of writing children with ID are capable of, although observable differences have been found between the written performance of students with disabilities and their counterparts without disabilities (Graham & Harris, 1997; Joseph & Konrad, 2009; Varuzza, De Rose, Vicari, & Menghini, 2015). Writing instruction is often overlooked in educational settings serving those with ID due to the focus being on daily living and social skills rather than academic performance (Joseph & Konrad, 2009). Lack of instruction, combined with the tendency of individuals with ID to acquire skills at a slower pace, may lead to a lower proficiency in writing (Joseph & Konrad, 2009). The large number of elements to keep in mind while composing written work creates a higher risk of error, especially for those with ID. Despite the challenging nature of written expression, studies have found that through proper instruction and guided support, individuals with ID can learn to express themselves effectively through written means (Joseph & Konrad, 2009).
Studies looking at mathematical abilities in children with below-average intellectual abilities (IQ < 85) suggest that these children have difficulties with the development of mathematical skills (Hoard, Geary, & Hamson, 1999), but little is known about the cognitive deficits that underlie their poor achievement in mathematics. Brankaer, Ghesquière, and De Smedt (2011) found that children with Mild ID performed more poorly than their typically developing chronological age-matched peers on both symbolic and non-symbolic comparison tasks, while their performance did not substantially differ from the ability-matched control group. These findings suggest that the development of numerical magnitude representation in children with Mild ID is marked by a delay.
Overall, a limited number of studies explore how Mild ID affects learning, and many of those that do examine broad categories of learning, such as “reading” and “writing,” rather than nuanced errors within those categories. In addition, Mild ID has been a problematic concept, often confused with specific learning disabilities (SLDs) and low achievement. The errors that students make across these various classifications are more alike than different, making differential diagnosis problematic (Gresham, MacMillan, & Bocian, 1996).
This study was conducted to analyze the error patterns of individuals with Mild ID on the Kaufman Test of Educational Achievement–Third Edition (KTEA-3; Kaufman & Kaufman, 2014) and to discern whether there are significant differences between the performance of individuals with Mild ID compared with matched control individuals with low cognitive ability and those with low achievement ability but without a Mild ID diagnosis. The KTEA-3 uniquely offers a scoring protocol that requires coding of specific error types, which lends itself to analysis of specific error patterns across individuals and groups.
We investigated the following four research questions:
Research Question 1: Is there a distinct error factor response pattern of individuals with Mild ID on the KTEA-3?
Research Question 2: Does the error factor response pattern exhibited by individuals with Mild ID on the KTEA-3 significantly differ from that of matched children with low achievement ability and average cognitive ability (Low Achievement Control)?
Research Question 3: Does the error factor response pattern exhibited by individuals with Mild ID on the KTEA-3 significantly differ from that of matched children with low cognitive ability (Low IQ Control)?
Research Question 4: Do the error factor response patterns of the Low IQ Control and Low Achievement Control groups on the KTEA-3 differ significantly from each other?
Participants
The total number of individuals included in the grade norm sample on the KTEA-3 was 2,600 in grades kindergarten through 12 (Kaufman, Kaufman, & Breaux, 2014). Of those, 73 met the clinical sample criteria of Mild ID (i.e., IQ between 55 and 70 with adaptive behavior challenges). The mean age of the Mild ID group was 10.6 years and 50.7% were male.
Two control groups were selected from a subset of the KTEA-3 standardization sample that had minimal missing data (N = 506). To assist with this process, a new Oral Language Composite score was created using the average between Oral Expression (OE) and Listening Comprehension (LC), following the guideline in Flanagan, Ortiz, and Alfonso (2013). This composite was used as a proxy for crystallized intelligence (Gc). Cases whose new Oral Language Composite was lower than 90 formed the Low IQ Control group (N = 79). Among the rest, cases whose new Oral Language Composite score fell between 90 and 125 points and whose Academic Skills Battery Composite score was lower than 100 formed the Low Achievement Control group (N = 77). The two control groups approximately matched the Mild ID group on demographic variables, including age, gender, and parent education level. The mean age of the Low Achievement Control group was 11.2 years and 48.1% were male. The mean age of the Low IQ Control group was 11.1 years and 53.2% were male.
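The selection rules described above can be sketched as follows. This is an illustrative reconstruction of the assignment logic, not the authors' code; the function and variable names are hypothetical, and whether the 90 and 125 bounds are inclusive is assumed here.

```python
def assign_control_group(oe, lc, asb):
    """Assign a standardization case to a control group (illustrative sketch).

    oe  -- Oral Expression standard score
    lc  -- Listening Comprehension standard score
    asb -- Academic Skills Battery Composite standard score
    """
    # New Oral Language Composite: average of OE and LC, used as a Gc proxy
    oral_language = (oe + lc) / 2
    if oral_language < 90:
        return "Low IQ Control"
    # Boundary inclusivity is an assumption; the text says "between 90 and 125"
    if 90 <= oral_language <= 125 and asb < 100:
        return "Low Achievement Control"
    return None  # case assigned to neither control group
```

For example, a case with OE = 100, LC = 100, and an Academic Skills Battery Composite of 95 would fall in the Low Achievement Control group under these rules.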
The full demographic characteristics of these three groups are in Table 1. In addition to the above, the demographics include grade, ethnicity, parent education, and geographic region of the United States.
Table 1. Demographic Characteristics of the Experimental and Control Samples.

Measures
KTEA-3
The KTEA-3 (Kaufman & Kaufman, 2014) is an individually administered measure of academic achievement that provides an analysis of the student’s strengths and weaknesses in the areas of reading, mathematics, written language, and oral language for grades pre-kindergarten through 12 or ages 4 through 25 years. The KTEA-3 Reading composite combines the Letter & Word Recognition (ability to identify letters and read grade-appropriate words), Nonsense Word Decoding (sounding out made-up words), Reading Comprehension (ability to derive meaning from contextualized print), and Reading Vocabulary (ability to determine word meaning in contextual print) subtests. The KTEA-3 Reading Fluency composite comprises Word Recognition Fluency (ability to read as many words as possible within a time limit), Decoding Fluency (ability to read as many made-up words as possible within a time limit), and Silent Reading Fluency (ability to silently read simple questions and circle yes or no for each question, or sentence verification). Quantitative reasoning is assessed on the KTEA-3 by the Mathematics composite, which contains the Math Concepts & Applications (ability to solve math problems that relate to real-life situations and assess skills such as number concepts, arithmetic, time and money, and measurement), Math Computation (ability to solve written math calculation problems), and Math Fluency (ability to solve simple arithmetic problems within a time limit) subtests. The Writing composite on the KTEA-3 contains the Written Expression (ability to complete a story read by the examiner by writing letters, words, sentences, and an essay), Spelling (the student writes single letters and words dictated by the examiner), and Writing Fluency (ability to write simple sentences describing given pictures within a given time limit) subtests.
The Oral Language Composite contains the Listening Comprehension (ability to respond to comprehension questions based on a passage presented orally by the examiner), Oral Expression (ability to orally describe given photographs), and Associational Fluency (ability to state as many words as possible that belong to a given category within a time limit) tasks. The Language Processing composite represents a combination of the following tasks: Phonological Processing (ability to use phonological information in processing oral and written language), Object Naming Facility (ability to name pictured objects as quickly as possible), and Letter Naming Facility (ability to name upper and lowercase letters as quickly as possible).
The KTEA-3 provides an innovative summary of a student’s academic strengths and weaknesses through its error analysis methodology. This methodology resembles existing criterion-referenced assessment procedures that are used to provide specific information on the acquisition of selected skills. The key difference that sets the KTEA-3 apart from criterion-referenced assessment procedures is that it compares an examinee’s total errors in a category with the average number of errors made by the reference group instead of indicating mastery with a specific cutoff score.
Exploratory factor analysis and principal components analysis conducted on the error scores of the KTEA-3 yielded several factors for many of the subtests. (For details of these analyses, see Choi et al., 2017; Hatcher et al., 2017; O’Brien et al., 2017). Table 2 illustrates the factors and their associated descriptions provided by several experts in the field.
Table 2. KTEA-3 Error Factor Descriptions.

Analysis
A multi-step process was used to investigate the differences between students identified with Mild ID and the two control groups. The first analytic step in this process was the derivation of factor scores.
The KTEA-3 utilizes a unique error analysis methodology based on the specific sub-skills measured by a given subtest. For 10 of the KTEA-3 subtests, curriculum experts identified the different categories of errors students are likely to make on each subtest. For each category of error on a given subtest, students received a grade-level, normative performance label of weakness, average, or strength based on a comparison of a student’s total errors to the average number of errors made by individuals in the KTEA-3 normative sample (Kaufman et al., 2014). This performance label is called the skill status. Based on this error analysis system, students received multiple skill status error scores within each subtest. To facilitate the use of these skill status error scores in further analyses, exploratory factor analysis and principal components analysis were employed to create a reduced error score variable set.
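The skill status labeling can be illustrated with a short sketch. The actual cut points are defined in the KTEA-3 manual and are not reproduced here; the symmetric one-standard-deviation band below is purely an assumption for illustration.

```python
def skill_status(student_errors, norm_mean, norm_sd, band=1.0):
    """Label an error category relative to the normative sample.

    The +/- 1 SD band is an illustrative assumption; the KTEA-3 manual
    defines the actual normative cut points. Note that MORE errors than
    typical indicates a weakness, and FEWER errors a strength.
    """
    if student_errors > norm_mean + band * norm_sd:
        return "weakness"
    if student_errors < norm_mean - band * norm_sd:
        return "strength"
    return "average"
```

For example, a student who makes 10 errors in a category where the normative sample averages 5 errors (SD = 2) would receive a skill status of weakness under this assumed rule.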
Polychoric correlation matrices were developed for each subtest to create the error factor scores, except for the Comprehension (Reading and Listening) and Expression (Oral and Written) subtests. Because each of these four subtests has a small number of error scores that are generally the same across subtests of the same type, one polychoric correlation matrix was generated for each subtest type (comprehension or expression). An exploratory factor analysis using unweighted least squares extraction was completed for all subtests except comprehension, expression, and phonological processing. Because the comprehension, expression, and phonological processing subtests contain a limited number of error scores, principal components analysis was used to identify the factors for these subtests.
For all factor extractions, parallel analysis (PA; Horn, 1965), visual review of the scree plot (Cattell, 1966), and content analysis of the factor structure were employed to determine the number of factors to extract. For the subtests related to the current study, four factors were extracted from the comprehension and expression subtests, three factors from the letter and word recognition subtest, and two factors from the nonsense word decoding, spelling, and phonological processing subtests. R version 3.2.3 was utilized to create Bartlett factor scores (DiStefano, Zhu, & Mîndrilă, 2009) for each of the extracted factors.
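The parallel analysis retention rule can be sketched generically: a factor is retained when its observed eigenvalue exceeds the corresponding eigenvalue obtained from random data of the same dimensions. This is a minimal sketch of Horn's (1965) procedure, not the authors' implementation, and it uses a Pearson rather than polychoric correlation matrix for simplicity.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, quantile=0.95, seed=0):
    """Count factors whose observed correlation-matrix eigenvalues exceed
    the quantile of eigenvalues from random normal data of the same shape
    (Horn, 1965). Generic sketch using Pearson correlations."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Eigenvalues from n_iter random data sets of the same shape
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.quantile(rand, quantile, axis=0)
    return int(np.sum(obs > threshold))
```

In practice, the result of parallel analysis would be weighed against the scree plot and the interpretability of the factor structure, as described above.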
The next analytic step involved identifying two matched subsets of the standardization sample: students with comparably low ability but no ID diagnosis, and students with low achievement whose proxy IQ scores would not classify them as having an intellectual disability. For the first subset (Low IQ Control), students were included if they had not been diagnosed with an intellectual disability and their KTEA-3 Oral Language Composite scores were less than 90. For the second subset (Low Achievement Control), students were selected based on having KTEA-3 Oral Language Composite scores between 90 and 125 and scores less than 100 on the KTEA-3 Academic Skills Battery Composite.
The final analytic step was to investigate whether the errors made on the KTEA-3 tests varied between the different intellectual samples and the diagnosed mild intellectual disability group. To test this hypothesis, ANOVAs were conducted with subtest error factor scores as dependent variables and grouping variable (diagnosed mild intellectual disability, students with scores on the KTEA-3 Oral Language Composite less than 90, or students with KTEA-3 Oral Language Composite between 90 and 125 and scores less than 100 on the KTEA-3 Academic Skills Battery Composite) as the independent variable. To examine the homogeneity of variance assumption, a two-step analysis process was utilized. First, for each analysis, Levene’s test for homogeneity of variance was calculated. Because of the number of Levene’s tests and the power of this test, an alpha level of .005 was used. Using these criteria, seven of the error factor scores violated the assumption of homogeneity of variance. As a follow-up analysis, Welch’s ANOVA was also calculated. In each of these instances, there was very little difference between the p values generated from Welch’s ANOVA and the standard ANOVA results.
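The two-step screen for one error factor can be sketched as below. This is an illustrative reconstruction, not the authors' analysis code (which was run in R); the Welch follow-up is noted in a comment rather than implemented.

```python
from scipy import stats

def omnibus_screen(groups, levene_alpha=0.005):
    """Two-step check on one error factor: Levene's test for homogeneity
    of variance at alpha = .005, then a one-way ANOVA across the groups.
    When the variance assumption is violated, Welch's ANOVA would be run
    as a follow-up (e.g., via pingouin.welch_anova)."""
    _, levene_p = stats.levene(*groups)
    f_stat, anova_p = stats.f_oneway(*groups)
    return {
        "levene_p": levene_p,
        "variance_violated": levene_p < levene_alpha,
        "F": f_stat,
        "anova_p": anova_p,
    }
```

Each of the error factors analyzed in this study would be passed through this kind of screen once per ANOVA, with the three group score vectors as input.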
Descriptive statistics were completed on the subtest standard scores and composite index standard scores for the Mild ID and Low Achievement and Low IQ Control groups. The range of the Mild ID group mean scores across all subtests was 61.0 to 78.0 with an average of 67.2. The Low Achievement Control group had mean subtest scores just below 100 on most subtests (range = 94.5-101.5 with M = 98.8), which is influenced by the group membership constraint of an Academic Skills Battery Composite score under 100. The Low IQ Control group had mean subtest scores that ranged from 86.6 to 102.8, with five subtests greater than or equal to 100.0 and an overall mean of 96.7 across all subtest means. For the full range of subtest and composite index standard scores, see Table 3.
Table 3. Group Means and Standard Deviations on KTEA-3 Subtest and Composite Standard Scores.

Descriptive statistics were computed on the subtest error factor scores across all three groups. Cases that did not have enough completed subtest items to yield a factor score were eliminated from the statistical analysis. If a resulting group sample size fell below 15, group descriptive statistics were not run for that particular factor. This occurred with Math Concepts & Applications Factor 3 (complex math problems) and both Listening Comprehension factors (narrative–inferential and expository–literal). The original z-score results were converted to scale scores with a mean of 10 and standard deviation of 3.
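The case-exclusion rule and the z-to-scale-score conversion can be summarized in a small sketch; the function name and return structure are illustrative, not taken from the study's code.

```python
import numpy as np

def summarize_factor(scores, min_n=15):
    """Drop cases without a factor score, skip groups below the minimum n
    used in the text, and convert Bartlett z scores to scale scores with
    M = 10 and SD = 3 (illustrative sketch)."""
    scores = np.asarray(scores, dtype=float)
    scores = scores[~np.isnan(scores)]   # cases lacking a factor score
    if scores.size < min_n:
        return None                      # descriptives not reported
    scaled = 10 + 3 * scores             # z -> scale score (M = 10, SD = 3)
    return {"n": int(scores.size),
            "mean": float(scaled.mean()),
            "sd": float(scaled.std(ddof=1))}
```

Under this conversion, a group mean z score of -1.27 corresponds to a scale score of about 6.2, roughly the sound to letter mapping mean reported for the Mild ID group below.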
Table 4 contains the results of the group mean and standard deviation scores of the subtest error factors across the Mild ID group and two control groups. The two control groups’ error factor mean scores all fell within the average range. For the Mild ID group, several interesting means were noted. For instance, although Math Concepts & Applications’ overall mean was 63.3 for the Mild ID group, the breakdown in factors shows the math calculation factor group mean score to be more than 2 standard deviations below the mean (3.6) and the geometric concepts factor group mean score to be solidly at the mean (10.0). The Mild ID Written Expression factors of general (3.6) and mechanics (4.9) are two and one standard deviations below the mean, respectively. On the Oral Expression subtest, the overall Mild ID group mean is 72.1, which falls on the higher end of the Mild ID range of subtest mean scores but still nearly two standard deviations below the mean. However, the error factor mean scores within that subtest reveal that the grammar error factor group mean score (8.1) falls within the average range and the general factor group mean score (5.1) falls more than a standard deviation below the average. On spelling, the overall Mild ID group mean is 61.6, but the error factors reveal that the sound to letter mapping factor group mean score (6.2) is more than a standard deviation below the mean and the phonological awareness factor group mean score (7.7) falls within the average range.
Table 4. Group Mean and Standard Deviation Scores on Subtest Error Factors.

The ANOVA results for all factors with a sample size greater than or equal to 15 are summarized in Table 5. For each ANOVA conducted, overall significance was evaluated for each factor (measured by F values), followed by pairwise comparisons of the three groups with one another.
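The follow-up comparisons can be sketched as Bonferroni-adjusted pairwise tests. This is an illustrative version using independent-samples t tests; the study's actual comparisons may have used the pooled ANOVA error term.

```python
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(groups, labels):
    """Pairwise t tests with a Bonferroni correction, following a
    significant omnibus F (illustrative sketch)."""
    pairs = list(combinations(range(len(groups)), 2))
    adjusted = {}
    for i, j in pairs:
        _, p = stats.ttest_ind(groups[i], groups[j])
        # Bonferroni: multiply each p by the number of comparisons, cap at 1
        adjusted[(labels[i], labels[j])] = min(p * len(pairs), 1.0)
    return adjusted
```

With three groups there are three comparisons per factor, so each raw p value is multiplied by 3 before being compared against the nominal alpha.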
Table 5. Summary Table of ANOVA Results With Bonferroni Pairwise Comparisons.

Within Phonological Processing, both factors had significant F values at the .001 level. On the basic phonological awareness factor, the Mild ID group scored significantly lower than both control groups. However, the two control groups did not differ significantly from one another. On the advanced phonological processing factor, all three groups differed significantly from one another, with the Mild ID group scoring significantly lower than both control groups and the Low IQ Control group scoring significantly lower than the Low Achievement Control group.
On Math Concepts & Applications, the math calculation factor had a significant F value, with the Mild ID group scoring significantly lower than both control groups. There were no significant differences between the two control groups on that factor. On the geometric concepts factor, there were no significant differences between any of the groups.
On Letter & Word Recognition, all three factors had significant F values. For both contextual vowel pronunciation and intermediate letter–sound knowledge, the Mild ID group scored significantly lower than the two control groups and the two control groups did not differ significantly from each other. For the consonant pattern knowledge factor, the Mild ID group was significantly different from the Low IQ Control group, but no other group differences were significant.
On Math Computation, the addition factor’s F value was not significant, but the basic math concepts factor’s was. On that factor, the Mild ID group scored significantly lower than the Low IQ Control group but not the Low Achievement Control group. The Low IQ Control group also scored significantly higher than the Low Achievement Control group.
Neither nonsense word decoding nor reading comprehension factors had significant F values. Therefore, there were no significant group differences on any of the factors of these subtests.
On Written Expression, both the general and mechanics factors had significant F values. In both instances, the Mild ID group scored significantly lower than both control groups. There were no significant differences between the control groups on either factor.
The same results were found on the Spelling subtest, with significant F values for both the sound to letter mapping and phonological awareness factors. In both cases, the Mild ID group again scored significantly lower than both control groups, neither of which differed significantly from the other.
On Oral Expression, both factors had significant F values. For the grammar factor, the Mild ID group did not differ significantly from the Low IQ Control group. However, the Mild ID and Low IQ Control groups both scored significantly lower than the Low Achievement Control group. On the general factor, the Mild ID group scored significantly lower than both control groups and the Low IQ group scored significantly lower than the Low Achievement group.
This study revealed many interesting findings that are important for diagnostic and intervention selection purposes. A unique profile of error factor group mean scores on the KTEA-3 emerged for the Mild ID group. In addition, the profiles of the clinical and control groups were each distinct, with statistically significant differences between the groups across some of the error factors. Perhaps more interestingly, there were several error factors that showed no significant differences in group means between any of the groups.
As expected, the Mild ID group scored one to two standard deviations lower than the two control groups on almost every subtest. However, when the subtests were subcategorized into error factors, the results were informative in how they differentiated the three groups from each other. Specifically, none of the three groups differed significantly on the following factors: (a) Math Concepts & Applications—geometric concepts factor, (b) Math Computation—addition factor, (c) Nonsense Word Decoding—letter–sound knowledge factor, (d) Nonsense Word Decoding—basic phonic decoding factor, (e) Reading Comprehension—expository–literal factor, and (f) Reading Comprehension—narrative–inferential factor. Therefore, the Mild ID, Low IQ Control, and Low Achievement Control groups all performed similarly on tasks that load on these error factors. These results contradict the work of Gottfredson (1997) for those with Mild ID and support the idea that individuals with Mild ID can benefit from the same intensive evidence-based interventions that other children receive, particularly in reading and math. Relying upon sight–word instruction for students with Mild ID, for instance, is not sufficient to meet their educational needs. This work also clearly illustrates that those with Mild ID have strong geometry skills, which refutes some of the Brankaer et al. (2011) assertions.
Assumptions about students with Mild ID’s ability to respond to evidence-based interventions based on preconceived notions about their diagnosis, achievement subtest scores, or IQ scores are erroneous and may lead to failure to use effective strategies. This study soundly supports the concept of using evidence-based strategies for individuals with Mild ID, as well as exploring and capitalizing on known group areas of strength such as visual–spatial abilities (as exemplified on geometric concepts) and letter–sound knowledge in instruction. It is interesting to compare this study’s findings with the results of the Koriakin et al. (2017) study, which examined differences in error patterns for students with distinct patterns of cognitive strength and weakness (PSW)—high crystallized/low memory speed versus high memory-speed/low crystallized. These researchers found that there were no significant differences between the PSW groups on KTEA-3 geometry-related problems, whereas there were significant differences on the other two error factor scores. The unique visual–spatial skill requirements of geometry may account for these non-significant differences for the ID students in the present study and for the Low Crystallized group investigated by Koriakin and colleagues. The results of this study strongly support further exploration into the unique profile of achievement error responses of those with Mild ID, as opposed to relying upon broad subtest scores, which mask the strengths of those with Mild ID.
Of the error factors that had significant F values, each group had significant differences from at least one other group on some factors. The Mild ID group scored significantly lower than the Low IQ Control group on all significant F error factors except the Oral Expression–grammar factor. The Low IQ Control group did not score significantly lower than the Mild ID group on any error factors. Therefore, other than the six error factors listed above and the Oral Expression–grammar factor, the Mild ID group scored significantly lower than the Low IQ group on all other error factors. This is an important distinction to note during diagnostic evaluations, even though it was not possible to equate the groups precisely on IQ. Students with lower IQ scores but not matching the definition of Mild ID may look similar to students with Mild ID on Oral Expression grammar, geometric math concepts, addition, letter–sound knowledge, basic phonic decoding, and expository–literal and narrative–inferential reading comprehension skills. However, they will likely score higher on all other error factors.
The Mild ID group scored significantly lower than the Low Achievement Control group on all significant F error factors except the Letter & Word Recognition–consonant pattern knowledge factor and the Math Computation–basic math concepts factor. We believe these exceptions can be explained by the possibility that these factors rely on rote memorization rather than complex reasoning and may reflect abilities that students with Mild ID have mastered even though they have not progressed to more difficult processes (J. Willis, personal communication, March 18, 2016). It is also possible that instruction did not advance beyond rote memorization, and that the Mild ID group is therefore hampered by a systemic failure to use evidence-based interventions to build skills beyond the basics.
The Low Achievement Control group did not score significantly lower than the Mild ID group on any error factors. This is diagnostically important because the Low Achievement Control group also had an IQ proxy score of 90 to 125. This control group may therefore encompass students who could have a learning disability (i.e., average to above-average intelligence with a significantly lower area of achievement) but who, at a minimum, scored below 100 on the Academic Skills Battery Composite. Thus, students in the Mild ID group may score similarly to those in the Low Achievement group on consonant pattern knowledge, basic math concepts, geometric math concepts, addition, letter–sound knowledge, basic phonic decoding, and expository–literal and narrative–inferential reading comprehension skills. Their patterns will likely diverge on all other error factors, with students with Mild ID scoring significantly lower than students with low achievement but average to above-average cognitive ability.
The Low IQ Control group scored significantly lower than the Low Achievement Control group on the advanced phonological processing, Oral Expression–grammar, and Oral Expression–general factors. The Low Achievement Control group scored significantly lower than the Low IQ Control group on the Math Computation–basic math concepts factor. These two groups are thus differentiated by basic math skills, general and grammatical oral expression, and advanced phonological processing. Skilled practitioners can use these distinctions to select interventions and to capitalize on each group’s relative strengths.
Limitations and Future Directions
There were several limitations to this study that can guide future research in this area. First, we initially sought to match the Mild ID clinical cases with control group cases on demographic characteristics as well as on IQ and Academic Skills Battery Composite scores. However, given the limited number of possible matches, not enough cases were available to run the statistical analyses with such stringent matching criteria. We therefore methodically loosened the criteria to the level used in this study to maximize the number of cases in each group while still approximating the desired groupings. Second, we did not have confirmed IQ scores for the control groups, relying instead on an accepted proxy for IQ. Because the participants in the original normative sample were inaccessible to us, and because administering intelligence tests to new samples would be cost and time prohibitive, we recommend that future researchers use a similar technique for approximating IQ. Third, no reliability estimate, such as test–retest reliability or coefficient alpha, is available for this work; this limitation is inherent to this type of research.
This study yielded highly interesting distinctions between the Mild ID, Low IQ Control, and Low Achievement Control groups. It is incumbent upon the field to build on this research by running similar analyses for other achievement tests that offer error scoring. Beyond this, it is imperative that newly developing achievement tests contain error scoring options because they are highly informative and effective for distinguishing groups of learners, selecting interventions, and refuting preconceived ideas about group-level skills and potential for benefit from evidence-based interventions.
Acknowledgements
The authors wish to thank NCS Pearson for providing the standardization and validation data for the Kaufman Test of Educational Achievement–Third Edition (KTEA-3). Copyrights by NCS Pearson, Inc. used with permission. They also wish to thank Alan and Nadeen Kaufman for their supervision of the comprehensive error analysis research program.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Allor, J. H., Mathes, P. G., Roberts, J. K., Cheatham, J. P., Champlin, T. M. (2010). Comprehensive reading instruction for students with intellectual disabilities: Findings from the first three years of a longitudinal study. Psychology in the Schools, 47, 445-466. doi:10.1002/pits.20482
Allor, J. H., Mathes, P. G., Roberts, J. K., Cheatham, J. P., Otaiba, S. A. (2014). Is scientifically based reading instruction effective for students with below-average IQs? Exceptional Children, 80, 287-306. doi:10.1177/0014402914522208
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
Barker, R. M., Sevcik, R. A., Morris, R. D., Romski, M. (2013). A model of phonological processing, language, and reading for students with mild intellectual disability. American Journal on Intellectual and Developmental Disabilities, 118, 365-380. doi:10.1352/1944-7558-118.5.365
Brankaer, C., Ghesquière, P., De Smedt, B. (2011). Numerical magnitude processing in children with mild intellectual disabilities. Research in Developmental Disabilities, 32, 2853-2859.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor analytic studies. New York, NY: Cambridge University Press.
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276.
Channell, M. M., Loveall, S. J., Conners, F. A. (2013). Strengths and weaknesses in reading skills of youth with intellectual disabilities. Research in Developmental Disabilities, 34, 776-787. doi:10.1016/j.ridd.2012.10.010
Choi, D., Hatcher, R. C., Langley, S. D., Liu, X., Bray, M. A., Courville, T., . . . DeBiase, E. (2017). What do phonological processing errors tell about students’ skills in reading, writing, and oral language? Journal of Psychoeducational Assessment, 35(1-2), 25-47.
Cohen, D., Philippe, J., Plaza, M., Thompson, C., Chauvin, D., Hambourg, N., Flament, M. (2001). Word identification in adults with mild mental retardation: Does IQ influence reading achievement? Brain and Cognition, 46(1), 69-73. doi:10.1016/S0278-2626(01)80037-3
DiStefano, C., Zhu, M., Mîndrilă, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research & Evaluation, 14(20). Retrieved from http://pareonline.net/getvn.asp?v=14&n=20
Fajardo, I., Tavares, G., Ávila, V., Ferrer, A. (2013). Towards text simplification for poor readers with intellectual disability: When do connectives enhance text cohesion? Research in Developmental Disabilities, 34, 1267-1279. doi:10.1016/j.ridd.2013.01.006
Flanagan, D., Harrison, P. (2012). Contemporary intellectual assessment: Theories, tests, and issues (3rd ed.). New York, NY: Guilford Press.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C. (2013). Essentials of cross-battery assessment (Vol. 84). New York, NY: John Wiley.
Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24(1), 79-132. doi:10.1016/S0160-2896(97)90014-3
Graham, S., Harris, K. R. (1997). It can be taught, but it doesn’t develop naturally: Myths and realities in writing instruction. School Psychology Review, 26, 414-424.
Gresham, F. M., MacMillan, D. L., Bocian, K. M. (1996). Learning disabilities, low achievement, and mild mental retardation: More alike than different? Journal of Learning Disabilities, 29, 570-581.
Hatcher, R. C., Breaux, K. C., Liu, X., Bray, M. A., Ottone-Cross, K. L., Courville, T., . . . Dulong Langley, S. (2017). Analysis of children’s errors in comprehension and expression. Journal of Psychoeducational Assessment, 35(1-2), 58-74.
Hoard, M., Geary, D., Hamson, C. (1999). Numerical and arithmetical cognition: Performance of low- and average-IQ children. Mathematical Cognition, 5(1), 65-91.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185.
Horn, J. L., Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57, 253-270.
Hua, Y., Woods-Groves, S., Kaldenberg, E. R., Scheidecker, B. J. (2013). Effects of vocabulary instruction using constant time delay on expository reading of young adults with intellectual disability. Focus on Autism and Other Developmental Disabilities, 28, 89-100. doi:10.1177/1088357613477473
Joseph, L. M., Konrad, M. (2009). Teaching students with intellectual or developmental disabilities to write: A review of the literature. Research in Developmental Disabilities, 30(1), 1-19. doi:10.1016/j.ridd.2008.01.001
Joseph, L. M., Seery, M. E. (2004). Where is the phonics? Remedial and Special Education, 25(2), 88-94.
Kaufman, A. S., Kaufman, N. L. (2014). Kaufman test of educational achievement (3rd ed.). Bloomington, MN: NCS Pearson.
Kaufman, A. S., Kaufman, N. L. (with Breaux, K. C.). (2014). Technical & interpretive manual. Kaufman test of educational achievement, third edition. Bloomington, MN: NCS Pearson.
Kaufman, A. S., Reynolds, C. R., McLean, J. E. (1989). Age and WAIS-R intelligence in a national sample of adults in the 20- to 74-year age range: A cross-sectional analysis with educational level controlled. Intelligence, 13, 235-253.
Kitkanj, Z., Georgieva, E. (2013). Behavioral disorders in adolescents with mild intellectual disability. Journal of Special Education and Rehabilitation, 14(3-4), 7-21.
Koriakin, T., White, E., Breaux, K., DeBiase, E., O’Brien, R., Howell, M., . . . Courville, T. (2017). Patterns of cognitive strengths and weaknesses and relationships to math errors. Journal of Psychoeducational Assessment, 35(1-2), 156-168.
Lundberg, I., Reichenberg, M. (2013). Developing reading comprehension among students with mild intellectual disabilities: An intervention study. Scandinavian Journal of Educational Research, 57(1), 89-100. doi:10.1080/00313831.2011.623179
Mather, N., Wendling, B. J. (2008). Essentials of evidence-based academic interventions (Vol. 74). West Sussex, UK: John Wiley.
McArdle, J. J., Ferrer-Caja, E., Hamagami, F., Woodcock, R. W. (2002). Comparative longitudinal structural analyses of the growth and decline of multiple intellectual abilities over the life span. Developmental Psychology, 38(1), 115-142.
McGrew, K. S., Flanagan, D. P. (1998). Interpreting intelligence tests from contemporary Gf-Gc theory: Joint confirmatory factor analysis of the WJ-R and KAIT in a non-white sample. Journal of School Psychology, 36, 151-182.
O’Brien, R., Pan, X., Courville, T., Bray, M. A., Breaux, K. C., Avitia, M., Choi, D. (2017). Exploratory factor analysis of reading, spelling, and math errors. Journal of Psychoeducational Assessment, 35(1-2), 8-24.
Schalock, R., Borthwick-Duffy, S. A., Bradley, V. J., Buntinx, W. H. E., Coulter, D. L., Craig, E. M., . . . Yeager, M. H. (2010). Intellectual disability: Definition, classification, and systems of supports (11th ed.). Washington, DC: American Association on Intellectual and Developmental Disabilities.
Sermier Dessemontet, R., de Chambrier, A. F. (2015). The role of phonological awareness and letter-sound knowledge in the reading development of children with intellectual disabilities. Research in Developmental Disabilities, 41-42, 1-12. doi:10.1016/j.ridd.2015.04.001
Tanaka, H., Black, J. M., Hulme, C., Stanley, L. M., Kesler, S. R., Whitfield-Gabrieli, S., . . . Hoeft, F. (2011). The brain basis of the phonological deficit in dyslexia is independent of IQ. Psychological Science, 22, 1442-1451. doi:10.1177/0956797611419521
van den Bos, K. P., Nakken, H., Nicolay, P. G., van Houten, E. J. (2007). Adults with mild intellectual disabilities: Can their reading comprehension ability be improved? Journal of Intellectual Disability Research, 51, 835-849. doi:10.1111/j.1365-2788.2006.00921.x
van Tilborg, A., Segers, E., van Balkom, H., Verhoeven, L. (2014). Predictors of early literacy skills in children with intellectual disabilities: A clinical perspective. Research in Developmental Disabilities, 35, 1674-1685. doi:10.1016/j.ridd.2014.03.025
Varuzza, C., De Rose, P., Vicari, S., Menghini, D. (2015). Writing abilities in intellectual disabilities: A comparison between Down and Williams syndrome. Research in Developmental Disabilities, 37, 135-142. doi:10.1016/j.ridd.2014.11.011
Wei, X., Blackorby, J., Schiller, E. (2011). Growth in reading achievement of students with disabilities, ages 7 to 17. Exceptional Children, 78, 89-106. Retrieved from http://eric.ed.gov/?id=EJ939955
Wise, J. C., Sevcik, R. A., Romski, M., Morris, R. D. (2010). The relationship between phonological processing skills and word and nonword identification performance in children with mild intellectual disabilities. Research in Developmental Disabilities, 31, 1170-1175. doi:10.1016/j.ridd.2010.08.004
Woodcock, R., Mather, N., McGrew, K. (2001). Woodcock-Johnson III tests of cognitive abilities examiner’s manual. Itasca, IL: Riverside.
World Health Organization. (2001). The World Health Report 2001: Mental health: New understanding, new hope. Geneva, Switzerland: Author.

