Through the use of excerpts from one of our own case studies, this commentary applied concepts inherent in, but not limited to, the neuropsychological literature to the interpretation of performance on the Kaufman Tests of Educational Achievement–Third Edition (KTEA-3), particularly at the level of error analysis. The approach to KTEA-3 test interpretation advocated here parallels the cognitive process-oriented approach used by McCloskey and colleagues in their interpretation of the Wechsler scales. This approach is also advocated by Hale and Fiorello as part of their cognitive hypothesis testing model and is inherent in the neuropsychological assessment and interpretation frameworks proposed by Miller and Dehn. For the purpose of this commentary, we describe how this approach to KTEA-3 test interpretation fits within our own Cattell-Horn-Carroll (CHC)-based approach to specific learning disabilities (SLD) identification. To derive maximum benefit from error analysis, practitioners must pay careful attention to the manner in which students respond to test items and copiously document their observations during test administration.

Although several comprehensive achievement batteries such as the Wechsler Individual Achievement Test (WIAT) and Woodcock–Johnson (WJ) have existed for decades, only the Kaufman Tests of Educational Achievement (KTEA) has included systematic, thorough, and meaningful error analysis. Over the years, the authors of the KTEA (A. S. Kaufman and Nadeen L. Kaufman) have increased the number of subtests for which error analysis is available, with the largest number appearing on the third edition of the instrument (KTEA-3; Kaufman & Kaufman, 2014). Error analysis on the KTEA-3 subtests has clear benefits for the practitioner such as (a) identifying the types of errors that a student makes, (b) determining whether there is a pattern among the errors within and across academic domains, (c) generating hypotheses about the reasons why the student is making the errors, and (d) designing specific instruction to remediate the errors (see Mather & Wendling, in press; Wendling & Mather, 2009).

The authors of the KTEA-3 should be applauded for their inclusion of error analysis as well as base rate data for comparing a student’s performance to that of the norm group. Perhaps one of the greatest contributions of this special issue is the “normalizing of errors” and the application of error factor scores in practice. For example, O’Brien et al. (2017) revealed via exploratory factor analysis of reading, spelling, and math errors the following error factors on the KTEA-3: (a) Contextual Vowel Pronunciation, Intermediate Letter-Sound Knowledge, Consonant Pattern Knowledge (on the Letter Word Recognition subtest); (b) Letter-Sound Knowledge and Basic Phonetic Coding (on the Nonsense Word Decoding subtest); (c) Sound to Letter Mapping and Phonological Awareness (on the Spelling subtest); (d) Math Calculation, Geometric Concepts, and Complex Math Problems (on the Math Concepts and Application subtest); and (e) Basic Math Concepts and Addition (on the Math Calculation subtest). In other words, now it is possible to calculate error factor scores for individuals and interpret performance under the normal curve. These error factor scores may be used to classify students’ performance as low, moderate, or high in errors (O’Brien et al., 2017), design interventions for individual students, and compare error performance before and after intervention.
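The logic of interpreting error factor scores under the normal curve can be sketched on the familiar standard-score metric (M = 100, SD = 15). The sketch below is illustrative only: the norm-group mean, SD, and the low/moderate/high cut points are hypothetical placeholders, not the values published by O’Brien et al. (2017).

```python
# Illustrative sketch only: converts a raw error-factor total into a
# standard score and a descriptive band. The norm mean/SD and the cut
# points below are hypothetical, not published KTEA-3 values.

def error_factor_standard_score(raw_errors, norm_mean, norm_sd):
    """Map a raw error count onto the standard-score metric (M=100, SD=15).

    More errors than the norm group implies a lower standard score, so
    the z-score is negated before rescaling.
    """
    z = (raw_errors - norm_mean) / norm_sd
    return 100 - 15 * z

def classify_errors(standard_score):
    """Band a score as low, moderate, or high in errors (cut points assumed)."""
    if standard_score >= 90:      # near or above the mean: few errors
        return "low"
    elif standard_score >= 80:
        return "moderate"
    return "high"

# Hypothetical example: 12 errors on a factor whose norm-group mean is
# 6 errors (SD = 3).
ss = error_factor_standard_score(12, norm_mean=6, norm_sd=3)
print(ss, classify_errors(ss))  # 70.0 high
```

Because the score is expressed on the same metric as other standardized scores, pre- and post-intervention error performance can be compared directly.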

Several studies in this special issue were conducted with individuals with specific diagnoses such as intellectual disability (Root et al., 2017), specific learning disability (Avitia et al., 2017; Pagirsky et al., 2017), and giftedness (Ottone-Cross et al., 2017). One potential implication of these studies is to use error analysis for differential diagnosis. While the authors of these studies should be commended for pushing the envelope in translating standardized test performance into results that are more meaningful for diagnosis and intervention planning, additional research is necessary before error analysis can be used as a primary basis for differential diagnosis. Notwithstanding, the studies in this special issue form the foundation of a research paradigm that may provide evidence for how error analysis can and should be used to inform diagnosis and intervention (see also McCloskey, in press).

This commentary focuses on the utility of KTEA-3 error analysis for diagnosis of specific learning disabilities (SLD) based, in part, on a consideration of the findings from various studies in this special issue (e.g., Choi, Hatcher, Langley, Liu, & Bray, in press; Liu et al., 2017). However, to derive maximum benefit from error analysis, it is not only necessary to know the types of errors that an individual makes but also why the individual makes certain errors (Flanagan, Ortiz, & Alfonso, 2013; McCloskey, 2017). To demonstrate that process, we place error analysis within the context of a comprehensive evaluation of suspected learning disability.

Through the use of excerpts from one of our own case studies, this commentary applies concepts inherent in, but not limited to, the neuropsychological literature to the interpretation of performance on the KTEA-3, particularly at the level of error analysis. The approach to KTEA-3 test interpretation advocated here parallels the cognitive process-oriented approach used by McCloskey and colleagues in their interpretation of the Wechsler scales (e.g., McCloskey, 2009a, 2009b; McCloskey & Maerlender, 2005; McCloskey, Slonim, & Hartz, 2016; McCloskey, Slonim, Kaufman, & Nagoshi, in press). This approach is also advocated by Hale and Fiorello (2004) as part of their cognitive hypothesis testing model and is inherent in the neuropsychological assessment and interpretation frameworks proposed by Miller (2013) and Dehn (2013). For the purpose of this commentary, we describe how this approach to KTEA-3 test interpretation fits within our own Cattell-Horn-Carroll (CHC)-based approach to SLD identification (Flanagan, Alfonso, Mascolo, & McDonough, 2016; Flanagan, Alfonso, Ortiz, & Dynda, 2010; Flanagan et al., 2013). According to McCloskey et al. (in press), “the key to effective interpretation of test performance after administration . . . is careful observation of test performance during administration” (p. 233, emphasis in the original). To derive maximum benefit from error analysis, practitioners must pay careful attention to the manner in which students respond to test items and copiously document their observations during test administration.

Several approaches to test interpretation, for cognitive and achievement tests alike, stress the importance of drilling down. The concept of drilling down requires interpretation to move from a global level of functioning to more specific levels of functioning (e.g., from an IQ to subtest-level performance). The idea is that global ability scores (e.g., Full Scale IQ [FSIQ] or KTEA-3 Academic Skills Battery [ASB] Composite) may obscure important information about broad cognitive abilities (e.g., Fluid Reasoning Index, Processing Speed Index) and academic domains (e.g., Reading Composite, Math Composite), and cognitive indexes and academic composites may obscure important information about narrow abilities (e.g., Inductive Reasoning vs. Quantitative Reasoning) and specific academic skills (e.g., Reading Decoding vs. Reading Fluency). Furthermore, subtest-level scores may obscure important information about specific processes that may have mediated performance at the item level. Observations during test administration often lead to hypotheses about the specific cognitive processes that may have mediated or adversely influenced performance on certain achievement subtests. To test those hypotheses, analysis of error patterns is informative.

To demonstrate the utility of KTEA-3 error analysis in the SLD identification process, we first place error analysis within the context of a pattern of strengths and weaknesses (PSW) approach. Next, we describe the various steps that we take to ensure that the results of error analysis are meaningfully connected to the referral, research on the nature of the suspected learning disorder, actual manifestations of error patterns in the student’s daily academic activities and work samples, and an ultimate diagnosis of SLD if one is considered necessary based on a convergence of multiple data sources gathered through multiple methods. Finally, we use excerpts from the case of “Emma” to demonstrate those steps.

The 2006 federal regulations include three options for determining SLD: (a) ability-achievement discrepancy, (b) Response-to-Intervention, and (c) an alternative research-based procedure (§ 300.307[a]). The focus of the research in Part IV of this special issue was on the third option, which involves the evaluation of a PSW in a student’s cognitive, academic, and neuropsychological processing profile.

Figure 1 provides an illustration of the three common components of third-method approaches to SLD identification. The two bottom ovals depict academic and cognitive weaknesses, and their horizontal alignment indicates that the level of performance in the academic and cognitive domains is expected to be similar or consistent. In students with SLD, the cognitive weakness is presumed to be related meaningfully and significantly to the academic weakness, a relationship that is often empirical or otherwise clearly observable. The oval depicted at the top of Figure 1 represents cognitive strengths or integrities. The double-headed arrows between the top oval and bottom two ovals represent reliable differences in performance between cognitive strengths and the areas of cognitive and academic weakness. The pattern of cognitive and academic strengths and weaknesses represented in Figure 1 reflects a “disorder in one or more basic psychological processes” (IDEIA, 2004) and retains the concept of unexpected underachievement that has historically been synonymous with the SLD construct (Kavale & Forness, 2000; see also Hale et al., 2010). Several book chapters and articles have been written on the PSW model in Figure 1, and therefore a lengthier description is not provided here (see Christo, D’Incau, & Ponzuric, 2016; Flanagan & Alfonso, 2016, in press; McDonough & Flanagan, 2016).



Figure 1. Conceptual similarities among PSW approaches that assist in SLD diagnosis.

Source. Flanagan, Ortiz, Alfonso, and Mascolo (2006); Flanagan, Ortiz, and Alfonso (2013); Flanagan, Fiorello, and Ortiz (2010).

Note. PSW = pattern of strengths and weaknesses; SLD = specific learning disabilities.

a. Criteria vary across models.

b. Unique to Flanagan et al. (e.g., 2013, 2016) model.

Many view the PSW model in Figure 1 as if it were a formula wherein numbers are entered and a diagnosis of SLD/no SLD is produced (e.g., Kranzler, Floyd, Benson, Zaboski, & Thibodaux, 2016; McGill & Busse, 2016; Stuebing, Fletcher, Branum-Martin, & Francis, 2012). Viewing PSW as a diagnostic formula is not only ill-informed, but it also detracts from the vast amount of diagnostic and practical information the approach yields. SLD cannot be diagnosed with a formula, although an examination of numbers (i.e., test scores) within the PSW framework is helpful because it assists in the data gathering process. The bottom two ovals in Figure 1, for example, are not just low test scores; those ovals contain information about specific cognitive processes and their influences on the acquisition and development of academic skills. Figure 2 depicts this interplay between cognitive processes and academic performance. Note that the results of KTEA-3 error analysis are included in the right oval in this figure because they play a critical role in helping the practitioner understand which cognitive processes may adversely affect academic performance—information necessary for diagnosis and intervention planning. This process is demonstrated using data from Emma’s case.



Figure 2. Cognitive processing deficits interfere with academic skill development and patterns of errors provide insight into which cognitive processes adversely affect academic performance.

Prior to determining whether a student meets criteria for a PSW that is consistent with SLD, a comprehensive evaluation is conducted. There are no shortcuts to understanding a student’s learning needs, the potential contributing factors to the student’s learning difficulties, and how assessment data may be used best to inform diagnosis and intervention. Students who do not benefit as expected from evidence-based instruction and interventions that were delivered with fidelity need comprehensive evaluations that provide information that will assist in (a) understanding why previously implemented interventions were not successful; (b) specifying the nature of the learning disorder or disability, if one is present; and (c) changing the nature of instruction and intervention in meaningful ways to better address the student’s learning needs. The results of KTEA-3 error analysis play an important role in this process, and the test itself provides practitioners with the added benefit of measuring a student’s growth before and after intervention. How the KTEA-3 fits within the context of a comprehensive evaluation is discussed briefly next.

While the literature is only just beginning to mount on the PSW approach to SLD identification, dozens of books have been published in recent years on assessment, comprehensive evaluations, and diagnosis of SLD (e.g., Flanagan & Alfonso, 2011, 2016, in press; Kaufman, Raiford, & Coalson, 2016; Lezak, Howieson, & Loring, 2004; Miller, 2013; Reynolds & Fletcher-Janzen, 2009; Saklofske, Reynolds, & Schwean, 2013; Sattler, Dumont, & Coalson, 2016; Strauss, Sherman, & Spreen, 2006; Swanson, Harris, & Graham, 2003). Based on our knowledge of this literature, we highlight here the key components to a comprehensive evaluation of suspected SLD with emphasis on where the results of KTEA-3 error analysis inform diagnosis, in particular.

The key components of a comprehensive evaluation include the following: (a) a well-defined reason for referral; (b) an understanding of the etiology of presenting difficulties; (c) a thorough knowledge of theory and research as it applies to measurement of cognitive abilities, neuropsychological processes, and specific academic skills and the nature of the relationship among them; (d) an understanding of the literature on specific learning disorders and disabilities to inform clinical impressions; and (e) an analysis of error patterns to support or refute clinical impressions and assist in SLD diagnosis. SLD diagnosis is guided by theory, research, sound assessment principles and procedures, and a willingness to drill down to the item level using quality tests, such as the KTEA-3. Each of the components of a comprehensive evaluation of SLD is depicted in Figure 3 and will be discussed using excerpts from Emma’s evaluation. Table 1 provides referral and background information for Emma.



Figure 3. SLD diagnosis is guided by theory, research, sound assessment principles and procedures, and a willingness to drill down to the item level of quality tests.

Note. SLD = specific learning disabilities; KTEA = Kaufman Tests of Educational Achievement.


Table 1. Referral and Background Information for Emma.


Reason for Referral

A comprehensive evaluation must begin with a well-defined reason for referral. The more precise and accurate the referral is, the more targeted and meaningful the evaluation. The horizontal arrow at the top of Figure 3 shows that a comprehensive evaluation is dependent on the nature and type of data gathered, from background information to behavioral observations. In general, the information in Figure 3 moves from general (information at the left of the figure) to specific (information at the right of the figure), which reflects the drilling down concept mentioned above. As shown in the figure, Emma’s reason for referral needs to be more specific than “difficulties in reading” or even “difficulties in basic reading skills.” When the nature of the problem is described more precisely, as it is in the last column associated with the reason for referral in Figure 3, then that information (e.g., “knows sounds only if letters are spoken”) can be filtered through what is known about SLD, the etiology of these types of difficulties, the cognitive correlates and associated impairments related to such difficulties, and so forth. In other words, the reason for referral assists with generating hypotheses, which in turn help shape the selection of instruments that will comprise the evaluation and ultimately yield data that are useful for diagnostic and intervention purposes. The reason for referral should also inform analysis of the types of errors made by the student, in this case Emma, the reasons for those errors, and whether or not the same or similar errors are made on one or more tests or items—a cross-validation process.

Etiology of Presenting Difficulties

There are numerous factors that contribute to the development of learning disabilities, rather than a single known cause. SLD is a neurodevelopmental disorder with a biological basis resulting in cognitive processing deficits; academic skills deficits are considered the behavioral sign or manifestation of the disorder. As reported in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association [APA], 2013), genetic, epigenetic, and environmental factors interact to impair the brain’s ability to perceive and process information accurately and efficiently. Table 2 provides a summary of the etiology of SLD, specifically as it applies to word reading accuracy.


Table 2. Etiology of Difficulties in Word Reading Accuracy.


Based on the information in this table and Emma’s background information, it seems clear that there may be a genetic component to her word reading difficulties. Reading difficulties are present on the maternal side of her family, and her older sister was diagnosed with dyslexia. Environmental factors do not appear to be contributing to her reading difficulties. Instead, her environment has always been rich with language exposure and usage as well as educational materials, factors that typically facilitate the acquisition and development of reading.

Assessment

Assessment is driven by knowledge of the referral, background information, etiology of a suspected disorder, and the associated impairments and cognitive correlates known to be related to the presenting problems and suspected disorder. Typically, assessments are organized and interpreted within the context of an overarching theory, such as CHC theory or neuropsychological theory. Table 3 provides a summary of impairments that are associated with word reading accuracy and its cognitive correlates. This table provides information from the CHC and neuropsychological literature. The following information guided test selection for Emma’s evaluation:


Table 3. Difficulties in Word Reading Accuracy: Associated Impairments and Cognitive Correlates.


  • CHC theory and knowledge of cognitive-achievement relations, specifically related to reading

  • Records review: Past evaluation completed by the district that documented typical functioning in many areas (e.g., fluid reasoning, math, oral language, listening comprehension), indicating areas that did not need to be reevaluated

  • Cognitive processing tests that are related to word reading accuracy: phonological awareness (auditory processing, phonetic coding), orthographic awareness, memory (auditory memory span, short-term working memory in both visual and auditory modalities), speed of lexical access, processing speed, and cognitive efficiency

  • Achievement tests that measure basic reading skills, including decoding and fluency, spelling, and phoneme-grapheme knowledge, as well as informal assessment of letter-sound knowledge (via letter-identification and sound-identification tasks)

  • Achievement tests that allow for a systematic review of errors and error patterns

A review of the results of Emma’s comprehensive evaluation revealed that she has associated impairments in cognitive processes that are related to word reading accuracy, as well as weaknesses or deficits in other cognitive areas that are correlated with word reading. Key findings from Emma’s evaluation are summarized below (SS = standard score):

  • Phonological awareness: Scores were variable and ranged from below average to average (SS = 81-92).

  • Rapid naming: Variable scores on speed of lexical access with pronounced deficits in visual as compared with auditory modalities; KTEA-3 Letter Naming Facility (SS = 76).

  • Processing speed: Consistent weaknesses in processing speed tasks, with lowest performance on perceptual speed measures containing orthographic units as stimuli (SS = 46-77).

  • Memory: Variable performance, with working memory lower than memory span (SS = 77 vs. SS = 95, respectively).

  • Orthographic awareness: Consistently low performance (SS = 46-72), including KTEA-3 Orthographic Processing Composite (SS = 68).

  • KTEA-3 Academic performance: Decoding (SS = 74), Reading Fluency (SS = 75), Sound-Symbol (SS = 84), Reading Understanding (SS = 75), and Written Language (SS = 80).

  • At least average cognitive abilities and academic skills: FSIQ = 94, Fluid Reasoning (SS = 110-115), Language Skills (SS = 99-124), Broad Math (SS = 92), Listening Comprehension (SS = 120), Oral Expression (SS = 116).
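Comparisons such as the one above between working memory (SS = 77) and memory span (SS = 95) typically rest on the standard critical-difference logic for deciding whether two scores differ reliably. A minimal sketch of that arithmetic follows; the subtest reliabilities used here are hypothetical placeholders, not published KTEA-3 or cognitive-battery values.

```python
import math

# Sketch of the conventional critical-difference check for comparing
# two standard scores (M = 100, SD = 15). The reliabilities (r1, r2)
# below are hypothetical, not values from any specific test manual.

def sem(sd, reliability):
    """Standard error of measurement for a score scale."""
    return sd * math.sqrt(1 - reliability)

def critical_difference(r1, r2, sd=15.0, z=1.96):
    """Smallest score difference reliable at roughly 95% confidence."""
    return z * math.sqrt(sem(sd, r1) ** 2 + sem(sd, r2) ** 2)

cd = critical_difference(r1=0.90, r2=0.90)
print(round(cd, 1))       # 13.1
print(abs(95 - 77) > cd)  # True: the 18-point gap exceeds the threshold
```

Under these assumed reliabilities, Emma’s 18-point working memory versus memory span gap would exceed the critical difference, consistent with treating the two scores as meaningfully discrepant.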

Figure 3 shows that Emma’s evaluator found support for known cognitive correlates to word reading accuracy as well as associated impairments. Based on the results of the evaluation, it appears that Emma’s inability to decode words is mediated by weaknesses in orthographic processing, working memory, and speed of lexical access. She also appeared to have difficulty on some aspects of phonological processing. At this point in the evaluation process, clinical impressions begin to crystallize.

Clinical Impressions

Based on the data gathered to this point, it seems likely that Emma has an SLD in the area of reading. However, such a diagnosis or classification is not necessarily informative for intervention planning. After all, Emma’s parents and teachers all suspected that she had a learning disability and offered supports in school and at home that were designed to assist students with reading disabilities. These support services were not effective. Figure 3 shows that the evaluator is considering a diagnosis of dyslexia. More information is necessary, however, to determine the specific nature or subtype of dyslexia. As learning disorders or disabilities are more discretely defined, there is greater correspondence between the diagnosis and the intervention. Table 4 provides a description of different subtypes of dyslexia that was accumulated from the school neuropsychology literature (e.g., Feifer, 2011, 2016). It is important to filter test results through what is known about learning disabilities and dyslexia to ensure an accurate diagnosis. Drilling down to the item level via KTEA-3 error analysis is critical to this process.


Table 4. Description of Reading Disability Subtypes.


Examine Error Patterns

Examination of errors and error patterns assists in identifying the dyslexia subtype that is most consistent with Emma’s learning difficulties and aids in understanding which cognitive processes are interfering most with Emma’s reading difficulties. Based on the examiner’s observations during KTEA-3 administration and an error analysis of the KTEA-3 reading subtests, the following information was generated:

  • On real word and nonsense word decoding tasks (i.e., KTEA-3 Letter & Word Recognition and Nonsense Word Decoding, respectively), there were few, if any, words that Emma recognized automatically.

  • Emma exhibited several behaviors that appeared to be consistent with cognitive processing deficits, particularly orthographic processing. For example, Emma often looked at the first letter of a word and then guessed the word. She also confused left–right directionality (e.g., reading “on” as “no”) and misread words based on letter orientation (e.g., confusing “b” and “d” and “p” and “q”).

  • Based on the KTEA-3 Qualitative Observations Form, these behaviors are typically associated with orthographic processing weaknesses. Emma also demonstrated some behaviors suggestive of a phonological processing weakness (e.g., substituting visually similar words for target words, such as reading “every” as “very”).

  • Emma’s pattern of KTEA-3 errors is most consistent with the Consonant Pattern Knowledge error factor score discussed in O’Brien et al. (2017).

To evaluate whether error analysis results have ecological validity (i.e., whether the same errors and error patterns manifest in real world performances outside of the standardized testing setting), other data were examined. Based on the following observations, it is clear that the results of KTEA-3 error analysis have ecological validity:

  • Classroom Performance: Emma evidenced letter confusion when reading aloud (e.g., reading “bat” as “pat”).

  • Work Sample Review: Emma’s written work contained frequent spelling errors including letter reversals (e.g., she wrote “book” as “dook” and “graded” as “gradeb”) and phonetic errors (“people” spelled as “pepul”). The nature of Emma’s errors was generally consistent with incorrect orthographic representations (e.g., writing “night light” as “nite lite”).

  • Teacher Observations: A reading inventory, completed by Emma’s teacher, contained a number of oral reading errors as well as the comment, “Emma has inconsistent letter-identification and a tendency to ‘guess’ at words. Needs to pay more careful attention to her ‘bs’ and ‘ds!’”

  • Reading Tutor Observations: An interview with Emma’s tutor suggested that Emma tends to rely on picture cues, when available, to decode (e.g., instead of reading, “The boy was catching the ball,” Emma might say, after looking at the picture, “The boy played catch”). While she relies on visual information at times, and might approximate words (e.g., reading “hose” as “house”), she most often will isolate an initial letter and create her own word (e.g., reading “part” as “people”).

Note that Figure 3 includes double-headed arrows in the first column between “Assessment,” “Clinical Impressions,” “Examine Error Patterns,” and “Clinical Impressions Informed by Error Patterns.” These double-headed arrows mean that the process of linking assessment data to a diagnosis does not necessarily occur in a linear fashion. There is often a need to “circle back” to previous data or gather additional data to inform diagnosis (and intervention). Some examples of circling back from error analysis on the KTEA-3 to cognitive test performance to inform diagnosis are as follows:

  • Emma’s KTEA-3 error analysis was suggestive of orthographic and, to a lesser extent, phonological weaknesses. It is clear from cognitive test data that Emma demonstrated weaknesses in both areas. A closer analysis of performance on these cognitive processing measures provides further information regarding the nature of these weaknesses. In terms of orthographic awareness, on an 84-item letter-matching task, Emma completed approximately 30% of the items (25 items in 3 min), identifying 21 correct letter pairs. Errors included identifying “p” and “q” as similar letters and skipping a row of items on two occasions. Similarly, on a number matching task, Emma exhibited difficulty, noting initially that she could not find any matching numbers and, later, began selecting numbers that were visually similar, but not identical (e.g., 38 and 83, 17 and 71). As the number of digits increased from 2 to 3, Emma experienced more difficulty. Overall, these weaknesses on cognitive processing tests and the specific nature of her errors (e.g., misidentifying letters, transposing orthographic stimuli) are consistent with the types of errors that Emma demonstrated on the KTEA-3 tasks involving encoding and decoding of real and nonsense words.

  • On phonological processing tasks, Emma reliably produced words that began with a specific sound and was able to rhyme words and perform simple sound matching tasks.

  • On phonological processing tasks requiring deleting and segmenting sounds, Emma had considerable difficulty. For example, when asked to provide a word that contained the medial /g/ sound, Emma provided a word that began with the sound. When asked to provide a word that ended with the /r/ sound, Emma provided a word that contained a medial /r/ sound. Emma expressed that the task was difficult. Emma had particular difficulty with sound substitution (e.g., say the word “ray,” now say the word but change the /r/ to a /b/ sound—for example, “bay”). She also evidenced more difficulty when items moved from monosyllabic to polysyllabic nonwords, and struggled with segmenting sounds. On a phoneme deletion task, Emma had difficulty isolating the exact phoneme(s) to delete (e.g., if asked to say the word “sand” without the “/s/” sound, she might omit the final sound as well saying, “an”).

Overall, Emma’s phonological processing patterns of errors demonstrate that her Basic Phonological Awareness is adequately developed but her Advanced Phonological Processing is deficient—a finding consistent with the phonological processing factors that were found in the KTEA-3 standardization sample (Choi et al., 2017). While Choi et al. speculated that these basic and advanced phonological processing factors may not indicate two separate error categories, but rather difficulty level, the practitioner must make this determination based on observations during test administration along with the results of performance on other cognitive processing tests, as is demonstrated next.

Clinical Impressions Informed by Error Patterns

Emma’s error patterns suggest that she has poor advanced phonological processing skills and poor orthographic awareness, both of which interfere with word reading accuracy. However, Emma’s inability to perform some advanced phonological processing tasks was mediated by poor working memory, generally low cognitive efficiency (e.g., slow processing speed), and difficulty retrieving information from long-term stores quickly (i.e., speed of lexical access). When tested informally by the examiner, Emma was able to delete and segment sounds with greater accuracy when provided with visual information/cues, indicating that when working memory demands are minimized, Emma is able to demonstrate more advanced phonological processing skills.

Following the lightly shaded arrows in Figure 3, the data suggest that the results of Emma’s comprehensive evaluation and KTEA-3 error analysis support a diagnosis of the orthographically based dyslexia subtype known as surface dyslexia. The lightly shaded arrows in Figure 3 also reflect the notion that a convergence of indicators is necessary to support a diagnosis. That is, no one data source is sufficient to render an SLD diagnosis. Emma has a pattern of cognitive strengths (top oval in Figure 1), including Gc, Gf, Gv, and oral language, that ought to facilitate word reading accuracy, particularly high Gc and oral language (see Liu et al., 2017). Therefore, her deficit in word reading accuracy is unexpected. However, there is evidence to demonstrate that her difficulty in acquiring basic reading skills (bottom right oval in Figure 1) is related to deficits in orthographic processing, working memory, and speed of lexical access (bottom left oval in Figure 1). With the support of referral and background information, etiology of word reading accuracy difficulties, information about the behavioral manifestations of dyslexia subtypes, and ecological validity for test findings, converging data sources are indeed available to demonstrate that Emma’s PSW is consistent with a diagnosis of SLD.

Interestingly, the case of Emma demonstrates that a diagnosis of SLD can be made based on a systematic, theory- and research-based approach to examining the results of a comprehensive evaluation, including error analysis, and that a formula is not necessary to render a diagnosis. A diagnosis of SLD is a clinical judgment made by a private independent psychologist or a multidisciplinary team based on a convergence of data sources that appear consistent with the SLD construct. However, due to federal statutory and regulatory requirements, a classification of SLD is made in the schools following one of three methods, as stated earlier; these methods necessitate quantification for purposes of consistency in identification and accountability. Truth be told, there will always be instances when scores look like SLD but SLD is not present (false positives), and when scores do not look like SLD but SLD is present (false negatives). The only viable means we see of reducing false positives and false negatives at this time is to consider the pattern within the context of multiple data sources, gathered via multiple methods. When all necessary data sources are available (as summarized in Figure 3), the results of PSW analysis will either converge to support SLD or diverge in different ways and fail to support SLD; so goes the process of SLD identification. The data sources gathered in the case of Emma converged in a manner that supported a diagnosis of SLD and, more specifically, surface dyslexia. Given the specific nature of Emma’s diagnosis, which was informed to a large extent by the results of her KTEA-3 performance errors, the following recommendations were made.

  • Build letter recognition (alphabet matching tasks/games)

  • Target instruction in sound segmentation/phoneme deletion (using visuals to reduce demands on working memory)

  • Use audiobooks to access grade-level texts (with visual highlighting features to allow Emma to track words as they are read)

  • Use paired reading activities

  • Teach comprehension monitoring strategies (e.g., click-or-clunk, asking whether the sentence makes sense as read)

  • Preview texts to review unfamiliar words

  • Implement My Virtual Reading Coach (the program can be used at home or at school)

  • Use the Lindamood-Bell Seeing Stars program (teaches orthographic processing)

We draw the following conclusions based on several studies included in this special issue, the extant research base on the diagnosis of and interventions for SLD, and the case study excerpts referenced in our commentary: (a) error analysis is an important part of the assessment process, and having norm-referenced error factor scores will prove invaluable; (b) the KTEA-3 has become more valuable as an assessment tool because its authors have systematically increased the use and study of errors on its subtests; (c) the studies in this special issue were well conducted and provide a psychometrically sound foundation for an error analysis research paradigm; (d) error analysis of the KTEA-3 reading subtests added to an understanding of the case study highlighted in this commentary and provided information that informed diagnosis and intervention; and (e) practitioners should be cautious about using error factor scores for differential diagnosis because the research is too preliminary to support such practice, as the studies in this issue found only limited support for differential diagnosis.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
Christo, C., D’Incau, B. J., Ponzuric, J. J. (2016). Response to McGill and Busse, “When theory trumps science: A critique of the PSW model for SLD identification.” Contemporary School Psychology, 10-15.
Dehn, M. J. (2013). Enhancing SLD diagnoses through the identification of psychological processing deficits. The Australian Educational and Developmental Psychologist, 30(2), 119-139.
Eckert, M. A., Leonard, C. M., Richards, T. L., Aylward, E. H., Thomson, J., Berninger, V. W. (2003). Anatomical correlates of dyslexia: Frontal and cerebellar findings. Brain, 126, 482-494.
Feifer, S. (2011). How SLD manifests in reading. In Flanagan, D. P., Alfonso, V. C. (Eds.), Essentials of specific learning disability identification (pp. 21-41). Hoboken, NJ: John Wiley.
Feifer, S. (2016). How SLD manifests in reading. In Flanagan, D. P., Alfonso, V. C. (Eds.), Essentials of specific learning disability identification (2nd ed.). Hoboken, NJ: John Wiley. Manuscript submitted for publication.
Flanagan, D. P., Alfonso, V. C. (in press). Essentials of WISC-V assessment. Hoboken, NJ: John Wiley.
Flanagan, D. P., Alfonso, V. C. (Eds.). (2016). Essentials of specific learning disability identification (2nd ed.). Hoboken, NJ: John Wiley. Manuscript submitted for publication.
Flanagan, D. P., Alfonso, V. C. (Eds.) (2011). Essentials of specific learning disability identification. Hoboken, NJ: John Wiley.
Flanagan, D. P., Alfonso, V. C., Mascolo, J. T., McDonough, E. M. (2016). A CHC-based operational definition of SLD: Integrating multiple data sources and multiple data gathering methods. In Flanagan, D. P., Alfonso, V. C. (Eds.), Essentials of specific learning disability identification (2nd ed.). Hoboken, NJ: John Wiley & Sons. Manuscript submitted for publication.
Flanagan, D. P., Alfonso, V. C., Ortiz, S. O., Dynda, A. M. (2010). Best practices in cognitive assessment for school neuropsychological evaluations. In Miller, D. C. (Ed.), Best practices in school neuropsychology: Guidelines for effective practice, assessment, and evidence-based intervention (pp. 101-140). Hoboken, NJ: John Wiley.
Flanagan, D. P., Fiorello, C., Ortiz, S. O. (2010). Enhancing practice through application of CHC theory and research: A “third method” approach to SLD identification. Psychology in the Schools, 40, 739-760.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C. (2013). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: John Wiley.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., Mascolo, J. (2006). The achievement test desk reference (ATDR) – Second edition: A guide to learning disability identification. New York, NY: John Wiley.
Fletcher, J. M., Simos, P. G., Papanicolaou, A. C., Denton, C. (2004). Neuroimaging in reading research. In Duke, N., Mallette, M. (Eds.), Literacy research methods (pp. 252-286). New York, NY: Guilford Press.
Grigorenko, E. L. (2001). Developmental dyslexia: An update on genes, brains, and environments. Journal of Child Psychology and Psychiatry, 42, 91-125.
Grigorenko, E. L. (2005). A conservative meta-analysis of linkage and linkage-association studies of developmental dyslexia. Scientific Studies of Reading, 9, 285-316.
Groce, N., Challenger, E., Berman-Bieler, R., Farkas, A., Yilmaz, N., Schultink, W., . . . Kerac, M. (2014). Malnutrition and disability: Unexplored opportunities for collaboration. Paediatrics and International Child Health, 34(4), 308-314.
Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., . . . Dumont, R. (2010). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33(3), 223-236.
Hale, J. B., Fiorello, C. A. (2004). School neuropsychology: A practitioner’s handbook. New York, NY: Guilford Press.
Individuals with Disabilities Education Improvement Act of 2004, 20 U.S.C. §§ 1401 et seq. (2004).
Kaufman, A. S., Kaufman, N. L. (2014). Kaufman Test of Educational Achievement (3rd ed.). Bloomington, MN: Pearson.
Kaufman, A. S., Raiford, S. E., Coalson, D. L. (2016). Intelligent testing with the WISC-V. Hoboken, NJ: John Wiley.
Kavale, K. A., Forness, S. R. (2000). What definitions of learning disability say and don’t say: A critical analysis. Journal of Learning Disabilities, 33, 239-256.
Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., Thibodaux, L. (2016). Classification agreement analysis of cross-battery assessment in the identification of specific learning disorders in children and youth. International Journal of School and Educational Psychology, 62, 829-843.
Lezak, M. D., Howieson, D. B., Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York, NY: Oxford University Press.
McCloskey, G. (2009a). Clinical applications I: A neuropsychological approach to interpretation of the WAIS-IV and the use of the WAIS-IV in learning disability assessments. In Lichtenberger, E. O., Kaufman, A. S. (Eds.), Essentials of WAIS-IV assessment (pp. 208-244). Hoboken, NJ: John Wiley.
McCloskey, G. (2009b). The WISC-IV integrated. In Flanagan, D. P., Kaufman, A. S. (Eds.), Essentials of WISC-IV assessment (2nd ed., pp. 310-467). Hoboken, NJ: John Wiley.
McCloskey, G., Maerlender, A. (2005). The WISC-IV integrated. In Prifitera, A., Saklofske, D. H., Weiss, L. G. (Eds.), WISC-IV clinical use and interpretation: Scientist-practitioner perspectives (pp. 102-149). San Diego, CA: Academic Press.
McCloskey, G., Slonim, J., Hartz, E. (2016). Interpreting the WISC-V using George McCloskey’s neuropsychologically oriented process approach to psychoeducational evaluations. In Kaufman, A. S., Raiford, S. E., Coalson, D. L. (Eds.), Intelligent testing with the WISC-V (pp. 493-547). Hoboken, NJ: John Wiley.
McCloskey, G., Slonim, J., Whitaker, R., Kaufman, S., Nagoshi, N. (in press). A neuropsychological approach to interpretation of the WISC-V. In Flanagan, D. P., Alfonso, V. C. (Eds.), Essentials of WISC-V assessment (pp. 231-270). Hoboken, NJ: John Wiley.
McDonough, E. M., Flanagan, D. P. (2016). Use of the Woodcock-Johnson IV in the identification of specific learning disabilities in school-age children. In Flanagan, D. P., Alfonso, V. C. (Eds.), WJ IV clinical use and interpretation: Scientist-practitioner perspectives (pp. 211-252). Burlington, MA: Elsevier.
McDonough, E. M., Flanagan, D. P., Sy, M., Alfonso, V. C. (in press). Specific learning disorder. In Goldstein, S., DeVries, M. (Eds.), Handbook of DSM-5 disorders in children. New York, NY: Springer.
McGill, R. J., Busse, R. T. (2016). When theory trumps science: A critique of the PSW model for SLD identification. Contemporary School Psychology, 1-9.
Melby-Lervåg, M., Lyster, S.-A. H., Hulme, C. (2012). Phonological skills and their role in learning to read: A meta-analytic review. Psychological Bulletin, 138, 322-352.
Miller, D. C. (2013). Essentials of school neuropsychological assessment (2nd ed.). Hoboken, NJ: John Wiley.
Pennington, B. F., Olson, R. K. (2005). Genetics of dyslexia. In Snowling, M. J., Hulme, C. (Eds.), The science of reading: A handbook (pp. 453-472). Oxford, UK: Blackwell.
Petrill, S. A., Deater-Deckard, K., Thompson, L. A., DeThorne, L. S., Schatschneider, C. (2006). Genetic and shared environmental effects of serial naming and phonological awareness on early reading outcomes. Journal of Educational Psychology, 98, 112-121.
Resnick, M. B., Gueorguieva, R. V., Carter, R. L., Ariet, M., Sun, Y., Roth, J., . . . Mahan, C. S. (1999). The impact of low birth weight, perinatal conditions, and sociodemographic factors on educational outcome in kindergarten. Pediatrics, 104(6), 1-10.
Reynolds, C. R., Fletcher-Janzen, E. (2009). Handbook of clinical child neuropsychology. New York, NY: Springer.
Richlan, F. (2012). Developmental dyslexia: Dysfunction of a left hemisphere reading network. Frontiers in Human Neuroscience, 6, 120.
Richlan, F., Kronbichler, M., Wimmer, H. (2009). Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies. Human Brain Mapping, 30, 3299-3308.
Saklofske, D. H., Reynolds, C. R., Schwean, V. L. (2013). The Oxford handbook of child psychological assessment. New York, NY: Oxford University Press.
Sattler, J. M., Dumont, R., Coalson, D. L. (2016). Assessment of children: WISC-V and WPPSI-IV. San Diego, CA: Jerome M. Sattler.
Schatschneider, C., Fletcher, J. M., Francis, D. J., Carlson, C. D., Foorman, B. R. (2004). Kindergarten prediction of reading skills: A longitudinal comparative analysis. Journal of Educational Psychology, 96, 265-282.
Shaywitz, S. E., Pugh, K. R., Jenner, A. R., Fulbright, R. K., Fletcher, J. M., Gore, J. C. (2000). The neurobiology of reading and reading disability (dyslexia). In Kamil, M. L., Mosenthal, P. B., Pearson, P. D., Barr, R. (Eds.), Handbook of reading research (Vol. 3, pp. 229-249). Mahwah, NJ: Erlbaum.
Silani, L. S., Frith, U., Demonet, J. R., Fazio, F., Perani, D., Price, C., . . . Paulesu, E. (2005). Brain abnormalities underlying altered activation in dyslexia: A voxel-based morphometry study. Brain, 128, 2453-2461.
Skeide, M. A., Kirsten, H., Kraft, I., Schaadt, G., Muller, B., Neef, N., . . . Friederici, A. D. (2015). Genetic dyslexia risk variant is related to neural connectivity patterns underlying phonological awareness in children. NeuroImage, 118, 414-421.
Strauss, E., Sherman, E. M. S., Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). New York, NY: Oxford University Press.
Stuebing, K. K., Fletcher, J. M., Branum-Martin, L., Francis, D. J. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41, 3-22.
Swanson, H. L., Harris, K. R., Graham, S. (2003). Handbook of learning disabilities (2nd ed.). New York, NY: Guilford Press.
Wadsworth, S. J., Olson, R. K., Pennington, B. F., DeFries, J. C. (2000). Differential genetic etiology of reading disability as a function of IQ. Journal of Learning Disabilities, 33, 192-199.
Wagner, R. K., Torgesen, J. K., Rashotte, C. A., Hecht, S. A. (1997). Changing relations between phonological processing abilities and word-level reading as children develop from beginning to skilled readers: A 5-year longitudinal study. Developmental Psychology, 33, 468-479.
Wendling, B. J., Mather, N. (2009). Essentials of evidence-based academic interventions. Hoboken, NJ: John Wiley.
Willcutt, E. G., Petrill, S. A., Wu, S., Boada, R., DeFries, J. C., Olson, R. K., Pennington, B. F. (2013). Comorbidity between reading disability and math disability: Concurrent psychopathology, functional impairment, and neuropsychological functioning. Journal of Learning Disabilities, 46, 500-516.
