Despite the growing use of CBM Maze in universal screening and research, little information is available regarding the number of CBM Maze probes needed for reliable decisions. The current study extends existing research on the technical adequacy of CBM Maze by investigating the number of probes and assessment durations (1-3 min) needed for reliable relative (e.g., rank-ordering students) and absolute (e.g., comparing a specific score to a cutoff) decisions. Nine CBM Maze probes were administered to 272 students in third through fifth grades. Results suggested that the number of probes needed for reliable relative and absolute decisions varied by grade, with assessments in fifth grade exhibiting the highest reliability (at least two probes needed for both types of decisions). In addition, gains in reliability diminished as assessment duration increased. Implications of the findings for universal screening and future research are discussed.
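The reliability questions in the abstract are the standard decision-study (D-study) computations from generalizability theory: a relative (generalizability) coefficient for rank-ordering students and an absolute (dependability, Φ) coefficient for cutoff-based decisions, each as a function of the number of probes averaged over. The sketch below illustrates those formulas for a persons-by-probes design; the variance components used are hypothetical placeholders, not values from the study (in practice they would be estimated from a mixed-effects model fit to the score matrix).

```python
def g_coefficient(var_p, var_pi, n_probes):
    """Relative (norm-referenced) reliability for a mean over n_probes.

    Only person-by-probe interaction/error variance counts as error,
    because rank ordering is unaffected by probe difficulty shifts.
    """
    return var_p / (var_p + var_pi / n_probes)


def phi_coefficient(var_p, var_i, var_pi, n_probes):
    """Absolute (criterion-referenced) dependability for a mean score.

    Probe (item) main-effect variance also counts as error, because a
    harder probe lowers everyone's score relative to a fixed cutoff.
    """
    return var_p / (var_p + (var_i + var_pi) / n_probes)


def probes_needed(target, coefficient, *components):
    """Smallest number of probes whose coefficient meets the target."""
    n = 1
    while coefficient(*components, n) < target:
        n += 1
    return n


if __name__ == "__main__":
    # Hypothetical variance components (person, probe, person-x-probe).
    var_p, var_i, var_pi = 1.0, 0.5, 1.0
    print(g_coefficient(var_p, var_pi, 3))                    # relative, 3 probes
    print(phi_coefficient(var_p, var_i, var_pi, 3))           # absolute, 3 probes
    print(probes_needed(0.80, g_coefficient, var_p, var_pi))  # probes for G >= .80
```

Because probe main-effect variance adds to the error term only for absolute decisions, Φ can never exceed the relative coefficient at the same number of probes, which is why absolute decisions typically require as many or more probes.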
