The current study extended previous research on the scale construction of Direct Behavior Rating (DBR) to explore the potential flexibility of DBR to fit various intervention contexts. One hundred ninety-eight undergraduate students viewed the same classroom footage but rated student behavior using one of eight randomly assigned scales, which differed in number of gradients, scale length, and discrete versus continuous format. Descriptively, mean ratings typically fell within the same scale gradient across conditions. Furthermore, generalizability analyses revealed negligible variance attributable to the scale-type facet or to interaction terms involving that facet. Implications for DBR scale construction within the context of intervention-related decision making are presented and discussed.
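The kind of comparison the abstract describes can be sketched as a one-facet variance decomposition: ratings of the same behavior collected under several scale formats, with the question being how much of the total variance the scale-type facet absorbs. The sketch below is purely illustrative and uses synthetic data; the condition count and rater counts are loose analogues of the design (eight scale formats, roughly 198 raters split across them), not the study's actual data or analysis code.

```python
# Illustrative sketch only (synthetic data): decompose rating variance to see
# how much is attributable to the scale-type condition, in the spirit of the
# generalizability analyses the abstract reports.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_conditions = 8          # eight randomly assigned scale formats
n_raters = 25             # ~198 raters divided across conditions

# Same "true" behavior level in every condition; only rater noise differs.
true_level = 6.0
ratings = [true_level + rng.normal(0.0, 1.0, n_raters)
           for _ in range(n_conditions)]

# One-way ANOVA across scale-type conditions.
F, p = f_oneway(*ratings)

# Eta-squared: proportion of total variance attributable to scale type.
grand = np.concatenate(ratings)
ss_total = ((grand - grand.mean()) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in ratings)
eta_sq = ss_between / ss_total
print(f"F = {F:.2f}, p = {p:.3f}, eta^2 = {eta_sq:.3f}")
```

When the underlying behavior level is identical across conditions, the scale-type facet accounts for only a trivial share of the variance, which mirrors the pattern of negligible scale-type variance the study reports. A full G-study would instead estimate crossed variance components (e.g., rater, occasion, and their interactions) rather than a single one-way contrast.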
