Abstract
Time-sampling techniques are popular among school practitioners and researchers alike when conducting observations of student behavior. Despite this popularity, the psychometric properties of the resulting observation data have received little attention, likely because both procedures and targets vary widely across individual coding systems. A systematic literature review was therefore conducted to provide insight into how various procedural decisions may influence the psychometric properties of time-sampling data. Specifically, the influences of the number of target behaviors, time-sampling method, interval length, observation length, and number of observations on the psychometric properties of time-sampling data are explored. Implications for both practice and future research are discussed.