Despite significant advancements in supporting students’ social, emotional, and behavioral health in recent years, progress remains hindered by limited formative assessment options. The focus of this special issue is on applications of Direct Behavior Rating (DBR), a hybrid assessment method that combines procedures from direct observation and teacher rating scales. Three articles in the special issue use multiple baseline designs to demonstrate the utility and effectiveness of DBR single-item scales, and a fourth article describes a multiple-item DBR applied to social behaviors. Expert commentaries provide perspectives on the current literature related to the assessment of social, emotional, and behavioral constructs in school settings and also chart a future course for continued study.
With each year that passes, seemingly greater attention is focused on the critical need to effectively support students’ social, emotional, and behavioral (SEB) health in school settings. This attention is welcome and long overdue; for years, educators have identified their students’ development of appropriate behavioral and social skills as a primary concern (Coalition for Psychology in Schools and Education, 2006; Public Agenda, 2003). For decades, calls have been made to provide greater supports for students in this regard (e.g., Weist, 1997). As such, this call to action is certainly not new, but it has gained momentum with the advancement of multitiered systems of support (MTSS) frameworks and the inclusion of these domains in federal policy (e.g., the Every Student Succeeds Act). In particular, the development of MTSS, such as Response to Intervention (RtI), has shifted our collective thinking toward providing student supports in a proactive manner.
This is an exciting time, with significant opportunity to change the status quo in supporting students’ SEB health. This era’s increased focus on evidence-based practice and data-based decision making in schools has led to important advancements and innovations in supporting SEB health. Although in many ways it seems we may have finally reached the golden age, in which student SEB health is at long last receiving the attention it so desperately needs, significant barriers remain to fully realizing that ideal.
Despite substantial advancements in the development of proactive prevention efforts such as universal screening, and the identification of a growing number of evidence-based interventions, progress in evaluating student response to intervention remains hindered. That is, there is a substantial gap in the availability of assessments with reasonable utility for monitoring response to intervention in SEB domains in an ongoing fashion (i.e., progress monitoring). This is problematic, as the success of evidence-based intervention efforts hinges on the ability to accurately determine student response to intervention. The gap is particularly salient with regard to important indicators of behavioral outcomes, as the variability inherent in most behaviors necessitates assessment at more frequent intervals (i.e., daily) than academic outcomes, which are typically assessed across longer time periods. The National Center for Intensive Intervention, for example, has reviewed hundreds of academic progress monitoring tools but only 17 behavioral progress monitoring tools (http://www.intensiveintervention.org). One reason for this disconnect may be that many behavioral progress monitoring tools require teacher input daily or multiple times per day, making assessment development more labor intensive from an application and evaluation standpoint. Assessment burden must also be balanced with the need to obtain accurate and ongoing data to inform intervention planning, evaluation, and modification. Thus, to provide educators and researchers with the tools required to adequately monitor progress within interventions targeting social and behavioral outcomes, researchers must work to demonstrate the reliability, validity, and feasibility/utility of assessment approaches used to monitor progress on an ongoing basis.
Recently, efforts have been undertaken in the development of an assessment method called Direct Behavior Rating (DBR). DBRs are considered a hybrid assessment method, in that they combine the strengths of direct observations with the efficiency of rating scales (Chafouleas, 2011). That is, DBRs involve a brief rating of specific target behavior(s) following a prespecified observation period. The promise of this assessment method lies in its efficiency (ratings generally take only a few seconds), its flexibility (it can be tailored to different contexts and different behaviors), and its emerging research base. Different types of DBRs have been developed, including single-item scales (DBR-SIS), which involve rating a single global target behavior (e.g., academic engagement), and multiple-item scales (DBR-MIS), which involve rating multiple discrete behaviors (e.g., staying seated, raising a hand, and completing assigned tasks). Recently, a primary thrust of DBR research has involved screening (e.g., Chafouleas, Kilgus, & Hernandez, 2009; Miller et al., 2015), yet DBR originated as a progress monitoring tool. We present this special issue as a timely overview of recent research on DBR within formative assessment applications.
In the present special issue, we therefore aim to accomplish several goals. First, this series is intended to meet the mission of Assessment for Effective Intervention to “provide critical analysis of practitioner-developed assessment procedures,” “analyze relationships between existing instruments,” and “introduce innovative assessment strategies.” To this end, we present four applied research studies focused on the validation of DBR methods for progress monitoring purposes. The first three studies (Fabiano, Pyle, Kelty, & Parham, 2017; Miller, Crovello, & Chafouleas, 2017; Sims, Riley-Tillman, & Cohen, 2017) pertain to the validation of DBR-SIS, while the fourth (Daniels, Volpe, Briesch, & Gadow, in press) pertains to the validation of DBR-MIS. With regard to the former, across three sites we implemented parallel multiple baseline design studies aligned with contemporary What Works Clearinghouse design standards for single-case research (Kratochwill et al., 2010). The purpose of these studies was to evaluate the psychometric properties of DBR-SIS for monitoring progress within a tiered intervention framework. Following the presentation of these studies, a research-to-practice paper by Miller, Crovello, Swenson, and Chafouleas (in press) describes the application of DBR-SIS for progress monitoring. Finally, the study by Daniels et al. (in press) provides an important contribution to the development and evaluation of DBR-MIS for progress monitoring. In particular, few applied studies have examined differences between individualized DBR-MIS and factor-analyzed DBR-MIS.
The special issue culminates with three expert commentaries. The commentary by Maggin and Bruhn (in press) outlines directions to advance the study of evidence-based assessment. The commentary by Owens and Evans (in press) reviews the current status of DBR in formative assessment and explores gaps in our current knowledge base. Finally, the commentary by Riley-Tillman, Miller, and Sims (in press) explores problem-solving behavior, an elusive but critical topic in the advancement of evidence-based practices in schools. Together, the articles in this special issue aim to advance the science and practice of formative assessment using DBR in schools, in the hope of adding to the toolkit of assessments available to promote SEB health in school settings.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
References

Chafouleas, S. M. (2011). Direct Behavior Rating: A review of the issues and research in its development. Education & Treatment of Children, 34, 575–591. doi:10.1353/etc.2011.0034

Chafouleas, S. M., Kilgus, S. P., & Hernandez, P. (2009). Using Direct Behavior Rating (DBR) to screen for school social risk: A preliminary comparison of methods in a kindergarten sample. Assessment for Effective Intervention, 34, 214–223. doi:10.1177/1534508409333547

Coalition for Psychology in Schools and Education. (2006, August). Report on the Teacher Needs Survey. Washington, DC: American Psychological Association, Center for Psychology in Schools and Education.

Daniels, B., Volpe, R. J., Briesch, A. M., & Gadow, K. D. (in press). Dependability and treatment sensitivity of multi-item direct behavior rating scales for interpersonal peer conflict. Assessment for Effective Intervention.

Fabiano, G. A., Pyle, K., Kelty, M. B., & Parham, B. R. (2017). Progress monitoring using Direct Behavior Rating Single Item Scales in a multiple-baseline design study of the daily report card intervention. Assessment for Effective Intervention. Advance online publication. doi:10.1177/1534508417703024

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_scd.pdf

Maggin, D., & Bruhn, A. (in press). Evidence-based assessment and single-case research: A commentary for the special issue on Direct Behavior Ratings. Assessment for Effective Intervention.

Miller, F. G., Cohen, D., Chafouleas, S. M., Riley-Tillman, T. C., Welsh, M. E., & Fabiano, G. A. (2015). A comparison of measures to screen for social, emotional, and behavioral risk. School Psychology Quarterly, 30, 184–196. doi:10.1037/spq0000085

Miller, F. G., Crovello, N., & Chafouleas, S. M. (2017). Progress monitoring the effects of daily report cards across elementary and secondary settings using Direct Behavior Ratings—Single Item Scales. Assessment for Effective Intervention. Advance online publication. doi:10.1177/1534508417691019

Miller, F. G., Crovello, N., Swenson, N., & Chafouleas, S. M. (in press). Bridging the gap: Direct Behavior Rating - Single Item Scales. Assessment for Effective Intervention.

Owens, J., & Evans, S. (in press). Progress monitoring change in children’s social, emotional, and behavioral functioning: Advancing the state of the science. Assessment for Effective Intervention.

Public Agenda. (2003). Where we are now: 12 things you need to know about public opinion and public schools. New York, NY: Author.

Riley-Tillman, T. C., Miller, F. G., & Sims, W. (in press). Overlooked and understudied: Promoting problem-solving behavior in schools. Assessment for Effective Intervention.

Sims, W., Riley-Tillman, T. C., & Cohen, D. (2017). Formative assessment using Direct Behavior Ratings: Evaluating intervention effects of daily behavior report cards. Assessment for Effective Intervention. Advance online publication. doi:10.1177/1534508417708183

Weist, M. D. (1997). Expanded school mental health services: A national movement in progress. In T. H. Ollendick & R. J. Prinz (Eds.), Advances in clinical child psychology (Vol. 19, pp. 319–352). New York, NY: Plenum Press.

