Direct Behavior Ratings (DBRs) are behavioral assessment methods that combine the benefits of systematic direct observation and behavior rating scales. That is, DBRs involve the observation of operationally defined target behaviors during a prespecified observation period and the evaluation of those behaviors via brief ratings. In this way, DBR is considered a hybrid assessment method. This article describes the landscape of DBR tools and provides an overview of the procedures and research currently available to support the use of one specific version, DBR–Single Item Scales (DBR-SIS), for progress monitoring within multi-tiered systems of support.

Direct Behavior Rating–Single Item Scales (DBR-SIS).

Collecting data about student progress is integral to collaborative problem solving, accountability, and support delivery in schools. Frequent and repeated collection of progress monitoring data allows educators to evaluate student performance and make decisions about supports that can best fulfill student needs. Even with the use of evidence-based interventions, collecting reliable and valid progress monitoring data is the only way to confirm that students are receiving appropriate and effective services and are making progress toward their goals. There is a need for evidence-based behavioral progress monitoring tools to be used within multi-tiered systems of support (MTSS) to support data-based decision making. To this end, Direct Behavior Ratings (DBRs) are assessment methods that have been effectively used by educators to monitor student progress in response to behavioral interventions and are viable options for measuring student behavioral progress within an MTSS framework.

DBRs refer to a category of assessments that allow for repeated observations and efficient ratings of behavior, which are essential characteristics of any progress monitoring tool within MTSS. DBRs are hybrid assessment methods, in that they combine the strengths of direct observations with the efficiency of rating scales. Different types of DBRs have been developed, including single-item scales (DBR-SIS), which involve rating a single global target behavior (such as academic engagement), and multi-item scales (DBR-MIS), which involve rating multiple discrete behaviors (such as staying seated, raising a hand, and completing assigned tasks). Here, we focus specifically on DBR-SIS, which has been the focus of many of the studies included in the special issue and has been validated for both screening and progress monitoring purposes. Using DBR-SIS, teachers observe students for a prespecified period of time and provide daily ratings of target behaviors on a 0 to 10 scale, indicating the proportion of time the student was engaged in each target behavior. Much of the validation work surrounding DBR-SIS involves a standard form that includes three core behavioral competencies critical to student success: academically engaged, respectful, and disruptive behavior. However, users also have the option to create individualized scales and define additional target behaviors.
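Because the 0 to 10 scale represents the proportion of the observation period occupied by the target behavior, a rating maps directly onto an estimated percentage of time (e.g., a rating of 7 corresponds to roughly 70% of the period). The sketch below illustrates this mapping; the record fields and function names are our own illustrative choices, not part of any official DBR-SIS software.

```python
# Illustrative sketch of a DBR-SIS rating record; field names and the
# helper method are hypothetical, not an official DBR-SIS API.
from dataclasses import dataclass
from datetime import date

@dataclass
class DbrSisRating:
    student: str
    behavior: str      # e.g., "academically engaged" or "disruptive"
    observed_on: date
    rating: int        # DBR-SIS uses a 0-10 scale

    def percent_of_time(self) -> float:
        """Convert the 0-10 rating to the estimated percentage of the
        observation period occupied by the target behavior."""
        if not 0 <= self.rating <= 10:
            raise ValueError("DBR-SIS ratings must fall between 0 and 10")
        return self.rating * 10.0

r = DbrSisRating("Jordan", "disruptive", date(2016, 3, 1), 5)
print(r.percent_of_time())  # 50.0
```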

The first step in using DBR-SIS is to select target behaviors that are of interest for progress monitoring. The standard DBR-SIS form involves the assessment of academically engaged, respectful, and disruptive behavior; a clear operational definition of each target behavior is standardized on the form (see Figure 1). As previously noted, additional target behaviors may be added as desired, but it is important to note that user-developed targets have not been psychometrically evaluated.



Figure 1. DBR-SIS standard form.

Note. DBR-SIS = Direct Behavior Rating–Single Item Scales.

After selecting the target behaviors, the next step is to select an observation period. The length of observation periods can vary widely based on the assessment context and should be selected to best fit the needs of the student and intervention team (Chafouleas, 2011). To that end, observation periods could be as short as 15 min or as long as the entire school day. For example, some teams may only be interested in assessing a student's academic engagement during math class, while others may be interested in examining academic engagement throughout the entire morning before lunch. Regardless of the length selected, it is critical to rate the target behaviors immediately following the end of the observation period to strengthen rating accuracy. As time passes between the observation and the rating, judgments become less accurate and the resulting ratings less reliable.

Before beginning to use DBR-SIS, it is recommended that raters complete an online training module, which is freely available at http://dbrtraining.education.uconn.edu/. The training, which encompasses a brief familiarization of DBR-SIS as well as both guided and independent practice, has been shown to improve rating accuracy (Chafouleas, Kilgus, Riley-Tillman, Jaffery, & Harrison, 2012). Once training is complete and rating logistics have been determined, the rater is ready to collect data.

Suppose Jordan is a second-grade student who exhibits problem behavior during reading class. Jordan's teacher reviews the definition of disruptive behavior on the DBR-SIS standard form and determines that this scale best captures the problem behavior she sees from him in class: he frequently engages in off-topic conversations with his peers during instruction and periodically wanders around the room. She decides to use the disruptive behavior scale on DBR-SIS to generate baseline data estimating the percentage of time that Jordan engages in disruptive behavior during reading class. Jordan's teacher completes the online training module, which describes the process for using DBR-SIS and provides opportunities to practice assigning ratings. She then monitors Jordan during reading instruction each day and rates his disruptive behavior immediately after reading class. After 5 days of baseline data collection, the data indicate that Jordan displays disruptive behavior an average of 50% of the time. Together with the school psychologist, Jordan's teacher decides to implement an intervention aimed at decreasing Jordan's disruptive behavior. After implementing the additional behavior support, Jordan's teacher continues to use DBR-SIS daily to monitor Jordan's response to the intervention. After 2 weeks of implementation, Jordan's teacher and the school psychologist review the data. Since the intervention was put into place, Jordan has been disruptive only 20% of the time on average. Given this decrease in the level of disruptive behavior, Jordan's teacher decides to continue using the intervention.
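The arithmetic behind the vignette, converting each 0 to 10 rating to a percentage and averaging within each phase to compare levels, can be sketched as follows. This is a hypothetical illustration with made-up ratings chosen to reproduce the 50% and 20% means; the function name is ours, not part of DBR-SIS.

```python
# Hypothetical sketch of summarizing DBR-SIS data across phases.
# Each 0-10 rating is converted to a percentage (rating * 10) and
# averaged within a phase; the example ratings are invented to match
# the vignette's 50% baseline and 20% intervention means.

def phase_mean_percent(ratings):
    """Mean percentage of time for a list of 0-10 DBR-SIS ratings."""
    return sum(r * 10 for r in ratings) / len(ratings)

baseline = [5, 6, 4, 5, 5]                      # 5 days of baseline
intervention = [3, 2, 2, 1, 2, 2, 2, 3, 1, 2]   # 2 weeks of intervention

base_mean = phase_mean_percent(baseline)        # 50.0
interv_mean = phase_mean_percent(intervention)  # 20.0

# A drop in the mean level of disruptive behavior across phases
# suggests the student is responding to the intervention.
print(f"Baseline: {base_mean}%  Intervention: {interv_mean}%")
```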

Investigations regarding the technical adequacy of DBR-SIS support the reliability of scores obtained and provide evidence that the scales are sufficiently sensitive to detect changes in behavior (e.g., Chafouleas, Sanetti, Kilgus, & Maggin, 2012; Riley-Tillman, Chafouleas, Sassu, Chanese, & Glazer, 2008). In addition, evidence for concurrent validity and classification accuracy has also been provided (e.g., Miller et al., 2015). Recently, DBR-SIS was reviewed by a panel of experts for the National Center on Intensive Intervention. A clear and concise summary of the existing evidence for use of DBR-SIS as a progress monitoring tool can be found on the National Center on Intensive Intervention website, under the Tools Charts (http://www.intensiveintervention.org/chart/behavioral-progress-monitoring-tools). As a progress monitoring tool, DBR-SIS has been used for a variety of behavioral interventions, ranging from class-wide evaluations of classroom management strategies to individualized interventions such as daily behavior report cards.

DBR-SIS resources are freely available under the library tab of our website at http://dbr.education.uconn.edu, which contains further information about the use of DBR-SIS across assessment, intervention, and communication contexts. The website also contains a link to DBR Connect, a commercially available package that allows electronic recording and monitoring of DBR-SIS data. For readers interested in a comprehensive and contemporary review of DBR, we refer them to the book Direct Behavior Rating: Linking Assessment, Communication, and Intervention (Briesch, Chafouleas, & Riley-Tillman, 2016).

Authors’ Note
Opinions expressed herein do not necessarily reflect the position of the U.S. Department of Education, and such endorsements should not be inferred.

Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Preparation of this article was supported by funding provided by the Institute of Education Sciences, U.S. Department of Education (R324A110017).

Briesch, A. M., Chafouleas, S. M., & Riley-Tillman, T. C. (2016). Direct Behavior Rating (DBR): Linking assessment, communication, and intervention. New York, NY: Guilford Press.
Chafouleas, S. M. (2011). Direct Behavior Rating: A review of the issues and research in its development. Education and Treatment of Children, 34, 575–591. doi:10.1353/etc.2011.0034
Chafouleas, S. M., Kilgus, S. P., Riley-Tillman, T. C., Jaffery, R., & Harrison, S. (2012). Preliminary evaluation of various training components on accuracy of Direct Behavior Ratings. Journal of School Psychology, 50, 317–334. doi:10.1016/j.jsp.2011.11.007
Chafouleas, S. M., Sanetti, L. M. H., Kilgus, S. P., & Maggin, D. M. (2012). Evaluating sensitivity to behavioral change across consultation cases using Direct Behavior Rating Single-Item Scales (DBR-SIS). Exceptional Children, 78, 491–505.
Miller, F. G., Cohen, D., Chafouleas, S. M., Riley-Tillman, T. C., Welsh, M. E., & Fabiano, G. A. (2015). A comparison of measures to screen for social, emotional, and behavioral risk. School Psychology Quarterly, 30, 184–196. doi:10.1037/spq0000085
Riley-Tillman, T. C., Chafouleas, S. M., Sassu, K. A., Chanese, J. M., & Glazer, A. D. (2008). Examining agreement between Direct Behavior Ratings (DBRs) and systematic direct observation data for on-task and disruptive behavior. Journal of Positive Behavior Interventions, 10, 136–143.
