This article presents an evaluation of the first 2 years of a research-based summer learning program in which a local philanthropic organization provided self-selected and developmentally appropriate books to students in low-income and low-resource elementary schools in a large urban district. The evaluation found evidence of a positive effect of participation in the program on the state year-end standardized reading assessment but found no statistically significant effects on the proximal measures of reading achievement in the fall after summer vacation. The article also provides an analysis of the implementation of the program and lessons learned that could be useful to other organizations interested in implementing similar programs.

Learning to read is one of the most fundamental outcomes of education and one of the most important skills for children to learn and master. The ability to read is fundamental to students’ progression through school, to high school graduation, and ultimately to their ability to become contributing members of society. Although improvements have been made over recent years, Baltimore students, much like their peers in other large urban districts, continue to lag behind the nation as a whole in reading achievement.

Summer vacation provides, in the ideal, a time for students to take a break from the rigors of the school year, to relax, and to play. However, it has been well established in the research literature that students’ learning growth can slow, remain unchanged, or even decline during the summer. Especially challenging in urban districts and schools, where socioeconomically disadvantaged students make up most of the public school enrollment, is research showing that these students generally experience greater losses in academic performance during summer break than their more advantaged peers (e.g., Burkam, Ready, Lee, & LoGerfo, 2004; Cooper, Nye, Charlton, Lindsay, & Greathouse, 1996; Downey, von Hippel, & Broh, 2004; Entwisle, Alexander, & Olson, 2000; Heyns, 1987); these summer losses have been implicated as a major contributing factor in socioeconomic achievement gaps (Alexander, Entwisle, & Olson, 2007). The summer represents a potentially dramatic change in context for disadvantaged children as they no longer have the academic, social, and resource supports provided by the school and must rely solely on the supports available in their families and communities (Entwisle, Alexander, & Olson, 2001; Slates, Alexander, Entwisle, & Olson, 2012).

In recognition of and in response to this need, the Abell Foundation, Inc. implemented the Baltimore SummerREADS book distribution program over the course of three summers, from 2011 to 2013. The program was modeled on the successful voluntary summer reading and book distribution programs designed and implemented by James Kim at Harvard University and colleagues (e.g., Kim, 2006; Kim & White, 2008). It provided second- and third-grade students in high-poverty and extremely low-performing Baltimore elementary schools with a set of 12 developmentally appropriate books as well as training (for teachers and parents) in comprehension and reading fluency strategies intended to provide students and families with self-support learning tools for the summer.

This article presents an evaluation of the first 2 years of implementation of the Baltimore SummerREADS program. The purpose of this article is twofold. First, we present analyses that inform the question: “Does the provision of 12 self-selected and developmentally appropriate books to students during the summer show evidence of supporting the reading achievement of students in low-income and low-resource Baltimore elementary schools?” Second, the article provides an analysis of the implementation of the program and lessons learned that could be useful to other organizations interested in implementing similar programs.

Research on Summer Learning Loss

Although many reasons for achievement gaps have been suggested, a substantial literature exists that suggests that a partial explanation of achievement gaps can be found in differences in summer learning opportunities. Research has shown that lower socioeconomic status and minority students experience greater losses in academic achievement than their more affluent and majority status peers during the period of summer vacation (e.g., Downey et al., 2004; Heyns, 1978, 1987). In turn, these seasonal inequalities likely exacerbate achievement gaps between African American and White students in reading and mathematics (Alexander et al., 2007). Cooper et al. (1996), in a meta-analysis of 13 studies on summertime learning published since 1975, concluded that “socioeconomic inequalities are heightened by summer break.”

Doris Entwisle, Karl Alexander, and colleagues investigated seasonal inequalities in learning among Baltimore elementary school children during the late 1980s and early 1990s with the Beginning School Study (BSS; see Alexander, Entwisle, & Olson, 2001; Alexander et al., 2007; Entwisle & Alexander, 1992, 1994). The BSS was designed to be a longitudinal study that followed a cohort of Baltimore public school students from the first grade in the fall of 1982 through high school until the students were about the age of 22 in 1998. These studies found that socioeconomic status and not race was the main driver of academic inequalities in mathematics and reading among Baltimore first and second graders. Furthermore, they concluded from their analyses that schools were not a major source of academic inequality based on socioeconomic status as poorer children saw consistent losses in academic achievement only during the summer season when school was not in session. These losses were related to the widening of observable achievement gaps as more affluent students consistently made gains during the summer, regardless of race.

Findings of summer learning loss led Alexander, Entwisle, and colleagues (Alexander et al., 2001, 2007; Entwisle & Alexander, 1992, 1994) to postulate a “faucet theory” to describe the pattern of seasonal learning between advantaged and disadvantaged students that had been observed. The idea embedded in the theory is that during the school year all students, both advantaged and disadvantaged, have access to the same resources embodied in the school and the process of learning. In the summer, when students are no longer in school, the resource “faucet” of school is effectively turned off. During this time, the only resources available to children for learning growth are those that are embedded in their families and neighborhoods. For example, students learn from their parents through being read to and through activities such as trips to the library and other enrichments (e.g., Chin & Phillips, 2004; Hart & Risley, 1995; Lareau, 2002). Thus, differential summer learning likely occurs when more advantaged children continue to have access to educational resources and learning experiences due to their socioeconomic status and environments in the family and neighborhoods, whereas their less affluent and disadvantaged peers do not have access to the same amount or type of learning resources.

This differential access to educationally meaningful experiences and resources can lead to the observed differences in learning during the summer between advantaged and disadvantaged children. In this way, advantaged children may gain in academic achievement during the nonschool period, whereas disadvantaged children may gain no ground academically or may actually lose ground so that they return to school in the fall at a level lower than when they left in the spring for summer vacation.

Access to Print Materials

A consistent finding in social science research is the positive and statistically significant relationship between the number of books in a child’s home and their academic achievement in reading net of other characteristics (e.g., Jaeger, 2011; Lee & Burkham, 2002). An extensive meta-analysis of interventions that distribute books or facilitate children’s ownership of print materials estimated that these programs had average positive impacts on children’s attitudes toward reading and motivation to read as well as literacy skills and reading achievement (Lindsay, 2010).

Several summer book distribution programs have been evaluated over the last decade using randomized control trials (Allington et al., 2010; Kim, 2006; Kim & White, 2008). The basic premise of these programs, with some variations, is that a partial explanation for summer learning losses among low-income students may be attributed to a lack of access to educational materials (e.g., books) in the home. This lack of resources may then in turn be related to lower summer reading activities and thus summer learning loss in reading. By providing a selection of reading-level appropriate books of high interest to students during the summer, these programs have attempted to fill the hypothesized resource gap in the hopes that it would lead to better learning outcomes over the course of the summer.

Allington and colleagues (2010) conducted a longitudinal experimental study in Florida where they provided a supply of 12 self-selected books to approximately 850 treatment students for 3 years. Control students (approximately 480) did not receive books during the summer. Treatment students were estimated to have statistically higher reading achievement at the end of 3 years on the state-mandated reading assessment (effect size = 0.14). The effect for free and reduced priced lunch (FRL) students was larger (effect size = 0.21). Higher self-reported reading frequency over the summer was found for the treatment group versus the control.

In a separate but related study, the provision of free books during the summer was combined with teacher and parent scaffolding of oral reading and comprehension strategies (Kim & White, 2008) under the assumption that providing books alone may not be sufficient to improve student reading achievement (White & Kim, 2011). In the study, third- through fifth-grade students were randomly assigned to three treatment conditions (books only, books with oral reading scaffolding, and books with oral reading and comprehension strategies) or to the control condition (no books). At the end of the school year, students in the books with oral reading and comprehension scaffolding condition received three 45-min lessons that focused on oral reading and comprehension strategies that students could use during the summer. Students in the books with oral reading scaffolding group received two lessons that did not include comprehension scaffolding and the books only condition received one lesson that did not include either oral reading or comprehension scaffolding. Students were allowed to self-select books from a selection that was matched to the student’s reading level. Postcards were mailed to parents throughout the summer that prompted the parents to read to their children or provided questions to be asked of the child that modeled the scaffolding strategies learned in the end of year lessons.

Estimated impacts showed there to be no statistically significant effect of the books only condition compared with controls. Statistically significant positive effects were found for the books with oral reading and comprehension strategies condition compared with the control condition (effect size = 0.14) and compared with the books only condition (effect size = 0.12). A comparison of the scaffolding treatment conditions (books with oral reading scaffolding and books with oral reading and comprehension strategies) to the nonscaffolding conditions (control and books only) also showed statistically significant positive effects on reading achievement (effect size = 0.09). An earlier study with similar methods and a different sample showed similar positive effects on student reading achievement (Kim, 2006).

In a replication of the earlier studies (Kim, 2006; Kim & White, 2008), White, Kim, Kingston, and Foster (2014) failed to replicate the main findings of positive effects of end-of-year teacher-led comprehension lessons and the provision of books to students in a broader range of school poverty contexts. Despite the lack of a main effect, the study did find significant heterogeneity in treatment effects across school poverty contexts. When school status as a high-poverty school (defined as 75%-100% of students receiving FRL) was interacted with treatment condition, statistically significant positive effects were found for high-poverty treatment schools relative to controls, whereas in moderate-poverty schools (defined as 45%-74% FRL) treatment effects were estimated to be negative relative to the control condition. In explaining the negative effect in moderate-poverty schools, White et al. note that treatment in these schools may have induced less reading than would otherwise have occurred, that compensatory effects (e.g., John Henry effects) may have been operating in the control condition, or that the results may simply have been anomalous. Despite the lack of a definitive explanation for the heterogeneous effect, these results highlight that summer reading interventions may be sensitive to the school and community contexts in which they are implemented and that wider replication of these types of interventions across contexts is needed.

Baltimore in Context

On the 2011 administration of the Trial Urban District Assessment (TUDA) of the National Assessment of Educational Progress (NAEP), 60% of Baltimore fourth-grade students scored at the below Basic level compared with only 34% of fourth graders in the nation as a whole (see Figure 1). This means that almost two out of every three Baltimore fourth graders did not score well enough to meet the following standard for the Basic level on the NAEP:

Fourth-grade students performing at the Basic level should be able to locate relevant information, make simple inferences, and use their understanding of the text to identify details that support a given interpretation or conclusion. Students should be able to interpret the meaning of a word as it is used in the text.



Figure 1. Comparison of NAEP 2011 fourth-grade reading levels for national public schools, large city public schools, and Baltimore.

Source. U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, NAEP, 2011 Reading Assessment.

Note. NAEP = National Assessment of Educational Progress.

Baltimore’s public school children also appear to have limited access to books in the home. As self-reported on the 2011 TUDA, slightly over half of Baltimore fourth-grade students (53%) had fewer than 26 books in their home compared with only 35% of all national public school students (see Figure 2). A simple comparison of student reported number of books in home and fourth-grade NAEP reading scale scores shows a clear distinction in average reading achievement between students in Baltimore with fewer books in the home (<26) and those with relatively more books in the home (see Figure 3). Baltimore fourth-grade students with fewer than 26 books in the home score on average at the below Basic level, whereas their peers with more than 26 books in the home score at the Basic level.



Figure 2. Percentage of fourth-grade students by reported number of books in home, for Baltimore and national public school students.

Source. U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, NAEP, 2011 Reading Assessment.

Note. All contrasts between Baltimore and the National Public School Average are statistically significant at the p < .001 level. NAEP = National Assessment of Educational Progress.



Figure 3. Baltimore fourth-grade average NAEP reading scale score by number of books in home.

Source. U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, NAEP, 2011 Reading Assessment.

Note. The average scale score difference between children with 0 to 10 books in the home and children with 26 to 100 and more than 100 books in the home is statistically significant at the p < .001 level. The average scale score difference between children with 11 to 25 books in the home and children with 26 to 100 books in the home is statistically significant at the p < .001 level and more than 100 books in home at the p < .001 level. NAEP = National Assessment of Educational Progress.

Baltimore SummerREADS Program

The goal of the SummerREADS project was to investigate the efficacy of a voluntary summer book distribution program on student achievement in schools that serve a highly disadvantaged population of students. The program was modeled on the successful voluntary summer reading and book distribution programs detailed above, implemented by James Kim and Thomas White (Kim, 2006; Kim & White, 2008). These programs provided students with a set of developmentally appropriate books as well as training in comprehension and reading fluency strategies intended to give students self-support learning tools for the summer. The following discussion provides an overview of the SummerREADS program as implemented in Baltimore during the first and second years of the program.

Program components

SummerREADS coordinators

With the support of Baltimore Public Schools’ (City Schools) central Literacy Department, each participating school identified a coordinator who was designated to work with the SummerREADS program manager to facilitate the program in the school. The coordinators were asked to schedule a date for a book fair for students during the first 2 weeks in May, hold a parent orientation for the program, distribute books prior to the end of the school year, and help with data collection (e.g., student surveys and summer reading logs) in their school. The coordinators were also asked to facilitate the participation of teachers in their school who worked with the targeted grade levels (Grades 2 and 3). Specifically, coordinators made sure that grade-level teachers attended a training session on the SummerREADS program and taught the two SummerREADS classroom lessons to students during the last few weeks of school. They also helped teachers guide student selections of books that matched their measured reading levels. School coordinators received a US$400 stipend for their participation in schools with second, third, and fourth graders, and a US$300 stipend in schools with second graders only.

Teacher training and end-of-year lessons

Teachers in the targeted grades at program schools participated in a 3-hr training session during April that covered the impetus and rationale for SummerREADS, reviewed program logistics, and trained teachers on the end-of-year lessons. Teachers were compensated for their time at the normal district rate for professional development hours.

The end-of-year lessons were adapted from those used by James Kim in his studies by the City Schools’ Director of Humanities, in coordination with the SummerREADS project, and focused on oral reading and comprehension strategies. The lessons were designed to teach students strategies to maintain reading comprehension over the summer as well as how to reread for fluency improvement. Teachers were given a storybook and all materials for the lessons (e.g., copies of all student paper materials, the book for the lesson, parent letters, and translations into native languages for English language learners) and were asked to implement the two designated lessons at the end of the school year. They were also provided with materials and guidance to help assign students to book selections by reading level.

Book fairs and book distribution

Book fairs were conducted approximately 1 month before the end of the school year at study schools by the SummerREADS program in conjunction with Scholastic, Inc., a well-known publishing, education, and media company. Teachers brought their students in classes to the school library where they received a short orientation on how to choose their books. The students were directed to a table that contained selections at the child’s reading level and had 15 to 20 min to choose their 12 books from a group of approximately 75 high interest fiction and nonfiction titles. If students were unable to find a desired book at their reading-level table, they were allowed to select books from tables that were one reading level above or below their own. Students recorded 14 titles on a form with the promise that they would receive a total of 12 books prior to summer vacation (books were not distributed at the Book Fairs).

Parent orientations

All parents received a letter announcing the SummerREADS program in April and allowing them to opt out if they did not want to participate. Program schools were asked to develop a plan to give parents an overview and orientation to the program as well as to get them involved in reading with their children during the summer vacation. Schools were given a small budget for this outreach to fund refreshments for meetings or other incentives to encourage parent attendance.

Summer check-in

Past research on summertime book distribution programs highlights the difficulty in determining to what extent students read the distributed books during the summer. For example, Kim (2006) reported that only about half of all children returned at least one postcard during the summer indicating that they had read a book. In Year 1 of the program, classroom teachers and school coordinators who volunteered to check in with a group of SummerREADS students received a US$200 stipend; the check-ins were intended to ascertain whether students had actually read at least one of their books. Summer check-in teachers kept a running record of student contacts and attempted contacts through a web-based log over the course of the summer. After each attempted or successful contact, teachers recorded the mode of contact (e.g., mailed letters or postcards, emails, face-to-face meetings) and whether or not they successfully reached the student or a family member. If the check-in teacher successfully contacted the student, they were asked to record whether the student had read more books since the last contact. If contact was not made, teachers were asked to record why.

In Year 2, teachers who agreed to participate in the summer check-ins received US$250 and were asked to contact their students 4 times over the course of the summer (on or around July 1, July 15, July 29, and August 12). For the first and third contact attempts, teachers were asked to send preprinted postcards that encouraged the students to read, asked them to write the name of the book they just read, and mark if they liked the book and if they had completed reading it. Students were also asked to indicate how many books they had read so far during the summer. Summer check-in teachers were asked to utilize a different mode of communication with students for the second and fourth check-in attempts. Teachers were encouraged to use emails, phone calls, text messages, visits, or some other method of their choice. It was hoped that an increase in the stipend and clear expectations for the number, mode, and timing of contact attempts would increase the success rate of the summer check-ins.

Summer book logs

Students were given a book log to take home during the summer. The book logs contained space for students to report on up to 15 books. Students were asked to report whether or not they had finished reading the book, how many times they read the book (Did not finish, 1 time, 2 times, 3 times, or more), and to answer two questions that were aligned with the reading strategies taught to the students prior to the summer vacation. Parents or other family members were asked to sign each page and were encouraged to comment on the child’s reading of the book. In the first year, students who returned their book logs after summer vacation were offered a trip to the Grand Prix of Baltimore, in hopes of motivating them to read during the summer and to return their logs so that the program could better understand how many children read during the summer. In the second year, students who returned their book logs by a set date within the first 2 weeks of the new school year received a Book Readers’ trophy, and their parents were eligible to enter a school drawing for one US$150 debit card.

Sample

School selection and participation

In both years of the program, schools were identified for participation by the following criteria: (a) The school served a population of students that was greater than or equal to 80% FRL eligible, (b) the school did not meet adequate yearly progress (AYP) targets in reading for the previous school year, and (c) the school utilized Wireless Generation mClass benchmark tests. Principals of schools that met these criteria were contacted to solicit their interest in participating in the program. Schools were randomly assigned to receive SummerREADS from the pool of interested schools.

Based on these criteria, 35 schools were identified as eligible to participate in the SummerREADS program in the summer of 2011. Of the 35 schools initially contacted, only 14 responded with their willingness to participate in SummerREADS. These schools enrolled 1,459 students in the second and third grades (723 and 736 students, respectively). Given that the program was designed to focus on all students within the targeted grades at a school and had a capacity to reach 1,000 students, schools were randomly selected to participate in the program. Schools were selected at random until the projected total number of students enrolled in the selected schools reached 1,000, resulting in a total of 10 of the 14 interested schools being selected to participate. One of the 10 selected schools did not respond after being informed of its selection and, after repeated unsuccessful attempts to contact the principal, was withdrawn from the program. This resulted in a final sample of nine schools that participated in SummerREADS and four comparison schools in Year 1.
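The capacity-constrained random selection described above can be sketched as follows. This is an illustrative reconstruction only: the school names, per-school enrollment figure, capacity, and random seed are our own assumptions, not the study's actual data.

```python
import random

def select_schools(interested, capacity=1000, seed=0):
    # Randomly order the pool of interested schools, then select schools
    # until the projected total enrollment in the targeted grades reaches
    # the program's capacity (the last school selected may push past it).
    pool = list(interested)
    random.Random(seed).shuffle(pool)
    selected, total = [], 0
    for name, enrollment in pool:
        if total >= capacity:
            break
        selected.append(name)
        total += enrollment
    return selected, total

# Hypothetical pool: 14 interested schools with ~110 targeted-grade
# students each (roughly matching the 1,459 students across 14 schools).
pool = [(f"School {i}", 110) for i in range(1, 15)]
selected, projected = select_schools(pool)
```

With equal projected enrollments of about 110 students, this rule selects 10 of the 14 schools, consistent with the selection reported above.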

Schools that had participated in SummerREADS during Year 1 were invited to participate again in Year 2; all nine of the implementing schools agreed to continue their participation in SummerREADS. These schools continued to serve the third- and fourth-grade students for a second year and added new second graders. A second group of schools that had not participated in SummerREADS during Year 1 implemented SummerREADS with second-grade students only for the first time in 2012. These schools were identified by similar criteria that were used during Year 1 of the project. A total of 61 schools were identified as eligible to participate in SummerREADS based on these criteria. Of the 61 schools initially contacted, 22 responded with their interest to participate in SummerREADS during Year 2. From the 22 schools, 11 were randomly assigned to participate in SummerREADS. The remaining 11 schools served as comparison schools that did not implement SummerREADS.

The final participating sample included a total of 20 schools that implemented SummerREADS (nine schools participated in both years and 11 schools participated in the second year only) and 15 comparison schools that did not implement the program.

Analytic sample

The analytic sample comprises a total of 4,881 second- and third-grade student records (2,649 students in SummerREADS schools and 2,232 in comparison schools) across the 35 study schools. Table 1 provides a comparison by year of key student demographic variables across the SummerREADS and comparison groups. Although the samples of students across groups appear to be quite similar, there were some significant differences of note. Across both years, students in SummerREADS implementing schools were significantly less likely to be Hispanic and more likely to be White or Other race. In Year 1, students in SummerREADS schools were more likely to be classified as receiving special education services, but this contrast was not significant in the second year. Finally, students in the SummerREADS group were more likely to be classified as chronically absent (missing more than 20 days of school during the previous school year) in Year 2, but this contrast was not significant in Year 1. Given these differences, and because randomization occurred at the school level, the analytic models (discussed in the “Analytic Approach” section) include these variables as controls.

Table 1. Comparison of Student Demographics Across Years and Groups.

Data and Measures

Data for the evaluation came from City Schools’ administrative and testing databases and were procured through an agreement with the Baltimore Education Research Consortium (BERC), which is a partnership of City Schools, Johns Hopkins University, Morgan State University, and other civic and community partners. Specific data elements and measures are discussed below.

Measures of student reading

During the course of the evaluation of SummerREADS, City Schools implemented the Wireless Generation mClass suite of benchmark testing for progress monitoring of second-, third-, and fourth-grade reading achievement. These benchmark tests are administered at strategic points during the school year by classroom teachers to all of their students, including at the beginning and end of the academic year. mClass data were collected for the spring and fall of 2011 and 2012 for all rising third- and fourth-grade students who were enrolled in SummerREADS and comparison schools.

Included in the suite of mClass assessments is the DIBELS Oral Reading Fluency (DORF) which is “a measure of advanced phonics and word attack skills, accurate and fluent reading of connected text, and reading comprehension” (Good & Kaminski, 2011, p. 79) and is composed of two parts, ORF and passage retell. The DORF measures can be used by teachers to identify students with potential reading problems and also as benchmarks for measuring student reading progress over time (Good & Kaminski, 2011). Several studies have also found that ORF scores are moderate predictors of student reading proficiency on state end-of-year proficiency tests (Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Shaw & Shaw, 2002).

Fluency

On the ORF component of the DORF, students are given an unfamiliar, grade-level passage and are asked to read for 1 min. Students are scored on the total number of words read correctly and the total number of errors made (e.g., substitutions, omissions, and hesitations of more than 3 s). For benchmark testing, students complete the activity three times, and the median words read correctly and median total errors are used as the student’s score. Fluency is the median number of words correctly identified per minute over the three 1-min passage readings. The measure takes the value of zero in cases where the student is unable to read any words correctly on the first line of the passage.
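As a rough illustration of this median-based scoring rule, the following sketch (function and variable names are our own, not part of the mClass software) computes a benchmark fluency score from three trials:

```python
from statistics import median

def dorf_benchmark_score(trials):
    # Each trial is a (words_correct, errors) pair from one 1-min
    # passage reading. The benchmark score is the median words read
    # correctly and the median errors across the three readings.
    words = [w for w, _ in trials]
    errors = [e for _, e in trials]
    return median(words), median(errors)

# Example: three readings of 52, 61, and 58 words correct yield a
# fluency score of 58 words correct per minute with a median of 4 errors.
score = dorf_benchmark_score([(52, 4), (61, 3), (58, 5)])
```

Using the median rather than the mean makes the benchmark score robust to a single unusually strong or weak passage reading.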

Retell

During passage retell, “the student is asked to tell about what he or she has read” immediately after the reading of each passage in the ORF component. The retell component indicates when students are reading for meaning as opposed to “speed-reading without attending to text comprehension.” The DORF assessment utilizes discontinuation rules that stop administration: if the student fails to read any words correctly on the first line of the passage, the student is asked to stop and Retell is not administered; if the student reads fewer than 10 words correctly on the first passage, Retell and the second and third passages are not administered; and if fewer than 40 words are read correctly, professional judgment is used to determine whether Retell is administered for that passage. Retell measures the total number of words in the student’s response that are related to the passage, based on a judgment of whether the student is retelling the passage or has moved to another story or topic. Students are given one point for every word in their retell that is related to the passage. The assessment is discontinued if the student does not respond or gets off track for more than 5 s.
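The discontinuation rules can be summarized as a simple decision function keyed to the words read correctly on the first passage. This is an illustrative sketch of the rules as described here, not the official DIBELS scoring software, and the function name and return structure are our own:

```python
def retell_rules(words_correct_passage1):
    # Decide which components to administer after the first passage,
    # following the discontinuation rules described in the text.
    if words_correct_passage1 == 0:
        # No words read correctly: administration stops; no Retell.
        return {"retell": False, "passages_2_3": False}
    if words_correct_passage1 < 10:
        # Fewer than 10 words correct: skip Retell and passages 2-3.
        return {"retell": False, "passages_2_3": False}
    if words_correct_passage1 < 40:
        # 10-39 words correct: Retell is left to professional judgment.
        return {"retell": "judgment", "passages_2_3": True}
    # 40 or more words correct: administer Retell and remaining passages.
    return {"retell": True, "passages_2_3": True}
```

For example, a student reading 5 words correctly would not be administered Retell or the later passages, whereas a student reading 60 words correctly would complete all components.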

Maryland School Assessment (MSA) reading test

The MSA is the Maryland State Department of Education’s standardized achievement test that fulfills the federal requirements for annual student testing as part of No Child Left Behind. The MSA reading test is tied to state grade-level curricula and benchmarks and is intended to provide information on student progress toward proficiency targets. The test is administered annually in Grades 3 to 8 and includes multiple-choice and brief constructed response items. MSA reading scale scores and proficiency designations were collected from City Schools’ data for students who participated in the first 2 years (2011, 2012) of the SummerREADS program as well as for students who were in comparison schools.

Student demographic data

Student demographic data for SummerREADS and comparison students were obtained from City Schools and merged with student reading assessment data. Key student characteristics obtained from these data included student race, gender, free and reduced-price meals status (FARMS), special education status, limited English proficiency status, and attendance during the school year prior to SummerREADS participation. An indicator was also created to capture students who transferred schools during the summer.

Analytic Approach

The potential relationship between SummerREADS and students’ summer learning is identified by comparing the summer learning of students in schools that implemented SummerREADS with that of peers in the randomly assigned comparison group of schools that did not implement the program. This group of students represents the best available comparison group for two reasons. First, the comparison schools are similar to SummerREADS schools: they also met the initial inclusion criteria, and by expressing interest in the program, these unselected schools likely share unobservable organizational and leadership characteristics that fostered a desire to participate and may also be related to the academic achievement of students. Second, the receipt of SummerREADS was determined by random assignment.

To estimate the potential effects of SummerREADS on student reading as measured by the DORF measures, we use hierarchical linear models (HLMs) that account for the fact that students are nested in schools and that the SummerREADS program was implemented by all teachers in the target grades within a school. For each reading measure, we modeled a student’s fall score as a function of the student’s spring fluency score (to account for differences among students in initial reading prior to the summer), a variable capturing the amount of time between the first day of school and the fall testing date (to control for differences among students in exposure to the school year), and the set of student demographic characteristics detailed above that may be related to reading (race, gender, special education status, limited English proficiency status, FARMS, percentage of days absent, and retention in grade). At the school level, an indicator for a school’s participation in SummerREADS entered the model, along with the percentages of students eligible for FARMS, with limited English proficiency, receiving special education services, and scoring proficient on the Maryland School Assessment (MSA) reading test. Estimates on the SummerREADS indicator are the estimates of interest for this evaluation and can be interpreted as the average difference between students in SummerREADS and comparison schools on the fall outcome of interest, controlling for student and school characteristics.
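In standard two-level notation, the model just described can be written as follows; the symbols are ours for illustration (student covariates collected in the vector X, school covariates in W), not the authors’ exact specification:

```latex
% Level 1 (student i in school j)
Y_{ij} = \beta_{0j} + \beta_{1}\,\mathrm{Spring}_{ij} + \beta_{2}\,\mathrm{Exposure}_{ij}
       + \boldsymbol{\beta}_{3}^{\top}\mathbf{X}_{ij} + r_{ij},
       \qquad r_{ij} \sim N(0, \sigma^{2})

% Level 2 (school intercepts)
\beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{SummerREADS}_{j}
           + \boldsymbol{\gamma}_{02}^{\top}\mathbf{W}_{j} + u_{0j},
           \qquad u_{0j} \sim N(0, \tau_{00})
```

Here the coefficient of interest is γ01, the adjusted average difference in the fall outcome between SummerREADS and comparison schools.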

For MSA scale scores, the models retained the same student controls, with one exception regarding prior reading. For rising third graders, prior MSA scale scores are not available (MSA testing begins in third grade), so the benchmark determinations from the prior fall mClass benchmark testing (below benchmark and above benchmark, with at benchmark as the excluded category) are used as the controls for prior reading; for rising fourth graders, the prior year’s MSA reading scale score is used as the control for prior reading. Models for the two MSA proficiency categories, Proficient and Above and Basic, were estimated using logistic regression, which is appropriate for dichotomous outcomes.
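Because the proficiency outcomes are dichotomous, the corresponding models replace the continuous outcome with a logit link; again, the notation is our own illustration rather than the authors’ exact specification:

```latex
\mathrm{logit}\,\Pr(\mathrm{Proficient}_{ij} = 1)
  = \gamma_{00} + \gamma_{01}\,\mathrm{SummerREADS}_{j}
  + \boldsymbol{\beta}^{\top}\mathbf{X}_{ij}
  + \boldsymbol{\gamma}^{\top}\mathbf{W}_{j}
```

Under this specification, exp(γ01) is the adjusted odds ratio of being categorized as Proficient for SummerREADS versus comparison students.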

Does the provision of 12 self-selected and developmentally appropriate books to students during the summer show evidence of support for the reading achievement of students in low-income and low-resource elementary schools in Baltimore? No statistically significant positive effects of participation in the SummerREADS program were found on either DORF measure. This finding was consistent across separate analyses of each year individually as well as the main pooled analyses across years.2

Despite the lack of statistically significant effects on the proximal measures of reading achievement in the fall after summer vacation, we did find some evidence of delayed effects of SummerREADS on the state reading test in the subsequent school year, especially for rising fourth-grade students. The estimated coefficient on participation in SummerREADS indicates that, on average, rising fourth-grade students in SummerREADS schools scored 7 points higher on the spring administration of the MSA reading test than their peers who did not participate in SummerREADS (Scale Score column, Table 2), net of student and school characteristics. When converted to standard deviation units, this difference is relatively small at 0.15; however, it is of a magnitude found in similar studies and is of practical policy relevance, as it is of the same magnitude as estimates of summer learning losses. No statistically significant effect of SummerREADS participation on rising third graders’ MSA reading scale scores was found. For both rising third and fourth graders, a positive effect on the odds of being categorized as Proficient or Advanced (and concomitantly lower odds of being categorized as Basic) on the 2012 MSA reading test was found (Basic and Proficient or Advanced columns, Table 2).
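As a back-of-envelope check on the conversion between scale-score points and standard deviation units, using only the two figures quoted above (not the authors’ underlying data):

```python
# An effect of ~7 MSA scale-score points that corresponds to ~0.15 standard
# deviation units implies an outcome standard deviation of roughly
# 7 / 0.15 ≈ 47 scale-score points.
effect_points = 7.0
effect_in_sd_units = 0.15
implied_sd = effect_points / effect_in_sd_units
print(round(implied_sd, 1))  # 46.7
```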

Table 2. Estimated Effect of SummerREADS Participation on MSA Reading Scale Scores and Proficiency Categories.

Process Findings

This evaluation was also interested in understanding what happens when a community-based organization takes an intervention from the research literature and implements a variant of it in its own context. Based on teacher and coordinator survey responses, it appears that, overall, the various aspects of the SummerREADS program were felt to have been implemented well and to have improved over the 2 years. As shown in Table 3, average ratings of all of the individual components of the program improved over the ratings given in Year 1 of the project. Overall, teachers and site coordinators rated the book selection process at the book fairs and the teacher training on the SummerREADS program as successful aspects of the program. Notable in Year 2 is the increase over Year 1 in perceptions of the book distribution process and of the incentives to return book logs. Teachers and site coordinators continued to rate the process for summer check-ins with students and the completion and collection of book logs as the least successful aspects of the SummerREADS program.

Table 3. Average Ratings of How Well Individual SummerREADS Program Components Worked in School by Year.

Teacher and coordinator comments on the check-in process highlight the difficulties of connecting with students in an urban district over the course of the summer. In a comment typical of many, one teacher reflected, “I found getting in touch with many students to be difficult. Even though I made the phone calls and mailed the postcards, the student [phone] numbers did not work and the postcards were not returned.” Most teachers noted that it was very difficult to keep up-to-date records of working phone numbers to contact parents and that mailings to the address of record for students were largely unsuccessful. The difficulty in having early elementary students return papers to school after summer vacation is highlighted by a teacher who stated, “During the summer, both parents and kids told me they were reading but just recently I learned that either their book logs were thrown away by the parents or the students misplaced them.”

Common in comments on the program across both years was the notion that teachers and coordinators felt that a lack of parent involvement or investment in the program was a contributing factor in the difficulties experienced in contacting students during summer check-in and in getting book logs returned to school in the fall. One teacher noted, “Parent involvement was very poor. Parents did not assist their children in responding to the postcards or phone calls.” Another wished that “parents would remind their child/children to send in their postcards and to complete their reading logs.”

Despite frustration with and perceptions of a lack of parent involvement as well as the difficulties encountered contacting students over the summer, teachers did report on positive feedback they received from parents and students about the program and reading over the summer. As one teacher noted,

Even though, most of the student’s book logs were lost or thrown away; they could summarize each book. The students expressed their joy of choosing their own books from a large set of books. The parents were pleased to see their children reading during the summer.

Another teacher had a similar experience upon returning in the fall:

My former students came up to me on the first day and said they read all of their books. When asked about filling out their log, they did not want to because it was too much work some of them said. One said they didn’t want a trophy. Others were happy just reading the books for rewards.

Despite very poor return rates on book logs and low rates of successful contact during summer check-ins, most teachers felt that SummerREADS was a positive experience for their students and believed that it facilitated more reading during the summer than may otherwise have happened.

The students were very excited to participate in the program and loved receiving the books; however, we still did not have a significant return on the logs. Most of the students read some of their books which is good . . . more than they would have read if they were not part of the SummerREADS program . . .

Similarly, from the comments of other teachers, it is apparent that the potential effects of SummerREADS may not be confined to summer vacation. Several noted that, upon returning in the fall, children were able to discuss the books that they read and they were continuing to use the SummerREADS books during the school year. For example,

My 4th grade students, now 5th graders, returned to school and had very in-depth discussions with me about the books they read over the summer. I believe the SummerREADS program was very effective and promoted reading in this group of students. They are still reading the books during their independent reading time. They love to read and receiving these books gave them the opportunity they might not have had over the summer.

Another teacher noted how excited their students were to receive the books and highlighted the potential carryover effects of SummerREADS into the new school year in the fall.

I think giving the students books to read during the summer really motivated them. I hear a lot of the students say they have some of the same books that I have in the classroom. Some of the students continue to carry the bags that were given with the books. I think the students really were appreciative of the program. If I tell them to do a book report at home, I am more confident that students will complete them because I know they all received books in June.

Over the course of 2 years, the SummerREADS program distributed approximately 24,000 books to more than 2,000 early elementary students enrolled in 20 high-poverty and poor-performing schools. Given that over half of Baltimore fourth-grade students reported owning fewer than 26 books in their homes on the 2011 NAEP, the program on its own has likely dramatically increased the number of books in the homes of participating students.

Nevertheless, this evaluation failed to find statistically significant effects on students’ short-term learning as measured by the DORF measures over the course of the summer. It is likely that the SummerREADS program as implemented over these 2 years was not sufficiently powerful, given its characteristics (e.g., no direct instruction of students, lack of verifiable parent engagement and involvement), to engender short-term improvements on these measures. The face validity of this point is supported by Kim and White’s (2008) finding that students who only received books during the summer performed no differently than comparison students in the fall. Kim and White (2008) noted that “providing children with more books and opportunities to read is necessary but not sufficient for improving reading achievement” (p. 17) and that “voluntary reading can be made more effective by scaffolding that consists in part of teacher-directed lessons involving oral reading and comprehension strategy instruction” (p. 17). Although SummerREADS provided training to teachers on scaffolding, and teachers were asked to provide these lessons to students prior to summer vacation, it is unclear to what extent these lessons were implemented with fidelity, or with what frequency and intensity they were delivered. Furthermore, it remains an open question to what extent students and parents both understood and were able to use these strategies during the summer.

Based on the work of James Kim, the National Summer Learning Association promotes what it calls the “ABCs of Summer Reading,” which states that to be successful, a summer reading program must provide Access to books, Books matched to the reader’s ability level, and Comprehension monitored and guided by an adult (National Summer Learning Association, 2009). Although the SummerREADS program provided the first two elements, it lacked a systematic way to ensure parental or adult monitoring and guidance of summer reading. The lack of this element may potentially explain the absence of positive effects of SummerREADS on proximal measures of reading in the fall immediately after summer vacation. In the future, SummerREADS or similar programs will need to be intentional in creating program structures that strengthen students’ and parents’ ability and knowledge of how to engage with the books in meaningful ways over the course of the summer. Given the difficulties encountered with contacting parents and students during the summer check-ins, it is not readily apparent how SummerREADS could have improved efforts to support and engage parents during the summer in working on these skills. Working with parents and students more intensively prior to summer vacation may accomplish this; however, given the inherent difficulties in engaging parents in high-poverty schools, this may be difficult to realize. Specifically, those students who need the most reading support during the summer may be the most difficult to reach through parent engagement efforts.

Despite the lack of statistically significant effects on the proximal measures of reading achievement in the fall after summer vacation, this evaluation did find evidence of a positive effect of participation in SummerREADS on the state of Maryland’s year-end standardized reading assessment. Although the program was primarily intended to stem summer learning loss, it was also intended to support student reading more generally. The success of SummerREADS in providing this general form of support is evidenced in the positive distal effects on end-of-year state reading assessments. The program may have provided supports to students that carried over into instruction during the school year. Although the relative difficulty of meeting proficiency standards on state standardized tests, and what these categorizations may or may not mean for student reading proficiency, is certainly debatable, it is still a positive finding that providing students in high-poverty and poor-performing schools with reading resources in the form of reading-level appropriate books is related to higher odds of meeting policy-relevant and desired outcomes.

The comments by teachers about how the SummerREADS program has the potential to spill over into instruction during the school year are suggestive of new avenues for developing similar programs further, beyond the summer, through the intentional integration of the program with school-year instruction. Although the provision of books alone may not be sufficiently powerful to stem learning loss on the short timescale of summer vacation, it may provide resources in the home that students can utilize in support of learning over a longer time span and in conjunction with school-year teaching and learning.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This article was supported with funding from the Abell Foundation, Inc.

Alexander, K., Entwisle, D., Olson, L. (2001). Schools, achievement, and inequality: A seasonal perspective. Educational Evaluation and Policy Analysis, 23, 171-191.
Alexander, K., Entwisle, D., Olson, L. (2007). Summer learning and its implications: Insights from the Beginning School Study. New Directions for Youth Development, 2007, 11-32.
Allington, R. L., McGill-Franzen, A., Camilli, G., Williams, L., Graff, J., Zeig, J., . . . Nowak, R. (2010). Addressing summer reading setback among economically disadvantaged elementary students. Reading Psychology, 31, 411-427.
Baker, S. K., Smolkowski, K., Katz, R., Fien, H., Seeley, J., Kame’enui, E. J., Beck, C. T. (2008). Reading fluency as a predictor of reading proficiency in low-performing, high-poverty schools. School Psychology Review, 37, 18-37.
Barger, J. (2003). Comparing the DIBELS Oral Reading Fluency indicator and the North Carolina end of grade reading assessment (Technical report). Asheville: North Carolina Teacher Academy.
Buck, J., Torgesen, J. (2003). The relationship between performance on a measure of oral reading fluency and performance on the Florida Comprehensive Assessment Test (FCRR Technical Report No. 1). Tallahassee: Florida Center for Reading Research.
Burkam, D., Ready, D., Lee, V., LoGerfo, L. (2004). Social-class differences in summer learning between kindergarten and first grade: Model specification and estimation. Sociology of Education, 77(1), 1-31.
Chin, T., Phillips, M. (2004). Social reproduction and child-rearing practices: Social class, children’s agency, and the summer activity gap. Sociology of Education, 77, 185-210.
Cooper, H., Nye, B., Charlton, K., Lindsay, J., Greathouse, S. (1996). The effects of summer vacation on achievement test scores: A narrative and meta-analytic review. Review of Educational Research, 66, 227-268.
Downey, D., von Hippel, P., Broh, B. (2004). Are schools the great equalizer? Cognitive inequality during the summer months and the school year. American Sociological Review, 69, 613-635.
Entwisle, D., Alexander, K. (1992). Summer setback: Race, poverty, school composition, and mathematics achievement in the first two years of school. American Sociological Review, 57, 72-84.
Entwisle, D., Alexander, K. (1994). Winter setback: The racial composition of schools and learning to read. American Sociological Review, 59, 446-460.
Entwisle, D., Alexander, K., Olson, L. (2000). Summer learning and the home environment. In Kahlenberg, R. (Ed.), A notion at risk: Preserving public education as an engine for social mobility (pp. 9-30). New York, NY: The Century Foundation Press.
Entwisle, D., Alexander, K., Olson, L. (2001). Keep the faucet flowing: Summer learning and home environment. American Educator, 25(3), 10-15.
Good, R., Kaminski, R. (2011). DIBELS Next® assessment manual. Eugene, OR: Dynamic Measurement Group.
Hart, B., Risley, T. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Brookes Publishing.
Heyns, B. (1978). Summer learning and the effects of schooling. New York, NY: Academic Press.
Heyns, B. (1987). Schooling and cognitive development: Is there a season for learning? Child Development, 58, 1151-1160.
Jaeger, M. (2011). Does cultural capital really affect academic achievement? New evidence from combined sibling and panel data. Sociology of Education, 84, 281-298.
Kim, J. (2006). Effects of a voluntary summer reading intervention on reading achievement: Results from a randomized field trial. Educational Evaluation and Policy Analysis, 28, 335-355.
Kim, J., White, T. (2008). Scaffolding voluntary summer reading for children in grades 3 to 5: An experimental study. Scientific Studies of Reading, 12(1), 1-23.
Lareau, A. (2002). Invisible inequality: Social class and childrearing in black families and white families. American Sociological Review, 67, 747-776.
Lee, V., Burkam, D. (2002). Inequality at the starting gate: Social background differences in achievement as children begin school. Washington, DC: Economic Policy Institute.
Lindsay, J. (2010). Children’s access to print material and education-related outcomes: Findings from a meta-analytic review. Naperville, IL: Learning Point Associates.
National Summer Learning Association. (2009). How to make summer reading effective. Retrieved from http://c.ymcdn.com/sites/www.summerlearning.org/resource/resmgr/publications/2009.makesummerreadingeffect.pdf
Shaw, R., Shaw, D. (2002). DIBELS Oral Reading Fluency-based indicators of third grade reading skills for Colorado State Assessment Program (CSAP) (Technical report). Eugene: University of Oregon.
Slates, S. L., Alexander, K. L., Entwisle, D. R., Olson, L. S. (2012). Counteracting summer slide: Social capital resources within socioeconomically disadvantaged families. Journal of Education for Students Placed at Risk (JESPAR), 17(3), 165-185. https://doi.org/10.1080/10824669.2012.688171
White, T., Kim, J. (2011). Teacher and parent scaffolding of voluntary summer reading. The Reading Teacher, 62, 116-125.
White, T., Kim, J., Kingston, H., Foster, L. (2014). Replicating the effects of a teacher-scaffolded voluntary summer reading program: The role of poverty. Reading Research Quarterly, 49(1), 5-30.

Author Biography

Marc L. Stein is an assistant professor in the School of Education at the Johns Hopkins University. He is also an affiliated researcher with the Baltimore Education Research Consortium (BERC) and a faculty affiliate with the Center for Social Organization of Schools (CSOS).
