We have two goals for this article: to question the efficacy of evidence-based practice as the foundation of reading education policy and to propose practice-based evidence as a viable, more socially just alternative. In order to reach these goals, we describe the limits of reading policies of the last half century and argue for the possibilities of policies aimed at more equitable distribution of academic literacies among all social groups, recognition of subaltern groups’ literacies, and representation of the local in regional and global decision making.

Despite a century of reading research and policies investing billions of dollars in reading education, students from lower income families (Carnoy & Rothstein, 2013), confounded by race (Vigdor, 2011), immigrant status (Swartz & Stiefel, 2011), and segregated location (Burdick-Will et al., 2011), continue to “struggle” with school reading. Moreover, the income achievement gap for reading has increased by 40% since the Reagan Administration (Reardon, 2011). These facts warrant a closer look at the policies that shape students’ opportunities to learn to read in U.S. schools.

We understand policy as a call to action. “A policy is both a hypothesis and an argument that a particular action should be taken to solve a problem. That action, however, has to be politically acceptable and economically feasible” (Cuban, 2010, para. 9). From this understanding of policy, we argue that the current definition of the problem (noncompetitive levels of general reading proficiency), the hypothesis (higher standards/evidence-based practices/high stakes testing will address this problem), and the argument (scaling up/fidelity/accountability will ensure success) have been unsuccessful. In this article, we explicate why we believe this to be the “wrong” problem, provide evidence that its hypothesis and supporting arguments are problematic, and make a case for how the pursuit of a different problem (systemic barriers that suppress low-income and poor students’ opportunities to learn and life chances), different hypotheses (improvement networks/practice-based evidence/whatever-it-takes attitude can mitigate those barriers), and different arguments (equal moral worth/complexity/participatory parity) could contribute to more just reading education programs. To take these steps, we adopt a critical pragmatic focus (Feinberg, 2014) to trace the origins of the current policy emphases and to legitimize calls for alternative reading education policies.

Pragmatism privileges the doing over the done and the specific over the general (Vannini, 2008), and views the consequences of social action as “always in the making” (James, 1904, p. 2). “Critical pragmatism highlights situations in which power relations influence the formation and internalization of common sense serving to conceal alternative ways of understanding problems and solutions” (Feinberg, 2014, p. 150). We agree with Delaney and Neuman (2016) that schools are complex environments in which a number of different discourses compete for authority in the determination of what will be considered educational common sense. They argued that the intersectionalities of people negotiating those discourses within and around particular school environments enhance that complexity. In our way of thinking, the social and political power behind evidence-based practice (EBP) (Walsh, Reutz, & Williams, 2015) works to conceal the possibilities of practice-based evidence, directing teachers’ attention toward fidelity and away from justice.

Evidence-based practice is an interdisciplinary approach to evaluating interventions in order to ensure that the best available data are used to determine their effectiveness and general implementation (Walsh, Reutz, & Williams, 2015). The International Literacy Association (2002) defines EBP as “reliable, trustworthy, and valid evidence to suggest that when a program is used with a particular group of children, the children can be expected to make adequate gains in reading achievement” (p. 2). Haskins (2014), among others, has promoted randomized control trials (RCT) as the best available tool for these evaluations and concluded that “the evidence-based movement separates the wheat from the chaff” (para. 11) when considering the implementation and funding of social programs.

Practice-based evidence (PBE) is immediately relevant, contextually based data that social service practitioners collect systematically and intentionally in order to address particular individuals in particular contexts. Educators might ask, how does adding this intervention alter the complex personalized systems of the student before me? (Biesta, 2010; Maxwell, 2004). Bryk (2015) explained that, through the collective knowledge of practitioners, PBE demonstrates “the difference between knowledge that something can work and knowledge of how to actually make it work reliably over diverse contexts and populations” (p. 469). Erickson (2014) argued for “policy that provides wiggle room to enable custom tailoring of practices to fit the particular of local circumstances” (p. 4). PBE informs practice about how to make interventions work in the “real world” of complex classroom environments.

We define fidelity as closely following instructional “scripts” validated by EBP in order to scale-up what works into universal best practices of reading education (e.g., the 2009 Investing in Innovation Fund (i3)). Additionally, we define justice as a more equitable distribution of dominant literate practices; recognition of the powerful literacies, texts, and practices among culturally marginalized groups; and representation of the local within flows of decision making regionally and globally (Luke, 2014).

Cuban’s definition of policy pushes past the notion that policy making is an objective rational process or negotiation of facts in order to include the political values of those involved. We rely on a tradition of education policy analysis from House (1978), Glass (1987), and Mathis and Trujillo (2016) in our application of labels to those values. In his critique of the use of standardized testing in Title 1 of the original Elementary and Secondary Education Act, House (1978) articulated liberals’ preference to use government authority to ensure and regulate equal opportunities among citizens to learn in school. Glass explained the conservative philosophy encoded in the Reagan Administration’s What Works document, showing how its authors assumed that the traditional moral and social order, and not government intervention, should determine how schools are run. Contributors to Mathis and Trujillo’s volume analyzed the federal government’s neoliberal market-based values—flexibility, competition, and choice—enacted in No Child Left Behind and Race to the Top.

A historical set of policy actions enacting liberal, conservative, and neoliberal discourses provides the backdrop for understanding the positioning of evidence within current educational policy as shown in Table 1. The liberal Elementary and Secondary Education Act (ESEA) of 1965 was part of the Johnson Administration’s War on Poverty, social welfare legislation intended to improve the lives of low-income and racial minority citizens. In rapid legislative negotiations, conservative Dixiecrat legislators insisted that within ESEA the targeted group be specified as low-income rather than African American and the federal government be prohibited from mandating national academic standards or curriculum. Title 1 of the ESEA enabled schools to provide additional compensatory reading instruction for low-income students, distributing one billion dollars in federal funds annually to states willing to participate. For Senator Robert Kennedy, Title 1 was to be a service program ensuring equal access to quality instruction. He insisted that schools consult periodically with families about participating students’ progress in order to ensure that states were spending the additional funding as intended (McLaughlin, 1975).

Table 1. Ideologies and Discourses in Policies.

Education Commissioner Francis Keppel and other officials in the new Program Evaluation office within the Department of Health, Education, and Welfare interpreted the consultation clause as a call for the development of a program evaluation system driven by mandatory testing that would eventually enable the government to determine the most effective program (Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010; House, 1978). Although Title 1 was not without problems, the program was successful at raising reading scores. Targeted funding to low-income students (Martin & McClure, 1969) and the development of compensatory reading instruction (Calfee & Drum, 1979) enabled low-income students to improve their reading test scores more rapidly than their more prosperous peers, narrowing the income achievement gap slightly, but consistently, into the late 1970s (Reardon, 2011).

With the support of a conservative legislature, President Reagan began to reduce the federal footprint in public life via the Omnibus Reconciliation Act of 1981. The reauthorization of ESEA as the Educational Consolidation and Improvement Act (ECIA) of 1981 was part of that effort. Over 1 billion dollars were cut from the federal education budget. Title 1 became Chapter 1, transferring much of the authority over the program to the states and shifting funding to block grants to be used to improve all students’ reading proficiency. The changes reduced targeted aid significantly and relaxed regulatory requirements, leading to fewer eligible students being served than during the 1960s and 1970s (Thomas & Brady, 2005). This reduction of concern and service can be symbolized by the description of Chapter 1 in ECIA. In previous iterations of the ESEA, Title 1 required 75 pages to represent its regulations; in ECIA, however, just 14 pages were needed to explain Chapter 1, although the testing and reporting requirements were retained. Reardon (2011) locates the renewed widening of the income achievement gap in reading within these and other fiscally conservative changes.

The struggle between liberal and conservative values about the role schools ought to play in helping low-income children learn to read can be traced through subsequent reauthorizations of the ESEA. United States federal education officials rejected both the International Reading Association and the National Council of Teachers of English’s joint effort to produce national reading standards for America 2000 (McCollum-Clark, 1995) and the Common Core Standards for English Language Arts within the Every Student Succeeds Act (ESSA) of 2015 (Shannon, in press).

Although President Reagan couched the findings of A Nation at Risk (1983) in conservative terms in his press release (Reagan, 1983), the report introduced neoliberal values into the struggle over schooling and reading education in bold terms. The premise of A Nation at Risk (1983) was that globalization placed all people in competition with one another for survival. Schools’ primary task should, therefore, be to ensure that future citizens have sufficient cultural, social, and informational capital to enable them to compete successfully in the global marketplace. This was a difficult order: A Nation at Risk (1983) contradicted the liberal moves toward equity taken in the 1960s, and the conservative insistence on Western tradition had failed to produce the hoped-for educational results of that agenda (see Bell, 1988). A Nation at Risk explained, “if only to keep and improve on the slim competitive edge we still retain in world markets, we must dedicate ourselves to the reform of our education system” (National Commission on Excellence in Education, 1983, p. 7).

Standing before leaders of the National Governors Association, philanthropic organizations, and large corporations in 1991, President G. H. W. Bush announced the America 2000 policy initiative through which the federal government would “define new world-class standards for schools, teachers, and students in the five core subjects: math and science, English, history and geography…[and] develop voluntary national tests for 4th, 8th, and 12th graders in the five core subjects” (para. 18). Federal officials devised a two-pronged approach to address this challenge—establish world-class academic standards for English Language Arts in every school and identify proven teaching methods to ensure all American students become proficient on tests of those standards. In a nonlinear path, the first prong started with Secretary of Education Lamar Alexander’s “nine-year crusade” (1991) through “a series of truly radical incentives” (1993) and led to President Obama’s Race to the Top (2009) initiative and its support of Common Core State Standards (2009) and testing. The second prong produced a series of national reports on reading education beginning with Becoming a Nation of Readers (National Academy of Education & Anderson, 1985) and culminating with the National Reading Panel Report (2000).

Across the last half century, the relative power among competing liberal, conservative, and neoliberal discourses has shifted the “problem” of reading education from distributing academic literacy more equitably among social classes to raising the reading proficiency of all students. The neoliberal and conservative rationales (see Table 1) for this shift are demonstrably false. Using international test scores, Berliner (2009) demonstrated that middle and upper class American students score on par with the best students of other countries while low-income students rank among the lowest in participating countries. Carnoy and Rothstein (2013) demonstrated that U.S. students of all social classes score above their international peers. However, U.S. average scores sink to the middle of the rankings because the United States is more willing to tolerate higher levels of poverty among children and youth than other developed countries.

Carnoy and Rothstein’s (2013) findings cannot be attributed to federal interference as implied in conservative portions of ESSA, which returns authority over standards and testing to the states. Reductions in federal funding levels shift responsibility for funding and testing outcomes to the states. In the Brown Center Report on American Education, Loveless (2012) found “within state variation [on the National Assessment of Educational Progress scores, NAEP] is four or five times larger than the variation between states” (p. 8). Recently, the State Supreme Court in Connecticut ruled that the state must produce a more equitable plan for reading education because, although the state enjoys the highest average NAEP scores, its low-income students scored below their peers in 40 other states (Harris, 2016). Former Secretary of Education Margaret Spellings opined about ESSA, “I’m a little bit skeptical. We’ve tried the local control approach before, and we saw pretty pitiful results” (as quoted in Davis, 2015, para. 19). As Berliner (2009) concluded, America has a two-tiered reading education system, and that should be the focus of reading education policy.

The second prong of the federal strategy has problems as well. Conservative, neoliberal, and liberal discourses subscribe (at least rhetorically) to the hypothesis that scaled up (widely disseminated) EBPs, taught with fidelity and monitored closely by examination, will produce a nation of readers. The social power behind this hypothesis is a century-long progressive commitment to the use of experimental science to determine the basic structures of the physical and natural world in order to develop technologies to address recognized problems (Shannon, 2008). The keys to this commitment are found in the identification of generalizable causal relationships that can unlock rational courses of action without influences of history, culture, or economic interests. The roots of this commitment in reading education are at least as deep as Edmund Burke Huey’s (1908/1968) statement about the problem of reading instruction:

After all we have thus far been content with trial and error, too often allowing publishers to be our jury, and a real rationalization of the process of inducing the child into the practice of reading has not been made. (p. 9)

A more recent manifesto of the superiority of RCT can be found in the National Reading Panel Report (2000) that adopted evidence-based methodological standards of psychological and medical research:

To make a determination that any instructional practice could be or should be adopted widely to improve reading achievement requires that the belief, assumption, or claim supporting the practice be causally linked to a particular outcome. The highest standard of evidence for such a claim is the experimental study, in which it is shown that treatment can make such changes and effect such outcomes. Sometimes when it is not feasible to do a randomized experiment, a quasi-experimental study is conducted. This type of study provides a standard of evidence that, while not as high, is acceptable, depending on the study design. (pp. 1–7)

The political power of this hypothesis reached its apogee in the No Child Left Behind Act (NCLB, 2002) where all instruction, materials, and procedures had to be scientifically based in order to receive federal funding. The concentration of the social and political sources of power in NCLB legitimized EBP as the “common sense” behind federal, state, and local reading education policies.

Yet, recent federal reports should give pause. Relying on the efficacy of “experimental and quasi-experimental research literature” (National Reading Panel, 2000, pp. 1–1), the National Reading Panel defined reading and presented evidence-based action in order to implement “what works.” These definitions, priorities, and suggested interventions were assembled quickly as The Reading First Initiative (RFI) of NCLB (2002). The RFI succeeded in eliciting mandated fidelity to EBPs in elementary classrooms—teachers did what they were told to do. Yet despite teachers’ compliance with what works, the Department of Education found that the RFI failed to demonstrate the predicted “statistically significant impact on reading comprehension scores” (Gamse et al., 2008, p. v). Moreover, there were few trends which demonstrated improvement over time. Congress ended the RFI abruptly, but the recommended EBPs were recoded into the official response to intervention (RTI) (IDEA, 2004) process and repackaged as the foundational skills of the Common Core State Standards for English Language Arts (2010).

RTI has succeeded in eliciting more direct, explicit EBP interventions for struggling readers. However, a 2015 federal policy evaluation of RTI in 119 schools, with over 20,000 students in 13 states, found that “assignment to receive reading intervention did not improve reading outcomes; it produced negative impacts” (Balu et al., 2015, p. 1). First-grade students whose scores were near the score required to qualify for services actually did worse than virtually identical peers who did not get targeted assistance. Study co-author Doolittle stated: “We don’t want to have people say that these findings say these schools aren’t doing RTI right; this turns out to be what RTI looks like when it plays out in daily life” (Sparks, 2015, para. 6).

Calfee (2014) and Pressley (2005) argued that these failures of daily life were predictable given the common sense understandings of science and reading. First, they suggested that the National Reading Panel’s science avoided the complexities of place and people, seeking to control complexities as variables in order to declare what works generally. Separately, Calfee (2014) and Pressley (2005) explained that a useful science for education would make every effort to understand why and how interventions work, for whom and under what conditions, and without controls. Second, Calfee challenged reading researchers to rethink and to act differently on working definitions of reading, how reading develops across time, and the indicators of those developments in people’s everyday lives (see Pearson, Valencia, & Wixson, 2014). Pogrow (2017) concluded that the accepted common sense hypotheses and arguments of reading research are “a house of cards built on the basis that a given trivial effect size is bigger than some other trivial effect size” (p. 12). Pogrow continued:

Given (a) the lack of evidence that effective practices policies actually improve student outcomes, and (b) the disconnect between actual real-world effectiveness and how researchers determine that a practice is effective, implementing any form of effective practices policies at this point in time is an unwarranted government intrusion into local educational decision making that will result in stagnation and deflect efforts to seek alternative approaches to developing and identifying practices that are actually effective. (p. 12)

There is growing awareness of the limits of EBP in other “helping fields” such as social work (Smyth & Schorr, 2009), physical (Swisher, 2010) and psychological (Barkham & Mellor-Clark, 2003) therapies, psychiatry (Lieberman et al., 2010), and even medicine (Institute of Medicine, 2012). In these fields, many practitioners note that fidelity to standard EBP treatments results in a variety of outcomes across diverse individuals, groups, and contexts because the controls of the original studies cannot be replicated in everyday environments. Practitioners seek treatment systems that recognize the fundamental role of practitioner knowledge in addressing the central question of their work: “how does adding X intervention alter the complex personalized system of patient Y before me?” (Swisher, 2010, p. 4).

In his Distinguished Lecture before the American Educational Research Association, Bryk (2015) connected these concerns to education: “Every student is not the same, nor is every context. The complexity is real, and it cannot be side-stepped by standardizing all activity in an effort to teacher-proof instructional environments” (p. 474). Bryk’s acknowledgement should direct educational inquiries to questions regarding “how to make various work processes, tools, role relationships, and norms interact more productively under a variety of conditions…. The resultant knowledge and associated empirical warrants are practice based evidence” (p. 473, italics in the original).

Reading education in the United States has been both successful and unjust. The success of reading education can be measured in the comparison of middle and upper socioeconomic class student test scores against their international peers. The injustices of reading education are demonstrated in its maldistribution of academic literacy across social groups, its neglect of the literacies of low-income and minority students’ everyday lives, and its fear of a critically literate populace across geographic and political scales. Channeling competitive and complementary values of conservative, neoliberal, and liberal discourses, current EBP processes and consequences sort students according to economic, social, and cultural capital, creating real and perceived bureaucratic and legal barriers that prevent all students from participating as equal peers in social life across their life spans. From a radical-democratic discourse (Fraser, 2008), reading education policy in the United States should be directed toward the development of social conditions and arrangements to address these failures and to dismantle barriers directly. Failure to address problems and dismantle barriers is the reading education problem in the United States.

The argument for justice in reading education policy has been voiced from the margins of American society for over one hundred years:

The methods of the few, in their control of the many, still govern our public schools and to a great degree determine their management. (Parker, 1884, p. 436)

To deny education to any people is one of the greatest crimes against human nature. (Douglass, 1894, n.p.)

We may make foreign birth a handicap to them and to us, or we may make it a very interesting and stimulating factor in their development and ours. (Addams, 1910, p. 410)

The historical voices arguing for justice remain clear in more contemporary times through the Eight Year Study (Aiken, 1942), citizenship schools for voting and civil rights (Clark & Blythe, 1962), the project of bilingual education (Hernandez-Chavez, 1988), radical democratic publications such as Rethinking Schools (e.g., Christensen, 2000), and the crossing of ideological and geographic boundaries, urban and rural (e.g., Cuervo, 2016; Hicks, 2013; Kinloch, 2012). Pedagogies and practices grounded in EBP are ill-equipped to respond to the variety of contexts in which our children live and learn and seek more just lives. The hypothesis and argument for a more just reading education policy and programs require a form of PBE (Shannon, 2017).

There is no way to discover what is ‘more truly educational’ except by the continuation of the educational act itself. (Dewey, 1929, p. 39)

PBE works from the values of pragmatism. Dewey’s sense of the educational act is always situated, holistic, cultural/historic, and engages complex social beings in transactions around the production of social knowledge and practices that have immediate as well as long-term benefits. Participants have different goals, motives, and routes of development, none of which should necessarily transcend all others. Within any authentic context, transactions do not have universal consequences (Gould, 1985), and therefore, a range of outcomes should be expected and respected. To accept and enlist variation and diversity, teachers, school personnel, and community members must gather data systematically and intentionally to address the particular “what and how of local practices in order to determine specific local mechanisms of cause—why what is working does so, why it sometimes works better, why it sometimes falters” (Erickson, 2014, p. 5). Data provide a record of relationships within the educational act, allowing one to consider the constellations of individual needs, interests, desires, and their consequences over periods of time, and distinguishing discrete temporal improvements from integrated lasting impacts.

PBE becomes immediately relevant, tracking in detail with whom the act worked, how it worked, and toward what ends. With an eye toward justice, that record must account for whom the act did not work, in what ways, and with what consequences (Biesta, 2010). This leads to a “what it takes” (Smyth & Schorr, 2009, p. 5) stance to ensure that no learner is “acquired by failure” (McDermott, 1996, p. 295). Designs for PBE direct educational communities to refine the quality of transactions to support the complex work of reading education through localized disciplined inquiries. Localized disciplined inquiries are systematic, intentional inquiries to produce data that are understood as locally and systematically framed. Privileging the local rebalances power among stakeholders in reading education. Data and theories flow out from classrooms as well as into them in order to improve teaching and learning. PBE democratizes the negotiations around Calfee’s (2014) constructs of definition, development, and indicators for reading education and what they mean in people’s lives. Democratizing the constructs redistributes power and responsibility in the negotiations of the common sense of reading education more justly. The development and negotiations around the knowledge and empirical warrants produced through records “of the educational act itself” (Bryk, 2015) suggest the possibility of just reading education policy and practice via PBE.

There are several precedents of PBE in U.S. schools (Shannon, 2017): Dewey’s Laboratory School at the University of Chicago, the Bureau of Educational Experiments that became Bank Street College, the evaluation system for the Eight Year Study, the Prospect School in North Bennington, VT, the reading and writing process movements, and more recently, the Centers of Inquiry in Indianapolis, IN, and Richland, SC. None of these examples mirror each other except in their commitments to find ways to make “work processes, tools, role relationships, and norms interact more productively” (Bryk, 2015, p. 473) in their situated environments. Consider the range of PBE between Mills, O’Keefe, Hass, and Johnson (2014) and Jaeger (2016). In the former, Mills et al. presented a school community—teachers, elementary students, and their parents—engaged in continuous “collaborative inquires” and “curricular conversations” about understandings, processes, and outcomes in order to “intentionally work to live into” commitments that each child will “learn to use reading, writing, and mathematics as tools for learning as young researchers in the sciences and social sciences” (p. 37). In the latter, Jaeger described a triad—teacher, reading specialist, and researcher—“crafting Tier 2 Response to Intervention in an era of the Common Core” (p. 179) by adjusting the time and space of intervention, embedding all lessons in meaning-centered activity, and producing evidence to legitimize their departure from standard protocols. Across this range, there are situated efforts to avoid, subvert, and remove barriers that keep students from participating as peers in high-quality curriculum.

In his AERA address, Bryk (2015) concluded, “unfortunately, no professional infrastructure currently exists for educators to collaborate in the systematic development and testing of changes and to generate and synthesize practice-based evidence” (p. 473). Bryk immediately followed with, “But it could.” He presented a brief overview of improvement science built upon PBE, which he believed would require a paradigm shift that “calls for data not for purposes of ranking individuals or organizations but for learning about how instructional practices and organizations actually work” (pp. 474–475). Delaney and Neuman (2016), Erickson (2014), and Pogrow (2017) seem to concur that “if educators joined together in structured improvement networks, our field would have extraordinary capacities to innovate, test, and rapidly spread effective practices” (Bryk, p. 475). Bryk believed that the purpose of the networks is “to inform educators as to what is more likely to work, where, for whom, and under what conditions” (p. 473), continuing the flow of hypothesized interventions for local inquiries. Perhaps, with the prestige of the Carnegie Foundation for the Advancement of Teaching behind him, some forms of improvement science might become politically acceptable, reversing the polarity of the official types of evidence acceptable within the ESSA: strong (RCT), moderate (quasi experimental), promising (correlational), and ongoing (active situated evaluation). As the data at the start of this article attest, there is simply too much evidence to rationally deny the need for this paradigm shift from EBP to PBE in the common sense of reading education policy.

Yet, we are skeptical that the shift is possible. The social power of EBP science remains strong as editors of educational research journals, which feed the federal What Works Clearinghouse, still fixate on effect size, even when differences are in the range of “difficult to detect” and thus have little real-world value (Pogrow, 2017, p. 8). The political power of EBP remains in the ESSA: “From a teacher’s point of view, the new law continues the basic operations and principles of the previous law: It is fundamentally a test-driven, top-down remediate-and-penalize law” (Mathis & Trujillo, 2016, p. 667). Moreover, the Trump Administration used the Congressional Review Act to reaffirm states’ authority in “decisions about what to do about the children,” sweeping aside the Obama Administration’s final articulations of the ESSA for federal oversight of issues of equity and access (Goldstein, 2017, para. 5). Economic power protects and bolsters the status quo. “Testing agencies, education management organizations, [and] segments of the school improvement industry” derive economic benefit from the law (Mintrop & Sunderman, 2009, p. 361). Advocating for Common Core, Gates (2009) explained to legislators, “When the tests are aligned with the common standards, the curriculum will line up as well, and it will unleash a powerful market of people providing services for better teaching” (para. 49). Sources of power located within the traditional competitions among conservative, neoliberal, and liberal discourses constrain the possibilities of just reading programs and PBE from becoming the common sense behind reading education policy.

We hope that the members of the Literacy Research Association (LRA) will not submit to these social, political, and economic powers. There is precedent: LRA members worked through a paradigm shift in the past when, in response to research data demonstrating complexity, injustice, and learner and teacher capacities, the National Reading Conference became the LRA. In a show of leadership, forthcoming conferences could open discussions about the problem that reading education policy should address, take up the various possibilities and limits of PBE and EBP, and ask presenters to consider explicitly how they position their work within these discussions.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Addams, J. (1910). Twenty years at Hull House. New York, NY: Macmillan.
Aiken, W. (1942). The story of the eight-year study. New York, NY: Harper & Brothers.
Alexander, L. (1991, July 19). America 2000: An education strategy. Washington, DC: Department of Education. Retrieved from https://www.c-span.org/video/?19476-1/america-2000-education-program
Alexander, L. (1993). What we were doing when we were interrupted. In Jennings, J. (Ed.), National issues in education: The past is prologue (pp. 3–18). Washington, DC: Phi Delta Kappan/Institute for Educational Leadership.
A Nation at Risk. (1983). The imperative for educational reform. Washington, DC: National Commission on Excellence in Education. Retrieved from http://www2.ed.gov/pubs/NatAtRisk/index.html
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., Gersten, R. (2015, November). Evaluation of response to intervention practices for elementary school reading. U.S. Department of Education. Retrieved from http://www.mdrc.org/sites/default/files/RtI_2015_Full_Report_Rev_21064000.pdf.pdf
Barkham, M., Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence. Clinical Psychology & Psychotherapy, 10, 319–327.
Bell, T. (1988). The thirteenth man: A Reagan cabinet memoir. New York, NY: Free Press.
Berliner, D. (2009). Poverty and potential: Out-of-school factors and school success. Boulder, CO: Education Public Interest Center.
Biesta, G. (2010). Why ‘what works’ still won’t work: From evidence-based practice to value-based education. Studies in Philosophy of Education, 29, 491–503.
Bryk, A. (2015). Accelerating how we learn to improve. Educational Researcher, 44, 467–477.
Bryk, A., Sebring, P. B., Allensworth, E., Luppescu, S., Easton, J. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: University of Chicago Press.
Burdick-Will, J., Ludwig, J., Raudenbush, S., Sampson, R., Sanbonmatsu, L., Sharkey, P. (2011). Converging evidence for neighborhood effects on children’s test scores. In Duncan, G., Murnane, R. (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 443–464). New York, NY: Russell Sage Foundation.
Bush, G. H. W. (1991, April 18). Address to the nation on the national education strategy. The American Presidency Project. Retrieved from http://www.presidency.ucsb.edu/ws/?pid=19492
Calfee, R. (2014). Knowledge, evidence and faith: How the federal government used science to take over public schools. In Goodman, K., Calfee, R., Goodman, Y. (Eds.), Whose knowledge counts in government literacy policies? (pp. 1–17). New York, NY: Routledge.
Calfee, R., Drum, P. (1979). Teaching reading in compensatory reading classes. Newark, DE: International Reading Association.
Carnoy, M., Rothstein, R. (2013, January 28). What do international tests really show about U.S. student performance? Economic Policy Institute. Retrieved from http://www.epi.org/publication/us-student-performance-testing/
Christensen, L. (2000). Reading, writing and rising up. Milwaukee, WI: Rethinking Schools.
Clark, S., Blythe, L. (1962). Echo in my soul. New York, NY: Dutton.
Cuban, L. (2010). Common core standards: Hardly an evidence-based policy. Retrieved from https://larrycuban.wordpress.com/2010/07/25/common-core-standards-hardly-an-evidence-based-policy/
Cuervo, H. (2016). Understanding social justice in rural education. New York, NY: Palgrave Macmillan.
Davis, J. (2015, December 10). President Obama signs into law a rewrite of No Child Left Behind. The New York Times. Retrieved from https://www.nytimes.com/2015/12/11/us/politics/president-obama-signs-into-law-a-rewrite-of-no-child-left-behind.html
Delaney, K., Neuman, S. (2016, August 22). Contexts for teacher practice: (Re)considering the role of context in interventions in early childhood teacher engagement with new approaches to shared book reading. Education Policy Analysis Archives, 24. Retrieved from http://epaa.asu.edu/ojs/article/view/2166
Dewey, J. (1929). The sources of a science of education. New York, NY: Liveright.
Douglass, F. (1894). Blessings of liberty and education. Speech delivered in Manassas, VA. Retrieved from http://teachingamericanhistory.org/library/document/blessings-of-liberty-and-education/
Erickson, F. (2014, February 17). Scaling down: A modest proposal for practice-based policy research in teaching. Education Policy Analysis Archives, 22. Retrieved from http://epaa.asu.edu/ojs/article/view/1473
Feinberg, W. (2014). Critical pragmatism and the appropriation of ethnography by philosophy of education. Studies in Philosophy of Education, 34, 149–157.
Fraser, N. (2008). Scales of justice: Reimagining political space in a globalizing world. London, England: Polity.
Gamse, B., Boulay, B., Fountain, A., Unlu, F., Maree, K., McCall, T., McCormack, R. (2008). Reading First impact study: Final report. U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/pdf/20094038.pdf
Gates, B. (2009). Address to the National Conference of State Legislatures. Philadelphia, PA. Retrieved from http://www.gatesfoundation.org/media-center/speeches/2009/07/bill-gates-national-conference-of-state-legislatures-ncsl
Glass, G. (1987). What works: Politics and research. Educational Researcher, 16, 5–10.
Goldstein, D. (2017, March 9). Obama education rules are swept aside by Congress. The New York Times. Retrieved from https://www.nytimes.com/2017/03/09/us/every-student-succeeds-act-essa-congress.html
Gould, S. J. (1985). The median isn’t the message. Discover, 6, 40–42. Retrieved from https://people.umass.edu/biep540w/pdf/Stephen%20Jay%20Gould.pdf
Harris, E. (2016, September 7). Judge, citing inequality, orders Connecticut to overhaul its school system. The New York Times. Retrieved from https://www.nytimes.com/2016/09/08/nyregion/connecticut-public-schoolsinequality-judge-orders.html
Haskins, R. (2014, December 31). Social programs that work. The New York Times. Retrieved from http://www.nytimes.com/2015/01/01/opinion/social-programs-that-work.html
Hernandez-Chavez, E. (1988). Language policy and language rights in the United States. In Skutnabb-Kangas, T., Cummins, J. (Eds.), Minority education: From shame to struggle (pp. 45–57). Clevedon, England: Multilingual Matters.
Hicks, D. (2013). The road out: A teacher’s odyssey in poor America. Berkeley: University of California Press.
House, E. (1978). Evaluation as scientific management in United States school reform. Comparative Education Review, 22, 388–401.
Huey, E. B. (1968). The psychology and pedagogy of reading. Cambridge, MA: MIT Press. (Original work published 1908)
Individuals with Disabilities Education Act, 20 U.S.C. § 1400 (2004).
Institute of Medicine. (2012, September). Best care at lower cost: The path to continuously learning health care in America. Washington, DC: National Academies. Retrieved from http://nationalacademies.org/hmd/∼/media/Files/Report%20Files/2012/Best-Care/BestCareReportBrief.pdf
International Literacy Association. (2002). What is evidence-based reading instruction? Newark, DE: International Literacy Association. Retrieved from https://www.literacyworldwide.org/docs/default-source/where-we-stand/evidence-based-position-statement.pdf?sfvrsn=6
Investing in Innovation. (2009). Washington, DC: U.S. Department of Education. Retrieved from https://www2.ed.gov/programs/innovation/index.html?exp=0
Jaeger, E. L. (2016). Reproducing vulnerability: A Bourdieuian analysis of readers who struggle in neoliberal times. British Journal of Sociology of Education, 1–13. doi:10.1080/01425692.2016.1213158
James, W. (1904). Letter to Francois Pillon. In William James letters archive. Retrieved from https://archive.org/stream/lettersofwilliam02jame/lettersofwilliam02jame_djvu.txt
Kinloch, V. (2012). Crossing boundaries: Teaching and learning with urban youth. New York, NY: Teachers College Press.
Lieberman, R., Zubritsky, C., Martinez, K., Massey, O., Fisher, S., Kramer, T., … Obrochta, C. (2010). Issue brief: Using practice-based evidence to complement evidence-based practice in children’s behavioral health. Atlanta, GA: ICF Macro, Outcomes Roundtable for Children and Families. Retrieved from http://cfs.cbcs.usf.edu/_docs/publications/OutcomesRoundtableBrief.pdf
Loveless, T. (2012). How well are American students learning? The Brown Center Report on American Education. Washington, DC: Brookings Institution. Retrieved from https://www.brookings.edu/wp-content/uploads/2016/07/0216_brown_education_loveless.pdf
Luke, A. (2014). Defining critical literacy. In Pandya, J., Avila, J. (Eds.), Moving critical literacy forward (pp. 19–31). New York, NY: Routledge.
Martin, R., McClure, P. (1969). Title I of ESEA: Is it helping poor children? Washington, DC: Washington Research Project and NAACP Legal Defense and Education Fund.
Mathis, W., Trujillo, T. (2016). Learning from the federal market-based reforms: Lessons for ESSA. Charlotte, NC: Information Age.
Maxwell, J. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33, 3–11.
McCollum-Clark, K. (1995). National Council of Teachers of English, corporate philanthropy, and national education standards. Unpublished doctoral dissertation, Pennsylvania State University, State College, PA.
McDermott, R. (1996). The acquisition of a child by a learning disability. In Chaiklin, S., Lave, J. (Eds.), Understanding practice: Perspectives on activity and context (pp. 269–305). New York, NY: Cambridge University Press.
McLaughlin, M. (1975). Evaluation and reform: The Elementary and Secondary Education Act of 1965, Title I. New York, NY: Harper Collins.
Mills, H., O’Keefe, T., Hass, C., Johnson, S. (2014). Changing hearts, minds, and actions through collaborative inquiry. Language Arts, 92, 36–51.
Mintrop, H., Sunderman, G. (2009). The predictable failure of federal sanctions-driven accountability for school improvement and why we may retain it anyway. Educational Researcher, 38, 353–364.
National Academy of Education & Anderson, R. C. (1985). Becoming a nation of readers: The report of the Commission on Reading. Washington, DC: National Academy of Education.
National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform: A report to the nation and the Secretary of Education, United States Department of Education. Washington, DC: The Commission.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: NICHD. Retrieved from https://www.nichd.nih.gov/publications/pubs/nrp/documents/report.pdf
No Child Left Behind (NCLB) Act of 2001, 20 U.S.C.A. § 6301 et seq. (West 2003).
No Child Left Behind Act of 2001, P.L. 107-110, 20 U.S.C. § 6319 (2002).
Parker, F. (1884). Talks on pedagogics: An outline of the theory of concentration. New York, NY: Kellogg.
Pearson, P. D., Valencia, S., Wixson, K. (2014). The complicated world of reading assessment: Toward better assessments for better teaching. Theory Into Practice, 53, 236–246.
Pogrow, S. (2017). The failure of the U.S. education research establishment to identify effective practices: Beware effective practices policies. Education Policy Analysis Archives, 25, 1–19. Retrieved from http://epaa.asu.edu/ojs/article/view/2517
Pressley, M. (2005, December 15). The rocky road of Reading First: Another chapter in the long history of complaints about the federal reading efforts. Education Week, 25, 24–25.
Reagan, R. (1983, April 26). A nation at risk report. White House press release. Retrieved from http://www.presidency.ucsb.edu/ws/?pid=41239
Reardon, S. (2011). The widening academic achievement gap between the rich and poor: New evidence and possible explanations. In Duncan, G., Murnane, R. (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 91–116). New York, NY: Russell Sage Foundation.
Shannon, P. (2008). Reading against democracy: The broken promises of reading instruction. Portsmouth, NH: Heinemann.
Shannon, P. (2017). Progressive reading education in America: Teaching toward social justice. New York, NY: Routledge.
Shannon, P. (in press). Common core standards. In Noblit, G. (Ed.), Oxford research encyclopedia of education. New York, NY: Oxford University Press.
Smyth, K., Schorr, L. (2009). A lot to lose: A call to rethink what constitutes evidence in finding social interventions that work. Cambridge, MA: Harvard Kennedy School. Retrieved from https://www.hks.harvard.edu/ocpa/pdf/A%20Lot%20to%20Lose%20final.pdf
Sparks, S. (2015, November 11). Study: RTI practice falls short of promise. Education Week. Retrieved from http://www.edweek.org/ew/articles/2015/11/11/study-rti-practice-falls-short-of-promise.html
Swartz, A., Stiefel, L. (2011). Immigrants and inequality in public schools. In Duncan, G., Murnane, R. (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 419–442). New York, NY: Russell Sage Foundation.
Swisher, A. (2010, June 21). Practice-based evidence. Cardiopulmonary Physical Therapy Journal. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2879420/
Thomas, J., Brady, K. (2005). The Elementary and Secondary Education Act at 40: Equity, accountability and the evolving federal role in public education. Review of Research in Education, 29, 51–67. doi:10.3102/0091732X029001051
Vannini, P. (2008). Critical pragmatism. In Given, L. (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 160–163). New York, NY: Sage.
Vigor, J. (2011). School desegregation and the black-white test score gap. In Duncan, G., Murnane, R. (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 443–464). New York, NY: Russell Sage Foundation.
Walsh, C., Reutz, J., Williams, R. (2015). Selecting and implementing evidence-based practices: A guide for child and family serving systems. San Diego, CA: California Evidence-Based Clearinghouse for Child Welfare. Retrieved from http://www.cebc4cw.org/implementing-programs/guide/

Author Biographies

Karen Eppley is an associate professor of Curriculum and Instruction at Penn State University. Dr. Eppley’s research interest is at the intersection of literacy education and rural education. She is a former fifth-grade classroom teacher.

Patrick Shannon has taught, worked with teachers, and conducted research across North America. A former preschool and primary grade teacher, he is the author, coauthor, or editor of nine books.