Open access
Research article
First published online February 11, 2024

Critical Quantitative Literacy: An Educational Foundation for Critical Quantitative Research

Abstract

Education research has recently seen the emergence of two distinct frameworks guiding the application of quantitative methods through a more critical and equity-oriented lens. These two frameworks are critical quantitative (CritQuant) studies and quantitative critical race theory (QuantCrit). Although different in their intellectual traditions, they both acknowledge the oppressive history of quantitative methods and the need to improve the criticality of quantitative research in education. For applied quantitative research in education to become more critical, it is imperative that learners of quantitative methodology be made aware of its historical and modern misuses. This directive calls for an important change in the way quantitative methodology is taught in educational classrooms. Critical quantitative literacy (CQL) is introduced in this manuscript as a paradigm for teaching, learning, understanding, and applying quantitative methods in a way that supports the application of CritQuant and QuantCrit frameworks in educational research.

Introduction

Quantitative research in the social sciences is undergoing a change. After years of scholarship on the oppressive history of quantitative methods, quantitative scholars are grappling with the ways that our preferred methodology reinforces social injustices (Zuberi, 2001). Among others, the emerging fields of CritQuant (critical quantitative studies) and QuantCrit (quantitative critical race theory; both articulated below) address these challenges through the application of critical perspectives to quantitative research, particularly within education (Tabron & Thomas, 2023). Although these parallel frameworks have points of departure, they agree on a key issue: The application of quantitative methods should incorporate a more critical lens.
Education in quantitative methodology in many social science departments has prioritized quantitative literacy in the form of mathematics, programming, and high-level interpretation, often taking the epistemological and ontological aspects of statistical methods for granted. Critical literacy, which concerns the ability to read the world in ways that recognize and challenge systems that perpetuate injustice and inequality, is often developed in more qualitatively oriented courses where critical theories are introduced. Said another way, quantitative and critical literacies are seldom developed in tandem—something this manuscript aims to change by introducing and defining critical quantitative literacy. The gamut of quantitative methods education has, until recently, been mostly uninformed by critical theory (Arellano, 2022). Moreover, quantitative methods are often taught under the implicit assumptions that the methods are objective and that the numbers speak for themselves. These assumptions are false, harmful, and unnecessary, and they compromise the rigor of quantitative research in the social sciences (Garcia et al., 2018; Gillborn et al., 2018).
The history and critiques of quantitative methods are not new, and many scholars have admirably called for reconciliation between critical theory and quantitative methods (e.g., Dixon-Román, 2017). Some scholars have encouraged the integration of Indigenous methodologies to challenge Western ethnocentric assumptions (e.g., J. D. Lopez, 2021; Smith, 2012; Walter & Andersen, 2013). Data literacy scholars have also called for social justice around the production and consumption of data (e.g., Dencik et al., 2019). For applied quantitative research in education to become more critical, learners of quantitative methodology must be made aware of its historical and modern misuses. I join Arellano (2022), Tabron et al. (2020), Wise (2020), and many others across various research spaces in calling for a critical reimagining of how statistical methods are taught in education classrooms. The aim of this manuscript, therefore, is to suggest a paradigm for teaching quantitative methods focused on developing critical quantitative literacy. I formally define critical quantitative literacy, or CQL, as the critically informed understanding of the scope of quantitative methodology, including, but not limited to, statistical research design, definitions, variables, methods, and findings. CQL is critical theory–agnostic but may include, for example, feminist theory, queer theory, or critical race theory, and the theory should challenge oppression and prioritize equity. I argue that pedagogy for CQL can be adopted in any quantitative classroom, ideally beginning with introductory statistics coursework, but it is also suitable for advanced statistics and research design. Finally, I hold that developing CQL serves as an important support for the quantitative side of CritQuant and QuantCrit scholarship as well as for the consumption of statistics in everyday life. Examples and lines of inquiry are offered throughout this manuscript.
This manuscript is structured as follows. In accordance with good practices of CritQuant and QuantCrit, I first offer a positionality statement to situate myself, my influences, and my biases within the context of this manuscript (Castillo & Gillborn, 2022; Diemer et al., 2023). Second, I discuss the history of quantitative methods to motivate the need for CQL. Third, I introduce the emerging fields of QuantCrit and CritQuant by providing supporting scholarship and tenets. Fourth, I suggest five fundamental considerations for developing CQL: definitions, mathematics, assumptions, design, and language. Examples of how each may appear in statistics classrooms are provided. Fifth, I differentiate CQL from CritQuant and QuantCrit and suggest the role of CQL in supplementing these two quantitative frameworks. Finally, I conclude with thoughts on the scope of CQL and its potential impact on educational scholarship.

Author Positionality

Positionality statements aim to illuminate, to the reader and the author(s), how an author’s identities and professional background interface with the context of the research being presented. Such statements are common in qualitative research studies, yet they are scarce in quantitative studies due partly to the misconception that quantitative studies are objective and that author positionality plays no role (Castillo & Gillborn, 2022). I include a positionality statement in this manuscript because I believe that neither this work nor any other research is wholly objective. Moreover, I endorse the inclusion of such statements as an essential part of producing CQL research, and I encourage such practices in other quantitative studies. I write in the first person to underscore the personal and subjective nature of this statement.
I produced this manuscript from the position of numerous privileged social identities, including that of being a cisgender, heterosexual, White male. My academic and professional backgrounds consist of philosophy, mathematics, statistics, and educational studies. I have spent more than a decade teaching statistical methods to university students, and I have applied statistical methods to academic research for most of my career. Moreover, I have no intention to stop applying them to academic research, despite recognizing their flaws. Rather, I aim to acknowledge these flaws and revisit, revise, and repurpose quantitative methods from a critical and equity-focused perspective. My doctoral studies in a large school of education introduced me to numerous critical theories, some of which were predicated on philosophies familiar from prior studies. My familiarity with philosophy, mathematics, and statistics predates my growing knowledge of critical theories. I was not taught mathematics or statistics from a critical perspective, and I was surprised to learn of their history later in my career. I attribute my ignorance to my more privileged socialization and to the nonexistence of such a paradigm as CQL. Over time, I became increasingly familiar with the emerging fields of CritQuant and QuantCrit. As I read through the research in these fields, I had numerous points of agreement, disagreement, and confusion. Some of these reactions were due to my more privileged socialization that constructed systems of maintaining ignorance (Sullivan & Tuana, 2007), and some of them were due to differences in my understanding of the strengths, weaknesses, context, and limits of quantitative methods. The idea of developing CQL arose from these tensions, including the objective to improve the criticality of quantitative methods in its goals, design, assumptions, findings, and language.

History of Quantitative Methods

Perhaps unbeknownst to many educated in traditional quantitative environments, statistical application in the social sciences began with the eugenics movement. Early pioneers of statistical methodology, such as Francis Galton, Karl Pearson, and Ronald Fisher, developed and adapted the methodology to justify the atrocities of slavery and European colonialism (Zuberi, 2001). The goal of statistics, as applied in the social sciences, was to use the “objectivity” and access to “truth” provided by the mathematical sciences to “prove” the racial and cultural superiority of Europeans versus those who were colonized by them (Zuberi & Bonilla-Silva, 2008). Discussions about how to deal with this legacy continue in professional statistics communities (Langkjær-Bain, 2019).
The overt racism embedded in eugenics-based research was publicly repudiated as recently as roughly 70 years ago, around the end of World War II (Zuberi, 2001). However, the eugenicist ideas introduced earlier did not go away. Psychometrics and intelligence testing, both of which continue to have public support and a troubling eugenicist history, became their new home (Hilliard, 1990). Psychometric methods, often used to justify and perpetuate intelligence or aptitude testing through technocratic gatekeeping, have undergone substantial development since the 1950s. Yet prior to this, such eugenicists as Lewis Terman devised and used intelligence testing with the goal of identifying candidates for sterilization (Helms, 2012; Terman, 1922, 1924). In the early 20th century, these ideas were used to justify thousands of sterilizations in the United States (Stephens & Cryle, 2017). The supremacist myth of intelligence lives on today, notably supported by such books as The Bell Curve (see Helms, 2006; Zuberi & Bonilla-Silva, 2008). These myths persist despite a wealth of research surrounding disparate educational access (e.g., Ladson-Billings, 2006), critiques of intelligence as an existing and unified construct (Schlinger, 2003), and heightened standards regarding consequential and other forms of validity in psychometric testing (AERA/APA/NCME, 2014).
Beyond their eugenicist roots, applied quantitative methods have a history of discriminating in other ways. For example, quantitative assessment has been used to gatekeep entry into universities and professions. For decades, students’ grades, which are subject to teachers’ racial and other biases, have been used alongside aptitude tests as a meritocratic signal of college worthiness (Childs & Wooten, 2022). In the legal profession, the bar examination, which is taken to provide licensure to practice law after successfully completing law school, was established for the very purpose of limiting access to mostly White American male citizens (Root et al., 1916). By design, the application of quantitative methods also has a history of dismissing “outlying” observations (who are often people), discussing such non-manipulable variables as race as causal, and uncritically producing misleading findings with social and political consequences (Arellano, 2022; Crawford, 2019; Holland, 2003). These contexts provide an eye-opening backdrop for the contemporary push to bring criticality into quantitative methods research and education.

Emerging Fields of CritQuant and QuantCrit

This manuscript considers the development of CQL as supporting two distinct frameworks being used to integrate critical theories with quantitative methods: CritQuant and QuantCrit. Although there are others, such as Indigenous methodologies (Walter & Andersen, 2013), their integration with CQL is left to scholars better able to speak to these topics. For now, this work limits its scope to CritQuant and QuantCrit, beginning with CritQuant. A richer description of CritQuant can be found in the work of Tabron and Thomas (2023); see Gillborn et al. (2018) for QuantCrit. Additionally, these ideas have been contrasted in a recent editorial in Review of Educational Research (Boveda et al., 2023). Please see these sources for further discussion.
Early iterations of what has become CritQuant can be traced back to two issues of New Directions for Institutional Research (Stage, 2007; Stage & Wells, 2014). In 2007, Stage introduced the idea of the quantitative criticalist in higher education research, defining such researchers as being more concerned with the [critical] questions asked than the [quantitative] methods used. Such questions, Stage argued, should illuminate conflict and develop critique by using quantitative methods to advance theory and policy. Stage suggested that quantitative criticalists operate from a critical quantitative framework, which is differentiated from the traditional positivist framework by its pursuit of investigation and equity rather than explanation through fair and objective methodology. Kincheloe and McLaren (1994) provided Stage’s supporting framework, suggesting many features of research, such as subjectivity and the inseparability of facts and values. This work marked an early abandonment of the view of quantitative methods as objective, and these ideas would later appear in more formal CritQuant tenets (Diemer et al., 2023).
Baez (2007) was another early contributor to CritQuant research who interrogated what it means to be critical in research, thereby sparking the need to define criticality and the openness to various critical theories. Most important to Baez was how research can be critically transformative, asking how it can offer critiques of society so that it can be transformed and improved. Baez argued that critical research is inherently political and that critical scholars must consider the privilege and authority that their words carry in their capacity to liberate and to oppress. In this sense, the critical scholar should be reflective and self-reflective. Taken together with Stage (2007), this work suggests that the early CritQuant scholar can be loosely described as a self-reflexive researcher who is critically reflective on the existence and perpetuation of social inequality and who uses quantitative inquiry to illuminate and challenge these inequalities with the aim of social transformation.
A defining feature of CritQuant scholarship is that the form of inequality and social transformation it focuses on is not predetermined by the framework. Social transformation is central to CritQuant, and social transformation must be toward equity, but the type of equity focused on in CritQuant research is left to the researcher. Accordingly, the foci and guiding critical theories within CritQuant research may vary.
In contrast to CritQuant, quantitative critical race theory, or QuantCrit, is a quantitative instantiation specifically of critical race theory (CRT; Garcia et al., 2018; Gillborn et al., 2018). Although space and context prohibit a complete detailing of the scope and history of CRT, suffice it to say that it has had a tremendous impact on modern educational scholarship. CRT can be traced back to its roots with scholars of color in critical legal studies, such as Derrick Bell, Kimberlé Crenshaw, and Mari Matsuda (e.g., see Matsuda et al., 1993). Before them, W. E. B. Du Bois (1899) applied quantitative research methods to questions around racial equity. As a framework, CRT tells us that race is a social construct and that racism is embedded in legal policies and other social systems. A corollary is that race is not readily quantifiable and that quantitative research involving race ought to be critical toward its treatment of race and interpretation of its conclusions. Other important ideas emerging from CRT include the use of counter-stories to challenge and expose dominant narratives, as well as intersectionality, which describes how oppression manifests differently along interconnected lines of other identities, such as gender, class, and disability.
QuantCrit scholars are explicit that QuantCrit scholarship is traceable to Du Bois (1899) and that the framework’s tenets are an extension of the well-established CRT tenets into quantitative research. These tenets, along with some examples of how they have been taken up in QuantCrit scholarship, are (a) the centrality of racism in data, research, and society (N. López et al., 2018; Pérez Huber & Solorzano, 2015); (b) the non-neutrality of numbers (Gillborn, 2010); (c) that categories, such as race, are neither natural nor given in quantitative research (Sablan, 2019); (d) that the numbers do not and cannot speak for themselves (Covarrubias & Vélez, 2013; Solórzano & Yosso, 2002); and (e) the use of numbers for social justice (Crawford, 2019). For future research, Castillo and Gillborn (2022) offered suggestions for how to implement QuantCrit in educational scholarship.
Scholars from CritQuant and QuantCrit have made a compelling case against the objectivity of quantitative research in social science. They have argued that quantitative calculations can reify the human bias embedded in data and that automated arithmetic is insufficient for remediating these biases. Worse yet, clinging to the naïve belief that quantitative findings are objective reinforces systems of privilege and oppression. Again, the subjectivity of quantitative research is a noteworthy departure from the axiological tradition of viewing it as objective.
The most apparent point of divergence between these frameworks rests with the choice of critical theory to animate them. QuantCrit is explicitly an extension of CRT, wherefrom it draws its guiding tenets. CritQuant is developed out of conflict theory and is open to critical theories other than CRT (Boveda et al., 2023). For example, Garvey et al. (2019) integrated feminist and queer theory into CritQuant to examine how data on gender and sex are collected and operationalized within higher education. Because race was not the focus in their study, CritQuant offered an alternative critical quantitative framework. Indeed, due to the absence of a centrally informative critical theory, such as CRT, applied CritQuant research must draw its guiding tenets from the critical theory informing the work being undertaken. Efforts are being made to advance a more formalized CritQuant framework (e.g., Diemer et al., 2023).
More important to CQL, QuantCrit and CritQuant call for a dramatic reimagining of the way quantitative methods are viewed and understood in educational research and, therefore, taught within classrooms. Seldom is the methodological history introduced, nor is it discussed how early racist thinking may have informed the mathematics therein. Moreover, quantitative methods still enjoy the privileged guise of objectivity in terms of political treatment (e.g., research funding) and public perception. Both frameworks call for scrutiny of the data itself, along with how those data are collected and analyzed. CritQuant explicitly calls for a deeply informed background in quantitative methods. Said another way, the heightened scrutiny and demand for criticality and rigor in education quantitative research call for an increase in critical quantitative literacy.

Defining Critical Quantitative Literacy

Loosely speaking, CQL can be thought of as the ability to read and produce quantitative research with a critical eye toward remediating the ways in which quantitative methods continue to perpetuate an oppressive status quo. CQL is formally defined as the critically informed understanding of the scope of quantitative methodology, including but not limited to statistical research design, definitions, variables, methods, and findings. The term is deliberately broad and inexhaustive; therefore, elaborating on definitional components may help communicate the breadth of content covered by CQL. The goal of this elaboration is to provide enough detail so that it can be adopted as a guide for critical quantitative education without being overly prescriptive. An essential part of this education is that CQL should impart an understanding of the epistemological, ontological, and axiological roots of quantitative methods as described by the CritQuant and QuantCrit frameworks (see Tabron & Thomas, 2023). This understanding should be embodied in the classroom and is not something to be added to or subtracted from a lesson. CQL, in essence, contextualizes quantitative education. All three roots are present throughout the following definitional components and considerations.

Critically Informed

Every aspect of the quantitative research enterprise in the social sciences can have a direct mapping onto real consequences for real people in the real world. The outcomes of quantitative research may challenge the systems that marginalize individuals, or they may perpetuate that marginalization. Researchers cannot be tasked with omniscience, but they can be cognizant of the quantitative decisions they are making and consider how these decisions may translate to real people and real consequences. Such cognizance requires careful attention to detail and scrutiny at each stage of the quantitative research continuum. This scrutiny is supported by the insights of critical theories, such as feminist theory and CRT.

Understanding

The word understanding here is intended to encompass full and thoughtful consideration of the early, middle, and later components of quantitative research, including the logical throughline of the entire quantitative research process. Early axiological and ontological components include, for example, the research questions being asked, the research design and objectives, and the data being collected. The middle components of quantitative research include the data cleaning decisions, the analytical decisions (e.g., outlier removal, data dis/aggregation), and choice of analytic tools. Some later epistemological components of quantitative research include the purely quantitative interpretation of findings, the narrative around the interpretation of findings, and the dissemination of those findings. Important decisions are made at each step, and CQL requires critical awareness throughout. The potential impact of every decision should be given careful thought.

Statistical Research Design

Designing statistical research requires the awareness that in order to use quantitative methods to answer research questions, complex social phenomena must be distilled into measurable variables (ontology). In doing so, decisions must be made, and these decisions introduce external and researcher biases into the research design. Such introductions cannot be avoided, and CQL requires an analysis of the implications. Additionally, before a design is ever considered, research questions must be formulated, and these questions ought to be critical and equity-oriented in nature. These aspects of statistical research design must be met with the elements of the previously mentioned understanding. Key considerations of statistical research design include the questions of what, where, why, how, and for whom.

Definitions

From the beginning stages of formulating research questions to the final stages of presenting quantitative research findings, variables and terms are being defined (and sometimes redefined) by the researcher. Many of these, such as racial or gender categories, are crudely approximated and heterogeneous monoliths that fail to reflect the diversity of reality (e.g., Garvey et al., 2019; Philip et al., 2016; Zuberi & Bonilla-Silva, 2008). Interpretation of quantitative findings relies on meaningfully defined terms. To the extent that these definitions have not been considered and articulated, the findings do not merit a clear substantive interpretation (epistemology and ontology). CQL requires cautious awareness of how variable definitions limit or enhance quantitative research.

Variables

Variables encompass what has been included and omitted from the statistical model. Moreover, CQL investigates why variables have been included or omitted and the potential impact of these choices on the research. CQL does not burden researchers with the impossible task of including every relevant variable, but it does task them with being aware that a variable’s exclusion does not preclude its influence on quantitative findings. Additionally, variables require measurement, which exists in multiple forms, such as Likert scales, self-identification, and open responses. Often, such as with Likert scales, these measures provide crude quantification of socially complicated phenomena. The way variables are measured carries over into model assumptions, performance, and critical interpretation.

Methods

Although considerable attention is paid to the choice of quantitative methodology, considerably less attention is paid to satisfying the mathematical assumptions of these methods, how these methods function (i.e., the internal mathematics), and how inattention to either of these affects social justice when interpreting the findings. Exploratory data analysis may reveal, for example, the (in)appropriateness of linearity assumptions or whether linearity holds for everyone in the sample. It may also reveal differential response patterns in measurement (e.g., Culpepper & Zimmerman, 2006). Exploring unmet assumptions may also reveal biases that tilt toward whoever forms the majority of the sample, such as the masking of heterogeneous effects between groups. Satisfying mathematical assumptions is essential for conducting any rigorous quantitative research, but it is doubly important for CQL because failure to meet these assumptions can have consequences that are counterproductive for critical and equity-focused work.
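As a brief illustration of how pooled analyses can mask group-level heterogeneity, consider the following sketch with simulated data (the groups and effects are invented for illustration; Python with numpy is assumed):

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Two groups whose outcome responds to x in opposite directions.
    n = 200
    x_a = rng.normal(0, 1, n)
    x_b = rng.normal(0, 1, n)
    y_a = 2.0 * x_a + rng.normal(0, 1, n)    # positive slope for group A
    y_b = -2.0 * x_b + rng.normal(0, 1, n)   # negative slope for group B

    # A pooled fit reports a slope near zero, hiding both group-level effects.
    x_all = np.concatenate([x_a, x_b])
    y_all = np.concatenate([y_a, y_b])
    print(np.polyfit(x_all, y_all, 1)[0])   # approximately 0
    print(np.polyfit(x_a, y_a, 1)[0])       # approximately +2
    print(np.polyfit(x_b, y_b, 1)[0])       # approximately -2

Simple group-wise exploration reveals what the aggregate model conceals.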

Findings

Understanding the findings includes not only recognizing the most precise mathematical interpretation of statistical results from the method chosen for analysis but also having a contextualized interpretation of these results, given the many decisions made around methods, variables, definitions, designs, and subjective biases. Knowing the mathematics of statistical analysis is important, but many other parts of the quantitative research pipeline also influence the findings. Awareness and careful interpretation provide rigor and criticality to the work, encourage caution toward overinterpretation, and underscore the need for replication in quantitative research.

Fundamental Considerations for Developing CQL

The ways in which educators of quantitative research methods can reimagine the way content is conceptualized and presented in their classrooms are endless. In fact, the core quantitative content of most methods courses need not change. Developing CQL only requires changes in the way content is contextualized, framed, presented, understood, and prioritized, such that learners of quantitative methodology can couple and apply it with axiological, ontological, and epistemological insights from critical theories, such as CRT. The following are some considerations that might be incorporated for cultivating CQL in a quantitative methods classroom. In many cases, these considerations also translate into a more rigorous and careful application of statistical methods writ large.

Unpacking the Statistical Definitions

Perhaps the most familiar statistic in quantitative methods is the arithmetic mean (henceforth, mean). Often, the mean is one of the more intuitive statistics for learners in introductory courses and is defined as the sum of values within a set of numbers divided by the total number of entries in the set. Students (and educators) seldom critically interrogate the mean, which is unfortunate because much more can be said about it, particularly from a critical perspective. For example, the purpose of the mean is to estimate what is happening around the center of the data, but by its mathematical definition, and by virtue of being a point estimate, it obscures what is happening at the fringes of a set of values. Said another way, the mean, as a single “representative” number, effectively hides the distribution of the data. Indeed, the goal of the mean is to identify a point of central tendency. But in a school district where some students are performing exceptionally well or poorly in comparison to the rest of the district, for example, the mean has ontological and axiological implications. Appealing to the mean of the district masks real scores by real students who are having real and different experiences. Much more is happening in schools and districts than mean scores can convey, and we miss seeing the trees for the forest. Moreover, means can be increased in ways that benefit only some students in the set, which risks exacerbating disparities between privileged and marginalized students if the fuller distribution goes unexamined. Given the mean’s ubiquity throughout quantitative methods, this phenomenon is not limited to descriptive statistics, and critical quantitative scholars ought to be keenly aware of this obfuscation. Indeed, the mean forms the basis of most statistical analyses (e.g., linear regression models) and serves as the typical point of comparison for hypothesis tests, such as t tests. The purpose of this discussion, from a CQL perspective, is to develop an awareness of what the number does, what it represents, what it does not tell you, and why these pieces of information are important and, accordingly, to identify useful information to supplement the mean.
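To make the point concrete, consider a minimal numerical sketch (the scores are invented for illustration, and Python with numpy is assumed):

    import numpy as np

    # Two hypothetical districts with identical mean scores but very different distributions.
    district_a = np.array([70, 71, 72, 73, 74, 76, 77, 78, 79, 80])
    district_b = np.array([40, 45, 50, 55, 60, 90, 95, 100, 105, 110])

    print(district_a.mean(), district_b.mean())   # both 75.0
    print(district_a.min(), district_a.max())     # 70, 80
    print(district_b.min(), district_b.max())     # 40, 110

Reporting only the mean of 75 treats these two districts as interchangeable, even though the second contains students scoring far below and far above anything observed in the first.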
A related definition to consider might be statistical variance. It is, by definition, a sample’s average squared deviation from its mean. Perhaps because the value itself is squared and superficially fails to intuitively convey much about the distribution of the data, variance is often treated as a mere means to the standard deviation. But again, this view is unfortunate because the way in which the variance is defined makes ontological claims about meaningful ways of describing diversity and heterogeneity within a sample.1 Variance is defined in relation to the mean, as opposed to another measure of central tendency, such as the median or mode. And, as already suggested, the mean obscures the general distribution of a sample, despite its relative sensitivity to highly deviant values. This definition of variance is, like the mean, propagated throughout the breadth of statistical methods, thereby normalizing quantitative methods’ focus on the mean. For example, variance serves as the basis for the F test used in analysis of variance. As with the mean, the goal from a CQL perspective is to place the definition and interpretation into a critical context, recognizing the critical implications of the variance being defined around the mean and supplementing the mathematical definition with a critical understanding of its implications.
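Stated in symbols, the usual sample variance is

    s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2

(division by n gives the uncorrected form). The choice of the mean \bar{x} as the reference point, rather than the median or another center, is precisely the definitional decision at issue: a different reference point would describe the diversity within a sample differently.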

Unpacking the Math

Slowing down to contextualize the mathematical formulae found in quantitative research methods is essential for building CQL and for reimagining how these tools should be used. The goal is to read the mathematical machinery through a critical lens. For example, an educator might ask which part of the mean’s equation led it to obscure the values found in the tails of a distribution. The answer may be twofold. First is the invisible (and unnecessary) equal weighting of each observation in the data set. Second, the sum obtained in the numerator is divided by the total number of observations n. Coupled with the equal weighting, division by n accentuates the most dense regions of the data. Responses farther from the mean, although influential, are fewer in number and thus are less represented by the point estimate that is the mean. Moreover, division by n (or n – 1) is common in other statistics, including the variance and some effect sizes. Careful consideration of who is included in that n is a suggested practice in QuantCrit (Castillo & Gillborn, 2022).
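Writing the implicit weights explicitly makes this visible:

    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} w_i x_i, \qquad w_i = \frac{1}{n}.

Every observation is forced to carry the same weight 1/n, regardless of where it falls in the distribution, so the densest region of the data dominates the resulting point estimate.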
Other opportunities to unpack the math arise and illuminate ways in which biases, inequities, and hasty generalizations can seep in. For example, it is well known that the n in the denominator of standard error calculations, coupled with overreliance on p values as arbiters of statistical significance, can lead to trivial differences in means whose epistemological “significance” is not placed in context (Nuzzo, 2014; Ziliak & McCloskey, 2008). Moreover, differences with p values that fall above conventional thresholds for statistical significance (α = .05) are not unimportant. The difference between p values of .049 and .051 is itself trivial, and many factors can affect p value calculations (Gelman & Stern, 2006). Alternatively, consider that most regression models make such assumptions as additivity and linearity. These assumptions are fine but represent a very specific relationship between variables that is taken for granted and often unexplored. In psychometric measurement, subsets of eigenvalues and eigenvectors are used to approximate complex relationships between variables, represented by shared (mean-centric) variance, often on limited-range Likert-scale data, and carry traditional assumptions, such as linearity and residual normality. Yet we often uncritically give these derived variables labels and obscure the underlying mathematics. This practice is part of what is referred to as the jingle-jangle fallacy (Kline, 2016).
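A brief simulation can make the large-n phenomenon tangible. This is only a sketch, with invented group labels and a deliberately trivial true difference; it assumes Python with numpy and scipy available:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)

    # Two groups whose true means differ by a trivial 0.03 standard deviations.
    n = 100_000
    group_1 = rng.normal(loc=0.00, scale=1.0, size=n)
    group_2 = rng.normal(loc=0.03, scale=1.0, size=n)

    t, p = stats.ttest_ind(group_1, group_2)
    pooled_sd = np.sqrt((group_1.var(ddof=1) + group_2.var(ddof=1)) / 2)
    cohens_d = (group_2.mean() - group_1.mean()) / pooled_sd

    print(p)         # typically far below .05 because n is enormous
    print(cohens_d)  # around 0.03, a negligible standardized difference

The p value crosses the conventional threshold while the effect size reminds us that the difference itself is substantively tiny.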
Building CQL also means building the awareness that quantitative methods do not have to be limited by many of these assumptions. Exploratory data analysis is a powerful tool that can help uncover, for example, differential response patterns between groups (e.g., Culpepper & Zimmerman, 2006). Moreover, despite their heightened difficulty in interpretation, nonlinear models exist and can be adopted by researchers. Similarly, effect sizes exist to help contextualize mean differences, and awareness of their mathematical functioning can help scholars critically discuss mean differences in scientific research. In psychometrics, such tools as robust estimators and multiple group modeling help mitigate unmet statistical assumptions or analyze differences in measurement along group-based lines. The mathematics undergirding quantitative methodology are riddled with definitions and assumptions that are important for building insightful CQL. Importantly, these definitions and assumptions render epistemological claims from quantitative research more ambiguous and uncertain than often believed.

Unpacking the Assumptions

Quantitative research methods include at least two types of assumptions: mathematical assumptions, such as the homoscedasticity of residuals in linear regression models, and philosophical assumptions, such as the intrinsic value in the variables being used as part of the quantitative inquiry. Mathematical assumptions are often discussed in quantitative methods classrooms, but the philosophical assumptions are often taken for granted. Both have important implications for building CQL.
There are too many mathematical assumptions in quantitative research methods to address here. Moreover, these assumptions vary, depending on the method being discussed. For the sake of illustration, consider the assumptions of homoscedastic and normally distributed residuals in a linear regression model. Besides simply knowing that these are mathematical assumptions of the linear regression model, emphasizing their importance from a critical perspective can help develop CQL. The interpretation of non-normal residuals, for example, may change, depending on the residual distribution’s shape, but in general, non-normal residuals imply that the model is performing poorly for some individuals (observations). This result could mean that the model is making poor predictions for some individuals (given by standardized residuals far from zero), or it could mean that the model is systematically over- or underpredicting for individuals within some range of the data if the residual plot is skewed. Augment this information with the interpretation of heteroscedastic residuals, which imply that the regression model is systematically performing better for some ranges of the data than for others. In both cases, axiological issues around equity arise when we ask such questions as Who is being poorly modeled? To the extent to which model diagnostics go unexplored and unreported, researchers bury the answers to such questions. Unpacking the math also provides an opportunity to discuss these questions. The definition of a residual—the difference between an observed and predicted value—makes it clear that residuals systematically above or below zero, residuals many standard deviations above or below zero, or residuals exhibiting disparate variances within different regions of the data come from a model that is favoring some members of the data over others. CQL places these diagnostics into critical context to underscore their importance.
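The following sketch illustrates how such diagnostics might be inspected. The data, group variable, and model are all simulated for illustration; Python with numpy is assumed:

    import numpy as np

    rng = np.random.default_rng(seed=2)

    # Simulated data in which one pooled linear model fits one group far better than another.
    n = 300
    group = rng.integers(0, 2, n)                 # two groups, coded 0 and 1
    x = rng.normal(0, 1, n)
    noise_sd = np.where(group == 0, 0.5, 3.0)     # group 1 is modeled much more noisily
    y = 1.0 + 2.0 * x + rng.normal(0, noise_sd)

    # Fit a single pooled regression and examine residual spread by group.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (intercept + slope * x)
    print(residuals[group == 0].std())   # small: the model serves group 0 well
    print(residuals[group == 1].std())   # large: group 1 is being poorly modeled

Asking who sits in the wide-residual group is exactly the equity question the diagnostics would otherwise bury.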
Philosophical assumptions in quantitative research methods often go unexamined because they are not mathematical and, therefore, can evade discussion in quantitative methods classrooms. However, they are crucial for meaningful statistical inference and, therefore, a fundamental part of developing CQL. For example, if a researcher wants to compare students’ performance on a statewide assessment along racial and ethnic lines, they are making implicit ontological assumptions about the meaningfulness of the statewide assessment and the classification mechanism for racial and ethnic groups. If such assumptions about the meaningfulness of these things were not made, there would be no point in asking the question. The answer would be irrelevant, ambiguous, and/or nonsensical. For example, if it is held that the statewide assessment fails to provide important, accurate, or relevant information about the examinee, and/or if the classification mechanism for racial and ethnic groups is unacceptably crude or inconsistent, then performance differences between groups provide no useful information, irrespective of statistical significance. Yet we, as quantitative researchers, make these assumptions all the time, such as with meaningful racial silos, despite substantial arguments to the contrary and alternative recommendations (James, 2001; Sen & Wasow, 2016; Zuberi & Bonilla-Silva, 2008). Whether in one’s own research or in the consumption of others’, what is important for CQL is that quantitative scholars are cognizant of these assumptions and explicit about how they function.
Other philosophical assumptions are also dormant in quantitative research. Taking all prior assumptions for granted, if a researcher were to claim a statistically significant difference between racial or ethnic groups on a statewide assessment, then some authority is afforded to the significance level α to arbitrate what is statistically relevant. This assumption is epistemological in that if a p value falls on one side of α, we believe that we have learned something new about the world. But if the p value falls on the other side of α, we believe that we have learned nothing. Reframing statistical inference outside this arbitrary classification can help build CQL in that it contextualizes the epistemology of statistical inference for what it really is: a probability claim predicated on numerous mathematical and philosophical assumptions, each of which needs investigation and scrutiny.

Unpacking the Design

Designing a study to answer questions by using quantitative methods is difficult and requires ontological sacrifices to translate a complex reality into numbers. These challenges have been suggested elsewhere in this manuscript, yet other important considerations for CQL include the two closely related issues of the data collection mechanism and the analytic sample. These issues are farsighted in that they have generalization in mind and are concerned with for whom results will be valid (other CQL considerations notwithstanding). Random sampling from a population of interest is often the preferred approach to ensure that findings are generalizable, and deviations from a random sample can reconfigure and obfuscate the population being represented. Yet in many social settings, random sampling is not feasible, if not altogether unethical. Self-selection and convenience sampling can become the norm, thereby limiting, or altering, the generalizability of a statistical analysis.
Other important considerations of research design include the mechanism by which data are collected, and how this mechanism may relate to participants’ responses. For example, psychological phenomena, such as stereotype threat, are well known for their downward impact on evaluation scores for more marginalized individuals (Nguyen & Ryan, 2008). Alternatively, other psychological phenomena, such as desirability bias, are known to skew responses to survey questions pertaining to sensitive topics (Grimm, 2010). The insight of CQL is to recognize that numbers are not generated in a vacuum. Consideration must be given to the influences on these numbers when using them for statistical analysis. These ideas are shared with the data literacy scholarship, within which some scholars have called for an increased criticality around the production and consumption of data itself (e.g., Irgens et al., 2020; Pangrazio & Selwyn, 2019). The key realization is that whether explicitly or implicitly, decisions are made about what data to collect and how to collect it. These decisions are not neutral and have implications for quantitative output.
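A small simulation illustrates how self-selection alone can distort even a simple estimate. All quantities are invented for illustration, and Python with numpy is assumed:

    import numpy as np

    rng = np.random.default_rng(seed=3)

    # A hypothetical population of satisfaction scores.
    population = rng.normal(loc=60, scale=15, size=100_000)

    # Self-selection: suppose more satisfied people are more likely to answer the survey.
    respond_prob = np.clip((population - 20) / 100, 0.05, 0.95)
    responded = rng.random(population.size) < respond_prob
    sample = population[responded]

    print(population.mean())   # about 60, the quantity we hoped to estimate
    print(sample.mean())       # noticeably higher: the sample overrepresents the satisfied

No amount of downstream statistical sophistication recovers the population mean if the collection mechanism itself is ignored.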

Unpacking the Language

An easy way in which educators can help build CQL is by paying careful attention to the language used in quantitative research. This focus often requires picking apart the words used within the findings and discussion sections of empirical research. For example, it is common for interpretations of regression models to invoke causal language, such as the “effect” of X on Y. Many scholars have articulated the problem with causal language, especially with noncausal variables, such as race (e.g., Holland, 2003). Causal language is often invoked when what is really being observed are probability-based mean differences or nonzero (linear) relationships between two variables conditioned on numerous other assumptions being met. Loose language provides an opportunity for educators to contrast written or spoken words with everything else underlying the quantitative methods, thereby reinforcing CQL when reading and producing quantitative findings. Relatedly, quantitative findings are often placed into a larger discussion in scientific research by being cited in other studies. CQL encourages readers (and authors) to be mindful of how language is propagated and thus to situate their language within the broader scientific community.
Other important considerations of the language in quantitative research regard how some individuals may be implicitly excluded from the study and how deficit frameworks may be introduced. Limiting an investigation of educational outcomes to boys and girls, for example, assumes a gender binary that alienates and discredits the experiences of individuals with other gender or sex identities (Garvey et al., 2019). Alternatively, deficit language is often used to describe the results of statistical analyses, especially in conversations around student achievement (Ladson-Billings, 2007). It could even be argued that deficit language comes naturally to a system of epistemological inference formulated around analyzing the probability and magnitude of mean differences between two populations. This work is an opportunity to discuss how quantitative research ignores the underlying sociopolitical systems that produce and contribute to deficit narratives (Russell et al., 2022). Deficit framings are not necessary to illuminate mean differences, and with care and attention, such framings can be avoided entirely (Ro & Bergom, 2020). Moreover, those with CQL can contextualize findings with critical explanations for why such differences may occur.

Differentiating CQL From QuantCrit and CritQuant

Unlike QuantCrit and CritQuant, which apply critical frameworks to produce quantitative findings, CQL should be thought of as a precursor that focuses on the reading, understanding, and contextualizing of the quantitative methodology itself. In much the same way that knowledge of linear regression models is thought of as a prerequisite for conducting an informed linear regression analysis, CQL can be thought of as the combined knowledge of quantitative methods and critical theory needed to conduct informed critical quantitative research or to interrogate the criticality of the quantitative components therein. CQL is a step toward producing scholars who are better able to do this type of work.
Because of QuantCrit’s emphasis on CRT, coursework can (and should) be developed on how to conduct rigorous QuantCrit research (Arellano, 2022). Similarly, early tenets are being put in place that offer a starting point for CritQuant training (Diemer et al., 2023). Yet neither of these approaches takes the methods themselves as its focus. By contrast, CQL is a critical methods–focused paradigm that can be adopted in any quantitative classroom. For instance, introductory statistics classes in education can discuss the racist history of eugenicist Karl Pearson when introducing linear correlation (Zuberi, 2001); discussing the statistical mean can illuminate the fact that individual experiences are obscured by definition; causal inference coursework can introduce the social construction and non-manipulability of such variables as race (Sen & Wasow, 2016); psychometric courses can discuss the methods’ eugenicist history and the impact of racial bias on consequential validity (Helms, 1992, 2006; Randall, 2021); and any quantitative classroom can discuss the axiological, ontological, and epistemological considerations above regarding research design, measurement, randomization, replication, and inference.
CQL aims to develop a critically informed understanding of statistical methods, making it an essential pedagogical component of CritQuant, QuantCrit, and other equity-focused quantitative frameworks. Moreover, CQL is not independent of these frameworks. Just as CQL intends to support research applying CritQuant and QuantCrit frameworks, research may also reveal important considerations for quantitative methods classrooms. For example, scholars thinking critically about the language around such techniques as dummy coding may provide better methodological suggestions for a CQL-focused classroom (e.g., Ro & Bergom, 2020). Alternatively, scholars with advanced CQL may operationalize their CQL to produce antiracist quantitative research (e.g., Campbell, 2020). Figure 1 offers a conceptual diagram that distinguishes the role and position of CQL in research production while placing it in communication with other critical quantitative frameworks.
Figure 1 CQL in relation to CritQuant, QuantCrit, and outcomes.
Note. Critical quantitative literacy (CQL) is proposed as a supplemental paradigm for the CritQuant and QuantCrit frameworks via its critical consideration of statistical definitions, mathematics, methodological assumptions, research design, and language. Symbiotically, research using CritQuant and QuantCrit frameworks may reveal new ways to strengthen CQL. All of these approaches are argued to help produce more socially just outcomes.

Guiding Questions for Building CQL

A useful practice for building CQL is to slow down, ask questions that may seem to have obvious answers, and reflect on what those answers really mean in the context of quantitative research. The aim is to move away from the presumed clarity and objectivity of the numbers, situating them instead in the ambiguous, subjective context of their assumptions and mapping them to more substantive research questions. The following questions and themes are not exhaustive but encourage researchers and students to reflect on the fundamental considerations above. Please also see the example lesson plan provided in the supplemental materials.

Design

Fundamental questions for constructing a quantitative research design include What does this research aim to reveal? and Whom will the findings be about? Alternatively, if reading research produced by others, the question might be reframed as Whom is the research about? and Do the sampling methodology and definitions accurately map the sample to the intended population? Where, if anywhere, are the discrepancies? As noted, the variables selected, the variables not selected, and the supporting arguments for the research hypothesis ought to match the design and narrative. This approach, then, provides other lines of inquiry: Which variables were included? Which variables were not included? How do the included variables relate to one another, and how do omitted variables affect these relations? If, for example, the research hypothesis is that socioeconomic status (SES) correlates with social mobility, then does the research design isolate the effect of SES, or does it allow it to be confounded with unmeasured factors, such as systemic racism? What instrumentation is used in the data collection process, and what are the implications of this decision? Is the collection process designed to facilitate accurate and relevant responses, and have such potential biases as social desirability been mitigated? The instrumentation used invokes specific definitions. At each step, the narrative should match the answers to these questions.
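A brief sketch of the confounding concern raised above, using entirely simulated variables (the labels are placeholders, not claims about real data; Python with numpy is assumed):

    import numpy as np

    rng = np.random.default_rng(seed=4)

    # Simulated structure: an unmeasured factor drives both "ses" and the outcome.
    n = 5_000
    unmeasured = rng.normal(0, 1, n)                    # stands in for an omitted systemic factor
    ses = 0.8 * unmeasured + rng.normal(0, 1, n)
    mobility = 1.5 * unmeasured + rng.normal(0, 1, n)   # ses has no direct effect in this simulation

    # A naive bivariate fit attributes the unmeasured factor's influence to ses.
    naive_slope = np.polyfit(ses, mobility, 1)[0]
    print(naive_slope)   # clearly nonzero despite ses having no direct effect here

A design that cannot separate SES from such unmeasured factors will report an effect of SES that is really something else.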

Measurement

Quantitative research findings should be interpreted in the context of specific measurement definitions for the variables used in a study. Accordingly, interrogating definitions and placing them in context offers a broad space for inquiry. SES is commonly measured in a variety of ways (such as household income or free and reduced lunch), yet these definitions are not the same thing and are often described under the umbrella term of SES. Interrogating how variables—independent and dependent—are measured is an essential part of CQL. Contrasting the interpretation of quantitative findings with the variable definitions while paying close attention to the language is also an essential part of CQL. Interrogating the appropriateness of variable definitions for answering the proposed research questions is, again, central to CQL. For example, broad racial categories, such as Asian, notably fail to capture important sociocultural and historical heterogeneity often siloed by this label. Studies that contrast Asian success, say, with that of other minoritized communities often fail to recognize challenges facing Hmong, Vietnamese, Cambodian, or other Southeast Asian communities included in the broader Asian label (Her, 2014). On a more fundamental level, it has also been noted that most quantitative methods operate on means, variances, and covariances. Each of these has a specific definition that represents a choice made by the researcher. Developing CQL requires asking questions about how variables are measured and placing the answers to those questions in the context of the research questions, implications, language, and discussions.
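To illustrate how much the operationalization matters, the sketch below observes one simulated underlying socioeconomic variable through two common proxies and correlates each with the same outcome (all numbers are simulated; Python with numpy is assumed):

    import numpy as np

    rng = np.random.default_rng(seed=5)

    # One underlying (latent) socioeconomic variable, observed two different ways.
    n = 2_000
    latent_ses = rng.normal(0, 1, n)
    income = 50_000 * np.exp(0.5 * latent_ses)     # continuous, right-skewed proxy
    frl = (latent_ses < -0.5).astype(float)        # binary free/reduced-lunch indicator (flags low SES)
    outcome = 0.4 * latent_ses + rng.normal(0, 1, n)

    # The quantitative summary of the SES-outcome relationship depends on the definition chosen.
    print(np.corrcoef(income, outcome)[0, 1])   # moderate positive correlation
    print(np.corrcoef(frl, outcome)[0, 1])      # weaker and negative, since the indicator flags low SES

Both analyses would be reported as studying SES, yet they quantify different constructs and support different interpretations.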

Methodology

Just as developing CQL is theory-agnostic, it is also (quantitative) methods-agnostic in that developing CQL applies to any quantitative methodology. No matter the method, CQL encourages one to ask why this method is being used, whether an appropriate alternative exists, precisely what is conveyed by this method, and how the results of this method map onto the substantive research questions. Relatedly, one might ask about how outliers are being defined and accommodated by the chosen method and what assumptions are being made about them. Quantitative methods require a variety of mathematical assumptions to be met, some of which can be quite difficult to satisfy. Knowing and understanding these assumptions are central parts of CQL, as is inquiring about who might be affected, and how.
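For instance, two common rules for flagging outliers can disagree about which observations are set aside, and each carries its own assumptions. The values below are hypothetical, and Python with numpy is assumed:

    import numpy as np

    scores = np.array([52, 55, 57, 58, 60, 61, 63, 65, 68, 95])

    # Rule 1: flag values beyond 1.5 * IQR from the quartiles.
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    rule_iqr = (scores < q1 - 1.5 * iqr) | (scores > q3 + 1.5 * iqr)

    # Rule 2: flag values more than 3 standard deviations from the mean.
    z = (scores - scores.mean()) / scores.std(ddof=1)
    rule_z = np.abs(z) > 3

    print(scores[rule_iqr])   # [95]: the IQR rule would set this student aside
    print(scores[rule_z])     # []: the 3-SD rule keeps it, partly because the outlier inflates the SD

Whichever rule is adopted, CQL asks who the flagged observations are and what discarding them means for the people they represent.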

Suggestions, Future Research, and Conclusion

This paper introduces critical quantitative literacy as the critically informed understanding of the scope of quantitative methodology, including but not limited to statistical research design, definitions, variables, methods, and findings. CQL is framed as the requisite combined knowledge of quantitative methodology and critical theory to support CritQuant, QuantCrit, and other equity-oriented quantitative research frameworks. It is (critical) theory- and (quantitative) method-agnostic and spans the entire process of quantitative inquiry, from hypotheses to design, to analysis, to narration, and to dissemination. The goal of this paper is not to exhaust the scope of CQL but rather to familiarize the reader with the idea so that developing CQL might be taken up in practice and in educational spaces.
Development of CQL has important implications for the future of quantitative research and quantitative methods education. First, focusing on CQL joins the overdue process of recognizing and publicizing the oppressive history of quantitative social science research. Developing CQL in quantitative education informs learners so that injustices are recognized and can be better avoided in the future. Second, CQL starts the engine of reformulating and reimagining quantitative methods to serve critical goals toward equality in education research. In this way, CQL is allied with CritQuant and QuantCrit. As suggested, the broad scope of CQL makes it amenable to adoption in any quantitative research endeavor or any quantitative methods classroom. Third, CQL carries with it the capacity to cultivate more equity-minded quantitative scholars ready to produce critically informed research. The simplifying nature of quantitative methods has, in many ways, precluded pursuit of answers to critical questions. Scholars with CQL may help develop the tools and produce research more capable of answering important critical questions. Finally, CQL has the potential to positively influence public and educational policy by fine-tuning quantitative research methodologies and applications. Through their heightened criticality within quantitative methods, those who develop CQL are poised to thoughtfully use quantitative methods to tell counter-stories and propose more equity-oriented policies.
What is included in this manuscript is not and cannot be an exhaustive list of the scope, foundations, or guideposts for conducting CQL. However, CQL can serve as the subject of additional research in numerous ways. For example, the extent to which developing CQL serves as a gateway into students’ interest in CritQuant, QuantCrit, or quantitative methods more generally is currently unclear. If incorporating CQL fosters these interests, it would be insightful to contrast the successes of different CQL-building practices. Another area of further research might be the implications of CQL for chosen methodology. It may be, for example, that some statistical methods emerge as theoretically preferable to others, given the assumptions these methods do or do not make. Alternatively, newly developed statistical design, theory, or methodology better capable of reaching critical goals may emerge. Whatever the direction, getting CQL off the ground is an important first step toward critical quantitative scholarship. In more ways than one, developing CQL is only the beginning.

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

Footnote

1. It is worth noting, here and elsewhere, that this should not be interpreted as a challenge to the supporting statistical theory proving, for example, that the sample mean and sample variance are unbiased estimators of the corresponding parameters of a normal distribution (see Casella & Berger, 2002). Rather, the claim is that the supporting mathematics operationalizes specific contexts and definitions and that these contexts and definitions have implications that must be considered by critical quantitative scholars in education.
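For readers who want the precise result the footnote references, a compact statement of the standard theory (of the kind presented in Casella & Berger, 2002) is sketched below in LaTeX; the notation is generic and is added here only for illustration.

```latex
% Standard unbiasedness results (see, e.g., Casella & Berger, 2002):
% for an i.i.d. sample X_1, ..., X_n from N(\mu, \sigma^2),
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i , \qquad
S^2 = \frac{1}{n-1}\sum_{i=1}^{n} \bigl(X_i - \bar{X}\bigr)^2 , \qquad
\mathbb{E}\bigl[\bar{X}\bigr] = \mu , \qquad
\mathbb{E}\bigl[S^2\bigr] = \sigma^2 .
```

The footnote's point stands alongside these identities: the results are mathematically sound, but treating the mean and variance as the quantities of substantive interest is itself a modeling decision whose consequences CQL asks researchers to examine.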

References

American Educational Research Association, American Psychological Association, National Council on Measurement in Education, & Joint Committee on Standards for Educational and Psychological Testing (U.S.). (2014). Standards for educational and psychological testing. American Educational Research Association.
Arellano L. (2022). Questioning the science: How quantitative methodologies perpetuate inequity in higher education. Education Sciences, 12(2), 116. https://doi.org/10/gqkdtq
Baez B. (2007). Thinking critically about the “critical”: Quantitative research as social critique. New Directions for Institutional Research, 2007(133), 17–23. https://doi.org/10.1002/ir.201
Boveda M., Ford K. S., Frankenberg E., López F. (2023). Editorial vision 2022–2025. Review of Educational Research, 00346543231170179.
Campbell S. L. (2020). Ratings in Black and White: A QuantCrit examination of race and gender in teacher evaluation reform. Race Ethnicity and Education, 26(7), 815–833. https://doi.org/10.1080/13613324.2020.1842345
Casella G., Berger R. L. (2002). Statistical inference (2nd ed.). Duxbury.
Castillo W., Gillborn D. (2022). How to “QuantCrit”: Practices and questions for education data researchers and users (EdWorkingPaper: 22-546). Retrieved from Annenberg Institute at Brown University. https://doi.org/10.26300/v5kh-dd65
Childs T. M., Wooten N. R. (2022). Teacher bias matters: An integrative review of correlates, mechanisms, and consequences. Race Ethnicity and Education, 26(3), 368–397. https://doi.org/10.1080/13613324.2022.2122425
Covarrubias A., Vélez V. N. (2013). Critical race quantitative intersectionality: An anti-racist research paradigm that refuses to “let the numbers speak for themselves.” In Lynn M., Dixson A. D. (Eds.), Handbook of Critical Race Theory in education (pp. 270–285). Routledge.
Crawford C. E. (2019). The one-in-ten: Quantitative Critical Race Theory and the education of the “new (White) oppressed.” Journal of Education Policy, 34(3), 432–444.
Culpepper R., Zimmerman R. A. (2006). Culture-based extreme response bias in surveys employing variable response items: An investigation of response tendency among Hispanic-Americans. Journal of International Business Research, 5, 75.
Dencik L., Hintz A., Redden J., Treré E. (2019). Exploring data justice: Conceptions, applications and directions. Information, Communication and Society, 22(7), 873–881. https://doi.org/10.1080/1369118X.2019.1606268
Diemer M., Frisby M. B., Marchand A., Bardelli E. (2023, July 30). Illustrating and enacting a critical quantitative approach to measurement with MIMIC models. https://doi.org/10.31234/osf.io/8thpu
Dixon-Román E. (2017). Inheriting possibility: Social reproduction and quantification in education. University of Minnesota Press.
Du Bois W. E. B. (1899). The Philadelphia Negro: A social study. University of Pennsylvania Press.
Garcia N. M., López N., Vélez V. N. (2018). QuantCrit: Rectifying quantitative methods through critical race theory. Race Ethnicity and Education, 21(2), 149–157. https://doi.org/10.1080/13613324.2017.1377675
Garvey J. C., Hart J., Metcalfe A. S., Fellabaum-Toston J. (2019). Methodological troubles with gender and sex in higher education survey research. Review of Higher Education, 43(1), 1–24. https://doi.org/10.1353/rhe.2019.0088
Gelman A., Stern H. (2006). The difference between “significant” and “not significant” is not itself statistically significant. American Statistician, 60(4), 328–331. http://www.jstor.org/stable/27643811
Gillborn D. (2010). The colour of numbers: Surveys, statistics and deficit-thinking about race and class. Journal of Education Policy, 25(2), 253–276.
Gillborn D., Warmington P., Demack S. (2018). QuantCrit: Education, policy, “Big Data” and principles for a critical race theory of statistics. Race Ethnicity and Education, 21(2), 158–179. https://doi.org/10.1080/13613324.2017.1377417
Grimm P. (2010). Social desirability bias. In Sheth J. N., Malhotra N. K. (Eds.), Wiley international encyclopedia of marketing. Wiley.
Helms J. E. (1992). Why is there no study of cultural equivalence in standardized cognitive ability testing? American Psychologist, 47(9), 1083. https://doi.org/10.1037/0003-066X.47.9.1083
Helms J. E. (2006). Fairness is not validity or cultural bias in racial-group assessment: A quantitative perspective. American Psychologist, 61(8), 845. https://doi.org/10.1037/0003-066X.61.8.845
Helms J. E. (2012). A legacy of eugenics underlies racial-group comparisons in intelligence testing. Industrial and Organizational Psychology, 5(2), 176–179.
Her C. S. (2014). Ready or not: The academic college readiness of southeast Asian Americans. Multicultural Perspectives, 16(1), 35–42. https://doi.org/10.1080/15210960.2014.872938
Hilliard A. G. (1990). Back to Binet: The case against the use of IQ tests in the classroom. Contemporary Education, 61(4), 184.
Holland P. (2003). Causation and race. ETS Research Report Series, 2003.
Irgens G. A., Simon K., Wise A., Philip T., Olivares M. C., Van Wart S., Vakil S., Marshall J., Parikh T., Lopez M. L., Wilkerson M. H., Gutiérrez K., Jiang S., Kahn J. B. (2020). Data literacies and social justice: Exploring critical data literacies through sociocultural perspectives. In Gresalfi M., Horn I. S. (Eds.), The interdisciplinarity of the learning sciences (Vol. 1, pp. 406–413). International Society of the Learning Sciences. https://idealab.sites.clemson.edu/papers/dataliteraciesandsocialjustice.pdf
James A. (2001). Making sense of race and racial classification. Race and Society, 4(2), 235–247. https://doi.org/10.1016/S1090-9524(03)00012-3
Kincheloe J. L., McLaren P. L. (1994). Rethinking critical theory and qualitative research. In Denzin N. K., Lincoln Y. S. (Eds.), Handbook of qualitative research (pp. 138–157). Sage.
Kline R. B. (2016). Principles and practices of structural equation modeling. Guilford Press.
Ladson-Billings G. (2006). From the achievement gap to the education debt: Understanding achievement in U.S. schools. Educational Researcher, 35(7), 3–12.
Ladson-Billings G. (2007). Pushing past the achievement gap: An essay on the language of deficit. Journal of Negro Education, 76(3).
Langkjær-Bain R. (2019, June). The troubling legacy of Francis Galton. Significance, 16(3), 16–25.
Lopez J. D. (2021). Examining construct validity of the scale of Native Americans giving back. Journal of Diversity in Higher Education, 14(4), 519–529. https://doi.org/10.1037/dhe0000181
López N., Erwin C., Binder M., Chavez M. J. (2018). Making the invisible visible: Advancing quantitative methods in higher education using critical race theory and intersectionality. Race Ethnicity and Education, 21(2), 180–207.
Matsuda M. J., Lawrence C. R., Delgado R., Crenshaw K. W. (Eds.). (1993). Words that wound: Critical Race Theory, assaultive speech, and the First Amendment. Westview Press.
Nguyen H.-H. D., Ryan A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93(6), 1314–1334. https://doi.org/10.1037/a0012702
Nuzzo R. (2014). Scientific method: Statistical errors. Nature, 506, 150–152.
Pangrazio L., Selwyn N. (2019). “Personal data literacies”: A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21(2), 419–437. https://doi.org/10.1177/1461444818799523
Pérez Huber L., Solorzano D. G. (2015). Racial microaggressions as a tool for critical race research. Race Ethnicity and Education, 18(3), 297–320.
Philip T. M., Olivares-Pasillas M. C., Rocha J. (2016). Becoming racially literate about data and data-literate about race: Data visualizations in the classroom as a site of racial ideological micro-contestations. Cognition and Instruction, 34, 361–388. https://doi.org/10.1080/07370008.2016.1210418
Randall J. (2021). “Color-neutral” is not a thing: Redefining construct definition and representation through a justice-oriented critical antiracist lens. Educational Measurement: Issues and Practice, 40(4), 82–90. https://doi.org/10.1111/emip.12429
Ro H. K., Bergom I. (2020). Expanding our methodological toolkit: Effect coding in critical quantitative studies. New Directions for Student Services, 2020(169), 87–97. https://doi.org/10.1002/ss.20347
Root E., Bacon R., Scott J. B. (1916). Addresses on government and citizenship. Harvard University Press.
Russell M., Oddleifson C., Kish M. R., Kaplan L. (2022). Countering deficit narratives in quantitative educational research. Practical Assessment, Research, and Evaluation, 27(14). https://doi.org/10.7275/k44e-sp84
Sablan J. R. (2019). Can you really measure that? Combining Critical Race Theory and quantitative methods. American Educational Research Journal, 56(1), 178–203. https://doi.org/10.3102/0002831218798325
Schlinger H. D. (2003). The myth of intelligence. Psychological Record, 53(1), 15–32.
Sen M., Wasow O. (2016). Race as a bundle of sticks: Designs that estimate effects of seemingly immutable characteristics. Annual Review of Political Science, 19.
Smith L. T. (2012). Decolonizing methodologies (2nd ed.). Zed Books.
Solórzano D. G., Yosso T. J. (2002). Critical race methodology: Counter-storytelling as an analytical framework for education research. Qualitative Inquiry, 8(1), 23–44.
Stage F. K. (2007). Using quantitative data to answer critical questions: New directions for institutional research. Jossey-Bass.
Stage F. K., Wells R. S. (2014). New scholarship in critical quantitative research, part 1: Studying institutions and people in context. New Directions for Institutional Research, Number 158. Wiley.
Stephens E., Cryle P. (2017). Eugenics and the normal body: The role of visual images and intelligence testing in framing the treatment of people with disabilities in the early twentieth century. Continuum, 31(3), 365–376.
Sullivan S., Tuana N. (2007). Race and epistemologies of ignorance. SUNY Press.
Tabron L. A., Hunt-Khabir K., Thomas A. K. (2020). Disrupting Whiteness in introductory statistics course design: Implications for educational leadership. In Mullen C. A. (Ed.), Handbook of social justice interventions in education (pp. 1–25). Springer.
Tabron L. A., Thomas A. K. (2023). Deeper than wordplay: A systematic review of critical quantitative approaches in education research (2007–2021). Review of Educational Research. https://doi.org/10.3102/00346543221130017
Terman L. M. (1922). A new approach to the study of genius. Psychological Review, 29(4), 310–318.
Terman L. M. (1924). The possibilities and limitations of training. Journal of Educational Research, 10(5), 335–343.
Walter M., Andersen C. (2013). Indigenous statistics: A quantitative research methodology. Left Coast Press.
Wise A. (2020). Educating data scientists and data-literate citizens for a new generation of data. Journal of the Learning Sciences, 29(1), 165–181. https://doi.org/10.1080/10508406.2019.1705678
Ziliak S. T., McCloskey D. N. (2008). The cult of statistical significance: How the standard error costs us jobs, justice, and lives. University of Michigan Press.
Zuberi T. (2001). Thicker than blood: How racial statistics lie. University of Minnesota Press.
Zuberi T., Bonilla-Silva E. (Eds.). (2008). White logic, White methods: Racism and methodology. Rowman and Littlefield.

Biographies

MICHAEL B. FRISBY is an assistant professor of research, measurement, and statistics at Georgia State University, 30 Pryor Street SW, Suite 450, Atlanta, GA 30303; email: [email protected]. His research features the theoretical and methodological expansion of the CritQuant and QuantCrit frameworks in addition to their incorporation in quantitative methods education. He also studies the measurement and development of critical consciousness among young adults with more privileged identities.

Published In

Article first published online: February 11, 2024
Issue published: January-December 2024

Keywords

  1. CQL
  2. critical quantitative literacy
  3. critical theory
  4. CritQuant
  5. equity
  6. mathematics education
  7. QuantCrit
  8. quantitative education
  9. quantitative methods
  10. statistics

Rights and permissions

© The Author(s) 2024.
Creative Commons License (CC BY-NC 4.0)
This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
