Online courses provide flexible learning opportunities, but research suggests that students may learn less and persist at lower rates compared to face-to-face settings. However, few studies have investigated more distal effects of online education. In this study, we analyzed 6 years of institutional data for three cohorts of students in 13 large majors (N = 10,572) at a public research university to examine distal effects of online course participation. Using online course offering as an instrumental variable for online course taking, we find that online course taking of major-required courses leads to higher likelihood of successful 4-year graduation and slightly accelerated time-to-degree. These results suggest that offering online courses may help students to more efficiently graduate college.

Success in postsecondary education is a key determinant for individual career trajectories and for national prosperity and societal well-being (Moretti, 2004; National Academy of Sciences et al., 2007). As high-paying, secure jobs increasingly require advanced skills to keep pace with technological developments, the bachelor’s degree remains a primary avenue for low-income and historically marginalized groups to secure middle-class wages (Duncan & Murnane, 2014). However, recent statistics from the U.S. Department of Education (2017) indicate that, among first-time, full-time bachelor’s degree-seeking students beginning at 4-year institutions, only about 41% successfully graduate college within 4 years and 60% within 6 years.

Historically, student progression research primarily focused on student-level characteristics and seldom addressed institutional inefficiencies that may delay or impede student graduation. More recent studies, however, have begun to examine how institutional policies and procedures affect graduation (e.g., Attewell et al., 2011; Scott-Clayton, 2015). For example, students’ inability to enroll in required courses—due to capacity constraints on the part of the institution or scheduling constraints on the part of the student—can delay graduation (Gurantz, 2015; Pearson Foundation, 2011). Online course offerings may alleviate some of these supply-and-demand constraints by allowing students to enroll in otherwise inaccessible courses, thus potentially accelerating time to graduation.

Online courses are an increasingly important part of students’ college experience, even prior to the COVID-19 outbreak. For instance, in 2016, more than 30% of all undergraduate students took an online course (McFarland et al., 2017). Advocates of online education argue that online courses can provide greater and easier access to coursework for students while also serving as cost-effective forms of instruction for universities (Bartley & Golek, 2004; Watson & Gemin, 2008). However, numerous studies indicate that near-term measures of student learning and performance (e.g., course completion, course grades, success in subsequent courses) are slightly lower in online settings as compared with traditional face-to-face environments (e.g., Bettinger et al., 2017; Figlio et al., 2013; Xu & Jaggars, 2013, 2014).

Much less work has examined how online courses affect more distal outcomes (e.g., time-to-degree, graduation rates). This is a timely topic: If online courses are a potentially effective method for reducing delays in completing course requirements, departments may want to keep some of their online courses that were introduced during the COVID-19 pandemic. For students, the cost of potentially earning a lower grade in an online setting may be offset by the benefit of making more efficient progress toward graduation.

Of course, such effects may vary across student groups. Students with low socioeconomic status (SES), first-generation college students, and students with relatively poorer academic preparation have longer average times-to-degree, controlling for institution type, than their more privileged counterparts (Ginder et al., 2017; Zarifa et al., 2018). As these students may also have greater demands on their time and thus require greater flexibility to accommodate their schedules (Grabowski et al., 2016), online courses may present an opportunity to manage external demands while progressing through degree programs (Stone et al., 2016). However, students managing multiple external temporal demands might lack access to campus resources and connections to supportive institutional figures; because online courses require greater self-regulation, taking them could have negative consequences for these students. Thus, it is crucial to understand not only the overall distal effects of online course taking but also potentially heterogeneous effects for traditionally underserved groups.

Conceptual Framework

This study draws on Rovai’s (2003) composite model of student persistence in online programs. The model posits that students’ ability to persist in online courses relates to various student characteristics, student experiences, institutional policies and programs, and pedagogical styles. Combining Tinto’s (1975, 1987, 1993) and Bean and Metzner’s (1985) previous work examining persistence in higher education, Rovai classifies these factors into two broad categories: those prior to and those after admission to college.

There are two broad domains of pre-admission characteristics: (a) student characteristics (e.g., demographics, academic preparation and performance) and (b) student skills (e.g., facility with technology, information literacy, and time management). After admission, Rovai focuses on three sets of factors: external factors (e.g., students’ financial situation, work hours, and personal and life circumstances); institutional factors (e.g., provision of support, course availability, and pedagogical features); and individual factors (e.g., affinity with studies, peers, and institution).

These potential influences are not merely additive as student characteristics and skills interact with their environments. For example, students with high self-regulation will likely fare better in online classes than those with weaker self-regulatory skills (e.g., Baker et al., 2020; Broadbent & Poon, 2015; Li et al., 2020). Similarly, external factors may affect students’ decisions to take advantage of institutional support.

Although Rovai’s model focuses specifically on student progression within courses, we follow past work (e.g., Ortagus, 2018) that has extended the model to examine related outcomes (probability of graduation and time-to-degree). The model’s focus on salient background student characteristics related to student success informs our covariate selection and subgroup analyses (i.e., first-generation college students, low-income students, students with weak academic preparation). Also, our study assumes that pre-admission characteristics influence the uptake of online course offerings (with the availability of online courses representing an after-admission institutional factor), as well as course and degree success.

Distal College Performance Indicators

Although the bachelor’s degree is commonly understood as a 4-year endeavor, only about 41% of first-time, full-time undergraduate students in the 2010 cohort graduated within 4 years, and about 60% graduated within 6 years (U.S. Department of Education, 2017). Graduation outcomes are correlated with student characteristics; low-income, first-generation college, and older (generally considered over age 21) students are much less likely to complete a bachelor’s degree on time compared with their counterparts (Ewert, 2010; Zarifa et al., 2018).

Delayed graduation has economic consequences for society, for the institution, and for students (Jenkins & Rodriguez, 2013; Kurlaender et al., 2014; Zarifa et al., 2018). Longer time-to-degree reduces the supply of college-educated workers and increases the burdens on state and federal loans. Prolonged time-to-degree may decrease institutional efficiency by increasing the amount of resources devoted to an individual student and contribute to educational changes such as larger class sizes (Jenkins & Rodriguez, 2013). Taking longer to graduate may increase direct costs for students by increasing total tuition fees, inducing more borrowing, increasing opportunity costs from foregone earnings, and potentially decreasing lifetime earnings (Witteveen & Attewell, 2021).

Factors Affecting Time-to-Degree

Institutional efforts to mitigate delayed graduation generally focus on social, financial, and academic support for students (Bettinger & Baker, 2014; Boyle et al., 2010; Xu et al., 2018). However, such efforts seldom address the institutional factors that delay student trajectories. Research suggests that measures of how well institutions provide and structure resources for students—such as student-faculty ratios, program design, and expenditures on student services—are related to students’ time-to-degree and eventual graduation (e.g., Bound et al., 2012; Chen, 2012; Shapiro et al., 2016; Zarifa et al., 2018). Many institutions implement policies and programs to improve time-to-degree. For example, schools may offer summer courses that enable students to take coursework they cannot complete during the regular academic year due to insufficient seating capacity or unmet academic course requirements (e.g., Fischer, Xu, et al., 2020; Smith & Byrd, 2015).

The institutional factors affecting time-to-degree are especially salient for first-year students, as the freshman year is a period of transition. Students in their first year are often disconnected from institutional supports and the campus community, and many first-year students lack the self-regulatory and time management skills necessary to thrive in unstructured academic environments (Bailey et al., 2019; Bruffaerts et al., 2019).

Recent work has sought to better understand how course offerings and curricular structures might affect student progress (Bailey et al., 2015; Bhaskaran et al., 2017). For instance, constrained offerings may induce students to enroll in courses that do not count toward a degree because necessary courses are unavailable (Pearson Foundation, 2011). Although the evidence on the effect of course scarcity on time-to-degree is mixed and context-dependent (e.g., Gurantz, 2015; Kramer et al., 2018; Kurlaender et al., 2014; Yue & Fu, 2017), many students state that enrollment difficulties inhibit their ability to progress on time (Pearson Foundation, 2011). One potential institutional response to alleviate pressures of course crowding is offering more online courses.

Present Study

Although online courses are gaining popularity with both departments and students, we lack a clear understanding of the overall effects of offering classes online. While several studies have examined the effect of online classes on proximal outcomes such as course grade (e.g., Fischer, Xu, et al., 2020), course persistence (Xu & Jaggars, 2014), and grade in a subsequent course (Bettinger et al., 2017), few have investigated whether online education affects students’ time-to-degree and other distal success metrics. Huntington-Klein et al. (2017) found that participating in online courses led to a smaller probability of successful graduation in community college settings. Other prior work found that students who enrolled in an online course early in their college path graduated several months sooner than their counterparts (Ortagus, 2018; Sublett, 2019).

However, there are two limitations of these studies: (a) Most were situated in community college settings that enroll different student populations and have different structural constraints than traditional 4-year colleges, and (b) while these studies have attempted to control for potential biases due to selection into online classes through propensity score matching and student- and instructor-fixed-effects, these methods cannot completely account for such biases. This limited research highlights unanswered questions regarding the longer-term effects of online course taking, particularly in 4-year colleges. Although our study also cannot fully address all potential selection biases, we add a different credibly causal approach to the extant body of research and test the robustness of our results to a range of potential threats to validity.

Using a large institutional data set, this study examined the following research questions (RQs):

  • Research Question 1 (RQ1): How does enrollment in major-required online courses affect students’ probability of graduating within 4 or 6 years?

  • Research Question 2 (RQ2): How does enrollment in major-required online courses affect students’ time-to-degree for students who graduate within 6 years?

  • Research Question 3 (RQ3): How does enrollment in major-required online courses affect students’ 4-year and 6-year graduation rates and time-to-degree for student populations who are traditionally at-risk in college environments (i.e., first-generation college students, low-income students, students with weak academic preparation)?

Study Setting

Institution

This study is situated at a large public research university in California with more than 30,000 undergraduate students in more than 80 undergraduate degree programs in some 15 schools. This college enrolls a diverse undergraduate student body with about 48% first-generation college students and 45% Pell grant recipients and is federally designated as a Hispanic-Serving Institution (HSI) and an Asian American and Native American Pacific Islander-Serving Institution (AANAPISI). The institution has substantially expanded its online course offerings from 18 courses in 2009 to 93 courses in 2015 to 109 courses in 2017. Most online courses (78%) were offered in summer. During the time of this study, the university did not centrally organize online offerings; for the most part, instructors could decide whether to offer their courses online or in-person. In addition, online course offerings were not centrally advertised; in the course enrollment window each term, students were able to view whether courses were offered in an online or face-to-face modality in the schedule of classes. Institutional knowledge and the historic schedules of classes indicate that all these online courses were offered asynchronously and were fully online without on-site, in-person interactions (i.e., the classes were not offered in blended/hybrid learning formats).

Sample and Measures

Institutional data for this study come from the Registrar’s Office and the Offices of Institutional Research, Admission, and Summer Session. This study examines 6 years of institutional data for three cohorts of newly matriculated degree-seeking students (those who entered in fall terms of 2009, 2010, and 2011) in 13 of the largest majors at this university. Although we initially planned to examine the 15 largest majors on campus (representing over 80% of students), the historic major requirements were unclear for two of these majors, and when we contacted departmental administrators, we were unable to obtain sufficient information on the major-required courses for the 2009 to 2011 cohorts. Therefore, we excluded these two majors from our analyses. Our analysis examining graduation rates includes students from the three cohorts who finished their program of study in one of these 13 majors (i.e., graduated from the institution with the major, or were affiliated with the major at the end of Year 6; sample 1). In addition, we restricted this sample to students who successfully graduated with a degree in one of these 13 majors within 6 years to examine the effects of online class offerings on students’ time-to-degree (sample 2).

Dependent variables in this study are dichotomous variables that indicate whether students graduated within 4 or 6 years (RQ1) and a continuous variable that represents students’ time-to-degree in years (RQ2). The key independent variables, which we discuss in detail below, are measures of online course taking. In addition, the analyses include a number of student-level demographic, achievement, and enrollment covariates from application and transcript files. Table 1 provides descriptive information and variable definitions of the student- and cohort-level variables in this study by sample.


Table 1 Descriptive Cohort and Student Information


Major-Required Online Courses

We focus on courses that were required for students to complete their majors. Information on major requirements was accessed from the publicly available general catalogue. In a few cases in which requirements were unclear, we contacted department representatives. For each of the analyzed majors, we collected information on the course requirements students needed for successful major completion. Some requirements must be fulfilled by a specific course, while others can be fulfilled by taking one course from a list of options. In this analysis, we treated every course that would advance student progress toward school and departmental major requirements as a “major-required” course.

We examine all major-required courses offered and taken online in students’ first 4 years. In our analytic sample, 8% of the students took at least one major-required course online in the first 4 years of their college career. On average, 3% of the major-required courses were offered online (shown in Figure 1; details for each major in Supplementary Appendix Table A1 in the online version of the journal). As these data are from a period when the institution was beginning to expand its offering of online courses, it is not surprising that only a small percentage of major-required courses were offered online and that students only took a small proportion of their major-required courses online. Nonetheless, Figure 1 indicates that there are differences in major-required online course offering across majors within a cohort and across cohorts within a major, which is the variation that this study uses to examine the impact of course modality.


Figure 1. Proportions of all major-required courses offered and taken online in students’ first 4 years by major and cohort for each major (gray lines) and aggregated (black line).

In addition, we specifically examine major-required lower division courses offered and taken online in students’ first year. Figure 2 provides descriptive information for all lower division major-required courses offered and taken online across cohorts by major; details across cohorts by major are provided in Supplementary Appendix Table A2 in the online version of the journal. The decision to examine lower division courses is both empirically and theoretically motivated. First, because there is less flexibility in lower division than upper division requirements, we expect there to be a stronger relationship between courses offered online and courses taken online (a stronger first stage). Indeed, most major-required courses that students took online were lower division courses (about 63%). Also, focusing on major-required lower division courses allows us to compare our results more directly with previous studies, which found that first-year online enrollment has particularly positive associations with long-term academic success in community college settings (e.g., Ortagus, 2018). Finally, the first year of college poses unique challenges to students that may play an especially large role in affecting academic success and retention; therefore, effects of taking classes online—both positive and negative—may be most pronounced in the first year.


Figure 2. Proportion of major-required lower division courses offered and taken online in students’ first year by major and cohort for each major (gray lines) and aggregated (black line).

Analytic Methods

RQ1 and RQ2

We apply an instrumental variables approach to examine the effects of enrollment in major-required online courses on students’ 4- and 6-year graduation rates (RQ1) and time-to-degree (RQ2), using the sample of students who finished in one of our 13 included majors and the subsample of students who finished in one of our 13 majors and graduated within 6 years, respectively. We use two versions of the independent variable: courses offered/taken online in the first 4 years of each starting cohort (Model 1) and lower division courses offered/taken online in the first year of each starting cohort (Model 2).

The goal of this study is to provide an unbiased estimate of the relationship between online course enrollment and student outcomes. Simple comparisons of outcomes between students who take a higher versus lower percentage of courses online may yield biased estimations; students who tend to take courses online might differ systematically from those who do not. For instance, students who live off campus may be more likely to enroll in online courses as compared with residential students. Students who live off campus may also be less likely to use academic resources available on campus, such as study spaces and one-on-one tutoring services, which may lead to comparatively lower achievement. Such unobserved student characteristics may be correlated with both the key explanatory variable (i.e., online course taking) and the outcome variables and can lead to biased results when not properly accounted for.

Indeed, we found correlations between student observed characteristics and outcomes (Columns 1 and 2 in Supplementary Appendix Table A3 in the online version of the journal) and student observed characteristics and the percentage of major-required courses that students took online (Columns 3 and 4 in Supplementary Appendix Table A3 in the online version of the journal). These correlations suggest that students may systematically sort into taking courses online and that this sorting could bias estimations of the relationship between taking classes online and student outcomes. Although ordinary least squares (OLS) regression analyses control for observed student characteristics, there may remain unobserved characteristics that cannot be properly accounted for in traditional regression analyses. We present naïve OLS regressions predicting outcomes using measures of student online course taking and controlling for observed student characteristics in Supplementary Appendix Table A4 in the online version of the journal.

Thus, instead of OLS regression, we used an instrumental variable (IV) approach to address these unobserved selection issues to provide more plausibly causal estimates. IV techniques allow researchers to isolate exogenous variation in a potentially endogenous explanatory variable (e.g., taking an online course) and use only this exogenous portion to estimate the causal impact of the explanatory variable on a subsequent outcome (e.g., graduation rate and time-to-degree; Murnane & Willett, 2010).

Specifically, we instrument online course taking with online course offering. We argue that, after controlling for anything unique about a given major and anything unique about a given cohort of students, the offer of online courses to students at the major-by-cohort level is essentially exogenous. Conditional on the validity of this assumption (which we interrogate in depth below), instrumenting for online course taking with online course offering should provide unbiased estimates of the effect of online course taking on student college completion and time-to-degree. Importantly, this estimation strategy only allows us to examine the relationship between online courses that fulfill major requirements and student outcomes. This focus relates to the specifics of our estimation strategy; we leverage variation in access to online classes across majors and cohorts. Figures 1 and 2 demonstrate sufficient variation in online course offerings across majors and cohorts, which mostly resulted from differences in individual faculty desires to offer classes online.

Reduced form estimates

Before we introduce our IV estimation strategy, we first present our reduced form models. That is, we predict our outcomes using our instrument while controlling for student-level covariates, major fixed effects, and cohort fixed effects:

Y_imc = α + φ·online_offered_mc + γX_i + θY_mc + ζ_m + η_c + μ_imc.   (1)

Y_imc is the outcome variable of student i in major m and cohort c. Three outcomes were examined: probability of earning a degree within 4 years/6 years and time-to-degree measured in years.¹ online_offered_mc represents the plausibly exogenous offering of online classes for each cohort of each major. Depending on the model, this variable describes either (a) the percentage of all major-required courses that were offered online for a particular cohort or (b) the percentage of lower division major-required courses offered online in the first year of the particular cohort. X_i represents a vector of student covariates including students’ racial/ethnic background, gender, first-generation student status, low-income status, English language learner status, California resident status, admission score, number of passed AP exams, number of enrolled summer terms, and number of major-required courses enrolled in summer. Y_mc represents a vector of covariates at the major-by-cohort level (e.g., cohort size). ζ_m and η_c are the major fixed effects and cohort fixed effects, respectively, which control for unobserved differences across majors and across cohorts. α and μ_imc represent the regression intercept and the not directly observed additive error term, respectively. We cluster the errors at the major-by-cohort level, as this is the level of treatment assignment (Abadie et al., 2017), and at the individual student level, as students who double majored or switched majors appear more than once in our data. We discuss the issue of multiple observations per student in more detail below.

Although the reduced form estimates do not provide information about the effects of taking classes online on graduation outcomes (which the instrumental variables estimates do), they do provide the effects of offering classes online, conditional on the offering of online classes being plausibly exogenous at the major-by-cohort level. Presenting these intent-to-treat estimates is important for two reasons: (a) The effect of offering, rather than taking, online classes might be the more policy-relevant estimate for departmental administrators, and (b) reduced form estimates are unbiased even in the face of weak instruments (Angrist & Krueger, 2001), and, as we discuss below, our instrument is weak in one of our analytic approaches.

Instrumental variable estimates

Our IV analysis follows a two-stage least squares format. The first stage of the IV approach predicts the endogenous regressor (i.e., percentage of courses taken online) using the plausibly exogenous predictor (i.e., percentage of major-required courses offered online) as well as the student-, major-, and cohort-level covariates. Thus, our first-stage equation is:

online_taken_imc = α + δ·online_offered_mc + γX_i + θY_mc + ζ_m + η_c + μ_imc.   (2)

online_taken_imc represents the percentage of major-required courses taken online for student i in major m and cohort c. Again, depending on the model, this variable describes either (a) the percentage of all major-required courses taken online in the first 4 years of enrollment or (b) the percentage of lower division major-required courses taken online in the first year of enrollment.

The second stage of the IV approach uses the fitted values from the first stage to predict the outcomes. The main independent variable is now the predicted value of the percentage of classes a student would take online. Based on exogenous variation in online course availability, we assume that our originally endogenous predictor is now uncorrelated with any omitted variables. Therefore, the coefficient β in our second-stage model will be an unbiased estimate of the impact of taking major-required courses online on the dependent variables of interest (i.e., graduation rate, time-to-degree):

Y_imc = α + β·online_taken_imc + γX_i + θY_mc + ζ_m + η_c + μ_imc.   (3)
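A minimal sketch of the two-stage procedure in Equations 2 and 3 follows, on simulated data where the true effect is known (all variable names and parameter values are hypothetical; covariates, fixed effects, and clustering are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: an unobserved trait `u` raises both
# online course taking and the outcome, so naive OLS is biased; the offer
# rate `z` shifts taking but is independent of `u` (the IV assumptions).
z = rng.uniform(0.0, 0.1, n)                 # instrument: % offered online
u = rng.normal(size=n)                       # unobserved confounder
taken = 0.5 * z + 0.02 * u + rng.normal(scale=0.01, size=n)
beta_true = -0.8                             # true effect on time-to-degree
y = 4.5 + beta_true * taken + 0.1 * u + rng.normal(scale=0.05, size=n)

def ols(y, X):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
# Stage 1 (Equation 2): predict taking from the offer instrument.
Z = np.column_stack([const, z])
taken_hat = Z @ ols(taken, Z)
# Stage 2 (Equation 3): regress the outcome on the fitted values.
beta_iv = ols(y, np.column_stack([const, taken_hat]))[1]
# Naive OLS for comparison: badly biased by the confounder `u`.
beta_ols = ols(y, np.column_stack([const, taken]))[1]
```

In this simulation beta_iv lands near beta_true while beta_ols does not; in practice the second-stage standard errors must also account for the generated regressor, which canned two-stage least squares routines handle automatically.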

Research Question 3

We conducted heterogeneity analyses to examine the effects of taking online courses for student populations traditionally at-risk in college environments (i.e., first-generation college students, low-income students, and students with weaker academic preparation) as previous research has identified that nontraditional and historically marginalized students typically take longer to graduate (e.g., Zarifa et al., 2018) and that these same groups might perform less well in online classes (e.g., Figlio et al., 2013; Kaupp, 2012). Whereas the dichotomous first-generation college student status and low-income status variables allowed for straightforward differentiation between subpopulations, subgroups based on student admission scores were created by dividing students into high and low admission score groups using a median split procedure. Thus, we replicate the analytical methods for the first two RQs and include terms interacting online course taking and the subgroup indicators to compare students traditionally at-risk in college environments with their non-at-risk counterparts.
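The subgroup construction and interaction specification can be sketched as follows (simulated data with hypothetical effect sizes; the instrumenting step is omitted for brevity, so the regressor is treated as exogenous here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical student data: admission scores and online taking rates.
score = rng.normal(70.0, 10.0, n)
online_taken = rng.uniform(0.0, 0.1, n)

# Median split into low/high admission score groups, as in RQ3.
low_score = (score < np.median(score)).astype(float)

# Simulated time-to-degree where the (hypothetical) effect of online taking
# is -0.5 for high-score students and -0.8 for low-score students.
y = (4.6 - 0.5 * online_taken - 0.3 * low_score * online_taken
     + 0.1 * low_score + rng.normal(scale=0.05, size=n))

# Interacted model: b[1] is the high-score effect; b[1] + b[3] the low-score one.
X = np.column_stack([np.ones(n), online_taken, low_score,
                     low_score * online_taken])
b = np.linalg.lstsq(X, y, rcond=None)[0]
effect_high = b[1]
effect_low = b[1] + b[3]
```

The interaction coefficient b[3] directly tests whether the at-risk subgroup's effect differs from that of its counterpart, which is the comparison RQ3 targets.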

Assumptions of the IV Approach

In order for the IV approach to be valid, five conditions must be met: The potential outcome of each individual must be independent of the treatment status of other individuals (the stable unit treatment value assumption, SUTVA), the instrument must have a relationship with the endogenous predictor (there must be a first stage), the instrument must be as good as randomly assigned, the instrument can only affect outcomes through the endogenous predictor (the exclusion restriction), and the effect of the instrument must be in the same direction for all subjects in the population of interest (there can be no defiers; Cunningham, 2021). We address each of these assumptions below.

First, SUTVA states that the outcome for student i depends only on her treatment status and not the treatment status of any other student. That is, student i’s time to graduation is affected by the number of online classes available to her, but not the number of online classes that are available to student j ≠ i. Although SUTVA is notoriously difficult to empirically assess (Cunningham, 2021), in this context it is a reasonable assumption. As all students in each cohort-major cell have the same treatment assignment, the offer of online courses to student j ≠ i should not affect student i’s outcomes beyond classic peer effects, which we expect regardless of modality.

Second, differences in online course availability must affect students’ online course taking (i.e., there must be a first stage). Conditional on individual student characteristics, major-by-cohort characteristics, major fixed effects, and cohort fixed effects, the percentage of major-required courses offered online must be correlated with the percentage of major-required courses taken online. We show this to be true in the first-stage analyses presented in Columns 1 and 2 in Tables 4 and 5.

Third, the instrument must be independent of potential outcomes and potential treatment assignments. In this case, this assumption would be violated if the offering of online classes was correlated with demographic differences or additional supports (e.g., increased tutoring) at the major-by-cohort level. To interrogate this first threat, we examined the relationships between student and cohort characteristics and the proportion of classes offered online in each major for a particular cohort. We first identified the student and cohort characteristics that were significantly correlated with the outcome variables (e.g., low-income status, admission score, and English language learner status) and then investigated if these characteristics, aggregated at the major by cohort level, were systematically associated with measures of the major-required courses offered online (Column 1 in Supplementary Appendix Table A5 in the online version of the journal) or the lower-division major-required courses offered online (Column 2 in Supplementary Appendix Table A5 in the online version of the journal) in a given major and for a particular cohort.

We find some evidence of significant relationships between these key student characteristics and online course offerings. Specifically, we find suggestive evidence that, controlling for major and cohort fixed effects, the number of classes offered online within a major-by-cohort cell is positively related to the percent of Black students, the percent of first-generation students, and the average admission score within that cell. When we examine the F-statistic of the joint significance of all student- and cohort-characteristics, we find a significant relationship between student characteristics and major-required lower division courses offered online, but no significant relationship between student characteristics and all major-required courses offered online. Additional analyses indicated that the relationship between student characteristics and percent of lower-division major required courses offered online was driven mainly by a single major, Public Health. After excluding the Public Health major, there is no longer a significant relationship between student characteristics and the percent of lower division major-required classes offered online (Columns 3 and 4 in Supplementary Appendix Table A5 in the online version of the journal). A robustness check comparing the main results to results that excluded this major from the analysis indicated that results are very similar in magnitude and direction (Supplementary Appendix Table A6 in the online version of the journal).
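
The joint test described above can be sketched as follows. The data are simulated and the characteristic names are placeholders; the sketch only illustrates the mechanics of regressing the instrument on cell-level characteristics and forming the joint F-statistic from restricted and unrestricted residual sums of squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical balance check: regress the instrument (share of required
# courses offered online per major-by-cohort cell) on cell-level student
# characteristics and test their joint significance with an F-statistic.
n_cells = 39                             # 13 majors x 3 cohorts
chars = rng.normal(size=(n_cells, 3))    # e.g., % low-income, mean score, % ELL
instrument = rng.normal(size=n_cells)    # unrelated to chars by construction

X_full = np.column_stack([np.ones(n_cells), chars])
X_null = np.ones((n_cells, 1))

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

q = chars.shape[1]                # number of restrictions tested
dof = n_cells - X_full.shape[1]   # residual degrees of freedom
F = ((rss(X_null, instrument) - rss(X_full, instrument)) / q) / (
    rss(X_full, instrument) / dof)
print(round(F, 2))  # compare to the F(q, dof) critical value
```

A significant F would indicate that the characteristics jointly predict the instrument, which is the pattern the paper finds (and then traces to a single major) for lower division course offerings.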

To interrogate the second threat to independence (online course offerings correlated with the provision of other services), we interviewed departmental administrators in three departments. All interviewees stated that the decision to offer courses online was made by individual instructors without departmental influence. These interviews indicated that instructors had two major incentives for offering courses online: (a) geographic flexibility when teaching courses and (b) small grants offered by the state system for putting courses online. Administrators stated that both incentives were available to instructors across all departments and that decisions to offer courses online were instructor-driven and idiosyncratic. The administrators also confirmed that increases in online course offerings were not paired with additional student support or changes to departmental admissions requirements and were not correlated with cohort size. The interviews imply that decisions to offer courses online were not driven by administrators attempting to address student needs or affect student outcomes. Conditional on major- and cohort-fixed effects, we believe that the offering of online classes was essentially exogenous. This supports our IV modeling approach.

Fourth, the instrument must meet the exclusion restriction. That is, the instrument (percentage of major-required courses offered online) must not be directly related to the outcome variables (graduation rate, time-to-degree) except through its relationship with the endogenous predictor of interest (i.e., online course taking). There is one main path by which this assumption could be invalid: students could select into specific majors within a cohort based on online course offerings. Because access to advising, curricular structures, and peer composition varies across majors, if students select into majors based on the availability of online classes, the instrument might affect outcomes via a path other than the endogenous regressor. As initial major selections are predominantly made before or during the college application process, and pre-enrolled students do not have reliable or systematic access to information about which classes are offered online, initial major choices are unlikely to be influenced by online course offerings. However, students can change majors after enrolling, and such subsequent choices may be affected by the availability of online course offerings. We examine the extent of this threat through two separate robustness checks.

First, we interrogate whether there is any evidence of students meaningfully sorting into majors within cohorts based on online class offerings by examining the relationships between the percentage of students who switched into a given major and the percentage of all major-required courses offered online/major-required lower division courses offered online in that major. The results show that the relationships are small and not significant (Supplementary Appendix Table A7 in the online version of the journal). Second, we examine the extent of the potential threat of students selecting a major within a cohort based on course offerings by replicating our main analyses on a subsample of students who never changed majors (i.e., students who started and ended with the same major; Supplementary Appendix Table A8 in the online version of the journal). Results are similar in direction compared with the main results, though not statistically significant, which is likely due to the reduced statistical power. We note that for these analyses, there is a relatively weak first stage, which could lead to biased estimates from this instrumental variable approach (Staiger & Stock, 1997).

The final assumption of instrumental variables analyses is that increasing levels of, or opportunities to receive, treatment do not encourage or induce some individuals to get less treatment. This is commonly referred to as monotonicity and implies that there are no defiers—subjects who get the treatment when assigned to the control group and do not receive treatment when assigned to the treatment group (Angrist et al., 1996; Imbens & Angrist, 1994). In this study, this translates to assuming that there is no one who would have taken more online courses if provided reduced online course offerings, but not taken more if provided increased online offerings. Importantly, this does not preclude the presence of always-takers (those who will get the treatment no matter their assignment status) and never-takers (those who will not get the treatment no matter their assignment status).
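
The taxonomy of always-takers, never-takers, compliers, and defiers can be made concrete with a minimal sketch. The potential-treatment pairs below are hypothetical, not estimated from the study's data: d0 and d1 denote a student's online course taking under low (z = 0) and expanded (z = 1) offerings.

```python
# Monotonicity requires d1 >= d0 for everyone, i.e., no defiers.
students = {
    "always_taker": (1, 1),  # takes online courses either way
    "never_taker":  (0, 0),  # never takes online courses
    "complier":     (0, 1),  # induced to take online courses by the offer
    "defier":       (1, 0),  # ruled out by the monotonicity assumption
}

def classify(d0, d1):
    if d0 == 1 and d1 == 1:
        return "always_taker"
    if d0 == 0 and d1 == 0:
        return "never_taker"
    if d0 == 0 and d1 == 1:
        return "complier"
    return "defier"

for label, (d0, d1) in students.items():
    assert classify(d0, d1) == label
    # Monotonicity holds for every type except the defier.
    assert (d1 >= d0) == (label != "defier")
```

Under this assumption, the IV estimand recovers the average effect for the compliers only, as discussed below.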

Although we cannot fully interrogate the assumption that there are no defiers, we have some evidence that monotonicity holds in this context. The primary mode through which monotonicity could be violated is by students switching majors—a student could feel that because his major offers too few online classes, he needs to switch into a major with more online classes. As a specific example, a student may feel that four classes available online is sufficient but two is too few. In this case, the student may switch into a major with more classes available online—being assigned to fewer offered online classes induces him to take more. (The other direction of switching, a student switching out of a major that he feels offers too many online classes, is less likely in this context because the vast majority of classes were offered in-person and most classes that were offered online were also offered in person.) As we note above when discussing the exclusion restriction, we explicitly test for this behavior in our data and do not find evidence of major switching based on online course availability.2

Summer Course Taking

As we noted when describing the setting of this study, most online classes at our focal college were offered in the summer. Thus, one mechanism through which offering classes online affects graduation outcomes could be summer course taking; online classes could induce students to enroll in summer terms when they otherwise would not have, which could increase graduation efficiency. Although our analytic models include the number of summer terms enrolled and the number of major-required classes taken over the summer, which allows us to compare the graduation outcomes of students with similar summer course-taking patterns, we interrogate how large a role summer enrollment plays in explaining the relationship between online course taking and graduation outcomes in two ways.

First, we predict number of summer terms enrolled and number of major-required courses taken over the summer using our reduced form models (using number of major-required classes offered in Year 1/all 4 years as the predictor of interest and controlling for student-level covariates and major- and cohort-fixed effects). These models, presented in Supplementary Appendix Table A9 in the online version of the journal, indicate that the offering of online major-required classes induced students to enroll in fewer summer terms. Second, we re-estimate our main instrumental variable models, but do not include the number of summer terms enrolled and the number of major-required classes taken over the summer as control variables. Comparing the results of these models to the results from our main models allows us to examine to what extent overall summer enrollment mediates the relationship between online course taking and graduation outcomes. These models, presented in Supplementary Appendix Table A10 in the online version of the journal, produce findings very similar to our main results. These findings indicate that overall summer enrollment is not the main mechanism by which online course taking affects graduation outcomes. This does not, however, indicate that summer course enrollment is not a mechanism explaining the relationship between online course offering and graduation outcomes, as online course offering could affect enrollment intensity or the timing of summer enrollment.

Students Enrolled in More Than One Major

Nearly 30% of students in our sample were officially enrolled in more than one of the 13 majors during our period of observation. There are two reasons, not mutually exclusive, for students to have multiple majors on record. Some students (28.8%) successfully fulfilled the requirements of more than one major and graduated as double or triple majors. Also, students could enroll sequentially in more than one major (major switchers). We found that 28.9% of students in our sample switched majors (not counting students who started as undeclared and thus had to “switch majors” to graduate). These students may pose a threat to the validity of our research design, as our key instrument (number of classes offered online in the student’s major) is measured with noise for these students.

In our main models, we include one observation per student per major; 29.4% of students have more than one observation because they enrolled in more than one of the 13 majors. We prefer this specification as it reflects the fact that students are subject to the requirements of every major in which they are ever enrolled. In addition to accounting for multiple observations per student by clustering our standard errors at the student level, we test the robustness of our results to this decision in three ways. First, as described above when discussing potential violations of the exclusion restriction in our instrumental variables approach, we limit our sample to students who start and end with the same major. Second, we randomly select only one observation per student for all students who were ever enrolled in more than one major. Finally, we include all observations for all students but weight students’ observations such that each student contributes equally to our estimates. All three approaches produce results that are similar in magnitude, sign, and significance to those from our preferred models (Supplementary Appendix Tables A11 and A12 in the online version of the journal).
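
The third robustness check, weighting observations so that each student contributes equally, can be sketched as follows; the student and major labels are invented for illustration.

```python
from collections import Counter

# With one observation per student-major pair, weight each observation by
# 1/k, where k is that student's number of observations, so every student
# contributes equally to the estimates.
obs = ["s1-bio", "s1-chem", "s2-econ", "s3-math", "s3-stat", "s3-cs"]

def student_of(o):
    return o.split("-")[0]

counts = Counter(student_of(o) for o in obs)
weights = {o: 1 / counts[student_of(o)] for o in obs}

totals = Counter()
for o, w in weights.items():
    totals[student_of(o)] += w
print(totals)  # each student's total weight is (numerically) 1
```
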

Missing Data Analysis

We use a listwise deletion approach as missingness was below 6.0% across all variables except for the admission score. Around 20% of students in our analytic sample did not have admission score data. Most of them (92.5%) were transfer students, who are not required to submit standardized test scores. We examined the sensitivity of our results to our decisions regarding missing data. First, we replicated our analyses on a sample that excludes all transfer students (Supplementary Appendix Table A13 in the online version of the journal). Second, we replicated our analyses using a dummy variable adjustment approach with the full sample of students who finished in one of the analyzed majors (Allison, 2002). All the results from the missing data analyses are consistent with the results of the main analysis in terms of the direction and strength of the coefficients (Supplementary Appendix Table A14 in the online version of the journal).
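
The dummy variable adjustment (Allison, 2002) can be sketched as follows. The scores are invented, and filling with 0 is one common choice of constant; the indicator lets the model absorb the mean difference for cases with missing scores.

```python
import math

# Hypothetical admission scores with missing values.
admission_scores = [1210.0, math.nan, 1350.0, math.nan, 1180.0]

# Replace missing values with a constant and add a missingness indicator,
# then include both the adjusted score and the indicator as regressors.
adjusted = [0.0 if math.isnan(s) else s for s in admission_scores]
missing_flag = [1 if math.isnan(s) else 0 for s in admission_scores]

print(adjusted)      # [1210.0, 0.0, 1350.0, 0.0, 1180.0]
print(missing_flag)  # [0, 1, 0, 1, 0]
```
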

Association With College Graduation Rates

We first examine if the offer of online classes is associated with changes in 4-year and 6-year graduation rates through our reduced form models (Table 2). Each 1% increase in the number of major-required courses offered online is related to a 1.2% greater chance of a student successfully graduating within 4 years, b = 1.233, SE = 0.42, p < .01. Similarly, a 1% increase in the proportion of first-year lower division major-required courses offered online is related to a 1.2% greater chance of successfully graduating within 4 years, b = 1.232, SE = 0.39, p < .01. We found no significant associations between the proportion of online courses offered and students’ likelihood of graduating within 6 years for either all major-required courses or all first-year lower division major-required courses.

Table 2 Relationship Between Online Course Offering and Graduation Rates and Time-to-Degree

To examine whether online course taking impacts 4- and 6-year graduation rates, we used an instrumental variables approach. We present the first- and second-stage estimates (Table 3). The first-stage analysis estimates the relationship between online course offering and online course taking and tests if the instrument is sufficiently strong to produce unbiased results. According to Staiger and Stock (1997), the estimate resulting from an instrumental variables approach can suffer from bias similar to that of naive OLS regression when the partial correlation between the instrument and the endogenous variable is small. Although there is no universal consensus on appropriate thresholds for F-statistics, studies suggest that in this context (one endogenous regressor, one instrument, and a relatively conservative level of bias/distortion), the F-value should be greater than 16.38 (Stock & Yogo, 2005). The F-statistic on the instrument of all major-required courses offered online in the first 4 years was smaller than this threshold, F(1,12) = 8.526, p < .01. The F-statistic on the instrument of major-required lower division courses offered online in the first year was larger than the threshold, F(1,12) = 74.823, p < .001. Because of the weak first-stage estimate in the first model, we interpret its results with a degree of caution.
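
The logic of the first-stage strength check and the just-identified IV estimate can be sketched with simulated data. This is a generic illustration, not the study's model: it omits covariates and fixed effects, uses made-up magnitudes, and for a single instrument computes the Wald ratio of the reduced form to the first stage, which is numerically identical to 2SLS.

```python
import numpy as np

rng = np.random.default_rng(2)

# z = courses offered online (instrument), d = courses taken online
# (endogenous), y = outcome; u is an unobserved confounder that biases OLS.
n = 5000
z = rng.uniform(0, 10, n)
u = rng.normal(0, 1, n)
d = 0.5 * z + u + rng.normal(0, 1, n)        # first stage: offering -> taking
y = 2.0 * d - 3.0 * u + rng.normal(0, 1, n)  # true causal effect of d is 2.0

def ols_slope(x, y):
    """Bivariate OLS slope and its squared t-statistic (the F for one IV)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    se = np.sqrt(resid @ resid / (len(y) - 2) / np.sum((x - x.mean()) ** 2))
    return beta[1], (beta[1] / se) ** 2

fs_slope, fs_F = ols_slope(z, d)   # first stage and its F-statistic
rf_slope, _ = ols_slope(z, y)      # reduced form
iv_estimate = rf_slope / fs_slope  # just-identified IV (Wald) estimate
print(fs_F > 16.38, round(iv_estimate, 1))  # strong instrument; estimate near 2.0
```

With a strong first stage (F well above the 16.38 threshold), the IV estimate recovers the true effect despite the confounder; with a weak first stage it would be biased toward OLS, which is the concern raised for the first model above.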

Table 3 Relationship Between Online Course Taking and 4- and 6-Year Graduation Rates

Table 3 shows estimates of the effects of online course enrollment on graduation rates using the IV approach. A 1% increase in the proportion of major-required courses taken online is related to a 14.4% greater chance of successfully graduating within 4 years, b = 14.375, SE = 7.78, p < .10. Similarly, a 1% increase in the proportion of first-year lower division major-required courses taken online is related to a 9.0% greater chance of successfully graduating within 4 years, b = 8.955, SE = 2.66, p < .001. There were no significant associations between the proportion of online courses taken and students’ likelihood of graduating within 6 years for either all major-required courses or all first-year lower division major-required courses.

As is the case in all studies using instrumental variables, the effects we estimate are local average treatment effects (LATE). This is the average treatment effect for the subpopulation of compliers (here, those students who are induced to take more classes online because more are offered; Angrist et al., 1996; Imbens & Angrist, 1994). These estimates do not necessarily apply to the always-takers (those who would take online classes no matter what). The LATE can have weak external validity, and its interpretation depends on the specific instrument used. The effects we estimate might not hold if we were to, for example, randomly assign students to take online courses or if we were to leverage a different naturally occurring source of exogenous variation in online course taking. However, we believe that in this specific instance, the LATE is of great practical interest, as it answers a question that is informative to many departmental administrators.

Examining Associations With Time-to-Degree

We next examine effects on time-to-degree for students who graduated within 6 years. Table 2 presents estimates of the effect of offering major-required classes online on time-to-degree. A 1% increase in the proportion of major-required courses offered online is associated with a decrease of 1.3% of a year in student time-to-degree, corresponding to approximately 0.16 months. A 1% increase in the proportion of first-year lower division major-required courses offered online is associated with a decrease of 1.6% of a year, corresponding to approximately 0.20 months.
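
The conversion from coefficients expressed as fractions of a year to months is simple arithmetic. Using the rounded coefficients reported above (so the month figures are approximate):

```python
# Time-to-degree coefficients are fractions of a year; multiplying by 12
# converts them to months.
coef_all_required = -0.013      # 1.3% of a year per 1% more courses online
coef_lower_division = -0.016    # 1.6% of a year

months_all_required = coef_all_required * 12      # about -0.16 months
months_lower_division = coef_lower_division * 12  # about -0.19 months
print(round(months_all_required, 2), round(months_lower_division, 2))
```

The small discrepancy with the reported 0.20 months reflects rounding of the underlying coefficient.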

We next tested whether taking online courses impacts students’ time to degree by examining the subgroup of students who graduated college within 6 years, again using an instrumental variables approach. Table 4 shows the first- and second-stage estimates. The results from first-stage analysis were similar to those of the full sample. The F-statistic on the instrument of major-required lower division courses offered online in the first year was larger than the threshold, F(1,12) = 76.212, p < .001, but the F-statistic on the instrument of all major-required courses offered online in the first 4 years was slightly smaller than the established threshold, F(1,12) = 8.009, p < .01.

Table 4 Relationship Between Online Course Taking and Student Time-to-Degree

Notably, results from the second-stage analysis show that students’ enrollment across all online courses in their first 4 years is not significantly associated with students’ time-to-degree. However, students’ enrollment in lower division online courses during their first year is significantly associated with shorter time-to-degree. A 1% increase in the proportion of lower division courses taken online is associated with a decrease of 12.0% of a year in time-to-degree, corresponding to approximately 1.4 months.

Heterogeneous Effects on Student Graduation Rates and Time-to-Degree

To examine heterogeneous effects for student populations traditionally at-risk in college environments (i.e., first-generation college students, low-income students, students with weak academic preparation) on student graduation rates and time-to-degree, we included interaction terms in the instrumental variables analysis (Table 5).

Table 5 Heterogeneity Analysis of the Relationships of Taking Online Courses

Graduation Rates

There was no evidence of heterogeneous impacts by first-generation status or academic preparation. For instance, for non-first-generation college students, a 1% increase in online course enrollment in major-required courses during the first 4 years is associated with a 14.0% greater chance of successfully graduating within 4 years, which is not significantly different from the association for first-generation college students (Column 2 in Table 5, Panel A). For students with strong academic preparation, enrolling in more major-required lower division courses during the first year is associated with a higher chance of graduating in 4 years, b = 9.823, SE = 3.30, p < .01, and the association is not significantly different from that for students with weak academic preparation (Column 2 in Table 5, Panel C). In contrast, heterogeneous impacts of online course taking were evident by student low-income status. For non-low-income students, a 1% increase in online course enrollment in major-required lower division courses during the first year is associated with an 11.6% greater chance of successfully graduating within 4 years. Notably, this increased chance of successfully graduating within 4 years is lower for low-income students, b = −5.176, SE = 2.20, p < .05, when compared with non-low-income students (Column 2 in Table 5, Panel B).

Time-to-Degree

Heterogeneity analyses indicated that online course enrollments shortened time to college graduation for first-generation college students, low-income students, and students with weak academic preparation. However, this reduction in time-to-degree is smaller (but greater than zero) for at-risk students compared with their non-at-risk counterparts. For instance, for non-first-generation college students, a 1% increase in online course enrollment in major-required lower division courses during the first year was associated with a decrease of 14.6% of a year in student time-to-degree, corresponding to approximately 1.8 months, b = −14.623, SE = 4.19, p < .001 (Column 5 in Table 5, Panel A). However, for first-generation college students, this time-to-degree decrease is approximately 0.7 months smaller when compared with non-first-generation college students, b = 5.465, SE = 1.81, p < .01 (Column 5 in Table 5, Panel A). Similarly, for non-low-income students, a 1% increase in online course enrollment in major-required lower division courses during the first year was significantly associated with a decrease of 18.5% of a year in student time-to-degree, which corresponds to approximately 2.2 months, b = −18.489, SE = 4.85, p < .001 (Column 5 in Table 5, Panel B). However, for low-income students, this time-to-degree decrease is approximately 1.5 months smaller when compared with non-low-income students, b = 12.205, SE = 4.19, p < .01.
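
The subgroup contrasts reported above can be reconstructed from the coefficients in the text: the effect for first-generation students is the baseline coefficient plus the interaction term, and dividing by 100 and multiplying by 12 converts percent-of-a-year units to months.

```python
# Coefficients taken from the text (Column 5 in Table 5, Panel A).
base = -14.623         # non-first-generation students, % of a year
interaction = 5.465    # offset for first-generation students

effect_first_gen = base + interaction   # -9.158% of a year
months_baseline = base / 100 * 12       # about -1.8 months
months_gap = interaction / 100 * 12     # about 0.7 months smaller
print(round(months_baseline, 1), round(months_gap, 1))
```

The same arithmetic applied to Panel B (base of -18.489, interaction of 12.205) reproduces the 2.2-month and 1.5-month figures.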

Scholarly Significance

This study examines more distal outcomes of online course taking. We hope to inform educational administrators in their decisions about whether to retain some online course offerings in their course portfolios after the COVID-19 pandemic. Overall, this study extends the research base in three key ways.

First, most extant research has examined proximal college student outcomes such as course completion, course performance, and subsequent course success (e.g., Alpert et al., 2016; Bowen et al., 2014; Fischer et al., 2019; Fischer, Xu, et al., 2020). Although these outcomes are important, especially the extent to which they measure student learning, they primarily serve as indicators of progress toward outcomes with important economic implications, such as eventual graduation and time-to-degree. This study directly examined such distal college success factors and thus contributes to a more nascent research base.

Second, most existing evidence of the effects of online course taking is situated at community colleges or for-profit universities (e.g., Bettinger et al., 2017; Kaupp, 2012; Xu & Jaggars, 2013, 2014). The student bodies and institutional contexts of community colleges and for-profit colleges are different from four-year institutions such that findings from these schools might not generalize to the four-year context. Thus, this study extends our understanding on distal effects of online course taking to residential four-year colleges.

Third, students select into online courses for several reasons, many of which are also related to the students’ expected outcomes. Random assignment to online courses is difficult to implement. As the few studies that have done so have only assigned students to a single online course (Alpert et al., 2016; Bowen et al., 2014; Figlio et al., 2013; Joyce et al., 2015), it is difficult to estimate causal effects of online course taking on students’ more distal college outcomes. Failure to fully account for selection into online courses can bias estimates of the effects of online course taking. Indeed, we see evidence of this when comparing our baseline OLS estimates with our instrumental variables estimates (Table A1). When we do not leverage the exogenous variation in the number of online courses offered across majors over time, taking online courses appears to be, if anything, associated with a lower probability of graduating within 4 years and a longer time-to-degree. We find the opposite in our estimates in which we instrument online course taking with online course offerings.

Conclusion and Implications

The most important finding of this study is the following: Online course taking is associated with more efficient college graduation; students who are given the opportunity to take classes online graduate more quickly than those who are not. We also found that online course taking is associated with a higher likelihood of successfully graduating college within 4 years. Although the magnitudes of the coefficients need to be interpreted with caution, our results are robust to a number of model specifications and analytic strategies. The consistency of our results is important, as each of our models presents unique limitations.

These are promising findings. Despite somewhat lower student performance in online courses compared with their corresponding face-to-face courses (Bettinger et al., 2017; Figlio et al., 2013; Fischer, Xu, et al., 2020; Kaupp, 2012; Xu & Jaggars, 2013, 2014), our study finds that online courses may provide distal benefits, helping students graduate college efficiently. Notably, our results are consistent with the limited existing quasi-experimental research examining online course-taking patterns and time-to-degree using nationally representative data sets (Ortagus, 2018; Shea & Bidjerano, 2014; Sublett, 2019). This contrasts with research on distal impacts of online courses situated in single statewide community college systems, which has found online course taking to be associated with lower college persistence, transfer rates, and graduation rates (Huntington-Klein et al., 2017; Shea & Bidjerano, 2018; Xu & Jaggars, 2011). This highlights the need for research to examine the effectiveness of online courses in various contexts.

To put the findings from this study in context, in the last decade there have been several institutional strategies put forth to reduce undergraduate time-to-degree, such as limiting credit requirements for programs, publishing term-by-term road maps for undergraduates, and guaranteeing the transfer of general education curriculum (e.g., Complete College America, 2014). These represent a mix of informational (e.g., degree maps) and programmatic (e.g., credit requirements) solutions. This study identifies another programmatic strategy for decreasing time-to-degree: offering more online classes.

Academic research on the effectiveness of such strategies is scant. Although there is a fairly large literature examining individual characteristics or attributes associated with longer degree completion times (e.g., Behr and Theune (2016) identify that off-campus work extends time-to-degree approximately one term, and Yue and Fu (2017) find that double-majoring, entering college undeclared, and switching majors is associated with extended time to degree), there is less work that looks at the effects of policies and programs on time-to-degree.

A few recent studies, however, do show that institutional policies can significantly affect time-to-degree. For example, Baker et al. (2021) find that associate degrees specifically structured to support transfer to 4-year schools (thus reducing the complexity of making course choices) reduce time to degree at the 4-year college by 0.03 to 0.16 semesters. Brownback and Sadoff (2020) find that an intervention to increase enrollment in summer courses for students in community college leads to a 31% increase in graduation within 1 year and no increase within 2 years. Similar to the results in our study, they conclude that the effect is on degree acceleration and not overall attainment. Most akin to our study is Sublett’s (2019) analysis of community college students’ time-to-degree. In that study, taking a distance education course in the first year reduced time-to-degree for both an associate degree and a bachelor’s degree by about 3 months. This reduction is similar in magnitude to the one we find for increases in online course enrollment in major-required lower division courses during the first year.

Efforts to improve existing online courses by, for instance, providing students with more opportunities to improve their self-regulation skills (e.g., Broadbent & Poon, 2015; You, 2016), are laudable. However, departments should recognize that online courses may bring distal benefits even if student performance lags slightly in them. The added flexibility in course taking for both the student (e.g., to accommodate their schedules) and the institution (e.g., to address capacity constraints) increases students’ overall access to opportunities to earn course credits, which, in turn, may improve students’ degree efficiency.

Students who are generally considered at-risk in college environments also show a small advantage from enrollment in online courses in graduating more efficiently. Compared with their non-at-risk counterparts, first-generation college students, low-income students, and students with weaker academic preparation have considerably smaller, but still positive, benefits of online course enrollments on distal college success factors. These reduced benefits of online course enrollment are in line with prior research suggesting that online course environments pose additional challenges to at-risk students (e.g., Figlio et al., 2013; Kaupp, 2012; Xu & Jaggars, 2013, 2014). Nonetheless, this study indicates that online courses can potentially benefit students on distal college success factors without adverse effects for students who are traditionally at-risk in college environments.

Overall, our study intends to inform departments at residential universities that traditionally offer few or no online courses, as the effects presented in this study were found in departments that offered on average about 3% of their courses online, with an average of about 8% of students ever enrolling in an online class. Thus, even departments that did not typically include online courses in their teaching portfolio prior to the COVID-19 pandemic may consider retaining or increasing some of their online courses to increase the likelihood of students successfully completing course requirements and graduating.

Limitations

Although online courses are increasingly important for colleges, many 4-year schools, particularly selective ones, typically do not offer a significant proportion of their courses online. Because the structure and quality of online courses have improved significantly over the past decade, and because COVID-19 may change how administrators and universities value online instruction (Sin & Muthu, 2015; Xu & Xu, 2019), it may be important to verify the generalizability of our results with more recent cohorts. We would not be surprised to see the benefits of online course enrollments replicated in settings that offer even higher quality online courses.

This study examines the effects of taking online courses to fulfill major requirements but does not examine the effects of online courses taken for other reasons, such as to fulfill general education requirements. If course enrollment decisions vary across contexts (e.g., if students are more willing to opt into online courses for general education requirements than for major requirements or online courses were perceived to be of poor/high quality at certain departments), we may also find that effects of offering online courses differ across contexts.

As with any instrument that does not rely on random assignment, it is difficult to fully interrogate the exclusion restriction. We cannot entirely rule out the possibility that the instrument affects the outcomes we examine through channels other than the number of online courses taken. In addition, our instrument is weak for the smaller sample of students who started and ended with the same major, and somewhat weak when examining online courses offered/taken across the first four years of a student's enrollment. We believe there are two important contextual explanations for this. First, because online course offerings were relatively sparse at this school in this time period, our instrument lacks the type of variation one would like to see, which may limit our ability to identify positive effects. Second, our sample range falls at the beginning of the expansion of online course offerings at this institution. Future research that uses later starting cohorts may find larger variation in online course offerings and yield more conclusive results.
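The weak-instrument concern discussed above can be illustrated with a small simulation. The sketch below is purely didactic (the variable names, coefficients, and sample size are hypothetical, not values from our data): it estimates a causal effect by two-stage least squares and computes the first-stage F-statistic, which by the Staiger and Stock (1997) rule of thumb should exceed roughly 10.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical simulated data: z is the instrument (online course offerings),
# x the endogenous regressor (online courses taken), y the outcome.
ability = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                            # instrument, exogenous
x = 2.0 * z + 0.5 * ability + rng.normal(size=n)  # first-stage relationship
y = 0.2 * x + 0.8 * ability + rng.normal(size=n)  # true causal effect: 0.2

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upward because ability drives both x and y.
ols_slope = ols(np.column_stack([np.ones(n), x]), y)[1]

# First stage: regress x on the instrument; the first-stage F-statistic
# is the standard weak-instrument diagnostic.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)
rss = np.sum((x - x_hat) ** 2)
tss = np.sum((x - x.mean()) ** 2)
f_stat = ((tss - rss) / 1) / (rss / (n - 2))

# Second stage: regress y on the fitted values; the slope is the 2SLS
# estimate, which recovers the true effect despite the confounder.
iv_slope = ols(np.column_stack([np.ones(n), x_hat]), y)[1]

print(f"first-stage F: {f_stat:.0f}, OLS: {ols_slope:.3f}, 2SLS: {iv_slope:.3f}")
```

When the instrument is sparse, as in our setting, the first stage explains little variation in x, the F-statistic shrinks, and the 2SLS estimate becomes noisy; that is the sense in which limited online course offerings constrain what the instrument can identify.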

Finally, although course requirements represent the general rules for earning a degree in a certain major, they are, in certain cases, somewhat flexible. Thus, our analyses represent a noisy estimate of the effects of taking online courses on time-to-degree. If certain students (those who are especially savvy or those who especially struggle to complete coursework) petition to use specific online courses to fulfill their major requirements, our results could be biased upward or downward. Although this is a legitimate concern, large state schools such as the one we examined typically allow relatively little leeway for bespoke curricular adjustments based on student petitioning. That said, our analysis is based on a single institution, which may limit its generalizability; the increased availability of large-scale institutional data, however, encourages replication analyses at other institutions (Fischer, Pardos, et al., 2020).

Promising Directions for Future Research

The encouraging results from this study motivate several directions for future research. First, this study examines fully online courses that are mostly asynchronous at one institution. With the number of online courses growing rapidly, continued studies of this type at other institutions can confirm whether the benefits for graduation rates and time-to-degree found in this study hold across different college settings (e.g., universities that offer fully online programs) and varied online course formats (e.g., blended learning/hybrid courses). In addition, although we show that online course taking, especially early in one's academic trajectory, can shorten time-to-degree, distal efficiency must be considered in tandem with the proximal penalties to academic performance attributed to online courses when compared with traditional face-to-face courses (e.g., Bettinger et al., 2017; Figlio et al., 2013; Fischer, Xu, et al., 2020; Xu & Jaggars, 2013, 2014). Therefore, an analysis comparing tuition savings and lifetime earnings for those who graduate earlier due to online course taking against individuals who learn, and possibly pay, more by taking longer and enrolling in face-to-face courses could shed light on the overall costs and benefits.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Teaching and Learning Research Center at the University of California, Irvine, the National Science Foundation through the EHR Core Research Program (ECR), Award 1535300, and the Andrew W. Mellon Foundation, Award 1806-05902. The views contained in this article are those of the authors, and not of their institutions, the National Science Foundation, or the Andrew W. Mellon Foundation.

ORCID iDs
Christian Fischer https://orcid.org/0000-0002-8809-2776

Mark Warschauer https://orcid.org/0000-0002-6817-4416

References

Abadie, A., Athey, S., Imbens, G. W., Wooldridge, J. (2017). When should you adjust standard errors for clustering? (NBER Working Paper No. 24003). National Bureau of Economic Research.
Allison, P. D. (2002). Quantitative applications in the social sciences: Missing data. SAGE. https://doi.org/10.4135/9781412985079
Alpert, W. T., Couch, K. A., Harmon, O. R. (2016). A randomized assessment of online learning. American Economic Review, 106(5), 378–382.
Angrist, J. D., Imbens, G. W., Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444–455.
Angrist, J. D., Krueger, A. B. (2001). Instrumental variables and the search for identification: From supply and demand to natural experiments. Journal of Economic Perspectives, 15(4), 69–85.
Attewell, P., Heil, S., Reisel, L. (2011). Competing explanations of undergraduate noncompletion. American Educational Research Journal, 48(3), 536–559.
Bailey, E. J., Duffrin, M., Carels, R., O'Brien, K. (2019). The "Freshman 15": Exploring weight issues, eating patterns, psychological, mental health, stress, and weight loss prevention programs among college students at East Carolina University. Journal of Community Medicine & Public Health Care, 6, 044.
Bailey, T. R., Jaggars, S. S., Jenkins, D. (2015). Redesigning America's community colleges. Harvard University Press.
Baker, R., Friedmann, E., Kurlaender, M. (2021). Improving the community college transfer pathway to the baccalaureate: The effect of California's associate degree for transfer (EdWorkingPaper 21-359, Annenberg Institute Working Paper Series). Brown University. https://doi.org/10.26300/569x-2a48
Baker, R., Xu, D., Park, J., Yu, R., Li, Q., Cung, B., Fischer, C., Rodriguez, F., Warschauer, M., Smyth, P. (2020). The benefits and caveats of using clickstream data to understand student self-regulatory behaviors: Opening the black box of learning processes. International Journal of Educational Technology in Higher Education, 17, 1–24.
Bartley, S. J., Golek, J. H. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Journal of Educational Technology & Society, 7(4), 167–175.
Bean, J., Metzner, B. (1985). A conceptual model of nontraditional undergraduate student attrition. Review of Educational Research, 55, 485–540.
Behr, A., Theune, K. (2016). The causal effect of off-campus work on time to degree. Education Economics, 24(2), 189–209.
Bettinger, E. P., Baker, R. B. (2014). The effects of student coaching: An evaluation of a randomized experiment in student advising. Educational Evaluation and Policy Analysis, 36(1), 3–19.
Bettinger, E. P., Fox, L., Loeb, S., Taylor, E. S. (2017). Virtual classrooms: How online college courses affect student success. American Economic Review, 107(9), 2855–2875.
Bhaskaran, S. S., Lu, K., Aali, M. A. (2017). Student performance and time-to-degree analysis by the study of course-taking patterns using J48 decision tree algorithm. International Journal of Modelling in Operations Management, 6(3), 194–213.
Bound, J., Lovenheim, M., Turner, S. (2012). Increasing time to baccalaureate degree in the United States. Education Finance and Policy, 7(4), 375–424.
Bowen, W. G., Chingos, M. M., Lack, K. A., Nygren, T. I. (2014). Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management, 33(1), 94–111.
Boyle, F., Kwon, J., Ross, C., Simpson, O. (2010). Student–student mentoring for retention and engagement in distance education. Open Learning: The Journal of Open, Distance and e-Learning, 25(2), 115–130.
Broadbent, J., Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1–13.
Brownback, A., Sadoff, S. (2020). College summer school: Educational benefits and enrollment preferences. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3622279
Bruffaerts, R., Mortier, P., Auerbach, R. P., Alonso, J., Hermosillo De la Torre, A. E., Cuijpers, P., Demyttenaere, K., Ebert, D. D., Green, J. G., Hasking, P., Stein, D. J., Ennis, E., Nock, M. K., Pinder-Amaker, S., Sampson, N. A., Vilagut, G., Zaslavsky, A. M., Kessler, R. C., & WHO WMH-ICS Collaborators. (2019). Lifetime and 12-month treatment for mental disorders and suicidal thoughts and behaviors among first year college students. International Journal of Methods in Psychiatric Research, 28(2), Article e1764.
Chen, R. (2012). Institutional characteristics and college student dropout risks: A multilevel event history analysis. Research in Higher Education, 53(5), 487–505.
Complete College America. (2014). Four-year myth: Make college more affordable. Restore the promise of graduating on time. Complete College America.
Cunningham, S. (2021). Causal inference: The mixtape. Yale University Press.
Duncan, G. J., Murnane, R. J. (2014). Restoring opportunity: The crisis of inequality and the challenge for American education. Harvard Education Press.
Ewert, S. (2010). Male and female pathways through four-year colleges: Disruption and sex stratification in higher education. American Educational Research Journal, 47(4), 744–773.
Figlio, D., Rush, M., Yin, L. (2013). Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics, 31(4), 763–784. https://doi.org/10.3386/w16089
Fischer, C., Pardos, Z., Baker, R. S., Williams, J. J., Smyth, P., Yu, R., Slater, S., Baker, R., Warschauer, M. (2020). Mining big data in education: Affordances and challenges. Review of Research in Education, 44(1), 130–160. https://doi.org/10.3102/0091732X20903304
Fischer, C., Xu, D., Rodriguez, F., Denaro, K., Warschauer, M. (2020). Effects of course modality in summer session: Enrollment patterns and student performance in face-to-face and online classes. The Internet and Higher Education, 45, 1–9. https://doi.org/10.1016/j.iheduc.2019.100710
Fischer, C., Zhou, N., Rodriguez, F., Warschauer, M., King, S. (2019). Improving college student success in organic chemistry: Impact of an online preparatory course. Journal of Chemical Education, 96(5), 857–864. http://dx.doi.org/10.1021/acs.jchemed.8b01008
Ginder, S. A., Kelly-Reid, J. E., Mann, F. B. (2017). Graduation rates for selected cohorts, 2008-13; outcome measures for cohort year 2008; student financial aid, academic year 2015-16; and admissions in postsecondary institutions, fall 2016: First look (Provisional data, NCES 2017-150rev). National Center for Education Statistics.
Grabowski, C., Rush, M., Ragen, K., Fayard, V., Watkins-Lewis, K. (2016). Today's non-traditional student: Challenges to academic success and degree completion. Inquiries Journal, 8(3), 12.
Gurantz, O. (2015). Who loses out? Registration order, course availability, and student behaviors in community college. The Journal of Higher Education, 86(4), 524–563.
Huntington-Klein, N., Cowan, J., Goldhaber, D. (2017). Selection into online community college courses and their effects on persistence. Research in Higher Education, 58, 244–269.
Imbens, G. W., Angrist, J. D. (1994). Identification and estimation of local average treatment effects. Econometrica, 62(2), 467–475.
Jenkins, D., Rodriguez, O. (2013). Access and success with less: Improving productivity in broad-access postsecondary institutions. The Future of Children, 187–209.
Joyce, T., Crockett, S., Jaeger, D. A., Altindag, O., O'Connell, S. D. (2015). Does classroom time matter? Economics of Education Review, 46, 64–77.
Kaupp, R. (2012). Online penalty: The impact of online instruction on the Latino-White achievement gap. Journal of Applied Research in the Community College, 19(2), 3–11.
Kramer, D. A., Holcomb, M. R., Kelchen, R. (2018). The costs and consequences of excess credit hours policies. Educational Evaluation and Policy Analysis, 40(1), 3–28.
Kurlaender, M., Jackson, J., Howell, J. S., Grodsky, E. (2014). College course scarcity and time to degree. Economics of Education Review, 41, 24–39.
Li, Q., Baker, R., Warschauer, M. (2020). Using clickstream data to measure, understand, and support self-regulated learning in online courses. The Internet and Higher Education, 45. https://doi.org/10.1016/j.iheduc.2020.100727
McFarland, J., Hussar, B., de Brey, C., Snyder, T., Wang, X., Wilkinson-Flicker, S., Gebrekristos, S., Zhang, J., Rathbun, A., Barmer, A., Bullock Mann, F., Hinz, S. (2017). The condition of education 2017 (NCES 2017-144). U.S. Department of Education, National Center for Education Statistics.
Moretti, E. (2004). Estimating the social return to higher education: Evidence from longitudinal and repeated cross-sectional data. Journal of Econometrics, 121(1–2), 175–212.
Murnane, R. J., Willett, J. B. (2010). Methods matter: Improving causal inference in educational and social science research. Oxford University Press.
National Academy of Sciences, National Academy of Engineering, & Institute of Medicine. (2007). Rising above the gathering storm: Energizing and employing America for a brighter economic future. National Academies Press.
Ortagus, J. C. (2018). National evidence of the impact of first-year online enrollment on postsecondary students' long-term academic outcomes. Research in Higher Education, 59(8), 1035–1058.
Pearson Foundation. (2011). Pearson Foundation community college student survey: Summary of California results. https://www.yumpu.com/en/document/read/31004416/community-college-student-survey-pearson-foundation
Rovai, A. P. (2003). In search of higher persistence rates in distance education online programs. The Internet and Higher Education, 6(1), 1–16.
Scott-Clayton, J. (2015). The shapeless river: Does a lack of structure inhibit students' progress at community colleges? In Castleman, B. L., Schwartz, S., Baum, S. (Eds.), Decision making for student success: Behavioral insights to improve college access and persistence (pp. 102–123). Routledge.
Shapiro, D., Dundar, A., Wakhungu, P. K., Yuan, X., Nathan, A., Hwang, Y. (2016). Time to degree: A national view of the time enrolled and elapsed for associate and bachelor's degree earners (Signature Report No. 11). https://eric.ed.gov/?id=ED580231
Shea, P., Bidjerano, T. (2014). Does online learning impede degree completion? A national study of community college students. Computers & Education, 75, 103–111.
Shea, P., Bidjerano, T. (2018). Online course enrollment in community college and degree completion: The tipping point. The International Review of Research in Open and Distributed Learning, 19(2), 282–293. https://doi.org/10.19173/irrodl.v19i2.3460
Sin, K., Muthu, L. (2015). Application of big data in education data mining and learning analytics: A literature review. ICTACT Journal on Soft Computing, 5(4), 1035–1049.
Small, D. S., Tan, Z., Ramsahai, R. R., Lorch, S. A., Brookhart, M. A. (2017). Instrumental variable estimation with a stochastic monotonicity assumption. Statistical Science, 32(4), 561–579. https://doi.org/10.1214/17-STS623
Smith, K., Byrd, C. N. (2015). 2014 Joint Statistical Report summary. Summer Academe: A Journal of Higher Education, 9, 1–12.
Staiger, D., Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65, 557–586.
Stock, J. H., Yogo, M. (2005). Testing for weak instruments in linear IV regression. In Andrews, D. W. K., Stock, J. H. (Eds.), Identification and inference for econometric models: Essays in honor of Thomas Rothenberg (pp. 80–108). Cambridge University Press.
Stone, C., O'Shea, S., May, J., Delahunty, J., Partington, Z. (2016). Opportunity through online learning: Experiences of first-in-family students in online open-entry higher education. Australian Journal of Adult Learning, 56(2), 146–169.
Sublett, C. (2019). Examining distance education coursetaking and time-to-completion among community college students. Community College Journal of Research and Practice, 43(3), 201–215.
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125.
Tinto, V. (1987). Leaving college. University of Chicago Press.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition. University of Chicago Press.
University of California. (2021). Admission by exam. https://admission.universityofcalifornia.edu/admission-requirements/freshman-requirements/admission-by-exam.html
U.S. Department of Education. (2017). Digest of education statistics 2017. https://nces.ed.gov/programs/digest/d17/tables/dt17_326.10.asp
U.S. Department of Health & Human Services. (2021). U.S. federal poverty guidelines used to determine financial eligibility of certain federal programs. https://aspe.hhs.gov/poverty-guidelines
Watson, J., Gemin, B. (2008). Using online learning for at-risk students and credit recovery: Promising practices in online learning. North American Council for Online Learning.
Witteveen, D., Attewell, P. (2021). Delayed time-to-degree and post-college earnings. Research in Higher Education, 62, 230–257.
Xu, D., Jaggars, S. S. (2011). Online and hybrid course enrollment and performance in Washington State community and technical colleges (CCRC Working Paper No. 31). Community College Research Center, Columbia University.
Xu, D., Jaggars, S. S. (2013). The impact of online learning on students' course outcomes: Evidence from a large community and technical college system. Economics of Education Review, 37, 46–57. https://doi.org/10.1016/j.econedurev.2013.08.001
Xu, D., Jaggars, S. S. (2014). Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. The Journal of Higher Education, 85(5), 633–659. https://doi.org/10.1080/00221546.2014.11777343
Xu, D., Solanki, S., McPartlan, P., Sato, B. (2018). EASEing students into college: The impact of multidimensional support for underprepared students. Educational Researcher, 47(7), 435–450.
Xu, D., Xu, Y. (2019). The promises and limits of online higher education: Understanding how distance education affects access, cost, and quality. American Enterprise Institute. http://www.aei.org/publication/the-promises-and-limits-of-online-higher-education/
You, J. W. (2016). Identifying significant indicators using LMS data to predict course achievement in online learning. The Internet and Higher Education, 29, 23–30.
Yue, H., Fu, X. (2017). Rethinking graduation and time to degree: A fresh perspective. Research in Higher Education, 58(2), 184–213.
Zarifa, D., Kim, J., Seward, B., Walters, D. (2018). What's taking you so long? Examining the effects of social class on completing a bachelor's degree in four years. Sociology of Education, 91(4), 290–322. https://doi.org/10.1177/0038040718802258

Authors

CHRISTIAN FISCHER is an assistant professor at the Hector Research Institute of Education Sciences and Psychology at the University of Tübingen, Germany. His research examines pathways to improve teaching and learning, in particular through the use of digital technologies.

RACHEL BAKER is an associate professor at the University of California, Irvine’s School of Education. She studies how institutional and state policies affect student decision making in higher education with the aim of increasing access, persistence, and success for traditionally underserved groups.

QIUJIE LI is a postdoctoral scholar in the School of Education at the University of California, Irvine. Her research agenda utilizes motivation and self-regulation theories and learning analytics methods to describe, explain, and improve online learning in higher education.

GABE AVAKIAN ORONA is a PhD candidate in Education at the University of California, Irvine. His research focuses on measurement and assessment in higher education. Broadly, his interest is in designing and analyzing assessments relating to the aims of higher education and using these to study intellectual development in college.

MARK WARSCHAUER is a professor of education at the University of California, Irvine, where he directs both the Digital Learning Lab and the Online Learning Research Center. His research focuses on the uses of digital media to promote language, literacy, and STEM learning among diverse students.
