ddml: Double/debiased machine learning in Stata

We introduce the package ddml for Double/Debiased Machine Learning (DDML) in Stata. Estimators of causal parameters for five different econometric models are supported, allowing for flexible estimation of causal effects of endogenous variables in settings with unknown functional forms and/or many exogenous variables. ddml is compatible with many existing supervised machine learning programs in Stata. We recommend using DDML in combination with stacking estimation, which combines multiple machine learners into a final predictor. We provide Monte Carlo evidence to support our recommendation.


Introduction
Identification of causal effects frequently relies on an unconfoundedness assumption, which is that treatment or instrument assignment is sufficiently random given observed control covariates. Estimation of causal effects in these settings then involves conditioning on the controls. Unfortunately, estimators of causal effects that are insufficiently flexible to capture the effect of confounds generally do not produce consistent estimates of causal effects even when unconfoundedness holds. For example, Blandhol et al. (2022) highlight that two-stage least-squares estimands obtained after controlling linearly for confounds do not generally correspond to weakly causal effects even when instruments are valid conditional on controls. Even in the ideal scenario, where theory provides few relevant controls, theory rarely specifies the exact nature of confounding. Thus, applied empirical researchers wishing to exploit unconfoundedness assumptions to learn causal effects face a nonparametric estimation problem. Traditional nonparametric estimators suffer greatly under the curse of dimensionality and are quickly impractical in the frequently encountered setting with multiple observed covariates.
These difficulties leave traditional nonparametric estimators essentially inapplicable in the presence of increasingly large and complex datasets, for example, textual confounders as in Roberts, Stewart, and Nielsen (2020) or digital trace data (Hangartner, Kopp, and Siegenthaler 2021). Tools from supervised machine learning have been put forward as alternative estimators. These approaches are often more robust to the curse of dimensionality via the exploitation of regularization assumptions. A prominent example of a machine learning-based causal-effects estimator is the post-double-selection lasso (PDS-lasso) of Belloni, Chernozhukov, and Hansen (2014), which fits auxiliary lasso regressions of the outcome and treatments against a menu of transformed controls. Under an approximate sparsity assumption, which posits that the data-generating process (DGP) can be approximated well by relatively few terms included in the menu, this approach allows for precise treatment-effects estimation. The lasso can also be used for approximating optimal instruments (Belloni et al. 2012). Lasso-based approaches for estimation of causal effects have become a popular strategy in applied econometrics (for example, Gilchrist and Sands [2016]; Dhar, Jain, and Jayachandran [2022]), partially facilitated by the availability of software programs in Stata (pdslasso, Ahrens, Hansen, and Schaffer 2018) and R (hdm, Chernozhukov, Hansen, and Spindler 2016).
Although approximate sparsity is a weaker regularization assumption than assuming a linear functional form that depends on a known low-dimensional set of variables, it may not be suitable in a wide range of applications. For example, Giannone, Lenza, and Primiceri (2021) argue that approximate sparsity may provide a poor description in several economic examples. Thus, there is a potential benefit to expanding the set of regularization assumptions and correspondingly considering a larger set of machine learners, including, for example, random forests, gradient boosting, and neural networks. While the theoretical properties of these estimators are an active research topic (see, for example, Athey, Tibshirani, and Wager [2019] and Farrell, Liang, and Misra [2021]), machine learning methods are widely adopted in industry and practice for their empirical performance. To facilitate their application for causal inference in common econometric models, Chernozhukov et al. (2018) propose double/debiased machine learning (DDML), which exploits Neyman orthogonality of estimating equations and cross-fitting to formally establish asymptotic normality of estimators of causal parameters under relatively mild convergence rate conditions on nonparametric estimators.
DDML increases the set of machine learners that researchers can leverage for estimation of causal effects. Deciding which learner is most suitable for a particular application is difficult, however, because researchers are rarely certain about the structure of the underlying DGP. A practical solution is to construct combinations of a diverse set of machine learners using stacking (Wolpert 1992; Breiman 1996). Stacking is a meta-learner given by a weighted sum of individual machine learners (the "base learners"). When the weights corresponding to the base learners are chosen to maximize out-of-sample predictive accuracy, this approach hedges against the risk of relying on any particular poorly suited or ill-tuned machine learner.
In this article, we introduce the package ddml, which implements DDML for Stata.2 ddml adds to a few programs for causal machine learning in Stata (Ahrens, Hansen, and Schaffer 2018). We briefly summarize the four main features of the program:

2. ddml supports data-driven combinations of multiple machine learners via stacking by leveraging pystacked (Ahrens, Hansen, and Schaffer 2023), our complementary Stata front-end relying on the Python library scikit-learn (Pedregosa et al. 2011; Buitinck et al. 2013). ddml also supports two novel approaches to pairing DDML with stacking introduced in Ahrens et al. (2024): short-stacking takes a shortcut by leveraging the cross-fitted predicted values for estimating the stacking weights, and pooled stacking enforces common weights across cross-fitting folds.
3. Aside from pystacked, ddml can be used in combination with many other existing supervised machine learning programs available in or via Stata. ddml has been tested with lassopack (Ahrens, Hansen, and Schaffer 2020), rforest (Schonlau and Zou 2020), svmachines (Guenther and Schonlau 2018), and parsnip (Huntington-Klein 2021). Indeed, the requirements for compatibility with ddml are minimal: any eclass program with the Stata-typical "regress y x" syntax, support for if conditions, and a postestimation predict command is compatible with ddml.
4. ddml provides a flexible multiline syntax and a short one-line syntax. The multiline syntax offers a wide range of options, guides the user through the DDML algorithm step by step, and includes auxiliary programs for storing, loading, and displaying additional information. We also provide a complementary one-line version called qddml ("quick" ddml), which uses a syntax similar to that of pdslasso and ivlasso (Ahrens, Hansen, and Schaffer 2018).
The article proceeds as follows. Section 2 outlines DDML for the partially linear and interactive models under conditional unconfoundedness assumptions. Section 3 outlines DDML for IV models. Section 4 discusses how stacking can be combined with DDML and provides evidence from Monte Carlo simulations illustrating the advantages of DDML with stacking. Section 5 explains the features, syntax, and options of the ddml command. Section 6 demonstrates the command's usage with two applications.

DDML with conditional unconfoundedness
This section discusses DDML for the partially linear model and the interactive model in turn. The exposition follows Chernozhukov et al. (2018). Both models are special cases of the general causal model

Y = f0(D, X, U)    (1)

where f0 is a structural function, Y is the outcome, D is the variable of interest, X are observed covariates, and U are all unobserved determinants of Y (that is, other than D and X).3 The key difference between the partially linear model and the interactive model is their positions in the tradeoff between functional form restrictions on f0 and restrictions on the joint distribution of observables (D, X) and unobservables U. For both models, we highlight key parameters of interest, state sufficient identifying assumptions, and outline the corresponding DDML estimator. A random sample {(Y_i, D_i, X_i)} for i = 1, …, n is observed.

The partially linear model (partial)
The partially linear model imposes the estimation model

Y = θ0 D + g0(X) + U    (2)

where θ0 is a fixed unknown parameter. The key feature of the model is that the controls X enter through the unknown and potentially nonlinear function g0. Note that D is not restricted to be binary and may be discrete, continuous, or mixed. For simplicity, we assume that D is a scalar, although ddml allows for multiple treatment variables in the partially linear model.
The parameter of interest is θ0, the causal effect of D on Y.4 The key identifying assumption is given in assumption 1.5

Assumption 1 (Conditional orthogonality). E{Cov(U, D|X)} = 0.
To show identification of θ0, consider the score

ψ(W; θ, ℓ, m) ≡ {Y − ℓ(X) − θ(D − m(X))}{D − m(X)}    (3)

3. The model is also dubbed the "all causes model" (see, for example, Heckman and Vytlacil [2007]). Note that the model can equivalently be put into potential-outcome notation with potential outcomes defined as Y(d) ≡ f0(d, X, U).
4. The interpretation of θ0 can be generalized. For example, the results of Angrist and Krueger (1999) imply that in the general causal model (1), θ0 is a positively weighted average of causal effects (for example, conditional average treatment effects) under stronger identifying assumptions. The basic structure can also be used to obtain valid inference on objects of interest, such as projection coefficients, in the presence of high-dimensional data or nonparametric estimation without requiring a causal interpretation.
5. Discussions of partially linear models typically show identification under the stronger assumption that E(U|D, X) = 0. We differentiate here to highlight differences between the partially linear model and the interactive model.
where W ≡ (Y, D, X) and ℓ and m are nuisance functions. Letting m0(X) ≡ E(D|X) and ℓ0(X) ≡ E(Y|X), note that E{ψ(W; θ0, ℓ0, m0)} = 0 by assumption 1. When, in addition, E{Var(D|X)} ≠ 0, we get

θ0 = E[{Y − ℓ0(X)}{D − m0(X)}] / E[{D − m0(X)}²]    (4)

Equation (4) is constructive in that it motivates estimation of θ0 via a simple two-step procedure: First, estimate the conditional expectation of Y given X (that is, ℓ0) and of D given X (that is, m0) using appropriate nonparametric estimators (for example, machine learners). Second, residualize Y and D by subtracting their respective conditional expectation function (CEF) estimates, and regress the resulting CEF residuals of Y on the CEF residuals of D. This approach is fruitful when the estimation error of the first step does not propagate excessively to the second step. DDML leverages two key ingredients to control the impact of the first-step estimation error on the second-step estimate: 1) second-step estimation based on Neyman orthogonal scores and 2) cross-fitting. As shown in Chernozhukov et al. (2018), this combination facilitates the use of any nonparametric estimator that converges sufficiently quickly in the first step and potentially opens the door for the use of many machine learners.
Neyman orthogonality refers to a property of score functions ψ that ensures local robustness to estimation errors in the first step. Formally, it requires that the Gateaux derivative with respect to the nuisance functions evaluated at the true values is mean-zero. In the context of the partially linear model, this condition is satisfied for the moment condition (3), where the derivative is with respect to the scalar r and evaluated at r = 0. Heuristically, we can see that this condition alleviates the impact of noisy estimation of nuisance functions, as local deviations of the nuisance functions away from their true values leave the moment condition unchanged. We refer to Chernozhukov et al. (2018) for a detailed discussion but highlight that all score functions discussed in this article are Neyman orthogonal.
Cross-fitting ensures independence between the estimation error from the first step and the regression residual in the second step. To implement cross-fitting, we randomly split the sample into K evenly sized folds, denoted as I_1, …, I_K. For each fold k, the conditional expectations ℓ0 and m0 are estimated using only observations not in the kth fold, that is, in I^c_k ≡ I \ I_k, resulting in ℓ̂_{I^c_k} and m̂_{I^c_k}, respectively, where the subscript I^c_k indicates the subsample used for estimation. The out-of-sample predictions for an observation i in the kth fold are then computed via ℓ̂_{I^c_k}(X_i) and m̂_{I^c_k}(X_i). Repeating this procedure for all K folds then allows for computation of the DDML estimator for θ0:

θ̂_n = [ (1/n) Σ_i {D_i − m̂_{I^c_{k_i}}(X_i)}² ]^{−1} (1/n) Σ_i {D_i − m̂_{I^c_{k_i}}(X_i)}{Y_i − ℓ̂_{I^c_{k_i}}(X_i)}    (5)

where k_i denotes the fold of the ith observation.6 We summarize the DDML algorithm for the partially linear model in algorithm 1:7

Algorithm 1: DDML for the partially linear model

Split the sample {(Y_i, D_i, X_i)} for i = 1, …, n randomly in K folds of approximately equal size. Denote I_k the set of observations included in fold k and I^c_k its complement.
1. For each k ∈ {1, …, K}:
   a. Fit a CEF estimator to the subsample I^c_k using Y_i as the outcome and X_i as predictors. Obtain the out-of-sample predicted values ℓ̂_{I^c_k}(X_i) for i ∈ I_k.
   b. Fit a CEF estimator to the subsample I^c_k using D_i as the outcome and X_i as predictors. Obtain the out-of-sample predicted values m̂_{I^c_k}(X_i) for i ∈ I_k.
2. Compute (5).

Chernozhukov et al. (2018) give conditions on the joint distribution of the data, particularly on g0 and m0, and properties of the nonparametric estimators used for CEF estimation, such that θ̂_n is consistent and asymptotically normal. Standard errors are equivalent to the conventional linear regression standard errors from regressing the CEF residuals of Y on the CEF residuals of D. ddml computes the DDML estimator for the partially linear model using Stata's regress command. All standard errors available for linear regression in Stata are also available in ddml, including different heteroskedasticity and cluster-robust standard errors.8
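Algorithm 1 can be sketched in a few lines of Python using scikit-learn (the library behind pystacked) as the CEF learner. The function name ddml_plm and its arguments are illustrative, not the ddml syntax; the standard-error formula is a heteroskedasticity-robust (HC0-type) variant rather than Stata's exact regress output.

```python
# Illustrative sketch of algorithm 1 (DDML for the partially linear model).
# Any scikit-learn regressor can serve as the CEF learner; names are ours.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor

def ddml_plm(y, d, X, learner_y, learner_d, K=5, seed=42):
    n = len(y)
    ell_hat = np.zeros(n)  # cross-fitted predictions of E[Y|X]
    m_hat = np.zeros(n)    # cross-fitted predictions of E[D|X]
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        ell_hat[test] = clone(learner_y).fit(X[train], y[train]).predict(X[test])
        m_hat[test] = clone(learner_d).fit(X[train], d[train]).predict(X[test])
    y_res, d_res = y - ell_hat, d - m_hat        # CEF residuals
    theta = (d_res @ y_res) / (d_res @ d_res)    # equation (5)
    u = y_res - theta * d_res
    se = np.sqrt(np.sum(d_res**2 * u**2)) / np.sum(d_res**2)  # HC0-type SE
    return theta, se
```

For instance, ddml_plm(y, d, X, RandomForestRegressor(), RandomForestRegressor()) returns the cross-fitted point estimate and a robust standard error.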
Remark 1: Number of folds. The number of cross-fitting folds is a necessary tuning choice. Theoretically, any finite value is admissible. Chernozhukov et al. (2018) report in remark 3.1 that four or five folds perform better than only using K = 2. Based on our simulation experience, we find that more folds tend to lead to better performance because more data are used for estimation of CEFs, especially when the sample size is small. We believe that more work on setting the number of folds would be useful but believe that setting K = 5 is likely a good baseline in many settings.
6. We here omit the constant from the estimation stage. Because the residualized outcome and treatment may not be exactly mean-zero in finite samples, ddml includes the constant by default in the estimation stage of partially linear models.
7. Algorithm 1 corresponds to the "DML2" algorithm in Chernozhukov et al. (2018), who in remark 3.1 recommend "DML2" over the alternative "DML1" algorithm, which fits the final estimator by fold.
8. See help regress##vcetype for available options.
Remark 2: Cross-fitting repetitions. DDML relies on randomly splitting the sample into K folds. We recommend running the cross-fitting procedure more than once using different random folds to assess the randomness introduced via the sample splitting. ddml facilitates this using the rep() option, which automatically fits the same model multiple times and combines the resulting estimates to obtain the final estimate. By default, ddml reports the median over cross-fitting repetitions. ddml also supports the average of estimates. Specifically, let θ̂_n^(r) denote the DDML estimate from the rth cross-fit repetition, and let s_n^(r) denote its associated standard-error estimate with r = 1, …, R. The aggregate median point estimate and associated standard error are defined as

θ̃_n = median_r{θ̂_n^(r)},    s̃_n = median_r[ {s_n^(r)}² + {θ̂_n^(r) − θ̃_n}² ]^(1/2)

The aggregate mean point estimate and associated standard error are calculated as

θ̄_n = (1/R) Σ_r θ̂_n^(r),    s̄_n = [ hmean_r( {s_n^(r)}² + {θ̂_n^(r) − θ̄_n}² ) ]^(1/2)

where hmean is the harmonic mean.9

Remark 3: Cluster-dependence and folds. Under cluster-dependence, we recommend randomly assigning folds by cluster; see fcluster().
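The aggregation rules of remark 2 (and the harmonic mean of footnote 9) can be illustrated numerically. The helper names below are ours, and the formulas follow the reconstruction above.

```python
# Aggregating DDML estimates over R cross-fitting repetitions (remark 2).
import numpy as np

def hmean(x):
    # harmonic mean: n * {sum(1/x_i)}^(-1)
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(1.0 / x)

def aggregate(thetas, ses, how="median"):
    thetas, ses = np.asarray(thetas, float), np.asarray(ses, float)
    if how == "median":
        theta = np.median(thetas)
        se = np.median(np.sqrt(ses**2 + (thetas - theta)**2))
    else:  # mean aggregation, harmonic mean of the per-repetition variances
        theta = np.mean(thetas)
        se = np.sqrt(hmean(ses**2 + (thetas - theta)**2))
    return theta, se
```

Note that both standard-error aggregates add the dispersion of the point estimates across repetitions to the per-repetition variances, so the reported uncertainty reflects the sample-splitting randomness.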

The interactive model (interactive)
The interactive model is given by

Y = g0(D, X) + U    (6)

where D takes values in {0, 1}. The key deviations from the partially linear model are that D must be a scalar binary variable and that D is not required to be additively separable from the controls X. In this setting, the parameters of interest we consider are

θ^ATE ≡ E{g0(1, X) − g0(0, X)}    and    θ^ATET ≡ E{g0(1, X) − g0(0, X)|D = 1}

which correspond to the average treatment effect (ATE) and average treatment effect on the treated (ATET), respectively.
9. The harmonic mean of x_1, …, x_n is defined as hmean(x_1, …, x_n) = n{Σ_{i=1}^n (1/x_i)}^(−1). We use the harmonic mean because it is less sensitive to outlier values.

Assumptions 2 and 3 below are sufficient for identification of the ATE and ATET. Note that the conditional mean independence condition stated here is stronger than the conditional orthogonality assumption sufficient for identification of θ0 in the partially linear model.
Under assumptions 2 and 3, we have E(Y|D = d, X) = g0(d, X) for d ∈ {0, 1}, so that identification of the ATE and ATET immediately follows from their definitions.10 In contrast to section 2.1, second-step estimators are not directly based on the moment conditions used for identification. Additional care is needed to ensure local robustness to first-stage estimation errors (that is, Neyman orthogonality). In particular, the Neyman orthogonal score for the ATE that Chernozhukov et al. (2018) consider is the efficient influence function of Hahn (1998),

ψ^ATE(W; θ, g, m) ≡ g(1, X) − g(0, X) + D{Y − g(1, X)}/m(X) − (1 − D){Y − g(0, X)}/{1 − m(X)} − θ

where W ≡ (Y, D, X). Similarly for the ATET,

ψ^ATET(W; θ, g, m, p) ≡ D{Y − g(0, X)}/p − m(X)(1 − D){Y − g(0, X)}/[p{1 − m(X)}] − θD/p

Importantly, for g0(D, X) ≡ E(Y|D, X), m0(X) ≡ E(D|X), and p0 ≡ E(D), assumptions 2 and 3 imply E{ψ^ATE(W; θ^ATE, g0, m0)} = 0 and E{ψ^ATET(W; θ^ATET, g0, m0, p0)} = 0, and we also have that the Gateaux derivative of each condition with respect to the nuisance parameters (g0, m0, p0) is zero.
10. In the defined interactive model under assumption 2, the heterogeneity in treatment effects that the ATE and ATET average over is fully observed because U is additively separable. Under stronger identifying assumptions, the DDML ATE and ATET estimators outlined here also apply to the ATE and ATET in the general causal model (1) that average over both observed and unobserved heterogeneity. See, for example, Belloni et al. (2017).
As before, the DDML estimators for the ATE and ATET leverage cross-fitting. The DDML estimators of the ATE and ATET based on ψ^ATE and ψ^ATET are

θ̂_n^ATE = (1/n) Σ_i [ ĝ_{I^c_{k_i}}(1, X_i) − ĝ_{I^c_{k_i}}(0, X_i) + D_i{Y_i − ĝ_{I^c_{k_i}}(1, X_i)}/m̂_{I^c_{k_i}}(X_i) − (1 − D_i){Y_i − ĝ_{I^c_{k_i}}(0, X_i)}/{1 − m̂_{I^c_{k_i}}(X_i)} ]

θ̂_n^ATET = (1/n) Σ_i [ D_i{Y_i − ĝ_{I^c_{k_i}}(0, X_i)}/p̂_{I^c_{k_i}} − m̂_{I^c_{k_i}}(X_i)(1 − D_i){Y_i − ĝ_{I^c_{k_i}}(0, X_i)}/(p̂_{I^c_{k_i}}{1 − m̂_{I^c_{k_i}}(X_i)}) ] / [ (1/n) Σ_i D_i/p̂_{I^c_{k_i}} ]

where ĝ_{I^c_k} and m̂_{I^c_k} are cross-fitted estimators for g0 and m0 as defined in section 2.1. Because D is binary, the cross-fitted values ĝ_{I^c_k}(1, X) and ĝ_{I^c_k}(0, X) are computed by using only treated and untreated observations, respectively. p̂_{I^c_k} is a cross-fitted estimator of the unconditional treatment probability.
ddml supports heteroskedasticity and cluster-robust standard errors for θ̂_n^ATE and θ̂_n^ATET. The algorithms for estimating the ATE and ATET are conceptually similar to algorithm 1. We delegate the detailed outline to algorithm A.1 in the online appendix. Mean and median aggregation over cross-fitting repetitions are implemented as outlined in remark 2.
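A compact Python sketch of the cross-fitted ATE estimator based on ψ^ATE (the AIPW form above) follows; the function and learner names are illustrative, not the ddml syntax.

```python
# Illustrative cross-fitted AIPW estimator of the ATE (section 2.2).
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def ddml_ate(y, d, X, learner_g, learner_m, K=5, seed=42):
    n = len(y)
    g1, g0, m = np.zeros(n), np.zeros(n), np.zeros(n)
    for tr, te in KFold(K, shuffle=True, random_state=seed).split(X):
        t1, t0 = tr[d[tr] == 1], tr[d[tr] == 0]  # treated / untreated in fold
        g1[te] = clone(learner_g).fit(X[t1], y[t1]).predict(X[te])  # E[Y|D=1,X]
        g0[te] = clone(learner_g).fit(X[t0], y[t0]).predict(X[te])  # E[Y|D=0,X]
        m[te] = clone(learner_m).fit(X[tr], d[tr]).predict_proba(X[te])[:, 1]
    m = np.clip(m, 0.01, 0.99)  # enforce overlap numerically
    psi = g1 - g0 + d * (y - g1) / m - (1 - d) * (y - g0) / (1 - m)
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)  # ATE and its SE
```

The propensity-score clipping is a numerical safeguard for assumption 3 (overlap), not part of the score itself.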

DDML with IV
This section outlines the partially linear IV model, the flexible partially linear IV model, and the interactive IV model. The discussion is again based on Chernozhukov et al. (2018). As in the previous section, each model is a special case of the general causal model (1). The discussion in this section differs from the preceding section in that the identifying assumptions leverage IVs Z. The two partially linear IV models assume strong additive separability as in (2), while the interactive IV model allows for arbitrary interactions between the treatment D and the controls X as in (6). The flexible partially linear IV model allows for approximation of optimal instruments11 as in Belloni et al. (2012) and Chernozhukov, Hansen, and Spindler (2015a) but relies on a stronger independence assumption than the partially linear IV model. Throughout this discussion, we consider a random sample {(Y_i, D_i, X_i, Z_i)} for i = 1, …, n.

11. Optimality requires the assumption of homoskedasticity. The instruments are valid more generally but are not optimal under heteroskedasticity. Obtaining optimal instruments under heteroskedasticity would require estimating conditional variance functions.

Partially linear IV model (iv)
The partially linear IV model considers the same functional form restriction on the causal model as the partially linear model in section 2.1. Specifically, the partially linear IV model maintains

Y = θ0 D + g0(X) + U

where θ0 is the unknown parameter of interest.12 The key deviation from the partially linear model is that the identifying assumptions leverage IVs Z instead of directly restricting the dependence of D and U. For ease of exposition, we focus on scalar-valued instruments in this section, but we emphasize that ddml for partially linear IV supports multiple IVs and multiple treatment variables.
Assumptions 4 and 5 below are sufficient orthogonality and relevance conditions, respectively, for identification of θ0. Letting ℓ0(X) ≡ E(Y|X), m0(X) ≡ E(D|X), and r0(X) ≡ E(Z|X), these assumptions yield the moment condition

E[{Y − ℓ0(X) − θ0(D − m0(X))}{Z − r0(X)}] = 0    (9)
The DDML estimator based on (9) is given by

θ̂_n = Σ_i {Z_i − r̂_{I^c_{k_i}}(X_i)}{Y_i − ℓ̂_{I^c_{k_i}}(X_i)} / Σ_i {Z_i − r̂_{I^c_{k_i}}(X_i)}{D_i − m̂_{I^c_{k_i}}(X_i)}

where ℓ̂_{I^c_k}, m̂_{I^c_k}, and r̂_{I^c_k} are appropriate cross-fitted CEF estimators. Standard errors corresponding to θ̂_n are equivalent to the IV standard errors where the residualized Z_i − r̂_{I^c_{k_i}}(X_i) is the instrument. ddml supports conventional standard errors available for linear IV regression in Stata, including heteroskedasticity and cluster-robust standard errors. Mean and median aggregation over cross-fitting repetitions are implemented as outlined in remark 2. When we have multiple instruments or endogenous regressors, we adjust the algorithm by residualizing each instrument and endogenous variable as above and applying two-stage least squares with the residualized outcome, endogenous variables, and instruments.
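The estimator above can be sketched as follows: residualize Y, D, and Z on X with cross-fitted learners, then apply IV to the residuals. The function names are illustrative, not the ddml syntax.

```python
# Illustrative partially linear IV estimator: IV on cross-fitted residuals.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def residualize(v, X, learner, K=5, seed=42):
    # cross-fitted residuals v - E-hat[v|X]
    out = np.zeros(len(v))
    for tr, te in KFold(K, shuffle=True, random_state=seed).split(X):
        out[te] = v[te] - clone(learner).fit(X[tr], v[tr]).predict(X[te])
    return out

def ddml_pliv(y, d, z, X, learner, K=5):
    y_res = residualize(y, X, learner, K)
    d_res = residualize(d, X, learner, K)
    z_res = residualize(z, X, learner, K)
    return (z_res @ y_res) / (z_res @ d_res)  # IV with residualized instrument
```

With multiple instruments or endogenous regressors, the final line would be replaced by two-stage least squares on the residualized variables, as described above.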

Flexible partially linear IV model (fiv)
The flexible partially linear IV model considers the same parameter of interest as the partially linear IV model. The key difference here is that identification is based on a stronger independence assumption, which allows for approximating optimal instruments using nonparametric estimation, including machine learning, akin to Belloni et al. (2012) and Chernozhukov, Hansen, and Spindler (2015a). In particular, the flexible partially linear IV model leverages a conditional mean independence assumption rather than an orthogonality assumption as in section 3.1. As in section 3.1, we state everything in the case of a scalar D.
Assumption 6 implies that for any function p(Z, X), it holds that

E[{Y − ℓ0(X) − θ0(D − m0(X))} p(Z, X)] = 0    (11)

where ℓ0(X) = E(Y|X) and m0(X) = E(D|X). Identification based on (11) requires that there exists some function p such that

E[{D − m0(X)} p(Z, X)] ≠ 0    (12)

A sufficient assumption is that D and Z are not mean independent conditional on X. This condition allows setting p(Z, X) = E(D|Z, X), which will then satisfy (12).13 Assumption 7 is a consequence of this non-mean independence.
Rewriting then results in a Wald expression given by

θ0 = E[{Y − ℓ0(X)}{p0(Z, X) − m0(X)}] / E[{D − m0(X)}{p0(Z, X) − m0(X)}]    (13)

with p0(Z, X) ≡ E(D|Z, X).

13. The choice p(Z, X) = E(D|Z, X) results in the optimal instrument, in the sense of semiparametric efficiency, under homoskedasticity.
where assumption 7 ensures a nonzero denominator.
The DDML estimator based on the moment solution (13) is given by

θ̂_n = Σ_i {Y_i − ℓ̂_{I^c_{k_i}}(X_i)}{p̂_{I^c_{k_i}}(Z_i, X_i) − m̂_{I^c_{k_i}}(X_i)} / Σ_i {D_i − m̂_{I^c_{k_i}}(X_i)}{p̂_{I^c_{k_i}}(Z_i, X_i) − m̂_{I^c_{k_i}}(X_i)}    (14)

where ℓ̂_{I^c_k}, m̂_{I^c_k}, and p̂_{I^c_k} are appropriate cross-fitted CEF estimators. In simulations, we find that the finite-sample performance of the estimator in (14) improves when the law of iterated expectations (LIE), which implies E{p0(Z, X)|X} = m0(X), is approximately enforced in estimation. As a result, we propose an intermediate step in the previously considered two-step DDML algorithm: Rather than estimating the conditional expectation of D given X directly, we estimate it by projecting first-step estimates of the conditional expectation p0(Z, X) onto X. Algorithm 2 outlines the LIE-compliant DDML algorithm for computation of (14).

Algorithm 2: LIE-compliant DDML for the flexible partially linear IV model
Split the sample {(Y_i, D_i, X_i, Z_i)} for i = 1, …, n randomly in K folds of approximately equal size. Denote I_k the set of observations included in fold k and I^c_k its complement.
1. For each k ∈ {1, …, K}, do the following:
   a. Fit a CEF estimator to the subsample I^c_k using Y_i as the outcome and X_i as predictors. Obtain the out-of-sample predicted values ℓ̂_{I^c_k}(X_i) for i ∈ I_k.
   b. Fit a CEF estimator to the subsample I^c_k using D_i as the outcome and (Z_i, X_i) as predictors. Obtain the in-sample predicted values p̂_{I^c_k}(Z_i, X_i) for i ∈ I^c_k and the out-of-sample predicted values p̂_{I^c_k}(Z_i, X_i) for i ∈ I_k.
   c. Fit a CEF estimator to the subsample I^c_k using the in-sample predicted values p̂_{I^c_k}(Z_i, X_i) as the outcome and X_i as predictors. Obtain the out-of-sample predicted values m̂_{I^c_k}(X_i) for i ∈ I_k.
2. Compute (14).

Standard errors corresponding to θ̂_n in (14) are the same as in section 3.1, where the instrument is now given by p̂_{I^c_{k_i}}(Z_i, X_i) − m̂_{I^c_{k_i}}(X_i). Mean and median aggregation over cross-fitting repetitions are as outlined in remark 2.
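Algorithm 2 translates to Python as follows: the in-sample fitted values of the estimated E(D|Z, X) are themselves projected on X to obtain m̂, which approximately enforces the LIE by construction. Names are illustrative, not the ddml syntax.

```python
# Illustrative LIE-compliant estimator for the flexible partially linear IV
# model (algorithm 2): instrument = p_hat - m_hat.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def ddml_fiv(y, d, z, X, learner, K=5, seed=42):
    n = len(y)
    ell, p, m = np.zeros(n), np.zeros(n), np.zeros(n)
    ZX = np.column_stack([z, X])
    for tr, te in KFold(K, shuffle=True, random_state=seed).split(X):
        ell[te] = clone(learner).fit(X[tr], y[tr]).predict(X[te])   # step a
        fit_p = clone(learner).fit(ZX[tr], d[tr])                   # step b
        p[te] = fit_p.predict(ZX[te])
        # step c: project the in-sample p-hat on X, predict out of sample
        m[te] = clone(learner).fit(X[tr], fit_p.predict(ZX[tr])).predict(X[te])
    inst = p - m
    return (inst @ (y - ell)) / (inst @ (d - m))  # equation (14)
```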

Interactive IV model (interactiveiv)
The interactive IV model considers the same causal model as in section 2.2; specifically,

Y = g0(D, X) + U

where D takes values in {0, 1}. The key difference from the interactive model is that this section considers identification via a binary instrument Z representing assignment to treatment.
The parameter of interest we target is the local average treatment effect (LATE) θ0, where p0(Z, X) ≡ Pr(D = 1|Z, X) denotes the propensity score. Note that in contrast to the LATE developed in Imbens and Angrist (1994), we follow the exposition in Chernozhukov et al. (2018), where "local" does not strictly refer to compliers but instead to observations with a higher propensity score, that is, a higher probability of complying.14 Identification again leverages assumptions 6 and 7, made in the context of the flexible partially linear IV model. In addition, we assume that the propensity score is weakly monotone with probability 1 (assumption 8) and that the support of the instrument is independent of the controls (assumption 9).
Assumptions 6-9 imply that

θ0 = E{ℓ0(1, X) − ℓ0(0, X)} / E{p0(1, X) − p0(0, X)}    (15)

where ℓ0(Z, X) ≡ E(Y|Z, X), verifying identification of the LATE θ0. Akin to section 2.2, however, estimators of θ0 should not directly be based on (15) because the estimating equations implicit in obtaining (15) do not satisfy Neyman orthogonality. Hence, a direct estimator of θ0 obtained by plugging nonparametric estimators in for nuisance functions in (15) will potentially be highly sensitive to the first-step nonparametric estimation error. Rather, we base estimation on the Neyman orthogonal score function

ψ(W; θ, ℓ, p, r) ≡ ℓ(1, X) − ℓ(0, X) + Z{Y − ℓ(1, X)}/r(X) − (1 − Z){Y − ℓ(0, X)}/{1 − r(X)} − θ[ p(1, X) − p(0, X) + Z{D − p(1, X)}/r(X) − (1 − Z){D − p(0, X)}/{1 − r(X)} ]

where r0(X) ≡ Pr(Z = 1|X).

14. Identification of the conventional complier-focused LATE is achieved under stronger conditional independence and monotonicity assumptions not introduced in this article. Under these stronger assumptions, the DDML LATE estimator outlined here targets the conventionally considered LATE parameter.
The DDML estimator based on the orthogonal score ψ is then

θ̂_n = Σ_i [ ℓ̂(1, X_i) − ℓ̂(0, X_i) + Z_i{Y_i − ℓ̂(1, X_i)}/r̂(X_i) − (1 − Z_i){Y_i − ℓ̂(0, X_i)}/{1 − r̂(X_i)} ] / Σ_i [ p̂(1, X_i) − p̂(0, X_i) + Z_i{D_i − p̂(1, X_i)}/r̂(X_i) − (1 − Z_i){D_i − p̂(0, X_i)}/{1 − r̂(X_i)} ]

where ℓ̂_{I^c_k}, p̂_{I^c_k}, and r̂_{I^c_k} are appropriate cross-fitted CEF estimators (the cross-fitting subscripts are suppressed in the display for readability). Because Z is binary, the cross-fitted values ℓ̂_{I^c_k}(1, X) and p̂_{I^c_k}(1, X), as well as ℓ̂_{I^c_k}(0, X) and p̂_{I^c_k}(0, X), are computed by using only assigned and unassigned observations, respectively.
ddml supports heteroskedasticity and cluster-robust standard errors for θ̂_n. Mean and median aggregation over cross-fitting repetitions are implemented as outlined in remark 2.
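The LATE estimator can be sketched as a ratio of cross-fitted AIPW-type reduced-form and first-stage terms, following the score form above; the function and learner names are illustrative.

```python
# Illustrative cross-fitted DDML LATE estimator (section on interactiveiv).
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def ddml_late(y, d, z, X, learner_l, learner_p, learner_r, K=5, seed=42):
    n = len(y)
    l1, l0, p1, p0, r = (np.zeros(n) for _ in range(5))
    for tr, te in KFold(K, shuffle=True, random_state=seed).split(X):
        a1, a0 = tr[z[tr] == 1], tr[z[tr] == 0]  # assigned / unassigned
        l1[te] = clone(learner_l).fit(X[a1], y[a1]).predict(X[te])  # E[Y|Z=1,X]
        l0[te] = clone(learner_l).fit(X[a0], y[a0]).predict(X[te])  # E[Y|Z=0,X]
        p1[te] = clone(learner_p).fit(X[a1], d[a1]).predict(X[te])  # Pr(D=1|Z=1,X)
        p0[te] = clone(learner_p).fit(X[a0], d[a0]).predict(X[te])  # Pr(D=1|Z=0,X)
        r[te] = clone(learner_r).fit(X[tr], z[tr]).predict_proba(X[te])[:, 1]
    r = np.clip(r, 0.01, 0.99)  # numerical safeguard for the denominator
    num = l1 - l0 + z * (y - l1) / r - (1 - z) * (y - l0) / (1 - r)
    den = p1 - p0 + z * (d - p1) / r - (1 - z) * (d - p0) / (1 - r)
    return num.mean() / den.mean()
```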
The choice of machine learner

Chernozhukov et al. (2018) show that DDML estimators are asymptotically normal when used in combination with a general class of machine learners satisfying a relatively weak convergence-rate requirement for estimating the CEFs. While asymptotic properties of common machine learners remain a highly active research area, recent advances provide convergence rates for special instances of many machine learners, including lasso (Bickel, Ritov, and Tsybakov 2009; Belloni et al. 2012), random forests (Wager and Walther 2015; Wager and Athey 2018; Athey, Tibshirani, and Wager 2019), neural networks (Schmidt-Hieber 2020; Farrell, Liang, and Misra 2021), and boosting (Luo, Spindler, and Kück 2016). It seems likely that many popular learners will fall under the umbrella of suitable learners as theoretical results are further developed. However, we note that currently known asymptotic properties do not cover a wide range of learners, such as very deep and wide neural networks and deep random forests, as they are currently implemented in practice.
The relative robustness of DDML to the first-step learners leads to the question of which machine learner is the most appropriate for a given application. It is ex ante rarely obvious which learner will perform best. Further, rather than restricting ourselves to one learner, we might want to combine several learners into one final learner. This is the idea behind stacking generalization, or simply "stacking", due to Wolpert (1992) and Breiman (1996). Stacking allows one to accommodate a diverse set of base learners with varying tuning and hypertuning parameters. It thus provides a convenient framework for combining and identifying suitable learners, thereby reducing the risk of misspecification. Ahrens et al. (2024) introduce short-stacking, which drastically reduces the computational cost of pairing DDML and stacking, and pooled stacking, which enforces common weights across cross-fitting folds.
We discuss stacking approaches to DDML estimation in section 4.1. Section 4.2 demonstrates the performance of DDML in combination with stacking approaches using a simulation.

DDML and stacking
Our discussion of stacking in the context of DDML focuses on the partially linear model in (2), but we highlight that DDML and stacking can be combined in the same way for all other models supported in ddml. Suppose we consider J machine learners, referred to as base learners, to estimate the CEFs ℓ0(X) ≡ E(Y|X) and m0(X) ≡ E(D|X). The set of base learners could, for example, include cross-validated lasso and ridge with alternative sets of predictors, gradient-boosted (GB) trees with varying tree depths, and feed-forward neural nets with varying numbers of hidden layers and neurons. Generally, we recommend considering a relatively large and diverse set of base learners and including some learners with alternative tuning parameters.
We randomly split the sample into K cross-fitting folds, denoted as I_1, …, I_K. In each cross-fitting step k, we define the training sample as T_k ≡ I^c_k, comprising all observations excluding the cross-fitting holdout fold k. This training sample is further divided into V cross-validation folds, denoted as T_{k,1}, …, T_{k,V}. The stacking regressor fits a final learner to the training sample T_k using the cross-validated predicted values of each base learner as inputs. A typical choice for the final learner is constrained least squares (CLS), which restricts the weights to be positive and sum to 1. The stacking objective function for estimating ℓ0(X) using the training sample T_k is then defined as

min over w_{k,1}, …, w_{k,J} of Σ_{i ∈ T_k} ( Y_i − Σ_{j=1}^J w_{k,j} ℓ̂^(j)_{T_k \ T_{k,v(i)}}(X_i) )²    subject to w_{k,j} ≥ 0 and Σ_{j=1}^J w_{k,j} = 1    (17)

where w_{k,j} are referred to as stacking weights. We use ℓ̂^(j)_{T_k \ T_{k,v(i)}}(X_i) to denote the cross-validated predicted value for observation i, which is obtained from fitting learner j on the subsample T_k \ T_{k,v(i)}, that is, the subsample excluding the fold v(i) into which observation i falls. The stacking predicted values are obtained as Σ_j ŵ_{k,j} ℓ̂^(j)_{T_k}(X_i), where each learner j is fit on the step-k training sample T_k. The objective function for estimating m0(X) is defined accordingly.
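The CLS problem in (17) can be illustrated with nonnegative least squares followed by a normalization of the weights, a common way to approximate the unit-sum constraint; this sketch is an assumption of ours and not necessarily pystacked's exact solver.

```python
# Illustrative CLS stacking weights: nonnegative least squares on the
# cross-validated base-learner predictions, normalized to sum to one.
import numpy as np
from scipy.optimize import nnls

def cls_weights(cv_preds, y):
    """cv_preds: (n, J) cross-validated predictions of J base learners."""
    w, _ = nnls(cv_preds, y)  # nonnegative weights
    return w / w.sum()        # normalize to satisfy the unit-sum constraint
```

When one base learner dominates, its weight approaches 1 and the others shrink toward 0, which is how stacking hedges against poorly suited or ill-tuned learners.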

CLS frequently performs well in practice and facilitates the interpretation of stacking as a weighted average of base learners (Hastie, Tibshirani, and Friedman 2009). However, it is not the only sensible way of combining base learners. For example, stacking could instead select the single learner with the lowest quadratic loss by imposing the constraints w_{k,j} ∈ {0, 1} and Σ_j w_{k,j} = 1. We refer to this choice as "single best" and include it in our simulation experiments. We implement stacking for DDML using pystacked (Ahrens, Hansen, and Schaffer 2023).
Pooled stacking. A variant of stacking specific to DDML is pooled stacking. Standard stacking fits the final learner K separate times, once in each cross-fitting step, yielding K separate sets of stacking weights w_{k,j} for the J learners. With DDML pooled stacking, we impose the additional constraint in (17) that the weights are the same across all cross-fitting folds, w_{k,j} = w_j for all k. By returning one set of stacking weights, pooled stacking imposes an additional degree of regularization and facilitates interpretation but suffers from the same high computational cost as pairing DDML with (regular) stacking.

Short-stacking.
Stacking and pooled stacking rely on cross-validation. In the context of DDML, we can also exploit the cross-fitted predicted values directly for stacking. That is, we can directly apply CLS to the cross-fitted predicted values for estimating ℓ_0(X) [and similarly, m_0(X)]:

   min_{w_1,...,w_J}  Σ_{i=1}^{n} [ Y_i − Σ_{j=1}^{J} w_j ℓ_{j,−k(i)}(X_i) ]²   subject to  w_j ≥ 0 and Σ_j w_j = 1,

where ℓ_{j,−k(i)}(X_i) is the cross-fitted predicted value of learner j for observation i, obtained without using the cross-fitting fold k(i) that contains observation i. We refer to this form of stacking, which uses the cross-fitted predicted values, as short-stacking because it takes a shortcut. This contrasts with regular stacking, which estimates the stacking weights separately for each cross-fitting fold k. The main advantage of short-stacking relative to standard stacking is its lower computational cost: short-stacking does not require fitting the J learners on cross-validation subsamples to obtain the cross-validated predicted values needed for standard stacking. Furthermore, short-stacking (like pooled stacking) produces one set of weights for the entire sample, which facilitates interpretation and implies a higher degree of regularization.
A potential disadvantage of short-stacking is that it is more susceptible to overfitting because the stacking weights and the structural parameters are estimated using the same cross-fitted predicted values. We thus recommend considering short-stacking only in regular settings where the number of candidate learners is small relative to N (see also the discussion in Ahrens et al. [2024]). Algorithm A.4 in the online appendix summarizes the short-stacking algorithm for the partially linear model.15
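Operationally, short-stacking reduces to a single constrained regression of the outcome on the matrix of cross-fitted predictions. The sketch below is our own illustration, not the ddml source; it uses the common NNLS-plus-rescaling shortcut to impose nonnegative weights that sum to one.

```python
import numpy as np
from scipy.optimize import nnls

def short_stack(cf_preds, y):
    """Short-stacking: one set of weights for the whole sample.

    cf_preds : (n, J) cross-fitted predictions of J base learners
    y        : (n,) outcome vector
    Solves nonnegative least squares and rescales the weights to sum
    to one, a common approximation to exact CLS.
    """
    w, _ = nnls(cf_preds, y)
    w = w / w.sum()
    return cf_preds @ w, w
```

Because the cross-fitted predictions are already produced during DDML cross-fitting, this step adds essentially no extra learner fits, which is the source of the computational savings discussed above.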

Monte Carlo simulation
To illustrate the advantages of DDML with stacking, we generate artificial data based on the partially linear model

   Y_i = θ_0 D_i + c_Y g(X_i) + ε_i    (18)
   D_i = c_D g(X_i) + u_i              (19)

where both ε_i and u_i are independently drawn from the standard normal distribution.
We set the target parameter to θ_0 = 0.5 and the sample size to either n = 100 or n = 1000. The controls X_i are drawn from the multivariate normal distribution N(0, Σ), where Σ_ij = 0.5^|i−j|. The number of controls is set to p = dim(X_i) = 50, except in DGP 5, where p = 7. The constants c_Y and c_D are chosen such that the R² in (18) and (19) are approximately equal to 0.5. To induce heteroskedasticity, we rescale the error terms as functions of the observables. The nuisance function g(X_i) is generated using five exemplary DGPs, which cover linear and nonlinear processes with varying degrees of sparsity and varying numbers of observed covariates; for example, DGP 5 sets g(X_i) equal to the DGP 4 specification with p = 7. DGP 1 is a linear design involving many negligibly small parameters. While not exactly sparse, the design can be approximated well by a sparse representation. DGP 2 is linear in the parameters and exactly sparse but includes interactions and second-order polynomials. DGPs 3-5 are also exactly sparse but involve complex nonlinear and interaction effects. DGPs 4 and 5 are identical, except that DGP 5 does not add nuisance covariates that are unrelated to Y and D.
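A data-generating process of this form can be sketched as follows. This is an illustrative stand-in, not the paper's exact design: the linear g() with geometrically decaying coefficients mimics the DGP 1 flavor, and the constants c_Y and c_D are set to 1 here rather than calibrated to an R² of 0.5.

```python
import numpy as np

def simulate_plm(n=1000, p=50, theta=0.5, seed=42):
    """Simulate Y = theta*D + g(X) + eps and D = g(X) + u with
    Sigma_ij = 0.5^|i-j| controls. Coefficients of g() are illustrative;
    errors are kept homoskedastic for simplicity."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # Sigma_ij = 0.5^|i-j|
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    g = X @ (0.9 ** (idx + 1))        # many negligibly small coefficients
    D = g + rng.standard_normal(n)
    Y = theta * D + g + rng.standard_normal(n)
    return Y, D, X
```

Because D loads on the same g(X) as Y, a naive regression of Y on D alone is confounded, which is exactly the setting DDML is designed for.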
We consider DDML with the following supervised machine learners for cross-fitting the CEFs:16

1.-2. Cross-validated lasso and ridge with untransformed base controls.
3.-4. Cross-validated lasso and ridge with fifth-order polynomials of base controls but no interactions (referred to as "Poly 5").
5.-6. Cross-validated lasso and ridge with second-order polynomials and all first-order interaction terms (referred to as "Poly 2 + Inter.").
7. Random forests with low regularization: base controls, maximum tree depth of 10, 500 trees, and approximately √p features considered at each split.
8. Random forests with medium regularization: same as 7 but with maximum tree depth of 6.
9. Random forests with high regularization: same as 7 but with maximum tree depth of 2.
10. GB trees with low regularization: base controls, 1,000 trees, and a learning rate of 0.3. We enable early stopping, which uses a 20% validation sample to decide whether to stop the learning algorithm. Learning is terminated after five iterations with no meaningful improvement in the mean squared loss of the validation sample.17
11. GB with medium regularization: same as 10 but with a learning rate of 0.1.
12. GB with high regularization: same as 10 but with a learning rate of 0.01.
13. Feed-forward neural net with base controls and two layers of size 20.

We use the above set of learners as base learners for DDML with stacking approaches. Specifically, we estimate DDML using stacking, short-stacking, and pooled stacking, which we combine with CLS and the single-best learner. We set the number of folds to K = 20 if n = 100 and to K = 5 if n = 1000. That is, we adapt the number of folds K to the total sample size n to ensure that the CEF estimators are trained on sufficiently large training samples.
For comparison, we report results for ordinary least squares (OLS) and PDS-lasso with base controls, PDS-lasso with Poly 5, PDS-lasso with Poly 2 + interactions, and an oracle estimator using the full sample.18 The oracle estimator presumes knowledge of the function g(X) and obtains estimates by regressing Y on the two variables D and g(X).
We report the simulation median absolute bias (MAB) and coverage rates (CR) of 95% confidence intervals for DGPs 1-3 in table 1. We delegate results for DGPs 4 and 5, including a brief discussion, to online appendix B. DDML estimators leveraging stacking approaches perform favorably compared with individual base learners in terms of bias and coverage. The relative performance of stacking approaches seems to improve as the sample size increases, likely reflecting that the stacking weights are more precisely estimated in larger samples. For n = 1000, the bias of stacking with CLS is at least as low as the bias of the best-performing individual learner under DGPs 1-2, while only gradient boosting and the neural net yield a lower bias than stacking under DGP 3.

Results for coverage are similar, with stacking-based estimates being comparable with the best-performing feasible estimates and the oracle when n = 1000. With n = 100, the coverage of confidence intervals for stacking-based estimators is inferior to the coverage for a few of the individual learners but is still competitive and superior to that of most learners. Looking across all results, we see that stacking provides robustness against the potentially very poor performance that could result from relying on one badly performing learner.
There are overall small performance differences among the six stacking estimators considered, suggesting that short-stacking has a substantial practical advantage because of its lower computational cost. Ahrens et al. (2024) report that short-stacking reduces the compute time to approximately a fraction 1/V of that of standard stacking, where V is the number of cross-validation folds. There is some evidence that the single-best selector outperforms CLS in very small sample sizes in DGPs 2-3 but not in DGP 1 (and also not in DGPs 4-5; see table B.1). We suspect that the single-best selector works better in scenarios where one base learner clearly dominates.
The mean squared prediction errors (MSPEs) and the average stacking weights, which we report in tables B.2 and B.3 in the online appendix, provide further insight into how stacking with CLS functions. CLS assigns large stacking weights to base learners with low MSPEs, which in turn are associated with low biases. Importantly, stacking assigns zero or close-to-zero weights to poorly specified base learners such as the highly regularized random forest, which in all three DGPs ranks among the individual learners with the highest MSPE and the highest bias. This robustness to misspecified and ill-chosen machine learners, which could otherwise lead to misleading inference, is indeed one of our main motivations for advocating stacking approaches to DDML.
DDML with stacking approaches also compares favorably with conventional full-sample estimators. In the relatively simple linear DGP 1, DDML with stacking performs similarly to OLS and the infeasible oracle estimator, both in terms of bias and coverage, for n = 100 and n = 1000. In the more challenging DGPs 2 and 3, the bias of DDML with stacking is substantially lower than the biases of OLS and the PDS-lasso estimators. While the bias and size distortions of DDML with stacking are still considerable in comparison with the infeasible oracle for n = 100, they are close to the oracle for n = 1000. Overall, the results highlight the ability of DDML with stacking to flexibly approximate a wide range of DGPs, provided a diverse set of base learners is chosen.

The program
In this section, we provide an overview of the ddml package. We introduce the syntax and workflow for the main programs in section 5.1. Section 5.2 lists the options. Section 5.3 covers the simplified one-line program qddml. We provide an overview of supported machine learning programs in section 5.4. Finally, section 5.5 adds a note on how to ensure replication with ddml. See the help files for all available commands and options.

Syntax: ddml
The ddml estimation proceeds in four steps.
Step 1: Initialize ddml and select the model.

ddml init model [ , mname(name) kfolds(integer) foldvar(varlist) fcluster(varname) reps(integer) ]

where model selects between the partially linear model (partial), the interactive model (interactive), the partially linear IV model (iv), the flexible partially linear IV model (fiv), and the interactive IV model (interactiveiv). This step creates a persistent Mata object with the name provided by mname(), in which model specifications and estimation results will be stored. The default is mname(m0).
At this stage, user-specified folds for cross-fitting can be set via integer-valued Stata variables (see foldvar()). By default, observations are randomly assigned to folds, and kfolds() determines the number of folds (the default is 5). Cluster-randomized fold splitting is supported (see fcluster()). The user can also select the number of times to fully repeat the cross-fitting procedure (see reps()).
Step 2: Add supervised machine learners for estimating conditional expectations.
In step 2, we select the machine learning programs for estimating CEFs.
ddml cond_exp [ , mname(name) vname(varname) learner(name) vtype(string) predopt(string) ] : command depvar vars [ , cmdopt ]

where cond_exp selects the conditional expectation to be estimated by the machine learning program command. At least one learner is required for each conditional expectation. Table 2 provides an overview of which conditional expectations are required by each model. The program command is a supervised machine learning program such as cvlasso or pystacked (see compatible programs in section 5.4). The options cmdopt are specific to that program.

Table 2. Conditional expectations that must be specified for each model

Step 3: Perform cross-fitting.
This step implements the cross-fitting algorithm. Each learner is fit iteratively on the training folds, and out-of-sample predicted values are obtained. Cross-fitting is the most time-consuming step because it involves fitting the selected machine learners repeatedly.
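For intuition, the cross-fitting step plus the final estimation for the partially linear model can be sketched in Python. This is an illustrative re-implementation under simplifying assumptions, not the ddml code; the learner and the fold scheme are placeholders.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def ddml_plm(Y, D, X, learner=None, K=5, seed=123):
    """Cross-fit E[Y|X] and E[D|X], then regress the residualized
    outcome on the residualized treatment (partially linear model)."""
    learner = learner if learner is not None else LinearRegression()
    n = len(Y)
    yhat = np.zeros(n)
    dhat = np.zeros(n)
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        # each fold's predictions come from models fit on the other folds
        yhat[test] = clone(learner).fit(X[train], Y[train]).predict(X[test])
        dhat[test] = clone(learner).fit(X[train], D[train]).predict(X[test])
    y_res, d_res = Y - yhat, D - dhat
    theta = (d_res @ y_res) / (d_res @ d_res)   # final OLS without constant
    return theta
```

Every observation's prediction comes from a model that never saw that observation, which is what allows valid inference on theta even with flexible learners.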
Step 4: Estimate causal effects.

Finally, we estimate the parameter of interest for all combinations of learners added in step 2.

Options

kfolds(integer) is the number of cross-fitting folds. The default is kfolds(5).
fcluster(varname) is the cluster identifier for cluster randomization of folds.
foldvar(varlist) specifies integer variables identifying custom folds (one variable per cross-fitting repetition).
reps(integer) is the number of cross-fitting repetitions, that is, how often the cross-fitting procedure is repeated on randomly generated folds.
tabfold prints a table with the frequency of observations by fold.
vname(varname) is the name of the dependent variable in the reduced-form estimation. This is usually inferred from the command line but is mandatory for the fiv model.
learner(name) is the optional name of the variable to be created.
vtype(string) is the optional variable type of the variable to be created. The default is vtype(double). none can be used to leave the type field blank. (Setting vtype(none) is required when using ddml with rforest.)

predopt(string) is the predict option to be used to get predicted values. Typical values are xb or pr. The default is blank.
shortstack asks for short-stacking to be used.Short-stacking uses the cross-fitted predicted values to obtain a weighted average of several base learners.
poolstack asks for pooled stacking to be used.This is available only if pystacked has been used for standard stacking in all equations.
nostdstack is used with pystacked and short-stacking; it tells pystacked to generate the base learner predictions without the computationally expensive additional step of obtaining the stacking weights.
finalest(name) sets the final estimator for all stacking methods; the default estimator, finalest(nnls1), is least squares without a constant and with the constraints that weights are nonnegative and sum to 1. Alternative final estimators include singlebest (use the minimum mean squared error [MSE] base learner), ols (ordinary least squares), and avg (unweighted average of all base learners).
spec(string) selects the specification. This can be either the specification number, mse for the minimum-MSE specification (the default), or ss for short-stacking.
rep(string) selects the cross-fitting repetition. This can be the cross-fitting repetition number, mn for mean aggregation, or md for median aggregation (the default). See remark 2 for more information.
allcombos estimates all possible specifications. By default, only the minimum-MSE, short-stacking, or pooled-stacking specification is estimated and displayed.
notable suppresses the summary table.
replay is used in combination with spec() and rep() to display and return estimation results.
robust reports standard errors that are robust to the presence of arbitrary heteroskedasticity.
noconstant suppresses the constant term in the estimation stage (only relevant for partially linear models).
showconstant displays the constant term in the summary estimation output table (partial, iv, and fiv models only).
atet reports the average treatment effect on the treated (the default is ATE).
ateu reports the average treatment effect on the untreated (the default is ATE).
trim(real) trims propensity scores for the interactive and interactive IV models. The default is trim(0.01); that is, values below 0.01 and above 0.99 are set to 0.01 and 0.99, respectively.
Refitting the final learner using ddml estimate with stacking options is generally very fast because it does not require cross-fitting again.
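The trimming rule described for trim() is a simple clipping of the estimated propensity scores. A minimal sketch of the documented default behavior (our illustration, not the ddml code):

```python
import numpy as np

def trim_pscores(phat, t=0.01):
    """Clip estimated propensity scores into [t, 1 - t] so that the
    inverse-probability terms in the interactive-model score stay bounded."""
    return np.clip(phat, t, 1.0 - t)
```

Without trimming, propensity scores near 0 or 1 can make individual score contributions explode and destabilize the treatment-effect estimate.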
For descriptions of the utility commands ddml describe, ddml extract, and ddml export and of their options, see their corresponding help files.

Short syntax: qddml
The ddml package includes the wrapper program qddml, which provides a one-line syntax for fitting a ddml model. The one-line syntax follows the syntax of pdslasso and ivlasso (Ahrens, Hansen, and Schaffer 2018). The main restriction of qddml compared with the more flexible multiline syntax is that qddml allows for only one user-specified machine learner.
qddml has integrated support for pystacked, which is the default learner in all equations. The syntax for qddml options differs depending on whether pystacked is used as the learner in each equation. The cmd() option sets the options for all the conditional expectations estimated by pystacked; the ycmd(), dcmd(), and zcmd() variants control the options sent to the corresponding conditional expectation estimations. Similarly, cmdopt() can be used to set the options for all equations, while the y, d, and z variants (for example, ycmdopt()) set the options for the corresponding conditional expectation estimation. Other options are as in ddml.

Supported machine learning programs
ddml is compatible with any supervised machine learning program in Stata that supports the typical "regress y x" syntax, comes with a postestimation predict command, and supports if statements. We have tested ddml with the following programs:

• pystacked facilitates the stacking of a wide range of machine learners, including regularized regression, random forests, support vector machines, GB trees, and feed-forward neural nets, using Python's scikit-learn (Ahrens, Hansen, and Schaffer 2023; Pedregosa et al. 2011; Buitinck et al. 2013). In addition, pystacked can be used as a front end to fit individual machine learners. ddml has integrated support for pystacked, and it is the recommended default learner.
• lassopack implements regularized regression, for example, lasso, ridge, and elastic net (Ahrens, Hansen, and Schaffer 2020).
• rforest is a random forest wrapper for Weka (Schonlau and Zou 2020; Frank et al. 2009).
• svmachines allows for the estimation of support vector machines using libsvm (Chang and Lin 2011; Guenther and Schonlau 2018).
• The program parsnip of the package mlrtime provides access to R's parsnip machine learning library through rcall (Huntington-Klein 2021; Haghish 2019).
Using parsnip requires the installation of the supplementary wrapper program parsnip2.19 Stata programs that are currently not supported can be added relatively easily using wrapper programs (see parsnip2 for an example).

Inspecting results and replication
In this section, we discuss how to ensure replicability when using ddml. We also discuss some tools available for tracing replication failures. First, however, we briefly describe how ddml stores results.
ddml stores estimated conditional expectations in Stata's memory as Stata variables. These variables can be inspected, graphed, and summarized as usual. Fold ID variables are also stored as Stata variables (named m0_fid_r by default, where m0 is the default model name and r is the cross-fitting repetition). ddml models are stored in Mata structs and Mata associative arrays. Specifically, the ddml model created by ddml init is an mStruct, and information relating to the estimation of conditional expectations is stored in eStructs. Results relating to the overall model estimation are stored in associative arrays that live in the mStruct, and results relating to the estimation of conditional expectations are stored in associative arrays that live in the corresponding eStructs.

19. Available from https://github.com/aahrens1/parsnip2.

Replication tips:
• Set the Stata seed before ddml init.This ensures that the same random fold variable is used for a given dataset.
• Using the same fold variable alone is usually not sufficient to ensure replication, because many machine learning algorithms involve randomization.That said, note that the fold variable is stored in memory and can be reused for subsequent estimations via the foldvar() option.
• Replication of ddml results may require additional steps with some programs that rely on randomization in other software environments, for example, R or Python. pystacked uses a Python seed generated in Stata. Thus, when ddml is used with pystacked, setting the seed before ddml init also guarantees that the same Python seed underlies the stacking estimation. Other programs relying on randomization outside of Stata might not behave in the same way. Thus, when using other programs, check the help files for options that set external random seeds. Try estimating each individual learner on the entire sample to see which settings need to be passed for their results to replicate.
• Beware of changing samples. Fold splits or learner idiosyncrasies may mean that sample sizes vary slightly across learners, estimation samples, or cross-fitting repetitions. ddml extract with the show(n) option reports sample sizes by learner and fold. See the ddml extract help file for more information.
• The ddml export utility can be used to export the estimated conditional expectations, fold variables, and sample indicators to a CSV format file for examination and comparison in other software environments.

Applications
We demonstrate the ddml workflow using two applications. In section 6.1, we apply the DDML estimator to estimate the effect of 401(k) eligibility on financial wealth following Poterba, Venti, and Wise (1995). We focus on the partially linear model for the sake of brevity but provide code that demonstrates the use of ddml with the interactive model, the partially linear IV model, and the interactive IV model using the same application in online appendix C. Additional examples can be found in the help file. Based on Berry, Levinsohn, and Pakes (1995), we show in section 6.2 how to use ddml for estimation of the flexible partially linear IV model, which allows both for flexibly controlling for confounding factors using high-dimensional function approximation and for estimation of optimal IVs.

401(k) and financial wealth
The data consist of n = 9915 households from the 1991 Survey of Income and Program Participation. The application is originally due to Poterba, Venti, and Wise (1995) but has been revisited by Belloni et al. (2017), Chernozhukov et al. (2018), and Wüthrich and Zhu (2023), among others. Following previous studies, we include the control variables age, income, years of education, and family size, as well as indicators for marital status, two-earner status, defined-benefit pension status, individual retirement account participation, and home ownership. The outcome is net financial assets, and the treatment is eligibility to enroll in the 401(k) pension plan.
We load the data and define three globals for the outcome, treatment, and control variables. We then proceed in the four steps outlined in section 5.1.
. use sipp1991
. global Y net_tfa
. global X age inc educ fsize marr twoearn db pira hown
. global D e401

Step 1: Initialize ddml model

We initialize the ddml model and select the partially linear model in (2). Before initialization, we set the seed to ensure replication. This should always be done before ddml init, which executes the random fold assignment. In this example, we opt for four folds to ensure the readability of some of the output shown below, although we recommend considering more folds in practice.
. set seed 123
. ddml init partial, kfolds(4)

Step 2: Add supervised machine learners for estimating conditional expectations

In this step, we specify which machine learning programs should be used for the estimation of the conditional expectations E(Y|X) and E(D|X). For each conditional expectation, at least one learner is required. For illustrative purposes, we consider regress for linear regression, pystacked with the m(lassocv) option for cross-validated lasso (as an example of how to use pystacked to estimate one learner), and rforest for random forests. When using rforest, we need to add the option vtype(none) because the postestimation predict command of rforest does not support variable types. The flexible ddml syntax allows specification of different sets of covariates for different learners. This flexibility can be useful because, for example, linear learners such as the lasso might perform better if interactions are provided as inputs, whereas tree-based methods such as random forests may detect certain interactions in a data-driven way.
Here we use interactions and second-order polynomials for the cross-validated lasso but not for the other learners.
This application has only one treatment variable, but ddml does support multiple treatment variables.To add a second treatment variable, we would simply add a statement such as ddml E[D|X]: reg D2 $X, where D2 would be the name of the second treatment variable.An example with two treatments is provided in the help file.
The auxiliary ddml subcommand describe allows us to verify that the learners were correctly registered:

. ddml describe

Step 3: Perform cross-fitting

The third step performs cross-fitting, which is the most time-intensive process. The shortstack option enables the short-stacking algorithm of section 4.1.
Step 4: Estimate causal effects

In this final step, we obtain the causal effect estimates. Because we requested short-stacking in step 3, ddml shows the short-stacking result, which relies on the cross-fitted values of each base learner. In addition, the specification that corresponds to the minimum-MSE learners is listed at the beginning of the output.
Because we have specified three learners per conditional expectation, there are nine specifications relying on the base learners in total (we can combine Y1_reg_1, Y2_pystacked_1, and Y3_rforest_1 with D1_reg_1, D2_pystacked_1, and D3_rforest_1). To get all results, we add the allcombos option. We can use the spec(string) option to select among the listed specifications, where string is either the specification number; ss, st, or ps for the short-stacking, standard stacking, or pooled stacking specification, respectively; or mse for the specification corresponding to the minimal MSPE. In the example above, spec(1) reports in full the specification using regress for estimating both E(Y|X) and E(D|X). The spec() option can be provided either in combination with allcombos or after estimation in combination with the replay option.

Manual final estimation.
In the background, ddml estimate regresses Y1_reg_1 against D1_reg_1 with a constant. We can verify this manually. Manual estimation using regress allows the use of regress's postestimation tools.

Stacking
We next demonstrate DDML with stacking. To this end, we exploit the stacking regressor implemented in pystacked. pystacked allows combining multiple base learners, with learner-specific settings and covariates, into a final meta-learner. The learners are separated by ||. method() selects the learner, xvars() specifies learner-specific covariates (overwriting the default covariates $X), and opt() passes options to the learners. In this example, we use OLS, cross-validated lasso and ridge, random forests, and gradient boosting. We furthermore use parallelization with five cores. A detailed explanation of the pystacked syntax can be found in Ahrens, Hansen, and Schaffer (2023).
After cross-fitting, we retrieve the cross-fitted MSPE using the ddml extract command with the show(mse) option or examine the stacking weights using show(stweights):

. quietly ddml crossfit

The DDML-specific stacking approaches of short-stacking and pooled stacking can be requested at either the cross-fitting or the estimation step. Refitting the final learner at the estimation step allows us to avoid repeating the computationally intensive cross-fitting. Here we request short-stacking and pooled stacking but using the single-best base learner. Short-stacking is computationally much faster than regular stacking (or pooled stacking) because it avoids the cross-validation within cross-fitting folds. Below, we disable regular stacking with the nostdstack option in the cross-fitting stage. In this example, where we use parallelization with 5 cores, the run time is only 50.7 seconds for DDML with short-stacking compared with 93.0 seconds for DDML with regular stacking.

One-line syntax.
qddml provides a simplified and convenient one-line syntax. The main constraint of qddml is that it allows for only one user-specified learner. The default learner is pystacked, which by default uses OLS, cross-validated lasso, and gradient boosting as base learners. The pystacked base learners and non-pystacked commands can be modified via various options. Below, we use qddml with pystacked's default base learners. We omit the output for the sake of brevity.

The market for automobiles
For this demonstration, we follow Chernozhukov, Hansen, and Spindler (2015b), who fit a stylized demand model using IVs based on the data from Berry, Levinsohn, and Pakes (1995). The authors of the original study estimate the effect of prices on the market share of automobile models in a given year (n = 2217). The controls are product characteristics (a constant, an air conditioning dummy, horsepower divided by weight, miles per dollar, and vehicle size). To account for endogenous prices, Berry, Levinsohn, and Pakes (1995) suggest exploiting other products' characteristics as instruments.

Manual final estimation.

We can obtain the final estimate manually. To this end, we construct the instrument as E(D|X, Z) − E(D|X) and the residualized endogenous regressor as D − E(D|X). The residualized dependent variable is saved in memory.
Here we obtain the estimate from the first cross-fitting repetition. We could obtain the estimate for repetition r by changing the "_1" suffix to "_r".
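The algebra behind this manual step is an IV regression with the constructed instrument. A minimal Python sketch (our illustration, which assumes all conditional expectations have already been cross-fitted):

```python
import numpy as np

def fiv_estimate(y_res, d_res, z_opt):
    """IV estimate of theta in the flexible partially linear IV model.

    y_res : Y - E[Y|X]         (residualized outcome)
    d_res : D - E[D|X]         (residualized endogenous regressor)
    z_opt : E[D|X,Z] - E[D|X]  (constructed optimal instrument)
    """
    return (z_opt @ y_res) / (z_opt @ d_res)
```

This is the same quantity ivregress would return when regressing the residualized outcome on the residualized endogenous regressor with z_opt as the instrument and no constant.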

Conclusion
This article introduced the command ddml, which implements double/debiased machine learning. It allows for flexible estimation of structural parameters in five econometric models, leveraging a wide range of supervised machine learners. While ddml is compatible with many existing machine learning programs in Stata, it is specifically designed to be used with pystacked, which allows combining several learners into a meta-learner via stacking. We see several avenues for extensions. First, ddml primarily focuses on cross-sectional models. Some panel models are readily implementable in ddml, but expanding its capabilities for seamless use with a wide range of panel models would increase its practical relevance. Second, researchers and policymakers are frequently interested in learning treatment effects for specific subpopulations sharing observable characteristics. The estimation of conditional average treatment effects would be a natural extension to the existing ddml program. Third, ddml currently lacks underidentification diagnostics and weak-identification-robust inference for IV regressions, which we hope to add in future releases.
1. ddml supports flexible estimators of causal parameters in five econometric models: a) the partially linear model, b) the interactive model (for binary treatment), c) the partially linear instrumental-variables (IV) model, d) the flexible partially linear IV model, and e) the interactive IV model (for binary treatment and instrument).

Table 1 .
Bias and coverage rates in the linear and nonlinear DGPs. Notes: The table reports median absolute bias (MAB) and coverage rates (CR) of 95% confidence intervals. We use standard errors robust to heteroskedasticity. For comparison, we report the following full-sample estimators: the infeasible oracle, OLS, and PDS-lasso with the base set and two different expanded sets of covariates. DDML estimators use 20 folds for cross-fitting if n = 100 and 5 folds if n = 1000.
Six variables are created and stored in memory, corresponding to the six learners specified in the previous step. These variables are called Y1_reg_1, Y2_pystacked_1_1, Y3_rforest_1, D1_reg_1, D2_pystacked_1_1, and D3_rforest_1. Y and D indicate the outcome and the treatment variable. The indexes 1 to 3 are learner counters. regress, pystacked, and rforest correspond to the names of the commands used. The _1 suffix indicates the cross-fitting repetition. The additional _1 in the case of Y2_pystacked_1_1 and D2_pystacked_1_1 indicates the learner number (there is only one pystacked learner).
Finally, in the estimation stage, we retrieve the results of DDML with stacking: