A weighting method for simultaneous adjustment for confounding and joint exposure-outcome misclassifications

Joint misclassification of exposure and outcome variables can lead to considerable bias in epidemiological studies of causal exposure-outcome effects. In this paper, we present a new maximum likelihood based estimator for the marginal causal odds ratio that simultaneously adjusts for confounding and several forms of joint misclassification of the exposure and outcome variables. The proposed method relies on validation data for the construction of weights that account for both sources of bias. The weighting estimator, which is an extension of the exposure misclassification weighting estimator proposed by Gravel and Platt (Statistics in Medicine, 2018), is applied to reinfarction data. Simulation studies were carried out to study its finite sample properties and compare it with methods that do not account for confounding or misclassification. The new estimator showed favourable large sample properties in the simulations. Further research is needed to study the sensitivity of the proposed method and that of alternatives to violations of their assumptions. The implementation of the estimator is facilitated by a new R function in an existing R package.


Introduction
In epidemiological research on causal associations between a particular exposure and a certain outcome, erroneous information on either or both of these variables poses a serious methodological obstacle in making valid inferences. In particular, joint misclassification of exposure and outcome can lead to considerable bias of standard causal effect estimators, with direction and magnitude depending on various factors, including the misclassification mechanism and the direction and magnitude of the true effect. [1][2][3][4][5][6] Exposure and outcome misclassification is typically categorised according to two separate properties: whether or not the misclassification is differential and whether or not it is dependent relative to some covariate vector L containing patient characteristics. 1,5 Joint misclassification of exposure and outcome is said to be nondifferential if (1) the sensitivity and specificity of exposure classification are constant across all categories of the (true) outcome given L and (2) the sensitivity and specificity of outcome classification are constant across all categories of the (true) exposure given L; otherwise it is differential. Misclassification is said to be independent if the joint probability of any exposure and outcome classification given any true exposure and outcome categories and L can be factored into the product of the corresponding probabilities for exposure and outcome separately; otherwise, it is dependent. In Dawid's notation, 7 that is, if true exposure level A and true outcome Y are (potentially mis)classified as B and Z, respectively, misclassification is nondifferential if and only if B ⊥⊥ Y | A, L and Z ⊥⊥ A | Y, L, and independent if and only if Z ⊥⊥ B | Y, A, L.
The body of epidemiological research hampered by some form of joint misclassification is likely voluminous. 6 Examples of studies affected by exposure and outcome misclassification can be found, for example, in the literature on the causal effects of drug use, which is largely based on routinely collected data, where exposures are typically operationalised on the basis of prescription records and where outcomes are often self-reported. [8][9][10][11] In applied epidemiological research, misclassification and its potential consequences are often ignored. 12,13 The assertion often made in the discussion of study results that observed measures of association are biased toward the null under nondifferentiality, for example, is not generally true unless additional conditions are presupposed. 2,6 Methods to adjust for misclassification rely on additional information that can be used to estimate or correct for bias. One potential source of information is validation data obtained through supposedly infallible measurement. Recently, Gravel and Platt proposed an inverse probability weighting (IPW) method to simultaneously address confounding and outcome misclassification by means of internal validation data. 14 Other methods likewise suppose that either the exposure or the outcome is subject to misclassification. [14][15][16][17] In what follows, we propose an extension of Gravel and Platt's method to allow for confounding adjustment and joint exposure and outcome misclassification. This flexible estimator allows for the misclassifications to be dependent, differential or both. In Section 2, inverse probability weights for confounding and joint misclassification are introduced through a hypothetical study based on the illustrative example of Gravel and Platt. Section 3 details methods for estimation of the various components of the proposed weights based on validation data.
In Section 4, we describe a series of Monte Carlo simulations that were used to study properties of the proposed method in finite samples. We conclude with a summary and discussion of our findings in context of the existing literature.

Data distribution for illustration and development of weighting method
We first consider the data and setting described by Gravel and Platt and suppose that Table 1 represents a simple random (i.i.d.) sample from (or that its cell counts are proportional to the respective densities in) the population of interest. This illustration is based on a cohort study on the association between post-myocardial infarction statin use (A) and the one-year risk of reinfarction (Y). In what follows, we will refer to this example as the 'reinfarction example'.

No misclassification
Under conditional exchangeability given L (i.e. (Y(0), Y(1)) ⊥⊥ A | L), consistency (Y(a) = Y if A = a) and positivity (Pr(A = a | L = l) > 0 for a = 0, 1 and all l in the support of L), the mean counterfactuals E[Y(0)] and E[Y(1)] can be expressed in terms of 'observables' (meaning, here, variables that would be observed in the absence of measurement error) as

E[Y(a)] = E[WY | A = a],  a = 0, 1  (1)

where W denotes the inverse probability of the allocated exposure level A given L (i.e. the inverse propensity score if A = 1 and the inverse of the complement of the propensity score if A = 0) multiplied by the prevalence of the allocated exposure level A (i.e. W = Pr(A)/Pr(A | L); Supplementary Appendix I). We therefore have

OR = {E[WY | A = 1]/(1 − E[WY | A = 1])} / {E[WY | A = 0]/(1 − E[WY | A = 0])}  (2)

Replacing components of the right-hand side of equation (2) with sample analogues, we obtain the following estimator for the setting where L is binary

ÔR = {Ê[Y(1)]/(1 − Ê[Y(1)])} / {Ê[Y(0)]/(1 − Ê[Y(0)])},  Ê[Y(a)] = (Σ_l Ŵ_al n_1al) / (Σ_y Σ_l n_yal)  (3)

where n_yal denotes the number of subjects with Y = y, A = a, L = l and where Ŵ_al is the product of the proportion of subjects in the sample with A = a and the inverse of the proportion of subjects with A = a among those with L = l.

Table 1. Cross-classification of the reinfarction data for 33,007 individuals as given by Gravel and Platt.
For the data in Table 1, we obtain ÔR ≈ 0.573. The corresponding crude odds ratio (i.e. with Ŵ = 1) is 0.509.
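The weighted estimator just described can be sketched numerically. In the sketch below, the cell counts are hypothetical stand-ins (the actual Table 1 counts are not reproduced here); the weight for each (a, l) stratum is the marginal exposure prevalence divided by the conditional prevalence given L.

```python
import numpy as np

# Hypothetical cell counts n[y, a, l]: y = outcome, a = exposure, l = binary covariate.
# These are illustrative numbers, not the reinfarction data of Table 1.
n = np.zeros((2, 2, 2))
n[1, 1, 1], n[0, 1, 1] = 30, 170   # exposed, L = 1
n[1, 1, 0], n[0, 1, 0] = 10, 190   # exposed, L = 0
n[1, 0, 1], n[0, 0, 1] = 40, 160   # unexposed, L = 1
n[1, 0, 0], n[0, 0, 0] = 20, 380   # unexposed, L = 0

n_total = n.sum()
n_a = n.sum(axis=(0, 2))                    # number of subjects with A = a
pr_a = n_a / n_total                        # Pr(A = a)
pr_a_given_l = n.sum(axis=0) / n.sum(axis=(0, 1))  # Pr(A = a | L = l)

# W_hat[a, l] = Pr(A = a) / Pr(A = a | L = l)
w = pr_a[:, None] / pr_a_given_l

# E_hat[Y(a)] = sum_l W_hat[a, l] * n_{1al} / (number of subjects with A = a)
ey = (w * n[1]).sum(axis=1) / n_a

or_hat = (ey[1] / (1 - ey[1])) / (ey[0] / (1 - ey[0]))
```

With these made-up counts, the weighting moves the crude odds ratio toward the confounding-adjusted value, mirroring the contrast between 0.509 and 0.573 in the reinfarction data.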

Joint misclassification
Suppose that rather than observing Y and A we observe Z and B, the misclassified versions of Y and A, respectively. The relation between Z and B on the one hand and Y, A and L on the other can be expressed as

Pr(Z = z, B = b | Y = y, A = a, L = l) = p_byal^z (1 − p_byal)^(1−z) k_yal^b (1 − k_yal)^(1−b)

for z, b ∈ {0, 1} and all possible realisations y, a, l of Y, A, L, and where p_byal = Pr(Z = 1 | B = b, Y = y, A = a, L = l) and k_yal = Pr(B = 1 | Y = y, A = a, L = l). To simulate (dependent differential) misclassification in the reinfarction dataset, we use the true positive and false positive rates given in Table 2. The expected cell counts for these rates are given in Table 3.
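A misclassification mechanism of this kind can be mimicked in a small simulation: first draw B given (Y, A, L), then Z given (B, Y, A, L). The sketch below uses arbitrary illustrative rates, not the Table 2 values; the dependence of Z on B makes the joint misclassification dependent, and the dependence of the exposure classification on Y makes it differential.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True variables: confounder L, exposure A, outcome Y (illustrative models).
L = rng.binomial(1, 0.5, n)
A = rng.binomial(1, 0.3 + 0.2 * L)
Y = rng.binomial(1, 0.1 + 0.1 * A + 0.1 * L)

# k[y, a, l] = Pr(B = 1 | Y = y, A = a, L = l).
k = np.empty((2, 2, 2))
k[:, 0, :] = 0.05   # false positive rate of B when A = 0
k[:, 1, :] = 0.90   # sensitivity of B when A = 1
k[1, 1, :] = 0.95   # differential: higher sensitivity when Y = 1

# p[b, y, a, l] = Pr(Z = 1 | B = b, Y = y, A = a, L = l).
p = np.empty((2, 2, 2, 2))
p[:, 0, :, :] = 0.02   # false positive rate of Z when Y = 0
p[:, 1, :, :] = 0.85   # sensitivity of Z when Y = 1
p[1, 1, :, :] = 0.92   # dependent: Z more often correct when B = 1

B = rng.binomial(1, k[Y, A, L])
Z = rng.binomial(1, p[B, Y, A, L])
```

Empirical classification rates computed from (B, Z) then recover the chosen k and p parameters up to sampling error.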
We redefine the weights in equation (2) as a function of B and L (as per Supplementary Appendix I), yielding equation (4), where p(B) is the prevalence of level B of the potentially misclassified version of the exposure variable and where e_al = Pr(Y = 1 | A = a, L = l) and d_l = Pr(A = 1 | L = l) for all possible realisations a and l of A and L, respectively. In Supplementary Appendix I, it is shown that the target odds ratio can be expressed in terms of these weights and the observable data (equation (5)), which suggests the plug-in estimator of equation (6), where Ê denotes the sample mean operator and Ŵ the sample analogue (i.e. consistent estimator) of W in equation (4). For other effect measures (i.e. other choices of g), the same plug-in strategy can be implemented.
In the absence of exposure misclassification, equation (4) reduces to equation (7). The first term within the round brackets corrects for confounding and represents the propensity score if A = 1 or its complement if A = 0 divided by the prevalence of exposure level A. The term within square brackets is a factor that corrects for misclassification in the outcome variable. This correction factor is similar to that proposed by Gravel and Platt. 14 The only difference is that where in equation (7) it does not depend on the fallible measurement Z of Y, Gravel and Platt define different weights for subjects with Z = 0. Note, however, that the choice of weights for subjects with Z = 0 does not affect the population quantity in equation (5) or the estimator defined by equation (6), because the weights only appear in products with Z, which equal zero if Z = 0.
As for the reinfarction example, the odds ratio estimate for the exposure-outcome effect based on inverse probability weighting that assumes absence of exposure or outcome misclassification is 1.120, while the corresponding misclassification-naive crude odds ratio is 1.031. Estimation of the population weights W from observables using validation data is discussed in the next section. As shown below, weighting using the proposed weights that account for confounding and outcome and exposure misclassification results in an odds ratio of ÔR ≈ 0.573. Inference based on equation (7) rather than equation (4), i.e. using Gravel and Platt's method and ignoring misclassification in the exposure but correcting for outcome misclassification, yields an odds ratio estimate of 0.934.

Parameterisation based on positive and negative predictive values
In the foregoing discussion, the proposed weights were expressed in terms of sensitivity and specificity parameters. The sensitivity and specificity of Z with respect to Y, given (B, A, L), are p_B1AL and 1 − p_B0AL, respectively.

Table 3. Expected cell counts (rounded to integers) for reinfarction example after misclassification was introduced.
Note: Because of rounding, the sum of all cell entries is 33,006 rather than 33,007, the size of the reinfarction dataset.
Similarly, k_Y1L and 1 − k_Y0L reflect the sensitivity and specificity, respectively, with respect to A, conditional on Y and L.
As discussed below, it may be more convenient to choose a parameterisation that is based on (positive and negative) predictive values. Define d*, e*, k* and p* as the predictive-value analogues of the d, e, k and p parameters, i.e. with the roles of the true and potentially misclassified variables interchanged. The weights in equation (4) can then be rewritten in terms of these parameters (equation (8)). In the absence of exposure misclassification, these weights simplify accordingly.

Estimation of weights based on validation data
Estimation of the proposed weights can be done using a number of approaches. Here, we consider a maximum likelihood approach that assumes the availability of internal validation data, i.e. that some study participants have their exposure or outcome measured by an 'infallible' or 'gold standard' (100% accurate) classifier, and that all participants have the misclassified exposure and outcome variables measured.

Validation subset inclusion mechanism
Let R Y be the indicator variable that takes the value of 1 if the outcome is observed (i.e. measured by an infallible classifier) and 0 otherwise. Similarly, define R A to be the indicator variable that takes the value of 1 if the exposure variable is observed and 0 otherwise. R Y and R A reflect which subjects have validation data available on Y and A, respectively. The subset of subjects with validation data on Y need not fully overlap with the subset with validation data on A.
The validation subsets can be approached from the missing data framework of Rubin. 23 Provided that Z, B, L are free of missing values, Rubin's missing at random (MAR) condition is met whenever the vector (R_Y, R_A) is conditionally independent of (Y, A) given (Z, B, L).

Full likelihood approach based on parameterisation in terms of sensitivities and specificities
Simultaneous estimation of the whole vector of d, e, k and p parameters can be done via maximum likelihood estimation as follows. Assuming i.i.d. observations (Z_i, B_i, Y_i, A_i, L_i) and ignorable missingness in the sense of Rubin 23 (MAR and distinctness), valid likelihood-based inference is obtained by maximising the log-likelihood ℓ(θ) over the parameter space of θ, the vector of d, e, k and p parameters. Evaluating this log-likelihood involves marginalising over unobserved quantities in the last three terms of ℓ(θ). The log-likelihood equations may become considerably more tractable if we choose a parameterisation of the likelihood that is based on predictive values rather than sensitivities and specificities.

Full likelihood approach based on parameterisation in terms of predictive values
Inference may alternatively be based on a log-likelihood that is parameterised in terms of the vector θ* of the d*, e*, k* and p* parameters.
If validation data are available on Y if and only if they are available on A, the complete data log-likelihood ignoring the missing data mechanism can be conveniently expressed as a sum of four terms, ℓ*(θ*) = ℓ*_1(θ*) + ℓ*_2(θ*) + ℓ*_3(θ*) + ℓ*_4(θ*). Assuming distinct parameter spaces for the vectors of p*, k*, e* and d* parameters, the parameter values that maximise ℓ*(θ*) can be found by separately maximising ℓ*_1(θ*) and ℓ*_2(θ*) in the validation subset with respect to the p* and k* parameters, respectively, and ℓ*_3(θ*) and ℓ*_4(θ*) in the entire dataset with respect to the e* and d* parameters. Following Gravel and Platt 14 and Tang et al., 24 the sums of the first two and last two terms are therefore suitably labelled the internal validation and main study log-likelihood, respectively. With this parameterisation, finding the maximum likelihood estimates is readily achieved by taking advantage of standard statistical software.
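When the component models are taken fully saturated, each separate maximisation reduces to computing stratum-specific proportions. The sketch below illustrates this on synthetic data; as an illustrative assumption, the validation-subset terms concern Y given (A, Z, B, L) and A given (Z, B, L), and the main study terms the distributions of Z given (B, L) and B given L.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Synthetic observed data (Z, B, L) for everyone; (Y, A) validated where R = 1.
L = rng.binomial(1, 0.5, n)
B = rng.binomial(1, 0.3 + 0.2 * L)            # surrogate exposure
Z = rng.binomial(1, 0.1 + 0.3 * B)            # surrogate outcome
A = rng.binomial(1, 0.15 + 0.7 * B)           # true exposure
Y = rng.binomial(1, 0.05 + 0.25 * Z + 0.1 * A)  # true outcome
R = rng.binomial(1, 0.2 + 0.2 * Z + 0.1 * B)  # validation indicator, MAR given (Z, B, L)

def saturated_fit(target, *strata):
    """ML estimate of Pr(target = 1 | strata) under a fully saturated model:
    simply the proportion of target = 1 within each stratum."""
    key = np.zeros(len(target), dtype=int)
    for i, s in enumerate(strata):
        key += s.astype(int) << i
    est = np.full(2 ** len(strata), np.nan)
    for c in range(2 ** len(strata)):
        m = key == c
        if m.any():
            est[c] = target[m].mean()
    return est

v = R == 1
p_star = saturated_fit(Y[v], A[v], Z[v], B[v], L[v])  # validation subset
k_star = saturated_fit(A[v], Z[v], B[v], L[v])        # validation subset
e_star = saturated_fit(Z, B, L)                       # entire dataset
d_star = saturated_fit(B, L)                          # entire dataset
```

The stratum proportions recover the generating probabilities up to sampling error, which is exactly the behaviour the separate maximisation argument predicts.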

Equivalence of likelihood approaches based on different parameterisations
Without restrictions imposed on the parameter space (0, 1)^15, it can be shown that the maximum likelihood estimator based on the internal validation design is invariant to its parameterisation (sensitivities/specificities versus positive and negative predictive values). This is because there exists a function r mapping every θ* ∈ (0, 1)^15 to a unique θ ∈ (0, 1)^15 and vice versa. Maximising ℓ(θ) with respect to θ is then equivalent to maximising ℓ(r(θ*)) with respect to θ*. If more restrictions are imposed on θ or θ*, e.g. if we assume non-saturated logistic models for the components of θ and θ*, this equivalence no longer holds and the resulting weight estimates may differ depending on the parameterisation.

Application
For the reinfarction data example, we assume validation data are available according to a MAR mechanism that assigns validation data to an individual on either both Y and A (30% of all individuals) or neither, depending on their realisation of B, the misclassified version of the exposure variable A (Table S.1). We fitted fully saturated models to the entire dataset for the distribution of (Z, B) given L, and to the validation subset the fully saturated logistic regression models of A and Y on (Z, B, L) and (A, Z, B, L), respectively. Estimated weights are then obtained by plugging the maximum likelihood estimates into equation (8). As in the complete data setting where we assumed the weights to be known, evaluating equation (6) then yields an odds ratio of ÔR ≈ 0.573.

Simulations
We performed a series of Monte Carlo simulation experiments to illustrate the implementation of the proposed method, to study its finite sample properties and to compare the method to estimators that ignore the presence of confounding or joint exposure and outcome misclassification. All simulations were conducted using R-3.5.0 25 on x86_64-pc-linux-gnu platforms of the high performance computer cluster of Leiden University Medical Center.

Methods
For all 54 simulation experiments, we generated n_sim = 1000 samples of size n according to the data generating mechanisms depicted in the directed acyclic graphs of Figure 1. This multi-step data generating process comprised generating values on measurement error-free variables, introducing misclassification and allocating validation data to individuals. We applied various estimators to each simulation sample to yield, for each scenario, an empirical distribution of each point estimator and the corresponding precision estimators. These distributions were then summarised into various performance metrics. These metrics include the empirical bias of the estimator on the log scale (i.e. the mean estimated log-OR minus the target log-OR across the n_sim samples), the empirical standard error (SE) of the estimator on the log scale (i.e. the square root of the mean squared deviation of the estimated log-OR from the mean log-OR), the empirical mean squared error (MSE) (i.e. the sum of the squared SE and the squared bias), the square root of the mean estimated variance (SSE, sample standard error) and the empirical coverage probability (CP) (i.e. the fraction of simulation runs per scenario where the 95% confidence interval (95% CI) contained the target quantity).
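These performance metrics can be computed directly from the n_sim point and standard error estimates; the sketch below uses made-up log-OR estimates and estimated standard errors purely to show the definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = -0.4                                 # target marginal log-OR
est = target + rng.normal(0, 0.1, 1000)       # made-up log-OR estimates, one per sample
se_hat = np.full(1000, 0.1)                   # made-up estimated standard errors
lo, hi = est - 1.96 * se_hat, est + 1.96 * se_hat

bias = est.mean() - target                            # empirical bias
emp_se = np.sqrt(np.mean((est - est.mean()) ** 2))    # empirical SE
mse = emp_se ** 2 + bias ** 2                         # empirical MSE
sse = np.sqrt(np.mean(se_hat ** 2))                   # square root of mean estimated variance
coverage = np.mean((lo <= target) & (target <= hi))   # empirical coverage probability
```

With well-calibrated standard errors, the coverage lands near the nominal 0.95 and SSE tracks the empirical SE.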

Distribution of measurement error-free variables
Following Gravel and Platt, 14 we consider a setting based on that of "Scenario A" in the work of Setoguchi et al. with slight modifications to the propensity score and outcome models. 26 We consider a fully observed covariate vector L = (L_1, ..., L_10) whose distribution coincides with that of h(V), where V = (V_1, ..., V_10) has the multivariate normal distribution with zero means, unit variances and correlations equal to zero except for the correlations between V_1 and V_5, V_2 and V_6, V_3 and V_8, and V_4 and V_9, which were set to 0.2, 0.9, 0.2 and 0.9, respectively. The function h was defined such that h(V) = (I(V_1 > 0), V_2, I(V_3 > 0), V_4, I(V_5 > 0), I(V_6 > 0), V_7, I(V_8 > 0), I(V_9 > 0), V_10). Thus, sampling from the distribution of L is equivalent to sampling from the multivariate normal distribution with the given parameter values and dichotomising the first, third, fifth, sixth, eighth and ninth elements.
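Sampling from the distribution of L can be sketched as follows (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Correlation matrix: unit variances, all correlations zero except
# (V1,V5) = 0.2, (V2,V6) = 0.9, (V3,V8) = 0.2, (V4,V9) = 0.9 (1-based indices).
corr = np.eye(10)
for i, j, r in [(0, 4, 0.2), (1, 5, 0.9), (2, 7, 0.2), (3, 8, 0.9)]:
    corr[i, j] = corr[j, i] = r

V = rng.multivariate_normal(np.zeros(10), corr, size=n)

# h dichotomises elements 1, 3, 5, 6, 8, 9 (1-based) and keeps the rest continuous.
L = V.copy()
for idx in [0, 2, 4, 5, 7, 8]:
    L[:, idx] = (V[:, idx] > 0).astype(float)
```

The dichotomised columns are Bernoulli(0.5) by symmetry of the standard normal, while the remaining columns keep their unit variance and pairwise correlations.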
Next, let U_1 and U_2 be binary variables distributed according to the logistic models (10) and (11). The distribution of the binary exposure variable A was defined according to a logistic model in L and U_1 (with coefficient a_11 for U_1). Letting U_3 be a scalar random variable that is independent of (A, L_1, ..., L_10, U_1, U_2) and uniformly distributed over the interval [0, 1], we defined the counterfactual outcome Y(a), under the intervention setting A to a, as the indicator that U_3 falls below a logistic function of a, L and U_2 (with intercept b_0, conditional log-OR c for a and coefficient b_11 for U_2). With Y := Y(A), the above implies consistency, conditional exchangeability given L and structural positivity.

Misclassification mechanism
For scenarios with joint misclassification, we defined B = U_1 and Z = U_2, so that the predictive values take a standard logistic form. For scenarios without exposure misclassification, we set a_11 = 0 and defined B = A and Z = U_2. For simplicity, we removed any marginal dependence of Z on the covariates L and U_1 as well as any marginal dependence of U_1 on L (cf. equations (10) and (11)). Although models (10) through (15) take a standard logistic form, they do not imply that the corresponding sensitivities and specificities can be written in the same form. We chose the predictive values rather than the sensitivities and specificities to take a standard logistic form so as to ensure correct model specification in the estimation of the weights in the simulation experiments, in which a likelihood approach based on predictive values was adopted (cf. equation (9)).

Missing data mechanism
For these simulations, we stipulated L, B and Z to be observed for all subjects. We consider scenarios where the dataset can be partitioned into a subset with validation data on all misclassified variables (denoted R = 1) and a subset with validation data on neither (R = 0). That is, we simulated data such that subjects have validation data on both A and Y or on neither. Values for the response indicator R were generated according to the following (MAR) model:

logit Pr(R = 1 | Z, B, Y, A, L) = logit Pr(R = 1 | Z, B, L)
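Generating the response indicator from a model of this form uses only the observed, error-prone data, which is what makes the mechanism MAR. A sketch with arbitrary coefficients and stand-in observed variables:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Stand-ins for the observed data: covariate L1, surrogate exposure B, surrogate outcome Z.
L1 = rng.normal(size=n)
B = rng.binomial(1, 0.4, n)
Z = rng.binomial(1, 0.15, n)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# logit Pr(R = 1 | Z, B, L) = g0 + g1*Z + g2*B + g3*L1; the model depends only on
# observed quantities, so (Y, A) are missing at random. Coefficients are arbitrary.
g0, g1, g2, g3 = -1.2, 0.8, 0.5, 0.2
R = rng.binomial(1, expit(g0 + g1 * Z + g2 * B + g3 * L1))
```

The intercept g0 plays the role of the n_0 parameter in the simulations, controlling the expected size of the validation subset.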
Scenarios differ by sample size n, the presence of outcome misclassification, the prevalence of the potentially misclassified outcome (via l_0), the associations between the exposure and outcome on the one hand and their respective misclassified versions on the other (via a_11 and b_11), the outcome model intercept b_0, the conditional log-OR c, and the size of the validation subset (via n_0). Based on an iterative Monte Carlo integration approach, 27 we specified c so as to keep the target marginal log odds ratio at −0.4.

Estimators
We considered five estimators of the OR for the marginal exposure-outcome effect: a crude estimator (labeled Crude) that ignores both confounding and misclassification of any variable, a misclassification-naive estimator (labeled PS) that addresses confounding through IPW, a complete case analysis (CCA) in which IPW is applied only to the subset of subjects with validation data, the Gravel and Platt estimator (GP) that ignores exposure misclassification, and the method proposed in this article (labeled IPWM). Both GP and IPWM are implemented using the R function mecor::ipwm, 28,29 which in the simulation settings considered uses iteratively reweighted least squares via the stats::glm function for maximum likelihood estimation. GP coincides with the approach of Gravel and Platt where it concerns point estimation, but the two differ in the construction of confidence intervals. Unlike Gravel and Platt, 14 we used a non-parametric rather than a semi-parametric bootstrap procedure for estimating standard errors and constructing confidence intervals, because semi-parametrically generating response indicators would require modelling of (or making additional assumptions about) the missing data mechanism. In particular, to obtain a bootstrap dataset, we defined the record of a unit as their observed data and response indicators, imposed a uniform distribution across all records in the original dataset, and drew independently as many records from this distribution as the total number of records in the original dataset. For all methods and each original dataset, we drew 1000 bootstrap datasets for variance estimation and the construction of percentile confidence intervals.
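The non-parametric bootstrap just described resamples whole records uniformly with replacement and takes percentiles of the resampled estimates. A generic sketch, in which a simple log-odds estimator stands in for the IPWM log-OR:

```python
import numpy as np

def bootstrap_percentile_ci(records, estimator, n_boot=1000, seed=0):
    """Resample records (rows) uniformly with replacement and return a
    95% percentile confidence interval for estimator(records)."""
    rng = np.random.default_rng(seed)
    n = len(records)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)   # draw n records with replacement
        stats[b] = estimator(records[idx])
    return np.percentile(stats, [2.5, 97.5])

# Toy usage: a marginal log-odds as a stand-in estimator; in the simulations each
# 'record' would carry the observed data and response indicators of one unit.
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.3, 2000)
ci = bootstrap_percentile_ci(y, lambda r: np.log(r.mean() / (1 - r.mean())))
```

Because records are resampled as a whole, the procedure makes no assumption about how the response indicators were generated beyond i.i.d. sampling of units.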
All estimators are based on a function of the estimated outcome probability P_1 in the exposed group and the estimated outcome probability P_0 in the unexposed group. However, since P_1 and P_0 may take a value of 0 or 1, the crude odds ratio [P_1/(1 − P_1)]/[P_0/(1 − P_0)] need not exist. In contrast to what is often (implicitly) done in simulation studies, i.e. studying the properties of the estimators after conditioning on datasets where [P_1/(1 − P_1)]/[P_0/(1 − P_0)] is defined, we first define P*_1 = (P_1 s + 1)/(s + 2) and P*_0 = (P_0 s + 1)/(s + 2) for a large positive number s (here set to 10^6) and then regard [P*_1/(1 − P*_1)]/[P*_0/(1 − P*_0)] as the estimator of the OR for the exposure-outcome association. This ensures the estimator is always defined and effectively shrinks the outcome probabilities towards 0.5 and the OR towards 1 (online Supplementary Appendix II).
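The shrinkage transformation can be sketched as:

```python
def shrunken_or(p1, p0, s=1e6):
    """Map each outcome probability p to (p*s + 1)/(s + 2), pulling 0 and 1
    slightly toward 0.5 so the odds ratio is always defined."""
    q1 = (p1 * s + 1) / (s + 2)
    q0 = (p0 * s + 1) / (s + 2)
    return (q1 / (1 - q1)) / (q0 / (1 - q0))
```

For interior probabilities the result is practically indistinguishable from the plain odds ratio, while boundary values of 0 or 1 yield a finite, well-defined estimate.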
For PS and CCA, we used a logistic regression of B and A, respectively, on covariates L_1 through L_10 as main effects to estimate the propensity scores. Taking the crude OR for the association between B and Z (PS) or A and Y (CCA) over the data weighted by the reciprocal of the propensity scores provided an estimate of the target OR. R code for the methods GP and IPWM is given in online Supplementary Appendix III.

Results
The treatment assignment mechanism detailed above resulted in average exposure rates ranging from 17% to 51%, whereas average outcome rates ranged from 3% to 22%. Across all simulation studies, the average outcome rate ranged from 6% to 18%. Across all simulation studies with exposure misclassification, exposure and joint misclassification rates ranged from 16% to 33% and from 2% to 6%, respectively. Approximately 16% to 32% of subjects were allocated validation data.
The results on the performance of the various methods in simulation studies 1-9 are provided in Table 5 (see Supplementary Table S.4 for the results on all scenarios). Note: scenarios indicated with 'a' have n = 10,000, those with 'b' have n = 5000 and those with 'c' have n = 1000. As expected, Crude, PS and CCA clearly showed bias with respect to the target log-OR of −0.4. The bias associated with restricting the analysis to records with validation data is likely brought on to a large extent by collider stratification, with R acting as the collider here (cf. Figure 1). Both Crude and PS indicated a null effect, as one would anticipate in view of the marginal and L-conditional independence of B and Z implied by the simulation set-up. The empirical coverage probabilities, although low for both estimators, ranged from similar to substantially larger for PS as compared with Crude. Paralleling this, Crude, whose (implicit) propensity score model is inherently at least as parsimonious, yielded similar to smaller empirical and sample standard errors as compared with PS. With the average fraction of subjects with validation data being as low as 16% (in scenarios with low n_0) to 32%, it is unsurprising that CCA was subject to the largest degree of variability.
The results for the IPWM approach are generally favourable for large samples and in line with its theoretical (large sample) properties. For scenarios with smaller samples (scenarios 1c, 2c, 4c, 6c and 9c in particular), however, we observed considerable bias (see Supplementary Appendix S.4). Comparing CCA with IPWM, we note a strong linear association between the methods in terms of the absolute within-method differences in estimated bias between scenarios of size 10,000 (scenarios labeled 'a') and the respective scenarios of size 1000 (scenarios labeled 'c') (Pearson correlation 0.997). Note that the results for GP and IPWM are identical for scenarios labeled 1-4 and 10-13, since the methods are equivalent in terms of point estimation in the absence of exposure misclassification. In all other scenarios, i.e. scenarios for which GP was not developed, GP performed substantially worse than IPWM. The non-zero, albeit relatively small, systematic deviations of the IPWM point estimates from the target −0.4, notably the estimated bias of −0.097 (scenario 2b), may be attributable in part to the outcome being rare (with prevalence ranging from 3% to 8% across scenarios labeled 1-9). This is indicated by the superior performance of IPWM in scenarios where the outcome is more prevalent (cf. scenarios labeled 1b-9b versus 10b-18b, which have prevalence up to 22%). A similar observation was made by Gravel and Platt. 14 The standard errors for GP and IPWM were noticeably higher than those of Crude and PS, which is unsurprising in view of the discrepancies in the number of estimated parameters. As expected, increasing the sample size, the true outcome rate (via b_0) or both led to a decrease in the variability of IPWM (cf. Table 4 and Supplementary Table S.4).
However, despite the large discrepancies between SSE and SE for some scenarios, the empirical coverage probabilities of IPWM were close to the nominal level of 0.95, except for scenarios 1c, 2c and 4c, where we observed considerable bias.

Discussion
The analysis of epidemiologic data is often complicated by the presence of confounding and misclassification of exposure and outcome variables. In this paper, we propose a new estimator of the marginal causal odds ratio in the presence of confounding and joint misclassification of the exposure and outcome variables. In simulation studies, this weighting estimator showed promising finite sample performance, reducing bias and mean squared error as compared with simpler methods. The proposed IPWM estimator is an extension of the inverse probability weighting estimator recently proposed by Gravel and Platt (GP), which only addresses misclassification of the outcome. 14 IPWM and GP are (mathematically) equivalent when the exposure is (assumed to be) measured without error.
Like the Gravel and Platt approach, IPWM relies on estimates of sensitivity and specificity or positive and negative predictive values for the misclassified variables. In this paper, we used an internal validation approach in which a portion of subjects receive error-free ('gold standard') measurements on the outcome, the exposure or both. However, we anticipate that in some settings the likelihood may not be fully identifiable from the data at hand. In these settings, it may be possible to incorporate external rather than internal information on the misclassification rates, possibly through a Bayesian approach using prior assumptions about misclassification probabilities. When validation data are external, however, it may be necessary to assume misclassification to be independent of covariates L, because external studies seldom consider the same covariates as the main study. 30 External validation approaches also require the assumption that the misclassification parameters targeted in the validation sample are transportable to the main study.
In the absence of internal and external validation data, it is possible to conduct a sensitivity analysis within the weighting framework. Formula (8) for the weights can readily be used in a sensitivity analysis in which the terms describing the distribution of the true exposure and outcome variables in relation to the observed data (positive and negative predictive values) serve as sensitivity parameters. The models for the predictive values can take complex forms, however, which complicates the analysis and the presentation of results.
If internal validation data are available, the subjects with validation data need not form a completely random subset. The proposed method, IPWM, was developed under the assumption that validation data allocation occurs in an "ignorable" fashion. 23 In practice, researchers may have limited control over the validation data allocation mechanism. For instance, it is conceivable that individuals with specific indications (e.g. with a particular realisation of L, B or Z) are practically ineligible to be assigned a double measurement of the exposure (A and B) and outcome (Y and Z). We simulated data such that subjects have validation data on both the exposure and outcome variables or on neither. Although this may greatly simplify analysis and enhance efficiency, in practice it is not necessary to assume that this condition holds; the estimator also allows validation subjects to receive only the double exposure or only the double outcome measurement. An interesting scenario is where subjects have validation data on at most one variable, i.e. on the exposure variable or the outcome variable but not both. In this case, valid estimation would require additional modelling assumptions; for example, the error-free outcome variable cannot then be regressed on the error-free exposure variable.
To accommodate settings where validation data allocation is not completely at random, we deviated from the semi-parametric bootstrap procedure for variance estimation proposed by Gravel and Platt. The non-parametric procedure we used instead requires fewer assumptions regarding the validation subset sampling procedure and showed good performance in our simulations.
Whilst we have discussed under what conditions the proposed method consistently estimates or at least identifies the target quantity, the assumptions may be untenable in particular settings. In particular, an infallible measurement tool for the exposure and outcome that can be applied to a subset of the data need not always exist. Robustness to deviations from infallibility is an interesting and important direction for further research. This is especially relevant where there exists considerable uncertainty about the tenability of the assumptions that is difficult to incorporate in the analysis. An obvious and flexible alternative to IPWM is to multiply impute missing values, including absent measurement error-free variables, before implementing IPW (MI + IPW). Although MI + IPW and IPWM may be comparable in terms of their assumptions, it is yet unclear how they behave under assumption violations such as misspecification of the outcome model.
An advantageous property of MI + IPW is that it can easily accommodate missing covariate values. Other alternatives that can accommodate missing covariates were recently developed by Shu and Yi. 31 Their proposed weighting estimators simultaneously address confounding, misclassification of the outcome (but not of the exposure) and measurement error in the covariates under a classical additive measurement error model. The methods can be implemented using validation data or repeated measurements and use a simple misclassification model (in which the outcome surrogate is independent of the exposure and covariates given the target outcome) that is suitable for performing sensitivity analyses.
Another interesting area for further research is where the researchers do have control over who is referred for further testing by the assumed infallible measurement tool(s). An obvious choice is to adopt a completely at random strategy (simple random sampling). However, other referral (sampling) strategies exist and it is not clear what strategy leads to the most favourable estimator properties for the given setting.
In summary, we have developed an extension to an existing method to allow for valid estimation of a marginal causal OR in the presence of confounding and a commonly ignored and misunderstood source of bias: joint exposure and outcome misclassification. The R function mecor::ipwm has been made available to facilitate implementation. 28,29

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: RHHG was funded by the Netherlands Organization for Scientific Research (NWO-Vidi project 917.16.430). The views expressed in this article are those of the authors and not necessarily those of any funding body.

Supplemental material
Supplemental material for this article is available online.