Computing fairness: ethics of modeling and simulation in public health

The field of public health increasingly utilizes computational models. In this context, computer scientists are confronted with moral dilemmas like those around modeling the distribution of scarce resources. However, there is a lack of research on the ethical implications of computer modeling and simulation (M&S). In this paper I aim to show that taking a transdisciplinary ethical perspective is useful when analyzing these implications. The practice of modeling geospatial distribution of automated external defibrillators for sudden cardiac arrest treatment is used as a case study. It is shown that there exists no consensus on what theory of justice should underlie choices in computer M&S of public health resources, and that professionals struggle with building equity considerations into their models. The example highlights new ethical consequences arising at the nexus of public health and M&S. Computer models and simulations are not morally neutral, but have the effect of making those involved in their creation more responsible for making just choices. For some moral dilemmas, such as those related to distributive justice, there may be no correct solution that can be readily modeled. Promoting professional responsibility through a code of ethics will not help prescribe a right course of action in these situations. I suggest therefore that procedural justice and deliberation with a range of stakeholders are needed to take ethical considerations into account “by design” when developing computer models and simulations for public health policy. Future research should reflect on the content and practical procedures for these deliberations.


Introduction
Computer modeling and simulation (M&S) has been called the third paradigm in research. 1 It came into existence after theory and experiment but before the advent of big data. As a new scientific paradigm, computer M&S has implications for how we see the world and for the nature of scientific knowledge, i.e., for the philosophical areas respectively known as ontology and epistemology. In the past decades, authors have studied these philosophical meanings of computer models and simulations. [2][3][4][5] However, as Durán notes in his overview chapter on the ethics of computer simulations, there is a striking lack of research about the social and ethical implications. 3 The necessary role for ethical reflection is apparent from the definition of computer simulations provided by Frigg & Reiss. 6 They state that in a narrow sense, simulation can refer to the use of computers to solve problems that cannot be solved with traditional analytical methods, and in a broader sense, to the process of “constructing, using and justifying” these computational models.
Justification of these models is needed not just in epistemic terms, but also from an ethics perspective. The foundations for the wider field of computer ethics (or “information and computer ethics”) are being built, 9 but only a handful of philosophers have studied the ethical implications of computer M&S specifically. [10][11][12][13][14] Sceptics might reject the view that the ethics of M&S comprises a topic in itself and rather see it as a subdiscipline of computer ethics or professional ethics more generally. Nonetheless, there may be ethical concerns that arise exclusively from the use of computer models and simulations. 3,15 Especially in healthcare, such ethical concerns are poignant because they can have direct implications for human health. 16 An increasing number of computer models and simulations are developed for the efficient design of healthcare systems. 18 However, the use of computer models in the field of public health (which focuses on populations rather than individuals) means that M&S professionals need to deal with inherent concerns about social justice. 19,20 An example of a classic moral dilemma in public health planning involves the trade-off between maximizing healthcare efficiency versus providing people with equal access to care. For instance, locating an HIV/AIDS clinic in a city might save more lives than placing it in a rural area, but would create unequal chances for people to benefit from this clinic. 20 Similar distributive justice dilemmas are encountered by M&S professionals in various (public health) contexts, e.g., when aiming to optimize spatial allocation of humanitarian aid, 21 primary schools, 22 general hospitals, 23 or urban green spaces. 24 To illustrate the dilemma, I will use the example of spatial modeling of automated external defibrillator (AED) placement for emergency treatment of sudden cardiac arrest (SCA).
This example, described later in this paper, is taken from experience working within an SCA research consortium.
In this article, I reflect on the case of AED placement modeling by using insights from the developing fields of public health ethics 19 and information and computer ethics. 9 I aim to show that new ethical consequences arise at the nexus of public health and computer M&S, and that taking a transdisciplinary perspective is useful when analyzing these consequences. The structure of this paper is as follows: in Section 2, the case of AED distribution is described. I show that there is no consensus on what theory of justice should underlie choices in computer M&S of AED placement and that professionals struggle with how to build equity considerations into their models. In Section 3, new ethical concerns are discussed that result from the use of M&S for public health planning. Computer models and simulations are not morally neutral, but have the effect of making those involved in their creation more responsible for making just choices. Section 4 reflects on approaches used to address ethical concerns, including codes of conduct. I suggest that ethical concerns need to be taken into account “by design” when developing computer models and simulations for public health policy and that this requires procedural justice and deliberation when dealing with resource constraints. In Section 5 some concluding remarks are made.
Of note is that additional justice concerns might arise in our case, namely those related to the data from SCA victims that are used as input for constructing models. The use of these data may exacerbate existing social justice concerns, because: (1) problems with data quality might lead to inappropriate disadvantages for certain groups when biased models are built on factually wrong assumptions; and (2) challenges in safeguarding privacy of patients can give rise to fairness concerns when personal data are differentially secured across institutions or jurisdictions, or when data breaches lead to discrimination. [25][26][27][28][29] However, data quality and privacy concerns are not specific to M&S, and in this article I focus on ethical consequences that are inherent to the design of the computer models.

Modeling distribution of public health resources: the case of AED placement

SCA is a medical condition in which the heart stops functioning acutely. This leads to death within minutes if left untreated. Global average survival in 2010 was only 7%, and around 20% of all natural deaths in Europe are due to SCA. 30 Often the event is caused by cardiac arrhythmias (irregular heartbeat) and in those cases, the patient's life can be saved by defibrillation, which is the application of an electric shock to restore the normal heart rhythm. An AED is a portable medical device that can diagnose cardiac arrhythmia and resuscitate the patient by delivering the electric shock. When placed in public areas, AEDs may also be referred to as public access defibrillators. These devices employ visual and audio commands easily understandable for laypersons, who in most jurisdictions do not require specific training to operate an AED. Increasingly, public health organizations around the world are creating smartphone applications that alert “citizen responders” of a nearby cardiac arrest and guide them to the location of the AED closest to the victim. In countries that have AEDs, these are generally placed in public spaces (e.g., schools, shopping malls, airports) and often funded through subsidies from governments or charities. 31 In the Netherlands, for instance, the Heart Foundation provides a discount for communities when they purchase a “neighborhood AED”.
In recent years, many researchers in the field of operations research and management science have used M&S to study the planning of emergency medical services (EMS). The increased interest in EMS is potentially fueled by the increasing availability of EMS data and the push toward increased efficiency in healthcare, including emergency services. 26 The specific topic of simulating and modeling AED placement is newer, and less research has been conducted on it (some examples are references 31-35 below). Publicly funded AEDs are often placed without knowledge of SCA incidence in the area, leading to ineffective spatial distribution. 33,36 Thus, the “lack of data-driven guidance in choosing AED locations” may lead to low usage of AEDs and create nonoptimal survival chances among SCA victims. 37 The issue of AED placement is timely because at the moment, many countries do not systematically register AEDs, and policymakers may look for guidance when developing new policies around registration and (re-)location of these devices. 31 Although not used in practice yet, computational M&S could provide such guidance. This modeling can be either descriptive or normative, i.e., it respectively aims to simulate or optimize AED placement (although combinations of these strategies are often used). Of interest here are mainly the optimization models, i.e., models that maximize an objective function subject to a set of constraints, since these rest on normative considerations and imply a need to choose among different ethical values. Nonetheless, these AED models often also make “light” use of simulation, e.g., when historical cardiac arrest cases do not follow a normal distribution and simulation is needed as a first step and basis for the optimization model. Here, a comment on terminology is in order. While I refer to M&S in the same breath, these are naturally distinct methods. However, because the division is not always sharp, and because the general issues discussed in this paper will also, perhaps to a lesser extent, apply to moral dilemmas encountered in other types of M&S, I do not distinguish between different kinds of M&S in the remainder of this article.
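To make concrete what such an optimization model involves, a coverage-maximizing formulation of AED placement can be sketched as a maximal covering location problem. The notation below is generic and illustrative, not taken from any of the cited studies:

```latex
% Illustrative maximal covering location problem for AED placement.
% I: historical cardiac arrest locations; J: candidate AED sites;
% N_i: candidate sites within the coverage radius (e.g., 100 m) of arrest i;
% B: budget, i.e., the number of AEDs that can be placed.
\begin{align*}
\max_{x,\, y} \quad & \sum_{i \in I} y_i
  && \text{(maximize the number of covered arrests)} \\
\text{s.t.} \quad & y_i \le \sum_{j \in N_i} x_j \quad \forall i \in I
  && \text{(an arrest is covered only if a nearby site is chosen)} \\
& \sum_{j \in J} x_j \le B
  && \text{(place at most } B \text{ devices)} \\
& x_j,\, y_i \in \{0, 1\}
  && \text{(binary placement and coverage decisions)}
\end{align*}
```

The objective rewards total coverage only; nothing in this formulation distinguishes between arrests in densely and sparsely populated areas, which is precisely where the equity concerns discussed in this paper enter.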
In the following sections, I will reflect on the need for M&S professionals to choose among moral values, by describing how efficiency and equity come into conflict in the AED example (2.1) and by briefly reflecting on what equity means from a public health ethics perspective (2.2).

The dilemma: maximizing survival versus ensuring equity
When a person is struck by SCA, there is an expectation of a quick EMS response. This response can include the use of AEDs by citizen responders or bystanders to treat victims with cardiac arrhythmia. A linear relation exists between response time and survival, making quick action a priority. Intuitively, this response should not only be fast but response time should be approximately equal for everyone. The population is generally considered to be covered when an AED is available within 100 meters of each cardiac arrest. 38 However, when dealing with budget constraints (in the Netherlands, for instance, one AED costs between €1000 and €3000 depending on type), it is practically impossible to have a sufficient number of devices and satisfy both requirements of efficiency and equity. Data-driven planning of AED placement may help to realize efficiency by promoting better overall health outcomes among SCA victims. Efficiency in AED planning prescribes that the number of survivors is maximized, which is often expressed in terms of coverage of the model. In a study where an optimization model of AED relocation in an urban area (in this case, Toronto) was built, optimized placement was estimated to save an additional 10 lives annually, and was calculated to be cost-effective. 31 In this scenario, AEDs would be concentrated in downtown areas where population density is higher. Indeed, access to care for SCA victims generally differs across areas given the lower number of AEDs in less-populated (rural) areas and the longer response times in those parts. 34,39 Thus, coverage-based models of AED placement may lead to inequitable solutions given the preference for urban and densely populated areas, and we see a trade-off between efficiency (maximizing overall survival among SCA victims) and equity (providing equal access to AEDs).
Recognizing this trade-off, scientists struggle to create “fair” models: “As there is no agreement about what equity should entail for a particular system, it is difficult to include fairness considerations during the planning of the system. In addition, several groups of stakeholders will usually have different objectives, and improving a single perspective can lead to bad solutions from others' points of view.” 40 The consensus in the literature about how to include equity in AED location optimization (and more generally in EMS planning) is that there is no consensus. Researchers in this field have commented that there seems to be no universally accepted method to address the trade-off between efficiency and equity, and no common fairness criterion. 41,42
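The trade-off can be made tangible with a deliberately simple sketch. The place names and numbers below are invented for illustration; the point is that a pure efficiency criterion, under a tight budget, concentrates every device in the dense area:

```python
# Toy illustration (hypothetical data, not from any cited study) of how a
# purely coverage-maximizing AED placement concentrates devices in dense areas.

# Candidate AED sites: (site_id, area, expected cardiac arrests covered per year)
CANDIDATES = [
    ("downtown-1", "urban", 9),
    ("downtown-2", "urban", 8),
    ("suburb-1", "urban", 4),
    ("village-1", "rural", 2),
    ("village-2", "rural", 1),
]

def place_aeds(candidates, budget):
    """Efficiency criterion: greedily pick the sites covering the most arrests."""
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    return ranked[:budget]

def coverage_by_area(chosen):
    """Sum the covered arrests per area, to make the spatial skew visible."""
    totals = {}
    for _, area, covered in chosen:
        totals[area] = totals.get(area, 0) + covered
    return totals

chosen = place_aeds(CANDIDATES, budget=3)
print(coverage_by_area(chosen))  # all three AEDs end up in the urban area
```

A model that instead constrained coverage per area, or maximized the worst-off area's coverage (a maximin criterion), would trade some total survival for equity; either way, one of these value choices must be encoded in the model.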

What are ''fair'' model considerations?
The dilemma can be broadly construed as a disagreement between utilitarian (i.e., maximizing “the good”) and egalitarian (i.e., promoting equality) perspectives on distributive justice. Many people do not consider full-blown utilitarianism ethical, and generally an egalitarian “constraint” is built in when allocating resources. Contemporary theories of distributive justice are generally grounded in the idea of equality of opportunity, as made famous by philosopher John Rawls in his theory of justice. 43 Extending Rawls's egalitarian theory to the sphere of healthcare, Norman Daniels argues that in a just society, resources should be distributed in such a way that the health of individuals is promoted, given the “special moral importance” of health for protecting opportunity. 44,45 According to Daniels, health inequalities are unjust when they stem from unjust distribution of the determinants of health, which includes distribution of medical devices such as the AED.
A full discussion of theories of justice (for another influential theory, see Nussbaum 46 ) is beyond the scope of this paper, but it should be noted that different theories can lead to different conceptions of equity, making it a complex concept. The World Health Organization defines equity as “the absence of unfair and avoidable or remediable differences in health among population groups defined socially, economically, demographically, or geographically.” 47 The term equity may also refer to equal access to care. However, when equity is interpreted as supporting equal access, this has a not-so-equal implication for our case. Namely, a policy providing equal access to AEDs would ascribe different values to a life saved in densely versus sparsely populated areas. A German modeling study has shown that implementing uniform EMS response times would imply valuing lives saved in nonurban areas up to 2.6 times higher than in urban areas. 48 In another (hypothetical) situation, no one is provided with access to an AED, which leads to an equitable but highly undesirable situation. This shows that a system's equity is independent of its efficiency or effectiveness, and that the latter are also of moral importance.
Another point is that, in order to evaluate the equity of a certain optimization approach, contextual factors need to be taken into account. 49 These can be population-based factors (e.g., by adjusting for SCA determinants such as age or socioeconomic status) or system-based factors (e.g., the availability of lay/citizen responder apps like PulsePoint). 31 Moreover, the consequences of a delay in AED treatment should ideally be taken into account, since the response time is positively correlated with brain injury and thus affects quality of life. 32

Ethical implications of computer M&S
The issues discussed so far resemble a classic distributive justice problem in healthcare. What then, from an ethics perspective, is special about the use of computer M&S for public health? Is it not merely a neutral tool, like other scientific methods, that is applied to a field in which justice concerns exist anyway? In my view, that is not the case. Building on the AED case and on scholarship in the philosophy of technology, I will argue that computer M&S is not a morally neutral enterprise. Instead, computer models and simulations have built-in moral consequences that put the aforementioned fairness concerns in a new light (see 3.1). In the next sections, two of these consequences are highlighted. On the one hand, the use of M&S in public health prompts the programmer to make moral decisions and to reflect on relevant contextual factors (3.2). On the other hand, it makes justice concerns less visible by situating ethical decision-making in the “opaque” realm of computer science (3.3).

Computational models are not morally neutral
To show that computer models are not morally neutral, let us first consider a simple example: the use of scissors. As a tool, household (“all-purpose”) scissors may be used for a variety of aims, such as to cut out paper or fabric patterns, to open plastic packaging, to curl ribbon for gift wrapping, or to physically hurt someone. The specific use determines the ethicality of the action. While cutting another person with scissors is not ethically appropriate, the same would be true of any other sharp tool. However, it is not just about use. Technological artifacts can also have ethical consequences built into them, which manifest in a wide range of uses. [50][51][52] Due to the overlapping blades, scissors are not symmetrical and most pairs are designed for use with the right hand. While left-handed scissors exist, these are less readily available. Thus, by their design, scissors may disadvantage left-handed people, which can have serious consequences. Research has shown that due to a lack of access to left-handed instruments during medical training, left-handed surgeons are more prone to needlestick injuries and have difficulty handling certain instruments during surgery. 53 One in 10 surveyed surgeons even said they would not want to be operated on by a left-handed surgeon themselves.
Like scissors, computer M&S has built-in consequences. As a new scientific paradigm, M&S creates solutions for problems that cannot be solved with traditional methods. 6 This is also the case when used in public health planning. Consider the idea of moving around AEDs to test every possible configuration of their distribution within a geographical region: this is simply not feasible (and probably not desirable). Calculating optimal distribution without a computer is also quite an impractical task. Because of their adaptability, speed and accuracy, computer M&S turns intractable problems into tractable ones. 2 This creates new practical possibilities, and with it, new ethical implications. In the next sections, two built-in consequences of M&S are identified using concepts from computer ethics.

Enhancing human agency
By allowing us to simulate reality and inform public health planning, computational models can help improve human wellbeing. Building computer models, however, implies the need to make design choices. In the case of AED distribution, some of these choices take the form of a moral dilemma. Nussbaum (p. 1007) characterizes moral dilemmas as difficult situations in which “all the possible answers to the obvious question, including the best one, are bad, involving serious moral wrongdoing.” 54 However hard M&S professionals look for a solution to the dilemma of maximizing survival versus providing equal AED access, there may be no right answer. Not choosing is not an option, however, because values like equity or efficiency are necessarily “embedded” by design as model considerations, whether this is done intentionally or not. 50 It should be noted that visualization may help bring out these embedded values, although the extent to which this happens will depend on whether the simulation accurately represents reality. 11,14 When I saw geographical distribution of AEDs visually represented on a screen, it was easier for me to grasp the equity concerns embedded in specific configurations that clustered all AEDs in city centers.
Because they prompt M&S professionals to notice and choose between competing values, computer models and simulations give a more central place to human agency. Computer scientists make choices and enact these on the world, especially when using models to improve the world. We may now know with a high degree of confidence how many lives would be saved in specific resource allocations, e.g., certain AED spatial distributions. Having such knowledge means that computer M&S gives us options and the effect of chance (or what philosophers call “moral luck”) is reduced. In that sense, it changes M&S professionals' relation to responsibility: it makes them more responsible for the outcomes. In this regard, Luciano Floridi (p. 7) has stated that “ICTs [information and communications technologies] are making humanity increasingly responsible, morally speaking, for the way the world is, will and should be.” 9

Creating moral opacity
While in the previous section it was argued that computer models and simulations are making programmers more responsible for design choices, one could say contrariwise that technology reduces agency because it shifts the focus to technological artifacts and “pushes humans away” from sciences. 55 Philip Brey has proposed the term moral opacity to describe the obscuring of ethical problems by operations and technological systems that are “very complex and difficult to understand for laypersons and that are often hidden from view for the average user” (p. 51). 50 Outsiders such as ethicists and public health professionals may not have enough insight into complex computer models to grasp the way in which values are embedded (although proper visualization may help, as discussed above). Computer scientists, on the other hand, may not understand that their models raise ethical questions or what these questions are. This lack of clarity about ethical issues does not mean that people involved in M&S lack responsibility, given that the models remain based on human input and choices (although algorithmic decision-making and machine learning are gradually changing the scope of human control). It does mean that we need to think about exactly which agent is responsible for promoting fairness in public health modeling and about what that entails in practice.

How to compute fairness?
We have seen that the use of computer models and simulations increases responsibility for making moral choices in public health (in our case: ensuring just spatial distribution of AEDs), even though these choices may be hidden from view and not recognized as such. This responsibility is often placed in the hands of M&S professionals, who struggle with how to “model” ethics considerations, especially when these involve moral dilemmas. 56 As a result, for studies related to modeling the distribution of public resources, “very often it seems that computational tractability becomes a given reason for selecting a particular equity criterion” (p. 419, emphasis added). 42 This amounts to injustice because public health resources should not be distributed based on technological principles like algorithm speed, but rather on the basis of medically relevant principles, whether this is saving lives or providing equal access to care. 57 The technical challenge of how to program fairness needs to be preceded by agreement about the right weighing of moral values like efficiency and equity. In what follows, some ideas are sketched on how to address fairness concerns in computer M&S for public health planning. I suggest that scientists who build computer models should be trained to recognize ethical issues but that this pedagogical approach has limits (4.1), and that when they are faced with a moral dilemma, procedural justice requires deliberation with other stakeholders (4.2).

Values and virtues
I have argued that computer models and simulations are making computer scientists increasingly responsible for moral choices in public health. As such, these professionals are involved in the constructing, or design, of the world. 9 Computer scientists can be characterized as “choice architects” who embed certain values when they design models. 50,58 When discussing moral dilemmas related to resource allocation, Nussbaum states that “we will be spurred to use our imaginations, thinking how we might construct a world in which such conflicts do not confront citizens, or confront them as rarely as possible” (p. 1036, emphasis added). 54 Using this imagination requires humanity to think about alternatives and find creative ways in which technology can help address moral concerns. 58 For instance, what would happen to moral dilemmas in spatial modeling of AED distribution when we imagine that AED-carrying drones become the standard? Noting the increased responsibility of computer scientists as choice architects prompts the question (from the philosophical area of “virtue ethics”) about their role in the design of the world and about the kinds of persons these professionals should be. Is there a need to cultivate virtuous programmers to address moral dilemmas in M&S and public health?
Indeed, several authors have proposed that, because designers of computer models and simulations have the responsibility to reflect on values, a professional code of ethics is needed. 10,12,59 Such a collection of values, principles, and guidelines has been created by the Society for Modeling and Simulation International in their “Simulationist Code of Ethics.” Under the heading of professional competence, this code states that ethical conduct requires providing full disclosure of system design assumptions and being explicit about the conditions of applicability of specific models (in prescriptions 2.5 and 2.6). In this way, a code of conduct would be useful to sensitize professionals to the fact that M&S has ethical consequences, by prompting them to make explicit what values are embedded in the design of models. In education, there is also an increased focus on cultivating responsible computer scientists. For example, at some universities in the Netherlands, students in operations research need to submit an “ethics disclosure” with their final thesis, in which they analyze an ethical dilemma related to the created model.
This pedagogical approach to ethical concerns in computer M&S has its limits, however. Cultivating virtue may lead to an individualized ethics where values are stimulated that are of concern only to the community of computer scientists. 9 While Ören states that professional responsibilities require that a researcher acts consistently with the public interest, 10 it is unclear how this duty to the general public can be informed by a code of ethics. Practicing computer scientists seldom consult professional codes and may not feel inclined to follow them. 3 I suggest this may partly be the case because these codes are not helpful for dealing with moral dilemmas in which values conflict. The Simulationist Code of Ethics only recommends to “seek advice from professional colleagues when faced with an ethical dilemma in modeling and simulation activities” (prescription 5.3). The code does not prescribe how to handle competing values, making it insufficiently helpful for addressing the distributive justice concerns that arise in the public health context. As we saw in the case of AED placement, researchers working on these spatial distribution models are often well aware of the moral dilemma but still do not know how to address it, no matter how virtuous they are.

Procedural justice and deliberation
How can M&S professionals incorporate fairness considerations in their models? Knowing that health is of special moral importance (Section 2 of this article) does not tell us how to meet health needs fairly when we cannot meet them all. Different conceptions of equity may all be valid, depending on what reasons are deemed important. Following Daniels, whose conception of procedural justice has been influential in the field of public health, I suggest that what is needed is a fair process for dealing with dilemmas around resource allocation. This is captured in the framework of Accountability for Reasonableness, which holds that in order for decision-making processes to be considered fair they need to satisfy four general requirements. 60 Three of these are conditions for accountability requiring that: decisions and their rationales are made publicly available (the publicity condition); mechanisms exist for challenging priority-setting decisions (the revisability and appeals condition); public regulation is in place to ensure that the other conditions are met (the regulative condition). The remaining requirement (the relevance condition) holds that a reasonable explanation is given for decisions, that is, one that appeals to a comprehensive range of considerations, principles, and evidence that can be considered relevant.
Disclosing what values are embedded in models and what this entails, i.e., mitigating moral opacity, should be a multidisciplinary endeavor. 50 Researchers modeling scarce resources have noted the interplay between different levels of decision-making, stating that “people concerned with operational decisions simply try to optimize their operational objectives under priorities and constraints that have already been fixed on the higher level before.” 61 Therefore, rather than focusing on the individual agency and virtue of computer scientists confronted with design choices, responsibility needs to be shared with a range of other actors, such as ethicists and public health policymakers, at an early stage of development. When a broad variety of stakeholders is included in group deliberations, the quality of the reasoning process involved in decision-making is improved. 62 In addition, adopting a deliberative approach increases legitimacy and acceptance of the choices made. 45 Outlining the specifics of a deliberative procedural justice approach to the ethics of computer M&S is beyond the scope of this article. Insights might be gathered, however, from other areas of science and public health ethics (see for instance the deliberation framework by Bernheim et al. 63 ). Deliberative approaches in clinical medicine and public health may include methods like moral case deliberation, patient participation, and empirical ethics. [64][65][66] Generally speaking, stakeholder deliberation is becoming a priority for science funding organizations, and the European Union promotes a framework called Responsible Research and Innovation (RRI) requiring that societal actors work together during all phases of research to align studies with public values. 67 Some notes are in order about what determines the quality of deliberative processes. First, models should be represented in an accurate manner in order to enable discussions about ethical implications. 3,59,68 Second, deliberation needs to be sensitive to the specific context that models are created for, with different values prevailing in different social, political, or cultural contexts. 58 Third, a focus on procedural concerns should not lead to a neglect of discussions about substantive ethical problems (e.g., the rightness of utilitarian versus egalitarian approaches). It should not just be assumed that there is no right solution, and theoretical reflection should be included, for instance by cooperating with ethicists. 69

Concluding remarks
There is a lack of research regarding the ethics of computer M&S. Especially within the emerging field of public health ethics, there seems to be a blind spot regarding the ethics of information and communications technology (ICT) methods and perhaps technology more generally. In this paper, I showed that new ethical consequences arise at the nexus of public health and M&S, and that taking a transdisciplinary ethical perspective can be useful when analyzing these consequences. The public health perspective adds to computer ethics in the sense that it shows that for some moral dilemmas, such as those around distributive justice, there may be no correct solution that can be readily modeled. Promoting professional responsibility through a code of ethics will not help to prescribe a right course of action in these situations, and instead, deliberation with other stakeholders is needed to ensure procedural justice. Public health ethics may learn from the computer modeling context that tools are never neutral and that ICT methods used in public health planning can have built-in ethical consequences.
A number of questions remain unanswered and more case studies are needed that analyze the tension between efficiency in computer M&S and public values like equity. Generally speaking, broader discussions about ethics of computer M&S are needed, including on the relation between epistemology and ethics. Does epistemic opacity necessarily cause moral opacity, for instance? Debates about the ethics of future innovations in M&S are needed too. The rise of algorithmic decision-making and artificial intelligence may imply a shift from human to artificial agency, and potentially further increases the need for deliberation on embedded values in models. Further work is also needed to map how deliberative procedural justice in the field of M&S should take shape, in terms of stakeholders to be included, and practical processes to be implemented.
There is one final comment I would like to make. Computer M&S is needed to optimize the health benefits of the population. However, the possibilities of models and simulations to improve our world should not lead to an exclusive focus on (fair) model optimization while obscuring other courses of action. Moral dilemmas around resource allocations only arise when budgets are restricted. In the case of life-threatening cardiac arrhythmias that can be treated using AEDs, the special moral importance of health (or here: life) may suggest a more central role for governments in providing funding for AEDs to improve population health. As Daniels has stated, “we protect equal opportunity best by reducing and equalizing the risk of these conditions arising” (p. 141). 70