Function Allocation Considerations in the Era of Human Autonomy Teaming

Function allocation refers to strategies for distributing system functions and tasks across people and technology. We review approaches to function allocation in the context of human machine teaming with technology that exhibits high levels of autonomy (e.g., unmanned aerial systems). Although most function allocation projects documented in the literature have employed a single method, we advocate for an integrated approach that leverages four key activities: (1) analyzing operational demands and work requirements; (2) exploring alternative distributions of work across the person and machine agents that make up a human machine team (HMT); (3) examining the interdependencies between humans and autonomous technologies required for effective HMT performance under routine and off-nominal (unexpected) conditions; and (4) exploring the trade-space of alternative HMT options. Our literature review identified methods to support each of these activities. In combination, they enable system designers to uncover, explore, and weigh a range of critical design considerations beyond those emphasized by the MABA-MABA (“Men are better at, Machines are better at”) and Levels of Automation function allocation traditions. Example applications are used to illustrate the value of these methods for the design of HMTs that include autonomous machine agents.


Introduction
The advent of advanced technologies that exhibit high levels of intelligence and autonomy (e.g., self-driving cars; unmanned aerial systems [UASs]) raises important questions with respect to how to best distribute tasks across people and autonomous elements and what forms of interaction are required to support effective human machine teaming. The range of questions that arise includes the following: (1) do the new, more autonomous technologies enable a reduction in the number of people traditionally required to perform a task (e.g., can you reduce the crew size required to operate an aircraft)? (2) how should tasks be redistributed across the people and autonomous agents in the system? (3) what kinds of interaction with the new autonomous technology will be required to operate safely and efficiently across a wide range of contextual situations? (4) will the people in the system be able to support these interactions within sustainable levels of workload? and (5) what types of human interfaces will be required to support the necessary interactions?
These questions largely fall within the scope of what has traditionally been termed function allocation: a front-end analysis conducted to establish how system functions and tasks should be distributed across people, hardware, and software (MIL-HDBK-46855A, 1999; MIL-STD-46855A, 2011). Function allocation covers human-human function allocation, team design, and human-automation function allocation (Joe, O'Hara, Hugo, & Oxstrand, 2015). It is traditionally conducted as part of the human systems integration (HSI) process used during the design of complex systems (MIL-STD-46855A, 2011). In this paper, we review a range of approaches that have been used to address function allocation questions, with the aim of identifying promising approaches for function allocation decisions when designing systems that incorporate more autonomous technologies. A central question is the extent to which function allocation methods largely developed in the context of more traditional technologies continue to be relevant when faced with technologies that display more autonomy. This is an important question to answer given the need to provide system designers with tools they can use to design effective joint human machine teams (HMTs) that include more autonomous machine agents.
Here, we take a broad view of what constitutes autonomous technologies. We take the position that autonomous technologies are technologies that fall toward the higher end of the automation continuum, in that they automate high-level cognitive tasks (e.g., perception, situation assessment, planning, decision-making) and can operate with some degree of independence (see Calhoun, Ruff, Behymer, & Frost, 2018, for a related definition). Examples of autonomous technologies include systems with physical embodiments (e.g., robots, autonomous vehicles) as well as software systems that perform high-level cognitive functions (e.g., situation assessment, planning). We acknowledge that others have made a sharper distinction between autonomous and automated agents (e.g., Kaber, 2018a), but for the purposes of this article, we treat autonomous technologies as on the high end of the continuum of automation.
We begin with a brief review of the two most prominent traditional approaches to function allocation: MABA-MABA ("Men are better at, Machines are better at"; Fitts et al., 1951) and Levels of Automation (LOA; for example, Endsley & Kaber, 1999; Parasuraman, Sheridan, & Wickens, 2000). Our primary conclusion is that these approaches, while useful in highlighting some important considerations in function allocation decisions, provide limited support to system designers because they do not address a range of additional considerations that are of equal, if not greater, importance when designing an HMT.
We then turn to more recent approaches that we argue can be fruitfully leveraged to support function allocation decisions and HMT design for systems that include autonomous technologies. Their primary advantage is that they broaden the set of considerations that system designers need to take into account in making function allocation decisions. Although most function allocation projects documented in the literature have focused on a single approach, we advocate for an integrated approach that leverages four key activities: (1) analyzing operational demands and work requirements; (2) exploring alternative distributions of work across the person and machine agents that make up an HMT; (3) examining interdependencies between humans and autonomous technologies required for HMT; and (4) exploring the function allocation trade-space. Our literature review identified methods to support each of these activities. We argue that, in combination, these methods enable designers to uncover, explore, and weigh a range of critical design considerations beyond those emphasized by the MABA-MABA and LOA function allocation traditions.
We end with a section that revisits the question of whether autonomous technologies offer unique challenges to function allocation methods. We conclude that while new forms of autonomy may raise unique detailed design challenges, the function allocation methods and frameworks identified through this literature review process provide a solid foundation for making function allocation decisions for these more autonomous technologies.

Traditional Approaches to Function Allocation

MABA-MABA
In 1951, Paul Fitts and colleagues edited a report about how to create an effective air navigation and air traffic control system that incorporated human and machine agents (Fitts et al., 1951). The report included a table that summarized the activities that humans are better at than machines and vice versa. This led to the classic "Fitts List," or "Men-are-better-at/Machines-are-better-at" (MABA-MABA) classification scheme. Since that time, recommendations for incorporating automation into work systems have largely involved allocating functions to either automation or humans based on the capabilities of each. The concept of allocating functions based on capabilities continues to be relevant to the case of autonomous technologies. Clearly, one needs to consider the strengths and limitations of the autonomous technologies (including their level of reliability and the range of situations in which they are likely to operate effectively) when considering the design of the HMT.
However, the MABA-MABA approach has been subject to a variety of criticisms (e.g., Dekker & Woods, 2002; Schutte, 2017). One clear limitation is that lists of what humans versus machines are better at can quickly become outdated as technologies continue to improve. A more fundamental concern is the implicit assumption of MABA-MABA approaches that a machine can be inserted to do something that a human is less good at doing without otherwise changing the nature of the work (the so-called "substitution myth"). However, as Dekker and Woods (2002) point out, inserting any technology into a system fundamentally changes the functioning of the joint system. New tasks are created for the human, who now has to interact with the technology (e.g., entering inputs, engaging/disengaging the automation, monitoring). These new tasks (e.g., monitoring system states and functioning) may, ironically, require what the Fitts report originally stated humans are bad at doing: namely, tasks requiring vigilance and little activity.
Another related criticism of the MABA-MABA approach is that it encourages a technology-centered focus, where functions are allocated first to the automation based on what the technology is capable of doing, then the human operator gets the "leftover" functions. This method of function allocation accommodates the limits of the automation, but not the human, which can lead to performance problems. For example, in situations that deviate from the anticipated set of conditions that the automation was designed for (herein referred to as off-nominal conditions), the human must take over when the automation fails, often when the human is already experiencing high workload. This phenomenon has been referred to as "clumsy automation" (Sarter, Woods, & Billings, 1997; Wiener, 1989).
In summary, a clear contribution of the MABA-MABA list approach is that it encourages designers to consider the strengths and limitations of both humans and technology elements in a system when making function allocation decisions. This consideration remains important in design of HMTs that include more autonomous machine agents. However, as discussed above and further elaborated below, designers require additional guidance to enable them to avoid the "trap" of focusing too much on what the technology can do (leaving the human with the "leftover" tasks), and to encourage them to explicitly consider how the introduction of the new autonomous technology is likely to alter the details of the work of the people in the system.

LOA Frameworks
Another prominent approach to function allocation draws on LOA frameworks (e.g., Endsley & Kaber, 1999; Parasuraman et al., 2000; Sheridan & Verplank, 1978). LOA frameworks incorporate taxonomies that specify which aspects of cognitive performance are being addressed (e.g., gathering the information, interpreting the information, generating solutions, deciding on action, taking an action) and the level of automation presumed. For example, a technology's LOA can range from fully manual (everything done by the human), through automatic information gathering while the human identifies and implements the solution, automated solution generation while the human selects among options, and automated solution selection and execution while the human monitors system functioning and intervenes only when something goes wrong, to fully automated, where the technology operates without human involvement.
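The structure of such a framework can be sketched as a small taxonomy. The sketch below is illustrative only: the level names compress the continuum just described into five coarse steps, and the per-stage profile loosely follows Parasuraman et al.'s four-stage model; it is not a definitive encoding of any published LOA scheme, and the flight-deck profile is hypothetical.

```python
from enum import IntEnum

# Illustrative levels of automation, ordered from fully manual to fully
# autonomous. Published taxonomies (e.g., Sheridan & Verplank's 10 levels)
# are finer grained than this five-step compression.
class LOA(IntEnum):
    MANUAL = 0               # everything done by the human
    INFO_GATHERING = 1       # automation gathers information only
    SOLUTION_GENERATION = 2  # automation proposes options; human selects
    SELECT_AND_EXECUTE = 3   # automation selects and acts; human monitors
    FULLY_AUTOMATED = 4      # automation operates without human involvement

# Parasuraman et al. apply a level separately to each information-processing
# stage, so a design is a profile across stages, not a single number.
STAGES = ("information_acquisition", "information_analysis",
          "decision_selection", "action_implementation")

def describe(profile: dict) -> str:
    """Summarize an LOA profile, one level per processing stage."""
    return ", ".join(f"{stage}={profile[stage].name}" for stage in STAGES)

# A hypothetical flight-deck design: highly automated sensing, mostly
# manual action implementation.
flight_deck = {
    "information_acquisition": LOA.FULLY_AUTOMATED,
    "information_analysis": LOA.SELECT_AND_EXECUTE,
    "decision_selection": LOA.SOLUTION_GENERATION,
    "action_implementation": LOA.MANUAL,
}
```

Representing a design as a per-stage profile rather than a single level makes explicit that, for example, information acquisition can be highly automated while action implementation remains manual.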
Much of the focus of LOA research has been on understanding the impact of different LOA on human situation awareness, workload, complacency, trust, and ability to take over when automation fails (see Endsley, 2018; Kaber, 2018b; Wickens, 2018, for reviews of the state of the art). Although this research has identified useful empirical regularities, there is growing recognition that the range of function allocation design options is larger than what is represented in LOA frameworks, and the design questions that need to be confronted are more fine-grained (Defense Science Board, 2012; Jamieson & Skraaning, 2018; Johnson, Bradshaw, & Feltovich, 2018; Miller, 2018; Sheridan, 2018).
A major concern with the LOA approach as guidance to system designers is that it characterizes tasks at too gross a level of classification (e.g., information integration, decision, action). As a consequence, it limits the range of options for how work might be organized across the human and automated agents. For example, the humans and automated agents might work cooperatively on a task, taking on different portions of the task fluidly as their local relative advantages change (e.g., based on their physical position and visual perspective). The concern is that an LOA approach may promote an overly narrow conception of the range of design options available and the range of design questions that should be explored when addressing a new human-automation interaction design problem (Roth, Depass, Harter, Scott, & Wampler, 2018; Roth & Pritchett, 2018; Smith, 2018).
In summary, the insights gained from empirical research motivated by LOA frameworks clearly are relevant to design of systems that include more autonomous machine agents. The impact of different LOA on human performance (e.g., situation awareness, workload, ability to take over when automation fails) continues to be a critical consideration in making function allocation decisions for HMT. However, the design space of possible ways to share work between human and machine agents is much broader than what is represented in LOA frameworks, including possibilities for adaptive automation where tasks are dynamically reallocated based on the specific circumstances (e.g., based on who is more busy, or who is better able to handle the particular case; Feigh, Dorneich, & Hayes, 2012; Parasuraman, Cosenzo, & De Visser, 2009). Furthermore, as we discuss more fully below, there is a range of other factors that need to be explored in defining and evaluating alternative function allocations and supporting the design of HMT beyond those considered by LOA frameworks and MABA-MABA lists. These include factors such as implications for training requirements for skill development and maintenance; implications for ability to cope with off-nominal situations and the need for resilience in the face of unanticipated conditions; and multiple broader organizational design considerations such as job satisfaction and the need for career development opportunities. For these reasons, there is a need for analytic tools that can better support system designers in identifying and exploring a broad range of considerations in making function allocation and system design decisions beyond those highlighted by the MABA-MABA and LOA frameworks.

Analyzing Operational Demands and Work Requirements
In the preceding section, we argued for the need to broaden the set of considerations that designers use in making function allocation decisions. One critical question that is not traditionally considered in the function allocation literature is: What is the nature of the work to be done, and what makes it challenging? Although function allocation is often thought of as a means for deciding "who does what," an important prerequisite is understanding "what needs to be done" and the associated challenges. Effective function allocation decisions require an understanding of the operational demands associated with the envisioned world in which the new HMT is anticipated to operate. Function allocation schemes need to ensure that the joint human autonomy system can operate resiliently in the face of complex operational demands. It is not enough to define function allocations and HMTs that work well in straightforward, routine cases. It is also critical to ensure that analysts explore and accommodate the more complex, challenging cases that can potentially arise. This includes off-nominal conditions that may stress function allocation (e.g., situations that are likely to exceed the boundaries of the automated agents, or failed sensors that may degrade the ability of an automated agent to perform).
Cognitive task analysis (CTA) and cognitive work analysis (CWA) methods are well suited for identifying and analyzing the full range of demands of the work domain, both in the present environment and in a future envisioned world where new technologies and changes in operations are anticipated (Bisantz & Roth, 2007; Hoffman & Militello, 2009; Woods & Dekker, 2000). Collectively, CTA and CWA provide a variety of specific knowledge elicitation and knowledge representation techniques for identifying and representing domain characteristics and constraints that challenge performance, the knowledge and strategies that enable expert performance, and the factors that contribute to error-vulnerable performance (Bisantz & Roth, 2007; Roth, 2008). This includes techniques for projecting work demands, constraints, and complexities that are likely to arise in yet-to-be-realized future worlds (Dekker, Nyce, & Hoffman, 2003; Dekker & Woods, 1999). These two complementary approaches to leveraging the domain knowledge of experts can ensure that all domain functions are systematically examined and can identify the range of challenging situations that are likely to arise in the future world within which the new autonomous technologies are envisioned to operate.
CTA methods typically leverage knowledge of domain experts to uncover complexities in existing systems and how operational users have adapted to overcome such challenges (Crandall, Klein, & Hoffman, 2006; Hoffman & Militello, 2009). While CTA techniques often use a specific incident to ground the discussion, cognitive probes are used to understand the context in which work occurs, what makes work challenging, where information is obtained, how it is used, and where things commonly go wrong across a range of circumstances. The incident provides a concrete example, but the goal is to identify insights that are relevant beyond the specific time and place in which the incident occurred. Furthermore, hypothetical probes are commonly used to obtain an understanding of how a domain expert anticipates things will change in the envisioned world, what challenges may remain, and what new challenges are likely to emerge. The view of the domain expert provides a critical perspective to be considered in combination with predictions of technology developers and high-level decision makers who are envisioning the future world.
The resulting corpus of cases generated using CTA methods provides insight into cognitively challenging aspects of work such as judgments, perceptual discriminations, assessments, decisions, and collaboration, as well as contextual elements that increase complexity. The cases inform cognitive requirements for effective performance and can serve as an important source of information on what makes the domain challenging that can then be represented in CWA artifacts (Bisantz & Roth, 2007).
CWA is an integrated set of analytic tools intended to represent the cognitive demands of work and the requirements to effectively support work performance (Rasmussen, 1979; Vicente, 1999). CWA arose in response to the challenges of designing complex systems and the need to support performance in unanticipated conditions. Rasmussen and colleagues argued that one way to better support performance in unanticipated situations is to design user interfaces and related support systems (e.g., training, procedures, team design) based on a functional analysis of the work domain, called a work domain analysis, that identifies the purposes of the system (domain goals) and the functions available to achieve those goals, irrespective of particular situation or agent (Bisantz & Burns, 2008; Naikar, 2011, 2013; Naikar, Pearce, Drumm, & Sanderson, 2003; Roth & Bisantz, 2013).
Work domain analyses are often conducted using an abstraction hierarchy (AH) representation of the goals, constraints, and functional means available to achieve the goals at different levels of abstraction. The highest level specifies functional purposes (the primary goals the domain is intended to achieve). The next level specifies values and priorities as well as constraints and restraints. These collectively serve to guide and constrain performance. The next level specifies the functions that need to be accomplished to meet the functional purposes, independent of any technology assumptions (often called the purpose-related functions). The next level, called physical functions, describes the functions in terms of the physical systems and processes that accomplish the higher level functions. Finally, the lowest level specifies the physical forms or objects that achieve the physical functions.
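The five levels and the means-ends links between them can be captured in a small data structure. The sketch below is a minimal illustration with invented node names; it is not the representation used in any of the analyses discussed here, and real AHs allow richer many-to-many means-ends links than this fragment shows.

```python
from dataclasses import dataclass, field

# The five AH levels, ordered from most abstract to most concrete.
LEVELS = ("functional_purpose", "values_and_priorities",
          "purpose_related_function", "physical_function", "physical_form")

@dataclass
class Node:
    name: str
    level: str
    achieved_by: list = field(default_factory=list)  # means-ends links downward

    def add_means(self, child: "Node") -> "Node":
        # In this simplified sketch, means-ends links connect adjacent levels.
        assert LEVELS.index(child.level) == LEVELS.index(self.level) + 1, \
            "means-ends links must connect adjacent levels"
        self.achieved_by.append(child)
        return child

# Hypothetical fragment; node names are invented, not from the FVL analysis.
mission = Node("accomplish mission", "functional_purpose")
safety = mission.add_means(Node("maximize crew survivability", "values_and_priorities"))
evade = safety.add_means(Node("evade threats", "purpose_related_function"))
maneuver = evade.add_means(Node("flight control system", "physical_function"))
maneuver.add_means(Node("rotor assembly", "physical_form"))
```

Traversing the `achieved_by` links downward answers "how is this goal achieved?"; walking them upward answers "why does this component exist?", which is the core reasoning an AH supports.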
The AH has been used to reveal domain complexities and demands that challenge performance, whether of humans or autonomous agents. For example, Bisantz et al. (2003) used an AH to highlight cognitive challenges associated with operating a next generation Navy combat ship. Their analysis highlighted functions that simultaneously influenced multiple higher level goals, creating potential goal conflicts that needed to be managed (e.g., ship maneuvering could restrict use of some weapons systems and could create goal conflicts between the need to sense undersea targets and the need to manage acoustic signature). Revealing domain complexities not only pointed to challenges for attempts to automate some tasks but also to display design requirements to enable the distributed operators responsible for different functions to recognize and respond to potential goal conflict situations.
Work domain representations can also be used to highlight differences between the current operational environment and the envisioned world in which the new forms of autonomy are anticipated to operate. We used this approach in a recent project exploring function allocation options for the U.S. Army's next generation rotorcraft, the Future Vertical Lift (FVL), whose envisioned missions include managing sophisticated UASs. As part of the project, we developed an AH that highlighted the new challenges associated with the envisioned world in which the FVL is anticipated to operate (Ernst et al., 2019). Figure 1 shows an excerpt of the AH we created. Functions with gray borders represent functions that we anticipated would be more demanding in the envisioned world. Functions with black borders were largely new functions beyond what helicopters in current operations engage in. Note that we explicitly included autonomous agents (i.e., UASs) in the AH, as these represented systems that pilots would explicitly need to interact with to achieve mission objectives (Mazaeva & Bisantz, 2007).
The AH provided an analytic tool for analyzing and documenting which functions are new and which are changed from prior Army helicopter operations, and how those changes are likely to create new challenges (for human or machine). The AH highlighted new anticipated demands (e.g., detecting and responding to new types of threats), new anticipated functions (e.g., engaging in electronic warfare, which was not previously required), and increasing demands associated with existing functions (e.g., the faster speeds of FVL will complicate aviation and navigation functions, as will the existence of near-peer threats that will require more complex maneuvers to mask, elude, and engage the threats).
Laying out the new demands served to highlight both additional stresses that were likely to affect human performance as well as complexities that were likely to challenge the ability of new forms of automation (e.g., automated flight systems; sophisticated UAS) to operate under these more demanding conditions. Thus, the AH brought into bold relief a range of complexities that could affect performance of both the human and automated elements of the HMT that would need to be considered in evaluating alternative function allocation options across the human and machine elements of the team.
The Navy Combat Ship and the FVL examples illustrate the kinds of additional insights that can inform function allocation decisions beyond the factors traditionally considered by either the MABA-MABA or the LOA approaches. Specifically, these examples highlight the need to uncover domain complexities that can arise to challenge performance of the human and/or automated agents in the system. These domain complexities are critical to identify in developing and testing alternative HMT function allocations. Identifying complexities that can challenge performance is relevant to design of any HMT, but especially important for systems that include new forms of autonomy whose boundaries of competence may not yet be fully specified.

Exploring Distribution of Work Across Person and Machine Agents
Another important element in function allocation is to explore alternative options for distributing work across the HMT, which may include multiple people and/or multiple autonomous machine agents, and their implications for joint HMT performance in both routine and off-nominal conditions. We review approaches that have been developed to represent and explore the implications of alternative options for distributing work, as well as newer methods that have been proposed to foster designs that better enable fluid redistribution of work to accommodate changing demands. These methods have tended to grow out of the CWA tradition.
Investigating Implications of Alternative Function Allocation Options

CWA methods have been used to explore alternative distributions of functions, both across people and between people and automated (and autonomous) elements of a system. One recent example is the work of Feigh, Pritchett, and Kim (Feigh & Pritchett, 2014; Pritchett, Kim, & Feigh, 2014a; Pritchett, Kim, & Feigh, 2014b), who used an AH representation to explore how functions could be distributed across human and automated agents under different assumptions of air transport flight deck automation. Figure 2 shows an AH they developed that represents the descent phase of an air transport flight. This representation illustrates the distribution of functions across human agents (shaded boxes with thick borders) and automation (shaded boxes with thin borders) for a highly automated futuristic flight deck that automates communication management (partially), trajectory management, and aircraft control. Pritchett et al. (2014a) used the AH analysis to explore implications of alternative function allocations. The AH revealed differences in:
• The coherence of function assignment across human and machine elements of a system (i.e., is the person assigned a related set of tasks that form a coherent whole?),
• The degree of authority-responsibility mismatch that can result (does the person have ultimate responsibility for the actions of the autonomous machine agent with little opportunity to influence its performance?), and
• The resulting cognitive demands imposed on the humans in the system relating to monitoring and taking over from the automated elements of the system when they were outside their boundaries of competence.
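Comparisons of this kind can be illustrated with a toy allocation check. The functions, agents, and the simple coherence and mismatch measures below are invented for illustration; they are not Pritchett et al.'s actual metrics.

```python
# Toy evaluation of a function allocation along two of the dimensions above:
# coherence (does each agent hold a related cluster of functions?) and
# authority-responsibility mismatch (is a human responsible for functions
# the automation actually performs?). All names are hypothetical.
FUNCTION_GROUPS = {
    "plan descent": "trajectory", "monitor trajectory": "trajectory",
    "handle radio calls": "communication", "log clearances": "communication",
}

def coherence(allocation: dict, agent: str) -> float:
    """Fraction of the agent's functions that belong to its largest group."""
    groups = [FUNCTION_GROUPS[f] for f, a in allocation.items() if a == agent]
    if not groups:
        return 1.0
    return max(groups.count(g) for g in set(groups)) / len(groups)

def mismatches(allocation: dict, responsibility: dict) -> list:
    """Functions performed by automation but for which a human is responsible."""
    return [f for f, a in allocation.items()
            if a == "automation" and responsibility[f] != "automation"]

# A hypothetical allocation that splits both function groups, giving each
# agent an incoherent mix and leaving the pilot responsible for everything.
allocation = {
    "plan descent": "automation",
    "monitor trajectory": "pilot",
    "handle radio calls": "pilot",
    "log clearances": "automation",
}
responsibility = {f: "pilot" for f in FUNCTION_GROUPS}
```

Here `coherence(allocation, "pilot")` is 0.5 (the pilot holds one trajectory and one communication function) and `mismatches(...)` flags both automated functions, making the mismatch concern concrete.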
Other CWA representations besides the AH have been used to explore alternative function allocations. In particular, the second phase of CWA, called Control Task Analysis, is specifically intended to represent what needs to be done to achieve domain goals, independent of how it is done or by whom (Naikar, Moylan, & Pearce, 2006). Output representations include Contextual Activity Templates and Decision Ladders. These have been used to explore the benefits and drawbacks of alternative options for distributing work across people and machine agents.

Ernst et al. (2019) used contextual activity templates to represent the different phases of FVL missions that will need to be supported and the activities that need to be accomplished within those phases. These included the pre-mission, en route, engagement, and exfiltration portions of a mission. The analysis included activities that occur in Army helicopter missions in today's environment and are expected to also occur in the FVL envisioned world (e.g., route planning, mission briefing, and rehearsal) as well as new activities that will need to be performed by the FVL crew (e.g., preplanning and preprogramming the UAS portion of the mission; in-flight modification of multiple UASs' flight parameters and/or their sensing parameters in response to situation changes).
Contextual activity templates provide the foundation to analyze and allocate functions across individuals within a platform, across platforms, and across people and automated elements (e.g., automated route planning tools; automated flight capabilities; automated control of UAS flight path). They can also be used to explore the possibility of shared functions (e.g., between humans and automated support systems) as well as dynamic reallocation of functions (e.g., between the pilot and a nonflying crew member as their workload shifts; across platforms as one aircraft becomes incapacitated or needs to depart the area). Figure 3 provides an excerpt of a contextual activity template developed for FVL that highlights the potential for task sharing across human and automated systems (Ernst et al., 2019). In this figure, each activity is represented by a rectangular box that spans the mission phases where that activity is most likely to occur. The lengths of the arrows correspond to the span of mission phases where the activity could possibly occur. A box's color indicates which agent has some capability to perform the activity: black boxes indicate that only the automation has the capability to perform the task; white boxes indicate that only the human does; and boxes with gradient shading indicate that both the automation and the human have the capability. The three left-most columns provide a traceable link back to the AH functions that correspond to the specific activities represented in that row.
The figure highlights that there are some activities that are expected to be allocated to automation (e.g., detecting signatures of enemy threats from networked sensors), some that are expected to be allocated to the human pilot (e.g., determining the best maneuver to evade a threat), and some that might be performed by either the human or machine agents in the HMT or a combination of the two. An example of the latter is deciding and directing the sacrifice of a UAS in support of manned aircraft survivability.

Jenkins, Stanton, Salmon, Walker, and Young (2008) used abstraction hierarchies and contextual activity templates to document the functions associated with helicopter mission planning and the opportunities for distributing work across different positions on the ground versus in the air. Similarly, Stanton, Harris, and Starr (2016) used CWA artifacts to explore the potential to move from a two-pilot commercial flight deck to a single-pilot flight deck by offloading the nonflying pilot's tasks to a pilot on the ground who was responsible for supporting multiple aircraft.
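The underlying structure of a contextual activity template, activities crossed with mission phases and annotated with agent capability, can be sketched as a small lookup table. The phase and activity names below are invented for illustration and do not reproduce the actual FVL template.

```python
# Minimal sketch of a contextual activity template: rows are activities,
# columns are mission phases, and each cell records which agents have some
# capability to perform the activity there. All names are hypothetical.
PHASES = ("pre-mission", "en route", "engagement", "exfiltration")

cat_template = {
    "route planning":          {"pre-mission": {"human", "automation"}},
    "threat detection":        {p: {"automation"} for p in ("en route", "engagement")},
    "evasive maneuver choice": {"engagement": {"human"}},
    "UAS retasking":           {p: {"human", "automation"} for p in PHASES[1:]},
}

def capable_agents(activity: str, phase: str) -> set:
    """Who could, in principle, perform this activity in this phase?"""
    return cat_template.get(activity, {}).get(phase, set())

def shared_activities(phase: str) -> list:
    """Activities where task sharing between human and automation is possible."""
    return [a for a, cells in cat_template.items()
            if cells.get(phase, set()) >= {"human", "automation"}]
```

Querying `shared_activities` for each phase surfaces exactly the candidates for shared or dynamically reallocated functions, which is the design question the template representation is meant to support.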
Huddlestone, Sears, and Harris (2017) used AHs and event sequence diagrams to examine the implications of going from a two-pilot to a single-pilot commercial aircraft by shifting some functions currently supported by the second pilot (the pilot not flying) to ground stations. The study compared the AH for a traditional two-pilot commercial aircraft with a second, revised AH in which a ground station was added. The analyses revealed that many of the functions of the pilot not flying could be shifted either to automated systems or to a ground pilot located in a ground station responsible for supporting multiple aircraft. Note that the authors avoided the "substitution" myth, recognizing that the reallocation of tasks across individuals and automated systems would result in the emergence of new tasks and new information requirements. The analyses identified new functions (e.g., the pilot flying communicating with the pilot on the ground) as well as the need for new display and information systems (e.g., ground station aircraft systems displays; enhanced vision systems) when moving to a single pilot in the aircraft.

Tokadlı and Dorneich (2018) used AHs and decision-action diagrams to explore the distribution of functions between mission control and space crews during space operations, and how these functions would be disrupted by communication lags associated with Beyond Low Earth Orbit missions. They used the analyses to identify how delayed communication would affect the ability of mission control to support the space crew in real time, which functions would be affected, and implications for design requirements for an automated cognitive assistant to support the space crews.

Accommodating Fluid Distribution of Work
Recent work by Naikar and her colleagues has emphasized the importance of designing systems that enable more fluid distribution and redistribution of work to accommodate changing demands (Naikar, 2018; Naikar & Elix, 2016; Naikar, Elix, Dâgge, & Caldwell, 2017). The idea is to analyze and design systems to support the functions that individuals and automated agents could, in principle, take on, rather than focusing exclusively on identifying and supporting a particular assignment of functions to individuals and automated agents.
Empirical studies of high-performing teams reinforce the value of designing for more flexibility in role allocation. Studies have shown that while team members may have formally defined roles and command structures, in practice the allocation of tasks and leadership roles is more fluid, responding to the local demands of the situation (Bigley & Roberts, 2001; Militello, Sushereba, Branlat, Bean, & Finomore, 2015; Rochlin, La Porte, & Roberts, 1998). These findings highlight the importance of not overconstraining function allocation in team design, enabling team members to "self-organize," adapting the team structure as needed to accommodate shifting demands (Naikar et al., 2017). The importance of designing for more flexibility in role allocation is also supported by research on adaptive automation, which argues for the benefits of automated and autonomous technologies that can change their behavior in response to changes in situation characteristics and operator needs (Byrne & Parasuraman, 1996; Feigh et al., 2012).
Naikar and colleagues (2017) propose a method they call work organization possibilities analysis to analyze and design for what people and automated agents can do rather than what they are formally assigned to do. The first step of work organization possibilities analysis is to identify the physical and social constraints that place clear limits on which functions individuals and/or automated agents can take on. For example, timing requirements, policies, or regulations may constrain how functions can be allocated. Once these limits are identified, one can delineate the set of functions that individuals and automated agents could each, in principle, take on. The goal is then to design all aspects of the system, from the team structures, to the training, to the hardware and software elements, so as to enable flexible function allocation across team members, including automated agents.
The objective of work organization possibilities analysis is to enable the team to dynamically reallocate tasks among themselves in response to local demands, expanding opportunity for effective performance in both routine and novel situations. Naikar et al. (2017) report that they have used the work organization possibilities analysis to support design for a future maritime surveillance aircraft.
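The first step of work organization possibilities analysis, identifying hard constraints and then delineating which agents could in principle take on each function, can be sketched computationally. The sketch below is illustrative only (it is not Naikar et al.'s actual tooling), and the agents, functions, and constraint rules are invented for the example.

```python
# Illustrative sketch: enumerate which agents could, in principle, take on
# each function once hard constraints (policy, timing, capability) are applied.
# Agents, functions, and constraints here are hypothetical.

AGENTS = ["pilot", "ground_operator", "auto_agent"]
FUNCTIONS = ["route_planning", "threat_evasion", "sensor_monitoring"]

# Hard constraints expressed as predicates that rule out agent/function pairs.
CONSTRAINTS = [
    # Hypothetical policy: only a human on board decides evasive maneuvers.
    lambda agent, fn: not (agent == "auto_agent" and fn == "threat_evasion"),
    # Hypothetical timing limit: communication lag rules out the ground operator.
    lambda agent, fn: not (agent == "ground_operator" and fn == "threat_evasion"),
]

def possible_allocations(agents, functions, constraints):
    """Map each function to every agent not excluded by a hard constraint."""
    return {
        fn: [a for a in agents if all(c(a, fn) for c in constraints)]
        for fn in functions
    }

possibilities = possible_allocations(AGENTS, FUNCTIONS, CONSTRAINTS)
# threat_evasion is constrained to the pilot; the remaining functions stay
# open to flexible, situation-driven reallocation across all three agents.
```

The output is the design envelope rather than a fixed assignment: everything inside it is a candidate for dynamic reallocation, which is precisely what the team structures, training, and interfaces must then be designed to support.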
In summary, the design community needs methods that support systematic exploration of the task demands and performance implications of alternative function allocations across people and automated agents. CWA methods have been successfully used to explore implications of alternative function allocations beyond the workload and situation awareness implications that have been the primary focus of LOA methods. This has included examining implications for the coherence of tasks assigned to particular agents (to avoid assigning people the "leftover" functions), implications for authority-responsibility mismatches, and implications for new tasks that emerge as a consequence of the function allocation (e.g., new monitoring tasks; new communication tasks).
It is also important to design for fluid distribution and redistribution of functions across the people and machine agents that make up the HMT. This includes incorporating adaptive automation that can take on and give back tasks depending on situational factors, as well as accommodating the ability of human team members to fluidly redistribute tasks among themselves. The need to support fluid task distribution is particularly important when HMTs include highly autonomous machine agents, as experience has shown that highly autonomous agents have the potential for brittle performance in situations beyond their bounds of competence (Casner, Hutchins, & Norman, 2016; Endsley, 2017; Sarter et al., 1997). The goal is to enhance the ability of the HMT to respond adaptively in the face of unanticipated conditions for more resilient performance.

Analyzing Interaction Requirements Between (Person and Machine) Agents
A question closely related to how best to distribute work across the human and machine agents that make up an HMT is how to anticipate and support the detailed interaction that will be required between the agents to maximize the joint performance of the HMT. As discussed earlier, insertion of new technology inevitably changes the nature of the work (Dekker & Woods, 2002). New autonomous technology may change how individuals operate and interact with others (both other people and machine agents). It may reduce workload by eliminating or facilitating some tasks, but it may also introduce new tasks (e.g., having to monitor and interact with the autonomous technology). In particular, the literatures on distributed cognition and on the requirements for maintaining common ground reveal that changes in the distribution of work across team members may create new demands on teamwork, such as new requirements for communication to maintain a shared understanding of the situation and of each other's activities and intentions (Hollan, Hutchins, & Kirsh, 2000; Hutchins, 1995a, 1995b; Klein, Feltovich, Bradshaw, & Woods, 2005). This makes it critical to examine in detail the interaction requirements for effective joint HMT performance.
The review identified multiple tools that have been used to analyze the detailed HMT interaction required for effective joint HMT performance. For example, Pritchett, Kim, and Feigh (2014a) used a table format (Table 1) to itemize the detailed human flight crew and automation tasks that result from a function allocation for a futuristic, highly automated flight deck that automates communication management (partial), trajectory management, and aircraft control. This includes new tasks that emerge for the flight crew related to monitoring and controlling the automation (e.g., confirm target altitude and speed; monitor waypoint progress).
CWA decision ladders have also been used to illustrate the distribution of cognitive work and the detailed requirements for interaction that arise under different autonomy assumptions. For example, Li and Burns (2017) used decision ladders to model the distribution of cognitive work and interaction requirements for automated financial trading applications varying in degree of automation. Figure 4 provides one example. It illustrates the allocation of cognitive tasks to human (unshaded) and automated (shaded) agents in the case of a highly automated financial trading application under routine operation. As can be seen, most of the cognitive activities were automated, with the human operating at the goal-selection level. Interestingly, Li and Burns also explored how the distribution of cognitive work would shift when unanticipated (nonroutine) conditions arose. This shift is illustrated in Figure 5, where the human now needs to take on more of the cognitive tasks.
Others have used CWA representations, including decision ladders, to identify interaction requirements between the human and automated agents in the HMT and the implications for information, display, and control requirements that enable the people in the system to effectively coordinate with the automated agents. For example, Bisantz et al. (2003) used an AH, a set of decision ladders, and a series of cross-linked matrices to derive design recommendations with respect to automation requirements, human roles, information needs, and display concepts for a U.S. Navy ship combat command center. Scott and Cummings (2006) used decision ladders to depict the information and display requirements for controlling multiple heterogeneous unmanned vehicles. Johnson, Bradshaw, Feltovich, et al. (2014) offer another approach for analyzing and representing the detailed interaction required for effective HMT. Their approach, called coactive design, focuses on analyzing and designing human-automation systems as a joint system that involves interdependent roles between the human and automated agents. Coactive design grows out of the principles and theories laid out by Klein, Woods, Bradshaw, Hoffman, and Feltovich (2004), which emphasize the need to shift from conceptualizing function allocation as deciding what should be automated to thinking about how to support teamwork among all of the participants in the system, both humans and automated agents, and how to make automation more of a "team player" (Woods & Hollnagel, 2006).
Johnson and colleagues (Johnson et al., 2018; Johnson, Bradshaw, Hoffman, Feltovich, & Woods, 2014) argue that all "intermediate levels" between no automation and full automation are really about joint activity, and that what matters is coordination and mutual support of work. An analogy they give is of two individuals playing a duet: their behavior during the duet differs from how they would play individually, because they operate in an interdependent manner.
Johnson, Bradshaw, Feltovich, and their colleagues have developed interdependency analysis tools that represent the human and automated agents, the work to be performed, and the relationships between the agents throughout the work. In analyzing interdependencies, the authors consider not only the new tasks that emerge when automation is introduced (e.g., new monitoring tasks) but also the ways that each agent can support the other in performing its tasks (e.g., a robot may need the help of a human to navigate around certain obstacles). They point out that human-human teams often offer proactive support to one another. For example, team members can provide warnings ("watch your step"), offer to take on tasks ("Do you want me to pick up something for you at the store?"), and point out unexpected events ("It has started to rain"). In the same way, when considering options for function allocation, the intent is not merely to assign a function to a primary agent, but also to consider how the other agents (person or machine) can provide support. For example, even if a task could be performed without support from other agents, if support would enhance efficiency, reliability, or resilience, it is explicitly considered as a design option.
Interdependency analysis provides improved guidance to designers for how to structure the human-automation architecture to enhance reliability and efficiency of the HMT. In addition, it specifies display and control requirements to facilitate coordination across the agents. As an example, the technique was used to analyze a robot navigation task, where the robot was able to avoid obstacles but was less than 100% reliable in detecting whether an obstacle was passable. The analysis revealed that reliability could be improved by exploiting the ability of people to recognize whether obstacles are passable. To provide this support, the human needed a display of information to be able to observe the obstacle and also to know that the robot needed support. The human also needed a way to indicate to the robot whether the obstacle was passable or not.
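The chain from interdependency analysis to display and control requirements can be made concrete with a small data sketch, loosely patterned on the robot-navigation example above. The record structure and field names are our illustration, not Johnson et al.'s actual notation or tooling.

```python
# Illustrative interdependency record: for each task, the performing agent,
# which other agent can support it, and the observability/directability needs
# that support implies. Structure and fields are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Interdependency:
    task: str
    performer: str
    supporter: Optional[str] = None
    observability_needs: List[str] = field(default_factory=list)  # what the supporter must be able to see
    directability_needs: List[str] = field(default_factory=list)  # how the supporter can act on the performer

analysis = [
    Interdependency(
        task="judge whether obstacle is passable",
        performer="robot",
        supporter="human",
        observability_needs=["camera view of obstacle",
                             "signal that the robot needs support"],
        directability_needs=["control to mark obstacle passable / not passable"],
    ),
    # A task performed without support generates no interface requirements.
    Interdependency(task="drive around obstacle", performer="robot"),
]

# Display and control requirements fall out of the supported tasks:
requirements = [need for row in analysis if row.supporter
                for need in row.observability_needs + row.directability_needs]
```

Here the three derived requirements correspond directly to the paragraph above: a view of the obstacle, an indication that the robot needs help, and a means for the human to tell the robot whether the obstacle is passable.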
To summarize, in designing new autonomous technologies, it is important to understand how people and machines can best contribute to the joint task and design the overall system to support the interdependencies required for the HMT to coordinate effectively. This includes examining how the new technology will affect the cognitive and teamwork processes of the people in the system, analyzing how the human and automated elements will interact in different situations (e.g., routine cases and more complex, off-nominal cases), and looking for opportunities to leverage the capabilities of the different elements of the HMT to maximize the reliability and resilience of the joint system. It also includes identifying the specific information, display, and control requirements to facilitate effective HMT interaction. All of these are examples of considerations that go beyond the factors traditionally considered in MABA-MABA and LOA analyses.

Exploring the Function Allocation Trade-Space
Fundamentally, function allocation is part of a larger systems engineering process that inevitably involves consideration of trade-offs (Holness, Shattuck, Winters, Pharmer, & White, 2011). These trade-offs may relate to concerns such as the maturity of the required technology, trade-offs across different HSI elements (e.g., personnel selection and training costs), and operational considerations (e.g., performance in routine situations vs. rarer, less well-understood conditions). Consequently, an important element of function allocation analysis is exploring the broader trade-space and documenting associated trade-offs. Feigh, Pritchett, and Kim (Feigh & Pritchett, 2014; Pritchett et al., 2014a, 2014b) provide a useful synthesis of the factors designers should consider in making human-automation function allocation decisions. These include the following:
• The need for coherence in the set of tasks that humans are assigned (i.e., avoiding "leftover" allocation);
• The need to avoid workload spikes as well as excessively low workload over long durations;
• The need to avoid situations where people are assigned responsibility for system outcomes but the machine agent is assigned authority to automatically take action (i.e., avoiding authority/responsibility mismatches);
• The need to avoid overly rigid (and unworkable) function allocations that lead to workarounds and disuse;
• The need to avoid brittle automation that is unreliable and/or fails abruptly outside its boundary conditions; and
• The need to avoid automation that produces excessive and untimely interruptions.
They examined these factors in the context of an air transport flight deck, comparing different degrees of automation during the descent phase of flight. One of the clear findings from their analysis is that function allocation inevitably involved trade-offs; there was no single best design.
Hoffman and Woods (2011) articulated additional dimensions along which the design of joint cognitive systems can vary and the trade-offs entailed. Considerations they raise that are relevant to evaluating alternative joint HMT designs include the following:
• The optimality-resilience of adaptive capacity trade-off, which refers to the fact that optimizing for performance under routine conditions can lead to systems that are brittle when faced with unanticipated situations that pose different demands. Conversely, designing for resilient adaptation in the face of surprise can introduce inefficiencies in routine situations;
• The efficiency-thoroughness of situation plans trade-off, which points out that thorough preplanning that considers an extensive set of contingencies may increase the number of anticipated situations that can be handled, but at the cost of becoming cumbersome. Because thorough plans increase the number of assessments and decisions needed to execute them, they are less efficient when put into action and may be more difficult to modify in real time;
• The revelation-reflection on perspectives trade-off, which points out that bringing in multiple alternative perspectives, for example by drawing on diverse sources, can enhance performance but incurs the costs of sharing and coordinating those perspectives;
• The acute-chronic goal responsibility trade-off, which refers to the fact that standing organizational goals such as safety and equity can conflict with short-run goals such as the need to meet immediate deadlines or budgets; and
• The concentrated-distributed action trade-off, which refers to the choice between distributing autonomy, initiative, and authority across agents, groups, and organizational echelons versus concentrating them in a single center of control (a particular agent, group, or echelon). Distributing autonomy, initiative, and authority can increase the range of effective action, but at the cost of an increased need for coordination to manage synchrony and coherence.
The above dimensions identified by Hoffman and Woods (2011) represent additional trade-offs that need to be balanced as part of the analysis, design, and evaluation of alternative joint HMT designs.
Researchers from the sociotechnical systems literature have further expanded the set of factors that should be considered as part of a function allocation trade-space to encompass larger job design, organizational, legal, financial, and regulatory considerations (Baxter & Sommerville, 2011; Challenger, Clegg, & Shepherd, 2013; Grote et al., 2000; Waterson et al., 2015; Waterson, Gray, & Clegg, 2002). For example, Grote, Ryser, Wäfler, Windischer, and Weik (2000) developed a method called KOMPASS that defines criteria for what they call "complementary" function allocation. The KOMPASS method specifies criteria to guide function allocation and system design at three levels. At the human-machine level, criteria include flexibility of allocation and process transparency (e.g., opportunities for developing and maintaining accurate mental models of the process to support comprehension and predictability). At the human work task level, criteria include task completeness, variety, and opportunities for learning and personal development. At the work system level, criteria include autonomy of work groups and polyvalence of work system members (i.e., the proportion of tasks individual team members have the skills to perform, irrespective of whether they are formally assigned those tasks).
Another function allocation method that comes out of the sociotechnical systems literature is the work of Waterson and colleagues (2002). They describe a systematic method for exploring alternative function allocation options across people and automated agents that considers multiple sociotechnical factors beyond the technical capabilities of the people and the automation. These include the following:
• Organizational issues (e.g., organizational structures and norms);
• Cultural/environmental issues (e.g., legal requirements; political considerations);
• Resource issues (e.g., development time and cost concerns);
• People issues (e.g., knowledge and skill requirements; social acceptability);
• Task issues (e.g., physical and cognitive demands; speed and accuracy requirements);
• Job design (e.g., control, accountability, motivation, satisfaction); and
• Technology issues (e.g., feasibility and cost of automation; maintainability; reliability; level of performance).
The main points articulated by Grote and colleagues (2000) and Waterson and colleagues (2002) were recently reiterated and expanded upon by Challenger et al. (2013). As others have, they emphasize the trade-offs that need to be considered in making function allocation decisions. They argue that function allocation decisions should form an explicit stage early in the systems design process and involve multiple stakeholders, including the individuals who commission, sponsor, design, implement, and (especially) use the system. They suggest that the allocation process should be evidence based (e.g., supported by empirical data or the results of dynamic simulations) and should utilize iterative methods (e.g., developing and testing rapid prototypes). Finally, they argue that function allocation options should be framed in the language of risk. They point out that organizations are well versed in conducting risk analyses and framing design decisions in terms of associated risk. They argue that it is important to frame function allocation decisions similarly, presenting the likely risks involved in different function allocation choices. These include the risk that the proposed function allocation will prove unworkable, requiring redesign (programmatic risk), as well as the risk that the proposed function allocation will endanger personnel and equipment once implemented (safety risk).
To summarize, function allocation inevitably involves trade-offs, making it important to explore the trade-space of alternative function allocation options. We are currently developing a trade-space framework for evaluating alternative function allocations based on an integration of the literature summarized above. Ultimately, the overriding concern is the ability of the joint HMT to achieve overall mission effectiveness, placing a premium on understanding how alternative function allocations affect the likelihood that the joint HMT will meet mission effectiveness goals.
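One common way to make such a trade-space explicit is a weighted multi-criteria comparison of allocation options. The sketch below is a hedged illustration of that general technique, not the framework we are developing; the options, criteria, scores, and weights are invented for the example.

```python
# Hypothetical trade-space comparison: scoring alternative function
# allocations against criteria drawn from the factors reviewed above.
# All numbers are invented for illustration.

CRITERIA_WEIGHTS = {"task_coherence": 0.30, "workload_balance": 0.20,
                    "off_nominal_resilience": 0.35, "development_cost": 0.15}

# Higher is better on every criterion (cost is scored as affordability).
OPTIONS = {
    "two_pilot_baseline": {"task_coherence": 4, "workload_balance": 4,
                           "off_nominal_resilience": 4, "development_cost": 5},
    "single_pilot_plus_ground": {"task_coherence": 3, "workload_balance": 3,
                                 "off_nominal_resilience": 3, "development_cost": 2},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores into a single weighted total."""
    return sum(scores[c] * w for c, w in weights.items())

ranked = sorted(OPTIONS,
                key=lambda o: weighted_score(OPTIONS[o], CRITERIA_WEIGHTS),
                reverse=True)
```

A single number never settles the trade-off; the value of such a table lies in making the criteria, weights, and points of disagreement explicit so that stakeholders can debate them, ideally reframed in the language of risk as Challenger et al. (2013) recommend.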

Summary and Implications
This review provides a broad look at various approaches to function allocation that have been employed to date. One key finding is that the different function allocation methods we reviewed addressed different aspects of the function allocation question and raised different considerations. Collectively, these different methods serve to broaden the set of factors that system designers need to consider in making function allocation decisions, beyond those raised by the MABA-MABA and LOA approaches. Table 2 summarizes considerations highlighted by each method. A conclusion of our review is the importance of taking an integrated approach that leverages four key activities in support of design of human machine systems: (1) analyzing operational demands and work requirements; (2) exploring alternative distribution of work across person and machine agents that make up the HMT; (3) examining interdependencies between human and autonomous technologies required for effective HMT performance under routine (expected) and off-nominal (unexpected) conditions; and (4) exploring the trade-space of alternative HMT options. Our literature review identified methods to support each of these activities. A summary is provided in Table 2.
A second key finding is that while the function allocation methods reviewed were generally developed in the context of allocating functions across people and automated systems with more limited capabilities, the considerations highlighted by each method apply equally well to the design of systems that incorporate more autonomous technologies. This point is underscored by the presence of check marks in every row of Table 2, indicating that each method has some relevance for HMT with autonomous machine agents, including the traditional MABA-MABA and LOA approaches. The claim is supported by the fact that many of the function allocation examples included in the review involved technologies with high levels of autonomy, such as automated flight decks and DARPA robotics technologies. Although autonomous systems raise new detailed design challenges (e.g., the need to design display and communication mechanisms to foster shared understanding between the human and machine agents with respect to the state of the world, the goals to be achieved, and each other's intentions and actions to facilitate coordination of work and mutual support), the fundamental considerations in function allocation remain largely the same. This argument is further bolstered by the fact that function allocation methods have been routinely applied to the allocation of functions across human teams, whose members exhibit high levels of autonomy.
Whether considering traditional automation or more autonomous technologies, the designer must examine the following:
• "What needs to be done," that is, the broad set of functions needed to accomplish domain goals and the factors likely to complicate performance;
• How to distribute the work across agents (human and machine) to maximize joint performance;
• How to develop visualization and control mechanisms that enable the human and automated agents to work effectively as a joint cognitive system (see Roth et al., 2018, for an example); and
• How to foster flexible adaptation of roles and responsibilities as situations change and unexpected conditions arise.
A point highlighted in this review is the need for tools to enable system designers to uncover, represent, and analyze the operational demands associated with the envisioned world in which the new HMT is anticipated to operate. Function allocation schemes must ensure that the joint human-automation system will be resilient in the face of complex operational demands. In particular, the goal is to avoid defining a function allocation that optimizes for a minimal number of people, but breaks down when faced with complex, challenging conditions. To support operations in the context of real-world complexity, it is important to identify and explore off-nominal conditions that may stress function allocation (e.g., situations that are likely to exceed the boundaries of the automated agents, or failed sensors that may degrade the ability of the automated agent to perform). CWA and CTA methods are well suited for identifying and analyzing these types of challenging conditions. Furthermore, it is important to focus designer attention on the detailed interactions between the human and the automation needed for effective performance, and the information and control needs to support those interactions. We reviewed several examples where CWA, interdependency analysis, and co-active design methods were used to explore alternative HMT options and their implications for distribution of task work and teamwork across elements of the HMT.
The review illustrated elements that we believe are important to incorporate in any function allocation analysis:
• Considering the implications of different function allocations for the new functions and changed functions that result from the introduction of new autonomous technologies;
• Exploring and analyzing sources of complexity that may challenge the HMT; and
• Focusing on design strategies that support flexible, dynamic function reallocation across members of the HMT as situational demands shift.
A final point highlighted by the literature review is that function allocation analysis is fundamentally about exploring a trade-space of alternative function allocation options. An effective function allocation method should help to identify and present these trade-offs to decision makers, ideally in the language of risk that is used by program managers and systems engineers in evaluating design options.

Authors' Note
The work was conducted by the authors under a government-funded program. As a work of the U.S. federal government, the content of the article is in the public domain.

Christen Sushereba is a research associate at Applied Decision Science. She received her master's degree in human factors and industrial/organizational psychology from Wright State University in the spring of 2018. Her thesis research focused on evaluating cyber data visualizations from an ecological interface design perspective. In addition to research on function allocation in complex systems, Christen is also researching how to train macrocognitive and perceptual skills using augmented reality.

Laura G. Militello is CEO of Applied Decision Science, LLC, the company she helped found 8 years ago. She applies cognitive systems engineering to the design of technology and training to support decision making in complex environments. She is acknowledged as one of the masters of advanced Cognitive Task Analysis methods and one of the leaders of the Naturalistic Decision Making movement. She is an Associate Editor of the Journal of Cognitive Engineering and Decision Making.
Julie Diiulio is a cognitive designer at Applied Decision Science, a research and development company focused on designing technologies and training to support human decision making in complex environments. With a master's degree in human factors from Wright State University, Julie applies user-centered and user-experience design principles to create designs that support human performance in domains such as healthcare, military, telecommunications, and digital marketing.
Katie Ernst is a human factors engineer with Applied Decision Science. Katie leverages her experience in military operations with methods from human factors and cognitive systems engineering to analyze and design both military and healthcare applications. Prior to joining Applied Decision Science, she served 13 years in the U.S. Air Force in both active and reserve capacities.