Maintaining coherence in the situated cognition debate: what computationalism cannot offer to a future post-cognitivist science

It has been claimed that post-cognitivist approaches to cognition might be compatible with computationalism. A growing number of authors argue that if computations are theorized as non-representational and mechanistic, then many concepts typical of the enactive approach can also be used in computational contexts and vice versa. In this article, we evaluate the solidity and coherence of this potential combination and shed light on some of the most important problems that have been neglected by its defenders. We conclude by arguing that this potential integration between enactivism and computationalism might not be a priori impossible but that, at the moment, it remains problematic at best.


Introduction
Recently, it has been argued that post-cognitivist approaches should not reject the possibility that mental processes are realized by computations instantiated within an organism or, more specifically, by processes in the brain. Villalobos and Dewhurst (2017b, hereinafter ''V&D'') claim that most post-cognitivists' opposition to the idea that cognitive phenomena are brought about by computational processes arises from the assumption that computational processes necessarily involve representational content. As a consequence, because post-cognitivism is, as a matter of fact, non-representational, it opposes computationalism. Building on the work of Piccinini (2008) and Milkowski (2011), V&D argue that the concepts of mental representation and computation are not necessarily linked. The two authors reject the commonly supported view that computational processes can and should be individuated by their content-bearing states. Roughly, their idea instead is that, if the notion of computation is decoupled from any form of representation, then all conceptual and methodological resources of computational theories of cognition become available to post-cognitivists as well. A positive side effect would be that a major segregation within cognitive science in general, and within situated cognition research in particular, could be overcome. This segregation refers to the dispute between those who think of the mind as a computational and representing system on the one side and those who think of the mind as interactive and dynamical on the other. If this maneuver is successful, cognitive researchers would be able to hold a position that might lead to a much stronger unification of cognitive science than the discipline displays today. These prospects are enticing; however, we think that the envisaged position of a computation-friendly post-cognitivism is more problematic than initially acknowledged.
As things stand, we are at odds with V&D's proposal and would like to start a critical and more detailed discussion of their attempt. Below, we present several points showing why computationalism and post-cognitivism cannot be easily integrated. By doing so, we do not intend to claim that such an integration is a priori impossible. Nevertheless, to take V&D's proposal seriously into consideration, certain theoretical and practical issues need to be acknowledged and systematically tackled. To make these issues explicit, we concisely present in section ''Post-cognitivism and computationalism: the usual divide and the envisaged theory integration'' what is meant by ''post-cognitivism'' and then re-draw the usual divide between the latter view and mainstream cognitive science. After that, we reconstruct V&D's position and outline the theoretical posits of their ''non-representational computationalism.'' Once the philosophical position we want to criticize has been plausibly presented, we elaborate in section ''Problems for theory integration'' on the problems that prevent a theory integration of post-cognitivism and computationalism. In section ''Conclusion: maintaining coherence in the situated cognition debate'', we offer a concise conclusion.

Post-cognitivism and computationalism: the usual divide and the envisaged theory integration
What exactly makes it so difficult to bring computationalism and post-cognitivism together? Post-cognitivism is a part of situated cognition research and often refers to theories like enactivism. V&D explicitly argue for a potential combination of computationalism and the enactive theory originally developed by Varela et al. (1991/2017) and advanced by Thompson (2007). A rough and ready exposition of one of the major enactivist claims is that cognition and life share the same organizational principles. This means that, usually, enactivists propose the so-called mind-life continuity thesis (Kirchoff & Froese, 2017; Thompson, 2007). Why is this continuity assumed? Concepts like ''autopoiesis'' or ''adaptivity'' are crucial at this point. Autopoiesis refers to the self-engendering and self-maintaining capacities of (living) systems. A major aspect of self-engendering systems is that their internal processes are subject to ''operational closure.'' Processes of such systems are operationally closed because they are mutually dependent inasmuch as they sustain each other. Those processes are constitutive of the entire system of which they form a part. To protect these processes (or to survive), a system needs to maintain a proper relation with the environment. That is why an autopoietic system, while interacting with the environment, partly creates the niche it lives in. For a living system, its niche is a product of its own activity. This means that this niche is a specifically adjusted space that is coordinated with the autopoietic processes of the same system. Every living or autopoietic system displays some flexible form of interaction with the environment. This interaction is claimed to constitute cognitive states and processes.
By adding the concept of ''adaptivity'' to the equation, living organisms are conceived as being able to develop ''organismic preferences'' and to show a degree of attraction/repulsion toward certain aspects of the environment (Colombetti, 2010). If the interaction of (living) systems with their environment is the basis for understanding the nature of cognition, then we can assume a mind-life continuity. It is important to acknowledge that, despite the notion of operational closure, cognitive processes are relational. They are defined by the kind of interaction a living system has with its environment. This will become important again below.
As V&D rightly highlight, enactivism was established as an approach that opposes computational theories of cognition. The received view of computational theories is that cognition basically is an input-output conversion of acquired information (Ramsey, 2007). Information that enters the system (input) is internally represented and processed, and an adequate behavioral response is generated by the system as output. For proponents of the received view, this conversion of input to output is assumed to be constitutive of cognitive processes. V&D argue for a different view of computation that allows for a form of non-representational computationalism. In this way, they attempt to integrate the computational theory of mind with the new mechanistic approach (Craver, 2006, 2007), while cutting out representationalism. To succeed in this ambitious philosophical enterprise, V&D try to extend the recent account of computation proposed by Piccinini (2015) to the enactive approach to cognition. Unlike better-known versions of computationalism (Fodor, 1981; Pylyshyn, 1984), Piccinini developed a notion of computation that is supposed to be neutral regarding any form of representational content. Computational processes do not necessarily involve representations. They are presented as mechanisms that perform specific functions. This also means that they need not be individuated by their content-bearing states. An important premise for this approach is that computations in the brain are ''sui generis'' (Piccinini & Bahar, 2013). This allows for a broader concept of computation that is specifically brain-related. What really matters in this approach is that questions concerning the nature of content and intentionality (which is quite frequently assumed to imply representational entities) can be separated from those related to the nature of computation.
The theoretical separation between the notions of representation and computation is motivated by the fact that there are two ways of talking about semantics. Piccinini argues that philosophers generally think about (semantic) content in terms of internal states of a system aimed at representing facts and things that are part of the external world. This broad and normative understanding of semantics (a representation needs to be ''right,'' ''accurate,'' or ''appropriate'') is known as external semantics. Internal semantics, on the contrary, consists in the idea that a computer performs operations that follow rules determined by the hardware and the software of the program. The computer, in this sense, makes no reference to facts in the external world. Piccinini argues that these two notions of semantics are extremely different: ''Internal semantics is no help to the supporters of the semantic view of computational individuation, for they are concerned with individuation by external semantic properties. This is because the semantic view is largely motivated by computational explanations of mental states and processes, which are widely assumed to be individuated by their (external) contents'' (Piccinini, 2008, p. 215).
As a consequence, computational processes or computing mechanisms do not give rise to the kind of semantics that is relevant to talk of ''bearing a certain content'' or ''representing x as such'' typical of philosophical debates. Instead, they are conceived as parts of a system that manipulates ''medium-independent vehicles'' (Piccinini & Bahar, 2013, p. 458). This term is crucial to Piccinini's, and therefore also to V&D's, approach. Medium-independent vehicles are understood as entities that carry information to a mechanism (an operational structure). That structure can be realized on different physical substrates. Some optimal examples of vehicles are neuronal spikes and neurotransmitters (dopamine, GABA, cortisol, etc.), since these components pass on relevant information to mechanisms that can manipulate them further. A vehicle, thus, can be thought of as a variable or a value of a variable (Ritchie & Piccinini, 2018, p. 193). The properties of these vehicles can also be described without referring to any specific physical substrate. 1 At this point, the mechanism applies a set of rules to process the vehicle. Importantly, rules, in this context, need to be understood as non-conceptual input-to-output maps (Hutto et al., 2018; Piccinini & Bahar, 2013). Building on Piccinini's hypothesis of non-representational computation, V&D attempt to connect post-cognitivist theories like enactivism with a computational theory of cognition. This envisaged connection suffers from several problems related to the consequences of positing computational properties/entities. These consequences conflict with different aspects of post-cognitivist theories that are often underestimated by the two authors. The section ''Problems for theory integration'' enumerates nine issues with their attempted theory integration.

Problems for theory integration
In this section, we discuss several arguments that, as of yet and from our perspective, make the integration of post-cognitivism and computationalism highly problematic. This integration is unlikely to be achieved even if the latter is theorized as non-representational. The critical points we present below are divided into two categories. The first category includes indirect critique: V&D's proposal is assessed on the basis of implications and consequences that come along with it but have not been acknowledged by the authors. This category concerns issues that are inferentially connected to their major claims; issues that have so far been neither explicitly mentioned nor recognized, but that pose serious tasks in need of attention. As we will see, even before looking closer at the details of the relation between enactivism and computational theories of cognition, we encounter problems with V&D's account that are related to the general debate about the different epistemological positions and the specific developments in cognitive science. The second category includes direct critique, that is, internal problems of V&D's proposal. These problems concern the coherence of the claims they make, the way in which their terms are used, and the premises they hold on to.
The points provided in each category are not equal in terms of importance and philosophical impact. Each category treats the arguments against V&D starting with the less relevant ones and ending with the most significant points. In the first category, we start with critical reasons that are more likely to be rejected, as several premises need to be accepted before these reasons can be conceived as problematic for a non-representational computationalism. Critical points that appear later in the category are, from our perspective, more closely linked to V&D's proposal. However, these critical arguments are still indirect, as they refer to points that are not explicitly addressed by V&D in their article. The same procedure applies to the second category: the later a point is mentioned, the more significant its philosophical impact. These points are specifically related to the internal and external coherence of V&D's assertions.

Indirect critique. Implications and consequences
In this section, we discuss problems that arise if we accept the idea of non-representational computationalism. We focus on possible but critical implications and consequences of that position. Five problems will be discussed. More specifically, some of the potential problems we see in V&D's proposal emerge when the latter is related to important topics such as explanatory pluralism, revolutions in cognition research, the general vocabulary that is used in the context of non-representational computationalism, the scaling-up problem, and the use of differing heuristics in theories like enactivism and computationalism.
3.1.1. Diminishing explanatory pluralism. There are different approaches aimed at understanding what explanatory pluralism (ExP) in cognitive science is. Some of them have considerable overlaps. For example, one attempt presents ExP as the integration of multiple perspectives that have provided explanatory success regarding cognitive phenomena, in the case of situated cognition research (Dale, 2008). A similar idea consists in the integration of different approaches to illuminate a concrete particular (Mitchell, 2002). Other proposals have argued that ExP is first and foremost concerned with scientific multi-level representations of complexly organized phenomena (Dale et al., 2009). In this context, a frequently highlighted aspect is that ExP is about the recognition of existing relations between levels of analysis. These relations are meant to help understand what happens on each level. Importantly, from this philosophical perspective, these relations de facto exist. Such a position thwarts reductionist and eliminativist premises (Looren de Jong, 2001). Close to this interpretation is the assumption that ExP is about the non-redundant use of multiple distinct frameworks to comprehensively account for cognitive phenomena (Carls-Diamante, 2019).
What these different approaches have in common is that they stay vague with regard to what exactly ExP covers. What do we need to be pluralists about? Do we need to allow for every ''framework,'' theory, method, model, and type of explanation that is somehow deployed and developed in the context of (situated) cognition research? Or do we need to be pluralists about specific things, like types of explanation? A common denominator of all the ideas mentioned above is that different types of explanation exist and that they can be concerned with different levels of a system. Mitchell (2002) suggests that this variety is supposed to result in a mutual advancement of explanations and, ultimately, in an amelioration of the understanding of the system under investigation. Types of explanation are different from whole theories. A theory might be liberal enough to allow different explanations to be employed. Other theories might depend on only a single explanation type. First and foremost, we understand ExP as a position in philosophy of science that is about accepting different types of explanation in a specific research branch. The different types of explanation are accepted without knowing whether they will be perfectly compatible with each other or whether they are only sometimes compatible with regard to specific problems. 2 With this function of explanatory pluralism in mind, we turn to V&D's proposal.
By introducing non-representational computationalism and by dovetailing it with a certain mechanistic basis, as V&D do, the scope of mechanistic explanations is increased, as it now also applies to phenomena, levels, and perspectives that are envisaged by post-cognitivist approaches. These perspectives (most prominently on interaction patterns between a system and its surroundings) and levels (usually the whole unit is in focus instead of micro-levels within the unit) are commonly explained in post-cognitivist approaches in a different way than in mechanistic approaches. To explain what happens on such levels, other types of explanation are applied, like dynamical (Beer, 1995; Lamb & Chemero, 2014; Nielsen, 2006) or normative (Casper, 2019; Satne, 2015; Steiner, 2009) explanations. Still others could be used as well, as in the case of etiological (Garson, 2011; Millikan, 1984) or ethological explanations (Kingstone et al., 2008), and so on. These explanations are not depicted by V&D as significant for cognitive science. As a matter of fact, these other options are often taken to be mere tools to find phenomena, which then should be explained with mechanistic explanations (as is sometimes argued in the case of dynamical explanations; Kaplan & Bechtel, 2011).
This explanatory usurpation and attempted unification are at odds with the pluralist stance. Such a position stands in theoretical opposition to how cognitive phenomena are generally approached in scientific practice (Colombo & Wright, 2017).

3.1.2. In support of which revolution? As already pointed out above, one of post-cognitivism's main aims is to avoid the idea that the activity of system-internal entities can generate internal stand-ins of pieces of the environment (or, in the case of B-formatted representations (Goldman & de Vignemont, 2009), states that represent internal conditions of the system itself). On the grounds that representational and computational theories of cognition are theoretically and practically problematic enough to pursue alternative ideas of how cognition is constituted (Casper, 2019; Ramsey, 2007), post-cognitivists argued against the prominent and dominant ''received view'' on computationalism and representations (see Sprevak, 2010, for the ''received view''). Frequently, post-cognitivists sold their arguments and their research branch with a revolutionary flavor: they claim that the received view of computation, hence the major part of contemporary cognitive science, its heuristics, and representational concepts need to be displaced. A ''paradigm shift'' is allegedly in the air.
Whether such a paradigm shift is really within reach is part of the debate about situated cognition. Independently of how that ''paradigm'' issue is judged in detail, the modification of the received view offered by V&D can be interpreted as an avowal to those ''revolutionaries'' that there is something deeply troubling with representationalism in cognitive science and that this trouble cannot be fixed. To be fair, V&D's contribution is surely supposed to be part of the mechanistic revolution that is more silently ongoing (Milkowski et al., 2018) rather than of the enactivist one. However, their modification of computational theories of cognition also supports, in a general sense, post-cognitivist ideas about how cognitive science should proceed. This implies at least two possibilities: (1) The modification of computational theories of cognition as provided by V&D might be just an interim stage toward a comprehensive, non-representational, enactivist cognitive science. On this reading, it is a step that actually supports the enactivist revolution and is the last attempt to save computational theories from the changes that will become necessary for cognitive science (namely, to replace computational theories of cognition).
(2) The modification of computational theories of cognition as provided by V&D consists of the right theoretical adjustments and provides a long-lasting approach to study cognitive phenomena adequately. On this reading, their theoretical adaptation backs the mechanist revolution. However, whether non-representational computationalism supports the mechanist or the enactivist revolution in cognitive science is not yet certain.

3.1.3. A problematic vocabulary. For the sake of V&D's argument, let us suppose that enactivism can accept the overall idea that cognition is brought about by some non-representational and computational mechanisms. What naturally seems to follow is that enactivists might then look back at the rich literature provided by computationalist approaches and perhaps even at cognitivist positions that strongly relied on computationalist concepts. Without any doubt, this would generate more than a bit of confusion among both cognitivists and enactivists. Computational approaches, despite some notable exceptions (Egan, 1995; Fresco, 2014), are connected to a representationalist language (Sprevak, 2010). While it is definitely possible to distinguish, as Piccinini did, an external understanding of semantics from an internal one and thereby decouple the latter from terms like ''representation,'' the history of computationalism was and still is tied to a vocabulary whose key notions are those of ''representation,'' ''content,'' and ''information.'' For an enactivist interested in building a non-representational theory of perception, for example, the vocabulary of computationalism is pure anathema.
However, the point just mentioned does not force us to reject a priori the possibility of an exotic position like ''non-representational computationalism'' or ''computational enactivism.'' As a matter of fact, one merit that can be attributed to V&D is that they might have opened the door for the possibility of hybrid computational/enactive-oriented explanations. While this possibility seems indeed interesting, much more work needs to be done since, besides suggesting that computationalism and post-cognitivism might be compatible, V&D left no hints on how to concretely combine both views, what kind of technical vocabulary should be used when doing so, and how to deal with the critical implications (both philosophical and empirical) of their approach.
Furthermore, besides the opposition between the representational and non-representational vocabulary used in different cognition theories like computationalism and enactivism, there is another aspect that has not been covered by V&D's proposal. This issue concerns the use of normative vocabulary in the context of the (new) mechanist approach and their own non-representational computationalism. Normative vocabulary supported much of the development that took place with regard to enactivism. For example, it is frequently emphasized that ''organisms [cognitive systems] cast a web of significance on their world'' that ''establishes a perspective on the world with its own normativity'' (Di Paolo et al., 2010, p. 38). V&D highlight that, in the context of their research, their talk of enactivism refers to classical autopoietic theory (the so-called Santiago School of Cognition). This approach does not prominently include normative vocabulary as a resource for the analysis of cognitive systems; its inclusion began later (Weber & Varela, 2002). However, since classical autopoietic theory does not include that resource, it falls short of providing solutions to problems connected with the concept of adaptivity that come along only with contemporary enactivism (see point 3.2.2 for a further elaboration). Furthermore, V&D's lack of normative vocabulary also leads to problems in explaining how high-level cognitive states and processes are constituted (see more details in the section ''The scaling-up problem and the missing contributions'') or in accounting for forms of representation that are different from the ones supposed to be realized internally by computational processes (see the section ''Not intrinsically representational does not mean non-representational'').
In light of these open problems, it is then no surprise if, at the moment, enactivists keep relying on disciplines that already present some affinities with their approach or that, at least, can be more easily integrated with it. Examples are dynamical systems theory (Chemero, 2009), ecological psychology (Baggs & Chemero, 2018; Gallagher, 2017; Stapleton, 2016), psychoendocrinology and emotion psychology (Colombetti, 2014; Colombetti & Zavala, 2019), developmental psychology (Di Paolo & De Jaegher, 2012; Gallagher, 2005), different branches of biology and bio-psychology (Thompson, 2007; Weber & Varela, 2002), and so on. It is hard, at the moment, to see how to combine a long history of computationalism, most of the time tied to concepts such as information, representation, and content, with the enactive approach.
3.1.4. The scaling-up problem and the missing contributions. Non-representational computationalism renounces long-assumed advantages over enactivist and similar theories. A frequently assumed major downside of the latter is that they might be able to deal efficiently with ''lower-level cognition'' like motor control and perceptual activities but are not capable of explaining how ''higher-level cognition'' like declarative memory, abstract reasoning, or having beliefs is constituted. Situated cognition researchers and embodied roboticists assume that, for example, stable locomotion can be explained by a patchwork of different sensory-motor feedback loops between the morphology of an organism and its direct surroundings. Such loops are supposed to be organized in a ''decentral'' manner, since there is no entity inside or outside a moving organism that plans and controls the entire execution of such loops. There is nothing that could gather sensory-motor information, process that information, and generate output accordingly in a way remotely similar to the input-output conversion that computationalists assume to be pivotal for cognitive systems (e.g., Brooks, 1991; Pfeifer et al., 2007).
Since these loop-based theories avoid the input-output conversion, computationalists doubt that they can ''scale up'' from lower-level cognition and explain how higher-level cognitive states and processes are brought about. Enactivism is confronted with this ''scaling-up problem,'' since higher-level cognition is assumed to be ''representation-hungry'' (e.g., Clark, 1999, 2001; Edelman, 2003). If a cognitive system is confronted with problems whose solution involves, for example, ''reasoning about absent, nonexistent, or counterfactual states of affairs'' (Clark & Toribio, 1994, p. 419), then some sort of representation will be necessary to function as an off-line stand-in for a situation that is not sensorily present. Cognition researchers who defend computationalist approaches, including non-conventional ones like extended mind theorists, assume that computationalism exceeds the explanatory power of enactivism and post-cognitivism by far, since many more cognitive phenomena and problem-solving capacities of cognitive systems can be explained by positing computational processes that involve representational content of some kind. If computationalists give up on ''representation'' or ''representational content,'' then they trade the problem of explaining how ''representations'' are generated for the scaling-up problem. When they do so, they should be clear about how this problem can be approached from their perspective, which is supposed to be, at the same time, in accordance with already existing enactivist approaches that have different heuristics than the mechanistic ones (see section ''Differing heuristics'' for more details on this topic).
Coming from this direction, we cannot see how V&D's proposal could make a positive contribution to this problem. Conventional computationalist approaches can, from their perspective, explain higher forms of cognition by appealing to the concept of representational content. In contrast, enactivists are confronted with the scaling-up problem, since they seem to lack both terms in their theory (those of computation and representation). However, enactivism gains its explanatory power, also with regard to the scaling-up problem, by turning to normative explanations that focus on norm-guided behavior which brings about high-level cognition. Another possible enactivist solution to the scaling-up problem relies on the embodied and dynamic features of (social) interaction. Human beings, for example, acquire certain cognitive capacities because of their attunement to specific socio-cultural (Hutto & Myin, 2017) and normative practices (Casper, 2019; Rietveld, 2008). Often, what is also emphasized is the importance of concepts typical of dynamical systems theory, such as self-organization, long-term temporally extended activities (Kiverstein & Rietveld, 2018), or our capacities to switch from one context to another so as to re-enact the action possibilities that have been relevant in other situations (Thompson, 2007). The take-home message is that enactivists off-load the work necessary to explain higher-level cognition onto the skilled and dynamical abilities of the individual and the richness of the socio-cultural environment.
Conventional computationalists, instead, reasonably focus on the work performed by mental representations that are carried by some computations in the brain. Ambiguously, V&D place themselves in a middle-way position in which their notion of non-semantic mechanisms does not seem to add much to the enactivist story of how socio-cultural practices and our history of interactions shape the way cognitive systems are constituted. At the same time, their weak computationalist approach does not rely on any notion of representation that is supposed to play a crucial role in cognitivist explanations. In this sense, V&D's proposal does not seem to represent a useful option for either side. It is not clear what the benefits of assuming the existence of non-semantic computational mechanisms are. On the contrary, we have shown that the consequences of their proposal are not appealing, independently of whether we wish to take an enactivist stance toward the nature of mental phenomena or not. This means that, without ruling out the possibility of combining enactivism and computationalism at some point in the future, enactivism so far prefers different types of explanation than computationalist approaches, even when the latter are non-representational as well as mechanistic. There might be (so far unknown) possibilities to combine enactivism and non-representational computationalism. However, as things stand right now, we deem such a combination highly problematic because of the points mentioned above and below.
3.1.5. Differing heuristics. V&D rightly acknowledge that enactivism and post-cognitivist approaches, generally speaking, aim to break with the previous frameworks that have been used to study cognitive phenomena. Indeed, it is possible to identify a group of claims and concepts that explain in which respects enactivism is incompatible with old-school cognitivism. However, among all the ideas defended by enactivists, the authors seem to focus exclusively on the fact that the concept of mental representation is often found inadequate when it comes to explaining cognitive processes. Nevertheless, there is clearly much more to say, since the ''enactivist interpretation [...] suggests a different way of conceiving brain function, specifically in non-representational, integrative, and dynamical terms'' (Gallagher, 2017, p. 161). While V&D rightly focus on non-representationalism, they seem to neglect that enactivism also aims to provide an alternative to a strict localization of cognitive processes, to mainstream definitions of information processing, and to reductionism more generally. Our worry is that, while functional mechanisms might not be tied to a semantic notion of content, their proposal pushes toward an internalist understanding of the mind and toward a cognitivist-friendly conception of information processing.
It seems to follow almost trivially that, if cognition is characterized as a very special form of computing, explaining cognitive processes consists in a complex and sophisticated localization and decomposition of the mechanisms responsible for processing information in one way or another. Yet, as pointed out by Richardson and Chemero (2014), radical embodied approaches, including enactivism, are incompatible with strong decompositional and localizational strategies. Cognition, from the enactive perspective, is instead described as a holistic and world-oriented process that cuts across brain, body, and environment. More specifically, Richardson & Chemero describe the study of cognitive processes in terms of self-organizing processes and interaction-dominant dynamics. By relying on principles typical of dynamical systems theory, enactivism and similar radical embodied approaches resist any individuation of cognitive processes achieved just by looking at some functional mechanisms located within the skull. 3 Even if one claims that the role played by a single computational mechanism is just a small part of a larger process that also includes the body and the environment, a reductionist and brain-bound ontology seems to be an unavoidable consequence. It could be argued instead that physical computations are implemented in a very distributed and non-localizable fashion across and beyond the system. This option would, without any doubt, be more sympathetic to post-cognitivist approaches than their actual proposal. However, such a strategy would be in conflict with the most important principles of mechanistic explanation (Bechtel & Richardson, 1993; Craver, 2007; Piccinini, 2007) on which Dewhurst, Villalobos, and Piccinini strongly rely. Indeed, principles such as decomposition and localization are already implicit in V&D's approach.
Furthermore, the overall conception of the functional mechanism as an ''input to output map'' (Piccinini & Bahar, 2013) seems sympathetic to a potential Sandwich Model of the Mind (Hurley, 1998). In contrast, enactivism generally assumes a strong coupling between perception and action. Assuming that action and perception are continuously coupled forces us to consider cognitive processes as a product of the organism-environment system. It follows that mechanist heuristics cannot properly be applied anymore (for some exceptions, see Gallagher, 2018). Postulating the existence of such cognitive devices would be unacceptable for enactivists, since it implies inserting something into the continuous coupling between action and perception.

Direct critique. Internal problems
In this section, we problematize the coherence of the position presented as non-representational computationalism. Unlike the section ''Indirect critique. Implications and consequence'', the following points refer directly to claims made by V&D and not to possible consequences and implications. As in the section ''Indirect critique. Implications and consequence'', we begin with the least important point, followed by points of increasing theoretical impact. Four points will be discussed. The problems that V&D are invited to tackle cover issues such as the representational status of mechanisms, the mixing of concepts coming from incompatible post-cognitivist theories, the ontic status of mechanisms, and the potential implications of a body-neutral enactivism.
3.2.1. Not intrinsically representational does not mean non-representational. Besides rejecting the notion of representational content, post-cognitivist theories also apply different approaches to explain how cognitive phenomena are constituted by the dynamical interaction patterns between an organism and its surroundings. V&D suggest that a special version of the computational approach can be incorporated into post-cognitivist theories. They follow the contemporary mechanistic account of computation, which argues that ''representation and computation need not be regarded as intrinsically connected to one another. As a result, the enactivist opposition to representational theories of cognition need not necessarily require an opposition to computational theories of cognition'' (Villalobos & Dewhurst, 2017b, p. 118). This is true, but post-cognitivist approaches like enactivism cannot be unconditionally connected with computational theories of cognition even if computational and representational theories are not necessarily related. 4 For enactivists, a more delicate question arises in this context: If enactivism cannot generally oppose computational theories of cognition, since their ties to representational theories of cognition are more frequently dispensable than commonly assumed, then it becomes urgent to ask when exactly computational theories merge with representational theories. A principled opposition evolves into a detailed questioning and tracking of problem-solving strategies offered by computational theories. Such detailed questioning suggests itself since, even if computational theories of cognition are not intrinsically connected to representationalist posits, they can still be connected to them, either most of the time or in specific contexts where such posits can seemingly play a role in explanations of cognitive phenomena.
This is a consequence of the idea that ''computation should not be intrinsically representational, but rather provide a non-representational foundation on which representational account of cognition may be built'' (Villalobos & Dewhurst, 2017b, p. 121).
In theory, the position presented by V&D seems enactivist-friendly. However, their suggestion results in detailed negotiations between enactivists (or autopoietic theorists) and computationalists concerning which phenomena can be explained together and why. Such theoretical alliances will most likely happen in rare situations in which enactivists and computationalists agree on the need to strive for explanatory patchworks to illuminate a shared target phenomenon. In the future, more and more cognitive phenomena might be approached by different amalgamations of explanation types, like patchworks of mechanistic, dynamical, or ethological explanations. Even if the integration of enactivism and non-representational computation via mechanistic explanations were possible, something we are still critical about, it would remain a complicated and rare undertaking.
In addition, computational theories of cognition that are not intrinsically representational can still be conceived of as internalist theories. This means that cognition is assumed to be constituted by states and processes that are localized within a system. V&D are not entirely clear about their position and whether they think computational states and processes are (only) internally or also transcranially realized. If they assume that they are only internally realized and non-representational, then they buy into problems concerning the different heuristics of computationalist and enactivist approaches (see section ''Differing heuristics''), a serious issue that, as we have already pointed out, makes the integration of the two problematic. If they allow for externally realized computation that is non-intrinsically connected to representational posits, then we basically get a version of the extended mind approach (Clark, 2011; Clark & Chalmers, 1998) in which cognitive states and processes are also presented as constituted by ''supersized mechanisms'' (Clark, 2011, pp. 14, 68, 129). Two further possible positions that might help in relating computationalism and enactivism are either (i) proposing that there are non-representational internal as well as non-representational external computations or (ii) claiming that there are non-representational internal computations but also representational external computations. The first option falls prey to the same problems related to the scaling-up problem that we highlighted in section ''The scaling-up problem and the missing contributions''. The second option would surely represent a more solid ground for V&D, since there are already existing proposals that go in a similar direction.
It has been argued elsewhere that the extended mind hypothesis can be decoupled from the representationalist vestiges within it. Following this interpretation of the extended mind, processes are allowed to be representational only if they are external to the bodily boundaries of a cognitive system (Steiner, 2010). External representations (or ''exograms'' (Donald, 1991) like lists, maps, or graphs) are accepted in the enactivist context as crucial for the performance of practices necessary for the emergence of certain cognitive skills. Exograms are different enough from internal representations to be seen as different in kind (Rupert, 2004; Steiner, 2010) and hence as deployable in enactivist research. Claiming that there might be something like non-representational internal computations and representational external computations might present an option for V&D to connect their interpretation of enactivism and ''non-representational'' computationalism, whereby the label ''non-representational'' would only exclude internal representations. Although this position might seem generally possible, it struggles with a problem. Exograms are commonly presented as part of ''cognitive practices'' (Menary, 2007, 2013) that are normatively guided and socially engendered. These norms are vital for the cognitive states and processes of the members of those practices. Even if a ''non-representational'' computationalism allows for external representations, it either needs to show how normatively guided practices and mechanistic computation can be thoroughly connected or it needs to show why external representational computations are again different in kind from the internal and non-representational ones.
Such arguments have not been provided so far. The live question pulsing behind V&D's proposal is what exactly ''non-representational'' computationalism is supposed to provide if the concept of non-intrinsically representational computation is not sufficiently spelled out.
3.2.2. The neglected difference between enactivism and theory of autopoiesis. One central aspect of V&D's proposal relies on the fact that the theory of autopoiesis, originally proposed by Maturana and Varela (1980), has been presented as a mechanistic/functionalistic framework. Whether or not Varela and Maturana used the term ''mechanism'' in a fashion similar to the neomechanistic wave, their argument seems to run as follows: 1. The theory of autopoiesis was originally conceived as a mechanistic framework; 2. Enactivism has been inspired by and still relies on the notion of autopoiesis; 3. Therefore, both contemporary enactivism and the classical theory of autopoiesis are (at least potentially) compatible with the notion of computation developed by Piccinini. As a matter of fact, in ''Autopoiesis and Cognition,'' Maturana and Varela (1980) stated their stance pretty clearly: ''Our approach will be mechanistic: no forces or principles will be adduced which are not found in the physical universe'' (p. 85). With this claim, the two Chilean biologists wanted to be clear that scientific explanations, from the perspective of the autopoietic theory, necessarily need to avoid any form of teleology or, in any case, any assignment of purposes and means to cognitive systems. As Villalobos (2013) rightly explains, from the perspective of the Santiago School of Cognition, teleological descriptions are scientifically ''groundless'' and ''conceptually empty'' (p. 4). While big and small differences between contemporary enactivism and the classical theory of autopoiesis can be highlighted, a huge tension that immediately emerges is that enactivism is instead proposed as an account aimed at providing a naturalization of ''norms'', ''values'', and ''intentionality'' (Di Paolo, 2010, p. 47). Despite the mentioned differences, V&D seem to extend their proposal to both enactivism and the theory of autopoiesis, independent of their dissimilarities (on which we are going to elaborate in this section).
The main aim of their proposal, they claim, is to analyze ''the way enactivism and AT relate to computationalism'' and their potential compatibility with the latter (Villalobos & Dewhurst, 2017, p. 171). 5 We are going to argue that V&D's move of considering enactivism compatible with computationalism only because of its historical roots is unjustified. Importantly, Villalobos has on different occasions argued for the virtues that classical autopoietic theory can offer over the enactive approach (Villalobos & Ward, 2015). We think that the connection with computationalism and the mechanistic approach can be plausible if limited to the early work developed by Maturana and Varela. However, in this specific context (but see also Villalobos & Dewhurst, 2016, 2017a), claims and arguments based on autopoietic theory are automatically extended to the enactive approach. It is worth noting that enactivists spent a considerable amount of intellectual energy explaining and discussing how their approach differs from the autopoietic theory and distancing themselves from its idealistic commitments (Froese, 2011; Froese & Stewart, 2010; Thompson, 2011). The late Varela (2000) himself admitted that the theory of autopoiesis, as originally proposed, fell prey to solipsism and provided too flat a characterization of the environment with its concept of ''structural coupling.'' The co-determination between organism and environment defended by enactivism is instead deeply relational and dynamic. What follows is that the theoretical and empirical tools typical of the enactive research program strongly differ from those of the Santiago School of Cognition, especially if its mechanistic commitments are emphasized. Contemporary enactivism has reformulated and developed the concept of autopoiesis on different occasions (Bitbol & Luisi, 2004).
As pointed out by Di Paolo (2005), an autopoietic system, in order to show proper autonomy (and thus be considered genuinely cognitive), needs to be complemented by some world-involving and adaptive capacities. Adaptivity in this context is defined as ''a system's capacity [...] to regulate its states and its relation to the environment with the result that, if the states are sufficiently close to the limits of its viability, 1. Tendencies are distinguished and acted upon depending on whether the states will approach or recede from the boundary and, as a consequence, 2. tendencies of the first kind are moved closer to or transformed into tendencies of the second and so future states are prevented from reaching the boundary with an outward velocity'' (Di Paolo, 2005, p. 438).
Without the latter, autopoiesis remains an all-or-nothing phenomenon, in which an organism will be resistant to perturbations but not able to show any degree of flexibility or organismic preferences. What follows is that the notion of adaptivity described here relies on ''a broader dynamical systems ontology'' that does not seem to exist in the autopoietic theory (Di Paolo, 2018, p. 21).
Last but not least, another difference that can play a role with regard to the combination with computationalism is the concept of autonomy. From the autopoietic point of view, the simple fact of being autopoietic and operationally closed guarantees a form of autonomy. This is surely not true for enactivism. Operational closure alone does not suffice for autonomy (Di Paolo & Thompson, 2014). A more dynamic notion of autopoiesis (labeled ''Autopoiesis+'' by Di Paolo, 2009) 6 needs to be complemented by some equally dynamic adaptive capacities. Autonomous systems, from the perspective of enactivism, will always act in a future-oriented manner and improve their current condition, which will always be haunted by a ''surplus of signification'' (Varela, 1997). What crucially follows from these differences is that the enactive approach allows for new forms of autonomy to be established over time. On a similar line, Corris and Chemero (2019) claimed that enactivism replaced the notion of autopoiesis with a more general and world-oriented notion of autonomy. The moral of the story is that the notion of autopoiesis used by the enactive approach is surely dynamical-friendly and tends to escape reductionistic and mechanistic principles of explanation. From the enactive perspective, environment and living systems are conceived in a very plastic, bi-directional, and dynamical fashion, while, in the case of the autopoietic approach, Maturana decidedly remains on the side of the cognitive system. V&D do not make most of these crucial differences explicit, especially in relation to the concept of autonomy. However, it is interesting to notice that, in another recent article, they tried to read the latter in computational terms (Villalobos & Dewhurst, 2017a). The example made by the two authors consists in thinking of a Turing machine as a properly autonomous system.
The read/write head of the machine would represent its sensor device, the part of the machine that manipulates the symbols would be its effector device, and the tape would represent its environment. Following their interpretation, a Turing machine can potentially be considered autonomous since it is, in fact, a functionally closed system. Sensors and effectors are coupled in a continuous relation based on the perturbations coming from the ''environment.'' Nevertheless, as we previously pointed out, for enactivists, operational closure does not suffice for autonomy. A Turing machine (or a thermostat, to use another example brought up by V&D) cannot in any respect be considered adaptive, and it surely lives in an environment that is strongly deterministic and that cannot lead to any new forms of autonomy. Again, it seems that V&D's considerations are more in line with the autopoietic theory rather than aimed at grasping the enactive notion of autonomy, although, as made explicit above, they try to bring the two views together. As a matter of fact, the relation between the sensor-effector devices of a Turing machine and its innocuous environment/tape seems more easily defined in terms of structural coupling than in terms of a dynamic surrounding that will lead to more and more forms of autonomy.
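The mapping at issue, and our worry about it, can be made concrete with a toy sketch (ours, purely illustrative; it is not code from V&D, and all names are our own). The head ''senses'' a symbol, the transition table ''effects'' a write and a move, and the tape plays the role of the environment. The loop is functionally closed, yet everything the ''environment'' can contribute is fixed in advance by the transition table:

```python
def run_turing_machine(tape, transitions, state="start", pos=0, max_steps=100):
    """Run a toy Turing machine until it halts or max_steps is exceeded.

    transitions: {(state, symbol): (write_symbol, move, next_state)}
    move is -1 (left), 0 (stay), or +1 (right); next_state "halt" stops.
    """
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")                # "sensor": read the cell
        write, move, state = transitions[(state, symbol)]
        cells[pos] = write                          # "effector": write back
        pos += move                                 # "effector": move the head
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that flips every bit and halts at the first blank.
flipper = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("0110", flipper))  # -> 1001
```

However the tape is prepared, the sensor-effector loop can only ever cycle through states the table already provides for; it never regulates its own viability conditions, which is why, on the enactivist reading sketched above, closure of this kind falls short of adaptivity.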
The upshot is that V&D focus on a traditional version of autopoietic theory and extend their points to contemporary enactivism without plausibly arguing for such an extension. Yet the theory of autopoiesis alone does not reflect the recent developmental stage of the enactive approach. 7 We indeed see how the latter would be far more problematic when combined with computationalism (even when computationalism is detached from the concept of representation). As a result, V&D do not consider the variety of possible post-cognitivist positions and thereby construct an overly artificial theoretical niche for their well-contrived proposal of non-representational computationalism, which might not be as philosophically sustainable as planned.
3.2.3. Mechanisms are ontic structures. There are further complications regarding the idea that post-cognitivist approaches can be computation-friendly. As pointed out in the previous section, V&D have argued that post-cognitivist approaches like autopoietic theory imply the claim that scientific explanations are mechanistic explanations (Villalobos & Dewhurst, 2017b, p. 124) and that autopoietic theorists developed an epistemological framework that makes room for a mechanistic account of computation. The important point here is that the idea of non-representational computation as used by V&D is decoupled from the ''new mechanist approach'' as developed by Craver (2006, 2007). In the context of this approach, a commonly accepted definition of ''mechanism'' can be found in the work of Illari and Williamson. They propose that a ''mechanism for a phenomenon consists of entities and activities organized in such a way that they are responsible for the phenomenon'' (Illari & Williamson, 2012, p. 120). In a similar fashion, Milkowski et al. (2018, p. 4f.) claim that a ''mechanism is a spatio-temporal structure responsible for the occurrence of at least one phenomenon to be explained. The orchestrated causal interaction of the mechanism's component parts and operations explains the phenomenon at hand.'' These definitions already imply that new mechanists strongly favor an ontic account of mechanisms (Craver, 2014) over an epistemic account. An epistemic account presents mechanistic explanations as epistemic activities of researchers: they produce texts that generate individual ''aha'' effects concerning a certain phenomenon (Bechtel, 2008). The new mechanist approach, on the contrary, develops an ontic account of mechanistic explanation that places the source of its explanatory power not in the epistemic activity of the researcher but in fitting the phenomenon into a specific causal structure in the world (Craver, 2014).
Identifying and showing what the behavior of a mechanism does in the world is what a mechanist explanation is about (Illari, 2013). Hence, the new mechanist approach relies on a realist understanding of mechanisms. Mechanisms are entities in the world that exist without us perceiving, knowing, or interacting with them. The analysis of the mechanism's levels and its relation(s) is what gives us insights into how (cognitive) nature works. The original proponents of non-representational computationalism deem themselves new mechanists (Piccinini, 2008). They accept the realist stance on mechanisms as well.
However, as mentioned above, V&D explicitly state that computationalism comes in different flavors and that the ''ontological version'' (or realist version) of it is not the one they deploy (Villalobos & Dewhurst, 2017b, p. 117). This also means that they are at odds with important aspects of the new mechanist position on which they base their proposal. V&D also highlight that autopoietic theory has its own standards for using the term ''computation'' because autopoietic theorists would not easily buy into the ontological commitments that come along with the new mechanist approach. These commitments concern (cognitive) realism, that is, the idea that a cognitive system is confronted with a pre-given world. Although there are earlier attempts to argue for non-representational approaches in cognitive science that can be aligned with realist positions (Zahidi, 2014), V&D fail to plausibly argue for how a non-realist version of computationalism matches the realist position of the new mechanist approach. From our perspective, V&D have not shown that the decoupling of non-representational computation from the new mechanist approach is reasonable. By not being clear about how the new mechanist approach relates to their proposal, they indirectly seem to rely on the same kind of heuristics that are preferred by new mechanists (see section ''Differing heuristics''). They also do not tackle this issue in other articles (Villalobos & Dewhurst, 2016, 2018), so they seem to implicitly assume that such a decoupling can lead them to a theoretical position that can reconcile computationalism with post-cognitivism. However, in this regard, they do not go into the necessary details. On the contrary, it is argued that the ''notion of computation can also be used at a purely epistemological level, as a form of explanatory or predictive heuristic'' (Villalobos & Dewhurst, 2017b, p. 117).
If ''computation'' is used that way (and is hence also applicable in, for example, autopoietic theories), then it is not entirely comprehensible how it can be brought together with mechanisms, which are most often taken to be real, spatiotemporal structures in the world. If mechanisms were presented by V&D as ontic structures, then their computation-friendly post-cognitivism could not be preserved. They are simply not sufficiently clear about what ontological status mechanisms have from their point of view and about how, in case they think of ''mechanism'' as a purely epistemological term, this relates to Piccinini's idea of non-representational computation.
3.2.4. Body-neutrality and enactivism. A very important concept of Piccinini's account that V&D try to extend to the enactive approach is that of a ''medium-independent vehicle.'' The consequence of characterizing non-representational mechanisms as sensitive to this abstract notion of information is that enactivism would turn out to be compatible with multiple realizability. One problem that immediately emerges is that enactivism often emphasizes the uniqueness and importance of the bodily and neural components that are necessary for performing certain cognitive processes and experiencing the world in a certain way. In particular, enactivism and also weaker embodiment approaches grew out of a long debate on the contribution of the physical body to cognitive processes (e.g. see Alsmith & de Vignemont, 2012 for a complex taxonomy of the different theories of embodiment). Importantly, enactivists claim that ''bodily processes shape and contribute to the constitution of consciousness and cognition in an irreducible and irreplaceable way'' (Gallagher, 2011, p. 7). The enactive approach tends to justify this claim because brain and body are thought of as an inseparable unit (Fuchs, 2017). The kind of mind an organism owns is defined and shaped by the kind of body it owns and by the possibilities of action that body offers.
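The general idea of medium-independence can be made tangible with a toy sketch (ours, merely illustrative; it is not code from Piccinini or V&D, and the two ''media'' are invented for the example). The same input-output map is specified over abstract vehicle types, so any substrate that preserves the relevant distinctions realizes it equally well, whatever its physical makeup:

```python
def nand(a, b):
    """The abstract computation: defined only over two vehicle types, 0 and 1."""
    return 0 if (a == 1 and b == 1) else 1

# Two hypothetical "media" realizing the same abstract vehicles differently.
voltage_encoding = {0: 0.2, 1: 4.8}            # low/high voltage levels
token_encoding = {0: "absent", 1: "present"}   # presence/absence of a marker

def realize(encoding, a, b):
    """Decode medium-specific states, run the abstract map, re-encode."""
    decode = {v: k for k, v in encoding.items()}
    return encoding[nand(decode[a], decode[b])]

print(realize(voltage_encoding, 0.2, 4.8))            # -> 4.8
print(realize(token_encoding, "present", "present"))  # -> absent
```

On this picture, nothing about a particular medium matters beyond the distinctions it preserves. The enactivist claim just quoted is precisely that the biological body's contribution is not exhausted by any such abstract re-encoding.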
These considerations led Andy Clark (2008) to distinguish two main positions in the contemporary embodied cognition debate. While extended functionalists and weaker embodiment theorists often subscribe to the so-called ''Larger Mechanisms Story,'' enactivists and cognitive scientists who tend to think of the body in a more radical sense generally subscribe to the ''Special Contribution Story.'' While in the first case the body is presented as important because of its computational and functional role, in the case of the ''Special Contribution Story,'' the body matters for its biological and physical contributions. In other words, enactivism rejects body-neutrality, that is, the thesis that our specific form of embodiment does not contribute in any special way to the way we think and perform our cognitive functions (Cosmelli & Thompson, 2010; Shapiro, 2004). Independently of whether cognition is considered embodied or not, it is undeniable that the brain is shaped and continuously updated by all the feedback provided by the states and positions of the body. However, if the enactive position is to be maintained, it becomes complicated to argue that functional mechanisms work on a notion of information that can be described in purely abstract terms, while the body and the kind of inputs it provides to the brain are unique and strongly tied to a biological substrate that is not multiply realizable. The consequences for enactivism, if the existence of such mechanisms is accepted, are dramatic. The result is that it would be impossible for enactivists to properly emphasize the contribution of the biological body and reject body-neutrality. Furthermore, it would automatically follow that all the biological contributions of the physical body that cannot be grasped and defined through an abstract and multiply realizable notion of information would be left out of post-cognitivist explanations.
Cognitive processes could then potentially be realized by an envatted brain (Thompson & Cosmelli, 2011b). Independently of our take on the ''Brain in a Vat'' thought experiment, accepting its consequences from the perspective of enactivism would surely compromise its relational ontology, since body and environment could potentially be simulated by the activity of a complex computer. What follows is that the brain would be the only real constituent of cognitive processes. There are reasons to be skeptical that a version of enactivism compatible with body-neutrality and brain-boundedness could be presented as a coherent theoretical position. More specifically, the concept of autonomy (discussed in section ''The neglected difference between enactivism and theory of autopoiesis'') and the inseparable intertwining of cognition and emotion in the perception of action-tendencies (Thompson & Stapleton, 2009) defended by enactivists do not allow for the possibility of body-neutrality or brain-bound cognition. Instead, if these consequences were accepted, enactivism would turn out to become much more similar to weak embodied or extended functionalist accounts rather than representing an alternative to cognitivism.

Conclusion: maintaining coherence in the situated cognition debate
The approach developed by V&D on different occasions is an original and ambitious attempt to provide a philosophical patchwork of cognitivist and post-cognitivist approaches (Dewhurst, 2016; Villalobos & Dewhurst, 2016, 2017a, 2017b, 2018). If their attempt held, it would lead to a greater theoretical cluster within cognitive science, able to combine efforts and conceptual resources from different branches of the field. However, the patchwork can only be envisaged if certain concepts, developments, and problems of post-cognitivist theories like enactivism are neglected. Once the neglected issues are made explicit, it becomes apparent that the integration of enactivism and computationalism remains a problematic idea. It is likely that, because of the differing heuristics, the varying assumptions about the role of the body in cognition, and the divergent ontological commitments, such a proposal will stay a rare and rather insignificant option, as it needs many conditions to be satisfied in order to work. For example, V&D can, as of yet, integrate only a canonical version of enactivism with computationalism, while leaving out recent developments of that position. What is really controversial in this case is that V&D use the term ''enactivism'' as a placeholder for the classic autopoietic theory developed by Maturana and Varela in the 80s. In addition, computationalists would be entirely unable to reconcile their research strategy with enactivist approaches to solve problems like the scaling-up problem even if the two could be connected. Another problematic aspect of V&D's proposal originates from the unclear connection between non-representational computationalism and the new mechanist approach. Their vagueness concerning the conditions under which a computational process can be deemed representational also causes philosophical trouble. Because of all these problems, we remain at odds with V&D's proposal.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD
Mark-Oliver Casper https://orcid.org/0000-0002-9735-8740

Notes
1. More concretely, an example provided by the same authors clarifies that while the instructions we follow to bake a cake are medium-dependent (the amount and the specific kind of ingredients we use crucially matter for baking a cake properly), medium-independent vehicles can be described in abstract and formal ways (Ritchie & Piccinini, 2018).
2. This means that we sideline competitive versions of explanatory pluralism, in which the plurality of methods, models, and types of explanation is eventually destroyed by one type of explanation that removes all other types from the scene (Mitchell, 2002). We instead assume that an integrative or liberal version of explanatory pluralism should be followed, one that acknowledges genuine alternative explanation types.
3. Importantly, even if the considerations made by Richardson and Chemero are raised from a broader, radical embodied perspective, they are equally valid for the enactive approach, for at least two reasons. First, enactivism is, as a matter of fact, a radical embodied approach (see Clark & Toribio, 1994; Varela & Thompson, 2001). Second, to our knowledge, many contemporary enactive accounts rely, in one way or another, on the tools offered by dynamical systems theory (see Di Paolo, Buhrmann & Barandiaran, 2017; Hutto & Myin, 2017; Thompson, 2007). An exception, a form of enactivism that did not take a clear stance on dynamical systems theory, is the original sensorimotor approach proposed by Alva Noë (2004).
4. Importantly, it is worth noting that computationalism and post-cognitivist approaches are incompatible only theoretically. The tools of computational modeling are already available to situated approaches independently of their claims regarding the nature of cognitive processes (see, for example, how the vast literature on dynamical modeling strongly relies on computational tools; concrete cases can be found in Froese & Izquierdo, 2018, or Buhrmann & Di Paolo, 2014). The epistemological validity of tools such as computational modeling is thus not under discussion.
5. AT is an abbreviation used by the two authors to indicate the Autopoietic Theory.
6. Di Paolo (2018) is particularly worried that the original theory of autopoiesis failed to grasp what he calls ''the tension of life.'' By being too close, autopoiesis alone cannot explain how living systems can, at the same time, self-produce and self-distinguish themselves. From his perspective, this challenge can be accommodated by the enactive approach.
7. Importantly, the pluralism defended earlier in the article is about accepting different types of explanation rather than theories. Hence, that pluralism does not undermine our critique of V&D's usage of enactivist theories and the new mechanistic approach. It is actually the combination of different or incompatible theories that risks damaging the pluralist stance. We want to thank an anonymous reviewer for pointing that out.