Formal Innovations in Clinical Cognitive Science and Assessment

Mathematical modeling is increasingly driving progress in clinical cognitive science and assessment. It is essential for detecting certain effects of psychopathology through comprehensive understanding of telltale cognitive variables, such as workload capacity and efficiency in using that capacity, as well as for quantitatively stipulating the subtle but important differences among these variables. The research paradigm guiding this formal clinical science is outlined. A distinctive cognitive abnormality in schizophrenia, taking longer to cognitively represent encountered stimulation, is used as a specific example to illustrate a general quantitative framework for studying intricate phenomena that impair mental health. Developments in mathematical modeling will also benefit symptom description and prediction; provide grounding in cognitive and statistical science for new methods of clinical assessment over time, both for individuals and for treatment regimens; and contribute to refining the cognitive-function side of clinical functional neurophysiology.


PSYCHOLOGICAL SCIENCE
Contemporary clinical applications of basic cognitive science have taken on a new direction involving mathematical modeling of symptom-related cognitive abnormalities. The basic research paradigm for this movement is illustrated in Figure 1. Mathematical models of cognitive performance among healthy individuals are adjusted to accommodate deviations from normal performance among participants with selected forms of psychopathology. Such deviations typically center on speed and/or accuracy in performing cognitive tasks. Parts of the model remaining intact after this adjustment are considered to indicate cognitive functions that are spared, and performance deviations that compel modification of the model are flagged as signifying disorder-affected functions. Minimal adjustment of the model is desired, in the interest of parsimony. In this way, models provide a formal framework to determine which cognitive processes do or do not differ between clinical groups and healthy control groups.
Such formal theoretical developments can offer multiple advantages in explaining and measuring psychopathology. This article describes these advantages, illustrating them through a case study involving the symptom-related cognitive neuroscience of schizophrenia.
As illustrated in Figure 1, clinical mathematical cognitive neuroscience stands to uniquely contribute to mainstream mathematical cognitive neuroscience. It can do so through model generalization testing (Busemeyer & Wang, 2000), in which the robustness of a model's performance is evaluated with new experimental paradigms or populations. Clinical mathematical cognitive neuroscience provides key opportunities for generalization testing with respect to extreme individual differences, which are associated with psychopathology. Models that readily accommodate performance deviations are preferred to those that fail to accommodate them or that are strained in doing so. That is, a model is supported when observed abnormalities relating to psychopathology can be well predicted without major adjustment of the model's workings.
Note that clinical mathematical modeling, the subject of this article, involves analytic theorems and proofs, as well as algebraic derivations expressing cognitive transactions relevant to clinical disorders. It should be distinguished from a somewhat related type of modeling known as computational psychiatry. Computational psychiatry addresses how cognitive transactions might be realized at the level of neural organization and operations. By and large, computational psychiatry delegates disorder-related deviations of neural functioning to computer simulation of neural networks and neurodynamics (communication between neurons). (Accounts and examples of computational psychiatry are available in Montague et al., 2012; Huys et al., 2016; Wang & Krystal, 2014; and Grossberg, 1999. For a rigorous, comprehensive treatment of artificial neural networks, see Golden, 1996.) Computational psychiatry and clinical mathematical modeling potentially are complementary when it comes to quantitative accounts of clinical disorders (e.g., Carter & Neufeld, 1999; extensive elaboration on distinctions among alternate approaches to modeling clinical phenomena can be found in Neufeld, 2007a).

Case Study: Cognitive Neuroscience of Stimulus Encoding in Schizophrenia
Schizophrenia affects approximately 0.5% of the North American population. Symptoms can take the form of delusions and hallucinations (thought-content disorder), incoherent speech, reduced cognitive efficiency, and impoverished motivation. Studies applying the research strategy depicted in Figure 1 have shown that delayed completion of stimulus encoding is a deviation in cognitive performance recurrently found in schizophrenia, across multiple experimental tasks and levels of patient status (e.g., first episode, never treated, outpatient, inpatient). In this case, stimulus encoding refers to cognitively preparing and transforming cognitive-task stimuli into a format facilitating collateral processes. For instance, participants might be asked to memorize a set of novel stimuli (e.g., "TZAM," "CEYP") and then, after a delay, to identify which stimuli in a new set were part of the original set. To do well in this task, participants must encode stimuli presented in the second set into a cognitive format facilitating comparison with the previously memorized set of stimuli. For example, in the case of basic visual-template matching, it may be necessary to cognitively extract the physical features of a presented stimulus, such as its curves, lines, and intersections; or in the case of stimulus-name matching, it may be necessary to tag a presented digit or letter stimulus with its name, for comparison with names of the stimuli in the previously memorized set.
Mathematical modeling enables quantitative dissection of cognitive processes, to help pinpoint specific sources of deviation in cognitive performance. With respect to encoding, for example, the processes can be broken down as follows. First, the overall process is made up of constituent encoding operations (encoding subprocesses), such as registration of the curves, lines, and intersections of a presented stimulus. Second, the encoding subprocesses take place at a certain rate, known as subprocess-level cognitive-workload capacity (e.g., Neufeld et al., 2007; Wenger & Townsend, 2000). Application of the model-adjustment operation of Figure 1 has repeatedly shown that subprocess-level cognitive-workload capacity remains intact in schizophrenia, but the number of encoding subprocesses undertaken is elevated. In other words, cognitive-workload capacity escapes impairment, whereas efficiency of its implementation does not. This combination of spared and affected components of encoding performance is analogous to a racehorse striding at a normal pace but closer to the outside rail, which increases the requisite number of paces, and therefore the time needed, to complete the course. This combination illustrates the nature of the model adjustment referred to in Figure 1: The altered model conforms to the specific pattern of empirical deviations among clinical participants (in this case, people with schizophrenia).
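The spared-capacity, elevated-subprocess-count pattern can be made concrete with a small simulation. The sketch below assumes, purely for illustration, an Erlang-type serial-stage model in which encoding comprises k independent exponential subprocesses, each completed at rate v; the parameter values are hypothetical, not estimates from the cited studies.

```python
import random

def simulate_encoding_latency(k, v, rng):
    """One trial's encoding latency: the sum of k serially executed
    exponential subprocess durations, each completed at rate v."""
    return sum(rng.expovariate(v) for _ in range(k))

def mean_latency(k, v, n_trials=20000, seed=1):
    rng = random.Random(seed)
    total = sum(simulate_encoding_latency(k, v, rng) for _ in range(n_trials))
    return total / n_trials

# Hypothetical parameter values: the control model completes k = 3
# subprocesses; the adjusted (clinical) model completes k = 5, with the
# subprocess rate v (workload capacity) spared, i.e., equal in both.
control_mean = mean_latency(k=3, v=10.0)
clinical_mean = mean_latency(k=5, v=10.0)
# Theoretical mean latency is k / v: the longer completion time arises
# solely from the elevated subprocess count, not from any change in rate.
```

Under this hypothetical model, the group difference in mean latency is driven entirely by the subprocess-number parameter, mirroring the racehorse analogy above.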
[Fig. 1 caption: The upper arrow indicates the application of mathematical models of normal cognitive-task performance to the understanding of performance changes occurring with psychopathology. The lower arrow indicates that the success of such application, in turn, bears on the validity of the applied models.]

The studies just mentioned have tracked the abnormality in encoding to a specific formalized property: the subprocess-number parameter of the stimulus-encoding process. Identification of such an abnormality stands to open the way to potentially important advances in this domain of psychological clinical science. The abnormality arguably represents a critical deficit that compromises activities relying on timely stimulus encoding (e.g., performing daily self-maintenance and meeting environmental stresses and demands). The quantitative apparatus in which this property is embedded provides for certain methodological benefits, including theory-guided measurement and clinical assessment, and stipulation of the cognitive neurophysiological processes taking place. The abnormality, as quantitatively defined, also can be shown to be potentially symptom related, notably with respect to thought-content disorder (delusions and thematic hallucinations). Such symptomatology is considered to emanate from failure to encode specifically context-related features of a stimulus complex during episodes of information intake. When the influence of reality-grounding, objectifying cues is weakened, other information that is successfully taken in during an episode is open to false interpretation (this mechanism of symptom production is expanded upon quantitatively in Neufeld, 2007b, and Neufeld et al., 2010). This formal theoretical account is in the spirit of the current clinical-science trend toward determining underlying mechanisms of complex behaviors, in this case mathematically.
It accords, moreover, with the currently prominent Research Domain Criteria initiative (e.g., Kozak & Cuthbert, 2016), inasmuch as the identified mechanism evidently extends to other forms of clinical disturbance (e.g., major depressive disorder; Taylor et al., 2016) and to nonclinical populations (Nicholson & Neufeld, 1993).
Note that model adjustment capturing changes in cognition associated with clinical disorder typically takes the form of altering the values of model parameters, such as the number of encoding subprocesses or the rate at which subprocesses are completed. Model architecture (in terms of the number of model parameters involved or their arrangement in relation to each other) ordinarily is common to clinical and nonclinical groups alike (see, e.g., Neufeld & Broga, 1981;Wallsten et al., 2005). In other words, the basic mental apparatus meeting a cognitive challenge is common across groups, but modification of one or more of its parts (parameters) accompanies clinical disorder.

Measurement and Clinical Assessment Guided by Formal Theory
Samples of individuals' cognitive performance (for instance, on encoding-intensive tasks) permit estimation of cognitive-process parameter values. Such estimation is accomplished by applying established methods to empirical performance data (e.g., observed latencies of response on cognitive-task trials). The methods used include maximum likelihood, distribution-moment matching (e.g., Evans et al., 2000), and Bayesian parameter estimation (e.g., Alexandrowicz & Gula, 2020, who applied this method, along with a mathematical model of decision and choice, to clinical disorders). Clinically relevant cognitive processing concealed in raw data can be revealed via mathematical modeling.
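As a hedged illustration of maximum-likelihood estimation applied to latency data, the following sketch fits the hypothetical Erlang-type encoding model introduced above: for each candidate subprocess count k, the maximum-likelihood rate has the closed form v = k / (mean latency), and the candidate with the highest likelihood is retained. The model form and numbers are illustrative assumptions, not the cited studies' exact procedure.

```python
import math
import random

def erlang_loglik(latencies, k, v):
    """Log-likelihood of latencies under an Erlang(k, v) model
    (k serial exponential subprocesses, each at rate v)."""
    n = len(latencies)
    return (n * (k * math.log(v) - math.lgamma(k))
            + (k - 1) * sum(math.log(t) for t in latencies)
            - v * sum(latencies))

def fit_erlang(latencies, k_max=10):
    """Profile maximum likelihood: for each candidate subprocess count k,
    the ML rate is v = k / mean(latency); retain the best-scoring k."""
    mean_t = sum(latencies) / len(latencies)
    k_best = max(range(1, k_max + 1),
                 key=lambda k: erlang_loglik(latencies, k, k / mean_t))
    return k_best, k_best / mean_t

# Synthetic latencies from a hypothetical "clinical" model: k = 5, v = 10
rng = random.Random(7)
data = [sum(rng.expovariate(10.0) for _ in range(5)) for _ in range(2000)]
k_hat, v_hat = fit_erlang(data)  # expected to recover roughly k = 5, v = 10
```

The closed-form rate step reflects that, for fixed shape, the Erlang likelihood is maximized when the model's mean k/v matches the sample mean.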
Often it may not be reasonable to assume that all individuals within a clinical group have (roughly) the same level of cognitive processing, as indexed, for example, by a fixed value for a model parameter. Fortunately, mathematical models can be expanded to account for this, notably through expansion as mixture models. Mixture models treat the overall performance of a group as a mixture of different levels of performance among individual group members (e.g., Carter et al., 1998;Cutler & Neufeld, 2017).
Mixing distributions can be important not only because they make it possible to systematically accommodate individual differences, but also because the parameters that mathematically govern the random distributions of model properties (mixing-distribution hyperparameters) can be clinically meaningful in their own right. For example, mixture-model hyperparameters can convey a particular group's general level of facility with undertaking the elements of the cognitive process at hand (e.g., encoding subprocesses); they can also be used to indicate susceptibility of this facility to impairment during psychological stress (for concrete examples, see Neufeld, 2016).
In short, mixture-model expansions, illustrated in Figure 2, can increase the span of what a model explains by incorporating individual differences. They additionally can tap clinically meaningful constructs, such as cognitive-task facility and vulnerability of performance to psychological stress.

Measuring better with Bayes
Mixture models allow for the likelihood that individuals systematically differ in properties of mathematically expressed cognitive performance. They go an important step further, in providing for efficient estimation of model properties for the individual. They do so by customizing the properties to the person, through Bayesian statistical methodology, as follows. Bayes' theorem, appropriated to the present context, states that

Pr(A | {*}) = Pr({*} | A) × Pr(A) / Pr({*}),  (1)

where Pr(A | {*}) is the Bayesian (posterior) probability of a predicted entity A, such as a cognitive-process parameter value, given the cognitive-performance observations {*}; Pr({*} | A) is the likelihood of the observations given A; Pr(A) is the prior probability of A; and Pr({*}) is the probability of the observations, all candidate values of the predicted entity considered (for accounts of Bayesian modeling generally, see classic works such as Berger, 1985, and O'Hagan & Forster, 2004).
With Bayes' theorem and a person's cognitive-performance sample in hand, a versatile estimation of individual attributes of clinical interest, based on cognitive and statistical science, is possible. Predicted entities A actually can be diverse, including, for example, the parameter expressing the number of encoding subprocesses or the symptomatology to which the mathematical model and estimated model parameters relate (e.g., severity of thought-content disorder; for further discussion, see the following section on dynamic assessment of treatment efficacy). Bayesian estimation stabilizes estimated values through the anchoring effects of mixing distributions, which act as Bayesian priors (e.g., Pr(A) in Equation 1). Variance in estimates (statistical inefficiency) thus is reduced through the quantitative mechanism formally known as Bayesian shrinkage.
Estimates are solidified by feeding into their calculation precisely the information supplied by a preestablished referent: the Bayesian prior represented by the mixing distribution.
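Equation 1 can be illustrated on a discrete grid of candidate subprocess counts, with the mixing distribution serving as the prior. In this sketch the model form (Erlang), the known rate, the uniform prior, and all numbers are illustrative assumptions.

```python
import math
import random

def erlang_loglik(latencies, k, v):
    """Log-likelihood of latencies under an Erlang(k, v) encoding model."""
    n = len(latencies)
    return (n * (k * math.log(v) - math.lgamma(k))
            + (k - 1) * sum(math.log(t) for t in latencies)
            - v * sum(latencies))

def posterior_over_k(latencies, prior, v):
    """Equation 1 on a grid: Pr(k | {*}) is proportional to
    Pr({*} | k) * Pr(k). The subprocess rate v is treated as known,
    a simplifying assumption of this sketch."""
    logs = {k: erlang_loglik(latencies, k, v) + math.log(p)
            for k, p in prior.items()}
    top = max(logs.values())                      # guard against underflow
    unnorm = {k: math.exp(val - top) for k, val in logs.items()}
    z = sum(unnorm.values())
    return {k: u / z for k, u in unnorm.items()}

# A modest performance sample (60 trials) from a hypothetical k = 5 encoder
rng = random.Random(11)
sample = [sum(rng.expovariate(10.0) for _ in range(5)) for _ in range(60)]
uniform_prior = {k: 1.0 / 8.0 for k in range(2, 10)}
post = posterior_over_k(sample, uniform_prior, v=10.0)
```

Replacing the uniform prior with an empirically grounded mixing distribution is what produces the anchoring (shrinkage) effect discussed above.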
The operation of mixing-distribution Bayesian priors can help alleviate the problem of small-sample mathematical modeling, which is ubiquitous in applied settings. The approach allows researchers to work with small cognitive-performance sample sizes, which is particularly helpful when undertaking person-specific modeling for assessment or research purposes. Valid mixing distributions sharpen the estimation of model properties for the individual. They help compensate for small performance samples by bringing into play performance-relevant information about the group to which the individual at hand belongs. Again, this information is conveyed by the mixing distribution that quantifies the relative frequency of the target of prediction (e.g., a model-parameter value) in a membership group. Such a scenario resembles what takes place in a hematology laboratory, where a substantial extant bank of hematological information, applicable collectively, is brought to bear individually on a modest blood sample from the person at hand.
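Bayesian shrinkage itself can be shown with the simplest conjugate case: a normal prior, standing in for the mixing distribution, combined with a small normal performance sample. All figures below are hypothetical.

```python
def shrinkage_estimate(sample_mean, n, obs_var, prior_mean, prior_var):
    """Conjugate normal-normal posterior mean: a precision-weighted average
    of the person's small-sample mean and the group (mixing-distribution)
    mean. More trials shift the weight toward the person's own data."""
    data_precision = n / obs_var
    prior_precision = 1.0 / prior_var
    weight = data_precision / (data_precision + prior_precision)
    return weight * sample_mean + (1.0 - weight) * prior_mean

# Hypothetical figures: a person's 5-trial mean latency of 0.62 s, with a
# group prior mean of 0.45 s; the estimate is pulled ("shrunk") toward the
# group mean, which reduces its sampling variance.
estimate = shrinkage_estimate(sample_mean=0.62, n=5, obs_var=0.04,
                              prior_mean=0.45, prior_var=0.01)
```

This is the hematology-laboratory analogy in miniature: collective (prior) information compensates for the modest individual sample.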
Moreover, dynamic assessment of changes in clinical condition is possible through undertaking Bayesian estimation at designated times of clinical interest. Such estimation can be used to track changes in the status of symptom-related (e.g., thought-content symptomatology) cognitive-model parameters (e.g., the number of stimulus-encoding subprocesses, which the model-adjustment operation of Fig. 1 has identified as inflated in schizophrenia). Changes can be monitored as they occur over the natural passage of time, over the course of treatment, or subsequent to an experimental manipulation. In these ways, the described formal methodology can be an important constituent in the arsenal of clinical assessment. Mixing-distribution Bayesian priors here take the place of the usual population-based norms of multi-item psychometric inventories. Bayesian individualization of model properties also allows for evaluation of model performance at the person-specific level. Doing so ascertains a model's validity for an individual participant; it also affords strong tests of overall model performance. Fit of model predictions to empirical observations at both the group and the individual levels is an added means of model evaluation. This unique form of model evaluation potentially bears on the currently prominent issue of robustness of findings in cognitive modeling (Neufeld & Cutler, 2019).

Dynamic assessment of treatment-regimen efficacy
With a modest expansion of Equation 1, the present assessment methodology naturally extends beyond the individual; it can be applied to estimating the representation of varying levels of symptom severity in a clinically treated cohort. With A of Equation 1 standing for cognition-related symptom severity, and given performance samples from a random subsample of individuals in a treated cohort, changes in proportions of relative severity levels can be estimated and monitored repeatedly over time. The procedure loosely resembles one from mathematical ecology, in which the stocks of various fish species are estimated using netted samples taken over the course of a fishing season. In the present case, the moving profile of symptom-severity proportions addresses the efficacy of the treatment regimen in moving the treated cohort toward more healthy cognitive functioning. Note that inferences at the individual and cohort levels are both centered on a cognitive, symptom-related mechanism (e.g., parameterized deviation in cognitive encoding). Such estimation is of special interest, for example, when the administered treatment is a drug targeting the central nervous system (for elaboration on the mathematical and computational specifics, assumptions, and methodological caveats of the assessment procedure described here, see, e.g., Neufeld, 2007a; Neufeld et al., 2002, 2010).
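One deliberately simplified way to sketch the cohort-level idea: if each sampled individual's performance has been classified into a severity level, a Dirichlet-multinomial update yields posterior-mean severity proportions at each sampling occasion. This stand-in omits the model-based classification step the text describes, and the counts are hypothetical.

```python
def severity_proportions(counts, prior_alpha):
    """Dirichlet-multinomial posterior mean for severity-level proportions
    in a treated cohort, computed from a classified performance subsample.
    A simplified stand-in for the full model-based estimation in the text."""
    total = sum(counts) + sum(prior_alpha)
    return [(c + a) / total for c, a in zip(counts, prior_alpha)]

# Hypothetical classified counts of (mild, moderate, severe) at two
# sampling occasions during a treatment regimen.
occasion_1 = severity_proportions([5, 10, 15], prior_alpha=[1.0, 1.0, 1.0])
occasion_2 = severity_proportions([14, 11, 5], prior_alpha=[1.0, 1.0, 1.0])
# A rising "mild" proportion across occasions would suggest the regimen is
# moving the cohort toward healthier cognitive functioning.
```

The moving profile of these proportions is the fish-stock analogy in code: repeated "netted samples" tracking a shifting population composition.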

Implications for Clinical Functional Neurophysiology
Mathematical modeling of the cognitive side of vascular and electrophysiological cognitive neurophysiology conveys several methodological assets. Formally anchoring cognitive functions in a viable mathematical model is an antidote to a thorny problem in cognitive neurophysiology known as reverse inference (Poldrack, 2011). This problem consists of circularly relying on measured neurophysiological signals (obtained with fMRI, functional magnetic resonance spectroscopy, magnetoencephalography, and electroencephalography) to infer the cognitive functions whose very neuronal substrates purportedly are being charted. This inferential dilemma in principle can be overcome as follows. The cognitive functions at work while neurophysiological measurements are taken are quantitatively stipulated in advance, anchored in a formal representation (e.g., Ahn et al., 2011; White et al., 2012). That is, cognitive functions whose neurophysiological substrates are being examined are staked out in terms of a quantitative model, one that is a priori freestanding, independent of the examined neurophysiological activity itself.
Note, further, that dynamic models of cognitive operations treat the development of cognitive processes as stochastic functions of time (Townsend & Ashby, 1983). The unfolding of target processes, such as stimulus encoding, can be overlaid on monitored neurophysiological signals, to produce times of neurophysiological measurement interest within trials of cognitive-task performance. Such times of measurement interest can complement brain regions of measurement interest (e.g., a region known as the encoding-intensive dorsal anterior cingulate cortex). In this way, mathematical cognitive models can contribute to the calibration of space-time coordinates of neurophysiological measurement (illustrated in Neufeld et al., 2010). Isolating critical times to measure a target process (e.g., encoding a presented stimulus) has the advantage of allowing it to function as it would alongside related processes involved in executing a cognitive task (e.g., comparing a presented stimulus with other stimuli held in memory). The approach, in other words, allows the target process to be examined as it operates in situ-inside its cognitive ecological niche.
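The within-trial timing idea can be sketched with the hypothetical Erlang encoding model used throughout these examples: the probability that encoding has finished by time t is the Erlang distribution function, and its quantiles mark candidate times of measurement interest. Parameter values are again illustrative.

```python
import math

def erlang_cdf(t, k, v):
    """Probability that all k serial encoding subprocesses (rate v each)
    have completed by time t."""
    return 1.0 - sum(math.exp(-v * t) * (v * t) ** i / math.factorial(i)
                     for i in range(k))

def measurement_window(k, v, lo=0.1, hi=0.9, step=0.001):
    """Times by which encoding is (e.g.) 10% and 90% likely to have
    finished: candidate within-trial times of neurophysiological
    measurement interest, found by a simple time sweep."""
    t, t_lo, t_hi = 0.0, None, None
    while t_hi is None:
        t += step
        p = erlang_cdf(t, k, v)
        if t_lo is None and p >= lo:
            t_lo = t
        if p >= hi:
            t_hi = t
    return t_lo, t_hi

# Hypothetical encoding model: k = 5 subprocesses at rate v = 10 per second
t_lo, t_hi = measurement_window(k=5, v=10.0)
```

Such a window could then be paired with a region of measurement interest, giving the space-time coordinates the text describes.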
Estimating individual differences in model parameters, as described above, also can facilitate the formation of parametrically homogeneous groups. Reducing participant-group heterogeneity potentially achieves greater statistical power for detecting subtle but key neurophysiological anomalies.
At a broader level, formal cognitive modeling can provide a cognitive-functional nexus for integrating observations from functional neurophysiology, investigative settings, and experimental sessions. Ascertaining mathematically that the cognition at play remains stable across different sources of data lends assurance that neurophysiological results converge on a shared set of cognitive operations. For example, a common mathematical model of stimulus encoding, as activated by a widely used cognitive task (the Stroop task), has been shown to apply across different levels of cognitive neurophysiological measurement. Investigations first focused on functional magnetic resonance spectroscopy, used to examine neurochemical mechanisms accompanying cognitive performance, and then on vascular-signal functional MRI, used to examine the specific neuronal circuits involved in performing the cognitive task (Taylor et al., 2015, 2016, 2017).
By adopting the strategy portrayed in Figure 1, researchers can identify and target cognitive-processing deviations, as well as estimate the time course of the deviant processing during trials of an experimental task. This time course then can be combined with measured activation of the brain region or regions apt to be involved in the suspected disorder-related cognitive process. The goal is to uncover abnormality in neuronal operations paralleling abnormality in the targeted cognition. The combination of cognitive-functional and neurophysiological information on a disorder, in turn, can profitably feed into clinical assessment and treatment activities.

Concluding Comments
Clinical mathematical psychology stands at the ready to contribute to progress in clinical science and assessment (see also Treat & Viken, 2010). Some readers may be put off by the requisite engagement in analytical developments (elaborated on in Neufeld, 2007a). However, behavioral scientists who took advanced statistics and design courses as undergraduate and graduate students are often in a strong position to grasp the necessary quantitative tools, possibly with the aid of available tutorials (see Recommended Reading). It is motivating to note that the history of science by and large is replete with exemplary advances hinging on decidedly formal theoretical developments (necessary propositions; e.g., Braithwaite, 1968; Harper, 2011). The transparency of mathematically stated accounts of deviations in cognitive processes, moreover, is intrinsically rewarding. It also can attest to the rigor of developments, if justified, but can throw any flaws into relief as well, thereby promoting scientific self-correction.

Transparency
Action Editor: Teresa A. Treat
Editor: Robert L. Goldstone

Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.