Neuro-fuzzy control of sit-to-stand motion using head position tracking

Based on clinical evidence that head position, as measured by the multisensory system, contributes to motion control, this study proposes a biomechanical human-central nervous system modeling and control framework for sit-to-stand motion synthesis. Motivated by the evidence for task-oriented encoding of motion by the central nervous system, we propose a framework that synthesizes and controls sit-to-stand motion using only the head position trajectory in a high-level task-control environment. First, we design a generalized analytical framework comprising a human biomechanical model and an adaptive neuro-fuzzy inference system that emulates the central nervous system. We introduce a task-space training algorithm for training the adaptive neuro-fuzzy inference system. The controller is optimized in the number of membership functions and training cycles to avoid over-fitting. Next, we develop custom human models based on anthropometric data of real subjects. Using the weighting coefficient method, we estimate body segment parameters. The subject-specific body segment parameter values are used (1) to scale the human model to real subjects and (2) in task-space training to train custom adaptive neuro-fuzzy inference system controllers. To validate the modeling and control scheme, we perform extensive motion capture experiments of sit-to-stand transfer by real subjects and compare the synthesized and experimental motions using kinematic analyses. The analytical modeling-control scheme proves scalable to real subjects' body segment parameters, and the task-space training algorithm provides a means to customize the adaptive neuro-fuzzy inference system efficiently. The customized adaptive neuro-fuzzy inference system gives a 68%–98% improvement over the general one. This study has broader scope in rehabilitation, humanoid robotics, and motion planning of virtual characters based on a high-level task-control scheme.


Introduction
Sit-to-stand (STS) movement is a skill that helps determine the functional level of a person. 1 The ability to rise from sitting to standing is critical to quality of life, as it is linked with the functional independence of an individual. Studies on the hierarchy of disability indicate that problems with STS start at a later stage than problems with walking. STS is a mechanically more demanding physical activity, as the body has to work against gravity more than it does during walking. 2 STS is a complex voluntary movement that is not yet fully understood. Studies have shown, however, that human motion control and maintenance of balance by the central nervous system (CNS) rely on inputs from the visual, proprioceptive, tactile/somatosensory, and vestibular systems. Multisensory integration, combined with motion control, undergoes both quick and slow alterations, termed fast and slow dynamics in the CNS, respectively. For any voluntary motion, the CNS anticipates set patterns of inputs from the multisensory systems. The anticipated pattern of signals is a function of the slow dynamics in the CNS, which arises from the long-term processes of learning a motion pattern or from changes in motion strategy due to aging or disease. 3 Siriphorn et al. 4 evaluated the role of vision in STS by collecting parameters such as weight transfer time, rising index, and center of gravity (CoG) velocity sway during STS. Data were collected from volunteers, first with open eyes and then blindfolded. The results showed significant differences between the two trials, suggesting that visual perception plays a role in balance control during STS. Furthermore, Mughal and Iqbal 5 evaluated the roles of ground reaction force (GRF), moments, and center of pressure (CoP) during STS. These kinetic quantities are measured and transmitted to the CNS by the tactile and somatosensory systems.
The results validated a modeling scheme that depends on GRF and moments as variables of interest.
Among all sensory inputs, head position and orientation are also of particular research interest. The vestibular system senses linear and angular head motions, and the CNS uses this information for posture and gaze control. 6 The vestibular sense, in conjunction with neck proprioception, estimates body orientation. 3 The role of head position feedback to the CNS in the smooth execution of STS is studied in Scholz et al., 7 whose detailed experimental and physiological analyses suggest that the STS movement depends on kinematic variables such as the center of mass (CoM) and head position during the task. A person rising from a chair first leans forward, bringing the head over the CoM point, and then extends into the standing position. The head position trajectory is pivotal in providing a basis for the endpoint hypothesis of STS movement stabilization, which holds that the entire task becomes simple by maneuvering the head to the endpoint of the trajectory. This phenomenon was simulated and studied in Mughal and Iqbal, 8 who proposed a feedback control law based on inverse kinematic actuation and validated it with experimental results of kinetic variables.
The behavioral richness exhibited in natural human motion results from a complex interplay of biomechanical and neurological factors. An adequate understanding of these factors is a prerequisite to understanding the overall mechanism of human motion as well as a means of synthesizing it. The basic constituents of the human motor system can be modeled as a biomechanical plant with the CNS as its controller. 9 In human-like motion synthesis frameworks, the CNS model is usually left vague, 9 which limits each framework to a specific task. To emulate the CNS, a controller is used, following the concept that optimal controllers exist in the human CNS. 8 Owing to its similarity to the human reasoning mechanism, the adaptive neuro-fuzzy inference system (ANFIS) is a natural choice for biomechanical applications. Unlike most conventional controllers, ANFIS is a model-free controller. 10,11 It combines two soft computing techniques, artificial neural networks (ANN) and fuzzy logic (FL). An ANFIS is a generalized neural network (GNN) that implements a fuzzy inference system (FIS) based on Sugeno-type reasoning. ANFIS was proposed in Jang 12 to compensate for FL's lack of a learning mechanism and ANN's inability to translate linguistic fuzzy rules into an inference system. Controlling human motion assistance devices using ANFIS is frequently reported in the literature; the neuro-fuzzy control of robotic manipulators for a human-assist exoskeleton in Peng and Woo 13 demonstrates the suitability of ANFIS for biomechanical control applications.
In this study, we emulate the clinical hypothesis that besides numerous other factors, CNS controls the STS motion by tracking a pre-learned head position trajectory. We propose a biomechanical human-CNS modeling and motion control solution to this hypothesis. Clinical evidence, that STS motion is somehow linked with head position feedback to CNS, is already available in the literature. 3 Motivated by the evidence for a task-oriented encoding of motion by the CNS, 9 we propose a biomechanical modeling and control framework that is capable of synthesizing and controlling STS motion using only head position trajectory as reference.
The human-CNS model should be customizable for any real subject and the control scheme must be validated using real subjects' motion capture results. The role of head position feedback to CNS in controlling STS using Cartesian control presented in Rafique et al. 14 motivated us to further study the movement using the ANFIS controller. Experimental data of STS motion are collected from healthy adult subjects using an optical motion capture system. Rafique et al. 15 give a detailed description of our experimental work. This paper is organized as follows: First, we provide the details of the experimental setup and data collection of STS motion. Next, we discuss the human biomechanical model in STS perspective, conduct forward and inverse kinematic analyses, and propose the task-space training (TST) algorithm as a development tool for the ANFIS controller. Next, we simulate each subject's STS motion and compare them with experimental results. Finally, we discuss the validity of the proposed design methodology for its physiological relevance to the STS maneuver.

Methodology and implementation
Workflow. Initially, a general human biomechanical model based on anatomical data from Iqbal and Pai 16 is realized in the SimMechanics environment of Simulink/MATLAB. Using the TST algorithm, a generalized ANFIS is trained to estimate appropriate joint angles and to control physiologically relevant STS motion. Motion control is carried out by tracking the head position trajectory only, without any other measurements.
Later, we collected experimental data of STS transfer from seven subjects. Each subject's physical parameters (mass and height) are converted into body segment parameter (BSP) values using weighting-coefficient anthropometry. The segment values are then used to scale the human model and to customize the ANFIS controller for each subject.
STS motion in the sagittal plane is recorded using reflective markers and infrared cameras. Experimental (marker) data are imported into the MoCap toolbox 17 to calculate joint positions, joint angles, and head position trajectories during the STS trial.
STS motion of all subjects is simulated using (1) general ANFIS controller and (2) custom ANFIS controller. The two sets of simulations are compared with experimental results (Figure 1).

Motion capture
Owing to the diverse nature of applications like sports coaching, animation, academics, and biomechanical analysis, motion capture is an active area of research. van der Kruk and Reijne 18 provide comprehensive coverage of the methods available. Of these, marker-based motion capture is regarded as one of the most accurate and has been used extensively for biomechanical modeling. 19,20 Experimental setup. Experimental data of sit-to-stand (STS) transfer were collected at the Biomechanics Lab of Riphah International University. Seven healthy subjects (five males and two females; age: 22 ± 0.81 years; mass: 72.58 ± 11.61 kg; height: 1.70 ± 0.04 m) were selected for data collection of STS motion. The subjects had no history of movement disorder. They provided their informed consent under the Ethics Committee of Riphah International University.
Experiment protocol. Subjects completed the STS task using an armless chair 49 cm from the ground. To collect data in the sagittal plane, three spherical reflective markers were attached to the left side of each segment (foot, shank, thigh, and trunk). Because skin movement and loose-garment artifacts complicate segment and joint position assessment, the marker sets were mounted on rigid rulers attached to each segment. One marker was attached to the top of the head using a hairband. Motion capture was done using four infrared Flex 3 cameras by OptiTrack. The data were recorded at 100 Hz using OptiTrack Motive 2.0.1 data acquisition software. Each subject completed multiple STS trials, all in a single session. Each trial began with the subject seated in the chair, arms crossed over the chest. The trial started with a verbal command of "stand", and data were then recorded for approximately 4 s. Afterwards, the subject was asked to sit again and the trial was repeated (Figure 2).
Motion capture equipment. For two-dimensional (2D) motion capture of STS maneuver we used the guidelines for three-dimensional (3D) motion capture. 21,22 We used a multiple-camera system, along with spherical markers to ensure better visibility and reliable data reconstruction by the system (Figure 3).
Data collection and analysis tools. Each marker was manually numbered in the captured data file. Markers were then grouped into segments. Segment labels, too, were assigned manually in Motive Edit mode for each trial. Motive 2.0.1 generates motion capture data in .tak and .c3d file formats. For data analysis, we used the MoCap toolbox, a freely available motion data analysis toolbox that works seamlessly with MATLAB.

MoCap data analysis
Motion data in .c3d format were imported into the MATLAB MoCap toolbox for analysis. A total of 13 markers were used for motion capture. Marker positions were converted into joint positions, and the angular position of each joint in every frame was calculated. Similarly, using the marker on the head, the head position trajectory was constructed. Marker data and joint data were used to animate the STS transfer of the subjects (Figure 4). Data for subject 5 were corrupted and hence rejected. We used the seat-off time as the start of STS and whole-body motion termination as the end of STS motion. The ensemble averages of the head position and the ankle, knee, and hip joint trajectories of all six subjects are shown in Figure 5. Standard deviation curves (broken lines) show the amount of inter-subject variation.
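The marker-to-angle conversion and %STS-cycle normalization described above can be sketched as follows. This is our own illustration, not the MoCap toolbox API: the three-point angle convention and the function names are assumptions.

```python
import math

def joint_angle(a, b, c):
    """Interior angle (degrees) at joint b formed by points a-b-c in the
    sagittal plane, e.g. the knee angle from hip, knee, and ankle
    positions derived from marker data."""
    ux, uy = a[0] - b[0], a[1] - b[1]
    vx, vy = c[0] - b[0], c[1] - b[1]
    cosang = (ux*vx + uy*vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def to_sts_cycle(samples, n=101):
    """Linearly resample one trial onto a 0-100 %STS-cycle grid so that
    trials of different duration can be averaged point by point."""
    m = len(samples) - 1
    out = []
    for i in range(n):
        pos = i * m / (n - 1)
        lo = min(int(pos), m - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
    return out
```

With every trial resampled to the same grid, the ensemble average and standard deviation of Figure 5 are simple point-wise statistics across subjects.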

Anthropometric conversion
Subjects' physical parameters (mass and height), shown in Table 1, are used to calculate the BSP. An extensive literature is available on methods of anthropometric conversion; Riemer and Hsiao-Wecksler 23 provide a comprehensive review discussing the advantages and limitations of each technique. We have used the weighting coefficient method described in Winter, 24 which is widely accepted in the research community. For brevity, representative data for one of the seven subjects are presented in Table 2. General anthropometric parameters are borrowed from Iqbal and Pai, 16 and have been used extensively in studies on STS motion. 5,8,25,26

Analytical modeling framework
The general human biomechanical model. A general four-link rigid body human model is used to simulate STS motion, as shown in Figure 6. It has 3 degrees of freedom (DoFs). The four links are the foot, shank, thigh, and upper body, the last of which we treat as a single link called head-arm-trunk (HAT). A triangular base of support represents the foot fixed on the ground. Since the key movements of joints and limbs during STS take place in the sagittal plane only, we limit our model to planar (2D) motion in the Cartesian plane. All joints are revolute (hinge-like), and the model is an open-chain mechanism with an actuator at each of the three joints. θ1, θ2, and θ3 represent the ankle, knee, and hip joint positions, respectively. We refer to the shank, thigh, and HAT as links l1, l2, and l3, respectively. (X, Y) is the head position and (x, y) the hip position in Cartesian coordinates. φ is the head orientation in the World frame {W}.
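The weighting-coefficient conversion used above reduces to scaling total body mass and stature by tabulated fractions. A minimal sketch follows; the coefficient values in the usage example are illustrative placeholders in the style of Winter's tables, not the values used in this study, and should be verified against the original source.

```python
def body_segment_parameters(mass_kg, height_m, mass_frac, len_frac):
    """Weighting-coefficient anthropometry: scale a subject's total
    mass and stature into per-segment (mass, length) estimates."""
    return {seg: (mass_frac[seg] * mass_kg, len_frac[seg] * height_m)
            for seg in mass_frac}

# Illustrative coefficients only (verify against Winter's tables):
mass_frac = {"shank": 0.0465, "thigh": 0.100, "HAT": 0.678}
len_frac = {"shank": 0.246, "thigh": 0.245, "HAT": 0.288}
bsp = body_segment_parameters(72.58, 1.70, mass_frac, len_frac)
```

The resulting per-segment values are what scale the human model and feed the TST algorithm for each subject.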
Based on forward and inverse kinematic analyses of the model, a dataset of joint angles corresponding to a range of head position trajectories is generated. Head position, in turn, is a function of segment lengths of the human model. The dataset is used to train, test, and validate ANFIS controllers.
Forward kinematics analysis. We map the joint space (θn) into the Cartesian space (x, y, φ) using forward kinematics. 27 φ is the orientation of a point in the Cartesian plane with respect to the World reference {W}. To determine the head position (X, Y), the set of kinematic equations is

X = l1 c1 + l2 c12 + l3 c123
Y = l1 s1 + l2 s12 + l3 s123

where c1 stands for cos(θ1), c12 for cos(θ1 + θ2), s1 for sin(θ1), and so on. Also

φ = θ1 + θ2 + θ3

where φ is the orientation of the HAT (or head) with respect to the x-axis. The generalized coordinate in compact notation is p = (X, Y, φ).
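The forward kinematic map above can be written directly as code; a minimal sketch, assuming angles in radians and segment lengths in meters:

```python
import math

def forward_kinematics(theta, lengths):
    """Head position (X, Y) and HAT orientation phi of the planar
    three-link chain (shank l1, thigh l2, HAT l3)."""
    t1, t2, t3 = theta
    l1, l2, l3 = lengths
    X = l1*math.cos(t1) + l2*math.cos(t1 + t2) + l3*math.cos(t1 + t2 + t3)
    Y = l1*math.sin(t1) + l2*math.sin(t1 + t2) + l3*math.sin(t1 + t2 + t3)
    return X, Y, t1 + t2 + t3  # phi is the sum of the joint angles
```

For example, a fully vertical posture (θ1 = π/2, θ2 = θ3 = 0) places the head at X = 0, Y = l1 + l2 + l3.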

Inverse kinematics analysis.
To solve the inverse kinematics (IK) problem, p is first used to find a unique hip position (x, y), reducing the problem from four links to three. To find the hip position, the hip joint angle constraint 0 ≤ θ3 ≤ π is imposed. The solution then simplifies to

x = X − l3 cos φ
y = Y − l3 sin φ

Using algebraic manipulation, the three joint angles inferred from the head position are

c2 = (x² + y² − l1² − l2²) / (2 l1 l2)
θ2 = atan2(√(1 − c2²), c2)
θ1 = atan2(y, x) − atan2(l2 sin θ2, l1 + l2 cos θ2)
θ3 = φ − θ1 − θ2

Kinematic constraints of the human model. Determining an accurate range of joint angle trajectories during STS is difficult owing to differing experimental conditions, joint motion profiles (angle constraints), and link lengths (link-length constraints). 28 This variety is also evident from our experimental findings in Figure 5, even though the number of subjects is small. The TST algorithm works efficiently for any range of angular constraints. Segment lengths also impose a constraint on the determination of the head position subspace.
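The closed-form IK solution can be sketched as follows; this is a numerical illustration assuming a reachable target, with the arccos branch chosen so the knee angle stays in [0, π]:

```python
import math

def inverse_kinematics(X, Y, phi, lengths):
    """Recover (theta1, theta2, theta3) from the head pose (X, Y, phi)
    of the planar three-link chain."""
    l1, l2, l3 = lengths
    x = X - l3*math.cos(phi)          # strip the HAT link -> hip position
    y = Y - l3*math.sin(phi)
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2)
    c2 = max(-1.0, min(1.0, c2))      # guard against numerical drift
    t2 = math.acos(c2)                # branch with t2 in [0, pi]
    t1 = math.atan2(y, x) - math.atan2(l2*math.sin(t2), l1 + l2*math.cos(t2))
    return t1, t2, phi - t1 - t2
```

Composing this with the forward map returns the original joint angles, which is the identity mapping the ANFIS scheme later emulates.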
Determination of head position subspace. Head position during STS is a subspace of head positions during all possible human body movements with stationary feet.
To determine the reachable positions of the head during STS, joint and link-length constraints are imposed on the human model. In addition, we impose a constraint on the head orientation, φ. A dataset is generated that determines all possible head positions for all possible combinations of joint angles, head orientations, and segment lengths. Figure 7 shows the two subspaces.

Joint angle estimation scheme
Our scheme uses the head position (X, Y) as reference input to estimate the joint angles (θ1, θ2, θ3) using three ANFIS, which we will refer to as a single ANFIS. The angle positions are commands to the human biomechanical model to rotate the three joints in the sagittal plane. The combination of the three joint movements thus yields the required head position in Cartesian coordinates. The complete scheme, shown in Figure 8, gives an identity mapping.
Task-space training (TST) algorithm. We propose an algorithm to generate training and validation datasets based on the task space of STS transfer:
1. Determine the constraints ln, θn, and φ for the individual human model, n = 1, 2, 3.
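The dataset-generation idea behind TST can be sketched as a constrained sweep of the joint space. This is our own illustration of the concept, not the published algorithm: the grid resolution, constraint bounds, and function name are assumptions.

```python
import math
from itertools import product

def tst_dataset(lengths, theta_ranges, phi_bounds, steps=8):
    """Sweep admissible joint angles on a grid, keep poses whose HAT
    orientation satisfies the phi constraint, and record
    (X, Y) -> (theta1, theta2, theta3) training pairs."""
    l1, l2, l3 = lengths

    def grid(lo, hi):
        return [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]

    data = []
    for t1, t2, t3 in product(*(grid(lo, hi) for lo, hi in theta_ranges)):
        phi = t1 + t2 + t3
        if not (phi_bounds[0] <= phi <= phi_bounds[1]):
            continue  # head-orientation constraint violated
        X = l1*math.cos(t1) + l2*math.cos(t1 + t2) + l3*math.cos(phi)
        Y = l1*math.sin(t1) + l2*math.sin(t1 + t2) + l3*math.sin(phi)
        data.append((X, Y, t1, t2, t3))
    return data
```

Feeding subject-specific segment lengths into such a sweep is what customizes the training set, and hence the ANFIS, per subject.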

Development of the ANFIS controllers
For the n DoF system, we develop a set of n ANFIS controllers to control n joints individually. The scheme is given in Figure 9.
We apply head position X, Y data at the inputs for fuzzification. Layer 1 comprises 2k membership functions (MFs), k for each of the inputs X and Y. The generalized bell-shaped MFs for input X are the nonlinear functions

μ_Ai(X) = 1 / (1 + |(X − c_i)/a_i|^(2b_i)), i = 1, …, k

and the MFs for input Y are given analogously, where {a_i, b_i, c_i} are premise parameters that define the shape of the 2k bell functions. Initial premise parameter values are arbitrary, such that the MFs are distributed uniformly over the input space. This arrangement divides the input space into k² uniformly distributed subspaces, each governed by one fuzzy rule. Layer 2 provides the firing strength of each rule

w_j = μ_A(X) · μ_B(Y), j = 1, …, k²

Layer 3 provides the normalized firing strength

w̄_j = w_j / Σ_j w_j

Layer 4 gives the contribution of each rule to the output

w̄_j f_j = w̄_j (p_j X + q_j Y + r_j)

where {p_j, q_j, r_j} is the consequent parameter set. At the output of the last layer, all consequents are added to give the final result

θ_n = Σ_j w̄_j f_j

where θn is the ankle, knee, or hip angle for n = 1, 2, 3, respectively, corresponding to the X, Y head position. The ANFIS scheme uses Sugeno reasoning, in which the output is a pre-defuzzified (crisp) number obtained from the sum of k² linear equations; for learning and weight generation this is very efficient compared with Mamdani reasoning.
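The five layers above amount to a short forward pass; a minimal sketch of a first-order Sugeno evaluation, where the parameter layout (one premise triple per MF, one consequent triple per rule) is an assumption for illustration:

```python
def gbell(x, a, b, c):
    """Generalized bell membership function (layer 1)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(X, Y, prem_x, prem_y, consequents):
    """First-order Sugeno forward pass with k MFs per input: the k*k
    rules fire with strength mu_Ai(X)*mu_Bj(Y), each rule output is
    p*X + q*Y + r, and the crisp angle is the normalized weighted sum."""
    mu_x = [gbell(X, *p) for p in prem_x]
    mu_y = [gbell(Y, *p) for p in prem_y]
    w = [mx * my for mx in mu_x for my in mu_y]      # layer 2: firing strengths
    total = sum(w)
    w_bar = [wj / total for wj in w]                 # layer 3: normalization
    f = [p*X + q*Y + r for p, q, r in consequents]   # layer 4: rule outputs
    return sum(wb * fj for wb, fj in zip(w_bar, f))  # layer 5: crisp output
```

Training then adjusts the premise and consequent parameters; only the evaluation path is shown here.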
We now develop the ANFIS controllers as first-order Sugeno models, using the technique described in the literature. 10–12

Training and optimizing ANFIS controllers. During training (supervised learning), the mean square error (MSE) between desired and output values is calculated and plotted for every epoch. The plots in Figure 10 help determine the optimal number of epochs corresponding to the lowest error. Training a controller beyond these values results in over-fitting and degraded performance. Table 3 shows the final values of the ANFIS parameters.
Validation of controllers. Using the controllers trained for minimum RMS error, we check the ANFIS on validation data for generalization of the STS movement. The validation datasets [Xf, Yf, θnp]n comprise 287 fictitious head positions Xf, Yf and predicted joint angles θnp. These datasets are independent of the training and test datasets and hence can be relied upon to validate the ANFIS controllers. The generated output angles θng are compared with the predicted angles θnp; Figure 11 shows the error plot between the two datasets.
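The validation comparison between generated and predicted angles reduces to a root-mean-square error over the dataset; a minimal sketch:

```python
import math

def rmse(generated, predicted):
    """Root-mean-square error between generated and predicted joint
    angles; low values on a held-out validation set indicate the
    controller generalizes beyond its training data."""
    assert len(generated) == len(predicted)
    return math.sqrt(
        sum((g - p) ** 2 for g, p in zip(generated, predicted))
        / len(generated)
    )
```

Because the validation points are independent of training and test data, a low RMSE here is evidence of learning rather than memorization.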
Features of ANFIS controllers. Table 3 shows various features of the ANFIS controllers. The three controllers operate on the same input (X, Y) and generate independent ankle, knee, and hip joint angles. Figure 12 shows surface plots of the three controllers for instantaneous head positions X (m) and Y (m). The 3D plots give all possible combinations of ankle, knee, and hip joint angles corresponding to all possible head positions during STS motion. Figure 12 hence gives the neuro-fuzzy mapping of the head-position subspace shown in Figure 7.

Head position trajectory generation
For the general human model, the head position reference trajectory is generated using an unforced state-space model proposed in Mughal and Iqbal. 29 Analytically generated head trajectories are shown in Figure 13 (left).

Simulations and results
General human model and general ANFIS control: First, the general human model is controlled by the general ANFIS, using the analytically generated head position trajectories shown in Figure 13 (left). The joint angles measured in the simulation are plotted in Figure 13 (right). The %STS cycle is used to normalize the time taken by different subjects (in both experimental and simulated motion) to complete STS, as a standard procedure. 15,19,20

Custom human models and general ANFIS control: BSP data of the subjects are used to customize subject-specific human models. Subject-specific head position trajectories are used as reference inputs in simulations. The general ANFIS is used to simulate each subject's STS motion.
Custom human models and custom ANFIS control: BSP data of each subject are used to generate a TST dataset from which custom ANFIS controllers are designed. The custom ANFIS controls each custom human model. The average of six subjects' simulated motion, along with 1 SD, is plotted in Figure 14.
A comparison of motion control by the general and customized ANFIS is given in Figure 15 in terms of the error between experimental and simulated motions.

Discussion
This study proposes a modeling framework to evaluate the role of the head position trajectory, as a slow-dynamics process in the CNS, in carrying out STS motion. The CNS is modeled by the ANFIS controller, whose inference mechanism generates the appropriate joint angles needed to acquire the head position trajectory associated with STS. Our previous work 14,25,26 and some work from the literature 5,8,16 were based on the same analytical human model (realized in mathematical or simulation frameworks) with different combinations of measurements, feedbacks, and controllers. The current scheme mainly utilizes feed-forward compensation. We carried out the analytical design in the first phase to relate and compare the current study with previous work. Using a well-defined human model and simulation results from previous studies helped us design and fine-tune an ANFIS controller that could produce comparable results. As a standard procedure, 19,20,30 we later validated our modeling and control framework with laboratory data as well. The experimental STS data were captured using four OptiTrack Flex 3 cameras and thirteen spherical reflective markers on four segments of each of the seven subjects. The marker data were recorded in the OptiTrack Motive environment, imported into MATLAB, and analyzed using the MoCap toolbox. At least three STS trials of each subject were taken. The motion was first simulated using marker data, as shown in Figure 4 (left). Simulation helped check the data for missing markers and frames; the missing data were reconstructed using interpolation. The simulation also helped determine the start and end of STS motion in all trials, and the data were trimmed and normalized in terms of the %STS cycle. The marker data were then converted into six-joint data, as shown in Figure 4 (right). The experimental model based on joint data closely resembles the analytical model depicted in Figure 6. The joint data are the source of segment and joint angle information.
Head marker trajectories in Figure 5 (left) were used as a reference to generate the general head position trajectory analytically, shown in Figure 13 (left). The simulated head trajectory closely matches the experimental trajectory.
To develop the general ANFIS controllers, each of the three analytically generated datasets [X, Y, θn]n is bifurcated into training and test datasets, each comprising 324 I/O data points. Initially, all three ANFIS controllers are trained for various numbers of MFs, starting from 3 and upwards, with the number of epochs varied between 10 and 50. Figure 10 shows a comparison between the training and test error plots of the ANFIS controllers. Although the training error curves show better convergence, the test error plots are considered the true measure of model performance. 10 The ANFIS controllers are then trained with the optimal number of MFs and epochs and validated using the validation dataset. The error values obtained in the validation step are very low, as shown in Figure 11, indicating good learning by the controllers. The surface plots in Figure 12 relate to the head position subspace in Figure 7. The ANFIS controllers can provide suitable angle commands over the much wider range of angles for which they were trained, which makes them flexible and robust for various STS patterns.
The ANFIS controller is then customized for each subject using the BSP data and the TST algorithm. Using the same subject's head position trajectory constructed from experimental data, subject-specific STS motion is controlled as shown in Figure 14. Figure 15 compares the errors between experimental and simulated trajectories: solid curves represent the error between experimental motion and the general ANFIS control simulation, while dashed curves show the error between experimental motion and the custom ANFIS control simulations. The plots show that subject-specific tuning improves ANFIS control of the STS motion compared with the general ANFIS scheme and matches the experimental results more closely. Figure 16 shows snapshots of motion captured from real subjects (left) and the same motion reconstructed in simulation (right). The different phases of the synthesized motion show that the control scheme generates an STS motion that is physiologically relevant and close to human-like motion. An overall comparison of the performance of the two control schemes is given in Table 4. ANFIS performance can be further improved in two stages: first, the simulation can be run with subject-specific joint initial conditions; second, for each subject the ANFIS controller may be optimized for the number of membership functions and training epochs and tested for minimum error. For now, we have used a scheme that is less time-consuming due to its non-iterative nature and even then shows good results.
Accurate human-CNS modeling and a detailed understanding of human motion can have significant impacts on a host of domains:
1. In computer graphics, this scheme can be extended to autonomously generate realistic motion for virtual characters. Instead of providing joint trajectories for detailed motion, only head position trajectories need be assigned to make virtual characters follow a required path.
2. The inherent reasoning in the inference mechanism of ANFIS makes it a natural choice for humanoid robots in autonomous and artificial intelligence-based applications.
3. Our scheme may find application in the rehabilitation of patients with physical impairments, the training of athletes, and the design of machines for physical therapy and sports training. In rehabilitation, a force augmentation or therapy mechanism would help a patient infer and actuate joint-level motions for any motion tasked at a higher level.

Conclusion
A modeling framework to emulate the role of the head position trajectory in physiologically relevant STS motion control by the CNS is presented. The slow dynamics in the CNS regarding the STS motion control strategy is hypothesized as an inference mechanism that generates the appropriate joint angles corresponding to the required head position. The study contributes to the knowledge base by proposing a system that performs the following:
1. Synthesizes human motion using a high-level task control framework, for which low-level motion control is generated automatically.
2. Validates a 2D biomechanical modeling scheme based on the weighting coefficient method for inference of BSP using only the mass and height of the subjects. The modeling scheme is validated using kinematic analyses of simulated and motion capture data of real subjects. Our study also suggests a set of protocols specifically for 2D motion capture.
3. Proposes the TST algorithm to create a task space for ANFIS training. The analytically trained ANFIS is robust enough to simulate real subjects' STS motion. Our scheme provides a further improvement in motion control by subject-specific tuning of the biomechanical models and ANFIS controllers.
4. Resolves the redundancy inherent in a high-level task framework by implementing kinematic model constraints through ANFIS training. The ANFIS inference ensures physiologically relevant STS motions and keeps joints from hitting their limits. Low errors between experimental and simulated motions support the validity of the modeling framework.
In future work, we intend to extend the ANFIS training algorithm from rigid-body kinematics to elastic body links to better match subject-specific anthropometry.