Apparent Biological Motion in First and Third Person Perspective

Apparent biological motion is the perception of plausible movements when two alternating images depicting the initial and final phase of an action are presented at specific stimulus onset asynchronies. Here, we show lower subjective apparent biological motion perception when actions are observed from a first relative to a third visual perspective. These findings are discussed within the context of sensorimotor contributions to body ownership.

perceived ABM (through vs. above an obstacle) for the right index and little finger, in two separate blocks, by pressing two buttons with the middle and ring finger of the left hand. The initial and final positions of the fingers were presented for 90 ms at five stimulus onset asynchronies (SOAs: 100, 400, 700, 1,000, and 1,300 ms; Funk et al., 2005); longer SOAs gradually increased the perception of the finger moving along a trajectory above an obstacle (Figure 1(a)). Using two finger movements enabled us to verify the generalizability of the results and to describe any possible role of motor dexterity in visual perception (i.e., index finger movements are more familiar than little finger actions; Plata Bello, Modroño, Marcano, & González-Mora, 2013; Plata Bello, Modroño, Marcano, & González-Mora, 2015). Block order and response buttons were approximately counterbalanced across subjects. There were 80 trials for each block (8 trials for each SOA × Perspective combination within each finger block; 40 for 1PP, 40 for 3PP). Participants were allowed to watch the stimuli for as long as needed, and "perceived ABM" was collected (i.e., plausible ABM: "I perceived the finger as moving over the obstacle" vs. implausible ABM: "I perceived the finger as moving through the obstacle"). No visible movements of subjects' right fingers were noted during the experiment. After each block, participants verbally rated on a 7-point rating scale their agreement with a set of questions (−3 = completely disagree, 0 = neither agree nor disagree, 3 = completely agree; see Figure 1(b)) in order to control for illusory sensations over the virtual bodies.
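A minimal sketch of the resulting trial structure, assuming 8 repetitions per SOA × perspective cell within each finger block (an assumption that reproduces the stated totals of 80 trials per block and 40 per perspective; the function and field names are illustrative only):

```python
from itertools import product
import random

SOAS_MS = [100, 400, 700, 1000, 1300]   # stimulus onset asynchronies
PERSPECTIVES = ["1PP", "3PP"]           # first- vs. third-person perspective
REPETITIONS = 8                         # assumed 8 trials per SOA x perspective cell

def build_block(finger):
    """Build one finger block: 5 SOAs x 2 perspectives x 8 reps = 80 trials."""
    trials = [
        {"finger": finger, "soa_ms": soa, "perspective": pp}
        for soa, pp in product(SOAS_MS, PERSPECTIVES)
        for _ in range(REPETITIONS)
    ]
    random.shuffle(trials)  # randomize presentation order within the block
    return trials

block = build_block("index")
```

Under this assumption each block yields 80 trials, 40 per perspective and 16 per SOA, matching the counts reported above.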
Binary ABM responses were analyzed using logistic mixed-effects regression (GLMER) from the "lme4" package (Bates, Maechler, Bolker, & Walker, 2016; R Development Core Team, 2013), with Perspective, SOA, and Finger as fixed effects. Ratings were analyzed using a cumulative link mixed model (CLMM) from the "ordinal" package (Christensen, 2015), with Perspective and Finger as fixed effects. For all multilevel analyses, a by-subjects random intercept was included, and the saturated model (i.e., the model with all the available fixed parameters, factors, and interactions) was simplified by hierarchically dropping effects and interactions with p > .1. For the sake of simplicity, we report only the parameters of the final best-fitting model, selected by jointly considering the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the log-likelihood.
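The backward-simplification scheme described above (hierarchically dropping terms with p > .1, interactions before the main effects they contain) can be sketched as follows. The p-values in `P_VALUES` are invented purely to illustrate the loop, not taken from the study, and in a real lme4/ordinal analysis each dropped term would trigger a model refit rather than reusing fixed values:

```python
# Hypothetical p-values for the fixed terms of a saturated model
# (illustrative numbers only; the three-way interaction is omitted for brevity).
P_VALUES = {
    "Perspective": 0.02,
    "SOA": 0.001,
    "Finger": 0.30,
    "Perspective:SOA": 0.45,
    "Perspective:Finger": 0.04,
    "SOA:Finger": 0.15,
}

def simplify(p_values, threshold=0.1):
    """Hierarchically drop the weakest term above threshold.

    A term is only droppable if no surviving higher-order interaction
    contains it (marginality: keep main effects under retained interactions).
    """
    terms = dict(p_values)
    while True:
        droppable = {
            t: p for t, p in terms.items()
            if p > threshold and not any(
                other != t and set(t.split(":")) < set(other.split(":"))
                for other in terms
            )
        }
        if not droppable:
            return sorted(terms)
        terms.pop(max(droppable, key=droppable.get))  # drop the weakest term
```

In practice, the retained model would then be compared against alternatives on AIC, BIC, and log-likelihood, as described in the text.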
Overall, the present data indicate that ABM perception may be affected by perspective and motor dexterity. That lower ABM was experienced only for the index finger in 1PP suggests a combined role of motor familiarity (Plata Bello et al., 2013, 2015) and embodiment in ABM perception. Crucially, participants were less prone to report a plausible "above" ABM when the action was observed from a 1PP, and further studies are necessary to disentangle the role of visual perspective from body ownership and from the perceived control over movements observed from a 1PP (Tieri et al., 2015b; Wegner, Sparrow, & Winerman, 2004). Virtual reality represents a useful tool to test the role of bodily re-afferences and of the sensorimotor brain areas responsible for motion/action perception during perceptual judgments (Orgs et al., 2016).

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work received financial support from the BIAL Foundation (no. 150/14) awarded to ET. His research interests revolve around action representation and the multimodal assessment of embodiment, with particular emphasis on ownership and agency over artificial physical robots and virtual characters, using behavioural, virtual reality, and neurophysiological approaches.
Michele Scandola, PhD, is a researcher in the neuroscience field, a psychologist, a cognitive behavioural therapist candidate, and a tech enthusiast. His research interests concern body representations, in all their forms, in people with spinal cord injury.
In addition, his studies cover neuropsychology and Bayesian and mixed-model statistical procedures.
Veronica Orvalho, mother of a lovely boy and a girl, holds a PhD in Software Development (Computer Graphics) from Universitat Politècnica de Catalunya (2007), where her research centered on "Facial Animation for CG Films and Videogames". She worked for IBM and Ericsson, and for the film company Patagonik Film Argentina. She has given many workshops and has international publications related to game design and character animation at conferences such as SIGGRAPH and EUROGRAPHICS. She has received international awards for several projects: "Photorealistic facial animation and recognition", "Face Puppet", and "Face In Motion". She received the 2010 IBM Scientific Award for her work on facial rig retargeting. In 2010 she founded the Porto Interactive Center (www.portointeractivecenter.org), which hosts several international and national projects as coordinator or participant. She provides technical consulting and has participated in several productions, such as Fable 2 and The Simpsons Ride, that use her developments. She is now the founder of Didimo Inc. (http://www.mydidimo.com), which builds on her extensive experience in facial character animation, automating the creation of 3D characters for films, games, and virtual reality. Her main expertise and interests lie in developing new methods related to motion capture, geometric modeling and deformation, facial emotion synthesis and analysis, real-time animation for virtual environments, and the study of emotionally reactive avatars.
Matteo Candidi, PhD, is a research assistant at the Department of Psychology, Sapienza University of Rome, and at IRCCS Santa Lucia, Rome, Italy. His main research interests focus on embodied cognition approaches to the study of the psychological and neural correlates of body and action representation, their predictive nature, their links with emotional processing, their role in social interactions, how they shape higher-order cognitive functions and, conversely, how higher-order cognitive functions influence sensorimotor processing.