Comparison of Augmented Reality Rearview and Radar Head-Up Displays for Increasing Situation Awareness During Exoskeleton Operation

Exoskeletons may reduce the incidence of work-related musculoskeletal disorders, but current full-body powered exoskeletons impose loading, motion, and balance requirements on users that may increase mental workload, reduce spatial awareness, and raise the risk of collisions, negating the safety benefits. This extended abstract presents an experimental study comparing three types of augmented reality (AR) head-up displays for improving spatial awareness in a simulated warehouse task, with the goal of reducing the risk of collision with pedestrians behind the user. The experiment includes three levels of display abstraction (rearview camera, overhead radar, and ring radar) and three levels of display elevation (15°, 45°, and 90°). Results of the ongoing data analysis will include the amount of time an (experimenter) pedestrian waits behind the (participant) transporter, who should sidestep to let the pedestrian pass, as well as subjective usability and situation awareness ratings. This work will contribute to research on augmenting spatial awareness and reducing the collision risk of exoskeleton users.


Background
Exoskeletons have the potential to alleviate physical demands on industrial workers and reduce the incidence of work-related musculoskeletal disorders while augmenting workers' strength and endurance (Gonsalves et al., 2021; Kim et al., 2019; Kim et al., 2021; Sawicki et al., 2020). However, current full-body powered exoskeletons can be unwieldy due to differences in loading, motion, and balance compared to natural movement, increasing the likelihood and severity of collisions (Park et al., 2022; Yang et al., 2008). Collision incidence may be further increased by the cognitive demands of exoskeleton control (Fox et al., 2019; Mitchell, 2000; Yee et al., 2007), resulting in lower situation awareness (SA; Endsley, 1995; Lau & Boring, 2017; Wickens, 2002).
Visual solutions such as mirrors, cameras, and radar displays have proven effective at increasing SA and reducing collisions in other contexts (Denford et al., 2004; Kuwana et al., 2013; Mazzae et al., 2008). Further, recent developments in computer vision and augmented reality (AR) display technology can be applied to support the spatial awareness of exoskeleton users (Kulkarni et al., 2020). However, these visual solutions differ in their level of information abstraction, their effectiveness varies with display elevation angle, and evaluation in the context of exoskeleton use is sparse.
To determine which levels of abstraction and elevation angles are most effective at reducing collisions and increasing SA for exoskeleton users, we are conducting a study in which human participants perform simulated industrial tasks while detecting hazards using AR displays at three abstraction levels (rearview camera, overhead radar, and ring radar) and three display elevation angles (15°, 45°, and 90°), allowing us to test for main and interaction effects of abstraction level and elevation angle (Figures 1 and 2).

Experimental Procedure
The ongoing study is collecting data from twenty participants, who first receive an introduction to the study, provide consent, and complete demographic questionnaires. Data collection then begins with the participant standing in front of a station at one end of the experimental space (Figure 3). The first task is to use a handheld scanner at the station to register barcodes printed on the cargo, as listed on the monitor. The total number of completed scans and the number of errors are recorded as performance metrics, with a time limit of eight seconds for the scanning task. During this scanning task, an (experimenter) pedestrian may approach the participant from behind on one of the outer tracks and then pause behind the participant at a 45-degree angle, just outside their peripheral field of view. Participants should notice the pedestrian using the AR display and move with the cargo to the other side to let the pedestrian pass. The time the pedestrian spends in the collision zone of the cargo will be measured to determine display effectiveness. To limit participant anticipation, the ordering of pedestrian approaches from either side is randomly predetermined, with half of the tasks containing no approach.
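As a minimal illustration of this predetermined randomization (the abstract does not describe the actual scheduling script, so the function name, block length, and seeding are assumptions), a short Python sketch could generate a per-block approach schedule in which half of the trials contain no approach and the remainder are split between the two sides:

```python
import random

def approach_schedule(n_trials: int = 10, seed: int | None = None) -> list:
    """Generate a randomized pedestrian-approach schedule for one block.

    Half of the trials contain no approach; the remaining trials are
    split as evenly as possible between left- and right-side approaches.
    """
    rng = random.Random(seed)
    n_none = n_trials // 2              # half of the trials: no approach
    n_sides = n_trials - n_none         # remaining trials: an approach occurs
    sides = ["left"] * (n_sides // 2) + ["right"] * (n_sides - n_sides // 2)
    schedule = [None] * n_none + sides
    rng.shuffle(schedule)               # predetermine order to limit anticipation
    return schedule

print(approach_schedule(10, seed=1))    # e.g., ['right', None, 'left', ...]
```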
After eight seconds of the scanning task, participants move the cargo to the station across the aisle at the other end of the experimental space. While crossing the aisle, participants must walk slowly within the stepping zones marked on the floor, which are spaced closer together than the pedestrian's stepping zones to ensure the participant walks slowly enough for the pedestrian to catch up to them in transit. As the pedestrian approaches from behind on the outer stepping zones, the participant should notice the pedestrian on the AR display and step aside with the cargo for the pedestrian to pass. The pedestrian approach is also randomly predetermined for this task. The time the pedestrian spends in the collision zone of the cargo is measured to determine display effectiveness, and the transport time is recorded as a performance metric.
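Time in the collision zone could be derived from timestamped zone entry and exit events. The sketch below assumes a hypothetical logging format, since the abstract does not specify how the measure is instrumented:

```python
def time_in_collision_zone(events: list[tuple[float, str]]) -> float:
    """Sum the pedestrian's total time inside the cargo collision zone.

    `events` is a time-ordered list of (timestamp_s, event) pairs, where
    event is "enter" or "exit" of the collision zone. This event format
    is a hypothetical logging convention, not one specified by the study.
    """
    total, entered_at = 0.0, None
    for t, event in events:
        if event == "enter" and entered_at is None:
            entered_at = t
        elif event == "exit" and entered_at is not None:
            total += t - entered_at
            entered_at = None
    return total

# Example: the pedestrian dwells 2.5 s, then 1.0 s, in the collision zone.
print(time_in_collision_zone(
    [(3.0, "enter"), (5.5, "exit"), (9.0, "enter"), (10.0, "exit")]))  # 3.5
```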
After crossing the aisle and placing the cargo on the station, the pedestrian experimenter returns to their starting position, and the participant starts the next trial with another barcode scanning task and cargo transport task at the other station. Each trial is expected to take about 22 seconds: eight seconds for the scanning task and fourteen seconds for the transport task.

Experimental Design
The experiment adopts an unbalanced nested two-factor within-subjects design. The independent variables are the abstraction level and elevation angle of the rearview display. The no-display and rearview camera conditions of the abstraction level factor are not nested with display angle. Three display angles of 90°, 45°, and 15° are nested within each of the overhead radar and ring radar abstraction levels. The experiment thus has a total of eight conditions. Participants will experience each condition as a block of ten trials, and the order of presentation of the conditions (or blocks) is randomized.
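To make the nesting concrete, the following sketch enumerates the eight conditions and randomizes block order per participant. The structure follows the design described above, while the function and seeding scheme are hypothetical:

```python
import random

# The eight display conditions: "no display" and "rearview camera" are not
# crossed with elevation angle; the two radar displays each nest three angles.
CONDITIONS = (
    [("no display", None), ("rearview camera", None)]
    + [(display, angle)
       for display in ("overhead radar", "ring radar")
       for angle in (90, 45, 15)]
)

def block_order(participant_id: int) -> list:
    """Randomize the presentation order of condition blocks per participant."""
    rng = random.Random(participant_id)  # hypothetical: seed by participant ID
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

assert len(CONDITIONS) == 8
print(block_order(participant_id=1))
```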

Measures
The experiment is collecting seven measures in three categories:

Scanning task performance
1. Time in Collision Zone: the pedestrian's duration in the cargo collision zone. As a surrogate measure, a low time indicates better collision detection and avoidance.
2. Number of Completed Scans: the number of barcodes registered with the scanner within the eight-second time limit of each trial, after which the scanning task is complete. This will be measured for each block.
3. Number of Scanning Errors: the total number of incorrectly registered barcodes per block.
Transport task performance
4. Transport Time: the time from the last barcode scan of the preceding scanning task to the first scan of the next scanning task.
5. Time in Collision Zone: the pedestrian's duration in the cargo collision zone during transport. As a surrogate measure, a low time indicates better collision detection and avoidance.

Usability
6. The Situation Awareness Rating Technique (SART; Taylor, 1990) will be administered after each trial block to obtain the participant's subjective SA rating.
7. A modified version of the System Usability Scale (SUS; Brooke, 1996) will be administered for subjective usability ratings on a five-point scale from 1 (strongly disagree) to 5 (strongly agree), with statements modified for relevance to the displays in this study.
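For reference, SART ratings are conventionally summarized with the combined score below (Taylor, 1990), where U is understanding of the situation, D is demand on attentional resources, and S is supply of attentional resources; the abstract does not state which scoring rule will be used:

```latex
% Standard SART combined situation awareness score (Taylor, 1990)
\[
  \mathrm{SA} = U - (D - S)
\]
```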

Hypotheses
We hypothesize main effects in which higher abstraction levels (H1) and lower elevation angles (H2) will increase participant performance because of reduced clutter and reduced mental rotation, respectively. We also hypothesize an interaction effect (H3): the ring radar will benefit more than the overhead radar at lower angles, because distance judgments with the overhead radar are affected by perspective foreshortening. The analysis and results will be presented at the conference.
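The abstract does not specify the planned statistical analysis. One plausible approach for an unbalanced nested within-subjects design is a linear mixed-effects model treating each of the eight conditions as a level of a single factor with a random intercept per participant; the sketch below uses hypothetical column and file names with the statsmodels library:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial block, with columns
# participant, display ("no display", "rearview camera", "overhead radar",
# "ring radar"), angle (90/45/15 for the radar displays, blank otherwise),
# and time_in_zone (seconds the pedestrian spent in the collision zone).
df = pd.read_csv("blocks.csv")

# Because the design is unbalanced (the no-display and camera conditions
# carry no angle), one option is to collapse display and angle into a
# single eight-level condition factor.
df["condition"] = [
    d if pd.isna(a) else f"{d} {int(a)}°"
    for d, a in zip(df["display"], df["angle"])
]

model = smf.mixedlm("time_in_zone ~ C(condition)", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```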

Conclusion
This study is expected to contribute to AR display research on augmenting spatial awareness in industrial work settings involving exoskeletons. The experimental results will indicate which combination of abstraction level and display angle is most effective for avoiding collisions with pedestrians to the rear of exoskeleton users. The findings will inform designers in developing displays that help exoskeleton users avoid collisions, thereby contributing to occupational safety. This research may generalize to fields requiring similar spatial displays, such as emergency response and telerobotics, or inspire further research on AR display design for situation awareness and navigation. Future studies may add display modalities (e.g., auditory or tactile), alerts for pedestrians, and testing with participants operating a full-body powered exoskeleton.

Figure 1. AR visualization concept images at each abstraction level and elevation angle. Warehouse image adapted from Reycenas (2017).

Figure 2. User point-of-view of the visualizations, including the rearview camera, overhead radar, and ring radar, rendered in the Unity engine.