
2021, Vol. 3, No. 6    Publish Date: 2021-12

Editorial

Locomotion perception and redirection

2021, 3(6) : 1-2

DOI:10.3724/SP.J.2096-5796.2021.03.06

Article

Virtual climbing: An immersive upslope walking system using passive haptics

2021, 3(6) : 435-450

DOI:10.1016/j.vrih.2021.08.008

Background
In virtual environments (VEs), users can explore a large virtual scene through viewpoint manipulation in a head-mounted display (HMD) and through movement gains combined with redirected walking techniques (a minimal gain sketch follows this abstract). Existing redirection methods and viewpoint operations are effective in the horizontal direction; however, they cannot give participants a sense of immersion in the vertical direction. To improve the immersion of upslope walking, this study presents a virtual climbing system based on passive haptics.
Methods
This virtual climbing system uses the tactile feedback provided by sponges, a commonly used flexible material, to simulate the tactile sensation at the user's soles. The visual stimulus of the HMD, the tactile feedback of the flexible material, and the user's walking in the VE, combined with redirection techniques, are all adopted to enhance the user's perception in the VE. In the experiments, a physical space with a hard, flat floor and three types of sponges with thicknesses of 3, 5, and 8 cm were used.
Results
We recruited 40 volunteers for these experiments. The results showed that, within a certain range, a thicker flexible material makes it more difficult for users to roam and walk.
Conclusion
The virtual climbing system can enhance users' perception of upslope walking in a VE.
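The gains mentioned in the background are not specified in this abstract. As an illustrative aid only, the sketch below shows one common way a translation gain and a vertical offset could be combined to render upslope walking from tracked head positions; the function name, gain value, and slope angle are assumptions, not the authors' implementation.

```python
import numpy as np

def upslope_viewpoint(prev_pos, curr_pos, virtual_pos,
                      translation_gain=1.2, slope_deg=15.0):
    """Illustrative viewpoint update for upslope walking (y is the up axis).

    The tracked horizontal displacement (x, z) is scaled by a translation
    gain, and a vertical offset proportional to the horizontal distance
    travelled is added so the viewpoint rises as if climbing a slope.
    """
    delta = np.asarray(curr_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    horizontal = delta[[0, 2]] * translation_gain
    rise = np.linalg.norm(horizontal) * np.tan(np.radians(slope_deg))
    new_pos = np.asarray(virtual_pos, dtype=float)
    new_pos[[0, 2]] += horizontal
    new_pos[1] += rise
    return new_pos

# A 0.5 m real step forward becomes 0.6 m of virtual travel plus ~0.16 m of rise.
print(upslope_viewpoint((0, 1.7, 0), (0, 1.7, 0.5), (10, 1.7, 10)))
```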

Effects of virtual environment and self-representations on perception and physical performance in redirected jumping

2021, 3(6) : 451-469

DOI:10.1016/j.vrih.2021.06.003

Background
Redirected jumping (RDJ) allows users to explore virtual environments (VEs) naturally by scaling a small real-world jump to a larger virtual jump through virtual camera manipulation, thereby addressing the problem of limited physical space in VR applications (a minimal gain sketch follows this abstract). Previous RDJ studies have mainly focused on detection threshold estimation; however, how the VE or the user's self-representation (SR) affects the perception and performance of RDJ remains unclear.
Methods
In this paper, we report experiments to measure the perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) of redirected forward jumping under six different combinations of VE (low and high visual richness) and SRs (invisible, shoes, and human-like).
Results
Our results indicated that the detection threshold ranges for horizontal translation gains were significantly smaller in the VE with high visual richness than in the VE with low visual richness. Different SRs did not lead to significant differences in detection thresholds, but actual jumping distances were longer with the invisible body than with the other two SRs. In the high-visual-richness VE, the preparation time for jumping with a human-like avatar was significantly longer than with the other SRs. Finally, some correlations were found between the perception and physical performance measures.
Conclusions
All these findings suggest that both VE and SRs influence users' perception and performance in RDJ and must be considered when designing locomotion techniques.
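The abstract describes camera-motion manipulation during a jump but does not give the mapping. The following minimal sketch assumes separate horizontal and vertical gains applied to the head displacement relative to the take-off point; the names and gain values are illustrative, not the study's implementation.

```python
import numpy as np

def redirect_jump(takeoff_pos, tracked_pos, horizontal_gain=1.4, vertical_gain=1.0):
    """Map a tracked head position during a jump to its virtual counterpart.

    The displacement from the take-off point is split into horizontal (x, z)
    and vertical (y) components and scaled by separate gains, so a short real
    jump is rendered as a longer virtual jump.
    """
    takeoff = np.asarray(takeoff_pos, dtype=float)
    delta = np.asarray(tracked_pos, dtype=float) - takeoff
    delta[[0, 2]] *= horizontal_gain
    delta[1] *= vertical_gain
    return takeoff + delta

# A real jump landing 0.8 m ahead is rendered as a 1.12 m virtual jump.
print(redirect_jump((0, 1.7, 0), (0, 2.0, 0.8)))
```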

Redirected jumping in virtual scenes with alleys

2021, 3(6) : 470-483

DOI:10.1016/j.vrih.2021.06.004

Background
The redirected jumping (RDJ) technique is a new locomotion method that saves physical tracking space and enhances users' sense of body movement in virtual reality. A previous study discussed the range of imperceptible manipulation gains in RDJ in an empty virtual environment (VE).
Methods
In this study, we conducted three tasks to investigate the influence of alley width on the detection thresholds of jump redirection gains in a VE (a threshold-estimation sketch follows this abstract).
Results
The results demonstrated that the range of imperceptible distance gains in RDJ was not associated with alley width, whereas the ranges of imperceptible height and rotation gains were related to alley width.
Conclusions
We preliminarily summarize the relationship between the occlusion distance and the manipulation ranges of the three gains in a complex environment, and we provide a guiding principle for choosing the three gains in RDJ according to the occlusion distance in such an environment.
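Detection thresholds of this kind are commonly estimated by fitting a psychometric function to two-alternative forced-choice responses. The sketch below is a generic example of that procedure with made-up data; it is not the study's analysis, and the data, function, and threshold criteria (25%/75% points) are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(gain, pse, slope):
    """Logistic psychometric function: probability of judging the virtual jump larger."""
    return 1.0 / (1.0 + np.exp(-(gain - pse) / slope))

# Made-up data: tested distance gains and the fraction of "virtual jump was
# larger than my real jump" responses at each gain.
gains = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
p_larger = np.array([0.10, 0.25, 0.45, 0.65, 0.85, 0.95])

(pse, slope), _ = curve_fit(psychometric, gains, p_larger, p0=[1.0, 0.1])

# The 25% and 75% points are often reported as the lower and upper detection
# thresholds; gains between them tend to go unnoticed.
lower = pse + slope * np.log(0.25 / 0.75)
upper = pse + slope * np.log(0.75 / 0.25)
print(f"PSE = {pse:.2f}, imperceptible gain range: [{lower:.2f}, {upper:.2f}]")
```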

Dynamic targets searching assistance based on virtual camera priority

2021, 3(6) : 484-500

DOI:10.1016/j.vrih.2021.10.001

Background
When a user walks freely in an unknown virtual scene and searches for multiple dynamic targets, the lack of a comprehensive understanding of the environment may hinder the execution of virtual reality tasks. Previous studies have assisted users with auxiliary tools, such as top-view maps or trails, and with exploration guidance, for example, paths generated automatically from the user's location and important static spots in the virtual scene. However, in some virtual reality applications, when the scene contains complex occlusions and the user cannot obtain real-time position information about the dynamic targets, such assistance does not help the user complete the task more effectively.
Methods
We design a virtual-camera-priority-based assistance method to help users search for dynamic targets efficiently. Instead of forcing users toward fixed destinations, we provide an optimized instant path that, when they ask for help, guides them to places where they are more likely to find dynamic targets. We assume that a certain number of virtual cameras are fixed in the virtual scene to obtain extra depth maps, which capture the depth information of the scene and the locations of the dynamic targets. Our method automatically analyzes the priority of these virtual cameras, chooses a destination, and generates an instant path to help the user find the dynamic targets (a hypothetical priority sketch follows this abstract). The method requires no manual supervision or input and is therefore suitable for various virtual reality applications.
Results
A user study was designed to evaluate the proposed method. The results indicate that, compared with three conventional navigation methods such as the top-view method, our method helps users find dynamic targets more efficiently: it reduces the task completion time, reduces the number of resets, increases the average distance between resets, and lowers the user task load.
Conclusions
We presented a method for improving the efficiency of dynamic target searching in virtual scenes through virtual-camera-priority-based path guidance. Compared with three conventional navigation methods, such as the top-view method, it helps users find dynamic targets more effectively.
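The abstract does not define the priority function. As a purely hypothetical illustration, the sketch below scores each fixed virtual camera by how many dynamic targets its depth map currently shows, slightly penalized by distance from the user, and returns the best camera's position as the destination; an instant path from a standard path planner would then guide the user there. Names, fields, and weights are assumptions.

```python
import math

def choose_destination(user_pos, cameras, distance_weight=0.1):
    """Pick the position of the fixed virtual camera with the highest priority.

    cameras: list of dicts with 'pos' (x, z) and 'visible_targets' (int), i.e.
             how many dynamic targets that camera's depth map currently shows.
    The score is hypothetical: more visible targets raise the priority, and
    distance from the user lowers it slightly.
    """
    def score(cam):
        return cam["visible_targets"] - distance_weight * math.dist(user_pos, cam["pos"])

    return max(cameras, key=score)["pos"]

cameras = [
    {"pos": (2.0, 3.0), "visible_targets": 1},
    {"pos": (8.0, 1.0), "visible_targets": 3},
    {"pos": (5.0, 9.0), "visible_targets": 0},
]
print(choose_destination((0.0, 0.0), cameras))  # -> (8.0, 1.0)
```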

Detection of scene-irrelevant head movements via eye-head coordination information

2021, 3(6) : 501-514

DOI:10.1016/j.vrih.2021.08.007

Background
Accurate motion tracking in head-mounted displays (HMDs) has been widely used in immersive VR interaction technologies. However, tracking the user's head motion at all times is not always desirable: during an HMD session, users may make scene-irrelevant head movements, such as adjusting their head position to relieve neck strain or responding to distractions from the physical world. To the best of our knowledge, this is the first study to address the problem of scene-irrelevant head movements.
Methods
We trained a classifier to detect scene-irrelevant motions from temporal sequences of eye-head coordination information (an illustrative sketch follows this abstract). To investigate the usefulness of the detection results, we propose a technique that suspends motion tracking in the HMD when scene-irrelevant motions are detected.
Results/Conclusions
Experimental results demonstrate that the scene-relevance of head movements can be detected from eye-head coordination information, and that ignoring scene-irrelevant head motions in HMDs improves the continuity of the user experience without increasing sickness or breaking immersion.
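The abstract does not specify the features or the classifier. The sketch below is one plausible setup under stated assumptions: each fixed-length window of eye and head yaw velocities is summarized by a few hand-crafted features (including their correlation, since eye and head movements tend to be coordinated during scene-relevant viewing) and fed to a generic scikit-learn classifier trained on synthetic labels. Everything here, including the feature set and the data, is illustrative rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(eye_vel, head_vel):
    """Summarize one window of eye/head yaw velocities (deg/s) with simple features."""
    eye, head = np.asarray(eye_vel), np.asarray(head_vel)
    corr = np.corrcoef(eye, head)[0, 1]   # eye-head coordination cue
    return [corr, eye.std(), head.std(), np.abs(head).mean()]

# Synthetic training data: 1 = scene-irrelevant head movement, 0 = scene-relevant.
rng = np.random.default_rng(0)
windows, labels = [], []
for _ in range(200):
    head = rng.normal(0, 30, 60)                # one 60-sample head-velocity window
    relevant = rng.random() < 0.5
    eye = 0.8 * head + rng.normal(0, 5, 60) if relevant else rng.normal(0, 10, 60)
    windows.append(window_features(eye, head))
    labels.append(0 if relevant else 1)

clf = RandomForestClassifier(random_state=0).fit(windows, labels)
# At run time, head tracking could be suspended whenever a window is classified as 1.
```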