In virtual environments (VEs), users can explore a large virtual scene through viewpoint manipulation with a head-mounted display (HMD) and movement gains combined with redirected walking techniques. Existing redirection methods and viewpoint manipulations are effective in the horizontal direction; however, they do little to convey immersion in the vertical direction. To improve the immersion of upslope walking, this study presents a virtual climbing system based on passive haptics.
This virtual climbing system uses the tactile feedback provided by sponges, a commonly used flexible material, to simulate tactile sensations on the soles of the user's feet. In addition, the visual stimulus of the HMD, the tactile feedback of the flexible material, and the user's walking motion in the VE, combined with redirection techniques, are all adopted to enhance the user's perception of the VE. In the experiments, a physical space with a hard, flat floor and three types of sponges with thicknesses of 3, 5, and 8 cm were used.
We recruited 40 volunteers for these experiments, and the results showed that, within a certain range, a thicker flexible material makes it more difficult for users to roam and walk.
The virtual climbing system can enhance users' perception of upslope walking in a VE.
Redirected jumping (RDJ) allows users to explore virtual environments (VEs) naturally by scaling a small real-world jump to a larger virtual jump through virtual camera motion manipulation, thereby addressing the problem of limited physical space in VR applications. Previous RDJ studies have mainly focused on detection threshold estimation. However, the effect that the VE or the user's self-representation (SR) has on the perception or performance of RDJ remains unclear.
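The scaling described above can be sketched as a horizontal translation gain applied to the tracked jump displacement. The function name and the gain value below are illustrative assumptions, not details from the study:

```python
def apply_translation_gain(real_start, real_end, gain):
    """Scale the real-world jump displacement (on the ground plane)
    by `gain` to obtain the virtual displacement rendered through
    the HMD's camera motion."""
    dx = real_end[0] - real_start[0]
    dz = real_end[1] - real_start[1]
    return (real_start[0] + gain * dx, real_start[1] + gain * dz)

# A 1.0 m real jump rendered as a 1.3 m virtual jump (gain = 1.3):
virtual_end = apply_translation_gain((0.0, 0.0), (1.0, 0.0), 1.3)
```

With a gain of 1.0 the virtual jump matches the real one; gains above or below 1.0 lengthen or shorten the rendered jump, and detection threshold studies estimate how far the gain can deviate from 1.0 before users notice.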
In this paper, we report experiments to measure the perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) of redirected forward jumping under six different combinations of VE (low and high visual richness) and SRs (invisible, shoes, and human-like).
Our results indicated that the detection threshold ranges for horizontal translation gains were significantly smaller in the VE with high visual richness than in the VE with low visual richness. When different SRs were applied, our results did not show significant differences in detection thresholds, but actual jumping distances were longer with the invisible body than with the other two SRs. In the high visual richness VE, the preparation time for jumping with a human-like avatar was significantly longer than with the other SRs. Finally, some correlations were found between perception and physical performance measures.
All these findings suggest that both VE and SRs influence users' perception and performance in RDJ and must be considered when designing locomotion techniques.
The redirected jumping (RDJ) technique is a new locomotion method that saves physical tracking space and enhances users' body movement experience in virtual reality. A previous study discussed the range of imperceptible manipulation gains for RDJ in an empty virtual environment (VE).
In this study, we conducted three tasks to investigate the influence of alley width on the detection threshold of jump redirection in a VE.
The results demonstrated that the imperceptible distance gain range in RDJ was not associated with alley width, whereas the imperceptible height and rotation gain ranges were related to alley width.
We preliminarily summarized the relationship between occlusion distance and the manipulation ranges of the three gains in a complex environment, and provided a guiding principle for choosing the three gains in RDJ according to the occlusion distance in such an environment.
When a user walks freely in an unknown virtual scene searching for multiple dynamic targets, the lack of a comprehensive understanding of the environment can hinder the execution of virtual reality tasks. Previous studies have assisted users with auxiliary tools, such as top-view maps or trails, and with exploration guidance, for example, paths generated automatically from the user's location and important static spots in the virtual scene. However, in some virtual reality applications, when the scene contains complex occlusions and the user cannot obtain any real-time position information about the dynamic targets, such assistance does not help the user complete the task more effectively.
We design virtual camera priority-based assistance to help users search for dynamic targets efficiently. Instead of forcing users toward destinations, we provide an optimized instant path that guides them, when they ask for help, to places where they are more likely to find dynamic targets. We assume that a certain number of virtual cameras are fixed in the virtual scene to obtain extra depth maps, which capture the depth information of the scene and the locations of the dynamic targets. Our method automatically analyzes the priority of these virtual cameras, chooses a destination, and generates an instant path to assist the user in finding the dynamic targets. The method is suitable for various virtual reality applications because it requires no manual supervision or input.
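A minimal sketch of the priority step, under the assumption that a camera's priority grows with the number of dynamic targets it currently sees and shrinks with its distance from the user. The scoring rule and data layout are illustrative, not the paper's actual formula:

```python
import math

def choose_destination(cameras, user_pos):
    """Pick the position of the highest-priority virtual camera.

    cameras: list of dicts with 'pos' (x, z) and 'targets_seen',
    the number of dynamic targets visible in that camera's depth map.
    """
    def priority(cam):
        dist = math.dist(cam['pos'], user_pos)
        # More visible targets and a shorter walk both raise priority.
        return cam['targets_seen'] / (1.0 + dist)
    return max(cameras, key=priority)['pos']

cams = [
    {'pos': (10.0, 0.0), 'targets_seen': 1},
    {'pos': (2.0, 0.0), 'targets_seen': 1},
]
dest = choose_destination(cams, (0.0, 0.0))  # nearer camera wins here
```

An instant path from the user's position to `dest` would then be generated by any obstacle-aware path planner; the sketch covers only the destination choice.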
A user study was designed to evaluate the proposed method. The results indicate that, compared with three conventional navigation methods, including the top-view method, our method helps users find dynamic targets more efficiently: it reduces task completion time, reduces the number of resets, increases the average distance between resets, and lowers user task load.
We presented a method for improving dynamic-target search efficiency in virtual scenes through virtual camera priority-based path guidance. Compared with three conventional navigation methods, including the top-view method, this method helps users find dynamic targets more effectively.
Accurate motion tracking in head-mounted displays (HMDs) has been widely used in immersive VR interaction technologies. However, tracking the head motion of users at all times is not always desirable. During a session of HMD usage, users may make scene-irrelevant head rotations, such as adjusting the head position to avoid neck pain or responding to distractions from the physical world. To the best of our knowledge, this is the first study that addresses the problem of scene-irrelevant head movements.
We trained a classifier to detect scene-irrelevant motions using temporal sequences of eye-head coordination information. To make use of the detection results, we propose a technique that suspends motion tracking in the HMD when scene-irrelevant motion is detected.
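The suspension step can be sketched as a per-frame gate on the viewpoint update. The classifier output is reduced to a boolean here, and the function name is an assumption rather than the paper's API:

```python
def update_virtual_yaw(virtual_yaw, head_yaw_delta, is_scene_irrelevant):
    """Apply the tracked head rotation to the virtual camera only when
    the motion is classified as scene-relevant; otherwise the viewpoint
    is frozen and the rotation is ignored."""
    if is_scene_irrelevant:
        return virtual_yaw          # suspend tracking: view stays put
    return virtual_yaw + head_yaw_delta

yaw = 90.0
yaw = update_virtual_yaw(yaw, 15.0, is_scene_irrelevant=True)   # ignored
yaw = update_virtual_yaw(yaw, 15.0, is_scene_irrelevant=False)  # applied
```

In practice the same gating would apply to all rotation axes, and the boolean would come from the eye-head coordination classifier evaluated over a sliding temporal window.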
Experimental results demonstrate that the scene relevancy of head movements can be detected from eye-head coordination information, and that ignoring scene-irrelevant head motions in HMDs improves the continuity of the user experience without increasing sickness or breaking immersion.