Brain-computer interfaces (BCIs) make it possible to bypass the peripheral nervous system and communicate directly with surrounding devices. BCI-based navigation has progressed from exploratory prototype paradigms in virtual environments to accurately carrying out an operator's locomotion intentions through a powered wheelchair or mobile robot in the real environment. This paper gives a brief overview of BCI navigation applications used in both real and virtual environments over the past 20 years. A horizontal comparison is conducted between the various paradigms applied to BCI and their distinctive signal processing methods. In view of the shift in control mode from synchronous to asynchronous, the development of navigation applications in virtual environments is also reviewed. The tension between high-level and low-level commands is taken as the main thread in reviewing the two major applications of BCI navigation in the real environment: mobile robots and unmanned aerial vehicles. Finally, with a view to extending BCI navigation to scenarios outside the laboratory, research challenges are discussed in detail, including human factors in the interaction design of navigation applications and the feasibility of hybrid BCIs for BCI navigation.
Eye-tracking technology for mobile devices has made considerable progress. However, owing to limited computing capacity and the complexity of the usage context, traditional image-feature-based techniques cannot extract features accurately, which degrades performance. This paper proposes a novel approach that fuses appearance-based and feature-based eye-tracking methods. Face and eye regions are detected to extract features, which are then used as input to the appearance model to locate feature points. These feature points are used to generate feature vectors, such as the corner-center-to-pupil-center vector, from which the gaze fixation coordinates are calculated. To find the feature vectors with the best performance, we compared different vectors under different image resolutions and illumination conditions; the results showed an average gaze fixation accuracy of 1.93 degrees of visual angle when the image resolution was 96×48 pixels and the light source illuminated the eye from the front. Compared with current methods, our method improves gaze fixation accuracy and is more usable.
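The corner-center-to-pupil-center mapping described above can be illustrated with a minimal sketch: the feature vector is the offset from the eye-corner center to the pupil center, and a calibration step fits a mapping from feature vectors to screen coordinates. The affine least-squares fit below is an assumption for illustration; the abstract's actual appearance model is more elaborate.

```python
import numpy as np

def gaze_from_vectors(corner_centers, pupil_centers, calib_features, calib_targets):
    """Map corner-center-to-pupil-center feature vectors to screen coordinates.

    An affine model fitted by least squares stands in (as an assumption)
    for the calibration stage; calib_features/calib_targets are paired
    feature vectors and known on-screen fixation points.
    """
    # Feature vector: offset from eye-corner center to pupil center.
    feats = np.asarray(pupil_centers) - np.asarray(corner_centers)
    # Fit screen = W @ [feature, 1] on the calibration pairs.
    X = np.hstack([np.asarray(calib_features), np.ones((len(calib_features), 1))])
    W, *_ = np.linalg.lstsq(X, np.asarray(calib_targets), rcond=None)
    Xq = np.hstack([feats, np.ones((len(feats), 1))])
    return Xq @ W
```

In practice the mapping is often a higher-order polynomial per eye, but the calibration-then-regression structure is the same.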
Currently, many simulator systems for medical procedures are under development. These systems can provide new solutions for training, planning, and testing medical practices, improve performance, and optimize examination time. To make the best of these technologies, certain premises must be followed and applied to the model under development, such as usability, control, graphical realism, and interactive and dynamic gamification. This study presents a simulation system of a medical examination procedure in the nasal cavity for training and research, using an accurate computed tomography (CT) scan of a patient as a reference. The pathologies used to guide the development of the system are highlighted. Furthermore, an overview of current studies is given, covering bench medical mannequins, 3D printing, animals, hardware, software, and software that uses hardware to enhance user interaction. Finally, a comparison with similar state-of-the-art works is made. The main result of this work is a set of interactive gamification techniques that offer an immersive simulated examination experience in which pathologies present in the nasal cavity, such as turbinate hypertrophy, septal deviation, adenoid hypertrophy, nasal polyposis, and tumors, are identified.
Visualizing a hierarchical dataset is an important and useful technique in many real-world situations. File systems, stock markets, and other hierarchically structured datasets can use this technique to better convey the structure and dynamic variation of the data. Compared with diagram-based methods, traditional space-filling (squarified) methods have the advantages of compact space usage and explicit display of node sizes, and research on space-filling methods follows two main directions: static quality and dynamic stability. We present a treemapping method based on balanced partitioning that, in one variant, yields very good aspect ratios; in another, good temporal coherence for dynamic data; and in a third, a good compromise between these two aspects. To lay out a treemap, we divide all children of a node into two groups. These groups are further divided until groups of single elements remain; the groups are then combined to form the rectangle representing the parent node. This process is performed for each layer of a given hierarchical dataset. In one variant of our partitioning we sort the child elements first and build two groups of as equal size as possible from big and small elements (size-balanced partition), which achieves good aspect ratios for the rectangles but less good temporal coherence. The second variant takes the sequence of children as given and creates groups as equal as possible without sorting (sequence-based, a good compromise between aspect ratio and temporal coherence). The third variant always splits the children into two groups of equal cardinality regardless of their sizes (number-balanced, with worse aspect ratios but good temporal coherence). We evaluate the aspect ratios and dynamic stability of our methods and propose a new metric that measures the visual difference between rectangles as they move to represent temporally changing inputs. We demonstrate that our treemapping via balanced partitioning outperforms state-of-the-art methods on a number of real-world datasets.
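The recursive two-group layout described above can be sketched as follows. The greedy rule used here for the size-balanced variant (sort descending, add each child to the lighter group) is an assumption for illustration, not necessarily the authors' exact partition rule.

```python
def size_balanced_split(items):
    """Partition (name, size) items into two groups of roughly equal total size.

    Greedy heuristic (an assumption): visit items largest-first and put
    each one into the currently lighter group.
    """
    groups = ([], [])
    sums = [0.0, 0.0]
    for name, size in sorted(items, key=lambda it: -it[1]):
        i = 0 if sums[0] <= sums[1] else 1
        groups[i].append((name, size))
        sums[i] += size
    return groups, sums

def layout(items, rect):
    """Recursively lay out (name, size) items in rect = (x, y, w, h)."""
    x, y, w, h = rect
    if len(items) == 1:
        return {items[0][0]: rect}
    (g1, g2), (s1, s2) = size_balanced_split(items)
    frac = s1 / (s1 + s2)
    if w >= h:  # split along the longer side to keep aspect ratios tame
        r1 = (x, y, w * frac, h)
        r2 = (x + w * frac, y, w * (1 - frac), h)
    else:
        r1 = (x, y, w, h * frac)
        r2 = (x, y + h * frac, w, h * (1 - frac))
    out = layout(g1, r1)
    out.update(layout(g2, r2))
    return out
```

The sequence-based and number-balanced variants differ only in `size_balanced_split`: the former splits the given child order at the point of most equal total size, the latter simply halves the child list by count.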
Background Social distancing is an effective way to reduce the spread of the SARS-CoV-2 virus. Many students and researchers have already attempted to use computer vision technology to automatically detect human beings in the field of view of a camera and help enforce social distancing. However, because of the lockdown measures currently in place in several countries, validating computer vision systems on large-scale datasets is a challenge. Methods In this paper, a new method is proposed for generating customized datasets and validating deep-learning-based computer vision models using virtual reality (VR) technology. Using VR, we modeled a digital twin (DT) of an existing office space and used it to create a dataset of individuals in different postures, dresses, and locations. To test the proposed solution, we implemented a convolutional neural network (CNN) model for detecting people in a limited-sized dataset of real humans and a simulated dataset of humanoid figures. Results We detected the number of persons in both the real and synthetic datasets with more than 90% accuracy, and the actual and measured distances were significantly correlated (r=0.99). Finally, we used intermediate-layer- and heatmap-based data visualization techniques to explain the failure modes of the CNN. Conclusions A new application of DTs is proposed to enhance workplace safety by measuring the social distance between individuals. Using the proposed pipeline together with a DT of the shared space to visualize both environmental and human-behavior aspects preserves the privacy of individuals and improves the latency of such monitoring systems, because only the extracted information is streamed.
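Once people have been detected and their positions projected onto the floor plane, the distancing check itself reduces to pairwise distances. A minimal sketch (the 2 m threshold and the `positions` format are assumptions, not details from the abstract):

```python
import itertools
import math

def distancing_violations(positions, min_dist=2.0):
    """Flag person pairs closer than min_dist (metres).

    positions: floor-plane (x, y) coordinates, assumed already recovered
    from the camera detections (e.g., via a homography to the floor).
    Returns (index_i, index_j, distance) for each violating pair.
    """
    violations = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        d = math.dist(p, q)
        if d < min_dist:
            violations.append((i, j, d))
    return violations
```

Streaming only these (index, index, distance) triples, rather than video frames, is what keeps such a monitoring pipeline privacy-preserving and low-latency.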
Background In virtual environments (VEs), users can explore a large virtual scene through the viewpoint operation of a head-mounted display (HMD) and movement gains combined with redirected walking technology. Existing redirection methods and viewpoint operations are effective in the horizontal direction; however, they cannot help participants experience immersion in the vertical direction. To improve the immersion of upslope walking, this study presents a virtual climbing system based on passive haptics. Methods The virtual climbing system uses the tactile feedback provided by sponges, a commonly used flexible material, to simulate the tactile sensation at the soles of a user's feet. In addition, the visual stimulus of the HMD, the tactile feedback of the flexible material, and the user's walking in the VE, combined with redirection technology, are all adopted to enhance the user's perception in the VE. In the experiments, a physical space with a hard, flat floor and three types of sponges with thicknesses of 3, 5, and 8 cm were used. Results We recruited 40 volunteers for these experiments, and the results showed that a thicker flexible material makes it more difficult, within a certain range, for users to roam and walk. Conclusion The virtual climbing system can enhance users' perception of upslope walking in a VE.
Background Electroencephalography (EEG) has gained popularity in various types of biomedical applications as a signal source that can be easily acquired and conveniently analyzed. However, owing to the complex electrical environment of the scalp, EEG is often polluted by diverse artifacts, of which electromyography artifacts are the most difficult to remove. In particular, for ambulatory EEG devices with a restricted number of channels, dealing with muscle artifacts is a challenge. Methods In this study, we propose a simple but effective scheme that combines the singular spectrum analysis (SSA) and canonical correlation analysis (CCA) algorithms for the single-channel problem and then extend it to the few-channel case by adding channel combining and dividing operations. Results We evaluated the proposed framework on both semi-simulated and real-life data and compared it with several state-of-the-art methods. The results demonstrate the framework's superior performance in both the single-channel and few-channel cases. Conclusions This promising approach, given its effectiveness and low time cost, is suitable for real-world biomedical signal processing applications.
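The SSA half of such a pipeline can be sketched for a single channel: embed the signal in a Hankel trajectory matrix, take the SVD, and reconstruct components by diagonal averaging. The subsequent CCA stage that actually separates the muscle-artifact components is not shown; window and rank choices here are illustrative assumptions.

```python
import numpy as np

def ssa_components(x, window, rank):
    """Decompose a single-channel signal into `rank` SSA components.

    Steps: Hankel embedding -> SVD -> rank-1 reconstruction terms ->
    diagonal (Hankel) averaging back to time series.
    """
    n = len(x)
    k = n - window + 1
    # Each column is a length-`window` lagged slice of the signal.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for r in range(rank):
        m = s[r] * np.outer(u[:, r], vt[r])  # rank-1 trajectory matrix
        comp = np.zeros(n)
        counts = np.zeros(n)
        for j in range(k):                   # average along anti-diagonals
            comp[j:j + window] += m[:, j]
            counts[j:j + window] += 1
        comps.append(comp / counts)
    return comps
```

Keeping all components and summing them reconstructs the original signal; an artifact-removal scheme would instead pass the components to CCA and discard those identified as muscle activity.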
Background Computer-generated animations (CGA) applied to 3D city models (3DCM) can serve as powerful tools to support urban decision making. This leads to a new paradigm based on procedural modeling that allows known urban structure to be integrated. This paper introduces a new workflow for developing high-quality approximations of urban models in a short time and for integrating facilities imported from other cities into a given city model following specific generation rules. The workflow thus provides a very simple way to observe, study, and simulate the implementation of models already developed in other cities in a city where they do not yet exist; examples of such models include all types of mobility systems and urban infrastructure. All of this gives us a perception of the environmental impact that these kinds of decisions can produce in the real world and supports simple simulations to determine the changes that may occur in flows of people, traffic, or anything else.
Background Accurate motion tracking in head-mounted displays (HMDs) has been widely used in immersive VR interaction technologies. However, tracking the head motion of users at all times is not always desirable. During a session of HMD usage, users may make scene-irrelevant head rotations, such as adjusting the head position to avoid neck pain or responding to distractions from the physical world. To the best of our knowledge, this is the first study that addresses the problem of scene-irrelevant head movements. Methods We trained a classifier to detect scene-irrelevant motions using temporal eye-head-coordination information sequences. To investigate the usefulness of the detection results, we propose a technique to suspend motion tracking in HMDs when scene-irrelevant motions are detected. Results/Conclusions Experimental results demonstrate that the scene-relevancy of movements can be detected using eye-head coordination information, and that ignoring scene-irrelevant head motions in HMDs improves user continuity without increasing sickness or breaking immersion.
Background Redirected jumping (RDJ) allows users to explore virtual environments (VEs) naturally by scaling a small real-world jump to a larger virtual jump through virtual camera motion manipulation, thereby addressing the problem of limited physical space in VR applications. Previous RDJ studies have mainly focused on detection threshold estimation. However, the effect the VE or self-representation (SR) has on the perception or performance of RDJs remains unclear. Methods In this paper, we report experiments measuring perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) during redirected forward jumping under six combinations of VE (low and high visual richness) and SR (invisible, shoes, and human-like). Results Our results indicated that the detection threshold ranges for horizontal translation gains were significantly smaller in the VE with high rather than low visual richness. When different SRs were applied, our results did not suggest significant differences in detection thresholds, but we did observe longer actual jumping distances with the invisible body than with the other two SRs. In the high-visual-richness VE, the preparation time for jumping with a human-like avatar was significantly longer than that with the other SRs. Finally, some correlations were found between perception and physical performance measures. Conclusions All these findings suggest that both the VE and SRs influence users' perception and performance in RDJ and must be considered when designing locomotion techniques.
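The horizontal translation gain manipulated in these experiments can be sketched by scaling real-world displacements before applying them to the virtual camera. This is a deliberately minimal 2D illustration, not the authors' implementation, which also manipulates the camera during the jump itself.

```python
def apply_translation_gain(real_positions, gain):
    """Scale successive real-world displacements by `gain` to obtain
    virtual-camera positions.

    real_positions: tracked (x, y) positions on the floor plane.
    gain > 1 maps a small physical jump to a larger virtual one.
    """
    virtual = [real_positions[0]]
    for prev, cur in zip(real_positions, real_positions[1:]):
        dx = (cur[0] - prev[0]) * gain
        dy = (cur[1] - prev[1]) * gain
        vx, vy = virtual[-1]
        virtual.append((vx + dx, vy + dy))
    return virtual
```

Detection-threshold studies of the kind reported here estimate the range of `gain` values within which users cannot reliably notice the mismatch between physical and virtual motion.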
Background Species monitoring in mega-biodiverse environments is commonly performed using bioacoustic methodologies, since the species are more likely to be heard than seen. Furthermore, because bird vocalizations are reasonable estimators of biodiversity, their monitoring is of great importance in formulating conservation policies. However, birdsong recognition is an arduous task that requires dedicated training to master; this training is costly in time and money owing to the poor accessibility of relevant information on field trips or even in specialized databases. Immersive technology based on virtual reality (VR) and spatial audio may improve species monitoring by enhancing information accessibility, interaction, and user engagement. Methods This study used spatial audio, a Bluetooth controller, and a head-mounted display (HMD) to conduct an immersive training experience in VR. Participants moved inside a virtual world using the Bluetooth controller, with the task of recognizing targeted birdsongs. We measured recognition accuracy and user engagement according to the User Engagement Scale. Results Experimental results revealed significantly higher engagement and accuracy for participants in the VR-based training system compared with a traditional computer-based training system. All four dimensions of the User Engagement Scale received high ratings from the participants, suggesting that VR-based training provides a motivating and attractive environment for learning demanding tasks through appropriate design that exploits the sensory system and the interactivity of virtual reality. Conclusions The accuracy and engagement of the VR-based training system were rated significantly higher than those of traditional training. Future research will focus on developing a variety of realistic ecosystems and their associated birds to add newer bird species to the training system. Finally, the proposed VR-based training system must be tested with additional participants and over a longer duration to measure information recall and recognition mastery among users.
To reduce serious crashes, contemporary research leverages the opportunities provided by technology. A potentially higher added value for reducing road trauma may lie in emerging technologies such as headset-delivered virtual reality (VR). However, no study has systematically analysed the application of such VR in road safety research. Using the PRISMA protocol, our study identified 39 papers presented at conferences or published in scholarly journals. In those sources, we found evidence of VR's applicability in studies involving different road users (drivers, pedestrians, cyclists, and passengers). A number of articles were concerned with providing evidence on the potential adverse effects of VR, such as simulator sickness. Other work compared VR with conventional simulators. VR is also contributing to the emerging field of autonomous vehicles. However, few studies have leveraged the opportunities VR presents to positively influence road users' behaviour. Based on our findings, we identify pathways for future research.
Background Compared with traditional biomagnetic field detection devices, such as superconducting quantum interference devices (SQUIDs) and atomic magnetometers, only giant magnetoimpedance (GMI) sensors can be applied to unshielded human brain biomagnetic detection, and they have potential for application in next-generation wearable equipment for brain-computer interfaces (BCIs). Achieving a better GMI sensor without magnetic shielding requires maximizing the stimulation of the GMI effect and minimizing environmental noise interference. Moreover, the GMI effect stimulated in an amorphous filament is closely related to its working point, which is sensitive to both the external magnetic field and the drive current of the filament. Methods In this paper, we propose a new noise-reducing GMI gradiometer with a dual-loop self-adapting structure. Noise reduction is realized by a direction-flexible differential probe, and the dual-loop structure optimizes and stabilizes the working point by automatically controlling the external magnetic field and drive current. This dual-loop structure is fully program-controlled by a microcontroller unit (MCU), which not only simplifies the traditional constant-parameter sensor circuit, saving the time required to adjust circuit component parameters, but also improves the sensor's performance and environmental adaptability. Results In the performance test, within 2 min of self-adaptation, our sensor showed better sensitivity and a better signal-to-noise ratio (SNR) than traditional designs and achieved a background noise of 12 pT/√Hz at 10 Hz and 7 pT/√Hz at 200 Hz. Conclusion To the best of our knowledge, our sensor is the first to realize self-adaptation of both the external magnetic field and the drive current.
Background As a novel approach allowing people to communicate directly with an external device, the study of brain-computer interfaces (BCIs) has matured considerably. However, just as individuals in real-world scenarios are expected to work in groups, BCI systems should be able to replicate group attributes. Methods We propose a fourth-order-cumulant feature extraction method (CUM4-CSP) based on the common spatial patterns (CSP) algorithm. Simulation experiments on motion visual evoked potential (mVEP) EEG data verified the robustness of the proposed algorithm. In addition, to allow paradigms to be chosen freely, we adopted the mVEP and steady-state visual evoked potential (SSVEP) paradigms and designed a multimodal collaborative BCI system based on the proposed CUM4-CSP algorithm. The feasibility of the proposed multimodal collaborative framework was demonstrated with a multiplayer game-control system that simultaneously supports coordinated and competitive control of external devices by two users. To verify the robustness of the proposed scheme, we recruited 30 subjects for online game-control experiments and statistically analyzed the results. Results The simulation results show that the proposed CUM4-CSP algorithm has good noise immunity. The online experimental results indicate that the subjects could reliably perform game confrontation operations with the selected BCI paradigms. Conclusions The proposed CUM4-CSP algorithm can effectively extract features from EEG data in a noisy environment, and the proposed scheme may provide a new solution for EEG-based group BCI research.
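The CSP core on which CUM4-CSP builds can be sketched with standard second-order statistics; the fourth-order-cumulant extension described in the abstract is not shown. This is the textbook whitening-plus-diagonalization formulation, assumed here for illustration.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Classic CSP: spatial filters that maximize the variance ratio
    between two classes of (channels x samples) EEG trials.

    Returns the n_filters most discriminative filters (rows), taken
    from both ends of the eigenvalue spectrum.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))  # trace-normalized covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A there.
    evals, evecs = np.linalg.eigh(ca + cb)
    p = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, b = np.linalg.eigh(p @ ca @ p.T)
    order = np.argsort(d)[::-1]           # descending class-A variance
    w = b[:, order].T @ p                 # spatial filters as rows
    pick = list(range(n_filters // 2)) + \
           list(range(-(n_filters - n_filters // 2), 0))
    return w[pick]
```

Log-variances of trials projected through these filters are the usual CSP features; the paper's variant replaces the covariance statistics with fourth-order cumulants to gain robustness to Gaussian noise.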