Electroencephalography (EEG) has gained popularity as a signal source for various biomedical applications because it can be acquired easily and analyzed conveniently. However, owing to the complex electrical environment of the scalp, EEG is often contaminated by diverse artifacts, of which electromyography (EMG) artifacts are the most difficult to remove. In particular, for ambulatory EEG devices with a restricted number of channels, dealing with muscle artifacts is a challenge.
In this study, we propose a novel yet simple and effective scheme that combines the singular spectrum analysis (SSA) and canonical correlation analysis (CCA) algorithms for the single-channel case, and then extend it to the few-channel case through additional channel combining and dividing operations.
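The single-channel SSA–CCA idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding window, the autocorrelation threshold used to flag broadband muscle-like sources, and the reconstruction details are all assumed parameters.

```python
import numpy as np

def ssa_decompose(x, window=20):
    """Singular spectrum analysis: embed x in a Hankel trajectory matrix,
    take the SVD, and Hankelize each rank-1 term back to a 1-D component."""
    n, k = len(x), len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Anti-diagonal averaging back to a time series of length n
        comps.append(np.array([Xi[::-1].diagonal(j).mean()
                               for j in range(-window + 1, k)]))
    return np.array(comps)  # components x samples; they sum back to x

def bss_cca(Y):
    """BSS-CCA: weights maximizing correlation between Y and its 1-sample lag."""
    A = Y[:, :-1] - Y[:, :-1].mean(axis=1, keepdims=True)
    B = Y[:, 1:] - Y[:, 1:].mean(axis=1, keepdims=True)
    Caa, Cbb, Cab = A @ A.T, B @ B.T, A @ B.T
    M = np.linalg.pinv(Caa) @ Cab @ np.linalg.pinv(Cbb) @ Cab.T
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    return vals.real[order], vecs.real[:, order]  # squared canonical corrs, weights

def remove_muscle_artifacts(x, window=20, corr_thresh=0.8):
    """Drop CCA sources with low autocorrelation (broadband, EMG-like)."""
    comps = ssa_decompose(x, window)
    vals, W = bss_cca(comps)
    S = W.T @ comps
    keep = np.sqrt(np.clip(vals, 0.0, 1.0)) > corr_thresh
    return (np.linalg.pinv(W.T) @ (S * keep[:, None])).sum(axis=0)
```

In words: SSA turns the single channel into a multichannel set of narrow-band components, CCA orders those components by temporal autocorrelation, and the least autocorrelated sources are discarded before reconstruction.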
We evaluated the proposed framework on both semi-simulated and real-life data and compared it with several state-of-the-art methods. The results demonstrate its superior performance in both the single-channel and few-channel cases.
This promising approach, based on its effectiveness and low time cost, is suitable for real-world biomedical signal processing applications.
As a novel approach that allows people to communicate directly with external devices, brain-computer interface (BCI) research has matured considerably. However, just as individuals in the real world are expected to work in groups, BCI systems should likewise be able to support group use.
We propose a fourth-order cumulant feature extraction method (CUM4-CSP) based on the common spatial patterns (CSP) algorithm. Simulation experiments on motion visual evoked potential (mVEP) EEG data verified the robustness of the proposed algorithm. In addition, to allow paradigms to be chosen freely, we adopted the mVEP and steady-state visual evoked potential (SSVEP) paradigms and designed a multimodal collaborative BCI system based on the proposed CUM4-CSP algorithm. The feasibility of the proposed multimodal collaborative framework was demonstrated with a multiplayer game control system that supports simultaneous cooperative and competitive control of external devices by two users. To verify the robustness of the proposed scheme, we recruited 30 subjects for online game control experiments and statistically analyzed the results.
The simulation results show that the proposed CUM4-CSP algorithm has good noise immunity. The online experimental results indicate that the subjects could reliably perform the competitive game operations with the selected BCI paradigms.
The proposed CUM4-CSP algorithm can effectively extract features from EEG data in a noisy environment. Additionally, the proposed scheme may provide a new solution for EEG-based group BCI research.
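The fourth-order cumulant idea above can be sketched as follows: standard CSP diagonalizes second-order covariance matrices, while a CUM4 variant replaces them with fourth-order cumulant matrices, which vanish for Gaussian data and therefore suppress Gaussian noise. The JADE-style cumulant matrix and the eigenvector selection below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cum4_matrix(X):
    """JADE-style fourth-order cumulant matrix (identity contraction):
    Q = E[||x||^2 x x^T] - C tr(C) - 2 C^2, with C the covariance.
    X: channels x samples. Q is ~0 for Gaussian data, so Gaussian noise
    contributes little to the learned filters."""
    X = X - X.mean(axis=1, keepdims=True)
    C = X @ X.T / X.shape[1]
    sq = np.sum(X * X, axis=0)                # ||x_t||^2 for each sample
    Q = (X * sq) @ X.T / X.shape[1]
    return Q - C * np.trace(C) - 2.0 * C @ C

def cum4_csp(trials_a, trials_b, n_filters=2):
    """CSP-style spatial filters from class-averaged cumulant matrices."""
    Qa = np.mean([cum4_matrix(t) for t in trials_a], axis=0)
    Qb = np.mean([cum4_matrix(t) for t in trials_b], axis=0)
    # Generalized eigenproblem Qa w = lambda (Qa + Qb) w, solved via pinv
    vals, vecs = np.linalg.eig(np.linalg.pinv(Qa + Qb) @ Qa)
    order = np.argsort(vals.real)
    picks = np.r_[order[:n_filters], order[-n_filters:]]  # extreme eigenvectors
    return vecs.real[:, picks].T                          # filters x channels
```

The extreme eigenvectors maximize the cumulant-based "energy" of one class relative to the other, mirroring how ordinary CSP picks the eigenvectors with the largest and smallest generalized eigenvalues.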
Compared with traditional biomagnetic detection devices, such as superconducting quantum interference devices (SQUIDs) and atomic magnetometers, only giant magneto-impedance (GMI) sensors can be applied to unshielded human-brain biomagnetic detection, and they show potential for next-generation wearable brain-computer interface (BCI) equipment. Achieving a better GMI sensor without magnetic shielding requires maximizing the stimulated GMI effect while minimizing environmental noise interference. Moreover, the GMI effect stimulated in an amorphous filament depends closely on its working point, which is sensitive to both the external magnetic field and the filament's drive current.
In this paper, we propose a new noise-reducing GMI gradiometer with a dual-loop self-adapting structure. Noise reduction is realized by a direction-flexible differential probe, and the dual-loop structure optimizes and stabilizes the working point by automatically controlling the external magnetic field and the drive current. This dual-loop structure is fully program-controlled by a microcontroller unit (MCU), which not only simplifies the traditional constant-parameter sensor circuit, saving the time required to tune circuit component parameters, but also improves the sensor's performance and environmental adaptability.
In the performance test, within 2 min of self-adaptation, our sensor showed better sensitivity and a higher signal-to-noise ratio (SNR) than traditional designs and achieved low background noise floors at both 10 Hz and 200 Hz.
To the best of our knowledge, our sensor is the first to realize self-adaptation of both the external magnetic field and the drive current.
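Although the dual-loop control runs on an MCU in hardware, the self-adaptation logic itself is a simple optimization loop. The sketch below illustrates one plausible scheme, a perturb-and-observe coordinate ascent over the bias field and drive current; the function names, step sizes, and search strategy are hypothetical, not taken from the paper.

```python
def self_adapt(read_figure_of_merit, set_bias, set_drive,
               bias=0.0, drive=5.0, step_b=0.1, step_d=0.1, iters=50):
    """Greedy perturb-and-observe search for the GMI working point.

    read_figure_of_merit() returns a scalar to maximize (e.g. sensitivity);
    set_bias()/set_drive() stand in for the MCU's DAC outputs (hypothetical)."""
    set_bias(bias); set_drive(drive)
    best = read_figure_of_merit()
    for _ in range(iters):
        improved = False
        for sign in (+1, -1):                      # probe the bias field
            set_bias(bias + sign * step_b); set_drive(drive)
            if (s := read_figure_of_merit()) > best:
                best, bias, improved = s, bias + sign * step_b, True
        for sign in (+1, -1):                      # probe the drive current
            set_bias(bias); set_drive(drive + sign * step_d)
            if (s := read_figure_of_merit()) > best:
                best, drive, improved = s, drive + sign * step_d, True
        if not improved:                           # refine near the optimum
            step_b *= 0.5; step_d *= 0.5
    set_bias(bias); set_drive(drive)
    return bias, drive, best
```

Halving the step sizes once no probe improves the figure of merit lets the loop settle onto the working point with increasing precision, which is consistent with the paper's report of convergence within about 2 min.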
Social distancing is an effective way to reduce the spread of the SARS-CoV-2 virus. Many students and researchers have already attempted to use computer vision technology to automatically detect human beings in the field of view of a camera and help enforce social distancing. However, because of the present lockdown measures in several countries, the validation of computer vision systems using large-scale datasets is a challenge.
In this paper, a new method is proposed for generating customized datasets and validating deep-learning-based computer vision models using virtual reality (VR) technology. Using VR, we modeled a digital twin (DT) of an existing office space and used it to create a dataset of individuals in different postures, clothing, and locations. To test the proposed solution, we implemented a convolutional neural network (CNN) model for detecting people and evaluated it on a limited-size dataset of real humans and a simulated dataset of humanoid figures.
We detected the number of persons in both the real and synthetic datasets with more than 90% accuracy, and the actual and measured distances were significantly correlated (r = 0.99). Finally, we used intermediate-layer- and heatmap-based data visualization techniques to explain the failure modes of the CNN.
A new application of DTs is proposed to enhance workplace safety by measuring the social distance between individuals. The use of our proposed pipeline along with a DT of the shared space for visualizing both environmental and human behavior aspects preserves the privacy of individuals and improves the latency of such monitoring systems because only the extracted information is streamed.
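The distance-measurement step of such a pipeline can be illustrated as follows: project each detection's foot point to floor-plane coordinates via a homography, then flag pairs closer than a threshold. The box format, the homography, and the 2 m threshold are assumptions for illustration; the paper's exact calibration is not reproduced here.

```python
import numpy as np
from itertools import combinations

def foot_points(boxes):
    """Bottom-centre of each (x1, y1, x2, y2) box, used as the ground contact."""
    return np.array([((x1 + x2) / 2.0, y2) for x1, y1, x2, y2 in boxes])

def to_floor(points, H):
    """Map image points to floor-plane coordinates with a 3x3 homography H."""
    p = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def distancing_violations(boxes, H, min_dist=2.0):
    """Index pairs of detections closer than min_dist metres on the floor."""
    pts = to_floor(foot_points(boxes), H)
    return [(i, j) for i, j in combinations(range(len(pts)), 2)
            if np.linalg.norm(pts[i] - pts[j]) < min_dist]
```

Because only these index pairs and distances need to leave the camera, this step is also where the privacy and latency benefits mentioned above arise: no video frames have to be streamed.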
In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds.
It can automatically organize the entities of a scene into a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ a dynamic graph CNN (DGCNN) to capture the features of objects and their relationships in the scene. A Graph Attention Network (GAT) is introduced to exploit the latent features obtained from the initial estimation and further refine the object arrangement in the graph structure. A loss function modified from cross-entropy with variable weights is proposed to address the multi-category imbalance in object and predicate prediction.
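The abstract does not give the exact form of the variable-weight loss; a common instantiation, sketched below, scales each class's cross-entropy term by a weight that shrinks with class frequency, so rare object and predicate categories are not swamped by frequent ones. The inverse-log weighting is an assumption, not the paper's formula.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_counts):
    """Cross-entropy with per-class weights that decrease with frequency.

    logits: (n, c) raw scores; labels: (n,) integer classes;
    class_counts: (c,) training-set frequency of each class."""
    w = 1.0 / np.log(1.0 + np.asarray(class_counts, dtype=float))  # rarer -> heavier
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    rows = np.arange(len(labels))
    return float(np.mean(-w[labels] * log_p[rows, labels]))
```

In a scene graph setting the same weighting would be applied separately to the object-classification head and the predicate-classification head, each with its own class frequencies.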
Experiments reveal that the proposed approach performs favorably against state-of-the-art methods in terms of predicate classification and relationship prediction, and achieves comparable performance in object classification.
The 3D scene graph prediction approach can form an abstract description of the scene space from point clouds.