BioLogger: A wireless physiological sensing and logging system with applications in poultry science | IEEE Conference Publication | IEEE Xplore
This paper presents the design and development of BioLogger, a wireless physiological signal sensing and logging system. BioLogger can simultaneously monitor and record various types of physiological signals. Energy-saving design is incorporated into both the hardware and software design phases to prolong the lifetime of the sensor nodes. Moreover, a simple scheduler is implemented to ensure that emergency events are not missed.
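The abstract does not describe how the scheduler guarantees that emergencies are not missed; one common way to do this is a priority queue that always services emergency events before routine ones. The sketch below is a hypothetical illustration of that idea (all names are invented, not from the paper):

```python
import heapq

EMERGENCY, ROUTINE = 0, 1  # lower value = higher priority

class SimpleScheduler:
    """Pop emergency readings before routine ones (hypothetical sketch)."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority level

    def submit(self, priority, sample):
        heapq.heappush(self._queue, (priority, self._seq, sample))
        self._seq += 1

    def next_task(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = SimpleScheduler()
sched.submit(ROUTINE, "temperature reading")
sched.submit(EMERGENCY, "abnormal heart rate")
sched.submit(ROUTINE, "humidity reading")
```

Here the emergency sample jumps ahead of the routine readings that were submitted before it, while routine readings keep their submission order.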
A bidirectional hub for a programmable gain/filtering data acquisition of a low interference electroencephalogram | IEEE Conference Publication
This article presents the implementation of an electroencephalogram (EEG) system that is less susceptible to electromagnetic interference (EMI). The amplification, filtering, and A/D conversion are placed directly on each electrode. Because of this local amplification and conversion, the system becomes more resilient to EMI and the results are consequently more trustworthy. A central chip, or hub, was developed that collects data from all electrodes and sends it wirelessly to an external computer or a recording device. Since the amplification and filtering are adjustable, the hub also receives the amplification/filtering parameters from the computer and sends them to the circuits on the electrodes. This adjustment improves signal quality, since each electrode is configured individually. The article describes the protocol and circuitry of the bidirectional communication between the hub and the electrodes. The circuit was developed in Verilog and validated on a set of FPGAs, where one FPGA works as the hub and the others work as electrodes. The results show that the system works properly: the amplification/filtering is individually adjustable, giving the system great flexibility.
ARM/THUMB code compression for embedded systems | IEEE Conference Publication
The use of code compression in embedded systems based on standard RISC instruction set architectures (ISAs) has been shown in the past to be of benefit in reducing overall system cost. The 16-bit THUMB ISA from ARM Ltd has a significantly higher code density than the original 32-bit ARM ISA. In this paper we propose a new memory compression architecture that employs a lossless data compression algorithm to achieve a further size reduction of around 20% on THUMB code. We show that in some applications the decompression can be performed in software on the main system processor without excessive processing-time overhead.
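The paper's specific compression algorithm is not given in the abstract; as an illustration of the idea, the sketch below uses DEFLATE (via Python's `zlib`) as a stand-in lossless codec on a hypothetical code image, and measures the size reduction it achieves:

```python
import zlib

def thumb_compress(image: bytes) -> bytes:
    """Losslessly compress a (hypothetical) THUMB code image."""
    return zlib.compress(image, level=9)

def thumb_decompress(packed: bytes) -> bytes:
    """Recover the original image exactly -- lossless round trip."""
    return zlib.decompress(packed)

def size_reduction(image: bytes) -> float:
    """Fraction of space saved, e.g. 0.20 for a 20% reduction."""
    return 1.0 - len(thumb_compress(image)) / len(image)
```

Real code images compress far less than the repetitive test data below, and a production scheme must also support random access at block granularity, which plain DEFLATE over the whole image does not.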
USB Bulk Transfers between a PC and a PIC Microcontroller for Embedded Applications | IEEE Conference Publication
The universal serial bus (USB) has become the most popular communication interface between personal computers and embedded devices because of its ease of use, low cost, data bandwidth, and availability in most computing systems. Microchip's PIC18Fx550 microcontrollers have an embedded USB controller that allows rapid development of USB-enabled devices. This paper presents a system for bidirectional communication between a personal computer and an embedded device using a PIC18F2550 microcontroller in USB bulk transfer mode. Three different host-side drivers were compared to determine the advantages of each one. Finally, a case study of a remote home lighting control system is presented.
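The paper's wire protocol is not specified in the abstract; as an illustration of how a host might frame payloads for such a bulk pipe, the sketch below packs a hypothetical 64-byte report (command byte, length byte, zero-padded payload) with Python's `struct`. The layout and the command code are invented for this example:

```python
import struct

# Hypothetical 64-byte bulk-endpoint report: 1-byte command,
# 1-byte payload length, up to 62 payload bytes (zero-padded).
REPORT_FMT = "<BB62s"
CMD_SET_LIGHT = 0x10  # illustrative command code, not from the paper

def build_report(cmd: int, payload: bytes) -> bytes:
    """Frame one host-to-device bulk report."""
    if len(payload) > 62:
        raise ValueError("payload too large for one bulk report")
    return struct.pack(REPORT_FMT, cmd, len(payload), payload.ljust(62, b"\x00"))

def parse_report(report: bytes):
    """Recover (command, payload) from a framed report."""
    cmd, length, payload = struct.unpack(REPORT_FMT, report)
    return cmd, payload[:length]
```

Fixing the report to the endpoint's maximum packet size keeps each transfer a single USB packet, which simplifies the firmware on the device side.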
Epileptic Seizure Detection in EEG Signals Using a Unified Temporal-Spectral Squeeze-and-Excitation Network | IEEE Journals & Magazine
The intelligent recognition of epileptic electroencephalogram (EEG) signals is a valuable tool for epileptic seizure detection. Recent deep learning models fail to consider spectral- and temporal-domain representations simultaneously, which may omit the nonstationary or nonlinear properties of epileptic EEGs and consequently yield suboptimal recognition performance. In this paper, an end-to-end EEG seizure detection framework is proposed using a novel channel-embedding spectral-temporal squeeze-and-excitation network (CE-stSENet) with a maximum mean discrepancy-based information-maximizing loss. Specifically, the CE-stSENet first integrates multi-level spectral and multi-scale temporal analysis simultaneously. Hierarchical multi-domain representations are then captured in a unified manner with a variant of the squeeze-and-excitation block. The classification net finally recognizes epileptic EEGs based on the features extracted by the previous subnetworks. In particular, because the scarcity of seizure events results in a limited data distribution and severe overfitting, the CE-stSENet is coordinated with a maximum mean discrepancy-based information-maximizing loss to mitigate the overfitting problem. Competitive experimental results on three EEG datasets against state-of-the-art methods demonstrate the effectiveness of the proposed framework in recognizing epileptic EEGs, indicating its powerful capability for automatic seizure detection.
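The building block the network is named after, squeeze-and-excitation, can be sketched in a few lines: global-average-pool each channel ("squeeze"), pass the channel summary through a small FC-ReLU-FC-sigmoid bottleneck ("excitation"), and rescale each channel by its gate. This is the generic SE block, not the paper's channel-embedding variant, and the weights here are random rather than learned:

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Minimal squeeze-and-excitation over a (channels, time) feature map.

    x: (C, T); w1: (C//r, C) bottleneck weights; w2: (C, C//r) expansion weights.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = x.mean(axis=1)                          # squeeze: average over time, (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: FC-ReLU-FC-sigmoid, (C,)
    return x * s[:, None]                       # recalibrate each channel

rng = np.random.default_rng(0)
C, T, r = 8, 100, 2                             # r is the bottleneck reduction ratio
x = rng.standard_normal((C, T))
y = squeeze_excite(x, rng.standard_normal((C // r, C)), rng.standard_normal((C, C // r)))
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network learn to emphasize informative channels relative to the rest.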
Capsule Attention for Multimodal EEG-EOG Representation Learning With Application to Driver Vigilance Estimation | IEEE Journals & Magazine
Driver vigilance estimation is an important task for transportation safety. Wearable and portable brain-computer interface devices provide a powerful means for real-time monitoring of the vigilance level of drivers to help with avoiding distracted or impaired driving. In this paper, we propose a novel multimodal architecture for in-vehicle vigilance estimation from Electroencephalogram and Electrooculogram. To enable the system to focus on the most salient parts of the learned multimodal representations, we propose an architecture composed of a capsule attention mechanism following a deep Long Short-Term Memory (LSTM) network. Our model learns hierarchical dependencies in the data through the LSTM and capsule feature representation layers. To better explore the discriminative ability of the learned representations, we study the effect of the proposed capsule attention mechanism including the number of dynamic routing iterations as well as other parameters. Experiments show the robustness of our method by outperforming other solutions and baseline techniques, setting a new state-of-the-art. We then provide an analysis on different frequency bands and brain regions to evaluate their suitability for driver vigilance estimation. Lastly, an analysis on the role of capsule attention, multimodality, and robustness to noise is performed, highlighting the advantages of our approach.
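The dynamic routing iterations studied in the paper come from the capsule-network routing-by-agreement procedure. The sketch below is a heavily simplified single-output-capsule version (softmax taken over the input predictions for this one-output illustration), not the paper's multimodal architecture:

```python
import numpy as np

def squash(s: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Capsule non-linearity: shrinks a vector to length < 1, keeping direction."""
    n = np.linalg.norm(s)
    return (n ** 2 / (1.0 + n ** 2)) * s / (n + eps)

def dynamic_routing(u_hat: np.ndarray, iterations: int = 3) -> np.ndarray:
    """Route N prediction vectors u_hat, shape (N, D), to one output capsule."""
    b = np.zeros(u_hat.shape[0])            # routing logits, start uniform
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum()     # coupling coefficients (softmax)
        v = squash(c @ u_hat)               # weighted sum of predictions, squashed
        b = b + u_hat @ v                   # agreement raises a prediction's logit
    return v

rng = np.random.default_rng(4)
v = dynamic_routing(rng.standard_normal((5, 4)), iterations=3)
```

Each iteration shifts coupling weight toward predictions that agree with the current output, which is the mechanism whose iteration count the authors study.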
A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series | IEEE Journals & Magazine
Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30-s segment of the signal, based on visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance, measured with balanced accuracy, is to use 6 EEG channels with 2 EOG channels (left and right) and 3 EMG chin channels. Moreover, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals to deliver state-of-the-art classification performance with a small computational cost.
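The first layer described above is just a linear map from sensor channels to virtual channels. The sketch below shows why such a combination can raise the signal-to-noise ratio, using a fixed averaging filter (the paper learns these weights; the values here are illustrative):

```python
import numpy as np

def apply_spatial_filters(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Map C sensor channels to K virtual channels: (K, C) @ (C, T) -> (K, T)."""
    return w @ x

# Two noisy copies of the same source; averaging them halves the noise power.
rng = np.random.default_rng(1)
source = np.sin(np.linspace(0, 8 * np.pi, 3000))
x = np.stack([source + 0.5 * rng.standard_normal(3000),
              source + 0.5 * rng.standard_normal(3000)])
virtual = apply_spatial_filters(x, np.array([[0.5, 0.5]]))
```

With independent noise on each sensor, the averaged virtual channel tracks the underlying source more closely than either raw channel does, which is the effect a learned spatial filter exploits.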
Comparing the Differences in Brain Activities and Neural Comodulations Associated With Motion Sickness Between Drivers and Passengers | IEEE Journals & Magazine
It is commonly believed that passengers are more adversely affected by motion sickness than drivers. However, no study has compared the neural activities of passengers and drivers experiencing motion sickness (MS). Therefore, this study explores brain dynamics in motion sickness among passengers and drivers. Eighteen volunteers participated in a simulated driving experiment on a winding road while their subjective motion sickness levels and electroencephalogram (EEG) signals were simultaneously recorded. Independent component analysis (ICA) was employed to isolate MS-related independent components (ICs) from the EEG. Furthermore, comodulation analysis was applied to decompose the spectra of the MS-related ICs to find the spectra-specific temporally independent modulators (IMs). The results showed that passengers' alpha band (8-12 Hz) power increased in correlation with the MS level in the parietal, occipital midline, and left and right motor areas, whereas drivers' alpha band power showed relatively smaller increases. The results also indicate that the stronger activation of alpha IMs in passengers than in drivers is due to a higher degree of motion sickness. In conclusion, compared with drivers, passengers experience more conflicts among multimodal sensory systems and demand more neuro-physiological regulation.
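The key measurement in this study is power in the alpha band (8-12 Hz). A minimal way to estimate band power from an EEG trace is an FFT periodogram averaged over the band's frequency bins; the sketch below demonstrates it on a synthetic 10 Hz oscillation (sampling rate and signal are illustrative, not the study's data):

```python
import numpy as np

def bandpower(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` in the [lo, hi] Hz band (FFT periodogram)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

fs = 250.0                              # a common EEG sampling rate (assumption)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)        # synthetic 10 Hz "alpha" oscillation
alpha = bandpower(eeg, fs, 8.0, 12.0)   # alpha band, 8-12 Hz as in the study
beta = bandpower(eeg, fs, 13.0, 30.0)
```

In practice one would use a windowed estimator such as Welch's method on the MS-related independent components rather than a raw periodogram on the sensor signal.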
Decoding Imagined Speech Based on Deep Metric Learning for Intuitive BCI Communication | IEEE Journals & Magazine
Imagined speech is a highly promising paradigm due to its intuitive application and multiclass scalability in the field of brain-computer interfaces. However, optimal feature extraction and classifiers have not yet been established. Furthermore, retraining still requires a large number of trials when new classes are added. The aims of this study are (i) to increase the classification performance for imagined speech and (ii) to apply a new class to a pretrained classifier with a small number of trials. We propose a novel framework based on deep metric learning that learns distances by comparing the similarity between samples. We also applied the instantaneous frequency and spectral entropy, features used for speech signals, to electroencephalography signals recorded during imagined speech. The method was evaluated on two public datasets (6-class Coretto DB and 5-class BCI Competition DB). We achieved a 6-class accuracy of 45.00 ± 3.13% and a 5-class accuracy of 48.10 ± 3.68% using the proposed method, which significantly outperformed state-of-the-art methods. Additionally, we verified that a new class could be detected through incremental learning with a small number of trials. As a result, the average accuracy was 44.50 ± 0.26% for Coretto DB and 47.12 ± 0.27% for BCI Competition DB, similar to the baseline accuracy obtained without incremental learning. Our results show that accuracy can be greatly improved, even with a small number of trials, by selecting appropriate features from imagined speech. The proposed framework could be used directly to help construct an extensible, intuitive communication system based on brain-computer interfaces.
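One of the two speech-signal features the study transfers to EEG, spectral entropy, measures how spread out a signal's power spectrum is: a narrowband tone scores near 0, broadband noise near 1. A minimal illustrative implementation (the study's exact estimator may differ):

```python
import numpy as np

def spectral_entropy(signal: np.ndarray) -> float:
    """Normalized spectral entropy in [0, 1] of a 1-D signal."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()                        # power spectrum as a distribution
    p = p[p > 0]                               # drop empty bins before log
    return float(-(p * np.log(p)).sum() / np.log(psd.size))

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)              # narrowband: low entropy
noise = rng.standard_normal(1024)              # broadband: high entropy
```

For imagined-speech EEG, such a feature would typically be computed per channel and per short time window rather than over a whole trial.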
Inter-Subject Domain Adaptation for CNN-Based Wrist Kinematics Estimation Using sEMG | IEEE Journals & Magazine
Recently, convolutional neural networks (CNNs) have been widely investigated to decode human intentions from surface electromyography (sEMG) signals. However, a pre-trained CNN model usually suffers severe degradation when tested on a new individual, mainly due to domain shift, where the characteristics of the training and testing sEMG data differ substantially. To enhance the inter-subject performance of CNNs in wrist kinematics estimation, we propose a novel regression scheme for supervised domain adaptation (SDA) that effectively reduces domain shift effects. Specifically, a two-stream CNN with shared weights is established to exploit source and target sEMG data simultaneously, such that domain-invariant features can be extracted. To tune the CNN weights, both regression losses and a domain discrepancy loss are employed: the former enable supervised learning, while the latter minimizes the distribution divergence between the two domains. In this study, eight healthy subjects were recruited to perform wrist flexion-extension movements. Experimental results illustrate that the proposed regression SDA outperformed fine-tuning, a state-of-the-art transfer learning method, in both single-single and multiple-single scenarios of kinematics estimation. Unlike fine-tuning, which suffers from catastrophic forgetting, regression SDA maintains much better performance in the original domains, which boosts model reusability among multiple subjects.
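A common choice for the domain discrepancy loss in such schemes is the maximum mean discrepancy (MMD) between source and target feature distributions (the abstract does not name the paper's exact loss, so this is an illustrative stand-in). A biased RBF-kernel estimate in numpy:

```python
import numpy as np

def mmd_rbf(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased estimate of squared MMD between samples x, y with an RBF kernel."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d)
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

rng = np.random.default_rng(3)
src = rng.standard_normal((100, 2))           # "source subject" features
tgt_near = rng.standard_normal((100, 2))      # same distribution
tgt_far = rng.standard_normal((100, 2)) + 3   # shifted distribution
```

Minimizing such a term during training pushes the two streams' feature distributions together, which is what makes the extracted features domain-invariant.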
Dreem Open Datasets: Multi-Scored Sleep Datasets to Compare Human and Automated Sleep Staging | IEEE Journals & Magazine
Sleep stage classification constitutes an important element of sleep disorder diagnosis. It relies on the visual inspection of polysomnography records by trained sleep technologists. Automated approaches have been designed to alleviate this resource-intensive task. However, such approaches are usually compared to a single human scorer's annotation, despite an inter-rater agreement of only about 85%. The present study introduces two publicly available datasets: DOD-H, including 25 healthy volunteers, and DOD-O, including 55 patients suffering from obstructive sleep apnea (OSA). Both datasets have been scored by 5 sleep technologists from different sleep centers. We developed a framework to compare automated approaches to a consensus of multiple human scorers. Using this framework, we benchmarked the main literature approaches against a new deep learning method, SimpleSleepNet, which reaches state-of-the-art performance while being more lightweight. We demonstrate that many methods can reach human-level performance on both datasets. SimpleSleepNet achieved an F1 of 89.9% versus 86.8% on average for human scorers on DOD-H, and an F1 of 88.3% versus 84.8% on DOD-O. Our study highlights that state-of-the-art automated sleep staging outperforms human scorers, both for healthy volunteers and for patients suffering from OSA, and supports considering automated approaches for use in the clinical setting.
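The simplest way to build a consensus hypnogram from several scorers is an epoch-wise majority vote; the sketch below illustrates that idea (the paper's consensus framework may resolve ties differently, so this is only a stand-in, with ties here going to the lowest stage index):

```python
import numpy as np

def consensus(scores: np.ndarray) -> np.ndarray:
    """Majority-vote consensus hypnogram from (n_scorers, n_epochs) stage labels."""
    n_classes = scores.max() + 1
    # Per-epoch label counts, shape (n_classes, n_epochs); argmax breaks ties low.
    counts = np.apply_along_axis(np.bincount, 0, scores, minlength=n_classes)
    return counts.argmax(axis=0)

# 5 scorers labelling 4 epochs with stages 0=Wake, 1=N1, 2=N2 (illustrative)
scores = np.array([[0, 1, 2, 2],
                   [0, 1, 2, 2],
                   [0, 2, 2, 1],
                   [1, 1, 2, 2],
                   [0, 1, 0, 2]])
```

Evaluating an automated scorer against such a consensus, rather than against any single technologist, is what lets the study compare algorithms and humans on equal footing.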
Effects of Virtual Reality and Augmented Reality on Induced Anxiety | IEEE Journals & Magazine
To explore the effects of virtual reality (VR) and augmented reality (AR) in the treatment of claustrophobia, the potential effects of VR and AR on induced anxiety were investigated in this paper. During the experiment, 34 randomly selected subjects experienced the AR and VR scenes in sequence. The skin conductance and heart rates of the subjects were measured throughout the entire process, and an anxiety scale was used to assess subjective anxiety after the task in each scene was completed. The results showed the following: (1) both AR and VR scenes led to feelings of discomfort, but the subjective anxiety scores obtained in the two scenes were not significantly different; (2) the skin conductance level of the subjects significantly increased from baseline when they entered the experimental scene and remained elevated in both scenes, without a significant difference between them; and (3) the heart rate significantly increased from baseline after the subjects entered the scene and then gradually decreased, increasing significantly again when the anxiety-inducing event was triggered, again with no significant difference between the AR and VR scenes. Both AR and VR induced clear anxiety, reflected in the subjective and objective physiological indicators, but no significant difference was found between their effects. Considering the cost of building the two scenes and other factors, AR is more suitable than VR for the treatment of claustrophobia.
Selection of Muscle-Activity-Based Cost Function in Human-in-the-Loop Optimization of Multi-Gait Ankle Exoskeleton Assistance | IEEE Journals & Magazine
“Human-in-the-loop” (HIL) optimization can find suitable exoskeleton assistance patterns that improve walking economy. However, these patterns differ across gait conditions, and most current HIL optimizations minimize metabolic cost, which requires a long estimation period for each control law, as the physiological objective. We aimed to construct a muscle-activity-based cost function and to find appropriate initial assistance patterns for HIL optimization of multi-gait ankle exoskeleton assistance. One healthy subject walked assisted by an ankle exoskeleton under nine gait conditions, each a combination of different walking speeds, ground slopes, and load weights. Ten assistance patterns were provided for the subject under each gait condition. We then constructed a cost function based on surface electromyography signals of four lower-leg muscles and used a particle swarm optimization algorithm to select the muscle weight combination that maximizes the differences in the cost function between assistance patterns. The mean weights of medial gastrocnemius, lateral gastrocnemius, soleus, and tibialis anterior activity across all gait conditions were 0.153, 0.104, 0.953, and 0.145, respectively. We then verified the effectiveness of this cost function through optimization and validation experiments conducted on four subjects. Our results are expected to guide the selection of muscle-activity-based cost functions and improve the time efficiency of HIL optimization.
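A muscle-activity-based cost function of this kind reduces to a weighted sum of normalized muscle activities. The sketch below plugs in the mean weights reported in the abstract; the activity values and normalization convention (each muscle's mean sEMG envelope relative to an unassisted baseline) are illustrative assumptions:

```python
import numpy as np

# Mean muscle weights reported in the abstract: medial gastrocnemius,
# lateral gastrocnemius, soleus, tibialis anterior.
WEIGHTS = np.array([0.153, 0.104, 0.953, 0.145])

def emg_cost(activity: np.ndarray) -> float:
    """Weighted sum of the four muscles' normalized activities; lower is better."""
    return float(WEIGHTS @ activity)

relaxed = np.array([0.9, 0.9, 0.7, 1.0])   # hypothetical well-assisted trial
strained = np.array([1.0, 1.0, 1.1, 1.0])  # hypothetical poorly-assisted trial
```

Because the soleus carries by far the largest weight, patterns that unload the soleus dominate this cost, and each evaluation needs only a short sEMG recording instead of a long metabolic-cost estimate.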
DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG | IEEE Journals & Magazine
This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode temporal information, such as stage-transition rules, which is important for identifying the next sleep stage, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-score (MASS: 86.2% and 81.7; Sleep-EDF: 82.0% and 76.9) similar to the state-of-the-art methods (MASS: 85.9% and 80.5; Sleep-EDF: 78.9% and 73.7) on both data sets. This demonstrates that, without changing the model architecture or the training algorithm, our model can automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets, without utilizing any hand-engineered features.
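Models like DeepSleepNet consume the raw recording as a sequence of 30-s epochs, so the only preprocessing needed is segmentation. A minimal sketch (the 100 Hz rate matches Sleep-EDF's EEG channels; the dropping of a trailing partial epoch is an assumption of this example):

```python
import numpy as np

def segment_epochs(raw: np.ndarray, fs: int, epoch_s: int = 30) -> np.ndarray:
    """Split a raw single-channel recording into (n_epochs, fs*epoch_s) windows.

    Trailing samples that do not fill a whole epoch are dropped.
    """
    n = fs * epoch_s
    n_epochs = raw.size // n
    return raw[: n_epochs * n].reshape(n_epochs, n)

fs = 100                         # Sleep-EDF EEG sampling rate, 100 Hz
recording = np.zeros(fs * 95)    # 95 s of (dummy) signal
epochs = segment_epochs(recording, fs)
```

The CNN branch then scores each row independently, while the bidirectional LSTM reads the rows in order to learn the stage-transition rules mentioned above.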
Deep Spatio-Temporal Representation and Ensemble Classification for Attention Deficit/Hyperactivity Disorder | IEEE Journals & Magazine
Attention deficit/hyperactivity disorder (ADHD) is a complex, widespread, and heterogeneous neurodevelopmental disorder. The traditional diagnosis of ADHD relies on long-term analysis by professional doctors of complex information such as clinical data (electroencephalograms, etc.), patients' behavior, and psychological tests. In recent years, functional magnetic resonance imaging (fMRI) has developed rapidly and is widely employed in the study of brain cognition due to its non-invasive and non-radiative characteristics. We propose an algorithm based on a convolutional denoising autoencoder (CDAE) and adaptive boosting decision trees (AdaDT) to improve ADHD classification. First, combining the advantages of convolutional neural networks (CNNs) and the denoising autoencoder (DAE), we developed a convolutional denoising autoencoder to extract the spatial features of fMRI data, obtaining spatial features sorted by time. Then, AdaDT was exploited to classify the features extracted by the CDAE. Finally, we validated the algorithm on the ADHD-200 test dataset. The experimental results show that our method offers improved classification compared with state-of-the-art methods in terms of the average accuracy at each individual site and across all sites, while maintaining a balance between specificity and sensitivity.