SIMULATION OPTICS

2726 bookmarks
Frontiers | Encoding and Decoding Models in Cognitive Electrophysiology | Frontiers in Systems Neuroscience
Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is pub...
·frontiersin.org·
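The tutorial's running example is a linear encoding model: regress neural activity on time-lagged stimulus features, so the fitted weights form a spectro-temporal receptive field (STRF). A minimal sketch of that idea on simulated data; the shapes, ridge penalty, and train/test split below are illustrative assumptions, not the paper's settings.

```python
# Encoding-model sketch: estimate a spectro-temporal receptive field (STRF)
# by ridge-regressing a simulated neural response onto time-lagged stimulus
# features. Shapes, the ridge penalty, and the split are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_times, n_freqs, n_lags = 5000, 16, 10

# Simulated stimulus spectrogram (time samples x frequency bins).
spectrogram = rng.standard_normal((n_times, n_freqs))

# Lagged design matrix: each row holds the last n_lags spectrogram frames,
# so the fitted weights reshape into a (lags x frequencies) STRF.
X = np.zeros((n_times, n_lags * n_freqs))
for lag in range(n_lags):
    X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]

# Simulated response: a known STRF applied to the stimulus, plus noise.
true_weights = rng.standard_normal(n_lags * n_freqs)
response = X @ true_weights + rng.standard_normal(n_times)

# Fit on the first 4000 samples; the ridge penalty regularizes the highly
# correlated lagged features (alpha would normally be cross-validated).
model = Ridge(alpha=1.0).fit(X[:4000], response[:4000])
strf = model.coef_.reshape(n_lags, n_freqs)   # estimated receptive field

# Evaluate on held-out samples via prediction-response correlation.
predicted = model.predict(X[4000:])
print("held-out r =", np.corrcoef(predicted, response[4000:])[0, 1])
```

A decoding model in this framework is the same regression with the roles swapped: neural responses become the predictors and a stimulus feature becomes the target.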
Decoding speech from spike-based neural population recordings in secondary auditory cortex of non-human primates | Communications Biology
Communications Biology - Heelan, Lee et al. collect recordings from microelectrode arrays in the auditory cortex of macaques to decode English words. By systematically characterising a number of...
·nature.com·
Frontiers | Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex | Neuroscience
Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated a 7 × 13 mm electrode containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested whether the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonst...
·frontiersin.org·
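The fusion step described here, building per-trial feature vectors from each signal type and concatenating them before classification, can be sketched as below. Everything is simulated: the feature dimensions, the injected class effect, and the logistic-regression decoder are placeholders, not the study's pipeline.

```python
# Multimodal fusion sketch: concatenate per-trial feature vectors from
# different signal types and score a phoneme decoder by cross-validation.
# All features are simulated; dimensions and the injected class effect are
# placeholders, not the study's values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_classes = 200, 5                 # e.g., five spoken vowels
labels = rng.integers(0, n_classes, n_trials)

# Simulated per-trial features from each modality.
sua_rates = rng.poisson(5.0, (n_trials, 24)).astype(float)  # SUA spike rates
lfp_ersp  = rng.standard_normal((n_trials, 40))             # LFP spectral power
ecog_ersp = rng.standard_normal((n_trials, 60))             # ECoG spectral power

# Inject weak, partly complementary class effects so fusion has an edge.
sua_rates[:, :8] += labels[:, None] * 0.25
ecog_ersp[:, :8] += labels[:, None] * 0.25

def decode(features):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, features, labels, cv=5).mean()

# Compare single modalities against the concatenated (fused) feature vector.
fused = np.hstack([sua_rates, lfp_ersp, ecog_ersp])
for name, feats in [("SUA", sua_rates), ("LFP", lfp_ersp),
                    ("ECoG", ecog_ersp), ("fused", fused)]:
    print(f"{name:>5}: {decode(feats):.2f} accuracy")
```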
Frontiers | Decoding Inner Speech Using Electrocorticography: Progress and Challenges Toward a Speech Prosthesis | Neuroscience
Certain brain disorders resulting from brainstem infarcts, traumatic brain injury, cerebral palsy, stroke, and amyotrophic lateral sclerosis limit verbal communication despite the patient being fully aware. People who cannot communicate due to neurological disorders would benefit from a system that can infer internal speech directly from brain signals. In this review article, we describe the state of the art in decoding inner speech, ranging from early acoustic sound features, to higher order speech units. We focused on intracranial recordings, as this technique allows monitoring brain activity with high spatial, temporal, and spectral resolution, and therefore is a good candidate to investigate inner speech. Despite intense efforts, investigating how the human cortex encodes inner speech remains an elusive challenge, due to the lack of behavioral and observable measures. We emphasize various challenges commonly encountered when investigating inner speech decoding, and propose potential solutions to get closer to a natural speech assistive device.
·frontiersin.org·
Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks | Neurobiology of Language | MIT Press
Abstract. Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information.
·direct.mit.edu·
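The time-course claims here rest on time-resolved decoding: fit and cross-validate a linear classifier at every time sample, then track when accuracy departs from chance. A simulated sketch of that analysis; trial counts, sensor counts, and the effect window are arbitrary assumptions (restricting the sensor set per hemisphere would give the lateralized variants).

```python
# Time-resolved decoding sketch: cross-validate a linear classifier at each
# time sample to trace when the "ba" vs "da" percept becomes decodable.
# Data are simulated (trials x sensors x samples); the effect window and
# the trial and sensor counts are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 120, 30, 50
y = rng.integers(0, 2, n_trials)                      # 0 = "ba", 1 = "da"
X = rng.standard_normal((n_trials, n_sensors, n_times))

# Inject a percept-dependent pattern on a few sensors in a mid-trial window.
X[:, :5, 20:35] += (y[:, None, None] * 2 - 1) * 0.4

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)
])

# Accuracy sits near chance (0.5) outside the window and rises inside it.
print(f"peak accuracy {accuracy.max():.2f} at sample {accuracy.argmax()}")
```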
Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding | SpringerLink
Science and Engineering Ethics - Brain-reading technologies are rapidly being developed in a number of neuroscience fields. These technologies can record, process, and decode neural signals. This...
·link.springer.com·
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields | Journal of Neuroscience
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by “explaining away,” a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
·jneurosci.org·
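The core mechanism, "explaining away," can be illustrated with a toy example: two neurons with overlapping preferred features jointly decode a stimulus, so whatever one neuron explains is withheld from the other. This generic sparse-decoding sketch shows the principle only; it is not the paper's Bayesian model, and the features and penalty are invented for illustration.

```python
# Toy "explaining away": two neurons with overlapping preferred features.
# Independent filters both respond to a stimulus matching only feature 1;
# joint (sparse) decoding credits the overlap to neuron 1 and silences
# neuron 2. Features and penalty are invented for illustration.
import numpy as np
from sklearn.linear_model import Lasso

f1 = np.array([1., 1., 1., 1., 0., 0., 0., 0.])
f2 = np.array([0., 0., 1., 1., 1., 1., 0., 0.])   # overlaps f1 in bins 2-3
D = np.column_stack([f1, f2])                     # spectrum bins x neurons

stimulus = f1.copy()                              # exactly feature 1, no noise

# Encoding/filter view: each response is the neuron's match to the stimulus,
# so neuron 2 responds purely because of the overlap.
print("filter responses: ", D.T @ stimulus)       # -> [4. 2.]

# Decoding view: responses are set so the population jointly reconstructs
# the stimulus; the shared bins are credited to neuron 1, and a small
# sparsity penalty makes the competition explicit.
decoder = Lasso(alpha=0.05, fit_intercept=False, positive=True)
decoder.fit(D, stimulus)
print("decoding responses:", decoder.coef_)       # -> approx [0.9 0.]
```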