Poster Abstracts:
---------------------------------------------------------------------------
---------------------------------------------------------------------------


Cellular and network mechanisms for sleep-dependent plasticity in the
visual cortex

Sara J. Aton 1  and Marcos G. Frank 2

1 Molecular, Cellular, and Developmental Biology, University of
Michigan, Ann Arbor MI

2 Neuroscience, University of Pennsylvania, Philadelphia, PA


The consolidation of recent experiences into long-term memories -
linked mechanistically to plastic changes at synapses between neurons
- is a fundamental function of the brain and critical for
survival. While sleep can facilitate memory consolidation, little is
known about how it contributes to synaptic plasticity in neuronal
circuits following waking experience. To assess sleep-dependent
plastic changes in a simple neuronal circuit, we measured functional
and biochemical changes in the visual cortex of freely-behaving
animals following novel visual experiences. To induce ocular dominance
plasticity in the juvenile cat cortex, cats were given 6 hours of
waking visual experience after patching one of the two eyes. To induce
stimulus-specific potentiation of neuronal responses in the adult
mouse visual cortex, an oriented grating stimulus was presented for up
to an hour. Functional and biochemical plasticity measures were made
after visual experience and after subsequent sleep. In both mouse and
cat cortex, plasticity following visual experience is significantly
enhanced by sleep. In both cases, sleep-dependent plasticity involves
potentiation of neuronal visual responses, increased neuronal
activity, and activation of LTP-like intracellular signaling
pathways. In cat cortex, this process is associated with suppression
of activity in a subset of interneurons, which is initiated by visual
experience and may disinhibit glutamatergic circuits during subsequent
sleep. Sleep-dependent plastic changes in these model systems share
several common features. A better general understanding of
sleep-dependent plasticity mechanisms may lead to new strategies to
counter the detrimental effects of sleep loss on cognitive function.

email: asara@mail.med.upenn.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


A Fast Iterative Greedy Algorithm for MEG/EEG Source Localization


*Behtash Babadi* (a)(b), Gabriel Obregon (b), Emery N. Brown
(a)(b)(c), and Patrick L. Purdon (a)(b)

(a) Department of Brain and Cognitive Sciences, Massachusetts
Institute of Technology

(b) Department of Anesthesia, Critical Care, and Pain Medicine,
Massachusetts General Hospital

(c) Harvard-MIT Division of Health Sciences and Technology



Electroencephalography (EEG) and magnetoencephalography (MEG) are
among the most popular non-invasive methods for studying activity
within the human brain. The time series obtained from these recordings
are employed to estimate the spatiotemporal brain activity. Although
EEG and MEG recordings enjoy a high temporal resolution, in contrast
to other methods such as functional MRI, they suffer from poor spatial
resolution. Moreover, the current reconstruction algorithms applied to
the EEG and MEG time series do not take into account the sparse nature
of the brain activity and suffer from unwieldy computational
complexity due to the extremely high dimensionality of the traditional
source spaces.

In this work, we incorporate techniques from sparsity-based signal
processing into the framework of EEG/MEG source localization. In
particular, we develop a fast greedy algorithm for brain source
localization based on the class of subspace pursuit algorithms. The
subspace pursuit algorithms search for sparse solutions to an
underdetermined system of linear equations in a systematic fashion by
iteratively refining the subspace containing a potential solution. The
refinement is carried out iteratively based on a proxy signal obtained
from the observation vector. We apply a generalized subspace pursuit
algorithm across different source spaces obtained from the Voronoi
regions of recursively subdivided icosahedrons of decreasing size in a
traditional densely-sampled source space (~300,000 sources). Each
Voronoi region, which corresponds to a spatially compact cortical patch, is
represented by its first few significant eigen-modes. Given a target
sparsity level, the overall algorithm searches for sparse solutions
across the hierarchy of source spaces in a nested fashion.
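
For reference, the sketch below shows the basic subspace pursuit
iteration (cf. Dai and Milenkovic, 2009) for a single, fixed source
space; the hierarchy of nested source spaces and the eigen-mode patch
representation described above are not reproduced. The lead-field
matrix A, measurement vector y, and sparsity level K are placeholders.

    import numpy as np

    def subspace_pursuit(A, y, K, max_iter=50):
        """Recover a K-sparse x such that y is approximately A @ x."""
        n = A.shape[1]
        # Initial support: K columns most correlated with the data (proxy A^T y).
        support = np.argsort(np.abs(A.T @ y))[-K:]
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_ls
        prev_norm = np.linalg.norm(residual)
        for _ in range(max_iter):
            # Expand the candidate support with K indices from the residual proxy.
            proxy = np.abs(A.T @ residual)
            candidates = np.union1d(support, np.argsort(proxy)[-K:])
            # Least-squares fit on the expanded support, then prune back to K.
            x_cand, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
            support = candidates[np.argsort(np.abs(x_cand))[-K:]]
            x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_ls
            if np.linalg.norm(residual) >= prev_norm:   # stop once refinement stalls
                break
            prev_norm = np.linalg.norm(residual)
        x_hat = np.zeros(n)
        x_hat[support] = x_ls
        return x_hat, support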

Numerical studies for MEG source localization (under the MNE platform)
in various patches of activity in several cortical regions such as the
occipital, temporal, prefrontal, somatosensory and anterior cingulate
cortex reveal significant improvements in both the computational
requirements and the quality of source reconstruction. Further
evaluations of the algorithm for multiple active regions as well as
EEG source localization are currently under study.




email: behtash@nmr.mgh.harvard.edu



---------------------------------------------------------------------------
---------------------------------------------------------------------------




Effective Connectivity in Easy and Difficult Perceptual Decisions

Sahil Bajaj, Bhim Adhikari, Bidhan Lamichhane and Mukesh Dhamala

Recognizing emotional facial expressions is a part of perceptual
decision-making processes in the brain.  Hierarchical and parallel
functional processes are at work during such decision-making. Arriving
at a decision becomes more difficult for the brain when available sensory
information is limited or ambiguous.  We used clear and noisy pictures
with happy and angry emotional expressions, and asked 30 subjects to
categorize these pictures based on emotions during fMRI data acquisition.
The inferior occipital gyrus (IOG), fusiform gyrus (FG), amygdala (AMG)
and ventral prefrontal cortex (VPFC) were found to be active during
decision-making. We found that the difficulty of the task modulated the
pathway between IOG and VPFC.  These findings help us to understand
general neural mechanisms during perceptual decision-making processes.

email: sahil.phy@gmail.com


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Long-term Decoding Stability without Retraining for Intracortical
Brain Computer Interfaces

W. E. Bishop, C. A. Chestek, V. Gilja, P. Nuyujukian,
S. I. Ryu, K. V. Shenoy, B. M. Yu

Most current intracortical brain computer interface (BCI)
systems rely on daily retraining. While this is feasible in a lab, it
is not clear that the burden of daily retraining will be viable in
clinical practice. We therefore sought to investigate the long-term
stability of an intracortical BCI system without retraining. We
recorded neural activity using a 96-electrode array implanted in the
motor cortex of a rhesus macaque performing center-out reaches in 7
directions over 41 sessions spanning 48 days. One simple way to avoid
retraining is to hold the decoder static from day-to-day. As expected,
we found that when decoding reach direction based on threshold
crossings collected during arm movement, the overall performance of
such a static decoder was reduced compared to one which was retrained
daily. However, we surprisingly found that day-to-day performance did
not significantly decline as time from training increased for this
decoder, though day-to-day variability was large.

We then considered a second decoding model which innovated in two
important ways upon the static decoder. First, we assumed that decode
parameters were randomly drawn anew each day from a fixed
distribution. This assumption can be motivated by the high variability
but lack of overall downward trend in day-to-day performance of the
static decoder. Second, our decoder used unlabeled trials collected as
the monkey performed reaches throughout the course of an experiment to
reduce the uncertainty in parameter estimates initially encoded in the
prior distribution. This effectively allowed the algorithm to
zero in on particular parameter values throughout the
experimental session. We found that this decoder substantially
outperformed the static decoder, producing an overall 12% increase in
mean day-to-day decode accuracy. In fact, the mean day-to-day accuracy
of this decoder was not significantly different from one that was
retrained daily in a supervised manner. While these results must be
reproduced in a closed-loop setting, we believe such insights into the
role of decoder training will be important for the clinical
translation of BCI systems.
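
As a toy illustration of the second idea -- refining decoder
parameters from unlabeled trials under a prior -- the sketch below
treats each trial's feature vector as Gaussian around a
direction-specific mean plus a session offset drawn from a Gaussian
prior, and refines that offset with EM. This is a simplified stand-in,
not the authors' decoder; the class means mu and the scales sigma and
tau are assumed known from training data.

    import numpy as np

    def adapt_offset(features, mu, sigma=1.0, tau=1.0, n_iter=20):
        """features: (trials, units) unlabeled data; mu: (directions, units) class means."""
        b = np.zeros(mu.shape[1])                  # session offset, prior N(0, tau^2 I)
        for _ in range(n_iter):
            # E-step: soft posterior over reach direction for each unlabeled trial.
            d2 = ((features[:, None, :] - (mu + b)[None, :, :]) ** 2).sum(-1)
            logp = -0.5 * d2 / sigma ** 2
            logp -= logp.max(axis=1, keepdims=True)
            resp = np.exp(logp)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: MAP update of the offset given the soft direction assignments.
            resid = features - resp @ mu
            b = resid.sum(axis=0) / (features.shape[0] + sigma ** 2 / tau ** 2)
        return b

    def decode(features, mu, b):
        """Assign each trial to the direction with the closest offset-corrected mean."""
        d2 = ((features[:, None, :] - (mu + b)[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)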


email: wbishop@cs.cmu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Analysis of neuronal functional connectivity using penalized GLM models

Yi Chai

University of Wisconsin

The capacity to simultaneously record spike trains of many neurons
from awake, behaving subjects, has surpassed our ability to describe
putative neural codes distributed across populations of
neurons. Identifying correlation structure of a neuron ensemble beyond
pairwise measures is critical for understanding how information is
transferred within such a neural population.  However, spike train
data pose significant challenges for statistical researchers.  In this
work, we combined a weighted $L_1$ penalized method with a generalized
linear model (GLM) framework. Simulations show that the weighted
penalization can perform better than traditional unweighted methods.
We then apply this method to estimate the functional connectivity
structure of neurons in the rat prelimbic region of the frontal cortex
(plPFC). The neural data were obtained from adult male Sprague-Dawley
rats performing a T-maze based delayed-alternation task of working
memory. The resulting estimates identified distinctive connectivity
patterns in this neural network.
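
A minimal sketch of the core estimator, assuming a Poisson GLM whose
covariates are lagged spike histories of the other neurons and whose
coefficients receive individual L1 weights, fitted by proximal
gradient descent (ISTA); the penalty strength, weights, and step size
are placeholders, and the specific weighting scheme of this work is
not reproduced.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def weighted_l1_poisson_glm(X, y, lam, weights, step=1e-3, n_iter=5000):
        """X: (bins, covariates) lagged spike histories; y: spike counts;
        weights: per-coefficient penalty weights."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            rate = np.exp(X @ beta)     # conditional intensity per bin
            grad = X.T @ (rate - y)     # gradient of the Poisson negative log-likelihood
            beta = soft_threshold(beta - step * grad, step * lam * weights)
        return beta

Coefficients that survive the thresholding define the putative
functional connections onto the modeled neuron.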


email: ychai2@wisc.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Spectral factorization based current source density analysis of
ongoing neural oscillations

Ganesh Chand, and Mukesh Dhamala
Georgia State University

Ongoing neural oscillations in a broad range of frequencies from
approximately 0.05 to 100 Hz and above can be captured in
intra-cranial and extra-cranial electrical recordings. These
oscillations emerge within the brain as a consequence of neuronal
firing and synaptic activities in neuronal networks. For incoming
sensory inputs, ongoing activity provides the brain's internal
context for setting the level of perception and behavioral
performance. Unlike the analysis of event-related time-locked
activity, analysis of ongoing activity is much more challenging
because of noisy non-phase locked signals and absence of a
time-trigger for averaging. Multi-electrode potential recordings
(intracortical local field potentials or scalp EEG) are used to look
at the spatial distribution of activity patterns.  But, these
electrical potential patterns are usually masked by spatially
distributed common influence and spatially dependent factors, which is
why potential patterns may not reflect the true distribution of
underlying neuronal activities. The current source density (CSD)
analysis, which links trans-membrane or scalp currents with
potentials, eliminates such non-local contributions and renders more
localized activity leading to higher spatial resolution of
events. Here, we present a new spectral factorization-based CSD
analysis technique for ongoing oscillations. We validate its
applicability using simulated and real experimental data (LFPs and
EEG).


Email: gchand1@student.gsu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Towards Using EEG to Improve ASR Accuracy

Yun-Nung Chen, Kai-Min Chang, Jack Mostow
School of Computer Science, Carnegie Mellon University

We report on a pilot experiment to improve the performance of an
automatic speech recognizer (ASR) by using a single-channel EEG signal
to classify the speaker's mental state as reading easy or hard
text. We use a previously published method (Mostow et al., 2011) to
train the EEG classifier. We use its probabilistic output to control
weighted interpolation of separate language models for easy and
difficult reading. The EEG-adapted ASR achieves higher accuracy than
two baselines. We analyze how its performance depends on EEG
classification accuracy. This pilot result is a step towards improving
ASR more generally by using EEG to distinguish mental states.
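
A minimal sketch of the interpolation step: the EEG classifier's
probability that the reader is on hard text weights a mixture of the
two language models. The toy dictionaries below stand in for the full
n-gram models used inside the ASR decoder.

    def interpolated_lm_prob(word, p_hard, lm_easy, lm_hard):
        """Mix easy- and hard-text language models by the EEG-estimated P(hard)."""
        return p_hard * lm_hard.get(word, 0.0) + (1.0 - p_hard) * lm_easy.get(word, 0.0)

    # Example: with P(hard) = 0.7, words favored by the hard-text model gain weight.
    lm_easy = {"cat": 0.020, "photosynthesis": 0.0001}
    lm_hard = {"cat": 0.005, "photosynthesis": 0.0030}
    print(interpolated_lm_prob("photosynthesis", 0.7, lm_easy, lm_hard))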

email: yvchen@cs.cmu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------

Uncovering spatial topology represented by rat hippocampal population neuronal codes 

*Zhe Chen*, Fabian Kloosterman, Emery N. Brown, Matthew A. Wilson

Abstract: 
Hippocampal population codes play an important role in the
representation of the spatial environment and in spatial
navigation. Uncovering the internal representation of hippocampal
population codes will help understand neural mechanisms of the
hippocampus. For instance, uncovering the patterns represented by rat
hippocampus (CA1) pyramidal cells during periods of either navigation
or sleep has been an active research topic over the past decades.
However, previous approaches to analyze or decode firing patterns of
population neurons all assume the knowledge of the place fields, which
are estimated from training data a priori. It remains unclear how we
can extract information from population neuronal responses either
without a priori knowledge or in the presence of finite sampling
constraints. Answering this question would strengthen our ability to
examine the population neuronal codes under
different experimental conditions. Using rat hippocampus as a model
system, we attempt to uncover the hidden "spatial topology"
represented by the hippocampal population codes.

We develop a hidden Markov model (HMM) and a variational Bayesian (VB)
inference algorithm to achieve this computational goal, and we apply
the analysis to extensive simulation and experimental data. Our
empirical results show a promising direction for discovering structural
patterns of ensemble spike activity during periods of active
navigation. This study would also provide useful insights for future
exploratory data analysis of population neuronal codes during periods
of sleep.
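
A minimal sketch of the generative model assumed here: a discrete
latent state (an abstract "place") evolves as a Markov chain and each
neuron emits Poisson spike counts at a state-dependent rate. The code
computes the forward log-likelihood; the variational Bayesian
inference of the transition matrix and rate parameters described
above is not reproduced.

    import numpy as np
    from scipy.stats import poisson

    def hmm_poisson_loglik(counts, trans, rates, init):
        """counts: (bins, neurons); trans: (states, states); rates: (states, neurons);
        init: (states,) initial state distribution."""
        log_emis = poisson.logpmf(counts[:, None, :], rates[None, :, :]).sum(-1)
        log_alpha = np.log(init) + log_emis[0]
        for t in range(1, counts.shape[0]):
            m = log_alpha.max()           # log-sum-exp for numerical stability
            log_alpha = m + np.log(np.exp(log_alpha - m) @ trans) + log_emis[t]
        m = log_alpha.max()
        return m + np.log(np.exp(log_alpha - m).sum())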

Email: zhechen@mit.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Spatial phase relationships in the fronto-parietal network during visual working memory.

NM Dotson, RA Salazar, and CM Gray

Montana State University

It is well established that large-scale spatiotemporal patterns of
 neuronal activity underlie cognitive functions. These patterns
 reflect the communication between widely distributed neuronal
 ensembles spanning multiple cortical areas. Such patterns should be
 reproducible and specific to particular cognitive tasks. One
 mechanism to achieve this specificity is to maintain the same
 relative phase between neuronal ensembles. To address this, we
 performed an analysis of the spatial phase relationships within and
 between areas of the prefrontal and posterior parietal cortices of
 two monkeys during a visual working memory task. During the memory
 maintenance period, we calculated the average cross-correlograms
 between all pairs of local field potentials and extracted an estimate
 of the relative phase angle. We found three major phase
 relationships: 1) Posterior parietal cortical areas on opposite
 sides of the intraparietal sulcus have a consistent phase angle near
 180°. 2) The medial bank of the posterior parietal cortex has a
 phase gradient that is roughly rostral to caudal with respect to the
 intraparietal sulcus. 3) Prefrontal signals lead medial posterior
 parietal signals by ~20° and show a dominant out-of-phase
 relationship near 180° with lateral posterior parietal
 signals. Units preferentially fired, on average, near the troughs of
 the local field potentials recorded on the same electrode in both
 posterior parietal and prefrontal cortices. These results indicate
 that the fronto-parietal network exhibits reliable phase
 relationships during a visual working memory task.
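
A rough sketch of one way to obtain such a relative phase from a pair
of LFP segments: compute the normalized cross-correlogram, locate its
peak lag, and convert that lag to a phase angle at an assumed
oscillation frequency. The estimator used in the study may differ;
the sampling rate and frequency below are placeholders.

    import numpy as np

    def relative_phase(lfp_a, lfp_b, fs=1000.0, freq_hz=20.0, max_lag_s=0.1):
        a = lfp_a - lfp_a.mean()
        b = lfp_b - lfp_b.mean()
        max_lag = int(max_lag_s * fs)
        lags = np.arange(-max_lag, max_lag + 1)
        # Cross-correlogram: c(l) = sum over t of a[t] * b[t + l] on the overlap.
        xcorr = np.array([np.dot(a[max(0, -l):len(a) - max(0, l)],
                                 b[max(0, l):len(b) - max(0, -l)]) for l in lags])
        xcorr = xcorr / (a.std() * b.std() * len(a))
        peak_lag_s = lags[np.argmax(xcorr)] / fs
        phase_deg = (360.0 * freq_hz * peak_lag_s) % 360.0
        return phase_deg, lags / fs, xcorr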






---------------------------------------------------------------------------
---------------------------------------------------------------------------

Hierarchical Latent Dictionaries for Models of Brain Activation

*Alona Fyshe* (Machine Learning Department, Carnegie Mellon University)
Emily Fox (Department of Statistics, The Wharton School, University of Pennsylvania)
David Dunson (Statistical Science, Duke University)
Tom Mitchell (Machine Learning Department, Carnegie Mellon University)

In this work, we propose a hierarchical latent dictionary approach to
estimate the time-varying mean and covariance of
Magnetoencephalography (MEG) sensors as they record a subject's brain
activity.  MEG is a noisy recording modality, and in addition it
produces high-dimensional data.  We wish to use these data to predict
the category of the noun that a person is reading from a single trial
of noisy MEG data.  We fully leverage the limited sample size and
redundancy in sensor measurements by transferring knowledge through a
hierarchy of lower dimensional smooth latent processes.  In addition
our model allows for changes in the correlation structure of the
sensors as the brain activity recorded by the sensors evolves.  These
techniques combine to produce a model that outperforms MLE-based
models as well as Support Vector Machines (SVMs).

Email: afyshe@cs.cmu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Can network structures be derived from calcium imaging data? - A
simulation study.

Felipe Gerhard 1,*, Henry Luetcke 2, Wulfram Gerstner 1, Fritjof Helmchen 2

1 Brain Mind Institute, Ecole Polytechnique Federale de Lausanne,
Lausanne, Switzerland 
2 Brain Research Institute, University of
Zurich, Zurich, Switzerland

Information processing in the brain has remained enigmatic, largely
because of methodological constraints in measuring neuronal circuit
activity and connectivity. Two-photon calcium imaging now enables
functional analysis of local neuronal populations under in vivo
conditions. We present a simulation framework for the quantitative
evaluation of reconstruction performance under different experimental
constraints. First, we simulated spike-evoked calcium transients and
noisy fluorescence imaging, and then applied a state-of-the-art
reconstruction algorithm to recover the underlying spike train. We
examined the effect of signal-to-noise ratio, imaging speed and
indicator properties on the fidelity of spike reconstruction under
conditions commonly observed in cortical pyramidal cells. Furthermore,
we explored how spike train reconstruction affects estimates of the
connectivity structure in statistical neural network models.
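
A minimal sketch of the forward-simulation step, assuming a Poisson
spike train, a single-exponential indicator transient, sampling at the
imaging frame rate, and additive Gaussian noise; the rate, time
constant, and noise level are illustrative placeholders rather than
the values used in the study.

    import numpy as np

    def simulate_fluorescence(rate_hz=2.0, duration_s=60.0, dt=1e-3,
                              tau_decay_s=0.5, frame_rate_hz=30.0,
                              noise_sd=0.2, transient_amp=1.0, seed=None):
        rng = np.random.default_rng(seed)
        n = int(duration_s / dt)
        spikes = rng.random(n) < rate_hz * dt                      # Poisson spike train
        t_kernel = np.arange(0.0, 5 * tau_decay_s, dt)
        kernel = transient_amp * np.exp(-t_kernel / tau_decay_s)   # spike-evoked transient
        dff = np.convolve(spikes.astype(float), kernel)[:n]        # noiseless dF/F trace
        frame_step = int(round(1.0 / (frame_rate_hz * dt)))
        frames = dff[::frame_step]                                 # sample at the imaging speed
        fluorescence = frames + noise_sd * rng.standard_normal(frames.shape)
        return spikes, fluorescence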

Reliable methods of network reconstruction are important for the
testing of hypotheses about the functional organization of neural
networks. Based on our extensive simulations, we show that it is
possible to determine which experimental conditions are necessary to
infer how well individual hub neurons can be identified, whether the
network has a scale-free structure, and to what extent the effective
connectivity has small-world character. Whether a network
characteristic can be accurately reconstructed depends on the balance
of different experimental control parameters - therefore our findings
provide a valuable set of recommendations for calcium imaging
experiments aimed at faithfully characterizing network properties.


email: felipe.gerhard@epfl.ch  


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Identification of functional information subgraphs in complex networks

Vadas Gintautas  (Chatham University)
Luis Bettencourt, Michael Ham (Los Alamos National Laboratory)

We present a general information theoretic approach for identifying
 functional subgraphs in complex networks where the dynamics of each
 node are observable.  We show that the uncertainty in the state of
 each node can be written as a sum of information quantities involving
 a growing number of variables at other nodes.  We demonstrate that
 each term in this sum is generated by successively conditioning
 mutual informations on new measured variables, in a way analogous to
 a discrete differential calculus. The analogy to a Taylor series
 suggests efficient optimization algorithms for determining the state
 of a target variable in terms of functional groups of other nodes.
 We apply this methodology to electrophysiological recordings of
 a cortical neuronal network grown in vitro.  Despite strong
 stochasticity, we show that each cell's firing is generally explained
 by the activity of a small number of other neurons.  We identify
 these neuronal subgraphs in terms of their redundant or synergetic
 character and reconstruct neuronal circuits that account for the
 state of target cells.
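
A minimal sketch of the lowest-order term in such an expansion: the
mutual information between a binary target node and a single
candidate node, estimated from binned spike/no-spike observations.
Higher-order terms would condition on additional measured nodes, as
described above.

    import numpy as np

    def mutual_information(x, y):
        """x, y: binary (0/1) arrays of simultaneous observations, e.g. spike/no-spike per bin."""
        joint = np.zeros((2, 2))
        for xi in (0, 1):
            for yi in (0, 1):
                joint[xi, yi] = np.mean((x == xi) & (y == yi))
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = joint * np.log2(joint / (px * py))
        return np.nansum(terms)          # 0 * log 0 cells contribute zero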



email: vgintautas@chatham.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Automated pattern recognition in fMRI data using optimal nonparametric
local forecasts

Georg M. Goerg a), Elisha P. Merriam b), Cosma R. Shalizi a), 
Christopher R. Genovese a)
Affiliations:
a) Department of Statistics, Carnegie Mellon University; 
b) Center for Neural Science, New York University

We present a new nonparametric method, local statistical complexity
(LSC), to automatically find activity regions in functional magnetic
resonance imaging (fMRI) data - independent of the type of input
stimulus. We do this by finding optimal local predictors of the
spatio-temporal field, and then use entropy based metrics to assign
each fMRI voxel an "interestingness" score - measured in bits of
information. Applications to high-resolution fMRI data show that our
method detects the same brain regions as traditional analysis methods,
without making any assumptions regarding the shape of the unknown
neural signal. LSC will be particularly useful for applications where
the spatiotemporal patterns of brain activity are highly irregular and
non-harmonic and "matched filter" techniques fail to filter signal
from noise.



email: gmg@stat.cmu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------


EEG-correlates of zolpidem-responsive tremor.

Andrew M. Goldfine, MD Weill Cornell Medical College and Burke Medical
Research Institute; Jonathan D. Victor, MD, PhD, Weill Cornell Medical
College; Nicholas D. Schiff, MD, Weill Cornell Medical College



Zolpidem, a GABA-A alpha-1 agonist, improves arousal and multiple
facets of behavior in a small subset of patients with brain
injury. The mechanism of action is unknown, but is believed to involve
normalization of functional intracortical connectivity via
cortico-basal ganglia-thalamic loops. We investigated one patient
subject who, in addition to behavioral improvement, also had a marked
decrease in the prevalence of a 3 Hz unilateral resting arm tremor. We
used multi-taper spectral analysis of electroencephalography (EEG)
with a Hjorth Laplacian montage to study the cortical correlates of
this tremor. We found that tremor was associated with peaks in the EEG
power spectrum over primary motor / sensory areas, and primarily at
twice the tremor frequency. This EEG finding is similar to that of
previous studies of Parkinsonian tremor, essential tremor, and
voluntary mimicked tremor. Resolution of tremor from zolpidem was
associated with resolution of the EEG at the twice-tremor-frequency
peaks. EEG coherence analysis revealed that the off-zolpidem state was
associated with widespread synchrony of EEG signals at twice the
tremor frequency, primarily over the cerebral hemisphere contralateral
to the tremor. Coherence analysis of the on-zolpidem state differed,
as periods of tremor were associated with enhanced EEG synchrony only
over primary motor / sensory areas. These findings suggest that
following traumatic brain injury, tremor may result from interruption
of cortico-basal ganglia-thalamic loops, and that zolpidem can
partially normalize activity within these loops.
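
For reference, a minimal sketch of a multitaper power spectrum using
DPSS (Slepian) tapers, the kind of estimate in which the
twice-tremor-frequency peaks would appear; the time-bandwidth product,
taper count, and sampling rate are placeholders, and the overall
scaling is arbitrary.

    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, fs=250.0, nw=3.0, n_tapers=5):
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        tapers = dpss(len(x), nw, n_tapers)                      # (n_tapers, len(x))
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        psd = spectra.mean(axis=0)                               # average over tapers
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs, psd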


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Synchrony Measure for Spontaneous Low Firing Rate Spike Trains and
Bootstrap Based Hypothesis Testing

A.M. Gonzalez-Montoro*, R. Cao, N. Espinosa, J. Maria and J. Cudeiro
 
University of A Coruña, Spain

A synchrony index, aimed to be used in low firing rate scenarios such
 as spontaneous spike activity, is presented.  The method, called the
 Integrated Cross-correlation Synchrony Index (ICCSI), is based on
 kernel density estimation of inter-neuron inter-spike intervals. With
 this index, synchrony of pairs of simultaneously recorded primary
 visual cortex (V1) neurons under spontaneous activity is
 estimated. The measure is also used to test for differences in
 synchronization levels under two induced brain states:
 slow-sleep-like and awake-like.  Also, we test for differences
 between two experimental conditions used to obtain the awake-like
 state. Two bootstrap resampling plans are proposed to calibrate the
 distribution of the tests. The results indicate that our method is
 useful to discern significant differences in the synchronization
 dynamics of brain states characterized by a neural activity with low
 firing rate. It is also adequate to unveil subtle differences in the
 synchronization levels of the induced awake state, depending on the
 activation pathway.


email: agonzalezmo@udc.es


---------------------------------------------------------------------------
---------------------------------------------------------------------------

Estimation issues in modelling stimulus response from optical imaging data

*Haley Hedlin*, Daryl Hochman, Michael Haglund, Michael Lavine

Abstract: Intrinsic optical signal imaging is a technique that
measures the amount of light absorbed and scattered by tissue on the
surface of the brain in response to neuronal activity.  In our
motivating application, stimulus-evoked neuronal activity is recorded
from several regions of the cortical surface via optical imaging.  Our
goal is to estimate the stimulus response curve; however, the signal
of interest must be separated from heartbeat and respiration
artifacts. Dynamic linear models have been previously proposed to
model the neuronal signal and the noise artifacts.  In this work we
discuss estimation issues encountered in this setting and propose
improvements and potential solutions.

Email: hedline@math.umass.edu



---------------------------------------------------------------------------
---------------------------------------------------------------------------


Detecting neural signal nonstationarities in intracortical brain
computer interfaces using a model selection method

Mark L Homer(1), Janos A Perge(1), Matt T Harrison(2), 
Michael J Black(3,4), Leigh R Hochberg(1,5,6)

(1) Biomedical Engineering, 
(2) Applied Mathematics, 
(3) Computer Science, Brown University, Providence, RI; 
(4) Max Planck Institute for Intelligent Systems, Tuebingen, Germany; 
(5) Rehabilitation R&D Service, Veterans Affairs Medical Center, Providence, RI; 
(6) Neurology, Massachusetts General, Brigham & Women's, and Spaulding
Rehabilitation Hospitals, Harvard Medical School, Boston, MA.

Intracortical brain computer interfaces (iBCIs) have the potential to
restore freedom of movement and environmental control for people with
paralysis. This investigational technology uses multielectrode arrays,
chronically implanted in motor cortex, to record neural activity,
signal processing methods to extract informative neural features from
the signals, and a decoding algorithm (or "filter") which
operates on neural features to estimate motor states, e.g. arm
endpoint velocity. The estimated motor states can then be used to
drive an output device such as a computer cursor or a robotic
arm. Using the investigational BrainGate iBCI, people with tetraplegia
and anarthria have demonstrated control over a computer cursor and
have operated assistive communication software.

Current systems record neural signals from a small population of
cells, yet they must still achieve robust and reliable control of devices.
With a population of only tens of cells, a single outlier can have a
large impact on the decoded neural control signal.  Changes in neural
firing rates can result in directional biases in iBCI cursor control
tasks, impairing performance. For example, a neurally controlled
computer cursor might drift toward the lower right corner of the
screen. Such suboptimal performance can be traced to nonstationary
characteristics of neural activity features. Here we used a velocity
Kalman filter to decode neural features and modeled nonstationarity as
a constant term (an offset) added to one or more of the features. We
present a method for detecting which features have statistically
significant offsets using a combination of likelihood ratio tests and
a variant of stepwise variable selection. The analytical tool's
value is then demonstrated on data from two research sessions where
one participant with tetraplegia engaged in a pilot study of the
investigational BrainGate Neural Interface System (IDE). During the
sessions, the participant was asked to move a neural cursor on a
computer screen to several targets (Radial-4 center-out-and-back
tasks).
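
A simplified sketch of the per-feature test, assuming Gaussian decoder
residuals and comparing a zero-mean model against one with a free
offset via a likelihood ratio test; the stepwise selection across
features described above is not shown.

    import numpy as np
    from scipy import stats

    def offset_lrt(residuals):
        """residuals: (bins,) decoder residuals for one neural feature."""
        n = residuals.shape[0]
        sigma2_0 = np.mean(residuals ** 2)      # MLE variance under the zero-offset null
        sigma2_1 = np.var(residuals)            # MLE variance with a free offset
        lr = n * (np.log(sigma2_0) - np.log(sigma2_1))   # 2 * (loglik_1 - loglik_0)
        p_value = stats.chi2.sf(lr, df=1)
        return lr, p_value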

In both cases, more than 20% of the features showed statistically
significant offsets. Furthermore, the uncovered offsets supported the
observed directional bias. The results suggest that the method can
serve as a diagnostic tool, pinpointing specific nonstationarities
within the neural signals.  This could be valuable for clinical
applications of iBCI technology where users of the technology seek the
best possible control from the available neural data.


email: Mark_Homer@brown.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



A Linear Model Predicting The Cerebral Blood Volume Responses to Locomotion
Bingxing Huo, Patrick Drew
Center for Neural Engineering, Department of Engineering Sciences 
and Mechanics. Department of Neurosurgery. Pennsylvania State University, 
University Park, PA

Increases in cerebral blood volume (CBV) have been shown to be
correlated with neural activity in anesthetized animals. However, the
cerebral hemodynamics during active behaviors is not well understood.
Here, we quantify the spatial and temporal dynamics of CBV in the
somatosensory cortex of awake mice voluntarily running on top of a
spherical treadmill (Dombeck et al., 2007). Mice were implanted with
polished and reinforced thinned-skull (PoRTS) windows over the
parietal cortices (Drew et al., 2010), and head-fixed for chronic
intrinsic optical signal (IOS) imaging. The cortex was illuminated
with 530 nm light, an isosbestic point for hemoglobin, so that
fractional decreases in reflectance (ΔR/R) are driven by CBV
increases. During the animal's locomotion, we observed regionally
specific decreases in reflectance. We fit this change at any point on
the cortex with a linear convolution of the binarized velocity and a
CBV impulse response. The impulse response consisted of two
exponentially decaying components: a fast (3.3-second time constant,
"arterial") component and a slow (100-second time constant, "venous")
component (Silva et al., 2007; Kim and Kim, 2010; Drew et al.,
2011). We found that with appropriate amplitudes of the two decaying
functions, the CBV response can be well fit (R^2 = 0.5-0.8). For each
animal, the same impulse response can be used to predict the CBV
responses over different trials with similar goodness-of-fit. The
correlation between the predicted and the actual responses showed a
region-specific map as well. Our results suggest that the majority of
the cortical hemodynamic responses are linearly related to the
behavioral stimuli and this linear relationship is stronger in the
more responsive cortical areas.
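
A minimal sketch of this linear model: the reflectance change is
predicted as binarized locomotion velocity convolved with a
two-component exponential impulse response. The time constants follow
the abstract, while the amplitudes a_fast and a_slow are the free
parameters that would be fit (for example, by least squares) for each
pixel.

    import numpy as np

    def cbv_impulse_response(t, a_fast, a_slow, tau_fast=3.3, tau_slow=100.0):
        """Fast ("arterial") plus slow ("venous") exponential decays; time constants in s."""
        return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

    def predict_reflectance(velocity, a_fast, a_slow, dt=0.1, kernel_len_s=300.0):
        """velocity: locomotion speed sampled every dt seconds."""
        is_running = (velocity > 0).astype(float)             # binarized velocity
        t = np.arange(0.0, kernel_len_s, dt)
        h = cbv_impulse_response(t, a_fast, a_slow)
        # Negative sign: CBV increases appear as reflectance decreases.
        return -np.convolve(is_running, h)[:len(is_running)] * dt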

Email: bih5103@psu.edu





---------------------------------------------------------------------------
---------------------------------------------------------------------------



Incorporating Relaxivities to More Accurately Reconstruct Magnetic Resonance Images

*M. Muge Karaman*1, Iain P. Bruce2, Daniel B. Rowe3

1,2 Ph.D. Student, Marquette University, Department of Mathematics,
Statistics, & Computer Science

3 Associate Professor of Statistics, Marquette University, Department
of Mathematics, Statistics, & Computer Science

In MRI, the spatial frequency measurements are subject to the effects
of transverse intra-acquisition decay, magnetic field inhomogeneities
and longitudinal relaxation time during data acquisition. Thus, the
resulting image can include artificial effects that result from these
Fourier encoding anomalies. As such, the image-space data should be
reconstructed from measured spatial frequencies using an inverse
Fourier transform operator that accounts for the Fourier anomalies and
care should be taken when drawing conclusions from the fMRI
data. Nencka et al. [Journal of Neuroscience Methods 181 (2009)
268-282] developed the AMMUST (A Mathematical Model for Understanding
the STatistical effects) framework for incorporating Fourier encoding
anomalies. However, this framework does not account for the recovery
of the longitudinal relaxation time, and it is assumed that there is a
long repetition time, TR. As this assumption is not always valid, and
the signal amplitude becomes dependent on the longitudinal relaxation
time when performing fast repetitive excitations, the effect of the
longitudinal relaxation time should also be considered in this
setting. We expand upon the AMMUST framework to incorporate all the
Fourier encoding anomalies in an effort to correct these effects. The
exact image-space means, variances, and correlations are theoretically
and experimentally computed by implementing the AMMUST linear
framework, adapted to incorporate intra-acquisition decay, magnetic
field inhomogeneities and longitudinal relaxation time.

Email: meryem.karaman@marquette.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Estimation of Time-varying Inputs from a Single Spike Train

Hideaki Kim and Shigeru Shinomoto
Graduate School of Science, Kyoto University

Neurons temporally integrate input signals, translating them into
 timed output spikes. Because neurons nonperiodically emit spikes,
 examining spike timing can reveal information about input signals. We
 designed a method for analyzing a single spike train to estimate
 time-varying input parameters comprising the mean and fluctuation of
 the input current, which are determined by the firing frequencies of
 presynaptic excitatory and inhibitory neuronal populations. To track
 time-varying input parameters, we extracted instantaneous firing
 characteristics (e.g., the firing rate and non-Poisson irregularity
 in a spike train) and converted this information into likely input
 parameters. Instantaneous firing characteristics were estimated using
 a computationally feasible algorithm. The transformation formula was
 constructed by inverting the neuronal forward transformation of the
 input current to output spikes. Analyzing in vivo spike trains
 revealed marked differences in the input parameters for the thalamic
 relay nucleus and the visual cortical areas.
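
A minimal sketch of extracting the two instantaneous firing
characteristics mentioned above, here taken to be a sliding-window
firing rate and the local variation (Lv) irregularity measure; the
inversion from these characteristics to input mean and fluctuation is
not shown, and the window settings are placeholders.

    import numpy as np

    def local_rate_and_irregularity(spike_times, window_s=1.0, step_s=0.1):
        """spike_times: sorted spike times in seconds."""
        spike_times = np.asarray(spike_times)
        centers = np.arange(spike_times[0], spike_times[-1], step_s)
        rates, lvs = [], []
        for c in centers:
            in_win = spike_times[(spike_times >= c - window_s / 2) &
                                 (spike_times < c + window_s / 2)]
            rates.append(len(in_win) / window_s)              # local firing rate
            isis = np.diff(in_win)
            if len(isis) >= 2:
                ratio = (isis[:-1] - isis[1:]) / (isis[:-1] + isis[1:])
                lvs.append(3.0 * np.mean(ratio ** 2))         # local variation Lv
            else:
                lvs.append(np.nan)
        return centers, np.array(rates), np.array(lvs)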


email: kim@ton.scphys.kyoto-u.ac.jp


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Temporal Filtering Procedure for fMRI data

*Namhee Kim* (1), Prem K. Goel (2), David Q. Beversdorf (3)
(1) The Gruss Magnetic Resonance Research Center, Albert Einstein 
College of Medicine of Yeshiva University, Bronx, New York, USA
(2) Department of Statistics, The Ohio State University, Columbus, Ohio, USA
(3) Departments of Radiology, Neurology and Psychology and 
the Thompson Center, University of Missouri, Columbia, Missouri, USA

The BOLD fMRI signal observed during cognitive tasks is a mixture of
various signals, e.g. physiologic changes such as heart-rate or
respiration-related changes, as well as the cognitive task-related
changes that are of interest to the experimenter.  It has therefore
been difficult to establish task relevance for brain regions in which
signals from sources other than the cognitive task are heavily mixed
in.  In this study we propose a
temporal filtering procedure that reduces the influence of nuisance
signals in observed fMRI data, acquired via a block design with a
resting period preceding the cognitive tasks.  Our method
utilizes a series of procedures: voxel-wise normalization, singular
value decomposition for within-subject dimension reduction, and a
dynamic linear model for separating nuisance signals from the singular
vectors.  Denoised data obtained by the proposed temporal procedure
were analyzed by a GLM for across-subject analysis. The GLM results
with temporally filtered data were compared to a naive GLM analysis,
i.e. a GLM analysis without any temporal filtering
procedure.  The proposed method and the naive GLM were applied to fMRI
data sets acquired from two generic cognitive tasks, a phonological
and a semantic task, and drug administration conditions, L-dopa
administration and placebo.  The proposed method demonstrates enhanced
activation in the regions known to be task-related, as well as a
contrast in activation between L-dopa administration and placebo.
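
A minimal sketch of the first two steps of the pipeline, voxel-wise
normalization followed by a truncated SVD for within-subject dimension
reduction; the dynamic linear model that separates nuisance components
from the singular vectors is not shown.

    import numpy as np

    def normalize_and_reduce(data, n_components=10):
        """data: (time points, voxels) fMRI matrix for one subject."""
        z = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-12)  # voxel-wise normalization
        u, s, vt = np.linalg.svd(z, full_matrices=False)
        scores = u[:, :n_components] * s[:n_components]   # leading temporal singular vectors
        return scores, vt[:n_components]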


Email: namhee.kim@einstein.yu.edu



---------------------------------------------------------------------------
---------------------------------------------------------------------------



A data sharing project called Collaborative Database for Reaching Experiments


Ben Walker -1, Brian London -2, Konrad Kording -1,2
1 -- Rehabilitation Institute of Chicago
2 -- Northwestern University


Relevant data from many labs usually exist in different formats,
making modeling and analysis of broad sets of data difficult.  Since
many reaching experiments share common features, we have
developed a Collaborative Database for Reaching Experiments (CaDRE).
This dataset collates data from multiple labs, both behavioral and
electrophysiological.

With a common format, we can also work with a broad range of models,
allowing them to be run against many experiments. CaDRE promises to be
useful for experimentalists who want to understand how their data
relates to models, for modelers who want to test their theories, and
for educators who want to give students a chance to better understand
current experiments and models.



email: kk@northwestern.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Measuring real-time synchronization in neuronal spike trains

Thomas Kreuz 1, Daniel Chicharro 2, Ralph G. Andrzejak 3, Florian Mormann 4

1 Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
2 Center for Neuroscience and Cognitive Systems, Italian Institute 
of Technology, Rovereto, Italy
3 Department of Information and Communication Technologies, 
Universitat Pompeu Fabra, Barcelona, Spain
4 Department of Epileptology, University of Bonn, Bonn, Germany



Measuring synchronization among two or more simultaneously measured
neuronal spike trains is a ubiquitous task in the analysis of
electrophysiological recordings. It can be used to quantify the
reliability of neuronal responses upon repeated presentations of a
stimulus [1], to test the performance of neuronal models [2], or to
address questions regarding the nature of the neuronal code [3].

Accordingly, a wide variety of approaches has been proposed. The
Victor-Purpura metric [4] evaluates the cost needed to transform one
spike train into the other using only certain elementary
steps. Another metric proposed by van Rossum [5] measures the
Euclidean distance between the two spike trains after convolution of
the spikes with a causal exponential function. These methods, like
many others, involve one parameter that sets the time-scale at which
the spike trains are compared. In contrast, two more recent
approaches, the ISI- and the SPIKE-distance, are parameter free and
time-scale adaptive [6-8]. While the ISI-distance relies on the
relative length of interspike intervals, the SPIKE-distance is
sensitive to spike coincidences. Both measures can be applied to more
than two spike trains, either as averages over pairwise distances or
as truly multivariate measures [9].
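
For concreteness, a minimal sketch of the van Rossum distance [5]:
each spike train is convolved with a causal exponential of time
constant tau and the distance is the L2 norm of the difference between
the filtered traces; the discretization step and tau are user-chosen
parameters.

    import numpy as np

    def van_rossum_distance(spikes_a, spikes_b, tau=0.01, dt=1e-4, t_max=None):
        """spikes_a, spikes_b: spike times in seconds; tau: kernel time constant."""
        spikes_a, spikes_b = np.asarray(spikes_a), np.asarray(spikes_b)
        if t_max is None:
            t_max = max(spikes_a.max(), spikes_b.max()) + 5 * tau
        t = np.arange(0.0, t_max, dt)
        kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)   # causal exponential kernel

        def filtered(spike_times):
            train = np.zeros_like(t)
            idx = np.searchsorted(t, spike_times)
            np.add.at(train, idx[idx < len(t)], 1.0)          # binned spike train
            return np.convolve(train, kernel)[:len(t)]

        diff = filtered(spikes_a) - filtered(spikes_b)
        return np.sqrt(np.sum(diff ** 2) * dt / tau)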

The ISI- and the SPIKE-distance are easy to visualize in a
time-resolved manner; however, the original proposal of the
SPIKE-distance, while correctly reflecting long-term trends by means
of a moving average, led to spurious high instantaneous values during
reliable events with a non-zero but small amount of jitter. Here this
problem is resolved.

Moreover, both the ISI- and the SPIKE-distance are calculated from
instantaneous values of spike train dissimilarity for which at each
time moment not only the last preceding spike but also the first
following spike is taken into account. This non-causal dependence on
future spiking does not allow for a real-time calculation. Here
the SPIKE-distance is modified such that the instantaneous value of
dissimilarity for two or more spike trains relies on past information
only so that time-resolved and causal spike train synchrony can be
estimated in real-time.

Potential applications include rapid online decoding with
brain-machine interfaces or monitoring the activity of neuronal
populations in epileptic patients.

References 

1. Mainen Z, Sejnowski T: Reliability of spike timing in
neocortical neurons. Science 268:1503--1506 (1995).
 
2. Jolivet R, Kobayashi R, Rauch A, Naud R, Shinomoto S, Gerstner W: A
benchmark test for a quantitative assessment of simple neuron
models. J Neurosci Methods 169:417--424 (2008).

3. Victor JD: Spike train metrics. Current Opinion in Neurobiology
15:585--592 (2005).

4. Victor JD, Purpura KP: Nature and precision of temporal coding in
visual cortex: A metric-space analysis.  J Neurophysiol 76,
1310--1326 (1996).

5. van Rossum MCW: A novel spike distance.  Neural Comput 13,
751--763 (2001).

6. Kreuz T, Haas JS, Morelli A, Abarbanel HDI, Politi A: Measuring
spike train synchrony.  J Neurosci Methods 165, 151--161 (2007).

7. Kreuz T, Chicharro D, Andrzejak RG, Haas JS, Abarbanel HDI:
Measuring multiple spike train synchrony.  J Neurosci Methods 183,
287--299 (2009).

8. Kreuz T, Chicharro D, Greschner M, Andrzejak RG: Time-resolved and
time-scale adaptive measures of spike train synchrony.  J Neurosci
Methods 195, 92--106 (2011).

9. The Matlab source code for calculating and visualizing the ISI- and
the SPIKE-distance as well as information about their implementation
can be found under
http://www.fi.isc.cnr.it/users/thomas.kreuz/sourcecode.html.


Email: thomas.kreuz@cnr.it

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Modulation of Brain Activity By Task Difficulty in Perceptual Decision-Making

Bidhan Lamichhane1*, Mukesh Dhamala1,2

1 Department of Physics and Astronomy, Georgia State University, Atlanta, USA
2 Neuroscience Institute, Georgia State University, Atlanta, USA

 
The brain forms various perceptual decisions based on available
sensory information. When sensory information is scant or ambiguous,
the interpretation of signals becomes harder. 

The brain has to integrate sparse information over time to arrive at a
decision and decision times can become longer for more difficult
decision-making tasks. To understand how exactly the brain integrates
such available sensory information, we used four perceptual
categorization tasks and performed fMRI experiments. While inside an
MRI scanner, thirty-three participants made the following
forced-choice categorizations: (i) audio-visual synchrony or
asynchrony, (ii) moving-dots' left or rightward motion, (iii) face or
house, and (iv) happy or angry face. 

The difficulty level in audio-visual synchrony-asynchrony task was
altered by changing the time lag between the onsets of sound tone and
visual flash pairs. In the case of moving dots, different difficulty
levels were achieved by changing the percentage of coherently moving
dots toward left or right. 

Image pixel phase randomization and
addition of Gaussian noise enabled us to make visual image stimuli
difficult in the case of the other two tasks. In all these tasks,
participants
expressed their decisions by button presses. We found that the
behavioral performance degraded and the reaction time increased with
increasing difficulty levels in all experiments. From brain
activations, we found the signature of task difficulty in the 
parietal, frontal, insular cortices with a higher fMRI BOLD response
in harder trials than in easier trials. The activity in the inferior
parietal lobe, dorsolateral prefrontal cortex, frontal eye fields,
supplementary eye fields and bilateral insula significantly changed
with the difficulty of the task. This increase in brain activity
with task difficulty in higher-order brain regions provides us with
important clues about the hierarchical organization of brain areas in
perceptual decision-making functions.

Email: blamichhane1@student.gsu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Trade-off between attention-demanding and automatic processes in
frontal-striatal circuits in non-human primates

Eunjeong LEE*, Moonsang Seo, Bruno B. Averbeck
Unit on learning and decision making, Laboratory of Neuropsychology, NIMH/NIH



The role of frontal-striatal (FS) circuits in the trade-off between
attention-demanding and automatic processes has not been examined
directly. To investigate this, we trained monkeys on an oculomotor
sequential decision making task with two conditions. In the first
condition (random) the correct spatial sequence of eye movements
varied randomly every trial. In the second condition (fixed) the
sequence was fixed for blocks of eight correct trials, always
following one of eight highly over-learned sequences. For each
decision the animal had to determine whether the fixation point
contained a higher proportion of blue or red pixels (color bias) and
saccade to either a red or a blue peripheral target. In the random
condition the animal had to rely on the fixation point to make its
decision.  In the fixed condition, for the first few trials after a
sequence switched the animals selected their movements by determining
the majority pixel color. After doing this for a few trials, they
transitioned to executing the sequence from memory.  Analysis of
behavioral performance in the fixed condition suggested that the
learning stage was from the first to third trial just after switching
to a new sequence. While the monkeys performed the task, we recorded
local field potentials from dorsal lateral prefrontal cortex (lPFC)
and dorsal striatum (dStr) simultaneously with multiple electrodes.
In our preliminary analyses we have examined changes in coherence
while animals transitioned from using attention to automatically
executing the sequences in the task.  We found that coherence in the
beta band was higher when animals were trying to work out the
sequence, and became weaker after the sequence had been
learned. Overall, our results show that the trade-off between
attention-demanding and overlearned processes may be mediated by
changes in cooperation between lPFC and dStr.
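
A minimal sketch of the kind of measure compared across learning
stages here: LFP-LFP coherence estimated with Welch's method and
averaged over the beta band; the sampling rate, window length, and
band edges are placeholders.

    import numpy as np
    from scipy.signal import coherence

    def beta_band_coherence(lfp_pfc, lfp_str, fs=1000.0, band=(13.0, 30.0)):
        f, cxy = coherence(lfp_pfc, lfp_str, fs=fs, nperseg=int(fs))  # 1-s windows
        in_band = (f >= band[0]) & (f <= band[1])
        return f[in_band], cxy[in_band], cxy[in_band].mean()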


*Email: Eunjeong.lee@nih.gov


---------------------------------------------------------------------------
---------------------------------------------------------------------------




Uncovering the visual components of cortical object representation

Daniel D Leeds 1, Darren A Seibert 2 3, John A Pyles 1, Michael J Tarr 1,4

1  Center for the Neural Basis of Cognition, Carnegie Mellon University
2 Department of Biomedical Engineering, University of Houston
3 uPNC Summer Program, Carnegie Mellon University
4 Department of Psychology, Carnegie Mellon University

Object perception recruits a cortical network that encodes a hierarchy
of increasingly complex visual features. While early stages of vision
in the human brain have been reasonably well-modelled using local
oriented edges, the visual properties encoded in higher-level cortical
regions are less clear.  Prior work has been able to partially predict
cellular activity in IT and V4 using simple holistic (Yamane 2008) and
parts-based (Cadieu 2007) approaches.  However, these studies focus on
simplistic computer-generated shape stimuli not representative of
real-world objects.  Similar computational models have not previously
been pursued in humans or at super-cellular cortical scales, e.g.,
through neuroimaging.  Here we use a searchlight procedure in fMRI
(e.g., Kriegeskorte 2007) to explore the ability of several computer
vision models (and a simple Gabor filter bank, for comparison) to
account for object encoding across the visual cortex. The responses of
voxel sphere searchlights and of computational models to sixty
real-world object pictures were compared using representational
dissimilarity analysis, as explored by Kriegeskorte 2008.  All four computer
vision models show significant matches with imaging data for distinct
cortical regions---largely in the ventral-temporal cortex, anterior of
regions matching the Gabor filter bank.  Each model captures
different complex visual structures tied to holistic or parts-based
perception.  Our findings indicate the varying selectivities, and
varying encoding principles, of visual cortical regions.
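
A minimal sketch of the representational dissimilarity comparison: one
dissimilarity matrix is built from a searchlight's voxel responses and
one from a model's features over the same objects, and their condensed
upper triangles are correlated (Spearman, as in Kriegeskorte 2008).
The inputs are placeholder arrays.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm_similarity(voxel_responses, model_features):
        """Both inputs: (n_objects, n_dimensions) response/feature matrices."""
        rdm_brain = pdist(voxel_responses, metric="correlation")  # condensed upper triangle
        rdm_model = pdist(model_features, metric="correlation")
        rho, p = spearmanr(rdm_brain, rdm_model)
        return rho, p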

email: dleeds@andrew.cmu.edu



---------------------------------------------------------------------------
---------------------------------------------------------------------------



Inferring evoked brain connectivity through adaptive perturbation

Kyle Q. Lepage, Boston University
ShiNung Ching, MIT
Mark A. Kramer, Boston University

Inference of functional networks -- representing the statistical
associations between time series recorded from multiple sensors -- has
found important applications in neuroscience.  Typical methods for
functional connectivity employ passive measurement and are susceptible
to confounding factors such as network elements producing physically
independent yet time-locked activity.  Here, a perturbative and
adaptive method of inferring network connectivity based on measurement
and stimulation -- so-called 'evoked network connectivity' -- is
introduced.  This procedure, employing a recursive Bayesian update
scheme, allows principled network stimulation given a current network
estimate inferred from all previous stimulations and recordings.  The
method decouples stimulus and detector design from network inference
and can be suitably applied to a wide range of clinical and basic
neuroscience related problems.  The proposed method demonstrates
improved accuracy compared to network inference based on passive
observation of node dynamics and an increased rate of convergence
relative to network estimation employing a more naive stimulation
strategy.
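
A toy illustration of the recursive idea, not the authors' model: a
Beta posterior is kept on the probability that stimulating node i
evokes a response at node j, it is updated after every stimulation,
and the next stimulation site is chosen where the posteriors are most
uncertain.

    import numpy as np

    class EvokedConnectivityEstimator:
        def __init__(self, n_nodes):
            self.a = np.ones((n_nodes, n_nodes))   # Beta parameters: evoked responses + 1
            self.b = np.ones((n_nodes, n_nodes))   # Beta parameters: failures + 1

        def update(self, stim_node, responses):
            """responses: boolean array, True where a node responded to this stimulation."""
            self.a[stim_node] += responses
            self.b[stim_node] += ~responses

        def edge_probability(self):
            return self.a / (self.a + self.b)       # posterior mean of each edge

        def next_stimulation(self):
            var = (self.a * self.b) / ((self.a + self.b) ** 2 * (self.a + self.b + 1))
            return int(var.sum(axis=1).argmax())    # stimulate where uncertainty is largest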


email: lepage@math.bu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Detection and Characterization of Gamma Oscillations Evoked by Dynamic
Natural Scenes in Macaque Visual Cortex

Tiphani Lynn,1,2 Rafal Angryk,2 Charles Gray1 

Departments of Cell Biology and Neuroscience1 
and 
Computer Science,2 Montana State University, Bozeman MT

Despite the complexity of visual perception, object recognition can be
achieved in the short time-frame between eye movements. The
synchronized neural firing that generates gamma oscillations in the
local field potential (LFP) speeds the process of temporal summation
and improves the fidelity of information transfer, so it is thought
that these events are a component of the mechanism that allows for
rapid visual perception. If gamma oscillations play a critical role in
this process, their onset should have a consistent and predictable
relationship to the onset of visual fixation during viewing of natural
images. To examine this hypothesis, we analyzed recordings from areas
V1 (n = 93) and V2 (n = 211) of a macaque monkey while the animal
freely viewed a dynamic natural scene (movie). We first computed a
sliding-window RMS (50 ms window, 1 ms step-size) on the band-pass
filtered (30-80 Hz) LFP from each channel and applied a global
threshold to detect high-amplitude events. We then computed a set of
quantitative features for each event, and subjected the resulting
feature matrix to cluster analysis which partitioned the data into
four clusters. We selected the cluster containing events consistent
with gamma oscillations (n = 403) and compared these data to events
selected using an 80 ms duration threshold (n = 2596). The clustering
algorithm is more selective and tends to identify stereotypical
oscillatory events. The resulting latency distributions were similar,
having median latencies close to 110 ms. Interestingly, there was
little or no difference between the latencies observed in V1 and V2 in
either data set. These results demonstrate that stimulus-evoked gamma
oscillations exhibit a wide range of latencies, amplitudes, and
durations under the conditions of this experiment.
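
A minimal sketch of the event-detection step: band-pass filter the LFP
to 30-80 Hz, compute a sliding-window RMS (50 ms window, 1 ms step),
and mark windows exceeding a global threshold. The sampling rate,
filter order, and threshold rule are illustrative, and the subsequent
feature extraction and clustering are not shown.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_gamma_events(lfp, fs=1000.0, band=(30.0, 80.0),
                            win_ms=50, step_ms=1, thresh_sd=2.0):
        """lfp: 1-D local field potential sampled at fs Hz."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        gamma = filtfilt(b, a, lfp)                       # 30-80 Hz band-passed LFP
        win = int(win_ms * fs / 1000)
        step = int(step_ms * fs / 1000)
        starts = np.arange(0, len(gamma) - win, step)
        rms = np.array([np.sqrt(np.mean(gamma[s:s + win] ** 2)) for s in starts])
        threshold = rms.mean() + thresh_sd * rms.std()    # one possible global threshold
        return starts, rms, rms > threshold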





email: tlynn24@gmail.com


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Bayesian nonparametric analysis of neuronal intensity rates

Published in Journal of Neuroscience Methods. 2012 
Jan 15;203(1):241-53. Epub 2011 Oct 1. 

1-Athanasios Kottas, Department of Applied Mathematics and Statistics,
University of California, thanos@soe.ucsc.edu

2-Sam Behseta, Department of Mathematics, California State University,
Fullerton, sbehseta@fullerton.edu

3-David E Moorman, Department of Neurosciences, Medical University of
South Carolina, moorman@musc.edu

4-*Valerie Poynor*, Department of Applied Mathematics and Statistics,
University of California, Santa Cruz, vpoynors@soe.ucsc.edu

5-Carl R Olson, Neuroscience, Center for the Neural Basis of
Cognition, colson@cnbc.cmu.edu


In neuroscience, there is great interest in comparing neuronal firing
rates across multiple conditions. Traditionally, neuroscientists
record the firing activity of neurons under these conditions for a
number of trials. In this research, we are comparing the activity of a
single neuron obtained from the Supplementary Eye Field (SEF) area of
a macaque monkey's brain responding to three different visual
stimuli. Data were recorded for 4000 milliseconds per condition per
trial. We propose a flexible Bayesian nonparametric dependent
Dirichlet process (DDP) mixture model to jointly model the
nonhomogeneous Poisson process (NHPP) intensity function of the
neuronal firing times across the three conditions. Under these
modeling techniques we are able to borrow strength from conditions
having higher firing rates to make inference on conditions where the
neuron exhibits less activity, all the while maintaining a data-driven
distributional structure. We illustrate the methodology with global
and point-wise comparisons of an SEF neuron's firing rates across the
three conditions.


Email: vpoynor@soe.ucsc.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Methods for detection of functional connectivity between cortex and muscles


Sagi Perel [1,3], Andrew B. Schwartz [2,3], Valerie Ventura [3,4]

[1] BioEngineering, Univ. of Pittsburgh, PITTSBURGH, PA; 
[2] Neurobio., Univ. of Pittsburgh, Pittsburgh, PA; 
[3] Ctr. for Neural Basis of Cognition, Pittsburgh, PA; 
[4] Statistics, Carnegie Mellon Univ., Pittsburgh, PA

Direct, monosynaptic cortical output to motoneurons originates from
corticomotoneuronal cells (CMN), located predominantly in the primary
motor cortex. Post-spike effects (PSEs) in averages of spike-triggered
EMG snippets provide physiological evidence of connectivity between
CMN cells and spinal motoneurons innervating skeletal muscles. PSEs
within a narrow window following the trigger are currently detected
using either a visual inspection of a spike-triggered average (SpTA)
or the Multiple-Fragments-Analysis test (MFA, Poliakov and Schieber
1998).
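
For readers unfamiliar with the construction, a minimal sketch of
forming a spike-triggered average of rectified EMG is given below; the
sampling rate, window lengths, and names are illustrative.

# Sketch of a spike-triggered average (SpTA) of rectified EMG.
import numpy as np

def spike_triggered_average(emg, spike_samples, fs=5000.0,
                            pre_ms=10.0, post_ms=30.0):
    pre = int(pre_ms * fs / 1000.0)
    post = int(post_ms * fs / 1000.0)
    rect = np.abs(emg)                        # full-wave rectified EMG
    snippets = np.asarray([rect[s - pre:s + post]
                           for s in spike_samples
                           if s - pre >= 0 and s + post <= len(rect)])
    spta = snippets.mean(axis=0)              # average across spike triggers
    t = np.arange(-pre, post) / fs * 1000.0   # time axis in ms
    return t, spta, snippets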

SpTA requires a large number of spikes to clearly visualize the PSE
and lacks a statistical significance measure, so it cannot be
automated easily. More formal techniques exist to assess SpTA
significance (Kasser and Cheney 1985, Lemon et al. 1986), but their
statistical properties have not been studied, so their reliabilities
are unknown. MFA was suggested as a more rigorous PSE detection
method, but its reliability is also unknown.

Here, we investigate the statistical properties of PSE detection via
SpTA and MFA. We show that the rate of spurious detections from SpTA
visual inspections is very sensitive to how one decides that a PSE
exists. Sensible decision rules tend to be conservative, and thus have
low probabilities of detecting PSEs. We show that MFA is neither
conservative nor liberal, but has a rate of spurious detections that
matches the chosen significance level. We also show that MFA often has
higher probability than SpTA to detect PSEs, including in small
samples. But MFA is limited to detecting PSEs in the 6-16 ms
post-spike window, which is mostly appropriate for monosynaptic
connectivity. We develop a scan test that allows PSE detections at any
latency; this test yields a p-value to assess PSE significance instead
of relying on visual inspection. On-line PSE detection is useful to
inform the investigator of significant PSEs while data are
collected. A visual SpTA inspection is difficult to implement; MFA is
inconvenient because it requires partitioning the data into
fragments. We propose an automatic test that is functionally
equivalent to MFA, but better suited for real-time PSE detection: the
single snippet analysis (SSA). We provide practical guidelines to
apply SSA and SSA-scan tests for automatic off- and on-line PSE
detection.  Finally, MFA and SSA tests and their scan versions rely on
assumptions, such as large samples and linear SpTA baselines. We find
that these tests are mostly robust to assumption
misspecification. Nevertheless we propose bootstrap diagnostics to
detect deviations from the assumptions, and to correct p-values when
needed. In particular, we can diagnose when SpTA non-constant
baselines affect the tests, and correct them, without explicitly
estimating the baselines.

In summary, the primary utility of the automatic tests is objective
and more efficient PSE detection. They detect functional connectivity
without making assumptions about underlying anatomy. They can be
applied automatically to many datasets, and can further be conducted
on-line, while the data are collected. The scan test also provides a
putative classification of PSEs based on the latencies at which they
are detected. However, SpTA remains the essential tool for definitive
classification of PSEs.




email: sagi@cmu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Discrete-state dynamical model based neurofeedback for BCI

*Alexey Pupyshev*, Alex Ossadtchi.
Saint-Petersburg State University

Typical neurofeedback paradigms generate the feedback signal based on
the deviation of specific characteristics of the EEG measurements from
the nominal (desired) value. The goal of the neurofeedback is to
normalize the value of that specific characteristic of EEG. Typical
behavior of a subject in a neurofeedback experiment consists of
``remembering'' the states for which he or she received positive
feedback and attempting to reenter such states. At the early
stages of training the occurrence of states with positive feedback is
infrequent and therefore the efficacy of neurofeedback training is
low.

In this work, in order to increase neurofeedback training
efficiency, we propose to use a discrete-state probabilistic model
(i.e., a Markov model).  Each state of such a model is characterized
by a certain mean vector of EEG parameters, and the states can be
sorted based on the values of the training criterion. The matrix of
state transition probabilities can also be estimated.  First, we
hypothesize that the states differ in their probability of
transitioning to the subset of states with the desired value of the
optimized criterion. Second, given that this hypothesis holds, at
each instance in time we can generate the feedback signal based on a
proprietary measure approximating the likelihood of reaching the
desired states from the current one.
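
One way to operationalize this idea is sketched below, under assumed
choices (k-means state discretization, a fixed transition horizon);
this is not the proprietary measure used in the experiments.

# Sketch: discretize EEG feature vectors into states, estimate the state
# transition matrix, and score each state by the probability of being in a
# "desired" state after k transitions. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def fit_state_model(features, n_states=8):
    labels = KMeans(n_clusters=n_states, n_init=10).fit_predict(features)
    P = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)  # row-stochastic matrix
    return labels, P

def reach_probability(P, desired, k=3):
    # Probability of occupying a desired state after k transitions,
    # starting from each state (used here as a graded feedback signal).
    target = np.zeros(P.shape[0])
    target[list(desired)] = 1.0
    return np.linalg.matrix_power(P, k) @ target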

Using randomization tests we confirmed that the states defined based
on the EEG characteristics can indeed be characterized by a highly
non-uniform distribution of interstate transition probabilities
(p<0.01).  We implemented the novel paradigm and ran a series of pilot
experiments and received preliminary results demonstrating the
increased efficacy of the proposed model based neurofeedback paradigm
as opposed to the non-model based approach.

Email: alex2-92@mail.ru

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Validity of Independent Component Analysis for Neural Signals

Mark Reimers and Paul Manser,
Virginia Commonwealth University
Richmond, VA
---------------------

Independent Component Analysis (ICA) is a popular technique used in
the analysis of neuro-imaging time-series data such as fMRI and EEG.
The goal of ICA is to extract from the experimentally measured signals
a (usually smaller) set of signals that are maximally mutually
statistically independent and that are estimates of putative underlying
processes that were originally mixed in the recorded signals.  In
practice ICA is often used as a ‘black box’; ICA algorithms will give
results under most experimental conditions and input parameters, but
without any measure of reliability or accuracy, which can lead to
spurious results.

In order to validate ICA we performed simulation studies, and compared
the resulting estimates from ICA to the “true” underlying processes.
This allows us to assess the accuracy and limitations of ICA under
various experimental conditions similar to those found in different
types of neuro-imaging recording data, with particular focus on
situations analogous to use in fMRI.  We examine the effects of
varying signal length (number of frames), signal-to-noise ratio, true
distribution of underlying processes, number of signals recorded,
number of true underlying processes, number of estimated underlying
processes, and how the true underlying processes are represented as
mixtures in the recorded signals.
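
A stripped-down version of such a simulation, assuming FastICA and
correlation-based matching of estimated components to true sources
(the study design summarized above is broader), might look as follows.

# Minimal ICA simulation: mix known sources, unmix with FastICA, and score
# recovery by matching estimated components to true sources via |correlation|.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
T, n_src, n_obs = 500, 4, 10            # frames, true sources, recorded signals
S = rng.laplace(size=(T, n_src))        # super-Gaussian "true" processes
A = rng.normal(size=(n_src, n_obs))     # mixing into the recorded signals
X = S @ A + 0.5 * rng.normal(size=(T, n_obs))   # add sensor noise

for n_est in (2, 4, 6):                 # under-, correctly-, over-estimated
    S_hat = FastICA(n_components=n_est, random_state=0).fit_transform(X)
    C = np.abs(np.corrcoef(S.T, S_hat.T)[:n_src, n_src:])
    # For each true source, report its best-matching estimated component.
    print(n_est, np.round(C.max(axis=1), 2))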

We show that the performance of ICA is most strongly affected by the
number of processes estimated relative to number of true underlying
processes.  Specifically, when we underestimate the true number of
distinct sources, ICA seems to perform very poorly.  We also show that
ICA performs poorly on time series with lengths comparable to the
number of frames in most fMRI time series.


email: mreimers@vcu.edu




---------------------------------------------------------------------------
---------------------------------------------------------------------------



Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer

Robert Rosenbaum, Jonathan Rubin, and Brent Doiron

University of Pittsburgh and Center for the Neural Basis of Cognition

Depletion of synaptic neurotransmitter vesicles induces a form of
short term depression in synapses throughout the nervous system.  This
plasticity affects the way in which synapses filter presynaptic spike
trains.  The filtering properties of short term depression are often
studied using a deterministic synapse model that predicts the mean
synaptic response to a presynaptic spike train, but ignores
variability introduced by the probabilistic nature of vesicle release
and stochasticity in synaptic recovery time.  We show that this
additional variability has important consequences for the way in which
synapses filter presynaptic information.  

In particular, a synapse model with stochastic vesicle dynamics
suppresses information encoded at lower frequencies more than
information encoded at higher frequencies, while a widely used model
that ignores this stochasticity transfers information encoded at any
frequency equally well.  These distinctions between the two models
persist even when large numbers of synaptic contacts are considered.
In addition, a stochastic synapse model drastically reduces
correlations between the synaptic currents across a pair of cells'
membranes, suggesting a mechanism through which asynchrony can be
achieved in densely connected networks.  Our study provides strong
evidence that the stochastic nature of neurotransmitter vesicle dynamics
must be considered when analyzing the information flow across a
synapse.
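
For concreteness, the sketch below contrasts the two model classes
discussed: a deterministic mean-field depression model and a
stochastic version with a finite vesicle pool, probabilistic release,
and stochastic recovery times. All parameter values are illustrative.

# Deterministic mean-field synaptic depression vs. a stochastic model with a
# finite vesicle pool, probabilistic release, and exponential recovery.
import numpy as np

def deterministic_response(spike_times, p_r=0.5, tau_d=0.5):
    x, last, out = 1.0, 0.0, []
    for t in spike_times:
        x = 1.0 - (1.0 - x) * np.exp(-(t - last) / tau_d)  # recovery since last spike
        out.append(p_r * x)                                # mean response amplitude
        x *= (1.0 - p_r)                                   # mean depletion
        last = t
    return np.array(out)

def stochastic_response(spike_times, n_sites=5, p_r=0.5, tau_d=0.5, rng=None):
    rng = rng or np.random.default_rng()
    occupied = np.ones(n_sites, bool)
    recover_at = np.full(n_sites, -np.inf)
    out = []
    for t in spike_times:
        occupied |= (recover_at <= t)                      # sites recovered by now
        release = occupied & (rng.random(n_sites) < p_r)   # probabilistic release
        out.append(release.sum())                          # quanta released
        occupied[release] = False
        recover_at[release] = t + rng.exponential(tau_d, release.sum())
    return np.array(out)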



email: robertr@pitt.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------






Reviewing therapeutic implications of nutraceuticals on drug conflicts
associated with brain cancers using a bioinformatics approach

Manisha Sapre, Monika Anand.
Georgia State University, Winship Cancer Institute of Emory University


Cancer is a global epidemic and a leading cause of death
worldwide. In spite of recent advancements in science, the number of
cancer deaths increases every year. To overcome this, research
activities combining inputs from different fields, including basic
biomedical research, information technology, and computing, are being
undertaken. These efforts should yield better techniques to prevent,
diagnose, and treat cancers.


Of all the different types of cancer, brain cancers are among the most
difficult to treat. The central nervous system comprises the brain and
spinal cord: the brain is the master control and the spinal cord the
signaling highway, with gatekeepers of this signaling expressway
acting as toll booths. Pharmacotherapy of brain cancers is limited by
the blood-brain barrier (BBB). Several anti-cancer drugs are available
for treating different types of cancer; however, the choices are
limited when it comes to treating cancers of the brain. Most
chemically formulated anti-cancer drugs are unable to cross this
gatekeeper: the BBB prevents the entry of many molecules into the
brain and is a primary obstacle to drug delivery, limiting the entry
and deposition of commonly prescribed anti-cancer drugs (such as
paclitaxel) into the brain.


Increasing numbers of patients are being diagnosed with CNS-related
cancers, and new cases of brain metastases arising from the spread of
cancers from their original site (such as lung cancer spreading to the
brain) are also increasing.

Standard therapeutic options for brain tumors, such as surgery,
radiation, and chemotherapy, have long-term complications including
chemoresistance (such as that caused by temozolomide), neurological
toxicity, cognitive impairment, and cerebellar dysfunction. Effective
treatment options are needed because of the drug conflicts in
CNS-associated cancers.

In recent years much work has been done on nutraceuticals, with major
developments in FDA-approved cellular-level nutraceuticals for the
prevention and treatment of cancer.

 

This poster reviews the therapeutic implications of including
nutraceuticals as an effective approach to bypass BBB-associated drug
resistance in brain cancer, using multiple software analysis tools.

 

E-mail: msapre2@student.gsu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Detection limit for rate fluctuations in a spike train

Toshiaki Shintani (Kyoto University)
Shigeru Shinomoto (Kyoto University)


There are great demands for estimating the time-dependent rate
underlying neuronal spike signals. Proper estimation methods are
designed to avoid overfitting to the spike signals and accordingly
they ignore small or rapid fluctuations in the underlying rate when
the spike train is sparse. For sequences of events derived from
inhomogeneous rate processes, we estimate critical values of amplitude
and timescale of rate fluctuations, below which proper rate estimators
are unable to detect fluctuations.

To determine whether there is a common limit for detecting rate
fluctuations for inhomogeneous Poisson processes, we examine three
principled rate estimation methods: the histogram method with the bin
size optimized with respect to a mean integrated square error (MISE),
the empirical Bayes rate estimator, and the variational Bayes hidden
Markov model (VB-HMM), all of which are designed to minimize estimation
error.
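
As a concrete reference point for the first method, bin-width
optimization for a time histogram can be carried out by minimizing a
closed-form estimate of the MISE. The sketch below uses the cost
function of Shimazaki and Shinomoto (2007), a closely related
criterion, and is not necessarily identical to the estimator of Koyama
and Shinomoto (2004).

# MISE-based bin-width selection for a spike-time histogram, using the
# Shimazaki-Shinomoto cost C(w) = (2*k_mean - k_var) / w**2, where k_mean and
# k_var are the mean and (biased) variance of the bin counts.
import numpy as np

def optimal_bin_width(spike_times, t_max, widths):
    costs = []
    for w in widths:
        k, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + w, w))
        costs.append((2.0 * k.mean() - k.var()) / w ** 2)
    return widths[int(np.argmin(costs))]

# Example: spikes from a sinusoidally modulated Poisson process (thinning).
rng = np.random.default_rng(1)
rate_max, t_max = 60.0, 10.0
n = rng.poisson(rate_max * t_max)
t = np.sort(rng.uniform(0.0, t_max, n))
lam = 40.0 * (1.0 + 0.3 * np.sin(2.0 * np.pi * t))
spikes = t[rng.random(n) < lam / rate_max]
print(optimal_bin_width(spikes, t_max, np.linspace(0.05, 2.0, 40)))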

The conditions for the detectable-undetectable phase transition turn
out to be identical between the MISE-optimal histogram method and the
empirical Bayes rate estimator. For the VB-HMM, we obtained the
detection limit numerically and found that the detection limit is
comparable to that of the others. The consistency among these three
principled methods suggests the presence of a theoretical limit for
detecting rate fluctuations.

References:

T. Shintani and S. Shinomoto (2012) Detection limit for rate fluctuations
in inhomogeneous Poisson processes, Phys. Rev. E, 85, 041139.

S. Koyama and S. Shinomoto (2004) Histogram bin width selection for
time-dependent Poisson processes, J. Phys. A, 37:7255-7265.

S. Koyama, T. Shimokawa, and S. Shinomoto (2007) Phase transitions in the
estimation of event rate: a path integral analysis, J. Phys. A,
40:F383-F390.

email: shintani@ton.scphys.kyoto-u.ac.jp

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Using blind source separation to improve statistical power for EEG analysis

Adam C. Snyder PhD, Visual Neuroscience Laboratory, Dept. of
Ophthalmology, Univ. of Pittsburgh

John J. Foxe PhD, Sheryl and Daniel R. Tishman Cognitive
Neurophysiology Laboratory, Depts. of Pediatrics and Neuroscience,
Albert Einstein College of Medicine


EEG signals observed at the scalp are a superposition of contributions
 from many underlying current sources. In event-related averages, it
 can be difficult or impossible to isolate contributions from these
 disparate sources. Consequently, when statistically comparing EEG
 effects across two or more conditions, it is possible that one
 current source has a large within-condition variance that obscures a
 smaller between-conditions effect arising from another source. Blind
 source separation is a statistical method for disentangling
 superposed signals at a single-trial level. Using blind source
 separation as an early processing step can unmask
 statistically significant effects that are obscured by other,
 highly-variable sources nearby. Here we describe how we used one such
 blind source separation method, independent components analysis
 (ICA), to uncover a robust effect of feature-based visual selective
 attention on 8-14 Hz ‘alpha’-band oscillatory activity in the human
 dorsal and ventral visual processing streams.
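
Schematically, the workflow is: learn an unmixing matrix from the
concatenated data, apply it to single trials, and run the condition
comparison on a component's band power rather than on scalp channels.
A minimal sketch, with the names, band edges, and choice of FastICA
all being assumptions rather than the exact analysis used here:

# Sketch: learn an ICA unmixing matrix, apply it to single trials, and compare
# alpha-band (8-14 Hz) power of one component across two conditions.
import numpy as np
from scipy import signal
from scipy.stats import ttest_ind
from sklearn.decomposition import FastICA

def alpha_power(x, fs):
    f, p = signal.welch(x, fs=fs, nperseg=min(len(x), 256))
    return p[(f >= 8) & (f <= 14)].mean()

def compare_component(epochs_a, epochs_b, fs=250.0, comp=0):
    # epochs_*: arrays of shape (n_trials, n_channels, n_samples).
    n_ch = epochs_a.shape[1]
    X = np.concatenate([epochs_a, epochs_b]).transpose(0, 2, 1).reshape(-1, n_ch)
    ica = FastICA(n_components=n_ch, random_state=0).fit(X)

    def comp_alpha(epoch):
        sources = ica.transform(epoch.T).T    # unmix one trial: (components, samples)
        return alpha_power(sources[comp], fs)

    a = [comp_alpha(e) for e in epochs_a]
    b = [comp_alpha(e) for e in epochs_b]
    return ttest_ind(a, b)                    # compare conditions on the component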


email: adam@adamcsnyder.com



---------------------------------------------------------------------------
---------------------------------------------------------------------------



Mean, Covariance and Variance in Neural Processes: a New 
Causal Decomposition for Multivariate Data

Andrew Sornborger

Abstract: In modern neural imaging experiments, gigabytes of 
multivariate data are acquired in minutes. Typically, dimensional 
reduction methods must be used in order to make the data 
tractable. Although neuronal models have causal structure, 
standard data reduction methods such as the singular value 
decomposition (SVD), independent component analysis (ICA) or 
non-negative matrix factorization (NMF) only make use of 
information at zero-lag. In this presentation, I will present a new 
non-parametric decomposition that makes use of causal 
information to improve estimates of spatial structure in 
multivariate imaging data. The new decomposition applies 
multitaper spectral methods to the statistical detection and 
estimation of significant causal structures in both the time-
dependent mean signal and the covariance of the background 
stochastic signal latent in neural imaging data. We will compare 
results from standard methods and from our new method, 
demonstrating how it has advanced our understanding of 
seizure-related calcium activity in the larval zebrafish.
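
As a toy caricature of the zero-lag limitation (not the multitaper
decomposition itself): an SVD or PCA of the zero-lag covariance cannot
see lead-lag structure, whereas lagged cross-covariances can, as the
two-channel example below illustrates. All values are illustrative.

# Toy illustration: channel 1 drives channel 0 with a delay of d samples.
# The zero-lag covariance looks nearly diagonal, but the lag-d cross-covariance
# reveals the directed lead-lag relationship.
import numpy as np

rng = np.random.default_rng(2)
T, d = 5000, 8
drive = rng.normal(size=T + d)
x1 = drive[d:] + 0.2 * rng.normal(size=T)    # driving channel
x0 = drive[:T] + 0.2 * rng.normal(size=T)    # lags x1 by d samples
X = np.column_stack([x0, x1])
X -= X.mean(axis=0)

C0 = X.T @ X / T                             # zero-lag covariance
Cd = X[:-d].T @ X[d:] / (T - d)              # lag-d cross-covariance

print(np.round(C0, 2))   # nearly diagonal: channels look unrelated at zero lag
print(np.round(Cd, 2))   # large (1, 0) entry: x1 at time t predicts x0 at t + d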


Email: ats@math.uga.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------





Fitting dynamic models to extracellular recordings in mouse cortex
during optogenetic stimulation

Emily Stephen, Jason Ritt, Uri Eden

This study aims to improve our understanding of the interactions
 between excitatory and inhibitory networks in sensory cortex, by
 creating statistical models of single unit activity in mouse
 somatosensory cortex during optogenetic stimulation of inhibitory
 interneurons. Neural spiking activity was recorded extracellularly
 from putative excitatory neurons in the barrel cortex of mice, while
 stimulating inhibitory (parvalbumin positive) interneurons in the
 same region of cortex by pulse trains of varying frequencies. The
 response of a neuron to a single pulse is characterized by an initial
 suppression in firing rate, followed by a rebound and return to
 baseline. The nature of this interaction was investigated in three
 ways: (1) fit of the peristimulus time histogram by Bayesian Adaptive
 Regression Splines, (2) comparison of inhomogeneous Poisson models
 under a generalized linear model framework, and (3) fit of a simple
 2-dimensional dynamical systems model of firing intensity using
 particle filtering. Each approach contributes a slightly different
 perspective on the behavior of the neuron.
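
For approach (2), a minimal sketch of one such inhomogeneous Poisson
GLM, in which the log firing rate in each bin depends on the time
elapsed since the most recent light pulse through indicator
covariates, is given below; the bin size, latency bands, and use of
statsmodels are assumptions, not the exact specification used here.

# Poisson GLM: log firing rate per 1 ms bin depends on time since the most
# recent light pulse, represented by indicator covariates for latency bands.
import numpy as np
import statsmodels.api as sm

def fit_pulse_glm(spike_counts, pulse_times_ms, bin_ms=1.0,
                  bands=((0, 10), (10, 25), (25, 50), (50, 100))):
    t = np.arange(len(spike_counts)) * bin_ms
    pulses = np.asarray(pulse_times_ms, dtype=float)
    # Time since the most recent pulse for every bin (inf before first pulse).
    idx = np.searchsorted(pulses, t, side="right") - 1
    since = np.where(idx >= 0, t - pulses[np.maximum(idx, 0)], np.inf)
    X = np.column_stack([(since >= lo) & (since < hi)
                         for lo, hi in bands]).astype(float)
    X = sm.add_constant(X)
    model = sm.GLM(spike_counts, X, family=sm.families.Poisson())
    return model.fit()   # coefficients: log rate modulation per latency band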

email: emilyps14@gmail.com

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Towards parametric models of V1 responses to natural movies
Ian Stevenson 1, Urs Koster 1, Charles Gray 2, and Bruno Olshausen 1

1 Redwood Center for Theoretical Neuroscience, UC Berkeley
2 Department of Cell Biology and Neuroscience, Montana State University

Over the past few decades, the responses of neurons in primary visual
 cortex have been extensively characterized using simple, parametric
 stimuli. However, verifying and extending these response properties
 with natural scenes has been difficult. Here we present an approach
 to modeling V1 responses based on feature extraction and spatial
 pooling. This approach makes the strong assumption that receptive
 fields (RF) are spatially localized to a Gaussian window, and that
 features, such as orientation energy and direction of motion, are
 pooled across this window and combined to generate a
 linear-nonlinear-Poisson response. This approach separates the
 problem of visual receptive field estimation into a non-convex, window
 optimization step and a low-dimensional, convex optimization step to
 estimate tuning properties. While this class of models is not as
 flexible as typical RF estimation techniques based on systems
 identification, we can directly model many tuning properties
 previously characterized with simple stimuli, such as orientation
 tuning, direction selectivity, and contrast gain control, in the
 context of natural scenes. We compare results from this new model
 with results from a standard pixel LNP model, as well as Fourier
 power and phase-separated Fourier models. This approach provides a
 link between low-dimensional, parametric descriptions of V1
 selectivity to simple stimuli and complex natural scene responses.
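
Schematically, the model class combines Gaussian-windowed spatial
pooling of a feature map with a low-dimensional linear weighting, a
pointwise nonlinearity, and Poisson spiking. The sketch below
illustrates only that forward structure; array shapes and names are
assumptions, and it is not the fitted model.

# Forward structure: Gaussian-windowed pooling of a feature map (e.g.,
# orientation energy per frame), a linear weighting, an exponential
# nonlinearity, and Poisson spike counts.
import numpy as np

def gaussian_window(h, w, cy, cx, sigma):
    y, x = np.mgrid[0:h, 0:w]
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def lnp_rate(feature_maps, window, weights, bias):
    # feature_maps: (n_frames, n_features, h, w); window: (h, w).
    pooled = (feature_maps * window).sum(axis=(2, 3))   # (n_frames, n_features)
    return np.exp(pooled @ weights + bias)              # expected spikes/frame

def sample_spikes(rate, rng=None):
    rng = rng or np.random.default_rng()
    return rng.poisson(rate)                            # Poisson spiking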





email: i-stevenson@berkeley.edu
---------------------------------------------------------------------------
---------------------------------------------------------------------------



Using a Low-cost EEG Sensor to Detect Mental States

Lucas Tan

The ability to detect mental states, whether cognitive or affective,
would be useful in intelligent tutoring and many other domains.  Newly
available, inexpensive, single-channel, dry-electrode devices make EEG
feasible to use outside the lab, for example in schools.  Mostow et
al. (2011) used such a device to record the EEG of adults and children
reading easy and hard text; the purpose of this experimental
manipulation was to induce distinct mental states.  They trained
classifiers to predict from the reader’s EEG signal whether the text
being read was easy or hard.  The classifiers achieved better than
chance accuracy despite the simplicity of the machine learning
employed.  The goal of the proposed thesis is to achieve significantly
higher classification accuracy on the same data set by exploiting the
time-varying structure of the EEG signals.
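
As a baseline of the kind described (not the classifiers of Mostow et
al., 2011), one could window the single-channel EEG, compute
band-power features per window, and train a regularized classifier; a
sketch with assumed band definitions and window length:

# Baseline sketch: window a single-channel EEG signal, compute band-power
# features per window, and train logistic regression to predict easy vs. hard.
import numpy as np
from scipy import signal
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_power_features(eeg, fs, win_s=2.0):
    w = int(win_s * fs)
    feats = []
    for start in range(0, len(eeg) - w + 1, w):
        f, p = signal.welch(eeg[start:start + w], fs=fs, nperseg=w)
        feats.append([p[(f >= lo) & (f < hi)].mean()
                      for lo, hi in BANDS.values()])
    return np.log(np.asarray(feats))          # log power stabilizes variance

# Given per-window features X and easy/hard labels y:
# print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())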

Affiliations: Project LISTEN, Carnegie Mellon University. 

Presenter: Lucas Tan
Advisor: Prof. Jack Mostow


email: btan@andrew.cmu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Simple automatic spike sorting methods improve decoding accuracy in a
3D movement task

Sonia Todorova*
Joint work with Valerie Ventura*+ and Steven Chase**+

* Department of Statistics, Carnegie Mellon University
** Department of Biomedical Engineering, Carnegie Mellon University
+ The Center for the Neural Basis of Cognition, Carnegie Mellon University

Spike sorting to recover single-unit activity from electrode array
recordings is a difficult task. In fact, it is now common practice to
decode neural activity directly from the electrode signal in order to
minimize computation and labor intensive preprocessing (Fraser et al.,
2009). Our concern is that this simplifying practice may cost some
efficiency. To investigate this, we study the efficiency of the
standard cosine tuning Kalman filter decoding algorithm under several
spike sorting procedures.
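
For context, a minimal version of such a decoder is sketched below: a
linear-Gaussian state model on velocity and a linear observation model
relating binned spike counts to velocity, run through the standard
Kalman recursions. The matrices A, W, H, and Q would be fit from
training data; nothing here is specific to the sorting procedures
compared in this work.

# Minimal Kalman-filter decoder: state model v_t = A v_{t-1} + w and
# observation model y_t = H v_t + q, with y_t the binned spike counts.
import numpy as np

def kalman_decode(Y, A, W, H, Q, v0, P0):
    # Y: (T, n_units) binned counts; returns (T, dim) decoded velocities.
    v, P, out = v0, P0, []
    for y in Y:
        # Predict.
        v = A @ v
        P = A @ P @ A.T + W
        # Update with the observed spike counts.
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)
        v = v + K @ (y - H @ v)
        P = (np.eye(len(v)) - K @ H) @ P
        out.append(v.copy())
    return np.array(out)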

We spike sort based on clustering a low-dimensional feature
representation of the spike waveforms together with tuning information
(Ventura, 2009). The quality of recordings on some channels makes
clustering the waveforms trivial even in a one-dimensional feature
representation. Sorting the spikes based on waveform amplitude alone
reduces the mean squared error of predicted velocity by 18%. The main
appeal of this simple approach is that it does not require a
decomposition into principal components. We explore the benefits of
more refined sorting procedures considering the tuning properties of
all units recorded on a channel. When the waveform clusters overlap,
including tuning information in the sorting can produce sharper tuning
curves and thus better decoding results. Training the tuning model for
sorting does not increase the complexity of computation because it is
a necessary part of the decoding algorithm.

Our results are based on data from a behavioral experiment, performed
at Andrew Schwartz's MotorLab. A macaque monkey performs a center-out
and out-center target reaching task with 26 targets in a virtual 3D
environment. The recorded neural activity consists of all action
potentials detected above a channel-specific threshold on a 96-channel
Utah array in the primary motor cortex.

email: sktodoro@stat.cmu.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Predicting the physiology of diverse neuron types using differences in
gene expression

Shreejoy J. Tripathy1,2, Judy Savitskaya3, Richard C. Gerkin2,3 and
Nathaniel N. Urban2,3,4 
1) Program in Neural Computation, Carnegie
Mellon University, Pittsburgh, PA 
2) Center for the Neural Basis of
Cognition, Pittsburgh, PA 
3) Department of Biological Sciences,
Carnegie Mellon University, Pittsburgh, PA 
4) Department of
Neuroscience, University of Pittsburgh, Pittsburgh, PA

Brains achieve efficient function through implementing a division of
labor, in which different neurons serve distinct computational
roles. One striking way in which neuron types differ is in their
electrophysiology properties. These properties arise through
combinations of ion channels that collectively define the computations
that a neuron performs on its inputs. Though the electrophysiology of
many neuron types has been previously characterized, these data exist
across thousands of journal articles, making cross-study
neuron-to-neuron comparisons difficult. Furthermore, the recent
collection of datasets describing the differential expression of each
gene in the genome throughout the brain raises the exciting
possibility of linking neuron genetics with neuron function.

Here, using a combination of manual and automated methods, we describe
a methodology to curate neuron electrophysiology information into a
centralized database. We then combine this information with datasets
on neuron gene expression from the Allen Brain Institute with the goal
of predicting differences in neuron physiology from differences in
gene expression. Using purely automated approaches, we show that
electrophysiology properties can in fact be predicted from gene
expression. For example, we show that the uncertainty in a neuron’s
resting membrane potential can be lowered from an average of 10.5 mV
to 8 mV when incorporating information about neuronal gene
expression. These findings suggest that more refined and more accurate
data curation approaches can possibly further reduce the uncertainty
of electrophysiology parameters. Ultimately, we hope that these
methods may allow for neuron physiology to be determined from existing
gene expression datasets alone, in the absence of additional
neurophysiology experiments.
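
Schematically, the prediction step can be framed as cross-validated
regularized regression of an electrophysiological property on
gene-expression profiles. The sketch below uses ridge regression as an
assumed stand-in for the approach; the 10.5 mV and 8 mV figures quoted
above come from the analysis described here, not from this code.

# Cross-validated ridge regression of an electrophysiological property (e.g.,
# resting membrane potential) on gene-expression profiles.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def prediction_uncertainty(expression, vrest):
    # expression: (n_neuron_types, n_genes); vrest: (n_neuron_types,) in mV.
    baseline = np.std(vrest)                 # spread with no predictors
    model = RidgeCV(alphas=np.logspace(-2, 4, 20))
    pred = cross_val_predict(model, expression, vrest, cv=5)
    residual = np.std(vrest - pred)          # spread after using expression
    return baseline, residual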



email: stripathy@cmu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------



Feedback inhibition induces memory in a simple neural
model circuit

Richard Watson
Dept of Mathematics, University of California, Davis


The relationship between patterns of neural spikes (carrying
information) and the anatomical configurations that underlie them is
fundamental to understanding how brains work. Feedback inhibition
circuits are a specific variety of neural connectivity which appear
ubiquitously throughout nervous systems in a range of species. What
behavioral characteristics are typical of these subsystems? Attempts
to date to rigorously examine such phenomena have been limited in
their techniques or assumptions. A method recently adapted and
introduced to the neuroscience community non-parametrically models the
computational structure of spike trains as an epsilon-Machine, the
minimal Hidden Markov Model capable of identically statistically
reproducing observed behavior. Here we employ this technique to
dissect the behavioral character of a simple Leaky Integrate-and-Fire
model circuit of a neuron with feedback inhibition. We examine, and
explain analytically why, the state space of our system grows in
size and complexity as the delay time on the inhibition is
increased. Memory is extended and the system is capable of storing
more information related to the input.
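
A minimal version of such a circuit is sketched below: a leaky
integrate-and-fire neuron whose own spikes feed back as inhibitory
pulses after a fixed delay. Parameters and the form of the inhibition
are illustrative, not the exact model analyzed here.

# Leaky integrate-and-fire neuron with delayed self-inhibition: each spike
# schedules an inhibitory pulse that arrives delay_ms later.
import numpy as np
from collections import deque

def lif_with_feedback(input_current, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0,
                      delay_ms=5.0, g_inh=0.8):
    delay = int(delay_ms / dt)
    v = 0.0
    pending = deque()                 # arrival steps of scheduled inhibition
    spikes = []
    for i, I in enumerate(input_current):
        inh = 0.0
        if pending and pending[0] == i:
            inh = g_inh               # delayed inhibitory kick arrives now
            pending.popleft()
        v += dt / tau * (-v + I) - inh
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
            pending.append(i + delay) # inhibition arrives after the delay
    return np.array(spikes)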


email: watsonr@alum.mit.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Spurious Correlations in Two-Photon Calcium Imaging

Bronwyn Woods (1,3), Alberto Vazquez (2,3), William F Eddy (1,3), Seong-Gi Kim (2,3)

(1) Department of Statistics, Carnegie Mellon University
(2) Department of Radiology, University of Pittsburgh
(3) Center for the Neural Basis of Cognition, Pittsburgh PA
 

In vivo two-photon microscopy using calcium sensitive dyes allows
simultaneous functional imaging of tens to hundreds of neurons.  This
provides a great opportunity to study the properties and dynamics of
local networks.  When studying these networks, the correlation of
activity between neurons is very frequently of interest.  However,
extracting accurate correlation values from in vivo calcium imaging is
often difficult.  Brain motion resulting from physiological processes
(such as respiration) can introduce artifacts into the data which
yield spurious and misleading correlations.  We demonstrate this
problem using example in vivo calcium imaging from rat somatosensory
cortex.  We compare existing techniques for removing physiology
artifacts from the data, including simple filters, regression
techniques, and PCA.  Finally, we propose new directions for future
work on this problem.
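
A toy version of the problem and of one regression-based correction:
two independent fluorescence traces share a common motion artifact,
which inflates their correlation until the artifact is regressed out
of each trace. Signal shapes and amplitudes are illustrative.

# Two independent fluorescence traces contaminated by a common motion artifact
# show an inflated correlation, which largely disappears after regressing the
# artifact out of each trace.
import numpy as np

rng = np.random.default_rng(3)
T = 2000
motion = np.sin(2 * np.pi * 1.2 * np.arange(T) / 30.0)  # ~1.2 Hz at 30 Hz frames
f1 = rng.normal(size=T) + 1.5 * motion
f2 = rng.normal(size=T) + 1.5 * motion

def regress_out(y, x):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

print(np.corrcoef(f1, f2)[0, 1])                         # inflated by the artifact
print(np.corrcoef(regress_out(f1, motion),
                  regress_out(f2, motion))[0, 1])        # near zero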

email: bwoods@cmu.edu
---------------------------------------------------------------------------
---------------------------------------------------------------------------




Experimental and Computational Analysis of Mouse Sleep-Wake Dynamics

Farid Yaghouby1, Ting Zhang2, Martin Striz2, Kevin Donohue3, 
Bruce O'Hara2 and Sridhar Sunderam1.  

University of Kentucky 
(1) Center for Biomedical Engineering, 
(2) Department of Biology, and
(3) Electrical and Computer Engineering.

Genetic and behavioral screening of mice play important roles in sleep
research, but the need for invasive electrophysiological (EEG/EMG)
measurements for determination of sleep-wake and behavioral state
limits the scope and rate of experimentation. In this study we explore
the utility of a noninvasive method based on the signal from a
piezoelectric sensor on the cage floor for scoring sleep-wake behavior
in mice. It was previously demonstrated that the piezo signal can
accurately discriminate sleep from wake activity; however, this was
verified mostly by visual observation. Here we perform a more
objective validation by correlating piezo measurements with EMG
activity, which is dramatically suppressed during sleep. Furthermore,
the piezo sensor is sensitive to respiration-related thoracic
movements. Since breathing is relatively irregular in REM sleep
compared to non-REM, we extract piezo features that reflect breathing
regularity to try to distinguish between these sleep states. We
validate our methods against simultaneous video/EEG/EMG measurement,
which constitute the gold standard for scoring sleep. But rather than
rely on subjective visual scoring to determine state, we use an
unsupervised probabilistic model, the hidden Markov model (HMM), to
automatically partition time series of extracted EEG/EMG features into
REM, non-REM and wake states. A similar HMM, estimated exclusively
from piezo features of instantaneous energy and breathing regularity,
displayed dynamical stages similar to REM/non-REM sleep,
transient arousal, and wakefulness. These preliminary results suggest
that a combination of piezoelectric measurements and computational
modeling could yield a novel noninvasive method for analysis of sleep
and sleep-related disorders.
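
A minimal version of the unsupervised staging step, assuming the
hmmlearn package and a per-epoch feature matrix that has already been
computed (not the exact features or model used here), is sketched
below.

# Fit a 3-state Gaussian HMM to per-epoch features (e.g., EEG/EMG or piezo
# energy and breathing regularity) and read off the decoded state sequence.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def stage_with_hmm(features, n_states=3, random_state=0):
    # features: (n_epochs, n_features), e.g. log EMG power, delta power, ...
    hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                      n_iter=200, random_state=random_state)
    hmm.fit(features)
    states = hmm.predict(features)           # Viterbi state per epoch
    return states, hmm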

email: f.yaghouby@uky.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------


Inferring Brain Networks through Graphical Models with Hidden Variables

 Justin Dauwels, *Hang Yu*, Xueou Wang
 School of Electrical and Electronic Engineering
 School of Physical and Mathematical Sciences
 Nanyang Technological University, 639798, Singapore


Inferring the interactions between different brain areas is an
important step towards understanding brain activity. Most often,
signals can only be measured from some specific brain areas (e.g.,
cortex in the case of scalp electroencephalograms). However, those
signals may be affected by brain areas from which no measurements are
available (e.g., deeper areas such as hippocampus). In this paper, the
latter are described as hidden variables in a graphical model; such
model quanties the statistical structure in the neural recordings,
conditioned on hidden variables, which are inferred in an automated
fashion from the data.
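
For orientation only: without hidden variables, conditional
independence structure among the recorded channels can be read off the
zeros of a sparse precision matrix, e.g., via the graphical lasso as
sketched below. The model in this work additionally infers latent
nodes, which this baseline does not attempt.

# Baseline without hidden variables: estimate a sparse precision matrix over
# the recorded channels with the graphical lasso; zeros indicate conditional
# independence between channel pairs.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

def estimate_graph(data):
    # data: (n_samples, n_channels) features derived from the EEG.
    gl = GraphicalLassoCV().fit(data)
    precision = gl.precision_
    adjacency = np.abs(precision) > 1e-8     # edge <=> partial dependence
    np.fill_diagonal(adjacency, False)
    return adjacency, precision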

As an illustration, electroencephalograms (EEG) of Alzheimer's disease
patients are considered. It is shown that the number of hidden
variables in AD EEG is not significantly different from that in healthy
EEG. However, there are fewer interactions between the brain areas,
conditioned on those hidden variables.

Email: fhlyhv@gmail.com


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Bayesian learning in assisted brain-computer interface tasks

*Yin Zhang* (Carnegie Mellon University)
Andrew B. Schwartz (University of Pittsburgh)
Steve M. Chase (Carnegie Mellon University)
Robert E. Kass (Carnegie Mellon University)

Successful implementation of a brain-computer interface depends
critically on the subject's ability to learn how to modulate the
neurons controlling the device. However, the subject's learning
process is probably the least understood aspect of the control
loop. How should training be adjusted to facilitate dexterous control
of a prosthetic device? An effective training schedule should
manipulate the difficulty of the task to provide enough information to
guide improvement without overwhelming the subject. In this paper, we
introduce a Bayesian framework for modeling the closed-loop BCI
learning process that treats the subject as a bandwidth-limited
communication channel. We then develop an adaptive algorithm to find
the optimal difficulty-schedule for performance
improvement. Simulation results demonstrate that our algorithm yields
faster learning rates than several other heuristic training schedules,
and provides insight into the factors that might affect the learning
process.

Email: yinzhang@cs.cmu.edu


---------------------------------------------------------------------------
---------------------------------------------------------------------------


Bayesian graphical models for multivariate functional data

*Hongxiao Zhu*, David B Dunson and Nathaniel Strawn

Abstract: 

In many applications there is interest in the dependence structure in
multivariate functional data. For vector data, conditional
independence relationships can be inferred through allowing zeros in
the precision matrix in a Gaussian graphical model. Bayesian methods
can allow unknown locations of zeros relying on hyper inverse-Wishart
priors for the covariance. To generalize these methods to multivariate
functional data, we propose a multivariate Gaussian process with an
extended block hyper-inverse Wishart prior for the covariance
structure. Theoretical properties of this prior are considered. Posterior 
computation is performed in the frequency domain using orthogonal 
basis expansions, with Markov chain Monte Carlo algorithms developed 
with and without measurement errors. The methods are evaluated 
through simulation studies and are applied to Electroencephalography data.

Email: hz52@stat.duke.edu

---------------------------------------------------------------------------
---------------------------------------------------------------------------



Information transmission using non-Poisson regular firing

Shinsuke Koyama

In many cortical areas neural spike trains are non-Poisson. In this 
article we investigate a possible benefit of non-Poisson spiking for 
information transmission by studying the minimal rate fluctuation that 
can be detected by a downstream optimal observer, i.e., a Bayesian 
estimator. The idea is that an inhomogeneous Poisson process may make it 
difficult for downstream decoders to resolve subtle changes in rate 
fluctuation, but by using a more regular non-Poisson process the nervous 
system can make rate fluctuations easier to detect and, therefore, more 
informative. We evaluate the degree to which regular firing reduces the 
rate fluctuation detection threshold. We find that the detection
threshold is reduced as firing becomes more regular, that is, as the
coefficient of variation of the interspike intervals decreases.
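
As a concrete reference for the setting: spike trains with a
prescribed regularity can be generated by time-rescaling a gamma
renewal process, where the shape parameter kappa sets the ISI
coefficient of variation (CV = 1/sqrt(kappa)) and kappa = 1 recovers
the inhomogeneous Poisson case. A sketch with illustrative names:

# Generate a rate-modulated spike train with controlled regularity by
# time-rescaling a unit-rate gamma renewal process.
import numpy as np

def gamma_renewal_spikes(rate_fn, t_max, kappa=4.0, dt=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    t_grid = np.arange(0.0, t_max, dt)
    cum = np.cumsum(rate_fn(t_grid)) * dt        # rescaled (operational) time
    # Unit-rate gamma renewal process in rescaled time; for simplicity the
    # first interval is drawn from the ordinary ISI distribution.
    events, s = [], rng.gamma(kappa, 1.0 / kappa)
    while s < cum[-1]:
        events.append(s)
        s += rng.gamma(kappa, 1.0 / kappa)
    # Map rescaled event times back to real time.
    return np.interp(events, cum, t_grid)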