Poster Abstracts:

A multi-dimensional surface-based method for determining brain lateralization

Essang Akpan
University of Pittsburgh
The lateralization of cognitive functions in the human brain has long influenced 
neuroscience theory and practice. Examining lateralization of a structure or function 
requires two contralateral regions (left and right homologues) to be identified in a 
particular brain. In magnetic resonance imaging (MRI) scans, a common approach is 
to use distance from the brain’s midline to identify regions that have the same 
separating distance from this anatomical divider in standardized space. However, 
this approach has two key limitations. First, in three-dimensional voxel space, 
the convoluted shape of the neocortex is not taken into account, so two voxels 
can appear adjacent in voxel space, yet actually be further apart after flattening 
cortical folds into a surface. Second, distance from the midline is only one spatial 
dimension. Additional potential anatomical markers (and associated distances) are 
typically unused, ignoring a potentially valuable source of information for finding 
associated homologues. We describe a technique for identifying left and right 
homologues in MRI brain images using a novel Lateralization by Surface Fingerprint 
(LSF) technique. LSF draws on FreeSurfer-generated cortical surface parcellations 
to identify left and right homologues in a person’s native brain space. After a 
subject’s MRI scan is reconstructed to form a cortical flat map, each surface 
vertex is assigned a vector that reflects its Euclidean (surface) distance 
from every segmented and labeled region in its hemisphere. This set of distances 
is used to identify a vertex with an equivalent set of distances in the other 
hemisphere. Left and right vertices are then paired based on this distance vector. 
When comparing this new LSF coordinate system to Talairach coordinates, we found 
that LSF more accurately pairs left and right regions (based on their FreeSurfer 
labels), allowing us to identify interhemispheric regions in a more precise 
multidimensional manner. This approach has the potential to help investigators 
examine anatomical and functional lateralization by taking into account multiple 
distance dimensions in a person’s unique surface space. 
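
As a rough sketch of the matching step described above, each vertex's fingerprint (its vector of surface distances to every labeled region) can be paired to the most similar fingerprint in the opposite hemisphere by a nearest-neighbour search. The function name, array layout, and toy data below are illustrative, not taken from the poster:

```python
import numpy as np

def pair_homologues(left_fp, right_fp):
    """Match each left-hemisphere vertex to the right-hemisphere vertex whose
    surface-distance fingerprint is closest in Euclidean terms.
    left_fp: (n_left, n_regions); right_fp: (n_right, n_regions)."""
    diffs = left_fp[:, None, :] - right_fp[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)      # (n_left, n_right)
    return dists.argmin(axis=1)                # matched right-vertex index

# Toy example: two left vertices whose fingerprints match right vertices 1 and 0.
left = np.array([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
right = np.array([[3.1, 4.0, 4.9], [0.0, 1.1, 2.0]])
matches = pair_homologues(left, right)
```

In practice the distances would be geodesic distances along the cortical surface, and the matching could be restricted to anatomically plausible candidates.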


Rules of Thumb in Assessing Goodness-of-Fit in Marked-Point Process Models

Yalda Amidi	
Department of Neurological Surgery, Massachusetts General Hospital 
and Harvard Medical School 

The marked-point process modeling framework has received increasing attention 
for real-time analysis of neural spike trains. An important aspect of this 
framework is the development of goodness-of-fit techniques that enable an 
accurate and interpretable assessment of how well a fitted model describes the 
observed spike events. We previously proposed a goodness-of-fit technique for 
marked-point process models of population spiking activity 
and showed that under the correct model, each spike event can be rescaled individually 
to generate a uniformly distributed set of events in time and a stochastic boundary 
in mark space. Using this transformation, we were able to use different uniformity 
tests to evaluate the strength and accuracy of the model fitted to the observed 
data. A modeling challenge associated with this transformation is its extension 
to high-dimensional mark spaces, where the computation of stochastic boundary 
of the transformation becomes burdensome. Here, we propose a set of new 
transformations, which generate events that are uniformly and identically 
distributed in the new mark and time spaces. These transformations are scalable 
to multi-dimensional mark spaces and provide independent and identically distributed 
(i.i.d.) samples in hypercubes, which are well suited to uniformity tests. We 
discuss the properties of these transformations and demonstrate which aspects of 
a model's fit are captured by each transformation. We also provide a list of 
uniformity tests for assessing whether the transformed events follow a 
multivariate uniform distribution. We demonstrate applications of these 
transformations and uniformity tests on simulated data. A theoretical proof for 
each transformation is provided in the paper's appendix.

Ali Yousefi
Uri T. Eden
Department of Mathematics and Statistics, Boston University, Boston, MA
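
To illustrate the rescaling idea in its simplest (unmarked, constant-rate) form, which is far more restricted than the marked-process transformations described above: under a correct constant-rate Poisson model, rescaled inter-spike intervals are exponential, and a further transform maps them to Uniform(0, 1), where standard uniformity tests apply. All names here are illustrative:

```python
import numpy as np
from scipy import stats

def rescale_isis(spike_times, rate):
    """Time-rescaling for a constant-rate model: under the correct model the
    rescaled inter-spike intervals z are Exp(1), and u = 1 - exp(-z) maps
    them to Uniform(0, 1)."""
    z = rate * np.diff(np.asarray(spike_times))
    return 1.0 - np.exp(-z)

# A 20 Hz Poisson train rescaled under its true rate should look uniform.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(1 / 20.0, size=500))
u = rescale_isis(spikes, rate=20.0)
ks_stat, p_value = stats.kstest(u, "uniform")
```

A small Kolmogorov-Smirnov statistic (large p-value) indicates the rescaled events are consistent with uniformity, i.e., the model fits.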


Inferring the dynamics of strategy change in a simple, competitive game 

Kensuke Arai
Boston University Dept. of Mathematics and Statistics 

In studying the brain, the relationship between neural activity and behavior is 
often sought. When the behavior can be directly observed, such as hand grip type or 
movement of an arm, it is fairly straightforward to capture quantitatively, and the 
relationship of neural activity to behavior can be modeled directly. The behavior, 
however, might not be directly observable, or might be difficult to characterize. 
For instance, in a game such as rock-paper-scissors, players often try to predict 
what the opponent will do based on the history of each player's moves. The internal 
model a player has of the opponent may also change over time, as the opponent may 
adjust his or her strategy. Using the state-space modeling framework, we present a simple model 
of the hands played in a matching pennies game that can infer the dynamic change in 
player strategy over consecutive games. Inferring these unobserved dynamics may be 
useful in linking neural activity to cognition, and provide interpretation to neural 
variability that would otherwise be unexplained.

Uri Eden, Boston University Dept. of Mathematics and Statistics


A heterogeneous model for multiple random dot product graphs

Jesus Arroyo 
Johns Hopkins University 

We propose a new model to jointly analyze heterogeneous populations of graphs 
with a shared latent structure on the vertices but different connectivity patterns. 
This model extends the random dot product graph (RDPG), and hence encompasses 
several other popular random graph models but in a multiple graph setting. We 
introduce an efficient method based on singular value decompositions to fit the 
model, and study its statistical properties. Using simulated and real data, we 
show the applicability and effectiveness of the model in solving different tasks, 
including classification, hypothesis testing and community detection, and 
demonstrate its performance in the study of brain connectomics.

Carey E. Priebe
Joshua T. Vogelstein
Johns Hopkins University
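
The flavor of an SVD-based joint embedding can be conveyed with a deliberately crude stand-in: embed the mean adjacency matrix at rank d to obtain one shared latent position per vertex. The paper's estimator is more refined; this sketch only checks the rank-1 sanity case:

```python
import numpy as np

def joint_embedding(adjacencies, d):
    """Embed the vertices shared across several graphs via a rank-d SVD of
    the mean adjacency matrix (a crude stand-in for a joint estimator)."""
    A_bar = np.mean(adjacencies, axis=0)
    U, s, _ = np.linalg.svd(A_bar)
    return U[:, :d] * np.sqrt(s[:d])       # one latent position per vertex

# Rank-1 sanity check: both "graphs" share the latent position vector v,
# so the embedding should reproduce the edge-probability matrix P = v v^T.
v = np.array([0.6, 0.6, 0.6])
P = np.outer(v, v)
X = joint_embedding([P, P], d=1)
```

In the RDPG setting, edge probabilities are inner products of latent positions, so `X @ X.T` recovers the probability matrix when the rank is chosen correctly.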


Inferring connectivity and latent input covariance from spike train correlations 

Cody Baker
University of Notre Dame

A major goal in computational neuroscience is to obtain estimates of functional 
or synaptic connectivity via large scale, in vivo, extracellular recordings of 
neural activity. Several measures of functional connectivity have been proposed, 
but their relationship to synaptic connectivity is often not explored. Measuring 
the relationship between functional and synaptic connectivity requires knowledge 
of ground truth synaptic connectivity, which is typically unavailable in experiments. 
Some studies have used in silico simulations as benchmarks for investigating this 
relationship, but these approaches often use small networks or assume that synaptic 
inputs from outside the recorded network are uncorrelated. Inferring connectivity 
under a more biologically realistic assumption that neurons receive correlated input 
from unobserved sources (i.e. latent variability) has also only been studied in 
particular settings. We combine spiking network simulations, general analytical 
formulae, and calcium imaging data to give an in-depth analysis of when and how 
functional connectivity, synaptic connectivity, and latent variability can be 


Autoregressive State Space Oscillator Models for Neural Data Analysis

A.M. Beck 
Massachusetts Institute of Technology Department of Electrical Engineering and 
Computer Science; Massachusetts General Hospital Department of Anesthesia, 
Critical Care and Pain Medicine; Harvard Medical School  

Brain oscillations are typically analyzed using frequency domain methods such as 
nonparametric spectral analysis, or time domain methods based on linear bandpass 
filtering. A typical analysis might seek to estimate the power within an oscillation 
sitting within a particular frequency band. A common approach is to estimate the 
signal power within that band, either in the frequency domain using the power 
spectrum or in the time domain by estimating the power or variance of a 
bandpass-filtered signal. A major conceptual flaw in this approach is that neural systems, 
like many physiological or physical systems, have inherent broadband "1/f" dynamics, 
whether or not an oscillation is present. Calculating power-in-band, or power in a 
bandpass filtered signal, can therefore be misleading, since such calculations do 
not distinguish between broadband power within the band of interest, and true 
underlying oscillations. We present an approach for analyzing neural oscillations 
using a combination of linear oscillatory models in a state space framework, 
estimating the parameters of these models using an expectation maximization (EM) 
algorithm. We employ the Akaike information criterion (AIC) to select the appropriate model and thereby identify the 
number of oscillations present in the data. In addition to avoiding the pitfalls of 
bandpass filtering and "1/f" dynamics, this method provides a low-dimensional time 
domain representation of the signal, enabling comparisons between neural data 
recordings in the estimated time series and by proxy in the fitted autoregressive 
models. We demonstrate the application of this method to univariate electroencephalogram 
(EEG) data recorded at quiet rest and during propofol-induced unconsciousness. 

A. M. Beck, E. P. Stephen, and P. L. Purdon (2018). State Space Oscillator Models 
for Neural Data Analysis. Annual International Conference of the IEEE Engineering 
in Medicine and Biology Society, 4740-4743.
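
The model-order-selection step can be illustrated with plain autoregressive fits: AIC trades residual variance against parameter count, and an oscillatory signal favors an AR(2) over an AR(1) model. This numpy-only sketch (all names illustrative; the poster's state-space oscillator models are richer than raw AR fits) shows the mechanics:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, np.var(y - X @ coef)

def aic_ar(x, order):
    """Gaussian AIC for an AR(p) fit: n * log(residual variance) + 2p."""
    _, s2 = fit_ar(x, order)
    return (len(x) - order) * np.log(s2) + 2 * order

# Simulate a noisy AR(2) oscillation (~0.1 cycles/sample, pole radius 0.95).
rng = np.random.default_rng(3)
a1, a2 = 2 * 0.95 * np.cos(2 * np.pi * 0.1), -(0.95 ** 2)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
```

Comparing `aic_ar(x, p)` across candidate orders p selects the model; for an oscillation, the AR(2) fit should win over AR(1).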


A latent factor model for discovering lead-lag relationships between two brain 
areas based on multiple electrode recordings

Heejong Bong
Carnegie Mellon University

An important and pressing problem in neurophysiology is to discover interpretable 
functional connections among two or more brain regions based on large-scale 
recordings. We define and study a latent factor model that provides an estimate 
of the joint precision matrix (the inverse of the covariance matrix) of recordings 
across both time and electrodes. The estimate is sparse, so that many partial 
correlations vanish, and we arrive at a graph (a set of nodes and edges) that 
indicates lead-lag relationships. We develop an inferential framework to control 
false discovery rate, and we verify its properties in simulations. We then apply 
the methodology to data recorded from a Neuropixel array in mouse motor cortex and 
striatum during a lever-pressing task. In these data we find that striatum leads 
motor cortex just before movement onset and that motor cortex feeds back to 
striatum afterward.

Valerie Ventura
Rob Kass
Eric Yttri
Carnegie Mellon University
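
The core object here, a precision matrix whose zero pattern defines the graph, can be illustrated with partial correlations on a toy lead-lag chain. This sketch uses plain matrix inversion rather than the sparse, inferentially controlled estimator of the abstract; the variable names and simulation are illustrative:

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the empirical precision matrix
    (inverse covariance). Rows of X are observations; columns are the
    variables (e.g., electrode-by-time-lag channels)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Chain a -> b -> c: a and c are marginally correlated, but their *partial*
# correlation given b vanishes, leaving the chain as the recovered graph.
rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = 0.8 * a + rng.normal(size=5000)
c = 0.8 * b + rng.normal(size=5000)
pc = partial_correlations(np.column_stack([a, b, c]))
```

Edges with near-zero partial correlation are pruned, which is how vanishing entries of the precision matrix translate into an interpretable lead-lag graph.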


Effects of unattended deviants on the auditory selective attention of budgerigars 
(Melopsittacus undulatus) 

Huaizhen Cai
Department of Psychology, University at Buffalo, State University of New York 

Budgerigars were trained using operant conditioning methods on an objective 
paradigm of auditory streaming. Birds were trained to discriminate a frequency 
deviant embedded in a target pure tone stream while ignoring a simultaneously 
presented background stream. Across trials, the frequency deviant appeared at 
random locations in the target stream. The background stream consisted of pure 
tones that were temporally offset from those in the target stream. A constant frequency 
deviant appeared at a random sequential location in the background stream as a 
distractor. Across sessions, the frequency deviant in the background stream took 
values of different perceptual salience. The thresholds for the detectable frequency 
deviant in the target stream along with response latencies were measured as 
indicators of the manipulation of selective auditory attention in birds. We found 
that birds’ thresholds increased as the frequency deviant in the background stream 
became more salient. In sessions with more salient background deviants, birds 
responded to target deviants faster on trials in which the background deviant 
appeared at sequential locations further from the target deviant, indicating that 
it takes time for birds to switch their selective attention from the unattended 
distractors to the attended target. Hence, birds experience attentional capture 
in auditory streaming even when trained to ignore the deviants in the 
background. Further studies are needed to elucidate the neural 
mechanisms of the interaction between bottom-up and top-down processes in auditory 
selective attention. 

Michael L. Dent
Department of Psychology, University at Buffalo, State University of New York


Uncovering representations of large-scale spatially distributed rodent hippocampal LFPs 

Liang Cao
NYU Neuroscience Institute, East China Normal University

Rodent hippocampal place cells are well known to exhibit localized spatial 
tuning, and population spiking activity from place cell assemblies provides 
a readout of the animal's spatial location. However, direct use of spike information 
for population decoding is challenging because of practical issues such as spike 
sorting, unit classification, and recording instability. In contrast, hippocampal 
local field potentials (LFPs), which consist of local subthreshold activity, reliably 
reflect neuronal assembly activity around the recording site and can serve as an 
alternative, robust carrier of the animal's spatial information. To date, however, 
representations of spatially distributed hippocampal LFPs have not been well studied. 
Previously, researchers have succeeded in decoding rodent’s position during navigation 
based on features extracted from large-scale spatially distributed hippocampal LFPs 
(Agarwal et al., 2014). Here we employed several supervised and unsupervised methods 
to investigate representations of large-scale rodent hippocampal LFPs when the animal 
is freely foraging in multiple environments (linear, circular and theta mazes) and 
during the offline state (quiet wakefulness and slow wave sleep). Multiple LFP features 
(e.g., theta phase, gamma amplitude, and ultra-high frequency amplitude or multiunit 
activity) are thoroughly examined. We found that spatially distributed theta phase 
features across silicon probe channels provide a robust and reliable readout of the 
animal's position during running, and spatially distributed ultra-high-frequency 
amplitude features provide a readout of replay content during hippocampal 
sharp-wave ripple (SWR) events. For instance, in the 2.9 m circular maze, 
the LFP-based feature decoder yielded a median decoding error of 6.6 cm (compared 
with 8.5 cm for a spike-based decoder). We further used a leave-one-out method to investigate 
the representational contribution of each shank during running and SWR events, 
and found similar spatial contributing patterns between these periods. We also applied 
several unsupervised learning methods (such as independent component analysis and 
dictionary learning) to optimize and visualize the spatially localized LFP features.
Our preliminary results reveal the potential representational power of large-scale 
spatially distributed hippocampal LFPs. An efficient decoding strategy would be 
beneficial for closed-loop brain-machine interface applications. Further studies of 
their representations may provide important hints for the animal's planning and 

Zhe (Sage) Chen
Neuroscience Institute, Department of Neuroscience and Physiology, New York University 
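
One of the key features above, instantaneous theta phase, is conventionally obtained by band-pass filtering followed by the Hilbert transform. The sketch below shows that step only (function name and test signal are illustrative; it is not the poster's decoder):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(lfp, fs, band=(6.0, 10.0), order=2):
    """Instantaneous theta phase of an LFP trace: band-pass filter in the
    theta band, then take the angle of the analytic (Hilbert) signal."""
    b, a = butter(order, band, btype="band", fs=fs)
    filtered = filtfilt(b, a, lfp)          # zero-phase filtering
    return np.angle(hilbert(filtered))

# An 8 Hz test oscillation sampled at 250 Hz: the extracted phase should
# advance at 8 cycles per second.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
phase = theta_phase(np.sin(2 * np.pi * 8 * t), fs)
```

Stacking such phase features across probe channels yields the spatially distributed feature vectors used for position decoding.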


Non-invasive algorithm for silence localization in stroke and traumatic brain injuries 

Alireza Chamanzar
Department of Electrical and Computer Engineering, Carnegie Mellon University 

Introduction: We present a novel algorithm for noninvasive localization of 
“silenced regions” of the brain, i.e., regions of the brain without any electrical 
activity, using electroencephalography (EEG) recordings. “Silences” can model 
ischemic tissue resulting from a wide variety of neurological alterations, e.g., 
traumatic brain injuries (TBIs), and ischemic stroke. They can also approximately 
model (for a small time-frame of a few tens of seconds) cortical spreading 
depolarizations (CSDs), and peri-infarct depolarizations (PIDs), which are waves of 
neural silencing that spread slowly across the cortical surface and can cause secondary 
brain injuries [Dreier JP et al., 2018]. Resected brain regions and other lesion 
types form another type of silence in the brain. Motivation: Commonly, silences are 
localized using magnetic resonance imaging (MRI) or computed tomography (CT) scans. 
However, in emergency situations, and/or for patients with metallic objects in their 
body, MRI and CT scans cannot be used. We aim to localize the silences in the brain 
using the non-invasive and widely-used EEG systems. Methods: Our algorithm uses a 
spatial successive refinement approach to estimate the regions of silence based on 
the contribution of each voxel (source) in the recorded signals and detects the 
sources with a reduced contribution as silences in the brain. Results: We used a 
high-density EEG recording system with 128 electrodes to record neural activities 
during different visual, auditory, and rest tasks for two patients with silences 
in their brain (a patient with visual agnosia and another with a resected region).
Our algorithm successfully localized regions of silence of varying sizes, 
ranging from small ones, e.g., a lesion in the right ventral occipitotemporal 
cortex in a patient with agnosia, to large ones, e.g., in a patient with 
occipitotemporal lobectomy. Structural MRI scans of these two patients were used 
as ground truth for silences. Additional simulation results suggest that our  
algorithm reduces the localization error by a factor of ~4 over classical source 
localization algorithms adapted to this problem.

Marlene Behrmann, Department of Psychology, Carnegie Mellon University (CMU), 
Pittsburgh, PA
Pulkit Grover, Department of Electrical and Computer Engineering, Carnegie Mellon 
University (CMU), Pittsburgh, PA
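
The basic idea of scoring each source's contribution and flagging low-contribution voxels can be caricatured with a minimum-norm inverse. This toy stand-in (all names and the thresholding rule are illustrative; the poster's successive-refinement scheme is far more sophisticated) flags a source that generates no signal:

```python
import numpy as np

def silent_sources(leadfield, eeg, threshold=0.1):
    """Flag sources whose minimum-norm estimated power falls far below the
    maximum across sources, as candidate 'silences'."""
    G = leadfield @ leadfield.T
    inverse_op = leadfield.T @ np.linalg.inv(G + 1e-9 * np.eye(G.shape[0]))
    est = inverse_op @ eeg                    # (n_sources, n_times)
    power = (est ** 2).mean(axis=1)
    return power < threshold * power.max()

# 8 electrodes, 4 sources; source 3 is electrically silent.
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 4))
S = rng.normal(size=(4, 200))
S[3, :] = 0.0                                 # the silent region
mask = silent_sources(L, L @ S)
```

Real EEG inverse problems are severely underdetermined (thousands of sources, tens of electrodes), which is precisely why a refinement strategy beyond this one-shot inverse is needed.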


The in vivo spatiotemporal evaluation of damage in oligodendrocyte 
lineage structure in neural electrode implantation 

Keying Chen
Department of Bioengineering, University of Pittsburgh
Center for the Neural Basis of Cognition, University of Pittsburgh
Carnegie Mellon University

Implantable brain electrodes are a promising technology for 
studying brain circuitry and patterns of neurodegenerative disease, with the ability 
to detect extracellular action potentials in vivo and deliver electrical pulses to 
local brain tissue. This brain-machine interface technology has wide applications in 
the neuroprosthetics field for restoring functional outputs. Oligodendrocytes are 
brain-resident neuroglia which produce myelin processes wrapped around axons to 
provide electrical insulation and metabolic support, maintaining the integrity and 
functionality of brain circuits. Oligodendrocytes are extremely vulnerable to oxidative 
stress, pro-inflammatory signals, and phagocytosis. Therefore, electrode implantation 
can disrupt the integrity of oligodendrocyte lineage structures through induced 
injury and damage, contributing to chronic device failure. However, it is unclear how 
oligodendrocytes and myelin degenerate following implantation. This project 
investigates the spatiotemporal degeneration patterns of oligodendrocytes and myelin at 
the neural electrode interface in vivo, observing the development of early myelin 
morphological damage with multi-photon microscopy. We hypothesized that the mechanical 
strain induced by probe insertion might alter myelin alignment at neural electrode 
interfaces. A morphological opening operation with line structuring elements at 
11 angles (spanning 15-180 degrees) was programmed in MATLAB. Each pixel whose 
intensity exceeded the filter threshold (determined by Otsu's method) was measured. 
The myelin angle was taken as the angle of the line structuring element yielding 
the strongest opening response. Limited preliminary data show that the weighted 
averages of myelin angle are 89.4766 ± 58.6676 degrees at the interface, 
85.9003 ± 43.3848 degrees in the distal region (300 µm from the implant), and 
72.7215 ± 53.2499 degrees on the contralateral side. 

Steven M. Wellman (a, b), Franca Cambic (d), James R. Eles (a, b), 
Takashi D.Y. Kozai (a, b, e, f, g)
a Department of Bioengineering, University of Pittsburgh, USA
b Center for the Neural Basis of Cognition, University of Pittsburgh and 
  Carnegie Mellon University, United States
c Veterans Administration Pittsburgh, Pittsburgh, PA, USA
d Department of Neurology, University of Pittsburgh, USA
e Center for Neuroscience, University of Pittsburgh, USA
f McGowan Institute of Regenerative Medicine, University of Pittsburgh, USA
g NeuroTech Center, University of Pittsburgh Brain Institute, USA
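
The directional-opening measurement described above (implemented in MATLAB by the authors) can be sketched in Python with scipy's grayscale morphology: a line structuring element preserves intensity only where fibers run along its angle, so the angle with the strongest opening response indicates orientation. Function names and toy image are illustrative:

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Boolean line structuring element of a given length and angle."""
    t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
    r = np.round(t * np.sin(np.deg2rad(angle_deg))).astype(int)
    c = np.round(t * np.cos(np.deg2rad(angle_deg))).astype(int)
    size = 2 * (length // 2) + 1
    fp = np.zeros((size, size), dtype=bool)
    fp[r + size // 2, c + size // 2] = True
    return fp

def dominant_angle(image, angles, length=9):
    """Angle whose line-opening preserves the most total intensity: a
    global proxy for the dominant fiber orientation."""
    responses = [
        ndimage.grey_opening(image, footprint=line_footprint(length, a)).sum()
        for a in angles
    ]
    return angles[int(np.argmax(responses))]

# A single horizontal bright fiber: the 0-degree opening should win.
img = np.zeros((21, 21))
img[10, 2:19] = 1.0
best = dominant_angle(img, [0, 45, 90, 135])
```

Applying the per-angle openings pixelwise, rather than globally as here, yields the per-pixel myelin angles that the weighted averages summarize.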


Neuronal response variability and divisive normalization  

Ruben Coen-Cagli
Albert Einstein College of Medicine 

The activity of cortical neurons varies in response to repeated presentations of 
a stimulus. This variability is sensitive to experimental manipulations that are 
also known to engage divisive normalization: a widespread operation that describes 
neuronal activity as the ratio of a numerator (representing the excitatory stimulus 
drive) and denominator (the normalization signal). Although it has been suggested 
that normalization affects response variability, the standard modeling framework 
for normalization is unable to quantify the relation between the two. We propose 
an extension of the standard normalization model that treats the numerator and 
denominator as stochastic quantities. Within this framework, we also derive a 
method to infer the single-trial normalization strength, which cannot be measured 
directly. We test this extended model on neuronal responses to stimuli of varying 
contrast, recorded in macaque primary visual cortex. Consistent with general 
predictions of the model, we find a reduction of response variability for neurons 
that are more strongly normalized, and during trials in which normalization is 
inferred to be strong. These results suggest that normalization may play a functional 
role in modulating response variability. We are also currently testing a further 
extension of this model that captures normalization signals shared between neurons, 
and their impact on correlated variability. Because our framework can infer the 
single-trial normalization, it could provide a direct quantification of how 
single-trial normalization affects perceptual judgments, and can readily be applied 
to other sensory and non-sensory factors. 
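
The qualitative prediction that stronger normalization reduces trial-to-trial variability can be demonstrated with a toy ratio model. The gamma distributions and parameters below are purely illustrative choices, not the abstract's model:

```python
import numpy as np

def ratio_responses(drive_mean, norm_mean, n_trials=20000, seed=1):
    """Single-trial responses as the ratio of a stochastic excitatory drive
    to a stochastic normalization signal (both gamma-distributed here
    purely for illustration)."""
    rng = np.random.default_rng(seed)
    drive = rng.gamma(shape=4.0 * drive_mean, scale=0.25, size=n_trials)
    norm = rng.gamma(shape=4.0 * norm_mean, scale=0.25, size=n_trials)
    return drive / norm

def cv(x):
    """Coefficient of variation: a scale-free index of trial variability."""
    return x.std() / x.mean()

weak = ratio_responses(drive_mean=2.0, norm_mean=2.0)    # weak normalization
strong = ratio_responses(drive_mean=2.0, norm_mean=8.0)  # strong normalization
```

Because the normalization signal's relative noise shrinks as its mean grows, the strongly normalized responses are both smaller and relatively less variable, matching the direction of the effect reported above.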


Power and coherence in low frequency bands from macaque frontoparietal 
cortex track working memory performance
Bryan Conklin
Florida Atlantic University

Working memory (WM) is a cognitive system that uses internal representations of 
recent events for a pending action (Baddeley, 1992). Activity in prefrontal and 
posterior parietal cortical regions, working together as the fronto-parietal 
brain network (FPN), serves to manage this system (Chafee & Goldman-Rakic, 1998; 
Cohen, 1997; Fuster & Alexander, 1971; Gnadt & Andersen, 1988). Oscillations and 
synchronous activity are thought to govern the FPN (Bressler, 1995; Fell & Axmacher, 
2011; Johnson, 2017). However, it is not clear how they contribute to WM performance. 
We tested the hypothesis that measures of spectral power and FP synchrony in areas 
of the macaque FPN would track WM performance within specific frequency bands by 
analyzing a dataset from a previously reported experiment (Salazar et al., 2012). 
Two rhesus macaque monkeys (1 and 2) performed an oculomotor delayed match-to-sample 
task to test visual WM while wide-band electrophysiological signals were 
intracortically recorded from FP regions. We applied time-frequency analysis via 
complex Morlet wavelets with frequencies ranging from 3.5 to 200 Hz. Condition-
differenced power at 3.5-8 Hz during the delay period was significant in recording 
area 8B of Monkey 2, and condition-differenced power at 3.5-10 Hz from baseline 
through stimulus onset was significant in recording area dPFC of Monkey 2. 
Condition-differenced coherence over all frontoparietal pairs was significant in 
Monkey 2 in the 3.5-9 Hz band throughout baseline and the beginning of stimulus 
onset, and in the 3.5-17 Hz band during the mid-to-late delay. We have shown that 
low-frequency power 
and coherence activity tracks macaque WM performance. Crucially, these results 
implicate both the baseline and delay periods as being important for visual WM 
performance. Task-relevant preparatory top-down control or attention-related mechanisms 
during the baseline period might prime the subject for correct performance. Therefore, 
both raw and normalized findings should be reported in studies to ensure that 
task-relevant baseline activity doesn’t bias the normalized results.
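
The time-frequency step used above, convolution with complex Morlet wavelets, can be sketched directly in numpy. The implementation and normalization below are illustrative, not the study's analysis code:

```python
import numpy as np

def morlet_power(x, fs, freqs, n_cycles=7):
    """Time-frequency power by convolving the signal with complex Morlet
    wavelets (Gaussian-windowed complex exponentials)."""
    out = []
    for f in freqs:
        sigma_t = n_cycles / (2 * np.pi * f)         # temporal std dev
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-(t ** 2) / (2 * sigma_t ** 2))
        wavelet /= np.abs(wavelet).sum()             # unit gain at center freq
        out.append(np.abs(np.convolve(x, wavelet, mode="same")) ** 2)
    return np.array(out)                             # (n_freqs, n_samples)

# A pure 6 Hz oscillation: power should concentrate in the 6 Hz row.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
power = morlet_power(np.sin(2 * np.pi * 6 * t), fs, [4.0, 6.0, 8.0, 12.0])
```

Condition-differenced power is then obtained by computing such maps per trial and subtracting the averages across task conditions.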


Neuro-Current Response Functions: an integrated approach to MEG source 
analysis for continuous-stimulus paradigms 

Proloy Das
Department of Electrical and Computer Engineering, University of Maryland, 
College Park, Maryland 

The human brain routinely processes complex information as it unfolds over time, 
including speech sounds and the linguistic information they represent. Quantitative 
investigation of such neural processing relies on building mathematical models that 
map various features of the sensory stimulus onto the neural response. Over the past 
decade, MEG/EEG recordings in response to continuous auditory stimuli have been 
productively modeled as the output of a linear time invariant (LTI) system, where 
the input to the system is one or several representations of the auditory stimulus, 
e.g., the broadband acoustic envelope. In the context of auditory experiments, the 
filters characterizing these LTI systems are called temporal response functions (TRFs). 
The functional roles of TRF components (specific peaks at particular latencies), 
characterizing different forms of temporal properties of neural processing in the brain, 
have been well studied, but less so any specific links to the cortical sources 
responsible for these components. Existing approaches to associate neural current 
sources with estimated TRFs are sub-optimal: either 1) TRFs for each source are 
estimated independently only after mapping the raw MEG data to the cortical surface, 
or 2) the estimated sensor level TRFs are treated as evoked potentials to be mapped 
to the neural source space. Due to this artificial separation, any such method fails 
to utilize the full MEG source localization power. Here we provide a novel framework 
for simultaneously determining the TRFs and their cortical distribution directly 
from the MEG data, by unifying the TRF and distributed forward source models, and 
casting the joint estimation task as a Bayesian optimization problem. By design, 
the cortical current sources, which process different features of the continuous 
stimulus as linear filters, compete among themselves to explain the recorded responses, 
given the external stimulus features. Although the resulting problem is 
non-convex, we propose efficient solutions that leverage recent advances in evidence 
maximization to compute these filters at each potential location, thus obtaining 
their cortical distribution. By analogy to the hemodynamic response functions 
encountered in fMRI studies, we call these filters neuro-current response functions 
(NCRFs). Finally, we estimate NCRFs from both simulated MEG data and real MEG data 
recorded under auditory stimulation using the proposed algorithm, and demonstrate 
significant improvements over other methods, including better effective spatial 
resolution, and reduced reliance on fine-tuned coordinate co-registration. In summary, 
we put forward a new paradigm for MEG source analysis tailored for investigating 
recordings under presentations of continuous stimuli.

Christian Brodbeck, Institute for Systems Research, University of Maryland, 
College Park, Maryland 
Jonathan Z. Simon, Department of Electrical and Computer Engineering, Institute for 
Systems Research, Department of Biology, University of Maryland, College Park, Maryland 
Behtash Babadi, Department of Electrical and Computer Engineering, Institute for
Systems Research, University of Maryland, College Park, Maryland  
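
For reference, the conventional sensor-level baseline that this framework improves upon, a TRF estimated by ridge regression on a lagged-stimulus design matrix, can be sketched as follows (names and simulated data are illustrative):

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, alpha=1e-3):
    """Sensor-level TRF via ridge regression on a lagged-stimulus design
    matrix: response(t) is modeled as sum_k trf[k] * stimulus(t - k)."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):                  # column k = stimulus delayed by k
        X[k:, k] = stimulus[: n - k]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Recover a known 3-tap response function from simulated data.
rng = np.random.default_rng(0)
stim = rng.normal(size=2000)
true_trf = np.array([0.5, 1.0, -0.3])
resp = np.convolve(stim, true_trf)[:2000]
trf = estimate_trf(stim, resp, n_lags=3)
```

The NCRF approach replaces this per-sensor estimate with a joint estimate over cortical source locations, which is what avoids the artificial two-stage separation criticized above.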


Direct Comparison of Nonlinear Sensory Encoding Models in Ferret 
Primary Auditory Cortex

Stephen David
Oregon Health & Science University

A common framework for describing the function of auditory neurons is the 
linear-nonlinear spectro-temporal receptive field (LN STRF). This model casts a 
neuron’s sound-evoked activity at each moment in time as the linear weighted sum 
of the immediately preceding sound spectrogram, followed by nonlinear rectification. 
However, the LN STRF is an incomplete model since it cannot account for 
context-dependent encoding or other nonlinear aspects of auditory processing. 
Two alternative models have improved on the predictive power of the LN STRF by 
accounting for experimentally observed biological mechanisms: short-term 
plasticity (STP) and contrast-dependent gain control (GC). While both models 
improve performance over the LN model, they have never been compared directly, 
and it is unclear whether they account for separate processes or simply describe 
the same phenomenon in different ways. To address this question, we recorded the 
activity of single primary auditory cortical neurons (n = 423) in awake ferrets 
(n = 7) during the presentation of natural sound stimuli. We then fit each model 
and a combined model (STP+GC) on this single dataset and compared their performance, 
measured as the noise-corrected correlation coefficient (Pearson’s R) between 
predicted and observed firing rate for each neuron. Our results indicate that there 
is no significant performance difference between the STP and GC models, and that 
the STP+GC model performs significantly better than either individual model. This 
finding indicates that the STP and GC models contain distinct explanatory power. 
Further, the success of the combined model hints that auditory cortical neurons 
utilize two independent mechanisms to adapt their encoding properties to different 
sensory contexts. Future sound processing and hearing aid technologies may therefore 
improve their performance by incorporating STP- and GC-based strategies. 

Jacob Pennington - Washington State University Vancouver 
Alexander Dimitrov - Washington State University Vancouver 
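
The baseline LN STRF computation described at the top of this abstract (linear weighting of the preceding spectrogram, then rectification) can be sketched as follows; the column convention and toy STRF are illustrative:

```python
import numpy as np

def ln_strf_predict(spectrogram, strf):
    """LN STRF prediction: at each time, a linear weighted sum of the
    immediately preceding spectrogram (strf columns run oldest-to-newest
    lag), followed by rectification."""
    n_freqs, n_lags = strf.shape
    n_times = spectrogram.shape[1]
    pred = np.zeros(n_times)
    for t in range(n_times):
        window = spectrogram[:, max(0, t - n_lags + 1) : t + 1]
        pred[t] = np.sum(strf[:, -window.shape[1]:] * window)
    return np.maximum(pred, 0.0)            # output nonlinearity (rectification)

# A STRF that simply passes through the lowest frequency channel at lag 0.
rng = np.random.default_rng(0)
spec = rng.random((5, 50))
strf = np.zeros((5, 4))
strf[0, -1] = 1.0
pred = ln_strf_predict(spec, strf)
```

The STP and GC models extend this fixed linear stage with state-dependent gain terms; the model comparison above scores each variant by noise-corrected correlation with observed firing rates.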


Clusterless Inference of Compression of Spatial Representation in Hippocampal Replay 

Xinyi Deng
Columbia University

Memories are only useful if we can take something that happened over a long period and 
compress it into something that we can retrieve quickly to guide our behavior. 
For example, effective navigation is thought to involve retrieval of compressed spatial 
memories: During sharp wave-ripple (SWR) events, spiking sequences observed during a 
rat's spatial experience are replayed at a compressed timescale in the hippocampus. 
However, few studies have provided approaches to characterize this compression from 
a statistical perspective. Here we present a hierarchical framework to infer the 
compression of spatial representation between locomotion and SWR behavioral states 
on an event-by-event basis from unsorted, population spiking activity. We first 
develop a latent-distance-dependent hidden Markov model to infer transition 
probabilities between latent states and coordinates associated with each state in a 
latent 2-dimensional space during locomotion. In this model, the transition 
probability between any two states depends on the distance between the latent 
locations associated with each discrete state, scaled by a hyperparameter. We then 
use a maximum likelihood estimation method to evaluate the single-event compression 
rate for individual SWR events. We validate our approach with a simulation study. 
More broadly, this framework can be used for linking representations between one 
behavioral state where the neural data is constrained by finite sampling and others 
where the neural data is sufficient for accurate unsupervised estimation methods.
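A minimal numerical sketch of the latent-distance-dependent transition structure described above; the grid of latent locations, the scale, and the candidate compression factors are illustrative, not the fitted model:

```python
import numpy as np

def distance_transition_matrix(locs, scale):
    """P(i -> j) proportional to exp(-||x_i - x_j|| / scale): transitions between
    discrete states fall off with the distance between their latent 2-D locations."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    P = np.exp(-d / scale)
    return P / P.sum(axis=1, keepdims=True)

def sequence_loglik(seq, locs, scale):
    """Log-likelihood of a discrete state sequence under the transition model."""
    P = distance_transition_matrix(locs, scale)
    return float(sum(np.log(P[i, j]) for i, j in zip(seq[:-1], seq[1:])))

# 5 x 5 grid of latent locations; evaluate candidate compression factors for a
# single event by scaling the length constant and maximizing the likelihood.
locs = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
event = [0, 2, 4, 9, 14, 24]
compressions = np.linspace(0.5, 10.0, 20)
c_hat = compressions[np.argmax([sequence_loglik(event, locs, 1.0 * c)
                                for c in compressions])]
```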

Mattias Karlsson (2), Loren Frank (2), Liam Paninski (1) and Scott Linderman (1)
1. Columbia University
2. University of California, San Francisco


A state space model for characterizing trajectory dynamics of non-local 
spatial firing in hippocampus
Eric Denovellis
Department of Mathematics and Statistics, Boston University

During sleep and immobility, hippocampal place cells fire in sequences consistent 
with temporally compressed versions of trajectories previously run by the animal. 
These replayed sequences may be part of an important mechanism for consolidation of 
spatial memory. However, recent work has shown that these place cell sequences can 
have more complex dynamics beyond representing spatially continuous runs through 
the environment. For example, sequences can alternate between hovering on a particular 
spatial location and continuous movement (Pfeiffer and Foster 2015) or represent 
continuous trajectories in other spatial environments (Karlsson et al. 2009), which 
may appear spatially incoherent in the context of the current environment. To 
investigate this, we develop a state space model that uses a combination of discrete 
and continuous latent states in order to decompose place cell sequences into categories 
of latent dynamics. Each discrete latent “category” is associated with a type of 
continuous latent dynamic—such as hovering, spatially fragmented or spatially 
continuous. This allows for (1) direct comparison between different categories of 
sequence dynamics, (2) expression of our confidence in one or more categories 
explaining the data, and (3) characterization of the transitions between categories. 
We demonstrate the utility of this model on simulated and real data of an animal 
performing a spatial memory task. 
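A toy simulation of the three kinds of continuous dynamics named above (hovering, continuous movement, and spatially fragmented jumps); the noise levels and category schedule are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(pos, category):
    """One step of 1-D position dynamics for each latent category."""
    if category == "hover":                    # stay near the current location
        return pos + rng.normal(0.0, 0.05)
    if category == "continuous":               # smooth run through the environment
        return pos + 1.0 + rng.normal(0.0, 0.1)
    return rng.uniform(0.0, 100.0)             # fragmented: incoherent jump

# Simulate a replay-like sequence that switches categories over time.
categories = ["hover"] * 10 + ["continuous"] * 10 + ["fragmented"] * 5
pos, traj = 50.0, []
for c in categories:
    pos = step(pos, c)
    traj.append(pos)
```

The model in the abstract works in the opposite direction: given an observed sequence, it infers which category of dynamics best explains each segment.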

Anna K. Gillespie (2) Michael E. Coulter (2) Loren M. Frank (2) Uri T. Eden (1)
1 Department of Mathematics and Statistics, Boston University
2 Department of Physiology and Kavli Institute for Fundamental Neuroscience, 
University of California, San Francisco


Replicating the mouse visual cortex using neuromorphic hardware

Srijanie Dey
Washington State University

The primary visual cortex is one of the most complex parts of the brain, offering 
significant modeling challenges. With the ongoing development of neuromorphic hardware, 
simulation of biologically realistic neuronal networks seems viable. According to [1], 
Generalized Leaky Integrate and Fire models (GLIFs) are capable of reproducing cellular 
data under standardized physiological conditions. The linearity of the dynamical 
equations of the GLIFs also works to our advantage. In ongoing work, we propose 
the implementation of five variants of the GLIF model [1], incorporating different 
phenomenological mechanisms, on Intel's latest neuromorphic hardware, Loihi. 
Owing to its architecture that supports hierarchical connectivity, dendritic 
compartments and synaptic delays, the current LIF hardware abstraction in Loihi 
is a good match to the GLIF models. In spite of that, precise detection of spikes 
and the fixed-point arithmetic on Loihi pose challenges. We use the experimental data 
and the classical simulation of GLIF as references for the neuromorphic implementation. 
Following the benchmark in [2], we use various statistical measures on different 
levels of the network to validate and verify the neuromorphic network implementation. 
In addition, variance among the models and within the data, based on spike times, is 
compared to further support the network's validity [1,3]. Based on our preliminary 
results (implementation of the first GLIF model, followed by a full-fledged network 
on the Loihi architecture), we believe that a successful implementation of a network 
of different GLIF models could lay the foundation for replicating the complete 
primary visual cortex. 
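A minimal sketch of a leaky integrate-and-fire update in fixed-point integer arithmetic, in the spirit of Loihi's integer state variables mentioned above; the decay, threshold and drive constants are purely illustrative, not actual Loihi parameters:

```python
# Fixed-point LIF neuron: all state is integer, the leak is an integer
# multiply followed by a right shift (Q0.12 fixed point).
DECAY = 3277          # leak factor, approximately 0.8 in Q0.12
SHIFT = 12            # fractional bits
THRESHOLD = 64 << 6   # spike threshold in the same integer units (4096)

def lif_step(v, current):
    """One fixed-point membrane update; returns (new_v, spiked)."""
    v = (v * DECAY) >> SHIFT      # exponential leak via multiply + shift
    v += current                  # synaptic input, already integer-scaled
    if v >= THRESHOLD:
        return 0, True            # reset on spike
    return v, False

v, spikes = 0, []
for t in range(50):
    v, s = lif_step(v, 1000)      # constant drive
    spikes.append(s)
```

With these constants the steady-state potential (1000 / 0.2 = 5000) exceeds the threshold, so the neuron fires periodically.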

1. Teeter C, Iyer R, Menon V, Gouwens N, Feng D, Berg J, Szafer A, Cain N, Zeng H, 
Hawrylycz M, Koch C. Generalized leaky integrate-and-fire models classify multiple 
neuron types. Nature Communications. 2018 Feb 19;9(1):709. 
2. Trensch G, Gutzen R, Blundell I, Denker M, Morrison A. Rigorous neural network 
simulations: a model substantiation methodology for increasing the correctness of 
simulation results in the absence of experimental validation data. Frontiers in 
Neuroinformatics. 2018;12.
3. Paninski L, Simoncelli EP, Pillow JW. Maximum likelihood estimation of a stochastic 
integrate-and-fire neural model. Advances in Neural Information Processing Systems. 
2004; pp 1311-1318. 


Brain Structural Basis of Musical Improvisation: dMRI and fMRI Study  

Kiran Dhakal
Department of Physics and Astronomy, Georgia State University

Recent neuroimaging studies on musical improvisation have identified brain regions 
and networks involved in musical improvisation, which provides an excellent paradigm 
for understanding human creative cognition. Despite some studies on structural brain 
differences between musicians and non-musicians, whether and how the underlying white 
matter properties are associated with brain activity and connectivity is not clearly 
understood. In this study, we investigated the relationship between white matter 
diffusion properties and functional connectivity using functional and diffusion magnetic 
resonance imaging (fMRI and dMRI) of 20 advanced-level jazz improvisers. We found that 
musical improvisation, compared with playing a pre-learned melody, is characterized by 
higher node activity in Broca's area (IFG), lateral premotor cortex (LPM), the 
supplementary motor area (SMA) and the cerebellum, and by lower functional connectivity, 
in both number and strength, among these regions. The measures of white matter diffusion 
properties, including generalized fractional anisotropy (GFA), quantitative anisotropy 
(QA) and the isotropic diffusion component (ISO), were elevated in advanced improvisers 
compared to control non-musicians in multiple brain areas, especially the IFG, LPM and 
SMA. These results 
point to the notion that a human creative behavior performed under real-time 
constraints is an internally directed behavior controlled primarily by a smaller 
functional brain network and has a definite structural basis.

Martin Norgaard (2) Mukesh Dhamala (1,3-6)
1 Department of Physics and Astronomy, Georgia State University, Atlanta, GA, USA
2 School of Music, Georgia State University, Atlanta, GA, USA
3 Neuroscience Institute, Georgia State University, Atlanta, GA, USA
4 Center for Behavioral Neuroscience, 
5 Center for nano-optics, and
6 Center for Diagnostics and Therapeutics, Georgia State University


Inferring Effective Connectivity From High-Dimensional ECoG Recordings 

C.M. Endemann
Department of Anesthesia, University of Wisconsin School of Medicine and Public Health

Given the vast interconnectivity of the brain, it is preferable to infer causality 
between two cortical regions while sampling from as many regions of the brain as 
possible. While this approach may drastically increase the odds of revealing true 
effective connectivity patterns (e.g. by accounting for additional mediating 
variables that impact a given region), it is seldom used given the numerous 
computational challenges related to fitting models to high-dimensional datasets. 
Here, we utilize dimensionality-reduction techniques (PCA + group lasso) to make 
it computationally feasible to fit autoregressive models to hundreds of intracranial 
channels spanning the auditory cortical hierarchy (core, belt, auditory-related and 
prefrontal cortex) in five neurosurgery patients. We then interpret our models using 
block connectivity methods to pool estimates of effective connectivity across channels 
into estimates across ROIs. We will present our method along with preliminary findings 
regarding how such networks change across various arousal states that occur during 
natural sleep [wake (WS), REM, N1, N2] and propofol anesthesia [pre-drug wake (WA), 
sedated/responsive (S) and unresponsive (U)]. 

D. Campbell (1), B.M. Krause (1), K.V. Nourski (2), B.V. Veen (3), M.I. Banks (1) 
1. Department of Anesthesia, University of Wisconsin School of Medicine and Public Health
2. Department of Neurosurgery, University of Iowa
3. Department of Electrical and Computer Engineering, University of Wisconsin-Madison


Topological analysis of multi-site LFP data  

Leonid Fedorov
Dept. Physiology of Cognitive Processes, 
Max Planck Institute for Biological Cybernetics, Tuebingen

The Local Field Potential (LFP) summarizes synaptic and somato-dendritic 
currents in a bounded ball around the electrode and is dependent on the spatial 
distribution of neurons. Both fine-grained properties and the temporal distribution 
of typical waveforms in spontaneous LFP have been used to identify global brain states 
(see e.g. [1] for P-waves in stages of sleep). While some LFP signatures have been 
studied in detail (in addition to Pons, see e.g. sleep spindles in the Thalamus and 
areas of the cortex [2], sharp-wave-ripples [3] in the Hippocampus and k-complexes 
[4]), the relationship between simultaneous signaling in cortical and subcortical 
areas remains to be understood. To characterize the mesoscale spontaneous activity, 
we quantify data-driven properties of LFP and use them to describe different brain 
states. Inspired by [5], we treat frequency-localized temporary increases in LFP power 
simultaneously recorded from Cortex, Hippocampus, Pons and LGN as neural events that 
carry information about the brain state. Here, we give a characterization of neural 
events in the 0-60Hz frequency range using tools from topological data analysis. In 
detail, we look at collections of barcodes computed using persistent homology [6] 
in two different ways. First, we look at the sublevel set filtration of a neural event 
to describe its critical points. Second, we use Vietoris-Rips filtration of the 
point-cloud of the delay embedding [7,8] of a neural event to describe periodicity. 
Both collections of barcodes are mapped to their persistence landscape spaces [9,10] 
for statistical description of the neural events. Both neural event representations 
are stable with respect to noise. They can be used for comparison of sustained 
low-frequency activity between different brain sites, as well as local spatial 
variability of the electrode position. 
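For the first barcode construction, the 0-dimensional persistence of the sublevel-set filtration of a sampled 1-D signal can be computed directly with a union-find sweep. This is a from-scratch sketch for intuition; in practice a library such as GUDHI or Ripser would be used, and the Vietoris-Rips construction on the delay embedding requires such a library:

```python
import numpy as np

def sublevel_persistence(signal):
    """0-dimensional barcode of the sublevel-set filtration of a 1-D signal.
    Components are born at local minima and die at merging values (elder rule)."""
    n = len(signal)
    parent, birth = {}, {}

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: signal[k]):
        parent[i], birth[i] = i, signal[i]
        for j in (i - 1, i + 1):       # merge with already-active neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if signal[i] > birth[young]:
                    pairs.append((birth[young], signal[i]))
                parent[young] = old
    essential = [(birth[r], np.inf) for r in {find(i) for i in parent}]
    return pairs, essential
```

On the toy signal [0, 2, 1, 3] this yields one finite bar (1, 2) for the secondary dip, plus one essential bar born at the global minimum.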

1. Gott JA, Liley DTJ, Hobson AJ. Towards a Functional Understanding of PGO Waves. 
Front Hum Neurosci. 2017, 11-89. 
2. Contreras D et al. Spatiotemporal patterns of spindle oscillations in cortex 
and thalamus. J Neurosci. 1997, 17:1179-96. 
3. Buzsáki G. Hippocampal sharp wave-ripple: A cognitive biomarker for 
episodic memory and planning. Hippocampus. 2015, 25, 1073-188. 
4. Amzica F, Steriade M. Cellular substrates and laminar profile of sleep K-complex. 
Neurosci. 1997, 82, 671-686. 
5. Logothetis NK et al. Hippocampal–cortical interaction during periods of subcortical 
silence. Nature, 2012, 491, 547–553. 
6. Edelsbrunner H, Letscher D and Zomorodian A. Topological persistence and 
simplification. Disc & Comp Geom. 2002, 28, 511–533. 
7. Perea J A, Harer J. Sliding windows and persistence: An application of topological 
methods to signal analysis. Foun Comp Math. 2015, 15, 799–838. 
8. Sanderson N et al. Computational Topology Techniques for Characterizing 
Time-Series Data. 16 Int Sym Int Dat Analysis. 2017, 284-296. 
9. Bubenik P. Statistical topological data analysis using persistence landscapes. 
J Mach Learn Res. 2015, 16:77–102. 
10. Chazal F, Michel B. An introduction to Topological Data Analysis: fundamental 
and practical aspects for data scientists. 2017, arXiv:1710.04019v1 

Joint work with: Tjeerd Dijkstra, Yusuke Murayama, Christoph Bohle, Nikos Logothetis


Large-scale brain oscillatory network dynamics of perceptual decision-making: 
an EEG study 

Sushma Ghimire
Georgia State University, Physics and Astronomy 

Large-scale brain networks are believed to be involved in perceptual decision-making 
processes. Even though the dynamics of a few brain areas in these networks have been 
studied for their role in multi-step subprocesses, from sensory input to a perceptual 
decision and to a motor response, large-scale networks across the whole brain 
and their spatiotemporal oscillatory dynamics remain to be fully understood. 
In this study, using human scalp electroencephalography (EEG) recordings combined 
with source reconstruction techniques, we study how network oscillations 
functionally organize all the Brodmann areas and what temporal sequence of events 
in interactions occur during a face-house perceptual decision-making task. Each of 
these regions included multiple voxels. Single task trials of 26 participants were 
used to reconstruct source signals on those voxels and the singular value 
decomposition was used to estimate the representative orientation for dipoles in 
those voxels in each Brodmann area. Spectral interdependency analysis showed that 
network oscillations in different frequency bands link many of these distributed 
areas during the task. Measures of network activity from the frontoparietal network 
were correlated with behavioral performance. These findings of whole-brain network 
oscillations and timings of their peak activities broaden our understanding of the 
local and large-scale subprocesses leading to perceptual decisions.
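The SVD step mentioned above (estimating a representative dipole orientation per voxel) can be sketched as follows, with a synthetic T x 3 dipole-moment time series standing in for reconstructed source data:

```python
import numpy as np

rng = np.random.default_rng(5)

def representative_orientation(moments):
    """Dominant dipole orientation for a voxel from its T x 3 moment time
    series, taken as the first right singular vector of the centered data."""
    _, _, vt = np.linalg.svd(moments - moments.mean(axis=0), full_matrices=False)
    return vt[0]

# Synthetic voxel whose activity points mostly along a known unit direction.
true_dir = np.array([1.0, 2.0, 2.0]) / 3.0           # unit vector
signal = rng.normal(size=(500, 1)) * true_dir        # rank-1 component
moments = signal + 0.05 * rng.normal(size=(500, 3))  # small isotropic noise
u = representative_orientation(moments)
```

The recovered vector matches the true direction up to sign, which is the usual ambiguity of an SVD-based orientation estimate.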

Mukesh Dhamala: Georgia State University, Physics and Astronomy


State Space Models for Multiple Interacting Neural Populations 

Joshua Glaser
Columbia University 

As we move toward more complex recordings spanning multiple brain areas and cell 
types, existing data analysis methods may not provide a clear picture of the intra- 
and inter-population dynamics. Lacking any knowledge of subpopulation structure, 
factor analysis and its linear dynamical systems generalizations find intermixed 
representations that explain all populations with the same factors, leaving the 
practitioner to disentangle the result. Here, we leverage our knowledge of 
subpopulation structure to constrain state space (latent variable) models such 
that the state space is partitioned into separate subspaces for each subpopulation. 
The latent variables of these subpopulations interact with each other across time 
through a linear dynamical system. This allows separating internal and external 
contributions to dynamics. In simulations, we demonstrate this approach applied to 
Poisson linear dynamical systems (PLDS) and switching linear dynamical systems 
(SLDS) models. In these simulations, the PLDS model can accurately recover important 
aspects of the interaction between populations of neurons. In particular, it can 
accurately recover which changes in neural activity are due to internal dynamics 
versus input from another population. Additionally, it can recover the dimensionality 
of interactions between populations. Moreover, in a simulation in which one 
population sporadically provides input to another population, the constrained 
SLDS model can accurately determine these times of interaction. We then provide 
a preliminary demonstration of this approach on real data, by fitting the 
constrained PLDS model to simultaneously recorded neurons in primary motor cortex 
(M1) and dorsal premotor cortex (PMd). Constrained state space models promise to 
be tools for better understanding the interactions between neural populations.
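The partitioned state space can be sketched with a block-structured dynamics matrix; the dimensions and coupling values here are arbitrary illustrations, not fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent dynamics matrix partitioned into subpopulation blocks:
#   [A11 A12]   diagonal blocks: internal dynamics of each subpopulation
#   [A21 A22]   off-diagonal blocks: inter-population coupling
d1, d2 = 2, 2
A11 = 0.9 * np.eye(d1)
A22 = 0.9 * np.eye(d2)
A12 = 0.3 * np.ones((d1, d2))   # population 2 drives population 1
A21 = np.zeros((d2, d1))        # no influence in the reverse direction
A = np.block([[A11, A12], [A21, A22]])

# One dynamics step for population 1 splits exactly into internal and
# external contributions, which is what the constrained model exposes.
x = rng.normal(size=d1 + d2)
internal = A11 @ x[:d1]
external = A12 @ x[d1:]
```

The rank of the off-diagonal block (here rank 1) is what the abstract refers to as the dimensionality of the interaction between populations.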

Scott Linderman, Matthew Whiteway, Brian Dekleva, Matthew Perich, Lee Miller, 
Liam Paninski, John Cunningham 


Combined biophysical and statistical modeling pipeline for investigating roles of 
ion channels in stimulus encoding 

Nathan Glasgow
University of Pittsburgh

To understand single neuron computation, it is necessary to know how specific 
ion channel conductances affect neural integration and output. Knowledge of these 
relationships is critical in understanding how changes in biophysical properties 
affect stimulus encoding. Here we present a computational pipeline combining 
biophysical and statistical models to provide a link between variation in functional 
ion channel expression and changes in single neuron stimulus encoding. Biophysical 
models provide mechanistic insight, whereas statistical models provide insight into 
what spiking actually encodes. We used published biophysical models of two 
morphologically and functionally distinct projection neuron cell types: mitral 
cells (MCs) of the main olfactory bulb, and layer V cortical pyramidal cells (PCs). 
We first simulated MC and PC responses to pink noise stimuli while scaling individual 
ion channel conductances and then fit point process regression models (generalized 
linear models; GLMs) to the resulting spike trains. This provides both stimulus 
effects (the stimulus filter) and spike-history effects (the history filter). 
Although we find interesting differences for several channel types in each model, 
we focus here on high-voltage-activated Ca2+ channels (CaHVA) as an example of our 
pipeline. Changing CaHVA conductance converts our MC and PC models from regular 
firing to burst firing or vice versa. These changes are reflected predominantly in 
changes of the GLM history filter and early components of the stimulus filter. 
Through stimulus reconstruction, we find that increasing CaHVA conductance in MCs 
reduces coherence of low and medium frequency stimulus components, but not high 
frequency components. In contrast, varying CaHVA conductance in PCs has moderate 
effects on coherence of low and medium frequency components, but substantial 
reductions in coherence of high frequency components. Thus, we can predict how 
differences in individual conductances affect encoding of specific stimulus features. 
Our computational pipeline provides a way of screening all channel types to identify 
those channels that most strongly influence single neuron computation in any cell 
type of interest.
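The generative form of such a point-process GLM (a stimulus filter plus a spike-history filter feeding an exponential link) can be sketched as follows; the filter shapes and baseline are invented for illustration, and white noise stands in for the pink-noise stimulus:

```python
import numpy as np

rng = np.random.default_rng(3)

stim_filter = np.array([0.0, 0.5, 1.0, 0.5, 0.1])  # weights on recent stimulus bins
hist_filter = np.array([-5.0, -2.0, -0.5])         # refractory-like self-suppression
baseline = -1.0

T = 2000
stimulus = rng.normal(size=T)
spikes = np.zeros(T, dtype=int)
for t in range(T):
    s = sum(stim_filter[k] * stimulus[t - k]
            for k in range(len(stim_filter)) if t - k >= 0)
    h = sum(hist_filter[k] * spikes[t - 1 - k]
            for k in range(len(hist_filter)) if t - 1 - k >= 0)
    rate = np.exp(baseline + s + h)                # conditional intensity per bin
    spikes[t] = rng.poisson(rate)
```

Fitting proceeds in the other direction: regress the spike train on the lagged stimulus and lagged spike history with Poisson regression, recovering the stimulus and history filters from the coefficients.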


Granger Causality Analysis of Rat Cortico-cortical Connectivity during Pain  

Xingling Guo
Zhejiang University, China and School of Medicine, New York University 

Pain is a multidimensional experience involving multiple and distributed cortical 
regions. The primary somatosensory cortex (S1) and anterior cingulate cortex (ACC) 
are known to contribute to sensory and affective processing of pain, respectively. 
To date, however, their functional connectivity during pain episodes remains unclear. 
Here, we recorded in vivo extracellular local field potential (LFP) activity of the 
ACC and S1 simultaneously from freely behaving adult Sprague-Dawley rats during acute 
pain experiments, while the animals received noxious mechanical (pin prick) 
stimulation on the contralateral hind paw. To study the directed functional 
connectivity and information flow between the S1 and ACC circuits, we used 
model-based Granger-Geweke causality to study the frequency-dependent Granger 
causality (GC) during evoked pain episodes based on multichannel LFP recordings. 
Our preliminary data led to several findings. First, conditional GC (S1→ACC and 
ACC→S1) increased from baseline after pin prick stimulation (p<0.05, signed rank 
test) at nearly all frequency bands. There was also a decreasing trend in the GC 
difference (pain period minus baseline) from the lower to the higher frequency bands. 
Second, in the theta (4-8 Hz) and high-gamma (60-100 Hz) bands, the conditional GC 
from the S1 to the ACC (i.e., S1→ACC) was significantly greater than the GC in the 
opposite direction (i.e., ACC→S1; p<0.05). Third, since an insufficient model 
order of the vector autoregressive (VAR) model may lead to an estimation bias in 
GC, especially in high-frequency bands, we used computer simulations to identify 
the impact of model order mismatch and missing variables on the estimation bias in 
GC. Taken together, our results derived from experimental data and computer simulations 
provide indirect yet strong support for our independent circuit-dissection 
investigations of the S1→ACC projection. Combining computational and experimental 
investigations of the anatomical and functional connectivity of the rat S1 and ACC 
circuits yields important insight into the circuit mechanisms of pain perception. 
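A minimal time-domain version of the GC computation (least-squares VAR, restricted vs. full residual variance); the simulated series are stand-ins for the S1 and ACC LFPs, with a one-way coupling built in so the directionality can be checked:

```python
import numpy as np

rng = np.random.default_rng(4)

def residual_variance(target, predictors, p):
    """Residual variance of an order-p least-squares autoregression of `target`
    on p lags of every series in `predictors`."""
    T = len(target)
    X = np.column_stack([np.ones(T - p)] +
                        [s[p - k - 1 : T - k - 1] for s in predictors
                         for k in range(p)])
    y = target[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ beta) ** 2)

def granger(source, target, p=2):
    """Time-domain Granger causality source -> target:
    log of (restricted / full) residual variance."""
    restricted = residual_variance(target, [target], p)
    full = residual_variance(target, [target, source], p)
    return np.log(restricted / full)

# Simulated LFP stand-ins in which "s1" drives "acc" but not vice versa.
T = 4000
s1, acc = np.zeros(T), np.zeros(T)
for t in range(1, T):
    s1[t] = 0.5 * s1[t - 1] + rng.normal()
    acc[t] = 0.5 * acc[t - 1] + 0.4 * s1[t - 1] + rng.normal()
```

The frequency-dependent Granger-Geweke decomposition used in the abstract refines this scalar measure into per-frequency contributions.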

Zhe (Sage) Chen (2), Amrita Singh (2), Jing Wang (2)
1. Zhejiang University, China
2. School of Medicine, New York University


Identification of Magnetic Patterns Using Human Computer Interaction 
Generated by Human Brain 

Bineet Kumar Gupta
Shri Ramswaroop Memorial University, India

Study of human brain imaging and of the electrical properties of brain functioning is 
important for current clinical medicine and brain research. The human body comprises 
more than 70 trillion cells, which carry out several metabolic processes each second. 
At such a level of complexity, a great deal of communication among these cells is 
required for smooth functioning. Cells are programmed for such communication and are 
able to make the required changes, when necessary, in fractions of a second. This is 
made possible by complex electrical activity in several different types of cells, such 
as neurons, endocrine cells and muscle cells, which is why they are referred to as 
excitable cells. We propose a different approach to solving such clinical problems: 
the systematic study of naturally emerging magnetic patterns, the actions they relate 
to, and the similarities among these patterns, using the various computational tools 
available. These similarities can be used in tracing, analysing and detecting several 
magneto-chemical phenomena of excitable tissues. The reverse application of such 
fields, i.e. applying computer-aided or programmed magnetic fields to a target over a 
period of time, may also be studied. This programmed application of magnetic fields 
may open new dimensions of research in clinical fields. 

1. Background - Researchers all over the world have worked on stimulation of such 
cells using various methodological approaches, the most popular of which are TMS, or 
Transcranial Magnetic Stimulation, for the stimulation of neurons in the brain [Barker 
et al. 1985], and rTMS, or repetitive Transcranial Magnetic Stimulation [Ogiue-Ikeda 
et al., 2003]. Detection of the magnetic fields produced by the electrical activities 
of neurons in the brain is an area of importance to researchers today. In the USA, 
Cohen [1972] succeeded in detecting the extremely weak magnetic fields around the head 
that are related to the EEG alpha rhythm, using a SQUID (Superconducting Quantum 
Interference Device) system in a magnetically shielded room. Measurement of these weak 
magnetic fields around the brain is known as magnetoencephalography (MEG). MEG has 
proven to be very efficient in detecting signals from within the brain non-invasively. 
It has several advantages in temporal and spatial resolution when compared to other 
tools for functional neuroimaging such as functional MRI, optical tomography and 
near-infrared spectroscopy [Williamson and Kaufman, 1981]. The EEG and MEG maps, when 
calculated, show a complementary or orthogonal relationship. As with all electricity, 
this electrical activity of the body also leads to resultant magnetic fields, referred 
to as biomagnetic fields. The biomagnetic fields of the body are extremely low in 
intensity and have been measured with techniques such as magnetocardiography, which 
measures the magnetic fields generated by the heart, and magnetoencephalography, which 
is used for functional neuroimaging of the brain. These techniques are of great help 
in guiding treatment of the brain and heart. Primarily, the electrical activity of the 
body happens at the cell membrane, which maintains an appropriate voltage or charge. 

2. Statement of Problem - In recent years, researchers' focus has been missing in 
some areas of biomagnetism, namely the magnetic fields generated by the human brain, 
their detection, and the further possible stimulation of the brain or other parts of 
the human body with static or time-varying magnetic fields. Scientists have worked on 
several intensities, directions and variances of magnetic fields. In the field of 
health, the aim has been to reduce the effects of brain and central nervous system 
problems, such as depression and anxiety, using time-varying magnetic fields. In 
cancer-related TMS therapy, tumor cells are burned or targeted by focusing eddy 
currents to a point and hence destroying them. Although scientists have been very 
successful in these aims, all of the focus has been on the intensities and directions 
of the magnetic fields applied around a target. "Magnetic patterns" is the area being 
neglected in the field of biomagnetism. While much research has focused on the 
`intensity' and `direction' of magnetic fields, we have observed that the pattern of 
the generated magnetic fields may prove to be of much greater importance. Patterns can 
be studied action-wise for humans, animals and even plants, and may be analysed using 
Artificial Intelligence tools. 

3. Proposed Methodology - Identification of the brain's magnetic signals will start 
with magnetoencephalography (MEG), a functional neuroimaging technique that maps brain 
activity using arrays of SQUIDs (Superconducting Quantum Interference Devices). The 
results obtained would be converted into electrical signals and further into digital 
form (zeros and ones), since all machines understand the language of ON (i.e. 1) and 
OFF (i.e. 0) and work on combinations of the same. We will conduct a study on a single 
individual: we instruct the subject to think about a predefined topic, and we record 
the magnetic field generated by his brain. At a different point in time (say, after a 
day or two) we will repeat the same process. We will then analyse the results obtained 
on the two different occasions and measure the degree of uniqueness; since the same 
pattern of neurons is involved in the same pattern of thinking, the same magnetic 
fields should be generated. Once the magnetic signals are converted into machine 
language, it will be easier to analyse the uniqueness and similarity of the two 
results. When similarity and uniqueness are detected, a device can be coded to perform 
its task. If uniqueness is absent, we will analyse the signals, repeat the process, 
and check the `percentage of similarity', ignoring the rest and focusing on the 
portion of the signal that is unique. 

Rajat Sharma, Shri Ramswaroop Memorial University


Kronecker sum covariance models for spatio-temporal data with applications 
to neural encoding analysis

Byoungwook Jang
University of Michigan

Scientific experiments often face corrupted data where measurement errors cannot 
be ignored. We estimate the dependency structures of the row- and column- correlation 
matrices and discover the common temporal dependency in hawkmoth flight data. We 
emphasize how the standard graphical lasso and regression techniques diminish the 
underlying signals when the individual measurement errors are prevalent. Instead, 
the proposed method models the dependencies among individuals and variables with 
a non-separable Kronecker sum covariance matrix to decompose the corrupted data as 
a sum of two latent components. These components reflect dependencies in the data 
occurring at two different time scales. We assess these latent components individually 
and show that one latent component carries most of the neural encoding, with common 
dependency structures among different hawkmoths, despite the presence of measurement 
errors from hawkmoth-specific behaviors. We estimate the proposed 
model with a corrected form of nodewise regression, analyze the statistical 
convergence of our method, and provide empirical studies that support our theoretical 
results. Several analyses of the neural encoding dependencies show that our method 
successfully recovers the underlying autoregressive structure among different 
individuals while the other methods that do not incorporate the measurement errors 
suffer from noisy dependency structures.
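The covariance structure named in the title has the form Omega = A ⊕ B = A ⊗ I + I ⊗ B, which, unlike a Kronecker product, does not factor into separate row and column pieces. A small sketch with made-up component matrices:

```python
import numpy as np

def kronecker_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I_m + I_n ⊗ B for A (n x n) and B (m x m)."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

# Toy row (across-individual) and column (temporal) dependency components.
A = np.array([[2.0, -0.5],
              [-0.5, 2.0]])          # individual dependencies (n = 2)
B = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, -1.0],
              [0.0, -1.0, 3.0]])     # AR-like temporal dependencies (m = 3)
Omega = kronecker_sum(A, B)          # 6 x 6, non-separable joint structure
```

A useful property: the eigenvalues of A ⊕ B are all pairwise sums of the eigenvalues of A and B, so positive definite components yield a positive definite joint matrix.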

Seyoung Park, Sungkyunkwan University, Seoul, South Korea
Kerby Shedden, University of Michigan, Ann Arbor
Shuheng Zhou, University of California, Riverside


Feature selectivity is stable in primary visual cortex across a range of 
spatial frequencies

Brian B. Jeon
Center for Neural Basis of Cognition, Carnegie Mellon University 
Department of Biomedical Engineering, Carnegie Mellon University 

Reliable perception of environmental signals is a critical first step to 
generating appropriate responses and actions in awake behaving animals. 
The extent to which stimulus features are stably represented at the level 
of individual neurons is not well understood. To address this issue, we 
investigated the persistence of stimulus response tuning over the course of 
1–2 weeks in the primary visual cortex of awake, adult mice. Using 2-photon 
calcium imaging, we directly compared tuning stability to two stimulus features 
(orientation and spatial frequency) within the same neurons, specifically in 
layer 2/3 excitatory neurons. The majority of neurons that were tracked and 
tuned on consecutive imaging sessions maintained stable orientation and spatial 
frequency preferences (83% and 76% of the population, respectively) over a 
2-week period. Selectivity, measured as orientation and spatial frequency bandwidth, 
was also stable. Taking into account all 4 parameters, we found that the proportion 
of stable neurons was less than two thirds (57%). Thus, a substantial fraction of 
neurons (43%) were unstable in at least one parameter. Furthermore, we found that 
instability of orientation preference was not predictive of instability of spatial 
frequency preference within the same neurons. Population analysis revealed that 
noise correlation values were stable well beyond the estimated decline in monosynaptic 
connectivity (~250–300 microns). Our results demonstrate that orientation preference 
is stable across a range of spatial frequencies and that the tuning of distinct 
stimulus features can be independently maintained within a single neuron.
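Orientation preference of the kind tracked across sessions here is commonly estimated with a vector average of the tuning curve, doubling angles to account for the 180-degree periodicity of orientation; a self-contained sketch with made-up responses:

```python
import numpy as np

def orientation_preference(angles_deg, responses):
    """Preferred orientation from tuning-curve responses via the vector sum,
    doubling angles because orientation is periodic over 180 degrees."""
    theta = np.deg2rad(2 * np.asarray(angles_deg, float))
    z = np.sum(np.asarray(responses, float) * np.exp(1j * theta))
    return (np.rad2deg(np.angle(z)) / 2) % 180

angles = np.arange(0, 180, 30)                    # 0, 30, ..., 150 degrees
responses = np.array([1, 2, 8, 2, 1, 1], float)   # tuning curve peaking at 60
pref = orientation_preference(angles, responses)
```

Stability across sessions can then be quantified as the circular difference between the preferences recovered on each imaging day.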

Alex D. Swain (3), Jeffrey T. Good (3), Steven M. Chase (1,2), Sandra J. Kuhlman (1,2,3)
1 Center for Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, USA. 
2 Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA. 
3 Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, USA.


Beneficial effects of video game-playing: a look into the brain functional 
connectivity during perceptual decision-making 

Tim Jordan
Georgia State University 

Video games have become a prominent part of the lives of children and young 
adults in today's culture, with thousands of games released each year. Given this 
trend toward producing more games each year and the increasing number of people who 
play them, we need to understand how video games affect the perceptual 
decision-making abilities of those who play them continuously, and what changes in 
the brain allow for these behavioral performance changes. Previous studies have shown 
that video game-playing can improve information processing and working memory 
capabilities. In this rapid-sampling fMRI study, we used a modified version of the 
moving-dots perceptual decision-making task and examined in 20 participants how video 
game-playing changed behavior and the brain. We found that the decision response time 
was significantly lower in video-gamers than in non-gamers. The decision response 
time was correlated with 
the individual brain activation and specific network activity in and across the key 
regions of the perceptual decision-making as defined in previous studies. These results 
help improve our understanding of how the brain’s abilities to integrate sparse sensory 
information and to make moment-to-moment decisions can change over time with intense 
and engaging perceptual-motor activities like video-game playing. 

Dr. Mukesh Dhamala, Georgia State University


Two Strategies for the Control of Individual Finger Movements with an 
Intracortical Brain-Computer Interface 

Ahmed Jorge
University of Pittsburgh

Introduction: Intracortical microelectrode arrays (MEAs) can record neural activity 
to enable people with tetraplegia to control multiple robotic arm movements (e.g. 
arm translation, rotation and various grasp degrees of freedom). The ultimate goal 
of these neuro-prosthetics is to serve as assistive devices to restore function. 
Dexterous tasks such as tying shoelaces will require control of individual fingers. 
Thus, understanding the neural correlates of these movements is important. 
Methods: Spike counts were recorded from a 31-year-old man with a C5/6 ASIA B spinal 
cord injury using two 96-electrode microarrays (4 mm × 4 mm footprint, 1.5 mm shank length) 
in the left motor cortex. Across 14 days, the subject observed a hand flex each finger 
while neural firing rates were recorded. Either a linear model (n=10 days) or a 
six-class linear discriminant (LDA) classifier (n=4 days) was used to decode finger 
position during full-brain control experiments. A Kolmogorov-Smirnov test, with spike 
counts samples before and during cue onset, was used to identify channels significantly 
modulated by a particular finger movement.

Results: During online brain control, we found that an LDA decoder resulted in higher 
kinematic accuracies when compared to a linear decoder (61% vs. 27%) during multiple 
days of testing. In addition, the ratio of failures to flex a cued finger to 
failures to keep other fingers still decreased from 2:3 with the linear decoder 
to 1:20 with the LDA decoder. The average number of significant channels associated 
with any finger movement was not significantly different during LDA (n=42) vs. 
linear (n=46) decoder training (p=0.564). 
Conclusion: In the same participant, two decoders with similar numbers of 
significant channels produced vastly different accuracies and failure modes. 
These results highlight the importance of decoder selection for highly complex 
movements (e.g. finger dexterity) and considering each decoder’s limitations 
(e.g. LDA can only classify movement or rest).
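The six-class LDA decoding approach compared above can be illustrated with a toy sketch; this is a minimal numpy example on synthetic spike counts (class counts, channel numbers, and class separability are illustrative assumptions, not the study's data or pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_channels, n_trials = 6, 96, 40   # hypothetical sizes

# Synthetic spike counts: each class has a distinct mean firing pattern.
means = rng.normal(10, 3, size=(n_classes, n_channels))
X = np.concatenate([rng.normal(means[c], 2.0, size=(n_trials, n_channels))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_trials)

def fit_lda(X, y):
    """Fit multi-class LDA: per-class means plus a shared (pooled) covariance."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    centered = X - mu[y]
    cov = centered.T @ centered / (len(X) - len(classes))
    cov += 1e-6 * np.eye(X.shape[1])            # regularize for stability
    return mu, np.linalg.inv(cov)

def predict_lda(X, mu, prec):
    """Assign each trial to the class with the highest linear discriminant score."""
    # Linear score: x @ prec @ mu_c - 0.5 * mu_c @ prec @ mu_c (equal priors)
    scores = X @ prec @ mu.T - 0.5 * np.sum(mu @ prec * mu, axis=1)
    return np.argmax(scores, axis=1)

mu, prec = fit_lda(X, y)
acc = np.mean(predict_lda(X, mu, prec) == y)
```

With well-separated synthetic classes the training accuracy is near ceiling; the point is only to show the shape of the classification step.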
Dylan Royston, Elizabeth Tyler-Kabara, Michael Boninger, Jennifer Collinger  


Ensemble Pursuit: an algorithm for finding overlapping clusters of correlated 
neurons in large-scale recordings

Maria Kesa
Howard Hughes Medical Institute, Janelia Research Campus. 

Large populations of neurons coordinate their activity to process shared 
sensory inputs and encode internal behavioral states. To extract these patterns 
of coordination in large datasets, we developed a fast greedy algorithm based on 
dictionary learning which extracts correlated and overlapping ensembles of cells 
from calcium imaging recordings. The learning algorithm continuously initializes 
and extracts new dictionary terms greedily from the residuals of the cost function 
given the current active set of dictionary elements. It shares this strategy with 
projection pursuit methods like independent components analysis, and we thus called 
the algorithm "ensemble pursuit". The method has a tunable parameter to control the 
sparsity of the ensembles, i.e. how many cells they include on average. Because one 
cell typically belongs to multiple ensembles, the model is more flexible than 
traditional clustering approaches. We evaluated the algorithm on simultaneous calcium 
imaging recordings of approximately 10,000 neurons from V1, and found that we could 
decode one out of 2800 natural stimuli with 15.6% (s.e. 2.8%) accuracy from 150 
ensembles, compared with 35.6% (s.e. 4.1%) for PCA, and 9.0% (s.e. 2.2%) for latent 
Dirichlet allocation (LDA) (classification accuracy with 150 random neurons sampled 
ten times for each mouse is on average 3.3%, s.e. 0.2%). Ensemble pursuit achieves 
this accuracy with 0.5% non-zero weights per component, compared with 100% for PCA 
and 70.8% for LDA. In addition, we find that many of the ensembles have Gabor-like 
linear receptive fields, unlike the receptive fields obtained by PCA. These analyses 
show that ensemble pursuit can be applied to extract meaningful patterns of neural 
activity from large-scale recordings and has the potential to be applied in other 
experimental designs, such as extracting ensembles of cells that respond to reward, 
shock or have mixed selectivity in classical conditioning paradigms, or ensembles 
that encode particular task rules or internal variables used for solving tasks. We 
are integrating the algorithm with the suite2p calcium imaging library in Python for 
interactive data analyses.
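The greedy extract-and-peel strategy described above can be sketched as follows; this is a toy stand-in under simplifying assumptions (nonnegative weights, a hard sparsity threshold, synthetic data), not the authors' ensemble pursuit implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_time, n_ens = 60, 500, 3

# Synthetic data: 3 overlapping ensembles each drive a sparse subset of cells.
U_true = (rng.random((n_cells, n_ens)) < 0.3).astype(float)   # membership
V_true = np.abs(rng.normal(size=(n_ens, n_time)))             # ensemble activity
F = U_true @ V_true + 0.1 * rng.normal(size=(n_cells, n_time))

def ensemble_pursuit(F, n_components, n_iter=20):
    """Greedy sketch: seed each component from the residual's most correlated
    cell, alternate sparse weight / time-course updates, then peel it off."""
    R = F - F.mean(axis=1, keepdims=True)
    U = np.zeros((F.shape[0], n_components))
    V = np.zeros((n_components, F.shape[1]))
    for k in range(n_components):
        seed = np.argmax((R @ R.T).sum(axis=1))   # cell most correlated w/ rest
        v = R[seed].copy()
        for _ in range(n_iter):
            u = R @ v / (v @ v)                   # least-squares cell weights
            u = np.maximum(u, 0)                  # nonnegative membership
            u[u < 0.5 * u.max()] = 0.0            # hard sparsity (tunable)
            v = u @ R / max(u @ u, 1e-12)         # refit ensemble time course
        U[:, k], V[k] = u, v
        R = R - np.outer(u, v)                    # remove this ensemble
    return U, V

U, V = ensemble_pursuit(F, n_ens)
recon_err = np.linalg.norm(F - F.mean(axis=1, keepdims=True) - U @ V)
```

Because cells can receive nonzero weight in several components, one cell can belong to multiple extracted ensembles, mirroring the overlap property described above.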

Co-authors: Carsen Stringer, Marius Pachitariu 
Howard Hughes Medical Institute, Janelia Research Campus.


Synchrony Analysis in Large Neural Populations 

Spencer Koerner
Carnegie Mellon University 

Many methods have been proposed for detection of excess synchrony among pairs of 
neurons, that is, approximately synchronous firing that occurs more frequently 
than can be explained by chance alone. When there are N neurons, there are, in 
principle, at every examined time point, $2^N$ combinations of possible spiking 
patterns. An appealing strategy is to consider only the pairwise interactions, 
based on the maximum entropy pairwise interaction model. However, maximum 
entropy pairwise interaction models, which include all possible pairwise interactions 
among the N neurons, require iterative methods for fitting, which become prohibitively 
expensive when N is moderate or large. We propose a faster solution that relies on 
an assumption of sparsity among pairwise synchronous interactions, and use 
FDR-regression [Scott et al., J. Amer. Statist. Assoc., 2015] to screen for 
potential effects. When there are some null effects (some absences of interaction 
terms in the model), the maximum likelihood estimate has an explicit analytical 
solution, and it becomes possible to fit a model using the remaining synchronous 
effects without resorting to standard iterative methods (which are prohibitively 
costly). To accelerate the method further, we exploit a specific form of sparsity 
in the screened network involving connections between communities of more densely 
connected neural units. By segmenting the network into smaller communities, we can 
obtain exact MLE results for a majority of interactions, and consistent estimators 
for the rest. We use our method to analyze synchrony in a network of V1 neurons, 
and examine how the population's excess synchrony varies under different 
experimental tasks.
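The baseline notion of excess synchrony — coincidences beyond what independent firing predicts — can be illustrated with a toy pairwise test on synthetic binned spike trains (a simple binomial approximation; the FDR-regression screening and maximum entropy fitting described above are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 10000, 0.05            # time bins, baseline firing probability per bin

# One independent pair, and one pair receiving shared drive.
a = rng.random(T) < p
b = rng.random(T) < p                       # independent of a
shared = rng.random(T) < 0.02               # common input -> excess synchrony
a2 = (rng.random(T) < p) | shared
c2 = (rng.random(T) < p) | shared

def synchrony_z(x, y):
    """z-score of observed coincidences against the independence expectation."""
    n = len(x)
    px, py = x.mean(), y.mean()
    expected = n * px * py
    observed = np.sum(x & y)
    sd = np.sqrt(n * px * py * (1 - px * py))   # binomial approximation
    return (observed - expected) / sd

z_indep = synchrony_z(a, b)     # should be near zero
z_sync = synchrony_z(a2, c2)    # should be strongly positive
```

Scaling this test to all N(N-1)/2 pairs is exactly where the screening and community-segmentation machinery described in the abstract becomes necessary.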


Simultaneous dimensionality reduction and deconvolution of calcium imaging activity

Tze Hui Koh
Department of Biomedical Engineering, Carnegie Mellon University
Center for the Neural Basis of Cognition, Carnegie Mellon University 

Dimensionality reduction is widely used in electrophysiological studies to provide 
concise representations of neural population activity in terms of a small number of 
latent variables that can inform our understanding of brain function. In recent years, 
calcium imaging has emerged as a powerful tool for recording the simultaneous activity 
of large populations of identified cell types. Because the fluorescence from calcium 
imaging represents a transformed version of the underlying electrical activity of the 
neurons, it is unclear whether the same dimensionality reduction techniques applied 
to electrophysiological recordings would be appropriate for calcium imaging recordings. 
To assess this, we crafted a data simulation framework where a small number of 
simulated latent variables drove a larger number of inhomogeneous Poisson spike 
trains that were subsequently used to drive simulated calcium dynamics and fluorescence 
traces. We then compared the ability of factor analysis to recover the ground truth 
latent variables when applied directly to fluorescence traces versus when applied to 
deconvolved spikes estimated from the fluorescence traces. We found the two-stage 
approach of deconvolution and then dimensionality reduction extracted latent variables 
that more closely reflected the ground truth. We then created a probabilistic model 
that unifies dimensionality reduction and deconvolution, capturing the temporal 
structure of the calcium dynamics in the form of a first-order autoregressive process. 
In doing so, the method leverages the shared variability between neurons to extract 
the latent variables in the presence of temporally correlated observations from 
fluorescence traces. We developed two versions of this unified probabilistic model, 
one without temporal dynamics and one assuming linear dynamics in the latent space, 
which are both fit using the Expectation Maximization (EM) algorithm. By systematically 
varying parameters such as noise level and timescales of latent variables and calcium 
dynamics, we performed simulations in regimes that emulate real data. We found that 
in these regimes, our unified probabilistic models often outperform the two-stage 
methods. This work can help guide the use of dimensionality reduction in calcium 
imaging experiments. 
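The simulation framework described above (latent variables driving inhomogeneous Poisson spikes, which in turn drive calcium dynamics and fluorescence) can be sketched in a few lines; all sizes, rates, and the AR(1) decay constant below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_neurons, n_latents = 2000, 30, 2
dt, gamma = 0.033, 0.95           # frame interval (s), AR(1) calcium decay (assumed)

# Latent variables: smooth random walks shared across the population.
Z = np.cumsum(rng.normal(0, 0.02, size=(T, n_latents)), axis=0)
W = rng.normal(size=(n_latents, n_neurons))     # loading of latents onto neurons

# Inhomogeneous Poisson spiking driven by the latents.
rates = np.exp(1.0 + Z @ W)                     # per-neuron firing rates (Hz)
spikes = rng.poisson(rates * dt)

# First-order autoregressive calcium dynamics + noisy fluorescence observations.
C = np.zeros_like(spikes, dtype=float)
for t in range(1, T):
    C[t] = gamma * C[t - 1] + spikes[t]
F = C + rng.normal(0, 0.2, size=C.shape)
```

Applying factor analysis to `F` directly versus to deconvolved estimates of `spikes` is then the two-stage comparison the abstract describes; the unified model instead folds the AR(1) structure into the latent-variable observation model.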

William E. Bishop (3), Steven M. Chase* (1,2), Byron M. Yu* (1,2,4)
1 Department of Biomedical Engineering, Carnegie Mellon University
2 Center for the Neural Basis of Cognition, Carnegie Mellon University
3 Janelia Research Campus, Howard Hughes Medical Institute
4 Department of Electrical and Computer Engineering, Carnegie Mellon University
* Denotes equal authorship


Impact of standard experience on tuning diversity and natural scene discrimination 
in primary visual cortex
Sandra J. Kuhlman
University of Pittsburgh Center for Neuroscience, Center for the Neural Basis 
of Cognition, Department of Biological Sciences, Department of Biomedical Engineering, 
Carnegie Mellon University

Optimal encoding of visual scenes requires early-life visual experience. One 
strategy to determine how experience enhances vision is to identify the specific 
response features that are modulated by experience to improve natural scene encoding. 
We performed large field of view calcium imaging of excitatory neurons in primary 
visual cortex (V1) in awake standard- and dark-reared mice. The ability of functionally 
defined neurons to discriminate similar and dissimilar natural scenes was assessed. 
We found standard rearing experience improves the encoding of natural scenes in V1 by 
shifting neural preference away from Gabor-like simple edges towards more complex 
features present in natural scenes. The net impact of standard experience was a 
counter-intuitive decrease in neural responses to simple edges. Animals exposed to 
light after being deprived of vision during postnatal development failed to accurately 
encode similar natural scenes. Our results indicate that, rather than refining the 
Gabor-like distribution of tuning in V1, early experience improves scene discrimination 
by enhancing sensitivity to features specifically present in natural scenes.

Patricia L. Stan (1,2*), Janne Kauttonen (2,3*), Brian B. Jeon (2,4), Thomas Fuchs (3), 
Steven M. Chase(2,4), Tai Sing Lee (2,5)
1 University of Pittsburgh Center for Neuroscience, 
2 Center for the Neural Basis of Cognition, 
3 Department of Biological Sciences, Carnegie Mellon University, 
4 Department of Biomedical Engineering, Carnegie Mellon University, 
5 Department of Computer Science, Carnegie Mellon University


High Frequency Phase Locking in Auditory Cortex to Continuous Speech  

Joshua P. Kulasingham
University of Maryland

The neural processing of natural sounds, such as speech, changes along the ascending 
auditory pathway, and is often characterized by a progressive reduction in 
representative frequencies. For instance, the well-known frequency-following response 
(FFR) of the auditory midbrain, measured with electroencephalography (EEG), is 
dominated by frequencies from ~100 Hz to several hundred Hz, and time-locks to acoustic 
features (waveform and envelope) at those rates. In contrast, cortical responses, 
whether measured by EEG or magnetoencephalography (MEG), are thought to be 
characterized by frequencies of a few Hz to a few tens of Hz, time locking to 
acoustic envelope features at those rates. In this study we show that this separation 
by frequency is overly simplistic. Using MEG, which is insensitive to subcortical 
structures, we investigate high-frequency cortical responses (80-300 Hz) to continuous 
speech using neural source-localized reverse correlation, whose kernels are called 
temporal response functions (TRFs). Continuous speech stimuli were presented to 40 
subjects (17 younger, 23 older) with clinically normal hearing and their MEG responses 
were analyzed in the 80-300 Hz band. The spatiotemporal profile of these response 
components is consistent with a purely cortical origin with ~40 ms peak latency 
and a right hemisphere bias. TRF analysis was performed using two separate aspects 
of the speech stimuli: a) the 80-300 Hz band of the speech waveform itself, and b) 
the 80-300 Hz envelope of the high frequency (300-4000 Hz) band of the speech stimulus. 
Both of these aspects contributed to the TRF, with the envelope dominating the response. 
Age-related differences were also analyzed to investigate a reversal previously 
seen along the ascending auditory pathway, whereby older listeners have weaker 
midbrain FFR responses than younger listeners, but, paradoxically, have stronger 
low frequency cortical responses. In contrast to these earlier results, this study 
did not find clear age-related magnitude differences in high frequency cortical 
responses. Together, these results suggest that the traditional EEG-measured FFR has 
distinct and separate contributions from both subcortical and cortical sources. The 
cortical responses at FFR-like frequencies share properties with both midbrain 
responses at the same frequencies and cortical responses at much lower frequencies.

Christian Brodbeck, Alessandro Presacco, Stefanie E. Kuchinsky,  Samira Anderson 
& Jonathan Z. Simon


Separation of hemodynamic signals from GCaMP fluorescence in widefield imaging 

Michael G. Moore
Neuroscience Program and Institute for Quantitative Health Science and 
Engineering, Michigan State University 

Widefield calcium imaging can monitor activity from several distant cortical areas 
simultaneously and reveal systems-level cortical interactions. Interpretation of 
widefield images is confounded, however, by hemoglobin absorption, which can 
substantially alter the recorded intensities. One approach to demixing hemodynamics 
from calcium signals is to directly measure the hemoglobin absorption using 
multi-wavelength backscatter recordings. We present a parameter-free approach to 
demixing multi-wavelength data that yields spatially detailed demixing coefficients 
which account for changes in tissue type and vasculature. This is accomplished by 
training a model on data from GFP-expressing mice and then transferring the model 
onto GCaMP mice. We evaluate the performance of our approach, alongside other 
correction approaches, in awake mice of three commonly used transgenic lines.  
In addition, we compare model performance at multiple cortical locations and show 
that our approach can remove prominent vascular artifacts along the sagittal midline. 
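The model-transfer idea — learn demixing coefficients where fluorescence variation is purely hemodynamic, then apply them to activity-dependent data — can be sketched with a toy single-pixel linear regression (all signals below are synthetic; the study's approach is parameter-free and spatially detailed, unlike this stand-in):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 3000

# A shared hemodynamic signal seen by two backscatter wavelength channels.
hemo = np.cumsum(rng.normal(0, 0.03, T))
hemo -= hemo.mean()
bs_green = 1.0 * hemo + 0.02 * rng.normal(size=T)
bs_red = 0.6 * hemo + 0.02 * rng.normal(size=T)
B = np.column_stack([bs_green, bs_red])

# "GFP mouse": fluorescence variation is purely hemodynamic -> training data.
f_gfp = 0.9 * hemo + 0.05 * rng.normal(size=T)
coef, *_ = np.linalg.lstsq(B, f_gfp, rcond=None)   # learn demixing coefficients

# "GCaMP mouse": true calcium signal plus the same hemodynamic contamination.
calcium = np.sin(np.linspace(0, 40, T))
f_gcamp = calcium + 0.9 * hemo + 0.05 * rng.normal(size=T)
f_corrected = f_gcamp - B @ coef                   # transfer the learned model

r_before = np.corrcoef(f_gcamp, calcium)[0, 1]
r_after = np.corrcoef(f_corrected, calcium)[0, 1]
```

Subtracting the backscatter-predicted hemodynamic component raises the correlation with the underlying calcium signal, which is the essence of the transfer strategy described above.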

Matt T. Valley (2), Mark Reimers (1), and Jack Waters (2) 
1. Neuroscience Program and Institute for Quantitative Health Science and Engineering, 
Michigan State University 
2. Allen Institute for Brain Science 


Reward-predictive sensory experience increases synaptic connectivity and excitation 
in primary somatosensory cortex Layer 2 

Stephanie Myal
Carnegie Mellon University

Pyramidal neurons in Layer 2 (L2 PYRs) are uniquely suited to integrate task-relevant 
sensory information, as they receive input from within and outside the cortical column 
and fire sparsely due to tight local inhibitory control, decreased excitability, and 
sparse interconnectivity (~10%). However, L2 is known to be quite plastic, showing 
changes in activity and connectivity with passive sensory experience and sensory 
deprivation. We sought to determine whether active sensory training could induce 
lasting L2 PYR plasticity, and whether cortical changes could be further enhanced 
with a reward-contingent stimulus. To engage the primary somatosensory cortex (S1), 
we used an automated behavioral training task in which mice (C57BL/6, 19-28 days old) 
must nose-poke for a water reward that is occasionally, randomly preceded by a gentle 
air puff onto the right whisker pad. To measure excitatory connectivity, we performed 
paired whole-cell recordings of PYRs in acute brain slices of left S1 L2, injecting 
current to induce presynaptic spikes while recording evoked EPSPs. We find that 
reward-non-contingent (Puff) exposure doubles L2 PYR connectivity and increases 
the ratio of bidirectional synapses, with no difference in amplitude, failure rate, 
or paired-pulse ratio compared to synapses found at baseline. However, the doubled 
connectivity is transitory, and reverses after 5 days of training. In contrast, 
sensory-reward association training (SAT, in which air puff is predictive of 
water reward) also doubles connectivity, but does so persistently: after 5 days 
of training, connectivity remains elevated. To assess whether increased PYR 
connectivity results in recurrent excitation of L2, we electrically stimulated 
single L2 PYRs (10 spikes, 50 ms ISI) while recording spontaneous events in 
nearby, non-connected PYRs. We found that SAT, but not Puff training, increases 
event frequency in L2 during and after the stimulus period, suggesting that 
increased connectivity alone is not sufficient to increase L2 recurrent activity. 
In summary, sensory exposure (Puff) increases Layer 2 excitatory connectivity 
transiently, while salient, reward-predictive sensory exposure (SAT) triggers 
persistent increases in connectivity and recurrent activity. These differential 
circuit changes may specifically facilitate the encoding of task-relevant sensory 
stimuli and behavioral context in primary sensory cortex Layer 2.


Task difficulty has a distinct representation from the decision variable in the 
monkey parietal cortex  

Gouki Okazawa
Center for Neural Science, New York University 

The neural population of the monkey lateral intraparietal (LIP) cortex represents 
the formation of perceptual decisions communicated with saccadic eye movements. 
For example, in the direction discrimination task with random dots (Shadlen & 
Newsome 2001), the average PSTHs of the LIP neural population ramp up at a rate 
proportional to the motion strength supporting a saccade toward the neurons’ 
response field. These classic experimental results have motivated models in 
which neurons integrate sensory evidence into a decision variable (DV). In these 
models, the DV magnitude dictates the choice and confidence associated with the 
choice. Here we analyzed LIP population responses in two perceptual tasks: direction 
discrimination with random dots (Shadlen & Newsome 2001) and a novel face 
categorization task (Okazawa et al 2018). We varied stimulus difficulty by changing 
motion coherence in the direction discrimination task and the distance of facial 
features to prototypes in the face categorization task. In both tasks, the LIP 
population simultaneously represented both the stimulus difficulty and the DV; 
population responses at each moment fell along a convex curve in the state space, 
where the location on the curve corresponded to the DV and depth from the base of 
the curve represented stimulus difficulty. The curves expanded over time, creating 
a 3D manifold that reflected the variety of neural trajectories in the state space 
during decision formation. This simultaneous representation of the stimulus difficulty 
indicates an explicit code for the certainty associated with the choice, as opposed 
to the indirect representation suggested in previous studies (Kiani & Shadlen 2009; 
Beck et al 2008). However, by analyzing neural data while monkeys performed a 
post-decision wagering task to report their confidence in the direction discrimination 
task, we show that the monkeys’ confidence judgment did not correlate with 
fluctuation of population neural responses along the stimulus difficulty axis. 
We therefore suggest that the explicit representation of stimulus difficulty is
most likely required to implement the computations performed for integration of 
sensory evidence, but it is not designed to be read out by downstream areas for 
computation of confidence. 

1 Shadlen MN, Newsome WT (2001) J Neurophysiol 86:1916-36 
2 Okazawa G, Sha L, Purcell BA, Kiani R (2018) Nat Comm 28:3479 
3 Kiani R, Shadlen MN (2009) Science 324:759-64 
4 Beck et al. (2008) Neuron 60:1142-52 

Authors and affiliations:
Roozbeh Kiani (1,2,3) 
1 Center for Neural Science, New York University, New York, NY 10003 
2 Department of Psychology, New York University, New York, NY 10003 
3 Neuroscience Institute, NYU Langone Medical Center, New York, NY 10016 


Manifold optimization for identifying shared and non-shared neural subspaces between 
active and passive complex movements

Vasileios Papadourakis 
Department of Organismal Biology and Anatomy, University of Chicago 

Dimensionality reduction methods are established as a common way to analyze 
neural population data and have provided insight into how neural populations 
coordinate and interact to generate perception and guide behavior. The motivation 
behind the use of dimensionality reduction is the fact that, as neurons tend to 
co-vary, the activity of many neurons can be well described by a low number of latent 
variables (dimensions). These latent variables are thought to emerge from network 
connectivity or to represent common inputs to the population.

An open question is if the underlying latent structure is preserved across 
different behaviors or if it is a byproduct of studying a limited number of 
experimental conditions. To answer this, we need tools to identify the existence 
and degree of overlap between subspaces of different origin. Importantly, such 
tools need to be able to accommodate data that come from a variety of possibly 
unconstrained behaviors, including tasks in which trial averaging is not necessarily 
meaningful such as foraging or grooming. 
To tackle these problems, we take advantage of a generalization of dimensionality 
reduction methods that defines an optimization problem with an objective (cost) 
function that should be minimized over orthogonal or unconstrained matrices. For 
example, in the case of PCA, the d principal components should maximize the projected 
variance while also being orthogonal to each other. To solve this, we define an 
objective function that is minimized when the projected variance is maximized; 
the orthogonality constraint is enforced by searching over the manifold of orthogonal 
matrices. This framework has two attractive properties. First, additional constraints can be 
added to the objective function. Extending the basic PCA paradigm to account for 
multiple conditions, we can search for latent dimensions that maximize the projected 
variance of one behavioral condition while at the same time minimizing the projected 
variance of another behavioral condition. Second, the same generalization can be 
used to approximate different dimensionality reduction methods that are more 
appropriate to the problem at hand, such as Factor Analysis if we don’t want to 
trial average. 
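The PCA-as-optimization view described above can be sketched numerically: take gradient steps on the trace objective and retract back onto the manifold of orthonormal matrices with a QR factorization (the toy data, step size, and iteration count are assumptions, not the authors' solver):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 50, 3

# Synthetic data with a dominant 3-dimensional subspace plus isotropic noise.
basis = np.linalg.qr(rng.normal(size=(n, d)))[0]
X = rng.normal(size=(1000, d)) @ basis.T * 3 + 0.3 * rng.normal(size=(1000, n))
C = np.cov(X.T)

def pca_manifold(C, d, n_steps=200, lr=0.1):
    """Maximize tr(W^T C W) over orthonormal W: gradient step + QR retraction."""
    W = np.linalg.qr(rng.normal(size=(C.shape[0], d)))[0]
    for _ in range(n_steps):
        G = 2 * C @ W                       # Euclidean gradient of tr(W^T C W)
        W, _ = np.linalg.qr(W + lr * G)     # step, then retract onto the manifold
    return W

W = pca_manifold(C, d)
var_manifold = np.trace(W.T @ C @ W)                     # captured variance
var_eig = np.sum(np.sort(np.linalg.eigvalsh(C))[-d:])    # exact PCA answer
```

The optimizer recovers essentially the same projected variance as an eigendecomposition; the point of the framework is that the objective can then be modified (e.g. maximize variance in one condition while penalizing it in another) without changing the optimization machinery.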
We test these methods on motor cortex spiking activity recorded with a 
multi-electrode array during the execution and multi-sensory observation of a 
random target pursuit task. In this dataset, conditions are expected to have both 
shared and non-shared dimensions. Moreover, the nature of the task doesn’t allow 
trial averaging because the animals randomly explore the workspace of 2D reaching 
movements. We show how different objective functions and constraints behave and 
demonstrate methods to evaluate the results and tune the optimization parameters. 

Aaron J. Suminski (3,4), Kazutaka Takahashi (1), Nicholas G. Hatsopoulos (1,2) 
1. Department of Organismal Biology and Anatomy, University of Chicago, 
2. Committee on Computational Neuroscience, University of Chicago
3. Department of Neurological Surgery, University of Wisconsin Madison
4. Department of Biomedical Engineering, University of Wisconsin Madison


Efficient multi-neuron patch-clamp for microcircuit analysis in human brain tissue 
allowing inter-individual comparison

Yangfan Peng
Charité-Universitätsmedizin Berlin

Comparing neuronal microcircuits across different brain regions, species and individuals 
can reveal common and divergent principles of network computation. Simultaneous 
patch-clamp recordings from multiple neurons offer the highest temporal and 
subthreshold resolution to analyse local synaptic connectivity. However, its 
establishment is technically complex and experimental performance is limited by 
high failure rates, long experimental times, and small sample sizes. We introduce 
an in-vitro multipatch setup with an automated pipette pressure and cleaning system 
that greatly facilitates recordings of up to 10 neurons simultaneously and subsequent 
patching of additional neurons. We provide detailed instructions for hardware and 
software solutions that increase the usability, speed, and data throughput of multipatch 
experiments. Our high-throughput approach allowed probing of 150 connections between 
17 neurons in one human cortical slice as well as screening of over 600 synaptic 
connections in tissue from a single patient. This technique will facilitate the 
systematic analysis of microcircuits and allow unprecedented comparisons at the level 
of individuals.


Characterization of Continuous Spectral Dynamics in the Sleep EEG using an 
Extended Kalman Filter Approach 

Michael Prerau
MGH Department of Anesthesia, Critical Care, and Pain Management

Sleep has been shown to be a continuous and dynamic process in every physiological 
and behavioral system studied thus far. The ability to accurately describe these 
dynamics is therefore essential to understanding the way in which healthy and 
pathological brain activity evolves during sleep. Although current clinical staging 
has been instrumental in important advances in sleep medicine, it artificially 
discretizes the continuum of sleep into 30-second epochs of fixed sleep stages. 
As such, this discretization disagrees with our understanding of sleep circuitry 
dynamics and also fails to account for activity that does not fit into a single stage 
definition. Additionally, quantitative sleep electroencephalogram (EEG) analysis 
relying on spectral estimation is highly prone to “spectral bleeding”, as an 
oscillation may not fall fully within a fixed canonical band or unrelated oscillations 
may enter. It is therefore vital to progress in our understanding of sleep and 
related pathologies that we develop accurate, objective methods to capture the full 
dynamic nature of sleep neurophysiology. We describe a novel framework for more 
accurately characterizing the dynamics of multiple simultaneously-occurring 
oscillations within the sleep EEG. Given the time-frequency spectral representation 
of the sleep EEG, we estimate the peak frequency, power, and bandwidth of multiple 
oscillations (e.g. alpha, delta, sigma, theta) at each point in time. This is 
achieved by decomposing the EEG spectrogram into a set of time-varying parametric 
functions, the parameters of which are estimated by a modified extended Kalman filter. 
We present applications to simulated and experimental sleep EEG data, as well as to 
depth recordings from anesthetized rodents. In each case, the model robustly 
estimates the peak frequency, bandwidth, and power of each constituent oscillation 
more accurately than traditional bandpass methods. By developing a framework for 
modeling EEG oscillation dynamics, we provide a pathway towards a statistically-
principled, robust, flexible, and continuous characterization of brain dynamics 
during sleep, which is essential to characterizing the vast heterogeneity observed 
across both healthy and pathological populations. 
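As a much-simplified stand-in for the modified extended Kalman filter described above, the sketch below tracks a single drifting spectral peak with a scalar Kalman filter over per-window periodogram estimates (all signal parameters and noise settings are illustrative; the actual method jointly estimates frequency, power, and bandwidth of multiple oscillations):

```python
import numpy as np

rng = np.random.default_rng(7)
fs, win, n_win = 100.0, 256, 80          # sample rate (Hz), window length, windows

# Simulated EEG-like signal: one oscillation whose frequency drifts slowly.
true_freq = 10 + np.cumsum(rng.normal(0, 0.05, n_win))   # alpha-band drift
x = np.concatenate([
    np.sin(2 * np.pi * f * np.arange(win) / fs) + 0.5 * rng.normal(size=win)
    for f in true_freq])

freqs = np.fft.rfftfreq(win, 1 / fs)
est_raw, est_kf = [], []
mu, P = 10.0, 1.0                        # Kalman state: peak frequency estimate
q, r = 0.05 ** 2, 0.2 ** 2               # process / observation noise (assumed)
for i in range(n_win):
    spec = np.abs(np.fft.rfft(x[i * win:(i + 1) * win])) ** 2
    z = freqs[np.argmax(spec)]           # noisy, bin-quantized peak estimate
    P += q                               # predict step
    K = P / (P + r)                      # Kalman gain
    mu += K * (z - mu)                   # update step
    P *= (1 - K)
    est_raw.append(z)
    est_kf.append(mu)

rmse_raw = np.sqrt(np.mean((np.array(est_raw) - true_freq) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(est_kf) - true_freq) ** 2))
```

Both the raw per-window estimate and the filtered track stay close to the drifting peak; replacing the scalar state with the parameters of several peak functions, and the linear update with an extended (linearized) one, gives the flavor of the full framework.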

Patrick Stokes, MGH Department of Anesthesia, Critical Care, and Pain Management


Detection of putative synaptic connections from multi-electrode spike recordings: 
a model-based method with structural constraints

Naixin Ren
Department of Psychological Sciences, University of Connecticut

In simultaneous spike recordings, putative monosynaptic connections often appear 
in cross-correlograms as fast-onset, short-latency peaks (excitatory connections) 
or troughs (inhibitory connections). Advances in both in-vivo and in-vitro multielectrode 
arrays allow hundreds of neurons to be recorded simultaneously with thousands of 
potential synapses between them. However, detecting and characterizing synaptic 
connections at this scale is challenging. Here we use an extension of the Generalized 
Linear Model to describe correlograms between pairs of neurons to automatically detect 
synaptic connections. Our model separates the cross-correlogram into two parts: a slow 
effect due to common input and a fast effect due to the synapse. For the slow effects 
we learn a smooth basis using low-rank, nonlinear matrix factorization. For the fast 
effects we constrain presynaptic neuron type (based on Dale’s law), synaptic 
latencies (based on the fact that synaptic latency grows with the distance between 
the neurons) and time constants. We evaluate our model using two simulated 
integrate-and-fire networks, one with recurrent connections, the other with 
spike trains from an in-vitro spike recording serving as presynaptic inputs to 
mimic the presynaptic activity in the real data. Our model outperforms two other 
synapse detection methods, a jitter method and a threshold method, especially on 
weak connections. We then apply our model to in-vitro multielectrode array data 
(mouse somatosensory cortex) 
where our model recovers plausible connections from hundreds of neurons. Similar to 
previous work (Barthó et al., 2004) we find that the spike waveforms of the putative 
excitatory or inhibitory presynaptic neurons have distinct shapes. This method may be 
a useful tool for detecting and characterizing synapses in large-scale spike recordings 
both in-vivo and in-vitro.
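The slow/fast decomposition of a cross-correlogram can be illustrated with a toy least-squares fit: a smooth cosine basis absorbs the slow common-input component, while a fixed-latency exponential kernel captures the fast synaptic effect (the actual model is a GLM with a learned low-rank basis and structural constraints, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(8)
lags = np.arange(-50, 51)                 # correlogram lags (ms)

# Synthetic correlogram: slow common-input bump + fast excitatory peak.
slow_true = 100 * np.exp(-lags ** 2 / (2 * 30 ** 2))
fast_shape = np.where((lags >= 2) & (lags < 8),
                      np.exp(-(lags - 2) / 2.0), 0.0)   # 2 ms onset, 2 ms tau
cc = rng.poisson(slow_true + 40 * fast_shape)           # observed counts

# Design matrix: low-order cosines (slow) plus the fast synaptic kernel.
n_slow = 5
slow_basis = np.column_stack([np.cos(np.pi * k * (lags + 50) / 100)
                              for k in range(n_slow)])
X = np.column_stack([slow_basis, fast_shape])
beta, *_ = np.linalg.lstsq(X, cc, rcond=None)
fast_amp = beta[-1]                       # estimated synaptic contribution

# Crude detection: compare the fitted fast amplitude to residual fluctuations.
resid = cc - X @ beta
z = fast_amp / resid.std()
```

A large fitted fast amplitude relative to the residual noise flags a putative connection; the constraints described above (presynaptic cell type, distance-dependent latency, time constants) restrict which fast kernels are allowed.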

Shinya Ito, Santa Cruz Institute for Particle Physics, University of California, 
Santa Cruz
Hadi Hafizi, Department of Physics, Indiana University, Bloomington, Indiana
John M. Beggs, Department of Physics, Indiana University, Bloomington, Indiana
Ian H Stevenson, Department of Psychological Sciences, University of Connecticut, 
Storrs, Connecticut
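
As a rough illustration of the slow/fast decomposition, here is a simplified stand-in for the full GLM on a hypothetical simulated correlogram: the learned smooth basis is replaced by a low-order log-polynomial fit away from plausible synaptic latencies, and Dale's-law and latency constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-correlogram: 1 ms bins, +-50 ms lags
lags = np.arange(-50, 51)
slow_rate = 20.0 * np.exp(-lags**2 / (2 * 30.0**2)) + 5.0  # common-input hump
counts = rng.poisson(slow_rate)
counts[lags == 2] += rng.poisson(60)   # fast synaptic peak at +2 ms latency

# Fit the slow component only on lags far from plausible synaptic latencies,
# using a low-order polynomial in log-counts as a crude smooth basis
far = np.abs(lags) > 5
coef = np.polyfit(lags[far], np.log(counts[far] + 1.0), deg=4)
slow_fit = np.exp(np.polyval(coef, lags)) - 1.0

# Fast effect = standardized excess over the slow fit near zero lag
z = (counts - slow_fit) / np.sqrt(np.maximum(slow_fit, 1.0))
near = np.abs(lags) <= 5
latency = lags[near][np.argmax(z[near])]
peak_z = z[near].max()
```

A large standardized excess at a short positive lag is the signature of a putative excitatory connection.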


Moment-closure approaches to statistical mechanics and inference in models 
of neural dynamics

Michael Rule
University of Cambridge

Integrating large-scale neuronal recordings with models of emergent collective 
dynamics remains a central problem in statistical neuroscience. We illustrate a 
moment-closure approach to relate mechanistic descriptions of neural spiking to 
state-space models amenable to recursive Bayesian estimation. We focus on two 
classes of models common in computational neuroscience: autoregressive point-processes, 
which are commonly used to model spiking populations, and neural field models, which 
are popular analytically-tractable models of spatiotemporal dynamics. Inspired by 
recent advances in modelling of chemical reaction systems, the moment-closure 
approach yields tractable low-order approximations of the evolving population state 
distributions. These approximations capture how fluctuations and correlations 
interact with nonlinearities, and can be interpreted as latent state-space models 
of neural spiking activity. In the case of autoregressive point-process models, 
moment-closure provides a coarse-grained description of the system that captures 
nonlinear and stochastic effects on the slow dynamics. In the case of neural field 
models, moment-closure provides a model of both the mean-field and two-point 
correlation functions, and accounts for finite-size effects and correlations. 
Overall, moment closure methods provide a tractable route to low-dimensional 
approximations of population dynamics, and suggest a promising route forward for 
model coarse-graining that can be integrated with experimental datasets. 
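
A toy illustration of the idea (not the neural-field equations themselves): for the scalar SDE dx = -x^3 dt + sigma dW, closing the third and fourth moments under a Gaussian assumption yields ODEs for the mean mu and variance v, dmu/dt = -(mu^3 + 3 mu v) and dv/dt = -6 v (mu^2 + v) + sigma^2, which can be checked against direct simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, dt = 0.5, 0.005

# --- Gaussian moment closure: integrate mean and variance ODEs ---
mu, v = 1.0, 0.01
for _ in range(20_000):
    dmu = -(mu**3 + 3.0 * mu * v)           # E[-x^3] under a Gaussian
    dv = -6.0 * v * (mu**2 + v) + sigma**2  # 2 Cov(x, -x^3) + sigma^2
    mu, v = mu + dt * dmu, v + dt * dv
closure_var = v   # stationary value; analytically sigma / sqrt(6)

# --- Monte Carlo reference: many Euler paths of the SDE ---
x = np.ones(2000)
for _ in range(20_000):
    x += -x**3 * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
mc_var = x.var()
```

The closure slightly underestimates the true stationary variance here, which is the price of the low-order truncation.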


Electrophysiological mechanisms behind false alarms in a change detection task.
Bikash Chandra Sahoo
University of Rochester

In a change detection task, false alarms, i.e., responding when no response is 
required, are costly and indicative of sub-optimality in the decision process 
and faltering top-down attentional control. We developed a mechanistic framework 
of false alarms grounded in local field potentials recorded simultaneously from 
V4 and PFC 
in two adult male rhesus macaques performing a delayed non-match to sample task. 
We reasoned that false alarms are caused mainly by two factors: erroneous sensory 
evidence and stronger choice bias. From the perspective of sensory evidence, we 
hypothesized that information encoding would be poorer on false alarm trials 
than on correct rejection trials, even though the physical stimulus was the 
same. Also, the stronger the encoding, the slower the false alarm reaction times 
would be. We first extracted signal shared across electrodes using Gaussian Process 
Factor Analysis. Then, we used logistic regression to derive probabilities for which 
of two visual stimuli was shown, which was our estimate of the strength of stimulus 
encoding. We found that the strength of encoding positively correlated with false 
alarm reaction times only in V4, and only in the pre-stimulus interval, i.e., the 
stronger the encoding of the past stimulus information, the slower the reaction times. 
We also found that the strength of stimulus encoding was greater for correct 
rejection trials than for false alarms in both V4 and PFC. This pattern of results 
provides evidence for the critical role of sensory evidence in false alarms. Our 
current work aims to estimate the directional information flow between PFC and V4 
using lagged Canonical Correlation Analysis and quantify its contribution to false 
alarms. We hypothesize that higher information flow from PFC to V4 during the pre- 
and peri-stimulus period would result in higher choice bias and in turn more false 
alarms which are not necessarily due to corrupted evidence. 
Adam C. Snyder (1,2) 
1 Department of Brain and Cognitive Sciences, 
2 Department of Neurosciences, University of Rochester, Rochester, NY 
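
The decoding step can be sketched on hypothetical trial data, using scikit-learn's static factor analysis as a stand-in for GPFA: channel activity is driven by one shared latent whose mean shifts with the stimulus, and logistic regression on the extracted factor gives trial-wise stimulus probabilities.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels = 400, 20

stim = rng.integers(0, 2, n_trials)                    # which of two stimuli was shown
latent = rng.standard_normal(n_trials) + 2.0 * stim    # shared signal across electrodes
loadings = rng.standard_normal(n_channels)
X = np.outer(latent, loadings) + 0.5 * rng.standard_normal((n_trials, n_channels))

# Extract the shared component, then decode the stimulus from it;
# predicted probabilities serve as a trial-wise encoding-strength estimate
scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X)
acc = cross_val_score(LogisticRegression(), scores, stim, cv=5).mean()
```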


Efficient Spline Regression for Neural Spiking Data

Mehrnoosh Sarmashghi
Division of Systems Engineering, Boston University, Boston, MA

Point-process generalized linear models (GLMs) provide a powerful tool for
characterizing the coding properties and other associations in neural spiking data.
Spline basis functions are often used in point process GLMs when the relationship
between the spiking and driving signals is nonlinear, but common choices for the
structure of these spline bases often lead to loss of statistical power and numerical
instability. This is particularly true when building history-dependent point process
models using cardinal spline bases, where the neural refractory period often results in
numerical instability in the computation of confidence intervals. Here, we propose a
modified set of spline basis functions that assumes a flat derivative at the endpoints 
and show that this limits the uncertainty and numerical issues associated with cardinal
splines. We illustrate the application of this modified basis to the problem of
simultaneously estimating the place field and history-dependent properties of a set of
neurons from the CA1 region of rat hippocampus.

Uri T. Eden, Department of Mathematics and Statistics, Boston University, Boston, MA
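
A minimal sketch of a cardinal spline basis with a flat-derivative endpoint modification (the knot placement and the exact boundary construction here are illustrative, not the authors' code): ghost control points are reflected about the ends, which forces zero slope at both endpoints.

```python
import numpy as np

def cardinal_spline_basis(x, knots, tension=0.5):
    """Basis matrix B (len(x) x len(knots)) for a cardinal spline.
    Ghost control points are reflected (c[-1] -> c[1], c[n] -> c[n-2]),
    which forces a zero derivative at both endpoints."""
    x, k = np.asarray(x, float), np.asarray(knots, float)
    n, s = len(k), tension
    M = np.array([[-s, 2 - s, s - 2, s],           # cardinal blending matrix
                  [2 * s, s - 3, 3 - 2 * s, -s],   # (Catmull-Rom at s = 0.5)
                  [-s, 0, s, 0],
                  [0, 1, 0, 0]])
    B = np.zeros((len(x), n))
    for row, xi in enumerate(x):
        i = int(np.clip(np.searchsorted(k, xi, side='right') - 1, 0, n - 2))
        u = (xi - k[i]) / (k[i + 1] - k[i])
        w = np.array([u**3, u**2, u, 1.0]) @ M     # weights on c[i-1..i+2]
        for wj, c in zip(w, [i - 1, i, i + 1, i + 2]):
            c = -c if c < 0 else (2 * (n - 1) - c if c > n - 1 else c)
            B[row, c] += wj
    return B

# Design matrix for a 100 ms spike-history effect with 1 ms lag bins
B = cardinal_spline_basis(np.arange(1, 101), knots=[1, 25, 50, 75, 100])
```

The basis interpolates (it is one-hot at the knots) and forms a partition of unity, so a constant history effect stays representable.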


Linking noise correlations to spatiotemporal population dynamics and network structure 

Yanliang Shi
Cold Spring Harbor Laboratory

Neocortical activity fluctuates endogenously, with much variability shared among 
neurons. These co-fluctuations are generally characterized as correlations between 
pairs of neurons, termed noise correlations. Noise correlations depend on anatomical 
dimensions, such as cortical layer and lateral distance, and they are also dynamically 
influenced by behavioral states, in particular, during spatial attention. Specifically, 
recordings from laterally separated neurons in superficial layers find a robust 
reduction of noise correlations during attention [1]. On the other hand, recordings 
from neurons in different layers of the same column find that changes of noise 
correlations differ across layers and overall are small compared to lateral 
noise-correlation changes [2]. Evidently, these varying patterns of noise correlations 
echo the wide-scale population activity, but the dynamics of population-wide 
fluctuations and their relationship to the underlying circuitry remain unknown. 
Here we present a theory which relates noise correlations to spatiotemporal dynamics 
of population activity and the network structure. The theory integrates vast data on 
noise correlations with our recent discovery that population activity in single 
columns spontaneously transitions between synchronous phases of vigorous (On) and 
faint (Off) spiking [3]. We develop a network model of cortical columns, which 
replicates cortical On-Off dynamics. Each unit in the network represents one 
layer, superficial or deep, of a single column. Units are connected laterally to their 
neighbors within the same layer, which correlates On-Off dynamics across columns. 
Visual stimuli and attention are modeled as external inputs to local groups of units. 
We study the model by simulations and also derive analytical expressions for 
distance-dependent noise correlations. To test the theory, we analyze linear 
microelectrode array recordings of spiking activity from all layers of the primate 
area V4 during an attention task. First, at the scale of single columns, the 
theory accurately predicts the broad distribution of attention-related changes of 
noise-correlations in our laminar recordings, indicating that they largely arise 
from the On-Off dynamics. Second, the network model mechanistically explains differences 
in attention-related changes of noise-correlations at different lateral distances. 
Due to spatial connectivity, noise correlations decay exponentially with lateral 
distance, characterized by a decay constant called the correlation length. The 
correlation length depends on the strength of lateral connections, but it is also 
modulated by 
attentional inputs, which effectively regulate the relative influence of lateral 
inputs. Thus changes of lateral noise-correlations mainly arise from changes in the 
correlation length. The model predicts that at intermediate lateral distances (<1 mm), 
noise-correlation changes decrease or increase with distance when the correlation 
length increases or decreases, respectively. To test these predictions, we used 
distances between receptive-field centers to estimate lateral shifts in our laminar 
recordings. We found that during attention, correlation length decreases in 
superficial and increases in deep layers, indicating differential modulation of 
superficial and deep layers. Our work provides a unifying framework that links 
network mechanisms shaping noise correlations to dynamics of population activity 
and underlying cortical circuit structure. 

1 Cohen & Maunsell, Nat Neurosci, 2009 
2 Nandy et al, Neuron, 2017 
3 Engel et al, Science, 2016
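
The distance dependence can be illustrated numerically with hypothetical parameter values: if r(d) = r0 * exp(-d / lambda), an attention-related decrease in the correlation length makes the noise-correlation change negative and, at short range, growing in magnitude with distance.

```python
import numpy as np

d = np.linspace(0.05, 1.0, 20)        # lateral distance, mm (illustrative)
r0 = 0.2                              # zero-distance correlation (illustrative)
lam_base, lam_att = 0.5, 0.35         # correlation length shrinks with attention

r_base = r0 * np.exp(-d / lam_base)   # noise correlation without attention
r_att = r0 * np.exp(-d / lam_att)     # noise correlation with attention
delta = r_att - r_base                # attention-related change vs. distance
```

The change `delta` is negative everywhere and becomes more negative over the first few hundred micrometers, matching the predicted short-range pattern when the correlation length decreases.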


Applying the generalized linear model to cross-correlations for estimating 
interneuronal connections 

Shigeru Shinomoto

Advanced techniques for recording neuronal activity have begun to provide large 
numbers of parallel spike trains for analyzing an animal's state. Using the same 
data, interneuronal connections can be assessed by inspecting the degree of neuronal 
firing and its influence on the subsequent firing of other neurons. Neuroscientists 
have made such inferences with extreme caution, guarding against spurious connections 
between anatomically disconnected pairs, which would count as false positives. However, 
by being conservative, connections important for information processing may have been 
overlooked, producing a large number of false negatives. To reconstruct neuronal 
circuitry, we attempted to compromise between these conflicting demands. We 
constructed a method for estimating neuronal connections in terms of postsynaptic 
potentials (PSPs) that tolerates variation in spiking activity in vivo, and estimated 
the necessary duration of spike recordings to verify neuronal connections [1]. 
By applying our method to rat hippocampal data, we show that the numbers and types of 
connections estimated from our calculations match the results inferred from other 
physiological cues. Thus our method provides the means to build a circuit diagram 
from recorded spike trains, thereby providing a basis for elucidating the differences 
in information processing in different brain regions. 

Ryota Kobayashi

[1] Ryota Kobayashi, Shuhei Kurita, Katsunori Kitano, Kenji Mizuseki, 
Barry J. Richmond, and Shigeru Shinomoto, Reconstructing Neuronal Circuitry from 
Parallel Spike Trains, bioRxiv (2018) 
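
The raw ingredient of such estimates, a cross-correlogram of spike-time differences, can be computed in a few lines; on a hypothetical simulated pair, an excitatory connection shows up as a short-latency peak.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 600.0                                    # seconds of recording

# Hypothetical pair: presynaptic Poisson train, postsynaptic background,
# plus a spike ~2 ms after 30% of presynaptic spikes (an excitatory PSP)
pre = np.sort(rng.uniform(0, T, 3000))
driven = pre[rng.random(pre.size) < 0.3]
driven = driven + 0.002 + 0.0005 * rng.standard_normal(driven.size)
post = np.sort(np.concatenate([rng.uniform(0, T, 3000), driven]))

def cross_correlogram(pre, post, window=0.05, bin_width=0.001):
    """Histogram of post-minus-pre spike-time differences within +-window."""
    diffs = []
    for t in pre:
        lo, hi = np.searchsorted(post, [t - window, t + window])
        diffs.append(post[lo:hi] - t)
    edges = np.arange(-window, window + bin_width, bin_width)
    counts, _ = np.histogram(np.concatenate(diffs), edges)
    return counts, edges

counts, edges = cross_correlogram(pre, post)
peak_lag = edges[np.argmax(counts)]          # left edge of the peak bin
```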


Analysis of large-scale naturalistic human brain and behavior recordings 

Satpreet Singh
University of Washington 

Much of our understanding in human neuroscience has been informed by data collected 
in pre-designed and well-controlled experimental tasks, where timings of cues, stimuli, 
and behavioral responses are known precisely. Recent advances in data acquisition 
and machine learning have enabled us to study longer and increasingly naturalistic 
brain recordings, where we try to understand neural computations associated with 
spontaneous behaviors. Analyzing such unstructured, long-term, and multi-modal 
data with no a priori experimental design remains very challenging. Here we describe 
ongoing work to develop automated methods that uncover neural correlates of 
naturalistic human upper limb movements. We combine computer vision, time-series 
segmentation, and prediction algorithms to analyze large (~250 GB/subject) datasets 
of simultaneously recorded human electrocorticography (ECoG) and behavioral video 
data. In particular, we use computer vision to track human arm pose trajectories 
and then segment these trajectories in time using unsupervised state-space models. 
Our tracking is robust to variation in lighting, camera angle, and level of activity 
in the video. Importantly, we discover interpretable behavioral events and sequences 
using string-processing methods (regular expressions) on the extracted discrete 
state sequences. Our approach also extracts additional behavioral metadata associated 
with the movement events, such as reach angle, magnitude, and duration. Lastly, we 
uncover neural correlates associated with the movements; these naturalistic neural 
correlates further corroborate results from traditional, controlled experiments. 
In summary, we demonstrate a highly automated alternative workflow for analyzing 
simultaneously recorded human brain and behavioral video data. Our work also extends 
to developing neural decoders targeting brain-computer interface (BCI) applications. 
This pipeline can substantially expand the training data available for BCI 
decoders, potentially improving their robustness to variability when deployed 
in real-world scenarios.

Steve Peterson(1), Nancy X. R. Wang(1,2), Rajesh P. N. Rao(1), and Bingni W. Brunton(1) 
1. University of Washington
2. IBM Research
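
The string-processing step can be sketched as follows, with hypothetical state labels (the real pipeline derives them from unsupervised state-space segmentation of pose trajectories): discrete states become characters, and a regular expression pulls out movement bouts with their start frames and durations.

```python
import re

# Hypothetical per-frame discrete states: R = rest, M = movement
states = "RRRR" + "MMMMMM" + "RRR" + "M" + "RRRR" + "MMMMMMMMM" + "RRR"

# A reach event: a run of >= 4 movement frames flanked by rest;
# the lookahead keeps the trailing rest frame available to the next match
events = [(m.start(1), len(m.group(1)))
          for m in re.finditer(r"R(M{4,})(?=R)", states)]
# events -> [(start_frame, duration_frames), ...]
```

Isolated single-frame movement states (tracking jitter) are ignored by construction, which is one reason the regex formulation is convenient.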


Cross-Frequency Coupling Analysis using State Space Oscillator Models

Hugo Soulat

Cross-frequency coupling, and especially Phase Amplitude Coupling (PAC), is believed 
to play a major role in coordinating neural dynamics. Yet, standard analyses are 
subject to misleading characterization, quantification and interpretation. For 
instance, the selection and filtering of frequencies of interest, the presence of 
noise, nonlinearities, or abrupt changes in the signal can all contribute to 
spurious PAC. In this paper, we propose a parametric approach based on state 
space oscillator models to estimate PAC characteristic parameters and their 
distribution. We adopt a model formulation proposed by Matsuda and colleagues 
that makes it straightforward to estimate both the phase and amplitude 
of oscillatory components and further improve statistical efficiency by introducing a 
parametric representation for the cross-frequency coupling relationship. Finally, we 
use those statistical models to compute credible intervals for the observed coupling 
via resampling from the posterior distribution of the estimated latent oscillations 
and coupling relation. We use simulated datasets, invasive rodent recordings, and 
non-invasive human recordings to show that our method not only addresses a majority 
of standard 
analysis caveats but also provides a less biased, more robust and more efficient 
PAC estimate.

Emily P. Stephen, Amanda M. Beck, Patrick L. Purdon, MIT
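
The parametric coupling model can be illustrated with a least-squares version (the paper uses state-space estimates of phase and amplitude and a posterior-resampling step, both omitted here): regressing fast-oscillation amplitude on the cosine and sine of the slow phase recovers the modulation depth and preferred coupling phase.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical slow-phase / fast-amplitude samples with known coupling:
# amplitude = 1 + k * cos(phase - phi0), with k = 0.4 and phi0 = 0.8 rad
phase = rng.uniform(-np.pi, np.pi, 5000)
amp = 1.0 + 0.4 * np.cos(phase - 0.8) + 0.05 * rng.standard_normal(5000)

# Regress amplitude on [1, cos(phase), sin(phase)]: the coefficients give
# modulation depth and preferred phase directly
Z = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
b0, b1, b2 = np.linalg.lstsq(Z, amp, rcond=None)[0]
depth = np.hypot(b1, b2) / b0        # estimated k relative to baseline
pref_phase = np.arctan2(b2, b1)      # estimated phi0
```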


Characterizing the relationship between functional connectivity and neurocognitive 
deficits in benign epilepsy with centrotemporal spikes 

Elizabeth Spencer
Department of Mathematics and Statistics, Boston University 

Benign epilepsy with centrotemporal spikes (BECTS) is the most common childhood focal 
epilepsy. While all patients spontaneously enter remission by adolescence, 
BECTS is linked to the development of various sensorimotor deficits that in some cases 
follow patients into adulthood. Studies of other focal epilepsies provide evidence 
that the way the brain transiently coordinates the flow of information between 
cortical regions, i.e., functional connectivity, is disrupted. However, there is 
limited understanding of the specific differences in functional connectivity between 
BECTS patients at time of diagnosis, patients in remission, and healthy individuals, 
and whether these functional connectivity changes correlate with behavioral deficits. 
We hypothesize that the impact of BECTS during a critical period in cognitive 
development has long-lasting effects on the functional connections, and that the 
differences in functional connectivity provide the neurological basis for deficits 
present later in life. We propose a data analysis pipeline to address this hypothesis. 
We analyze high-density electroencephalography recordings sourced to the brain surface 
to map out the functional connections at different stages of BECTS, and compare these 
functional connections with age-matched controls to characterize how signaling between 
cortical areas is disrupted. Then, we determine which differences are predictive of 
task performance on language and motor tasks. By understanding the differences in 
brain network organization, we may understand why neurological impairments develop 
in certain individuals with BECTS and establish a direct relationship between 
functional connectivity and cognitive processes.

Dhinakaran Chinappen (2), Lauren Ostrowski (2), Daniel Song (2), Sally Stoyell (2), 
Catherine Chu (2), Mark Kramer (1)
1. Dept. of Mathematics and Statistics, Boston Univ., Boston, MA; 
2. Dept. of Neurol., Massachusetts Gen. Hosp., Boston, MA
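
One common way to quantify such functional connections, shown here as a generic illustration rather than the study's actual pipeline, is band-limited coherence between source signals, thresholded into a network.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(5)
fs = 250.0
t = np.arange(0, 40, 1 / fs)

# Hypothetical sources: channels 0 and 1 share a 10 Hz rhythm; channel 2 does not
shared = np.sin(2 * np.pi * 10 * t)
ch = np.array([shared + 0.5 * rng.standard_normal(t.size),
               shared + 0.5 * rng.standard_normal(t.size),
               0.5 * rng.standard_normal(t.size)])

def alpha_coherence(x, y):
    """Peak magnitude-squared coherence in the 8-12 Hz band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=512)
    band = (f >= 8) & (f <= 12)
    return cxy[band].max()

# Adjacency: connect channels whose alpha-band coherence exceeds a threshold
n = len(ch)
A = np.array([[alpha_coherence(ch[i], ch[j]) > 0.5 if i != j else False
               for j in range(n)] for i in range(n)])
```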


Leveraging Markerless Computer Vision to Assess Interactions of Deep Mesencephalic 
Neural Activity and Postural Dynamics During Primate Locomotion 

Oliver Stanley
Department of Biomedical Engineering, Johns Hopkins University

The deep mesencephalic nucleus (DpMe) is a relatively large midbrain area implicated in a 
wide variety of behaviors and functions, including sensory modulation, locomotion,
motivation, attention, and eye movements. Its diverse connections include spinal, 
cortical, basal ganglia, and limbic inputs and outputs to thalamic nuclei and 
reticulospinal nuclei and tracts. Specific areas within DpMe have long been known 
to be related to locomotion and gait. Most classically, stimulation of this 
‘mesencephalic locomotor region’ was observed to initiate complex motor output 
such as walking and running in decerebrate cats. However, to date, the relationship 
between DpMe neuron activity and specific aspects of locomotion has remained 
relatively obscure. Here, we present analysis of single-unit activity recorded from 
rhesus macaque deep mesencephalic nucleus during a treadmill walking task. By 
utilizing recent advances in image feature tracking rooted in accessible tools for 
training and deploying deep neural networks to record three-dimensional posture of 
a behaving primate, we demonstrate that neural modulation in our recorded DpMe 
units is best approximated using multiple joint dynamics across all four limbs. 
Additionally, we discuss approaches for addressing challenges related to analyzing 
single unit activity in the context of the multiple timescales, transmission delays, 
and high dimensionality involved in posture and locomotion.

Dr. Erez Gugig (Department of Biomedical Engineering, Johns Hopkins University)
Dr. Kathleen Cullen (Department of Biomedical Engineering, Johns Hopkins University)
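
The model-comparison logic, that firing is better explained by joint kinematics across all four limbs than by any single limb, can be sketched with an ordinary linear encoding model on hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000                                   # time bins

# Hypothetical joint-velocity traces for four limbs (one column each)
limbs = rng.standard_normal((n, 4))
# Hypothetical unit whose rate mixes contributions from all four limbs
rate = limbs @ np.array([1.0, 0.8, -0.6, 0.5]) + 0.8 * rng.standard_normal(n)

def r_squared(X, y):
    """Coefficient of determination for an ordinary least-squares fit."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_single = r_squared(limbs[:, :1], rate)   # one limb alone
r2_all = r_squared(limbs, rate)             # all four limbs jointly
```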


Evidence that posterior and anterior phase amplitude coupling distinguish 
unconsciousness from unarousability in propofol anesthesia

Emily Stephen

In the last several years, a controversy has arisen regarding whether the neural 
correlates of consciousness are in the front or the back of the brain. The controversy 
has recently expanded to include studies of anesthesia-induced unconsciousness: 
in particular, whether frontal EEG indicators can reliably predict unconsciousness. 
The disagreement refers to the finding that alpha band (8-12 Hz) oscillations in 
frontal cortex interact differently with the slow wave depending on the depth of 
propofol anesthesia: at light doses, alpha power is strongest at the trough of the 
slow wave (troughmax), and at higher doses it is strongest at the peak of the slow 
wave (peakmax). Patients can be aroused from an unconscious state during frontal 
troughmax dynamics, but not, apparently, during peakmax dynamics. By extending the 
phase amplitude coupling analysis (1) to non-frontal locations, (2) to other frequency 
bands beyond alpha, and (3) to the cortical surface using EEG source localization, 
we find that peakmax dynamics are a broadband phenomenon, suggesting that they may be 
reflective of cortical up- and down-states rather than coupled oscillations. In 
addition, posterior cortex exhibits broadband peakmax dynamics at lighter doses of 
propofol than frontal cortex, indicating that posterior cortical activity may be 
captured by cortical up- and down-states earlier than frontal cortex. This result 
supports the idea that loss of consciousness is not a singular phenomenon but rather 
involves several distinct shifts in brain state, relating to both unconsciousness 
and unarousability.


Neurovascular damage during microelectrode insertion decreases recording performance 
over time and may result in increased high frequency power

Kevin Steiger
University of Pittsburgh

Intracortical microelectrode arrays can record electrical signals in the brain 
for several months, and the viability of these signals is critical for basic 
neuroscience research as well as the chronic performance of brain-machine interfaces. 
Signal quality is often variable and degrades over time through both mechanical 
and biological failure modes related to inflammation and damage to the brain. 
Implantation of electrodes often results in rupturing of blood vessels, and implanting 
near penetrating vessels can exacerbate cerebral hemorrhage. Trauma to the brain 
from ischemic stroke or traumatic brain injury can result in a period of cortical 
hyperexcitability followed by decreased excitability. This transient shift from high 
to low excitability correlates with increased expression of excitatory (Vglut) or 
inhibitory markers (VGAT), respectively. Although neurovascular damage can result in 
ischemia, neuroinflammation, and cell death, the role of neurovascular damage in 
electrode viability and its effect on cortical activity has yet to be investigated. 
In this study, we used two-photon microscopy to identify and target sub-surface blood 
vessels during implantation of a 16-channel single shank Michigan electrode in the 
mouse visual cortex. We then recorded neural signals during visual stimulation for 
seven weeks. Our preliminary data show an 89 ± 11% decrease in single unit yield, 
30 ± 13% decrease in multi-unit firing rate, and a 39 ± 22% decrease in signal to 
noise firing rate ratio on average across the seven weeks in the arteriole damage 
group compared to control. Additionally, on average across the seven weeks, these 
data also suggest a 350 ± 252% increase in mean power, a 15 ± 28% increase in 
relative gamma (30-90 Hz) power and a 15 ± 34% decrease in relative sub-gamma 
(2-30 Hz) power over a 1 s period following visual stimulus. The dramatic decrease 
in single unit yield and the increase in power driven by higher-frequency oscillations 
in the arteriole-injury group suggest that minimizing early neurovascular damage 
during implantation may not only improve chronic recording performance but also 
preserve the underlying excitability of the circuit.

Takashi Kozai, University of Pittsburgh
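
Relative band power of the kind reported above can be computed from a Welch spectrum; a hypothetical trace with a strong gamma component illustrates the calculation.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)

# Hypothetical recording: weak 10 Hz rhythm plus strong 60 Hz gamma
x = (np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 60 * t)
     + 0.3 * rng.standard_normal(t.size))

f, psd = welch(x, fs=fs, nperseg=1024)

def band_power(lo, hi):
    """Summed spectral power in [lo, hi) Hz."""
    sel = (f >= lo) & (f < hi)
    return psd[sel].sum()

total = band_power(2, 90)
rel_gamma = band_power(30, 90) / total      # relative gamma (30-90 Hz)
rel_sub = band_power(2, 30) / total         # relative sub-gamma (2-30 Hz)
```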


Omitted Variable Bias in GLMs of Neural Spiking Activity
Ian Stevenson
Department of Psychological Sciences, University of Connecticut

Generalized linear models (GLMs) have a wide range of applications in systems 
neuroscience describing the encoding of stimulus and behavioral variables, as 
well as the dynamics of single neurons. However, in any given experiment, many 
variables that have an impact on neural activity are not observed or not modeled. 
Here we demonstrate, in both theory and practice, how these omitted variables can 
result in biased parameter estimates for the effects that are included. In three 
case studies, we estimate tuning functions for common experiments in motor cortex, 
hippocampus, and visual cortex. We find that including traditionally omitted 
variables changes estimates of the original parameters and that modulation originally 
attributed to one variable is reduced after new variables are included. In GLMs 
describing single-neuron dynamics, we then demonstrate how post-spike history 
effects can also be biased by omitted variables. Here we find that omitted variable 
bias can lead to mistaken conclusions about the stability of single-neuron firing. 
Omitted variable bias can appear in any model with confounders, where omitted 
variables modulate neural activity and the effects of the omitted variables covary 
with the included effects. Understanding how and to what extent omitted variable bias 
affects parameter estimates is likely to be important for interpreting the parameters 
and predictions of many neural encoding models.
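
A minimal simulation of the effect (hypothetical covariates; a hand-rolled Newton solver stands in for standard GLM software): when two covariates are correlated and one is omitted, the fitted coefficient of the other absorbs part of its effect.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20000

# Two correlated covariates, e.g. a measured signal and an unmeasured confounder
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + np.sqrt(1 - 0.6**2) * rng.standard_normal(n)
y = rng.poisson(np.exp(0.5 * x1 + 0.5 * x2))   # true coefficients: 0.5 and 0.5

def fit_poisson_glm(X, y, iters=30):
    """Newton-Raphson for a log-link Poisson GLM with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        b = b + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return b

full = fit_poisson_glm(np.column_stack([x1, x2]), y)
omitted = fit_poisson_glm(x1[:, None], y)
# Omitting x2 inflates the x1 coefficient toward 0.5 + 0.5 * corr(x1, x2) = 0.8
```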



Reach-related activity in basal ganglia- and cerebellum-recipient thalamic nuclei 

Robert S. Turner
Department of Neurobiology, University of Pittsburgh
Center for the Neural Basis of Cognition, University of Pittsburgh 

Neurons in the ventrolateral (VL) nucleus of the thalamus serve as critical links 
by which the basal ganglia (BG) and cerebellum (Cb) communicate with cortical motor 
areas. In primates, BG and Cb afferents terminate in distinct anterior and posterior 
parts of VL (“VLa” and “VLp,” respectively). Thus, the respective contributions of 
BG and Cb to motor cortical function should be revealed through comparisons of the 
task-related activities of neurons in these two nuclei. We studied the single-unit 
activity of electrophysiologically-identified VLa and VLp neurons (n=184 and 114 
respectively) in two non-human primates during the performance of a reaching task. 
Even though VLa and VLp receive markedly different subcortical afferent inputs, 
neuronal activity in the two nuclei was surprisingly similar in many respects. 
Changes in firing rate around the time of reach onset were common in both nuclei 
and increases were the most common change detected (63% and 65% of changes, 
respectively). To analyze these peri-movement changes in detail, especially neural 
activity that was time-locked to behavioral events, we developed a new method 
to estimate, trial by trial, the onset time of movement-related activity. Large 
proportions of single-units had activity time-locked to go-cue- and/or movement-onset 
(58% and 55% of changes, respectively). Most VLa neurons were time-locked to 
either cue-onset or movement-onset (88% of time-locked changes) whereas time-locking 
to both cue-onset and movement-onset was common in VLp (33% of time-locked changes). 
Although many response metrics were similar between the two nuclei, we did find 
some marked differences. Movement-related decreases in firing were more common 
(32% vs. 23% of neurons) and longer-lasting (491ms vs. 357ms) in VLa than in VLp. 
Increases in firing generally began earlier in VLp than in VLa whereas the latencies 
of decreases did not differ between nuclei. Movement-related changes in VLa were 
largely monophasic whereas those in VLp were often polyphasic (e.g., increase/decrease 
couplets). Time-resolved linear models found that neurons in VLp encoded the direction 
of movement earlier and more strongly (i.e., higher R2 values) than did neurons in 
VLa. VLa neurons, in contrast, encoded movement velocity during the reaction time 
period. In addition, peri-movement activities of VLa neuron were affected by 
session time.

Daisuke Kase (1,2), Andrew Zimnik (3) 
1. Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA; 
2. Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA; 
3. Department of Neuroscience, Columbia University Medical Center, New York, NY
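
A common trial-by-trial onset estimator, shown here as a generic change-point sketch rather than the authors' specific method, locates the minimum of the cumulative sum of mean-subtracted spike counts.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical single-trial spike counts: baseline rate, then an increase
# beginning at bin 50 (e.g. around movement onset)
counts = np.concatenate([rng.poisson(2.0, 50), rng.poisson(10.0, 50)])

# CUSUM change-point: before the rate change the cumulative sum of
# mean-subtracted counts drifts downward, after it drifts upward;
# the minimum marks the estimated onset
cusum = np.cumsum(counts - counts.mean())
onset = int(np.argmin(cusum)) + 1        # first bin after the minimum
```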


What is a systematic way to define information flow in neuroscience?

Praveen Venkatesh	
Carnegie Mellon University

We propose a formal, systematic methodology for examining information flow in the 
brain. Our method is based on constructing a graphical model of the underlying 
computational circuit, comprising nodes that represent neurons or groups of neurons, 
which are interconnected to reflect anatomy. Using this model, we provide an 
information-theoretic definition for information flow, based on conditional mutual 
information between the stimulus and the transmissions of neurons. Our definition of 
information flow organically emphasizes what the information is about: typically, 
this information is encapsulated in the stimulus or response of a specific 
neuroscientific task. We also emphasize the distinction between defining 
information flow and the act of estimating it. The information-theoretic 
framework we develop provides theoretical guarantees that were hitherto unattainable 
using statistical tools such as Granger Causality, Directed Information and Transfer 
Entropy, partly because they lacked a theoretical foundation grounded in neuroscience. 
Specifically, we are able to guarantee that if the "output" of the computational 
system shows stimulus-dependence, then there exists an "information path" leading 
from the input to the output, along which stimulus-dependent information flows. 
This path may be identified by performing statistical independence tests (or sometimes, 
conditional independence tests) at every edge. We are also able to obtain a 
fine-grained understanding of information geared towards understanding computation, 
by identifying which transmissions contain unique information and which are derived 
or redundant. Furthermore, our framework offers consistency-checks, such as 
statistical tests for detecting hidden nodes. It also allows the experimentalist to 
examine how information about independent components of the stimulus (e.g., color 
and shape of a visual stimulus in a visual processing task) flows individually. 
Finally, we believe that our structured approach suggests a workflow for informed 
experimental design: in particular, for tailoring stimuli to specific objectives, 
such as identifying whether or not a particular brain region is involved in a 
given task. 
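
The edge-wise testing step can be sketched on a toy feedforward chain with binary transmissions (hypothetical noise levels; chi-squared tests stand in for general independence tests): stimulus dependence is detected at every node on the information path, and not at a node off the path.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(10)
n = 2000

stim = rng.integers(0, 2, n)
flip = lambda x, p: x ^ (rng.random(n) < p)   # noisy relay of a binary signal
a = flip(stim, 0.1)                           # node on the stimulus path
b = flip(a, 0.1)                              # downstream node on the path
c = rng.integers(0, 2, n)                     # node off the path

def stim_dependence_p(node):
    """p-value of a chi-squared test of independence from the stimulus."""
    table = np.array([[np.sum((stim == i) & (node == j)) for j in (0, 1)]
                      for i in (0, 1)])
    return chi2_contingency(table)[1]

p_a, p_b, p_c = map(stim_dependence_p, (a, b, c))
```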


Functional connectivity estimation from nonsimultaneous recordings  

Giuseppe Vinci
Rice University

Neuronal functional connectivity is the statistical dependence structure of 
neurons’ activities. Functional connectivity is typically inferred from data 
recordings in the framework of graphical models, where a neuron is represented 
by a node, and an edge connects two nodes if the two neurons' activities share 
covariability conditionally on all other sources of variability in the network. 
Estimating functional connectivity helps us understand how neurons interact with 
one another while they process information under different stimuli and other 
experimental conditions. Functional connectivity estimation becomes a compelling 
statistical problem when based on calcium imaging data. Calcium imaging is a 
powerful technology that lets us record the activity of tens of thousands of neurons 
from the same brain. However, typically only smaller subsets of neurons are 
recorded at once to guarantee good temporal resolution of the recordings. In this 
framework, the joint activities of several pairs of neurons remain unobserved, so 
even the simplest metric of covariability, the sample covariance, is unavailable 
for those pairs. In the Gaussian graphical model setting, the unavailability of 
parts of the covariance matrix translates into the unidentifiability of the precision 
matrix, which specifies the graph, unless additional assumptions are made. We call 
this the "graph quilting" problem. We demonstrate that, under mild conditions, 
it is possible to correctly identify not only the functional connections of the 
observed pairs of neurons, but also a superset of the connections among the neurons 
that are never observed jointly. We further provide theoretical results about L1 
regularized Gaussian graph estimation in high-dimensions, and embed the problem in 
more general frameworks. We finally illustrate our methodology with an extensive 
simulation study and the analysis of calcium imaging data from mouse visual cortex.


Spatial generalization of repetition suppression in macaque visual cortex  

N. P. Williams
Ctr. for the Neural Basis of Cognition, Biol. Sci. Dept., Carnegie Mellon University

Neurons in macaque inferotemporal cortex (ITC) exhibit repetition suppression. 
When an image is presented twice at the same location, first as prime and then as 
probe, the neuronal response to the probe is reduced relative to the response to the 
prime. This effect is identity specific as indicated by the fact that suppression is 
greater when the probe is the same image as the prime than when it is different 
(McMahon and Olson, 2009; Sawamura and Vogels, 2013). It is also location specific 
as indicated by the fact that suppression is greater when the probe is presented at 
the same location as the prime than when it is presented at another location also 
within the neuron's receptive field (De Baene and Vogels, 2010). The aim of the present 
experiment was to determine how identity-specific and location-specific effects 
combine to determine neuronal response strength in ITC. We recorded neuronal 
responses to displays consisting of a prime, a delay and a probe, each 300 ms in 
duration. The stimuli were $5^\circ$ images of background-free objects presented 
in either the upper or lower contralateral visual field at horizontal and vertical 
eccentricity of $6^\circ$. We varied independently across trials the relation of 
the probe to the prime with respect to identity (same or different) and location 
(same or different). Upon analyzing data from 108 neurons (57 in monkey S and 51 
in monkey O), we found that the probe response was reduced relative to the prime 
response under all conditions including the condition in which the probe differed 
from the prime in both identity and location. However the degree of suppression 
varied across conditions. To analyze the pattern of variation, we carried out an 
ANOVA with identity (same or different) and location (same or different) as factors 
and with firing rate, mean normalized across the four conditions, as the dependent 
variable. This analysis revealed a significant main effect of identity (greater 
suppression when identity was the same, $p < 0.0001$, effect size = 7.0 Hz), a 
significant main effect of location (greater suppression when location was the same, 
$p = 0.0045$, effect size = 1.5 Hz) and a significant interaction effect (greater 
identity-based suppression when location was the same, $p = 0.0001$, effect size = 
2.4 Hz). The fact that suppression generalized across visual field quadrants suggests 
that suppression arises in part at the level of ITC because it is the first ventral 
stream processing stage at which the receptive fields of individual neurons typically 
span both visual field quadrants. However, the absence of complete spatial 
generalization suggests a role for low-level visual areas upstream from ITC. 
Repetition suppression is almost certainly the culmination at the level of ITC of 
adaptive processes occurring at multiple levels of the visual processing hierarchy.
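
The balanced two-way ANOVA described above (identity x location, with normalized 
firing rate as the dependent variable) can be sketched as follows; the function 
and data layout are illustrative and assume equal trial counts per cell:

```python
import numpy as np
from scipy import stats

def two_way_anova(cells):
    """Balanced 2x2 ANOVA. cells[i][j] holds the normalized firing rates
    for identity level i (same/different) and location level j."""
    y = np.asarray(cells, dtype=float)        # shape (2, 2, n_trials)
    n = y.shape[2]
    grand = y.mean()
    mi = y.mean(axis=(1, 2))                  # identity marginal means
    mj = y.mean(axis=(0, 2))                  # location marginal means
    cell = y.mean(axis=2)                     # per-cell means
    ss_i = 2 * n * ((mi - grand) ** 2).sum()
    ss_j = 2 * n * ((mj - grand) ** 2).sum()
    ss_ij = n * ((cell - mi[:, None] - mj[None, :] + grand) ** 2).sum()
    ss_e = ((y - cell[:, :, None]) ** 2).sum()
    ms_e = ss_e / (4 * (n - 1))               # error mean square
    f = np.array([ss_i, ss_j, ss_ij]) / ms_e  # each effect has 1 df
    p = stats.f.sf(f, 1, 4 * (n - 1))
    return f, p                               # identity, location, interaction
```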

C.R. Olson, Ctr. for the Neural Basis of Cognition, Biol. Sci. Dept., Carnegie 
Mellon University


Deep Dendrite: Bayesian inference of synaptic inputs from dendritic calcium imaging 

Jinyao Yan

In vivo calcium imaging can be used to probe the functional organization of synaptic 
activity across the dendritic arbor. Synaptic input onto spines can produce calcium 
transients that are largely isolated to the spine head. However, in otherwise 
unperturbed cells, the back-propagating action potential (bAP) also contributes 
strongly to the change in fluorescence measured at individual spines. To address 
this problem, we propose to perform Bayesian inference in a statistical model to 
separate these sources and infer the probability of both pre- and post-synaptic 
spiking activity. Our model is a simplified nonlinear approximation of the biophysical 
processes by which synaptic input and the bAP contribute to the fluorescence 
measurements at different sites. We use the framework of variational autoencoders 
(VAE), a recent advance in machine learning, training a deep neural network (DNN) 
as part of the VAE to perform approximate Bayesian posterior inference over 
spike trains from fluorescence traces. In simulations, our approach successfully 
recovers correlations between simulated spine and soma activity from fluorescence 
signals. We also apply our method to in vivo imaging data from the basal dendrites 
of L2/3 neurons in mouse frontal cortex and compare it to conventional methods. 
This method is a crucial step towards measuring the transformation of synaptic input 
to somatic output in vivo.
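
A minimal forward model of the kind described above might look like this (linear 
calcium kernels with a saturating readout; all parameter values and the specific 
nonlinearity are assumptions for illustration, not the authors' model):

```python
import numpy as np

def spine_fluorescence(s_syn, s_bap, a=1.0, b=0.6, tau=0.3, sat=5.0,
                       sigma=0.05, dt=0.01, rng=None):
    """Simulate spine fluorescence as a saturating function of calcium
    driven by synaptic input (s_syn) and bAP (s_bap) spike trains."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(0, 1.0, dt)
    kernel = np.exp(-t / tau)                          # calcium decay
    ca = (a * np.convolve(s_syn, kernel)[:len(s_syn)]
          + b * np.convolve(s_bap, kernel)[:len(s_bap)])
    f = sat * ca / (sat + ca)                          # saturating readout
    return f + sigma * rng.standard_normal(len(f))     # imaging noise
```

Inference then amounts to inverting such a generative model: the VAE's encoder 
network maps fluorescence traces back to approximate posteriors over the two 
spike trains.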


Human Behaviour Analysis: Using Clustering for Identifying Co-evolving EEG Data Streams

National Institute of Technology, Meghalaya, India

Human behaviour analysis requires real-time collection of EEG signals using portable 
devices, which results in the continual generation of large volumes of time-stamped 
data at very high speed. Such fast-moving data of massive size is referred to as a 
data stream. Because of its dynamic nature, the concepts underlying a data stream 
may keep evolving over time. Discovering associations between co-evolving EEG time 
series data streams can be very useful for understanding human behaviour and 
neurological disorders such as epilepsy, migraine, neuro-infections, brain tumours, 
and Alzheimer's disease. 
In this work, we first identify research challenges pertaining to human behaviour 
analysis, namely large data volume, dynamic nature, noisy and inconsistent data, 
and outlying behaviour. We then propose an adaptive hierarchical clustering method 
for finding associations between different co-evolving EEG signals that addresses 
some of these challenges. The proposed method is based on a sliding-window technique 
and is incremental in nature. It has been applied to artificially generated EEG 
signals, and preliminary experimental results show that it outperforms existing 
methods in capturing different types of data evolution and in cluster quality. 
This is work in progress; we are in the process of collecting a real-world dataset 
for further verification of the proposed method. 
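
One way to sketch the sliding-window, incremental flavor of such a method 
(hypothetical code, not the proposed adaptive algorithm) is to maintain a rolling 
buffer per channel and periodically re-cluster channels by correlation distance:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

class StreamClusterer:
    """Cluster co-evolving signal channels over a sliding window."""

    def __init__(self, n_channels, window=256):
        self.buf = np.zeros((n_channels, window))
        self.filled = 0

    def push(self, sample):
        """Append one time-stamped sample (one value per channel)."""
        self.buf = np.roll(self.buf, -1, axis=1)
        self.buf[:, -1] = sample
        self.filled = min(self.filled + 1, self.buf.shape[1])

    def clusters(self, n_clusters=2):
        """Hierarchically cluster channels by correlation distance."""
        w = self.buf[:, -self.filled:]
        d = 1 - np.corrcoef(w)                         # distance matrix
        z = linkage(d[np.triu_indices_from(d, k=1)], method='average')
        return fcluster(z, n_clusters, criterion='maxclust')
```

Re-clustering on each new window keeps the grouping adaptive as the stream 
evolves, at the cost of recomputing correlations over the current buffer.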

Dr. Vipin Pal, Jerry W. Sangma and Mekhla Sarkar 
National Institute of Technology Meghalaya, India


State-Space Global Coherence to Estimate the Spatio-Temporal Dynamics of the 
Coordinated Brain Activity 

Ali Yousefi
Harvard Medical School 

Characterizing coordinated brain dynamics present in high-density neural recordings 
is critical for understanding the neurophysiology of healthy and pathological brain 
states and to develop principled strategies for therapeutic interventions. In this 
research, we propose a new modeling framework called State-Space Global Coherence 
(SSGC), which allows us to estimate neural synchrony across distributed brain 
activity with fine temporal resolution. In this modeling framework, the cross-spectral 
matrix of neural activity at a specific frequency is defined as a function of a 
dynamical state variable representing a measure of Global Coherence (GC); we then 
combine filter-smoother and Expectation-Maximization (EM) algorithms to estimate 
GC and the model parameters. We demonstrate an SSGC analysis on a 64-channel EEG 
recording of a human subject under general anesthesia and compare the modeling 
result with empirical measures of GC. We show that SSGC not only attains a finer 
time resolution but also provides more accurate estimation of GC.
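
For reference, the standard empirical (window-based) measure of global coherence, 
the ratio of the largest eigenvalue of the cross-spectral matrix at a frequency to 
the sum of its eigenvalues, can be sketched as follows (a plain windowed-FFT 
estimator, not the SSGC filter-smoother):

```python
import numpy as np

def empirical_global_coherence(x, fs, freq, win=256):
    """Global coherence of multichannel data x (channels x samples) at
    `freq`: largest eigenvalue of the cross-spectral matrix divided by
    the sum of its eigenvalues (1 = perfectly coordinated activity)."""
    k = int(round(freq * win / fs))                # FFT bin nearest freq
    taper = np.hanning(win)
    segs = [x[:, i:i + win] for i in range(0, x.shape[1] - win + 1, win)]
    F = np.array([np.fft.rfft(s * taper, axis=1)[:, k] for s in segs])
    S = F.conj().T @ F / len(F)                    # cross-spectral matrix
    ev = np.linalg.eigvalsh(S)                     # real, ascending
    return ev[-1] / ev.sum()
```

Because this estimator averages over non-overlapping windows, its time resolution 
is fixed by the window length; the state-space formulation above is designed to 
relax exactly that constraint.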


A nonparametric sampling technique for inference of graph theoretical measures 
in EEG networks 

David Zhou

Electroencephalography (EEG) is used in many areas of biomedical signal processing 
to compute quantitative biomarkers of clinical states of interest, including disease, 
injury, and recovery. Graph theoretical measures have been proposed as potential 
biomarkers of EEG network activity, but their stability and performance are seldom 
critically evaluated in real-world and simulated settings, and at varying 
signal-to-noise (SNR) levels. Previous work has enabled the estimation of uncertainty 
in problems of network inference; however, few extensions have been made to 
propagate network uncertainty to measures of network structure. In this work, 
we assess the performance of three univariate graph theoretical measures (global 
efficiency, clustering coefficient, and betweenness) by estimating the variance of 
measures computed on large numbers of nonparametrically sampled surrogate networks. 
Network edges were inferred using a false-discovery-rate (FDR)-corrected threshold 
of p-values derived from hypothesis testing of the maximum cross-correlation, a 
commonly used coupling inference method in graph theoretical analysis. Next, 
surrogate networks were generated by random removal of detected edges according 
to the fixed FDR. We performed this procedure on simulated data and clinical data 
acquired from 16 control subjects and 12 patients with severe traumatic brain injury 
acutely and at follow-up. In simulated and clinical data, graph theoretical measures 
were found to have unstable characteristics in both low and high SNR settings. 
In simulated data, distributions of surrogate graph theoretical estimates were highly 
divergent from those of their ground-truth origins. Our findings identify key areas 
in which both uncertainty and error may be introduced by graph theoretical analysis 
into the computation of quantitative biomarkers, including high variance and low 
stability and fidelity.
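
The surrogate-network step can be sketched as follows, using networkx and global 
efficiency as the example measure (the fixed per-edge removal probability stands 
in for the FDR-based removal described above; sizes are illustrative):

```python
import numpy as np
import networkx as nx

def surrogate_efficiency(adj, removal_rate=0.05, n_surr=200, rng=None):
    """Mean and spread of global efficiency over surrogate networks
    obtained by randomly deleting detected edges from adjacency matrix
    `adj` with per-edge probability `removal_rate`."""
    rng = rng or np.random.default_rng(0)
    edges = np.array(nx.from_numpy_array(adj).edges())
    vals = []
    for _ in range(n_surr):
        keep = rng.random(len(edges)) >= removal_rate   # drop some edges
        g = nx.Graph()
        g.add_nodes_from(range(len(adj)))
        g.add_edges_from(map(tuple, edges[keep]))
        vals.append(nx.global_efficiency(g))
    return np.mean(vals), np.std(vals)
```

A wide spread of the measure across surrogates relative to its value on the 
detected network is the instability the abstract warns about.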


Using deep learning to characterize cognitive population activity in the pulvinar

Feng Zhu
Neuroscience Graduate, Emory University

Mounting evidence has demonstrated that during spatial attention animals sample 
the visual environment in theta-rhythmic cycles, leading to alternating periods of 
either enhanced or diminished perceptual sensitivity at the attended location. 
Recent studies have revealed that such rhythmic sampling is tied to theta-band 
oscillatory activity in the attention network. However, it is unclear how such 
oscillatory processes tie to the trial-by-trial spiking activity of neuronal 
populations. Here we test whether dynamical systems approaches could precisely 
link neural population spiking activity to large-scale oscillations. We focus on 
the pulvinar, a higher-order nucleus of the thalamus that is engaged during visual 
selective attention. In order to estimate pulvinar population dynamics, we use a 
recently developed deep learning method, Latent Factor Analysis via Dynamical Systems 
(LFADS), which attempts to extract low-dimensional dynamics from neural population 
spiking activity on a single-trial, moment-by-moment basis. We train a single LFADS 
model on the spiking data from 351 multi-units across 10 recording sessions in the 
pulvinar of a macaque monkey performing a selective visual attention task. We first 
find that consistent low-dimensional dynamics describe pulvinar activity across trials 
and across recording sessions. We show that the trial-to-trial variability in these 
dynamics explains some of the trial-to-trial variability in spiking activity. Next, 
we show that these population dynamics reveal precise signatures of oscillatory 
activity on a single-trial basis. We then apply a specialized principal component 
analysis, jPCA, to extract the rotational/oscillatory activity from the population 
dynamics. We find that the frequency of the oscillatory activity is roughly in theta 
band (3-8 Hz). Our study provides a potential link between the spiking activity and 
large-scale oscillations in the attention network.
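
The core of jPCA, fitting a rotational (skew-symmetric) linear dynamical system to 
the latent trajectories and reading frequencies off its eigenvalues, can be 
sketched as follows (using the skew part of the unconstrained least-squares fit as 
a simple stand-in for the constrained jPCA solution):

```python
import numpy as np

def rotation_frequency(X, dt):
    """Estimate the dominant rotational frequency (Hz) of trajectories X
    (time x dims) by fitting dX ~ X @ M with M skew-symmetric."""
    dX = np.diff(X, axis=0) / dt
    M, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
    M_skew = (M - M.T) / 2                 # skew part = pure rotation
    eigs = np.linalg.eigvals(M_skew)       # imaginary pairs +/- i*omega
    return np.abs(eigs.imag).max() / (2 * np.pi)
```

Applied to the LFADS factors, an estimate in the 3-8 Hz range would correspond to 
the theta-band rotations reported above.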

Ryan Ly, Princeton Neuroscience Institute, Princeton University
Sabine Kastner, Princeton Neuroscience Institute and Department of Psychology, 
Princeton University
Chethan Pandarinath, Department of Neurosurgery, Emory University, and 
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of 
Technology/Emory University


Predictive processing by Purkinje cells in the vestibular cerebellum during 
active versus passive self-motion

Omid Zobeiri
Dept. Biomedical Engineering, McGill University 

The ability to distinguish between self-generated (reafference) vs. 
externally-applied (exafference) sensory signals is fundamental for ensuring 
accurate motor control as well as perceptual stability. This is particularly 
evident in the context of the vestibular system, in which the same central vestibular 
neurons that receive direct afferent input also directly project to motor centers 
to control vestibulo-spinal reflexes. Notably, while vestibulo-spinal reflexes are 
essential for providing a robust postural response to unexpected vestibular stimuli, 
they are counter-productive when the goal is to make active head movements. Previous 
studies by our group have shown that vestibular-only (VO) neurons in the vestibular 
nuclei at the first central stage of processing preferentially code vestibular 
exafference in both monkeys and mice. However, the neural mechanism underlying the 
suppression of vestibular reafference is unknown. Accordingly, here, to investigate 
the neuronal basis of vestibular reafferent suppression, we recorded from Purkinje 
cells in the vestibular cerebellum (anterior vermis, lobules IV-V). We made 
single-unit extracellular recordings in rhesus monkeys during comparable active and 
passive head rotational movements and used a semi-automatic clustering algorithm to 
detect simple and complex spikes of Purkinje cells. Our data showed that the Purkinje 
cells’ simple spike response 1) did not linearly encode the head/body angular 
velocity and 2) had differing direction-dependent sensitivity, with one group of 
neurons showing bidirectionality, a second showing directional rectification, and 
a third showing unidirectionality. First, for each group, we fit neuronal responses 
for passive motion in each direction using linear dynamic models of head kinematics. 
Non-significant dynamic terms were identified using bootstrapping and then removed. 
Using this approach, we computed response sensitivities for head movement in each 
direction of motion and then used them to determine the preferred direction of the 
cell. Next, comparable models were used to fit neuronal responses during active 
head movements, to allow us to compare responses to passive and active movements. 
We found that simple spike responses were markedly attenuated, for movements in both 
the preferred ($\sim$70%, $p<0.0001$) and non-preferred ($\sim$45%, $p<0.001$) 
directions in the active condition. Second, we completed an additional analysis of 
the timing of Purkinje cell complex spike activity in the passive and active head 
motion conditions. We found that the probability of a complex spike firing increased 
immediately ($\sim$0.7, $p<0.01$) following the onset of head movement in the 
passive condition. In contrast, this effect was absent in the active condition, 
suggesting that complex spikes are preferentially elicited in response to 
externally-applied versus self-generated vestibular inputs. We suggest that the 
higher probability of complex spikes following the onset of a passive movement 
would promote the increased sensitivity of the simple spike response to the vestibular 
stimulation. Taken together, these results provide new insights into the computations 
performed by Purkinje cells in anterior vermis that underlie the cancellation of 
vestibular reafference.
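
The bootstrap-based model-reduction step can be sketched as follows (the column 
layout of the kinematic terms and the CI-based significance rule are assumptions 
for illustration):

```python
import numpy as np

def significant_terms(X, y, n_boot=500, alpha=0.05, rng=None):
    """Fit rate ~ bias + kinematic terms (columns of X, e.g. position,
    velocity, acceleration) by least squares, bootstrap the coefficients
    over trials, and keep terms whose bootstrap CI excludes zero."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])          # add bias column
    boots = np.empty((n_boot, Xb.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                # resample with replacement
        boots[b] = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)[0]
    lo, hi = np.percentile(boots, [100 * alpha / 2,
                                   100 * (1 - alpha / 2)], axis=0)
    keep = (lo > 0) | (hi < 0)                     # CI excludes zero
    return keep[1:]                                # flags for the X columns
```

Refitting the reduced model separately for passive and active movements then 
allows the direction-specific sensitivities to be compared across conditions, as 
described above.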

Dr. Kathleen Cullen; Dept. Biomedical Engineering, Johns Hopkins University