
Point process models for sequence detection in neural spike trains

Tuesday, November 3, 2020 - 11:00 to 12:00
Scott Linderman, assistant professor of Statistics and Computer Science, Stanford University.
Statistics Seminar
Zoom*

*To join this seminar via Zoom, attendees will need to request connection details from headsec [at] stat.ubc.ca.

Post-seminar Q&A: Graduate students are invited to stay after the seminar for a Q&A with the speaker (~12:00 to 12:30).

Abstract: Sparse sequences of neural spikes are posited to underlie aspects of working memory, motor production, and learning. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience. I will present our new work using Neyman-Scott processes, a class of doubly stochastic point processes, to model sequences as a set of latent, continuous-time, marked events that produce cascades of neural spikes. This sparse representation of sequences opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, as have been experimentally observed in neural circuits. Bayesian inference in this model requires integrating over the set of latent events, akin to inference in mixture of finite mixtures (MFM) models. I will show how recent work on MFMs can be adapted to develop a collapsed Gibbs sampling algorithm for Neyman-Scott processes. Finally, I will present an empirical assessment of the model and algorithm on spike-train recordings from the songbird higher vocal center and the rodent hippocampus.
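To make the generative picture concrete, here is a minimal simulation sketch of a Neyman-Scott-style model of spike sequences: latent events are drawn from a homogeneous Poisson process, and each event emits a cascade of spikes whose times are offset by neuron identity (producing a stereotyped sequence) plus Gaussian jitter. All rates, offsets, and jitter scales below are illustrative assumptions, not parameters from the talk; the speaker's actual model includes marks, time warping, and background spiking not shown here.

```python
import numpy as np

def simulate_neyman_scott(T=60.0, event_rate=0.1, n_neurons=5,
                          mean_spikes=20, seq_spread=0.5, jitter=0.05,
                          seed=0):
    """Toy Neyman-Scott simulation: latent events trigger spike cascades.

    Parameter values are illustrative, not taken from the presented model.
    """
    rng = np.random.default_rng(seed)

    # Latent events: homogeneous Poisson process on [0, T].
    n_events = rng.poisson(event_rate * T)
    event_times = rng.uniform(0.0, T, size=n_events)

    spikes = []  # list of (spike_time, neuron_id) pairs
    for t0 in event_times:
        # Each latent event emits a Poisson number of offspring spikes.
        n_spikes = rng.poisson(mean_spikes)
        neurons = rng.integers(0, n_neurons, size=n_spikes)
        # Spike time = event time + neuron-dependent offset (the
        # "sequence" structure) + Gaussian jitter.
        offsets = neurons * (seq_spread / n_neurons)
        times = t0 + offsets + rng.normal(0.0, jitter, size=n_spikes)
        spikes.extend(zip(times.tolist(), neurons.tolist()))

    spikes.sort()
    return event_times, spikes

events, spikes = simulate_neyman_scott()
print(f"{len(events)} latent events produced {len(spikes)} spikes")
```

Because the latent event times are themselves random, the spike intensity is doubly stochastic; the inferential task described in the abstract is to recover the latent events (number, times, and marks) from the observed spikes alone.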