Machine Learning for Neuronal Data Analysis

Friday, 15 July, 2022
Tags: Education

A practical introductory course on basic machine learning techniques used in systems neuroscience research.

lecturer: Balázs B Ujfalussy
https://koki.hun-ren.hu/researchgroups/biological-computation
ujfalussy.balazs@koki.hu

teaching assistant: Martin Blazsek
language: English
prerequisites: Interest in neuroscience, basic programming in Python, linear algebra and probability theory
location: KOKI seminar room (1083, Budapest, 43 Szigony utca).

dates: lectures: Tuesday, 13:15-14:45; practical sessions: Thursday, 16:15-17:45

credit: the lecture is part of the Neural Data Science specialization of the Info-Bionics Engineering MSc program at PPKE ITK.

textbook: selected chapters from C. Bishop: Pattern Recognition and Machine Learning (PRML) and from Mathematics for Machine Learning.

 

Machine learning and computer science have a lot to learn from the brain when it comes to efficiency, robustness, generalisation and adaptivity, yet the code and the algorithms running on the neural hardware are poorly understood. Using state-of-the-art electrophysiological and optical techniques, we are now able to monitor the activity of large numbers of neurons in behaving animals, providing an unprecedented opportunity to observe how interacting neural populations give rise to computation.

 

The aim of the course is to introduce students to recent approaches for analysing and interpreting neuronal population activity data. We will focus on generative models and take a Bayesian perspective: we will learn how to build probabilistic models of the data and how to perform inference and learning with these models. The course is a mixture of lectures, focusing on the theoretical background and discussing neuroscience experiments, and practical sessions, where students apply the learned techniques to real neuronal data. Interaction between students is highly encouraged.

 

Format: 14 lectures + 14 practical sessions centered around 6 different topics.

 

Each topic will consist of 1) an introductory lecture on the given technique; 2) the discussion of a classic research paper illustrating the application of the technique in practice; 3) the introduction of example python notebooks applying the technique to neuroscience datasets; and 4) presentations where students show their results from applying the learned technique to a novel dataset.

 

Lectures will start with a motivating neuroscience problem, often taken from a recent research paper; we will then continue with an introduction to the mathematical basis of the given analysis technique, and finally discuss its scope and limitations.

 

Tutorials: Students will have access to jupyter notebooks in which they can apply the learned techniques to analyse example datasets. The goal of these sessions is to gain hands-on experience in formulating and testing scientific hypotheses using computational models and data analysis. Students are required to use their own laptops.

 

Syllabus

 

Introductory sessions:

  1. Neural networks as dynamical systems (feed-forward and recurrent networks, rate vs. spiking nets, linear vs. nonlinear nets, inhibitory and excitatory neurons, non-normal networks and selective amplification, dynamical systems recap).
  2. Introduction to JupyterLab through generating synthetic neuronal datasets (see the sketch after this list).
  3. Recording techniques: neural networks in action. Electrophysiology (tetrodes and silicon probes) and optical imaging (Ca2+ and voltage imaging). Challenges and hopes, and the need for quantitative/computational models.
  4. Introduction to the experimental datasets:
    • Hippocampal Ca2+ imaging data from the Makara lab, recorded during navigation in virtual reality
    • Hippocampal tetrode recordings from the Buzsáki lab during navigation on a linear track - crcns.org
    • Neuropixels recordings from many brain regions during a visual decision-making task - Steinmetz dataset from neuromatch.
    • 2P imaging from the visual cortex of mice performing a visual change detection task - Allen dataset from neuromatch.
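As a taste of session 2, here is a minimal sketch of generating a synthetic population spike raster as a homogeneous Poisson process. All rates, durations and population sizes are illustrative assumptions, not values from the course notebooks:

```python
import numpy as np

# Illustrative parameters -- assumptions, not course values
n_neurons = 20        # size of the simulated population
duration = 10.0       # recording length in seconds
dt = 0.001            # time step in seconds

rng = np.random.default_rng(0)
rates = rng.uniform(1, 20, size=n_neurons)   # firing rates in Hz

# Homogeneous Poisson process: in each bin a neuron spikes with
# probability rate * dt (a good approximation when rate * dt << 1)
n_bins = int(duration / dt)
spikes = rng.random((n_neurons, n_bins)) < rates[:, None] * dt

print("empirical rates (Hz):", np.round(spikes.mean(axis=1) / dt, 1))
```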

 

Topics (a short, illustrative Python sketch for each topic follows the list):

  1. Encoding and supervised regression - Generalised Linear Models for explaining neural tuning and predicting spikes. Model comparison, regularisation, maximum likelihood (Stevenson et al., 2012; Pillow et al., 2008).
  2. Decoding and latent variable models - introduction from a generative perspective. Inference and decoding methods. Static Bayesian decoding: posterior distribution, cross-validation, bootstrapping (Pfeiffer and Foster, 2013).
  3. Latent variable models I: discrete variables. Introduction to unsupervised learning. Learning the parameters: maximum likelihood, EM algorithm, clustering and hidden Markov models (HMMs; Maboudi et al., 2018).
  4. Latent variable models II: continuous latents. Static, linear Gaussian models for unsupervised dimensionality reduction: PCA, FA, ICA - their assumptions and their validity in the neural context. Relaxing the assumptions of linear Gaussian models: Gaussian Process Factor Analysis, Poisson LDS, demixed PCA and others (Roweis and Ghahramani, 1999; Mante, Sussillo et al., 2013).
  5. Deep learning I: feed-forward neural networks. Variational inference: deep neural networks, variational autoencoders and their application to neuronal data. LFADS (Pandarinath et al., 2017).
  6. Deep learning II: Recurrent neural networks. Training, interpretation, and comparison with behavioural and neurophysiological data. Computation Through Neural Population Dynamics (Vyas et al., 2020).
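Topic 1, sketched: fitting a Poisson GLM to simulated spike counts with scikit-learn. The design matrix, filter and regularisation strength below are illustrative assumptions, not the models used in the referenced papers:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Simulated covariates (e.g. stimulus history) and spike counts -- illustrative only
n_bins, n_features = 5000, 8
X = rng.normal(size=(n_bins, n_features))          # design matrix
w_true = rng.normal(scale=0.3, size=n_features)    # ground-truth filter
counts = rng.poisson(np.exp(X @ w_true))           # Poisson counts with a log link

# Poisson GLM with L2 regularisation; alpha sets the regularisation strength
glm = PoissonRegressor(alpha=1e-3, max_iter=300)
glm.fit(X, counts)

print("true filter:     ", np.round(w_true, 2))
print("recovered filter:", np.round(glm.coef_, 2))
```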
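Topic 2, sketched: static Bayesian decoding of position from population spike counts, assuming known Gaussian place-field tuning curves and a flat prior. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative place cells with Gaussian tuning on a 1D track
n_cells, n_pos = 30, 100
positions = np.linspace(0.0, 1.0, n_pos)
centres = rng.uniform(0.0, 1.0, n_cells)
tuning = 1 + 15 * np.exp(-(positions[None, :] - centres[:, None])**2 / (2 * 0.05**2))

tau = 0.25                                   # decoding window (s)
true_idx = 37                                # the animal's true position bin
counts = rng.poisson(tuning[:, true_idx] * tau)

# Poisson log-likelihood at each candidate position, combined with a flat prior;
# the count-factorial term is constant across positions and can be dropped
log_post = counts @ np.log(tuning * tau) - (tuning * tau).sum(axis=0)
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

print("decoded:", positions[np.argmax(posterior)], " true:", positions[true_idx])
```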
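Topic 3, sketched: EM-based clustering of population activity vectors with a Gaussian mixture (scikit-learn's GaussianMixture runs expectation-maximisation under the hood). The two hidden "states" and their rates are made up for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Population vectors generated from two hidden states (illustrative rates)
state_means = np.array([[2.0, 8.0, 1.0],
                        [7.0, 1.0, 5.0]])
labels = rng.integers(0, 2, size=500)
activity = state_means[labels] + rng.normal(scale=1.0, size=(500, 3))

# Fit by maximum likelihood via the EM algorithm
gmm = GaussianMixture(n_components=2, random_state=0).fit(activity)
print("inferred state means:\n", np.round(gmm.means_, 1))
```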
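Topic 4, sketched: linear dimensionality reduction of simulated population activity driven by a low-dimensional latent trajectory (sizes and noise levels are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# A smooth 2D latent trajectory driving a 50-neuron population
T, n_latent, n_neurons = 1000, 2, 50
latents = np.cumsum(rng.normal(size=(T, n_latent)), axis=0)
loading = rng.normal(size=(n_latent, n_neurons))
activity = latents @ loading + rng.normal(scale=2.0, size=(T, n_neurons))

# PCA assumes isotropic noise; FA and GPFA relax exactly this assumption
pca = PCA(n_components=5).fit(activity)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
```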
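Topic 5, sketched: a minimal variational autoencoder with a Poisson likelihood on simulated spike-count vectors, in PyTorch. The architecture sizes, data and training schedule are assumptions; LFADS itself is a considerably richer sequential model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simulated spike-count vectors (illustrative only)
n_neurons, n_latent = 50, 3
data = torch.poisson(torch.rand(2000, n_neurons) * 5.0)

enc = nn.Sequential(nn.Linear(n_neurons, 64), nn.Tanh(), nn.Linear(64, 2 * n_latent))
dec = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(), nn.Linear(64, n_neurons))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(500):
    mu, logvar = enc(data).chunk(2, dim=1)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
    log_rate = dec(z)                                        # decoder outputs log rates
    # ELBO = Poisson log-likelihood of the counts minus the KL to the unit Gaussian prior
    recon = (data * log_rate - torch.exp(log_rate)).sum(dim=1).mean()
    kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=1).mean()
    loss = kl - recon
    opt.zero_grad(); loss.backward(); opt.step()

print("final -ELBO per trial:", float(loss))
```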
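Topic 6, sketched: training a small recurrent network on a toy evidence-integration task. The task, network size and hyperparameters are assumptions standing in for the behavioural tasks discussed in the papers:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: integrate a weak, noisy drift and report its sign at trial end
T, batch = 50, 64
drift = 0.02 * torch.sign(torch.randn(batch, 1, 1))
x = drift + 0.1 * torch.randn(batch, T, 1)
target = (x.sum(dim=1) > 0).float()          # correct choice per trial

rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

for step in range(200):
    h, _ = rnn(x)                            # hidden-state trajectory over the trial
    logits = readout(h[:, -1])               # decision read out from the final state
    loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", round(float(loss), 3))
```

After training, the hidden-state trajectories in `h` are the kind of low-dimensional population dynamics that the Vyas et al. (2020) framework proposes comparing with neural recordings.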