Speakers
Morning talks (10:50–12:30)
Welcome address: Simon Godsill (10:50–11:00)
Keynote: George Cantwell (11:00–11:30)
What do we learn from adding realism to epidemic models?
In 2020, for some reason, there was an epidemic of epidemic models. At this time the foundational models of the field were long established, but they were not obviously providing much help. No doubt in an attempt to be helpful, we created ever more baroque models, incorporating ever more realism. What have we learned from these models, and what can still be learned from them?
I will give a brief outline of epidemic modelling and network approaches therein. I will outline some general and qualitative findings, and then discuss how – in future work – we might render the models useful in practice.
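For readers new to the area, the foundational models the talk refers to are compartmental models such as SIR. A minimal sketch, with illustrative parameter values not taken from the talk:

```python
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, i0=1e-3, days=160, dt=0.1):
    """Forward-Euler integration of the classic SIR compartmental model.
    beta: transmission rate, gamma: recovery rate, i0: initial infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # S -> I flow
        new_recoveries = gamma * i * dt      # I -> R flow
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return np.array(history)

trajectory = simulate_sir()
print(f"peak infected fraction: {trajectory[:, 1].max():.3f}")
```

Network approaches replace the well-mixed population assumed above with an explicit contact graph; the added realism discussed in the talk builds on variations of this basic structure.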
Gabriel Arpino (11:30–11:50)
Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression
Large-scale data is not only high-dimensional, but also heterogeneous. Real-world observations, when combined to form large datasets, often incorporate signals from different subpopulations. In this talk we will consider a simple heterogeneous model for high-dimensional inference, namely the Mixed Sparse Linear Regression model. We provide rigorous evidence for the existence of a fundamental statistical-computational tradeoff in a symmetric version of this model. We also prove that a very simple algorithm succeeds in recovering the underlying signals whenever the model parameters are not highly symmetric. To the best of our knowledge, this is the first thorough study of the interplay between mixture symmetry, signal sparsity, and their joint impact on the computational hardness of mixed sparse linear regression. This is joint work with Ramji Venkataramanan. https://proceedings.mlr.press/v195/arpino23a.html
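As a rough illustration of the setting (dimensions and noise level here are invented for the example), a symmetric two-component mixed sparse linear regression dataset consists of noisy linear measurements of either a sparse signal beta or its negation, with the component label hidden:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma = 500, 2000, 10, 0.1   # samples, dimension, sparsity, noise (illustrative)

beta = np.zeros(p)                    # k-sparse signal
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.normal(size=k)

X = rng.normal(size=(n, p))
labels = rng.choice([-1.0, 1.0], size=n)        # hidden mixture membership
y = labels * (X @ beta) + sigma * rng.normal(size=n)
# In the symmetric case the two components are beta and -beta, so each
# sample is individually uninformative about the sign of the signal.
```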
Greg Flamich (11:50–12:10)
Relative Entropy Coding with Greedy Poisson Rejection Sampling
Relative entropy coding is a fundamental data compression problem concerned with encoding a sample from a target distribution using as few bits as possible on average. Algorithms that solve this problem find applications in neural data compression and differential privacy and can serve as a more efficient alternative to quantization-based methods. In this talk, I will give an overview of relative entropy coding, its applications, and its limitations. Then, I will present greedy Poisson rejection sampling (GPRS), the first algorithm to encode samples from a broad class of distributions with optimal time complexity. Finally, I will point out some interesting open questions and future directions for research.
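GPRS itself is beyond a short snippet, but the problem setup can be illustrated with the simplest relative entropy coding scheme it improves on: encoder and decoder share a random seed, the encoder runs ordinary rejection sampling on the shared proposal stream, and only the index of the accepted sample is transmitted (roughly log2 of the index in bits). The target/proposal pair and the bound below are illustrative:

```python
import math
import numpy as np

def encode(seed, log_ratio, log_M, max_steps=100_000):
    """Run rejection sampling on a seeded proposal stream from q = N(0, 1);
    transmit only the index of the first accepted proposal."""
    rng = np.random.default_rng(seed)
    for n in range(1, max_steps + 1):
        x = rng.normal()                          # proposal from q
        u = rng.uniform()
        if math.log(u) < log_ratio(x) - log_M:    # accept w.p. p(x) / (M q(x))
            return n
    raise RuntimeError("no acceptance within max_steps")

def decode(seed, n):
    """Regenerate the same proposal stream and return its n-th sample."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        x = rng.normal()
        rng.uniform()                             # consume the paired uniform
    return x

# Example: target p = N(0, 0.5^2), proposal q = N(0, 1), M = sup p/q = 2
s = 0.5
log_ratio = lambda x: math.log(1 / s) - 0.5 * (1 / s**2 - 1) * x**2
n = encode(seed=42, log_ratio=log_ratio, log_M=math.log(1 / s))
x = decode(seed=42, n=n)                          # ~log2(n) bits communicated
```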
Charles Micou (12:10–12:30)
Brain-machine interface usage triggers emergence of new mental representations of a familiar environment
It's an interesting time to be a Brain-Machine Interface (BMI) researcher: new neuroimaging techniques can resolve the activity of populations of neurons at single-cell resolution during behaviour. At the same time, machine learning techniques and contemporary computing hardware enable prediction of behavioural actions from neural activity. However, despite high accuracy in 'open-loop' decoding, where neural activity is related to actions offline, what happens when such a predictive mapping is used in closed loop to control behaviour remains a largely open question. It is particularly interesting for parts of the brain that can learn and adapt on rapid timescales, as this raises the possibility of the mapping itself changing during BMI usage. We present results from a closed-loop BMI experiment that images the CA1 region of the hippocampus in mice. CA1 is a particularly plastic brain region, known to encode the spatial surroundings of an animal and, perhaps, the more abstract concept of 'context'. We find that an alternative neural representation of the same environment emerges when mice control their navigation using the BMI. We explore the ramifications of this difference in neural representations for producing accurate closed-loop BMI controllers.
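As a toy illustration of the open-loop decoding step (the place-cell simulation and all numbers here are invented for the example; the actual experiment uses calcium imaging data), a linear decoder mapping population activity to track position might look like:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_t, n_neurons = 2000, 100

# Simulated 1-D track position and Gaussian place-cell responses
position = 50 + 40 * np.sin(np.linspace(0, 20 * np.pi, n_t))
centres = rng.uniform(10, 90, size=n_neurons)        # preferred positions
activity = np.exp(-(position[:, None] - centres[None, :]) ** 2 / 50.0)
activity += 0.1 * rng.normal(size=activity.shape)    # imaging noise

# Open-loop decoding: fit offline, evaluate on held-out timepoints
decoder = Ridge(alpha=1.0).fit(activity[:1500], position[:1500])
print("held-out R^2:", decoder.score(activity[1500:], position[1500:]))
```

The open question in the abstract is what happens when such a decoder is used online, so that its output drives the behaviour generating the neural activity it decodes.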
Afternoon talks (14:00–15:30)
Keynote: Amanda Prorok (14:00–14:30)
Heterogeneous learning for multi-robot systems
How are we to orchestrate large teams of agents? How do we distill global goals into local robot policies? Machine learning has revolutionized the way in which we address these questions by enabling us to automatically synthesize decentralized agent policies from global objectives. In this presentation, I first describe how we leverage data-driven approaches to learn interaction strategies that lead to coordinated and cooperative behaviours. I will introduce our work on Graph Neural Networks, and show how we use such architectures to learn multi-agent policies through differentiable communication channels. I then focus on recent results showing how heterogeneous learning paradigms contribute to resilient team behaviours.
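As a schematic of the idea (weights and sizes below are placeholders; the architectures in the talk are learned end-to-end), one round of message passing over the communication graph maps each robot's local state to an action:

```python
import numpy as np

def gnn_policy(states, adjacency, W_msg, W_upd, W_out):
    """One message-passing round: each robot encodes its local state,
    aggregates messages over communication links, and outputs an action."""
    messages = np.tanh(states @ W_msg)             # per-robot message
    aggregated = adjacency @ messages              # sum over neighbours
    hidden = np.tanh(states @ W_upd + aggregated)  # combine self + messages
    return hidden @ W_out                          # per-robot action

rng = np.random.default_rng(0)
n_robots, d_state, d_hidden, d_action = 5, 4, 16, 2
A = (rng.random((n_robots, n_robots)) < 0.4).astype(float)
np.fill_diagonal(A, 0.0)                           # no self-links
W_msg, W_upd, W_out = (rng.normal(scale=0.1, size=shape) for shape in
                       [(d_state, d_hidden), (d_state, d_hidden), (d_hidden, d_action)])
actions = gnn_policy(rng.normal(size=(n_robots, d_state)), A, W_msg, W_upd, W_out)
```

Because every step is differentiable, gradients flow through the communication channel itself, which is what allows decentralized policies to be trained from a global objective.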
Herbie Bradley (14:30–14:50)
Evolutionary Algorithms and Large Language Models
Large Language Models (LLMs) have rapidly progressed in capability over recent years, exhibiting increasing competence in NLP tasks. Recent work has underscored the potential of LLMs to power novel, highly capable evolutionary algorithms in both code and natural language domains. Motivated by these opportunities, we introduce OpenELM, an open-source Python library for designing evolutionary algorithms that use LLMs as an intelligent variation operator, as well as for assessing fitness and measures of diversity.
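The core pattern is a standard evolutionary loop in which the LLM plays the role of the variation operator. A minimal sketch (this is not the OpenELM API; `llm_mutate` and `fitness` are placeholders the user would supply, e.g. a prompted model call and a domain-specific score):

```python
import random

def evolve(seed_individuals, llm_mutate, fitness, generations=10, pop_size=20):
    """(mu + lambda)-style loop with an LLM as the variation operator."""
    population = list(seed_individuals)
    for _ in range(generations):
        parents = random.choices(population, k=pop_size)
        children = [llm_mutate(p) for p in parents]     # LLM proposes variants
        population = sorted(population + children,
                            key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)
```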
Daniel Larby (14:50–15:10)
Design and tuning of passivity-based controllers for robotic surgery
Robots are highly nonlinear systems, and passivity theory is a crucial tool for designing controllers for such systems and proving closed-loop stability. However, tuning such controllers can be challenging. We demonstrate a method of passive controller design based on constructing virtual mechanisms, which allows intuitive yet flexible design of controller structures, and present efforts towards tuning such controllers using algorithmic differentiation and nonlinear optimisation techniques.
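The simplest instance of a virtual-mechanism controller is a virtual spring-damper attached between the robot and a target configuration; with positive-definite gains the control law is passive, which is the property used to argue closed-loop stability. A sketch with invented gains (the virtual mechanisms in the talk are more general):

```python
import numpy as np

def virtual_spring_damper(q, q_dot, q_des, K, D):
    """Joint torques from a virtual spring (stiffness K) and damper (D)
    pulling the robot towards q_des; passive for K, D positive definite."""
    return -K @ (q - q_des) - D @ q_dot

K = np.diag([50.0, 50.0])     # virtual stiffness (illustrative)
D = np.diag([5.0, 5.0])       # virtual damping (illustrative)
tau = virtual_spring_damper(q=np.array([0.2, -0.1]),
                            q_dot=np.zeros(2),
                            q_des=np.zeros(2), K=K, D=D)
```

Tuning amounts to choosing gains like K and D, which is where algorithmic differentiation and nonlinear optimisation come in.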
Austin Tripp (15:10–15:30)
Retro-fallback: a search algorithm for uncertain AND/OR graphs
Retrosynthetic planning is the task of proposing a series of reactions to synthesize a desired molecule from simpler, purchasable molecules. While previous works have proposed algorithms to find optimal solutions for a range of metrics, the fact that we have imperfect knowledge of the space of possible reactions is generally overlooked. This means we have uncertainty about the graph structure. To account for this, I present a formulation of uncertainty in AND/OR graphs using stochastic processes, and an algorithm called retro-fallback which greedily maximizes the probability that any solution succeeds. This talk will focus primarily on the algorithmic details, but some experimental demonstrations are also available.
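To make the objective concrete: molecules are OR nodes (any one reaction producing them suffices) and reactions are AND nodes (all inputs must be obtainable). The probability that any plan succeeds can then be estimated by sampling which reactions turn out to be feasible. A toy sketch (graph, probabilities, and names invented; retro-fallback itself is a search algorithm over such graphs, not shown here):

```python
import random

def mol_solvable(mol, reactions_for, inputs_of, feasible, purchasable):
    """OR node: a molecule is obtainable if purchasable, or if some feasible
    reaction (an AND node) produces it from obtainable inputs."""
    if mol in purchasable:
        return True
    return any(
        r in feasible and all(
            mol_solvable(m, reactions_for, inputs_of, feasible, purchasable)
            for m in inputs_of[r])
        for r in reactions_for.get(mol, []))

# Toy graph: target T via reaction r1 from {A, B} or reaction r2 from {C}
reactions_for = {"T": ["r1", "r2"]}
inputs_of = {"r1": ["A", "B"], "r2": ["C"]}
purchasable = {"A", "B", "C"}

# Monte-Carlo estimate: each reaction independently works with prob. 0.7
trials = 10_000
hits = sum(
    mol_solvable("T", reactions_for, inputs_of,
                 {r for r in inputs_of if random.random() < 0.7}, purchasable)
    for _ in range(trials))
print("P(some plan succeeds) ≈", hits / trials)   # analytically 1 - 0.3**2 = 0.91
```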