Articles and projects

Poincaré VAE

Hierarchical Representations with Poincaré Variational Auto-Encoders

In the real world, many observations can be assumed to be hierarchically structured, such as data on living organisms, which are related through the evolutionary tree. It has also been shown, both theoretically and empirically, that data with hierarchical structure can be efficiently embedded in hyperbolic spaces. We therefore endow the VAE with a hyperbolic geometry and empirically show that it generalises better to unseen data than its Euclidean counterpart, and can qualitatively recover the hierarchical structure.
Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh.

[arxiv] [code] [slides]


Disentangling Disentanglement for Variational Auto-Encoders

We develop a generalised notion of disentanglement in Variational Auto-Encoders by casting it as a decomposition of the latent representation, characterised by i) enforcing an appropriate level of overlap in the latent encodings of the data, and ii) regularisation of the average encoding to a desired structure, represented through the prior.
Emile Mathieu, Tom Rainforth, Siddharth Narayanaswamy, Yee Whye Teh.

[arxiv] [code] [slides]

Sparse network

Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks

Empirical evidence suggests that the heavy-tailed degree distributions occurring in many real networks are well approximated by power laws with exponents η that may take values either less than or greater than two. We design and implement inference algorithms for a recently proposed class of models that can generate power laws with any such exponent η.
Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh.
UAI 2018.
[uai] [arxiv] [code] [slides]

Dirichlet Process Mixture

Sampling and inference for discrete random probability measures in probabilistic programs

Presented at NIPS 2017 as part of the Advances in Approximate Bayesian Inference workshop. We theoretically and empirically demonstrate how to effectively handle a class of Bayesian nonparametric models, especially in the context of probabilistic programs.
Benjamin Bloem-Reddy, Emile Mathieu, Adam Foster, Tom Rainforth, Yee Whye Teh, Hong Ge, María Lomelí, Zoubin Ghahramani
NIPS AABI Workshop 2017.

Alan Turing

Turing.jl: A probabilistic programming language in Julia

I contributed to Turing by enabling it to handle a certain class of Bayesian nonparametric models. I also implemented various sampling algorithms, such as interacting particle Markov chain Monte Carlo, particle marginal Metropolis-Hastings, stochastic gradient Hamiltonian Monte Carlo and stochastic gradient Langevin dynamics.
Hong Ge, Kai Xu, Adam Scibior, Zoubin Ghahramani, Emile Mathieu
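As an illustration of the simplest of these samplers, here is a toy sketch of stochastic gradient Langevin dynamics in Python (not Turing's Julia implementation; the update rule is the standard one, and a full gradient stands in for the minibatch estimate for brevity):

```python
import numpy as np

def sgld(grad_log_post, theta0, n_iters=5000, step=0.05, seed=0):
    """Langevin update: theta <- theta + (step/2) * grad log p(theta) + N(0, step).
    In true SGLD, grad_log_post would be an unbiased minibatch estimate;
    a full gradient is used here for brevity."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = np.sqrt(step) * rng.standard_normal(theta.shape)
        theta = theta + 0.5 * step * grad_log_post(theta) + noise
        samples.append(theta)
    return np.array(samples)
```

For small step sizes the iterates approximately sample from the posterior, with a discretisation bias of order `step`.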

Hamiltonian Monte Carlo

Riemannian Manifold Hamiltonian Monte Carlo

As part of the Computational Statistics course taught by S. Allassonnière, I worked on a generalisation of Hamiltonian Monte Carlo. We implemented the Riemannian Manifold Hamiltonian Monte Carlo algorithm in Python, along with HMC and other baseline algorithms, and empirically compared their performance on a Bayesian logistic regression problem.
Emile Mathieu, Kimia Nadjahi, 2017
[Presentation] [Sources]
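As a reference point, the Euclidean HMC baseline from the comparison can be sketched as follows (a minimal illustration, not the project's code; the potential U is an assumed stand-in for the logistic-regression negative log-posterior):

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integrator; the final momentum flip keeps the proposal reversible."""
    p = p - 0.5 * step * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, -p

def hmc(U, grad_U, q0, n_samples=1000, step=0.1, n_steps=20, seed=0):
    """Plain (Euclidean) HMC targeting exp(-U(q)) with an identity mass matrix."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)                 # resample momentum
        q_new, p_new = leapfrog(q, p, grad_U, step, n_steps)
        h_old = U(q) + 0.5 * p @ p                       # Hamiltonian before
        h_new = U(q_new) + 0.5 * p_new @ p_new           # and after the trajectory
        if rng.random() < np.exp(min(0.0, h_old - h_new)):  # Metropolis correction
            q = q_new
        samples.append(q)
    return np.array(samples)
```

RMHMC replaces the identity mass matrix with a position-dependent metric, which requires an implicit (generalised leapfrog) integrator instead.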

Guided Policy Search

Policy Search: A review

This project was carried out as part of the Reinforcement Learning course taught by A. Lazaric. Its goal was to write a review of Policy Search, a sub-field of reinforcement learning that directly learns a parametrised policy without estimating any value function.
Charles Reizine, Emile Mathieu, 2016
[Report] [Presentation]
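A minimal illustration of the idea, on a hypothetical toy problem (not from the review): the REINFORCE estimator updates softmax policy parameters directly from sampled rewards, keeping only a scalar running-average baseline rather than any learned value function.

```python
import numpy as np

def reinforce_bandit(arm_means, n_iters=3000, lr=0.1, seed=0):
    """REINFORCE with a softmax policy on a k-armed Gaussian bandit.
    Only a scalar running-average baseline is kept; no value function
    is estimated."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(arm_means))        # policy logits
    baseline = 0.0
    for _ in range(n_iters):
        probs = np.exp(theta - theta.max())
        probs /= probs.sum()
        a = rng.choice(len(arm_means), p=probs)          # sample an action
        r = arm_means[a] + 0.1 * rng.standard_normal()   # noisy reward
        baseline += 0.01 * (r - baseline)                # variance reduction
        grad_log_pi = -probs                             # d/dtheta log pi(a|theta)
        grad_log_pi[a] += 1.0
        theta += lr * (r - baseline) * grad_log_pi       # policy-gradient step
    return theta
```

After training, the logits concentrate on the best arm.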

Factorial hidden Markov models

Factorial Hidden Markov Models

As part of the Probabilistic Graphical Models course taught by Guillaume Obozinski and Francis Bach, I studied in depth the factorial hidden Markov models article written by Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models extend classical hidden Markov models with a distributed representation of the state space.
Emile Mathieu, 2016
[Report] [Poster]
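As a toy illustration of the distributed representation (a small assumed example, not from the article): with K independent chains of M states each, the joint chain has M^K states, and its transition matrix is the Kronecker product of the per-chain matrices.

```python
import numpy as np

# Two assumed independent binary chains; their joint transition matrix
# over the 2 * 2 = 4 combined states is the Kronecker product, so K chains
# of M states encode an M**K-state HMM with only K * M * M parameters.
A1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
A2 = np.array([[0.7, 0.3],
               [0.4, 0.6]])
A_joint = np.kron(A1, A2)   # shape (4, 4); each row still sums to 1
```

In the full model the chains are coupled only through the observations, which is what makes exact inference intractable and motivates the variational approximations studied in the article.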

Gaussian process bandits

Gaussian Process Bandits with Thompson Sampling

As part of the Graphs in ML course taught by Michal Valko, I proposed and implemented a Thompson Sampling algorithm for the Gaussian Process bandit setting. This setting is described in Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.
Emile Mathieu, 2016
[Report] [Presentation] [Sources]
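A minimal sketch of the idea, under simplifying assumptions (finite candidate grid, RBF kernel, known noise level; an illustrative reimplementation, not the project's code): each round, condition the GP on past observations, draw one posterior function sample over the grid, and query its argmax.

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_thompson(f, X, n_rounds=25, noise=0.05, seed=0):
    """Thompson sampling for a GP bandit over the finite grid X."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for _ in range(n_rounds):
        if not xs:
            mu, cov = np.zeros(len(X)), rbf(X, X)        # GP prior
        else:
            Xo, yo = np.array(xs), np.array(ys)
            Koo = rbf(Xo, Xo) + noise ** 2 * np.eye(len(xs))
            Kox = rbf(Xo, X)
            sol = np.linalg.solve(Koo, Kox)              # Koo^{-1} Kox
            mu = sol.T @ yo                              # posterior mean on X
            cov = rbf(X, X) - Kox.T @ sol                # posterior covariance
        sample = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(X)))
        i = int(np.argmax(sample))                       # Thompson action
        xs.append(float(X[i]))
        ys.append(f(X[i]) + noise * rng.standard_normal())
    return xs, ys
```

Because actions are drawn in proportion to their posterior probability of being optimal, the queries naturally shift from exploration to exploitation as the posterior concentrates.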

Bicycle sharing system

Research Internship: Urban mobility data analysis

During an internship at Ifsttar, I applied probabilistic models such as LDA, along with web visualisations, to transportation data in order to better understand commuters' behaviour.
Emile Mathieu, 2014

Mixture of experts

Learning from crowds

As part of the Introduction to Machine Learning course taught by Guillaume Obozinski, we studied and reimplemented two articles dealing with supervised learning from multiple annotators: Modeling annotator expertise: Learning when everybody knows a bit of something and Learning From Crowds.
Charles Reizine, Thomas Pesneau, Emile Mathieu, 2014
[Report] [Sources]

Contact me



Department of Statistics
University of Oxford
24-29 St Giles
Oxford OX1 3LB, UK