Sampling and inference for discrete random probability measures in probabilistic programs
Paper presented at NIPS 2017 as part of the Advances in Approximate Bayesian Inference workshop.
We demonstrate, both theoretically and empirically, how to effectively handle a class of Bayesian nonparametric models, especially in the context of probabilistic programs.
Benjamin Bloem-Reddy, Emile Mathieu, Adam Foster, Tom Rainforth, Yee Whye Teh, Hong Ge, María Lomelí,
Zoubin Ghahramani, 2017
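As an illustration of the kind of discrete random probability measure the paper deals with (this sketch is not taken from the paper itself), the Chinese restaurant process gives one of the simplest constructions, here sampled forward in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)

def crp_sample(n, alpha):
    """Draw table assignments for n customers from a CRP(alpha)."""
    counts = []        # number of customers seated at each table
    assignments = []
    for _ in range(n):
        # Sit at an occupied table proportionally to its size,
        # or open a new table with weight alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments

tables = crp_sample(500, alpha=2.0)
num_tables = len(set(tables))
```

The number of occupied tables grows logarithmically in the number of customers, which is the "rich get richer" clustering behaviour these nonparametric priors induce.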
Turing.jl: A probabilistic programming language in Julia
I contributed to Turing by enabling it to handle a certain class of Bayesian nonparametric models.
I also implemented various sampling algorithms such as interacting particle Markov chain Monte Carlo,
particle marginal Metropolis-Hastings,
stochastic gradient Hamiltonian Monte Carlo and
stochastic gradient Langevin dynamics.
Hong Ge, Kai Xu, Adam Scibior, Zoubin Ghahramani, Emile Mathieu
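Of the samplers listed above, stochastic gradient Langevin dynamics has the simplest update rule: a mini-batch gradient step on the log-posterior plus injected Gaussian noise. A minimal standalone sketch on a toy Gaussian model (illustrative only, unrelated to the Turing.jl implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 10).
data = rng.normal(2.0, 1.0, size=1000)
N = len(data)

def grad_log_prior(theta):
    return -theta / 10.0

def grad_log_lik(theta, batch):
    return np.sum(batch - theta)

# SGLD: half a step-size times the stochastic gradient of the
# log-posterior, plus N(0, eps) noise, at each iteration.
theta, eps, batch_size = 0.0, 1e-3, 100
samples = []
for t in range(2000):
    batch = rng.choice(data, size=batch_size, replace=False)
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

posterior_mean = np.mean(samples[500:])
```

With a fixed step size the chain is biased, but the post-burn-in samples concentrate near the posterior mean, which here is close to the data mean.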
Riemannian Manifold Hamiltonian Monte Carlo
As part of the Computational Statistics course taught by S. Allassonnière, I worked on a generalization of Hamiltonian Monte Carlo. We implemented the Riemannian Manifold Hamiltonian Monte Carlo algorithm in Python, along with HMC and other baseline algorithms, and empirically compared their performance on a Bayesian logistic regression problem.
Emile Mathieu, Kimia Nadjahi, 2017
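The HMC baseline underlying this project can be sketched in a few lines: simulate Hamiltonian dynamics with the leapfrog integrator, then accept or reject on the energy error. A minimal sketch on a standard normal target (not the project code, which also handled the Riemannian variant):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard-normal target: log p(q) = -q**2 / 2, so grad log p(q) = -q.
def grad_log_p(q):
    return -q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    """One HMC transition with Euclidean kinetic energy p**2 / 2."""
    p = rng.normal()
    q_new, p_new = q, p
    # Leapfrog integration: half momentum step, alternating full
    # steps, closing half momentum step.
    p_new += 0.5 * eps * grad_log_p(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_p(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(q_new)
    # Metropolis accept/reject on the change in total energy.
    h_old = 0.5 * q ** 2 + 0.5 * p ** 2
    h_new = 0.5 * q_new ** 2 + 0.5 * p_new ** 2
    if rng.random() < np.exp(h_old - h_new):
        return q_new
    return q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
```

RMHMC replaces the Euclidean kinetic energy with a position-dependent metric (e.g. the Fisher information), which changes the integrator but not this overall accept/reject structure.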
Policy Search: A review
This project was carried out as part of the Reinforcement Learning course taught by A. Lazaric. Its goal was to write a review of Policy Search, a sub-field of reinforcement learning that proceeds by directly learning a parametrized policy without estimating any value function.
Charles Reizine, Emile Mathieu, 2016
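The core idea reviewed above can be illustrated with REINFORCE, the simplest policy-gradient method: adjust the policy parameters along the score-function gradient of the expected reward, with no value-function estimate. A toy two-armed-bandit sketch (illustrative only, not from the review):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-armed bandit: arm 1 pays 1.0 on average, arm 0 pays 0.2.
true_means = np.array([0.2, 1.0])

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Policy search: directly adjust the policy parameters (arm logits)
# along the REINFORCE gradient (r - baseline) * grad log pi(a).
theta = np.zeros(2)
lr, baseline = 0.1, 0.0
for t in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_means[a], 0.1)
    # grad log pi(a) for a softmax policy: one_hot(a) - probs
    grad = -probs
    grad[a] += 1.0
    baseline += 0.05 * (r - baseline)   # running-mean baseline
    theta += lr * (r - baseline) * grad

best_arm_prob = softmax(theta)[1]
```

The running-mean baseline reduces the variance of the gradient estimate without biasing it, a standard trick in policy-gradient methods.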
Factorial Hidden Markov Models
I studied in depth the factorial hidden Markov models article written by Zoubin Ghahramani and Michael I. Jordan, as part of the Probabilistic Graphical Models course taught by Guillaume Obozinski and Francis Bach. Factorial hidden Markov models extend classical hidden Markov models with a distributed representation of their state space.
Emile Mathieu, 2016
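The distributed state representation is easy to make concrete: M independent chains with K states each are equivalent to one HMM on a product space of K**M states, whose joint transition matrix is (a priori) a Kronecker product. A small sketch of this equivalence, assuming binary chains:

```python
import numpy as np

# A factorial HMM with M independent chains, each with K states, is
# equivalent to a single HMM on the product space of K**M states.
M, K = 3, 2
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # per-chain transition matrix

# Since the chains evolve independently a priori, the joint transition
# matrix over the product state space is the M-fold Kronecker product.
A_joint = A
for _ in range(M - 1):
    A_joint = np.kron(A_joint, A)
```

Exact inference on this product chain costs O(K**(2M)) per step, which is exactly why the article develops structured variational approximations instead.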
Gaussian Process Bandits with Thompson Sampling
As part of the Graphs in ML course taught by Michal Valko, I proposed and implemented a Thompson Sampling algorithm for the Gaussian Process bandit setting. This setting is described in Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.
Emile Mathieu, 2016
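The Thompson sampling rule for this setting is short: at each round, draw one function from the GP posterior over a candidate grid and play the arm where that draw is maximal. A minimal sketch of the idea (my own toy reconstruction, not the project code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate arms on a grid; the unknown reward function is f_true.
X = np.linspace(0.0, 1.0, 50)
f_true = np.sin(3.0 * X)

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

noise = 0.1
chosen, obs_y = [], []
for t in range(30):
    K_prior = rbf(X, X)
    if chosen:
        # Standard GP regression posterior given noisy observations.
        xo = X[chosen]
        Ko = rbf(xo, xo) + noise ** 2 * np.eye(len(xo))
        Ks = rbf(X, xo)
        Ko_inv = np.linalg.inv(Ko)
        mu = Ks @ Ko_inv @ np.array(obs_y)
        cov = K_prior - Ks @ Ko_inv @ Ks.T
    else:
        mu, cov = np.zeros(len(X)), K_prior
    # Thompson sampling: draw one function from the posterior and
    # play the arm where that draw is maximal.
    f_draw = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(X)))
    i = int(np.argmax(f_draw))
    chosen.append(i)
    obs_y.append(f_true[i] + rng.normal(0.0, noise))

best_value_found = float(np.max(f_true[chosen]))
```

Because the posterior draw is random, the rule explores where uncertainty is high and exploits where the posterior mean is high, without any explicit exploration bonus.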
Research Internship: Urban mobility data analysis
During an internship at Ifsttar, I applied probabilistic models such as LDA, along with web visualizations, to transportation data in order to better understand commuters' behavior.
Emile Mathieu, 2014
Learning from crowds
As part of the Introduction to Machine Learning course taught by Guillaume Obozinski, we studied and reimplemented two articles on supervised learning with multiple annotators: Modeling annotator expertise:
Learning when everybody knows a bit of something and Learning From Crowds.
Charles Reizine, Thomas Pesneau, Emile Mathieu, 2014
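The multiple-annotators setting can be illustrated with a small Dawid-Skene-style EM loop, which alternates between inferring the latent true labels and re-estimating each annotator's reliability. A toy binary sketch on synthetic data (illustrative only, not our reimplementation of the two articles):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic binary task: 200 items, 5 annotators of varying accuracy.
n_items, n_annot = 200, 5
truth = rng.integers(0, 2, size=n_items)
acc = np.array([0.9, 0.8, 0.7, 0.6, 0.55])   # per-annotator accuracy
correct = rng.random((n_items, n_annot)) < acc
labels = np.where(correct, truth[:, None], 1 - truth[:, None])

# EM in the spirit of Dawid-Skene: alternate between estimating the
# latent true labels and each annotator's accuracy.
est_acc = np.full(n_annot, 0.7)
for _ in range(20):
    # E-step: posterior over each item's true label (uniform prior).
    ll1 = np.prod(np.where(labels == 1, est_acc, 1 - est_acc), axis=1)
    ll0 = np.prod(np.where(labels == 0, est_acc, 1 - est_acc), axis=1)
    post1 = ll1 / (ll1 + ll0)
    # M-step: update accuracies from the soft labels.
    est_acc = (post1 @ (labels == 1) + (1 - post1) @ (labels == 0)) / n_items

pred = (post1 > 0.5).astype(int)
em_accuracy = np.mean(pred == truth)
```

Weighting annotators by their inferred reliability recovers the true labels more accurately than a plain majority vote when annotator quality is heterogeneous, which is the phenomenon both articles exploit.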