Mathieu Rita
Inria-Microsoft Research Joint Lab, Ecole Normale Supérieure.
Centre Sciences des Données (CSD)
45 rue d'Ulm
75005, Paris, France
I am a Ph.D. student under the supervision of Emmanuel Dupoux (ENS/FAIR), Olivier Pietquin (Google Brain), and Florian Strub (DeepMind). I work between the Inria-Microsoft Research Joint Lab and the CoML team, which is based at ENS in Paris.
Prior to that, I received an engineering degree from Ecole Polytechnique and an MSc degree in Mathematics, Computer Vision and Machine Learning from Ecole Normale Supérieure Paris-Saclay.
My research explores the theoretical and experimental aspects of training language models with RL objectives, with a specific focus on constructing self-play multi-agent systems. In particular, I investigate how scaling populations and generations of agents can help address language learning challenges such as overfitting, exploration, and drift. As an application, I simulate language evolution and study the prerequisites for the emergence of language universals, such as compositionality.
News
| Date | News |
|---|---|
| Jan 22, 2023 | [📝 PAPER] Our paper Revisiting Populations in Multi-Agent Communication has been accepted at ICLR’23. |
| Sep 14, 2022 | [📝 PAPER] Our paper Emergent Communication: Generalization and Overfitting in Lewis Games has been accepted at NeurIPS’22. |
| Jul 3, 2022 | [🇯🇵 WORKSHOP] Our workshop “Machine Learning and the Evolution of Language” will take place at JCoLE’22 on September 5th. Find all information on our webpage. |
| Apr 29, 2022 | [💻 WORKSHOP] Our workshop “Emergent Communication: New Frontiers” takes place at ICLR’22 today. |
| Apr 22, 2022 | [📝 PAPER] Our paper On the Role of Population Heterogeneity in Emergent Communication is presented at ICLR’22 this week. |
| Feb 1, 2021 | [✒️ PhD] I am starting a PhD on Emergent Communication under the supervision of Emmanuel Dupoux, Olivier Pietquin & Florian Strub. |
| Jan 15, 2021 | [👨‍🎨 ART] Our generated video Dreamy Cops is exhibited in the online CVPR Computer Vision Art Gallery. |
| Nov 20, 2020 | [📝 PAPER] Our paper “LazImpa”: Lazy and Impatient neural agents learn to communicate efficiently is presented at CoNLL’20. |