Bio: I grew up in Morocco, where I did bachelor-level training in physics, mathematics and philosophy. I then moved to France and graduated from the Ecole Polytechnique in 2012.
After that I spent four years off the beaten academic path: I worked as a researcher in condensed matter physics, created a web media outlet in Morocco, and launched Wandida, an online science tutorials project, before coming back to research. My current work is mostly on the robustness of (distributed) learning systems. My contributions led to the first provably Byzantine-resilient algorithm for gradient descent and a series of follow-ups. Besides research, I co-organize a weekly philosophy reading group and have published a book on the safety and ethics of large-scale decision making, and AI safety in particular. I also have a keen interest in theoretical biology, on which I have collaborated with colleagues from the Johns Hopkins School of Medicine.
As a physicist at heart, my long-term career goal is to practice computing and data science as a natural science. Computing is a sort of quantitative epistemology, the science of how much can be done and how much can be known, yet it is unfortunately rarely seen as such (at least not by the physics curricula I came from).
For the year 2020, I will be applying for tenure-track professorships; please check out my one-page summary, my research statement and my teaching statement. (UPDATE: I fortunately received satisfactory offers, but you can still read these documents if you are a prospective PhD candidate or a potential collaborator 🙂.)
– Nov. 2019: my PhD thesis, “Robust Distributed Learning”, has just been accepted and nominated for the EPFL best thesis award by a jury composed of professors Francis Bach (ENS Ulm) & Martin Jaggi (EPFL) from the machine learning side, professors Rachid Guerraoui (EPFL) & Maurice Herlihy (Brown University) from the distributed systems side, and presided over by prof. Babak Falsafi. Thanks to all of them for the time they put into reviewing my work.
(2020 UPDATE: the thesis has now received the doctoral award for computer science and is short-listed for the cross-department award; thanks again to my jury members for their feedback.)
– Nov. 2019: our book Beneficial and Robust AI, written with game theorist Dr Lê Nguyên Hoang and published by EDP Sciences, is out! The French version is already in libraries, and the English version will follow soon. Here is a concise interview with the EPFL media service detailing what it is about.
Current Research: my focus is on the robustness of distributed learning systems and is twofold.
–Robustness of biological networks (neural networks, biomolecular networks, gene-regulatory networks…): in this direction, I am interested in bounds on error propagation that might help, e.g., explain the emergence of essential genes.
–Robustness of distributed machine learning systems: in this direction (which took the bigger chunk of my PhD), I am looking for algorithms that enable a group of agents to learn together even when not every participant can be trusted. This covers situations such as data poisoning, hacked machines, buggy software and asynchronous communication. My PhD work encompassed all these situations under the umbrella of Byzantine-resilient machine learning and introduced a series of provably safe algorithms (cf. publications below or this interview with the Future of Life Institute).
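To give a flavour of the idea, here is a toy sketch (assumed illustration only, not the actual algorithms from my papers, which use more refined aggregation rules): replacing plain gradient averaging with a coordinate-wise median already tolerates a minority of arbitrarily corrupted workers.

```python
import numpy as np

def coordinatewise_median(gradients):
    """Aggregate workers' gradients by taking the median of each coordinate.

    Unlike plain averaging, which a single malicious worker can drag
    arbitrarily far, the coordinate-wise median of n gradients withstands
    a minority of arbitrarily corrupted (Byzantine) inputs.
    """
    return np.median(np.stack(gradients), axis=0)

# Honest workers report gradients near the true value [1.0, -2.0];
# one Byzantine worker reports an arbitrarily large vector.
honest = [np.array([1.0, -2.0]), np.array([1.1, -1.9]), np.array([0.9, -2.1])]
byzantine = [np.array([1e9, -1e9])]

agg = coordinatewise_median(honest + byzantine)
# agg stays close to the honest gradients, whereas np.mean over the same
# inputs would be hijacked to values on the order of 1e8.
```

This is only the simplest member of the family; the challenge addressed in my work is proving convergence guarantees for such robust aggregation inside (possibly asynchronous) stochastic gradient descent.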
I am fortunate to have Rachid Guerraoui, a world leader in distributed computing, as my PhD advisor; while he does not work on AI, machine learning or computational biology himself, he gave me full freedom to initiate research directions of my own in these domains.
International Conference on Machine Learning (ICML), Stockholm, Sweden. 2018:
–Asynchronous Byzantine Machine Learning (the case of SGD). Long talk.
–The Hidden Vulnerability of Distributed Learning in Byzantium. Long talk.
IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, USA. 2017:
–When Neurons Fail. (also given at the 2016 Biological Distributed Algorithms workshop in Chicago)
IEEE Symposium on Reliable Distributed Systems (SRDS), Hong Kong, China. 2017:
–On the Robustness of a Neural Network.
International Conference on Silicon Photovoltaics (SiliconPV), Hamelin, Germany, 2013:
– Cyclic behaviour in a-Si:H Degradation – Understanding Passivation in High Efficiency Silicon Heterojunction Solar Cells. Plenary oral presentation.
Berkeley Center for Human Compatible Artificial Intelligence, Google Brain Seattle (federated learning), Ecocloud 2019 and the Applied Machine Learning Days 2019:
– Byzantine Resilient Machine Learning (invited).
AI governance forum Geneva 2019:
– AI safety beyond the killer robot cliché: from poisoning public debates to social media addiction (invited).
I was also invited to the 2019 Beneficial AI conference in Puerto Rico, where I presented our work on Byzantine-resilient machine learning; see for example this podcast recorded there or these two panels on controlling long-term AI and action items for the next generation of researchers in AI safety.
Public outreach:
The Practical AI podcast: on poisoning social media with false information, Byzantine machine learning and short-term AI safety in general.
Swiss national radio (in French): on Byzantine ML and interruptibility.
AI safety and the three types of adversarial attacks,
why the safe interruptibility question matters,
Byzantine fault tolerant machine learning.
Science4all (in French): on poisoning AI with unreliable data.
Note: in the research group I work for, author order is alphabetical. Exceptions to this rule are marked with *.
“Main publications” are those for which I played a key role (coming up with the problem, algorithms, proofs, manuscript, etc.); “other publications” are those in which I played a secondary role (as detailed).
Main Publications:
-G. Damaskinos, E.M. El Mhamdi, R. Guerraoui, A. Guirguis, S. Rouault. AggregaThor: Byzantine Machine Learning via Robust Gradient Aggregation. The Conference on Systems and Machine Learning (SysML) 2019. paper. code (by G.D., A.G. & S.R.).
-E.M. El Mhamdi, R. Guerraoui, S. Rouault. The Hidden Vulnerability of Distributed Learning in Byzantium. International Conference on Machine Learning (ICML) 2018. Long Talk. paper.
-G. Damaskinos, E.M. El Mhamdi, R. Guerraoui, R. Patra, M. Taziki. Asynchronous Byzantine Machine Learning (the case of SGD), International Conference on Machine Learning (ICML) 2018. Long Talk. paper.
-E.M. El Mhamdi*, A. Kucharavy*, R. Guerraoui, R. Li. Predicting Complex Genetic Phenotypes Using Error Propagation in Weighted Networks, under review for a biology journal. Presented at the European Molecular Biology Laboratory (EMBL) November 2018 conference “from functional genomics to systems biology” and at the 2017 Biological Algorithms Workshop. preprint.
-P. Blanchard, E.M. El Mhamdi, R. Guerraoui, J. Stainer. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, Neural Information Processing Systems (NeurIPS) 2017. paper. video. (also appeared as a brief announcement at the 2017 ACM Principles of Distributed Computing conference (PODC)).
-E.M. El Mhamdi, R. Guerraoui, S. Rouault. On the Robustness of a Neural Network, IEEE Symposium on Reliable Distributed Systems (SRDS) 2017. paper.
-E.M. El Mhamdi, R. Guerraoui. When Neurons Fail, IEEE International Parallel and Distributed Processing Symposium (IPDPS) 2017. paper.
-E.M. El Mhamdi*, J. Holovsky, B. Demaurex, C. Ballif, S. De Wolf. Is Light-Induced Degradation of a-Si:H/c-Si Interface Reversible? Applied Physics Letters. 2014. paper.
-E.M. El Mhamdi, R. Guerraoui. Fast and Secure Distributed Learning in High Dimension. preprint.
Other Publications:
E.M. El Mhamdi, R. Guerraoui, A. Maurer, V. Tempez. Exploring the Borderland of the Gathering Problem. Bulletin of the European Association for Theoretical Computer Science (EATCS) 2020. My colleague A.M. has several papers on the “gathering problem” (how agents, such as fish schools, agree on a position); the solutions to this problem from the distributed computing community are typically rule-based algorithms that each agent follows. I suggested that the gathering problem could easily be learnt by multi-agent reinforcement learning, and V. Tempez joined us for his master's thesis, resulting in this preprint, mainly based on his thesis, later merged with another work and compiled by A.M. here.
E.M. El Mhamdi, R. Guerraoui, A. Guirguis, L.N. Hoang, S. Rouault. Genuinely Distributed Byzantine Machine Learning. ACM Principles of Distributed Computing conference PODC 2020. preprint.
In this paper, we take our first step outside the parameter-server setting where the server is trusted, and provide a Byzantine resilient solution for distributed learning with untrusted servers.
S.R. did most of the heavy lifting (algorithms + proofs), and A.G. provided experimental assistance and an algorithmic contribution on the synchronous case. I provided the initial algorithmic idea, together with elements of the proofs and of the manuscript writing, as I was mentoring S.R. during the first years of his PhD. L.N.H. joined for the last iteration of the paper, where he provided critical corrections to the proof.
E.M. El Mhamdi, R. Guerraoui, H. Hendrikx, A. Maurer. Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning. Neural Information Processing Systems (NeurIPS). 2017. Spotlight talk. paper.
This paper generalises the safe-interruptibility framework of Orseau and Armstrong to multi-agent systems. H.H. did all of the heavy lifting here. (I was his master's semester project mentor and only provided the conceptual problem setting, together with pointers to classic RL proofs that were re-adapted from Singh et al., Littman et al., etc.)
R. Rößler*, L. Korte, C. Leendertz, N. Mingirulli, E.M. El Mhamdi, B. Rech, ZnO: Al/(p) a-Si: H Contact Formation and Its Influence on Charge Carrier Lifetime Measurements. 27th European Photovoltaic Solar Energy Conference (euPVsec). 2012. paper. This paper uses some of my undergraduate physics work.
E.M. El Mhamdi, R. Guerraoui, S. Volodin. Fatal Brain Damage. 2019. preprint.
In this paper, we provide a series of bounds on the fault tolerance properties of neural networks (this improves on and goes beyond my two IPDPS and SRDS papers listed above). The work is led by S.V., whom I advised as a master's research scholar. He worked on all the proofs, while I only provided the initial conjectures, the problem setting and the writing.
E.M. El Mhamdi, R. Guerraoui, L.N Hoang, A. Maurer. Removing Algorithmic Discrimination (With Minimal Individual Error). 2019. preprint.
This work started from an informal group discussion that L.N.H. and I mathematically formalised. I then realised that if you try to define what an “almost-non-discriminating” function is, you can somehow generalise the definition of “differential privacy”. We found that this had already been proven by Cynthia Dwork, Moritz Hardt et al. in 2011 and shelved the project. Later on, A.M. revived it by proving some interesting results on “de-biasing” group discrimination while preserving some individual accuracy.
H. Aslund, E.M. El Mhamdi, R. Guerraoui, A. Maurer. Virtuously Safe Reinforcement Learning. 2018. preprint.
This work is based on the master's thesis of H.A., which I supervised. He worked on all the technical proofs and most of the writing; I provided the initial problem setting, a few conjectures and the idea of relaxing the Greedy in the Limit with Infinite Exploration (GLIE) property to trade off safe interruptibility with perturbed perception.
In 2018, I designed and taught a 40-hour course on the fundamentals of machine learning, statistics and the epistemology of induction to the first cohort of PhD students and executive Master's students at the newly founded UM6P university in Morocco.
TA: Master-level courses (all at EPFL):
– Distributed Algorithms (2016,2017,2018) with Rachid Guerraoui.
– Optimisation for Machine Learning (2018) with Martin Jaggi.
– Distributed Information Systems (2017) with Karl Aberer.
plus several Bachelor-level courses, mainly on probability, statistics, physics and pure mathematics.
Master thesis / projects supervision (all at EPFL):
– Isabela Constantin, Spring 2019, now a resident at Microsoft AI Research.
– Sergei Volodin, Fall 2018, now at the University of California, Berkeley (CHAI).
– Henrik Aslund, Spring 2018, now a PhD candidate at Imperial College London.
– Hadrien Hendrikx, Fall 2017, now a PhD candidate at Ecole Normale Supérieure and INRIA.
– Philipe Yazdani, Spring 2017, now a data scientist at Swisscom.
– Sébastien Rouault, Fall 2016, now a PhD candidate at EPFL and my most frequent co-author!
– Vladislav Tempez, Spring 2016, now a PhD candidate at Grenoble INP.
Research Grants & Service:
Grants: I wrote the research plans (~15 pages) of the following grants:
– A Theoretical Approach to Robustness in Biological Distributed Algorithms (written with Alexandre Maurer). 400 k$. P.I.: R. Guerraoui.
– Machine Learning in Byzantium. 800 k$. P.I.: R. Guerraoui.
Service: I have been a reviewer / program committee member for:
– Neural Information Processing Systems (NeurIPS) 2018, 2019 (ranked among top reviewers in quality)
– International Conference on Learning Representations (ICLR) 2019, 2020
– International Conference on Machine Learning (ICML) 2019
– Association for the Advancement of Artificial Intelligence (AAAI) 2020
– Uncertainty in Artificial Intelligence (UAI) 2019
– International Symposium on Distributed Computing (DISC) 2017 (external reviewer)
– Safe Machine Learning Workshop (SafeML) at ICLR 2019
Before the PhD, I started Wandida, now an EPFL library of university-level scientific tutorials. Before that, I co-founded Mamfakinch, a Moroccan web media outlet that won the 2012 Breaking Borders award for free expression from Google and Global Voices Online. I also worked for about a year as a research engineer in condensed matter physics.
Together with Dominique Boullier (professor of Sociology at Sciences Po Paris), I wrote a paper on the conceptual and epistemological toolboxes that the social sciences might borrow from theoretical computer science and learning theory. It is due to appear in November 2019 in the Revue d’Anthropologie des Connaissances.
If you have survived up to this paragraph and understand French, then you can probably survive this 3-hour podcast with Science4all, one of the most prominent French science YouTubers. We discuss the important but highly overlooked question of why computer science should be regarded as quantitative epistemology, i.e. the science of how much can be done and how much can be known.