My DPhil concerns safety and incentives in multi-agent and machine learning systems, with a view towards building provably safe and beneficial AI, and in practice spans game theory, machine learning, and control and verification. I am supervised by Michael Wooldridge, Alessandro Abate, and Julian Gutierrez, and am also a DPhil Affiliate at the Future of Humanity Institute. Before coming to Oxford I worked as an intern on Imandra and was a research assistant for Jacques Fleuriot at the University of Edinburgh, where I completed my MSc in artificial intelligence under the supervision of Vaishak Belle. Prior to this I studied for my BSc in mathematics and philosophy at the University of Warwick with Walter Dean. For information about my previous work, see some of the links below, or alternatively my CV.


My research interests are broad but are largely driven by the aim of improving robustness, explainability, and safety in AI systems, often through methods that combine both statistical and symbolic AI. Currently my efforts are focused on developing techniques to help rigorously identify or induce particular properties of multi-agent systems under their game-theoretic equilibria, especially those systems that operate in uncertain (partially known, partially observable, stochastic, etc.) environments. Related topics that I have worked on before include statistical relational learning, formal verification of learnt models, and deep symbolic reinforcement learning. Much of my work is also concerned with the problem of how agents represent and reason about preferences in the face of uncertainty. Examples of this include projects on representing non-Markovian reward structures using automata, learning models of cognitive biases using inverse reinforcement learning, and my master's dissertation on computational frameworks for moral decision-making. Finally, I also take an interest in the governance, ethics, and societal impact of AI, though I do not research these topics full-time.


Outside of academia my main passion is music, but I also love film and art. I like to travel whenever I get the opportunity, especially to Scandinavia (where I previously lived), and in my spare time I enjoy reading, (vegan) cooking, print-making, and clubbing. Ethically and politically I consider myself an effective altruist (yes, it's a bit of a strange name), a humanist, and a Fabian (more generally, a democratic socialist).


Hello world.

I'm a DPhil student in computer science at the University of Oxford. Below you can find more about my work and academic interests, as well as some personal details, recent updates, and links to various other online profiles. Please feel free to email me if you'd like to get in touch.

20.08.21 - I recently co-authored two new papers on rational verification, the first of which (accepted to KR-21) generalises the paradigm to probabilistic systems, and the second of which (published in Applied Intelligence) provides an overview of the topic.


26.07.21 - I'll be joining the Center on Long-Term Risk as a summer research fellow for the next three months, where I'll be working primarily on developing a formal theory of threats and offers.


03.05.21 - I'm looking forward to AAMAS-21 this week, where I'll be presenting a couple of my most recent papers (links to which are given further below).


13.04.21 - This weekend I'll be (virtually) attending the Stanford Existential Risks Conference.


31.03.21 - I'm excited to be joining The Future Society as part of their 2021-22 Affiliate Cohort, assisting with project development and implementation for their work on AI governance.


12.03.21 - The website for the Causal Incentives Working Group, a set of researchers from DeepMind, Oxford, and Toronto working to develop a causal theory of incentives, is now online. Check out our recent papers and software via the link above, and feel free to get in touch if interested in this work.


18.12.20 - I'm happy to announce that not one but two papers I co-authored earlier this year, "Multi-Agent Reinforcement Learning with Temporal Logic Specifications" and "Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice", have been accepted as full papers at AAMAS-21.


15.11.20 - The work from my master's thesis in Edinburgh was recently published in Data Mining and Knowledge Discovery as Learning Tractable Probabilistic Models for Moral Responsibility and Blame.


22.09.20 - I helped to write the new Young Fabians pamphlet on AI, which launches today at the Labour Party Annual Conference 2020.


29.07.20 - I'm flattered to have received a Departmental Teaching Award for "excellent student feedback" as a class tutor for the AI and Computational Game Theory courses at Oxford this academic year.


26.06.20 - I'm helping with this year's UNIQ+ Digital Graduate Access Programme. If you're interested in graduate study at Oxford you can register interest for future graduate access programmes here.


15.02.20 - I'll be giving a short presentation on modelling agent incentives, and joining a panel discussion on AI alignment (alongside Rohin Shah and Michael Cohen) at next week's AI Ethics Meetup in London.


09.12.19 - My previous supervisor and co-author Vaishak Belle will be presenting our poster on Tractable Probabilistic Models for Moral Responsibility at the Knowledge Representation & Reasoning Meets Machine Learning (KR2ML) Workshop at NeurIPS on the 13th of December.


26.09.19 - I wrote a post on Medium about some of the work I've been doing using Imandra to analyse and verify properties of models learnt from data.


13.09.19 - This weekend I'll be in Switzerland attending the AI Governance Careers workshop at ETH Zurich.


01.07.19 - I'm very excited to be joining Aesthetic Integration in Edinburgh as a research and engineering intern for the coming months, where I'll be working on Imandra, a cloud-native automated reasoning engine.


17.05.19 - I have been invited to give a short presentation and join a panel discussion on 'Fair Machines: Student Perspectives on Data Justice and Ethics' as part of Data Justice Week, taking place in Edinburgh between the 20th and 24th of May.


26.04.19 - I will be attending the third AI Safety Camp in Ávila, Spain for the next week and a half, where I am working on a group project to improve preference elicitation by first learning models of biased and mistaken behaviour in agents.


09.04.19 - A short version of my first paper, Deep Tractable Probabilistic Models for Moral Responsibility, has been accepted for presentation at the Human-Like Computing Third Wave of AI Workshop (3AI-HLC 2019) taking place at Imperial College London on the 26th of April.


03.03.19 - I am delighted to have accepted an offer to study for a DPhil in computer science at the University of Oxford later this year, which will be generously funded by an EPSRC Doctoral Training Partnership studentship.