Why should I subscribe?
Put simply, you should subscribe if you’d like my longer-form writing delivered straight to your inbox, so you never miss anything. It’s free, and you should do it :)
It will also make me happy if you subscribe, and I will like you more.
Why should I want to subscribe?
Sometimes I write or say things that people find interesting, and you might be one of those people. I think I have some things to say that are worth hearing.
Through this newsletter and the comment sections, you can also find people who share your interests.
Who even are you?
You can go deep on this question, but the background most relevant to readers is as follows:
What motivates me
I literally just want one thing: for humanity to survive this most perilous century and flourish. This ultimately drives all my work, with emphasis on the surviving part, as I think that’s the more pressing half.
Ok, I do care about other things as well, but you have to make choices about what you work on. My personal priorities are roughly these: I’d really like to build a family one day, and I want to maintain a network of friends and family and see them do well.
Work
AI Policy
Currently I work on AI policy at ControlAI, a non-profit based in London that works to reduce the risks to humanity from artificial intelligence.
Among other work, a big focus of my efforts at ControlAI has been contributing to A Narrow Path, our policy plan for humanity to survive AI and flourish. I've mainly worked on Phase 1: Stability, designing an international AI governance framework that should not collapse over time, drawing on ideas from projects I led in 2023 (taisc.org & aitreaty.org).
With others, I also work on ControlAI’s Substack newsletter.
Forecasting
I started working as a Superforecaster with Good Judgment Inc in 2021. I'm also a professional forecaster for Sentinel, The Swift Centre, Samotsvety Forecasting, and the RAND Forecasting Initiative. In other words, when I do make predictions, they are generally accurate and well-calibrated, and people are willing to pay for that.
Nowadays, I just do this on weekends, mainly with Sentinel, a forecasting group where, along with my fellow Sentinels, I help publish a weekly Substack newsletter on news events relevant to catastrophic risk.
Projects
A 30% Chance of AI Catastrophe: Samotsvety’s Forecasts on AI Risks and the Impact of a Strong AI Treaty
This was a forecasting project I led with Samotsvety Forecasting, where we investigated how different policy scenarios would affect the risk of AI catastrophe.
Treaty on Artificial Intelligence Safety And Cooperation (TAISC)
Here I authored a first attempt at a concrete treaty blueprint focused on mitigating catastrophic AI risk, known as the Treaty on Artificial Intelligence Safety And Cooperation (TAISC). You can view the treaty blueprint here, or an overview here.
Urging an International AI Treaty: An Open Letter
This was an open letter project I led, calling for an international AI treaty to be developed, proposing policies such as global compute limits, a CERN for AI Safety, and an international compliance commission. We obtained signatures from leaders in the field such as Yoshua Bengio, Bart Selman, Max Tegmark, Gary Marcus, Yi Zeng, and Victoria Krakovna, and other prominent figures such as Claire ‘Grimes’ Boucher and the late Daniel Dennett.
This was featured in the Mail Online here: ‘Godfather’ of AI is among hundreds of experts calling for urgent action to prevent the ‘potentially catastrophic’ risks posed by technology
Other
I was born and grew up in southern England. I took my bachelor’s in Mathematics at the University of St Andrews in Scotland, and my master’s in Mathematics at the University of Bergen in Norway, where I still live.
If you read this far, you might as well subscribe :)
Socials
You can find me in some other places, including:
