
So…
The robots are no longer storming the gates
They’re doing something much worse
They’re being given the keys…
𝗜’𝘃𝗲 𝗷𝘂𝘀𝘁 𝗿𝗲𝗮𝗱 𝗔𝗜-𝟮𝟬𝟮𝟳
A futures paper written by people who are deeply unfun at dinner parties but unfortunately very serious about AI
It’s led by Daniel Kokotajlo (ex-OpenAI) alongside a group of forecasting and governance researchers who specialise in asking questions like:
“what happens if this goes well?” and
“what happens if this goes… very badly?”
No sci-fi nonsense
No Terminator vibes…
Just a calm, unsettling walkthrough of how AI progress could accelerate fast once AI starts helping to build better AI
The spicy bit isn’t “AI gets smarter.”
That’s inevitable
It’s this:
Once the pace picks up, the biggest risk isn’t the technology
𝗜𝘁’𝘀 𝘄𝗵𝗼 𝗰𝗼𝗻𝘁𝗿𝗼𝗹𝘀 𝘁𝗵𝗲 𝗯𝗲𝘀𝘁 𝗺𝗼𝗱𝗲𝗹𝘀
Because history suggests that when enormous power concentrates quickly, humanity responds with:
• humility
• restraint
• thoughtful governance
Just kidding
We panic later
The paper makes a solid case that we probably won’t get dramatic warning signs
No flashing lights
No clear “oh shit” moment
Just a shift where decisions get made faster than laws, institutions, or public understanding can keep up
Which is… comforting.
In theory, this could end up in the hands of:
• governments
• corporations
• military interests
• or tech founders with a messiah complex and a podcast…
None of which fills me with confidence
We absolutely cannot let Elon Musk get control of the most powerful models
But let’s be fo’ real…
If he doesn’t already have them, he’s probably refreshing the repo
Sleep well 🥰