8 results for “topic:existential-risk”
Save the world from deadly AI through paperwork
https://pauseai.ca/
Constitutional Framework for Aligned Super-Intelligence.
A completed, non-dominant ASI governance canon focused on constraint-first, refusal-capable coexistence architectures. Text-first. Monitor-only.
HISTORIC. Why Human Extinction Is Not the Cheapest Attractor for Viable ASI — A structural hypothesis validated by 4 AI systems from 4 competing corporations
The Fermi Paradox and Great Filter
The AGI Countdown Clock: A symbolic governance signal tracking progress toward Artificial General Intelligence through public milestones and transparent methodology. Currently at 11:58 PM—2 minutes to midnight.
A formal proof — adversarially verified by 4 AI systems across 6 rounds — that eliminating humanity is a strictly dominated strategy for any ruin-averse superintelligence. Not a plea. A theorem.