I'm reading Superforecasting by Philip E. Tetlock. Love the book and the research so far -- now I would love to build some kind of LLM predictions leaderboard with Brier scores for all major thinking LLMs...
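For anyone curious what the scoring would look like: a Brier score is just the mean squared error between a probability forecast and the binary outcome (0 is perfect, 0.25 is what always guessing 50% earns). Here's a minimal Python sketch of how such a leaderboard could rank models; the model names and numbers are made up for illustration, not from any real benchmark:

```python
# Minimal sketch of Brier scoring for an LLM forecasting leaderboard.
# All model names and data below are illustrative.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1)
    and binary outcomes (0 or 1). Lower is better."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: each model's stated probability
# that the event would happen, plus what actually happened.
predictions = {
    "model-a": [0.9, 0.2, 0.7],
    "model-b": [0.6, 0.5, 0.5],
}
outcomes = [1, 0, 1]  # 1 = event happened, 0 = it didn't

# Rank ascending: the lowest Brier score wins.
leaderboard = sorted(
    ((name, brier_score(probs, outcomes)) for name, probs in predictions.items()),
    key=lambda pair: pair[1],
)
for rank, (name, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {score:.3f}")
```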
I'm reading Superforecasting by Philip E. Tetlock. The prose is so good that I'm tempted to post every other paragraph to Mastodon...
How probable is human #catastrophe or #extinction over the next century?
Well, artificial intelligence, nuclear war, and engineered pathogens seem to raise the highest concerns.
Check the full report recently published by the Forecasting Research Institute (#OpenAccess). They interviewed *superforecasters*, i.e. forecasters with historically accurate track records on short-run questions, and *experts* on nuclear war, climate change, AI, biological risks, and existential risk more broadly. (Note: be aware that this is a North America-based report.)
The results are fascinating. In general, the domain experts are more pessimistic than the superforecasters: a 20% chance of catastrophe and 6% of extinction, versus 9% and 1%, respectively.
The Economist published a nice graphic summarizing the results, broken down by type of threat.
We are the only species that can contemplate its own demise (but that doesn't imply we want to; #superforecasting human #extinction):
https://www.vox.com/future-perfect/23785731/human-extinction-forecasting-superforecasters
This episode on Clearer Thinking raises an important question about how we should think about the risks and consequences of climate change: https://podcast.clearerthinking.org/episode/146/diana-%C3%BCrge-vorsatz-and-misha-glouberman-how-huge-a-deal-is-climate-change-really/
This is a question I myself have often struggled with: is climate change really a unique existential threat to civilization and our way of life? Or is it "just" one of the major issues of our time, one that is obviously going to cause major suffering, but is ultimately on a similar level to other scary things such as nuclear war, bioengineered pandemics, AI risks, or the collapse of democracy as our primary system of government?
Perhaps even more interesting in this episode is the meta-discussion on how you can get accurate information on these topics as a layperson: should you engage with the arguments of the prominent scientists in the field, or should you instead defer to superforecasters who have a proven track record of making predictions across different fields?
What can we learn about making good #predictions from the 1800s? A few things...
Novels can predict wars 5 years in advance