Linking pages
- Ex-Googler Geoffrey Hinton and other A.I. doomers say it could kill us all. They’re focused on the wrong threat. https://slate.com/technology/2023/05/artificial-intelligence-existential-threat-google-geoffrey-hinton.html 34 comments
- MR Tries The Safe Uncertainty Fallacy - by Scott Alexander https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy 0 comments
- Is AI Alignable, Even in Principle? https://treeofwoe.substack.com/p/is-ai-alignable-even-in-principle 0 comments
- Tales Of Takeover In CCF-World - by Scott Alexander https://astralcodexten.substack.com/p/tales-of-takeover-in-ccf-world 0 comments
Linked pages
- AI suggested 40,000 new possible chemical weapons in just six hours - The Verge https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx 1436 comments
- Shtetl-Optimized » Blog Archive » Why am I not terrified of AI? https://scottaaronson.blog/?p=7064 49 comments
- MIRI announces new "Death With Dignity" strategy - LessWrong https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy 10 comments
- Counterarguments to the basic AI risk case - by Katja Grace https://worldspiritsockpuppet.substack.com/p/counterarguments-to-the-basic-ai 3 comments
- Deceptively Aligned Mesa-Optimizers: It's Not Funny If I Have To Explain It https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers 1 comment
- Yudkowsky Contra Christiano On AI Takeoff Speeds https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai 0 comments
- Willpower, Human and Machine - by Scott Alexander https://astralcodexten.substack.com/p/willpower-human-and-machine 0 comments
- The hot mess theory of AI misalignment: More intelligent agents behave less coherently | Jascha’s blog https://sohl-dickstein.github.io/2023/03/09/coherence.html 0 comments
Source article: Why I Am Not (As Much Of) A Doomer (As Some People)