- Research Priorities for Robust and Beneficial Artificial Intelligence (2015) [pdf] http://futureoflife.org/data/documents/research_priorities.pdf 3 comments
- Building safe artificial intelligence: specification, robustness, and assurance | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1 0 comments
- Scalable agent alignment via reward modeling | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84 0 comments
- Opinion | Killer robots? Superintelligence? Let’s not get ahead of ourselves. - The Washington Post https://www.washingtonpost.com/news/in-theory/wp/2015/11/04/killer-robots-superintelligence-lets-not-get-ahead-of-ourselves/ 0 comments
- Why many important minds have subscribed to the existential risk of AI | by Rohan Kshirsagar | Towards Data Science https://towardsdatascience.com/why-many-important-minds-have-subscribed-to-the-existential-risk-of-ai-a55cfc109cf8 0 comments