Linking pages
- What is artificial intelligence? Your AI questions, answered. - Vox https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment 8 comments
- Why Not Slow AI Progress? - by Scott Alexander https://astralcodexten.substack.com/p/why-not-slow-ai-progress 2 comments
- Alignment of Language Agents. By Zachary Kenton, Tom Everitt, Laura… | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c 0 comments
- Ideas for high-impact careers beyond our priority paths - 80,000 Hours https://80000hours.org/2020/08/ideas-for-high-impact-careers-beyond-our-priority-paths 0 comments
- Scalable agent alignment via reward modeling | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84 0 comments
- Leveraging Learning in Robotics: RSS 2019 Highlights https://thegradient.pub/leveraging-learning-in-robotics-rss-2019-highlights/ 0 comments
- The case for building expertise to work on US AI policy, and how to do it - 80,000 Hours https://80000hours.org/articles/us-ai-policy/ 0 comments
- Designing agent incentives to avoid side effects | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107 0 comments
Linked pages
- Null References: The Billion Dollar Mistake - InfoQ https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/ 694 comments
- [1802.07740] Machine Theory of Mind https://arxiv.org/abs/1802.07740 35 comments
- [1606.06565] Concrete Problems in AI Safety https://arxiv.org/abs/1606.06565 3 comments
- http://futureoflife.org/data/documents/research_priorities.pdf 3 comments
- Specification gaming examples in AI - master list - Google Drive https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml 2 comments
- https://deepmind.com/blog/specifying-ai-safety-problems/ 1 comment
- [1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation https://arxiv.org/abs/1802.07228 1 comment
- Faulty Reward Functions in the Wild https://blog.openai.com/faulty-reward-functions/ 0 comments
- Max Tegmark: How to get empowered, not overpowered, by AI | TED Talk https://www.ted.com/talks/max_tegmark_how_to_get_empowered_not_overpowered_by_ai 0 comments
- [1711.09883] AI Safety Gridworlds https://arxiv.org/abs/1711.09883 0 comments
Building safe artificial intelligence: specification, robustness, and assurance | by DeepMind Safety Research | Medium