Discussions not found
Sorry, we couldn't find anything for https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c.
Linked pages
- [1803.03453] The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities https://arxiv.org/abs/1803.03453 37 comments
- Specification gaming: the flip side of AI ingenuity https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity 31 comments
- [1606.06565] Concrete Problems in AI Safety https://arxiv.org/abs/1606.06565 3 comments
- Tay (bot) - Wikipedia https://en.wikipedia.org/wiki/Tay_(bot) 1 comment
- Building safe artificial intelligence: specification, robustness, and assurance | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1 0 comments
- Scalable agent alignment via reward modeling | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84 0 comments
- https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf 0 comments
Search title: Alignment of Language Agents. By Zachary Kenton, Tom Everitt, Laura… | by DeepMind Safety Research | Medium