- AI-Written Critiques Help Humans Notice Flaws https://openai.com/blog/critiques/ 6 comments programming
Linking pages
- New-and-Improved Content Moderation Tooling https://openai.com/blog/new-and-improved-content-moderation-tooling/ 2 comments
- Why I Think More NLP Researchers Should Engage with AI Safety Concerns – NYU Alignment Research Group https://wp.nyu.edu/arg/why-ai-safety/ 0 comments
- Open AI's 'Critique-Writing' Model To Describe Flaws In Summaries Can Be Used To Scale The Supervision Of Machine Learning Systems To Tasks That Are Difficult For Humans To Evaluate Directly - MarkTechPost https://www.marktechpost.com/2022/06/22/open-ais-critique-writing-model-to-describe-flaws-in-summaries-can-be-used-to-scale-the-supervision-of-machine-learning-systems-to-tasks-that-are-difficult-for-humans-to-evaluate-directly/ 0 comments
Linked pages
- ChatGPT https://chat.openai.com/ 752 comments
- Summarizing Books with Human Feedback https://openai.com/blog/summarizing-books/ 19 comments
- Aligning language models to follow instructions https://openai.com/blog/instruction-following/ 11 comments
- [2204.05862] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback https://arxiv.org/abs/2204.05862 1 comment
- Learning to Summarize with Human Feedback https://openai.com/blog/learning-to-summarize-with-human-feedback/ 0 comments
- Scalable agent alignment via reward modeling | by DeepMind Safety Research | Medium https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84 0 comments