Linking pages
- Prompt Engineering | Lil'Log https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/ 59 comments
- Thinking about High-Quality Human Data | Lil'Log https://lilianweng.github.io/posts/2024-02-05-human-data-quality/ 4 comments
- Reward Hacking in Reinforcement Learning | Lil'Log https://lilianweng.github.io/posts/2024-11-28-reward-hacking/ 1 comment
- How to Build an Open-Domain Question Answering System? | Lil'Log https://lilianweng.github.io/posts/2020-10-29-odqa/ 0 comments
- Adversarial Attacks on LLMs | Lil'Log https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/ 0 comments
- Extrinsic Hallucinations in LLMs | Lil'Log https://lilianweng.github.io/posts/2024-07-07-hallucination/ 0 comments
Linked pages
- [1609.08144] Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation http://arxiv.org/abs/1609.08144 97 comments
- GitHub - salesforce/ctrl: Conditional Transformer Language Model for Controllable Generation https://github.com/salesforce/ctrl 40 comments
- [2009.01325] Learning to summarize from human feedback https://arxiv.org/abs/2009.01325 12 comments
- [1909.08593] Fine-Tuning Language Models from Human Preferences https://arxiv.org/abs/1909.08593 5 comments
- GitHub - uber-research/PPLM: Plug and Play Language Model implementation. Allows to steer topic and attributes of GPT-2 models. https://github.com/uber-research/PPLM 3 comments
- [2104.08691] The Power of Scale for Parameter-Efficient Prompt Tuning https://arxiv.org/abs/2104.08691 1 comment
- [1706.03741] Deep reinforcement learning from human preferences https://arxiv.org/abs/1706.03741 0 comments
- [1908.07125] Universal Adversarial Triggers for Attacking and Analyzing NLP https://arxiv.org/abs/1908.07125 0 comments
- [1805.04833] Hierarchical Neural Story Generation https://arxiv.org/abs/1805.04833 0 comments
- How to generate text: using different decoding methods for language generation with Transformers https://huggingface.co/blog/how-to-generate 0 comments
- AutoPrompt - AutoPrompt https://ucinlp.github.io/autoprompt/ 0 comments
- [2103.10385] GPT Understands, Too https://arxiv.org/abs/2103.10385 0 comments
- [1909.05858] CTRL: A Conditional Transformer Language Model for Controllable Generation https://arxiv.org/abs/1909.05858 0 comments
- [1612.00005] Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space https://arxiv.org/abs/1612.00005 0 comments
- [1909.01214] Better Rewards Yield Better Summaries: Learning to Summarise Without References https://arxiv.org/abs/1909.01214 0 comments
- How to Build an Open-Domain Question Answering System? | Lil'Log https://lilianweng.github.io/posts/2020-10-29-odqa/ 0 comments
- [1912.02164] Plug and Play Language Models: A Simple Approach to Controlled Text Generation https://arxiv.org/abs/1912.02164 0 comments
Related searches:
Search whole site: site:lilianweng.github.io
Search title: Controllable Neural Text Generation | Lil'Log
See how to search.