OpenAI’s GPT Store Is Triggering Copyright Complaints
Who's Harry Potter? Approximate Unlearning in LLMs
Direct Preference Optimization: Your Language Model is Secretly a Reward Model | DPO paper explained
Diffusion Model Alignment Using Direct Preference Optimization
How to fine-tune GPT-3.5 or Llama 2 with a single instruction - TechTalks
Instruction Tuning with GPT-4
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation
Better Language Models Without Massive Compute
CarperAI, an EleutherAI lab, announces plans for the first open-source "instruction-tuned" language model
UL2 20B: An Open Source Unified Language Learner