Our Approach to Frontier AI
Learning to Plan & Reason for Evaluation with Thinking-LLM-as-a-Judge
Agent-as-a-Judge: Evaluate Agents with Agents
CatTSunami: Accelerating Transition State Energy Calculations with...
Better & Faster Large Language Models via Multi-token Prediction
Iterative Reasoning Preference Optimization
OpenEQA: From word models to world models
OpenEQA combines challenging open-vocabulary questions with the ability to answer in natural language. This results in a straightforward benchmark that requires a strong understanding of the environment and poses a considerable challenge to current foundation models. We hope this work motivates additional research into helping AI understand and communicate about the world it sees.
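To make the benchmark design above concrete, here is a minimal Python sketch of what an open-vocabulary EQA item and a free-form answer score could look like. The field names and the token-overlap metric are illustrative stand-ins, not the official OpenEQA schema; OpenEQA itself grades free-form answers with an LLM judge, which this toy metric only approximates in shape.

```python
from dataclasses import dataclass

@dataclass
class EQAItem:
    """One open-vocabulary question about a visual environment.
    Field names are hypothetical, not the official OpenEQA schema."""
    episode_id: str        # identifier of the video/scan of the environment
    question: str          # e.g. "What color is the couch in the living room?"
    reference_answer: str  # free-form natural-language ground truth

def score_answer(predicted: str, reference: str) -> float:
    """Toy token-overlap score between a model's free-form answer and the
    reference. Stands in for the LLM-judge grading used by the benchmark,
    showing only the input/output shape of such a metric."""
    pred_tokens = set(predicted.lower().split())
    ref_tokens = set(reference.lower().split())
    if not ref_tokens:
        return 0.0
    return len(pred_tokens & ref_tokens) / len(ref_tokens)

item = EQAItem("episode_042", "What color is the couch?", "dark gray")
print(score_answer("the couch is gray", item.reference_answer))  # 0.5
```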
Revisiting Feature Prediction for Learning Visual Representations from Video
Self-Rewarding Language Models
Audiobox: Unified Audio Generation with Natural Language Prompts
Pearl: A Production-ready Reinforcement Learning Agent
WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models
Large Language Models for Compiler Optimization
Llama 2: Open Foundation and Fine-Tuned Chat Models
Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning
Simple and Controllable Music Generation
LIMA: Less Is More for Alignment
Augmented Language Models: a Survey
DINOv2: Learning Robust Visual Features without Supervision
LLaMA: Open and Efficient Foundation Language Models
Toolformer: Language Models Can Teach Themselves to Use Tools