Greenblatt, R. et al. (2024). Alignment faking in large language models. Anthropic, assets.anthropic.com, Dec 18, 2024. #Alignment #Paper #Training #Anthropic