Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning
Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish
Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
LLaMA: Open and Efficient Foundation Language Models
Google Research, 2022 & beyond: Health
Pre-training generalist agents using offline reinforcement learning
ChatGPT for Robotics
Unique Identification of 50,000+ Virtual Reality Users from Head & Hand Motion Data
Embedding Recycling for Language Models
New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments
Auditing large language models: a three-layered approach
The Capacity for Moral Self-Correction in Large Language Models
Toolformer: Language Models Can Teach Themselves to Use Tools
On the importance of AI research beyond disciplines
GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models
Theory of Mind May Have Spontaneously Emerged in Large Language Models
FOLIO: Natural Language Reasoning with First-Order Logic
BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining
Progress measures for grokking via mechanistic interpretability
Dissociating language and thought in large language models: a cognitive perspective
Ask Me Anything: A simple strategy for prompting language models
Bassett, G., Blake, J., Mina, M., Carberry, A., Gravander, J., Grimson, W., ... & Riley, D. (2014). Philosophical Perspectives on Engineering and Technology Literacy, I.
Engineering philosophy
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
Extracting Training Data from Diffusion Models
Learning on tree architectures outperforms a convolutional feedforward network
Investigating cognitive neuroscience theories of human intelligence: A connectome‐based predictive modeling approach
A wearable cardiac ultrasound imager
Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research