CS25 | Stanford Seminar - Transformers in Language: The Development of GPT Models, Including GPT-3
End-to-end Generative Pre-training for Multimodal Video Captioning
Cohere (homepage)
Ex-Googlers to build 'general intelligence' at Adept AI
Researchers Glimpse How AI Gets So Good at Language Processing | Quanta Magazine
Nvidia’s Next GPU Shows That Transformers Are Transforming AI
Multimodal Bottleneck Transformer (MBT): A New Model for Modality Fusion
Vision Language models: towards multi-modal deep learning | AI Summer
Luitse, D. and Denkena, W. (2021). The Great Transformer: Examining the Role of Large Language Models in the Political Economy of AI.
The neural architecture of language: Integrative modeling converges on predictive processing
State of AI Report 2021
Microsoft AI Unveils 'TrOCR', An End-To-End Transformer-Based OCR Model For Text Recognition With Pre-Trained Models
Yoshida, D. et al. (2021). Reconsidering the Past: Optimizing Hidden States in Language Models.
Hugging Face Uses Block Pruning to Speed Up Transformer Training While Maintaining Accuracy | Synced