CORE LLM

191 bookmarks
Top 12 beginner-friendly free AI courses | Uxcel
In this article, we present the top 12 free AI courses, great for an introduction to this field. Pick the best free AI introductory course for you.
·uxcel.com·
CHK - 13 foundational AI courses, resources from MIT | Open Learning
As artificial intelligence (AI) reshapes industries, powers innovation, and redefines how we live and work, understanding its core principles is increasingly important. We curated a list of 13 foundational AI courses and resources from MIT Open Learning — most of them free — to help you grasp the basics of AI, machine learning, machine vision, and algorithms.
·openlearning.mit.edu·
x Penseum | Free AI study guide maker
Generate notes, flashcards and quizzes in seconds with Penseum’s free AI study guide maker. Save hours each session—start learning smarter today.
·penseum.com·
one-hot encoding
One-hot encoding is a process used to convert categorical data into a numerical format that machine learning algorithms can understand. It works by creating a new binary column for each unique category in the original data. Each new column contains a 1 for rows belonging to that category and a 0 for all others. [1...
·docs.google.com·
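The encoding described above can be sketched in a few lines of plain Python (the color values and column ordering here are illustrative, not from the bookmarked page):

```python
# Minimal one-hot encoding sketch: each unique category becomes a
# binary column; a row gets 1 in its own category's column, 0 elsewhere.
def one_hot(values):
    categories = sorted(set(values))          # stable column order
    index = {cat: i for i, cat in enumerate(categories)}
    rows = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1                     # 1 for this category, 0 for the rest
        rows.append(row)
    return categories, rows

cols, encoded = one_hot(["red", "green", "red", "blue"])
print(cols)     # ['blue', 'green', 'red']
print(encoded)  # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

In practice a library routine such as pandas' `get_dummies` does the same thing with less code; the sketch just makes the 1/0 column structure explicit.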
ai backpropagation
Backpropagation, short for "backward propagation of errors," is a fundamental algorithm used to train artificial neural networks, particularly in the context of deep learning. It's a key component in enabling neural networks to learn from data and improve their predictive accuracy over time. [1] ...
·docs.google.com·
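A toy sketch of the idea: one sigmoid neuron trained by gradient descent on a single input-target pair, with the error propagated backward through the chain rule. The numbers (input 1.5, target 1.0, learning rate 0.5) are arbitrary assumptions for illustration:

```python
import math

# Toy backpropagation sketch: one neuron with a sigmoid activation,
# trained on a single (x, target) pair.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 1.0
w, b, lr = 0.1, 0.0, 0.5

for _ in range(200):
    # forward pass
    z = w * x + b
    y = sigmoid(z)
    loss = 0.5 * (y - target) ** 2
    # backward pass: apply the chain rule, propagating the error backward
    dloss_dy = y - target
    dy_dz = y * (1 - y)            # derivative of the sigmoid
    dz_dw, dz_db = x, 1.0
    w -= lr * dloss_dy * dy_dz * dz_dw
    b -= lr * dloss_dy * dy_dz * dz_db

print(round(sigmoid(w * x + b), 2))
```

A real network repeats this layer by layer, but the mechanism is the same: multiply local derivatives backward from the loss to each weight, then step against the gradient.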
A hidden state is the output representation of the input tokens after they have been processed by a layer
In a Transformer model, a hidden state is the output representation of the input tokens after they have been processed by a layer. Unlike in Recurrent Neural Networks (RNNs), where a hidden state carries a sequential memory, each hidden state in a Transformer is a vector that represents the comb...
·docs.google.com·
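The shape of these hidden states is easy to see in a sketch: a layer maps one vector per token to a new vector per token, so output shape matches input shape. The "layer" here is a stand-in (a random matrix plus `tanh`), and the sizes (4 tokens, `d_model=8`) are illustrative, not from any real model:

```python
import numpy as np

# A layer turns one representation vector per token into a new one per
# token -- the hidden states keep the (seq_len, d_model) shape.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8

embeddings = rng.normal(size=(seq_len, d_model))   # input token representations
W = rng.normal(size=(d_model, d_model))            # stand-in for a layer's weights

hidden_states = np.tanh(embeddings @ W)            # one hidden state per token

print(hidden_states.shape)  # (4, 8): a d_model-sized vector for each token
```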
@ 3 DC - How Transformers Work: A Detailed Exploration of Transformer Architecture

INCLUDES: The architecture of Transformers: self-attention, encoder–decoder design, positional encoding, and multi-head attention. KEY CONCEPTS: attention mechanism, embeddings, residual connections, normalization, feed-forward layers, decoder workflows, tokenization, and tokens.

·datacamp.com·
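One of the concepts listed above, positional encoding, can be written out concretely. This follows the standard sinusoidal formulation (PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)); the sizes are toy values, not from the article:

```python
import numpy as np

# Sinusoidal positional encoding: even columns get sine, odd columns
# get cosine, with wavelengths growing geometrically across dimensions.
def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model // 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(seq_len=6, d_model=8)
print(pe.shape)  # (6, 8)
print(pe[0])     # position 0: all sine terms are 0, all cosine terms are 1
```

Because the encoding depends only on position, it is added to the token embeddings so that otherwise order-blind attention can distinguish "first token" from "fifth token".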
@ 3 DC - What is Attention and Why Do LLMs and Transformers Need It?
In this article, we focus on building an intuitive understanding of attention. The attention mechanism was introduced in the “Attention Is All You Need” paper. It is the key element in the transformers architecture that has revolutionized LLMs.
·datacamp.com·
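The mechanism the article builds intuition for is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch with toy sizes (3 tokens, d_k = 4, random Q/K/V just for shape):

```python
import numpy as np

# Scaled dot-product attention: each output is a softmax-weighted
# mix of the value vectors, weighted by query-key similarity.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 4))   # 3 query tokens, d_k = 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

out, w = attention(Q, K, V)
print(out.shape)        # (3, 4): one output vector per query token
print(w.sum(axis=-1))   # each row of attention weights sums to 1
```

Multi-head attention runs several of these in parallel on learned projections of Q, K, and V and concatenates the results.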
+ AI 1 CHEATSHEET TERMS
·drive.google.com·