Tutorials/Learning

464 bookmarks
Learn AI - Course for Beginners
Find videos to learn AI, especially RAG, AI agents, and MCP. If you want personal AI coaching, check the details at https://www.blog.qualitypointtech.com/202...
·youtube.com·
Natural Language Processing Demystified
A free, accessible course on Natural Language Processing with 15 modules and 9 notebooks of theory and practice, clearly explained.
·nlpdemystified.org·
Introduction to Generative AI | Google Skills
This is an introductory level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps.
·skills.google·
one-hot encoding
One-hot encoding is a process used to convert categorical data into a numerical format that machine learning algorithms can understand. It works by creating a new binary column for each unique category in the original data. Each new column contains a 1 for rows belonging to that category and a 0 for all others. [1...
·docs.google.com·
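For reference, a minimal sketch of one-hot encoding with pandas; the column and category names are made up for illustration.

```python
import pandas as pd

# Illustrative data: a single categorical column.
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One binary column per unique category: 1 marks membership, 0 everything else.
one_hot = pd.get_dummies(df["color"], prefix="color", dtype=int)
print(one_hot)
#    color_blue  color_green  color_red
# 0           0            0          1
# 1           0            1          0
# 2           1            0          0
# 3           0            1          0
```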
ai backpropagation
Backpropagation, short for "backward propagation of errors," is a fundamental algorithm used to train artificial neural networks, particularly in the context of deep learning. It's a key component in enabling neural networks to learn from data and improve their predictive accuracy over time. [1] ...
·docs.google.com·
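As a rough illustration of the idea, here is a minimal NumPy sketch of backpropagation through a single sigmoid neuron; the data and learning rate are made up.

```python
import numpy as np

# Made-up data: 4 samples, 3 features, binary targets.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

w = rng.normal(size=(3, 1))
b = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: prediction and mean-squared-error loss.
    a = sigmoid(x @ w + b)
    loss = np.mean((a - y) ** 2)

    # Backward pass: chain rule from the loss back to the parameters.
    d_a = 2.0 * (a - y) / len(y)   # dL/da
    d_z = d_a * a * (1.0 - a)      # dL/dz, using sigmoid'(z) = a * (1 - a)
    d_w = x.T @ d_z                # dL/dw
    d_b = d_z.sum(axis=0)          # dL/db

    # Gradient-descent update with a fixed learning rate.
    w -= 0.5 * d_w
    b -= 0.5 * d_b
```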
A hidden state is the output representation of the input tokens after they have been processed by a layer
In a Transformer model, a hidden state is the output representation of the input tokens after they have been processed by a layer. Unlike in Recurrent Neural Networks (RNNs), where a hidden state carries a sequential memory, each hidden state in a Transformer is a vector that represents the comb...
·docs.google.com·
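A minimal sketch of inspecting these hidden states with the Hugging Face transformers library; the model name below is just an example.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Example checkpoint; any encoder-style Transformer model would do.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("Hidden states are per-token vectors.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One tensor per layer (plus the embedding layer), each of shape
# [batch, sequence_length, hidden_size].
for i, h in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")
```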
@ 3 DC - How Transformers Work: A Detailed Exploration of Transformer Architecture
INCLUDES: The architecture of Transformers: self-attention, encoder–decoder design, positional encoding, and multi-head attention.
KEY CONCEPTS: Attention mechanism, embeddings, residual connections, normalization, feed-forward layers, decoder workflows, tokenization, and tokens.
·datacamp.com·
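One of the pieces the article covers, sinusoidal positional encoding, is small enough to sketch directly; the sequence length and model width below are arbitrary.

```python
import numpy as np

# Sinusoidal positional encoding as described in "Attention Is All You Need".
def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    positions = np.arange(seq_len)[:, None]                        # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                               # (seq_len, d_model)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])   # odd dimensions use cosine
    return pe

print(positional_encoding(seq_len=50, d_model=64).shape)   # (50, 64)
```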
@ 3 DC - What is Attention and Why Do LLMs and Transformers Need It?
In this article, we focus on building an intuitive understanding of attention. The attention mechanism was introduced in the “Attention Is All You Need” paper. It is the key element in the transformers architecture that has revolutionized LLMs.
·datacamp.com·
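To make the intuition concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the core of the mechanism, softmax(QK^T / sqrt(d_k)) V, with made-up tensor sizes.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Each query scores every key; scaling by sqrt(d_k) keeps the scores well-behaved.
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)

    # Softmax over the key axis turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    # The output is a weighted sum of the values for each query position.
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 16))   # 5 query positions, d_k = 16
k = rng.normal(size=(7, 16))   # 7 key positions
v = rng.normal(size=(7, 32))   # values may have a different width
print(scaled_dot_product_attention(q, k, v).shape)   # (5, 32)
```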
+ AI 1 CHEATSHEET TERMS
·drive.google.com·