Learn AI

559 bookmarks
Making Retrieval Augmented Generation Better with @jamesbriggs
Join Pinecone Developer Advocate @jamesbriggs as he delves into retrieval augmented generation (RAG) and explores its role in enhancing Large Language Models.
·youtube.com·
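Not code from the video, but a toy sketch of the retrieve-then-prompt pattern it covers: score documents against the question, then put the top hits into the prompt so the model answers from that context. Word-overlap scoring stands in for a real embedding model and vector database such as Pinecone; the documents and function names below are illustrative.

```python
# Toy RAG pattern: retrieve the most relevant documents, then build a grounded prompt.
DOCS = [
    "Pinecone is a managed vector database for similarity search.",
    "Retrieval augmented generation grounds LLM answers in retrieved documents.",
    "Transformers use self-attention to model token interactions.",
]

def retrieve(question, docs, top_k=2):
    # Word-overlap scoring as a stand-in for embedding similarity search.
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question, docs):
    # Stuff the retrieved context into the prompt so the LLM answers from it.
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is retrieval augmented generation?", DOCS))
```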
Advanced RAG 06 - RAG Fusion
Colab: https://drp.li/PZG2t · Blog Post: https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1 · Original Code: https://github.com/Raudas...
·youtube.com·
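The core move in RAG Fusion is to generate several rewordings of the user's query, retrieve documents for each, and merge the ranked result lists with reciprocal rank fusion (RRF). A minimal sketch of just the RRF merge step, not taken from the linked Colab or repo; the function name and the k=60 smoothing constant follow common convention but are assumptions here.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge ranked lists of document ids into a single fused ranking.

    Each document scores sum(1 / (k + rank)) over every list it appears in,
    so items that rank highly under many query variants rise to the top.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for three generated rewordings of the same question.
print(reciprocal_rank_fusion([
    ["doc3", "doc1", "doc7"],
    ["doc1", "doc3", "doc2"],
    ["doc1", "doc9", "doc3"],
]))  # doc1 and doc3 come first: they appear near the top in every list
```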
Introduction to Linear Regression for Machine Learning
In this post, I will go over the concept of simple linear regression, delve into the underlying mathematical principles of the algorithm, and explore its practical application in the field of machine learning.
·gettingstarted.ai·
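As a standalone refresher on the post's topic (this sketch is not code from the post): simple linear regression fits y ≈ a + b·x by ordinary least squares, and with one feature the fit has a closed form, b = cov(x, y) / var(x) and a = mean(y) - b * mean(x).

```python
# Ordinary least squares for simple (one-feature) linear regression.
def fit_simple_linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data lying near y = 2x + 1.
a, b = fit_simple_linear_regression([1, 2, 3, 4], [3.1, 4.9, 7.2, 9.0])
print(f"y ≈ {a:.2f} + {b:.2f}·x")  # y ≈ 1.05 + 2.00·x
```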
1706.03762 - Attention Is All You Need
The paper presents the Transformer, an innovative attention-based model for sequence transduction that sets new benchmarks for efficiency and performance.
·emergentmind.com·
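The paper's central building block is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V. A minimal single-head, unmasked NumPy sketch of that one formula (not the full Transformer):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, single head, no mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # one value vector per key
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```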
James Lin on X: "Essential ML papers: 1. Transformers: Attention is All You Need https://t.co/oA5TGGqu9s 2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://t.co/9ekAqIRQxs 3. GPT: Language Models are Few-Shot Learners https://t.co/oBVEwfOoLB 4. CNNs:…" / X
Essential ML papers:
1. Transformers: Attention is All You Need https://t.co/oA5TGGqu9s
2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://t.co/9ekAqIRQxs
3. GPT: Language Models are Few-Shot Learners https://t.co/oBVEwfOoLB
4. CNNs:…
— James Lin (@jlinbio) January 6, 2024
·twitter.com·
htmldocs
An online document editor for creating and automating the generation of PDF documents from HTML/CSS. No installation required, a REST API, and free to use. Hundreds of templates, from invoices and reports to resumes, legal documents, and more.
·htmldocs.com·
Advances In Conversational Dialog State Management
Any Conversational User Interface needs to perform dialog state management, determining what the next dialog state and system response should be.
·cobusgreyling.medium.com·
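A toy illustration of the idea in the article's opening line, not code from the article: the dialog manager keeps a state (slots filled so far) and maps it to the next system action. The travel-booking slots and action names below are made up; real systems use learned policies or LLM-driven state tracking.

```python
# Minimal dialog state manager: the current state determines the next action.
def next_action(state):
    if "destination" not in state:
        return "ask_destination"
    if "date" not in state:
        return "ask_date"
    return "confirm_booking"

state = {}
print(next_action(state))        # ask_destination
state["destination"] = "Lisbon"
print(next_action(state))        # ask_date
state["date"] = "2024-02-01"
print(next_action(state))        # confirm_booking
```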
Stuff we figured out about AI in 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence.
·simonwillison.net·
Efficient LLM inference
On quantization, distillation, and efficiency
·artfintel.com·
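As a tiny illustration of one of the article's themes (not code from the article): symmetric int8 weight quantization, with the round-trip error it introduces.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric quantization: map the largest |weight| to 127, round the rest.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale                # dequantize for comparison
print("max abs error:", np.abs(w - w_hat).max())    # bounded by roughly scale / 2
```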