Found 581 bookmarks
Reverse engineering the recipe for excellent documentation
Stay updated with best practices for technical writers. Includes an API documentation course for technical writers and engineers learning how to document APIs. The course includes sections on what an API is, API reference documentation, OpenAPI specification and Swagger, docs-as-code publishing and workflows, conceptual topics, tutorials, API documentation jobs, and more.
·idratherbewriting.com·
Tiny Predictive Text - Adam Grant
Predictive text using only 13KB of JavaScript, no LLM. A simple proof of concept that pairs Permy with a simple JSON dictionary for predictive text that is surprisingly …
·adamgrant.info·
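The post's core idea, prefix completion against a small JSON dictionary, is easy to picture with a generic sketch. This is illustrative Python rather than the post's 13KB of JavaScript, it does not use Permy's actual API, and the word list is made up:

```python
# A generic sketch of JSON-dictionary predictive text: look up words that
# share a prefix with what the user has typed. Not Permy's API; toy word list.
import json

WORDS = json.loads('["the", "there", "them", "theory", "predict", "prediction"]')

def predict(prefix, limit=3):
    """Return up to `limit` dictionary words that start with `prefix`."""
    return [w for w in WORDS if w.startswith(prefix)][:limit]

print(predict("the"))   # ['the', 'there', 'them']
print(predict("pred"))  # ['predict', 'prediction']
```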
Phi-2, the New Year Gift for Language Model Lovers
An introduction to Phi-2, a compact language model similar to Gemini Nano, covering the limits of LLMs and how to address them through engineering.
·pub.towardsai.net·
Nomic Blog
Nomic releases an 8192-sequence-length text embedder that outperforms OpenAI's text-embedding-ada-002 and text-embedding-3-small.
·blog.nomic.ai·
Transformer Architecture explained
Transformers are a new development in machine learning that have been making a lot of noise lately. They are incredibly good at keeping…
·medium.com·
Understanding vector search and HNSW index with pgvector - Neon
Vector embeddings have become an essential component of Generative AI applications. These embeddings encapsulate the meaning of the text, thus enabling AI models to understand which texts are semantically similar. The process of extracting the most similar texts from your database to a user’s request is known as nearest neighbors or vector search. pgvector is […]
·neon.tech·
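For readers who want the shape of the workflow the post covers, here is a minimal sketch of HNSW-backed vector search with pgvector, assuming Postgres with pgvector 0.5+ (which added HNSW) and the psycopg2 client. The table and column names are illustrative, not from the post, and the embeddings are toy values rather than model output:

```python
# Minimal pgvector HNSW sketch: enable the extension, index embeddings,
# then run a nearest-neighbor query. Assumes a reachable Postgres instance.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
cur = conn.cursor()

# One-time setup: extension, table, and an HNSW index using cosine distance.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS documents "
            "(id bigserial PRIMARY KEY, body text, embedding vector(3));")
cur.execute("CREATE INDEX IF NOT EXISTS documents_embedding_idx "
            "ON documents USING hnsw (embedding vector_cosine_ops);")

# Insert a row; real embeddings would come from a model, these are toy values.
cur.execute("INSERT INTO documents (body, embedding) VALUES (%s, %s::vector)",
            ("hello", "[0.1, 0.2, 0.3]"))
conn.commit()

# Nearest-neighbor query: <=> is pgvector's cosine-distance operator, so
# ORDER BY ... LIMIT k returns the k most similar rows via the HNSW index.
cur.execute("SELECT body FROM documents ORDER BY embedding <=> %s::vector LIMIT 5;",
            ("[0.1, 0.2, 0.25]",))
print(cur.fetchall())
```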
RAG But Better: Rerankers with Cohere AI
Rerankers have been a common component of retrieval pipelines for many years. They allow us to add a final "reranking" step to our retrieval pipelines — like...
·youtube.com·
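The video's pipeline is a two-stage "retrieve then rerank" flow; below is a hedged sketch using the Cohere Python SDK's rerank endpoint. The model name and response shape follow the SDK at the time of writing and may differ across versions; the query and candidate documents are toy data standing in for a real first-stage retrieval:

```python
# Two-stage retrieval sketch: a fast first stage returns candidates, then a
# reranker rescores each query-document pair and keeps the best few.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical key

# Stage 1 would normally be a vector search returning ~50-100 candidates;
# here a small hard-coded list stands in for it.
candidates = [
    "Rerankers score each query-document pair directly.",
    "Bananas are rich in potassium.",
    "Cross-encoders are slower but more accurate than bi-encoders.",
]

# Stage 2: rerank every candidate against the query, keep the top_n.
response = co.rerank(
    model="rerank-english-v3.0",  # assumed model name; check current docs
    query="How do rerankers improve retrieval?",
    documents=candidates,
    top_n=2,
)
for result in response.results:
    print(result.relevance_score, candidates[result.index])
```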
Making Retrieval Augmented Generation Better with @jamesbriggs
Join Pinecone Developer Advocate @jamesbriggs as he delves into retrieval augmented generation (RAG) and explores its role in enhancing Large Language Models…
·youtube.com·
Advanced RAG 06 - RAG Fusion
Colab: https://drp.li/PZG2t
Blog Post: https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1
Original Code: https://github.com/Raudas...
·youtube.com·
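At the heart of RAG Fusion is reciprocal rank fusion (RRF): several query variants each produce a ranked list of documents, and RRF merges them into one ranking. A minimal sketch, with toy ranked lists and the k=60 constant commonly used in the RRF literature:

```python
# Reciprocal rank fusion: each document scores 1/(k + rank) in every list it
# appears in; summing across lists rewards documents that rank well everywhere.
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked lists of doc ids; higher fused score = better."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Three query variants retrieved overlapping result sets:
rankings = [["d1", "d2", "d3"], ["d2", "d1", "d4"], ["d2", "d3", "d5"]]
print(reciprocal_rank_fusion(rankings))  # "d2" wins: it ranks high everywhere
```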
Introduction to Linear Regression for Machine Learning
In this post, I will go over the concept of simple linear regression, delve into the underlying mathematical principles of the algorithm, and explore its practical application in the field of machine learning.
·gettingstarted.ai·
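The standard math of simple linear regression is short enough to show directly: for y = a + b·x, least squares gives b = cov(x, y)/var(x) and a = mean(y) − b·mean(x). A toy illustration (not code from the post):

```python
# Closed-form least-squares fit for simple linear regression on toy data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])  # roughly y = 2x

b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # slope = cov(x, y) / var(x)
a = y.mean() - b * x.mean()                    # intercept
print(f"y ≈ {a:.2f} + {b:.2f}·x")
```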
1706.03762 - Attention Is All You Need
The paper presents the Transformer, an innovative attention-based model for sequence transduction that sets new benchmarks for efficiency and performance.
·emergentmind.com·
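The paper's central operation, scaled dot-product attention, is compact enough to sketch in a few lines of numpy: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The shapes and random inputs below are toy values:

```python
# Scaled dot-product attention from "Attention Is All You Need", in numpy.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```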
James Lin on X: "Essential ML papers: 1. Transformers: Attention is All You Need https://t.co/oA5TGGqu9s 2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://t.co/9ekAqIRQxs 3. GPT: Language Models are Few-Shot Learners https://t.co/oBVEwfOoLB 4. CNNs:…"
Essential ML papers:
1. Transformers: Attention is All You Need https://t.co/oA5TGGqu9s
2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://t.co/9ekAqIRQxs
3. GPT: Language Models are Few-Shot Learners https://t.co/oBVEwfOoLB
4. CNNs:…
— James Lin (@jlinbio) January 6, 2024
·twitter.com·