AI/ML

2194 bookmarks
Let the LLM Write the Prompts: An Intro to DSPy in Compound AI Pipelines
Large Language Models (LLMs) excel at understanding messy, real-world data, but integrating them into production systems remains challenging. Prompts can be unruly to write, vary by model, and can be difficult to manage in the larger context of a pipeline. In this session, we'll demonstrate incorporating LLMs into a geospatial conflation pipeline using DSPy. We'll discuss how DSPy works under the covers and highlight the benefits it provides pipeline creators and managers. Talk by Drew Breunig, Data Science Leader & Strategist, Overture Maps Foundation.
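A minimal sketch of the pattern the talk describes, in which a declarative DSPy signature replaces a hand-written prompt; the model name, signature, and record fields below are illustrative assumptions, not code from the session:

import dspy

# Assumes a recent DSPy release and an API key in the environment;
# the model string and fields are placeholders, not the talk's pipeline.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class MatchPlaces(dspy.Signature):
    """Decide whether two place records describe the same real-world place."""
    record_a: str = dspy.InputField(desc="JSON for the first place record")
    record_b: str = dspy.InputField(desc="JSON for the second place record")
    same_place: bool = dspy.OutputField()

# DSPy compiles the signature into a prompt (and can optimize it later),
# so the pipeline code itself never contains prompt strings.
matcher = dspy.ChainOfThought(MatchPlaces)
result = matcher(
    record_a='{"name": "Cafe Roma", "lat": 37.7700, "lon": -122.4100}',
    record_b='{"name": "Caffè Roma", "lat": 37.7701, "lon": -122.4102}',
)
print(result.same_place)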
·youtube.com·
The BEST Way to Chunk Text for RAG
·youtube.com·
MonoQwen-Vision, the first visual document reranker - LightOn
We introduce MonoQwen2-VL-v0.1, the first visual document reranker, built to enhance the quality of retrieved visual documents and take these pipelines to the next level. Reranking a small number of candidates with MonoQwen2-VL-v0.1 achieves top results on the ViDoRe leaderboard.
·lighton.ai·
If it cites em dashes as proof, it came from a tool.
It's a safe bet that most of us have encountered the age-old admonition to "never judge a book by its cover" at some point in our lives. There is a deep wisdom in that advice, wisdom that seems to go completely out the window as soon as a certain type of person spots a certain type of punctuation.
·scottsmitelli.com·
Optimizing RAG With Reasoning Models
Orion Weller presents new frontiers in information retrieval, focusing on how instruction following and reasoning capabilities from large language models can be integrated into retrieval systems. He introduces Promptriever, a fast embedder that can follow instructions, and Rank1, a powerful but slower reasoning reranker, demonstrating their ability to unlock new types of queries and significantly improve performance.
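A rough sketch of the two-stage shape Weller argues for: an instruction-aware embedder for recall, then a slower reranker that spends more compute on a short candidate list. The model IDs here are generic placeholders, not the released Promptriever or Rank1 checkpoints, which ship with their own loading instructions:

from sentence_transformers import SentenceTransformer, CrossEncoder
import numpy as np

# Placeholder models: any embedder and cross-encoder illustrate the pipeline shape.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

docs = [
    "PPO clips the policy ratio to keep updates close to the old policy.",
    "BM25 scores documents by term frequency and inverse document frequency.",
    "A recipe for sourdough bread with a long cold ferment.",
]

# Stage 1: instruction-conditioned dense retrieval; the instruction is simply
# prepended to the query text before embedding.
query = "how do reinforcement learning methods constrain policy updates"
instruction = "Relevant documents must describe a concrete algorithm, not an application."
q_emb = embedder.encode(instruction + " " + query)
d_emb = embedder.encode(docs)
top_k = np.argsort(-(d_emb @ q_emb))[:2]

# Stage 2: spend more compute reranking only the shortlisted candidates.
pairs = [(instruction + " " + query, docs[i]) for i in top_k]
rerank_scores = reranker.predict(pairs)
for i, s in sorted(zip(top_k, rerank_scores), key=lambda t: -t[1]):
    print(f"{s:.3f}  {docs[i]}")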
·youtube.com·
The Prompt Foreman
Writing about technology, culture, media, data, and the ways they interact.
·dbreunig.com·
zebbern/claude-code-guide: Full guide on claude tips and tricks and how you can optimise your claude code the best & strive to find every command possible even hidden ones!
Full guide on claude tips and tricks and how you can optimise your claude code the best & strive to find every command possible even hidden ones! - zebbern/claude-code-guide
·github.com·
Proximal Policy Optimization (PPO) for LLMs Explained Intuitively
In this video, I break down Proximal Policy Optimization (PPO) from first principles, without assuming prior knowledge of Reinforcement Learning. By the end, you'll understand the core RL building blocks that led to PPO, including: 🔵 Policy Gradient 🔵 Actor-Critic Models 🔵 The Value Function 🔵 The Generalized Advantage Estimate In the LLM world, PPO was used to train reasoning models like OpenAI's o1/o3, and presumably Claude 3.7, Grok 3, etc. It's the backbone of Reinforcement Learning from Human Feedback (RLHF), which helps align AI models with human preferences, and of Reinforcement Learning with Verifiable Rewards (RLVR), which gives LLMs reasoning abilities. Papers: - PPO paper: https://arxiv.org/pdf/1707.06347 - GAE paper: https://arxiv.org/pdf/1506.02438 - TRPO paper: https://arxiv.org/pdf/1502.05477 Well-written blog posts: - https://danieltakeshi.github.io/2017/04/02/notes-on-the-generalized-advantage-estimation-paper/ - https://huggingface.co/blog/NormalUhr/rlhf-pipeline - https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/ Implementations: - (Original) OpenAI Baselines: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/ppo2 - Hugging Face: https://github.com/huggingface/trl/blob/main/trl/trainer/ppo_trainer.py - Hugging Face docs: https://huggingface.co/docs/trl/main/en/ppo_trainer Mother of all RL books (Barto & Sutton): http://incompleteideas.net/book/RLbook2020.pdf 00:00 Intro 01:21 RL for LLMs 05:53 Policy Gradient 09:23 The Value Function 12:14 Generalized Advantage Estimate 17:17 End-to-end Training Algorithm 18:23 Importance Sampling 20:02 PPO Clipping 21:36 Outro Special thanks to Anish Tondwalkar for discussing some of these concepts with me. Note: At 21:10, A_t should have been inside the min. Thanks @t.w.7065 for catching this.
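For reference, a minimal PyTorch sketch (mine, not the video's code) of the clipped surrogate objective covered at 20:02, with the advantage inside the min as the correction above notes:

import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper, negated for gradient descent."""
    # Probability ratio r_t = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), via log-probs.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise min so the update never benefits from pushing the ratio
    # outside the trust region; note that A_t sits inside the min.
    return -torch.min(unclipped, clipped).mean()

# Toy usage: per-token log-probs under the new and old policies, plus GAE advantages.
logp_new = torch.tensor([-1.2, -0.7, -2.0], requires_grad=True)
logp_old = torch.tensor([-1.0, -0.9, -1.8])
advantages = torch.tensor([0.5, -0.3, 1.1])
loss = ppo_clipped_loss(logp_new, logp_old, advantages)
loss.backward()
print(loss.item())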
·youtube.com·
awwaiid/gremllm
Delightfully cursed Python library by Brock Wilcox, built on top of LLM:

from gremllm import Gremllm

counter = Gremllm("counter")
counter.value = 5
counter.increment()
print(counter.value)                 # 6?
print(counter.to_roman_numerals())   # VI?

You …
·simonwillison.net·
Guest Post: How I Scanned all of GitHub’s “Oops Commits” for Leaked Secrets ◆ Truffle Security Co.
GitHub Archive logs every public commit, even the ones developers try to delete. Force pushes often cover up mistakes like leaked credentials by rewriting Git history. GitHub keeps these dangling commits, from what we can tell, forever. In the archive, they show up as “zero-commit” PushEvents.
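A rough sketch, not the author's actual tooling, of how one might spot those zero-commit PushEvents in a single GH Archive hourly dump; the hour is arbitrary and the field handling follows the public GitHub Events payload format:

import gzip
import json
import urllib.request

# One arbitrary GH Archive hour; the real scan covered the full archive.
url = "https://data.gharchive.org/2024-01-01-15.json.gz"
with urllib.request.urlopen(url) as resp:
    lines = gzip.decompress(resp.read()).splitlines()

for line in lines:
    event = json.loads(line)
    if event.get("type") != "PushEvent":
        continue
    payload = event.get("payload", {})
    # A force push that rewrites history logs a push with zero commits;
    # "before" is the overwritten head, which may now be a dangling commit
    # still fetchable from GitHub by its SHA.
    if payload.get("size") == 0 and payload.get("before"):
        print(f'{event["repo"]["name"]}: dangling candidate {payload["before"]}')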
·trufflesecurity.com·