LLMs

312 bookmarks
Exterminate all rational AI scrapers
Today I added an infinite-nonsense honeypot to my web site just to fuck with LLM scrapers, based on a "spicy autocomplete" program I wrote about 30 years ago. Well-behaved web crawlers will ignore it, but those "AI" people.... well, you know how they are. I'm intentionally not linking to the honeypot from here, for reasons, but I'll bet you can find it pretty easily (and without guessing ...
·jwz.org·
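jwz doesn't publish the honeypot's code, but the "spicy autocomplete" idea he mentions is essentially a Markov-chain text generator: learn which words tend to follow which, then emit endless plausible-looking nonsense for any scraper that wanders in. A toy sketch of that generation step only; the corpus file and all details here are invented, not his implementation:

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each run of `order` words to the words observed to follow it
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def babble(chain, n_words=200):
        # Walk the chain, emitting statistically plausible nonsense
        key = random.choice(list(chain))
        out = list(key)
        for _ in range(n_words):
            followers = chain.get(key)
            if not followers:                      # dead end: jump to a random key
                key = random.choice(list(chain))
                followers = chain[key]
            nxt = random.choice(followers)
            out.append(nxt)
            key = tuple(out[-len(key):])
        return " ".join(out)

    # Hypothetical corpus file, purely for illustration
    corpus = open("some_local_text_file.txt").read()
    print(babble(build_chain(corpus)))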
A lot changed for LLMs in 2024
I thought this was a fascinating post by Simon Willison, Things We Learned About LLMs in 2024: "This increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that …"
·birchtree.me·
Things we learned about LLMs in 2024
A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past …
·simonwillison.net·
Wardley mapping the LLM ecosystem.
In How should you adopt LLMs?, we explore how a theoretical ride-sharing company, Theoretical Ride Sharing, should adopt Large Language Models (LLMs). Part of that strategy’s diagnosis depends on understanding the expected evolution of the LLM ecosystem, so we’ve built a Wardley map to explore it. This map of the LLM space focuses on how product companies should address the proliferation of model providers such as Anthropic, Google, and OpenAI, as well as the proliferation of LLM product patterns like agentic workflows, Retrieval-Augmented Generation (RAG), and running evals to maintain performance as models change.
·lethain.com·
Stop Treating Your LLM Like a Database
A look at why the batch paradigm is a relic of the past, how it hinders AI apps and why the future of AI demands a real-time event-streaming platform.
·thenewstack.io·
Vector Databases Explained Simply
Vector databases are quite popular right now, especially for building recommendation systems, adding context to chatbots and LLMs, or comparing content based on similarity. In this guide, I'll explain what vector databases are, how they work, and when to use them.
·getdeploying.com·
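The guide is conceptual, but the core operation a vector database performs, nearest-neighbour search over embeddings, fits in a few lines. A minimal sketch with made-up document names and vectors; a real system would get the vectors from an embedding model and use an approximate index rather than a brute-force scan:

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine of the angle between two vectors: 1.0 means same direction
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy "database": document id -> embedding (invented values)
    docs = {
        "doc_about_cats": np.array([0.9, 0.1, 0.0]),
        "doc_about_dogs": np.array([0.8, 0.3, 0.1]),
        "doc_about_taxes": np.array([0.0, 0.2, 0.9]),
    }

    query = np.array([0.85, 0.2, 0.05])  # embedding of the user's question
    ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
    print(ranked)  # most similar documents first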
Uber Creates GenAI Gateway Mirroring OpenAI API to Support Over 60 LLM Use Cases
Uber created a unified platform for serving large language models (LLMs) from external vendors and self-hosted ones and opted to mirror OpenAI API to help with internal adoption. GenAI Gateway provides a consistent and efficient interface and serves over 60 distinct LLM use cases across many areas.
·infoq.com·
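The InfoQ piece describes the architecture rather than code, but the "mirror the OpenAI API" idea is easy to sketch: expose an OpenAI-shaped endpoint and route each request to whichever provider backs the requested model. The sketch below is a hypothetical illustration (FastAPI, invented backend URLs and routing rule), not Uber's GenAI Gateway:

    import httpx
    from fastapi import FastAPI, Request

    app = FastAPI()

    # Hypothetical routing table: model-name prefix -> backend base URL
    BACKENDS = {
        "gpt-": "https://api.openai.com/v1",
        "claude-": "https://internal-anthropic-proxy.example/v1",
        "in-house-": "https://self-hosted-llm.example/v1",
    }

    @app.post("/v1/chat/completions")
    async def chat_completions(request: Request):
        body = await request.json()
        model = body.get("model", "")
        base = next((url for prefix, url in BACKENDS.items() if model.startswith(prefix)),
                    BACKENDS["in-house-"])
        # Forward the OpenAI-shaped payload unchanged to the chosen backend;
        # a real gateway would also handle auth, quotas, logging, and fallbacks
        async with httpx.AsyncClient() as client:
            resp = await client.post(f"{base}/chat/completions", json=body)
        return resp.json()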
How Do Generative AI Systems Work?
Generative AI systems are prediction machines. This article breaks down neural networks and LLMs in nontechnical language.
·nngroup.com·
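The "prediction machine" framing comes down to one step repeated over and over: score every candidate next token, turn the scores into probabilities, pick one, append it, repeat. A toy illustration with invented scores; a real model computes these from billions of parameters rather than a hand-written table:

    import math
    import random

    # Invented scores ("logits") for what might follow "The cat sat on the"
    logits = {"mat": 4.1, "sofa": 3.2, "roof": 2.5, "equator": -1.0}

    # Softmax: convert raw scores into a probability distribution
    exps = {tok: math.exp(v) for tok, v in logits.items()}
    total = sum(exps.values())
    probs = {tok: v / total for tok, v in exps.items()}

    # Sample the next token in proportion to its probability
    next_token = random.choices(list(probs), weights=probs.values(), k=1)[0]
    print(probs)
    print("predicted next token:", next_token)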
Why Copilot is Making Programmers Worse at Programming
Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious productivity benefits, there’s a growing concern that they may also have unintended consequences for the quality and skill set of programmers.
·darrenhorrocks.co.uk·
Boost LLM Results: When to Use Knowledge Graph RAG
Augmenting RAG with a knowledge graph that assists with retrieval can enable the system to dive deeper into data sets to provide detailed responses.
·thenewstack.io·
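The gist of knowledge-graph-assisted retrieval: instead of (or alongside) similarity search over text chunks, look up the entities mentioned in the question and hand their graph neighbourhood to the LLM as context. A minimal in-memory sketch; the triples, matching rule, and prompt format are invented for illustration and are not prescribed by the article:

    # Tiny invented knowledge graph as (subject, relation, object) triples
    TRIPLES = [
        ("Neo4j", "is_a", "graph database"),
        ("Neo4j", "queried_with", "Cypher"),
        ("Cypher", "is_a", "query language"),
        ("graph database", "stores", "nodes and relationships"),
    ]

    def graph_context(question: str) -> str:
        # Collect triples whose subject or object appears in the question
        facts = [f"{s} {r.replace('_', ' ')} {o}"
                 for s, r, o in TRIPLES
                 if s.lower() in question.lower() or o.lower() in question.lower()]
        return "\n".join(facts)

    question = "How do I query Neo4j?"
    prompt = (
        "Answer using the facts below.\n"
        f"Facts:\n{graph_context(question)}\n"
        f"Question: {question}"
    )
    print(prompt)  # this augmented prompt would then be sent to the LLM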
AI and the Real Hard Pill to Swallow
At Foundation for Economic Education (“The Ego vs. the Machine,” February 24), self-described “techno-optimist” Dylan Allman dismisses recent controversies over AI as a simple matter of wounded egos. “They feel, on some instinctual level, that if machines can do what they do — only better, faster, and more efficiently — then what value do they...
·c4ss.org·
Building a Graph RAG System from Scratch with LangChain: A Comprehensive Tutorial – News from generation RAG
Contents: Setting up the Development Environment · Building the Graph RAG System · Indexing Data in Neo4j · Implementing Retrieval and Generation · Code Walkthrough and Examples · Deploying and Scaling the Graph RAG System · Conclusion and Future Directions. Graph RAG (Retrieval Augmented Generation) is an innovative technique that combines the power of knowledge graphs with large language models (LLMs) to enhance the retrieval and generation of …
·ragaboutit.com·
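The tutorial's own code uses LangChain, whose graph helpers change between versions, so here the two steps its outline names, indexing data in Neo4j and retrieving it at question time, are sketched with the official neo4j Python driver instead. The connection details, credentials, and Cypher schema are placeholders, not the tutorial's:

    from neo4j import GraphDatabase  # official Neo4j driver

    # Placeholder URI and credentials
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def index_fact(tx, subject, relation, obj):
        # MERGE keeps the graph free of duplicate nodes/edges on re-runs
        tx.run(
            "MERGE (a:Entity {name: $s}) "
            "MERGE (b:Entity {name: $o}) "
            "MERGE (a)-[:REL {type: $r}]->(b)",
            s=subject, o=obj, r=relation,
        )

    def retrieve_neighbours(tx, entity):
        # Pull the 1-hop neighbourhood of an entity mentioned in the user's question
        result = tx.run(
            "MATCH (a:Entity {name: $name})-[r:REL]->(b) "
            "RETURN a.name AS subject, r.type AS relation, b.name AS object",
            name=entity,
        )
        return [f"{rec['subject']} {rec['relation']} {rec['object']}" for rec in result]

    with driver.session() as session:
        session.execute_write(index_fact, "LangChain", "integrates_with", "Neo4j")
        facts = session.execute_read(retrieve_neighbours, "LangChain")

    print(facts)  # these facts would be formatted into the LLM prompt as retrieved context
    driver.close()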
Building a Graph RAG System with Open Source Tools: A Comprehensive Guide – News from generation RAG
Introduction to Graph RAG: Graph RAG (Retrieval-Augmented Generation) is a groundbreaking approach that combines the power of large language models (LLMs) with the structured knowledge representation of knowledge graphs. It addresses the limitations of traditional RAG techniques by leveraging the rich contextual information encoded in knowledge graphs, enabling more accurate and relevant search results. …
·ragaboutit.com·
The Current State of LLMs: Riding the Sigmoid Curve
The AI community is embracing the sigmoid curve: the idea that after initial rapid growth, progress starts to level off as we hit natural limitations.
·thenewstack.io·
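For reference, the sigmoid (logistic) curve the article alludes to is the classic S-shape:

    f(t) = L / (1 + e^(-k * (t - t0)))

where L is the ceiling the curve flattens out against, k the steepness of the growth phase, and t0 the midpoint; early on the curve looks roughly exponential, later the approach to L dominates.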
When not to LLM
Here’s the latest installment in the series on working with LLMs: For certain things, the LLM is a clear win. If I’m looking at an invalid blob of JSON that won’t even parse, there’s no reaso…
·blog.jonudell.net·
Method prevents an AI model from being overconfident about wrong answers
Thermometer, a new calibration technique tailored for large language models, can prevent LLMs from being overconfident or underconfident about their predictions. Developed at MIT and the MIT-IBM Watson AI Lab, it aims to help users know when a model should be trusted.
·news.mit.edu·
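The article describes Thermometer only at a conceptual level, and its actual method is not reproduced here. For intuition about what "calibration" means, a sketch of classic temperature scaling, the standard baseline in this space (not the Thermometer method itself), with invented logits and labels: find the single temperature T that makes the model's softened probabilities best match held-out answers.

    import numpy as np

    def softmax(logits, T):
        # Temperature T > 1 softens the distribution (less confident), T < 1 sharpens it
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(logits, labels, T):
        # Average negative log-likelihood of the true labels at temperature T
        p = softmax(logits, T)
        return -np.mean(np.log(p[np.arange(len(labels)), labels]))

    # Invented validation logits (4 examples, 3 classes) and true labels
    logits = np.array([[4.0, 1.0, 0.5], [3.5, 3.4, 0.1], [0.2, 2.9, 2.8], [5.0, 0.1, 0.2]])
    labels = np.array([0, 1, 2, 0])

    # Grid-search the scalar T that best calibrates the model on held-out data
    temperatures = np.linspace(0.5, 5.0, 46)
    best_T = min(temperatures, key=lambda T: nll(logits, labels, T))
    print("calibration temperature:", round(float(best_T), 2))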