LLMs

312 bookmarks
4 Reasons Your AI Agent Needs Code Interpreter
Code interpreters will power ever more AI agents and apps as part of the new ecosystem being built around LLMs, where the interpreter forms a crucial part of an agent’s brain.
·thenewstack.io·
Do Enormous LLM Context Windows Spell the End of RAG?
Now that LLMs can take in 1 million tokens at once, how long will it be until we no longer need retrieval-augmented generation for accurate AI responses?
·thenewstack.io·
Block AI crawlers
I have very mixed opinions on LLMs, as they stand. This note won’t be digging into my thoughts there - I don’t want to have that discussion. However, while I’m not exactly doing cutting-edge research here, I do put effort into publishing for humans.
·ellie.wtf·
SQL Schema Generation With Large Language Models
We discover that mapping one domain (publishing) into another (the domain-specific language of SQL) works heavily to an LLM's strengths.
·thenewstack.io·
How RAG Architecture Overcomes LLM Limitations
Retrieval-augmented generation reworks how LLMs operate in real-time AI environments, producing better, more accurate search results.
·thenewstack.io·
Improving LLM Output by Combining RAG and Fine-Tuning
When designing a domain-specific enterprise-grade conversational Q&A system to answer customer questions, Conviva found an either/or approach isn’t sufficient.
·thenewstack.io·
Evaluation for LLM-Based Apps | Deepchecks
Release high-quality LLM apps quickly without compromising on testing. Never be held back by the complex and subjective nature of LLM interactions.
·deepchecks.com·
How to Cure LLM Weaknesses with Vector Databases
Vector databases enable businesses to affordably and sustainably adapt generic large language models for organization-specific use.
·thenewstack.io·
RAG vs. Fine Tuning: Which One is Right for You? - Vectorize
In today’s world LLMs are everywhere, but what exactly is an LLM and what is it used for? LLM is an acronym for Large Language Model: an AI model developed to understand and generate human-like language. LLMs are trained on huge data sets (hence “large”) to process input and generate meaningful, relevant responses.
·vectorize.io·
Notes on how to use LLMs in your product.
Pretty much every company I know is looking for a way to benefit from Large Language Models. Even if their executives don’t see much applicability, their investors likely do, so they’re staring at the blank page nervously trying to come up with an idea. It’s straightforward to make an argument for LLMs improving internal efficiency somehow, but it’s much harder to describe a believable way that LLMs will make your product more useful to your customers.
·lethain.com·
Building a RAG for tabular data in Go with PostgreSQL & Gemini
In this article we explore how to combine a large language model (LLM) with a relational database so that users can ask questions about their data in a natural way. It demonstrates a Retrieval-Augmented Generation (RAG) system built with Go that uses PostgreSQL and pgvector for data storage and retrieval; the provided code showcases the core functionality.
·pgaleone.eu·
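The pipeline this bookmark describes (embed data, store the vectors, retrieve the nearest matches, splice them into the prompt) can be sketched in miniature. This is a toy sketch, not the article's Go code: a bag-of-words `Counter` stands in for a real embedding model, and an in-memory list stands in for the PostgreSQL/pgvector store. All names here are illustrative.

```python
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words vector. A real system would
    # call an embedding model and persist the vectors with pgvector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Rows that the database would store alongside their embeddings.
docs = [
    "quarterly revenue grew by ten percent",
    "the database schema has three tables",
    "golang services handle the ingestion pipeline",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored rows by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# The retrieved rows become the context spliced into the LLM prompt.
context = retrieve("how did revenue change this quarter")[0]
prompt = f"Answer using only this context:\n{context}"
```

In a real deployment, `embed()` would call an embedding API and `retrieve()` would become a SQL query using pgvector's distance operator, e.g. `ORDER BY embedding <-> $1 LIMIT k`.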
Game theory research shows AI can evolve into more selfish or cooperative personalities
Researchers in Japan have developed a diverse range of personality traits in dialogue AI using a large language model (LLM). Using the prisoner's dilemma from game theory, a team led by Professor Takaya Arita and Associate Professor Reiji Suzuki of Nagoya University's Graduate School of Informatics created a framework for evolving AI agents that mimic human behavior by switching between selfish and cooperative actions, adapting their strategies through evolutionary processes. The findings were published in Scientific Reports.
·techxplore.com·
Local chat with Ollama and Cody
Learn how to chat with Cody without an Internet connection, using local LLMs powered by Ollama.
·sourcegraph.com·
RAFT: A new way to teach LLMs to be better at RAG
In this article, we look at the limitations of RAG and domain-specific fine-tuning for adapting LLMs to existing knowledge, and how a team at UC Berkeley…
·techcommunity.microsoft.com·
Using AI to Improve Bad Business Writing
If you're a developer needing to write a business document, Jon Udell explores the benefits (and quirks) of LLM-assisted copy editing.
·thenewstack.io·
Open WebUI
A feature-rich self-hosted WebUI for LLMs; supported LLM runners include Ollama and OpenAI-compatible APIs.
·openwebui.com·
Running Your Very Own Local LLM
Tools like Ollama let you experiment with large language models on an average PC
·yc.prosetech.com·