LLMs

318 bookmarks
Running Fabric Locally with Ollama: A Step-by-Step Guide - Bernhard Knasmüller on Software Development
In the realm of Large Language Models (LLMs), Daniel Miessler’s fabric project is a popular choice for collecting and integrating various LLM prompts. However, its default requirement to access the OpenAI API can lead to unexpected costs. Enter ollama, an alternative solution that allows running LLMs locally on powerful hardware like Apple Silicon chips or […]
·knasmueller.net·
4 Reasons Your AI Agent Needs Code Interpreter
We will see code interpreters powering even more AI agents and apps as a part of the new ecosystem being built around LLMs, where a code interpreter represents a crucial part of an agent’s brain.
·thenewstack.io·
Do Enormous LLM Context Windows Spell the End of RAG?
Now that LLMs can retrieve 1 million tokens at once, how long will it be until we don’t need retrieval augmented generation for accurate AI responses?
·thenewstack.io·
Block AI crawlers
I have very mixed opinions on LLMs, as they stand. This note won’t be digging into my thoughts there - I don’t want to have that discussion. However, while I’m not exactly doing cutting-edge research here, I do put effort into publishing for humans.
·ellie.wtf·
SQL Schema Generation With Large Language Models
We discover that mapping one domain (publishing) into another (the domain-specific language of SQL) works heavily to an LLM's strengths.
·thenewstack.io·
How RAG Architecture Overcomes LLM Limitations
Retrieval-augmented generation facilitates a radical makeover of LLMs and real-time AI environments to produce better, more accurate search results.
·thenewstack.io·
Improving LLM Output by Combining RAG and Fine-Tuning
When designing a domain-specific enterprise-grade conversational Q&A system to answer customer questions, Conviva found an either/or approach isn’t sufficient.
·thenewstack.io·
Evaluation for LLM-Based Apps | Deepchecks
Release high-quality LLM apps quickly without compromising on testing. Never be held back by the complex and subjective nature of LLM interactions.
·deepchecks.com·
How to Cure LLM Weaknesses with Vector Databases
Vector databases enable businesses to affordably and sustainably adapt generic large language models for organization-specific use.
·thenewstack.io·
RAG vs. Fine Tuning: Which One is Right for You? - Vectorize
Introduction: In today’s world, LLMs are everywhere, but what exactly is an LLM, and what is it used for? LLM is an acronym for Large Language Model, an AI model developed to understand and generate human-like language. LLMs are trained on huge data sets (hence “large”) to process and generate meaningful and relevant responses based
·vectorize.io·
Notes on how to use LLMs in your product.
Pretty much every company I know is looking for a way to benefit from Large Language Models. Even if their executives don’t see much applicability, their investors likely do, so they’re staring at the blank page nervously trying to come up with an idea. It’s straightforward to make an argument for LLMs improving internal efficiency somehow, but it’s much harder to describe a believable way that LLMs will make your product more useful to your customers.
·lethain.com·
Building a RAG for tabular data in Go with PostgreSQL & Gemini
In this article we explore how to combine a large language model (LLM) with a relational database to allow users to ask questions about their data in a natural way. It demonstrates a Retrieval-Augmented Generation (RAG) system built with Go that utilizes PostgreSQL and pgvector for data storage and retrieval. The provided code showcases the core functionalities. This is an overview of how the
·pgaleone.eu·
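The pgaleone.eu article pairs an LLM with PostgreSQL and pgvector to answer natural-language questions over tabular data. As a rough illustration of the retrieval step only, here is a minimal sketch that ranks table rows flattened to text against a question using a toy bag-of-words similarity. This is not the article's Go implementation: a real pipeline would use model embeddings stored in pgvector and rank them with a SQL nearest-neighbor query, and the rows and names below are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real RAG system would use model
    # embeddings stored in pgvector and rank with a SQL distance query.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Table rows flattened to text before embedding (hypothetical data).
rows = [
    "order 1001: customer Alice, total 250 EUR, shipped",
    "order 1002: customer Bob, total 80 EUR, pending",
    "order 1003: customer Alice, total 40 EUR, cancelled",
]

def retrieve(question, k=2):
    # Rank rows by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(rows, key=lambda r: cosine(q, embed(r)), reverse=True)[:k]

# The retrieved rows become the context section of the LLM prompt.
context = retrieve("Which orders belong to Alice?")
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\nQuestion: Which orders belong to Alice?"
)
```

With pgvector, the `retrieve` step above collapses into a single `ORDER BY embedding <-> $1 LIMIT k` query, which is the design the article builds toward.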
Game theory research shows AI can evolve into more selfish or cooperative personalities
Researchers in Japan have developed a diverse range of personality traits in dialogue AI using a large language model (LLM). Using the prisoner's dilemma from game theory, Professor Takaya Arita and Associate Professor Reiji Suzuki from Nagoya University's Graduate School of Informatics' team created a framework for evolving AI agents that mimics human behavior by switching between selfish and cooperative actions, adapting its strategies through evolutionary processes. Their findings were published in Scientific Reports.
·techxplore.com·
Local chat with Ollama and Cody
Learn how to use local LLM models to Chat with Cody without an Internet connection powered by Ollama.
·sourcegraph.com·