What We Learned from a Year of Building with LLMs (Part I)
How LLMs Can Unite Analog Event Promotion and Digital Calendars
As the barrier that separates data from not-data fades away, a long-envisioned calendar scenario becomes real with an LLM-backed Datasette plugin.
open-webui/open-webui: User-friendly WebUI for LLMs (Formerly Ollama WebUI)
Running Fabric Locally with Ollama: A Step-by-Step Guide - Bernhard Knasmüller on Software Development
In the realm of Large Language Models (LLMs), Daniel Miessler’s fabric project is a popular choice for collecting and integrating various LLM prompts. However, its default requirement to access the OpenAI API can lead to unexpected costs. Enter ollama, an alternative solution that allows running LLMs locally on powerful hardware like Apple Silicon chips or […]
How to Run Llama 3 Locally with Ollama and Open WebUI
I’m a big fan of Llama. Meta releasing their LLM open source is a net benefit for the tech...
Don't worry about LLMs
All we have to do is get closer to the metal
4 Reasons Your AI Agent Needs Code Interpreter
We will see code interpreters powering even more AI agents and apps as a part of the new ecosystem being built around LLMs, where a code interpreter represents a crucial part of an agent’s brain.
LLMs’ Data-Control Path Insecurity - Schneier on Security
Do Enormous LLM Context Windows Spell the End of RAG?
Now that LLMs can retrieve 1 million tokens at once, how long will it be until we don’t need retrieval augmented generation for accurate AI responses?
Block AI crawlers
I have very mixed opinions on LLMs, as they stand. This note won’t be digging into my thoughts there - I don’t want to have that discussion. However, while I’m not exactly doing cutting-edge research here, I do put effort into publishing for humans.
SQL Schema Generation With Large Language Models
We discover that mapping one domain (publishing) into another (the domain-specific language of SQL) plays directly to an LLM's strengths.
How Andela Built Its AI-Based Platform Without an LLM
Its data-driven matching algorithms, which pair people to positions, employ a structured taxonomy to overcome other models' limitations.
How RAG Architecture Overcomes LLM Limitations
Retrieval-augmented generation grounds LLM responses in retrieved, up-to-date data, producing better, more accurate results in real-time AI environments.
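The mechanics behind RAG can be shown with a deliberately tiny sketch. Here the "embedding" is just a bag-of-words count vector and the "vector database" is a Python list; both are stand-ins for a real embedding model and vector store, and the example documents are invented.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy stand-in for a vector database.
docs = [
    "RAG retrieves relevant documents before the model answers.",
    "Fine-tuning adjusts the weights of the model itself.",
    "Vector databases store embeddings for similarity search.",
]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Augment the query with retrieved context before sending it to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A production system swaps in a learned embedding model and a real vector store, but the retrieve-then-augment loop is the same.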
React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity
The evolution of software development over the past decade has been very frustrating. Little of it seems to make sense, even to those of us who are right in the middle of it.
Improving LLM Output by Combining RAG and Fine-Tuning
When designing a domain-specific enterprise-grade conversational Q&A system to answer customer questions, Conviva found an either/or approach isn’t sufficient.
How To Control Access in LLM Data Plus Distributed Authorization
Oso explains how to use a vector database and retrieval-augmented generation to tie data access in LLM applications to permissions, decoupling authorization data from authorization logic.
WebAssembly, Large Language Models, and Kubernetes Matter
WebAssembly makes it quick and easy to download and run a complete LLM on a machine without any major setup.
Evaluation for LLM-Based Apps | Deepchecks
Release high-quality LLM apps quickly without compromising on testing. Never be held back by the complex and subjective nature of LLM interactions.
How to Cure LLM Weaknesses with Vector Databases
Vector databases enable businesses to affordably and sustainably adapt generic large language models for organization-specific use.
RAG vs. Fine Tuning: Which One is Right for You? - Vectorize
In today's world, LLMs are everywhere, but what exactly is an LLM and what are they used for? LLM, an acronym for Large Language Model, is an AI model developed to understand and generate human-like language. LLMs are trained on huge data sets (hence "large") to process and generate meaningful and relevant responses based
The end of the “best open LLM”
Modeling the compute versus performance tradeoff of many open LLMs.
Dump A Code Repository As A Text File, For Easier Sharing With Chatbots
Some LLMs (Large Language Models) can act as useful programming assistants when provided with a project’s source code, but experimenting with this can get a little tricky if the chatbot has n…
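The "dump a repo as one text file" idea is simple enough to sketch in a few lines. This is a minimal version, not the tool the article describes; the extension allowlist and size cap are my own illustrative choices.

```python
import os

# Illustrative choices: which files to include and how large they may be.
INCLUDE_EXT = {".py", ".md", ".toml", ".txt"}
MAX_BYTES = 100_000  # skip files too large to paste into a chat window

def dump_repo(root, out_path):
    """Concatenate a repo's text files into one file, with a path header per file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune hidden directories such as .git so we never descend into them.
            dirnames[:] = [d for d in dirnames if not d.startswith(".")]
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                if os.path.abspath(path) == os.path.abspath(out_path):
                    continue  # don't include the dump file in itself
                if os.path.splitext(name)[1] not in INCLUDE_EXT:
                    continue
                if os.path.getsize(path) > MAX_BYTES:
                    continue
                rel = os.path.relpath(path, root)
                out.write(f"\n===== {rel} =====\n")
                with open(path, encoding="utf-8", errors="replace") as f:
                    out.write(f.read())
```

The path headers matter: they let the chatbot answer questions like "which file defines X?" instead of seeing one undifferentiated wall of code.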
Using SQL-Powered RAG to Better Analyze Database Data with GenAI
Combining retrieval-augmented generation (RAG) with SQL makes it easier to apply LLMs to wring more insights from your company data.
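A sketch of the SQL-powered RAG loop, under stated assumptions: `llm_generate_sql` is a hard-coded stand-in for a real model call (a real system would prompt the model with the schema), and the `sales` table and its rows are invented for illustration.

```python
import sqlite3

def llm_generate_sql(question):
    """Stand-in for an LLM call that translates a question into SQL.
    A real system would prompt a model with the schema; this stub is hard-coded."""
    return ("SELECT region, SUM(amount) AS total FROM sales "
            "GROUP BY region ORDER BY total DESC")

def answer_with_sql(question):
    """Retrieve rows via generated SQL, then use them as grounded context."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 30.0)])
    rows = conn.execute(llm_generate_sql(question)).fetchall()
    # The retrieved rows become the context for the model's final answer.
    context = "\n".join(f"{region}: {total}" for region, total in rows)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Unlike embedding-based RAG, the retrieval step here is exact: the database does the aggregation, and the LLM only has to phrase the result.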
Notes on how to use LLMs in your product.
Pretty much every company I know is looking for a way to benefit from Large Language Models. Even if their executives don’t see much applicability, their investors likely do, so they’re staring at the blank page nervously trying to come up with an idea. It’s straightforward to make an argument for LLMs improving internal efficiency somehow, but it’s much harder to describe a believable way that LLMs will make your product more useful to your customers.
Building a RAG for tabular data in Go with PostgreSQL & Gemini
In this article we explore how to combine a large language model (LLM) with a relational database to allow users to ask questions about their data in a natural way. It demonstrates a Retrieval-Augmented Generation (RAG) system built with Go that utilizes PostgreSQL and pgvector for data storage and retrieval. The provided code showcases the core functionalities. This is an overview of how the
Code in Context: How AI Can Help Improve Our Documentation
Revisiting a documentation sprint to explore how LLM-powered tools like Unblocked can help us understand and explain complex codebases.
Game theory research shows AI can evolve into more selfish or cooperative personalities
Researchers in Japan have developed a diverse range of personality traits in dialogue AI using a large language model (LLM). Using the prisoner's dilemma from game theory, a team led by Professor Takaya Arita and Associate Professor Reiji Suzuki at Nagoya University's Graduate School of Informatics created a framework for evolving AI agents that mimic human behavior by switching between selfish and cooperative actions, adapting their strategies through evolutionary processes. Their findings were published in Scientific Reports.
Local chat with Ollama and Cody
Learn how to use local LLM models to Chat with Cody without an Internet connection powered by Ollama.
How to Run a Local LLM via LocalAI, an Open Source Project
We look at an open source way to run large language models locally. LocalAI is an alternative to Ollama, which is backed by a private company.
How to Detect and Clean up Data Contamination in LLMs
Data contamination (or data leakage) occurs when the data used to train a model overlaps with the data used to evaluate it, making benchmark scores unreliable.
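One common detection heuristic is n-gram overlap between training text and an evaluation example. A minimal sketch, assuming whitespace tokenization and a threshold the practitioner would tune:

```python
def ngrams(text, n=3):
    """Set of word n-grams in a text (whitespace tokenization, lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(train_doc, eval_doc, n=3):
    """Fraction of the eval document's n-grams that also appear in training text.
    A high score suggests the eval example leaked into the training set."""
    ev = ngrams(eval_doc, n)
    if not ev:
        return 0.0
    return len(ev & ngrams(train_doc, n)) / len(ev)
```

Real pipelines scale this up with hashing or Bloom filters over billions of n-grams, but the flagging criterion is the same: evaluation examples whose score exceeds a chosen threshold are removed or reported.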