Type Constraints for LLM Output
LLMs
What Is a Large Language Model?
A primer on what large language models are, why they are used, the different types, and what the future may hold for LLM applications.
A Playground for LLM Apps: How AI Engineers Use Humanloop
In the LLM app stack, a playground is where developers can test out (and deploy) prompts. We discussed this new concept with Humanloop's CEO.
A Model API Gateway for 20+ LLMs
LLM App Ecosystem: What's New and How Cloud Native Is Adapting
Web3 failed to remake the developer ecosystem, but the emerging LLM stack is forcing the cloud native era to adapt. We examine its progress.
Meeting the Operational Challenges of Training LLMs
To train a large language model, you must overcome three big challenges: data, hardware, and legal issues. It helps to be a large organization, too.
My Everyday LLM Uses
Deterministic, Structured LLM Output
Top 5 Large Language Models and How to Use Them Effectively
LLMs hold the key to generative AI, but some are more suited than others to specific tasks. Here's a guide to the five most powerful and how to use them.
How Large Language Models Assisted a Website Makeover
A successful first use of GPT-4 Code Interpreter raises the hope that LLMs can help democratize scripting.
compillmer
AI architecture #3: Deploying LLMs to private servers
🤖 What are LLMs? How can you use them? And why should you care?
Robots.txt for LLMs
A new series on LLM-assisted coding
In the 20th episode of my Mastodon series I pivoted to a new topic: LLM-assisted coding. After three posts in the new series, it got picked up by The New Stack. Here’s the full list so far, I…
Literate Programming with LLMs
Managing LLM Context Is a Knapsack Problem
LLMs can be more useful and less prone to hallucination when they’re able to read relevant documents, webpages, and prior conversations before responding to a new user question.
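The knapsack framing boils down to fitting the most relevant material into a fixed token budget. The following is an illustrative sketch (not code from the linked post) of one common greedy heuristic in Python; the snippet names and the whitespace token counter are placeholders, and a real application would use the model's own tokenizer.

```python
# Greedily pack the most relevant snippets into a fixed token budget,
# a standard knapsack-style heuristic for assembling LLM context.
def pack_context(snippets, budget_tokens, count_tokens):
    """snippets: list of (relevance_score, text) pairs."""
    chosen, used = [], 0
    # Prefer snippets with the highest relevance per token, so short,
    # highly relevant passages win before long, marginal ones.
    ranked = sorted(
        snippets,
        key=lambda s: s[0] / max(count_tokens(s[1]), 1),
        reverse=True,
    )
    for score, text in ranked:
        cost = count_tokens(text)
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return "\n\n".join(chosen)

# Hypothetical usage with a naive whitespace token counter.
docs = [(0.9, "Refund policy: ..."), (0.4, "Company history: ..."), (0.8, "Shipping FAQ: ...")]
context = pack_context(docs, budget_tokens=200, count_tokens=lambda t: len(t.split()))
```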
Categorization and Classification with LLMs
LlamaIndex and the New World of LLM Orchestration Frameworks
We take a look at LlamaIndex, which allows you to combine your own custom data with an LLM — without using fine-tuning or overly long prompts.
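For a sense of what that looks like in practice, here is a minimal sketch assuming a 2023-era llama_index release (the VectorStoreIndex API) and an OpenAI key in the environment; the directory path and query are hypothetical.

```python
# Minimal retrieval-augmented query with LlamaIndex (~0.6-era API).
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./my_docs").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents)          # embeds and indexes the documents
query_engine = index.as_query_engine()
print(query_engine.query("What does our refund policy say?"))
```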
LMQL: Programming Large Language Models
LMQL is a query language for large language models (LLMs). It facilitates LLM interaction by combining the benefits of natural language prompting with the expressiveness of Python.
The LLMentalist Effect: how chat-based Large Language Models rep…
The new era of tech seems to be built on superstitious behaviour
Personal Lessons From LLMs
Overcoming LLM Hallucinations
How Containers, LLMs, and GPUs Fit with Data Apps
Containers, large language models (LLMs), and GPUs provide a foundation for developers to build services for what Nvidia CEO Jensen Huang describes as an "AI Factory."
What Large Language Models Can Do Well Now, and What They Can't
At QCon New York earlier this month, two OpenAI engineers demonstrated ChatGPT's newest feature, Functions, in one session. Another talk, however, pointed to the inherent limitations of LLMs.
Playing with Streamlit and LLMs
Recently I’ve been chatting with a number of companies that are building out internal LLM labs and tools for their teams, to make it easy to test LLMs against their internal use cases. I wanted to take a couple of hours to see how far I could get using Streamlit to build out a personal LLM lab for a few use cases of my own.
See code on lethain/llm-explorer.
Altogether, I was impressed with how usable Streamlit is, and I was able to build two useful tools in that timeframe.
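This is not the code from lethain/llm-explorer, but a minimal sketch of the kind of Streamlit prompt-testing page described above, assuming the pre-1.0 openai Python client and an OPENAI_API_KEY in the environment.

```python
import os
import openai
import streamlit as st

openai.api_key = os.environ["OPENAI_API_KEY"]

st.title("Personal LLM lab")
model = st.selectbox("Model", ["gpt-3.5-turbo", "gpt-4"])
prompt = st.text_area("Prompt", "Summarize the following text: ...")

if st.button("Run"):
    # Pre-1.0 openai client; newer clients use openai.OpenAI().chat.completions.create(...)
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    st.write(resp["choices"][0]["message"]["content"])
```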
How Hugging Face Positions Itself in the Open LLM Stack
What role does Hugging Face play in the generative AI developer ecosystem? We take a look at the company's savvy open source branding.
Why LLM-assisted table transformation is a big deal
Last week I had to convert a table in a Google Doc to a JSON structure that will render as an HTML page. This is the sort of mundane task that burns staggering amounts of information workers’…
Building GPT Applications on Open Source LangChain, Part 2
We’ll use the fast-rising LLM application framework for a practical example of how to use a GPT to help answer a question from a PDF document.
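The shape of that example, sketched below under the assumption of a 2023-era LangChain release (module paths have since moved, e.g. into langchain_community), with "report.pdf" and the question as placeholders:

```python
# Sketch of PDF question answering with classic LangChain imports.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

pages = PyPDFLoader("report.pdf").load()                      # load and page-split the PDF
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(pages)                      # chunk for embedding
store = FAISS.from_documents(chunks, OpenAIEmbeddings())      # in-memory vector index
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=store.as_retriever())
print(qa.run("What are the report's key findings?"))
```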
Prompts for Work & Play: Launching the Wolfram Prompt Repository
A curated collection of prompts for use with LLMs, accessible interactively in Chat Notebooks and programmatically in functions like LLMFunction. Initial categories in the Prompt Repository cover personas, functions, and modifiers.
LLMs For Software Portability