LLMs

312 bookmarks
First thoughts on the Humane Ai Pin
Humane finally actually announced their first product: Ai Pin. Here's their introduction video. I love technology, and I think I'm generally enthusiastic about new tech that pushes things forwards. I'm enthusiastic about the ways LLMs are creating so many new things that weren't even possible literally one year ago, and…
·birchtree.me·
Techniques for Using LLMs to Improve SQL Queries
First we fixed a bug in an SQL query. Then we rethought the design of the query. Here are further ways to use LLMs to adjust your SQL queries.
·thenewstack.io·
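As a rough sketch of the workflow described above (paste the schema and a problematic query into a prompt and ask the model to spot the bug and propose a rewrite), here is one way it could look. The schema, the query, the model name, and the use of the OpenAI Python client are illustrative assumptions, not details from the article.

```python
# Sketch: ask an LLM to review and rewrite a SQL query, given the schema.
# Assumes OPENAI_API_KEY is set in the environment; schema and query are invented.
from openai import OpenAI

SCHEMA = """
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total NUMERIC, created_at DATE);
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
"""

QUERY = """
SELECT c.region, SUM(o.total)
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= '2023-01-01'  -- this filter quietly turns the LEFT JOIN into an INNER JOIN
GROUP BY c.region;
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": f"Given this schema:\n{SCHEMA}\nFind the bug in this query and propose a rewrite:\n{QUERY}",
    }],
)
print(response.choices[0].message.content)
```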
Let’s Talk: Conversational Software Development
Here’s number 12 in the series on LLM-assisted coding over at The New Stack: Let’s Talk: Conversational Software Development. I keep coming back to the theme of the first article in this series…
·blog.jonudell.net·
Let's Talk: Conversational Software Development
Asking LLMs to write code is a life-changer, but so is talking to them about the process. Jon Udell continues to explore LLMs for coders.
·thenewstack.io·
How to Think Computationally about AI, the Universe and Everything
In his TED Talk, Stephen Wolfram covers everything from the emergence of space through the application of computational rules, to spacetime, gravity, and quantum mechanics, to AI and LLMs, along with computational irreducibility and the ruliad.
·writings.stephenwolfram.com·
AutoGen | AutoGen
Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework
·microsoft.github.io·
AutoGen: Enabling next-generation large language model applications
Microsoft researchers are introducing AutoGen, a framework for simplifying the orchestration, optimization, and automation of workflows for large language model (LLM) applications, potentially transforming and extending what LLMs can do.
·microsoft.com·
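For a sense of what the framework looks like in practice, here is a minimal sketch along the lines of AutoGen's two-agent pattern: an assistant agent that plans and writes code, and a user-proxy agent that executes it and feeds results back. The model name, API key placeholder, and task prompt are assumptions for illustration.

```python
# Sketch of AutoGen's assistant + user-proxy conversation loop.
import autogen

config_list = [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]  # placeholder credentials

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # run fully automated for this sketch
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The user proxy drives the chat; the assistant writes code, the proxy runs it
# and returns the output, and the two iterate until the task terminates.
user_proxy.initiate_chat(
    assistant,
    message="Download the latest CPI figures and plot them as a chart.",  # made-up task
)
```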
My "bicycle of the mind" moment with LLMs
Critics of LLM-based products like ChatGPT, Claude, Midjourney, and other such products like to brush them off as just this year’s version of NFTs. They’re crypto bullshit being peddled by the same jokers who are just out there to sow disinformation and make a quick buck. I won’…
·birchtree.me·
How to Use LLMs for Dynamic Documentation
Some explanations should be written by code authors. Others may best be generated on the fly by LLM-assisted code readers.
·thenewstack.io·
6 Reasons Private LLMs Are Key for Enterprises
There are many benefits of running a private LLM for your company or product, but it all boils down to being able to provide real-time data in context.
·thenewstack.io·
Unbundling AI — Benedict Evans
ChatGPT and LLMs can do anything, so what can you do with them? How do you know? Do we move to chat bots as a magical general-purpose interface, or do we unbundle them back into single-purpose software?
·ben-evans.com·
Will AI hamper our ability to crawl the web for useful data?
As websites start to block Common Crawl, and as the project leans in to its role in training LLMs, will it become harder to use data from the web for other purposes?
·blog.ldodds.com·
Prompt Engineering
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics. This post focuses only on prompt engineering for autoregressive language models, so nothing on Cloze tests, image generation, or multimodality models.
·lilianweng.github.io·
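As a tiny, self-contained illustration of the in-context idea (the desired behavior is conveyed entirely in the prompt, with no weight updates), here is a few-shot sentiment-classification prompt. The reviews and labels are made up; the string would be sent as the user message of whichever LLM client you use.

```python
# A few-shot prompt: the examples themselves teach the model the task and output format.
FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: positive

Review: "It stopped charging after two weeks."
Sentiment: negative

Review: "Setup took five minutes and everything just worked."
Sentiment:"""

print(FEW_SHOT_PROMPT)  # pass this string as the prompt/user message in an LLM API call
```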
Why gzip Just Beat a Large Language Model
A paper has shown that a compression algorithm – gzip – outperforms some large language models (LLMs) in some tasks. This has the NLP community …
·hendrik-erz.de·
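The paper discussed in that post pairs gzip with the normalized compression distance (NCD) and a k-nearest-neighbour vote for text classification. The sketch below shows that idea on a made-up toy dataset; it is not the paper's benchmark code.

```python
# Sketch: gzip compressed lengths + NCD + kNN for text classification.
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is compressed length.
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed UTF-8 bytes of s."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(text: str, train: list[tuple[str, str]], k: int = 3) -> str:
    """Majority label among the k training texts nearest to `text` under NCD."""
    neighbours = sorted(train, key=lambda pair: ncd(text, pair[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Invented toy data, just to show the shape of the method.
train = [
    ("the team won the championship game last night", "sports"),
    ("the striker scored twice in the second half", "sports"),
    ("the central bank raised interest rates again", "finance"),
    ("markets fell after the quarterly earnings report", "finance"),
]
print(classify("the goalkeeper saved a penalty in extra time", train, k=3))
```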