Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm
Wolfram expands Notebooks, integrating LLM functionality into the new chat cell and Chat Notebook. Stephen Wolfram explains how chat-enabled and chat-driven versions work.
Prompt Engineering: Get LLMs to Generate the Content You Want
This article introduces prompt engineering to developers using large language models (LLMs) such as GPT-4 and PaLM. I explain the types of LLMs, why prompt engineering matters, and various types of prompts, with examples.
LLMs break the internet with Simon Willison (Changelog Interviews #534)
This week we’re talking about LLMs with Simon Willison. We cannot avoid this topic. Last time it was Stable Diffusion breaking the internet. This time it’s LLMs breaking the internet. Large Language Models, ChatGPT, Bard, Claude, Bing, GitHub Copilot X, Cody…we cover it all.
LLM-Oriented Programming: Keeping Your Codebase Organized for Large Language Models
Introduction: I feel that the world of coding is changing. In a couple of years, I expect many new developer tools based on LLMs to emerge. They will like...
I haven’t spent much time playing around with the latest LLMs, and decided to spend some time doing so. I was particularly curious about the use case of using embeddings to supplement user prompts with additional, relevant data (e.g. injecting the current status of a user's recent tickets into the prompt when they ask about progress on those tickets). This use case is interesting because it’s very attainable for existing companies and products to take advantage of, and I imagine it’s roughly how e.
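The embeddings use case above can be sketched in a few lines: rank stored snippets by cosine similarity to the user's question, then prepend the best matches to the prompt. This is a minimal illustration, not the article's implementation; the ticket texts and embedding vectors below are made-up placeholders (in practice the vectors would come from an embedding model).

```python
import math

# Toy pre-computed embeddings for ticket summaries. These 3-dimensional
# vectors are illustrative placeholders -- real embeddings from a model
# would have hundreds or thousands of dimensions.
TICKETS = {
    "Ticket #101: login page returns 500": [0.9, 0.1, 0.2],
    "Ticket #102: export to CSV is slow": [0.1, 0.8, 0.3],
    "Ticket #103: dark mode toggle broken": [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def build_prompt(question, question_embedding, top_k=2):
    """Prepend the most relevant ticket summaries to the user's question."""
    ranked = sorted(
        TICKETS.items(),
        key=lambda item: cosine(question_embedding, item[1]),
        reverse=True,
    )
    context = "\n".join(text for text, _ in ranked[:top_k])
    return f"Relevant tickets:\n{context}\n\nUser question: {question}"

# The question embedding here is also a placeholder, chosen to sit
# closest to Ticket #101 so the retrieval step is visible.
prompt = build_prompt("Any progress on my login issue?", [0.85, 0.15, 0.25])
print(prompt)
```

The LLM then answers the question using the retrieved context, without the application having to fine-tune or retrain anything.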
While there’s been a truly remarkable advance in large language models as they continue to scale up, facilitated by being trained and run on larger and larger GPU clusters, there is still a need to be able to run smaller models on devices that have constraints on memory and processing power.
Being able to run models at the edge enables applications that are more sensitive to user privacy or latency considerations, ensuring that user data never leaves the device.