Will AI hamper our ability to crawl the web for useful data?
As websites start to block Common Crawl, and as the project leans into its role in training LLMs, will it become harder to use data from the web for other purposes?
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with LLMs to steer their behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effectiveness of prompt engineering methods can vary widely among models, thus requiring heavy experimentation and heuristics.
This post focuses only on prompt engineering for autoregressive language models, so it does not cover Cloze tests, image generation, or multimodal models.
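The core idea above can be sketched in a few lines: behavior is steered purely through in-context examples in the prompt text, with no weight updates. This is a minimal illustration; the sentiment task, example reviews, and helper name are hypothetical, not drawn from any particular model or the post itself.

```python
# Minimal sketch of in-context (few-shot) prompting: labeled examples
# are concatenated ahead of the new query so the model infers the task
# from context alone. No model call is made here; we only build the prompt.

def build_few_shot_prompt(examples, query):
    """Format (text, label) pairs followed by the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("A wonderful, heartfelt film.", "positive"),
    ("Dull plot and wooden acting.", "negative"),
]
prompt = build_few_shot_prompt(examples, "I could not stop smiling.")
print(prompt)
```

The prompt string would then be sent to the model; swapping the examples changes the induced behavior without touching the weights, which is why the technique demands so much experimentation.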
Top 5 Large Language Models and How to Use Them Effectively
LLMs hold the key to generative AI, but some are more suited than others to specific tasks. Here's a guide to the five most powerful models and how to use them effectively.
In the 20th episode of my Mastodon series, I pivoted to a new topic: LLM-assisted coding. After three posts in the new series, it got picked up by The New Stack. Here's the full list so far, I…
LLMs can be more useful and less prone to hallucination when they’re able to read relevant documents, webpages, and prior conversations before responding to a new user question.
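The retrieve-then-respond pattern described above can be sketched as follows. This is a toy stand-in, assuming a simple word-overlap scorer in place of a real retriever; the document set and prompt layout are illustrative only.

```python
# Hedged sketch of retrieval-augmented prompting: before answering, fetch
# the documents most relevant to the question and place them in the prompt
# so the model can ground its response in them instead of hallucinating.

def score(query, doc):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "The launch checklist requires a signed security review.",
    "Quarterly sales figures are published every January.",
    "Security reviews are filed through the compliance portal.",
]
query = "Where do I file a security review?"
context = "\n".join(retrieve(query, docs))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

Production systems replace the overlap scorer with embedding similarity over a vector index, but the shape is the same: retrieved text goes into the prompt ahead of the question.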
LMQL is a query language for large language models (LLMs). It facilitates LLM interaction by combining the benefits of natural language prompting with the expressiveness of Python.
Containers, large language models (LLMs), and GPUs provide a foundation for developers to build services for what Nvidia CEO Jensen Huang describes as an "AI Factory."
What Large Language Models Can Do Well Now, and What They Can't
At QCon New York earlier this month, two OpenAI engineers demonstrated ChatGPT's newest feature, Functions, in one session. Another talk, however, pointed to the inherent limitations of LLMs.