LangChain Academy: Introduction to LangGraph (Motivation)

GenAI
New LLM Pre-training and Post-training Paradigms: A Look at How Modern LLMs Are Trained
Evaluating the Effectiveness of LLM-Evaluators (aka LLM-as-Judge)
Use cases, techniques, alignment, finetuning, and critiques against LLM-evaluators.
Text classification
Batch inference using Foundation Model APIs - Azure Databricks
Learn how to do batch inference using a provisioned throughput endpoint.
Tutorial: Deploy and query a custom model
Learn the basics of model serving on Databricks, with an overview and step-by-step instructions.
Fine Tune your LLMs with Mosaic AI Model Training
dbdemos - Databricks Lakehouse demos: Fine Tune your LLMs with Mosaic AI Model Training
LLM Chatbot With Retrieval Augmented Generation (RAG) and DBRX
dbdemos - Databricks Lakehouse demos: LLM Chatbot With Retrieval Augmented Generation (RAG) and DBRX
Tutorials | Databricks
Discover the power of Lakehouse. Install demos in your workspace to quickly access best practices for data ingestion, governance, security, data science and data warehousing.
Databricks Generative AI Cookbook
ChunkViz
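ChunkViz visualizes how splitter settings change chunk boundaries. The core idea it illustrates, fixed-size chunking with overlap, can be sketched in a few lines of Python (the function name and parameters are illustrative, not taken from any particular library):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlapping chunks help a retriever avoid cutting a relevant
    sentence in half right at a chunk boundary.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each new chunk starts `step` chars later
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 500, chunk_size=200, overlap=40)
```

Tools like ChunkViz exist because these two knobs (size and overlap) have an outsized effect on retrieval quality, and the right values depend on the documents.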
An all-in-one CLI app to run LLMs locally
The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge
databricks/genai-cookbook
My Journey towards “Databricks Certified Generative AI Engineer Associate”
My experiences from preparing for and successfully passing the (beta) exam
The GraphRAG Manifesto: Adding Knowledge to GenAI - Graph Database & Analytics
Discover why GraphRAG will subsume vector-only RAG and emerge as the default RAG architecture for most use cases.
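The manifesto's core claim is that retrieval improves when the system can follow relationships between entities rather than only match isolated text chunks. A toy sketch of that idea (not the GraphRAG implementation; the triples and function are made up for illustration):

```python
# Toy graph-augmented retrieval: find entities mentioned in the query,
# then walk the knowledge graph one hop out to collect related facts.

# (subject, relation, object) triples, extracted ahead of time -- hypothetical data
TRIPLES = [
    ("Databricks", "develops", "DBRX"),
    ("DBRX", "is_a", "open LLM"),
    ("DBRX", "used_in", "RAG chatbots"),
    ("Neo4j", "is_a", "graph database"),
]

def graph_context(query: str, hops: int = 1) -> list[tuple[str, str, str]]:
    """Return triples whose subject appears in the query, plus `hops` hops out."""
    frontier = {e for (e, _, _) in TRIPLES if e.lower() in query.lower()}
    facts: list[tuple[str, str, str]] = []
    for _ in range(hops + 1):
        new = [t for t in TRIPLES if t[0] in frontier and t not in facts]
        facts.extend(new)
        frontier |= {obj for (_, _, obj) in new}  # expand to connected entities
    return facts

facts = graph_context("What is DBRX?")
```

A vector-only retriever would score chunks independently; the graph walk is what surfaces facts about entities the query never names directly.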
Parlance - Educational Resources
Educational resources on LLMs
Generative AI with Azure Cosmos DB
Leverage Azure Cosmos DB for generative AI workloads for automatic scalability, low latency, and global distribution to handle massive data volumes and real-...
GraphRAG: New tool for complex data discovery now on GitHub
GraphRAG, a graph-based approach to retrieval-augmented generation (RAG) that significantly improves question answering over private or previously unseen datasets, is now available on GitHub.
Alex Strick van Linschoten - How to think about creating a dataset for LLM finetuning evaluation
I summarise the kinds of evaluations that are needed for a structured data generation task.
Aligning LLM-as-a-Judge with Human Preferences
Deep dive into self-improving evaluators in LangSmith, motivated by the rise of LLM-as-a-Judge evaluators plus research on few-shot learning and aligning human preferences.
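The LLM-as-judge pattern these posts discuss boils down to prompting a strong model to compare candidate answers and parsing its verdict. A minimal sketch, where `call_llm` is a hypothetical stand-in for whatever model client you use:

```python
# Pairwise LLM-as-judge sketch. The prompt wording and function names are
# illustrative; `call_llm` is assumed to take a prompt and return a string.

JUDGE_PROMPT = """You are an impartial judge. Compare the two answers to the
question and reply with exactly one of: A, B, or TIE.

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Verdict:"""

def parse_verdict(raw: str) -> str:
    """Map the judge's raw reply onto a canonical label."""
    token = raw.strip().upper()
    return token if token in {"A", "B", "TIE"} else "INVALID"

def judge(question: str, answer_a: str, answer_b: str, call_llm) -> str:
    prompt = JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b
    )
    return parse_verdict(call_llm(prompt))

# With a stubbed "model" that always prefers answer A:
verdict = judge("2+2?", "4", "5", call_llm=lambda prompt: " A ")
```

In practice, judges are position-biased, so evaluation harnesses commonly run each comparison twice with A and B swapped and keep only consistent verdicts, which is part of what the alignment work above addresses.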
Roadmap: AI Infrastructure
Read our roadmap on the next wave of enterprise data software in the age of AI.
Applied LLMs - What We’ve Learned From A Year of Building with LLMs
A practical guide to building successful LLM products, covering the tactical, operational, and strategic.
Redefining RAG: Azure Document Intelligence + Azure CosmosDB Mongo vCore
About this blog: This time, I'll be developing an application for use within our FlyersSoft company to improve workforce efficiency. The idea is to introduce CosmicTalent, an application designed to help HR and managers navigate employee information effectively. By leveraging CosmicTalent, users can efficiently filter and identify eligible employees based on specific task requirements. 🚀 A few key takeaways: advantages of Azure Cosmos DB Mongo vCore's native vector search capabilities over Azure Vector Search.
Using DuckDB for Embeddings and Vector Search
Machine learning, Scala, and interactive computing
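The DuckDB post ranks stored embeddings by cosine similarity in SQL; the same ranking logic, written out in plain Python with tiny made-up vectors standing in for real embeddings, looks like this:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the product of the norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embedding table": document id -> vector (real embeddings are ~1000-dim)
DOCS = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 0.0, 1.0],
}

def top_k(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

result = top_k([1.0, 0.0, 0.0])
```

This brute-force scan is exactly what a SQL `ORDER BY similarity DESC LIMIT k` does; dedicated vector indexes only change how the candidates are found, not the similarity math.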
AzureDataRetrievalAugmentedGenerationSamples/Python/CosmosDB-NoSQL_VectorSearch/CosmosDB-NoSQL-Vector_AzureOpenAI_Tutorial.ipynb at main · microsoft/AzureDataRetrievalAugmentedGenerationSamples
Samples to demonstrate pathways for Retrieval Augmented Generation (RAG) for Azure Data - microsoft/AzureDataRetrievalAugmentedGenerationSamples
Developing an LLM: Building, Training, Finetuning
REFERENCES:
1. Build an LLM from Scratch book: https://mng.bz/M96o
2. Build an LLM from Scratch repo: https://github.com/rasbt/LLMs-from-scratch
3. Slides: https://sebastianraschka.com/pdf/slides/2024-build-llms.pdf
4. LitGPT: https://github.com/Lightning-AI/litgpt
5. TinyLlama pretraining: https://lightning.ai/lightning-ai/studios/pretrain-llms-tinyllama-1-1b
DESCRIPTION:
This video provides an overview of the three stages of developing an LLM: building, training, and finetuning. The focus is on explaining how LLMs work by walking through what happens at each stage.
OUTLINE:
00:00 – Using LLMs
02:50 – The stages of developing an LLM
05:26 – The dataset
10:15 – Generating multi-word outputs
12:30 – Tokenization
15:35 – Pretraining datasets
21:53 – LLM architecture
27:20 – Pretraining
35:21 – Classification finetuning
39:48 – Instruction finetuning
43:06 – Preference finetuning
46:04 – Evaluating LLMs
53:59 – Pretraining & finetuning rules of thumb
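The "Generating multi-word outputs" section of the outline describes autoregressive decoding: the model predicts one token, appends it to the context, and repeats. A toy sketch of that loop, with a hand-written bigram table standing in for the trained network (all names and data here are illustrative):

```python
# Toy autoregressive decoding loop. A real LLM replaces `next_token` with
# a neural network over a large vocabulary; the loop itself is the same.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def next_token(context: list[str]):
    """Greedy 'model': look up the most likely next word for the last token."""
    return BIGRAMS.get(context[-1])

def generate(prompt: list[str], max_new_tokens: int = 3) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)
        if tok is None:  # no known continuation; stop early
            break
        tokens.append(tok)  # feed the prediction back in as context
    return tokens

out = generate(["the"], max_new_tokens=3)
```

Everything covered later in the video (pretraining, finetuning, preference tuning) changes how good `next_token` is; the generation loop stays this simple.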
The architecture of today's LLM applications
Here’s everything you need to know to build your first LLM app and problem spaces you can start exploring today.
An interview with the most prolific jailbreaker of ChatGPT and other leading LLMs
Pliny the Prompter has been finding ways to jailbreak leading LLMs, i.e., remove their prohibitions and restrictions, since last year.
Announcing LangChain RAG Template Powered by Redis - Redis
Explore the new LangChain RAG Template with Redis integration. Streamline AI development with efficient, adaptive APIs.
Build your own AI copilot with vCore-based Azure Cosmos DB for MongoDB and Azure OpenAI - Training
Build your own AI copilot with vCore-based Azure Cosmos DB for MongoDB and Azure OpenAI.