Learn AI

559 bookmarks
Extracting structured data using Box AI
In this article, we’ll demonstrate how to extract structured data from a document using the Box AI API.
·medium.com·
[2407.21783] The Llama 3 Herd of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively...
·arxiv.org·
How To Get into AI (Realistically) w/ ML Engineer
Advance your career in Artificial Intelligence with Simplilearn’s Artificial Intelligence Engineer Program: https://bit.ly/GodaGoSimplilearn This is Part 2 o...
·youtube.com·
LangGraph Deep Dive: Build Better Agents
LangGraph is an agent framework from LangChain that allows us to develop agents via graphs. By building agents using graphs we have much more control and fle...
·youtube.com·
Andrej Karpathy's Keynote & Winner Pitches at UC Berkeley AI Hackathon 2024 Awards Ceremony
At the 2024 UC Berkeley AI Hackathon's Awards Ceremony, the atmosphere was electric as Andrej Karpathy, founding member of OpenAI, delivered an inspiring keynote. Out of 371 projects, the top 8 teams took the stage to pitch their groundbreaking AI solutions. After intense deliberation by our esteemed judges, the big reveal came: up to $100K in prizes were awarded, celebrating innovation and creativity in AI for Good. Missed the live ceremony? Relive the excitement and watch the future of AI unfold! Chapters: 0:00 Welcome 0:19 Caroline Winnett 4:05 Andrej Karpathy Keynote Speech 22:20 Pitch Overview 23:29 Judge Introductions 24:43 Revision 31:23 Agent.OS 38:54 Skyline 44:32 Spark 51:35 HearMeOut 57:05 Dispatch.Ai 1:02:04 ASL Bridgify 1:08:57 Greenwise 1:13:35 Special Prize 1 1:17:24 Special Prize 2 1:19:30 Special Prize 3 1:20:45 Special Prize 4 1:23:15 Special Prize 5 1:24:27 Special Prize 6 1:26:00 Special Prize 7 1:27:24 Special Prize 8 1:30:10 Grand Prize Winner
·youtu.be·
PDFTriage: Question Answering over Long, Structured Documents
Large Language Models (LLMs) have issues with document question answering (QA) in situations where the document is unable to fit in the small context length of an LLM. To overcome this issue, most...
·arxiv.org·
Imitation Intelligence, my keynote for PyCon US 2024
I gave an invited keynote at PyCon US 2024 in Pittsburgh this year. My goal was to say some interesting things about AI—specifically about Large Language Models—both to help catch …
·simonwillison.net·
The moment we stopped understanding AI [AlexNet]
Thanks to KiwiCo for sponsoring today's video! Go to https://www.kiwico.com/welchlabs and use code WELCHLABS for 50% off your first month of monthly lines an...
·youtube.com·
What is a "cognitive architecture"?
The second installment in our "In the Loop" series, focusing on cognitive architecture
·blog.langchain.dev·
What is an agent?
Introducing a new series of musings on AI agents.
·blog.langchain.dev·
turbopuffer: fast search on object storage
turbopuffer is a vector database built on top of object storage, which makes it 10x-100x cheaper, with usage-based pricing and massive scalability
·turbopuffer.com·
Extrinsic Hallucinations in LLMs
Hallucination in large language models usually refers to the model generating unfaithful, fabricated, inconsistent, or nonsensical content. As a term, hallucination has been somewhat generalized to cases when the model makes mistakes. Here, I would like to narrow down the problem of hallucination to be when the model output is fabricated and not grounded by either the provided context or world knowledge. There are two types of hallucination: In-context hallucination: The model output should be consistent with the source content in context.
·lilianweng.github.io·
How to self-host and hyperscale AI with Nvidia NIM
Try out Nvidia NIM in the free playground https://nvda.ws/4avifod Learn how to build a futuristic workforce of AI agents, then self-host and scale them for an...
·youtube.com·
zilliztech/GPTCache: GPTCache is a semantic cache library for LLM models and multi-models, which seamlessly integrates with 🦜️🔗LangChain and 🦙llama_index, making it accessible to 🌎 developers working in any language.
GPTCache is a semantic cache library for LLM models and multi-models, which seamlessly integrates with 🦜️🔗LangChain and 🦙llama_index, making it accessible to 🌎 developers working in any language. -...
·github.com·
Consistency Large Language Models: A Family of Efficient Parallel Decoders
TL;DR: LLMs have been traditionally regarded as sequential decoders, decoding one token after another. In this blog, we show pretrained LLMs can be easily taught to operate as efficient parallel decoders. We introduce Consistency Large Language Models (CLLMs), a new family of parallel decoders capable of reducing inference latency by efficiently decoding an $n$-token sequence per inference step. Our research shows this process – mimicking the human cognitive process of forming complete sentences in mind before articulating them word by word – can be effectively learned by simply finetuning pretrained LLMs.
·hao-ai-lab.github.io·
The AI Backend
The AI Backend * work in progress, please provide feedback so we can improve Just as in 1995 it was obvious that every business needed an internet presence to stay competitive, in 2024 it's obvious that every software product needs intelligence to stay competitive. Software products generally have 3 c...
·docs.google.com·