NER Powered Semantic Search in Python
Semantic search is a compelling technology allowing us to search using abstract concepts and meaning rather than relying on specific words. However, sometimes a simple keyword search can be just as valuable, especially if we know the exact wording of what we're searching for. Pinecone allows you to pair semantic search with a basic keyword filter. If you know that the document you're looking for contains a specific word or set of words, you simply tell Pinecone to restrict the search to only include documents with those keywords. We even support functionality for keyword search using sets of words with AND, OR, NOT logic. In this video, we will explore these features through a start-to-finish example of basic keyword search in Pinecone.

🌲 Pinecone Docs Page: https://www.pinecone.io/docs/examples/metadata-filtered-search/
🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
👾 Discord: https://discord.gg/c5QtDB9RAP

00:00 NER Powered Semantic Search
01:19 Dependencies and Hugging Face Datasets Prep
04:18 Creating NER Entities with Transformers
07:00 Creating Embeddings with Sentence Transformers
07:48 Using Pinecone Vector Database
11:33 Indexing the Full Medium Articles Dataset
15:09 Making Queries to Pinecone
17:01 Final Thoughts
·youtube.com·
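As a rough sketch of the metadata-filtered query described above, using the classic (pre-3.0) pinecone-client and Pinecone's documented filter operators ($and, $in, etc.); the index name, the "entities" metadata field, and the embedding model are illustrative assumptions, not details from the video:

```python
# A minimal sketch of Pinecone semantic search restricted by a keyword filter.
# Assumes an existing index whose vectors carry an "entities" metadata field
# (e.g. NER entities extracted at indexing time) - hypothetical names throughout.
import pinecone
from sentence_transformers import SentenceTransformer

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
index = pinecone.Index("medium-articles")  # hypothetical index name

model = SentenceTransformer("all-MiniLM-L6-v2")
xq = model.encode("how do vector databases work").tolist()

# Restrict results to documents tagged with BOTH keywords (AND logic);
# $in / $and / $or / $ne are Pinecone's metadata filter operators.
results = index.query(
    vector=xq,
    top_k=5,
    include_metadata=True,
    filter={"$and": [
        {"entities": {"$in": ["Pinecone"]}},
        {"entities": {"$in": ["Python"]}},
    ]},
)
for match in results["matches"]:
    print(match["score"], match["metadata"].get("entities"))
```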
CLIP: Connecting text and images
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
·openai.com·
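The zero-shot recipe described here is easy to reproduce with the publicly released CLIP weights; a minimal sketch via Hugging Face transformers (the image URL and labels are just examples):

```python
# Zero-shot image classification with CLIP: score an image against
# arbitrary class names supplied as text prompts.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity -> probabilities
print(dict(zip(labels, probs[0].tolist())))
```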
OpenAI CLIP Explained | Multi-modal ML
OpenAI's CLIP explained simply and intuitively with visuals and code. Language models (LMs) cannot rely on language alone. That is the idea behind the "Experience Grounds Language" paper, which proposes a framework to measure LMs' current and future progress. A key idea is that, beyond a certain threshold, LMs need other forms of data, such as visual input.

The next step beyond well-known language models like BERT, GPT-3, and T5 is "World Scope 3". In World Scope 3, we move from large text-only datasets to large multi-modal datasets, that is, datasets containing information from multiple forms of media, like *both* images and text.

The world, both digital and real, is multi-modal. We perceive the world as an orchestra of language, imagery, video, smell, touch, and more. This chaotic ensemble produces an inner state, our "model" of the outside world. AI must move in the same direction. Even specialist models that focus on language or vision must, at some point, have input from the other modalities. How can a model fully understand the concept of the word "person" without *seeing* a person?

OpenAI's Contrastive Language-Image Pretraining (CLIP) is a world scope three model. It can comprehend concepts in both text and image and even connect concepts between the two modalities. In this video we will learn about multi-modality, how CLIP works, and how to use CLIP for different use cases like encoding, classification, and object detection.

🌲 Pinecone article: https://pinecone.io/learn/clip/
🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
👾 Discord: https://discord.gg/c5QtDB9RAP
·youtube.com·
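The "encoding" use case mentioned above amounts to projecting text and images into one shared vector space; a minimal sketch reusing the same public checkpoint (the example text and image are assumptions):

```python
# Encode text and images into CLIP's shared embedding space, then compare
# them with cosine similarity - the basis of text-to-image search.
import torch
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

text_inputs = processor(text=["two cats sleeping"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)     # shape (1, 512)
    image_emb = model.get_image_features(**image_inputs)  # shape (1, 512)

# Normalize, then the dot product is cosine similarity
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
print((text_emb @ image_emb.T).item())
```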
The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts
Hi, I am Fatih Kadir Akın, the curator of the popular "Awesome ChatGPT Prompts" repository on GitHub, and prompts.chat.

In this comprehensive guide, you'll discover expert strategies for crafting compelling ChatGPT prompts that drive engaging and informative conversations. From understanding the principles of effective prompting to mastering the art of constructing clear and concise prompts, this e-book will provide you with the tools you need to take your ChatGPT conversations to the next level.

Some of the highlighted contents:
Overview of ChatGPT and its capabilities
How to write clear and concise prompts
Tips for avoiding jargon and ambiguity
Steps for crafting effective ChatGPT prompts
Common mistakes to avoid when crafting ChatGPT prompts
Common issues that may arise when using ChatGPT
Case studies and best practices

The book will be updated over time.

View Awesome ChatGPT Prompts Repository
How to Make Money with ChatGPT: Strategies, Tips, and Tactics
·fka.gumroad.com·
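For a flavour of the clear, role-first prompt structure the repository popularized, here is an illustrative "Act as ..." prompt; the wording below is my own example, not taken from the book:

```python
# A clear, concise prompt pins down role, task, constraints, and output
# format up front - the "Act as ..." pattern from Awesome ChatGPT Prompts.
# The prompt text is an illustrative example, not the book's.
prompt = (
    "I want you to act as a technical editor. "
    "I will give you a paragraph, and you will rewrite it to be clear and "
    "concise, fixing grammar without changing its meaning. "
    "Reply only with the rewritten paragraph, with no explanations. "
    'My first paragraph is: "Semantic search lets us, like, search by meaning."'
)
print(prompt)
```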
PaLM-E: An Embodied Multimodal Language Model
Large language models excel at a wide range of complex tasks. However, enabling general inference in the real world, e.g., for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Inputs to our embodied language model are multi-modal sentences that interleave visual, continuous state estimation, and textual input encodings. We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks including sequential robotic manipulation planning, visual question answering, and captioning. Our evaluations show that PaLM-E, a single large embodied multimodal model, can address a variety of embodied reasoning tasks, from a variety of observation modalities, on multiple embodiments, and further, exhibits positive transfer: the model benefits from diverse joint training across internet-scale language, vision, and visual-language domains. Our largest model, PaLM-E-562B with 562B parameters, in addition to being trained on robotics tasks, is a visual-language generalist with state-of-the-art performance on OK-VQA, and retains generalist language capabilities with increasing scale.
·arxiv.org·
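PaLM-E itself is not publicly released, but the abstract's "multi-modal sentences" idea can be sketched conceptually: project continuous sensor/vision features into the LM's token-embedding space and interleave them with text embeddings. Everything below (dimensions, token ids, names) is a hypothetical placeholder, not PaLM-E's actual code:

```python
# Conceptual sketch of a "multi-modal sentence": projected image features are
# interleaved with text token embeddings before entering a decoder-only LM.
# All sizes and ids are hypothetical placeholders.
import torch
import torch.nn as nn

d_model = 4096                             # LM embedding width (placeholder)
vision_dim = 1024                          # vision encoder output width (placeholder)
project = nn.Linear(vision_dim, d_model)   # maps percepts into token space

token_emb = nn.Embedding(32000, d_model)   # stand-in for the LM's embedding table

# "Given <img> pick up the red block" -> text tokens around an image slot
prefix_ids = torch.tensor([[101, 2054]])                 # fake ids for "Given"
suffix_ids = torch.tensor([[3000, 3001, 3002, 3003]])    # fake ids for the rest

image_features = torch.randn(1, 16, vision_dim)  # e.g. 16 patch embeddings from a ViT

sequence = torch.cat([
    token_emb(prefix_ids),     # text embeddings
    project(image_features),   # visual "tokens" projected into the same space
    token_emb(suffix_ids),     # more text embeddings
], dim=1)
print(sequence.shape)  # (1, 2 + 16 + 4, 4096) -> fed to the LM, trained end-to-end
```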
Convolutional Neural Nets Explained and Implemented in Python (PyTorch)
Convolutional Neural Networks (CNNs) have been the undisputed champions of Computer Vision (CV) for almost a decade. Their widespread adoption kickstarted the world of deep learning; without them, the field of AI would look very different today. Rather than relying on manual feature extraction, deep learning CNNs can perform image classification, object detection, and much more automatically for a vast number of datasets and use cases. All they need is training data.

Deep CNNs are the de-facto standard in computer vision. New models using vision transformers (ViT) and multi-modality may change this in the future, but for now, CNNs still dominate state-of-the-art benchmarks in vision. In this hands-on video, we will learn why this is, how to implement deep learning CNNs for computer vision tasks like image classification using Python and PyTorch, and everything you could need to know about well-known CNNs like LeNet, AlexNet, VGGNet, and ResNet.

🌲 Pinecone article: https://pinecone.io/learn/cnn
🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/nlp-transformers
🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
👾 Discord: https://discord.gg/c5QtDB9RAP

00:00 Intro
01:59 What Makes a Convolutional Neural Network
03:24 Image preprocessing for CNNs
09:15 Common components of a CNN
11:01 Components: pooling layers
12:31 Building the CNN with PyTorch
14:14 Notable CNNs
17:52 Implementation of CNNs
18:52 Image Preprocessing for CNNs
22:46 How to normalize images for CNN input
23:53 Image preprocessing pipeline with PyTorch
24:59 PyTorch data loading pipeline for CNNs
25:32 Building the CNN with PyTorch
28:08 CNN training parameters
28:49 CNN training loop
30:27 Using PyTorch CNN for inference
·youtube.com·
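A minimal LeNet-style CNN of the kind built in the video; the exact layer sizes here are illustrative rather than the video's:

```python
# A small CNN for 32x32 RGB images: conv + ReLU + pooling feature extractor,
# followed by a linear classifier over the flattened feature maps.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x32x32 -> 16x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x16x16
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))  # a dummy batch of 4 images
print(logits.shape)                        # torch.Size([4, 10])
```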
Glitch Tokens - Computerphile
Language models' Achilles heel: Rob Miles talks about "glitch" tokens, those mysterious words which result in gibberish when entered into some large language models.

More from Rob Miles: http://bit.ly/Rob_Miles_YouTube
The AI safety/alignment post: https://www.alignmentforum.org/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

https://www.facebook.com/computerphile
https://twitter.com/computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: https://bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com
·youtube.com·
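You can check for yourself that a glitch token is a single vocabulary entry. A small sketch with tiktoken, assuming the GPT-2/GPT-3 encoding discussed in the linked post (the exact token id is indicative):

```python
# Show that " SolidGoldMagikarp" is a single token in the GPT-2/GPT-3 BPE
# vocabulary - it reached the vocab via web data (e.g. Reddit's r/counting
# usernames) but was rarely seen in training, hence the glitchy behaviour.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # GPT-2 / original GPT-3 encoding
ids = enc.encode(" SolidGoldMagikarp")
print(ids)                            # a single id, e.g. [43453]
print(enc.decode(ids))                # " SolidGoldMagikarp"
print(enc.encode(" ordinary words"))  # normal text splits into several tokens
```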
Medical Search Engine with SPLADE + Sentence Transformers in Python
In this video we'll build a search engine for the medical field using hybrid search with NLP information retrieval models. We use hybrid search with sentence transformers and SPLADE for medical question-answering. By using hybrid search we're able to search using both dense and sparse vectors. This allows us to cover semantics with the dense vectors, and features like exact matching and keyword search with the sparse vectors.

For the sparse vectors we use SPLADE. SPLADE is the first sparse embedding method to outperform BM25 across a variety of tasks. It's an incredibly powerful technique that enables the typical sparse search advantages while also enabling learned term expansion to help minimize the vocabulary mismatch problem.

The demo we work through here uses SPLADE and a sentence transformer model trained on MS MARCO. These are all implemented via Hugging Face transformers. Finally, for the search component we use the Pinecone vector database, the only vector DB at the time of writing that natively supports SPLADE vectors.

🔗 Code notebook: https://github.com/pinecone-io/examples/blob/master/search/hybrid-search/medical-qa/pubmed-splade.ipynb
🎙️ Support me on Patreon: https://patreon.com/JamesBriggs
🎨 AI Art: https://www.etsy.com/uk/shop/IntelligentArtEU
🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
👾 Discord: https://discord.gg/c5QtDB9RAP

00:00 Hybrid search for medical field
00:18 Hybrid search process
02:42 Prerequisites and Installs
03:26 Pubmed QA data preprocessing step
08:25 Creating dense vectors with sentence-transformers
10:30 Creating sparse vector embeddings with SPLADE
18:12 Preparing sparse-dense format for Pinecone
21:02 Creating the Pinecone sparse-dense index
24:25 Making hybrid search queries
29:59 Final thoughts on sparse-dense with SPLADE

#artificialintelligence #nlp #naturallanguageprocessing #machinelearning #searchengine
·youtube.com·
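For a sense of how a SPLADE sparse vector is produced, here is a sketch using a public SPLADE checkpoint; the checkpoint name and query are assumptions, not necessarily what the notebook uses:

```python
# Build a SPLADE sparse vector: run an MLM head over the input, then take
# log(1 + ReLU(logits)) and max-pool over the sequence. Non-zero entries are
# weighted vocabulary terms, including expansion terms not in the query text.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-ensembledistil"  # a public SPLADE checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

tokens = tokenizer("treatment for high blood pressure", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits  # (1, seq_len, vocab_size)

# SPLADE pooling: one weight per vocabulary term, masked to real tokens
sparse_vec = torch.max(
    torch.log1p(torch.relu(logits)) * tokens.attention_mask.unsqueeze(-1),
    dim=1,
).values.squeeze()                   # (vocab_size,)

nonzero = sparse_vec.nonzero().squeeze()
terms = {tokenizer.decode([i]): round(sparse_vec[i].item(), 2) for i in nonzero.tolist()}
print(sorted(terms.items(), key=lambda kv: -kv[1])[:10])  # top weighted terms
```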
9. OpenAI ChatGPT API (NEW GPT 3.5) and Whisper API - Python and Gradio Tutorial
In this video I use the new ChatGPT and Whisper APIs to have a conversation. I use my voice as input and ChatGPT speaks back to me using my computer's audio.

0:00 - Demo (What We're Building)
1:10 - High Level Walkthrough / Discussion
5:02 - Gradio User Interface (Microphone Recording)
9:07 - OpenAI Whisper API (Speech to Text)
11:34 - ChatGPT API (Chat Completion)
21:00 - Making OSX Talk
22:06 - Jay-Z Edition (Rapping Therapist)

If you've been enjoying the AI content, I am starting a spinoff channel this year focused on AI in music, gaming, and design at https://youtube.com/@parttimeai

Source Code: https://github.com/hackingthemarkets/chatgpt-api-whisper-api-voice-assistant
Twitter: https://twitter.com/parttimelarry
Buy Me a Drank: https://www.buymeacoffee.com/parttimelarry
·youtube.com·
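The core transcribe-chat-speak loop, sketched with the pre-1.0 openai Python library (the era of this video); the file name, persona, and prompt are illustrative, and the full Gradio wiring lives in the linked source code:

```python
# Sketch of the video's pipeline with openai==0.27-era APIs: transcribe
# speech with Whisper, send the text to gpt-3.5-turbo, speak the reply.
import os
import subprocess
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# 1) Speech to text with the Whisper API
with open("recording.wav", "rb") as audio_file:  # e.g. captured via Gradio's mic input
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
user_text = transcript["text"]

# 2) Chat completion with the ChatGPT API
messages = [
    {"role": "system", "content": "You are a helpful therapist."},  # illustrative persona
    {"role": "user", "content": user_text},
]
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
reply = response["choices"][0]["message"]["content"]

# 3) Make macOS speak the reply (the "Making OSX Talk" step)
subprocess.call(["say", reply])
```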