Found 1785 bookmarks
jpWang/LiLT: Official PyTorch implementation of LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding (ACL 2022)
·github.com·
Neural Networks from Scratch - P.1 Intro and Neuron Code
Building neural networks from scratch in Python: introduction.
Neural Networks from Scratch book: https://nnfs.io
Playlist for this series: https://www.youtube....
·youtube.com·
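The first video in this series hand-codes a single neuron before introducing any libraries. A minimal sketch of that starting point, written independently of the video (the specific numbers are illustrative, not taken from it):

```python
# One neuron: a weighted sum of its inputs plus a bias.
inputs = [1.0, 2.0, 3.0]     # outputs from three neurons in the previous layer
weights = [0.2, 0.8, -0.5]   # one weight per input connection
bias = 2.0

# output = x1*w1 + x2*w2 + x3*w3 + b
output = sum(x * w for x, w in zip(inputs, weights)) + bias
print(output)  # 2.3
```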
AI Will Create New Genres
An optimistic look at the creative potential of generative AI.
·dkb.blog·
ChatGPT - Visual Studio Marketplace
Extension for Visual Studio Code - Use browser or official API integration for OpenAI ChatGPT, GPT-3.5, GPT-3, and Codex. Create new files & projects with one click. Copilot to learn code, add tests via GPT models. Google LaMDA Bard integration is work-in-progress.
·marketplace.visualstudio.com·
Installs OpenAI Gym on MacOS
·gist.github.com·
Simple Setup of OpenAI Gym on MacOS
OpenAI Gym is a toolkit for testing reinforcement learning algorithms. Gym is fun and powerful, but installation can be a challenge. This…
·andrewschrbr.medium.com·
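Both of these Gym bookmarks are about getting a working install on macOS. Once installation succeeds, a smoke test along these lines should run (a sketch against the classic pre-0.26 Gym API; newer releases return extra values from reset() and step()):

```python
# Smoke test for an OpenAI Gym install (classic API, gym < 0.26).
# In newer releases, reset() returns (obs, info) and step() returns a
# 5-tuple (obs, reward, terminated, truncated, info).
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()         # random policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
print("Gym install looks OK")
```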
A Beginner's Guide to the CLIP Model - KDnuggets
CLIP is a bridge between computer vision and natural language processing. I'm here to break CLIP down for you in an accessible and fun read! In this post, I'll cover what CLIP is, how CLIP works, and why CLIP is cool.
·kdnuggets.com·
NLP+CSS 201 Tutorials
Tutorials for advanced natural language processing methods designed for computational social science research.
·nlp-css-201-tutorials.github.io·
Tutorial 2: Extracting Information from Documents
Tutorial description: This workshop provides an introduction to information extraction for social science: techniques for identifying specific words, phrases...
·youtube.com·
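For a concrete taste of the kind of extraction the workshop covers, a few lines of spaCy pull named entities out of raw text (spaCy and its small English model are my choice of tool here, not necessarily the tutorial's):

```python
# Extract named entities (people, places, organizations) from a document.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Senator Elizabeth Warren spoke about housing policy in Boston on Tuesday.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Elizabeth Warren PERSON", "Boston GPE"
```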
NER Powered Semantic Search in Python
Semantic search is a compelling technology allowing us to search using abstract concepts and meaning rather than relying on specific words. However, sometimes a simple keyword search can be just as valuable, especially if we know the exact wording of what we're searching for. Pinecone allows you to pair semantic search with a basic keyword filter. If you know that the document you're looking for contains a specific word or set of words, you simply tell Pinecone to restrict the search to only include documents with those keywords. We even support functionality for keyword search using sets of words with AND, OR, NOT logic. In this video, we will explore these features through a start-to-finish example of basic keyword search in Pinecone.
Pinecone docs page: https://www.pinecone.io/docs/examples/metadata-filtered-search/
Chapters:
00:00 NER Powered Semantic Search
01:19 Dependencies and Hugging Face Datasets Prep
04:18 Creating NER Entities with Transformers
07:00 Creating Embeddings with Sentence Transformers
07:48 Using Pinecone Vector Database
11:33 Indexing the Full Medium Articles Dataset
15:09 Making Queries to Pinecone
17:01 Final Thoughts
·youtube.com·
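The video's core pattern, embedding a query and then restricting the nearest-neighbour search with a metadata filter, looks roughly like this. This is a sketch, not the video's code: the index name, the "ner" metadata field, and the credentials are placeholders, and it uses the older pinecone-client interface (pinecone.init):

```python
# Semantic search in Pinecone, filtered to documents tagged with a named entity.
import pinecone
from sentence_transformers import SentenceTransformer

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")  # placeholder credentials
index = pinecone.Index("medium-articles")  # assumed index name

model = SentenceTransformer("all-MiniLM-L6-v2")
xq = model.encode("tech companies working on language models").tolist()

# Metadata filter: only return articles whose extracted entities include "OpenAI".
# Pinecone filters support $in / $nin / $and / $or for AND, OR, NOT style logic.
results = index.query(
    vector=xq,
    top_k=5,
    filter={"ner": {"$in": ["OpenAI"]}},
    include_metadata=True,
)
for match in results["matches"]:
    print(match["score"], match["metadata"].get("title"))
```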
CLIP: Connecting text and images
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
·openai.com·
OpenAI CLIP Explained | Multi-modal ML
OpenAI's CLIP explained simply and intuitively with visuals and code.
Language models (LMs) cannot rely on language alone. That is the idea behind the "Experience Grounds Language" paper, which proposes a framework to measure LMs' current and future progress. A key idea is that, beyond a certain threshold, LMs need other forms of data, such as visual input. The next step beyond well-known language models (BERT, GPT-3, and T5) is "World Scope 3". In World Scope 3, we move from large text-only datasets to large multi-modal datasets, that is, datasets containing information from multiple forms of media, like *both* images and text.
The world, both digital and real, is multi-modal. We perceive the world as an orchestra of language, imagery, video, smell, touch, and more. This chaotic ensemble produces an inner state, our "model" of the outside world. AI must move in the same direction. Even specialist models that focus on language or vision must, at some point, have input from the other modalities. How can a model fully understand the concept of the word "person" without *seeing* a person?
OpenAI's Contrastive Learning In Pretraining (CLIP) is a world scope three model. It can comprehend concepts in both text and image and even connect concepts between the two modalities. In this video we will learn about multi-modality, how CLIP works, and how to use CLIP for different use cases like encoding, classification, and object detection.
Pinecone article: https://pinecone.io/learn/clip/
·youtube.com·
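Both CLIP bookmarks describe the same zero-shot trick: score an image against free-text category names. A minimal sketch using the Hugging Face port of CLIP (the image path and label set are placeholders):

```python
# Zero-shot image classification with CLIP: embed the image and each candidate
# caption, then softmax over their similarity scores. Weights are the public
# "openai/clip-vit-base-patch32" checkpoint on the Hugging Face Hub.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
labels = ["a photo of a dog", "a photo of a cat", "a photo of a person"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
probs = logits.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.3f}")
```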
The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts
Hi, I am Fatih Kadir Akın, the curator of the popular "Awesome ChatGPT Prompts" repository on GitHub, and prompts.chat. In this comprehensive guide, you'll discover expert strategies for crafting compelling ChatGPT prompts that drive engaging and informative conversations. From understanding the principles of effective prompting to mastering the art of constructing clear and concise prompts, this e-book will provide you with the tools you need to take your ChatGPT conversations to the next level. Some of the highlighted contents:
- Overview of ChatGPT and its capabilities
- How to write clear and concise prompts
- Tips for avoiding jargon and ambiguity
- Steps for crafting effective ChatGPT prompts
- Common mistakes to avoid when crafting ChatGPT prompts
- Common issues that may arise when using ChatGPT
- Case studies and best practices
The book will be updated over time.
·fka.gumroad.com·