Found 2451 bookmarks
Newest
The Illustrated Word2vec
Discussions: Hacker News (347 points, 37 comments), Reddit r/MachineLearning (151 points, 19 comments). Translations: Chinese (Simplified), French, Korean, Portuguese, Russian.

“There is in all things a pattern that is part of our universe. It has symmetry, elegance, and grace - those qualities you find always in that which the true artist captures. You can find it in the turning of the seasons, in the way sand trails along a ridge, in the branch clusters of the creosote bush or the pattern of its leaves. We try to copy these patterns in our lives and our society, seeking the rhythms, the dances, the forms that comfort. Yet, it is possible to see peril in the finding of ultimate perfection. It is clear that the ultimate pattern contains its own fixity. In such perfection, all things move toward death.” ~ Dune (1965)

I find the concept of embeddings to be one of the most fascinating ideas in machine learning. If you’ve ever used Siri, Google Assistant, Alexa, Google Translate, or even a smartphone keyboard with next-word prediction, then chances are you’ve benefitted from this idea that has become central to Natural Language Processing models. There has been quite a lot of development over the last couple of decades in using embeddings for neural models (recent developments include contextualized word embeddings, leading to cutting-edge models like BERT and GPT2).

Word2vec is a method to efficiently create word embeddings and has been around since 2013. But in addition to its utility as a word-embedding method, some of its concepts have been shown to be effective in creating recommendation engines and making sense of sequential data even in commercial, non-language tasks. Companies like Airbnb, Alibaba, Spotify, and Anghami have all benefitted from carving this brilliant piece of machinery out of the world of NLP and using it in production to empower a new breed of recommendation engines.
In this post, we’ll go over the concept of embedding, and the mechanics of generating embeddings with word2vec. But let’s start with an example to get familiar with using vectors to represent things. Did you know that a list of five numbers (a vector) can represent so much about your personality?
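The article's "list of five numbers" idea can be sketched in a few lines: represent each person as a small vector and compare people with cosine similarity, the same measure later used to compare word embeddings. The names and scores below are hypothetical, purely for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical five-trait "personality vectors" (made-up scores).
jay = [-0.4, 0.8, 0.5, -0.2, 0.3]
person_1 = [-0.3, 0.2, 0.3, -0.4, 0.9]
person_2 = [-0.5, 0.4, -0.2, 0.7, -0.1]

# The closer the score is to 1, the more alike the two vectors are.
print(cosine_similarity(jay, person_1))
print(cosine_similarity(jay, person_2))
```

Here person_1 scores higher than person_2, so by this measure person_1 is the more similar personality; the article builds the same intuition before applying it to word vectors.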
·jalammar.github.io·
Limitless
Go beyond your mind’s limitations: Personalized AI powered by what you’ve seen, said, and heard.
·limitless.ai·
Stuff we figured out about AI in 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of …
·simonwillison.net·
Clap Or AI Gets It - Aftermath
Drama over YouTuber Marques Brownlee's review of the Humane AI pin shows how much AI is propped up by hype.
·aftermath.site·
BarGPT, AI Generated Cocktail Recipes
Create AI generated cocktail recipes based on your tastes, ingredients on hand or any other ideas you have.
·bargpt.app·
Building files-to-prompt entirely using Claude 3 Opus
files-to-prompt is a new tool I built to help me pipe several files at once into prompts to LLMs such as Claude and GPT-4. When combined with my LLM command-line …
·simonwillison.net·
Command R
Command R is a conversational model that excels in language tasks and supports multiple languages, making it ideal for coding use cases that require instruction models. It responds well to preambles that follow a specific structure and format, enhancing its performance.
·docs.cohere.com·
nilsherzig/LLocalSearch: LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
·github.com·
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.
·972mag.com·
The cost of AI reasoning over time.
As time progresses, AI models are achieving higher reasoning accuracy while their associated costs continue to drastically decrease. What does it mean for our future?
·semaphore.substack.com·
Add WebPilot to your GPTs
AI powered Search, access any online information, and generate very long content
·webpilot.ai·