Tutorials/Learning

464 bookmarks
Best Local LLMs - 2025 : r/LocalLLaMA
***Year-end thread for the best LLMs of 2025!*** 2025 is almost done! It's been **a wonderful year** for us Open/Local AI enthusiasts. And its...
·reddit.com·
LocalLlama
Subreddit to discuss AI & Llama, the large language model created by Meta AI.
·reddit.com·
ChatGPT Revenue and Usage Statistics (2025)
ChatGPT was the chatbot that kickstarted the generative AI revolution, which has been responsible for hundreds of billions of dollars in data centres, graphics chips and AI startups. Launched by startup OpenAI, ChatGPT was an immediate success, reaching 100 million users in less than two months. The tool has been used by workers, students and people of all walks of life, as a search engine, essay reader and content creation tool, amongst many other use cases. For those unaware, OpenAI is a research laboratory founded by some of the biggest names in tech, such as Elon Musk, Reid Hoffman, Peter Thiel, and Sam Altman. It has generated a lot of press due to its breadth of capabilities, from answering questions and following up on secondary...
·businessofapps.com·
ChatGPTPromptGenius
Welcome to r/ChatGPTPromptGenius, the subreddit where you can find and share the best AI prompts! Our community is dedicated to curating a collection of high-quality & standardized prompts that can be used to generate creative and engaging AI conversations. Whether you're looking for inspiration or just want to see what others are doing with AI, this is the place to be! This subreddit has a companion browser extension called AI Prompt Genius.
·reddit.com·
Data Science
A space for data science professionals to engage in discussions and debates on the subject of data science.
·reddit.com·
CHK NEW - Hidden State Visualizations for Language Models - Jay Alammar - PART 1
Interfaces for exploring transformer language models by looking at input saliency and neuron activation. Explorable #1: input saliency of a list of countries generated by a language model. Explorable #2: neuron activation analysis reveals four groups of neurons, each associated with generating a certain type of token. The Transformer architecture has been powering a number of the recent advances in NLP. Pre-trained language models based on the architecture, in both its auto-regressive variant (models that use their own output as input to the next time step and process tokens left-to-right, like GPT2) and its denoising variant (models trained by corrupting/masking the input and that process tokens bidirectionally, like BERT), continue to push the envelope in various NLP tasks and, more recently, in computer vision. Our understanding of why these models work so well, however, still lags behind these developments. This exposition series continues the pursuit to interpret and visualize the inner workings of transformer-based language models, illustrating how some key interpretability methods apply to them; this first article focuses on auto-regressive models, but the methods apply to other architectures and tasks as well. It presents explorables and visualizations aiding the intuition of: input saliency methods, which score input tokens' importance to generating a token, and neuron activations, how individual neurons and groups of neurons spike in response to inputs and to produce outputs. The next article addresses hidden state evolution across the layers of the model and what it may tell us about each layer's role.
·jalammar.github.io·
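The gradient-based saliency the article visualizes can be sketched in a few lines of PyTorch. This is a minimal sketch on a hypothetical toy model (the embedding layer, linear head, and token IDs below are illustrative stand-ins, not the article's code): score one output token, backpropagate to the input embeddings, and rank input tokens by gradient norm.

```python
# Minimal gradient-norm input saliency sketch; toy model, hypothetical data.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, d_model = 100, 16
embedding = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)   # predicts a next-token score

token_ids = torch.tensor([[5, 42, 7, 19]])  # hypothetical input sequence

# Embed tokens and keep the gradient on the embedded vectors.
embedded = embedding(token_ids)
embedded.retain_grad()

# Score one candidate output token and backpropagate it to the inputs.
logits = head(embedded.mean(dim=1))  # pool over positions, project to vocab
logits[0, 42].backward()             # arbitrary output token of interest

# Saliency: L2 norm of each input token's embedding gradient.
saliency = embedded.grad.norm(dim=-1).squeeze(0)
for tid, score in zip(token_ids[0].tolist(), saliency.tolist()):
    print(f"token {tid}: saliency {score:.4f}")
```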
@ What areas of AI is Tokenization used
Tokenization is a foundational process used across various fields of AI, primarily in natural language processing (NLP). It involves breaking down raw data into smaller, manageable units called "tokens" that AI models can process numerically. [1, 2, 3, 4, 5] Key areas where tokenization is used ...
·docs.google.com·
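As a concrete companion to the note, here is a minimal tokenization sketch assuming the Hugging Face transformers library; the "gpt2" tokenizer is an arbitrary pretrained choice, not one the note prescribes.

```python
# Minimal sketch: text -> subword tokens -> numeric IDs a model consumes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization turns raw text into model-ready units."
tokens = tokenizer.tokenize(text)  # subword strings
ids = tokenizer.encode(text)       # the numeric IDs the model actually sees

print(tokens)
print(ids)
```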
@ What are foundation models
Foundation models are large AI models trained on massive, diverse datasets that can be adapted to a wide range of downstream tasks. Instead of being built for one specific purpose, like traditional machine learning models, they serve as a flexible base for many applications, such as natural lan...
·docs.google.com·
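The "flexible base" idea can be illustrated with a short sketch, assuming the Hugging Face transformers library: load a pretrained model and attach a fresh task-specific head for a downstream task. The "bert-base-uncased" checkpoint and the two-label classification setup are illustrative assumptions, not choices from the note.

```python
# Minimal sketch: reuse a pretrained foundation model for a downstream task.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # fresh head, randomly initialized
)

inputs = tokenizer("Foundation models adapt to many tasks.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```

Fine-tuning this combined model on labeled examples would adapt the shared base to the one task while reusing everything it learned in pre-training.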
CORE
For your inspiration, read later, media and stuff
·app.raindrop.io·
@ What are the primary uses of NLP
The primary uses of NLP include automating tasks like data analysis, machine translation, and customer service through chatbots. It also powers everyday applications such as speech recognition for voice assistants, email spam filters, and text summarization. NLP helps computers understand and pro...
·docs.google.com·
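Two of the listed uses can be tried in a few lines, assuming the Hugging Face transformers pipeline API; the models each pipeline downloads are library defaults, not recommendations from the note.

```python
# Minimal sketch of two NLP uses the note lists: classification and summarization.
from transformers import pipeline

# Chatbot/customer-service style classification of a message.
classifier = pipeline("sentiment-analysis")
print(classifier("This chatbot answered my question instantly."))

# Text summarization of a short passage.
summarizer = pipeline("summarization")
text = (
    "NLP automates data analysis, machine translation, and customer service "
    "through chatbots, and powers speech recognition, spam filters, and "
    "text summarization."
)
print(summarizer(text, max_length=25, min_length=5)[0]["summary_text"])
```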
@ NLPs used for transformers
NLP is intrinsically linked with Transformer models. The Transformer is a revolutionary deep learning architecture that is now a foundation for most modern NLP tasks, including machine translation, text generation, and summarization. Transformers are used to process and understand human l...
·docs.google.com·
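The mechanism at the heart of the Transformer architecture the note describes is scaled dot-product attention. Here is a minimal NumPy sketch with illustrative shapes (not code from the note): each query is compared against all keys, and the resulting weights mix the values.

```python
# Minimal scaled dot-product attention sketch; shapes and data are illustrative.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```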
Embeddings are numerical representations of high-dimensional data (e.g., text, images) in a lower-dimensional space
Embeddings are numerical representations of high-dimensional data like text and images, transformed into lower-dimensional vectors that machine learning models can process efficiently. These vectors capture semantic relationships, so similar items are placed closer together in the em...
·docs.google.com·
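A minimal sketch of "similar items end up closer together", assuming the sentence-transformers library; the "all-MiniLM-L6-v2" model is a small checkpoint chosen purely for illustration.

```python
# Minimal embedding + cosine-similarity sketch; sentences are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "A cat sits on the mat.",
    "A kitten rests on a rug.",
    "Stocks fell sharply today.",
]
vectors = model.encode(sentences)  # one lower-dimensional vector per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # similar meanings -> higher score
print(cosine(vectors[0], vectors[2]))  # unrelated meanings -> lower score
```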
AI vectors hidden states
AI hidden states are vectors that represent the intermediate memory of a neural network, particularly recurrent neural networks (RNNs) and Transformers. In an RNN, the hidden state vector is computed at each time step, combining the current input and the previous hidden state to carry information...
·docs.google.com·
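The update the note describes, combining the current input with the previous hidden state, is commonly written as h_t = tanh(W_x x_t + W_h h_{t-1} + b). A minimal NumPy sketch with illustrative sizes:

```python
# Minimal RNN hidden-state update sketch; sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 3, 5

W_x = rng.standard_normal((d_hidden, d_in))      # input-to-hidden weights
W_h = rng.standard_normal((d_hidden, d_hidden))  # hidden-to-hidden weights
b = np.zeros(d_hidden)

def step(x_t, h_prev):
    # h_t = tanh(W_x x_t + W_h h_{t-1} + b)
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(d_hidden)                       # initial hidden state
for x_t in rng.standard_normal((4, d_in)):   # a 4-step input sequence
    h = step(x_t, h)                         # state carries information forward
print(h)
```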