Ethan Mollick, analyst: ‘Students who use AI as a crutch don’t learn anything’
The veteran professor, who has become a celebrity on social media, has published a book on how to better understand and use artificial intelligence in everyday life.
The latest release of the Marimo Python reactive notebook project includes a neat new feature: you can now easily embed a custom chat interface directly inside your notebook. Marimo …
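To give a feel for the general shape of that feature, here is a minimal sketch of a marimo cell that wires a custom callback into mo.ui.chat. The echo_model function is a made-up placeholder, and this assumes the widget accepts any callable that is passed the conversation history:

```python
# A minimal sketch, assuming mo.ui.chat accepts a callable that receives the
# chat history and returns the assistant's reply (echo_model is hypothetical).
import marimo as mo

def echo_model(messages, config):
    # messages is the conversation so far; the last entry is the newest prompt.
    return f"You said: {messages[-1].content}"

# Returning the widget from a cell renders the chat interface inline.
mo.ui.chat(echo_model)
```

Swapping echo_model for a function that calls out to a hosted or local LLM is what turns this into a real chat interface.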
pytudes/ipynb/CherylMind.ipynb at main · norvig/pytudes
There has been much debate on the degree to which Large Language Models (LLMs) have a theory of mind: a way of understanding what other people know and don't know. In this notebook I explore one small part of the issue by asking nine LLM chatbots to solve the Cheryl's Birthday Problem, a well-known logic puzzle in which different characters have different states of knowledge at different times.
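For readers unfamiliar with the puzzle, a brute-force elimination solution fits in a few lines of Python. This is a generic sketch, not the code from Norvig's notebook: it filters the ten candidate dates by each character's statement in turn.

```python
# Cheryl's Birthday: Albert is told the month, Bernard the day.
# Filter the candidate dates by each statement in sequence.
DATES = ["May 15", "May 16", "May 19", "June 17", "June 18",
         "July 14", "July 16", "August 14", "August 15", "August 17"]

def month(d): return d.split()[0]
def day(d):   return d.split()[1]

def told(part, value, dates):
    "Dates consistent with being told this month or day."
    return [d for d in dates if part(d) == value]

def knows(possible):
    "A character knows the birthday when only one date remains."
    return len(possible) == 1

# Statement 1: Albert doesn't know, and knows Bernard doesn't know either.
step1 = [d for d in DATES
         if not knows(told(month, month(d), DATES))
         and all(not knows(told(day, day(d2), DATES))
                 for d2 in told(month, month(d), DATES))]

# Statement 2: Bernard now knows (within the dates surviving statement 1).
step2 = [d for d in step1 if knows(told(day, day(d), step1))]

# Statement 3: Albert now knows too.
step3 = [d for d in step2 if knows(told(month, month(d), step2))]

print(step3)  # ['July 16']
```

The puzzle is a useful theory-of-mind probe precisely because each filtering step depends on reasoning about what another character could or could not deduce.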
NotebookLM’s automatically generated podcasts are surprisingly effective
Audio Overview is a fun new feature of Google’s NotebookLM which is getting a lot of attention right now. It generates a one-off custom podcast against content you provide, where …
The MLX ecosystem of libraries for running machine learning models on Apple Silicon continues to expand. Prince Canuma is actively developing this library for running vision models such as Qwen-2 …
GitHub - thiswillbeyourgithub/WDoc: Summarize and query from a lot of heterogeneous documents. Any LLM provider, any filetype, scalable, under development
How streaming LLM APIs work | Simon Willison’s TILs
I decided to have a poke around and see if I could figure out how the HTTP streaming APIs from the various hosted LLM providers actually worked. Here are my notes so far.
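As a rough illustration of the pattern those notes explore: most hosted providers stream chat completions as server-sent events, one "data:" line per chunk, terminated by a "data: [DONE]" sentinel. The sketch below consumes an OpenAI-style stream with httpx; the endpoint, model name and API key are placeholders.

```python
# A hedged sketch of consuming a streaming (server-sent events) chat completion
# from an OpenAI-compatible endpoint; URL, model and key are placeholders.
import json
import httpx

with httpx.stream(
    "POST",
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Tell me a joke"}],
        "stream": True,  # ask the API to return tokens as they are generated
    },
    timeout=None,
) as response:
    for line in response.iter_lines():
        # Each chunk arrives as a line starting with "data: "; the stream
        # ends with the sentinel "data: [DONE]".
        if line.startswith("data: "):
            payload = line[len("data: "):]
            if payload == "[DONE]":
                break
            chunk = json.loads(payload)
            delta = chunk["choices"][0]["delta"]
            print(delta.get("content") or "", end="", flush=True)
```

The details differ between providers (Anthropic, for example, uses named SSE event types), which is exactly what the linked notes dig into.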
GitHub - ictnlp/LLaMA-Omni: LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.