AI/ML

1485 bookmarks
Freeing the chatbot
Intelligence, of a sort, is going to be all around us
·oneusefulthing.org·
Chatbots Have Thoroughly Infiltrated Scientific Publishing
One percent of scientific articles published in 2023 showed signs of generative AI’s potential involvement, according to a recent analysis
·scientificamerican.com·
Pinokio
AI Browser
·pinokio.computer·
GitHub - truefoundry/cognita: RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry
RAG (Retrieval Augmented Generation) framework for building modular, open source applications for production, by TrueFoundry.
·github.com·
Snowflake Arctic Cookbook
Today's big model release was Snowflake Arctic, an enormous 480B model with a 128×3.66B MoE (Mixture of Experts) architecture. It's Apache 2 licensed and Snowflake state that "in addition, we …
·simonwillison.net·
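A quick sanity check on those numbers (the roughly 10B dense residual component comes from Snowflake's own announcement and isn't quoted in this blurb):

```latex
128 \times 3.66\,\text{B} \approx 468\,\text{B (expert parameters)}, \qquad
468\,\text{B} + \sim 10\,\text{B (dense)} \approx 480\,\text{B total}
```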
What can LLMs never do?
On goal drift and lower reliability. Or, why can't LLMs play Conway's Game of Life?
·strangeloopcanon.com·
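To ground the essay's question, here is a minimal sketch (not from the essay) of the single deterministic update rule that "playing" Conway's Game of Life requires applying, cell by cell, step after step, without drifting:

```python
def life_step(grid: list[list[int]]) -> list[list[int]]:
    """One Game of Life update on a finite grid (cells outside are dead)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live cells among the 8 neighbours.
            n = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            # A live cell survives with 2 or 3 neighbours;
            # a dead cell becomes live with exactly 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return nxt

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```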
James Grimmelmann (@jtlg@mastodon.lawprofs.org)
Something exceptionally grim is happening on the Internet. In the last few months, the constant flood of algorithmically generated junk content has kicked into an AI-powered overdrive, and it is cutting a swath of destruction as it overwhelms search engines, filters, and moderation systems. Call it Gresham's Law 2.0: bad content drives out good. I'm starting this thread to document it, because there is a *lot* happening all at once. #greshamslaw20
·mastodon.lawprofs.org·
GitHub - timpaul/form-extractor-prototype
This tool extracts the structure from an image of a form. It uses the Claude 3 LLM by Anthropic. A single extraction of an A4 form page costs about 10p. It replicates the form structure in JSON, following the schema used by GOV.UK Forms, and then uses that JSON to generate a multi-page web form in the GOV.UK style.
·github.com·
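The repository's code isn't reproduced here, but the pipeline it describes (form image in, JSON structure out via Claude 3) can be sketched against the Anthropic Messages API. The prompt wording and output fields below are illustrative assumptions, not the actual GOV.UK Forms schema the tool uses:

```python
# Minimal sketch: send a form image to Claude 3 and ask for a JSON
# description of the form's structure. Not the repo's actual code.
import base64

import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("form-page.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text",
             # Illustrative prompt and fields, not the real schema.
             "text": "Extract every question on this form as JSON: a list "
                     "of objects with 'label', 'hint' and 'type' fields."},
        ],
    }],
)

print(message.content[0].text)  # the model's JSON description of the form
```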
WHY AI Works - YouTube
Bertrand Serlet's thoughts on why LLMs, and AI in general, work so well nowadays.
·m.youtube.com·
LLMs and the Harry Potter problem
Large language models may have big context windows, but they still aren't good enough at using the information in big contexts, especially in high-value use cases.
·pyqai.com·
openelm/README-pretraining.md
Apple released something big three hours ago, and I'm still trying to get my head around exactly what it is. The parent project is called CoreNet, described as "A library …
·simonwillison.net·
A quote from Phi-3 Technical Report
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models …
·simonwillison.net·
The Illustrated Word2vec
“There is in all things a pattern that is part of our universe. It has symmetry, elegance, and grace - those qualities you find always in that which the true artist captures. You can find it in the turning of the seasons, in the way sand trails along a ridge, in the branch clusters of the creosote bush or the pattern of its leaves. We try to copy these patterns in our lives and our society, seeking the rhythms, the dances, the forms that comfort. Yet, it is possible to see peril in the finding of ultimate perfection. It is clear that the ultimate pattern contains its own fixity. In such perfection, all things move toward death.” ~ Dune (1965)

I find the concept of embeddings to be one of the most fascinating ideas in machine learning. If you’ve ever used Siri, Google Assistant, Alexa, Google Translate, or even a smartphone keyboard with next-word prediction, then chances are you’ve benefitted from this idea that has become central to Natural Language Processing models. There has been quite a development over the last couple of decades in using embeddings for neural models (recent developments include contextualized word embeddings, leading to cutting-edge models like BERT and GPT2).

Word2vec is a method to efficiently create word embeddings and has been around since 2013. But in addition to its utility as a word-embedding method, some of its concepts have been shown to be effective in creating recommendation engines and making sense of sequential data even in commercial, non-language tasks. Companies like Airbnb, Alibaba, Spotify, and Anghami have all benefitted from carving out this brilliant piece of machinery from the world of NLP and using it in production to empower a new breed of recommendation engines.

In this post, we’ll go over the concept of embedding, and the mechanics of generating embeddings with word2vec. But let’s start with an example to get familiar with using vectors to represent things. Did you know that a list of five numbers (a vector) can represent so much about your personality?
·jalammar.github.io·
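To echo that closing question, here is a toy sketch (the trait scores are invented, not the post's) of representing people as vectors and comparing them with cosine similarity, the same measure used to compare word2vec embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Five personality traits per person, each scored from -1 to 1
# (made-up numbers, purely for illustration).
target   = [-0.4, 0.8, 0.5, -0.2, 0.3]
person_1 = [-0.3, 0.2, 0.3, -0.4, 0.9]
person_2 = [-0.5, 0.4, -0.2, 0.7, -0.1]

# The higher score marks the more similar personality.
print(cosine_similarity(target, person_1))  # ≈ 0.66
print(cosine_similarity(target, person_2))  # ≈ 0.24
```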