AI/ML

2309 bookmarks
MACAW: An Accessible Tool for Molecular Embedding and Inverse Molecular Design
The growing capabilities of synthetic biology and organic chemistry demand tools to guide syntheses toward useful molecules. Here, we present Molecular AutoenCoding Auto-Workaround (MACAW), a tool that uses a novel approach to generate molecules predicted to meet a desired property specification (e.g., a binding affinity of 50 nM or an octane number of 90). MACAW describes molecules by embedding them into a smooth multidimensional numerical space, avoiding uninformative dimensions that previous methods often introduce. The coordinates in this embedding provide a natural choice of features for accurately predicting molecular properties, which we demonstrate with examples for cetane and octane numbers, flash points, and histamine H1 receptor binding affinity. The approach is computationally efficient and well-suited to the small- and medium-size datasets commonly used in biosciences. We showcase the utility of MACAW for virtual screening by identifying molecules with high predicted binding affinity to the histamine H1 receptor and limited affinity to the muscarinic M2 receptor, which are targets of medicinal relevance. Combining these predictive capabilities with a novel generative algorithm for molecules allows us to recommend molecules with a desired property value (i.e., inverse molecular design). We demonstrate this capability by recommending molecules with predicted octane numbers of 40, 80, and 120, an important characteristic of biofuels. Thus, MACAW augments classical retrosynthesis tools by providing recommendations for molecules made to specification.
·pubs.acs.org·
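MACAW itself is not reproduced here, but the core idea in the abstract, using a molecule's embedding coordinates directly as regression features for a property such as octane number, can be sketched with synthetic data (the embedder, data, and property values below are all invented for illustration):

```python
import numpy as np

# Hypothetical data: each row stands in for a molecule's coordinates in a
# smooth embedding space; y holds a measured property (e.g., octane number).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                 # 50 molecules, 8 embedding dims
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=50)   # noisy property values

# Ridge regression in closed form: w = (X^T X + lam*I)^-1 X^T y.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 3))
```

Any regressor would do in place of ridge; the point is that a smooth, information-dense embedding makes even a linear model a reasonable property predictor.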
How to Build a Retrieval Augmented Generative AI Application
RAG AI is a cutting-edge application that marries a Flask backend with a Streamlit frontend, creating a dynamic and interactive user experience. At its core,...
·youtube.com·
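The video's Flask/Streamlit stack is not reproduced here, but the retrieval step at the heart of any RAG app can be sketched with a toy bag-of-words index (document names and the query below are invented for illustration; real systems use learned embeddings and a vector store):

```python
import math
from collections import Counter

# Toy corpus: filename -> text.
docs = {
    "pricing.md": "our pro plan costs 20 dollars per month",
    "setup.md": "install the package and set the api key env var",
}

def bow(text):
    # Bag-of-words term counts as a stand-in for a learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = "how much does the pro plan cost"
q = bow(query)
best = max(docs, key=lambda name: cosine(q, bow(docs[name])))

# "Augmented generation": splice the retrieved context into the prompt.
prompt = f"Context:\n{docs[best]}\n\nQuestion: {query}\nAnswer:"
print(best)
```

The retrieved document, not the whole corpus, is what gets handed to the language model, which is what keeps the generation grounded.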
You Can Build an App in 60 Minutes with ChatGPT - Ep. 5 with Geoffrey Litt
This show might be a first in the history of podcasts: researcher Geoffrey Litt and I built an app together using the ChatGPT app and Replit in under 60 minutes, while we talked. We wanted to show how AI and ChatGPT change who gets to build software and how they usher in a world where everyone can modify and remix the apps they use every day. So we did it live, and ChatGPT delivered a working prototype at the end of the episode. It was a tiny glimpse of the future, and it pushes the boundaries of what a show can be. It honestly left me speechless, and it'll change the way you think about software. If it does, make sure to subscribe, share, and leave us a review!

Timestamps:
00:01:03 - Intro
00:01:36 - What is malleable software?
00:08:06 - Who gets to make software on the web?
00:14:50 - Deciding what app to build
00:22:06 - Starting on our app
00:31:07 - Don't read the code first
00:47:55 - Starting from scratch could soon be a thing of the past
00:55:50 - Getting past those final error messages
01:03:31 - Voila! An app
01:04:50 - Effortless flow

Links:
https://www.geoffreylitt.com/2023/03/25/llm-end-user-programming.html
https://every.to/chain-of-thought/what-comes-after-saas
https://chat.openai.com/g/g-qPeu5SFW6-micro-web-app-coder
·youtube.com·
Malleable software in the age of LLMs
All computer users may soon have the ability to author small bits of code. What structural changes does this imply for the production and distribution of software?
·geoffreylitt.com·
What Comes After SaaS?
Bespoke apps for everyone—customized by AI
·every.to·
hackerllama - The Random Transformer
Understand how transformers work by demystifying all the math behind them
·osanseviero.github.io·
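For reference, the centerpiece of the math that post works through is scaled dot-product attention, in standard notation:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension used for scaling.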
Ten Noteworthy AI Research Papers of 2023
This year has felt distinctly different. I've been working in, on, and with machine learning and AI for over a decade, yet I can't recall a time when these fields were as popular and rapidly evolving as they have been this year. To conclude an eventful 2023 in machine learning and AI research, I'm excited to share 10 noteworthy papers I've read this year. My personal focus has been more on large language models, so you'll find a heavier emphasis on large language model (LLM) papers than computer vision papers this year.
·magazine.sebastianraschka.com·
AI or ain't: Eliza
Explore the intriguing history of Eliza, a pioneering chatbot, and learn how to implement a basic version in Go, unraveling the roots of conversational AI.
·zserge.com·
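The article builds Eliza in Go; a comparable minimal sketch in Python shows the classic mechanics: match the input against regex rules, "reflect" pronouns in the captured fragment, and drop it into a canned response (the rules below are a tiny illustrative subset, not the article's rule set):

```python
import re

# Pronoun reflection so "my project" comes back as "your project".
REFLECT = {"i": "you", "am": "are", "my": "your", "you": "I", "your": "my"}

# Ordered (pattern, response template) rules; the catch-all goes last.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r".*", re.I), "Please tell me more."),
]

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am feeling stuck on my project"))
```

That one loop is essentially the whole trick: Eliza's apparent understanding is pattern matching plus pronoun swapping.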
CultriX/MistralTrix-v1 · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
·huggingface.co·
Eyes on tokenize
I was writing a tokenizer for SMILES and came across a recent paper by the IBM Research team on reaction standardisation which contained a ...
·baoilleach.blogspot.com·
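The post's own tokenizer isn't shown in the excerpt, but a minimal SMILES tokenizer along the lines it discusses can be written with a single regex. This is a simplified variant of the pattern common in the reaction-prediction literature, not a full SMILES grammar:

```python
import re

# Order matters: bracket atoms and two-letter elements must be tried
# before single-character tokens.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|%\d{2}|[BCNOSPFIbcnosp]|[0-9]|\(|\)|=|#|\+|-|/|\\|\.|@|:)"
)

def tokenize(smiles):
    tokens = SMILES_TOKEN.findall(smiles)
    # Round-trip check catches characters the pattern failed to cover.
    assert "".join(tokens) == smiles, "untokenized characters in input"
    return tokens

print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```

The round-trip assertion is the cheap safety net the exercise calls for: any SMILES feature the regex misses fails loudly instead of silently dropping atoms.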
The Narrated Transformer Language Model
AI/ML has been witnessing a rapid acceleration in model improvement in the last few years. The majority of the state-of-the-art models in the field are based on the Transformer architecture. Examples include models like BERT (which, when applied to Google Search, resulted in what Google calls "one of the biggest leaps forward in the history of Search") and OpenAI's GPT2 and GPT3 (which are able to generate coherent text and essays). This video by the author of the popular "Illustrated Transformer" guide will introduce the Transformer architecture and its various applications. This is a visual presentation accessible to people with various levels of ML experience.

Intro (0:00)
The Architecture of the Transformer (4:18)
Model Training (7:11)
Transformer LM Component 1: FFNN (10:01)
Transformer LM Component 2: Self-Attention (12:27)
Tokenization: Words to Token Ids (14:59)
Embedding: Breathe meaning into tokens (19:42)
Projecting the Output: Turning Computation into Language (24:11)
Final Note: Visualizing Probabilities (25:51)

The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/
Simple transformer language model notebook: https://github.com/jalammar/jalammar.github.io/blob/master/notebooks/Simple_Transformer_Language_Model.ipynb
Philosophers On GPT-3 (updated with replies by GPT-3): https://dailynous.com/2020/07/30/philosophers-gpt-3/

Twitter: https://twitter.com/JayAlammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/

More videos by Jay:
Jay's Visual Intro to AI: https://www.youtube.com/watch?v=mSTCzNgDJy4
How GPT-3 Works - Easily Explained with Animations: https://www.youtube.com/watch?v=MQnJZuBGmSQ
·youtube.com·
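The self-attention component the video walks through can be sketched in a few lines of NumPy. Shapes are tiny and the weights random; this is a single attention head showing the mechanics only, not the full model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 3

x = rng.normal(size=(seq_len, d_model))   # one token embedding per row
W_q = rng.normal(size=(d_model, d_k))     # learned projections (random here)
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_k)           # token-to-token affinities

# Row-wise softmax (shifted by the max for numerical stability).
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

out = weights @ V                         # each token mixes in context
print(out.shape)
```

Each output row is a weighted blend of every token's value vector, which is how a token's representation comes to carry context from the rest of the sequence.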
Will AI Change Our Memories?
Go to https://www.squarespace.com/nerdwriter for 10% off your first purchase. GET THE PAPERBACK OF MY BOOK: https://amzn.to/3EPDQKt. Support Nerdwriter videos: ...
·youtube.com·
The Illustrated Transformer
Discussions: Hacker News (65 points, 4 comments), Reddit r/MachineLearning (29 points, 3 comments). Translations: Arabic, Chinese (Simplified) 1, Chinese (Simplified) 2, French 1, French 2, Italian, Japanese, Korean, Persian, Russian, Spanish 1, Spanish 2, Vietnamese. Watch: MIT's Deep Learning State of the Art lecture referencing this post. Featured in courses at Stanford, Harvard, MIT, Princeton, CMU, and others.

In the previous post, we looked at Attention, a ubiquitous method in modern deep learning models. Attention is a concept that helped improve the performance of neural machine translation applications. In this post, we will look at The Transformer, a model that uses attention to boost the speed with which these models can be trained. The Transformer outperforms the Google Neural Machine Translation model in specific tasks. The biggest benefit, however, comes from how The Transformer lends itself to parallelization. It is in fact Google Cloud's recommendation to use The Transformer as a reference model for their Cloud TPU offering. So let's try to break the model apart and look at how it functions.

The Transformer was proposed in the paper Attention Is All You Need. A TensorFlow implementation of it is available as part of the Tensor2Tensor package. Harvard's NLP group created a guide annotating the paper with a PyTorch implementation. In this post, we will attempt to oversimplify things a bit and introduce the concepts one by one to hopefully make it easier to understand for people without in-depth knowledge of the subject matter.

2020 Update: I've created a "Narrated Transformer" video, which is a gentler approach to the topic.

A High-Level Look: Let's begin by looking at the model as a single black box. In a machine translation application, it would take a sentence in one language and output its translation in another.
·jalammar.github.io·