‘Primate AI algorithm’ predicts genetic health risks | Financial Times
AI/ML
Top AI researcher dismisses AI 'extinction' fears, challenges 'hero scientist' narrative | VentureBeat
First of all, I think that there are just too many letters. Generally, I’ve never signed any of these petitions. I always tend to be a bit more careful when I sign my name on something. I don’t know why people are just signing their names so lightly.
Lindsey Graham pointed out the military use of AI. That is actually happening now. But Sam Altman couldn’t even give a single proposal on how the immediate military use of AI should be regulated. At the same time, AI has a potential to optimize healthcare so that we can implement a better, more equitable healthcare system, but none of that was actually discussed.
But now the hero scientist narrative has come back in. There’s a reason why in these letters, they always put Geoff and Yoshua at the top. I think this is actually harmful in a way that I never thought about.
’m not a fan of Effective Altruism (EA) in general. And I am very aware of the fact that the EA movement is the one that is actually driving the whole thing around AGI and existential risk. I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and they think only they can solve.
MinIO | High Performance, Kubernetes Native Object Storage
MinIO is a high-performance, S3 compatible object store. It is built for large scale AI/ML, data lake and database workloads. It runs on-prem and on any cloud (public or private) and from the data center to the edge. MinIO is software-defined and open source under GNU AGPL v3.
Asus will offer local ChatGPT-style AI servers for office use | Ars Technica
Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff
All the Hard Stuff Nobody Talks About when Building Products with LLMs | Honeycomb
Commercial LLMs like gpt-3.5-turbo and Claude are the best models to use for us right now. Nothing in the open source world comes close. However, this only means they’re the best of available options. They can take many seconds to produce a valid Honeycomb query, with latency ranging from two to 15+ seconds depending on the model, natural language input, size of the schema, makeup of the schema, and instructions in the prompt. As of this writing, although we have access to gpt-4’s API, it’s far too slow to work for our use case.
Why Being Critical is Essential in the Age of AI: Debunking Arguments Against Arguments Against LLMs - YouTube
On the Catastrophic Risk of AI
State of GPT | BRK216HFS - YouTube
clovaai/donut: Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022
Japan Goes All In: Copyright Doesn't Apply To AI Training
Training data can be used "regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise."
Rent Cloud GPUs from $0.2/hour
The Ethical AI Startup Ecosystem. A deep dive into the startups devoted… | by Abhinav Raghunathan | Medium
Foundation models for reasoning on charts – Google AI Blog
Robot Passes Turing Test for Polyculture Gardening - IEEE Spectrum
SevaSk/ecoute: Ecoute is a live transcription tool that provides real-time transcripts for both the user's microphone input (You) and the user's speakers output (Speaker) in a textbox. It also generates a suggested response using OpenAI's GPT-3.5 for the user to say based on the live transcription of the conversation.
Most Popular AI Websites (Sorted by Monthly Traffic from High to Low)
s0md3v/roop: one-click deepfake (face swap)
Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters
Writing my own ChatGPT Code Interpreter | Rick Lamers' blog
Voyager | An Open-Ended Embodied Agent with Large Language Models
Lawyer cites fake cases invented by ChatGPT, judge is not amused
Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here’s the best summary as to what has happened: So, …
MIT MAS.S68!
Generative AI for Constructive Communication Course
Gorilla
Gorilla is a LLM that can provide appropriate API calls. It is trained on three massive machine learning hub datasets: Torch Hub, TensorFlow Hub and HuggingFace. We are rapidly adding new domains, including Kubernetes, GCP, AWS, OpenAPI, and more. Zero-shot Gorilla outperforms GPT-4, Chat-GPT and Claude. Gorilla is extremely reliable, and significantly reduces hallucination errors.
How To Finetune GPT Like Large Language Models on a Custom Dataset - Lightning AI
The poisoning of ChatGPT
If you can get an AI vendor to include a few tailored toxic entries—you don’t seem to need that many, even for a large model—the attacker can affect outcomes generated by the system as a whole.
The attacks apply to seemingly every modern type of AI model. They don’t seem to require any special knowledge about the internals of the system—black box attacks have been demonstrated to work on a number of occasions—which means that OpenAI’s secrecy is of no help.
They seem to be able to target specific keywords for manipulation. That manipulation can be a change in sentiment (always positive or always negative), meaning (forced mistranslations), or quality (degraded output for that keyword). The keyword doesn’t have to be mentioned in the toxic entries. Systems built on federated learning seem to be as vulnerable as the rest.
Turns out that language models can also be poisoned during fine-tuning
The researchers managed to do both keyword manipulation and degrade output with as few as a hundred toxic entries, and they discover that large models are less stable and more vulnerable to poisoning. They also discovered that preventing these attacks is extremely difficult, if not realistically impossible.
This means that OpenAI and ChatGPT as a product is overpriced. We don’t know if their products have serious defects or not. It means that OpenAI, as an organisation, is probably overvalued by investors.
The only rational option the rest of us have is to price them as if their products are defective and manipulated.
Dream bigger: Get started with Generative Fill, powered by Adobe Firefly Generative AI now in Photoshop | Adobe Blog
How Rogue AIs may Arise - Yoshua Bengio
Remove Background from Image for Free – remove.bg
csunny/DB-GPT: Interact your data and environment using the local GPT, no data leaks, 100% privately, 100% security