Prebuilt vs Building your own Deep Learning Machine vs GPU Cloud (AWS) | BIZON
AI/ML
Build a super fast deep learning machine for under $1,000 – O’Reilly
How to build the perfect Deep Learning Computer and save thousands of dollars | by Jeff Chen | Mission.org | Medium
TitanXp vs GTX1080Ti for Machine Learning | Puget Systems
GPU Build - edegan.com
Building The Ultimate Machine Learning PC: A Step-by-Step Guide
LLM now provides tools for working with embeddings
A practical guide to deploying Large Language Models Cheap, Good *and* Fast
I try all the things to get Vicuna-13B-v1.5 deployed on an NVIDIA T4 so you don't have to.
Joel Kang's extremely comprehensive notes on what he learned trying to run Vicuna-13B-v1.5 on an affordable cloud GPU server (a T4 at $0.615/hour). The space is in so much flux …
Teaching with AI
Wikipedia search-by-vibes through millions of pages offline
Check it out! https://leebutterman.com/wikipedia-search-by-vibes/
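"Search-by-vibes" is embedding search: each page gets a precomputed vector, and a query is matched by cosine similarity against all of them. A minimal brute-force sketch in Python (random vectors stand in for the real page embeddings, and the dimensions here are made up for illustration):

```python
import numpy as np

# Toy corpus of "page" embeddings; in the real project these would be
# precomputed sentence embeddings for millions of Wikipedia pages.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((1000, 64)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)  # unit-normalize once

def search(query_vec, k=5):
    """Cosine similarity over unit vectors reduces to a single matmul."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = corpus @ q
    top = np.argsort(-scores)[:k]  # indices of the k best-matching pages
    return top, scores[top]
```

Brute force like this is surprisingly workable offline; at millions of pages you would typically switch to an approximate nearest-neighbor index instead.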
A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data
As it turns out, it’s impossible to remove a user’s data from a trained A.I. model. Deleting the model entirely is also difficult—and there’s little regulation to enforce either option.
The End of the Take-Home Essay?
MagicEdit: High-Fidelity Temporally Coherent Video Editing
New LLM Foundation Models - by Sebastian Raschka, PhD
CoTracker: It is Better to Track Together
LIDA | LIDA: Automated Visualizations with LLMs
LLM: A CLI utility and Python library for interacting with Large Language Models
Nvidia's NeMo Guardrails: Full Walkthrough for Chatbots / AI
Nvidia's NeMo Guardrails is a new library for building conversational AI / chatbots. A guardrail is a semi- or fully deterministic shield used to guard against specific behaviors or conversation topics, or even to trigger particular actions (like calling a human for help).
We can use NeMo Guardrails for safety/topic guidance, deterministic dialogue, retrieval augmented generation (RAG), and conversational agents.
In this video, we'll explore NeMo Guardrails and get started building with the library.
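To give a flavor of how rails are declared: Guardrails uses Colang, a small modeling language for dialogue flows. A minimal greeting flow, adapted from the shape of the library's introductory examples (exact file layout depends on your config):

```
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hey! How can I help?"

define flow greeting
  user express greeting
  bot express greeting
```

User messages are matched to the canonical form `express greeting` via embedding similarity, and the flow then deterministically selects the bot response.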
🌲 Article:
https://www.pinecone.io/learn/nemo-guardrails-intro/
📌 Code:
https://github.com/pinecone-io/examples/tree/master/learn/generation/chatbots/nemo-guardrails
00:00 Nvidia's NeMo Guardrails
02:16 How Typical Chatbots Work
05:54 Dialogue Flows
07:53 Code Intro to NeMo Guardrails
12:06 How Guardrails Works Under the Hood
14:33 NeMo Guardrails Chatbot in Python
18:28 Speaking with Guardrails Chatbot
19:50 Future NeMo Guardrails Content
WebLLM | Home
WebLLM supports Llama 2 70B now
The WebLLM project from MLC uses WebGPU to run large language models entirely in the browser. They recently added support for Llama 2, including Llama 2 70B, the largest and …
US Copyright Office wants to hear what people think about AI and copyright
People have until October 18th to comment.
Video: Data Exfiltration Vulnerabilities in LLM apps (Bing Chat, ChatGPT, Claude) · Embrace The Red
This video highlights various data exfiltration vulnerabilities I discovered and responsibly disclosed to Microsoft, Anthropic, ChatGPT and Plugin Developers. It also highlights the response and fixes of various vendors - with surprisingly different outcomes.
Phind: AI Search Engine and Pair Programmer
We have fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset that achieved 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieved 67% according to their official technical report in March. To ensure result validity, we applied OpenAI's decontamination methodology to our dataset.
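pass@1 here is the standard HumanEval metric: the probability that a single sampled completion passes the problem's unit tests. With n samples per problem, c of which pass, the unbiased estimator from the HumanEval paper can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k draws
    (without replacement) from n samples, c of them correct, passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 5 correct, pass@1 is simply 5/10:
print(pass_at_k(10, 5, 1))  # → 0.5
```

The benchmark score is this value averaged over all 164 HumanEval problems.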
llm-tracker
The AIKEA Effect - Artur Piszek
Making Large Language Models work for you
I gave an invited keynote at WordCamp 2023 in National Harbor, Maryland on Friday. I was invited to provide a practical take on Large Language Models: what they are, how …
Murdered by My Replica?
Margaret Atwood responds to the revelation that pirated copies of her books are being used to train AI.
Can news outlets build a ‘trustworthy’ AI chatbot?
Tech sites including Macworld and PCWorld are using a new AI tool.