basic-memory/README.md at main · basicmachines-co/basic-memory
Basic Memory is a knowledge management system that allows you to build a persistent semantic graph from conversations with AI assistants, stored in standard Markdown files on your computer. Integra...
GitHub Copilot Custom Prompt Files & Folder Structure for Teams
Discover Rafferty Uy’s guide on structuring GitHub Copilot custom prompt files for team collaboration. Learn best practices and tips for effective prompt engineering.
Before generating SQL statements:
- Understand the relationship between the tables in the database.
- Determine the filtering criteria and conditions for data retrieval.
- Validate the expected outcome of the query.
- Think step-by-step and revalidate before responding.
StefanRoets06/Custom-Instructions-for-GitHub-Copilot: This guide is intended to help you provide better context for GitHub Copilot when working on programming-related tasks. Use this template to define your goals, preferences, and any specific guidelines you'd like Copilot to follow.
copilot-instructions.md has helped me so much. : r/ChatGPTCoding
Your plan MUST include:
- All functions/sections that need modification
- The order in which changes should be applied
- Dependencies between changes
- Estimated number of separate edits required
Smaller prompts, better answers with GitHub Copilot Custom Instructions
Working with GitHub Copilot in VS Code amps up your efficiency as a programmer - but did you know that adding a simple markdown file can boost this efficiency even more, while *also* decreasing the size of your prompt? Custom Instructions can help you and your team do so much more with GitHub Copilot, and @rconery will show you how in this video.
🔎 Chapters:
00:12 Simple, automatic instructions
02:07 Custom Git commit messages
03:26 Customizing Copilot functionality in VS Code
05:00 Going all in with markdown files as instructions
🔗 Links:
Get Copilot: https://aka.ms/get-copilot
Instruction Snippets for JSONC: https://gist.github.com/robconery/f93d016ace16feb7156f9b7905f3f499
🎙️ Featuring: @rconery
#vscode #copilot #githubcopilot
✅ Learn how to build robust and scalable software architecture: https://arjan.codes/checklist.
Want your AI tools to actually *do* something? In this video, I’ll show you how to integrate external tools with language models using **MCP (Model Context Protocol)**. You’ll learn two common architecture patterns, see real code examples, and get tips on keeping your setup clean and scalable. Whether you’re building for Claude, ChatGPT, or any other LLM—this is how you connect your backend to AI.
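For reference, here is a minimal sketch of the "server" side of that pattern using the official `mcp` Python SDK (FastMCP). The tool name and logic below are invented for illustration and are not taken from the video or its repository.

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (FastMCP).
# The tool below is a made-up example, not the code from the video.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. Claude Desktop) can launch
    # this script and call word_count as a tool.
    mcp.run()
```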
🔥 GitHub Repository: https://git.arjan.codes/2025/mcp-server.
🎓 ArjanCodes Courses: https://www.arjancodes.com/courses.
🔖 Chapters:
0:00 Intro
0:46 What is MCP?
3:14 YouTube MCP Version 1
7:58 YouTube MCP Version 2
12:18 Final Thoughts
#arjancodes #softwaredesign #python
Yann LeCun "Mathematical Obstacles on the Way to Human-Level AI"
Yann LeCun, Meta, gives the AMS Josiah Willard Gibbs Lecture at the 2025 Joint Mathematics Meetings on “Mathematical Obstacles on the Way to Human-Level AI.” This talk was introduced by Bryna Kra, Northwestern University, President of the AMS.
Malleable software: Restoring user agency in a world of locked-down apps
The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.
My current theory of agentic programming: people are amazing at adapting the tools they're given and totally underestimate the extent to which they do it, and the amount of skill we build doing that is an incidental consequence of how badly the tools are designed.
Vector Search RAG Tutorial – Combine Your Data with LLMs with Advanced Search
Learn how to use vector search and embeddings to easily combine your data with large language models like GPT-4. You will first learn the concepts and then create three projects.
✏️ Course developed by Beau Carnes.
💻 Code: https://github.com/beaucarnes/vector-search-tutorial
🔗 Access MongoDB Atlas: https://cloud.mongodb.com/
🏗️ MongoDB provided a grant to make this course possible.
⭐️ Contents ⭐️
⌨️ (00:00) Introduction
⌨️ (01:18) What are vector embeddings?
⌨️ (02:39) What is vector search?
⌨️ (03:40) MongoDB Atlas vector search
⌨️ (04:30) Project 1: Semantic search for movie database
⌨️ (32:55) Project 2: RAG with Atlas Vector Search, LangChain, OpenAI
⌨️ (54:36) Project 3: Chatbot connected to your documentation
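As a rough illustration of what Project 1 builds, here is a hedged sketch of an Atlas Vector Search query from Python. The cluster URI, database and collection names (`sample_mflix.embedded_movies`), index name (`vector_index`), embedding field (`plot_embedding`), and embedding model are placeholders and assumptions, not the course's actual code.

```python
# Sketch of an Atlas Vector Search query in the spirit of Project 1.
# Connection string, collection, index, and field names are placeholders.
from pymongo import MongoClient
from openai import OpenAI

mongo = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = mongo["sample_mflix"]["embedded_movies"]
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def semantic_search(query: str, limit: int = 5):
    # Embed the query with the same model used to embed the documents.
    query_vector = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding

    # $vectorSearch runs an approximate nearest-neighbour search on Atlas.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",    # name of the Atlas vector index
                "path": "plot_embedding",   # field holding the stored vectors
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {"$project": {"_id": 0, "title": 1, "plot": 1}},
    ]
    return list(collection.aggregate(pipeline))

print(semantic_search("a heist that goes wrong"))
```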
🎉 Thanks to our Champion and Sponsor supporters:
👾 davthecoder
👾 jedi-or-sith
👾 南宮千影
👾 Agustín Kussrow
👾 Nattira Maneerat
👾 Heather Wcislo
👾 Serhiy Kalinets
👾 Justin Hual
👾 Otis Morgan
👾 Oscar Rahnama
--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
❤️ Support for this channel comes from our friends at Scrimba – the coding platform that's reinvented interactive learning: https://scrimba.com/freecodecamp
This llama.cpp pull request, adding server vision support via libmtmd (via Hacker News), was merged earlier today. The PR finally adds full support for vision models to the excellent llama.cpp project. It's documented …
Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models
We release the Qwen3 Embedding series, a new addition to the Qwen model family. These models are specifically designed for text embedding, retrieval, and reranking tasks, built on the Qwen3 foundation model. Leveraging Qwen3's robust multilingual text understanding capabilities, the series achieves state-of-the-art performance across multiple benchmarks for text embedding and reranking. We have open-sourced this series of text embedding and reranking models under the Apache 2.0 license.
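For a sense of how these models are used, here is a small sketch that loads the 0.6B embedding checkpoint through `sentence-transformers` and scores a query against two documents. The model ID is the one listed in the Hugging Face collection; the texts and the rest of the setup are illustrative only.

```python
# Sketch: querying with a Qwen3 embedding model via sentence-transformers.
# Model ID is the 0.6B checkpoint from the Hugging Face collection.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["What is the capital of France?"]
documents = [
    "Paris is the capital and most populous city of France.",
    "The Great Wall of China is a series of fortifications in northern China.",
]

# Encode both sides and score them with cosine similarity.
query_emb = model.encode(queries)
doc_emb = model.encode(documents)
print(util.cos_sim(query_emb, doc_emb))  # higher score = closer match
```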
How to Fine Tune your own LLM using LoRA (on a CUSTOM dataset!)
That gameboy blender animation...took 6 hours to render 😅. Anyway, had a ton of fun coding this up and finally getting back to some proper ML. I've been thi...
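As context for the approach in the video (not its actual code), here is a minimal sketch of attaching LoRA adapters to a causal LM with Hugging Face `peft`; the base model and hyperparameters are placeholders.

```python
# Sketch: attaching LoRA adapters to a small causal LM with Hugging Face PEFT.
# Base model and hyperparameters are placeholders, not the video's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here the model can be passed to a standard transformers Trainer
# (or TRL's SFTTrainer) together with the custom dataset.
```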
New family of embedding models from Qwen, in three sizes (0.6B, 4B, 8B) and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face …