Developer's guide: how to build a knowledge graph

GenAI
How to Build a Knowledge Graph in 7 Steps
Discover how to build a knowledge graph in 7 simple steps, from defining your use case to creating a model to ingesting your data.
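As a hedged illustration of the final ingestion step, the sketch below loads one relationship into Neo4j with the official Python driver; the labels, relationship type, and credentials are placeholders rather than anything prescribed by the article.

```python
# Hedged sketch of the "ingest your data" step against a local Neo4j instance.
# The Person/Company labels, WORKS_AT relationship, and credentials are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ingest_employment(tx, person: str, company: str):
    # MERGE keeps the load idempotent: re-running it does not create duplicate nodes.
    tx.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (c:Company {name: $company}) "
        "MERGE (p)-[:WORKS_AT]->(c)",
        person=person,
        company=company,
    )

with driver.session() as session:
    session.execute_write(ingest_employment, "Ada Lovelace", "Analytical Engines Ltd.")
driver.close()
```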
Design and Develop a RAG Solution - Azure Architecture Center
How to plan a RAG project
Introduction to LlamaIndex - Hugging Face Agents Course
A Visual Guide to LLM Agents
Explore the main components of what makes LLM Agents special.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Agents interact with their environment and typically consist of several important components.
chain-of-thought
This is where planning comes in. Planning in LLM Agents involves breaking a given task up into actionable steps.
The terms “reasoning” and “thinking” are used a bit loosely here, as one can argue whether this is human-like thinking or merely breaking the answer down into structured steps.
Prompting without any examples (zero-shot prompting)
Providing examples (also called few-shot prompting)
ReAct
Reflecting
These Multi-Agent systems usually consist of specialized Agents, each equipped with their own toolset and overseen by a supervisor.
three LLM roles
SELF-REFINE
To enable planning in LLM Agents, let’s first look at the foundation of this technique, namely reasoning.
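As a rough, hedged sketch of how the ReAct pattern above ties reasoning to acting (not code from the linked guide), the loop alternates thought, action, and observation until a final answer appears; call_llm and run_tool are hypothetical placeholders for an LLM call and a tool executor.

```python
# Hedged sketch of a ReAct-style loop. call_llm() and run_tool() are hypothetical
# placeholders: the first returns the model's next "Thought/Action" step, the second
# executes the requested tool and returns its result as text.
def react_agent(question: str, call_llm, run_tool, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)  # e.g. "Thought: I need to look this up.\nAction: search[ReAct]"
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            observation = run_tool(action)  # feed the tool result back as an observation
            transcript += f"Observation: {observation}\n"
    return "No final answer within the step budget."
```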
Evaluating Chunking Strategies for Retrieval | Chroma Research
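For context on what such an evaluation compares, here is a hedged sketch of one common baseline, fixed-size character chunks with overlap; the sizes are illustrative defaults, not Chroma's findings.

```python
# Hedged sketch of a baseline chunking strategy: fixed-size character windows with
# overlap. The chunk_size and overlap values are illustrative, not recommendations.
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start : start + chunk_size])
        start += chunk_size - overlap
    return chunks

print(len(chunk_text("lorem ipsum " * 500)))  # number of chunks for a ~6,000-character text
```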
Transformation Agent | Weaviate
This Weaviate Agent is in technical preview.
Do you know the answer to these three questions? You should... 1. What are vector embeddings and embedding models? 2. What’s the benefit of having a vector database for vector search? 3. What’s on the next horizon for AI applications? I just finished a 3-part webinar series…
— Victoria Slocum (@victorialslocum), March 5, 2025
Weaviate Agentic Architectures eBook
Multi-vector embeddings (ColBERT, ColPali, etc.) | Weaviate
Learn how to use multi-vector embeddings in Weaviate.
How to Hack AI Agents and Applications
Learn how to hack AI agents and applications with this expert guide. Find vulnerabilities, prompt injection risks, and testing strategies for AI security.
Chat bot considerations
AIEBootcamp/09_Finetuning_Embeddings/Fine_tuning_Embedding_Models_for_RAG_using_RAGAS.ipynb at main · apatti/AIEBootcamp
AI Engineering bootcamp.
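As a rough orientation only (not the notebook's actual code), fine-tuning an embedding model for RAG typically means training on (query, relevant passage) pairs; the hedged sketch below uses sentence-transformers with a contrastive loss, and the base model, data, and hyperparameters are placeholders.

```python
# Hedged sketch of embedding-model fine-tuning on (query, passage) pairs with
# sentence-transformers. Base model, data, and hyperparameters are placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

train_examples = [
    InputExample(texts=["What is LoRA?", "LoRA adds small low-rank adapter matrices ..."]),
    InputExample(texts=["What is RAG?", "Retrieval-augmented generation grounds answers ..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)  # pulls queries toward their passages

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
model.save("finetuned-embedder")
```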
15 Best Graph Visualization Tools for Your Neo4j Graph Database
Discover the best graph visualization tools for visualizing your Neo4j graph database, including development, exploration, dashboarding, and embedded tools.
AI-Tools
Many students and researchers already use tools with integrated artificial intelligence (AI). What can AI-supported tools achieve, what opportunities do they offer, and what are their limitations? The following list is an introductory selection and implies no value judgement.
AWS Flash - AWS Partner: Generative AI on AWS for Financial Services Industries (Technical) - AWS Skill Builder
Your learning center to build in-demand cloud skills.
Jérémy Ravenel on LinkedIn: What are the key ontology standards you should have in mind?
Ontology standards are crucial for knowledge representation and reasoning in AI and data…
From PDFs to Insights: Structured Outputs from PDFs with Gemini 2.0
Learn how to extract structured data from PDFs with Gemini 2.0 and Pydantic.
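A minimal, hedged sketch of that workflow with the google-genai SDK might look like the following; the Invoice schema, file name, and model id are illustrative assumptions, not the article's exact code.

```python
# Hedged sketch: structured extraction from a PDF with the google-genai SDK and Pydantic.
# The Invoice schema, file name, and model id are illustrative placeholders.
from pydantic import BaseModel
from google import genai
from google.genai import types

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str

client = genai.Client(api_key="YOUR_API_KEY")

with open("invoice.pdf", "rb") as f:
    pdf_part = types.Part.from_bytes(data=f.read(), mime_type="application/pdf")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[pdf_part, "Extract the invoice fields."],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Invoice,
    ),
)
print(response.parsed)  # a parsed Invoice instance (or None if parsing failed)
```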
GitHub - langchain-ai/langgraph-supervisor
A LangGraph library for building hierarchical multi-agent systems coordinated by a supervisor agent.
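A hedged sketch of the supervisor pattern with this library might look as follows; the model choice, tools, and prompts are illustrative assumptions rather than the repository's exact example.

```python
# Hedged sketch: a supervisor routing between two specialized agents.
# Model, tools, and prompts are illustrative placeholders.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def web_search(query: str) -> str:
    """Hypothetical placeholder search tool."""
    return f"Top result for: {query}"

model = ChatOpenAI(model="gpt-4o-mini")

math_agent = create_react_agent(
    model, tools=[add], name="math_expert",
    prompt="You only handle arithmetic questions.",
)
research_agent = create_react_agent(
    model, tools=[web_search], name="researcher",
    prompt="You answer general research questions.",
)

workflow = create_supervisor(
    [math_agent, research_agent],
    model=model,
    prompt="Route math questions to math_expert and everything else to researcher.",
)
app = workflow.compile()
result = app.invoke({"messages": [{"role": "user", "content": "What is 17 + 25?"}]})
```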
recipes/weaviate-features/generative-search/generative_search_anthropic/rag_with_anthropic_citations.ipynb at main · weaviate/recipes
This repository shares end-to-end notebooks on how to use various Weaviate features and integrations!
"regular people don't fine-tune VLMs"
but wtf not?
- skill gap
- high fine-tuning costs
- lack of standards and unified approaches
over the past few weeks I've been working on maestro, a streamlined tool for VLM fine-tuning
link:
— SkalskiP (@skalskip92)
🚀 Getting Started — Oumi
Open source: it works!
Two months ago user durable-racoon posted about DocumentContextExtractor, their iteration on a technique for improving the accuracy of RAG that both and had made demo implementations of.
Contextual Retrieval improves the…
— LlamaIndex 🦙 (@llama_index)
The recipes repo is such an underrated developer resource.
Here are 8 notebooks you should know about:
1. Vanilla vector search:
2. Image similarity search:
3. Hybrid search:
4. Local RAG…
— Leonie (@helloiamleonie)
GitHub - getomni-ai/zerox: PDF to Markdown with vision models
PDF to Markdown with vision models.
Fine Tune DeepSeek R1 | Build a Medical Chatbot
In this video, we show you how to fine-tune DeepSeek R1, an open-source reasoning model, using LoRA (Low-Rank Adaptation). We'll also be using Kaggle, Hugging Face and Weights & Biases. We walk you through data preparation, model configuration, and optimization, including advanced techniques like four-bit quantization for efficient training on consumer GPUs.
By the end of this tutorial, you’ll be equipped with the skills to customize DeepSeek R1 for your own specialized tasks, such as medical reasoning.
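Since the description mentions LoRA and four-bit quantization, here is a hedged sketch of how those pieces are commonly wired together with transformers, bitsandbytes, and peft; the model id and hyperparameters are illustrative, not the video's exact settings.

```python
# Hedged sketch: load a model in 4-bit and attach LoRA adapters with peft.
# Model id and hyperparameters are illustrative, not the tutorial's exact values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```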
🔗 Resources & Tutorials
Kaggle Notebook: https://www.kaggle.com/code/aan1994/fine-tuning-deepseek-r1-reasoning-model-youtube
How Transformers Work: https://www.datacamp.com/tutorial/how-transformers-work
Fine-Tuning DeepSeek R1 Reasoning Model: https://www.datacamp.com/tutorial/fine-tuning-deepseek-r1-reasoning-model
DeepSeek R1 Blog Overview: https://www.datacamp.com/blog/deepseek-r1
Understanding Janus Pro: https://www.datacamp.com/blog/janus-pro
DeepSeek R1 Project Walkthrough: https://www.datacamp.com/tutorial/deepseek-r1-project
DeepSeek vs ChatGPT: https://www.datacamp.com/blog/deepseek-vs-chatgpt
Qwen-2.5 MAX Model: https://www.datacamp.com/blog/qwen-2-5-max
DeepSeek R1 Ollama Tutorial: https://www.datacamp.com/tutorial/deepseek-r1-ollama
📕 Chapters
00:00 Introduction
00:30 Why Fine-Tuning DeepSeek Matters
02:30 LoRA Explained with a PS5 Factory Analogy
05:20 Tools & Setup Overview
09:00 Loading DeepSeek R1 Model and Tokenizer
16:10 Formatting Data for Fine-Tuning
23:00 Applying LoRA for Efficient Updates
34:00 Configuring Training Parameters
43:15 Running the Fine-Tuning Process on Kaggle
46:00 Comparing Model Performance After Fine-Tuning
47:50 Final Thoughts on Future Models
transformerlab/transformerlab-app: Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
athina-ai/rag-cookbooks: This repository contains various advanced techniques for Retrieval-Augmented Generation (RAG) systems.
Qwen2.5-VL/cookbooks at main · QwenLM/Qwen2.5-VL
Qwen2.5-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
SurrealDB | Enhancing Retrieval-Augmented Generation with SurrealDB
GraphRAG: Enhancing Retrieval-Augmented Generation with SurrealDB, Gemini and DeepSeek