GraphBench: Next-generation graph learning benchmarking
GraphBench: Next-generation graph learning benchmarking

We present GraphBench, a comprehensive graph learning benchmark across domains and prediction regimes. GraphBench standardizes evaluation with consistent splits, metrics, and out-of-distribution checks, and includes a unified hyperparameter tuning framework. We also provide strong baselines with state-of-the-art message-passing and graph transformer models and easy plug-and-play code to get you started.

·linkedin.com·
GraphBench: Next-generation graph learning benchmarking
Why Versioning Matters for Graph Databases
Why Versioning Matters for Graph Databases
In this episode of Founders Discussion, TuringDB founders Adam Amara and Rémy Boutonnet sit down to discuss one of the most important and often overlooked ca...
Why Versioning Matters for Graph Databases | Founders Discussion with Adam & Rémy (TuringDB, 37:21)
·youtube.com·
Why Versioning Matters for Graph Databases
SEO sharing experience in building a knowledge graph using an LLM
SEO sharing experience in building a knowledge graph using an LLM
SEOs have been mildly obsessed with knowledge graphs for a while, but mostly because we want to ensure that our websites/brands/clients are well represented in Google’s Knowledge Graph, which makes sense. What not many of us have had experience with (myself included) is building our own knowledge graphs. It is technically challenging and not immediately apparent what the benefit is, and some people aren’t aware that it isn't exclusive to Google. Pioneers like Andrea Volpini and Dixon Jones have long been advocates for approaching this in a more nuanced way. Right now, I’m just learning in public, so stick with me.

So what have I learned?
- Structured data is IMPORTANT. Not just schema: having tried to reliably infer semantic triples across a 1,000+ document set using an LLM, it’s clear that easily understandable/scrapable data is KEY.
- As a second point, I might not recommend doing this with an LLM at all… it depends on your objective (more below). But using a merchant feed (for example) OR schema and graphing that would ultimately be easier.
- It’s too easy to forget to provide explicit context for key information. For example, when you strip a page down to JUST its readable content, does it still make sense?

Maybe the key question here is, “Why build yourself a knowledge graph, though?” It’s an excellent opportunity to learn, but also quite a painful experience, especially when each full processing run has taken about 18 hours, so you need a good reason. Here are my goals:
- A searchable knowledge store/database, particularly looking for “facts”; great for content writing.
- Trying to document the knowledge an LLM can easily infer, by way of a basic simulation of how other models may understand content. Not in a reverse-engineering sense, but to really highlight what information IS NOT clear/specific.
- A grounded knowledge store for LLM content generation: rather than a model making up key details, we can use this KG as a store of already signed-off details. Creating a basic API for Cursor/Claude Code to use as a Tool is where I am starting.

This is not an “are you even an SEO if you’re not building knowledge graphs in 2026” post; that would be madness. This process is challenging and clearly not worthwhile for many, many people. But if you have large-scale content challenges and want a really in-depth way to understand them, I can totally see the value. If this isn’t you, but you want to benefit from some elements of this, I’d probably ask Dixon about Waiky 😉
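As a rough sketch of the kind of pipeline the post describes (not the author's actual code), the snippet below turns LLM-extracted triples into a queryable graph. The prompt wording, the call_llm stub, and the provenance field are hypothetical placeholders you would swap for your own stack.

```python
# Minimal sketch of the triple-extraction-to-knowledge-graph idea described above.
# The LLM call is stubbed out: call_llm and the prompt wording are hypothetical
# placeholders, not an API taken from the post.
import json
import networkx as nx

TRIPLE_PROMPT = (
    "Extract factual (subject, predicate, object) triples from the text below. "
    "Return a JSON list of 3-element lists.\n\nTEXT:\n{text}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you use (hosted API, local model, ...)."""
    raise NotImplementedError("plug in your own LLM client here")

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Ask the LLM for triples and parse its JSON answer (no retries/validation here)."""
    raw = call_llm(TRIPLE_PROMPT.format(text=text))
    return [tuple(t) for t in json.loads(raw) if len(t) == 3]

def build_graph(documents: dict[str, str]) -> nx.MultiDiGraph:
    """One node per entity, one edge per triple, with provenance back to the source doc."""
    graph = nx.MultiDiGraph()
    for doc_id, text in documents.items():
        for subj, pred, obj in extract_triples(text):
            graph.add_edge(subj, obj, predicate=pred, source=doc_id)
    return graph

# Example use once the graph is built: pull signed-off "facts" about one entity
# before writing content, e.g.
#   facts = [(u, d["predicate"], v) for u, v, d in kg.out_edges("Acme Ltd", data=True)]
```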
·linkedin.com·
SEO sharing experience in building a knowledge graph using an LLM
Enhancing Portfolio Diversification with Link Prediction: A Graph Data Science Approach - Neo4j Industry Use Cases
Enhancing Portfolio Diversification with Link Prediction: A Graph Data Science Approach - Neo4j Industry Use Cases

🚀 Rethinking Portfolio Diversification with Graph Data Science

Traditional correlation matrices only tell us where markets have been—not where they're going. In today’s hyper-connected financial landscape, that’s not enough.

In the latest work by Nuno Pedro L., we use Neo4j Graph Data Science to model equities as a dynamic network and apply Link Prediction to anticipate future relationships between assets. Instead of reacting to correlations after they form, we can predict them—uncovering hidden risks, emerging clusters, and new opportunities for statistical arbitrage before they appear in traditional models.

🔍 Why it matters:

  • Captures non-linear, evolving market structures
  • Reveals early signals of contagion or co-movement
  • Supports smarter diversification and proactive risk management

If you’re exploring the future of quantitative finance, network analytics, or portfolio intelligence, this approach is a game-changer.

📈 Graph data science isn’t just descriptive—it’s predictive.
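For intuition only, here is a small, simplified stand-in for the approach described above. It is not the Neo4j Graph Data Science link-prediction pipeline from the article; it builds a correlation-threshold graph over synthetic returns with networkx and scores unconnected asset pairs with a classic link-prediction heuristic. The tickers, factor loadings, and the 0.6 threshold are made-up assumptions for illustration.

```python
# Simplified stand-in for the link-prediction idea: edges are strong historical
# correlations; unconnected pairs with high scores are candidate future co-movements
# that a diversification strategy may want to avoid pairing. Synthetic data only.
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(42)
loadings = {"AAA": 0.9, "BBB": 0.8, "CCC": 0.7, "DDD": 0.5, "EEE": 0.0}  # toy factor model
factor = rng.normal(size=250)
returns = pd.DataFrame(
    {t: b * factor + rng.normal(scale=0.5, size=250) for t, b in loadings.items()}
)

# Edge whenever the historical correlation of daily returns exceeds a threshold.
corr = returns.corr()
tickers = list(loadings)
G = nx.Graph()
G.add_nodes_from(tickers)
for i, a in enumerate(tickers):
    for b in tickers[i + 1:]:
        if abs(corr.loc[a, b]) > 0.6:
            G.add_edge(a, b, weight=float(corr.loc[a, b]))

# Score currently unconnected pairs with the Jaccard heuristic: shared neighbors
# suggest a link (i.e. a correlation) is likely to appear.
candidates = list(nx.non_edges(G))
scores = sorted(nx.jaccard_coefficient(G, candidates), key=lambda t: -t[2])
for u, v, score in scores[:5]:
    print(f"{u}-{v}: predicted-link score {score:.2f}")
```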

·neo4j.com·
Enhancing Portfolio Diversification with Link Prediction: A Graph Data Science Approach - Neo4j Industry Use Cases
Graph Embeddings at scale with Spark and GraphFrames
Graph Embeddings at scale with Spark and GraphFrames

One of my biggest contributions to the GraphFrames project is scalable graph embeddings. While not perfect, my implementation is inexpensive to compute and horizontally scalable. It uses a combination of random walks and Hash2Vec, an algorithm based on random projection theory.

In the post, I provide the full code and an explanation of all the engineering decisions I made. For example, I explain why I used Reservoir Sampling for neighbor aggregation, or why I chose mapPartitions instead of the DataFrame API.

The pull request (PR) has not been merged yet, so if you have any ideas on how to improve the approach, I would love to hear them! Overall, it appears to be a good, inexpensive way to create scalable embeddings of graph vertices that can easily be incorporated into existing classification or recommender system pipelines. Finally, GraphFrames will have real capabilities for graph data science! At least, I hope so. :)
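For readers who want the gist without Spark, here is a single-machine toy sketch of the random-walk-plus-hashing idea. It is not the GraphFrames implementation from the post (which relies on reservoir sampling and mapPartitions for scale); the dimension, walk parameters, and hash scheme are illustrative choices.

```python
# Toy illustration of random-walk + signed-hashing ("Hash2Vec"-style) embeddings:
# visited neighbors are hashed into a fixed-size vector per vertex, which approximates
# a random projection of the walk co-occurrence statistics.
import hashlib
import random
import numpy as np
import networkx as nx

def signed_hash(token: str, dim: int) -> tuple[int, int]:
    """Map a token to a (bucket index, +/-1 sign) pair using a stable hash."""
    digest = int(hashlib.md5(token.encode()).hexdigest(), 16)
    return digest % dim, 1 if (digest >> 64) % 2 == 0 else -1

def random_walk(graph: nx.Graph, start, length: int, rng: random.Random) -> list:
    walk = [start]
    while len(walk) < length:
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def embed(graph: nx.Graph, dim: int = 64, walks_per_node: int = 10,
          walk_length: int = 5, seed: int = 0) -> dict:
    """Accumulate signed hashes of visited nodes into one fixed-size vector per vertex."""
    rng = random.Random(seed)
    vectors = {v: np.zeros(dim) for v in graph}
    for v in graph:
        for _ in range(walks_per_node):
            for visited in random_walk(graph, v, walk_length, rng)[1:]:
                idx, sign = signed_hash(str(visited), dim)
                vectors[v][idx] += sign
    # L2-normalise so cosine similarity is meaningful in downstream pipelines.
    return {v: vec / (np.linalg.norm(vec) or 1.0) for v, vec in vectors.items()}

embeddings = embed(nx.karate_club_graph())
print(len(embeddings), embeddings[0].shape)
```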

·semyonsinchenko.github.io·
Graph Embeddings at scale with Spark and GraphFrames
Semantics, Platforms, and the Illusion of Control: Why Open Standards Alone Will Not Save Enterprise Meaning | LinkedIn
Semantics, Platforms, and the Illusion of Control: Why Open Standards Alone Will Not Save Enterprise Meaning | LinkedIn
Recent discussions around platform consolidation, real-time data pipelines, and AI architecture increasingly emphasize the strategic role of formal semantics. The argument is compelling: as cloud providers, streaming platforms, and AI stacks become vertically integrated, enterprises risk losing not
·linkedin.com·
Semantics, Platforms, and the Illusion of Control: Why Open Standards Alone Will Not Save Enterprise Meaning | LinkedIn
Hannah Bast and Ruben Verborgh discuss Benchmarking of Triple Stores and SPARQL engines
Hannah Bast and Ruben Verborgh discuss Benchmarking of Triple Stores and SPARQL engines
Join Hannah Bast and myself on Friday 12 December at 10am Eastern / 4pm CET to discuss “Benchmarking of Triple Stores and SPARQL engines” at https://lnkd.in/eEJr69zu
·linkedin.com·
Hannah Bast and Ruben Verborgh discuss Benchmarking of Triple Stores and SPARQL engines
Cosmograph graph visualization tool
Cosmograph graph visualization tool
Huge news for Cosmograph 🪐 While everyone was on Thanksgiving break, I was polishing up the next big Cosmograph update, which I'm finally ready to share!

More than three years after the initial release, Cosmograph remains the only single-node web-based tool capable of visualizing graphs with 1 million points and way more than a million links, thanks to its unique GPU force layout and rendering engine, cosmos.gl. However, it also had a few major weaknesses, like poor memory management and limited analytical capabilities. Version 2.0 of Cosmograph solves these problems by incorporating:
- DuckDB (the best in-memory analytics database);
- Mosaic (the fastest cross-filtering and visual analytics framework for the web);
- SQLRooms (an open-source React toolkit for human and agent collaborative analytics apps by Ilya Boyandin) as its foundation;
- The latest version of cosmos.gl (our core force simulation and rendering engine that recently joined OpenJS), to give you even faster performance, more forces, and the long-awaited point-dragging functionality!

What does this mean in practice?
- Work with larger datasets and use SQL (thanks to WebAssembly and DuckDB);
- Much better performance (filtering, timeline, changing visual properties of the graph, etc.);
- Open Parquet files natively;
- Save your graphs to the cloud and share them with the world easily.

And if you work with ML embeddings and love Apple's Embedding Atlas (https://lnkd.in/gsWt6CNT), you'll love Cosmograph too, since they have a lot in common. If all the above has excited you, go check out Cosmograph's new beautiful website, and share the news with the world 🙏 https://cosmograph.app
Cosmograph
·linkedin.com·
Cosmograph graph visualization tool
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
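A minimal usage sketch of the workflow described above, assuming the OSMnx 1.x API (some functions moved to submodules in OSMnx 2.x); the place name and coordinates are arbitrary examples.

```python
# Minimal OSMnx sketch: download a drivable street network, attach speeds and travel
# times, and route between two points. Place name and coordinates are arbitrary
# examples; function names follow the OSMnx 1.x style and may differ in OSMnx 2.x.
import osmnx as ox
import networkx as nx

# Download the drivable street network for a place from OpenStreetMap.
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")

# Impute edge speeds and travel times from OSM highway tags.
G = ox.add_edge_speeds(G)
G = ox.add_edge_travel_times(G)

# Snap two example coordinates to their nearest graph nodes and route between them.
orig = ox.distance.nearest_nodes(G, X=-122.231, Y=37.824)
dest = ox.distance.nearest_nodes(G, X=-122.215, Y=37.819)
route = nx.shortest_path(G, orig, dest, weight="travel_time")
print(f"Route with {len(route)} nodes")
```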
·linkedin.com·
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
Most agentic systems hardcode their capabilities. 🔳 This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
·linkedin.com·
Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.