The Distributed Computing Environment (DCE) is a software system developed in the early 1990s from the work of the Open Software Foundation (OSF), a consortium founded in 1988 that included Apollo Computer (part of Hewlett-Packard from 1989), IBM, Digital Equipment Corporation, and others.[1][2] DCE supplies a framework and a toolkit for developing client/server applications.[3] The framework includes a remote procedure call (RPC) mechanism, a naming (directory) service, a time service, an authentication service, and a distributed file system (DCE/DFS).
In my latest filesystem-themed post I discussed a technique to perform distributed resource management more safely. This time I'll explain how one might effectively combine _Reed-Solomon coding_ and _cyclic redundancy checks_. The first gives us redundancy (we can lose disks and still recover our data), the second protects us against data corruption.
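The combination described above can be sketched in a few lines. This is a minimal illustration of my own (helper names are hypothetical, not the post's code): a single XOR parity shard, which is the simplest Reed-Solomon configuration and can rebuild exactly one lost shard, combined with a CRC-32 per shard so corruption is detected before reconstruction.

```python
import zlib

def xor_bytes(a, b):
    # Byte-wise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    """shards: equal-length byte strings. Returns (shards + parity, CRCs)."""
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    stored = list(shards) + [parity]
    return stored, [zlib.crc32(s) for s in stored]

def recover(stored, crcs):
    """Repair at most one missing or corrupted shard, return the data shards."""
    # The CRCs tell us *which* shard is bad; the parity lets us rebuild it.
    bad = [i for i, s in enumerate(stored)
           if s is None or zlib.crc32(s) != crcs[i]]
    if len(bad) > 1:
        raise ValueError("one parity shard can only repair one failure")
    if bad:
        survivors = [s for j, s in enumerate(stored) if j != bad[0]]
        rebuilt = survivors[0]
        for s in survivors[1:]:
            rebuilt = xor_bytes(rebuilt, s)
        stored[bad[0]] = rebuilt
    return stored[:-1]          # drop the parity shard

shards = [b"hell", b"o wo", b"rld!"]
stored, crcs = encode(shards)
stored[1] = None                # simulate a failed disk
assert b"".join(recover(stored, crcs)) == b"hello world!"
```

A production system would use more parity shards (full Reed-Solomon) to survive multiple failures, but the division of labor is the same: checksums locate the damage, erasure coding repairs it.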
Thunderbird packs up to 6,144 CPU cores into a single AI accelerator and scales up to 360,000 cores — InspireSemi's RISC-V 'supercomputer-cluster-on-a-chip' touts higher performance than Nvidia GPUs
InspireSemi preps 4-way Thunderbird card with up to 6,144 RISC-V cores.
BitTorrent, also referred to as simply torrent, is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner. The protocol is developed and maintained by Rainberry, Inc., and was first released in 2001.[2]
Netsukuku the Anarchical Parallel Internet || kuro5hin.org
Developed by the Freaknet, Netsukuku is a new p2p routing system intended to build a worldwide distributed, anonymous, anarchic network, separate from the Internet and independent of any servers, ISPs, or authority controls. In a p2p network every node acts as a router; to solve the problem of computing and storing routes for 2^128 nodes, Netsukuku uses a new meta-algorithm that exploits chaos to limit CPU consumption and fractals to keep the map of the whole net constantly under 2 KB. Netsukuku also includes the Abnormal Netsukuku Domain Name Anarchy, a non-hierarchical, decentralized hostname-management system that replaces DNS. It runs on GNU/Linux.
Distributed Data Management Architecture - Wikipedia
Distributed Data Management Architecture (DDM) is IBM's open, published software architecture for creating, managing and accessing data on a remote computer. DDM was initially designed to support record-oriented files; it was extended to support hierarchical directories, stream-oriented files, queues, and system command processing; it was further extended to be the base of IBM's Distributed Relational Database Architecture (DRDA); and finally, it was extended to support data description and conversion. Defined in the period from 1980 to 1993, DDM specifies necessary components, messages, and protocols, all based on the principles of object-orientation. DDM is not, in itself, a piece of software; the implementation of DDM takes the form of client and server products. As an open architecture, products can implement subsets of DDM architecture and products can extend DDM to meet additional requirements. Taken together, DDM products implement a distributed file system.
Here’s a tour of the pre-alpha demo release of GNOME Online Desktop included in Fedora 8. Learn more about what it does and how you can get involved in the project.
There’s some talk on desktop-devel-list about exactly what “online desktop” means, and in private mail I got a good suggestion to focus it such that end users would understand.
The Next Generation of the Enterprise WAN: From WAN to SD-WAN to Next-Gen WAN
As enterprise WAN needs change, enterprises are re-evaluating their WANs and seeking Next-Generation WANs that align with the latest set of enterprise challenges.
What SQL could learn from Elasticsearch Query DSL | Quesma Database Gateway
At Quesma we help customers innovate faster by re-shaping the way applications are built and connected to their databases. The Quesma database gateway enables development teams to modernise and evolve application architecture.
Driplang: triggering when events happen (or don't)
This post describes multiple ways I’ve seen projects handle event triggering in the past and suggests a minor tweak that I believe will greatly benefit projects…
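Triggering when something *doesn't* happen is the interesting half of the problem. As a hedged sketch of one common generic approach (the class and method names are mine, not Driplang's API): arm a deadline, and fire a handler only if no event arrives before it expires, resetting the deadline on every arrival.

```python
import threading

class AbsenceTrigger:
    """Fire on_missing if no event arrives within `timeout` seconds."""

    def __init__(self, timeout, on_missing):
        self.timeout = timeout          # seconds to wait for the next event
        self.on_missing = on_missing    # called if the deadline passes silently
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self.timeout, self.on_missing)
        self._timer.daemon = True
        self._timer.start()

    def event_arrived(self):
        # An arrival cancels the pending deadline and arms a fresh one.
        self._timer.cancel()
        self.start()

    def stop(self):
        self._timer.cancel()
```

A typical use is heartbeat monitoring: call `event_arrived()` on every heartbeat, and `on_missing` only runs once the heartbeats stop.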
Data processing modes: Streaming, Batch, Request-Response - Nussknacker
Stream processing is a method of processing data in which input ingestion and output production are continuous. Batch processing refers to processing a bounded set of data (a batch) and producing output for the entire batch. Request-response processing computes an output for each individual request as it arrives.
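The distinction between the first two modes can be made concrete with a toy example of my own (not Nussknacker code): the same word-count logic written once for batch mode, where the whole bounded input is available up front, and once for streaming mode, where records arrive one at a time and the output is updated continuously.

```python
def batch_count(records):
    # Batch: the full dataset is available; produce one result for the batch.
    counts = {}
    for word in records:
        counts[word] = counts.get(word, 0) + 1
    return counts

def streaming_count(record_stream):
    # Streaming: input may be unbounded; emit updated state after each record.
    counts = {}
    for word in record_stream:
        counts[word] = counts.get(word, 0) + 1
        yield dict(counts)

records = ["a", "b", "a"]
assert batch_count(records) == {"a": 2, "b": 1}
updates = list(streaming_count(iter(records)))
assert updates[-1] == {"a": 2, "b": 1}   # final state matches the batch result
```

The streaming version never needs the whole input in memory, which is exactly why the two modes are engineered so differently in practice.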
How We Built The Tech That Powers Our Serverless Cloud
Today we're releasing the container orchestrator that powers the Bismuth Cloud platform - our homegrown system that enables us to deploy your applications in seconds.