A Nifty Tool For Counting Neopixels
Google’s Search AI Is Absolutely Horrible at Geography
Naveen Rao on X
ChatGPT's fate hangs in the balance as OpenAI reportedly edges closer to bankruptcy
Paper page - CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
samim on X
François Chollet on X
Chip Huyen on X
François Chollet on X
Siraj Raval on X
Where & When | 2023 Annular Eclipse – NASA Solar System Exploration
Music can be reconstructed from human auditory cortex activity using nonlinear decoding models
What Is a DPU?
“This is going to represent one of the three major pillars of computing going forward,” NVIDIA CEO Jensen Huang said
“The CPU is for general-purpose computing, the GPU is for accelerated computing, and the DPU, which moves data around the data center, does data processing.”
CPU vs. GPU vs. DPU: What Makes a DPU Different?
A DPU is a new class of programmable processor that combines three key elements. A DPU is a system on a chip, or SoC, that combines:
An industry-standard, high-performance, software-programmable, multi-core CPU, typically based on the widely used Arm architecture, tightly coupled to the other SoC components.
A high-performance network interface capable of parsing, processing and efficiently transferring data at line rate, or the speed of the rest of the network, to GPUs and CPUs (a sketch of this parsing step follows below).
A rich set of flexible and programmable acceleration engines that offload and improve application performance for AI and machine learning, zero-trust security, telecommunications and storage, among others.
All these DPU capabilities are critical to enabling an isolated, bare-metal, cloud-native computing platform that will define the next generation of cloud-scale computing.
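The network interface's job of "parsing, processing and efficiently transferring data at line rate" is the same per-packet work that otherwise consumes host CPU cycles in the kernel networking stack. As a minimal sketch of what that parsing step involves, here is a plain-Python Ethernet/IPv4 header parser; the function name, field selection and demo frame are illustrative, not from the article, and a DPU performs this classification in NIC hardware rather than in software.

```python
import struct

def parse_eth_ipv4(frame: bytes) -> dict:
    """Extract the Ethernet header and, when EtherType is IPv4, the fixed IPv4 header."""
    # Ethernet: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fields = {"dst_mac": dst.hex(":"), "src_mac": src.hex(":"), "ethertype": hex(ethertype)}
    if ethertype == 0x0800:  # IPv4
        # 20-byte IPv4 header, assuming no IP options
        (ver_ihl, _tos, total_len, _ident, _flags_frag,
         ttl, proto, _cksum, src_ip, dst_ip) = struct.unpack("!BBHHHBBH4s4s", frame[14:34])
        fields.update({
            "ip_version": ver_ihl >> 4,
            "total_len": total_len,
            "ttl": ttl,
            "protocol": proto,  # 6 = TCP, 17 = UDP
            "src_ip": ".".join(map(str, src_ip)),
            "dst_ip": ".".join(map(str, dst_ip)),
        })
    return fields

# Demo: broadcast Ethernet frame carrying a minimal IPv4/TCP header
demo = struct.pack("!6s6sH", b"\xff" * 6, bytes.fromhex("001122334455"), 0x0800)
demo += struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0x4000, 64, 6, 0,
                    bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_eth_ipv4(demo))
```

Doing this match-and-classify step in hardware for every packet at line rate, then steering the payload directly to the right GPU or CPU, is the part that does not scale when left to host software.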
At a minimum, there are 10 capabilities the network data path acceleration engines need to be able to deliver:
Data packet parsing, matching and manipulation to implement Open vSwitch (OVS)
RDMA data transport acceleration for Zero Touch RoCE
GPUDirect accelerators to bypass the CPU and feed networked data directly to GPUs (both from storage and from other GPUs)
TCP acceleration including RSS, LRO, checksum, etc.
Network virtualization for VXLAN and Geneve overlays and VTEP offload (see the VXLAN header sketch after this list)
Traffic shaping “packet pacing” accelerator to enable multimedia streaming, content distribution networks and the new 4K/8K Video over IP (RiverMax for ST 2110)
Precision timing accelerators for telco cloud RAN such as 5T for 5G capabilities
Crypto acceleration for IPsec and TLS performed inline, so all other accelerations are still operational
Virtualization support for SR-IOV, VirtIO and para-virtualization
Secure Isolation: root of trust, secure boot, secure firmware upgrades, and authenticated containers and application lifecycle management
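Most of these capabilities come down to header parsing and rewriting performed in the DPU's hardware pipeline. To make the VXLAN overlay item concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348; the function names are illustrative, the outer Ethernet/IP/UDP encapsulation (UDP port 4789) is omitted for brevity, and a DPU would perform this in the NIC data path rather than in host software.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag from RFC 7348

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner L2 frame."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags (I bit set), bytes 1-3: reserved
    header = struct.pack("!BBBB", VXLAN_FLAG_VNI_VALID, 0, 0, 0)
    # Bytes 4-6: 24-bit VNI, byte 7: reserved
    header += struct.pack("!I", vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    if not packet[0] & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    (vni_field,) = struct.unpack("!I", packet[4:8])
    return vni_field >> 8, packet[8:]

# Round-trip demo for a hypothetical tenant network on VNI 5001
inner = b"\xff" * 14  # stand-in for an inner Ethernet frame
assert vxlan_decap(vxlan_encap(inner, vni=5001)) == (5001, inner)
```

The round-trip assert states the invariant the offload must preserve: the tenant's inner frame and its 24-bit VNI pass through untouched, while the host CPU never has to build or strip the overlay header itself.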
Data Processing Units: What Are DPUs and Why Do You Want Them?
Key vendors in the DPU market include NVIDIA, Marvell, Fungible (acquired by Microsoft), Broadcom, Intel, Resnics, and AMD Pensando.
How do CPU, GPU and DPU differ from one another? | TechTarget
Saudi Arabia and UAE race to buy Nvidia chips to power AI ambitions
Craftsmanship in Perfumery: A Flourishing Narrative in the Fragrance World | LinkedIn
Canadian court hands Organigram partial win in ‘edible’ cannabis dispute
A "Ring of Fire" Solar Eclipse Will Be Visible in North America for the First Time in 11 Years—Here's How and When to See It
AI Used to Reproduce Music by Reading Minds - Decrypt
Alejandro Franceschi on LinkedIn: #sentiment #wga #sagaftra #ai #automation
Jackie Peters on LinkedIn: A good reminder!
Travis Laurendine on LinkedIn: We are so fucking back
Indrajeet Sisodiya on LinkedIn: Unique use case of AI used alongside actual footage to recreate moments…
How does the Readwise to Tana export integration work?
Tana: Readwise Integration
Tanarian Brain
PromptLab Gallery
GitHub - xorbitsai/inference: Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
San Diego man bought flat on cruise ship as it's cheaper than home and he can travel world