NADDOD
@naddodnetwork
Followers: 55 · Following: 34 · Media: 163 · Statuses: 300
NADDOD is a professional provider of innovative optical networking solutions for AI, data center, enterprise, and telecom customers. Tel: +65 6018 4212
Joined March 2022
🤔 How to choose a #400G #OSFP #Ethernet #transceiver? SR4, 2xSR4, VR4, SR8, DR4 — how should these different variants be matched in #switch-to-#NIC interconnects? This article helps you gain a comprehensive understanding. Click to read the full article👇 https://t.co/9InJpm648d
naddod.com
Explore 400G OSFP Ethernet optical transceivers for modern data centers, AI and HPC networks. Learn OSFP advantages, use cases, and NADDOD’s 400G OSFP solutions for high-density, high-performance...
In this article, we look at the evolution of #RoCE networking, the limitations of traditional #Ethernet at scale, and how NVIDIA #SpectrumX improves lossless transport, congestion control, and performance predictability for #AIdatacenters. 📷 Read more:
naddod.com
This article explores the evolution of RoCE networking and the technical challenges of Ethernet in large-scale AI clusters, and explains how NVIDIA Spectrum-X improves lossless transport, congestion...
🚀 As #ScaleAcross becomes inevitable for #AI scaling, this article explores how #NVIDIAMetroX2 enables cross–data center training and inference via low-latency remote #interconnects, unlocking true #DistributedComputing. https://t.co/ytt57zUzN6
naddod.com
Against the backdrop of scale-across becoming the inevitable path for AI computing power expansion, this paper analyses how NVIDIA MetroX-2, a metropolitan-scale AI networking system, enables...
Physical #AI is bringing AI into the real world. From perceptual AI to generative AI, and now to physical AI, AI is no longer confined to the digital space but can perceive, reason, and act in real-world environments. Detailed analysis 👉 https://t.co/2yLdw9ReMf
#physicalai
naddod.com
Physical AI is driving artificial intelligence from the digital space into the real world. This article systematically introduces the development stages, core working mechanisms, typical applicatio...
💡 What happens when AI lands directly on your desk? DGX Spark brings large models into individual development environments, while DGX Station brings data-center-class performance to the deskside. AI is truly moving local and into the real world 👉 https://t.co/9EVfJgYSk4
naddod.com
An in-depth analysis of how DGX Spark and DGX Station bring data center–class AI computing to the deskside, enabling local development, fine-tuning, and deployment of large models while reducing...
New additions to NADDOD #800G #OSFP modules! OSFP-800G-2DR4XH (MMS4X00-NM-FLT): Flat Top design, ideal for #NVIDIA DGX H100. 🔗: https://t.co/Uwkg1XWDip OSFP-800G-2xDR4S (MMS4X00-NS-T): up to 100m reach, ideal for SN5610 switches in RoCE networking. 🔗: https://t.co/v1tH3NtYtm
💡 #LLM inference slow? The KV cache might be sitting too far from the #GPU! #NVIDIA #BlueField-4 provides a dedicated context storage layer that lets the GPU access critical data quickly, making inference more efficient and responsive. 👉 Read the full analysis https://t.co/5GPhAzCdtm
#AI #Networking
naddod.com
Learn how the BlueField-4 DPU optimizes context management for large language models in AI systems, achieving cross-node sharing, low-latency access, and system-level scalability through a dedicated...
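To make the KV-cache idea above concrete, here is a minimal NumPy sketch of autoregressive decoding with and without cached K/V. It is a generic, hypothetical illustration of why fast access to cached context matters, not NVIDIA's or BlueField-4's implementation, and every name in it is made up for the example.

```python
# Minimal sketch (not BlueField-4 specific): why caching K/V matters for
# autoregressive decoding. Without a cache, every new token recomputes the
# keys/values for the whole prefix; with a cache, only the new token's
# projections are computed and appended.
import numpy as np

d = 64                      # model (head) dimension, illustrative
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def decode_no_cache(prefix):
    # Recompute K/V for the entire prefix at every step: O(n) projections per step.
    K, V = prefix @ Wk, prefix @ Wv
    return attend(prefix[-1] @ Wq, K, V)

def decode_with_cache(new_token, cache):
    # Project only the newest token and append it to the cached K/V: O(1) per step.
    cache["K"] = np.vstack([cache["K"], new_token @ Wk])
    cache["V"] = np.vstack([cache["V"], new_token @ Wv])
    return attend(new_token @ Wq, cache["K"], cache["V"])

tokens = rng.standard_normal((16, d))          # a toy 16-token prefix
cache = {"K": np.empty((0, d)), "V": np.empty((0, d))}
for t in range(1, len(tokens) + 1):
    out_slow = decode_no_cache(tokens[:t])
    out_fast = decode_with_cache(tokens[t - 1], cache)
    assert np.allclose(out_slow, out_fast)     # same result, far less recompute
print("cached K/V entries:", cache["K"].shape[0])
```

Without the cache, every decoding step re-projects the whole prefix; with it, each step only touches the new token, which is why where the cache lives and how quickly the GPU can reach it dominates inference latency.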
This article breaks down how CUDA Cores, Tensor Cores, and RT Cores are positioned and optimized for different workloads—explaining their distinct roles and where each core delivers the most value in modern GPU computing. 📷 https://t.co/i0Le3Ihcr6
naddod.com
Starting with the SM organizational structure of NVIDIA GPUs, this paper outlines the architectural positioning and capability focus of internal computing cores such as CUDA core, Tensor core, and RT...
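As a small companion to the post above, here is a back-of-the-envelope Python sketch of the standard peak-FLOPS estimate for CUDA cores (one fused multiply-add, i.e. two floating-point operations, per core per clock); the core count and clock below are placeholders, not any specific GPU's datasheet figures.

```python
# Back-of-the-envelope peak FP32 throughput on CUDA cores.
# Each CUDA core can issue one fused multiply-add (FMA) per clock,
# which counts as 2 floating-point operations.
# The numbers below are placeholders, not a specific GPU's datasheet values.

cuda_cores = 10_000          # illustrative core count
boost_clock_ghz = 1.8        # illustrative boost clock

peak_fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1e3
print(f"Peak FP32 ≈ {peak_fp32_tflops:.1f} TFLOPS")   # 10,000 * 2 * 1.8e9 FLOP/s = 36 TFLOPS
```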
🚀 This article takes a systematic look at #inference chips, from performance optimization and energy-efficiency advantages to representative products, helping you understand why inference #chips matter. 📖 Click to read and learn more: https://t.co/saEJPptAAm
#AI
naddod.com
AI inference is becoming a core factor in computing power costs. This article systematically analyzes the key differences between AI training and inference, introduces the advantages of inference...
💡 What determines the performance of #AI distributed training? Not the framework, but the underlying communication primitives! Check this article to learn more:
naddod.com
In-depth analysis of distributed training communication primitives to understand their role in large-scale AI training, using NCCL to illustrate how communication primitives affect the upper...
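As a rough illustration of the communication primitives the post above refers to, here is a minimal torch.distributed sketch of an all-reduce, the primitive data-parallel training invokes every step. It uses the CPU-only gloo backend so it runs anywhere; real multi-GPU training would select the nccl backend, and the rendezvous settings are illustrative.

```python
# Minimal sketch of the all-reduce communication primitive with torch.distributed.
# Real multi-GPU training would use the "nccl" backend; "gloo" is used here so the
# sketch runs on CPU. Process-group setup details are illustrative.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"   # illustrative rendezvous settings
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each rank holds its own "gradient"; all-reduce sums them across ranks,
    # which is the step data-parallel training relies on every iteration.
    grad = torch.full((4,), float(rank))
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {grad.tolist()}")     # every rank ends with the same sum

    dist.destroy_process_group()

if __name__ == "__main__":
    world = 2
    mp.spawn(worker, args=(world,), nprocs=world)
```

The training framework mostly decides when these calls happen; how fast the all-reduce itself completes is set by the collective library and the network underneath it.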
#NVIDIA #Spectrum-6 #Ethernet switches: #SN6810 single-chip 102.4T, #SN6800 four-chip expansion to 409.6T. Combined with #CPO optics and high-radix ports, they meet the needs of ultra-large-scale #AI clusters. Click here for the full analysis 👉 https://t.co/z1BTJRFPcz
naddod.com
In-depth analysis of the NVIDIA Spectrum-6 Ethernet switch: supporting 102.4T single-chip bandwidth, CPO co-packaged optics, 224G SerDes, and a high-radix port design, suitable for scale-out...
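For context on the 102.4T and 409.6T figures above, a simple division shows how aggregate switch bandwidth maps to port counts at a given port speed. This ignores breakout, oversubscription, and reserved capacity, so treat it as rough arithmetic rather than the actual SN6810/SN6800 port tables.

```python
# Rough port-count arithmetic from aggregate switch bandwidth.
# Plain bandwidth division only; not the product's real port configuration.

def max_ports(switch_gbps: int, port_gbps: int) -> int:
    return switch_gbps // port_gbps

for chip_gbps in (102_400, 409_600):        # 102.4T single chip, 409.6T four-chip system
    for port_gbps in (800, 1600):
        print(f"{chip_gbps / 1000}T -> {max_ports(chip_gbps, port_gbps)} x {port_gbps}G ports")
# 102.4T -> 128 x 800G or 64 x 1.6T; 409.6T -> 512 x 800G or 256 x 1.6T
```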
🚀 The #NVIDIA #Rubin #platform is built for #nextgeneration #AIcomputing. Through system-level co-design of #GPUs, #CPUs, interconnects, and networks, Rubin achieves a better balance of performance, cost, and security. 📷 Read Blog:
naddod.com
The NVIDIA Rubin platform is a new computing platform for next-generation AI. Through the co-design of GPUs, CPUs, interconnects, and networks, it achieves high performance, low cost, and system-le...
How do #NVIDIA #TensorCores actually work 🤔 and why have they become foundational to modern #AI #computing? This article breaks down the core concepts, underlying mechanisms, and architectural evolution behind Tensor Cores.📷 🔗: https://t.co/5BJreuwW2k
naddod.com
Focusing on the fundamental concepts, working principle, and evolutionary path of Tensor Cores within NVIDIA GPU microarchitectures, this article provides a systematic analysis of how Tensor Cores...
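To accompany the post above, here is a small NumPy sketch of the matrix multiply-accumulate (D = A·B + C) that a Tensor Core performs on a tile, with low-precision inputs and higher-precision accumulation. The 4x4 tile and the mixed-precision choice follow the commonly described early Tensor Core operation; the code models only the math, not the hardware.

```python
# Conceptual sketch of the matrix multiply-accumulate (MMA) a Tensor Core performs:
# D = A @ B + C on a small tile, with low-precision inputs and higher-precision
# accumulation. NumPy here only models the arithmetic, not the silicon.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)   # low-precision inputs
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)   # higher-precision accumulator

# Accumulate in FP32, as Tensor Cores typically do for FP16 inputs.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.round(3))
```

The hardware's value is that it performs this entire tile-sized MMA as a single fused operation per cycle, rather than as the many scalar FMAs a CUDA core would need.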
💡 In-depth analysis of #AWS's self-developed 3nm #AI chip #Trainium3: FP8 compute up to 2.52 PFLOPS and 144GB of HBM3e memory! 📈 How does it differ from #NVIDIA #GB300 and #Google #TPU? 👉 Learn more now:
naddod.com
Explore the architecture, memory bandwidth, and system scalability of Amazon's 3nm AI chip, Trainium 3. Compare it with NVIDIA GB300 and Google TPU Ironwood to analyze enterprise selection strategies...
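Taking only the 2.52 PFLOPS FP8 figure from the post above, here is a back-of-the-envelope estimate of forward-pass token throughput using the common 2·N-FLOPs-per-token rule of thumb. The model size and utilization are hypothetical assumptions, and real throughput also depends heavily on memory bandwidth and batching.

```python
# Back-of-the-envelope throughput from the FP8 figure quoted in the post.
# Rule of thumb: a dense transformer's forward pass costs about 2 * N FLOPs per
# token for N parameters. Model size and utilization below are hypothetical.

peak_fp8_flops = 2.52e15        # 2.52 PFLOPS, from the post
utilization = 0.4               # assumed sustained fraction of peak
n_params = 70e9                 # hypothetical 70B-parameter model

flops_per_token = 2 * n_params
tokens_per_sec = peak_fp8_flops * utilization / flops_per_token
print(f"~{tokens_per_sec:,.0f} tokens/s of forward-pass compute per chip")
# ≈ 7,200 tokens/s under these assumptions (compute-bound estimate only;
# memory bandwidth and batching usually dominate real inference throughput)
```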
#NADDOD 1.6T #OSFP224 2×DR4 / DR8 modules, powered by #Broadcom's 3nm DSP (#Sian3 | BCM83628), with a verified BER as low as 5E-14 and validated in large-scale cluster environments for stable mass production. ⚡ 2500 pcs in production, delivery from Jan! https://t.co/mZkJ1ZczYN
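To put the quoted BER in perspective, a one-line calculation converts 5E-14 at 1.6 Tb/s into expected raw bit errors per unit time. This assumes the figure applies to the full line rate and ignores the FEC that Ethernet links at these speeds run on top.

```python
# What a BER of 5E-14 means at 1.6 Tb/s: expected raw bit errors per unit time.
# Assumes the quoted BER applies to the full line rate; FEC is ignored here.

line_rate_bps = 1.6e12
ber = 5e-14

errors_per_second = line_rate_bps * ber
print(f"{errors_per_second:.2f} errors/s on average "
      f"(roughly one error every {1 / errors_per_second:.1f} s)")
# 0.08 errors/s, i.e. about one raw bit error every 12.5 seconds at this BER
```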
🚀 In-depth analysis of #Broadcom #Sian3 & #Sian2M: How does Broadcom's #DSP PHY meet the high-efficiency interconnect requirements of #AI data center #clusters? Click to learn more 👇 https://t.co/dTmoZm6fC1
naddod.com
Analyzing Broadcom's Sian3 and Sian2M 200G/lane DSP technologies. Sian3 (3nm/SMF) and Sian2M (5nm/MMF) support 800G and 1.6T optical modules, meeting the high bandwidth, low power consumption, and...
❓ #OSFP vs QSFP-DD: Why do the application scenarios of #800G transceiver packages differ? Click to read now 👇 https://t.co/Vsxixgun4p
#AI #QSFPDD #Ethernet #roce
naddod.com
This article analyzes the technical differences, application scenarios, and deployment considerations of the 800G OSFP and QSFP-DD form factors, helping data centers and high-performance networks make informed...
🚀 In-depth analysis of #NVIDIA GPU CUDA cores! Are more CUDA cores always better in a #GPU? What are the differences between #CUDA cores and CPU cores? This article provides a comprehensive understanding of CUDA cores. 👇 https://t.co/UReSOSFowl
#AI #Networking #AIComputing
naddod.com
In-depth analysis of NVIDIA GPU CUDA cores: parallel architecture, working principles, and practical applications in AI and gaming, comparing the performance differences between CUDA cores and CPU...
2025: #AI is Moving from "Experimental Technology" to "Basic Productivity" 📷 Focusing on #industrytrends, #computing power evolution, and commercialization, with an eye on emerging directions such as #embodiedintelligence and #quantumcomputing. https://t.co/LK3M719mzy
naddod.com
This paper systematically reviews the current status and trends of the AI industry in 2025, the structure of the computing power industry chain, commercialization paths, and emerging scenarios such...
🎄 Merry Christmas & Happy Holidays! Thank you to everyone who’s been part of our journey this year. Wishing you a warm, joyful, and relaxing holiday season. 🎅 #HappyHolidays #MerryChristmas