If you're building or experimenting with GenAI on Arm devices, there’s a great learning session coming up next week! Join Gian Marco Iodice, Principal Software Engineer at @Arm, and Digant Desai, Software Engineer at @Meta, as they unpack the latest updates to ExecuTorch,
Replies
[GB6 CPU] Unknown CPU CPU: Box64 v0.3.9 on TaiShan-v110 @1500 MHz (8C 8T) Min/Max/Avg: 2583/2586/2585 MHz CPUID: 40661 (GenuineIntel) Single: 359 Multi: 1963 https://t.co/FbCSKsrCRg
What does this have to do with the "CUDA moat" (or swamp, depending on how you look at it)? Or does it just rhyme well?
BREAKING CUDA MOAT EXPANDS: Today, NVIDIA has acquired SchedMD, makers of SLURM, a widely used "open source" workload scheduler. Many AI companies such as Mistral, Thinking Machines, parts of Meta's FAIR division, university academic labs use SLURM. NVIDIA's acquisition expands
Oracle cited Uber and the Oracle Red Bull Racing Formula 1 team as two early A4 customers, adding that “more than 1,000 customers worldwide have realized” the performance and efficiency gains of its Arm-based instances since they launched in 2021.
Oracle Cloud Infrastructure is launching the first public cloud instance powered by Ampere Computing’s custom, Arm-compatible AmpereOne M processor after the tech giant sold its minority stake in the chip designer to SoftBank Group.
I used to believe this as well, until I started waiting on two-hour builds on a high-end PC. Distributed caching helps ease the pain somewhat, but it's still not a solid solution.
I feel like many developers forget that compile times only matter to developers and have zero impact on end users. Doing as much as possible at compile time usually improves end user experience. I write a lot of C++ with some projects heavy on templates, architecture-specific
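For readers outside the thread, here is a minimal sketch of the trade-off the quoted post describes: work moved to compile time makes builds slower for developers but leaves the end user with nothing more than a table lookup at run time. The names below (make_squares, kSquares) are illustrative only, not from any real project.

// Compile-time work vs. run-time work, C++17.
// The table is computed entirely by the compiler (longer builds);
// at run time the program only performs an array lookup.
#include <array>
#include <cstdio>

constexpr std::array<unsigned, 16> make_squares() {
    std::array<unsigned, 16> t{};
    for (unsigned i = 0; i < t.size(); ++i)
        t[i] = i * i;              // evaluated during compilation
    return t;
}

constexpr auto kSquares = make_squares();  // baked into the binary

int main() {
    std::printf("7^2 = %u\n", kSquares[7]); // pure lookup at run time
    return 0;
}

Scale this pattern up to heavy template or constexpr machinery and you get exactly the tension in the thread: the compiler does more (slower builds for developers), the shipped binary does less (better experience for end users).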