SPCL@ETH
@spcl_eth
Followers: 2K
Following: 312
Media: 546
Statuses: 1K
News from the Scalable Parallel Computing Lab at ETH Zurich (@ETH_en), led by @thoefler. Join or visit us: https://t.co/4CO7bCJQ2s
Zurich, Switzerland
Joined March 2013
"Bine Trees: Enhancing Collective Operations by Optimizing Communication Locality" will be presented at the SC Conference Series on Nov 20 in room 275! https://t.co/Lp2EQ5lHRM
#HPC @thoefler @CSatETH #SC25
0
0
1
AI/HPC networks run real apps, not microbenchmarks. Our SC25 Best Student Paper Candidate, ATLAHS, traces real apps (via NCCL & MPI) into portable GOAL schedules for efficient simulation. Paper: https://t.co/Vu4EhlhgpM Code:
0
0
1
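For readers unfamiliar with trace-driven network simulation, the sketch below is a plain MPI ping-pong: the kind of application communication that a tracer like the one described above can record and later replay as a schedule of send/recv/compute events. This is illustrative only, not ATLAHS code; the message size and iteration count are arbitrary assumptions.

// Minimal MPI ping-pong; illustrative of the communication pattern a tracer
// can capture and turn into a schedule for offline network simulation.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<char> buf(1 << 20);          // 1 MiB payload (arbitrary)
    const int count = static_cast<int>(buf.size());
    const int iters = 10;

    for (int i = 0; i < iters; ++i) {
        if (rank == 0 && size > 1) {
            MPI_Send(buf.data(), count, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), count, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), count, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), count, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) std::printf("done: %d ping-pong iterations\n", iters);
    MPI_Finalize();
    return 0;
}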
Uno accepted to SC25! Unified congestion control + reliable connectivity for intra- & inter-DC traffic to enable inter-DC AI training. Paper: https://t.co/rgIWOSwuCo Code: https://t.co/Bw0USVCnrK Collaboration with Microsoft. #SC25 #AI #SPCL @thoefler @CSatETH
0
0
2
SPCL researchers helped achieve a breakthrough in #climate modeling: running global simulations at 1.25 km resolution on the Alps #HPC supercomputer. This work makes decades-long runs feasible and is nominated for the Gordon Bell Prize for Climate Modeling. https://t.co/xcEPZcsFdR
0
0
2
Maciej presented at the GraphSys workshop at @euro_par in Dresden, the ACAT workshop in Hamburg, and the Fast Machine Learning for Science Conference in Zurich, and gave a series of lectures at the Deep Learning Summer School at @AGH_Krakow in Krakow.
0
0
1
REPS accepted at EuroSys 2026! A per-packet load balancer for out-of-order transports: it caches high-performing paths and reroutes away from failures, requires no switch changes, and uses only ~25 B of state per flow. https://t.co/78PS8zTBG3
#HPC @thoefler @CSatETH
0
1
9
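To make the idea behind the announcement above concrete, here is a hedged sketch (not the REPS implementation) of the general pattern: keep a small bounded cache of path identifiers (entropy values) that recently performed well, reuse them for future packets, and stop recycling paths that showed congestion or failure. All names and sizes are illustrative assumptions.

// Sketch of per-flow path recycling; illustrative only.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <random>

class PathRecycler {
public:
    explicit PathRecycler(std::size_t capacity = 8) : capacity_(capacity) {}

    // Choose the entropy (e.g., UDP source port) for the next packet:
    // prefer a cached well-performing path, otherwise explore a random one.
    uint16_t next_entropy() {
        if (!good_paths_.empty()) {
            uint16_t e = good_paths_.front();
            good_paths_.pop_front();
            return e;
        }
        return static_cast<uint16_t>(rng_());
    }

    // Feedback from acknowledgments: recycle paths that worked well,
    // forget those that were congested (e.g., ECN-marked) or failed.
    void on_ack(uint16_t entropy, bool congested) {
        if (congested) return;               // do not reuse this path for now
        if (good_paths_.size() < capacity_)  // bounded per-flow state
            good_paths_.push_back(entropy);
    }

private:
    std::size_t capacity_;
    std::deque<uint16_t> good_paths_;  // small cache of recently good paths
    std::mt19937 rng_{0xC0FFEE};
};

int main() {
    PathRecycler r;
    uint16_t e = r.next_entropy();     // no cached path yet: explore
    r.on_ack(e, /*congested=*/false);  // path performed well: recycle it
    return r.next_entropy() == e ? 0 : 1;  // next packet reuses the good path
}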
Our paper "Psychologically Enhanced AI Agents" is out! We introduce MBTI-in-Thoughts, a framework for enhancing the effectiveness of LLM agents through psychologically grounded personality conditioning. Find out more: https://t.co/vBFDM3rCDB
#HPC #AI @thoefler @CSatETH
0
6
9
Our paper "Demystifying Chains, Trees, and Graphs of Thoughts" has just been accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence! https://t.co/6j0V3ZyWg4 https://t.co/lDqJccNspG
#HPC @MaciejBesta @thoefler @CSatETH
0
1
6
The analysis outcomes are synthesized into a set of insights that help select the most beneficial GNN model for a given scenario, along with a comprehensive list of challenges and opportunities for further research into more powerful HOGNNs. arXiv: https://t.co/mC1YnpjmX7
0
0
0
To alleviate this, we first design an in-depth taxonomy and a blueprint for HOGNNs. This facilitates designing models that maximize performance. Then, we use our taxonomy to analyze and compare the available HOGNN models.
1
0
0
A plethora of HOGNN models have been introduced, with diverse neural architectures and diverse notions of what "higher-order" means. This richness makes it very challenging to appropriately analyze and compare HOGNN models, and to decide in which scenarios to use specific ones.
1
0
0
Paper: https://t.co/88FtJ5X6Eg Dataset: https://t.co/HQIN3UuHGq Code: github.com/spcl/fanns-benchmark
0
0
1
Excited to share our latest paper! We present a comprehensive survey and taxonomy of Filtered Approximate Nearest Neighbor Search (FANNS) algorithms, and we benchmark a selection of them on our novel *arxiv-for-fanns* dataset.
1
0
5
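For context on the problem the survey above addresses: filtered approximate nearest neighbor search returns the k closest vectors among only those items that pass a metadata filter. The sketch below is a brute-force baseline for clarity; it is not one of the surveyed FANNS algorithms, and the data layout and field names are assumptions.

// Brute-force filtered k-NN baseline; illustrative only.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <queue>
#include <string>
#include <vector>

struct Item {
    std::vector<float> vec;   // embedding
    std::string category;     // filterable metadata (e.g., an arXiv category)
};

std::vector<std::size_t> filtered_knn(
    const std::vector<Item>& db, const std::vector<float>& query,
    const std::function<bool(const Item&)>& filter, std::size_t k) {
    using Scored = std::pair<float, std::size_t>;  // (distance, index)
    std::priority_queue<Scored> heap;              // max-heap, capped at k
    for (std::size_t i = 0; i < db.size(); ++i) {
        if (!filter(db[i])) continue;              // apply the metadata filter
        float d = 0.f;
        for (std::size_t j = 0; j < query.size(); ++j) {
            float diff = db[i].vec[j] - query[j];
            d += diff * diff;                      // squared L2 distance
        }
        heap.emplace(d, i);
        if (heap.size() > k) heap.pop();           // keep the k best so far
    }
    std::vector<std::size_t> result;
    while (!heap.empty()) { result.push_back(heap.top().second); heap.pop(); }
    std::reverse(result.begin(), result.end());    // nearest first
    return result;
}

int main() {
    std::vector<Item> db = {
        {{0.0f, 0.0f}, "cs.DC"}, {{1.0f, 1.0f}, "cs.LG"}, {{0.1f, 0.2f}, "cs.DC"}};
    auto ids = filtered_knn(db, {0.0f, 0.1f},
                            [](const Item& it) { return it.category == "cs.DC"; }, 2);
    for (auto id : ids) std::printf("match: %zu\n", id);
    return 0;
}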
Cppless is open source and built on top of LLVM with fewer than 1k LoC of changes. More details in the paper on serialization, C++ lambda extraction, and cross-compilation. Paper: https://t.co/BxREV1vZcf Code: https://t.co/fOokcr5YMg Artifact: https://t.co/AlSazEoRJ5
zenodo.org: replication artifact for "Cppless: Single-Source and High-Performance Serverless Programming in C++"
0
0
2
Our evaluation shows that C++ serverless functions can scale to 512 parallel workers with double-digit millisecond overhead. Using ray tracing as an example, we show a speedup of up to 59x, from 60 s to 1 s execution time, with a minimal cost increase.
1
0
1
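The programming pattern behind the evaluation above is a fan-out/join over independent work units expressed in a single C++ source. The sketch below imitates that structure with std::async and local threads; it is only an analogy under stated assumptions and does not use Cppless's actual API, which dispatches such units as serverless functions instead.

// Fan-out/join over a partitioned workload; illustrative analogy only.
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Hypothetical per-worker task: here, just sum a slice of the input.
static long long process_chunk(const std::vector<int>& data,
                               std::size_t begin, std::size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
}

int main() {
    std::vector<int> data(1 << 20, 1);
    const std::size_t workers = 8;       // stand-in for e.g. 512 cloud workers
    const std::size_t chunk = data.size() / workers;

    std::vector<std::future<long long>> futures;
    for (std::size_t w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        // Fan out: each invocation is independent and takes only its inputs,
        // which is what makes the pattern amenable to serverless offloading.
        futures.push_back(std::async(std::launch::async, process_chunk,
                                     std::cref(data), begin, end));
    }

    long long total = 0;
    for (auto& f : futures) total += f.get();  // join the partial results
    std::printf("total = %lld\n", total);
    return 0;
}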