Distributed, Parallel, and Cluster Computing

@DPZ

Followers 218 · Following 0 · Media 0 · Statuses 15K

New Distributed, Parallel, and Cluster Computing submissions to https://t.co/FMRl4YXmrm (not affiliated with https://t.co/FMRl4YXmrm)

Joined October 2010
@DPZ
Distributed, Parallel, and Cluster Computing
2 days
SparkAttention: High-Performance Multi-Head Attention for Large Models on Volta GPU Architecture.
arxiv.org
Transformers are widely used in various fields such as natural language processing and computer vision. However, the training time for large Transformer models can be challenging due to the...
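The card above names multi-head attention as the kernel being accelerated. For orientation, here is a minimal NumPy sketch of standard multi-head attention, not SparkAttention's Volta-specific kernels; the dimensions and head count are illustrative:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    # x: (seq_len, d_model); all weight matrices: (d_model, d_model)
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project, then split into heads: (num_heads, seq_len, d_head)
    def split(z):
        return z.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    # Scaled dot-product attention, computed per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    out = softmax(scores) @ v                       # (num_heads, seq_len, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # merge heads
    return out @ w_o

rng = np.random.default_rng(0)
d_model, seq_len, heads = 64, 16, 8
ws = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model) for _ in range(4)]
y = multi_head_attention(rng.standard_normal((seq_len, d_model)), *ws, num_heads=heads)
print(y.shape)  # (16, 64)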
@DPZ
Distributed, Parallel, and Cluster Computing
2 days
Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.
arxiv.org
Training large deep learning models at scale is very challenging. This paper proposes Chimera, a novel pipeline parallelism scheme which combines bidirectional pipelines for efficiently training...
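Chimera's stated idea is to combine two pipelines running in opposite directions, so each worker holds a stage of both and one pipeline's idle bubbles are filled with the other's work. A toy sketch of that stage placement, under my reading of the abstract (the real scheduler also interleaves micro-batches):

def chimera_stage_assignment(num_workers):
    # "Down" pipeline places stage i on worker i; the "up" pipeline
    # places stage i on worker (num_workers - 1 - i). Each worker thus
    # hosts one stage from each direction, and micro-batches injected
    # from both ends overlap each other's pipeline bubbles.
    return {w: {"down_stage": w, "up_stage": num_workers - 1 - w}
            for w in range(num_workers)}

for worker, stages in chimera_stage_assignment(4).items():
    print(worker, stages)
# worker 0 holds down-stage 0 and up-stage 3; worker 3 the reverse, etc.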
@DPZ
Distributed, Parallel, and Cluster Computing
2 days
Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging.
arxiv.org
Deep learning at scale is dominated by communication time. Distributing samples across nodes usually yields the best performance, but poses scaling challenges due to global information...
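The abstract contrasts globally averaged SGD, which barriers on every node, with group-wise averaging. The toy simulation below illustrates generic rotating-group averaging, not the paper's specific wait-avoiding protocol: workers mix parameters within small groups whose membership shifts each step, so information still spreads globally without a global barrier.

import numpy as np

rng = np.random.default_rng(1)
num_workers, dim, steps = 8, 4, 6
params = rng.standard_normal((num_workers, dim))  # one parameter vector per worker

for t in range(steps):
    # Rotate group membership each step so models keep mixing globally.
    order = np.roll(np.arange(num_workers), t)
    for group in order.reshape(-1, 2):            # groups of 2 workers
        params[group] = params[group].mean(axis=0)

print(np.ptp(params, axis=0))  # per-coordinate spread shrinks as models mix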
@DPZ
Distributed, Parallel, and Cluster Computing
2 days
Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations.
arxiv.org
Load imbalance is pervasive in distributed deep learning training systems, caused either by inherent imbalance in the learned tasks or by the system itself. Traditional synchronous...
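One way to read "partial collective operations" is an allreduce that proceeds once a quorum of workers has contributed rather than waiting for every straggler. A toy sketch of that quorum effect, with made-up arrival times (not the paper's actual collectives):

import numpy as np

rng = np.random.default_rng(2)
num_workers, quorum = 8, 6
grads = rng.standard_normal(num_workers)     # each worker's (scalar) gradient
arrival = rng.exponential(1.0, num_workers)  # simulated arrival times

# A full allreduce waits for the slowest worker; the partial variant
# reduces over the first `quorum` arrivals and lets stragglers catch up later.
fastest = np.argsort(arrival)[:quorum]
partial_avg = grads[fastest].mean()
full_avg = grads.mean()
print(f"partial over {quorum}/{num_workers}: {partial_avg:.3f}  full: {full_avg:.3f}")
print(f"wait {arrival[fastest].max():.2f} instead of {arrival.max():.2f}")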
@DPZ
Distributed, Parallel, and Cluster Computing
2 days
Optimizing Compilation for Distributed Quantum Computing via Clustering and Annealing.
arxiv.org
Efficiently mapping quantum programs onto distributed quantum computing (DQC) systems is challenging, particularly when considering heterogeneous quantum processing units (QPUs) with different...
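The title pairs clustering with annealing for the mapping problem. Below is a compact simulated-annealing sketch of one plausible formulation, assigning logical qubits to QPUs so as to minimize two-qubit gates that cross QPU boundaries; the circuit, QPU capacity, and cooling schedule are all illustrative, not the paper's:

import math, random

random.seed(3)
num_qubits, num_qpus, capacity = 8, 2, 4
# Two-qubit gates as pairs of logical qubits (illustrative circuit).
gates = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 2), (5, 7)]

def cut_gates(assign):
    # Gates whose operands land on different QPUs need costly remote entanglement.
    return sum(1 for a, b in gates if assign[a] != assign[b])

assign = [q % num_qpus for q in range(num_qubits)]  # round-robin start: many cuts
temp = 2.0
while temp > 0.01:
    candidate = assign.copy()
    candidate[random.randrange(num_qubits)] = random.randrange(num_qpus)
    if max(candidate.count(p) for p in range(num_qpus)) <= capacity:
        delta = cut_gates(candidate) - cut_gates(assign)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            assign = candidate
    temp *= 0.95

print("placement:", assign, "cross-QPU gates:", cut_gates(assign))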
@DPZ
Distributed, Parallel, and Cluster Computing
3 days
Databelt: A Continuous Data Path for Serverless Workflows in the 3D Compute Continuum.
arxiv.org
Typically, serverless functions rely on remote storage services for managing state, which can result in increased latency and network communication overhead. In a dynamic environment such as the...
@DPZ
Distributed, Parallel, and Cluster Computing
3 days
Federated Distillation on Edge Devices: Efficient Client-Side Filtering for Non-IID Data.
arxiv.org
Federated distillation has emerged as a promising collaborative machine learning approach, offering enhanced privacy protection and reduced communication compared to traditional federated learning...
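Federated distillation, as the snippet notes, reduces communication by exchanging model predictions instead of weights: clients publish soft labels on a shared reference set and the server aggregates them into a distillation target. A toy NumPy sketch; the confidence-threshold filter here is a stand-in for whatever client-side filtering rule the paper actually proposes:

import numpy as np

rng = np.random.default_rng(4)
num_clients, ref_size, num_classes = 5, 10, 3

# Each client's soft predictions (logits -> probabilities) on a shared public set.
logits = rng.standard_normal((num_clients, ref_size, num_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Client-side filtering (illustrative): only upload predictions the client
# is confident about, so noisy soft labels from non-IID clients get dropped.
confident = probs.max(axis=-1) > 0.5             # (num_clients, ref_size) mask
weighted = probs * confident[..., None]
counts = confident.sum(axis=0)                   # contributors per example

# The server aggregates surviving soft labels into a distillation target.
consensus = weighted.sum(axis=0) / np.maximum(counts[:, None], 1)
print("examples with at least one contributor:", int((counts > 0).sum()))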
@DPZ
Distributed, Parallel, and Cluster Computing
3 days
FedEve: On Bridging the Client Drift and Period Drift for Cross-device Federated Learning.
arxiv.org
Federated learning (FL) is a machine learning paradigm that allows multiple clients to collaboratively train a shared model without exposing their private data. Data heterogeneity is a fundamental...
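For context on client drift: in the FedAvg baseline this work builds on, each client takes several local steps on its own data before the server averages the models, and under heterogeneous data those local steps pull each client toward its own optimum. A minimal sketch with quadratic local objectives standing in for real losses (FedEve's drift correction is not modeled here):

import numpy as np

rng = np.random.default_rng(5)
num_clients, rounds, local_steps, lr = 4, 20, 5, 0.1
# Heterogeneous data: client i's loss is (w - target_i)^2 with distinct targets.
targets = rng.standard_normal(num_clients) * 3

w_global = 0.0
for r in range(rounds):
    local_models = []
    for t in targets:
        w = w_global
        for _ in range(local_steps):        # local SGD on the client's objective
            w -= lr * 2 * (w - t)           # gradient of (w - t)^2
        local_models.append(w)              # local steps drift w toward t
    w_global = np.mean(local_models)        # server averages client models

print(f"global model {w_global:.3f}  vs. mean of targets {targets.mean():.3f}")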