Asynchronous Federated Learning with Dynamic Client Scheduling (AFL-DCS) aggregates client updates asynchronously, selecting clients based on their computational capabilities and managing update staleness explicitly.
The asynchronous aggregation incorporates staleness weighting:
\[w^{t+1} = \text{Agg}\left(\{(w_k^{\tau_k}, s_k, n_k)\}_{k \in S_t}\right)\]
where $w_k^{\tau_k}$ is client $k$'s update computed from the round-$\tau_k$ model, $s_k = t - \tau_k$ is its staleness, $n_k$ is client $k$'s sample count, and $S_t$ is the set of clients whose updates are aggregated at round $t$.
Staleness-weighted aggregation:
\[w^{t+1} = \sum_{k \in S_t} \frac{n_k \cdot \alpha^{s_k}}{\sum_{j \in S_t} n_j \cdot \alpha^{s_j}} w_k^{\tau_k}\]
where $\alpha \in (0, 1]$ is the staleness discount factor.
The implementation is located at src/unbitrium/aggregators/afl_dcs.py.
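Independently of that implementation, the weighting rule above can be sketched in a few lines of NumPy. The `aggregate` helper below is illustrative only, not the module's actual API:

```python
import numpy as np

def aggregate(updates, alpha=0.9):
    """Staleness-weighted aggregation (illustrative sketch).

    updates: list of (w_k, s_k, n_k), where w_k is a client model
    (np.ndarray), s_k its staleness, and n_k its sample count.
    """
    weights = np.array([n * alpha**s for _, s, n in updates], dtype=float)
    weights /= weights.sum()  # normalized weights sum to 1
    return sum(wt * w for wt, (w, _, _) in zip(weights, updates))

# Two clients with equal data; client B's update is 3 rounds stale.
w_a, w_b = np.array([1.0]), np.array([2.0])
new_w = aggregate([(w_a, 0, 100), (w_b, 3, 100)], alpha=0.5)
```

With $\alpha = 0.5$, client B's per-sample weight is discounted by $0.5^3 = 0.125$, so the fresh client dominates the average.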
Older updates receive lower weights:
\[s_k > s_j \implies \omega_k < \omega_j \text{ (all else equal)}\]
Verification: weight decreases monotonically with staleness.
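This property follows directly from $\alpha^{s}$ being decreasing in $s$ for $\alpha < 1$; a one-line check:

```python
# For equal n_k, the aggregation weight is proportional to alpha**s.
alpha = 0.9
weights = [alpha**s for s in range(6)]  # staleness s = 0..5
assert weights == sorted(weights, reverse=True)  # strictly decreasing in s
```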
When all clients complete simultaneously ($s_k = 0$ for all $k$), $\alpha^{s_k} = 1$ and the weights reduce to $n_k / \sum_{j \in S_t} n_j$:
\[\text{AFL-DCS} \equiv \text{FedAvg}\]
Verification: zero staleness reproduces FedAvg results.
Clients exceeding the staleness threshold are excluded:
\[s_k > s_{max} \implies k \notin S_t\]
Verification: stale clients are dropped from aggregation.
Aggregation proceeds without waiting for all clients:
\[|S_t| \geq K_{min} \implies \text{aggregate}\]
Verification: aggregation triggers once the minimum number of clients $K_{min}$ have reported.
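The staleness-filtering and minimum-client rules combine into a simple eligibility check. The buffer layout and the `ready_set` helper below are assumptions for illustration, not the actual scheduler:

```python
def ready_set(buffer, current_round, s_max=10, k_min=5):
    """Return updates eligible for aggregation, or None if too few.

    buffer: list of (client_id, tau_k); staleness s_k = t - tau_k.
    """
    fresh = [(cid, current_round - tau) for cid, tau in buffer
             if current_round - tau <= s_max]   # s_k > s_max => excluded
    return fresh if len(fresh) >= k_min else None  # |S_t| >= K_min triggers

# Round 12: the client with tau=1 has staleness 11 > s_max and is dropped;
# the remaining five satisfy K_min = 5, so aggregation triggers.
pending = [("a", 11), ("b", 11), ("c", 10), ("d", 3), ("e", 1), ("f", 12)]
eligible = ready_set(pending, current_round=12, s_max=10, k_min=5)
```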
| $\alpha$ | Staleness Tolerance | Use Case |
|---|---|---|
| 1.0 | Infinite (no discount) | Trusted, stable network |
| 0.9 | Moderate | Default |
| 0.5 | Low | High churn environments |
| 0.1 | Very low | Strict freshness required |
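The table above can be made concrete by evaluating $\alpha^{s}$ directly: with the default $\alpha = 0.9$, a 10-round-stale update still carries about 35% of a fresh update's per-sample weight, while $\alpha = 0.5$ cuts it to roughly 0.1%:

```python
# Effective per-sample weight alpha**s at several staleness values.
for alpha in (1.0, 0.9, 0.5, 0.1):
    print(alpha, [round(alpha**s, 4) for s in (0, 1, 5, 10)])
```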
| Metric | Range | Notes |
|---|---|---|
| avg_staleness | $[0, s_{max}]$ | Mean staleness of aggregated updates |
| straggler_rate | $[0, 1]$ | Fraction of excluded stragglers |
| aggregation_frequency | $(0, \infty)$ | Aggregations per unit time |
| wait_time | $[0, T_{max}]$ | Time waiting for minimum clients |
| throughput | $(0, \infty)$ | Updates processed per second |
Input: All clients exceed $s_{max}$
Expected Behavior: $S_t$ is empty, so aggregation cannot trigger until a sufficiently fresh update arrives.
Input: One client always fastest
Expected Behavior: Its updates arrive with minimal staleness and receive the largest discount factor; slower clients still participate while their staleness stays below $s_{max}$.
Input: Multiple updates arrive simultaneously
Expected Behavior: All are buffered and aggregated together within the same $S_t$.
Input: $s_{max} = 0$
Expected Behavior: Only zero-staleness updates are accepted, recovering synchronous FedAvg behavior.
```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed all RNGs used in training for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```
```yaml
scheduler:
  type: "dynamic"
  min_clients: 5
  max_staleness: 10
  staleness_discount: 0.9
  timeout_ms: 5000
```
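One way to consume this configuration is a small typed container with range checks on the fields above; the `SchedulerConfig` class is a hypothetical loader sketch, not part of the package:

```python
from dataclasses import dataclass

@dataclass
class SchedulerConfig:
    # Field names and defaults mirror the YAML fragment above.
    type: str = "dynamic"
    min_clients: int = 5
    max_staleness: int = 10
    staleness_discount: float = 0.9
    timeout_ms: int = 5000

    def __post_init__(self):
        assert 0.0 < self.staleness_discount <= 1.0  # alpha in (0, 1]
        assert self.min_clients >= 1 and self.max_staleness >= 0

cfg = SchedulerConfig()
```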
Asynchronous systems expose timing information: the arrival pattern of updates can reveal client compute speed, availability, and participation. Malicious clients could exploit the staleness weighting, for example by timing or delaying update submission to manipulate their effective aggregation weight.
Per-aggregation cost: \(T = O(|S_t| \cdot P)\), where $P$ is the number of model parameters.
Breakdown: computing the $|S_t|$ staleness weights costs $O(|S_t|)$; the weighted parameter sum dominates at $O(|S_t| \cdot P)$.
| Setting | Synchronous | AFL-DCS | Speedup |
|---|---|---|---|
| Homogeneous | 100s | 95s | 1.05x |
| Heterogeneous (2x) | 200s | 130s | 1.54x |
| Heterogeneous (10x) | 1000s | 250s | 4.0x |
| Method | Accuracy | Training Time |
|---|---|---|
| Sync FedAvg | 85.2% | 100% |
| AFL-DCS | 84.8% | 55% |
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-01-04 | Initial validation report |
Copyright 2026 Olaf Yunus Laitinen Imanov and Contributors. Released under EUPL 1.2.