Personalized Federated Similarity (pFedSim) extends FedSim by combining global model aggregation with personalized local layers. It decouples the model into shared feature extractors and client-specific prediction heads.
The model is decomposed into shared parameters $\theta$ and personalized parameters $\phi_k$:
\[w_k = (\theta_k, \phi_k)\]

The shared layers are aggregated with similarity weighting:

\[\theta^{t+1} = \sum_{k=1}^{K} \omega_k^t \cdot \theta_k^t\]

where:

\[\omega_k^t = \frac{\text{sim}(\theta_k^t, \theta^t)}{\sum_{j=1}^{K} \text{sim}(\theta_j^t, \theta^t)}\]

The personalized layers $\phi_k$ are retained locally and are not aggregated.
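The aggregation rule above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the code in `pfedsim.py`: `cosine_sim` and `aggregate_shared` are hypothetical names, and cosine similarity is assumed for $\text{sim}(\cdot,\cdot)$.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened parameter vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate_shared(client_thetas: list[np.ndarray],
                     global_theta: np.ndarray) -> np.ndarray:
    """Similarity-weighted aggregation of shared parameters theta_k.

    Personalized parameters phi_k never enter this function; they stay on
    the clients. Real implementations typically clip negative similarities
    before normalizing, which this sketch omits for brevity.
    """
    sims = np.array([cosine_sim(t, global_theta) for t in client_thetas])
    weights = sims / sims.sum()  # omega_k^t, normalized to sum to 1
    return sum(w * t for w, t in zip(weights, client_thetas))
```

Clients whose shared layers align more closely with the current global model receive proportionally larger weight in the next round's $\theta^{t+1}$.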
The implementation is located at src/unbitrium/aggregators/pfedsim.py.
Shared and personalized layers are correctly identified:
\[|\theta| + |\phi| = |w|\]

Verification: the shared and personalized parameter counts sum to the total parameter count.
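The partition check can be expressed as a small helper over a state dict keyed by layer name. `partition_counts` is a hypothetical name for illustration, not part of the module's API:

```python
import numpy as np

def partition_counts(state: dict[str, np.ndarray],
                     personal_keys: set[str]) -> tuple[int, int]:
    """Count shared vs. personalized parameters by layer name.

    Their sum must equal the total parameter count |w|.
    """
    shared = sum(a.size for k, a in state.items() if k not in personal_keys)
    personal = sum(a.size for k, a in state.items() if k in personal_keys)
    return shared, personal
```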
Personalized layers are not modified during aggregation:
\[\phi_k^{t+1} = \phi_k^t + \Delta\phi_k^{\text{local}}\]

Verification: personalized parameters change only through local training.
Weights for shared layer aggregation sum to unity:
\[\sum_{k=1}^{K} \omega_k^t = 1\]

Verification: weight normalization confirmed.
Configuration:
Expected Behavior:
Configuration:
Expected Behavior:
Configuration:
Expected Behavior:
| Layer Type | Aggregation | Personalization |
|---|---|---|
| Feature extractor (conv, norm) | Global | No |
| Final classifier (fc) | None | Yes |
| Optional adaptation layers | None | Yes |
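Routing layers into the two groups is typically done by name-prefix matching on the state dict keys. The sketch below assumes the prefixes from the table above; `split_layers` is an illustrative helper, not the repository's API:

```python
def split_layers(layer_names: list[str],
                 personalized_prefixes: tuple[str, ...] = ("fc", "classifier"),
                 ) -> tuple[list[str], list[str]]:
    """Split layer names into (shared, personalized) by name prefix.

    Feature-extractor layers (conv, norm) fall through to the shared
    group; classifier/adaptation layers are kept personalized.
    """
    shared = [n for n in layer_names if not n.startswith(personalized_prefixes)]
    personal = [n for n in layer_names if n.startswith(personalized_prefixes)]
    return shared, personal
```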
| Metric | Range | Notes |
|---|---|---|
| `shared_param_count` | $[0, P]$ | Number of shared parameters |
| `personal_param_count` | $[0, P]$ | Number of personalized parameters |
| `avg_similarity` | $[-1, 1]$ | Mean similarity on shared layers |
| `personalization_benefit` | $(-\infty, \infty)$ | Accuracy improvement from personalization |
Input: Empty personalized layer specification
Expected Behavior: All layers are treated as shared, and the method reduces to plain FedSim aggregation.
Input: All layers marked as personalized
Expected Behavior: No parameters are aggregated; each client trains a purely local model.
Input: Client joins with no personalized history
Expected Behavior: The client's personalized layers $\phi_k$ start from a fresh initialization, since no stored history exists to restore.
```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```
```yaml
personalization:
  shared_layers:
    - "conv1"
    - "conv2"
    - "bn1"
  personalized_layers:
    - "fc"
    - "classifier"
```
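Once loaded, the config drives per-layer routing during aggregation. A minimal sketch, with the YAML above shown as its in-memory dict form and a hypothetical `is_personalized` helper:

```python
# In-memory form of the personalization config above (as produced by a
# YAML loader); the structure mirrors the YAML exactly.
config = {
    "personalization": {
        "shared_layers": ["conv1", "conv2", "bn1"],
        "personalized_layers": ["fc", "classifier"],
    }
}

def is_personalized(layer_name: str, cfg: dict) -> bool:
    """True if the named layer should stay local under this config."""
    prefixes = cfg["personalization"]["personalized_layers"]
    return any(layer_name.startswith(p) for p in prefixes)
```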
Personalized layers provide inherent privacy: the client-specific parameters $\phi_k$ never leave the device, so information encoded in the prediction head is not exposed to the server or other clients.

Shared layer updates may still reveal properties of the local data distribution through the transmitted $\theta_k$, so standard federated-learning privacy considerations continue to apply to the shared layers.

Per-round upload cost per client is \(C = O(P_{shared})\), where $P_{shared}$ is the number of shared parameters.
Server: \(S_{server} = O(P_{shared})\)
Client: \(S_{client} = O(P_{shared} + P_{personal})\)
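A worked example of the storage split, using hypothetical parameter counts for a small CNN (not the benchmark model):

```python
# Hypothetical layer sizes for illustration only.
P_shared = 3 * 3 * 3 * 64 + 3 * 3 * 64 * 64  # two conv layers
P_personal = 64 * 10 + 10                    # fc weights + bias

server_floats = P_shared                # server holds only theta
client_floats = P_shared + P_personal   # client also holds phi_k

# float32 storage in bytes
server_bytes = 4 * server_floats
client_bytes = 4 * client_floats
```

The client's extra footprint over the server is exactly the personalized parameter count, which stays constant as the federation grows.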
| Method | Global Acc | Personalized Acc |
|---|---|---|
| FedAvg | 42.3% | - |
| FedSim | 45.8% | - |
| pFedSim | 41.2% | 58.7% |
| Method | Writer-level Acc |
|---|---|
| Local Only | 84.2% |
| FedAvg | 82.8% |
| pFedSim | 89.4% |
Collins, L., et al. (2021). Exploiting shared representations for personalized federated learning. In ICML.
Li, D., & Wang, J. (2019). FedMD: Heterogeneous federated learning via model distillation. In NeurIPS Workshop.
Arivazhagan, M. G., et al. (2019). Federated learning with personalization layers. arXiv preprint.
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-01-04 | Initial validation report |
Copyright 2026 Olaf Yunus Laitinen Imanov and Contributors. Released under EUPL 1.2.