This document provides comprehensive guidelines for developing and extending Unbitrium.
| Requirement | Version | Purpose |
|---|---|---|
| Python | >= 3.10 | Runtime |
| Git | >= 2.30 | Version control |
| PyTorch | >= 2.0 | Deep learning |
| Make | Any | Build automation |
```bash
# Clone the repository
git clone https://github.com/olaflaitinen/unbitrium.git
cd unbitrium

# Create a virtual environment
python -m venv .venv

# Activate it (Windows)
.venv\Scripts\activate
# Activate it (Unix/macOS)
source .venv/bin/activate

# Install in development mode with dev and docs extras
pip install -e ".[dev,docs]"

# Install pre-commit hooks
pre-commit install
```

Verify the installation:

```bash
# Run tests
pytest

# Type checking
mypy src/

# Linting
ruff check src/

# Confirm the library imports
python -c "import unbitrium; print(unbitrium.__version__)"
```
```text
unbitrium/
  .github/            # GitHub configuration
    workflows/        # CI/CD workflows
  assets/             # Static assets
  benchmarks/         # Benchmark scripts
  configs/            # Benchmark configurations
  docs/               # Documentation
    api/              # API reference
    research/         # Research notes
    tutorials/        # Tutorial files
    validation/       # Validation reports
  examples/           # Example scripts
  src/
    unbitrium/        # Main package
      aggregators/    # Aggregation algorithms
      bench/          # Benchmark utilities
      core/           # Core simulation infrastructure
      datasets/       # Dataset loaders
      metrics/        # Heterogeneity metrics
      partitioning/   # Data partitioning
      privacy/        # Privacy mechanisms
      simulation/     # Client/server simulation
      systems/        # Device/energy models
  tests/              # Test suite
```
| File | Purpose |
|---|---|
| `pyproject.toml` | Package configuration |
| `mkdocs.yml` | Documentation configuration |
| `.pre-commit-config.yaml` | Pre-commit hooks |
| `CONTRIBUTING.md` | Contribution guidelines |
| Branch | Purpose |
|---|---|
| `main` | Stable release branch |
| `develop` | Integration branch |
| `feature/*` | Feature development |
| `bugfix/*` | Bug fixes |
| `release/*` | Release preparation |
```bash
# Sync with upstream
git fetch upstream
git checkout main
git rebase upstream/main

# Create a feature branch
git checkout -b feature/my-feature
```

Before committing, run the quality checks:

```bash
# Format code
black src/ tests/
isort src/ tests/

# Lint
ruff check src/ tests/

# Type check
mypy src/

# Run tests
pytest
```
Follow the conventional commit format:

```text
<type>(<scope>): <description>

[optional body]

[optional footer]
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`.

Examples:

```text
feat(aggregators): add FedNova aggregation algorithm
fix(partitioning): correct edge case in Dirichlet sampling
docs(tutorials): add tutorial for custom metrics
```
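As an illustration, the format above can be checked mechanically. This is a minimal sketch, not the project's actual hook; the regex simply mirrors the types listed above:

```python
import re

# Accepts "<type>(<scope>): <description>" with an optional scope,
# restricted to the conventional-commit types used by this project.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9_-]+\))?: .+$"
)


def is_conventional(message: str) -> bool:
    """Return True if the first line of a commit message follows the format."""
    first_line = message.splitlines()[0]
    return COMMIT_RE.match(first_line) is not None
```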
Create a new module in `src/unbitrium/aggregators/`:

```python
"""NewAggregator implementation.

Description of the algorithm.

Author: Your Name <email@example.com>
License: EUPL-1.2
"""

from __future__ import annotations

from typing import Any

import torch

from unbitrium.aggregators.base import Aggregator


class NewAggregator(Aggregator):
    """Description of the aggregator.

    Args:
        param1: Description.

    Example:
        >>> agg = NewAggregator(param1=0.1)
    """

    def __init__(self, param1: float = 0.1) -> None:
        """Initialize the aggregator."""
        self.param1 = param1

    def aggregate(
        self,
        updates: list[dict[str, Any]],
        current_global_model: torch.nn.Module,
    ) -> tuple[torch.nn.Module, dict[str, float]]:
        """Aggregate client updates.

        Args:
            updates: Client updates.
            current_global_model: Current global model.

        Returns:
            The updated model and aggregation metrics.
        """
        # Implementation
        ...
```

Then:

1. Export the class in `src/unbitrium/aggregators/__init__.py`.
2. Add tests in `tests/test_aggregators.py`.
3. Document it in `docs/api/aggregators.md`.
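For orientation, here is a minimal sample-weighted averaging sketch (FedAvg-style) of what an `aggregate` body might look like. It assumes each update dict carries `state_dict` and `num_samples` keys, which may differ from the actual `Aggregator` contract:

```python
from __future__ import annotations

from typing import Any

import torch


def fedavg_aggregate(
    updates: list[dict[str, Any]],
    current_global_model: torch.nn.Module,
) -> tuple[torch.nn.Module, dict[str, float]]:
    """Sample-weighted average of client state dicts (FedAvg-style sketch)."""
    total = float(sum(u["num_samples"] for u in updates))
    new_state = {}
    for name, param in current_global_model.state_dict().items():
        # Weight each client's parameter tensor by its share of the samples.
        weighted = [
            u["state_dict"][name].float() * (u["num_samples"] / total)
            for u in updates
        ]
        new_state[name] = torch.stack(weighted).sum(dim=0).to(param.dtype)
    current_global_model.load_state_dict(new_state)
    return current_global_model, {"num_clients": float(len(updates))}
```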
Add the function in `src/unbitrium/metrics/`:

```python
from __future__ import annotations

import numpy as np


def compute_new_metric(
    labels: np.ndarray,
    client_indices: dict[int, list[int]],
) -> float:
    """Compute the new metric.

    Args:
        labels: Class labels.
        client_indices: Per-client data indices.

    Returns:
        Metric value.
    """
    # Implementation
    ...
```

Then:

1. Export the function in `src/unbitrium/metrics/__init__.py`.
2. Add tests in `tests/test_metrics.py`.
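As a concrete example of a metric in this shape, one could compute the mean total-variation distance between each client's label distribution and the global one (0 means perfectly IID partitions). The function name and semantics here are illustrative, not part of the library:

```python
from __future__ import annotations

import numpy as np


def compute_label_skew(
    labels: np.ndarray,
    client_indices: dict[int, list[int]],
) -> float:
    """Mean total-variation distance between each client's label
    distribution and the global label distribution."""
    classes = np.unique(labels)
    global_dist = np.array([(labels == c).mean() for c in classes])
    distances = []
    for idx in client_indices.values():
        client_labels = labels[np.asarray(idx)]
        client_dist = np.array([(client_labels == c).mean() for c in classes])
        # Total-variation distance between two discrete distributions.
        distances.append(0.5 * np.abs(client_dist - global_dist).sum())
    return float(np.mean(distances))
```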
```bash
# All tests
pytest

# Specific file
pytest tests/test_aggregators.py

# Specific test
pytest tests/test_aggregators.py::TestFedAvg::test_aggregate_empty

# With coverage
pytest --cov=src/unbitrium --cov-report=html

# Parallel execution (requires pytest-xdist)
pytest -n auto
```
```python
import pytest


class TestNewFeature:
    """Tests for the new feature."""

    def test_basic_functionality(self) -> None:
        """Test the basic use case."""
        result = new_function(example_input)
        assert result == expected

    @pytest.mark.parametrize("value,expected", [
        (1, 1),
        (2, 4),
        (3, 9),
    ])
    def test_parametrized(self, value: int, expected: int) -> None:
        """Test with various inputs."""
        assert square(value) == expected

    @pytest.mark.slow
    def test_expensive_operation(self) -> None:
        """Test that requires significant computation."""
        # Long-running test
        ...
```
| Marker | Purpose |
|---|---|
| `@pytest.mark.slow` | Long-running tests |
| `@pytest.mark.gpu` | GPU-required tests |
| `@pytest.mark.integration` | Integration tests |
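If `--strict-markers` is enabled, custom markers must be registered before pytest accepts them. A minimal sketch of how that could look in a `conftest.py` (the marker descriptions here are illustrative):

```python
def pytest_configure(config) -> None:
    """Register the project's custom markers with pytest."""
    config.addinivalue_line("markers", "slow: long-running tests")
    config.addinivalue_line("markers", "gpu: tests that require a GPU")
    config.addinivalue_line("markers", "integration: integration tests")
```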
```bash
# Build the docs
mkdocs build

# Serve locally, then open http://localhost:8000
mkdocs serve
```
Profile hot paths with `cProfile` to find where time is actually spent:

```python
import cProfile
import pstats

with cProfile.Profile() as pr:
    # Code to profile
    result = expensive_function()

stats = pstats.Stats(pr)
stats.sort_stats("cumulative")
stats.print_stats(10)
```
```python
import torch

# Check GPU memory usage
print(f"Allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
print(f"Cached: {torch.cuda.memory_reserved() / 1e9:.2f} GB")

# Clear the cache
torch.cuda.empty_cache()
```
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("unbitrium")
```
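When full DEBUG output is too noisy, verbosity can be raised for a single subpackage instead. This sketch assumes the package uses standard module-level loggers; the `unbitrium.partitioning` logger name is an assumption based on the package layout:

```python
import logging

# Keep the root at INFO, but turn on DEBUG for one subpackage only.
logging.basicConfig(level=logging.INFO)
logging.getLogger("unbitrium.partitioning").setLevel(logging.DEBUG)
```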
| Issue | Solution |
|---|---|
| Import errors | Check the installation: `pip install -e "."` |
| Type errors | Run `mypy src/` |
| Test failures | Re-run the specific test with the `-v` flag |
A release updates the version in:

- `src/unbitrium/__init__.py`
- `pyproject.toml`
- `CHANGELOG.md`

Last updated: January 2026