Multi-Organization Management

For holding companies, enterprise groups, and organizations with multiple business units, sustainability data has to be collected, uploaded, and reported consistently across every entity. This guide covers patterns for automating those multi-organization workflows.

Organization Structures

Dcycle supports hierarchical organization structures:
Acme Holding Corp (Parent)
├── Acme Spain
│   ├── Acme Madrid Office
│   └── Acme Barcelona Warehouse
├── Acme France
│   └── Acme Paris Office
├── Acme Logistics
│   ├── Fleet Division
│   └── Warehousing Division
└── Acme UK
    └── Acme London Office
Each organization can have:
  • Its own users and permissions
  • Separate facilities, vehicles, and data
  • Independent or consolidated reporting
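
All of the automation below follows the same loop: list the organizations, switch context with dc org set, and run a command in that context. A minimal helper sketch of that loop in Python (only the dc commands shown in this guide are assumed; the helper itself is illustrative):
# for_each_org.py -- minimal helper sketch for the per-organization loop used below.
# Assumes only the `dc org list` / `dc org set` commands shown in this guide.
import json
import subprocess

def list_orgs():
    """Return every organization visible to the current credentials."""
    result = subprocess.run(
        ["dc", "org", "list", "--format", "json"],
        capture_output=True, check=True
    )
    return json.loads(result.stdout)

def for_each_org(fn):
    """Switch context to each organization in turn and call fn(org)."""
    for org in list_orgs():
        subprocess.run(["dc", "org", "set", org["id"]], check=True)
        fn(org)

if __name__ == "__main__":
    for_each_org(lambda org: print(org["name"]))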

Common Use Cases

1. Consolidated Corporate Reporting

Aggregate emissions data across all subsidiaries:
#!/bin/bash
# consolidated_report.sh

YEAR=${1:-2024}
OUTPUT_DIR="/reports/$YEAR"
mkdir -p "$OUTPUT_DIR"

# Get all organization IDs
ORG_IDS=$(dc org list --format json | jq -r '.[].id')

echo "📊 Generating consolidated report for $YEAR..."

# Collect data from each organization
for org_id in $ORG_IDS; do
    org_name=$(dc org show $org_id --format json | jq -r '.name' | tr ' ' '_')
    echo "  Processing: $org_name"

    # Set organization context
    dc org set $org_id

    # Export emissions data
    dc emissions summary --year $YEAR --format json > "$OUTPUT_DIR/${org_name}_emissions.json"

    # Export facility data
    dc facility list --format json > "$OUTPUT_DIR/${org_name}_facilities.json"

    # Export vehicle data
    dc vehicle list --format json > "$OUTPUT_DIR/${org_name}_vehicles.json"
done

# Consolidate into single report
echo "📈 Consolidating data..."
python scripts/consolidate_report.py "$OUTPUT_DIR" > "$OUTPUT_DIR/consolidated_report.json"

echo "✅ Report generated: $OUTPUT_DIR/consolidated_report.json"

2. Centralized Data Upload

Upload data to multiple organizations from a central source:
#!/bin/bash
# centralized_upload.sh

# Organization mapping (org_id -> data_prefix)
declare -A ORG_MAP=(
    ["uuid-spain"]="ES"
    ["uuid-france"]="FR"
    ["uuid-uk"]="UK"
)

DATA_DIR="/data/monthly"
MONTH=$(date -d "last month" +%Y-%m)

for org_id in "${!ORG_MAP[@]}"; do
    prefix="${ORG_MAP[$org_id]}"
    echo "📤 Uploading data for $prefix..."

    # Set organization
    dc org set $org_id

    # Upload country-specific files
    if [ -f "$DATA_DIR/${prefix}_vehicles_$MONTH.csv" ]; then
        dc vehicle upload "$DATA_DIR/${prefix}_vehicles_$MONTH.csv" --yes
    fi

    if [ -f "$DATA_DIR/${prefix}_invoices_$MONTH.csv" ]; then
        dc invoice upload "$DATA_DIR/${prefix}_invoices_$MONTH.csv" --yes
    fi

    echo "  ✓ $prefix complete"
done
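
The loop above silently skips a country when its monthly file is missing. A small pre-flight sketch that reports which expected files are absent before any upload runs (same /data/monthly layout and ES/FR/UK prefixes as above; purely illustrative):
# preflight_check.py -- sketch: report expected monthly files that are missing
# before any upload runs. Same /data/monthly layout and prefixes as above.
from datetime import date
from pathlib import Path

PREFIXES = ["ES", "FR", "UK"]
KINDS = ["vehicles", "invoices"]

def last_month():
    """Previous month in YYYY-MM form."""
    today = date.today()
    year, month = (today.year, today.month - 1) if today.month > 1 else (today.year - 1, 12)
    return f"{year:04d}-{month:02d}"

def missing_files(data_dir="/data/monthly", month=None):
    month = month or last_month()
    return [
        str(Path(data_dir) / f"{prefix}_{kind}_{month}.csv")
        for prefix in PREFIXES
        for kind in KINDS
        if not (Path(data_dir) / f"{prefix}_{kind}_{month}.csv").exists()
    ]

if __name__ == "__main__":
    for path in missing_files():
        print(f"Missing: {path}")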

3. Cross-Organization Comparison

Compare performance across business units:
#!/bin/bash
# compare_organizations.sh

YEAR=2024

echo "📊 Comparing organizations for $YEAR..."
echo ""

# Print header row
printf "%-30s %10s %10s %10s %10s\n" "Organization" "Total" "Scope1" "Scope2" "Scope3"

# Collect metrics from each org
for org_id in $(dc org list --format json | jq -r '.[].id'); do
    # Set organization context (suppress output so it doesn't mix with the table)
    dc org set "$org_id" > /dev/null

    # Get org name
    name=$(dc org show "$org_id" --format json | jq -r '.name')

    # Get emissions
    emissions=$(dc emissions summary --year $YEAR --format json)
    total=$(echo "$emissions" | jq -r '.total_tco2e // 0')
    scope1=$(echo "$emissions" | jq -r '.scope_1_tco2e // 0')
    scope2=$(echo "$emissions" | jq -r '.scope_2_tco2e // 0')
    scope3=$(echo "$emissions" | jq -r '.scope_3_tco2e // 0')

    # Output row
    printf "%-30s %10.1f %10.1f %10.1f %10.1f\n" "$name" "$total" "$scope1" "$scope2" "$scope3"
done
Output (all values in tCO2e):
Organization                   Total      Scope1     Scope2     Scope3
──────────────────────────────────────────────────────────────────────
Acme Spain                     1234.5     234.1      456.7      543.7
Acme France                     876.3     123.4      234.5      518.4
Acme UK                         654.2      98.7      187.3      368.2
Acme Logistics                 2345.6     987.6      234.5     1123.5

Automation Patterns

Pattern 1: Hub and Spoke

Central team manages automation, subsidiaries provide data:
                    ┌─────────────────┐
                    │  Central Team   │
                    │  (Automation)   │
                    └────────┬────────┘
                             │
            ┌────────────────┼────────────────┐
            │                │                │
            ▼                ▼                ▼
    ┌───────────────┐ ┌───────────────┐ ┌───────────────┐
    │  Subsidiary A │ │  Subsidiary B │ │  Subsidiary C │
    │  (Data only)  │ │  (Data only)  │ │  (Data only)  │
    └───────────────┘ └───────────────┘ └───────────────┘
# Central automation config
organizations:
  - id: uuid-subsidiary-a
    name: Subsidiary A
    data_source: sftp://subsidiary-a.example.com/exports
    schedule: "0 6 * * 1"  # Weekly Monday 6 AM

  - id: uuid-subsidiary-b
    name: Subsidiary B
    data_source: s3://bucket/subsidiary-b/
    schedule: "0 6 1 * *"  # Monthly 1st at 6 AM

  - id: uuid-subsidiary-c
    name: Subsidiary C
    data_source: api://erp.subsidiary-c.example.com
    schedule: "0 6 * * *"  # Daily 6 AM

Pattern 2: Federated

Each subsidiary manages its own automation, with central oversight:
# Central monitoring script
import json
import subprocess
from datetime import datetime, timedelta

def check_subsidiary_health():
    """Monitor data freshness across all subsidiaries"""

    orgs = json.loads(subprocess.run(
        ["dc", "org", "list", "--format", "json"],
        capture_output=True
    ).stdout)

    issues = []

    for org in orgs:
        # Switch to org
        subprocess.run(["dc", "org", "set", org["id"]])

        # Check for logistics uploads in the last 30 days
        recent = json.loads(subprocess.run(
            ["dc", "logistics", "requests", "list",
             "--from", (datetime.now() - timedelta(days=30)).strftime("%Y-%m-%d"),
             "--format", "json"],
            capture_output=True
        ).stdout)

        if len(recent) == 0:
            issues.append(f"{org['name']}: No uploads in last 30 days")

    return issues
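
To run this check from cron and make stale data visible, a minimal entry point could be appended (illustrative only; where the alert goes is up to you):
# Illustrative entry point: run from cron, fail loudly when anything is stale.
if __name__ == "__main__":
    problems = check_subsidiary_health()
    for line in problems:
        print(f"⚠️  {line}")
    # Non-zero exit lets cron / CI surface the failure
    raise SystemExit(1 if problems else 0)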

Pattern 3: API Key Per Organization

For complete isolation, use separate API keys:
# Environment file per organization
# /etc/dcycle/spain.env
DCYCLE_API_KEY=key_for_spain
DCYCLE_ORG_ID=uuid_spain

# /etc/dcycle/france.env
DCYCLE_API_KEY=key_for_france
DCYCLE_ORG_ID=uuid_france

#!/bin/bash
# Run upload with specific org context

source "/etc/dcycle/$1.env"
dc logistics upload "data/$1_viajes.csv" --type requests --yes
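
The same isolation works from Python by loading the env file into the child process environment instead of sourcing it; a sketch (the DCYCLE_API_KEY / DCYCLE_ORG_ID variables and file layout are the ones shown above, everything else is illustrative):
# run_isolated.py -- sketch: run an upload with one organization's credentials only.
# Assumes the /etc/dcycle/<name>.env files above; the key is passed via the child
# process environment, so nothing is written to shared CLI config.
import os
import subprocess
import sys

def load_env(path):
    """Parse simple KEY=value lines from an env file."""
    env = {}
    for line in open(path):
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            env[key] = value
    return env

if __name__ == "__main__":
    name = sys.argv[1]  # e.g. "spain"
    env = {**os.environ, **load_env(f"/etc/dcycle/{name}.env")}
    subprocess.run(
        ["dc", "logistics", "upload", f"data/{name}_viajes.csv",
         "--type", "requests", "--yes"],
        env=env, check=True,
    )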

Reporting Across Organizations

Consolidated Emissions Report

# generate_consolidated_report.py
import json
import subprocess
from collections import defaultdict

def get_org_emissions(org_id, year):
    """Get emissions for a single organization"""
    subprocess.run(["dc", "org", "set", org_id])

    result = subprocess.run(
        ["dc", "emissions", "summary", "--year", str(year), "--format", "json"],
        capture_output=True
    )
    return json.loads(result.stdout)

def generate_consolidated_report(year):
    """Generate consolidated report across all organizations"""

    # Get all organizations
    orgs = json.loads(subprocess.run(
        ["dc", "org", "list", "--format", "json"],
        capture_output=True
    ).stdout)

    consolidated = {
        "year": year,
        "total_tco2e": 0,
        "by_scope": defaultdict(float),
        "by_organization": []
    }

    for org in orgs:
        emissions = get_org_emissions(org["id"], year)

        org_data = {
            "id": org["id"],
            "name": org["name"],
            "country": org.get("country"),
            "total_tco2e": emissions.get("total_tco2e", 0),
            "scope_1": emissions.get("scope_1_tco2e", 0),
            "scope_2": emissions.get("scope_2_tco2e", 0),
            "scope_3": emissions.get("scope_3_tco2e", 0),
        }

        consolidated["by_organization"].append(org_data)
        consolidated["total_tco2e"] += org_data["total_tco2e"]
        consolidated["by_scope"]["scope_1"] += org_data["scope_1"]
        consolidated["by_scope"]["scope_2"] += org_data["scope_2"]
        consolidated["by_scope"]["scope_3"] += org_data["scope_3"]

    return consolidated

if __name__ == "__main__":
    report = generate_consolidated_report(2024)
    print(json.dumps(report, indent=2))

Year-over-Year Comparison by Subsidiary

#!/bin/bash
# yoy_comparison.sh

echo "Year-over-Year Comparison by Subsidiary"
echo "========================================"
echo ""

printf "%-25s %12s %12s %12s\n" "Organization" "2023" "2024" "Change"
printf "%-25s %12s %12s %12s\n" "------------" "----" "----" "------"

for org_id in $(dc org list --format json | jq -r '.[].id'); do
    dc org set $org_id > /dev/null

    name=$(dc org show $org_id --format json | jq -r '.name' | cut -c1-25)

    emissions_2023=$(dc emissions summary --year 2023 --format json | jq -r '.total_tco2e // 0')
    emissions_2024=$(dc emissions summary --year 2024 --format json | jq -r '.total_tco2e // 0')

    if (( $(echo "$emissions_2023 > 0" | bc -l) )); then
        change=$(echo "scale=1; (($emissions_2024 - $emissions_2023) / $emissions_2023) * 100" | bc)
        printf "%-25s %12.1f %12.1f %11.1f%%\n" "$name" "$emissions_2023" "$emissions_2024" "$change"
    else
        printf "%-25s %12.1f %12.1f %12s\n" "$name" "$emissions_2023" "$emissions_2024" "N/A"
    fi
done

Best Practices

Consistent Naming

Use consistent naming conventions across organizations for facilities, vehicle types, and categories.
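
For example, a small check that flags facility names which do not follow an agreed pattern; the <COUNTRY>-<City>-<Type> regex below is only an illustration, and dc facility list is the same command used earlier in this guide (the name field on facility records is an assumption):
# naming_check.py -- sketch: flag facility names that break an agreed convention.
# The <COUNTRY>-<City>-<Type> pattern and the "name" field are illustrative assumptions.
import json
import re
import subprocess

PATTERN = re.compile(r"^[A-Z]{2}-[A-Za-z ]+-[A-Za-z ]+$")

def nonconforming_facilities():
    facilities = json.loads(subprocess.run(
        ["dc", "facility", "list", "--format", "json"],
        capture_output=True, check=True
    ).stdout)
    return [f.get("name", "") for f in facilities if not PATTERN.match(f.get("name", ""))]

if __name__ == "__main__":
    for name in nonconforming_facilities():
        print(f"Non-conforming facility name: {name}")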

Centralized Templates

Maintain CSV templates centrally to ensure data consistency across subsidiaries.
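
A lightweight way to enforce this is to compare each subsidiary file's header row against the central template before uploading; a sketch (paths are illustrative):
# template_check.py -- sketch: confirm a subsidiary CSV uses the central template's columns.
import csv

def header_matches(template_path, data_path):
    """True when both files share exactly the same header row."""
    with open(template_path, newline="") as template, open(data_path, newline="") as data:
        return next(csv.reader(template)) == next(csv.reader(data))

# Example (paths are illustrative):
# header_matches("templates/vehicles.csv", "/data/monthly/ES_vehicles_2024-05.csv")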

Permission Boundaries

Use separate API keys when subsidiaries shouldn’t access each other’s data.

Audit Logging

Log which organization context was used for each operation for compliance.
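
A simple wrapper that records the organization context, command, and exit code of every dc invocation is usually enough; a sketch (log path and line format are illustrative):
# audited_run.py -- sketch: record which organization context each dc command ran under.
# Log path and line format are illustrative; the point is an append-only audit trail.
from datetime import datetime
import subprocess

LOG_PATH = "/var/log/dcycle/audit.log"

def run_for_org(org_id, args):
    """Switch to org_id, run the dc command, and append an audit line."""
    subprocess.run(["dc", "org", "set", org_id], check=True)
    result = subprocess.run(["dc", *args], capture_output=True, text=True)
    with open(LOG_PATH, "a") as log:
        log.write(f"{datetime.now().isoformat()} org={org_id} "
                  f"cmd={' '.join(args)} rc={result.returncode}\n")
    return result

# Example: run_for_org("uuid-spain", ["emissions", "summary", "--year", "2024"])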

Next Steps