Federation & Multi-Cluster: Running Workloads Across Heata and Traditional Cloud
Overview
If you already run Kubernetes in AWS, Azure, GCP, or on-prem, you don't have to choose between your existing infrastructure and Heata. Multi-cluster management tools let you treat Heata as another cluster in your fleet — deploy from a single control plane, shift workloads based on cost or carbon targets, and burst onto Heata when you need extra capacity.
This guide covers how to connect your Heata cluster to the multi-cluster tools you're likely already using, with working examples for each.
The big picture
┌─────────────────────┐
│ Your Control Plane │
│ (management cluster │
│ or SaaS console) │
└──────────┬──────────┘
│
┌────────────────┼─────────────────┐
│ │ │
┌──────▼──────┐ ┌──────▼──────┐ ┌───────▼──────┐
│ Cloud │ │ On-prem │ │ Heata │
│ Cluster │ │ Cluster │ │ Cluster │
│ (AKS/EKS/ │ │ │ │ │
│ GKE) │ │ │ │ Node label: │
│ │ │ │ │ role.heata │
│ │ │ │ │ .co/capture │
│ │ │ │ │ -heat=true │
└─────────────┘ └─────────────┘ └──────────────┘
│ │ │
Standard K8s Standard K8s Standard K8s
workloads workloads workloads
+
waste heat
heats homes
From your management layer's perspective, Heata is just another Kubernetes cluster. The API is standard. The workload manifests are standard. The only difference is the role.heata.co/capture-heat=true node label, and what happens with the heat.
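To make that concrete, here is a sketch of an ordinary Job manifest targeting Heata nodes — everything is standard Kubernetes; the image name and resource requests are illustrative placeholders:

```yaml
# A completely standard Kubernetes Job — the only Heata-specific
# line is the nodeSelector (image and resources are placeholders)
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report
spec:
  template:
    spec:
      nodeSelector:
        role.heata.co/capture-heat: "true"
      containers:
      - name: report
        image: your-registry.io/report-builder:latest
        resources:
          requests:
            cpu: "2"
            memory: "4Gi"
      restartPolicy: Never
```

Remove the nodeSelector and the same manifest runs unchanged on any of your other clusters.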
Multi-cluster tool comparison
| Tool | Type | Best for | Heata integration |
|---|---|---|---|
| Azure Arc | Managed SaaS | Azure-centric organisations, GitOps at scale | Register Heata as an Arc-connected cluster |
| Rancher | Self-hosted / SaaS | Multi-cloud fleet management, teams already using Rancher | Import Heata as a managed cluster |
| ArgoCD | GitOps | Declarative deployment to multiple clusters | Add Heata as a target cluster in your GitOps repo |
| Admiralty | Open-source | Lightweight cross-cluster scheduling, burst scenarios | Virtual node schedules pods to Heata |
All of these work with Heata because Heata runs standard Kubernetes. No plugins, adapters, or compatibility layers required.
Using another tool? Heata works with any Kubernetes-compatible multi-cluster tool — Flux CD, Liqo, KubeFed, or anything that speaks the Kubernetes API. Talk to us and we'll help you get connected.
Azure Arc
Azure Arc lets you manage any Kubernetes cluster — including Heata — from the Azure portal alongside your AKS clusters.
Connecting your Heata cluster
# Install the Arc agents on the Heata cluster
az connectedk8s connect \
--name heata-cluster \
--resource-group my-resource-group \
--kube-config ~/heata-kubeconfig.yaml
Once connected, your Heata cluster appears in the Azure portal under Kubernetes - Azure Arc. You can:
- Deploy workloads via Azure GitOps (Flux)
- Apply Azure Policy to your Heata cluster
- View pod logs and events from the Azure portal
- Use Azure Monitor for container insights
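To confirm the connection succeeded, you can query the connected cluster resource (names match the connect command above):

```shell
# Check the Arc connection status; "Connected" means the agents
# are healthy and reporting in
az connectedk8s show \
  --name heata-cluster \
  --resource-group my-resource-group \
  --query connectivityStatus -o tsv
```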
GitOps deployment via Arc
# Create a GitOps configuration that deploys to Heata
az k8s-configuration flux create \
--name my-app-config \
--cluster-name heata-cluster \
--resource-group my-resource-group \
--cluster-type connectedClusters \
--namespace my-app \
--scope namespace \
--url https://github.com/your-org/k8s-manifests \
--branch main \
--kustomization name=app path=./overlays/heata
Placement with Azure Kubernetes Fleet Manager
If you use Azure Kubernetes Fleet Manager to manage placement across clusters, you can use a ClusterResourcePlacement to target Heata for specific workloads:
apiVersion: placement.kubernetes-fleet.io/v1
kind: ClusterResourcePlacement
metadata:
name: batch-workloads
spec:
resourceSelectors:
- group: batch
version: v1
kind: Job
labelSelector:
matchLabels:
tier: background
policy:
placementType: PickN
numberOfClusters: 1
affinity:
clusterAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
clusterSelectorTerms:
- labelSelector:
matchLabels:
environment: heata
This tells Fleet to place every Job labelled tier: background onto one cluster carrying the environment: heata label — your Heata cluster.
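The clusterSelectorTerms above match on an environment: heata label, which you attach to the Fleet member cluster yourself. A sketch, run against the Fleet hub cluster and assuming your member cluster resource is named heata-cluster:

```shell
# Label the Fleet member cluster so the ClusterResourcePlacement
# above can select it (cluster name is an assumption — use the
# name shown by `kubectl get memberclusters`)
kubectl label membercluster heata-cluster environment=heata
```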
Rancher
Rancher provides a unified management console for Kubernetes clusters across any infrastructure.
Importing the Heata cluster
- In the Rancher UI, go to Cluster Management > Import Existing
- Give it a name (e.g. heata-production)
- Copy the generated kubectl apply command
- Run it against your Heata cluster:
kubectl apply --kubeconfig ~/heata-kubeconfig.yaml \
  -f https://rancher.your-domain.com/v3/import/xxxxxxx.yaml
Your Heata cluster now appears in the Rancher dashboard alongside your other clusters.
Deploying workloads via Rancher
Once imported, you can use Rancher's UI or Fleet (Rancher's GitOps tool) to deploy to Heata:
# fleet.yaml — deploy batch workloads to Heata
defaultNamespace: batch-jobs
helm:
releaseName: batch-processor
chart: ./charts/batch-processor
values:
nodeSelector:
role.heata.co/capture-heat: "true"
resources:
requests:
cpu: "4"
memory: "16Gi"
targetCustomizations:
- name: heata
clusterSelector:
matchLabels:
location: heata
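The targetCustomizations above select clusters by a location: heata label. You can add that label in the Rancher UI (Cluster Management, edit the cluster's labels), or directly against the Rancher management cluster — a sketch assuming the cluster was imported as heata-production into the default fleet-default workspace:

```shell
# Label the imported cluster so Fleet's clusterSelector matches it;
# run against the Rancher management cluster
kubectl label clusters.fleet.cattle.io -n fleet-default \
  heata-production location=heata
```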
Burst to Heata
A common pattern: run your baseline on existing cloud, and burst heavy or batch workloads to Heata:
# fleet.yaml — different targets for different workloads
helm:
chart: ./charts/my-app
targetCustomizations:
# Latency-sensitive services stay on your cloud cluster
- name: cloud
clusterSelector:
matchLabels:
location: cloud
helm:
values:
replicas: 3
# Batch processing goes to Heata (cheaper, greener)
- name: heata
clusterSelector:
matchLabels:
location: heata
helm:
values:
replicas: 10
nodeSelector:
role.heata.co/capture-heat: "true"
ArgoCD
If you use ArgoCD for GitOps, adding Heata as a deployment target takes one command.
Register the Heata cluster
# Add Heata as a target cluster in ArgoCD
# The first argument is the context name from your kubeconfig file
argocd cluster add heata-context \
--kubeconfig ~/heata-kubeconfig.yaml \
--name heata
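You can verify the registration before pointing any Applications at it:

```shell
# List registered clusters; "heata" should appear alongside
# your existing targets with status Successful once it has synced
argocd cluster list
```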
Deploy to Heata via an ApplicationSet
Use an ApplicationSet to deploy the same app to multiple clusters, or route specific apps to Heata:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: batch-jobs
namespace: argocd
spec:
goTemplate: true
goTemplateOptions: ["missingkey=error"]
generators:
- list:
elements:
- cluster: cloud-production
url: https://cloud-k8s-api.your-domain.com
values:
nodeSelector: ""
- cluster: heata
url: https://heata-k8s-api.your-domain.com
values:
nodeSelector: "role.heata.co/capture-heat: 'true'"
template:
metadata:
name: "batch-jobs-{{.cluster}}"
spec:
project: default
source:
repoURL: https://github.com/your-org/k8s-manifests
path: batch-jobs
targetRevision: main
destination:
server: "{{.url}}"
namespace: batch
Heata-only Application
For workloads that should only run on Heata:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: data-pipeline
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/your-org/k8s-manifests
path: data-pipeline
targetRevision: main
destination:
server: https://heata-k8s-api.your-domain.com
namespace: data-pipeline
syncPolicy:
automated:
prune: true
selfHeal: true
Admiralty
Admiralty is a lightweight, open-source multi-cluster scheduler. It's ideal for burst-to-Heata scenarios where you want your existing cluster's scheduler to transparently offload pods.
How it works
Your cloud cluster Heata cluster
┌─────────────────────┐ ┌─────────────────────┐
│ │ │ │
│ User submits Pod │ │ │
│ │ │ │ │
│ ▼ │ │ │
│ Admiralty creates │ schedule │ Real Pod runs │
│ "virtual pod" and │ ──────────> │ on Heata nodes │
│ schedules to │ │ with nodeSelector │
│ Heata via virtual │ │ role.heata.co/ │
│ node │ status │ capture-heat=true │
│ ▲ │ <────────── │ │
│ Status synced back │ │ │
│ │ │ │
└─────────────────────┘ └─────────────────────┘
Setup
# Install Admiralty in both clusters (requires cert-manager v1.0+)
helm install admiralty oci://public.ecr.aws/admiralty/admiralty \
--namespace admiralty --create-namespace \
--version 0.17.0 \
--kubeconfig ~/cloud-kubeconfig.yaml
helm install admiralty oci://public.ecr.aws/admiralty/admiralty \
--namespace admiralty --create-namespace \
--version 0.17.0 \
--kubeconfig ~/heata-kubeconfig.yaml
Create a Target in your cloud cluster pointing to Heata:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
name: heata
spec:
kubeconfigSecret:
name: heata-kubeconfig
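The Target above references a heata-kubeconfig secret, which you create in the same namespace as the Target. A sketch, assuming Admiralty reads the kubeconfig from the secret's config key as in its upstream examples:

```shell
# Store the Heata kubeconfig where the Target can find it
# (namespace must match the Target's namespace)
kubectl create secret generic heata-kubeconfig \
  --namespace batch-jobs \
  --from-file=config=$HOME/heata-kubeconfig.yaml
```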
Scheduling a pod to Heata
First, label the namespace to enable multi-cluster scheduling:
kubectl label namespace batch-jobs multicluster-scheduler=enabled
Then annotate your pod to allow multi-cluster scheduling:
apiVersion: batch/v1
kind: Job
metadata:
name: ml-training
namespace: batch-jobs
spec:
template:
metadata:
annotations:
multicluster.admiralty.io/elect: ""
spec:
nodeSelector:
role.heata.co/capture-heat: "true"
containers:
- name: trainer
image: your-registry.io/ml-trainer:latest
resources:
requests:
cpu: "8"
memory: "32Gi"
restartPolicy: Never
The pod is submitted to your cloud cluster, but Admiralty transparently schedules it to Heata. Logs and status are synced back.
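You can confirm the offload by checking both clusters — Admiralty leaves a proxy pod in the cloud cluster and runs a delegate pod (with a generated name suffix) on Heata:

```shell
# Proxy pod in the cloud cluster
kubectl get pods -n batch-jobs --kubeconfig ~/cloud-kubeconfig.yaml

# Delegate pod actually running on a Heata node
kubectl get pods -n batch-jobs -o wide --kubeconfig ~/heata-kubeconfig.yaml
```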
Common patterns
Pattern 1: Burst to Heata
Run your baseline workloads on your existing cloud cluster. When you need extra capacity — CI runs, batch processing, training jobs — burst to Heata. This is the simplest pattern and requires no always-on federation.
Normal load: [ Cloud cluster ]
Peak / batch: [ Cloud cluster ] + [ Heata cluster ]
How: Use ArgoCD, Rancher Fleet, or your existing GitOps tool to deploy batch workloads specifically to the Heata cluster.
Pattern 2: Workload placement by type
Route workloads based on their characteristics:
Latency-sensitive APIs ──> Cloud cluster (close to users)
Batch / background jobs ──> Heata cluster (cheaper, greener)
ML training ──> Heata cluster (dedicated resources)
Dev / staging ──> Heata cluster (cost-effective)
How: Use cluster selectors in your GitOps tool, or Admiralty for transparent scheduling.
Pattern 3: Active-active with failover
Run workloads on both clusters simultaneously for redundancy, or configure one as a failover target:
Primary: [ Cloud cluster ] ──active──
Failover: [ Heata cluster ] ──standby / active──
How: Use Rancher Fleet, ArgoCD ApplicationSets, or KubeFed to replicate deployments across clusters.
Choosing the right approach
| Scenario | Recommended tool | Complexity |
|---|---|---|
| "We use k8s and want the simplest possible setup" | Just add the kubeconfig and deploy directly | Minimal |
| "We use Azure and want to manage everything from one place" | Azure Arc | Low |
| "We already use Rancher" | Rancher | Low |
| "We use ArgoCD for GitOps" | ArgoCD | Low |
| "We want transparent burst-to-Heata from our cluster" | Admiralty | Medium |
| "We need uniform policy across all clusters" | Azure Arc or Rancher | Medium |
You don't need to commit to a federation tool to get started. Many clients begin by simply adding the Heata kubeconfig to their CI pipeline and deploying batch jobs directly. Federation tools become valuable as you scale or want more sophisticated placement policies.
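As a sketch of that minimal starting point — a CI step that deploys a batch job straight to Heata, assuming a hypothetical job.yaml manifest and the kubeconfig exposed as a CI secret:

```shell
# Deploy a batch job directly to Heata from CI — no federation
# tooling involved (job.yaml and the job name are placeholders)
kubectl apply --kubeconfig "$HEATA_KUBECONFIG" -f job.yaml

# Optionally block until the job completes
kubectl wait --kubeconfig "$HEATA_KUBECONFIG" \
  --for=condition=complete --timeout=2h job/my-batch-job
```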
FAQ
Do I need a specific Kubernetes version on my existing clusters? No. Heata runs standard Kubernetes (K3s, currently 1.31.x). Any multi-cluster tool that supports standard Kubernetes works with Heata.
Does my existing cluster need network access to the Heata cluster? For most tools, the management layer (ArgoCD, Rancher, your CI pipeline) needs to reach the Heata API server. Worker-to-worker networking between clusters is not required for batch workloads. If you need cross-cluster service mesh or pod-to-pod networking, talk to us about VPN peering.
Will my workloads have higher latency on Heata? Heata infrastructure is in the UK. For latency-sensitive APIs serving global users, keep those on cloud regions close to your users. For batch processing, CI/CD, training, and background jobs, latency rarely matters and Heata is a strong fit.
Can I set resource quotas on the Heata cluster? Yes. We configure resource quotas based on your plan. You can also set your own quotas within your namespace.
What if I want to stop using Heata?
Remove Heata from your federation tool or stop deploying to it. Your workloads continue running on your other clusters. There are no Heata-specific dependencies in your manifests beyond the nodeSelector.
Is Heata compatible with service meshes like Istio or Linkerd? Yes. You can install a service mesh on your Heata cluster. For cross-cluster mesh, talk to us about the networking requirements.
Get started
You don't need to adopt federation to start using Heata. The simplest path:
- Get your kubeconfig from us
- Add the nodeSelector to a batch job
- kubectl apply
Federation and multi-cluster tooling come later, when you're ready to scale.
Support
If you have questions or run into issues:
- Email: techsupport@heata.co
- Web: heata.co
- Slack: We can set up a shared channel with your team
- Response time: Within 4 business hours for production issues; talk to us about other options.
Your compute on Heata provides free hot water for families around the UK.