# Heata Kubernetes: Client Onboarding Guide

## Overview
Heata provides managed Kubernetes clusters running on our distributed, low-carbon infrastructure. If you already use Kubernetes, deploying workloads to Heata is as simple as targeting a node label. You get direct access to a standard Kubernetes API — no proprietary tooling, no custom SDKs, no lock-in.
Your workloads run on dedicated, isolated infrastructure while the heat generated provides free hot water to UK households.
## What we provide
When you onboard, Heata provisions the following for you:
| Component | Details |
|---|---|
| Kubernetes API endpoint | A standard, secured Kubernetes API server accessible over the internet via WireGuard VPN or allowlisted IPs (speak to us about other peering options) |
| Kubeconfig file | A ready-to-use kubeconfig with credentials scoped to your dedicated cluster |
| Dedicated worker nodes | Isolated VMs on Heata hardware, labelled and ready to accept your workloads |
| Node labels | All Heata worker nodes carry the label role.heata.co/capture-heat=true |
| Monitoring endpoint | Optional Prometheus-compatible metrics endpoint for your workloads |
You interact with the cluster using standard kubectl, Helm, or any Kubernetes client library. There is nothing Heata-specific in how you deploy.
## Prerequisites
- Familiarity with Kubernetes (deployments, jobs, pods)
- kubectl installed locally (install guide)
- Your kubeconfig file (provided by Heata during onboarding)
## Step 1: Configure kubectl
Save the kubeconfig file we provide and point kubectl at it:
```bash
# Option A: Set as default (simple, but overwrites your existing kubeconfig)
cp heata-kubeconfig.yaml ~/.kube/config

# Option B: Use alongside existing clusters
export KUBECONFIG=~/.kube/config:~/heata-kubeconfig.yaml
kubectl config use-context heata
```
Verify access:
```bash
kubectl get nodes
```
You should see your dedicated Heata worker nodes:
```text
NAME              STATUS   ROLES    AGE   VERSION
heata-worker-01   Ready    <none>   12d   v1.31.3+k3s1
heata-worker-02   Ready    <none>   12d   v1.31.3+k3s1
heata-worker-03   Ready    <none>   12d   v1.31.3+k3s1
```
All worker nodes will carry the label role.heata.co/capture-heat=true:
```bash
kubectl get nodes -l role.heata.co/capture-heat=true
```
## Step 2: Deploy a workload
The key concept is simple: target the node label. Use a nodeSelector to ensure your workloads land on Heata nodes. This is the same mechanism you already use for any node affinity in Kubernetes — nothing new to learn.
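If you prefer affinity rules to a plain nodeSelector (for example, to combine Heata placement with other scheduling preferences), the equivalent nodeAffinity block looks like this. This is a sketch of the standard Kubernetes mechanism; the rest of the pod spec is yours:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: role.heata.co/capture-heat
          operator: In
          values: ["true"]
```

Both forms are equivalent for simple placement; nodeSelector is shorter, nodeAffinity composes with other rules.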
### Example: Batch Job
Here's a simple Kubernetes Job that runs on Heata infrastructure:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-processing-job
spec:
  template:
    spec:
      nodeSelector:
        role.heata.co/capture-heat: "true"
      containers:
      - name: processor
        image: your-registry.io/data-processor:latest
        resources:
          requests:
            cpu: "2"
            memory: "4Gi"
          limits:
            cpu: "4"
            memory: "8Gi"
        command: ["python", "process.py"]
        env:
        - name: INPUT_BUCKET
          value: "s3://your-bucket/input"
        - name: OUTPUT_BUCKET
          value: "s3://your-bucket/output"
      restartPolicy: Never
  backoffLimit: 3
```
```bash
kubectl apply -f job.yaml
kubectl get jobs -w
```
That's it. Your job runs on Heata hardware, and the heat it generates provides hot water to a UK household.
### Example: Parallel Batch Processing
For workloads that benefit from parallelism — model training, data pipelines, rendering — use Kubernetes' built-in parallel job support:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch
spec:
  completions: 50
  parallelism: 10
  template:
    spec:
      nodeSelector:
        role.heata.co/capture-heat: "true"
      containers:
      - name: worker
        image: your-registry.io/batch-worker:latest
        resources:
          requests:
            cpu: "1"
            memory: "2Gi"
      restartPolicy: Never
```
This runs 50 tasks, 10 at a time, spread across your Heata worker nodes.
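If each task needs to know which slice of the work it owns, Kubernetes' built-in Indexed completion mode assigns every pod a zero-based index, exposed as the JOB_COMPLETION_INDEX environment variable. A sketch (the image name is a placeholder, and how your worker uses the index is up to you):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch-indexed
spec:
  completions: 50
  parallelism: 10
  completionMode: Indexed
  template:
    spec:
      nodeSelector:
        role.heata.co/capture-heat: "true"
      containers:
      - name: worker
        # Inside the container, read JOB_COMPLETION_INDEX (0-49)
        # to select this pod's shard of the input data
        image: your-registry.io/batch-worker:latest
      restartPolicy: Never
```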
### Example: CronJob
Schedule recurring workloads with a standard CronJob:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-etl
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            role.heata.co/capture-heat: "true"
          containers:
          - name: etl
            image: your-registry.io/etl-pipeline:latest
            command: ["python", "run_etl.py"]
          restartPolicy: OnFailure
```
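If a nightly run can overlap a slow previous one, it is worth pinning down the concurrency behaviour explicitly. These are standard CronJob spec fields; the values here are illustrative, not recommendations:

```yaml
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid       # skip a run if the previous one is still going
  startingDeadlineSeconds: 3600   # give up on a missed run after an hour
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
```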
### Example: Long-running Deployment
Heata isn't just for batch jobs. You can run persistent services too:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inference-api
  template:
    metadata:
      labels:
        app: inference-api
    spec:
      nodeSelector:
        role.heata.co/capture-heat: "true"
      containers:
      - name: api
        image: your-registry.io/inference-api:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "2"
            memory: "8Gi"
```
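To reach the Deployment from other workloads in the cluster, a standard ClusterIP Service works as usual. A sketch matching the labels above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: inference-api
spec:
  selector:
    app: inference-api
  ports:
  - port: 80
    targetPort: 8080
```

External ingress is a separate conversation; see the FAQ below.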
## Step 3: Verify and monitor
Check your workload is running on Heata nodes:
```bash
# See which node your pods are scheduled on
kubectl get pods -A -o wide

# Check job status
kubectl describe job data-processing-job

# Stream logs
kubectl logs -f job/data-processing-job
```
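If you opt into the Prometheus-compatible metrics endpoint mentioned above, scraping it from your own Prometheus is a one-stanza job. The target address below is a placeholder; Heata provides the real endpoint and credentials at onboarding:

```yaml
scrape_configs:
- job_name: heata-workloads
  scheme: https
  static_configs:
  - targets: ["<metrics-endpoint-provided-by-heata>:443"]
```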
## That's it
If you know Kubernetes, you know how to use Heata. There are no proprietary APIs, no custom annotations beyond the node label, and no vendor-specific deployment tools. Your existing manifests, Helm charts, and CI/CD pipelines work with a one-line nodeSelector addition.
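If you manage manifests with Kustomize, that one-line addition can be applied as a patch across every workload rather than edited into each file. A sketch, assuming your Jobs live in the same kustomization:

```yaml
# kustomization.yaml
patches:
- target:
    kind: Job
  patch: |-
    - op: add
      path: /spec/template/spec/nodeSelector
      value:
        role.heata.co/capture-heat: "true"
```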
## How it works
```text
Your infrastructure                       Heata infrastructure
        │                                          │
        │  1. kubectl apply / helm install         │
        │  ──────────────────────────────────────> │
        │                                          │
        │  2. Scheduler places pods on nodes       │
        │     with role.heata.co/capture-heat      │
        │                                          │
        │  3. Workload runs on dedicated,          │
        │     isolated Heata hardware              │
        │                                          │
        │  4. Heat from compute provides           │
        │     hot water to UK households           │
        │                                          │
        │  5. Results available via standard       │
        │     K8s APIs (logs, status, events)      │
        │  <────────────────────────────────────── │
```
## Security
| Concern | How we handle it |
|---|---|
| Cluster isolation | Your workloads run on a dedicated VM cluster. No shared control plane with other clients. |
| Network encryption | All inter-node traffic is encrypted over WireGuard tunnels. |
| API access | Kubernetes API access is restricted to your credentials, over the WireGuard VPN or via IP allowlisting. |
| Node isolation | Worker nodes run in dedicated KVM/QEMU virtual machines with isolated CPU, memory, and storage. |
| Image pulling | Your container images are pulled directly by the worker node. Heata does not inspect, cache, or modify your images. |
## FAQ
**Do I need to change my container images?**
No. Use any standard container image from any registry — Docker Hub, GitHub Container Registry (GHCR), AWS ECR, Azure ACR, Google Artifact Registry, or your own private registry. If it runs in Docker or Kubernetes today, it runs on Heata without changes.

**Can I use Helm charts?**
Yes. Add the nodeSelector to your chart's values and deploy as normal. For example:
```yaml
# values-heata.yaml
nodeSelector:
  role.heata.co/capture-heat: "true"
```
```bash
helm install my-release my-chart -f values-heata.yaml
```
**Can I use existing CI/CD pipelines to deploy?**
Absolutely. If your pipeline already runs kubectl apply or helm install, just add the kubeconfig context and node selector. There is no Heata-specific tooling required.
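As one illustration (not a Heata requirement), a GitHub Actions deploy job might look like this, with the kubeconfig stored as a repository secret. The job, secret, and file names are placeholders:

```yaml
deploy-to-heata:
  runs-on: ubuntu-latest
  steps:
  - uses: actions/checkout@v4
  - name: Deploy to Heata
    run: |
      echo "${{ secrets.HEATA_KUBECONFIG }}" > kubeconfig.yaml
      KUBECONFIG=kubeconfig.yaml kubectl apply -f job.yaml
```

The same pattern applies to GitLab CI, Jenkins, or any runner with kubectl available.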
**What Kubernetes version do you run?**
We run K3s, a certified Kubernetes distribution. The current version is 1.31.x. We coordinate upgrades with you in advance.

**Can I run stateful workloads?**
Yes. We support PersistentVolumeClaims backed by local storage on the worker nodes. Talk to us about your storage requirements.
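A minimal PersistentVolumeClaim sketch. The storage class name is an assumption (K3s commonly ships the local-path provisioner); confirm the actual class and capacity limits with us first:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed default; confirm with Heata
  resources:
    requests:
      storage: 50Gi
```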
**What happens if a node goes down?**
Kubernetes handles this automatically. Pods are rescheduled to healthy nodes, and for Jobs, failed pods are retried up to your backoffLimit.
**Is there egress to the internet?**
Yes. Your pods have outbound internet access for pulling images, calling APIs, or uploading results. Ingress can be configured on request.
**Can I run workloads on both Heata and my existing cloud?**
Yes. See our Federation & Multi-Cluster Guide for how to manage workloads across Heata and traditional cloud clusters.
## Support
If you have questions or run into issues:
- Email: techsupport@heata.co
- Slack: We can set up a shared channel with your team
- Standard response time: within 4 business hours for production issues; speak to us about other options.
Your compute on Heata provides free hot water for families around the UK.