# Spot-like VMs
Need compute on demand at the lowest possible price? Heata offers spot-like virtual machines on our distributed infrastructure.
Ideal for fault-tolerant workloads like batch processing, data pipelines, CI/CD, and distributed training where interruptions can be handled gracefully.
## What you get
Each VM runs in full KVM isolation on Heata hardware:
| Feature | Details |
|---|---|
| OS | Ubuntu 22.04 LTS (other images available on request) |
| Resources | Choose a flavor — standard, compute, or memory — or talk to us about custom specs |
| Root access | Full root — install anything you need |
| Networking | Outbound internet access, WireGuard encrypted inter-VM networking |
| Boot scripts | Custom scripts that run on first boot and/or every boot |
| Optional K3s | Enable K3s to turn your VMs into a Kubernetes cluster automatically |
## VM flavors
| Flavor | vCPU | RAM | Best for |
|---|---|---|---|
| standard | 2 | 4 GiB | General purpose, web services, light processing |
| compute | 4 | 8 GiB | Build jobs, data pipelines, CI/CD |
| memory | 8 | 16 GiB | ML training, large datasets, in-memory workloads |
Need a different size? Talk to us about custom configurations.
## Getting started with the Heata CLI

### Install

```bash
# macOS / Linux
brew install heata/tap/heata

# Or download the binary directly
curl -fsSL https://get.heata.co/cli | sh
```
### Authenticate

```bash
heata auth login --api-key <your-api-key>
```

Your API key is provided during onboarding. You can also set it via an environment variable:

```bash
export HEATA_API_KEY=<your-api-key>
```
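If scripts or CI jobs depend on the key, a small guard avoids confusing failures further down the pipeline. A minimal sketch in plain shell (the `require_key` helper is our own illustration, not part of the Heata CLI):

```shell
# Abort early with a clear message when the API key is missing.
# require_key is a hypothetical helper, not a Heata CLI command.
require_key() {
  : "${HEATA_API_KEY:?HEATA_API_KEY is not set; export it before running heata commands}"
}

# Usage in a CI step:
#   require_key && heata cluster list
```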
### Create a cluster

```bash
heata cluster create my-cluster --replicas 3 --flavor standard
```

That's it. Three VMs are provisioned across Heata's network, connected via WireGuard, and ready for SSH access.
### Check status

```bash
heata cluster get my-cluster
```

```
ID:       550e8400-e29b-41d4-a716-446655440000
Name:     my-cluster
Status:   ready
Replicas: 3 / 3 ready
Image:    ubuntu-22.04
Flavor:   standard
Created:  2026-03-24T12:34:56Z
```
### Scale up or down

```bash
heata cluster scale my-cluster 5
```

### List all clusters

```bash
heata cluster list
```

### Delete a cluster

```bash
heata cluster delete my-cluster
```
## Boot scripts

Boot scripts let you customise what happens when your VMs start. Organise them in a directory:

```
my-scripts/
├── first-boot/   # Runs once when the VM is created
│   ├── 01-install-deps.sh
│   └── 02-create-app.sh
└── every-boot/   # Runs on every boot (including restarts)
    └── 01-start-app.sh
```

Deploy them alongside your cluster:

```bash
heata cluster create my-cluster \
  --replicas 3 \
  --flavor standard \
  --scripts ./my-scripts/
```
Scripts in `first-boot/` run once, when the VM is first provisioned: use these for installing packages, deploying your application, and other one-time setup. Scripts in `every-boot/` run on every boot, including after auto-heal restarts: use these for starting services and connectivity checks.
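Heata tracks first-boot state for you, but the run-once vs run-always split is easy to picture with a marker file. An illustrative sketch in plain shell (the `boot` function and marker path are made up for the demo):

```shell
# Simulate two boots of the same VM using a marker file.
# Heata handles this for real VMs; this only illustrates the semantics.
state=$(mktemp -d)

boot() {
  if [ ! -f "$state/provisioned" ]; then
    echo "first-boot: install deps"   # runs only on the first call
    touch "$state/provisioned"
  fi
  echo "every-boot: start app"        # runs on every call
}

first=$(boot)    # prints both lines
second=$(boot)   # prints only the every-boot line
```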
Within each directory, scripts execute in alphanumeric order by ASCII value, following POSIX filesystem ordering (digits 0–9, then uppercase A–Z, then lowercase a–z); a zero-padded numeric prefix is recommended to make the order explicit.
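To see why zero-padding matters, compare how unpadded numbers collate. A quick demo in plain shell (the file names are made up):

```shell
# In ASCII order "10" sorts before "2", so 2-configure.sh would run LAST.
dir=$(mktemp -d)
touch "$dir/01-install.sh" "$dir/10-extra.sh" "$dir/2-configure.sh"

order=$(LC_ALL=C ls "$dir")
printf '%s\n' "$order"
# 01-install.sh
# 10-extra.sh
# 2-configure.sh
```

Naming the third script `02-configure.sh` instead would place it between the other two, as intended.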
## Full example
Here's a complete working setup that provisions VMs, installs dependencies, deploys a Python health-check service, and starts it on every boot.
### 1. Install dependencies (first-boot)

`my-scripts/first-boot/01-install-deps.sh` runs once, when the VM is created:

```bash
#!/bin/bash
# Install application dependencies.
# This runs once when the VM is first created.
set -euo pipefail

apt-get update -qq
apt-get install -y -qq python3 python3-venv curl jq > /dev/null

echo "[$(date)] Installed dependencies" >> /var/log/heata-client.log
```
### 2. Deploy your application (first-boot)

`my-scripts/first-boot/02-create-app.sh` creates the app and its systemd service:

```bash
#!/bin/bash
# Set up a sample Python health-check server.
# This runs once when the VM is first created.
set -euo pipefail

mkdir -p /opt/myapp

cat > /opt/myapp/healthcheck.py <<'PYEOF'
from http.server import HTTPServer, BaseHTTPRequestHandler
import json, datetime, socket

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({
            "hostname": socket.gethostname(),
            "time": datetime.datetime.utcnow().isoformat() + "Z",
            "status": "ok",
        })
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
PYEOF

cat > /etc/systemd/system/myapp-healthcheck.service <<'SVCEOF'
[Unit]
Description=My App Health Check
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/myapp/healthcheck.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
SVCEOF

systemctl daemon-reload
systemctl enable myapp-healthcheck.service

echo "[$(date)] Created health-check app" >> /var/log/heata-client.log
```
### 3. Start on every boot (every-boot)

`my-scripts/every-boot/01-start-app.sh` runs on every VM boot, including after auto-heal restarts:

```bash
#!/bin/bash
# Start the health-check service and verify internet connectivity.
# This runs on every VM boot (including after auto-heal restarts).
set -euo pipefail

systemctl start myapp-healthcheck.service

# Quick connectivity check
if curl -sf --max-time 5 https://httpbin.org/ip > /dev/null; then
    echo "[$(date)] Boot: internet OK, app started" >> /var/log/heata-client.log
else
    echo "[$(date)] Boot: internet unreachable, app started anyway" >> /var/log/heata-client.log
fi
```
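Boot-time network checks can be flaky while interfaces come up, so wrapping them in a retry loop is a common hardening step. A generic sketch (the `retry` helper and the `flaky` demo command are our own, not provided by Heata):

```shell
# Retry a command up to N times, pausing one second between attempts.
retry() {
  local attempts=$1; shift
  local n=1
  until "$@"; do
    [ "$n" -ge "$attempts" ] && return 1
    n=$((n + 1))
    sleep 1
  done
}

# Demo: a command that fails twice, then succeeds on the third attempt.
cnt_file=$(mktemp)
echo 0 > "$cnt_file"
flaky() {
  local c
  c=$(cat "$cnt_file"); c=$((c + 1)); echo "$c" > "$cnt_file"
  [ "$c" -ge 3 ]
}

retry 5 flaky && echo "succeeded after $(cat "$cnt_file") attempts"
# → succeeded after 3 attempts
```

In a boot script, the connectivity check above could become `retry 5 curl -sf --max-time 5 https://httpbin.org/ip > /dev/null`.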
### 4. Deploy

```bash
heata cluster create my-cluster \
  --replicas 1 \
  --flavor standard \
  --scripts ./my-scripts/
```
Your VM boots, runs the first-boot scripts, and your health-check service is live on port 8080.
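To verify the service, fetch the endpoint and check the `status` field. A sketch using a canned sample payload in the shape the server above returns (the hostname is a placeholder; against a real VM you would replace the canned string with `curl -s http://<vm-ip>:8080` over the WireGuard network):

```shell
# Sample payload in the shape the health-check server returns.
resp='{"hostname": "my-cluster-wk-01", "time": "2026-03-24T12:34:56Z", "status": "ok"}'

# Extract the status field. jq (installed by 01-install-deps.sh) would also
# work; python3 is used here so the snippet runs anywhere.
status=$(printf '%s' "$resp" | python3 -c 'import json, sys; print(json.load(sys.stdin)["status"])')
echo "$status"   # → ok
```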
## Optional: K3s clustering

Add `--k3s` to automatically form a Kubernetes cluster from your VMs. Your access node becomes the control plane and the VMs auto-join as workers:

```bash
heata cluster create my-k8s-cluster \
  --replicas 3 \
  --flavor compute \
  --k3s
```

Once ready, you get full `kubectl` access:

```bash
kubectl get nodes
```

```
NAME                   STATUS   ROLES    AGE   VERSION
my-k8s-cluster-wk-01   Ready    <none>   5m    v1.31.3+k3s1
my-k8s-cluster-wk-02   Ready    <none>   5m    v1.31.3+k3s1
my-k8s-cluster-wk-03   Ready    <none>   5m    v1.31.3+k3s1
```
For the full Kubernetes experience, including managed clusters and federation, see our Kubernetes documentation.
## CLI reference

| Command | Description |
|---|---|
| `heata auth login --api-key <key>` | Authenticate with your API key |
| `heata auth whoami` | Check current auth status |
| `heata cluster create <name>` | Create a new VM cluster |
| `heata cluster list` | List all clusters |
| `heata cluster get <id>` | Get cluster details and status |
| `heata cluster scale <id> <replicas>` | Scale a cluster up or down |
| `heata cluster delete <id>` | Delete a cluster |
| `heata config view` | View current CLI configuration |
### Create flags

| Flag | Default | Description |
|---|---|---|
| `--replicas` | `1` | Number of VMs to provision |
| `--image` | `ubuntu-22.04` | Base OS image (`ubuntu-22.04`, `ubuntu-20.04`, `debian-12`) |
| `--flavor` | `standard` | VM size: `standard`, `compute`, or `memory` |
| `--scripts` | — | Path to a directory containing `first-boot/` and `every-boot/` scripts |
| `--k3s` | `false` | Install K3s and form a Kubernetes cluster |
| `--access-node-type` | `client_managed` | Access node type: `client_managed` or `heata_managed` |
## FAQ

**Can I SSH into my VMs?** Yes. We provide access via your dedicated access node over a WireGuard VPN.

**What happens if a VM goes down?** VMs are auto-healed: if a VM becomes unresponsive, it is automatically restarted. Your every-boot scripts run again on restart, bringing your application back up.

**Can I scale up after initial deployment?** Yes. Run `heata cluster scale my-cluster 5` to add more VMs across the network.

**Can I use these as Kubernetes workers?** Yes. Add `--k3s` when creating your cluster and your VMs automatically join as K3s workers, with your access node as the control plane.

**What images are available?** Ubuntu 22.04 (default), Ubuntu 20.04, and Debian 12. Contact us if you need other images.

**Is there a minimum commitment?** Talk to us; we offer flexible terms depending on your workload profile.
## Support
- Email: sales@heata.co
- Web: heata.co
Your compute on Heata provides free hot water for families around the UK.