Installation

Before you can use SpinKube, you’ll need to get it installed. We have several complete installation guides that cover all the possibilities; these guides walk you through the process of installing SpinKube on your Kubernetes cluster.

1 - Quickstart

Learn how to set up a Kubernetes cluster, install SpinKube, and run your first Spin App.

This Quickstart guide demonstrates how to set up a new Kubernetes cluster, install SpinKube, and deploy your first Spin application.

Prerequisites

For this Quickstart guide, you will need:

  • kubectl - the Kubernetes CLI
  • Rancher Desktop or Docker Desktop for managing containers and Kubernetes on your desktop
  • k3d - a lightweight Kubernetes distribution that runs on Docker
  • Helm - the package manager for Kubernetes
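
If you want to confirm these tools are available on your PATH before starting, a quick check like the following can help (the exact output will vary with your installed versions):

# Verify the CLI tooling is installed
kubectl version --client
k3d version
helm version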

Set up Your Kubernetes Cluster

  1. Create a Kubernetes cluster with a k3d image that includes the containerd-shim-spin prerequisite already installed:
k3d cluster create wasm-cluster \
  --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.15.1 \
  --port "8081:80@loadbalancer" \
  --agents 2

Note: Spin Operator requires a few Kubernetes resources that are installed globally to the cluster. We create these directly through kubectl as a best practice, since their lifetimes are usually managed separately from a given Spin Operator installation.

  2. Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
kubectl wait --for=condition=available --timeout=300s deployment/cert-manager-webhook -n cert-manager
  3. Apply the Runtime Class used for scheduling Spin apps onto nodes running the shim:

Note: In a production cluster you likely want to customize the Runtime Class with a nodeSelector that matches nodes that have the shim installed. However, in the K3d example, they’re installed on every node.

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml
  4. Apply the Custom Resource Definitions used by the Spin Operator:
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml
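
Optionally, you can verify that these cluster-wide resources exist before moving on. The runtime class is named wasmtime-spin-v2 (the same name referenced in the Helm guide later in this document), and the CRD names include spinoperator.dev, so a quick check might look like this:

# Confirm the runtime class and CRDs are present
kubectl get runtimeclass wasmtime-spin-v2
kubectl get crds | grep spinoperator.dev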

Deploy the Spin Operator

Execute the following command to install the Spin Operator on the K3d cluster using Helm. This will create all of the Kubernetes resources required by Spin Operator under the Kubernetes namespace spin-operator. It may take a moment for the installation to complete as dependencies are installed and pods are spinning up.

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.3.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Lastly, create the shim executor:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml
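
If you want to double-check the installation before deploying an app, you can list the operator’s pods and, assuming the SpinAppExecutor resource is exposed under the plural name spinappexecutors.core.spinoperator.dev, the executor you just created:

# Verify the operator pods and the shim executor
kubectl get pods -n spin-operator
kubectl get spinappexecutors.core.spinoperator.dev -A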

Run the Sample Application

You are now ready to deploy Spin applications onto the cluster!

  1. Create your first application in the same spin-operator namespace where the operator is running:
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
  2. Forward a local port to the application pod so that it can be reached:
kubectl port-forward svc/simple-spinapp 8083:80
  3. In a different terminal window, make a request to the application:
curl localhost:8083/hello

You should see:

Hello world from Spin!

Next Steps

Congrats on deploying your first SpinApp! From here, you can explore the other installation guides in this section, or learn how to package and deploy your own apps using the spin kube plugin (covered later in this document).

2 - Installing on Linode Kubernetes Engine (LKE)

This guide walks you through the process of installing SpinKube on LKE.

This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes Engine (LKE).

Prerequisites

This guide assumes that you have an Akamai Linode account that is configured and has sufficient permissions for creating a new LKE cluster.

You will also need recent versions of kubectl and helm installed on your system.

Creating an LKE Cluster

LKE has a managed control plane, so you only need to create the pool of worker nodes. In this tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should be fine for installing SpinKube and running up to around 100 Spin apps.

You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because containers consume substantially more resources than Spin apps do.

In the Linode web console, click on Kubernetes in the right-hand navigation, and then click Create Cluster.

LKE Creation Screen Described Below

You will only need to make a few choices on this screen. Here’s what we have done:

  • We named the cluster spinkube-lke-1. You should name it according to whatever convention you prefer
  • We chose the Chicago, IL (us-ord) region, but you can choose any region you prefer
  • The latest supported Kubernetes version is 1.30, so we chose that
  • For this testing cluster, we chose No on HA Control Plane because we do not need high availability
  • In Add Node Pools, we added two Dedicated 4 GB nodes simply to show a cluster running more than one node. Two nodes are sufficient for Spin apps, though you may prefer the more traditional 3-node cluster. Click Add to add these, and ignore the warning about minimum sizes.

Once you have set things to your liking, press Create Cluster.

This will take you to a screen that shows the status of the cluster. Initially, you will want to wait for all of the nodes in your Node Pool to start up. Once all of the nodes are online, download the kubeconfig file, which will be named something like spinkube-lke-1-kubeconfig.yaml.

The kubeconfig file will have the credentials for connecting to your new LKE cluster. Do not share that file or put it in a public place.

For all of the subsequent operations, you will want to use the spinkube-lke-1-kubeconfig.yaml as your main Kubernetes configuration file. The best way to do that is to set the environment variable KUBECONFIG to point to that file:

$ export KUBECONFIG=/path/to/spinkube-lke-1-kubeconfig.yaml

You can test this using the command kubectl config view:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://REDACTED.us-ord-1.linodelke.net:443
  name: lke203785
contexts:
- context:
    cluster: lke203785
    namespace: default
    user: lke203785-admin
  name: lke203785-ctx
current-context: lke203785-ctx
kind: Config
preferences: {}
users:
- name: lke203785-admin
  user:
    token: REDACTED

This shows us our cluster config. You should be able to cross-reference the lkeNNNNNN identifier with what you see in your Akamai Linode dashboard.

Install SpinKube Using Helm

At this point, install SpinKube with Helm. As long as your KUBECONFIG environment variable points at the correct cluster, the installation method documented in the Installing with Helm guide later in this document will work.

Once you are done following the installation steps, return here to install a first app.
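
For reference, a condensed sketch of those steps, mirroring the Installing with Helm guide later in this document, looks roughly like this (see that guide for details and explanations):

# Prerequisites: cert-manager and the KWasm operator
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm install kwasm-operator kwasm/kwasm-operator --namespace kwasm --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.15.1
kubectl annotate node --all kwasm.sh/kwasm-node=true

# CRDs, runtime class, and shim executor
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml

# The Spin Operator itself
helm install spin-operator --namespace spin-operator --create-namespace --version 0.3.0 --wait \
  oci://ghcr.io/spinkube/charts/spin-operator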

Creating a First App

We will use the spin kube plugin to scaffold out a new app. If you run the following command and the kube plugin is not installed, you will first be prompted to install the plugin. Choose yes to install.

We’ll point to an existing Spin app, a Hello World program written in Rust, compiled to Wasm, and stored in GitHub Container Registry (GHCR):

$ spin kube scaffold --from ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0 > hello-world.yaml

Note that Spin apps, which are WebAssembly, can be stored in most container registries even though they are not Docker containers.

This will write the following to hello-world.yaml:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: spin-rust-hello
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  executor: containerd-shim-spin
  replicas: 2

Using kubectl apply, we can deploy that app:

$ kubectl apply -f hello-world.yaml
spinapp.core.spinoperator.dev/spin-rust-hello created

With SpinKube, Spin apps run as regular Kubernetes Pods, so we can see the app using kubectl get pods:

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
spin-rust-hello-f6d8fc894-7pq7k   1/1     Running   0          54s
spin-rust-hello-f6d8fc894-vmsgh   1/1     Running   0          54s

Status is listed as Running, which means our app is ready.

Making An App Public with a NodeBalancer

By default, Spin apps will be deployed with an internal service. But with Linode, you can provision a NodeBalancer using a Service object. Here is a hello-world-service.yaml that provisions a nodebalancer for us:

apiVersion: v1
kind: Service
metadata:
  name: spin-rust-hello-nodebalancer
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
  labels:
    core.spinoperator.dev/app-name: spin-rust-hello
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    core.spinoperator.dev/app.spin-rust-hello.status: ready
  sessionAffinity: None

When LKE receives a Service whose type is LoadBalancer, it will provision a NodeBalancer for you.

You can customize this for your app simply by replacing all instances of spin-rust-hello with the name of your app.
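
For example, assuming your app is named my-app (a hypothetical name), a quick substitution over the manifest shown above could look like this:

# Replace every occurrence of spin-rust-hello with the hypothetical name my-app
sed 's/spin-rust-hello/my-app/g' hello-world-service.yaml > my-app-service.yaml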

We can create the NodeBalancer by running kubectl apply on the above file:

$ kubectl apply -f hello-world-service.yaml
service/spin-rust-hello-nodebalancer created

Provisioning the new NodeBalancer may take a few moments, but we can get the IP address using kubectl get service spin-rust-hello-nodebalancer:

$ kubectl get service spin-rust-hello-nodebalancer
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
spin-rust-hello-nodebalancer   LoadBalancer   10.128.235.253   172.234.210.123   80:31083/TCP   40s

The EXTERNAL-IP field tells us what the NodeBalancer is using as a public IP. We can now test this out over the Internet using curl or by entering the URL http://172.234.210.123/hello into your browser.

$ curl 172.234.210.123/hello
Hello world from Spin!

Deleting Our App

To delete this sample app, we will first delete the NodeBalancer, and then delete the app:

$ kubectl delete service spin-rust-hello-nodebalancer
service "spin-rust-hello-nodebalancer" deleted
$ kubectl delete spinapp spin-rust-hello
spinapp.core.spinoperator.dev "spin-rust-hello" deleted

If you delete the NodeBalancer out of the Linode console, it will not automatically delete the Service record in Kubernetes, which will cause inconsistencies. So it is best to use kubectl delete service to delete your NodeBalancer.

If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai Linode dashboard, navigate to Kubernetes, and press the Delete button. This will destroy all of your worker nodes and deprovision the control plane.

3 - Installing on Microk8s

This guide walks you through the process of installing SpinKube using Microk8s.

This guide walks through the process of installing and configuring Microk8s and SpinKube.

Prerequisites

This guide assumes you are running Ubuntu 24.04, and that you have Snap enabled (which is the default).

The testing platform for this installation was an Akamai Edge Linode with 4 GB of memory and 2 cores.

Installing Spin

You will need to install Spin. The easiest way is to just use the following one-liner to get the latest version of Spin:

$ curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash

Typically you will then want to move spin to /usr/local/bin or somewhere else on your $PATH:

$ sudo mv spin /usr/local/bin/spin

You can test that it’s on your $PATH with which spin. If this returns blank, you will need to adjust your $PATH variable or put Spin somewhere that is already on $PATH.
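
For example, a quick sanity check might look like this:

# Confirm spin is on the PATH and report its version
which spin
spin --version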

A Script To Do This

If you would rather work with a shell script, you may find this Gist a great place to start. It installs Microk8s and SpinKube, and configures both.

Installing Microk8s on Ubuntu

Use snap to install microk8s:

$ sudo snap install microk8s --classic

This will install Microk8s and start it. You may want to read the official installation instructions before proceeding. Wait for a moment or two, and then ensure Microk8s is running with the microk8s status command.
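
If you prefer a command that blocks until the cluster is ready, microk8s status accepts a --wait-ready flag:

# Wait until Microk8s reports all core services as running
microk8s status --wait-ready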

Next, enable the TLS certificate manager:

$ microk8s enable cert-manager

Now we’re ready to install the SpinKube environment for running Spin applications.

Installing SpinKube

SpinKube provides the entire toolkit for running Spin serverless apps. You may want to familiarize yourself with the SpinKube quickstart guide before proceeding.

First, we need to apply the runtime class and the CRDs for SpinKube:

$ microk8s kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml
$ microk8s kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml

Both of these should apply immediately.

We then need to install KWasm because it is not yet included with Microk8s:

$ microk8s helm repo add kwasm http://kwasm.sh/kwasm-operator/
$ microk8s helm install kwasm-operator kwasm/kwasm-operator --namespace kwasm --create-namespace --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.15.1
$ microk8s kubectl annotate node --all kwasm.sh/kwasm-node=true

The last line above tells Microk8s that all nodes on the cluster (which is just one node in this case) can run Spin applications.

Next, we need to install SpinKube’s operator using Helm (which is included with Microk8s).

$ microk8s helm install spin-operator --namespace spin-operator --create-namespace --version 0.3.0 --wait oci://ghcr.io/spinkube/charts/spin-operator

Now we have the main operator installed. There is just one more step: we need to install the shim executor, which is a custom resource that allows us to use multiple executors for WebAssembly.

$ microk8s kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml

Now SpinKube is installed!

Running an App in SpinKube

Next, we can run a simple Spin application inside of Microk8s.

While we could write regular deployments or pod specifications, the easiest way to deploy a Spin app is by creating a simple SpinApp resource. Let’s use the simple example from SpinKube:

$ microk8s kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml

The above installs a simple SpinApp YAML that looks like this:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  replicas: 1
  executor: containerd-shim-spin

You can read up on the definition in the documentation.

It may take a moment or two to get started, but you should be able to see the app with microk8s kubectl get pods.

$ microk8s kubectl get po
NAME                              READY   STATUS    RESTARTS   AGE
simple-spinapp-5c7b66f576-9v9fd   1/1     Running   0          45m

Troubleshooting

If STATUS gets stuck in ContainerCreating, it is possible that KWasm did not install correctly. Try doing a microk8s stop, waiting a few minutes, and then running microk8s start. You can also try the command:

$ microk8s kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator

Testing the Spin App

The easiest way to test our Spin app is to port forward from the Spin app to the outside host:

$ microk8s kubectl port-forward services/simple-spinapp 8080:80

You can then make a request with curl:

$ curl localhost:8080/hello
Hello world from Spin!

Where to go from here

So far, we have installed Microk8s, SpinKube, and a single Spin app. For a more production-ready setup, see the bonus section below on configuring Microk8s ingress.

Bonus: Configuring Microk8s ingress

Microk8s includes an NGINX-based ingress controller that works great with Spin applications.

Enable the ingress controller: microk8s enable ingress

Now we can create an ingress that routes our traffic to the simple-spinapp app. Create the file ingress.yaml with the following content. Note that the service.name is the name of our Spin app.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
       - path: /
         pathType: Prefix
         backend:
           service:
             name: simple-spinapp
             port:
               number: 80

Install the above with microk8s kubectl apply -f ingress.yaml. After a moment or two, you should be able to run curl localhost/hello and see the familiar Hello world from Spin! response.

Conclusion

In this guide we’ve installed Spin, Microk8s, and SpinKube and then run a Spin application.

To learn more about the many things you can do with Spin apps, go to the Spin developer docs. You can also look at a variety of examples at Spin Up Hub.

Or to try out different Kubernetes configurations, check out other installation guides.

4 - Installing on Azure Kubernetes Service

In this tutorial you’ll learn how to deploy SpinKube on Azure Kubernetes Service (AKS).

In this tutorial, you install Spin Operator on an Azure Kubernetes Service (AKS) cluster and deploy a simple Spin application. You will learn how to:

  • Deploy an AKS cluster
  • Install Spin Operator Custom Resource Definition and Runtime Class
  • Install and verify containerd shim via Kwasm
  • Deploy a simple Spin App custom resource on your cluster

Prerequisites

Please ensure you have the following tools installed before continuing:

  • kubectl - the Kubernetes CLI
  • Helm - the package manager for Kubernetes
  • Azure CLI - cross-platform CLI for managing Azure resources

Provisioning the necessary Azure Infrastructure

Before you dive into deploying Spin Operator on Azure Kubernetes Service (AKS), the underlying cloud infrastructure must be provisioned. For the sake of this article, you will provision a simple AKS cluster. (Alternatively, you can set up the AKS cluster following this guide from Microsoft.)

# Login with Azure CLI
az login

# Select the desired Azure Subscription
az account set --subscription <YOUR_SUBSCRIPTION>

# Create an Azure Resource Group
az group create --name rg-spin-operator \
    --location germanywestcentral

# Create an AKS cluster
az aks create --name aks-spin-operator \
    --resource-group rg-spin-operator \
    --location germanywestcentral \
    --node-count 1 \
    --tier free \
    --generate-ssh-keys

Once the AKS cluster has been provisioned, use the aks get-credentials command to download credentials for kubectl:

# Download credentials for kubectl
az aks get-credentials --name aks-spin-operator \
    --resource-group rg-spin-operator

For verification, you can use kubectl to browse common resources inside of the AKS cluster:

# Browse namespaces in the AKS cluster
kubectl get namespaces

NAME              STATUS   AGE
default           Active   3m
kube-node-lease   Active   3m
kube-public       Active   3m
kube-system       Active   3m

Deploying the Spin Operator

First, the Custom Resource Definition (CRD) and the Runtime Class for wasmtime-spin-v2 must be installed.

# Install the CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml

# Install the Runtime Class
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml

The following installs cert-manager, which is required to automatically provision and manage TLS certificates (used by the admission webhook system of Spin Operator):

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

The Spin Operator chart also has a dependency on Kwasm, which you use to install containerd-wasm-shim on the Kubernetes node(s):

# Add Helm repository if not already done
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm repo update

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.15.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true

To verify containerd-wasm-shim installation, you can inspect the logs from the Kwasm Operator:

# Inspect logs from the Kwasm Operator
kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator

{"level":"info","node":"aks-nodepool1-31687461-vmss000000","time":"2024-02-12T11:23:43Z","message":"Trying to Deploy on aks-nodepool1-31687461-vmss000000"}
{"level":"info","time":"2024-02-12T11:23:43Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is still Ongoing"}
{"level":"info","time":"2024-02-12T11:24:00Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is Completed. Happy WASMing"}

The following installs the chart with the release name spin-operator in the spin-operator namespace:

helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.3.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Lastly, create the shim executor:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml

Deploying a Spin App to AKS

To validate the Spin Operator deployment, you will deploy a simple Spin App to the AKS cluster. The following command will install a simple Spin App using the SpinApp CRD you provisioned in the previous section:

# Deploy a sample Spin app
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml

Verifying the Spin App

Configure port forwarding from port 8080 of your local machine to port 80 of the Kubernetes service which points to the Spin App you installed in the previous section:

kubectl port-forward services/simple-spinapp 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Send an HTTP request to http://127.0.0.1:8080/hello using curl:

# Send an HTTP GET request to the Spin App
curl -iX GET http://localhost:8080/hello
HTTP/1.1 200 OK
transfer-encoding: chunked
date: Mon, 12 Feb 2024 12:23:52 GMT

Hello world from Spin!%
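
If you only want to remove the sample Spin App while keeping the AKS cluster and the Spin Operator installed, deleting the same manifest you applied earlier should be enough:

# Remove the sample Spin app only
kubectl delete -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml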

Removing the Azure infrastructure

To delete the Azure infrastructure created as part of this article, use the following command:

# Remove all Azure resources
az group delete --name rg-spin-operator \
    --no-wait \
    --yes

5 - Installing on Rancher Desktop

This tutorial shows how to integrate SpinKube and Rancher Desktop.

Rancher Desktop is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop.

Prerequisites

  • An operating system compatible with Rancher Desktop (Windows, macOS, or Linux).
  • Administrative or superuser access on your computer.

Step 1: Installing Rancher Desktop

  1. Download Rancher Desktop:
    • Get the installer for your operating system from the Rancher Desktop website.
  2. Install Rancher Desktop:
    • Run the downloaded installer and follow the on-screen instructions to complete the installation.

Step 2: Configure Rancher Desktop

  • Open Rancher Desktop.
  • Navigate to the Preferences -> Kubernetes menu.
  • Ensure that Enable Kubernetes is selected and that the Enable Traefik and Install Spin Operator options are checked. Make sure to Apply your changes.

Rancher Desktop

  • Make sure to select rancher-desktop from the Kubernetes Contexts configuration in your toolbar.

Kubernetes contexts

  • Make sure that the Enable Wasm option is checked in the Preferences -> Container Engine section. Remember to always apply your changes.

Rancher preferences

  • Once your changes have been applied, go to the Cluster Dashboard -> More Resources -> Cert Manager section and click on Certificates. You will see the spin-operator-serving-cert is ready.

Certificates tab

Step 3: Creating a Spin Application

  1. Open a terminal (Command Prompt, Terminal, or equivalent based on your OS).
  2. Create a new Spin application: The following command creates a new Spin application named hello-k3s, using the http-js template.
  $ spin new -t http-js hello-k3s --accept-defaults
  $ cd hello-k3s
  3. We can edit the /src/index.js file and make the workload return a string “Hello from Rancher Desktop”:
export async function handleRequest(request) {
    return {
        status: 200,
        headers: {"content-type": "text/plain"},
        body: "Hello from Rancher Desktop" // <-- This changed
    }
}

Step 4: Deploying Your Application

  1. Push the application to a registry:
$ npm install
$ spin build
$ spin registry push ttl.sh/hello-k3s:0.1.0

Replace ttl.sh/hello-k3s:0.1.0 with your registry URL and tag.

  2. Scaffold Kubernetes resources:
$ spin kube scaffold --from ttl.sh/hello-k3s:0.1.0

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-k3s
spec:
  image: "ttl.sh/hello-k3s:0.1.0"
  executor: containerd-shim-spin
  replicas: 2

This command prepares the necessary Kubernetes deployment configurations.

  3. Deploy the application to Kubernetes:
$ spin kube deploy --from ttl.sh/hello-k3s:0.1.0

If we click on Rancher Desktop’s “Cluster Dashboard”, we can see hello-k3s:0.1.0 running inside the “Workloads” dropdown section:

Rancher Desktop Preferences Wasm

To access our app from outside the cluster, we can forward the port so that the application is reachable from our host machine:

$ kubectl port-forward svc/hello-k3s 8083:80

To test locally, we can make a request as follows:

$ curl localhost:8083
Hello from Rancher Desktop

The above curl command or a quick visit to your browser at localhost:8083 will return the “Hello from Rancher Desktop” message:

Hello from Rancher Desktop

6 - Installing with Helm

This guide walks you through the process of installing SpinKube using Helm.

Prerequisites

For this guide in particular, you will need:

  • kubectl - the Kubernetes CLI
  • Helm - the package manager for Kubernetes

Install Spin Operator With Helm

The following instructions are for installing Spin Operator using a Helm chart (using helm install).

Prepare the Cluster

Before installing the chart, you’ll need to ensure the following are installed:

  • cert-manager to automatically provision and manage TLS certificates (used by Spin Operator’s admission webhook system):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
  • Kwasm Operator is required to install WebAssembly shims on Kubernetes nodes that don’t already include them. Note that in the future this will be replaced by runtime class manager.
# Add Helm repository if not already done
helm repo add kwasm http://kwasm.sh/kwasm-operator/

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.15.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true
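
To confirm the shim was provisioned on your nodes, you can check the KWasm operator logs, as shown in the AKS guide above:

# Inspect logs from the KWasm operator
kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator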

Chart prerequisites

Now that we have our dependencies installed, we can start installing the operator. This involves a couple of steps that allow for further customization of Spin Applications in the cluster over time, but here we install the defaults.

  • First, apply the Custom Resource Definitions used by the Spin Operator:
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml
  • Next we create a RuntimeClass that points to the spin handler called wasmtime-spin-v2. If you are deploying to a production cluster that only has a shim on a subset of nodes, you’ll need to modify the RuntimeClass with a nodeSelector:
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml
  • Finally, we create a containerd-shim-spin SpinAppExecutor. This tells the Spin Operator to use the RuntimeClass we just created to run Spin Apps:
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml

Installing the Spin Operator Chart

The following installs the chart with the release name spin-operator:

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.3.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
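
To confirm the release is installed, a quick helm list in the namespace should show spin-operator with a deployed status:

# List Helm releases in the spin-operator namespace
helm list --namespace spin-operator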

Upgrading the Chart

Note that you may also need to upgrade the spin-operator CRDs in tandem with upgrading the Helm release:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml

To upgrade the spin-operator release, run the following:

# Upgrade Spin Operator using Helm
helm upgrade spin-operator \
  --namespace spin-operator \
  --version 0.3.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Uninstalling the Chart

To delete the spin-operator release, run:

# Uninstall Spin Operator using Helm
helm delete spin-operator --namespace spin-operator

This will remove all Kubernetes resources associated with the chart and delete the Helm release.

To completely uninstall all resources related to spin-operator, you may want to delete the corresponding CRD resources and the RuntimeClass:

kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.shim-executor.yaml
kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.runtime-class.yaml
kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.3.0/spin-operator.crds.yaml

7 - Installing the `spin kube` plugin

Learn how to install the kube plugin.

The kube plugin for spin (the Spin CLI) provides a first-class experience for working with Spin apps in the context of Kubernetes.

Prerequisites

Ensure you have the Spin CLI (version 2.3.1 or newer) installed on your machine.
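
You can check which version you have installed with:

# Print the installed Spin CLI version
spin --version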

Install the plugin

Before you install the plugin, you should fetch the list of latest Spin plugins from the spin-plugins repository:

# Update the list of latest Spin plugins
spin plugins update
Plugin information updated successfully

Go ahead and install the kube plugin using spin plugins install:

# Install the latest kube plugin
spin plugins install kube

At this point you should see the kube plugin when querying the list of installed Spin plugins:

# List all installed Spin plugins
spin plugins list --installed

cloud 0.7.0 [installed]
cloud-gpu 0.1.0 [installed]
kube 0.1.1 [installed]
pluginify 0.6.0 [installed]

Compiling from source

As an alternative to the plugin manager, you can download and manually install the plugin. Manual installation is commonly used to test in-flight changes. For most users, installing the plugin through Spin’s plugin manager is the better choice.

Please refer to the spin-plugin-kube GitHub repository for instructions on how to compile the plugin from source.