Miscellaneous
1 - Compatibility
See the following list of compatible Kubernetes distributions and platforms for running the Spin Operator:
- Amazon Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
- Civo Kubernetes
- Digital Ocean Kubernetes (DOKS)
- Google Kubernetes Engine (GKE)
- k3d
- minikube (explicitly pass `--container-runtime=containerd` and ensure you're on minikube version >= 1.33)
- Scaleway Kubernetes Kapsule
Disclaimer: Please note that this is a working list of compatible Kubernetes distributions and platforms. For managed Kubernetes services, it’s important to be aware that cloud providers may choose to discontinue support for specific dependencies, such as container runtimes. While we strive to maintain the accuracy of this documentation, it is ultimately your responsibility to verify with your Kubernetes provider whether the required dependencies are still supported.
How to validate Spin Operator Compatibility
If you would like to validate Spin Operator's compatibility with a specific Kubernetes distribution or platform that is not listed above, or simply test one of the platforms listed above yourself, follow these steps for validation:
Install the Spin Operator: Begin by installing the Spin Operator within the Kubernetes cluster. This involves deploying the necessary dependencies and the Spin Operator itself. (See Installing with Helm)
Create, Package, and Deploy a Spin App: Proceed by creating a Spin App, packaging it, and successfully deploying it within the Kubernetes environment. (See Package and Deploy Spin Apps)
Invoke the Spin App: Once the Spin App is deployed, ensure at least one request was successfully served by the Spin App.
Container Runtime Constraints
The Spin Operator requires the target nodes that would run Spin applications to support containerd version 1.6.26+ or 1.7.7+.
Use the `kubectl get nodes -o wide` command to see which container runtime is installed per node:
```shell
# Inspect container runtimes per node
kubectl get nodes -o wide
NAME                  STATUS  VERSION  OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
generalnp-vmss000000  Ready   v1.27.9  Ubuntu 22.04.4 LTS  5.15.0-1056-azure  containerd://1.7.7-1
generalnp-vmss000001  Ready   v1.27.9  Ubuntu 22.04.4 LTS  5.15.0-1056-azure  containerd://1.7.7-1
generalnp-vmss000002  Ready   v1.27.9  Ubuntu 22.04.4 LTS  5.15.0-1056-azure  containerd://1.7.7-1
```
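If you want to script this check, the version constraint can be evaluated mechanically. Below is a minimal sketch in plain shell (no cluster required) that tests whether a CONTAINER-RUNTIME string like those above satisfies the 1.6.26+ / 1.7.7+ requirement. The helper name `containerd_ok` is an illustration, not part of SpinKube:

```shell
# containerd_ok: illustrative helper (not part of SpinKube) that checks
# whether a containerd version satisfies the shim requirement
# (>= 1.6.26 on the 1.6 line, >= 1.7.7 otherwise).
containerd_ok() {
  ver="${1#containerd://}"   # strip the CONTAINER-RUNTIME prefix
  ver="${ver%%-*}"           # drop any distro suffix such as "-1"
  major="${ver%%.*}"
  rest="${ver#*.}"
  minor="${rest%%.*}"
  patch="${rest#*.}"
  [ "$major" -gt 1 ] && return 0
  [ "$major" -lt 1 ] && return 1
  if [ "$minor" -eq 6 ]; then
    [ "$patch" -ge 26 ]
  elif [ "$minor" -eq 7 ]; then
    [ "$patch" -ge 7 ]
  else
    [ "$minor" -gt 7 ]
  fi
}

containerd_ok "containerd://1.7.7-1" && echo "1.7.7-1: supported"
containerd_ok "containerd://1.6.20" || echo "1.6.20: too old"
```

You could feed this function the runtime column from `kubectl get nodes -o wide` to validate every node in a cluster at once.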
2 - Integrations
SpinKube Integrations
KEDA
Kubernetes Event-Driven Autoscaling (KEDA) provides event-driven autoscaling for Kubernetes workloads. It allows Kubernetes to automatically scale applications in response to external events such as messages in a queue, enabling more efficient resource utilization and responsive scaling based on actual demand, rather than static metrics. KEDA serves as a bridge between Kubernetes and various event sources, making it easier to scale applications dynamically in a cloud-native environment. If you would like to see how SpinKube integrates with KEDA, please read the “Scaling With KEDA” tutorial which deploys a SpinApp and the KEDA ScaledObject instance onto a cluster. The tutorial also uses Bombardier to generate traffic to test how well KEDA scales our SpinApp.
Rancher Desktop
The release of Rancher Desktop 1.13.0 comes with basic support for running WebAssembly (Wasm) containers and deploying them to Kubernetes. Rancher Desktop, by SUSE, is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop. If you would like to see how SpinKube integrates with Rancher Desktop, please read the “Integrating With Rancher Desktop” tutorial, which walks through the steps of installing the necessary components for SpinKube (including cert-manager for SSL, the CRDs, and the KWasm runtime class manager using Helm charts). The tutorial then demonstrates how to create a simple Spin JavaScript application and deploy it within Rancher Desktop's local cluster.
3 - Spintainer Executor
The Spintainer Executor
The Spintainer (a play on the words Spin and container) executor is a SpinAppExecutor that runs Spin applications directly in a container rather than via the shim. This is useful for a number of reasons:
- Provides the flexibility to:
- Use any Spin version you want.
- Use any custom triggers or plugins you want.
- Allows you to use SpinKube even if you don’t have the cluster permissions to install the shim.
Note: We recommend using the shim for most use cases. The spintainer executor is best reserved as a workaround.
How to create a spintainer executor
The following is some sample configuration for a spintainer executor:
```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: spintainer
spec:
  createDeployment: true
  deploymentConfig:
    installDefaultCACerts: true
    spinImage: ghcr.io/fermyon/spin:v2.7.0
```
Save this into a file named `spintainer-executor.yaml` and then apply it to the cluster:

```shell
kubectl apply -f spintainer-executor.yaml
```
How to use a spintainer executor
To use the spintainer executor you must reference it as the executor of your SpinApp:
```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  replicas: 1
  executor: spintainer
```
How the spintainer executor works
The spintainer executor runs your Spin application in a container created from the image specified by `.spec.deploymentConfig.spinImage`. The container image must have the Spin binary as its entrypoint, and it will be started with the following arguments:

```shell
up --listen {spin-operator-defined-port} -f {spin-operator-defined-image} --runtime-config-file {spin-operator-defined-config-file}
```
For ease of use, you can use the images published by the Spin project. Alternatively, you can craft images for your own unique needs.
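As a concrete illustration of how those operator-defined arguments get filled in, the sketch below substitutes hypothetical stand-in values (none of them come from a real cluster) and prints the resulting command line:

```shell
# Hypothetical stand-ins for the values the Spin Operator injects at runtime
LISTEN_ADDR="0.0.0.0:80"
APP_IMAGE="ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
RUNTIME_CONFIG="/var/run/config/runtime-config.toml"

# The container entrypoint (the spin binary) then receives roughly:
echo spin up --listen "$LISTEN_ADDR" -f "$APP_IMAGE" --runtime-config-file "$RUNTIME_CONFIG"
```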
4 - Upgrading to v0.4.0
Spin Operator v0.4.0 introduces a breaking API change. The SpinApp and SpinAppExecutor resources are moving from the `spinoperator.dev` to the `spinkube.dev` domain. This is a breaking change and therefore requires a re-install of the Spin Operator when upgrading to v0.4.0.
Migration steps
Uninstall any existing SpinApps.

Note: Back them up first so they can be re-applied after the upgrade:

```shell
kubectl get spinapps.core.spinoperator.dev -o yaml > spinapps.yaml
```

```shell
kubectl delete spinapp.core.spinoperator.dev --all
```
Uninstall any existing SpinAppExecutors.
```shell
kubectl delete spinappexecutor.core.spinoperator.dev --all
```
Uninstall the old Spin Operator.
Note: If you used a different release name or namespace when installing the Spin Operator you’ll have to adjust the command accordingly. Alternatively, if you used something other than Helm to install the Spin Operator, you’ll need to uninstall it following whatever approach you used to install it.
```shell
helm uninstall spin-operator --namespace spin-operator
```
Uninstall the old CRDs.
```shell
kubectl delete crd spinapps.core.spinoperator.dev
kubectl delete crd spinappexecutors.core.spinoperator.dev
```
Modify your SpinApps to use the new `apiVersion`. Now you'll need to modify the `apiVersion` in your SpinApps, replacing `core.spinoperator.dev/v1alpha1` with `core.spinkube.dev/v1alpha1`.

Note: If you don't have your SpinApps tracked in source code somewhere, then you will have backed up the SpinApps in your cluster to a file named `spinapps.yaml` in step 1. If you did this, then you need to replace the `apiVersion` in the `spinapps.yaml` file. Here's a command that can help with that:

```shell
sed 's|apiVersion: core.spinoperator.dev/v1alpha1|apiVersion: core.spinkube.dev/v1alpha1|g' spinapps.yaml > modified-spinapps.yaml
```
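The rewrite step can be exercised end to end without a cluster. Here is a minimal sketch that runs the same sed substitution against a tiny sample backup file (the sample SpinApp manifest is illustrative only) and checks that no references to the old API group remain:

```shell
# Create a tiny sample backup file using the old API group (illustration only)
cat > spinapps.yaml <<'EOF'
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  executor: containerd-shim-spin
EOF

# Rewrite the apiVersion to the new core.spinkube.dev group
sed 's|apiVersion: core.spinoperator.dev/v1alpha1|apiVersion: core.spinkube.dev/v1alpha1|g' \
  spinapps.yaml > modified-spinapps.yaml

# Sanity check: no references to the old API group should remain
! grep -q 'core.spinoperator.dev' modified-spinapps.yaml && echo "migration rewrite OK"
```

The same grep check is worth running against your real `modified-spinapps.yaml` before re-applying it in the steps below.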
Install the new CRDs.
```shell
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.4.0/spin-operator.crds.yaml
```
Re-install the SpinAppExecutor.
```shell
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.4.0/spin-operator.shim-executor.yaml
```
If you had other executors you’ll need to install them too.
Install the new Spin Operator.
```shell
# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.4.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
```
Re-apply your modified SpinApps. Follow whatever pattern you normally use to get your SpinApps into the cluster, e.g. kubectl, Flux, Helm, etc.
Note: If you backed up your SpinApps in step 1, you can re-apply them using the command below:
```shell
kubectl apply -f modified-spinapps.yaml
```
Upgrade your `spin kube` plugin. If you're using the `spin kube` plugin, you'll need to upgrade it to the new version so that the scaffolded apps are still valid.

```shell
spin plugins upgrade kube
```