1 - Overview

A high-level overview of the SpinKube sub-projects

Project Overview

SpinKube is a new open source project that streamlines the experience of developing, deploying, and operating Wasm workloads on Kubernetes, using Spin in tandem with the runwasi and KWasm open source projects.

With SpinKube, you can leverage the advantages of using WebAssembly (Wasm) for your workloads:

  • Artifacts are significantly smaller in size compared to container images.
  • Artifacts can be quickly fetched over the network and started much faster. (Note: we are aware of several optimizations that still need to be implemented to enhance the startup time for workloads.)
  • Substantially fewer resources are required during idle times.

All of this, while integrating with Kubernetes primitives (DNS, probes, autoscaling, metrics) and many other cloud native and CNCF projects, thanks to Spin Operator.

SpinKube Project Overview Diagram

Spin Operator watches Spin App Custom Resources and realizes the desired state in the Kubernetes cluster. The foundation of this project was built using the kubebuilder framework and contains a Spin App Custom Resource Definition (CRD) and controller.

To get started, check out our Spin Operator quickstart.

2 - Quickstart

Learn how to set up a Kubernetes cluster, install the Spin Operator, and run your first Spin App

3 - spin-operator

Spin Operator

What Is Spin Operator?

Spin Operator is a Kubernetes operator which empowers platform engineers to deploy Spin applications as custom resources to their Kubernetes clusters. Spin Operator provides an elegant solution for platform engineers looking to improve efficiency without compromising on performance while maintaining workload portability.

Why Spin Operator?

By bringing the power of the Spin framework to Kubernetes clusters, Spin Operator provides application developers and platform engineers with the best of both worlds. For developers, this means easily building portable serverless functions that leverage the power and performance of Wasm via the Spin developer tool. For platform engineers, this means using idiomatic Kubernetes primitives (secrets, autoscaling, etc.) and tooling to manage these workloads at scale in a production environment, improving their overall operational efficiency.

How Does Spin Operator Work?

Built with the kubebuilder framework, Spin Operator is a Kubernetes operator. Kubernetes operators are used to extend Kubernetes automation to new objects, defined as custom resources, without modifying the Kubernetes API. The Spin Operator is composed of two main components:

  • A controller that defines and manages Wasm workloads on k8s.
  • The “SpinApps” Custom Resource Definition (CRD).

SpinApp custom resources can be composed manually or generated automatically from an existing Spin application using the spin kube scaffold command, as shown in the example below. The former approach lends itself well to CI/CD systems, whereas the latter is a better fit for local testing as part of a local developer workflow.
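
For example, assuming you have already pushed an application to a registry (the image reference below is purely illustrative), generating a manifest might look like this:

# Generate a SpinApp manifest from an application that was already pushed to a registry
# (ttl.sh/my-app:1h is a hypothetical reference; substitute your own image)
spin kube scaffold --from ttl.sh/my-app:1h --out spinapp.yaml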

Once an application deployment begins, Spin Operator handles scheduling the workload on the appropriate nodes (thanks to the Runtime Class Manager, previously known as Kwasm) and managing the resource’s lifecycle. There is no need to fetch the containerd shim spin binary or mutate node labels. This is all managed via the Runtime Class Manager, which you will install as a dependency when setting up Spin Operator.

Next Steps

For application developers interested in writing their first Spin App, check out the spin kube plugin documentation. By the end of the quickstart, you’ll have an application ready to deploy to your Kubernetes cluster of choice.

For platform engineers interested in cluster operations and working with Spin Operator, visit our Spin Operator quickstart. We provide you with a simple quickstart SpinApp to use alongside Spin Operator.

3.1 - Prerequisites

Prerequisites

The following prerequisites are required.

Go

If you are building Spin Operator from source or contributing to its development, you will need Go v1.22.0+ installed on your machine. Otherwise, skip this section and move on to the next prerequisite.

TinyGo

Please also install the latest version of TinyGo.

Managing Containers and Kubernetes Locally

If you’d like to run Spin Operator locally, you have the option of using Rancher Desktop or Docker Desktop as the interface for managing containers and Kubernetes clusters directly from your workstation.

Rancher Desktop

If you choose Rancher Desktop, it is recommended to install version 1.13.0 or higher. Additionally, you will need to configure it to support WebAssembly applications.

Docker Desktop

If you choose Docker Desktop, it is recommended to install version 17.03 or higher. Additionally, you will need to configure it to support WebAssembly applications.

Kubectl

If you’d like to manage your Spin applications with kubectl, then Spin Operator requires that you have kubectl version v1.27.0+ installed.

K3d

If running/deploying your Spin application involves the use of k3d, then the Spin Operator requires that you have k3d installed and that you have access to a Kubernetes v1.27.0+ cluster.

containerd

If running/deploying your Spin application involves the use of k3s or a bare-metal host, then the Spin Operator requires that you have containerd installed.

Helm

If running/deploying your Spin application involves the use of Helm, then the Spin Operator requires that you have Helm installed on your system.

Spin

Please install the latest version (2.3.1 or newer) of Spin on your local machine for creating Spin Apps.

Bombardier

Installing Bombardier is not required to use Spin Operator. Bombardier is used in tutorials like Scaling Spin App With Horizontal Pod Autoscaling to generate load to test autoscaling.

Azure CLI

Installing Azure CLI is not required to use Spin Operator. Azure CLI is used to provision Azure Kubernetes Service (AKS) and necessary Azure resources as part of the Deploy Spin Operator on Azure Kubernetes Service tutorial.
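
Once the relevant tools are installed, you can quickly sanity-check their versions from a terminal (only run the checks for the tools you actually installed; the exact output will vary by platform):

# Verify installed tool versions
go version
tinygo version
kubectl version --client
k3d version
helm version
spin --version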

3.2 - Quickstart

Learn how to set up a Kubernetes cluster, install the Spin Operator, and run your first Spin App

This Quickstart guide demonstrates how to set up a new Kubernetes cluster, install the Spin Operator, and deploy your first Spin application.

Prerequisites

Ensure necessary prerequisites are installed.

For this Quickstart in particular, you will need:

  • kubectl - the Kubernetes CLI
  • Rancher Desktop or Docker Desktop for managing containers and Kubernetes on your desktop
  • k3d - a lightweight Kubernetes distribution that runs on Docker
  • Helm - the package manager for Kubernetes

Set up Your Kubernetes Cluster

  1. Create a Kubernetes cluster with a k3d image that includes the containerd-shim-spin prerequisite already installed:
k3d cluster create wasm-cluster \
  --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.14.1 \
  --port "8081:80@loadbalancer" \
  --agents 2

Note: Spin Operator requires a few Kubernetes resources that are installed globally to the cluster. We create these directly through kubectl as a best practice, since their lifetimes are usually managed separately from a given Spin Operator installation.

  2. Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
  3. Apply the Runtime Class used for scheduling Spin apps onto nodes running the shim:

Note: In a production cluster you likely want to customize the Runtime Class with a nodeSelector that matches nodes that have the shim installed. However, in the K3d example, they’re installed on every node.

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
  4. Apply the Custom Resource Definitions used by the Spin Operator:
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

Deploy the Spin Operator

Execute the following command to install the Spin Operator on the K3d cluster using Helm. This will create all of the Kubernetes resources required by Spin Operator under the Kubernetes namespace spin-operator. It may take a moment for the installation to complete as dependencies are installed and pods are spinning up.

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Lastly, create the shim executor:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Run the Sample Application

You are now ready to deploy Spin applications onto the cluster!

  1. Create your first application in the same spin-operator namespace that the operator is running in:
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
  2. Forward a local port to the application pod so that it can be reached:
kubectl port-forward svc/simple-spinapp 8083:80
  3. In a different terminal window, make a request to the application:
curl localhost:8083/hello

You should see:

Hello world from Spin!

Next Steps

Congrats on deploying your first SpinApp! For recommended next steps, explore the installation guides and tutorials in the sections that follow.

3.3 - Installation

Learn how to install Spin Operator

In this section you’ll learn how to install Spin Operator and run the operator either on your local machine or on a Kubernetes cluster.

3.3.1 - Installing with Helm

This guide walks you through the process of installing Spin Operator using Helm.

Prerequisites

Please ensure that your system has all of the prerequisites installed before continuing.

For this guide in particular, you will need:

  • kubectl - the Kubernetes CLI
  • Helm - the package manager for Kubernetes

Install Spin Operator With Helm

The following instructions are for installing Spin Operator using a Helm chart (using helm install).

Prepare the Cluster

Before installing the chart, you’ll need to ensure the following:

The Custom Resource Definition (CRD) resources are installed. This includes the SpinApp CRD representing Spin applications to be scheduled on the cluster.

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

A RuntimeClass resource that points to the spin handler, named wasmtime-spin-v2, is installed. If you are deploying to a production cluster where only a subset of nodes have the shim installed, you’ll need to modify the RuntimeClass with a nodeSelector:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
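
If you need to restrict scheduling to specific nodes, a customized RuntimeClass could look like the following sketch (the handler name is assumed to match the published manifest, and the node label is purely illustrative; adjust it to match how your shim-enabled nodes are labeled):

# Example only: restrict the RuntimeClass to nodes carrying a hypothetical label
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
scheduling:
  nodeSelector:
    spin-enabled: "true"
EOF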

The containerd-spin-shim SpinAppExecutor custom resource is installed. This tells Spin Operator to use the containerd shim executor to run Spin apps:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Chart prerequisites

The chart depends on cert-manager (used to provision TLS certificates for the admission webhook system) and on the KWasm operator (used to install the containerd shim on the nodes). Install both before installing the Spin Operator chart:

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3
# Add Helm repository if not already done
helm repo add kwasm http://kwasm.sh/kwasm-operator/

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true

Installing the Spin Operator Chart

The following installs the chart with the release name spin-operator:

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Upgrading the Chart

Note that you may also need to upgrade the spin-operator CRDs in tandem with upgrading the Helm release:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

To upgrade the spin-operator release, run the following:

# Upgrade Spin Operator using Helm
helm upgrade spin-operator \
  --namespace spin-operator \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Uninstalling the Chart

To delete the spin-operator release, run:

# Uninstall Spin Operator using Helm
helm delete spin-operator --namespace spin-operator

This removes all Kubernetes resources associated with the chart and deletes the Helm release.

To completely uninstall all resources related to spin-operator, you may want to delete the corresponding CRD resources and, optionally, the RuntimeClass:

kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml
kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

3.4 - Tutorials

This section consists of tutorials in the context of Spin Operator

3.4.1 - Assigning variables to Spin Apps

Configure Spin Apps using values from Kubernetes ConfigMaps and Secrets

By using variables, you can alter application behavior without recompiling your SpinApp. When running in Kubernetes, you can either provide constant values for variables, or reference them from Kubernetes primitives such as ConfigMaps and Secrets. This tutorial guides you through the process of assigning variables to your SpinApp.

Prerequisites

Ensure necessary prerequisites are installed.

For this tutorial in particular, you should have Spin Operator running either locally or on your Kubernetes cluster.

Build and Store SpinApp in an OCI Registry

We’re going to build the SpinApp and store it in the ttl.sh registry. Move into the apps/variable-explorer directory and build the SpinApp we’ve provided:

# Build and publish the sample app
cd apps/variable-explorer
spin build
spin registry push ttl.sh/variable-explorer:1h

Note that the tag at the end of ttl.sh/variable-explorer:1h indicates how long the image will be available, e.g. 1h (1 hour). The maximum is 24h, and you will need to push the image again once the TTL has expired.

For demonstration purposes, we use the variable explorer sample app. It reads three different variables (log_level, platform_name and db_password) and prints their values to the STDOUT stream as shown in the following snippet:

let log_level = variables::get("log_level")?;
let platform_name = variables::get("platform_name")?;
let db_password = variables::get("db_password")?;

println!("# Log Level: {}", log_level);
println!("# Platform name: {}", platform_name);
println!("# DB Password: {}", db_password);

Those variables are defined as part of the Spin manifest (spin.toml), and access to them is granted to the variable-explorer component:

[variables]
log_level = { default = "WARN" }
platform_name = { default = "Fermyon Cloud" }
db_password = { required = true }

[component.variable-explorer.variables]
log_level = "{{ log_level }}"
platform_name = "{{ platform_name }}"
db_password = "{{ db_password }}"

Configuration data in Kubernetes

In Kubernetes, you use ConfigMaps for storing non-sensitive configuration data and Secrets for storing sensitive configuration data. The deployment manifest (config/samples/variable-explorer.yaml) contains specifications for both a ConfigMap and a Secret:

kind: ConfigMap
apiVersion: v1
metadata:
  name: spinapp-cfg
data:
  logLevel: INFO
---
kind: Secret
apiVersion: v1
metadata:
  name: spinapp-secret
data:
  password: c2VjcmV0X3NhdWNlCg==
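
Kubernetes expects Secret data values to be base64-encoded; the password above decodes to secret_sauce. If you want to supply your own value, you can encode it like this before placing it in the manifest (the value shown is only an example):

# Base64-encode a secret value for use in the Secret manifest
echo -n 'my_database_password' | base64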

Assigning variables to a SpinApp

When creating a SpinApp, you can choose from different approaches for specifying variables:

  1. Providing constant values
  2. Loading configuration values from ConfigMaps
  3. Loading configuration values from Secrets

The SpinApp specification contains the variables array, which you use for specifying variables (see kubectl explain spinapp.spec.variables).

The deployment manifest (config/samples/variable-explorer.yaml) specifies a static value for platform_name. The value of log_level is read from the ConfigMap called spinapp-cfg, and the db_password is read from the Secret called spinapp-secret:

kind: SpinApp
apiVersion: core.spinoperator.dev/v1alpha1
metadata:
  name: variable-explorer
spec:
  replicas: 1
  image: ttl.sh/variable-explorer:1h
  executor: containerd-shim-spin
  variables:
    - name: platform_name
      value: Kubernetes
    - name: log_level
      valueFrom:
        configMapKeyRef:
          name: spinapp-cfg
          key: logLevel
          optional: true
    - name: db_password
      valueFrom:
        secretKeyRef:
          name: spinapp-secret
          key: password
          optional: false

As the deployment manifest outlines, you can use the optional property, just as you would when specifying environment variables for a regular Kubernetes Pod, to control whether Kubernetes should prevent the SpinApp from starting if the referenced configuration source does not exist.

You can deploy all resources by executing the following command:

kubectl apply -f config/samples/variable-explorer.yaml

configmap/spinapp-cfg created
secret/spinapp-secret created
spinapp.core.spinoperator.dev/variable-explorer created

Inspecting runtime logs of your SpinApp

To verify that all variables are passed correctly to the SpinApp, you can configure port forwarding from your local machine to the corresponding Kubernetes Service:

kubectl port-forward services/variable-explorer 8080:80

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

When port forwarding is established, you can send an HTTP request to the variable-explorer from within an additional terminal session:

curl http://localhost:8080
Hello from Kubernetes

Finally, you can use kubectl logs to see all logs produced by the variable-explorer at runtime:

kubectl logs -l core.spinoperator.dev/app-name=variable-explorer

# Log Level: INFO
# Platform name: Kubernetes
# DB Password: secret_sauce

3.4.2 - Deploy Spin Operator on Azure Kubernetes Service

In this tutorial you’ll learn how to deploy Spin Operator on Azure Kubernetes Service (AKS)

In this tutorial, you install Spin Operator on an Azure Kubernetes Service (AKS) cluster and deploy a simple Spin application. You will learn how to:

  • Deploy an AKS cluster
  • Install Spin Operator Custom Resource Definition and Runtime Class
  • Install and verify containerd shim via Kwasm
  • Deploy a simple Spin App custom resource on your cluster

Prerequisites

Please see the following sections in the Prerequisites page and fulfill those prerequisite requirements before continuing:

  • kubectl - the Kubernetes CLI
  • Helm - the package manager for Kubernetes
  • Azure CLI - cross-platform CLI for managing Azure resources

Provisioning the necessary Azure Infrastructure

Before you dive into deploying Spin Operator on Azure Kubernetes Service (AKS), the underlying cloud infrastructure must be provisioned. For the sake of this article, you will provision a simple AKS cluster. (Alternatively, you can set up the AKS cluster by following this guide from Microsoft.)

# Login with Azure CLI
az login

# Select the desired Azure Subscription
az account set --subscription <YOUR_SUBSCRIPTION>

# Create an Azure Resource Group
az group create --name rg-spin-operator \
    --location germanywestcentral

# Create an AKS cluster
az aks create --name aks-spin-operator \
    --resource-group rg-spin-operator \
    --location germanywestcentral \
    --node-count 1 \
    --tier free \
    --generate-ssh-keys

Once the AKS cluster has been provisioned, use the aks get-credentials command to download credentials for kubectl:

# Download credentials for kubectl
az aks get-credentials --name aks-spin-operator \
    --resource-group rg-spin-operator

For verification, you can use kubectl to browse common resources inside of the AKS cluster:

# Browse namespaces in the AKS cluster
kubectl get namespaces

NAME              STATUS   AGE
default           Active   3m
kube-node-lease   Active   3m
kube-public       Active   3m
kube-system       Active   3m

Deploying the Spin Operator

First, the Custom Resource Definition (CRD) and the Runtime Class for wasmtime-spin-v2 must be installed.

# Install the CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

# Install the Runtime Class
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml

The following installs cert-manager, which is required to automatically provision and manage TLS certificates (used by the admission webhook system of Spin Operator):

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

The Spin Operator chart also has a dependency on Kwasm, which you use to install containerd-wasm-shim on the Kubernetes node(s):

# Add Helm repository if not already done
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm repo update

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true

To verify containerd-wasm-shim installation, you can inspect the logs from the Kwasm Operator:

# Inspect logs from the Kwasm Operator
kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator

{"level":"info","node":"aks-nodepool1-31687461-vmss000000","time":"2024-02-12T11:23:43Z","message":"Trying to Deploy on aks-nodepool1-31687461-vmss000000"}
{"level":"info","time":"2024-02-12T11:23:43Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is still Ongoing"}
{"level":"info","time":"2024-02-12T11:24:00Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is Completed. Happy WASMing"}

The following installs the chart with the release name spin-operator in the spin-operator namespace:

helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Lastly, create the shim executor:

kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Deploying a Spin App to AKS

To validate the Spin Operator deployment, you will deploy a simple Spin App to the AKS cluster. The following command will install a simple Spin App using the SpinApp CRD you provisioned in the previous section:

# Deploy a sample Spin app
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml

Verifying the Spin App

Configure port forwarding from port 8080 of your local machine to port 80 of the Kubernetes service which points to the Spin App you installed in the previous section:

kubectl port-forward services/simple-spinapp 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Send an HTTP request to http://127.0.0.1:8080/hello using curl:

# Send an HTTP GET request to the Spin App
curl -iX GET http://localhost:8080/hello
HTTP/1.1 200 OK
transfer-encoding: chunked
date: Mon, 12 Feb 2024 12:23:52 GMT

Hello world from Spin!%

Removing the Azure infrastructure

To delete the Azure infrastructure created as part of this article, use the following command:

# Remove all Azure resources
az group delete --name rg-spin-operator \
    --no-wait \
    --yes

3.4.3 - Integrating With Docker Desktop

This tutorial shows how to integrate SpinKube and Docker Desktop

Docker Desktop is an application that provides all the essentials to work with containers and Kubernetes on your desktop.

Prerequisites

The prerequisites for this tutorial are Docker Desktop and the assets listed in the SpinKube quickstart. Let’s dive in.

Docker Desktop

First, install the latest version of Docker Desktop.

Docker Desktop Preferences

WebAssembly (Wasm) support is still an in-development (Beta) feature of Docker Desktop, and it is disabled by default. To turn it on, open the Docker Desktop settings by clicking the gear icon in the top right corner of the navigation bar. Click Extensions from the menu on the left and ensure that the boxes relating to Docker Marketplace and Docker Extensions system containers are checked (as shown in the image below). Checking these boxes enables the “Features in development” extension.

Docker Desktop Extensions

Please ensure that you press “Apply & restart” to save any changes.

Click on Features in development from the menu on the left, and enable the following two options:

  • “Use containerd for pulling and storing images”: This turns on containerd support, which is necessary for Wasm.
  • “Enable Wasm”: This installs the Wasm subsystem, which includes containerd shims and Spin (among other things).

Docker Desktop Enable Wasm

Make sure you press “Apply & restart” to save the changes.

Docker Desktop is Wasm-ready!

Click on “Kubernetes” and check the “Enable Kubernetes” box, as shown below.

Enable Kubernetes

Make sure you press “Apply & restart” to save the changes.

Select docker-desktop from the Kubernetes Contexts configuration in your toolbar.

Kubernetes Context

SpinKube

The following commands are from the SpinKube Quickstart guide. Please refer to the quickstart if you have any queries.

The following commands install all of the necessary items that can be found in the quickstart:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
helm repo add kwasm http://kwasm.sh/kwasm-operator/

helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.1

kubectl annotate node --all kwasm.sh/kwasm-node=true

Creating Our Spin Application

Next, we create a new Spin app using the JavaScript template:

spin new -t http-js hello-docker --accept-defaults
cd hello-docker
npm install

We then edit the JavaScript source file (the src/index.js file) to match the following:

export async function handleRequest(request) {
    return {
        status: 200,
        headers: {"content-type": "text/plain"},
        body: "Hello from Docker Desktop" // <-- This changed
    }
}

All that’s left to do is build the app:

spin build

Deploying Our Spin App to Docker

We publish our application to Docker Hub using the docker push command (replace tpmccallum with your own Docker Hub username):

docker push tpmccallum/hello-docker

The command above will return output similar to the following:

Using default tag: latest
The push refers to repository [docker.io/tpmccallum/hello-docker]
latest: digest: sha256:f24bf4fae2dc7dd80cad25b3d3a6bceb566b257c03b7ff5b9dd9fe36b05f06e0 size: 695

Once published, we can read the configuration of our published application using the spin kube scaffold command:

spin kube scaffold -f tpmccallum/hello-docker

The above command will return something similar to the following YAML:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-docker
spec:
  image: "tpmccallum/hello-docker"
  executor: containerd-shim-spin
  replicas: 2

We can deploy this application to the cluster using the following command:

spin kube deploy --from docker.io/tpmccallum/hello-docker

If we look at the “Images” section of Docker Desktop we see tpmccallum/hello-docker:

Docker Desktop Images

We can test the Wasm-powered Spin app that is running via Docker using the following request:

curl localhost:3000

Which returns the following:

Hello from Docker Desktop

3.4.4 - Integrating With Rancher Desktop

This tutorial shows how to integrate SpinKube and Rancher Desktop

Rancher Desktop is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop.

Prerequisites

The prerequisites for this tutorial are Rancher Desktop and the assets listed in the SpinKube quickstart. Let’s dive in.

Rancher Desktop

First, install the latest version of Rancher Desktop.

Rancher Desktop Preferences

Check the “Container Engine” section of your “Preferences” to ensure that containerd is your runtime and that “Wasm” is enabled, as shown below.

Rancher Desktop Preferences Wasm

Also, select rancher-desktop from the Kubernetes Contexts configuration in your toolbar.

Rancher Desktop Preferences Wasm

SpinKube

The following commands are from the SpinKube Quickstart guide. Please refer to the quickstart if you have any queries.

The following commands install all of the necessary items that can be found in the quickstart:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
helm repo add kwasm http://kwasm.sh/kwasm-operator/

helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.1

kubectl annotate node --all kwasm.sh/kwasm-node=true

Creating Our Spin Application

Next, we create a new Spin app using the JavaScript template:

spin new -t http-js hello-k3s --accept-defaults
cd hello-k3s
npm install

We then edit the JavaScript source file (the src/index.js file) to match the following:

export async function handleRequest(request) {
    return {
        status: 200,
        headers: {"content-type": "text/plain"},
        body: "Hello from Rancher Desktop" // <-- This changed
    }
}

All that’s left to do is build the app:

spin build

Deploying Our Spin App to Rancher Desktop with SpinKube

We publish our application using the spin registry command:

spin registry push ttl.sh/hello-k3s:0.1.0

Once published, we can read the configuration of our published application using the spin kube scaffold command:

spin kube scaffold --from ttl.sh/hello-k3s:0.1.0

The above command will return something similar to the following YAML:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-k3s
spec:
  image: "ttl.sh/hello-k3s:0.1.0"
  executor: containerd-shim-spin
  replicas: 2

Now, we can deploy the app into our cluster:

spin kube deploy --from ttl.sh/hello-k3s:0.1.0

If we click on Rancher Desktop’s “Cluster Dashboard”, we can see hello-k3s:0.1.0 running inside the “Workloads” dropdown section:

Rancher Desktop Preferences Wasm

To access our app outside of the cluster, we can forward the port so that we can access the application from our host machine:

kubectl port-forward svc/hello-k3s 8083:80

To test locally, we can make a request as follows:

curl localhost:8083

The above curl command will return the following:

Hello from Rancher Desktop

3.4.5 - Package and Deploy Spin Apps

Learn how to package and distribute Spin Apps using either public or private OCI compliant registries

This article explains how Spin Apps are packaged and distributed via both public and private registries. You will learn how to:

  • Package and distribute Spin Apps
  • Deploy Spin Apps
  • Scaffold Kubernetes Manifests for Spin Apps
  • Use private registries that require authentication

Prerequisites

Ensure the necessary prerequisites are installed.

For this tutorial in particular, you need the spin CLI with the kube plugin installed, as well as access to a Kubernetes cluster that has Spin Operator running.

Creating a new Spin App

You use the spin CLI to create a new Spin App. The spin CLI provides different templates, which you can use to quickly create different kinds of Spin Apps. For demonstration purposes, you will use the http-go template to create a simple Spin App.

# Create a new Spin App using the http-go template
spin new --accept-defaults -t http-go hello-spin

# Navigate into the hello-spin directory
cd hello-spin

The spin CLI created all necessary files within hello-spin. Besides the Spin Manifest (spin.toml), you can find the actual implementation of the app in main.go:

package main

import (
	"fmt"
	"net/http"

	spinhttp "github.com/fermyon/spin/sdk/go/v2/http"
)

func init() {
	spinhttp.Handle(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		fmt.Fprintln(w, "Hello Fermyon!")
	})
}

func main() {}

This implementation will respond to any incoming HTTP request and return an HTTP response with a status code of 200 (OK) and Hello Fermyon! as the response body.

You can test the app on your local machine by invoking the spin up command from within the hello-spin folder.
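
As a quick sketch of that local test (Spin serves on 127.0.0.1:3000 by default; the address printed by spin up is authoritative):

# Build and run the app locally
spin build
spin up

# In a second terminal, call the app
curl http://localhost:3000
Hello Fermyon!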

Packaging and Distributing Spin Apps

Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can be distributed using any registry that implements the Open Container Initiative Distribution Specification (a.k.a. “OCI Distribution Spec”).

The spin CLI simplifies packaging and distribution of Spin Apps and provides an atomic command for this (spin registry push). You can package and distribute the hello-spin app that you created as part of the previous section like this:

# Package and Distribute the hello-spin app
spin registry push --build ttl.sh/hello-spin:24h

It is a good practice to add the --build flag to spin registry push. It prevents you from accidentally pushing an outdated version of your Spin App to your registry of choice.

Deploying Spin Apps

To deploy Spin Apps to a Kubernetes cluster which has Spin Operator running, you use the kube plugin for spin. Use the spin kube deploy command as shown here to deploy the hello-spin app to your Kubernetes cluster:

# Deploy the hello-spin app to your Kubernetes Cluster
spin kube deploy --from ttl.sh/hello-spin:24h

spinapp.core.spinoperator.dev/hello-spin created

Scaffolding Spin Apps

In the previous section, you deployed the hello-spin app using the spin kube deploy command. Although this is handy, you may want to inspect, or alter the Kubernetes manifests before applying them. You use the spin kube scaffold command to generate Kubernetes manifests:

spin kube scaffold --from ttl.sh/hello-spin:24h
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: "ttl.sh/hello-spin:24h"
  replicas: 2

By default, the command will print all Kubernetes manifests to STDOUT. Alternatively, you can specify the --out argument to store the manifests in a file:

# Scaffold manifests to spinapp.yaml
spin kube scaffold --from ttl.sh/hello-spin:24h \
    --out spinapp.yaml

# Print contents of spinapp.yaml
cat spinapp.yaml
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: "ttl.sh/hello-spin:24h"
  replicas: 2

Distributing and Deploying Spin Apps via private registries

It is quite common to distribute Spin Apps through private registries that require some sort of authentication. To publish a Spin App to a private registry, you have to authenticate using the spin registry login command.

For demonstration purposes, you will now distribute the Spin App via GitHub Container Registry (GHCR). You can follow this guide by GitHub to create a new personal access token (PAT), which is required for authentication.

# Store PAT and GitHub username as environment variables
export GH_PAT=YOUR_TOKEN
export GH_USER=YOUR_GITHUB_USERNAME

# Authenticate spin CLI with GHCR
echo $GH_PAT | spin registry login ghcr.io -u $GH_USER --password-stdin

Successfully logged in as YOUR_GITHUB_USERNAME to registry ghcr.io

Once authentication succeeded, you can use spin registry push to push your Spin App to GHCR:

# Push hello-spin to GHCR
spin registry push --build ghcr.io/$GH_USER/hello-spin:0.0.1

Pushing app to the Registry...
Pushed with digest sha256:1611d51b296574f74b99df1391e2dc65f210e9ea695fbbce34d770ecfcfba581

In Kubernetes, you store registry authentication information as a secret of type docker-registry. The following snippet shows how to create such a secret with kubectl, leveraging the environment variables you specified in the previous section:

# Create Secret in Kubernetes
kubectl create secret docker-registry ghcr \
    --docker-server ghcr.io \
    --docker-username $GH_USER \
    --docker-password $GH_PAT

secret/ghcr created

Scaffold the necessary SpinApp Custom Resource (CR) using spin kube scaffold:

# Scaffold the SpinApp manifest
spin kube scaffold --from ghcr.io/$GH_USER/hello-spin:0.0.1 \
    --out spinapp.yaml

Before deploying the manifest with kubectl, update spinapp.yaml and link the ghcr secret you previously created using the imagePullSecrets property. Your SpinApp manifest should look like this:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: ghcr.io/$GH_USER/hello-spin:0.0.1
  imagePullSecrets:
    - name: ghcr
  replicas: 2
  executor: containerd-shim-spin

Note that $GH_USER should match the actual GitHub username you provided while running through the previous sections of this article.

Finally, you can deploy the app using kubectl apply:

# Deploy the spinapp.yaml using kubectl
kubectl apply -f spinapp.yaml
spinapp.core.spinoperator.dev/hello-spin created

3.4.6 - Scaling Spin App With Horizontal Pod Autoscaling (HPA)

This tutorial illustrates how one can horizontally scale Spin Apps in Kubernetes using Horizontal Pod Autoscaling (HPA)

Horizontal scaling, in the Kubernetes sense, means deploying more pods to meet demand (different from vertical scaling whereby more memory and CPU resources are assigned to already running pods). In this tutorial, we configure HPA to dynamically scale the instance count of our SpinApps to meet the demand.

Prerequisites

Please see the following sections in the Prerequisites page and fulfil those prerequisite requirements before continuing:

  • Docker - for running k3d
  • kubectl - the Kubernetes CLI
  • k3d - a lightweight Kubernetes distribution that runs on Docker
  • Helm - the package manager for Kubernetes
  • Bombardier - cross-platform HTTP benchmarking CLI

We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these steps to configure HPA autoscaling on your desired Kubernetes environment.

Setting Up Kubernetes Cluster

Run the following command to create a Kubernetes cluster with the containerd-shim-spin prerequisites installed. (If you already have a suitable Kubernetes cluster, feel free to use it instead.)

k3d cluster create wasm-cluster-scale \
  --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.14.1 \
  -p "8081:80@loadbalancer" \
  --agents 2

Deploying Spin Operator and its dependencies

First, you have to install cert-manager to automatically provision and manage TLS certificates (used by Spin Operator’s admission webhook system). For detailed installation instructions see the cert-manager documentation.

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

Next, run the following commands to install the Spin Runtime Class and Spin Operator Custom Resource Definitions (CRDs):

Note: In a production cluster you likely want to customize the Runtime Class with a nodeSelector that matches nodes that have the shim installed. However, in the K3d example, they’re installed on every node.

# Install the RuntimeClass
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml

# Install the CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

Lastly, install Spin Operator using Helm and apply the shim executor with the following commands:

# Install Spin Operator
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

# Install the shim executor
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Great, now you have Spin Operator up and running on your cluster. This means you’re set to create and deploy SpinApps later on in the tutorial.

Set Up Ingress

Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can reach your SpinApp once we’ve created it in future steps:

# Setup ingress following this tutorial https://k3d.io/v5.4.6/usage/exposing_services/
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hpa-spinapp
            port:
              number: 80
EOF

Hit enter to create the ingress resource.
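
You can confirm that the Ingress resource exists before moving on (the hpa-spinapp backend Service will only become available once the SpinApp is deployed in the next step):

# Verify the Ingress resource
kubectl get ingress nginx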

Deploy Spin App and HorizontalPodAutoscaler (HPA)

Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the Spin App in the apps/cpu-load-gen folder of the Spin Operator repository.

We can take a look at the SpinApp and HPA definitions in our deployment file below. As you can see, we have set the resource limits to 500m of CPU and 500Mi of memory per Spin application, and we will scale the instance count up once CPU utilization reaches 50%. We’ve also defined a maximum replica count of 10 and a minimum replica count of 1:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hpa-spinapp
spec:
  image: ghcr.io/spinkube/spin-operator/cpu-load-gen:20240311-163328-g1121986
  enableAutoscaling: true
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 400Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: spinapp-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-spinapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

For more information about HPA, please see the Kubernetes Horizontal Pod Autoscaler documentation.

Below is an example of the configuration to scale resources:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hpa-spinapp
spec:
  image: ghcr.io/spinkube/spin-operator/cpu-load-gen:20240311-163328-g1121986
  executor: containerd-shim-spin
  enableAutoscaling: true
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 400Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: spinapp-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-spinapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Let’s deploy the SpinApp and the HPA instance onto our cluster (using the above .yaml configuration). To apply the above configuration, we use the following kubectl apply command:

# Install SpinApp and HPA
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/hpa.yaml

You can see your running Spin application by running the following command:

kubectl get spinapps
NAME          AGE
hpa-spinapp   92m

You can also see your HPA instance with the following command:

kubectl get hpa
NAME                 REFERENCE                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
spinapp-autoscaler   Deployment/hpa-spinapp   6%/50%    1         10        1          97m

Please note: The Kubernetes Plugin for Spin is a tool designed for Kubernetes integration with the Spin command-line interface. The Kubernetes Plugin for Spin has a scaling tutorial that demonstrates how to use the spin kube command to tell Kubernetes when to scale your Spin application up or down based on demand.

Generate Load to Test Autoscale

Now let’s use Bombardier to generate traffic to test how well HPA scales our SpinApp. The following Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less). If a request is not responded to within 5 seconds, that request will time out:

# Generate a bunch of load
bombardier -c 40 -t 5s -d 3m http://localhost:8081

To watch the load, we can run the following command to get the status of our deployment:

kubectl describe deploy hpa-spinapp
...
---

Available      True    MinimumReplicasAvailable
Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hpa-spinapp-544c649cf4 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  11m    deployment-controller  Scaled up replica set hpa-spinapp-544c649cf4 to 1
  Normal  ScalingReplicaSet  9m45s  deployment-controller  Scaled up replica set hpa-spinapp-544c649cf4  to 4
  Normal  ScalingReplicaSet  9m30s  deployment-controller  Scaled up replica set hpa-spinapp-544c649cf4  to 8
  Normal  ScalingReplicaSet  9m15s  deployment-controller  Scaled up replica set hpa-spinapp-544c649cf4  to 10
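
You can also watch the autoscaler react to the load from a separate terminal:

# Watch the HPA adjust the replica count as load changes
kubectl get hpa spinapp-autoscaler --watch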

3.4.7 - Scaling Spin App With Kubernetes Event-Driven Autoscaling (KEDA)

This tutorial illustrates how one can horizontally scale Spin Apps in Kubernetes using Kubernetes Event-Driven Autoscaling (KEDA)

KEDA extends Kubernetes to provide event-driven scaling capabilities, allowing it to react to events from Kubernetes internal and external sources using KEDA scalers. KEDA provides a wide variety of scalers to define scaling behavior based on sources like CPU, Memory, Azure Event Hubs, Kafka, RabbitMQ, and more. We use a ScaledObject to dynamically scale the instance count of our SpinApp to meet the demand.

Prerequisites

Please see the following sections in the Prerequisites page and fulfil those prerequisite requirements before continuing:

  • kubectl - the Kubernetes CLI
  • Helm - the package manager for Kubernetes
  • Docker - for running k3d
  • k3d - a lightweight Kubernetes distribution that runs on Docker
  • Bombardier - cross-platform HTTP benchmarking CLI

We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these steps to configure KEDA autoscaling on your desired Kubernetes environment.

Setting Up Kubernetes Cluster

Run the following command to create a Kubernetes cluster with the containerd-shim-spin prerequisites installed. (If you already have a suitable Kubernetes cluster, feel free to use it instead.)

k3d cluster create wasm-cluster-scale \
  --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.14.1 \
  -p "8081:80@loadbalancer" \
  --agents 2

Deploying Spin Operator and its dependencies

First, you have to install cert-manager to automatically provision and manage TLS certificates (used by Spin Operator’s admission webhook system). For detailed installation instructions see the cert-manager documentation.

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add and update Jetstack repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

Next, run the following commands to install the Spin Runtime Class and Spin Operator Custom Resource Definitions (CRDs):

Note: In a production cluster you likely want to customize the Runtime Class with a nodeSelector that matches nodes that have the shim installed. However, in the K3d example, they’re installed on every node.

# Install the RuntimeClass
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml

# Install the CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

Lastly, install Spin Operator using Helm and apply the shim executor with the following commands:

# Install Spin Operator
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

# Install the shim executor
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

Great, now you have Spin Operator up and running on your cluster. This means you’re set to create and deploy SpinApps later on in the tutorial.

Set Up Ingress

Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can reach your Spin App once we’ve created it in future steps:

# Setup ingress following this tutorial https://k3d.io/v5.4.6/usage/exposing_services/
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keda-spinapp
            port:
              number: 80
EOF

Hit enter to create the ingress resource.

Setting Up KEDA

Use the following command to set up KEDA on your Kubernetes cluster using Helm. Different deployment methods are described at Deploying KEDA on keda.sh:

# Add the Helm repository
helm repo add kedacore https://kedacore.github.io/charts

# Update your Helm repositories
helm repo update

# Install the keda Helm chart into the keda namespace
helm install keda kedacore/keda --namespace keda --create-namespace
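
Before continuing, you can check that the KEDA components came up successfully:

# Verify that the KEDA operator pods are running
kubectl get pods --namespace keda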

Deploy Spin App and the KEDA ScaledObject

Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the Spin App in the apps/cpu-load-gen folder of the Spin Operator repository.

We can take a look at the SpinApp and the KEDA ScaledObject definitions in our deployment files below. As you can see, we have explicitly specified resource limits to 500m of cpu (spec.resources.limits.cpu) and 500Mi of memory (spec.resources.limits.memory) per SpinApp:

# https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/keda-app.yaml
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: keda-spinapp
spec:
  image: ghcr.io/spinkube/spin-operator/cpu-load-gen:20240311-163328-g1121986
  executor: containerd-shim-spin
  enableAutoscaling: true
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 400Mi
---

We will scale the instance count when we’ve reached a 50% utilization in cpu (spec.triggers[cpu].metadata.value). We’ve also instructed KEDA to scale our SpinApp horizontally within the range of 1 (spec.minReplicaCount) and 20 (spec.maxReplicaCount):

# https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/keda-scaledobject.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaling
spec:
  scaleTargetRef:
    name: keda-spinapp
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "50"

The Kubernetes documentation is the place to learn more about limits and requests. Consult the KEDA documentation to learn more about ScaledObject and KEDA’s built-in scalers.

Let’s deploy the SpinApp and the KEDA ScaledObject instance onto our cluster with the following command:

# Deploy the SpinApp
kubectl apply -f config/samples/keda-app.yaml
spinapp.core.spinoperator.dev/keda-spinapp created

# Deploy the ScaledObject
kubectl apply -f config/samples/keda-scaledobject.yaml
scaledobject.keda.sh/cpu-scaling created

You can see your running Spin application by running the following command:

kubectl get spinapps

NAME          READY REPLICAS   EXECUTOR
keda-spinapp  1                containerd-shim-spin

You can also see your KEDA ScaledObject instance with the following command:

kubectl get scaledobject

NAME          SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS   READY   ACTIVE   AGE
cpu-scaling   apps/v1.Deployment   keda-spinapp      1     20    cpu        True    True     7m

Generate Load to Test Autoscale

Now let’s use Bombardier to generate traffic to test how well KEDA scales our SpinApp. The following Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less). If a request is not responded to within 5 seconds, that request will time out:

# Generate a bunch of load
bombardier -c 40 -t 5s -d 3m http://localhost:8081

To watch the load, we can run the following command to get the status of our deployment:

kubectl describe deploy keda-spinapp
...
---

Available      True    MinimumReplicasAvailable
Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   keda-spinapp-76db5d7f9f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  84s   deployment-controller  Scaled up replica set keda-spinapp-76db5d7f9f to 2 from 1
  Normal  ScalingReplicaSet  69s   deployment-controller  Scaled up replica set keda-spinapp-76db5d7f9f to 4 from 2
  Normal  ScalingReplicaSet  54s   deployment-controller  Scaled up replica set keda-spinapp-76db5d7f9f to 8 from 4
  Normal  ScalingReplicaSet  39s   deployment-controller  Scaled up replica set keda-spinapp-76db5d7f9f to 16 from 8
  Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set keda-spinapp-76db5d7f9f to 20 from 16

3.4.8 - Share Spin Operator Image

Discover how to build and push the Spin Operator image

You can build and push the Spin Operator image using the docker-build and docker-push targets specified in the Makefile.

  • The docker-build task invokes local docker tooling and builds a Docker image matching your local system architecture.
  • The docker-push task invokes local docker tooling and pushes the Docker image to a container registry.

You can chain both targets using make docker-build docker-push to perform build and push at once.

Ensure to provide the fully qualified image name as IMG argument to push your custom Spin Operator image to the desired container registry:

# Build & Push the Spin Operator Image
make docker-build docker-push IMG=<some-registry>/spin-operator:tag

This image must be published to the registry you specified, and your working environment must have access to pull it. If the above command does not work, make sure you have the proper permissions for the registry.

Build and Push multi-arch Images

There are scenarios where you may want to build and push the Spin Operator image for multiple architectures. This is done by using the underlying Buildx capabilities provided by Docker.

You use the docker-build-and-publish-all target specified in the Makefile to build and push the Spin Operator image for multiple architectures at once:

# Build & Push the Spin Operator Image for arm64 and amd64
make docker-build-and-publish-all

Optionally, you can specify desired architectures by providing the PLATFORMS argument as shown here:

# Build & Push the Spin Operator Image form desired architectures explicitly
make docker-build-and-publish-all PLATFORMS=linux/arm64,linux/amd64

3.4.9 - Spin Operator Development

To Deploy on the Cluster

Build and push your image to the location specified by IMG:

make docker-build docker-push IMG=<some-registry>/spin-operator:tag

NOTE: The image must be published to the registry you specified, and your working environment must have permission to pull it from that registry. If the above commands don't work, make sure you have the proper permissions for the registry.

Apply the Runtime Class to the cluster:

kubectl apply -f config/samples/spin-runtime-class.yaml

Install the CRDs into the cluster:

make install

Deploy cert-manager to the cluster:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml

NOTE: Cert-manager is required to manage the TLS certificates for the admission webhooks.
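
Before continuing, you may want to confirm that the cert-manager pods are up and running:

kubectl get pods -n cert-manager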

Deploy the Manager to the cluster with the image specified by IMG:

make deploy IMG=<some-registry>/spin-operator:tag

NOTE: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
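
To confirm that the controller manager started successfully, you can list pods across all namespaces and filter for the operator (the exact namespace depends on your kustomize configuration):

kubectl get pods -A | grep spin-operator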

Create instances of your solution. You can apply the samples (examples) from the config/samples directory:

kubectl apply -k config/samples/

NOTE: Ensure that the samples have default values so they can be tested as-is.
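
Once the samples are applied, you can verify that the corresponding SpinApp resources exist:

kubectl get spinapps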

To Uninstall

Delete the instances (CRs) from the cluster:

kubectl delete -k config/samples/

Delete the APIs (CRDs) from the cluster:

make uninstall

Delete the Runtime Class from the cluster:

kubectl delete -f config/samples/spin-runtime-class.yaml

UnDeploy the controller from the cluster:

make undeploy

UnDeploy cert-manager from the cluster:

kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml

Packaging and deployment via Helm

The Spin Operator chart is assembled by running helmify over the kustomize manifests in the config directory and combining the output with non-kustomize items such as the NOTES.txt and Chart.yaml.

NOTE: Manual changes to helmify-generated resources, including the values.yml file and applicable resources in templates are not persisted across helmify invocations.

Generate the Helm chart:

make helm-generate

Install the Helm chart onto the cluster:

Note: CRDs and the wasmtime-spin-v2 RuntimeClass are currently not installed as part of the chart. You'll need to ensure these are present via the method(s) mentioned above.

make helm-install

Follow the release notes printed after helm installs the chart for next steps.
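
You can confirm that the Helm release exists by listing releases across all namespaces:

helm list -A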

Upgrade the Helm release on the cluster:

make helm-upgrade

Delete the Helm release from the cluster:

make helm-uninstall

3.4.10 - Running locally

Learn how to run Spin Operator on your local machine

Prerequisites

Please ensure that your system has all the prerequisites installed before continuing.

Fetch Spin Operator (Source Code)

Clone the Spin Operator repository:

git clone https://github.com/spinkube/spin-operator.git

Change into the Spin Operator directory:

cd spin-operator

Setting Up Kubernetes Cluster

Run the following command to create a Kubernetes k3d cluster that has the containerd-shim-spin prerequisites installed:

k3d cluster create wasm-cluster --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.14.1 -p "8081:80@loadbalancer" --agents 2

Run the following command to install the Custom Resource Definitions (CRDs) into the cluster:

make install
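
You can verify that the CRDs were installed by listing them; the SpinKube CRDs belong to the core.spinoperator.dev API group:

kubectl get crds | grep spinoperator.dev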

Running spin-operator

Run the following command to run the Spin Operator locally:

make run

In a fresh terminal, run the following command to create a Runtime Class named wasmtime-spin-v2:

kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
EOF

A SpinAppExecutor is a custom resource that defines how an application should be executed. The SpinAppExecutor provides ways for the user to configure how the application should run, like which runtime class to use.

SpinAppExecutors can be defined at the namespace level, allowing multiple SpinAppExecutors to refer to different runtime classes in the same namespace. This means that a SpinAppExecutor must exist in every namespace where a SpinApp is to be executed.

Run the following command to create a SpinAppExecutor using the same runtime class as the one created above:

kubectl apply -f - <<EOF
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2
EOF
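
You can check that the executor was created (assuming the default plural resource name):

kubectl get spinappexecutors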

Running the Sample Application

Run the following command, in a different terminal window:

kubectl apply -f ./config/samples/simple.yaml

Run the following command to obtain the name of the pod you have running:

kubectl get pods

The above command will return information similar to the following:

NAME                              READY   STATUS    RESTARTS   AGE
simple-spinapp-5b8d8d69b4-snscs   1/1     Running   0          3m40s

Using the NAME from above, run the following kubectl command to listen on port 8083 locally and forward to port 80 in the pod:

kubectl port-forward simple-spinapp-5b8d8d69b4-snscs 8083:80

The above command will return the following forwarding mappings:

Forwarding from 127.0.0.1:8083 -> 80
Forwarding from [::1]:8083 -> 80
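
Alternatively, if you prefer not to look up the pod name, you can port-forward through the Service instead, assuming Spin Operator created a Service named after the SpinApp (simple-spinapp):

kubectl port-forward svc/simple-spinapp 8083:80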

In a fresh terminal, run the following command:

curl localhost:8083/hello

The above command will return the following message:

Hello world from Spin!

3.4.11 - Running on a Cluster

Learn how to run Spin Operator on a Kubernetes cluster

Prerequisites

Please ensure that your system has all the prerequisites installed before continuing.

Running on Your Kubernetes Cluster

This is the standard development workflow for when you want to test running Spin Operator on a Kubernetes cluster. This is harder than running Spin Operator on your local machine, but deploying Spin Operator into your cluster lets you test things like webhooks.

Note that you need to install cert-manager for webhook support.

To install cert-manager with the default configuration, run:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
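
You may want to wait for cert-manager to become available before deploying the operator; one way to do this is a plain kubectl wait on the cert-manager deployments:

kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s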

Deploy the Manager to the cluster with the image specified by IMG:

make deploy IMG=<some-registry>/spin-operator:tag

NOTE: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.

To create instances of your solution, apply the samples (examples) from the config/samples directory:

kubectl apply -k config/samples/

NOTE: Ensure that the samples have default values so they can be tested as-is.

3.4.12 - Uninstall

How to uninstall custom resources and Spin Operator from your Cluster

These are commands to delete, uninstall and undeploy resources.

Delete (CRs)

The following command will delete the instances (CRs) from the cluster:

kubectl delete -k config/samples/

Delete APIs(CRDs)

The following command will uninstall CRDs from the Kubernetes cluster specified in ~/.kube/config:

make uninstall

Call the target with ignore-not-found=true to ignore resource-not-found errors during deletion.
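
For example:

make uninstall ignore-not-found=true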

UnDeploy

The following command will undeploy the controller from the Kubernetes cluster specified in ~/.kube/config:

make undeploy

Call the target with ignore-not-found=true to ignore resource-not-found errors during deletion.

3.5 - Support

This section provides support resources in the context of Spin Operator

3.5.1 - Feedback

How to provide feedback to the Spin Operator project

The following is a list of suggestions that might assist you with providing feedback to the Spin Operator project.

Discord

For questions or support, please visit the Spin Operator Discord channel.

Feature requests and bug reports

If you would like to file a feature request or a bug report, please open a new issue.

Documentation

If you would like to provide feedback on the documentation, please open a new issue.

3.5.2 - Troubleshooting

How to troubleshoot Spin Operator

The following is a list of common error messages and potential troubleshooting suggestions that might assist you with your work.

No endpoints available for service “spin-operator-webhook-service”

When following the quickstart guide, the following error can occur when running the kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml command:

Error from server (InternalError): error when creating "https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml": Internal error occurred: failed calling webhook "mspinappexecutor.kb.io": failed to call webhook: Post "https://spin-operator-webhook-service.spin-operator.svc:443/mutate-core-spinoperator-dev-v1alpha1-spinappexecutor?timeout=10s": no endpoints available for service "spin-operator-webhook-service"

To address the error above, first look to see if Spin Operator is running:

kubectl get pods -n spin-operator
NAME                                                READY   STATUS              RESTARTS   AGE
spin-operator-controller-manager-5bdcdf577f-htshb   0/2     ContainerCreating   0          26m

If the pod is not ready (READY 0/2 as shown above), use the pod name from that output to describe the spin-operator pod:

kubectl describe pod spin-operator-controller-manager-5bdcdf577f-htshb -n spin-operator

If the above command's response includes the message SetUp failed for volume "cert" : secret "webhook-server-cert" not found, please check the certificate. Spin Operator requires this certificate to serve webhooks, and a missing certificate could be one reason why it is failing to start.

The command to check the certificate and the desired output is as follows:

kubectl get certificate -n spin-operator
NAME                         READY   SECRET                AGE
spin-operator-serving-cert   True    webhook-server-cert   11m

Instead of the desired output shown above, you may get the No resources found in spin-operator namespace. response. For example:

kubectl get certificate -n spin-operator
No resources found in spin-operator namespace.

To resolve this issue, try installing Spin Operator again, this time using the helm upgrade --install syntax instead of just helm install:

helm upgrade --install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

Once Spin Operator is installed, you can run the kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml command again; the issue should now be resolved.

Error Validating Data: Connection Refused

When trying to run a kubectl apply -f <URL> command (for example, when installing cert-manager), you may encounter an error similar to the following:

$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml

error: error validating "https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:6443/openapi/v2?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

This is because no cluster exists. You can create a cluster using the k3d command or install and configure a client like Rancher Desktop or Docker Desktop to manage Kubernetes clusters on your behalf.

Installation Failed

When trying to install a new version of a chart you may get the following error:

Error: INSTALLATION FAILED: cannot re-use a name that is still in use

For example, if you have installed v0.14.0 of kwasm-operator using the following helm install command:

helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.0

Reissuing the above command with the new version v0.14.1 will result in the following error: Error: INSTALLATION FAILED: cannot re-use a name that is still in use. To use the same command when installing and upgrading a release, use upgrade --install (as referenced here in the official Helm documentation). For example:

helm upgrade --install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.14.1

Cluster Already Exists

When trying to create a cluster (e.g. a cluster named wasm-cluster) you may receive an error message similar to the following:

FATA[0000] Failed to create cluster 'wasm-cluster' because a cluster with that name already exists

Cluster Information

With k3d installed, you can use the following command to get a cluster list:

$ k3d cluster list
NAME           SERVERS   AGENTS   LOADBALANCER
wasm-cluster   1/1       2/2      true

With kubectl installed, you can use the following command to dump cluster information (this is much more verbose):

kubectl cluster-info dump

Cluster Delete

With k3d installed, you can delete the cluster by name, as shown in the command below:

$ k3d cluster delete wasm-cluster
INFO[0000] Deleting cluster 'wasm-cluster'
INFO[0002] Deleting cluster network 'k3d-wasm-cluster'
INFO[0002] Deleting 1 attached volumes...
INFO[0002] Removing cluster details from default kubeconfig...
INFO[0002] Removing standalone kubeconfig file (if there is one)...
INFO[0002] Successfully deleted cluster wasm-cluster!

Too long: must have at most 262144 bytes

When running kubectl apply -f my-file.yaml, the following error can occur if the YAML file is too large:

Too long: must have at most 262144 bytes

Using the --server-side=true option resolves this issue:

kubectl apply --server-side=true -f my-file.yaml

Redis Operator

You may note an error similar to the following when installing Redis Operator:

$ helm repo add redis-operator https://spotahome.github.io/redis-operator
"redis-operator" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "redis-operator" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm install redis-operator redis-operator/redis-operator
Error: INSTALLATION FAILED: failed to install CRD crds/databases.spotahome.com_redisfailovers.yaml: error parsing : error converting YAML to JSON: yaml: line 4: did not find expected node content

Use the following command to install an earlier version of Redis Operator (whilst waiting on this PR fix to be merged):

$ helm install redis-operator redis-operator/redis-operator --version 3.2.9
NAME: redis-operator
LAST DEPLOYED: Mon Jan 22 12:33:54 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

error: requires go version

When building apps like the cpu-load-gen Spin app, you may get the following error if your TinyGo installation is not up to date. The error states that Go version 1.18 through 1.20 is required, but this is not necessarily the case; it is recommended that you have the latest Go installed (e.g. 1.21), and downgrading is unnecessary. Instead, install the latest version of TinyGo to resolve this error:

user@user:~/spin-operator/apps/cpu-load-gen$ spin build
Building component cpu-load-gen with `tinygo build -target=wasi -gc=leaking -no-debug -o main.wasm main.go`
error: requires go version 1.18 through 1.20, got go1.21
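
You can check which TinyGo release is currently installed with:

tinygo version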

3.6 - Reference

Reference documentation for Custom Resource Definitions (CRDs)

3.6.1 - SpinApp

Custom Resource Definition (CRD) reference for SpinApp

Resource Types:

SpinApp

SpinApp is the Schema for the spinapps API

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| apiVersion | string | core.spinoperator.dev/v1alpha1 | true |
| kind | string | SpinApp | true |
| metadata | object | Refer to the Kubernetes API documentation for the fields of the `metadata` field. | true |
| spec | object | SpinAppSpec defines the desired state of SpinApp | false |
| status | object | SpinAppStatus defines the observed state of SpinApp | false |

SpinApp.spec

back to parent

SpinAppSpec defines the desired state of SpinApp

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| executor | string | Executor controls how this app is executed in the cluster. Defaults to whatever executor is available on the cluster. If multiple executors are available then the first executor in alphabetical order will be chosen. If no executors are available then no default will be set. | true |
| image | string | Image is the source for this app. | true |
| checks | object | Checks defines health checks that should be used by Kubernetes to monitor the application. | false |
| deploymentAnnotations | map[string]string | DeploymentAnnotations defines annotations to be applied to the underlying deployment. | false |
| enableAutoscaling | boolean | EnableAutoscaling indicates whether the app is allowed to autoscale. If true then the operator leaves the replica count of the underlying deployment to be managed by an external autoscaler (HPA/KEDA). Replicas cannot be defined if this is enabled. By default EnableAutoscaling is false. Default: false | false |
| imagePullSecrets | []object | ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling the image. | false |
| podAnnotations | map[string]string | PodAnnotations defines annotations to be applied to the underlying pods. | false |
| replicas | integer | Number of replicas to run. Format: int32 | false |
| resources | object | Resources defines the resource requirements for this app. | false |
| runtimeConfig | object | RuntimeConfig defines configuration to be applied at runtime for this app. | false |
| serviceAnnotations | map[string]string | ServiceAnnotations defines annotations to be applied to the underlying service. | false |
| variables | []object | Variables provide Kubernetes Bindings to Spin App Variables. | false |
| volumeMounts | []object | VolumeMounts defines how volumes are mounted in the underlying containers. | false |
| volumes | []object | Volumes defines the volumes to be mounted in the underlying pods. | false |
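
To illustrate how a few of these fields fit together, below is a minimal, hypothetical SpinApp manifest applied via a heredoc, in the same style used elsewhere in these docs (the name and image reference are placeholders; only executor and image are required):

# Hypothetical example - the app name and image reference below are placeholders
kubectl apply -f - <<EOF
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: example-spinapp
spec:
  executor: containerd-shim-spin
  image: "ghcr.io/my-account/example-spinapp:v0.1.0"
  replicas: 2
  checks:
    readiness:
      httpGet:
        path: /healthz
EOF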

SpinApp.spec.checks

back to parent

Checks defines health checks that should be used by Kubernetes to monitor the application.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| liveness | object | Liveness defines the liveness probe for the application. | false |
| readiness | object | Readiness defines the readiness probe for the application. | false |

SpinApp.spec.checks.liveness

back to parent

Liveness defines the liveness probe for the application.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| failureThreshold | integer | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. Format: int32. Default: 3 | false |
| httpGet | object | HTTPGet describes a health check that should be performed using a GET request. | false |
| initialDelaySeconds | integer | Number of seconds after the app has started before liveness probes are initiated. Default 10s. Format: int32. Default: 10 | false |
| periodSeconds | integer | How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Format: int32. Default: 10 | false |
| successThreshold | integer | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. Format: int32. Default: 1 | false |
| timeoutSeconds | integer | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Format: int32. Default: 1 | false |

SpinApp.spec.checks.liveness.httpGet

back to parent

HTTPGet describes a health check that should be performed using a GET request.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| path | string | Path is the path that should be used when calling the application for a health check, e.g /healthz. | true |
| httpHeaders | []object | HTTPHeaders are headers that should be included in the health check request. | false |

SpinApp.spec.checks.liveness.httpGet.httpHeaders[index]

back to parent

HTTPHealthProbeHeader is an abstraction around a http header key/value pair.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string |  | true |
| value | string |  | true |

SpinApp.spec.checks.readiness

back to parent

Readiness defines the readiness probe for the application.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| failureThreshold | integer | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. Format: int32. Default: 3 | false |
| httpGet | object | HTTPGet describes a health check that should be performed using a GET request. | false |
| initialDelaySeconds | integer | Number of seconds after the app has started before liveness probes are initiated. Default 10s. Format: int32. Default: 10 | false |
| periodSeconds | integer | How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Format: int32. Default: 10 | false |
| successThreshold | integer | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. Format: int32. Default: 1 | false |
| timeoutSeconds | integer | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Format: int32. Default: 1 | false |

SpinApp.spec.checks.readiness.httpGet

back to parent

HTTPGet describes a health check that should be performed using a GET request.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| path | string | Path is the path that should be used when calling the application for a health check, e.g /healthz. | true |
| httpHeaders | []object | HTTPHeaders are headers that should be included in the health check request. | false |

SpinApp.spec.checks.readiness.httpGet.httpHeaders[index]

back to parent

HTTPHealthProbeHeader is an abstraction around a http header key/value pair.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string |  | true |
| value | string |  | true |

SpinApp.spec.imagePullSecrets[index]

back to parent

LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.resources

back to parent

Resources defines the resource requirements for this app.

NameTypeDescriptionRequired
limitsmap[string]int or stringLimits describes the maximum amount of compute resources allowed.
false
requestsmap[string]int or stringRequests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits.
false

SpinApp.spec.runtimeConfig

back to parent

RuntimeConfig defines configuration to be applied at runtime for this app.

NameTypeDescriptionRequired
keyValueStores[]object
false
llmComputeobject
false
loadFromSecretstringLoadFromSecret is the name of the secret to load runtime config from. The secret should have a single key named "runtime-config.toml" that contains the base64 encoded runtime config. If this is provided all other runtime config is ignored.
false
sqliteDatabases[]objectSqliteDatabases provides spin bindings to different SQLite database providers. e.g on-disk or turso.
false

SpinApp.spec.runtimeConfig.keyValueStores[index]

back to parent

NameTypeDescriptionRequired
namestring
true
typestring
true
options[]object
false

SpinApp.spec.runtimeConfig.keyValueStores[index].options[index]

back to parent

NameTypeDescriptionRequired
namestringName of the config option.
true
valuestringValue is the static value to bind to the variable.
false
valueFromobjectValueFrom is a reference to dynamically bind the variable to.
false

SpinApp.spec.runtimeConfig.keyValueStores[index].options[index].valueFrom

back to parent

ValueFrom is a reference to dynamically bind the variable to.

NameTypeDescriptionRequired
configMapKeyRefobjectSelects a key of a ConfigMap.
false
secretKeyRefobjectSelects a key of a secret in the apps namespace
false

SpinApp.spec.runtimeConfig.keyValueStores[index].options[index].valueFrom.configMapKeyRef

back to parent

Selects a key of a ConfigMap.

NameTypeDescriptionRequired
keystringThe key to select.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the ConfigMap or its key must be defined
false

SpinApp.spec.runtimeConfig.keyValueStores[index].options[index].valueFrom.secretKeyRef

back to parent

Selects a key of a secret in the apps namespace

NameTypeDescriptionRequired
keystringThe key of the secret to select from. Must be a valid secret key.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the Secret or its key must be defined
false

SpinApp.spec.runtimeConfig.llmCompute

back to parent

NameTypeDescriptionRequired
typestring
true
options[]object
false

SpinApp.spec.runtimeConfig.llmCompute.options[index]

back to parent

NameTypeDescriptionRequired
namestringName of the config option.
true
valuestringValue is the static value to bind to the variable.
false
valueFromobjectValueFrom is a reference to dynamically bind the variable to.
false

SpinApp.spec.runtimeConfig.llmCompute.options[index].valueFrom

back to parent

ValueFrom is a reference to dynamically bind the variable to.

NameTypeDescriptionRequired
configMapKeyRefobjectSelects a key of a ConfigMap.
false
secretKeyRefobjectSelects a key of a secret in the apps namespace
false

SpinApp.spec.runtimeConfig.llmCompute.options[index].valueFrom.configMapKeyRef

back to parent

Selects a key of a ConfigMap.

NameTypeDescriptionRequired
keystringThe key to select.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the ConfigMap or its key must be defined
false

SpinApp.spec.runtimeConfig.llmCompute.options[index].valueFrom.secretKeyRef

back to parent

Selects a key of a secret in the apps namespace

NameTypeDescriptionRequired
keystringThe key of the secret to select from. Must be a valid secret key.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the Secret or its key must be defined
false

SpinApp.spec.runtimeConfig.sqliteDatabases[index]

back to parent

NameTypeDescriptionRequired
namestring
true
typestring
true
options[]object
false

SpinApp.spec.runtimeConfig.sqliteDatabases[index].options[index]

back to parent

NameTypeDescriptionRequired
namestringName of the config option.
true
valuestringValue is the static value to bind to the variable.
false
valueFromobjectValueFrom is a reference to dynamically bind the variable to.
false

SpinApp.spec.runtimeConfig.sqliteDatabases[index].options[index].valueFrom

back to parent

ValueFrom is a reference to dynamically bind the variable to.

NameTypeDescriptionRequired
configMapKeyRefobjectSelects a key of a ConfigMap.
false
secretKeyRefobjectSelects a key of a secret in the apps namespace
false

SpinApp.spec.runtimeConfig.sqliteDatabases[index].options[index].valueFrom.configMapKeyRef

back to parent

Selects a key of a ConfigMap.

NameTypeDescriptionRequired
keystringThe key to select.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the ConfigMap or its key must be defined
false

SpinApp.spec.runtimeConfig.sqliteDatabases[index].options[index].valueFrom.secretKeyRef

back to parent

Selects a key of a secret in the apps namespace

NameTypeDescriptionRequired
keystringThe key of the secret to select from. Must be a valid secret key.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the Secret or its key must be defined
false

SpinApp.spec.variables[index]

back to parent

SpinVar defines a binding between a spin variable and a static or dynamic value.

NameTypeDescriptionRequired
namestringName of the variable to bind.
true
valuestringValue is the static value to bind to the variable.
false
valueFromobjectValueFrom is a reference to dynamically bind the variable to.
false

SpinApp.spec.variables[index].valueFrom

back to parent

ValueFrom is a reference to dynamically bind the variable to.

NameTypeDescriptionRequired
configMapKeyRefobjectSelects a key of a ConfigMap.
false
fieldRefobjectSelects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
false
resourceFieldRefobjectSelects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
false
secretKeyRefobjectSelects a key of a secret in the pod's namespace
false

SpinApp.spec.variables[index].valueFrom.configMapKeyRef

back to parent

Selects a key of a ConfigMap.

NameTypeDescriptionRequired
keystringThe key to select.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the ConfigMap or its key must be defined
false

SpinApp.spec.variables[index].valueFrom.fieldRef

back to parent

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

NameTypeDescriptionRequired
fieldPathstringPath of the field to select in the specified API version.
true
apiVersionstringVersion of the schema the FieldPath is written in terms of, defaults to "v1".
false

SpinApp.spec.variables[index].valueFrom.resourceFieldRef

back to parent

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

NameTypeDescriptionRequired
resourcestringRequired: resource to select
true
containerNamestringContainer name: required for volumes, optional for env vars
false
divisorint or stringSpecifies the output format of the exposed resources, defaults to "1"
false

SpinApp.spec.variables[index].valueFrom.secretKeyRef

back to parent

Selects a key of a secret in the pod’s namespace

NameTypeDescriptionRequired
keystringThe key of the secret to select from. Must be a valid secret key.
true
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanSpecify whether the Secret or its key must be defined
false

SpinApp.spec.volumeMounts[index]

back to parent

VolumeMount describes a mounting of a Volume within a container.

NameTypeDescriptionRequired
mountPathstringPath within the container at which the volume should be mounted. Must not contain ':'.
true
namestringThis must match the Name of a Volume.
true
mountPropagationstringmountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
false
readOnlybooleanMounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
false
subPathstringPath within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).
false
subPathExprstringExpanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
false

SpinApp.spec.volumes[index]

back to parent

Volume represents a named volume in a pod that may be accessed by any container in the pod.

NameTypeDescriptionRequired
namestringname of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
true
awsElasticBlockStoreobjectawsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
false
azureDiskobjectazureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
false
azureFileobjectazureFile represents an Azure File Service mount on the host and bind mount to the pod.
false
cephfsobjectcephFS represents a Ceph FS mount on the host that shares a pod's lifetime
false
cinderobjectcinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
false
configMapobjectconfigMap represents a configMap that should populate this volume
false
csiobjectcsi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
false
downwardAPIobjectdownwardAPI represents downward API about the pod that should populate this volume
false
emptyDirobjectemptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
false
ephemeralobjectephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.

Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).

Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.

Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.

A pod can use both types of ephemeral volumes and persistent volumes at the same time.

false
fcobjectfc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.
false
flexVolumeobjectflexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.
false
flockerobjectflocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running
false
gcePersistentDiskobjectgcePersistentDisk represents a GCE Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
false
gitRepoobjectgitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.
false
glusterfsobjectglusterfs represents a Glusterfs mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md
false
hostPathobjecthostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write.

false
iscsiobjectiscsi represents an ISCSI Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
false
nfsobjectnfs represents an NFS mount on the host that shares a pod’s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
false
persistentVolumeClaimobjectpersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
false
photonPersistentDiskobjectphotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine
false
portworxVolumeobjectportworxVolume represents a portworx volume attached and mounted on kubelets host machine
false
projectedobjectprojected items for all in one resources secrets, configmaps, and downward API
false
quobyteobjectquobyte represents a Quobyte mount on the host that shares a pod’s lifetime
false
rbdobjectrbd represents a Rados Block Device mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md
false
scaleIOobjectscaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
false
secretobjectsecret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
false
storageosobjectstorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
false
vsphereVolumeobjectvsphereVolume represents a vSphere volume attached and mounted on kubelets host machine
false

SpinApp.spec.volumes[index].awsElasticBlockStore

back to parent

awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

NameTypeDescriptionRequired
volumeIDstringvolumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
true
fsTypestringfsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine
false
partitionintegerpartition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).

Format: int32
false
readOnlybooleanreadOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
false

SpinApp.spec.volumes[index].azureDisk

back to parent

azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.

NameTypeDescriptionRequired
diskNamestringdiskName is the Name of the data disk in the blob storage
true
diskURIstringdiskURI is the URI of data disk in the blob storage
true
cachingModestringcachingMode is the Host Caching mode: None, Read Only, Read Write.
false
fsTypestringfsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
false
kindstringkind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared
false
readOnlybooleanreadOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false

SpinApp.spec.volumes[index].azureFile

back to parent

azureFile represents an Azure File Service mount on the host and bind mount to the pod.

NameTypeDescriptionRequired
secretNamestringsecretName is the name of secret that contains Azure Storage Account Name and Key
true
shareNamestringshareName is the azure share Name
true
readOnlybooleanreadOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false

SpinApp.spec.volumes[index].cephfs

back to parent

cephFS represents a Ceph FS mount on the host that shares a pod’s lifetime

NameTypeDescriptionRequired
monitors[]stringmonitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
true
pathstringpath is Optional: Used as the mounted root, rather than the full Ceph tree, default is /
false
readOnlybooleanreadOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
false
secretFilestringsecretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
false
secretRefobjectsecretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
false
userstringuser is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
false

SpinApp.spec.volumes[index].cephfs.secretRef

back to parent

secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].cinder

back to parent

cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

NameTypeDescriptionRequired
volumeIDstringvolumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
true
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
false
readOnlybooleanreadOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
false
secretRefobjectsecretRef is optional: points to a secret object containing parameters used to connect to OpenStack.
false

SpinApp.spec.volumes[index].cinder.secretRef

back to parent

secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].configMap

back to parent

configMap represents a configMap that should populate this volume

NameTypeDescriptionRequired
defaultModeintegerdefaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
items[]objectitems if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
false
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanoptional specify whether the ConfigMap or its keys must be defined
false

SpinApp.spec.volumes[index].configMap.items[index]

back to parent

Maps a string key to a path within a volume.

NameTypeDescriptionRequired
keystringkey is the key to project.
true
pathstringpath is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
true
modeintegermode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false

SpinApp.spec.volumes[index].csi

back to parent

csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).

NameTypeDescriptionRequired
driverstringdriver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.
true
fsTypestringfsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.
false
nodePublishSecretRefobjectnodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.
false
readOnlybooleanreadOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
false
volumeAttributesmap[string]stringvolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.
false

SpinApp.spec.volumes[index].csi.nodePublishSecretRef

back to parent

nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].downwardAPI

back to parent

downwardAPI represents downward API about the pod that should populate this volume

NameTypeDescriptionRequired
defaultModeintegerOptional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
items[]objectItems is a list of downward API volume file
false

SpinApp.spec.volumes[index].downwardAPI.items[index]

back to parent

DownwardAPIVolumeFile represents information to create the file containing the pod field

NameTypeDescriptionRequired
pathstringRequired: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'
true
fieldRefobjectRequired: Selects a field of the pod: only annotations, labels, name and namespace are supported.
false
modeintegerOptional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
resourceFieldRefobjectSelects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
false

SpinApp.spec.volumes[index].downwardAPI.items[index].fieldRef

back to parent

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

NameTypeDescriptionRequired
fieldPathstringPath of the field to select in the specified API version.
true
apiVersionstringVersion of the schema the FieldPath is written in terms of, defaults to "v1".
false

SpinApp.spec.volumes[index].downwardAPI.items[index].resourceFieldRef

back to parent

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

NameTypeDescriptionRequired
resourcestringRequired: resource to select
true
containerNamestringContainer name: required for volumes, optional for env vars
false
divisorint or stringSpecifies the output format of the exposed resources, defaults to "1"
false

SpinApp.spec.volumes[index].emptyDir

back to parent

emptyDir represents a temporary directory that shares a pod’s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

NameTypeDescriptionRequired
mediumstringmedium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
false
sizeLimitint or stringsizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
false

SpinApp.spec.volumes[index].ephemeral

back to parent

ephemeral represents a volume that is handled by a cluster storage driver. The volume’s lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.

Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).

Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.

Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.

A pod can use both types of ephemeral volumes and persistent volumes at the same time.

NameTypeDescriptionRequired
volumeClaimTemplateobjectWill be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `-` where `` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).

An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.

This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.

Required, must not be nil.

false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate

back to parent

Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).

An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.

This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.

Required, must not be nil.

NameTypeDescriptionRequired
specobjectThe specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
true
metadataobjectMay contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec

back to parent

The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.

NameTypeDescriptionRequired
accessModes[]stringaccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
false
dataSourceobjectdataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
false
dataSourceRefobjectdataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false
resourcesobjectresources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
false
selectorobjectselector is a label query over volumes to consider for binding.
false
storageClassNamestringstorageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
false
volumeAttributesClassNamestringvolumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName; it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled.
false
volumeModestringvolumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
false
volumeNamestringvolumeName is the binding reference to the PersistentVolume backing this claim.
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec.dataSource

back to parent

dataSource field can be used to specify either:

  • An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
  • An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
NameTypeDescriptionRequired
kindstringKind is the type of resource being referenced
true
namestringName is the name of resource being referenced
true
apiGroupstringAPIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec.dataSourceRef

back to parent

dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:

  • While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
  • While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
  • While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
NameTypeDescriptionRequired
kindstringKind is the type of resource being referenced
true
namestringName is the name of resource being referenced
true
apiGroupstringAPIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
false
namespacestringNamespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec.resources

back to parent

resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

NameTypeDescriptionRequired
limitsmap[string]int or stringLimits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
requestsmap[string]int or stringRequests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec.selector

back to parent

selector is a label query over volumes to consider for binding.

NameTypeDescriptionRequired
matchExpressions[]objectmatchExpressions is a list of label selector requirements. The requirements are ANDed.
false
matchLabelsmap[string]stringmatchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
false

SpinApp.spec.volumes[index].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[index]

back to parent

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

NameTypeDescriptionRequired
keystringkey is the label key that the selector applies to.
true
operatorstringoperator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
true
values[]stringvalues is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
false

SpinApp.spec.volumes[index].fc

back to parent

fc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.

NameTypeDescriptionRequired
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine
false
lunintegerlun is Optional: FC target lun number

Format: int32
false
readOnlybooleanreadOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false
targetWWNs[]stringtargetWWNs is Optional: FC target worldwide names (WWNs)
false
wwids[]stringwwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.
false

SpinApp.spec.volumes[index].flexVolume

back to parent

flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.

NameTypeDescriptionRequired
driverstringdriver is the name of the driver to use for this volume.
true
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script.
false
optionsmap[string]stringoptions is Optional: this field holds extra command options if any.
false
readOnlybooleanreadOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false
secretRefobjectsecretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.
false

SpinApp.spec.volumes[index].flexVolume.secretRef

back to parent

secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].flocker

back to parent

flocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running

NameTypeDescriptionRequired
datasetNamestringdatasetName is the name of the dataset, stored as metadata -> name on the dataset for Flocker; this field should be considered deprecated
false
datasetUUIDstringdatasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset
false

SpinApp.spec.volumes[index].gcePersistentDisk

back to parent

gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

NameTypeDescriptionRequired
pdNamestringpdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
true
fsTypestringfsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine
false
partitionintegerpartition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

Format: int32
false
readOnlybooleanreadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
false

SpinApp.spec.volumes[index].gitRepo

back to parent

gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.

NameTypeDescriptionRequired
repositorystringrepository is the URL
true
directorystringdirectory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.
false
revisionstringrevision is the commit hash for the specified revision.
false

SpinApp.spec.volumes[index].glusterfs

back to parent

glusterfs represents a Glusterfs mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

NameTypeDescriptionRequired
endpointsstringendpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
true
pathstringpath is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
true
readOnlybooleanreadOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
false

SpinApp.spec.volumes[index].hostPath

back to parent

hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write.

NameTypeDescriptionRequired
pathstringpath of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
true
typestringtype for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
false

SpinApp.spec.volumes[index].iscsi

back to parent

iscsi represents an ISCSI Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md

NameTypeDescriptionRequired
iqnstringiqn is the target iSCSI Qualified Name.
true
lunintegerlun represents iSCSI Target Lun number.

Format: int32
true
targetPortalstringtargetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
true
chapAuthDiscoverybooleanchapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication
false
chapAuthSessionbooleanchapAuthSession defines whether support iSCSI Session CHAP authentication
false
fsTypestringfsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine
false
initiatorNamestringinitiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection.
false
iscsiInterfacestringiscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).
false
portals[]stringportals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
false
readOnlybooleanreadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.
false
secretRefobjectsecretRef is the CHAP Secret for iSCSI target and initiator authentication
false

SpinApp.spec.volumes[index].iscsi.secretRef

back to parent

secretRef is the CHAP Secret for iSCSI target and initiator authentication

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].nfs

back to parent

nfs represents an NFS mount on the host that shares a pod’s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

NameTypeDescriptionRequired
pathstringpath that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
true
serverstringserver is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
true
readOnlybooleanreadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
false

SpinApp.spec.volumes[index].persistentVolumeClaim

back to parent

persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

NameTypeDescriptionRequired
claimNamestringclaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
true
readOnlybooleanreadOnly Will force the ReadOnly setting in VolumeMounts. Default false.
false

SpinApp.spec.volumes[index].photonPersistentDisk

back to parent

photonPersistentDisk represents a PhotonController persistent disk attached and mounted on the kubelet's host machine

NameTypeDescriptionRequired
pdIDstringpdID is the ID that identifies Photon Controller persistent disk
true
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
false

SpinApp.spec.volumes[index].portworxVolume

back to parent

portworxVolume represents a Portworx volume attached and mounted on the kubelet's host machine

NameTypeDescriptionRequired
volumeIDstringvolumeID uniquely identifies a Portworx volume
true
fsTypestringfsType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified.
false
readOnlybooleanreadOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false

SpinApp.spec.volumes[index].projected

back to parent

projected items for all-in-one resources: secrets, configmaps, and downward API

NameTypeDescriptionRequired
defaultModeintegerdefaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
sources[]objectsources is the list of volume projections
false

SpinApp.spec.volumes[index].projected.sources[index]

back to parent

Projection that may be projected along with other supported volume types

NameTypeDescriptionRequired
clusterTrustBundleobjectClusterTrustBundle allows a pod to access the `.spec.trustBundle` field of ClusterTrustBundle objects in an auto-updating file.

Alpha, gated by the ClusterTrustBundleProjection feature gate.

ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector.

Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time.

false
configMapobjectconfigMap information about the configMap data to project
false
downwardAPIobjectdownwardAPI information about the downwardAPI data to project
false
secretobjectsecret information about the secret data to project
false
serviceAccountTokenobjectserviceAccountToken is information about the serviceAccountToken data to project
false

SpinApp.spec.volumes[index].projected.sources[index].clusterTrustBundle

back to parent

ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file.

Alpha, gated by the ClusterTrustBundleProjection feature gate.

ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector.

Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time.

NameTypeDescriptionRequired
pathstringRelative path from the volume root to write the bundle.
true
labelSelectorobjectSelect all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything".
false
namestringSelect a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector.
false
optionalbooleanIf true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles.
false
signerNamestringSelect all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated.
false

SpinApp.spec.volumes[index].projected.sources[index].clusterTrustBundle.labelSelector

back to parent

Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as “match nothing”. If set but empty, interpreted as “match everything”.

NameTypeDescriptionRequired
matchExpressions[]objectmatchExpressions is a list of label selector requirements. The requirements are ANDed.
false
matchLabelsmap[string]stringmatchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
false

SpinApp.spec.volumes[index].projected.sources[index].clusterTrustBundle.labelSelector.matchExpressions[index]

back to parent

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

NameTypeDescriptionRequired
keystringkey is the label key that the selector applies to.
true
operatorstringoperator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
true
values[]stringvalues is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
false

SpinApp.spec.volumes[index].projected.sources[index].configMap

back to parent

configMap information about the configMap data to project

NameTypeDescriptionRequired
items[]objectitems if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
false
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanoptional specify whether the ConfigMap or its keys must be defined
false

SpinApp.spec.volumes[index].projected.sources[index].configMap.items[index]

back to parent

Maps a string key to a path within a volume.

NameTypeDescriptionRequired
keystringkey is the key to project.
true
pathstringpath is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
true
modeintegermode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
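
As a sketch, a projected volume that exposes selected ConfigMap keys could be declared as follows; the ConfigMap name and keys are hypothetical.

volumes:
  - name: app-config
    projected:
      defaultMode: 0444
      sources:
        - configMap:
            name: example-config            # hypothetical ConfigMap
            items:
              - key: settings.json
                path: config/settings.json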

SpinApp.spec.volumes[index].projected.sources[index].downwardAPI

back to parent

downwardAPI information about the downwardAPI data to project

NameTypeDescriptionRequired
items[]objectItems is a list of DownwardAPIVolume file
false

SpinApp.spec.volumes[index].projected.sources[index].downwardAPI.items[index]

back to parent

DownwardAPIVolumeFile represents information to create the file containing the pod field

NameTypeDescriptionRequired
pathstringRequired: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'
true
fieldRefobjectRequired: Selects a field of the pod: only annotations, labels, name and namespace are supported.
false
modeintegerOptional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
resourceFieldRefobjectSelects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
false

SpinApp.spec.volumes[index].projected.sources[index].downwardAPI.items[index].fieldRef

back to parent

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

NameTypeDescriptionRequired
fieldPathstringPath of the field to select in the specified API version.
true
apiVersionstringVersion of the schema the FieldPath is written in terms of, defaults to "v1".
false

SpinApp.spec.volumes[index].projected.sources[index].downwardAPI.items[index].resourceFieldRef

back to parent

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

NameTypeDescriptionRequired
resourcestringRequired: resource to select
true
containerNamestringContainer name: required for volumes, optional for env vars
false
divisorint or stringSpecifies the output format of the exposed resources, defaults to "1"
false

SpinApp.spec.volumes[index].projected.sources[index].secret

back to parent

secret information about the secret data to project

NameTypeDescriptionRequired
items[]objectitems if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
false
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false
optionalbooleanoptional field specify whether the Secret or its key must be defined
false

SpinApp.spec.volumes[index].projected.sources[index].secret.items[index]

back to parent

Maps a string key to a path within a volume.

NameTypeDescriptionRequired
keystringkey is the key to project.
true
pathstringpath is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
true
modeintegermode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false

SpinApp.spec.volumes[index].projected.sources[index].serviceAccountToken

back to parent

serviceAccountToken is information about the serviceAccountToken data to project

NameTypeDescriptionRequired
pathstringpath is the path relative to the mount point of the file to project the token into.
true
audiencestringaudience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.
false
expirationSecondsintegerexpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.

Format: int64
false
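
A serviceAccountToken projection source, for example, could request a one-hour token for a specific audience (the audience value is hypothetical):

- serviceAccountToken:
    path: token
    audience: vault            # hypothetical audience
    expirationSeconds: 3600    # must be at least 600 (10 minutes)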

SpinApp.spec.volumes[index].quobyte

back to parent

quobyte represents a Quobyte mount on the host that shares a pod’s lifetime

NameTypeDescriptionRequired
registrystringregistry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes
true
volumestringvolume is a string that references an already created Quobyte volume by name.
true
groupstringgroup to map volume access to. Default is no group
false
readOnlybooleanreadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
false
tenantstringtenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin
false
userstringuser to map volume access to. Defaults to serviceaccount user
false

SpinApp.spec.volumes[index].rbd

back to parent

rbd represents a Rados Block Device mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

NameTypeDescriptionRequired
imagestringimage is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
true
monitors[]stringmonitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
true
fsTypestringfsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine
false
keyringstringkeyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
false
poolstringpool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
false
readOnlybooleanreadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
false
secretRefobjectsecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
false
userstringuser is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
false

SpinApp.spec.volumes[index].rbd.secretRef

back to parent

secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].scaleIO

back to parent

scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

NameTypeDescriptionRequired
gatewaystringgateway is the host address of the ScaleIO API Gateway.
true
secretRefobjectsecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.
true
systemstringsystem is the name of the storage system as configured in ScaleIO.
true
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs".
false
protectionDomainstringprotectionDomain is the name of the ScaleIO Protection Domain for the configured storage.
false
readOnlybooleanreadOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false
sslEnabledbooleansslEnabled Flag enable/disable SSL communication with Gateway, default false
false
storageModestringstorageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.
false
storagePoolstringstoragePool is the ScaleIO Storage Pool associated with the protection domain.
false
volumeNamestringvolumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.
false

SpinApp.spec.volumes[index].scaleIO.secretRef

back to parent

secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].secret

back to parent

secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret

NameTypeDescriptionRequired
defaultModeintegerdefaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
items[]objectitems If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
false
optionalbooleanoptional field specify whether the Secret or its keys must be defined
false
secretNamestringsecretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
false

SpinApp.spec.volumes[index].secret.items[index]

back to parent

Maps a string key to a path within a volume.

NameTypeDescriptionRequired
keystringkey is the key to project.
true
pathstringpath is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
true
modeintegermode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

Format: int32
false
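
For instance, a secret volume that projects a single key to a custom path might be declared as below; the Secret name and key are hypothetical.

volumes:
  - name: tls-certs
    secret:
      secretName: app-tls      # hypothetical Secret in the pod's namespace
      defaultMode: 0400
      items:
        - key: tls.crt
          path: certs/tls.crt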

SpinApp.spec.volumes[index].storageos

back to parent

storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.

NameTypeDescriptionRequired
fsTypestringfsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
false
readOnlybooleanreadOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
false
secretRefobjectsecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.
false
volumeNamestringvolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
false
volumeNamespacestringvolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.
false

SpinApp.spec.volumes[index].storageos.secretRef

back to parent

secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

NameTypeDescriptionRequired
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
false

SpinApp.spec.volumes[index].vsphereVolume

back to parent

vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine

NameTypeDescriptionRequired
volumePathstringvolumePath is the path that identifies vSphere volume vmdk
true
fsTypestringfsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
false
storagePolicyIDstringstoragePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.
false
storagePolicyNamestringstoragePolicyName is the storage Policy Based Management (SPBM) profile name.
false

SpinApp.status

back to parent

SpinAppStatus defines the observed state of SpinApp

NameTypeDescriptionRequired
readyReplicasintegerRepresents the current number of active replicas on the application deployment.

Format: int32
true
activeSchedulerstringActiveScheduler is the name of the scheduler that is currently scheduling this SpinApp.
false
conditions[]objectRepresents the observations of a SpinApp's current state. SpinApp.status.conditions.type are: "Available" and "Progressing" SpinApp.status.conditions.status are one of True, False, Unknown. SpinApp.status.conditions.reason the value should be a CamelCase string and producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. SpinApp.status.conditions.Message is a human readable message indicating details about the transition. For further information see: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
false

SpinApp.status.conditions[index]

back to parent

Condition contains details for one aspect of the current state of this API Resource.

This struct is intended for direct use as an array at the field path .status.conditions. For example,

type FooStatus struct{
    // Represents the observations of a foo's current state.
    // Known .status.conditions.type are: "Available", "Progressing", and "Degraded"
    // +patchMergeKey=type
    // +patchStrategy=merge
    // +listType=map
    // +listMapKey=type
    Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`


    // other fields
}
NameTypeDescriptionRequired
lastTransitionTimestringlastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.

Format: date-time
true
messagestringmessage is a human readable message indicating details about the transition. This may be an empty string.
true
reasonstringreason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.
true
statusenumstatus of the condition, one of True, False, Unknown.

Enum: True, False, Unknown
true
typestringtype of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
true
observedGenerationintegerobservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.

Format: int64
Minimum: 0
false
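
A populated status block reported by the operator might look roughly like the following; the reason, message, and scheduler values are illustrative and depend on the controller.

status:
  readyReplicas: 2
  activeScheduler: default                 # illustrative value
  conditions:
    - type: Available
      status: "True"
      reason: MinimumReplicasAvailable     # illustrative reason
      message: Deployment has minimum availability.
      lastTransitionTime: "2024-01-01T00:00:00Z"
      observedGeneration: 1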

3.6.2 - SpinAppExecutor

Custom Resource Definition (CRD) reference for SpinAppExecutor

Resource Types:

SpinAppExecutor

SpinAppExecutor is the Schema for the spinappexecutors API

NameTypeDescriptionRequired
apiVersionstringcore.spinoperator.dev/v1alpha1true
kindstringSpinAppExecutortrue
metadataobjectRefer to the Kubernetes API documentation for the fields of the `metadata` field.true
specobjectSpinAppExecutorSpec defines the desired state of SpinAppExecutor
false
statusobjectSpinAppExecutorStatus defines the observed state of SpinAppExecutor
false

SpinAppExecutor.spec

back to parent

SpinAppExecutorSpec defines the desired state of SpinAppExecutor

NameTypeDescriptionRequired
createDeploymentbooleanCreateDeployment specifies whether the Executor wants the SpinKube operator to create a deployment for the application or if it will be realized externally.
true
deploymentConfigobjectDeploymentConfig specifies how the deployment should be configured when createDeployment is true.
false

SpinAppExecutor.spec.deploymentConfig

back to parent

DeploymentConfig specifies how the deployment should be configured when createDeployment is true.

NameTypeDescriptionRequired
runtimeClassNamestringRuntimeClassName is the runtime class name that should be used by pods created as part of a deployment.
true
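
Putting the fields together, a typical executor resource might look like the sketch below; the runtime class name assumes the containerd shim for Spin has been installed on the target nodes.

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2   # assumes this RuntimeClass exists in the cluster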

3.7 - Contributing

Contributing to Spin Operator

We are delighted that you are interested in making spin-operator better! Thank you! This document will guide you through making your first contribution to the project. We welcome and appreciate contributions of all types - opening issues, fixing typos, adding examples, one-liner code fixes, tests, or complete features.

First, any contribution and interaction on any Fermyon project MUST follow our Code of Conduct. Thank you for being part of an inclusive and open community!

If you plan on contributing anything complex, please go through the open issues and PR queue first to make sure someone else has not started working on it. If it doesn’t exist already, please open an issue so you have a chance to get feedback from the community and the maintainers before you start working on your feature.

Making Code Contributions to spin-operator

The following guide is intended to make sure your contribution can get merged as soon as possible.

If you have nix installed, all dependencies aside from docker will be available after running nix develop.

Otherwise you’ll need to install:

  • go version v1.22.0+
  • docker version 17.03+.
  • kubectl version v1.11.3+.
  • Access to a Kubernetes v1.11.3+ cluster (microk8s or k3s are commonly used by the maintainers)
  • make
  • please ensure your commits are configured with a GPG signature as well as a sign-off message (git commit -S -s)

Once you have set up the prerequisites and identified the contribution you want to make to spin-operator, make sure you can correctly build the project:

# clone the repository
$ git clone https://github.com/spinkube/spin-operator && cd spin-operator
# add a new remote pointing to your fork of the project
$ git remote add fork https://github.com/<your-username>/spin-operator
# create a new branch for your work
$ git checkout -b <your-branch>

# build spin-operator
$ make

# make sure compilation is successful
$ ./bin/manager --help

# run the tests and make sure they pass
$ make test

Now you should be ready to start making your contribution. To familiarize yourself with the spin-operator project, please read the README. Since most of spin-operator is written in Go, we try to follow the common Go coding conventions. If applicable, add unit or integration tests to ensure your contribution is correct.

NOTE: Run make --help for more information on all potential make targets.

Before You Commit

  • Run the lint task (make lint or make lint-fix)
  • Build the project and run the tests (make test)

spin-operator enforces lints and tests as part of continuous integration - running them locally will save you a round-trip on your pull request!

If everything works locally, you’re ready to commit your changes.

Committing and Pushing Your Changes

We require commits to be signed both with an email address and with a GPG signature.

Because of the way GitHub runs enforcement, the GPG signature isn’t checked until after all tests have run. Be sure to GPG sign up front, as it can be a bit frustrating to wait for all the tests and then get blocked on the signature!

$ git commit -S -s -m "<your commit message>"

Some contributors like to follow the Conventional Commits convention for commit messages.

We try to only keep useful changes as separate commits - if you prefer to commit often, please clean up the commit history before opening a pull request.

Once you are happy with your changes you can push the branch to your fork:

# "fork" is the name of the git remote pointing to your fork
$ git push fork

Now you are ready to create a pull request. Thank you for your contribution!

4 - containerd-shim-spin

The Containerd Shim Spin is a project that integrates WebAssembly (Wasm) and WASI workloads into Kubernetes.

The Containerd Shim Spin repository, or “containerd-shim-spin,” provides shim implementations for running WebAssembly (Wasm) / WebAssembly System Interface (WASI) workloads using runwasi as a library, whereby workloads built using the Spin framework can function similarly to container workloads in a Kubernetes environment.

The containerd-shim-spin is specifically designed to work with the Spin framework (a developer tool for building and running serverless Wasm applications). The shim ensures that Wasm workloads can be managed effectively within a Kubernetes environment, leveraging containerd’s capabilities.

In a Kubernetes cluster, specific nodes can be bootstrapped with Wasm runtimes and labeled accordingly to facilitate the scheduling of Wasm workloads. RuntimeClasses in Kubernetes are used to schedule Pods to specific nodes and target specific runtimes. By defining a RuntimeClass with the appropriate nodeSelector and handler, Kubernetes can direct Wasm workloads to nodes equipped with the necessary Wasm runtimes and ensure they are executed with the correct runtime handler.
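
As a sketch, such a RuntimeClass could look like the following; the handler name matches the containerd runtime handler provided by the shim, and the node label used in the nodeSelector is illustrative.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin                   # containerd runtime handler provided by the shim
scheduling:
  nodeSelector:
    spin-enabled: "true"        # illustrative label applied to Wasm-capable nodes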

Overall, the Containerd Shim Spin represents a significant advancement in integrating Wasm workloads into Kubernetes clusters, enhancing the versatility and capabilities of container orchestration.

4.1 - Concepts

The Containerd Shim Spin is a project that integrates WebAssembly (Wasm) and WASI workloads into Kubernetes, allowing these workloads to be managed and run as regular container workloads through the implementation of specialized containerd shims.

4.2 - Contributing

Contributing to Containerd Shim Spin

For information about contributing to the Containerd Shim Spin project visit the containerd-shim-spin GitHub repository.

5 - runtime-class-manager

Enhance Kubernetes with Runtime-Class-Manager; efficient Containerd shim handling.

The Runtime Class Manager, also known as the Containerd Shim Lifecycle Operator, is designed to automate and manage the lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation, update, removal, and configuration of shims, reducing manual errors and improving reliability in managing WebAssembly (Wasm) workloads and other containerd extensions.

The Runtime Class Manager provides a robust and production-ready solution for installing, updating, and removing shims, as well as managing node labels and runtime classes in a Kubernetes environment.

By automating these processes, the runtime-class-manager enhances reliability, reduces human error, and simplifies the deployment and management of containerd shims in Kubernetes clusters.

5.1 - Concepts

Enhance Kubernetes with Runtime-Class-Manager; efficient Containerd shim handling.

5.2 - Contributing

Contributing to Runtime Class Manager

For information about contributing to the Runtime Class Manager project, visit the runtime-class-manager GitHub repository.

6 - spin-plugin-kube

Enhance Kubernetes by enabling the execution of Wasm modules directly within a Kubernetes cluster

The Kubernetes plugin for Spin is designed to enhance Spin by enabling the execution of Wasm modules directly within a Kubernetes cluster. Specifically, it is a tool that integrates Kubernetes with the Spin command-line interface. This plugin works by integrating with containerd shims, allowing Kubernetes to manage and run Wasm workloads in a way similar to traditional container workloads.

The Kubernetes plugin for Spin allows developers to use the Spin command-line interface for deploying Spin applications; it provides a seamless workflow for building, pushing, deploying, and managing Spin applications in a Kubernetes environment. It includes commands for scaffolding new components as Kubernetes manifests, and deploying and retrieving information about Spin applications running in Kubernetes. This plugin is an essential tool for developers looking to streamline their Spin application deployment on Kubernetes platforms.

6.1 - Installation

Learn how to install the kube plugin for spin

The kube plugin for spin (the Spin CLI) provides a first-class experience for working with Spin apps in the context of Kubernetes.

Prerequisites

Ensure the necessary prerequisites are installed.

For this tutorial in particular, you will need the spin CLI installed.

Install the plugin

Before you install the kube plugin for spin, you should fetch the list of latest Spin plugins from the spin-plugins repository:

# Update the list of latest Spin plugins
spin plugins update
Plugin information updated successfully

Go ahead and install the kube plugin using spin plugin install:

# Install the latest kube plugin
spin plugins install kube

At this point you should see the kube plugin when querying the list of installed Spin plugins:

# List all installed Spin plugins
spin plugins list --installed

cloud 0.7.0 [installed]
cloud-gpu 0.1.0 [installed]
kube 0.1.0 [installed]
pluginify 0.6.0 [installed]

Compiling from source

As an alternative to the plugin manager, you can download and manually install the plugin. Manual installation is commonly used to test in-flight changes. For most users, installing the plugin using Spin’s plugin manager is the better option.

Please refer to the contributing guide for instructions on how to compile the plugin from source.

6.2 - Tutorials

This section consists of tutorials in the context of the Kubernetes plugin for Spin

6.2.1 - Autoscaler Support

A tutorial to show how autoscaler support can be enabled via the spin kube command

Horizontal autoscaling support

In Kubernetes, a horizontal autoscaler automatically updates a workload resource (such as a Deployment or StatefulSet) with the aim of automatically scaling the workload to match demand.

Horizontal scaling means that the response to increased load is to deploy more resources. This is different from vertical scaling, which for Kubernetes would mean assigning more memory or CPU to the resources that are already running for the workload.

If the load decreases, and the number of resources is above the configured minimum, a horizontal autoscaler would instruct the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

The Kubernetes plugin for Spin includes autoscaler support, which allows you to tell Kubernetes when to scale your Spin application up or down based on demand. This tutorial will show you how to enable autoscaler support via the spin kube scaffold command.

Prerequisites

Regardless of what type of autoscaling is used, you must determine how you want your application to scale by answering the following questions:

  1. Do you want your application to scale based upon system metrics (CPU and memory utilization) or based upon events (like messages in a queue or rows in a database)?
  2. If your application scales based on system metrics, how much CPU and memory does each instance of your application need to operate?

Choosing an autoscaler

The Kubernetes plugin for Spin supports two types of autoscalers: Horizontal Pod Autoscaler (HPA) and Kubernetes Event-driven Autoscaling (KEDA). The choice of autoscaler depends on the requirements of your application.

Horizontal Pod Autoscaling (HPA)

Horizontal Pod Autoscaler (HPA) scales Kubernetes pods based on CPU or memory utilization. This HPA scaling can be implemented via the Kubernetes plugin for Spin by setting the --autoscaler hpa option. This page deals exclusively with autoscaling via the Kubernetes plugin for Spin.

spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi

Horizontal Pod Autoscaling is built into Kubernetes and does not require the installation of a third-party runtime. For more general information about scaling with HPA, please see the Spin Operator’s Scaling with HPA section.
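Once the scaffolded manifests have been applied to your cluster, you can verify the autoscaler with standard kubectl commands (a generic check; the HPA name and namespace depend on your deployment):

# List HorizontalPodAutoscalers and watch current vs. target utilization
kubectl get hpa --watch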

Kubernetes Event-driven Autoscaling (KEDA)

Kubernetes Event-driven Autoscaling (KEDA) is an extension of Horizontal Pod Autoscaling (HPA). In addition to scaling based on CPU or memory utilization, KEDA allows for scaling based on events from various sources, such as messages in a queue or the number of rows in a database.

KEDA can be enabled by setting the --autoscaler keda option:

spin kube scaffold --from user-name/app-name:latest --autoscaler keda --cpu-limit 100m --memory-limit 128Mi --replicas 1 --max-replicas 10

Using KEDA to autoscale your Spin applications requires the installation of the KEDA runtime into your Kubernetes cluster. For more information about scaling with KEDA in general, please see the Spin Operator’s Scaling with KEDA section.
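One common way to install the KEDA runtime is via its official Helm chart, sketched below; see the Scaling with KEDA section for the exact steps used in this documentation:

# Add the KEDA Helm repository and install KEDA into its own namespace
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace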

Setting min/max replicas

The --replicas and --max-replicas options can be used to set the minimum and maximum number of replicas for your application. The --replicas option defaults to 2 and the --max-replicas option defaults to 3.

spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --replicas 1 --max-replicas 10

Setting CPU/memory limits and CPU/memory requests

If the node where an application is running has enough of a resource available, it’s possible (and allowed) for that application to use more of that resource than its resource request specifies. However, an application is not allowed to use more than its resource limit.

For example, if you set a memory request of 256 MiB, and that application is scheduled to a node with 8 GiB of memory and no other applications, then the application can try to use more RAM.

If you set a memory limit of 4 GiB for that application, the WebAssembly runtime will enforce that limit. The runtime prevents the application from using more than the configured resource limit. For example, when a process in the application tries to consume more than the allowed amount of memory, the WebAssembly runtime terminates the process that attempted the allocation with an out-of-memory (OOM) error.

The --cpu-limit, --memory-limit, --cpu-request, and --memory-request options can be used to set the CPU and memory limits and requests for your application. The --cpu-limit and --memory-limit options are required, while the --cpu-request and --memory-request options are optional.

It is important to note the following:

  • CPU/memory requests are optional and will default to the CPU/memory limit if not set.
  • CPU/memory requests must be lower than their respective CPU/memory limit.
  • If you specify a limit for a resource, but do not specify any request, and no admission-time mechanism has applied a default request for that resource, then Kubernetes copies the limit you specified and uses it as the requested value for the resource.

spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --cpu-request 50m --memory-request 64Mi
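The values passed above end up in the SpinApp’s resources block, along the lines of the following sketch (compare the SpinApp example in the Glossary of Terms; the exact manifest produced by spin kube scaffold may contain additional fields):

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi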

Setting target utilization

Target utilization is the percentage of the resource that you want to be used before the autoscaler kicks in. The autoscaler will check the current resource utilization of your application against the target utilization and scale your application up or down based on the result.

Target utilization is based on the average resource utilization across all instances of your application. For example, if you have 3 instances of your application, the target CPU utilization is 50%, and each application is averaging 80% CPU utilization, the autoscaler will continue to increase the number of instances until all instances are averaging 50% CPU utilization.
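As a rough illustration of the arithmetic, the standard HPA algorithm derives the desired replica count from the ratio of current to target utilization and rounds up:

desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
                = ceil(3 * 80 / 50)
                = 5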

To scale based on CPU utilization, use the --autoscaler-target-cpu-utilization option:

spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --autoscaler-target-cpu-utilization 50

To scale based on memory utilization, use the --autoscaler-target-memory-utilization option:

spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --autoscaler-target-memory-utilization 50

6.3 - CLI Reference

Spin Plugin kube CLI Reference

spin kube completion

spin kube completion --help
Generate the autocompletion script for kube for the specified shell.
See each sub-command's help for details on how to use the generated script.

Usage:
  kube completion [command]

Available Commands:
  bash        Generate the autocompletion script for bash
  fish        Generate the autocompletion script for fish
  powershell  Generate the autocompletion script for powershell
  zsh         Generate the autocompletion script for zsh

Flags:
  -h, --help   help for completion

spin kube completion bash

spin kube completion bash --help
Generate the autocompletion script for the bash shell.

This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

	source <(kube completion bash)

To load completions for every new session, execute once:

#### Linux:

	kube completion bash > /etc/bash_completion.d/kube

#### macOS:

	kube completion bash > $(brew --prefix)/etc/bash_completion.d/kube

You will need to start a new shell for this setup to take effect.

Usage:
  kube completion bash

Flags:
  -h, --help              help for bash
      --no-descriptions   disable completion descriptions

spin kube completion fish

spin kube completion fish --help
Generate the autocompletion script for the fish shell.

To load completions in your current shell session:

	kube completion fish | source

To load completions for every new session, execute once:

	kube completion fish > ~/.config/fish/completions/kube.fish

You will need to start a new shell for this setup to take effect.

Usage:
  kube completion fish [flags]

Flags:
  -h, --help              help for fish
      --no-descriptions   disable completion descriptions

spin kube completion powershell

spin kube completion powershell --help
Generate the autocompletion script for powershell.

To load completions in your current shell session:

	kube completion powershell | Out-String | Invoke-Expression

To load completions for every new session, add the output of the above command
to your powershell profile.

Usage:
  kube completion powershell [flags]

Flags:
  -h, --help              help for powershell
      --no-descriptions   disable completion descriptions

spin kube completion zsh

spin kube completion zsh --help
Generate the autocompletion script for the zsh shell.

If shell completion is not already enabled in your environment you will need
to enable it.  You can execute the following once:

	echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions in your current shell session:

	source <(kube completion zsh)

To load completions for every new session, execute once:

#### Linux:

	kube completion zsh > "${fpath[1]}/_kube"

#### macOS:

	kube completion zsh > $(brew --prefix)/share/zsh/site-functions/_kube

You will need to start a new shell for this setup to take effect.

Usage:
  kube completion zsh [flags]

Flags:
  -h, --help              help for zsh
      --no-descriptions   disable completion descriptions

spin kube help

spin kube --help
Manage apps running on Kubernetes

Usage:
  kube [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  help        Help about any command
  scaffold    scaffold SpinApp manifest
  version     Display version information

Flags:
  -h, --help                help for kube
      --kubeconfig string   the path to the kubeconfig file
  -n, --namespace string    the namespace scope
  -v, --version             version for kube

spin kube scaffold

spin kube scaffold --help
scaffold SpinApp manifest

Usage:
  kube scaffold [flags]

Flags:
      --autoscaler string                            The autoscaler to use. Valid values are 'hpa' and 'keda'
      --autoscaler-target-cpu-utilization int32      The target CPU utilization percentage to maintain across all pods (default 60)
      --autoscaler-target-memory-utilization int32   The target memory utilization percentage to maintain across all pods (default 60)
      --cpu-limit string                             The maximum amount of CPU resource units the Spin application is allowed to use
      --cpu-request string                           The amount of CPU resource units requested by the Spin application. Used to determine which node the Spin application will run on
      --executor string                              The executor used to run the Spin application (default "containerd-shim-spin")
  -f, --from string                                  Reference in the registry of the Spin application
  -h, --help                                         help for scaffold
  -s, --image-pull-secret strings                    secrets in the same namespace to use for pulling the image
      --max-replicas int32                           Maximum number of replicas for the spin app. Autoscaling must be enabled to use this flag (default 3)
      --memory-limit string                          The maximum amount of memory the Spin application is allowed to use
      --memory-request string                        The amount of memory requested by the Spin application. Used to determine which node the Spin application will run on
  -o, --out string                                   path to file to write manifest yaml
  -r, --replicas int32                               Minimum number of replicas for the spin app (default 2)
  -c, --runtime-config-file string                   path to runtime config file

spin kube version

spin kube version

6.4 - Concepts

The Spin plugin for Kubernetes (spin kube) is a Spin plugin for interacting with Kubernetes; it provides commands such as spin kube scaffold for generating SpinApp manifests from existing Spin applications.

6.5 - Contributing

Contributing to the Spin plugin for Kubernetes

Prerequisites

To compile the plugin from source, you will need

Compiling the plugin from source

Fetch the source code from GitHub:

git clone git@github.com:spinkube/spin-plugin-kube.git
cd spin-plugin-kube

Compile and install the plugin:

make
make install

At this point you should see the kube plugin when querying the list of installed Spin plugins:

# List all installed Spin plugins
spin plugins list --installed

cloud 0.7.0 [installed]
cloud-gpu 0.1.0 [installed]
kube 0.1.0 [installed]
pluginify 0.6.0 [installed]

Contributing changes

Once your changes are made and tested locally, please see the guidelines for contributing to Spin for more details about committing and pushing your changes.

7 - Compatibility

Spin Operator Compatibility

See the following list of compatible Kubernetes distributions and platforms for running the Spin Operator:

Disclaimer: Please note that this is a working list of compatible Kubernetes distributions and platforms. For managed Kubernetes services, it’s important to be aware that cloud providers may choose to discontinue support for specific dependencies, such as container runtimes. While we strive to maintain the accuracy of this documentation, it is ultimately your responsibility to verify with your Kubernetes provider whether the required dependencies are still supported.

How to validate Spin Operator Compatibility

If you would like to validate Spin Operator’s compatibility with a new specific Kubernetes distribution or platform or simply test one of the platforms listed above yourself, follow these steps for validation:

  1. Install the Spin Operator: Begin by installing the Spin Operator within the Kubernetes cluster. This involves deploying the necessary dependencies and the Spin Operator itself. (See Installing with Helm)

  2. Create, Package, and Deploy a Spin App: Proceed by creating a Spin App, packaging it, and successfully deploying it within the Kubernetes environment. (See Package and Deploy Spin Apps)

  3. Invoke the Spin App: Once the Spin App is deployed, ensure at least one request was successfully served by the Spin App.
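Putting the three steps together, a minimal validation run might look like the following sketch; the image reference, app name, and port are illustrative placeholders, so consult the linked guides for the exact commands:

# 1. Install the Spin Operator and its dependencies (see Installing with Helm)

# 2. Scaffold and deploy a Spin App from an existing image
spin kube scaffold --from ttl.sh/my-spin-app:0.1.0 | kubectl apply -f -

# 3. Invoke the Spin App by port-forwarding its Service (run in a separate terminal) and sending a request
kubectl port-forward svc/my-spin-app 8080:80
curl http://localhost:8080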

Container Runtime Constraints

The Spin Operator requires the target nodes that would run Spin applications to support containerd version 1.6.26+ or 1.7.7+.

Use the kubectl get nodes -o wide command to see which container runtime is installed per node:

# Inspect container runtimes per node
kubectl get nodes -o wide
NAME                    STATUS   VERSION   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
generalnp-vmss000000    Ready    v1.27.9   Ubuntu 22.04.4 LTS   5.15.0-1056-azure   containerd://1.7.7-1
generalnp-vmss000001    Ready    v1.27.9   Ubuntu 22.04.4 LTS   5.15.0-1056-azure   containerd://1.7.7-1
generalnp-vmss000002    Ready    v1.27.9   Ubuntu 22.04.4 LTS   5.15.0-1056-azure   containerd://1.7.7-1

8 - Integrations

A high level overview of the SpinKube integrations

SpinKube Integrations

KEDA

Kubernetes Event-Driven Autoscaling (KEDA) provides event-driven autoscaling for Kubernetes workloads. It allows Kubernetes to automatically scale applications in response to external events such as messages in a queue, enabling more efficient resource utilization and responsive scaling based on actual demand, rather than static metrics. KEDA serves as a bridge between Kubernetes and various event sources, making it easier to scale applications dynamically in a cloud-native environment. If you would like to see how SpinKube integrates with KEDA, please read the “Scaling With KEDA” tutorial which deploys a SpinApp and the KEDA ScaledObject instance onto a cluster. The tutorial also uses Bombardier to generate traffic to test how well KEDA scales our SpinApp.

Rancher Desktop

The release of Rancher Desktop 1.13.0 comes with basic support for running WebAssembly (Wasm) containers and deploying them to Kubernetes. Rancher Desktop, by SUSE, is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop. If you would like to see how SpinKube integrates with Rancher Desktop, please read the “Integrating With Rancher Desktop” tutorial, which walks through the steps of installing the necessary components for SpinKube (including the CertManager for SSL, CRDs, and the KWasm runtime class manager using Helm charts). The tutorial then demonstrates how to create a simple Spin JavaScript application and deploy it within Rancher Desktop’s local cluster.

9 - Contributing to Docs

How to contribute to the SpinKube documentation

Contributing to SpinKube documentation

If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you:

  1. Click Edit this page in the top right-hand corner of the page.
  2. If you don’t already have an up-to-date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up-to-date version of the project to edit.

Previewing your changes locally

If you want to run your own local Hugo server to preview your changes as you work:

  1. Install Go
  2. Install Hugo
  3. Clone this documentation repository (git clone git@github.com:spinkube/documentation.git)
  4. Run hugo server in this site’s root (developer) directory. By default, the documentation will be available to view at http://localhost:1313/
  5. Continue with the usual GitHub workflow to edit files, commit them, push the changes up to your fork, and create a pull request
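Condensed into commands, the preview workflow from the steps above looks like this:

git clone git@github.com:spinkube/documentation.git
cd documentation
# run Hugo from the site's root (developer) directory, as noted in step 4
hugo server
# the documentation is now served at http://localhost:1313/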

If you’ve found a problem in the docs, but you’re not sure how to fix it yourself, please create an issue in the documentation repository. You can also create an issue about a specific page by clicking the Create Issue button in the top right-hand corner of the page.

Contributing to the Containerd Shim Spin project

For information about contributing to the Containerd Shim Spin project visit the containerd-shim-spin GitHub repository.

Contributing to the Runtime Class Manager project

For information about contributing to the Runtime Class Manager project, visit the runtime-class-manager GitHub repository.

Contributing to the Spin Operator project

For information about contributing to the Spin Operator project visit the Spin Operator Contributing page.

Contributing to the spin kube Plugin project

For information about contributing to the spin kube Plugin project visit the spin-plugin-kube GitHub repository.

10 - Glossary of Terms

Glossary of Terms

The following glossary of terms is in the context of deploying, scaling, automating and managing Spin applications in containerized environments.

Chart

A Helm chart is a package format used in Kubernetes for deploying applications. It contains all the necessary files, configurations, and dependencies required to deploy and manage an application on a Kubernetes cluster. Helm charts provide a convenient way to define, install, and upgrade complex applications in a consistent and reproducible manner.

Cluster

A Kubernetes cluster is a group of nodes (servers) that work together to run containerized applications. It consists of a control plane and worker nodes. The control plane manages and orchestrates the cluster, while the worker nodes host the containers. The control plane includes components like the API server, scheduler, and controller manager. The worker nodes run the containers using container runtime engines like Docker. Kubernetes clusters provide scalability, high availability, and automated management of containerized applications in a distributed environment.

Container Runtime

A container runtime is a software that manages the execution of containers. It is responsible for starting, stopping, and managing the lifecycle of containers. Container runtimes interact with the underlying operating system to provide isolation and resource management for containers. They also handle networking, storage, and security aspects of containerization. Popular container runtimes include Docker, containerd, and CRI-O. They enable the deployment and management of containerized applications, allowing developers to package their applications with all the necessary dependencies and run them consistently across different environments.

Controller

A Controller is a core component responsible for managing the desired state of a specific resource or set of resources. It continuously monitors the cluster and takes actions to ensure that the actual state matches the desired state. Controllers handle tasks such as creating, updating, and deleting resources, as well as reconciling any discrepancies between the current and desired states. They provide automation and self-healing capabilities, ensuring that the cluster remains in the desired state even in the presence of failures or changes. Controllers play a crucial role in maintaining the stability and reliability of Kubernetes deployments.

Custom Resource (CR)

In the context of Kubernetes, a Custom Resource (CR) is an extension mechanism that allows users to define and manage their own API resources. It enables the creation of new resource types that are specific to an application or workload. Custom Resources are defined using Custom Resource Definitions (CRDs) and can be treated and managed like any other Kubernetes resource. They provide a way to extend the Kubernetes API and enable the development of custom controllers to handle the lifecycle and behavior of these resources. Custom Resources allow for greater flexibility and customization in Kubernetes deployments.

Custom Resource Definition (CRD)

A Custom Resource Definition (CRD) is an extension mechanism that allows users to define their own custom resources. It enables the creation of new resource types with specific schemas and behaviors. CRDs define the structure and validation rules for custom resources, allowing users to store and manage additional information beyond the built-in Kubernetes resources. Once a CRD is created, instances of the custom resource can be created, updated, and deleted using the Kubernetes API. CRDs provide a way to extend Kubernetes and tailor it to specific application requirements.

SpinApp CRD

The SpinApp CRD is a Kubernetes resource that extends the functionality of the Kubernetes API to support Spin applications. It defines a custom resource called “SpinApp” that encapsulates all the necessary information to deploy and manage a Spin application within a Kubernetes cluster. The SpinApp CRD consists of several key fields that define the desired state of a Spin application.

Here’s an example of a SpinApp custom resource that uses the SpinApp CRD schema:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.14.1"
  replicas: 1
  runtime: "containerd-shim-spin"

SpinApp CRDs are kept separate from Helm. If using Helm, CustomResourceDefinition (CRD) resources must be installed prior to installing the Helm chart.
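In practice this means applying the CRD manifest published with the Spin Operator release before running helm install; the file name below is a placeholder for the CRD manifest that ships with the release you are installing:

# Install the SpinApp CRDs first ...
kubectl apply -f spin-operator.crds.yaml
# ... then install the Spin Operator Helm chart (see Installing with Helm)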

You can modify the example above to customize the SpinApp via a YAML file. Here’s an updated YAML file with additional customization options:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: 'ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.14.1'
  replicas: 3
  imagePullSecrets:
    - name: spin-image-secret
  serviceAnnotations:
    key: value
  podAnnotations:
    key: value
  resources:
    limits:
      cpu: '1'
      memory: 512Mi
    requests:
      cpu: '0.5'
      memory: 256Mi
  env:
    - name: ENV_VAR1
      value: value1
    - name: ENV_VAR2
      value: value2
  # Add any other user-defined values here

In this updated example, we have added additional customization options:

  • imagePullSecrets: An optional field that lets you reference one or more Kubernetes secrets containing credentials for pulling images from a private registry.
  • serviceAnnotations: An optional field that lets you set specific annotations on the underlying service that is created.
  • podAnnotations: An optional field that lets you set specific annotations on the underlying pods that are created.
  • resources: You can specify resource limits and requests for CPU and memory. Adjust the values according to your application’s resource requirements.
  • env: You can define environment variables for your SpinApp. Add as many environment variables as needed, providing the name and value for each.

To apply the changes, save the YAML file (e.g. updated-spinapp.yaml) and then apply it to your Kubernetes cluster using the following command:

kubectl apply -f updated-spinapp.yaml

Helm

Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It uses charts, which are pre-configured templates, to define the structure and configuration of an application. Helm allows users to easily install, upgrade, and uninstall applications on a Kubernetes cluster. It also supports versioning, dependency management, and customization of deployments. Helm charts can be shared and reused, making it a convenient tool for managing complex applications in a Kubernetes environment.
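A typical Helm workflow with the Spin Operator chart looks roughly like the sketch below; the chart reference is illustrative, so see Installing with Helm for the exact, versioned commands:

# Install a chart as a named release
helm install spin-operator oci://ghcr.io/spinkube/charts/spin-operator --namespace spin-operator --create-namespace

# Upgrade the release to a newer chart version
helm upgrade spin-operator oci://ghcr.io/spinkube/charts/spin-operator --namespace spin-operator

# Remove the release
helm uninstall spin-operator --namespace spin-operator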

Image

In the context of Kubernetes, an image refers to a packaged and executable software artifact that contains all the necessary dependencies and configurations to run a specific application or service. It is typically built from a Dockerfile and stored in a container registry. Images are used as the basis for creating containers, which are lightweight and isolated runtime environments. Kubernetes pulls the required images from the registry and deploys them onto the cluster’s worker nodes. Images play a crucial role in ensuring consistent and reproducible deployments of applications in Kubernetes.

Kubernetes

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a framework for running and coordinating containers across a cluster of nodes. Kubernetes abstracts the underlying infrastructure and provides features like load balancing, service discovery, and self-healing capabilities. It enables organizations to efficiently manage and scale their applications, ensuring high availability and resilience.

Open Container Initiative (OCI)

The Open Container Initiative (OCI) is an open governance structure and project that aims to create industry standards for container formats and runtime. It was formed to ensure compatibility and interoperability between different container technologies. OCI defines specifications for container images and runtime, which are used by container runtimes like Docker and containerd. These specifications provide a common framework for packaging and running containers, allowing users to build and distribute container images that can be executed on any OCI-compliant runtime. OCI plays a crucial role in promoting portability and standardization in the container ecosystem.
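Spin applications themselves are distributed as OCI artifacts, which is why they can be pushed to and pulled from any OCI-compliant registry; for example (the registry reference is a placeholder):

spin registry push ttl.sh/my-spin-app:0.1.0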

Pod

A Pod is the smallest and most basic unit of deployment. It represents a single instance of a running process in a cluster. A Pod can contain one or more containers that are tightly coupled and share the same resources, such as network and storage. Containers within a Pod are scheduled and deployed together on the same node. Pods are ephemeral and can be created, deleted, or replaced dynamically. They provide a way to encapsulate and manage the lifecycle of containerized applications in Kubernetes.

Role Based Access Control (RBAC)

Role-Based Access Control (RBAC) is a security mechanism in Kubernetes that provides fine-grained control over access to cluster resources. RBAC allows administrators to define roles and permissions for users or groups, granting or restricting access to specific operations and resources within the cluster. RBAC ensures that only authorized users can perform certain actions, helping to enforce security policies and prevent unauthorized access to sensitive resources. It enhances the overall security and governance of Kubernetes clusters.

Runtime Class

A Runtime Class is a resource that allows users to specify different container runtimes for running their workloads. It provides a way to define and select the runtime environment in which a Pod should be executed. By using Runtime Classes, users can choose between different container runtimes, based on their specific requirements. This flexibility enables the deployment of workloads with different runtime characteristics, allowing for better resource utilization and performance optimization in Kubernetes clusters.
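As a sketch, the Runtime Class that SpinKube uses to route Pods to the Spin containerd shim looks roughly like this (the names follow the convention used elsewhere in the SpinKube documentation; verify them against your installation):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin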

Scheduler

A scheduler is a component responsible for assigning Pods to nodes in the cluster. It takes into account factors like resource availability, node capacity, and any defined scheduling constraints or policies. The scheduler ensures that Pods are placed on suitable nodes to optimize resource utilization and maintain high availability. It considers factors such as affinity, anti-affinity, and resource requirements when making scheduling decisions. The scheduler continuously monitors the cluster and makes adjustments as needed to maintain the desired state of the workload distribution.

Service

In Kubernetes, a Service is an abstraction that defines a logical set of Pods and gives clients a consistent way to interact with them, regardless of whether the code is designed for a cloud-native environment or a containerized legacy application.

Spin

Spin is a framework designed for building and running event-driven microservice applications using WebAssembly (Wasm) components.

SpinApp Manifest

The goal of the SpinApp manifest is twofold:

  • to represent the possible options for configuring a Wasm workload running in Kubernetes
  • to simplify and abstract the internals of how that Wasm workload is executed, while allowing the user to configure it to their needs

As a result, the simplest SpinApp manifest only requires the registry reference to create a deployment, pod, and service with the right Wasm executor.

However, the SpinApp manifest currently supports configuring options such as:

  • image pull secrets to fetch applications from private registries
  • liveness and readiness probes
  • resource limits (and requests)
  • Spin variables
  • volume mounts
  • autoscaling

Spin App Executor (CRD)

The SpinAppExecutor CRD is a Custom Resource Definition utilized by Spin Operator to determine which executor type should be used in running a SpinApp.
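A sketch of a SpinAppExecutor resource is shown below; treat the spec fields as illustrative and verify them against the CRD installed in your cluster:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2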

Spin Operator

Spin Operator is a Kubernetes operator in charge of handling the lifecycle of Spin applications based on their SpinApp resources.