spinkube / spin-operator

Spin Operator is a Kubernetes operator that empowers platform engineers to deploy Spin applications as custom resources to their Kubernetes clusters

Home Page: https://www.spinkube.dev/docs/spin-operator/

License: Other

Languages: Go 86.51%, Makefile 7.67%, Shell 2.19%, Smarty 1.45%, Dockerfile 1.13%, Nix 0.60%, Rust 0.44%
Topics: webassembly, kubernetes, spin

spin-operator's Introduction

Spin Operator

The Spin Operator enables deploying Spin applications to Kubernetes. It watches SpinApp Custom Resources and realizes the desired state in the Kubernetes cluster. This project was built using the Kubebuilder framework and contains the SpinApp CRD and its controller.

Documentation

To learn more about the Spin Operator and the SpinKube organization, please visit the official Spin Operator documentation, which is housed inside the official SpinKube documentation.

At this point in the preview, we recommend testing Spin Operator on a local k3d cluster via make install. The quickstart guide will walk you through prerequisites and the installation workflow.

While in private preview, Spin Operator installation via Helm chart for remote clusters is a work in progress and can be tracked here. In the meantime, please use the guidance from our quickstart guide.

Tutorials

There is a host of tutorials in the Spin Operator tutorials directory of the documentation.

Feedback

The remaining articles are under construction. You're welcome to view and open both Spin Operator and documentation issues and feature requests. As this work is under development, please note that current features, functionality and supporting documentation are likely to change as the projects evolve and improvements are made.

For questions or support, please visit our Discord channel.

Contributing (Spin Operator)

If you would like to contribute, please visit this contributing page.

Contributing (Documentation)

If you would like to contribute to SpinKube and Spin Operator, please visit this contributing page.

The documentation is written using Hugo (as the static site generator), Docsy (as the technical documentation template) and GitHub Pages (for hosting). However, during construction (prior to the website being rendered and publicly available), you are welcome to run a local copy of the documentation using the hugo server command. You can do so by following these instructions.

spin-operator's People

Contributors

adamreese, bacongobbler, calebschoepp, dependabot[bot], endocrimes, lann, macolso, michellen, mikkelhegn, peterj, radu-matei, rajatjindal, thorstenhans, tpmccallum, vdice


spin-operator's Issues

Model Spin RuntimeConfig in the Spin App custom resource

It is hard to integrate with other Kubernetes operators if the Spin RuntimeConfig is a TOML file in a secret: partly because of TOML itself, and partly because various operators may be populating and updating different parts of the secret that we'd want to pull from for different fields.

This means we likely want something like the following (which we would then render and turn into a secret for the pod):

runtimeConfig:
  keyValueStores:
    default:
      type: "azure_cosmos"
      account: "<cosmos-account>"
      database: "<cosmos-database>"
      container: "<cosmos-container>"
      key:
        secret: # a k8s secret is used here; we might also want to integrate with service accounts and the like over time
          secretName: "my-cosmos-key"
          secretPath: ".key" # optional
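
For reference, the structure above would presumably be rendered to Spin runtime-config TOML along these lines before being written into the secret (a sketch based on Spin's documented key-value store config; the exact rendering is up for design):

[key_value_store.default]
type = "azure_cosmos"
account = "<cosmos-account>"
database = "<cosmos-database>"
container = "<cosmos-container>"
key = "<value resolved from the my-cosmos-key secret>"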

consider reducing TerminationGracePeriodSeconds for spin-apps deployment/pod spec

I was trying to understand why scaling down Spin apps (after manually editing the number of replicas) takes so long. It is likely due to the default 30s value of TerminationGracePeriodSeconds when creating the spin-app pods.

I reduced TerminationGracePeriodSeconds to 2s on my local setup (via a custom build of spin-operator), after which scale-down is quite fast. I believe this change will also help with HPA- or KEDA-based scale-down.

We should consider adding a decent default and should possibly make it configurable on the SpinApp CRD.
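
For illustration, the knob lives on the pod spec of the deployment the operator generates; a minimal sketch of what a lower default would look like there (the container name and image are placeholders):

# Pod template of the generated spin-app deployment (sketch)
spec:
  terminationGracePeriodSeconds: 2   # proposed lower default; Kubernetes defaults to 30
  containers:
  - name: spin-app
    image: ghcr.io/fermyon/spin-operator/hello-world:latest

If this is made configurable on the SpinApp CRD, the operator would pass the field straight through to the deployment's pod template.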

Document using Redis trigger app with spin-operator

@calebschoepp shows using a Redis trigger app with spin-operator here:

# Setup cluster in one terminal
k3d cluster create wasm-cluster --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.10.0 -p "8081:80@loadbalancer" --agents 2
k apply -f spin-runtime-class.yaml
make install
make run

# Install Redis
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 -d)
echo $REDIS_PASSWORD

# Build and push app (make sure to change the password in the Redis address of spin.toml)
cd apps/order-processor
spin build
spin registry push ttl.sh/your-unique-name:1h

# Apply spinapp (make sure to update image)
k apply -f config/samples/redis.yaml

# Publish message to redis channel
kubectl run --namespace default redis-client --restart='Never'  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image docker.io/bitnami/redis:7.2.4-debian-11-r0 --command -- sleep infinity
kubectl exec --tty -i redis-client --namespace default -- bash
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-master
PUBLISH orders "hello world"

# Then go look at the logs of the spin app to see that it received the Redis message

The steps provided can be written up as a new file in the documentation/content folder and join the queue for approval/review.

Tutorial on how to configure Dapr with a SpinApp

This should be a step-by-step guide on how to configure your SpinApp to access Dapr.

Prerequisites:

  • User has a k8s cluster with Spin Operator installed
  • User has a SpinApp they would like to configure with Dapr

Determine reasonable defaults for Spin Operator pod cpu/mem resources

Decide on default/baseline cpu/mem configuration for the Spin Operator.

Main operator container: https://github.com/fermyon/spin-operator/blob/main/config/manager/manager.yaml#L92-L100
and kube-rbac-proxy: https://github.com/fermyon/spin-operator/blob/main/config/default/manager_auth_proxy_patch.yaml#L28-L34

I'm assuming we'll want to ensure the defaults are friendly to resource-constrained or small-footprint clusters (minikube, kind, k3d).
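
For discussion, a conservative starting point might mirror the usual kubebuilder scaffold defaults (values illustrative, not decided):

# Manager container resources (sketch)
resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 10m
    memory: 64Mi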

Helm Chart should not include a runtime class definition

An "any node is ok" runtimeclass isn't broadly suitable for "real" clusters (i.e beyond a couple of nodes) - we should remove the runtime-class from the chart and have folks reference it via kubectl apply -f https://github.com/....... when installing via Helm if they want a "default".

Conceptual overview article for Spin Kube

This article should provide a new user with a 5,000-foot view of the SpinKube project. After reading this article, they should understand:

  • What are the major pieces of this organization (shim, Spin Operator, spin k8s plugin, kwasm)
  • What each piece of technology is responsible for
  • Where to go next to get started with the technology
  • How to get involved with the project

We can explain concepts like Kubernetes operators succinctly, but should rely heavily on linking to external content for in-depth explanation. The goal is to provide enough ecosystem context that the user is grounded, and to dedicate the majority of the text to explaining what problems SpinKube is uniquely solving.

A high-level Figma diagram would be a "nice to have" :)

SpinApps don't work with images from ghcr.io

Apply the following SpinApp to the cluster:

apiVersion: spinoperator.dev/v1
kind: SpinApp
metadata:
  name: simple-spinapp
spec:
  image: "ghcr.io/fermyon/spin-operator/hello-world:latest"
  replicas: 1

Note: You'll have to make the image public in our packages repo or publish your own image to your own account, e.g.:

spin build
spin registry push ghcr.io/username/app-name:latest

Observe that the SpinApp fails with ErrImagePull. The events on the pod:

Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  4m13s                  default-scheduler  Successfully assigned default/simple-spinapp-7b7d69df56-c2xnn to k3d-wasm-cluster-server-0
Normal   Pulling    2m41s (x4 over 4m14s)  kubelet            Pulling image "ghcr.io/fermyon/spin-operator/hello-world:latest"
Warning  Failed     2m41s (x4 over 4m13s)  kubelet            Failed to pull image "ghcr.io/fermyon/spin-operator/hello-world:latest": rpc error: code = Unknown desc = failed to pull and unpack image "ghcr.io/fermyon/spin-operator/hello-world:latest": failed to unpack image on snapshotter overlayfs: mismatched image rootfs and manifest layers
Warning  Failed     2m41s (x4 over 4m13s)  kubelet            Error: ErrImagePull
Warning  Failed     2m28s (x6 over 4m12s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m15s (x7 over 4m12s)  kubelet            Back-off pulling image "ghcr.io/fermyon/spin-operator/hello-world:latest"

Flaky integration test

I'm experiencing the integration tests flaking occasionally because of a race condition. I've been unable to diagnose the issue so far but I think it has something to do with this code.

test -s /Users/caleb/fermyon/spin-operator/bin/controller-gen && /Users/caleb/fermyon/spin-operator/bin/controller-gen --version | grep -q v0.13.0 || \
	GOBIN=/Users/caleb/fermyon/spin-operator/bin go install sigs.k8s.io/controller-tools/cmd/[email protected]
/Users/caleb/fermyon/spin-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./api/..." paths="./cmd/..." paths="./internal/..." paths="./pkg/..." output:crd:artifacts:config=config/crd/bases
/Users/caleb/fermyon/spin-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./api/..." paths="./cmd/..." paths="./internal/..." paths="./pkg/..."
go fmt ./...
go vet ./...
test -s /Users/caleb/fermyon/spin-operator/bin/setup-envtest || GOBIN=/Users/caleb/fermyon/spin-operator/bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
/Users/caleb/fermyon/spin-operator/bin/setup-envtest use 1.28.3 --bin-dir /Users/caleb/fermyon/spin-operator/bin
Version: 1.28.3
OS/Arch: darwin/arm64
Path: /Users/caleb/fermyon/spin-operator/bin/k8s/1.28.3-darwin-arm64
KUBEBUILDER_ASSETS="/Users/caleb/fermyon/spin-operator/bin/k8s/1.28.3-darwin-arm64" go test ./... -coverprofile cover.out
?   	github.com/fermyon/spin-operator/api/v1	[no test files]
?   	github.com/fermyon/spin-operator/cmd	[no test files]
?   	github.com/fermyon/spin-operator/internal/constants	[no test files]
?   	github.com/fermyon/spin-operator/internal/logging	[no test files]
?   	github.com/fermyon/spin-operator/pkg/spinapp	[no test files]
ok  	github.com/fermyon/spin-operator/internal/controller	7.827s	coverage: 34.3% of statements
fatal error: concurrent map iteration and map write

goroutine 43 [running]:
sigs.k8s.io/controller-runtime/pkg/webhook/conversion.objectGVKs(0x14000331570, {0x1039c4e10?, 0x14000b82310})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/conversion/conversion.go:291 +0x130
sigs.k8s.io/controller-runtime/pkg/webhook/conversion.IsConvertible(0x14000331570?, {0x1039c4e10?, 0x14000b82310})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/conversion/conversion.go:227 +0x34
sigs.k8s.io/controller-runtime/pkg/envtest.modifyConversionWebhooks({0x14000818960, 0x3, 0x10?}, 0x14000331570, {{0x1400028aa80, 0x1, 0x1}, {0x1400032a6b8, 0x1, 0x1}, ...})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envtest/crd.go:357 +0x110
sigs.k8s.io/controller-runtime/pkg/envtest.InstallCRDs(_, {0x14000331570, {0x14000111360, 0x1, 0x1}, {0x14000818960, 0x3, 0x3}, 0x1, 0x2540be400, ...})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envtest/crd.go:101 +0xf4
sigs.k8s.io/controller-runtime/pkg/envtest.(*Environment).Start(0x14000180500)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envtest/server.go:282 +0x980
github.com/fermyon/spin-operator/internal/webhook.setupEnvTest(0x1400031a9c0)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:55 +0x308
github.com/fermyon/spin-operator/internal/webhook.TestCreateSpinAppWithNoExecutor(0x14000320401?)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:131 +0x28
testing.tRunner(0x1400031a9c0, 0x1039b5d60)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1595 +0xe8
created by testing.(*T).Run in goroutine 1
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1648 +0x33c

goroutine 1 [chan receive]:
testing.tRunner.func1()
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1561 +0x434
testing.tRunner(0x1400031a820, 0x14000193c28)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1601 +0x124
testing.runTests(0x1400026c280?, {0x1045dbf20, 0x8, 0x8}, {0x40?, 0x10387f5c0?, 0x1045f5b20?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:2052 +0x3b4
testing.(*M).Run(0x1400026c280)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1925 +0x538
main.main()
	_testmain.go:95 +0x1c8

goroutine 44 [select]:
github.com/stretchr/testify/assert.Eventually({0x10bb09318, 0x1400031ad00}, 0x14000202be0, 0x14000026000?, 0x14000202be0?, {0x0, 0x0, 0x0})
	/Users/caleb/go/pkg/mod/github.com/stretchr/[email protected]/assert/assertions.go:1847 +0x14c
github.com/stretchr/testify/require.Eventually({0x1039c1bb8, 0x1400031ad00}, 0x1400057bb98?, 0x2?, 0x2?, {0x0, 0x0, 0x0})
	/Users/caleb/go/pkg/mod/github.com/stretchr/[email protected]/require/require.go:401 +0x94
github.com/fermyon/spin-operator/internal/webhook.startWebhookServer(0x1400031ad00, 0x140000afcc0)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:117 +0x428
github.com/fermyon/spin-operator/internal/webhook.TestCreateSpinAppWithSingleExecutor(0x14000320401?)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:151 +0x3c
testing.tRunner(0x1400031ad00, 0x1039b5d68)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1595 +0xe8
created by testing.(*T).Run in goroutine 1
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1648 +0x33c

goroutine 45 [select]:
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start(0x140001785a0, {0x0?, 0x0}, {0x0?, 0x0})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:183 +0x33c
sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane.(*APIServer).Start(0x1400038a000)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/controlplane/apiserver.go:165 +0x4c
sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane.(*ControlPlane).Start(0x1400022c000)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/controlplane/plane.go:67 +0x134
sigs.k8s.io/controller-runtime/pkg/envtest.(*Environment).startControlPlane(0x1400022c000)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envtest/server.go:313 +0x94
sigs.k8s.io/controller-runtime/pkg/envtest.(*Environment).Start(0x1400022c000)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envtest/server.go:244 +0x380
github.com/fermyon/spin-operator/internal/webhook.setupEnvTest(0x1400031b040)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:55 +0x308
github.com/fermyon/spin-operator/internal/webhook.TestCreateSpinAppWithMultipleExecutors(0x14000320401?)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:186 +0x2c
testing.tRunner(0x1400031b040, 0x1039b5d58)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1595 +0xe8
created by testing.(*T).Run in goroutine 1
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1648 +0x33c

goroutine 46 [runnable]:
k8s.io/apimachinery/pkg/runtime/serializer.CodecFactory.UniversalDecoder({0x14000331570, {0x1039bde60, 0x14000303068}, {0x140003428c0, 0x3, 0x4}, {0x1039cc170, 0x14000323f40}}, {0x0?, 0x102f3a578?, ...})
	/Users/caleb/go/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/codec_factory.go:289 +0xf0
k8s.io/client-go/discovery.setDiscoveryDefaults(0x1400094f860)
	/Users/caleb/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:710 +0xc4
k8s.io/client-go/discovery.NewDiscoveryClientForConfigAndClient(0x14000c1a001?, 0x1038e3420?)
	/Users/caleb/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:739 +0x54
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0x140003ae900?, 0x14000c12180?)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/apiutil/restmapper.go:40 +0x20
sigs.k8s.io/controller-runtime/pkg/client.newClient(0x1400031b300?, {0x14000c04900, 0x14000331570, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:159 +0x1c0
sigs.k8s.io/controller-runtime/pkg/client.New(0x1039c1bb8?, {0x0, 0x14000331570, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:110 +0x54
github.com/fermyon/spin-operator/internal/webhook.setupEnvTest(0x1400031b380)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:69 +0x460
github.com/fermyon/spin-operator/internal/webhook.TestCreateInvalidSpinApp(0x14000320401?)
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:231 +0x28
testing.tRunner(0x1400031b380, 0x1039b5d50)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1595 +0xe8
created by testing.(*T).Run in goroutine 1
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/testing/testing.go:1648 +0x33c

goroutine 72 [syscall]:
syscall.syscall6(0x104bb0a68?, 0x14000003617?, 0x1400008ee28?, 0x10230c5c8?, 0x1400008ee28?, 0x9000010101c654?, 0x10b9ead50?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x1400008ee88?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x18?, 0x1400008eec4, 0x3?, 0x8?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x140005ac1b0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x1400017c9a0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 45
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 164 [sleep]:
time.Sleep(0x5f5e100)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/time.go:195 +0x10c
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.pollURLUntilOK({{0x1033cb2ab, 0x5}, {0x0, 0x0}, 0x0, {0x14000336fe0, 0xf}, {0x1033cf6d5, 0x8}, {0x0, ...}, ...}, ...)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:239 +0xd8
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 45
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:163 +0x270

goroutine 15 [syscall]:
syscall.syscall6(0x0?, 0x0?, 0x0?, 0x0?, 0x0?, 0x100010000?, 0x104f551b0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x140002fa688?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x0?, 0x140002fa6c4, 0x0?, 0x0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x140004e0000)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x14000344420)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 46
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 16 [syscall]:
syscall.syscall6(0x104bb0108?, 0x17?, 0x0?, 0x0?, 0x0?, 0x90000101010000?, 0x104f551b0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x140002f9688?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x0?, 0x140002f96c4, 0x0?, 0x0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x140003460f0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x140006a2000)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 44
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 114 [syscall]:
syscall.syscall6(0x104bb05b8?, 0x17?, 0x1039e4e28?, 0x103818ea0?, 0x1039e4e28?, 0x90000101014620?, 0x104f499a0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x140002fd688?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x1033eb16f?, 0x140002fd6c4, 0x1033d18f5?, 0xa?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x1400004a0f0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x1400020e2c0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 43
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 324 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile(0x140003210e0)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:183 +0x3c
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).Start.func1 in goroutine 302
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:136 +0xb0

goroutine 119 [syscall]:
syscall.syscall6(0x104bb05b8?, 0x1039c8317?, 0x1039bdb00?, 0x1039cb900?, 0x0?, 0x90000101010000?, 0x10b9ab290?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x14000020e88?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x1039c4578?, 0x14000020ec4, 0x0?, 0x1039c4b90?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x140004962d0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x140005a4160)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 43
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 165 [syscall]:
syscall.syscall6(0x1400045ae00?, 0x10230c824?, 0x1038189a0?, 0x10230cb68?, 0xffffffffffffffff?, 0xffff00010001ffff?, 0x104f499a0?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x1400045ae88?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x1400045aec0?, 0x1400045aec4, 0x1400045afa0?, 0x2?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x1400004a900)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x1400020eb00)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 45
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 141 [syscall]:
syscall.syscall6(0x14000088e00?, 0x10230c824?, 0x1038189a0?, 0x10230cb68?, 0xffffffffffffffff?, 0xffff00010001ffff?, 0x10b9ab290?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x14000088e88?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x14000088ec0?, 0x14000088ec4, 0x14000088fa0?, 0x2?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x14000496930)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x140005a4840)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 46
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 134 [syscall]:
syscall.syscall6(0x104bb13c8?, 0x17?, 0x1038189a0?, 0x0?, 0x5ac?, 0x90000101017530?, 0x10b9ab290?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x140004c5688?, 0x1023e9768?, 0x90?, 0x10394f260?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/zsyscall_darwin_arm64.go:43 +0x4c
syscall.Wait4(0x140004c56c0?, 0x140004c56c4, 0x140004c57a0?, 0x2?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x14000496450)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x140005a42c0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/os/exec/exec.go:890 +0x38
sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:175 +0x5c
created by sigs.k8s.io/controller-runtime/pkg/internal/testing/process.(*State).Start in goroutine 44
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/testing/process/process.go:173 +0x2f8

goroutine 322 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/cache/internal.(*Informers).Start(0x14000541400, {0x1039d5c90, 0x1400036b0e0})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cache/internal/informers.go:211 +0x40
sigs.k8s.io/controller-runtime/pkg/cluster.(*cluster).Start(0x8?, {0x1039d5c90?, 0x1400036b0e0?})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal.go:104 +0x70
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1(0x14000202c00)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:223 +0xd0
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile in goroutine 273
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:207 +0x204

goroutine 327 [select]:
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Watch(0x140005180f0)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/certwatcher/certwatcher.go:126 +0x80
created by sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Start in goroutine 308
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/certwatcher/certwatcher.go:113 +0x1c4

goroutine 65 [IO wait]:
internal/poll.runtime_pollWait(0x10b9c4ca0, 0x72)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/netpoll.go:343 +0xa0
internal/poll.(*pollDesc).wait(0x140005f9f00?, 0x14000790000?, 0x0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x140005f9f00, {0x14000790000, 0xa000, 0xa000})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_unix.go:164 +0x200
net.(*netFD).Read(0x140005f9f00, {0x14000790000?, 0x140006c3828?, 0x1026549cc?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/fd_posix.go:55 +0x28
net.(*conn).Read(0x140004a4018, {0x14000790000?, 0x140006c3748?, 0x102313f0c?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/net.go:179 +0x34
crypto/tls.(*atLeastReader).Read(0x1400000fe48, {0x14000790000?, 0x1400000fe48?, 0x0?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:805 +0x40
bytes.(*Buffer).ReadFrom(0x1400053c2a8, {0x1039bd008, 0x1400000fe48})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/bytes/buffer.go:211 +0x90
crypto/tls.(*Conn).readFromUntil(0x1400053c000, {0x10bc0f658?, 0x140004a4018}, 0x9fb9?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:827 +0xd0
crypto/tls.(*Conn).readRecordOrCCS(0x1400053c000, 0x0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:625 +0x1e4
crypto/tls.(*Conn).readRecord(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0x1400053c000, {0x1400082b000, 0x1000, 0x1027df87c?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:1369 +0x168
bufio.(*Reader).Read(0x1400051c8a0, {0x14000236580, 0x9, 0x1045cb310?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/bufio/bufio.go:244 +0x1b4
io.ReadAtLeast({0x1039bca88, 0x1400051c8a0}, {0x14000236580, 0x9, 0x9}, 0x9)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/io/io.go:335 +0xa0
io.ReadFull(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x14000236580, 0x9, 0x172604d7?}, {0x1039bca88?, 0x1400051c8a0?})
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x58
golang.org/x/net/http2.(*Framer).ReadFrame(0x14000236540)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:498 +0x78
golang.org/x/net/http2.(*clientConnReadLoop).run(0x140006c3f88)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:2275 +0xf8
golang.org/x/net/http2.(*ClientConn).readLoop(0x14000828000)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:2170 +0x5c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 64
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:821 +0xabc

goroutine 302 [select]:
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).Start(0x140006a7380, {0x1039d5bb0, 0x10462bd80})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:444 +0x768
github.com/fermyon/spin-operator/internal/webhook.startWebhookServer.func1()
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:110 +0x48
created by github.com/fermyon/spin-operator/internal/webhook.startWebhookServer in goroutine 44
	/Users/caleb/fermyon/spin-operator/internal/webhook/admission_test.go:109 +0x2f8

goroutine 303 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile(0x14000320ea0)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:183 +0x3c
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).Start.func1 in goroutine 302
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:136 +0xb0

goroutine 304 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile(0x14000320f30)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:183 +0x3c
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).Start.func1 in goroutine 302
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:136 +0xb0

goroutine 305 [IO wait]:
internal/poll.runtime_pollWait(0x10b9c48c0, 0x72)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/netpoll.go:343 +0xa0
internal/poll.(*pollDesc).wait(0x1400032ee80?, 0xd0000001003420?, 0x0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x1400032ee80)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_unix.go:611 +0x250
net.(*netFD).accept(0x1400032ee80)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/fd_unix.go:172 +0x28
net.(*TCPListener).accept(0x14000202d80)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/tcpsock_posix.go:152 +0x28
net.(*TCPListener).Accept(0x14000202d80)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/tcpsock.go:315 +0x2c
crypto/tls.(*listener).Accept(0x140005e4de0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/tls.go:66 +0x30
net/http.(*Server).Serve(0x140002d0000, {0x1039c7fd0, 0x140005e4de0})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/http/server.go:3056 +0x2b8
sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start(0x14000225860, {0x1039d5c90?, 0x1400036b090})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:263 +0x774
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1(0x14000202520)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:223 +0xd0
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile in goroutine 304
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:207 +0x204

goroutine 273 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile(0x14000320fc0)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:183 +0x3c
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).Start.func1 in goroutine 302
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:136 +0xb0

goroutine 326 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile(0x14000321050)
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:183 +0x3c
created by sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).Start.func1 in goroutine 325
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:136 +0xb0

goroutine 307 [syscall]:
syscall.syscall6(0x14000078bb8?, 0x10b9b23c8?, 0x88?, 0x104bb0130?, 0x14000078c28?, 0x1023729b4?, 0x104bb0108?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/sys_darwin.go:45 +0x68
golang.org/x/sys/unix.kevent(0x2000?, 0x14000026000?, 0x14000078c68?, 0x0?, 0x14000078c58?, 0x10240e63c?)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/unix/zsyscall_darwin_arm64.go:275 +0x54
golang.org/x/sys/unix.Kevent(0x14000078d00?, {0x0?, 0x14000078c98?, 0x10240e74c?}, {0x14000078e60?, 0x102313624?, 0x14000078cb8?}, 0x102313638?)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/unix/syscall_bsd.go:397 +0x40
github.com/fsnotify/fsnotify.(*Watcher).read(0x14000078d48?, {0x14000078e60?, 0x14000078d68?, 0xa})
	/Users/caleb/go/pkg/mod/github.com/fsnotify/[email protected]/backend_kqueue.go:777 +0x48
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0x14000204f50)
	/Users/caleb/go/pkg/mod/github.com/fsnotify/[email protected]/backend_kqueue.go:547 +0x94
created by github.com/fsnotify/fsnotify.NewBufferedWatcher in goroutine 305
	/Users/caleb/go/pkg/mod/github.com/fsnotify/[email protected]/backend_kqueue.go:184 +0x1fc

goroutine 308 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Start(0x140005180f0, {0x1039d5c90, 0x1400036b090})
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/certwatcher/certwatcher.go:118 +0x210
sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start.func1()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:214 +0x28
created by sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start in goroutine 305
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:213 +0x2ac

goroutine 309 [chan receive]:
sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start.func2()
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:248 +0x48
created by sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start in goroutine 305
	/Users/caleb/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:247 +0x6ac

goroutine 365 [IO wait]:
internal/poll.runtime_pollWait(0x10b9c49b8, 0x72)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/runtime/netpoll.go:343 +0xa0
internal/poll.(*pollDesc).wait(0x140001c1380?, 0x14000520000?, 0x0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x140001c1380, {0x14000520000, 0xa000, 0xa000})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/internal/poll/fd_unix.go:164 +0x200
net.(*netFD).Read(0x140001c1380, {0x14000520000?, 0x1400001b828?, 0x1026549cc?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/fd_posix.go:55 +0x28
net.(*conn).Read(0x140004a4058, {0x14000520000?, 0x1400001b748?, 0x102313d94?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/net/net.go:179 +0x34
crypto/tls.(*atLeastReader).Read(0x14000b12018, {0x14000520000?, 0x14000b12018?, 0x140005fdd40?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:805 +0x40
bytes.(*Buffer).ReadFrom(0x140004ce2a8, {0x1039bd008, 0x14000b12018})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/bytes/buffer.go:211 +0x90
crypto/tls.(*Conn).readFromUntil(0x140004ce000, {0x10bc0f658?, 0x140004a4058}, 0x9fb8?)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:827 +0xd0
crypto/tls.(*Conn).readRecordOrCCS(0x140004ce000, 0x0)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:625 +0x1e4
crypto/tls.(*Conn).readRecord(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0x140004ce000, {0x1400048f000, 0x1000, 0x1027df87c?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/crypto/tls/conn.go:1369 +0x168
bufio.(*Reader).Read(0x1400060d740, {0x1400038a2e0, 0x9, 0x1045cb310?})
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/bufio/bufio.go:244 +0x1b4
io.ReadAtLeast({0x1039bca88, 0x1400060d740}, {0x1400038a2e0, 0x9, 0x9}, 0x9)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/io/io.go:335 +0xa0
io.ReadFull(...)
	/opt/homebrew/Cellar/go/1.21.6/libexec/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x1400038a2e0, 0x9, 0x18139bd4?}, {0x1039bca88?, 0x1400060d740?})
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x58
golang.org/x/net/http2.(*Framer).ReadFrame(0x1400038a2a0)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:498 +0x78
golang.org/x/net/http2.(*clientConnReadLoop).run(0x1400001bf88)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:2275 +0xf8
golang.org/x/net/http2.(*ClientConn).readLoop(0x140008d2000)
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:2170 +0x5c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 364
	/Users/caleb/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:821 +0xabc
FAIL	github.com/fermyon/spin-operator/internal/webhook	3.849s
FAIL
make: *** [test] Error 1

Support an executor that runs Spin directly in a container

Currently we support the containerd-shim-spin and cyclotron executors. We also want to support running Spin directly in a container, which might be useful if you want to:

  • Configure plugins or trigger plugins that are not part of the containerd shim
  • Test a new version of Spin not yet supported by the containerd shim
  • Run workloads in clusters where you cannot configure the runtimeClass

Support for Non-HTTP Triggers

One thing that would be incredibly valuable in a Kubernetes deployment is the ability to use alternative triggers. This is currently blocked on deislabs/containerd-wasm-shims#193, but we should run through a hypothetical example to understand whether we're over-optimizing our API surface towards HTTP.

Document assigning variables to a SpinApp

The documentation should provide information on how to specify variables for a SpinApp from different sources (e.g., Kubernetes primitives such as ConfigMap and Secret).
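
A sketch of what the documentation could cover, assuming the SpinApp spec exposes a variables list with Kubernetes-style valueFrom sources (field names should be confirmed against the CRD):

spec:
  variables:
  - name: log_level
    value: debug                 # literal value
  - name: db_password
    valueFrom:
      secretKeyRef:              # sourced from a Secret
        name: my-db-credentials
        key: password
  - name: api_endpoint
    valueFrom:
      configMapKeyRef:           # sourced from a ConfigMap
        name: my-app-config
        key: endpoint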

Tutorial on how to set up ingress to your SpinApps

This tutorial should give prescriptive steps on how to direct traffic amongst multiple SpinApps running on a k8s cluster

Prerequisites:

  • User has a k8s cluster
  • User has >1 replica of a SpinApp they'd like to balance traffic across (see the sketch below)
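
As a starting point, a minimal sketch of host-based routing to one SpinApp (this assumes the operator exposes each app through a Service named after the SpinApp on port 80; adjust to the Service the operator actually creates and to your ingress controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spinapp-ingress
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: simple-spinapp   # Service fronting the SpinApp's pods
            port:
              number: 80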

consider removing kube-rbac-proxy from spin-operator deployment

Currently, kube-rbac-proxy is set up as part of the spin-operator deployment, where it is used to proxy the metrics endpoint (exposed by spin-operator).

This adds a few complexities to the setup:

  • if the Prometheus operator is running, we need to create an additional ServiceMonitor resource, and the operator takes care of the rest. (It seems we still need to disable certificate validation as Prometheus uses endpoint_ip to scrape metrics from endpoints, and it is not easy to get certs with dynamic pod IP addresses as SANs.)
  • if Prometheus is running (without the Prometheus operator), we need to configure a special scrape config (to account for the TLS, CA, and SAN options) and ensure that Prometheus mounts and uses this new scrape config.

Do people have strong opinions about continuing to use this, or can we remove it from spin-operator (which means the metrics endpoint will be accessible without auth from within the cluster)?

SpinApp does not respond to `kubectl scale`

Given the following SpinApp:

><> kubectl get spinapp hello-rust -o yaml
apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"core.spinoperator.dev/v1","kind":"SpinApp","metadata":{"annotations":{},"name":"hello-rust","namespace":"default"},"spec":{"executor":"containerd-shim-spin","image":"bacongobbler/hello-rust:latest","replicas":2}}
  creationTimestamp: "2024-02-20T18:47:21Z"
  generation: 1
  name: hello-rust
  namespace: default
  resourceVersion: "1716"
  uid: 3a4565a9-e0ab-4933-a3ce-eb01b7c908e9
spec:
  checks: {}
  enableAutoscaling: false
  executor: containerd-shim-spin
  image: bacongobbler/hello-rust:latest
  replicas: 2
  resources: {}
  runtimeConfig: {}
status:
  activeScheduler: containerd-shim-spin
  conditions:
  - lastTransitionTime: "2024-02-20T18:49:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-20T18:47:21Z"
    message: ReplicaSet "hello-rust-77496795b6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  readyReplicas: 2

kubectl get spinapp hello-rust works:

><> kubectl get spinapp hello-rust
NAME         READY REPLICAS   EXECUTOR
hello-rust   2                containerd-shim-spin

But kubectl scale spinapp hello-rust does not.

><> k scale spinapp hello-rust --replicas=3
Error from server (NotFound): spinapps.core.spinoperator.dev "hello-rust" not found
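
kubectl scale only works against a custom resource when its CRD declares a scale subresource, so the likely fix is along these lines (a sketch; the status path is assumed from the .status.readyReplicas field shown above):

# In the SpinApp CRD, under the served version:
subresources:
  status: {}
  scale:
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.readyReplicas

With kubebuilder this would correspond to a +kubebuilder:subresource:scale marker on the SpinApp type.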

spinappexecutors.core.spinoperator.dev containerd-shim-spin cannot be deleted

After removing the workload / SpinApps, it is not possible to remove the SpinAppExecutor:

$ k delete spinappexecutors.core.spinoperator.dev containerd-shim-spin --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
spinappexecutor.core.spinoperator.dev "containerd-shim-spin" force deleted
^C%
$ k describe spinappexecutors.core.spinoperator.dev containerd-shim-spin
Name:         containerd-shim-spin
Namespace:    default
Labels:       <none>
Annotations:  helm.sh/hook: post-install,post-upgrade
              helm.sh/hook-weight: 1
API Version:  core.spinoperator.dev/v1
Kind:         SpinAppExecutor
Metadata:
  Creation Timestamp:             2024-02-23T10:17:45Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2024-02-23T10:21:44Z
  Finalizers:
    core.spinoperator.dev/finalizer
  Generation:        2
  Resource Version:  51313
  UID:               478608e7-78cb-4e7c-ba18-0be6db354329
Spec:
  Create Deployment:  true
  Deployment Config:
    Runtime Class Name:  wasmtime-spin-v2
Events:                  <none>

Deletion works when removing the finalizer with kubectl edit...
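
For anyone hitting this, the kubectl edit workaround can be collapsed into a single standard patch that clears the finalizer list (a workaround only; the real fix is in the controller's finalizer handling):

kubectl patch spinappexecutors.core.spinoperator.dev containerd-shim-spin \
  --type=merge -p '{"metadata":{"finalizers":[]}}'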

Create Quickstart documentation file

Create a Quickstart documentation file that shows the shortest/easiest path for a user to use Spin Operator, with the least number of clicks/commands possible.

Hi @vdice

A couple of options to move this one forward:

  • A Quickstart file already exists at spinkube/spin-operator, so you could PR against that file with the Helm updates and any other changes.
  • There is also another PR over on the SpinKube org that @macolso just created, called "Adding a "Getting Started" guide". I had a quick chat with Kenzie and we would like to offer the option of using anything from that file as part of the quickstart updates. If you feel that Kenzie's file is a better jumping-off point, then please go ahead and use "Adding a "Getting Started" guide" instead (we can delete the original quickstart and rename Kenzie's file to quickstart.md).

The README.md links to the quickstart.md, so we will need to make sure that link lands a user in the correct spot :)

Helm chart should not include a hard dependency on kwasm operator

Although many will lifecycle their nodes through kwasm, we don't depend on their CRDs, and some will want to have specific (statically configured) node pools.

We should not explicitly depend on kwasm while that is the case (especially until runtime-class-manager is ready).

SpinApp CRD `Active Replicas` Status Column is unclear

Given the following manifest:

apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  name: hello-rust
spec:
  image: "bacongobbler/hello-rust:latest"
  replicas: 2
  executor: containerd-shim-spin

When I apply it to the cluster and run kubectl get spinapps, no replicas are shown.

><> k get spinapps
NAME         READY REPLICAS   EXECUTOR
hello-rust   2                containerd-shim-spin

However, the number of replicas is shown if I run k get spinapps hello-rust -o yaml:

><> k get spinapps hello-rust -o yaml
apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"core.spinoperator.dev/v1","kind":"SpinApp","metadata":{"annotations":{},"name":"hello-rust","namespace":"default"},"spec":{"executor":"containerd-shim-spin","image":"bacongobbler/hello-rust:latest","replicas":2}}
  creationTimestamp: "2024-02-20T19:07:23Z"
  generation: 1
  name: hello-rust
  namespace: default
  resourceVersion: "1663"
  uid: 41e1456e-6284-43ff-b37b-cbc302730b15
spec:
  checks: {}
  enableAutoscaling: false
  executor: containerd-shim-spin
  image: bacongobbler/hello-rust:latest
  replicas: 2
  resources: {}
  runtimeConfig: {}
status:
  activeScheduler: containerd-shim-spin
  conditions:
  - lastTransitionTime: "2024-02-20T19:07:25Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-20T19:07:23Z"
    message: ReplicaSet "hello-rust-77496795b6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  readyReplicas: 2

Possibly related to #86
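
The empty column suggests the CRD's printer columns are not wired to the status; a sketch of what the fix might look like (the repo's actual column definitions may differ):

# In the SpinApp CRD, under the served version:
additionalPrinterColumns:
- name: Ready Replicas
  type: integer
  jsonPath: .status.readyReplicas
- name: Executor
  type: string
  jsonPath: .spec.executor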

Expose version number of operator

It would be nice to be able to fetch the version number of the controller via an HTTP endpoint (or some other appropriate way), e.g. for displaying in spin k8s version command output.

no endpoints available for service "spin-operator-webhook-service" when deploying to AKS

Roughly following these steps https://github.com/spinkube/spin-operator/blob/main/documentation/content/running-on-azure-kubernetes-service.md, when installing the operator on AKS with a private image in ACR like this:

OPERATOR_REP=$AZURE_CONTAINER_REGISTRY_ENDPOINT/spin-operator
OPERATOR_IMG=$OPERATOR_REP:latest
make docker-build docker-push IMG=$OPERATOR_IMG
make install

kubectl apply -f spin-runtime-class.yaml
helm upgrade --install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --devel \
  --wait \
  --set controllerManager.manager.image.repository=$OPERATOR_REP \
  oci://ghcr.io/spinkube/spin-operator

it does not seem to pick up the image override and emits:

Error: GET "https://ghcr.io/v2/spinkube/spin-operator/tags/list": GET "https://ghcr.io/token?scope=repository%3Aspinkube%2Fspin-operator%3Apull&service=ghcr.io": unexpected status code 401: unauthorized: authentication required

When adapting to the local repo:

...
kubectl apply -f spin-runtime-class.yaml
helm upgrade --install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --devel \
  --wait \
  --set controllerManager.manager.image.repository=$OPERATOR_REP \
  ./charts/spin-operator

I get

        * Internal error occurred: failed calling webhook "mspinappexecutor.kb.io": failed to call webhook: Post "https://spin-operator-webhook-service.spin-operator.svc:443/mutate-core-spinoperator-dev-v1-spinappexecutor?timeout=10s": no endpoints available for service "spin-operator-webhook-service"

The Service seems to be present:

NAME                                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
spin-operator-certmanager                          ClusterIP   10.0.61.148    <none>        9402/TCP   9m15s
spin-operator-certmanager-webhook                  ClusterIP   10.0.199.132   <none>        443/TCP    9m15s
spin-operator-controller-manager-metrics-service   ClusterIP   10.0.224.219   <none>        8443/TCP   9m14s
spin-operator-kwasm-operator                       ClusterIP   10.0.147.20    <none>        80/TCP     9m14s
spin-operator-webhook-service                      ClusterIP   10.0.242.50    <none>        443/TCP    9m15s
Name:              spin-operator-webhook-service
Namespace:         spin-operator
Labels:            app.kubernetes.io/component=webhook
                   app.kubernetes.io/created-by=spin-operator
                   app.kubernetes.io/instance=spin-operator
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=spin-operator
                   app.kubernetes.io/part-of=spin-operator
                   app.kubernetes.io/version=v0.1.0
                   helm.sh/chart=spin-operator-0.1.0
Annotations:       meta.helm.sh/release-name: spin-operator
                   meta.helm.sh/release-namespace: spin-operator
Selector:          app.kubernetes.io/instance=spin-operator,app.kubernetes.io/name=spin-operator,control-plane=controller-manager
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.242.50
IPs:               10.0.242.50
Port:              <unset>  443/TCP
TargetPort:        9443/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

Helm Chart should not include a default `Executor`

Because the RuntimeClassName is configurable and important, the Helm-installed Executor either needs to be easy to customize via values.yaml or should not be included by default. My preference is the latter, to simplify the process of a production deployment.
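
One way to get there is to gate the executor behind a chart value so production installs can opt out; a sketch with an invented flag name:

# values.yaml (hypothetical)
spinAppExecutor:
  create: true                        # set to false to skip the default executor
  runtimeClassName: wasmtime-spin-v2  # only used when create is true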

Complete prerequisites page

A one-stop shop for installing any prerequisites/dependencies/libraries.
All other pages can link to this prerequisites page to avoid redundancy/duplication in the documentation as we move forward.

SpinApp deployment fails due to missing replicas when enableAutoscaling: true

Deploying a SpinApp that should be scaled via HPA/KEDA (enableAutoscaling: true) currently fails due to a missing value for replicas.

To repro, try to deploy config/samples/hpa.yaml:

kubectl apply -f config/samples/hpa.yaml

horizontalpodautoscaler.autoscaling/spinapp-autoscaler created
The SpinApp "hpa-spinapp" is invalid: spec.replicas: Required value

Although kubectl explain spinapp.spec.enableAutoscaling mentions this:

EnableAutoscaling indicates whether the app is allowed to autoscale. If true then the operator leaves the replica count of the underlying deployment to be managed by an external autoscaler (HPA/KEDA). Replicas cannot be defined if this is enabled. By default EnableAutoscaling is false.

Deployment and autoscaling (in the case of KEDA, #4) work if I specify both enableAutoscaling: true and replicas, as shown here:

apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  name: keda-spinapp
spec:
  image: "ttl.sh/cpu-load-gen:1h"
  executor: containerd-shim-spin
  enableAutoscaling: true
  replicas: 1
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 400Mi

Support default config at the namespace level

Problem

If a user wants to deploy many SpinApps into their cluster, it is likely that a portion of the configuration for each SpinApp will be duplicated. For example, you may want to provide runtime config to every single Spin app that configures the CosmosDB instance it should use as a KV provider.

Potential solution

Provide a way to configure default config for a namespace such that any SpinApp in that namespace inherits that default config.

Considerations

  • Where would this default config live? On SpinAppExecutor? On a new CR? Somewhere else?
  • What configuration would we want to allow defaults for? Runtime config seems like an obvious choice, and most of the other fields of the SpinApp CRD also seem like useful things to set defaults for (see the sketch below).
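
For the sake of discussion, a namespace-level defaults resource might look something like this (the kind and every field name here are invented for illustration):

apiVersion: core.spinoperator.dev/v1
kind: SpinAppDefaults            # hypothetical new CR
metadata:
  name: defaults
  namespace: team-a
spec:
  runtimeConfig:
    keyValueStores:
      default:
        type: azure_cosmos
        account: shared-cosmos-account

Any SpinApp created in team-a would then inherit these values unless it overrides them.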

Image in SpinApp CR is being incorrectly cached

Changing the image of a SpinApp CR does not always work. For an unknown reason, sometimes when you change the image it does not update. Even if you delete and re-apply the SpinApp, it still does not use the new image.

@vdice and @ThorstenHans have observed this issue. I'll leave it to them to share more details in a comment below. My understanding is that @ThorstenHans has a reproduction of this issue. Also please feel free to edit this issue to be more accurate.

Operator ImagePullBackOff - private image not pulled

I guess a regression was introduced with ac24190.

(screenshot omitted)

The private image is then not correctly substituted in the configuration:

Failed to pull image "ghcr.io/spinkube/spin-operator:latest": rpc error: code = Unknown desc = failed to pull and unpack image "ghcr.io/spinkube/spin-operator:latest": failed to resolve reference "ghcr.io/spinkube/spin-operator:latest": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aspinkube%2Fspin-operator%3Apull&service=ghcr.io: 401 Unauthorized
