How to create a conversion webhook for your operator with operator-sdk
You have probably searched through plenty of other material on how to create webhooks with operator-sdk before reading this article. You are in luck: this article walks you through an error-free process from start to finish, so you won't need to look any further.
Prerequisites:
- Git
- Ko
- Kubernetes cluster. You can install Minikube or Docker Desktop for your environment. Any Kubernetes service will work fine. I was using Kubernetes v1.24.1.
Prepare your workstation:
1. Build and install operator-sdk with the latest commit
You can download the latest v1.22.1 release here or build the operator-sdk binary from source at commit `87cdc50`. If you build from source, make sure the commit you use is no earlier than that one.
To build the binary from the source code, go to the path $GOPATH/src/github.com/operator-framework in your terminal:
cd $GOPATH/src/github.com/operator-framework
Download the source code of operator-sdk:
git clone git@github.com:operator-framework/operator-sdk.git
Go to the home directory of the operator-sdk:
cd operator-sdk
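The steps above build whatever is currently at the tip of the default branch. If you want to pin the build to the commit mentioned earlier, check it out before running make install:
git checkout 87cdc50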
Build the binaries:
make install
The binary operator-sdk is automatically installed under $GOPATH/bin. Make sure your environment variable $PATH includes $GOPATH/bin. You can verify the installation with the command:
which operator-sdk
It will print the path of the operator-sdk binary.
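You can also print the version and the commit the binary was built from:
operator-sdk version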
2. Install cert-manager
Since the webhook requires a TLS certificate that the apiserver is configured to trust, install the cert-manager with the following command:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
You can visit the release page to choose the specific version you need.
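Before moving on, it is worth confirming that the cert-manager pods are up:
kubectl get pods -n cert-manager
All three cert-manager deployments (cert-manager, cert-manager-cainjector, and cert-manager-webhook) should show Running pods.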
Create the classic operator example:
Let's leverage the well-known memcached-operator used throughout the operator-sdk tutorials, but we will create two versions of the CRD: v1alpha1 and v1beta1. Use Git to manage the source code, because we plan to keep two branches, one for the v1alpha1 CRD and the other for the v1beta1 CRD.
Create the project:
Create a work directory under $GOPATH/src for the project. This article uses the path github.com/example.
mkdir $GOPATH/src/github.com/example
cd $GOPATH/src/github.com/example
mkdir memcached-operator
cd memcached-operator
Initialize the project with Git:
git init
Initialize the project with operator-sdk:
operator-sdk init --domain example.com --repo github.com/example/memcached-operator
The domain and repo names can be changed based on your needs.
Create a new API and controller for the v1alpha1 version:
operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
This command scaffolds the Memcached resource API and its controller. Next, let's edit the file api/v1alpha1/memcached_types.go.
Define the API for the Memcached Custom Resource (CR) as below:
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    //+kubebuilder:validation:Minimum=0
    // Size is the size of the memcached deployment
    Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
    // Nodes are the names of the memcached pods
    Nodes []string `json:"nodes"`
}
Update the generated code by invoking the controller-gen utility to regenerate api/v1alpha1/zz_generated.deepcopy.go:
make generate
Generate the CRD manifests:
make manifests
Let's implement the controller in the simplest possible way: the reconcile loop just logs a message and writes a hardcoded status, and the controller only watches changes to the Memcached CR.
Go to the file controllers/memcached_controller.go and change the function Reconcile into:
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := ctrl.Log.WithValues("memcached", req.NamespacedName)
    log.Info("Memcached resource has changed.")

    latest := &cachev1alpha1.Memcached{}
    err := r.Get(ctx, req.NamespacedName, latest)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Memcached resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }
        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get Memcached")
        return ctrl.Result{}, err
    }

    podNames := []string{"pod1", "pod2", "pod3"}
    latest.Status.Nodes = podNames
    err = r.Status().Update(ctx, latest)
    if err != nil {
        log.Error(err, "Failed to update Memcached status")
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}
We hardcode the pod names just for testing purposes.
Change the function SetupWithManager into:
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1alpha1.Memcached{}).
        WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
        Complete(r)
}
Specify permissions for RBAC manifest generation by adding the following markers above the Reconcile function:
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
...
}
Generate the ClusterRole manifest at config/rbac/role.yaml:
make manifests
Change the CR sample at config/samples/cache_v1alpha1_memcached.yaml into the following contents:
apiVersion: v1
kind: Namespace
metadata:
  name: memcached-sample
---
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: memcached-sample
  namespace: memcached-sample
spec:
  size: 3
So far, we have created the basic structure of the memcached operator with the v1alpha1 CRD.
We would like to commit the binaries under bin, so remove the line bin from the file .gitignore.
Let’s save the work with the lovely Git. Add the following folders and files:
git add api
git add bin
git add config
git add controllers
git add hack
git add main.go
git add Dockerfile
git add go.mod
git add go.sum
git add PROJECT
git add Makefile
git add .dockerignore
git add .gitignore
Leverage the git command to save the commit:
git commit -a
Create a branch to save the work of v1alpha1 CRD:
git checkout -b v1alpha1
Build and test the operator:
You can choose any image repository to save the images. In the following steps, we take docker.io as the image repository. Specify the variable $USER and build the image:
export USER=<name>
make docker-build docker-push IMG=docker.io/$USER/memcached-operator:v0.0.1
We use the tag v0.0.1 for the v1alpha1 CRD. Replace <name> with your Docker Hub username.
After the image is successfully published, run the following command to deploy the operator:
make deploy IMG=docker.io/$USER/memcached-operator:v0.0.1
Check the deployment of the operator with:
kubectl get deploy -n memcached-operator-system
The output should look roughly like the following; the readiness counters and age will differ in your cluster:
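NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           30s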
Check the log with the command:
kubectl logs -f deploy/memcached-operator-controller-manager -n memcached-operator-system -c manager
We need to specify the container name manager here, as the deployment runs two containers (the manager and the kube-rbac-proxy sidecar).
Create the v1alpha1 CR with the command:
kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml
Once this command runs, we should see a new log message:
INFO controllers.Memcached Memcached resource has changed. {"memcached": "memcached-sample/memcached-sample"}
This means the reconcile loop was invoked when the CR was created.
Check the CR:
kubectl get Memcached memcached-sample -n memcached-sample -oyaml
This command prints the contents of the CR in YAML format, as below.
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cache.example.com/v1alpha1","kind":"Memcached","metadata":{"annotations":{},"name":"memcached-sample","namespace":"memcached-sample"},"spec":{"size":3}}
  creationTimestamp: "2022-07-21T14:00:49Z"
  generation: 1
  name: memcached-sample
  namespace: memcached-sample
  resourceVersion: "68011"
  uid: 95068671-9f3a-4de4-80a8-be13cf23bfab
spec:
  size: 3
status:
  nodes:
  - pod1
  - pod2
  - pod3
Now we have everything for v1alpha1 CRD in place.
To remove the CR:
kubectl delete Memcached memcached-sample -n memcached-sample
To remove the operator:
make undeploy IMG=docker.io/$USER/memcached-operator:v0.0.1
Create a new CRD version v1beta1:
Switch back to the master branch, and continue the development.
git checkout master
Create the new v1beta1 API:
operator-sdk create api --group cache --version v1beta1 --kind Memcached
We do not need to create a new controller, because we already have one; we only need the resource. Since we left out the --resource and --controller flags, the command prompts for both. The prompts look roughly like the following; answer y for the resource and n for the controller:
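Create Resource [y/n]
y
Create Controller [y/n]
n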
The only change we need is to make the controller reconcile the v1beta1 CR instead of the v1alpha1 CR. When developing a Kubernetes operator, avoid having a controller reconcile multiple versions of the same CR.
Create a field called replicaSize for the v1beta1 CRD. This is the crucial and only change compared to the v1alpha1 CRD. Let's edit the file api/v1beta1/memcached_types.go.
Define the API for the Memcached Custom Resource (CR) as below:
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    // +kubebuilder:validation:Minimum=0
    // ReplicaSize is the size of the memcached deployment
    ReplicaSize int32 `json:"replicaSize"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
    // Nodes are the names of the memcached pods
    Nodes []string `json:"nodes"`
}
Add the marker +kubebuilder:storageversion to indicate v1beta1 will be the storage version:
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:storageversion

// Memcached is the Schema for the memcacheds API
type Memcached struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MemcachedSpec   `json:"spec,omitempty"`
    Status MemcachedStatus `json:"status,omitempty"`
}
Go to the file controllers/memcached_controller.go and replace all occurrences of cachev1alpha1 with cachev1beta1, and all v1alpha1 with v1beta1.
Change the import from
cachev1alpha1 "github.com/example/memcached-operator/api/v1alpha1"
to
cachev1beta1 "github.com/example/memcached-operator/api/v1beta1"
Change the function Reconcile into
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := ctrl.Log.WithValues("memcached", req.NamespacedName)
    log.Info("Memcached resource has changed.")

    latest := &cachev1beta1.Memcached{}
    err := r.Get(ctx, req.NamespacedName, latest)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Memcached resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }
        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get Memcached")
        return ctrl.Result{}, err
    }

    podNames := []string{"pod1", "pod2", "pod3"}
    latest.Status.Nodes = podNames
    err = r.Status().Update(ctx, latest)
    if err != nil {
        log.Error(err, "Failed to update Memcached status")
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}
Change the function SetupWithManager into:
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1beta1.Memcached{}).
        WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
        Complete(r)
}
Make sure the controller has switched to watching the v1beta1 CR.
Then, update the generated code and regenerate the CRD manifests:
make generate
make manifests
Both the v1alpha1 and v1beta1 versions are served, but only v1beta1 is used as the storage version.
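You can confirm this in the regenerated CRD manifest at config/crd/bases/cache.example.com_memcacheds.yaml; its spec.versions list should contain entries roughly like the following (schemas omitted):
versions:
- name: v1alpha1
  served: true
  storage: false
- name: v1beta1
  served: true
  storage: true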
Change the CR sample at config/samples/cache_v1beta1_memcached.yaml into the following contents:
apiVersion: v1
kind: Namespace
metadata:
  name: memcached-sample
---
apiVersion: cache.example.com/v1beta1
kind: Memcached
metadata:
  name: memcached-sample
  namespace: memcached-sample
spec:
  replicaSize: 3
Create the conversion webhook for the v1beta1 resource:
operator-sdk create webhook --conversion --version v1beta1 --kind Memcached --group cache --force
Next, we need to implement the conversion.Hub and conversion.Convertible interfaces for our CRD types. We will use v1beta1, the storage version, as the Hub, which means every other version converts to and from v1beta1.
Create a file named memcached_conversion.go under api/v1beta1 with the contents:
package v1beta1
// Hub marks this type as a conversion hub.
func (*Memcached) Hub() {}
The v1alpha1 type needs to implement the conversion.Convertible interface so that it can convert to and from the v1beta1 type. In this example, the field size in v1alpha1 maps to replicaSize in v1beta1.
Create a file named memcached_conversion.go under api/v1alpha1 with the contents:
package v1alpha1

import (
    "github.com/example/memcached-operator/api/v1beta1"
    "sigs.k8s.io/controller-runtime/pkg/conversion"
)

// ConvertTo converts this Memcached to the Hub version (v1beta1).
func (src *Memcached) ConvertTo(dstRaw conversion.Hub) error {
    dst := dstRaw.(*v1beta1.Memcached)
    dst.Spec.ReplicaSize = src.Spec.Size
    dst.ObjectMeta = src.ObjectMeta
    dst.Status.Nodes = src.Status.Nodes
    return nil
}

// ConvertFrom converts from the Hub version (v1beta1) to this version.
func (dst *Memcached) ConvertFrom(srcRaw conversion.Hub) error {
    src := srcRaw.(*v1beta1.Memcached)
    dst.Spec.Size = src.Spec.ReplicaSize
    dst.ObjectMeta = src.ObjectMeta
    dst.Status.Nodes = src.Status.Nodes
    return nil
}
Do not forget to copy the ObjectMeta.
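If you want a quick sanity check of the conversion logic before deploying anything, a small Go test in the v1alpha1 package (for example api/v1alpha1/memcached_conversion_test.go, a file name chosen just for illustration) can exercise the round trip in memory. This is a minimal sketch assuming the types defined above:
package v1alpha1

import (
    "testing"

    "github.com/example/memcached-operator/api/v1beta1"
)

// TestConversionRoundTrip converts a v1alpha1 Memcached to the v1beta1 hub and back,
// checking that size and replicaSize stay in sync.
func TestConversionRoundTrip(t *testing.T) {
    src := &Memcached{Spec: MemcachedSpec{Size: 3}}
    src.Name = "memcached-sample"

    // v1alpha1 -> v1beta1 (the hub).
    hub := &v1beta1.Memcached{}
    if err := src.ConvertTo(hub); err != nil {
        t.Fatalf("ConvertTo failed: %v", err)
    }
    if hub.Spec.ReplicaSize != 3 || hub.Name != "memcached-sample" {
        t.Fatalf("unexpected hub object: %+v", hub)
    }

    // v1beta1 -> v1alpha1.
    back := &Memcached{}
    if err := back.ConvertFrom(hub); err != nil {
        t.Fatalf("ConvertFrom failed: %v", err)
    }
    if back.Spec.Size != 3 {
        t.Fatalf("expected size 3, got %d", back.Spec.Size)
    }
}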
Update the generated code and regenerate the CRD manifests once more:
make generate
make manifests
Enable the webhook and certificate manager in manifests:
Go through the kustomization.yaml files under config/crd, config/default, and config/webhook, and uncomment or comment out lines as described below.
For config/crd/kustomization.yaml, uncomment the following lines:
#- patches/webhook_in_memcacheds.yaml
#- patches/cainjection_in_memcacheds.yaml
so that the patchesStrategicMerge section reads:
patchesStrategicMerge:
- patches/webhook_in_memcacheds.yaml
- patches/cainjection_in_memcacheds.yaml
For config/default/kustomization.yaml, uncomment the following lines:
#- ../webhook
#- ../certmanager
#- manager_webhook_patch.yaml
so that the bases and patchesStrategicMerge sections include:
bases:
- ../webhook
- ../certmanager
patchesStrategicMerge:
- manager_webhook_patch.yaml
Also uncomment all of the lines below vars:
#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
#  objref:
#    kind: Certificate
#    group: cert-manager.io
#    version: v1
#    name: serving-cert # this name should match the one in certificate.yaml
#  fieldref:
#    fieldpath: metadata.namespace
#- name: CERTIFICATE_NAME
#  objref:
#    kind: Certificate
#    group: cert-manager.io
#    version: v1
#    name: serving-cert # this name should match the one in certificate.yaml
#- name: SERVICE_NAMESPACE # namespace of the service
#  objref:
#    kind: Service
#    version: v1
#    name: webhook-service
#  fieldref:
#    fieldpath: metadata.namespace
#- name: SERVICE_NAME
#  objref:
#    kind: Service
#    version: v1
#    name: webhook-service
For config/webhook/kustomization.yaml, comment out the line
- manifests.yaml
under resources, so that it reads:
resources:
#- manifests.yaml
Change the file config/crd/patches/webhook_in_memcacheds.yaml by adding conversionReviewVersions: ["v1alpha1", "v1beta1"] under spec.conversion.webhook, like:
# The following patch enables a conversion webhook for the CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1alpha1", "v1beta1"]
      clientConfig:
        service:
          namespace: system
          name: webhook-service
          path: /convert
Save the v1beta1 work with Git:
git add api
git add bin
git add config
Commit the change:
git commit -a
Build the image for v1beta1:
export USER=<name>
make docker-build docker-push IMG=docker.io/$USER/memcached-operator:v0.0.2
We use the tag v0.0.2 for the v1beta1 CRD. Replace <name> with your Docker Hub username.
Deploy the operator with v1beta1 resource:
make deploy IMG=docker.io/$USER/memcached-operator:v0.0.2
Check the deployment of the operator with:
kubectl get deploy -n memcached-operator-system
Check the log with the command:
kubectl logs -f deploy/memcached-operator-controller-manager -n memcached-operator-system -c manager
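You can also confirm that the conversion webhook has been wired into the CRD by checking its conversion strategy:
kubectl get crd memcacheds.cache.example.com -o jsonpath='{.spec.conversion.strategy}'
It should print Webhook.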
Create the v1alpha1 CR again with the command:
kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml
This time, the resource will be saved in the cluster as v1beta1: the v1alpha1 resource is automatically converted into v1beta1.
Check the CR:
kubectl get Memcached memcached-sample -n memcached-sample -oyaml
We will get something like:
apiVersion: cache.example.com/v1beta1
kind: Memcached
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cache.example.com/v1alpha1","kind":"Memcached","metadata":{"annotations":{},"name":"memcached-sample","namespace":"memcached-sample"},"spec":{"size":3}}
  creationTimestamp: "2022-07-21T14:37:05Z"
  generation: 1
  name: memcached-sample
  namespace: memcached-sample
  resourceVersion: "72276"
  uid: 2b1bd7b7-085e-415e-978e-469b31893237
spec:
  replicaSize: 3
status:
  nodes:
  - pod1
  - pod2
  - pod3
The resource is stored in the cluster as v1beta1, even though it was created as v1alpha1. The webhook converted the v1alpha1 field size: 3 into the v1beta1 field replicaSize: 3 on its way to storage.
If you deployed the v1alpha1 operator first and then the v1beta1 operator, you may still see `apiVersion: cache.example.com/v1alpha1`. kubectl caches API discovery data, which is why the old apiVersion can still show up even after the CRD is updated to v1beta1. To clear the kubectl cache, run the command:
rm -rf ~/.kube/cache/*
Or you can run the following command to refresh the resources:
kubectl api-resources
After running one of the above commands, you can try to check the CR again.
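Alternatively, you can pin the request to a specific version by using the fully qualified resource name, which sidesteps the cached preferred version:
kubectl get memcacheds.v1beta1.cache.example.com memcached-sample -n memcached-sample -oyaml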
Migrate the existing v1alpha1 resources into v1beta1:
The CRD defines the storage version, but it only applies to newly created resources. How should we deal with resources of the older versions that already exist in the cluster?
Based on the official Kubernetes documentation, we could follow the manual instructions here. However, is there an easier, automated way?
YES!
We can leverage the migrate tool, knative.dev/pkg/apiextensions/storageversion/cmd/migrate, from the Knative common packages to migrate the existing resources.
Switch back to master branch:
git checkout master
Add knative.dev/pkg v0.0.0-20210309024624-0f8d8de5949d to the file go.mod.
Create a directory called post-install under config/ to host all the YAML manifests for the migration.
Create config/post-install/tools.go with the contents:
//go:build tools
// +build tools

package tools

import (
    // Needed for the storage version migration tool.
    _ "knative.dev/pkg/apiextensions/storageversion/cmd/migrate"
)
The purpose of this file is to import the migration tool so that go mod vendor pulls it into the vendor directory.
Create config/post-install/clusterrole.yaml with the contents:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: memcached-operator-post-install-job-role
rules:
  # Storage version upgrader needs to be able to patch CRDs.
  - apiGroups:
      - "apiextensions.k8s.io"
    resources:
      - "customresourcedefinitions"
      - "customresourcedefinitions/status"
    verbs:
      - "get"
      - "list"
      - "update"
      - "patch"
      - "watch"
  # Our own resources we care about.
  - apiGroups:
      - "cache.example.com"
    resources:
      - "memcacheds"
    verbs:
      - "get"
      - "list"
      - "create"
      - "update"
      - "delete"
      - "patch"
      - "watch"
Create config/post-install/serviceaccount.yaml with the contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: memcached-operator-post-install-job
  namespace: memcached-operator-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-post-install-job-role-binding
subjects:
  - kind: ServiceAccount
    name: memcached-operator-post-install-job
    namespace: memcached-operator-system
roleRef:
  kind: ClusterRole
  name: memcached-operator-post-install-job-role
  apiGroup: rbac.authorization.k8s.io
Create config/post-install/storage-version-migrator.yaml with the contents:
apiVersion: batch/v1
kind: Job
metadata:
  name: storage-version-migration
  namespace: memcached-operator-system
  labels:
    app: "storage-version-migration"
spec:
  ttlSecondsAfterFinished: 600
  backoffLimit: 10
  template:
    metadata:
      labels:
        app: "storage-version-migration"
    spec:
      serviceAccountName: memcached-operator-post-install-job
      restartPolicy: OnFailure
      containers:
        - name: migrate
          image: ko://github.com/example/memcached-operator/vendor/knative.dev/pkg/apiextensions/storageversion/cmd/migrate
          args:
            - "memcacheds.cache.example.com"
We pass the argument "memcacheds.cache.example.com" directly to the tool, so it knows which CRD's resources to migrate.
Sync up go.sum by running:
go mod tidy
Generate the dependencies for the project:
go mod vendor
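Note that ko publishes to the registry named by the KO_DOCKER_REPO environment variable, so export it first if you have not already done so (here assuming the same docker.io/$USER repository used earlier):
export KO_DOCKER_REPO=docker.io/$USER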
Build the image for the migration tool:
ko resolve -f config/post-install -B -t 0.0.2
We also tag it 0.0.2, since it belongs to the v1beta1 conversion. The image will be published at docker.io/$USER/migrate:0.0.2.
In the file config/post-install/storage-version-migrator.yaml, replace image: ko://github.com/example/memcached-operator/vendor/knative.dev/pkg/apiextensions/storageversion/cmd/migrate with image: docker.io/$USER/migrate:0.0.2.
At this point, the Job is ready to migrate resources from v1alpha1 to v1beta1.
Save the work with Git:
git add vendor
git add config
git commit -a
Save it into another branch called v1beta1:
git checkout -b v1beta1
Demonstrate how the resource migration works:
There are now two Git branches available: v1alpha1 and v1beta1. Run the following steps on a clean cluster, either one with no memcached operator installed or a freshly created one, and do not forget to install cert-manager.
Go to the v1alpha1 branch:
git checkout v1alpha1
Install the operator with the v1alpha1 resource:
make deploy IMG=docker.io/$USER/memcached-operator:v0.0.1
Create the v1alpha1 resource:
kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml
Now we have the v1alpha1 resource saved in the cluster by the v1alpha1 memcached operator.
Verify the CR with the command:
kubectl get Memcached memcached-sample -n memcached-sample -oyaml
It should be saved as v1alpha1 as below:
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cache.example.com/v1alpha1","kind":"Memcached","metadata":{"annotations":{},"name":"memcached-sample","namespace":"memcached-sample"},"spec":{"size":3}}
  creationTimestamp: "2022-07-21T17:39:23Z"
  generation: 1
  name: memcached-sample
  namespace: memcached-sample
  resourceVersion: "423064"
  uid: b52cc78b-5356-4152-89a5-708969afeecb
spec:
  size: 3
status:
  nodes:
  - pod1
  - pod2
  - pod3
Go to the v1beta1 branch:
git checkout v1beta1
Install the operator with the v1beta1 resource:
make deploy IMG=docker.io/$USER/memcached-operator:v0.0.2
The older version of the memcached operator is replaced with the newer version, but the resource saved in the cluster is still v1alpha1.
Check the status of the CRD:
kubectl get crd memcacheds.cache.example.com -oyaml
We can see both versions listed under storedVersions:
status:
  ...
  storedVersions:
  - v1alpha1
  - v1beta1
Run the following command to migrate the resource:
kubectl apply -f config/post-install
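You can watch the migration Job until it completes, and inspect its logs if something goes wrong:
kubectl get job storage-version-migration -n memcached-operator-system
kubectl logs job/storage-version-migration -n memcached-operator-system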
Check the status of the CRD:
kubectl get crd memcacheds.cache.example.com -oyaml
We can see the stored versions have changed to:
status:
  ...
  storedVersions:
  - v1beta1
Let’s check the CR:
kubectl get Memcached memcached-sample -n memcached-sample -oyaml
Please be aware that the CR does not change immediately after the migration job completes. It may take a few minutes for the transition to finish. Once the migration is done, we can get the CR as below:
apiVersion: cache.example.com/v1beta1
kind: Memcached
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cache.example.com/v1alpha1","kind":"Memcached","metadata":{"annotations":{},"name":"memcached-sample","namespace":"memcached-sample"},"spec":{"size":3}}
  creationTimestamp: "2021-03-16T20:31:39Z"
  generation: 1
  name: memcached-sample
  namespace: memcached-sample
  resourceVersion: "835"
  selfLink: /apis/cache.example.com/v1beta1/namespaces/memcached-sample/memcacheds/memcached-sample
  uid: 6f6baba7-0222-4969-ad31-e3cf0e8aff15
spec:
  replicaSize: 3
status:
  nodes:
  - pod1
  - pod2
  - pod3
The size: 3 in v1alpha1 is converted into replicaSize: 3 in v1beta1.
Again, do not forget to clear the kubectl cache if you still see `cache.example.com/v1alpha1` in the apiVersion. The command is:
rm -rf ~/.kube/cache/*
The webhook wants to mess with me? Too young! Too simple! Sometimes naive!
This is how you create a conversion webhook with operator-sdk to convert resources among different versions, and how you migrate existing resources from an old version to a new one.
Follow Vincent, (and) you won’t derail!