Knative journey EP6: configure Knative with Operators

Vincent Hou
10 min read · Mar 3, 2020


Continuing from the previous episode: it was exciting to install Knative with the Knative Operators, but it is even more fun to configure Knative with the single source of truth: the custom resources. We will dig into each Knative operand one by one.

Configure Knative Serving with Serving Operator:

You are able to configure Knative Serving with the following options:

  • All the ConfigMaps
  • Private repository and private secret
  • SSL certificate for controller
  • Knative ingress gateway
  • Cluster local gateway

Currently, Knative operators are NOT able to configure the following options for Knative:

  • The Kubernetes spec-level policies. We cannot specify where and how the resources are launched or retrieved, for example affinity, environment variables, HPA settings, image pull policy, annotations, replicas, probes, etc.
  • High availability. We cannot specify the number of replicas that Knative can scale up or down to.

All the ConfigMaps:

You are able to dynamically configure any ConfigMap defined in Knative Serving with the custom resource. The values in the custom resource overwrite what is in the existing ConfigMaps. In the released manifest of Knative Serving, there are multiple ConfigMaps, e.g. config-autoscaler, config-default, config-deployment, etc.

There is a straightforward rule for changing ConfigMaps through the CR: all the ConfigMaps are named with the prefix config-, i.e. config-<name>. We define a key named config under the section spec to host all the ConfigMaps, and then use the part after the - sign, <name>, as the key under spec.config, specifying exactly the same key-value pairs as in the data section of each ConfigMap.
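As a minimal sketch of this rule, take a hypothetical ConfigMap named config-foo with a single key bar; the same key-value pair moves under spec.config.foo in the operator CR:

```yaml
# Hypothetical ConfigMap shipped with Knative Serving:
#
#   apiVersion: v1
#   kind: ConfigMap
#   metadata:
#     name: config-foo
#     namespace: knative-serving
#   data:
#     bar: "baz"
#
# The same pair expressed through the operator CR, using "foo"
# (the part after "config-") as the key under spec.config:
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    foo:
      bar: "baz"
```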

Here is an example of how to set up a custom domain. As you can see, we need to change the content of the ConfigMap config-domain in this example into the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  example.org: |
    selector:
      app: prod
  example.com: ""

Let’s still use example.org and example.com as the domain names. Instead of saving the above content into a file and issuing a kubectl apply command, we can change the content of the operator CR into the following:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    domain:
      example.org: |
        selector:
          app: prod
      example.com: ""

Next, apply the CR by running the kubectl apply command against the above content.

If you want to configure one more ConfigMap, e.g. config-autoscaler, by setting stable-window to 60s, continue to edit your operator CR into:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    domain:
      example.org: |
        selector:
          app: prod
      example.com: ""
    autoscaler:
      stable-window: "60s"

Then, save the content in a file, and issue the kubectl apply -f command.
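For example, assuming you saved the CR above in a file named knative-serving-cr.yaml (a hypothetical name), applying it and verifying the result could look like this:

```shell
# Apply the updated operator CR:
kubectl apply -f knative-serving-cr.yaml

# Verify that the operator propagated the values into the ConfigMaps:
kubectl get configmap config-domain -n knative-serving -o yaml
kubectl get configmap config-autoscaler -n knative-serving -o yaml
```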

All the ConfigMaps are defined under the same namespace as the operator CR. Instead of going through all the ConfigMaps one by one, we can use the operator CR as the unique entry point to edit all of them.

Private repository and private secrets:

As of the latest released manifest of Knative Serving, there are six deployments: activator, autoscaler, controller, webhook, autoscaler-hpa and networking-istio, under apps/v1, and one image resource: queue-proxy, under caching.internal.knative.dev/v1alpha1. The images are downloaded from the links specified in the spec.image section of each resource. The Knative Serving Operator provides us a way to download the images for these deployments and the cache image from private repositories.

Under the section spec of the operator CR, we can open a section called registry to contain all the keys defining the information about the private registry:

  • default: this key expects a string value, used to define the image reference template for all Knative images. The format is example-registry.io/custom/path/${NAME}:custom-tag. This works when all your private images are saved in the same repository with the same tag, and the only difference between them is the image name. Please note that ${NAME} must be kept verbatim in the value you define, because it is a predefined variable that the operator substitutes. If you name the images after the deployment names (activator, autoscaler, controller, webhook, autoscaler-hpa and networking-istio) and name the cache image queue-proxy, you do not need any further configuration in the next section, override, because the operator automatically replaces ${NAME} with the corresponding deployment name.
  • override: this key expects a map from a container or image name to the full image location of the individual Knative image, on a one-to-one basis. We usually configure this section when the image links of the deployments or images do not share a common format. This key is usually used as an alternative to default.
  • imagePullSecrets: this key defines a list of secrets to be used when pulling the Knative images. The secrets must be created in the same namespace as the Knative Serving deployments. You do not need to define any secret here if your images are publicly available. Configuring this field is equivalent to deploying images from a private container registry.

The instructions above may sound complicated on their own, so we will use examples to better illustrate how to define the keys for your private image links and private secrets.

Example 1 — download images in a predefined format without secrets:

Suppose you use the custom tag v0.13.0 for all your images, all the image links are accessible without secrets, and they follow the expected format docker.io/knative-images/${NAME}:v0.13.0. You then need to define your operator CR with the following content:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    default: docker.io/knative-images/${NAME}:v0.13.0

That said, make sure your images are saved at the following links:

  • Image of activator: docker.io/knative-images/activator:v0.13.0.
  • Image of autoscaler: docker.io/knative-images/autoscaler:v0.13.0.
  • Image of controller: docker.io/knative-images/controller:v0.13.0.
  • Image of webhook: docker.io/knative-images/webhook:v0.13.0.
  • Image of autoscaler-hpa: docker.io/knative-images/autoscaler-hpa:v0.13.0.
  • Image of networking-istio: docker.io/knative-images/networking-istio:v0.13.0.
  • Image of the cache image queue-proxy: docker.io/knative-images/queue-proxy:v0.13.0.

This is the simplest way to define your custom image links. Be aware that with this approach, all the images in Knative Serving are replaced with your own images, so you must provide every one of them.

Example 2 — download images individually without secrets:

If the images are not saved in a uniform format, we need to define them on a one-to-one basis.

Suppose we would like to download the images as below:

  • Image of activator: docker.io/knative-images-repo1/activator:v0.13.0.
  • Image of autoscaler: docker.io/knative-images-repo2/autoscaler:v0.13.0.
  • Image of controller: docker.io/knative-images-repo3/controller:v0.13.0.
  • Image of webhook: docker.io/knative-images-repo4/webhook:v0.13.0.
  • Image of autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0.
  • Image of networking-istio: docker.io/knative-images-repo6/prefix-networking-istio:v0.13.0.
  • Image of the cache image queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0.

Our operator CR needs to be revised as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    override:
      activator: docker.io/knative-images-repo1/activator:v0.13.0
      autoscaler: docker.io/knative-images-repo2/autoscaler:v0.13.0
      controller: docker.io/knative-images-repo3/controller:v0.13.0
      webhook: docker.io/knative-images-repo4/webhook:v0.13.0
      autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0
      networking-istio: docker.io/knative-images-repo6/prefix-networking-istio:v0.13.0
      queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0

This is how we map the image links to the container or image names on a one-to-one basis. Be aware that with this approach you do not need to replace all the images for the deployments and the cache image; you only need to define the ones you want to replace.

Example 3— download images with secrets:

No matter whether you use the key default or the key override to define where to download your images, when you need to access your images with private secrets, you need to append a section called imagePullSecrets under spec.registry.

The secret we are about to use is called regcred. First, we need to create this secret in the same namespace as your Knative Serving resources and your operator CR. There are several ways to create such a secret.
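One common way to create such a secret (a sketch; the server, username, password, and email values are placeholders you must fill in) is the kubectl create secret docker-registry command:

```shell
# Create a docker-registry secret named "regcred" in the knative-serving
# namespace; replace the placeholders with your registry credentials:
kubectl create secret docker-registry regcred \
  --namespace knative-serving \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```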

It is your responsibility to create the private secrets used to access your images. After you create the secret, edit your operator CR by appending the imagePullSecrets section:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    ...
    imagePullSecrets:
      - name: regcred

If you need to add another secret called regcred-2, add it as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    ...
    imagePullSecrets:
      - name: regcred
      - name: regcred-2

SSL certificate for controller:

Knative Serving needs to access the container registry when the tag-to-digest resolution feature is enabled. The Serving operator CR allows you to specify either a custom ConfigMap or a Secret holding a self-signed certificate for the deployment called controller. It enables the controller to trust registries with self-signed certificates.

Under the section spec of the operator CR, you can create a section called controller-custom-certs to contain the fields that define the certificate:

  • name: this field is used to specify the name of the ConfigMap or the Secret.
  • type: the value for this field can be either ConfigMap or Secret, indicating the type for the name.

If you create a ConfigMap named testCert as the certificate, you need to change your CR into:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  controller-custom-certs:
    name: testCert
    type: ConfigMap

This makes sure the custom certificate is mounted as a volume into the containers launched by the deployment controller, and that the environment variable SSL_CERT_DIR is set correctly.
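To double-check the result, you can inspect the controller deployment (a sketch; assumes the default knative-serving namespace):

```shell
# Check that SSL_CERT_DIR is set on the controller container:
kubectl get deployment controller -n knative-serving \
  -o jsonpath='{.spec.template.spec.containers[0].env}'

# Inspect the volumes to confirm the certificate is mounted:
kubectl get deployment controller -n knative-serving \
  -o jsonpath='{.spec.template.spec.volumes}'
```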

Configuration of Knative ingress gateway:

To set up a custom ingress gateway, follow “Step 1: Create Gateway Service and Deployment Instance” here.

Step 2: Update the Knative gateway

We use the field knative-ingress-gateway to override the gateway knative-ingress-gateway. Only the field selector is supported to define the selector for the ingress gateway.

Instead of updating the gateway directly, we modify the operator CR as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  knative-ingress-gateway:
    selector:
      custom: ingressgateway

Step 3: Update Gateway Configmap

As we explained, all ConfigMaps can be edited by editing the operator CR. For this example, append the config section for config-istio to your operator CR:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  knative-ingress-gateway:
    selector:
      custom: ingressgateway
  config:
    istio:
      gateway.knative-serving.knative-ingress-gateway: "custom-ingressgateway.istio-system.svc.cluster.local"

The key in spec.config.istio is in the format of gateway.{{gateway_namespace}}.{{gateway_name}}.

Configuration of cluster local gateway:

We use the field cluster-local-gateway to override the gateway cluster-local-gateway. Only the field selector is supported to define the selector for the local gateway.

Default local gateway name:

Go through the guide here to use the cluster local gateway.

After following the above step, your service and deployment for the local gateway are both named `cluster-local-gateway`. You only need to configure the operator CR as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  cluster-local-gateway:
    selector:
      istio: cluster-local-gateway

You can even skip the above change, since there is a gateway called cluster-local-gateway which has `istio: cluster-local-gateway` as its default selector. If the operator CR does not define the section cluster-local-gateway, the default selector `istio: cluster-local-gateway` of the gateway cluster-local-gateway will be used.

Non-default local gateway name:

If you create a custom service and deployment for the local gateway with a name other than `cluster-local-gateway`, you need to update the gateway ConfigMap `config-istio` under the Knative Serving namespace, and change the selector for the gateway cluster-local-gateway.

If you name both the service and the deployment `custom-local-gateway` in the namespace `istio-system`, with the label `custom: custom-local-gateway`, the operator CR should look like:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  cluster-local-gateway:
    selector:
      custom: custom-local-gateway
  config:
    istio:
      local-gateway.knative-serving.cluster-local-gateway: "custom-local-gateway.istio-system.svc.cluster.local"

Configure Knative Eventing with Eventing Operator:

You are only able to configure Knative Eventing with the following option:

  • Private repository and private secret

We configure the Eventing operator CR the same way as the Serving operator CR, with the same fields. Refer to the section “Private repository and private secrets” for Serving for detailed instructions. The difference is that the Knative Eventing Operator only allows us to customize the images of the deployments eventing-controller, eventing-webhook, imc-controller, imc-dispatcher, and broker-controller. You need to use these names as the image names in your repository, or map your image links on a one-to-one basis.

Example 1 — download images in a predefined format without secrets:

Suppose you use the custom tag v0.13.0 for all your images, all the image links are accessible without secrets, and they follow the expected format docker.io/knative-images/${NAME}:v0.13.0. You then need to define your operator CR with the following content:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  registry:
    default: docker.io/knative-images/${NAME}:v0.13.0

That said, make sure your images are saved at the following links:

  • Image of eventing-controller: docker.io/knative-images/eventing-controller:v0.13.0.
  • Image of eventing-webhook: docker.io/knative-images/eventing-webhook:v0.13.0.
  • Image of imc-controller: docker.io/knative-images/imc-controller:v0.13.0.
  • Image of imc-dispatcher: docker.io/knative-images/imc-dispatcher:v0.13.0.
  • Image of broker-controller: docker.io/knative-images/broker-controller:v0.13.0.

This is the simplest way to define your custom image links. Be aware that with this approach, all the images in Knative Eventing are replaced with your own images, so you must provide every one of them.

Example 2 — download images individually without secrets:

If the images are not saved in a uniform format, we need to define them on a one-to-one basis.

Suppose we would like to download the images as below:

  • Image of eventing-controller: docker.io/knative-images-repo1/eventing-controller:v0.13.0.
  • Image of eventing-webhook: docker.io/knative-images-repo2/eventing-webhook:v0.13.0.
  • Image of imc-controller: docker.io/knative-images-repo3/imc-controller:v0.13.0.
  • Image of imc-dispatcher: docker.io/knative-images-repo4/imc-dispatcher:v0.13.0.
  • Image of broker-controller: docker.io/knative-images-repo5/broker-controller:v0.13.0.

Our operator CR needs to be revised as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  registry:
    override:
      eventing-controller: docker.io/knative-images-repo1/eventing-controller:v0.13.0
      eventing-webhook: docker.io/knative-images-repo2/eventing-webhook:v0.13.0
      imc-controller: docker.io/knative-images-repo3/imc-controller:v0.13.0
      imc-dispatcher: docker.io/knative-images-repo4/imc-dispatcher:v0.13.0
      broker-controller: docker.io/knative-images-repo5/broker-controller:v0.13.0

This is how we map the image links to the container or image names on a one-to-one basis. Be aware that with this approach you do not need to replace all the images for the deployments; you only need to define the ones you want to replace.

Example 3 — download images with secrets:

No matter whether you use the key default or the key override to define where to download your images, when you need to access your images with private secrets, you need to append a section called imagePullSecrets under spec.registry.

The secret we are about to use is called regcred. It is your responsibility to create the private secrets used to access your images. After you create the secret, edit your operator CR by appending the imagePullSecrets section:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  registry:
    ...
    imagePullSecrets:
      - name: regcred

If you need to add another secret called regcred-2, add it as below:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  registry:
    ...
    imagePullSecrets:
      - name: regcred
      - name: regcred-2


Vincent Hou

A Chinese software engineer who studied in Belgium and currently works in the US as Knative & Tekton Operator Lead and an Istio Operator contributor.