Knative journey EP3: development environment of Knative with IBM Cloud
I hope you’ve enjoyed the public domain name provided by IBM Cloud and the basic Knative examples. If you don’t know what I am talking about, please refer to EP1 and EP2. Kubernetes clusters in IBM Cloud support Istio and Knative as add-ons. After you walked through the previous two articles, both the Istio and Knative add-ons were enabled. To proceed, you need to disable the Knative add-on while keeping the Istio add-on.
First, disable the Knative add-on in Kubernetes cluster:
ibmcloud ks cluster-addon-disable knative <cluster_name>
The value <cluster_name> is the name of the cluster you created in IBM Cloud. The Knative add-on is now removed, while the Istio add-on remains in place. Let’s take Knative/serving as an example of setting up a development environment for Knative.
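Before moving on, you may want to confirm which add-ons are still enabled. The listing command (`ibmcloud ks cluster-addons <cluster_name>`) and its output format are assumptions based on the IBM Cloud CLI of this generation; the small parsing helper below just checks whether a named add-on appears in that listing:

```shell
# addon_listed: succeed if the named add-on appears at the start of a line
# in the add-on listing piped to stdin, e.g.:
#   ibmcloud ks cluster-addons <cluster_name> | addon_listed istio
# (the listing command itself is an assumption about the IBM Cloud CLI)
addon_listed() {
  local addon="$1"
  grep -Eq "^${addon}([[:space:]]|$)"
}
```

After disabling, `addon_listed knative` should fail while `addon_listed istio` still succeeds.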
1. Setting up the development environment for Knative serving:
Download the source code of Knative/serving. You may run the git clone command:
git clone git@github.com:knative/serving.git
Go to the home directory of this project, and run the command to deploy the cert-manager CRDs:
kubectl apply -f ./third_party/cert-manager-0.6.1/cert-manager-crds.yaml
while [[ $(kubectl get crd certificates.certmanager.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Established")].status}') != 'True' ]]; do
  echo "Waiting on Cert-Manager CRDs"; sleep 1
done
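The wait loop above can be wrapped into a small reusable helper, so the same pattern works for any CRD you deploy later (the function name is mine, not part of the Knative scripts):

```shell
# wait_for_crd: block until the given CRD reports Established=True,
# polling once per second (same pattern as the cert-manager loop above)
wait_for_crd() {
  local crd="$1"
  while [[ $(kubectl get crd "$crd" \
      -o jsonpath='{.status.conditions[?(@.type=="Established")].status}') != 'True' ]]; do
    echo "Waiting on ${crd}"; sleep 1
  done
}
```

For example: `wait_for_crd certificates.certmanager.k8s.io`.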
Then deploy cert-manager:
kubectl apply -f ./third_party/cert-manager-0.6.1/cert-manager.yaml
Please follow the section “Setup your environment” of the Knative serving project to install the dependencies. The most important step is to configure several environment variables:
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='docker.io/{DOCKER_HUB_USERNAME}'
The argument {DOCKER_HUB_USERNAME} is your Docker Hub username. If you are using macOS, you can add the above lines to the file ~/.bash_profile and run “source ~/.bash_profile” to export all the variables.
If you are using Linux, add them to “~/.bashrc” instead.
Next, configure the network for Knative serving. You need to edit config-domain.yaml under the config directory with the contents below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
  labels:
    serving.knative.dev/release: devel
data:
  <ingress domain name>: ""
Where can you find the <ingress domain name>? Go to the overview page of your Kubernetes cluster in IBM Cloud. You can find it at “Ingress subdomain”. Here is the domain name for my cluster:
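If you prefer the CLI over the console, the same value can be pulled out of the `ibmcloud ks cluster get` output. The field label (“Ingress Subdomain”) is an assumption about the CLI’s output format, so double-check it against the console:

```shell
# ingress_subdomain: extract the value of the "Ingress Subdomain" line
# from `ibmcloud ks cluster get --cluster <cluster_name>` output on stdin
# (the field label is an assumption about the CLI output format)
ingress_subdomain() {
  awk -F':[[:space:]]*' '/^Ingress Subdomain/ {print $2}'
}
```

Usage: `ibmcloud ks cluster get --cluster <cluster_name> | ingress_subdomain`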
Run the following command to install Knative serving based on your local source code:
ko apply -f config/
You can verify that all the Knative serving pods are running:
kubectl get pods -n knative-serving
Alternatively, if you installed Knative with the Knative operator, you can configure the config-domain ConfigMap with the following CR:
apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    domain:
      <ingress domain name>: ""
Last but not least, you need to configure your ingress, so that Istio knows the ingress subdomain can be used to access all the services. Create a file called forward-ingress.yaml with the contents:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: iks-knative-ingress
  namespace: istio-system
  annotations:
    ingress.bluemix.net/upstream-fail-timeout: "serviceName=knative-ingressgateway fail-timeout=30"
    ingress.bluemix.net/upstream-max-fails: "serviceName=knative-ingressgateway max-fails=0"
    ingress.bluemix.net/client-max-body-size: "size=200m"
spec:
  rules:
    - host: "*.<ingress domain name>"
      http:
        paths:
          - path: /
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
Again, replace <ingress domain name> with the real subdomain name of your cluster. As you can see, we enable wildcard support, so that each newly deployed service can be accessed via <service name>-<namespace>.<ingress domain name>. Run the command to make it effective:
kubectl apply -f forward-ingress.yaml
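To spell out the naming scheme: a service’s external URL is derived from the service name, its namespace, and the ingress subdomain. A tiny helper makes that explicit (the function is purely illustrative, not part of any tool):

```shell
# knative_url: build the external URL for a Knative service, following
# the <service>-<namespace>.<ingress domain> scheme enabled by the
# wildcard ingress rule above
knative_url() {
  local service="$1" namespace="$2" domain="$3"
  echo "http://${service}-${namespace}.${domain}"
}
```

For instance, a service `helloworld-go` in namespace `default` would be reachable at `$(knative_url helloworld-go default <ingress domain name>)`.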
Furthermore, if you want to enable TLS support, apply the following content for the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: iks-knative-ingress
  namespace: istio-system
  annotations:
    ingress.bluemix.net/upstream-fail-timeout: "serviceName=knative-ingressgateway fail-timeout=30"
    ingress.bluemix.net/upstream-max-fails: "serviceName=knative-ingressgateway max-fails=0"
    ingress.bluemix.net/client-max-body-size: "size=200m"
spec:
  rules:
    - host: "*.<ingress domain name>"
      http:
        paths:
          - path: /
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
  tls:
    - hosts:
        - "*.<ingress domain name>"
      secretName: <secretName>
You can find your <secretName> with the following command:
ibmcloud ks cluster get --cluster <your cluster name>
Check the information in the Ingress Secret field; we will use it as the secret name.
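As with the subdomain, the secret name can be extracted from the same CLI output. Again, the field label (“Ingress Secret”) is an assumption about the CLI’s output format:

```shell
# ingress_secret: extract the value of the "Ingress Secret" line from
# `ibmcloud ks cluster get --cluster <your cluster name>` output on stdin
# (the field label is an assumption about the CLI output format)
ingress_secret() {
  awk -F':[[:space:]]*' '/^Ingress Secret/ {print $2}'
}
```

Usage: `ibmcloud ks cluster get --cluster <your cluster name> | ingress_secret`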
Many thanks to my colleague Doug Davis for helping me resolve the ingress domain name issue.
2. Setting up the development environment for Tektoncd pipeline:
Since the project tektoncd/pipeline inherits all the assets of knative/build, we will illustrate how tektoncd/pipeline is installed from your source code.
Download the source code at tektoncd/pipeline.
git clone git@github.com:tektoncd/pipeline.git
Go through the requirements. As we already have a Kubernetes cluster and have configured the local environment variables, we are ready to run the following command in the home directory of tektoncd/pipeline:
ko apply -f config/
By default, all the pipeline pods are running under the namespace “tekton-pipelines”.
If it succeeds, we can check the status of the pods by running:
kubectl get pods -n tekton-pipelines
The controller and webhook services should be up and running:
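If you would rather check programmatically than eyeball the pod list, a small filter over the `kubectl get pods` output counts how many pods are Running (the column positions assume the default kubectl table output):

```shell
# count_running: count pods whose STATUS column is "Running" in the
# default `kubectl get pods` table output on stdin, e.g.:
#   kubectl get pods -n tekton-pipelines | count_running
count_running() {
  awk 'NR > 1 && $3 == "Running" { n++ } END { print n + 0 }'
}
```

Alternatively, `kubectl wait --for=condition=Ready pod --all -n tekton-pipelines --timeout=300s` blocks until every pod in the namespace is Ready.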
If your local source code has been changed, you can run the following command to redeploy the controller:
ko apply -f config/controller.yaml
To redeploy the webhook:
ko apply -f config/webhook.yaml
For more information about iterating on this project, please refer to the section here.
3. Setting up the development environment for Knative eventing:
Download the source code of Knative/eventing:
git clone git@github.com:knative/eventing.git
Go to the home directory of Knative eventing, and run:
ko apply -f config/
All the pods will be installed under the namespace “knative-eventing”. Check them with the command below:
kubectl get pods -n knative-eventing
You should be able to see all the pods running as:
Three of the major Knative modules are now rooted on your local machine. For developers, it’s time to rock and roll.
Follow Vincent, (and) you won’t derail.