Monitoring Component Pack / Kubernetes with Prometheus – Grafana

You have successfully installed HCL Component Pack for HCL Connections. Now you want to know what is happening inside that black box?
The goal is to see something like this:

Steps to reproduce:
– Update Kubernetes to 1.18.12
– Install Helm 3
– Install nfs-client
– Install metrics-server
– Install Traefik
– Install Prometheus

Disclaimer: use the following at your own risk.

Update Kubernetes to 1.18 because it is now supported for Component Pack. If you are still on 1.11, you will be surprised after the update to 1.16: CoreDNS will no longer start. CoreDNS removed the proxy plugin, so you need to switch to the forward plugin:
kubectl -n kube-system edit cm coredns
change ‘proxy . /etc/resolv.conf’ to ‘forward . /etc/resolv.conf’.
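For reference, the relevant part of the ConfigMap looks roughly like this after the change (a sketch of a typical default Corefile, yours may contain additional plugins):

data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # previously: proxy . /etc/resolv.conf
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }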
Going to 1.19 has not been an option. Some of the Helm charts provide incompatible Ingresses or Services due to the API changes between 1.18 and 1.19.

Installing Helm 3
At the moment Component Pack still requires Helm 2, but both versions can be used in parallel.
Before installing Helm 3:
mv /usr/local/bin/helm /usr/local/bin/helm2
and after installing Helm 3:
mv /usr/local/bin/helm /usr/local/bin/helm3
Now my environment knows helm2 and helm3. It is possible to migrate from Helm 2 to Helm 3, but I wanted an emergency option to be able to re-install Component Pack if needed. helm2 list shows the Component Pack installs; helm3 list will show all the new stuff.
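Put together, the sequence looks roughly like this (the install script is just one way to get Helm 3 and the URL is an assumption; any install method works):

# keep the existing Helm 2 binary for Component Pack re-installs
mv /usr/local/bin/helm /usr/local/bin/helm2
# install Helm 3 via the official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
# rename the new binary so both versions can coexist
mv /usr/local/bin/helm /usr/local/bin/helm3
# sanity check
helm2 version
helm3 version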

nfs-client:
I'm lazy. I already have a working NFS server in my environment, and I don't want to handle every single PV/PVC manually. That's why I use the nfs-client provisioner. I cloned the repo, updated deploy/deployment.yaml with the values for my NFS server, and applied the three YAML files:
kubectl apply -f deploy/class.yaml
kubectl apply -f deploy/deployment.yaml
kubectl apply -f deploy/rbac.yaml
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

The last line defines the new storage class as default.
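For reference, the values I changed in deploy/deployment.yaml before applying it are the NFS server address and the export path (the provisioner name is the repo default; server and path below are placeholders for my lab):

          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.10          # placeholder: your NFS server
            - name: NFS_PATH
              value: /srv/nfs/kubernetes   # placeholder: your NFS export
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.10           # must match NFS_SERVER
            path: /srv/nfs/kubernetes      # must match NFS_PATH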

metrics:
Get the YAML and add --kubelet-insecure-tls to the args.
I was not able to get the metrics pod up without this. Proper certificates with all the right SANs would probably help, but as this is only my lab, I don't care.


wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml


- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --kubelet-insecure-tls
kubectl apply -f components.yaml

Now you should see the metrics-server pod running in the kube-system namespace.
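A quick way to verify that the metrics pipeline works:

kubectl -n kube-system get pods | grep metrics-server
kubectl top nodes
kubectl top pods -n kube-system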

Traefik:
Add the Traefik Helm repo and run helm3 show values traefik/traefik > values.yml to get the variables from the Helm chart.
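Adding the repo and installing the release is straightforward; this is a sketch (the repo URL is the official Traefik chart repository, the release name and the default namespace are my choices):

helm3 repo add traefik https://helm.traefik.io/traefik
helm3 repo update
# edit values.yml as needed, then:
helm3 install traefik traefik/traefik -f values.yml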

To prevent conflicts between the different ingress controllers I added this IngressClass.
Save it as ic.yaml and run kubectl apply -f ic.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: traefik-lb
spec:
  controller: traefik.io/ingress-controller

There are two ingress controllers on my system now: Traefik and cnx-ingress.

I defined the listening ports for web and websecure as 31080 and 31443 with
kubectl edit svc traefik.
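The relevant part of the Service then looks roughly like this (a sketch: the port names come from the chart defaults, the NodePort service type and the port numbers are my choices):

spec:
  type: NodePort
  ports:
  - name: web
    port: 80
    targetPort: web
    nodePort: 31080
  - name: websecure
    port: 443
    targetPort: websecure
    nodePort: 31443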

There's also a dashboard available for Traefik.

Prometheus/Grafana
There are a lot of tutorials out there on how to install and configure Prometheus.

So the compressed version:


helm3 repo add stable https://charts.helm.sh/stable
helm3 repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm3 repo update
helm3 show values prometheus-community/kube-prometheus-stack > values.yaml

Edit the values.yaml file:
– enable the ingress creation, add the ingressClassName: traefik-lb and define the hostname (see the ingress snippet after the storage example below)
– assign the volumeClaimTemplates the storage class, managed-nfs-storage in my case

storage:
  volumeClaimTemplate:
    spec:
      storageClassName: managed-nfs-storage
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
    # selector: {}
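And for the first point, the ingress part of the Grafana section in values.yaml looks roughly like this (key names from the bundled Grafana sub-chart, the hostname is mine; the ingressClassName ends up on the generated Ingress object as shown further down):

grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.ume.li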


kubectl create namespace monitoring
helm3 install prometheus prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml
helm3 install metrics-adapter prometheus-community/prometheus-adapter -n monitoring
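After a few minutes the stack should be up; a quick check:

kubectl -n monitoring get pods
kubectl -n monitoring get ingress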


Update the Grafana ingress and add the ingressClassName:
kubectl -n monitoring edit ingress prometheus-grafana

spec:
  ingressClassName: traefik-lb
  rules:
  - host: grafana.ume.li

If you are using more than one node, make sure that the appropriate network ports (9000/9100 TCP) are open between the nodes.
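On nodes running firewalld that would look something like this (a sketch, assuming firewalld; adjust to whatever firewall you use):

firewall-cmd --permanent --add-port=9000/tcp
firewall-cmd --permanent --add-port=9100/tcp
firewall-cmd --reload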

The next step is to create the routes on your front proxy/load balancer so that the site is available under a nice URL.
The embedded Grafana already has Prometheus configured as a data source. The only thing I added was this dashboard.
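Before wiring up the front proxy you can check that the ingress answers directly on the Traefik NodePort (the node name is a placeholder):

curl -H "Host: grafana.ume.li" http://<k8s-node>:31080/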