Access your index – Elasticsearch queries

HCL Component Pack is fully set up now and you're wondering what's going on in that mysterious Elasticsearch part? In CP 6.5 you were able to access the Elasticsearch data directly through the Kibana pod. With CP7 this seems to be gone.

If you really want/need to access the Elasticsearch environment, you may use the following path at your own risk.

Step 1: get the necessary certificates

#!/bin/sh

echo "check directory"
cwd=$(pwd)
if [ ! -d "$cwd/certs" ]; then
  mkdir -p "$cwd/certs"
fi
# secret name for CP 7; for CP 6.5 use 'elasticsearch-secret' instead
version='elasticsearch-7-secret'
#version='elasticsearch-secret'
echo "extract certificates"
if [ ! -f "$cwd/certs/elasticsearch-healthcheck.crt.pem" ]; then
	kubectl view-secret -n connections $version elasticsearch-healthcheck.crt.pem > "$cwd/certs/elasticsearch-healthcheck.crt.pem"
fi
if [ ! -f "$cwd/certs/elasticsearch-http.crt.pem" ]; then
	kubectl view-secret -n connections $version elasticsearch-http.crt.pem > "$cwd/certs/elasticsearch-http.crt.pem"
fi
if [ ! -f "$cwd/certs/elasticsearch-healthcheck.des3.key" ]; then
	kubectl view-secret -n connections $version elasticsearch-healthcheck.des3.key > "$cwd/certs/elasticsearch-healthcheck.des3.key"
fi

echo "done"

This script requires the view-secret plugin for kubectl, installed via the krew plugin manager.
Run it on your master or on a client that has kubectl and krew installed to extract the secrets.
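If the plugin is missing, it can be added via krew like this (assuming krew itself is already installed as described in its documentation):

kubectl krew install view-secret
# quick sanity check that the plugin responds
kubectl view-secret --help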

Then you could use the following script as an example:

#!/bin/bash
# A utility script so that you can interact with ES as suggested on the official site.
# For anything more complicated, please refer to the official documentation:
# https://www.elastic.co/


set -o errexit
set -o pipefail
set -o nounset

# the directory where all the certs are placed
# change this to your cert directory.
cert_dir=./certs
HOST=`hostname`
URL_base="https://localhost:30098"
#CP6.5:
#URL_base="https://localhost:30099"
#CP6.5: KEY_PASS=`kubectl view-secret -n connections elasticsearch-secret elasticsearch-key-password.txt`

KEY_PASS=`kubectl view-secret -n connections elasticsearch-7-secret elasticsearch-key-password.txt`

echo "[$KEY_PASS]"
if [ "${1:-}" = "" ] || [ "${2:-}" = "" ]; then
  echo "usage: sendRequest.sh   param_method param_url [additional param]"
  echo "Request is send to es client(coordinating node) by default. add param"
  echo "to send request to es master node"
  echo "refer: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html"
  exit 107
fi

# save the HTTP method.
_method=$1
# and then shift this argument,
# since we need to pass the rest of the args to curl unchanged.
shift 1

# turn off globbing so an asterisk in the URL is not expanded against the current directory
set -f
# please ensure those
#   cert, password, key, cacert
# are at the right location.
response_text=$(curl \
   --insecure \
   --cert $cert_dir/elasticsearch-healthcheck.crt.pem:${KEY_PASS} \
   --key  $cert_dir/elasticsearch-healthcheck.des3.key \
   --cacert $cert_dir/elasticsearch-http.crt.pem \
   -X${_method} \
   ${URL_base}"$@")

# echo the response back to the caller.
echo "${response_text}"
# turn globbing back on
set +f

Now you can use it like this:

./sendRequest.sh GET /_cat/indices > indices.txt
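Some more calls that come in handy; the endpoints are standard Elasticsearch APIs, nothing Component Pack specific:

# cluster health overview
./sendRequest.sh GET /_cluster/health?pretty
# list the nodes and their roles
./sendRequest.sh GET /_cat/nodes?v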

Disclaimer: use at your own risk

Connections Component Pack Quick check

I had some issues with my Component Pack pods and DNS resolution.
The following command runs a ping to my internal IHS host (the interservice URL) on every pod:


kubectl get pods -n connections -o name | xargs -I{} kubectl -n connections exec {} -- sh -c "echo {} && ping ihs.interservice.url -c 1" | tee ping.log

Most of the Connections pods display the IP. Some do actually ping, but on most pods ping itself is prohibited.
Another option is to use nslookup:


kubectl get pods -n connections -o name | xargs -I{} kubectl -n connections exec {} -- sh -c "echo {} && nslookup ihs.interservice.url " | tee nslookup.log
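To quickly spot the pods where the lookup failed, grepping the log is enough; the exact error string depends on the nslookup variant inside the image, so adjust the pattern as needed:

# busybox nslookup prints "can't resolve", bind-utils prints "NXDOMAIN" or "server can't find"
grep -B1 -iE "can't resolve|NXDOMAIN|server can't find" nslookup.log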

HCL Connections 7 – update

We updated our productive HCL Connections environment last week from 6.5 CR1 to 7. This time we decided to do an in-place update.

Upgrade WAS to 8.5.5.18 on the command line:

cd /opt/ibm/InstallationManager/eclipse/tools
./imcl install com.ibm.websphere.ND.v85_8.5.5018.20200910_1821 -repositories /data/install/unpacked/wasfp18/ -acceptLicense
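Afterwards you can verify the result with imcl as well (listInstalledPackages is a standard Installation Manager command):

cd /opt/ibm/InstallationManager/eclipse/tools
./imcl listInstalledPackages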

The Connections update itself did not provide any surprises.
The database wizard was a bit creepy, as it first wanted to do a db update from 2.5 to 7. After switching to the db2inst1 user, it still wanted an update from 3 to 7 for the Wikis and Files databases. So we decided to go through the wizard's connections.sql directory and run any upgrade-XY-70.sql scripts ourselves; there's only one, for Homepage.
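For reference, running such a script by hand looks roughly like this; the path is only a placeholder, use the actual upgrade-XY-70.sql location from your wizard package:

# as the DB2 instance owner; HOMEPAGE is the default Homepage database name, adjust if yours differs
su - db2inst1
db2 connect to HOMEPAGE
db2 -tvf /path/to/Wizards/connections.sql/homepage/db2/upgrade-XY-70.sql
db2 connect reset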

Export PDF: there's a post-install step, hidden deep inside the help, on how to install wkhtmltopdf.

Upgrading the Component Pack to 7 required some steps.
I decided to upgrade our Kubernetes environment to 1.18.latest first, then install Helm 3 and migrate everything from Helm 2 using the helm 2to3 plugin.

What am I missing at the moment? All my previous Metrics data. Elasticsearch got updated from 5.5 to 7, which does not automatically migrate the Elasticsearch indices, and I have not found a way to update these properly yet.

The next step was to adapt all the *.yml files from the samples to my environment.
After applying all the new helm builds, all the pods updated. Nothing worked. All the pods were up, but neither /appreg nor /social worked.

I only got a 404 from nginx.

After some debugging with krew’s tail plugin, it was clear that I had some issues with the ingress definitions.
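If you don't have the tail plugin at hand, plain kubectl works too; the label selector below is an assumption about how the cnx-ingress controller pods are labelled, so adjust it to your deployment:

# follow the nginx ingress controller logs
kubectl -n connections logs -f -l app.kubernetes.io/name=ingress-nginx --tail=50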

It looks like the ingress controller has some issues with the regex and wildcard definitions.
To fix this issue I added these two ingress definitions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: my-ingress-appreg
  namespace: connections
spec:
  rules:
  - host: 'connections.belsoft.ch'
    http:
      paths:
      - backend:
          serviceName: appregistry-client
          servicePort: 7000
        path: /appreg
        pathType: ImplementationSpecific
      - backend:
          serviceName: te-creation-wizard
          servicePort: 8080
        path: /te-creation-wizard
        pathType: ImplementationSpecific
      - backend:
          serviceName: community-template-service
          servicePort: 3000
        path: /comm-template
        pathType: ImplementationSpecific
      - backend:
          serviceName: admin-portal
          servicePort: 8080
        path: /cnxadmin
        pathType: ImplementationSpecific
      - backend:
          serviceName: sanity
          servicePort: 3000
        path: /sanity
        pathType: ImplementationSpecific

and

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-ingress-appregistry
  namespace: connections
spec:
  rules:
  - host: 'connections.belsoft.ch'
    http:
      paths:
      - backend:
          serviceName: appregistry-service
          servicePort: 3000
        path: /appregistry
        pathType: ImplementationSpecific
      - backend:
          serviceName: orient-web-client
          servicePort: 8000
        path: /social
        pathType: ImplementationSpecific
      - backend:
          serviceName: itm-services
          servicePort: 3000
        path: /itm
        pathType: ImplementationSpecific
      - backend:
          serviceName: community-suggestions
          servicePort: 3000
        path: /community_suggestions/api/recommend/communities
        pathType: ImplementationSpecific
      - backend:
          serviceName: connections-ui-poc
          servicePort: 3000
        path: /connections-ui
        pathType: ImplementationSpecific
      - backend:
          serviceName: teams-presence-service
          servicePort: 3000
        path: /teams-presence
        pathType: ImplementationSpecific
      - backend:
          serviceName: teams-share-service
          servicePort: 3000
        path: /teams-share-service
        pathType: ImplementationSpecific
      - backend:
          serviceName: teams-share-ui
          servicePort: 3000
        path: /teams-share-ui
        pathType: ImplementationSpecific
      - backend:
          serviceName: teams-tab-api
          servicePort: 3000
        path: /teams-tab/api
        pathType: ImplementationSpecific
      - backend:
          serviceName: teams-tab-ui
          servicePort: 8080
        path: /teams-tab
        pathType: ImplementationSpecific
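Saved as two files and applied like this (the filenames are arbitrary):

kubectl apply -f my-ingress-appreg.yaml
kubectl apply -f my-ingress-appregistry.yaml
kubectl -n connections get ingress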

Monitoring Component Pack / Kubernetes with Prometheus – Grafana

You have successfully installed HCL Component Pack for HCL Connections. Now you want to know what is happening inside that black box?
The goal is to see something like this:

Steps to reproduce:
- Update Kubernetes to 1.18.12
- Install Helm 3
- Install nfs-client
- Install metrics
- Install Traefik
- Install Prometheus

Disclaimer: use the following at your own risk.

Update Kubernetes to 1.18 because it is now supported for Component Pack. If you are still on 1.11 you will be surprised after the update to 1.16: CoreDNS will no longer start. CoreDNS removed the proxy plugin, so you need to switch to the forward plugin (see the snippet below):
kubectl -n kube-system edit cm coredns
and change 'proxy . /etc/resolv.conf' to 'forward . /etc/resolv.conf'.
Going to 1.19 has not been an option: some of the helm charts provide incompatible ingresses or services due to the API changes between 1.18 and 1.19.
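The relevant change in the Corefile inside that ConfigMap is just this one line:

# before:
    proxy . /etc/resolv.conf
# after:
    forward . /etc/resolv.conf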

Installing Helm 3
At the moment the Component Pack still requires Helm 2, but both versions can be used in parallel.
Before installing Helm 3:
mv /usr/local/bin/helm /usr/local/bin/helm2
and after installing Helm 3:
mv /usr/local/bin/helm /usr/local/bin/helm3
Now my environment knows helm2 and helm3. It is possible to migrate from Helm 2 to Helm 3, but I wanted an emergency option to be able to re-install the Component Pack if needed. helm2 list shows the Component Pack installs; helm3 list will show all the new stuff.
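If you do want to migrate instead of keeping both, the helm-2to3 plugin does it in a few steps (a sketch; try it on a non-production cluster first):

# install the 2to3 plugin into helm3
helm3 plugin install https://github.com/helm/helm-2to3
# migrate the helm2 configuration (repos, plugins)
helm3 2to3 move config
# convert a single release, e.g. one of the Component Pack charts
helm3 2to3 convert <release-name>
# once everything is converted, clean up the helm2 data
helm3 2to3 cleanup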

nfs-client:
I'm lazy. I already have a working NFS server in my environment and I don't want to handle every single PV/PVC manually. That's why I use the nfs-client provisioner. I cloned the repo, updated deploy/deployment.yaml with the values for my NFS server and applied the three yaml files:
kubectl apply -f deploy/class.yaml
kubectl apply -f deploy/deployment.yaml
kubectl apply -f deploy/rbac.yaml
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

The last line defines the new storage class as default.
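To verify that dynamic provisioning and the default class actually work, a throwaway PVC is enough (names are arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc nfs-test-claim   # should switch to Bound after a few seconds
kubectl delete pvc nfs-test-claim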

metrics:
Get the yaml and add --kubelet-insecure-tls to the args.
I was not able to get the metrics pod up without this. Proper certificates with all the right SANs would probably help, but as this is only my lab, I don't care.


wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml


- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --kubelet-insecure-tls

kubectl apply -f components.yaml

Now you should get the metrics pod in the kube-system namespace.
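A quick check that the metrics API is actually serving data:

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl top nodes
kubectl top pods -n connections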

Traefik:
helm3 show values traefik/traefik > values.yml gives you the variables from the helm chart.
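For completeness, the chart comes from the Traefik helm repo and the install itself looks roughly like this (release name and namespace are your choice):

helm3 repo add traefik https://helm.traefik.io/traefik
helm3 repo update
# after adjusting values.yml:
helm3 install traefik traefik/traefik -f values.yml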

In order to prevent conflicts between the different ingress controllers I added this IngressClass.
Save it as ic.yaml and apply it with kubectl apply -f ic.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: traefik-lb
spec:
  controller: traefik.io/ingress-controller

There are now two ingress controllers on my system: Traefik and cnx-ingress.

I defined the listening ports for web and websecure as 31080 and 31443 with
kubectl edit svc traefik.
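After the edit, the relevant part of the service looks roughly like this (only a sketch; port names and targetPorts depend on the chart defaults):

spec:
  type: NodePort
  ports:
  - name: web
    port: 80
    targetPort: web
    nodePort: 31080
  - name: websecure
    port: 443
    targetPort: websecure
    nodePort: 31443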

There’s also a dashboard available for traefik.

Prometheus/Grafana
There are a lot of tutorials out there on how to install and configure Prometheus.

So here is the compressed version:


helm3 repo add stable https://charts.helm.sh/stable
helm3 repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm3 repo update
helm3 show values prometheus-community/kube-prometheus-stack > values.yaml

Edit the values.yaml file (see the snippets below):
- enable the ingress creation, add the ingressClassName: traefik-lb and define the hostname
- assign the storage class to the volumeClaimTemplates, managed-nfs-storage in my case

storage:
  volumeClaimTemplate:
    spec:
      storageClassName: managed-nfs-storage
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
      # selector: {}
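For the ingress bullet, the Grafana part of values.yaml looks roughly like this (the exact keys depend on the chart version; as shown further down, I still added the ingressClassName to the generated ingress manually):

grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.ume.li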


kubectl create namespace monitoring
helm3 install prometheus prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml
helm3 install metrics-adapter prometheus-community/prometheus-adapter -n monitoring
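A quick check that everything in the monitoring namespace came up:

kubectl -n monitoring get pods
kubectl -n monitoring get ingress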


Update the Grafana ingress and add the ingressClassName:
kubectl -n monitoring edit ingress prometheus-grafana

spec:
  ingressClassName: traefik-lb
  rules:
  - host: grafana.ume.li

If you are using more than one node, make sure that the appropriate network ports (9000/9100 TCP) are open between the nodes.
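On CentOS/RHEL with firewalld (an assumption about your distribution) that would be something like this, run on every node:

firewall-cmd --permanent --add-port=9000/tcp
firewall-cmd --permanent --add-port=9100/tcp
firewall-cmd --reload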

The next step is to create the routes on your front proxy/load balancer so that the site is available under a nice URL.
The embedded Grafana already has Prometheus configured as a data source. The only thing I added was this dashboard.

HCL Connections Activities Plus

I ran into a small issue with Activities Plus: if you only want to install Activities Plus in your test Kubernetes cluster and you don't want to upload all the images, you will fail.

The support/setupImages.sh script has two errors in it:

  1. It does not recognize the parameter kudos-boards, so ./setupImages.sh -st kudos-boards …. will fail. You need to change the line 'starter_stack_options=' and add 'kudos-boards' to the list.
  2. If you only run setupImages.sh -st kudos-boards it will not push all the required images. You need to run it with at least -st customizer,kudos-boards
    or change the # Infra block in setupImages.sh to

            # Infra
            if [[ ${STARTER_STACKS} =~ "customizer" ]] || [[ ${STARTER_STACKS} =~ "orientme" ]] || [[ ${STARTER_STACKS} =~ "kudos-boards" ]]; then
                    arr=(haproxy.tar mongodb-rs-setup.tar mongodb.tar redis-sentinel.tar redis.tar appregistry-client.tar appregistry-service.tar)
                    for f in "${arr[@]}"; do
                            setup_image $f
                    done
            fi
    

I don't like the default persistentVolume path /pv-connections; I use /data for my test environment.
This needs a small tweak in the boards-cp.yaml file:

minio:
  persistentVolumePath: data
  nfs:
    server: 192.168.1.2

Connections Customizer Lite

As we are currently only using the Customizer from the stack formerly known as Pink (sfkap), I was surprised to find Customizer Lite in the downloads. As my Connections 6 lab is running with the sfkap, I decided to give my Connections 5.5 lab an upgrade. The installation experience was ok. It's not HA, but that's fine, as my lab is not HA either.
I only added an nginx reverse proxy on the same box. It should also be possible to run a dockerized nginx if needed, but I'm happy with the nginx on the host.

Applying some basic customizations, such as adding CSS or JS files, seems to work with Connections 5.5.

In my opinion the effort to set up Customizer Lite and use it is ok, compared to fiddling around with hikariTheme/custom.css and some nasty JavaScript/Dojo plugins.

Time to play around…

IBM Customizer – Do not reboot the worker node – personal reminder

Rule #1: drain the worker node before a reboot.
Consequences if you do not follow rule #1: the MongoDB gets corrupted and you need to repair it.
Steps to repair the Customizer DB: delete the contents of the /data/db/ subdirectory in mongo-node-0, mongo-node-1 and mongo-node-2, and register the Customizer apps in the appregistry again.
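A sketch of that cleanup, assuming the Component Pack default persistent volume layout on the NFS server (verify host and paths before deleting anything):

# on the NFS server / PV host; paths assume the default /pv-connections layout
for n in 0 1 2; do
  rm -rf /pv-connections/mongo-node-${n}/data/db/*
done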

I’m sure that there’s a way to repair the mongo db directories…

IBM Component Pack 6.0.0.6 upgrade Kubernetes 1.11.1 to 1.11.5

According to this CVE there's a potential issue with Kubernetes 1.11.1, which is used in Component Pack 6.0.0.6.
So I was wondering whether it is possible to upgrade the Kubernetes version to the patched point release 1.11.5.
The long version can be found in the official documentation.

The short version:

#on Master Node

yum-config-manager --enable kubernetes

yum install kubeadm-1.11.5-0 --disableexcludes=kubernetes

kubeadm upgrade plan
kubeadm upgrade apply v1.11.5

#Masterupdate
kubectl drain $MASTER --ignore-daemonsets
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 --disableexcludes=kubernetes
kubectl uncordon $MASTER

yum-config-manager --disable kubernetes

#repeat for each master goto Masterupdate

#for each node
kubectl drain $NODE --ignore-daemonsets

#Nodeupdate
#on node $NODE

yum-config-manager --enable kubernetes
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 kubeadm-1.11.5-0 --disableexcludes=kubernetes

yum-config-manager --disable kubernetes
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
systemctl daemon-reload
systemctl restart kubelet

#back on the master
kubectl uncordon $NODE

#repeat for each node: restart from Nodeupdate

The upgrade on my little lab environment (1 master + 3 workers) went smoothly.

IBM Connections Component Pack 6.0.0.6 – reveal the elasticsearch secret

In order to combine Elasticsearch and Connections you need some of the Elasticsearch certificates. With versions < 6.0.0.6 those certificates could be found in /opt/certs. With the latest release, they are created directly inside your Kubernetes cluster.

With this simple line, created by Toni Feric, you can export the certs:


kubectl -n connections get secret elasticsearch-secret -o json | egrep '\.txt|\.p12|\.key|\.pem' | sed -e 's/^ *//' -e 's/,$//' -e 's/"//g' | awk -F':' '{print "echo"$2" | base64 -d >"$1}' | bash

The result should look like this: