IBM Connections Systemd on CentOS / RHEL 7

Starting the Deployment Manager as a systemd service.
Save a unit file to /etc/systemd/system/dmgr.service with Description="Connections Deployment Manager" and ExecStart/ExecStop pointing at the Dmgr profile's startManager.sh and stopManager.sh scripts.
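A minimal unit file might look like the following sketch. The profile path assumes the default Dmgr01 location under /opt/IBM/WebSphere/AppServer; adjust it to your install.

```ini
[Unit]
Description=Connections Deployment Manager
After=network.target

[Service]
# startManager.sh forks and returns once the Dmgr is up, hence Type=forking
Type=forking
ExecStart=/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/startManager.sh
ExecStop=/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/stopManager.sh
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
```

After saving the file, run systemctl daemon-reload before enabling the service.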



Enable dmgr at boot time:
systemctl enable dmgr.service

Some more examples:
systemctl start dmgr.service
systemctl stop dmgr.service
systemctl status dmgr.service


Connections – Mass file removal

I got a request to clean up some files in an IBM Connections environment: remove all files created before 1 January 2015.
Getting the list of files is easy with a DB2 export (the FILES.MEDIA table name is an assumption here — adjust it to your Files database schema):

connect to files@
export to files.csv of del modified by nochardel
select hex(id), id, title, create_date from files.media where create_date < date('2015-01-01')@
connect reset@

This got me around 11'000 records.
In the API documentation I found the required API call.
After a while I had my first Node.js app, which correctly posted the required requests against my server. Doing a test run with all 11'000 records was a bad idea: my environment was so fast that it fired off the 11'000 requests in under 10 seconds.
The first couple of requests passed, but then I got a lot of errors and the Files app did not respond at all. With the help of a simple rate limiter I throttled it down to 2 requests per second. The process now takes a bit longer, but at least the Connections environment stays alive.
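The throttling idea, reduced to a shell sketch (my actual tool was a Node.js app; the host name, endpoint URL and CSV layout below are placeholders for illustration):

```shell
#!/bin/sh
# Sketch: send the delete calls at 2 requests/second instead of all at once.
# files.csv normally comes from the DB2 export above; a two-line sample is
# generated here so the sketch is self-contained. The endpoint URL is a
# placeholder -- take the real call from the Files API documentation.
printf 'ABC123,doc-1\nDEF456,doc-2\n' > files.csv

while IFS=',' read -r hexid docid; do
  # echo the request instead of executing it, so a dry run is safe
  echo "curl -X DELETE https://connections.example.com/files/api/document/${docid}"
  sleep 0.5   # 2 requests per second
done < files.csv > delete.log

cat delete.log
```

Replace the echo with the real curl call once the dry run looks right; sleep 0.5 gives the 2 requests per second mentioned above.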

Connections Customizer Lite

As we are currently only using the Customizer from the stack formerly known as Pink (sfkap), I was surprised to find the Customizer Lite in the downloads. As my Connections 6 lab is running with the sfkap, I decided to give my Connections 5.5 lab an upgrade. The installation experience was OK. It's not HA, but that's fine, as my lab is not HA either.
I only added an nginx reverse proxy on the same box. It should also be possible to run a dockerized nginx if needed, but I'm happy with nginx on the host.

Applying some basic customizations, such as adding CSS or JS files, seems to work with Connections 5.5.

In my opinion the effort to set up and use the Customizer Lite, compared to fiddling around with hikariTheme/custom.css and some nasty JavaScript/Dojo plugins, is OK.

Time to play around…

IBM Customizer – Do not reboot the worker node – personal reminder

Rule #1: drain the worker node before a reboot.
Consequences if you do not follow Rule #1: the MongoDB gets corrupted and you need to repair it.
Steps to repair the Customizer DB: delete the contents of the /data/db/ subdirectory of mongo-node-0, mongo-node-1 and mongo-node-2, then register the Customizer apps in the app registry again.

I’m sure that there’s a way to repair the mongo db directories…
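For next time, a dry-run sketch of the cleanup. The /pv-connections/mongo-node-* paths assume the Component Pack's default hostPath persistent volumes, so verify them against your PV definitions before running anything; the commands are only written to a script here, not executed.

```shell
#!/bin/sh
# Generate the cleanup commands into a small script instead of running them
# directly, so the destructive step can be reviewed first.
# Path is an assumption: the default hostPath PVs of the Component Pack.
: > mongo-wipe.sh
for n in 0 1 2; do
  echo "rm -rf /pv-connections/mongo-node-${n}/data/db/*" >> mongo-wipe.sh
done
cat mongo-wipe.sh
# once reviewed, run it with: sh mongo-wipe.sh
```

Afterwards the Customizer apps still need to be registered in the app registry again, as noted above.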

IBM Component Pack upgrade Kubernetes 1.11.1 to 1.11.5

According to this CVE, there's a potential issue with Kubernetes 1.11.1, which is used in the Component Pack.
So I was wondering if it is possible to upgrade the Kubernetes version to the patched point release 1.11.5.
The long version can be found in the official documentation.

The short version:

#on Master Node

yum-config-manager --enable kubernetes

yum install kubeadm-1.11.5-0 --disableexcludes=kubernetes

kubeadm upgrade plan
kubeadm upgrade apply v1.11.5

kubectl drain $MASTER --ignore-daemonsets
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon $MASTER

yum-config-manager --disable kubernetes

# repeat the steps above on each additional master

# for each worker node, starting on the master:
kubectl drain $NODE --ignore-daemonsets

#on node $NODE

yum-config-manager --enable kubernetes
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 kubeadm-1.11.5-0 --disableexcludes=kubernetes

yum-config-manager --disable kubernetes
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
systemctl daemon-reload
systemctl restart kubelet

#back on the master
kubectl uncordon $NODE

# repeat from the drain step for each remaining node

The upgrade on my little lab environment (1 master + 3 workers) went smoothly.

XPage partial refresh stopped working after upgrade to Domino 10

We updated our servers to Domino 10, and suddenly one app did not work anymore. After digging around I found this helpful page on Stack Overflow.

The solution is as easy as putting xsp.error.disable.0380=true in xsp.properties.
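For reference, the property goes into xsp.properties (server-wide or per application):

```properties
# works around the partial refresh breakage seen after the Domino 10 upgrade
xsp.error.disable.0380=true
```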


Domino Query Language – personal reminder

DQL uses space as a separator at the moment

a) view.form='MyForm'

is not the same as

b) view.form = 'MyForm'

=> a) does not work…

IBM Connections Component Pack – reveal the elasticsearch secret

In order to combine Elasticsearch and Connections you need some of the Elasticsearch certificates. In earlier releases those certificates could be found in /opt/certs. With the latest release, they are created directly inside your Kubernetes cluster.

With this simple line, created by Toni Feric, you can export the certs:

kubectl -n connections get secret elasticsearch-secret -o json | egrep '\.txt|\.p12|\.key|\.pem' | sed -e 's/^ *//' -e 's/,$//' -e 's/"//g' | awk -F':' '{print "echo"$2" | base64 -d >"$1}' | bash

The result should be the set of certificate and key files extracted to the current directory.

IBM Component Pack – quest for the missing elasticsearch-secret

With the Customizer up and running, I tried to set up the Elasticsearch stack. Following the IBM documentation, I was able to add a new worker node dedicated solely to Elasticsearch.
I pushed the additional images to my Docker registry and installed the Helm charts. So far so good, but my es-master, es-data and es-client pods would not start. The log revealed the missing elasticsearch-secret. Some hints from Nico directed my search to the connections-env part, where the parameter "createSecret=false" drew my attention.

A short helm delete connections-env --purge and a reinstall of connections-env with createSecret=true did the trick.
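Spelled out as a reviewable plan (Helm 2 syntax; the chart file name is a placeholder — use the connections-env chart from your Component Pack download and keep whatever other --set values your install needs):

```shell
#!/bin/sh
# Write the two commands to a small script for review instead of running
# them straight away; "helm delete --purge" removes the old release first.
# Chart file name is a placeholder for the one shipped with the Component Pack.
cat > fix-connections-env.sh <<'EOF'
helm delete connections-env --purge
helm install --name connections-env ./connections-env.tgz --set createSecret=true
EOF
cat fix-connections-env.sh
```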

IBM Connections Component Pack 6 Installation – Customizer

Today I tried to install a small POC environment for the Customizer only: 1 master node and 2 workers.
Following the IBM documentation is straightforward. For my POC I relied on the "Deploying a non-HA Kubernetes platform" path.
I went through the docs and copied all the pieces into a couple of shell scripts, so I'd be able to restart from scratch if something went wrong.

The only point where I had issues was how to set up the Docker image registry. For my POC I did not want to fiddle around with certificates, so I decided to follow this link. Had I updated the daemon.json file on every worker node too, I would have been much more efficient. After this little change, my worker nodes were finally able to pull the images.
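For the certificate-less route, the registry has to be listed as insecure in /etc/docker/daemon.json on the master and on every worker (host name and port here are placeholders), followed by a restart of the Docker daemon on each node:

```json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
```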

Despite the first impression the documentation makes, the overall installation experience was interesting.

At the moment I only run the Customizer on my POC environment. After a clean reboot, all 3 nodes take less than 8 GB of RAM. Let's see how that rises once I start using the Customizer tomorrow.