Ran into a small issue with Activities Plus. If you only want to install Activities Plus in your test Kubernetes cluster and don't want to upload all images, the setup will fail.

The support/setupImages.sh script has two errors in it.

  1. It does not recognize the parameter kudos-boards: ./setupImages.sh -st kudos-boards …. will fail. You need to change the line ‘starter_stack_options=’ and add ‘kudos-boards’ to the list (see the sketch after this list).
  2. If you only run setupImages.sh -st kudos-boards, it will not push all the required images. You need to run it with at least -st customizer,kudos-boards,
    or change the #infra block in setupImages.sh to

            # Infra
            if [[ ${STARTER_STACKS} =~ "customizer" ]] || [[ ${STARTER_STACKS} =~ "orientme" ]] || [[ ${STARTER_STACKS} =~ "kudos-boards" ]]; then
                    arr=(haproxy.tar mongodb-rs-setup.tar mongodb.tar redis-sentinel.tar redis.tar appregistry-client.tar appregistry-service.tar)
                    for f in "${arr[@]}"; do
                            setup_image $f
                    done
            fi
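
For reference, a sketch of the fix for the first error; the other stack names on this line are assumptions, so check your version of setupImages.sh for the actual list:

#valid values for the -st parameter
starter_stack_options="customizer orientme elasticsearch kudos-boards"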
    

I don't like the default persistentVolume path /pv-connections; I use /data for my test environment.
This needs a small tweak in the boards-cp.yaml file:

minio:
  persistentVolumePath: data
  nfs:
    server: 192.168.1.2
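
The corresponding directory has to exist and be exported on the NFS server. A minimal sketch of the server side (the export options and subnet are assumptions; adjust to your environment):

mkdir -p /data
#assumption: allow the cluster subnet read/write access
echo '/data 192.168.1.0/24(rw,no_root_squash)' >> /etc/exports
exportfs -ra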

Update for Connections 6.0 CR 5: ICM (mail plugin) vs. Wikis

The file is now called com.ibm.lconn.wikis.web.resources_3.5.0.20191121-2152.jar
and the line number is now 1804.

As always: use at your own risk…

Thanks to Christoph Stöttner, there's a way to use a secondary directory on the Domino LDAP server.

  1. Create a default Configuration document in the secondary directory. In the LDAP settings, allow write access.
  2. Add the LDAP user to the secondary directory's ACL, if not already there, as Editor with the [UserCreator] and [UserModifier] roles.
  3. Restart the LDAP task (see the console command below).
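
To restart the task, something like this on the Domino server console should do (from memory; verify against your Domino version):

restart task ldap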

Yesterday we upgraded our production environment from Connections 6.0 CR6 to 6.5. This time we decided to do an in-place update. Updating the base from WAS 8.5.5.15 to 8.5.5.16 and Connections from 6.0 CR6 to 6.5 would have taken around 2 hours of downtime in our case. Because we had an invalid proxy-config.tpl file (our fault), which broke the install process, it took 4 hours. Next time we should check the XML files first…

Today was “Invite” testing day. Combining the information in the selfregistration-config.xml file with the HCL documentation, we were able to do the first tests.
Our current external users implementation, Domino with a secondary directory, does not fit the new Invite feature. The feature relies on LDAP: it writes the user to LDAP and then to Profiles. I've not been able to force the LDAP task to create the new user in my secondary directory.
I've now set up an OpenLDAP server for the external users.
If you do not want to handle OpenLDAP, there's a Domino way.

As an external user I’m not able to change my password at the moment, but I expect this to be possible in a future update. The workaround would be to use the “reset guest password” workflow on the login page.

One thing I had to change in AppSrv01/installedApps/[cell]/Invite.ear/invite.war/pages/register.jsp:
add readonly="true" to the <input id="mail"…/> field, so the invited users are no longer able to change their email addresses.

Do NOT use UMLAUTS in Invite!!!!!

We are running Connections 6.0 CR5 with the Mail/Calendar plugin (ICM).
The plugin is officially not supported.

One issue we ran into is that a wiki page loses its header as soon as we create or edit a page.
The Connections menu would appear below the page; only a page refresh would bring the menu back to the top.

Analyzing the page source revealed that the lotusMain element was placed before the banner.

To fix this you need to alter the JavaScript. Always make a backup! Test it first. Use the following at your own risk:
It's in the file shared/provision/webresources/com.ibm.lconn.wikis.web.resources_[XY].jar.
Unzip it, then edit the file resources/scenes.js: at line 1792, replace

frame.insertBefore(el,frame.lastChild);

with

//frame.insertBefore(el,frame.lastChild);
dojo.place(el,frame,'last');

Repack everything to the same filename com.ibm.lconn.wikis.web.resources_[XY].jar and use it to replace the one in the webresources.
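
A sketch of the whole unpack/edit/repack cycle; the webresources path and scratch directory are assumptions, and the version placeholder must be replaced with your actual file name:

#assumption: your shared data path
WEBRES=/opt/IBM/Connections/data/shared/provision/webresources
#replace XY with the real version stamp of your jar
JAR=com.ibm.lconn.wikis.web.resources_XY.jar
cp "$WEBRES/$JAR" "$JAR.bak"      #backup first
mkdir /tmp/wikisjar && cd /tmp/wikisjar
unzip "$WEBRES/$JAR"
vi resources/scenes.js            #apply the change described above
zip -r "/tmp/$JAR" .              #repack under the same name
cp "/tmp/$JAR" "$WEBRES/$JAR"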

As this is a JavaScript change, you need to update the version stamp, stop your nodes, and clean the WebSphere temp directories and any caching proxies before the change takes effect.
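
Updating the version stamp is done through wsadmin. A sketch of the documented post-customization step from memory (verify the exact syntax against the product documentation before relying on it):

cd /opt/ibm/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython
execfile("connectionsConfig.py")
LCConfigService.checkOutConfig("/tmp", AdminControl.getCell())
LCConfigService.updateConfig("versionStamp", "")   # empty string should generate a fresh timestamp
LCConfigService.checkInConfig()
synchAllNodes()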
As stated before. use at your own risk.

Starting the Deployment Manager as a systemd service.
Save the code below to /etc/systemd/system/dmgr.service


[Unit]
Description=Connections Deployment Manager
Wants=network-online.target
After=network-online.target

[Service]
User=connections
ExecStart=/opt/ibm/WebSphere/AppServer/profiles/Dmgr01/bin/startManager.sh
ExecStop=/opt/ibm/WebSphere/AppServer/profiles/Dmgr01/bin/stopManager.sh
PIDFile=/opt/ibm/WebSphere/AppServer/profiles/Dmgr01/logs/dmgr/dmgr.pid
Type=forking
Restart=no
TimeoutStartSec=6000
TimeoutStopSec=600
LimitNOFILE=60000
LimitNPROC=12500

[Install]
WantedBy=multi-user.target

Reload systemd, then enable dmgr at boot time:
systemctl daemon-reload
systemctl enable dmgr.service

Some more examples:
systemctl start dmgr.service
systemctl stop dmgr.service
systemctl status dmgr.service


I got a request to clean up some files in an IBM Connections environment: remove all the files created before 2015-01-01.
Getting the list of files is easy.

connect to files@
export to files.csv of del modified by nochardel
  select hex(ID), id, title, create_date from files.media where create_date < date('2015-01-01')@
connect reset@

This got me around 11'000 records.
In the API documentation I found the required API call.
After a while I had my first Node.js app, which was correctly posting the required requests against my server. Doing a test run with 11'000 records was a bad idea: my environment was so fast that it sent the 11'000 requests in under 10 seconds.
The first couple of requests passed, but then I got a lot of errors and the Files app did not respond at all. With the help of a simple rate limiter I throttled it down to 2 requests per second. The process will now take a bit longer, but at least the Connections environment stays alive.
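
For illustration, the same throttling can be done in plain shell instead of Node.js; a sketch assuming the standard Files API delete endpoint, basic auth, and the CSV layout from the export above (hostname and credentials are placeholders):

#delete one file per CSV row at ~2 requests/second
while IFS=, read -r hexid id title created; do
  curl -s -u "$USER:$PASS" -X DELETE \
    "https://connections.example.com/files/basic/api/myuserlibrary/document/$id/entry"
  sleep 0.5
done < files.csv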

As we are currently only using the Customizer from the stack formerly known as pink (sfkap), I was surprised to find the Customizer Lite in the downloads. As my Connections 6 lab is running with the sfkap, I decided to give my Connections 5.5 lab an upgrade. The installation experience was OK. It's not HA, but that's OK, as my lab is not HA either.
I only added an nginx reverse proxy on the same box. It should also be possible to run a dockerized nginx if needed, but I’m happy with the nginx on the host.

Applying some basic customizations, such as adding CSS or JS files, seems to work with Connections 5.5.

In my opinion, the effort to set up the Customizer Lite and use it is OK, compared to fiddling around with hikariTheme/custom.css and some nasty JavaScript/Dojo plugins.

Time to play around…

Rule #1: Drain the worker node before a reboot.
Consequences if you do not follow Rule #1: the MongoDB gets corrupted and you need to repair it.
Steps to repair the Customizer DB: delete the contents of the /data/db/ subdirectory on mongo-node-0, mongo-node-1, and mongo-node-2, then register the Customizer apps in the app registry again.
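
For reference, the drain/reboot/uncordon cycle behind Rule #1 (the node name is a placeholder; flags as of Kubernetes 1.11):

kubectl drain worker-1 --ignore-daemonsets --delete-local-data
#reboot the node, then bring it back into scheduling:
kubectl uncordon worker-1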

I'm sure that there's a way to repair the MongoDB directories…

According to this CVE, there's a potential issue with Kubernetes 1.11.1, which is used in Component Pack 6.0.0.6.
So I was wondering if it is possible to upgrade the Kubernetes version to the patched point release 1.11.5.
The long version can be found in the official documentation.

The short version:

#on Master Node

yum-config-manager --enable kubernetes

yum install kubeadm-1.11.5-0 --disableexcludes=kubernetes

kubeadm upgrade plan
kubeadm upgrade apply v1.11.5

#Masterupdate
kubectl drain $MASTER --ignore-daemonsets
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon $MASTER

yum-config-manager --disable kubernetes

#repeat for each master: restart from Masterupdate

#for each node
kubectl drain $NODE --ignore-daemonsets

#Nodeupdate
#on node $NODE

yum-config-manager --enable kubernetes
yum install kubectl-1.11.5-0 kubelet-1.11.5-0 kubeadm-1.11.5-0 --disableexcludes=kubernetes

yum-config-manager --disable kubernetes
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
systemctl daemon-reload
systemctl restart kubelet

#back on the master
kubectl uncordon $NODE

#repeat for each node: restart from Nodeupdate

The upgrade on my little lab environment (1 master + 3 workers) went smoothly.
