
Develop on a minikube system

To develop either of the two main components of the LLAI (Low Latency Alert Infrastructure), GraceDB and GWCelery, one needs a working test/deployment infrastructure and a pipeline emulator that mimics uploads from the analysis pipelines (meg, the mock-event-generator). This part of the guide gives a quick tour of how to get such an environment ready and working using minikube. Refer to the instructions for the individual components to install each of them in a different K8s system.

Projects involved in LLAI development

The first step is to clone the project repositories. The general procedure for proposing code changes requires that you create private forks; just for testing purposes you may clone the master repositories of the projects directly:

git clone https://git.ligo.org/computing/gracedb/server.git      server-gracedb
git clone https://git.ligo.org/emfollow/gwcelery.git             gwcelery
git clone https://git.ligo.org/emfollow/mock-event-generator.git meg

The suggested way of doing development is to create private forks of these projects in your own namespace. Once you have your private forks you may clone them with:

git clone git@git.ligo.org:roberto.depietri/gracedb.git              server-gracedb
git clone git@git.ligo.org:roberto.depietri/gwcelery.git             gwcelery
git clone git@git.ligo.org:roberto.depietri/mock-event-generator.git meg

Projects needed for LLAI deployment

Here is a description of all the components and projects needed for a fast, cutting-edge installation of the whole LLAI in a Minikube K8s cluster.

git clone https://git.ligo.org/computing/gracedb/k8s/helm.git         gracedb-helm
git clone https://git.ligo.org/emfollow/k8s/helm.git                  gwcelery-helm
git clone https://git.ligo.org/emfollow/k8s/llai-deploy-sandboxed.git llai-deploy

Fast deployment instructions for Minikube

The following is a fast way to obtain a functional sandboxed installation of the LLAI, including GWCelery, GraceDB, and the Hopskotch server (SCIMMA).

(1) Create a Minikube cluster.

The minikube "igwn-kube" cluster may be created with the following command (the prerequisites for working with a minikube cluster are described in Prerequisites), which also specifies the resources allocated to the virtual machine used to run the K8s cluster:

minikube start --profile igwn-kube --cpus=8 --memory=26GiB
Optionally, you can start the Minikube dashboard to monitor your cluster. It can be convenient to start it in a detached screen:
screen -S dashboard -dm sh -c "minikube -p igwn-kube dashboard"

How to clean up cluster

A cluster can be completely wiped out with the following command:

minikube stop --profile igwn-kube
minikube delete --profile igwn-kube
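The start and clean-up commands above can be wrapped in small shell functions for convenience. This is only a sketch: the `PROFILE` variable and the function names are illustrative, not part of the official tooling.

```shell
# Convenience wrappers around the cluster lifecycle commands above.
# PROFILE matches the profile name used throughout this guide.
PROFILE=igwn-kube

kube_up() {
  minikube start --profile "$PROFILE" --cpus=8 --memory=26GiB
}

kube_down() {
  minikube stop   --profile "$PROFILE"
  minikube delete --profile "$PROFILE"
}
```

Sourcing these definitions in your shell lets you recreate a clean cluster with `kube_down && kube_up`.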

(2) Define the Helm charts to be used.

Here we assume that the commands are issued from a directory that contains the previously named clones of the git repositories. The environment variables pointing to the various Helm charts needed for the installation, and the K8s namespace to be used, may be defined as:

export HELM_gracedb=`pwd`/gracedb-helm/gracedb
export HELM_hopskotch=`pwd`/gracedb-helm/hopskotch
export HELM_gwcelery=`pwd`/gwcelery-helm/gwcelery
export HELM_meg=`pwd`/llai-deploy/meg

export HELM_namespace='default'
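Before deploying, it can help to verify that each of these variables actually points at a chart directory. A minimal sketch, assuming the clones above are in place (the `check_chart` helper is illustrative, not part of the repositories):

```shell
# Report whether a path looks like a Helm chart (i.e. contains Chart.yaml).
check_chart() {
  if [ -f "$1/Chart.yaml" ]; then
    echo "ok: $1"
  else
    echo "missing Chart.yaml: $1"
  fi
}

# Check every chart path defined above.
for chart in "$HELM_gracedb" "$HELM_hopskotch" "$HELM_gwcelery" "$HELM_meg"; do
  check_chart "$chart"
done
```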

If you have instead already downloaded the GraceDB Helm charts (see below), you may use:

export HELM_gracedb=gracedb-helm/gracedb
export HELM_hopskotch=gracedb-helm/hopskotch
How to download the Helm charts

The first step is to add the chart repository to Helm. In the following command substitute <username> with your albert.einstein username. When prompted for a password, use your read_api scoped personal access token (see Prerequisites).

helm repo add --force-update --username <username> gracedb-helm \
     https://git.ligo.org/api/v4/projects/15655/packages/helm/stable

(3) Deploy Hopskotch and GraceDB.

helm upgrade --install -n ${HELM_namespace} hopskotch ${HELM_hopskotch}
helm upgrade --install -n ${HELM_namespace} gracedb ${HELM_gracedb}

(4) Configure network access to the GraceDB installation.

Make sure that the following entries are defined in your /etc/hosts file:

127.0.0.1 gracedb gracedb.default.svc.cluster.local
127.0.0.1 hopskotch hopskotch.default.svc.cluster.local
127.0.0.1 redis-server redis-server.default.svc.cluster.local
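The loopback entries above follow a regular pattern, so they can be generated for any namespace. A sketch (the `hosts_entries` function is hypothetical, not part of llai-deploy):

```shell
# Print the /etc/hosts lines for the LLAI services in a given namespace.
hosts_entries() {
  ns="${1:-default}"
  for svc in gracedb hopskotch redis-server; do
    echo "127.0.0.1 $svc $svc.$ns.svc.cluster.local"
  done
}

# Append them to /etc/hosts (requires root), e.g.:
#   hosts_entries default | sudo tee -a /etc/hosts
hosts_entries default
```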

Then you have to start the tunnel service, for example in a detached screen:

screen -S tunnel -dm sh -c "sudo minikube -p igwn-kube tunnel"

(5) Allow access and privileges to your account.

To configure your LIGO account so that it can access the GraceDB instance, run the following script using your albert.einstein username:

ALL_PERMS=True ./llai-deploy/utility/add-account.sh <albert.einstein>

(6) Deploy MEG.

helm upgrade --install -n ${HELM_namespace} meg ${HELM_meg}

(7) Create the secrets and deploy GWCelery.

The GWCelery Helm chart needs the following K8s secrets to authenticate to GraceDB and the Hopskotch server. They are used to populate the following configuration files inside the GWCelery pods:

  • /home/gwcelery/.config/hop/auth.toml
  • /home/gwcelery/.netrc
  • /tmp/x509up_u1000

The simplest way to create these secrets is to provide your X.509 credentials (obtained with the ligo-proxy-init command) to the helper utility:

./llai-deploy/utility/add-gwcelery-secrets.sh /tmp/x509up_u501

How to define custom GWCelery secrets
kubectl create secret generic -n default gwcelery-secrets \
  --from-literal=auth.toml="dummy" \
  --from-literal=netrc="machine 'kafka://hopskotch-server.default.svc.cluster.local'  login dummy password dummy" \
  --from-file=x509up_u1000=/tmp/x509up_u501
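The netrc entry passed above has a fixed shape, so it can be composed from the broker host and credentials. A sketch under that assumption (`netrc_line` is an illustrative helper, and the dummy credentials mirror the example above):

```shell
# Compose the single-line netrc entry used in the gwcelery-secrets secret.
# $1 = broker host, $2 = login, $3 = password
netrc_line() {
  echo "machine 'kafka://$1' login $2 password $3"
}

netrc_line hopskotch-server.default.svc.cluster.local dummy dummy
```

Its output can be passed directly to `--from-literal=netrc=...` in the kubectl command above.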

The installation and deployment of GWCelery inside the igwn-kube K8s cluster is achieved with the following command:

helm upgrade --install -n ${HELM_namespace} gwcelery ${HELM_gwcelery}

At this point you have a complete installation of the whole LLAI system on your minikube "igwn-kube" K8s cluster.

How to use the deployment

Test a GraceDB server version

Test a GWCelery version