
Overview

This package contains instructions on how to deploy the various components of the Low Latency Alert infrastructure (also referred to as LLAI) in a local Kubernetes (K8s) system. The components include:

  • a fully functional local instance of GraceDB, the GRAvitational-wave Candidate Event Database.
  • a local version of the SCiMMA Hopskotch server, a cloud-based system of Kafka data streams to support secure Multi-Messenger Astrophysics (MMA) alert distribution with identity and access management integration.
  • a local instance of the Mock Event Generator.
  • a local deployment of gwcelery.
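Once the full stack is deployed following this guide, one quick way to confirm that all the components came up is to inspect the Helm releases and pods. This is only a sketch; the actual release names reported by `helm list` depend on how each component was installed:
# List the Helm releases across all namespaces (release names depend on your installation)
helm list -A
# List all pods and check that everything is in Running or Completed state
kubectl get pods -A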

General information

The aim of this package is to create a local (isolated) Kubernetes (K8s) cluster running both GraceDB and gwcelery for development and testing purposes. As K8s cluster we provide two different solutions, tailored to the different remote infrastructures:

  • a minikube cluster, hereafter called igwn-kube, developed to best perform on virtual machines at the INFN-CNAF datacenter;
  • a k3s cluster, hereafter called igwn-k3s, developed to best perform on virtual machines at the CIT datacenter.
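Whichever cluster you choose, client tools must be pointed at the right kubeconfig or context. A minimal sketch (the context name igwn-kube is created automatically by minikube for that profile; the k3s path is the default one used later in this guide):
# For igwn-kube: minikube registers a kubectl context named after the profile
kubectl config use-context igwn-kube
# For igwn-k3s: point KUBECONFIG at the k3s-generated configuration file
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes   # verify that the selected cluster answers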

Both LLAI configurations may also be tested on your own infrastructure. The minimum requirements are: (i) 4 CPUs, (ii) 8 GB of free memory, (iii) 40 GB of disk space, and (iv) an Internet connection. Please note that running gwcelery inside this infrastructure requires at least 26 GB of allocated memory.
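For the minikube-based setup, these resources can be requested explicitly when the profile is first started. The figures below simply mirror the minima above and should be raised accordingly (in particular the memory, if you plan to run gwcelery):
# Start the profile with explicit resources (values mirror the minimum requirements)
minikube -p igwn-kube start --cpus=4 --memory=8g --disk-size=40g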

Prerequisites

LLAI requires the following to be installed: (i) a Python environment; (ii) Helm 3; (iii) OpenSSL; (iv) a clone of this git repository, to be used as the working folder.
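A quick way to verify the first three prerequisites from a shell; this is only a sketch, and `<repository-url>` is a placeholder for the actual clone URL of this repository:
# Check that the required tools are available
python --version
helm version --short   # should report a 3.x release
openssl version
# Clone this repository and use it as the working folder (<repository-url> is a placeholder)
git clone <repository-url> && cd <repository-name>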

Depending on the architecture chosen, additional packages are required: see the prerequisites page for instructions and preliminary actions. Note that having both igwn-kube and igwn-k3s on the same physical machine may cause conflicts.
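One common source of such conflicts is the two clusters competing for the same kubeconfig entries or host resources. A simple sketch to see what is already registered on the machine before installing a second cluster:
# List the kubectl contexts already configured on this machine
kubectl config get-contexts
# Check whether a k3s service is already installed and active
systemctl status k3s --no-pager 2>/dev/null | head -n 3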

All instructions in this guide assume that the cluster is up and running.
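A minimal liveness check before following any of the sections below, valid for either cluster provided kubectl points at it as described above:
# Confirm that the control plane answers and the node is Ready
kubectl cluster-info
kubectl get nodes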

Activate igwn-kube on your own local installation

If you are using your own deployed `igwn-kube` minikube installation:
# Activate the igwn environment
conda activate igwn-py39
# Ensure that Docker Desktop is active and alive
systemctl --user start docker-desktop
# start igwn-kube
minikube -p igwn-kube start
To check that *igwn-kube* is in status *Running*, use the command `minikube profile list`. This lists the profiles installed in minikube and their corresponding status.
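If you prefer a scripted check, the following sketch simply greps the table printed by `minikube profile list`:
# Report whether the igwn-kube profile is in Running state
minikube profile list | grep igwn-kube | grep -q Running \
  && echo "igwn-kube is Running" \
  || echo "igwn-kube is NOT running"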

Activate igwn-k3s

If you are using your own deployed `igwn-k3s` k3s installation:
# Activate the igwn environment
conda activate igwn-py39
# Ensure that the k3s service is active and alive
sudo systemctl start k3s
# modify permission of k3s configuration file 
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
Note that changing the permissions of the k3s.yaml file is a potential security issue (and is flagged as such by each command); nevertheless, it allows you to operate on the cluster with your own user without the `sudo` command. To check that *igwn-k3s* is in status *Running*, use the commands:
systemctl status k3s # check the status of the k3s service
sudo k3s kubectl get pods -A # see which pods are running
sudo k3s kubectl get all -n kube-system # check the active Kubernetes objects
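As an alternative to relaxing the permissions of the system-wide file, a common approach (a sketch, not part of the official instructions) is to copy the kubeconfig into your home directory and take ownership of the copy:
# Copy the k3s kubeconfig to your home and make your user its owner
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get pods -A   # now works without sudo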

GraceDB local instance

Instructions on how to deploy the local instance of GraceDB are reported here. The web interface of GraceDB should be available at the URL https://gracedb.default.svc.cluster.local/. If you have already deployed the local instance of GraceDB and want to access it, remember to open the tunnel service; otherwise the website will not be reachable:

minikube -p igwn-kube tunnel
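With the tunnel open, a quick reachability check from the host; this is a sketch that assumes the hostname resolves to the tunnelled LoadBalancer IP (e.g. through an /etc/hosts entry) and that the instance uses a self-signed certificate, hence the -k flag:
# Find the EXTERNAL-IP assigned to the GraceDB service by the tunnel
kubectl get svc -n default
# Probe the web interface, skipping TLS verification for the self-signed certificate
curl -kI https://gracedb.default.svc.cluster.local/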