
Overview

The aim of this package is to create a local (isolated) Kubernetes (k8s) cluster running the Low-Latency Alert Infrastructure (LLAI) for development and testing purposes.

This guide covers deployment on the following systems:

  • igwn k3s cluster, referred to as igwn-k3s in this guide, developed to perform best on a Linux virtual machine in the LDAS environment.
  • igwn k3s cluster (igwn-k3s), developed to perform best on a self-managed Linux OS.
  • igwn minikube cluster, referred to as igwn-kube in this guide, developed to perform best on virtual machines at the INFN-CNAF data center.
  • igwn minikube cluster (igwn-kube), developed to perform best on macOS.

LLAI is composed of the following subsystems:

  • a fully functional local instance of GraceDB, the GRAvitational-wave Candidate Event Database.
  • a local version of the SCiMMA Hopskotch server, a cloud-based system of Kafka data streams to support secure Multi-Messenger Astrophysics (MMA) alert distribution with identity and access management integration.
  • a local instance of the Mock Event Generator.
  • a local deployment of gwcelery.

Prerequisites

The minimum requirements for running LLAI are: (i) 4 CPUs, (ii) 8 GB of free memory, (iii) 40 GB of disk space, and (iv) an Internet connection. Note that running gwcelery inside this infrastructure requires at least 26 GB of allocated memory.
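As an illustration, when using minikube these limits can be requested at cluster creation time. This is a minimal sketch, assuming the igwn-kube profile name used later in this guide; adjust the values to match the requirements above:

# request the minimum resources listed above when creating the minikube profile
minikube -p igwn-kube start --cpus=4 --memory=8g --disk-size=40g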

A number of packages must be installed to set up the Kubernetes cluster where LLAI is deployed: see the prerequisites page for instructions and preliminary actions.

Warning

Please be aware that having both minikube and k3s installed on the same system may cause conflicts.
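If both tools do end up installed, a common symptom is kubectl talking to the wrong cluster. A minimal sketch for inspecting and switching kubectl contexts (the context name used here is an example; list your own contexts first):

# list the contexts known to kubectl and show which one is active
kubectl config get-contexts
# switch explicitly to the intended cluster (context name is an example)
kubectl config use-context igwn-kube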

Activate the cluster

igwn-k3s on an LDAS virtual machine:

# Activate the igwn environment
conda activate igwn-py39
# Ensure that the k3s service is active and alive
systemctl start k3s

To check that igwn-k3s is in the Running status, use the commands:

systemctl status k3s # check the status of the k3s service
kubectl get pods -A # see which pods are running
kubectl get all -n kube-system # check the active Kubernetes objects
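A quick additional check that the cluster node itself is healthy (a minimal sketch; the node name will match your machine's hostname):

# confirm that the k3s node has reached the Ready state
kubectl get nodes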

igwn-k3s on a self-managed Linux OS:

# Activate the igwn environment
conda activate igwn-py311
# Ensure that the k3s service is active and alive
systemctl start k3s
# Export the KUBECONFIG variable (if not defined in .bashrc); substitute <username> with your local home folder
export KUBECONFIG='/home/<username>/.kube/k3s.yaml'

To check that igwn-k3s is in the Running status, use the commands:

systemctl status k3s # check the status of the k3s service
k3s kubectl get pods -A # see which pods are running
k3s kubectl get all -n kube-system # check the active Kubernetes objects
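To avoid re-exporting KUBECONFIG in every new shell, the export can be appended to .bashrc. A minimal sketch, assuming the kubeconfig path used above:

# make the k3s kubeconfig the default for every new shell
echo "export KUBECONFIG=$HOME/.kube/k3s.yaml" >> ~/.bashrc
source ~/.bashrc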

igwn-kube (minikube):

# Activate the igwn environment
conda activate igwn-py311
# Ensure that Docker Desktop is active and alive
systemctl --user start docker-desktop
# Start igwn-kube
minikube -p igwn-kube start
# Open a tunnel to GraceDB
minikube -p igwn-kube tunnel

To check that igwn-kube is in the Running status, use the command minikube profile list. This lists the profiles installed in minikube and their corresponding status.
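Once the tunnel is running, the services exposed by the cluster can be inspected from another terminal. A minimal sketch, assuming GraceDB is exposed as a LoadBalancer service in the default namespace:

# list the services exposed by the igwn-kube profile
minikube -p igwn-kube service list
# check that the GraceDB service has received an external IP from the tunnel
kubectl get svc -n default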

LLAI deployment

The steps to deploy LLAI are:

  1. Deploy a local GraceDB instance
  2. Deploy the Mock Event Generator (MEG)
  3. Deploy gwcelery
  4. Verify the full infrastructure, as sketched below
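After the deployments complete, a readiness check can be scripted with kubectl. A minimal sketch, assuming all LLAI components run in the default namespace (this may differ in your setup):

# wait up to 10 minutes for every pod in the namespace to become Ready
kubectl wait --for=condition=Ready pods --all -n default --timeout=600s
# then review the resulting objects
kubectl get pods -n default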

For a quick start on LDAS machines, see the Quickstart guide for LDAS machines.

Exploring LLAI

The GraceDB web interface is available at https://gracedb.default.svc.cluster.local/.
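Note that this hostname is the in-cluster DNS name, so it resolves on the host machine only if mapped to the tunnel or ingress IP. A minimal sketch, assuming the minikube tunnel exposes the service on 127.0.0.1 and that the local instance uses a self-signed certificate:

# map the in-cluster hostname to the tunnel IP (127.0.0.1 is an assumption)
echo "127.0.0.1 gracedb.default.svc.cluster.local" | sudo tee -a /etc/hosts
# probe the web interface; -k skips verification of the self-signed certificate
curl -k https://gracedb.default.svc.cluster.local/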