Overview¶
The aim of the present package is to create a local (isolated) Kubernetes (k8s) cluster running the Low Latency Alert infrastructure (also referred to as LLAI) for development and testing purposes.
This guide supports the deployment on the following systems:
- igwn k3s cluster, also called igwn-k3s in this guide: developed to best perform on a Linux virtual machine in the LDAS environment.
- igwn k3s cluster (igwn-k3s): developed to best perform on a self-deployed Linux OS.
- igwn minikube cluster, also called igwn-kube in this guide: developed to best perform on virtual machines at the INFN-CNAF datacenter.
- igwn minikube cluster (igwn-kube): developed to best perform on Mac OS X.
LLAI is composed of the following subsystems:
- a fully functional local instance of GraceDB, the GRAvitational-wave Candidate Event Database.
- a local version of the SCiMMA Hopskotch server, a cloud-based system of Kafka data streams to support secure Multi-Messenger Astrophysics (MMA) alert distribution with identity and access management integration.
- a local instance of the Mock Event Generator
- a local deployment of gwcelery
Prerequisites¶
The minimum requirements for running LLAI are: (i) 4 CPUs, (ii) 8 GB of free memory, (iii) 40 GB of disk space, and (iv) an Internet connection. Please note that running gwcelery inside this infrastructure requires at least 26 GB of allocated memory.
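On Linux, these minimums can be checked with standard tools before starting. A rough sketch (it assumes `nproc`, `free`, and GNU `df` are available):

```shell
# Rough resource check against the minimums stated above (Linux tools assumed)
echo "CPUs available:   $(nproc)  (minimum: 4)"
free -g  | awk '/^Mem:/ {print "Available memory: " $7 " GB (minimum: 8)"}'
df -BG . | awk 'NR==2   {print "Free disk here:   " $4 "   (minimum: 40G)"}'
```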
The following packages must be installed to set up the Kubernetes cluster where LLAI will be deployed:
For the igwn-k3s deployments:
- The igwn python environment through Conda
- Helm 3
- OpenSSL
- A clone of this git repository, to be used as the working folder
- k3s

For the igwn-kube deployments:
- The igwn python environment through Conda
- Helm 3
- OpenSSL
- A clone of this git repository, to be used as the working folder
- Docker (>= 23.0)
- Minikube (>= 1.27)
- kubectl (>= 1.25)
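As a quick sanity check, the presence of the required tools on PATH can be sketched as follows (the tool list is taken from the prerequisites above; drop `k3s` or the `docker`/`minikube`/`kubectl` entries depending on the chosen cluster type):

```shell
# Hedged sketch: check that each required command-line tool is on PATH.
required="conda helm openssl docker minikube kubectl k3s"
missing=""
for tool in $required; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "All prerequisites found"
else
  echo "Missing:$missing"
fi
```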
See the prerequisites page for instructions and preliminary actions.
Warning
Please be aware that having both minikube and k3s on the same system may cause conflicts.
Activate the cluster¶
# Activate the igwn environment
conda activate igwn-py311
# Ensure that k3s is active and Alive
systemctl start k3s
# export KUBECONFIG variable (if not defined in .bashrc), substitute <username> with your local home folder
export KUBECONFIG='/home/<username>/.kube/k3s.yaml'
To check that the k3s service is running, use the command systemctl status k3s.
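Before running kubectl commands, it can also help to verify that KUBECONFIG points at a readable file. A minimal sketch, using the k3s.yaml path exported above as the default:

```shell
# Sketch: fail early if the kubeconfig for igwn-k3s is missing or unreadable
KUBECONFIG="${KUBECONFIG:-$HOME/.kube/k3s.yaml}"
if [ -r "$KUBECONFIG" ]; then
  echo "Using kubeconfig: $KUBECONFIG"
else
  echo "Kubeconfig missing or unreadable: $KUBECONFIG" >&2
fi
```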
# Activate the igwn environment
conda activate igwn-py311
# Ensure that Docker Desktop is active and alive
systemctl --user start docker-desktop
# start igwn-kube
minikube -p igwn-kube start
# open tunnel to GraceDB
minikube -p igwn-kube tunnel
To check that igwn-kube is in status Running, use the command minikube profile list, which lists the profiles installed in minikube and their status.
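The status check can also be scripted. A sketch, not part of the official tooling, using the igwn-kube profile name from this guide:

```shell
# Sketch: check whether the igwn-kube profile is known to minikube
PROFILE="igwn-kube"
if minikube profile list 2>/dev/null | grep -q "$PROFILE"; then
  echo "Profile $PROFILE is present; check its Status column with: minikube profile list"
else
  echo "Profile $PROFILE not found; create it with: minikube -p $PROFILE start"
fi
```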
LLAI deployment¶
Steps to deploy LLAI are:
- Deploy a local GraceDB instance
- Deploy the Mock Event Generator (MEG)
- Deploy gwcelery
- Verify the full infrastructure
For a quickstart on LDAS machines, see the Quickstart guide for LDAS machines.
Exploring LLAI¶
The web interface of GraceDB is available at the URL https://gracedb.default.svc.cluster.local/.
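A quick reachability probe can be sketched with curl (the `-k` flag skips certificate verification, on the assumption that the local instance serves a self-signed certificate; the URL is the one above):

```shell
# Sketch: probe the local GraceDB web interface and report the HTTP status
GRACEDB_URL="https://gracedb.default.svc.cluster.local/"
code=$(curl -ksS -o /dev/null -w "%{http_code}" "$GRACEDB_URL" 2>/dev/null) \
  && echo "GraceDB answered with HTTP $code" \
  || echo "GraceDB not reachable at $GRACEDB_URL"
```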