Overview
The objective of this project is to deploy a local instance of the Low Latency Alert generation Infrastructure (LLAI) on Kubernetes for development and testing purposes.
Attention
This guide assumes that you have a basic understanding of Kubernetes. For comprehensive tutorials and in-depth exploration of specific aspects of Kubernetes, please refer to the official Kubernetes documentation.
We support the deployment of the LLAI on the following Kubernetes distributions/systems:
- K3s on Linux (with a focus on the dedicated LDAS resources, a.k.a. the fluxuser machines)
- minikube on Linux and macOS
These instructions were tested on AlmaLinux 9 and macOS 12.
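For the minikube route, a local cluster can be created with resource flags matching the requirements listed below. This is only a sketch: the flag values are illustrative and should be adjusted to your machine.

```shell
# Illustrative only: create a minikube cluster sized for a minimal LLAI deployment.
# The values mirror the resource requirements given in this guide.
minikube start \
  --cpus=8 \
  --memory=32g \
  --disk-size=64g
```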
We target the following LLAI components:
- GraceDB, the GRAvitational-wave Candidate Event DataBase
- the SCiMMA Hopskotch server, a cloud-based system of Kafka data streams to support secure Multi-Messenger Astrophysics (MMA) alert distribution with identity and access management integration
- GWCelery, the service for annotating and orchestrating GW alerts
- the Mock Event Generator, a tool to re-upload past GW events to GraceDB
When developing a component, all related components must be running simultaneously because of their interdependencies. This project was originally designed for the development of GWCelery. While we provide instructions for deploying GWCelery on Kubernetes, the most practical approach is to run GWCelery directly on the host operating system and let it interact with the other components deployed on Kubernetes.
Resource requirements
The resource requirements for a minimal deployment of all the components are:
- 8 physical CPU cores (16 logical cores)
- 32 GB of memory
- 64 GB of disk space (mainly for GraceDB; the exact amount depends on the number of events you plan to write)
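Before deploying, you can roughly check whether your machine meets these requirements. The snippet below is a Linux-only sketch (it reads `/proc/meminfo` and uses GNU `df` and `nproc`, which are not available on macOS):

```shell
# Rough pre-flight check against the minimal requirements above (Linux-only sketch).
cores=$(nproc)                                                    # logical CPU cores
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)  # total RAM in GB
disk_gb=$(df -BG --output=avail . | tail -1 | tr -dc '0-9')       # free disk in GB

echo "logical cores: $cores (need >= 16)"
echo "memory:        ${mem_gb} GB (need >= 32)"
echo "free disk:     ${disk_gb} GB (need >= 64)"
```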