
GraceDB Local Instance

GraceDB is the GRAvitational-wave Candidate Event Database. For development purposes there are two possibilities: deploy a basic working instance based on the GraceDB Helm Charts, or deploy a custom image.

GraceDB deployment

Hereafter we assume that igwn-kube or igwn-k3s is in the Running status (see here).

Basic deployment

The deployment depends on the Helm charts defined in GraceDB Helm Charts and on the Helm charts defined in this repository.

The first step is to add the repository address to Helm. In the following command, substitute <username> with your albert.einstein username; as password, use the token string of a read_api-scoped personal access token (see Prerequisites).

helm repo add --force-update --username <username> gracedb-helm \
          https://git.ligo.org/api/v4/projects/15655/packages/helm/stable

Additional command for the *igwn-kube* (minikube) installation: please note that minikube does not include the Traefik service. The gracedb chart will install this service automatically, but it requires adding the corresponding Helm repository:
helm repo add --force-update traefik https://traefik.github.io/charts


The Hopskotch and GraceDB charts can be installed in the default namespace (obtaining a sandboxed installation of the igwn-alert and gracedb services) in this way:

helm install -n default hopskotch gracedb-helm/hopskotch
helm install -n default gracedb gracedb-helm/gracedb

Depending on the architecture, additional settings may need to be passed to the installation chart (see here for the possible configurations). Here are some examples:

*igwn-kube*
helm install -n default hopskotch gracedb-helm/hopskotch
helm install -n default gracedb gracedb-helm/gracedb
The installation status and the k8s cluster can be monitored using the [k8s dashboard](prerequisites.md).
*igwn-k3s* The main difference of the K3S system is that the "standard" storage class is not defined and the only available storage class is "local-path".
helm  --kubeconfig /etc/rancher/k3s/k3s.yaml  upgrade --install -n default \
      --set storageClassName=local-path \
      hopskotch gracedb-helm/hopskotch   
helm  --kubeconfig /etc/rancher/k3s/k3s.yaml upgrade --install -n default \
     --set traefik.install=false \
     --set storageClassName=local-path \
     gracedb gracedb-helm/gracedb
*k3s* at fluxuser2.ldas.cit As for *igwn-k3s*, the "standard" storage class is not defined and the only available storage class is "local-path". In addition, the hostname used on the fluxuser2 system is different, to allow use of the predefined local DNS address; these values must therefore be specified explicitly.
helm  upgrade --install -n default \
      --set storageClassName=local-path \
      hopskotch gracedb-helm/hopskotch   
helm  upgrade --install -n default \
     --set traefik.install=false \
     --set storageClassName=local-path \
     --set publicName="gracedb-dev.ldas.cit" \
     gracedb gracedb-helm/gracedb

At this point the local sandboxed deployment of GraceDB is available in the cluster; nevertheless, some additional configuration is needed before it can be accessed.

Running a specific version of gracedb If one wants to deploy a different version of the GraceDB server, the command is (here deploying version 2.27.2):
helm upgrade --install gracedb gracedb-helm/gracedb --reuse-values \
             --set gracedb.image="containers.ligo.org/computing/gracedb/server:gracedb-2.27.2"
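The image reference follows a fixed pattern, so the value for any release can be derived from the version string alone. A minimal sketch (the `gracedb_image` helper is hypothetical, not part of the repository):

```shell
# Hypothetical helper: build the container image reference for a given
# GraceDB server version, following the pattern shown above.
gracedb_image() {
  printf 'containers.ligo.org/computing/gracedb/server:gracedb-%s\n' "$1"
}

gracedb_image 2.27.2
# -> containers.ligo.org/computing/gracedb/server:gracedb-2.27.2
```

It can then be passed as `--set gracedb.image="$(gracedb_image 2.27.2)"`.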
Running a custom image on minikube To build a custom GraceDB image for the `igwn-kube` deployment (using the tag `mytag`) from a local fork of the gracedb server, run the following command:
minikube -p igwn-kube image build -t "gracedb-custom/development:mytag" .
To run this image in the minikube deployment:
helm upgrade --install gracedb gracedb-helm/gracedb --reuse-values \
             --set gracedb.image="gracedb-custom/development:mytag"
Check helm deployment in k3s To check the Helm deployment status, use the command
 helm --kubeconfig /etc/rancher/k3s/k3s.yaml  ls --namespace default
For troubleshooting, this command may help:
kubectl -n default get events 

Accessing your local GraceDB deployment

The local sandboxed GraceDB can be accessed at the URL https://gracedb.default.svc.cluster.local/. Before the website can be reached, a few operations must be performed.

Configuring /etc/hosts

This is a local address that redirects to the web server running inside igwn-kube. To allow access, the address should be present in the local /etc/hosts file, since the authentication needs a logical address with full reverse naming.

Example of /etc/hosts file content

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 gracedb.default.svc.cluster.local
127.0.0.1 hopskotch
127.0.0.1 redis-server
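
The three sandbox-specific entries at the bottom of the example can also be generated by a small helper and appended to /etc/hosts in one go. A sketch (the `sandbox_host_entries` function is hypothetical):

```shell
# Hypothetical helper: print the /etc/hosts entries required by the sandbox.
sandbox_host_entries() {
  for name in gracedb.default.svc.cluster.local hopskotch redis-server; do
    printf '127.0.0.1 %s\n' "$name"
  done
}

# Append them to the real hosts file (requires root):
#   sandbox_host_entries | sudo tee -a /etc/hosts
sandbox_host_entries
```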

Open a minikube service tunnel (for igwn-kube only)

A tunnel between the local machine and the k8s cluster has to be opened with the following command (execute it in a separate terminal; closing the terminal or killing the process will interrupt the tunnel connection):

minikube -p igwn-kube tunnel 
Depending on the driver used when starting minikube, this command may need to be executed with sudo (see, e.g., this comment). If a password is required, use the actual user's (sudoer) password (it should be requested three times: for gracedb-traefik, hopskotch-server, and traefik). Without this tunnel active, the next step will fail.
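Once the tunnel is up, it can be convenient to wait until the GraceDB endpoint actually answers before moving on. A sketch, assuming curl is available (the `wait_for_url` helper is hypothetical):

```shell
# Hypothetical helper: poll a URL until it answers or the retry budget runs out.
# -k skips TLS verification (the sandbox CA is not trusted by the host yet).
wait_for_url() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -ksf -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Example:
#   wait_for_url https://gracedb.default.svc.cluster.local/ && echo "tunnel is up"
```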

Setting the user permissions

To add your username (e.g. albert.einstein@ligo.org) to the list of GraceDB users, run the following command:

ALL_PERMS=True ./utility/add-account.sh
In this way your local albert.einstein@ligo.org account will be active on the sandboxed installation (with all permissions). albert.einstein can now access the GraceDB database using their own X.509 certificate (which can be created with the ligo-proxy-init -H 2400000 albert.einstein command) or using the web interface as described below.

The utility folder is inside the llai-deploy-sandboxed git repository.

Installing the CA certificate in the browser

Access to GraceDB via the gracedb client requires providing the signature of the CA authority used to create the certificate of the sandboxed instance. The needed certificate bundle, cacerts.pem, can be retrieved with the command:

./utility/get_gracedb_ca
This creates the certificate bundle in the current directory. With this bundle, the following command gives access to the sandboxed GraceDB server:
REQUESTS_CA_BUNDLE=cacerts.pem  gracedb -s https://gracedb.default.svc.cluster.local/api credentials server
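To avoid retyping the bundle path and server URL, the call can be wrapped in a small function that fails early when the bundle is missing. A sketch (the `gracedb_sandbox` wrapper is hypothetical; it assumes the gracedb client is installed):

```shell
# Hypothetical wrapper around the gracedb client for the sandboxed server.
# Refuses to run if the CA bundle has not been retrieved yet.
gracedb_sandbox() {
  if [ ! -f cacerts.pem ]; then
    echo "cacerts.pem not found; run ./utility/get_gracedb_ca first" >&2
    return 1
  fi
  REQUESTS_CA_BUNDLE=cacerts.pem gracedb \
      -s https://gracedb.default.svc.cluster.local/api "$@"
}

# Example:
#   gracedb_sandbox credentials server
```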

Finalize the user configuration

The last step is finalizing the permission settings for your local albert.einstein@ligo.org account. Access the admin interface of the local sandboxed GraceDB deployment at the URL https://gracedb.default.svc.cluster.local/admin/ (Username: admin, Password: mypassword). From Authentication and Authorization administration -> Users, search for your local albert.einstein@ligo.org account. After entering the Change user interface, in the permissions section choose all available groups, and Save.

Now you are the happy owner of a local instance of GraceDB.

To see the global configuration deployed so far:

kubectl get pods,svc,deployment,pv,pvc,secrets,jobs --all-namespaces

Clean-up a GraceDB deployment

To clean up the local deployment:

helm uninstall gracedb
helm uninstall hopskotch
helm uninstall meg
kubectl delete secrets gracedb-cert-manager-webhook-ca gracedb-ca gracedb-cert-tls
kubectl delete pvc postgres-persistent-storage-gracedb-postgres-0 db-data-gracedb-0 meg-data-meg-0
## kubectl delete secrets gracedb-cert-tls gracedb-postgres client-ca gracedb
## kubectl delete secrets gracedb-cert-manager-webhook-ca  gracedb-ca