
Setup Ever Gauzy Platform on Digital Ocean Kubernetes


Introduction

In this tutorial, we will guide you step by step on how to set up Ever Gauzy in K8S on Digital Ocean.

Ever Gauzy is an open-source Business Management Platform. Here are some features you can find in Gauzy:

  • Human Resources Management (HRM) with Time Management / Tracking and Employees Performance Monitoring

  • Customer Relationship Management (CRM)

  • Enterprise Resource Planning (ERP)

  • Projects / Tasks Management

  • Sales Management

  • Financial and Cost Management (including Accounting, Invoicing, etc)

  • Inventory, Supply Chain Management, and Production Management

  • Etc.

For more information about Gauzy, visit https://gauzy.co.

Prerequisites

Create a New DigitalOcean Account

If you've already done this, you can skip this step. If not, you have the option to use either the official method or our referral link: https://m.do.co/c/70545a065ad4.

If you decide to use a referral link, you should see a page similar to the one shown below. Choose your preferred signup method.

Here are a couple of other things you'll need throughout this guide:

  • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress.

  • kubectl: A command-line tool used to control and manage Kubernetes clusters.

  • git: A version control system that helps track changes in files, especially source code.

  • helm: A package manager for Kubernetes.
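As a quick sanity check before continuing, you can verify that these tools are on your PATH (a minimal sketch; it only reports whether each command is installed, not its version):

```shell
# Report whether each prerequisite CLI tool is installed.
for tool in kubectl git helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING will be installed in the steps below.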

Setup Gauzy with DO Kubernetes

Step 1. Create PostgreSQL DB

To create a PostgreSQL database in DigitalOcean, go to your dashboard and, under the "Manage" option, click on the "Databases" menu. You should see a "Create Database" button, as shown in the screenshot below.

After clicking on "Create Database," you'll reach the "Create Database Cluster" page, where you'll set up your database.

  • First, choose the database region. You may want to select the datacenter region closest to your location. In my case, I decided to go with 'San Francisco Database 2 SF02.'

  • In the "Choose a Database Engine" section, select the Postgres option. We recommend running the latest PostgreSQL (16.x) and enabling connection pooling for production workloads.

Please note that the region where we create our database should be the same as the region where we create our Kubernetes Cluster. We will cover this in the next step.

Let's choose a unique database cluster name. I'll name it ever-gauzy-db-demo.

You can either leave the other fields at their default values or customize them based on your requirements. Then, click the "Create Database Cluster" button to set up your database.

After clicking the button, you will be directed to our Database page.

As shown in the image below, our database is still in the creation process. Your database will be ready once the blue progress bar reaches the end.

While the database server is being created, let's copy our connection details.

Copy and save the connection details (VPC Network) in a secure location. We also need to download the CA certificate.

We will need these details later when setting up the Gauzy API in Kubernetes.

Step 2. Create Kubernetes Cluster

Go to your dashboard and, under the "Manage" option, click on the "Kubernetes" menu. You should see a "Create Cluster" button, as shown in the screenshot below.

After clicking on "Create Cluster," you'll reach the "Create a Kubernetes cluster" page, where you'll set up your K8s Cluster.

Ensure it operates in the same region as your database, as outlined in the previous step.

Set the node size of your cluster. We recommend running a cluster of at least 2 nodes, each with 8 GB RAM or more.

A minimum of 2 nodes is required to prevent downtime during upgrades or maintenance.

Provide a name for your cluster and write it down, as we'll need to update it in our codebase while deploying the platform.

You can either leave the other fields at their default values or customize them based on your requirements.

Click the Create Cluster button to create your K8S cluster.

While the cluster is being created, let's install kubectl, which is a command-line tool used to control and manage Kubernetes clusters.

Step 3: Install kubectl

Before we start deploying our platform (Gauzy), we will need kubectl to deploy our platform's K8s manifest.

kubectl is the command-line tool for interacting with Kubernetes clusters. It allows you to deploy and manage applications, inspect cluster resources, view logs, and perform other administrative tasks.

Please refer to the official Kubernetes documentation at https://kubernetes.io/docs/tasks/tools/ to install kubectl.

After completing the installation, run the following command to verify.

kubectl version

# Client Version: v1.32.1
# Kustomize Version: v5.5.0
# Server Version: v1.31.1

Step 4: Download the Kube Config File for Our K8S Cluster

To interact with our K8s cluster, we need to download the cluster config file from the DigitalOcean cluster interface.

Return to the Kubernetes menu under the "Manage" option, and click on the cluster we previously created. In the Overview section, you should see a button to download the configuration file, as illustrated in the image below.

Store it at ~/.kube/gauzy-k8s-demo-kubeconfig.yaml (you can name the file as you prefer). This file contains essential information about the Kubernetes cluster, such as the server URL, authentication details, and context.

By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable.

  1. Export the kubeconfig path:
export KUBECONFIG=~/.kube/gauzy-k8s-demo-kubeconfig.yaml
  2. With the KUBECONFIG variable set, open your terminal and run the command:
kubectl get nodes

This command reaches out to the Kubernetes API server specified in the kubeconfig file. It authenticates using the provided credentials and retrieves information about the nodes in the cluster.

Your output should be similar to the following:

NAME                   STATUS   ROLES    AGE   VERSION
pool-edj6rnz5y-at6lf   Ready    <none>   4h   v1.32.1
pool-edj6rnz5y-at6lq   Ready    <none>   4h   v1.32.1
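If you don't want to re-export KUBECONFIG in every new terminal, you can persist it in your shell profile (a sketch; assumes bash and the kubeconfig filename used above):

```shell
# Persist the kubeconfig selection for future bash sessions.
echo 'export KUBECONFIG=~/.kube/gauzy-k8s-demo-kubeconfig.yaml' >> ~/.bashrc
# Reload the profile in the current shell.
. ~/.bashrc
```

Alternatively, you can merge the file into the default ~/.kube/config so no environment variable is needed at all.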

Step 5: Install helm

Helm is a package manager for Kubernetes that helps you manage complex applications. Think of it like apt/yum/homebrew but for Kubernetes applications.

We need this tool to deploy Traefik; we'll discuss it further in the next step.

Learn how to install Helm by visiting the official documentation at https://helm.sh/docs/intro/install/.

After completing the installation, run the following command to verify.

helm version

# version.BuildInfo{Version:"v3.17.1", GitCommit:"980d8ac1939e39138101364400756af2bdee1da5", GitTreeState:"clean", GoVersion:"go1.23.5"}

Step 6: Using a Load Balancer with Traefik

Traefik is a modern HTTP reverse proxy and load balancer for Kubernetes that handles incoming traffic to your services. It automatically discovers services running in Kubernetes and creates the routing configuration for them.

In this section, you’ll install Traefik into your cluster and prepare it to be used with the certificates managed by cert-manager. We will also set up a load balancer, which will send incoming network traffic to your Traefik service from outside your cluster.

First, you’ll need to add the traefik Helm repository to your available repositories, which will allow Helm to find the traefik package:

helm repo add traefik https://traefik.github.io/charts

Once the command completes, you’ll receive confirmation that the traefik repository has been added to your computer’s Helm repositories:

# Output
"traefik" has been added to your repositories

Next, update your chart repositories:

helm repo update

The output will confirm that the traefik chart repository has been updated:

# Output
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈

Finally, install traefik into your cluster:

helm install traefik traefik/traefik

First, helm install tells Helm that you want to install a new application. The next word traefik is just a name you're giving to this specific installation - you could actually name it anything you want. This name helps you reference this specific installation later if you need to upgrade or uninstall it.

After that, traefik/traefik identifies which application you're installing. The first traefik refers to the Helm repository, while the second refers to the actual chart name.

Once you run the command, output similar to the following will print to the screen:

# Output
NAME: traefik
LAST DEPLOYED: Mon Feb 24 06:38:16 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
traefik with docker.io/traefik:v3.3.3 has been deployed successfully on default namespace !

Please note that the previous command will install Traefik in the default namespace.

Once the Helm chart is installed, Traefik will begin starting up on your cluster. To see whether Traefik is up and running, run kubectl get all to see all the Traefik resources created:

kubectl get all

Your output will appear similar to the output below:

NAME                           READY   STATUS    RESTARTS   AGE
pod/traefik-6ff695b798-d4j8j   1/1     Running   0          3m13s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
service/kubernetes   ClusterIP      10.109.0.1     <none>         443/TCP                      3d22h
service/traefik      LoadBalancer   10.109.1.155   138.68.39.63   80:31023/TCP,443:30257/TCP   3m16s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           3m14s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-6ff695b798   1         1         1       3m15s

Depending on your cluster and when you ran the previous command, some of the names and ages may be different. If you see <pending> under EXTERNAL-IP for your service/traefik, keep running the kubectl get all command until an IP address is listed. The EXTERNAL-IP is the IP address the load balancer is available from on the internet. Once an IP address is listed, make note of it as your traefik_ip_address. You’ll use this address in the next section to set up your domain.
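Instead of re-running the command by hand, you can poll the Service until the IP appears (a sketch; assumes Traefik was installed into the default namespace as above):

```shell
# Poll the traefik Service until the load balancer IP is assigned,
# then print it once it is available.
until kubectl get service traefik \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -qE '^[0-9]'; do
  echo "EXTERNAL-IP still pending..."
  sleep 10
done
kubectl get service traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

The jsonpath expression prints nothing while the IP is pending, so the grep fails and the loop keeps waiting.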

In this section, you installed Traefik into your cluster and have an EXTERNAL-IP you can direct your website traffic to. In the next section, you’ll make the changes to your DNS to send traffic from your domain to the load balancer.

Step 7: Accessing Traefik with Your Domain

Now that you have Traefik set up in your cluster and accessible on the internet with a load balancer, you'll need to update your domain's DNS to point to your Traefik load balancer.

You will need to create two DNS A records, both pointing to your Traefik loadBalancer's EXTERNAL-IP address that you noted in the previous step:

  • api.your-domain.com: This record will direct traffic to the Gauzy API service

  • app.your-domain.com: This record will direct traffic to the Gauzy web application

These DNS records will allow users to access both the API and web application through their respective subdomains. The Traefik load balancer will then handle routing the traffic to the appropriate service within your Kubernetes cluster based on the incoming request's hostname.

Make sure to replace your-domain.com with your actual domain name when creating these records in your DNS provider's configuration panel. After creating the records, it may take some time (usually between a few minutes to 48 hours) for the DNS changes to propagate across the internet.
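Once the records are created, you can check propagation from your terminal (a sketch; replace the placeholder hosts with your real subdomains, and note that dig must be installed):

```shell
# Show what each subdomain currently resolves to; an empty result
# means the record has not propagated yet.
for host in api.your-domain.com app.your-domain.com; do
  resolved=$(dig +short A "$host" | tail -1)
  echo "$host -> ${resolved:-<not resolving yet>}"
done
```

Both hosts should eventually print the traefik_ip_address you noted in the previous step.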

Step 8: Setting Up cert-manager in Your Cluster

Now that your DNS records are configured to point to your Traefik load balancer, we can set up SSL certificates to ensure secure communication for our Gauzy services.

Traditionally, when setting up secure certificates for a website, you would need to generate a certificate signing request and pay a trusted certificate authority to generate a certificate for you. You would then need to configure your web server to use that certificate and remember to go through that same process every year to keep your certificates up-to-date.

However, with the creation of Let’s Encrypt in 2014, it's now possible to acquire free certificates through an automated process. These certificates are only valid for a few months instead of a year, though, so using an automated system to renew those certificates is a requirement. To handle that, you'll use cert-manager, a service designed to run in Kubernetes that automatically manages the lifecycle of your certificates. This will ensure that both your Gauzy API (api.your-domain.com) and web application (app.your-domain.com) have valid SSL certificates that are automatically renewed when needed.

In this section, you will set up cert-manager to run in your cluster in its own cert-manager namespace.

First, install cert-manager using kubectl with cert-manager’s release file:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.0/cert-manager.yaml

By default, cert-manager will install in its own namespace named cert-manager. As the file is applied, a number of resources will be created in your cluster, which will appear in your output (some of the output is removed due to length):

# OUTPUT
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created

# some output excluded

deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

To verify our installation, check the cert-manager Namespace for running pods:

kubectl get pods --namespace cert-manager
# Output
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7979fbf6b6-twpg6              1/1     Running   0          15m
cert-manager-cainjector-68b64d44c7-t87t7   1/1     Running   0          15m
cert-manager-webhook-ff897cd5d-rznw6       1/1     Running   0          15m

This indicates that the cert-manager installation succeeded.

Now we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we’ll use the Let’s Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

Let’s create a test ClusterIssuer to make sure the certificate provisioning mechanism is functioning correctly. A ClusterIssuer is not namespace-scoped and can be used by Certificate resources in any namespace.

Open a file named staging_issuer.yaml in your favorite text editor:

nano staging_issuer.yaml

Paste in the following ClusterIssuer manifest:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik

Here we specify that we’d like to create a ClusterIssuer called letsencrypt-staging, and use the Let’s Encrypt staging server. We’ll later use the production server to roll out our certificates, but the production server rate-limits requests made against it, so for testing purposes you should use the staging URL.

We then specify an email address to register the certificate, and create a Kubernetes Secret called letsencrypt-staging to store the ACME account’s private key. We also use the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

Roll out the ClusterIssuer using kubectl:

kubectl create -f staging_issuer.yaml

You should see the following output:

# Output
clusterissuer.cert-manager.io/letsencrypt-staging created

We’ll now repeat this process to create the production ClusterIssuer. Note that certificates will only be created after annotating and updating the Ingress resource provisioned in the previous step.

Open a file called prod_issuer.yaml in your favorite editor:

nano prod_issuer.yaml

Paste in the following manifest:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik

Note the different ACME server URL, and the letsencrypt-prod secret key name.

When you’re done editing, save and close the file.

Roll out this Issuer using kubectl:

kubectl create -f prod_issuer.yaml

You should see the following output:

# Output
clusterissuer.cert-manager.io/letsencrypt-prod created

With our Let's Encrypt staging and production ClusterIssuers set up, we're now ready to configure Gauzy Services. Next, we will create the Ingress Resource and enable TLS encryption for the paths api.your-domain.com and app.your-domain.com.

In the next section, we will set up the Gauzy API service and enable TLS encryption for the API domain.

Step 9: Setup Gauzy API in the Kubernetes cluster.

As mentioned earlier, Gauzy operates through two main services: the API service (backend) and the WEBAPP service (frontend). These services correspond to the DNS records we created (api.your-domain.com and app.your-domain.com respectively). In this section, we'll focus on deploying the API service, which serves as the backend of our application, to our Kubernetes cluster.

The API service will handle all the business logic and data operations, while being securely exposed through Traefik and protected with automatically managed SSL certificates. Let's proceed with the deployment configuration.

Open a file called gauzy-api.yaml in your favorite editor:

nano gauzy-api.yaml

Paste in the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gauzy-prod-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gauzy-prod-api
  template:
    metadata:
      labels:
        app: gauzy-prod-api
    spec:
      containers:
        - name: gauzy-prod-api
          image: ghcr.io/ever-co/gauzy-api:latest
          resources:
            requests:
              memory: "1536Mi"
              cpu: "1000m"
            limits:
              memory: "2048Mi"
          env:
            - name: API_HOST
              value: 0.0.0.0
            - name: DEMO
              value: "false"
            - name: NODE_ENV
              value: "production"
            - name: ADMIN_PASSWORD_RESET
              value: "true"
            - name: LOG_LEVEL
              value: "info"
            - name: API_BASE_URL
              value: "https://api.your-domain.com"
            - name: CLIENT_BASE_URL
              value: "https://app.your-domain.com"

            - name: DB_TYPE
              value: "postgres"
            - name: "DB_ORM"
              value: "typeorm"
            - name: DB_HOST
              value: "private-ever-gauzy-db-demo-do-user-8531843-0.f.db.ondigitalocean.com"
            - name: DB_SSL_MODE
              value: "true"
            - name: DB_CA_CERT
              value: |
                LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVRVENDQXFtZ0F3SUJBZ0lVQzA5THo4WWVo
                ..........
                RGRaUEdjakRoWGdUY3RSYm5TZ0N1c1FFRXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
            - name: DB_USER
              value: "doadmin"
            - name: DB_PASS
              value: "********" # Your password here
            - name: DB_NAME
              value: "defaultdb"
            - name: DB_PORT
              value: "25060"
            - name: DB_POOL_SIZE
              value: "10"
            - name: DB_POOL_SIZE_KNEX
              value: "10"

            - name: CLOUD_PROVIDER
              value: "DO"

            - name: REDIS_ENABLED
              value: "false"

            - name: DEFAULT_CURRENCY
              value: "USD"

            - name: ALLOW_SUPER_ADMIN_ROLE
              value: "true"

            - name: FILE_PROVIDER
              value: "LOCAL"

            - name: GAUZY_AI_GRAPHQL_ENDPOINT
              value: "https://api.your-domain.com/graphql"
            - name: GAUZY_AI_REST_ENDPOINT
              value: "https://api.your-domain.com/api"

            - name: MAGIC_CODE_EXPIRATION_TIME
              value: "600"

            - name: APP_NAME
              value: "Gauzy"
            - name: APP_LOGO
              value: "https://app.your-domain.com/assets/images/logos/logo_Gauzy.png"
            - name: APP_SIGNATURE
              value: "Gauzy"
            - name: APP_LINK
              value: "https://app.your-domain.com"
            - name: APP_EMAIL_CONFIRMATION_URL
              value: "https://app.your-domain.com/#/auth/confirm-email"
            - name: APP_MAGIC_SIGN_URL
              value: "https://app.your-domain.com/#/auth/magic-sign-in"
            - name: COMPANY_LINK
              value: "https://your-company-domain.com"
            - name: COMPANY_NAME
              value: "Your Company Name"

          ports:
            - containerPort: 3000
              protocol: TCP

This Kubernetes manifest defines a Deployment for the Gauzy API service in a production environment. Here's a breakdown of its key components:

The deployment named gauzy-prod-api runs a single replica of the container using the latest Gauzy API image from GitHub Container Registry. It's configured with specific resource requirements: requesting 1.5GB of memory (with a 2GB limit) and 1 CPU core.

The container configuration includes numerous environment variables that determine how the API service operates:

  • Base URLs are set to the domains we configured earlier (api.your-domain.com and app.your-domain.com)

  • Database configuration points to a PostgreSQL instance on DigitalOcean, including SSL settings

  • Application-specific settings like currency, admin access, and file storage

  • Various URLs for the application's frontend features (email confirmation, magic sign-in, etc.)

The API service exposes port 3000 for TCP traffic, which will be the endpoint that Traefik routes traffic to when requests come to api.your-domain.com.

You'll need to replace sensitive values like the database password and customize the domain names and company information before applying this manifest to your cluster:

  1. Update the database configuration to align with your database details established in the previous section (create database):
- DB_HOST -> coordinate with "host" key
- DB_USER -> coordinate with "username" key
- DB_PASS -> coordinate with "password" key
- DB_NAME -> coordinate with "database" key
- DB_PORT -> coordinate with "port" key
- DB_POOL_SIZE -> The pool size we set when creating our Connection Pool (10)
  2. In the previous step (creating the database), we downloaded our database connection certificate, named ca-certificate.crt. We need to convert its content to base64 and assign it to the DB_CA_CERT variable. Run the following command to convert the certificate to base64 format:
base64 ca-certificate.crt

You should see the Base64 value of our certificate file printed in the terminal. Copy this value and update the DB_CA_CERT variable with it.
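To be safe, you can verify that the encoded value decodes back to the original certificate byte-for-byte (a sketch; assumes GNU coreutils, whose base64 uses the -d flag for decoding):

```shell
# Encode the certificate, then decode it and diff against the original.
base64 ca-certificate.crt > ca-certificate.b64
base64 -d ca-certificate.b64 | diff - ca-certificate.crt \
  && echo "round-trip OK"
```

If diff prints nothing and you see "round-trip OK", the base64 value is safe to paste into DB_CA_CERT.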

  3. Don’t forget to replace the following variables with your setup values.

    1. api.your-domain.com your API service domain.

    2. app.your-domain.com your WebApp service domain.

    3. your-company-domain.com your company domain name, if applicable.

    4. You Company Name your company name.

  4. You can find all available variables to include in your deployment in Gauzy's environment configuration documentation.

We assumed you are setting up a production deployment, but Gauzy provides different Docker images for different environments such as Demo, Stage, and Production. You can find them here: Gauzy GitHub Packages.

After creating the gauzy-api.yaml file and ensuring you have set valid values that correspond to your setup, let's apply our deployment to the Kubernetes cluster:

kubectl apply -f gauzy-api.yaml

This command applies the Kubernetes Deployment configuration defined in the gauzy-api.yaml file. It creates or updates the Gauzy API Deployment in your cluster.

You should see the following output:

# Output
deployment.apps/gauzy-prod-api created

Verifying the Deployment:

kubectl get deploy

Use this command to check the status of your Deployments. You should see the following output:

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
gauzy-prod-api   1/1     1            1           17m
traefik          1/1     1            1           4h30m

  • NAME: The name of the Deployment

  • READY: The number of ready replicas / total desired replicas

  • UP-TO-DATE: The number of replicas updated to the latest version

  • AVAILABLE: The number of replicas available to users

  • AGE: How long the Deployment has been running

Checking Deployment Logs:

kubectl logs deploy/gauzy-prod-api --tail 10

This command retrieves the last 10 lines of logs from the Gauzy API Deployment. It's useful for verifying successful startup or debugging issues.

You should see output similar to the following:

[Nest] 1  - 02/24/2025, 9:00:35 AM     LOG [RouterExplorer] Mapped {/api/dashboard-widget/count, GET} route +0ms
[Nest] 1  - 02/24/2025, 9:00:35 AM     LOG [RouterExplorer] Mapped {/api/dashboard-widget/pagination, GET} route +0ms
...
Application is running on http://0.0.0.0:3000
Listening at http://0.0.0.0:3000/api
✔ API Running: 29.132s
✔ Total API Startup Time: 29.133s

This log output shows the API routes being mapped and confirms that the application has started successfully.

Note: If you encounter errors, use the same kubectl logs command to view more detailed logs for troubleshooting. Adjust the --tail value or remove it entirely to see more log entries if needed.

Step 10: Expose the Gauzy API using Kubernetes Service and Ingress.

With our Gauzy API deployment set up, let's create a Kubernetes Service to expose the API within the cluster and potentially to external traffic. Here's how we can do that:

  1. Create a file named gauzy-api-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: gauzy-api-service
spec:
  selector:
    app: gauzy-prod-api # This is the name for our deployment specification: template -> metadata -> labels -> app.
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
  2. Apply the Service configuration:
kubectl apply -f gauzy-api-service.yaml

This Service will:

  • Select all pods with the label app: gauzy-prod-api

  • Forward traffic from port 80 to the container port 3000

  • Be accessible within the cluster using the service name gauzy-api-service

  3. Verify the Service creation:
kubectl get services

You should see your new service listed along with other existing services.

NAME                TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
gauzy-api-service   ClusterIP      10.109.3.62    <none>         80/TCP                       5m
kubernetes          ClusterIP      10.109.0.1     <none>         443/TCP                      4d7h
traefik             LoadBalancer   10.109.1.155   138.68.39.63   80:31023/TCP,443:30257/TCP   9h

With this API Service in place, other components within your Kubernetes cluster can now communicate with the Gauzy API using the service name gauzy-api-service.
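Before adding the Ingress, you can optionally confirm the Service answers from your own machine by port-forwarding it locally (a sketch; the /api path is the same health check used via the browser later in this guide):

```shell
# Forward local port 8080 to the Service's port 80 in the background.
kubectl port-forward service/gauzy-api-service 8080:80 >/dev/null 2>&1 &
PF_PID=$!
sleep 3
# Hit the API through the tunnel, then tear the tunnel down.
curl -s http://localhost:8080/api
kill "$PF_PID"
```

A JSON response here confirms the Service selector matches the API pods before any external routing is involved.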

Now let's create an Ingress based on Traefik to expose our API service to be accessed externally:

  1. Create a file named gauzy-api-ingress.yaml with the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gauzy-api-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod

spec:
  ingressClassName: traefik
  rules:
    - host: api.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gauzy-api-service # Previously created our service name.
                port:
                  number: 80
  tls:
    - secretName: letsencrypt-cert-api # Unique Secret for Each Ingress
      hosts:
        - api.your-domain.com

Remember to replace api.your-domain.com with your actual domain in the Ingress configuration.

  2. Apply the Ingress configuration:
kubectl apply -f gauzy-api-ingress.yaml

This Ingress:

  • Uses Traefik as the ingress controller.

  • Is configured for HTTPS (websecure entrypoint) with TLS enabled.

  • Specifies the cluster-issuer via the cert-manager.io/cluster-issuer: letsencrypt-prod annotation, which should correspond to one of the issuers created in a previous step.

  • Routes traffic for api.your-domain.com to the gauzy-api-service on port 80.

  • Uses the traefik.ingress.kubernetes.io/router.entrypoints annotation to tell Traefik that traffic for this Ingress should be available via the websecure entrypoint. This is an entrypoint the Helm chart configures by default to handle HTTPS traffic; it listens on traefik_ip_address port 443, the default for HTTPS.

  3. Verify the Ingress creation:
kubectl get ingress

You should see output similar to the following:

NAME                CLASS    HOSTS                    ADDRESS        PORTS     AGE
gauzy-api-ingress   traefik  api.your-domain.com      138.68.39.63   80, 443   2m34s

  4. Verify the Gauzy API via Browser:

    After successfully creating our Service and Ingress, we can now interact with the Gauzy API. Open your Gauzy API domain (https://api.your-domain.com/api) in a browser; you should see a successful result, as shown in the picture below.
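You can also confirm that cert-manager issued a valid certificate for the API host (a sketch; the first command inspects the cert-manager Certificate resource created from the Ingress TLS section, the second reads the certificate actually served on port 443 — replace the placeholder domain):

```shell
# Check the Certificate resource created from the Ingress TLS section;
# READY should be True once issuance completes.
kubectl get certificate letsencrypt-cert-api
# Inspect the certificate served on port 443: issuer and validity dates.
echo | openssl s_client -connect api.your-domain.com:443 \
    -servername api.your-domain.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

The issuer line should reference Let's Encrypt once the production certificate has replaced any staging one.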

In this section, we created the Kubernetes Service to expose the API within the cluster, allowing the Ingress Resource to redirect external traffic to it.

Step 11: Setup Gauzy WEBAPP in the Kubernetes cluster.

With the Gauzy API set up, let's proceed to configure Gauzy Web within the Kubernetes cluster.

The webapp service will serve the Gauzy frontend application, providing the user interface and interacting with the API service. Like the API, it will be securely exposed through Traefik and protected with automatically managed SSL certificates. This setup ensures a secure and efficient delivery of the Gauzy web application to end-users.

Let's proceed with the deployment configuration for the webapp service, following a similar pattern to what we've done for the API, but tailored for frontend hosting requirements.

Open a file called gauzy-webapp.yaml in your favorite editor:

nano gauzy-webapp.yaml

Paste in the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gauzy-prod-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gauzy-prod-webapp
  template:
    metadata:
      labels:
        app: gauzy-prod-webapp
    spec:
      containers:
        - name: gauzy-prod-webapp
          image: ghcr.io/ever-co/gauzy-webapp:latest
          env:
            - name: DEMO
              value: "false"
            - name: API_BASE_URL
              value: "https://api.your-domain.com"
            - name: CLIENT_BASE_URL
              value: "https://app.your-domain.com"
            - name: DEFAULT_CURRENCY
              value: "USD"

          ports:
            - containerPort: 4200
              protocol: TCP

This Kubernetes manifest defines a Deployment for the Gauzy WEBAPP service in a production environment. Here's a breakdown of its key components:

The deployment named gauzy-prod-webapp runs a single replica of the container using the latest Gauzy WebApp image from the GitHub Container Registry.

The container configuration includes several environment variables that determine how the webapp operates:

  • Base URLs are set to the domains we configured earlier (api.your-domain.com and app.your-domain.com)

  • Application-specific settings like currency.

The container exposes port 4200 for TCP traffic, which is the endpoint Traefik routes traffic to when requests come in for app.your-domain.com.

Don’t forget to replace the following placeholders with the values from your setup:

  1. api.your-domain.com: your API service domain.

  2. app.your-domain.com: your WebApp service domain.
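As a convenience, the placeholder substitution can be scripted. The sketch below is illustrative only: DOMAIN and the sample printf line are stand-ins; in practice you would run the same sed expression over gauzy-webapp.yaml itself.

```shell
# Illustrative helper: render your real domain into the manifest before applying it.
DOMAIN="example.com"   # replace with your actual domain
rendered=$(printf 'value: "https://api.your-domain.com"\n' \
  | sed "s/your-domain\.com/${DOMAIN}/g")
echo "$rendered"
# In practice: sed "s/your-domain\.com/${DOMAIN}/g" gauzy-webapp.yaml | kubectl apply -f -
```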

You can find all available variables to include in your deployment here:

We assumed you are setting up a production deployment, but Gauzy provides different Docker images for different environments, such as Demo, Stage, and Production. You can find them here: Gauzy GitHub Packages.

After creating the gauzy-webapp.yaml file and ensuring you have set valid values that correspond to your setup, let's apply our deployment to the Kubernetes cluster:

kubectl apply -f gauzy-webapp.yaml

This command applies the Kubernetes Deployment configuration defined in the gauzy-webapp.yaml file. It creates or updates the Gauzy WEBAPP Deployment in your cluster.

You should see the following output:

# Output
deployment.apps/gauzy-prod-webapp created

Verifying the Deployment:

kubectl get deploy

Use this command to check the status of your Deployments. You should see output similar to the following:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
gauzy-prod-api      1/1     1            1           4h42m
gauzy-prod-webapp   1/1     1            1           2m18s
traefik             1/1     1            1           8h
  • NAME: The name of the Deployment

  • READY: The number of ready replicas / total desired replicas

  • UP-TO-DATE: The number of replicas updated to the latest version

  • AVAILABLE: The number of replicas available to users

  • AGE: How long the Deployment has been running
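If READY stays at 0/1, the usual next step is to inspect the Deployment's events and the pod logs. A short troubleshooting sketch (these commands require your live cluster and assume the names used above):

```shell
# Wait for the rollout to complete, or time out if it stalls.
kubectl rollout status deployment/gauzy-prod-webapp --timeout=120s
# Events here surface problems such as image pull failures.
kubectl describe deployment gauzy-prod-webapp
# Tail logs from all pods matching the deployment's label.
kubectl logs -l app=gauzy-prod-webapp --tail=50
```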

Step 12: Expose the Gauzy WEBAPP using Kubernetes Service and Ingress.

With our Gauzy WEBAPP deployment set up, let's create a Kubernetes Service to expose the WEBAPP within the cluster so the Ingress can route external traffic to it. Here's how we can do that:

  1. Create a file named gauzy-webapp-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: gauzy-webapp-service
spec:
  selector:
    app: gauzy-prod-webapp # This is the name for our deployment specification: template -> metadata -> labels -> app.
  ports:
    - protocol: TCP
      port: 80
      targetPort: 4200
  type: ClusterIP
  1. Apply the Service configuration:
kubectl apply -f gauzy-webapp-service.yaml

This Service will:

  • Select all pods with the label app: gauzy-prod-webapp

  • Forward traffic from port 80 to the container port 4200

  • Be accessible within the cluster using the service name gauzy-webapp-service
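Inside the cluster, a ClusterIP Service also gets a predictable fully qualified DNS name. Assuming the Service was created in the default namespace, the name is composed as follows:

```shell
# Compose the in-cluster DNS name for the webapp Service.
# Assumes the "default" namespace; adjust ns if you deployed elsewhere.
svc="gauzy-webapp-service"
ns="default"
webapp_url="http://${svc}.${ns}.svc.cluster.local:80"
echo "$webapp_url"
```

Any pod in the cluster can reach the webapp at this URL, which is how the Ingress controller forwards traffic to it.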

  1. Verify the Service creation:
kubectl get services

You should see your new service listed along with other existing services.

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
gauzy-api-service      ClusterIP      10.109.3.62    <none>         80/TCP                       128m
gauzy-webapp-service   ClusterIP      10.109.27.44   <none>         80/TCP                       5s
kubernetes             ClusterIP      10.109.0.1     <none>         443/TCP                      4d7h
traefik                LoadBalancer   10.109.1.155   138.68.39.63   80:31023/TCP,443:30257/TCP   9h

With this WEBAPP Service in place, other components within your Kubernetes cluster can now communicate with the Gauzy WEBAPP using the service name gauzy-webapp-service.

Now let's create an Ingress based on Traefik to expose our WEBAPP service to external traffic:

  1. Create a file named gauzy-webapp-ingress.yaml with the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gauzy-webapp-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod

spec:
  ingressClassName: traefik
  rules:
    - host: app.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gauzy-webapp-service # Previously created our service name.
                port:
                  number: 80
  tls:
    - secretName: letsencrypt-cert-webapp # Unique Secret for Each Ingress
      hosts:
        - app.your-domain.com

Remember to replace app.your-domain.com with your actual domain in the Ingress configuration.

  1. Apply the Ingress configuration:
kubectl apply -f gauzy-webapp-ingress.yaml

This Ingress will:

  • Use Traefik as the ingress controller.

  • Serve HTTPS with TLS enabled.

  • Specify the cluster issuer via the cert-manager.io/cluster-issuer: letsencrypt-prod annotation, which should correspond to one of the issuers created in a previous step.

  • Route traffic for app.your-domain.com to the gauzy-webapp-service on port 80.

  • Attach to Traefik's websecure entrypoint via the traefik.ingress.kubernetes.io/router.entrypoints annotation. This entrypoint, which the Helm chart configures by default to handle HTTPS traffic, listens on the Traefik IP address at port 443, the default HTTPS port.
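Because of the cert-manager.io/cluster-issuer annotation, cert-manager will create a Certificate resource and store the issued certificate in the letsencrypt-cert-webapp Secret named in the TLS section. You can watch the issuance with the following commands (live cluster required):

```shell
# List certificates; READY becomes True once Let's Encrypt signs the certificate.
kubectl get certificate
# Events here show the progress of the ACME challenge if issuance is slow or failing.
kubectl describe certificate letsencrypt-cert-webapp
```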

  1. Verify the Ingress creation:
kubectl get ingress

You should see output similar to the following:

NAME                   CLASS     HOSTS                 ADDRESS        PORTS     AGE
gauzy-api-ingress      traefik   api.your-domain.com   138.68.39.63   80, 443   99m
gauzy-webapp-ingress   traefik   app.your-domain.com   138.68.39.63   80, 443   12s
  1. Verify the Gauzy WEBAPP via Browser:

    After successfully creating our Service and Ingress, we can now access the Gauzy web application. Open your WebApp domain (https://app.your-domain.com) in a browser; you should see the login page displayed as shown in the image below.

In this section, we created the Kubernetes Service to expose the WEBAPP within the cluster, allowing the Ingress Resource to redirect external traffic to it.

Conclusion

Congratulations! You have successfully set up Ever Gauzy on a Kubernetes cluster in DigitalOcean. Let's recap the key steps we've covered:

  1. Created a Kubernetes cluster on Digital Ocean

  2. Created a managed PostgreSQL database on DigitalOcean

  3. Set up kubectl to manage your cluster

  4. Installed and configured Traefik as an ingress controller

  5. Deployed Ever Gauzy components (API and web app) to your cluster

  6. Configured DNS settings to route traffic to your application

By following this guide, you've deployed a scalable, production-ready instance of Ever Gauzy.

This setup leverages the power of Kubernetes for orchestration and Traefik for efficient traffic routing, all hosted on DigitalOcean's reliable infrastructure.

Some key benefits of this deployment method include:

  • Scalability: Easily scale your application by adjusting the number of replicas in your Kubernetes deployments.

  • Reliability: Kubernetes ensures high availability by automatically managing and replacing unhealthy pods.

  • Flexibility: This setup allows for easy updates and maintenance of your Ever Gauzy instance.

  • Cost-effective: DigitalOcean provides a cost-efficient platform for hosting Kubernetes clusters.
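As a sketch of the scalability point above, the webapp Deployment created earlier can be scaled without editing any YAML (run against your live cluster):

```shell
# Scale the webapp to 3 replicas; Kubernetes rolls out the extra pods automatically.
kubectl scale deployment gauzy-prod-webapp --replicas=3
# Verify: the READY column should reach 3/3 once all pods are up.
kubectl get deploy gauzy-prod-webapp
```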

Remember to regularly update your Ever Gauzy components and Kubernetes configurations to ensure you're running the latest versions with the most up-to-date features and security patches.

For ongoing management, make sure to:

  • Monitor your cluster's health and performance

  • Regularly back up your data

  • Keep your Kubernetes version updated

  • Stay informed about Ever Gauzy updates and new features

With this setup, you're well-positioned to leverage Ever Gauzy for your business needs while benefiting from the robustness and flexibility of a Kubernetes-based deployment on DigitalOcean.
