1. Developer Environment Setup

1.1. Tested Versions

Component                    Tested version

Linux                        Ubuntu 22.04
Docker                       29.1.5
Docker Compose CLI plugin    v5.0.1
Node.js (via nvm)            22
nvm                          v0.40.2
Yarn (Corepack-managed)      1.22.21
Maven                        3.9.8

1.2. Prerequisites

Linux operating system (Virtual Machine) with approx. 100GB of disk space. Tested with Ubuntu 22.04.

1.3. Packages

1.3.1. Docker (Install on Linux)

First, add Docker's official repository so that apt can install Docker and Docker Compose:

# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

sudo apt update

Use the following command to list the available Docker package versions and find the one matching the tested version for your Ubuntu release:

sudo apt-cache policy docker-ce | grep 29.1.5

From the output, copy the full version string for your Ubuntu release and use it in the install command below (the version strings shown are examples; substitute the ones you copied):

sudo apt-get install docker-ce=5:25.0.1-1~ubuntu.22.04~jammy docker-ce-cli=5:25.0.1-1~ubuntu.22.04~jammy containerd.io

1.3.2. Docker Compose (Install on Linux)

This step requires the Docker repository to be set up in the steps above.

sudo apt-get install docker-compose-plugin

1.3.3. Node.js

Install Node Version Manager (NVM):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash

Install node 22

nvm install 22
nvm use 22

1.3.4. Yarn

corepack enable
yarn set version 1.22.21

1.3.5. Maven

sudo apt install maven

1.3.6. Git

sudo apt install git

1.4. Setup Hosts file

To access the DINA services locally, you will need to add entries to your hosts file.

On Linux and macOS, this file is located at /etc/hosts. You should have the following entries, all pointing to the fixed IP of the Traefik container:

192.19.33.9 dina.local
192.19.33.9 api.dina.local
192.19.33.9 keycloak.dina.local

1.5. Local certificates

By default, everything will be accessible over HTTPS. To allow the local browser to work without issues and warnings, local development certificates (https://aafc-bicoe.github.io/dina-local-deployment/#_local_certificates) can be installed.

1.6. Clone Repositories

For local development, you will need to clone the dina-local-deployment repository, which contains the docker-compose files and scripts to run the local environment.

git clone --branch master https://github.com/AAFC-BICoE/dina-local-deployment.git

For UI development, you will need the UI repository:

git clone --branch dev https://github.com/AAFC-BICoE/dina-ui.git
cd dina-ui
yarn install
yarn workspace dina-ui build

For API development, you will need to clone the respective API you are working on. Here are all of the main DINA APIs:

git clone --branch dev https://github.com/AAFC-BICoE/object-store-api.git
git clone --branch dev https://github.com/AAFC-BICoE/seqdb-api.git
git clone --branch dev https://github.com/AAFC-BICoE/agent-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-user-api.git
git clone --branch dev https://github.com/AAFC-BICoE/natural-history-collection-api.git
git clone --branch dev https://github.com/AAFC-BICoE/loan-transaction-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-search-api.git

1.7. Visual Studio Code

Visual Studio Code can be installed directly on your Virtual Machine or on your Windows machine using the VSCode Remote instructions below.

Once you have VSCode downloaded, you can run this command to install the recommended extensions:

code --install-extension vscjava.vscode-java-pack
code --install-extension msjsdiag.debugger-for-chrome
code --install-extension ms-azuretools.vscode-containers
code --install-extension gabrielbb.vscode-lombok
code --install-extension esbenp.prettier-vscode
code --install-extension dbaeumer.vscode-eslint
code --install-extension firsttris.vscode-jest-runner
code --install-extension alefragnani.project-manager
code --install-extension firefox-devtools.vscode-firefox-debug
code --install-extension bruno-api-client.bruno

1.7.2. VSCode Remote (Optional)

You can avoid running a slow IDE inside your virtual machine by running VS Code on your host machine instead and connecting to the dev environment using VS Code Remote.

  1. Download VSCode for Windows

  2. Install the VSCode SSH Remote Extension

    code --install-extension ms-vscode-remote.remote-ssh
  3. Install Open-SSH Server on your VM

    sudo apt-get install openssh-server
    sudo systemctl enable ssh
    sudo systemctl start ssh
  4. Setup Port Forwarding on your VM

    In order for your Windows VSCode to communicate with your Virtual Machine, you will need to allow for SSH port forwarding.

    From the VirtualBox Manager window, click Settings > Network tab > Advanced > Port Forwarding > the green + icon and add the following rule:

    Table 1. SSH Port Forwarding Rule

    Name    Protocol    Host IP        Host Port    Guest IP       Guest Port
    SSH     TCP         Leave Blank    22           Leave Blank    22

    Click OK on both windows to finish setting up the port.

  5. Configure SSH Remote Extension

    Once you have the SSH Remote extension installed, open the command palette (CTRL + SHIFT + P) and search for >Remote-SSH: Add New SSH Host….

    In the popup at the top of the screen you can enter the SSH command to connect to your virtual machine (Replacing USERNAME with your Ubuntu username):

    ssh USERNAME@localhost

Now you are connected to your Virtual Machine. You can open projects and use the terminal as if you were in the VM.

1.7.3. API Debugging (Optional)

To debug an API while using the local deployment, you can use the docker-compose.debug.yml config which can be enabled from the start_stop_dina.sh script.

Once enabled, you will be able to attach your VSCode to an API. The debugging ports can be found in the .env file.

Also remember that if you are running VSCode remotely, you will need to port forward the debugging port.
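
One way to do this (a sketch only, assuming the Collection API debug port 5002 used in the example below; adjust to the port of the API you are debugging) is an SSH local forward; a VirtualBox port forwarding rule like the SSH one above also works:

# Example: forward debug port 5002 from the VM to your host over SSH
ssh -L 5002:localhost:5002 USERNAME@localhost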

Here is an example of a launch.json that can be added to an API for VSCode to attach itself to the java debugger for a specific API:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "java",
      "name": "Attach to Collection API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5002"
    }
  ]
}

Just ensure that the port lines up with the correct API and that the port is exposed so VSCode can attach.

1.7.4. UI Debugging (Optional)

You will need the firefox-devtools.vscode-firefox-debug extension installed in order to debug the UI; it allows you to debug the UI using the Firefox browser.

For UI debugging, you will need Firefox and its Remote Debugging feature. The following preferences need to be changed in Firefox to enable remote debugging:

  1. Open Firefox and type about:config in the address bar. Click on "Accept the Risk and Continue".

  2. Search and change the following settings:

    `devtools.debugger.remote-enabled` to `true`
    `devtools.chrome.enabled` to `true`
    `devtools.debugger.prompt-connection` to `false`
    `devtools.debugger.remote-port` to `6000`
    `devtools.debugger.force-local` to `false`
  3. Close Firefox.

  4. Open a terminal and run the following command to start Firefox with remote debugging enabled:

    firefox -start-debugger-server
  5. Add the following to your launch.json file in order to attach the debugger to the Firefox instance; it can also be found below in the VSCode Launch Configurations section:

    {
      "name": "Attach to Firefox",
      "type": "firefox",
      "request": "attach",
      "host": "localhost",
      "port": 6000,
      "webRoot": "${workspaceFolder}/dina-ui",
      "url": "https://dina.local/",
      "pathMappings": [
        {
          "url": "webpack://_n_e/components",
          "path": "${workspaceFolder}/dina-ui/packages/dina-ui/components"
        },
        {
          "url": "webpack://_n_e/common-ui",
          "path": "${workspaceFolder}/dina-ui/packages/common-ui"
        }
      ]
    },
  6. Optional: if you are running VSCode remotely, you will need to port forward the remote debugging port from your VM to your host machine. This can be done by adding a new port forwarding rule in VirtualBox.

    Table 2. Firefox Remote Debugging Port Forwarding Rule

    Name                        Protocol    Host IP        Host Port    Guest IP       Guest Port
    Firefox Remote Debugging    TCP         Leave Blank    6000         Leave Blank    6000

    Breakpoints can now be set in the DINA UI code and will automatically be hit when the corresponding code is executed in the browser.

1.7.5. VSCode Launch Configurations (Optional)

Here is the complete launch configuration for the DINA API projects using the default ports. This can be edited by going to the "Run and debug" tab in VSCode and clicking the gear icon to edit the launch.json file.

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Firefox",
      "type": "firefox",
      "request": "attach",
      "host": "localhost",
      "port": 6000,
      "webRoot": "${workspaceFolder}/dina-ui",
      "url": "https://dina.local/",
      "pathMappings": [
        {
          "url": "webpack://_n_e/components",
          "path": "${workspaceFolder}/dina-ui/packages/dina-ui/components"
        },
        {
          "url": "webpack://_n_e/common-ui",
          "path": "${workspaceFolder}/dina-ui/packages/common-ui"
        }
      ]
    },
    {
      "type": "java",
      "name": "Attach to Agent API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5001"
    },
    {
      "type": "java",
      "name": "Attach to Collection API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5002"
    },
    {
      "type": "java",
      "name": "Attach to Loan Transaction API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5003"
    },
    {
      "type": "java",
      "name": "Attach to User API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5004"
    },
    {
      "type": "java",
      "name": "Attach to Object Store API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5005"
    },
    {
      "type": "java",
      "name": "Attach to SeqDB API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5006"
    },
    {
      "type": "java",
      "name": "Attach to Export API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5007"
    },
    {
      "type": "java",
      "name": "Attach to Search CLI API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5008"
    },
    {
      "type": "java",
      "name": "Attach to Search WS API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": "5009"
    }
  ]
}

2. DINA-UI Development (Hot Reloading)

  1. Configure the local dina-ui repo path

    • Variable: DINA_UI_REPO_DIRECTORY in the .env file

    • Default: points to ~/dina-ui

    • Ensure this path points to your local dina-ui repo clone; this is the directory that hot-reloading will monitor for changes (see the example after this list).

  2. Enable the development profile

    • In the start_stop_dina.sh script, enable the docker-compose.dev.yml config by uncommenting the line that adds the dev docker-compose file:

...
DINA_CONFIGS+=('docker-compose.dev.yml')
...
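
For reference, the corresponding .env entry from step 1 could look like this (a sketch; adjust the path to wherever you cloned dina-ui):

DINA_UI_REPO_DIRECTORY=/home/youruser/dina-ui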

Next, follow the instructions below to start the application.

3. Get Everything Running Locally

Once you have your development environment set up, you can run the DINA application locally.

The dina-local-deployment project contains a start_stop_dina.sh script that acts as a wrapper for Docker Compose. It allows you to select specific modules and configuration layers without typing out long docker compose commands manually.

3.1. Configuration

To configure your local stack, open start_stop_dina.sh in your text editor. There are two sections that you can modify.

Each line can be commented or uncommented to include or exclude that specific module or configuration.

3.1.1. DINA Module Configuration

The DINA_MODULES section defines which containers you wish to run.

user_api: Starts the User API service, which handles user authentication and management. This service is required for most other services to function.

object_store_api: Starts the Object Store API service, which handles file uploads, metadata, and storage (images, documents, etc.).

agent_api: Starts the Agent API service, which manages Agents (People and Organizations).

search_api: Starts the Search API and Elasticsearch-related containers. Provides the search functionality, indexing data into Elasticsearch.

seqdb_api: Starts the SeqDB API service, which manages molecular and sequence database information.

export_api: Starts the Export API service, which handles data export requests and generation.

loan_transaction_api: Starts the Loan Transaction API service, which manages loans and transactions of specimens.

kibana: Starts the Kibana dashboard. Useful for visualizing data in Elasticsearch.

prometheus: Starts Prometheus for scraping and monitoring metrics from the running services.

Important: Enabling too many modules may lead to high resource consumption on your local machine. Only enable the modules you need for your current development work.
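
For example, the module selection in start_stop_dina.sh looks roughly like the following (a sketch only; the exact lines and comments in the script may differ). Commented lines are disabled, uncommented lines are enabled:

DINA_MODULES+=('user_api')
DINA_MODULES+=('object_store_api')
DINA_MODULES+=('agent_api')
#DINA_MODULES+=('seqdb_api')        # disabled - not needed for this work
#DINA_MODULES+=('kibana')           # disabled - not needed for this work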

3.1.2. DINA Configs

The DINA_CONFIGS section defines which docker-compose configuration files to include when starting the stack. Each configuration file adds or overrides settings in the base configuration.

docker-compose.base.yml: Required. Defines the core services, networks, and volumes. This is the foundation of the deployment.

docker-compose.local.yml: Required. Defines the core services, networks, and volumes for a local development environment. This is where you can see the specific version of each service used for local deployment.

docker-compose.dev.yml: Optional. Enables hot-reloading for the dina-ui service. Useful for frontend development.

docker-compose.debug.yml: Optional. Opens remote debugging ports on the Java containers. Enable this to attach a debugger from your IDE.

message-producing-override: Optional. Configures services to emit RabbitMQ messages upon data changes. Recommended for development purposes.

persistence-override: Optional. Configures databases to use persistent volumes on your host machine. Use this if you want your data to survive after the stack is brought down.

keycloak/docker-compose.enable-dev-user.yml: Optional. Disables Keycloak and automatically logs in a development user for easier testing without going through the login process.
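
For example, a typical frontend-development selection (a sketch only; the exact lines in start_stop_dina.sh may differ) keeps the two required configs and adds the optional ones you need:

DINA_CONFIGS+=('docker-compose.base.yml')
DINA_CONFIGS+=('docker-compose.local.yml')
DINA_CONFIGS+=('docker-compose.dev.yml')        # optional: hot-reloading for dina-ui
#DINA_CONFIGS+=('docker-compose.debug.yml')     # optional: remote debugging ports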

3.2. Starting the Stack

To start the DINA application stack, run the following command from the root of the dina-local-deployment project:

./start_stop_dina.sh up

This will start all of the selected modules and configurations and follow the logs in your terminal. To run the stack in detached mode (in the background), use:

./start_stop_dina.sh up -d

3.3. Accessing the Application

After all the components have finished initializing, the UI will be available at https://dina.local/. By default, the following users are included:

  • cnc-su: super-user in the cnc group

  • cnc-user: a user in the cnc group

  • cnc-guest: a guest in the cnc group

  • cnc-ro: a read-only user in the cnc group

  • dina-admin: a dina-admin in the aafc group

The password is the same as the username for all users.

3.4. Stopping the Stack

To stop the DINA application stack, run the following command:

./start_stop_dina.sh stop

4. Instance Configuration

By setting environment variables for the DINA UI container, you can configure certain features and options for the runtime instance.

These settings can be configured in the .env file.

4.1. Instance Mode

The INSTANCE_MODE environment variable specifies the operational mode of the runtime instance.

For example, setting PROD configures the instance for production environments. If it’s set to an instance mode other than PROD, it will be displayed in the application’s header to inform users which mode they are currently using.

If using the Dev User mode (without Keycloak), this will need to be set to developer.

By default, developer is used, which indicates that the server is running in a local development environment.

4.2. Instance Name

The INSTANCE_NAME environment variable is currently used to generate the feedback URL within the UI.

For instance, setting GRDI indicates that this is the GRDI instance of DINA. The feedback link will display a badge on new issues to identify the instance (GRDI) from which the user is reporting the issue.

By default, the instance name is set to AAFC.
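
For example, the relevant .env entries (a sketch using the default values described above):

INSTANCE_MODE=developer
INSTANCE_NAME=AAFC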

4.3. Supported Languages

The SUPPORTED_LANGUAGES_ISO environment variable is a comma-separated list of the ISO language codes that the application will support.

For example:

SUPPORTED_LANGUAGES_ISO=en,fr

This will allow the application to switch between English and French.

Supported languages:

ISO Code    Language
en          English
fr          French
de          German

4.4. Supported Geoinformational Systems

The SUPPORTED_GEOINFORMATION_SYSTEMS environment variable is a comma-separated list of the geoinformational systems supported by the UI.

By default, OSM for OpenStreetMap is used.

Supported systems:

Code    Name
OSM     OpenStreetMap
TGN     The Getty Thesaurus of Geographic Names

When using TGN, a reverse proxy will need to be set up since the service uses the HTTP protocol. The following can be added to the traefik-dynamic.yml file to configure this reverse proxy:

http:
  routers:
    tgn:
      tls: true
      rule: "Host(`localhost`, `dina.local`) && PathPrefix(`/TGNService.asmx`)"
      middlewares:
        - addHost
      service: tgnLb
      priority: 10000
  middlewares:
    addHost:
        headers:
          customRequestHeaders:
            Host: vocabsservices.getty.edu
  services:
    tgnLb:
      loadBalancer:
        servers:
          - url: "http://vocabsservices.getty.edu"

The TGN_SEARCH_BASE_URL environment variable is used to configure the base URL for the reverse proxy. If it is not provided, localhost (dina.local) will be used.
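
For example, the relevant .env entries could look like the following (a sketch; the TGN base URL shown simply points at the local reverse proxy described above and may differ in your setup):

SUPPORTED_GEOINFORMATION_SYSTEMS=OSM,TGN
TGN_SEARCH_BASE_URL=https://dina.local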

5. Using Local Images

Running the local deployment without making any changes will run the API module images published on Docker Hub. For development purposes, you can follow these steps to test your API changes.

For the examples below, we are using Collection-API but the steps are the same for all the other API modules. Check out the dina-ui Development section for live reloading support for UI development.

  1. Clean and build your API changes

    Navigate to the root of the API project you wish to test locally, then run mvn clean install. The -DskipTests argument can be used to skip the tests.

    cd ~/collection-api
    mvn clean install -DskipTests
  2. Build the docker image

    Each of the API modules contains a Dockerfile which can be used to build an image. When building the image, you can provide a tag which will be used in step 3.

    docker build -t collection-api:dev .
  3. Change the image name on the docker-compose.local.yml

    In the docker-compose.local.yml file you can find the API container and change the image to use the tag created in step 2.

      collection-api:
        image: collection-api:dev
        environment:
          keycloak.enabled: "true"
  4. Start/Restart the containers

    Re-run the command you used to start the containers originally (with any settings you used, such as messaging or profiles). You can append -d collection-api to start only that specific container.

    docker-compose -f docker-compose.base.yml -f docker-compose.local.yml up -d collection-api

You are now running your local changes. It’s important to understand that you are running the jar generated from step 1, so if you make more changes on the API, you will need to repeat the steps to test your latest changes.

5.1. Export Docker image to file

Exporting an image from the local registry:

docker save -o collection-api1.tar collection-api:dev

Loading a Docker image from a file:

docker load < collection-api1.tar

6. API Endpoint Testing

6.1. Applications

While any API client can be used, Bruno is our recommended API client and requires no third-party accounts.

Bruno can be downloaded from: https://www.usebruno.com/downloads

Once downloaded, you can open the DINA API collection by clicking the + icon at the top left where it says "Collections". A popup will appear; select "Open Collection", then navigate to the api-client/DINA-API/collection.bru file inside of the dina-local-deployment repo.

You will also need to select the environment variables for the instance you want to connect to; for local DINA development, select DINA Local. The dropdown at the top right of the screen lets you change the environment variables.

For local development, open the Bruno preferences (CTRL + ,). Under the General tab, enable the "Use custom CA certificate" option, then click "Select file" and navigate to /etc/ssl/certs/ca-certificates.crt. (Be sure that you have run the Local Certificates steps so that the DINA local certificates are included in the ca-certificates.crt file.)

6.2. Authentication

If using the Bruno API client with our collection loaded, the Authentication settings will be automatically loaded in.

Based on the environment you have selected, the authentication settings will be pre-configured for you.

When sending a request for the first time, a popup will appear asking you to log in to the DINA application. Use your DINA credentials to log in and obtain an access token. The access token will be automatically added to the request headers for you.

It will also automatically refresh the access token when it expires.

In other API clients, you can setup the authentication using these settings:

Table 3. Authentication Settings

Setting                         Value
Authentication Type             OAuth 2.0
Grant Type                      Authorization Code
Authorization URL / Auth URL    https://dina.local/auth/realms/dina/protocol/openid-connect/auth
Access Token URL                https://dina.local/auth/realms/dina/protocol/openid-connect/token
Client ID                       dina-public
Redirect URL / Callback URL     https://dina.local (some API clients may require this to be empty so they can use their own URL)
Scope                           openid

Then, when sending the request, you should see a popup which will let you log in to the DINA application.

6.3. Logout of Session

Sometimes you might need to log in as a different user for testing; the instructions for clearing your session are below:

To log out of your session in the Bruno application, go to the collection settings by using the gear icon at the top right of the screen, then click Auth to access the authentication settings. At the bottom of the page there is a button to clear the cache.

6.4. Headers

Most of the API POST requests require a specific "Content-Type" header. The following headers are set up to be inherited by every request inside the DINA collection, so they do not need to be configured for each request.

Table 4. Common Headers

Key: Content-Type
Value: application/vnd.api+json
Description: This is only required for POST requests; by default, Postman and Insomnia might use application/json, which will cause an error. This header is already included in the Bruno collection for you.
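
For reference, here is a hedged curl sketch showing the header in use; the endpoint path, resource type, body, and access token are placeholders rather than real values from this deployment:

# Hypothetical example: substitute a real endpoint, resource type, body, and a valid access token
curl -X POST "https://api.dina.local/api/v1/some-resource" \
  -H "Content-Type: application/vnd.api+json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d '{"data": {"type": "some-resource", "attributes": {}}}'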

6.5. Additional Environments

Only the local development environment file is included in the api-client directory. Other environment files can be downloaded from the DINA Wiki under the "Bruno Environments" section.

7. WireMock API Mocking

WireMock can be used to mock API responses for frontend development and testing without a live backend.

7.1. Setup

  1. Configure local hostnames: In docker-compose.local.yml, change the desired API addresses in the dina-ui service environment to wiremock:8080. You can set this for multiple services if needed.

  2. Enable the WireMock profile: In start_stop_dina.sh, enable the wiremock profile and disable the API modules you want to mock.

Example Configuration:

docker-compose.local.yml

services:
  dina-ui:
    environment:
      # ...
      COLLECTION_API_ADDRESS: wiremock:8080  # Mocked API
      AGENT_API_ADDRESS: agent-api:8080      # Live API
      # ...

start_stop_dina.sh

#DINA_MODULES+=('collection_api')   # Disabled - using WireMock instead
DINA_MODULES+=('wiremock')          # Enable this.

7.2. Creating Mocks

Mock responses are organized in the following directory structure:

/wiremock/mappings/
  └── [API folder]/
      └── [Entity folder]/
          └── [mock-definition.json]

To add new mocks:

  1. Create a folder for the API you want to mock (e.g., collection-api, agent-api)

  2. Inside the API folder, create a folder for each entity type (e.g., collecting-event, person)

  3. Add your WireMock stub JSON files in the entity folder

Refer to the WireMock stubbing documentation for details on creating stub definitions.
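
As an illustration, a minimal stub file (a sketch only; the folder, URL, and response body are hypothetical and not taken from the real APIs) placed at /wiremock/mappings/collection-api/collecting-event/list-collecting-events.json could look like:

{
  "request": {
    "method": "GET",
    "urlPath": "/api/v1/collecting-event"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/vnd.api+json"
    },
    "jsonBody": {
      "data": []
    }
  }
}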

7.3. Refresh Mocks

After editing mocks, reload without restarting the WireMock container:

curl -X POST http://localhost:8089/__admin/mappings/reset

8. PostgreSQL

It is possible to query PostgreSQL when the container is running.

Example on the collection database, collection pg schema:

docker exec -i CONTAINER_ID psql -U pguser collection -c "select * from collection.project"

9. Messaging

To enable Messaging based on RabbitMQ, combine the --profile search_api with the override file message-producing-override/docker-compose.override.messageProducer.yml.

docker-compose \
--profile search_api \
-f docker-compose.base.yml \
-f docker-compose.local.yml \
-f message-producing-override/docker-compose.override.messageProducer.yml \
up -d

10. Keycloak

10.1. Dev User

This option will allow the APIs to start without Keycloak but still function with a dev user.

The keycloak/docker-compose.enable-dev-user.yml config needs to be enabled inside of the start_stop_dina.sh:

DINA_CONFIGS+=('keycloak/docker-compose.enable-dev-user.yml')

For the UI to work without Keycloak, you need to be using UI dev mode, which can be enabled by uncommenting the following line inside start_stop_dina.sh:

DINA_CONFIGS+=('docker-compose.dev.yml')

Inside the docker-compose.enable-dev-user.yml file, you can configure which containers the settings are applied to; by default, they are applied to all.

The dina-ui container expects a comma-separated list of groups and roles.

For example:

/aafc/user, /bicoe/read-only

Then each of the APIs that you wish to use also needs to be configured with the same group roles:

"groupRole": {
  "aafc": ["user"],
  "bicoe": ["read-only"]
}

10.2. Settings

Traefik is responsible for TLS termination so Keycloak is configured with KC_PROXY: edge.

Keycloak Admin console is available at https://keycloak.dina.local/auth.

The main reason for having the admin console on a different hostname is to simplify the rules for not exposing it.

11. Local certificates

In order to generate development certificates, mkcert will be used.

11.1. Installation (Ubuntu 22-24)

sudo apt-get install wget libnss3-tools -y
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
sudo chmod +x mkcert-v1.4.4-linux-amd64
sudo mv mkcert-v1.4.4-linux-amd64 /usr/bin/mkcert

Test mkcert is installed correctly:

mkcert --version

Generate certificates for dina.local:

mkcert -cert-file certs/dina-local-cert.pem -key-file certs/dina-local-key.pem "dina.local" "api.dina.local" "keycloak.dina.local"

Then install the mkcert local CA so the certificates are trusted by Chrome and Firefox:

mkcert --install

12. Minikube

It is possible to use minikube to deploy DINA locally using the helm chart.

12.1. Installing minikube

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

12.2. Starting minikube

minikube start --addons=ingress --cpus=4 --cni=flannel --install-addons=true --kubernetes-version=stable --memory=6g

Change the cpus and memory according to your local resources.

Note: This is a sample command. The important flags here are:

 --addons=ingress
 --cni=flannel
 --install-addons=true

Note: Useful alias for running kubectl commands: alias k="minikube kubectl --"

12.3. Push local image

To push local images (images that are built locally) to minikube, run the following from the host:

minikube cache add myimage:mytag

See minikube Pushing images documentation for more information.

13. Helm

13.1. Installing Helm

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

14. How to Update ElasticSearch Mapping

Mapping settings are stored under the elastic-configurator-settings folder.

To add a new field to an Elasticsearch mapping file, first identify the correct location in the nested properties structure where the field belongs. Most fields go under mappings → properties → data → properties → attributes → properties, while nested documents (representing relationships) go under mappings → properties → included → properties → attributes → properties. Insert your new field definition with its Elasticsearch type.

Always increment the version number to allow the automatic schema migration.

  "mappings": {
    "_meta" : {
      "version" : {
        "number" : "2.7"
      }
    }

ElasticSearch schema migration is applied by es-init-container.
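
For illustration, adding a new attribute field would look roughly like the following (a sketch only; anExampleField and its keyword type are hypothetical, not a real DINA field):

  "mappings": {
    "properties": {
      "data": {
        "properties": {
          "attributes": {
            "properties": {
              "anExampleField": { "type": "keyword" }
            }
          }
        }
      }
    }
  }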

15. JMeter

Apache JMeter is used to execute performance/stress test plans.

15.1. Installation

You will need to have Java installed on your system. Ensure it’s installed using:

$ java -version

Next, download the latest version of JMeter from the JMeter website and extract it to your desired location:

$ wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip

$ unzip apache-jmeter-5.5.zip

Then you can go into the bin directory and run jmeter to open the GUI:

$ cd apache-jmeter-5.5/bin/

$ ./jmeter

15.2. How to run

You can start JMeter in GUI mode (default) and open the test plan to visualize it.

To run a test plan from command-line:

$ jmeter -n -t collection_api_module_testplan.jmx -l log.jtl

15.3. Module Structure

The JMeter test plan is constructed as follows:

  • Module Test Plan

    • setUp Thread Group

    • Endpoint Thread Groups

    • tearDown Thread Group

15.3.1. setUp Thread Group

The setUp Thread Group will be executed prior to any regular thread groups, allowing us to setup access privileges as well as create any records that may be required by regular thread groups.

Refer to setUp Thread Group component reference

15.3.2. Endpoint Thread Group(s)

Each endpoint within the corresponding module has its own endpoint thread group. A thread group is responsible for setting up and testing the respective endpoint’s CRUD operations.

The layout of each thread group is as follows:

  • Endpoint User Defined Variables

    • All constant variables will be declared in this configuration element

  • HTTP Headers

    • Sets the access token and content-type headers, required to perform API requests

  • Module Token Controller

    • A Module Controller utilized to refresh the access token.

      • Module Controllers allow you to perform the actions of other controllers such as using the Token Controller from the setUp Thread Group or the CRUD controllers from other endpoints.

      • All controllers aside from the Module Controllers are Simple Controllers that offer no unique functionality, being used primarily for organizational purposes and to allow Module Controllers to replicate their functionality if needed.

  • Create/Update/Delete (CRUD) Endpoint Controllers

    • These controllers contain all Controllers, Samplers, and Assertions associated with their respective CRUD operation.

      • Retrieval is tested within each controller when assertions are being made.

15.3.3. tearDown Thread Group

The tearDown Thread Group is executed after all of the other thread groups have terminated. This allows us to remove the records created within the setUp Thread Group without causing any conflicts with the thread groups that use these records.

Refer to tearDown Thread Group component reference

15.4. CRUD Controllers

The CRUD Controllers are the thread group Create, Update, and Delete Controllers.

15.4.1. Create

This controller contains all JMeter elements required for testing record creation for the respective endpoint.

  • Endpoint setUp

    • Sets up random variables for the thread group to ensure unique values between threads.

    • Creates module records if the endpoint requires them for testing relationship fields.

  • Basic/Verbose

    • Creates a Basic record with the minimum fields and a Verbose record with all fields populated

      • Asserts that records have been created and values match with the variables used in the POST request.

  • w/ Empty Attributes

    • Attempts to create a record with no attributes specified.

      • Asserts that the correct error code is returned.

  • w/ Only User Group

    • Attempts to create a record with only the user group specified, omitting other possibly required attributes.

      • Asserts that the correct error code is returned.

      • If the user group is the only required attribute, this test is omitted as it is synonymous with the 'w/ Empty Attributes' test.

  • w/ Incorrect Type

    • Attempts to create a record with the type field set to not match the endpoint.

      • Asserts that the correct error code is returned.

15.4.2. Update

This controller contains all JMeter elements required for testing updating of records for the respective endpoint.

  • Basic/Verbose

    • Update the created Basic and Verbose records

      • Asserts that the values from the retrieved record match with the variables used in the PATCH request.

  • Verbose w/ Empty Body

    • Update Verbose entity with no request body.

      • Asserts that no values have been updated as a result of the request.

  • w/ Invalid UUID

    • Attempts to update a record with a UUID that does not correspond to an existing record.

      • Asserts that the correct error code is returned.

15.4.3. Delete

This controller contains all JMeter elements required for testing removal of records for the respective endpoint.

  • Basic/Verbose

    • Removes the records that have been created.

      • Asserts that they have been removed. Audit records may remain.

  • w/ Invalid UUID

    • Attempts to remove a record with a UUID that does not correspond to an existing record.

      • Asserts that the correct error code is returned.

  • Endpoint tearDown

    • Removes any module records if they were created in the 'Endpoint setUp' controller

15.4.4. Assertions

In all of the CRUD controllers, assertions are being made to ensure that the correct results are produced by each operation.

  • Assertions are managed by using an HTTP Request Sampler to retrieve the designated record and verifying the correctness of the fields.

    • The majority of assertions are conducted through a Groovy script by means of a JSR223 Assertion.

    • Assertions for Map attributes utilize the JSON Assertion instead for simpler comparison.

  • In addition to the assertions used to validate data, Response Assertions are used after every HTTP Request Sampler to ensure that the correct response code is also returned.

    • In order for the Invalid CRUD Controller to return a valid result, a JSR223 Assertion with a Groovy script is used in place of the Response Assertion to verify that the correct response code has been returned and to set the successful attribute of the HTTP Request Sampler to true (see the sketch below).
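
A minimal sketch of such a JSR223 Assertion script, assuming an expected 404 response (the expected code differs per test, and the real test plans may structure this differently):

// JSR223 Assertion (Groovy): treat an expected error code as a passing result.
// 'prev' is the SampleResult of the HTTP Request Sampler this assertion is attached to.
def expectedCode = "404"
if (prev.getResponseCode() == expectedCode) {
    prev.setSuccessful(true)
    AssertionResult.setFailure(false)
} else {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("Expected ${expectedCode} but got ${prev.getResponseCode()}")
}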

16. Change default Docker address pool

The default Docker network address pool may overlap with other networks (e.g. VPN).

Add the following to (or create) the file /etc/docker/daemon.json:

{
  "bip": "192.168.32.1/24",
  "default-address-pools" : [
      {
          "base" : "192.168.33.0/24",
          "size" : 24
      }
   ]
}

Note: the bridge IP (bip) must not overlap with the default-address-pools ranges.
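
After editing daemon.json, restart the Docker daemon so the new address pool takes effect:

sudo systemctl restart docker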