1. Developer Environment Setup
1.1. Prerequisites
The following is required for DINA development:
- Linux operating system
  - On Windows: use a Virtual Machine
  - On Windows: use WSL2 (not recommended, experimental)
- Visual Studio Code (install on Linux, or see the VSCode Remote setup below)
- Docker (install on Linux):
  sudo apt install curl
  curl -fsSL https://get.docker.com -o get-docker.sh
  sh get-docker.sh
  sudo usermod -aG docker ${USER}
- Docker Compose (install on Linux):
  DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
  mkdir -p $DOCKER_CONFIG/cli-plugins
  curl -SL https://github.com/docker/compose/releases/download/v2.4.1/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
  chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
- Node.js
  Install Node Version Manager (NVM): https://github.com/nvm-sh/nvm#installing-and-updating
  Install Node 18 (codename Hydrogen):
  nvm install lts/hydrogen
- Yarn:
  corepack enable
  yarn set version stable
  yarn install
- Maven:
  sudo apt install maven
- Git:
  sudo apt install git
1.2. Clone Repositories
To get started, you can git clone all of the projects to your home directory:
git clone --branch dev https://github.com/AAFC-BICoE/dina-ui.git
git clone --branch dev https://github.com/AAFC-BICoE/object-store-api.git
git clone --branch dev https://github.com/AAFC-BICoE/seqdb-api.git
git clone --branch dev https://github.com/AAFC-BICoE/agent-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-user-api.git
git clone --branch dev https://github.com/AAFC-BICoE/natural-history-collection-api.git
git clone --branch dev https://github.com/AAFC-BICoE/loan-transaction-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-search-api.git
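If you prefer, the eight clones above can be scripted as a loop over the repository names (a minimal sketch; the list mirrors the commands above):

```shell
# Clone all DINA service repositories (dev branch) into the home directory
cd ~
for repo in dina-ui object-store-api seqdb-api agent-api dina-user-api \
  natural-history-collection-api loan-transaction-api dina-search-api; do
  git clone --branch dev "https://github.com/AAFC-BICoE/${repo}.git"
done
```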
You can also clone the OpenAPI specification projects:
git clone https://github.com/DINA-Web/loan-transaction-specs.git
git clone https://github.com/DINA-Web/object-store-specs.git
git clone https://github.com/DINA-Web/collection-specs.git
git clone https://github.com/DINA-Web/user-specs.git
git clone https://github.com/DINA-Web/agent-specs.git
git clone https://github.com/DINA-Web/sequence-specs.git
1.3. Visual Studio Code
Visual Studio Code can be installed directly on your Virtual Machine or on your Windows machine using the VSCode Remote instructions below.
1.3.1. Recommended Extensions
Once you have VSCode downloaded, you can run this command to install the recommended extensions:
code --install-extension vscjava.vscode-java-pack
code --install-extension msjsdiag.debugger-for-chrome
code --install-extension ms-azuretools.vscode-docker
code --install-extension gabrielbb.vscode-lombok
code --install-extension esbenp.prettier-vscode
code --install-extension ms-vscode.vscode-typescript-tslint-plugin
code --install-extension firsttris.vscode-jest-runner
code --install-extension alefragnani.project-manager
1.3.2. VSCode Remote (Optional)
You can avoid running a slow IDE inside your virtual machine by running VS Code on your host machine instead and connecting to the dev environment using VS Code Remote.
- Download VSCode for Windows
- Install the VSCode SSH Remote extension:
  code --install-extension ms-vscode-remote.remote-ssh
- Install OpenSSH Server on your VM:
  sudo apt-get install openssh-server
  sudo systemctl enable ssh
  sudo systemctl start ssh
- Set up port forwarding on your VM
  In order for your Windows VSCode to communicate with your Virtual Machine, you will need to allow SSH port forwarding.
  From the VirtualBox Manager window, click Settings > Network tab > Advanced > Port Forwarding > the green + icon, and add the following rule:
  Table 1. SSH Port Forwarding Rule
  Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
  ---|---|---|---|---|---|
  SSH | TCP | Leave Blank | 22 | Leave Blank | 22 |
  Click OK on both windows to finish setting up the port.
- Configure the SSH Remote extension
  Once you have the SSH Remote extension, open the command palette (CTRL + SHIFT + P) and search for >Remote-SSH: Add New SSH Host…. In the popup at the top of the screen, enter the SSH command to connect to your virtual machine (replacing USERNAME with your Ubuntu username):
  ssh USERNAME@localhost
Now you are connected to your Virtual Machine. You can open projects and use the terminal as if you were in the VM.
1.3.3. API Debugging
To debug an API while using the local deployment, you can use the docker-compose.debug.yml config, which can be enabled from the start_stop_dina.sh script.
Once enabled, you will be able to attach your VSCode to an API. The debugging ports can be found in the .env file.
Also remember that if you are running VSCode remotely, you will need to port forward the debugging port.
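One way to forward a debug port when working over SSH is a tunnel from the host machine; this sketch assumes the debug port 5002 and the localhost SSH setup from the VSCode Remote section (check your .env file for the actual port):

```shell
# Forward the API debug port (5002 here) from the VM to the host machine
ssh -L 5002:localhost:5002 USERNAME@localhost
```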
Here is an example of a launch.json that can be added to an API project so that VSCode can attach itself to the Java debugger for a specific API:
{
"version": "0.2.0",
"configurations": [
{
"type": "java",
"name": "Attach to Collection API Container",
"request": "attach",
"hostName": "localhost",
"port": 5002
}
]
}
Just ensure that the port lines up with the correct API and that the port is exposed so VSCode can attach itself.
2. Instance Configuration
By setting environment variables for the DINA UI container, you can configure certain features and options for the runtime instance.
These settings can be configured in the .env file.
2.1. Instance Mode
The INSTANCE_MODE environment variable specifies the operational mode of the runtime instance.
For example, setting PROD configures the instance for production environments. If it's set to an instance mode other than PROD, the mode will be displayed in the application's header to inform users which mode they are currently using.
If using the Dev User mode (without Keycloak), this will need to be set to developer.
By default, developer is used, which indicates that the server is running in a local development environment.
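For example, the setting is a plain assignment in the .env file:

```shell
# .env — run the instance in local development mode (the default)
INSTANCE_MODE=developer
```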
2.2. Instance Name
The INSTANCE_NAME environment variable is currently used to generate the feedback URL within the UI.
For instance, setting GRDI indicates that this is the GRDI instance of DINA. The feedback link will display a badge on new issues to identify the instance (GRDI) from which the user is reporting the issue.
By default, the instance name is set to AAFC.
2.3. Supported Languages
The SUPPORTED_LANGUAGES_ISO environment variable is a comma-separated list of the ISO language codes that the application will support.
For example:
SUPPORTED_LANGUAGES_ISO=en,fr
will allow the application to switch between English and French.
Supported languages:
ISO Code | Language |
---|---|
en | English |
fr | French |
de | German |
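Purely as an illustration of how the comma-separated value decomposes into individual ISO codes, here is a small POSIX shell sketch using the example value from above:

```shell
# Split the comma-separated language list into individual ISO codes
SUPPORTED_LANGUAGES_ISO="en,fr"
old_ifs=$IFS
IFS=','
set -- $SUPPORTED_LANGUAGES_ISO   # word-split on commas
IFS=$old_ifs
for code in "$@"; do
  echo "$code"                    # prints: en, then fr
done
```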
2.4. Supported Geoinformational Systems
The SUPPORTED_GEOINFORMATION_SYSTEMS environment variable is a comma-separated list of the geoinformational systems supported by the UI.
By default, OSM (OpenStreetMap) is used.
Supported systems:
Code | Name |
---|---|
OSM | OpenStreetMap |
TGN | The Getty Thesaurus of Geographic Names |
When using TGN, a reverse proxy will need to be set up, since TGN uses the HTTP protocol. The following can be added to the traefik-dynamic.yml file to configure this reverse proxy:
http:
routers:
tgn:
tls: true
rule: "Host(`localhost`, `dina.local`) && PathPrefix(`/TGNService.asmx`)"
middlewares:
- addHost
service: tgnLb
priority: 10000
middlewares:
addHost:
headers:
customRequestHeaders:
Host: vocabsservices.getty.edu
services:
tgnLb:
loadBalancer:
servers:
- url: "http://vocabsservices.getty.edu"
The TGN_SEARCH_BASE_URL environment variable is used to configure the base URL for the reverse proxy. If it is not provided, localhost (dina.local) is used.
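For example, a hypothetical .env entry (the URL value here is illustrative, matching the default behaviour described above):

```shell
# .env — base URL used to reach the TGN reverse proxy (illustrative value)
TGN_SEARCH_BASE_URL=https://dina.local
```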
3. Using Local Images
Running the local deployment without making any changes will run the API module images deployed on Docker Hub. For development purposes, you can follow these steps to test your API changes.
For the examples below we are using the Collection API, but the steps are the same for all of the other API modules. Check out the DINA-UI Development section for live-reloading support for UI development.
- Clean and build your API changes
  Navigate to the root of the API project you wish to test locally, then run maven clean install. The -DskipTests argument can be used to skip the tests.
  cd ~/collection-api
  mvn clean install -DskipTests
- Build the Docker image
  Each of the API modules contains a Dockerfile which can be used to build an image. When running the command to build the image, you can provide a tag, which will be used in step 3.
  docker build -t collection-api:dev .
- Change the image name in docker-compose.local.yml
  In the docker-compose.local.yml file you can find the API container; change its image to use the tag created in step 2.
  collection-api:
    image: collection-api:dev
    environment:
      keycloak.enabled: "true"
- Start/Restart the containers
  Re-run the command you used to start the containers originally (with any settings you used, like messaging or profiles). You can add -d collection-api to start just that specific container.
  docker-compose -f docker-compose.base.yml -f docker-compose.local.yml up -d collection-api
You are now running your local changes. It's important to understand that you are running the jar generated in step 1, so if you make more changes to the API, you will need to repeat these steps to test your latest changes.
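Put together, a typical rebuild-and-redeploy cycle looks like the following sketch (assuming the API project is in ~/collection-api and the local deployment checkout is in ~/dina-local-deployment; adjust both paths to your setup):

```shell
# Rebuild the jar, rebuild the image, then restart only the collection-api container
cd ~/collection-api
mvn clean install -DskipTests
docker build -t collection-api:dev .
cd ~/dina-local-deployment
docker-compose -f docker-compose.base.yml -f docker-compose.local.yml up -d collection-api
```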
3.1. Export Docker image to file
Exporting an image from the local registry:
docker save -o collection-api1.tar collection-api:dev
Loading a Docker image from a file:
docker load < collection-api1.tar
4. DINA-UI Development
To enable dina-ui hot-reloading in dina-local-deployment, you need to configure the dina-ui repo location.
This variable is stored in the .env file as DINA_UI_REPO_DIRECTORY. By default it looks for dina-ui in the home directory.
Any changes in the directory pointed to by DINA_UI_REPO_DIRECTORY will cause the page to automatically reload.
To use it, simply add the dev override docker-compose configuration: -f docker-compose.dev.yml.
docker-compose \
-f docker-compose.base.yml \
-f docker-compose.local.yml \
-f docker-compose.dev.yml up
You can also still use profiles or combine this with RabbitMQ messaging.
5. API Endpoint Testing
5.1. Applications
While any API client can be used, Bruno is our recommended API client; it requires no third-party accounts.
Bruno can be downloaded from: https://www.usebruno.com/downloads
Once downloaded, you can open the DINA collection of API endpoints by going to File > Open Collection, then navigating to dina-local-deployment/api-client/DINA-API and opening that folder.
You will also need to select the environment variables for the instance you want to connect to; for local DINA development, select DINA Local. The dropdown at the top right of the screen lets you change the environment variables.
For local development, you will need to open the Bruno preferences (CTRL + ,). Under the General tab, you will see an option for "Use custom CA certificate"; make sure it is checked. Click the "Select file" option and navigate to /etc/ssl/certs/ca-certificates.crt. (Be sure that you have run the Local Certificates steps so that the DINA local certificates are inside the ca-certificates.crt file.)
Please note that Bruno is still under active development.
5.2. Authentication
If using the Bruno API client with our collection loaded, the authentication settings will be loaded automatically.
Click the gear icon located at the top right of the screen, then go to the Auth tab, where you can click "Get Access Token". Currently you need to do this each time to generate a token; in the future this will likely happen automatically when the token has expired. Each request will need to use the inherit authentication option.
In other API clients, you can setup the authentication using these settings:
Setting | Value |
---|---|
Authentication Type | OAuth 2.0 |
Grant Type | Authorization Code |
Authorization URL / Auth URL | https://dina.local/auth/realms/dina/protocol/openid-connect/auth |
Access Token URL | https://dina.local/auth/realms/dina/protocol/openid-connect/token |
Client ID | dina-public |
Redirect URL / Callback URL | |
Scope | openid |
Then, when sending the request, you should see a popup which will let you log in to the DINA application.
5.3. Logout of Session
Sometimes you might need to log in as a different user for testing; the instructions for clearing your session are below:
To log out of your session in the Bruno application, go to the collection settings by using the gear icon at the top right of the screen, then click Auth to access the authentication settings. At the bottom of the page there is a button to clear the session.
5.4. Headers
Most of the API POST requests require a specific "Content-Type" header. The following headers are set up to be inherited by each request inside of the DINA collection, so they do not need to be set up for each request.
Key | Value | Description |
---|---|---|
Content-Type | application/vnd.api+json | This is only required for POST requests; by default, Postman and Insomnia may set a different Content-Type. |
Crnk-Compact | true | Optional. Self and related links can make up to 60% of the response payload size, and those links are not always of use. In this case the computation of those links is omitted. Furthermore, relationships without data are completely omitted. |
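Outside of Bruno, the same headers can be attached manually; here is a hedged curl sketch (the endpoint path, payload, and $ACCESS_TOKEN are placeholders, not actual DINA routes):

```shell
# POST a JSON:API document with the headers from the table above
# (URL, payload, and token are placeholders)
curl -X POST "https://dina.local/api/ENDPOINT" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  -H "Crnk-Compact: true" \
  -d '{"data": {"type": "TYPE", "attributes": {}}}'
```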
5.5. Additional Environments
Only the local development environment file is included in the /api-client directory. Other environment files can be downloaded from the DINA Wiki under the "Bruno Environments" section.
6. PostgreSQL
It is possible to query PostgreSQL while the container is running.
Example on the collection database, collection pg schema:
docker exec -i CONTAINER_ID psql -U pguser collection -c "select * from collection.project"
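You can also open an interactive psql session inside the container instead of running a single command:

```shell
# Open an interactive psql shell in the running PostgreSQL container
docker exec -it CONTAINER_ID psql -U pguser collection
```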
7. Messaging
To enable messaging based on RabbitMQ, combine the --profile search_api flag with the override file message-producing-override/docker-compose.override.messageProducer.yml.
docker-compose \
--profile search_api \
-f docker-compose.base.yml \
-f docker-compose.local.yml \
-f message-producing-override/docker-compose.override.messageProducer.yml \
up -d
8. Keycloak
8.1. Dev User
This option will allow the APIs to start without Keycloak but still function with a dev user.
The keycloak/docker-compose.enable-dev-user.yml config needs to be enabled inside of start_stop_dina.sh:
DINA_CONFIGS+=('keycloak/docker-compose.enable-dev-user.yml')
For the UI to work without Keycloak, you need to be using UI dev mode, which can be enabled by uncommenting the following line inside start_stop_dina.sh:
DINA_CONFIGS+=('docker-compose.dev.yml')
Inside of the docker-compose.enable-dev-user.yml file you can configure which containers the settings are applied to; by default they are applied to all.
The dina-ui container expects a comma-separated list of the groups and roles. For example:
/aafc/user, /bicoe/read-only
Then each of the APIs that you wish to use also needs to be configured with the same group roles:
"groupRole": {
"aafc": ["user"],
"bicoe": ["read-only"]
}
8.2. Settings
Traefik is responsible for TLS termination, so Keycloak is configured with KC_PROXY: edge.
The Keycloak admin console is available at https://keycloak.dina.local/auth.
The main reason for having the admin console on a different hostname is to simplify the rules for not exposing it.
9. Local certificates
In order to generate development certificates, mkcert will be used.
9.1. Installation (Ubuntu 20)
sudo apt-get install wget libnss3-tools -y
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
sudo chmod +x mkcert-v1.4.3-linux-amd64
sudo mv mkcert-v1.4.3-linux-amd64 /usr/bin/mkcert
Test that mkcert is installed correctly:
mkcert --version
Generate certificates for dina.local:
mkcert -cert-file certs/dina-local-cert.pem -key-file certs/dina-local-key.pem "dina.local" "api.dina.local" "keycloak.dina.local"
Then install the certificates so they are trusted by Chrome and Firefox:
mkcert --install
10. Minikube
It is possible to use minikube to deploy DINA locally using the helm chart.
10.1. Installing minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
10.2. Starting minikube
minikube start --addons=ingress --cpus=4 --cni=flannel --install-addons=true --kubernetes-version=stable --memory=6g
Change the cpus and memory according to your local resources.
Note: This is a sample command. The important flags here are:
--addons=ingress
--cni=flannel
--install-addons=true
Note: a useful alias for running kubectl commands: alias k="minikube kubectl --"
10.3. Push local image
To push local images (images that are built locally) to minikube, run from the host:
minikube cache add myimage:mytag
See minikube Pushing images documentation for more information.
11. Helm
11.1. Installing Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
12. JMeter
Apache JMeter is used to execute performance/stress test plans.
12.1. Installation
You will need to have Java installed on your system. Ensure it's installed using:
$ java -version
Next, download the latest version of JMeter from the JMeter website and extract it to your desired location:
$ wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip
$ unzip apache-jmeter-5.5.zip
Then you can go into the bin directory and run jmeter to open the GUI:
$ cd /apache-jmeter-5.5/bin/
$ ./jmeter
12.2. How to run
You can start JMeter in GUI mode (default) and open the test plan to visualize it.
To run a test plan from command-line:
$ jmeter -n -t collection_api_module_testplan.jmx -l log.jtl
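JMeter can additionally generate an HTML dashboard at the end of a non-GUI run via the -e and -o flags (the output directory must be empty or non-existent):

```shell
# Headless run that also writes an HTML report to ./report
jmeter -n -t collection_api_module_testplan.jmx -l log.jtl -e -o report
```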
12.3. Module Structure
The JMeter test plan is constructed as follows:
- Module Test Plan
  - setUp Thread Group
  - Endpoint Thread Groups
  - tearDown Thread Group
12.3.1. setUp Thread Group
The setUp Thread Group will be executed prior to any regular thread groups, allowing us to setup access privileges as well as create any records that may be required by regular thread groups.
Refer to the setUp Thread Group component reference.
12.3.2. Endpoint Thread Group(s)
Each endpoint within the module has a corresponding endpoint thread group. A thread group is responsible for setting up and testing the respective endpoint’s CRUD operations.
The layout of each thread group is as follows:
- Endpoint User Defined Variables
  - All constant variables will be declared in this configuration element.
- HTTP Headers
  - Sets the access token and content-type headers required to perform API requests.
- Module Token Controller
  - A Module Controller utilized to refresh the access token.
  - Module Controllers allow you to perform the actions of other controllers, such as using the Token Controller from the setUp Thread Group or the CRUD controllers from other endpoints.
  - All controllers aside from the Module Controllers are Simple Controllers that offer no unique functionality, being used primarily for organizational purposes and to allow Module Controllers to replicate their functionality if needed.
- Create/Update/Delete (CRUD) Endpoint Controllers
  - These controllers contain all Controllers, Samplers, and Assertions associated with their respective CRUD operation.
  - Retrieval is tested within each controller when assertions are being made.
12.3.3. tearDown Thread Group
The tearDown Thread Group is executed after all of the other thread groups have terminated. This allows us to remove the records created within the setUp Thread Group without causing any conflicts with the thread groups that use these records.
Refer to the tearDown Thread Group component reference.
12.4. CRUD Controllers
The CRUD Controllers are the thread group’s Create, Update, and Delete Controllers.
12.4.1. Create
This controller contains all JMeter elements required for testing record creation for the respective endpoint.
- Endpoint setUp
  - Sets up random variables for the thread group to ensure unique values between threads.
  - Creates module records if the endpoint requires them for testing relationship fields.
- Basic/Verbose
  - Creates a Basic record with the minimum fields and a Verbose record with all fields populated.
  - Asserts that records have been created and values match the variables used in the POST request.
- w/ Empty Attributes
  - Attempts to create a record with no attributes specified.
  - Asserts that the correct error code is returned.
- w/ Only User Group
  - Attempts to create a record with only the user group specified, missing possibly required attributes.
  - Asserts that the correct error code is returned.
  - If the user group is the only required attribute, this test is omitted as it is synonymous with the 'w/ Empty Attributes' test.
- w/ Incorrect Type
  - Attempts to create a record with the type field set to not match the endpoint.
  - Asserts that the correct error code is returned.
12.4.2. Update
This controller contains all JMeter elements required for testing updating of records for the respective endpoint.
- Basic/Verbose
  - Updates the created Basic and Verbose records.
  - Asserts that the values from the retrieved record match the variables used in the PATCH request.
- Verbose w/ Empty Body
  - Updates the Verbose entity with no request body.
  - Asserts that no values have been updated as a result of the request.
- w/ Invalid UUID
  - Attempts to update a record with a UUID that does not correspond to an existing record.
  - Asserts that the correct error code is returned.
12.4.3. Delete
This controller contains all JMeter elements required for testing removal of records for the respective endpoint.
- Basic/Verbose
  - Removes the records that have been created.
  - Asserts that they have been removed. Audit records may remain.
- w/ Invalid UUID
  - Attempts to remove a record with a UUID that does not correspond to an existing record.
  - Asserts that the correct error code is returned.
- Endpoint tearDown
  - Removes any module records that were created in the 'Endpoint setUp' controller.
12.4.4. Assertions
In all of the CRUD controllers, assertions are made to ensure that the correct results are produced by each operation.
- Assertions are managed by using an HTTP Request Sampler to retrieve the designated record and verifying the correctness of the fields.
  - The majority of assertions are conducted through a Groovy script by means of a JSR223 Assertion.
  - Assertions for Map attributes utilize the JSON Assertion instead, for simpler comparison.
- In addition to the assertions used to validate data, Response Assertions are used after every HTTP Request Sampler to ensure that the correct response code is also returned.
  - In order for the Invalid CRUD Controllers to return a valid result, a JSR223 Assertion with a Groovy script is used in place of the Response Assertion to verify that the correct response code has been returned and to set the successful attribute of the HTTP Request Sampler to true.
13. Change default Docker address pool
The default Docker network address pool may overlap with other networks (e.g. a VPN).
Add to (or create) the file /etc/docker/daemon.json:
{
"bip": "192.168.32.1/24",
"default-address-pools" : [
{
"base" : "192.168.33.0/24",
"size" : 24
}
]
}
bip: the Bridge IP (bip) must not overlap with the default-address-pools.
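Note that changes to daemon.json only take effect after the Docker daemon is restarted:

```shell
# Apply the new address pool configuration
sudo systemctl restart docker
# Verify the bridge subnet now matches the configured bip
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```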