1. Developer Environment Setup
1.1. Prerequisites
The following is required for DINA development:
- Linux operating system
  - On Windows: using a Virtual Machine
  - On Windows: using WSL2 (Not Recommended - Experimental)
- Visual Studio Code (Install on Linux or see the Remote setup below)
- Docker (Install on Linux)

  sudo apt install curl
  curl -fsSL https://get.docker.com -o get-docker.sh
  sh get-docker.sh
  sudo usermod -aG docker ${USER}

- Docker Compose (Install on Linux)

  DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
  mkdir -p $DOCKER_CONFIG/cli-plugins
  curl -SL https://github.com/docker/compose/releases/download/v2.4.1/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
  chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose

- Node.js

  Install Node Version Manager (NVM): https://github.com/nvm-sh/nvm#installing-and-updating

  Install Node 18 (codename Hydrogen):

  nvm install lts/hydrogen

- Yarn

  corepack enable
  yarn set version stable
  yarn install

- Maven

  sudo apt install maven

- Git

  sudo apt install git
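Once everything is installed, a quick sanity check can confirm each prerequisite is on the PATH. This is a small helper sketch, not part of the official setup:

```shell
#!/usr/bin/env sh
# Report whether each prerequisite binary is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

for tool in docker node yarn mvn git; do
  check_tool "$tool"
done
```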
1.2. Clone Repositories
To get started, you can git clone all of the projects to your home directory:
git clone --branch dev https://github.com/AAFC-BICoE/dina-ui.git
git clone --branch dev https://github.com/AAFC-BICoE/object-store-api.git
git clone --branch dev https://github.com/AAFC-BICoE/seqdb-api.git
git clone --branch dev https://github.com/AAFC-BICoE/agent-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-user-api.git
git clone --branch dev https://github.com/AAFC-BICoE/natural-history-collection-api.git
git clone --branch dev https://github.com/AAFC-BICoE/loan-transaction-api.git
git clone --branch dev https://github.com/AAFC-BICoE/dina-search-api.git
You can also clone the OpenAPI specification projects:
git clone https://github.com/DINA-Web/loan-transaction-specs.git
git clone https://github.com/DINA-Web/object-store-specs.git
git clone https://github.com/DINA-Web/collection-specs.git
git clone https://github.com/DINA-Web/user-specs.git
git clone https://github.com/DINA-Web/agent-specs.git
git clone https://github.com/DINA-Web/sequence-specs.git
1.3. Visual Studio Code
Visual Studio Code can be installed directly on your Virtual Machine or on your Windows machine using the VSCode Remote instructions below.
1.3.1. Recommended Extensions
Once you have VSCode downloaded, you can run this command to install the recommended extensions:
code --install-extension vscjava.vscode-java-pack
code --install-extension msjsdiag.debugger-for-chrome
code --install-extension ms-azuretools.vscode-docker
code --install-extension gabrielbb.vscode-lombok
code --install-extension esbenp.prettier-vscode
code --install-extension ms-vscode.vscode-typescript-tslint-plugin
code --install-extension firsttris.vscode-jest-runner
code --install-extension alefragnani.project-manager
1.3.2. VSCode Remote (Optional)
You can avoid running a slow IDE inside your virtual machine by running VS Code on your host machine instead and connecting to the dev environment using VS Code Remote.
- Download VSCode for Windows
- Install the VSCode SSH Remote Extension

  code --install-extension ms-vscode-remote.remote-ssh

- Install OpenSSH Server on your VM

  sudo apt-get install openssh-server
  sudo systemctl enable ssh
  sudo systemctl start ssh

- Setup Port Forwarding on your VM

  In order for your Windows VSCode to communicate with your Virtual Machine, you will need to allow for SSH port forwarding.

  From the VirtualBox Manager window, click Settings > Network tab > Advanced > Port Forwarding > the green + icon and add the following rule:

  Table 1. SSH Port Forwarding Rule

  Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
  ---|---|---|---|---|---|
  SSH | TCP | Leave Blank | 22 | Leave Blank | 22 |

  Click OK on both windows to finish setting up the port.

- Configure SSH Remote Extension

  Once you have the SSH Remote extension installed, open the command palette (CTRL + SHIFT + P) and search for >Remote-SSH: Add New SSH Host….

  In the popup at the top of the screen, enter the SSH command to connect to your virtual machine (replacing USERNAME with your Ubuntu username):

  ssh USERNAME@localhost

Now you are connected to your Virtual Machine. You can open projects and use the terminal as if you were in the VM.
1.3.3. API Debugging
To debug an API while using the local deployment, you can use the docker-compose.debug.yml
config which can be enabled from the start_stop_dina.sh
script.
Once enabled, you will be able to attach your VSCode to an API. The debugging ports can be found in the .env
file.
Also remember that if you are running VSCode remotely, you will need to port forward the debugging port.
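For example, a host entry like the following in your Windows ~/.ssh/config forwards a debug port over the same SSH connection (a sketch: the host alias dina-vm is a placeholder, and 5002 should be whichever debug port the .env file lists for your API):

```
Host dina-vm
    HostName localhost
    User USERNAME
    LocalForward 5002 localhost:5002
```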
Here is an example of a launch.json that can be added to an API project so VSCode can attach itself to the Java debugger for a specific API:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "java",
      "name": "Attach to Collection API Container",
      "request": "attach",
      "hostName": "localhost",
      "port": 5002
    }
  ]
}
Just ensure that the port lines up with the correct API and that the port is exposed for VSCode to attach to.
2. Using Local Images
Running the local deployment without making any changes will run the Docker Hub-deployed API modules. For development purposes, you can follow these steps to test your API changes.
For the examples below, we are using Collection-API, but the steps are the same for all the other API modules. Check out the DINA-UI Development section for live reloading support for UI development.
1. Clean and build your API changes

   Navigate to the root of the API project you wish to test locally, then run maven clean install. The -DskipTests argument can be used to skip the tests.

   cd ~/collection-api
   mvn clean install -DskipTests

2. Build the docker image

   Each of the API modules contains a Dockerfile which can be used to build an image. When running the command to build the image, you can provide a tag which will be used in step 3.

   docker build -t collection-api:dev .

3. Change the image name in the docker-compose.local.yml

   In the docker-compose.local.yml file, you can find the API container and change the image to use the tag created in step 2.

   collection-api:
     image: collection-api:dev
     environment:
       keycloak.enabled: "true"

4. Start/Restart the containers

   Re-run the command you used to start the containers originally (with any settings you used, like messaging or profiles). You can use -d collection-api to just start the specific container in detached mode.

   docker-compose -f docker-compose.base.yml -f docker-compose.local.yml up -d collection-api
You are now running your local changes. It’s important to understand that you are running the jar generated from step 1, so if you make more changes on the API, you will need to repeat the steps to test your latest changes.
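The four steps above can be sketched as a small helper (hypothetical, not shipped with dina-local-deployment) that prints the commands for a given module so you can review them before running. It assumes the project folder lives in $HOME and that the compose service name matches the module name:

```shell
#!/usr/bin/env sh
# Print (not run) the rebuild/redeploy commands for one API module.
rebuild_cmds() {
  module="$1"
  echo "cd \$HOME/$module && mvn clean install -DskipTests"
  echo "docker build -t $module:dev ."
  echo "docker-compose -f docker-compose.base.yml -f docker-compose.local.yml up -d $module"
}

rebuild_cmds collection-api
```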
2.1. Export Docker image to file
Exporting an image from the local registry:
docker save -o collection-api1.tar collection-api:dev
Loading a Docker image from a file:
docker load < collection-api1.tar
3. DINA-UI Development
To enable dina-ui hot-reloading in dina-local-deployment, you need to configure the dina-ui repo location.
This variable is stored in the .env file as DINA_UI_REPO_DIRECTORY. By default it looks in the home directory for dina-ui.
Any changes inside the directory pointed to by DINA_UI_REPO_DIRECTORY will cause the page to automatically reload.
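For example, an illustrative .env entry (assuming the dina-ui clone lives in your home directory; replace USERNAME with your own user) would look like:

```
DINA_UI_REPO_DIRECTORY=/home/USERNAME/dina-ui
```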
To use it, simply add the override docker-compose configuration for dev: -f docker-compose.dev.yml.
docker-compose \
-f docker-compose.base.yml \
-f docker-compose.local.yml \
-f docker-compose.dev.yml up
You can also still use profiles or combine this with RabbitMQ messaging as well.
4. API Endpoint Testing
4.1. Applications
While any API client can be used, Bruno is our recommended API client and requires no 3rd party accounts.
Bruno can be downloaded from: https://www.usebruno.com/downloads
Once downloaded, you can open the DINA collection of API endpoints by going to File > Open Collection, then navigating to dina-local-deployment/api-client/DINA-API and opening that folder.
You will also need to select the environment variables for the instance you want to connect to; for local DINA development you would select DINA Local. The dropdown at the top right of the screen lets you change the environment variables.
For local development, you will need to open the Bruno preferences (CTRL + ,). Under the General tab, you will see an option for "Use custom CA certificate"; make sure this is checked on. Click the "Select file" option and navigate to /etc/ssl/certs/ca-certificates.crt. (Be sure that you have run the Local Certificates steps so that the DINA local certificates are inside of the ca-certificates.crt file.)
Please note that Bruno is still under active development.
4.2. Authentication
If using the Bruno API client with our collection loaded, the Authentication settings will be automatically loaded in.
Click the gear icon located at the top right of the screen, then go to the Auth tab, where you can click "Get Access Token". Currently you need to do this each time to generate the token, but in the future this will likely happen automatically when the token has expired. Each request will need to use the inherit authentication option.
In other API clients, you can setup the authentication using these settings:
Setting | Value |
---|---|
Authentication Type | OAuth 2.0 |
Grant Type | Authorization Code |
Authorization URL / Auth URL | https://dina.local/auth/realms/dina/protocol/openid-connect/auth |
Access Token URL | https://dina.local/auth/realms/dina/protocol/openid-connect/token |
Client ID | dina-public |
Redirect URL / Callback URL | |
Scope | openid |
Then when sending the request, you should see a popup which will let you login to the DINA application.
4.3. Logout of Session
Sometimes you might need to log in as a different user for testing. The instructions for clearing your session are below:
To log out of your session in the Bruno application, go to the collection settings by using the gear icon at the top right of the screen, then click Auth to access the authentication settings. At the bottom of the page there is a button to clear the session.
4.4. Headers
Most of the API POST requests require a specific "Content-Type" header. The following headers are set up to be inherited by each request inside of the DINA collection, so they do not need to be set up for each request.
Key | Value | Description |
---|---|---|
Content-Type | application/vnd.api+json | This is only required for POST requests; by default, Postman and Insomnia might use a different Content-Type. |
Crnk-Compact | true | Optional. Self and related links can make up to 60% of the response payload size, and those links are not always of use. In this case the computation of those links is omitted. Furthermore, relationships without data are completely omitted. |
4.5. Additional Environments
Only the local development environment file is included in the /api-client directory. Other environment files can be downloaded from the DINA Wiki under the "Bruno Environments" section.
5. PostgreSQL
It is possible to query PostgreSQL when the container is running.
Example on the collection database, collection pg schema:
docker exec -i CONTAINER_ID psql -U pguser collection -c "select * from collection.project"
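A small convenience function (hypothetical, assuming the same pguser credentials) can compose that docker exec line for any database and query. It echoes the command rather than running it, so you can review it or pipe it to sh:

```shell
#!/usr/bin/env sh
# Compose (but do not run) the docker exec line for an ad hoc query.
psql_cmd() {
  container="$1"; db="$2"; query="$3"
  echo "docker exec -i $container psql -U pguser $db -c \"$query\""
}

psql_cmd CONTAINER_ID collection "select * from collection.project"
```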
6. Messaging
To enable messaging based on RabbitMQ, combine the --profile search_api option with the override file message-producing-override/docker-compose.override.messageProducer.yml.
docker-compose \
--profile search_api \
-f docker-compose.base.yml \
-f docker-compose.local.yml \
-f message-producing-override/docker-compose.override.messageProducer.yml \
up -d
7. Keycloak
7.1. Settings
Traefik is responsible for TLS termination, so Keycloak is configured with KC_PROXY: edge.
The Keycloak Admin console is available at https://keycloak.dina.local/auth.
The main reason for having the admin console on a different hostname is to simplify the rules for not exposing it.
8. Local certificates
mkcert is used to generate development certificates.
8.1. Installation (Ubuntu 20)
sudo apt-get install wget libnss3-tools -y
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
sudo chmod +x mkcert-v1.4.3-linux-amd64
sudo mv mkcert-v1.4.3-linux-amd64 /usr/bin/mkcert
Test mkcert is installed correctly:
mkcert --version
Generate certificates for dina.local:
mkcert -cert-file certs/dina-local-cert.pem -key-file certs/dina-local-key.pem "dina.local" "api.dina.local" "keycloak.dina.local"
Then install the local CA so the certificates are trusted by Chrome and Firefox:
mkcert --install
9. JMeter
Apache JMeter is used to execute performance/stress test plans.
9.1. Installation
You will need to have Java installed on your system. Ensure it’s installed using:
$ java -version
Next, download the latest version of JMeter from the JMeter website and extract it to your desired location:
$ wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip
$ unzip apache-jmeter-5.5.zip
Then you can change into the extracted directory and run jmeter to view the GUI:
$ cd apache-jmeter-5.5/bin/
$ ./jmeter
9.2. How to run
You can start JMeter in GUI mode (default) and open the test plan to visualize it.
To run a test plan from command-line:
$ jmeter -n -t collection_api_module_testplan.jmx -l log.jtl
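As a sketch (assuming standard JMeter 5.x CLI flags, where -e/-o additionally generate an HTML dashboard report into an empty output directory), the non-GUI invocation can be composed like this; the function only prints the command line:

```shell
#!/usr/bin/env sh
# Build the non-GUI JMeter command line; -e -o are only added when a
# report output directory is requested.
jmeter_cli() {
  plan="$1"; log="$2"; report="$3"
  cmd="jmeter -n -t $plan -l $log"
  if [ -n "$report" ]; then
    cmd="$cmd -e -o $report"
  fi
  echo "$cmd"
}

jmeter_cli collection_api_module_testplan.jmx log.jtl report/
```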
9.3. Module Structure
The JMeter test plan is constructed as follows:

- Module Test Plan
  - setUp Thread Group
  - Endpoint Thread Groups
  - tearDown Thread Group
9.3.1. setUp Thread Group
The setUp Thread Group will be executed prior to any regular thread groups, allowing us to setup access privileges as well as create any records that may be required by regular thread groups.
Refer to the setUp Thread Group component reference.
9.3.2. Endpoint Thread Group(s)
Each endpoint within the module has a corresponding endpoint thread group. A thread group is responsible for setting up and testing the respective endpoint’s CRUD operations.
The layout of each thread group is as follows:

- Endpoint User Defined Variables
  - All constant variables will be declared in this configuration element.
- HTTP Headers
  - Sets the access token and content-type headers required to perform API requests.
- Module Token Controller
  - A Module Controller utilized to refresh the access token.
  - Module Controllers allow you to perform the actions of other controllers, such as using the Token Controller from the setUp Thread Group or the CRUD controllers from other endpoints.
  - All controllers aside from the Module Controllers are Simple Controllers that offer no unique functionality, being used primarily for organizational purposes and to allow Module Controllers to replicate their functionality if needed.
- Create/Update/Delete (CRUD) Endpoint Controllers
  - These controllers contain all Controllers, Samplers, and Assertions associated with their respective CRUD operation.
  - Retrieval is tested within each controller when assertions are being made.
9.3.3. tearDown Thread Group
The tearDown Thread Group is executed after all of the other thread groups have terminated. This allows us to remove the records created within the setUp Thread Group without causing any conflicts with the thread groups that use these records.
Refer to the tearDown Thread Group component reference.
9.4. CRUD Controllers
The CRUD Controllers are the thread group Create, Update, and Delete Controllers.
9.4.1. Create
This controller contains all JMeter elements required for testing record creation for the respective endpoint.
- Endpoint setUp
  - Sets up random variables for the thread group to ensure unique values between threads.
  - Creates module records if the endpoint requires them for testing relationship fields.
- Basic/Verbose
  - Creates a Basic record with the minimum fields and a Verbose record with all fields populated.
  - Asserts that records have been created and values match the variables used in the POST request.
- w/ Empty Attributes
  - Attempts to create a record with no attributes specified.
  - Asserts that the correct error code is returned.
- w/ Only User Group
  - Attempts to create a record with only the user group specified, missing possible required attributes.
  - Asserts that the correct error code is returned.
  - If the user group is the only required attribute, this test is omitted as it is synonymous with the 'w/ Empty Attributes' test.
- w/ Incorrect Type
  - Attempts to create a record with the type field set to not match the endpoint.
  - Asserts that the correct error code is returned.
9.4.2. Update
This controller contains all JMeter elements required for testing updating of records for the respective endpoint.
- Basic/Verbose
  - Updates the created Basic and Verbose records.
  - Asserts that the values from the retrieved record match the variables used in the PATCH request.
- Verbose w/ Empty Body
  - Updates the Verbose entity with no request body.
  - Asserts that no values have been updated as a result of the request.
- w/ Invalid UUID
  - Attempts to update a record with a UUID that does not correspond to an existing record.
  - Asserts that the correct error code is returned.
9.4.3. Delete
This controller contains all JMeter elements required for testing removal of records for the respective endpoint.
- Basic/Verbose
  - Removes the records that have been created.
  - Asserts that they have been removed. Audit records may remain.
- w/ Invalid UUID
  - Attempts to remove a record with a UUID that does not correspond to an existing record.
  - Asserts that the correct error code is returned.
- Endpoint tearDown
  - Removes any module records if they were created in the 'Endpoint setUp' controller.
9.4.4. Assertions
In all of the CRUD controllers, assertions are being made to ensure that the correct results are produced by each operation.
- Assertions are managed by using an HTTP Request Sampler to retrieve the designated record and verifying the correctness of the fields.
  - The majority of assertions are conducted through a Groovy script by means of a JSR223 Assertion.
  - Assertions for Map attributes utilize the JSON Assertion instead, for simpler comparison.
- In addition to the assertions used to validate data, Response Assertions are used after every HTTP Request Sampler to ensure that the correct response code is also returned.
  - In order for the Invalid CRUD Controllers to return a valid result, a JSR223 Assertion with a Groovy script is used in place of the Response Assertion to verify that the correct response code has been returned and to set the successful attribute of the HTTP Request Sampler to true.
10. Change default Docker address pool
The default Docker network address pool may overlap with other networks (e.g. a VPN).
Add to (or create) the file /etc/docker/daemon.json:
{
  "bip": "192.168.32.1/24",
  "default-address-pools": [
    {
      "base": "192.168.33.0/24",
      "size": 24
    }
  ]
}
bip: the bridge IP; it must not overlap with the default-address-pools.
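As a quick arithmetic check (plain shell, nothing DINA-specific): a pool yields 2^(size − base prefix) networks, so the /24 base with size 24 above provides exactly one network; a wider base (e.g. a /16) may be needed if your compose projects require more networks.

```shell
#!/usr/bin/env sh
# Number of subnets a default-address-pool yields: 2^(size - base_prefix).
pool_networks() {
  base_prefix="$1"; size="$2"
  echo $(( 1 << (size - base_prefix) ))
}

pool_networks 24 24   # the pool above: a single /24 network
pool_networks 16 24   # a /16 base split into /24s: 256 networks
```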