Application Setup
Backend Services
sh-protos
When working with sh-protos, open a new terminal and run npm start. This starts the project in watch mode and automatically rebuilds it when changes are made. Since the project is mapped to container volumes in docker-compose.yml, you don't need to reinstall the package on every change.
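For reference, the volume mapping that makes this work looks roughly like the following hypothetical docker-compose.yml excerpt (the service name and mount path here are assumptions, not the actual config):

```yaml
# Hypothetical excerpt: mounting the sh-protos package into a dependent
# service container so watch-mode rebuilds are picked up immediately.
services:
  data-service:            # service name is an assumption
    volumes:
      - ./sh-protos:/app/node_modules/sh-protos
```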
Migrations
Follow these steps to generate a new migration:
- cd into the migrations directory
- Run
npm run generate:migration ${name}
e.g. npm run generate:migration addNewColumn
- Fill out the migration file.
To run migrations follow these steps:
- cd into the migrations directory
- Install dependencies and add the env variables.
- Run
npm run migrations
(You need to be on Node v10 for this to work.)
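Since the Node v10 requirement is easy to trip over, the version check can be sketched as a small guard before invoking the script. The node_major helper below is hypothetical, not part of the repo:

```shell
# node_major: extract the major version from `node -v` style output,
# e.g. "v10.24.1" -> "10". Hypothetical helper, not part of the repo.
node_major() {
  printf '%s\n' "$1" | sed 's/^v\{0,1\}\([0-9][0-9]*\)\..*/\1/'
}

# Usage (inside the migrations directory):
#   if [ "$(node_major "$(node -v)")" = "10" ]; then
#     npm run migrations
#   else
#     echo "switch to Node v10 first (e.g. nvm use 10)" >&2
#   fi
```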
Known Errors and Solutions
Error: [PackageLoader] The "grpc" package is missing. Please, make sure to install this library ($ npm install grpc) to take advantage of ClientGrpcProxy.
Solution: exec into the Docker container and run npm install grpc.
Service PORTS
Defaults
- 6000 - port for device manager websockets
- 6379 - redis
- 3030 - redis/bull frontend dashboard
HTTP
- 80(Container) => 8081(Host if needed) - data service
- 80(Container) => 8082(Host if needed) - message service
- 80(Container) => 8083(Host if needed) - payment service
- 80(Container) => 8084(Host if needed) - proxy service
- 80(Container) => 8085(Host if needed) - user service
- 8086(Container) => 8086(Host if needed) - customer web service
- 8090(Container) => 8090(Host if needed) - customer web servicev2
- 80(Container) => 9000(Host if needed) - workflow_executor
- 80(Container) => 9001(Host if needed) - scheduler
- 80(Container) => 9002(Host if needed) - browser_executor
- 80(Container) => 9003(Host if needed) - device_manager
- 80(Container) => 9004(Host if needed) - qms_redis
- 8080(Container) => 9005(Host if needed) - selenoid-ui
GRPC
- 5000 - USER SERVICE
- 5000 - BROWSER SERVICE
- 5000 - Device manager
WEBSOCKETS
- 6000 - Device manager
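Before starting the stack it can help to verify that none of the host ports above are already taken. A minimal sketch using bash's /dev/tcp probe (port numbers come from the tables above; the helper itself is hypothetical):

```shell
# port_free: succeed when nothing is listening on localhost:$1.
# Uses bash's /dev/tcp pseudo-device; hypothetical helper, not part of the repo.
port_free() {
  ! (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null
}

# Usage:
#   port_free 6379 || echo "port 6379 (redis) is already in use" >&2
```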
Deprecated features
- Overview page.
- get-workflow-statistics endpoint (to be removed).
Self signed certificates
Idea: Because of HIPAA requirements, all inter-service interactions must use SSL. That is why we have an internal balancer service.
Example: Service DS wants to make an HTTP request to service MS. Instead of making the request directly to http://ms/some-endpoint, we need to use: https://balancer/ms/some-endpoint
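The rewrite above can be sketched as a tiny helper; via_balancer is hypothetical and only illustrates the URL shape:

```shell
# via_balancer: rewrite a direct service URL into its balancer form,
# e.g. http://ms/some-endpoint -> https://balancer/ms/some-endpoint.
# Hypothetical helper for illustration only.
via_balancer() {
  printf '%s\n' "$1" | sed 's|^http://\([^/]*\)/|https://balancer/\1/|'
}
```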
Note: We need Node.js to trust self-signed certificates. To do that, the balancer.crt file must be installed on the container, for example at /home/node/certs/balancer.crt.
In this case the env variable NODE_EXTRA_CA_CERTS must be set to that path:
NODE_EXTRA_CA_CERTS=/home/node/certs/balancer.crt
Note #2: Node.js maintains its own list of trusted certificates, so updating container-level certificates via update-ca-certificates (with balancer.crt installed in /usr/local/share/ca-certificates) won't help.
Before deployment
Go into ./docker/balancer and run
sh ./gen_certs.sh
It will generate two files: balancer.crt and balancer.key. balancer.crt will be used for making requests between services through the balancer service.
Make sure that env variable NODE_EXTRA_CA_CERTS is set on .env
Make sure that the certificates are attached to container volumes.
Note for existing containers: check the .env.example files to see if a service needs an updated .env file.
Selenoid: Go to browser_executor/application/browsers.json, get the selenoid/chrome image name, and pull it from Docker Hub before starting browser_executor. This image is needed for running automations. e.g.
docker pull selenoid/vnc_chrome:98.0
VNC-enabled Docker images are required to view live running sessions.
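The image names can also be scraped from browsers.json so nothing is missed. This sketch assumes Selenoid's usual file layout with at most one "image" entry per line; the helper is hypothetical, not part of the repo:

```shell
# list_browser_images: print every "image": "..." value found in a Selenoid
# browsers.json (assumes at most one image entry per line, as in a formatted
# file). Hypothetical helper, not part of the repo.
list_browser_images() {
  sed -n 's/.*"image"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1"
}

# Usage (from the repo root):
#   list_browser_images browser_executor/application/browsers.json |
#     while read -r image; do docker pull "$image"; done
```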
Running the App locally
To run the application locally you need to open two terminals: one for the database tunnel, and one for docker compose. This assumes you have already added the needed .env variables to the services you want to start. Some services are required for the cws/console to work: balancer, prs, us, ds, workflow_executor, and redis.
- On one of the terminals run
ssh -g -N -L 5555:{DATABASE_HOST}:5432 -i {your_key_file_path} {server_url}
- On the second terminal run
docker-compose up
or
docker compose up
depending on your Docker flavour.
Test environment:
Generate certs
To use a local database, you need to make a copy of the dev database. Install the Postgres client and get the schema from the remote database.
- go to docker/database, run
sh ./gen_cert.sh
Get Schema and seed data
- On one of the terminals run
ssh -g -N -L 5555:{DATABASE_HOST}:5432 -i {your_key_file_path} {server_url}
- Get the database schema
pg_dump -h localhost -p 5555 -U sh_al -d sh -s > docker/database/dump/schema.sql
- Get the seed data
pg_dump -h localhost -p 5555 -U sh_al -d sh -T "\"WorkflowTasks\"" -T "\"Messages\"" -T "\"Appointments\"" -T "\"BalanceOperations\"" -T "\"PatientInsurances\"" --data-only > docker/database/dump/data.sql
(We exclude WorkflowTasks and Messages because they are very large and don't contain important info.)
- Run
psql -U sh_al -d sh -h localhost -p 5432 -f docker/database/dump/schema.sql
and
psql -U sh_al -d sh -h localhost -p 5432 -f docker/database/dump/data.sql
against your local database to create the schema and populate it with data.
Run tests
To run the tests, run docker compose -f docker-compose.test.yml up {SERVICE_NAME}. Make changes and rerun the tests to see output.
- Run App on Test Instance
To run the application on the test instance, run docker compose -f docker-compose.preview.yml up {SERVICE_NAME}.
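To keep the test and preview invocations consistent, the compose-file choice can be wrapped in a tiny helper. The file names come from the commands in this section; the helper itself is hypothetical:

```shell
# compose_file: map an environment name to its docker-compose file.
# File names come from this section; hypothetical helper, not part of the repo.
compose_file() {
  case "$1" in
    test)    echo "docker-compose.test.yml" ;;
    preview) echo "docker-compose.preview.yml" ;;
    *)       echo "docker-compose.yml" ;;
  esac
}

# Usage:
#   docker compose -f "$(compose_file test)" up {SERVICE_NAME}
```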