## NoVNC Container Build

This directory contains the bazel rules to build the novnc container image, which includes the nginx configuration used to serve the NoVNC HTML files and to act as a proxy for websocket connections. Nginx in the container should be started by running the `runNginx.sh` script, which performs all of the necessary pre-start steps and then runs nginx itself.
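As a quick sanity check that nginx inside the container is serving and proxying correctly, you can port-forward the novnc service and hit it directly. This is only a sketch: it assumes the service is exposed as `services/novnc` on port 80, as in the port-forward example used for vnc_lite.html development below.

```bash
# Forward the novnc service to a local port (the same command is used in the
# vnc_lite.html section below)
kubectl port-forward services/novnc 8081:80 &

# nginx should respond on the forwarded port with the NoVNC pages
curl -I http://localhost:8081/
```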
### vnc_lite.html

**Note:** Most of this section is old, out of date and unlikely to work as-is. Locally hosting vnc_lite.html can still work, but it requires much more manual interaction and may no longer be the easiest method of development. This section is no longer officially supported; enter at your own risk.

Changes have been made to the default `vnc_lite.html` provided by NoVNC to customise it for SDS-specific requirements. These changes are summarised [below](#vnc_litehtml-changes). A copy of the file is included in the repo at `novnc/vnc_lite.html` and is copied over the file provided by NoVNC during the novnc container build.

To make changes to the vnc_lite.html file it is usually easier to serve NoVNC locally, which allows you to quickly edit the files in a familiar development environment and view the changes in your web browser. To do this you will need a copy of the noVNC repo and the edge-infra repo cloned locally, and you will need access to an IEN with a working novnc container. Follow these steps, making sure to update the repo locations on your machine in `EDGE_INFRA` and `NOVNC`:

```bash
# Tag from build_script.sh
NOVNC_TAG="v1.3.0"

# Repo locations
EDGE_INFRA="${HOME}/edge/edge-infra"
NOVNC="${HOME}/vnc/noVNC/noVNC"

# Check out the correct version
cd "${NOVNC}"
git checkout "${NOVNC_TAG}"

# Link the edge-infra version of vnc_lite.html into the noVNC repo as index.html
ln -s "${EDGE_INFRA}/cmd/sds/novnc/novnc/vnc_lite.html" index.html

# Link the svg icons into the noVNC repo
ln -s "${EDGE_INFRA}/cmd/sds/novnc/novnc/favicon.svg" favicon.svg
ln -s "${EDGE_INFRA}/cmd/sds/novnc/novnc/visibility.svg" visibility.svg
ln -s "${EDGE_INFRA}/cmd/sds/novnc/novnc/visibility_off.svg" visibility_off.svg

# Serve vnc_lite.html; defaults to port 8000
python3 -m http.server >> server.log 2>&1 &
```

Set up port forwarding for access to the websocket proxy:

```bash
kubectl port-forward services/novnc 8081:80
```

You can now access vnc_lite.html in the browser at an address such as `http://localhost:8000/?port=8081&path=/ws?token=ien-a0163e00000a`.

For testing the `clusterName` and `bannerName` parameters: `http://localhost:8000/?bannerName=my-test-banner&clusterName=my-test-store&port=8081&path=/ws?token=ien-a0163e00000a`

You can now make edits to the local copy of vnc_lite.html in the edge-infra repo and refresh the browser to see those updates.

**Note: Testing this way should only be used during development of a new feature.**
**You should always test the feature fully by recreating the container and applying it to an IEN before it is considered complete.**

### vnc_lite.html changes

**Note:** Additional updates have been made in the meantime.

This summarises the changes made to the NoVNC version of vnc_lite.html:

1. Update the title bar and banner with the node, cluster and banner name.
1. Add an auto-reconnect capability: if the connection to the VNC server is lost, reconnection is attempted automatically while the browser tab is still in focus.
1. Replace the use of the browser prompt with a modal to request the password when connecting.
1. Simplify the required URL by removing the need to duplicate the store and banner ID in the websocket path.

## Integration Testing

Integration tests for novnc are defined in the `novnc_test.go` file. These are L2 integration tests and require a kind cluster if you wish to run them locally. A mock vncserver is created in the kind cluster by installing a simple websocket server as a DaemonSet that returns its host node name. The `local` kustomization of the novnc component is then installed into the test namespace.

### Setup

The integration tests also depend on the `daemonsetdns` component. The tests will install the `daemonsetdns` component into the cluster if it does not already exist, or use the existing component if it is present. For the integration tests to successfully install the component they will need access to the container image, which can be provided by building and pushing the image:

```bash
just push --bazel-configs=local cmd/sds/daemonsetdns/...
```

#### Re-Installing the Daemonsetdns component

If changes have been made to the daemonsetdns component you may wish to test novnc with the new version. To do this you will need to remove the existing `daemonsetdns` component from the cluster so that the tests install the new version the next time they are run. This can be done with the following commands:

```bash
kubectl delete clusterrole daemonsetdns
kubectl delete clusterrolebindings daemonsetdns
kubectl delete ns daemonsetdns
```

### Testing

To run the tests locally, first push all of the required containers:

```bash
just push --bazel-configs=local cmd/sds/vnc/...
just push --bazel-configs=local cmd/sds/novnc/...
```

When running the tests you may need to specify that they should use the local registry, which can be done with the `--workloads_repo=localhost:21700` argument.

The tests can be run using bazel, e.g.

```
bazel test --workloads_repo=localhost:21700 --nocache_test_results //cmd/sds/novnc:novnc_test --config=integration --test_arg=-integration-level=2
```

or using `rosa`:

```
rosa //cmd/sds/novnc:novnc_test --workloads_repo=localhost:21700 -- -integration-level=2
```
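When debugging failures against a local kind cluster it can be useful to stream the test output as it runs rather than reading the log file afterwards. This is a sketch based on the bazel command above; the only addition is `--test_output=streamed`, which is a stock bazel flag rather than anything project-specific.

```bash
# Run the novnc integration tests against the local registry and stream the output
bazel test //cmd/sds/novnc:novnc_test \
  --config=integration \
  --workloads_repo=localhost:21700 \
  --nocache_test_results \
  --test_output=streamed \
  --test_arg=-integration-level=2
```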