Domino users often want to access on-premise services from their Domino Workspaces. To make that work seamlessly, a few changes are needed on the Domino and EKS side.
Please note that this assumes you or an admin has already established a VPN tunnel, or some other connectivity, between AWS and your on-premise network. We will focus solely on the Domino and EKS configuration here.
First, we need to connect to the EKS cluster that is running your Domino deployment so that we can execute kubectl commands. To do so, you can either update the kubeconfig on your local machine, if possible, or connect directly in AWS.
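For the local-machine route, the AWS CLI can generate the kubeconfig entry for you. A minimal sketch, wrapped in a helper function so you can invoke it once you have your own values (the region and cluster name shown are placeholders, not real values):

```shell
# Sketch: add the EKS cluster to your local kubeconfig via the AWS CLI.
update_kubeconfig() {
  # $1 = AWS region, $2 = EKS cluster name
  aws eks update-kubeconfig --region "$1" --name "$2"
}

# Example invocation (substitute your own region and cluster name):
#   update_kubeconfig us-west-2 my-domino-cluster
# Then confirm connectivity with: kubectl get nodes
```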
Since CoreDNS has been the default on EKS since v1.12, we also assume that is what you are using. If not, you can install CoreDNS on your EKS cluster by following the AWS documentation (https://docs.aws.amazon.com/eks/latest/userguide/coredns.html).
The first step is to update the CoreDNS ConfigMap that is created by default in the cluster so that it contains our DNS server. We can do this by editing it on the command line:
kubectl -n kube-system edit configmap coredns
This opens the ConfigMap for editing. We want to add a new block containing our DNS server to the Corefile section of that config. There should be an existing config that looks something like:
Corefile: |
  .:53 {
      log
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
After the last bracket ( } ), we want to add another section for our DNS server. For the purpose of this example, I am just going to use the Google domain and DNS server. We should end up with something that looks like this in the ConfigMap:
Corefile: |
  .:53 {
      log
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
  google.com:53 {
      forward . 8.8.8.8
  }
Replace google.com with your on-premise domain, and the IP address after `forward . ` with the address(es) that lookups should be forwarded to, which might be the load balancer for your VPN tunnel, or something similar. Be sure to keep the period (.) between forward and the IP addresses.
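As a concrete sketch, suppose your on-premise services live under corp.example.com (a placeholder domain) and you have two resolvers reachable through the VPN at 10.0.1.53 and 10.0.2.53 (also placeholders). The forward plugin accepts multiple upstreams, so the extra stanza would be:

```
corp.example.com:53 {
    forward . 10.0.1.53 10.0.2.53
}
```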
After editing the file, save it; we should get a confirmation message saying it was saved successfully. This change often does not apply immediately, so we can speed up the process by deleting the CoreDNS pods so that they forcibly restart:
kubectl get pods -n kube-system -o name | grep coredns | xargs kubectl delete -n kube-system
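Alternatively, on clusters where kubectl supports rolling restarts (v1.15 and later), the same effect can be achieved without deleting pods by hand. A sketch, defined as a helper function you can invoke when ready:

```shell
# Sketch: restart CoreDNS with a rolling restart instead of deleting pods
# (requires kubectl >= 1.15 for "rollout restart").
restart_coredns() {
  kubectl -n kube-system rollout restart deployment coredns &&
    kubectl -n kube-system rollout status deployment coredns
}
```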
After executing that command, we can ensure that the pods come back up as healthy with a "Running" status via:
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
Now that we have successfully updated the ConfigMap for the CoreDNS pods, we can move back into Domino. The last step is adding a Pre-Run Setup Script that appends a new search domain to /etc/resolv.conf in the Workspaces that users create.
Navigate to Domino -> Environments -> Select the Environment you would like to update -> Edit Definition -> Scroll down to expand the Advanced section -> Enter the following in the Pre Setup Script section:
cat /etc/resolv.conf >> /tmp/resolv.conf.tmp
sed -i '2s/$/ google.com/' /tmp/resolv.conf.tmp
cat /tmp/resolv.conf.tmp > /etc/resolv.conf
Replace "google.com" with the domain you used in the first step. Note that the slash after the domain in the sed expression (it closes the substitution) and the space before the domain are both important to keep!
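To see what the script does, here is a self-contained dry run against a sample file (the resolv.conf contents below are made up for illustration). It appends the new domain to the second line, which is normally the search line in a Workspace's resolv.conf:

```shell
# Create a sample resolv.conf (hypothetical contents, for demonstration only).
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 10.100.0.10
search domino.svc.cluster.local svc.cluster.local
options ndots:5
EOF

# Same edit as the Pre-Run Setup Script: append the domain to line 2.
sed -i '2s/$/ google.com/' /tmp/resolv.conf.sample

# Line 2 now ends with the new search domain:
sed -n '2p' /tmp/resolv.conf.sample
# -> search domino.svc.cluster.local svc.cluster.local google.com
```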
After that, save and build the environment. We can test the change by starting a new Jupyter Workspace -> opening a terminal -> running an nslookup for the subdomain you are attempting to resolve. So, continuing with the Google example, I should be able to do:
nslookup maps
And that should contain output such as the following:
ubuntu@run-5ea9d22e84e7430006080fdb-7bfkr:/mnt$ nslookup maps
Server:         10.100.0.10
Address:        10.100.0.10#53

Non-authoritative answer:
Name:   maps.google.com
Address: 172.217.14.206
Name:   maps.google.com
Address: 2607:f8b0:400a:808::200e
This shows that "maps" was successfully resolved to maps.google.com. Your Workspaces should now be ready to resolve your on-premise URLs the same way.