Installing & configuring TLJH
This page describes the installation and configuration of "The Littlest JupyterHub", a.k.a. TLJH.
Running TLJH inside a Docker container is not supported, since it depends on systemd. The alternative is therefore to run an Ubuntu LXC container and install TLJH inside that container.
The approach is partially described here https://linuxcontainers.org/lxd/getting-started-cli/#ubuntu
On the host (i.e. liszt)
sudo apt install snapd
snap install lxd --channel=4.0/stable
lxd init
lxc launch ubuntu:20.10 tljh
lxc list
lxc start tljh
After the init command (above) follow the instructions at https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration and use sensible defaults. Once this new container runs we want to adjust its configuration such that it always claims the same IP address for its internal network interface. The reason for this is that we need a fixed IP for Nginx Proxy Manager to point at, so the container can be reached from outside.
lxc config device override tljh eth0
lxc list --columns ns4
lxc config device set tljh eth0 ipv4.address 10.83.150.32
lxc restart tljh
The above-mentioned IP address is obtained by simply inspecting the network interface inside the container:
lxc shell tljh
ip addr
The combination of Docker's iptables settings and deploying LXC is a little troublesome, but can be fixed. If nothing is done, the LXC container has no access to the outside world (i.e. the internet). The two statements below should remedy the situation. The first (commented out) tries to be specific but did not work (further investigation needed). The second is more generic and may therefore be judged not safe enough; in this case, however, I think it is fine and acceptable.
#iptables -I DOCKER-USER -i lxdbr0 -o enp3s0 -j ACCEPT
iptables -I DOCKER-USER -j ACCEPT
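Note that rules inserted with iptables on the command line do not survive a reboot. A minimal sketch for persisting the rule, assuming the Ubuntu iptables-persistent package (how this interacts with Docker re-creating its own chains at startup may need verification):

```shell
# Assumption: iptables-persistent is acceptable on this host.
sudo apt install iptables-persistent
sudo iptables -I DOCKER-USER -i lxdbr0 -j ACCEPT
# Snapshot the current rule set to /etc/iptables/rules.v4,
# from where netfilter-persistent restores it at boot.
sudo netfilter-persistent save
```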
Inside TLJH shell
After making sure the LXC container is reachable from outside and can itself access the internet, we continue below to install TLJH itself (NB: lxc shell tljh).
sudo apt update; sudo apt upgrade
sudo apt install python3 python3-dev git curl
curl -L https://tljh.jupyter.org/bootstrap.py | sudo -E python3 - --admin ganymede
The name ganymede is chosen for the admin account because it is a moon of the planet Jupiter. Coincidentally it is not only the largest moon of Jupiter but the largest moon in our solar system, even bigger than the planet Mercury, although not as massive.
Applications need to know the IPs of hosts on the cluster. Hence, as a precautionary measure, we add them to the /etc/hosts file like so:
root@tljh:~# cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

#JBRv
192.168.166.130 liszt
192.168.166.222 beethoven
#JBR^
Sharing data
solution one
Apart from sharing data using the nbgitpuller extension we might also want to employ a local (read-only) directory with some material. This is realised as described below (NB: lxc shell tljh):
mkdir -p /srv/data/shared_data
cd /etc/skel
ln -s /srv/data/shared_data shared_data
By putting this in /etc/skel it will be reproduced whenever new users are added (see https://tljh.jupyter.org/en/latest/howto/content/share-data.html).
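The reason this works: useradd -m copies the contents of /etc/skel into the new home directory, symlinks included. The mechanism can be illustrated in a scratch directory (the paths below are only illustrative):

```shell
# Illustrative only: simulate what `useradd -m` does with /etc/skel.
rm -rf demo && mkdir -p demo/skel
# Place the symlink in the "skeleton" directory, as done in /etc/skel above.
ln -s /srv/data/shared_data demo/skel/shared_data
# `useradd -m` effectively performs a recursive copy of the skeleton
# into the new home directory, preserving symlinks:
cp -a demo/skel demo/home-newuser
readlink demo/home-newuser/shared_data   # prints /srv/data/shared_data
```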
solution two
Another approach is to use nbgitpuller as described in the section "using git with TLJH".
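For reference, an nbgitpuller link has the general shape sketched below. The hub address and repository are hypothetical placeholders, not values from this deployment; the repository URL must be percent-encoded before being embedded as a query parameter:

```shell
# Build an nbgitpuller link; HUB and REPO are hypothetical examples.
HUB="https://tljh.example.org"
REPO="https://github.com/example/materials"
# Percent-encode the repository URL for use as a query parameter.
ENC=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$REPO")
echo "${HUB}/hub/user-redirect/git-pull?repo=${ENC}&branch=main"
```

Opening such a link in a browser logs the user in and clones (or updates) the repository into their home directory.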
solution three
Here we employ nextcloud in combination with rclone as described here.
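As a sketch of the rclone side: Nextcloud can be reached through rclone's WebDAV backend. The host and USERNAME below are placeholders, and the password must be stored obscured (entered via rclone config, or generated with rclone obscure):

```
# ~/.config/rclone/rclone.conf (fragment; host and USERNAME are hypothetical)
[nextcloud]
type = webdav
url = https://cloud.example.org/remote.php/dav/files/USERNAME
vendor = nextcloud
user = USERNAME
pass = <output of `rclone obscure`>
```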
Troubleshooting
After an update (and reboot) it may occur that the container is not running. When executing lxc shell tljh you might be shown this message:
snap-confine has elevated permissions and is not confined but should be. Refusing to continue to avoid permission escalation attacks
This problem can be solved by correctly (re)configuring AppArmor as below (as root):
apparmor_parser -r /etc/apparmor.d/*snap-confine*
apparmor_parser -r /var/lib/snapd/apparmor/profiles/snap-confine*
systemctl enable --now apparmor.service
systemctl start apparmor
Of course AppArmor needs to be installed (it should be already).