Wiki source code of Tensorflow, Keras, PyTorch, Theano/Aesara, etc.
Last modified by Jan Rhebergen on 2022/01/24 15:57
= Tensorflow installation and configuration =

These libraries (see title) enable deep learning, i.e. the application of (convolutional) neural networks.

* Tensorflow is a deep-learning toolkit originally from Google; it is very powerful but can be tricky to set up.
* PyTorch is originally from Facebook, a little easier to use, and well suited to quick prototyping (Tesla uses it in production).
* Theano is a Python deep-learning library that is good for teaching, development, and gaining a better understanding. Development is continued in a fork named [[Aesara>>https://github.com/pymc-devs/aesara/releases/tag/rel-2.0.0]]
* Keras is a Python library that serves as an API on top of Tensorflow.
* CUDA is the low-level library that enables the use of NVIDIA GPUs for data-science applications.

A more extensive explanation of the above-mentioned tools can be found on Wikipedia. Below we describe how to install Tensorflow and CUDA. We originally chose Pop!_OS as our operating system because it supports data-science applications and libraries so well.

**NB:** we will only install tensorman (the Tensorflow manager) and the CUDA libraries.

In essence we will follow the instructions given by System76, the creator of Pop!_OS:

[[https:~~/~~/support.system76.com/articles/cuda/>>https://support.system76.com/articles/cuda/]]

[[https:~~/~~/support.system76.com/articles/tensorman/>>https://support.system76.com/articles/tensorman/]]

(% class="box" %)
{{{apt install system76-cuda-latest}}}

This command can pull in a lot of packages (around 2 GB), so be patient. Next, install the following package (the latest version available):

(% class="box" %)
{{{apt install system76-cudnn-11.1}}}

The latter package may pull in a similar amount of data, depending on the version of ##'latest'##: if the CUDA and cuDNN versions match, the extra download is limited; if the ##cudnn## package is running behind, it can be substantial.

To switch between versions (if multiple are installed), use:

(% class="box" %)
{{{update-alternatives --config cuda
nvcc -V
}}}

To get going with Tensorflow, we install ##tensorman## (the Tensorflow manager):

(% class="box" %)
{{{apt install tensorman}}}

For NVIDIA CUDA support, the following package must also be installed:

(% class="box" %)
{{{apt install nvidia-container-runtime}}}

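To check that Docker can actually reach the GPU through the container runtime, a quick smoke test can be run — a sketch, where the CUDA image tag is only an example and should match your installed driver:

```shell
# Run nvidia-smi inside a disposable CUDA container.
# The image tag (11.2.0-base) is an example; pick one that
# is compatible with the driver version shown by nvidia-smi.
docker run --rm --gpus all nvidia/cuda:11.2.0-base nvidia-smi
```

If this prints the same GPU table as ##nvidia-smi## on the host, the runtime is wired up correctly.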
Users that work with ##tensorman## need to be in the ##docker## group. Edit the ##/etc/group## file and add the user names to the ##docker## group entry:

(% class="box" %)
{{{docker:x:998:jan,denise,romario,bas,stan,gertjan}}}

//Don't forget to run// (% class="mark" %)##grpconv##(%%) //to make the additions take effect!//
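Alternatively, instead of editing ##/etc/group## by hand, the membership can be checked and changed with standard tools — a sketch, where the user name ##jan## is an example:

```shell
# Check whether the current user is already in the docker group;
# prints "in docker group" or "not in docker group".
if id -nG | grep -qw docker; then
  echo "in docker group"
else
  echo "not in docker group"
fi

# Append (-a) a user to the supplementary (-G) docker group.
# Requires root; the user must log out and back in afterwards.
# sudo usermod -aG docker jan
```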

All of the above should cover what is needed at the system level. Depending on what you want to do and how you want to use it, you may additionally need to install ##tensorflow##- or ##keras##-related ##conda## packages.

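Once a ##tensorflow## package is installed (e.g. via ##conda##), a quick check from the shell shows whether the GPU is actually visible — a sketch, assuming ##python3## with TensorFlow 2.x on the path:

```shell
# List the GPUs TensorFlow can see; an empty list [] means
# your code will silently fall back to the CPU.
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```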
For more detailed information, check out the PDF file attached here. As an illustration, ##nvidia-smi## output on a working installation:

(% class="box" %)
{{{
(base) jan@liszt:~$ nvidia-smi
Fri May 28 21:16:05 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01    Driver Version: 460.73.01    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2080    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   32C    P8     6W /  N/A |    752MiB /  7980MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3223      G   /usr/lib/xorg/Xorg                167MiB |
|    0   N/A  N/A      3436      G   /usr/bin/gnome-shell               12MiB |
|    0   N/A  N/A    694607      C   /usr/bin/python3                  193MiB |
|    0   N/A  N/A    694761      C   /usr/bin/python3                  375MiB |
+-----------------------------------------------------------------------------+
}}}

= Notes =

As the [[System76 tensorman webpage>>https://support.system76.com/articles/tensorman/]] already notes, the installation and configuration of Tensorflow can be a hairy issue. There are of course multiple instructables and YouTube clips available that help, but very few offer a sustainable solution. By a sustainable solution I mean one that will survive updates, is compatible with the environment, and can itself be safely updated/upgraded. The Tensorflow Docker container offers such a solution, and ##tensorman## is the tool to manage it.

The supplied documentation is a bit scarce and often does not address specific but not uncommon use cases; searching the web you will find various posts and solutions. The current user should know that loading Tensorflow in Jupyter does not mean your code is executing on the GPU! To have code execute on the GPU using Tensorflow, one can use ##tensorman## to run it by hand. The best solution, however, is to create a customised Tensorflow Docker container that also has Jupyter (Notebook/Lab) and will run your code on the GPU by default.
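Running code on the GPU "by hand" with ##tensorman## looks roughly like the following — a sketch based on the System76 tensorman article, where the script name and port are examples:

```shell
# Run a script inside the default Tensorflow container with GPU support;
# ./script.py is an example name.
tensorman run --gpu python -- ./script.py

# Start a GPU-enabled container with Python 3 and Jupyter,
# publishing port 8888 so the notebook is reachable from the host.
tensorman run -p 8888:8888 --gpu --python3 --jupyter bash
# then, inside the container:
# jupyter notebook --ip=0.0.0.0
```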