Docker Guide For AI Research


Docker Guide

For AI Research

By
Elvin Nur Furqon

Version 1.0
Table of Contents
1. Optimized Pytorch for Nvidia GPU
   1.1 Prerequisites
   1.2 Running Pytorch Using Docker
       1.2.1 Pull Images
       1.2.2 Create Container
       1.2.3 Start Docker

2. Optimized Tensorflow for Nvidia GPU
   2.1 Prerequisites
   2.2 Running Tensorflow Using Docker
       2.2.1 Pull Images
       2.2.2 Create Container
       2.2.3 Start Docker

3. MMDetection
   3.1 Prerequisites
   3.2 Running MMDetection Using Docker
       3.2.1 Build Image
       3.2.2 Create Container
       3.2.3 Start Docker

4. All Yolo Version
   4.1 Prerequisites
   4.2 Running Yolo Using Docker
       4.2.1 Select Yolo Version
       4.2.2 Create Container
       4.2.3 Start Docker
1. Optimized Pytorch for Nvidia GPU
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. Automatic
differentiation is done with a tape-based system at both a functional and neural network layer
level. This functionality brings a high level of flexibility and speed as a deep learning framework
and provides accelerated NumPy-like functionality. NGC Containers are the easiest way to get
started with PyTorch. The PyTorch NGC Container comes with all dependencies included,
providing an easy place to start developing common applications, such as conversational AI,
natural language processing (NLP), recommenders, and computer vision.

The PyTorch NGC Container is optimized for GPU acceleration, and contains a validated set
of libraries that enable and optimize GPU performance. This container also contains software
for accelerating ETL (DALI, RAPIDS), Training (cuDNN, NCCL), and Inference (TensorRT)
workloads.

1.1 Prerequisites
Using the PyTorch NGC Container requires the host system to have the following installed:
• Docker
  https://docs.docker.com/engine/install/ubuntu/
• Nvidia GPU Drivers
  https://www.nvidia.com/download/index.aspx
• Nvidia Container Toolkit
  https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
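
Once these are installed, it is worth verifying that Docker can actually reach the GPU before pulling any framework image. A minimal check (the CUDA base image tag below is only an example; any CUDA base image will do) is:

```shell
# Verify that Docker and the NVIDIA Container Toolkit can access the GPU.
# The image tag is an example; substitute any available CUDA base image.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the familiar nvidia-smi GPU table prints, the container runtime is configured correctly and the framework images below will be able to use the GPU.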
1.2 Running Pytorch Using Docker
1.2.1 Pull Images
Select a tag depending on the PyTorch version you want to use:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags

Example : $ docker pull nvcr.io/nvidia/pytorch:23.09-py3

1.2.2 Create Container


To create a container, run the docker run command as below:

$ docker run --gpus all -it -v [local_dir]:[container_dir] --name [name] nvcr.io/nvidia/pytorch:xx.xx-py3

local_dir : local directory to be mounted into the container
container_dir : mount point inside the container
name : name of the container
1.2.3 Start Docker
When you exit the interactive session, start the container again and reattach:
$ docker start [name]
$ docker attach [name]

name : name of the container
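
As a concrete example (the workspace path and container name here are arbitrary placeholders, not part of the image), creating a container with a mounted workspace and confirming that PyTorch sees the GPU might look like:

```shell
# Create a container named pytorch-dev with ~/workspace mounted at /workspace.
# Directory and container names are examples only.
docker run --gpus all -it -v ~/workspace:/workspace --name pytorch-dev \
    nvcr.io/nvidia/pytorch:23.09-py3

# Then, inside the container, confirm GPU access:
python -c "import torch; print(torch.cuda.is_available())"
```

A printed True means the container can use CUDA; False usually points to a missing driver or Container Toolkit on the host.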

2. Optimized Tensorflow for Nvidia GPU


TensorFlow is an open source platform for machine learning. It provides comprehensive tools
and libraries in a flexible architecture allowing easy deployment across a variety of platforms
and devices. NGC Containers are the easiest way to get started with TensorFlow. The TensorFlow
NGC Container comes with all dependencies included, providing an easy place to start
developing common applications, such as conversational AI, natural language processing (NLP),
recommenders, and computer vision.

The TensorFlow NGC Container is optimized for GPU acceleration, and contains a validated set
of libraries that enable and optimize GPU performance. This container may also contain
modifications to the TensorFlow source code in order to maximize performance and
compatibility. This container also contains software for accelerating ETL (DALI, RAPIDS), Training
(cuDNN, NCCL), and Inference (TensorRT) workloads.

2.1 Prerequisites
Using the TensorFlow NGC Container requires the host system to have the following installed:
• Docker
  https://docs.docker.com/engine/install/ubuntu/
• Nvidia GPU Drivers
  https://www.nvidia.com/download/index.aspx
• Nvidia Container Toolkit
  https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
2.2 Running Tensorflow Using Docker
2.2.1 Pull Images
Select a tag depending on the TensorFlow version you want to use:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags

Example : $ docker pull nvcr.io/nvidia/tensorflow:23.09-py3

2.2.2 Create Container


To create a container, run the docker run command as below:

$ docker run --gpus all -it -v [local_dir]:[container_dir] --name [name] nvcr.io/nvidia/tensorflow:xx.xx-py3

local_dir : local directory to be mounted into the container
container_dir : mount point inside the container
name : name of the container

2.2.3 Start Docker
When you exit the interactive session, start the container again and reattach:
$ docker start [name]
$ docker attach [name]

name : name of the container
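
To confirm that TensorFlow inside the container can see the GPU, you can list the physical GPU devices from within it; a minimal check is:

```shell
# Run inside the TensorFlow container: list visible GPU devices.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

A non-empty list confirms GPU acceleration is available; an empty list usually means the container was started without --gpus all.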

3. MMDetection
MMDetection is an object detection toolbox that contains a rich set of object detection,
instance segmentation, and panoptic segmentation methods as well as related components and
modules.
MMDetection consists of 7 main parts: apis, structures, datasets, models, engine, evaluation,
and visualization.
• apis provides high-level APIs for model inference.
• structures provides data structures like bbox, mask, and DetDataSample.
• datasets supports various datasets for object detection, instance segmentation, and panoptic segmentation.
  o transforms contains many useful data augmentation transforms.
  o samplers defines different data loader sampling strategies.
• models is the most vital part for detectors and contains different components of a detector.
  o detectors defines all of the detection model classes.
  o data_preprocessors is for preprocessing the input data of the model.
  o backbones contains various backbone networks.
  o necks contains various neck components.
  o dense_heads contains various detection heads that perform dense predictions.
  o roi_heads contains various detection heads that predict from RoIs.
  o seg_heads contains various segmentation heads.
  o losses contains various loss functions.
  o task_modules provides modules for detection tasks, e.g. assigners, samplers, box coders, and prior generators.
  o layers provides some basic neural network layers.
• engine is a part for runtime components.
  o runner provides extensions for MMEngine's runner.
  o schedulers provides schedulers for adjusting optimization hyperparameters.
  o optimizers provides optimizers and optimizer wrappers.
  o hooks provides various hooks of the runner.
• evaluation provides different metrics for evaluating model performance.
• visualization is for visualizing detection results.

3.1 Prerequisites
Using MMDetection requires the host system to have the following installed:
• Docker
  https://docs.docker.com/engine/install/ubuntu/

3.2 Running MMDetection Using Docker


3.2.1 Build image
Select a tag depending on the MMDetection version you want to use, then download it:
https://github.com/open-mmlab/mmdetection/tags

Example :
$ wget https://github.com/open-mmlab/mmdetection/archive/refs/tags/v3.1.0.tar.gz
$ tar -xvf v3.1.0.tar.gz
$ cd mmdetection-3.1.0
$ docker build -t mmdetection docker/

3.2.2 Create Container


To create a container, run the docker run command as below:

$ docker run --gpus all -it -v [local_dir]:[container_dir] --name [name] mmdetection

local_dir : local directory to be mounted into the container
container_dir : mount point inside the container
name : name of the container

3.2.3 Start Docker
When you exit the interactive session, start the container again and reattach:
$ docker start [name]
$ docker attach [name]

name : name of the container
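
Once inside the container, a quick way to confirm the installation before running any training or inference is to import the package and print its version (a full inference demo needs a model config and checkpoint, whose arguments vary by MMDetection version; see the demo/ directory in the repository for version-specific usage):

```shell
# Inside the mmdetection container: smoke-test the install.
python -c "import mmdet; print(mmdet.__version__)"
```

If this prints the expected version (e.g. 3.1.0 for the tag downloaded above), the toolbox is ready to use.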


4. All Yolo Version
YOLO is a state-of-the-art computer vision model. YOLO models contain out-of-the-box support
for object detection, classification, and segmentation tasks, accessible through a Python
package as well as a command line interface.

4.1 Prerequisites
Running Yolo in Docker requires the host system to have the following installed:
• Docker
  https://docs.docker.com/engine/install/ubuntu/
• Nvidia GPU Drivers
  https://www.nvidia.com/download/index.aspx
• Nvidia Container Toolkit
  https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

4.2 Running Yolo Using Docker


4.2.1 Select Yolo Version
Below are the Yolo versions that have already been tested:
YoloV3 : https://github.com/ultralytics/yolov3
YoloV4 : https://github.com/WongKinYiu/PyTorch_YOLOv4
YoloV5 : https://github.com/ultralytics/yolov5
YoloV6 : https://github.com/meituan/YOLOv6
YoloV7 : https://github.com/WongKinYiu/yolov7
YoloV8 : https://github.com/ultralytics/ultralytics

Example :
$ wget https://github.com/WongKinYiu/yolov7/archive/refs/tags/v0.1.tar.gz
$ tar -xvf v0.1.tar.gz
$ cd yolov7-0.1
$ docker build -t yolov7 docker/

4.2.2 Create Container

To create a container, run the docker run command as below:

$ docker run --gpus all -it -v [local_dir]:[container_dir] --name [name] yolov7

local_dir : local directory to be mounted into the container
container_dir : mount point inside the container
name : name of the container
4.2.3 Start Docker
When you exit the interactive session, start the container again and reattach:
$ docker start [name]
$ docker attach [name]

name : name of the container
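
For yolov7, a typical smoke test once inside the container is the repository's detect.py script. The weights file below is an assumption: it is not bundled with the source, so download it from the repository's release page first (the sample image ships with the repo):

```shell
# Inside the container, in the yolov7 source directory.
# yolov7.pt must be downloaded from the repo's release page first.
python detect.py --weights yolov7.pt --source inference/images/horses.jpg
```

The annotated output image is written under runs/detect/, confirming that the container, GPU, and model all work end to end.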
