In this post you’ll learn how to use nvidia-docker with TensorFlow.

Pulling the TensorFlow GPU Image

Use the correct image based on your CUDA version:

  • CUDA 8.0: use tensorflow/tensorflow:latest-gpu
  • CUDA 7.5: use tensorflow/tensorflow:1.0.0-rc0-gpu
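If you want to fetch the image ahead of time rather than on first run, you can pull it explicitly with plain docker (the tag below is the CUDA 8.0 one from the list above):

```shell
# Pull the GPU-enabled TensorFlow image from Docker Hub
docker pull tensorflow/tensorflow:latest-gpu
```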

Test nvidia-smi

nvidia-docker run --rm tensorflow/tensorflow:latest-gpu nvidia-smi

That’s it! Now you’re ready to run your TensorFlow code.

Example Dockerized TensorFlow Project Using the GPU

1) Create a Dockerfile

FROM tensorflow/tensorflow:latest-gpu

COPY . /app
WORKDIR /app

2) Create app.py

import tensorflow as tf

# Creating the session logs the devices TensorFlow can see,
# so you can confirm the GPU was picked up.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# You should see the following output
# ...
# I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
# name: GeForce 940M
# major: 5 minor: 0 memoryClockRate (GHz) 1.176
# ...
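Creating the session only logs device discovery; to see per-op placement you need to actually run something. A minimal sketch, assuming the TensorFlow 1.x API shipped in these images (intended to run inside the tensorflow/tensorflow:latest-gpu container):

```python
import tensorflow as tf

# Build a tiny graph; with log_device_placement=True, running it
# logs which device (e.g. /gpu:0) each op was assigned to.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
b = tf.constant([[1.0, 0.0], [0.0, 1.0]], name='b')  # identity matrix
c = tf.matmul(a, b, name='c')

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))  # multiplying by the identity returns a
```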

3) Build and run with nvidia-docker

nvidia-docker build -t example_tf_gpu .
nvidia-docker run -it example_tf_gpu python app.py
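Rebuilding the image after every code change gets tedious during development; one common shortcut (assuming the Dockerfile above, which sets WORKDIR to /app) is to bind-mount the project directory over /app so the container always sees your current source:

```shell
# Mount the current directory over /app instead of rebuilding the image
nvidia-docker run -it -v "$(pwd)":/app example_tf_gpu python app.py
```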