# Using Keras via Docker

This directory contains a `Dockerfile` to make it easy to get up and running with
Keras via [Docker](http://www.docker.com/).

## Installing Docker

General installation instructions are
[on the Docker site](https://docs.docker.com/installation/), but we give some
quick links here:

* [OS X](https://docs.docker.com/installation/mac/): [docker toolbox](https://www.docker.com/toolbox)
* [Ubuntu](https://docs.docker.com/installation/ubuntulinux/)

## Running the container

We provide a `Makefile` that wraps the Docker commands behind simple make targets.

Build the container and start a Jupyter Notebook:

    $ make notebook

Build the container and start an IPython shell:

    $ make ipython

Build the container and start a bash session:

    $ make bash

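Each make target is essentially a thin wrapper around `docker build` and `docker run`. As a rough sketch of what `make notebook` boils down to (the image tag, port, and mount path below are illustrative assumptions, not taken from the actual `Makefile`):

    # Hypothetical expansion of `make notebook`; the real Makefile may differ.
    $ docker build -t keras .
    $ docker run -it -p 8888:8888 -v $(pwd):/src keras \
          jupyter notebook --ip=0.0.0.0 --no-browser
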
For GPU support install NVIDIA drivers (ideally latest) and
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker). Run using:

    $ make notebook GPU=0  # or [ipython, bash]

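The `GPU` variable selects which GPU is forwarded to the container. A hypothetical shell equivalent, assuming the image is tagged `keras`, would go through the `nvidia-docker` wrapper and its `NV_GPU` variable:

    # Hypothetical equivalent of `make notebook GPU=0` (image tag `keras` assumed)
    $ NV_GPU=0 nvidia-docker run -it -p 8888:8888 keras \
          jupyter notebook --ip=0.0.0.0 --no-browser
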
Switch between Theano and TensorFlow:

    $ make notebook BACKEND=theano
    $ make notebook BACKEND=tensorflow

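Keras reads its backend from the `KERAS_BACKEND` environment variable, so the `BACKEND` variable presumably ends up being passed into the container along these lines (illustrative, not the exact Makefile invocation):

    # Hypothetical: forward the chosen backend into the container
    $ docker run -it -p 8888:8888 --env KERAS_BACKEND=theano keras
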
Mount a volume for external data sets:

    $ make DATA=~/mydata

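Under the hood this is a Docker bind mount; a hypothetical equivalent (the `/data` mount point inside the container is an assumption):

    # Hypothetical: expose ~/mydata inside the container as /data
    $ docker run -it -v ~/mydata:/data keras
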
Print all make tasks:

    $ make help

You can change Theano parameters by editing `/docker/theanorc`.
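For example, a typical configuration for GPU training might look like this (illustrative values; any option Theano understands can go here):

    [global]
    device = gpu
    floatX = float32
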
Note: If you have trouble running nvidia-docker, you can fall back to the older
method we used before, but it is not recommended. If you find a bug in
nvidia-docker, please report it there and keep using nvidia-docker as described
above.

    $ export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda.* | xargs -I{} echo '-v {}:{}')
    $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
    $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES gcr.io/tensorflow/tensorflow:latest-gpu