Building a machine learning homelab (w/ Docker + Linux + Nvidia GTX 1080 GPU)

Lessons Learned

Before I begin, a bit of brief background on how and why I developed my current machine learning homelab.  I’ve spent much of my professional career working with technology on the cutting edge of what’s possible with modern machine learning.  My personal background is more on the web development and back-end infrastructure side of things, so I’ve helped monitor and improve the general reliability and maintainability of lots of different software, including machine learning models.

However, sometimes I just want to noodle around (or should I say dabble?) with various machine learning models on my own at home, without worrying about nuking a production system.  So I designed and built the homelab system described below as a simple, easy-to-manage machine learning test bed using my existing home gaming desktop.  I went through a lot of trial and error to get to this final design, but I’ll just present the final product as I’m currently using it.  Please drop me an email or leave a comment if you find any bugs or have suggestions on how I can improve the design.


Part of what makes this system so easy to manage is the integration between the Docker stack and the Linux kernel.  As such, Linux is required, and non-Ubuntu environments may behave somewhat differently depending on the distribution.  For this tutorial, I’ll assume the following starting environment:

  1. Ubuntu 14.04+
  2. Attached modern Nvidia GPU (I’m using an Nvidia GTX 1080)
    • Note: AMD cards will likely not work for GPU acceleration (nor, of course, will systems without a GPU), but that should not cause problems in CPU-only mode as described below.
  3. Docker installed
  4. Recent Nvidia drivers installed
  5. Git installed

Set up nvidia-docker

(Skip this section for CPU-only mode)

Once you’ve got your system ready, it’s time to install the magic package that makes our Nvidia GPU integration work.  It’s called “nvidia-docker”, and it allows us to run Docker containers that automatically connect to your local GPU(s) and make them available to the software inside the container.



Please see the nvidia-docker project documentation for detailed instructions on setting up your environment (especially the apt settings).  However, the following quick-and-dirty setup process will work for many users, assuming you already have some Nvidia drivers and Docker installed.

1. Get Docker

(Skip this section if you’ve already got a recent Docker installation.  Note that the docker-ce package comes from Docker’s own apt repository rather than the stock Ubuntu repositories, so you may need to add that repository first.)

$ sudo apt-get install docker-ce
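If you want to sanity-check the installation before moving on, Docker’s hello-world image is a quick way to confirm the daemon is running and can pull and run containers:

```shell
# Verify the Docker installation by running the hello-world container
$ sudo docker run --rm hello-world
```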

2. Install Nvidia drivers

(Skip this section if you’ve already got recent Nvidia drivers for your GPU)

Install a compatible Nvidia driver package on your local system.

$ sudo apt-get install nvidia-384
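Once the driver package is installed (a reboot may be required for the kernel module to load), you can confirm the driver is working with nvidia-smi, which should print a table listing your GPU and the driver version:

```shell
# Confirm the Nvidia driver is loaded and can see the GPU
$ nvidia-smi
```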

3. Get nvidia-docker

Get sources for the latest nvidia-docker package.

$ curl -s -L | sudo apt-key add - 
$ curl -s -L | sudo tee /etc/apt/sources.list.d/nvidia-docker.list 
$ sudo apt-get update

Install the latest nvidia-docker package.

$ sudo apt-get install nvidia-docker2

Reload the Docker daemon configuration.

$ sudo pkill -SIGHUP dockerd
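As a quick smoke test (this is the check suggested in the nvidia-docker documentation), try running nvidia-smi inside a CUDA base container using the nvidia runtime.  If the familiar GPU table prints, the container-to-GPU plumbing is wired up correctly:

```shell
# Run nvidia-smi inside a CUDA container via the nvidia runtime
$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```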

4. Set up Jupyter Notebooks

Change to your home directory, or some other directory where you’d like the Keras code to live.  Keras is a machine learning library, and its code repository ships with a Docker-based Jupyter Notebook environment that we will leverage.

$ cd $HOME

Get the keras code base using git.

$ git clone

Change directories to the keras folder.

$ cd keras

Set up git submodules.

$ git submodule init
$ git submodule update

Change directories to the docker folder.

$ cd docker

Launch the Jupyter Notebook environment.  Make sure to note the notebook URL link in the terminal output from this command.

(Leave out “GPU=0” for CPU-only mode)

$ make notebook BACKEND=tensorflow GPU=0 # Note URL in output

5. Point your favorite web browser at the “…” URL you just generated

Open this URL in your web browser of choice and you’ll have access to a persistent environment that you can use to run machine learning models.  The default setup runs a TensorFlow environment, but many related tools will also work in this environment.  You may also want to try out the Theano flavor of the environment by switching the backend option to “BACKEND=theano”.
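For example, to relaunch the same notebook environment on the Theano backend (again, leave out “GPU=0” for CPU-only mode):

```shell
# Same launcher as before, Theano backend instead of TensorFlow
$ make notebook BACKEND=theano GPU=0
```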

6. From the Jupyter Notebooks landing page, navigate to workspace>examples>deep-learning-keras-tensorflow

Run the example notebook “0. Preamble.ipynb” by clicking on it, then selecting Cell > Run All.

In the deep-learning-keras-tensorflow folder, you will find several Jupyter Notebooks (all ending in “.ipynb”) that will run using your attached GPU.  There are also many other notebooks you can download and play with without GPU support.  Have a look around and see what you can find.
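One handy trick while a notebook is training: watch nvidia-smi from a second terminal on the host to confirm the container is actually exercising the GPU.  You should see a process from the container appear in the process list, along with its GPU memory usage:

```shell
# Refresh the nvidia-smi output every second while a notebook runs
$ watch -n 1 nvidia-smi
```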

NOTE: Any data not stored under the “workspace” directory on the Jupyter Notebooks landing page will not persist once you stop your Jupyter Notebooks Keras container.


A good chunk of getting modern machine learning models to work is just setting up the proper infrastructure.  I’ve found other guides online to be either overly verbose or incomplete when it comes to setting up modern machine learning infrastructure in a homelab setting.  Hopefully I’ve helped fill that gap somewhat here.  Please comment below or shoot me an email if you find bugs or have general comments on my post.
