Recently, the C.R.A.W.LAB has been doing more machine learning research, driven by our Autonomous Surface Vehicle (ASV) research. The work started with our entry into the 2016 Maritime RobotX Challenge and leverages the great body of work being done by autonomous car researchers.
Much of this machine learning work is well suited to processing on the GPU, and most libraries that facilitate that processing rely on NVIDIA's CUDA platform, necessitating an NVIDIA graphics card. This makes it difficult to do this work on any modern Mac, as they all use AMD graphics cards. Fortunately, advancements in high-speed external buses have made an external GPU, or eGPU, an option.
In this post, I'll document how I set up an external GPU with my mid-2012 15-inch Retina MacBook Pro (MacbookPro10,1).
The equipment that I'm using is:
Akitio Node eGPU box - This box has only been shipping for a short time, but it has received great reviews and is very reasonably priced. It is somewhat larger than I expected based on the pictures on the web.
GEFORCE® GTX 1080 Ti Graphics Card - This GPU is just short of NVIDIA's top-of-the-line Titan series.1 Many recommend it as a good value-to-performance tradeoff. This site maintains a good summary of current graphics card offerings for machine learning and, for as long as I've followed it, has stayed up-to-date.
Apple Thunderbolt 3 to Thunderbolt 2 Adapter - This adapter seems to be intended for the "computer" side of the connection, but I'm using it on the "device" side without any problems.
My MacBook Pro is only a Thunderbolt 1 capable device, which necessitates the adapter listed above. It also forces us to enable Thunderbolt 3 support in macOS by disabling System Integrity Protection (SIP). To do so, we have to boot into recovery mode by holding ⌘-R (Command-R) during startup. Once in recovery mode, open the Terminal from the Utilities menu, then type:
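The recovery-mode command to disable SIP is:

```shell
# Disable System Integrity Protection (takes effect after a reboot)
csrutil disable
```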
Then, reboot. Once rebooted, we'll run a script developed by the eGPU community. To do so, open the terminal and run:
cd ~/Desktop && curl -o automate-eGPU.sh https://raw.githubusercontent.com/goalque/automate-eGPU/master/automate-eGPU.sh && chmod +x automate-eGPU.sh && sudo ./automate-eGPU.sh
This string of commands downloads the script to your desktop, modifies its permissions to make it executable, then runs it with administrator privileges. Just follow the prompts in the script, which will download and install the most recent NVIDIA drivers, plus handle a few housekeeping tasks to set the system up for the external GPU. When finished, your NVIDIA card should be set up and appear under the Graphics/Displays tab of System Information.
NVIDIA CUDA Installation and Setup
As I mentioned above, most machine learning libraries rely on NVIDIA's CUDA platform. So, we need to download and install the NVIDIA CUDA Toolkit; the installation guide has additional information. Many libraries also require the NVIDIA CUDA® Deep Neural Network library (cuDNN) to be installed. Note that this installation might require adding the library's install location to your path. The exact variables and paths depend on your setup and CUDA version, so it's best to check NVIDIA's installation instructions.
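As a concrete sketch, here are the lines I'd add to a shell profile, assuming CUDA 8.0 installed to its default macOS location (`/Developer/NVIDIA/CUDA-8.0`); adjust the version number to match your install:

```shell
# Default macOS install location for the CUDA toolkit (version-specific)
export CUDA_HOME=/Developer/NVIDIA/CUDA-8.0
# Make the CUDA compiler (nvcc) and tools available on the command line
export PATH="$CUDA_HOME/bin:$PATH"
# Let the dynamic linker find the CUDA (and cuDNN) libraries
export DYLD_LIBRARY_PATH="$CUDA_HOME/lib:$DYLD_LIBRARY_PATH"
```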
Machine Learning Frameworks
In the C.R.A.W.LAB, we use the Anaconda distribution of Python, along with its conda package manager. This makes it easy to set up and manage virtual environments to avoid conflicts between learning packages and create reproducible environments.
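For example, a dedicated environment for this GPU work might be set up like this (the environment name `gpu-ml` and the Python version are arbitrary choices on my part):

```shell
# Create an isolated conda environment for GPU experiments
conda create --name gpu-ml python=3.6
# Activate it so subsequent installs go into this environment
source activate gpu-ml
```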
So far, I've been able to install and verify Keras using the conda-forge binary. This also installs and enables TensorFlow, which Keras uses as its default backend. PyTorch was more trouble, since installing it on macOS with CUDA support requires building from source.
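Concretely, the Keras install and a minimal sanity check look something like this (the import check only verifies that the install worked; it is not a performance test):

```shell
# Install Keras from the conda-forge channel (pulls in TensorFlow)
conda install --channel conda-forge keras
# Verify the install by importing Keras and printing its version
python -c "import keras; print(keras.__version__)"
```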
With this setup, I was able to reduce the processing times for some simple examples by more than 96% compared to the MacBook's built-in GPU. The GPU was only running at about 50% utilization, so I'm convinced (for now) that my laptop's Thunderbolt 1 bandwidth will be the limit, rather than the GPU's capabilities, for most problems we'll be working on. I'm excited about this work.
Additional Resources and Links
Since the original draft of this post, I have also followed this exact process with an NVIDIA TITAN Xp. ↩