Sunday, May 13, 2018

Running Kaggle Kernels with a GPU

https://www.kaggle.com/dansbecker/running-kaggle-kernels-with-a-gpu/code


Intro

Kaggle provides free access to NVIDIA K80 GPUs in kernels. This benchmark shows that enabling a GPU for your kernel results in a 12.5X speedup while training a deep learning model.
This kernel was run with a GPU. I compare run-times to a kernel training the same model on a CPU here.
The total run-time with a GPU is 994 seconds; the total run-time with only a CPU is 13,419 seconds. That is a 12.5X speedup (the CPU-only run-time is 13.5X as long).
Limiting the comparison to model training alone, the time drops from 13,378 seconds on the CPU to 950 seconds on the GPU, so the training speedup is a little over 13X.
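As a minimal sketch of how a training-only number like this can be measured (this is not the benchmarked kernel's actual code; the model, data, and hyperparameters here are stand-ins, using MNIST and a small Keras network), assuming TensorFlow/Keras, which Kaggle's Python kernels ship with:

import time

from tensorflow import keras

# Stand-in data and model; the benchmarked kernel trains a different model.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Time only the fit() call, mirroring the training-only comparison above.
start = time.time()
model.fit(x_train, y_train, batch_size=128, epochs=5, verbose=0)
print("Training took %.0f seconds" % (time.time() - start))

Running the same script once with the kernel's GPU setting on and once with it off reproduces a CPU-versus-GPU comparison of this kind.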
The exact speedup varies based on a number of factors, including model architecture, batch size, input-pipeline complexity, etc. That said, the GPU opens up much greater possibilities in Kaggle kernels.
If you want to use these GPUs for deep learning projects, you'll likely find our Deep Learning Course the fastest way to get up to speed so you can run your own projects. We're also adding new image-processing datasets to our Datasets platform, and we always have many Competitions where you can try out new ideas using these free GPUs.
The following text shows how to enable a GPU and gives details on the benchmark.
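Once the GPU setting is enabled, a quick sanity check from inside the kernel confirms TensorFlow can see the device. A minimal sketch, assuming TensorFlow 1.x (current when this was written):

import tensorflow as tf

# Confirm TensorFlow can see the GPU once the kernel's GPU setting is on.
print("GPU available:", tf.test.is_gpu_available())

# On TensorFlow 2.x the equivalent check is:
# print(tf.config.list_physical_devices("GPU"))

If this prints False (or an empty list), the kernel is still running CPU-only and the settings toggle needs to be revisited.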
