Google Compute Engine has support for launching instances with GPUs.

To use GPUs with googleComputeEngineR, the main function is gce_vm_gpu(), which sets some defaults for you before passing the arguments on to gce_vm():

If not otherwise specified, this function applies defaults to get a GPU instance up and running using the deep learning VM project, as specified in this Google article.

Modify the defaults as you wish by passing them into the function.

  • acceleratorCount: 1
  • acceleratorType: "nvidia-tesla-p4"
  • scheduling: list(onHostMaintenance = "TERMINATE", automaticRestart = TRUE)
  • image_project: "deeplearning-platform-release"
  • image_family: "tf-latest-cu92"
  • predefined_type: "n1-standard-8"
  • metadata: list("install-nvidia-driver" = "True")
vm <- gce_vm_gpu(name = "gpu")

# if you want most GPU units
vm <- gce_vm_gpu(name = "gpu", acceleratorCount = 4)

You can check the installation via gce_check_gpu(), which returns the output of the nvidia-smi status command via SSH.
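For example, a minimal sketch (assuming SSH access to the VM is already configured and `vm` is the object returned by gce_vm_gpu()):

```r
# runs nvidia-smi on the instance over SSH and returns its output,
# confirming the NVIDIA driver can see the GPU
gce_check_gpu(vm)
```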

You can see the GPUs available for your project and zone via gce_list_gpus()
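A sketch of listing the available accelerators (the explicit `zone` argument here is an assumption; by default the function uses your globally configured project and zone):

```r
# list the GPU accelerator types available in your default project/zone
gce_list_gpus()

# or query a specific zone to see which GPUs it offers
gce_list_gpus(zone = "us-east1-c")
```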

GPUs are available in fewer zones than normal instances; you will get an error if you try to launch a GPU in a zone where it is not available.

From the above list, if you wanted to select a different GPU you would then issue:

vm <- gce_vm_gpu(name = "gpu", acceleratorCount = 4, acceleratorType = "nvidia-tesla-k80")

Deep learning for R: the rstudio-gpu template has deep learning Docker images available at rocker/ml that install:

  • NVIDIA GPU drivers via CUDA
  • RStudio/R/Tidyverse
  • Tensorflow
  • Keras
  • xgboost
  • mxnet
  • h2o

This is an appropriate workstation for working through the "Deep Learning with R" book by François Chollet and J.J. Allaire.

A template for gce_vm() is set up to launch the above image with GPU support for the nvidia-tesla-p4. It uses gce_vm_gpu() internally to configure the VM.

You may need to configure a zone that has the nvidia-tesla-p4 GPU, or pick another GPU available in the project/zone you want to launch in. For example, the zone "europe-west1-b" below did not have the default nvidia-tesla-p4, so one that was available is selected instead:
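A sketch of launching the template (the zone name, username, and password below are placeholder assumptions; check gce_list_gpus() for where your chosen GPU is actually available):

```r
library(googleComputeEngineR)

# pick a zone that offers the GPU you want
gce_global_zone("europe-west4-a")

# launch the rstudio-gpu template; the username/password are the
# credentials for the RStudio login on the instance
vm <- gce_vm(name = "rstudio-gpu",
             template = "rstudio-gpu",
             username = "gpu_user",
             password = "a_secure_password")
```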

Testing rstudio-gpu template

Allow around 5 minutes on startup for the instance to boot up. After that you should be able to log in at the IP address it gives you.

This is the "hello world" script from the "Deep Learning with R" book. You should be able to run it and see from the message feedback that it is using a gpu_device to compute the model:
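A sketch of that script, based on the MNIST example from the book (it assumes the keras R package is already configured against the image's TensorFlow installation):

```r
library(keras)

# load the MNIST digits and flatten each 28x28 image into a vector
mnist   <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, c(60000, 28 * 28)) / 255
y_train <- to_categorical(mnist$train$y, 10)

# a small dense network
model <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",
  loss      = "categorical_crossentropy",
  metrics   = "accuracy"
)

# during fitting, the TensorFlow log messages should mention a
# gpu_device if the GPU is being used
model %>% fit(x_train, y_train, epochs = 5, batch_size = 128)
```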