[Question] The best option to run different Deep Learning models in parallel

Greetings all,

My lab plans to purchase machines for deep learning to support our work (there are three of us, including two colleagues). The two options we have are:

  1. Buy 3 PCs with a single GPU
  2. Buy 1 Server Rack with 3 GPUs
My colleagues and I work on different problems, so we will run different DL models. My question is: which of the two options above is the better choice? And if we go with the second option, is it possible to tell a DL model to run on a specific GPU, so that the three of us can work in parallel?

Thanks in advance.
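On the second question: yes, most frameworks let you pin a job to one GPU. A minimal sketch, assuming PyTorch with CUDA (TensorFlow has the same idea via `tf.config.set_visible_devices` or the `CUDA_VISIBLE_DEVICES` environment variable, which works for any framework):

```python
import os

# Option A: restrict this process to one GPU before the framework initializes.
# Each person sets a different index; the framework then sees only that card.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # e.g. the second GPU in the server

import torch

# Option B: pick a device explicitly in the script, falling back to CPU
# so the same code also runs on a machine without a GPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # move the model to the chosen device
x = torch.randn(4, 10, device=device)       # keep inputs on the same device
print(model(x).shape)                       # torch.Size([4, 2])
```

With a 3-GPU server, each of you would launch your own training process with a different device index (e.g. `CUDA_VISIBLE_DEVICES=0`, `=1`, `=2`), and the jobs run in parallel without interfering with one another's GPU memory.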