Hands-on with the NVIDIA DIGITS DevBox for Deep Learning

I’ve got a big announcement today:

I will be doing more Deep Learning and Convolutional Neural Network tutorials on the PyImageSearch blog over the coming months.

I’m dead serious about this — and I’ve put my money where my mouth is and invested in some real hardware for deep learning.

To learn more about my investment, the NVIDIA DIGITS DevBox, and the new tutorials coming to the PyImageSearch blog, keep reading.

Hands-on with the NVIDIA DIGITS DevBox for Deep Learning

For anyone who is interested in the NVIDIA DIGITS DevBox for deep learning — and perhaps more importantly, the rationale that led me to purchase a pre-configured deep learning system instead of building my own — I’ve included my experience working through the decision process, making the purchase, and unboxing the system. In future blog posts, I’ll review how I’ve set up and configured the system for my own optimal workflow.

NVIDIA DIGITS DevBox specs

Let me tell you: the NVIDIA DIGITS DevBox is a beast.

In terms of system specs, the DevBox sports:

  • 4 Titan X GPUs (12GB of memory per board)
  • 64GB DDR4 RAM
  • Asus X99-E WS motherboard
  • Core i7-5930K 6-core processor running at 3.5GHz
  • Three 3TB SATA 3.5″ hard drives configured in RAID5 (useful for storing massive datasets)
  • 512GB SSD cache for RAID
  • 250GB SATA internal SSD (where you’ll store your system files, source code, and other “most accessed” files)
  • 1600W power supply

But it’s not just the hardware that makes the NVIDIA DIGITS DevBox awesome. It also comes pre-configured with:

  • Ubuntu 14.04
  • NVIDIA drivers
  • NVIDIA CUDA Toolkit 6.0-7.5
  • cuDNN 4.0
  • Caffe, Theano, Torch

This machine is no joke — and it’s not cheap either.

Weighing in at $15,000, this isn’t your standard desktop machine — this system is intended for researchers, companies, and universities that are doing real work in deep learning.

So, you’re probably wondering…

“You’re not a deep learning researcher, Adrian. And while you own companies that involve computer vision + machine learning, why do you need such a beastly machine?”

Great question.

Which brings me to the next section of this blog post…

Investing in the future of PyImageSearch, my companies, and myself

On the surface, you likely see two sides of Adrian Rosebrock:

  1. The blogger who writes weekly blog posts and email announcements.
  2. The wordsmith, teacher, and educator who has authored Practical Python and OpenCV as well as the PyImageSearch Gurus course.

But there is also a third side that doesn’t often get discussed on the PyImageSearch blog (besides the occasional note here and there): the entrepreneur and (occasional) consultant.

It’s becoming increasingly rare that I can take on new consulting/contracting work, but when I do, I tend to be very selective about the project and the budget. And over the past few years, I’ve noticed I’ve been using more and more deep learning in my projects (both for contracting work and for personal/business projects).

This may appear in stark contrast (and on the surface, perhaps a bit hypocritical) to my blog post on getting off the deep learning bandwagon — but the title of that article wasn’t the primary point.

Instead, the purpose of that (controversial) blog post was to drive a single detail home:

“Machine learning isn’t a tool. It’s a methodology with a rational thought process that is entirely dependent on the problem we are trying to solve. We shouldn’t blindly apply algorithms and see what sticks [aside from spot-checking]. We need to sit down, explore the feature space (both empirically and in terms of real-world implications), and then consider our best mode of action.”

Deep learning, just like Support Vector Machines, Random Forests, and other machine learning algorithms, comes with a rational process and a set of assumptions governing when we should use each particular model. There is a time and a place where we use deep learning — you just need to be mindful in your selection of algorithms for a particular problem.

My hope in publishing deep learning tutorials on the PyImageSearch blog is to better illuminate when and where deep learning is appropriate for computer vision tasks.

So what does this have to do with the PyImageSearch blog?

Great question.

I’ve said it before in other blog posts, and I’ll say it again here today:

What I love about running the PyImageSearch blog is writing tutorials related to what you, the reader, want to hear about.

Every day I get more and more requests for deep learning tutorials. And up until 2 months ago, I was still writing the PyImageSearch Gurus course. I simply did not have the time, energy, or attention span to start planning out deep learning tutorials — which, by their very definition, take a lot more time and effort (in terms of thought process, computational effort, and experiments) for me to create.

Now that I’ve finished writing the PyImageSearch Gurus course, I’ve reclaimed a bit of my time.

But more importantly, I’ve reclaimed a bunch of my energy and attention — both of which are critical in creating high-quality tutorials. Over the years I spent in graduate school, writing my dissertation, running the PyImageSearch blog, authoring Practical Python and OpenCV, and creating the PyImageSearch Gurus course, I’ve mastered the ability to bang out 5,000+ words in a single sitting. Time isn’t a problem for me when it comes to writing — what really matters are my energy and attention levels.

Over the next year, you can expect more deep learning tutorials to be published on the PyImageSearch blog. It won’t be an immediate change, but it will slowly ramp up over the next 3-4 months as I start creating a backlog of posts.

The point is this:

If you’re interested in deep learning, specifically deep learning and computer vision, PyImageSearch is the place to be.

Running the numbers

Okay, so I’ve already mentioned that I invested $15,000 in an NVIDIA DIGITS DevBox — that’s not a small amount of money by any means.

So how did I justify this huge number?

As I mentioned above, I’m an entrepreneur, a scientist, and at heart, a business person — which implies there is (some sort of) logic behind my actions. If you show me the numbers, and they work out and align with my goals, I can justify the investment and plan accordingly.

I started out the assessment by looking at the facts:

  • Fact #1: I am currently spending either $0.65 per hour on an Amazon EC2 g2.2xlarge instance or $2.60 per hour on a g2.8xlarge instance. Most of my EC2 hours are spent on a g2.8xlarge instance.
  • Fact #2: Both of these EC2 instances have less RAM than the NVIDIA DIGITS DevBox. And based on the way Amazon computes virtual CPUs, the g2.8xlarge has only a marginally better CPU (again, if you can trust the vCPU allocation).
  • Fact #3: Currently, EC2 GPU instances have only 4GB of memory with 1,536 cores (the Titan X has 12GB memory and 3,072 cores).
  • Fact #4: The NVIDIA DIGITS DevBox has 4 Titan X GPUs (totaling 48GB of memory). The g2.8xlarge has 4 K520 GPUs (totaling 16GB of memory).
  • Fact #5: 4GB alone just doesn’t cut it for larger datasets without spreading computation across multiple GPUs. That’s fine, but doesn’t seem worth it in the long run. Ideally, I can run either four experiments in parallel or spread the computation across four cards, thereby decreasing the time it takes to train a given model. The Titan X is clearly the winner here.
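As a quick sanity check on Fact #4, the aggregate GPU resources can be tallied from the per-card specs above (the numbers below are the published Titan X and GRID K520 figures; this is just back-of-the-envelope arithmetic, not a benchmark):

```python
# Tally aggregate GPU memory and CUDA cores for a 4-card system,
# using the published per-card specs quoted above.
SPECS = {
    "Titan X": {"memory_gb": 12, "cuda_cores": 3072},  # DevBox cards
    "K520": {"memory_gb": 4, "cuda_cores": 1536},      # g2.8xlarge cards
}

def aggregate(gpu, count=4):
    """Return total memory (GB) and CUDA cores across `count` cards."""
    spec = SPECS[gpu]
    return {key: value * count for key, value in spec.items()}

print("DevBox (4x Titan X):", aggregate("Titan X"))
# DevBox (4x Titan X): {'memory_gb': 48, 'cuda_cores': 12288}
print("g2.8xlarge (4x K520):", aggregate("K520"))
# g2.8xlarge (4x K520): {'memory_gb': 16, 'cuda_cores': 6144}
```

Three times the aggregate memory is the headline number here — for deep learning, GPU memory (not just core count) is usually the binding constraint on batch size and model size.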

Next, I ran the numbers to determine the intersection between the hourly rate of a g2.8xlarge instance and the $15,000 upfront investment:

Figure 1: Plotting the intersection between the EC2 g2.8xlarge hourly rate and the upfront cost of $15,000 for the NVIDIA DIGITS DevBox. The break-even point is at approximately 240 days.

This gives me approximately 5,769 hours (~240 days) of compute time on a g2.8xlarge instance.

Note: That’s 23,076 hours (2.6 years) on a g2.2xlarge instance — again, I’ll reiterate the point that I’m mainly using g2.8xlarge instances.

With a break-even point of only 240 days (and given that a single model can take days to weeks to train), the decision started to become clearer.
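For anyone who wants to reproduce the break-even figures, the computation is just the upfront cost divided by the on-demand hourly rate (the rates below are the EC2 prices quoted earlier in this post):

```python
# Break-even: how many hours of on-demand EC2 GPU time the
# $15,000 upfront cost of the DevBox would otherwise buy.
UPFRONT_COST = 15000.00   # NVIDIA DIGITS DevBox
RATE_G2_8XLARGE = 2.60    # on-demand $/hour
RATE_G2_2XLARGE = 0.65    # on-demand $/hour

def break_even_hours(upfront, hourly_rate):
    """Hours of EC2 time equivalent to the upfront cost."""
    return upfront / hourly_rate

hours = break_even_hours(UPFRONT_COST, RATE_G2_8XLARGE)
print(f"g2.8xlarge: {int(hours):,} hours (~{hours / 24:.0f} days)")
# g2.8xlarge: 5,769 hours (~240 days)

hours = break_even_hours(UPFRONT_COST, RATE_G2_2XLARGE)
print(f"g2.2xlarge: {int(hours):,} hours (~{hours / (24 * 365):.1f} years)")
# g2.2xlarge: 23,076 hours (~2.6 years)
```

Note that this simple model ignores electricity, cooling, and future EC2 price drops; it is a first-order estimate, not a full total-cost-of-ownership analysis.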

Now, the next question I had to ask myself was:

“Do I order the hardware, put it together myself, and save money? Or do I go with a pre-configured machine and pay a bit of a markup?”

I’ll get a fair amount of negative feedback for this point, but in my opinion, I tend to lean towards “done-for-you” solutions.

Why?

Three reasons: I have limited time, energy, and attention.

Anytime I can pay a professional to take on a task that I am either (1) not good at, (2) don’t like doing, or (3) that isn’t worth my time/energy, I’ll tend to move the task off my plate — this is the exact rationale that enables me to work smarter instead of harder.

So, let’s presume that I could buy the hardware to create a comparable system to the NVIDIA DIGITS DevBox for approximately $8,000 — that saves me $7,000 right?

Well, not so fast.

I’m not going to say what my hourly consulting rate is, but let’s (for the sake of this argument) say I charge $250 per hour of my time: $7,000 / $250 per hour = 28 hours.

In order for this time-to-money tradeoff to pay off (let alone the attention and energy it will take), within 28 hours of my own time I would need to:

  • Research the hardware I need.
  • Purchase it.
  • Get it all together in my office.
  • Assemble the machine.
  • Install the OS, software, drivers, etc.

Can I do all this in 28 hours with minimal context switching?

Honestly, I don’t know. I probably could.

But what if I’m wrong?

And the better question to ask is:

What if something breaks?

I’m not a hardware person and I don’t enjoy working with hardware — it’s just the way I’ve always been.

If I build my own system, I’m my own support staff. But if I go with NVIDIA, I have the entire DevBox team to help support, troubleshoot, and resolve any issues.

So, let’s say that it takes me a grand total of 15 hours to order the hardware, put it together, install the required components, and ensure that it’s working properly.

That leaves me with 28 – 15 = 13 hours of my time left to handle any troubleshooting issues that occur over the lifetime of the machine.
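The arithmetic behind this build-versus-buy tradeoff is simple enough to make explicit (the $250/hour rate and the $8,000 hardware estimate are the hypothetical figures from above):

```python
# Hypothetical build-vs-buy budget using the figures above.
devbox_cost = 15000       # pre-configured DevBox
diy_hardware_cost = 8000  # estimated cost of comparable parts
hourly_rate = 250         # hypothetical consulting rate ($/hour)

savings = devbox_cost - diy_hardware_cost          # $7,000
budget_hours = savings / hourly_rate               # 28.0 hours to "break even"
build_hours = 15                                   # optimistic build + install time
support_hours_left = budget_hours - build_hours    # 13.0 hours

print(f"DIY only pays off if build + lifetime support fits in "
      f"{budget_hours:.0f} hours ({support_hours_left:.0f} left after the build)")
# DIY only pays off if build + lifetime support fits in 28 hours (13 left after the build)
```

The framing matters more than the numbers: any lifetime support burden beyond those remaining hours erases the hardware savings entirely.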

Is that realistic?

No, it’s not.

And from this perspective (i.e., my perspective), this investment makes sense. You may be in a totally different situation — but between the projects I’m currently working on, the companies I run, and the PyImageSearch blog, I’ll be utilizing deep learning a lot more in the coming future.

Factor in that I value not only my time, but also my energy and attention, and the upfront cost is further justified. Plus, this better enables me to create awesome deep learning tutorials for the PyImageSearch blog.

Note: I’m not factoring in the increase in my utility bill, which will certainly happen. In the long run, though, this cost becomes marginal relative to my ability to save time, be more efficient, and ship faster.

Ordering the NVIDIA DIGITS DevBox

Placing an order for the NVIDIA DIGITS DevBox isn’t as simple as opening a webpage, entering your credit card information, and clicking the “Checkout” button. Instead, I needed to contact NVIDIA and fill out the access form.

Within 48 hours I was in talks with a representative, where I set up the PO (Purchase Order) and shipment. Once the terms were agreed upon, I cut a check and overnighted it to NVIDIA. Overall, it was quite a painless process.

However, I will note that if you’re not doing a lot of shipping/receiving, the actual shipment of the DevBox may be a bit confusing. I personally don’t have much experience in logistics, but luckily, my father does.

I called him up for some clarity regarding “Freight terms” and “EX-WORKS”. 

Essentially, EX-WORKS comes down to:

EX-WORKS (EXW) is an international trade term that describes an agreement in which the seller is required to make goods ready for pickup at his or her own place of business. All other transportation costs and risks are assumed by the buyer. (source)

What this boils down to is simple:

  1. NVIDIA will be putting together your system.
  2. But once it’s boxed up and on their loading bay, the responsibility is on you.

How did I handle this?

I used my FedEx business account and shelled out extra cash for insurance on the shipment. Not a big deal.

The only reason I included this section in the blog post is to help out others who are in a similar situation who may be unfamiliar with the terms.

Unboxing the NVIDIA DIGITS DevBox

The DevBox ships in a hefty ~55lb box, but the cardboard is extremely thick and well constructed:

Figure 2: The box the NVIDIA DIGITS DevBox ships in.

The DevBox is also very securely packed in a styrofoam container to prevent any damage to your machine.

While unboxing the DevBox may not be as exciting as opening a new Apple product for the first time, it’s still quite the pleasure. After unboxing, the NVIDIA DIGITS DevBox machine itself measures 18″ in height, 13″ in width, and 16.3″ in depth:

Figure 3: The NVIDIA DIGITS DevBox fully unboxed.

You’ll notice there are three hard drive slots on the front of the machine:

Figure 4: Three hard drive slots on the front of the DevBox.

This is where you slide in the three 3TB hard drives included in the shipment of your DevBox:

Figure 5: The DevBox ships with your 3TB hard drives. Luckily, you don’t need to purchase these separately.

Slotting the drives in couldn’t be easier:

Figure 6: Slotting the drives into their respective bays is easy. After slotting, the drives are securely locked in place.

On first boot, you need to connect a monitor, keyboard, and mouse to configure the machine:

Figure 7: Make sure you connect your monitor to the first graphics card in the system.

Pressing the power button starts the magic:

Figure 8: Booting the DevBox.

Next, you’ll see the boot sequence:

Figure 9: Going through the motions and booting the NVIDIA DIGITS DevBox.

After the machine finishes booting, you’ll need to configure Ubuntu (just like you would with a standard install). After setting your keyboard layout, timezone, username, password, etc., the rest of the configuration is automatically taken care of.

It’s also worth mentioning that the BIOS dashboard is quite informative and beautiful:

Figure 10: The BIOS dashboard on the NVIDIA DIGITS DevBox.

Overall, I’m quite pleased with the setup process. In under 30 minutes, I had the entire system set up and ready to go.

You’re not plugging that directly into the wall, are you?

Protect your investment — get a quality Uninterruptible Power Supply (UPS). I’ll detail exactly which UPS (and rack) I chose in the next blog post. The images gathered for this blog post were mainly for demonstration purposes during the initial unboxing.

In short, I would not recommend having your DevBox sitting directly on top of carpet or plugged into an outlet without a UPS behind it — that’s just asking for trouble.

Summary

Admittedly, this is a lengthy blog post, so if you’re looking for the TLDR, it’s actually quite simple:

  1. I’ll be doing a lot more deep learning tutorials on the PyImageSearch blog in the next year. It will start with a slow ramp-up over the next 3-4 months with more consistency building towards the end of the year.
  2. In order to facilitate the creation of better deep learning tutorials (and for use with my own projects), I’ve put my money where my mouth is and invested in an NVIDIA DIGITS DevBox.
  3. The DevBox was a delight to set up…although there are a few practical tips and tricks that I’ll be sharing in next week’s blog post.
  4. If you’re interested in diving into the world of deep learning, the PyImageSearch blog is the place to be.

It’s worth noting that I already cover deep learning inside the PyImageSearch Gurus course, so if you’re interested in learning more about Neural Networks, Deep Belief Networks, and Convolutional Neural Networks, be sure to join the course.

Otherwise, I’ll be slowly increasing the frequency of deep learning tutorials here on the PyImageSearch blog.

Finally, be sure to sign up for the PyImageSearch Newsletter using the form below to be notified when these new deep learning posts are published!

52 Responses to Hands-on with the NVIDIA DIGITS DevBox for Deep Learning

  1. Rafael Espericueta June 6, 2016 at 11:25 am #

    I’m wondering why you didn’t go with Exxact Corporations offering.
    It’s $8K rather than $15K.

    I certainly hope you don’t have a good reason, since I’ve already ordered my deep learning dream machine from them.

    • Adrian Rosebrock June 7, 2016 at 3:21 pm #

      I honestly haven’t worked with them before — no good reason other than that. Enjoy your new machine, and let me know what you think of it!

  2. Lex June 6, 2016 at 11:28 am #

    I’m sure anyone with a keen interest in computer vision has stumbled on ML articles and thought “Damn that looks interesting”, I know I have. My ML knowledge is VERY limited, and I for one am really looking forward to the next bunch of articles!

    Cheers Adrian, keep it up.

  3. Yurii June 6, 2016 at 11:29 am #

    You probably should pay more for the electricity, so your $15,000 price is not a constant. It is also a line, but its slope is smaller than that of the AWS GPU line.

    • Adrian Rosebrock June 7, 2016 at 3:20 pm #

      As I mentioned to Pim above, the utility cost will increase — but in terms of a business expense, that’s a round-off error at the end of the year.

  4. jack June 6, 2016 at 11:35 am #

    does this mean people will need this system to follow your future blogs?

    • Adrian Rosebrock June 7, 2016 at 3:19 pm #

      No, certainly not. I’ll be detailing how to set up an Amazon EC2 system for deep learning soon. You’ll also be able to run the simpler examples on your CPU. The main reason I’m using the DevBox is for the speed, the ability to work in parallel, and to facilitate tutorial creation.

  5. Thuan June 6, 2016 at 12:33 pm #

    I am looking forward to see more your posts about deep learning. Thanks.

  6. Nikolay June 6, 2016 at 1:01 pm #

    It`s very cool news.

  7. Victor June 6, 2016 at 2:02 pm #

    Nice!!! Cool “toy” 🙂 Looking forward to deep learning articles

  8. Pim June 6, 2016 at 2:07 pm #

    Nice box!
    Some thoughts: Figure 1 does not take into account the increase in computing power that will happen in the next 200 days. With the release of the NVIDIA 1080, it is reasonable to assume the cost of a g2.8xlarge will come down within the next 200 days. Similarly, building an NVIDIA 1080 based system now will be more cost effective than an NVIDIA Titan X based system.
    Finally, running this box 200 days straight will also have a significant effect on your power bill…
    and then there’s the noise, generated heat, etc.
    Anyway, this seems to get really complex if you want to take everything into account.
    Enjoy your box and keep us posted!

    • Adrian Rosebrock June 7, 2016 at 3:18 pm #

      A couple of notes:

      1. The 1080 has only 8GB of memory versus the Titan X. Going from 4GB on the K520 is nice, but it doesn’t compare to the 12GB of the Titan X.

      2. In my opinion, the 1080 is more geared towards gamers. It will likely be another year or two before NVIDIA releases another GPU that is heavily targeted at deep learning researchers.

      3. Yes, running this machine will increase my electrical bill; there is no doubt about that. However, this increase is less of an issue (in my case) because it’s simply a business expense. The money has to come from somewhere, sure, but it’s not that big of a deal in the long run.

      4. I presume that the g2.8xlarge price will decrease in the future. But by how much? And when? These are total unknowns.

  9. David Hoffman June 6, 2016 at 4:58 pm #

    Congrats on the big investment. I look forward to the deep learning posts!

    Also–I think you made a slight typo and meant Three x 3 TB instead of GB on your specs!

    • Adrian Rosebrock June 7, 2016 at 3:15 pm #

      Thanks for the tip David! I’ve made sure to resolve this issue.

  10. Anesis June 6, 2016 at 6:43 pm #

    I would be so happy if you would choose Tensorflow as your main toolkit to make your deep learning tutorials =)

    • Adrian Rosebrock June 7, 2016 at 3:14 pm #

      I’ll likely use TensorFlow in the future, but to start, most tutorials will be using Keras and mxnet.

  11. Jay Chen June 6, 2016 at 7:53 pm #

    Thank you for your sharing!

  12. Reza June 6, 2016 at 9:01 pm #

    Thank you for the details! In fact, you are a great teacher and writer.

  13. Rick Lee June 6, 2016 at 9:10 pm #

    Hi Adrian,

    Nice post and I enjoyed it very much. I particularly like the way you explain why you come to certain decisions.

    Just one question. In your future deep learning examples, would you expect readers to have more powerful machines? Currently, I’m trying out many of your examples in a VM in a typical Mac or sometimes, in a Raspberry Pi.

    BTW, I recently read a post about how Dan Goldin expand his scientific knowledge in a new area at 75 and build an AI startup. That amazed me very much. Anyway, look forward to learn more about deep learning.

    Rick

    • Adrian Rosebrock June 7, 2016 at 3:14 pm #

      Hey Rick — most of my examples in future blog posts will assume that you have a GPU or an Amazon EC2 GPU instance. Some examples will be able to run on the CPU as well (although only the most basic ones). In general, I wouldn’t recommend using the Pi for the future tutorials.

  14. Duncan June 6, 2016 at 9:47 pm #

    Nice to see some Deep Learning articles coming up.

    Would be nice to see some timing tests on your DevBox against Amazon (same code and data), and you may want to track power usage to offset the electricity cost.

    Cheers,
    Duncan

    • Adrian Rosebrock June 7, 2016 at 3:12 pm #

      Thanks for the suggestion Duncan. And I’ll certainly try to provide timing tests when I can.

  15. Atena Nguyen June 7, 2016 at 2:54 am #

    Great, waiting for your 1st Deep Learning tutorial

  16. Greg Shabanov June 7, 2016 at 9:11 am #

    WOW! Great job !

    • Adrian Rosebrock June 7, 2016 at 2:44 pm #

      Thanks Greg! 🙂

  17. bao li June 7, 2016 at 11:51 am #

    Really great, waiting for your deep learning tutorials!

    • Adrian Rosebrock June 7, 2016 at 2:45 pm #

      There will be plenty of them!

  18. Abu June 7, 2016 at 1:14 pm #

    Looking forward to the deep learning tutorials on this blog! Hopefully they don’t require a DevBox. 🙂

    • Adrian Rosebrock June 7, 2016 at 2:45 pm #

      Most of them won’t. I’m trying to make it a goal that all tutorials will be able to run on EC2 instances. Some of the simpler tutorials will also (ideally) run on the CPU.

  19. Stoney Vintson June 7, 2016 at 1:35 pm #

    Some additional excellent articles on hardware for deep learning are at Tim Dettmers’ blog.
    http://timdettmers.com/2015/03/09/deep-learning-hardware-guide/
    http://timdettmers.com/2014/08/14/which-gpu-for-deep-learning/

    Let Adrian be your fast track guide into the jungle of deep learning. Then supplement his course with additional notes.
    Andrej Karpathy & Fei Fei Li
    http://cs231n.stanford.edu/syllabus.html
    Richard Socher
    http://cs224d.stanford.edu/syllabus.html

    Vincent Vanhoucke, Google
    Deep Learning w/ Tensorflow on Udacity
    http://blog.udacity.com/2016/01/putting-deep-learning-to-work.html
    https://www.udacity.com/course/deep-learning–ud730

    • Adrian Rosebrock June 7, 2016 at 2:46 pm #

      Thanks for sharing! 🙂 I’m a big fan of the CS231n course — I’ve been through it myself and really enjoyed it. They did a fantastic job.

  20. Harvey June 9, 2016 at 3:53 pm #

    OK, I’m green with envy.

    • Robin Kinge June 11, 2016 at 4:49 pm #

      Ditto..

      • Adrian Rosebrock June 12, 2016 at 9:33 am #

        Don’t worry! I’ll be sharing the knowledge I learn from using this system.

  21. saimadhu June 10, 2016 at 6:30 am #

    Thanks alot for the post.

  22. sakhamo June 15, 2016 at 1:31 am #

    I see how Crazy you are to dive deep into deep learning….
    Would love to see you work on one complex real world challenge.
    “I want you to design and train your model to detect and identify “Empty” and “Occupied” Parking spots

    • Adrian Rosebrock June 15, 2016 at 12:30 pm #

      Thanks for the suggestion Sakhamo!

  23. Mike June 17, 2016 at 1:09 pm #

    Nice write up and a very nice box indeed. I’m still very much a beginner when it comes to Deep Learning, hopefully I can come up with a similar justification and logic for my own box in the future. Looking forward to your tutorials.

    In the meantime I’ll be sticking with AWS, as I have found the spot pricing on the g2 instances ~ 1/3 cost of the on-demand pricing to fit within my budget for now, and still being way faster than my laptop. Hopefully, AWS will introduce some updated GPU instance soon with newer GPUs. I’ve also found some of the marketplace AMI such as this Tensorflow one pretty useful (if there are other beginners here) since I’m not exactly a Linux installation wizard: https://aws.amazon.com/marketplace/pp/B01EYKBEQ0/

    • Adrian Rosebrock June 18, 2016 at 8:15 am #

      The spot instances are a huge cost saver, but unfortunately they don’t work for me in my particular cases since I need absolute, uninterrupted run-time. But for the price, spot instances are great.

  24. mtsm June 19, 2016 at 8:50 am #

    Hi, have you been able to get TF to run on g2.8xlarge and actually having it recognize GPUs and parallelize across GPUs?

    • Adrian Rosebrock June 20, 2016 at 5:31 pm #

      I have not tried TensorFlow on the g2.8xlarge instance yet. For multi-GPU environments, I prefer to use mxnet.

  25. Daniel July 21, 2016 at 2:28 am #

    Just thought I would leave my two cents here. Great post and great work. I have followed your work for some time now. Keep it up.

    In regards to your hardware, we build these systems in-house for the work we do with machine learning, computer vision and drones.

    I have the parts up on PC Part Picker and here are the links to a few of these boxes we use.

    At a fraction of the cost obviously..

    A lightweight simple workstation for basic stuff. The card that is on there now is not what we have in it – this is our base configuration. But we have the same set up with a 980ti Hybrid from EVGA and works great!

    http://pcpartpicker.com/user/animusoft/saved/Dbf6Mp

    For a beefier set up we use this one

    http://pcpartpicker.com/user/animusoft/saved/HBptt6

    Works well all running Ubuntu 14.04

    One thing we will add a note on here is PC Part Picker itself. It’s a great website and tool for doing your research. Moreover, they already filter out what is compatible and what is not, so that makes it a whole lot easier. And it links straight to Amazon.

    Each time we build a system it takes about 30-60 minutes top from start to finish.

    • Adrian Rosebrock July 21, 2016 at 12:41 pm #

      This is a great resource, thanks for sharing Daniel. In my particular situation, I still wouldn’t have gone this route, mainly because the time/cost tradeoff for me to setup the hardware (when I don’t like working with hardware) would not have been worth it for me. But for people who are looking to save money or enjoy the hardware aspect, this is a fantastic resource.

  26. ASP August 26, 2016 at 11:52 am #

    Hi Adrian!
    Great Post! Thanks for sharing. We have procured a dev-box recently (not directly from NVIDIA though). There are 3-4 of us currently sharing this machine. Do you happen to know how we would go about setting up DIGITS for each user? Currently, everyone’s models are getting dumped into the same folder. Unfortunately, the box didn’t come with good pre-configured permissions set up for multiple users.

    • Adrian Rosebrock August 29, 2016 at 2:07 pm #

      I’m honestly not sure about a per-user install of DIGITS. I’m sure it’s possible via some PATH manipulation, but I would suggest posting on the official DIGITS GitHub Issues regarding that particular question.

  27. Sean August 31, 2016 at 9:11 pm #

    Have you had any trouble with cooling?

    My GPUs run at 84 degrees C, which is technically within limits, but still alarming. The fans never go higher than 50% duty cycle. I’m considering liquid cooling.

    • Adrian Rosebrock September 1, 2016 at 11:00 am #

      I personally don’t have any issues with cooling, but it is something that I do keep a close eye on.

  28. James September 24, 2017 at 12:38 pm #

    Hi Adrian,

    Great useful read! I was wondering if you knew about companies that could build the dev box for you at a reasonable price? I know NVIDIA used to set them up but I can’t find others.

    Thanks

    James

    • Adrian Rosebrock September 26, 2017 at 8:40 am #

      Take a look at the Lambda DevBox which is (essentially) the same thing.

      • Egor Panfilov October 16, 2017 at 4:31 am #

        Hello, Adrian! Thanks a lot for the link! Lambda DevBoxes look very interesting (Pascal GPUs: 1080TI, Titan Xp; Ubuntu 16.04; 10GBE).

Trackbacks/Pingbacks

  1. Considerations when setting up deep learning hardware - PyImageSearch - June 13, 2016

    […] last week’s blog post, I discussed my investment in an NVIDIA DIGITS DevBox for deep […]

  2. I'm writing a book on Deep Learning and Convolutional Neural Networks (and I need your advice). - PyImageSearch - December 12, 2016

    […] already have my NVIDIA DIGITS DevBox which has been running around the clock performing experiments and gathering results for nearly 6 […]
