GPUs and 3D printing: Are their worlds set to collide?
By Paul Mignone
We at ResBaz had a nostalgic moment earlier this week. Michael Wang, a #ResBaz original and now NVIDIA’s solutions architect for Australia and New Zealand, returned to his old ‘stomping ground’ to host Melbourne’s premier GPU users’ group. It had been nearly a year since the last GPU meetup, and it recommenced with Mike giving a great summary of the highlights from this year’s GPU Technology Conference (GTC).

Welcome Back!
For those who weren’t able to make the meetup, I strongly recommend viewing the GTC keynote speech (below), as it will be 100 minutes well spent. For me personally, the highlight of both Mike’s talk and Jen-Hsun Huang’s keynote speech was the recent success of NVIDIA’s collaboration with Stanford University to create the world’s largest artificial neural network. This network was built to model how the human brain learns.
NVIDIA CEO Jen-Hsun Huang delivering the keynote speech (Video streaming by Ustream)
The original ‘Google Brain’ artificial intelligence system used to model human learning consisted of 1,000 CPU servers (i.e., 16,000 cores), cost USD 5,000,000 to build, and consumed over 600 kilowatts of power. The new Stanford AI lab used only 3 servers containing 12 NVIDIA GPUs (i.e., 18,432 CUDA cores), costing just USD 33,000 and consuming only 4 kilowatts. With cost and power savings of this magnitude, it’s no surprise that the top 10 Green500 supercomputers are powered by GPUs. Australia’s very own CSIRO also makes the top 10 on this list, with its GPU cluster crunching 2,358 MFLOPS per watt.
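A quick back-of-the-envelope check of the savings implied by those figures (the dollar and kilowatt numbers come from the comparison above; the ratios are simple arithmetic on them):

```python
# Sanity-check the CPU-vs-GPU savings quoted for the 'Google Brain' rebuild.
cpu_cost_usd = 5_000_000   # original 1,000-server CPU cluster
gpu_cost_usd = 33_000      # Stanford's 3-server, 12-GPU replacement
cpu_power_kw = 600
gpu_power_kw = 4

cost_ratio = cpu_cost_usd / gpu_cost_usd    # roughly 150x cheaper
power_ratio = cpu_power_kw / gpu_power_kw   # 150x less power

print(f"Cost reduction:  ~{cost_ratio:.0f}x")
print(f"Power reduction: ~{power_ratio:.0f}x")
```

Both ratios land around 150x, which is why GPU clusters dominate efficiency-focused rankings like the Green500.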

The stats say it all…(Image Source)
In recent times, the GPU computing and 3D printing movements have developed some striking similarities. First, both technologies have come down significantly in cost, allowing more consumers and researchers to access them. Just like 3D printing, access to GPUs can give researchers the ability to reduce both the cost and time needed to achieve their research objectives (e.g., publications). Second, the learning curve for using these technologies is becoming progressively shallower, with GPU libraries and compilers available for high-level programming languages such as Python and Matlab.
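To illustrate just how shallow that learning curve can be, here is a minimal sketch in Python. It assumes the CuPy library (one example of a GPU array library that mirrors NumPy's API; the post itself does not name a specific library), and falls back to plain NumPy when no GPU stack is available, so the same high-level code runs either way:

```python
# Minimal sketch: GPU array libraries such as CuPy mirror NumPy's API,
# so array code can move to the GPU with essentially a one-line change.
# CuPy is an assumed example library; the fallback keeps this runnable on CPU.
try:
    import cupy as xp   # GPU-backed arrays (drop-in NumPy-style API)
except ImportError:
    import numpy as xp  # CPU fallback with an identical interface

# A toy element-wise workload on a large array.
x = xp.linspace(0.0, 1.0, 1_000_000)
y = xp.sin(x) ** 2 + xp.cos(x) ** 2  # trig identity: ~1.0 everywhere

print(float(y.mean()))
```

The point is not the workload but the interface: no CUDA C, no kernel launches, just familiar array operations handed to whichever backend is present.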
A world first? #3dprinting from the #cloud https://t.co/ubkE3036sa #nvidia #gridforums #gpu
— Mike Wang (@mikepcw) April 28, 2014
The two technologies are also set for a collision course of epic delight for both businesses and consumers. In the post linked above, Mike describes an experiment using an NVIDIA GRID-accelerated virtual desktop session in which users can not only create 3D CAD models but also send those models directly to a 3D printer from the cloud, without having to save to a physical storage medium (e.g., a USB thumb drive). For those interested in hearing more about this technology, I strongly recommend joining the Melbourne GPU users’ group. It’s fast approaching the 100-member milestone and will be awarding prizes to celebrate this fantastic achievement.
