Real-time rendering and the CG industry

GPU technology has been developing rapidly, particularly in its push to match the realism of offline renders on interactive platforms such as games and VR. Oats Studios’ ADAM is a visually stunning short film created with the real-time rendering capabilities of Unity, a game development platform. Epic Games’ Unreal Engine 4 was used to render K-2SO in Star Wars: Rogue One, and the list of examples keeps growing. With architectural and automotive visualization jumping on the bandwagon, it’s beginning to look like offline rendering will be left in the dust sooner rather than later. But what will this mean for the business ecosystem of 3d rendering?

Image: a real-time rendered scene (image credit: Oats Studios)

Today’s production pipeline is still reliant on traditional rendering, and will continue to be until real-time image synthesis catches up to offline ray tracing engines, which are themselves continuously developed to render faster and faster. The parallel advancement of these two solutions can seem directed at the same goal, as if only one will ultimately remain accepted and widely used. It’s easy to think that if real-time rendering wins the race at some point, the landscape of commercial CG will change immensely, and service providers in that sector may have to reexamine their business models to accommodate it. Render farm and GPU rental services, for example, have helped studios survive in a fast-paced industry because of their scalability and affordability; the high-quality 3d we see in commercials and TV these days exists because render farms let productions manage render times while continuing to work on turnarounds and new content. If frames that take hours to render with today’s engines could be reduced to minutes with real-time rendering, there would be little reason to render on farms or rented GPU servers at all. External render engines as we know them might then become obsolete, since rendering would amount, more or less, to saving snapshots of what’s already drawn in a 3d application’s viewport.
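To make the scale of that difference concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it — frames, minutes per frame, node count, capture frame rate — is an illustrative assumption, not a benchmark from any real farm or engine.

```python
# Back-of-the-envelope comparison: farm rendering vs. real-time capture.
# All numbers are illustrative assumptions, not measured benchmarks.

def farm_hours(frames: int, minutes_per_frame: float, nodes: int) -> float:
    """Wall-clock hours to render a sequence spread across farm nodes,
    assuming frames parallelize cleanly with no queueing overhead."""
    return frames * minutes_per_frame / nodes / 60

def realtime_hours(frames: int, fps: float) -> float:
    """Wall-clock hours if every frame is captured at interactive rates."""
    return frames / fps / 3600

# A hypothetical 30-second shot at 24 fps = 720 frames.
frames = 30 * 24
print(farm_hours(frames, minutes_per_frame=20, nodes=50))  # 4.8 hours on the farm
print(realtime_hours(frames, fps=30))                      # under a minute in real time
```

Even with generous farm parallelism, the assumed real-time path finishes the shot three orders of magnitude faster — which is exactly why the business question above is worth asking.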

Of course, all of this is conjecture, and the possibility has no doubt been mulled over by companies with a stake in the outcome. The fact that development of traditional renderers hasn’t slowed down suggests that either the above scenario isn’t going to happen for a while, or both solutions can contribute to rendering in different but equally important ways. Perhaps the evolutionary trajectory of real-time rendering, though reaching a level of usability in mediums that depend on offline rendering, stops short of where traditional rendering is going. We may instead see photorealism in offline renders reach new heights, at faster render speeds. V-Ray is already introducing hybrid rendering, which uses CPU cores and GPUs at the same time, and that could pave the way for its competitors and for third-party rendering services. On the other side of the 3d world, Blender 2.8 is in beta and features its own real-time rendering engine, EEVEE, as well as the ability to employ both GPU and CPU to speed up rendering. Real-time rendering may not overthrow offline rendering, but instead enhance productivity and previsualization, and serve as a sufficient rendering alternative in certain cases. Netflix’s Love, Death & Robots, produced by Blur Studio, is a testament to the continuous development of traditional ray tracing and the growing accessibility of 3d rendering for media beyond mainstream cinema. In the end, only time will tell, but at the very least, the evolution of real-time rendering is a sign of big changes in the world of CG.
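A loose illustration of why hybrid rendering is appealing: the toy model below treats the CPU and GPU as workers draining a shared queue of image tiles, so their throughputs roughly add up. The tile count and per-device rates are invented for the sketch and do not describe V-Ray, Cycles, or any particular engine.

```python
# Toy model of hybrid (CPU + GPU) rendering: devices pull tiles from a
# shared queue, so total throughput is roughly the sum of their rates.
# Tile counts and per-device rates are illustrative assumptions.

def hybrid_minutes(tiles: int, rates_tiles_per_min: list) -> float:
    """Idealized wall-clock minutes when all devices drain one tile queue."""
    return tiles / sum(rates_tiles_per_min)

tiles = 256                      # e.g. a frame split into 16x16 buckets
gpu_rate, cpu_rate = 10.0, 2.5   # assumed tiles/minute for each device
print(hybrid_minutes(tiles, [gpu_rate]))            # GPU alone: 25.6 min
print(hybrid_minutes(tiles, [gpu_rate, cpu_rate]))  # hybrid: 20.48 min
```

The CPU is far slower than the GPU in this model, yet letting it help still shaves about a fifth off the frame time — silicon that would otherwise sit idle does useful work.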

So now that we’ve talked about real-time and traditional rendering, let’s have a little look at the processors that serve as the main driving force for each.

GPU vs CPU for 3d rendering

GPU rendering has nested itself in the imaginations of many 3d artists as the light at the end of the long and dismal production tunnel. Touted across many avenues as the ultimate solution for render times and the future of 3d, GPU rendering has become quite the buzzword these past few years. But many CPU loyalists have offered credible arguments in favor of traditional rendering, and the fact that many studios still rely heavily on CPU-based render farms raises the question: what’s the catch?

Before looking at the caveats, let’s examine the distinguishing advantages of GPU rendering.

NVIDIA’s RTX GPU line (image credit: NVIDIA)


With only rendering in mind, a GPU can outperform a CPU by executing render instructions on many more cores, drastically reducing render times. On top of that, multiple GPUs can be used to render a scene. One workstation with several GPU cards is now an alternative to buying multiple CPU machines, which would require more physical space and upkeep, or to depending on render farms, which, although arguably less costly, don’t provide the security or control of an in-house setup.
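Multi-GPU scaling in a single workstation is close to linear but not free: there is usually some fixed per-frame cost (scene upload, final compositing) that does not shrink as cards are added. The timings below are made-up assumptions used only to show the shape of that curve.

```python
# Sketch of multi-GPU scaling for a single frame, assuming near-linear
# speedup plus a small fixed per-frame overhead (scene upload, compositing).
# All timings are made-up assumptions for illustration.

def multi_gpu_seconds(single_gpu_seconds: float, gpus: int,
                      fixed_overhead_seconds: float = 5.0) -> float:
    """Per-frame time when the parallel part splits evenly across cards."""
    return single_gpu_seconds / gpus + fixed_overhead_seconds

base = 240.0  # assumed: one GPU takes 4 minutes per frame
for n in (1, 2, 4):
    print(n, "GPU(s):", multi_gpu_seconds(base, n), "s")
# Gains flatten as the fixed overhead starts to dominate the total.
```

This is just Amdahl’s law in miniature: doubling cards never quite halves the frame time, but for render-heavy frames the workstation still rivals a small rack of CPU machines.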


Real-time rendering is making its way into popular 3d programs such as Cinema 4D and Blender, and other apps are sure to follow suit, but interactivity has long been inherent to GPU-based renderers like Redshift and Octane. Being able to interact with a realized scene in the viewport makes the 3d creation process far more streamlined than having to render previews all the time.

Efficiency is what GPU rendering is all about, and it’s this new level of effectiveness that’s getting the 3d community pumped. More skeptical users, however, insist that it’s all too good to be true, and for valid reasons. Here are some things about GPU rendering that need to be considered:


High latency in communication between GPUs and system memory means each allocated GPU must itself have enough memory to hold the scene, and when multiple GPUs are used, the card with the least memory determines whether the scene can be rendered at all. This limits what can be rendered on GPUs: even though there are several ways to optimize a scene, larger and more complex scenes are bound to need more memory than graphics cards can currently provide.
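That “weakest card” constraint can be sketched as a simple feasibility check. The VRAM sizes and the driver reserve below are assumptions for illustration, and the sketch deliberately ignores out-of-core techniques some engines offer to work around the limit.

```python
# The "weakest card" VRAM constraint: if the scene must fit entirely on
# every GPU, the card with the least usable memory decides whether the
# frame renders at all. Sizes below are illustrative assumptions.

def scene_fits(scene_gb: float, card_vram_gb: list,
               driver_reserve_gb: float = 1.0) -> bool:
    """True only if the scene fits on the GPU with the least usable VRAM."""
    usable = min(card_vram_gb) - driver_reserve_gb
    return scene_gb <= usable

cards = [11.0, 8.0, 24.0]        # hypothetical mixed rig; 8 GB is the ceiling
print(scene_fits(6.5, cards))    # True:  6.5 GB fits under 7 GB usable
print(scene_fits(10.0, cards))   # False: fails despite the 24 GB card
```

Note how the 24 GB card contributes nothing to capacity here — only to speed — which is why artists with mixed rigs often feel the pinch sooner than their total VRAM suggests.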


Because GPUs rely on drivers, the frequency with which these drivers are updated can pose stability issues with operating systems and some programs. Bugs need to be addressed with every update, which can hinder productivity in a GPU-dependent pipeline.

Fewer GPU render farms on the market

Access to a cloud-based rendering solution is still a good thing to have despite the speed gains GPU rendering can provide, yet the services that support GPU-based engines remain limited. There is, however, a growing number of GPU server rental solutions that can be used for this purpose.

The increasing viability of GPU rendering will definitely shape the landscape of the 3d rendering industry in the years to come, but CPU rendering stands on solid ground, and the future holds a place for both solutions. Hybrid rendering, which employs the GPU and CPU together, is an example of the harmonious relationship we can expect between the two approaches. Where CG technology will take us, only time can tell, but it’s safe to say that excruciatingly long render times will soon be a thing of the past.
