A little bit about me

Tuesday, August 13, 2019

Further Pathtracing and Lessons Learned

Since the last article, I've made some substantial changes: screenspace settings in the build, multiple rays per pixel, a randomly generated scene at runtime, movement controls, fast movement compensation with adaptive temporal accumulation for VR, and finally a light Gaussian blur to reduce noise.

Dynamically changing the temporal accumulation, together with the improved reprojection warping, lets the user get a clean image that remains temporally stable and responsive during fast movement.
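The adaptive accumulation described above can be sketched as a per-pixel exponential moving average whose blend factor depends on how fast the head is moving. This is a hypothetical scalar illustration, not the project's shader code; the function names and thresholds are assumptions.

```python
# Sketch of adaptive temporal accumulation (hypothetical, scalar per-pixel form).
# The blend factor alpha controls how much of the new noisy sample is mixed in:
# slow head movement -> small alpha (long history, clean image),
# fast head movement -> large alpha (short history, responsive image).

def blend_factor(head_speed, alpha_min=0.05, alpha_max=0.8, speed_max=2.0):
    """Map head movement speed (e.g. radians/sec) to a blend factor."""
    t = min(head_speed / speed_max, 1.0)
    return alpha_min + t * (alpha_max - alpha_min)

def accumulate(history, new_sample, head_speed):
    """Exponential moving average of the pathtraced result for one pixel."""
    a = blend_factor(head_speed)
    return (1.0 - a) * history + a * new_sample
```

In a shader this runs per pixel per frame; the stationary case effectively averages dozens of frames of 1-ray-per-pixel samples, which is where most of the noise reduction comes from.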

It's clear that simply increasing the rays per pixel has little effect on image quality while absolutely demolishing performance; the resulting image remains noisy and generally unsatisfactory.
Temporal accumulation does a substantially better job without impacting performance, introducing only a somewhat obnoxious temporal artifact that higher framerates and movement compensation mitigate. On its own, however, it still leaves fine-grained noise: pixel-sized bright specks and black spots.
Finally, a light Gaussian blur removes those specks and spots and yields a fairly clean image, better than nearly anything short of intelligent hardware-accelerated denoising.
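A light Gaussian blur like the one above is typically done as a separable pass: a small 1D kernel applied horizontally, then vertically. The sketch below is a hypothetical CPU illustration of one such pass (the actual project uses a shader); radius and sigma values are assumptions.

```python
import math

def gaussian_kernel(radius=2, sigma=1.0):
    """Normalized 1D Gaussian kernel; applied horizontally then vertically
    (separable), which is far cheaper than a full 2D convolution."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """Convolve one row of pixel values, clamping at the image edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)  # clamp-to-edge
            acc += w * row[idx]
        out.append(acc)
    return out
```

The key property for denoising is that an isolated bright speck gets spread and attenuated, while smooth regions are barely changed.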

As a side experiment I tried building to UWP and using Holographic Remoting to run the project on the HoloLens. Performance was great even at 4x resolution thanks to the low 720p resolution of the HoloLens displays, but Wi-Fi speed and reliability, along with compression artifacts, were problematic.

Simple high raycount per pixel
Lower rays/pixel and no Gaussian blur but high temporal accumulation

With light Gaussian blur, lower rays/pixel, and heavy temporal accumulation

Things that worked:
  • Separate rendering for each eye for XR
  • Moving Unity GameObjects and meshes each frame
  • Gaussian blur to reduce noise
  • Dynamically changing RenderTexture resolution
  • Distorting temporal accumulation based on 3DOF head rotation
  • Changing temporal accumulation rate based on head movement rate
  • HoloLens support with Holographic Remoting
Things that didn't:
  • Foveated rendering with Graphics.CopyTexture()
  • OpenGL ES 3.1 compute shaders for Android
  • MultiGPU parallel compute shaders
Things that might be possible but aren't easy:
  • RT core acceleration
  • MultiGPU support with RT cores and/or shaders
  • AI tensor core or CPU intelligent realtime denoising
  • Dynamic rate of rays per pixel (based on pixel coordinates)
  • Foveated rendering with that rays/pixel value
  • Complex meshes without crippling bandwidth and computation bottlenecks

Friday, June 21, 2019

How to Pathtrace in VR

Before I start, I want to thank David Kuri of the VW Virtual Engineering Lab in Germany for his example project and tutorial that I've based my modifications off of.

The following is his tutorial:

This link is to the BitBucket Git repo:

My project can be found at https://github.com/Will-VW/Pathtracing

To start, this is a project I worked on as an experiment while part of the Volkswagen Virtual Engineering Lab in the Bay Area.

While searching for interesting raytracing implementations to challenge our 2080 Ti, we came up fairly short: a couple of lackluster, simple DirectX projects, poor Unity and Unreal support, and marginally impressive on-rails demos. That is, until we found our colleague's project using simple compute shaders in regular Unity.

Obviously we forgo RTX and DXR acceleration, and for the moment we have no denoising or proper mesh support, but we gain near-unlimited flexibility and control.

As far as we are aware (as of June 2019), there are no raytracing or pathtracing applications, demos, games, roadmaps, plans, or anything else with any sort of VR support or connection. We decided to change that.

I took David's existing project that worked in "simple" single camera pancake views and modified it until I could get it to work properly with SteamVR.

Temporal accumulation was the biggest issue: because the left and right eyes render from different perspectives and the VR camera moves constantly, it was effectively impossible to keep more than a single frame of accumulation in the headset. I first used separate instances to split the left- and right-eye textures so accumulation would be kept per eye. I then used a shader that accounts for head rotation and warps the accumulated frames to line up with the newest perspective.

Both efforts were effective, the former far more so than the latter, which still suffered artifacts from distortion at the high fields of view VR headsets use.
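The rotational reprojection idea can be sketched in one dimension: given the yaw change since the last frame, find where the current pixel's view ray landed in the previous frame's image so the history can be fetched there. This is a hypothetical yaw-only illustration (the real warp handles full 3DOF rotation in a shader); the 100° FOV default is an assumption.

```python
import math

def pixel_to_angle(u, fov):
    """u in [0,1] across the image -> view angle relative to the camera axis."""
    half = fov / 2.0
    return math.atan((2.0 * u - 1.0) * math.tan(half))

def angle_to_pixel(theta, fov):
    """Inverse mapping: view angle -> horizontal image coordinate."""
    half = fov / 2.0
    return (math.tan(theta) / math.tan(half) + 1.0) / 2.0

def reproject(u, yaw_delta, fov=math.radians(100)):
    """Previous-frame horizontal coordinate for current pixel u after the
    head yawed by yaw_delta radians. May fall outside [0,1] (disocclusion)."""
    return angle_to_pixel(pixel_to_angle(u, fov) + yaw_delta, fov)
```

The tan() nonlinearity is also why this gets harder at the high FOVs of VR headsets: a fixed angular head rotation moves edge pixels much further than center pixels, which is exactly where the residual artifacts showed up.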

Next I targeted a dynamic resolution so that the number of rays projected each frame could be controlled to suit our needs. I changed the RenderTexture resolution appropriately and allowed the program to render based off of the VR headset resolution rather than the Unity mirrored view resolution.
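The motivation for tying the RenderTexture size to the headset is that, at 1 ray per pixel, the ray budget per frame is just the scaled pixel count times the eye count. A minimal sketch (the function name and the Vive Pro's 1440x1600 per-eye panel resolution used in the example are mine, not from the project):

```python
def rays_per_frame(width, height, scale, rays_per_pixel=1, eyes=2):
    """Total primary rays cast each frame after resolution scaling.
    width/height are the per-eye panel dimensions; scale is the
    dynamic-resolution factor applied to each axis."""
    w, h = int(width * scale), int(height * scale)
    return w * h * rays_per_pixel * eyes

# Vive Pro panels are 1440x1600 per eye:
full = rays_per_frame(1440, 1600, 1.0)   # 4,608,000 rays/frame at native res
half = rays_per_frame(1440, 1600, 0.5)   # 1,152,000 rays/frame at half res
```

Because the cost scales with the square of the resolution factor, even modest downscaling frees a large chunk of the frame budget.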

Later, I attempted foveated rendering to improve the still-lackluster performance. I learned how expensive the Graphics.Blit() and Graphics.CopyTexture() methods are in bandwidth and found that while foveated rendering was possible, it hurt performance more than it helped.

I then added in support for moving around mesh objects that were imported at runtime based on the GameObject Transform. This allowed me to put in a moving and rotating block that moved around the scene. I also learned of the inefficiencies of the existing mesh pathtracing technique and began to explore more implicit surface calculations.
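Moving a mesh each frame amounts to transforming its local-space vertices by the GameObject's Transform before rewriting them into the buffer the pathtracer reads. The sketch below is a hypothetical yaw-only CPU illustration of that per-frame update, not the project's actual C# code.

```python
import math

def transform_vertices(vertices, position, yaw):
    """Apply a rotation about the Y axis then a translation, like a
    simplified Unity Transform, to a list of (x, y, z) vertices."""
    c, s = math.cos(yaw), math.sin(yaw)
    out = []
    for x, y, z in vertices:
        rx = c * x + s * z          # rotate about the Y (up) axis
        rz = -s * x + c * z
        out.append((rx + position[0], y + position[1], rz + position[2]))
    return out
```

Re-uploading every vertex of every moving mesh each frame is exactly the kind of bandwidth cost that made the existing mesh technique inefficient and pushed me toward implicit surfaces, which need only a few parameters per object.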

I looked at SLI support to parallelize the pathtracing but found that compute shaders do not support multi-GPU and CUDA/OpenCL options were unviable for our purposes.

Thus we managed to get a fully pathtraced program running at a full 90 FPS on a Vive Pro at native resolution and 1 ray per pixel, with perfectly accurate reflections, lighting, ambient occlusion, and shadows; albeit with a lot of noise, no RTX acceleration, limited mesh usage due to performance, no standard materials or shaders, and no effective foveated rendering.

Tuesday, June 11, 2019

On The Current State Of Raytracing

I imagine most people at all familiar with rendering and graphics are aware of Nvidia's recent 'RTX' technologies and Microsoft's DXR: Nvidia provides hardware acceleration in the form of 'RT cores', and Microsoft defines the DirectX standards to use them.

We've seen pretty demos like the Star Wars cinematic elevator video, first rendered on DGX supercomputers and then revealed to run, albeit at a far lower level of fidelity, on consumer graphics cards in realtime. We've seen a couple of games like Battlefield V, Shadow of the Tomb Raider, and Metro Exodus support raytraced reflections, shadows, and global illumination respectively.

Nvidia even recently released a mostly pathtraced version of Quake II based on Q2VKPT, an open-source, fully pathtraced implementation.

We've also seen mention of impressive unaccelerated raytracing in CryEngine on an RX Vega 56, as well as mention of raytracing on the PS5 and acceleration on Xbox Scarlett.

All these demos are quite impressive and we've even seen both Unity and Unreal engine add support for DXR in publicly available demos and betas so developers can begin to push these technologies in their applications.
It almost seems like rendering technology is suddenly accelerating at light speed. That is until you begin to look at the current situation more holistically.

The current implementations offer many substantial drawbacks that cumulatively and often individually destroy the viability of the technology.

First, performance is a huge issue. The reflections-only implementation in Battlefield V demolished framerates by 58 to 64% on the RTX 2080 and 2080 Ti, the $1250 2080 Ti pushing a measly "high 60s" framerate at 1080p. DXR is also limited to DX12, which, particularly in the Frostbite engine BFV runs on, causes serious stuttering and frametime issues. A later patch improved performance after many months, but that 2080 Ti still doesn't manage 90FPS on high at 1080p. We're still left with a ~40% performance penalty for just those raytraced reflections.

Remedy Entertainment themselves remarked on the enormous penalties of RTX implementations in their custom Northlight engine. Regarding the frametime penalties they say, "This is a total of 9.2 ms per frame and thus an almost one-third higher computational overhead if we take 30 frames per second as the basis". Notably, this is still at 1080p with the aforementioned $1250 2080 Ti. While gamers are used to 144Hz refresh rates with frametimes well below 7ms, it's hard to stomach raytraced effects that would more than double frametimes for often insubstantial graphical improvements. Current rasterized fakery has gotten so sophisticated and convincing that raytracing just isn't as striking as it otherwise would be.
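The arithmetic behind Remedy's "almost one-third" figure is worth spelling out, because it shows how hopeless the high-refresh case is:

```python
def frametime_ms(fps):
    """Frame budget in milliseconds for a given framerate."""
    return 1000.0 / fps

base = frametime_ms(30)       # ~33.3 ms budget at 30 FPS
overhead = 9.2 / base         # ~0.28, i.e. "almost one-third" extra work

# At 144 Hz the entire frame budget (~6.9 ms) is smaller than the
# 9.2 ms raytracing overhead alone.
budget_144 = frametime_ms(144)
```

In other words, even before any rasterization work, Remedy's measured raytracing cost already exceeds the whole frame budget of a 144Hz display.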

Current RTX implementations are effectively asking gamers, instead of pushing 4k144 or 1440p240 as many are trying to do, or even holding steady at 4k60 or 1440p144, or regressing somewhat to 1440p60 or 1080p120, to regress all the way to 1080p60 in the best implementations, on the best consumer card currently available, priced more than $500 higher than the last-gen 1080 Ti at launch.

We're going to need to see more than a radical improvement in performance for this to be viable in current-generation games. We're left hoping that Nvidia's launch of RTX was truly so incompetent that such gains were left on the table more than 8 months after launch. Seeing that even Windows didn't support DXR or RTX features at launch, and that it took months for the first RTX-supporting game to come out, that's not such a stretch.

Second, compatibility is horrible. Only three published games support RTX raytracing as of June 2019; arguably Quake II could be counted if you include a mod. Those raytracing tools in Unity and Unreal I mentioned earlier are still in very early beta these 8 months after the RTX launch, with little sign of rapid improvement. Bugs are ubiquitous, and backwards compatibility with 1000-series GTX cards, while possible, leaves performance an order of magnitude worse than middling RTX performance. Implementation into existing projects is all but impossible at this point without limiting yourself to raytracing-specific alpha builds of the game engine editors.

Notably, while VR is quickly becoming extremely popular with PSVR and PCVR reaching over 4 million users each and Steam getting 1 million monthly active VR users as well as Facebook launching the $400 Oculus Quest, there is absolutely no RTX support whatsoever. There isn't so much as a hint that maybe, just maybe, one of the dozen multi-billion dollar companies currently pouring hundreds of millions into VR are even considering exploring raytracing support in VR.
Obviously performance is an even bigger consideration for VR, if compatibility were even on the table.

Related to compatibility, most raytracing tools are locked down from developers. While some major studios are working closely with Nvidia and Microsoft to integrate DXR and RTX into their custom game engines, and indie developers will presumably get proper tools in Unreal and Unity eventually, right now there are very few options to actually modify a raytracing implementation yourself. Current DirectX raytracing implementations often, and seemingly universally, forgo hardware acceleration, so those RT cores become moot.

Third, as alluded to before, rasterized fakery has gotten so good that it's really difficult to justify the aforementioned issues. While realtime denoising has gotten pretty good, there are still many noticeable artifacts in all available demos, and the fidelity these implementations reach falls far short of the established baselines in rasterized games. Raytracing will almost invariably provide more physically accurate lighting and effects, but that doesn't mean it will look better than rasterized games. Remedy themselves faked incredibly good-looking lighting in their last game, Quantum Break. Near-photorealism is already achieved using rasterized techniques that both perform far better and offer far better compatibility than any raytracing implementation currently available.

All of this leaves such a sour taste in my mouth regarding realtime raytracing that it's hard to imagine it will have any real place in gaming outside the very narrow niche of enthusiast gamers with far too much money to spend who also don't care about pushing super high resolutions, VR, or high framerates. I really do hope I'm wrong.