
Friday, March 24, 2023

Increasing Shutter Angles Beyond 360°

The world of cinematography has long held on to the axiom that 24 FPS with a 180° shutter is cinema, and that any deviation from it is sacrilege, citing everything from the 'soap opera' effect to props looking fake to inadequate motion blur.

The few challenges to that self-evident truth, like The Hobbit, are endlessly criticized. The half-baked attempts to mitigate the problems of the low framerate standard, like interpolation, are rebuked by big-name filmmakers who advocate for a 'filmmaker mode' so you get to see each and every one of those 24 frames and the illustrious chop and flicker between them.

The force of progress, however, is insuperable, and resistance to the benefits of higher framerates can only go so far. Filmmakers can shoot at a high framerate and downsample in post-production to create a result that is indistinguishable from a low framerate recording while still being able to modify various parameters after filming is complete. This has gone so far that industry tools allow not only simple framerate downsampling but also the artificial introduction of judder, or variance in frame times.

I will not fully explain shutter angle and exposure length in this piece, but their definitions are crucial to what follows: the shutter angle expresses exposure time as a fraction of the frame period, so exposure time = (shutter angle / 360°) ÷ frame rate.

Obviously, when exposure time is infinitely short, downsampling the framerate is as simple as discarding frames. Let's say, however, that you are filming at 144 FPS with a 180° shutter. That results in a 1/288th of a second exposure time. Let's also say you wish to downsample to 24 FPS with a 180° shutter. That is no longer simple, nor even possible, without causing unnatural artifacts. As the following image shows (teal indicating the time during which a frame is exposed), the information needed to reproduce the lower framerate video simply isn't present in the original recording.
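To make the arithmetic concrete, here is a minimal sketch in Python (the frame rates and shutter angles are just the examples above):

def exposure_time(fps, shutter_angle_deg):
    """Exposure time in seconds: the fraction of the frame period the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

src_exposure = exposure_time(144, 180)    # 1/288 s per source frame
src_period = 1 / 144                      # source frame period
target_exposure = exposure_time(24, 180)  # 1/48 s needed per output frame

# Each output frame needs 1/48 s of continuous exposure, but the 180° source only
# covers half of every 1/144 s period; the other half is simply missing.
covered = src_exposure * (target_exposure / src_period)
print(f"source exposure per frame: {src_exposure*1000:.2f} ms")
print(f"exposure needed per output frame: {target_exposure*1000:.2f} ms")
print(f"actually covered by the source: {covered*1000:.2f} ms (gaps account for the rest)")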

This problem, however, disappears if we simply use a 360° shutter for the higher framerate source footage. Three consecutive source frames then cover exactly the exposure of one 24 FPS, 180° shutter frame.

The video linked here shows a professional example of this process: https://vimeo.com/105838602#t=1m51s

The logic of the process is that by combining the images of multiple frames shot with a 360° shutter, you create an effective exposure several times longer than the one each frame was originally shot with. So a 1/144th second exposure from a 144 FPS video can be combined with its two neighbors to create a 1/48th second exposure that is theoretically indistinguishable from a 1/48th second exposure from a native 24 FPS, 180° shutter recording.
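Here is a rough sketch of that averaging on synthetic frames (the resolution and the moving-dot 'scene' are made up purely for illustration). Averaging N consecutive 360°-shutter frames from a 144 FPS source integrates N/144 seconds of light, which at a 24 FPS output corresponds to an effective shutter angle of N × 60°, so N = 3 gives 180° and N = 12 gives 720°, beyond the supposed limit:

import numpy as np

SRC_FPS, OUT_FPS = 144, 24

def effective_shutter_angle(n_frames):
    """Shutter angle of the averaged output frame, in degrees."""
    exposure = n_frames / SRC_FPS           # seconds of light integrated
    return exposure * OUT_FPS * 360.0

# Synthetic 360°-shutter source: 144 tiny grayscale frames of a dot moving one pixel per frame.
h, w = 32, 64
src = np.zeros((SRC_FPS, h, w), dtype=np.float32)
for i in range(SRC_FPS):
    src[i, h // 2, i % w] = 1.0

def downsample(frames, n_avg, step):
    """Average n_avg consecutive frames, stepping `step` source frames per output frame."""
    out = [frames[i:i + n_avg].mean(axis=0)
           for i in range(0, len(frames) - n_avg + 1, step)]
    return np.stack(out)

step = SRC_FPS // OUT_FPS                   # 6 source frames per 24 FPS output frame
for n in (3, 6, 12):
    out = downsample(src, n, step)
    blur_len = (out[0] > 0).sum()           # how many pixels the dot smears across
    print(f"N={n:2d}: effective shutter angle {effective_shutter_angle(n):5.0f}°, "
          f"motion blur length {blur_len} px")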

The natural corollary of that capability is that, with a simple rolling average, one can create a video with an effective shutter angle greater than 360°, directly contrary to traditional cinematographic understanding. Even camera manufacturer RED claims unequivocally that:

"The larger the angle, the slower the shutter speed, all the way up to the limit of 360°, where the shutter speed could become as slow as the frame rate."

They are far from alone in that misconception. The ability to create effective shutter angles greater than 360° lets filmmakers produce nearly arbitrarily high framerate video without reducing motion blur at all compared to something like 24 FPS with a 180° shutter.

Ultimately, the only honest arguments for using 24 FPS in modern filmmaking boil down to cutting costs, or clinging to an objectively reduced level of fidelity, not unlike SDTV, to keep the viewer from seeing the content clearly while pretending it's a creative choice.

 

Thursday, November 26, 2020

Explaining the Need for Varifocal Lenses in VR

Vision comes with a few components that give you a sense of depth. Parallax is the effect where moving a perspective over time shows near objects moving faster than far objects, giving an impression of depth. Stereoscopic vision is similar, except the two perspectives exist simultaneously in different places; near objects appear further apart between the two views than far objects, and your mind creates a sense of depth from that disparity. Finally there is focal distance. Focal distance can give you a sense of scale and distance from a single perspective: much like 'tilt-shift' makes things look small, manipulating focus can emphasize the relative and even the absolute distance and size of objects.

The Index and Rift have a fixed focal distance of about 2 meters, while the Vive's is about 0.75 m. That means that if you look at an object at exactly that distance, it will be perfectly in focus in addition to having correct stereo and parallax cues. When you look at an object at a different distance, the dissonance between the focal distance and the other depth cues causes discomfort and 'blurriness'.

Focus on your finger a few centimeters from your eye with a background a few meters away, then move your finger further and further away until you notice the background come into focus. You'll notice that the strength of the focal effect falls off roughly with the reciprocal of distance (optometrists measure this in diopters, 1/distance): the difference in focus between 2 meters and infinity is far less significant than the difference between 5 cm and 25 cm.
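A tiny sketch of that reciprocal relationship, using the distances mentioned above:

def diopters(distance_m):
    """Accommodation demand in diopters (1/meters); infinity is 0 D."""
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

pairs = [(2.0, float("inf")), (0.05, 0.25)]
for near, far in pairs:
    delta = abs(diopters(near) - diopters(far))
    print(f"{near} m vs {far} m -> {delta:.1f} D of accommodation change")
# 2 m vs infinity is only 0.5 D, while 5 cm vs 25 cm is 16 D,
# which is why the near range dominates the focal effect.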

Effectively, current headsets are well tuned to keep objects in the same virtual room as you in focus until you get too close. Objects in the far distance aren't easy to make out because of low resolution, but if resolution weren't the limit, you would see that they are also slightly blurry because they're out of focus. Near objects, like hands you bring to your face, go out of focus far more easily, and since they're large relative to the display, resolution isn't the issue; they will be stereoscopically accurate but completely out of focus. You can notice this yourself: close one eye and try to focus on a still object with fine detail (like a book with text) brought close to the headset. You should be able to focus on it clearly and even read it easily, but its scale will appear incorrect and very large.

Varifocal systems can either physically move the lenses, like a camera, to change the focal distance based on where your eyes are looking in the virtual world, or toggle between discrete lens elements to match the correct focal distance as closely as possible. The new issue that arises is that every object in the scene is now in focus all the time regardless of where you are focused. While better, this is still inaccurate. The fix can be digital, artificially blurring out-of-focus objects as accurately as possible, but there are also approaches that use multiple displays with different focal distances or that vary the focal distance across different parts of the screen.
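As a rough illustration of the digital-blur approach (my own toy example, not how any shipping headset implements it), the blur applied to an object can be driven by the diopter gap between the eye's focus distance and the object's distance; the layer depths, gain constant, and random textures below are arbitrary:

import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_sigma(object_dist_m, focus_dist_m, gain=2.0):
    """Blur strength proportional to the diopter gap between object and focus plane.
    `gain` is an arbitrary tuning constant for this illustration."""
    return gain * abs(1.0 / object_dist_m - 1.0 / focus_dist_m)

# Toy scene: three flat layers at different depths, rendered as random textures.
layers = {0.3: np.random.rand(64, 64), 1.0: np.random.rand(64, 64), 4.0: np.random.rand(64, 64)}
focus_dist = 1.0   # meters; in a real system this would come from eye tracking

blurred = {d: gaussian_filter(img, sigma=defocus_sigma(d, focus_dist))
           for d, img in layers.items()}
for d in blurred:
    print(f"layer at {d} m -> blur sigma {defocus_sigma(d, focus_dist):.2f}")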

Eye tracking is a prerequisite for this tech, so foveated rendering will appear in consumer headsets first, and we'll probably have to wait another generation after that for varifocal optics. I've used Oculus's Half Dome varifocal prototype, and while it's nice to be able to focus correctly, the digital blurring and eye tracking leave a lot to be desired. Honestly, I think improving refresh rate and FOV, along with adding effective foveated rendering, will provide far larger improvements than varifocal lenses.

Tuesday, August 13, 2019

Further Pathtracing and Lessons Learned


Since the last article, I've made some substantial changes. I've added screenspace settings to the build, added multiple rays per pixel, randomly generated the scene at runtime, added movement controls, added fast movement compensation with adaptive temporal accumulation for VR, and finally I added a light Gaussian blur effect to reduce noise.


Dynamically changing the temporal accumulation in addition to improving the reprojection warping allows the user to get a clean image while still remaining temporally stable and responsive to faster movement.

It's clear that simply increasing the rays per pixel has little effect on image quality while absolutely demolishing any semblance of performance; the resulting image remains noisy and generally unsatisfactory.
Temporal accumulation does a substantially better job without impacting performance, introducing only a somewhat obnoxious temporal artifact that is mitigated by higher framerates and movement compensation. Temporal accumulation alone, however, still leaves fine-grained noise with pixel-sized bright specks and black spots.
Finally, the light Gaussian blur helps remove those specks and spots and results in a fairly clean image, better than nearly anything short of intelligent hardware-accelerated denoising.
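The following is a minimal sketch of that pipeline in plain NumPy/SciPy, not the project's actual Unity compute shader code; the blend factors, the head-speed threshold, and the toy noise model are all placeholders:

import numpy as np
from scipy.ndimage import gaussian_filter

def accumulate(history, new_frame, head_speed, base_alpha=0.1, fast_alpha=0.6):
    """Blend the new noisy frame into the history buffer.
    Faster head movement -> larger alpha -> less accumulation, less ghosting."""
    alpha = fast_alpha if head_speed > 0.5 else base_alpha   # 0.5 rad/s threshold is arbitrary
    return (1.0 - alpha) * history + alpha * new_frame

def denoise(frame, sigma=0.8):
    """Light Gaussian blur to knock out single-pixel bright specks and black spots."""
    return gaussian_filter(frame, sigma=sigma)

# Toy loop: a constant ground-truth image corrupted by per-frame Monte Carlo noise.
truth = np.full((64, 64), 0.5, dtype=np.float32)
history = np.zeros_like(truth)
for frame_idx in range(120):
    noisy = truth + np.random.normal(0.0, 0.3, truth.shape).astype(np.float32)
    history = accumulate(history, noisy, head_speed=0.1)
print("residual error after accumulation + blur:",
      float(np.abs(denoise(history) - truth).mean()))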

As a side experiment, I tried building to UWP and using Holographic Remoting to run the project on the HoloLens. Performance was great even at 4x resolution thanks to the low 720p resolution of the HoloLens displays, but Wi-Fi speed and reliability, as well as compression artifacts, were problematic.

Simple high raycount per pixel
Lower rays/pixel and no Gaussian blur but high temporal accumulation

With light Gaussian blur, lower rays/pixel, and heavy temporal accumulation


Things that worked:
  • Separate rendering for each eye for XR
  • Moving Unity GameObjects and meshes each frame
  • Gaussian blur to reduce noise
  • Dynamically changing RenderTexture resolution
  • Distorting temporal accumulation based on 3DOF head rotation
  • Changing temporal accumulation rate based on head movement rate
  • Hololens support with Holographic remoting
Things that didn't:
  • Foveated rendering with Graphics.CopyTexture()
  • OpenGL ES 3.1 compute shaders for Android
  • MultiGPU parallel compute shaders
Things that might be possible but aren't easy:
  • RT core acceleration
  • MultiGPU support with RT cores and/or shaders
  • AI tensor core or CPU intelligent realtime denoising
  • Dynamic rate of rays per pixel (based on pixel coordinates)
  • Foveated rendering with that rays/pixel value
  • Complex meshes without crippling bandwidth and computation bottlenecks

Friday, June 21, 2019


How to Pathtrace in VR


Before I start, I want to thank David Kuri of the VW Virtual Engineering Lab in Germany for his example project and tutorial that I've based my modifications off of.

The following is his tutorial:
http://blog.three-eyed-games.com/2018/05/03/gpu-ray-tracing-in-unity-part-1/

This link is to the BitBucket Git repo:
https://bitbucket.org/Daerst/gpu-ray-tracing-in-unity/src/Tutorial_Pt3/

My project can be found at  https://github.com/Will-VW/Pathtracing



To start, this is a project I worked on as an experiment while part of the Volkswagen Virtual Engineering Lab in the Bay Area.

While searching for interesting raytracing implementations to challenge our RTX 2080 Ti, we were coming up fairly short: a couple of lackluster, simple DirectX projects, poor Unity and Unreal support, and marginally impressive on-rails demos. That is, until we found our colleague's project using simple compute shaders in regular Unity.

Obviously we forgo RTX and DXR acceleration, and for the moment we don't have denoising or proper mesh support, but we gain nearly unlimited flexibility and control.

As far as we are aware (as of June 2019), there are no raytracing or pathtracing applications, demos, games, roadmaps, plans, or anything else with any sort of VR support or connection. We decided to change that.

I took David's existing project that worked in "simple" single camera pancake views and modified it until I could get it to work properly with SteamVR.

Temporal accumulation was the biggest issue: because the left and right eyes render from different perspectives and the VR camera is constantly moving, it was effectively impossible to accumulate more than a single frame in the headset. I first split the left and right eye textures into separate instances so that each eye accumulated independently. I then used a shader to account for head rotation, warping the accumulated frames to line up with the newest perspective.

Both efforts were effective, the former far more so than the latter, which still suffered from artifacts due to the distortion at the high FOVs that VR headsets have.
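For illustration only, here is a rough NumPy sketch of that idea, not the actual shader: separate accumulation buffers per eye, with the history approximately re-aligned for small yaw rotations by a horizontal pixel shift and discarded when the rotation is too large for that approximation to hold. The resolution, FOV, and thresholds are made up:

import numpy as np

WIDTH, HEIGHT, H_FOV = 256, 256, np.deg2rad(100)   # per-eye parameters (placeholders)
FOCAL_PX = (WIDTH / 2) / np.tan(H_FOV / 2)         # pinhole focal length in pixels

def reproject_yaw(history, yaw_delta_rad):
    """Approximate a small yaw rotation as a uniform horizontal pixel shift.
    Only valid near the image center and for small angles; a real shader
    resamples with the full projection math."""
    shift_px = int(round(FOCAL_PX * yaw_delta_rad))
    return np.roll(history, shift_px, axis=1)

def accumulate_eye(history, new_frame, yaw_delta_rad, alpha=0.1, max_yaw=np.deg2rad(5)):
    if abs(yaw_delta_rad) > max_yaw:
        return new_frame                    # too much rotation: restart accumulation
    warped = reproject_yaw(history, yaw_delta_rad)
    return (1 - alpha) * warped + alpha * new_frame

# One accumulation buffer per eye, since the two perspectives never match.
buffers = {"left": np.zeros((HEIGHT, WIDTH)), "right": np.zeros((HEIGHT, WIDTH))}
for eye in buffers:
    noisy = np.random.rand(HEIGHT, WIDTH)           # stand-in for a pathtraced eye frame
    buffers[eye] = accumulate_eye(buffers[eye], noisy, yaw_delta_rad=np.deg2rad(0.5))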

Next I targeted dynamic resolution so that the number of rays cast each frame could be controlled to suit our needs. I changed the RenderTexture resolution accordingly and made the program render at the VR headset's resolution rather than the Unity mirrored view's resolution.
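The budget math behind that is simple. Here is a sketch of how a resolution scale could be chosen for a fixed per-frame ray budget; the budget value is arbitrary, though 1440x1600 is the Vive Pro's per-eye panel resolution:

import math

def resolution_scale(native_w, native_h, rays_per_pixel, ray_budget):
    """Uniform scale factor so that width * height * rays_per_pixel stays within budget."""
    rays_at_native = native_w * native_h * rays_per_pixel
    return min(1.0, math.sqrt(ray_budget / rays_at_native))

scale = resolution_scale(1440, 1600, rays_per_pixel=1, ray_budget=1_500_000)
w, h = int(1440 * scale), int(1600 * scale)
print(f"render at {w}x{h} (scale {scale:.2f}) to stay under the ray budget")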

Later, I attempted foveated rendering to improve the still-lackluster performance. I learned how expensive the Graphics.Blit() and Graphics.CopyTexture() methods are in terms of bandwidth and found that while foveated rendering was possible, it hurt performance more than it helped.

I then added support for moving mesh objects imported at runtime, driven by their GameObject Transform. This let me put a moving, rotating block into the scene. I also learned about the inefficiencies of the existing mesh pathtracing technique and began to explore implicit surface calculations instead.

I looked at SLI support to parallelize the pathtracing but found that compute shaders do not support multi-GPU and CUDA/OpenCL options were unviable for our purposes.

Thus we managed to get a fully pathtraced program running at a full 90 FPS on a Vive Pro at native resolution and 1 ray per pixel, with perfectly accurate reflections, lighting, ambient occlusion, and shadows, albeit with a lot of noise, no RTX acceleration, limited mesh usage due to performance, no standard materials or shaders, and no effective foveated rendering.

Tuesday, June 11, 2019

On The Current State Of Raytracing


I imagine most people at all familiar with rendering and graphics are aware of Nvidia's recent 'RTX' technologies and Microsoft's DXR: Nvidia providing hardware acceleration in the form of 'RT cores' and Microsoft providing the DirectX standard to use them.

We've seen pretty demos like the Star Wars cinematic elevator video, first rendered on DGX supercomputers and then shown running (albeit at a far, far lower level of fidelity) on consumer graphics cards in realtime. We've seen a few games, such as Battlefield V, Shadow of the Tomb Raider, and Metro Exodus, support raytraced reflections, shadows, and global illumination respectively.


Nvidia even recently released a mostly pathtraced version of Quake II based on Q2VKPT, an open source, fully pathtraced implementation.

We've also seen mention of impressive unaccelerated raytracing in CryEngine on an RX Vega 56, as well as talk of raytracing on the PS5 and hardware acceleration on Xbox Scarlett.


All these demos are quite impressive, and we've even seen both Unity and Unreal Engine add DXR support in publicly available demos and betas so developers can begin to push these technologies in their applications.
It almost seems like rendering technology is suddenly accelerating at light speed. That is, until you begin to look at the current situation more holistically.

The current implementations offer many substantial drawbacks that cumulatively and often individually destroy the viability of the technology.


First, performance is a huge issue. The reflections-only implementation in Battlefield V cut framerates by between 58 and 64% on the RTX 2080 and 2080 Ti at launch, the $1,250 2080 Ti pushing a measly "high 60s" framerate at 1080p. DXR is also limited to DX12, which, particularly in the Frostbite engine that BFV runs on, causes serious stuttering and frametime issues. A later patch improved performance after many months, but that 2080 Ti still doesn't manage 90 FPS on high at 1080p; we're still left with a roughly 40% performance penalty for just those raytraced reflections.

Remedy themselves remarked on the enormous penalties of RTX implementations in their custom Northlight engine. Regarding the frametime penalties they say, "This is a total of 9.2 ms per frame and thus an almost one-third higher computational overhead if we take 30 frames per second as the basis". Notably, this is still at 1080p with the aforementioned $1,250 2080 Ti. While gamers are used to 144 Hz refresh rates with frametimes well below 7 ms, it's hard to stomach raytraced effects that would more than double frametimes for often insubstantial graphical improvements. Current rasterized fakery has gotten so sophisticated and convincing that raytracing just isn't as impactful as it otherwise would be.
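To put those numbers in perspective, the frametime arithmetic looks like this (the baseline framerates are just examples):

def fps_with_overhead(base_fps, added_ms):
    """Framerate after adding a fixed per-frame cost in milliseconds."""
    return 1000.0 / (1000.0 / base_fps + added_ms)

added = 9.2   # Remedy's quoted raytracing cost per frame
for base in (30, 60, 144):
    budget = 1000.0 / base
    print(f"{base:3d} FPS budget = {budget:5.2f} ms/frame; "
          f"+{added} ms -> {fps_with_overhead(base, added):6.1f} FPS "
          f"({added / budget * 100:.0f}% of the original budget)")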

Current RTX implementations basically ask that, instead of pushing 4K144 or 1440p240 like many gamers are trying to do, or even holding steady at 4K60 or 1440p144, or regressing somewhat to 1440p60 or 1080p120, we regress all the way to 1080p60 in the best implementations, on the best consumer card currently available, priced more than $500 higher than the last-gen 1080 Ti at launch.

We're going to need to see more than a radical improvement in performance for this to be viable in current-generation games. We're left hoping that Nvidia's RTX launch was truly so incompetent that such gains were still left on the table more than eight months later. Given that Windows didn't even support DXR or RTX features at launch, and that it took months for the first game with any RTX support to come out, that's not such a stretch.


Second, compatibility is horrible. Only three published games support RTX raytracing as of June 2019; arguably Quake II could be added if you want to count a mod. The raytracing tools in Unity and Unreal I mentioned earlier are still in very early beta eight months after the RTX launch, with little sign of rapid improvement. Bugs are ubiquitous, and backwards compatibility with GTX 1000 series cards, while possible, leaves performance an order of magnitude worse than already middling RTX performance. Integration into existing projects is all but impossible at this point without limiting yourself to raytracing-specific alpha builds of the engine editors.

Notably, while VR is quickly becoming extremely popular, with PSVR and PCVR reaching over 4 million users each, Steam seeing 1 million monthly active VR users, and Facebook launching the $400 Oculus Quest, there is absolutely no RTX support in VR whatsoever. There isn't so much as a hint that maybe, just maybe, one of the dozen multi-billion dollar companies currently pouring hundreds of millions into VR is even considering exploring raytracing support.
Even if compatibility were solved, performance is obviously an even bigger consideration for VR.

Related to compatibility, most raytracing tools are locked away from developers. While some major studios are working closely with Nvidia and Microsoft to integrate DXR and RTX into their custom engines, and indie developers will presumably get proper tools in Unreal and Unity eventually, right now there are very few options for actually modifying a raytracing implementation yourself. The DirectX raytracing implementations that are accessible frequently, seemingly universally, forgo hardware acceleration, so those RT cores become moot.


Third, as alluded to before, rasterized fakery has gotten so good that it's really difficult to justify the aforementioned issues. While realtime denoising has gotten pretty good, there are still many noticeable artifacts in every available demo. The level of fidelity reached with these raytraced implementations falls far short of established baselines in rasterized games. Raytracing will almost invariably provide more realistic lighting and effects, but that doesn't mean the result will look better than rasterized games. Remedy themselves faked incredibly good-looking lighting in their last game, Quantum Break. Near-photorealism is already achieved with these 'primitive' techniques, which manage to both perform far better and offer far better compatibility than any raytracing implementation currently available.



All of this leaves such a sour taste in my mouth regarding realtime raytracing that it's hard to imagine it will have any real place in gaming outside the very narrow niche of enthusiast gamers with far too much money to spend who also don't care about pushing super high resolutions, VR, or high framerates. I really do hope I'm wrong.

Friday, August 31, 2018

How to Record in True 120FPS on PC


This is a comprehensive how-to guide outlining how I managed to record native 120FPS footage from PC games. It will generally work at other framerates and resolutions but will obviously require different settings and hardware.

First you'll need a few tools.
First and foremost, you're going to need a graphics card that can both run games at 120FPS and encode video in hardware.

GPU

Nvidia GPUs feature NVENC, which encodes different formats at different levels of performance depending on the generation of the chip.
Wikipedia has a nice summary which says:

  • GTX 700 series cards support roughly 1080p 240FPS H264 YUV420 encoding.
  • GTX 900 cards add HEVC (H265) support and can encode 2160p at 60FPS.
  • GTX 1000 cards add 10-bit and 8K HEVC encoding and "double the encoding performance" of GTX 900 cards, so presumably around 2160p 120FPS?
  • RTX 2000 cards carry the sixth-generation NVENC, which encodes 8K HEVC at 30FPS.

Intel integrated GPUs offer Quick Sync accelerated encoding, with expectedly lower performance compared to AMD and Nvidia solutions.

AMD GPUs feature VCE encoding which again varies based on model.
To quote AMD's Robert:
Tahiti/Pitcairn (HD 7000 Series):
  • 1080p60 h.264
Hawaii/Bonaire (R9 200 Series):
  • 1080p87 h.264
Fiji/Tonga (Fury/Fury X):
  • 4K60 h.264
  • 1440p120 h.264
  • 1080p240 h.264
Polaris (RX 400/500 Series):
  • 1440p60 h.264
  • 1080p120 h.264
  • 4K60 h.265
  • 1440p120 h.265
  • 1080p240 h.265
Vega (Vega 56/64):  
at 1080p
  • H264 Speed: 219 fps
  • H264 Balanced: 180 fps
  • H264 Quality: 101 fps
  •  H265 Speed: 243 fps
  •  H265 Balanced/Quality: 219 fps
at 1440p
  • H264 Speed: 129 fps
  • H264 Balanced: 107 fps
  • H264 Quality: 75 fps
  • H265 Speed: 137 fps
  • H265 Balanced/Quality: 124 fps
Navi supports 4k90 encoding or 1080p360 in H264.
It also supports 4k60 or 1080p360 in H265. 
https://gpureport.cz/info/Graphics_Architecture_06102019.pdf

Overhead

This should give you an overview of the sort of performance you can expect when encoding with GPU acceleration, but you also have to consider that the hardware encoder is not entirely independent of the rest of the GPU and may struggle and drop frames if you're running at over 80% GPU usage.

Software

Next you'll need a few programs in order to record properly.
While Nvidia and AMD provide software in their drivers to record your screen, they have limited options and max out at 60FPS.

OBS (open broadcaster software) is a free and open source recording software that allows a lot of configuration when recording or streaming.

Handbrake (also free and open source) is a very popular transcoding software that allows you to re-encode the videos you record so that they can be distributed online and/or stored in a more efficient format.

MSI Afterburner + RivaTuner (free but proprietary) are hardware/software monitoring tools that let you make sure your GPU usage stays under 80%, confirm framerates hold a solid 120FPS, and cap your framerate at exactly 120FPS.

Finally you'll need whatever program or game you'll be recording.

Configuration

First make sure your game will run at 120FPS because many have hard or soft framerate caps at 60 or even 30FPS.

PCGamingWiki is a fantastic resource that will tell you what features a given game supports.

An example page of Battlefield 4 shows that by default the game is locked at 200FPS but can be unlocked entirely with a console command/config file edit.

Another example is Burnout Paradise Remastered which was released in August of 2018 yet lacks any support for framerates above 60FPS and has no known way to unlock it.

There are also comprehensive lists of games that can support 120FPS, support at least 60FPS, or do not support even 60FPS.

Once you get the right game, you're going to want to make sure you can record properly.

Configuring OBS can be a challenge and takes a bit of trial and error.

First, install and open the program, then open the settings menu (under the 'File' drop-down menu). Click on the Video tab and set the "Integer FPS Value" to what you want; in this case, 120.

You should then set the canvas resolution to the resolution of the display you'll be recording, and the output resolution to what you want to record at (probably 1080p).

Then switch to the "Output" tab and change the "Output Mode" drop-down to "Advanced". Click on the Recording tab and set the values you'd like. The bitrate shown below (25,000 kbps) is on the low side; I'd recommend closer to 100,000 kbps for good 1080p 120FPS HEVC video. Generally you want the peak bitrate to be the target plus 50%. When selecting the encoder, make sure to use the hardware accelerated one.
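If you want to sanity-check the numbers before recording, the arithmetic is simple; the bitrate below is just the recommendation above:

def recording_plan(target_kbps, clip_seconds):
    peak_kbps = target_kbps * 1.5                    # peak = target + 50%
    size_mb = target_kbps * clip_seconds / 8 / 1000  # kilobits -> megabytes
    return peak_kbps, size_mb

peak, size = recording_plan(target_kbps=100_000, clip_seconds=60)
print(f"peak bitrate: {peak:,.0f} kbps, approx file size for 60 s: {size:,.0f} MB")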



Finally, make sure you've added the screen you want to record as a "source" in the main window and preview the output to be sure. Then make a quick test recording and ensure there are no artifacts and the file saves properly.


When you're ready to record, open Afterburner. It will automatically start RivaTuner, which will be listed in the Windows notification area; just click it and it will open.


 Select the Framerate limit and set it to 120FPS.

Transcoding

Open Handbrake and import the file you recorded.

Make sure you start with the VP8 1080p30 preset. Click on the 'Video' tab and change the framerate from '30' to '120' or 'Same as source'. Set the 'Constant Quality' slider between 20 and 30.



You'll want to cut down your video in Handbrake in order to lessen the time it takes to transcode and upload. If you plan to share the clip online then make sure it's 60 seconds or shorter.

Allow the video to transcode. It may take up to an hour depending on your processor and your settings, though 10 minutes is normal. The process often appears to hang at 99%, but it usually finishes successfully if you let it sit for a little while.
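If you prefer the command line, the same transcode can be scripted. This is a rough equivalent using ffmpeg rather than Handbrake, assuming ffmpeg with libx265 is installed; the CRF value and filenames are placeholders, not tested recommendations:

import subprocess

def transcode(src, dst, crf=24):
    """Re-encode a 120FPS recording to HEVC at constant quality, keeping framerate and audio."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx265", "-crf", str(crf),  # constant-quality HEVC, analogous to Handbrake's slider
        "-c:a", "copy",                       # keep the original audio untouched
        dst,
    ], check=True)

transcode("recording_120fps.mkv", "clip_120fps.mp4")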

Sharing

So now we finally get to the hard part. Before this point everything can work well without significant limitations. This is the bottleneck and the largest hurdle to widespread 120FPS video adoption.
Unless you want to host the video on your own site, there are only a couple of video hosts that support 120FPS. Gfycat supports 1440p (4K?) 120FPS video up to 60 seconds in length with audio (using an account). Twitch also supports unlimited-length 120FPS streaming, but with a low bitrate of 6,000 kbps.

I should add, for the sake of comprehensiveness, that it's technically possible to edit a 120FPS video to half speed locally, upload it to YouTube, and have viewers select the 2x playback speed option, but I do not recommend it at all: audio is borked by the process, the viewer has to change settings, and playback is not performant. I have also learned that Twitch may be capable of 120FPS somehow, but I have yet to find any real confirmation, so I will update this after I test it.

For Gfycat, make sure you upload directly to https://gfycat.com/upload. The create tools butcher the video and make results like this: https://gfycat.com/gifs/detail/DecisiveUntimelyGypsymoth

You also want to make sure you trim off the very beginning of the video because it will stutter. This is what a raw recording uploaded to Gfycat looks like: https://gfycat.com/gifs/detail/ThornyMeagerIcelandichorse

Uploading is as fast as your internet connection, but processing is slow, and unlike YouTube you don't get a shareable link until it's done, so make sure you can keep that tab open in the meantime.

When properly done you will end up with results like these: https://www.willse.org/p/120fps.html

With Twitch, just use normal OBS streaming settings and set the framerate to 120FPS. It will say that the quality setting is "1080p60" but it's actually 120FPS which can be confirmed by looking at the video stats or downloading the video. This is a sample I recorded of Half-Life Alyx with unrelated broken audio: https://www.twitch.tv/videos/596319612

If you decide to host an .mp4 version of your video yourself then you should be able to simply embed it into any standard webpage. An example video can be found here:
https://www.blurbusters.com/hfr-120fps-video-game-recording/

In this case you're clearly not limited by length, resolution, framerate, sound or bitrate anymore.

Edit: 

Gfycat has improved their video support substantially since I wrote this and they now support audio and 1440p high bitrate 120Hz video. The following video is an example of a very high bitrate video through the entire pipeline.


via Gfycat


Edit 2: Added new information about twitch streaming.


Monday, June 6, 2016

On Core i5 bottlenecking


Sure, a good Core i5 is going to be more than you'll ever need for regular gaming.

That is until you actually decide to play one of a huge variety of surprisingly CPU intensive games.


For reference, I have a relatively high-end Core i5-4690 paired with an EVGA GTX 970 FTW and a 1080p 144 Hz monitor, and according to post after post online I should never run into any CPU bottlenecks, because single-core performance doesn't get much better and games don't really use that many threads. Right?

Anandtech's benchmarks say that my processor is the best CPU based on the Haswell architecture or earlier to run a game like Battlefield 4 with a similar GPU to mine.

Tom's hardware remarks:

"Top-end CPUs offer rapidly diminishing returns when it comes to gaming performance. As such, we have a hard time recommending anything more expensive than the Core i5-6600K"

TechSpot benchmarks show no significant difference in Battlefield 4 performance with a 290X between a Core i3-3220 and a Core i7-4960X.

Logical Increments still recommends a similar caliber CPU to mine (Core i5-6600) for GPUs a decent bit more powerful than mine such as a R9 Fury.

And /r/buildapc notes in their wiki:

"[i5s] are currently the most popular CPU for high end gaming as the performance benefit of an i7 is negligible and not worth the price increase for many people."


After reading these comments from a large variety of very reputable sources in the PC building community one might be inclined to purchase a $200-$250 Core i5 rather than dishing out another $100 for a "negligible" performance improvement in "high end gaming".

The problem is that everything I've tested in extremely popular games such as Battlefield 4, Rainbow Six Siege, Grand Theft Auto V, and Civilization V proves that my processor is woefully inadequate for the task.


Some anecdotes from my experience: Battlefield 4 runs at roughly the same frame-rate (within 5%) on the minimum preset versus the high preset. Looking at the RivaTuner overlay, I see that on minimum settings my GPU sits below 60% usage the entire time, which is the clear marker of a CPU bottleneck.
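To spell out the heuristic I'm applying, here is a trivial sketch; the thresholds are just my rules of thumb, nothing rigorous:

def looks_cpu_bound(gpu_usage_pct, fps_low_preset, fps_high_preset):
    """CPU bottleneck heuristic: the GPU has headroom to spare, yet
    dropping the graphics preset barely changes the framerate."""
    gpu_has_headroom = gpu_usage_pct < 80
    settings_dont_matter = abs(fps_low_preset - fps_high_preset) / fps_high_preset < 0.05
    return gpu_has_headroom and settings_dont_matter

# My Battlefield 4 numbers: <60% GPU usage, roughly the same framerate on minimum vs high.
print(looks_cpu_bound(gpu_usage_pct=58, fps_low_preset=125, fps_high_preset=120))  # True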

Playing Rainbow Six Siege results in similar trends. I am unable to push the frame-rate of the game above about 125 Fps at 1080p regardless of the settings I choose. Another notable symptom is the fact that all four cores of my Core i5-4690 are pinned at 100% CPU usage the entire time I'm playing the game.

Civilization V takes its sweet time to process the AI once you get a few turns into a game.

Also, Grand Theft Auto V, initially released in 2013, refuses to run over 100 Fps on my computer no matter what I've tried.

God forbid I try to download something in Steam while I play Battlefield 4, because instead of 100-150 Fps, I'm suddenly running a stutter-filled mess at less than 45 Fps.


I've attached some screenshots in Rainbow Six Siege below with a RivaTuner overlay.


I also have reports from others, like /u/UN1203, who remarks:

"Hey so the results are in and I have to say I was wrong, and you might be on to something. I was seeing FPS scaling all the way up to my max overclock of 4.8ghz on a 6600k. Here are the pics for you.
2.5ghz
3.5ghz
4.8ghz

I could not get back to the same spot for screenshot 3 because the map had changed, but I picked the spot on the map where my sustained fps was the lowest.
Ultimately I don't really remember what it was like to play BF4 at anything under 4.4-4.5ghz and I didn't figure it would scale at all after the 3.9ghz your chip puts out but it certainly does. So it looks like it's upgrade time for you my friend. Get more cores or get to overclocking... If cost is not a major concern for you as you say, wait for the broadwell-e to drop, put that bitch under water, and let her stretch her legs at 4.5ghz+, that will be an absolute monster. And with BF5 coming out soon you will want all the firepower you can get."

I'm stuck with this huge disparity between the information I'm given and the results I'm experiencing, and finding a solution is beyond my knowledge. I see massive stagnation in the high-end enthusiast processor market, with Broadwell-E failing to improve performance per dollar over Haswell-E and, according to a decent number of reports, even failing to improve maximum overclocked performance.

I guess I'll try upgrading to a Core i7-5820K and hope that does the trick; I'll let you guys know how that turns out.

Thanks for reading.

Edit:
Some people are asking for a full spec. list so here it is:

Intel Core i5-4690
Gigabyte GA-Z97-HD3P
EVGA GTX 970 FTW
Windows 10 64-bit Professional
HyperX 16GB 1866MHz DDR3
Samsung 850 Evo 512GB
Acer GN246HL 1080p144Hz
Roccat Ryos MK Cherry MX Black
Coolermaster Recon
Blue Yeti
Fiio E10K
Sennheiser HD598 Special Edition

Edit 2:

I've read comments saying that clearly I'm an idiot and there must be some other program running in the background ruining my performance, that Windows 10 is the spawn of Satan and idles the processor, and that I should just be happy getting around 100 Fps and give up on expecting a locked 144 Fps in new games. I love how people love to condescend and spout "user error" at anything that makes them consider some other person's reality.

My response is simple: I'm not content with "good enough". I want and expect great performance with great hardware, and I value smooth and responsive gameplay above all else.


I understand that adaptive-sync will help but 100 Fps is 100 Fps. I want 144 Hz or higher.

I'd also like to clarify that BF4 stuttering while downloading from Steam is entirely due to CPU usage. I'm downloading to a different drive, my internet is fast enough, and I have more than enough RAM.