Shared Dream Progress Update #2 - The Shaders are Working!
As a bit of an update since the last post, I’ve managed to get the shaders working as intended for my ‘Shared Dream’ project! While I spent 20 hours researching a step that took 20 minutes to implement, I know the real value is in learning where to apply those same 20 minutes next time, and whether they’ll actually prove a valid solution! I’ve still yet to tackle interaction, but visually everything is functioning as intended!
I thought it might be fun to also discuss what approaches I learned wouldn’t work, instead of just the successes. Edison also learned 10,000 ways not to make a lightbulb! So, here’s my little trial-and-error process:
Overall Goal: floating ‘windows’ around the player offer views into different ‘Timelines’
Proposed Solution: Stencil Shaders to display and Raycasts for interaction
Shaders tell your GPU what to draw to the screen and how to draw it. Stencil shaders operate by reading from and/or writing to the “Stencil Buffer”, which is essentially a per-pixel grid of small integers (one value for each pixel on screen).
The Ref/Comp/Pass trio is where the real magic happens, giving me lots of control over how these overlapping figures can affect (or ignore!) each other in order to draw the “Map” [stencil buffer]. Then I can read from this buffer to choose which scene to render (if Ref 6, scene 2, etc), with a final ‘RenderTexture’ set to the main screen output. I can then interact via raycast, evaluate the stencil buffer, and based on that value, generate the appropriate “scene” interaction. Easy, right?
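To make Ref/Comp/Pass concrete, here’s a minimal sketch (not the project’s actual shader, and the shader name is made up) of a “window” material that stamps the value 6 into the stencil buffer wherever the window is drawn, without drawing any color itself:

```shaderlab
// Hypothetical sketch: a "window" that writes 6 into the stencil buffer.
Shader "SharedDream/WindowStencilWrite"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-1" } // draw before anything reads the buffer
        ColorMask 0                     // write nothing to the color buffer
        ZWrite Off
        Pass
        {
            Stencil
            {
                Ref 6        // the value we want in the buffer
                Comp Always  // pass the test unconditionally
                Pass Replace // on pass, replace the buffer value with Ref
            }
        }
    }
}
```

An object that should only be visible “inside” that window would then use the mirror image of this block: `Ref 6`, `Comp Equal`, `Pass Keep`, so its pixels only survive where the window already stamped a 6.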
Problem 1a: Unity Cameras cannot receive a RenderTexture as input
Simply put, this approach was dead on arrival. Cameras can output to a RenderTexture, not the inverse. You can’t put shaders on the cameras themselves, only on objects that the cameras see, with shaders acting as the “interpretation instructions”.
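The direction that *does* work is a one-liner. Here’s a tiny hypothetical setup script (names are mine, not from the project) showing a per-Timeline camera writing into its RenderTexture via `Camera.targetTexture`; there is no corresponding “input texture” property on a Camera:

```csharp
// Hypothetical sketch: each Timeline camera renders into its own RenderTexture.
using UnityEngine;

public class TimelineCameraSetup : MonoBehaviour
{
    public Camera timelineCamera;        // one of the per-Timeline cameras
    public RenderTexture timelineOutput; // created ahead of time in the editor

    void Start()
    {
        // Output only -- a camera can write to a RenderTexture,
        // but it cannot take one as input.
        timelineCamera.targetTexture = timelineOutput;
    }
}
```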
Problem 1b: The stencil buffer lives on the GPU, inside the shader pipeline; it’s not directly accessible via scripting
Think of a computer’s CPU as a team of however many brilliant engineers [cores] you can get to solve a problem as fast as they can. They’re capable of highly complex tasks, but each person can only work on one thing at any time. This makes using the CPU to draw to even a 1080p screen (2.07 million pixels) the equivalent of giving 2–8ish smart people 2.07 million tiny little tasks. It’s going to take a while. A graphics card, on the other hand, is a Santa’s workshop of millions of idiot elves who can only do super simple math, but when you can dedicate one elf per pixel, you can finish all of those little tasks very quickly.
The stencil buffer is like the mailroom of the workshop: it figures out which instructions get sent to which elves, and that’s all it does. If you let a random person come in [accessing the buffer through scripts on the CPU], start asking questions about where the mail is going, and start drinking “syrup”, they might make the mailroom erupt in spontaneous breakdance: that will really slow your elves down, so we can’t allow that.
Needlessly convoluted Elf reference aside, I can’t query the stencil buffer to get the value, so I need another approach.
Solution: Utilize Unity’s UI Canvas to make a fake ‘main camera view’ with… another stencil shader!
Since I can write a shader for an object that’s seen by the camera, all I need is to block the entire camera view with this object! So I created a screen-sized UI image with its own shader that makes a multi-pass read of the stencil buffer (that “Map” we drew earlier). In the game itself, I have four cameras, each sending its output to a separate RenderTexture. This stencil shader reads the map, then takes chunks of pixels from each RenderTexture (almost a color-by-numbers, in a way), selectively rendering my different ‘Timelines’ based on the values behind that object (which are set by my ‘windows’).
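A rough sketch of that multi-pass reader (shader and property names are assumptions, and the fragment programs are elided): one pass per Timeline, each stencil-tested against a different value, each sampling a different camera’s RenderTexture, so every screen pixel ends up color-by-numbered from exactly one Timeline:

```shaderlab
// Hypothetical sketch of the full-screen Canvas compositor shader.
Shader "SharedDream/TimelineCompositor"
{
    Properties
    {
        _Timeline1Tex ("Timeline 1", 2D) = "black" {}
        _Timeline2Tex ("Timeline 2", 2D) = "black" {}
    }
    SubShader
    {
        Tags { "Queue" = "Overlay" } // draw on top, after the windows stamp the buffer

        // Pass 1: pixels with the default stencil value show Timeline 1.
        Pass
        {
            Stencil { Ref 0  Comp Equal }
            // ...fragment shader samples _Timeline1Tex at screen UVs...
        }

        // Pass 2: pixels a "window" stamped with 6 show Timeline 2 instead.
        Pass
        {
            Stencil { Ref 6  Comp Equal }
            // ...fragment shader samples _Timeline2Tex at screen UVs...
        }
    }
}
```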
The white rect is the Canvas, and the red block is one ‘window’, modifying the buffer that the Canvas reads to selectively render scenes where they overlap
An early example of what this “Map” looks like. Stencil shaders can be used to only edit the buffer, but I added “draw this color if…” functionality to make it easier to visualize the values.
Since the UI Canvas is an opaque ‘image’ and you technically aren’t seeing “through” it, it is, in a very meta way, like a digital VR headset replacing the entire field of view. There are definitely performance/cleanliness improvements to be made, but I’ll consider myself happy for my first attempt at solving a problem with shaders!
Now that things are displaying properly, the next step is interacting through these windows!