Dynamic occlusion on Quest 3 is currently only supported in a handful of apps, but now it’s higher quality, uses less CPU and GPU, and is slightly easier for developers to implement.
Occlusion refers to the ability of virtual objects to appear behind real objects, a crucial capability for mixed reality headsets. Doing this only for pre-scanned environments is known as static occlusion, while if the system supports changing environments and moving objects it’s called dynamic occlusion.
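At its core, dynamic occlusion comes down to a per-pixel depth comparison between the real scene and the virtual one. A purely conceptual sketch (illustrative only, not Meta’s API):

```csharp
// Conceptual sketch only: dynamic occlusion reduces to comparing, per pixel,
// how far away the real world is versus the virtual fragment being drawn.
static bool IsOccluded(float environmentDepthMeters, float virtualDepthMeters)
{
    // If a real surface is closer to the camera than the virtual fragment,
    // the virtual pixel should be hidden behind it.
    return environmentDepthMeters < virtualDepthMeters;
}
```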
Quest 3 launched with support for static occlusion but not dynamic occlusion. A few days later dynamic occlusion was released as an “experimental” feature for developers, meaning it couldn’t be shipped on the Quest Store or App Lab, and in December that restriction was dropped.
Developers implement dynamic occlusion on a per-app basis using Meta’s Depth API, which provides a coarse per-frame depth map generated by the headset. Integrating it is a relatively complex process, though. It requires developers to modify their shaders for all virtual objects they want to be occluded, far from the ideal scenario of a one-click solution. As such, very few Quest 3 mixed reality apps currently support dynamic occlusion.
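To give a sense of the work involved, here’s roughly what that per-shader modification looks like for Unity’s built-in render pipeline. This is a sketch following the pattern in Meta’s Depth API samples; the include path and macro names reflect Meta’s documentation at the time of writing and may differ between SDK versions:

```hlsl
Shader "Example/OccludedUnlit"
{
    SubShader
    {
        Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
        // Premultiplied alpha blending, so occluded pixels fade out
        // rather than rendering as black.
        Blend One OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // Compiles variants with and without hard/soft occlusion.
            #pragma multi_compile _ HARD_OCCLUSION SOFT_OCCLUSION

            #include "UnityCG.cginc"
            // Assumed include path for the v67 Core SDK; earlier sample
            // packages shipped this file under com.meta.xr.depthapi.
            #include "Packages/com.meta.xr.sdk.core/Shaders/EnvironmentDepth/BiRP/EnvironmentOcclusionBiRP.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                // Adds the world-position interpolator the occlusion macros need.
                META_DEPTH_VERTEX_OUTPUT(0)
                UNITY_VERTEX_OUTPUT_STEREO
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                META_DEPTH_INITIALIZE_VERTEX_OUTPUT(o, v.vertex);
                return o;
            }

            half4 frag(v2f i) : SV_Target
            {
                half4 col = half4(1, 0, 0, 1); // flat red, for illustration
                // Compares this fragment against the environment depth map and
                // attenuates it where a real object is closer to the camera.
                META_DEPTH_OCCLUDE_OUTPUT_PREMULTIPLY(i, col, 0);
                return col;
            }
            ENDCG
        }
    }
}
```

Every shader on every object that should be occluded needs equivalent changes, which is why adoption has been so limited.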
Another problem with dynamic occlusion on Quest 3 is that the depth map is very low resolution, so you can see an empty gap around the edges of objects, and it won’t pick up details like the spaces between your fingers.
With v67 of the Meta XR Core SDK, though, Meta has slightly improved the visual quality of the Depth API and significantly optimized its performance. The company says it now uses 80% less GPU and 50% less CPU, freeing up extra resources for developers.
To make the feature easier to integrate, v67 also adds support for easily adding occlusion to shaders built with Unity’s Shader Graph tool, and refactors the Depth API code to make it easier to work with.
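For Unity developers, the refactored setup now centers on a single scene component. A minimal sketch, assuming the v67 class and property names from Meta’s documentation (EnvironmentDepthManager, OcclusionShadersMode):

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth; // Meta XR Core SDK v67+ (assumed namespace)

// Minimal setup sketch: a single EnvironmentDepthManager drives the depth
// map for every occlusion-enabled shader in the scene.
public class OcclusionBootstrap : MonoBehaviour
{
    private void Start()
    {
        // The Depth API requires hardware support (Quest 3).
        if (!EnvironmentDepthManager.IsSupported)
        {
            Debug.LogWarning("Depth API is not supported on this device.");
            return;
        }

        var manager = gameObject.AddComponent<EnvironmentDepthManager>();
        // Enable the soft-occlusion shader variants globally.
        manager.OcclusionShadersMode = OcclusionShadersMode.SoftOcclusion;
    }
}
```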
I tried out the Depth API with v67 and can confirm it delivers slightly higher quality occlusion, though it’s still very rough. But v67 has another trick up its sleeve that’s more significant than the raw quality improvement.
The Depth API now has an option to exclude your tracked hands from the depth map so that they can be masked out using the hand tracking mesh instead. Some developers have been using the hand tracking mesh to do hands-only occlusion for a long time now, even on Quest Pro for example, and with v67 Meta provides a sample showing how to do this alongside the Depth API for occlusion of everything else.
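A minimal sketch of that option, assuming the RemoveHands property name from Meta’s v67 documentation; the hand-mesh side (e.g. rendering OVRHand’s mesh with a depth-only material, as in Meta’s sample) is handled separately by the app:

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth; // assumed namespace, Meta XR Core SDK v67+

// Excludes tracked hands from the environment depth map so the app can
// occlude them with the higher-fidelity hand tracking mesh instead, while
// the depth map keeps handling occlusion for everything else.
public class HandsOcclusionSetup : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthManager depthManager;

    private void Start()
    {
        // Hands no longer punch holes in the depth map; the hand tracking
        // mesh takes over occlusion for them.
        depthManager.RemoveHands = true;
    }
}
```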
I tested this out and found it results in significantly higher quality occlusion for your hands, though it adds some visual inconsistencies at your wrist, where the system transitions to occlusion powered by the depth map.
In comparison, Apple Vision Pro has dynamic occlusion only for your hands and arms, because it masks them out the same way Zoom masks you out rather than generating a depth map. That means the quality of hand and arm occlusion on Apple’s headset is significantly higher, though you can see oddities like objects you’re holding appearing behind virtual objects and being invisible in VR.
Quest developers can find Depth API documentation for Unity here and for Unreal here.