"Real-Time Ray Tracing: Holy Grail or Fool's Errand?" is an older article, but I think it's short, simple, and to the point, which you don't see every day. The author Dean Calver gives a very brief comparison of rasterization and ray tracing then summarizes some of the problems with ray tracing--specifically aliasing, dynamic scenes, and global illumination.
I've written about problems with dynamic scenes in the past. Although many people prefer ray tracing for its simplicity, it requires acceleration structures to run quickly. In his presentation, David Luebke made an excellent statement that I think sums up the rasterization-versus-ray-tracing debate rather well.
"Rasterization is fast, but needs cleverness to support complex visual effects. Ray tracing supports complex visual effects, but needs cleverness to be fast."
Another problem I think isn't discussed much is texture locality. Although I don't consider it as big a problem for ray tracing as dynamic scenes, it's a performance penalty. The issue stems from ray tracing being an image-order algorithm. If you shoot a ray into the scene and it hits a textured object, you need to fetch that texture into the cache. If the next pixel uses a different texture, a new texture block must be fetched. This is in contrast to an object-order technique, where adjacent fragments on the same object usually sample adjacent texels. A worst-case scenario for a ray-traced scene might be tracing textured grass against a textured wall: every couple of pixels, you'd have to refetch another texture block. This is a generalization, since caches are big and the entire texture doesn't need to be fetched, but it's still an issue, and today's textures are getting rather large. With rasterization, the wall would be drawn with its texture, then the grass would be drawn with the other, possibly creating no duplicate cache misses. If you read the original REYES paper by Cook et al., you'll see that they spend a good chunk of the paper talking about texture locality, because it can severely impact performance.
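To make the cache behavior concrete, here is a tiny simulation, entirely my own illustration rather than anything from Calver's article, that counts texture-block fetches for the grass-against-wall case under the two pixel orders, using a hypothetical one-block cache:

```cpp
#include <cstdio>
#include <vector>

// Count texture-block fetches for a sequence of per-pixel texture ids,
// using a hypothetical tiny FIFO cache holding `cacheSize` blocks.
int countFetches(const std::vector<int>& textureIds, int cacheSize) {
    std::vector<int> cache;
    int fetches = 0;
    for (int id : textureIds) {
        bool hit = false;
        for (int c : cache)
            if (c == id) { hit = true; break; }
        if (!hit) {
            ++fetches;                       // cache miss: fetch the block
            cache.push_back(id);
            if ((int)cache.size() > cacheSize)
                cache.erase(cache.begin()); // evict the oldest block
        }
    }
    return fetches;
}

int main() {
    // Grass-against-wall worst case: in image order, adjacent pixels
    // alternate between texture 0 (wall) and texture 1 (grass). In
    // object order, all wall pixels are shaded before any grass pixels.
    std::vector<int> imageOrder, objectOrder;
    for (int i = 0; i < 8; ++i) { imageOrder.push_back(0); imageOrder.push_back(1); }
    for (int i = 0; i < 8; ++i) objectOrder.push_back(0);
    for (int i = 0; i < 8; ++i) objectOrder.push_back(1);
    printf("image-order fetches:  %d\n", countFetches(imageOrder, 1));  // 16
    printf("object-order fetches: %d\n", countFetches(objectOrder, 1)); // 2
    return 0;
}
```

With the interleaved image-order sequence every pixel misses, while the object-order sequence fetches each block exactly once.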
I think NVIDIA isn't concerned with whether ray tracing takes over the rendering pipeline, as long as it's done on the GPU. The goal of Luebke and Parker's presentation was to show what NVIDIA hardware could do and let developers decide what's best for them. More options, kind of thing. Although I didn't care too much for the ray tracing portion of the demo, I did think he had compelling screenshots of what rasterization is currently capable of rendering.
Both presentations conclude that a hybrid algorithm is the future. Calver likes the idea of rasterizing the visible portion of the screen and using that information to ray trace secondary rays. Frankly, I don't find this idea very promising. Rasterization is great for resolving visibility, but once you shoot secondary rays, you need the very acceleration structures you were trying to avoid building in the first place. If building them wasn't a problem, you could just ray trace everything: ray tracing is very fast for primary rays once the acceleration structures are built, since you can exploit tons of coherency. David Luebke's idea seems more plausible, where scene assets are sorted into ray-traced and rasterized passes. You can rasterize animated objects like people without having to build an acceleration structure, and ray trace other objects that require effects like reflections. But once again, what are you intersecting your secondary rays with? In my diagram below, I have a sample scene with a ray-traced reflective sphere and a rasterized animated person. It's crude, but I'm trying to give a simple example.
The ray-traced sphere needs an updated acceleration structure to perform accurate reflections on the animated, rasterized figure
You rasterize the person just like you would in current games. Then you ray trace the sphere. A primary ray hits the sphere, reflects, and now you want to intersect that reflected ray with the person. But the person has no acceleration structure, because they're being animated; if we had an up-to-date acceleration structure, we would have just ray traced them in the first place. My point is that I don't see how ray tracing secondary rays will ever play nice with rasterized, dynamic geometry.
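To restate the problem in code, here is a minimal sketch of the hybrid frame described above. Every type and function in it (Mesh, BVH, rasterize, buildBVH) is a hypothetical stand-in, not any real API:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-ins, just enough to restate the argument in code.
struct Mesh { bool animated; };
struct BVH  {};  // acceleration structure over one mesh

void rasterize(const Mesh&) { /* draw into the framebuffer as games do today */ }
BVH  buildBVH(const Mesh&)  { return BVH{}; }  // the per-frame cost we wanted to avoid

void renderFrame(const std::vector<Mesh>& scene) {
    std::vector<BVH> structures;
    for (const Mesh& m : scene) {
        if (m.animated)
            rasterize(m);                       // cheap: no acceleration structure
        else
            structures.push_back(buildBVH(m));  // static: build once, ray trace
    }
    // Secondary rays from the reflective sphere can only be intersected
    // against `structures`. The animated person was never added, so the
    // reflection either misses them, or we rebuild their BVH every frame,
    // which is exactly the work the split was supposed to avoid.
    printf("built %zu acceleration structures\n", structures.size());
}

int main() {
    renderFrame({ Mesh{true}, Mesh{false}, Mesh{false} });
    return 0;
}
```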
Jon Olick's presentation used ray casting for static geometry, which I think provides more immediate benefits. Most games already have special rendering techniques for static geometry, like lightmaps for the floors and walls, so special rendering "cheats" for static geometry are nothing new. Although this doesn't provide higher-order effects the way ray tracing does, it can significantly increase the geometric complexity of the static portions of a scene. Furthermore, the technique is relatively easy to incorporate into existing pipelines at little extra cost.
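As a toy illustration of ray casting into static geometry, here is a minimal sketch that steps a ray cell by cell through a uniform occupancy grid with a 3D DDA, in the style of Amanatides and Woo. Note that this is a simple grid, not the sparse voxel octrees Olick actually described, and every name in it is my own:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical 8x8x8 occupancy grid representing static level geometry.
const int N = 8;
bool solid[N][N][N];

// Step a ray cell by cell through the grid (3D DDA) and report the
// first solid voxel it enters.
bool castRay(float ox, float oy, float oz,
             float dx, float dy, float dz,
             int* hitX, int* hitY, int* hitZ) {
    int x = (int)ox, y = (int)oy, z = (int)oz;
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1, stepZ = dz > 0 ? 1 : -1;
    // Parametric distance to the next voxel boundary on each axis.
    float tMaxX = dx != 0 ? ((dx > 0 ? x + 1 - ox : ox - x) / fabsf(dx)) : 1e30f;
    float tMaxY = dy != 0 ? ((dy > 0 ? y + 1 - oy : oy - y) / fabsf(dy)) : 1e30f;
    float tMaxZ = dz != 0 ? ((dz > 0 ? z + 1 - oz : oz - z) / fabsf(dz)) : 1e30f;
    // Parametric distance to cross one full voxel on each axis.
    float tDeltaX = dx != 0 ? 1.0f / fabsf(dx) : 1e30f;
    float tDeltaY = dy != 0 ? 1.0f / fabsf(dy) : 1e30f;
    float tDeltaZ = dz != 0 ? 1.0f / fabsf(dz) : 1e30f;
    while (x >= 0 && x < N && y >= 0 && y < N && z >= 0 && z < N) {
        if (solid[x][y][z]) { *hitX = x; *hitY = y; *hitZ = z; return true; }
        // Advance along the axis whose next boundary is closest.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
        else                                { z += stepZ; tMaxZ += tDeltaZ; }
    }
    return false; // left the grid without hitting anything
}

int main() {
    solid[5][2][2] = true; // one solid voxel acting as a wall
    int x, y, z;
    if (castRay(0.5f, 2.5f, 2.5f, 1.0f, 0.0f, 0.0f, &x, &y, &z))
        printf("hit voxel (%d, %d, %d)\n", x, y, z); // hit voxel (5, 2, 2)
    return 0;
}
```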
Although I still don't like the current marketing push towards ray tracing, Intel's included, I am mildly excited about Intel's Larrabee. As long as it rasterizes competitively fast, it could let developers add new features and extensions to the pipeline faster than current GPU hardware allows.
Oh, and though I don't have anything to say about the other presentations, kudos to NVIDIA for always doing a good job posting their presentations. I did see Sarah Tariq's hair presentation and think we're getting close to seeing scenes with characters with animated, beautiful, frothy, tangly, wavy, shiny hair. Check it out.