I'm finally leaving academia and searching for a career that provides satisfaction and a real salary. I've applied to a lot of big names and gotten fairly decent results.
Pixar interviewed me over the phone and is flying me out next week to meet their RenderMan team in Seattle. Although I don't have a ton of experience in film production, I'm hoping any job they offer will lead to a full-time development position on one of their rendering products. They didn't ask any technical interview questions beyond what I've worked on, so I expect I'll get some of that face to face.
Intel's Visual Computing Group is flying me out to Hillsboro, OR next week for an interview with their driver development team, which is building the DirectX driver for the Larrabee GPU. I was under the impression that most of this work had already been done, considering they provided performance benchmarks for games like Gears of War in their paper. When I asked, the interviewer said he couldn't specify how complete the implementation was, but he did say that with Microsoft's active development of the DirectX API, there will always be new additions to the driver. I've heard working at Intel can be a little intense and that they push their employees relatively hard; a couple of people who have worked there say that's actually something they like about Intel. I don't know DirectX very well, but from what I understand, it's a lot easier to write a driver for than OpenGL for several reasons. Hrmm, I guess that means I won't be working on a Linux/FreeBSD machine...
I also had a phone interview with a couple of developers from Google. I applied to Google without a clear idea of what I'd be doing there, just assuming it would be a great place to work. The first developer to call me was Sean Egan, one of the project leads for Pidgin (formerly GAIM) and the project head for Google Talk. They started out talking a little about my background, but they also had several casual technical questions about threading and networking, as well as some formal coding over a Google Docs document we shared. I'm still not used to coding on the fly, especially over the phone, but I think I did alright. I always get nervous when, at the end, they ask, "Can this be more efficient?" Of course it can, but the first thing I say isn't the best thing I can do. Can a writer talk out a scene with the same finesse as the prose he or she publishes? They asked questions about things like race conditions with threads, binary tree searches, etc. Like I said, I still don't have a clear idea of what I'd do at Google, given that their job descriptions require knowledge of everything, but in a way that's pretty cool.
In general I'm fairly content with the interviews I've been able to get. I've only been looking for a couple of weeks and already have a few flights scheduled for on-site interviews. We'll see where I finally end up.
Sunday, September 28, 2008
State of Rasterization
I was down at Siggraph when the new OpenGL 3.0 specification was released. I was excited for the OpenGL BoF later that week to see all the promised updates to the specification. On the Monday of that same week, I read a slashdot article that broke the news to me that the originally planned enhancements, like the new object model, had been butchered. I was a little shocked, but I sincerely hoped it was like many slashdot articles, where the commenters simply read the summary and didn't bother reading the article. I had some naive hope that the webmaster had accidentally posted the old specification under the new file name, but after scanning the entire specification, I could see that wasn't the case. Many people were angry that they had been left in the dark so long about the dropped features, and that Khronos would drop the ball just as the ARB had. Although I think Khronos screwed up, I still believe a much-needed object model is on the road map, and through these frequent "updates" I can see a more state-friendly API taking shape. nVidia has already released a new extension that lets you build a texture without binding it. Even though it's just an extension, I hope to eventually see many other functions go down that road.
nvidia has done a good job supporting OpenGL in various ways, including graphics demos, presentations, APIs, etc. Although I don't see much use for some of the new additions to the pipeline--nvidia itself has told developers NOT to use the geometry shader--I do like some of the things presented in this new API.
nvidia's slides on OpenGL features on the GeForce 8 architecture. I'm linking them straight from the nvidia page until they tell me I can't
One of the cooler additions to the rendering pipeline, IMHO, is the transform feedback buffer. Basically, when you send vertex data down the pipeline and it gets transformed by various matrices in the vertex shader, you don't really have any idea what those vertices actually get transformed to. You could do some GPGPU thing where you print the vertex information to the screen and read it back, but that's tedious to implement and debug. A better solution is to just store those transformed values through the rendering API. One of the pesky things about shadow volumes is that you have to compute the edge silhouettes. This was addressed in part in "Shadow Volumes on Programmable Graphics Hardware" by Brabec et al., and it can now be implemented trivially using the new transform feedback buffer. Other work has been done using silhouettes for various effects, so I can see Brabec's implementation applied to a whole slew of visual effects that require light or camera silhouette information. In their presentation, they use the geometry shader for silhouette detection, which I think is a mistake. Much like Barnes and Noble dedicating a section to Vampire Romance might be considered a mistake. Maybe that's a stretch.
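Here's a minimal sketch of what capturing post-transform vertices looks like with the GL 3.0 entry points. I'm assuming a program object prog whose vertex shader writes an output named worldPos, a buffer object tfbo already allocated to hold the results, and the usual vertexCount/bufferSize/cpuCopy bookkeeping--all placeholder names for this example, not code from any of the presentations.

    /* Declare which vertex-shader output to capture, then relink.
     * "worldPos" is a hypothetical out variable in the vertex shader. */
    const GLchar *varyings[] = { "worldPos" };
    glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(prog);

    /* Bind the destination buffer and capture the transformed vertices.
     * Rasterization can be skipped entirely if we only want the data. */
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo);
    glEnable(GL_RASTERIZER_DISCARD);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    /* Pull the post-transform positions back to the CPU, or leave them on
     * the GPU and feed them to a silhouette-extraction pass. */
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, bufferSize, cpuCopy);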
Another exciting extension is EXT_direct_state_access. Although this isn't the "object model" many people were expecting, I believe this kind of extension will gradually lead to the state-friendly programming model we all wanted. How many times have we not known the state of OpenGL coming into a given function, where changing the state could break the assumptions of a subsequent function? Once again, kudos to nvidia for great support and presentations.
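As a quick before-and-after (assuming tex is an existing texture object), the classic API edits whatever texture happens to be bound, while the direct-state-access entry point names the object explicitly and leaves the binding alone:

    /* Classic bind-to-edit: this silently changes the texture binding, so
     * any later code assuming a different bound texture is now broken. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* Direct state access: the object is named explicitly, so the call
     * affects tex no matter what is currently bound, with no hidden
     * state change left behind. */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);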
Monday, September 15, 2008
Rasterization versus Ray Tracing
"Real-Time Ray Tracing: Holy Grail or Fool's Errand?" is an older article, but I think it's short, simple, and to the point, which you don't see every day. The author Dean Calver gives a very brief comparison of rasterization and ray tracing then summarizes some of the problems with ray tracing--specifically aliasing, dynamic scenes, and global illumination.
I've written about the problems with dynamic scenes in the past. Although many people prefer ray tracing for its simplicity, it requires acceleration structures to run quickly. In his presentation, David Luebke made an excellent statement that I think describes the rasterization-versus-ray-tracing debate rather well.
"Rasterization is fast, but needs cleverness to support complex visual effects. Ray tracing supports complex visual effects, but needs cleverness to be fast."
Another problem that I think isn't discussed much is texture locality. Although I don't consider it as big a problem for ray tracing as dynamic scenes, it's a performance penalty. The problem stems from ray tracing being an image-order algorithm. If you shoot a ray into the scene and it hits a textured object, you need to fetch that texture into the cache. If the next pixel uses a different texture, a new texture block must be fetched. This is in contrast to an object-order technique, where adjacent fragments on the object usually use adjacent texels. A worst-case scenario for a ray-traced scene might be tracing textured grass against a textured wall: every couple of pixels, you'd need to refetch another texture block. This is a generalization, since caches are big and the entire texture doesn't need to be fetched, but it's still an issue, and today's textures are getting rather large. With rasterization, the wall would be drawn with one texture, then the grass with the other, possibly creating no duplicate cache misses. If you read the original REYES paper by Cook et al., you'll see they spend a good chunk of the paper talking about texture locality, because it can severely impact performance.
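If you want to see the access-order difference in the smallest possible form, here's a toy sketch with a deliberately naive cache model--only one texture "hot" at a time--which obviously isn't how real texture caches work, but it shows why image-order sampling thrashes where object-order sampling doesn't:

    #include <stdio.h>

    /* 0 = wall texture, 1 = grass texture. A "miss" is counted whenever the
     * texture being sampled differs from the last one touched -- a crude
     * model, since real caches hold blocks of several textures at once. */
    static int count_switches(const int *accesses, int n) {
        int misses = 0, last = -1;
        for (int i = 0; i < n; ++i) {
            if (accesses[i] != last)
                ++misses;
            last = accesses[i];
        }
        return misses;
    }

    int main(void) {
        /* Image order (ray tracing): grass and wall alternate along the scanline. */
        int image_order[8]  = { 1, 0, 1, 0, 1, 0, 1, 0 };
        /* Object order (rasterization): all wall fragments, then all grass. */
        int object_order[8] = { 0, 0, 0, 0, 1, 1, 1, 1 };

        printf("image-order texture switches:  %d\n", count_switches(image_order, 8));  /* 8 */
        printf("object-order texture switches: %d\n", count_switches(object_order, 8)); /* 2 */
        return 0;
    }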
I think nvidia isn't concerned with whether ray tracing takes over the rendering pipeline, as long as it's done on the GPU. The goal of Luebke and Parker's presentation was to show what nvidia hardware could do and let developers decide what was best for them. More options, kind of thing. Although I didn't care too much for the ray tracing portion of the demo, I did think they had compelling screenshots of what rasterization is currently capable of rendering.
Both presentations conclude that a hybrid algorithm is the future. Calver likes the idea of rasterizing the visible portion of the screen and using that information to ray trace secondary rays. Frankly, I don't think this idea is very promising. Rasterization is great for the visible rendering of the screen, but if you're doing secondary rays, you'll need the acceleration structures you didn't want to build in the first place. If that weren't a problem, you could just ray trace everything: ray tracing is very fast for primary rays if the acceleration structures are already built, and you have tons of coherency assumptions. David Luebke's idea seems more plausible, where scene assets are sorted into ray-traced and rasterized passes. You can rasterize animated objects like people without having to build an acceleration structure, and ray trace other objects that require effects like reflections. But once again, what are you intersecting your secondary rays with? In my diagram below, I have a sample scene with a ray-traced reflective sphere and a rasterized, animated person. It's crude, but I'm trying to give a simple example.
The ray-traced sphere needs an updated acceleration structure to perform accurate reflections on the animated, rasterized figure
You rasterize the person just like you would in current games. Then you ray trace the sphere. A ray hits the sphere, and you want to calculate its intersection with the person. But the person doesn't have an acceleration structure, because it's being animated. If we had the acceleration structure built, we would have just ray traced the person in the first place. My point is that I don't see how ray tracing secondary rays will ever play nice with rasterized, dynamic geometry.
Jon Olick's presentation used ray casting for static geometry, which I think provides more immediate benefits. Most games already have special rendering techniques for static geometry, like lightmaps for the floors and walls, so having special rendering "cheats" for static geometry isn't new. Although this doesn't provide higher-order effects the way ray tracing does, it should significantly increase the geometric complexity of the static portions of the scene. Furthermore, the technique is relatively easy to incorporate into existing pipelines at almost no cost.
Although I still don't like the current marketing push toward ray tracing, including Intel's, I am mildly excited about Intel's Larrabee. As long as it rasterizes competitively fast, it could allow developers to add new features and extensions to the pipeline faster than current GPU hardware can.
Oh, and though I don't have anything to say about the other talks, kudos to nvidia for always doing a good job posting their presentations. I did see Sarah Tariq's hair presentation and think we're getting close to seeing characters with animated, beautiful, frothy, tangly, wavy, shiny hair. Check it out.
Wednesday, September 03, 2008
Siggraph 2008
Well, I went to Siggraph for the first time. It was an interesting experience. The convention consisted mostly of presentations/lectures, an exhibition hall of company booths, and the "Art Festival".
The presentations were fairly interesting. Nothing earth-shattering. I saw a few random posters that showed some novel work. Blue Sky Studios gave an amazing presentation on the art direction behind Horton Hears a Who. I didn't see the movie when it came out; I guess Cat in the Hat left a bad taste in my mouth for Seuss movie conversions. However, this movie looked fantastic. The speaker discussed several artistic approaches to modeling a Seuss-like world: asymmetric trees; heavy, blobby, saggy objects; and color palettes for different scenes. From the clips I saw, the movie looked amazing and was pretty hilarious.
Jon Olick from id gave a really good presentation. He has done a lot of level-of-detail work on games like Ratchet and Clank: Tools of Destruction. Although I've never played the game, many reviews I've read have commented on the amazing draw distance Insomniac achieved. I believe some of the implementation involved geomorphing, which he discussed. Jon finished the talk by discussing the sparse voxel octree for static geometry that John Carmack has proposed. I must stress that this proposal is only for STATIC geometry, as the octree isn't rebuilt dynamically. This question was raised at the presentation, and Jon quickly stated he wasn't attempting to solve the dynamic octree construction that everyone has been working on for years. Although it only works for static geometry, it should provide a big leap in geometry resolution at storage comparable to traditional triangle meshes--these sparse octrees can supposedly be streamed in based on screen footprint, allowing extremely detailed assets. Furthermore, it should combine easily with current pipelines that rasterize dynamic geometry. And even though ray tracing nuts (not researchers--just enthusiasts) assumed this was a huge step toward full ray tracing, the method only works for ray casting, so one still needs things like shadow maps for global illumination effects (like shadows). Overall I thought he gave the best presentation--mind you, he actually gave a demo of the proposal, which makes it much more convincing.
Intel gave several presentations on the Larrabee architecture and how it will not suck. They repeatedly stated that they weren't showing any benchmarks since there won't be a product for another couple of years, so it's hard for me to get really excited about it. Larry Seiler, the lead author on the Larrabee architecture paper, gave the first presentation, and the guy seemed very cool. I stopped him in the hallway to ask him a few questions; he set his bag down and started chatting away happily. Another time I saw him sitting in a corner with a couple of young geeks, laptops flipped open, asking him questions. For a mid-50s guy from Intel to just sit on the ground and field questions seemed pretty cool to me. Although I don't have a lot of confidence that Larrabee will be a competitive GPU, I think they picked a great spokesman. He said he was chosen specifically for his past experience with GPU architectures.
nVidia gave its ray tracing demo of a car flying through a city. I saw it first hand, sitting in the front row while my old professor Steve Parker explained the technology. Although I think he's a brilliant programmer, I wasn't very impressed with the demo as a whole. The scene was a cityscape with a car driving through it. I've expressed my concerns with ray tracing in previous posts, like the problems with texture coherency. The demo had very bland textures, and for something running on four nvidia Quadro Plexes, it really wasn't very impressive visually. They even mentioned a "canyon" shader, which gave the buildings a more realistic look. From what was explained, the shader was a linear ramp from dark to light, much as real canyons are darker at the bottom. When a linear luminance ramp is a topic of interest in 2008, I think the demo has failed. They zoomed in on the headlamps to show seven bounces of ray tracing, which I guess is really important, but to me it just looked noisy. I know it got a lot of people excited, but with the hardware they were running it on, I'd rather have seen a rasterized scene running at 128x anti-aliasing with per-fragment environment mapping. The demo was basically glorified ray tracing with accurate reflections. If they wanted accurate reflections, why didn't they just implement Jens Krüger's GPU Rendering of Secondary Effects? Sure, the 3rd or 4th bounces have less accuracy, but if you saw the demo, the tertiary rays really didn't contribute much to the image quality. Like I said, I'd rather have seen a rasterized demo running on that beastly hardware. The talk itself wasn't too bad. David Luebke talked about effects that developers are eager to use and how nVidia's working to provide them--things like better soft shadows, depth of field, caustics, etc.
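For what it's worth, my reading of that "canyon" shading amounts to something like the following--a hypothetical reconstruction on my part, not their actual shader:

    /* Hypothetical reconstruction of the described "canyon" look: a linear
     * luminance ramp, dark near street level and brighter toward the roofline.
     * The 0.2/0.8 constants are made up; scale the building's surface color
     * by the returned factor. */
    float canyon_ramp(float height, float building_height) {
        float t = height / building_height;   /* 0.0 at the street, 1.0 at the roof */
        return 0.2f + 0.8f * t;
    }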
The exhibition hall consisted of animation/production schools, production companies, development booths from Autodesk and ZBrush, a bunch of motion capture companies, etc. It was fairly exciting at first but lost its appeal after a day. The Art Festival was also pretty fun.
Not sure if I'll attend next year. I felt like most of the students there only wanted to hit DreamWorks and Pixar for job interviews. One animator even asked why a CS person was attending Siggraph. I was a little shocked, but stayed polite. It's in New Orleans next year, which just got hit by Gustav. I remember shoveling mud out of people's second-story houses. Why can't they get rid of the levees and turn New Orleans into the Venice of the US? The city is below sea level, and building levees and watching them fail again and again is getting old. The current system isn't working. Time for a change.