Well, I went to Siggraph for the first time. It was an interesting experience. The convention consisted mostly of presentations/lectures, an exhibition hall of company booths, and the "Art Festival".
The presentations were fairly interesting. Nothing earth-shattering. I saw a few random posters that showed some novel work. Blue Sky Studios gave an amazing presentation on the art direction behind Horton Hears a Who. I didn't see the movie when it came out; I guess Cat in the Hat left a bad taste in my mouth for Seuss-movie conversions. However, this movie looked fantastic. The speaker discussed several artistic approaches to modeling a Seuss-like world: asymmetric trees; heavy, blobby, saggy objects; and color palettes for different scenes. From the clips I saw, the movie looked amazing and was pretty hilarious.
Jon Olick from id gave a really good presentation. He has done a lot of level-of-detail work on games like Ratchet and Clank: Tools of Destruction. Although I've never played the game, many reviews I've read have commented on the amazing draw distance Insomniac achieved. I believe some of the implementations involved geomorphing, which he discussed. Jon finished the talk discussing the sparse voxel octree for static geometry that John Carmack has proposed. I must stress this proposal is only for STATIC geometry, as the octree isn't rebuilt dynamically. This question was raised at the presentation, and Jon quickly stated he wasn't attempting to solve the dynamic octree construction problem that people have been working on for years. Although it only works for static geometry, it should provide a big leap in geometry resolution at storage comparable to traditional triangular meshes--these sparse octrees can supposedly be streamed in according to their screen footprint, allowing extremely detailed assets. Furthermore, it should combine easily with current pipelines that rasterize dynamic geometry. And even though ray tracing nuts (not researchers--just enthusiasts) claimed this was a huge step toward full ray tracing, the method only handles ray casting, so one still needs things like shadow maps for effects such as shadows. Overall I thought he gave the best presentation--mind you, he actually gave a demo of the proposal, which makes it much more convincing.
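To make the screen-footprint idea concrete, here's a toy Python sketch of how I understood it: traverse the octree only until a node projects to about a pixel, so the nodes you touch (and stream in) scale with screen coverage rather than asset detail. Everything here--names, structure, the projection math--is my own illustration, not Olick's or id's actual code.

```python
# Toy sparse-voxel-octree traversal illustrating LOD by screen footprint.
# A node is refined only while its projected size exceeds one pixel.

class OctreeNode:
    def __init__(self, size, children=None):
        self.size = size                  # edge length of the cube, world units
        self.children = children or []    # empty list => leaf voxel

def visible_voxels(node, distance, pixels_per_unit, out=None):
    """Collect the coarsest nodes whose projected size is <= 1 pixel."""
    if out is None:
        out = []
    projected = node.size / distance * pixels_per_unit
    if projected <= 1.0 or not node.children:
        out.append(node)                  # fine enough (or a leaf): draw as-is
    else:
        for child in node.children:
            visible_voxels(child, distance, pixels_per_unit, out)
    return out

# A two-level tree: one 8-unit cube split into eight 4-unit children.
root = OctreeNode(8.0, [OctreeNode(4.0) for _ in range(8)])

# Far away, the root alone covers less than a pixel, so 1 node suffices;
# up close, all eight children must be streamed in.
far = visible_voxels(root, distance=100.0, pixels_per_unit=10.0)
near = visible_voxels(root, distance=10.0, pixels_per_unit=10.0)
print(len(far), len(near))
```

The takeaway is that the work is bounded by how much screen the object covers, not by how detailed the stored asset is.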
Intel gave several presentations on the Larrabee architecture and how it will not suck. They repeatedly stated that they weren't showing any benchmarks since there's no product for another couple of years, so it's hard for me to get really excited about it. Larry Seiler gave the first presentation even though he's not actually on the Larrabee architecture paper. Odd as that seems, the guy was very cool. I stopped him in the hallway to ask him a few questions; he set his bag down and started chatting away happily. Another time I saw him sitting in a corner with a couple of young geeks, laptops flipped open, asking him questions. For a mid-50s guy from Intel to just sit on the ground and field questions seemed pretty cool to me. Although I don't have a lot of confidence that Larrabee will be a competitive GPU, I thought they picked a great spokesman. He said he was chosen specifically for his past experience with GPU architectures.
nVidia gave its ray tracing demo of a car flying through a city. I saw it first hand, sitting in the front row while my old professor Steve Parker explained the technology. Although I think he's a brilliant programmer, I wasn't very impressed with the demo as a whole. The scene was a cityscape with a car driving through it. I've expressed my concerns with ray tracing in previous posts, like problems with texture coherency. The demo had very bland textures, and for something running on four nVidia Quadro Plex units, it really wasn't very impressive visually. They even mentioned a "canyon" shader, which gave the buildings a more realistic look. From what he explained, the shader was a linear ramp from dark to light, since canyons are typically darker at the bottom. When a linear luminance ramp is a topic of interest in 2008, I think the demo has failed. They zoomed in on the headlamps to show seven bounces of ray tracing, which I guess is really important, but to me it just looked very noisy. I know it got a lot of people excited, but with the hardware they were running it on, I'd rather have seen a rasterized scene running at 128x anti-aliasing with per-fragment environment mapping. The demo was basically glorified ray tracing with accurate reflections. If they want accurate reflections, why didn't they just implement Jens Krüger's GPU Rendering of Secondary Effects? Sure, the third or fourth bounces have less accuracy, but if you saw the demo, the tertiary rays really didn't contribute much to the image quality. Like I said, I'd rather have seen a rasterized demo running on that beastly hardware. The talk itself wasn't too bad. David Luebke talked about effects that developers are eager to use and how nVidia's working to provide them--things like better soft shadows, depth of field, caustics, etc.
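For what it's worth, here's the "canyon" shader as I understood it from the talk: a plain linear luminance ramp over building height. This Python sketch is my own guess at the math (including the parameter values), not nVidia's actual shader.

```python
# Sketch of the "canyon" idea as described in the talk: darken building
# faces linearly toward street level, the way canyon floors get less
# light. The floor_darkness value is my own guess, purely illustrative.

def canyon_shade(base_color, height, max_height, floor_darkness=0.3):
    """Scale an RGB color by a ramp from floor_darkness at street level
    up to full brightness at the rooftop."""
    t = max(0.0, min(1.0, height / max_height))     # normalized height in [0, 1]
    brightness = floor_darkness + (1.0 - floor_darkness) * t
    return tuple(c * brightness for c in base_color)

gray = (0.8, 0.8, 0.8)
print(canyon_shade(gray, 0.0, 50.0))    # street level: darkest
print(canyon_shade(gray, 50.0, 50.0))   # rooftop: full brightness
```

That's the whole trick--which is my point: a one-line brightness ramp shouldn't be a highlight of a flagship 2008 demo.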
The exhibition hall comprised animation/production schools, production companies, development booths from Autodesk and ZBrush, a bunch of motion capture companies, etc. It was fairly exciting at first but lost its appeal after a day. The Art Festival was also pretty fun.
Not sure if I'll attend next year. I felt like most of the students there only wanted to hit Dreamworks and Pixar for job interviews. One animator even asked why a CS person was attending Siggraph. I was a little shocked but stayed polite. It's in New Orleans next year, which just got hit by Gustav. I remember shoveling mud out of people's second-story houses. Why can't they get rid of the levees and turn New Orleans into the Venice of the US? The city is below sea level, and building levees only to watch them fail again and again is getting old. The current system isn't working; time for a change.