Sunday, December 28, 2008

PC to PS3 (Little Big Planet, Orange Box, etc.)

Accepting console gaming
I have run Linux for several years now. It is a great operating system--yeah, I know it's really a kernel. I can always brag to my Windows and Mac friends about the ease of pulling down development libraries, packages, and files within seconds without having to scour the web.

I am a gamer, however, and have always wanted to play games without any hassle; to own a game and have everything just work. WINE has come a long way and has run World of Warcraft for me just fine, but with many other games I'm forced to tweak settings, use sub-par graphics, or, worst case, boot into Windows. To solve my woe, I decided to get a console. Hence, the PS3 sitting on my cabinet. I'm also looking forward to buying used games at a discount, an option pretty much unavailable to PC gamers.

Configuring the PS3
Having not owned a console since the Sega Genesis--I had Resident Evil 4 and a Gamecube for a few months--I've fallen a little behind on the console world. The PS3 video snapped into my monitor via HDMI and I plugged the component audio cables into an old surround sound receiver. On starting the system, I passed through a brief configuration menu. One screen asked, "Do you want video and audio to go through HDMI?". Well, technically no, I just wanted HDMI for the video. So I answered no and my screen went black. I restarted the PS3, answered yes to get video, and set up audio afterwards, but I thought it was a funny question to ask if one of the answers makes the user's screen go black.

The PS3 menu is pretty sleek. It's a long horizontal bar of menus, where each focused menu will display vertical submenus of options. It works well and although I got used to it quickly, I don't see it as replacing a typical vertical menu.

Pleasant graphics
Both the PS3 and XBox 360 have drawn plenty of consumer complaints about the resolution many games and videos run at. For example, a game might render at 720p and be upscaled to 1080p on the actual display. Although this is noticeable, all the games and videos that do it still look decent.

I had a chance to play a few demos and realized the incredible difference in graphical quality between video games. The aliasing in Mirror's Edge, for example, is horrible. The game looks good enough and has cool-colored art direction, but I can't help wondering why they couldn't at least do some post-process anti-aliasing. I've heard Microsoft actually has a stricter policy on anti-aliasing, but I can't find a source for that. I like having such quick access to all the different playable demos; it has been a lot of fun to download them and see if they're worthy of my wish list.

Orange Box
I enjoyed Half-Life 2 and always wanted to see how the first and second episodes turned out. The graphics are starting to show their age, and the textures appear a bit blander than what I'm used to on my PC. I'm still surprised how bad the loading times are in this game. Dying can take what feels like a minute to reload a level, even if I died a few feet from the last load. What is the system paging in and out? Portal is a refreshing change. It satisfies that little first-person-shooter-puzzle-genre craving I never knew I had.

Although I haven't gotten into Team Fortress 2 to its full degree, I have seen it in action and have been very impressed with the graphics this game puts out. Some people were disappointed that Valve didn't attempt a realistic rendering system like everyone else in the industry. Their loss, I guess, because the game looks great. Valve has a presentation, Illustrative Rendering in Team Fortress 2, that really showcases the art direction behind such a great looking game. Here's the high-quality video. If you like art direction or new shader designs, I highly recommend reading their slides and technical article.


Ratchet and Clank: Tools of Destruction
I played parts of the full version at a friend's house and downloaded the demo on my home machine. This is one of the best looking games I have ever seen. The animations are smooth, the palettes are nicely colored, and everything is supremely sharp. The draw distances are incredible. Insomniac does Playstation exclusives and has always done a great job pushing a system. The game also handles really well, and it's just fun to whack things with a wrench. I have included a couple screenshots stolen from Insomniac's site. The game looks even better in motion.




Folklore
This game got polarized reviews, with reviewers torn between its poor gameplay and great art direction. The game features about three art styles, sometimes jumping between 2D and 3D cutscenes, and looks great. It's a unique genre, but the first "boss battle" had me constantly fighting with the camera just to keep the enemy on my screen. I stopped after completing that section, fearing the rest of the game would include the same tedious and frustrating camera coordination.

Playstation Home
Playstation Home got terrible reviews days before I received my PS3. I wasn't excited to enter this online world anyway, so those postings didn't bother me. I was a little curious, though, and gave it a quick pass. I can add to the other statements out there: there is no reason why anyone should use Playstation Home.

After a boring session of creating a customizable character, I ran around the courtyard waiting for other characters' skins to load on my machine while an empty shell of a person modeled their movements. Moving into new buildings required additional downloads. While I realize this happens only once, it is still annoying for the first visit to this online world.

Entering the theater, I sat down in an available chair and waited for the video to play. Many people were standing up in front of the screen talking to each other, while others sitting in the audience with me were telling those in front to stop being douchebags and sit down. I realized then that Playstation Home is the 3D visualization of all the people on the internet I try to avoid, put in the same room as me.

There is nothing redeeming about it, and although it's in beta, I can see no reason for anyone to use it. I believe a product should enter beta to work out bugs and get customers excited, but I suspect most people like me will take one glance at it and be extremely reluctant to return. Home might one day be a really excellent product, but it's going to take some extreme makeovers to make it such.

Little Big Planet
I have heard complaints about this game lacking gameplay and being more of a tech demo. I must say that this game is amazing. I have always enjoyed platformers, and this game is a great platformer by itself. It features a zany story, where players can pass through levels in co-op (either offline or online) with up to three others. Most levels can be completed alone, but it's more fun to play with friends.

The graphics look sharp and feature crisp textures, HDR effects like bloom, and decent camera work. Every once in a while the camera will zoom too far in or out, but it's tolerable most of the time. Smoke and fire effects aren't perfect, but they're pleasant to watch.

The gameplay implements a simple jump-and-grab mechanic, so the controls are easy to master. Like most platformers, the characters are required to do a lot of jumping, but the grab mechanic lets the player grab certain objects if they have a "grab-able" material like sponge instead of stone. This is used for hanging, swinging, pulling boxes and levers, etc. There's also a depth or layer component where players can move closer or farther from the camera. This is usually handled automatically, such that a player jumping from a depth of, say, 3 will naturally move forward to land on a platform of depth 2. The mechanic isn't perfect and sometimes causes problems (like totally unfair deaths) but is forgivable since it generally works well.

The gameplay is somewhat forgiving. There are frequent save points littered throughout the game that allow players to restart there after dying--this only works a few times before the save point runs out of lives and the players must restart the level. Also, objects that can kill the player by squishing or burning him/her usually give the benefit of the doubt by squeezing the player through the crack or only lightly singeing him/her.

The real innovation of Little Big Planet is the excellent use of physics in the game. Each object has its own material with corresponding properties like friction coefficient, mass, texture, etc. It is fun to see your avatar catapulted over a wall or balancing on a teeter-totter-style platform.

The best thing about LBP is the level design. The game has been in production for quite some time, and the developers have mastered the engine, creating many beautiful and crazy levels. I have seen almost every non-combative platforming mechanic put into these levels, as well as new components that add challenges using physics, require teamwork with cooperative players, or just pose some creative new puzzles. Some levels require two, three, or even four players to pass certain puzzles. These are usually optional off-the-route treks to obtain additional collectibles.

It's hard to find good cooperative games these days, so it's gratifying to see one that has done such a great job nailing so many features. I haven't gotten to the level editor yet, but it's on the to-do list. I can't imagine a video-game-playing PS3 owner not purchasing this game unless they adamantly hate platformers.



Conclusion
My first few days using a console have been very satisfying. That feeling of running a game I own and not having to fight the system to get it to play is most pleasant, and I urge other Linux users to consider having a separate machine like the PS3 to handle most of their gaming needs. It's also a great Blu-ray player.

Thursday, December 11, 2008

Ray Tracing Basics

Dr. Steve Parker, whose academic team has done much to advance the field of ray tracing--specifically interactive ray tracing--has allowed me to host his lecture slides on my blog. Dr. Parker left the University of Utah earlier this year with Dr. Peter Shirley and several students to join nVidia in the hopes of producing high-quality, interactive ray tracing for consumers. Some have credited Dr. Parker with writing the first "interactive" ray tracer, which he implemented years ago on some SGI computers.

These slides are taken from his CS 6620 - Introduction to Computer Graphics II course, which was basically all about writing fast and efficient ray tracers. The slides contain a lot of C++, but start with basic vector math. I compressed the slides using Multivalent, so they're quite a bit smaller than the originals.

The first lesson is a typical introduction to the course, with pictures of ray-traced images and reasons why ray tracing is useful. Those wishing to skip these details can probably jump into the second set of slides. The slides also include student project images, including mine--the chess set. Most of these competition images lost a lot of quality for some reason--even uncompressed. You can see my image in an earlier post at its original size and quality.

Also, if you use these slides, please keep the reference to Dr. Parker and his course.
Lesson 01 - Introduction to Ray Tracing
Lesson 02 - Geometry for Graphics
Lesson 03 - The Ray Tracing Algorithm
Lesson 04 - Ray Tracing Software
Lesson 05 - Ray Tracing Software
Lesson 06 - Ray-Object Intersections
Lesson 07 - Triangles and Materials
Lesson 08 - Materials
Lesson 09 - Materials II
Lesson 10 - Materials II
Lesson 11 - Materials
Lesson 12 - Heightfields
Lesson 13 - Sampling I
Lesson 14 - Sampling II
Lesson 15 - Color Theory
Lesson 16 - Texturing I
Lesson 17 - Texturing II
Lesson 18 - Displacement, Bump, and Volumes
Lesson 19 - Acceleration Structures
Lesson 20 - Acceleration Structures II
Lesson 21 - Acceleration Structures III
Lesson 22 - Acceleration Structures 4 Instances
Lesson 23 - Monte Carlo I
Lesson 24 - Monte Carlo II
Lesson 25 - Monte Carlo III

Language Shootout

I program quite a bit in my free time on various projects. I try to make sure I'm using the right tool for the job (quote the hammer and screw analogy here). Despite the range of programming tasks I've undertaken, I have only a handful of languages I use frequently: C/C++, Java, Python, and PHP. I like all these languages, but I try to make sure I choose the right one for the task. Sometimes I have to use a specific one because I absolutely need a 3rd-party library, but otherwise I have a few flexible ideas of when to use a given language.

C/C++ were the first two languages I learned, and they can do quite a bit. I usually choose them when I absolutely need the fastest option available and I have the time to do the optimizations necessary to make the code faster than Java--like SIMD operations.
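As a taste of what I mean by SIMD, here's a minimal SSE sketch in C++ (assuming SSE support, 16-byte-aligned arrays, and a length that's a multiple of four--none of which Java lets you express):

    #include <xmmintrin.h>  // SSE intrinsics

    // Add two float arrays four elements at a time.
    // Assumes 16-byte-aligned pointers and n divisible by 4.
    void add_arrays(const float* a, const float* b, float* out, int n)
    {
        for (int i = 0; i < n; i += 4)
        {
            __m128 va = _mm_load_ps(a + i);              // load 4 floats
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(out + i, _mm_add_ps(va, vb));   // add and store 4 at once
        }
    }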

I find Java to be a great prototyping language. By prototyping, I do not mean a prototype-based language, but a language in which I can quickly develop a prototype for an idea I am playing around with. Java comes with a good GUI library, is faster than scripting languages, and has a good range of libraries to work with. I always hated doing something in C++ and realizing I needed to go hunt down an image library just to load a PNG file.

Despite what some people say, it is fast and has even beaten C/C++ on occasion--mostly where there are a lot of allocations and deallocations and the program isn't starting up and closing frequently. The Computer Language Benchmarks Game provides performance comparisons of several tests across many languages, and Java is usually only ~10% slower. It is faster in some cases, such as their binary-trees test. I mentioned Chris Kulla's Sunflow ray tracer earlier, which is written in Java. He states it would be about 20% faster if ported to C++, but I think his implementation is currently faster than most people's ray tracers that provide similar features.
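To give a feel for what that binary-trees test stresses, here's a rough C++ sketch of the allocation pattern (not the official benchmark code, just the general shape): constant building and tearing down of trees, which is exactly the churn a generational garbage collector handles well.

    #include <cstdio>

    // Rough shape of the allocation-heavy binary-trees benchmark:
    // build a full tree, walk it, throw it away, repeat.
    struct Node
    {
        Node* left;
        Node* right;
        Node(Node* l, Node* r) : left(l), right(r) {}
        ~Node() { delete left; delete right; }
    };

    Node* build(int depth)
    {
        if (depth == 0) return new Node(NULL, NULL);
        return new Node(build(depth - 1), build(depth - 1));
    }

    int count(const Node* n)
    {
        if (!n->left) return 1;  // leaf
        return 1 + count(n->left) + count(n->right);
    }

    int main()
    {
        long total = 0;
        for (int i = 0; i < 1000; ++i)  // constant alloc/free churn
        {
            Node* tree = build(12);
            total += count(tree);
            delete tree;  // frees the whole tree recursively
        }
        std::printf("%ld\n", total);
        return 0;
    }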

I don't use Python much, but it's usually great for writing little scripts. I prefer this to Java as it provides faster programming for simple tasks. Although it is quite a bit slower than Java, it has a much faster init time so if I'm running the script frequently, it beats loading Java's large VM every time. Maybe that's a moot point.

PHP is only useful for writing web applications. Although this heavily limits its usefulness, nothing comes close to PHP's simplicity, ability to integrate with HTML, and huge list of useful functions. I always use PHP for anything related to dynamic web content.

I'm sure many will disagree with my criteria for languages. Some people insist Ada is the most useful language (like the creator of Ada). To each his own, but these languages have been good to me.

Reply to Comment #1
I received a comment on "The Computer Language Benchmarks Game" link, where I mentioned the test in which Java beat C/C++. The commenter told me to "note" that the Java program used a large heap, that it only beat C++ on one architecture, and that other, faster C++ implementations were tested.

Language benchmarks will always lead to arguments about implementations, architectures, etc. I do not think Java is always faster than C++. I do not think Java is as fast as C++. But I do think they can be close. My point was that Java beat C++ in one benchmark, so flatly saying "Java is slower" is a little unfair.

As I stated earlier, you can always make C++ faster than Java if you have the time and if speed is a concern. Chris Kulla's Sunflow ray tracer is fast--not fast for a Java ray tracer, but fast for a ray tracer. I'm sure it could be faster if he switched to C/C++, but he has said Java allowed faster development time at the cost of maybe 20% performance. I'd happily take a performance hit like that in exchange for more time to build a more feature-rich application.

Wednesday, December 10, 2008

Dreamworks Interview

After being on the job hunt for a while and seeing my carefully chosen list of possible employers start to dwindle, I started sending out tens of unsolicited resumes hoping someone would bite. One such company was Dreamworks Animation.

After putting together an impressive resume and cover letter pitching me as their ideal candidate, I sent it off. Seconds later, I received an automated response: "The following addresses had permanent fatal errors". That sounded really bad. Not only were the errors fatal, but they were permanent and would remain so for the rest of time. I sent another e-mail to the webmaster asking if my application had in fact been received. That triggered yet another automated response claiming the mail server itself was down. I took it as a sign that working at this company was not meant to be. I received a phone call a few days later.

The conversation was typical for a first call--they verified my information, my interests, and what job they were trying to fill. Even though I applied for a programming position, they wanted to interview me for technical director. Since it was an interview, I accepted.

The Dreamworks interview process was actually quite simple for me. They scheduled a video conference and sent me down to a local FedEx Kinkos. In a large conference room filled only by myself and a TV/camera stand, I sat at the end of the table while I was grilled by Dreamworks employees.

The employees interviewed me in pairs or teams. The first team was "Team Oddball", as they called it, which asked trivia-like questions. Unlike my interviewers at Intel, they actually remembered the answers to the trivia questions, which made the quiz a much more pleasant experience. One question, for example, was "If you have a three-gallon and a five-gallon bucket with unlimited water, how do you measure out four gallons?". (Fill the five, pour it into the three leaving two, empty the three, pour the two in, refill the five, and top off the three--leaving four in the five-gallon bucket.) The other questions ranged in difficulty. Most of them were pretty fun and let me think aloud and explain how I would solve them.

The next pair asked more technical questions related to graphics, but it was a pretty simple interview. Subjects ranged from 2D intersections to level-of-detail. Every person that interviewed me was a technical director, which I really appreciated. I used the Q & A time to ask them about a typical day, what they liked, challenges, etc.

I was supposed to have an interview with the director of technical directors, Mark McGuire, who oversees all TDs on all movies, but he wasn't available that day. To make up for it, he called me the next day. He mostly wanted to make sure, as everyone had, that I had a very clear idea of what this position entailed. I always found that a difficult question to answer as I don't think I can ever understand a position unless it's a common profession or I spend a day doing it.

A few days later I received an actual offer from the company. The hiring manager explained compensation, the benefits like provided breakfast and lunch, etc. He worked hard to sell me on it. Even though I asked for more time to complete the hiring process with some other companies I was interviewing with at the time, I ended up accepting the offer. I start on the 5th of January. Although I didn't want to live in California, I'm excited to start at what appears to be a fun and dynamic company. By the way, if you haven't seen Kung Fu Panda, the humor alone sells it. The art direction is just more bang for your buck.

Thursday, November 20, 2008

FBO Multisampling

I was using pBuffers in OpenGL when FBOs were coming around. Moving to FBOs seemed like a huge increase in simplicity and functionality. I started using them all the time. However, I have always wanted to add a little bit of anti-aliasing, but never quite knew how.

Sometimes googling for OpenGL functions returns pages of garbage, and the spec files don't always provide clear usage examples. Well, I have found a web page by someone who had the same problems and has posted his solution, including the setup, blitting, etc. I've sketched the general shape of it below.

Normally I wouldn't write a post about a code snippet, but I think this deserves another link so more people will run across this little OpenGL gem.

Thanks, Stefan!

(click here) for multisampling in an FBO.
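The gist, as I understand it: render into a multisampled FBO backed by renderbuffers, then blit into a single-sample FBO to resolve the samples. Here's a minimal sketch using the EXT entry points (width and height assumed defined, error checking omitted--see Stefan's page for the real thing):

    // Minimal multisampled-FBO setup (GL_EXT_framebuffer_object,
    // GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_blit).
    GLuint msFbo, msColor, msDepth, resolveFbo, resolveTex;

    // Multisampled color and depth renderbuffers on the "draw" FBO.
    glGenFramebuffersEXT(1, &msFbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFbo);

    glGenRenderbuffersEXT(1, &msColor);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColor);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4, GL_RGBA8, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_RENDERBUFFER_EXT, msColor);

    glGenRenderbuffersEXT(1, &msDepth);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msDepth);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, msDepth);

    // A single-sample FBO with a texture to resolve into.
    glGenFramebuffersEXT(1, &resolveFbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, resolveFbo);
    glGenTextures(1, &resolveTex);
    glBindTexture(GL_TEXTURE_2D, resolveTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, resolveTex, 0);

    // ... render the scene into msFbo as usual ...

    // Resolve the samples with a blit; resolveTex now holds the
    // anti-aliased image, ready to be used as a texture.
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
    glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);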

Friday, November 07, 2008

Sunflow - A Java ray tracer

After all the posts and articles I've written on why I don't think interactive ray tracing is the future of games, I have always said ray tracing has its place. One ray tracer I became acquainted with a while ago is Sunflow, a really neat Java ray tracer written by Chris Kulla.

Ajax model rendered in Sunflow by Tartiflette


First of all, this guy knows ray tracing. He's a programmer at Sony Imageworks and has created a very robust ray tracer with a very intuitive code base. Some may wonder why someone would write a ray tracer in Java when C++ is clearly faster. It's true, most C++ code runs faster than Java if it's written by a competent programmer (which he is), and even Chris admits this. However, he notes that in his experience C++ is only about 20% faster, and for a non-interactive renderer this isn't much of an issue--especially since writing it in Java made network rendering much simpler (see project Helios). If you are familiar with ray tracers, you can open up the source code and easily navigate around and see what is going on.

The Sunflow project is a couple years old now and has a host of features including caustics, normal mapping, photon mapping, multi-threading, and a shader API that allows you to write your own materials in Java. Many people have provided exporters from Maya, Blender, XSI, and a few other modeling programs.

Although it's only version 0.07.2, it is stable and supports many features that most artists will be satisfied with. So check out the website or at least look at the gallery.

Thursday, October 02, 2008

Pixar Interview

Everyone knows Pixar would be a great place to work. At Siggraph, it seemed aspiring animators and modelers were lining up with their reels and portfolios, eager for an internship or residency at such a production house.

Yesterday I went up to Seattle to interview with the Pixar team in charge of the Renderman products. This office does not handle production. Instead it deals with the actual technology that the artists use for rendering, shading, and distributing the work among computers.

Having spent eight years in the northwest, it was a great feeling to be back. I had a couple of hours to walk around downtown Seattle before my interview. If you've never been to Seattle, it's a great city. The cityscape has a huge variety of architecture; each building has a unique and pleasing style. Pixar's office is currently located in Smith Tower, the oldest skyscraper in Seattle. For a moment I thought I had the wrong building until I asked the clerk. Upon entering, you walk down a narrow hall with an array of shiny golden elevators to your right. Stepping into these golden boxes, a liftman manually shuts the door and turns a lever for a given level. The Pixar office itself was fronted by a reddish office door with the old-style blurry glass and PIXAR in big black letters, like you'd see in a detective's office. The door was cracked open slightly, and I could feel the awesomeness vibes pulsating from the interior. How do people work in such extreme conditions?

The interview process was more involved than I expected. The actual interviews lasted about three hours. During that time, developers, engineers, prospective bosses, etc. were shuffled in and out. The interviews were quite casual, and they seemed to focus more on my experience than the on-the-fly technical quizzes you hear about at some firms. Most of the technical questions related to something on my resume, along the lines of "Why did you do it that way?". The developers came in pairs, and I had the chance to meet some really cool people.

One pair of developers that came in were Per H. Christensen and George Harker. I quoted something I heard from a ray tracing symposium a few years ago in reference to Renderman's code length. Per replied, "I said that". I felt a little silly. Maybe he was flattered. I also told them I've played around with Pixie, a Renderman-compliant renderer. I felt even sillier after George replied, "Yeah, I wrote that." Oops again. Need to do a better job keeping track of these things. After arriving back in Salt Lake and doing some homework, I realized I had actually heard Per's presentation at Siggraph on Practical Global Illumination with Irradiance Caching. Triple oops. I need to pay more attention to people who may one day interview me. They could be anywhere... The entire team seemed very talented.

After the interviews, I returned to the streets of Seattle. They recommended a good dinner location at a nearby pub. Brian Saunders told me I could get sweet potato fries if I asked for them. Out of curiosity, I did. Sadly, a breakdown in communication happened and I was rendered sweet-potato fryless, so to speak. I sat at the bar and ordered, at which the bartender turned to a redhead and said, "One burger and sweet potato fries". She asked how I wanted the meat cooked and went off. Thirty minutes later, I was getting anxious to get to the airport and still had no food. I asked the redhead the status of my burger. After checking, she said, "The guy you talked to didn't actually order it, so we'll give you a burger that was going to someone else". I was thinking, "Didn't he tell you to do it?", but I didn't say anything. The burger and normal fries arrived and were delicious, so all turned out well. If I get the job or happen to return to Seattle, I'll have to go back and try the sweet potato fries. If so, expect a blog post.

Sunday, September 28, 2008

Job Hunting

I'm finally leaving academia and searching for a career that provides satisfaction and a real salary. I've applied to a lot of big names and gotten fairly decent results.

Pixar has interviewed me over the phone and is flying me out next week to their Seattle Renderman team. Although I don't have a ton of experience in film production, I'm hoping any job they offer me will lead to a full-time development position on one of their rendering products. They didn't ask any technical interview questions besides what I've done, so I expect I'll get some of that face to face.

Intel's Visual Computing Group is flying me out to Hillsboro, OR next week for an interview with their driver development team for the Larrabee GPU, designing the DirectX driver. I was under the impression that most of this work had been done, considering they provided performance benchmarks for many games like Gears of War in their paper. When I asked, the interviewer said he couldn't specify how complete the implementation was, but he did say that with Microsoft's active development of the DirectX API, there will always be new additions to the driver. I've heard working at Intel can be a little intense and that they push their employees relatively hard, though a couple of people who have worked there say that they actually like that about Intel. I don't know DirectX very well, but from what I understand, it's a lot easier for driver development than OpenGL for several reasons. Hrmm, I guess that means I won't be working on a Linux/FreeBSD machine...

I also had a phone interview with a couple of developers from Google. I applied to Google without a clear idea of what I'd be doing there, just assuming it would be a great place to work. The first developer to call me was Sean Pidgin, one of the project leads for Pidgin (formerly GAIM) and the project head for Google Talk. We started out talking a little about my background, but he did have several casual technical questions about threading and networking, as well as some formal coding I did in a Google Docs document we shared. I'm still not used to coding on the fly, especially over the phone, but I think I did alright. I always get nervous when, at the end, they ask, "Can this be more efficient?". Of course it can, but the first thing I say isn't necessarily the best thing I can do. Can a writer talk out a scene with the same finesse with which he/she publishes one? They asked questions about things like race conditions with threads, binary tree searches, etc. Like I said, I still don't have a clear idea what I'd do at Google, given that their job descriptions require knowledge of everything, but in a way that's pretty cool.

In general I'm fairly content with the interviews I've been able to receive. I've only been looking for a couple weeks and already have a few flights scheduled for on-site interviews. We'll see where I finally end up.

State of Rasterization

I was down at Siggraph when the new OpenGL 3.0 specification was released. I was excited for the OpenGL BoF later that week to see all the promised updates to the specification. Monday of that same week, I read a slashdot article that broke the news to me that the original enhancements to the specification, like the object model, had been butchered. I was a little shocked, but was sincerely hoping it was like many slashdot articles, where the commenters simply read the summary and didn't bother reading the article. I had some naive hope that the webmaster had accidentally posted the old specification under the new file name, but after scanning the entire specification, I could see that wasn't the case. Many people were angry that they were left in the dark so long about the dropped features, and that Khronos had dropped the ball like the ARB before it. Although I think Khronos screwed up, I still believe a much-needed object model is on the road map, and through these frequent "updates" I can see a more state-friendly API emerging. nVidia already released a new extension that provides texture building without having to bind that texture. Even though it's just an extension, I hope to eventually see many other functions go down that road.

nVidia has done a good job supporting OpenGL in various ways, including graphics demos, presentations, APIs, etc. Although I don't see much use for some of the new additions to the pipeline--nVidia itself has told developers NOT to use the geometry shader--I do like some of the things presented in this new API.

nVidia's slides on OpenGL features for the GeForce 8 architecture. I'm linking them straight from the nVidia page until they tell me I can't.



One of the cooler additions to the rendering pipeline, IMHO, is the transform feedback buffer. Basically, when you send vertex data down the pipeline and it gets transformed by various matrices in the vertex shader, you don't really have an idea of what those vertices actually get transformed to. You could do some GPGPU thing where you print the vertex information to the screen and read it back, but this is tedious to implement and debug. A better solution is to have the rendering API store the transformed vertices in a buffer. One of the pesky things about shadow volumes is that you have to compute the edge silhouettes. This was solved in some respects in "Shadow Volumes on Programmable Graphics Hardware" by Brabec et al., and it can now be implemented trivially using this new transform feedback buffer. Other work has been done that uses silhouettes to provide some effect, so I can see Brabec's implementation applied to a whole slew of visual effects that require light or camera silhouette information. In their presentation, they use the geometry shader for silhouette detection, which I think is a mistake. Much like Barnes and Noble dedicating a section to Vampire Romance might be considered a mistake. Maybe that's a stretch.
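To make the idea concrete, here's a minimal sketch of capturing post-transform positions with the EXT_transform_feedback entry points (program, feedbackBuffer, and drawScene are assumed to be set up elsewhere):

    // Declare which varyings to capture; the program must be
    // (re)linked afterwards for this to take effect.
    const char* varyings[] = { "gl_Position" };
    glTransformFeedbackVaryingsEXT(program, 1, varyings, GL_INTERLEAVED_ATTRIBS_EXT);
    glLinkProgram(program);

    // Bind a buffer object to receive the captured vertices.
    glBindBufferBaseEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, feedbackBuffer);

    glUseProgram(program);
    glBeginTransformFeedbackEXT(GL_TRIANGLES);
    drawScene();  // transformed vertices are written into feedbackBuffer
    glEndTransformFeedbackEXT();

    // Read back (or reuse) the post-transform positions, e.g. for
    // silhouette extraction when building shadow volumes.
    glBindBuffer(GL_ARRAY_BUFFER, feedbackBuffer);
    float* verts = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
    /* ... inspect verts ... */
    glUnmapBuffer(GL_ARRAY_BUFFER);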

Another exciting extension is EXT_direct_state_access. Although this isn't the "object model" many people were expecting, I believe this kind of extension will gradually lead to the state-friendly programming model we all wanted. How many times have we not known the state of OpenGL coming into a given function, where changing that state could break the assumptions of a function that follows? Once again, kudos to nVidia for great support and presentations.
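As a quick taste, compare the classic bind-to-edit dance with the direct versions the extension provides (a small sketch; myTexture is a placeholder for whatever texture object you're editing):

    // Classic bind-to-edit: clobbers whatever texture was bound before.
    glBindTexture(GL_TEXTURE_2D, myTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // EXT_direct_state_access: edit the object directly--no binding,
    // no disturbing selector state a later function might rely on.
    glTextureParameteriEXT(myTexture, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // The matrix stacks get the same treatment:
    glMatrixLoadIdentityEXT(GL_PROJECTION);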

Monday, September 15, 2008

Rasterization versus Ray Tracing

"Real-Time Ray Tracing: Holy Grail or Fool's Errand?" is an older article, but I think it's short, simple, and to the point, which you don't see every day. The author Dean Calver gives a very brief comparison of rasterization and ray tracing then summarizes some of the problems with ray tracing--specifically aliasing, dynamic scenes, and global illumination.

I've written about problems with dynamic scenes in the past. Although many people prefer ray tracing for its simplicity, it requires acceleration structures to run quickly. In David Luebke's presentation he made an excellent statement that I think described the rasterization-versus-ray-tracing debate rather well.

"Rasterization is fast, but needs cleverness to support complex visual effects. Ray tracing supports complex visual effects, but needs cleverness to be fast."

Another problem I think isn't discussed much is texture locality. Although I don't consider this as big a problem for ray tracing as dynamic scenes, it's a performance penalty. The problem stems from ray tracing being an image-order algorithm. If you shoot a ray into the scene and it hits a textured object, you need to fetch that texture into the cache. If the next pixel uses a different texture, a new texture chunk must be fetched. This is in contrast to an object-order technique, where adjacent fragments on the object usually use adjacent texels. A worst-case scenario for a ray-traced scene might be tracing textured grass against a textured wall: every couple of pixels, you'd need to refetch another texture block. Although this is a generalization, since caches are big and the entire texture doesn't need to be fetched, it's still an issue, and today's textures are getting rather large. With rasterization, the wall would be drawn with one texture, then the grass would be drawn with the other, possibly creating no duplicate cache misses. If you read the original REYES paper by Cook et al., you'll see they spend a good chunk of the paper talking about texture locality, because it can severely impact performance.
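To illustrate the effect, here's a self-contained C++ toy: a little LRU cache of texture blocks, fed once in object order (wall, then grass) and once in image order (alternating grass/wall pixels). The numbers are made up and the model is crude, but the thrashing pattern is the point.

    #include <algorithm>
    #include <cstdio>
    #include <list>
    #include <utility>

    // Toy LRU cache of texture blocks; counts block fetches (misses).
    // A block is identified by (textureId, blockIndex).
    struct Cache
    {
        size_t capacity;
        std::list<std::pair<int, int> > blocks;  // front = most recent
        long misses;
        Cache(size_t cap) : capacity(cap), misses(0) {}

        void access(int tex, int block)
        {
            std::pair<int, int> key(tex, block);
            std::list<std::pair<int, int> >::iterator it =
                std::find(blocks.begin(), blocks.end(), key);
            if (it != blocks.end())
                blocks.erase(it);              // hit: just refresh recency
            else
            {
                ++misses;                      // miss: fetch the block
                if (blocks.size() == capacity)
                    blocks.pop_back();         // evict least recently used
            }
            blocks.push_front(key);
        }
    };

    int main()
    {
        const int blocksPerTexture = 64;  // wall (0) and grass (1)
        const int cacheCapacity = 48;     // too small to hold both textures

        // Object order: draw the wall, then the grass. Adjacent fragments
        // reuse the block just fetched, so each block is fetched once.
        Cache raster(cacheCapacity);
        for (int tex = 0; tex < 2; ++tex)
            for (int b = 0; b < blocksPerTexture; ++b)
                for (int reuse = 0; reuse < 4; ++reuse)
                    raster.access(tex, b);

        // Image order: neighboring pixels alternate grass/wall, so the
        // two textures keep evicting each other.
        Cache traced(cacheCapacity);
        for (int row = 0; row < 4; ++row)
            for (int b = 0; b < blocksPerTexture; ++b)
            {
                traced.access(0, b);
                traced.access(1, b);
            }

        std::printf("object-order misses: %ld\n", raster.misses);  // 128
        std::printf("image-order misses:  %ld\n", traced.misses);  // 512
        return 0;
    }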

I think nVidia isn't concerned whether ray tracing takes over the rendering pipeline as long as it's done on the GPU. The goal of Luebke and Parker's presentation was to show what nVidia hardware could do and let developers decide what was best for them. A more-options kind of thing. Although I didn't care too much for the ray tracing portion of the demo, I did think he had compelling screenshots of what rasterization is currently capable of rendering.

Both presentations conclude that a hybrid algorithm is the future. Calver likes the idea of rasterizing the visible portion of the screen and using that information to ray trace secondary rays. Frankly, I don't think this idea is very promising. Rasterization is great for the visible rendering of the screen, but if you're doing secondary rays, you'll need the acceleration structures you didn't want to build in the first place. If that weren't a problem, you could just ray trace everything. Ray tracing is very fast for primary rays if the acceleration structures are already built; you have tons of coherency assumptions. David Luebke's idea seems more plausible, where scene assets are sorted into ray-traced and rasterized passes. You can rasterize animated objects like people without having to build an acceleration structure and ray trace other objects that require effects like reflections. But once again, what are you intersecting your secondary rays with? In my diagram below, I have a sample scene using a ray-traced reflective sphere and a rasterized animated person. It's crude, but I'm trying to give a simple example.

The ray-traced sphere needs an updated acceleration structure to perform accurate reflections on the animated, rasterized figure


You rasterize the person just like you would in current games. Then you ray trace the sphere. The ray hits the sphere and you want to calculate the intersection of it with the person. But the person doesn't have an acceleration structure because it's being animated. If we had the acceleration structure built, we would have just ray traced it. I'm trying to say that I don't see how ray tracing secondary rays will ever play nice with rasterized, dynamic geometry.

Jon Olick's presentation used ray casting for static geometry, which I think provides more immediate benefits. Most games already have special rendering techniques for static geometry, like lightmaps for all the floors and walls, so having special rendering "cheats" for static geometry isn't new. Although this doesn't provide higher-order effects like ray tracing does, it will significantly increase the geometric complexity of the static portions of the scene. Furthermore, this technique is relatively easy to incorporate into existing pipelines at essentially no cost.

Although I still don't like the current marketing push towards ray tracing, including Intel's, I am mildly excited for Intel's Larrabee. As long as it rasterizes competitively fast, it could allow developers to add new features and extensions to the pipeline more easily than current GPU hardware allows.

Oh, and though I don't have anything to say about the other presentations, kudos to nvidia for always doing a good job posting their presentations. I did see Sarah Tariq's hair presentation and think we're getting close to seeing scenes with characters with animated, beautiful, frothy, tangly, wavy, shiny hair. Check it out.

Wednesday, September 03, 2008

Siggraph 2008

Well, I went to Siggraph for the first time. It was an interesting experience. The convention consisted mostly of presentations/lectures, an exhibition hall of company booths, and the "Art Festival".

The presentations were fairly interesting. Nothing earth-shattering. I saw a few random posters that showed some novel work. Blue Sky Studios gave an amazing presentation on the art direction in creating Horton Hears a Who. I didn't see the movie when it came out; I guess Cat in the Hat left a bad taste in my mouth for Seuss-movie conversions. However, this movie looked fantastic. The speaker discussed several artistic approaches to modeling a Seuss-like world of asymmetric trees; heavy, blobby, saggy objects; and color palettes for different scenes. From the clips I saw, the movie looked amazing and was pretty hilarious.

Jon Olick from id gave a really good presentation. He has done a lot of level-of-detail work on games like Ratchet and Clank: Tools of Destruction. Although I've never played the game, many reviews I've read have commented on the amazing draw distance Insomniac achieved. I believe some of the implementations involved geomorphing, which he discussed. Jon finished the talk discussing the sparse-voxel octree for static geometry that John Carmack has proposed. I must stress this proposal is only for STATIC geometry, as the octree isn't rebuilt dynamically. This question was raised at the presentation, and Jon quickly stated he wasn't attempting to solve the dynamic octree construction that everyone has been working on for years. Although it only works for static geometry, it should provide a big leap in geometry resolution at storage comparable to traditional triangular meshes--these sparse octrees can supposedly be streamed in according to screen footprint, allowing extremely detailed assets. Furthermore, it should combine easily with current pipelines that rasterize dynamic geometry. And even though ray tracing nuts (not researchers--just enthusiasts) supposed that this was a huge step towards full ray tracing, this method only works for ray casting, so one still needs things like shadow maps for global illumination effects (like shadows). Overall, I thought he gave the best presentation--mind you, he actually gave a demo of the proposal, which makes it much more convincing.

Intel gave several presentations on the Larrabee architecture and how it will not suck. They repeatedly stated how they weren't showing any benchmarks since there's no product for another couple of years, so it's hard for me to really get excited about it. Larry Seiler gave the first presentation even though he's not actually on the Larrabee architecture paper. Weird as it seems, the guy seemed very cool. I stopped him in the hallway to ask him a few questions; he set his bag down and started chatting away happily. Another time, I saw him sitting in a corner with a couple of young geeks with their laptops flipped open asking him questions. For a mid-50s guy from Intel to just sit on the ground and field questions seemed pretty cool to me. Although I don't have a lot of confidence that Larrabee will be a competitive GPU, I thought they picked a great spokesman. He said he was chosen specifically for his past experience with GPU architectures.

nVidia gave its ray tracing demo of a car flying through the city. I saw it firsthand, sitting in the front row while my old professor Steve Parker explained the technology. Although I think he's a brilliant programmer, I wasn't very impressed with the whole demo. The scene was a cityscape with a car driving through it. I've expressed my concerns with ray tracing in previous posts, like the problems with texture coherency. The demo had very bland textures, and for something running on four nVidia Quadro Plex units, it really wasn't very impressive visually. They even mentioned a "canyon" shader, which gave the buildings a more realistic look. From what he explained, the shader was a linear ramp from dark to light, just as canyons are typically darker at the bottom. When a linear luminance ramp is a topic of interest in 2008, I think the demo has failed. They zoomed in on the headlamps to show 7 bounces of ray tracing, which I guess is really important, but to me it just looked very noisy. I know it got a lot of people excited, but with the hardware they were running it on, I'd rather have seen a rasterized scene running at 128x anti-aliasing with per-fragment environment mapping. The demo was basically glorified ray tracing with accurate reflections. If they wanted accurate reflections, why didn't they just implement Jens Krueger's GPU Rendering of Secondary Effects? Sure, the 3rd or 4th bounces would have less accuracy, but if you saw the demo, the tertiary rays really didn't contribute much to the image quality. Like I said, I'd rather have seen a rasterized demo running on that beastly hardware. The talk itself wasn't too bad. David Luebke talked about effects that developers are eager to use and how nVidia's working to provide them--things like better soft shadows, depth of field, caustics, etc.

The exhibition hall was comprised of animation/production schools, production companies, development booths from Autodesk, zBrush, a bunch of motion capture companies, etc. It was fairly exciting at first but lost its appeal after a day. The Art Festival was also pretty fun.

Not sure if I'll attend next year. I felt like most of the students there only wanted to hit Dreamworks and Pixar for job interviews. One animator even asked why a CS person was attending Siggraph. I was a little shocked, but stayed polite. It's in New Orleans next year, which just got hit by Gustav. I remember shoveling mud out of people's second-story houses. Why can't they get rid of the levees and turn New Orleans into the Venice of the US? The city is below sea level, and building levees and watching them fail again and again is getting old. The current system isn't working. Time for a change.

Friday, July 18, 2008

Opinion on Ray-Traced Video Games

I've posted a brief commentary on how I feel about the hype, mostly pushed by Intel, around bringing ray tracing to video games. Although it's an interesting idea, I don't see it replacing rasterizing GPUs anytime in the near future, for several reasons.

State of Ray Tracing (in Games)

Wednesday, March 12, 2008

First Place in Ray Tracing!

Our ray tracing class ended with a ray tracing competition that any student could optionally enter. The only requirement was that our ray tracer had to implement at least two Monte Carlo techniques. I chose to implement soft shadows, using multiple samples on a light sphere, and embossed reflections.
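The soft-shadow idea is simple: instead of one shadow ray to a point light, shoot several rays to random points on the light sphere and average the visibility. Here's a self-contained toy in C++ (not my competition code--just the core of the technique, with one occluder sphere and one shading point):

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    // Minimal soft-shadow demo: average visibility over random samples
    // on a spherical light.
    struct Vec3
    {
        double x, y, z;
        Vec3(double X = 0, double Y = 0, double Z = 0) : x(X), y(Y), z(Z) {}
        Vec3 operator-(const Vec3& o) const { return Vec3(x - o.x, y - o.y, z - o.z); }
        Vec3 operator+(const Vec3& o) const { return Vec3(x + o.x, y + o.y, z + o.z); }
        Vec3 operator*(double s) const { return Vec3(x * s, y * s, z * s); }
        double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
        double length() const { return std::sqrt(dot(*this)); }
    };

    double frand() { return std::rand() / (double)RAND_MAX; }

    // Uniform random direction: rejection-sample the unit ball, then normalize.
    Vec3 randomOnSphere()
    {
        for (;;)
        {
            Vec3 p(2 * frand() - 1, 2 * frand() - 1, 2 * frand() - 1);
            double len = p.length();
            if (len > 1e-6 && len <= 1.0) return p * (1.0 / len);
        }
    }

    // Does the segment from 'from' to 'to' intersect the given sphere?
    bool segmentHitsSphere(const Vec3& from, const Vec3& to,
                           const Vec3& center, double radius)
    {
        Vec3 d = to - from;
        double tMax = d.length();
        d = d * (1.0 / tMax);                   // normalized direction
        Vec3 oc = from - center;
        double b = oc.dot(d);
        double c = oc.dot(oc) - radius * radius;
        double disc = b * b - c;                // quadratic discriminant
        if (disc < 0) return false;
        double t = -b - std::sqrt(disc);        // nearest intersection
        return t > 1e-6 && t < tMax;
    }

    int main()
    {
        Vec3 shadePoint(0, 0, 0);
        Vec3 lightCenter(0, 10, 0);
        double lightRadius = 2.0;
        Vec3 occluderCenter(0, 5, 1.5);         // partially blocks the light
        double occluderRadius = 1.0;

        const int samples = 256;
        int visible = 0;
        for (int i = 0; i < samples; ++i)
        {
            // Shadow ray toward a random point on the light sphere.
            Vec3 target = lightCenter + randomOnSphere() * lightRadius;
            if (!segmentHitsSphere(shadePoint, target, occluderCenter, occluderRadius))
                ++visible;
        }
        // Fractional visibility gives a soft penumbra instead of a 0-or-1 shadow.
        std::printf("visibility = %.3f\n", visible / (double)samples);
        return 0;
    }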

Although my ray tracer wasn't the most technically impressive, it won the top $100 prize, judged by an independent panel of professors. I will probably spend the money on groceries for next week...