Monday, April 27, 2009

Ray Tracing Simpler than Rasterizing?

In one short word... depends.

I would definitely agree that someone can sit down and bust out a simple ray tracer. When I was learning Python, I hacked together a 500-line script to render a simple scene. It has shadows, reflective materials, and a couple of primitives that would have to be discretized into triangles to render with a rasterizer. The only real library I used was an image library to actually put the image into a window.
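To give a flavor of how little code that takes, here's a minimal sketch (not the original script) of the core operation--intersecting a ray with a sphere. It's just the quadratic formula:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    Assumes `direction` is normalized (so the quadratic's a == 1).
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-6:                     # nearest hit in front of the ray
            return t
    return None
```

A plane is even shorter, and neither shape ever needs to be diced into triangles.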


  1. For handling reflections, I just reflect the ray direction off the surface and send it out again
  2. When shading a point, I fire a ray towards the light source, and if I hit something before I get there, I shadow the surface
  3. I added a procedural checker material in only a few lines of code based on the surface position
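The three features above really are that small. Here's a rough sketch of each (the helper names and the `intersect(origin, direction, obj)` callback are hypothetical, not the original script's code):

```python
import math

def reflect(d, n):
    """Mirror direction d about the unit surface normal n: r = d - 2(d.n)n."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

def in_shadow(hit_point, light_pos, occluders, intersect):
    """Fire a ray from the hit point toward the light; any hit closer than
    the light means the point is shadowed. `intersect` is assumed to return
    a hit distance or None."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = tuple(c / dist for c in to_light)
    # Nudge the origin off the surface to avoid self-intersection.
    origin = tuple(p + 1e-4 * d for p, d in zip(hit_point, direction))
    return any(
        (t := intersect(origin, direction, obj)) is not None and t < dist
        for obj in occluders
    )

def checker(point, scale=1.0):
    """Procedural checker from the surface position alone: the parity of
    the integer cell the x/z coordinates fall in."""
    x, _, z = point
    return (math.floor(x / scale) + math.floor(z / scale)) % 2 == 0
```

Each feature is a handful of lines of vector algebra, which is exactly why they're easy to write from memory.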


In my opinion, that's some pretty simple code for some rather significant features. If I wanted to add more complicated features like shade trees, it would have been a relatively simple addition. Not only was this script straightforward, but I implemented it right off the top of my head. I don't have the best memory, so I'd credit that to these concepts being intuitive enough to remember.

Simple python ray tracer featuring procedural shading, Lambertian and reflective materials, and simple shadows

(source code)

Same work with a rasterizer
Now, let's imagine that I had to write the same kind of functionality in a simple scanline rasterizer.


  1. The procedural geometry is out--both the plane and the sphere have to be broken up into triangles, which isn't complicated, but surely not as elegant as some vector algebra
  2. I would need to implement or find a matrix library to handle all my projections, and since I don't have the projection matrix memorized, I'd need to go get a reference for it
  3. I would also need to implement the actual scanline algorithm to sweep through these polygons and maintain a simple depth buffer
  4. For shadows, I'd need to render a shadow map, making sure the objects casting the shadows are in the light frustum
  5. For reflections, I'd have to render an environment map, which would involve texture lookups and alignments, etc.
  6. The procedural checker material would probably be the easiest thing to implement, but I would have to interpolate the vertex attributes across the different pixels
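For point 2 above, the projection matrix I'd have to go look up is the standard OpenGL-style perspective matrix. A sketch of it, plus the divide-by-w that actually lands a point in normalized device coordinates, might look like this (row-major, plain lists, no matrix library):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix, returned
    row-major. Maps the near plane to z = -1 and the far plane to z = +1."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Apply the matrix to (x, y, z, 1) and divide by w to get NDC."""
    x, y, z = p
    v = [sum(m[r][c] * (x, y, z, 1.0)[c] for c in range(4)) for r in range(4)]
    w = v[3]
    return (v[0] / w, v[1] / w, v[2] / w)
```

Compare that to the ray tracer, where no projection exists at all--the camera just generates ray directions.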


I certainly can't fit all that elegantly in under 500 lines of code. Implementing a few basic visual effects is clearly much simpler with ray tracing.

Where is Rasterization Simpler?
Ray tracing models light transport in a very physically realistic manner. It handles pretty much any "effect" rasterization can, but it comes at a big cost--mostly performance. Most current ray tracing research is about making the algorithm faster, which requires some very complex code.

First, ray tracing pretty much requires the entire scene, including textures, to fit into local memory. For example, if you fire a ray into the scene above and it hits the ball, that ball needs to know about the plane below it and the ball to the left of it to do a reflection. That's simple enough in this case, but what happens if your scene is five gigabytes and you have four gigabytes of RAM? You have to go through the incredibly slow process of fetching that primitive from the hard drive and loading it into memory. That's ridiculously slow. And the very next ray may require another hard drive swap if it hits something not in memory.

One easy solution is to break your scene up into manageable layers and composite them, but then you have to start keeping track of which layer needs what. If you have a lake reflecting far-off mountains, for example, both need to be in the same layer.

What's with these acceleration structures?
And what of these acceleration structures? Ray tracing is often touted as being able to push more geometry than rasterization, since an acceleration structure typically allows the ray to prune large groups of primitives and avoid costly intersections. That's certainly nice if you happen to have one, but where do these structures come from? You build them yourself! And if you happen to be playing a game that requires any kind of deforming animation (almost any 3D game nowadays), you have to rebuild these structures every frame.
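The pruning an acceleration structure buys comes from cheap bounding-volume tests. A sketch of the classic ray-AABB slab test--the workhorse inside BVH traversal--shows what gets to replace thousands of triangle intersections when the ray misses a node:

```python
def hit_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: if the ray misses this axis-aligned bounding box, every
    primitive inside it can be skipped without a single triangle test.
    `inv_dir` is the precomputed per-component reciprocal of the ray
    direction (a common trick to trade divides for multiplies)."""
    t_min, t_max = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_min = max(t_min, t0)
        t_max = min(t_max, t1)
        if t_min > t_max:
            return False   # the slabs don't overlap: prune the whole node
    return True
```

The test itself is trivial; the expensive part is building and rebuilding the tree of boxes around deforming geometry every frame, which is exactly the problem.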

Despite the research in this field, it's still expensive to maintain them as the scene changes. State of the Art in Ray Tracing Animated Scenes discusses some of the more recent techniques to speed up this process, but as Dr. Wald says, "There's no silver bullet." You have to pick which acceleration structure to use in a given situation, and then you have to implement it efficiently.

With a rasterizer, you have to go linearly through your entire list of geometry. But if we're animating and deforming that geometry anyway, that's fine. With a ray tracer, animation means rebuilding your acceleration structures. Why does ray tracing make animation so complicated when it should be so simple?

Ray Tracing Incoherency
Ray tracing is inherently incoherent in many ways. Firing a ray (or rays) through every pixel makes it difficult to know what object the renderer will want in the cache. For example, if one pixel hits a blade of grass, that geometry needs to be loaded into the cache. If the very next pixel is another piece of geometry, that blade may need to be swapped out again. For visibility rays you can optimize this a little bit, but with secondary rays (the entire purpose of ray tracing) these rays are going all over the place. Many elaborate schemes have been developed to attack this problem, but it's a difficult property that comes with ray tracing and won't be solved any time soon.

A rasterizer is inherently coherent. When you load a gigabyte-sized model and its gigabyte of textures into RAM, you work with that one model until you need the next one. When doing a texture lookup, there's a really good chance that chunk of texture is still in your cache, since the pixel before probably used an adjacent texel. And when the model is completely rendered, you can throw it away, move on to the next one, and let the depth buffer take care of the overlaps. Doesn't the depth buffer rock?
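The depth buffer that makes this streaming style possible is almost embarrassingly simple. A toy sketch (the fragment tuples are an assumption for illustration, not any particular API):

```python
def zbuffer_compose(fragments, width, height):
    """`fragments` is an iterable of (x, y, depth, color) tuples, produced
    by rasterizing one model at a time. The depth buffer keeps the nearest
    fragment per pixel, so models can be streamed through in any order and
    thrown away as soon as they're rasterized."""
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # nearer than what's stored: overwrite
            depth[y][x] = z
            color[y][x] = c
    return color
```

No model ever needs to know any other model exists--which is precisely the property ray-traced reflections destroy.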

Both are Complex
In general, there's nothing really simple about production-quality ray tracing. It basically trades graphics hacks for system optimization hacks. David Luebke said at SIGGRAPH 2008, "Rasterization is fast, but needs cleverness to support complex visual effects. Ray tracing supports complex visual effects, but needs cleverness to be fast." Every scene is bottlenecked by time, and getting ray tracing to work within that time isn't always "simple".

nVidia's Interactive Ray Tracing with CUDA presentation

Monday, April 20, 2009

Rasterization: King of Anti-Aliasing?

With the ongoing marketing of interactive ray tracing, I have continued to evaluate why rasterization will remain the dominant rendering method in games for quite some time. Ray tracing is considered by many as the end-all-be-all of computer graphics, and some people seem to simply turn their brains off when it comes time to discuss its disadvantages. While conversing with an old colleague on that note, I realized I've never met a rasterization fanatic who insists everything has to be rasterized. Wouldn't it make more sense to use each in the areas it excels in? As you read the next part, keep in mind that rasterization still dominates the movie industry for good reasons, and that those same principles apply in games.

Rasterization: King (or Queen) of Anti-Aliasing
It's pretty much undisputed (among sane individuals) that rasterization trumps ray tracing in terms of anti-aliasing. When Pixar's REYES renderer was being developed, one of its primary goals was providing high-quality anti-aliasing. This is one of rasterization's biggest advantages and one of the principal reasons off-line renderers still use rasterization for primary visibility. It comes down to the basic difference between ray tracers and rasterizers. With a ray tracer, one ray is fired per pixel and an intersection is computed for that ray. With anti-aliasing, more rays are fired into the scene per pixel. Thus if you want n samples, you fire n rays and do n intersections. These intersections are in 3D and are relatively expensive to compute. Rasterization, on the other hand, does a lot of work to get a primitive onto the screen. Once it's there, however, it can be sampled efficiently in 2D. So rasterizers project geometry into 2D, and we'd like to sample in 2D since we're making a 2D image anyway. How convenient.

Once the primitive is on the screen, rasterizers can crank out the samples


Imagine a pixel on screen as you see above, and suppose we want nine samples instead of one to produce a more alias-free image. With ray tracing, each sample is an expensive intersection test. With rasterization, the renderer simply has to decide, "Does this sample lie inside the primitive or outside?" Which one seems simpler? That may seem trite since a computer is doing all the work, but it's the difference between solving a complex 3D intersection and a 2D region test. With ray tracing, firing n-times more rays is n-times slower. With rasterization, the brunt of the work is getting the primitive on the screen. Once it's projected into 2D, samples are dirt cheap.
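That 2D region test is just a few multiply-subtracts per sample--the classic edge-function test. A sketch of nine-sample coverage for one pixel (assuming counter-clockwise triangles in screen space):

```python
def edge(a, b, p):
    """Signed-area test: positive if p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def coverage(tri, pixel_x, pixel_y, n=3):
    """Sample an n*n grid inside the pixel; each sample is just three
    edge tests, no 3D intersection anywhere. Assumes the triangle's
    vertices are already projected to 2D and wound counter-clockwise."""
    a, b, c = tri
    hits = 0
    for i in range(n):
        for j in range(n):
            p = (pixel_x + (i + 0.5) / n, pixel_y + (j + 0.5) / n)
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                hits += 1
    return hits / (n * n)
```

Going from one sample to nine barely changes the cost per pixel; going from one ray to nine multiplies the whole render time.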

Who cares about anti-aliasing?
The topic of anti-aliasing certainly isn't sexy. It doesn't require secondary rays, funky texture maps, or any complex data structure. It's usually just more work. But it's one thing we haven't seen really pushed on GPUs--merely settled on. A modern video card can do 16x full-screen anti-aliasing and certainly could do much more.

Look at the image below. There's nothing really complicated in this scene. Certainly nothing that would require a ray tracer--mirrored reflections, simple smoke, some large textures, and a lot of little details. For all I know this image could have been ray traced--it probably was, considering every modeler comes with some kind of ray tracer--but it screams for a rasterizer. You'll soon see games turning this stuff out in real time.



Anti-aliasing in games
With production studios sometimes requiring more than 64 samples per pixel, it's no wonder companies are still rasterizing away. With current graphics cards easily performing 8x sampling per pixel while ray tracers chug away on beastly machines to compete, it would take very little for GPUs to flex their anti-aliasing muscle with even more samples. Anti-aliasing right now is pretty good, and it's only getting better. Movie-quality anti-aliasing in a game? Woo hoo!

Sunday, December 28, 2008

PC to PS3 (Little Big Planet, Orange Box, etc.)

Accepting console gaming
I have run Linux for several years now. It is a great operating system--yeah, I know it's really a kernel. I can always brag to my Windows and Mac friends of the ease of pulling down development libraries, packages, and files within seconds without having to scour the web.

I am a gamer, however, and have always wanted to play games without any hassle; to own a game and have everything just work. WINE has come a long way and has run World of Warcraft for me just fine, but with many other games I'm forced to tweak settings, use sub-par graphics, or, worst case, boot into Windows. To solve my woe, I decided to get a console. Hence the PS3 sitting on my cabinet. I'm also looking forward to buying used games at a discount, something pretty much unavailable to PC gamers.

Configuring the PS3
Having not owned a console since the Sega Genesis--I had Resident Evil 4 and a Gamecube for a few months--I've fallen a little behind on the console world. The PS3 video snapped into my monitor via HDMI, and I plugged the component audio cables into an old surround sound receiver. On starting the system, I passed through a brief configuration menu. One screen asked, "Do you want video and audio to go through HDMI?" Well, technically no, I just wanted HDMI for the video. So I answered no, and my screen went black. I restarted the PS3, answered yes to get video, and set up audio afterwards, but I thought it was a funny question to ask when one answer makes the user's screen go black.

The PS3 menu is pretty sleek. It's a long horizontal bar of menus, where each focused menu will display vertical submenus of options. It works well and although I got used to it quickly, I don't see it as replacing a typical vertical menu.

Pleasant graphics
Both the PS3 and XBox 360 have had many consumer complaints about the resolution many games and videos run at. For example, a game might render at 720p and be stretched to 1080p on the actual display. Although this is noticeable, all the games and videos that do it still look decent.

I had a chance to play a few demos and realized the incredible difference in graphical quality between games. The aliasing in Mirror's Edge, for example, is horrible. The game looks good enough and has cool-colored art direction, but I can't help wondering if they couldn't at least do some post-process anti-aliasing. I've heard Microsoft actually has a stricter policy on anti-aliasing, but I can't find a reference for that. I like having such quick access to all the different playable demos; it has been a lot of fun downloading them and seeing if they're worthy of my wish list.

Orange Box
I enjoyed Half-Life 2 and always wanted to see how the first and second episodes turned out. The graphics are starting to show their age, and the textures appear a bit blander than I'm used to on my PC. I'm still surprised how bad the loading times are in this game. Dying can mean what feels like a minute reloading a level, even if I died a few feet from the last load. What is the system paging in and out? Portal is a refreshing change. It satisfies that little first-person-shooter-puzzle-genre craving I never knew I had.

Although I haven't gotten into Team Fortress 2 to its full degree, I have seen it in action and have been very impressed with the graphics this game puts out. Some people were disappointed that Valve didn't attempt a realistic rendering system like everyone else in the industry. Their loss, I guess, because the game looks great. They have a great presentation, Illustrative Rendering in Team Fortress 2, that really showcases the art direction to make such a great looking game. Here's the high-quality video. If you like art direction or new shader designs, I highly recommend reading their slides and technical article.


Ratchet and Clank: Tools of Destruction
I played parts of the full version at a friend's house and downloaded the demo on my home machine. This is one of the best looking games I have ever seen. The animations are smooth, the palettes are nicely colored, and everything is supremely sharp. The draw distances are incredible. Insomniac does Playstation exclusives and has always done a great job pushing a system. The game also handles really well, and it's just fun to whack things with a wrench. I have included a couple of screenshots stolen from Insomniac's site. The game looks even better in motion.




Folklore
This game got polarized reviews, with reviewers conflicted between its poor gameplay and great art direction. The game features about three art styles, sometimes jumping between 2D and 3D cutscenes, and looks great. It's a unique genre, but the first "boss battle" had me constantly fighting the camera just to keep the enemy on my screen. I stopped after completing that section, fearing the rest of the game would include the same tedious and frustrating camera coordination.

Playstation Home
Playstation Home got terrible reviews days before I received my PS3. I wasn't excited to enter this online world, so I was fairly indifferent to those postings. I was a little curious, though, and gave it a quick pass. I can add to the other statements that there is no reason anyone should use Playstation Home.

After a boring session of creating a customizable character, I ran around the courtyard waiting for other characters' skins to load on my machine while an empty shell of a person modeled their movements. Moving into new buildings required additional downloads. While I realize this happens only once, it is still annoying for the first visit to this online world.

Entering the theater, I sat down in an available chair and waited for the video to play. Many people were standing up in front of the screen talking to each other, while others sitting in the audience with me were calling for those in front to stop being douchebags and sit down. I realized then that Playstation Home is the 3D visualization of all the people on the internet I try to avoid, put in the same room as me.

There is nothing redeeming about it, and although it's in beta, I can see no reason for anyone to use it. A product should enter beta to work out bugs and get customers excited, but I believe most people will, like me, take one glance at it and be extremely reluctant to return. Home might one day be a really excellent product, but it's going to take some extreme makeovers to make it such.

Little Big Planet
I have heard complaints about this game lacking gameplay and being more of a tech demo. I must say that this game is amazing. I have always enjoyed platformers, and this is a great platformer by itself. It features a zany story, where players can pass through levels co-op (either offline or online) with up to three others. Most levels can be completed alone, but it's more fun to play with friends.

The graphics look sharp and feature crisp textures, HDR effects like bloom, and decent camera work. Every once in a while the camera will zoom too far in or out, but it's tolerable most of the time. Smoke and fire effects aren't perfect, but they're pleasant to watch.

The gameplay implements a simple jump-and-grab mechanic, so the controls are easy to master. Like most platformers, the game requires a lot of jumping, but it also includes a simple grab mechanic where the player can grab certain objects if they're made of a "grab-able" material like sponge instead of stone. This is used for hanging, swinging, pulling boxes and levers, etc. There's also a depth, or layer, component where players can move closer to or farther from the camera. This is usually handled automatically, such that a player jumping from a depth of, say, 3 will naturally move forward to land on a platform at depth 2. The mechanic isn't perfect and sometimes causes problems (like totally unfair deaths) but is forgivable since it generally works well.

The gameplay is somewhat forgiving. Frequent save points littered throughout the game allow players to restart there after dying--this only works a few times before the save point runs out of lives and the players must restart the level. Also, objects that can kill the player by squishing or burning usually give the benefit of the doubt, squeezing the player through the crack or only lightly singeing him or her.

The real innovation of Little Big Planet is the excellent use of physics in the game. Each object has its own material with corresponding properties like friction coefficient, mass, texture, etc. It is fun to see your avatar catapulted over a wall or balancing on a teeter-totter-style platform.

The best thing about LBP is the level design. The game has been in production for quite some time, and the developers have mastered the engine, creating many beautiful and crazy levels. I have seen almost every non-combative platforming mechanic put into these levels, as well as new components that add challenges using physics, require teamwork between cooperative players, or just pose some creative new puzzles. Some levels require two, three, or even four players to pass certain puzzles. These are usually optional off-the-route treks to obtain additional collectibles.

It's hard to find good cooperative games these days, so it's gratifying to see one that has done such a great job nailing so many features. I haven't gotten to the level editor yet, but it's on the to-do list. I can't imagine a video-game-playing PS3 owner not purchasing this game unless they adamantly hate platformers.



Conclusion
My first few days using a console have been very satisfying. That feeling of running a game I own without having to fight the system to get it to play is most pleasant, and I urge other Linux users to consider having a separate machine like the PS3 to handle most of their gaming needs. It's also a great Blu-ray player.

Thursday, December 11, 2008

Ray Tracing Basics

Dr. Steve Parker, whose academic team has done much to advance the field of ray tracing--specifically interactive ray tracing--has allowed me to host his lecture slides on my blog. Dr. Parker left the University of Utah earlier this year with Dr. Peter Shirley and several students to join nVidia in the hopes of producing high-quality, interactive ray tracing for consumers. Some have credited Dr. Parker with writing the first "interactive" ray tracer, which he implemented years ago on some SGI computers.

These slides are taken from his CS 6620 - Introduction to Computer Graphics II course, which was basically all about writing fast and efficient ray tracers. The slides contain a lot of C++, but start with basic vector math. I compressed them using Multivalent, so they're quite a bit smaller than the originals.

The first lesson is a typical introduction to the course, with pictures of ray-traced images and reasons why ray tracing is useful. Those wishing to skip these details can probably jump to the second lesson. The slides also include student project images, including mine--the chess set. Most of these competition images lost a lot of quality for some reason--even uncompressed. You can see my image in an earlier post at its original size and quality.

Also, if you use these slides, please keep the reference to Dr. Parker and his course.
Lesson 01 - Introduction to Ray Tracing
Lesson 02 - Geometry for Graphics
Lesson 03 - The Ray Tracing Algorithm
Lesson 04 - Ray Tracing Software
Lesson 05 - Ray Tracing Software
Lesson 06 - Ray-Object Intersections
Lesson 07 - Triangles and Materials
Lesson 08 - Materials
Lesson 09 - Materials II
Lesson 10 - Materials II
Lesson 11 - Materials
Lesson 12 - Heightfields
Lesson 13 - Sampling I
Lesson 14 - Sampling II
Lesson 15 - Color Theory
Lesson 16 - Texturing I
Lesson 17 - Texturing II
Lesson 18 - Displacement, Bump, and Volumes
Lesson 19 - Acceleration Structures
Lesson 20 - Acceleration Structures II
Lesson 21 - Acceleration Structures III
Lesson 22 - Acceleration Structures 4 Instances
Lesson 23 - Monte Carlo I
Lesson 24 - Monte Carlo II
Lesson 25 - Monte Carlo III

Language Shootout

I program quite a bit in my free time on various projects. I try to make sure I'm using the right tool for the job (quote the hammer-and-screw analogy here). Despite the range of programming tasks I've undertaken, I have only a handful of languages I use frequently: C/C++, Java, Python, and PHP. I like all of these languages, but I try to make sure I choose the right one for the task. While sometimes I have to use a specific one because I absolutely need a 3rd-party library, I have a few flexible ideas of when to use a given language.

C/C++ were the first languages I learned, and they can do quite a bit. I usually choose them when I absolutely need the fastest option available and I have the time to do the optimizations necessary to make the code faster than Java--like SIMD operations.

I find Java to be a great prototyping language. By prototyping, I do not mean a prototype-based language, but a language in which I can quickly develop a prototype for an idea I am playing around with. Java comes with a good GUI library, is faster than scripting languages, and has a good range of libraries to work with. I always hated doing something in C++ and realizing I needed to go hunt down an image library just to load a PNG file.

Despite what some people say, Java is fast and has even beaten C/C++ on occasion--mostly where there are a lot of allocations and deallocations and the program isn't starting up and shutting down frequently. The Computer Language Benchmarks Game provides some performance analysis of several tests across many languages, and Java is usually only ~10% slower. It is faster in some cases, such as their binary-trees demo. I mentioned Chris Kulla's Sunflow ray tracer earlier, which is written in Java. He states it would be 20% faster if ported to C++, but I think his implementation is currently faster than most people's ray tracers offering similar features.

I don't use Python much, but it's usually great for writing little scripts. I prefer it to Java as it provides faster programming for simple tasks. Although it is quite a bit slower than Java, it has a much faster init time, so if I'm running a script frequently, it beats loading Java's large VM every time. Maybe that's a moot point.

PHP is only useful for writing web applications. Although this heavily limits its usefulness, nothing comes close to PHP's simplicity, ability to integrate with HTML, and huge list of useful functions. I always use PHP for anything related to dynamic web content.

I'm sure many will disagree with my criteria. Some people insist Ada is the most useful language (like the creator of Ada). To each his own, but these languages have been good to me.

Reply to Comment #1
I received a comment on "The Computer Language Benchmarks Game" link, where I mentioned the test in which Java beat C/C++, telling me to "note" that the Java program used a large heap, only beat C++ on one architecture, and that other, faster C++ implementations were tested.

Language benchmarks will always lead to arguments about implementations, architectures, etc. I do not think Java is always faster than C++. I do not even think Java is as fast as C++. But I do think they can be close. My point was that Java beat C++ in one benchmark, so flatly saying "Java is slower" is a little unfair.

As I stated earlier, you can always make C++ faster than Java if you have the time and if speed is a concern. Chris Kulla's Sunflow ray tracer is fast--not fast for a Java ray tracer, but fast for a ray tracer. I'm sure it could be faster if he switched to C/C++, but he has said Java allowed faster development time at the cost of maybe 20% performance. I'd rather take a performance hit like that and have more time to make a more feature-rich application.

Wednesday, December 10, 2008

Dreamworks Interview

After being on the job hunt for a certain amount of time and seeing my carefully chosen list of possible employers start to dwindle, I started sending out tens of unsolicited resumes hoping someone would bite. One such company was Dreamworks Animation.

After putting together an impressive resume and cover letter that pitched me as their ideal candidate, I sent it off. Seconds later, I received an automated response: "The following addresses had permanent fatal errors". That sounded really bad. Not only were the errors fatal, but they were permanent and would remain so for the rest of time. I sent off another e-mail to the webmaster asking if my application had in fact been received. That earned another automated response claiming the mail server itself was down. I took it as a sign that working at this company was not meant to be. I received a phone call a few days later.

The conversation was typical for a first call--they verified my information, my interests, and what job they were trying to fill. Even though I applied for a programming position, they wanted to interview me for a technical director role. Since it was an interview, I accepted.

The Dreamworks interview process was actually quite simple for me. They scheduled a video conference and sent me down to a local FedEx Kinkos. In a large conference room filled only by myself and a TV/camera stand, I sat at the end of the table while I was grilled by Dreamworks employees.

The employees interviewed me in pairs or teams. The first team was "Team Oddball" as they called it, which asked trivia-like questions. Unlike my interviews at Intel, they actually remembered the answers to the trivia questions, which made the quiz a much more pleasant experience. One question, for example, was "If you have a three-gallon and a five-gallon bucket with unlimited water, how do you measure out four gallons?". The other questions ranged in difficulty. Most of them were pretty fun and allowed me to think and explain how I would do it.

The next pair asked more technical questions related to graphics, but it was a pretty simple interview. Subjects ranged from 2D intersections to level-of-detail. Every person that interviewed me was a technical director, which I really appreciated. I used the Q & A time to ask them about a typical day, what they liked, challenges, etc.

I was supposed to have an interview with the director of technical directors, Mark McGuire, who oversees all TDs on all movies, but he wasn't available that day. To make up for it, he called me the next day. He mostly wanted to make sure, as everyone had, that I had a very clear idea of what the position entailed. I always find that a difficult question to answer, as I don't think I can ever really understand a position unless it's a common profession or I spend a day doing it.

A few days later I received an actual offer from the company. The hiring manager explained compensation, benefits like provided breakfast and lunch, etc. He worked hard to sell me on it. Even though I asked for more time to complete the hiring process with some other companies I was interviewing with, I ended up accepting the offer. I start on the 5th of January. Although I didn't want to live in California, I'm excited to start at what appears to be a fun and dynamic company. By the way, if you haven't seen Kung Fu Panda, the humor alone sells it. The art direction is just more bang for your buck.

Thursday, November 20, 2008

FBO Multisampling

I was using pBuffers in OpenGL when FBOs were coming around. Moving to FBOs seemed like a huge increase in simplicity and functionality. I started using them all the time. However, I have always wanted to add a little bit of anti-aliasing, but never quite knew how.

Sometimes googling for OpenGL functions returns pages of garbage, and spec files don't always provide clear usage examples. Well, I found a web page by someone who had the same problem and has posted his solution, including the setup, blitting, etc.

Normally I wouldn't post a blog on a code snippet, but I think this deserves another link so more people will run across this little OpenGL gem.

Thanks, Stefan!

(click here) for multisampling in an FBO.