Monday, October 24, 2011

Upgrade to Ubuntu 11.10

Broken Upgrade Dialog - Bad Omen
With the release of 11.10, my previous version of Ubuntu started receiving popup dialogs urging me to upgrade to the latest and greatest, with options along the lines of "Don't Ask Again", "Ask Me Later", and "Upgrade Now".  I had postponed the upgrade several times because I'm usually wary of upgrading, since Ubuntu seems to have had many rough edges as of late.  I noticed "Ask Me Later" did nothing, not even close the dialog, so I had simply killed it myself.  Bad omen, I guess.  The dialog also embedded some kind of web page showing off the new features, much of it cropped off in the little dialog.  Not only was the dialog non-resizeable, so I couldn't expand this small page to see the new features, but my mouse wheel didn't work in it, so I had to use the painful overlay scrollbars introduced in 11.04 (more on that later).

Dash Improvements
There have been several improvements to the dash, including expanded right-click support.  Some launchers have more functionality: the terminal button can open new windows, and the CD/DVD button has the standard options you'd expect like "Eject" or "Open".  I'm still a little confused about which options come from where.  I can't create new windows for chrome, but I can for firefox.  I'm not sure what the distinction is, besides some internal setting allowing firefox to do this.  It seems like if there were a default, it would be to let the user open as many sessions of whatever they want.

The workspace icon still has no options to change the number of workspaces (at least via right-click).  I like to have around ten.  I'm told I can install compiz to configure this, but why can't I just get a right-click option to bring up a properties dialog?  If it's important enough to be on the Dash by default, why doesn't it get custom options?

Global Menu
The global menu is still a little rough.  As mentioned on several other sites, LibreOffice isn't configured correctly to use this menu, which isn't a big deal.  The bigger problem lies with multi-window programs like gimp, which are incredibly annoying to deal with.  If the main window is not selected, the global menu defaults to... nothing.  That means if I change a layer or select a tool, I have to go back and select the main window to access the menu options at the top of the screen.  To me, this seems to be a bug in Unity.  On the other hand, how should it behave if the subwindow has its own menu?  In the short couple of days using 11.10, this has been an extreme annoyance when using gimp.  I also don't like how the menu is hidden until mouseover, but that's another direction the Ubuntu team has taken up as a design goal (more on that later).  I don't care how familiar you get with the menu layout in your applet of choice: menu items are now harder to find, because you have to move the mouse up to the menubar, wait for the menu to appear, and then move over to it.  I'm convinced this will slow down every single trip to the menu, and I'm hoping the designers of this feature grow tired of it and disable the hiding in the next release.  While it doesn't make any sense to copy Apple's design decisions just for the sake of copying Apple, the fact that Apple doesn't hide the menubar should merit some serious thought as to why.  It's slower to use.  Period.

can just use default scroll bars

main window selected - main menu accessible
secondary window selected - no menu for you :(
Speaking of the menubar at the top, I've caught myself dragging windows close to the top of the screen and accidentally maximizing them.  Double-clicking the titlebar still works, so I don't know why a user needs a secondary option that potentially isn't what they really want.  I have multiple windows up side by side all the time, and this just makes resizing them all the more annoying.  I can work around the issue by being more careful, but it's a pointless feature that does nothing but increase the chance of the desktop manager doing something other than what I intended.

Overlay Scrollbars
While some of the features I've discussed were introduced in 11.04, I'm only seeing them now, as I've finally moved away from the classic interface and committed myself to becoming comfortable with this new version.  One feature that's changed (or rather disappeared) in Ubuntu 11 is the scrollbar as many know and love it.

The goal is to help people immerse themselves in their stuff, without being burdened with large amounts of widgetry which isn’t relevant to the thing they are trying to concentrate on. And when it comes to widgetry, there are few things left which take up more space than scrollbars. --Mark Shuttleworth

As mentioned earlier, Ubuntu designers have publicly stated the design goal to reduce the amount of "widgetry" on the page.  Mark Shuttleworth actually has a lengthy blog post on this topic and delves into great description of this new scrollbar design.

one overlay scrollbar among many
People are usually resistant to change, and I certainly have to work as hard as anyone to take things in with an open mind.  At first the new scrollbars had a webpage-javascript-menu feel to them, but I'm starting to tolerate them.  Unlike Mac OS X Lion, which has also recently removed the scrollbar in a similar fashion (to the dismay of many frustrated users), Ubuntu at least keeps a small sliver of bar on the screen at all times to give an idea of where you are on the page.

Ignoring the pomp and circumstance the Ubuntu design team seems to have made over the rollout of this feature (almost to a ridiculous level of pride), I really think it is a rather useless and even problematic feature for several reasons.

First, the display of these overlay scrollbars is application dependent.  As shown in the picture above, only one window in this particular screenshot uses the feature.  Since most of the time I'm in a web browser or my own IDE, I won't see this new feature they seem so proud of.  Having multiple styles of scrollbar on screen seems to go against years of work Linux developers have done to get a consistent visual style across applications using different libraries, yet Ubuntu obviously felt reducing widgetry in a few of their applications was more important.  While this isn't a terrible problem for me, it also makes the feature harder to get used to, as I'm now using normal scrollbars 95% of the time and the overlay scrollbars only when I open one of the Ubuntu-based apps.  I have to remember that the lack of a scrollbar doesn't mean there's nothing to scroll.  So while Ubuntu is very proud of this feature, it just creates unneeded variation between windows for very little benefit, except making Ubuntu-related screenshots prettier.

Second, when innovating new design features, it's really important (to me at least) to justify the change.  Christian Giordano, one of the main implementers of the design, has a lengthy post, Introducing Overlay Scrollbars in Unity, discussing the overall design goals of the project.  He notes that removing "widgetry"--something they've obviously succeeded at--is one of the motivations to "provide a more immersive experience".  That's a nice way of saying prettier.  Regaining screen space is hardly a problem given the 15-pixel-wide widget is a sliver on a one-to-two-thousand-pixel-wide screen.  He also adds a seemingly ignored prerequisite: the feature "shouldn’t conflict with the window resizing functionalities".  I think this has been glossed over by many critics.

shown in red, these are the areas of the border I
must now use to resize that side of the window
As shown above, resizing has indeed been affected.  The amount of content off the page determines how much border real estate one has to resize the window.  The designers added a feature so that if one hovers over the overlay for a certain amount of time (~3 seconds), it dissolves out and one can resize using the full border.  I'm not interested in waiting to resize my window.  I have a feeling this will annoy many who mouse over the scrollbar while reading to get ready to move down the page, only to click and find the scrollbar has since evaporated.

Why not just use a scroll wheel, one might ask?  Well, I have a scroll wheel, so this doesn't greatly affect me (except in certain cases, as described in the beginning, where Ubuntu has overlay scrollbars but doesn't respect the scroll wheel).  However, many users out there still don't have scroll wheels.  Around 70% of Apple computers sold are laptops with built-in touchpads, and iMacs ship with Apple's Magic Mouse, with a touch surface of sorts, as the base mouse.  I do not expect similar numbers for Ubuntu Linux users--many of whom are probably using outdated lab machines without any scroll wheel at all.

Despite all my whining, it is apparently possible to disable the feature, but again, why put it in there in the first place?  While Ubuntu has been a pleasure over the last few years as a stable and user-friendly operating system, the new UI drive championed by Shuttleworth has made for rockier transitions between iterations, and bugs have become more and more common during upgrades.  While I'll probably continue with Ubuntu for a while, I'm hoping they'll change their focus to something that will really improve my desktop experience... like building device drivers.

I got a bit of feedback from a few sources regarding my closing comment on device drivers--one in particular angrily commenting that it's device manufacturers' responsibility to create Linux drivers.  I try not to get into debates online, as I feel one can argue online with a stranger for hours over something we might find quick consensus on in person.  I've seen this phenomenon even with family members, where online arguments flare up over otherwise menial things.

Anyway, despite my preference for Linux, I'm of the mind that a manufacturer has little moral or ethical obligation to write Linux drivers.  While driver availability for certain products has certainly improved a lot in the last ten years, it's still in a very poor, incomplete state.  Saying manufacturers "should" write their own drivers for non-profitable platforms is a little idealistic, as shown by the fact that so many new and popular devices don't support Linux.  As such, I was just stating that Ubuntu's engineers would, in my opinion, do better joining the ranks of the many other Linux-advocating contributors coordinating, creating, and polishing Linux drivers.  And that's all I have to say about that.

Sunday, September 11, 2011

Single-Pass Wireframe Rendering Explained and Extended

I got a pretty good followup to my previous post on how to implement "Single-Pass Wireframe Rendering".  I thought I'd take a second to briefly explain how the edge detection actually works.

As I said before, the basic idea is that we want each fragment to know how far it is from any given edge so we can color it appropriately.  First, we have to remember that OpenGL is an interpolation engine, and it is very good at that.  When some attribute is assigned to a vertex and the triangle is rasterized, each fragment created inside that triangle gets an interpolated value of that attribute depending on how far it is from the surrounding vertices.

Shown below, a triangle has a three-channel attribute assigned to each vertex.  In the fragment shader, those values will be some mixture of the three channels.  You'll notice, for example, that the triangle in the image below gets less red from left to right: the red channel starts at 1 at the left vertex, but goes to zero as one moves toward the right side.

That's the basic gist of attribute interpolation.  The nice thing about modern GPUs is that they let us put anything we want in these attributes.  They may be used in the fragment shader as colors, normals, texture coordinates, etc., but OpenGL doesn't really care how you plan on using these attributes, as they all get interpolated the same way.  In the fragment shader, we can decide how to make sense of the interpolated values.

Finding Edges Programmatically
Notice the pixels along the top-left edge of the triangle above have a low green value, because the left and top vertices have no green in them, so pixels moving towards that edge have less and less green.  Similarly, the right side of the triangle has pretty much no red in it, because the vertices above and below it have no red.  The same holds true for the bottom edge of the triangle having no blue.  The insight to be gained here, and the one used in "Single-Pass Wireframe Rendering", is that values along the edges of the triangle will have a very low value in at least one of the three channels.  If ANY of the channels is close to zero, that fragment is sitting on or very near an edge.
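That edge test is easy to sketch on the CPU.  Below is a minimal C++ illustration (the function names are mine, not from the paper): with the identity colors (1,0,0), (0,1,0), (0,0,1) assigned to the three vertices, the interpolated attribute at a fragment is just its barycentric weight vector, and the fragment is near an edge whenever its smallest channel falls below a threshold.

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// With vertex attributes (1,0,0), (0,1,0), (0,0,1), the rasterizer's
// interpolated value at a fragment is exactly its barycentric weights.
std::array<float,3> interpolated(float w0, float w1, float w2) {
    return {w0, w1, w2}; // weights sum to 1 inside the triangle
}

// The fragment sits on or very near an edge if ANY channel is close to zero.
bool nearEdge(const std::array<float,3>& c, float threshold) {
    return std::min({c[0], c[1], c[2]}) < threshold;
}
```

The centroid (weights 1/3 each) is far from every edge, while the midpoint of the edge opposite vertex 0 has weights (0, 0.5, 0.5) and trips the test.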

images taken from nVidia's Solid Wireframe paper

We could just assign values like these and render edges wherever some channel falls below a threshold.  The problem, though, is that these values aren't in viewport space, and we probably want to measure our line thickness in pixels.  Otherwise the edge thickness on our screen would change depending on the size of the triangle (maybe you want that, whatever).

As shown in the picture above, we calculate the altitude of each vertex in screen space and store it in a vertex attribute.  In the fragment shader, the value (d0,d1,d2) will be some interpolation of the three vertex attributes.  As described above, if any of the channels d0, d1, or d2 is close to zero, that means we're sitting on an edge.
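For a concrete check of that altitude math, here's a CPU sketch mirroring the shader's area/length computation (the function name is mine): the cross product gives the doubled triangle area, and doubled-area divided by base-length is the height over that base.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };
static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static float len(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

// d[i] receives the altitude of vertex i in pixels: the distance from
// vertex i to its opposing edge, computed as doubled-area / base-length.
void altitudes(Vec2 p0, Vec2 p1, Vec2 p2, float d[3]) {
    Vec2 v0 = sub(p2, p1), v1 = sub(p2, p0), v2 = sub(p1, p0);
    float area = std::fabs(v1.x * v2.y - v1.y * v2.x); // doubled area
    d[0] = area / len(v0); // altitude from p0 to edge p1-p2
    d[1] = area / len(v1);
    d[2] = area / len(v2);
}
```

For the 3-4-5 right triangle (0,0), (4,0), (0,3), this gives altitudes of 2.4, 4, and 3 pixels.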

nVidia has an excellent paper called Solid Wireframe, which goes into a bit more detail how this works and provides some really great illustrations.

Excluding Edges on polygons
While rendering edges is nice, I may not want every edge of a given triangle rendered.  For example, if I have some five-sided concave polygon that I break into triangles using a technique like ear clipping (pdf), I may not want the interior edges inside the polygon rendered.

A simple way to exclude an edge of a given polygon is to make sure that its channel never goes to zero, by setting that channel to some high value Q at the other vertices.  Q can be any value higher than your maximum edge width.  In my program, I set it to 100, since I'll probably never be drawing edges thicker than that.

If Q is relatively high, fragments along that edge will not have low values in any channel
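To see why this works, note that along any edge a channel is linearly interpolated between just that edge's two endpoint values.  A tiny sketch (names are mine; Q = 100 as in my program):

```cpp
#include <cassert>

// Along an edge, channel k is a linear blend of its values at the edge's
// two endpoints. For an excluded edge both endpoints carry Q instead of 0,
// so the channel never drops below Q anywhere along the edge and the
// "channel < edge width" test can never fire there.
float along(float a, float b, float t) { return a + (b - a) * t; }

bool shadedAsEdge(float endpointA, float endpointB, float t, float edgeWidth) {
    return along(endpointA, endpointB, t) < edgeWidth;
}
```

With Q = 100 at both endpoints, no fragment along the edge is shaded as an edge (for any edge width under 100); with the normal 0 values, the whole edge is.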
Designating which edges to exclude requires an additional vertex attribute sent down from the program.  I attach a float to each vertex, 0 or 1, marking whether or not I want the corresponding edge excluded from rendering.  I then update my geometry shader accordingly.

my updated vertex shader...

#version 120
#extension GL_EXT_gpu_shader4 : enable
in vec3 vertex;
in vec4 color;
in vec3 normal;
in float excludeEdge;
varying vec3 vertWorldPos;
varying vec3 vertWorldNormal;
varying float vertExcludeEdge;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
vertWorldPos = (objToWorld * vec4(vertex,1.0)).xyz;
vertWorldNormal = (normalToWorld * vec4(normal,1.0)).xyz;
gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
vertExcludeEdge = excludeEdge;
gl_FrontColor = color;
}

and my updated geometry shader...

#version 120
#extension GL_EXT_gpu_shader4 : enable
#extension GL_EXT_geometry_shader4 : enable
varying in vec3 vertWorldPos[3];
varying in vec3 vertWorldNormal[3];
varying in float vertExcludeEdge[3];
varying out vec3 worldNormal;
varying out vec3 worldPos;
uniform vec2 WIN_SCALE;
noperspective varying vec3 dist;
void main(void)
{
float MEW = 100.0; // max edge width
// adapted from 'Single-Pass Wireframe Rendering'
vec2 p0 = WIN_SCALE * gl_PositionIn[0].xy/gl_PositionIn[0].w;
vec2 p1 = WIN_SCALE * gl_PositionIn[1].xy/gl_PositionIn[1].w;
vec2 p2 = WIN_SCALE * gl_PositionIn[2].xy/gl_PositionIn[2].w;
vec2 v0 = p2-p1;
vec2 v1 = p2-p0;
vec2 v2 = p1-p0;
float area = abs(v1.x*v2.y - v1.y*v2.x);
dist = vec3(area/length(v0),vertExcludeEdge[1]*MEW,vertExcludeEdge[2]*MEW);
worldPos = vertWorldPos[0];
worldNormal = vertWorldNormal[0];
gl_Position = gl_PositionIn[0];
EmitVertex();
dist = vec3(vertExcludeEdge[0]*MEW,area/length(v1),vertExcludeEdge[2]*MEW);
worldPos = vertWorldPos[1];
worldNormal = vertWorldNormal[1];
gl_Position = gl_PositionIn[1];
EmitVertex();
dist = vec3(vertExcludeEdge[0]*MEW,vertExcludeEdge[1]*MEW,area/length(v2));
worldPos = vertWorldPos[2];
worldNormal = vertWorldNormal[2];
gl_Position = gl_PositionIn[2];
EmitVertex();
EndPrimitive();
}

Without edge removal, each triangle has its own edge

Triangle mesh rendered excluding certain edges

Saturday, September 10, 2011

Single-Pass Wireframe Rendering

It came time for me to add a wireframe to my mesh, and just when I was about to do the standard two-pass approach--rendering my mesh faces and then rendering the wireframe with GL_LINES over them--I came across Single-Pass Wireframe Rendering, a simple idea for rendering faces and lines in just one pass.  The idea, simply put, is to add some smarts to the fragment code so that when it's rendering fragments close to the sides of a face, it blends in an edge color.  The paper gives several reasons why this is a better approach, including better performance and some really cool added abilities.  The best part is it's very easy to add to existing code without much modification.

Adding the Geometry Shader
My code already had a basic vertex/fragment shader pair for doing some basic lighting, and I just needed to add a geometry shader in between that could add an attribute to each vertex specifying how far each fragment would be from the nearest edge in screen space.

Here's the geometry shader taken almost straight from their full paper off their site...

#version 120
#extension GL_EXT_gpu_shader4 : enable
#extension GL_EXT_geometry_shader4 : enable
varying in vec3 vertWorldPos[3];
varying in vec3 vertWorldNormal[3];
varying out vec3 worldNormal;
varying out vec3 worldPos;
uniform vec2 WIN_SCALE;
noperspective varying vec3 dist;
void main(void)
{
// taken from 'Single-Pass Wireframe Rendering'
vec2 p0 = WIN_SCALE * gl_PositionIn[0].xy/gl_PositionIn[0].w;
vec2 p1 = WIN_SCALE * gl_PositionIn[1].xy/gl_PositionIn[1].w;
vec2 p2 = WIN_SCALE * gl_PositionIn[2].xy/gl_PositionIn[2].w;
vec2 v0 = p2-p1;
vec2 v1 = p2-p0;
vec2 v2 = p1-p0;
float area = abs(v1.x*v2.y - v1.y*v2.x);

dist = vec3(area/length(v0),0,0);
worldPos = vertWorldPos[0];
worldNormal = vertWorldNormal[0];
gl_Position = gl_PositionIn[0];
EmitVertex();
dist = vec3(0,area/length(v1),0);
worldPos = vertWorldPos[1];
worldNormal = vertWorldNormal[1];
gl_Position = gl_PositionIn[1];
EmitVertex();
dist = vec3(0,0,area/length(v2));
worldPos = vertWorldPos[2];
worldNormal = vertWorldNormal[2];
gl_Position = gl_PositionIn[2];
EmitVertex();
EndPrimitive();
}

If you're already familiar with the vertex/fragment shader pipeline, which has been around quite a few years longer than the geometry shader, you'll recognize nothing too out of the ordinary.  It takes a world position and normal, which are basically just passed off to the fragment shader for lighting purposes.  Although I've done quite a bit of GLSL, this was my first attempt at using a geometry shader, and once I learned the basic idea, I found it pretty intuitive.

First, the varying inputs from the vertex shader come in as arrays--one element for each vertex.  The names have to match up, so for the in vec3 vertWorldPos[3] attribute, there must be a corresponding out vec3 vertWorldPos designated in the vertex shader.  The exception is predefined variables like gl_Position, which comes in as gl_PositionIn[].  Not sure why the OpenGL designers decided to add those two letters, but whatever.

WIN_SCALE is the screen size, which we multiply by the vertex position XY.  This takes our vertex positions in viewport space and converts them to screen space, since we want to measure our distances in pixels in the fragment shader.  That's followed by some basic trig to calculate the area of the triangle, which is used to find the altitude of each vertex (the closest distance to the opposing edge).  Because the altitude is already in screen space, the noperspective keyword is added to disable perspective correction.

The geometry shader is responsible for actually creating the primitives via the EmitVertex() and EndPrimitive() functions.  When EmitVertex() is called, it sends a vertex down the pipeline with attributes based on whatever the out attributes happen to be set to at the time.  EndPrimitive() just tells OpenGL that the vertices already sent down are ready to be rasterized as a primitive.

The geometry shader can actually create additional geometry on the fly, but that comes with some caveats.  We must designate in our C++ code an upper bound on how many vertices we might create.  This geometry shader doesn't create any additional geometry, but it's still useful, as it provides knowledge of the neighboring vertices needed to calculate each outgoing vertex's altitude.

Setting Up the Geometry Shader
Using Qt, the geometry shader is compiled just like the vertex and fragment shader.

QGLShader* vertShader = new QGLShader(QGLShader::Vertex);
vertShader->compileSourceFile("mesh.vert"); // shader file names here are illustrative
QGLShader* geomShader = new QGLShader(QGLShader::Geometry);
geomShader->compileSourceFile("mesh.geom");
QGLShader* fragShader = new QGLShader(QGLShader::Fragment);
fragShader->compileSourceFile("mesh.frag");
QGLShaderProgramP program = QGLShaderProgramP(new QGLShaderProgram(parent));
program->addShader(vertShader);
program->addShader(geomShader);
program->addShader(fragShader);

The only other adjustment is when we bind our shader.  Because the geometry shader can create more geometry than was input, it requires giving OpenGL a heads up on how much geometry you might create.  You don't necessarily have to create all the vertices you allocate; it's just a heads up for OpenGL.  You can also configure the geometry shader to output a different type of primitive than it takes in, like creating GL_POINTS from GL_TRIANGLES.  Because this geometry shader just takes a triangle in and outputs a triangle, we can set the number of outgoing vertices to the number going in.  GL_GEOMETRY_INPUT_TYPE, GL_GEOMETRY_OUTPUT_TYPE, and GL_GEOMETRY_VERTICES_OUT need to be specified prior to linking the shader.

QGLShaderProgram* meshShader = panel->getShader();

// geometry-shader parameters must be applied prior to linking
// (using Qt 4.7's QGLShaderProgram geometry API; a geometry shader
// outputting triangles emits GL_TRIANGLE_STRIP primitives)
meshShader->setGeometryInputType(GL_TRIANGLES);
meshShader->setGeometryOutputType(GL_TRIANGLE_STRIP);
meshShader->setGeometryOutputVertexCount(3);
meshShader->link();

meshShader->setUniformValue("WIN_SCALE", QVector2D(panel->width(),panel->height()));
meshShader->setUniformValue("objToWorld", objToWorld);
meshShader->setUniformValue("normalToWorld", normalToWorld);
meshShader->setUniformValue("cameraPV", cameraProjViewM);
meshShader->setUniformValue("cameraPos", camera->eye());
meshShader->setUniformValue("lightDir", -camera->lookDir().normalized());

We also need to modify the fragment shader to take this distance variable into account to see if our fragment is close to the edge.

#version 120
#extension GL_EXT_gpu_shader4 : enable
varying vec3 worldPos;
varying vec3 worldNormal;
noperspective varying vec3 dist;
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec4 singleColor;
uniform float isSingleColor;
void main() {
// determine frag distance to closest edge
float nearD = min(min(dist[0],dist[1]),dist[2]);
float edgeIntensity = exp2(-1.0*nearD*nearD);
vec3 L = lightDir;
vec3 V = normalize(cameraPos - worldPos);
vec3 N = normalize(worldNormal);
vec3 H = normalize(L+V);
vec4 color = isSingleColor*singleColor + (1.0-isSingleColor)*gl_Color;
float amb = 0.6;
vec4 ambient = color * amb;
vec4 diffuse = color * (1.0 - amb) * max(dot(L, N), 0.0);
vec4 specular = vec4(0.0);
// blend between edge color and normal lighting color
gl_FragColor = (edgeIntensity * vec4(0.1,0.1,0.1,1.0)) + ((1.0-edgeIntensity) * vec4(ambient + diffuse + specular));
}

And that's it! It takes a bit more work to get it working with quads, but once done you can do some pretty wild and awesome tricks as shown on the author's wireframe site.
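The exp2 falloff used for the blend weight gives a smooth, roughly Gaussian edge profile.  A quick numeric check of the fragment shader's formula (translated to C++; the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// The fragment shader's blend weight: 1.0 directly on an edge, falling
// off smoothly with the squared pixel distance to the nearest edge.
float edgeIntensity(float nearD) {
    return std::exp2(-1.0f * nearD * nearD);
}
```

A fragment right on the edge gets full edge color, one a pixel away gets a 50/50 blend, and two pixels out the edge color has all but vanished.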



And here's the vertex shader for reference.  As you can see, it's quite simple, because some of the processing it did has been moved to the geometry shader.  Nothing here really even has to do with the edge rendering.

#version 120
#extension GL_EXT_gpu_shader4 : enable
in vec3 vertex;
in vec4 color;
in vec3 normal;
varying vec3 vertWorldPos;
varying vec3 vertWorldNormal;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
vertWorldPos = (objToWorld * vec4(vertex,1.0)).xyz;
vertWorldNormal = (normalToWorld * vec4(normal,1.0)).xyz;
gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
gl_FrontColor = color;
}

Wednesday, September 07, 2011

Sunshine: Fixed Normals

I'm finally starting to make some noticeable headway on my Sunshine app.  It feels like everything I do has so little impact on the UI, but then I tell myself it's all "infrastructure" and soon I'll be able to start adding features like crazy.  I'm probably lying to myself.

Imported lamp OBJ with calculated normals
Infrastructure - boost::python bindings
After a lot of help from the boost::python mailing list, I've managed to expose my Scene object to python.  Although not a lot of the C++ code is exposed yet, the "infrastructure" for it is there, so I can expose classes and individual functions quickly on a need-to-expose basis.  Right now I have a python script, compiled into the binary as a Qt resource, that I use to import OBJs.  Although it's fairly basic (lacks material support, ignores normals/UVs), it would be fairly trivial to import other geometry types with very little effort.

Fixed Normals
As I mentioned, I'm ignoring the normals provided in the lamp OBJ for now and recomputing the normals per face (no smoothing).  I've had a bug in the fragment shader for the longest time that I ignored until I started moving the cube mesh away from the origin.  Looking at the GLSL, it turned out I was transforming my incoming normal by my object-to-world matrix.  That let the translation component muck up the vector.  I ended up adding a separate normal object-to-world matrix that right now just consists of the model rotation matrix.  It looks a lot better!
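The bug boils down to homogeneous coordinates: with w = 1, the translation column of a 4x4 matrix gets added to the vector.  A minimal sketch of just the translation part of such a transform (names are mine):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Applying a pure translation (tx,ty,tz) to a homogeneous column vector:
// the translation is scaled by w, so points (w = 1) move, but direction
// vectors like normals (w = 0) are left untouched.
Vec4 translateBy(float tx, float ty, float tz, Vec4 v) {
    return Vec4{v.x + tx * v.w, v.y + ty * v.w, v.z + tz * v.w, v.w};
}
```

Pushing a normal through the full object-to-world matrix with w = 1 skews it by the model's translation, which was exactly the bug; a rotation-only normal matrix (or w = 0) avoids it.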

Basic Features
Import OBJs
Write python-based mesh importers
Tumble around scene in Maya-like camera
Render "test" to aqsis window

Next Step
Add a face mode and some tools (in python?)
Send scene to aqsis
Make render settings (resolution at least) adjustable


Saturday, September 03, 2011

Adding Python Support using Boost::Python

Choosing an Embedded Language
In my multi-year toy project, I decided it would be useful to incorporate an embedded programming language into the app to make adding features a little easier.  It basically came down to three languages I thought would be appropriate: python, javascript, and lua.  Lua is known for being lightweight and fast, and is used in many games, like World of Warcraft, to provide scripting to the user.  Javascript seems to have had a bit of a resurgence in recent years in terms of stepping outside web development.  Both languages have a variety of implementations that seemed workable for embedding into an application, including luabind for lua and V8 (produced by Google) for javascript.  Both provide a reasonable interface for mapping C++ classes.

Although both languages seemed appealing, I wasn't really satisfied with how the mappings were written to expose the C++ classes.  Also, I have much more experience with python, so I had to decide if I wanted to learn how to bind a scripting engine to my application AND learn a new scripting language.  I've done quite a bit of javascript, but never for a large application, and I wasn't sure how well things would map between it and C++, as neither lua nor javascript technically supports objects the same way python does.  I finally decided just to go with python.  Although not as fast or lightweight as lua or javascript, I felt it provided a better one-to-one mapping to my classes, and figured users might be more comfortable with a more C-like language for application scripting.

Which Python Binding?
I ended up trying several different libraries to embed python in my app.  PythonQt seemed like a good candidate, as I was writing my app with the Qt libraries, but I encountered a few strange bugs, and the community seemed to be stagnating--unfortunate, as the API seemed really intuitive.  Both SIP and SWIG are popular for binding, but both require a special syntax in external files, and I wanted to modify my qmake build as little as possible and didn't want to learn a new syntax.  After finally experimenting with boost::python, I found it let me write my mappings inside C++ without learning any new syntax or changing much in my build system.

Using boost::python
I had a Scene class, which naturally handles everything in my scene, that I wanted to expose to python.  boost::python has a special macro called BOOST_PYTHON_MODULE, which puts a class into a particular module that can then be imported into python.  After wrapping my Scene class with the class_ function, I could import the Scene class into python from the "scene" module.  boost::noncopyable is an optional argument noting that the Scene object should never be passed to python by value, since my scene might be rather large in memory and I didn't want multiple copies.  This is more of a compiler rule--I still have to make sure I'm not passing the scene by value--but with it I get a compiler error if I try.

  class_<Scene, boost::noncopyable>("Scene");

I have also been trying to use smart pointers for heap-allocated objects.  I started out using QSharedPointer, but boost::shared_ptr is already supported by boost::python, so I ended up switching my smart pointers over to boost's.
typedef boost::shared_ptr<Scene> SceneP;

Sending Yourself to Python
Once that was set up, I could pass my Scene object over to python.  I immediately hit a snag: particular to my Scene class, the Scene itself contains my python engine and calls the python code.

      object ignored = exec(EXAMPLE_PY_FUNCTION, pyMainNamespace);
      object processFileFunc = pyMainModule.attr("Foo").attr("processFile");
      processFileFunc(this, "test.txt"); // "this" being the Scene object

Although I created the scene in a smart pointer outside the class, I didn't have access to that smart pointer inside member functions unless I passed it in, and it seemed unnecessary to pass a Scene member function a smart pointer to its own object.  I couldn't use "this" inside the member function, as boost::python wouldn't know how to keep it alive given that other boost pointers were already pointing to my Scene object.  I couldn't just create a shared_ptr in the function either, because I didn't want my Scene deleted when that shared pointer went out of scope at function return.

I was actually getting the error below because boost didn't know how to deal with the this pointer without passing it by value (which I explicitly said I didn't want copied, right?).

Error in Python: : No to_python
(by-value) converter found for C++ type: Scene

It turns out boost::python has a special way to deal with this situation using a class called boost::enable_shared_from_this, which my Scene class can inherit from.

class Scene : public boost::enable_shared_from_this<Scene>

boost::enable_shared_from_this provides a shared_from_this() function that lets the Scene object create shared pointers to itself inside its own member functions.

      object ignored = exec(EXAMPLE_PY_FUNCTION, pyMainNamespace);
      object processFileFunc = pyMainModule.attr("Foo").attr("processFile");
      // pass the python function a shared pointer instead of "this"
      processFileFunc(shared_from_this(), "test.txt"); // can't use boost::shared_ptr(this) either

After updating the member function, the error went away, and I could then send my Scene object to python both inside and outside the member function.  Python now has access to my Scene object, and I can start exposing more functions and variables inside my Scene class.
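The same pattern can be demonstrated with the standard library's std::enable_shared_from_this (the std:: equivalent of the boost class; this toy Scene is just for illustration): shared_from_this() hands out a pointer that shares ownership with the existing shared_ptr, rather than starting a second, unrelated control block the way shared_ptr<Scene>(this) would.

```cpp
#include <cassert>
#include <memory>

// Toy stand-in for the Scene class above, using the std:: equivalent
// of boost::enable_shared_from_this.
struct Scene : std::enable_shared_from_this<Scene> {
    // what a member function would hand to the binding instead of "this"
    std::shared_ptr<Scene> handleForBinding() { return shared_from_this(); }
};
```

Both pointers share one reference count, so the Scene isn't deleted when the inner pointer goes out of scope.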

Below is a short working example of sending a shared pointer to python inside and outside of a member function (or you can look here).  Special thanks to the boost::python mailing list, which was very helpful in getting me going.

#include <iostream>

#include <boost/python.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>

using namespace boost::python;

object pyMainModule;
object pyMainNamespace;

#define EXAMPLE_PY_FUNCTION \
  "from scene import Scene\n" \
  "class Foo(object):\n" \
  "  @staticmethod\n" \
  "  def processFile(scene, filename):\n" \
  "    print('here')\n"

std::string parse_python_exception();

class Scene : public boost::enable_shared_from_this<Scene> {
public:
  void sendYourselfToPython() {
    try {
      object ignored = exec(EXAMPLE_PY_FUNCTION, pyMainNamespace);
      object processFileFunc = pyMainModule.attr("Foo").attr("processFile");
      // pass the python function a shared pointer instead of "this"
      processFileFunc(shared_from_this(), "test.txt");
    } catch (boost::python::error_already_set const &) {
      std::string perror = parse_python_exception();
      std::cerr << "Error in Python: " << perror << std::endl;
    }
  }
};

typedef boost::shared_ptr<Scene> SceneP;

BOOST_PYTHON_MODULE(scene) {
  class_<Scene, boost::noncopyable>("Scene");
}

int main(int argc, char **argv) {
  std::cout << "starting program..." << std::endl;

  // the module must be registered before the interpreter is initialized
  PyImport_AppendInittab("scene", &initscene);
  Py_Initialize();

  pyMainModule = import("__main__");
  pyMainNamespace = pyMainModule.attr("__dict__");

  boost::python::register_ptr_to_python< boost::shared_ptr<Scene> >();

  SceneP scene(new Scene());

  // sending Scene object to python inside member function
  scene->sendYourselfToPython();

  try {
    object ignored = exec(EXAMPLE_PY_FUNCTION, pyMainNamespace);
    object processFileFunc = pyMainModule.attr("Foo").attr("processFile");

    // send Scene object to python using smart pointer
    processFileFunc(scene, "test.txt");
  } catch (boost::python::error_already_set const &) {
    std::string perror = parse_python_exception();
    std::cerr << "Error in Python: " << perror << std::endl;
  }

  return 0;
}

// taken from
namespace py = boost::python;
std::string parse_python_exception() {
    PyObject *type_ptr = NULL, *value_ptr = NULL, *traceback_ptr = NULL;
    PyErr_Fetch(&type_ptr, &value_ptr, &traceback_ptr);
    std::string ret("Unfetchable Python error");
    if (type_ptr != NULL) {
        py::handle<> h_type(type_ptr);
        py::str type_pstr(h_type);
        py::extract<std::string> e_type_pstr(type_pstr);
        if (e_type_pstr.check())
            ret = e_type_pstr();
        else
            ret = "Unknown exception type";
    }
    if (value_ptr != NULL) {
        py::handle<> h_val(value_ptr);
        py::str a(h_val);
        py::extract<std::string> returned(a);
        if (returned.check())
            ret += ": " + returned();
        else
            ret += std::string(": Unparseable Python error: ");
    }
    if (traceback_ptr != NULL) {
        py::handle<> h_tb(traceback_ptr);
        py::object tb(py::import("traceback"));
        py::object fmt_tb(tb.attr("format_tb"));
        py::object tb_list(fmt_tb(h_tb));
        py::object tb_str(py::str("\n").join(tb_list));
        py::extract<std::string> returned(tb_str);
        if (returned.check())
            ret += ": " + returned();
        else
            ret += std::string(": Unparseable Python traceback");
    }
    return ret;
}

Compiling the Code
To compile the code, I used python-config to get the includes and flags.  python-config is a simple utility that reports the paths of your python headers and libs for whichever version of python is installed and designated on your system.  It's a useful utility, as I usually have several versions of python on my machine at a time, and it's especially nice not having to hard-code your application's build system to a particular version of python.

python-config --includes
python-config --libs

g++ test.cpp -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -lpthread -ldl -lutil -lm -lpython2.7 -lboost_python
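As an aside, the same paths are available programmatically through python's standard sysconfig module, which can be handy if your build script is itself written in python.  A minimal sketch (the exact directory will vary per system):

```python
import sysconfig

# header directory for the running interpreter, e.g. /usr/include/python2.7
include_dir = sysconfig.get_path('include')
print("-I" + include_dir)
```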

Sunday, August 07, 2011

Performance Gains in Jython 2.5.2

Jython, the java implementation of the python language, has long been slower than its cpython counterpart.  I've personally seen some simple benchmarks from years past, such as this, where jython would run 3-10 times slower.  Benchmarks between the two implementations can be tricky, as I'm sure there are some things one does better than the other depending on each developer's design decisions.  jython 2.5.2 was released this last March and notes several performance improvements among its updated features.

I wanted to do a quick test just to compare the latest performance improvements implemented by jython's development team.  I had a simple ray tracer lying around (link) from when I was learning python at my current employer.  This is a very simple script that renders the scene at 512x512 resolution in about half a minute.  The scene has no acceleration structures to speak of and no texturing, so it's just a series of brute-force calculations, with pretty much zero to cache compared to a more complex scene.

Scene consisting of a procedurally-checkered sphere, a reflective sphere, and a Lambertian plane lit from above by a point light.  


I repeatedly ran the script using each implementation and averaged the results.  I reran the cpython implementation a couple more times to make sure a jump in one run was a fluke.  Apparently it was.

  • cpython 2.7.1 - 34s, 38s, 33s, 32s, 33s
  • jython 2.5.1 - 37s, 37s, 36s
  • jython 2.5.2 - 26s, 27s, 26s
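Since timing was done inside python (which is why JVM startup doesn't show up in the numbers), the harness looks roughly like the sketch below.  The render() entry point and run count here are hypothetical stand-ins, not taken from the original script:

```python
import time

def render():
    # stand-in for the ray tracer's entry point
    total = 0.0
    for i in range(100000):
        total += i * 0.5
    return total

def time_runs(func, runs=3):
    """Time several runs of func and return per-run durations in seconds."""
    durations = []
    for _ in range(runs):
        start = time.time()  # wall clock; works the same on cpython and jython
        func()
        durations.append(time.time() - start)
    return durations

durations = time_runs(render, runs=3)
print("average: %.2fs" % (sum(durations) / len(durations)))
```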

Just a few thoughts and comments:

  • jython 2.5.2 seems to definitely have some noticeable performance increases over 2.5.1
  • jython may now be faster than cpython in certain situations (like the script above)
  • jvm startup time was not taken into account as timing was done inside python
  • for both jythons, JIT seemed to consistently kick in at around 10% of the run (I'm assuming it was JIT), where the script started to run faster
Although I use cpython for pretty much all my work, I'm pretty happy with the results and am excited to see jython make a bit of progress.  While one implementation may be faster than another in different circumstances, I think this shows jython's performance is at least comparable to cpython for certain tasks.  Maybe future decisions to use one over the other will now depend on which libraries are more appropriate for a given project.  Of course, that may also change, as I see importing C modules on the jython TODO list.


A handful of commenters asked for pypy to be included in the test.  I downloaded the latest x86 build for my system and reran the script three times--all three runs completing in eleven seconds.

Someone also asked if I had tried the new Java 7 build, and the answer is yes.  I originally downloaded java 7 to compare jython 2.5.2 against the newer version of java (which led to this benchmark).  I've heard a couple of features in particular slated for Java 7 would provide huge performance gains over Java 6 for the JVM scripting languages.  After running the two JVMs, however, I couldn't see much of a difference in performance.  Because of the newness of Java 7, I'm going to do some more investigating to make sure I had everything set up correctly to take advantage of these improvements.


Tuesday, May 03, 2011

My thoughts on Ubuntu Unity

Over the weekend, I made the painful mistake of upgrading to Natty Narwhal 11.04, the newest release of Ubuntu Linux.  I was previously running 10.04 and realized I was stuck with Qt 4.6 when I desperately needed a feature in 4.7.  After starting the upgrade process, I realized I had actually missed the previous upgrade, 10.10, which was required before moving to 11.04.  I installed that and found it contained Qt 4.7.  I could have stopped there and been content, but I didn't.

I love Ubuntu for various reasons, but mostly for its usability.  I can do okay on other distributions, but as my life gets busier and busier, I need my operating system to "just work".  Sometimes I'm tempted to upgrade my distribution the instant a new release comes out, but I usually regret it, getting bitten by stupid bugs and such.  Jumping to Natty Narwhal 11.04 was no different.  I had read several negative reviews of Natty Narwhal and desperately wanted to believe they were wrong, but I sadly had to switch back to the classic interface.  And it only took me a few days.

Here are a few of the hurdles I hit--all of which having to do with Ubuntu's new Unity system.
  1. Restarting my machine, I couldn't log into X.  I found that my nvidia driver module wasn't being built for the upgraded kernel.  Somehow installing a package while skipping a seemingly-required module is totally okay.  To me, though, it just meant booting continually hung on the Ubuntu "five-dot" splash screen.  I had to go get the linux-headers package for my kernel and then reinstall the nvidia-current package, which rebuilt the module correctly.  Couldn't the former be a dependency of the latter?  
  2. Unity's "taskbar" is a little buggy.  I'm not sure of the rules for when it stays out or autohides, but it certainly wasn't consistent
  3. I don't consider myself a power user, but I like to have widgets like workspace tracking and the CPU histogram
  4. I missed all my icons at the top of the screen.  I open multiple terminals all the time for programming.  With the Unity taskbar, when I click on the terminal icon, it will either create one terminal, or take me to the workspace where my terminal is already opened.  I just want a new terminal window
  5. It's hard seeing what's on the screen without the traditional taskbar.  Windows got buried under each other and I didn't know where they were (the taskbar in Unity doesn't help, while the more classic taskbar shows me exactly what's running on this workspace).  Also, how do I change the number of workspaces?  I had ten--and I used every one--and now I have four
  6. This is a minor complaint that I can get used to, but Unity uses the "Universal Menubar" style à la Mac OS X, where the focused application has its menubar at the top of the screen.  This isn't necessarily bad, but it hides all the menus until you mouse over the bar.  Why?  
  7. I actually had some random bugs including loading windows that appeared with no border and certain programs not receiving mouse input after I returned from a different workspace

To me Unity was like Knight and Day with Tom Cruise.  I sat down to watch it and I really, really wanted to like it despite all the negative reviews.  But it just didn't work.  Like this mediocre movie, it wasn't a horrible premise (besides a cliche hacker and a mystical power source the size of a AA battery), but it just wasn't executed correctly.  I believe most of these issues are fixable, but I'm afraid this new desktop has left a bad taste in a lot of people's mouths.  

Sunday, April 24, 2011

Integrating Sunflow

I've been playing around with utilities for moving lights around a scene and rendering it in Sunflow for years.  I've done so much rewriting just to destroy it over and over again that I decided to take much smaller bites and try doing something very tiny with the program.

  • import a PLY object (buggy)
  • add an attenuated spotlight
  • camera controls (incomplete)
  • adjust basic global illumination
My silly toy

no bounces

Saturday, March 26, 2011

Using jQuery UI's date picker on all django forms

After getting over the initial learning curve of django's forms, I've begun to really appreciate the level of customizability the developers have incorporated into the API.  By default, when sending a model to a form, any DateField members are converted to form DateField instances, which translates to a simple textfield.  This isn't very attractive and quickly leads to formatting complaints from users (and supervisors).  There's another widget django provides for picking dates--three comboboxes for month, day, and year--but it's not a very elegant solution.  I thought it would be nice if, by default, all my forms could use jQuery UI's datepicker.  Here's what I did.

First, after some searching online, it seemed several people have experienced similar issues.  I happened to run across this answer on stackoverflow for customizing widgets for all forms.

Here's my final code.

from django.db import models
from django.forms import ModelForm

from myapp.models import Project  # wherever your Project model is defined

def make_custom_datefield(f):
    formfield = f.formfield()
    if isinstance(f, models.DateField):
        formfield.widget.format = '%m/%d/%Y'
        formfield.widget.attrs.update({'class': 'datePicker', 'readonly': 'true'})
    return formfield

class ProjectForm(ModelForm):
    formfield_callback = make_custom_datefield
    class Meta:
        model = Project

On ProjectForm I specify a custom callback function for handling form fields.  This is really nice, as I can apply this code to any form I want instead of marking individual fields on a form or applying the fix to every single model.

In make_custom_datefield I do a quick check to see if the field is a model DateField instance.  If it is, I do some modifying.  First, I change the format of the widget so incoming data from the model matches jQuery's format.  (It might be possible to modify jQuery to match django instead, but whatever.)  Then I add two custom attributes to the widget, both of which map directly to html attributes of the input tag.
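To see why the format line matters: jQuery UI's datepicker default ('mm/dd/yy' in jQuery's format tokens) renders dates like 03/26/2011, so the widget format must produce the same string or pre-filled dates get mangled.  A quick sanity check of the '%m/%d/%Y' format used above (the sample date is invented):

```python
import datetime

# the widget format set in make_custom_datefield
fmt = '%m/%d/%Y'

due = datetime.date(2011, 3, 26)
rendered = due.strftime(fmt)
print(rendered)  # -> 03/26/2011

# round-trip: parsing what jQuery UI posts back, using the same format
parsed = datetime.datetime.strptime('03/26/2011', fmt).date()
```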

<input class="datePicker" readonly="true" type="text" id="id_dateDue" />

The datePicker class is important so I can mark this input as a jQuery calendar in the browser.  I also mark it as readonly so users can't modify the date with bad formats.  Marking the input as read-only is a double-edged sword as it also prevents users from quickly entering dates months or years away.

Once that is working, I just need to add some code to my page that will mark all datePicker instances in javascript as being datepicker widgets.

// on page load
$(function() {
    $(".datePicker").datepicker();
});

And that's it!  I can now make jQuery UI datepickers the default widget on any django form I choose.

Saturday, February 05, 2011

PHP vs Python: A followup

I received a lot more commentary for my previous post than I anticipated, so I thought I'd followup with some comments and feedback I received.

PHP not dead, just out of use
First, the title of my previous post included the phrase "dead to me".  I in no way wanted to imply PHP is worthless or that its large community is going away.  I honestly don't care if PHP sticks around forever.  My point is that as templating libraries for the web have gotten so good, most of the code now lives outside the template, and the small benefit of embedding PHP directly into html is much less useful than it used to be.  php.net's first description, shown immediately to anyone who might be new to the language, reads: "PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML."  Within the first sentence they advertise PHP as being html-embeddable.  My point is, big whoop.

PHP's gotten boring
Many people have commented on other forums about leaving PHP because it's gotten too boring--and, vice versa, about leaving another language like python because it got old, and going back to PHP.  Whatever floats your boat, I guess.  In some ways, arguing the pleasure of programming in PHP over python is like arguing the merits of Spanish over German.  Clearly, German is superior.

People tend to have relatively strong opinions about rather mundane things, ranging from favorite foods to their preferred brand of socks.  Programmers can similarly have very strong opinions about their languages of choice, when sometimes it's just their way of expressing themselves.  Reading blogs comparing lines of python to lines of ruby tends to put me to sleep.  Some comments on this subject included the ugliness of php or python.  Ugliness, of course, is a very subjective word.

Years ago I encountered a site railing against the java language.  I found most of the author's points, however, to be mostly matters of taste.  One issue he mentioned was the case sensitivity java requires.  This seriously perplexed me, as almost every language I've ever used has been case sensitive.  Indeed, jumping over to TIOBE, it looks like the top ten languages all have case-sensitive syntax.  To OO developers, this convention is often used to easily distinguish between class and object.  Calling java ugly for using an industry standard?  That's like calling a girl ugly because she's not a blonde asian.  My point is: what exactly is ugly about python?  To me, calling a programming language ugly demonstrates an inability to express one's possibly valid thoughts on a language.  For example, some perl I might consider "ugly" for its compact, symbol-based syntax and parameter passing.  Some find python ugly simply for lacking C-style curly braces around blocks--are those, by contrast, considered beautiful?  One might call java ugly for its extreme use of composition just to do simple file I/O.  Or are these people literally calling some language visually displeasing?

So beautiful...

PHP's Community
One of PHP's biggest strengths is its availability across a large range of hosting providers.  PHP is currently the standard web language any hosting service must provide, while hosts offering alternatives such as python or ruby-on-rails are usually a little harder to come by--a boon for PHP.  One commenter strongly disagreed with my comparison of the python community's size to php's.  "The community" can mean different things, so I'll admit my mistake.  Again, looking at the TIOBE data, PHP is shown to be more popular than python (though the latter looks to overtake it next year).  Or you can simply google "php community" and see it garners more than 77 times the hits of "python community".  Maybe python programmers are anti-social...

Long live templating engines!
To conclude with my previous post's title: templating engines have replaced my "need" for PHP.  PHP isn't horrible by any means, but with capable templating engines in so many languages, anyone can just program in _insert_favorite_language_here_.  In this case, PHP is just another scripting language.  I thought about porting blocks of PHP code to python for readers to compare, but after a brief perusal I found the endeavor pointless considering the relatively few changes I'd make between them.  So instead of ending with a quip, I'll end with a question: supposing a script-savvy developer were entering web development, what are some motivations for using PHP that I haven't mentioned?

Wednesday, February 02, 2011

php to python: Why PHP is now dead to me

PHP started in 1995 and was a well-known web language by the time I was getting into web development around 2001.  At that time I had just picked up perl--yes, perl, which then seemed like the end-all, be-all sexiest language ever.  The one thing I liked about PHP, which I admit was a very big draw, was the seamless fusion of content and code.  With PHP's delimiters, one could easily blend the content and the wiring.  Almost ten years later, though, PHP is starting to show serious age, to the point of being a deprecated tool in the toolbox (like no one making Phillips screws anymore).

PHP as a web language
My first grudge against PHP is that it's not a very general-purpose language.  People have successfully written non-web-based PHP applications, but this seems to be more a case of a PHP programmer not wanting to learn a more robust language.  After a brief scan of various PHP support sites, PHP development outside the realm of web development looks basically negligible.  I see no reason, in fact--besides some amazing library I might not be aware of--to use PHP for a non-web-based application.

PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML.  - php.net

The syntax is ugly, in my opinion: required semicolons, arrows instead of periods to reference data, global functions to manage things like arrays instead of a more object-oriented approach, and so on.  After several iterations of the language, features seem tacked onto the early implementations.  With python being faster and providing a larger development community, what motivates me to use PHP?

The MVC architecture
The MVC architecture is a design philosophy that separates an application into three sections: the models, which manage the application's data; the views, which typically represent the user interface; and the controllers, which handle much of the communication between the views and models.  While this design can be strictly conceptual, with the programmer merely separating out functionality, various frameworks like Qt and CodeIgniter strictly separate these entities.  Here's where I believe php has lost its magic.

The main allure of PHP, which stole a lot of perl's magic, was putting the controller elements doing the processing directly into the html.  Instead of managing large perl files mixed with html, people could make a mostly html-based file with a little PHP, which was much cleaner to read.  In that sense, the view implementation was usually separated from the model and controller.  This seemed great at the time, but as MVC has become more prevalent, more and more PHP frameworks remove model/controller components from the html, leaving small bits and pieces in their place.  In my experience, and what seems to be standard practice now in PHP, large controllers and models are created completely separate from the actual html.  This creates a more coherent architecture, but it raises the question: now that we've moved so much PHP code away from the html, why are we still using PHP?

The rise of templates
Templating libraries have been around for many, many years--even before PHP was popular.  However, since some programming language had to drive the application anyway, there was little reason to use one if you were still coding in C/C++ or perl.  Now, newer languages such as ruby and python provide a lighter and more robust scripting experience.  Because they are so general-purpose, many more examples exist outside of web development.  This matters, as web applications continually provide more functionality than just serving up semi-static web pages.  The little bits of php code that were once in my html files are now easily replaced with a templating library like mako or genshi, and I can program in a general-purpose language whose techniques apply to non-web-development tasks as well.  Again, now that PHP has been moved out of the html, why should I use it over python or ruby?
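As a toy illustration of that separation, here's the shape of it using python's stdlib string.Template as a stand-in for a fuller engine like mako or genshi (the markup and names are invented for the example):

```python
from string import Template

# the "view": plain markup with placeholders, no application code embedded
page = Template("""<html><body>
  <h1>$title</h1>
  <p>Hello, $user!</p>
</body></html>""")

# the "controller": ordinary python that prepares the data
context = {"title": "My Page", "user": "world"}
html = page.substitute(context)
print(html)
```

The markup stays markup, and the logic lives in whatever language is driving the application.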

Things I'll miss
Alas, PHP, I hardly knew thee.  One thing I will slightly miss, though, is the startup time in PHP.  Almost every apache configuration on Linux typically comes with PHP configured.  After opening a .php file and typing a few lines, I can access the file directly in my web browser and immediately see the results.  With python, I typically have to do a bit more to get things running via apache: setting up a cgi-bin, creating a proper script alias, etc.  Newer python frameworks like turbogears now come with a mini web server included, however, so I can hold off on configuration until I'm farther into production.

I'll also miss PHP's documentation, which I believe is generally more accessible than python's.  Many of PHP's functions get an entire page dedicated to them, like explode().  The page features the basics you'd expect from an API doc--parameters, description, and return value--but also includes a huge swath of examples of how to use the function.  It's like an entire programming cookbook dedicated to that one function.  I find a lot of python docs a little too wordy (ex. datetime) and difficult to browse to a particular function--especially if I'm searching by functionality and not by name.  If I'm unfamiliar with a language, I'd like to know what functions are available for a given class before treading through each and every description; the function name should typically provide enough description to investigate further.  Anyway, the PHP wiki/comment style of documentation is something I hope more languages adopt.  Wikis are already prevalent on many projects, but it seems people are much more willing to contribute through a brief comment box than to step into an entire editing mode.  Is it fear?  Laziness?  Regardless, it seems a more accessible way to contribute something beneficial to the doc.

Anyway, PHP.  Thanks for everything.  It was fun while it lasted.

Saturday, January 08, 2011

Using Redbean with Code Igniter 1.7.3

I started playing around with codeigniter for a quick project, and wanted a simple ORM for manipulating databases quickly. After looking briefly at RedbeanPHP, I decided to give it a spin. Boris Strahija had actually shown how to do it last year in the forums, but I just wanted to post my files for reference. This assumes the database.php file in your config dir is already pointing to a valid database. I was using mysql.

  1. Download RedbeanPHP and drop the two files (._rb.php and rb.php) in system/application/libraries
  2. Create a library class called Redbean.php with the contents shown below, and drop it in the same directory
  3. Use it in your controller

<?php
class Redbean
{
    function __construct()
    {
        $this->ci =& get_instance();

        // Include database configuration
        include(APPPATH.'config/database.php');

        // Include required files
        include(APPPATH.'libraries/rb.php');

        // Database data
        $hostname     = $db[$active_group]['hostname'];
        $username     = $db[$active_group]['username'];
        $password     = $db[$active_group]['password'];
        $database     = $db[$active_group]['database'];

        // Create RedBean instance
        $toolbox = RedBean_Setup::kickstartDev('mysql:host='.$hostname.';dbname='.$database, $username, $password);
        $this->ci->rb = $toolbox->getRedBean();
    }
}

Using it somewhere in your controller:
// example code of creating a 'post' table
$post = $this->rb->dispense("post");
$post->title = 'My first post from CodeIgniter';
$post->body ='Lorem ipsum dolor sit amet, consectetur adipisicing elit....';
$post->created = time();
$id = $this->rb->store($post); 

After running this controller, you should see a 'post' table created. Done!

What you should see in phpMyAdmin