WIP Python Rigging Tool for Houdini

Work In Progress / 17 May 2020

I learned how to rig in Maya, but I wanted to give rigging in Houdini a shot. I ended up doing a human character rig and animation, and through that process I came to the conclusion that Houdini has some nice rigging tools that are hampered by a frankly confusing and difficult-to-use interface. In fact, I would go so far as to say that if Houdini's rigging interface were better, I would prefer to rig in Houdini; there's something appealing and intuitive about being able to see your rig and its relationships laid out in a big network that looks vaguely like the thing you're rigging. Over the course of my education, I scripted a Python-based toolset for rigging in Maya that made it quick to create joints based on locators, create controls, set constraints, rename large numbers of objects, and set colors. If I can port that interface over to Houdini, it will make the entire rigging process significantly less painful.

As the title of this post implies, this tool is incomplete. However, I wanted to break down what I have so far and highlight some of the differences between the two rigging systems.

I've broken the controls down into a number of different sections: first is the node renamer, second is the locator creator, third is the bone chain creator, and fourth is the control creator.

The node renamer does exactly what you'd expect. You make a selection of nodes, then in the new name field you type your new name with a # substituted for the number. For example, if I had a selection of three nodes that I wanted to rename L_Arm_Ctrl_01, L_Arm_Ctrl_02, and L_Arm_Ctrl_03, I would select them in the order I want them renamed, enter L_Arm_Ctrl_# as the new name, and set the number padding to 2. Straightforward and simple, but it can be a time-saver when dealing with larger rigs.

Admittedly, the section of code that actually handles renaming is pretty inelegant. However, it handles far more padding than I expect to ever need (you won't catch me working on a rig with more than 9999 bones), so I'll come back and refactor that section after the rest of the tool is working.
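For the curious, the core of the renaming logic boils down to something like this (a simplified sketch; the function and parameter names here are illustrative rather than the tool's actual UI):

```python
import hou

def rename_selection(name_pattern, padding=2, start=1):
    """Rename the selected nodes in selection order, replacing '#'
    with a zero-padded index."""
    for i, node in enumerate(hou.selectedNodes(), start=start):
        # zfill pads the index with leading zeros to the requested width.
        node.setName(name_pattern.replace("#", str(i).zfill(padding)))

# Selecting three nodes and running
# rename_selection("L_Arm_Ctrl_#", padding=2)
# yields L_Arm_Ctrl_01, L_Arm_Ctrl_02, L_Arm_Ctrl_03.
```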

The locator creator creates nulls at the centers of geometry selections. This makes it easy to create bone roots at the centers of shoulders, elbows, knees, etc. I haven't quite gotten to implementing this system yet, as I'm currently focusing on the bone chain creator, and I'm still investigating the best way to reference geometry selections at the point and edge level.
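Whatever the selection referencing ends up looking like, the core operation should reduce to something like the sketch below: average the selected point positions and drop an object-level null there (this assumes the selection has already been resolved to hou.Point objects):

```python
import hou

def null_at_centroid(points, name="locator"):
    """Create an object-level null at the centroid of a list of hou.Points."""
    if not points:
        return None
    centroid = hou.Vector3()
    for pt in points:
        centroid += pt.position()
    centroid = centroid * (1.0 / len(points))
    # Note: point positions are in the SOP's local space, so this assumes
    # the owning object has no transform of its own.
    null = hou.node("/obj").createNode("null", name)
    null.parmTuple("t").set(tuple(centroid))
    return null
```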

The bone chain creator takes a set of locators and creates bones on all but the last, the last locator being the end point of the final bone. This section has been much more complex to implement in Houdini than it was in Maya, and there are a few reasons for that. The biggest one is that, at least in my mind, it is preferable to initialize bones in their correct orientation in Houdini. Maya's "bones" are merely UI elements that indicate hierarchy, as a Maya rig's base unit is a joint rather than a bone. There are benefits and detriments to each style of rigging, but that is outside the scope of this post. Maya has useful tools for setting joint orientations, typically used after creation of the joint hierarchy. In Houdini, orientation is visualized as a property of the bone, which is itself an object. Bones have one axis that is wider than the other, and that is the axis you want to bend along. Setting parent relationships can completely change orientations if you're not careful, so in my mind it is preferable to kill two birds with one stone and get the orientation right the first time, if you can help it. This section is getting close to viable. What I'm currently missing is the ability to recolor bone chains and the ability to freeze bones in place when parenting; and while orientations are perfect when chains are formed on axis planes, there can be issues when they're not. These are my next areas of focus in the development of this tool.
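Stripped of the orientation edge cases, the chain-building step looks loosely like this (a sketch that ignores rest angles and the keep-position parenting compensation I mentioned; it rotates a bone's default -Z aim onto each segment with an axis/angle build):

```python
import math
import hou

def create_bone_chain(positions, name="chain"):
    """Create bone objects spanning consecutive locator positions.
    Houdini bones aim down their local -Z axis."""
    rest = hou.Vector3(0, 0, -1)  # default bone aim direction
    bones, parent = [], None
    for i in range(len(positions) - 1):
        start, end = positions[i], positions[i + 1]
        segment = end - start
        bone = hou.node("/obj").createNode("bone", "%s_bone%d" % (name, i + 1))
        bone.parm("length").set(segment.length())
        # Rotate the default aim onto the segment direction via axis/angle.
        direction = segment.normalized()
        axis = rest.cross(direction)
        dot = max(-1.0, min(1.0, rest.dot(direction)))
        if axis.length() > 1e-6:
            rot = hou.hmath.buildRotateAboutAxis(axis.normalized(),
                                                 math.degrees(math.acos(dot)))
        else:
            # Parallel case; a true 180-degree flip would still need handling.
            rot = hou.hmath.identityTransform()
        bone.setWorldTransform(rot * hou.hmath.buildTranslate(start))
        if parent is not None:
            # Naive parenting; this is where the freeze-in-place
            # compensation will eventually go.
            bone.setFirstInput(parent)
        parent = bone
        bones.append(bone)
    return bones
```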

The last section is the control creator, which is meant to create null controls between their respective bones. I have yet to do more than lay basic framework in this section, as I feel that getting over the remaining hurdles on the bone chain creator is not only the higher priority from a functionality standpoint, but the solutions to its remaining problems are the same ones needed to quickly set up the control generator.
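I haven't built it yet, but I expect the basic shape to be something like this hypothetical helper: drop a null matching a bone's world transform, then parent the bone under it so the null drives the bone (with the same keep-position caveat as above):

```python
import hou

def create_control_for_bone(bone, suffix="_Ctrl"):
    """Hypothetical control setup: a null matching a bone's world
    transform, with the bone parented beneath it."""
    ctrl = hou.node("/obj").createNode("null", bone.name() + suffix)
    ctrl.setWorldTransform(bone.worldTransform())
    bone.setFirstInput(ctrl)  # would also need freeze-in-place handling
    return ctrl
```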

Stretch Goals

The sub-tools laid out previously are, I feel, the minimum required to make this a viable product. That said, I do intend to take this tool further over time. The ability to create IK chains and functional RK/IKFK blend systems from this interface will save me further time while rigging. Another valuable tool would be one that allows for the easy creation of a customized general interface for the rig. What that looks like in practice, I'm not entirely sure yet, but I will be doing some interface mock-ups in the meantime.

All in all, this has been a pleasant and deeply informative project to work on. Already, I am thinking of further tools I can create to enhance my pipeline. I anticipate that I will have a fully working version of this tool in the weeks to come, depending mainly on how much free time I find myself with. I'll be sure to make a follow-up post when that happens.

Camera-Based Culling in Houdini

Making Of / 16 May 2020

I'll be doing a quick overview of how I set up our camera-based culling system for our film. We faced a number of obstacles on this project, some of them internal and many of them external. One significant late hiccup was that we had access to significantly less render power than we were initially promised, so we had to come up with solutions that would allow us to render scenes entirely on individual, moderate-power lab computers. These ranged from generous use of render layers, to the implementation of a rudimentary LOD system in the building generator, to significant reduction of detail on certain distant assets not incorporated in the LOD system. As helpful as these techniques were, they weren't enough to counteract the lockups Redshift experienced when doing initial scene setup. After setting up this system, we saw enormous performance increases, and not just at render time. For example, one of the functions of the system culled city blocks before any procedural generation could happen, dramatically improving viewport performance and calculation times.

First, I'll go over the technical details of how this was accomplished, then I'll talk about how we integrated the system very quickly into our pre-existing workflow. To begin, the code behind it is quite simple. After a specific camera was designated as the center of culling, various parameters could be pulled from it, such as its Euler rotations, its aperture, and its focal length, and from these its field of view could be calculated (more useful formulas can be found in Houdini's documentation: https://www.sidefx.com/docs/houdini/ref/cameralenses). Due to our specific needs and time constraints, I only worried about the horizontal field of view, and as our camera never rolled significantly along its z-axis, I only needed to worry about y-axis rotations. My next step was to convert the initial camera values to radians and compensate for field of view on the left and right camera bounds. I also added a user-editable parameter to give extra range to the culling system for specific objects. This was important because some objects, like city blocks, were culled from their centers, so the culling system needed a little extra field of view to pick up the blocks on the edges of the viewing field. Furthermore, sometimes the camera had its pitch adjusted, and a couple extra degrees of sight were necessary to compensate.
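In code, that first step comes out to only a few lines. Here's a rough sketch (the camera path and the extra_fov parameter name are placeholders; aperture and focal are the camera's standard parameters, and the FOV formula is the one from the documentation linked above):

```python
import math
import hou

cam = hou.node("/obj/shot_cam")  # placeholder path
aperture = cam.parm("aperture").eval()      # horizontal aperture
focal = cam.parm("focal").eval()            # focal length
extra = hou.pwd().parm("extra_fov").eval()  # hypothetical user padding, degrees

# Horizontal field of view from aperture and focal length.
fov = 2.0 * math.atan((aperture / 2.0) / focal)

ry = math.radians(cam.parm("ry").eval())           # camera yaw in radians
half_fov = fov / 2.0 + math.radians(extra)         # pad each side a little
left_ry, right_ry = ry + half_fov, ry - half_fov   # left/right view bounds
```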

After compiling the vectors representing the Euler rotations of the center of the camera's field of view, as well as its left and right bounds, I converted those Euler rotations to directional vectors using a little math, as Houdini's built-in clipping node defines its clipping plane using a directional vector. I then wrote out these vectors to detail attributes that would then be referenced in a couple of clip nodes. All in all, a pretty basic system, but it fit our needs well enough. However, the problem at this point became: How do we integrate this new culling system across all of our different shots which have already been set up and are nearly ready to be rendered? 
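The "little math" in question: since a camera with zero rotations looks down -Z, a yaw of ry maps to the direction (-sin(ry), 0, -cos(ry)). Written as a Python SOP continuing the sketch above (attribute names are illustrative):

```python
import math
import hou

node = hou.pwd()
geo = node.geometry()

def y_rotation_to_dir(ry):
    """Map a y-axis rotation (radians) to a unit direction vector;
    an unrotated Houdini camera looks down -Z."""
    return hou.Vector3(-math.sin(ry), 0.0, -math.cos(ry))

# ry, left_ry, right_ry come from the previous snippet.
for name, rot in (("centerdir", ry),
                  ("leftdir", left_ry),
                  ("rightdir", right_ry)):
    geo.addAttrib(hou.attribType.Global, name, (0.0, 0.0, 0.0))
    geo.setGlobalAttribValue(name, tuple(y_rotation_to_dir(rot)))
```

Each clip node then pulls its direction from these detail attributes with ordinary detail() expressions in its direction parameters.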


Luckily, when I set up all of our scene files, I made liberal use of Houdini's digital asset system, meaning I could make changes to any given subsystem and they would propagate to any scene using that system. If you're not familiar, digital assets essentially allow you to save specific node trees out to disk, separate from their scene files. Thus, one can edit just the asset file without making changes to any given scene, much like Maya's referencing system. Because of this, the answer became pretty obvious. I packaged the culling system into its own digital asset and embedded it within all of our different environmental HDAs. Then I promoted the extra FOV parameter to the top level of the environmental assets that might need it, for convenient tweaking on a scene-to-scene basis.

All this worked great, but there was one more issue to grapple with: how would we get each digital asset to recognize the appropriate camera consistently, on a shot-to-shot basis, when camera naming was not consistent? The solution I ultimately came up with was to create one more digital asset whose sole purpose was to have a single field filled out on it. No nodes within, just a string field that a user could drag their camera into. I chose this method because our Houdini artists had to reinstall HDAs at the beginning of every work session due to factors outside of our control. Anyone working in our scene files would inevitably install this camera selector asset at the beginning of their session, so the only directive I had to give other artists was to place down a new camera selector node in any scene they worked in, if it didn't already exist, and plug the camera in. Relative referencing and consistent naming behaviors took care of the rest.
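Resolving the camera from that selector then looks something like this inside the culling asset (the path and parameter name are illustrative):

```python
import hou

# Hypothetical layout: the selector asset sits at a known location and
# exposes a single string parm, "camera", holding the camera's path.
selector = hou.node("/obj/camera_selector")
cam = hou.node(selector.parm("camera").evalAsString()) if selector else None
if cam is None:
    # Surface a readable error instead of silently culling nothing.
    raise hou.NodeError("Place a camera selector and assign the shot camera.")
```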




Designing a City Block Generator (Part 1)

Making Of / 15 May 2020

In my team's senior film, we were faced with a number of technical hurdles in meeting the scale of our project. One of these hurdles was designing a building generator that would allow us to rapidly create a passable city environment for our setting. One of the most important components of this system was our block generator. This system would take in a curve representing the outline of a city block (or a set of curves, so that a flat map of the city could be drawn in curves), cut it into individual buildings, extrude them, and add window, facade, and roof details. I will attempt to break down this system here.


To provide context as to what our needs were: we started this as a project of six people, later seven, tasked with creating an approximately two-and-a-half-minute short film over the course of three semesters. As is often the case with students, we perhaps over-scoped a bit; however, this wasn't apparent to us at the time. One of our environments was a city, set somewhere in the late Renaissance period. Concepts were drawn, color scripts were done, and maps were made to plan out the path of the characters. Houdini was planned from the beginning, though it was unknown how the system would work. After much thought, I figured that perhaps the easiest way to generate the city outlined in the map would be to draw the map in Houdini using curves and have the buildings generate from there. This would allow for rapid edits to the city layout, as changes could be made by simply dragging the corners of the drawn city blocks around. Basically, all we needed to do was draw all the different curves out and run them through a for-each loop with the building generator asset nested inside.

Once the blocks entered the asset, they were carved, resampled, and fused. The resampling length would correspond approximately to the width of each building. These would be tweaked later by setting non-corner points' normals to their tangents, and running a mountain node against them. Before that, however, I had to figure out which points were corners, and among those, which were convex and which were concave, as concave cases would need to be handled differently.
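The corner test itself boils down to comparing each point's incoming and outgoing edge directions. Here's a sketch of that classification as a Python SOP (the group names and the 15-degree straightness tolerance are illustrative, and the convex/concave sign depends on the curve's winding):

```python
import math
import hou

node = hou.pwd()
geo = node.geometry()

sides = geo.createPointGroup("sides")
convex = geo.createPointGroup("corners_convex")
concave = geo.createPointGroup("corners_concave")

pts = geo.points()  # assumes one closed block outline, lying in the XZ plane
n = len(pts)
straight_tol = math.cos(math.radians(15.0))

for i, pt in enumerate(pts):
    in_dir = (pt.position() - pts[(i - 1) % n].position()).normalized()
    out_dir = (pts[(i + 1) % n].position() - pt.position()).normalized()
    if in_dir.dot(out_dir) > straight_tol:
        sides.add(pt)  # nearly straight: a mid-wall point
    elif in_dir.cross(out_dir)[1] > 0.0:
        convex.add(pt)  # the turn direction separates convex from concave
    else:
        concave.add(pt)
```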


After the various points were sorted into their relevant groups, I could then run the previously mentioned mountain/building-width randomization on only the side points. At this point I had a closed curve whose sides were divided into segments ranging from approximately 8 to 20 meters. After studying how blocks in Prague are laid out, I figured one of the simplest ways to approximate that style would be to generate the corner buildings first, then connect their four inner corners together such that an inner courtyard was drawn out, then slice the areas between each of the corner buildings into individual buildings. Corner buildings in Prague tend to have their inner sides perpendicular to their outer sides, so I projected a line perpendicularly from one of each corner's neighbors, and ran an intersect function perpendicularly from the other neighboring point.



When I generated the corner buildings, I made a group of just the inner corner points. From those points I drew the outline of the inner courtyard, using a simple set of addprim functions connecting each corner via polyline. From there, I used an intersect function from each of the points on the outer walls not associated with the corner buildings, projected perpendicularly from the points to the courtyard line. After deleting the courtyard polylines, I created primitives for each building. At this point some of the winding was wrong, so I fixed that with some reverse nodes, hardened the normals, and was then ready to start adding randomized attributes to affect different parameters of the window generation, roof generation, building height, and coloration in the Redshift shader.
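That projection step can be expressed with HOM's ray intersection, something like the helper below (a sketch; it assumes the courtyard outline has been given some surface area, for example by extruding it slightly, since rays will rarely hit an infinitely thin polyline):

```python
import hou

def project_to_courtyard(courtyard_geo, origin, direction):
    """Cast a ray from an outer-wall point toward the courtyard and
    return the hit position, or None if nothing was hit."""
    hit_pos = hou.Vector3()
    hit_normal = hou.Vector3()
    hit_uvw = hou.Vector3()
    prim_num = courtyard_geo.intersect(origin, direction,
                                       hit_pos, hit_normal, hit_uvw)
    return hit_pos if prim_num >= 0 else None
```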



I use these generated attributes to extrude different buildings to different heights. From there, the roofs and other details can be generated. Most of the rest of this system takes place in a large for-each loop, as I generate the details on a building-by-building level. I won't go into the details of this process in this post, as it is running a bit long, and there are many different tutorials on how to stick details onto flat faces. I will likely come back and make a second post about creating the details.
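For what it's worth, the attribute-generation step is simple enough to show; something like this seeds a per-building height that downstream extrudes and shaders can read (the attribute name and value range here are illustrative):

```python
import random
import hou

node = hou.pwd()
geo = node.geometry()

# One float per building primitive, seeded by prim number so the
# values are stable between cooks.
height_attrib = geo.addAttrib(hou.attribType.Prim, "bldg_height", 0.0)
for prim in geo.prims():
    rng = random.Random(prim.number())
    prim.setAttribValue(height_attrib, rng.uniform(8.0, 18.0))
```

A polyextrude downstream can then scale each building by this attribute through its local-control options.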

I also want to shout out my teammate Sam Corbridge (https://www.artstation.com/samcorbridge), who helped set up the systems for detail generation, as well as helped produce some of the different detail variations.