Where the Magic Happens

In our last post, we hinted that most of our game assets are going to be physically built and scanned in using 3D Capture software.  We’re going to give you a little peek at our studio and walk you through a day in the life of a game asset.

The first step is to raid the workbench.  This is a door that our Art Goddess has whipped up in no time from a piece of found wood and some clay.

I’m really happy with how this piece turned out.  One problem with it, though, is that the ‘metal’ bits are really shiny.  That’s exactly how we want them to look in the finished game, but 123D Catch can have problems with reflective objects.  It likes a static lighting environment, and reflections are view-dependent, which can confuse it.  You also want reflections and specular highlights to be generated dynamically in your engine, so shininess baked into your color texture will only get in the way.  A matte fixative spray does a good job of knocking the shine off the physical object, and you can add it back to the game asset later with a gloss map.  In this case, though, the shiny bits are small enough that they probably won’t hurt the capture much.

But I digress.  Welcome to Skull Theatre.

This is our photo studio.  We have three pairs of 85-watt CFL lightbulbs, each pair mounted on a tripod and set up behind a shoot-through umbrella.  They’re positioned equidistantly around the table to minimize light directionality.  It’s very important for your light to be as soft and ambient as possible when scanning objects in to be used in a 3D game environment.  Strong light directionality and shadows will show up in your textures, and make it very hard to dynamically light these game assets later!  For example, if you shoot your object with a dominant light on the right and place it in your game world, it will appear to be lit from the right, because that dominant right light has been ‘baked’ into the texture.  If you then place it in a scene where all of the light is coming from the left, the object will appear out of place, lit by some amorphous light to the right that affects no other object.

However, while strong light directionality is bad, I find that keeping the lights higher up gives you some nice shadows along the base of the object (it also makes it easier to move around in there).  Since the side of the object that’s facing down is presumably going to be the bottom (or the back), these shadows won’t look out of place, and they give you a nice ambient occlusion effect.

One last note on lighting.  123D Catch doesn’t strictly require a fancy setup like this.  You can take successful captures with a crappy phone camera in your hands and standard incandescent lighting if you want.  However, the single best thing that you can do to improve the quality of your textures is to use better bulbs.  Achieving the correct color temperature is important to prevent everything from looking way too red.  Also, if you’re using a nice digital camera with a lot of manual settings, then configuring the white balance for your studio environment can help a lot too.  We set our camera on full manual (except for the focus) with a high f-stop for good depth of field and a shutter speed appropriate for the lighting environment.

The camera rig is on wheels and attached to the table by a long pivoting shaft.  This allows us to mount the camera on the rig and target an object on the center of the table.  Then the rig can be swiveled around the table, and the camera stays perfectly focused on the object.  All the operator has to do is move the rig to the next shoot point and press the shutter release.  We have grand designs to employ some light robotics someday to automate away even that small bit of manual labor.
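
We haven’t built any of that yet, so take this purely as a daydream in code rather than a description of our setup: something like a Raspberry Pi wired to a stepper driver to swing the rig one shoot point per move, with gphoto2 firing the shutter over USB.  The pin numbers and step counts below are invented for the sketch.

    import subprocess
    import time

    import RPi.GPIO as GPIO           # assumes a Raspberry Pi wired to a stepper driver

    STEP_PIN, DIR_PIN = 20, 21        # hypothetical wiring
    STEPS_PER_SHOOT_POINT = 200       # depends entirely on the gearing; a guess
    SHOOT_POINTS = 32

    def advance_one_shoot_point():
        """Pulse the stepper driver enough to swing the rig to the next mark."""
        GPIO.output(DIR_PIN, GPIO.HIGH)
        for _ in range(STEPS_PER_SHOOT_POINT):
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(0.001)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(0.001)

    def fire_shutter():
        """Trigger the camera over USB."""
        subprocess.run(["gphoto2", "--capture-image"], check=True)

    if __name__ == "__main__":
        GPIO.setmode(GPIO.BCM)
        GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
        try:
            for _ in range(SHOOT_POINTS):
                fire_shutter()
                advance_one_shoot_point()
                time.sleep(2.0)       # let the rig settle before the next shot
        finally:
            GPIO.cleanup()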

Note that there are three camera mount points on the rig.  When taking a set of photos for 123D Catch, you want to get a few rotations around your target object at different heights.  Doing this will help it build a better 3D model for you.  We’ve found that three rotations give pretty good coverage – though often two is plenty.

This guy watches over us all.  You must kiss the skull if you want a good capture.

We’ve marked out thirty-two equally spaced capture points around the table.  This keeps the results consistent, and ensures equal coverage around the target.  Our super high-tech progress indicator involves a shelf brace, some masking tape and a hot pink marker.  But don’t knock it – it works like a charm.
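
For the curious, the geometry behind those marks is nothing fancy.  Here’s a quick Python sketch that prints out camera positions for thirty-two stops on each of three rings; the radius and ring heights are placeholder numbers, not measurements of our rig.

    import math

    TABLE_RADIUS = 1.0               # camera-to-object distance; placeholder value
    SHOOT_POINTS = 32                # the marks taped around the table
    RING_HEIGHTS = [0.4, 0.8, 1.2]   # three camera heights; also placeholders

    def shoot_positions(radius=TABLE_RADIUS, points=SHOOT_POINTS, heights=RING_HEIGHTS):
        """Yield (x, y, z) camera positions, one ring per height."""
        for z in heights:
            for i in range(points):
                angle = 2 * math.pi * i / points   # 360/32 = 11.25 degrees between stops
                yield (radius * math.cos(angle), radius * math.sin(angle), z)

    if __name__ == "__main__":
        for n, (x, y, z) in enumerate(shoot_positions(), start=1):
            print(f"shot {n:3d}: x={x:+.2f}  y={y:+.2f}  z={z:.2f}")

A full three-ring pass works out to 96 photos, which is exactly the kind of tedium that has us dreaming about those robots.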

Here’s our pilfered door, positioned on the center of the table.  You may have noted that there are a lot of scribbles over all of the surfaces in the photo studio.  These aren’t the scrawlings of a madman (those are in the office), but are in fact very intentional.  123D Catch uses the background in your photos to determine how the object is situated in the scene, and that knowledge helps it build an accurate model.  Your standard photo studio with its blank white surfaces is actually a worst-case scenario for 123D Catch.  The scribbles on the walls and table help out our friends at Autodesk, but are subtle and random enough to not noticeably impact lighting.

This is what the camera sees as it rotates around the table.

We do what we must because we can.

So after we’ve taken our photos, the next step is to upload them to 123D Catch.  The process is fairly painless, and all of the heavy lifting happens out in the cloud, so your PC isn’t locked up during the process.  The program will ask you if you want to wait for the results, or have it email you when they’re ready.  When it’s all finished, you end up with something a little like this:

It’s a little hard to see from that shot, but that’s a 3D model – table and all.  123D Catch also shows where all of the photos were taken from, and gives you the option to ‘stitch’ any photos that it failed to use in the scene.  However, we’ve had no luck with stitching and find it far easier to just try again if you get a bad capture.

You can use 123D Catch to trim off any bits of the model that aren’t desired (like the table).  You can also have 123D Catch process a selected portion of the scene at a higher quality level, and give you a higher-resolution mesh and texture.  This is highly recommended (read: necessary) if you plan on using these assets in a game.

You can export the model to a number of standard formats.  After that, we’re done with 123D Catch and it’s time to process the model.  To handle the post-processing, we’ve written our own set of tools that integrate with a powerful open-source program called MeshLab.  Our super-sexy import tool looks like this:

Behold the command-line awesomeness.  Most importantly, it’s entirely automated. 
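
The tool itself is ours and not much to look at, but to give a flavor of the kind of MeshLab step it automates, here’s a rough sketch that drives meshlabserver headlessly to decimate a capture while keeping its texture coordinates.  The filter script, parameter names and file names are illustrative rather than lifted from our pipeline, so check them against whatever your MeshLab version reports.

    import os
    import subprocess
    import tempfile

    # A minimal MeshLab filter script: quadric edge collapse decimation that
    # preserves the texture parameterization.  Filter and parameter names can
    # vary between MeshLab versions, so treat this as a starting point.
    MLX_TEMPLATE = """<!DOCTYPE FilterScript>
    <FilterScript>
     <filter name="Quadric Edge Collapse Decimation (with texture)">
      <Param type="RichInt" name="TargetFaceNum" value="{faces}"/>
     </filter>
    </FilterScript>
    """

    def decimate(src, dst, target_faces):
        """Run meshlabserver to reduce src to roughly target_faces faces."""
        with tempfile.NamedTemporaryFile("w", suffix=".mlx", delete=False) as f:
            f.write(MLX_TEMPLATE.format(faces=target_faces))
            script = f.name
        try:
            subprocess.run(
                ["meshlabserver", "-i", src, "-o", dst, "-s", script,
                 "-om", "vn", "wt"],   # keep vertex normals and wedge texture coords
                check=True,
            )
        finally:
            os.remove(script)

    if __name__ == "__main__":
        # e.g. take the raw ~64,000-face capture down to a game-friendly 4,000
        decimate("door_catch.obj", "door_game.obj", target_faces=4000)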

After number-crunching for about two to three minutes, we’re given the option to orient the mesh however we want, and choose a topology for the mesh.  Mesh topologies are a very important concept in our engine, but that’s a discussion for another time.

And that’s it!  The import tool builds all of the game assets needed for the object to be renderable in our engine, including normal maps baked from the original high-resolution mesh.  All that’s left is a little texture cleanup.  123D Catch seems to soften the textures pretty significantly compared to the original source photos, which we’re not thrilled about.  However, sharpening the image and bumping up the contrast can do wonders.
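
Any image editor will do for that, but if you’d rather script it, something along these lines with Pillow works.  The sharpness and contrast factors are guesses to tune per texture, not numbers from our pipeline.

    from PIL import Image, ImageEnhance

    def punch_up_texture(path_in, path_out, sharpness=2.0, contrast=1.2):
        """Sharpen a color texture and bump its contrast a little.

        A factor of 1.0 leaves the image untouched, so nudge these values up
        or down per texture rather than treating them as magic numbers.
        """
        img = Image.open(path_in)
        img = ImageEnhance.Sharpness(img).enhance(sharpness)
        img = ImageEnhance.Contrast(img).enhance(contrast)
        img.save(path_out)

    if __name__ == "__main__":
        punch_up_texture("door_diffuse_raw.png", "door_diffuse.png")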

Here are the resulting textured meshes.  The mesh on the right is the original model exported from 123D Catch, with about 64,000 faces and a 4096×4096 texture.  The mesh to the left of it is ours, reduced to 4,000 faces with a 1024×1024 texture, and each mesh further to the left has had its face count cut in half again.  Note that these are shown without the normal map applied (which recovers a lot more of the original detail) and prior to any texture sharpening.  It’s still a work in progress, but we’re pretty happy with the results thus far.

Responses to Where the Magic Happens

  1. Wow, what a fantastic piece of engineering. A great way to save a ton of time. I imagine that you could make quite a bit of cash if you package that up nicely and start selling it to other game development companies.

  2. t3kboi says:

    This is not in any way a criticism of your rig, but is there a reason that you can’t just rotate the table that the model is on, instead of moving the cameras?

    There probably is a reason, but I am curious to know what it is.

    Thanks!

    • skulltheatre says:

      We’d love it if we could just rotate the table. 123D Catch and all other photogrammetry tools (as far as we know) rely on the lighting of the scene and the relative positions of objects staying consistent in order to create a good model. This is why reflective and shiny objects can also give you trouble, since the reflection changes based on viewing direction. In fact, even using the flash on the camera can cause problems. The easiest solution is to have stable lighting and keep the target object fixed, rotating the camera around it instead.

      Or, if you have a lot of money to throw at the problem, you could build something like this: http://www.prototank.com/123d-catch-camera-relay-bank

      Oh, we wish….

      • Erick says:

        Would it be possible to set it up so the lights rotate with the table? So they stay fixed in the same position relative to the object.

      • skulltheatre says:

        It’s possible, but you’d also want the background of the scene to rotate with the table as well, to avoid any inconsistencies between photos. At that point, it seems far easier to just rotate the camera.

        I have heard of some people having luck using a turntable with 123D Catch, and I’d be curious to know how their captures have turned out.
