david's really interesting pages

Mark Witton Pterosaurs; Preorder!

Well, the Brits may be leaving the European Union, but that won’t keep me from buying Mark Witton’s Pterosaurs in Euros. This one has been high on my list since I heard it was in the works about a year ago. I am eager to see how much of his blog tone he allows into the book. As far as I’m concerned, it can be all-in – I love the way Mark writes. I’ve thought about the varying amounts of personal voice paleontologists use in their online writing, and Mark is unique in writing in that guy-next-door voice, but always and only as a build-up to some scientific tidbit via a loopy derailment of that infamous train of thought, so that I always read with a joyful anticipation of the reveal.

He also is one of my favorite artists, so this book is MINE.

baking high-resolution mesh information to image-based displacement 2

Continuing from yesterday’s introduction, we’ll make a garbage model to show what issues we’re dealing with and what the goal is. My time tracker reports 17 minutes for this entire process, including the creation of these images, so this is a very quick-n-dirty pass.

croc_bake1

I began by box-modeling a rough form to encompass the skull, then smoothing and subdividing to get it to fit somewhat tightly around the mesh.

croc_bake2

Then I smooth this geometry again, but this time using a background constraint. This takes each point in my volume and fires a ray along its normal, noting the distance at which it encounters a surface on the background mesh. The calculation is given a default cutoff distance in case the ray doesn’t find anything, in which case various fallback options are available, e.g. averaging the distances of neighboring vertices. After this process, each point is relocated to the corresponding position on the STL mesh. Voilà: shrink-wrapped cg surfaces.
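The relocation logic above can be sketched in a few lines. This is a minimal illustration, not any particular package's API: `cast_ray` stands in for a real ray/mesh intersection query, and the neighbor-averaging fallback is just one of the options mentioned.

```python
# Background-constraint sketch: each point fires a ray along its normal and
# is relocated to the first hit on the background mesh, unless the hit lies
# beyond the cutoff distance, in which case we fall back to averaging the
# distances found at neighboring vertices.

def shrink_wrap(points, normals, cast_ray, cutoff):
    """Relocate points onto the background surface; cast_ray(p, n) returns a
    hit distance or None when the ray finds nothing."""
    distances = []
    for p, n in zip(points, normals):
        d = cast_ray(p, n)
        if d is None or d > cutoff:
            d = None                     # mark as "not found" for the fallback pass
        distances.append(d)

    # Fallback pass: average the distances of the two ring neighbors.
    n_pts = len(distances)
    for i, d in enumerate(distances):
        if d is None:
            neighbors = [x for x in (distances[i - 1], distances[(i + 1) % n_pts])
                         if x is not None]
            distances[i] = sum(neighbors) / len(neighbors) if neighbors else 0.0

    # Slide each point along its normal by the found distance.
    return [tuple(pi + di * ni for pi, ni in zip(p, n))
            for p, n, di in zip(points, normals, distances)]
```

For instance, a point whose ray hits the background 2 units away ends up 2 units along its normal, while a point whose ray misses inherits the average distance of its neighbors.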

This may or may not be sufficient for volume analysis – there’s a lot of empty space here. If it is, the model weighs in at 199 vertices versus 499,362.

croc_bake3

Each point in the mesh now has a roughly desired position in space, defined by three axes (x, y, z). The next important ingredient is to give each of these points a relative coordinate on a constructed 2D surface – a uv map. Why? So that….
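To make the idea of a uv map concrete, here is the crudest possible assignment: a planar projection that squashes each vertex's x/y extent into the [0, 1] square. Real uv layouts are unwrapped with far more care, but the principle is the same: every 3D point gets a 2D address on the texture.

```python
# Minimal planar uv projection: map the bounding box of the points'
# x/y coordinates onto the unit square. Illustrative only.

def planar_uv(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0   # avoid division by zero on flat spans
    span_y = (max(ys) - min_y) or 1.0
    return [((p[0] - min_x) / span_x, (p[1] - min_y) / span_y) for p in points]
```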

croc_bake4

…we can repeat the ray-firing process, this time not for each point on the mesh, but for each pixel laid out along the uv map. This generates a gray-value map recording, per pixel, the distance from the rough volume to the scanned surface. Here again, a cutoff is involved. This range determines not only the distance at which a value simply defaults, but also the span of information between the darkest and lightest values. Ideally, you want your rough volume to approximate the scanned item fairly consistently. Notice that we have not done anything artistic – no sculpting, no deviations from the source scan.
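The bake loop itself looks roughly like this. Again a sketch, not a real baker's API: `surface_at(u, v)` stands in for the lookup that turns a uv coordinate back into a position and normal on the rough volume, and `cast_ray` for the intersection query against the scan.

```python
# Displacement-bake sketch: for each pixel of the uv map, find the surface
# point it corresponds to, fire a ray toward the scan, and store the hit
# distance as a gray value normalized by the cutoff range.

def bake_displacement(width, height, surface_at, cast_ray, cutoff):
    """Return a row-major list of gray values in [0, 1]."""
    image = []
    for j in range(height):
        for i in range(width):
            u, v = (i + 0.5) / width, (j + 0.5) / height   # pixel-center uv
            point, normal = surface_at(u, v)
            d = cast_ray(point, normal)
            if d is None or d > cutoff:
                d = cutoff                  # misses default to the far end of the range
            image.append(d / cutoff)        # normalize into the gray range
    return image
```

The cutoff plays the double role the text describes: it is both the default for misses and the scale that maps distances onto the dark-to-light range.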

What’s the point of all this? As you can see, our result is visually meager, yet it renders in 3.9 seconds as opposed to 5. And yes, we cg artists consider that a big deal, because render times climb steeply with resolution and often with further shading complexity, which means a lot when rendering 24 to 30 frames for each and every second of footage. What’s more important here is that the number of polygons is now dynamic. At this resolution, we calculated 783,380 polys compared to 997,593 in the STL mesh. Worth it? Not likely. Yet as a dynamic asset it quickly becomes valuable – for example, if the item is rendered far from the camera, it may generate as few as 296 polygons. That’s a major savings. Conversely, there are also methods to drive the amount of generated geometry over macro ranges, so that only the information currently necessary is loaded, not the mesh as a whole.
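One simple way that dynamic polygon count can work is to pick a subdivision level from the object's distance to the camera. The scheme below (halve the level each time the distance doubles) and its thresholds are made-up illustration values, not taken from any particular renderer.

```python
# Distance-driven level of detail: closer objects get more subdivision
# steps, and each step quadruples the quad count.

def subdivision_level(distance, max_level=6, base_distance=1.0):
    """Drop one subdivision level for every doubling of distance."""
    level = max_level
    while distance > base_distance and level > 0:
        distance /= 2.0
        level -= 1
    return level

def polygon_count(level, base_polys=4):
    # Each subdivision step splits every quad into four.
    return base_polys * 4 ** level
```

With these toy numbers, an object right at the camera renders tens of thousands of quads, while the same asset a hundred units away collapses to the base cage.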

Useful for scientific purposes? I have my ideas as to why, yes, for certain purposes it is – it enables sculpting processes and visualization, for example, combined with a quantifiable deviation from the scanned material. Finally, we have to spend a bit more time getting our mesh to fit more closely. That comes next.

edit:
I’m not doing a 100% step-by-step. I wouldn’t have time for that, unless you want a video walkthrough. Feedback is very welcome. Is this too much? Too little? Helpful?