So… questions about Rubidgeinae teeth: Christian, could you clarify?
Here are quick paint-overs… do these guys have palatal teeth? Your paper shows them; I'd just like to make sure before modeling them in. Also… what do we know about the teeth on the jaw? I only have other illustrations to go on for them; they are absent from the papers I've seen. And if there are no molar-type teeth for crushing towards the back of the mouth, where bite forces would make that feasible, does this mean they tore off and swallowed flesh? No processing of bones, etc.? Were those massive skulls muscled only for attack / immobilization, primarily with the front portion of the snout? (That would perhaps explain the somewhat unique 'chin'.)
Also made a quick sketch of what the open mouth would look like:
So I'm modeling a gorgonopsid for Christian Kammerer. A bunch of them, actually. And I've started with a biomechanical model, not a skeleton. Skeletons are a lot of work, but there are also more convincing arguments for an abstracted model of the skeleton… we often do not have complete skeletons, or they have to be reconstructed from shards, from deformed shapes, etc. So I want a model that communicates that this is an abstracted form, and lets me get on with the work.
My plan is to then create a generic base gorgonopsid ‘meat’ mesh, and adapt that for each of the species. Of course, I’m not very knowledgeable about therapsids, so… questions:
- is this halfway reasonable? It would be good to get any critique of the generic pose, limb sprawl, proportions etc. now, early in the process.
- The limbs are really hacked, but are they acceptable? Are these guys plantigrade?
- I’d like to show the mouth somehow, not for RAWR, but because we only really have the heads of these guys and they are bizarre. Did they really have palatal teeth?
- Which is your favorite? Dinogorgon and Smilesaurus are badass, but I’m thinking I’ll start with a more plain vanilla Rubidgeinae
Carrying on! After we've created a low-poly volume and laid out its polygons on an additional uv coordinate map like a skinned cat, we can bake the displacement map. This takes each point on the mesh corresponding to a pixel and assigns it a grey value according to its distance from the high-resolution mesh. The information looks like this (the layout is not the best – a result of my hacked-out uv map for an unfinished model – resulting in lots of wasted pixels):
As you can see from the animated gif, the result is very accurate.
But… what’s the purpose?
The main advantage is memory efficiency. The base mesh of ~26 MB is reduced to one of just over 108 KB plus a 137 KB jpg (1024×1024). Not only is storage more efficient, but rendering a 3D model from this can require considerably less computational power, while allowing for dynamic resolution based on camera distance. In other words, if a scientist needs to compare meshes for volumes, this might offer a technique for quantitatively accurate models that can be analyzed much more quickly at various levels of detail, and afterward be used in interactive digital publications as navigable illustrations with low computational overhead. And of course, this can be done for any 3D mesh… whether it represents bone, muscle or air pockets.
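As a back-of-envelope check, the storage savings work out like this (the sizes are the ones quoted above; this is a quick sketch, not a benchmark):

```python
# Sizes from the post: a ~26 MB high-res base mesh versus a low-poly
# cage plus a 1024x1024 grey-value displacement jpg.
base_mesh_kb = 26 * 1024           # high-resolution mesh, in KB
lowpoly_kb = 108                   # low-poly cage mesh
displacement_jpg_kb = 137          # baked displacement map
compact_kb = lowpoly_kb + displacement_jpg_kb

ratio = base_mesh_kb / compact_kb
print(f"storage reduction: ~{ratio:.0f}x")   # roughly two orders of magnitude
```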
From the point of view of the artist, creating such a mesh is like sketching from a life model. I have a much better understanding of a crocodile's cranial morphology after doing this, as if I'd copied the drawings of a master. It is a good deal of work (more than I initially estimated), but well worth it.
After getting a crocodile skull, doing a test-run and cleaning up the skull, we now proceed to model our optimized geometry. The goal is more or less the same as the early garbage bake we did before, just with a higher level of detail.
I'm also going to pay attention to edge flow, which means that the polys should be laid out so that they follow the topology of the dense stl scan. This is a crucial issue for meshes that will deform later, such as an organic character. Here, it's more or less a question of efficiency and accuracy: getting the fewest polys to fit as closely as possible.
The workflow of steps such as these is critical, and can make the difference between an enjoyable hour of zen-like meshing and a miserable day-and-a-half. The basic tool modules are simple enough: create polys from edge, vertex or poly manipulation, while constraining the position of all elements to the dense mesh in the background. Above I've made a video of this process in modo… another very good tool for this is 3DCoat (and all the major packages – from Blender to Maya – have the necessary tools). Key is an in-window workflow, where keyboard shortcuts intuitively allow toolset changes. In this way, I can quickly extrude edges or lengths of edges, pull a poly out from a vertex or edge, and snap it to a cohesive edge. I'm rusty, and this skull would take me about a half-day of concentrated, undisturbed work (which apparently isn't going to happen today).
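The background constraint at the heart of this retopo workflow can be sketched in a few lines. Here it's approximated by snapping each work-mesh point to the nearest vertex of the dense scan – real tools project to the nearest point on a triangle surface – and the function name and toy data are made up for illustration:

```python
import math

def snap_to_background(points, dense_verts):
    """Constrain work-mesh points to a dense background mesh,
    approximated by snapping each point to the nearest dense vertex.
    (Actual packages project onto the nearest triangle instead.)"""
    snapped = []
    for p in points:
        nearest = min(dense_verts, key=lambda v: math.dist(p, v))
        snapped.append(nearest)
    return snapped

# toy "dense scan": just three vertices
dense = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(snap_to_background([(0.1, 0.1, 0.2), (0.9, -0.1, 0.0)], dense))
```

In a real package this happens interactively per tool stroke; the point is only that every new vertex is pulled onto the scanned surface as you work.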
Next up: finishing this process and baking. Might take a while before I get to preparing this as I’m off teaching for a week. Any questions? Requests for other 3D processes? The visit rate makes me think this is useful, but… you’re a shy bunch!
Before we proceed to building a proxy geometry, we need to clean up the stl file. In particular, we need to separate the skull and jaw into two distinct geometries, while ensuring that they have no holes – i.e. that they remain watertight.
A useful tool for finding bridge connections is the select connected command, or whatever it's called in your software of choice. One method is select all connected, which will show that the parts are connected; another is the grow method of selecting neighbors, which helps pinpoint where the bridges are. Above is an example of the grow method as the polys connect across what we would wish to be distinct geometries.
Now that you know where the problem area is, you can get in there and select the connecting polygons and delete them.
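Under the hood, select-connected / grow is just a flood fill across shared vertices. A minimal sketch (hypothetical helper, toy poly data – not any package's actual API):

```python
from collections import defaultdict, deque

def connected_polys(polys, seed):
    """Flood-fill 'select connected': starting from poly index `seed`,
    grow the selection across polys that share at least one vertex.
    (A single shared vertex is enough to join two 'meshes'.)"""
    # map each vertex to the polys that use it
    vert_to_polys = defaultdict(set)
    for i, poly in enumerate(polys):
        for v in poly:
            vert_to_polys[v].add(i)
    selected, frontier = {seed}, deque([seed])
    while frontier:
        p = frontier.popleft()
        for v in polys[p]:
            for q in vert_to_polys[v] - selected:
                selected.add(q)
                frontier.append(q)
    return selected

# two quads sharing an edge, plus one isolated triangle
polys = [(0, 1, 2, 3), (2, 3, 4, 5), (10, 11, 12)]
print(connected_polys(polys, 0))   # the two quads light up; the triangle stays dark
```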
Note that there are many ways to do this – a truism for all of the steps in this process – and it's generally advisable to build on your skillset. A programmer, for example, can code and might understand mathematics, so she might write a script that selects polygons based on an occlusion algorithm. I, on the other hand, can shovel. So I go in there and dig out one poly after the next, in glorious hands-on intimacy. 3D has something for everyone, and you'll surely be bringing your own skills to the table.
You'll undoubtedly run into single shared vertices… you'd think a full edge would have to be shared in order to unite two meshes, but no… such is the injustice of 3D clean-up.
Eventually you’ll achieve that satiating moment when – upon double-clicking to select all connected – only the skull or jaw lights up. Enjoy it…
… because now you have to climb back down into the trenches, closing all the holes you've just made. Make that look like…
…this. Over and over again. It goes by rather quickly however.
Forgot to mention that:
- Finding holes is quickly done by selecting boundary edges. If this isn't supported by your software, select edges with fewer than two associated polys (a boundary edge borders only one) – this will often do the trick.
- Filling the holes can be done easily by selecting the edge, creating a poly and triangulating. This maintains stl support for future printing etc. You can also bridge using an extrapolation of the edge polys' normals, which maintains the anticipated surface shape in cases where the hole is so large that the first technique would result in a flat dent. Both techniques are likely within the range of acceptable deviance.
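Boundary-edge detection is simple to sketch: in a watertight mesh every edge borders exactly two polys, so any edge with only one adjacent poly sits on the rim of a hole (toy example, not any package's actual API):

```python
from collections import Counter

def boundary_edges(polys):
    """Find hole boundaries: an edge bordered by exactly one poly
    is a boundary edge (a watertight mesh has none)."""
    counts = Counter()
    for poly in polys:
        # walk each poly's edges, normalizing vertex order
        for a, b in zip(poly, poly[1:] + poly[:1]):
            counts[tuple(sorted((a, b)))] += 1
    return [edge for edge, n in counts.items() if n == 1]

# two triangles forming a quad: the shared edge (0, 2) is interior,
# the four outer edges form the boundary
rim = boundary_edges([(0, 1, 2), (0, 2, 3)])
print(rim)
```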
Continuing from yesterday's introduction, we'll make a garbage model to show what issues we're dealing with and what the goal is. My time tracker reports 17 minutes for this entire process, including the creation of these images, so this is a very quick-n-dirty process.
I began by box-modeling a rough form to encompass the skull, then smoothing and subdividing to get it to fit somewhat tightly around the mesh.
Then I smooth this geometry again, but this time using a background constraint. This takes each point in my volume and fires a ray along its normal, noting the distance at which it encounters a surface on the background mesh. The calculation is given a default cut-off distance in case the ray doesn't find anything… in which case various options are available, e.g. averaging the distance of neighboring vertices. After this process, the point itself is relocated to the respective position on the stl mesh. Voilà: shrink-wrapped cg surfaces.
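A minimal sketch of that ray-firing shrink-wrap. The hypothetical `hit_distance` function stands in for the actual ray cast against the background mesh, and the neighbor-averaging fallback is simplified to an average over all successful hits:

```python
def shrink_wrap(verts, normals, hit_distance, cutoff=1.0):
    """Background-constraint sketch: fire a ray from each vertex along
    its normal; `hit_distance(vert, normal)` returns the distance to the
    background mesh, or None on a miss. Hits beyond the cutoff fall back
    to the average hit distance (real tools average the neighbors)."""
    dists = [hit_distance(v, n) for v, n in zip(verts, normals)]
    hits = [d for d in dists if d is not None and d <= cutoff]
    fallback = sum(hits) / len(hits) if hits else 0.0
    moved = []
    for (v, n), d in zip(zip(verts, normals), dists):
        d = d if d is not None and d <= cutoff else fallback
        # relocate the vertex along its normal by the measured distance
        moved.append(tuple(vi + d * ni for vi, ni in zip(v, n)))
    return moved

# toy background mesh: the plane z = 0, with rays fired straight down
plane_hit = lambda v, n: v[2] if n == (0, 0, -1) else None
verts = [(0, 0, 0.2), (1, 0, 0.4), (2, 0, 5.0)]   # last vertex exceeds the cutoff
normals = [(0, 0, -1)] * 3
print(shrink_wrap(verts, normals, plane_hit))
```

The first two vertices land exactly on the plane; the third is beyond the cut-off, so it only moves by the averaged distance of the others.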
This may or may not be sufficient for volume analysis – there's a lot of empty space here. If it is, the model's weight would be 199 vertices versus 499,362.
Each point in the mesh now has a roughly desired position in space, defined by three axes (x, y, z). The next important ingredient is to give each of these points a relative coordinate on a construed 2D surface – a uv map. Why? So that….
…we can repeat the ray-firing process, this time not for each point on the mesh but for each pixel laid out along the uv map. This generates a grey-value map recording each pixel's distance from the rough volume to the scanned surface. Here again, a cut-off is involved. This range determines not only the distance beyond which a value simply defaults, but also the range of information between the darkest and lightest values. Ideally, you want your rough volume to approximate the scanned item rather consistently. Notice that we have not done anything artistic… no sculpting, no deviations from the source scan.
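The pixel-to-grey mapping with its cut-off might look like this. A sketch only: real bakers typically write 16- or 32-bit values, and mid-grey-equals-zero-displacement is just one common convention:

```python
def bake_grey(distances, cutoff=1.0):
    """Map per-pixel ray distances to 8-bit grey values.
    The cutoff sets both the default for misses (None) and the range
    the darkest-to-lightest values span: a tight cutoff around a
    well-fitting cage spends the 256 grey levels on real detail."""
    grey = []
    for d in distances:
        # misses default to the cutoff; hits are clamped into range
        d = cutoff if d is None else max(-cutoff, min(cutoff, d))
        # -cutoff..+cutoff  ->  0..255, zero displacement at mid-grey
        grey.append(round((d + cutoff) / (2 * cutoff) * 255))
    return grey

print(bake_grey([0.0, 0.5, -0.5, None], cutoff=1.0))
```

This is why a sloppy cage wastes precision: the wider the cut-off has to be, the coarser each grey step becomes.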
What's the point of all this? As you can see, our result is visually meager, yet it renders in 3.9 seconds as opposed to 5. And yes, we cg artists consider that a big deal, because render times scale steeply with resolution and often with further shading complexities, which means a lot when rendering 24 to 30 frames for each and every second of footage. What's more important here is that the number of polygons is now dynamic. At this resolution, we calculated 783,380 polys compared to 997,593 in the stl mesh. Worth it? Not likely. Yet as a dynamic asset it quickly becomes valuable – for example, if the item is rendered far away from the camera it will generate as few as 296 polys. That's a major savings. Conversely, there are also methods to drive the amount of generated geometry in macro ranges, so that only the information currently necessary is loaded, not the mesh as a whole.
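The "dynamic" part comes from subdivision: each level multiplies the face count by four, so a renderer can pick the level from camera distance. (The counts above won't match this exactly, since displacement tessellation is adaptive; this only illustrates the scaling.)

```python
def polys_at_level(base_polys, level):
    """Subdivision LOD: each Catmull-Clark-style subdivision step
    quadruples the face count, so the renderer can choose a level
    from camera distance instead of always loading the full mesh."""
    return base_polys * 4 ** level

base = 296   # the far-from-camera count mentioned in the post
for level in range(5):
    print(level, polys_at_level(base, level))
```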
Useful for scientific purposes? I have my ideas as to why, for certain purposes, it is – it enables sculpting processes and visualization, for example, combined with a quantifiable deviation from the scanned material. Finally, we have to spend a bit more time giving our mesh a closer fit. That comes next.
I'm not doing a 100% step-by-step. I wouldn't have time for that, unless you want a video walkthrough. Feedback is very welcome. Is this too much? Too little? Helpful?
Last year's DigitalFossil – organized by Heinrich Mallison and populated by all sorts of innovative specialists – motivated me to share some modeling / shading techniques from the workings of vfx. In particular, I'm motivated by Julia Molnar's talk about modeling abstracted bone volumes – not because one technique is better or worse, but because they are different. Julia presented a cool remodeling method for quickly reducing the resolution of a scanned bone while roughly maintaining volume. She relied on lofting profile curves, and it worked wonderfully. In vfx, volume maintenance isn't the goal of remodeling… you get it as a side-effect of creating models that render efficiently at various levels of detail. The most common method relies on a combination of subdivision surfaces and image-based displacement. A subdivision surface (sub-d) is a mesh (points in 3D space forming faces) that is subdivided according to a smoothing algorithm, which makes it behave at least a bit like a nurbs surface – defined not by points but by mathematical equations. Sub-d's are really powerful because they can be deformed in the low-res version and then further subdivided and displaced according to pixel or procedural textures.
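A 1D analogue makes the sub-d idea concrete: Chaikin corner-cutting refines a coarse control polygon toward a smooth curve, just as subdivision refines a coarse cage toward a smooth surface. An illustrative sketch only – not the Catmull-Clark algorithm the surface tools actually use:

```python
def chaikin(points, iterations=1):
    """1D analogue of subdivision surfaces: Chaikin corner-cutting.
    Each pass replaces every segment with two points at 1/4 and 3/4,
    so a coarse control polyline converges toward a smooth curve –
    the same idea sub-d meshes use, with displacement added on top."""
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

square_ish = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(chaikin(square_ish, 3)))   # a handful of control points becomes many
```

The cheap low-res cage stays editable; the smooth version is generated on demand – which is exactly what makes the deform-then-displace workflow possible.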
I’m going to walk through this process, taking a crocodile skull from the University of Texas from scanned mesh to a sub-d mesh with pixel-based displacement. I’m writing this for scientists, but I hope it’s generally of use. Also, keep in mind that there are many possible methods to get from A to B, and also that the technologies are constantly evolving – so methods have to be assessed according to the intended use and available time.
First, get your crocodile skull. It's downloadable as an stl file – a bunch of points in 3D space connected by faces to represent a surface. You can get it into your 3D software (like Blender) by importing or loading. You may have to convert it to a compatible format – I had to convert it to a dae (collada exchange format) to get it into modo. Blender is great for these things… it's likely the best format converter out there, with the exception of license-required max files and full scene formats. I'll be working with modo from here on in, but all the steps can be replicated in whatever software you choose.
That's as far as I'll get today, which is already enough to 3D print! Well, okay, cleanup required, as this cool tutorial for MakerBot shows.
Sign of the times: quality 3D printing with resin (as opposed to plastic-beading technologies) is becoming affordable, thanks to a kickstarter project, in the range of 3,000 US dollars. The Form 1 is a sign of things to come, and even though some are uncertain about the solidity of the kickstarter financing model, the goals have been dramatically overachieved. Perfect for printing handy organic shapes like skeletons. Me wants.