Modeling and Animation Systems
I'm going to assume some familiarity here with 3D systems, and how
3D coordinates are handled in a 3D system. Part of this will be
covered in the tools section, since whatever it is you use to
create and animate the models is by definition a tool. However
you decide to create and animate your 3D models, once it's done
outside of the game, you have to decide how to get this data into
the game, and then onto the screen. For argument's sake we'll
take the example of a walk sequence on a biped human figure. We'll
gloss over how this model and animation sequence is created, and
get straight into the meat of the matter. There are two basic
approaches to animation systems: skeletal and mesh based. We'll
deal with mesh based first, since that's actually what Quake
uses. Mesh-based animation takes the form of physically calculating
the position of each vertex of a model for each frame of
the animation, and applying that to the model over time.
So, here
you see a 3D model. Each point of each triangle on the model is
called a vertex. Each vertex has a 3D coordinate relative to the
origin of the model. As the model animates through a couple of
frames, you can see that the 3D coordinate for the example vertex
moves in 3D space in model-relative coordinates. (This means
they are relative to the origin of the model, so vertex + model
origin = real-world coordinate.) Mesh-based animation physically
calculates each vertex's 3D coordinate for each frame, and exports
that into the game. We just stuff those values into the model
in the computer for each frame, and voilà, the model moves. There
are some advantages to this approach, since you can do mesh deformation.
This means you can get skirts to wave, hair to move with the breeze
and so on really easily, and with no extra processor time; it's
all calculated ahead of time. Plus it's quick. Since we aren't
doing huge amounts of transformations on each vertex, it's really
processor friendly.
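To make the per-frame vertex idea concrete, here is a minimal Python sketch. The `frames` layout and the `sample_pose` helper are made up for illustration; they are not Quake's actual model format, just the general technique of baked-out vertex arrays interpolated over time:

```python
# A minimal sketch of mesh-based (per-vertex) animation, assuming the
# exporter has already baked out one full vertex array per frame.

def lerp(a, b, t):
    """Linearly interpolate between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def sample_pose(frames, time, fps=10.0):
    """Return interpolated vertex positions at a given animation time.

    frames: list of frames; each frame is a list of (x, y, z) vertices.
    """
    f = time * fps
    i = int(f) % len(frames)
    j = (i + 1) % len(frames)   # wrap around for a looping walk cycle
    t = f - int(f)
    return [lerp(a, b, t) for a, b in zip(frames[i], frames[j])]

# Two keyframes of a one-triangle "model": the apex vertex moves upward.
frames = [
    [(0, 0, 0), (1, 0, 0), (0.5, 1, 0)],
    [(0, 0, 0), (1, 0, 0), (0.5, 2, 0)],
]
pose = sample_pose(frames, time=0.05)   # halfway between frames 0 and 1
print(pose[2])  # -> (0.5, 1.5, 0.0)
```

Interpolating between the stored frames, as many engines of this era did, keeps the motion smooth even when the animation was exported at a low frame rate.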
The drawbacks
to this system are a bit heavy, though. Firstly, all that data
takes a ton of space. There are things you can do to make it less
of a space hog, such as compression, relative motion and so on,
but it's still pretty expensive. Another drawback is that
the animation can only be used on one type of model. You can't
use the same animation on two different models, such as female
and male, since the shape of the model is intrinsic to the animation.
One last drawback is that it's hard to break a model using
this system. For instance, if you want to blast a limb off, there
needs to be a replacement animation with no arm attached. You
can imagine how that drives up space requirements.
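A quick back-of-the-envelope calculation shows why the space cost bites. All the numbers below are illustrative, not from any particular game:

```python
# Rough sketch of why per-frame vertex data gets big.
vertices = 500            # a modest character model
floats_per_vertex = 3     # x, y, z
bytes_per_float = 4
frames = 200              # all animation frames for this model combined

total_bytes = vertices * floats_per_vertex * bytes_per_float * frames
print(total_bytes)        # -> 1200000, about 1.2 MB for one model
```

Over a megabyte for a single character, before textures, was serious money in the RAM budgets of the day, which is why compression and delta encoding mattered.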
The next
system is skeletal based. The idea here is that all you animate
is a skeleton within the model itself.
You can
see here the skeleton within the model of our human. Each vertex
is assigned to one of the bones within the model. What we do is
animate just the skeleton, and then apply each bone's animation
to the vertices of the actual model... as the arm bone moves,
so do all the vertices in the arm model. This is a powerful way
of moving stuff around, since given one skeletal animation, you
can apply it to many different models. Plus, you can blow limbs
off really easily, since it's defined in a hierarchical way... one
bone connected to the next. There are some drawbacks... it's pretty
expensive in processor time, doing all this 3D geometry... plus,
joints between bones can give you some grief, causing seams or
model overlap to appear. There are ways around this, but it's
a little too in depth to go into right now.
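The bone-moves-vertices idea can be sketched in a few lines of Python. This is rigid skinning at its simplest: each vertex bound to exactly one bone, with a flat (non-hierarchical) bone list and made-up names, so it is a toy illustration of the technique rather than a real implementation:

```python
import math

# Rigid skeletal skinning sketch: posing a bone moves its bound vertices.

def rotate_z(p, origin, angle):
    """Rotate point p about `origin` by `angle` radians around the Z axis."""
    x, y, z = p[0] - origin[0], p[1] - origin[1], p[2] - origin[2]
    c, s = math.cos(angle), math.sin(angle)
    return (origin[0] + x * c - y * s,
            origin[1] + x * s + y * c,
            origin[2] + z)

# Bind pose: two vertices on the "forearm", one on the "upper arm".
vertices = [(2, 0, 0), (3, 0, 0), (1, 0, 0)]
bone_of_vertex = [1, 1, 0]                        # index into `bones`
bones = [
    {"origin": (0, 0, 0), "angle": 0.0},          # upper arm: unmoved
    {"origin": (1, 0, 0), "angle": math.pi / 2},  # forearm: bent 90 degrees
]

posed = [rotate_z(v, bones[b]["origin"], bones[b]["angle"])
         for v, b in zip(vertices, bone_of_vertex)]
print(posed[0])  # the vertex at (2,0,0) swings up around the elbow at (1,0,0)
```

Because the animation lives entirely in the `bones` list, the same elbow bend could be applied to a different `vertices` array, which is exactly the reuse advantage described above.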
I'll
briefly mention level of detail, something that a lot of people
are using now. LOD is where you reduce the detail of any given
model in direct relation to its on-screen size. If our human is way off
in the distance, then there's no reason to draw him with
all his details, like ears, noses and fingers, since you can't
distinguish all those details when he's only 10 pixels high.
It's the same as in movies, where those at the back of the
crowd scenes don't get all the cool outfits, since they aren't
really seen clearly. Doing this reduces load on the processor,
since it's not trying to do geometry on so many polygons.
Skeletal animation lends itself to this more than mesh based does. Again,
there are two schools of thought here. One is replacement models:
instead of having one model, you have three or four, all at
different levels of detail, so when the model gets further than
distance X from the camera, you simply swap the model you draw
for one with less detail.
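The replacement-model approach boils down to a distance lookup. Here is a small sketch; the thresholds and model names are invented for illustration:

```python
# Discrete ("replacement model") LOD selection: keep a few pre-built
# versions of the model and pick one based on distance from the camera.

lod_models = [
    (0.0,   "human_high"),    # full detail: ears, nose, fingers
    (50.0,  "human_medium"),
    (150.0, "human_low"),     # a handful of polygons at 10 pixels high
]

def pick_lod(distance):
    """Return the least-detailed model whose distance threshold is passed."""
    chosen = lod_models[0][1]
    for threshold, name in lod_models:
        if distance >= threshold:
            chosen = name
    return chosen

print(pick_lod(10.0))    # -> human_high
print(pick_lod(200.0))   # -> human_low
```

The selection itself is nearly free; the cost of this scheme is the extra models in memory, which is the problem the next paragraph raises for mesh-based animation.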
With mesh
based, however, since you have new models at a reduced resolution,
you have a new set of animations to deal with, and up goes memory
usage. The second approach to LOD is to reduce the complexity
of the model on the fly, using an algorithm that drops vertices
as the model shrinks in screen size. It's definitely possible,
but a balance has to be struck to ensure that the routine that's
doing this reduction doesn't take longer than the processor
would take to do the math on all the points in the first place.
There is much more to be said on this topic, but that's beyond
the scope of this article.