How'd They Do That?
By Jason "loonyboi" Bergman
In each installment of How'd They Do That?, readers send in their questions about a specific title, and we go to the developers to find out just what magic was performed. If you've got a question, send it to us at [email protected].
Hey kids! The response to the first installment of How'd They Do That? has been fantastic, to say the least. I get a big kick out of tracking down the answers to these questions, so keep them coming!
By far, the most-asked question since last week's installment has had to do with 3D engines...specifically outdoor environments in 3D engines. Our own Rowan "Sumaleth" Crawford asked about Unreal's huge outdoor environments in relation to the smaller ones found in the Quake engine, and Victor Ho asked about the Tribes engine, and how they made those giant outdoor environments work.
Well, since they asked, I went to the developers for the answers. So here you go, this week's question:
The Question: How come some engines can do giant outdoor environments, and others can't?
First we went to Tim Sweeney, programmer on the Unreal engine. Here's what he had to say:
Unreal can handle large outdoor environments decently because I made the conscious decision to sacrifice raw Quake style performance in small "corridor" levels, in exchange for more generality and the ability to handle outdoor scenes.
The two big engine decisions that impact outdoor performance are:
1. To compute visibility dynamically during gameplay. This has several nice benefits, such as the ability to handle large open environments and a significant reduction in preprocessing time. The disadvantage is a reduction in performance in small, highly occluded levels of 5-10 milliseconds per frame.
2. Unreal's software renderer combines textures and lightmaps dynamically while rendering each pixel. The advantage is that we get multicolored lighting in software, dynamic lighting becomes much faster than possible using Quake style "texture caching", and it becomes possible to render huge surfaces -- a mile wide surface has the same performance characteristics as a ten foot wide surface. The disadvantage is a reduction in performance in small "corridor" levels which don't have dynamic lighting.
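Sweeney's second point can be illustrated with a toy sketch (this is not Unreal's actual renderer, just the general idea): if each pixel's texture sample is modulated by an RGB lightmap sample at draw time, colored and dynamic lighting come for free, and no pre-lit "surface cache" copy of the polygon ever needs to exist, no matter how large the surface is.

```python
# Toy sketch of per-pixel texture/lightmap combination (illustrative,
# not Unreal's actual code): the output pixel is the texture texel
# modulated by an RGB light sample, so lighting can change every frame
# without rebuilding any cached pre-lit surface.

def shade_pixel(texel, light):
    """texel, light: (r, g, b) tuples with components in 0..255.
    Returns the texel scaled channel-by-channel by the light color."""
    return tuple((t * l) // 255 for t, l in zip(texel, light))

# A mid-gray texel under a warm orange light:
print(shade_pixel((128, 128, 128), (255, 200, 100)))  # -> (128, 100, 50)
```

Doing this multiply per pixel is exactly the cost Sweeney mentions: it's slower than blitting a cached pre-lit surface, but the cost is the same whether the polygon is ten feet or a mile wide.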
Now that we know how Unreal did it, what about Tribes? No doubt, those are some of the largest outdoor environments around. What's their secret?
We asked Mark Frohnmayer, programmer on the Tribes engine just how they did it:
For the terrain we use a form of detailed heightmap rendering... basically we have a grid of heights that form squares, each 8 meters on a side. All the squares close to the player are rendered as two triangles, but as you get farther from a square on the terrain, it will combine with 3 of its neighbors to form a single, larger square, then farther still that square will combine with 3 of its neighbors to form a larger square. In this way we keep the overall number of polygons that we render small (< 450 or so) while giving the appearance of higher detail.
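The merging scheme Frohnmayer describes can be sketched like this (the distance thresholds below are invented for illustration, not Tribes' actual values): near the camera, each 8-meter grid square is drawn as two triangles, and each time the distance doubles past some cutoff, 2x2 squares merge into one larger square, so the triangle count stays roughly constant.

```python
# Rough sketch of distance-based terrain square merging (hypothetical
# thresholds, not Tribes' real numbers): level 0 draws 8 m squares as
# two triangles each; every level above that merges 2x2 squares of the
# previous level into one larger square.

SQUARE_SIZE = 8.0  # metres per base grid square

def lod_level(distance, base=64.0):
    """0 = full detail; each higher level merges 2x2 squares into one."""
    level = 0
    while distance > base * (2 ** level):
        level += 1
    return level

def square_size_at(distance):
    """Edge length (in metres) of the rendered square at this distance."""
    return SQUARE_SIZE * (2 ** lod_level(distance))

for d in (10, 100, 300, 1000):
    print(d, "m away ->", square_size_at(d), "m squares")
```

At 10 m you get full-detail 8 m squares; by 1000 m, squares four merge-levels up cover 128 m apiece, which is how the on-screen polygon count stays in the few-hundred range while nearby terrain still looks detailed.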
The other question we had was how they did those seamless indoor-to-outdoor transitions that are omnipresent in Tribes:
Again it's really just an issue of keeping the number of triangles on screen small - so when you look outside it doesn't bog down. To manage this, all our shapes (players, static objects and interiors) use static LOD - there are several versions of each (at different detail levels) and the engine picks one depending on how far away you are from it and what your detail settings are. Also, most of the threshold areas in the interiors are lower detail than the rest of the shape.
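Static LOD selection like the scheme described above boils down to a distance lookup. A minimal sketch, with invented names and thresholds (Tribes' actual cutoffs and detail-setting math are not given in the quote):

```python
# Hypothetical static-LOD picker (names and thresholds invented): each
# shape ships with several premade meshes, and the engine chooses one
# from camera distance scaled by the player's detail setting.

def pick_lod(distance, thresholds, detail_scale=1.0):
    """thresholds: ascending distance cutoffs, one per LOD boundary.
    Returns a LOD index: 0 = highest detail, len(thresholds) = coarsest.
    A detail_scale below 1.0 makes the engine switch to coarser
    models sooner."""
    scaled = distance / detail_scale
    for i, cutoff in enumerate(thresholds):
        if scaled < cutoff:
            return i
    return len(thresholds)

# Three detail levels, switching at 20 m and 60 m:
print(pick_lod(10, [20, 60]))  # -> 0 (full detail)
print(pick_lod(45, [20, 60]))  # -> 1
print(pick_lod(90, [20, 60]))  # -> 2 (coarsest)
```

Because the meshes are premade rather than generated on the fly, the per-frame cost is just this comparison, which is what keeps the outside view from bogging down when a door opens.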
Sounds simple enough, right? So why doesn't Quake have the same ability? I figured there was a fundamental tradeoff here, so I asked John Carmack what it was:
Quake and Quake2 had a specific disadvantage for large areas: because the software renderer used a surface cache for the static lighting, it was important that the maximum size of a single surface be kept reasonable. The utilities segmented large polygons up so that they were no more than 256 units on a side, which kept the surface caches under 64k and let per-polygon mip mapping produce good results. This means that if you made a single 8192 by 8192 brush for your giant outdoor ground, it got turned into 1024 polygons just for the surface cache. BSP splits then diced that up even more.
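Carmack's numbers check out with simple arithmetic (a back-of-envelope illustration, not code from the Quake tools): an 8192-unit face diced into 256-unit patches is a 32 by 32 grid, hence 1024 surfaces before BSP splits dice it up further.

```python
# Back-of-envelope check on the surface-cache subdivision numbers
# (illustrative only, not the actual Quake utility code).

MAX_SIDE = 256  # maximum surface side length, in map units

def surface_count(width, height, max_side=MAX_SIDE):
    """How many patches a width x height face is split into when no
    patch may exceed max_side units on a side."""
    cols = -(-width // max_side)   # ceiling division
    rows = -(-height // max_side)
    return cols * rows

print(surface_count(8192, 8192))  # -> 1024
```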
A technical solution that I could have followed would have been to have a hierarchy of subdivisions on surfaces so that surfaces only dropped down to a new subdivision level when their total size exceeded 64k at their current view mip level. That would have reduced the polygon count to approximately log2N of the current values.
Another solution is to just not use lightmaps in outdoor areas and suffer with poor mipmap selection on large polygons. Yet another solution is to use a slower rasterizer that does the lighting on a per pixel basis, avoiding the need for a surface cache.
Quake and Quake2 weren't outdoor games, so I didn't pursue any of those directions. If you are using hardware with per-pixel mipmapping, geometry doesn't need to be subdivided at all, and that 8k by 8k polygon remains a single polygon.
Yow! Hope all that answers your questions, guys! Remember, if you've got a question about games, send it to us! We're here to get the answers you crave!
Credits: How'd They Do That? logo illustrated by and is © 1999 Dan Zalkus. How'd They Do That? is © 1999 Jason Bergman. All other content is © 1999 loonyboi productions. Unauthorized reproduction is strictly prohibited, so don't try it, or we'll make you disappear. Or just throw knives at you. It's your call, really.