CGA: Please tell us about your background and how you became involved in the industry.
CW: I’ve been interested in computer graphics ever since I was first introduced to it in high school. I studied computer art in both my undergraduate and graduate education. In 1995, I received an internship at George Lucas’ Industrial Light & Magic, where I began developing the digital tornado software for the film Twister.
I continued on at ILM for seven more years before joining Weta Digital.
CGA: What was your role in the production of King Kong?
CW: My role was Computer Graphics Supervisor for the New York sequences. Along with supervising artists, one of my major responsibilities was developing the look and technology for building digital 1933 New York City.
CGA: What was the brief you received to design and plan the architectural shots of New York?
CW: Our goal was to create a photorealistic environment as accurate to 1933 New York as possible. Coinciding with our digital work, there was also a New York set built in New Zealand. This set was built up to the first story, with digital extensions to be added later to complete the scene. We were also asked to make it all 3D, so the director would have complete freedom to send the camera anywhere in the city.
CGA: Tell us about how you obtained architectural data for the buildings for pre-1933 New York and how you went about determining what would be modeled.
CW: For specific landmark buildings, we worked off blueprints and period photo references. Each of these buildings would be hand modeled. For the thousands of other buildings, modeling by hand would have been far too time consuming.
For these I began developing a piece of software called the “CityBot”. This software created procedural 3D buildings to be rendered with RenderMan. By the time the project was completed, we had created over 90,000 3D buildings. All of Manhattan Island was 3D, as well as parts of New Jersey, Brooklyn, and Queens. Every building on Manhattan was unique and built to the finest level of detail. To bring the city to life, we also had traffic, boats, working chimneys, factories, and el trains.
CGA: Can you tell us a bit more about the software that was designed to determine construction age and how it worked?
CW: For the CityBot to work, we first needed a simplified 3D map defining an accurate 1933 New York skyline. We called this map our “Guide Geo”. We started by acquiring footprint data of modern New York, which we converted into a 3D map using AutoCAD. To make the map 1933 compliant, we needed to identify and replace each building constructed post-1933. We obtained statistical data in the form of 2D maps that identified pre- and post-1933 buildings. AutoCAD scripts were developed to trace this information from the 2D map to the corresponding 3D building. The figures below show our Guide Geo map: blue buildings were identified as post-1933, tan as pre-1933, and red as signature buildings that needed to be hand modeled.
The post-1933 buildings were then removed and replaced, seen below in orange. These replacement buildings were modeled by hand using period photographs from multiple camera angles.
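The classification step described above can be sketched in a few lines. This is a hypothetical illustration, not Weta’s actual code: the building names, the `classify` function, and the color categories simply mirror the Guide Geo color coding from the interview.

```python
# Hypothetical sketch of the Guide Geo classification step: each building
# footprint from the modern map is tagged by construction year, so post-1933
# structures can be flagged for replacement and landmarks for hand modeling.
# All names and data are illustrative.

SIGNATURE = {"Empire State Building", "Chrysler Building"}  # hand-modeled landmarks

def classify(name, year_built):
    """Return the Guide Geo color category for one building footprint."""
    if name in SIGNATURE:
        return "red"   # signature building: model by hand
    if year_built > 1933:
        return "blue"  # post-1933: remove and replace
    return "tan"       # pre-1933: keep for procedural construction

footprints = [
    ("Empire State Building", 1931),
    ("Generic office tower", 1958),
    ("Brownstone row", 1901),
]
for name, year in footprints:
    print(name, "->", classify(name, year))
```

In production this tagging was driven by AutoCAD scripts tracing 2D statistical maps onto the 3D model; the sketch only shows the decision logic.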
After the Guide Geo map was built, the CityBot software would begin constructing each building. The Bot would take the low-resolution geometry from the Guide Geo, along with its statistical data, and begin building a high-resolution building. To do this, special rule sets were written that aided the software in the construction of each building type. For example, there were rule sets written for office buildings, brownstones, stores, etc. For the actual geometry of the building, the CityBot would use the rules to pick from a library of existing “cells” to construct the building. The cells were basic elements that were categorized by type and architectural style. They could be windows, doors, fire escapes, water towers, etc.
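The rule-set-plus-cell-library idea might look roughly like the following minimal sketch. Everything here is an assumption for illustration: the rule sets, cell names, and floor categories are invented, and the real CityBot ran inside Maya on far richer data.

```python
import random

# Illustrative sketch of the CityBot's cell-assembly idea: a rule set per
# building type says which categories of "cells" may appear on each floor,
# and the bot picks concrete cells from a library keyed by category and
# architectural style. All names are hypothetical.

CELL_LIBRARY = {
    ("window", "brownstone"): ["bay_window", "sash_window"],
    ("door", "brownstone"):   ["stoop_door"],
    ("roof", "brownstone"):   ["water_tower", "flat_cornice"],
}

RULE_SETS = {
    "brownstone": {"ground": ["door"], "upper": ["window"], "top": ["roof"]},
}

def build(building_type, style, floors, seed=0):
    """Assemble one building as an ordered list of cells, floor by floor."""
    rng = random.Random(seed)  # seeded so a building is reproducible
    rules = RULE_SETS[building_type]
    cells = []
    for floor in range(floors):
        level = "ground" if floor == 0 else ("top" if floor == floors - 1 else "upper")
        for category in rules[level]:
            cells.append(rng.choice(CELL_LIBRARY[(category, style)]))
    return cells

print(build("brownstone", "brownstone", 4))
```

Keying the library on (category, style) is what lets the software stay consistent with the neighborhood it is building in, as described below.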
© 2005 Universal Studios. All rights reserved.
The software was programmed to maintain the architectural style of the area of the city it was building in, picking cells that matched that style.
Since this software was written within Maya, we could quickly watch each building being built in a matter of seconds. We could also modify each building or its rule set, giving us ultimate control over the look of every building. Custom scripts were also developed to construct buildings on our render farm, so we could construct thousands of buildings in parallel. The entire city could be rebuilt in a couple of hours on our render wall.
CGA: Tell us about the process of developing the street level shots. How were they staged, how much planning went into the shots and what was involved?
CW: For the street-level shots, a set of several city blocks was constructed here in New Zealand. It was only constructed up to the first story; all the extensions would be done digitally. There were many challenges with this once we received the footage. We used the blueprints from the original set construction to model the upper floors. Hundreds of unique signs were designed and painted.
Each person was rotoscoped so that we could render building extensions behind them. Textures and rendering were matched with the filmed bottom floor. Buildings across the street were rendered to reflect in the glass storefront windows. We also wanted to see the vastness of the streets as they continued on to the horizon, so procedural Bot buildings extended our streets. Lamps, digital people and traffic, animated signs, room interiors: the list was extensive.
CGA: Can you explain a bit more about the shader that was developed to map the massive amount of buildings in the shots?
CW: Knowing that it would be impossible to hand paint these thousands of buildings, we created a special building shader that made the process easier. The shader called upon a library of textures we created and projected each texture onto the geometry at the appropriate angle. Each piece of geometry was tagged with special attributes to determine which texture to use. If we wanted the bottom floor to use brick instead of concrete, all we had to do was change the number in an attribute. This was particularly helpful with mass production of buildings: the Bot could send out the appropriate texture attributes when the building was built. The building shader could also create fake interiors and had a night mode to turn on lights.
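The attribute-driven texturing described above can be sketched as a simple lookup. This is not the actual shader (which ran in RenderMan); the texture names, attribute values, and `resolve_texture` function are invented to show why changing one number reskins a facade.

```python
# Minimal sketch of attribute-driven texture selection: geometry carries
# small integer attributes, and the shader resolves (surface type, attribute)
# against a shared texture library. Names are hypothetical.

TEXTURE_LIBRARY = {
    ("wall", 0):   "concrete_01.tex",
    ("wall", 1):   "brick_red_03.tex",
    ("window", 0): "glass_day.tex",
    ("window", 1): "glass_lit_interior.tex",  # fake interior / night mode
}

def resolve_texture(surface_type, attr, night=False):
    """Pick a library texture from a geometry attribute."""
    if surface_type == "window" and night:
        attr = 1  # night mode: switch windows to lit interiors
    return TEXTURE_LIBRARY[(surface_type, attr)]

print(resolve_texture("wall", 0))               # concrete facade
print(resolve_texture("wall", 1))               # same geometry, now brick
print(resolve_texture("window", 0, night=True)) # night mode kicks in
```

Because the Bot emitted these attributes at build time, texturing scaled with procedural construction instead of requiring per-building paint work.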
CGA: Why was so much of the set actually real? What factors determine whether the background is a physical set versus digital?
CW: Although there is much we can do digitally, there are many advantages to using physical sets in collaboration with digital ones. One advantage is that it gives the director and actors a real physical environment in which to work. An actor can lean against a lamp pole, run down some stairs, or sit on a curb. One can use the physical environment to influence new ideas, scenes, or camera work.
CGA: Were all of the traffic and extras digital in the ground level shots or were there real actors as well?
CW: Yes, there was real traffic and hundreds of real actors as well.
CGA: How many digital buildings were modeled for the movie and to what level of detail?
CW: Over a hundred thousand were modeled for King Kong. The majority of them were procedurally built. Each was constructed down to a fine level of detail. The smallest element on each building was the size of a doorknob.
CGA: Weathering all of the buildings is obviously a time consuming task, how did you approach this for the movie?
CW: The process of weathering the buildings was actually surprisingly quick. We developed a new system to do this. Instead of working on complex high-resolution geometry, the system worked using depth and normal map renders of a scene. There were many advantages to this. We could render a street of 3D buildings from the camera’s rough position, then bring these maps into our system in Maya. We could then point rain emitters at the buildings and let the particle drops create streaks as they flowed over the surfaces. The end result is a texture map that can be projected back onto the building. The nice thing about using this image-based technique is that it can work at any scale; you could weather one building in the same amount of time it takes to do several blocks. We also used this technique to add snow to the rooftops for the end aerial sequence. The average time to weather an entire city block was a couple of hours.
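A toy version of the image-based idea: rather than simulating rain on heavy 3D geometry, work on a small 2D grid standing in for the depth/normal render, and let drops leave fading streaks as they fall. The grid size, flow rule, and decay value are all invented for the sketch; the real system used Maya particles driven by the rendered maps.

```python
# Toy illustration of image-based weathering: drops start at the top of a
# 2D "render" grid and darken each pixel they pass, fading with distance.
# The accumulated grid is the grime texture to project back onto buildings.

WIDTH, HEIGHT = 8, 6
streaks = [[0.0] * WIDTH for _ in range(HEIGHT)]

def rain(column, strength=1.0, decay=0.8):
    """One drop flows straight down a column, weakening as it goes."""
    for row in range(HEIGHT):
        streaks[row][column] += strength
        strength *= decay  # streak fades with distance traveled

for col in (1, 4, 6):  # a few emitter positions
    rain(col)

for row in streaks:
    print(" ".join(f"{v:.2f}" for v in row))
```

Because the work happens in image space, cost depends on the render resolution, not on how many buildings are in frame, which is why a block weathers about as fast as a single building.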
CGA: In the final scenes, when Kong was atop the Empire State Building, you could see a great deal of the New York skyline and horizon. Tell us how the models were prepared for this shot and how many were models versus mattes. There had to be an enormous amount of data to render; how was this handled?
CW: This was the sequence for which the majority of models were built. All of Manhattan was modeled and rendered in 3D. Parts of Brooklyn, Queens, and New Jersey were also modeled and combined with a matte painting: in total, over 90,000 buildings. Because that amount of data is too much for any renderer, many of the buildings were baked down into color, displacement, and material maps that were applied back to the Guide Geo I mentioned earlier. The maps were created from the original hi-res geometry of each building and applied to its corresponding low-res Guide Geo model. Non-relief detail, such as water towers and chimneys, couldn’t be rendered using displacement maps; in those cases we continued to use the original model. Over 600,000 unique building textures were created.
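The baking decision above amounts to a simple rule per building, sketched below under stated assumptions: the field names and the `render_representation` function are hypothetical, but the logic matches the interview (displacement maps can fake surface relief, not silhouette-breaking elements like water towers).

```python
# Hedged sketch of the render-representation choice: buildings with only
# surface relief are baked into color/displacement/material maps applied to
# low-res Guide Geo; buildings with silhouette-breaking detail keep their
# original hi-res model. Data fields are illustrative.

def render_representation(building):
    if building["has_non_relief_detail"]:
        return "hi_res_model"           # displacement can't fake silhouettes
    return "guide_geo + baked maps"     # color, displacement, material maps

buildings = [
    {"name": "office_block_12", "has_non_relief_detail": False},
    {"name": "factory_07",      "has_non_relief_detail": True},  # chimneys
]
for b in buildings:
    print(b["name"], "->", render_representation(b))
```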
CGA: How did you obtain reference photography for all of the buildings?
CW: We established a research department that collected references from various sources for over two years. There is actually a fair amount of photography taken from that time, from sources such as Fairchild, who did aerial shots of New York.
CGA: What was the most challenging part of this project?
CW: It is difficult for me to say what the most challenging part of the project was. There were definite challenges in rendering and working with so much data while at the same time trying to be true to the period.
CGA: What was the most rewarding part of this project?
CW: Seeing the aerial shots of New York at the end and realizing we built the entire island.
CGA: How long did it take to complete the architectural shots for the movie?
CW: From R&D to final completion was about two years of work.
CGA: Many of our readers are professional architectural visualization artists and some are occasionally involved with very large scale projects. What advice from an entertainment background would you give to them in terms of the production pipeline and planning and approaching a project of this nature?
CW: I think one of the keys to working on large-scale projects such as this is to devote a lot of time to setting up libraries and a dependent system. What I mean by a dependent system is one where very little final data is stored within a model. If one were to look at the file describing one of our buildings, they would see a series of links to our libraries instead of specific model and texture data. There would be links to the models for doors, windows, and other components, and links to texture maps. The only thing within the building file itself would be how all these outside sources come together. The advantage of such a system is that we could affect thousands of buildings quickly just by updating a window model or repainting a particular texture in the library. A flip of a switch could turn a Brownstone into a Greystone.
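The dependent system described above can be sketched as a building file that stores only links. The file format, library entries, and `resolve` function here are hypothetical stand-ins for whatever referencing scheme a pipeline actually uses; the point is that editing one library entry (or flipping one link) updates every building that references it.

```python
# Sketch of a link-based "dependent system": the building file holds only
# references into shared libraries plus assembly data, never final geometry
# or texture pixels. All names are illustrative.

MODEL_LIBRARY = {"window_sash_v1": "4-pane sash", "door_stoop_v1": "stoop door"}
TEXTURE_LIBRARY = {"brownstone_wall": "brown sandstone", "greystone_wall": "grey granite"}

building_file = {
    "cells": ["door_stoop_v1", "window_sash_v1"],  # links, not geometry
    "wall_texture": "brownstone_wall",             # a link, not pixels
}

def resolve(building):
    """Expand a building file by following its library links."""
    return {
        "cells": [MODEL_LIBRARY[c] for c in building["cells"]],
        "wall": TEXTURE_LIBRARY[building["wall_texture"]],
    }

print(resolve(building_file))
# Flipping one link reskins the building for every shot that uses it:
building_file["wall_texture"] = "greystone_wall"
print(resolve(building_file))
```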