The last two major issues I had to solve were outputting multiple images and merging them into a single frame, and the shadow problem, where every slice has to cast shadows onto and receive shadows from every other slice.
1. How to generate multiple outputs in a single frame.
I first thought about a Mel script that automatically generates Maya render layers at render time, but I later found that the Ri-filter approach can be faster and integrates better with the Procedural Primitive DSO. Also, I can do more with C++ than with Mel, such as building special data structures and using external libraries.
Pass and AOV
There are two ways we already use to generate multiple images in a single frame in RenderMan: passes and AOVs (Arbitrary Output Variables).
In RMS, by default, if a scene has any light with a depth map or deep shadow map, RMS creates shadow passes in addition to the Final Defaults pass and renders the scene as many times as there are passes. In this case, the renderer has to read every object in every render job: if a scene has three lights with shadow maps, rendering runs four times and object reading also happens four times to produce four images, three depth maps and one final TIFF.
AOVs are different. To produce the final color of an object, a shader calculates many values, such as ambient, diffuse, specular, or environment reflection, and combines them into the final color. Because all of this calculation and combining happens at the shader level, the renderer doesn't need to re-read the object to compute diffuse and specular separately. If some variables are declared as output shader parameters, the rib can capture those values and store each one as an AOV image. In this case, reading the objects once can generate multiple output images.
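As an illustration (the shader variable names here are hypothetical), extra Display lines in the rib are enough to capture output shader variables as additional images next to the beauty pass:

```
Display "final.tif" "tiff" "rgba"
# the shader must declare e.g. "output varying color aov_diffuse"
Display "+diffuse.tif" "tiff" "color aov_diffuse"
Display "+specular.tif" "tiff" "color aov_specular"
```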
My approach is to divide one heavy data-processing job into multiple lighter ones, so the first way, pass rendering, is the one to follow: with AOVs the renderer would read all the heavy data at one time, which is exactly what I want to avoid. So I looked at how RMS uses passes.
Rib structure in RMS
We can generate a rib from a Maya scene in several ways. The simplest is to run the Mel command "rman genrib" in the Mel command line. Once rib generation is done, several rib files appear in the renderman folder of the Maya project. The rib file the renderer uses directly is 0001.rib for frame 1. Here we can see the ReadArchive procedures that read all the different passes. I decided to work at this level.
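Schematically (these paths are illustrative, not copied from an actual project), the frame rib reads each pass through its own ReadArchive call:

```
# 0001.rib (schematic)
ReadArchive "renderman/myScene/rib/0001/shadow_spotLight1.0001.rib"
ReadArchive "renderman/myScene/rib/0001/shadow_spotLight2.0001.rib"
ReadArchive "renderman/myScene/rib/0001/Final.0001.rib"
```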
An Ri-filter can redefine every Ri procedure, which means, in this case, one ReadArchive call can become many ReadArchive calls. I changed one ReadArchive into as many ReadArchives as there are slices, so the filter can generate as many images as there are slices. I also set the current slice id, so each pass can use different particle data and a different output image path.
Collecting and Sorting particles
The RiPoints procedure is also redefined. I collect all particles in the scene into one list data structure so they can be sorted along the camera direction. I used the sort function from the C++ STL (Standard Template Library) instead of implementing it myself. I think this is one of the benefits of using C++: many advanced data structures and algorithms are available without implementing everything from scratch.
One useful thing I found in this step is the RiResource procedure. I had three particle objects in Maya, each with a different shader. Since I had to merge them into one data set for sorting, I needed to keep the shader information and restore it at render time. RiResource does exactly this kind of job, and it works not only for shader information but for all attributes, options, and even transformation changes. In this case, I stored the RiResourceHandle (the particle name, e.g. "particleShape1") as an attribute of my particle class and restored it when the particles were rendered.
Also, the Rx library was very useful for finding out whether the current pass is a shadow pass or the final pass. It also gave me the "identifier" attribute, which I could use as the RiResourceHandle.
It's hard to explain the whole process, and the code is also messy, but it should be a useful example.
2. The shadow problem.
The next thing that kept bothering me was the shadow problem. I render each slice separately, but the slices have to cast and receive shadows from each other. To do that, I decided to use a single shadow map for every slice. Making a shadow map is also rendering, which means slicing and combining have to be done for the shadow map too. I can simply make multiple shadow maps using the same technique as above; combining them, however, was the problem. I used the tiffcomp utility to combine the final TIFF images, but it doesn't work for shadow maps, especially deep shadow maps. Using deep shadow maps is necessary for point rendering because they give much better shadow quality than regular depth-map shadows.
What is a deep shadow map?
This is the full description of deep shadow maps. What I understood is that a few things differ from a regular z-depth map. When a camera ray reaches a surface, it checks whether the surface is opaque; if not, it stores the distance from the camera and continues to the next surface, repeating until it reaches a completely opaque surface or the summed opacity of all surfaces reaches 1. This means one pixel can store many z values along the camera direction. Also, if a micropolygon doesn't completely cover the pixel, the ray continues to the next surface, which is why deep shadow maps work better for fine detail such as points or hair. The same mechanism extends to volume data, so it can also be a good solution for self-shadowing in a volume shader.
Fortunately, Pixar provides the Deep Texture API, and even example code, the dsMearge program, which combines multiple deep shadow maps into one deep shadow map. However, the code had some problems and was hard to fix.
I wrote my own code, ho_dsOver, that simply composites one deep shadow map over the other according to the order of the arguments. It is definitely not a general deep-shadow merging solution, but for this project I know I only need to combine two deep shadow maps, and one of them is always entirely in front of the other, so it is good enough.
One thing I haven't tried in ho_dsOver yet is filtering. For now, the resulting deep shadow file is almost as large as the two input files combined. The Deep Texture API also provides a filtering method, so I think I can reduce the file size with it.
I am using an adaptive multi-point radius to reduce group-like movement, but it still has a problem. It's fine if I expand one point into 20-30 points, but if I expand it more than 100 times, I start to see the spherical shape. I haven't found a better distribution method for that, but I did find how to get a better final result. The Ri filter works on external rib archives, so instead of simulating the particles in Maya once, I can run the simulation multiple times with different seeds and export each run as an archive; my Ri filter then collects them into one data set and slices them all together, so multiple simulation runs become one image with accurate shadows.
With this solution I can render an almost unlimited number of particles; it is independent of the amount of memory. To render more particles, I only need more time, and point rendering is usually not time-consuming but memory-consuming. One thing that could limit the maximum particle count is sorting the collected particles, but sorting 300,000 particles took less than one second, so it won't be a big problem even though sorting time grows faster than linearly.
I think this project was a very good chance to learn how to solve various problems in different ways. I confronted many problems and had to find solutions in many places and from many people. I also discovered many possibilities of RenderMan plug-ins, especially the Ri-filter. It was also a good chance to learn C++.
And I hope I can make some shots using this solution ASAP.