Previously, in PathTracer [Part 1], we implemented a ray-tracing rendering algorithm that can realistically and efficiently render images with physically based lighting.
Here, in PathTracer [Part 2], we add features to our program that allow us to render objects with different BSDFs (bidirectional scattering distribution functions), such as mirror, glass, and metal (isotropic rough conductor) materials, using mathematical models for reflection, refraction, and the BRDF evaluation function (involving the Fresnel term, the shadowing-masking term, and the normal distribution function (NDF)). In my opinion, the most interesting feature of isotropic rough conductors is that the microfacet model lets us create all sorts of metals through a dynamic approach. It is fascinating that the only variables needed for such drastic changes in material are α (the roughness factor), η (the refractive index, i.e., the old index of refraction divided by the new one), and k (the extinction coefficient), each specified per fixed R, G, B channel. This says much about the shared properties of isotropic rough conductors, and raises the question of whether other materials (like gases and liquids) can be dynamically programmed in a similar manner. Overall, it was very enriching to gain a strong sense of, and control over, what can be rendered from a .dae file.
Below are rendered images of "CBspheres.dae" with a samples-per-pixel rate of 256 and samples-per-light of 4. These images show the visual effect of increasing max_ray_depth (the maximum number of bounces for sampled rays), with max_ray_depth set to 0, 1, 2, 3, 4, 5, and 100 respectively. On the left of the Cornell box is a mirror sphere and on the right is a glass sphere.
[0 bounce]:
Naturally, rays that do not bounce in our ray-tracing algorithm will only light up the areas that emit light (the light source). Thus, we see only the light source in our scene.
[1 bounce]:
With one bounce, when a ray first intersects the scene, we query 4 "shadow ray" samples (depending on the primitive's sampling method) and accumulate the spectrum we need for that ray.
If a shadow ray points towards the light source but intersects a primitive before reaching it, or simply misses the light source, it contributes a black/dark spectrum to average into the pixel. Otherwise, it contributes color/light to the pixel.
In the rendered image, all locations with an unobstructed shadow-ray path to the light source have color and brightness, while blocked regions, such as the areas under the spheres and the ceiling (whose shadow rays run parallel to the area light and can never hit it), are very dark.
Uniquely for our spheres, the left one (mirror, reflection only) reflects the initial ray about the normal at the sphere's surface point to create the outgoing ray. In the local coordinate frame where the normal is (0, 0, 1), this is simply ωi = (−ωo.x, −ωo.y, ωo.z) via basic 3D reflection.
In this case, the only locations where the reflected rays hit the light source directly form the white square (a reflection of the light source) on the sphere. The same goes for the right sphere (glass, with both reflection and refraction); the only difference is that some initial rays were refracted (entered the sphere) instead of reflected, chosen via Schlick's reflection coefficient R (the probability of reflection). Thus the white on the right sphere is noisy.
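As a rough sketch (in Python rather than the project's actual code; the function names are mine), the mirror reflection and Schlick's coefficient described above could look like this, assuming a local shading frame where the normal is (0, 0, 1):

```python
def reflect(wo):
    """Mirror reflection in the local shading frame where the
    surface normal is (0, 0, 1): negate the tangential components."""
    return (-wo[0], -wo[1], wo[2])

def schlick_reflectance(cos_theta, ior):
    """Schlick's approximation of the Fresnel reflection coefficient
    R, used as the probability of reflecting (vs. refracting)."""
    r0 = ((1.0 - ior) / (1.0 + ior)) ** 2
    return r0 + (1.0 - r0) * (1.0 - abs(cos_theta)) ** 5
```

At normal incidence Schlick's R reduces to r0, while at grazing angles it approaches 1, which is why glass reflects strongly near its silhouette.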
[2 bounce]:
Now things get more interesting, as more bounces naturally bring more light. The ceiling is now lit, since two bounces allow a hit on the ceiling, an arbitrary hit in the scene, then a hit on the light source. The same holds for the shadows under the spheres, which are now softer, with shades of red and blue.
For our spheres, both display new information: we can now see the scene in them (for the right sphere, only faintly) instead of just the light source. This is because of reflection: a hit on the sphere, then an arbitrary hit in the scene via reflective sampling, then a hit on the light source draws out the image on the spheres. For the right sphere, this reflection is once again noisy due to its refractive behavior, which cannot yet contribute at this depth. Notice the image in the mirror sphere is exactly like the full image at 1 bounce, for the same reasons stated previously!
[IMPORTANT NOTE:] Interestingly, the right (glass) sphere now displays refraction via light within the ball's shadow, which is unexpected! As claimed by Cheng Cao (CS184 staff), "This does happen with internal solution, it's possible that these are coming from rays with only one hit with the sphere either caused by precision issues or near tangent rays." What we should have expected is for the right ball to be darker, without light in its shadow. From here, we simply ignore these refractive artifacts in the next images.
[3 bounce]:
Finally, with at least 3 bounces, refraction can now shine (literally). The image is generally brighter (walls and shadows), as each extra bounce likely contributes more bright spectrums to average into each pixel.
For the left sphere, its reflective image now resembles the 2-bounce image, under the same bouncing logic stated previously. For the right sphere (glass: both reflective and refractive), while there is a small trace of the reflection image (first seen at 2 bounces), the sphere now also displays its refractive image. One possible ray path that creates this refracted image: hit/enter the sphere, hit/exit the sphere, then hit a wall (the one seen in the refraction image in the sphere), then finally arrive at the light source.
Also, the right sphere finally gains its refracted light in the sphere's shadow: a ray hits the shadow region under the sphere, enters the sphere, exits the sphere, then arrives at the light source. We can also see some specks of white on the blue wall next to the right sphere under similar reasoning, except most such refracted rays likely failed to reach the light at the end of their journey. The right sphere also has light on its underside and edge, where rays hitting the sphere end up reaching the light source via refraction for many of the pixel's samples, creating brightness on the ball's edges themselves.
Via refraction, new rays are created at refracting intersections and can be derived with Snell's Law (sin θ′ = η sin θ). By using Snell's Law and spherical coordinates, we can form the equations for our new ray:
ωi.x = −η·ωo.x
ωi.y = −η·ωo.y
ωi.z = ∓√(1 − η²(1 − ωo.z²))  (sign opposite to ωo.z)
where η = (index of refraction) if exiting the material,
or η = 1 / (index of refraction) if entering the material.
Here we assume the normal of the surface points out of the sphere (into the air). Note that for refractive/glass materials, if we have total internal reflection (the value inside the square root for ωi.z is negative, i.e., mathematically invalid), no refraction occurs and the ray is reflected instead. Ultimately, we used spherical coordinates and Snell's equations to conveniently solve for ωi (the direction of our new refracted ray) in the images below.
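The refraction logic above can be sketched as follows (a Python sketch with hypothetical names, assuming the local frame with the normal along +z, as in the equations):

```python
import math

def refract(wo, ior):
    """Refract wo using Snell's law in the local frame (normal = +z).
    Returns the refracted direction, or None on total internal
    reflection. ior is the material's index of refraction."""
    entering = wo[2] > 0                      # ray arrives from outside
    eta = (1.0 / ior) if entering else ior    # old IOR / new IOR
    # Value under the square root for wi.z; negative => no refraction.
    discriminant = 1.0 - eta * eta * (1.0 - wo[2] * wo[2])
    if discriminant < 0.0:
        return None                           # total internal reflection
    wi_z = -math.copysign(math.sqrt(discriminant), wo[2])
    return (-eta * wo[0], -eta * wo[1], wi_z)
```

A glass BSDF would call this first and fall back to mirror reflection whenever it returns None, otherwise choosing between reflection and refraction with probability given by Schlick's R.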
[4 bounce]:
From here, it seems our image has mostly converged, and few interesting changes happen. One new detail at this stage is the small concentrated patch of light on the blue wall near the right sphere. Most likely, more bounces allowed more sampled rays to reach the light this time (unlike at 3 bounces). One possible path that may have failed before, but now successfully hits the light, is a hit on the white patch area next to the right sphere, through the sphere, onto the blue wall, then onto the light source.
[5, ... , 100 bounce]:
From here, nothing but a general increase in color saturation and brightness seems to be of interest.
Although it is hard to see, the image generally gets brighter as the number of bounces increases. It is also good to note that, due to the Russian-roulette kill probability, most rays are terminated well before reaching the maximum bounce count (it is extremely unlikely that any ray lasted 100 bounces in the 100-bounce render!). Overall, the rendered image appears to have mostly converged to its ideal visual state by 4 bounces. Below is a GIF of the very small changes seen at larger bounce counts. Notice the brightness difference!
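To see why a 100-bounce path is so unlikely, consider the chance of surviving repeated Russian-roulette tests (a sketch; the 0.7 continuation probability is an illustrative assumption, not necessarily the value our renderer uses):

```python
def survival_probability(bounces, continuation_prob=0.7):
    """Probability that a path passes `bounces` consecutive
    Russian-roulette tests, each with the given continuation
    probability. Surviving paths are reweighted by
    1 / continuation_prob per test to keep the estimator unbiased."""
    return continuation_prob ** bounces
```

With these numbers, survival_probability(100) is on the order of 1e-16, so essentially no sampled ray in the 100-bounce render ever reaches its depth limit.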
In general, this section involves a lot of mathematics but is quite straightforward. There were times when I wrote the equations wrong in code, which can be hard to find and debug! I suggest breaking these long equations into blocks/variables to avoid needless errors.
Below is "CBdragon_microfacet_au.dae" rendered with varying α (= 0.005, 0.05, 0.25, and 0.5), where α is the roughness factor of the material. The images below are rendered with a samples-per-pixel rate of 1024, samples-per-light of 1, and max-ray-depth of 5 (5 bounces max).
With a small α we tend to see a glossier surface, while a bigger α gives a more diffuse, rough-looking surface. At α = 0.005, the red, blue, and black of the dragon appear very solid and saturated. This is because rays intersecting a part of the dragon tend to create new rays moving in similar directions, as seen below:
Interestingly, while glossy items can appear glaringly bright, this dragon's orientation (and its rough micro-bumps) means the reflected rays tend to hit the walls or leave the scene more often than they hit the light. Thus, this dragon appears a bit darker than at higher α values.
As α increases, the intersecting rays scatter in more directions. With this, rays now have a less biased way of reaching the light source, resulting in a much brighter dragon! Furthermore, the red, blue, and black on the dragon naturally average out better, creating softer colored shadows on the dragon. Below is the behavior for diffuse materials:
On a side note, when α is small, bright specks appear on some pixels, implying the ray-tracing algorithm is gathering more light than needed for our average color. Since glossy implies a "biased" direction for each ray, it is interesting to see how roughness exposes this intrinsic problem of path tracing.
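The effect of α discussed above can be made concrete with the Beckmann normal distribution function itself (a Python sketch; the function name is mine):

```python
import math

def beckmann_ndf(cos_theta_h, alpha):
    """Beckmann NDF D(h): the density of microfacet normals at angle
    theta_h from the macro surface normal,
        D = exp(-tan^2(theta_h) / alpha^2) / (pi * alpha^2 * cos^4(theta_h)).
    Small alpha concentrates D near theta_h = 0 (glossy highlights);
    large alpha spreads it out (rough, diffuse-looking surface)."""
    cos2 = cos_theta_h * cos_theta_h
    tan2 = (1.0 - cos2) / cos2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * cos2 * cos2)
```

At θh = 0 the density is 1/(π α²), so α = 0.005 yields a sharp peak thousands of times taller than α = 0.5: exactly the solid, saturated look of the glossiest dragon.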
Below is the same rendered dragon as above, but with the material changed from gold (Au) to mercury (Hg). This is done by changing η (the refractive index) and k (the extinction coefficient) in the fixed R, G, B channels at wavelengths 614 nm (red), 549 nm (green), and 466 nm (blue). Below are the values for mercury:
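One way η and k enter the shading is through the air-to-conductor Fresnel term, evaluated once per color channel. Below is a sketch of a common approximation of that term (Python; the function name is mine, and the exact form in the renderer may differ):

```python
def fresnel_conductor(cos_theta_i, eta, k):
    """Approximate Fresnel reflectance for an air-conductor interface,
    for one color channel with refractive index eta and extinction
    coefficient k. Averages the s- and p-polarized reflectances."""
    c2 = cos_theta_i * cos_theta_i
    ek2 = eta * eta + k * k
    rs = (ek2 - 2.0 * eta * cos_theta_i + c2) / (ek2 + 2.0 * eta * cos_theta_i + c2)
    rp = (ek2 * c2 - 2.0 * eta * cos_theta_i + 1.0) / (ek2 * c2 + 2.0 * eta * cos_theta_i + 1.0)
    return 0.5 * (rs + rp)
```

Because each channel has its own (η, k) pair, swapping gold for mercury is just swapping six numbers: the microfacet machinery stays untouched.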
Below is "CBbunny_microfacet_cu.dae" rendered using cosine hemisphere sampling (left) and importance sampling (right) with microfacet BRDF for the Beckmann distribution. The images below are rendered with a samples-per-pixel rate of 64, samples-per-light of 1, and max-ray-depth of 5.
On the left, cosine hemisphere sampling may have been good for a diffuse BSDF, but not for the microfacet BRDF. Our distribution is simply not a diffuse one! Although the two renders look similar, there is a lot of black noise on the bunny. This is the same issue cosine hemisphere sampling presents in general: sampling uniformly over the hemisphere tends to sample uninteresting directions, resulting in black pixels. Its only remedy is to take more samples, which would be too costly. With our new importance sampling, however, we converge faster, giving the better result seen on the right.
As one would expect, with our new importance sampling we render nicely without many black patches on the bunny. Here we importance-sample with respect to the Beckmann NDF: we invert the pdfs pθ and pϕ, ultimately giving us sampled angles θh (measured from the z-axis) and ϕh (measured in the x-y plane). With these, we can easily construct the sampled microfacet normal h and reflect our incoming ray about it. By deriving the pdfs, we can also find the pdf of obtaining the new ray.
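The inversion-sampling procedure described above can be sketched like so (Python; the helper name is mine), using the standard Beckmann inversions θh = arctan√(−α² ln(1 − r₁)) and ϕh = 2π r₂:

```python
import math, random

def sample_beckmann_h(alpha):
    """Sample a microfacet half-vector h from the Beckmann NDF by
    inverting the marginal pdfs p_theta and p_phi: phi_h is uniform
    on [0, 2*pi), while theta_h comes from inverting the cdf of
    p_theta. Returns h in the local frame (macro normal = +z)."""
    r1, r2 = random.random(), random.random()
    theta_h = math.atan(math.sqrt(-alpha * alpha * math.log(1.0 - r1)))
    phi_h = 2.0 * math.pi * r2
    sin_t, cos_t = math.sin(theta_h), math.cos(theta_h)
    return (sin_t * math.cos(phi_h), sin_t * math.sin(phi_h), cos_t)
```

The outgoing direction wi is then the incoming ray reflected about h, and its pdf follows from p(θh, ϕh) via the change of variables from h to wi.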
Once again, this section requires a lot of mathematics and a careful translation into code! While straightforward, one problem I encountered was swapping ϕh and θh, giving me the wrong sampled microfacet normal h! Looking carefully at the equations made me realize that ϕh ranges over (0, 2π) and θh over (0, π), which was the opposite of the convention I had assumed for those symbols. Be careful!