I chose to do parts 2 and 4 for this project because they seemed the most interesting to me. In part 2, I implemented metallic materials using the microfacet BRDF, with the Beckmann distribution to control how rough the surface looks, the Fresnel term for the color of the metal, and importance sampling to reduce surface noise, so that objects in scenes can now appear to have a metallic surface. It was very interesting to see how real-life measurements of different metals can be used to render surfaces that look like the actual metal. For part 4, I changed the ray generation code to generate rays from a lens, so that objects away from the focal plane are blurred, producing a depth-of-field effect. By adjusting the aperture size, it can be observed that larger apertures cause more blurring. The bugs I encountered in this project were due to an incorrect order of operations in the math equations I implemented in code.
This part mostly consisted of implementing the formulas for the microfacet BRDF, the Beckmann distribution, the Fresnel term, and importance sampling of the Beckmann distribution. The Beckmann distribution serves as the normal distribution function (NDF), describing how the microfacet normals of the surface vary with the surface roughness given by alpha. The Fresnel term gives the surface its color, based on real-life measurements of metals at three specific wavelengths. Importance sampling lets the microfacet surface be sampled more efficiently with fewer samples, resulting in less noise and a more visually appealing surface. The bugs I ran into during this part were caused by an incorrect order of math operations due to missing parentheses, which made my surfaces render darker than they should have.
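Concretely, the evaluation path looks roughly like the minimal, self-contained C++ sketch below; this is not the project's actual code. The local shading frame is assumed to have the macro-surface normal at n = (0, 0, 1), Vector3D is a stand-in for the path tracer's own vector type, and the shadowing-masking term G uses the standard rational approximation from Walter et al. rather than an exact form.

```cpp
#include <cmath>

// Stand-in for the path tracer's vector type (an assumption, not the real one).
struct Vector3D {
  double x, y, z;
  Vector3D operator+(const Vector3D& o) const { return {x + o.x, y + o.y, z + o.z}; }
  Vector3D operator-(const Vector3D& o) const { return {x - o.x, y - o.y, z - o.z}; }
  Vector3D operator*(double s) const { return {x * s, y * s, z * s}; }
  Vector3D unit() const {
    double l = std::sqrt(x * x + y * y + z * z);
    return {x / l, y / l, z / l};
  }
};

static double cos_theta(const Vector3D& w) { return w.z; }  // since n = (0, 0, 1)

// Beckmann NDF: the density of microfacet normals around the half vector h.
// The parentheses in the denominator matter: a missing pair in an expression
// like this was exactly the kind of bug that made the renders come out dark.
double beckmann_D(const Vector3D& h, double alpha) {
  double c = cos_theta(h);
  double tan2 = (1.0 - c * c) / (c * c);  // tan^2(theta_h)
  return std::exp(-tan2 / (alpha * alpha)) / (M_PI * alpha * alpha * c * c * c * c);
}

// Air-to-conductor Fresnel reflectance for one wavelength, from the measured
// index of refraction eta and extinction coefficient k.
double fresnel_F(double cos_i, double eta, double k) {
  double c2 = cos_i * cos_i;
  double Rs = (eta * eta + k * k - 2.0 * eta * cos_i + c2) /
              (eta * eta + k * k + 2.0 * eta * cos_i + c2);
  double Rp = ((eta * eta + k * k) * c2 - 2.0 * eta * cos_i + 1.0) /
              ((eta * eta + k * k) * c2 + 2.0 * eta * cos_i + 1.0);
  return 0.5 * (Rs + Rp);
}

// Smith shadowing-masking via the rational approximation for Beckmann
// (Walter et al. 2007): G = 1 / (1 + Lambda(wo) + Lambda(wi)).
double beckmann_Lambda(const Vector3D& w, double alpha) {
  double c = cos_theta(w);
  double a = c / (alpha * std::sqrt(1.0 - c * c));  // a = 1 / (alpha * tan(theta))
  if (a >= 1.6) return 0.0;
  return (1.0 - 1.259 * a + 0.396 * a * a) / (3.535 * a + 2.181 * a * a);
}

// Microfacet BRDF for one wavelength:
// f = F(wi) * G(wo, wi) * D(h) / (4 * (n . wo) * (n . wi)).
double microfacet_f(const Vector3D& wo, const Vector3D& wi,
                    double alpha, double eta, double k) {
  if (cos_theta(wo) <= 0.0 || cos_theta(wi) <= 0.0) return 0.0;
  Vector3D h = (wo + wi).unit();  // half vector between view and light dirs
  double F = fresnel_F(cos_theta(wi), eta, k);
  double G = 1.0 / (1.0 + beckmann_Lambda(wo, alpha) + beckmann_Lambda(wi, alpha));
  return F * G * beckmann_D(h, alpha) / (4.0 * cos_theta(wo) * cos_theta(wi));
}
```

For the importance-sampling half, the usual approach is to draw the half vector h from a pdf proportional to D(h) cos(theta_h), with phi_h uniform, obtain wi by reflecting wo about h, and convert the returned pdf from the h measure to the wi measure; evaluating the BRDF itself is unchanged.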
[Images: dragon rendered at several alpha values, including 0.005, 0.05, and 0.5]
It can be observed in the images above that larger alphas give the metal a more diffuse look, while smaller alphas give a more shiny, specular look. Note how rough the surface of the dragon at alpha = 0.5 is compared to alpha = 0.05. At very small alphas like 0.005, the surface of the dragon becomes almost mirror-like.
[Images: bunny rendered with cosine hemisphere sampling (left) and importance sampling (right)]
A large difference can be seen between the two images above. With cosine hemisphere sampling, the bunny's surface appears noisy, and its color is lighter than the copper color of the importance-sampled bunny. There is also a dark edge around the bunny, and one of its ears is dark compared to the right image.
To implement the depth-of-field effect, I first used the ray generation code from Project 3-1 to generate the camera ray starting at the origin and passing through the center of the lens. Then, I calculated the time of intersection of this camera ray with the plane of focus at z = -focalDistance using the ray-plane intersection equation. Using the time of intersection, I calculated the focus point and created a new lens ray starting at a uniformly random point on the lens and pointing toward the focus point. The lens ray's origin and direction are converted to world space using the c2w matrix and the world-space position of the camera. Finally, the lens ray's direction is normalized, and the ray's min_t and max_t are set to nClip and fClip, respectively, before the ray is returned from the function.
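The whole procedure is short enough to sketch; below is a minimal C++ version, reusing the Vector3D stand-in from the microfacet sketch. The names hFov, vFov, lensRadius, focalDistance, nClip, fClip, c2w, and pos mirror the usual Project 3 starter-code conventions but are assumptions here, not the exact code.

```cpp
#include <cmath>

// Hypothetical stand-ins for the starter code's Ray type and the
// camera-to-world rotation (c2w stored as three column vectors).
struct Ray {
  Vector3D o, d;
  double min_t, max_t;
};

static Vector3D to_world(const Vector3D c2w[3], const Vector3D& v) {
  return c2w[0] * v.x + c2w[1] * v.y + c2w[2] * v.z;
}

// x, y are normalized sensor coordinates in [0, 1]^2; rndR and rndTheta are
// uniform random samples in [0, 1) and [0, 2*pi) used to pick a lens point.
Ray generate_thin_lens_ray(double x, double y, double rndR, double rndTheta,
                           double hFov, double vFov,  // fields of view, radians
                           double lensRadius, double focalDistance,
                           double nClip, double fClip,
                           const Vector3D c2w[3], const Vector3D& pos) {
  // Chief ray through the lens center, exactly as in the 3-1 pinhole code.
  Vector3D dir = Vector3D{std::tan(0.5 * hFov) * (2.0 * x - 1.0),
                          std::tan(0.5 * vFov) * (2.0 * y - 1.0), -1.0}.unit();

  // Uniformly sampled point on the lens disk in the z = 0 plane.
  double r = lensRadius * std::sqrt(rndR);
  Vector3D pLens{r * std::cos(rndTheta), r * std::sin(rndTheta), 0.0};

  // Ray-plane intersection with the plane of focus z = -focalDistance; the
  // chief ray starts at the origin, so t = -focalDistance / dir.z.
  double t = -focalDistance / dir.z;
  Vector3D pFocus = dir * t;

  // Lens ray from the lens sample toward the focus point, converted to world
  // space with c2w and the camera position, with the clipping range applied.
  Ray ray;
  ray.o = pos + to_world(c2w, pLens);
  ray.d = to_world(c2w, (pFocus - pLens).unit()).unit();
  ray.min_t = nClip;
  ray.max_t = fClip;
  return ray;
}
```

Every sample through the same pixel shares the same focus point but leaves from a different lens point, which is what blurs everything off the plane of focus while leaving the focused geometry sharp.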
The pinhole camera model is simply a camera with a very small aperture and no lens, letting a small amount of light in from the scene being imaged. The thin-lens camera model is a camera with a typically larger aperture and a thin lens inside the aperture. The pinhole camera shows little to no blurring because its small aperture produces a very small circle of confusion. However, the pinhole camera has difficulty exposing its sensor to enough light, again because of the small aperture, leading to darker images. Increasing the aperture size of a pinhole camera causes each pixel of its sensor to record similar values, leading to a very blurred picture. A thin-lens camera, on the other hand, can expose its sensor to more light thanks to its larger aperture and a lens that focuses light onto the sensor, but at the cost of increased blurring of objects not in the focal plane. As the aperture size increases for the thin lens, the blurring becomes more apparent due to a larger circle of confusion.
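For reference, this behavior follows from the standard thin-lens relations; these are textbook optics, not code from the project. With focal length f, an object at distance z_o forms a sharp image at distance z_i, and a point whose image would instead form at z_i' leaves a blur circle on the sensor whose diameter C scales linearly with the aperture diameter A:

```latex
\frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{f},
\qquad
C = A \,\frac{\lvert z_i - z_i' \rvert}{z_i'}
```

This linear dependence of the circle of confusion on A is what the aperture-size comparison further below shows.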
[Images: focus stack renders at varying focal distances, aperture size 0.3]
The focus-stack pictures were taken with an aperture size of 0.3 to make the changing depth of field more obvious. The program's autofocus feature was used to change the focal distance.
[Images: renders at aperture sizes 0.05, 0.15, 0.25, and 0.35, focused on the dragon's head]
In the four pictures above, the aperture size increases from 0.05 to 0.35 in steps of 0.1 while the camera stays focused on the head of the dragon. It can be observed that as the aperture size increases, the body of the dragon and the background become progressively blurrier.