To generate a ray, I take the image coordinates and transform them into camera space using the camera's field of view; I then use the resulting camera-space coordinates to construct the ray.
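The mapping above can be sketched as follows. This is a minimal, hypothetical version (the function name and degree-based parameters are my own choices, not the project's API): normalized image coordinates (x, y) in [0,1]² land on the camera-space sensor plane z = -1, scaled by the tangents of the half field-of-view angles. A full implementation would then rotate this direction into world space with the camera-to-world matrix and use the camera position as the ray origin.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Map normalized image coordinates to a normalized camera-space ray
// direction on the z = -1 sensor plane. (0.5, 0.5) is the sensor center.
Vec3 camera_ray_direction(double x, double y, double hFov_deg, double vFov_deg) {
    const double kPi = std::acos(-1.0);
    double hFov = hFov_deg * kPi / 180.0;
    double vFov = vFov_deg * kPi / 180.0;
    // Sensor half-extents are tan(hFov/2) and tan(vFov/2).
    double cx = 2.0 * std::tan(hFov / 2.0) * (x - 0.5);
    double cy = 2.0 * std::tan(vFov / 2.0) * (y - 0.5);
    double len = std::sqrt(cx * cx + cy * cy + 1.0);
    return {cx / len, cy / len, -1.0 / len};
}
```

The center of the image maps straight down the -z axis, which is an easy sanity check on the transform.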
For the intersection part, I take a given ray and check whether it intersects the primitive using the equation shown in class. If there is an intersection, I store information about it in the given intersection object, and I also return whether an intersection occurred.
To test whether there is an intersection, I used the Möller–Trumbore algorithm given below, where O and D are the ray's origin and direction and P0, P1, P2 are the triangle's corner positions. The algorithm yields the time t at which the ray intersects the triangle, along with the barycentric coordinates.
I then check that the intersection time lies within the ray's valid range [min_t, max_t] and that the barycentric coordinates are valid. If both hold, I set the ray's max_t to the intersection time, update the intersection object, and return true; otherwise I return false, since no valid intersection was found.
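A self-contained sketch of the Möller–Trumbore test (the Vec3 helpers and function name are mine, not the project's types): solve O + tD = (1 - b1 - b2)·P0 + b1·P1 + b2·P2 with Cramer's rule, then accept the hit only when t is non-negative and the barycentric coordinates lie inside the triangle.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Returns true and fills t, b1, b2 when the ray hits inside the triangle.
bool moller_trumbore(Vec3 O, Vec3 D, Vec3 P0, Vec3 P1, Vec3 P2,
                     double& t, double& b1, double& b2) {
    Vec3 e1 = sub(P1, P0), e2 = sub(P2, P0), s = sub(O, P0);
    Vec3 s1 = cross(D, e2), s2 = cross(s, e1);
    double denom = dot(s1, e1);
    if (std::fabs(denom) < 1e-12) return false;  // ray parallel to triangle
    t  = dot(s2, e2) / denom;
    b1 = dot(s1, s)  / denom;
    b2 = dot(s2, D)  / denom;
    return t >= 0.0 && b1 >= 0.0 && b2 >= 0.0 && b1 + b2 <= 1.0;
}
```

In the real tracer the t >= 0 check would instead be the [min_t, max_t] range test described above.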
./pathtracer -r 800 600 -f CBempty.png ../dae/sky/CBempty.dae
./pathtracer -r 800 600 -f CBspheres.png ../../../dae/sky/CBspheres_lambertian.dae
./pathtracer -r 800 600 -f CBgems.png ../../../dae/sky/CBgems.dae
I build the BVH tree by recursively calling the construct_bvh function. The function takes iterators to the beginning and end of a range of primitives and stores them in a BVHNode.
The base case occurs when the number of primitives in the node is at most max_leaf_size: create a leaf node, store the primitives, and return the node.
If there are more than max_leaf_size primitives, split them into two groups and recursively call construct_bvh on each. To find the splitting point, this is what I did:
After partitioning the primitives into left and right groups, I call construct_bvh on each. The tricky part was passing the left and right ranges correctly: the original start and end iterators have to be adjusted to correspond to the newly created l and r.
In the case where all the primitives would fall on one side, I simply split the primitives in half and used the two halves as the left and right children.
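The recursive build can be sketched like this. It is a simplified stand-in, not the project's actual code: the primitive is reduced to a centroid, the split is at the mean centroid along x (a real build would pick the longest bounding-box axis), and the even half split handles the degenerate case where everything lands on one side.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct Prim { double cx, cy, cz; };  // stand-in primitive: just a centroid

struct BVHNode {
    std::vector<Prim>::iterator start, end;
    std::unique_ptr<BVHNode> l, r;
    bool is_leaf() const { return !l && !r; }
};

// Recursively partition [start, end); assumes max_leaf_size >= 1.
std::unique_ptr<BVHNode> construct_bvh(std::vector<Prim>::iterator start,
                                       std::vector<Prim>::iterator end,
                                       size_t max_leaf_size) {
    auto node = std::make_unique<BVHNode>();
    node->start = start; node->end = end;
    size_t n = end - start;
    if (n <= max_leaf_size) return node;  // leaf: keep the primitives here

    double mean = 0.0;                    // split at the mean centroid (x axis)
    for (auto it = start; it != end; ++it) mean += it->cx;
    mean /= n;
    auto mid = std::partition(start, end,
                              [&](const Prim& p) { return p.cx < mean; });
    if (mid == start || mid == end) mid = start + n / 2;  // degenerate: halve

    node->l = construct_bvh(start, mid, max_leaf_size);
    node->r = construct_bvh(mid, end, max_leaf_size);
    return node;
}
```

Reordering the primitives in place with std::partition is what makes the iterator bookkeeping mentioned above work: each child simply receives a sub-range of the same underlying vector.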
Without a BVH, rendering took so long that it never appeared to finish. With a BVH it still took a while, but it finished with correct results: a few minutes for cow.dae and 15+ minutes for maxplanck.dae. The BVH helps because it reduces the number of primitives each ray has to be tested against. Instead of checking every primitive against every ray, we only check the small subset whose bounding volumes the ray actually hits, which speeds up the process dramatically.
For hemisphere sampling, I construct a number of sample rays over the hemisphere at the hit point. For each ray, I check whether it intersects anything in the scene; if it does, I estimate how much light arrives at the hit point from that direction using the Monte Carlo estimator provided.
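The shape of the estimator can be seen in this toy sketch (my own stand-in, not the project's function): average L_in · cos θ / pdf over N sampled directions, with pdf = 1/(2π) for uniform hemisphere sampling. As a sanity check it integrates a constant unit-radiance "sky", whose exact irradiance is π.

```cpp
#include <cmath>
#include <random>

// Uniform-hemisphere Monte Carlo estimate of irradiance from a constant
// unit-radiance environment; the exact answer is pi.
double estimate_irradiance_uniform(int n_samples, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double kPi = std::acos(-1.0);
    const double pdf = 1.0 / (2.0 * kPi);  // uniform over the hemisphere
    double sum = 0.0;
    for (int i = 0; i < n_samples; ++i) {
        double cos_theta = u(rng);  // z of a uniformly sampled hemisphere dir
        double L_in = 1.0;          // constant incoming radiance
        sum += L_in * cos_theta / pdf;
    }
    return sum / n_samples;
}
```

In the real renderer, L_in comes from tracing the sample ray into the scene and the BSDF value multiplies each term.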
As with hemisphere sampling, I loop over the lights in the scene. For point lights, I check whether anything intersects the shadow ray between the light and hit_p; if nothing does, I compute the amount of light reaching hit_p using an estimator similar to the one above.
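A minimal sketch of the point-light case, under assumed simplifications (Lambertian surface with albedo 1; the `occluded` callback stands in for the scene's shadow-ray query, which in the real tracer goes through the BVH):

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Direct lighting at hit_p (normal n) from one point light of the given
// intensity; returns 0 when the light is behind the surface or occluded.
double direct_from_point_light(Vec3 hit_p, Vec3 n, Vec3 light_p,
                               double intensity,
                               const std::function<bool(Vec3, Vec3)>& occluded) {
    const double kPi = std::acos(-1.0);
    Vec3 d = sub(light_p, hit_p);
    double dist2 = dot(d, d);
    double dist = std::sqrt(dist2);
    Vec3 wi = {d.x / dist, d.y / dist, d.z / dist};
    double cos_theta = dot(n, wi);
    if (cos_theta <= 0.0) return 0.0;          // light is behind the surface
    if (occluded(hit_p, light_p)) return 0.0;  // shadow ray blocked
    // Lambertian BRDF (albedo / pi) times radiance falling off as 1 / dist^2.
    return (1.0 / kPi) * (intensity / dist2) * cos_theta;
}
```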
./pathtracer -t 8 -s 16 -l 8 -H -f CBbunny_H_16_8.png -r 480 360 ../../../dae/sky/CBbunny.dae
With estimate_direct_lighting_hemisphere() :
./pathtracer -t 8 -s 1 -l 1 -m 1 -f bunny_1_1.png -r 480 360 ../../../dae/sky/CBbunny.dae
With estimate_direct_lighting_importance() :
-l 1 :
-l 4 :
-l 16 :
-l 64 :
Compared to uniform hemisphere sampling, light sampling is noticeably less noisy and generally produces a clearer rendering. Light sampling can also handle scenes lit only by point lights, which uniform hemisphere sampling cannot, since a uniformly sampled ray has zero probability of hitting a point light.
The indirect lighting function extends the previous one: instead of checking only direct light sources, it recursively calls one_bounce_radiance to gather light that arrives after multiple bounces. Each call handles one bounce; if that bounce does not reach a light source, the next bounce is checked, and so on. To prevent infinite recursion, I add a coin flip (Russian roulette) that has some chance of terminating the recursion at each bounce.
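The coin-flip termination can be sketched with stand-in numbers (not the project's actual values): each bounce adds a fixed "one bounce" contribution scaled by a fixed throughput, and the recursion continues with probability p_continue, dividing by it so the estimator stays unbiased. With these numbers the expected radiance solves E = 1 + 0.5·E, i.e. E = 2, independent of p_continue.

```cpp
#include <random>

// Russian-roulette recursion with stand-in constants; the depth cap only
// guards against pathological recursion and is almost never reached.
double radiance_rr(std::mt19937& rng, int depth = 0) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double one_bounce = 1.0;  // stand-in for one_bounce_radiance
    const double albedo = 0.5;      // stand-in for BSDF * cos / pdf
    const double p_continue = 0.7;  // coin-flip continuation probability
    double L = one_bounce;
    if (depth < 1000 && u(rng) < p_continue)
        L += albedo * radiance_rr(rng, depth + 1) / p_continue;  // divide by p
    return L;
}
```

Dividing by p_continue is what makes the early termination unbiased: paths that survive the coin flip are weighted up to compensate for the ones that were killed.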
./pathtracer -t 8 -s 64 -l 16 -m 5 -r 480 360 -f spheres.png ../dae/sky/CBspheres_lambertian.dae
./pathtracer -t 8 -s 1024 -l 1 -m 1 -r 480 360 -f spheres_part4.png ../../../dae/sky/CBspheres_lambertian.dae
Direct only :
Indirect only :
./pathtracer -t 8 -s 64 -l 16 -m 0 -r 480 360 -f CBbunny_part4_maxdepth0.png ../../../dae/sky/CBbunny.dae
Max_ray_depth 0:
Max_ray_depth 1:
Max_ray_depth 2:
Max_ray_depth 3:
Max_ray_depth 100:
./pathtracer -t 8 -s 1024 -l 4 -m 1 -r 480 360 -f CBbunny_part4_sample1024.png ../../../dae/sky/CBbunny.dae
Samples per pixel 1:
Samples per pixel 2:
Samples per pixel 4:
Samples per pixel 8:
Samples per pixel 16:
Samples per pixel 64:
Samples per pixel 1024:
For each sample traced through a pixel, I generate a ray and accumulate that ray's illuminance into a running mean and standard deviation. The calculations are given below, where s1 and s2 are the sum of the illuminance values and the sum of their squares, respectively:

μ = s1 / n,  σ² = (1 / (n − 1)) · (s2 − s1² / n)
While looping through the samples, every samplesPerBatch samples I check whether the pixel has converged, using the measure I = 1.96 · σ / √n: the pixel is considered converged when I ≤ maxTolerance · μ.
If it has converged, I break out of the loop; if not, I continue updating the mean and standard deviation. Since the loop can stop early whenever the pixel converges, sampleCountBuffer has to be updated with the actual number of samples taken.
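The convergence check boils down to a few lines; this is a sketch using the s1/s2 running sums defined above (function name and default tolerance are my own choices):

```cpp
#include <cmath>

// Adaptive-sampling convergence test: s1 = sum of illuminance, s2 = sum of
// squared illuminance over n samples. Converged when the 95% confidence
// interval I = 1.96 * sigma / sqrt(n) is within maxTolerance of the mean.
bool converged(double s1, double s2, int n, double maxTolerance = 0.05) {
    double mu = s1 / n;
    double var = (s2 - s1 * s1 / n) / (n - 1);  // unbiased sample variance
    double I = 1.96 * std::sqrt(var / n);
    return I <= maxTolerance * mu;
}
```

Keeping only s1 and s2 (rather than all samples) is what lets the check run cheaply every samplesPerBatch iterations.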
./pathtracer -t 8 -s 2048 -a 64 0.05 -l 1 -m 5 -r 480 360 -f bunny_part5_1.png ../../../dae/sky/CBbunny.dae
./pathtracer -t 8 -s 2048 -a 32 0.05 -l 1 -m 5 -r 480 360 -f bunny_part5_2.png ../../../dae/sky/CBbunny.dae
-a 32 :
-a 128 :