In this project, we simulate and sample light rays and, by accounting for light bounces at surfaces, render an image that simulates the light a camera would capture in a real scene. The main steps are: transforming rays from image space to world space, checking ray intersections with shapes, performing Uniform Hemisphere Sampling or Importance Sampling Light to compute direct illumination, and recursively tracing rays and sampling for global illumination. We also improve the efficiency of the algorithms, and thus the output quality achievable with a fixed amount of computation and rendering time, by implementing a Bounding Volume Hierarchy (BVH) and Adaptive Sampling, which samples more frequently in areas with slower convergence rates.
To start, we implement ray generation from a given image coordinate, then sample multiple rays for every pixel of the output image and apply Monte Carlo integration over the sampled rays to find the average color of the pixel. Finally, we implement algorithms that check whether a ray intersects a triangle or a sphere at a given position and return the intersection coordinates.
For ray generation, we first transform the given image coordinate in image space \([0,1]\times[0,1]\) to camera space by scaling x by \(2\tan(0.5\,hFov)\), scaling y by \(2\tan(0.5\,vFov)\), and mapping the center of the image (0.5, 0.5) to (0, 0, -1). Then we transform the direction ray, which passes through the camera origin and the corresponding point in camera space, from camera space to world space using the given 3×3 camera-to-world transform matrix. The final output is a ray in world space with the given origin and the transformed direction.
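The image-to-world transform above can be sketched as follows. This is a minimal stand-in, not the project's actual code: the function name, the tuple-based vectors, and passing the camera-to-world matrix as three row tuples are all assumptions made for illustration.

```python
import math

def generate_ray(x, y, h_fov_deg, v_fov_deg, c2w, cam_origin):
    """Map a normalized image coordinate (x, y) in [0,1]^2 to a world-space ray.

    c2w is a 3x3 camera-to-world rotation given as three row tuples.
    """
    # Scale to the camera-space sensor plane at z = -1: the image center
    # (0.5, 0.5) maps to (0, 0, -1).
    cx = (x - 0.5) * 2.0 * math.tan(0.5 * math.radians(h_fov_deg))
    cy = (y - 0.5) * 2.0 * math.tan(0.5 * math.radians(v_fov_deg))
    d_cam = (cx, cy, -1.0)
    # Rotate the direction into world space with the camera-to-world matrix.
    d_world = tuple(sum(c2w[i][j] * d_cam[j] for j in range(3)) for i in range(3))
    # Normalize so t measures world-space distance along the ray.
    norm = math.sqrt(sum(c * c for c in d_world))
    direction = tuple(c / norm for c in d_world)
    return cam_origin, direction
```

With an identity camera-to-world matrix, the image center maps to the ray looking straight down the -z axis, as expected.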
To sample rays for a given image pixel, we perform uniform random sampling in image space \([0,w]\times[0,h]\) inside the unit square whose bottom-left corner is the pixel coordinate. We then feed the sampled image coordinates into the ray generation algorithm and estimate the radiance of each ray. The Monte Carlo estimate of the radiance at the given image pixel is simply the average of those estimated radiances.
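The per-pixel averaging can be sketched as below. `radiance_fn` is a hypothetical stand-in for generating a ray through normalized image coordinates and estimating its radiance; here it returns a scalar so the estimator logic stands alone.

```python
import random

def raytrace_pixel(px, py, num_samples, radiance_fn, w, h):
    """Average radiance over uniform sub-pixel samples (Monte Carlo estimate)."""
    total = 0.0
    for _ in range(num_samples):
        # Uniform sample inside the unit square whose bottom-left corner
        # is the pixel coordinate (px, py) in raster space [0,w]x[0,h].
        sx = (px + random.random()) / w
        sy = (py + random.random()) / h
        total += radiance_fn(sx, sy)
    return total / num_samples
```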
To check ray intersections with triangles, we implement the Möller-Trumbore algorithm. Given an input ray and triangle, the algorithm outputs the barycentric coordinates of the intersection point with respect to the triangle and the parameter t of the intersection point with respect to the ray. If the barycentric values all lie between 0 and 1, and t lies between min_t (front clipping plane) and max_t (back clipping plane), the intersection is valid.
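A minimal sketch of Möller-Trumbore follows, with the validity checks described above. The tuple-based vector helpers are illustration-only assumptions.

```python
def moller_trumbore(o, d, p0, p1, p2, min_t=0.0, max_t=float("inf")):
    """Return (t, b1, b2) for a valid ray-triangle hit, or None.

    b1, b2 are the barycentric weights of p1 and p2; the weight of p0
    is 1 - b1 - b2.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(p1, p0), sub(p2, p0)
    s = sub(o, p0)
    s1, s2 = cross(d, e2), cross(s, e1)
    denom = dot(s1, e1)
    if abs(denom) < 1e-12:          # ray parallel to the triangle plane
        return None
    t = dot(s2, e2) / denom
    b1 = dot(s1, s) / denom
    b2 = dot(s2, d) / denom
    # Valid only if the barycentrics lie in [0,1] and t is inside the clip range.
    if b1 < 0 or b2 < 0 or b1 + b2 > 1 or not (min_t <= t <= max_t):
        return None
    return t, b1, b2
```

For example, a ray from (0, 0, 1) pointing down -z hits the triangle ((-1, -1, 0), (1, -1, 0), (0, 1, 0)) at t = 1.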
To check ray intersections with spheres, we rely on the basic equations for rays and spheres. If there exist parameters t satisfying both the ray equation and the sphere equation, and at least one of the intersection points has t between min_t and max_t, the intersection is valid. If both intersection points have t in that range, we choose the one with the smaller t as the earliest intersection.
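Substituting the ray equation into the sphere equation gives a quadratic in t, which the sketch below solves, keeping the smaller valid root as described.

```python
import math

def intersect_sphere(o, d, center, radius, min_t=0.0, max_t=float("inf")):
    """Return the smallest valid t where the ray o + t*d hits the sphere, or None."""
    oc = tuple(o[i] - center[i] for i in range(3))
    # Coefficients of the quadratic a*t^2 + b*t + c = 0.
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:                      # no real roots: the ray misses the sphere
        return None
    sq = math.sqrt(disc)
    # Check the nearer root first; fall back to the farther one.
    for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
        if min_t <= t <= max_t:
            return t
    return None
```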
Below are some examples generated by performing uniform random ray sampling for every display pixel, with both ray-triangle and ray-sphere intersection implemented.
*(Figures: Bunny, Coil, Gems)*
Currently the path tracer takes a long time to run, because we store all primitive shapes in one flat list, and every primitive must be checked against every ray. To speed up rendering, we implement a Bounding Volume Hierarchy (BVH).
In the BVH structure, the objects in a parent node are split between a left child node and a right child node. If an object's centroid is smaller than the average of all objects' centroids along the chosen split axis, the object goes to the left child; otherwise it goes to the right child. If one of the child nodes would be empty, we stop splitting. Otherwise, we keep splitting until the number of objects in a node is no larger than max_leaf_size.
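The construction can be sketched as follows. Splitting along the axis with the largest centroid spread is an assumption made here for concreteness (the text does not say how the axis is chosen), and `centroid` is a hypothetical callback returning an object's centroid.

```python
class BVHNode:
    def __init__(self, objects):
        self.objects, self.left, self.right = objects, None, None

def build_bvh(objects, centroid, max_leaf_size=4):
    """Recursively split objects by comparing each centroid against the
    average centroid along the chosen axis."""
    node = BVHNode(objects)
    if len(objects) <= max_leaf_size:
        return node                                  # small enough: make a leaf
    cents = [centroid(ob) for ob in objects]
    # Assumption: split along the axis with the largest centroid spread.
    spans = [max(c[a] for c in cents) - min(c[a] for c in cents) for a in range(3)]
    axis = spans.index(max(spans))
    mean = sum(c[axis] for c in cents) / len(cents)
    left = [ob for ob, c in zip(objects, cents) if c[axis] < mean]
    right = [ob for ob, c in zip(objects, cents) if c[axis] >= mean]
    if not left or not right:                        # degenerate split: stop here
        return node
    node.left = build_bvh(left, centroid, max_leaf_size)
    node.right = build_bvh(right, centroid, max_leaf_size)
    return node
```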
To check whether a ray intersects any object in a BVH node, we first check whether it intersects the node's bounding box. If it does not, we return false directly. Otherwise, if the node is a leaf, we check the ray against all objects in the node. If the node has child nodes, we recursively check the ray against both children and return the closer of the two intersections.
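The traversal can be sketched as below. `bbox_hit` and `prim_hit` are hypothetical callbacks (bounding-box test and primitive intersection returning t or None), so the recursion logic stands alone.

```python
def intersect_bvh(node, ray, bbox_hit, prim_hit):
    """Closest-hit BVH traversal. Returns the smallest hit t, or None."""
    if not bbox_hit(node.bbox, ray):
        return None                     # ray misses the node's bounding box
    if node.left is None and node.right is None:
        # Leaf: test every primitive stored in this node.
        hits = [t for t in (prim_hit(ob, ray) for ob in node.objects) if t is not None]
        return min(hits) if hits else None
    # Interior: recurse into both children and keep the closer intersection.
    t_left = intersect_bvh(node.left, ray, bbox_hit, prim_hit)
    t_right = intersect_bvh(node.right, ray, bbox_hit, prim_hit)
    if t_left is None:
        return t_right
    if t_right is None:
        return t_left
    return min(t_left, t_right)
```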
Below are some examples that contain many objects and take a long time to render without the BVH.
*(Figures: maxplanck, Lucy, Dragon)*
The comparison table below shows how dramatically the BVH speeds up rendering. All three scenes are complicated and comprise many primitives. The more primitives a scene has, the longer the naive algorithm runs (roughly linearly), and the more pronounced the improvement from the BVH, whose per-ray traversal cost is roughly O(log n).
| Scene | Number of primitives | Naive | BVH |
|---|---|---|---|
| maxplanck.dae | 50,801 | 170.3622 s | 0.1492 s |
| CBlucy.dae | 133,796 | 439.1689 s | 4.6458 s |
| CBdragon.dae | 100,012 | 326.4464 s | 0.6719 s |
In this part, we implement two sampling methods for direct lighting. The first is Uniform Hemisphere Sampling, in which we estimate the direct lighting at a point by sampling uniformly over a hemisphere. The second is Importance Sampling Light, in which we sample only from the light sources.
To implement Uniform Hemisphere Sampling, we uniformly sample incident directions at the intersection point over the whole hemisphere, with probability \(p(w_j)=\frac{1}{2\pi}\) for every direction. Given the incident samples \(w_j\), the exitant direction \(w_r\), and the bidirectional reflectance distribution function (BRDF) \(f_r\), we find each incident sample's radiance \(L_i\) by checking whether a new ray cast from the intersection point in the sampled direction hits a light source. Finally, the Monte Carlo estimator is computed by the following equation.
\[
L_{out}(p,\, w_r) \approx \frac{1}{N} \sum_{j=1}^{N} \frac{f_r(p,\, w_j \to w_r)\, L_i(p,\, w_j)\, \cos\theta_j}{p(w_j)}
\]
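The hemisphere estimator can be sketched as follows. `brdf` and `incoming_radiance` are hypothetical stand-ins for \(f_r\) and \(L_i\), evaluated in the hit point's local frame (normal along +z).

```python
import math
import random

def estimate_direct_hemisphere(num_samples, brdf, incoming_radiance):
    """Monte Carlo direct-lighting estimate with uniform hemisphere sampling."""
    pdf = 1.0 / (2.0 * math.pi)                    # uniform pdf over the hemisphere
    total = 0.0
    for _ in range(num_samples):
        # Uniformly sample a direction on the unit hemisphere about +z:
        # z uniform in [0,1] plus a uniform azimuth gives uniform solid angle.
        z = random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * random.random()
        w_in = (r * math.cos(phi), r * math.sin(phi), z)
        cos_theta = w_in[2]                        # cosine to the surface normal
        total += brdf(w_in) * incoming_radiance(w_in) * cos_theta / pdf
    return total / num_samples
```

As a sanity check, with a constant BRDF of 1 and constant incoming radiance of 1, the estimator converges to \(\int_{H^2} \cos\theta\, d\omega = \pi\).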
Importance Sampling Light relies on the same Monte Carlo estimator, but instead of sampling uniformly over a hemisphere, we sample only directions from the intersection point toward the light sources, using each light's given probability density function. If a sample comes from a point light source, we sample it only once.
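A sketch of the light-sampling loop is below. Each light's `sample()` and the `occluded` shadow-ray test are hypothetical stand-ins: `sample()` returns a direction toward the light, the emitted radiance, the pdf of that direction, and whether the light is a point light (for which one sample suffices).

```python
def estimate_direct_importance(lights, brdf, occluded, num_samples):
    """Direct lighting estimated by sampling directions toward the lights."""
    total = 0.0
    for light in lights:
        acc, n = 0.0, 0
        for _ in range(num_samples):
            w_in, radiance, pdf, is_point = light.sample()
            n += 1
            cos_theta = w_in[2]          # local frame: normal along +z
            # Skip directions below the surface or blocked by geometry.
            if cos_theta > 0 and not occluded(w_in):
                acc += brdf(w_in) * radiance * cos_theta / pdf
            if is_point:                 # delta light: one sample is exact
                break
        total += acc / n
    return total
```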
Below are some examples rendered with the two sampling methods. Importance Sampling Light clearly produces outputs with less noise and smoother surfaces. This is reasonable, because sample rays that reach light sources contribute far more information to the estimator than rays that miss them.
*(Figures: Bunny, Spheres, Dragon, rendered with each of the two sampling methods)*
Below are some examples rendered with Importance Sampling Light using different numbers of sample rays per area light. As we can see, the more light rays sampled per area light, the less noisy the output. With 64 light rays per area light, the noise is negligible.
*(Figures: 1, 4, 16, and 64 light rays per area light; l = 1, 4, 16, 64)*
In this part, building on direct illumination, we implement global illumination, which combines direct illumination with radiance from paths of more than one bounce. We first compute the zero-bounce radiance, as in direct illumination. We then compute the at-least-one-bounce radiance recursively: while the ray trace depth is under the user-set bound, we call the at-least-one-bounce routine, which returns the one-bounce radiance at the current hit plus the recursively estimated radiance of a randomly sampled incident ray, reweighted so that P(weight = \(\frac{1}{p}\)) = p and P(weight = 0) = 1 - p. This is Russian Roulette: below the maximum trace depth, each sampled incident ray either continues with probability p, its radiance weighted by \(\frac{1}{p}\), or terminates with probability 1 - p. In our code we choose an arbitrary p = 0.3. The resulting estimator is unbiased, and it avoids tracing paths to unbounded depth.
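The recursion can be sketched as follows. `one_bounce` and `sample_bounce` are hypothetical stand-ins for the one-bounce radiance at the current hit and a recursive sample through the next hit point; `p_continue` matches the arbitrary p = 0.3 from the text.

```python
import random

def at_least_one_bounce(depth, max_depth, one_bounce, sample_bounce, p_continue=0.3):
    """At-least-one-bounce radiance with Russian Roulette termination."""
    radiance = one_bounce(depth)
    if depth + 1 >= max_depth:
        return radiance                       # hard cap on trace depth
    # Continue with probability p and reweight by 1/p so the estimator
    # stays unbiased; otherwise terminate the path.
    if random.random() < p_continue:
        radiance += sample_bounce(depth) / p_continue
    return radiance
```

Setting `p_continue = 1.0` (never terminate early) recovers plain depth-limited recursion, which is convenient for checking the depth cap.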
Below are two examples rendered with global illumination with the maximum number of bounces set to 6.
*(Figures: Bunny, Sphere)*
For the bunny example, we can separate the indirect illumination (more than one bounce) from the direct illumination to see how global illumination improves on direct illumination alone.
*(Figures: Direct illumination, bounces 0 and 1; Indirect illumination, bounces > 1)*
We can also see how the global illumination improves with more light bounces included.
*(Figures: max ray depth m = 0, 1, 2, 3, 100)*
Similarly, output quality improves with a higher ray sampling rate per pixel.
*(Figures: s = 1, 2, 4, 8, 16, 64, 1024 sample rays per pixel)*
We can always improve output image quality by sampling more rays, but doing so costs more computation and longer running times. To get the most benefit from additional samples, we introduce adaptive sampling, which samples more frequently in areas with slower convergence rates.
Given a maximum sampling rate, we sample until the samples have converged tightly enough. Convergence is measured by \(I=1.96\cdot\frac{\sigma}{\sqrt{n}}\), the half-width of a 95% confidence interval, and the stopping condition is \(I \leq maxTolerance \cdot \mu\), where maxTolerance is a percentage parameter set by the user, and \(\mu\) and \(\sigma\) are the mean and standard deviation of the samples collected so far. By maintaining the running sums \(s_1\) and \(s_2\) defined below, we can compute \(\mu\) and \(\sigma\) at any time. Every time we collect another batch_size samples (another user-set parameter), we check the stopping condition. If it is satisfied, we stop sampling the pixel; otherwise we keep sampling the next batch_size samples, until we reach the maximum sampling rate.
\[
s_1 = \sum_{k=1}^{n} x_k, \qquad s_2 = \sum_{k=1}^{n} x_k^2, \qquad \mu = \frac{s_1}{n}, \qquad \sigma^2 = \frac{1}{n-1}\left(s_2 - \frac{s_1^2}{n}\right)
\]
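The batching loop can be sketched as below. `sample_fn` is a hypothetical stand-in returning one illuminance sample for the pixel; only the running sums \(s_1\) and \(s_2\) are stored.

```python
import math

def adaptive_sample(sample_fn, max_samples, batch_size=32, max_tolerance=0.05):
    """Sample a pixel until the 95% confidence interval is tight enough.

    Returns (mean, number_of_samples_used).
    """
    s1 = s2 = 0.0
    n = 0
    while n < max_samples:
        for _ in range(batch_size):
            x = sample_fn()
            s1 += x
            s2 += x * x
            n += 1
        mean = s1 / n
        var = (s2 - s1 * s1 / n) / (n - 1)        # sample variance from s1, s2
        i = 1.96 * math.sqrt(max(var, 0.0) / n)   # I = 1.96 * sigma / sqrt(n)
        if i <= max_tolerance * mean:             # converged: stop early
            break
    return mean, n
```

A pixel whose samples are all identical converges after the first batch, while a noisy pixel keeps sampling up to `max_samples`.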
Below is an example showing the rendered bunny and its sampling-rate image. Red areas represent the highest sampling rates, and blue areas the lowest.
*(Figures: Bunny render, sampling-rate map)*