OVERVIEW :


Homework 1 involves implementing graphics rendering techniques: rasterization, anti-aliasing, transforms, barycentric coordinates and interpolation, and texture mapping.
In task 1, we implement triangle rasterization using a point-in-triangle test with a sample point at the center of each pixel. The difficulties with this task were making sure rasterization worked correctly regardless of the winding order of the input vertices, and making the run time no worse than iterating over each pixel in the minimum bounding box. We also made sure all edges were drawn properly.
Task 2 extends the first by incorporating supersampling as an anti-aliasing technique. We modified the rasterization pipeline by enlarging the sample_buffer to accommodate supersampling (multiple samples are taken for each pixel). Our difficulties with this task mainly revolved around keeping our fill_pixel function consistent with how we fill the sample buffer during supersampling, as well as keeping the indexing consistent in resolve_to_framebuffer().
In task 3, we implemented translation, scaling, and rotation transforms according to the SVG spec. The main difficulty with this task was remembering to convert the degrees passed into the rotation function into radians. We then modified the provided robot man image to look like he was doing a jumping jack. Here the difficulty was determining how much to rotate each part of the robot's body.
In task 4 we implement barycentric interpolation over three triangle vertices with colors. We interpolate each pixel's color inside the triangle using the barycentric coordinates and the color values of the vertices. The interpolation provides a smooth transition of colors across the triangle. The biggest difficulty for this task was figuring out that we should interpolate color values and not (x, y) coordinate values.
In task 5 we incorporate texture mapping. We apply textures onto triangles using nearest-neighbor or bilinear interpolation for pixel sampling. The biggest difficulty with this task was realizing we had to scale our texture coordinates to agree with the width and height of the mip level.
Finally, in task 6, we extend the texture mapping from task 5 to support mipmaps. We implement linear and nearest-neighbor methods for the level sampling of each pixel. The biggest difficulty with this task was correctly implementing the get_level() function and correctly scaling the parameters to agree with the size of the texture level.

TASK 1 :

Walk through how you rasterize triangles in your own words.
First I determine the orientation of the input vertices using the cross product. If the vertices are in clockwise order, I swap the vertices (x1, y1) and (x2, y2) so that the order becomes counterclockwise. Then I iterate across the integer pixel coordinates of the smallest bounding box (explained in the next part). I sample at the integer value plus 0.5 of each x and y coordinate, and plug the sample into the three line equations. If the output of each of the three line equation tests is greater than or equal to zero, I fill the pixel with the input color.
Explain how your algorithm is no worse than one that checks each sample within the bounding box of the triangle.
My algorithm is no worse than one that checks each sample within the bounding box of the triangle because I compute the smallest bounding box of the triangle and then iterate only over the samples inside it. I find this bounding box by taking the minimum and maximum x vertex values and the minimum and maximum y vertex values, and using these as the box's corners.
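For concreteness, here is a minimal stand-alone sketch of this loop in C++ (simplified: the real rasterize_triangle also takes a Color and writes samples through fill_pixel, and our exact code differs in detail):

    #include <algorithm>
    #include <cmath>

    // Line/edge test: 2D cross product of the edge (x0,y0)->(x1,y1) with the vector
    // to the sample point. For counterclockwise triangles, points inside the
    // triangle give a result >= 0 for all three edges.
    static float edge(float x0, float y0, float x1, float y1, float px, float py) {
        return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0);
    }

    void rasterize_triangle(float x0, float y0, float x1, float y1, float x2, float y2) {
        // If the vertices were given clockwise, swap (x1, y1) and (x2, y2) so the
        // ">= 0" convention below always applies.
        if (edge(x0, y0, x1, y1, x2, y2) < 0) {
            std::swap(x1, x2);
            std::swap(y1, y2);
        }

        // Smallest axis-aligned bounding box of the triangle.
        int xmin = (int)std::floor(std::min({x0, x1, x2}));
        int xmax = (int)std::ceil(std::max({x0, x1, x2}));
        int ymin = (int)std::floor(std::min({y0, y1, y2}));
        int ymax = (int)std::ceil(std::max({y0, y1, y2}));

        for (int y = ymin; y < ymax; ++y) {
            for (int x = xmin; x < xmax; ++x) {
                float sx = x + 0.5f, sy = y + 0.5f;      // sample at the pixel center
                if (edge(x0, y0, x1, y1, sx, sy) >= 0 &&
                    edge(x1, y1, x2, y2, sx, sy) >= 0 &&
                    edge(x2, y2, x0, y0, sx, sy) >= 0) {
                    // fill_pixel(x, y, color);           // point is inside: fill it
                }
            }
        }
    }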
Show a png screenshot of basic/test4.svg with the default viewing parameters and with the pixel inspector centered on an interesting part of the scene.

TASK 2:

Walk through your supersampling algorithm and data structures. Why is supersampling useful? What modifications did you make to the rasterization pipeline in the process? Explain how you used supersampling to antialias your triangles
The supersampling algorithm takes in a sampling rate and outputs an antialiased image. In our algorithm, we first resize the sample buffer to have enough space for all the subsamples. Then in rasterize_triangle, the algorithm iterates through the locations of the subsamples, checks whether they fall within the triangle, and colors the subsamples with the requested color. In resolve_to_framebuffer, the function iterates over sample_rate subsamples at a time, computes their average color, and paints the mean color in the appropriate spot of the framebuffer. The point of this algorithm is to remove jagged edges of skinny triangles and to filter high frequencies out of images. Since we changed the size of the sample buffer, we had to change how we indexed it in fill_pixel, set_framebuffer_target, and set_sample_rate: because the buffer holds more samples, the number of samples that represent a single pixel must be adjusted. Supersampling increases the sampling frequency and then (since the screen cannot display that many pixels) averages the subsamples down into pixels, which removes the high-frequency content in that area.
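As a sketch of the averaging step (simplified stand-alone C++; the Color stand-in and the exact sample_buffer layout are assumptions and may not match our code exactly):

    #include <cstdint>
    #include <vector>

    struct Color { float r, g, b; };   // simplified stand-in for the project's Color type

    // Assumed layout: the sample_rate subsamples of pixel (x, y) are stored
    // contiguously starting at (y * width + x) * sample_rate.
    void resolve_to_framebuffer(const std::vector<Color>& sample_buffer,
                                std::vector<uint8_t>& framebuffer,
                                size_t width, size_t height, size_t sample_rate) {
        for (size_t y = 0; y < height; ++y) {
            for (size_t x = 0; x < width; ++x) {
                float r = 0, g = 0, b = 0;
                size_t base = (y * width + x) * sample_rate;
                for (size_t s = 0; s < sample_rate; ++s) {    // average the subsamples
                    r += sample_buffer[base + s].r;
                    g += sample_buffer[base + s].g;
                    b += sample_buffer[base + s].b;
                }
                size_t px = 3 * (y * width + x);              // RGB framebuffer
                framebuffer[px + 0] = (uint8_t)(255 * r / sample_rate);
                framebuffer[px + 1] = (uint8_t)(255 * g / sample_rate);
                framebuffer[px + 2] = (uint8_t)(255 * b / sample_rate);
            }
        }
    }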
Show png screenshots of basic/test4.svg with the default viewing parameters and sample rates 1, 4, and 16 to compare them side-by-side. Position the pixel inspector over an area that showcases the effect dramatically; for example, a very skinny triangle corner. Explain why these results are observed.
We see that as the sampling rate increases, the areas near the edges of the triangles have smoother transitions. Additionally, the corner of the small skinny triangle becomes more connected as the sampling rate increases. This occurs because, as the sampling rate increases, the color of each pixel is constructed from more data points and hence transitions more smoothly. More sample points better approximate high-frequency changes such as edges and skinny corners.


Extra credit: If you implemented alternative antialiasing methods, describe them and include comparison pictures demonstrating the difference between your method and grid-based supersampling.

We implemented a randomized supersampling method that samples random spots within each pixel; the number of random samples equals the sample rate. Normal sampling:
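A minimal sketch of how the randomized sample positions can be generated (random_subsamples is a hypothetical helper name, not necessarily what appears in our code); each returned position is then tested against the triangle exactly like a grid subsample:

    #include <random>
    #include <utility>
    #include <vector>

    // Hypothetical helper: produce sample_rate random sample positions inside pixel (x, y).
    std::vector<std::pair<float, float>> random_subsamples(int x, int y, int sample_rate,
                                                           std::mt19937& rng) {
        std::uniform_real_distribution<float> offset(0.0f, 1.0f);
        std::vector<std::pair<float, float>> samples;
        samples.reserve(sample_rate);
        for (int s = 0; s < sample_rate; ++s)
            samples.emplace_back(x + offset(rng), y + offset(rng));
        return samples;
    }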

TASK 3 :

Create an updated version of svg/transforms/robot.svg with cubeman doing something more interesting, like waving or running. Feel free to change his colors or proportions to suit your creativity. Save your svg file as my_robot.svg in your docs/ directory and show a png screenshot of your rendered drawing in your write-up. Explain what you were trying to do with cubeman in words.


We are trying to make cube man look like he is doing a jumping jack. This was achieved by rotating and shifting his arms and legs outward, and then rotating and shifting his lower arms and lower legs inward. More specifically, his arms were rotated outward by 20 degrees and his legs by 45 degrees. Then the lower legs were rotated inward by 30 degrees and the lower arms by 45 degrees. Finally, shifts were applied to each body part to agree with the rotations.
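For reference, a minimal sketch of the rotation transform used here (Mat3 is a simplified stand-in for the project's Matrix3x3; the SVG spec gives the angle in degrees, so it is converted to radians before calling cos/sin):

    #include <cmath>

    // Simplified stand-in for a 3x3 homogeneous transform matrix (row-major).
    struct Mat3 { float m[3][3]; };

    // rotate(deg): build the SVG rotation transform about the origin.
    Mat3 rotate(float deg) {
        float rad = deg * 3.14159265358979f / 180.0f;   // degrees -> radians
        float c = std::cos(rad), s = std::sin(rad);
        return Mat3{{{ c, -s, 0 },
                     { s,  c, 0 },
                     { 0,  0, 1 }}};
    }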

TASK 4 :

Explain barycentric coordinates in your own words and use an image to aid you in your explanation. One idea is to use a svg file that plots a single triangle with one red, one green, and one blue vertex, which should produce a smoothly blended color triangle.
Barycentric coordinates are coordinates that indicate a point's position relative to certain vertices. In our case, they tell us our sample point's position relative to our triangle's vertices. Barycentric coordinates allow us to represent points within a triangle as a weighted sum of the vertices. The weights are alpha, beta, and gamma, where alpha + beta + gamma = 1. If any weight is less than zero, then the point does not lie within the triangle.
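As an illustration, here is a minimal stand-alone sketch of this computation (simplified types rather than the project's actual Color/Vector2D; the interpolated value at a sample point is the weighted sum of the vertex colors):

    // Simplified stand-in type.
    struct Color { float r, g, b; };

    // Barycentric weights of (px, py) with respect to triangle (x0,y0)-(x1,y1)-(x2,y2),
    // computed from ratios of signed areas. alpha + beta + gamma = 1; a negative
    // weight means the point lies outside the triangle.
    void barycentric(float x0, float y0, float x1, float y1, float x2, float y2,
                     float px, float py, float& alpha, float& beta, float& gamma) {
        float denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2);
        alpha = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / denom;
        beta  = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / denom;
        gamma = 1.0f - alpha - beta;
    }

    // Interpolated color at the sample point: weighted sum of the vertex colors.
    Color interpolate(Color c0, Color c1, Color c2, float alpha, float beta, float gamma) {
        return { alpha * c0.r + beta * c1.r + gamma * c2.r,
                 alpha * c0.g + beta * c1.g + gamma * c2.g,
                 alpha * c0.b + beta * c1.b + gamma * c2.b };
    }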


In this image, we can see that the vertices are colored red, green, and blue. Each point within the triangle then has a color determined by its relationship to each vertex. For example, points closer to the green vertex are a stronger green, and points closer to the center are more of a blend of all three.

Show a png screenshot of svg/basic/test7.svg with default viewing parameters and sample rate 1. If you make any additional images with color gradients, include them.

TASK 5 :

Explain pixel sampling in your own words and describe how you implemented it to perform texture mapping. Briefly discuss the two different pixel sampling methods, nearest and bilinear.
Pixel sampling is the process of obtaining attributes, such as color, for each pixel of an image or region of an image in order to represent it at a particular resolution. Various approaches, such as nearest sampling or bilinear sampling, can be used to gather the data or "sample" each pixel. These methods retrieve information from a source using the pixel's coordinates and then approximate a value to assign to each pixel.

We implemented pixel sampling within our rasterize_textured_triangle() function (which takes in vertex coordinates and texture coordinates for each vertex) by iterating over each pixel within a minimum bounding box around the input triangle (for triangles whose vertices are given in clockwise order, we swap them to counterclockwise for consistency). We also maintain the supersampling features within this function by iterating over subsamples for each pixel when the sample_rate is greater than 1. For each pixel, we compute barycentric coordinates to determine the pixel's relative position within the triangle, and we use these barycentric coordinates to obtain the pixel's texture coordinates. Then we sample the texture map using the requested pixel sampling method (nearest or bilinear) to retrieve the desired color for that pixel. Finally, we write this value to the appropriate spot in the sample_buffer; the sample_buffer is later mapped to the framebuffer in resolve_to_framebuffer().

Nearest sampling method: In this method, the texel closest to the sample point is applied directly to that sample point. We implemented this by rounding the sample's texture coordinates to the nearest integer, which corresponds to a texel position on the texture map. Although fast, this method tends to produce more pixelated results.

Bilinear sampling method: In this method, we linearly interpolate the color values of the four texels nearest to the sample point. First, we use the texture coordinates of the sample point to find the four surrounding texels. Then we compute the fractional offsets within the texels to determine the interpolation weights. We perform two horizontal interpolations (along the top and bottom texel rows) using the horizontal fractional offset, and then interpolate those two results vertically using the vertical fractional offset. This final interpolated color is applied to the sample. When performing this sampling for mipmaps at different levels (described more in task 6), we had to scale our uv values by the width and height of the chosen level. Although it requires more computation than nearest sampling, bilinear sampling creates smoother transitions between pixels, improving the quality of the image.
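A minimal stand-alone sketch of the bilinear lookup described above (the Color struct and the row-major texel vector are simplifying assumptions; our actual Texture/MipLevel interface differs):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Color { float r, g, b; };   // simplified stand-in

    // Bilinear sample of a single mip level, given its texels in row-major order.
    // (u, v) are normalized texture coordinates; width/height are the level's size.
    Color sample_bilinear(float u, float v,
                          const std::vector<Color>& texels, int width, int height) {
        // Scale (u, v) into this level's texel space; the 0.5 centers samples on texels.
        float x = u * width  - 0.5f;
        float y = v * height - 0.5f;

        int x0 = std::clamp((int)std::floor(x), 0, width  - 1);
        int y0 = std::clamp((int)std::floor(y), 0, height - 1);
        int x1 = std::min(x0 + 1, width  - 1);
        int y1 = std::min(y0 + 1, height - 1);

        float s = x - std::floor(x);   // horizontal fractional offset
        float t = y - std::floor(y);   // vertical fractional offset

        auto at   = [&](int tx, int ty) { return texels[ty * width + tx]; };
        auto lerp = [](Color a, Color b, float w) {
            return Color{ a.r + w * (b.r - a.r),
                          a.g + w * (b.g - a.g),
                          a.b + w * (b.b - a.b) };
        };

        Color top    = lerp(at(x0, y0), at(x1, y0), s);   // horizontal lerp, top row
        Color bottom = lerp(at(x0, y1), at(x1, y1), s);   // horizontal lerp, bottom row
        return lerp(top, bottom, t);                      // vertical lerp between rows
    }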
Check out the svg files in the svg/texmap/ directory. Use the pixel inspector to find a good example of where bilinear sampling clearly defeats nearest sampling. Show and compare four png screenshots using nearest sampling at 1 sample per pixel, nearest sampling at 16 samples per pixel, bilinear sampling at 1 sample per pixel, and bilinear sampling at 16 samples per pixel.

Comment on the relative differences. Discuss when there will be a large difference between the two methods and why.
Comparing nearest and bilinear at 1 sample per pixel: While both look more pixelated than the higher-sample-rate images, the windows on the Campanile look smoother in the bilinearly sampled image. Additionally, in the nearest-sampled image there is a more drastic difference between neighboring pixels' colors where high-frequency changes occur, whereas the bilinearly sampled image does a better job of transitioning in these regions.

Comparing nearest and bilinear at 16 samples per pixel: In these two images, the differences between the two sampling methods are less obvious, as the supersampling takes care of the aliasing produced by both types of sampling. That said, close inspection shows that the nearest-sampled image still looks a bit more pixelated than the bilinear one.

In summary, the difference between the two methods is largest at lower sampling rates. While nearest-neighbor sampling causes more aliasing than bilinear, a higher sampling rate resolves much of the aliasing caused by both methods.

TASK 6:

Explain level sampling in your own words and describe how you implemented it for texture mapping.
Generally, level sampling refers to the technique of selecting among multiple levels of detail with which to represent an image (very often applied to texture mapping and mipmaps). In our case specifically, level sampling is the process of selecting the correct level of detail for a texture map. We have a pre-computed set of texture maps (a mipmap) whose levels are progressively smaller versions of the original texture map: each level contains a downsampled version of the original, with level 0 being the original and subsequent levels becoming smaller. A pixel's mipmap level corresponds to the amount of detail needed to color or texture that pixel. Different methods can be used to determine the mipmap level; in our case, we had Zeroth, Nearest, and Linear. In summary, level sampling involves determining a mipmap level for each pixel.

Our implementation for texture mapping builds off the pixel sampling from task 5. In addition to the task 5 implementation, we implemented a get_level() function. This function selects a MipLevel for texture sampling based on the texel density in screen space. The function takes a SampleParams struct as input, which contains p_uv (the texture coordinates of the pixel being sampled), and p_dx_uv and p_dy_uv (the differences in uv texture coordinates between adjacent pixels). We scale these differences by the width and height of the texture image to express them in texel units. Then the maximum of the norms of these scaled differences is computed, and the mipmap level is the base-2 logarithm of this maximum.

Also in addition to the task 5 implementation, for each pixel we construct the SampleParams struct within rasterize_textured_triangle(); it holds the uv texture coordinates and their differentials for get_level(). We then determine the appropriate MipLevel index for each pixel based on the level sampling method (lsm) in use. Nearest level sampling selects the nearest MipLevel and passes it into the nearest or bilinear pixel sampling functions. Linear level sampling computes the mip level as a continuous number, then computes an interpolated texture value from the two adjacent texture levels, weighted by the fractional part of the continuous level.
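A minimal sketch of the level computation described above (Vec2 is a simplified stand-in for the project's Vector2D; the actual get_level in our code takes a SampleParams struct instead of separate arguments):

    #include <algorithm>
    #include <cmath>

    // Simplified 2D vector, standing in for the project's Vector2D.
    struct Vec2 { float x, y; };

    // Continuous mip level from the screen-space uv derivatives.
    // p_dx_uv and p_dy_uv are the uv differences to the pixels one step in x and y;
    // width/height are the full-resolution (level 0) texture size.
    float get_level(Vec2 p_dx_uv, Vec2 p_dy_uv, int width, int height) {
        // Scale the uv differences into texel units of level 0.
        float dx_u = p_dx_uv.x * width,  dx_v = p_dx_uv.y * height;
        float dy_u = p_dy_uv.x * width,  dy_v = p_dy_uv.y * height;

        // Footprint size: the larger of the two derivative lengths.
        float L = std::max(std::sqrt(dx_u * dx_u + dx_v * dx_v),
                           std::sqrt(dy_u * dy_u + dy_v * dy_v));

        // Level 0 when the footprint is a texel or less; clamp to avoid negative levels.
        return std::max(0.0f, std::log2(L));
    }

For nearest level sampling this value is rounded to the nearest integer level (clamped to the available levels), while for linear level sampling the two adjacent levels are sampled and blended by the fractional part.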
You can now adjust your sampling technique by selecting pixel sampling, level sampling, or the number of samples per pixel. Describe the tradeoffs between speed, memory usage, and antialiasing power between the three various techniques.
Pixel sampling: Speed: pixel sampling methods that require more computation are slower. In our case, nearest-neighbor sampling requires less computation than bilinear and hence gives faster rendering times. Memory usage: both bilinear and nearest-neighbor pixel sampling have similar memory usage, since both only read texels from the texture map; bilinear may use slightly more due to the interpolation. Antialiasing power: bilinear has stronger antialiasing power than nearest neighbor since it produces smoother transitions between texels by interpolating their values. Nearest neighbor tends to produce more aliasing, especially in textures with high-frequency detail, where neighboring texel colors contribute less to the result than they do with bilinear.

Level sampling: Speed: level sampling affects rendering time through the amount of computation required to determine the texture level; linear (trilinear) interpolation between levels requires additional calculations per pixel, potentially slowing down rendering compared to nearest level sampling. Memory usage: storing the mipmap levels of varying resolutions increases memory usage (by roughly one third over the level-0 texture). Antialiasing power: level sampling helps textures maintain quality across different viewing distances, reducing aliasing; linear (trilinear) level sampling smooths transitions between mip levels, again reducing aliasing.

Number of samples per pixel: Speed: higher sample rates slow down rendering due to the additional computation needed for each pixel. Memory usage: higher sample rates require more memory to store the increased number of samples in the sample_buffer; the buffer grows in direct proportion to the sample rate. Antialiasing power: increasing the number of samples reduces aliasing by capturing more detail about each pixel; supersampling averages the samples within each pixel, producing smoother transitions in areas with high-frequency detail.

Using a png file you find yourself, show us four versions of the image, using the combinations of L_ZERO and P_NEAREST, L_ZERO and P_LINEAR, L_NEAREST and P_NEAREST, as well as L_NEAREST and P_LINEAR.