CS184/284A Spring 2026 Homework 1 Write-Up

Names: Dantes Chen

Link to webpage: cal-cs184-student.github.io/hw-webpages-zaddle55/hw1/
Link to GitHub repository: github.com/cal-cs184-student/hw-webpages-zaddle55

Overview

In this homework, I implemented a basic triangle rasterizer that can render single-color triangles, antialiased triangles using supersampling, and textured triangles using barycentric coordinates and mipmaps. I also implemented a simple transform system that applies transformations to the triangles before rasterization. Through this homework, I learned the fundamental concepts of computer graphics, such as rasterization, antialiasing, texture mapping, and transformations, and gained hands-on experience implementing them in code, which helped me better understand how they work in practice.


The most interesting part of this work is that after debugging the LOD computation and trilinear interpolation for mipmapping, I was able to see the significant improvement in image quality when rendering minified textures. It was fascinating to see how the combination of pixel sampling and level sampling can effectively reduce aliasing artifacts and produce smoother images.

Task 1: Drawing Single-Color Triangles

Task 2: Antialiasing by Supersampling

Task 3: Transforms

This is my_robot.svg after applying a combination of scaling, rotation, and translation operations. It looks like a robot that is dancing or waving its arms.
Transformed my_robot.svg
Transformed my_robot.svg.
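The scaling, rotation, and translation above all compose as 3x3 homogeneous matrices. Below is a minimal sketch of how those matrices can be built and composed; the names (Matrix3x3, translate, scale, rotate, mul) are illustrative, not the actual starter-code API.

```cpp
#include <array>
#include <cmath>

// Row-major 3x3 homogeneous transform matrix (hypothetical type, for
// illustration only).
using Matrix3x3 = std::array<std::array<double, 3>, 3>;

Matrix3x3 identity() {
  return {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
}

Matrix3x3 translate(double dx, double dy) {
  return {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
}

Matrix3x3 scale(double sx, double sy) {
  return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
}

Matrix3x3 rotate(double deg) {
  double r = deg * M_PI / 180.0;  // SVG rotate() takes degrees
  return {{{std::cos(r), -std::sin(r), 0},
           {std::sin(r),  std::cos(r), 0},
           {0, 0, 1}}};
}

// Compose a * b: the product applies b first, then a.
Matrix3x3 mul(const Matrix3x3& a, const Matrix3x3& b) {
  Matrix3x3 c{};
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k)
        c[i][j] += a[i][k] * b[k][j];
  return c;
}
```

Because matrix multiplication is not commutative, rotating then translating gives a different result than translating then rotating, which is why the order of the nested SVG transforms matters.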

Task 4: Barycentric coordinates

Barycentric Coordinates
Barycentric coordinates are a coordinate system for triangles that allows us to express any point within the triangle as a weighted average of the triangle's vertices.
Barycentric coordinates are often denoted as \((u, v, w)\) where \(u\), \(v\), and \(w\) are the weights corresponding to the three vertices of the triangle.
To compute the barycentric coordinates of a point \(P\) with respect to a triangle defined by vertices \(A\), \(B\), and \(C\), we can use the following formulas: \[ u = \frac{Area(PBC)}{Area(ABC)}, \quad v = \frac{Area(PCA)}{Area(ABC)}, \quad w = \frac{Area(PAB)}{Area(ABC)} \] where \(Area(ABC)\) is the area of the triangle formed by vertices \(A\), \(B\), and \(C\), and \(Area(PBC)\), \(Area(PCA)\), and \(Area(PAB)\) are the areas of the triangles formed by point \(P\) and the edges of the triangle.
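The area ratios above can be computed directly with 2D cross products, which give twice the signed area of a triangle. Below is a sketch under that approach; the struct and function names are illustrative, not the starter code's.

```cpp
#include <cmath>

// Barycentric weights (u, v, w) for a point P; they sum to 1, and all three
// lie in [0, 1] exactly when P is inside the triangle.
struct Bary { double u, v, w; };

// Twice the signed area of triangle (a, b, c) via the 2D cross product.
static double signed_area2(double ax, double ay, double bx, double by,
                           double cx, double cy) {
  return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

Bary barycentric(double px, double py,
                 double ax, double ay,
                 double bx, double by,
                 double cx, double cy) {
  double total = signed_area2(ax, ay, bx, by, cx, cy);
  // Replace one vertex at a time with P; signed areas make the ratios
  // orientation-independent.
  double u = signed_area2(px, py, bx, by, cx, cy) / total;  // Area(PBC)/Area(ABC)
  double v = signed_area2(ax, ay, px, py, cx, cy) / total;  // Area(PCA)/Area(ABC)
  double w = signed_area2(ax, ay, bx, by, px, py) / total;  // Area(PAB)/Area(ABC)
  return {u, v, w};
}
```

Interpolating a color is then just `u * colorA + v * colorB + w * colorC`, which is exactly what produces the smooth RGB gradient in the figure below.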
Below is the sample demonstrating barycentric interpolation of RGB colors across a circle:
Barycentric RGB Interpolation
Barycentric interpolation of RGB colors across the circle.

Task 5: "Pixel sampling" for texture mapping

Task 6: "Level Sampling" with mipmaps for texture mapping

Level sampling, commonly known as mipmapping, is a technique specifically engineered to combat the texture aliasing that occurs when a high-resolution texture is compressed into a small geometric area on the screen (minification). If we were to only sample from the original, full-resolution image, a single screen pixel might inadvertently skip across dozens of texels, leading to severe Moiré patterns, high-frequency noise, and shimmering artifacts as the camera moves. To circumvent this, the system pre-computes a hierarchy of progressively downsampled texture images. By halving the resolution at each step, we create a pyramid of textures where each level represents a pre-filtered version of the original image, perfectly suited for viewing at different distances.

In my implementation, the core challenge was determining which mipmap level mathematically matches the screen pixel's current footprint. I achieved this by calculating the partial derivatives of the texture coordinates with respect to the screen space coordinates. By evaluating how much the \(u\) and \(v\) values change when we step one pixel horizontally (\(dx\)) or vertically (\(dy\)), I extracted the maximum rate of change. Taking the base-2 logarithm of this maximum vector length yields an ideal, continuous mipmap level, denoted as \(D\). Depending on the chosen configuration, the algorithm either rounds \(D\) to the nearest integer to sample a single optimal level, or, for trilinear filtering, it samples from the two adjacent integer levels (\(\lfloor D \rfloor\) and \(\lceil D \rceil\)) and linearly interpolates the two resulting colors based on the fractional remainder of \(D\).
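The level computation and the trilinear blend can be sketched as follows. This assumes the derivatives have already been scaled by the texture's width and height; the function names and the generic sampler callback are illustrative, not the actual starter-code interface.

```cpp
#include <algorithm>
#include <cmath>

// Continuous mipmap level D from the per-pixel texture-coordinate
// derivatives (already scaled to texel units).
double mipmap_level(double du_dx, double dv_dx,
                    double du_dy, double dv_dy) {
  double len_x = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);  // footprint along x
  double len_y = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);  // footprint along y
  double L = std::max(len_x, len_y);        // worst-case texel footprint
  return std::max(0.0, std::log2(L));       // clamp: magnification maps to level 0
}

// Trilinear filtering: sample the two adjacent integer levels and blend
// by the fractional remainder of D. Color must support * and +.
template <typename Color, typename SampleFn>
Color trilinear(double D, SampleFn sample_level) {
  int lo = static_cast<int>(std::floor(D));
  double t = D - lo;  // fractional part of D
  return sample_level(lo) * (1.0 - t) + sample_level(lo + 1) * t;
}
```

A footprint spanning two texels per pixel gives \(D = \log_2 2 = 1\), i.e. the half-resolution level, which matches the intuition that each mipmap level halves the sampling rate the texture was pre-filtered for.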

Tradeoffs: Speed, Memory, and Antialiasing Power

Configuring the rasterization pipeline involves balancing three distinct sampling parameters, each carrying unique tradeoffs regarding computational speed, memory footprint, and antialiasing efficacy. Adjusting the pixel sampling method (from nearest-neighbor to bilinear) introduces a moderate processing penalty because it requires fetching four texels from memory instead of just one and performing three linear interpolations. However, it requires no extra texture memory and provides excellent foundational antialiasing for magnified textures by smoothing out harsh, blocky pixel transitions.
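The four-fetch, three-lerp structure of bilinear sampling can be sketched as below. For simplicity this assumes a single-channel (grayscale) texture stored row-major as floats and clamps at the borders; real code would operate on RGB texels.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear sample of a w x h grayscale texture at normalized (u, v).
// (Illustrative sketch, not the starter-code Texture API.)
float sample_bilinear(const std::vector<float>& tex, int w, int h,
                      double u, double v) {
  // Map (u, v) to continuous texel space, aligned to texel centers.
  double x = u * w - 0.5, y = v * h - 0.5;
  int x0 = std::clamp(static_cast<int>(std::floor(x)), 0, w - 1);
  int y0 = std::clamp(static_cast<int>(std::floor(y)), 0, h - 1);
  int x1 = std::min(w - 1, x0 + 1), y1 = std::min(h - 1, y0 + 1);
  double s = x - std::floor(x), t = y - std::floor(y);  // fractional offsets
  // Two horizontal lerps, then one vertical lerp: four texel fetches total.
  double top = tex[y0 * w + x0] * (1 - s) + tex[y0 * w + x1] * s;
  double bot = tex[y1 * w + x0] * (1 - s) + tex[y1 * w + x1] * s;
  return static_cast<float>(top * (1 - t) + bot * t);
}
```

Sampling exactly between four texels returns their average, which is what blends away the blocky edges that nearest-neighbor sampling produces under magnification.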

Conversely, enabling level sampling (mipmapping) fundamentally alters the memory requirements of the application. Generating and storing the mipmap hierarchy increases the overall texture memory footprint by approximately 33.3%. In terms of speed, evaluating the derivatives and interpolating across multiple mipmap levels (trilinear filtering) consumes more processor cycles. Yet, this is an exceptionally powerful antialiasing technique specifically for minified textures, as it completely eradicates Moiré patterns and distant rendering noise that pixel sampling alone cannot fix.
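The roughly 33.3% figure follows from the geometric series of mipmap level sizes: each level holds one quarter as many texels as the level above it, so the extra storage relative to the base texture is \[ \frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots = \sum_{k=1}^{\infty} 4^{-k} = \frac{1/4}{1 - 1/4} = \frac{1}{3} \approx 33.3\%. \]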

Finally, increasing the number of samples per pixel (supersampling) represents the most brute-force, general-purpose approach to antialiasing. Modifying this parameter severely impacts both speed and memory. For instance, a 4x supersampling rate mathematically quadruples the size of the required memory buffer and proportionally multiplies the time spent evaluating edge functions and writing to memory. Despite this massive performance cost, supersampling boasts the absolute highest antialiasing power; it physically samples the geometric scene at a higher resolution before down-filtering, seamlessly smoothing out harsh geometric edges (jaggies) and texture artifacts simultaneously, delivering unparalleled overall image quality.


The custom texture mapping sample (in texmap/test7.svg), rendered under four sampling configurations, is shown below:
Texture mapping with nearest-neighbor and bilinear pixel sampling at 1 sample per pixel, with level-0 sampling.
Texture mapping with nearest-neighbor and bilinear pixel sampling at 1 sample per pixel, with nearest-level sampling.
Texture mapping at 1 sample per pixel, with level-1 sampling.
A comparison of the same sampling configuration at different zoom levels is also shown below:
Texture mapping with bilinear pixel sampling at 1 sample per pixel and nearest-level sampling, shown without zoom, zoomed in, and zoomed out.

(Optional) Task 7: Extra Credit - Draw Something Creative!

In this creative task, I created a concentric spiral square pattern using the draw program and the spiral.svg input file. The resulting image is shown below:
Concentric Spiral Square Pattern
A concentric spiral square pattern.
The pattern is produced by nesting g elements and applying a uniform scale and a 10-degree rotation to each successive group. The result is a visually appealing spiral pattern that demonstrates the power of geometric transformations in computer graphics.