Give a high-level overview of what you implemented in this project. Think about what you've built as a whole. Share your thoughts on what interesting things you've learned from completing the project.
In this project we render visual entities on a screen, which requires understanding how an image is stored and represented. We start by drawing a single-color triangle with a rasterization technique. A naive implementation produces jaggies and aliasing, which we alleviate with supersampling. Once we can render colored figures on the screen, we move them using linear algebra. So far we were dealing with a single color inside the triangle; what if each vertex has its own color? The barycentric coordinate system lets us determine the colors inside the triangle. Furthermore, the same coordinates enable sampling points from texture space, where we use nearest and bilinear sampling. Viewing a texture too close or too far again produces jaggies and aliasing, and we mitigate this problem with mipmaps, cached downsampled copies of the texture.
We are given the three vertices and the color of a triangle inside an SVG file. First, we are going to check every pixel. Determining whether a pixel lies inside the triangle relies on two facts: each line divides the plane into two half-planes, and a triangle is the intersection of three such half-planes. To test a point we use the inner product of two vectors. Let P0, P1, P2 be the given vertices, and suppose we want to check the point P = (x, y). We take the edge vector V1 = P1 - P0 and the vector perpendicular to it, which turns out to be N1 = (-(y1 - y0), x1 - x0). We also form V2 = P - P0. Now we compute N1 · V2, and the sign of the result tells us whether V2 lies within ±90 degrees of N1, i.e., whether P lies in the half-plane on N1's side of the edge P0P1. Repeating this for the three edges P0P1, P1P2, and P2P0 determines whether the given pixel is inside the triangle. Lastly, since the vertices may wind in either direction, we need to accept the opposite sign for all three tests as well.
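A minimal sketch of this sign test in C++ (standalone, not the actual project API; the `Vec2` type and function names are my own for illustration):

```cpp
struct Vec2 { float x, y; };

// Signed edge test: dot product of the edge's perpendicular
// N1 = (-(y1 - y0), x1 - x0) with V2 = P - P0.
// Positive on one side of the line P0->P1, negative on the other.
float edge_side(const Vec2& p0, const Vec2& p1, const Vec2& p) {
  return -(p1.y - p0.y) * (p.x - p0.x) + (p1.x - p0.x) * (p.y - p0.y);
}

// P is inside if it lies on the same side of all three edges.
// Accepting "all >= 0 or all <= 0" handles both winding orders.
bool inside_triangle(const Vec2& p0, const Vec2& p1, const Vec2& p2,
                     const Vec2& p) {
  float s0 = edge_side(p0, p1, p);
  float s1 = edge_side(p1, p2, p);
  float s2 = edge_side(p2, p0, p);
  return (s0 >= 0 && s1 >= 0 && s2 >= 0) ||
         (s0 <= 0 && s1 <= 0 && s2 <= 0);
}
```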
Instead of checking every pixel on the screen, my algorithm iterates only over the bounding box enclosing the triangle: per axis, we simply take the minimum and maximum of the three vertex coordinates.
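A sketch of the bounding-box traversal, sampling each pixel at its center (it reuses the `inside_triangle` helper above; `rasterize_pixel` stands in for a hypothetical framebuffer write):

```cpp
#include <algorithm>
#include <cmath>

void rasterize_pixel(int x, int y);  // hypothetical framebuffer write

void rasterize_triangle(const Vec2& p0, const Vec2& p1, const Vec2& p2,
                        int width, int height) {
  // Clamp the triangle's bounding box to the screen.
  int xmin = std::max(0, (int)std::floor(std::min({p0.x, p1.x, p2.x})));
  int xmax = std::min(width - 1, (int)std::ceil(std::max({p0.x, p1.x, p2.x})));
  int ymin = std::max(0, (int)std::floor(std::min({p0.y, p1.y, p2.y})));
  int ymax = std::min(height - 1, (int)std::ceil(std::max({p0.y, p1.y, p2.y})));

  // Test only pixels inside the box, sampling at pixel centers.
  for (int y = ymin; y <= ymax; ++y)
    for (int x = xmin; x <= xmax; ++x)
      if (inside_triangle(p0, p1, p2, {x + 0.5f, y + 0.5f}))
        rasterize_pixel(x, y);
}
```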
Extra credit (wrapping the `clock()` function around the `svg.draw()` command in `DrawRend::redraw()` to compare millisecond timings with the various optimizations off and on): No attempt
In the previous task, we sampled the point at the middle of each pixel by adding 0.5 to the x and y coordinates, which is essentially 1:1 sampling, and it creates jaggies. Instead, we divide each pixel into 4, 9, or 16 squares; in other words, we take 4, 9, or 16 sample points inside each pixel. Of course, we cannot write these directly to the screen, because there are fewer screen pixels than entries in our new data structure. We simply average the colors of a pixel's samples to produce its final color, storing the samples in a simple array.
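A sketch of this resolve step, under the assumption that the sample buffer stores `rate × rate` RGB samples per pixel in row-major order (the names and layout are illustrative, not the project's actual buffers):

```cpp
#include <vector>

struct Color { float r, g, b; };

// Average each pixel's rate*rate samples into one framebuffer color.
void resolve(const std::vector<Color>& sample_buffer,
             std::vector<Color>& framebuffer,
             int width, int height, int rate) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      Color sum{0, 0, 0};
      for (int sy = 0; sy < rate; ++sy) {
        for (int sx = 0; sx < rate; ++sx) {
          // Sample buffer laid out as (width*rate) x (height*rate).
          const Color& c =
              sample_buffer[(y * rate + sy) * (width * rate) + (x * rate + sx)];
          sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
      }
      float n = (float)(rate * rate);
      framebuffer[y * width + x] = {sum.r / n, sum.g / n, sum.b / n};
    }
  }
}
```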
No attempt
An affine transformation expresses translation, rotation, and scaling as a single matrix multiplication, so a chain of operations can be pre-composed into one matrix and applied to every vertex at far lower cost.
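For instance, with homogeneous 2D coordinates (x, y, 1) the three operations become 3×3 matrices that compose by multiplication. A sketch (`Mat3` here is a plain row-major array, not the project's matrix class):

```cpp
#include <array>
#include <cmath>

// Row-major 3x3 matrix acting on homogeneous 2D points (x, y, 1).
using Mat3 = std::array<std::array<float, 3>, 3>;

Mat3 translate(float dx, float dy) {
  return {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
}

Mat3 scale(float sx, float sy) {
  return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
}

Mat3 rotate(float radians) {
  float c = std::cos(radians), s = std::sin(radians);
  return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

// Pre-compose transforms once, so each vertex needs a single
// matrix-vector product instead of three separate operations.
Mat3 multiply(const Mat3& a, const Mat3& b) {
  Mat3 r{};
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k)
        r[i][j] += a[i][k] * b[k][j];
  return r;
}
```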
As we can see in the triangle above, each vertex has a different color, and we want to interpolate between them smoothly. We use barycentric coordinates, which indicate how close a given point is to each vertex. After computing the three coefficients alpha, beta, and gamma, we assign the current pixel the weighted sum of the vertex colors.
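A sketch of the interpolation, reusing the hypothetical `Vec2` and `Color` types from the earlier sketches. Alpha and beta come from the standard edge-function ratios, and gamma follows because the three weights sum to one:

```cpp
// Barycentric weights of P with respect to triangle (p0, p1, p2),
// then the weighted sum of the vertex colors.
Color interpolate_color(const Vec2& p0, const Vec2& p1, const Vec2& p2,
                        const Color& c0, const Color& c1, const Color& c2,
                        const Vec2& p) {
  float denom = (p1.y - p2.y) * (p0.x - p2.x) + (p2.x - p1.x) * (p0.y - p2.y);
  float alpha = ((p1.y - p2.y) * (p.x - p2.x) + (p2.x - p1.x) * (p.y - p2.y)) / denom;
  float beta  = ((p2.y - p0.y) * (p.x - p2.x) + (p0.x - p2.x) * (p.y - p2.y)) / denom;
  float gamma = 1.0f - alpha - beta;  // weights sum to one
  return {alpha * c0.r + beta * c1.r + gamma * c2.r,
          alpha * c0.g + beta * c1.g + gamma * c2.g,
          alpha * c0.b + beta * c1.b + gamma * c2.b};
}
```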
The texture on a surface is perceived differently depending on where the surface sits on screen, so redrawing the texture by hand every time we move an object is impractical. Instead we work with two spaces, screen space and texture space, and map the texture onto the surface on screen. To compute the sampling location, we use barycentric coordinates and scale them into texture coordinates to pick the color.
The nearest sampling method takes the closest texel: the scaled uv coordinate is fractional, and we round its x and y components to the nearest integer texel. Bilinear sampling, on the other hand, uses all four surrounding texels and takes their weighted average.
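A sketch of both sampling modes, reusing the hypothetical `Color` type from above. The `Texture` struct and its layout are assumptions for illustration, with uv given in [0, 1]²:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Texture {
  int width, height;
  std::vector<Color> texels;  // row-major
  Color texel(int x, int y) const {
    // Clamp so reads at the borders stay in bounds.
    x = std::clamp(x, 0, width - 1);
    y = std::clamp(y, 0, height - 1);
    return texels[y * width + x];
  }
};

// Nearest: round to the closest texel center.
Color sample_nearest(const Texture& tex, float u, float v) {
  int x = (int)std::lround(u * tex.width - 0.5f);
  int y = (int)std::lround(v * tex.height - 0.5f);
  return tex.texel(x, y);
}

// Bilinear: weighted average of the four surrounding texels.
Color sample_bilinear(const Texture& tex, float u, float v) {
  float x = u * tex.width - 0.5f, y = v * tex.height - 0.5f;
  int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
  float tx = x - x0, ty = y - y0;  // fractional weights
  auto lerp = [](const Color& a, const Color& b, float t) {
    return Color{a.r + t * (b.r - a.r), a.g + t * (b.g - a.g),
                 a.b + t * (b.b - a.b)};
  };
  Color top = lerp(tex.texel(x0, y0),     tex.texel(x0 + 1, y0),     tx);
  Color bot = lerp(tex.texel(x0, y0 + 1), tex.texel(x0 + 1, y0 + 1), tx);
  return lerp(top, bot, ty);
}
```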
Nearest sampling at one sample per pixel looks heavily pixelated in the magnified image. Bilinear sampling smooths this out even at one sample per pixel. Bilinear sampling at 16 samples per pixel is very smooth and looks natural, since the textured surface covers a relatively small screen area.
As can be seen in the picture above, sampling a high-resolution image causes aliasing when the texture is far from the viewer. If we instead downsample the texture to a low resolution, close-up views become blurred. Level sampling resolves this trade-off by using images of different dimensions: for example, we use the level-zero image at close distance and higher-level (smaller) images for far objects.
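A sketch of picking the level from the screen-space uv derivatives discussed below (the derivative values are assumed to be precomputed and already scaled to texel units; the function name is my own):

```cpp
#include <algorithm>
#include <cmath>

// Choose a mipmap level from how fast (u, v) changes per screen pixel.
// (du_dx, dv_dx) and (du_dy, dv_dy) are the texture-space offsets of
// one-pixel steps in screen x and y.
float mipmap_level(float du_dx, float dv_dx, float du_dy, float dv_dy,
                   int max_level) {
  float lx = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
  float ly = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
  float L = std::max(lx, ly);              // largest texture-space footprint
  float d = std::log2(std::max(L, 1.0f));  // level 0 when footprint <= 1 texel
  return std::clamp(d, 0.0f, (float)max_level);
}
```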
Collecting more samples per pixel smooths the result; however, supersampling multiplies runtime and memory by the sample count (16× for 4×4 samples per pixel). When we collect pixels from texture space we use nearest or bilinear sampling: bilinear is more CPU-intensive since it reads the four texels around the sampling point, but it is smoother than nearest. Level sampling prevents the blurring and aliasing caused by the xy→uv conversion; however, storing mipmaps takes extra memory, roughly a third more (about 133% of the original size in total). It is also CPU-intensive, because we calculate (du/dx, dv/dx) and (du/dy, dv/dy) to retrieve the optimal mipmap level.
Below we compare the four combinations of level and pixel sampling: `L_ZERO` with `P_NEAREST`, `L_ZERO` with `P_LINEAR`, `L_NEAREST` with `P_NEAREST`, as well as `L_NEAREST` with `P_LINEAR`.
![]() L_ZERO and P_NEAREST
![]() L_ZERO and P_LINEAR
![]() L_NEAREST and P_NEAREST
![]() L_NEAREST and P_LINEAR
As expected, applying both pixel and level sampling smooths the pixels. One noticeable thing, however, is that level sampling alone distorts the highlight here. I think this happens because the surface is stretched at the bottom left, which makes for an excellent (if funny-looking) demonstration of level sampling. Nearest-level with linear pixel sampling shows the best quality.
No attempt