site: https://cal-cs184-student.github.io/hw-webpages-sp24-ashmchiu/hw2/
Starting off by performing de Casteljau’s algorithm, we calculated and generated points on Bezier curves and surfaces, which allows us to render smooth curves and surfaces. Then, we implemented area-weighted vertex normals for Phong shading, letting us smoothly shade our models. Finally, to allow loop subdivision to upsample a mesh, we implemented edge flipping and splitting.
In the process, we also worked on boundary edge splitting, which was interesting in showing us that we could render meshes built off of triangles and, potentially, quadrilaterals.
We found it interesting how this homework built off the last one, with both involving upsampling, whether of textures during rasterization in the first homework or of meshes being smoothed here. What we learned throughout the process was the complexity it takes to move from a 2-dimensional to a 3-dimensional space in rendering. Namely, we had to utilize a new class, the `Halfedge` class, understanding how halfedges relate to vertices, faces, and edges.
The de Casteljau algorithm takes in a set of control points and recursively finds intermediate points until its stopping condition (when 1 point remains); when we hit this case, that one point lies on the Bezier curve generated from the original control points.
In depth, de Casteljau takes in `n` initial control points and a parameter `t` between 0 and 1 (sweeping `t` ultimately gives you all the points that lie on the Bezier curve). In one step, the algorithm returns `n - 1` intermediate points, and recursively applying it gets us from `n` points down to 1 in `n - 1` steps.
Here, in `BezierCurve::evaluateStep`, we have implemented the method to get from `n` points to `n - 1` intermediary points, effectively executing one step of de Casteljau’s. At each step, given the points `p_1` to `p_k`, each intermediate point `p_i'` is the linear interpolation `(1 - t) * p_i + t * p_{i + 1}`. Therefore, to generate the Bezier curve, we call `evaluateStep` recursively, `n - 1` times, to get our final point that lies on the Bezier curve.
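As a small illustrative sketch (not the assignment’s actual code, which works with the project’s `Vector2D` type — the `Vec2` struct and function names here are our own stand-ins), one step and the full recursion could look like:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 2D point standing in for the project's Vector2D (an assumption here).
struct Vec2 { double x, y; };

// One de Casteljau step: n control points -> n - 1 lerped intermediate points.
std::vector<Vec2> evaluateStep(const std::vector<Vec2>& pts, double t) {
    std::vector<Vec2> next;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        next.push_back({(1 - t) * pts[i].x + t * pts[i + 1].x,
                        (1 - t) * pts[i].y + t * pts[i + 1].y});
    }
    return next;
}

// Apply the step until one point remains: that point lies on the Bezier curve.
Vec2 evaluate(std::vector<Vec2> pts, double t) {
    while (pts.size() > 1) pts = evaluateStep(pts, t);
    return pts[0];
}
```

For example, the quadratic curve with control points (0, 0), (1, 2), (2, 0) evaluates to (1, 1) at `t = 0.5`.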
Below, we show screenshots of each step of the evaluation from the original control points down to the final evaluated point.
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Here’s the completed Bezier curve constructed from running de Casteljau’s algorithm visually above.
![]() |
We can also drag and move the original control points shown above to produce a slightly different Bezier curve. Here, we show the new curve
![]() |
and here we show how we can modify the parameter `t`, which reveals other points that lie along the green Bezier curve.
![]() t |
![]() t |
To extend de Casteljau’s algorithm to Bezier surfaces, conceptually, instead of utilizing a vector of points, we analyze a matrix. Namely, given an `n x n` matrix of control points, we first perform the 1D de Casteljau algorithm on each of the rows to get one point per row (evaluated at `u`), and then perform the algorithm and interpolation again on each of those points (in the `v` direction) to get the final point. This results in one point that lies on the Bezier surface at the given `u` and `v`.
To implement this, we modified three functions: `BezierPatch::evaluateStep`, `BezierPatch::evaluate1D`, and `BezierPatch::evaluate`.
`BezierPatch::evaluateStep`
An extension of our previous `BezierCurve::evaluateStep`; the code is essentially the same except we now have `Vector3D`s instead of `Vector2D`s.
`BezierPatch::evaluate1D`
Calling `BezierPatch::evaluateStep` `n - 1` times, this function fully evaluates de Casteljau’s algorithm for one set of points. It takes in `n` points and outputs only 1 (in comparison to our two `evaluateStep` functions, which output `n - 1` points).
`BezierPatch::evaluate`
Our final method in the sequence combines the previous methods: it evaluates de Casteljau’s algorithm on each of the `n` rows, stores each resulting point in a vector, and then calls `evaluate1D` on that vector. Importantly, given that we have parameters `u` and `v` that resemble our parameter `t` from Task 1, when looping `evaluate1D` over all the rows we pass in `u`, so the point returned for each row sits at the same place within its row. Then, when calling `evaluate1D` to combine all the points returned to us, we pass in `v` to get the correct column location. This results in a point on the Bezier surface. If we iterated over all `u` and `v` values, we would map out every point on the Bezier surface using de Casteljau’s.
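A sketch of how the three methods might compose (again using a minimal `Vec3` stand-in for `Vector3D`; the types and names here are assumptions, not the project’s actual classes):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 3D point standing in for the project's Vector3D (an assumption here).
struct Vec3 { double x, y, z; };

// One de Casteljau step in 3D: n points -> n - 1 interpolated points.
std::vector<Vec3> evaluateStep(const std::vector<Vec3>& pts, double t) {
    std::vector<Vec3> next;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        next.push_back({(1 - t) * pts[i].x + t * pts[i + 1].x,
                        (1 - t) * pts[i].y + t * pts[i + 1].y,
                        (1 - t) * pts[i].z + t * pts[i + 1].z});
    }
    return next;
}

// Fully collapse one row of control points to a single point at parameter t.
Vec3 evaluate1D(std::vector<Vec3> pts, double t) {
    while (pts.size() > 1) pts = evaluateStep(pts, t);
    return pts[0];
}

// Collapse each row at u, then collapse the column of row-results at v.
Vec3 evaluate(const std::vector<std::vector<Vec3>>& controlPoints,
              double u, double v) {
    std::vector<Vec3> column;
    for (const auto& row : controlPoints) column.push_back(evaluate1D(row, u));
    return evaluate1D(column, v);
}
```

On a 2x2 bilinear patch this reduces to bilinear interpolation, which makes for an easy sanity check.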
In conjunction, each of the three methods abstracts away the one before it: evaluating in one dimension depends on evaluating one particular step, while fully evaluating de Casteljau abstracts away evaluating a set of points in one dimension. Below is a screenshot of bez/teapot.bez, rendered following our implementation of de Casteljau’s algorithm for Bezier surfaces.
![]() |
To implement area-weighted vertex normals, we first notice that we are working within the `Vertex` class (since the method is `Vertex::normal`), which means we can start from this vertex’s halfedge. Then, we loop through all the faces surrounding this vertex (stopping when we reach the beginning again) and grab the other two vertices of each triangle (`curr->next()->vertex()` and `curr->next()->next()->vertex()`). Marking their positions, we form the two edge vectors from the current `Vertex` instance to those two positions and then take the cross product of the two edge vectors.
Namely, because the cross product of two edges of a triangle is an area-weighted normal vector of the triangle (its length is twice the triangle’s area), we accumulate a sum of these cross products to normalize later. We iterate by moving to the next of the twin of the current halfedge, until we reach the starting halfedge again.
Doing this traversal guarantees that we have traversed all the faces that are incident to this vertex (since we are moving to the direct adjacent face through the twin()
traversal), until we reach back to the start.
Now, since we’ve retrieved a sum of all the normal vectors of each of the triangles, we normalize this summed vector to return. We note that this resulting normalized vector is the area-weighted normal at the given vertex, as desired.
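The accumulation can be sketched as follows. For simplicity, this toy version takes the one-ring neighbor positions as a flat list rather than walking halfedges, so the signature and names are our own assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 3D vector with just the operations the normal computation needs.
struct Vec3 { double x, y, z; };
Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Sum the cross products of each incident triangle's two edge vectors
// (each cross product's length is twice that triangle's area, so larger
// faces contribute more), then normalize the accumulated sum.
Vec3 areaWeightedNormal(const Vec3& center, const std::vector<Vec3>& ring) {
    Vec3 sum{0, 0, 0};
    for (size_t i = 0; i < ring.size(); ++i) {
        Vec3 e1 = sub(ring[i], center);
        Vec3 e2 = sub(ring[(i + 1) % ring.size()], center);
        Vec3 n = cross(e1, e2);
        sum = {sum.x + n.x, sum.y + n.y, sum.z + n.z};
    }
    double len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return {sum.x / len, sum.y / len, sum.z / len};
}
```

For a vertex at the origin with a planar one-ring in the xy-plane, the result is the expected (0, 0, 1).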
We can see in the image below that the shading becomes a lot smoother, and we’ve implemented Phong shading!
![]() |
![]() |
Utilizing this diagram, we mapped out the pointer changes for edge flips.
![]() |
Our main implementation strategy was first checking whether the edge to flip was a boundary, and if so, returning early. Then, we stored all the current elements as diagrammed above (with the same naming conventions) and called `setNeighbors` for all the halfedges, ensuring that their `next`, `twin`, `vertex`, `edge`, and `face` matched the diagram we’ve shown above. Following that, we set each vertex’s halfedge arbitrarily (so long as it was an outgoing halfedge). Then, we set each edge’s halfedge and each face’s halfedge.
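To illustrate the bookkeeping (this is not the project’s `HalfedgeMesh` API; the index-based structs, the diamond layout, and every name below are assumptions made for this toy example), here is a minimal sketch of flipping the shared edge of two triangles:

```cpp
#include <cassert>
#include <vector>

// Index-based halfedge record; -1 twin means "no interior twin" in this toy mesh.
struct Halfedge { int next, twin, vertex, edge, face; };

struct Mesh {
    std::vector<Halfedge> h;
    std::vector<int> vertexHalfedge, edgeHalfedge, faceHalfedge;
};

// Two triangles (a,b,c)=(0,1,2) and (c,b,d)=(2,1,3) sharing edge b-c.
// h0: b->c, h1: c->a, h2: a->b (face 0); h3: c->b, h4: b->d, h5: d->c (face 1).
Mesh buildDiamond() {
    Mesh m;
    m.h = {{1, 3, 1, 0, 0}, {2, -1, 2, 1, 0}, {0, -1, 0, 2, 0},
           {4, 0, 2, 0, 1}, {5, -1, 1, 3, 1}, {3, -1, 3, 4, 1}};
    m.vertexHalfedge = {2, 0, 1, 5};
    m.edgeHalfedge = {0, 1, 2, 4, 5};
    m.faceHalfedge = {0, 3};
    return m;
}

// Flip the shared edge from b-c to a-d, mirroring the writeup's recipe:
// reassign next/twin/vertex/edge/face for every halfedge, then repoint the
// vertex/edge/face -> halfedge references (any valid outgoing choice works).
void flipSharedEdge(Mesh& m) {
    m.h[0] = {5, 3, 0, 0, 0};   // h0 now runs a->d
    m.h[5] = {1, -1, 3, 4, 0};  // h5: d->c, moved into face 0
    m.h[1] = {0, -1, 2, 1, 0};  // h1: c->a closes face 0
    m.h[3] = {2, 0, 3, 0, 1};   // h3 now runs d->a
    m.h[2] = {4, -1, 0, 2, 1};  // h2: a->b, moved into face 1
    m.h[4] = {3, -1, 1, 3, 1};  // h4: b->d closes face 1
    m.vertexHalfedge = {0, 4, 1, 3};
    m.edgeHalfedge = {0, 1, 2, 4, 5};
    m.faceHalfedge = {0, 3};
}
```

Walking `next` three times from either halfedge of the flipped edge should return to the start, which is exactly the kind of check our diagram let us do by hand.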
The main debugging trick we came up with was drawing out this diagram to ensure that we knew where everything was following the flip. This was also a great check to make sure we weren’t deleting or adding any edges, halfedges, faces, or vertices. We were lucky to not have a tumultuous debugging journey.
One bug that we ran into was that certain mesh elements wouldn’t change (for instance, the outer edges and halfedges didn’t move and the vertices stayed put) because we didn’t originally update them. However, once we updated these values, the flip worked as expected.
Below are images of dae/teapot.dae before and after edge flips, with default flat and Phong shading.
![]() |
![]() |
![]() |
![]() |
Utilizing this diagram, we mapped out the pointer changes for edge splits.
![]() |
Our main implementation strategy was first checking whether the edge to split was a boundary; if so, we handled it as part of the boundary-splitting extra credit (so check that out!). Then, we stored all the current elements as diagrammed above (with the same naming conventions). We created all the necessary new elements (checking boundary conditions); specifically, in the case with no boundary: 6 new halfedges, 3 new edges, 1 new vertex, and 2 new faces.
Our new vertex was set as the midpoint of the edge we were splitting (averaging the positions of the edge’s two endpoint vertices).
Then, we called `setNeighbors` for all the halfedges, ensuring that their `next`, `twin`, `vertex`, `edge`, and `face` matched the diagram we’ve shown above. Following that, we set each vertex’s halfedge arbitrarily (so long as it was an outgoing halfedge). Then, we set each edge’s halfedge and each face’s halfedge.
The main debugging trick we came up with was drawing out this diagram to ensure that we knew where everything was following the split. This was also a great check to make sure we weren’t deleting or adding any unintended edges, halfedges, faces, or vertices. Following Task 4, we were a lot more careful in our diagram construction, and this meant we didn’t actually have any difficult debugging.
Below are images of dae/teapot.dae before and after edge flips and splits, as captioned, with default flat and Phong shading.
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Below, we built on the final image and added more edge splits to the already split and flipped dae/teapot.dae to make an interesting pattern. Enjoy!
![]() |
![]() |
Utilizing this diagram, we mapped out the pointer changes for edge splits on boundaries (if there is a left boundary).
![]() |
Handling boundary splits was difficult in understanding how to navigate the halfedges that lined the central two edges. However, once we realized that the halfedges didn’t need to form triangles, our problems were solved: we could have, for instance, halfedge `h_9`’s `next` point to halfedge `h_1` if we were not creating the `e_7` edge as diagrammed above.
We struggled in the beginning, first creating only one halfedge on the side that wasn’t split. However, this caused problems in identifying the `twin`s of halfedges, so we switched to this approach of allowing a quadrilateral to appear in the mesh.
For implementation, we checked whether we would need to create a left boundary or a right boundary, and based on that, we changed the vertices, edges, halfedges, and faces accordingly. For instance, if we had a left boundary, we wouldn’t want to construct `e_5`, which meant that we wouldn’t need `h_7` or `h_8`, nor `f_2` (since we could use `f_0` for the entire left side). Then, we’d `setNeighbors` for the halfedges and set the halfedges of the other mesh elements according to the diagram.
Doing this, we would need to create 4 new halfedges, 2 new edges, 1 new vertex, and 1 new face.
Here is a diagram of dae/beetle.dae before any boundary edge splitting. Notice that for the highlighted white edge, `isBoundary()` is `1`, which means this is a boundary edge. This makes sense since we’ve hit the edge of a surface (the window!).
![]() |
Here are images of dae/beetle.dae after splitting the edges around the nearest window in a pattern with both default flat and Phong shading. Beeep beep!
![]() |
![]() |
For our final task, we needed to implement loop subdivision. To do so, we had five main steps: (1) compute new positions for all existing vertices, (2) compute positions for new vertices, (3) split all edges in the mesh, (4) flip edges between an old and a new vertex, and (5) copy over the vertex positions. The algorithm we utilized was 4-1 subdivision, breaking each triangle down into four smaller ones.
Iterating through all vertices of the mesh, we found the halfedges associated with each vertex, calculated the sum of the positions of its adjacent vertices, and, depending on the degree, calculated a new position and stored it in the vertex’s `newPosition` attribute. Namely, the new position would be

`(1 - n * u) * original_position + u * original_neighbor_position_sum`

where

- `n` represented the number of edges incident to the vertex (which we calculated by traversing halfedges and twins),
- `u` was `3/16` if `n = 3`, else `3/(8n)`,
- `original_position` was the original position of the vertex as named, and
- `original_neighbor_position_sum` was the sum of all the original positions of neighboring vertices (calculated via traversing halfedges and twins).

Next, we needed to compute positions for the new vertices we were creating. Namely, for each edge (shared by a pair of triangles), we would be creating one new vertex. To do so, we traversed through the edges, and for each edge, we found the four surrounding edges and the surrounding four vertices, and calculated
`3/8 * (A + B) + 1/8 * (C + D)`

where `A`, `B`, `C`, and `D` were the positions of the four vertices; specifically, `A` and `B` were the two vertices connected by the current edge. This value was stored in the current edge’s `newPosition` attribute to create the new vertex. This is particularly useful for the next step because when we split the edges, we can then use this `newPosition` for the newly created vertex.
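The two weighting rules can be checked numerically with a small sketch (plain structs standing in for the project’s types; the function names here are our own, not the assignment’s):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(double s, const Vec3& a) { return {s * a.x, s * a.y, s * a.z}; }

// Updated position for an existing vertex of degree n:
// (1 - n*u) * original_position + u * original_neighbor_position_sum,
// with u = 3/16 when n == 3 and u = 3/(8n) otherwise.
Vec3 oldVertexPosition(const Vec3& original, const std::vector<Vec3>& neighbors) {
    size_t n = neighbors.size();
    double u = (n == 3) ? 3.0 / 16.0 : 3.0 / (8.0 * n);
    Vec3 sum{0, 0, 0};
    for (const Vec3& p : neighbors) sum = add(sum, p);
    return add(scale(1.0 - n * u, original), scale(u, sum));
}

// Position for the vertex created on edge A-B, where C and D are the
// vertices opposite that edge in its two adjacent triangles.
Vec3 newVertexPosition(const Vec3& A, const Vec3& B, const Vec3& C, const Vec3& D) {
    return add(scale(3.0 / 8.0, add(A, B)), scale(1.0 / 8.0, add(C, D)));
}
```

For instance, with `A = (0,0,0)`, `B = (8,0,0)`, and opposite vertices at `(0,8,0)` and `(0,-8,0)`, the new vertex lands at `(3, 0, 0)`: the C and D contributions cancel in y, and `3/8 * 8 = 3` in x.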
Now, we iterate through all the edges and split them, copying the `newPosition` stored on the edge in the previous section into the position of the new vertex created by the split.
Something difficult in debugging here was ensuring that we weren’t infinite-looping due to the creation of new edges by the splits, so we would store the next edge we needed to traverse in advance, keeping a temporary variable for the current edge and performing the split on that temporary pointer.
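The iterator-saving trick can be sketched with a `std::list` stand-in (the real code walks the mesh’s edge list; the container, values, and function name here are assumptions):

```cpp
#include <cassert>
#include <list>

// Walk only the edges that existed before this pass: advance the iterator to
// the next edge *before* "splitting" the current one, since splits append new
// edges to the container and those must not be revisited in the same pass.
int splitOriginalEdges(std::list<int>& edges) {
    int originalCount = (int)edges.size();
    int splits = 0;
    auto e = edges.begin();
    for (int i = 0; i < originalCount; ++i) {
        auto cur = e;
        ++e;                          // save our place before mutating the list
        edges.push_back(*cur + 100);  // stand-in for the edge a split creates
        ++splits;
    }
    return splits;
}
```

Bounding the loop by the original edge count (or, as in our actual implementation, checking `isNew`) is what prevents the pass from chasing its own newly created edges forever.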
Next, we iterated through all the edges again, and flipped each one if it connected an old, existing vertex on one side and a new vertex on the other.
Finally, for all vertices, we copied their `newPosition` into their `position` attribute, since for both new and existing vertices we had written their updated positions into the `newPosition` attribute.
An error that we ran into was weird subdivisions after the first iteration; we realized that for both vertices and edges, we weren’t resetting the `isNew` attribute correctly. To fix that, we modified step one so all existing vertices were marked as not new, step two so existing edges were marked as not new, and step three so new vertices were marked as new. Furthermore, we had to update our Task 5 implementation to ensure that, of the new edges created during a split, the ones newly created across the triangles were marked new while the ones that composed the original edge in the middle were marked not new.
After loop subdivision, we see that meshes with sharp corners (like dae/cube.dae) smooth out a great deal because we are increasing the number of triangles, which allows smoother curves to develop. However, this means that sharp edges are lost. For instance, let’s take a look at dae/cube.dae: if we angle it one way, we can see it smooth out with more loop subdivision.
![]() |
![]() |
![]() |
![]() |
We can reduce this effect by pre-splitting some edges. Below, we’ve pre-split some edges for the closest vertex (to preserve that vertex’s sharpness), and we can see that the sharpness there remains while the rest of the image smooths out.
![]() |
![]() |
![]() |
![]() |
From a front-on angle, we can really see the impact of the pre-splitting.
![]() |
The following screenshots show several iterations of loop subdivision on dae/cube.dae. We notice that the cube becomes slightly asymmetric after repeated subdivisions. We believe this occurs because the original mesh itself was not symmetric: the triangle orientations were not shared across all the faces, which caused subdivisions to pull more in one direction than the other.
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
In the following screenshots, we’ve pre-processed dae/cube.dae, splitting the diagonal edge on each of the six faces. We believe that this created a more symmetrical cube after iterations of loop subdivision because each face was now symmetrical with four triangles and this meant that each vertex had the same degree (so there was no unequal pull in directions when splitting and calculating new vertex positions).
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
We can really see the difference in symmetry when we compare the dae/cube.dae images after loop subdivision at step 5 with no pre-processing (on the left) and with pre-processing (on the right). The one on the right is much more symmetrical, while the one on the left has a weird bump in the back right and bumpiness at the bottom as well.
![]() |
![]() |
Ashley Chiu, Angel Aldaco