CS 184 Project 3-2 Writeup
Tianchen Liu 3034681144, Riley Peterlinz 3036754368
Link:
https://cal-cs184-student.github.io/project-webpages-sp23-CardiacMangoes/proj3-2/index.html
Overview
Parts completed: 1 & 4
Part 1: We implemented a glossy BSDF, refractive BSDF, and then combined them to make a glass BSDF. We worked from the first principles of each material and didn’t run into any major issues.
Part 4: We adjusted where our “camera” samples pixels in order to simulate focus using the principles of a thin lens. We followed the diagram in the spec and reasoned geometrically. The only issue was we forgot to shift coordinates initially.
Part 1
Implementation
Reflection
We want to reflect a ray $\vec{r} = \begin{bmatrix} x & y & z \end{bmatrix}^T$ about the normal $\hat{z} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T$.
We can do this with the reflection matrix $2\hat{z}\hat{z}^T - I$, which makes
$$\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -x \\ -y \\ z \end{bmatrix}$$
So we can just apply this to our wo to get wi:
void BSDF::reflect(const Vector3D wo, Vector3D* wi) {
*wi = Vector3D(wo.x * -1, wo.y * -1, wo.z);
}
This allows us to reflect wo and then divide the reflectance by abs_cos_theta(*wi) to cancel out the cosine term in at_least_one_bounce_radiance.
Vector3D MirrorBSDF::sample_f(const Vector3D wo, Vector3D* wi, double* pdf) {
*pdf = 1.;
reflect(wo, wi);
return reflectance / abs_cos_theta(*wi);
}
Refraction
Refraction is trickier since we have to deal with edge cases.
We first need to decide if a ray is entering or exiting a material. This is important since we are obeying Snell’s law:
$$\sin\theta' = \eta\sin\theta$$
where $\theta'$ is the angle of incidence and $\theta$ is the angle of refraction. We can tell whether we are entering or exiting an object by whether the z-component of wo is respectively positive or negative.
If entering, Snell’s law becomes
$$\mathrm{ior}\cdot\sin\theta' = \sin\theta$$
so $\eta = \frac{1}{\mathrm{ior}}$.
If exiting, Snell’s law is
$$\sin\theta' = \mathrm{ior}\cdot\sin\theta$$
so $\eta = \mathrm{ior}$.
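In code, picking $\eta$ reduces to a single conditional on wo.z (a minimal sketch, assuming the index of refraction is stored in the BSDF’s ior member):
double eta = (wo.z > 0) ? 1. / ior : ior;  // entering the material vs. exiting it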
We also have to consider total internal reflection: this happens when the angle of incidence is so shallow that the light won’t permeate the material and instead reflects internally.
This can be derived from Snell’s law to find the condition that if
$$1 - \eta^2(1 - \cos^2\theta) < 0$$
then there is total internal reflection.
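As a quick check of where this condition comes from, squaring Snell’s law gives
$$\cos^2\theta' = 1 - \sin^2\theta' = 1 - \eta^2\sin^2\theta = 1 - \eta^2(1 - \cos^2\theta),$$
so when the right-hand side is negative there is no real refracted angle and the ray must reflect internally.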
We implement all this logic in BSDF::refract, along with the following calculations for the relation between wo and wi using spherical coordinates:
// Discriminant from Snell's law; negative means total internal reflection.
float total_reflection = (1 - eta * eta * (1 - (wo.z * wo.z)));
// The tangential components of wi are scaled by eta and flipped...
wi->x = -1. * eta * wo.x;
wi->y = -1. * eta * wo.y;
// ...and wi's z-component is placed on the opposite side of the surface from wo.
if (wo.z > 0) {
    wi->z = -1. * sqrt(total_reflection);
} else {
    wi->z = 1. * sqrt(total_reflection);
}
The case structure for wo.z guarantees the direction of wo is opposite wi w.r.t. the material’s surface.
When implementing RefractionBSDF::sample_f() we return an empty Vector3D if there is total internal reflection. When refraction happens we return transmittance / abs_cos_theta(*wi) / eta^2 with *pdf = 1., since radiance concentrates when a ray enters a material with a high index of refraction. The abs_cos_theta(*wi) term is present to cancel out the cosine term in at_least_one_bounce_radiance.
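Putting these pieces together, a minimal sketch of RefractionBSDF::sample_f() could look like the following (this assumes BSDF::refract fills in *wi and returns false on total internal reflection, which is an assumption about how TIR is signalled):

Vector3D RefractionBSDF::sample_f(const Vector3D wo, Vector3D* wi, double* pdf) {
    // Sketch: refract() is assumed to return false on total internal reflection.
    if (!refract(wo, wi, ior)) {
        return Vector3D();  // no transmitted radiance
    }
    *pdf = 1.;
    double eta = (wo.z > 0) ? 1. / ior : ior;  // entering vs. exiting
    return transmittance / abs_cos_theta(*wi) / (eta * eta);
}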
Glass
Glass material uses both reflection and refraction. If we have total internal reflection we just send the next ray to reflect and return an empty Vector3D.
Otherwise, we want to compute the proportion of reflection to refraction, which depends on the angle at which the camera views the object and is known as the Fresnel factor.
The factor is given by Schlick’s approximation:
$$R_0 = \left(\frac{\eta - 1}{\eta + 1}\right)^2, \qquad R(\theta) = R_0 + (1 - R_0)(1 - \cos\theta)^5$$
We use coin_flip(R) to decide whether the ray will be reflected or refracted. The reflection and refraction branches of GlassBSDF::sample_f() are similar to the mirror and refraction BSDFs, except that we multiply the returned value by *pdf, which is R for reflection and 1 - R for refraction.
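A sketch of how these pieces could fit together in GlassBSDF::sample_f(), written directly from the description above (hypothetical code, reusing the refract() convention assumed earlier):

Vector3D GlassBSDF::sample_f(const Vector3D wo, Vector3D* wi, double* pdf) {
    if (!refract(wo, wi, ior)) {
        // Total internal reflection: bounce the next ray by reflection,
        // returning an empty Vector3D as described above.
        reflect(wo, wi);
        *pdf = 1.;
        return Vector3D();
    }
    // Schlick's approximation for the Fresnel factor R.
    double R0 = (ior - 1.) * (ior - 1.) / ((ior + 1.) * (ior + 1.));
    double R = R0 + (1. - R0) * pow(1. - abs_cos_theta(wo), 5);
    if (coin_flip(R)) {
        reflect(wo, wi);  // overwrite *wi with the mirrored direction
        *pdf = R;
        return R * reflectance / abs_cos_theta(*wi);
    } else {
        // *wi already holds the refracted direction from refract() above.
        *pdf = 1. - R;
        double eta = (wo.z > 0) ? 1. / ior : ior;
        return (1. - R) * transmittance / abs_cos_theta(*wi) / (eta * eta);
    }
}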
Bounce Stack
Bounce 0
We just get the source light as expected
Bounce 1
There are not enough bounces yet to reflect anything besides the source light, since at this level the balls do not see the other objects as illuminated. Also notice the right glass ball has a “grainier” reflection than the left mirror ball; this is because the Fresnel coefficient is not 100% reflection, so even at a sample rate of 512 we need more samples to smooth it out.
Bounce 2
There we go! At two bounces we can see mirrored surfaces going from light -> reflective object -> reflected object. The glass material is still mostly black since the light ray at bounce 2 is internal and has not yet escaped.
Bounce 3
Now we can see refraction! Notice the reflection on the left ball still has the right ball as black. This is because the reflection sees the glass ball as we do in the Bounce 2 case.
There is also a large caustic pattern that shows up at the bottom of the ball; this is due to light refracting through the ball onto the floor and concentrating there.
Bounce 4
We have two new elements at bounce four: the glass ball is now accurately reflected in the mirror ball. This is because we now have enough bounces not only for light to escape the glass ball’s refraction, but also to travel from the mirror ball to the camera.
We also have a small caustic pattern showing up on the blue wall. This is also from the glass ball. I believe this smaller caustic pattern comes from the mirrored ball’s reflection of the light sources: light -> mirror ball -> glass ball -> glass ball internal -> blue wall.
Bounce 5
We don’t notice any new elements; this is just brighter than the last.
Bounce 100
This one is similar to the previous case except there’s more noise on the glass ball near the top.
Part 4
In this part, we introduce depth of field to our virtual camera using a thin-lens model that can change its aperture and focus distance. The difference between a pinhole and a thin lens is that a pinhole has a one-to-one correspondence between points in the world and locations on the sensor, so everything is in focus. A thin lens is more realistic since there is a circle of locations on the lens that can map to the same world point. We get blurriness when multiple world points map to the same location on the camera’s sensor.
Implementation
Basically, we want to get the blue vector to model our camera’s eye.
Our implementation of Ray Camera::generate_ray_for_thin_lens(double x, double y, double rndR, double rndTheta) const starts by computing the ray direction the camera would normally generate:
x -= 0.5;
y -= 0.5;
x *= 2 * tan(0.5 * radians(hFov));
y *= 2 * tan(0.5 * radians(vFov));
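In other words, these four lines map the normalized image coordinate $(x, y) \in [0, 1]^2$ onto the camera-space image plane at $z = -1$ (our notation):
$$x' = (x - 0.5)\cdot 2\tan\!\left(\tfrac{hFov}{2}\right), \qquad y' = (y - 0.5)\cdot 2\tan\!\left(\tfrac{vFov}{2}\right)$$
so the ray direction a pinhole camera would use is $(x', y', -1)$ in camera space.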
Next we want to uniformly sample the disk representing the thin lens:
Vector3D pLens(lensRadius * sqrt(rndR) * cos(rndTheta), lensRadius * sqrt(rndR) * sin(rndTheta), 0);
This is the blue point on the lens, $(s_x, s_y, 0)$.
To get the point in focus we take the direction $(x, y, -1)$ from above and scale it by focalDistance, which lands on the plane of focus at $z = -\text{focalDistance}$ since the chief ray through the center of the lens is undeflected:
Vector3D pFocus(x, y , -1);
pFocus *= focalDistance;
We now have the point in focus!
Finally, we want the direction of the blue vector; this is simply the point in focus minus our sampled lens point.
Vector3D pDir = pFocus - pLens;
pDir.normalize();
With this we generate a new ray after transforming the points in camera space to world space and setting the bounds.
Ray newRay(c2w * (pLens) + pos, c2w * pDir);
newRay.max_t = fClip;
newRay.min_t = nClip;
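Assembled end-to-end, the routine looks roughly like this (a sketch of the steps above, not a verbatim copy of our final code):

Ray Camera::generate_ray_for_thin_lens(double x, double y, double rndR, double rndTheta) const {
    // Map the normalized image coordinate onto the camera-space image plane.
    x -= 0.5;
    y -= 0.5;
    x *= 2 * tan(0.5 * radians(hFov));
    y *= 2 * tan(0.5 * radians(vFov));

    // Uniformly sample a point on the lens disk.
    Vector3D pLens(lensRadius * sqrt(rndR) * cos(rndTheta),
                   lensRadius * sqrt(rndR) * sin(rndTheta), 0);

    // Point on the plane of focus hit by the undeflected chief ray.
    Vector3D pFocus(x, y, -1);
    pFocus *= focalDistance;

    // Ray from the sampled lens point through the point in focus, in world space.
    Vector3D pDir = pFocus - pLens;
    pDir.normalize();

    Ray newRay(c2w * pLens + pos, c2w * pDir);
    newRay.min_t = nClip;
    newRay.max_t = fClip;
    return newRay;
}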
Focus Stack
Each image here is rendered with 512 samples and a 0.23 aperture radius
The focus is pulled here from back to front with a very shallow DOF. The size of a circle of confusion is proportional to the distance of a point from the plane of focus.
Aperture Stack
Each image here is rendered with 512 samples at the same focal distance
aperture radius = 0.23
aperture radius = 0.15
aperture radius = 0.10
aperture radius = 0.05
As the aperture radius becomes smaller we approach a pinhole camera, which is why we see the DOF become larger. The circle of confusion on the dragon’s teeth also becomes much smaller with lower aperture.
Collaboration
Both partners contributed equally to this project. It was shorter than normal, so there weren’t any major bugs besides forgetting to change coordinates or inverting signs. The most interesting thing we learned is how DOF is modelled and how to create it ourselves using some basic linear algebra.