This project implements and compares two methods for computing ambient occlusion, a shading effect that darkens nooks and crannies, since ambient light has a harder time reaching them.
I started by writing a raytracer capable of producing shaded spheres and planes:
As the raytracer progresses, it populates a g-buffer that stores the surface point location and normal for each pixel in the image. From this buffer it's straightforward to compute screen-space ambient occlusion by comparing each pixel's z-depth against those of randomly sampled neighbors in the g-buffer (shown below as a grayscale map).
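The depth-comparison step can be sketched roughly as follows. This is not the project's actual code; the sampling radius, bias, and random offsets are illustrative assumptions, and real SSAO implementations typically also weight samples by range and normal:

```python
import numpy as np

def ssao(depth, n_samples=16, radius=4, bias=1e-3, rng=None):
    """Screen-space AO sketch: for each pixel, compare its z-depth
    against randomly offset neighbors; a neighbor closer to the
    camera (smaller depth) counts as an occluder."""
    rng = rng or np.random.default_rng(0)
    h, w = depth.shape
    occlusion = np.zeros_like(depth)
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_samples):
        dy, dx = rng.integers(-radius, radius + 1, size=2)
        ny = np.clip(ys + dy, 0, h - 1)
        nx = np.clip(xs + dx, 0, w - 1)
        occlusion += (depth[ny, nx] < depth - bias).astype(float)
    # 1.0 = fully open, 0.0 = fully occluded
    return 1.0 - occlusion / n_samples
```

A completely flat depth map yields an AO value of 1.0 everywhere, since no neighbor sits in front of any pixel.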
Generating raytraced ambient occlusion also relies on the g-buffer; in this case the point location and normal are used to randomly cast rays in a hemisphere from the point on the surface (grayscale below). I didn't have time to try stratified sampling when casting rays, but from what I've seen it leads to much less grain with fewer rays.
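The hemisphere casting step might look something like the sketch below. The `occluded(origin, direction)` callback is a hypothetical stand-in for the scene's ray-hit test, and uniform hemisphere sampling by sign-flipping a random unit vector is just one simple choice:

```python
import numpy as np

def sample_hemisphere(normal, rng):
    """Uniformly sample a direction in the hemisphere around `normal`
    by drawing a random unit vector and flipping it if it points
    into the surface."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, normal) > 0 else -v

def ray_ao(point, normal, occluded, n_rays=64, rng=None):
    """Raytraced AO sketch: cast random hemisphere rays from the
    surface point and count how many hit geometry. `occluded` is an
    assumed hit-test callback, not part of the original project."""
    rng = rng or np.random.default_rng(0)
    hits = sum(occluded(point, sample_hemisphere(normal, rng))
               for _ in range(n_rays))
    return 1.0 - hits / n_rays
```

Averaging many random rays like this is what produces the grain the author mentions; stratified sampling reduces it by spreading the ray directions more evenly over the hemisphere instead of leaving clumps and gaps.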
After both AO images have been created, I composite each with the shaded render, simply multiplying the RGB values by the grayscale AO values (scaled to 0-1). Below are the SSAO and then the raytraced AO composites.
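Under the assumption that both images are stored as float arrays in [0, 1], the multiply composite is a one-liner:

```python
import numpy as np

def composite(rgb, ao):
    """Darken a color image by an AO map: multiply every channel of
    each pixel by that pixel's grayscale AO factor (0 = fully
    occluded, 1 = fully open)."""
    # ao has shape (h, w); broadcast it across the 3 color channels
    return rgb * ao[..., None]
```

Pixels with an AO factor of 1.0 pass through unchanged, while fully occluded pixels go to black, which is what gives the darkened creases in the final renders.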