![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1inx4gjk2OaaVu9xA_-I_BkXDv4i3AsxNIk1ytil3DjB8mUxebN4jJYHlQYk_tflx23gvraLUioyjFiLt4fb7ZBeRvyFSd3jBVnv0OqTK6IB5LoHyxCwxQTM_mV4wOKH5chdY2wvDv2-C/s1600/raytrace+(2).png)
This project implements and compares two methods for computing ambient occlusion, a shading effect that darkens nooks and crannies where ambient light has a harder time reaching.
I started by writing a raytracer capable of producing shaded spheres and planes:
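The post doesn't include the raytracer's source, but the core of producing shaded spheres is a ray-sphere intersection test plus simple diffuse shading. Here is a minimal sketch of that idea in Python with NumPy; the function names (`intersect_sphere`, `shade`) and the single point light are my own illustration, not the project's actual code.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Assumes `direction` is normalized, so the quadratic's leading
    coefficient is 1.
    """
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def shade(point, normal, light_pos, albedo):
    """Simple Lambertian shading from a single point light (illustrative)."""
    to_light = light_pos - point
    to_light = to_light / np.linalg.norm(to_light)
    return albedo * max(np.dot(normal, to_light), 0.0)
```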
As the raytracer progresses, it populates a g-buffer, which stores the surface point location and normal for each pixel in the image. From this buffer it's easy to calculate screen-space ambient occlusion based on z-depth comparisons between each pixel in the g-buffer and its sampled neighbors (seen below as a grayscale map).
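To make the depth-comparison idea concrete, here is a rough sketch of that kind of SSAO pass, assuming the g-buffer's depth channel is available as a 2D array. The sampling radius, bias, and sample count are placeholder values, and the random neighbor pattern is my own simplification rather than the project's exact sampling scheme.

```python
import numpy as np

def ssao_from_depth(depth, num_samples=16, radius=4, bias=1e-3, rng=None):
    """Screen-space AO from a per-pixel z-depth buffer.

    For each pixel, sample random neighbors within `radius` pixels; a
    neighbor whose stored depth is closer to the camera than this pixel's
    (by more than `bias`) counts as an occluder.  AO = 1 - occluders/samples.
    """
    rng = rng or np.random.default_rng(0)
    h, w = depth.shape
    ao = np.ones((h, w))
    for y in range(h):
        for x in range(w):
            occluded = 0
            for _ in range(num_samples):
                dx, dy = rng.integers(-radius, radius + 1, size=2)
                sx = int(np.clip(x + dx, 0, w - 1))
                sy = int(np.clip(y + dy, 0, h - 1))
                if depth[sy, sx] < depth[y, x] - bias:
                    occluded += 1
            ao[y, x] = 1.0 - occluded / num_samples
    return ao
```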
Generating raytraced ambient occlusion also relies on the g-buffer; in this case the point location and normal are used to randomly cast rays in a hemisphere from the point on the surface (grayscale below). I didn't have time to try stratified sampling when casting rays, but from what I've seen it leads to much less grain with fewer rays.
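For reference, a minimal version of hemisphere-cast AO might look like the sketch below: uniformly sample directions in the hemisphere around the normal, trace each ray against the scene, and take the fraction of rays that escape. The `occluded_fn` callback stands in for whatever scene-intersection query the raytracer exposes; it, the ray budget, and the maximum distance are assumptions for illustration.

```python
import numpy as np

def sample_hemisphere(normal, rng):
    """Uniformly sample a direction in the hemisphere around `normal`
    by rejection-sampling the unit sphere and flipping into the hemisphere."""
    while True:
        v = rng.uniform(-1.0, 1.0, size=3)
        n2 = np.dot(v, v)
        if 1e-8 < n2 <= 1.0:
            v = v / np.sqrt(n2)
            return v if np.dot(v, normal) > 0.0 else -v

def raytraced_ao(point, normal, occluded_fn, num_rays=64, max_dist=5.0, rng=None):
    """Fraction of hemisphere rays that escape without hitting geometry.

    `occluded_fn(origin, direction, max_dist)` is a stand-in for the scene's
    ray-intersection query and should return True if any surface is hit.
    """
    rng = rng or np.random.default_rng(0)
    origin = point + 1e-4 * normal   # offset to avoid self-intersection
    hits = 0
    for _ in range(num_rays):
        d = sample_hemisphere(normal, rng)
        if occluded_fn(origin, d, max_dist):
            hits += 1
    return 1.0 - hits / num_rays
```

Stratified sampling, mentioned above, would replace the uniform random directions with samples drawn from a grid of strata over the hemisphere, which tends to reduce grain for the same ray count.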
After both AO images have been created, I composite the two together, simply multiplying the RGB values by the grayscale AO values (scaled from 0 to 1). Below are the SSAO and then raytraced AO composites.
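That compositing step is just a per-pixel multiply; assuming the shaded render is an HxWx3 array and the AO map an HxW array in [0, 1], it amounts to something like:

```python
import numpy as np

def composite_ao(color, ao):
    """Darken an HxWx3 RGB image by an HxW AO map with values in [0, 1]."""
    return color * ao[..., None]
```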