Writing a raytracer – Part 1 – Introduction

I will keep this post short and sweet, since most people looking to implement a raytracer are likely already familiar with the concept.

In its most basic sense, the raytracing algorithm is extremely straightforward. For every pixel in an image, we project a ray into the world; when that ray intersects an object, we collect the color of that object. The resulting color is assigned to the pixel, and when you have finished tracing, you are left with a beautiful image.

In other words, imagine you have a small piece of graph paper (or possibly a post-it) with a 10×10 grid drawn on the surface. Now hold that in front of yourself. If you were to draw a straight line from your eye through the center of each cell, that line would eventually intersect some object immediately beyond the paper. Simply color each cell the same as that intersection point in the world. With that, you’ve just raytraced a 10-by-10 pixel representation of the world.
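The graph-paper idea above maps almost directly to code. Here is a minimal Python sketch of that loop, assuming an eye at the origin looking down the negative z-axis, a 10×10 grid of cells on an image plane, and a single red sphere; the sphere's position and radius are illustrative choices, not from the original text.

```python
import math

WIDTH, HEIGHT = 10, 10
SPHERE_CENTER = (0.0, 0.0, -5.0)   # hypothetical red sphere beyond the "paper"
SPHERE_RADIUS = 1.5
RED, BACKGROUND = (255, 0, 0), (0, 0, 0)

def hit_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction hits the sphere for some t > 0."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Standard ray/sphere quadratic: |o + t*d - c|^2 = r^2
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t > 0.0    # only count intersections in front of the eye

def render():
    image = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Map the cell center onto an image plane at z = -1,
            # with coordinates in roughly [-1, 1] on each axis.
            px = (x + 0.5) / WIDTH * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / HEIGHT * 2.0
            direction = (px, py, -1.0)
            # One ray per cell: from the eye through the cell center.
            hit = hit_sphere((0.0, 0.0, 0.0), direction,
                             SPHERE_CENTER, SPHERE_RADIUS)
            row.append(RED if hit else BACKGROUND)
        image.append(row)
    return image
```

Calling `render()` yields a 10×10 grid of colors: cells whose rays strike the sphere come back red, everything else the background color, exactly like coloring in the cells of the post-it.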


This is what a red sphere would look like on our sheet of paper.  Note:  I am using Albert Szostkiewicz’s super cool VOP based Houdini raytracer for a number of my diagrams.


Now, a 10×10 pixel image is likely too low resolution to get an accurate representation of the world. In addition, I have skipped over a number of additional steps we need to implement in our raytracer; steps that make for more realistic renders (shadows, reflections, and other fancy techniques that more accurately model light transport). However, the basics are now outlined, so we can move forward!