
exploring the mandelbrot set with your gpu

2009

The Mandelbrot set is a good candidate for GPU implementation: it’s simple, uses vector math, and is embarrassingly parallel. Doing so, however, generally adds a significant amount of complexity to a program. The programmer is forced to twist all computations to fit within the graphics pipeline, and a significant amount of ceremony surrounds even the simplest operation.

The GPGPU framework in Penumbra is designed to hide away all these rough edges, with the goal of making it easier to use the GPU for these sorts of tasks than it would be to use plain Clojure. It’s not quite there yet, but in this particular example it proves to be extremely useful.

The full source code for this program weighs in at just over 100 lines of code, and can be found here. Here’s what it looks like, slowed down to 10 iterations per frame:

Let’s look at the different GPU kernels we define for this program, and what they do.

(defmap initialize-fractal
  (float3
    (mix
      upper-left
      lower-right
      (/ :coord :dim)) 0))

This sets up the initial conditions for a particular view. As we iterate, we need to keep track of both our position and how many iterations we’ve gone through, and we can store both pieces of data together in a single 3-vector, with the iteration count starting at zero. The :coord keyword represents the position of the pixel currently being calculated, and :dim the maximum value of :coord. We calculate the initial position with a componentwise linear interpolation between upper-left and lower-right, which are parameters passed in when the kernel is run.
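
For readers who want to see this outside the graphics pipeline, here is a rough plain-Clojure sketch of what the kernel computes for a single pixel; initial-state and its arguments are names I’ve chosen for illustration, not part of Penumbra:

;; CPU sketch of what initialize-fractal computes for one pixel.
;; upper-left and lower-right are [x y] corners of the viewed region,
;; coord is the pixel's position and dim is the texture size.
(defn initial-state [upper-left lower-right coord dim]
  (let [t   (map / coord dim)                        ;; normalized position, 0..1
        pos (map (fn [a b s] (+ a (* s (- b a))))    ;; mix(a, b, s) per component
                 upper-left lower-right t)]
    (conj (vec pos) 0)))                             ;; [x y iteration-count]

;; (initial-state [-2.0 1.0] [1.0 -1.0] [320 240] [640 480]) => [-0.5 0.0 0]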

(defmap iterate-fractal
  (let [val %
        z (.xy val)
        c (mix upper-left lower-right (/ :coord :dim))
        iterations (.z val)]
    (? (< 4.0 (dot z z))
      val
      (float3
        (+ c
          (float2
            (- (* (.x z) (.x z)) (* (.y z) (.y z)))
            (* 2.0 (.x z) (.y z))))
        (+ 1.0 iterations)))))

This performs a single iteration for every pixel, and must be repeated dozens of times for the Mandelbrot set to resolve. The % symbol represents a pixel from the input data (further data sets would be represented by %2, %3, and so on). We destructure this 3-vector into the 2-vector z and the iteration counter, then check whether z has already escaped the boundary. If it has, we return the value unchanged, since we want to keep track of how many iterations it took to escape; if it hasn’t, we compute z² + c and increment the counter.
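
For comparison, a single step for one pixel looks roughly like this in ordinary Clojure; the function name and argument shapes are mine, not Penumbra’s:

;; CPU sketch of one iterate-fractal step for a single pixel.
;; c is the [cx cy] point in the complex plane, state is [zx zy n].
(defn iterate-step [[cx cy] [zx zy n]]
  (if (< 4.0 (+ (* zx zx) (* zy zy)))
    [zx zy n]                               ;; already escaped: leave it untouched
    [(+ cx (- (* zx zx) (* zy zy)))         ;; real part of z² + c
     (+ cy (* 2.0 zx zy))                   ;; imaginary part of z² + c
     (inc n)]))                             ;; survived one more iteration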

(defmap color-fractal
  (let [val %
        z (.xy val)
        n (.z val)
        escape (/ n (float max-iterations))]
    (? (< 4.0 (dot z z))
      (color3 escape escape (mix 0.2 1.0 escape))
      (color3 0.0 0.0 0.0))))

This transforms the position and iteration count into something we can display onscreen. We destructure the input pixel and check whether it’s still inside the boundary. If it is, we leave it black. If not, we interpolate between dark blue and white based on how many iterations it took to escape, normalized by max-iterations.
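
Strung together, a single pixel’s worth of work looks roughly like this on the CPU; color-state and mandelbrot-pixel are illustrative names that reuse the sketches above, not functions from the actual program:

;; CPU sketch of color-fractal, plus the full escape-time loop for one point.
(defn color-state [max-iterations [zx zy n]]
  (if (< 4.0 (+ (* zx zx) (* zy zy)))
    (let [escape (/ n (double max-iterations))]
      [escape escape (+ 0.2 (* 0.8 escape))])  ;; mix(0.2, 1.0, escape) in the blue channel
    [0.0 0.0 0.0]))                            ;; still inside the set: black

(defn mandelbrot-pixel [upper-left lower-right coord dim max-iterations]
  (let [state (initial-state upper-left lower-right coord dim)
        c     (subvec state 0 2)]
    (color-state max-iterations
                 (nth (iterate (partial iterate-step c) state)
                      max-iterations))))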

This is a handpicked example of how Penumbra’s GPGPU capabilities can be used. Obviously, not every problem is going to lend itself as easily to this sort of implementation, but I’m hopeful that other people will end up being able to use it for real-world applications. If you have any ideas in this vein, let me know.