Slava Pestov recently posted a performance comparison between Factor and Java, based on a heavily numerical benchmark. I've recently added support to Penumbra for offloading computation to the graphics card (GPGPU), which is ideally suited to this sort of computation, so I decided to give it a try. It's not exactly an apples-to-apples comparison, but the results are interesting nonetheless. Here is the Clojure+Penumbra implementation, in its entirety:
(use 'penumbra.compute 'penumbra.app)

(with-gl
  (defmap generate
    (let [s (sin :index)]
      (normalize
        (float3 s (* 3.0 (cos :index)) (/ (* s s) 2.0)))))
  (defreduce find-max
    (max %1 %2))
  (dotimes [_ 100]
    (time (println (find-max (generate 5e6))))))
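For reference, here is a rough sketch of what the same computation looks like on the CPU in plain Java. This is my own reconstruction of the benchmark's kernel, not the actual Java code from the original comparison, and it assumes the reduce with max is component-wise over the generated vectors:

```java
// CPU sketch of the benchmark: for each index i in [0, 5e6), build the
// vector (sin i, 3*cos i, (sin i)^2 / 2), normalize it, then reduce the
// stream with a component-wise max. Names here are hypothetical.
public class MaxVector {

    // Returns the component-wise max of the n normalized vectors.
    static float[] findMax(int n) {
        float[] m = {Float.NEGATIVE_INFINITY,
                     Float.NEGATIVE_INFINITY,
                     Float.NEGATIVE_INFINITY};
        for (int i = 0; i < n; i++) {
            float s = (float) Math.sin(i);
            float x = s;
            float y = 3f * (float) Math.cos(i);
            float z = s * s / 2f;
            float len = (float) Math.sqrt(x * x + y * y + z * z);
            x /= len; y /= len; z /= len;
            m[0] = Math.max(m[0], x);
            m[1] = Math.max(m[1], y);
            m[2] = Math.max(m[2], z);
        }
        return m;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        float[] m = findMax(5_000_000);
        System.out.printf("%f %f %f (%d ms)%n",
            m[0], m[1], m[2], (System.nanoTime() - t0) / 1_000_000);
    }
}
```

The GPU version wins because each of the five million elements is independent, so the map parallelizes trivially across shader units, while this loop runs the transcendental functions one element at a time on a single core.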
The first thing you'll notice is that this is about 4x shorter than the Factor version, but that's to be expected; I'm sure Factor would be similarly terse if it were allowed to hide all the code behind a library (Java, on the other hand...). The really cool part is that on my MacBook Pro, with a fairly middle-of-the-road GPU, it runs in 350ms. That's 4x faster than the optimized Factor code, and more than 10x faster than Java. The GPGPU support in Penumbra is a work in progress, but it's looking promising. Up until now, Clojure has only been able to asymptotically approach the performance of Java (and, in the process, its verbosity). For this benchmark, at least, it's ahead on both counts by an order of magnitude.
I'll be posting a less trivial example, and a more thorough explanation, in the near future.
So this is really interesting: I installed Snow Leopard, and my performance improved by a factor of ten. This now runs 100x faster than Java. How unexpectedly awesome.