Wednesday, May 31, 2006

Memory Allocator Benchmarks - Take Two (Kinda)

So, I finally got tired of procrastinating on finding a program capable of plotting and performing regression analysis on my 33 million or so data points. Ideally, I'd do a three-dimensional plot and regression analysis of the data (operation order vs. operation size vs. operation time), but in the meantime I did a simple sum of the time for all operations of a given type.
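
For illustration, here's a minimal C++ sketch of that kind of per-operation-type summation. This isn't the actual analysis code; the Sample struct, the field names, and the values are all made up.

#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Sample
{
    std::string op;       // operation type, e.g. "alloc", "free", "realloc" (hypothetical labels)
    std::uint64_t cycles; // measured cost of this one call, in CPU cycles
};

int main()
{
    // Stand-in for the ~33 million recorded data points; values are invented.
    std::vector<Sample> samples = {
        { "alloc",   412 }, { "free",    198 },
        { "realloc", 903 }, { "alloc",   377 },
    };

    // Total cycles spent in each operation type.
    std::map<std::string, std::uint64_t> totals;
    for (const Sample &s : samples)
        totals[s.op] += s.cycles;

    for (const auto &entry : totals)
        std::cout << entry.first << ": " << entry.second << " cycles\n";

    return 0;
}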



Once again the data speak for themselves, and again they surprised me. Hoard seems like a fairly well-rounded choice, though the Windows low-fragmentation heap absolutely owned the allocation benchmark. SMem is, indeed, almost twice as fast as HeapFree, but gets its ass handed to it in the reallocation benchmark.

I'm not sure why SMemReAlloc does so badly. It's possible that it has a larger proportion of outlying data points (points in the hundreds of thousands or millions of cycles) than the others, which could severely skew the results, since this is a sum. I may try doing the same analysis with some data-point cutoff and see how that affects the results, though I'm not sure what a fair cutoff would be.
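
To make the cutoff idea concrete, here's a small C++ sketch that sums a set of made-up realloc timings both with and without an arbitrary threshold. The timings and the threshold are purely illustrative; picking a fair threshold is exactly the open question.

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical per-call realloc timings in cycles, with one huge outlier.
    std::vector<std::uint64_t> reallocCycles = { 650, 720, 810, 2400000, 590 };

    // Arbitrary cutoff in cycles; not a recommendation.
    const std::uint64_t cutoff = 100000;

    std::uint64_t rawSum = 0, clippedSum = 0;
    for (std::uint64_t c : reallocCycles)
    {
        rawSum += c;
        if (c <= cutoff)
            clippedSum += c; // points above the cutoff are excluded here
    }

    std::cout << "raw sum:     " << rawSum << " cycles\n";
    std::cout << "clipped sum: " << clippedSum << " cycles\n";

    return 0;
}

Even a single multi-million-cycle point dominates the raw sum, which is why a cutoff (or a more robust statistic) could change the picture substantially.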

Oh, and for reference, from what I've heard, the Windows low-fragmentation heap is very similar to the algorithm I was going to implement for LibQ (the lock-free one).