Cache effects on benchmark

I was thinking that running the benchmark against just a single tree at a time would result in the KD-Tree code and data being cached quite a bit better than is realistic for real-life situations. Perhaps it would make more sense to run all of the trees at the same time, giving each tree the new search/data point one tree after the next. This would simulate the cache thrashing between turns that happens when you are running multiple robots at the same time, because the trees would be competing with each other for cache space.

Thoughts?

The reason I ask is that I designed/wrote a tree to deal with the cache problem. It outperforms Rednaxela Gen2 on large (2mil points, 12 dim) random datasets by ~2X, but ties on smaller (30k point, 12 dim) datasets. I think the fact that for the small dataset the entire thing is in cache might be causing the difference.
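To make the interleaving idea concrete, here is a rough sketch of the kind of harness I mean. KnnTree, addPoint and nearestNeighbours are placeholder names for a common wrapper interface around each tree under test, not any of the real tree APIs, and the per-tree timing is left out:

import java.util.List;

// Placeholder wrapper interface; each real tree implementation would be
// adapted to this so the harness can drive them all identically.
interface KnnTree {
    void addPoint(double[] point, Object value);
    List<Object> nearestNeighbours(double[] query, int k);
}

class InterleavedBenchmark {
    // Feed every tree the same point, then the same query, one tree after the
    // next, so the trees compete for cache instead of each running warm alone.
    static void run(List<KnnTree> trees, double[][] points, double[][] queries, int k) {
        for (double[] p : points) {
            for (KnnTree tree : trees) {
                tree.addPoint(p, p);
            }
        }
        for (double[] q : queries) {
            for (KnnTree tree : trees) {
                tree.nearestNeighbours(q, k); // time this call per tree in the real harness
            }
        }
    }
}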

    Skilgannon (talk)21:03, 16 July 2013

    Maybe using a reference bot would make benchmarking more meaningful?

    Pick one bot, put each tree inside it, one at a time, and run it against a 1v1 test bed.

      MN (talk)01:37, 17 July 2013

      For the best comparison, absolutely. However, it might be difficult to set up the battles so that every one is the same, particularly if the trees are non-deterministic due to things like points being equal. Also, it adds a lot of overhead which would make testing very slow.

        Skilgannon (talk)11:56, 17 July 2013

        It would make testing slow, but the overhead is what will make benchmarks meaningful. Remove the overhead and you also remove cache thrashing.

        Running a test bed much like we run challenges would do. Instead of making every battle the same, run multiple random battles and measure average run time.

          MN (talk)13:06, 17 July 2013
           
           

          Running them all at the same time could make sense, but I would suggest being careful if you do that, because the order that they get run in may matter. Even when running them one at a time in sequence, I recall noticing that the order in which they are run could very slightly impact the apparent performance, I suspect due to caching, JIT, and/or garbage collection characteristics. It's been a while, but IIRC the System.gc() call I have in there between running different trees was to lessen that effect. It may make sense to add some form of randomization to the sequence they're run in.
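          Roughly what I mean by randomizing the order, as a sketch only (the Runnable wrappers around each tree's benchmark run are assumed to exist in the harness; this isn't the actual benchmark code):

          import java.util.ArrayList;
          import java.util.Collections;
          import java.util.List;

          class RandomizedOrderRunner {
              static void runIteration(List<Runnable> treeBenchmarks) {
                  List<Runnable> order = new ArrayList<>(treeBenchmarks);
                  Collections.shuffle(order);  // different tree order each iteration
                  for (Runnable bench : order) {
                      System.gc();             // hint only, to lessen GC carry-over between trees
                      bench.run();
                  }
              }
          }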

          Cache performance is one of those things that's tricky with Robocode, because your robot is also sharing the CPU with another bot, which could be doing who knows what with its memory accesses. For that reason I wouldn't trust optimizations for better caching behavior to necessarily pan out in practice with bots. I may be wrong about that though.

          One could put it in a reference bot, yeah, though the tests would be far slower and less consistent, thus requiring a much greater number of test iterations to get a reliable result.

          Oh, and in any case, nice job with making a tree that much faster with the large datasets :)

            Rednaxela (talk)05:13, 17 July 2013

            I wrote a quick benchmark and threw it in my main method. Some tweaks (putting the distance functions in methods, using a binary search for the results array) have given me big improvements at lower tree sizes:

            Config:
            No JIT Warmup
            Tested on random data.
            Training and testing points shared across iterations.
            Searches interleaved.
            Num points:     20000
            Num searches:   200
            Dimensions:     12
            Num Neighbours: 40
            
            Accuracy: 100%
            Iteration:      1/3
            This tree add avg:  3092 ns
            Reds tree add avg:  2216 ns
            This tree knn avg:  809965 ns
            Reds tree knn avg:  1380366 ns
            This tree knn max:  10844097 ns
            Reds tree knn max:  11005183 ns
            
            Accuracy: 100%
            Iteration:      2/3
            This tree add avg:  1259 ns
            Reds tree add avg:  846 ns
            This tree knn avg:  643037 ns
            Reds tree knn avg:  1119268 ns
            This tree knn max:  979013 ns
            Reds tree knn max:  1787566 ns
            
            Accuracy: 100%
            Iteration:      3/3
            This tree add avg:  1146 ns
            Reds tree add avg:  800 ns
            This tree knn avg:  641163 ns
            Reds tree knn avg:  1099657 ns
            This tree knn max:  1318587 ns
            Reds tree knn max:  1782212 ns
            

            Note: I hacked the RedGen2 tree to not check for NaN in the distance functions so that they are on an equal footing.
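            For reference, the "binary search for the results array" tweak looks roughly like this. It's just an illustration of the technique (a fixed-size sorted array holding the current k best distance/value pairs), not the actual tree code:

            class SortedResults {
                final double[] dists;
                final Object[] values;
                int size = 0;

                SortedResults(int k) {
                    dists = new double[k];
                    values = new Object[k];
                }

                // Offer a candidate neighbour; ignored if it's worse than the current k-th best.
                void offer(double dist, Object value) {
                    if (size == dists.length && dist >= dists[size - 1]) {
                        return;
                    }
                    // Binary search for the insertion index among the current entries.
                    int lo = 0, hi = size;
                    while (lo < hi) {
                        int mid = (lo + hi) >>> 1;
                        if (dists[mid] < dist) lo = mid + 1; else hi = mid;
                    }
                    // Shift worse entries right (dropping the last one if the array is full).
                    int end = Math.min(size, dists.length - 1);
                    System.arraycopy(dists, lo, dists, lo + 1, end - lo);
                    System.arraycopy(values, lo, values, lo + 1, end - lo);
                    dists[lo] = dist;
                    values[lo] = value;
                    if (size < dists.length) size++;
                }

                // Current pruning bound for the tree search.
                double worstDist() {
                    return size == dists.length ? dists[size - 1] : Double.POSITIVE_INFINITY;
                }
            }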

              Skilgannon (talk)12:03, 17 July 2013

              Ahh nice. Out of curiosity, any reason you're comparing against my 2nd gen tree? My 3rd gen tree was a bit faster, at least in the tests I did.

                Rednaxela (talk)14:00, 17 July 2013