Cache effects on benchmark
Fragment of a discussion from Talk:K-NN algorithm benchmark
It would make testing slow, but the overhead is what makes the benchmark meaningful. Remove the overhead and you also remove the cache thrashing.
Running a test bed much like we run challenges would work. Instead of making every battle identical, run multiple random battles and measure the average run time.
What worries me is that many different trees would have to be tested in exactly the same way. Unlike scores, times differ between computers. Perhaps we could run them all in parallel in the same robot, then see how much time each takes?
Skilgannon (talk)
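The parallel-timing idea above could be sketched roughly as follows. This is only an illustration under assumed names: `KnnSearch` is a hypothetical common interface, and the naive `LinearScan` stands in for the real kd-tree implementations being compared. Each implementation answers the same random query stream, interleaved query by query, so cache and JIT conditions are as similar as possible and only the relative times matter:

```java
import java.util.Random;

// Hypothetical harness: time several k-NN implementations on the same
// interleaved query stream so relative times are comparable.
public class KnnTimingSketch {
    interface KnnSearch {
        double nearest(double[] q); // distance to the nearest stored point
    }

    // Naive linear scan as a stand-in; a kd-tree would plug in the same way.
    static class LinearScan implements KnnSearch {
        final double[][] points;
        LinearScan(double[][] points) { this.points = points; }
        public double nearest(double[] q) {
            double best = Double.MAX_VALUE;
            for (double[] p : points) {
                double d = 0;
                for (int i = 0; i < q.length; i++) {
                    double diff = p[i] - q[i];
                    d += diff * diff;
                }
                best = Math.min(best, d);
            }
            return Math.sqrt(best);
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int dims = 8, nPoints = 2000, nQueries = 500;
        double[][] points = new double[nPoints][dims];
        for (double[] p : points)
            for (int i = 0; i < dims; i++) p[i] = rnd.nextDouble();

        // All candidates share the same data, as they would inside one robot.
        KnnSearch[] trees = { new LinearScan(points), new LinearScan(points) };
        long[] nanos = new long[trees.length];
        double checksum = 0;

        // Interleave: every implementation sees each query under
        // roughly the same cache conditions.
        for (int q = 0; q < nQueries; q++) {
            double[] query = new double[dims];
            for (int i = 0; i < dims; i++) query[i] = rnd.nextDouble();
            for (int t = 0; t < trees.length; t++) {
                long start = System.nanoTime();
                checksum += trees[t].nearest(query);
                nanos[t] += System.nanoTime() - start;
            }
        }
        for (int t = 0; t < trees.length; t++)
            System.out.println("impl " + t + ": " + nanos[t] / nQueries + " ns/query");
        System.out.println("checksum " + checksum); // keeps the JIT from eliding the work
    }
}
```

Since absolute times still vary between machines, only the ratio between implementations measured in the same run would be reported.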