Thread:Talk:K-NN algorithm benchmark/Cache effects on benchmark/reply (4)


Latest revision as of 13:03, 17 July 2013

I wrote a quick benchmark and threw it into my main method. Some tweaks and improvements (moving the distance calculations into their own methods, using a binary search to maintain the results array) have given me big improvements at lower tree sizes:

Config:
No JIT Warmup
Tested on random data.
Training and testing points shared across iterations.
Searches interleaved.
Num points:     20000
Num searches:   200
Dimensions:     12
Num Neighbours: 40

Accuracy: 100%
Iteration:      1/3
This tree add avg:  3092 ns
Reds tree add avg:  2216 ns
This tree knn avg:  809965 ns
Reds tree knn avg:  1380366 ns
This tree knn max:  10844097 ns
Reds tree knn max:  11005183 ns

Accuracy: 100%
Iteration:      2/3
This tree add avg:  1259 ns
Reds tree add avg:  846 ns
This tree knn avg:  643037 ns
Reds tree knn avg:  1119268 ns
This tree knn max:  979013 ns
Reds tree knn max:  1787566 ns

Accuracy: 100%
Iteration:      3/3
This tree add avg:  1146 ns
Reds tree add avg:  800 ns
This tree knn avg:  641163 ns
Reds tree knn avg:  1099657 ns
This tree knn max:  1318587 ns
Reds tree knn max:  1782212 ns

Note: I hacked the RedGen2 tree to skip the NaN check in its distance functions, so that both trees are on an equal footing.
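For reference, here is a minimal sketch of the kind of NaN guard being removed. This is not the actual RedGen2 code; the class and method names are illustrative, and it just contrasts a per-dimension NaN check against the stripped version so both trees do the same work per distance call:

```java
public class Distance {
    // Variant with a per-dimension NaN guard (the check that was hacked out).
    static double sqDistWithNanCheck(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            if (!Double.isNaN(diff)) { // skip dimensions involving NaN
                d += diff * diff;
            }
        }
        return d;
    }

    // Stripped variant: plain squared Euclidean distance, one branch fewer
    // per dimension in the innermost loop of every search.
    static double sqDist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d += diff * diff;
        }
        return d;
    }

    public static void main(String[] args) {
        double[] a = {0.0, 3.0}, b = {4.0, 0.0};
        System.out.println(sqDist(a, b)); // prints 25.0
    }
}
```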
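The "binary search for the results array" tweak mentioned above can be sketched roughly like this. This is an assumption about the approach, not the actual tree's code: the current k best distances sit in a sorted array, and `Arrays.binarySearch` finds each candidate's insertion point instead of a linear scan. All names are illustrative:

```java
import java.util.Arrays;

public class KnnResults {
    private final double[] dist; // sorted ascending; worst neighbour last
    private int size = 0;

    public KnnResults(int k) {
        dist = new double[k];
    }

    public void offer(double d) {
        if (size == dist.length && d >= dist[size - 1]) {
            return; // no better than the current worst neighbour
        }
        // binarySearch returns -(insertionPoint) - 1 when d is absent.
        int idx = Arrays.binarySearch(dist, 0, size, d);
        if (idx < 0) idx = -idx - 1;
        // Shift worse entries right, dropping the last one if full.
        int end = Math.min(size, dist.length - 1);
        System.arraycopy(dist, idx, dist, idx + 1, end - idx);
        dist[idx] = d;
        if (size < dist.length) size++;
    }

    // Pruning bound for the search: distance of the kth-best so far.
    public double worst() {
        return size == dist.length ? dist[size - 1] : Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        KnnResults r = new KnnResults(3);
        for (double d : new double[] {5.0, 1.0, 4.0, 2.0, 3.0}) {
            r.offer(d);
        }
        System.out.println(r.worst()); // prints 3.0
    }
}
```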
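And a rough sketch of the harness shape described in the config block (shared training points, interleaved searches, per-call nanoTime averages). The `Knn` interface and `LinearScan` stand-in are assumptions for illustration; a real run would plug in the two kd-trees being compared:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class InterleavedBench {
    // Minimal stand-in for a k-NN structure; names are illustrative.
    interface Knn { void add(double[] p); double[] nearest(double[] q); }

    static class LinearScan implements Knn {
        private final List<double[]> pts = new ArrayList<>();
        public void add(double[] p) { pts.add(p); }
        public double[] nearest(double[] q) {
            double best = Double.POSITIVE_INFINITY;
            double[] bp = null;
            for (double[] p : pts) {
                double d = 0;
                for (int i = 0; i < q.length; i++) {
                    double diff = p[i] - q[i];
                    d += diff * diff;
                }
                if (d < best) { best = d; bp = p; }
            }
            return bp;
        }
    }

    public static void main(String[] args) {
        int numPoints = 2000, numSearches = 50, dims = 12;
        Random rnd = new Random(42);
        Knn a = new LinearScan(), b = new LinearScan();
        // Training points shared: both structures index identical data.
        for (int i = 0; i < numPoints; i++) {
            double[] p = new double[dims];
            for (int j = 0; j < dims; j++) p[j] = rnd.nextDouble();
            a.add(p);
            b.add(p);
        }
        long tA = 0, tB = 0;
        // Interleaved searches: each query hits both structures back to
        // back, so neither gets a systematic cache advantage.
        for (int i = 0; i < numSearches; i++) {
            double[] q = new double[dims];
            for (int j = 0; j < dims; j++) q[j] = rnd.nextDouble();
            long t0 = System.nanoTime();
            a.nearest(q);
            tA += System.nanoTime() - t0;
            long t1 = System.nanoTime();
            b.nearest(q);
            tB += System.nanoTime() - t1;
        }
        System.out.println("A knn avg: " + tA / numSearches + " ns");
        System.out.println("B knn avg: " + tB / numSearches + " ns");
    }
}
```

With no JIT warmup, the first iteration's numbers include compilation overhead, which is consistent with the large iteration-1 max times shown above.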