FastKDE?
Well, assuming you are using binning followed by KDE, this process doesn't seem to be anywhere near a bottleneck in Robocode. Or is it? I mean, binning reduces the number of kernel evaluations from quadratic to linear if you precompute the kernel for each possible bin delta. You still get a quadratic number of additions and multiplications, but that shouldn't be expensive, and even if it is, the biggest improvement I can imagine is using FFT here, which would not have much impact at the bin counts we usually use in Robocode (I've never seen more than 120). And I don't see any advantage in skipping binning once you have more than number_of_bins samples, since the GuessFactor range [-1, +1] is pretty "small".
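For concreteness, here is a minimal sketch of that precomputed-delta idea. It is not any particular bot's code: the class name, the 121-bin count, the Gaussian kernel and the bandwidth parameter are all just illustrative assumptions.

<syntaxhighlight lang="java">
/** A minimal sketch of binned KDE over GuessFactors with a precomputed kernel table. */
public class BinnedKde {
    private static final int BINS = 121;              // illustrative bin count
    private final double[] binCounts = new double[BINS];

    // Precompute the kernel for each possible bin delta: only BINS kernel evaluations.
    private final double[] kernel = new double[BINS];

    public BinnedKde(double bandwidthInBins) {
        for (int delta = 0; delta < BINS; delta++) {
            double u = delta / bandwidthInBins;
            kernel[delta] = Math.exp(-0.5 * u * u);    // Gaussian kernel, as an example
        }
    }

    private static int toBin(double guessFactor) {
        // Map a GuessFactor in [-1, +1] onto [0, BINS - 1].
        return (int) Math.round((guessFactor + 1.0) * (BINS - 1) / 2.0);
    }

    public void addSample(double guessFactor) {
        binCounts[toBin(guessFactor)]++;
    }

    // Smoothing still costs O(BINS^2) additions/multiplications,
    // but no further kernel evaluations are needed.
    public int bestBin() {
        double bestDensity = Double.NEGATIVE_INFINITY;
        int best = 0;
        for (int i = 0; i < BINS; i++) {
            double density = 0;
            for (int j = 0; j < BINS; j++) {
                density += binCounts[j] * kernel[Math.abs(i - j)];
            }
            if (density > bestDensity) {
                bestDensity = density;
                best = i;
            }
        }
        return best;
    }
}
</syntaxhighlight>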
Anyway, it seems like a really nice article to read :P Maybe those optimizations would work well for swarm targeting?
I use max overlap in O(n log n) in Monk's swarm gun as well because of the large amount of data, and I see those subquadratic approaches as a very nice way to free up time for other time-consuming tasks. Anyway, looking closer, fastKDE seems very useful at first glance, given that it could even be used on top of the existing kNN guns just to weight the queried data more carefully. The real question now is whether that's worth understanding and implementing :P That's probably a topic for the future. Maybe you're gonna be the first one to get your hands on it?
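For reference, a minimal sketch of the O(n log n) max-overlap idea mentioned above: sort the interval endpoints and sweep, tracking the point covered by the most intervals. This is just my reading of the general technique, not Monk's actual code, and the names are illustrative.

<syntaxhighlight lang="java">
import java.util.Arrays;

/** Sketch of finding a point of maximum interval overlap in O(n log n). */
public class MaxOverlap {
    /** Returns a point covered by the most intervals [starts[i], ends[i]]. */
    public static double densestPoint(double[] starts, double[] ends) {
        double[] s = starts.clone();
        double[] e = ends.clone();
        Arrays.sort(s);                      // O(n log n)
        Arrays.sort(e);
        int i = 0, j = 0, overlap = 0, bestOverlap = 0;
        double bestPoint = s.length > 0 ? s[0] : 0;
        while (i < s.length) {
            if (s[i] <= e[j]) {
                overlap++;                   // an interval opens here
                if (overlap > bestOverlap) {
                    bestOverlap = overlap;
                    bestPoint = s[i];
                }
                i++;
            } else {
                overlap--;                   // an interval has closed
                j++;
            }
        }
        return bestPoint;
    }
}
</syntaxhighlight>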