FastKDE?

I've heard that some recent work has made improvements over traditional KDE (kernel density estimation).

And this project, fastKDE, contains an implementation.

Any thoughts on that?

Xor (talk) 17:23, 30 September 2017

Well, assuming that you are using binning followed by KDE, this process doesn't seem to be anywhere near a bottleneck in Robocode. Or is it? I mean, binning reduces the number of kernel evaluations from quadratic to linear if you precompute the result for each possible bin delta. You still get a quadratic number of additions and multiplications, but that shouldn't be expensive, and even if it is, the biggest improvement I can imagine is using an FFT here, which would not have a big impact for the number of bins we usually use in Robocode (I've never seen more than 120). And I don't see any advantage in not using binning if you have more than number_of_bins samples, since the GuessFactor range [-1, +1] is pretty "small".
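
To make that concrete, here's a rough sketch of the binning + precomputed-kernel scheme I have in mind (the bin count, bandwidth and kernel shape are made up):

class BinnedKde {
    static final int BINS = 61;          // made-up bin count
    static final double BANDWIDTH = 3.0; // made-up bandwidth, in bins

    // Precompute the kernel once per possible bin delta: O(BINS)
    // kernel evaluations total, instead of one per pair of bins.
    static final double[] KERNEL = new double[BINS];
    static {
        for (int delta = 0; delta < BINS; delta++) {
            double u = delta / BANDWIDTH;
            KERNEL[delta] = Math.exp(-0.5 * u * u); // Gaussian shape
        }
    }

    // counts[j] = number of samples that fell into bin j.
    // Smoothing is still O(BINS^2) additions and multiplications.
    static double[] smooth(int[] counts) {
        double[] density = new double[BINS];
        for (int i = 0; i < BINS; i++)
            for (int j = 0; j < BINS; j++)
                density[i] += counts[j] * KERNEL[Math.abs(i - j)];
        return density;
    }
}

An FFT would turn that double loop into an O(BINS log BINS) convolution, but as said, at ~120 bins it hardly matters.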

Anyway, it seems like a really nice article to read :P Maybe those optimizations could work well for swarm targeting?

Rsalesc (talk) 00:20, 1 October 2017

Well, afaik DrussGT is using 151 bins in its movement. And my old experimental anti-aliased VCS gun uses more than 1500 bins (beyond which adding more bins no longer increases performance).

In targeting, DrussGT and ScalarBot (inspired by DrussGT) use max overlap to reconstruct firing angles, not kernel density estimation, and it's O(n log n).
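
For reference, one way to get that O(n log n) bound is a single sweep over sorted interval endpoints. A sketch (not ScalarBot's actual code):

import java.util.Arrays;

class MaxOverlap {
    // Each sample suggests an interval [lo[i], hi[i]] of firing angles
    // that would have hit. Sort the 2n endpoints and sweep once,
    // tracking how many intervals are open; the best angle is where
    // the open count peaks. The sort dominates: O(n log n).
    static double bestAngle(double[] lo, double[] hi) {
        int n = lo.length;
        double[][] events = new double[2 * n][];
        for (int i = 0; i < n; i++) {
            events[2 * i] = new double[] {lo[i], +1};     // interval opens
            events[2 * i + 1] = new double[] {hi[i], -1}; // interval closes
        }
        // Sort by position, opens before closes at ties, so touching
        // intervals count as overlapping.
        Arrays.sort(events, (a, b) -> a[0] != b[0]
                ? Double.compare(a[0], b[0])
                : Double.compare(b[1], a[1]));
        int open = 0, best = 0;
        double bestAngle = events[0][0];
        for (double[] e : events) {
            open += (int) e[1];
            if (open > best) {
                best = open;
                bestAngle = e[0];
            }
        }
        return bestAngle;
    }
}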

Note that by KDE I don't mean only reconstructing firing angles, but also kNN. Effectively we do KDE on the entire data set, over every dimension, and then calculate the conditional density function (which is what reconstructs the firing angles).
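
Spelled out, that conditioning step looks roughly like this (a sketch; the kernels, bandwidths and dimension layout are all made up):

class ConditionalKde {
    // Joint KDE over (situation, GuessFactor), conditioned on the
    // current situation: each sample's kernel on the GuessFactor axis
    // is weighted by how close its situation is to the current one.
    static double conditionalDensity(double[][] situations, double[] gfs,
                                     double[] now, double gf,
                                     double hSituation, double hGf) {
        double num = 0, den = 0;
        for (int i = 0; i < gfs.length; i++) {
            double w = gauss(distance(situations[i], now) / hSituation);
            num += w * gauss((gfs[i] - gf) / hGf);
            den += w;
        }
        return den == 0 ? 0 : num / den; // estimated f(gf | situation)
    }

    static double gauss(double u) { return Math.exp(-0.5 * u * u); }

    static double distance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }
}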

Anyway, fastKDE is not about accelerating the existing computation, but about accelerating the process of getting at the real probability density function (which includes computing the bandwidth and shape function effectively), with far fewer samples. You know, in Robocode the number of samples is really restricted, and I think this method is exactly what modern bots need.

And my thought is that the use of kNN in Robocode is just an acceleration of KDE: instead of evaluating the kernel at every data point, we only use the nearest ones.

However, so far we are using an artificial bandwidth & shape function in this process. And I think fastKDE could bring a proper computation of the bandwidth & shape function to Robocode.
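
That kNN-as-truncated-KDE view, as a sketch (k, the bandwidth and the Gaussian shape below are hand-picked, which is exactly the artificial part fastKDE would replace):

import java.util.Arrays;
import java.util.Comparator;

class KnnAsKde {
    // KDE truncated to the k nearest samples: instead of summing the
    // kernel over the whole data set, sum it only over the neighbors
    // closest to the current situation.
    static double density(double[][] data, double[] query, int k, double bandwidth) {
        double[][] nearest = Arrays.stream(data)
                .sorted(Comparator.comparingDouble((double[] p) -> distance(p, query)))
                .limit(k)
                .toArray(double[][]::new);
        double sum = 0;
        for (double[] p : nearest) {
            double u = distance(p, query) / bandwidth;
            sum += Math.exp(-0.5 * u * u); // hand-picked shape function
        }
        return sum; // unnormalized; relative density is enough for aiming
    }

    static double distance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }
}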

Xor (talk) 03:45, 1 October 2017

I use max overlap in O(n log n) in Monk's swarm gun as well, because of the huge amount of data, and I see those subquadratic approaches as a very nice way to leave more time for other time-consuming tasks. Anyway, looking closer, fastKDE seems very useful at first glance, given that it could even be used on top of existing kNN guns just to weight the queried data more carefully. The real question now is whether it's worth understanding and implementing :P That's probably a topic for the future. Maybe you're gonna be the first one to get your hands on it?

Rsalesc (talk) 06:01, 1 October 2017

Yes, that's probably a topic for the future. I'm putting it here to remind myself to try it at some point, and I'll always be glad if someone else gets there first. Anyway, some experiments on that are on the way ;p

Xor (talk) 08:32, 1 October 2017