Thread history
| Time | User | Activity | Comment |
|---|---|---|---|
| 07:32, 1 October 2017 | Xor (talk \| contribs) | New reply created | (Reply to FastKDE?) |
| 05:01, 1 October 2017 | Rsalesc (talk \| contribs) | New reply created | (Reply to FastKDE?) |
| 02:52, 1 October 2017 | Xor (talk \| contribs) | Comment text edited | |
| 02:49, 1 October 2017 | Xor (talk \| contribs) | Comment text edited | |
| 02:45, 1 October 2017 | Xor (talk \| contribs) | New reply created | (Reply to FastKDE?) |
| 23:20, 30 September 2017 | Rsalesc (talk \| contribs) | New reply created | (Reply to FastKDE?) |
| 16:23, 30 September 2017 | Xor (talk \| contribs) | New thread created | |
I've heard that recent work has made some improvements over traditional KDE (kernel density estimation).
And this project, fastKDE, contains an implementation.
Any thoughts on that?
Well, assuming that you are using binning followed by KDE, this process doesn't seem to be anywhere near a bottleneck in Robocode. Or is it? I mean, binning reduces the number of kernel evaluations from quadratic to linear if you precompute the results for each possible delta. You still get a quadratic number of additions and multiplications, but that shouldn't be expensive, and even if it is, the biggest improvement I can imagine is using an FFT here, which would not have a big impact for the number of bins we usually use in Robocode (I've never seen more than 120). And I don't see any advantage in not using binning once you have more than number_of_bins samples, since the GuessFactor range [-1, +1] is pretty "small".
Anyway, it seems to be a really nice article to read :P Maybe those optimizations could work well for swarm targeting?
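For illustration, the precomputed-delta scheme described above could look something like this (a minimal sketch, not taken from any actual bot; the `BinnedKde` class, the Gaussian kernel, and the bandwidth-in-bins parameter are all assumptions):

```java
// Binned KDE over GuessFactor bins with a precomputed kernel table.
// Since bins are evenly spaced, the kernel value depends only on the
// bin-index delta, so we evaluate the kernel once per possible delta
// (linear work) instead of once per (sample, bin) pair.
public class BinnedKde {
    private final int bins;
    private final double[] kernelByDelta; // kernel value for each |i - j|
    private final double[] counts;        // sample weight accumulated per bin
    private final double[] density;       // smoothed density, rebuilt on demand

    public BinnedKde(int bins, double bandwidthInBins) {
        this.bins = bins;
        this.counts = new double[bins];
        this.density = new double[bins];
        this.kernelByDelta = new double[bins];
        for (int d = 0; d < bins; d++) {
            double x = d / bandwidthInBins;
            kernelByDelta[d] = Math.exp(-0.5 * x * x); // Gaussian kernel (a hand-picked choice)
        }
    }

    // Map a GuessFactor in [-1, +1] to a bin index.
    public int binFor(double guessFactor) {
        int i = (int) Math.round((guessFactor + 1) / 2 * (bins - 1));
        return Math.max(0, Math.min(bins - 1, i));
    }

    public void addSample(double guessFactor, double weight) {
        counts[binFor(guessFactor)] += weight;
    }

    // Quadratic additions/multiplications, but only linear kernel evaluations.
    public int bestBin() {
        int best = 0;
        for (int i = 0; i < bins; i++) {
            double sum = 0;
            for (int j = 0; j < bins; j++) {
                sum += counts[j] * kernelByDelta[Math.abs(i - j)];
            }
            density[i] = sum;
            if (density[i] > density[best]) best = i;
        }
        return best;
    }

    public double guessFactorFor(int bin) {
        return 2.0 * bin / (bins - 1) - 1;
    }
}
```

The inner double loop is the quadratic part mentioned above; an FFT-based convolution would make it O(bins log bins), which only pays off for far more bins than Robocode guns typically use.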
Well, afaik DrussGT uses 151 bins in its movement, iirc. And my old experimental anti-aliased VCS gun uses more than 1500 bins (beyond which increasing the bin count no longer improves performance).
In targeting, DrussGT and ScalarBot (inspired by DrussGT) use max overlap to reconstruct firing angles, not kernel density estimation, and it's O(n log n).
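As an illustration of that O(n log n) idea (a generic endpoint-sweep sketch, not DrussGT's or ScalarBot's actual code; the interval model and class name are assumptions): each data point votes for an angular interval around its firing angle, and a sweep over the sorted endpoints finds a point covered by the most intervals.

```java
import java.util.Arrays;

// Max-overlap firing-angle reconstruction: each recorded data point
// contributes an interval [angle - halfWidth, angle + halfWidth], and we
// want an angle covered by the most intervals. Sorting the 2n endpoints
// and sweeping across them is O(n log n).
public class MaxOverlap {
    public static double bestAngle(double[] centers, double[] halfWidths) {
        int n = centers.length;
        // events[i][0] = position, events[i][1] = +1 (interval opens) or -1 (closes)
        double[][] events = new double[2 * n][];
        for (int i = 0; i < n; i++) {
            events[2 * i]     = new double[] { centers[i] - halfWidths[i], +1 };
            events[2 * i + 1] = new double[] { centers[i] + halfWidths[i], -1 };
        }
        // Sort by position; at equal positions, opens come before closes
        // so touching intervals count as overlapping.
        Arrays.sort(events, (a, b) -> a[0] != b[0]
                ? Double.compare(a[0], b[0])
                : Double.compare(b[1], a[1]));
        int depth = 0, bestDepth = 0;
        double best = n > 0 ? centers[0] : 0;
        for (double[] e : events) {
            depth += (int) e[1];
            if (depth > bestDepth) {
                bestDepth = depth;
                best = e[0]; // left edge of the deepest overlap region
            }
        }
        return best;
    }
}
```

Compared with evaluating a density at every candidate angle, this trades the smooth weighting of KDE for a hard "hit / no hit" overlap count, which is what makes the subquadratic sweep possible.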
Note that by KDE I'm not only referring to reconstructing firing angles, but also to kNN. Actually, we do KDE on the entire data set, over every dimension, then calculate the conditional density function (to reconstruct firing angles).
Anyway, fastKDE is not about accelerating the existing computation; it's about accelerating the process of recovering the real probability density function (which includes computing the bandwidth and shape function effectively) with far fewer samples. You know, in Robocode the sample count is really restricted, and I think this method is exactly what modern bots need.
And my thought is that the use of kNN in Robocode is just an acceleration of KDE: instead of computing the kernel density over every data point, we only use the nearest ones.
However, so far we have been using artificial bandwidth & shape functions in this process, and I think fastKDE could bring data-driven computation of the bandwidth & shape function to Robocode.
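To make that view concrete, here's a rough sketch of kNN as truncated KDE (everything here is a hand-picked assumption: the Gaussian kernel, the Euclidean distance, the fixed bandwidth, the `KnnKde` class name; these fixed choices are exactly what fastKDE would replace with data-driven estimates):

```java
import java.util.Arrays;

// kNN viewed as an acceleration of KDE: instead of summing the kernel
// over every stored data point, we sum it only over the k nearest ones
// in the (normalized) situation space.
public class KnnKde {
    public static double densityAt(double[][] data, double[] query,
                                   int k, double bandwidth) {
        double[] dists = new double[data.length];
        for (int i = 0; i < data.length; i++) {
            double s = 0;
            for (int d = 0; d < query.length; d++) {
                double diff = data[i][d] - query[d];
                s += diff * diff;
            }
            dists[i] = Math.sqrt(s);
        }
        double[] sorted = dists.clone();
        Arrays.sort(sorted); // brute force here; real bots use a kd-tree
        double cutoff = sorted[Math.min(k, sorted.length) - 1];
        double density = 0;
        for (double dist : dists) {
            if (dist <= cutoff) {           // keep only the k nearest points
                double x = dist / bandwidth;
                density += Math.exp(-0.5 * x * x); // fixed, hand-picked kernel
            }
        }
        return density;
    }
}
```

With k equal to the data-set size this is plain KDE; shrinking k drops the far-away points whose kernel contribution is near zero anyway, which is the acceleration.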
I use max overlap in O(n log n) in Monk's swarm gun as well, because of the great amount of data, and I see those subquadratic approaches as a very nice way to free up time for other time-consuming tasks. Anyway, looking closer, fastKDE seems very useful at first glance, given that it could even be used on top of the existing kNN guns just to weight the queried data more carefully. The real question now is whether that's worth understanding and implementing :P That's probably a topic for the future. Maybe you're gonna be the first one to put your hands on that?