Kernel density is important
- Diamond 1.7.47: Math.exp(-0.5 * ux * ux)
- Diamond 1.7.50: Math.pow(2, -0.5 * ux * ux) (.47 vs .50)
- Diamond 1.7.51: Math.pow(2, -Math.abs(ux)) (.50 vs .51)
I know 1.7.51 is far from stable, but it blew away my test bed enough that I'm pretty sure it's a nice jump (knock on wood).
Fun to experiment with!
I just need to figure out where my major performance problems lie, because if I try directly using Math.exp() or Math.pow(), I get hundreds or thousands of skipped turns in a round. I'm pretty heavily reliant on using a fast approximation for Math.exp() right now.. but I don't think I should have to be...
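(For context, a typical fast Math.exp() approximation in Robocode is the Schraudolph-style bit trick; this is a generic sketch, not necessarily what Diamond actually uses:)

    // Generic fast-exp sketch: approximates e^x by writing a scaled, biased value
    // into the high 32 bits of an IEEE-754 double. Only accurate to a few percent,
    // but that's usually fine for relative kernel density weights.
    public static double fastExp(double x) {
        final long tmp = (long) (1512775 * x + 1072632447); // 2^20/ln(2), biased exponent
        return Double.longBitsToDouble(tmp << 32);
    }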
Using the approximator seems reasonable to me. I actually saw that in your version history and played with one a bit in my gun. =) In my main gun, where I do over 10k kernel density calculations per tick, I long ago abandoned Gaussian because it was too slow. But I thought that with an approximation, which I already had lying around from some experiments with an integral surf danger formula, it might work. It was fast enough, but it didn't perform better anyway...
I've found that a formula that smooths across the whole angle range is really important in movement. And in my movement, it's a max of 200 data points * 12 firing angles tested = 2400 kernel density calculations (across both waves). So until now, I'd stuck with Gaussian because it's the only common kernel density formula I'd seen with that property. But I finally started playing with modifications of it, and it was quite an improvement.
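(For anyone counting along, each of those kernel density calculations is essentially a loop like this — a generic sketch, with a made-up bandwidth parameter standing in for however the bot width is actually folded in:)

    // Gaussian kernel density at one candidate firing angle: sum a contribution
    // from every logged data point, scaled by how far away it is.
    double gaussianDensity(double angle, double[] dataAngles, double bandwidth) {
        double density = 0;
        for (double dataAngle : dataAngles) {
            double ux = (angle - dataAngle) / bandwidth;
            density += Math.exp(-0.5 * ux * ux);
        }
        return density;
    }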
Bingo!
A big big problem was that I was calculating all dangers on my waves up-front. My reasoning was to take a one-time calculation hit and then surf using lookups.
Problem was, at the angular resolution I wanted, this involved tens (maybe even hundreds) of thousands of kernel density calculations when creating my wave danger object. It seems like a few thousand kernel density calcs each tick works a lot better for surfing. My skipped turns were probably happening when I detected enemy waves fired on the same turn I was trying to make a targeting decision.
Targeting is still annoying in this sense.. the entire angular range needs to be evaluated on this tick. I like the exponential/Gaussian approach.. but want to investigate if there are less processor intensive kernel functions that work as well (or better?).
Regarding targeting being annoying in terms of evaluating the entire angular range, how are you doing that currently? Are you just calling a kernel density function on a large number of fixed points?
Here are four examples of ways to perhaps calculate kernel density faster in the context of targeting, where you only care about the maximum:
1. If you take the derivative of your kernel density function, you should be able to find the zero-crossings of the slope, and only calculate the kernel density at those points.
2. One could also try approaches like skipping the kernel density calculation for angles which are too far from any data points.
3. Or maybe even use the data points themselves as the angles to run the kernel density calculation for (see the sketch after this list).
4. With certain exceptionally simple kernel density functions (e.g. the rectangle I use in RougeDC/Scarlet's targeting), you can also find the peak extremely fast with specialized algorithms.
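A rough sketch of what I mean by the third one (generic illustration, not my actual Scarlet code):

    // Evaluate the kernel density only at the data points themselves and keep the best.
    // The peak of a sum of symmetric, unimodal kernels tends to sit near a data point,
    // so this cuts the candidates from "the whole angular range" to "number of points".
    double bestAngle(double[] dataAngles, double bandwidth) {
        double bestAngle = 0, bestDensity = Double.NEGATIVE_INFINITY;
        for (double candidate : dataAngles) {
            double density = 0;
            for (double dataAngle : dataAngles) {
                double ux = (candidate - dataAngle) / bandwidth;
                density += Math.exp(-0.5 * ux * ux); // Gaussian here just for concreteness
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestAngle = candidate;
            }
        }
        return bestAngle;
    }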
re #1: That seems to break for me, because (taking the Gaussian example) if I have two data points, centered at -0.25 and 0.25, the maximum of the summed density after calculating both kernels will be at x=0, which wasn't a zero-crossing of either Gaussian in isolation.
re #2: I like this idea!
I've just now switched (experimentally) to using the Tricube kernel because I like its shape: flattish in the center and trailing off to either side. I have it adjusted to slightly overhang the precise intersection width of each data point. Since it only exists on (-1, 1), I've got some of your suggestion #2 built in, and turn skipping has pretty much ceased! We'll see how well this kernel compares, of course....
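(For reference, the Tricube kernel is (1 - |u|^3)^3 for |u| < 1 and zero outside; a direct version, dropping the normalization constant since it doesn't matter when comparing dangers, looks like:)

    // Tricube kernel: flat near the center, falling smoothly to zero at +/-1.
    double tricube(double ux) {
        double absU = Math.abs(ux);
        if (absU >= 1) {
            return 0;
        }
        double t = 1 - absU * absU * absU;
        return t * t * t;
    }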
For #1 I did not mean the zero-crossing of any one point, I meant the zero-crossings of the sum of all the derivatives of the kernel density function. Of course, whether it's efficient to calculate those zeros or not all depends on what the kernel density function is (probably not practical for Gaussian, trivial for triangular, as two extreme cases).
Hmm... tricube sounds like an interesting one, though that's quite a bit of multiplication it uses. I wonder if this is the sort of thing that would really be worth doing a rough approximation of. I mean... it probably wouldn't affect the results too much to do the kernel density as a piecewise "sum of rectangles" approximation, and it would be much faster.
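Something like this, roughly (step count made up, just to illustrate the lookup idea):

    // Precompute the tricube kernel at a handful of fixed steps...
    double[] buildTricubeSteps(int steps) {
        double[] table = new double[steps];
        for (int i = 0; i < steps; i++) {
            double u = (i + 0.5) / steps;  // midpoint of each step on [0, 1)
            double t = 1 - u * u * u;
            table[i] = t * t * t;
        }
        return table;
    }

    // ...then each density contribution becomes a nearest-step lookup instead of
    // the full multiplication chain.
    double tricubeApprox(double ux, double[] table) {
        double absU = Math.abs(ux);
        if (absU >= 1) return 0;
        return table[(int) (absU * table.length)];
    }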
My solution to your problem was 2-fold:
1: Use a faster smoothing function. I've ended up at 1/(1+sqr(x))
2: A bit of dynamic programming: pre-calculate a single 'function profile' (and put it into a set of bins), centred at GF0, which runs from GF-2 to GF+2. Then whatever your GF is, you just need to scale your GF to figure out where on the function to draw your value from. So rather than doing an entire smoothing function for each hit, log all your hits (without smoothing) into a set of bins, then do the smoothing afterwards into a different set of bins by checking each bin if it is non-zero and overlaying a 'function profile' with that weight. If you're really sneaky you can even keep what the bin index of the hit is, instead of the actual GF ;-)
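Roughly like this (bin counts and names made up for illustration, not my actual code):

    static final int BINS = 121;  // danger bins covering GF -1 .. +1

    // Build the 'function profile' once: 1/(1 + x^2) sampled over GF -2 .. +2,
    // so it can be overlaid at any hit bin and still cover the whole danger array.
    double[] buildProfile() {
        double[] profile = new double[2 * BINS - 1];
        int center = BINS - 1;
        for (int i = 0; i < profile.length; i++) {
            double x = (i - center) * (2.0 / (BINS - 1)); // offset in GF units
            profile[i] = 1.0 / (1.0 + x * x);
        }
        return profile;
    }

    // Smooth raw hit counts into danger bins: for each non-empty hit bin, overlay
    // the profile centred on it, weighted by that bin's count.
    double[] smooth(double[] hitCounts, double[] profile) {
        double[] danger = new double[BINS];
        for (int hit = 0; hit < BINS; hit++) {
            if (hitCounts[hit] == 0) continue;
            for (int bin = 0; bin < BINS; bin++) {
                danger[bin] += hitCounts[hit] * profile[bin - hit + (BINS - 1)];
            }
        }
        return danger;
    }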
Until a couple versions ago, in my main gun, I was using Gaussian until a certain number of data points, then switching to:
if (abs(ux) < 1) { density += square(1 - square(ux)); }
After some testing with WaveSim, I'm now using:
(1 - cube(abs(ux)))
and never using Gaussian. YMMV, but I think with the amount of data you have in a gun, you don't really need the heavy smoothing offered by formulas that cover the whole range.
One less intensive approach that covers the whole range would be something like:
density += 1.0 / (1 + square(ux))
which is akin to what a lot of VCS guns do for Bin Smoothing.
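Spelled out with square/cube expanded (generic forms, not my exact gun code):

    double quartic(double ux) {          // square(1 - square(ux)) for |ux| < 1
        if (Math.abs(ux) >= 1) return 0;
        double t = 1 - ux * ux;
        return t * t;
    }

    double cubicFalloff(double ux) {     // 1 - cube(abs(ux)) for |ux| < 1
        double a = Math.abs(ux);
        return a < 1 ? 1 - a * a * a : 0;
    }

    double inverseQuadratic(double ux) { // 1 / (1 + square(ux)), covers the whole range
        return 1.0 / (1 + ux * ux);
    }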
Wow, nice work. I haven't really stopped robocoding (does anyone ever really stop?), but I took a break for a while and now I'm working on an R-Tree and some rewriting for Gilgalad. I had an idea that might push Diamond to the top. As far as I can tell, you only surf three options on the second wave for each of your three options on the first wave. I suspect that the bullet shadows make the dangers much less continuous, so using more points on the second wave would help your score a bit. (For Gilgalad, I thought I had more or less fixed the skipped turns problem by using every 5th point and making sure I got the extreme points in either direction for the second wave, but I got a new computer that has an Intel processor rather than an AMD one. It's more than twice as fast as my old one from four years ago, but it seems to have way more skipped turns.)
Hmm, interesting thought. My original surf algorithm was to check every point along the second wave (in the days of bins and no precise intersection), but just checking forward/stop/reverse somehow always outperformed it. It's true that a lot has changed since then, including bullet shadows, so maybe you're right. But my most recent experiments with changing my surf algorithm were even more significant and came out with almost no change in score, so now I'm a little skeptical about tweaking my surf algorithm. =)
Wow, congrats on these tweaks, although it brings Diamond a bit too close for my liking there! I think we tend to weight the second wave so low anyway that minor inaccuracies aren't as big of a deal. Wintermute does do that, though: for each tick on the second wave, it tries stopping and sees where the intersection is. It mostly just made it slow.
You mean you only use three movement options on the second wave for each movement option on the first wave? And I've spent all these hundreds of hours optimizing for nothing!