Kernel density is important
Using the approximator seems reasonable to me. I actually saw that in your version history and played with one a bit in my gun. =) In my main gun, where I do over 10k kernel density calculations per tick, I long ago abandoned Gaussian because it was too slow. But I thought that with an approximation, which I already had lying around from some experiments with an integral surf danger formula, it might work. It was fast enough, but it didn't perform better anyway...
I've found that a formula that smooths across the whole angle range is really important in movement. And in my movement, it's a max of 200 data points * 12 firing angles tested = 2400 kernel density calculations (across both waves). So until now I'd stuck with Gaussian, because it's the only common kernel density formula I'd seen with that property. But I finally started playing with modifications of it, and it was quite an improvement.
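For concreteness, a minimal sketch of that kind of Gaussian kernel density sum (all names here are illustrative, not taken from any actual movement code):

```java
// Gaussian kernel density of one candidate firing angle over all logged angles.
// candidateAngle, dataAngles and bandwidth are illustrative names.
static double density(double candidateAngle, double[] dataAngles, double bandwidth) {
    double sum = 0;
    for (double dataAngle : dataAngles) {
        double u = (candidateAngle - dataAngle) / bandwidth;
        // exp(-u^2 / 2) is non-zero across the whole angle range, so every
        // data point influences every candidate angle -- smooth but costly.
        sum += Math.exp(-0.5 * u * u);
    }
    return sum;
}
```

With ~200 data points and 12 candidate angles across two waves, that inner loop is the 2400 evaluations mentioned above, and Math.exp is what makes it expensive.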
Bingo!
A big big problem was that I was calculating all dangers on my waves up-front. My reasoning was to take a one-time calculation hit and then surf using lookups.
Problem was, at the angular resolution I wanted, this involved tens (maybe even hundreds) of thousands of kernel density calculations when creating my wave danger object. It seems like a few thousand kernel density calcs each tick works a lot better for surfing. My skipped turns were probably happening when I detected enemy waves fired on the same turn that I was trying to make a targeting decision.
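In other words, something like this hypothetical sketch: evaluate danger lazily for the handful of candidate points actually tested each tick, rather than filling a dense array when the wave appears.

```java
// Hypothetical sketch of the trade-off: lazy per-tick danger evaluation
// instead of a dense precomputed danger array per wave.
static double dangerAt(double angle, double[] loggedAngles, double bandwidth) {
    double sum = 0;
    for (double logged : loggedAngles) {
        double u = (angle - logged) / bandwidth;
        sum += Math.exp(-0.5 * u * u);   // or any cheaper kernel
    }
    return sum;
}

static double[] dangersThisTick(double[] candidateAngles, double[] loggedAngles, double bandwidth) {
    // A dozen candidates * ~200 logged angles = a few thousand kernel calls
    // per tick, versus tens of thousands up front at a fine angular resolution.
    double[] dangers = new double[candidateAngles.length];
    for (int i = 0; i < candidateAngles.length; i++) {
        dangers[i] = dangerAt(candidateAngles[i], loggedAngles, bandwidth);
    }
    return dangers;
}
```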
Targeting is still annoying in this sense... the entire angular range needs to be evaluated on this tick. I like the exponential/Gaussian approach, but I want to investigate whether there are less processor-intensive kernel functions that work as well (or better?).
Regarding targeting being annoying in terms of evaluating the entire angular range, how are you doing that currently? Are you just calling a kernel density function on a large number of fixed points?
Here are a few ways to perhaps calculate kernel density faster in the context of targeting, where you only care about the maximum:
- If you take the derivative of your kernel density function, you should be able to find the zero-crossings of the slope, and only calculate the kernel density at those points.
- One could also try approaches like skipping the kernel density calculation for angles which are too far from any data points.
- Or maybe even use the data points themselves as the angles to run the kernel density calculation for (see the sketch just after this list).
- With certain exceptionally simple kernel density functions (e.g. the rectangular kernel I use in RougeDC/Scarlet's targeting), you can find the peak extremely fast with specialized algorithms as well.
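For the third idea, a hedged sketch of what that might look like (the names and the particular finite-support kernel are mine, not from any specific bot):

```java
// Use the logged angles themselves as the only candidate angles, so finding
// the peak is O(n^2) in the number of data points with no fixed angular grid.
static double peakAngle(double[] loggedAngles, double bandwidth) {
    double bestAngle = 0;
    double bestDensity = Double.NEGATIVE_INFINITY;
    for (double candidate : loggedAngles) {
        double density = 0;
        for (double other : loggedAngles) {
            double u = (candidate - other) / bandwidth;
            if (Math.abs(u) < 1) {
                density += 1 - u * u;   // any cheap finite-support kernel works here
            }
        }
        if (density > bestDensity) {
            bestDensity = density;
            bestAngle = candidate;
        }
    }
    return bestAngle;
}
```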
re #1: That seems to break for me, because (taking the Gaussian example) if I have two data points centered at -0.25 and 0.25, the maximum of the total density after summing both kernels will be at x=0, which isn't a zero-crossing of either Gaussian's slope in isolation.
re #2: I like this idea!
I've just now switched (experimentally) to using the Tricube kernel because I like its shape: flattish in the center and trailing off to either side. I have it adjusted to slightly overhang the precise intersection width of each data point. Since it is only non-zero on (-1, 1), I've got some of your suggestion #2 built in, and turn skipping has pretty much ceased! We'll see how well this kernel compares, of course...
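For reference, a quick sketch of a tricube contribution as I understand it (ux assumed to already be divided by the bandwidth/overhang mentioned above):

```java
// Tricube kernel: (1 - |u|^3)^3, flat-ish near the centre, zero outside (-1, 1).
static double tricube(double ux) {
    double a = Math.abs(ux);
    if (a >= 1) {
        return 0;            // finite support: far-away data points cost nothing
    }
    double t = 1 - a * a * a;
    return t * t * t;
}
```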
For #1 I did not mean the zero-crossings of any one point, I meant the zero-crossings of the sum of all the derivatives of the kernel density function. Of course, whether it's efficient to calculate those zeros or not all depends on what the kernel density function is (probably not practical for Gaussian, trivial for triangular, as two extreme cases).
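To make the "trivial for triangular" end concrete, here's a rough sketch (my own illustration, not anyone's actual gun code): with the triangular kernel the summed density is piecewise linear, so its maximum has to land on a kink, i.e. a data point or a point exactly one bandwidth to either side, and those are the only candidates that need evaluating.

```java
// Triangular kernel: the summed density is piecewise linear, so only evaluate
// at the kinks (each data point and the points one bandwidth to either side).
static double peakAngleTriangular(double[] dataAngles, double bandwidth) {
    double bestAngle = 0;
    double bestDensity = Double.NEGATIVE_INFINITY;
    for (double center : dataAngles) {
        for (double candidate : new double[] {center - bandwidth, center, center + bandwidth}) {
            double density = 0;
            for (double other : dataAngles) {
                double u = Math.abs((candidate - other) / bandwidth);
                if (u < 1) {
                    density += 1 - u;   // triangular kernel contribution
                }
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestAngle = candidate;
            }
        }
    }
    return bestAngle;
}
```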
Hmm... tricube sounds like an interesting one, though that's quite a bit of multiplication it uses. I wonder if this is the sort of thing that would really be worth doing a rough approximation of. I mean... it probably wouldn't affect the results much to do the kernel density as a piecewise "sum of rectangles" approximation, and it would be much faster.
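One way to read the "sum of rectangles" idea (just my interpretation): precompute the kernel into a small lookup table once, so each contribution becomes an array read instead of several multiplications.

```java
// Hypothetical piecewise-constant (lookup table) approximation of tricube.
class TricubeTable {
    static final int STEPS = 32;
    static final double[] TABLE = new double[STEPS];
    static {
        for (int i = 0; i < STEPS; i++) {
            double u = (i + 0.5) / STEPS;   // midpoint of each "rectangle"
            double t = 1 - u * u * u;
            TABLE[i] = t * t * t;
        }
    }

    // One array read per contribution, still zero outside (-1, 1).
    static double tricubeApprox(double ux) {
        double a = Math.abs(ux);
        return a < 1 ? TABLE[(int) (a * STEPS)] : 0;
    }
}
```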
My solution to your problem was 2-fold:
1: Use a faster smoothing function. I've ended up at 1/(1+sqr(x))
2: A bit of dynamic programming: pre-calculate a single 'function profile' (and put it into a set of bins), centred at GF0, which runs from GF-2 to GF+2. Then whatever your GF is, you just need to scale it to figure out where on the profile to draw your value from. So rather than running the entire smoothing function for each hit, log all your hits (without smoothing) into a set of bins, then do the smoothing afterwards into a different set of bins by checking each bin for non-zero values and overlaying a 'function profile' with that weight. If you're really sneaky you can even keep the bin index of the hit instead of the actual GF ;-)
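A minimal sketch of how I read that profile idea (the 1/(1 + x^2) shape and the GF-2..GF+2 span are from the description above; the bin counts and the x scale are illustrative and untuned):

```java
// Precompute one smoothing profile, then smooth by overlaying it once per
// non-empty raw bin instead of running the full formula once per hit.
class ProfileSmoothing {
    static final int BINS = 121;                              // raw GF bins covering GF -1..+1
    static final double[] PROFILE = new double[2 * BINS - 1]; // profile covering GF -2..+2
    static {
        int mid = BINS - 1;                                   // profile centre sits at GF 0
        for (int i = 0; i < PROFILE.length; i++) {
            double x = (i - mid) * (2.0 / (BINS - 1));        // bin offset converted to GF units
            PROFILE[i] = 1.0 / (1.0 + x * x);                 // the fast smoothing function
        }
    }

    static double[] smooth(double[] rawHits) {
        double[] smoothed = new double[BINS];
        for (int hitBin = 0; hitBin < BINS; hitBin++) {
            if (rawHits[hitBin] == 0) {
                continue;                                     // nothing logged here
            }
            for (int bin = 0; bin < BINS; bin++) {
                // shift the profile so its centre lands on the hit's bin
                smoothed[bin] += rawHits[hitBin] * PROFILE[bin - hitBin + (BINS - 1)];
            }
        }
        return smoothed;
    }
}
```

The profile only has to be built once, so the per-bin work during smoothing is a multiply and an add.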
Until a couple versions ago, in my main gun, I was using Gaussian until a certain number of data points, then switching to if (abs(ux) < 1) { density += square(1 - square(ux)) }. After some testing with WaveSim, I'm now using (1 - cube(abs(ux))) and never using Gaussian. YMMV, but I think with the amount of data you have in a gun, you don't really need the heavy smoothing offered by formulas that cover the whole range.
One less intensive approach that covers the whole range would be something like density += 1.0 / (1 + square(ux)), which is akin to what a lot of VCS guns do for Bin Smoothing.
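For anyone skimming, those three formulas written out as plain helper methods (ux assumed to be the normalized angle difference; the abs(ux) < 1 guard on the second one is my assumption, carried over from the first):

```java
// Quartic-style kernel: finite support, fairly heavy smoothing inside it.
static double quartic(double ux) {
    if (Math.abs(ux) >= 1) return 0;
    double t = 1 - ux * ux;
    return t * t;                        // square(1 - square(ux))
}

// 1 - |ux|^3: also finite support, flatter near the centre.
static double oneMinusCube(double ux) {
    double a = Math.abs(ux);
    return a < 1 ? 1 - a * a * a : 0;
}

// 1 / (1 + ux^2): cheap, covers the whole range, akin to classic VCS bin smoothing.
static double inverseSquare(double ux) {
    return 1.0 / (1 + ux * ux);
}
```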