kernel density is important
Using the approximator seems reasonable to me. I actually saw that in your version history and played with one a bit in my gun. =) In my main gun, where I do over 10k kernel density calculations per tick, I long ago abandoned Gaussian because it was too slow. But I thought that with an approximation, which I already had lying around from some experiments with an integral surf danger formula, it might work. It was fast enough, but it didn't perform better anyway...
I've found that a formula that smooths across the whole angle range is really important in movement. And in my movement, it's a max of 200 data points * 12 firing angles tested = 2400 kernel density calculations (across both waves). So until now, I stuck with gaussian because it's the only common kernel density formula I'd seen with that property. But I finally started playing with modifications of that and it was quite an improvement.
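For reference, here's a rough sketch of that kind of calculation (my own illustration, not Diamond's actual code): a Gaussian kernel density of the logged hit angles, evaluated at each candidate firing angle. With ~200 data points and 12 candidate angles per wave, this is roughly the 2400 kernel evaluations mentioned above. The names and the bandwidth handling are assumptions for illustration.

```java
// Sketch only: Gaussian kernel density of logged hit angles, evaluated
// at each candidate firing angle. Bandwidth and names are illustrative.
static double[] dangers(double[] candidateAngles, double[] dataAngles, double bandwidth) {
    double[] result = new double[candidateAngles.length];
    for (int i = 0; i < candidateAngles.length; i++) {
        double density = 0;
        for (double hitAngle : dataAngles) {
            double u = (candidateAngles[i] - hitAngle) / bandwidth;
            density += Math.exp(-0.5 * u * u);   // unnormalized Gaussian kernel
        }
        result[i] = density;                     // higher density = more dangerous angle
    }
    return result;
}
```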
Bingo!
A big big problem was that I was calculating all dangers on my waves up-front. My reasoning was to take a one-time calculation hit and then surf using lookups.
Problem was, at the angular resolution I wanted, this involved tens (maybe even hundreds) of thousands of kernel density calculations when creating my wave danger object. It seems like a few thousand kernel density calcs each tick works a lot better for surfing. My skipped turns were probably happening when I detected enemy waves fired on the same turn I was trying to make a targeting decision.
Targeting is still annoying in this sense: the entire angular range needs to be evaluated on this tick. I like the exponential/Gaussian approach, but I want to investigate whether there are less processor-intensive kernel functions that work as well (or better?).
Regarding targeting being annoying in terms of evaluating the entire angular range, how are you doing that currently? Are you just calling a kernel density function on a large number of fixed points?
Here are a few ways you could perhaps calculate kernel density faster in the context of targeting, where you only care about the maximum:
- If you take the derivative of your kernel density function, you should be able to find the zero-crossings of the slope, and only calculate the kernel density at those points.
- One could also try approaches like skipping the kernel density calculation for angles which are too far from any data points.
- Or maybe even use the data points themselves as the angles to run the kernel density calculation for (see the sketch after this list).
- With certain exceptionally simple kernel density functions (e.g. the rectangular kernel I use in RougeDC/Scarlet's targeting), you can also find the peak extremely fast with specialized algorithms.
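To make the third idea concrete, here's a rough sketch (my own illustration, not from any particular bot) that evaluates the density only at the logged data angles themselves, rather than at a fixed grid. With a peaked kernel, the maximum is usually at or very near one of the data points:

```java
// Sketch only: find the data angle with the highest kernel density.
// Any kernel works here; Gaussian is used just as an example.
static double densestDataAngle(double[] dataAngles, double bandwidth) {
    double best = dataAngles[0];
    double bestDensity = Double.NEGATIVE_INFINITY;
    for (double candidate : dataAngles) {
        double density = 0;
        for (double x : dataAngles) {
            double u = (candidate - x) / bandwidth;
            density += Math.exp(-0.5 * u * u);
        }
        if (density > bestDensity) {
            bestDensity = density;
            best = candidate;
        }
    }
    return best;
}
```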
re #1: That seems to break for me, because (taking the Gaussian example) if I have two data points, with centers at -0.25 and 0.25, the maximum of the summed density after calculating both kernels will be at x=0, which isn't a zero-crossing of either Gaussian in isolation.
re #2: I like this idea!
I've just now switched (experimentally) to using the Tricube kernel because I like its shape: flattish in the center and trailing off to either side. I have it adjusted to slightly overhang the precise intersection width of each data point. Since it is only non-zero on (-1, 1), I've got some of your suggestion #2 built in, and turn skipping has pretty much ceased! We'll see how well this kernel compares, of course....
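Roughly, that might look like the sketch below. The overhang factor and the per-point width handling are my own guesses for illustration, not Diamond's actual values:

```java
// Sketch only: tricube kernel with finite support, scaled so each kernel
// slightly overhangs that data point's precise intersection half-width.
static double tricubeDensity(double angle, double[] dataAngles, double[] halfWidths) {
    double density = 0;
    for (int i = 0; i < dataAngles.length; i++) {
        double u = (angle - dataAngles[i]) / (halfWidths[i] * 1.1);  // slight overhang (guess)
        double au = Math.abs(u);
        if (au < 1) {                                    // zero outside (-1, 1), so far-off
            double t = 1 - au * au * au;                 // points can be skipped entirely
            density += t * t * t;                        // tricube: (1 - |u|^3)^3
        }
    }
    return density;
}
```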
My solution to your problem was 2-fold:
1: Use a faster smoothing function. I've ended up at 1/(1+sqr(x))
2: A bit of dynamic programming: pre-calculate a single 'function profile' (and put it into a set of bins), centred at GF0, which runs from GF-2 to GF+2. Then whatever your GF is, you just need to scale it to figure out where on the profile to draw your value from. So rather than evaluating an entire smoothing function for each hit, log all your hits (without smoothing) into a set of bins, then do the smoothing afterwards into a different set of bins by checking each bin for a non-zero count and overlaying a 'function profile' with that weight. If you're really sneaky you can even keep the bin index of the hit instead of the actual GF ;-)
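Something along these lines, as a rough sketch (bin count, the scale of x, and all names are my own assumptions, not taken from any bot):

```java
// Sketch only: precomputed 'function profile' of 1/(1+x^2) covering GF -2..+2,
// overlaid onto a smoothed bin array once per non-empty raw bin.
public class ProfileSmoother {
    static final int BINS = 51;                                 // bins covering GF -1..+1
    static final double[] PROFILE = new double[2 * BINS - 1];   // covers GF -2..+2

    static {
        // Pre-calculate the smoothing function once, centred on the profile's middle bin.
        for (int i = 0; i < PROFILE.length; i++) {
            double gfOffset = (i - (BINS - 1)) * (2.0 / (BINS - 1));  // offset in GF units
            PROFILE[i] = 1.0 / (1.0 + gfOffset * gfOffset);
        }
    }

    // rawHits has length BINS; rawHits[b] is the unsmoothed hit weight logged in bin b.
    static double[] smooth(double[] rawHits) {
        double[] smoothed = new double[BINS];
        for (int hitBin = 0; hitBin < BINS; hitBin++) {
            if (rawHits[hitBin] == 0) continue;                 // skip empty bins entirely
            for (int b = 0; b < BINS; b++) {
                // Overlay the profile shifted so its centre sits on hitBin.
                smoothed[b] += rawHits[hitBin] * PROFILE[b - hitBin + (BINS - 1)];
            }
        }
        return smoothed;
    }
}
```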
Until a couple versions ago, in my main gun, I was using Gaussian until a certain number of data points, then switching to: if (abs(ux) < 1) { density += square(1 - square(ux)) }. After some testing with WaveSim, I'm now using (1 - cube(abs(ux))) and never using Gaussian. YMMV, but I think with the amount of data you have in a gun, you don't really need the heavy smoothing offered by formulas that cover the whole range.
One less intensive approach that covers the whole range would be something like: density += 1.0 / (1 + square(ux)), which is akin to what a lot of VCS guns do for Bin Smoothing.
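For side-by-side reference, here are the kernels mentioned in this thread written out as plain functions, where ux is the distance from the data point divided by the kernel width. This is just an illustration; the names are standard labels, not identifiers from any of the bots discussed:

```java
// Sketch only: the kernel shapes discussed above, each taking normalized distance ux.
static double gaussian(double ux)      { return Math.exp(-0.5 * ux * ux); }       // whole range, heavy smoothing

static double quartic(double ux) {                                                // square(1 - square(ux))
    if (Math.abs(ux) >= 1) return 0;
    double t = 1 - ux * ux;
    return t * t;
}

static double cubeKernel(double ux) {                                             // 1 - cube(abs(ux))
    double au = Math.abs(ux);
    return au >= 1 ? 0 : 1 - au * au * au;
}

static double inverseSquare(double ux) { return 1.0 / (1 + ux * ux); }            // whole range, cheap
```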