kernel density is important
Bingo!
A big big problem was that I was calculating all dangers on my waves up-front. My reasoning was to take a one-time calculation hit and then surf using lookups.
The problem was, at the angular resolution I wanted, this involved tens (maybe even hundreds) of thousands of kernel density calculations when creating my wave danger object. A few thousand kernel density calcs each tick seems to work a lot better for surfing. My skipped turns were probably happening when I detected enemy waves fired on the same turn I was trying to make a targeting decision.
Targeting is still annoying in this sense: the entire angular range needs to be evaluated on this tick. I like the exponential/Gaussian approach, but I want to investigate whether there are less processor-intensive kernel functions that work as well (or better?).
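For reference, here is a minimal sketch of what that brute-force evaluation can look like (all names, the bin count and the bandwidth are made up for illustration, not Diamond's actual code): evaluate a Gaussian kernel density at a grid of fixed firing angles and aim at the densest one.

```java
import java.util.List;

// Illustrative brute-force kernel density targeting: evaluate a Gaussian
// kernel density at many fixed guess factors and pick the densest one.
public class NaiveKernelDensityAim {
    static final int ANGLE_BINS = 1001;    // assumed angular resolution
    static final double BANDWIDTH = 0.05;  // assumed kernel bandwidth, in GF units

    // dataPoints: guess factors (-1..1) gathered from past waves
    public static double bestGuessFactor(List<Double> dataPoints) {
        double bestGf = 0, bestDensity = -1;
        for (int bin = 0; bin < ANGLE_BINS; bin++) {
            double gf = -1 + 2.0 * bin / (ANGLE_BINS - 1);
            double density = 0;
            for (double point : dataPoints) {
                double u = (gf - point) / BANDWIDTH;
                density += Math.exp(-0.5 * u * u); // unnormalized Gaussian kernel
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestGf = gf;
            }
        }
        return bestGf;
    }
}
```

The cost is (number of angle bins) × (number of data points) kernel evaluations per aiming decision, which is exactly the expense being discussed.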
Regarding targeting being annoying in terms of evaluating the entire angular range, how are you doing that currently? Are you just calling a kernel density function on a large number of fixed points?
Here are a few ways you could perhaps calculate kernel density faster in the context of targeting, where you only care about the maximum:
- If you take the derivative of your kernel density function, you should be able to find the zero-crossings of the slope, and only calculate the kernel density at those points.
- One could also try approaches like skipping the kernel density calculation for angles which are too far from any data points.
- Or maybe even use the data points themselves as the angles to run the kernel density calculation for (see the sketch after this list).
- With certain exceptionally simple kernel functions (e.g. the rectangular kernel I use in RougeDC/Scarlet's targeting), you can also find the peak extremely fast with specialized algorithms.
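As a purely illustrative sketch of the third idea, with a distance cutoff in the inner loop in the same spirit as the second: since a Gaussian kernel is effectively zero a few bandwidths away, both the candidate angles and the contributions can be restricted to nearby data points. Names and constants here are assumptions.

```java
import java.util.List;

// Evaluate the kernel density only at the data points themselves, skipping
// contributions from points farther away than a cutoff where the kernel is
// effectively zero.
public class DataPointKernelDensityAim {
    static final double BANDWIDTH = 0.05;        // assumed bandwidth in GF units
    static final double CUTOFF = 3 * BANDWIDTH;  // beyond ~3 bandwidths a Gaussian contributes almost nothing

    public static double bestGuessFactor(List<Double> dataPoints) {
        double bestGf = 0, bestDensity = -1;
        for (double candidate : dataPoints) {    // candidates are the data points themselves
            double density = 0;
            for (double point : dataPoints) {
                double diff = Math.abs(candidate - point);
                if (diff > CUTOFF) {
                    continue;                    // skip far-away points
                }
                double u = diff / BANDWIDTH;
                density += Math.exp(-0.5 * u * u);
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestGf = candidate;
            }
        }
        return bestGf;
    }
}
```

This is O(n²) in the number of data points in the worst case, but n is usually far smaller than a full angular sweep, and the cutoff prunes most pairs.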
re #1: That seems to break for me, because (taking the Gaussian example) if I have two data points, centered at -0.25 and 0.25, the maximum of the summed density will be at x = 0, which isn't a zero-crossing of either Gaussian's derivative in isolation.
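A quick numerical check of that example, assuming a bandwidth of 0.5 (wide enough that the two kernels merge into a single peak; a much narrower bandwidth would instead give two peaks near ±0.25):

```java
// Kernels at GF -0.25 and +0.25, bandwidth 0.5 (assumed). The summed density
// peaks at GF 0, which is not a stationary point of either kernel taken alone.
public class TwoGaussianCheck {
    public static void main(String[] args) {
        double sigma = 0.5;
        double bestX = 0, bestDensity = -1;
        for (double x = -1; x <= 1; x += 0.001) {
            double density = 0;
            for (double c : new double[] {-0.25, 0.25}) {
                double u = (x - c) / sigma;
                density += Math.exp(-0.5 * u * u);
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestX = x;
            }
        }
        System.out.printf("peak %.4f at x = %.3f%n", bestDensity, bestX);
        // prints approximately: peak 1.7650 at x = 0.000
    }
}
```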
re #2: I like this idea!
I've just now switched (experimentally) to using the Tricube kernel because I like its shape: flattish in the center and trailing off to either side. I have it adjusted to slightly overhang the precise intersection width of each data point. Since it's only non-zero on (-1, 1), I've got some of your suggestion #2 built in, and turn skipping has pretty much ceased! We'll see how well this kernel compares, of course...
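For what it's worth, the tricube kernel is simple to write down; here is a hedged sketch (the 10% overhang factor is an assumption, not Diamond's actual value):

```java
// Tricube kernel: flat-ish near the centre, falls smoothly to zero at |u| = 1,
// and is exactly zero beyond that, so far-away points cost nothing.
public class TricubeKernel {
    // halfWidth: roughly the precise-intersection half-width of the data point,
    // scaled up a little so the kernel slightly overhangs it.
    static double tricube(double angleDiff, double halfWidth) {
        double u = Math.abs(angleDiff) / (halfWidth * 1.1); // assumed 10% overhang
        if (u >= 1) {
            return 0; // finite support: nothing to compute outside (-1, 1)
        }
        double a = 1 - u * u * u;
        return a * a * a;
    }
}
```

Because the kernel is identically zero outside (-1, 1), points whose scaled distance exceeds 1 can be skipped entirely, which is where the built-in version of suggestion #2 comes from.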
My solution to your problem was 2-fold:
1: Use a faster smoothing function. I've ended up at 1/(1+sqr(x))
2: A bit of dynamic programming: pre-calculate a single 'function profile' (and put it into a set of bins), centred at GF0, which runs from GF-2 to GF+2. Then whatever your GF is, you just scale it to figure out where on the profile to draw your value from. So rather than running the entire smoothing function for each hit, log all your hits (without smoothing) into a set of bins, then do the smoothing afterwards into a different set of bins: check each bin for a non-zero count and overlay a 'function profile' with that weight. If you're really sneaky you can even keep the bin index of the hit, instead of the actual GF ;-)
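A minimal sketch of both ideas together (bin count, names, and the choice of measuring x in GF units are all assumptions; x could just as well be measured in bin indices, as hinted at the end):

```java
// Log raw hits into bins, then smooth once by overlaying a single precomputed
// profile of f(x) = 1/(1 + x^2), centred at GF0 and spanning GF-2..GF+2,
// shifted onto each non-empty bin.
public class BinnedSmoothing {
    static final int BINS = 101;                     // assumed GF resolution over (-1..1)
    static final double[] PROFILE = buildProfile();  // spans GF-2..GF+2: 2*BINS - 1 entries

    static double[] buildProfile() {
        double[] profile = new double[2 * BINS - 1];
        for (int i = 0; i < profile.length; i++) {
            double gf = (i - (BINS - 1)) * (2.0 / (BINS - 1)); // GF offset of this profile bin
            profile[i] = 1.0 / (1.0 + gf * gf);                // the 1/(1+sqr(x)) smoothing function
        }
        return profile;
    }

    // rawHits[bin] = number of (unsmoothed) hits logged in that GF bin
    static double[] smooth(double[] rawHits) {
        double[] smoothed = new double[BINS];
        for (int hitBin = 0; hitBin < BINS; hitBin++) {
            if (rawHits[hitBin] == 0) {
                continue;                            // only non-empty bins cost anything
            }
            for (int bin = 0; bin < BINS; bin++) {
                // shift the profile so its centre lands on hitBin
                smoothed[bin] += rawHits[hitBin] * PROFILE[bin - hitBin + (BINS - 1)];
            }
        }
        return smoothed;
    }
}
```

The smoothing function itself is evaluated only once, when the profile is built; per wave, the work is just (non-empty bins) × BINS multiply-adds, instead of re-evaluating the function for every hit at every angle.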