Effectiveness


I do think the idea is sound - a larger bandwidth reflecting the uncertainty in your data. I think many of our systems may already be tuned around this. E.g., if you have your simple trees/buffers enabled by default and then enable more and more sets of stats as the enemy hit % goes up - as many of us do, to varying degrees - you gain a smoothing effect as the enemy hit % goes up. (Like you mentioned with your flattener.)

I did something recently with a somewhat similar insight - I have a "base danger" added to the statistical danger, and it's proportional to the enemy's hit percentage. The idea being that the more an enemy is hitting you, the more it's true that any area is going to have some danger no matter what your current stats say, and in that case it's better to give more weight to other factors, like bullet shadows and distancing. Interestingly, with a danger that's normalized to the range 0 to 1, the optimal formula I found was using the raw hit probability (like .12 for 12%) - no multiplier or anything.
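A minimal sketch of that combination, assuming a statistical danger already normalized to [0, 1] (the class and method names here are hypothetical, not Voidious's actual code):

```java
public class BaseDanger {
    /**
     * Combines normalized statistical danger with a flat baseline equal to
     * the raw enemy hit probability (e.g. 0.12 for a 12% hit rate).
     */
    static double combinedDanger(double statDanger, double enemyHitRate) {
        return statDanger + enemyHitRate;
    }

    public static void main(String[] args) {
        // Low statistical danger, but a 12% hit rate still keeps this
        // location from looking perfectly safe.
        System.out.println(combinedDanger(0.05, 0.12));
    }
}
```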

Voidious 00:35, 24 September 2012

Hmm... interesting... Question though, how are you normalizing the danger to the 0-to-1 range? Area, or peak?

If you're normalizing by area, and the x axis ranges from 0 to 1, then the raw hit probability for the base danger makes a lot of sense to me. If your normalization before integrating over bot width is by area, then the danger should represent the probability of being hit. Then, assuming that your surfing reliably gets you to what is normally near-zero danger most of the time, the additional baseline probability of being hit should be the hit rate. The way of handling it that I see as making the most theoretical sense would be (hitProbabilityFromStats + actualHitProbability * (1 - hitProbabilityFromStats)). If one wants to make it even more accurate: take the average "hit rate from stats" for where your robot ends up on each wave, and subtract that from the actual hit rate. I think that should result in even better modeling of that baseline danger...
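The formula above can be sketched directly - it treats the two dangers as probabilities of being hit from independent "sources" (names here are illustrative, not from any actual bot):

```java
public class HitProbability {
    /**
     * Rednaxela's suggested combination: the probability of being hit by
     * either the aim the stats predict or the unmodeled baseline, assuming
     * the two are independent:
     *   P(hit) = pStats + pBaseline * (1 - pStats)
     */
    static double combined(double pStats, double pBaseline) {
        return pStats + pBaseline * (1.0 - pStats);
    }

    public static void main(String[] args) {
        // 5% danger from stats at this spot, 12% observed enemy hit rate.
        System.out.println(combined(0.05, 0.12));
    }
}
```

Note that this stays within [0, 1] for any valid inputs, unlike a plain sum.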

Rednaxela 14:00, 24 September 2012
 

My danger doesn't cover an area, though I tried that quite a bit recently and was disappointed I couldn't make it work better. For each angle in my log, the danger is: weight * power(some base, -abs(angle diff) / bandwidth), and then I divide out the total weight. I originally normalized it for the sake of multi-wave surfing - otherwise, if you're weighting by inverse scan distance, one wave can easily dominate the danger calculation just because you've been hit by a more similar wave before.

(Ninja edit: bandwidth is proportional to precise intersection bot width.)
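A sketch of that per-angle kernel, assuming a simple array-based log (the class, parameter values, and log layout are placeholders, not Voidious's actual implementation):

```java
public class KernelDanger {
    /**
     * Danger at a candidate angle: each logged angle contributes
     * weight * base^(-|angleDiff| / bandwidth), and the sum is divided by
     * the total weight. With base > 1, the result peaks at 1 when every
     * logged angle coincides with the candidate angle.
     */
    static double danger(double targetAngle, double[] angles,
                         double[] weights, double base, double bandwidth) {
        double sum = 0, totalWeight = 0;
        for (int i = 0; i < angles.length; i++) {
            double angleDiff = Math.abs(targetAngle - angles[i]);
            sum += weights[i] * Math.pow(base, -angleDiff / bandwidth);
            totalWeight += weights[i];
        }
        return sum / totalWeight;
    }

    public static void main(String[] args) {
        // Two logged angles, one much closer to the candidate angle;
        // bandwidth would be proportional to precise-intersection bot width.
        System.out.println(danger(0.1,
            new double[]{0.12, 0.5}, new double[]{1.0, 1.0}, 2.0, 0.05));
    }
}
```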

Voidious 19:18, 24 September 2012

Hmm... I see. It seems to me that it's easier to factor in the baseline hit probability logically when the danger calculations work in terms of estimated hit probability.

On the topic of the page this talk page is for... with regards to it not working as well as you would have liked when integrating over an area, I wonder if that might be related to the tuning of the bandwidth that is applied before integrating over the area.

Rednaxela 20:39, 24 September 2012
 

I tried using area-normalisation a while ago on my VCS bins, and it consistently gave worse results than height-normalisation. I've got no idea why, but I might give it another shot and separate out the 'future danger' from the 'wave danger' like you said. After all, it does make more sense statistically - but if there's one thing Robocode has taught me, it's that 'statistically correct' often doesn't work as well as some tweaked variant =)
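For concreteness, the two normalisation schemes being compared can be sketched like this (the bin layout and class are illustrative, not Skilgannon's actual VCS code):

```java
public class BinNormalisation {
    /** Scales bins so the tallest bin is 1 (height-normalisation). */
    static double[] normaliseByHeight(double[] bins) {
        double max = 0;
        for (double b : bins) max = Math.max(max, b);
        double[] out = new double[bins.length];
        for (int i = 0; i < bins.length; i++)
            out[i] = (max > 0) ? bins[i] / max : 0;
        return out;
    }

    /**
     * Scales bins so they sum to 1 (area-normalisation), i.e. each bin
     * reads as an estimated probability of being hit at that angle.
     */
    static double[] normaliseByArea(double[] bins) {
        double sum = 0;
        for (double b : bins) sum += b;
        double[] out = new double[bins.length];
        for (int i = 0; i < bins.length; i++)
            out[i] = (sum > 0) ? bins[i] / sum : 0;
        return out;
    }
}
```

The area version is what makes the "danger = hit probability" interpretation work when summing over multiple waves; the height version only preserves the relative shape within each wave.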

Skilgannon 17:48, 25 September 2012

Mm... indeed, "statistically correct" often doesn't work as well... though in my view, in all of those cases there's probably a way to get the same gains without compromising statistical correctness. It's just a matter of finding out precisely why the "statistically incorrect" version worked better, which can be quite non-trivial.

Rednaxela 17:58, 25 September 2012