Talk:Variable bandwidth


Effectiveness

While I have only tested it a little, I think this has a lot of room for improvement in robots. I tried the simplistic "bandwidth = rawHitPercentage", capped at upper and lower bounds, and it leads to less than 60 bullet damage from HawkOnFire over 1000 rounds. I wonder what bugs I have in my surfing...
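For concreteness, the scheme I mean is roughly this (the bounds here are just illustrative guesses, not tuned values):

<pre>
// Sketch of the "bandwidth = rawHitPercentage" idea, clamped to bounds.
// MIN_BANDWIDTH / MAX_BANDWIDTH are illustrative, not tuned values.
static final double MIN_BANDWIDTH = 0.02; // in guess-factor units
static final double MAX_BANDWIDTH = 0.30;

double bandwidth(double enemyHitPercentage) {
    // enemyHitPercentage is the raw probability, e.g. 0.12 for a 12% hit rate
    return Math.max(MIN_BANDWIDTH, Math.min(MAX_BANDWIDTH, enemyHitPercentage));
}
</pre>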

AW, 23:37, 22 September 2012

Hmm, interesting. With regard to the "wavesurfing without explicit dive protection" part, though, I feel that Waves/Precise_Intersection covers that perfectly well, and no additional "dynamic bandwidth" is needed for that purpose. Really, the only reason I can think of for any kind of "dynamic bandwidth" beyond what precise intersection provides is the case of a small data set near the start of a round, where the uncertainty is greater.

Rednaxela, 07:35, 23 September 2012

I'm not sure precise intersection removes the need for dive protection, because precise intersection only helps determine which part of the wave you could be hit by, not how dangerous that part of the wave may be.

I agree that variable bandwidth isn't a replacement for some sort of dive protection, though, because they really solve different problems - variable bandwidth doesn't take into account the reduced reaction time to enemy bullets, escaping rambots etc. But I think that variable bandwidth certainly has applications. Particularly with multi-modal distributions on the wave, the relative danger of particular locations can change considerably if the bandwidth is changed - what was considered to be a safe point between two logged GF hits with a small bandwidth might be considered more dangerous than either of the peaks with a large bandwidth.

Skilgannon, 12:12, 23 September 2012

Er, precise intersection most certainly handles "how dangerous that part of the wave may be", at least if you take full advantage of the data it returns and integrate the danger over the full range.

Rednaxela, 16:10, 23 September 2012

The actual precise intersection just gives the angle range of the wave which could contain a bullet and hit you. It's the stats system which says how likely a bullet is to be within this range, and the danger function which adds other information such as wave weighting.

Precise intersection isn't any better at dive protection than just dividing by distance-to-enemy in your danger function. What it's better at is dodging bullets in tight situations where you're reasonably sure where the bullet is =)

Skilgannon, 16:44, 23 September 2012

In my view, the wider range of angles at which bullets may hit you is the only real "current danger" to include in the danger function. Now, it may be appropriate to also add a "future danger" for how a closer distance may affect waves not yet existing, but that should be:

  1. additive on top of normal danger (because it's not additional risk of current waves, it's a risk due to being close in later waves)
  2. based on the distance-to-enemy at the end of the last wave currently surfed (the future risk is all after that point in time)

Dividing by distance-to-enemy is really kind of a fudge factor that IMO doesn't reflect the real dynamics of the situation very well.
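A minimal sketch of what I mean by keeping the two separate (the helper names and the toy risk model are just illustrative):

<pre>
// Sketch: keep "current danger" and "future danger" separate and additive.
// All names and constants here are illustrative assumptions.
public class DangerSketch {
    // Risk from waves that do not exist yet, based on how close we end up
    // to the enemy at the end of the last wave currently being surfed.
    static double futureProximityRisk(double distanceAtEndOfLastWave) {
        return 50.0 / Math.max(distanceAtEndOfLastWave, 50.0); // toy model
    }

    static double totalDanger(double currentWaveDanger,
                              double distanceAtEndOfLastWave) {
        // Additive on top of the normal (current-wave) danger, rather than
        // dividing the whole danger by distance-to-enemy.
        return currentWaveDanger + futureProximityRisk(distanceAtEndOfLastWave);
    }
}
</pre>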

Rednaxela, 17:13, 23 September 2012
 

Precise intersection is important to dive protection, but what I am trying to say is that the dive protection provided by precise intersection depends greatly on the bandwidth. If I used some huge bandwidth, say 200π radians, my movement would just try to minimize the precise intersection angle, regardless of where that angle is; if I use a tiny bandwidth, I would almost ignore the size of the intersection. Obviously there needs to be a balance, and I think the balance changes with different bots. Actually, I used a form of this for a while, where the flattener bandwidth was much larger than the bandwidth for the other classifiers. If I need to enable the flattener, I can't accurately guess where the enemy is shooting. In such a case, minimizing the intersection angle gives better results than trying to dodge bullets which I can't predict.

AW, 19:33, 23 September 2012
 

Re-reading the comments, it sounds like you may not understand how I calculate danger, so here is a quick summary:

1) KNN with several classification schemes.

2) Using the points from step one, save the corresponding angles, a weight for each angle, and a bandwidth for each angle.

3) GoTo wave surfing to find precise intersection range of angles for different movement paths.

4) Use integral of sum of functions from step 2 evaluated over the range of angles from step 3 (with bullet shadows subtracted).

5) Use the safest movement path.

With such an algorithm, I am nearly certain that bandwidth strongly influences how frequently I will change direction due to walls.
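As a rough sketch, step 4 looks something like the following (the Gaussian kernel and the names here are stand-ins, not exactly what my bot uses):

<pre>
// Rough sketch of step 4: integrate the summed kernels over the precise
// intersection range, skipping bullet-shadowed angles. Names are illustrative.
static double pathDanger(double[] angles, double[] weights, double[] bandwidths,
                         double rangeStart, double rangeEnd,
                         java.util.List<double[]> shadows /* {start, end} pairs */) {
    final int STEPS = 100;
    double step = (rangeEnd - rangeStart) / STEPS;
    double danger = 0;
    for (int i = 0; i < STEPS; i++) {
        double a = rangeStart + (i + 0.5) * step;
        if (isShadowed(a, shadows)) continue; // bullet shadows subtracted
        for (int k = 0; k < angles.length; k++) {
            double x = (a - angles[k]) / bandwidths[k];
            danger += weights[k] * Math.exp(-0.5 * x * x) * step; // Gaussian kernel
        }
    }
    return danger;
}

static boolean isShadowed(double angle, java.util.List<double[]> shadows) {
    for (double[] s : shadows) {
        if (angle >= s[0] && angle <= s[1]) return true;
    }
    return false;
}
</pre>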

AW, 19:41, 23 September 2012
 

Makes sense, and actually that isn't different from what I would have expected your algorithm to be.

I see what you mean in terms of the bandwidth impacting the dive protection provided by precise intersection. I see that as a side effect that's not directly related to it though. If your level of bandwidth is bad for the dive protection provided by precise intersection, it should be bad in general, because either way the goal of optimizing the bandwidth should be to estimate the real probabilities as accurately as possible in the circumstance of sparse information.

I feel like a good experiment would be to do something like... take a large amount of pre-recorded wave data... pick out subsets of this data, and with different bandwidths check how closely the subset's distribution matches the distribution of the whole data set.
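A rough sketch of such an offline harness, assuming the waves are reduced to logged guess factors (the bin count and error measure are arbitrary choices on my part):

<pre>
// Offline sketch of that experiment: compare a KDE built from a small subset
// against the empirical distribution of the full pre-recorded data set.
// Everything here (bin count, error measure) is an illustrative assumption.
public class BandwidthBenchmark {
    static final int BINS = 61; // guess factors -1..1 discretized for comparison

    static double[] histogram(double[] gfs) {
        double[] h = new double[BINS];
        for (double gf : gfs) {
            int b = (int) Math.round((gf + 1) / 2 * (BINS - 1));
            h[Math.max(0, Math.min(BINS - 1, b))]++;
        }
        return normalize(h);
    }

    static double[] kde(double[] gfs, double bandwidth) {
        double[] d = new double[BINS];
        for (int i = 0; i < BINS; i++) {
            double x = -1 + 2.0 * i / (BINS - 1);
            for (double gf : gfs) {
                double u = (x - gf) / bandwidth;
                d[i] += Math.exp(-0.5 * u * u);
            }
        }
        return normalize(d);
    }

    static double[] normalize(double[] d) {
        double sum = 0;
        for (double v : d) sum += v;
        if (sum > 0) for (int i = 0; i < d.length; i++) d[i] /= sum;
        return d;
    }

    // Mean squared error between the subset KDE and the full-data histogram.
    static double error(double[] fullData, double[] subset, double bandwidth) {
        double[] truth = histogram(fullData);
        double[] est = kde(subset, bandwidth);
        double err = 0;
        for (int i = 0; i < BINS; i++) err += (truth[i] - est[i]) * (truth[i] - est[i]);
        return err / BINS;
    }
}
</pre>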

Rednaxela, 21:54, 23 September 2012
 

Thinking about it some more... the main thing I'm wondering is how to deal with the issue that the ideal bandwidth could vary across different parts of the distribution. One could very well have a situation where, say, in the dense part of a distribution there are enough data points to make it clear that the enemy has VCS bins that are slightly too wide and that you can fit between, but a more sparse part of the distribution also has danger worth considering.

Most of the information I'm seeing in searches for bandwidth estimation for kernel density estimation involves coming up with a single estimate for the whole distribution, which wouldn't be ideal when the data has widely varying density...
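One possible workaround would be a locally varying bandwidth, e.g. based on the distance to the k-th nearest logged angle; a sketch, with k and the fallback value left as open tuning questions:

<pre>
// Sketch of a locally varying bandwidth: use the distance to the k-th nearest
// logged angle as the bandwidth at each evaluation point. k is a guess here.
static double localBandwidth(double x, double[] loggedAngles, int k) {
    if (loggedAngles.length == 0) {
        return 0.3; // no data yet: fall back to a wide guess
    }
    double[] dists = new double[loggedAngles.length];
    for (int i = 0; i < loggedAngles.length; i++) {
        dists[i] = Math.abs(x - loggedAngles[i]);
    }
    java.util.Arrays.sort(dists);
    int idx = Math.max(1, Math.min(k, dists.length)) - 1;
    return Math.max(dists[idx], 1e-3); // floor to avoid a zero bandwidth
}
</pre>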

Rednaxela, 22:10, 23 September 2012
 

Well, I built my surfing algorithm in such a way that this would be possible, so it's a matter of figuring out what the ideal bandwidth would be. My first guess would be to use some function of the distance between the data point and the search point, as well as time. But in my opinion this whole idea deserves a lot of research. (I'll ask my professor about using some of the computer labs for Robocode.)

AW, 00:41, 24 September 2012
 

I'm thinking that the fastest way of doing research on the ideal method for determining bandwidth for robocode purposes, might be to benchmark it outside of robocode using pre-saved data. It would be a lot faster than running battles.

Rednaxela, 14:55, 24 September 2012
 
 
 

I tried something like this for gun, but it didn't really work out. I ended up getting much better results using a square kernel. From Wikipedia, if you assume that your distribution is Gaussian then the optimal bandwidth would be:

<math>h = \left(\frac{4\hat{\sigma}^5}{3n}\right)^{\frac{1}{5}} \approx 1.06 \hat{\sigma} n^{-1/5}</math>, where <math>\hat{\sigma}</math> is the standard deviation of the samples and <math>n</math> is the number of samples.

Perhaps this would work well for movement, where there is much less data to work with. It might also be necessary to add some sanity checks to the calculated h value in case there is only 1 or 2 samples, etc.
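A sketch with those sanity checks folded in (the fallback and floor values are just guesses):

<pre>
// Silverman's rule of thumb, h = 1.06 * sigma * n^(-1/5), with basic guards
// for tiny sample counts. The fallback bandwidth is an arbitrary assumption.
static double silvermanBandwidth(double[] samples) {
    int n = samples.length;
    if (n < 3) {
        return 0.2; // not enough data to trust a sigma estimate; fallback guess
    }
    double mean = 0;
    for (double s : samples) mean += s;
    mean /= n;
    double var = 0;
    for (double s : samples) var += (s - mean) * (s - mean);
    double sigma = Math.sqrt(var / (n - 1));
    double h = 1.06 * sigma * Math.pow(n, -0.2);
    return Math.max(h, 0.01); // floor in case all samples are (nearly) identical
}
</pre>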

Of course, I'm fairly sure our distributions are not at all Gaussian, or even uni-modal, so this formula might not be relevant at all.

Skilgannon, 12:00, 23 September 2012

I've considered something like this before, but when you have fewer than 20 samples or so, your estimate of the standard deviation itself is going to have a large amount of uncertainty. I suspect one would need to determine the "typical" standard deviation for most bots, use that as the initial value, and slowly transition to a value calculated from the data.
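For example, something like this blend (the prior values are illustrative guesses):

<pre>
// Sketch: shrink the sample standard deviation toward a prior "typical" value
// when n is small. PRIOR_SIGMA and PRIOR_WEIGHT are illustrative guesses.
static final double PRIOR_SIGMA = 0.25;   // "typical" spread, in GF units
static final double PRIOR_WEIGHT = 20.0;  // acts like 20 pseudo-samples

static double blendedSigma(double sampleSigma, int n) {
    return (PRIOR_WEIGHT * PRIOR_SIGMA + n * sampleSigma) / (PRIOR_WEIGHT + n);
}
</pre>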

Regarding the distribution not being Gaussian, indeed it wouldn't be... but I think that formula may still be roughly applicable if we apply a correction factor for how the uncertainty gets smaller near the maximum escape angle, perhaps modeled on the behavior of the binomial distribution near the edges.

Rednaxela, 16:26, 23 September 2012

I actually think this formula might not adapt to multimodal distributions very well at all, because it will try to adapt the bandwidth to fit to one big, centered danger, which may not be a reasonable assumption to make, considering things like segmented VCS buffers and different ways of measuring similar attributes (vel+offset vs latvel+advvel comes to mind) in the gun systems we're trying to dodge.

Maybe clustering the logs into different groups based on their GFs, then using this formula on each of those groups, with each set of logs having its own bandwidth for the final kernel density estimate, would be more effective.
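A rough sketch of that idea: split the sorted GF logs wherever there's a large gap, then compute a bandwidth per cluster (the gap threshold is an arbitrary guess):

<pre>
// Sketch: split sorted GF logs into clusters wherever there is a gap larger
// than GAP, then compute a per-cluster bandwidth. GAP is an arbitrary guess.
static final double GAP = 0.15;

static java.util.List<double[]> clusterByGf(double[] gfs) {
    double[] sorted = gfs.clone();
    java.util.Arrays.sort(sorted);
    java.util.List<double[]> clusters = new java.util.ArrayList<>();
    int start = 0;
    for (int i = 1; i <= sorted.length; i++) {
        if (i == sorted.length || sorted[i] - sorted[i - 1] > GAP) {
            clusters.add(java.util.Arrays.copyOfRange(sorted, start, i));
            start = i;
        }
    }
    return clusters; // feed each cluster to the rule-of-thumb formula separately
}
</pre>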

Skilgannon, 16:58, 23 September 2012

Hmm... at least with sufficient data I'd agree that applying bandwidth estimation separately to different clusters would make sense. The cases where the appropriate bandwidth changes most significantly would be when there's limited data though (each additional data point doesn't change uncertainty as much once there are already many data points).

Rednaxela, 17:05, 23 September 2012
 
 
 

I do think the idea is sound, with a larger bandwidth reflecting the uncertainty of your data. I think many of our systems may already be tuned around this - e.g., if you have your simple trees/buffers enabled by default and then enable more and more sets of stats as the enemy hit % goes up, as many of us do to varying degrees, you gain a smoothing effect as the enemy hit % goes up. (Like you mentioned with your flattener.)

I did something recently with a somewhat similar insight - I have a "base danger" added to the statistical danger, and it's proportional to the enemy hit percentage. The idea is that the more an enemy is hitting you, the more it's true that any area is going to have some danger, no matter what your current stats say, and in that case it's better to give more weight to other factors, like bullet shadows and distancing. Interestingly, with a danger that's normalized to the range 0 to 1, the optimal formula I found was using the raw hit probability (like .12 for 12%), with no multiplier or anything.
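In code, that amounts to roughly this (names illustrative):

<pre>
// Sketch of the "base danger" idea: add the raw enemy hit probability on top
// of the (0..1 normalized) statistical danger. Names are illustrative.
static double combinedDanger(double normalizedStatsDanger, double enemyHitRate) {
    // enemyHitRate is the raw probability, e.g. 0.12 for a 12% hit rate
    return normalizedStatsDanger + enemyHitRate;
}
</pre>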

Voidious, 01:35, 24 September 2012

Hmm... interesting... Question though, how are you normalizing the danger to the 0-to-1 range? Area, or peak?

If you're normalizing area, and the x axis ranges from 0 to 1, then the raw hit probability for the base danger makes a lot of sense to me. If your normalization before integrating over botwidth is by area, then the danger should represent the probability of being hit. Then, assuming that your surfing reliably gets you to what is normally near-zero danger most of the time, the additional baseline probability of being hit should be the hitrate... The way of handling it that I see as making the most theoretical sense would be (hitProbabilityFromStats + actualHitProbability * (1 - hitProbabilityFromStats)). If one wants to make it even more accurate... take the average "hitrate from stats" for where your robot ends up at each wave, and remove that from the actual hitrate. I think that should result in even better modeling of that baseline danger...
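As a sketch of that combination (the last parameter is the refinement mentioned at the end):

<pre>
// Sketch: combine the per-wave hit probability from stats with a baseline
// hit probability, treating them as (roughly) independent events.
static double combinedHitProbability(double hitProbabilityFromStats,
                                     double actualHitRate,
                                     double avgStatsHitRateWhereWeEndUp) {
    // Remove the part of the observed hit rate that the stats already explain.
    double baseline = Math.max(0, actualHitRate - avgStatsHitRateWhereWeEndUp);
    return hitProbabilityFromStats + baseline * (1 - hitProbabilityFromStats);
}
</pre>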

Rednaxela, 15:00, 24 September 2012
 

My danger doesn't cover an area, though I tried that quite a bit recently and was disappointed I couldn't make it work better. For each angle in my log, I have danger of: weight * power(some base, -abs(angle diff) / bandwidth), then I divide out the total weight. I originally normalized it for the sake of multi-wave surfing - otherwise, if you're weighting by inverse scan distance, one wave can easily dominate the danger calculation just because you've been hit by a more similar wave before.

(Ninja edit: bandwidth is proportional to precise intersection bot width.)
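Roughly, in code (names made up for illustration, and the proportionality constant set to 1):

<pre>
// Sketch of the per-angle danger described above: an exponential kernel per
// logged angle, normalized by total weight, with bandwidth proportional to
// the precise-intersection bot width. BASE and the names are illustrative.
static final double BASE = 2.0;

static double waveDanger(double evalAngle, double[] angles, double[] weights,
                         double botWidthAngle) {
    double bandwidth = botWidthAngle; // "proportional to", constant of 1 here
    double danger = 0, totalWeight = 0;
    for (int i = 0; i < angles.length; i++) {
        danger += weights[i] * Math.pow(BASE, -Math.abs(evalAngle - angles[i]) / bandwidth);
        totalWeight += weights[i];
    }
    return totalWeight > 0 ? danger / totalWeight : 0;
}
</pre>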

Voidious, 20:18, 24 September 2012

Hmm... I see. It seems to me that it's easier to factor the baseline hit probability in logically, when the danger calculations work in terms of estimated hit probability.

On the topic of the page this talk page is for... With regards to it not working as well as you would have liked when integrating over an area, I wonder if that might have been related to tuning of bandwidth that is applied before integrating over the area.

Rednaxela, 21:39, 24 September 2012
 

I tried using area-normalisation a while ago on my VCS bins, and it consistently gave worse results than height-normalisation. I've got no idea why, but I might give it another shot and separate out the 'future danger' from the 'wave danger' like you said. After all, it does make more sense statistically, but if there's one thing Robocode has taught me, it's that 'statistically correct' often doesn't work as well as some tweaked variant =)

Skilgannon, 18:48, 25 September 2012

Mm... indeed "statistically correct" often doesn't work as well... though in my view in all of those cases, there's probably a way to get the same gains without compromising statistical correctness. It's just a matter of finding out precisely why the "statistically incorrect" version worked better, which can be quite non-trivial.

Rednaxela, 18:58, 25 September 2012