Effectiveness
Er, precise intersection most certainly handles "how dangerous that part of the wave may be", at least if you take full advantage of the data it returns and integrate the danger over the full intersection range.
In my view, the wider range of angles that bullets may hit you at is the only real "current danger" to include in the danger function. Now, it may be appropriate to also add a "future danger" for how a closer distance may affect waves that don't yet exist, but that should be (rough sketch after this list):
- additive on top of normal danger (because it's not additional risk from current waves, it's a risk due to being close in later waves)
- based on the distance-to-enemy at the end of the last wave currently surfed (the future risk is all after that point in time)
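A minimal, self-contained sketch of that additive split; all names and the particular future-danger curve are illustrative placeholders, not any actual bot's code:

    public class DangerSketch {
        // Danger from waves already in the air, for one candidate path.
        static double currentDanger(double[] perWaveDangers) {
            double sum = 0;
            for (double d : perWaveDangers) sum += d;
            return sum;
        }

        // Future danger as a function of distance-to-enemy at the end of
        // the last surfed wave; the 1/d shape here is just a placeholder.
        static double futureDanger(double endDistance) {
            return 100.0 / Math.max(endDistance, 1.0);
        }

        static double totalDanger(double[] perWaveDangers, double endDistance) {
            // Additive, rather than dividing the current danger by distance.
            return currentDanger(perWaveDangers) + futureDanger(endDistance);
        }
    }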
Dividing by distance-to-enemy is really kind of a fudge factor that IMO doesn't reflect the real dynamics of the situation very well.
Precise intersection is important to dive protection, but what I am trying to say is that the dive protection provided by precise intersection depends greatly on the bandwidth. If I used some huge bandwidth, say 200π radians, my movement would just try to minimize the precise intersection angle regardless of where that angle is; if I used a tiny bandwidth, I would almost ignore the size of the intersection. Obviously there needs to be a balance, and I think the balance changes with different bots. Actually, I used a form of this for a while, where the flattener bandwidth was much larger than the bandwidth for the other classifiers. If I need to enable the flattener, I can't accurately guess where the enemy is shooting. In such a case, minimizing the intersection angle gives better results than trying to dodge bullets which I can't predict.
Re-reading the comments, it sounds like you may not understand how I calculate danger, so here is a quick summary (a sketch of steps 2 and 4 follows the list):
1) KNN with several classification schemes.
2) Using the points from step one, save the corresponding angles, a weight for each angle, and a bandwidth for each angle.
3) GoTo wave surfing to find precise intersection range of angles for different movement paths.
4) Use integral of sum of functions from step 2 evaluated over the range of angles from step 3 (with bullet shadows subtracted).
5) Use the safest movement path.
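To make steps 2 and 4 concrete, here is a rough compilable sketch; the Gaussian kernel shape, the midpoint-rule integration, and all names are illustrative assumptions rather than my exact code:

    public class KernelDanger {
        final double[] angles, weights, bandwidths; // one entry per KNN point (step 2)

        KernelDanger(double[] angles, double[] weights, double[] bandwidths) {
            this.angles = angles;
            this.weights = weights;
            this.bandwidths = bandwidths;
        }

        // Sum of weighted Gaussian kernels evaluated at one firing angle.
        double density(double angle) {
            double sum = 0;
            for (int i = 0; i < angles.length; i++) {
                double z = (angle - angles[i]) / bandwidths[i];
                sum += weights[i] * Math.exp(-0.5 * z * z) / bandwidths[i];
            }
            return sum;
        }

        // Step 4: integrate the density over the precise intersection range of
        // one candidate path. Bullet-shadowed sub-ranges would be subtracted
        // from this range before integrating.
        double danger(double intersectionStart, double intersectionEnd, int steps) {
            double width = (intersectionEnd - intersectionStart) / steps;
            double sum = 0;
            for (int i = 0; i < steps; i++) {
                sum += density(intersectionStart + (i + 0.5) * width);
            }
            return sum * width;
        }
    }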
With such an algorithm, I am nearly certain that bandwidth strongly influences how frequently I will change direction due to walls.
Makes sense, and actually that isn't different from what I would have expected your algorithm to be.
I see what you mean in terms of the bandwidth impacting the dive protection provided by precise intersection. I see that as a side effect that's not directly related to it, though. If your bandwidth is bad for the dive protection provided by precise intersection, it should be bad in general, because either way the goal of optimizing the bandwidth should be to estimate the real probabilities as accurately as possible in the circumstance of sparse information.
I feel like a good experiment would be to do something like... take a large amount of pre-recorded wave data, pick out subsets of this data, and with different bandwidths check how closely the estimate from each subset matches the distribution of the whole data set.
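For example, something like this (purely illustrative: synthetic stand-in data instead of real recorded waves, and mean held-out log-likelihood as one possible measure of "how closely it matches"):

    import java.util.Arrays;
    import java.util.Random;

    public class BandwidthBenchmark {
        // Gaussian KDE built from the sample, evaluated at x.
        static double density(double x, double[] sample, double h) {
            double sum = 0;
            for (double xi : sample) {
                double z = (x - xi) / h;
                sum += Math.exp(-0.5 * z * z);
            }
            return sum / (sample.length * h * Math.sqrt(2 * Math.PI));
        }

        // Mean log-likelihood of held-out angles under the sample's KDE;
        // higher means the bandwidth generalizes better from sparse data.
        static double score(double[] sample, double[] heldOut, double h) {
            double ll = 0;
            for (double x : heldOut) {
                ll += Math.log(Math.max(density(x, sample, h), 1e-12));
            }
            return ll / heldOut.length;
        }

        public static void main(String[] args) {
            // Stand-in for pre-recorded firing angles (GuessFactors in [-1, 1]).
            Random rnd = new Random(42);
            double[] all = new double[2000];
            for (int i = 0; i < all.length; i++) {
                all[i] = Math.max(-1.0, Math.min(1.0, 0.3 * rnd.nextGaussian()));
            }
            double[] sample = Arrays.copyOfRange(all, 0, 50); // a sparse subset
            double[] heldOut = Arrays.copyOfRange(all, 50, all.length);
            for (double h : new double[] {0.01, 0.05, 0.1, 0.2, 0.4}) {
                System.out.printf("h=%.2f  meanLogLik=%.4f%n", h, score(sample, heldOut, h));
            }
        }
    }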
Thinking about it some more... the main thing I'm wondering is how to deal with the issue that the ideal bandwidth could vary across different parts of the distribution. One could very well have a situation where, say, in the dense part of a distribution there are enough data points to make it clear that the enemy has VCS bins that are slightly too wide and that you can fit between, but a more sparse part of the distribution also has danger worth considering.
Most of the information I'm seeing in searches for bandwidth estimation for kernel density estimation involves coming up with a single estimate for the whole distribution, which wouldn't be ideal when the data has widely varying density...
Well, I built my surfing algorithm in such a way that that would be possible, so it's a matter of figuring out what the ideal bandwidth would be. My first guess would be to use some function of the distance between the data point and the search point, as well as time (rough sketch below). But in my opinion this whole idea deserves a lot of research. (I'll ask my professor about using some of the computer labs for Robocode.)
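Something along these lines is what I imagine; the functional form and the constants are pure guesses:

    public class AdaptiveBandwidth {
        static final double BASE = 0.05;       // bandwidth for a perfect match
        static final double DIST_SCALE = 0.5;  // growth per unit of KNN distance
        static final double TIME_SCALE = 1e-4; // growth per tick of data age

        // Per-point bandwidth: widen the kernel for neighbors that are far
        // from the search point or old, since those predict the current
        // situation less precisely.
        static double bandwidth(double knnDistance, long dataAge) {
            return BASE * (1 + DIST_SCALE * knnDistance + TIME_SCALE * dataAge);
        }
    }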
I'm thinking that the fastest way of doing research on the ideal method for determining bandwidth for Robocode purposes might be to benchmark it outside of Robocode using pre-saved data. It would be a lot faster than running battles.