Effectiveness
Er, precise intersection most certainly handles "how dangerous that part of the wave may be", at least if you take full advantage of the data it returns and integrate the danger over the full range.
The actual precise intersection just gives the angle range of the wave which could contain a bullet and hit you. It's the stats system which says how likely a bullet is to be within this range, and the danger function which adds other information such as wave weighting.
Precise intersection isn't any better at dive protection than just dividing by distance-to-enemy in your danger function. What it's better at is dodging bullets in tight situations where you're reasonably sure where the bullet is =)
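The division of labor described above can be illustrated with a minimal sketch (the names `StatBuffer`, `hitProbability`, and `waveWeight` are mine, not from any actual bot): the precise intersection supplies only an angle range, a stats object turns that range into a hit probability, and the danger function layers on extra weighting such as per-wave importance.

```java
// Hypothetical decomposition: intersection -> stats -> danger.
interface StatBuffer {
    // Probability that a bullet lies within [angleLo, angleHi].
    double hitProbability(double angleLo, double angleHi);
}

class Danger {
    // waveWeight might encode e.g. bullet power or time-to-impact;
    // the stats system, not the intersection, supplies the probability.
    static double of(StatBuffer stats, double lo, double hi, double waveWeight) {
        return waveWeight * stats.hitProbability(lo, hi);
    }
}
```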
Precise intersection is important to dive protection, but what I am trying to say is that the dive protection it provides depends greatly on the bandwidth. If I used some huge bandwidth, say 200*PI radians, my movement would just try to minimize the precise intersection angle regardless of where that angle is; if I used a tiny bandwidth, I would almost ignore the size of the intersection. Obviously there needs to be a balance, and I think the balance changes with different bots. Actually, I used a form of this for a while, where the flattener bandwidth was much larger than the bandwidth for the other classifiers. If I need to enable the flattener, I can't accurately guess where the enemy is shooting. In such a case, minimizing the intersection angle gives better results than trying to dodge bullets which I can't predict.
Re-reading the comments, it sounds like you may not understand how I calculate danger, so here is a quick summary:
1) KNN with several classification schemes.
2) Using the points from step one, save the corresponding angles, a weight for each angle, and a bandwidth for each angle.
3) GoTo wave surfing to find precise intersection range of angles for different movement paths.
4) Use integral of sum of functions from step 2 evaluated over the range of angles from step 3 (with bullet shadows subtracted).
5) Use the safest movement path.
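Steps 2 and 4 above can be sketched roughly as follows. This is my own guess at the shape of the code, not the actual implementation: each KNN match contributes a Gaussian kernel with its own weight and bandwidth, and the danger of a candidate path is the numeric integral of that kernel sum over the precisely intersected angle range.

```java
// Sketch of steps 2 and 4: a sum-of-kernels density, integrated over
// the intersection range from precise prediction (step 3).
class DangerEstimator {
    final double[] angles;      // angles from the KNN matches (step 2)
    final double[] weights;     // a weight for each angle (step 2)
    final double[] bandwidths;  // a bandwidth for each angle (step 2)

    DangerEstimator(double[] angles, double[] weights, double[] bandwidths) {
        this.angles = angles; this.weights = weights; this.bandwidths = bandwidths;
    }

    // Weighted Gaussian kernel centered on each recorded angle.
    double density(double x) {
        double sum = 0;
        for (int i = 0; i < angles.length; i++) {
            double z = (x - angles[i]) / bandwidths[i];
            sum += weights[i] * Math.exp(-0.5 * z * z)
                   / (bandwidths[i] * Math.sqrt(2 * Math.PI));
        }
        return sum;
    }

    // Step 4: midpoint-rule integral of the density over [lo, hi].
    // (Bullet-shadow subtraction omitted for brevity.)
    double danger(double lo, double hi, int steps) {
        double h = (hi - lo) / steps, total = 0;
        for (int i = 0; i < steps; i++) {
            total += density(lo + (i + 0.5) * h) * h;
        }
        return total;
    }
}
```

With this structure, a huge bandwidth makes `danger` roughly proportional to the width of the intersection range, while a tiny bandwidth makes it depend almost entirely on whether a recorded angle falls inside the range, which is exactly the trade-off described above.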
With such an algorithm, I am nearly certain that bandwidth strongly influences how frequently I will change direction due to walls.
Makes sense, and actually that isn't different from what I would have expected your algorithm to be.
I see what you mean in terms of the bandwidth impacting the dive protection provided by precise intersection. I see that as a side effect that's not directly related to it though. If your level of bandwidth is bad for the dive protection provided by precise intersection, it should be bad in general, because either way the goal of optimizing the bandwidth should be to estimate the real probabilities as accurately as possible in the circumstance of sparse information.
I feel like a good experiment would be to do something like... take a large amount of pre-recorded wave data... pick out subsets of this data, and with different bandwidths check how closely each subset's estimate matches the distribution of the whole data set.
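That experiment could be scored like this (my sketch, not an existing tool): build a KDE from a small subset of the recorded wave angles, then measure the average log-likelihood it assigns to the held-out rest of the data; a bandwidth that generalizes well from sparse information scores higher.

```java
// Offline bandwidth benchmark on pre-saved wave angles.
class BandwidthExperiment {
    // Gaussian KDE built from the subset, evaluated at x.
    static double kde(double x, double[] sample, double bw) {
        double sum = 0;
        for (double s : sample) {
            double z = (x - s) / bw;
            sum += Math.exp(-0.5 * z * z) / (bw * Math.sqrt(2 * Math.PI));
        }
        return sum / sample.length;
    }

    // Average log-likelihood of held-out angles under the subset KDE;
    // higher means this bandwidth matches the full distribution better.
    static double score(double[] subset, double[] holdout, double bw) {
        double ll = 0;
        for (double x : holdout) ll += Math.log(kde(x, subset, bw) + 1e-12);
        return ll / holdout.length;
    }
}
```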
Thinking about it some more... the main thing I'm wondering is how to deal with the issue that the ideal bandwidth could vary across different parts of the distribution. One could very well have a situation where, say, in the dense part of a distribution there are enough data points to make it clear that the enemy has VCS bins that are slightly too wide and that you can fit between, but a more sparse part of the distribution also has danger worth considering.
Most of the information I'm seeing in searches for bandwidth estimation for kernel density estimation involves coming up with a single estimate for the whole distribution, which wouldn't be ideal when the data has widely varying density...
Well, I built my surfing algorithm in such a way that that would be possible, so it's a matter of figuring out what the ideal bandwidth would be. My first guess would be to use some function of the distance between the data point and the search point as well as time. But in my opinion this whole idea deserves a lot of research. (I'll ask my professor about using some of the computer labs for robocode)
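One standard way to get a per-point bandwidth that varies with local density, in the spirit of the distance-based guess above, is to scale each data point's bandwidth by the distance to its k-th nearest neighbor (an adaptive/balloon KDE; this sketch and its names are my assumption, not anyone's actual bot code). Dense regions then get narrow kernels while sparse regions get wide ones.

```java
import java.util.Arrays;

// k-nearest-neighbor bandwidth selection: one bandwidth per data point.
class AdaptiveBandwidth {
    // Bandwidth for each point = distance to its k-th nearest neighbor
    // (floored so a run of identical points can't give a zero bandwidth).
    static double[] knnBandwidths(double[] points, int k, double floor) {
        double[] bw = new double[points.length];
        for (int i = 0; i < points.length; i++) {
            double[] d = new double[points.length];
            for (int j = 0; j < points.length; j++) {
                d[j] = Math.abs(points[i] - points[j]);
            }
            Arrays.sort(d); // d[0] == 0 is the point itself
            bw[i] = Math.max(d[Math.min(k, points.length - 1)], floor);
        }
        return bw;
    }
}
```

These per-point bandwidths slot directly into a weights/bandwidths-per-angle surfing setup like the one summarized earlier in the thread.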
I'm thinking that the fastest way of doing research on the ideal method for determining bandwidth for robocode purposes, might be to benchmark it outside of robocode using pre-saved data. It would be a lot faster than running battles.