We were talking about Virtual Guns and Flatteners. Neither GresSuffurd nor Nene uses either (as far as I am aware).
I never said that GresSuffurd was the highest-ranked VCS bot. If I recall correctly, DrussGT doesn't use DC for its movement.
Yup, DrussGT still uses 'stone-age' VCS for the movement =) I've tried DC several times, but I just couldn't get the same performance. Maybe DC using range-based searches instead of KNN would work better; I seem to remember Jdev saying that worked a lot better for him in movement.
For both movement and targeting in XanderCat, I use a KDTree to store both hits (bullet hits target) and visits (where the robot was at the time the bullet would have hit, whether it actually did or not). When moving or aiming, a certain number of hits are pulled from the tree using a KNN search. Those hits are then dumped into a factor array (with a little smoothing for movement but not targeting). And then the lowest (or highest, for targeting) area of the factor array is where I drive to (or shoot). Visits are only used in my movement, and only when my flattener is active. When talking about DC and VCS, what would you consider this to be? I've always thought of it as something of a hybrid, but I don't know if I clearly understand how you all define these terms.
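For the curious, the hits-into-a-factor-array step might look something like this. A minimal sketch with made-up names (`bestBin`, `BINS`, the triangular smoothing), not XanderCat's actual code:

```java
import java.util.Arrays;
import java.util.List;

public class FactorArrayDemo {
    static final int BINS = 47;

    // Smooth each KNN neighbor's guess factor (in [-1, 1]) into a bin
    // array with a triangular kernel, then return the index of the
    // highest bin (aim there for targeting; for movement you'd instead
    // seek the lowest area).
    static int bestBin(List<Double> neighborGFs, int smoothRadius) {
        double[] bins = new double[BINS];
        for (double gf : neighborGFs) {
            int center = (int) Math.round((gf + 1.0) / 2.0 * (BINS - 1));
            for (int b = Math.max(0, center - smoothRadius);
                     b <= Math.min(BINS - 1, center + smoothRadius); b++) {
                // Weight falls off linearly with distance from the center bin.
                bins[b] += 1.0 - (double) Math.abs(b - center) / (smoothRadius + 1);
            }
        }
        int best = 0;
        for (int b = 1; b < BINS; b++) if (bins[b] > bins[best]) best = b;
        return best;
    }

    public static void main(String[] args) {
        // Two neighbors clustered near GF 0.5 and one outlier at GF -1:
        // the peak lands on the cluster, not the outlier.
        List<Double> gfs = Arrays.asList(0.5, 0.52, -1.0);
        System.out.println(bestBin(gfs, 2));
    }
}
```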
I'd definitely call that KNN (or DC as we do on the wiki), you are just using a discrete kernel density estimation to find your peaks/troughs. Your algorithm would give identical results if, instead of smoothing into bins, you reversed the loops and checked each angle for a 'score' by evaluating its nearness to previous data, which is just discrete kernel density estimation through sampling. VCS refers to the way that the data is gathered in the first place, smoothing the data into segmented bins as it is collected.
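To sketch that equivalence: with the loops reversed, you score each candidate angle directly by its kernel density against the stored data points, no bins involved. Illustrative names, and I'm assuming a triangular kernel here:

```java
public class KernelScore {
    // Triangular kernel: weight falls off linearly to zero at the bandwidth.
    static double kernel(double diff, double bandwidth) {
        return Math.max(0.0, 1.0 - Math.abs(diff) / bandwidth);
    }

    // Density at a candidate guess factor, summed over the stored GFs.
    // This is the 'score' for that angle; no binning step is needed.
    static double density(double candidate, double[] dataGFs, double bandwidth) {
        double sum = 0.0;
        for (double gf : dataGFs) sum += kernel(candidate - gf, bandwidth);
        return sum;
    }

    public static void main(String[] args) {
        double[] data = {0.4, 0.5, 0.6, -0.9};
        double best = -1.0, bestScore = -1.0;
        // Sample candidate angles across [-1, 1] and keep the densest one.
        for (double c = -1.0; c <= 1.0; c += 0.05) {
            double s = density(c, data, 0.2);
            if (s > bestScore) { bestScore = s; best = c; }
        }
        System.out.printf("peak near %.2f%n", best);
    }
}
```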
DrussGT doesn't use bins until it's time to surf (like you), instead just keeping a list of the last 2*rollingDepth hits for that buffer, but I did the math so that it is functionally equivalent with rolloff and weights, and as such I still call it VCS.
As I've seen it used, the "DC" terminology applies to any KNN-based system. I would classify XanderCat's strategy, as you describe it, as being part of the group "K-Nearest-Neighbor, Guessfactors, with Bins".
Personally I dislike the "Dynamic Clustering" term because it's a bit of a misnomer in my eyes. While ABC coming across this technique and bringing it to Robocode was inspired by clustering algorithms, k-nearest-neighbor search is not a clustering algorithm.
Oh that's what you meant.
I agree with Red that DC is kind of a misnomer. It was similar to k-means with only one cluster, which is how it got its name. But DC is really just a KNN system.
In that case I want to pass WaveSerpent, as that is the highest ranked bot without any DC in its gun, movement or radar ;-)
Eh? The only way I can think of a K-NN search being useful for radar... would be in a field that is much larger than radar range, to predict the angle the opponent is most likely to come back into view at
Or if you're being very aggressive about an optimal radar, guessing at their movement instead of going as far as they could have gotten. :-) But GrubbmGait might have been making a subtle joke...
Nene uses a kNN for estimating the bullet power of the enemy (for its heat waves).
How do you use kNN to determine what bullet power to fire?
Most energy management strategies are based on estimating hit rate and then using high bullet powers with high hit rates and low bullet powers with low hit rates.
When calculating kernel density, Combat uses a uniform kernel and checks the density. If there are k waves and x uniform kernels overlapping, it assumes the hit rate is x/k. Then the bullet power is adjusted accordingly.
Works well in melee. High bullet powers against tight packs of bots, and low bullet powers against lone bots. Low bullet powers against flat profiles, and high bullet powers when spikes are detected.
Maybe simply counting how many bullets hit and how many miss is more accurate, but kernel density is cool.
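For anyone who wants to see the x/k idea concretely, here's a minimal sketch of the uniform-kernel estimate plus a hit-rate-to-power mapping. The names and thresholds are illustrative, not Combat's actual code:

```java
public class UniformKernelHitRate {
    // Fraction of the k stored firing angles whose uniform kernel
    // (width 2 * halfWidth) covers the candidate angle: x / k.
    static double estimateHitRate(double[] storedAngles, double angle, double halfWidth) {
        int x = 0;
        for (double a : storedAngles)
            if (Math.abs(a - angle) <= halfWidth) x++;
        return (double) x / storedAngles.length;
    }

    // Map estimated hit rate to bullet power: confident -> heavy bullets,
    // unsure -> conserve energy. Thresholds here are made up.
    static double choosePower(double hitRate) {
        if (hitRate > 0.33) return 3.0;
        if (hitRate > 0.15) return 1.9;
        return 0.5;
    }

    public static void main(String[] args) {
        // Four of five stored angles cluster around 0.10 -> hit rate 0.8.
        double[] angles = {0.10, 0.12, 0.11, -0.60, 0.09};
        double hr = estimateHitRate(angles, 0.10, 0.03);
        System.out.println(hr + " -> power " + choosePower(hr));
    }
}
```

With no data, x is zero, so this naturally falls back to the worst-case assumption mentioned below.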
That it is. So basically you fire a more powerful bullet when your robot is more confident that the bullet will successfully hit the opponent. Possibly based on hit rate of previous shots. That isn't a bad way to do it. How do you choose when you have no data?
With little or no data, it assumes worst case. k is always constant and with no data, x is always zero. It needs at least k waves to have any chance of estimating 100% hit rate.
The strategy works as long as the opponent does not use adaptive movement, because if it does, spikes in the movement profile are usually the angles it avoids the most.
I spent a bunch of time trying to optimize "higher hit rates = higher bullet power", several times at different points in Diamond's history, even. Even the most conservative of these always performed worse than my hand-tuned formulas. I presumed this was due to some combination of:
- It's advantageous to use a consistent bullet power for more accurate data classification.
- It's advantageous to prioritize survival at the expense of the bullet damage gained by more aggressive bullet powers.
I've done some work on algorithms to give optimal firing power. My method itself seems quite effective for the most part except there are a few caveats:
- Optimizing for score, optimizing for survival, and optimizing for PL yield rather different results for bullet power. The score vs survival difference is obvious, but optimizing for PL vs score is also important because the risk/reward tradeoffs are different depending on the score thus far in the match.
- It depends on accurate numbers for hit rates. For this I've implemented hit-rate calculations which factor out the impact of distance on hit rate, but being able to incorporate other aspects would be better.
- Ideally you ought to try to model future hit rates by observing how the hit rates change over the course of the battle; however, because the amount of data is limited, this can be difficult. It may be best to assume the hit rates follow a certain curve until the battle shows otherwise.
- My modeling of outcomes at different bullet power values is easiest to implement as a Monte Carlo simulation of the rest of the battle. Implementing this in a bot like I have in the past causes a dilemma: with few iterations its accuracy is very poor, but with many iterations it starts to use a very large portion of the bot's available CPU time.
- One could approximate this modeling by curve fitting it, though that could get a bit messy because of how many input parameters influence it. In my next bot I will probably use a curve-fitted approximation of my bullet-power modeling.
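To give a flavor of what such a Monte Carlo rollout could look like: for each candidate power, simulate the rest of the duel many times under assumed hit rates and keep the power with the best average score. This is a toy sketch under heavy simplifications (fixed hit rates, alternating shots, a crude score function); only the damage and energy-return formulas follow the Robocode rules, and none of the names are from a real bot:

```java
import java.util.Random;

public class PowerMonteCarlo {
    static double chooseBestPower(double myEnergy, double enemyEnergy,
                                  double myHitRate, double enemyHitRate,
                                  int iterations, long seed) {
        double[] candidates = {0.5, 1.0, 1.95, 2.5, 3.0};
        double bestPower = candidates[0], bestScore = Double.NEGATIVE_INFINITY;
        for (double p : candidates) {
            Random rng = new Random(seed);   // same seed per candidate for fairness
            double total = 0.0;
            for (int i = 0; i < iterations; i++)
                total += simulate(myEnergy, enemyEnergy, p, myHitRate, enemyHitRate, rng);
            double avg = total / iterations;
            if (avg > bestScore) { bestScore = avg; bestPower = p; }
        }
        return bestPower;
    }

    // One rollout: both bots alternate shots at fixed hit rates until one
    // runs out of energy. Returns a crude score: bullet damage dealt plus
    // a flat survival bonus.
    static double simulate(double me, double enemy, double myPower,
                           double myHitRate, double enemyHitRate, Random rng) {
        final double enemyPower = 1.95;      // assume a fixed enemy power
        double damageDealt = 0.0;
        for (int turn = 0; turn < 1000 && me > 0 && enemy > 0; turn++) {
            me -= myPower;                   // firing costs the bullet power
            if (rng.nextDouble() < myHitRate) {
                // Robocode damage: 4*p, plus 2*(p-1) extra above power 1.
                double dmg = 4 * myPower + (myPower > 1 ? 2 * (myPower - 1) : 0);
                enemy -= dmg;
                damageDealt += dmg;
                me += 3 * myPower;           // energy returned on a hit
            }
            if (enemy <= 0) break;
            enemy -= enemyPower;
            if (rng.nextDouble() < enemyHitRate)
                me -= 4 * enemyPower + 2 * (enemyPower - 1);
        }
        return damageDealt + (me > 0 && enemy <= 0 ? 50.0 : 0.0);
    }

    public static void main(String[] args) {
        // With a clear accuracy edge (30% vs 12%), see which power wins out.
        System.out.println(chooseBestPower(100, 100, 0.30, 0.12, 2000, 42L));
    }
}
```

The CPU dilemma above shows up directly here: the `iterations` parameter trades accuracy against time per tick.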
Part of what really has me feeling good about accurate modeling of outcomes caused by bullet power, is that my implementations have had emergent behavior which I didn't expect at all but makes sense in retrospect. For example, it turns out that if it's almost 100% certain that the opponent will win a particular round, it can be worth it to dump remaining energy into high power bullets to deny the enemy bullet damage score.
I'd love to see direct data of how it compares to what Diamond and DrussGT have right now. I also have well-normalized hit rate data, and even things that seem extremely safe and advantageous, like "use power 3.0 in situations where I expect a hit rate over 25% with 95% confidence" hurt me, or at least didn't help.