Head-to-head
I've been focusing a lot on beating DrussGT and Shadow lately and just wanted to give you props on how incredibly strong DrussGT is. Nothing I do to movement or gun seems to have any positive effect, like there's just some magical element I am unaware of. I tested with my flattener or Anti-Surfer gun hard-coded on, and those too had almost no effect - the flattener helped slightly, giving me my best score of 46.6% over 55 battles, up from the usual 45-46 range. Definitely need to put my Thinking Cap on over here. =)
Haha, thanks. I'm putting in work on my side as well, I'm experimenting with using DC for movement. My best version loses around 0.4% on the MC2K7 vs 2.2.0, mostly against the top bots, so I haven't released anything yet as I don't want to lose my new PL crown =)
Oh, neat! DC surfing requires some serious thought after years of doing things in VCS ways. I'm still working some fairly basic things out, after all this time. It gets you to really analyze how/why aspects of your VCS setup worked. Actually, I think I had an important realization just last night...
0.4% doesn't sound like much. :-P But then I don't think MC2K7 is a particularly robust way to measure a drastic movement change, either.
Interesting, no matter what I tried I couldn't get it better than my 3rd try (the -0.4% one). I guess I'll have to stick with my VCS for now... it's just that DC would be much more suitable for an idea I had...
Sounds familiar. =) You sure you ran enough seasons? Sometimes I convince myself I ran enough and then waste lots of cycles trying to match what was actually just a lucky score...
100 seasons, which should be plenty... minor tuning changes score slightly less but in the same region. Maybe I'll try identical code and see what happens...
My intuition is that, so long as the magnitudes the dimensions are weighted with are similar, the most likely sources of loss/differences between VCS and KNN would be:
- Non-linear spacing of segments in the VCS. In order to achieve maximally similar results between methods, you need to perform transforms on the dimensions to approximate the effect of any non-linear spacing of segments in the VCS (see the sketch after this list).
- Insufficient number of KNN data points used when the data is dense (late in battle). The number of points used should probably grow as the data becomes denser, so the density of the points returned is a reasonable guide for how many points to use.
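To make the first point concrete, here's a rough Java sketch of the kind of dimension transform I mean - the attribute names and the curve are purely illustrative, not taken from any real bot:

```java
// Illustrative only: map a raw attribute onto [0, 1] with a curve that mimics the
// non-linear segment boundaries a VCS buffer might have used for the same attribute.
public class AttributeTransforms {

    // Linear normalization - matches evenly spaced VCS segments.
    static double linearDistance(double distance, double maxDistance) {
        return Math.min(distance / maxDistance, 1.0);
    }

    // Non-linear normalization - compresses the long-range end, so situations that a
    // coarsely segmented buffer lumped together also end up close in KNN space.
    // The exponent is a tuning knob, not a known-good value.
    static double curvedDistance(double distance, double maxDistance) {
        return Math.pow(Math.min(distance / maxDistance, 1.0), 0.5);
    }
}
```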
Have you looked into these factors, Skilgannon?
I'm going to try the non-linear thing just now - good call as I had forgotten about this despite spending quite a while tuning it in my gun.
I'm currently weighting based on distance to the location point as a function of the average distance of the closest 3 points - it works quite well but I wouldn't be surprised if there were improvements which could be made.
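Roughly, the idea looks something like this - the Gaussian roll-off here is just a stand-in, not what I actually run:

```java
public class NeighbourWeighting {
    // distances are assumed sorted ascending (nearest neighbour first)
    static double[] neighbourWeights(double[] distances) {
        // Average distance of the closest 3 points gives a local density scale.
        int n = Math.min(3, distances.length);
        double avgClosest = 0;
        for (int i = 0; i < n; i++) {
            avgClosest += distances[i];
        }
        avgClosest = Math.max(avgClosest / n, 1e-9);  // avoid divide-by-zero

        double[] weights = new double[distances.length];
        for (int i = 0; i < weights.length; i++) {
            // Roll-off relative to local density; the exact shape is a placeholder.
            double d = distances[i] / avgClosest;
            weights[i] = Math.exp(-0.5 * d * d);
        }
        return weights;
    }
}
```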
While I think those points are valid, I somewhat disagree with their importance.
- I've experimented with different scaling of attribute differences, but never to any major success, in gun or movement. I'm currently not doing this anywhere in Diamond.
- If the data is dense anywhere in the graph of your movement data, it probably means you're getting hit a lot by a learning gun, at which point a much bigger issue is modeling data decay intelligently. Experience has shown that in VCS, stat buffers of varying depths with a generally low rolling average works well. There's no direct way to translate that to a DC setup.
Personally, I'd say that intelligently modeling data decay in DC surf stats is probably the biggest hurdle in converting from VCS. It's actually one of the main things I'm still tinkering with. I'm pretty happy with the setup I've arrived at in Diamond, but I think there's still a lot of room for improvement. I'd be happy to go into more detail about that if anyone's interested.
Here are my thoughts on those aspects.
The approach to data decay I took in RougeDC was to have an "index" dimension which continually counted up. This is kind of mean/nasty to the kd-tree performance, but as far as KNN search I think it's a very natural way to model decay.
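As a rough sketch (the attribute names and the time weight are made up, not RougeDC's actual values), building a point looks something like:

```java
// Append a monotonically increasing counter, scaled by a weight, as an extra dimension
// so older points are simply "further away" in the KNN search.
public class DecayPoint {
    static final double TIME_WEIGHT = 0.01;  // illustrative; tuning this is the hard part

    static double[] buildPoint(double lateralVelocity, double distance,
                               double wallAhead, long scanIndex) {
        return new double[] {
            lateralVelocity,
            distance,
            wallAhead,
            scanIndex * TIME_WEIGHT  // decay dimension: counts up for the whole battle
        };
    }
}
```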
Regarding varied depths, I'm pretty sure the depth of VCS segmentation is closely analogous to the number of KNN points used and how they are weighted. The way to match that aspect of a VCS system is to mix the results of varied numbers of points under varied weightings. Since processing the same points multiple times is redundant, it simplifies to this: to get the same effect as varied-depth VCS, work on how your weighting of KNN points rolls off, and use enough KNN points that the weighting rolls off properly before the limit on the number of points is reached.
I don't know if you were referring to this, Voidious, but with regards to having many stat buffers as some like DrussGT do, my experience is that you get the same effect by performing antialiasing and interpolation. This implies to me that the primary reason "many stat buffers" are effective for traditional VCS is that they act as a sort of accidental stochastic antialiasing. A KNN approach implicitly needs no antialiasing/interpolation, so that aspect of VCS setups does not need to be replicated.
(We might just move this thread to Talk:Wave Surfing at some point...)
Well, I agree that much of the value of multiple VCS buffers is covered inherently by a DC system: smoothing of the data and scaling to different amounts of data (eg, no need for a quick-learning unsegmented buffer in DC). So while my best VCS gun has a few stat buffers (and your best does anti-aliasing / interpolation), my best DC gun has only one tree. But while I wouldn't expect to have 100 trees in a DC movement, I do have more than one and I think there's value in it.
As far as data decay in a DC system, I think the progression from simplistic to sophisticated goes something like this:
- Weighting by age. This is not without merit, but is pretty crude. An old piece of data may still be the most recent for that situation.
- Capping number of data points, deleting old ones. Also pretty crude, but effective if it's very important to emphasize recent data. I still do this in parts of Diamond.
- Within the set of nearest points, sort chronologically and weight by rank. This is about as close as you can get to how rolling average works in a VCS segment. I weight data by 1 / (base ^ sort position). So with a base of 2, they're weighted 1, .5, .25, ... . A base of 2.4 is about equivalent to a rolling average of 0.7.
- Use multiple, exponentially increasing values of k (say 1/4/15/50), with each set of data weighted by chronological order. This emulates having stat buffers of increasing segmentation depths, each with a rolling average. The deepest set of segmentation is akin to taking a low k nearest neighbors search, while an unsegmented buffer would use the max value of k.
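As a rough illustration of the last two items - this is a sketch under my description above, not Diamond's actual code, and the base and k values are just examples:

```java
import java.util.Arrays;
import java.util.Comparator;

// Weight the k nearest neighbours by chronological rank (newest first), 1 / base^rank.
// To emulate buffers of different depths, run this for several k values (e.g. 1/4/15/50)
// and sum the resulting danger estimates.
public class ChronologicalWeighting {

    static double[] rankWeights(long[] scanTimes, double base) {
        Integer[] order = new Integer[scanTimes.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Sort indices so the most recent scan comes first.
        Arrays.sort(order, Comparator.comparingLong((Integer i) -> scanTimes[i]).reversed());

        double[] weights = new double[scanTimes.length];
        for (int rank = 0; rank < order.length; rank++) {
            weights[order[rank]] = 1.0 / Math.pow(base, rank);  // 1, 1/base, 1/base^2, ...
        }
        return weights;
    }
}
```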
Lastly, this is just a hunch, but I think another value of combining many different views of your data is that you achieve a safe pseudo-randomness. That is, simply surfing one set of data will make you move more predictably than the sum of a diverse set of viewpoints - at least with a True Surfing algorithm. But surfing that sum of viewpoints is still going to err on the side of dodging bullets accurately, in contrast to a truly random movement.
I need to figure out a way to phase out old scans - however, all my attempts at using time weightings in DC have been utter failures. I just tried another one right now (a counter that increments every tick of the match) and it didn't help either, although maybe I need to weight it lower than I currently do.
I'm thinking maybe using a limited number of points in the tree, and removing the closest when the limit is reached. Then when I need neighbours for creating a buffer I weight the hits by the time they were logged. This should approximate the way VCS does things.
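Very roughly, something like this - the names, the cap, the eviction policy (oldest-first here, rather than removing the closest) and the decay curve are all placeholders, not what I'd actually ship:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Keep at most MAX_POINTS logged scans, evicting when full, and weight retrieved
// neighbours by the tick they were logged.
public class CappedScanLog {
    static final int MAX_POINTS = 2000;
    private final Deque<double[]> scans = new ArrayDeque<>();

    void add(double[] point) {
        if (scans.size() >= MAX_POINTS) {
            scans.removeFirst();  // simplest policy: drop the oldest scan
        }
        scans.addLast(point);
    }

    // Weight a neighbour by how recently it was logged; the logging tick is assumed
    // to be stored as the last element of the point.
    static double timeWeight(double[] point, long currentTime) {
        double age = currentTime - point[point.length - 1];
        return 1.0 / (1.0 + age / 1000.0);
    }
}
```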
I don't like the idea of 'doing it the way VCS does it', even if it works, because we don't exactly know why VCS works so well that way either.
Perhaps, for a simple quick method, take a sqrt(tree_size) cluster and just use the most recent point. Who knows...
I'm not sure about that, I think raw age might be better for trying to emulate enemy guns. They log data all the time, not just when a bullet hits, so if there is a big gap between two bullet hits the last hit may be getting weighted proportionally much higher than it should be. This is something I think VCS does wrong, which can be addressed in DC, if only I could get the scores back up to where they were =)
But it's also the case that a piece of data that's 100x older but 10x closer to the current situation (wrt the rest of the attributes) may be a better estimate of where the enemy is firing in that situation. The ratio of those values is going to depend on how granular the enemy's gun is. So I think considering the situations sorted by time at a bunch of different granularities is a pretty good bet, and I think it's similar to what our VCS systems do with great success.
You have got me thinking again about super-lightweight flatteners against weaker learning guns though. =) It does bother me that they're learning all the time and I'm not!
Btw, I'm up to 49.3% vs DrussGT 2.2.2 now with Diamond 1.6.15 (over 1000 battles). I got up to like 49.7%, but only with some changes that killed my scores too much vs Shadow and Tomcat. I decided to stop spinning my wheels for now and move on to more general improvements. =)
Heh, I was getting worried, so I came out with some changes which may just help in the AS department. My strongest PL version is probably 2.3.7, although I need to figure out what is losing me 0.1 APS since 2.2.2... I'm happy to improve my PL but not at the expense of my APS =)