APM?
Well, I think the only noticeable difference in how I handle the flattener now is that I normalize buffers based on area instead of max ;) AFAIK Neuromancer is also doing this, although without a flattener or multi-trees (but he is dealing with multiple enemies ;) ).
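To make that concrete, here is a minimal sketch of the two normalization schemes; the method names and the plain double[] buffer are illustrative, not taken from Knight or Neuromancer:

```java
// Hypothetical sketch: normalizing a guess-factor danger buffer by its max
// vs. by its area (sum of all bins). Area normalization gives every buffer
// the same total weight, so a buffer with one sharp spike no longer
// dominates a buffer whose danger is spread over several bins.
final class BufferNormalization {
    static double[] normalizeByMax(double[] buffer) {
        double max = 0;
        for (double v : buffer) {
            max = Math.max(max, v);
        }
        double[] out = new double[buffer.length];
        for (int i = 0; i < buffer.length; i++) {
            out[i] = max > 0 ? buffer[i] / max : 0;
        }
        return out;
    }

    static double[] normalizeByArea(double[] buffer) {
        double sum = 0;
        for (double v : buffer) {
            sum += v;
        }
        double[] out = new double[buffer.length];
        for (int i = 0; i < buffer.length; i++) {
            out[i] = sum > 0 ? buffer[i] / sum : 0;
        }
        return out;
    }
}
```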
My flattener also uses time-since-decel, as I found my surfers had a weakness in that attribute in the past (those were VCS surfers, though).
Btw, were you doing that well against PMs in the past, or only after the recent updates? I'm wondering whether flatteners help against PMs ;) since I've long suffered from the flattener occasionally turning on against PMs, which hurts.
I was already doing better than ScalarBot, both with the old flattener and with no flattener at all, but yeah, adding the flattener gave a tremendous improvement.
I always thought strong APM came from lots of temporal attributes. It also makes sense that it would come from a small k size, which prevents always dodging the same points and building repeated patterns.
I really tried to drop my trees and use a single one, but I couldn't figure it out yet. I still use a bunch of trees; one is 1-NN with a higher weight and bigger decay, probably approximating the effect Druss' buffers have, which seems to work nicely against PMs.
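Roughly what that blend could look like; the Tree interface, k values, weight, and decay half-life here are all made up for illustration, not the actual bot's numbers:

```java
import java.util.List;

// Sketch: combine a "slow" main tree queried with a normal k and a 1-NN
// tree whose single vote gets a higher weight and a faster decay, so it
// chases the most recent hits a bit like a low-bin rolling VCS buffer.
final class MultiTreeDanger {
    static final class Hit {
        final double guessFactor;
        final long time;
        Hit(double guessFactor, long time) {
            this.guessFactor = guessFactor;
            this.time = time;
        }
    }

    interface Tree {
        // Returns the k logged hits nearest to the current situation.
        List<Hit> nearest(double[] situation, int k);
    }

    static double danger(double gf, double[] situation, long now,
                         Tree mainTree, Tree oneNnTree) {
        double danger = 0;
        // Main tree: many neighbors, unit weight, no decay.
        for (Hit h : mainTree.nearest(situation, 20)) {
            danger += kernel(gf - h.guessFactor);
        }
        // 1-NN tree: one neighbor, higher weight, exponential decay.
        for (Hit h : oneNnTree.nearest(situation, 1)) {
            double age = now - h.time;
            danger += 3.0 * Math.exp(-age / 500.0) * kernel(gf - h.guessFactor);
        }
        return danger;
    }

    // Placeholder smoothing kernel; see the Gaussian sketch further down.
    static double kernel(double x) {
        return Math.exp(-0.5 * (x / 0.1) * (x / 0.1));
    }
}
```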
That makes sense: pattern matchers do better when you have repeatable patterns, so some aliasing (e.g. a small k size) in the surfing stats helps. But yes, low repeatability in surfing stats ought to be the main point. I've been tuning against RaikoMicro too much, and each time that makes me create more repeatable (though exploitative) patterns. However, this gain simply makes me exploitable against PMs & top guns.
Another important thing may be the smoothing function. I've been using a Gaussian with very low bandwidth (actually only 1.5x bot width), which makes my surfing stats pretty local (not affected by dangers far away, or even nearby). Then, when some stats are slow to change (since I have a bunch of persistent stats), there could be even more repeatable patterns. Doh!
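In kernel terms, that Gaussian looks something like the sketch below; botWidthGf (the bot's width converted to guess-factor units) and the hit list are assumed inputs, not actual code from the bot:

```java
// Gaussian smoothing in guess-factor space, with the bandwidth tied to
// 1.5x bot width as described above.
static double gaussianDanger(double gf, double[] hitGfs, double botWidthGf) {
    double bandwidth = 1.5 * botWidthGf;
    double danger = 0;
    for (double hit : hitGfs) {
        double u = (gf - hit) / bandwidth;
        // exp(-u^2/2) decays fast: a hit three bandwidths away contributes
        // about 1% of a direct hit, which is why the stats stay so local.
        danger += Math.exp(-0.5 * u * u);
    }
    return danger;
}
```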
My past experiments told me the same story, but again I preferred the Gaussian, got overly precise bullet dodging, and then lost to everything else.
Then I got why the 1-NN movement does pretty well against top guns: it aliases heavily, and it replaces the nearest neighbor each time new data comes in. And it uses 1 / x^2 as well.
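For comparison with the Gaussian above, here is the inverse-square kernel in the guarded 1 / (1 + u^2) form often seen in Robocode code; whether this particular bot adds the 1 is an assumption:

```java
// Inverse-square kernel: the +1 avoids division by zero at u = 0. Its heavy
// tails keep even distant hits contributing some danger, so it aliases far
// more than the narrow Gaussian above.
static double inverseSquareDanger(double gf, double[] hitGfs, double bandwidth) {
    double danger = 0;
    for (double hit : hitGfs) {
        double u = (gf - hit) / bandwidth;
        danger += 1.0 / (1.0 + u * u);
    }
    return danger;
}
```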