APM?
I was already doing better than ScalarBot, both with the old flattener and with no flattener at all, but yeah, adding the flattener gave a tremendous improvement.
Thanks for that information ;)
After a long time of experimenting, I now think the main thing separating the bots that do well against PMs is not the surfing algorithm... but the way surfing stats are handled. Maybe I should try some more traditional ways before innovating... I've already been dropping old surfing stats (which use 1-nn) since 0.012n1 (and finally got similar performance), so now maybe I should start dropping crowd tree views ;) I currently use three simple views, each with 3~5 attributes, as my main surfing stats; maybe that's why I get hit so badly by PMs (and the top guns as well).
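For concreteness, here is a minimal sketch of what a few simple views combined as a crowd could look like; the attribute indices, bin count, and kernel are placeholders, not Knight's actual configuration:

    import java.util.ArrayList;
    import java.util.List;

    class CrowdStats {
        static final int BINS = 47;   // GuessFactor bins (assumed count)

        // One view = a projection onto a few attributes plus its own log.
        static class View {
            final int[] attrIdx;
            final List<double[]> log = new ArrayList<>();
            final List<Double> hitGfs = new ArrayList<>();

            View(int... attrIdx) { this.attrIdx = attrIdx; }

            double[] project(double[] full) {
                double[] p = new double[attrIdx.length];
                for (int i = 0; i < attrIdx.length; i++) p[i] = full[attrIdx[i]];
                return p;
            }

            void record(double[] full, double gf) {
                log.add(project(full));
                hitGfs.add(gf);
            }

            // GuessFactor of the nearest logged point, or NaN if the log is empty.
            double nearestGf(double[] full) {
                double[] q = project(full);
                double best = Double.POSITIVE_INFINITY, gf = Double.NaN;
                for (int i = 0; i < log.size(); i++) {
                    double d = 0;
                    double[] p = log.get(i);
                    for (int j = 0; j < q.length; j++) d += (q[j] - p[j]) * (q[j] - p[j]);
                    if (d < best) { best = d; gf = hitGfs.get(i); }
                }
                return gf;
            }
        }

        // Three simple views over one 5-attribute situation vector,
        // mirroring "three views, each with 3~5 attributes".
        final View[] views = { new View(0, 1, 2), new View(0, 2, 3, 4), new View(0, 1, 2, 3, 4) };

        // Crowd danger: each view drops a bump at its predicted GuessFactor.
        double[] danger(double[] situation) {
            double[] bins = new double[BINS];
            for (View v : views) {
                double gf = v.nearestGf(situation);
                if (Double.isNaN(gf)) continue;
                int center = (int) Math.round((gf + 1) / 2 * (BINS - 1));
                for (int b = 0; b < BINS; b++) {
                    double x = b - center;
                    bins[b] += 1.0 / (x * x + 1);
                }
            }
            return bins;
        }
    }

Since each view only sees its own attribute subset, the views disagree in different situations, and the summed profile ends up less spiky than any single view's.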
I always thought strong APM came from lots of temporal attributes. It also makes sense that it would come from a small K size, which prevents always dodging the same points and building repeated patterns.
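A hedged sketch of the K effect (names are hypothetical, not from any particular bot): with a large K the returned set, and hence the danger profile, barely changes between waves; with K of 1~3 a single new hit can displace the whole set, so the "safest" GuessFactor keeps moving:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // GuessFactors of the K logged situations nearest to the current one.
    static List<Double> kNearestHitGfs(List<double[]> situations, List<Double> hitGfs,
                                       double[] current, int k) {
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < situations.size(); i++) idx.add(i);
        // Sort indices by squared Euclidean distance to the current situation.
        idx.sort(Comparator.comparingDouble(i -> {
            double d = 0;
            double[] p = situations.get(i);
            for (int j = 0; j < current.length; j++) d += (current[j] - p[j]) * (current[j] - p[j]);
            return d;
        }));
        List<Double> out = new ArrayList<>();
        for (int i = 0; i < Math.min(k, idx.size()); i++) out.add(hitGfs.get(idx.get(i)));
        return out;
    }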
That makes sense: pattern matchers do better when you have repeatable patterns, so some aliasing (e.g. a small k size) in the surfing stats helps. But yes, low repeatability in the surfing stats ought to be the main point. I've been tuning against RaikoMicro too much, which each time pushes me toward more repeatable (but exploitative) patterns. That gain simply makes me more exploitable by PMs & top guns.
Another important thing may be the smoothing function. I've been using a Gaussian with a very low bandwidth (actually only 1.5x bot width), which makes my surfing stats pretty local (not affected by dangers far away, or even fairly near ones). Then, when some stats are slow to change (since I have a bunch of persistent stats), there can be even more repeatable patterns. Doh!
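A minimal sketch of that smoothing, with hypothetical names; the only number taken from the discussion is the 1.5x-bot-width bandwidth:

    // Danger at a candidate GuessFactor, from logged hit GuessFactors.
    // botWidthGf is the enemy bot's width expressed in GuessFactor units.
    static double smoothedDanger(double gf, double[] loggedHitGfs, double botWidthGf) {
        double bandwidth = 1.5 * botWidthGf;   // very local smoothing
        double danger = 0;
        for (double hit : loggedHitGfs) {
            double z = (gf - hit) / bandwidth;
            danger += Math.exp(-0.5 * z * z);  // Gaussian kernel
        }
        return danger;
    }

At that bandwidth, a hit logged three bot widths away already contributes only exp(-2) ≈ 0.14 of a direct hit, so the profile is essentially blind to anything outside the immediate neighborhood.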
My past experiments told me the same story, but again I preferred the Gaussian for overly precise bullet dodging, and then lost to everything else.
Now I get why the 1-nn movement does pretty well against top guns: it aliases heavily, and it replaces the nearest neighbor each time new data comes in. And it uses 1 / x^2 as well.
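A rough sketch of that scheme as described above (the capacity and epsilon are assumptions): a bounded log where, once full, each new point overwrites the logged point nearest to it, with danger falling off as 1/x^2 around the single nearest match:

    import java.util.ArrayList;
    import java.util.List;

    class OneNnLog {
        static final int CAPACITY = 30;   // assumed log size
        final List<double[]> attrs = new ArrayList<>();
        final List<Double> hitGfs = new ArrayList<>();

        // Index of the logged point nearest to the given attribute vector.
        int nearest(double[] a) {
            int best = -1;
            double bestDist = Double.POSITIVE_INFINITY;
            for (int i = 0; i < attrs.size(); i++) {
                double d = 0;
                double[] p = attrs.get(i);
                for (int j = 0; j < a.length; j++) d += (a[j] - p[j]) * (a[j] - p[j]);
                if (d < bestDist) { bestDist = d; best = i; }
            }
            return best;
        }

        // Once the log is full, new data overwrites its nearest neighbor,
        // so the stored profile never settles into a stable shape.
        void record(double[] a, double gf) {
            if (attrs.size() < CAPACITY) {
                attrs.add(a);
                hitGfs.add(gf);
            } else {
                int i = nearest(a);
                attrs.set(i, a);
                hitGfs.set(i, gf);
            }
        }

        // Danger at a candidate GuessFactor: inverse-square falloff around
        // the nearest neighbor's GuessFactor.
        double danger(double[] a, double gf) {
            int i = nearest(a);
            if (i < 0) return 0;
            double x = gf - hitGfs.get(i);
            return 1.0 / (x * x + 0.01);   // epsilon avoids division by zero
        }
    }

Because every new hit deletes the old point closest to it, the dodge target keeps drifting, which is exactly the low repeatability discussed above.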