Fragment of a discussion from User talk:MN

This has been done a few times in the past. Various kD-Tree implementations support dynamically setting the weights used by the distance function.

IIRC Diamond is one example of a bot that makes use of this feature of its kD-Tree implementation, though I think it still uses a pre-defined list of weights rather than dynamic tweaking.

IIRC there have also been experiments in dynamic weight tweaking, but I'm not sure offhand how successful they've been.

Rednaxela (talk) 04:15, 9 December 2013
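
For reference, here is a minimal sketch of the kind of swappable weighted distance function being described, in Java since that is what Robocode bots use. The class and method names are illustrative, not taken from Rednaxela's or Skilgannon's actual trees.

    // Sketch of a weighted squared-Euclidean distance whose weights can be
    // swapped between queries. All names here are illustrative.
    public class WeightedDistance {
        private double[] weights; // one weight per dimension

        public WeightedDistance(double[] weights) {
            this.weights = weights.clone();
        }

        // Dynamically re-weight the distance function between queries.
        public void setWeights(double[] weights) {
            this.weights = weights.clone();
        }

        // Weighted squared Euclidean distance; monotonic in the true weighted
        // distance, so it orders nearest neighbours correctly without a sqrt.
        public double distanceSq(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += weights[i] * d * d;
            }
            return sum;
        }
    }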

Yeah, I just modified Skilgannon's kD-Tree to support changing the weights, though I'm not using the feature yet. It also seems like you could speed up running two virtual guns (Antisurfer, Antirandom) by using just one tree with a different weighting for each. Here's something to think about: if a flattening system uses the same data decay as the targeting system trying to hit it, it should get a very good dodging rate. So if you can figure out how much weight the enemy gun is giving recent results and adapt your weights to match, you should dodge almost everything.

Straw (talk) 06:53, 9 December 2013
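
A minimal sketch of the shared-tree idea Straw describes, reusing the hypothetical WeightedDistance above: each scan is stored once, and each virtual gun re-weights the same data at query time. The tree interface, the weight values, and k = 50 are made-up stand-ins, not Skilgannon's actual API.

    import java.util.List;

    // Hypothetical tree interface; real implementations differ.
    interface WeightedKdTree {
        List<double[]> nearestNeighbours(double[] query, int k, WeightedDistance dist);
    }

    class DualGunQuery {
        static void aimBothGuns(WeightedKdTree tree, double[] query) {
            double[] antiRandomWeights = {1.0, 4.0, 3.0, 2.0, 1.0}; // made-up values
            double[] antiSurferWeights = {1.0, 4.0, 3.0, 2.0, 8.0}; // heavier recency weight

            WeightedDistance dist = new WeightedDistance(antiRandomWeights);
            List<double[]> antiRandom = tree.nearestNeighbours(query, 50, dist);

            dist.setWeights(antiSurferWeights);
            List<double[]> antiSurfer = tree.nearestNeighbours(query, 50, dist);
            // ...feed each neighbour list into the corresponding gun's angle model.
        }
    }

One caveat: the tree's branch pruning has to use the same weighted metric as the point-to-point distance, which is why the weights should only change between queries, never during one.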

Data decay (time classification) is only one of many classifications in a k-NN search. You need to mirror all of them to achieve perfect dodging.

And estimating all of those weights takes a lot more data than is available, unless you want to get shot a lot while collecting it.

MN (talk) 14:14, 9 December 2013
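
As one concrete example of what "data decay" can mean here (illustrative only, not MN's or anyone's published code): exponentially down-weight each neighbour's vote by the age of its scan.

    class DataDecay {
        // Halve a data point's influence every halfLife ticks.
        static double decayWeight(long scanTime, long currentTime, double halfLife) {
            long age = currentTime - scanTime;
            return Math.pow(0.5, age / halfLife);
        }
    }

Mirroring an opponent then means estimating its halfLife along with the weight of every other classification, which is where the data problem above bites.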

I believe multiple people have noted that, in a gun, the weight on new vs. old data is significantly more important than the weights on the other predictors. I believe Skilgannon said something along the lines of: I can change the weights tenfold (except the one on shots fired) and get very little difference in performance. So if you could match that weight in your flattener, you could (hopefully) keep the hit rate against you very low.

Straw (talk) 06:17, 10 December 2013
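
For anyone who wants to experiment with that, here is one hedged sketch of how the matching could be structured: keep a few candidate decay rates, score each by how well a model using it predicted the bullets that actually hit you, and surf with the current best. The scoring function is left abstract, and every name and value here is hypothetical.

    import java.util.function.ToDoubleFunction;

    class DecayMatcher {
        private final double[] candidates = {50, 200, 800, 3200}; // half-lives in ticks
        private final double[] scores = new double[candidates.length];

        // Call once per enemy bullet hit; 'likelihood' should score how strongly
        // a decayed k-NN model with the given half-life predicted that hit.
        void onEnemyHit(ToDoubleFunction<Double> likelihood) {
            for (int i = 0; i < candidates.length; i++) {
                scores[i] += likelihood.applyAsDouble(candidates[i]);
            }
        }

        // Half-life with the best cumulative score so far.
        double bestHalfLife() {
            int best = 0;
            for (int i = 1; i < scores.length; i++) {
                if (scores[i] > scores[best]) best = i;
            }
            return candidates[best];
        }
    }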