Thread history
| Time | User | Activity | Comment |
|---|---|---|---|
| 01:52, 27 October 2017 | Xor (talk \| contribs) | New thread created | |
| 10:26, 27 October 2017 | Cb (talk \| contribs) | New reply created | (Reply to scale standard deviations to 1) |
| 10:56, 27 October 2017 | Xor (talk \| contribs) | New reply created | (Reply to scale standard deviations to 1) |
| 14:15, 27 October 2017 | Cb (talk \| contribs) | New reply created | (Reply to scale standard deviations to 1) |
Good to hear that some experimenting was done with this rule of thumb. Anyway, did you find that it improved your performance? Or does it need some weight re-tuning first, imo?
Btw, are you using some online standard deviation algorithm and updating the k-NN weights on the fly, or just doing it offline with pre-collected data?
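For the online case mentioned above, the standard one-pass method is Welford's algorithm, which keeps a running mean and variance without storing the samples. A minimal sketch (the class name and structure are my own, not from the thread):

```java
// Welford's online algorithm: running mean and (sample) variance in one
// pass, so the attribute scale can be updated on the fly during a battle.
public class OnlineStdDev {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0; // sum of squared deviations from the running mean

    public void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    public double stdDev() {
        // Sample standard deviation; 0 until we have at least two samples.
        return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0.0;
    }

    public static void main(String[] args) {
        OnlineStdDev s = new OnlineStdDev();
        for (double x : new double[] {2, 4, 4, 4, 5, 5, 7, 9}) s.add(x);
        System.out.println(s.stdDev()); // sqrt(32/7), roughly 2.14
    }
}
```

Dividing an attribute (or folding 1/stdDev into its k-NN weight) by this running value is what makes the "scale standard deviations to 1" rule work online.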
Yes, I think you mentioned this rule of thumb somewhere on the wiki (or was it somebody else? not sure now), thanks for that. I found that it improved my performance. However, I focused on becoming stronger against surfers, which made me weaker against non-surfers.
I just ran a battle against some bot, collected the standard deviations, and afterwards divided each attribute by its respective standard deviation. I found that the standard deviations now stay pretty close to 1 against most bots.
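The offline step described above can be sketched as follows: compute the sample standard deviation of each attribute from the pre-collected data, then use 1/stdDev as a per-attribute scale factor (all names here are illustrative, not from the thread):

```java
// Offline normalisation sketch: divide each attribute by its standard
// deviation so the scaled data has std dev 1, as described in the thread.
public class Normalise {
    // Sample standard deviation of one attribute column.
    static double stdDev(double[] values) {
        double mean = 0;
        for (double v : values) mean += v;
        mean /= values.length;
        double m2 = 0;
        for (double v : values) m2 += (v - mean) * (v - mean);
        return Math.sqrt(m2 / (values.length - 1));
    }

    // One scale factor per attribute: 1 / stdDev. Multiplying the attribute
    // (or folding the factor into its k-NN weight) gives post-scale std dev 1.
    static double[] scaleFactors(double[][] samples) { // samples[row][attr]
        int attrs = samples[0].length;
        double[] factors = new double[attrs];
        for (int a = 0; a < attrs; a++) {
            double[] col = new double[samples.length];
            for (int i = 0; i < samples.length; i++) col[i] = samples[i][a];
            factors[a] = 1.0 / stdDev(col);
        }
        return factors;
    }

    public static void main(String[] args) {
        double[][] samples = {{2}, {4}, {4}, {4}, {5}, {5}, {7}, {9}};
        System.out.println(scaleFactors(samples)[0]); // 1 / sqrt(32/7)
    }
}
```

After scaling, re-running `stdDev` on the scaled column should return a value close to 1, which matches the observation that the deviations stay near 1 afterwards.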
Well, that's the easiest way I can think of... And it really surprised me that the standard deviations are almost the same against most bots. If so, then normalising the standard deviations may not be that useful here, since we already tune the weights very hard by hand. But for neural networks I think this is invaluable: there we don't tune the weights manually, and the attribute scale affects the learning speed a lot.