Upgrade client version
Hi all
I've been thinking of upgrading the accepted client version. What are your thoughts on the matter? Have any of you tested extensively with the latest releases locally? Are there any changes in scores (or any changes that might affect scores) that we should do more extensive testing of first?
Finally, 1.9.3.3 is released, with the RoboRumble battle duplication fix, and with the shorter participants-list check period from 1.9.3.1 as well. It's time to do some testing and discuss the upgrade.
Question: what kind of tests should be done locally? Should we run a LiteRumble locally and see whether it yields the same scores as the main rumble (which gives the best result, but takes months)? Or just use RoboRunner with a few robots (both 1v1 and melee, certainly) and check that their scores are unaffected?
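For the RoboRunner route, a quick way to spot regressions would be to diff per-bot scores between a run on the current client and a run on 1.9.3.3. The sketch below is only an illustration: it assumes each run has been boiled down to a plain text file of "botName averageScore" lines (an invented format, not RoboRunner's own output) and prints any bot whose score moved by more than a threshold.

import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Rough sketch: compare per-bot average scores from two local test runs
// (e.g. old client vs. 1.9.3.3). The input format "botName averageScore"
// is assumed for illustration; adapt the parsing to whatever output you
// actually collect from RoboRunner.
public class ScoreDiff {
    static Map<String, Double> load(String path) throws IOException {
        Map<String, Double> scores = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(path))) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 2) {
                scores.put(parts[0], Double.parseDouble(parts[1]));
            }
        }
        return scores;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Double> oldScores = load(args[0]);
        Map<String, Double> newScores = load(args[1]);
        double threshold = 1.0; // flag changes bigger than 1 score point
        for (String bot : oldScores.keySet()) {
            Double after = newScores.get(bot);
            if (after == null) continue;
            double diff = after - oldScores.get(bot);
            if (Math.abs(diff) > threshold) {
                System.out.printf("%s changed by %+.2f%n", bot, diff);
            }
        }
    }
}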
Just RoboRunner and a few bots should be fine. At some point it would be good to reset the meleerumble as well, since old bots get biased downwards by excessive battles against new (good) bots.
I've always been thinking about the pairing system of the meleerumble.
Once every combination of 10 bots has had a run, the score is unbiased, which takes N = n! / (10! * (n - 10)!) ≈ 10^19 battles with the current settings (where n is the total number of participants).
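As a sanity check on the 10^19 figure, here is a quick exact computation of N = n! / (10! * (n - 10)!) with BigInteger; the n = 375 is only an assumed participant count for illustration, not the real current number.

import java.math.BigInteger;

// Quick check of N = C(n, 10), the number of distinct 10-bot melee line-ups.
// n = 375 is an assumed participant count, used only for illustration.
public class MeleeCombinations {
    static BigInteger choose(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 1; i <= k; i++) {
            result = result.multiply(BigInteger.valueOf(n - k + i))
                           .divide(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 375;
        BigInteger total = choose(n, 10);
        System.out.println("N = " + total);                      // roughly 1.3e19 for n = 375
        System.out.println("digits: " + total.toString().length());
    }
}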
However, we can get an approximate score with a feasible number of battles via a Monte Carlo approach. With the current settings, ~10000 battles already give a somewhat stable score (for the new participant).
Let each bot get m battles, randomly selected from all (N / n) 10-bot combinations containing that bot; then the probability of meeting another specific bot in a battle is (N / n / (n - 1)) / (N / n) = 1 / (n - 1).
Assume that when a new bot is released, every battle contains that bot; then the probability of meeting that bot is 1 instead of 1 / (n - 1), which is highly biased.
To fix this, we have two options: adjust our current pairing system to get an unbiased score online, or reset the entire meleerumble periodically.
Since the scores of new bots are unbiased, all we need to do for an unbiased score is to randomly ignore (n - 2) / (n - 1) of the biased battles when calculating the score of an old bot. However, this approach takes many more battles.
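A minimal sketch of that first option, taking the stated 1 / (n - 1) acceptance rate at face value: when scoring an old bot, each battle that exists only because of the newly added bot is kept with probability 1 / (n - 1) and ignored otherwise. The Battle type and the containsNewBot flag are placeholders, not real rumble code.

import java.util.*;

// Sketch of option 1: when computing an old bot's score, randomly ignore
// (n - 2) / (n - 1) of the battles generated around a newly added bot,
// keeping each with probability 1 / (n - 1).
public class UnbiasedSampling {
    record Battle(String id, boolean containsNewBot) {}

    static List<Battle> selectForOldBot(List<Battle> battles, int n, Random rng) {
        double keepProbability = 1.0 / (n - 1);
        List<Battle> kept = new ArrayList<>();
        for (Battle b : battles) {
            if (!b.containsNewBot() || rng.nextDouble() < keepProbability) {
                kept.add(b);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Battle> battles = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            battles.add(new Battle("battle-" + i, true)); // all contain the new bot
        }
        int n = 375; // assumed participant count
        System.out.println("kept " + selectForOldBot(battles, n, rng).size()
                + " of " + battles.size() + " battles");
    }
}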
A more practical way is, when bot A is added, to select another bot B randomly for each battle and run melee battles containing those two bots as usual. A battle containing A, B and 8 other bots would normally yield 90 pairings, but only those matching (A, *) or (B, *) are taken into account. This produces 18 pairings.
This scheme does not affect the pairings of the new bot itself at all, which are already unbiased. And for an old bot, the probability of being chosen as B is 1 / (n - 1); therefore the probability of a battle with A present being taken into account for an old bot is 1 / (n - 1), the same as the unbiased case.
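To make the second scheme concrete, here is a toy version of the pairing filter: a 10-bot battle normally yields 90 ordered pairings, and only the ones led by A or B survive, which gives the 18 pairings mentioned above. All names and types here are invented for illustration.

import java.util.*;

// Toy version of the priority-pairing filter: a 10-bot battle produces
// 10 * 9 = 90 ordered pairings, but only those led by one of the two
// priority bots (the new bot A and the randomly chosen B) are kept,
// which leaves 18.
public class PriorityPairingFilter {
    record Pairing(String bot, String opponent) {}

    static List<Pairing> filteredPairings(List<String> battleBots, Set<String> priority) {
        List<Pairing> kept = new ArrayList<>();
        for (String bot : battleBots) {
            if (!priority.contains(bot)) continue;   // keep only (A, *) and (B, *)
            for (String opponent : battleBots) {
                if (!opponent.equals(bot)) {
                    kept.add(new Pairing(bot, opponent));
                }
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> bots = new ArrayList<>(List.of("A", "B"));
        for (int i = 1; i <= 8; i++) bots.add("old" + i);
        List<Pairing> pairings = filteredPairings(bots, Set.of("A", "B"));
        System.out.println(pairings.size() + " pairings kept out of 90");
    }
}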
Your second approach seems good. It will need patches on the client side so that, if a priority pairing needs to happen, it only uploads the battles which contain one of the priority bots. This filter will work fine for the 1v1 rumble as well.
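A sketch of what such a client-side patch could look like, reading "battles" as the per-pairing results the client uploads: if the battle was run for priority pairings, only results involving a priority bot are kept, and for 1v1 that simply means the battle is uploaded whenever a priority bot took part. The names and structure are assumptions, not actual RoboRumble client code.

import java.util.*;

// Sketch of the upload-side filter: when a battle was run for priority
// pairings, only the pairwise results involving a priority bot get uploaded.
// The same predicate covers melee (many pairings per battle) and 1v1 (one).
public class UploadFilter {
    record PairResult(String botA, String botB) {}

    static List<PairResult> toUpload(List<PairResult> battleResults, Set<String> priorityBots) {
        if (priorityBots.isEmpty()) {
            return battleResults; // normal battle: upload everything as today
        }
        List<PairResult> kept = new ArrayList<>();
        for (PairResult r : battleResults) {
            if (priorityBots.contains(r.botA()) || priorityBots.contains(r.botB())) {
                kept.add(r);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<PairResult> oneVsOne = List.of(new PairResult("newBot", "oldBot"));
        System.out.println(toUpload(oneVsOne, Set.of("newBot")).size()); // 1: the whole 1v1 battle is kept
    }
}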
Why don't we wait to update the client until this can be fixed too?