Thread:Talk:RoboRumble/Upgrade client version/reply (3)

Latest revision as of 00:28, 4 October 2018

I've always been thinking about the pairing system of meleerumble.

Once every combination of 10 bots has had a run, the score is unbiased, which takes N = n! / (10! * (n - 10)!) ≈ 10^19 battles with the current settings (n is the total number of participants).
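
For a sense of scale, here is a minimal sketch in Java that evaluates N = n! / (10! * (n - 10)!) exactly with BigInteger. The participant count n = 370 is only an illustrative value, not the real size of the rumble.

import java.math.BigInteger;

public class MeleeCombinationCount {
    // Exact binomial coefficient C(n, k), computed incrementally so every step stays an integer.
    static BigInteger choose(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 1; i <= k; i++) {
            result = result.multiply(BigInteger.valueOf(n - k + i))
                           .divide(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 370; // illustrative participant count, not the actual rumble size
        System.out.println("N = C(" + n + ", 10) = " + choose(n, 10));
        // For n around a few hundred this lands in the 10^19 range.
    }
}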

However, we can get an approximate score with a feasible number of battles via the Monte Carlo method. With the current settings, ~10000 battles already give a somewhat stable score (for the new participant).
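
A toy Monte Carlo sketch of that idea follows; the scoring model below is a made-up placeholder and only the sampling structure matters. It draws random 10-bot line-ups containing the bot of interest, averages a per-battle score, and prints the running estimate.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class MonteCarloScore {
    public static void main(String[] args) {
        int n = 200;    // illustrative participant count
        int target = 0; // the bot whose score we estimate
        Random rng = new Random(1);

        // Toy "strength" table standing in for real battle outcomes.
        double[] strength = new double[n];
        for (int i = 0; i < n; i++) {
            strength[i] = rng.nextDouble();
        }

        List<Integer> others = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            if (i != target) others.add(i);
        }

        double sum = 0;
        for (int battle = 1; battle <= 10000; battle++) {
            // A uniformly random 10-bot line-up containing the target: take 9 random opponents.
            Collections.shuffle(others, rng);
            double battleScore = 0;
            for (int slot = 0; slot < 9; slot++) {
                int opp = others.get(slot);
                // Placeholder pairwise score: share of "strength" won against this opponent.
                battleScore += strength[target] / (strength[target] + strength[opp]);
            }
            sum += battleScore / 9;
            if (battle % 2000 == 0) {
                System.out.printf("after %5d battles: average score %.4f%n", battle, sum / battle);
            }
        }
    }
}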

Let each bot get m battles, randomly selected from all (N / n) 10-bot combinations containing that bot; then the probability of meeting another specific bot in a battle is (N / n / (n - 1)) / (N / n) = 1 / (n - 1).
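
As a sanity check, one reading of the 1 / (n - 1) figure is the expected share of a bot's pairings that go to one specific opponent (each battle gives a bot 9 opponents at once). The sketch below simulates uniformly random 10-bot line-ups containing bot 0 and measures how often bot 1 shows up among its opponents; the participant count, bot indices and battle count are arbitrary.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class MeetingFrequency {
    public static void main(String[] args) {
        int n = 200;          // hypothetical number of participants
        int battles = 200000; // number of simulated battles for bot 0
        Random rng = new Random(42);

        // All potential opponents of bot 0.
        List<Integer> others = new ArrayList<>();
        for (int i = 1; i < n; i++) {
            others.add(i);
        }

        long pairingsVsBot1 = 0;
        long totalPairings = 0;
        for (int b = 0; b < battles; b++) {
            // Draw 9 random opponents, i.e. a uniform 10-bot line-up containing bot 0.
            Collections.shuffle(others, rng);
            for (int slot = 0; slot < 9; slot++) {
                totalPairings++;
                if (others.get(slot) == 1) {
                    pairingsVsBot1++;
                }
            }
        }

        System.out.printf("observed share of pairings vs bot 1: %.6f%n",
                (double) pairingsVsBot1 / totalPairings);
        System.out.printf("1 / (n - 1)                        : %.6f%n",
                1.0 / (n - 1));
    }
}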

Assume that when a new bot is released, every battle contains that bot; then the probability of meeting that bot is 1 instead of 1 / (n - 1), which is highly biased.

To fix this, we have two options: either mutate the current pairing system to get unbiased scores online, or reset the entire meleerumble periodically.

Since the scores of new bots are unbiased, all we need to do for an unbiased score is to randomly ignore a fraction (n - 2) / (n - 1) of the biased battles when calculating the score of an old bot. However, this approach requires many more battles.
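
A minimal sketch of that filter, assuming the bookkeeping is reduced to a single flag (this is not actual RoboRumble server code): a battle containing the new bot is kept for an old bot's score with probability 1 / (n - 1), i.e. a fraction (n - 2) / (n - 1) of those battles is discarded.

import java.util.Random;

public class BiasedBattleFilter {
    private final Random rng = new Random();

    // A battle that contains the newly added bot is kept for an old bot's
    // score with probability 1 / (n - 1); the rest are ignored.
    boolean countForOldBot(boolean battleContainsNewBot, int n) {
        if (!battleContainsNewBot) {
            return true; // unbiased battles always count
        }
        return rng.nextDouble() < 1.0 / (n - 1);
    }

    public static void main(String[] args) {
        BiasedBattleFilter filter = new BiasedBattleFilter();
        int n = 200; // illustrative participant count
        int kept = 0, trials = 1000000;
        for (int i = 0; i < trials; i++) {
            if (filter.countForOldBot(true, n)) {
                kept++;
            }
        }
        System.out.printf("kept share: %.6f (expected %.6f)%n",
                (double) kept / trials, 1.0 / (n - 1));
    }
}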

A more practical way is, when bot A is added, to select another bot B randomly for each battle and run melee battles containing those two bots as usual. A battle containing A, B and 8 other bots yields 45 pairings, but only those matching (A, *) or (B, *) are taken into account. This produces 17 pairings.
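
A short sketch of that pairing filter with a hypothetical roster (the bot names are placeholders): it enumerates all pairings of a 10-bot battle and keeps only those involving A or B, which gives 45 and 17 respectively.

import java.util.ArrayList;
import java.util.List;

public class PairingFilter {
    public static void main(String[] args) {
        // Hypothetical roster: new bot "A", randomly chosen old bot "B", 8 other old bots.
        List<String> roster = new ArrayList<>(List.of("A", "B"));
        for (int i = 1; i <= 8; i++) {
            roster.add("old" + i);
        }

        int total = 0;
        int counted = 0;
        for (int i = 0; i < roster.size(); i++) {
            for (int j = i + 1; j < roster.size(); j++) {
                total++;
                String x = roster.get(i);
                String y = roster.get(j);
                // Keep only pairings that involve A or B.
                if (x.equals("A") || y.equals("A") || x.equals("B") || y.equals("B")) {
                    counted++;
                }
            }
        }

        System.out.println("total pairings  : " + total);   // 45
        System.out.println("counted pairings: " + counted); // 17
    }
}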

This scheme does not affect the pairings of the new bot itself at all, which are already unbiased. For an old bot, the probability of being chosen as B is 1 / (n - 1), so the probability of a battle with A present being taken into account for that old bot is 1 / (n - 1), the same as in the unbiased case.