Talk:Darkcanuck/RRServer/Ratings
== Explanations Behind Ratings ==
This page does a nice job of explaining what some of the ratings are, but it still assumes certain existing knowledge.  Somewhere, perhaps on this page, there need to be descriptions of all the rating terms.  This could be on the wiki or even just a legend at the darkcanuck ratings site.

Some examples of what is missing -- nowhere...anywhere...can I find out what "PBI" stands for or what its significance is.  I don't see anywhere that explains "Specialization" either.  Also "LRP".  [[User:Skotty|Skotty]] 00:04, 26 May 2011 (UTC)

I'm pretty sure that sort of information mostly existed on the [http://old.robowiki.net/ old wiki] before, but yeah. Well, "PBI" means "ProblemBot Index", and it is the difference between your percent score and the "expected" percent score based on the ELO ratings of the two bots. (A system based on comparisons with near-ranked bots rather than ELO would be more accurate IMO, but ELO isn't that bad a predictor when pairings are complete, I guess.)

"LRP" stands for "Linear Regression Plot". Well, the term is awfully vague IMO, but it refers to [[oldwiki:RoboRumble/LRP]]. It is a plot of PBI vs ELO score. Essentially, the plot gives you a quick visual display that can highlight outliers (i.e. things your bot does exceptionally well/poorly against) and show general trends, such as "Does it fare more favorably against high-ranking bots, or against low-ranking bots?" --[[User:Rednaxela|Rednaxela]] 11:59, 26 May 2011 (UTC)
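: To make the PBI definition above concrete, here is a rough sketch of the calculation, assuming the standard Elo expected-score formula with a 400-point scale; the server's exact constants and scaling may differ.

 // Hypothetical sketch: PBI as (actual % score) minus (expected % score derived from Elo).
 // Assumes the standard Elo logistic expectation; the rumble server may use different constants.
 public class PbiSketch {
     static double expectedScore(double eloA, double eloB) {
         // standard Elo expectation for A vs B, in the range 0..1
         return 1.0 / (1.0 + Math.pow(10.0, (eloB - eloA) / 400.0));
     }
     static double problemBotIndex(double actualPercent, double eloA, double eloB) {
         // positive PBI: you score better than Elo predicts; negative: a "problem bot" for you
         return actualPercent - 100.0 * expectedScore(eloA, eloB);
     }
     public static void main(String[] args) {
         // e.g. 55% actual score against a bot rated 100 points higher (~36% expected)
         System.out.println(problemBotIndex(55.0, 1600.0, 1700.0));
     }
 }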
 
== Battles per Pairing ==
 
I just wanted to comment on the statement, "It's uncertain how well it works with less battles or incomplete pairings."  My experiment with the MC2K7 shows that separate runs of 75 battles can still show more than 1% variation for a given pairing.  This affects any scoring system, and is a fact that we have to live with.  The reliability of output can only be as good as input, no matter how fancy the interpolation is for incomplete pairings.  The hope is that the variance will become a wash when seen over 600+ pairings. --[[User:Simonton|Simonton]] 15:25, 26 September 2008 (UTC)

I think [[User:David Alves|David Alves]] commented that targeting challenge scores also varied by almost 1% at 15 seasons, so I agree there's lots of evidence that more battles per pairing are needed, which would take a very, very long time in a 600+ competitor environment.  You're right that as the number of competitors increases, variabilities cancel each other out.  But at the same time, the bigger the competition, the more risk of a "black swan" competitor whose scores are ''all'' skewed in one direction. -- [[User:Darkcanuck|Darkcanuck]] 15:31, 26 September 2008 (UTC)

After scratching some things down on paper which are mostly intuition rather than statistics, I believe the odds of having such a "black swan" are either exactly the same or reduced by increasing the number of bots. --[[User:Simonton|Simonton]] 16:05, 26 September 2008 (UTC)

Well, if there are 3 bots, the chance of one getting lucky against both others is 1/4; multiply by 3 bots, and the chance of a "black swan" among 3 bots is 75%, I believe. With 4 bots, the chance of one getting lucky against all others is 1/8; multiply by 4 bots, and the chance of a black swan is 50%. For 5 bots... it is a 31.25% chance of a black swan. For 650 bots with one pairing each, the chance of a bot having an above-average score in every pairing is about 1 in 2.78*10^193. So if we presume getting lucky is anything above the mean score and there's a 50% chance of that in any pairing, and that a "black swan" is only when ''all'' pairings are lucky, then the chance of a black swan sharply decreases as the number of bots becomes larger. Of course, perhaps what would be more useful than simply the chance of there being a bot with ''all'' pairings lucky would be the chance of luck making the score 1% different. I could calculate this, but only if I had a number for the "standard deviation" of the percent score of an average robocode battle. --[[User:Rednaxela|Rednaxela]] 16:28, 26 September 2008 (UTC)
:My intuitive hypothesis remains unshaken, but I don't have any numbers to prove it.  But I can't argue with something to the power of 193.  :)  I'll look into adding standard deviation to some of the tables.  What would be most useful: within a pairing, across all pairings, or across all final scores? -- [[User:Darkcanuck|Darkcanuck]] 16:43, 26 September 2008 (UTC)

Ah, now that you put the statistics that way, I can see how to do it.  With 3 bots each has 2 pairings, so the chance of both coin flips being "lucky" is indeed 25%.  However, the chance of at least 1 of those bots hitting its 25% is <code>(1 - 75%^3) ~= 57.8%</code>.  Generalized, this formula is <code>1 - (1 - .5^(bots - 1))^bots</code>.  If you graph that you can see it drops to pretty much zero pretty quickly. --[[User:Simonton|Simonton]] 17:18, 26 September 2008 (UTC)
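: As a quick sanity check of the formula above, a small sketch that evaluates <code>1 - (1 - .5^(bots - 1))^bots</code> for a few field sizes:

 // Sketch: chance that at least one bot in the field is "lucky" (above the mean) in all of its
 // pairings, using the formula above: 1 - (1 - 0.5^(bots - 1))^bots
 public class BlackSwanOdds {
     static double blackSwanChance(int bots) {
         double oneBotAllLucky = Math.pow(0.5, bots - 1);    // a given bot lucky in every pairing
         return 1.0 - Math.pow(1.0 - oneBotAllLucky, bots);  // at least one such bot in the field
     }
     public static void main(String[] args) {
         for (int bots : new int[] {3, 4, 5, 10, 50, 650}) {
             System.out.printf("%4d bots -> %.3e%n", bots, blackSwanChance(bots));
         }
     }
 }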
:Oh right, I got slightly mixed up and was multiplying by 3 when I should have been working with powers. --[[User:Rednaxela|Rednaxela]] 17:35, 26 September 2008 (UTC)

:You've got an extra leading paren, but that makes sense to me.  Nothing to worry about!  -- [[User:Darkcanuck|Darkcanuck]] 18:05, 26 September 2008 (UTC)
== Glicko-2 Rating System ==
Looking at things as my recently added versions have gained battles, it seems like Glicko-2 converges to a realistic expected score far faster than ELO or Glicko-1, and it seems quite stable. Glicko-2's performance really impresses me. I wonder if maybe we should remove ELO and Glicko-1 at some point, and just keep APS and Glicko-2? (Would that make uploading a little faster?) Also, maybe it would be good to make a modified APS that uses the Glicko-2 ratings to estimate the scores of missing pairings, in order to make the APS ranking less distorted by cases where pairings are still incomplete. --[[User:Rednaxela|Rednaxela]] 21:25, 25 November 2008 (UTC)

Is it possible to modify the 'deviation' so that we have a similar 'spread' in the rankings as the ELO does? And I second using G-2 ratings for estimating the score of missing pairings in the APS rankings. --[[User:Skilgannon|Skilgannon]] 21:41, 25 November 2008 (UTC)

: I'm glad you guys are comparing the ranking systems.  From what I can tell, Elo and Glicko-2 ratings seem to settle to the same ranking order as APS, although I've never noticed which converges faster.  The Glicko-1 scores haven't worked out so well, so they're not really viable -- I may try replacing that column with a Glicko-2 rating which updates using only the most recent result for each pairing.  This would speed up uploads since it would eliminate the full pairing query.  The three current methods all rely on the same data, so just removing one or two won't make a noticeable difference.  I suppose we could fill in the APS using Glicko-2 expected scores, that would be interesting.  And I could probably scale the Glicko-2 ratings to match the current Elo scores, if that's what you meant, Skilgannon.  --[[User:Darkcanuck|Darkcanuck]] 06:25, 26 November 2008 (UTC)

:: Yes, that's it exactly. --[[User:Skilgannon|Skilgannon]] 06:43, 26 November 2008 (UTC)
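: To illustrate the idea of filling in missing pairings: under Glicko-2 an expected pairing score can be derived from the two bots' ratings and rating deviations. A minimal sketch, using the expectation formula from Glickman's Glicko-2 paper; the server's actual scaling and constants may differ.

 // Sketch: estimate the % score of a missing pairing from Glicko-2 ratings.
 // Uses the Glicko-2 expectation E(mu, mu_j, phi_j) on the internal scale, where
 // mu = (rating - 1500) / 173.7178 and phi = RD / 173.7178 (constants from Glickman's paper).
 public class Glicko2Fill {
     static double g(double phi) {
         return 1.0 / Math.sqrt(1.0 + 3.0 * phi * phi / (Math.PI * Math.PI));
     }
     // expected score of bot A against bot B, in the range 0..1
     static double expected(double ratingA, double ratingB, double rdB) {
         double muA = (ratingA - 1500.0) / 173.7178;
         double muB = (ratingB - 1500.0) / 173.7178;
         double phiB = rdB / 173.7178;
         return 1.0 / (1.0 + Math.exp(-g(phiB) * (muA - muB)));
     }
     public static void main(String[] args) {
         // hypothetical ratings: a missing pairing could be filled in as 100 * expected(...)
         System.out.printf("Estimated score for the missing pairing: %.1f%%%n",
                 100.0 * expected(1700.0, 1550.0, 80.0));
     }
 }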
== Premier League rating calculation ==
Is the PL score being calculated using only the last battle for each pairing? If so, I would suggest averaging all battles in the same pairing, like the APS system does. For example, if after 4 battles you win 3 times, (2+2+2+0)/4 = 1.5. The PL ranking would be more stable and there would be fewer ties. --[[User:MN|MN]] 20:58, 16 July 2011 (UTC)

: I believe the PL league works like the following (internally): every win adds one to a running total, every loss subtracts one, and every actual tie does nothing. Then, taking this total, above zero is considered a win, below zero is considered a loss, and exactly zero is considered a tie. It then assigns points based on this win/tie/loss: win = 2, tie = 1, loss = 0. &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 21:16, 16 July 2011 (UTC)

: A bot's rankings data (APS, Glicko2, PL, etc) gets updated every time a new battle involving that bot is uploaded.  The PL score is based on the APS for each pairing:  you get 2 points for every pairing where your APS is > 50%.  It's a winner-take-all system, so there's no credit given for losing at 49.999%.  I didn't even implement the 1 point for a tie, since it's so unlikely to ever happen (except for crashing bots paired together).  --[[User:Darkcanuck|Darkcanuck]] 22:44, 16 July 2011 (UTC)
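: In code form, the PL rule described above boils down to roughly the following sketch (a simplified outline with invented pairing data, not the server's actual code):

 import java.util.Map;
 // Sketch of the PL rule described above: 2 points per pairing won on APS, winner-take-all.
 // The map of per-pairing APS values is an invented stand-in for the server's data.
 public class PremierLeagueSketch {
     // pairingAps: opponent name -> this bot's average % score against that opponent
     static int plScore(Map<String, Double> pairingAps) {
         int points = 0;
         for (double aps : pairingAps.values()) {
             if (aps > 50.0) {
                 points += 2;  // win: APS above 50%
             }
             // a tie at exactly 50.0 would be worth 1 point, but essentially never happens
         }
         return points;
     }
     public static void main(String[] args) {
         System.out.println(plScore(Map.of("BotA", 61.2, "BotB", 49.999, "BotC", 42.0)));  // 2
     }
 }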
:: A lot better than what I was suggesting... --[[User:MN|MN]] 03:53, 17 July (UTC)

Also, can we have a ranking order for the PL league on the rankings page? --[[User:MN|MN]] 20:58, 16 July 2011 (UTC)

: In this case I would ask that the actual rank column not get sorted, so that whatever we sort by, we still see each bot's rank for that column. For example, we could see the ranks for the worst survival scores as rank 1 to 800+. Just not sorting this column would be far simpler than asking for a rank for every sortable column. The sortable nature of the rank column would, I think, be used less often. &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 21:20, 16 July 2011 (UTC)

: I could probably link to it (I think the sorting code already exists) but I think most of us just re-sort using the PL column.  It gives an interesting view of APS rank vs PL score.  --[[User:Darkcanuck|Darkcanuck]] 22:44, 16 July 2011 (UTC)

:: Re-sorting using the PL column works, but if the bot is not near the top you have to manually count the rows. [[User:Chase-san|Chase]]'s suggestion would be perfect. --[[User:MN|MN]] 03:53, 17 July (UTC)

: Please no special link for PL. I just got into the top-10, nobody needs to know that I get beaten by that much. ;-)  --[[User:GrubbmGait|GrubbmGait]] 23:16, 16 July 2011 (UTC)
== Offline_batch_ELO_rating_system ==
I added another page with a modified ELO system I made: [[Offline_batch_ELO_rating_system]] --[[User:MN|MN]] 12:46, 12 August 2011 (UTC)
== Premier League ==
Perhaps scores within 0.5 or 1 of 50 should be treated as a tie (since these are hard to hammer out, taking many, many seasons, and might still be wrong). So 49.5 to 50.5 or 49.0 to 51.0 splits could be counted as 1 point each instead of one bot getting 2 and the other getting 0. This might motivate people to work to break the tie. After all, unlike something with low integer scoring (like soccer/football), ties are harder to come by here. If no one agrees, I propose dividing all PL scores by 2 instead. :) &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 20:45, 26 August 2011 (UTC)
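: A sketch of the proposed tie window, layered on top of the current per-pairing PL rule (the margin values are the suggestion above, nothing implemented):

 // Sketch of the proposed tie window: pairings with APS within a small margin of 50% score 1 point each.
 public class PlTieWindow {
     static int pairingPoints(double aps, double margin) {
         if (Math.abs(aps - 50.0) <= margin) {
             return 1;                   // proposed tie band, e.g. margin = 0.5 or 1.0
         }
         return aps > 50.0 ? 2 : 0;      // otherwise the usual winner-take-all rule
     }
     public static void main(String[] args) {
         System.out.println(pairingPoints(50.3, 0.5));  // 1: a tie under the proposal
         System.out.println(pairingPoints(52.0, 0.5));  // 2: still a win
     }
 }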
:This tie range discussion is an APS-only issue. Elo naturally takes close matches into account by measuring how often each side wins. Also, voting systems, like Condorcet, clearly define 1 vote as the tie range. --[[User:MN|MN]] 23:31, 26 August 2011 (UTC)

:Dividing PL scores by 2 makes it look a lot like [[Wikipedia:Neustadtl_score|round-robin chess scoring]], which I also prefer. :) --[[User:MN|MN]] 23:31, 26 August 2011 (UTC)

: Seems a bit artificial/arbitrary; you would still get pairings teetering on the same thresholds.  I'm actually curious what PL would look like if, instead of 1 point for APS > 50, each bot got points based on the fraction of battles won.  A close matchup would score ~0.5 points while a decisive series of battles would yield up to 1 point for the winner (and then multiply the sum by 2). --[[User:Darkcanuck|Darkcanuck]] 21:43, 26 August 2011 (UTC)
:: That actually makes more sense, PL based on survival. &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 22:14, 26 August 2011 (UTC)

::An Averaged Winning Rate ranking system? One of the simplest ranking systems, at the same time one of the most fair/accurate, and still very king-maker resistant. Its only flaw is that it doesn't have a tie-breaker system in its pure form, but other tie-breaker systems like [[Wikipedia:Neustadtl_score|Neustadtl]] can easily be adapted for this "AWR" system. It also doesn't handle incomplete pairings very well, but APS suffers from the same problem and is still the main ranking system to this day. --[[User:MN|MN]] 23:31, 26 August 2011 (UTC)

::: I added a column for AWR (W%); it seems like a simple and fair ranking method.  --[[User:Darkcanuck|Darkcanuck]] 20:11, 27 August 2011 (UTC)

<s>I noticed a huge difference between PL and APS/Schulze in the Alternative Rankings page. I thought the difference between them was in the tie-breaker criteria only. Are they using the same battle data? --[[User:MN|MN]] 12:16, 27 August 2011 (UTC)</s>

:Sorry for my ignorance in Condorcet methods. Schulze also differs in the circular-ambiguity resolution criteria. --[[User:MN|MN]] 13:48, 27 August 2011 (UTC)
== Schulze & Tideman Condorcet ==
I like both of these methods. Looking at the scoring, they have some very interesting results. I do somewhat prefer Tideman's because of its results. &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 21:05, 26 August 2011 (UTC)

: Well, I like Tideman because it catapults Pris to #3, but the result doesn't make much sense to me...  --[[User:Darkcanuck|Darkcanuck]] 21:13, 26 August 2011 (UTC)

I read up on Schulze; it sounds like a very cool ranking method. Still gotta investigate Tideman. Perhaps the most interesting result to me is [[Hydra]] and [[Phoenix]] dropping into the 60s or 80s - holy cow! But now I remember the likely reason: they lose to [[BulletCatcher]]. --[[User:Voidious|Voidious]] 15:48, 27 August 2011 (UTC)

And don't forget the [[BasicGFSurfer]] bulletpower bug. I missed almost all of the discussion during my vacation, and I am reading up on it. Although I am much better ranked in APS, I do see the value of PL and similar rankings. I have mixed feelings though. If 3 [[BulletCatcher]]s and a few [[BasicGFSurfer]] derivatives have such an influence as stated above by [[Voidious]], I don't know if I can trust such a ranking. To me it seems that a bunch of anti-xxx routines thrown together is better rewarded than a sole generalist implementation. I have to read more and think more to make up my mind definitively, although my focus probably won't change. --[[User:GrubbmGait|GrubbmGait]] 10:26, 28 August 2011 (UTC)

:That is kind of like splitting hairs if you ask me. Tuning for any specific situation for point gain is how the game is played; the only variable is how prevalent that situation is. If every robot was bad against 0.05 bullet power, then you would have to be insane to 'not' tune your robot to fire for that. The thing is that if every robot tunes for it, then everyone gets the benefit from it. It is like saying it isn't really fair to tune your bot in a more specific way, or to counter a specific set of enemy robots, and get overly rewarded for that.

:Though I say all this, I believe you gain more score from tuning towards and improving your general ability than from a specific situation. Tuning for those can only improve your score, not make it. Tuning for enemies that are bad against 0.05 bullet power doesn't matter if you don't fire well enough to hit them in the first place. Nene, for example, has no anti-xxx routines, not even a flattener, and she doesn't seem to do too badly. Seraphim, however, has a great deal of anti-xxx routines, but does rather badly overall. :(

:&#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 11:49, 28 August 2011 (UTC)
::Exactly, that's why [[Seraphim]] does much better in Tideman and Schulze (and PL), even though [[Nene]] has a far better base. There is nothing wrong with counteracting specific tactics/strategies (e.g. ramming, bulletcatchers, surfers), I just fear that it will end up too specific. A bot with 10 specific routines to defeat the top-20 is something I really don't like. The PL ranking is a valuable and respected ranking, and it nags me that I have the worst PL ranking in the history of the top-10, but for me it is not the most important ranking. It will improve automatically when my code improves. The same goes for Schulze and Tideman Condorcet. It's just not my cup of tea. I still have to read what everybody said and think it over, with the focus on what is best for the rumble, not for me personally. --[[User:GrubbmGait|GrubbmGait]] 13:12, 28 August 2011 (UTC)

:: Well, with Condorcet methods, the prevalence of them actually wouldn't have any increased impact - losing to 10 BulletCatchers is basically as bad as losing to the worst one, not 10x like with APS. But consider how much those specializations have an impact under the rating system. [[Phoenix]] losing to a [[Bullet Shielding]] implementation is 1 loss in the PL or .01 APS, while he drops from 5 APS/8 PL to 60 or 80 with these Condorcet methods. It's an order of magnitude (or more) impact from a single loss to a gimmicky (if brilliant) strategy. Do you consider [[WaveShark]], [[MicroAspid]], or [[Tron]] 2 stronger than Phoenix? I don't, and he crushes them all handily. If a ranking system ranks them above Phoenix, it's hard not to consider it a strike against that rating system. All that said, I still really like these methods and I think this is a really, really extreme case. --[[User:Voidious|Voidious]] 13:38, 28 August 2011 (UTC)

::: This is a reverse king-maker scenario:  one (or two?) bots are able to send Phoenix (an overall strong bot which rates very high in both APS & PL) spiralling down the rankings.  It's not just an extreme case; theoretically anyone could release an anti-InsertBotNameHere and achieve a similar result.  But it's more likely with bots that aren't actively maintained.  I find these methods interesting, but I don't think they're valid for the rumble.  The only method I like so far is the AWR (W% column) proposed by MN:  similar to PL but with very few ties and less impact from marginal wins.  --[[User:Darkcanuck|Darkcanuck]] 14:22, 28 August 2011 (UTC)

:::: Well... I believe using Condorcet methods with a "ballot per bot" style input would not be subject to such "reverse king-maker" effects. I don't know exactly what the results would look like, but I like the "ballot per bot" style conceptually. It rewards things like beating rambots by 80% more than beating them by 60%, yet it should be more resistant to extreme biases of single bots ("king maker" scenarios) than APS. It may not be so similar to PL though. --[[User:Rednaxela|Rednaxela]] 14:46, 28 August 2011 (UTC)
::::It's not a [[King_maker|king maker scenario]]. It is a circular-ambiguity scenario. A competitor can screw up the rank of anyone it is better against. --[[User:MN|MN]] 14:55, 28 August 2011 (UTC)

::[[Wikipedia:Condorcet_method|Condorcet methods]] are vulnerable to [[King_maker|king maker scenarios]], or [[Wikipedia:Tactical_voting|compromising/burying]] as they are called in [[Wikipedia:Voting_system|voting systems]], when circular ambiguities arise. But in a way which I believe is good for the rumble. When [[Phoenix]] lost to [[BulletCatcher]], it became vulnerable to being screwed by all competitors above [[BulletCatcher]]. So, [[BulletCatcher]] is a problem bot. The system gives [[Phoenix]] an incentive to generalize against it too, and not to specialize even more against someone it already crushes. --[[User:MN|MN]] 14:33, 28 August 2011 (UTC)

::Perhaps it does create incentives for Phoenix, but that doesn't mean it's a good ranking system. Phoenix is still a very strong bot, despite the fact that somebody wrote a highly specialised bot which happens to beat it. --[[User:Skilgannon|Skilgannon]] 06:40, 29 August 2011 (UTC)

:::%Wins/[[Wikipedia:Schulze_method|Schulze]] seems not to show the same problem, although in theory it can also happen. I think [[Wikipedia:Schulze_method|Schulze]] is amplifying the noise/bias coming from APS. [[Wikipedia:Schulze_method|Schulze]] relies heavily on the measured strength difference to solve circular ambiguities. If the metric is poor, the system breaks. --[[User:MN|MN]] 16:21, 29 August 2011 (UTC)

::A statistic based on how many times a given pairing is used in the [[Wikipedia:Schulze_method|Schulze]] strongest-link evaluation, in favor of or against a competitor's score, would be a nice problem bot index. And a lot more meaningful than the current Elo-based PBI. --[[User:MN|MN]] 14:33, 28 August 2011 (UTC)
::The only thing I have against that [[Phoenix]] bashing is the system still relying on APS. APS/[[Wikipedia:Schulze_method|Schulze]] makes the assumption that APS is a good estimator of strength difference, which I don't like. I am in favor of %Wins/[[Wikipedia:Schulze_method|Schulze]], even though my bots got bashed in it. But I know why: they were all designed weak-crushing style. --[[User:MN|MN]] 14:33, 28 August 2011 (UTC)

::: What alternative do you propose which would scale smoothly between 0 and 100% and have a 'tie' mapping to 50%? Using simple win% completely ignores the fact that once you have won there are still improvements to be made, so while I certainly think it is fine as a metric, I don't think it should be the primary one. APS should rather be called Average Percentage of Total Score, but beyond that it works quite well. --[[User:Skilgannon|Skilgannon]] 06:40, 29 August 2011 (UTC)

:::A ranking system's purpose is to estimate the order of competitors' strengths. I believe %wins combined with some transitivity-based system does this better than APS. There is no need to measure anything above first place in a ranking system. APS is a nice challenge indicator, but when you start mixing many APSs from many pairings, the system breaks. But you can perfectly well use %wins as the ranking, and at the same time use APS in a controlled environment against specific opponents to measure improvement. --[[User:MN|MN]] 17:42, 29 August 2011 (UTC)

:::Hmmm, I'm not of the opinion that all the rumble is there for is to say who is the strongest / what order of strength bots are ranked in. When I look at the rumble and see DrussGT at 88.8% I don't think, "Well, I'm in first place now." Rather, I see an 11.2% possibility for improvement. Because getting 100% against everyone is virtually impossible, the maximum possible score will essentially never be reached, and thus there is always room for improvement. --[[User:Skilgannon|Skilgannon]] 07:18, 30 August 2011 (UTC)
:::: I think at a more basic level, the purpose of the rumble is to make bot development more fun. Clearly, for you, me, GrubbmGait ;), and many others, APS is a good motivator. For others, like [[User:ABC|ABC]], [[User:zyx|zyx]], MN (whoah, sensing a pattern here), something focused on win/loss probably works much better. I don't see APS going away, but having a more intelligent and/or more stable ranking for the win/loss viewpoint certainly has its place. --[[User:Voidious|Voidious]] 15:00, 30 August 2011 (UTC)

::::: I really think that is the main goal in the rumble: to allow us (the programmers) to have fun with it. Improving your score, be it in PL or in APS or what have you, is addictive, and thus can be considered well worth the time required to get it. I personally am in the APS crowd these days (though to be fair I was originally in the PL group). If it's really a problem we should just mix W%/PL and APS somehow, hah. Being serious though, can't we just have a link that takes you to the rumble sorted by PL by default? [http://darkcanuck.net/rumble/Rankings?game=roborumble&sort=score_pl Like say this one.] &#8212; <span style="font-family: monospace">[[User:Chase-san|Chase]]-[[User_talk:Chase-san|san]]</span> 15:24, 30 August 2011 (UTC)

::::::I have to admit that improving APS is really addictive; the fun factor is the best argument in favor of this system so far. But after learning about [[Wikipedia:Schulze_method|Schulze]], seeing it working for real would be a lot of fun; this method is too cool to be left aside. I enjoy seeing the client working and the rankings stabilizing as much as I enjoy developing bots. --[[User:MN|MN]] 00:27, 31 August 2011 (UTC)

:::All of these other methods seem to me to 'lose' information by either rounding to win/loss, looking only at the order of scores, etc., but all introduce some form of granularity which makes me suspect that a much larger number of battles would be required to achieve the same stability of rankings, and that small improvements which do not necessarily reach their goals, but are a valid step, will not receive any reward. --[[User:Skilgannon|Skilgannon]] 07:18, 30 August 2011 (UTC)
::::A system requiring fewer battles to stabilize doesn't mean it is more accurate. Let's imagine an extreme case where the ranking system is based on the names of the competitors, sorted alphabetically. It needs zero battles to stabilize, and ranks don't shift with many close matches. And changing the name to increase rank is always rewarded, as every letter of the name counts. It increases stability by not rounding all names to the same value, but the system is obviously biased and will never converge to a ranking showing competitors' strengths, no matter the amount of battles. --[[User:MN|MN]] 14:20, 30 August 2011 (UTC)

:::::If we measured strength by alphabetizing, actually that would be perfectly accurate! =) --[[User:Voidious|Voidious]] 15:00, 30 August 2011 (UTC)

::::I agree that they lose information found in APS, but I believe that APS also loses information. Is DrussGT really deserving of the PL crown, or did he just get lucky with 2.2.0? Our priority battles focus on stabilizing APS, so we don't know. And just as improving from 95% to 99% would mean nothing without APS, so too does improving from 49% to 55% vs one bot go mostly unrewarded under APS. If it means you go from 1 defeat to 0, that seems like a pretty important improvement to me - much more so than the 0.007 APS you'd gain from it. --[[User:Voidious|Voidious]] 15:00, 30 August 2011 (UTC)

::::Correction: the priority battles focus on giving each pairing an equal number of battles.  The fact that this stabilizes APS, PL (or any other ranking system) is a side-effect.  --[[User:Darkcanuck|Darkcanuck]] 16:13, 30 August 2011 (UTC)

:::::My bad - I guess it is more of a middle ground. Prioritizing APS would mean prioritizing pairings with high variance; prioritizing win/loss would mean prioritizing pairings closer to 50%. --[[User:Voidious|Voidious]] 17:06, 30 August 2011 (UTC)
:::I think a major difference between what is needed in Robocode and what is needed in real competition scoring is that in the real world the only thing that matters is who comes first, second, etc., so that they can receive their awards. In Robocode we have the added requirement that it is also important by how much somebody is beaten. Not only this, but there are set limits on how much a bot can be beaten by, as there is only so much energy per bot per battle. In contrast, sports generally are only limited in scores by the skill difference between competitors. --[[User:Skilgannon|Skilgannon]] 07:18, 30 August 2011 (UTC)

::::In every head-to-head sport it is important by how much somebody is beaten, but historically, attempts to measure that directly usually led to inaccurate rankings, even in games with capped scores like chess and go. Taking the difference into account directly makes some pairings have more weight than others, and more often than not the pairings with more weight are not the ones between the top competitors. --[[User:MN|MN]] 14:20, 30 August 2011 (UTC)

::::The debate is more about a ranking system being head-to-head style (most sports) or crush-the-weak style with lots of king maker scenarios by choice (APS rumble). And if having both, which would be the main one. --[[User:MN|MN]] 14:20, 30 August 2011 (UTC)

::::I think it's also important to note that bots are unemotional. =) Humans get discouraged when they're losing, or unmotivated when winning by a lot. And spectators, who are paying the bills to run these sports, get bored if it's not close. So the score difference doesn't mean as much, and putting focus on it would not serve the spectators. And not all sports ignore score differences entirely - prestige comes from score alone in bowling or golf, though of course it doesn't always affect rank like it does here. --[[User:Voidious|Voidious]] 15:00, 30 August 2011 (UTC)

:::Actually, %wins only stops being an indicator after a competitor wins 100% of the time against everyone, and no one improves in the meantime. --[[User:MN|MN]] 17:42, 29 August 2011 (UTC)
The average wins is essentially the same as PL but with the score for getting a win being 1, then divided by the number of bots and multiplied by 100, right? I could certainly live with that as a replacement for PL. It makes much more sense. --[[User:Skilgannon|Skilgannon]] 06:40, 29 August 2011 (UTC)

:Averaged %wins is not winner-takes-all like PL or any Condorcet method. 70% wins against someone is 0.7 divided by the number of bots, not 1. It makes the system a lot more stable; whether it is fair or not is another matter. I think it is nice as a statistic, but as a ranking system I prefer those which are based on the transitivity axiom and have a mathematical foundation, like [[Wikipedia:Elo_rating_system|Elo]], [[Wikipedia:Schulze_method|Schulze]] or [[Wikipedia:Ranked_pairs|Tideman]]. --[[User:MN|MN]] 16:49, 29 August 2011 (UTC)

:I think another requirement of any ranking system is that it is reasonably easy to understand, so that an intuitive feel of what will improve scores is accessible without doing an in-depth study of how the system works. --[[User:Skilgannon|Skilgannon]] 07:18, 30 August 2011 (UTC)

:: Out of all the alternatives tested and discussed here, I like AWR/W% the best (for inclusion alongside APS + PL, not as a replacement).  It's simply an average of a bot's win rate (#wins / #battles for each pairing), just like APS is an average of the scores.  It's simple, easy to understand, easy to implement, and complements the information given by APS and PL. --[[User:Darkcanuck|Darkcanuck]] 16:13, 30 August 2011 (UTC)
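:: In code, that AWR/W% definition is roughly the following sketch (the per-pairing win and battle counts are invented stand-ins for the server's data):

 // Sketch of AWR / W%: average the per-pairing win rate (#wins / #battles) and express it as a
 // percentage, exactly parallel to how APS averages the per-pairing score percentages.
 public class AwrSketch {
     static double awr(int[] winsPerPairing, int[] battlesPerPairing) {
         double sum = 0;
         for (int i = 0; i < winsPerPairing.length; i++) {
             sum += (double) winsPerPairing[i] / battlesPerPairing[i];  // win rate for this pairing
         }
         return 100.0 * sum / winsPerPairing.length;                    // averaged over all pairings
     }
     public static void main(String[] args) {
         // e.g. three pairings: dominant (10/10), close (6/10), losing (2/10) -> 60.0
         System.out.println(awr(new int[] {10, 6, 2}, new int[] {10, 10, 10}));
     }
 }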
:: I think it's a fine score to have, but I suspect it will mirror PL almost exactly. I personally find Schulze the coolest of those discussed, but I'm realizing that no matter the fancy ranking system, my focus is unaffected - try to beat my toughest matchups, or improve my APS, depending on my mood and what ideas I have. I think only giving way more pairings to the top PL bots sounds particularly enticing to me, whether it's a Strongest Bots Rumble or a regularly run tourney. --[[User:Voidious|Voidious]] 17:06, 30 August 2011 (UTC)

:::It won't mirror PL. There are bots with more than 30 ranks difference between them. --[[User:MN|MN]] 00:08, 31 August 2011 (UTC)
== %Wins/Schulze or Score/Schulze Condorcet ==
I would like to see one of these systems. They take the APS formula out entirely and follow Condorcet principles more closely (majority rule instead of averaging), Score/Schulze in particular having no averaging at all, making it the closest to Condorcet that I can think of. Tie-breaks are treated entirely inside the Schulze system. --[[User:MN|MN]] 23:31, 26 August 2011 (UTC)

: I'm working on %win variations now.  Schulze without some sort of normalization will not work, since it will give more weight to pairings with more battles. --[[User:Darkcanuck|Darkcanuck]] 01:03, 27 August 2011 (UTC)
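: For reference, the core of a %Wins/Schulze computation would look roughly like the sketch below: build a pairwise matrix of normalized win rates (so battle counts per pairing carry no extra weight), then compute strongest paths with the standard Schulze widest-path recurrence. This is only an outline of the method, not the server's implementation, and it leaves out tie handling.

 // Sketch of the Schulze (beatpath) method over normalized pairwise win rates.
 // d[i][j] = fraction of battles bot i won against bot j (0..1, so pairings with more
 // battles carry no extra weight); p[i][j] = strength of the strongest path from i to j.
 public class SchulzeSketch {
     static int[] schulzeOrder(double[][] d) {
         int n = d.length;
         double[][] p = new double[n][n];
         for (int i = 0; i < n; i++)
             for (int j = 0; j < n; j++)
                 if (i != j && d[i][j] > d[j][i]) p[i][j] = d[i][j];   // direct pairwise wins only
         for (int k = 0; k < n; k++)                                   // widest-path, Floyd-Warshall style
             for (int i = 0; i < n; i++)
                 if (i != k)
                     for (int j = 0; j < n; j++)
                         if (j != i && j != k)
                             p[i][j] = Math.max(p[i][j], Math.min(p[i][k], p[k][j]));
         int[] beats = new int[n];                                     // count beatpath wins per bot
         for (int i = 0; i < n; i++)
             for (int j = 0; j < n; j++)
                 if (i != j && p[i][j] > p[j][i]) beats[i]++;
         return beats;  // higher = better; sorting bots by this gives a Schulze-style order
     }
     public static void main(String[] args) {
         double[][] d = {          // tiny hypothetical 3-bot cycle of win rates
             {0.0, 0.8, 0.4},
             {0.2, 0.0, 0.9},
             {0.6, 0.1, 0.0},
         };
         for (int b : schulzeOrder(d)) System.out.println(b);  // prints 2, 1, 0
     }
 }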
:: It's done; see the SchW and TRPW columns.  I didn't implement Schulze tie-breaking (or even tie detection), so that may skew the results (the algorithm was complicated enough as-is).  You'll notice that TRPW generated a huge number of ties (marked by asterisks) vs using APS only.  --[[User:Darkcanuck|Darkcanuck]] 20:15, 27 August 2011 (UTC)

:::The results were totally unexpected. Combat got 29 in %Wins and 75 in %Wins/Schulze... and I thought they were similar systems. Maybe I'll try making a %Wins/Schulze ranking with tie detection to see what happens. --[[User:MN|MN]] 00:09, 28 August 2011 (UTC)

::: Reviewing my code, ties are handed to the bot with the higher overall APS.  Even in the APS-based Schulze, the results are quite unexpected.  I doubt that ties are a major factor...  --[[User:Darkcanuck|Darkcanuck]] 03:59, 28 August 2011 (UTC)
== Averaged Winning Rate or Averaged Percentage Wins ==
The %W column. It is incredibly simple, making it easy to calculate online, and quite resistant to both king maker scenarios and circular ambiguities. But it is inaccurate with incomplete pairings and few matches, like most non-statistical systems are. --[[User:MN|MN]] 15:19, 28 August 2011 (UTC)

Analyzing where to improve your bot is very easy: simply look at the pairings with a winning rate below 100%. --[[User:MN|MN]] 15:19, 28 August 2011 (UTC)

And... Combat catapulted 30 ranks with it. :P  --[[User:MN|MN]] 15:19, 28 August 2011 (UTC)

But I am divided between %W and SchW. There might be a debate over whether increasing the winning rate from 40% to 60% should be worth the same as increasing it from 80% to 100%. This is a very strong assumption, although those situations are a lot less common than in APS. --[[User:MN|MN]] 15:19, 28 August 2011 (UTC)
== Battle count in Alternative Rankings page ==
Can battle count per competitor be added as a column on the page? It would be useful to see how sensitive each system is to battle count. --[[User:MN|MN]] 12:16, 27 August 2011 (UTC)

: Done.  I also added tie indication to the W% column (but unfortunately it only marks the second of the tied pair; in the Tideman columns, all tied rankings are marked).  --[[User:Darkcanuck|Darkcanuck]] 03:45, 28 August 2011 (UTC)

Latest revision as of 01:35, 31 August 2011

Explanations Behind Ratings

This page does a nice job of explaining what some of the ratings are, but it still assumes certain existing knowledge. Somewhere, perhaps on this page, there needs to be descriptions for all rating terms. This could be on the wiki or even just a legend at the darkcanuck ratings site.

Some examples of what is missing -- No where...anywhere...can I find out what "PBI" stands for or what it's significance is. I don't see anywhere that explains "Specialization" either. Also "LRP". Skotty 00:04, 26 May 2011 (UTC)

I'm pretty sure that sort of information mostly existed on the old wiki before, but yeah. Well, "PBI" means "ProblemBot Index", and it is the difference between your percent score, and the "expected" percent score based on the ELO scores of the two. (A system based on how it compares based on near-ranking-bots rather than ELO would be more accurate IMO, but ELO isn't that bad of a predictor when pairings are complete I guess)

"LRP" stands for "Linear Regression Plot". Well, the term is awfully vague IMO, but it refers to oldwiki:RoboRumble/LRP. It is a plot of PBI vs ELO score. Essentially, the plot can give you a quick visual display that can highlight outliers (i.e. things your bot does exceptionally well/poorly against) and show some general trends such as "Does it fare more favorably against high ranking bots, or against low ranking bots?". --Rednaxela 11:59, 26 May 2011 (UTC)

Battles per Pairing

I just wanted to comment on the statement, "It's uncertain how well it works with less battles or incomplete pairings." My experiment with the MC2K7 shows that separate runs of 75 battles can still show more than 1% variation for a given pairing. This affects any scoring system, and is a fact that we have to live with. The reliability of output can only be as good as input, no matter how fancy the interpolation is for incomplete pairings. The hope is that the variance will become a wash when seen over 600+ pairings. --Simonton 15:25, 26 September 2008 (UTC)

I think David Alves commented that targeting challenge scores also varied by almost 1% at 15 seasons, so I agree there's lots of evidence that more battles per pairing are needed, which would take a very, very long time in a 600+ competitor environment. You're right that as the number of competitors increases, variabilities cancel each other out. But at the same time, the bigger the competition, the more risk of a "black swan" competitor whose scores are all skewed in one direction. -- Darkcanuck 15:31, 26 September 2008 (UTC)

After scratching some things down on paper which are mostly intuition rather than statistics, I believe the odds of having such a "black swan" are either exactly the same or reduced by increasing the number of bots. --Simonton 16:05, 26 September 2008 (UTC)

Well, if there are 3 bots, the chance of one getting lucky against both others is 1/4th, multiply by 3 bots, and the chance of a "black swan" in 3 bots is 75% I believe. With 4 bots, the chance of one getting lucky against against all others is 1/8th, multiply by 4 bots, and the chance of a black swan is 50%. For 5 bots... it is 31.25% chance of a black swan. For 650 bots with one pairing each, the chance of a bot having above average score in every pairing is about 1 to 2.78*10^193. So if we presume getting lucky is anything above the mean score and there's a 50% chance of that in any pairing, and that a "black swan" is only when all pairings are lucky, then the chance of a black swan sharply decreases as the number of bots becomes larger. Of course perhaps what would be more useful than simply chance of there being a bot with all pairings lucky, would be the chance of luck making the score 1% different. I could calculate this, but only if I had a number of what the "standard deviation" of the percent score of an average robocode battle is. --Rednaxela 16:28, 26 September 2008 (UTC)

My intuitive hypothesis remains unshaken, but I don't have any numbers to prove it. But I can't argue with something to the power of 193. :) I'll look into adding standard deviation to some of the tables. What would be most useful, within a pairing, across all pairings, or across all final scores? -- Darkcanuck 16:43, 26 September 2008 (UTC)

Ah, now that you put the statistics that way I can see how to do it. With 3 bots each has 2 pairings, so the chance of both coin flips being "lucky" is indeed 25%. However, the chance of at least 1 of those bots hitting its 25% is (1 - 75%^3) ~= 57.8%. Generalized, this formula is 1 - (1 - .5^(bots - 1))^bots. If you graph that you can see it reduces to pretty much zero pretty quickly. --Simonton 17:18, 26 September 2008 (UTC)

Oh right, I got slightly mixed up and was multiplying by 3 when I should have been working with powers. --Rednaxela 17:35, 26 September 2008 (UTC)
You've got an extra leading paren, but that makes sense to me. Nothing to worry about! -- Darkcanuck 18:05, 26 September 2008 (UTC)

Glicko-2 Rating System

Looking at things as my recently added versions have gained battles, it's seeming like Glicko-2 seems FAR faster to converge to a realistic expected score far quicker than ELO or Glicko-1, and seems quite stable. Glicko-2's performance seems to really impress me. I wonder if maybe we should remove ELO and Glicko-1 at time point maybe, and just keep APS and Glicko-2? (Would that make uploading a little faster?) Also, maybe it would be good to make a modified APS that uses the Glicko-2 ratings to estimate the scores of missing pairings, in order to make the APS ranking less distorted by cases when there are incomplete pairings still? --Rednaxela 21:25, 25 November 2008 (UTC)

Is it possible to modify the 'deviation' so that we have a similar 'spread' in the rankings as the ELO does? And a second to the using G-2 ratings for estimating the score for missing pairings in the APS rankings.--Skilgannon 21:41, 25 November 2008 (UTC)

I'm glad you guys are comparing the ranking systems. From what I can tell, Elo and Glicko-2 ratings seem to settle to the same ranking order as APS, although I've never noticed which converges faster. The Glicko-1 scores haven't worked out so well, so they're not really viable -- I may try replacing that column with a Glicko-2 rating which updates only using the result of the last pairing result. This would speed up uploads since it would eliminate the full pairing query. The three current methods all rely on the same data so just removing one or two won't make a noticeable difference. I suppose we could fill in the APS using Glicko-2 expected scores, that would be interesting. And I could probably scale the Glicko-2 ratings to match the current Elo scores, if that's what you meant, Skilgannon. --Darkcanuck 06:25, 26 November 2008 (UTC)
Yes, that's it exactly. --Skilgannon 06:43, 26 November 2008 (UTC)

Premier League rating calculation

Is the PL score being calculated using only the last battle for each pairing? If so, I would suggest averaging all battles in the same pairing, like the APS system does. For example, if after 4 battles you win 3 times, (2+2+2+0)/4 = 1.5. The PL ranking would be more stable and there would be less ties. --MN 20:58, 16 July 2011 (UTC)

I believe the PL league works like the following (internally), every win adds one to a running total, every loss subtracts one, every actual tie does nothing. Then taking this total, if above zero is considered a win. If below zero considered a loss, and at zero is considered a tie. It then assigns points based on this win/tie/loss, win = 2, tie = 1, loss = 0. — Chase-san 21:16, 16 July 2011 (UTC)
A bot's rankings data (APS, Glicko2, PL, etc) gets updated every time a new battle involving that bot is uploaded. The PL score is based on the APS for each pairing: you get 2 points for every pair where your APS is > 50%. It's a winner-take-all system, so there's no credit given for losing at 49.999%. I didn't even implement the 1-point for a tie, since it's so unlikely to ever happen (except for crashing bots paired together). --Darkcanuck 22:44, 16 July 2011 (UTC)
A lot better than what I was suggesting... --MN 03:53, 17 July (UTC)

Also, can we have a ranking order for the PL league in the rankings page? --MN 20:58, 16 July 2011 (UTC)

In this case I would ask the actual rank column does not get sorted, so whatever it is we sort by we see by its rank. So we could see the ranks for the top worst survival scores as rank 1 to 800+. Just not sorting this column would be far simpler then asking for a rank for every sortable column. The sortable nature of the rank column I think would be used less often. — Chase-san 21:20, 16 July 2011 (UTC)
I could probably link to it (I think the sorting code already exists) but I think most of us just re-sort using the PL column. It gives an interesting view of APS rank vs PL score. --Darkcanuck 22:44, 16 July 2011 (UTC)
Re-sorting using the PL column works, but if the bot is not near the top you have to manually count the rows. Chase suggestion would be perfect. --MN 03:53, 17 July (UTC)
Please no special link for PL. I just got into the top-10, nobody needs to know that I get beaten by that much. ;-) --GrubbmGait 23:16, 16 July 2011 (UTC)

Offline_batch_ELO_rating_system

I added another page with a modified ELO system I made: Offline_batch_ELO_rating_system --MN 12:46, 12 August 2011 (UTC)

Premier League

Perhaps scores within .5 or 1 to 50 should be treated as a tie (since these are hard to hammer out taking many many seasons, and might still be wrong). So 49.5 to 50.5 OR 49.0 to 51.0 splits could be counted as 1 instead of one getting 2 and the other getting 0. This might motivate people to work to break the tie. After all unlike something with a low integer scoring (like soccer/football) ties are harder to come by. If no one agrees, I propose dividing all PL scores by 2 instead. :) — Chase-san 20:45, 26 August 2011 (UTC)

This tie range discussion is an APS-only issue. Elo naturally takes close matches in account by measuring how often each side wins. Also, voting systems, like Condorcet, clearly define 1 vote as tie range. --MN 23:31, 26 August 2011 (UTC)
Dividing PL scores by 2 makes it look a lot like round-robin chess scoring which I also prefer. :) --MN 23:31, 26 August 2011 (UTC)
Seems a bit artificial/arbitrary, you would still get pairings teetering on the same thresholds. I'm actually curious what PL would look like if instead of 1 point for APS > 50, each bot got points based on the fraction of battles won. So a close matchup would score ~0.5 points while a decisive series of battles would yield up to 1 point for the winner. (and then multiply the sum by 2) --Darkcanuck 21:43, 26 August 2011 (UTC)
That actually makes more sense, PL based off survival. — Chase-san 22:14, 26 August 2011 (UTC)
Averaged Winning Rate ranking system? One of the simplest ranking systems, at the same time one of the most fair/accurate, and still very king maker resistant. Only flaw is it doesn´t have a tie-breaker system on its pure form. But other tie-breaker systems like Neustadtl can be easily adapted for this "AWR" system. Also doesn´t handle incomplete pairings very well, but APS suffers from the same problem and is still the main ranking system until now. --MN 23:31, 26 August 2011 (UTC)
I added a column for AWR (W%), seems like a simple and fair ranking method. --Darkcanuck 20:11, 27 August 2011 (UTC)

I noticed a huge difference between PL and APS/Schulze in the Alternative Rankings page. I thought the difference between them was in the tie-breaker criteria only. Are they using the same battle data? --MN 12:16, 27 August 2011 (UTC)

Sorry for my ignorance in Condorcet methods. Schulze also differs in the circular ambiguity resolution criteria. --MN 13:48, 27 August 2011 (UTC)

Schulze & Tideman Condorcet

I like both of these methods. Looking at the scoring, they have some very interesting results. I do somewhat like Tideman's more because of results. — Chase-san 21:05, 26 August 2011 (UTC)

Well, I like Tideman because it catapults Pris to #3, but the result doesn't make much sense to me... --Darkcanuck 21:13, 26 August 2011 (UTC)

I read up on Schulze, sounds like a very cool ranking method. Still gotta investigate Tideman. Perhaps the most interesting result to me is Hydra and Phoenix dropping into the 60s or 80s - holy cow! But now I remember the likely reason: they lose to BulletCatcher. --Voidious 15:48, 27 August 2011 (UTC)

And don't forget the BasicGFSurfer bulletpower bug. I missed almost all discussion during my vacation, and I am reading up on it. Although I am much better ranked in APS, I do see the value of PL and similar rankings. I have mixed feelings though. If 3 BulletCatchers and a few BasicGFSurfer derivates have such an influence as stated above by Voidious, I don't know if I can trust such ranking. To me it seems that a bunch of anti-xxx routines thrown together is better rewarded than a sole generalist implementation. I have to read more and think more to make up my mind definately, although my focus probably won't change. --GrubbmGait 10:26, 28 August 2011 (UTC)

That is kind of like splitting hairs if you ask me. Tuning for any specific situation for point gain is how the game is played, the only variable is how prevalent that situation is. If every robot was bad against 0.05 bullet power then you would have to be insane to 'not' tune your robot to fire for that. The thing is that, if every robot tunes for it, then everyone has the benefit from it. It is like saying it isn't really fair to tune your bot in a more specific way, or to counter a specific set of enemy robots and get overly rewarded for that.
Though I say all this however. I believe you gain more score for tuning towards and improving your general ability then for a specific situation. Tuning for those can only improve your score, not make it. Tuning for enemies that are bad at 0.05 bullet power doesn't matter if you don't fire well enough to hit them in the first place. Nene for example has no anti-xxx routines, not even a flattener, she doesn't seem to do to badly. Seraphim however has a great deal of anti-xxx routines, but does rather badly overall. :(
Chase-san 11:49, 28 August 2011 (UTC)
Exactly, thats why Seraphim does much better in Tideman and Schulze (and PL), even though Nene has a far better base. There is nothing wrong to counteract on specific tactics/strategy (f.e. ramming, bulletcatchers, surfers), I just fear that it will end up to specifically. A bot with 10 specific routines to defeat the top-20 is something I really don't like. The PL ranking is a valuable and respected ranking, and it nags me that I have the worst PL ranking in the history of the top-10, but for me it is not the most important ranking. It will improve automatically when my code improves. The same with Schulze and Tideman Condorcet. It's just not my cup of tea. Still have to read what everybody said and think it over, with the focus on what is the best for the rumble, not for me personally. --GrubbmGait 13:12, 28 August 2011 (UTC)
Well, with Condorcet methods, the prevalence of them actually wouldn't have any increased impact - losing to 10 BulletCatchers is basically as bad as losing to the worst one, not 10x like with APS. But consider how much those specializations have an impact under the rating system. Phoenix losing to a Bullet Shielding implementation is 1 loss in the PL or .01 APS, while he drops from 5 APS/8 PL to 60 or 80 with these Condorcet methods. It's an order of magnitude (or more) impact from a single loss to a gimmicky (if brilliant) strategy. Do you consider WaveShark, MicroAspid, or Tron 2 stronger than Phoenix? I don't, and he crushes them all handily. If a ranking system ranks them above Phoenix, it's hard not to consider it a strike against that rating system. All that said, I still really like these methods and I think this is a really, really extreme case. --Voidious 13:38, 28 August 2011 (UTC)
This is a reverse king-maker scenario: one (or two?) bots are able to send Phoenix (an overall strong bot which rates very high in both APS & PL) spiralling down the rankings. It's not just an extreme case, theoretically anyone could release an anti-InsertBotNameHere and achieve a similar result. But it's more likely with bots who aren't actively maintained. I find these methods interesting, but don't think they're valid for the rumble. The only method I like so far is the AWR (W% column) proposed by MN: similar to PL but with very few ties and less impact from marginal wins. --Darkcanuck 14:22, 28 August 2011 (UTC)
Well... I believe using Condorcet methods with a "ballot per bot" style input would not be subject to such "reverse king-maker" effects. I don't know exactly what the results would look like though, but I like the "ballot per bot" style conceptually. It rewards things like beating rambots by 80% more than beating them by %60, yet it should be more resistant to extreme biases of single bots ("king maker" scenarios) than APS. It may not be so similar to PL though. --Rednaxela 14:46, 28 August 2011 (UTC)
It´s not a king maker scenario. It is a circular ambiguity scenario. A competitor can screw up the rank of anyone it is better against. --MN 14:55, 28 August 2011 (UTC)
Condorcet methods are vulnerable to king maker scenarios, or compromising/burying as they are called in voting systems, when circular ambiguities arise. But in a way which I believe is good for the rumble. When Phoenix lost to BulletCatcher, it became vulnerable to being screwed by all competitors above BulletCatcher. So, BulletCatcher is a problem bot. The system incentives Phoenix to generalize against it too, and not specialize even more against someone already being crushed. --MN 14:33, 28 August 2011 (UTC)
Perhaps it does create incentives for Phoenix, but that doesn't mean it's a good ranking system. Phoenix is still a very strong bot, despite the fact that somebody wrote a highly specialised bot which happens to beat it.--Skilgannon 06:40, 29 August 2011 (UTC)
%Wins/Schulze seems to not show the same problem, although in theory it can also happen. I think Schulze is amplifying the noise/bias coming from APS. Schulze relies heavily on measured strength difference to solve circular ambiguities. If the metric is poor, the system breaks. --MN 16:21, 29 August 2011 (UTC)
A statistic based on have many times a given pairing is being used on Schulze strongest link evaluation, in favor or against a competitors score, would be a nice problem bot index. And a lot more meaningful than current Elo based PBI. --MN 14:33, 28 August 2011 (UTC)
The only thing I have against that Phoenix bashing is the system still relying on APS. APS/Schulze makes the assumption that APS is a good estimator of strength difference, which I don´t like. I am in favor of %Wins/Schulze, even though by bots got bashed in it. But I know why, they were all designed weak-crushing style. --MN 14:33, 28 August 2011 (UTC)
What alternative do you propose which would scale smoothly between 0 and 100% and have a 'tie' mapping to 50%? Using simple win% completely ignores the fact that once you have won there are still improvements to be made, so while I certainly think it is fine as a metric, I don't think it should be the primary one. APS should rather be called Average Percentage of Total Score, but beyond that it works quite well. --Skilgannon 06:40, 29 August 2011 (UTC)
A ranking system purpose is to estimate the order of competitors strengths. I believe %wins combined with some transitivity based system does this better than APS. There is no need to measure anything above first place in a ranking system. APS is nice a challenge indicator, but when you start mixing many APSs from many pairings, the system breaks. But you can perfectly use %wins as ranking, and at the same time use APS on a controlled environment against specific opponents to measure improvement. --MN 17:42, 29 August 2011 (UTC)
Hmmm, I'm not of the opinion that all the rumble is there for is to say who is the strongest/what order of strength bots are ranked in. When I look at the rumble and see DrussGT at 88.8% I don't think, "Well, I'm in first place now." Rather, I see a 11.2% possibility for improvement. Because getting 100% against everyone is virtually impossible, this means that the maximum possible score will essentially never be reached, and thus there is always room for improvement. --Skilgannon 07:18, 30 August 2011 (UTC)
I think at a more basic level, the purpose of the rumble is to make bot development more fun. Clearly, for you, me, GrubbmGait ;), and many others, APS is a good motivator. For others, like ABC, zyx, MN (whoah, sensing a pattern here), something focused on win/loss probably works much better. I don't see APS going away, but having a more intelligent and/or more stable ranking for the win/loss viewpoint certainly has its place. --Voidious 15:00, 30 August 2011 (UTC)
I really think that is the main goal in the rumble. To allow us (the programmers) to have fun with it. Improving your score, either it be in PL or in APS or what have you is addicting, and thusly can be considered well worth the time required to get it. I personally am in the APS crowd these days (though to be fair I was originally in the PL group). If its really a problem we should just mix W%/PL and APS somehow, hah. Being serious though, can't we just have a link that takes you to the rumble sorted by PL by default. Like say this one.Chase-san 15:24, 30 August 2011 (UTC)
I have to admit that improving APS is really addicting, the fun factor is the best argument in favor of this system so far. But after knowing about Schulze, seeing it working for real would be a lot of fun, this method is too cool to be left aside. I enjoy seeing the client working and the rankings stabilizing as much as I enjoy developing bots. --MN 00:27, 31 August 2011 (UTC)
All of these other methods seem to me to 'lose' information, whether by rounding to win/loss, looking only at the order of scores, etc. They all introduce some form of granularity, which makes me suspect that a much larger number of battles would be required to achieve the same stability of rankings, and that small improvements which do not necessarily reach their goal, but are a valid step, will not receive any reward. --Skilgannon 07:18, 30 August 2011 (UTC)
A system requiring fewer battles to stabilize doesn't mean it is more accurate. Let's imagine an extreme case where the ranking system is based on the names of the competitors, sorted alphabetically. It needs zero battles to stabilize, and ranks don't shift no matter how many close matches are fought. And changing the name to increase rank is always rewarded, as every letter of the name counts. It increases stability by not rounding all names to the same value, but the system is obviously biased and will never converge to a ranking reflecting competitors' strengths, no matter the number of battles. --MN 14:20, 30 August 2011 (UTC)
If we measured strength by alphabetizing, actually that would be perfectly accurate! =) --Voidious 15:00, 30 August 2011 (UTC)
I agree that they lose information found in APS, but I believe that APS also loses information. Is DrussGT really deserving of the PL crown, or did he just get lucky with 2.2.0? Our priority battles focus on stabilizing APS, so we don't know. And just as improving from 95% to 99% would mean nothing without APS, so too does improving from 49% to 55% vs one bot go mostly unrewarded under APS. If it means you go from 1 defeat to 0, that seems like a pretty important improvement to me - much more so than the 0.007 APS you'd gain from it. --Voidious 15:00, 30 August 2011 (UTC)
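(For scale: assuming roughly 850 pairings in the rumble at the time, a six-point gain in a single pairing works out to about 6 / 850 ≈ 0.007 APS, consistent with the figure above; the pairing count here is an assumption.)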
Correction: the priority battles focus on giving each pairing an equal number of battles. The fact that this stabilizes APS, PL (or any other ranking system) is a side-effect. --Darkcanuck 16:13, 30 August 2011 (UTC)
My bad - I guess it is more of a middle ground. Prioritizing APS would mean prioritizing pairings with high variance, while prioritizing win/loss would mean prioritizing pairings closer to 50%. --Voidious 17:06, 30 August 2011 (UTC)
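To make the distinction concrete, here is a minimal sketch of how each viewpoint might score a pairing's priority. These are hypothetical helper methods for illustration only, not the server's actual priority-battle logic:

<syntaxhighlight lang="java">
// Hypothetical helpers for illustration only (not the rumble server's code).

// APS-focused priority: pairings whose per-battle percentage scores vary the
// most are the least settled, so they would get more battles first.
static double apsPriority(double[] battleScores) {
    double mean = 0;
    for (double s : battleScores) mean += s;
    mean /= battleScores.length;
    double variance = 0;
    for (double s : battleScores) variance += (s - mean) * (s - mean);
    return variance / battleScores.length;  // higher variance -> higher priority
}

// Win/loss-focused priority: pairings whose win rate sits near 50% are the
// ones where the pairing winner is still in doubt.
static double winLossPriority(int wins, int battles) {
    double winRate = (double) wins / battles;
    return 1.0 - 2.0 * Math.abs(winRate - 0.5);  // 1.0 at 50%, 0.0 at 0% or 100%
}
</syntaxhighlight>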
I think a major difference between what is needed in Robocode and what is needed in real competition scoring is that in the real world the only thing that matters is who comes first, second, etc., so that they can receive their awards. In Robocode we have the added requirement that it is also important by how much somebody is beaten. Not only this, but there are set limits on how much a bot can be beaten by, as there is only so much energy per bot per battle. In contrast, sports scores are generally limited only by the skill difference between competitors. --Skilgannon 07:18, 30 August 2011 (UTC)
In every head-to-head sport it is important by how much somebody is beaten, but historically, attempts to measure that directly usually led to inaccurate rankings, even in games with capped scores like chess and go. Taking the difference into account directly makes some pairings have more weight than others, and more often than not the pairings with more weight are not the ones between the top competitors. --MN 14:20, 30 August 2011 (UTC)
The debate is more about whether the ranking system should be head-to-head style (most sports) or crush-the-weak style with lots of king-maker scenarios by choice (APS rumble). And, if we have both, which one should be the main one. --MN 14:20, 30 August 2011 (UTC)
I think it's also important to note that bots are unemotional. =) Humans get discouraged when they're losing, or unmotivated when winning by a lot. And spectators, who are paying the bills to run these sports, get bored if it's not close. So the score difference doesn't mean as much and putting focus on it would not serve the spectators. And not all sports ignore score differences entirely - prestige comes from score alone in bowling or golf, though of course it doesn't always affect rank like it does here. --Voidious 15:00, 30 August 2011 (UTC)
Actually, %wins only stops being an indicator after a competitor wins 100% of the time against everyone, and no one improves in the meantime. --MN 17:42, 29 August 2011 (UTC)

The average wins is essentially the same as PL, but with the score for getting a win being 1, then divided by the number of bots and multiplied by 100, right? I could certainly live with that as a replacement for PL. It makes much more sense. --Skilgannon 06:40, 29 August 2011 (UTC)

Averaged %wins is not winner-takes-all like PL or any Condorcet method. 70% wins against someone is 0.7 divided by the number of bots, not 1. It makes the system a lot more stable; whether it is fair or not is another matter. I think it is nice as a statistic, but as a ranking system I prefer those which are based on the transitivity axiom and have a mathematical foundation, like Elo, Schulze or Tideman. --MN 16:49, 29 August 2011 (UTC)
I think another requirement of any ranking system is that it is reasonably easy to understand, so that an intuitive feel for what will improve scores is accessible without doing an in-depth study of how the system works. --Skilgannon 07:18, 30 August 2011 (UTC)
Out of all the alternatives tested and discussed here, I like AWR/W% the best (for inclusion alongside APS + PL, not as a replacement). It's simply an average of a bot's win rate (#wins / # battles for each pairing), just like APS is an average of the scores. It's simple, easy to understand, easy to implement, and complements the information given by APS and PL. --Darkcanuck 16:13, 30 August 2011 (UTC)
I think it's a fine score to have, but I suspect it will mirror PL almost exactly. I personally find Schulze the coolest of those discussed, but I'm realizing that no matter the fancy ranking system, my focus is unaffected - try to beat my toughest matchups, or improve my APS, depending on my mood and what ideas I have. The only idea that sounds particularly enticing to me is giving way more pairings to the top PL bots, whether it's a Strongest Bots Rumble or a regularly run tourney. --Voidious 17:06, 30 August 2011 (UTC)
It won't mirror PL. There are bots with more than 30 ranks of difference between the two rankings. --MN 00:08, 31 August 2011 (UTC)
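For concreteness, here is a minimal sketch of the two scorings side by side, assuming hypothetical <code>wins[i][j]</code> and <code>battles[i][j]</code> arrays holding bot i's win count and battle count against bot j. This is an illustration only, not the server's code, and league-style PL scoring (2 points per pairing won, 1 per tie) is assumed:

<syntaxhighlight lang="java">
// Averaged win rate (%W / AWR): each pairing contributes its win fraction,
// so winning 70% of the battles against someone contributes 0.7, not 1.
static double averagedWinRate(int bot, int[][] wins, int[][] battles) {
    double sum = 0;
    int pairings = 0;
    for (int j = 0; j < wins.length; j++) {
        if (j == bot || battles[bot][j] == 0) continue;
        sum += (double) wins[bot][j] / battles[bot][j];
        pairings++;
    }
    return pairings == 0 ? 0 : 100.0 * sum / pairings;  // averaged over pairings, like APS
}

// PL-style winner-takes-all: only whether the pairing is won matters.
static int plScore(int bot, int[][] wins, int[][] battles) {
    int points = 0;
    for (int j = 0; j < wins.length; j++) {
        if (j == bot || battles[bot][j] == 0) continue;
        int doubled = 2 * wins[bot][j];
        if (doubled > battles[bot][j])       points += 2;  // won the pairing
        else if (doubled == battles[bot][j]) points += 1;  // tied the pairing
    }
    return points;
}
</syntaxhighlight>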

== %Wins/Schulze or Score/Schulze Condorcet ==

I would like to see one of these systems. They take the APS formula out entirely and follow Condorcet principles more closely (majority rule instead of averaging). Score/Schulze in particular has no averaging at all, making it the closest to Condorcet that I can think of. Tie-breaks are handled entirely inside the Schulze system. --MN 23:31, 26 August 2011 (UTC)

I'm working on %win variations now. Schulze without some sort of normalization will not work since it will give more weight to pairings with more battles. --Darkcanuck 01:03, 27 August 2011 (UTC)
It's done, see SchW and TRPW columns. I didn't implement Schulze tie-breaking (or even tie detection) so that may skew the results (algorithm was complicated enough as-is). You'll notice that TRPW generated a huge number of ties (marked by asterisks) vs using APS only. --Darkcanuck 20:15, 27 August 2011 (UTC)
The results were totally unexpected. Combat got 29 in %Wins and 75 in %Wins/Schulze... and I thought they were similar systems. Maybe I'll try making a %Wins/Schulze ranking with tie-detection to see what happens. --MN 00:09, 28 August 2011 (UTC)
Reviewing my code, ties are handed to the bot with the higher overall APS. Even in the APS-based Schulze, the results are quite unexpected. I doubt that ties are a major factor... --Darkcanuck 03:59, 28 August 2011 (UTC)
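For anyone curious what the SchW computation involves, here is a minimal sketch of the Schulze strongest-path ranking over normalized win fractions. It is an illustration of the method, not the actual server code, and like the real SchW column it does no tie handling. Using win fractions rather than raw win counts is the normalization that keeps pairings with more battles from getting extra weight:

<syntaxhighlight lang="java">
/**
 * Sketch of a %Wins/Schulze ranking. winFrac[i][j] is assumed to hold bot i's
 * fraction of battles won against bot j (0.0 - 1.0). Returns, for each bot,
 * how many opponents it beats via the strongest-path comparison.
 */
static int[] schulzeScores(double[][] winFrac) {
    int n = winFrac.length;
    double[][] p = new double[n][n];  // strongest path strengths

    // A direct defeat of j by i only counts if i wins the pairing overall.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j && winFrac[i][j] > winFrac[j][i])
                p[i][j] = winFrac[i][j];

    // Floyd-Warshall style relaxation: a path is as strong as its weakest link.
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            if (i != k)
                for (int j = 0; j < n; j++)
                    if (j != i && j != k)
                        p[i][j] = Math.max(p[i][j], Math.min(p[i][k], p[k][j]));

    // Bot i ranks above bot j when its strongest path to j beats j's path back.
    int[] score = new int[n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j && p[i][j] > p[j][i])
                score[i]++;
    return score;
}
</syntaxhighlight>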

== Averaged Winning Rate or Averaged Percentage Wins ==

The %W column. It is incredibly simple, making it easy to calculate online, and quite resistant to both king-maker scenarios and circular ambiguities. But it is inaccurate with incomplete pairings and few battles, like most non-statistical systems. --MN 15:19, 28 August 2011 (UTC)

Analyzing where to improve your bot is very easy: simply look at the pairings with a winning rate below 100%. --MN 15:19, 28 August 2011 (UTC)
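A minimal sketch of that check, reusing the same hypothetical wins/battles arrays as in the sketch further up (illustration only):

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// List the opponents the bot does not beat in every battle, worst first -
// these are the pairings that cost %W score.
static List<Integer> problemPairings(int bot, int[][] wins, int[][] battles) {
    List<Integer> opponents = new ArrayList<>();
    for (int j = 0; j < wins.length; j++)
        if (j != bot && battles[bot][j] > 0 && wins[bot][j] < battles[bot][j])
            opponents.add(j);
    opponents.sort(Comparator.comparingDouble(
            j -> (double) wins[bot][j] / battles[bot][j]));  // lowest win rate first
    return opponents;
}
</syntaxhighlight>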

And... Combat catapulted 30 ranks with it. :P --MN 15:19, 28 August 2011 (UTC)

But I am divided between %W and SchW. There might be a debate about whether increasing the winning rate from 40% to 60% should be worth the same as increasing it from 80% to 100%. This is a very strong assumption, although those situations are a lot less common than with APS. --MN 15:19, 28 August 2011 (UTC)

== Battle count in Alternative Rankings page ==

Can battle count per competitor be added as a column in the page? It would be useful to see how sensitive each system is to battle count. --MN 12:16, 27 August 2011 (UTC)

Done. I also added tie indication to the W% column (but unfortunately only marked the second of the tied pair; in the Tideman columns, all tied rankings are marked). --Darkcanuck 03:45, 28 August 2011 (UTC)