Approximate ELO
I have been mulling over ways to approximate ELO for a while. Originally it was used for rankings (the 2000 club, etc.), but those rankings have drifted considerably, and with the advent of APS there is no real need for it anymore. Still, the number is nice to have, even if the methods for calculating it have problems.
So I figured we could get our nice number without a lot of number crunching by way of a simple equation. At least that was the original theory. However, APS doesn't map very well to ELO, which is not at all linear, and the data set required for fitting such an equation is incomplete.
Here are my token efforts at doing just that anyway. The data set I used was the 2009 Glicko-2 and APS, as it was likely the most similar to the old ELO rankings. I would have used those directly, but they lacked an APS column, and the common bots between them don't line up very well (plus that decimates my data set even more).
public static final double calculateElo(double x) {
    // Fitted cosh/log curve: x is APS (0-100), the result is an approximate ELO.
    double a = 169.1;
    double b = 0.02369;
    double c = 334.2;
    return a * Math.cosh(b * x) + c * Math.log(x);
}

public static final double calculateEloFast(double x) {
    // Cheaper reciprocal fit: slightly less accurate, but avoids cosh and log.
    double a = 0.7082e-03;
    double b = -0.3340e-05;
    double c = 0.3992e-02;
    return 1.0 / (a + b * x + c / x);
}
The first is a bit more accurate: 0 maps to negative infinity and 100 maps to around 2450 (it should be positive infinity, but I did what I could). However, with a logarithm and a cosh it is a bit heavy to be called 700+ times every time a page loads (I think, at least). The second is slightly less accurate: 0 maps to 0 and 100 to about 2415, but with mostly simple math it is much cheaper to execute.
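As a quick sanity check, here is a minimal harness that prints both fits side by side. The EloApprox class name and the sample APS values are just illustrative, and the constants are copied from the definitions above; this isn't part of the rumble code.

public class EloApprox {

    // Constants inlined from calculateElo above (cosh/log fit).
    public static double calculateElo(double x) {
        return 169.1 * Math.cosh(0.02369 * x) + 334.2 * Math.log(x);
    }

    // Constants inlined from calculateEloFast above (reciprocal fit).
    public static double calculateEloFast(double x) {
        return 1.0 / (0.7082e-03 - 0.3340e-05 * x + 0.3992e-02 / x);
    }

    public static void main(String[] args) {
        // Spot-check both approximations across the APS range.
        double[] sampleAps = {1, 10, 25, 50, 75, 90, 100};
        for (double aps : sampleAps) {
            System.out.printf("APS %5.1f -> cosh/log: %8.1f  reciprocal: %8.1f%n",
                    aps, calculateElo(aps), calculateEloFast(aps));
        }
    }
}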
So thoughts, concerns, scorn?
For a precise ELO calculation you would need the full pairwise matrix. But for an approximation based on an APS column, it looks good.
Another way to deal with rating drift (without the full pairwise matrix) is to calculate the average of all ELO ratings, take the difference between that average and 1600, and then add/subtract that difference to all the drifted ratings. That way the ratings are centered around 1600. It works with both ELO and Glicko-2 rating columns.
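A minimal sketch of that recentering, assuming the rating column is available as a plain double array (the method and variable names here are just illustrative):

// Illustrative sketch: shift every rating by the same offset so the column's
// average comes out at 1600. Not taken from the rumble server code.
static double[] recenterRatings(double[] ratings) {
    double sum = 0;
    for (double r : ratings) {
        sum += r;
    }
    double average = sum / ratings.length;
    double offset = 1600 - average; // negative if the column inflated, positive if it deflated

    double[] recentered = new double[ratings.length];
    for (int i = 0; i < ratings.length; i++) {
        recentered[i] = ratings[i] + offset;
    }
    return recentered;
}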
Does it currently do that? The Glicko-2 has drifted up; the 2000 club now consists of only the top 16 bots.
Which is why I was considering trying to make an approximate version. However, with mine, instead of drifting down I think I see higher bots drifting up and lower bots drifting even further down. I think this is because as new lower bots are added, the APS of the higher robots goes up, while the robots that do worse against the newcomers go down.
So it faces an entirely different kind of drift, but at least the center seems stable.
ELO/Glicko-2 work with differences between ratings.
The more competitors the ranking has, the larger the rating difference between first and last place will be. This is normal.
But ELO has another drifting problem due to deflation: all competitors' ratings go down because most retired competitors have ratings above the average.
// Shift every drifted rating by a constant offset so the column's average
// lands back on 1600. DRIFTED_AVERAGE is the current average of the rating column.
static final double DRIFTED_AVERAGE = -2397.92418506835;
static final double DESIRED_AVERAGE = 1600;

static double calculateElo(double driftedElo) {
    return driftedElo - DRIFTED_AVERAGE + DESIRED_AVERAGE;
}
I'd be fine with a new ELO formula to replace the currently useless one, but I can't see myself ever again caring about ELO or Glicko over straight APS for the main overall score rating. APS is clear, stable, accurate and meaningful, and ELO/Glicko just seem like attempts to solve a problem we don't have. As far as new permanent scoring methods, I'm much more interested in the Condorcet / Schulze / APW ideas brought up in the discussions on Talk:Offline batch ELO rating system, Talk:King maker, and Talk:Darkcanuck/RRServer/Ratings#Schulze_.26_Tideman_Condorcet. I also really like what Skilgannon did with ANPP in LiteRumble, where 0 is the worst score against a bot and 100 is the best score against a bot, scaling linearly.
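For reference, a rough sketch of that kind of linear per-bot scaling. This is only my reading of the description, not LiteRumble's actual code, and the method name is made up.

// Illustrative sketch of linear 0-100 normalization per opponent, in the spirit
// of the ANPP description above; not LiteRumble's actual implementation.
static double[] normalizeScoresAgainstBot(double[] scoresAgainstBot) {
    double min = Double.POSITIVE_INFINITY;
    double max = Double.NEGATIVE_INFINITY;
    for (double s : scoresAgainstBot) {
        min = Math.min(min, s);
        max = Math.max(max, s);
    }
    double[] normalized = new double[scoresAgainstBot.length];
    for (int i = 0; i < scoresAgainstBot.length; i++) {
        // Worst score against this bot maps to 0, best to 100, linear in between.
        // The degenerate single-value case is mapped to 50 here, which is an arbitrary choice.
        normalized[i] = (max == min) ? 50 : 100.0 * (scoresAgainstBot[i] - min) / (max - min);
    }
    return normalized;
}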