BeepBoop seems to be losing ~1.5 APS
Compared to the nearly identical 1.20, you can clearly see some random drops, which I suspect are skipped turns on some old hardware.
The previous result of ~94.8 APS can be reproduced on my computer, so I think the previous results can be trusted.
@Beaming seems to be contributing the most to RoboRumble recently; we could work together to see if something could be done to ensure the reproducibility of RoboRumble results.
I think I found a potential problem spot. One of my computers was 4 times slower but was using the CPU constant from a 4-times-faster computer. I recalculated the CPU constant (by deleting the config file) and hope the APS drop will resolve itself. It might also explain why a (subjectively) better version of my bot in development performs worse than the old one.
It would be nice if the rumble client recalculated the CPU constant at startup. It takes very little time and provides more stability. But I also recall a discussion that active throttling in modern hardware makes this number just an estimate.
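For illustration, here is a minimal Java sketch of recalculating the constant at startup, assuming a simple busy-loop benchmark; the class name, method names, and scaling factor are made up for this sketch and are not the actual Robocode CpuManager API:

```java
import java.util.concurrent.TimeUnit;

/**
 * Sketch: estimate a CPU constant (per-turn time budget) at client
 * startup by timing a fixed amount of floating-point work. Faster
 * machines finish sooner and get a smaller, stricter budget.
 */
public final class CpuConstantEstimator {

    // Keep the benchmark result observable so the JIT cannot
    // dead-code-eliminate the loop.
    private static volatile double sink;

    /** Estimated per-turn time budget, in nanoseconds. */
    public static long estimateCpuConstantNanos() {
        burn(1_000_000);                  // warm up the JIT first
        long start = System.nanoTime();
        burn(10_000_000);                 // measured run
        long elapsed = System.nanoTime() - start;
        // The divisor is a made-up calibration factor for this sketch.
        return elapsed / 100;
    }

    private static void burn(int iterations) {
        double acc = 0;
        for (int i = 0; i < iterations; i++) {
            acc += Math.sqrt(i) * Math.sin(i);
        }
        sink = acc;
    }

    public static void main(String[] args) {
        System.out.printf("estimated cpu constant: ~%d ms%n",
                TimeUnit.NANOSECONDS.toMillis(estimateCpuConstantNanos()));
    }
}
```

As the throttling point above suggests, a single run is only an estimate; averaging a few runs would smooth out turbo-boost and scheduler noise.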
By the way, in 2014 we had an interesting discussion at Thread:User talk:Beaming/Smart bots competition about allowing long calculation times for bots. Maybe it's time to revive it, since ML approaches have developed quite a bit and they are CPU intensive.
On the other hand, making a top-ranking fast bot is a challenge in itself.
I agree. Enforcing recomputation of the CPU constant at startup and every e.g. 100 battles is necessary, as it strongly affects results and is easy to get wrong. Recomputing periodically would also mitigate the effect of other heavy tasks running alongside RoboRumble, without adding much overhead.
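A sketch of how the periodic part could look in the client's battle loop; the class, the hook name, and the 100-battle interval are placeholders, not the real RoboRumble client code:

```java
import java.util.function.LongSupplier;

/**
 * Sketch: refresh the CPU constant every RECALC_INTERVAL battles so a
 * stale or borrowed value (or a background task started mid-run)
 * cannot skew a long run.
 */
public class CpuConstantRefresher {

    private static final int RECALC_INTERVAL = 100; // battles between refreshes

    private final LongSupplier estimator;
    private int battlesRun;
    private long cpuConstantNanos;

    public CpuConstantRefresher(LongSupplier estimator) {
        this.estimator = estimator;
        this.cpuConstantNanos = estimator.getAsLong(); // initial measurement
    }

    /** Call once after each completed battle. */
    public void onBattleCompleted() {
        if (++battlesRun % RECALC_INTERVAL == 0) {
            cpuConstantNanos = estimator.getAsLong();
        }
    }

    public long getCpuConstantNanos() {
        return cpuConstantNanos;
    }
}
```

Wired up with the estimator sketch above, this would be `new CpuConstantRefresher(CpuConstantEstimator::estimateCpuConstantNanos)`.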
I'm also thinking about adding some test set before allowing score submission, but that would be a long-term plan.
I'll submit a PR for recomputing the CPU constant; any suggestions are welcome.
I'm also interested in adding a separate rumble with a long calculation time.
I'll add an option in the rumble config file to multiply the CPU constant by a factor (warning: advanced usage); then a *SmartRumble* could be realized. The initial participants could be copied from GigaRumble ;)
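A rough sketch of how the multiplier could be read; the property name robocode.cpu.constant.multiplier and the config path are hypothetical, and the actual PR may name them differently:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * Sketch: read an optional CPU-constant multiplier from the rumble
 * config file, defaulting to 1.0 (normal rumble behavior).
 */
public final class CpuConstantMultiplier {

    public static double readMultiplier(String configPath) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(configPath)) {
            props.load(in);
        }
        return Double.parseDouble(
                props.getProperty("robocode.cpu.constant.multiplier", "1.0"));
    }

    public static void main(String[] args) throws IOException {
        long measured = 10_000_000L; // placeholder measured constant, in ns
        double factor = readMultiplier("./roborumble/roborumble.txt");
        // e.g. factor = 10.0 gives every bot 10x the usual thinking time,
        // which is the whole point of a SmartRumble.
        System.out.printf("effective cpu constant: %.0f ns%n", measured * factor);
    }
}
```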
I bought a low-end PC running Linux with 4 cores @ 1.33 GHz (a 2016 CPU) and turbo boost disabled. Its CPU constant is 5x that of my main computer.
I tried to run the entire rumble with RoboRunner, two instances in parallel (which takes ~20x as long to complete, since I normally run 8 instances), and so far the scores look fine. So I guess what actually causes the strange scores is indeed inaccurate CPU constants.
Anyway, I haven't tried running other background tasks at the same time (because I don't have such tasks to run), so I'm not sure whether that affects the scores as well.
BeepBoop 1.21a now seems to be losing only 0.1 APS compared to 1.2 (and 0.2 APS compared to my local run).
However, there are still some pairings with weird scores:
* reeder.colin.WallGuy3 1.0
* hs.SimpleHBot 1.3
* darkcanuck.Holden 1.13a
* tobe.Saturn lambda
I'm also running the rumble client in the meantime, and I couldn't find the corresponding pairings in my logs.
@Beaming Could you please also have a look at the logs to see which machine is producing the weird scores?
I suspect most of the scores should be fine now, but some weird scores may still be produced under heavy load.
Sure, but what should I look for in my logs? Are they even stored for a long time? All I see is the last uploaded file.
Also, note that there are uncertainties. 0.1 APS is not that much; battles usually have a 1-6 APS spread per pairing. Also, some bots keep logs, and it might be that my setup has the longest (most complete) stored enemy visit-count logs.
Also, it is possible that the original scoring was done on a fast CPU where the CPU constant was in favor of BeepBoop.
But I also notice that newly submitted bots start better, and then drop 0.5-1% APS as the rumble settles.
I run the RoboRumble client with nohup, so I can just grep nohup.out. You can also use bash redirection to persist the log, e.g. `./roborumble.sh >> roborumble.log 2>&1`. Otherwise it's impossible to track down the weird scores.
The reason that bots drop 0.5-1% APS is that some battles produce insane results, which greatly affect the final score.
In a controlled environment you get very stable scores: less than 0.1 APS difference between 5000 battles and 20000 battles. This observation also rules out log-saving bots as the explanation: logically, one battle is enough for saved data to take effect, and increasing that to 20 battles doesn't add anything.
Look at BeepBoop against reeder.colin.WallGuy3: it gets 81.28 APS instead of ~98 over 4 battles. You can read this as three battles at ~98 APS and one at ~30 APS, since (3 × 98 + 30) / 4 ≈ 81. What happened in the battle that scored ~30 APS? I can only imagine a heavily loaded machine with much longer computation times than usual, with both participants skipping a lot of turns due to insufficient time.
The problem with this is that you can never reproduce the result; it has nothing to do with uncertainties. Running BeepBoop against reeder.colin.WallGuy3 will always get ~98 APS as long as the environment is controlled. You never get to know what actually happened.