BeepBoop seems to be losing ~1.5 APS

Fragment of a discussion from User talk:Kev

Sure, but what should I look for in my logs? Are they even stored for a long time? All I see is the last uploaded file.

Also, note that there are uncertainties. 0.1 APS is not that much; battles usually have a 1-6 APS spread per pairing. Also, some bots keep logs, so it might be that my setup has the longest (most complete) stored enemy visit-count logs.

Also, it is possible that the original scoring was done on a fast CPU where the CPU time constant was in favor of BeepBoop.

But I also notice that newly submitted bots start out better and then drop 0.5-1 APS as the rumble settles.

Beaming (talk) 05:16, 6 February 2023

I run the RoboRumble client with nohup, so I can just grep nohup.out. You can also use bash redirection to persist the log. Otherwise it's impossible to track down the weird scores.
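
For example, a minimal setup along those lines, assuming the stock roborumble.sh launcher (adjust the script and log file names to your installation):

    # start the client in the background and persist its output
    nohup ./roborumble.sh >> roborumble.log 2>&1 &
    # later, search the saved log for a given bot's battles
    grep 'BeepBoop' roborumble.log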

The reason that bots drop 0.5-1 APS is that some battles produce insane results, which greatly affect the final score.

In a controlled environment (both on the newest hardware and on a low-end 1.33 GHz processor), you get a very stable score, with less than 0.1 APS difference from 5000 battles to 20000 battles. This observation also excludes the possibility of log-saving bots: logically, one battle is enough for them, and increasing that to 20 battles doesn't help.

Xor (talk) 08:59, 6 February 2023
 

Look at BeepBoop against reeder.colin.WallGuy3: it gets 81.28 APS instead of 98 with 4 battles. You can think of this as 3 battles at 98 APS and 1 battle at 30 APS. What happened in the battle where it got 30 APS? I can only imagine a heavily loaded machine where computation takes much longer than usual and both participants skip a lot of turns due to insufficient time.
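
For reference, that split is consistent with the reported average (the per-battle figures are of course only approximate):

    (3 × 98 + 1 × 30) / 4 = 324 / 4 = 81, close to the reported 81.28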

The problem with this is that you can never reproduce the result. It has nothing to do with uncertainties: running BeepBoop against reeder.colin.WallGuy3 will always get ~98 APS as long as the environment is controlled. You never get to know what actually happened.

Xor (talk) 09:07, 6 February 2023

I see your point, but we should not forget that there are probabilities involved. Maybe WallGuy3 got lucky once. For example, BeepBoop could have spawned in a corner very close to the opponent.

Also, looking at the rumble logs is a bit useless without the ability to correlate them with the load on the system.

Ideally, we should solve it in the Robocode client and not by running in a pristine/controlled environment. None of us can dedicate a computer to a single task. A potential solution is to have a thread which estimates CPU load (but even then there could be transient load spikes that go undetected), as sketched below.
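
A minimal sketch of what such a monitor thread might look like, using the standard Java OperatingSystemMXBean; the class name, sampling interval, and threshold are hypothetical, and how it would hook into the client is left open:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    // Samples system load in the background so that battles run under
    // heavy load could be flagged (or their results discarded/re-run).
    public class LoadMonitor implements Runnable {
        private final OperatingSystemMXBean os =
                ManagementFactory.getOperatingSystemMXBean();
        private volatile double peakLoad = 0;

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                // 1-minute system load average; -1 if the platform does not report it
                double load = os.getSystemLoadAverage();
                if (load > peakLoad) {
                    peakLoad = load;
                }
                try {
                    // sample once per second; spikes shorter than the averaging
                    // window are missed, which is exactly the caveat above
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        /** Call after a battle: true if load stayed below the given per-core threshold. */
        public boolean battleLooksClean(double maxLoadPerCore) {
            int cores = Runtime.getRuntime().availableProcessors();
            return peakLoad <= maxLoadPerCore * cores;
        }

        public void reset() {
            peakLoad = 0;
        }
    }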

Beaming (talk) 02:54, 8 February 2023

Probabilistic effects can be reproduced, but weird scores cannot. Even if something unlucky happens, it cannot explain these results:

reeder.colin.WallGuy3 1.0
hs.SimpleHBot 1.3
darkcanuck.Holden 1.13a
tobe.Saturn lambda

Why are all four bots "lucky" within only 4k battles, yet never lucky in my local run of over 20k battles? The deviation over 20k battles is very small, which rules out the presumption of lucky bots.

The point is to make the scores trackable, so that we can identify problems. Currently the client doesn't save logs automatically at all.

I quite understand that a dedicated computer is not feasible for most people. My setup isn't pristine either. However, I have never observed weird scores on my computer, which makes them a mystery. But those weird scores do hurt the final result a lot: weeks of dedicated work may not gain you 0.1 points, yet you sometimes lose 0.2 points for an unknown reason.

So please at least redirect the output of the RoboRumble client to files, so that we can track down what conditions produced the weird scores.

Xor (talk) 03:59, 8 February 2023

Totally agreed about logging. Will do it right away.

Beaming (talk) 04:02, 8 February 2023