Weird rumble scores
I'm curious why you think GC overhead is always fair. Since GC happens outside the main thread, it can punish all robots in the same battle with a high degree of randomness. Worse, if you're running several rumble clients on one computer and overall CPU usage across all cores hits 100% because a couple of clients have higher GC overhead, it can affect every bot in every active battle on that system, even in the other rumble clients.
Being fair means no one gains an advantage from it. And the ability to withstand occasional skipped turns is part of the competition.
Since no one can guarantee that robots always run with sufficient resources, I'm always on the side that robot authors should assume low-performance computers.
You are right. Apart from creating many threads that do a lot of work when it's another bot's turn, creating a lot of objects to increase GC overhead affects other bots as well, making the results a little bit random.
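To make the mechanism concrete, here's a minimal sketch of the kind of per-turn allocation churn that drives GC overhead up, next to an equivalent allocation-free version. The method and class names are made up for illustration; this isn't from any real bot:

```java
import java.util.ArrayList;
import java.util.List;

public class GcChurnExample {
    // Allocation-heavy pattern: builds thousands of short-lived objects
    // every turn, so the shared JVM spends more time in GC pauses that
    // can stall every bot in the battle.
    static double churnyDensity(double[] angles, double target) {
        List<Double> diffs = new ArrayList<>();              // new list each turn
        for (double a : angles) {
            diffs.add(Double.valueOf(Math.abs(a - target))); // boxing churn
        }
        double sum = 0;
        for (Double d : diffs) sum += d;
        return sum / diffs.size();
    }

    // Equivalent allocation-free pattern: same result, no garbage created.
    static double quietDensity(double[] angles, double target) {
        double sum = 0;
        for (double a : angles) sum += Math.abs(a - target);
        return sum / angles.length;
    }
}
```

Both methods return the same value; only the first one generates garbage that every bot sharing the JVM has to pay for.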
But I doubt how much difference GC overhead can really make. The most unreproducible scores I've experienced always come from rare exceptions, say 1 in 1000 battles. Once one happens, it drives some random pairing close to 0, and when that's averaged with normal scores, it looks like the score is decreasing for no reason.
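To put rough numbers on it (the 60% figure is purely illustrative): if a pairing normally scores about 60% and one rare exception zeroes a single battle, a 2-battle average becomes roughly (60 + 0) / 2 = 30%, which on the rumble page just looks like the score halved with no explanation.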
But there's always some reason, and it mostly comes from a specific bot rather than the clients, since not everyone is affected.
So my advice is to output exceptions to a file and check whether there are any. Skipped turns could be counted too. I did this in older bots as well, and concluded that GC overhead and skipped turns aren't really the problem, but exceptions are.
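For reference, a minimal sketch of that kind of logging might look like the following. The bot skeleton and file name are placeholders, not any real bot; it only relies on the standard Robocode APIs for data files (getDataFile, RobocodeFileOutputStream) and for skipped turns (onSkippedTurn):

```java
import robocode.AdvancedRobot;
import robocode.RobocodeFileOutputStream;
import robocode.SkippedTurnEvent;

import java.io.IOException;
import java.io.PrintStream;

// Hypothetical bot skeleton that records exceptions and skipped turns
// to its data directory, so odd rumble scores can be diagnosed later.
public class DiagnosticBot extends AdvancedRobot {
    private int skippedTurns = 0;

    public void run() {
        try {
            while (true) {
                // ... normal movement/gun/radar logic goes here ...
                execute();
            }
        } catch (RuntimeException e) {
            log(e); // record the stack trace before the engine disables us
            throw e;
        }
    }

    @Override
    public void onSkippedTurn(SkippedTurnEvent e) {
        skippedTurns++; // cheap counter, written out alongside any exception
    }

    private void log(Throwable t) {
        // Robots may only write files via RobocodeFileOutputStream,
        // into the per-bot directory that getDataFile() points at.
        try (PrintStream ps = new PrintStream(
                new RobocodeFileOutputStream(getDataFile("errors.log")))) {
            ps.println("round " + getRoundNum() + " turn " + getTime()
                    + " skipped=" + skippedTurns);
            t.printStackTrace(ps);
        } catch (IOException io) {
            out.println("couldn't write log: " + io);
        }
    }
}
```

After a suspicious battle, the file shows up in the bot's data directory, so you can tell at a glance whether an exception or a pile of skipped turns lines up with the weird score.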
For the specific case I mentioned of ags.Glacier 0.3.0 versus lxx.Emerald 0.6.5, when there was only 1 battle, I thought it was likely some rare exception, as you say, but then a 2nd battle came in with about the same low score as the 1st, and yet I haven't been able to reproduce anything like that result in many, many tests. This leads me to believe there is most likely something significantly different about the environment those two battles were run in, as compared to my own.