Test bed with stable results
Hi all. Which test beds do you use to test your robots? I can't find a test bed that gives stable results (+/- 0.05 APS). Currently I run every 5th bot from the RoboRumble, with 5 seasons against each, and the results vary within +/- 0.3 APS.
Well, I suspect it's the total number of battles that matters most. A recent version of Diamond dropped 0.12 APS after 2000 battles in the rumble, but perhaps that was a fluke due to someone's client skipping turns. My APS test beds are 100 bots: I run 5 seasons (500 battles) to get a rough idea, consider 10 seasons accurate enough even for pretty minor tweaks, and run 20 seasons if I want to feel very confident. I'm not sure what the statistical accuracy is, but +/- 0.05 for 20 seasons (2000 battles) would be about my guess. That's pretty much my experience with the accuracy of RoboRumble results after 2k battles, too.
I've been thinking about adding a feature to RoboResearch that calculates the confidence interval of the overall result, assuming normal distributions. It should be pretty easy.
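For what it's worth, here's a minimal sketch of what that calculation could look like in Java. The class, method names, and the per-season data are made up for illustration; this is not RoboResearch's actual API:

```java
import java.util.Arrays;

/** Sketch: 95% confidence interval for an overall APS score,
 *  computed from per-season results and assuming they are
 *  (roughly) normally distributed. */
public class ApsConfidence {

    /** Half-width of the 95% confidence interval around the mean APS. */
    public static double halfWidth95(double[] seasonAps) {
        int n = seasonAps.length;
        double mean = Arrays.stream(seasonAps).average().orElse(0);
        double sumSq = 0;
        for (double s : seasonAps) {
            sumSq += (s - mean) * (s - mean);
        }
        double stdDev = Math.sqrt(sumSq / (n - 1)); // sample standard deviation
        double stdErr = stdDev / Math.sqrt(n);      // standard error of the mean
        return 1.96 * stdErr; // z for 95%; a t-value would be safer for small n
    }

    public static void main(String[] args) {
        // Hypothetical per-season APS scores from a test run.
        double[] seasons = {71.2, 71.5, 70.9, 71.4, 71.1};
        double mean = Arrays.stream(seasons).average().orElse(0);
        System.out.printf("APS = %.2f +/- %.2f (95%%)%n",
                mean, halfWidth95(seasons));
    }
}
```

Since the half-width shrinks with the square root of the number of seasons, this would also let you estimate how many more seasons you'd need to hit a target margin like +/- 0.05.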
I run 30 seasons of 35 rounds against 40 bots (taking approx. 18 hours). No idea if it is stable though; it is my first test bed. I do know my test bed does not reflect the rumble correctly, because 0.3.2 and 0.3.5 score on par, while 0.3.7 scores approx. 1.5 APS lower.
GrubbmGait, you're a hero :) For me, 4 hours per test is the limit, and I want a test that gives me results in 2 hours. Maybe you could try Distributed_Robocode: I can share my home netbook (i3 1.3 GHz x 2) now, and this week I plan to set up an old Duron 1.6 as a dedicated Robocode server. So I guess your test would take 6 hours at most (though that strongly depends on which robots are in your test bed).
I found a good test bed: 7 seasons against every second bot from the RoboRumble (~2900 battles). It produces repeatable results within +/- 0.05 APS, and its error against the real RR is within +/- 0.1 APS.
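If anyone wants to reproduce that kind of test bed, a sketch like this would do the selection. The file name and the one-bot-per-line format are assumptions; adapt it to however you keep your local copy of the participants list:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

/** Sketch: build a test bed by taking every Nth entry from a local
 *  copy of the rumble participants list (assumed one bot per line). */
public class TestBedBuilder {

    public static List<String> everyNthBot(String participantsFile, int n)
            throws IOException {
        List<String> all = Files.readAllLines(Paths.get(participantsFile));
        List<String> testBed = new ArrayList<>();
        for (int i = 0; i < all.size(); i += n) {
            testBed.add(all.get(i).trim());
        }
        return testBed;
    }

    public static void main(String[] args) throws IOException {
        // Every 2nd bot, as in the test bed described above.
        for (String bot : everyNthBot("participants.txt", 2)) {
            System.out.println(bot);
        }
    }
}
```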
I'd think 1 battle against each bot from the rumble would probably give better results... maybe not as reproducible, but more closely correlated to the rumble.
No, Skilgannon, even 3 battles against every bot is worse in terms of both stability and correlation with the RR.
I totally agree with using very large test beds (lately ~100 bots for me), but I think you're wasting some CPU cycles testing against bots that are unlikely to be affected by your changes. Bots you get 98+ against probably won't be affected unless you're testing changes to your core surfing algorithm or something. Most changes to surf stats are not going to affect HOT bots at all, and flattener tweaks won't affect any bots that have no chance of triggering your flattener.
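A minimal sketch of that kind of trimming, assuming you have previous APS scores on hand (the score map here is hypothetical; in practice it would come from your rumble results or an earlier RoboResearch run):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: drop bots from a test bed when the previous APS against
 *  them is already 98+, since most tweaks won't move those scores. */
public class TestBedFilter {

    public static void main(String[] args) {
        // Hypothetical previous scores against a few candidate bots.
        Map<String, Double> previousAps = new LinkedHashMap<>();
        previousAps.put("sample.SittingDuck 1.0", 99.8);
        previousAps.put("sample.SpinBot 1.0", 98.6);
        previousAps.put("sample.Walls 1.0", 92.3);

        double cutoff = 98.0;
        previousAps.forEach((bot, aps) -> {
            if (aps < cutoff) {
                System.out.println("keep: " + bot); // worth testing against
            } else {
                System.out.println("skip: " + bot); // unlikely to change
            }
        });
    }
}
```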
My last release showed that you never know what will be affected :) So it's better if a test takes 7 hours instead of 6.5 but gives confidence in the results :)