how to build a good test bed?

Revision as of 28 September 2017 at 09:19.


Recently I tried a lot to tune my movement, and the result looked promising: it performed very well in my test bed (which consists of bots I performed badly against in the past, including some guess-factor targeting bots, DC bots, and a simple targeter with VG). However, the rumble result shows a huge performance regression ;/

Then I tried another version. When published to the rumble, it showed a huge performance increase (and a small increase at full pairing), but after ~5000 battles the performance actually decreased compared to the baseline version.

My test bed runs 30 seasons against 10 bots (35 rounds per battle), 300 battles in total, but its results show no correlation with the rumble score. Is it the bots I chose that make it a bad test bed, or do I simply have too few battles?

The bots I use in my test bed are FloodHT, SandboxDT, RaikoMicro (GF targeting bots), Tron, Aleph (DC bots), Che, Fermet, WeeklongObsession (pattern matchers), and GrubbmGrb (“simple” targeting).

Again, it seems that even after ~3000 battles, the rumble score is still not reliable enough to compare two versions.

So here are my questions: How do you evaluate your bot? How many bots are in your test bed, and how many battles do you run against each of them?

    Xor (talk)02:03, 27 September 2017

    I think your problem is that you are already in the top 10 :) while you are testing against relatively simple bots (by modern standards). You probably already score above 90% against these bots. If I were you, I would choose a test bed from the top 30 or even the top 10. But after all, the only real test is the rumble; maybe there is a bunch of bots against which you are underperforming, and none of them are in your test bed.

    Otherwise I do something similar, but my bot is not that high, so my test bed shows relevant scores. Though sometimes it is somewhat off. I also notice that the score in the rumble always slides down until it settles. I am not sure why; maybe some bots which save stats keep improving with each round for a while.

    But lately I notice that in the melee rumble the slide-down is somewhat catastrophic. When I introduced EvBot v9.2 it was in the top 20 for the first 300 pairings or so, and then plunged about 20 extra places down. I have seen this with several recent releases and still cannot understand why.

      Beaming (talk)03:35, 27 September 2017

      IIRC, with past versions, the improvement over the previous version was a fairly good indicator of the final result, e.g. a 0.5 increase in APS over common pairings (say, 300 common opponents) indicated a 0.5 increase in final APS.

      IMO the APS before full pairing is meaningless, but the difference in common APS is useful.

      However, this version breaks the previous pattern: the difference in common APS is no longer an indicator, nor is the full-pairing APS.

      The reason why I test against relatively “weak” bots is that the majority of the rumble is there, and what affects your score the most is also there. More than half of the bots in the rumble are between APS 40 and 70; there are only 160 bots above 70 and 324 bots below 40. Bots below APS 40 can be ignored IMO, as the improvement against them can only be marginal.

        Xor (talk)04:34, 27 September 2017

        Until the pairing is complete, APS is not a good indicator. I always go to the details of my bot and then select an older version to compare with; in that case only the bots that both versions have fought are taken into account. It does seem that the last 10% of the pairings involve the best opponents: GrubbmThree held around 58 APS until approximately 1000 pairings, then fell to 57.2. Note that even with 3000-5000 battles, there are still a lot of bots you have only fought once, so a few bad battles do have influence.

        As for the test bed, I used to have around 20 bots in mine (50 seasons): 5 top-50 bots, 5 'white whales', 5 between places 100-300, and a few specific ones to check whether something was broken (e.g. bbo.RamboT must score less than 0.5%).

          GrubbmGait (talk)12:07, 27 September 2017

          Thanks for figuring that out! I thought the rumble stabilized very fast (that the common-APS difference after ~300 pairings was already useful), but it turned out not to be the case.

          Maybe I should build a test bed with more variety, e.g. bots from all over the rumble with different kinds of strategies.

            Xor (talk)14:19, 27 September 2017

            I found that when a newer version is close to the previous one, comparing it with several different older versions can help. Or, best of all, compare with some baseline version that has enough battles and a stable score.

              Xor (talk)15:40, 27 September 2017

              It depends on what I am working on.

              For movement, often a single bot is enough to prove a theory. Escape-angle tuning is a rambot plus DevilFish, surfing mechanics is DoctorBob, anti-GF is RaikoMicro, anti-fast-learning is Ascendant, and for general unpredictability Shadow or Diamond.

              Targeting I always find less interesting. Maybe because it is a more pure ML problem, with fewer ways to optimise that haven't already been studied in a related field. I decided to brute-force it by adding lots of features and then using a genetic optimization to tune the weights against recordings of the entire rumble population, about 5000 battles. The surfers I did separately, but with the same process.
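              As a rough illustration of that kind of weight tuning, here is a minimal sketch assuming recorded (features, guess factor) samples. All class and method names are hypothetical, and a simple (1+1) evolution strategy stands in for a full genetic algorithm:

```java
import java.util.*;

// Hypothetical sketch: tune KNN attribute weights against recorded samples.
// Fitness is leave-one-out nearest-neighbour prediction accuracy.
public class WeightTuner {
    static final Random RNG = new Random(42);

    // Weighted Euclidean distance between two feature vectors.
    static double dist(double[] a, double[] b, double[] w) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = (a[i] - b[i]) * w[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }

    // Fraction of samples whose nearest recorded neighbour predicts a
    // guess factor within `tolerance` of the true one.
    static double fitness(double[][] feats, double[] gfs, double[] w, double tolerance) {
        int hits = 0;
        for (int i = 0; i < feats.length; i++) {
            int best = -1;
            double bestD = Double.MAX_VALUE;
            for (int j = 0; j < feats.length; j++) {
                if (j == i) continue; // leave-one-out
                double d = dist(feats[i], feats[j], w);
                if (d < bestD) { bestD = d; best = j; }
            }
            if (Math.abs(gfs[best] - gfs[i]) < tolerance) hits++;
        }
        return (double) hits / feats.length;
    }

    // (1+1)-ES: mutate one weight at a time, keep the candidate if no worse.
    public static double[] tune(double[][] feats, double[] gfs, int generations) {
        int dims = feats[0].length;
        double[] w = new double[dims];
        Arrays.fill(w, 1.0);
        double best = fitness(feats, gfs, w, 0.1);
        for (int g = 0; g < generations; g++) {
            double[] cand = w.clone();
            int k = RNG.nextInt(dims);
            cand[k] *= Math.exp(RNG.nextGaussian() * 0.3); // log-normal mutation
            double f = fitness(feats, gfs, cand, 0.1);
            if (f >= best) { best = f; w = cand; }
        }
        return w;
    }
}
```

              A real setup would evaluate a whole population of weight vectors per generation, but the evaluate-mutate-select loop is the same idea.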

                Skilgannon (talk)22:14, 27 September 2017

                Wow, thanks for sharing! In the past I only tuned the movement against RaikoMicro via RoboRunner and by carefully watching battles, and that way worked very well. Recently I tried some more brute-force approaches, but they don't seem to be working. Maybe for an undeveloped ML area, some idea or theory is more useful.

                About recordings of the entire population: I’m wondering whether it is useful to tune against wave surfers, which react to being fired at, when recordings make that reaction irrelevant?

                Or can we just treat wave surfers as some random movement that is not random enough? And with so many attributes, will their reaction to fire be inaccurate enough to be ignored, so that proper decay is enough?

                BTW, I’m really curious how long one generation takes ;) And how many threads you use to run it ;)

                  Xor (talk)00:40, 28 September 2017

                  Movement I find much more interesting - I think there is still a lot of unexplored potential there. Targeting can only get as good as the ML system, though. The only tricks I see on the targeting side involve bullet shielding and bullet-power optimization.

                  For surfers I evolved the weights in multiple steps: record data, tune weights, re-record data, re-tune weights, etc. I agree fixed data isn't ideal against learning movements, but it seemed to work OK.

                  By recorded battles, I actually just recorded the ML-style interactions. So the only work to do in the genetic algorithm was to parse an input line, add it to the tree, and, if it was a firing tick, do KNN + kernel density and N ticks later check whether the prediction was within the correct bounds.
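                  That replay loop might look roughly like the sketch below. The record format (features, visited guess factor, hit bounds, firing flag) is an assumption, and a flat list with a full sort stands in for the kd-tree:

```java
import java.util.*;

// Hypothetical sketch of replaying recorded gun interactions offline:
// add every sample to the store, and on firing ticks run KNN + Gaussian
// kernel density and score the predicted guess factor against the
// recorded hit bounds.
public class WaveReplay {
    static class Sample {
        double[] features;   // targeting attributes at this tick
        double gf;           // guess factor the enemy actually visited
        double lo, hi;       // GF bounds that would have meant a hit
        boolean firing;      // was this a real firing tick?
        Sample(double[] f, double gf, double lo, double hi, boolean firing) {
            this.features = f; this.gf = gf; this.lo = lo; this.hi = hi; this.firing = firing;
        }
    }

    // Squared weighted Euclidean distance, used only for ranking neighbours.
    static double dist(double[] a, double[] b, double[] w) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = (a[i] - b[i]) * w[i]; s += d * d; }
        return s;
    }

    // Returns the hit rate over recorded firing ticks for a weight vector.
    static double replay(List<Sample> log, double[] w, int k, double bandwidth) {
        List<Sample> store = new ArrayList<>();
        int fired = 0, hit = 0;
        for (Sample s : log) {
            if (s.firing && store.size() >= k) {
                // KNN: k nearest stored situations under the weighted distance.
                List<Sample> near = new ArrayList<>(store);
                near.sort(Comparator.comparingDouble((Sample t) -> dist(t.features, s.features, w)));
                near = near.subList(0, k);
                // Kernel density over neighbour guess factors; fire at the peak.
                double bestGf = 0, bestDensity = -1;
                for (double gf = -1; gf <= 1; gf += 0.01) {
                    double density = 0;
                    for (Sample n : near) {
                        double z = (gf - n.gf) / bandwidth;
                        density += Math.exp(-0.5 * z * z);
                    }
                    if (density > bestDensity) { bestDensity = density; bestGf = gf; }
                }
                fired++;
                if (bestGf >= s.lo && bestGf <= s.hi) hit++;
            }
            store.add(s); // learn from every wave, predict on firing waves only
        }
        return fired == 0 ? 0 : (double) hit / fired;
    }
}
```

                  A genetic optimizer would then call replay once per candidate weight vector and keep the best-scoring candidates.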

                  About 15 minutes per generation on an i5-2410M using 4 threads.

                    Skilgannon (talk)07:25, 28 September 2017

                    So recording only gun waves seems OK? And IMO the gun prediction of each wave can be evaluated immediately, since the result is already known. BTW, are you optimizing overall hit rate (e.g. total hits / total shots across all battles) or robocode score (e.g. average bullet damage per battle)? I think the latter should be better when bullet-power selection is also evaluated (or when it is not disabled). But since in real battles hits and misses also affect the total waves per round, that would be inaccurate for recorded battles. So how do you deal with bullet power? IMO using the recorded values sounds reasonable, although not perfect.

                    The difference between evaluating overall hit rate and average bullet damage per battle is interesting. It seems the latter weights hits by damage per bullet. Also, comparing average hit rate per battle with overall hit rate, the former weights battles equally while the latter weights them by bullets fired per battle.
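                    A tiny numeric illustration of that weighting difference, with made-up numbers:

```java
// Overall hit rate weights each battle by shots fired; averaging the
// per-battle hit rates weights every battle equally. Numbers are made up.
public class HitRateWeighting {
    static double overall(int[] hits, int[] shots) {
        int h = 0, s = 0;
        for (int i = 0; i < hits.length; i++) { h += hits[i]; s += shots[i]; }
        return (double) h / s;
    }
    static double perBattleAverage(int[] hits, int[] shots) {
        double sum = 0;
        for (int i = 0; i < hits.length; i++) sum += (double) hits[i] / shots[i];
        return sum / hits.length;
    }
    public static void main(String[] args) {
        // Battle 1: 10 of 100 shots hit; battle 2 (short): 4 of 10 shots hit.
        int[] hits = {10, 4}, shots = {100, 10};
        System.out.println(overall(hits, shots));          // 14/110 ≈ 0.127
        System.out.println(perBattleAverage(hits, shots)); // (0.10 + 0.40)/2 = 0.25
    }
}
```

                    The long battle dominates the overall figure, while the short battle counts just as much in the per-battle average.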

                      Xor (talk)08:57, 28 September 2017

                      I optimized for hit rate. Bullet power was kept the same as when it was recorded.

                      And I saved/loaded all waves (for learning), but only did prediction using firing waves.

                        Skilgannon (talk)10:17, 28 September 2017

                        Holy smoke! Using the whole rumble for tuning. It probably takes half a day to run one generation of a genetic algorithm.

                          Beaming (talk)01:33, 28 September 2017

                          5000 battles on the fly takes me ~4 hrs IIRC. But recorded battles should take less time IMO.

                            Xor (talk)02:18, 28 September 2017

                            What are recorded battles?

                              Beaming (talk)03:21, 28 September 2017

                              e.g. WaveSim by Voidious

                                Xor (talk)06:16, 28 September 2017