BeepBoop seems to be the new king

Congratulations! BeepBoop is at the very top.

Would you mind hinting at the new top-of-the-line research direction?

    Beaming (talk) 01:22, 26 December 2022

    Congratulations (again) from me too ;) BeepBoop has had very surprising results since 1.2 (nearly 95!!!). And yet nothing worked when I tried to use gradient descent to train models. Would you mind sharing a little more about this part? E.g. initialization, learning rate, and how to prevent getting a zero or negative exponent in the x^a formula…

      Xor (talk) 02:07, 26 December 2022

      I’ve been meaning to release the training code, but it’s currently a huge mess and I’m pretty busy! In the meantime, here are some details that might help:

      • I initialized the powers to 1, biases to 0, and multipliers to a simple hand-made KNN formula.
      • I constrained the powers to be positive, so I guess the formula should really be written as w(x+b)^abs(a).
      • I used Adam with a learning rate of 1e-3 for optimization.
      • Changing the KNN formula of course changes the nearest neighbors, so I alternated between training for a couple thousand steps and rebuilding the tree with fresh examples.
      • For simplicity/efficiency, I used binning to build a histogram over GFs for an observation. Simply normalizing the histogram so it sums to 1 to get an output distribution doesn’t work that well (for one thing, it can produce very low probabilities if the kernel width is small). Instead, I used the output distribution softmax(t * log(histogram + abs(b))), where t and b are learned parameters initialized to 1 and 1e-4 (see the sketch below).
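
      To make the last two points concrete, here is a rough Java sketch of the kernel term and the output-distribution step. The names are illustrative only, not the actual BeepBoop training code (which is the mess I still need to clean up):

        // One term of the learned KNN distance formula: w * (x + b)^|a|.
        // abs(a) keeps the power positive, as mentioned above.
        static double featureTerm(double x, double w, double a, double b) {
            return w * Math.pow(x + b, Math.abs(a));
        }

        // Turn a GF histogram into an output distribution:
        // softmax(t * log(histogram + |b|)). The |b| floor keeps empty bins
        // from producing -infinity and sets a learnable minimum probability.
        static double[] outputDistribution(double[] histogram, double t, double b) {
            double[] logits = new double[histogram.length];
            for (int i = 0; i < histogram.length; i++) {
                logits[i] = t * Math.log(histogram[i] + Math.abs(b));
            }
            // numerically stable softmax: subtract the max before exponentiating
            double max = Double.NEGATIVE_INFINITY;
            for (double logit : logits) max = Math.max(max, logit);
            double sum = 0;
            double[] out = new double[logits.length];
            for (int i = 0; i < logits.length; i++) {
                out[i] = Math.exp(logits[i] - max);
                sum += out[i];
            }
            for (int i = 0; i < out.length; i++) out[i] /= sum;
            return out;
        }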
        --Kev (talk) 16:10, 3 January 2023

        Thanks for the detailed explanation! It is not easy to get so many details right, which explains how mighty BeepBoop is, not to mention the innovations.

          Xor (talk) 04:57, 4 January 2023

          Thanks! My guess for the next innovation that could improve bots is active bullet shadowing. Instead of always shooting at the angle your aiming model gives as most likely to hit, it is probably better to sometimes shoot at an angle that is less likely to hit if it creates helpful bullet shadows for you. This idea would especially help against strong surfers whose movements have really flat profiles (so there isn’t much benefit from aiming precisely). I never got around to implementing it, so it remains to be seen if it actually is useful!
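
          If I were to sketch it (completely hypothetical, since I never implemented it), the firing decision inside an AdvancedRobot might look something like this. Here candidateAngles, hitProbability, shadowValue, and shadowWeight are all made-up names: hitProbability would come from the aiming model, and shadowValue would need a real bullet-shadow model behind it:

            // Hypothetical active bullet shadowing: instead of firing at the
            // argmax of hit probability, score each candidate angle by hit
            // probability plus the defensive value of the bullet shadow the
            // shot would cast over incoming enemy waves.
            // (candidateAngles, hitProbability, shadowValue, shadowWeight
            // are placeholders, not real BeepBoop code.)
            double bestAngle = 0, bestScore = Double.NEGATIVE_INFINITY;
            for (double angle : candidateAngles) {
                double score = hitProbability(angle)
                             + shadowWeight * shadowValue(angle);
                if (score > bestScore) {
                    bestScore = score;
                    bestAngle = angle;
                }
            }
            setTurnGunRightRadians(
                robocode.util.Utils.normalRelativeAngle(bestAngle - getGunHeadingRadians()));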

            --Kev (talk) 16:02, 3 January 2023

            Thanks for the insights and ideas. Bullet shielding tempted me a while ago. I thought that if one can intercept a bullet wave close to the launch point, the bullet shadow will be big enough to slide into. But that requires good knowledge of when a bullet will be fired. I guess it can be done similarly to how it’s done in DrussGT, which has a tree to predict the opponent’s bullets, segmented on energy and distance. (At least I remember reading on the wiki about someone predicting enemy waves this way.) But my attempts to do it were not very successful.
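
            The detection part at least is standard: a fired bullet costs the shooter energy equal to its power, which is always in [0.1, 3.0]. A minimal sketch of what I tried, ignoring energy changes from wall hits and our own bullet damage:

              // Minimal enemy-fire detection inside an AdvancedRobot:
              // firing costs the shooter energy equal to the bullet power.
              // This naive version misfires on wall hits and our own bullet damage.
              double lastEnemyEnergy = 100;

              public void onScannedRobot(ScannedRobotEvent e) {
                  double drop = lastEnemyEnergy - e.getEnergy();
                  if (drop >= 0.1 && drop <= 3.0) {
                      double bulletSpeed = 20 - 3 * drop; // Robocode bullet physics
                      // start an enemy wave here; predicting *when* the shot comes
                      // (e.g. a tree segmented on energy and distance, as in
                      // DrussGT) would sit on top of this
                  }
                  lastEnemyEnergy = e.getEnergy();
              }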

            By the way, could you repackage your bot with an older Java version? I am running the rumble, but it fails on your bot, complaining:

            Can't load 'kc.mega.BeepBoop 1.21' because it is an invalid robot or team.
            

            I think the current agreement is that Java JDK 11 or lower is accepted. If you look at the rumble stats, you will see that your bot has fewer battles than many others.
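
            (If it helps, compiling with the --release flag should produce class files that load on JDK 11; the classpath and source layout here are just my guess:)

              javac --release 11 -cp robocode.jar kc/mega/*.java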

              Beaming (talk) 04:20, 4 January 2023