Awesome entry
This new bot of yours really is awesome! It is beating the hell out of the top bots, even without BulletShielding.
Alas, I am not able to run any battles for it, as I am still on Java 8.
Alas, in version 0.11 some parts are still not Java 8 compatible: kc/mega/game/BattleField has been compiled with class file version 57.0 (Java 13).
Does not matter that much; I am just not able (currently) to run any battles for it. Same for Raven, as it has been compiled with class file version 55.0 (Java 11).
I've downloaded Java 13, and I can now run battles for BeepBoop. After rebuilding the robot database, Raven and WaveShark also run fine. Note that for my own development I will still use the compiler option '-source 1.8'.
Oh wow, missed this! Awesome work Kev, you have a history of popping up with surprise entries =)
I'd be curious to know more about the TensorFlow work you did to make the KNN features...
Thanks! I wrote a brief description under BeepBoop/Understanding_BeepBoop, but I'll release the code too once I get it cleaned up.
Aha, I missed the last section. Surprised there wasn't more to gain from some kind of deeper embedding model.
Me too, and I'll maybe revisit it at some point. Theoretically, a deeper embedding model could learn feature interactions like "wall-ahead is more important when velocity is 8 than when it is 0".
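For illustration, a minimal TensorFlow sketch of that idea; the feature count, layer sizes, and feature names here are invented, not BeepBoop's actual setup:

    import tensorflow as tf

    n_features = 8       # e.g. velocity, wall-ahead, bullet flight time, ...
    embedding_dim = 16   # KNN would search in this learned space

    embedder = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        # the hidden layer is where interactions like
        # "wall-ahead x velocity" could in principle be picked up
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(embedding_dim),
    ])
    # one would train this (e.g. with a triplet-style loss) so that nearby
    # embeddings correspond to similar firing angles, then hand the
    # embedded points to the KNN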
I’m surprised as well. Btw, how many layers are you using in the deeper model? And is it fully connected? I'd guess deeper models with explicit feature interactions might work better in the Robocode scenario, given the high noise. I would try things like Deep & Cross, DeepFM, etc.
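For reference, the core of a Deep & Cross network is just an explicit multiplicative interaction per layer; a minimal NumPy sketch (shapes and values invented):

    import numpy as np

    def cross_layer(x0, xl, w, b):
        # DCN-style cross: x_{l+1} = x0 * (xl . w) + b + xl, so each layer
        # adds one explicit degree of feature interaction on top of x0
        return x0 * (xl @ w) + b + xl

    x0 = np.random.rand(8)          # raw feature vector
    w, b = np.random.rand(8), np.random.rand(8)
    x1 = cross_layer(x0, x0, w, b)  # first-order interactions
    x2 = cross_layer(x0, x1, w, b)  # second-order (normally fresh w, b per layer)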
It's possible that the KNN already takes that into account sufficiently. Maybe if you bump the cluster size up a lot, and change the kernel width for cluster weighting, it might force this part of the learning into the NN instead?
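To make those knobs concrete, here is a rough NumPy sketch of kernel-weighted KNN (not BeepBoop's implementation); k is the cluster size and width the kernel width mentioned above:

    import numpy as np

    def kernel_weighted_knn(query, points, values, k=100, width=0.1):
        # bigger k plus a wider kernel smooths the estimate, leaving the
        # sharper distinctions to be learned by the embedding NN instead
        d = np.linalg.norm(points - query, axis=1)
        idx = np.argsort(d)[:k]
        w = np.exp(-(d[idx] / width) ** 2)   # Gaussian kernel weights
        return np.average(values[idx], axis=0, weights=w)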
It feels like a cascade model (widely used in ads & recommendation): you put a deep model on top of a simple & very fast model, with the input of the former being the output of the latter. This architecture cuts computation by orders of magnitude, but it also restricts the power of the deep model. A best practice then is to fit the simple model to the output of the deep one, retrain the deep model on top of the new input, and repeat...
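As a rough illustration of that loop, with scikit-learn stand-ins and invented data (not a tuned recipe):

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))    # invented features
    y = rng.normal(size=500)         # invented target, e.g. a firing angle

    cheap = KNeighborsRegressor(n_neighbors=25).fit(X, y)
    deep = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

    for _ in range(3):
        z = cheap.predict(X).reshape(-1, 1)          # fast model's output...
        deep.fit(np.hstack([X, z]), y)               # ...feeds the deep model
        # distill: re-fit the fast model to imitate the deep one, then repeat
        cheap.fit(X, deep.predict(np.hstack([X, z])))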