Awesome entry

This new bot of yours really is awesome! It is really beating the hell out of the top bots, even without BulletShielding.

Alas I am not able to run any battles for it, as I am still on Java 8.

GrubbmGait (talk) 15:48, 19 May 2021

Thanks! I will make it Java 8 compatible for the next release.

--Kev (talk) 02:17, 20 May 2021

Just wanted to add to this thread: this robot truly is a beast! Congratulations on 100% PWIN!

Slugzilla (talk) 06:20, 23 May 2021
 

Alas, in version 0.11 some parts are still not Java 8 compatible: kc/mega/game/Battlefield has been compiled by class file version 57.0 (Java 13).
It does not matter that much; I am just not able (currently) to run any battles for it. The same goes for Raven, as it has been compiled by version 55.0 (Java 11).

GrubbmGait (talk) 13:14, 5 June 2021

Oops, I will have another go at fixing it for my next release!

--Kev (talk) 08:58, 6 June 2021

I've downloaded Java 13, so I can now run battles for BeepBoop. After rebuilding the robot database, Raven and WaveShark also run fine. Note that for my own development I will still use the compiler option '-source 1.8'.

GrubbmGait (talk) 13:14, 7 June 2021

OK, great! I compiled 0.11 with --release 8, but I think it didn't work because there were some old .class files lying around that didn't get overwritten.

--Kev (talk) 00:49, 8 June 2021

Oh wow, missed this! Awesome work, Kev, you have a history of popping up with surprise entries =)

I'd be curious to know more about the TensorFlow work you did to make the KNN features...

Skilgannon (talk) 23:26, 13 June 2021

Thanks! I wrote a brief description under BeepBoop/Understanding_BeepBoop, but I'll release the code too once I get it cleaned up.

--Kev (talk) 01:13, 15 June 2021

Aha, I missed the last section. Surprised there wasn't more to gain from some kind of deeper embedding model.

Skilgannon (talk) 11:28, 15 June 2021

Me too, and I'll maybe revisit it at some point. Theoretically a deeper embedding model could learn feature interactions like "wall-ahead is more important when velocity is 8 than when it is 0".
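
Roughly the kind of interaction I mean, as a made-up sketch (the weights and feature names here are hypothetical, not what BeepBoop actually uses):

  // Per-feature weights can't express "wall-ahead matters more at speed 8 than at 0",
  // but even a tiny learned interaction term can.
  final class InteractionSketch {
      static final double BASE_WALL_WEIGHT = 1.0;      // hypothetical learned weight
      static final double VELOCITY_INTERACTION = 0.5;  // hypothetical learned interaction

      // Contribution of the wall-ahead feature, scaled by normalized velocity (0..1).
      static double wallAheadTerm(double wallAhead, double velocity) {
          return (BASE_WALL_WEIGHT + VELOCITY_INTERACTION * velocity) * wallAhead;
      }

      public static void main(String[] args) {
          System.out.println(wallAheadTerm(0.7, 0.0)); // stationary: smaller term
          System.out.println(wallAheadTerm(0.7, 1.0)); // full speed: larger term
      }
  }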

--Kev (talk) 21:33, 20 June 2021

I'm surprised as well. Btw, how many layers are you using in the deeper model? And is it fully connected? I guess some deeper models with explicit feature interactions may work better in the Robocode scenario, given the high noise. I would try things like Deep&Cross, DeepFM, etc.

Xor (talk) 07:31, 21 June 2021

I tried a few (pretty simple) variants:

  • Multiplying the features by a weight matrix (see the sketch below). One nice feature of this is that a diagonal matrix recovers standard feature weighting, so this model should be strictly better than per-feature weights.
  • A one-hidden-layer feedforward network.
  • Summing up the embeddings from the above two.

I totally agree that allowing multiplicative feature interactions as you suggest should work better though!
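
As a rough sketch of the first variant (dimensions and numbers are made up, just to show the idea rather than the actual code):

  // Embed the feature vector with a learned matrix before the KNN distance.
  // A purely diagonal matrix reduces to ordinary per-feature weighting.
  final class LinearEmbeddingSketch {
      final double[][] w; // embeddingDim x featureDim, assumed learned offline

      LinearEmbeddingSketch(double[][] w) { this.w = w; }

      double[] embed(double[] features) {
          double[] out = new double[w.length];
          for (int i = 0; i < w.length; i++) {
              double sum = 0;
              for (int j = 0; j < features.length; j++) {
                  sum += w[i][j] * features[j];
              }
              out[i] = sum;
          }
          return out; // KNN then uses Euclidean distance in this embedded space
      }

      public static void main(String[] args) {
          // A diagonal matrix == classic per-feature weights.
          double[][] diag = {{2.0, 0.0}, {0.0, 0.5}};
          double[] e = new LinearEmbeddingSketch(diag).embed(new double[]{0.3, 0.9});
          System.out.println(java.util.Arrays.toString(e)); // [0.6, 0.45]
      }
  }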

--Kev (talk) 20:38, 22 June 2021

One more detail: are you doing any encoding before inputting the features into the NN part? I remember Darkcanuck had some rather successful attempts with end-to-end NNs, binning features the old VCS way.

And since most features are essentially tabular, apart from NN approaches with explicit feature interactions, GBDT can work very well as a feature transformation & interaction tool. There are also approaches that use GBDT for clustering, by converting clustering into classifying “dense” vs. “sparse” regions of the space.
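
To be concrete about the binning idea, a hypothetical sketch (the bin count is arbitrary):

  // Turn a normalized [0, 1] feature into a one-hot "segment" vector, VCS-style,
  // before handing it to the NN.
  final class BinningSketch {
      static double[] oneHotBin(double feature01, int numBins) {
          double[] encoded = new double[numBins];
          int bin = Math.min((int) (feature01 * numBins), numBins - 1);
          encoded[bin] = 1.0;
          return encoded;
      }

      public static void main(String[] args) {
          // e.g. lateral velocity normalized to [0, 1], split into 8 segments
          System.out.println(java.util.Arrays.toString(oneHotBin(0.62, 8)));
      }
  }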

Xor (talk) 09:47, 23 June 2021

I'm using no special encoding, just normalizing the features so they are between 0 and 1. Decision-tree-like algorithms have been tried in Robocode before (e.g. Wiki_Targeting/Dynamic_Segmentation), but not in conjunction with clustering/KNN as far as I know.
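
For example, something along these lines (the features and scales here are just illustrative, not the exact ones BeepBoop uses):

  // Map raw values into [0, 1] before the KNN / embedding.
  final class NormalizeSketch {
      static double clamp01(double x) {
          return Math.max(0.0, Math.min(1.0, x));
      }

      public static void main(String[] args) {
          double velocity = 8.0;   // Robocode speeds are in [-8, 8]
          double distance = 412.0; // distance to the enemy, in pixels

          double velocityFeature = clamp01(Math.abs(velocity) / 8.0);
          double distanceFeature = clamp01(distance / 800.0); // assuming ~800px of useful range
          System.out.println(velocityFeature + " " + distanceFeature);
      }
  }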

--Kev (talk) 18:55, 23 June 2021

It's possible that the KNN already takes that into account sufficiently. Maybe if you bump the cluster size up a lot, and change the kernel width for cluster weighting, it might force this part of the learning into the NN instead?
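
Something like this kind of kernel weighting, for illustration (numbers are arbitrary); widening the kernel flattens the neighbour weights, so more of the discrimination has to come from the embedding:

  // Weight each of the k nearest neighbours by a Gaussian kernel of its distance.
  final class KernelWeightSketch {
      static double gaussianWeight(double distance, double kernelWidth) {
          double z = distance / kernelWidth;
          return Math.exp(-0.5 * z * z);
      }

      public static void main(String[] args) {
          double[] neighbourDistances = {0.1, 0.4, 0.9};
          for (double d : neighbourDistances) {
              System.out.printf("d=%.1f  narrow=%.3f  wide=%.3f%n",
                      d, gaussianWeight(d, 0.2), gaussianWeight(d, 1.0));
          }
      }
  }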

Skilgannon (talk) 15:06, 21 June 2021

It feels like a cascade model (widely used in ads & recommendation): put one deep model on top of a simple & very fast model, with the input of the former being the output of the latter. This architecture reduces computation by orders of magnitude, but it also restricts the power of the deep model. A best practice then is to fit the simple model to the output of the deep one, retrain the deep model on the new input, and repeat...

Xor (talk) 18:32, 21 June 2021
 

I tend to think it's right that the KNN would take such relationships between features into account in a sense, but as a statistical model what it cannot do is generalize, which increases the number of data points needed to effectively cover some areas of the input space. In many ways, for this sort of usage, I would conceptualize the potential advantage of a deep embedding not as learning the feature interactions themselves, so much as learning the generalized contour of when to de-weight features, as a noise filter of sorts.

This is a bit of a tangent, but thinking of it in terms of being like a noise filter, and also considering things like BeepBoop's velocity randomization, I also start to wonder if there could be some value in including not just the present feature values as inputs to deep embeddings, but several ticks' worth of feature history. Let the embedding learning have the potential to construct its own temporally filtered (or rate-of-change) features.
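
Something along these lines, just to illustrate (the window size and inputs are placeholders):

  import java.util.ArrayDeque;
  import java.util.Deque;

  // Keep a small rolling window of raw per-tick values and hand the whole window
  // to the embedding, instead of hand-crafting rate-of-change features.
  final class HistorySketch {
      static final int WINDOW = 5;
      private final Deque<double[]> recent = new ArrayDeque<>();

      void onTick(double velocity, double heading) {
          recent.addLast(new double[]{velocity, heading});
          if (recent.size() > WINDOW) {
              recent.removeFirst();
          }
      }

      // Flattened window, oldest tick first; this is what the embedding would see.
      double[] historyFeatures() {
          double[] out = new double[WINDOW * 2]; // unseen ticks stay zero
          int i = 0;
          for (double[] tick : recent) {
              out[i++] = tick[0];
              out[i++] = tick[1];
          }
          return out;
      }

      public static void main(String[] args) {
          HistorySketch h = new HistorySketch();
          for (int t = 0; t < 7; t++) {
              h.onTick(Math.min(8, t * 2), 90.0);
          }
          System.out.println(java.util.Arrays.toString(h.historyFeatures()));
      }
  }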

Rednaxela (talk) 19:07, 21 June 2021

Including several ticks of history seems like a nice way of removing the need for hand-crafted features like acceleration, time-since-velocity-change, distance-last-k-ticks, etc., and having the model learn them instead. Maybe a good model could even learn some PM-like behaviors.

Definitely a weakness of KNNs is generalization to new parts of the input space. I did think a bit about pre-training a model against a lot of bots and then quickly adapting it to the current opponent (maybe using meta-learning methods) so it would generalize better early in the match, before it gets lots of data. On the other hand, aiming models get a lot of data pretty quickly, so I'm not sure how much of an issue poor generalization really is.

--Kev (talk) 20:52, 22 June 2021

I would say it probably depends what you're targeting. When targeting a strong surfer, I would say there's potentially a lot of value in maximizing the utility of data learned since the surfer last got information from collisions, and so that's a scenario where generalizing seems potentially more important in my eyes.

(unless it's going 100% flattener, in which case I would say the value is adapting on time scales that are simply different from what it's flattening over, either learning faster than the flattener, or learning long-term "history repeating itself" trends/patterns that it loses sight of)

Rednaxela (talk) 22:05, 22 June 2021
 

Also, a deep enough model can learn how surfers (without flattening) "surf" hits, just like networks such as the "Deep Interest Network" used in CTR prediction learn how users' interests change over time. However, our current use of KNN allows nothing like this. Maybe some end-to-end approach exists for the Robocode scenario.

My past experiments with an end-to-end approach (a shallow NN doing online learning without pre-training) didn't yield anything interesting, though.

Xor (talk) 04:00, 23 June 2021

Yeah, I also tried a purely NN-based gun and didn't get great results. You can train an NN offline to really high accuracy against random-movement bots, but online it's much harder because the data isn't iid.

--Kev (talk) 18:57, 23 June 2021

Yeah, very cool to see! Congrats from me, too! And I'm enjoying reading about it.

Voidious (talk) 17:29, 18 June 2021

Hey Voidious, long time no see! Glad you enjoyed reading about it; I learned a lot from Diamond's code while developing BeepBoop.

--Kev (talk) 21:31, 20 June 2021