User talk:MN

From Robowiki

Contents

Hard-coded segmentation (18 replies, last modified 06:17, 10 December 2013)
Custom client bug (4 replies, last modified 22:54, 23 June 2013)

Hard-coded segmentation

I know what you mean about those hard-coded, segmented, bin-based statistical algorithms. I never liked them, and I went to great lengths to try to make them easier to work with.

Now we just need to find a way to get rid of the hard-coding in the kNN algorithms, which exists in the form of carefully tuned weights and states.

Chase18:02, 6 December 2013

I suspect we'd need much longer battles for dynamic weighting to ever out-perform hand-tuned weights. I'm not even sure enemy behavior has a non-negligible impact on the optimal weights, as we generally assume. Has anyone proven a certain set of weights to be optimal against one bot, and a different set optimal against another, beyond margin of error, in gun or movement?

Voidious (talk)18:20, 6 December 2013

I don't know if that's been proven either. However, the way stats are gathered, combined with Robocode physics, is going to make some weights more relevant than others.

Of course, I can choose two different bots that I know will have different optimal kNN weights. Think of a surfer vs. a random-movement bot, and the weight of the rolling average between them. Or the effect of a wall-distance attribute against a bot that bounces off walls vs. one that smooths against walls.

The best approach would probably be to have multiple weighting sets and have the gun choose the one with the highest hit rate, or possibly use some kind of classifier.
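
For illustration only, a minimal sketch of that kind of hit-rate-based selection. The class and field names here are hypothetical and not taken from any actual bot:

    import java.util.List;

    // Hypothetical holder for one candidate set of kNN attribute weights,
    // plus virtual-gun-style statistics on how well it has predicted so far.
    class WeightSet {
        final double[] weights;
        int shots;
        int hits;

        WeightSet(double[] weights) {
            this.weights = weights;
        }

        double hitRate() {
            // Smoothed so an untested set is not ruled out immediately.
            return (hits + 1.0) / (shots + 2.0);
        }
    }

    class WeightSetSelector {
        // Pick whichever weight set currently has the best (smoothed) hit rate.
        static WeightSet best(List<WeightSet> candidates) {
            WeightSet best = candidates.get(0);
            for (WeightSet ws : candidates) {
                if (ws.hitRate() > best.hitRate()) {
                    best = ws;
                }
            }
            return best;
        }
    }

The "some kind of classifier" option would replace best() with something that looks at the enemy's observed behaviour rather than just the raw hit rate.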

Skilgannon (talk)18:33, 6 December 2013

Hah, you just beat me to posting. I was about to say something similar: due to the limited data within a single battle, it would probably be best to select the appropriate weighting via a heuristic rather than determining it "from scratch".

Rednaxela (talk)18:38, 6 December 2013
 

I don't know that a surfer vs random movement would have different optimal weights. :-) But yes, SpinBot and DrussGT have pretty different movement profiles. I don't doubt they have different optimal weights. But I still might be wrong, and that's why we have science. :-)

Choosing among multiple weightings would mean a performance hit similar to adding Virtual Guns. So it's important to figure out how much of an impact sub-optimal weightings have in the first place. I've always thought we were not very scientifically rigorous around here with some of this kind of stuff. It might be fun to try to do some rigorous testing. Actually, that's one of the first ideas in a while that gets me excited about doing some more actual Robocoding.

Voidious (talk)18:50, 6 December 2013

Adding virtual guns is the easiest way I see to select between multiple weight sets in guns. Also the slowest.

In movement, it is trickier. Maybe two weight sets, one for wave surfing and one for the flattener.

When tuning, use a different population for each set of weights.

MN (talk)02:23, 7 December 2013
 

I have actually been working for a while to find a single set of gun weights that is highly efficient against both surfers and random movers, and likewise a single set of movement weights that is good against Statistical, kNN, and Pattern Matching targeting. In my most recent bots I eschew the use of Flatteners and Virtual Guns. Obviously I did okay coming in at number 7 in the rumble, but I feel there is still room for improvement.

I really do think there are different optimal weights for different enemies, but there might be a set that does well against all of them (if not optimally).

Chase06:56, 7 December 2013

A single set that does ok against everyone is what is already calculated with genetic tuning against the whole population at once.
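
Roughly, the fitness function in that kind of tuning is just the average result over the whole test population. In this sketch the BattleRunner interface is a stand-in for whatever benchmarking harness is actually used; none of these names come from a real tool:

    import java.util.List;

    // Stand-in for a benchmarking harness that runs battles with a given
    // set of kNN weights and reports a score against one opponent.
    interface BattleRunner {
        double score(double[] weights, String opponent);
    }

    class PopulationFitness {
        // Fitness of one candidate weight vector: average score across every
        // opponent in the test population, which is why the result is a single
        // set that does ok against everyone rather than optimal against anyone.
        static double fitness(double[] weights, List<String> population, BattleRunner runner) {
            double total = 0;
            for (String opponent : population) {
                total += runner.score(weights, opponent);
            }
            return total / population.size();
        }
    }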

MN (talk)20:40, 7 December 2013
 

Many people seem to be talking about adding different sets of weights in the form of a set of virtual guns, each with a separate kD-Tree. It seems you could change the weights dynamically if you didn't weight your predictors when putting them into the kD-Tree, but instead included the weights in the distance function. I don't know if this would reduce the efficiency of the tree, but it's probably faster than running one kD-Tree for each set of weights. It would also allow you to change your weights constantly. This might be useful when surfing: if you notice that the opponent is using a high data decay rate (hitting you a lot), you could slow down your own data decay rate to make yourself less vulnerable to their anti-surfer targeting.
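
As a rough sketch of that idea (hypothetical code, not from any particular tree implementation): store the raw attribute values in the tree and apply the weights only inside the distance function, so they can be changed between searches without rebuilding the tree.

    // Hypothetical weighted squared-Euclidean distance applied at query time.
    // Because the stored points are unscaled, the weights array can be swapped
    // or tweaked between searches without touching the tree itself.
    class WeightedDistance {
        double[] weights; // tunable at runtime

        WeightedDistance(double[] weights) {
            this.weights = weights;
        }

        double sqDistance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double d = (a[i] - b[i]) * weights[i];
                sum += d * d;
            }
            return sum;
        }
    }

The usual catch is that any branch pruning the tree does against axis-aligned bounds has to use the same weighted metric, otherwise the search can return wrong neighbours; that is where the (normally small) efficiency cost comes from.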

Straw (talk)01:28, 9 December 2013

This has been done a few times in the past. Various kD-Tree implementations have support for dynamically setting weightings for the distance function.

IIRC Diamond is one example of a bot that makes use of this feature of the kD-Tree implementation, though I think it does still use a pre-defined list rather than dynamic tweaking.

IIRC there have been experiments in dynamic weight tweaking, but I'm not sure offhand how successful they've been.

Rednaxela (talk)04:15, 9 December 2013
 
 
 

I was referring to hard-coded bins, not hard-coded weights, although Combat also uses runtime std. dev.-based weights.

Tuning against a known population is clearly stronger than runtime estimation of std. dev. It would only be weaker if the population were constantly changing.

But having different sets of weights for different opponents seems to have potential. Forward speed is more important than wall distance against rammers; the opposite is true for non-rammers. But selecting which set is optimal against each opponent is tricky without hard-coding opponents' names.

MN (talk)18:37, 6 December 2013

What type of distinction between bins and weights do you mean? They can have very similar effects, especially when using an interpolated VCS method. I tend to look at traditional VCS bins and kNN search with weightings as having a similar effect in the end, with the primary difference being that one has what is effectively a Math.round() call in the transfer function of each dimension.
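
A toy illustration of that "rounding in the transfer function" view; the attribute range and bin count here are made up for the example:

    // Toy comparison: a VCS segmentation maps an attribute to a discrete bin
    // index, while a weighted kNN search keeps the same scaled value continuous.
    class TransferFunctions {
        // e.g. lateral velocity in [-8, 8] split into 9 bins
        static int vcsBin(double value, double min, double max, int bins) {
            double scaled = (value - min) / (max - min) * (bins - 1);
            return (int) Math.round(scaled); // the quantization step
        }

        // The same attribute as a coordinate in a weighted kNN distance:
        // identical scaling, just without the rounding.
        static double knnCoordinate(double value, double min, double max, double weight) {
            return (value - min) / (max - min) * weight;
        }
    }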

Rednaxela (talk)18:44, 6 December 2013

I posted that a long time ago. Only long after that did I learn about the close relationship between bins, k-NN, and kernel density.

MN (talk)18:49, 6 December 2013
 
 
 
 

Custom client bug

Just to let you know, your client is uploading bots with codesize 250 (UnterBot, Spiral, xbots) to the nanorumble, when only 249 and below should be there.

Skilgannon (talk)18:50, 22 June 2013

To make sure:

- nanobots are 0 to 249.

- microbots are 250 to 749.

- minibots are 750 to 1499.

- megabots are 1500+.

MN (talk)18:11, 23 June 2013

Yes.

Skilgannon (talk)18:20, 23 June 2013
 

Technically speaking, the roborumble weight classes have no minimum thresholds, only maximum thresholds. Thus, each rumble contains bots not only under its own weight limit, but also bots under the weight limits of other rumbles with a smaller maximum threshold. For example, just because a bot with a codesize of 249 bytes is a nanobot, it is no less a microbot than a bot with a codesize of 749 bytes. This is how the weight classes are structured: nano < 250 < micro < 750 < mini < 1500 < mega < positive infinity. Note that we use < and not <=, which is why we don't allow bots with a codesize of 250 bytes into the nanorumble.
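
A literal rendering of those thresholds, as illustrative code only (the class and method names are made up):

    import java.util.ArrayList;
    import java.util.List;

    // Which rumbles a bot is eligible for, given its codesize in bytes.
    // Strict '<' comparisons, so a 250-byte bot is a micro but not a nano.
    class WeightClasses {
        static List<String> eligibleRumbles(int codesize) {
            List<String> rumbles = new ArrayList<>();
            if (codesize < 250) rumbles.add("nanorumble");
            if (codesize < 750) rumbles.add("microrumble");
            if (codesize < 1500) rumbles.add("minirumble");
            rumbles.add("roborumble"); // no upper limit for megabots
            return rumbles;
        }
    }

With these thresholds, a 249-byte bot is eligible for all four rumbles, while a 250-byte bot drops only the nanorumble.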

Sheldor (talk)21:34, 23 June 2013
 

I know nanobots can also participate in bigger codesize classes. It was only the < instead of <= that was tricky.

MN (talk)22:54, 23 June 2013