Hard-coded segmentation

I don't know if that's been proved either. However, just the way that stats are gathered, combined with Robocode physics, is going to make some weights more relevant than others.

Of course, I can choose two different bots that I know will be optimal with different kNN weights. Think of a surfer vs a random-movement bot and how the weight of the rolling average differs between them, or a bot that bounces off walls vs one that smooths against them, and the effect of a wall-distance attribute.

The best approach would probably be to have multiple weighting sets, with the gun choosing the one with the highest hit rate, or possibly with some classifier.
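
As a rough illustration, here is a minimal Java sketch of that selection idea: track the (real or virtual) hit rate of each candidate weighting and aim with whichever is doing best. All names here are hypothetical, not taken from any existing bot.

```java
// Hypothetical sketch: pick whichever kNN weighting has the best observed hit rate.
public class WeightSetSelector {
    private final double[][] weightSets; // candidate attribute weightings
    private final int[] hits;
    private final int[] shots;

    public WeightSetSelector(double[][] weightSets) {
        this.weightSets = weightSets;
        this.hits = new int[weightSets.length];
        this.shots = new int[weightSets.length];
    }

    // Called when a (virtual) wave breaks: did the aim produced by set i hit?
    public void recordResult(int setIndex, boolean hit) {
        shots[setIndex]++;
        if (hit) {
            hits[setIndex]++;
        }
    }

    // The weighting with the highest hit rate so far.
    public double[] bestWeights() {
        int best = 0;
        double bestRate = -1;
        for (int i = 0; i < weightSets.length; i++) {
            double rate = shots[i] == 0 ? 0 : (double) hits[i] / shots[i];
            if (rate > bestRate) {
                bestRate = rate;
                best = i;
            }
        }
        return weightSets[best];
    }
}
```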

Skilgannon (talk) 19:33, 6 December 2013

You do not have permission to edit this page, for the following reasons:

  • The action you have requested is limited to users in the group: Users.
  • You must confirm your email address before editing pages. Please set and validate your email address through your user preferences.

You can view and copy the source of this page.

Return to Thread:User talk:MN/Hard-coded segmentation/reply (4).

 

I don't know that a surfer vs random movement would have different optimal weights. :-) But yes, SpinBot and DrussGT have pretty different movement profiles. I don't doubt they have different optimal weights. But I still might be wrong, and that's why we have science. :-)

Choosing among multiple weightings would mean a performance hit similar to adding Virtual Guns. So it's important to figure out how much of an impact sub-optimal weightings have in the first place. I've always thought we were not very scientifically rigorous around here with some of this kind of stuff. It might be fun to try to do some rigorous testing. Actually, that's one of the first ideas in a while that gets me excited about doing some more actual Robocoding.

Voidious (talk) 19:50, 6 December 2013

Adding virtual guns is the easiest way I see to select between multiple weight sets in guns. Also the slowest.

In movement, it is trickier. Maybe two weight sets, one for wave surfing and one for the flattener.

When tuning, use a different population for each set of weights.

MN (talk) 03:23, 7 December 2013

I have actually been working for a while to find a set of gun weights that is highly efficient against both surfers and random movers, and likewise a single set for movement that is good against Statistical, kNN and Pattern Matching targeting. In my most recent bots I eschew the use of Flatteners and Virtual Guns. Obviously I did okay coming in at number 7 in the rumble, but I feel there is still room for improvement.

I really do think there are different optimal weights for different enemies, but there might be a set that does well against all of them (if not optimally).

Chase 07:56, 7 December 2013

A single set that does OK against everyone is what genetic tuning against the whole population at once already calculates.

MN (talk) 21:40, 7 December 2013

Well, that brings up the question of what exactly one means by "highly efficient against both", or to put it another way, how one scores the individuals in the genetic tuning.

The most common way to interpret it may be optimizing for APS, but that will tend to give very little weight to being "highly efficient against surfers". Another way to interpret it is optimizing the win percentage. In my mind, the phrase "highly efficient against both" would imply optimizing with a scoring system that divides the population into "surfers" and "non-surfers" and gives roughly equal weight to the two groups, but the phrase is a bit vague.
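
To make the scoring concrete, here is a hedged Java sketch of the "roughly equal weight" interpretation: average the per-battle score separately over surfers and non-surfers, then combine the two group means 50/50. The class and field names are made up for illustration.

```java
import java.util.List;

public class EqualWeightFitness {
    // One battle's outcome; opponentIsSurfer is whatever surfer/non-surfer
    // labelling the tuner happens to use.
    public static class BattleResult {
        final boolean opponentIsSurfer;
        final double aps; // average percentage score from that battle

        public BattleResult(boolean opponentIsSurfer, double aps) {
            this.opponentIsSurfer = opponentIsSurfer;
            this.aps = aps;
        }
    }

    // Fitness = 0.5 * mean APS vs surfers + 0.5 * mean APS vs everyone else.
    public static double fitness(List<BattleResult> results) {
        double surferSum = 0, otherSum = 0;
        int surferCount = 0, otherCount = 0;
        for (BattleResult r : results) {
            if (r.opponentIsSurfer) { surferSum += r.aps; surferCount++; }
            else                    { otherSum += r.aps; otherCount++; }
        }
        double surferMean = surferCount == 0 ? 0 : surferSum / surferCount;
        double otherMean  = otherCount == 0 ? 0 : otherSum / otherCount;
        return 0.5 * surferMean + 0.5 * otherMean;
    }
}
```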

Rednaxela (talk) 22:00, 7 December 2013

Giving equal weight to surfers and non-surfers will hurt APS ranking. But may increase PL ranking.

MN (talk) 20:57, 8 December 2013

If multi-objective optimization is being done, then surfers and non-surfers can be separated into distinct objectives. The output will be many different sets of weights with different compromises between surfers and non-surfers, although all of them will be Pareto efficient.
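
For illustration, a minimal Java sketch of the Pareto-dominance test behind that, with the two objectives being score vs surfers and score vs non-surfers (all names are hypothetical):

```java
public class ParetoCheck {
    // A candidate weight set scored separately against the two groups.
    public static class Candidate {
        final double vsSurfers;
        final double vsNonSurfers;

        public Candidate(double vsSurfers, double vsNonSurfers) {
            this.vsSurfers = vsSurfers;
            this.vsNonSurfers = vsNonSurfers;
        }
    }

    // a dominates b if it is at least as good in both objectives
    // and strictly better in at least one; the Pareto front is the
    // set of candidates that no other candidate dominates.
    public static boolean dominates(Candidate a, Candidate b) {
        boolean atLeastAsGood = a.vsSurfers >= b.vsSurfers
                             && a.vsNonSurfers >= b.vsNonSurfers;
        boolean strictlyBetter = a.vsSurfers > b.vsSurfers
                              || a.vsNonSurfers > b.vsNonSurfers;
        return atLeastAsGood && strictlyBetter;
    }
}
```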

MN (talk) 21:01, 8 December 2013

Many people seem to be talking about adding different sets of weights in the form of a set of virtual guns, each with a separate kD-Tree. It seems you could change the weights dynamically if, instead of weighting your predictors when putting them into the kD-Tree, you included the weights in the distance function. I don't know if this would reduce the efficiency of the tree, but it's probably faster than running one kD-Tree for each set of weights. It would also allow you to change your weights constantly. This might be useful when surfing: if you notice that the opponent is using a high data decay rate (hitting you a lot), you could slow down your own data decay rate to make yourself less vulnerable to their anti-surfer targeting.
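
A minimal sketch of that idea in Java, assuming a generic weighted squared Euclidean distance rather than any particular kD-Tree implementation's API: store raw attribute values and apply the (swappable) weights only when measuring distance.

```java
// Sketch: weights live in the distance function, not in the stored points,
// so they can be changed at runtime without rebuilding the tree.
public class WeightedDistance {
    private double[] weights; // can be swapped at any time

    public WeightedDistance(double[] initialWeights) {
        this.weights = initialWeights;
    }

    public void setWeights(double[] newWeights) {
        this.weights = newWeights; // e.g. slow your decay if the enemy decays fast
    }

    // Weighted squared Euclidean distance between two raw attribute vectors.
    public double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += weights[i] * d * d;
        }
        return sum;
    }
}
```

The design point is simply that the stored points stay unweighted, so calling setWeights() never requires rebuilding anything.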

Straw (talk) 02:28, 9 December 2013

This has been done a few times in the past. Various kD-Tree implementations have support for dynamically setting weightings for the distance function.

IIRC Diamond is one example of a bot that makes use of this feature of its kD-Tree implementation, though I think it still uses a pre-defined list rather than dynamic tweaking.

IIRC there have been experiments in dynamic weight tweaking, but I'm not sure offhand how successful they've been.

Rednaxela (talk) 05:15, 9 December 2013

Yeah, I just modified Skilgannon's kD-Tree to support changing weights, though I am not using the feature yet. It also seems like you could speed up having two virtual guns (anti-surfer, anti-random) by using just one tree with a different weighting for each. Here's something to think about: if a flattening system uses the same data decay as the targeting system trying to hit it, it should get a very good dodging rate. So if you can figure out how much weight they are giving recent results and adapt your own weights to match, you should dodge almost everything.
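
As a self-contained illustration of the one-tree-two-weightings idea (using a plain list instead of a real kD-Tree, with made-up names), the anti-surfer and anti-random guns can share the same stored scans and simply rank neighbours with different weight vectors:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SharedDataTwoGuns {
    private final List<double[]> scans = new ArrayList<>();

    public void addScan(double[] attributes) {
        scans.add(attributes);
    }

    // k nearest scans under a gun-specific weighting, e.g. one weight vector
    // for the anti-surfer gun and another for the anti-random gun.
    public List<double[]> nearest(double[] query, double[] weights, int k) {
        List<double[]> sorted = new ArrayList<>(scans);
        sorted.sort(Comparator.comparingDouble(s -> distanceSq(s, query, weights)));
        return sorted.subList(0, Math.min(k, sorted.size()));
    }

    private static double distanceSq(double[] a, double[] b, double[] w) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += w[i] * d * d;
        }
        return sum;
    }
}
```

A real implementation would presumably hand the two weight vectors to the kD-Tree's weighted distance function instead of sorting the whole list, but the data sharing is the same.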

Straw (talk) 07:53, 9 December 2013

Data decay (time classification) is one of many classifications in k-NN search. You need to mirror all of them to achieve perfect dodging.

You need a lot more data than is available to estimate all weights, unless you want to get shot a lot.

MN (talk) 15:14, 9 December 2013

I believe multiple people have noted that, in a gun, the weight on new vs old data is significantly more important than the weights on other predictors. I believe Skilgannon said something along the lines of "I can change the weights tenfold (except the one on shots fired) and get very little difference in performance." So if you could match that weight in your flattener, you could (hopefully) keep hit rates against you very low.

Straw (talk) 07:17, 10 December 2013