Poisoning Enemy Learning Systems

Fragment of a discussion from User talk:Skilgannon

Basically, what they explain is how to use the minimum of 'false' information (in Robocode terms, intentionally skewing your movement profile, at the cost of an increased chance of getting hit) in order to maximize the chance that they will incorrectly classify future data (i.e., aim in the wrong place next time).

I agree anti-pattern matching and anti-GF have been effective at dodging their specific types of targeting; however, this is a different concept entirely. This is about intentionally behaving in a certain way so that they will think you will do the same thing next time, not behaving that way because you know exactly where they will shoot.

I would love to apply this somehow, because I think our learning guns are very susceptible to it. Sure, they all shoot differently at surfers, meaning you can't take one gun and dodge another, but when you consider a movement profile with obvious peaks, they all tend to shoot in the same way. Our wavesurfing flattens the profile, but all that does is bring us to the edge cases where every gun shoots differently. If instead we have peaks that are obvious, all of the guns will shoot in the same way, making it possible to better predict their bullets and thus dodge them more reliably.
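One way this bias could be wired into a surfer's danger function is a minimal sketch like the one below. It assumes a guess-factor framework; the names (BAIT_GF, baitAdjustedDanger) and the weighting scheme are my own illustration, not taken from any existing bot. The idea is to make one guess factor artificially attractive while no bullet is in the air, so the enemy's stats build an obvious peak there, then treat that same guess factor as maximally dangerous the moment a real wave is in flight.

<pre>
// Hypothetical sketch: poison the enemy's stats by deliberately over-visiting
// one "bait" guess factor, then dodge it once they have learned to aim there.
public class BaitProfile {

    // The guess factor (-1..1) we deliberately over-visit.
    static final double BAIT_GF = 0.7;
    static final double BAIT_WIDTH = 0.1;    // width of the artificial peak
    static final double BAIT_STRENGTH = 0.5; // how hard we pull toward / away from it

    /**
     * Combine the normal learned danger for a candidate guess factor with the bait bias.
     *
     * @param gf            candidate guess factor being scored for movement
     * @param learnedDanger danger estimate from the usual wave-surfing stats
     * @param bulletInAir   true if the enemy has actually fired and the wave is live
     */
    static double baitAdjustedDanger(double gf, double learnedDanger, boolean bulletInAir) {
        double closeness = Math.exp(-sq((gf - BAIT_GF) / BAIT_WIDTH));
        if (bulletInAir) {
            // The enemy's gun has (hopefully) learned the bait peak, so the bait GF
            // is where their bullet is most likely headed: add danger there.
            return learnedDanger + BAIT_STRENGTH * closeness;
        }
        // No live bullet: make the bait GF look attractive so we keep feeding
        // the enemy's learner visits at that guess factor.
        return learnedDanger - BAIT_STRENGTH * closeness;
    }

    static double sq(double x) { return x * x; }

    // Tiny demonstration: the same guess factor flips from preferred to avoided
    // depending on whether a bullet is in flight.
    public static void main(String[] args) {
        double flatDanger = 0.2; // pretend the learned profile is flat
        System.out.printf("bait GF, no bullet: %.3f%n",
                baitAdjustedDanger(BAIT_GF, flatDanger, false));
        System.out.printf("bait GF, bullet:    %.3f%n",
                baitAdjustedDanger(BAIT_GF, flatDanger, true));
        System.out.printf("other GF, bullet:   %.3f%n",
                baitAdjustedDanger(-0.5, flatDanger, true));
    }
}
</pre>

In a real surfer the "bait" reward would also have to be traded off against the genuine risk of sitting on a predictable peak, which is exactly the cost the paper frames as using the minimum of false information.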

Skilgannon 22:52, 22 July 2012