Poisoning Enemy Learning Systems

Fragment of a discussion from User talk:Skilgannon

I think Skilgannon is right that this concept is different from current Robocode strategies, which don't proactively paint an incorrect picture for the enemy. The only thing I can think of along these lines is oldwiki:AntiMirrorSystem, but even that isn't really poisoning enemy data, just exploiting a specific movement strategy.

Also, from what little I remember of SVMs, I think they may be much more susceptible to poisoning than KNN. An SVM fits a single separating hyperplane whose position is dictated by a handful of support vectors near the margin, so a poisoning method would presumably plant points that drag that plane as far as possible, changing classifications everywhere along it. KNN only consults the nearest recorded points for each query, so planted points can only corrupt predictions in their own neighborhood, which I think makes it much more resilient to this type of manipulation.
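
To illustrate that intuition, here is a minimal toy sketch (plain scikit-learn, not Robocode code) comparing how much a linear SVM's decision boundary moves versus how much 5-NN's predictions move when a few mislabeled "poison" points are planted near the boundary. The two synthetic clusters, the placement of the poison points, and all parameter choices are made up for illustration, not taken from any actual bot.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean training data: two well-separated 2-D clusters standing in for
# two classes of observations the enemy might be learning from.
X_clean = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
                     rng.normal(+2.0, 0.5, (100, 2))])
y_clean = np.array([0] * 100 + [1] * 100)

# Five "poison" points: class-1 labels planted in the gap, just on the
# class-0 side of where the clean boundary would fall.
X_poison = rng.normal(-0.5, 0.2, (5, 2))
y_poison = np.ones(5, dtype=int)

X_all = np.vstack([X_clean, X_poison])
y_all = np.concatenate([y_clean, y_poison])

# Dense grid of query points covering both clusters.
xx, yy = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))
grid = np.c_[xx.ravel(), yy.ravel()]

for name, model in [("linear SVM", SVC(kernel="linear", C=10.0)),
                    ("5-NN", KNeighborsClassifier(n_neighbors=5))]:
    before = model.fit(X_clean, y_clean).predict(grid)
    after = model.fit(X_all, y_all).predict(grid)
    flipped = np.mean(before != after)
    print(f"{name}: {flipped:.1%} of the grid changes label after 5 poison points")
```

In this toy setup the SVM's hyperplane shifts across a whole band of the space, so a noticeably larger fraction of grid points change label than for 5-NN, whose predictions only flip in the small region where the planted points dominate the neighbor vote.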

Voidious 02:51, 23 July 2012