Poisoning Enemy Learning Systems

    Skilgannon, 17:21, 22 July 2012

    That's pretty interesting stuff, and not just in relation to Robocode.

    As for Robocode applications, poisoning the enemy's guns with data also carries the risk of not dodging bullets, since the data gathering and the classification are so intertwined. But it's the type of technique you'd only use against high-level opponents, like we do with flatteners, so it's already a situation where you can't dodge very accurately anyway.
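
    A minimal sketch of that kind of gating, where the risky counter-learning mode only turns on once the opponent's gun is demonstrably hitting us. All the names here are made up for illustration; only the Robocode API calls are real:

      import robocode.AdvancedRobot;
      import robocode.HitByBulletEvent;

      // Hypothetical sketch: enable flattening/poisoning only against
      // opponents whose guns have actually learned our movement.
      public class CounterLearningGate extends AdvancedRobot {
          private static final double FLATTENER_THRESHOLD = 0.11; // assumed cutoff
          private int enemyShots; // incremented elsewhere via energy-drop detection
          private int enemyHits;

          public void onHitByBullet(HitByBulletEvent e) {
              enemyHits++;
          }

          // Against weak guns, plain dodging wins; only counter-learn
          // once the enemy hit rate shows its gun has adapted to us.
          private boolean useCounterLearning() {
              return enemyShots > 20
                  && (double) enemyHits / enemyShots > FLATTENER_THRESHOLD;
          }
      }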

    But I wonder... One thing it mentions is that this is possible if you have access to the same data as the enemy. In Robocode, of course, we technically do. But if that were really true, we'd be able to emulate the enemy's gun stats, do perfect curve flattening, and never get hit. So I think it's probably closer to the truth that we don't have access to the same data as the enemy.
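
    For concreteness, here's a sketch of what emulating the enemy's gun stats could look like, assuming the enemy runs a plain, unsegmented GuessFactor gun. Everything below is hypothetical, and a real opponent's segmentation and weighting would differ, which is exactly why the emulation breaks down:

      // Keep the same histogram a simple GuessFactor gun would keep,
      // then dodge toward the bin it rates least likely.
      public class EmulatedEnemyGun {
          private static final int BINS = 31;
          private final int[] stats = new int[BINS];

          // Record the guess factor we were at when an enemy bullet wave
          // crossed us -- the same datum the enemy's gun just learned.
          public void onEnemyWaveBreak(double guessFactor) {
              stats[(int) Math.round((guessFactor + 1) / 2 * (BINS - 1))]++;
          }

          // The guess factor the emulated gun rates least likely, i.e.
          // where perfect curve flattening would send us.
          public double safestGuessFactor() {
              int best = 0;
              for (int i = 1; i < BINS; i++) {
                  if (stats[i] < stats[best]) {
                      best = i;
                  }
              }
              return 2.0 * best / (BINS - 1) - 1;
          }
      }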

      Voidious, 18:47, 22 July 2012

      Actually, it is possible to emulate opponent guns unless they use some pseudo-random technique. But we don't emulate them perfectly, because there are many different guns across many different opponents, and few bots try to classify and specialize against the specific bot they're battling (e.g. via ScannedRobotEvent.getName()). Generalist bots are more fun.
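
      As a sketch of what that specialization could look like, keyed on the real ScannedRobotEvent.getName() call; the GunModel class is a hypothetical stand-in for whatever per-opponent learner you'd plug in:

        import java.util.HashMap;
        import java.util.Map;
        import robocode.AdvancedRobot;
        import robocode.ScannedRobotEvent;

        public class SpecializingBot extends AdvancedRobot {
            // One hypothetical learner per opponent name.
            private final Map<String, GunModel> models = new HashMap<String, GunModel>();

            public void onScannedRobot(ScannedRobotEvent e) {
                GunModel model = models.get(e.getName());
                if (model == null) {
                    model = new GunModel(); // fresh learner for a new opponent
                    models.put(e.getName(), model);
                }
                model.update(e); // specialize against this specific opponent
            }

            // Hypothetical stand-in so the sketch is self-contained.
            static class GunModel {
                void update(ScannedRobotEvent e) { /* learn here */ }
            }
        }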

        MN, 18:59, 22 July 2012

        An interesting fact: this concept has already been in use in RoboRumble for years.

        PatternMatching (learning)... Anti-Pattern Matching (counter-learning)... GuessFactors (learning)... FlatMovement (counter-learning)... Dynamic Clustering... data decay...

        In some sense, we are at the bleeding edge of AI advancement. There are very few AI competitions with imperfect information anywhere in the world other than Robocode; I can only think of one or two.

          MN, 18:54, 22 July 2012