Poisoning Enemy Learning Systems
I just ran across an interesting article:
That's pretty interesting stuff, and not just in relation to Robocode.
As for Robocode applications, poisoning the enemy's gun with bad data also carries the risk of not dodging bullets, since the data gathering and the classification are so intertwined: the same movement that feeds the enemy misleading data is the one that has to dodge their bullets. But it's the type of technique you'd only use against high-level opponents, like we do with flatteners, so it's already a situation where you're not able to dodge very accurately anyway.
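To make that intertwining concrete, here's a minimal sketch of the idea in plain Java rather than the actual Robocode API (PoisonSketch, chooseBin, and the other names are hypothetical, and it assumes we can track the enemy's stats, which the next paragraph questions): deliberately visit one GuessFactor bin so a simple learning gun over-weights it, then stop visiting that bin once the gun aims there.

```java
// Minimal sketch, not real Robocode code: poisoning a simple GuessFactor gun.
public class PoisonSketch {
    static final int BINS = 31;              // typical GF bin count
    double[] enemyStats = new double[BINS];  // our estimate of the enemy's gun stats

    // Assumed enemy learning gun: shoots at the most-visited bin.
    int enemyAimBin() {
        int best = BINS / 2;
        for (int i = 0; i < BINS; i++)
            if (enemyStats[i] > enemyStats[best]) best = i;
        return best;
    }

    // Poisoning move: go to a chosen "bait" bin on purpose, accepting the
    // risk of getting hit there; otherwise dodge the enemy's aim bin.
    int chooseBin(int baitBin, boolean poisoning) {
        if (poisoning) return baitBin;       // feed the bait bin
        int aim = enemyAimBin();
        return (aim + BINS / 2) % BINS;      // crude "far from aim" dodge
    }

    public static void main(String[] args) {
        PoisonSketch s = new PoisonSketch();
        int bait = 25;
        // Phase 1: poison for 20 waves so the enemy gun learns the bait bin.
        for (int w = 0; w < 20; w++) s.enemyStats[s.chooseBin(bait, true)]++;
        // Phase 2: the enemy now aims at the bait bin; we stop visiting it.
        System.out.println("enemy aims at bin " + s.enemyAimBin()
                + ", we move to bin " + s.chooseBin(bait, false));
    }
}
```

The catch is the poisoning phase itself: while we're feeding the bait bin, we're an extremely predictable target.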
But I wonder... One thing the article mentions is that this kind of poisoning is possible if you have access to the same data as the enemy. In Robocode we technically do. But if that were fully true in practice, we'd be able to emulate the enemy's gun stats exactly, do perfect curve flattening, and never get hit. So it's probably closer to the truth that we don't have access to the same data as the enemy.
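As a toy illustration of that last point (again plain Java with hypothetical names, assuming a simple histogram-based enemy gun): if our clone of the enemy's stats were built from literally the same samples, its argmax would always match theirs and we could always sidestep the aim bin, but even a small logging mismatch makes the predicted aim bin wrong a lot of the time.

```java
import java.util.Random;

// Toy sketch of why "same data" matters for emulating an enemy gun.
public class FlattenSketch {
    static final int BINS = 31;

    static int argmax(double[] a) {
        int best = 0;
        for (int i = 1; i < a.length; i++) if (a[i] > a[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        double[] enemy = new double[BINS];   // the enemy gun's real histogram
        double[] clone = new double[BINS];   // our attempted emulation
        int divergent = 0;
        for (int wave = 0; wave < 5000; wave++) {
            int bin = rng.nextInt(BINS);
            enemy[bin] += 1.0;
            // Our logging differs slightly: say ~10% of our samples land in
            // a neighboring bin because our wave timing isn't identical.
            int ours = rng.nextDouble() < 0.9 ? bin
                     : Math.min(BINS - 1, Math.max(0,
                           bin + (rng.nextBoolean() ? 1 : -1)));
            clone[ours] += 1.0;
            if (argmax(enemy) != argmax(clone)) divergent++;
        }
        // With identical logging 'divergent' would be 0 and we could always
        // sidestep the enemy's aim bin. A small data mismatch breaks that.
        System.out.println("waves where our predicted aim bin was wrong: "
                + divergent);
    }
}
```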