Thread: User talk:Skilgannon/Poisoning Enemy Learning Systems/reply (8)

Hmm, interesting.

Regarding, "proactive about painting an incorrect picture to the enemy", two things come to mind that fit that category:

  1. Chase bullets (though that's taking advantage of the 'bookkeeping', not the learning)
  2. Robots which 'intentionally' look different to tick waves than to bullet waves (see the sketch after this list)
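
A minimal sketch of item 2, assuming a plain Robocode AdvancedRobot; the method names surfRealWave and feedTickWaves are hypothetical placeholders, not anyone's actual bot. The bookkeeping in question is the enemy's energy drop: a drop between 0.1 and 3.0 on a given tick almost certainly means a real bullet was fired, so a bot can dodge real waves in earnest while showing a deliberately misleading profile to the tick waves a learning gun samples every turn:

    import robocode.AdvancedRobot;
    import robocode.ScannedRobotEvent;

    public class TickWaveDeceiver extends AdvancedRobot {
        private double lastEnemyEnergy = 100;

        public void onScannedRobot(ScannedRobotEvent e) {
            double drop = lastEnemyEnergy - e.getEnergy();
            lastEnemyEnergy = e.getEnergy();
            // Legal bullet powers run 0.1..3.0; a real bot would also
            // compensate for its own bullet damage and enemy wall hits
            // before trusting this test.
            if (drop >= 0.1 && drop <= 3.0) {
                surfRealWave(e);   // dodge properly only when a real wave exists
            } else {
                feedTickWaves(e);  // 'bait' movement aimed at tick-wave learners
            }
        }

        private void surfRealWave(ScannedRobotEvent e)  { /* real dodging here */ }
        private void feedTickWaves(ScannedRobotEvent e) { /* misleading movement here */ }
    }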

A pure flattener also kind of fits this description in my mind, because it's all about going somewhere that it won't go in the future; it just looks at the problem from the opposite direction in time. It's just that it can't do a hugely accurate job of it, due to the variety of configurations.
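
As a concrete (and hypothetical) sketch of that idea: a pure flattener just keeps statistics on where it has recently been and steers toward under-visited spots, which is the same visit-count bookkeeping a gun would use, pointed at itself. Bin count and decay rate here are arbitrary choices:

    public class FlattenerStats {
        private static final int BINS = 47;              // guess factors -1..+1
        private final double[] visits = new double[BINS];

        // Called whenever an enemy wave (real or tick) passes over us.
        public void logVisit(double guessFactor) {
            for (int i = 0; i < BINS; i++) {
                visits[i] *= 0.98;                       // old visits fade out
            }
            visits[toBin(guessFactor)] += 1.0;
        }

        // Danger of a candidate destination: how often we've been there lately.
        public double danger(double guessFactor) {
            return visits[toBin(guessFactor)];
        }

        private int toBin(double gf) {
            return (int) Math.round((gf + 1) / 2 * (BINS - 1));
        }
    }

The single unsegmented array is also one way to see the accuracy problem: real enemy waves differ in distance, velocity, and so on, and a flattener can only approximate which of its own past positions the enemy's stats will actually match against.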

I think one of the advantages of the usual "statistical" methods is that they are more difficult to poison, because they don't "speculate" beyond what they've directly observed. In general, I suspect that the same qualities that make a learning method robust to data from both multi-mode bots and random bots also tend to make it more resistant to poisoning.
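
For contrast, a sketch of the kind of "statistical" method meant here (unsegmented and hypothetical, for brevity): it only increments a bucket when a wave actually breaks over the opponent, so the only way to poison it is to genuinely stand in the misleading spot, and there is no speculative generalisation to corrupt:

    // Hypothetical visit-count gun: learns strictly from observed wave breaks.
    public class VisitCountGun {
        private static final int BINS = 31;
        private final int[] buckets = new int[BINS];

        // Called once per wave, only after it has crossed the enemy.
        public void onWaveBreak(double observedGuessFactor) {
            buckets[toBin(observedGuessFactor)]++;
        }

        // Aim at the single most-visited bucket; nothing is inferred about
        // guess factors we have never actually seen the enemy occupy.
        public double bestGuessFactor() {
            int best = BINS / 2;                 // default: head-on, GF 0
            for (int i = 0; i < BINS; i++) {
                if (buckets[i] > buckets[best]) {
                    best = i;
                }
            }
            return (double) best / (BINS - 1) * 2 - 1;
        }

        private int toBin(double gf) {
            return (int) Math.round((gf + 1) / 2 * (BINS - 1));
        }
    }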