Talk:Gaff/Targeting

This is really cool, thanks for writing it up. Makes me want to tinker. =) I'm curious, does the "anti-surfer" network alone do even better against surfers? --Voidious 19:24, 16 July 2009 (UTC)

It was long overdue. Yes, the AS network scored 82.93 on the surfer portion of the TC2K7 (15 seasons) but only got 80.45 against the others. I continue to tweak it every once in a while but have yet to break the 83 barrier... --Darkcanuck 20:09, 16 July 2009 (UTC)

Ooh... very interesting things here... The sentence that most intrigues me right now is "Waves that already give the correct solution are not retrained." I have a hunch that this could be rather important in how well Gaff hits surfers, and may be useful outside of the world of NN guns too --Rednaxela 00:05, 17 July 2009 (UTC)

A common problem with NNs is overfitting -- you want as good an approximator as possible without losing the ability to generalize. By training waves only to within a certain tolerance, Gaff tries not to overfit. The tolerance margin is pretty small though: +/- 1/4 of the effective bot width at that distance. A while ago I did some tests and keeping it set fairly tight seemed to help. Also note that the in-tolerance waves are rechecked every training interval (if still in the buffer), so if the net starts to forget them they do get re-trained.
I guess you're thinking of not updating a VCS buffer if it's already producing the right answer? It would be interesting to see how that worked, or if it just flattened the buffer peaks too much... --Darkcanuck 01:46, 17 July 2009 (UTC)
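A minimal sketch of what this tolerance-gated retraining could look like, assuming a stored-wave buffer and a simple predict/train interface. The Wave and Net names, buffer size, and guess-factor units below are illustrative, not Gaff's actual code:

import java.util.ArrayList;
import java.util.List;

// Sketch: waves stay in a buffer; each training pass only backpropagates the
// waves whose current prediction is off by more than the tolerance
// (+/- 1/4 effective bot width, converted to guess-factor units).
public class ToleranceTrainer {

    // Minimal stand-ins for the real wave and network classes (assumed).
    static class Wave {
        double[] inputs;     // network inputs recorded when the wave was fired
        double observedGf;   // guess factor where the enemy actually was
        double toleranceGf;  // 1/4 effective bot width in guess-factor units
    }
    interface Net {
        double predictGf(double[] inputs);            // current best guess factor
        void train(double[] inputs, double targetGf); // one training step toward target
    }

    private final Net net;
    private final List<Wave> buffer = new ArrayList<>();

    ToleranceTrainer(Net net) { this.net = net; }

    void addWave(Wave w) {
        buffer.add(w);
        if (buffer.size() > 300) buffer.remove(0);   // cap the replay buffer (size assumed)
    }

    // Run every training interval: re-check every buffered wave and retrain
    // only the ones the net currently gets wrong. In-tolerance waves are
    // skipped, but since they stay buffered they get retrained if the net
    // later drifts away from them.
    void trainPass() {
        for (Wave w : buffer) {
            double predicted = net.predictGf(w.inputs);
            if (Math.abs(predicted - w.observedGf) > w.toleranceGf) {
                net.train(w.inputs, w.observedGf);
            }
        }
    }
}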
I've tried the more extreme version of that in my Anti-Surfer gun, decrementing bins (Dookious does everything in all bins covered by bot width) when my bullet hits, to simulate the enemy adapting to that knowledge. At some point, I thought it was giving a slight benefit, but at this point I think it has little effect either way. Though it's not quite the same thing, especially if you have Virtual Guns. --Voidious 02:04, 17 July 2009 (UTC)
Yes Darkcanuck, I'm thinking of not updating a VCS buffer, not adding the entry to a DC table, etc. Such overfitting problems, I believe, apply to statistical methods as well when the system is changing rapidly. Also, what I understand Darkcanuck to be doing, and what I'm suggesting might apply to VCS/DC/etc., doesn't just mean "don't increment on bullet hit", Voidious. It also means "don't record tick waves upon virtual hit". --Rednaxela 03:26, 17 July 2009 (UTC)
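A rough sketch of the VCS analogue being proposed here, purely as an illustration of the idea (bin count, segmentation, and method names are all assumptions, not any bot's actual code):

// Sketch: only reinforce a segment's bins when the buffer's current peak
// would NOT already have covered the observed hit -- i.e. skip recording
// (tick) waves for which the buffer already gives the "right" answer.
public class ToleranceVcsBuffer {
    private static final int BINS = 47;   // bin count is illustrative
    private final double[][] bins;        // [segment][guess-factor bin]

    public ToleranceVcsBuffer(int segments) {
        bins = new double[segments][BINS];
    }

    // Called when a wave breaks.
    public void onWaveBreak(int segment, int hitBin, int botWidthInBins) {
        int halfWidth = botWidthInBins / 2;
        if (Math.abs(bestBin(segment) - hitBin) <= halfWidth) {
            return;   // already correct for this wave -- don't reinforce it
        }
        for (int b = Math.max(0, hitBin - halfWidth); b <= Math.min(BINS - 1, hitBin + halfWidth); b++) {
            bins[segment][b]++;
        }
    }

    private int bestBin(int segment) {
        int best = 0;
        for (int b = 1; b < BINS; b++) {
            if (bins[segment][b] > bins[segment][best]) best = b;
        }
        return best;
    }
}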

You mean you have 84 inputs for the network!?! :shock: » Nat | Talk » 14:31, 17 July 2009 (UTC)

Hmmm, I suppose I do. I have an experimental version in the works (very strong against RM) which uses double that... =)
There's really no limit on how many inputs you can have. It does make computation slower (with 61 outputs, there should be 5185 weights), but with no hidden layer and very simple inputs (most will be near-zero for any given input set), learning speed is fine. --Darkcanuck 14:52, 17 July 2009 (UTC)
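For reference, the 5185 figure is consistent with a fully connected net with no hidden layer: 84 inputs x 61 outputs = 5124 weights, plus 61 bias weights, gives 5185 in total. A minimal sketch of such a forward pass (illustrative only; the sigmoid activation is an assumption, not a detail confirmed above):

// 84 inputs fully connected to 61 outputs, no hidden layer:
// 84 * 61 = 5124 weights plus 61 biases = 5185 parameters.
public class SingleLayerNet {
    static final int INPUTS = 84;
    static final int OUTPUTS = 61;
    final double[][] weights = new double[OUTPUTS][INPUTS];
    final double[] bias = new double[OUTPUTS];

    double[] forward(double[] x) {
        double[] out = new double[OUTPUTS];
        for (int o = 0; o < OUTPUTS; o++) {
            double sum = bias[o];
            for (int i = 0; i < INPUTS; i++) {
                sum += weights[o][i] * x[i];   // mostly near-zero inputs keep this cheap
            }
            out[o] = 1.0 / (1.0 + Math.exp(-sum));   // sigmoid output (assumed)
        }
        return out;
    }
}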