Talk:Gaff
Migrated chat
Woah there! You sure got a crazy-good score against Shadow in TC2K7! I'd be really curious which input nodes are getting the strongest weights in Gaff's targeting NN there. Congrats! -- Rednaxela
Thanks -- now I just have to improve the score against the other 9 bots! I haven't dumped the weights, so I'm not sure which inputs are key to hitting Shadow. The biggest improvement (based on the 50+ tests/tweaks since version 1.00) seems to come from limiting the GuessFactor range: unlike most other implementations I've read about on the wiki, Gaff always maps GF +1 to the full maximum escape angle, asin(8/bulletVelocity), but limits the output to where the enemy could actually move. So if the enemy can only reach -0.4 to +0.7, it'll take the peak value in that range, even if the biggest peak lies outside it.
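A minimal sketch of that range-limiting step, assuming a binned GuessFactor output like the 31-node one described below (the method name and bin math are illustrative, not Gaff's actual code):

    // Pick the strongest output bin, but only within the GuessFactor range the enemy can
    // actually reach before the wave arrives. GF +/-1 always maps to the full maximum
    // escape angle asin(8 / bulletVelocity).
    static int bestReachableBin(double[] binValues, double reachableGfLow, double reachableGfHigh) {
        int bins = binValues.length;                                   // e.g. 31 bins spanning GF -1..+1
        int low  = (int) Math.round((reachableGfLow  + 1.0) / 2.0 * (bins - 1));
        int high = (int) Math.round((reachableGfHigh + 1.0) / 2.0 * (bins - 1));
        int best = low;
        for (int i = low; i <= high; i++) {                            // peaks outside the range are ignored
            if (binValues[i] > binValues[best]) {
                best = i;
            }
        }
        return best;                                                   // aim GF = 2.0 * best / (bins - 1) - 1.0
    }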
The network is using distance, speed (absolute), lateral accel, advancing accel, ticks since velocity change, ticks since CCW/CW change and straight-line distance to the wall as inputs. All of them are split up into smaller feature ranges so there are 4-10 inputs for each of those variables. I'm playing around with splitting them up into even finer features right now. -- Darkcanuck
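For illustration, splitting one continuous input into a handful of binary features might look something like this (the ranges and counts here are made up for the example, not Gaff's actual values):

    // Only the bucket containing the raw value fires; the rest stay at 0.
    static double[] binaryFeatures(double value, double min, double max, int featureCount) {
        double[] features = new double[featureCount];
        double width = (max - min) / featureCount;
        int index = (int) ((value - min) / width);
        index = Math.max(0, Math.min(featureCount - 1, index));
        features[index] = 1.0;
        return features;
    }
    // e.g. binaryFeatures(enemyDistance, 0, 1200, 8) would contribute 8 of the network's inputs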
Hmm, that sounds like an interesting approach. So, based on how you're looking for peaks and such, I presume you have multiple neural net output nodes arranged in a fashion similar to the bins in 'traditional' VisitCountStats? Overall, from what you describe, I'd say this net is set up and used in a way far more like 'traditional' segmented VCS than other NeuralTargeting I've taken note of, due to how it sorts the inputs/outputs into bins/buckets. Actually, I think this has inspired me to look into neural net approaches more, perhaps integrating them into DynamicClustering by using a net in the calculation of differences between situations to create a form of DynamicWeighting. My previous neural experiments haven't gone too well (neural pattern matching and neural situation-readjustment), but this weighting via neural nets might be more promising. In any case, I'm quite impressed and wish you and Gaff well. It would be nice to see some more Canadian bots ascend the ranks :) -- Rednaxela
Yep, there are 31 output nodes, each corresponding to a slice of the GF spectrum. I originally started with a single output node which gave the GF to aim at, but performance was pretty lousy. Neural nets are very strong in classification problems, so Gaff's targeting takes that approach instead. I've also tried using a net to do predictive pattern matching (see Leon) with decent results, but nowhere near good enough to hit surfers. There's probably a lot of room for improvement in the latter if I rearranged it as a classifier too.
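A sketch of how that classification setup could look at training time (the target encoding here is my assumption, not taken from Gaff's source): the GuessFactor observed when a firing wave hits is turned into a target vector over the 31 output bins, and the net is trained toward it.

    // Build a training target with a narrow peak around the observed GuessFactor.
    // A pure one-hot target (1.0 in the hit bin, 0 elsewhere) is the simplest alternative.
    static double[] gfTarget(double observedGf, int bins) {            // bins = 31
        double[] target = new double[bins];
        int hitBin = (int) Math.round((observedGf + 1.0) / 2.0 * (bins - 1));
        for (int i = 0; i < bins; i++) {
            double d = i - hitBin;
            target[i] = Math.exp(-0.5 * d * d);
        }
        return target;
    }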
If you're curious, add "debuglevel=2" (or was it 3?) to Gaff's properties file, turn on painting and you'll see the net's outputs graphed in real time at the bottom left of the screen. -- Darkcanuck
Ahh yes, "debuglevel=2" indeed, and quite pretty graphics. One thing I noticed while it faced RougeDC Gamma6 is that its bullet prediction seems able to predict where RougeDC is firing to a degree I find somewhat bothersome; of course, the gun in Gamma6 isn't meant for performing against adaptive movement anyway. Despite its good prediction, though, it does seem to get hit by the bullets it predicts fairly often, which may be something you want to consider making it more cautious of. You may already be aware of it, but it also looks like you sometimes miss detecting the enemy firing at all. That's a very interesting cornering movement you have there, by the way; it seems quite good at forcing fairly close combat without needing to chase. -- Rednaxela
Nice work with Gaff, keep it up! Hitting Shadow well is like a lifetime goal of mine - up there with becoming an astronaut and climbing Mount Everest. =) So it's good to see someone making progress. Just as an FYI, Engineer also outputs an array of GuessFactor bins like this (and even uses the same type of setup for surfing!). Good luck with the other 9 reference bots ;), cheers, -- Voidious
By the way, out of curiosity, I ran an anti-surfer challenge I made for my own testing on Gaff. It appears its power against Shadow doesn't hold true against Dookious, but in general it isn't too bad against surfers. One interesting thing is that it doesn't manage to hit the surfers I've found to be weaker, like Komarious or MatchupWS, much better than the stronger surfers.
Custom Surfer Challenge. TargetingChallenge2K7 surfers, plus: Dookious 1.573c, Chalk 2.5.Al, Komarious 1.78b, GresSuffurd 0.2.10, DarkHallow .90.9, and MatchupWS 1.2c
Bot | Author | CC | RMX | SHA | WS | WOE | TC2K7 | Dooki | Chalk | Kom | Sub1 | Gres | DH | MWS | Sub2 | Total | Comment |
Gaff 1.04 | Darkcanuck | 63.52 | 80.64 | 69.22 | 82.22 | 78.13 | 74.74 | 54.37 | 70.19 | 74.43 | 66.33 | 88.37 | 81.53 | 82.08 | 83.99 | 75.02 | 16.0 seasons |
-- Rednaxela
That's an ugly score against Dookious -- and I thought CC was hard to hit!
Good catch regarding the bullet dodging: one of the fixes I made in 1.04 broke the dodging code for reverse travel. Fixed now; just running through some more targeting iterations. Still stuck around 76 on the TC2K7 -- each new test brings up the score against 1 bot at the expense of the other 9. Interestingly, smoothing the net outputs brought down the Shadow score quite a bit. Oh, and I did dump the network weights, but it didn't reveal anything. That's the irony of NNs: they can solve difficult problems, but you can't reverse-engineer the solution. -- Darkcanuck
Well, I doubt anyone could score that great against Dookious. Actually, I find it slightly ironic that Voidious has a lifelong goal of hitting Shadow when Dookious appears to be much harder to hit. Perhaps I'll make it my lifelong goal to hit Dookious well... ;)
I think for NNs with few inputs one can create a visual map of which inputs result in which outputs, but that gets tricky when you have more than 2 or 3 input dimensions to deal with. If all one is looking at is the "importance" of a dimension, though, it might be possible to get a useful statistic by taking the maximum theoretical amount an output node could be affected by a single lone input node were all other input nodes 0, and averaging that value over all of the different output nodes, to get a rough comparative measure of how much the network "cares" about that input. Of course, I'm not an expert on NNs, but to me that would make sense as a way to get at least marginally useful information out of a dump of the weights. -- Rednaxela
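A rough cut at that statistic, under one interpretation of the idea (single hidden layer, squashing functions ignored, so it's only a comparative, upper-bound-style number):

    // inputToHidden is [inputs][hidden], hiddenToOutput is [hidden][outputs].
    // For one input node, sum |w_input->hidden * w_hidden->output| over every path to each
    // output node, then average over the output nodes.
    static double inputImportance(double[][] inputToHidden, double[][] hiddenToOutput, int input) {
        int hidden = hiddenToOutput.length;
        int outputs = hiddenToOutput[0].length;
        double total = 0;
        for (int k = 0; k < outputs; k++) {
            double effect = 0;
            for (int j = 0; j < hidden; j++) {
                effect += Math.abs(inputToHidden[input][j] * hiddenToOutput[j][k]);
            }
            total += effect;
        }
        return total / outputs;
    }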
Are you offering to do the analysis for me? ;) With 45 inputs, 20 hidden nodes and 31 outputs, there are a lot of weights to pore over. Out of my last 20 tests (RoboResearch is an amazing tool), the targeting released in 1.04 is still the best. Too many variables to tweak: inputs, network size/structure, learning rates, training methods... just not enough processor time to try them all. Right now I'm trying to figure out how to build an AntiSurfer method, but I'm not sure how to do fast rolling in a NN. Next test is experimenting with weight decay. Somewhere in there lies a way to hit Dookious better... -- Darkcanuck
- You might want to pay attention to if/when Dooki is enabling the flattener. For instance, right now it's probably not being enabled because you're not hitting him much. If you make some progress, you may get a higher score, but still not enough to enable the flattener. Then you could make more progress, trigger the flattener, and end up with the same score - so you really did make progress, but the end score wouldn't tell you that. And if you want to hardcode the flattener on or off for testing, you could do that easily in evaluateFlattener() or setFlattener(boolean) in DookiCape.java. -- Voidious
- Well, I have my own things I'm busy with right now, but some time perhaps. ;) Indeed, RoboResearch is a great tool and I have it running something nearly 24/7 these days. In terms of hitting Dooki, keep in mind what Voidious said: for example, when I added my fast-rolling anti-surf gun, which boosted my performance greatly against just about everyone else, it caused my Dookious score to nosedive from 59 to 55 in my targeting challenge. In other words, not only can improving enough to trigger the flattener fail to improve your score, it can in fact kill your score if Dooki's flattener is effective against you. As far as making a NN "fast rolling" goes, wouldn't that be achieved simply by increasing the learning rate, or alternatively by running additional training iterations on new data when you get it? -- Rednaxela
Excellent! Rednaxela, you provided the key to the TC2k7 score breakthrough I was looking for. Mostly out of laziness (and probably due to the way Leon's NN works) I was only training the net once for each firing wave hit -- I already had the learning rate cranked up to compensate. By retraining the last 5 firing waves each time a new firing wave hit and dialing down the learning rate, Gaff scores almost 2pts higher overall. 9 out of the 10 bot scores are up, including a modest gain against Shadow (71.61). I'm playing with the retraining count and learning rate now to get an idea if there's more value still hidden in this approach. Look for a new release soon with a couple of new movement modes to increase survival against the big guys. -- Darkcanuck
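A minimal sketch of that retraining scheme (class and method names are assumptions, not Gaff's actual code): keep the last five firing-wave samples and re-present all of them each time a new wave hits, with a lower learning rate than single-pass training needed.

    class RecentWaveTrainer {
        interface Net {                             // stand-in for whatever net implementation is used
            void train(double[] inputs, double[] target, double learningRate);
        }
        private static final int HISTORY = 5;       // retrain the last 5 firing waves
        private final java.util.ArrayDeque<double[][]> recent = new java.util.ArrayDeque<double[][]>();

        void onWaveHit(Net net, double[] inputs, double[] target, double learningRate) {
            recent.addFirst(new double[][] { inputs, target });
            if (recent.size() > HISTORY) {
                recent.removeLast();
            }
            for (double[][] sample : recent) {      // newest wave plus up to four previous ones
                net.train(sample[0], sample[1], learningRate);
            }
        }
    }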
Nice stuff there! Looks like Gaff will be on its way up. Cheers! :) -- Rednaxela
New NN Gun
The development version of Gaff is currently scoring 82.3+ on the TC2K7 after 10 seasons... that's a new top score if it holds steady. I guess I'll have to start working on movement now? ;) --Darkcanuck 14:32, 13 May 2009 (UTC)
- Oosh! Is that 35-rounds, then? Hope it holds! --Voidious 15:08, 13 May 2009 (UTC)
It dropped a bit after I posted the note above, but it still edges out Phoenix in the fast-learning (35 round) challenge after 15 seasons:
Name | CC | RMX | SHA | WS | WOE | Surf | DM | FT | GG | RMC | WLO | No Surf | Total | Comment |
Gaff 1.33d_TC | 73.35 | 87.36 | 71.57 | 89.65 | 84.41 | 81.27 | 85.00 | 81.10 | 85.77 | 80.17 | 83.11 | 83.03 | 82.15 | 15 seasons |
Running the RM challenge now and will probably do a 60-100 season run later. Look for version 1.34 to hit the rumble tonight. :) --Darkcanuck 15:59, 13 May 2009 (UTC)
Wow! Above Phoenix, and new records against RaikoMX and WaveSerpent! But I don't think it looks really good against the majority, the Random Movers =) » Nat | Talk » 16:11, 13 May 2009 (UTC)
- RM has always been the weak point of my approach, still trying to find a better way to tackle it. This gun scored about 87.6 in the RM challenge, no contender for DrussGT. I think that DC definitely has a big advantage in this area since it keeps every scan in detail and can tease out the hidden patterns in what should be a random profile... But I think this gun ranks as a pretty good surfer-killer. :) --Darkcanuck 00:51, 14 May 2009 (UTC)
Maybe I'm just being stupid, but I can't find the TC2K7 results for 35 rounds anywhere on this wiki... and my router doesn't like me checking the old wiki even with the /etc/hosts mod. Could somebody please migrate it over? Thanks... --Skilgannon 18:43, 13 May 2009 (UTC)
Oh my! Amazing work Darkcanuck! By the title of this section, I presume this is an all-new NN gun now? Does it have separate components for surfers and for random movers? :) --Rednaxela 20:07, 13 May 2009 (UTC)
- Thanks! I'm planning to write it up in detail eventually, but it's basically the evolution of Gaff's previous gun. I've been working on an RM version (different training approach) since the fall and finally got some good results in the past week. This gun uses inputs similar to what Skilgannon described for DrussGT. The first breakthrough was to slice them up into a large number of very finely grained features which are fed into the net as inputs (that's what Gaff 1.32 and 1.32rm used). And then yesterday I decided to use radial-basis functions instead of simple binary inputs -- that boosted performance of both the anti-surfer and RM versions considerably. The final product was combining the two, of course. So this new gun uses two neural networks which are fed the same inputs but trained differently, with the outputs combined to give the most likely GF. This project has taken over 11 months (on and off)! --Darkcanuck 00:51, 14 May 2009 (UTC)
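A sketch of what the radial-basis-function encoding could look like for a single input (centre spacing, width and counts are illustrative only, not Gaff's actual values):

    // Instead of a single binary bucket firing, every feature gets a Gaussian activation
    // based on how far its centre is from the raw value, so nearby features fire partially.
    // Assumes featureCount >= 2.
    static double[] rbfFeatures(double value, double min, double max, int featureCount) {
        double[] features = new double[featureCount];
        double spacing = (max - min) / (featureCount - 1);
        for (int i = 0; i < featureCount; i++) {
            double d = (value - (min + i * spacing)) / spacing;    // distance in units of centre spacing
            features[i] = Math.exp(-0.5 * d * d);
        }
        return features;
    }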
Anti-surfer challenge
Hey Darkcanuck, would you mind if I tested Gaff's gun over in Talk:Anti-Surfer Challenge? :) --Rednaxela 13:23, 27 July 2009 (UTC)