User talk:Rednaxela/SaphireEdge
When you finish all the components, how will you merge them? Will you use a crowd? » Nat | Talk » 08:44, 24 March 2009 (UTC)
With 'crowdtargeting', yes. While the AS series was superficially about anti-surfer performance, its real goal was to tune/perfect my crowdtargeting technique, which in the AS series combined the gun from the VCS series with a fast-rolling version and a BulletHit fast-rolling version. The idea is that how well it mixes will carry over when I mix in the NN series, and possibly a DC-PIF later. --Rednaxela 08:57, 24 March 2009 (UTC)
If you have CT(NN, VCS-GF, VCS-AS, DC-PIF), it would result in a really slow robot. I only came to understand 'crowdtargeting' (and 'crowd') 3 hours ago while reading the RougeDC source :-) Hope SaphireSlippers will come soon so we will see the real Saphire. » Nat | Talk » 09:30, 24 March 2009 (UTC)
Well, somewhat slow, but not too bad. The VCS alone is reasonably fast, the NN alone is reasonably fast, and DC-PIF is a little slowish but not unreasonable. Haha, well the crowdtargeting in RougeDC is... such a poor hack. Rather than actually learning the appropriate combination, it's little better than fuzzy virtual-guns. What I have in SaphireEdge is grounded in some real theory, based upon the Hebb rule, and works far far better. :-) --Rednaxela 15:58, 24 March 2009 (UTC)
Cool! So you're using a neural network to see which combination of gun inputs correlates to the required output? My NN knowledge is limited, but does the NN actually do the mixing for you? In what form do you get the output? --Skilgannon 16:19, 24 March 2009 (UTC)
Well, I use an adaptation of the generalized Hebbian algorithm, not a neural net, for the crowd. Though many neural nets do use the GHA, I wouldn't call what I have a neural net, because it's closer to a single neuron than a net. I suggest you see the wiki page for more info. What I have is essentially a modification of it, which among other changes works with vectors (containing the bin values) as opposed to real numbers. --Rednaxela 17:10, 24 March 2009 (UTC)
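For anyone curious what a Hebbian-rule-based crowd weighting can look like in practice, here is a minimal Java sketch. It is not SaphireEdge's code: the class name, the per-gun scalar weights, the learning rate, and the choice to apply an Oja-style update at the bin the enemy was actually hit in are all assumptions for illustration (the adaptation described above reportedly operates on whole bin vectors rather than single values).

```java
// Hypothetical sketch (not SaphireEdge's actual code): an Oja/Hebb-style rule
// that learns one weight per component gun. 'gunBins' holds each gun's current
// bin array for the wave being trained; 'hitBin' is the bin the enemy was
// actually hit in when the wave broke.
public class HebbianCrowd {
    private final double[] weights;    // one weight per component gun
    private final double learningRate; // assumed small constant, e.g. 0.01

    public HebbianCrowd(int gunCount, double learningRate) {
        this.weights = new double[gunCount];
        this.learningRate = learningRate;
        java.util.Arrays.fill(weights, 1.0 / gunCount);
    }

    // Weighted sum of the component guns' bins: the "crowd" estimate to aim with.
    public double[] combine(double[][] gunBins) {
        double[] mixed = new double[gunBins[0].length];
        for (int g = 0; g < gunBins.length; g++) {
            for (int b = 0; b < mixed.length; b++) {
                mixed[b] += weights[g] * gunBins[g][b];
            }
        }
        return mixed;
    }

    // Oja's rule at the observed bin: guns whose bins agreed with the hit get
    // strengthened, and the y*y*w decay keeps the weights from growing unbounded.
    public void train(double[][] gunBins, int hitBin) {
        double y = combine(gunBins)[hitBin];
        for (int g = 0; g < gunBins.length; g++) {
            double x = gunBins[g][hitBin];
            weights[g] += learningRate * (y * x - y * y * weights[g]);
        }
    }
}
```

The decay term is the usual reason to prefer Oja's rule over a plain Hebbian update: it keeps the learned weights bounded while still letting consistently wrong components drift toward zero or negative weight.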
NN Series
Just a question, which library do you use? The small NeuralLib, NRLIBJ, TDNN, JSOMAP, or one you created specially? » Nat | Talk » 08:02, 24 March 2009 (UTC)
A little custom 200-line class I wrote. Supports momentum, bias weights, easy to modify for any transfer functions, arbitrary numbers/sizes of layers, and batch training. It's not perfect but I'm fairly proud of it. :-) --Rednaxela 08:15, 24 March 2009 (UTC)
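As a rough illustration of what "momentum" and "bias weights" mean in such a class, here is a hypothetical fragment (names and constants are assumptions, not Rednaxela's code) showing the kind of per-layer weight update applied after the error terms have been backpropagated.

```java
// Hypothetical fragment: momentum + bias weight update for one layer, applied
// after the per-neuron error terms ('gradients') have been computed by backprop.
public class MomentumUpdate {
    static final double LEARNING_RATE = 0.1;  // assumed values
    static final double MOMENTUM = 0.9;

    // weights[i][j] connects input i to neuron j; row inputs.length is the bias weight.
    static void updateLayer(double[][] weights, double[][] previousDelta,
                            double[] inputs, double[] gradients) {
        for (int j = 0; j < gradients.length; j++) {
            for (int i = 0; i <= inputs.length; i++) {
                double input = (i == inputs.length) ? 1.0 : inputs[i];  // bias input is a constant 1
                double delta = LEARNING_RATE * gradients[j] * input
                             + MOMENTUM * previousDelta[i][j];           // carry part of the last step
                weights[i][j] += delta;
                previousDelta[i][j] = delta;
            }
        }
    }
}
```

Batch training, as mentioned above, would simply accumulate the gradient terms over several samples before calling an update like this.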
Well... NN01 to NN03 were a disappointment. While it's better against Shadow than the AS series, it's a VERY far cry from what Gaff gets. What I don't know, is whether it's because of the learning policy or the choice of inputs that it's failing so badly.... right now I'm suspecting choice of inputs but it's hard to really say... Pity that the two comparable NN guns are secretive, haha. --Rednaxela 17:34, 24 March 2009 (UTC)
The Secret of Gaff
Oh Blah. I can't figure out the secret of how Gaff 1.20 got that 74.68 against Shadow! My best so far is NN06. NN10, which is running now, is faring even worse, despite adding inputs to duplicate almost all of the network inputs as described here (bottom of the page)! The NN series may be doing better against Shadow than my primary gun, however it's no better than the likes of Phoenix and Garm. Why Gaff is doing so much better than the NN series, I'm now sure, comes down to one of the following:
1. My training configuration sucks (Doubtful, already messed with that a bunch)
2. My whole NN implementation is broken (Doubtful, extensive testing shows otherwise)
3. The "Wall distance along heading" dimension that Gaff has works way better against Shadow than my "Guessfactor to wall" dimensions (Doubt it makes a big difference in Gaff's favor if it does anything)
4. The curve I am training against (the same type of curve I use in my VCS) is inferior for a net compared to the curve that Gaff uses. (Maybe... Not sure...)
5. Gaff's "don't consider locations the enemy couldn't reach" really makes a large and critical difference. (Quite possible)
Right now I think #5 is the most probable. Anyone have thoughts on the Secret of Gaff? Or does anyone have any suggestions about how to best calculate where the enemy can reach? A crude form that doesn't consider walls is easy, but does anyone have a nice fast non-iterative solution that for sure doesn't eliminate guessfactors that are actually reachable?
--Rednaxela 02:45, 25 March 2009 (UTC)
- I was curious how this would turn out. :) Don't give up too quickly! Keep in mind that I've been playing with NNs + Robocode for quite some time, and Gaff 1.20 was the result of many, many iterations of RoboResearch. Gaff's gun is not secretive on purpose, I'm just too lazy to type it all up. Training is going to be the biggest factor: how do you present "hits" to the net, how often are they re-trained, etc. And limiting the GF range was important too, but don't all good GF guns do this? The algorithm is iterative, but only needs to take into account the two trajectories which result in the min/max reachable GF. I don't know if it's perfect, but it does the trick. I haven't worked on Gaff for quite a few months now -- my last attempts were to tune another NN gun for good performance against random movers, but the results plateaued around 85 in the RM challenge. --Darkcanuck 03:13, 25 March 2009 (UTC)
- Haha, well I haven't given up yet, just frustrated for the moment. Hmm, one basic question out of curiosity: do you use non-firing waves at all in Gaff's NN? I've only been using firing waves thus far. --Rednaxela 04:33, 25 March 2009 (UTC)
- It's been a while since I looked at the source, but yes, I'm pretty sure Gaff uses both. But non-firing waves are only trained once while firing waves are trained more than once. A bit odd, yes. Also, I noticed above that you're using batch training -- Gaff uses incremental training only, weights are updated after each training vector. --Darkcanuck 06:31, 25 March 2009 (UTC)
- That's coming along nicely. Most of my later tweaks got Gaff into the low 70's. The trick, however, is to balance performance vs Shadow as well as vs other surfers. Later versions of Gaff (after 1.20) sacrificed a great Shadow score for a better overall score. The latest version clocked in at 72.72 vs Shadow, 80.90 overall. --Darkcanuck 06:33, 26 March 2009 (UTC)
- Thanks. Well, balancing performance vs Shadow as well as vs other surfers isn't a high priority for me. The AS series performs quite well against all surfers except Shadow, and my plan is to merge the NN and AS series via the crowdtargeting, thus the NN series focuses solely on the weak points of the AS series. Since Shadow seems to nicely embody the current weakness of the AS series, that is what the NN series focuses on. By the way, as far as limiting the targeted guessfactors to actually reachable ones (as in Maximum_Escape_Angle/Precise), what is this algorithm you use? Currently I'm running a test version with a brute force lookup table that evaluates almost all possible locations the bot could go, however this is very slow and only barely avoids skipping turns. The MEA/Precise page suggests a method involving predicting the enemy movement with some wallsmoothing, but I REALLY don't trust that method at all, because I've yet to see a wallsmoothing algorithm that truly allows the absolutely complete maximum range of reachable GFs. --Rednaxela 07:17, 26 March 2009 (UTC)
- I've developed my own weird future position algorithm which has a simple but effective wall-avoidance strategy. My bots use this for movement and for predicting enemy moves. I try not to change this code too often, because trying to figure out how it works always makes my brain hurt! :) But basically the algorithm tries to move the enemy bot perpendicular to its bearing to my bot, in both directions, with some wall smoothing. No lookup tables, no massive every-possible-move iteration, so I can do this every tick and still have plenty of processing power for targeting & movement. It's not perfect, but I suspect some of the bugs I've seen are related to the bugs Simonton found in the robocode movement engine. --Darkcanuck 14:37, 26 March 2009 (UTC)
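Since this kind of prediction keeps coming up, here is a deliberately crude Java sketch of an iterative reachability check in the same spirit: push the enemy perpendicular to its bearing at full speed and record the largest angular offset reached before the wave arrives. It is not Darkcanuck's algorithm; the 800x600 field, the hard clamp at the walls instead of real wall smoothing, and ignoring acceleration/turn limits are all simplifying assumptions.

```java
import java.awt.geom.Point2D;

// Crude illustration only: estimates the maximum angular offset reachable in one
// orbit direction, ignoring Robocode's acceleration and turning constraints.
public class ReachableAngle {
    static final double FIELD_W = 800, FIELD_H = 600, WALL_MARGIN = 18;

    // Largest absolute angular offset (radians) from the initial bearing that the
    // enemy can reach in the given orbit direction (+1 or -1) before the wave hits.
    public static double maxReachableOffset(Point2D.Double fireLocation,
                                            Point2D.Double enemyLocation,
                                            double bulletSpeed, int direction) {
        Point2D.Double pos = new Point2D.Double(enemyLocation.x, enemyLocation.y);
        double startBearing = absoluteBearing(fireLocation, pos);
        double maxOffset = 0;
        for (int tick = 1; tick * bulletSpeed < fireLocation.distance(pos); tick++) {
            // Move perpendicular to the current bearing at top speed (8 px/tick),
            // clamping to a margin inside the battlefield instead of wall smoothing.
            double heading = absoluteBearing(fireLocation, pos) + direction * Math.PI / 2;
            pos.x = clamp(pos.x + 8 * Math.sin(heading), WALL_MARGIN, FIELD_W - WALL_MARGIN);
            pos.y = clamp(pos.y + 8 * Math.cos(heading), WALL_MARGIN, FIELD_H - WALL_MARGIN);
            double offset = Math.abs(normalRelative(absoluteBearing(fireLocation, pos) - startBearing));
            maxOffset = Math.max(maxOffset, offset);
        }
        return maxOffset;
    }

    static double absoluteBearing(Point2D.Double from, Point2D.Double to) {
        return Math.atan2(to.x - from.x, to.y - from.y);   // Robocode convention: 0 = north
    }
    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
    static double normalRelative(double angle) {
        while (angle > Math.PI) angle -= 2 * Math.PI;
        while (angle < -Math.PI) angle += 2 * Math.PI;
        return angle;
    }
}
```

Running it once per direction gives an estimate of the positive and negative reachable limits; a real implementation would also respect Robocode's acceleration and turning rules, which is where the subtle bugs mentioned above tend to live.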
- How about having a lookup table that is sorted in order of GF reached, being first segmented on velocity and offset? You just translate them back until you get to one that is within the battlefield. You could even thin it out a bit, so that only positions more than 2 units away from each other are included, and I doubt it would make any tangible difference. --Skilgannon 07:47, 26 March 2009 (UTC)
- Hmm... actually my lookup table supports lookups in such a way that I can do this almost exactly like iterative wallsmoothing, except that instead of incrementing an angle I decrement the "turn until" index. That should work perfectly... and be plenty fast. Haha, my lookup table was well-suited for this task, but my nearly-brute-force search was incredibly foolish :D --Rednaxela 08:17, 26 March 2009 (UTC)
- The majority of the execution time I saved in DrussGT's movement is pretty much because of doing high-speed sorts/eliminations before analysing all the data, so that I analyse as little data as possible, rather than just analysing the same amount faster. Of course, when you look at it from the algorithm's point of view, the worst case is still just as bad. But in practical applications, the worst case rarely, if ever, happens. Big lesson I learned there =) --Skilgannon 08:37, 26 March 2009 (UTC)
- I think "don't consider locations the enemy couldn't reach" may be the problem. Many GF gun when using precise MEA instead of normal MEA will be far more effective. » Nat | Talk » 04:09, 25 March 2009 (UTC)
- Well, actually I've seen quite a number of GF guns, and I've only heard of about 2 to 3 using such max escape angle calculations. Needing it for a strong gun is a silly myth, I think. I've never used it in any gun of mine, yet VCS24 is extremely strong against random movers (TCRM score only beaten by DrussGT and one obscure targeting test I had with RougeDC), and AS06 ties for the best known overall score in TC2K7, and both set a few individual-bot records. I strongly suspect that MEA limiting only really ever helps in the case of fast-rolling information that doesn't have a chance to learn the MEA limits accurately enough on its own. --Rednaxela 04:33, 25 March 2009 (UTC)
- I think it doesn't help that much either; if there are good segments or attributes for wall distance, I think it kind of works out by itself. --zyx 06:11, 25 March 2009 (UTC)
- There are many situations where the bot can't reach the MEA due to wall proximity. My debug graphics show that the max reachable GF can be much less than +/-1 -- but of course there may be (and probably are) bugs in my prediction routines. But for Gaff's NN, which has an output for each GF segment, it's important to disregard non-reachable segments because the net may produce weird values at the edges, especially during initial training. In any case, my tests showed definite targeting improvement after adding this check. --Darkcanuck 06:31, 25 March 2009 (UTC)
- Well, my comment was about GF guns in general; I haven't done it with an NN. In my DC bot I don't even use 1 as the maximum GF, sometimes 1.1 can be a useful GF, even if it is logically wrong :-). --zyx 14:36, 25 March 2009 (UTC)
- Err, I'm quite sure that the only way you'll get GF 1.1 is from botwidth, thus the targeting information at GF 1.1 should NEVER show as more probable than GF 1.0. In RougeDC, where I completely and totally avoid bins, it doesn't even limit to 1.1 because there's no reason to. --Rednaxela 14:44, 25 March 2009 (UTC)
- I have never been too confident on
Math.asin(8/velocity)
being completely exact (and in my new bot I'm using precise MEA for an orbiting enemy, not linear, so my new MEA is even smaller). But even if it is from bot width, GF 1.1 may be reachable by your gun before GF 1 is. And when you are surfing, dodging GF 1.1 is very important; the other bot may be bogus, maybe it doesn't wait for gun alignment, many reasons might make it shoot there. --zyx 17:43, 25 March 2009 (UTC)
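For reference, here is a tiny worked example of what that asin(8/velocity) expression evaluates to, using Robocode's standard bullet speed formula. The bullet power is just an example value, not taken from any bot discussed here.

```java
// Classical (non-precise) maximum escape angle for an example bullet power.
public class MeaExample {
    public static void main(String[] args) {
        double bulletPower = 1.9;                       // example value only
        double bulletSpeed = 20 - 3 * bulletPower;      // = 14.3 px/tick
        double mea = Math.asin(8.0 / bulletSpeed);      // ~0.594 rad (~34 degrees)
        System.out.println("Classical MEA for power " + bulletPower + ": " + mea + " rad");
        // As noted above, walls can shrink the actually reachable range well below
        // this idealized value, while bot width can make offsets past GF 1.0 matter.
    }
}
```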
AS Series
Well, it's not quite ready for TC tests yet, but right now I'm working on a new, more robust and smarter so-called "Crowd Targeting" system than what I had in RougeDC. I'm not quite sure it's perfected yet, but here's one very good sign: I set up hypothetical surfing stats an enemy might have, and the system is consistently giving a negative weighting to those stats against surfers, and giving a zero-ish weight for those stats against random movers! I still have some work to do (for one, it adapts much too slowly right now), but I think it's exciting that it's able to so reliably detect surfing behavior and choose a reasonable weight for simulated surfing stats! --Rednaxela 09:08, 3 January 2009 (UTC)
Success! AS03 is looking pretty nice! Still lots more potential to squeeze out but this is looking quite good so far! --Rednaxela 16:35, 22 March 2009 (UTC)
Huge success! AS04 simultaneously improved against surfers while recovering the random mover score too! WaveSerpent really gets totally kicked around here too! --Rednaxela 14:33, 23 March 2009 (UTC)
VCS Series
Based on VCS01-03, it seems that increasing the granularity of segmentation of the current gun configuration dramatically increases anti-surfer performance (not that it is very good), however it causes a small but notable decrease in performance against random movers (or not?). What I wonder now, however, is how strong it is possible to be against non-surfers with only a single pure-tickwave VCS buffer. I see bots in that challenge with notably better scores in the non-surfer category, however half are DC (Chalk and DCResearch) and the other half are multi-buffer (Phoenix & Dookious). Anybody else have any thoughts about how strong single-buffer VCS can get? --Rednaxela 07:53, 20 December 2008 (UTC)
IMO a single buffer can only ever be so strong against surfers (although perhaps your inter-segment smoothing will help with this), due to the fact that your segmentation is rarely exactly the same as theirs. A single buffer can be very strong against non-adaptive movement though, just look at Raiko. --Skilgannon 17:57, 20 December 2008 (UTC)
I'd agree that a single buffer is rather limited in how it can do against surfers; what I'm speaking about for now is against non-adaptive movement. I suppose my next step will be switching over to TargetingChallengeRM and benchmarking Raiko, Bee modified to use a single buffer only, VCS04, and VCS04 modified to remove antialiasing/interpolation. Having those should give quite a solid "baseline". Before I move the gun beyond single-buffer VCS (and I intend to move it far beyond that), I want to get the single-buffer VCS component nailed down really solid (in other words: build the best damn single-buffer VCS gun there has ever been (against non-adaptives)). Right now I have the temptation to do a mad scientist laugh. Anyone else ever had that feeling when robocoding? :) --Rednaxela 02:20, 21 December 2008 (UTC)
Okay, so this is stronger than Raiko, however it is still slightly weaker than a version of Bee modified to use only a single buffer. Next I'll try weighting my firing waves higher than non-firing waves, as that may help. --Rednaxela 00:37, 23 December 2008 (UTC)
Woah, it seems that switching to smoothed bin insertions certainly helped overall. I think next I'll try a hybrid method of bin insertion that is smoothed but takes botwidth into account too. --Rednaxela 19:24, 23 December 2008 (UTC)
Well, from now on I'll remember this: bin insertion shape makes a BIG difference, and the optimal is certainly not a rectangle across the botwidth like I first presumed. VCS09 is kind of troubling yet uplifting at once. On one hand it demonstrates that my antialiasing/interpolation scheme is of much value, but on the other hand I need to figure out why it's so far behind SingleBufferBee without it. I can't think of any reason for it to be so much weaker considering both are very similarly natured. I wonder how much better than VCS08 it will be though, once I fix whatever problem it has and re-enable antialiasing/interpolation... :) --Rednaxela 17:50, 24 December 2008 (UTC)
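For anyone unfamiliar with the term, "smoothed bin insertion" generally means spreading each hit over neighbouring bins with a falloff kernel rather than incrementing a single bin (or a flat rectangle across the bot width). A minimal sketch, using an arbitrarily chosen 1/(1+x^2) falloff rather than whatever shape the VCS versions above actually settled on:

```java
// Hypothetical helper, not SaphireEdge code: log a hit by spreading it across
// the bins with a kernel centred on the exact (floating-point) bin index.
public class BinSmoothing {
    // guessFactor in [-1, 1]; binSmoothWidth controls how wide the spread is, in bins.
    public static void logHit(double[] bins, double guessFactor, double binSmoothWidth) {
        double exactIndex = (guessFactor + 1) / 2 * (bins.length - 1);  // map GF to bin index
        for (int b = 0; b < bins.length; b++) {
            double distance = Math.abs(b - exactIndex) / binSmoothWidth;
            bins[b] += 1.0 / (1.0 + distance * distance);               // one possible falloff shape
        }
    }
}
```

The "hybrid" idea mentioned above would presumably combine a flat contribution across the bins the bot physically covers with a falloff outside that range, but that detail is speculation.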
Wowzers! It seems VCS12 is very strong in TCRM! The things I know of that beat it are DrussGT, Dookious, and some experimental versions of RougeDC's gun that never saw the rumble. Now... if I can just get it past the 90 barrier, then just a little further to beat DrussGT's TCRM score and make the best anti-random-movement gun yet using only a single (though fancy) VCS buffer. Only then will I be satisfied with my take on VCS and be able to proceed to the next stage of Project SaphireEdge! *cackles like a madman* Wonder what it will take to make the final steps to making SaphireEdgeVCS the king of TCRM... Well, first I'm going to find out just why VCS12 still underperforms SingleBufferBee... this will be VCS13's job. In retrospect, maybe I should have given separate version numbering to the versions with and without antialiasing/interpolation? --Rednaxela 02:16, 28 December 2008 (UTC)
Well, VCS13 failed to demonstrate improvement, and VCS11 is close enough to SingleBufferBee anyway, so in VCS14-16 I focused on improving on top of VCS12 more. It seems that VCS16 is quite good, nearly passing the 90 mark! --Rednaxela 06:17, 29 December 2008 (UTC)
Nice! VCS20 is looking very strong! Ahead of Phoenix by a sizable amount, just baaarely behind RougeDC TC52's score of 90.24, and also closing in on DrussGT's 90.5 score! And to think this is just a single VCS buffer! :P --Rednaxela 05:29, 30 December 2008 (UTC)
Very impressive! Although, by doing all that smoothing, it seems to me that you are actually making it more in effect like a DC gun =) Are you also doing 'aliasing' when you read the data back to find a firing angle? Or just find the highest bin in the current buffer? Happy New Years =) --Skilgannon 20:53, 31 December 2008 (UTC)
Well, VCS20 turned out to be a bit less impressive after the rest of the seasons completed, but indeed this does seem to more closely approximate how a DC gun works. Also, I'm not 'aliasing' or doing true smoothing really, I'm doing anti-aliasing. See the example section to see how it's applied to images. Imagine you had a grid of pixels, but a point you want to insert isn't exactly in the center of a pixel; instead you can just tint all pixels adjacent to the point. In a similar way I'm writing to all buffers adjacent to the situation when I get a data point, and when reading, I do interpolation in a matching manner (I take a weighted average of all buffers adjacent to the current situation). With 6 dimensions like I'm currently using, this means each write writes to up to 64 buffers, and each read reads up to 64 buffers (it's frequently less, because it's not uncommon for a given situation to be at the "edge" of one or two dimensions where it's only adjacent to a single segment). So for both reading and writing, at any given point in time I don't consider any single buffer to be the 'current' one, rather I consider a weighted mixture of the buffers adjacent to the situation to be the current 'one'. To find the firing angle I don't do anything different from the norm of using the highest bin, the only difference is that the "current buffer" is a mixture of several. Does that make sense? :)
Now what bothers me is whether I can improve this any further. I consider VCS16 to be my 'base version' now, as nothing else has provided a conclusive improvement over it, and I've had 7 unsuccessful trials in a row. Further segments don't seem to be able to help, and the changes to the anti-aliasing/interpolation methods I've toyed with didn't work either. Is roughly 90.0 the limit of this technique? I'm having trouble deciding if I should let the VCS series rest and begin the other parts of the SaphireEdge research or not. Perhaps I should try the Fourier analysis targeting idea I've been toying with...
Happy New Years!
--Rednaxela 21:47, 31 December 2008 (UTC)
Other
Hey, Rednaxela, how do you create an anti-aliased gun? I tried it, but got confused about how it gets data from the segments. I thought of recursion, but it seems like it would be slow. Can you explain your idea? » Nat | Talk » 07:13, 7 April 2009 (UTC)
It is recursive, yes. Doing it non-recursively wouldn't save much overhead and would be painful to code. The anti-aliased/interpolated VCS read algorithm is like this:
1. Let x be the first dimension
2. Let data be the root of the buffer structure
3. Let weight be 1.0
4. If x is past the last dimension:
   - Add weight times the bins stored in data to the summation
5. Else:
   - Let v be the value of dimension x in the current situation
   - Let i be a floating-point segment index generated from v
   - Get the integer indices ci = ceil(i) and fi = floor(i)
   - Assign 1 - abs(i - ci) to cweight and 1 - abs(i - fi) to fweight
   - Assign data[ci] to cdata and data[fi] to fdata
   - Recurse to step 4 with x = x+1, data = fdata, weight = fweight*weight
   - Recurse to step 4 with x = x+1, data = cdata, weight = cweight*weight
The process for writing is basically along the same lines. --Rednaxela 08:04, 7 April 2009 (UTC)
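For reference, here is a minimal Java sketch of the read-side recursion above. The class, field, and parameter names are hypothetical, not SaphireEdge's actual code; it assumes the buffer tree is stored as nested arrays with bin arrays at the leaves and that the floating-point segment indices are already clamped to valid ranges.

```java
// Minimal sketch of the interpolated VCS read: accumulate weight * bins over all
// buffers adjacent to the current situation. 'node' is nested Object[] arrays
// with double[] bin arrays at the leaves; segmentValues[d] is the floating-point
// segment index for dimension d, assumed already clamped to a valid range.
public class InterpolatedVcsReader {

    public static void read(Object node, double[] segmentValues, int dimension,
                            double weight, double[] sum) {
        if (dimension == segmentValues.length) {
            double[] bins = (double[]) node;             // leaf: a bin array
            for (int b = 0; b < sum.length; b++) {
                sum[b] += weight * bins[b];
            }
            return;
        }
        Object[] children = (Object[]) node;
        double i = segmentValues[dimension];
        int fi = (int) Math.floor(i);
        int ci = (int) Math.ceil(i);
        double fWeight = 1.0 - Math.abs(i - fi);
        double cWeight = 1.0 - Math.abs(i - ci);
        // Recurse into the lower adjacent segment.
        read(children[fi], segmentValues, dimension + 1, weight * fWeight, sum);
        // Recurse into the upper adjacent segment, unless it is the same one
        // (which happens when i is exactly an integer and would double-count).
        if (ci != fi) {
            read(children[ci], segmentValues, dimension + 1, weight * cWeight, sum);
        }
    }
}
```

Writing is the mirror image: recurse the same way, but at each leaf add weight times the new data point into the bins instead of summing them out.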
Well, you really like source-free discussion :-) (which is good). Quick question: the floating-point index means, e.g., if index 0 corresponds to 0 and index 1 corresponds to 100, and the data value is 78, then the floating-point index is 0.78, right? Another not-so-quick question: how long is your code (ALL code, not just the recursive code)? (to prepare myself =D) » Nat | Talk » 12:23, 7 April 2009 (UTC)
Pretty much yep. And what do you mean by "all code"? Do you mean just related to the anti-aliased/interpolated VCS? Or do you mean in my whole gun system as it stands? Or do you mean the full framework I have all my recent megabot tests in? :P --Rednaxela 17:25, 7 April 2009 (UTC)
Gun system. And if you can tell me both the gun system and the VCS-related part, that would be nice. I know what your framework is, RougeDC, isn't it? Soon after I create my own for my research, I'll follow the instructions to create a VCS24. Just wondering one thing: VCS24 gets over 90% against random movers, so how will my improvements (listed at my research page) help? 95% against random movement seems impossible to me. » Nat | Talk » 18:41, 7 April 2009 (UTC)
I'll check the lines of code in a bit, but let me warn you: my implementation relies heavily on quite a number of separate utility classes, which if not counted make the number lower than a from-scratch implementation, but if fully counted make it higher. Oh, I'd call 95% in the random mover challenge impossible for certain. I'd consider the highest possible value to be in the 91% to 92% range, maybe more like 93% if pre-loaded data is allowed. And on your research page, what parts specifically do you mean? (Perhaps that would be a conversation best for its talk page too) --Rednaxela 18:30, 8 April 2009 (UTC)
Discussion about my research is now on its talk page. About the utility classes, I don't mind, since they are not very hard to write. On my machine there are a lot of utility classes I wrote, but only a few operational robot classes :-) » Nat | Talk » 01:07, 9 April 2009 (UTC)
License
Which license is this experiment's robot under? I found that your BinUtils class is really useful. » Nat | Talk » 03:28, 20 September 2009 (UTC)
Haha, I'm glad I'm not a crackpot for having a class like that BinUtils one. Feel free to use the things under the ags.util package (i.e. BinUtils) under the same license as my Kd-Tree, but at least for now no derivative works of the main parts of the SaphireEdge gun please, as I wish to encourage people to try their own takes on the Antialiased-Interpolated-VCS technique. That good? :) --Rednaxela 04:09, 20 September 2009 (UTC)
Thank you. I found that that class saves me a lot of time when dealing with precise GFs and VCS buffers =) I'm not going to use your Antialiased-Interpolated-VCS classes/method, but I may copy a bit of the clustering axes technique you use, as it is far clearer and easier than mine. Is that OK with you? » Nat | Talk » 04:57, 20 September 2009 (UTC)
If you mean the ags.muse.gun.segmentation.Dimension stuff, then sure. :) --Rednaxela 05:38, 20 September 2009 (UTC)