Talk:Geomancy

From RoboWiki

General Questions

What is the difference between this and Watermelon? At first I thought this was go-to surfing, but it is true surfing after all. » Nat | Talk » 13:08, 19 June 2009 (UTC)

This is the next iteration of Watermelon - I conquered kid.Toa (finally!) and fixed some bugs in my distancing that allowed me to get rid of some rambot special-case code, improving my survival against rambots to nearly 100%. The main reason for the rerelease was twofold: I wanted a more serious name, and I wanted an excuse to rip out all the old bits of commented-out code I had stopped using a while ago. Answering your other concern: Geomancy does use true surfing, determining the best direction to orbit each turn. Soon I plan to make a change to the movement that will increase the resemblance to a go-to bot somewhat. If my best location for the upcoming wave is very close to where I am now, I'll delay movement until the last moment, giving the opponent less information about my intentions. -- Synapse 22:59, 19 June 2009 (UTC)

Debug Graphics

Those have to be the nicest wave graphics I have ever seen! --Darkcanuck 18:03, 19 June 2009 (UTC)

Glad you like them! It's important to me that the graphics be readable enough that even at 30tps I can get a feel for why the bot is making the movement choices it does. I did recently fix a positioning issue with the graphics that caused each circle to be off-center - that will fix the varying offset between the wave and the bullet you can see in the version 1 screenshot. -- Synapse 22:59, 19 June 2009 (UTC)
I have to figure out how you made those nice circles. Mine look like they were drawn on an Apple ][... --Darkcanuck 03:29, 20 June 2009 (UTC)
If you are talking about the circle of the bins, I think it depends on the system. Sometimes I see an anti-aliased circle and sometimes I don't. Very strange. Anyway, I think using g.draw(new Ellipse2D.Double(...)) or an AffineTransform on a Point2D looks much nicer than g.drawOval(). » Nat | Talk » 04:12, 20 June 2009 (UTC)
Haha... just take a look at the robocode preferences, Darkcanuck... there's a little group of settings that makes all debugging graphics look much prettier... Go to "Rendering Options" and turn on your anti-aliasing, or just hit the big pretty "Quality" button in the middle of the tab ;) --Rednaxela 05:09, 20 June 2009 (UTC)

(slaps forehead) I always set mine up for "speed", never tried the other settings... --Darkcanuck 07:40, 20 June 2009 (UTC)
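In case it helps anyone reading later, here is a minimal sketch of the code-side approach Nat describes: forcing anti-aliasing via a rendering hint and using Ellipse2D.Double for sub-pixel centering. The class and method names are mine for illustration, and whether Robocode's own quality settings override the hint may depend on the version.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.Ellipse2D;

class SmoothCircle {
    // Draw an anti-aliased wave circle. Robocode's onPaint(Graphics2D)
    // hands you a Graphics2D, so these calls should work unchanged there.
    public static void drawWave(Graphics2D g, double centerX, double centerY,
                                double radius) {
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        // Ellipse2D.Double takes the top-left of the bounding box, in
        // sub-pixel doubles -- this is what keeps the circle centered,
        // unlike drawOval's integer coordinates.
        g.draw(new Ellipse2D.Double(centerX - radius, centerY - radius,
                                    2 * radius, 2 * radius));
    }
}
```
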


Memory Usage

My rumble client just died due to java.lang.OutOfMemoryError while running synapse.Geomancy 1 vs nat.BlackHole 0.1.11. Yours is the newer bot so I thought you might want to know in case there's a memory usage issue? My client uses 512M of heap space. I'll let you know if it happens again. --Darkcanuck 13:48, 20 June 2009 (UTC)

Nope, BlackHole 0.1.11 really is a memory eater. Right after I finished development, BlackHole 0.1.11 consumed around 600GB. It just has too many bins to surf, which results in really bad movement anyway. I may remove this robot from the rumble since it is worse than OcnirpSNG ;) » Nat | Talk » 14:08, 20 June 2009 (UTC)
600GB? Are you sure you don't mean 600MB? If it's 600GB... I'd really want your computer ;P --Rednaxela 15:54, 20 June 2009 (UTC)
I really mean 600GB. When I finished development, I tried to run it and couldn't. I even used the Eclipse debugger, and when both Robocode and Eclipse froze I decided to calculate the memory usage. The result was 600GB, which made me understand instantly why it wouldn't run. So I commented out most of the buffers and left only the light ones, and it worked. But I still couldn't run both together under -Xmx512M. » Nat | Talk » 16:05, 20 June 2009 (UTC)
I ran Geomancy vs itself, and Java used 50mb of memory. Then I ran BlackHole vs itself, and one of them crashed due to exceeding the memory allotment of 512mb. -- Synapse 16:42, 20 June 2009 (UTC)
A memory singularity?  :) --Darkcanuck 17:36, 20 June 2009 (UTC)

Suspicious Battles

(moved from Talk:Darkcanuck/RRServer)

apv.TheBrainPi 0.5 vs synapse.Geomancy 1 -- Synapse 05:46, 20 June 2009 (UTC)
synapse.Geomancy 1 vs elvbot.ElverionBot 0.3 -- Synapse 05:46, 20 June 2009 (UTC)

Ok, a second battle for Geomancy-vs-ElverionBot (on a different client) came up with a similar result, so you may want to do some local testing to see if there are any problems with that matchup. ElverionBot is a known crasher, so I'm not surprised to see it get that kind of score. TheBrainPi isn't too stable either, so it could be a similar issue? I'll try to force a client to run that pairing again. --Darkcanuck 17:40, 21 June 2009 (UTC)
Do I need to be checking that I handle crashed opponents gracefully? What's the difference from my bot's perspective between a crashed opponent and a dead one? -- Synapse 22:40, 21 June 2009 (UTC)
Crashed bots are immediately disabled by robocode, so they would probably show as still alive but with 0 energy, I believe (just as if they had used up all their energy firing). --Darkcanuck 02:39, 22 June 2009 (UTC)

Out of curiosity, I fired up 1.6.1.4 and set up a battle between Geomancy and ElverionBot. The latter was disabled immediately (it appears to use a function which only exists in later versions of robocode) and then my system went to 100% cpu before java crashed with java.lang.OutOfMemoryError (set at 512MB, went up to 600MB). I repeated the test using one of my bots set to crash with an array index exception. Similar deal, except robocode was able to stop Geomancy before java could crash. Although as I write this it's still stuck at 100% cpu on one core and 600+MB. So I think you have a bug regarding disabled opponents... --Darkcanuck 02:39, 22 June 2009 (UTC)

The bug has been resolved -- I wasn't creating new Enemy objects correctly, so when I saw an enemy for the first time and it was already disabled I was entering an infinite loop. The next release won't have this issue. -- Synapse 08:52, 28 August 2009 (UTC)

Segmentation

I'm adding segmentations to Geomancy, starting with lateral velocity orthogonality. I was reading about Crowd Targeting and will be experimenting with Multiple Choice applied to the top 10% of my segmentations. I'll post more here when I have results! -- Synapse 20:56, 7 September 2009 (UTC)

Backsliding

I'm not sure what broke in the last release (5). I added two segmentations generally considered to be quite useful - orthogonality and bullet flight time. Can someone with a little more experience maybe take a look at the version comparison or at my bot's behavior and suggest some things to focus on? I'd really appreciate it. -- Synapse 20:53, 9 September 2009 (UTC)

To clarify: looking at the comparison, it seems like the bots against which I gained ground were my former problembots - Phoenix, Fermat, Cigaret, Banzai, MirrorMicro, and SHAM.WOW. Why did this change hurt me against so many other bots? -- Synapse 20:57, 9 September 2009 (UTC)

There are a few details that I would want to know before offering much specific feedback:

  • By orthogonality, you mean something like (enemy heading minus bearing to enemy), right? I've had mixed results with this - in Dookious, I settled on just using lateral velocity, while in Diamond, I use abs(velocity) and relative heading. If you already have lateral velocity in there, this attribute might be slightly redundant, but it's an attribute worth trying.
  • I think bullet flight time is better than just measuring distance. Did you already have distance? Are you using them both now?
  • I'm not too familiar with how your stat buffers are setup. Maybe adding more attributes also decreased learning speed? Or if you have distance + BFT and orthogonality + lateral velocity, maybe the redundancy is giving undue weight to some of these attributes?

The comparison details are indeed kind of hard to interpret, but it looks like the bots you scored high against are still mostly unchanged.

--Voidious 15:41, 10 September 2009 (UTC)

For orthogonality I was using abs(sin(enemyheading + enemybearing)), which should have been abs(sin(enemyheading - enemybearing)). Fixing that just got me back the score I was missing. That also explains why adding these segments was mostly a wash - the damage from adding a meaningless axis and the benefit from adding a good one were offsetting each other. The way my stat buffers work, all buffers are kept updated when new information comes in, so adding more axes shouldn't slow down learning at all. Now that orthogonality has been fixed, things should improve nicely. Answering your other question, I don't have a distance segmentation - just bullet flight time. I did get some nice gains from weighting the velocity axis about 12% higher. -- Synapse 18:06, 10 September 2009 (UTC)
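For reference, here are the corrected attributes as standalone math (a sketch; the helper names are mine, angles are in radians, and the bearing is assumed to be the absolute bearing to the enemy):

```java
class MovementAttributes {
    // abs(sin(relative heading)): 1.0 when the enemy moves perpendicular
    // to the line between the bots, 0.0 when moving straight at or away.
    // Note the subtraction -- using + here was the bug described above.
    static double orthogonality(double enemyHeading, double absBearingToEnemy) {
        return Math.abs(Math.sin(enemyHeading - absBearingToEnemy));
    }

    // Signed lateral component of the enemy's velocity along that same
    // perpendicular; the classic lateral velocity attribute.
    static double lateralVelocity(double velocity, double enemyHeading,
                                  double absBearingToEnemy) {
        return velocity * Math.sin(enemyHeading - absBearingToEnemy);
    }
}
```
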

Garbage Collection

I've noticed my bot occasionally skipping batches of turns when my computer is doing something vaguely memory-intensive after matches have been running for a while (usually at about 20 of 35 rounds). Is it bad to call System.gc at the end of my matches to request garbage collection? -- Synapse 21:56, 13 September 2009 (UTC)

Honestly, I think Robocode should be calling System.gc every TICK even (don't do that in your robot though, because that would count against your bot's running time). Personally, I put System.gc in the constructor of my bots for that reason, but at the end of the match should work just as well. --Rednaxela 23:02, 13 September 2009 (UTC)

As far as I know, many robots do have System.gc() at either the start or the end of the round - DrussGT, for example. But I agree with Rednaxela, Robocode should request garbage collection every tick. Right now it runs gc only at the end of every battle =( » Nat | Talk » 15:14, 14 September 2009 (UTC)

The garbage collector is very slow; I think that if Robocode ran the collector every tick the FPS would be 5 all the time. I remember when comparing the performance of OpenGL in C++ vs Java3D: Java rendered big scenes nicely - fast and smooth - except every X minutes when it hung because it was collecting garbage. You can read about gc tuning and try some of that stuff if you want; there are many options you can pass the JavaVM to improve collection. --zyx 15:35, 14 September 2009 (UTC)
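For anyone wanting to experiment with zyx's suggestion, a hedged example of era-appropriate HotSpot options (flag availability depends on your JVM version, and `robocode.Robocode` as the entry point is an assumption about your launch setup):

```shell
# -verbose:gc logs each collection so you can see when pauses happen;
# CMS (concurrent mark-sweep) trades some throughput for shorter pauses.
# Adjust the heap ceiling to match your rumble client's settings.
java -Xmx512M -verbose:gc -XX:+UseConcMarkSweepGC robocode.Robocode
```
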

Large and small values for wavesurfing rolling depth

I think I've realized why Geomancy 7+ lost so much ground compared to Geomancy 6 - changes to rolling depth. Versions 7+ have improvements against the most adaptable, fast-learning opponents, but against simple or non-adaptive targeters their score suffers. Is there some way to combine the benefits of all-battle stat collection with the flexibility of a low rolling depth? Perhaps a measurement I can use to adjust the rolling depth? -- Synapse 01:37, 16 September 2009 (UTC)

Actually, my experience is that low rolling depth works fine against simple targeters, despite that seeming a little counter-intuitive. That said, using multiple stat buffers of varying levels of segmentation and summing them to calculate danger works really well to combine fast and deep learning (not sure if you already do that). --Voidious 01:40, 16 September 2009 (UTC)
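The multi-buffer summing Voidious describes can be sketched like so (a minimal illustration with made-up buffer shapes, not taken from Dookious, Diamond, or Geomancy):

```java
class BufferSum {
    // Sum several stat buffers of varying segmentation into one danger
    // profile over the same GuessFactor bins. Each buffer is normalized
    // first so a deep, rarely-hit segment doesn't drown out the
    // fast-learning lightly segmented buffer.
    static double[] combinedDanger(double[][] buffers) {
        int bins = buffers[0].length;
        double[] danger = new double[bins];
        for (double[] buffer : buffers) {
            double total = 0;
            for (double v : buffer) total += v;
            if (total == 0) continue;   // segment never visited: no vote
            for (int bin = 0; bin < bins; bin++)
                danger[bin] += buffer[bin] / total;
        }
        return danger;
    }
}
```

The surfing then picks the reachable point whose bin has the lowest combined danger; unvisited deep segments simply abstain, which is what gives the fast buffers their early-round influence.
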

I'm so glad to hear you say that - I was planning on summing my n most fit segments as an experiment but I'd been postponing it since that would require a minor overhaul to how a couple minor things are handled. I'll post here when I have results! (I'll probably be all excited and then lose a bunch of rank points like with the past 5 releases but it's the spirit of experimentation, right?) -- Synapse 04:44, 16 September 2009 (UTC)

Results so far: simply summing the best N segmentations doesn't work - the useful information from each segmentation is lost in the sum. Multiplication seems a little better but I think there must be another way to combine these buffers - perhaps for buffers A and B something like C = 1 - ((1 - A) * (1 - B)) which is how you combine probabilities of failure. I'll try it when I get home in 7.5 hours. -- Synapse 21:16, 16 September 2009 (UTC)
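That failure-probability combination, applied per bin, would look like this (just a sketch of the formula in the post above):

```java
class BinCombine {
    // Treat each buffer's bin value as an independent probability of a
    // hit at that GuessFactor; the combination is the chance that at
    // least one estimate is right: 1 - P(both "miss").
    static double combine(double a, double b) {
        return 1.0 - (1.0 - a) * (1.0 - b);
    }
}
```
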

I found that as you add more buffers with different segmentations it seems to make up for having a lower rolling value. I'd also be interested if you find a way of only using the top N buffers to get better results than simply summing all the buffers together - from what I've tried it doesn't help anything but that doesn't seem right, because there must be certain buffers against certain bots that just add noise and not value. I think the biggest issue is deciding which buffers are adding useful information and which aren't. --Skilgannon 09:05, 16 September 2009 (UTC)

Well... SaphireEdge's system for weighting its three buffers (fast, slow, surfer-sim) is very successful, and should be possible to expand to any number of buffers with more conventional non-antialiased/interpolated VCS. Of course, guns have more information than movement to measure these things. Also, I suspect that since anti-aliased/interpolated VCS gives much stronger and more stable/consistent results (compared to a single comparable buffer), it takes less data to judge its strength. --Rednaxela 12:44, 16 September 2009 (UTC)
How are you rating a buffer? Based on the difference between the peaks? Or the sum of the difference between all bins? Or some other method that I'm not thinking of? And I think it's quite true, a gun has a lot more data to work with than movement. Also, unless you want to work with data that is biased towards lowly weighted bins, for movement data you can only update weightings on bullet-hit-bullet events. --Skilgannon 14:34, 16 September 2009 (UTC)
I use something called the crest factor (see Segmentation/Autoselected_Segmentation) - it's the greatest value in the buffer divided by the root mean squared sum of the buffer (square everything, sum it, then take the square root). It's a measurement of how "pointy" the buffer is. The crest factor is multiplied by that segmentation's "fudge factor" (each axis has a fudge factor, they're all 1.0 except velocity is 1.25, and they are multiplied together to get the segmentation's fudge factor) to get that segmentation's fitness. Regarding bullet-hit-bullet, in the movement I mark it the same as a bullet-hit-me event since it tells me exactly where they were firing. -- Synapse 21:10, 16 September 2009 (UTC)
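As a standalone sketch, the crest factor computation as described in that post (the per-axis "fudge factor" multiplication is omitted here):

```java
class CrestFactor {
    // "Pointiness" of a buffer: the peak bin divided by the square root
    // of the summed squares (square everything, sum it, take the root).
    // A flat buffer scores low; a single spike scores 1.0.
    static double crestFactor(double[] bins) {
        double peak = 0, sumSquares = 0;
        for (double v : bins) {
            peak = Math.max(peak, v);
            sumSquares += v * v;
        }
        return sumSquares == 0 ? 0 : peak / Math.sqrt(sumSquares);
    }
}
```
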
In case you misunderstood, Skilgannon was also saying you might want to ignore bullet-hit-me and use only bullet-hit-bullet for evaluating buffers, since bullet-hit-me is dependent on how you move, and will therefore skew the data. (Technically, bullet-hit-bullet is also dependent on how you move and how you fire, but probably not in any important way.) --Voidious 21:39, 16 September 2009 (UTC)
Skilgannon mentioned that once, so I tried ignoring bullet-hit-me when updating my surfing stats (the distancer weights), and YersiniaPestis's score went down - so even if it sounds logical, I think ignoring them is not a good idea. --zyx 01:07, 17 September 2009 (UTC)
I have been trying to figure out what Rednaxela did for a while, but my head's still spinning around ags.muse.gun.crowd.CrowdLearner.getAdjustment(SWave). I'm also curious how you do it. » Nat | Talk » 15:17, 16 September 2009 (UTC)
I use gradient descent set up to efficiently maximize the relative height of the bins that would have hit in the summed buffers. It's very similar to a back-propagation neural net, really. getAdjustment(SWave) determines how much this iteration should move the weight on each buffer, in order to move towards having the summed movement profile look as similar as possible to the real movement profile. Momentum, like in some neural networks, is also used. The easiest way to think of the algorithm I use is as a 3-neuron (one for each buffer) linear-response back-propagation neural network, which operates on multidimensional vectors (one dimension per bin) instead of floating point numbers. It was quite time consuming to correctly tune the adaptation speed/details, because if it's too fast then accuracy is significantly reduced, but if it's too slow then it can't adapt well enough to be useful. --Rednaxela 16:03, 16 September 2009 (UTC)
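A toy version of that idea - my own reconstruction, not Rednaxela's actual CrowdLearner - with one linear weight per buffer, moved by gradient descent with momentum so the weighted sum of the buffers' profiles tracks the observed movement profile:

```java
import java.util.Arrays;

class CrowdWeights {
    final double[] weights;
    final double[] momentum;
    final double learningRate = 0.1;
    final double momentumFactor = 0.5;

    CrowdWeights(int numBuffers) {
        weights = new double[numBuffers];
        momentum = new double[numBuffers];
        Arrays.fill(weights, 1.0 / numBuffers);  // start with equal trust
    }

    // profiles[i] is buffer i's profile over the bins; observed is the
    // real movement profile for the same bins (e.g. where hits landed).
    void train(double[][] profiles, double[] observed) {
        int bins = observed.length;
        for (int i = 0; i < weights.length; i++) {
            double gradient = 0;
            for (int bin = 0; bin < bins; bin++) {
                double predicted = 0;
                for (int j = 0; j < weights.length; j++)
                    predicted += weights[j] * profiles[j][bin];
                // derivative of squared error w.r.t. weight i (up to sign)
                gradient += (observed[bin] - predicted) * profiles[i][bin];
            }
            // momentum smooths the update, as in back-propagation nets
            momentum[i] = learningRate * gradient + momentumFactor * momentum[i];
            weights[i] += momentum[i];
        }
    }
}
```

With repeated training the weight of a buffer whose profile matches reality grows while a noise buffer's weight decays toward zero, which is the tuning behavior described above; the learning rate and momentum constants here are arbitrary.
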

Just a quick question: do you guys also take bullet power into account, meaning a low-power end-of-game bullet counts less than a 'normal' 1.9 bullet? Just for your information, I don't roll my surfing buffer(s) nor my gun buffers. --GrubbmGait 22:05, 16 September 2009 (UTC)

I have a Bullet Flight Time axis, so the fast bullets end up properly categorized. If the enemy fires fast bullets differently from regular ones then that axis will be included in the most fit segmentation. -- Synapse 02:01, 17 September 2009 (UTC)
I don't take bullet power into account when rolling my stats, though that's an interesting idea. I do take bullet power (more specifically, max escape angle and distance) into account when tracking enemy hit rate and Virtual Guns scores. --Voidious 03:41, 17 September 2009 (UTC)

Bug

I've found a serious bug with my segmentation that must be eliminated before the bot has anything resembling competence. Hits are being logged into the wrong segments, causing segmentations with more than one axis to have data added in the wrong place. For some reason this is not causing NPE or out of bounds errors but it must be fixed before any further progress can be made. How did I get this APS with such a bug? -- Synapse 12:10, 24 September 2009 (UTC)

Performance Enhancing Bug? --Nat Pavasant 13:25, 24 September 2009 (UTC)

Perhaps not a PEB, but simply the fact that you are still firing at locations where they have sometimes appeared? :) --Rednaxela 13:39, 24 September 2009 (UTC)

Bug was not what I thought. While I proved to myself that it wasn't there I went ahead and added pathing for second-wave surfing rather than the modified danger summing I had been using. I anticipate good results! -- Synapse 11:10, 25 September 2009 (UTC)
