Talk:DrussGT

From Robowiki

Just going through the algorithm, and I realised that there's a bug in my precise prediction: I'm using the bullet-hit-time of the 'target' point, not the point that I actually reach. This is left over from the days when the only thing my precise prediction checked was whether I could reach the target point. Now, for regular, constant distance surfing this is fine. But the moment I start changing distance my predictions can get a couple ticks off... Basically the less lateral velocity component I have, the more inaccurate my prediction gets. Which is bad against, say, RamBots, and in corner situations. I may have to rethink this algorithm... I've already tested doing an iterative search but it gets way too slow, way too quickly. --Skilgannon 10:47, 27 January 2009 (UTC)

Hi, I just want to know. You once told me that DrussGT doesn't call Math.random() in a real battle, but it does during development. Is the random call during development used to generate all those 100 buffers' slices? Thanks » Nat | Talk » 06:02, 21 February 2009 (UTC)

Again, I'll rework DrussGT and see how it acts if it has 73,728 buffers. (The maximum number of buffers without duplicates :)) » Nat | Talk » 06:05, 21 February 2009 (UTC)

  • My Robocode crashed after initializing that bot with 73,728 buffers! I think it's too much :) » Nat | Talk » 07:18, 21 February 2009 (UTC)

Yes, it uses random() to decide which buffers to use, and which slices for those buffers (ie fine, regular, coarse). Unfortunately I couldn't do every possible buffer, due to memory constraints. So I use random() and make a set of buffers that hopefully covers all the segments fairly evenly. Also, if I used every possible combination I would probably run into problems with execution time while extracting buffers to use and smoothing new hits into the buffers. --Skilgannon 22:45, 21 February 2009 (UTC)

One more question: does it do hit surfing? I went through your code to understand the flattener (it's easier to understand than Dookious's, because my robot is based on the basic surfer, too). I noticed that you only do logHit() on a hit, and do logFlattener() on every wave. Is hit surfing competitive? Or did I miss something in your code? Thanks. » Nat | Talk » 02:21, 22 February 2009 (UTC)

Yes, hit surfing is the primary way of surfing. The flattener is only enabled against top guns. The reason for this is that if we are flattening the whole time, we can never learn where they are shooting, and dodge those areas. For example, against linear targeting, hit surfing can learn to dodge it perfectly, whereas flattener-only will still get hit. The same is also true against GF bots because they will learn that you move in certain ways, but the moment you get hit by them, you know how they think you move, so you can move differently. By rolling your surfing stats quickly you can stay ahead of their stats and actually do better than just creating a flat profile. Only against fast adapting guns is it necessary to enable the flattener. --Skilgannon 19:49, 22 February 2009 (UTC)

Question: why do you always use float in DrussGT? Is it faster than double? » Nat | Talk » 13:51, 6 March 2009 (UTC)

Yes, a float is significantly faster than a double for multiplication/addition, which is what I use the most. It is slower for trig due to having to cast into a double, but with the new FastTrig class I can change that. It also uses half the memory. If I tried using doubles DrussGT would skip quite a few turns, and might also skip turns on initialisation due to allocating twice the memory. --Skilgannon 22:14, 6 March 2009 (UTC)

Well, I just decided to add "float" functionality to my FastTrig class. Interestingly, despite the bulk of the calculation involved being multiplication/addition to get the index, it isn't actually significantly different from the plain double version of FastTrig:

FastTrig init time: 0.00703 seconds
Worst error: 0.000436324725
FastTrig time: 0.520 seconds
Math time: 8.811 seconds
Worst float error: 0.000465920262
FastTrig float time: 0.510 seconds

The difference is slim enough that I don't think it's worth keeping two different versions. I'm just not sure which version to keep. But indeed, floats are nice for speed/memory and I'm already using them in a component of my upcoming bot other than the FastTrig --Rednaxela 23:03, 6 March 2009 (UTC)

I don't think floats are faster than doubles at operation time; if I remember correctly, calculations are done in higher-precision registers anyway. The main time difference comes from the memory bus: a float (32 bits) can be read and written in a single memory access, while doubles (64 bits) are not. Some 64-bit architectures can move the whole 64 bits at once and have no real performance hit, while some use only 48-bit transfers, in which case there is still a small difference. It's been a long time since I have read any of this, but I think that is the reason. --zyx 06:07, 7 March 2009 (UTC)

I'd say it really depends. For some applications before, I've noticed as much as a 10% to 20% difference using floats, even without being on a 64-bit architecture at all. Though for FastTrig, float doesn't give a significant performance benefit. The reduced memory usage, though, is of course undeniable. --Rednaxela 06:20, 7 March 2009 (UTC)

Well, the memory is of course halved when using floats :). That can also affect performance from the cache-fetching point of view, and honestly most people don't need the extra precision that comes from using doubles. What I mean is that there isn't any real runtime performance difference that should make people use float over double; if it is memory you need to optimize, float is the way to go. Otherwise I think people should use whichever they feel comfortable with. I use double in a fresh new application just because doubles can represent all 32-bit integers exactly. And when I'm working with an API (like Robocode) I use the same as the API, which ensures that I will consistently behave the same as the API. I wouldn't consider it PrecisePrediction if it uses floats, because there will be differences when Robocode handles the same situations with doubles; I'd think of it as an approximation to it. When working on something as non-deterministic as Robocode, maybe it's not even good to have more bits, but I'm a precision freak :/, that's why I'm currently using 628,318 divisions in FastTrig --zyx 19:04, 8 March 2009 (UTC)

  • Good point about the cache fetching. Also, as a precision freak, I take it you don't like VCS much ;P --Rednaxela 19:18, 8 March 2009 (UTC)
  • Hehehe yes, I had been looking forward to making a DC bot for a long time, but I admit that VCS is a very good technique, and it certainly was easier to learn the heavy stuff of GF and WS using simple VCS. -- zyx
    • Zyx, are you a precision freak? I think not. I use 7,200,000 divisions in FastTrig in my Pallas! It takes about 1 second to load =) It also defines this in its FastMath class:
public static final double PI = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986...D;
// actually it's over 2,000 digits of precision!
 :-D So who is the real precision freak? I feel more comfortable with digital (one mistake means corrupted data), not analog (which can have some distortion). I always add asin() to my nano linear targeting. If you are a precision freak like me, you should use Anti-Aliased VCS from Rednaxela. » Nat | Talk » 06:01, 9 March 2009 (UTC)
  • Sheesh, that would take 27.5MB of memory on FastTrig alone, that's outright wasteful. Actually, my anti-aliased VCS, while it has far lower data distortion than conventional VCS, isn't as low-distortion as log-based techniques like DC. If you made a spectrum where one side was fast/distorted and the other was slow/accurate, then anti-aliased would fit somewhere in the middle, with DC towards slow/accurate, and conventional VCS (single buffer) towards fast/distorted. So really, no, anti-aliased VCS isn't for a real precision freak even if it's more precise than other VCS. --Rednaxela 07:11, 9 March 2009 (UTC)
    • Then having 628,318 bins, plus 7,200,000 segments will give a DC-like result with anti-aliased VCS =D » Nat | Talk » 07:59, 9 March 2009 (UTC)
  • Well, about the number of divisions, you are winning :p. I may push it higher but I haven't seen any real reason to do so yet. As for PI, I use the Java Math.PI, but honestly I don't even know how precise it is (I don't really know Java), I just believe it is OK. But instead of your PI constant I would use:
public static final double PI = Math.acos(-1);
You can't get better than that, it will be as accurate as your compiler (or JVM in this case) can be :-). About the anti-aliased VCS maybe I will add it to Newton someday, but in the mean time my time will be spent on DC. -- zyx
  • Isn't it Math.acos(0)*2? Anyway, Math.PI has 20 digits of precision. Mine has 2,000. 100 times more! But a Java double can't handle it =). I think Java's trig functions depend on the OS, so predefining it is a good idea. I'm now planning to preload 720,000,000 divisions into FastTrig, so my robot jar will weigh in at 300MB. Wait! Just kidding! I am not that crazy... A note about your bot: don't stick with Newton and that DC bot (I can't remember its name), create a new robot, based on the old bot if necessary. » Nat | Talk » 13:54, 9 March 2009 (UTC)

I would like to point out that your surfing can only be as accurate as the enemy gun. Thus, if the enemy has a very low number of bins (e.g. I have seen 25), keeping PI to 2000 places will make very little difference. On the other hand, this gives me an idea: if you could figure out the granularity of the enemy's shots, and find that it is quite high, there could be 'safe spots' at long distances where the enemy doesn't fire, i.e. between bins. In this situation having that precision may help.

To get back to the original point, the reason I switched DrussGT to floats (after going through my private release notes) is that, during testing, it would often crash due to an exception of some sort. Robocode doesn't (or didn't) release the memory from all those buffers when I restarted the match, so I would often have to restart Robocode every 30 minutes or so from running out of memory. Changing to floats doubled the time between each restart of Robocode. Also, with the number of buffers I'm keeping, and thus the sheer number of floating point multiplications that get done every tick, having floats instead of doubles means that much less memory is moved around, accessed or modified. For example, if, in the same tick, I both get hit by a bullet and sense a bullet being fired, without the flattener, just from the VCS I'm doing around 75,000 float operations, over 14,000 of which are writes to an array. Added to this I still have to do several hundred precise predictions, and it's easy to see how DrussGT could start skipping turns, even if everything just took a *little* bit longer. --Skilgannon 18:48, 9 March 2009 (UTC)

  • What you say about safe spots between bins sounds like a good exploit point for some guns, and seems doable without any rocket science theory. --zyx 02:35, 10 March 2009 (UTC)
  • Hey, doesn't the System.gc() you call handle that? » Nat | Talk » 12:07, 10 March 2009 (UTC)
    • Not if the bot crashes before I can call it. To prevent skipped turns I always call it at the end of the round, or in the onDeath handler. So if my bot crashed, it would never get called. Besides, my (crashed) bot was still holding a reference to the buffers, so they wouldn't get cleaned up anyways. --Skilgannon 19:53, 10 March 2009 (UTC)

Hey Skilgannon, I think if you restructured your source code, it would be easier for you to integrate new things and easier for me to understand your code :-) I always imagined DrussGT being as clean as Dookious before I read the DrussGT code, and I was disappointed. If you can clean up your code, that would be the best thing ever. » Nat | Talk » 00:36, 20 March 2009 (UTC)

Quote"Pleeaase don't just take my bot, tweak it and release it under another name. Rather tell me about the changes, and I'll give you credit.": I've improved DrussGT 1.3.3 Virtual Gun Rating a bit, here. It base on Dookious VGun. I think it perform better, at least against Shadow. » Nat | Talk » 02:52, 20 March 2009 (UTC)

Cool! Nobody's ever really contributed to my code before, so I'm not sure how to go about this. I've read through your changes, they were mainly rolling averages instead of a straight sum for the VG score, and modifying the values that different guns get chosen at, right? While I very much appreciate your effort, I'd actually like to re-code it in a way that makes more sense to me. So I'll give you credit in the /Version History of my next version. I actually had some other ideas on how to ensure that the AS gun doesn't get chosen against bots that don't surf, and it was those I was referring to. I'm not sure how much adding rolling averages to a VG will help, since, unlike GrubbmGrb my guns all learn and adapt. But perhaps they do. What I would be worried about is one gun getting a lucky streak and then me using a gun that is actually weaker. But your rolling averages are quite deep, so I don't think that will really happen. Also, about that cleanup, there is very little that I still want to add to DrussGT, most would be bugfixing. It was never designed to be something that is easy to read, as long as I can understand it, and I can =) --Skilgannon 05:42, 20 March 2009 (UTC)

Right, 2500 is the depth which Dookious uses. Also, the 0.22 and 0.26 are from Dookious, too. I don't know about hitting the surfer much, but I think if you lower your VGun rolling average depth, you will do better against surfers. Shadow usually gets hit a lot by your AS gun in the first 3 rounds, then the PM gun, then the DC gun for the rest. I don't know about random movement, but by lowering my VGun rolling depth to 3 in BlackHole, I got more score from surfers... About code cleaning, actually I understand it, but I'm too lazy to scroll up and down to find methods :-) Which editor do you use? I think it is not Eclipse because of the messy indentation, and it is not Robocode's editor, since Robocode's editor shows that a large part of DrussGT.java is a comment! (it doesn't understand // */) By the way, another minor change that I want you to keep is to draw the current gun and flattener status, as it is very helpful not to have to watch the console while watching the robots fight. » Nat | Talk » 08:11, 20 March 2009 (UTC)

Hey, one more! Please update this robot's page! It is getting outdated. The AntiSurfer gun is already there, but it's still in the "What's next for this robot" section. » Nat | Talk » 11:51, 20 March 2009 (UTC)

Against random movement (which makes up the majority of the rumble) having a lower rolling depth for your VG will not help, because their movement doesn't change, so one gun (probably DC) will be strongest. For an editor I am using jGrasp - it is lightweight and runs in Java, which makes it easy for me to use on Linux, as well as keep running while also having a browser and robocode open. It indents things very nicely, it's just that eclipse has different indentation rules. Try opening with Wordpad or another text editor, it looks fine =) --Skilgannon 17:17, 20 March 2009 (UTC)

Hey, I looked through your new code (1.3.4) and noticed this:

if (robot.getRoundNum() < 1
    || (DCHits > 0.25*bp && DCHits > PMHits*0.98)
    || (DCHits*bp > 0.16 && DCHits >= Math.max(PMHits*0.95,ASHits*0.9))
    || DCHits >= Math.max(PMHits, ASHits)){

should this be


if (robot.getRoundNum() < 1
    || (DCHits > 0.25*bp && DCHits > PMHits*0.98)
    || (DCHits > 0.16*bp && DCHits >= Math.max(PMHits*0.95,ASHits*0.9))
    || DCHits >= Math.max(PMHits, ASHits)){

? If you multiply DCHits by bulletPassed (bp), which are both integers, you will get something really large that is always over 0.16. » Nat | Talk » 00:18, 24 March 2009 (UTC)

Wow. Yes, that could be a problem =) Programming in the evening, after a day at university which followed a night with minimal sleep, doesn't seem the best idea =) It looks like 1.3.4 lost quite a bit of score against weaker bots (as expected) - I'll release 1.3.5 right away. --Skilgannon 11:56, 24 March 2009 (UTC)

Hmm. It seems there is a problem somewhere. I wish I could do diffs against old versions so I could see where problems are. I'll probably end up reverting back to 1.3.3 and re-applying the changes I made :-/ I guess I should let them at least stabilise first though... --Skilgannon 20:42, 24 March 2009 (UTC)

Hmm? Diffs against old versions shouldn't be hard considering the source code is in the jar file, at least with a *nix system. --Rednaxela 20:50, 24 March 2009 (UTC)

No, I mean score diffs with the new rumble. Development is going to be a completely different game without them... --Skilgannon 20:53, 24 March 2009 (UTC)

Oh the score diffs... yeah.... lack of those is a really big pain... --Rednaxela 21:09, 24 March 2009 (UTC)

Sorry about that, haven't had much time recently for more rumble server development. But comparisons are the next feature to be added. --Darkcanuck 03:15, 25 March 2009 (UTC)

ERRRRRGH!!! 1.3.5 is still below Dookious!!! I think you might have broken your DC gun somewhere; try releasing another version with only the DC gun (or always set DCWave.onlyDC = true) and see. A suggestion on gun disabling: I think you should check how many 'ticks' each gun gets to perform. An example case: Shadow. It gets around 13% for every gun, but sometimes DC goes up to 15% and the others to 11%, so the other guns get disabled, but sometimes AS gets 16%, PM 15% and DC 10%! This means Shadow usually squeezes your ratings together, so you should count the time each gun operates instead of its current rating. Hope that's clear enough. » Nat | Talk » 09:45, 25 March 2009 (UTC)

I actually think there might be a problem in the DCWave.onlyDC, it might be throwing an exception somewhere because I set the AS bins and the PM string to null and somewhere is still accessing them. I'll figure it out when I get home... --Skilgannon 11:44, 25 March 2009 (UTC)

  • I've read through the code and found no references to ASBuffer or PMData that are not enclosed in if (!DCWave.onlyDC). There are 2 logASBuffer calls that are not enclosed, but they reference currentASSegment, which doesn't get cleared. (Clearing an array does not clear the reference.) Anyway, I've run a 35-round test against TheArtOfWar and GrubbmGrb and found no exceptions being thrown. » Nat | Talk » 12:12, 25 March 2009 (UTC)

I think your PM gun is broken somewhere; here is a gun rating result for GrubbmGrb:

DC gun: 50.0
PM gun: 0.0
AS gun: 27.272727272727273

That's the first round, where GrubbmGrb does Stop And Go. » Nat | Talk » 12:31, 25 March 2009 (UTC)

Voidious is so good he took the crown back without a single line of code changed. --zyx 02:02, 26 March 2009 (UTC)

... and he will lose his new throne faster than last time, once Skilgannon reverts back to 1.3.3 ... » Nat | Talk » 02:27, 26 March 2009 (UTC)

Skilgannon, what made 1.3.4/5 buggy? Was it my improvement or the gun disabling? If it was my improvement, then I'll change mine in BlackHole :-) » Nat | Talk » 09:21, 26 March 2009 (UTC)

I've actually got no idea =) I just reverted to 1.3.3 and tried using a different method for changing the VG rules. To me it makes more sense to vary the rules based on how much data you have, because otherwise the main gun in your VG might just have a high/low hitrate due to being lucky in early rounds. I'm guessing it was neither of the things you mentioned, I might have been testing late at night and changed something that I forgot about. If you want, you can diff the files, but I don't really feel the need to, now that I've reverted whatever it was =) --Skilgannon 09:29, 26 March 2009 (UTC)

Diff:

  • You don't normalize BFT in 1.3.5
  • DCWaves.ANGLE_SCALE was increased from 24 to 128

Do these mean anything? » Nat | Talk » 09:39, 26 March 2009 (UTC)

  • The first was the bug that made the pattern matcher not work... the bullet flight time is required in the PM to know how far forward in the log we should start looking for matches. It would find the match starting from 0, which will always match because that is the substring it got the data from. The ANGLE_SCALE is also pattern matcher related; it adjusts the granularity of the deltaHeading that gets kept. So if you want to rebuild accurately you should have a high ANGLE_SCALE, but then you will get fewer matches. --Skilgannon 15:14, 26 March 2009 (UTC)

But I really want you to keep the graphical part and "Switching to ..... gun". The wording "... gun enabled" is somewhat weird, since you don't really enable the gun. If you enabled it, you would have to disable it before enabling another gun :-) » Nat | Talk » 09:42, 26 March 2009 (UTC)

OK, yes, I'll change that for the next version =) --Skilgannon 15:14, 26 March 2009 (UTC)

PL King

Has anyone noticed that DrussGT 1.3.6 has successfully dethroned Shadow as PL king! And it beats Shadow with 50.10% after 6 battles! Congratulations! » Nat | Talk » 03:53, 27 March 2009 (UTC)

And it successfully defeated Shadow in a 1000-round battle! Congratulations again, Skilgannon, the one who defeated Shadow! (I think you're the first, not sure.) Keep up the good work! Don't let Shadow take your throne back! If you finish your decent melee surfing and decent melee gun, try to kill Shadow like Shadow did to SandboxDT, and I'll kill you again with my new bot :-) » Nat | Talk » 11:05, 27 March 2009 (UTC)

Ahh, I forgot this thing:

DrussGT-Shadow-Score.jpg

I think some older version could already beat it, since it always uses the DC gun, and the DC gun hasn't changed since 1.3.1 » Nat | Talk » 11:09, 27 March 2009 (UTC)

It's not very much of a margin, and having the other guns may help in the beginning, making Shadow's movement adapt into being something that is easier for the DC gun to hit later. If I can find the time I'll get moving on that melee stuff. What I want eventually is a bot that can adapt fluidly between any number of bots... --Skilgannon 11:33, 27 March 2009 (UTC)

  • If you are more familiar with one-on-one, thinking of melee as one-on-one with no radar lock (or an infinite lock) will make it easier than starting development from melee. :-) » Nat | Talk » 06:08, 28 March 2009 (UTC)

As far as I know, no robot has ever defeated Shadow in a 1k-round battle. Also, DrussGT 1.3.6 is the first bot to beat Shadow 3.83c (see Shadow's ranking detail page, sorted by score)! Even Phoenix gets ~42% (500 rounds) and Dookious ~38% (100 rounds). Actually, DrussGT kept the rating difference at about 500 - 2000 points for the whole battle! » Nat | Talk » 12:09, 27 March 2009 (UTC)

It's true that 51% to 49% is not a big win, but DrussGT is now 1st in PL, having barely beaten Shadow. So congratulations are still in order. --zyx 17:27, 27 March 2009 (UTC)

One thing that amuses me about version 1.3.12 is that you use Red's tree in your gun but you still use Sim's tree in your movement =) » Nat | Talk » 11:08, 5 September 2009 (UTC)

I need to update it eventually but I'll wait until I have something big to change in the movement before I bother messing with the movement code and then retesting to make sure nothing is broken =) Also, the performance hit isn't that big because it's only 3 dimensions and it only has the scans from actual enemy bullet powers. --Skilgannon 11:30, 5 September 2009 (UTC)

I'm curious - what do you use the tree for in your movement? I thought DrussGT's movement was pure VCS. --Voidious 16:32, 5 September 2009 (UTC)

I have a tree to predict what bullet power the enemy will use, so that I can do Gunheat Surfing, ie. start surfing 2 ticks before I actually detect a wave. I've made a version with Rednaxela's tree in there now and a slightly cleaner/more pluggable structure, I'm just running a speed-optimized tweak through the MC2K7 to make sure it doesn't hurt my score before I release. --Skilgannon 17:11, 5 September 2009 (UTC)

Movement Prediction

From User talk:Nat/Free code, I finally got it, except the line:

if(Math.abs(acosVal) <= 1){

I don't know what it does; can you explain it please? » Nat | Talk » 13:12, 8 June 2009 (UTC)

This is just a check to make sure that acos doesn't throw an exception, because acos only takes an argument between -1 and 1. It shouldn't really be possible, but if a prediction went wrong or got fed funny data I don't want my bot crashing =) --Skilgannon 16:22, 8 June 2009 (UTC)

Thanks. » Nat | Talk » 16:38, 8 June 2009 (UTC)

Is your prediction method accurate? I mean, you set the remaining distance in futureStatus() to the distance between the current point and the destination, but actually you will turn and move in a little curve, which means that you will end up a bit short of where you want. » Nat | Talk » 13:34, 6 September 2009 (UTC)

Yes, that would be a problem if every tick I simply subtracted the velocity from distanceRemaining. However, instead I use the Cos Rule to calculate what the actual distanceRemaining value is each tick, even if I'm not moving directly towards the enemy. You had me worried there for a moment =) --Skilgannon 13:46, 6 September 2009 (UTC)
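(For reference, here is a minimal sketch of that cosine-rule update. The names are illustrative, not DrussGT's actual code: oldDistance is last tick's distance to the destination, moved is the distance covered this tick, and angleOffGoal is the angle between the movement direction and the bearing to the destination.)

static double nextDistanceRemaining(double oldDistance, double moved, double angleOffGoal) {
    // law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C), so the new distance to the
    // destination stays correct even while the bot turns along a curve instead of
    // driving straight at it
    return Math.sqrt(oldDistance * oldDistance + moved * moved
            - 2 * oldDistance * moved * Math.cos(angleOffGoal));
}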

How do you test?

Back when I was last focusing on 1v1 MegaBot development, I remember having many phases in my testing style... For a while, I would run 1-2 seasons against a field of 250+ bots. Later, I recall just always having an intuitive sense of how to test a certain change, or not even testing much at all. I've been using various test beds for Diamond, but they are really just giving me a very general ballpark result. If you wouldn't mind lending some insight, I'm curious, what kind of testing do you use for DrussGT these days? (TCs are pretty reliable for gun changes, so I guess I mainly mean movement, but advice on either or both is welcome. =)) --Voidious 16:03, 1 September 2009 (UTC)

I don't think Skilgannon tests his robot, at least he didn't when he got this 1st position » Nat | Talk » 16:14, 1 September 2009 (UTC)

I mostly try to up my scores against my worst problem bots, usually running 4-5 matches for each tweak to get an idea if it helps. Afterwards I'll run a match or 3 against 1) RaikoMicro to test that I haven't broken anything against simple GF 2) DoctorBob to test that I haven't broken anything against LT and 3) Shadow to test against top-bots, and because it runs fast =) However, I find that I get my best improvements by fixing things that I know are broken or by adding new features, for instance right now I've been working on skipped turns because I know that has been a problem. --Skilgannon 17:44, 1 September 2009 (UTC)

Cool - thanks for the feedback, it's much appreciated. =) --Voidious 21:00, 1 September 2009 (UTC)

Imaginary Wave Surfing

When the imaginary gunheat drops to zero, you fire the imaginary enemy wave, then you recalculate the precise prediction and choose a new destination. But when you detect the energy drop, you remove the imaginary wave to use the real wave, and you recalculate all the dimensions/precise prediction/safe destination. So does it help, given that you recalculate everything? » Nat | Talk » 06:55, 6 September 2009 (UTC)

Yes, it still helps, because I get an extra 2 ticks worth of reaction time =) The only reason I recalculate is because there are small things that I can't predict at the time the imaginary wave is fired, like what distance segment will be used for next tick based on my next location. Also, if the bot doesn't fire the moment their gunheat is 0 (instead they wait for their gun to aim, like Shadow or Dookious) then the gunheat wave will be fired at the wrong time anyways, so definitely needs to be recalculated. I should probably add in a condition so that I don't fire gunheat waves against bots that don't fire the moment their gun is cold, but I don't want to 'fix' what isn't broken =) I'll probably add something eventually, but for now it all works fine =) --Skilgannon 11:41, 6 September 2009 (UTC)
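(A rough sketch of the gunheat tracking behind such imaginary waves, assuming the default cooling rate and an enemy that fires the instant its gun is cool. The class and method names here are illustrative, not DrussGT's.)

import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;

public class GunheatWaveSketch extends AdvancedRobot {
    double enemyGunHeat = 3.0;   // both guns start each round at heat 3
    double lastEnergy = 100;

    public void onScannedRobot(ScannedRobotEvent e) {
        double coolingRate = getGunCoolingRate();   // 0.1 by default
        enemyGunHeat = Math.max(0, enemyGunHeat - coolingRate);

        double drop = lastEnergy - e.getEnergy();
        if (drop > 0.09 && drop < 3.01) {
            // real wave: the enemy fired ~1 tick ago, and its gun heat rose by 1 + power/5
            enemyGunHeat = 1 + drop / 5;
            // ...swap the imaginary wave for the real one here...
        } else if (enemyGunHeat <= 2 * coolingRate) {
            // gun will be cool within ~2 ticks: start surfing an imaginary wave now,
            // using a predicted bullet power
            // surfImaginaryWave(predictedPower);
        }
        lastEnergy = e.getEnergy();
    }
}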

Also RougeDC, which was the first bot to implement this, does have that check to only do so if they fire exactly when gunheat is 0 at least some percentage of the time. It also has a kD-Tree to predict the firepower they'll use. Also, by the sound of it Skilgannon, your distance segment may be slightly incorrect. Surfing dimensions should always be calculated from the perspective of what the enemy would see when aiming. This means your distance segment should be coming from your location 2 ticks before you detect the energy drop, and the location you scan them to be 1 tick before you detect energy drop, to completely accurately account for scan delays. Anyways, yeah, I find that surfing gunheat waves particularly helps when dealing with close-range surfing (i.e. dodging rambot bullets) --Rednaxela 13:51, 6 September 2009 (UTC)

I was going on (imperfect) memory there; the distance segment is fine =) I've gone over the code, and it seems the reason I recalculate once I get a real wave is threefold: 1) in case I predicted the wrong bullet power, 2) in case 'they' move towards/away from me and it puts me in a different distance segment, and 3) so I have an accurate position for the center of the wave. That last one is the most important, I think, although there is a maximum error of around 2 pixels. Another point is that if the bullet power is off, the bullet flight time buffer will be off.

It would be easy enough to check that the bullet powers, the 'buffer retrieval indexes' and the location are all the same and just mark the wave non-imaginary and return. However, if even 1 of those is different, the movement predictions all need to be re-run, and because it's almost impossible to say how much the enemy will be turning (due to orbiting, wall smoothing etc), I'm guessing the predictions will be re-run almost every single time. --Skilgannon 14:29, 6 September 2009 (UTC)

Hmm... 1) and 3) are real problems, but I can't see why 2) would happen. Since the enemy fires based on data from a tick before they fire, 2 ticks before we detect the energy drop, the distance segment won't change. I think you don't need to recalculate all your buffers, just the precise prediction, because it is still the same segment. » Nat | Talk » 14:47, 6 September 2009 (UTC)

Hmm, yes, all I need to check is that the bullet power was correct so that the BFT indexes are all correct, and that the wave was fired in the correct tick, and I shouldn't need to re-calculate the buffers, just the movement predictions. Thanks for bringing this to my attention =) I'll do some testing and put this in the next release. --Skilgannon 15:00, 6 September 2009 (UTC)

I forgot you have a BFT segment. So if the bullet power is correct it yields the same segment, and you don't need to recalculate/sum all those 150+ buffers (with the flattener). It should give you a few less SkippedTurns. Just FYI, 1.5.0 skipped only 16 turns in 1000 rounds (while Shadow skipped none =() » Nat | Talk » 15:16, 6 September 2009 (UTC)

Well of course Shadow skips none, it's by far the fastest bot that ranks anywhere near where it ranks, even before using my new tree. I'm yet to see a faster one :) --Rednaxela 15:51, 6 September 2009 (UTC)
You're forgetting Ascendant - one day DrussGT will execute as fast as A =) (in my dreams) --Skilgannon 16:12, 6 September 2009 (UTC)
Only if you cut down your precise prediction and buffers =) Another FYI: Robocode crashed (OOME) at round 1978 and Shadow still hadn't skipped even a single turn! Also, this is with 3.83c, i.e. not using Red's new tree. » Nat | Talk » 16:37, 6 September 2009 (UTC)
I bet CassiusClay is faster, too, and only a few spots down. --Voidious 17:53, 6 September 2009 (UTC)

I might be completely wrong, it is just a thought of mine, but I believe Druss's gun and movement are each pretty fast separately. I mean, Dookious's and Phoenix's movements are slower, or at least they take longer to run when I'm testing, and Druss's gun isn't that slow. I think the problem is when they are put together. If I am not wrong, Druss doesn't predict its movement all the time, and that's what makes it quite fast, and it probably doesn't search for a firing angle until the gun is cool enough, so I suppose the skipped turns happen when it predicts its movement and firing angle at the same time. Perhaps you could avoid skipping turns by trying not to do the two things at the same time, like delaying fire if it is deciding where to move: just create a variable isDoingMovement that is set to true when it predicts its movement, and then use !isDoingMovement as a condition for predicting the firing angle. Of course, this may just be complete nonsense and madness, I never took the time to study Druss's code, so my hypothesis might be wrong, and I'm not sure about the impact of such changes on the bot's performance, but at first sight it seems it could help. --Navajo 19:29, 6 September 2009 (UTC)

I actually have tried this =) It seems what was causing me the most skipped turns was my logging of hits, I optimized that code a bunch and now most of the skipped turns are gone. It was because I had to access the value in the bin, multiply it by the depth of the rolling average, add it to the new value in the bin, then divide by the depth plus 1. This was for each and every bin in each and every buffer, so in total 10000 times. The biggest improvement was making a value which was 1/(depth + 1) so that I could multiply instead of divide for each buffer, so it eliminated 99% of divisions (ie. 1 division per buffer instead of 100) because divisions are much slower than multiplication. The next step was assuming that outside of 20 bins from the hit-point I was just smoothing to 0, which simplifies the equation that can be used to just a multiplication. It was these that made the biggest difference, because even if I tried putting the movement and gun on different turns, the movement would still skip 2-3 turns by itself. --Skilgannon 11:27, 7 September 2009 (UTC)
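(A sketch of those two optimisations, with my own names rather than DrussGT's: depth is the rolling depth, bins is one buffer, and hitBin is the bin where the bullet actually hit.)

void logHit(float[] bins, int hitBin, float depth) {
    float inv = 1f / (depth + 1);   // one division per buffer instead of one per bin
    float decay = depth * inv;      // factor applied to the old value in every bin
    int lo = Math.max(0, hitBin - 20);
    int hi = Math.min(bins.length - 1, hitBin + 20);
    for (int i = 0; i < bins.length; i++) {
        if (i < lo || i > hi) {
            // far from the hit the smoothed-in score is ~0, so the update
            // collapses to a single multiplication
            bins[i] *= decay;
        } else {
            float score = 1f / (1f + (i - hitBin) * (i - hitBin));   // example smoothing kernel
            bins[i] = bins[i] * decay + score * inv;
        }
    }
}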

Well, I don't know what you use to log hits in Druss, but in my bot I calculate a score for each bin according to how distant it is from the bin where it was hit. What caused it to be slow was that it would iterate over all buffers and in each buffer it would decide the score to log on each bin and then roll the stats into the bin, but since it was the same hit and the buffers had the same number of bins, I realized it was just calculating the same thing over and over. My solution was to create a stat buffer manager that calculates the score for each bin and store it in an array that is passed as an argument to the buffers to log the array instead of the hit. This way, the stat buffers only have to roll the scores into the bins according to their own configuration, avoiding recalculating the same thing over and over. Note that the more buffers you have the more effective this is, because you would be avoiding a bigger number of unneeded operations. Again, I don't know if you already do this or do it in another way that isn't as slow as the one I first described, but if you do this can help a lot. --Navajo 13:16, 7 September 2009 (UTC)
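(A minimal sketch of that shared-score-array idea, assuming all buffers use the same number of bins; the names and the smoothing kernel are illustrative.)

void logHitToAllBuffers(float[][] buffers, int hitBin, float depth) {
    int binCount = buffers[0].length;
    float[] scores = new float[binCount];
    for (int i = 0; i < binCount; i++) {
        scores[i] = 1f / (1f + (i - hitBin) * (i - hitBin));   // computed once, shared by every buffer
    }
    float inv = 1f / (depth + 1), decay = depth * inv;
    for (float[] bins : buffers) {
        for (int i = 0; i < bins.length; i++) {
            bins[i] = bins[i] * decay + scores[i] * inv;       // each buffer only does the roll-in
        }
    }
}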

I already do exactly what you've described =) I've got 100 buffers, so it makes quite a big difference! --Skilgannon 13:26, 7 September 2009 (UTC)

Pattern Matching Gun

You use Waylander's PM gun in DrussGT, right? How about using zyx's new Sequential Prediction? I believe it would gain you some more nanoseconds =) --Nat Pavasant 08:27, 7 October 2009 (UTC)

Between tons of work, a new girlfriend, and it being my birthday today, I don't really have time for that at the moment. But I think I'll get around to it eventually =) I actually had a similar idea a while ago but never got around to implementing it, so I'll do that before I look too closely at zyx's method =) --Skilgannon 08:34, 7 October 2009 (UTC)

Happy Birthday! (mañana == tomorrow?) --Nat Pavasant 10:50, 7 October 2009 (UTC)

Happy Birthday to our illustrious King! =) --Voidious 15:24, 7 October 2009 (UTC)

One year older, one year wiser and still virtually impossible to beat, happy Birthday ! --GrubbmGait 17:43, 7 October 2009 (UTC)

Happy birthday indeed! Best wishes with that busy life there :) --Rednaxela 19:13, 7 October 2009 (UTC)

Take a guess what I'm going to say... =) « AaronR « Talk « 23:16, 7 October 2009 (UTC)

Chasing DrussGT

(continued from Talk:DemonicRage#DR3.14_-_DR3.15...)

...Keep up the good work on DR while I chase DrussGT. =) --Voidious 21:58, 8 May 2010 (UTC)

Can't have you catching DrussGT now, can we? =) I've been really bogged down with work lately, but exam time is coming and that tends to be the time when my robocoding gets the most progress done. I should have time for those tree weighting experiments I've been wanting to do. You've been doing some good work with Diamond, although it seems like mostly tweaks --Skilgannon 10:13, 9 May 2010 (UTC)
Yeah, tho I'm close enough to taste it, I also know it won't be easy. (Ask Phoenix. :-P) You're right it's been all tweaks lately, but it's felt good to polish/simplify a lot of stuff. And the more I simplify, the easier it is to tweak. Beyond tweaks, I've still got Imaginary Gunheat Waves to code up, plus some other issues for which I'm still brainstorming. Still, I'd probably trade some APS for better performance against Shadow... --Voidious 17:46, 9 May 2010 (UTC)
Well... Speaking of Phoenix... If you want an 'easy' way, there is always this approach... ;) --Rednaxela 19:12, 9 May 2010 (UTC)
Good point, though I'd need to come up with a way to preload a DC gun. =) --Voidious 19:51, 9 May 2010 (UTC)
Well I presume you mean a way that doesn't take massive amounts of space... I've thought about that before and have several ways ;)... Here's one that should work well: take the list of observations, run a k-means clustering algorithm on them, with a moderately large value of 'k'. Within each cluster, find the best guessfactor. Record this guessfactor as the value, with a location matching the cluster center. This method will make a best-effort attempt to 'compress' the DC gun data such that the single nearest neighbor will give a result closely approximating what the full algorithm did before, and clustering means that it will automatically take the most "interesting" regions. Essentially, this is equivalent to the "SuperNodes" concept used with VCS targeting, though it will behave slightly differently in practice. One further improvement would be to, instead of taking a single guessfactor per cluster, perform a 1D clustering of the guessfactors within the cluster, and for each of these guessfactors, recalculate a new center. Perhaps alternate between data-based clustering and guessfactor-based clustering. Assign a 'weight' to the stored data point corresponding to the number of points. Those improvements will make little impact if it uses pre-loaded data only, but will make it easier to learn new data on top of. Actually, I've been pondering the idea of alternating between clustering on input and on output before, as a learning method that is like clustering but better suited to classification problems.... I wonder if I've given away too much of my plans for my next attempt at a novel gun... --Rednaxela 21:48, 9 May 2010 (UTC)
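(A bare-bones sketch of that compression scheme. Everything here is illustrative: the observation format is just an attribute vector plus the guessfactor that would have hit, and each cluster keeps its mean guessfactor rather than a properly clustered "best" one.)

import java.util.Random;

public class GunDataCompressor {

    // attrs[i] is the i-th observation's attribute vector, gfs[i] the guessfactor that would have hit
    public static double[][] compress(double[][] attrs, double[] gfs, int k, int iters) {
        int n = attrs.length, dims = attrs[0].length;
        Random rnd = new Random(1);
        double[][] centers = new double[k][];
        for (int c = 0; c < k; c++) {
            centers[c] = attrs[rnd.nextInt(n)].clone();   // initialise centers from random observations
        }
        int[] assign = new int[n];
        for (int it = 0; it < iters; it++) {
            for (int i = 0; i < n; i++) assign[i] = nearest(centers, attrs[i]);
            double[][] sums = new double[k][dims];
            int[] counts = new int[k];
            for (int i = 0; i < n; i++) {
                counts[assign[i]]++;
                for (int d = 0; d < dims; d++) sums[assign[i]][d] += attrs[i][d];
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0)
                    for (int d = 0; d < dims; d++) centers[c][d] = sums[c][d] / counts[c];
        }
        for (int i = 0; i < n; i++) assign[i] = nearest(centers, attrs[i]);   // final assignment

        // one compressed record per cluster: [attributes..., guessfactor, weight]
        double[] gfSum = new double[k];
        int[] counts = new int[k];
        for (int i = 0; i < n; i++) {
            counts[assign[i]]++;
            gfSum[assign[i]] += gfs[i];
        }
        double[][] out = new double[k][dims + 2];
        for (int c = 0; c < k; c++) {
            System.arraycopy(centers[c], 0, out[c], 0, dims);
            out[c][dims] = counts[c] > 0 ? gfSum[c] / counts[c] : 0;   // mean GF stands in for "best GF"
            out[c][dims + 1] = counts[c];                              // weight = observations represented
        }
        return out;
    }

    private static int nearest(double[][] centers, double[] p) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int c = 0; c < centers.length; c++) {
            double d = 0;
            for (int j = 0; j < p.length; j++) {
                double diff = centers[c][j] - p[j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }
}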
Ironic that I should be replying under this section... but... another way to reduce the dataset considerably would be to only look at the firing scans. From that it should be possible to cut your data by 90% at least. Next, because we would be looking at such a reduced dataset it should be possible to keep a *balanced* tree. So what you simply do is take each subtree that contains less than X points, find its average, and add a point at its points' centre. Then you eliminate the dimensions which don't have much effect on the GF for data saving purposes, and presto! Lossy compression for DC guns =) Just thinking, this would be even easier with k-means... --Skilgannon 19:44, 10 May 2010 (UTC)
Lol, I remember that comic :D Btw, how many of the top-10 bots use saved data (not preloaded)? --GrubbmGait 07:23, 10 May 2010 (UTC)
Two or three, I'm not very sure =) --Nat Pavasant 08:31, 10 May 2010 (UTC)
I think just Dookious, Phoenix, and Firebird (which looks like just VG scores and flattener enablement info). --Voidious 13:05, 10 May 2010 (UTC)

Timeouts

I was trying DrussGT against DevilFish in 500 round matches, and noticed that there would usually be around 3 or 4 losses. What's more, they didn't turn up in file writes from onDeath, and eventually I caught about 20 skipped turns followed by a timeout after watching the current scores and waiting for a death. I'm absolutely at a loss as to what it might be, as I don't have any recursion, all my loops are terminating (that I can find) and I'm not sure what to do to debug. Then I had the idea of modifying Robocode to print a stack trace for any bot that is being terminated due to a timeout. I think it would be a great addition to the bug-hunting arsenal so I've submitted a feature request on Sourceforge. Any thoughts? --Skilgannon 13:41, 7 February 2011 (UTC)

Sounds like a great idea. Wouldn't mind even having some of this type of stuff reroutable to a log file. Did you figure out the problem? (And just in case, DevilFISH Challenge is 1000 rounds btw.) --Voidious 14:20, 7 February 2011 (UTC)

Update?

Could you update the main robot page? I am at a loss following your version history... Thank you. --Nat Pavasant 14:31, 12 June 2011 (UTC)

Contents

Thread title | Replies | Last modified
Precise Max Escape Angle bug | 0 | 02:14, 22 June 2019
Throws exception when MC flag is on | 0 | 04:34, 3 February 2018
timeSinceDirChange bug | 2 | 11:10, 31 October 2017
close race at #1 | 2 | 15:56, 18 July 2012
genetic tunings | 3 | 01:43, 4 December 2011
Head-to-head | 17 | 07:14, 19 October 2011
exception in 2.2.0 | 1 | 07:04, 22 September 2011

Precise Max Escape Angle bug

A thread, Thread:Talk:DrussGT/Precise Max Escape Angle bug, was moved from here to Talk:Xor. This move was made by Xor (talk | contribs) on 22 June 2019 at 01:14.

Throws exception when MC flag is on

Once I noticed DrussGT orbiting the predicted location, I turned on the MC flag to better see the movement ;)

However, in onPaint,

if (!shieldEnabled)gun.onPaint(g);

gun is null when MC is on, while shieldEnabled is false, so it throws NPE.

changing this line to

if (!shieldEnabled && !MC)gun.onPaint(g);

will fix that ;)

Btw, what do you think has the bigger impact on movement: orbiting the predicted position, or the last seen position/wave position?

Xor (talk)04:34, 3 February 2018

timeSinceDirChange bug

Hey, I was trying to plug Druss' gun into Knight (to do some experiments using my MEA/preciseIntersection/features calculations to see if they break your gun) and I noticed something interesting in the code.

I looked throughout the wiki and found nothing about it being a known bug (and probably performance enhancing?), so I'll put it here anyway.

In the onScannedRobot() method, you overwrite the "lastDirection" variable with the new direction value before updating "timeSinceDirChange". This causes the variable to be incremented on every scan, since you increment it if (lastDirection == direction). So the normalized version 1/(1+2x) goes quickly towards zero during the round, which is probably almost the same as having a very low weight for this feature.
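(A tiny sketch of what the fix would look like, using the names described above rather than DrussGT's exact code:)

void updateTimeSinceDirChange(int direction) {
    // compare against the previous direction first, then store the new one;
    // in the buggy version the assignment came first, so the comparison always succeeded
    if (lastDirection == direction) timeSinceDirChange++;
    else timeSinceDirChange = 0;
    lastDirection = direction;
}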

Not sure when you introduced this, but maybe it is worth fixing in the future for a possible programmatic tuning round.

Rsalesc (talk)23:13, 30 October 2017

That certainly isn't intended behaviour, and might also explain why I was never able to get much benefit from this attribute. Thanks!

Skilgannon (talk)10:55, 31 October 2017

And then that attribute just serves as another time attribute ;) You may want to reintroduce it to simulate the legacy behavior at some point ;)

Xor (talk)11:10, 31 October 2017
 
 

close race at #1

I find it pretty cool that Diamond and DrussGT are so close right now, it's sometimes coming down to the head-to-head score to determine who's ahead. =) (Though I don't find it as cool that it's always DrussGT...)

Voidious21:35, 17 July 2012

Yeah, I'm racking my brains for anything which might give me a little edge and have come to the conclusion that my regular, slow, plodding idea/implement/test/release schedule is still the best plan.

And I'm not sure what is giving me these surprisingly high scores against Diamond. Of course, what I gain there I lose in comparison against deo.FlowerBot. How on earth are you managing to dodge a pattern matcher so well?

Skilgannon08:16, 18 July 2012
 

I similarly kind of took my foot off the gas and decided to just continue my usual development routine. I figure it's going to take more than a minor lead to break away from DrussGT and actually claim the throne. Right now I'm obsessed with matching or beating 1.7.53 by tuning my hit percentage thresholds (flattener, decaying surf stats), because I refuse to rollback to a stupider way of normalizing hit rates. =)

Diamond scores about 44% vs DrussGT over a lot of battles. I haven't focused on it much lately, but I have in the past and just had no idea what to do to improve. Not sure about FlowerBot... I don't think I've ever watched a battle against him. =)

Voidious15:56, 18 July 2012
 

genetic tunings

I'm curious about your genetic optimization stuff. This is using genetic algorithms to tune attribute weights with pre-gathered data using your WaveSim-like system? Or is there any GA stuff happening mid-battle? I've generally believed WaveSim wouldn't work against surfers, but at one point I convinced myself I might as well try. I gathered a bunch of data, only to find that HOT outperformed my guns against it, which convinced me it was probably useless. Have you run real battles to check that the improvements correlate, or are you just dropping them in the rumble?

Voidious22:47, 3 December 2011

Well, on Talk:DrussGT/Version_History#Anti-Diamond_tuning_.3D.29_404 it was already stated that it uses fixed, pre-gathered data.

My suspicion is that while HOT performs better than actual guns in such tests, WaveSim-type tests could still be useful (but non-optimal) against surfers in restricted domains (i.e. changing the relative weights in segmentation). For instance... I suspect it may be less useful for changing relatively "dynamic" things like VG rolling weight parameters than for just how the segmentation works.

Rednaxela23:09, 3 December 2011
 

I would say that if HOT is outperforming your guns in classifying/regressing the wave values it probably means that your gun isn't shooting towards GF0 often enough, making the surfers move towards GF0 too often. I haven't actually tested to see if HOT is outperforming my guns, but just changing the weights in my gun can improve my hitrate by ~8% (difference between worst (random) weights and best (final) weights).

I've thought maybe my process should involve recording WaveSim data, doing genetic tuning, recording new WaveSim data with the new tuning, and repeating until it converges. I think this would do quite a good job of having the same effect as genetic tuning directly against a surfer, but much faster.

Skilgannon00:38, 4 December 2011
 

Well, I'm not sure it means that... I'm not sure how you'd even code a gun to shoot at GF=0 more often (besides stupid ways =P). Regardless, it makes it pretty clear that what works in WaveSim is not what works in real battles, since HOT would perform dismally in real battles vs surfers.

The reason I thought WaveSim might still work against surfers is that even though your WaveSim targeting decisions are based off of a different movement profile than they would be in a real battle, the characteristics of the real vs simulated movement profiles may not be different enough to matter. You're still making each firing decision off the same data set. So maybe I will give it a shot. (Hopefully I didn't delete all that data...) I also love your idea about tuning/re-gathering data.

DrussGT 2.4.5 certainly far outperforms 2.4.4 against Diamond, so that's something. Then again, I've done so much anti-DrussGT tuning recently that it's possible just about any change you made would have improved your score against Diamond. =)

Voidious01:43, 4 December 2011
 

Head-to-head

I've been focusing a lot on beating DrussGT and Shadow lately and just wanted to give you props on how incredibly strong DrussGT is. Nothing I do to movement or gun seems to have any positive effect, like there's just some magical element I am unaware of. I tested with my flattener or Anti-Surfer gun hard-coded on, and those too had almost no effect - the flattener helped slightly, giving me my best score of 46.6% over 55 battles, up from the usual 45-46 range. Definitely need to put my Thinking Cap on over here. =)

Voidious04:03, 6 September 2011

Haha, thanks. I'm putting in work on my side as well, I'm experimenting with using DC for movement. My best version loses around 0.4% on the MC2K7 vs 2.2.0, mostly against the top bots, so I haven't released anything yet as I don't want to lose my new PL crown =)

Skilgannon10:11, 8 September 2011
 

Oh, neat! DC surfing requires some serious thought after years of doing things in VCS ways. I'm still working some fairly basic things out, after all this time. It gets you to really analyze how/why aspects of your VCS setup worked. Actually, I think I had an important realization just last night...

0.4% doesn't sound like much. :-P But then I don't think MC2K7 is a particularly robust way to measure a drastic movement change, either.

Voidious14:56, 8 September 2011
 

Interesting, no matter what I tried I couldn't get it better than my 3rd try (the -0.4% one). I guess I'll have to stick with my VCS for now... it's just that DC would be much more suitable for an idea I had...

Skilgannon09:48, 12 September 2011
 

Sounds familiar. =) You sure you ran enough seasons? Sometimes I convince myself I ran enough and then waste lots of cycles trying to match what was actually just a lucky score...

Voidious15:31, 12 September 2011
 

100 seasons, which should be plenty... minor tuning changes score slightly less but in the same region. Maybe I'll try identical code and see what happens...

Skilgannon15:36, 12 September 2011
 

My intuition is that so long as the magnitude that dimensions are weighted with is similar, the most likely sources of loss/differences between VCS and KNN would be:

  1. Non-linear spacing of segments in the VCS. In order to achieve maximally similar results between methods, you need to perform transforms on the dimensions to approximate the result of any non-linear spacing of segments in the VCS.
  2. Insufficient number of KNN data points used when the data is dense (late in the battle). The number of data points used should probably grow as the data becomes denser; the density of the points returned should probably affect how many points are used.

Have you looked into these factors Skilgannon?

Rednaxela17:00, 12 September 2011

I'm going to try the non-linear thing just now - good call as I had forgotten about this despite spending quite a while tuning it in my gun.

I'm currently weighting based on distance to the location point as a function of the average distance of the closest 3 points - it works quite well but I wouldn't be surprised if there were improvements which could be made.

Skilgannon07:08, 13 September 2011
 

While I think those points are valid, I somewhat disagree with their importance.

  1. I've experimented with different scaling of attribute differences, but never to any major success, in gun or movement. I'm currently not doing this anywhere in Diamond.
  2. If the data is dense anywhere in the graph of your movement data, it probably means you're getting hit a lot by a learning gun, at which point a much bigger issue is modeling data decay intelligently. Experience has shown that in VCS, stat buffers of varying depths with a generally low rolling average works well. There's no direct way to translate that to a DC setup.

Personally, I'd say that intelligently modeling data decay in DC surf stats is probably the biggest hurdle in converting from VCS. It's actually one of the main things I'm still tinkering with. I'm pretty happy with the setup I've arrived at in Diamond, but I think there's still a lot of room for improvement. I'd be happy to go into more detail about that if anyone's interested.

Voidious17:22, 12 September 2011
 

Here are my thoughts on those aspects.

The approach to data decay I took in RougeDC was to have an "index" dimension which continually counted up. This is kind of mean/nasty to the kd-tree performance, but as far as KNN search I think it's a very natural way to model decay.

Regarding varied depths, I'm pretty sure the depth of VCS segmentation is extremely analogous to the number of KNN points used and how they are weighted. The way to match that aspect of VCS systems is to mix the result of varied numbers of points in varied weightings. Since processing the same points multiple times is redundant it simplifies to the following: The way to get the same effect as varied depth VCS, is to work on how your weighting of KNN points rolls off, and use plenty of KNN points so it rolls off properly before the limit on number of points is reached.

I don't know if you were referring to this Voidious, but with regards to having many stat buffers as some like DrussGT do, my experience is you get the same effect by performing antialiasing and interpolation. This implies to me that the primary cause of "many stat buffers" being effective for traditional VCS is that it acts as a sort of accidental stochastic antialiasing. A KNN approach implicitly needs no antialiasing/interpolation, so that aspect of VCS setups does not need to be arranged.

Rednaxela18:00, 12 September 2011
 

(We might just move this thread to Talk:Wave Surfing at some point...)

Well, I agree that much of the value of multiple VCS buffers is covered inherently by a DC system: smoothing of the data and scaling to different amounts of data (eg, no need for a quick-learning unsegmented buffer in DC). So while my best VCS gun has a few stat buffers (and your best does anti-aliasing / interpolation), my best DC gun has only one tree. But while I wouldn't expect to have 100 trees in a DC movement, I do have more than one and I think there's value in it.

As far as data decay in a DC system, I think the progression from simplistic to sophisticated goes something like this:

  • Weighting by age. This is not without merit, but is pretty crude. An old piece of data may still be the most recent for that situation.
  • Capping number of data points, deleting old ones. Also pretty crude, but effective if it's very important to emphasize recent data. I still do this in parts of Diamond.
  • Within the set of nearest points, sort chronologically and weight by rank (see the sketch after this list). This is about as close as you can get to how rolling average works in a VCS segment. I weight data by 1 / (base ^ sort position). So with a base of 2, they're weighted 1, .5, .25, ... . A base of 2.4 is about equivalent to a rolling average of 0.7.
  • Use multiple, exponentially increasing values of k (say 1/4/15/50), with each set of data weighted by chronological order. This emulates having stat buffers of increasing segmentation depths, each with a rolling average. The deepest set of segmentation is akin to taking a low k nearest neighbors search, while an unsegmented buffer would use the max value of k.
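(A minimal sketch of the chronological-rank weighting from the third bullet above; names are mine, not Diamond's. Each neighbour carries its guessfactor and the tick it was logged, and the i-th newest neighbour is weighted by 1 / base^i.)

import java.util.Arrays;
import java.util.Comparator;

public class RankWeighting {
    static class Neighbour {
        double guessFactor;
        long timeLogged;
        Neighbour(double gf, long t) { guessFactor = gf; timeLogged = t; }
    }

    static double[] rankWeights(Neighbour[] nearest, double base) {
        // sort newest first (note: sorts the caller's array in place)
        Arrays.sort(nearest, new Comparator<Neighbour>() {
            public int compare(Neighbour a, Neighbour b) {
                return Long.compare(b.timeLogged, a.timeLogged);
            }
        });
        double[] weights = new double[nearest.length];
        for (int i = 0; i < nearest.length; i++) {
            weights[i] = 1.0 / Math.pow(base, i);   // base 2 gives 1, 0.5, 0.25, ...
        }
        return weights;
    }
}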

Lastly, this is just a hunch, but I think another value of combining many different views of your data is that you achieve a safe pseudo-randomness. That is, simply surfing one set of data will make you move more predictably than the sum of a diverse set of viewpoints - at least with a True Surfing algorithm. But surfing that sum of viewpoints is still going to err on the side of dodging bullets accurately, in contrast to a truly random movement.

Voidious18:55, 12 September 2011
 


 

I totally agree that emulating VCS is not the ultimate goal, but it does seem like a good starting point. For instance, I think weighting similar situations by chronological rank instead of raw age is a good insight that I wouldn't have noticed if not for considering how things work in VCS.

Oh, I did try phasing out nearest point instead of oldest point for a while. I thought that was such a great idea (I recall it came out of a discussion between us), so I tried hard to tweak it into submission, but I never got any improved performance out of it.

Voidious14:10, 13 September 2011
 

I'm not sure about that, I think raw age might be better for trying to emulate enemy guns. They log hits all the time, not just when a bullet hits, so if there is a big gap between two bullet hits the last hit may be getting weighted proportionally much higher than it should be. This is something I think VCS does wrong, which can be addressed in DC, if only I could get the scores back up to where they were =)

Skilgannon14:29, 13 September 2011
 

But it's also the case that a piece of data that's 100x older but 10x closer to the current situation (wrt the rest of the attributes) may be a better estimate of where the enemy is firing in that situation. The ratio of those values is going to depend on how granular the enemy's gun is. So I think considering the situations sorted by time at a bunch of different granularities is a pretty good bet, and I think it's similar to what our VCS systems do with great success.

You have got me thinking again about super-lightweight flatteners against weaker learning guns though. =) It does bother me that they're learning all the time and I'm not!

Voidious14:38, 13 September 2011
 

Btw, I'm up to 49.3% vs DrussGT 2.2.2 now with Diamond 1.6.15 (over 1000 battles). I got up to like 49.7%, but only with some changes that killed my scores too much vs Shadow and Tomcat. I decided to stop spinning my wheels for now and move on to more general improvements. =)

Voidious20:29, 18 October 2011
 

It's a great pleasure for me to stand in the same row as Shadow :) (Sorry for going off-topic)

Jdev04:13, 19 October 2011
 

Heh, I was getting worried, so I came out with some changes which may just help in the AS department. My strongest PL version is probably 2.3.7, although I need to figure out what has been losing me 0.1 APS since 2.2.2... I'm happy to improve my PL but not at the expense of my APS =)

Skilgannon07:14, 19 October 2011
 

exception in 2.2.0

Been running a lot of tests vs DrussGT 2.2.0 lately and I just noticed an exception I've hit twice along the way (in the data dir). Probably over a few thousand battles or so.

java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
        at java.util.ArrayList.RangeCheck(ArrayList.java:547)
        at java.util.ArrayList.get(ArrayList.java:322)
        at jk.mega.dGun.DrussGunDC.onScannedRobot(DrussGunDC.java:254)
        at jk.mega.DrussGT.onScannedRobot(DrussGT.java:266)
        at robocode.ScannedRobotEvent.dispatch(ScannedRobotEvent.java:297)
        at robocode.Event$HiddenEventHelper.dispatch(Event.java:244)
        at net.sf.robocode.security.HiddenAccess.dispatch(HiddenAccess.java:194)
        at net.sf.robocode.host.events.EventManager.dispatch(EventManager.java:487)
        at net.sf.robocode.host.events.EventManager.processEvents(EventManager.java:460)
        at net.sf.robocode.host.proxies.BasicRobotProxy.executeImpl(BasicRobotProxy.java:413)
        at net.sf.robocode.host.proxies.BasicRobotProxy.execute(BasicRobotProxy.java:123)
        at robocode.AdvancedRobot.execute(AdvancedRobot.java:565)
        at jk.mega.DrussGT.run(DrussGT.java:156)
        at net.sf.robocode.host.proxies.HostingRobotProxy.run(HostingRobotProxy.java:220)
        at java.lang.Thread.run(Thread.java:680)
Voidious01:57, 22 September 2011

Thanks, I'll get onto it =)

Skilgannon07:04, 22 September 2011