From Robowiki
Revision as of 08:23, 17 February 2010 by Miked0801 (talk | contribs) (Error report from 1.6.12)

Talk:DrussGT/Version History

1.3.1 results

Interesting... it appears with 1.3.1 that switching the time segments to the style I've been using has killed the score significantly, putting it behind Dookious. Maybe I should try DrussGT-style time segments some time... --Rednaxela 09:10, 11 December 2008 (UTC)

I was playing with the code quite a bit... maybe it messed something else up. I'll do a revert and just keep the gun changes. In the TCRM it scored a satisfying 90.59 over 100 seasons...--Skilgannon 19:47, 11 December 2008 (UTC)

Hang on... I just looked at the details and it seems I'm getting lots of 0 scores, however only coming from Alcatraz. Also, it seems strange that in some of those battles I get 50% survival, but zero score?! Maybe this would be a good time to try out the revert features offered by the new database backed rumble =) --Skilgannon 04:55, 12 December 2008 (UTC)

Heh, I kind of think this server should still be rejecting 0 scores like the old one did... at least except when it matches the expected score reasonably :) --Rednaxela 08:14, 12 December 2008 (UTC)


Hey, hope 1.3.2 will stay king! And hope your new AntiSurfer gun will kill Shadow! » Nat | Talk » 00:25, 12 March 2009 (UTC)


Have you realized that going from 1.3.1b to 1.3.2 took exactly 3 months (and 9 minutes, actually) of development? » Nat | Talk » 13:33, 24 March 2009 (UTC)

1.3.8 results

Just wanted to offer a "wow!" at the 1.3.8 rating. Awesome that you are still finding improvements. Are you trying to just crush any hope anyone has of ever catching up? =) (Just kidding, I reckon I'll make a run eventually, just having fun on something else for now.) --Voidious 15:00, 11 May 2009 (UTC)

Yeah, I decided to apply the very effective DC movement strategy of weighting scans by their inverse distance (eg. Shadow, Wintermute, CunobelinDC) to the gun as well. Seems it worked quite nicely =) And what's more, it's something that any DC gun can VERY quickly implement. --Skilgannon 08:44, 12 May 2009 (UTC)
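The weighting scheme described here can be sketched as follows. This is a minimal illustration, not DrussGT's actual code; the class and method names, and the epsilon guard, are my own.

```java
public class InverseDistanceWeighting {
    // Combine the guess factors suggested by the k nearest scans into one
    // estimate, weighting each scan by the inverse of its distance in
    // attribute space so that more similar situations dominate.
    public static double weightedGuessFactor(double[] distances, double[] guessFactors) {
        double weightSum = 0, gfSum = 0;
        for (int i = 0; i < distances.length; i++) {
            // epsilon guard: an exact match would otherwise divide by zero
            double w = 1.0 / Math.max(distances[i], 1e-9);
            weightSum += w;
            gfSum += w * guessFactors[i];
        }
        return gfSum / weightSum;
    }
}
```

Real guns typically feed these weights into a kernel density estimate over firing angles rather than taking a plain weighted mean, but the inverse-distance weighting step itself is the same.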

Ah, cool. And hey, Lukious did that too! =) Actually, he used a 2D kernel density, with one of the factors being distance between the scans, but I think he also weighted by distance to search scan. I'm definitely making a mental note about this for my future DC endeavors. =) --Voidious 13:28, 12 May 2009 (UTC)

You are right, it was very easy to implement. I have some versions of that idea now in test =). --zyx 14:33, 12 May 2009 (UTC)

Haha, actually, I don't think I've ever done a DC gun WITHOUT that feature... maybe I'm odd though :) --Rednaxela 15:37, 12 May 2009 (UTC)

It's interesting how well this worked; I had always assumed that the reason it worked so well for movement was because of the very limited data. I thought that in a gun, getting as much data as possible would be preferable, as there was a good chance some of the factors would just be noise against a large portion of the bots, and while a weighting method based on scan distance could certainly be effective, I thought one that tended towards infinity, like the inverse-distance one, would be too 'harsh'. Seems I was wrong =) --Skilgannon 18:11, 12 May 2009 (UTC)

1.3.10

Re: kd-tree speed, how many dimensions do you have now? In testing my own kd-trees, IIRC it took something like half the match before the kd-tree was as fast as brute force with ~6 dimensions. Maybe my memory's off, but it took a while, and more dimensions take longer. If you have a lot more, I wouldn't be surprised if it's slower than brute force over 35 rounds. My Diamond/Code#KdBucketTree.java has a runDiagnostics2 method that might be helpful to you in diagnosing the relative speeds of kd-tree vs brute force. runDiagnostics2 is the more up-to-date method; you can just adjust the number of scans and number of dimensions (the array of random numbers) you put into the tree (though it could be coded more clearly), and you could just delete the comparison to my vanilla kd-tree. Good luck. =) --Voidious 13:47, 10 August 2009 (UTC)

Cool. I'll take a look at that. I'm running 11 dimensions, so... =) Maybe have a wrapper that uses brute force for the first N scans, and from then on uses the tree? Also your test uses perfectly distributed random numbers, so in real life the tree is penalised even further. --Skilgannon 14:01, 10 August 2009 (UTC)

Oh yeah, and bigger cluster size favors brute force, too... Kd-tree might blow away brute force for one nearest neighbor, but 50 nearest is a lot different. --Voidious 14:14, 10 August 2009 (UTC)
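For reference, the brute-force approach being compared against is roughly the following sketch (illustrative names, squared-Euclidean distance assumed):

```java
import java.util.PriorityQueue;

public class BruteForceKnn {
    // Scan every stored point, keeping the k closest to the query in a
    // max-heap keyed on distance, so the current farthest of the k can be
    // evicted in O(log k). Total cost is O(n log k) regardless of the number
    // of dimensions, which is why brute force stays competitive with kd-trees
    // at high dimensionality and large cluster sizes.
    public static double[][] nearest(double[][] points, double[] query, int k) {
        PriorityQueue<double[]> heap = new PriorityQueue<>(
                (a, b) -> Double.compare(sqDist(b, query), sqDist(a, query)));
        for (double[] p : points) {
            heap.offer(p);
            if (heap.size() > k) heap.poll(); // drop the current farthest
        }
        return heap.toArray(new double[0][]);
    }

    static double sqDist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s; // squared distance preserves the nearest-k ordering
    }
}
```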

Hmm... I'm tempted to make a N-Nearest-Neighbour Challenge to compare various KD-tree implementations as well as brute force, at different numbers of dimensions and so-called 'cluster sizes'... :) --Rednaxela 14:38, 10 August 2009 (UTC)

Go for it! I think k-Nearest Neighbour Challenge or KNN Challenge (I can accept the British / Canadian spelling :-P) would be a closer fit to the usual naming of this problem, like wikipedia:k-nearest neighbor algorithm. For 25,000 data points, cluster size 50, 10+ dimensions, my money's on my brute force algorithm. =) --Voidious 15:01, 10 August 2009 (UTC)
I was just re-reading the wikipedia:k-nearest neighbor algorithm page and I followed the link to wikipedia:Statistical classification, where I noticed something called wikipedia:Boosting, which looks to be about the same idea as Crowd Targeting. My attempts at trying to dynamically weight DrussGT's movement buffers in the past have been, generally, failures, but maybe the algorithms they provide as examples will be a bit more successful =) --Skilgannon 15:50, 10 August 2009 (UTC)
Hmm very interesting... It does appear to be like Crowd Targeting as it's often interpreted. I may be mistaken, but those methods don't appear to handle the case where something should be given negative weighting though, which is a key aspect in SaphireEdge's "crowd" model. --Rednaxela 16:20, 10 August 2009 (UTC)
Yes, in a gun case I can see how you might need to weight a certain gun negatively if, for instance, it is being actively dodged by a surfer. In a movement case, however, we are just talking about weighting different sets of VCS buffers more or less depending on how much they correspond with enemy targeting. I think it could be proved that simply segmenting data cannot make your targeting worse, only better, so only positive (and possibly 0) weights are necessary. Interestingly, I think this also makes having multiple segmented data buffers a form of Probably approximately correct learning. I need to find out more about boosting =) --Skilgannon 14:21, 30 August 2009 (UTC)
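The non-negative-weights crowd of buffers being discussed might look something like this sketch; the reward constants and names are made up, not taken from any of the bots mentioned.

```java
public class BufferCrowd {
    // Sum the danger bins of several segmented buffers, each scaled by a
    // non-negative weight reflecting how well that buffer has matched the
    // enemy's observed fire so far.
    public static double[] combinedDanger(double[][] bufferBins, double[] weights) {
        double[] out = new double[bufferBins[0].length];
        for (int b = 0; b < bufferBins.length; b++)
            for (int i = 0; i < out.length; i++)
                out[i] += weights[b] * bufferBins[b][i];
        return out;
    }

    // Illustrative weight update: reward a buffer whose prediction matched
    // where the enemy actually shot, and clamp at zero so no buffer is ever
    // weighted negatively, per the argument above.
    public static double updateWeight(double weight, boolean predictedCorrectly) {
        return Math.max(0, weight + (predictedCorrectly ? 0.1 : -0.05));
    }
}
```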
We talked about "boosting" in the Machine Learning class I took in college. The only thing I remember about it is that the team with the best classification score on the final project used it. =) --Voidious 17:02, 10 August 2009 (UTC)

1.5.0

No changes for this version except moving the movement code to DrussMoveGT? And where is 1.4? » Nat | Talk » 14:43, 6 September 2009 (UTC)

  • 1.4 was down near 1.3.10, but I ditched that line due to it being a complete flop. 1.5.0, yes, was just moving the movement out to a separate class, and also moving all the gun references out of the movement code into a new class file. I also ditched 1.5.0 due to a tiny little bug that I was unable to find, but which cost me about 0.2 APS --Skilgannon 06:12, 18 January 2010 (UTC)

1.6.7

On your latest update to help mirror bots - I had noticed your weakness here and wondered how you were going to address this. What's your idea on this? --Miked0801 20:43, 17 January 2010 (UTC)
Also, to let you know, I got a few NULL Ptr exceptions when running your bot vs. deith.Czolgzilla 0.11 - 3 in 35 rounds. There may be something amiss in your code.

Wow - I thought I killed all those NPEs - thanks for the heads up! The basic idea with this update is taking advantage of something goto-surfing gives me over true surfing - I know where my final destination is before actually reaching it. What I do is take the midpoint between me and the other bot and imagine it stays constant. Then I imagine that the other bot is mirroring my future predicted best location (I take the furthest forward prediction available from the movement) and check what offset I would need to shoot at to hit my mirrored location. That offset is the value I use to cluster on. The idea I'm hoping for is that all bots tend to rotate around an imaginary point as they surf (or random move), and they also tend to be at the opposite side of the circle from the one I'm on in order to keep a decent distance, so their movements may tend to be slightly mirror-ish, and if I can tell my gun the point on the circle I'm going to be at, it has a definite advantage trying to figure out which part of the circle they'll rotate to. Hope this makes sense =) --Skilgannon 06:12, 18 January 2010 (UTC)
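The geometry described above can be sketched like this (illustrative names, Robocode's north-zero clockwise angle convention; this is not DrussGT's actual code):

```java
public class MirrorAim {
    // Reflect my predicted future position through the current midpoint
    // between the two bots, giving the hypothetical mirrored enemy spot.
    public static double[] mirrorPoint(double myX, double myY,
                                       double enemyX, double enemyY,
                                       double predX, double predY) {
        double midX = (myX + enemyX) / 2, midY = (myY + enemyY) / 2;
        return new double[] { 2 * midX - predX, 2 * midY - predY };
    }

    // Offset between the head-on angle to the enemy and the angle to the
    // mirrored point: the value used to cluster on.
    public static double aimOffset(double myX, double myY,
                                   double enemyX, double enemyY,
                                   double mirX, double mirY) {
        double direct = Math.atan2(enemyX - myX, enemyY - myY); // 0 rad = north
        double mirror = Math.atan2(mirX - myX, mirY - myY);
        double off = mirror - direct;
        while (off > Math.PI) off -= 2 * Math.PI;   // normalize to (-pi, pi]
        while (off < -Math.PI) off += 2 * Math.PI;
        return off;
    }
}
```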

About the NPEs, I just saw a bunch against cx.micro.Spark 0.6. 12 in 35 rounds if that helps. Interesting idea for the anti-mirror stuff by the way. How big a factor is it if your surfing data changes after firing at such a point? --Rednaxela 07:23, 18 January 2010 (UTC)
Thanks - think I've killed them all now. If anybody spots any more please let me know! So far it's a very rudimentary version - if I had enough coding time and processing power I would
  • predict forward until the next enemy wave should be fired
  • predict all my movements forward until my wave (fired now) hits my mirrored surfing position
  • use the offset of that for aiming
  • then keep track of the scan and when adding scans use what movement I actually make, and not what I'm predicted to make (which is only truly 100% accurate up to the end of this wave anyways).
But first I want to see if there's any merit to the idea. So I haven't done any of the extra predicting forward, or keeping track of scans to make the data absolutely correct, or anything. All I have right now is using the furthest wave's safest reachable point (by following the best point on the closer waves) and then taking its offset and multiplying it by the GF direction. If it doesn't hurt my score (in this state) I'll consider it a terrific success =) --Skilgannon 17:14, 18 January 2010 (UTC)

If you add 3 bots to the rumble, you should at least leave your client running to help with the calculating ;) --Miked0801 15:47, 27 January 2010 (UTC)

Yeah sorry about that... my client died from what looks like a corrupted HDD and I didn't have time to fix it then. I think I've fixed it by re-installing the JRE but I'm going to need to keep an eye on the HDD from now on... --Skilgannon 09:18, 29 January 2010 (UTC)

By the way, it's impressive how that improved DrussGT's performance against PolishedRuby. What's interesting, though, is that for some reason Toorkild gets a better score... much better. Even more so than Axe's bots with anti-mirror movement. You may want to look into what happens in Toorkild vs PolishedRuby to find more opportunity perhaps. :) --Rednaxela 17:55, 29 January 2010 (UTC)

1.6.10 bughunting

Hey guys, so far (from my own machine) I have picked up NPEs at jk.mega.DrussGT.getBestPoint(DrussGT.java:1194) and at jk.mega.DrussGT.doSurfing(DrussGT.java:1616). If you spot any others please tell me =) --Skilgannon 09:44, 4 February 2010 (UTC)

Sorry for the late response. Hit the getBestPoint once here, against strider.Festis 1.2.1:

java.lang.NullPointerException
        at jk.mega.DrussGT.getBestPoint(DrussGT.java:1194)
        at jk.mega.DrussGT.doSurfing(DrussGT.java:1615)
        at jk.mega.DrussGT.onScannedRobot(DrussGT.java:207)
        at robocode.peer.robot.EventManager.onScannedRobot(Unknown Source)
        at robocode.peer.robot.EventManager.dispatchEvent(Unknown Source)
        at robocode.peer.robot.EventManager.processEvents(Unknown Source)
        at robocode.peer.RobotPeer.execute(Unknown Source)
        at robocode.peer.proxies.BasicRobotProxy.execute(Unknown Source)
        at robocode.AdvancedRobot.execute(Unknown Source)
        at jk.mega.DrussGT.run(DrussGT.java:143)
        at robocode.peer.RobotPeer.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:619)

--Voidious 20:48, 5 February 2010 (UTC)

From 1.6.12 against toorkild 0.2.4b

=========================
Round 35 of 35
=========================
My hitrate: 0.13681592039800994
DC gun score: 174
PM gun score: 169
AS gun score: 170
gun: DC
Enemy damage: 963.2852093419626
My damage:    1421.6659683513403
Accumulated, weighted enemy hitrate % : 8.65954079741584
Flattener enabled: false
SYSTEM: Bonus for killing jk.micro.Toorkild 0.2.4b: 12
SYSTEM: jk.mega.DrussGT 1.6.12 wins the round.
Enemy damage: 984.3913840978221
My damage:    1481.6623683513399
Accumulated, weighted enemy hitrate % : 8.653287918748319
Thou rank tickle-brained bladder!
ERROR: DETECTED BULLET ON NONEXISTANT WAVE!

In the debug window it said error so I thought you might want to know... --Miked0801 07:23, 17 February 2010 (UTC)

Contents

Thread title               Replies   Last modified
Imperfect Perfection       1         08:56, 7 October 2017
3.1.3DC vs 3.1.3           24        08:41, 9 February 2014
2.8.0                      1         07:57, 16 August 2012
RumbleStats templates      2         19:38, 12 August 2012
2.7.10                     2         17:36, 31 July 2012
2.7.2                      8         17:21, 31 July 2012
aggressive changes         6         21:48, 23 July 2012
DoctorBob Testing          0         16:44, 11 July 2012
survival score             1         22:09, 10 July 2012
2.4.9 broken?              1         21:47, 25 June 2012
Too awesome                1         14:41, 4 December 2011
Anti-Diamond tuning =)     3         06:49, 26 November 2011

Imperfect Perfection

I didn't understand what Imperfect Perfection is. I couldn't find it anywhere. What is it?

Dsekercioglu (talk)19:55, 6 October 2017

This was a reference to old rumble servers, which would throw away 100% scores in case one of the bots had crashed. So the idea was that if you had 100% at the end of a battle you should allow the opponent one small bullet hit so that you actually get a valid score. See the old wiki page.

Skilgannon (talk)08:56, 7 October 2017
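The trick reduces to a tiny check; a sketch with made-up names — the real condition (which round, and how small a hit to allow) would be tuned per bot, and is not taken from any actual source here.

```java
public class ImperfectPerfection {
    // Decide whether to deliberately allow one small hit: only in the final
    // round, and only if the opponent has scored nothing at all, so the old
    // rumble server does not discard the battle as a suspected crash.
    public static boolean allowOneSmallHit(int roundNum, int totalRounds,
                                           double enemyTotalScore) {
        return roundNum == totalRounds - 1 && enemyTotalScore == 0;
    }
}
```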
 

3.1.3DC vs 3.1.3

Edited by author.
Last edit: 17:17, 25 December 2013

Which one is better? Edit: found out that 3.1.3DC uses GoTo surfing

Tmservo (talk)21:37, 24 December 2013

Both 3.1.3 and 3.1.3DC use GoTo surfing. 3.1.3DC uses DC, while 3.1.3 uses some form of VCS. (Correct me if I am wrong)

Straw (talk)23:16, 24 December 2013

Correct :-) In the movement, to be more specific.

And considering I write a changelog, I don't see how this question is anything other than lazy.

Skilgannon (talk)07:04, 25 December 2013

I was always wondering why the best bot used VCS, DC seems much more elegant. Does it improve performance in your tests?

Straw (talk)08:05, 25 December 2013

For some reason I've never managed to get the DC to perform as well as the VCS, so it still uses VCS. I remember Jdev commenting that a range search worked better for him than a KNN search in movement, so I'll be trying that next.

Skilgannon (talk)21:34, 25 December 2013

Have you tried doing something similar to your many randomized attribute buffers with kD-Trees? You could make 100 trees, each with a random subset of the predictors, then combine the results. You could even start weighting some trees' results higher if they perform better.

Straw (talk)00:36, 14 January 2014
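The random-subspace idea suggested above could start from something like this sketch, which just generates the attribute subsets; each subset would parameterize one kD-tree. Names and signature are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RandomSubspaces {
    // Generate 'numTrees' random subsets of the attribute indices
    // [0, numAttributes), each of size 'subsetSize', by shuffling the full
    // index list and taking a prefix. A fixed seed keeps runs repeatable.
    public static List<int[]> randomSubsets(int numAttributes, int subsetSize,
                                            int numTrees, long seed) {
        Random rnd = new Random(seed);
        List<int[]> subsets = new ArrayList<>();
        for (int t = 0; t < numTrees; t++) {
            List<Integer> all = new ArrayList<>();
            for (int i = 0; i < numAttributes; i++) all.add(i);
            Collections.shuffle(all, rnd);
            int[] subset = new int[subsetSize];
            for (int i = 0; i < subsetSize; i++) subset[i] = all.get(i);
            subsets.add(subset);
        }
        return subsets;
    }
}
```

Weighting each tree's vote by its recent prediction success would then follow the same non-negative-weight crowd scheme discussed earlier on this page.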
 

2.8.0

Cool, been trying to find a good reason to do this myself. =)

The predicted distance to enemy is a good use for it, but besides that, everything I prototype that might make use of this ends up failing pretty hard, and/or being really slow by killing my second-wave surf danger cutoff optimization. That isn't a deal breaker if it gives serious APS gains, but it comes pretty close, and it makes testing much more painful. I recently made another serious pass at "factor in danger of firing situations presented to the enemy" that fell into this category.

Voidious21:35, 15 August 2012

My thoughts were that my future wave surfing might actually be failing because I didn't have this feature yet. I'm going to try using the predicted enemy locations for the fire locations of the future waves, since that is the one big inaccuracy still left in the future wave system. I'm hoping that it will actually help this time...

Skilgannon07:57, 16 August 2012
 

RumbleStats templates

Hmm, what do you enter to use RumbleStats? The way I use it, I do get the MediaWiki template format like you had there. I put this in double braces: subst:rumblestats:roborumble|voidious.Diamond 1.2.3

Voidious18:05, 12 August 2012

I do that as well, with an added |GTStats at the end. I always just do a copy-paste from Template:GTStats. I do another subst: the next time I make changes to leave the rendered text only. For some reason it showed up correctly in the page when I submitted it, but when I reloaded a bit later it showed the scores as 0. Perhaps VoidBot reached its rumble quota?

Skilgannon19:34, 12 August 2012
 

Oh, I see. Yeah, RumbleStats (which has its own API key actually) is also subject to a rate limit, though I think I've found it to be higher than the rates Darkcanuck originally described.

Voidious19:38, 12 August 2012
 

2.7.10

"Fix my x.x5 exploit" - You mean where you round bullet powers to x.x5? I actually tried this recently in tests and didn't see any improvement. I figure bots where this exploit is working are being crushed enough that I'm staying at high energy and using my default all the time. And even if they are occasionally seeing hit waves when I change bullet power, it may not be enough to affect scores.

Actually, I saw a tiny improvement with one bullet power formula, and a tiny decrease in another, so I think it was just an effect of slightly increasing bullet powers to get up to x.x5. Do you always round up like I did? I figured it would be dumb if I calculated the exact right amount of energy to kill my opponent, then rounded down. ;)

Voidious17:12, 31 July 2012

I actually found not all x.x5 powers work at exploiting the bug - some of them get the same results for the rounding on both sides of the comparison. So I made an array of the ones that exploit the bug and choose the closest value.

I also removed all of the values between 1.95 and 2.95, because if I was using 2.95 in the first place (ie. hitrate > 33%) it means it was advantageous to shoot with high power against them anyways.

But maybe I should have something like 'round up if this shot will kill them'. I hadn't even considered that, and yeah, rounding down would be kind of stupid, even if it wouldn't leave them with much energy.

Skilgannon17:31, 31 July 2012
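A sketch of the selection logic being described — with the important caveat that the candidate array below is a placeholder: the actual x.x5 values that trigger the rounding bug are determined empirically, as described above, and are not reproduced here. The round-up window is also my own illustration.

```java
public class BulletPowerPicker {
    // Robocode bullet damage: 4*power, plus an extra 2*(power-1) above 1.0.
    public static double damage(double power) {
        return 4 * power + 2 * Math.max(0, power - 1);
    }

    // Pick the candidate closest to the desired power; then, if that choice
    // falls just short of a killing blow while a slightly larger candidate
    // would finish the enemy, round up rather than down
    // ("round up if this shot will kill them").
    public static double choose(double desired, double[] candidates, double enemyEnergy) {
        double best = candidates[0];
        for (double c : candidates)
            if (Math.abs(c - desired) < Math.abs(best - desired)) best = c;
        if (damage(best) < enemyEnergy) {
            double kill = Double.POSITIVE_INFINITY;
            for (double c : candidates)
                if (c > best && damage(c) >= enemyEnergy) kill = Math.min(kill, c);
            if (kill <= best + 0.25) best = kill; // illustrative round-up window
        }
        return best;
    }
}
```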
 

Lol, that is so hardcore. I'm not sure I can bring myself to muddy up my bullet power selection with all of that, but that is pretty awesome. =)

Voidious17:36, 31 July 2012
 

2.7.2

I don't understand, why would you ever ignore a bulletHitBullet?

Voidious19:42, 24 July 2012

I wasn't sure about making a club, but I've been watching for it, and you're at 90.02 APS with 1866 battles and 922 pairings. Congrats. =)

Voidious20:31, 24 July 2012

And looks like you're the first to make it stick even after a few thousand battles. Congrats!

Voidious03:50, 31 July 2012
 

And you were only a few hours behind...

I think with that 'light flattener' it would make sense for it to have very deep rolling depths, similar to a 'typical' VCS gun, considering how often flatteners get new data.

Skilgannon10:28, 31 July 2012
 

This first version was k=min(50, num data points / 5) of the last 1000 data points, all weighted equally (no chronological weighting, still weight by inverse distance). I still have a fair bit of tweaking to do, so hoping to squeeze another .1 or so out of it. But with my luck my first guess will be impossible to optimize further. =) This was an improvement of like 0.3 in my 500-bot test bed, but of course that's quickly halved with all the 99% bots that I don't test against.

Of course, I was thinking 1000 was about 1 round of data, when it's actually ~20 because it's not using virtual waves. Doh! Guess I'll try dialing that down a lot. =)

Voidious17:21, 31 July 2012
 

Congrats indeed!

Regarding "why would you ever ignore a bulletHitBullet?", based on the version history it sounds like this was done as a hack to get better HawkOnFire score. It makes sense to me that this would work for that because sometimes HawkOnFire doesn't *exactly* aim at GF=0.0, but if you start trying to dodge a slightly-off-zero location, you're more likely to just barely get hit on the other side of slightly-off-zero.

Rednaxela22:30, 24 July 2012
 

Yup, Rednaxela hit it on the head. HOF shoots fairly off-center surprisingly often, so I relied on my pre-seeded GF0 to dodge. It only ignored BulletHitBullets if there weren't any hits yet (ie, dodging the GF0 was successful).

Skilgannon14:06, 25 July 2012
 

Holy wombat! You guys set the bar very high :). Congrats!

90+APS on 900+ competitors is, well, very impressive.

Wompi09:07, 31 July 2012
 

Thank you =) Although, I must say, the air is getting very thin up here...

Skilgannon10:28, 31 July 2012
 

aggressive changes

Nice to see you're going for some pretty major changes in recent versions. =) Hoping to get away from the tweak train soon, myself.

Your changes sound vaguely along the lines of Talk:Wave_Surfing#The_Next_Level. My vision along those lines is more about predicting the firing situations presented to the enemy during the surfing options, then factoring the danger of those situations into the surf danger. For instance, if clockwise is moving towards the wall, the enemy's next bullet will have a precise MEA of 0.5 / distance 400 / normalized hit rate of 10, while counter-clockwise is a precise MEA of 0.9 / distance 550 / normalized hit rate of 8 for those firing attributes. At first glance, it seems like multiplying those right into the surf danger makes sense. But if you have multiple firing situations, the values should probably be added together before being multiplied in.

I tried implementing this a while back, with just the precise MEA / distance stuff, but I couldn't get any improvement out of it. It was also a HUGE pain trying to do this in such a way that preserves the "if we exceed max danger on the first wave, don't calculate the second wave" optimization, because you have to also account for the fact that different movement options will end sooner and have different numbers of firing situations to factor in.

Voidious18:26, 20 July 2012
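The shape of that computation can be sketched as follows. The attributes match the example numbers above, but how they combine into a danger value is purely a guess on my part, not Diamond's or DrussGT's formula.

```java
public class FutureFiringDanger {
    // Illustrative per-situation danger: a firing situation is worse for me
    // when it gives the enemy a wider precise MEA, a shorter range, and a
    // higher normalized hit rate for those attributes.
    public static double situationDanger(double preciseMea, double distance,
                                         double normalizedHitRate) {
        return normalizedHitRate * preciseMea / distance;
    }

    // Sum the dangers of all firing situations a movement option presents,
    // then multiply the total into that option's ordinary surf danger, as
    // suggested above.
    public static double optionDanger(double surfDanger, double[] situationDangers) {
        double sum = 0;
        for (double d : situationDangers) sum += d;
        return surfDanger * sum;
    }
}
```

With the example numbers, clockwise (MEA 0.5, distance 400, hit rate 10) comes out slightly safer than counter-clockwise (MEA 0.9, distance 550, hit rate 8) under this toy formula.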

Not to say that the approach I tried / might try again (which was based on your post) is mutually exclusive to what you're doing here...

Voidious18:34, 20 July 2012
 

Yeah, I've never had enormous gains from tweaks, so I think aggressive is the way =)

In my precise prediction I now keep a record of my path, so when I get to the time that they are predicted to fire I can calculate what the wave surfing attributes are and fire a wave. The biggest issue at the moment is predicting what their movement will be so that I can give the wave a realistic center. Right now I'm just using linear prediction. Note, this is just for the first wave. There are some major issues in the structure I've introduced, it seems, but my goal is to be able to make intelligent decisions not just for how surfing this wave will affect the second wave, but also how to move before any waves are surfed/when there are no waves in the air so that I can dodge whatever they throw at me.

Skilgannon19:46, 20 July 2012
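The linear prediction mentioned for placing the future wave's center might look like this minimal sketch (illustrative names; Robocode's heading convention, where 0 radians points north along +y):

```java
public class FutureWaveCenter {
    // While predicting my own surf path tick by tick, once the tick at which
    // the enemy is expected to fire is reached, place the future wave's
    // source at a straight-line projection of the enemy's last known
    // position and velocity -- the simple placeholder described above.
    public static double[] linearProject(double x, double y, double heading,
                                         double velocity, int ticks) {
        return new double[] {
            x + Math.sin(heading) * velocity * ticks,
            y + Math.cos(heading) * velocity * ticks
        };
    }
}
```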
 

Definitely agree with focusing on big changes vs tweaks. But I also can't resist trying to tune things up constantly. Part of me just feels like the only way to really show the true value of a big change is on a bot with all the small stuff already optimized. =) For instance, when you first wrote a go-to surfer, it seemed cool that it could be competitive. Once you were #1 and miles beyond the rest of us, that's when it really started to seem like there was something to it. =)

And my recent surge has, from my perspective, been the product of something else entirely: code quality and bug fixes. It was just two months ago I started a big code overhaul with no features or performance gain in mind, just trying to clean up my code so it was something I could be more proud of. I just found lots of stuff to fix and improve along the way. And the improved code base made it much easier and more pleasant to do lots of the new things I've been playing with.

Voidious21:04, 20 July 2012
 

Welllll.... sometimes it seems changes are just too aggressive. I'm going to leave this idea to mull in the back of my mind for now, and concentrate on other things. I still don't understand why it didn't work as expected, or why I get a lower score predicting a fake wave when there are no waves in the air than even plain old orbiting.

Skilgannon20:31, 23 July 2012
 

The only thing that comes to mind is attack angle control. Do you use the same in no-wave cases as for surfing cases? I know I move towards my desired distance more aggressively when there are no waves in the air.

Voidious21:40, 23 July 2012
 

I did think of that, but never got around to it. Perhaps I should breathe some more life back into the 2.6.x line. I have a few more experiments I want to try with the 1.7.x's first though. And... if I do get the 2.6.x doing better than 2.5.6, it should be easy to merge the changes across. Then onto 2.8.x for some data poisoning ideas!

Skilgannon21:48, 23 July 2012
 

DoctorBob Testing

500 rounds against DoctorBob does wonders at testing whether surfing mechanics are working correctly, and hunting down what is wrong. It runs really quickly and just looking at how much bullet damage DoctorBob gets is a great indicator of how accurate your surfing is. There is usually ~50 damage required for learning (for me at least), and the rest accumulates due to bad positioning and inaccurate prediction.

A quick breakdown of DoctorBob's bullet damage by version for 500 rounds:

2.4.14: 96

2.5.2: 155

2.5.4: 62

There's a bit of noise in there, for sure, but it's fairly easy to tell when something is working ;-)

Skilgannon22:13, 10 July 2012

survival score

Wow, it's interesting your survival score has remained so high with these buggy versions that kill your APS. I've been wondering recently if your high survival is sort of just because that's where the points come from when you start getting up that high. (Like maybe when you get over the hump of losing 0 instead of 1 rounds to a lot of bots.) Maybe that's still partly true, but it appears something particular about DrussGT gives him way better survival skills than Diamond.

Voidious21:46, 10 July 2012

Yeah, not sure why that is. Although Diamond is only 0.2% behind 2.4.14, so I don't think it is related to "where the points come from". It might have to do with my bullet power selection algorithm; it starts cutting back on the bullet power really early if my energy drops.

Then again, it might be due to the kind of bugs that were being experienced, and who they most affected. I just figured out what was wrong: the second wave prediction points were being initialized with a time relative to the first wave's firing instead of the absolute time. Doh. So maybe the bots that the second wave helps most against weren't bots I was likely to lose an additional battle to?

Skilgannon22:09, 10 July 2012
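The energy-based cutback mentioned above is typically something of this shape; the divisor and floor here are guesses for illustration, not DrussGT's actual constants.

```java
public class PowerCutback {
    // Scale bullet power down as my own energy drops, so a losing bot
    // conserves energy instead of firing itself toward disablement.
    public static double cutback(double defaultPower, double myEnergy) {
        return Math.min(defaultPower, Math.max(0.1, myEnergy / 20));
    }
}
```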
 

2.4.9 broken?

You've probably already noticed, but did you break something in 2.4.9? [1] At first I suspected a bad client after what happened to Wallaby, but I think pa3k has been running his clients for a while with no issues.

Voidious21:39, 25 June 2012

0 bullet damage for both bots? That's a wacky result right there! :(

Tkiesel21:47, 25 June 2012
 

Too awesome

It's so cool that you do not rest with pushing further and higher.

PEZ12:13, 4 December 2011

Totally agree, even if it is also frustrating to some of us. =)

Voidious14:41, 4 December 2011
 

Anti-Diamond tuning =)

Seems to work too.

PEZ19:38, 24 November 2011

Yup, it was the result of a genetic tuning =) I'll try with using both Diamond and Tomcat next, possibly adding a time attribute in to improve the rolling.

Skilgannon22:40, 24 November 2011

I'm curious, are you using WaveSim for the genetic tuning, or full robocode battles? It would be a lot faster to use things like WaveSim, but on the other hand I consider WaveSim bad for testing against surfers.

Rednaxela04:17, 25 November 2011
 

I hacked up my own version of WaveSim which does just the bare basics. It took about 80 seasons with a population of 30 to converge to a final set of weights. I agree, using a fixed dataset to simulate an adapting target isn't really ideal, but the speedup is just phenomenal and as PEZ said, it seems to work :-)

Skilgannon06:49, 26 November 2011