Talk:Midboss
Latest revision as of 09:34, 1 July 2010
"Ahab is stuck at sea in shark infested waters" <<< Ahab would be dead by now, even if he survived the battle with the White Whale, since whaling was in the 1800s. But this made me laugh for a good while. --Nat Pavasant 11:35, 10 January 2010 (UTC)
And doesn't WaveSerpent already use full Anti-Alias/Interpolation? --Nat Pavasant 12:46, 10 January 2010 (UTC)
- Ah true, WaveSerpent did implement it too, yeah. --Rednaxela 16:17, 10 January 2010 (UTC)
Version 1d
Well... this is a refreshing surprise... it looks like 1d is gaining a few spots over 1c, and all I did was the following:
add
add(Dimension.ADVANCINGV, true, -8, 0, 8);
add(Dimension.BFT, true, 0, 30, 60, 95);
to
add(Dimension.DISTANCE, true, 75, 225, 375, 525, 675);
add(Dimension.LATERALV, true, 0, 1, 2.5, 5, 7.5);
add(Dimension.ACCEL, true, -0.5, 0, 0.5);
add(Dimension.VCHANGETIME, true, 0.0, 0.2, 0.5, 0.95, 1.6);
add(Dimension.FWALL, true, 0.05, 0.25, 0.45, 0.65, 0.85, 0.95);
add(Dimension.BWALL, true, 0.175, 0.875);
I find this particularly funny because I found that similar things did not help when I made SaphireEdge long ago. Of course, the reason I decided to try adding those segments is that I felt they would come into play much more strongly once my own bot is moving, which doesn't happen during targeting tests.
It's been far too long since I've seen tangible gains from a 5 minute change... :) --Rednaxela 07:32, 10 January 2010 (UTC)
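For readers less familiar with this style of gun segmentation: each add(Dimension, ...) call above lists slice boundaries for one attribute, and an observed value is bucketed by how many boundaries it meets or exceeds. A minimal sketch of that idea (the Segmentation class and bucketFor method are my own illustrative names, not Midboss's actual code):

```java
// Illustrative slice-based segmentation, not Midboss's real API.
public class Segmentation {
    // Bucket index = number of slice boundaries the value meets or exceeds,
    // so n boundaries yield n+1 buckets (0..n).
    public static int bucketFor(double value, double... slices) {
        int index = 0;
        for (double slice : slices) {
            if (value >= slice) {
                index++;
            }
        }
        return index;
    }

    public static void main(String[] args) {
        // LATERALV slices from the post: 0, 1, 2.5, 5, 7.5
        System.out.println(bucketFor(3.0, 0, 1, 2.5, 5, 7.5));  // prints 3
        System.out.println(bucketFor(-0.5, 0, 1, 2.5, 5, 7.5)); // prints 0
    }
}
```

Adding the DISTANCE, ACCEL, VCHANGETIME, and wall dimensions multiplies the number of such buckets together, which is presumably part of why the gun's memory usage grew so much starting with 1d.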
- Nice work! You may drag me back in now that Pris has lost the top Canadian spot to Midboss... --Darkcanuck 16:34, 10 January 2010 (UTC)
Very nice indeed! I never got higher than #21 (APS 81.84) with GresSuffurd 0.2.13, which used only absolute velocity as a surfing dimension. Seems that spot #11 is feasible without too much effort. --GrubbmGait 16:50, 10 January 2010 (UTC)
- Thanks. True, but I'm pretty sure its simple surfing is the biggest thing holding Midboss back. Benchmarks show its gun to be in the same league as the top 10 bots, if not better than some, and various evidence over time has always shown the movement of RougeDC to be much weaker than its gun. --Rednaxela 17:50, 10 January 2010 (UTC)
Version 1e
Version 1e has a slight problem with rambots. At point blank range it selects a firepower between 0.1 and 0.25, probably not the best defence. Also, it no longer moves away after a few collisions. --GrubbmGait 01:26, 11 January 2010 (UTC)
Yeah, there was a bug in my new shiny/fancy score-estimation code, one of those little one-keyword typos, causing it to mix up its own damage bonus with the enemy's damage bonus in one important branch of the code. That's now fixed in version 1f, and it should be choosing bulletpower much more smartly now. Early tests show it boosting performance by as much as 10% compared to version 1d against some bots, so let's see how my new bulletpower selection experiment turns out. As for the movement, that hasn't changed since... some old version of RougeDC, so that's nothing new. --Rednaxela 03:03, 11 January 2010 (UTC)
Version 1g
Hey Voidious, any idea what's happening on your client? It's uploading bad results for Midboss 1g [1][2] and also uploaded a huge number of bad results for Midboss 1f. Is it an out-of-memory error perhaps? I know that ever since Midboss 1d the gun has taken up an enormous amount of memory, but there were no bad results for version 1d, so maybe it's something else? If it's an out-of-memory error, are you running with ITERATE=YES or are you using a loop in the bash script? If the former, that could be the source of the problem, because Robocode for some reason seems to leak memory. Considering how some of the results seem notably improved... this issue is rather frustrating. --Rednaxela 17:34, 11 January 2010 (UTC)
Yeah, one of my clients hit an "out of memory / Java heap space" overnight. I killed that one and left just one client running since this morning. I checked the one that crashed and it was not iterating, just looping via shell script. Sorry I didn't post about it, I was on my way out the door and it didn't strike me as likely to have caused bad results before crashing. I'll stop my other client when I get home and double check my config. Sorry man. =( --Voidious 17:55, 11 January 2010 (UTC)
Here's one theory about the running out of memory: testing is showing that Robocode, at least 1.7.1.6, never allows memory held in a bot's static variables to be cleared by the GC, even after the same bot loads again. This is odd because it was my understanding that when the classloader Robocode made to load the robot dies, everything that classloader loaded should become eligible for GC[3][4]. This says to me that Robocode has a bug where it keeps using the same classloader, or keeps references to its old classloaders. Anyway, I've found that Midboss 1d uses roughly 60MB of memory per load, and Midboss 1g uses roughly the same. As a workaround, in Midboss 1h, I'll make an 'unload' type hook in the bot, which will clear out all static variables at the end of the round. I have no clue why this wasn't an issue with Midboss 1d..... Perhaps there are more problems when there is a greater number of high-memory-usage distinct bot versions running in a single 10-battle iteration. --Rednaxela 19:45, 11 January 2010 (UTC)
The other client that I left running still seems to be having no problem with Midboss, but I stopped it when I got home. They're both configured the same: 512 mb, NUMBATTLES=25, ITERATE=NOT, and looping via a shell script. I'm skeptical there's anything wrong with my client beyond any problem with Robocode itself or the bot, but I'll just leave my clients off for now, until we investigate further or you post a new version that you think has a workaround. --Voidious 01:46, 12 January 2010 (UTC)
It's probably the NUMBATTLES=25 that's making it show up as an issue on your client but not mine. I've reported a ticket to the Robocode tracker, since I'm very certain it's a problem in Robocode itself, and will in the meanwhile make a workaround by forcing the bot to clear ALL static variables upon receiving the onBattleEnded() event. This bug concerns me significantly because it doesn't just affect Midboss; I'm pretty sure it affects all bots with very high memory usage. Of course, once this gets fixed, there should finally be no issue with using ITERATE=YES. --Rednaxela 02:32, 12 January 2010 (UTC)
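A hedged sketch of the 'unload' workaround described here: null out the static references at battle end so the large buffers become unreachable even if Robocode leaks the classloader itself. The field names below are illustrative, not Midboss's actual ones:

```java
// Sketch of an 'unload' hook: clearing static references at battle end
// so a leaked classloader no longer pins large stat buffers in memory.
// Field names are illustrative, not Midboss's actual ones.
public class UnloadHook {
    static double[][] gunStats = new double[1000][64]; // stands in for the ~60MB of gun data
    static Object treeRoot = new Object();

    // In a real bot this would be called from onBattleEnded(BattleEndedEvent e).
    public static void unload() {
        gunStats = null;
        treeRoot = null;
        // Now only the (small) class itself can leak, not the buffers it held.
    }

    public static void main(String[] args) {
        unload();
        System.out.println(gunStats == null); // prints true
    }
}
```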
Interesting Emergent Behaviors
The new bulletpower selection seems extremely helpful against some bots; see the comparison for Druss, Gaff, and Chalk for instance! It's interesting that when I make an algorithm designed to optimize for score, it sometimes does things I just didn't expect. It sometimes seems to "give up" just before dying, shooting a few very high power bullets to use up its last energy very fast. My theory of why this benefits the score evaluation is that it denies the enemy some bullet damage score. Any thoughts on this? :) --Rednaxela 21:29, 11 January 2010 (UTC)
- It's not the first time I've heard of that idea, but it's very impressive that your system would come up with it! Your analysis of the effect on score must be pretty sophisticated. =) --Voidious 21:47, 11 January 2010 (UTC)
- Well, it does some prediction of what could happen, assuming it keeps firing the same power, the enemy keeps firing the same power, and the distance remains the same. It tries to explore a variety of scenarios for the results of the next few waves (both its own and the enemy's), but since CPU is not unlimited it must limit its depth of exploration, based on both depth and the likelihood of that possible branch of waves hitting/missing. When it reaches its limit (which is a fairly small limit really), it falls back on a more heuristic calculation. Based on all this it finds the average expected score change from now to the end of the match, for a given firepower. So far it seems to work decently, but it's much more difficult to debug than the old adaptive algorithm I developed in RougeDC. --Rednaxela 21:56, 11 January 2010 (UTC)
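The depth- and probability-limited branch exploration described in this reply can be sketched roughly as follows. All names, constants, and the fallback heuristic here are my invention; the real code also models enemy waves, energy, and score bonuses:

```java
// Rough sketch of branch-limited wave-outcome search: recurse over
// hit/miss branches, pruning by depth and branch probability, and fall
// back on a heuristic estimate at the limit. Constants are illustrative.
public class WaveSearch {
    static final int MAX_DEPTH = 4;
    static final double MIN_BRANCH_PROB = 0.01;

    // Expected damage over the next MAX_DEPTH waves for a given hit probability.
    public static double expectedScore(double hitProb, double damagePerHit,
                                       int depth, double branchProb) {
        if (depth >= MAX_DEPTH || branchProb < MIN_BRANCH_PROB) {
            // Heuristic fallback: assume average damage for the remaining waves.
            return (MAX_DEPTH - depth) * hitProb * damagePerHit;
        }
        double hitBranch = damagePerHit
                + expectedScore(hitProb, damagePerHit, depth + 1, branchProb * hitProb);
        double missBranch =
                expectedScore(hitProb, damagePerHit, depth + 1, branchProb * (1 - hitProb));
        return hitProb * hitBranch + (1 - hitProb) * missBranch;
    }
}
```

With this toy fallback the search reproduces the exact expectation (expectedScore(0.2, 10, 0, 1) is 8.0); the interesting behavior appears only when the fallback heuristic differs from what the simulated branches find.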
- Interestingly, this behavior seems to have disappeared as far as I can see in the newer versions. It seems that improving its accuracy/realism in various ways is causing it to no longer consider this move worthwhile. --Rednaxela 06:33, 20 January 2010 (UTC)
Version 1i
Interesting... version 1i seems to be doing well enough so far... at least no worse overall than versions 1d and 1h. The compare shows some very distinct weaknesses in my new bulletpower selection though:
- Bots that stop firing (Toa, Gladiator) throw things off, because it assumes the enemy always fires.
- 'Chase bullets' (CigaretBH) throw it off, because it assumes the enemy always fires at their most recent bulletpower.
- And... maybe something else? Can't figure out why WeeklongObsession or RainbowBot mess it up...
--Rednaxela 20:24, 12 January 2010 (UTC)
In the development version, I fixed the issues with chase bullets and bots that stop firing. Issues remain with WeeklongObsession, RainbowBot, WaveShark, and possibly others... not sure what's causing them. --Rednaxela 22:07, 12 January 2010 (UTC)
The WeeklongObsession/RainbowBot issues now seem to be random chance... however, it seems the Gladiator/Toa issue is not fixed after all... so it must be something other than what I thought... --Rednaxela 06:00, 13 January 2010 (UTC)
Also, hooray for finally getting a bot past the well-known CassiusClay :) --Rednaxela 17:49, 13 January 2010 (UTC)
Version 1l
Hmm... it seems rambot firepower was yet again broken... and a bunch of other places are having problems too... On the other hand, it got over 50% against DrussGT for the first time, among other improvements, so maybe I shouldn't simply revert... --Rednaxela 15:20, 16 January 2010 (UTC)
Version 1m/n
Heh... it seems the new code, which I know is more realistic and statistically correct, isn't giving as good results for some reason. I have one theory about the cause though: the old "hitrate estimation" code I was borrowing from RougeDC doesn't seem to be so great in practice, and doesn't account for distance and such very realistically. Hopefully I'll make some gains by switching to the following:
public void waveEnd(boolean hit, double distance, double wavespeed) {
    double maxEscapeAngle = Math.asin(8.0/wavespeed);
    double halfWidth = Math.atan(18/distance);
    double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
    if (hit) {
        b += uncertainArea;
    } else {
        a += Math.min(uncertainArea, halfWidth);
    }
    count++;
}

public double getHitrate(double distance, double bulletpower) {
    double maxEscapeAngle = Math.asin(8.0/(20-3*bulletpower));
    double halfWidth = Math.atan(18/distance);
    double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
    double uncertaintyMultiplier = a / b;
    double effectiveEscapeAngle = uncertainArea * uncertaintyMultiplier + halfWidth;
    double hitrate = Math.min(1, halfWidth / effectiveEscapeAngle);
    return hitrate;
}
Testing so far shows it closer to what I'd intuitively expect, but I haven't had a chance to test it in combat yet. --Rednaxela 17:47, 18 January 2010 (UTC)
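The excerpt references accumulator fields (a, b, count) that aren't declared in the quote. To exercise it in isolation, one can wrap it in a class supplying those fields; the seed values here are my own assumption, chosen only to avoid dividing by zero, not Midboss's actual initialization:

```java
// Self-contained harness around the estimator quoted above; the a/b
// seeds are arbitrary assumptions just to make the math runnable.
public class HitrateEstimator {
    double a = 1, b = 1; // miss/hit accumulators, seeded to avoid a/b = 0/0
    int count = 0;

    public void waveEnd(boolean hit, double distance, double wavespeed) {
        double maxEscapeAngle = Math.asin(8.0 / wavespeed);
        double halfWidth = Math.atan(18 / distance);
        double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
        if (hit) {
            b += uncertainArea;
        } else {
            a += Math.min(uncertainArea, halfWidth);
        }
        count++;
    }

    public double getHitrate(double distance, double bulletpower) {
        double maxEscapeAngle = Math.asin(8.0 / (20 - 3 * bulletpower));
        double halfWidth = Math.atan(18 / distance);
        double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
        double uncertaintyMultiplier = a / b;
        double effectiveEscapeAngle = uncertainArea * uncertaintyMultiplier + halfWidth;
        return Math.min(1, halfWidth / effectiveEscapeAngle);
    }

    public static void main(String[] args) {
        HitrateEstimator e = new HitrateEstimator();
        e.waveEnd(true, 400, 14.3);  // one hit at mid range
        e.waveEnd(false, 400, 14.3); // one miss at mid range
        // Predicted hitrate should fall as distance grows.
        System.out.println(e.getHitrate(200, 1.9) + " > " + e.getHitrate(600, 1.9));
    }
}
```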
So... after thinking about information theory some, I've made a new version that takes advantage of the 'binary entropy function' from information theory, in order to weight the 'information quantity' as evenly as possible. It is slightly input-order-dependent now, but I hope that's worth the gains in automatically maximizing the accuracy of the hitrate data.
public void waveEnd(boolean hit, double distance, double wavespeed) {
    double maxEscapeAngle = Math.asin(8.0/wavespeed);
    double halfWidth = Math.min(maxEscapeAngle, Math.atan(18/distance));
    double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
    // Calculate weight based upon the binary entropy function
    double p = getHitrateFromData(uncertainArea, halfWidth);
    double weight = (-p*Math.log(p) - (1 - p)*Math.log(1 - p))/Math.log(2);
    if (hit) {
        b += weight*uncertainArea/maxEscapeAngle;
    } else {
        a += weight*halfWidth/maxEscapeAngle;
    }
    count++;
}

public double getHitrate(double distance, double bulletpower) {
    double maxEscapeAngle = Math.asin(8.0/(20-3*bulletpower));
    double halfWidth = Math.min(maxEscapeAngle, Math.atan(18/distance));
    double uncertainArea = Math.max(0, maxEscapeAngle - halfWidth);
    return getHitrateFromData(uncertainArea, halfWidth);
}

private double getHitrateFromData(double uncertainArea, double halfWidth) {
    double uncertaintyMultiplier = a / b;
    double effectiveEscapeAngle = uncertainArea * uncertaintyMultiplier + halfWidth;
    double hitrate = Math.min(1, halfWidth / effectiveEscapeAngle);
    return hitrate;
}
One note about the model this uses: it should be extremely accurate at calculating theoretical hitrates under the following conditions: 1) no walls, 2) movement has a flat profile across all possible distances and BFTs. Now... I know reality doesn't meet those criteria, but I think reality is close enough that this is the best reasonable approximation I can make. --Rednaxela 19:50, 18 January 2010 (UTC)
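Under those two assumptions, the theoretical hitrate is simply the fraction of the maximum escape envelope the target's body covers. A quick worked number (my own illustration, using the same formulas as the snippets above): at distance 400 with power 1.9 (bullet speed 14.3), maxEscapeAngle = asin(8/14.3) ≈ 0.594 rad and halfWidth = atan(18/400) ≈ 0.045 rad, giving a flat-profile hitrate of about 7.6%:

```java
// Worked example of the flat-profile hitrate bound (my illustration).
public class FlatProfileHitrate {
    public static double hitrate(double distance, double bulletpower) {
        double wavespeed = 20 - 3 * bulletpower;
        double maxEscapeAngle = Math.asin(8.0 / wavespeed);
        double halfWidth = Math.min(maxEscapeAngle, Math.atan(18 / distance));
        return halfWidth / maxEscapeAngle; // fraction of the envelope the bot covers
    }

    public static void main(String[] args) {
        System.out.println(hitrate(400, 1.9)); // roughly 0.076
    }
}
```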
Actually... remove the "binary entropy function" multiplier... it's unneeded, because after the normalization I added, the formulae are already weighted appropriately. --Rednaxela 01:00, 19 January 2010 (UTC)
Version 2 series
In the (eventually) upcoming Midboss 2a, I will be exploring giving the surfing more advanced learning; for now, however, I will keep the old surfing framework from RougeDC, even though I know it's flawed in many ways. I just want to see how far I can take it and what lessons can be learned. --Rednaxela 17:49, 13 January 2010 (UTC)
Probability and statistics research
While doing some research I found an interesting thing, known as a "Bernoulli process", which turns out to be what my score prediction is attempting to approximate/model, in a way. My approach does brute-force simulation up to a certain depth, at which point it falls back on an approximation. This approximation treats damage as continuous in time, with a statistically correct mean and standard deviation. It isn't as accurate as I'd like, but it's correct in the limit as the time remaining in the battle approaches infinity, thanks to the central limit theorem. It's looking like, in order to formulate something more accurate than a normal-distribution-based approximation, I'd need to pull off some serious stats voodoo. Even if I did manage to, I have a hunch that it would not be closed-form, and thus unlikely to be more efficient than my brute-force method. --Rednaxela 04:11, 21 January 2010 (UTC)
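Concretely, if each of the remaining n waves hits independently with probability p for d damage apiece, total damage is Binomial(n, p) scaled by d: mean n·p·d and standard deviation sqrt(n·p·(1−p))·d, and the central limit theorem says this approaches a normal distribution as n grows. A sketch (the parameter values are invented for illustration):

```java
// Normal (CLT) approximation to remaining-battle damage, treating wave
// hits as a Bernoulli process. All parameter values are illustrative.
public class DamageApprox {
    public static double mean(int waves, double hitrate, double damagePerHit) {
        return waves * hitrate * damagePerHit;
    }

    public static double stddev(int waves, double hitrate, double damagePerHit) {
        // Variance of Binomial(n, p) is n*p*(1-p); scaling by d scales sigma by d.
        return Math.sqrt(waves * hitrate * (1 - hitrate)) * damagePerHit;
    }

    public static void main(String[] args) {
        // 100 remaining waves at a 10% hitrate, 10 damage per hit:
        System.out.println(mean(100, 0.1, 10));   // ~100.0
        System.out.println(stddev(100, 0.1, 10)); // ~30.0
    }
}
```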
1s
Hey, I noticed this pure rerelease came in at 0.2 APS below 1r. A lot of the worst comparisons to 1r came from my clients. I'm not sure if this is a "bad client" situation or just because I ran a lot of battles last night. (Amusingly, I had Midboss in my exclude list until yesterday, when I figured "ah it's probably fine to remove this now.") I don't see any zero scores. 48% vs WeeksOnEnd is suspicious, but I did just get a 51% running battles manually. Midboss skips a lot of turns some rounds on my MacBook 2.0 GHz. Sometimes none, sometimes just a few, and sometimes as many as 25 in a round. I suspect this could be the cause... I haven't tested on my MacBook Pro yet. --Voidious 14:28, 14 April 2010 (UTC)
Hmm, that may be the cause. Not sure even as many as 25 would make that big a difference though. I usually see no more than 5 skipped turns here. I suspect it may be a borderline case where differences in the particular CPU make a notable difference... not sure what I can do to speed Midboss up though... except maybe approximating its fancy firepower selection with either a big table or a formula. Maybe I should time how long the various parts of it take... =\ --Rednaxela 14:45, 14 April 2010 (UTC)
Well, I've tried WeeksOnEnd, non.mega.NoName, and gh.GrubbmGrb now, and gotten close to the rumble scores from my clients, so I'm satisfied that none of those results are due to crashes or a true "bad client". But yeah, Midboss is skipping some turns here, especially against WeeksOnEnd. I'll try the MBP when I get home and see how that compares. Maybe I should just exclude again for now. And let me know if there are any other tests you want me to try... --Voidious 14:53, 14 April 2010 (UTC)