Talk:Diamond/Version History
Contents
- 1 ELO inaccuracy
- 2 Avoiding recent enemy locations
- 3 1.11* bugs / fixes
- 4 One thing at a time
- 5 One-on-one
- 6 MeleeRumble cruelty
- 7 Multiple gun waves experiment
- 8 MeleeRumble 2nd place
- 9 Performance Enhancing Bug in 1.32
- 10 Wavesurfing Views
- 11 Diamond 1.392
- 12 BulletPower
- 13 Rating order
- 14 DV vs GF
- 15 Diamond 1.461
- 16 Precise intersection (1.47*)
- 17 Dia 1.48
- 18 1.5.0
- 19 1.5.2
- 20 1.5.5
- 21 1.5.16
- 22 1.5.21
- 23 1.6.0
- 24 1.6.2
ELO inaccuracy
About "Note: Despite lower ELO, was about .3% APS better than 1.0.", that's not surprising to me at all. Glicko-2 seems to be far more true to a full-pairing APS than ELO was too. Things like this make me glad the new server doesn't just show ELO like the old one :) --Rednaxela 22:02, 17 May 2009 (UTC)
- Although I had second thoughts about setting the APS as the standard ranking decider, I must agree that ELO is not as reliable as it was on the old server(s). Mind you that ELO is calculated slightly differently on this server than on the old ones. --GrubbmGait 22:11, 17 May 2009 (UTC)
- ELO scores have recently taken a nosedive, for several reasons. There has been a lot of new activity recently with lots of bots being updated and a few new ones added in -- that tends to shake things up. Also, several long-running bots were removed within a short time period, notably pederson.Moron which once anchored the bottom end of the scale. It's safer to compare APS instead of ELO, especially right now. --Darkcanuck 15:18, 18 May 2009 (UTC)
- I only pulled Moron because it seemed fairly pointless. I don't mind if he is returned to shore up a ratings slide, but that seems like giving a cancer patient a Band-Aid. On a related note, a long time ago I got the notion that the average ELO rating of the old rating system was 1600. I dropped the ratings list in Excel and confirmed that the ratings averaged to about 1600. At the time, most people were wondering what I was smoking, dismissing 1600 as anything of relevance. I recently did another averaging of the ratings and found ELO to average at 1413 and Glicko-2 averages 1608. Dunno if it really means anything, but ELO certainly doesn't compare to the old ratings.--Martin 16:24, 18 May 2009 (UTC)
Avoiding recent enemy locations
Well, I found a bug in my risk calculation for avoiding recent enemy locations (fixed but not tuned in 1.071). I really feel like this must be a good idea (because bullets are likely to be headed to those spots), but I hadn't found any rating boost from it yet. Hopefully I can find some points in a re-tuned, bug-free version of this... --Voidious 17:25, 21 May 2009 (UTC)
1.11* bugs / fixes
Man, this Performance Enhancing Bug really drove me nuts, especially since I have so much trouble testing such a little thing. I think I can live with a .15% APS drop in my "bug-free" 1.115 (versus the "buggy" 1.111). I'm sure I'll continue tuning the randomness of his movement in the future, anyway, and I know there are more performance gains to be had elsewhere. This is long and boring, but I feel compelled to write it out, even if just to have the info "out there" somewhere.
The bug was with my random direction change timer. This timer influences the bot to reverse direction at a random interval (with a risk added for disobeying the timer). If the timer trips while the bot is moving with negative velocity, there would be a risk associated with continuing in that direction, and the bot would soon change direction towards positive velocity. Before the timer trips, there's a risk associated with reversing direction.
With the bug, once the velocity went from negative to zero (or positive), it mixed up its directions and thought that negative velocities were the safer direction. This was caused by two things: it always considered zero velocity to be the same as positive, and it was comparing the possible future heading to the heading from one tick ago (instead of heading from the current tick). So here's the problem scenario:
- Diamond is moving with negative velocity.
- The timer trips, so now there's a risk in continuing in this direction.
- He begins to change direction, eventually hitting zero velocity (or, rarely, as high as +1).
- He notices he's changed direction (because zero is the other direction) and resets the timer, meaning now there is a risk with changing direction.
- But he compares possible movement direction against that of one tick ago, which is a negative velocity, so he thinks that way is the "same" direction.
In short, the direction changing worked fine when moving with positive velocity, but for negative velocity, it was more likely to just stop and then continue in that direction. At this point, I've just removed this timer and tuned the other randomizing factor (risk from recent locations) a bit more.
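A minimal sketch of the corrected check, assuming a simple risk-based timer like the one described above (all names are hypothetical, not Diamond's actual code): the key is to compare against the current tick's velocity and to treat zero velocity as keeping the current orientation rather than as positive.

```java
// Hypothetical sketch of the direction-change risk check described above.
// The fix: compare against the CURRENT tick's velocity, and don't treat
// zero velocity as automatically "positive".
public class DirectionChangeRisk {
    static int sign(double velocity, int lastNonZeroSign) {
        // Zero velocity keeps the previous orientation instead of counting as positive.
        if (velocity > 0) { return 1; }
        if (velocity < 0) { return -1; }
        return lastNonZeroSign;
    }

    static double risk(double proposedVelocity, double currentVelocity,
                       int lastNonZeroSign, boolean timerTripped, double penalty) {
        int proposedDir = sign(proposedVelocity, lastNonZeroSign);
        int currentDir = sign(currentVelocity, lastNonZeroSign);
        boolean sameDirection = (proposedDir == currentDir);
        // Before the timer trips, reversing is penalized; once it trips,
        // continuing in the same direction is penalized instead.
        if (timerTripped) {
            return sameDirection ? penalty : 0;
        } else {
            return sameDirection ? 0 : penalty;
        }
    }
}
```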
--Voidious 15:32, 26 May 2009 (UTC)
One thing at a time
The changes in 1.12 and 1.121 are a great example of why you should follow the "one change at a time" dogma. From 1.072 (best version at the time) to 1.08, I removed one thing: risk from recent enemy locations, and added another: risk factor based on damage given to enemy. 1.08 went down 0.1% APS, so I thought, "well that's barely beyond the margin of error, I'll just leave it". From 1.115 to 1.12, I restored the risk to recent enemy locations and saw a .25% APS drop. From 1.115 to 1.121, I removed the damage given risk factor and am seeing a .4% APS gain (pairings almost complete). Yay! --Voidious 18:25, 27 May 2009 (UTC)
- Bah, a few more battles and it's not even above 1.115. I guess I should be patient (still only at 1,000 battles). Oh well... =) --Voidious 21:14, 27 May 2009 (UTC)
One-on-one
At least you've started working on One-on-one! » Nat | Talk » 14:49, 1 June 2009 (UTC)
Just a little bit to keep myself sane. =) Melee is really hard. Spending a day on 1v1 is therapeutic because I actually know what I'm doing. =) --Voidious 14:56, 1 June 2009 (UTC)
Also, I know I've covered this topic ad nauseam already, but I just have to vent: flatteners are so evil!! They entice you with their 50+ scores against CC in the MC2K6 and then they destroy your RoboRumble rating... =) --Voidious 16:02, 1 June 2009 (UTC)
- Evil?!? Not if going for PL ;) --Rednaxela 18:52, 1 June 2009 (UTC)
- Yeah, yeah, that's what the devilish flattener always says! =) Just kidding, obviously I'll try to tune it better. But they sure are touchy buggers. --Voidious 19:13, 1 June 2009 (UTC)
- Once you get them tuned they can actually give you points. I know on DrussGT the flattener gives a good 5 ELO points at least. The trick is putting your bot against the enemy with the lowest hitrate that you would want the flattener enabled against (Ascendant comes to mind) and then putting the threshold just a tiny bit below that. --Skilgannon 19:56, 1 June 2009 (UTC)
- Yeah, I've tested removing the flattener from Dookious and it costs me points (I think it was about 5, too, not to mention killing PL score). The thresholds there are really well tuned and conservative. I'm just feeling greedy now. =) I think I'm going to try to come up with something more clever than just hit percentages (even my carefully normalized ones), like measuring the adaptation rate of the enemy gun, for enabling criteria. Turning the flattener back off when the enemy hit-% drops below the threshold (as I do in Dookious) seems silly from one perspective, because it could just mean the flattener's working, but I've always felt that permanently enabling it carried too much risk. --Voidious 20:05, 1 June 2009 (UTC)
- Hmm... Normalized hitrates... I should compare those used in RougeDC's firepower selection with the ones you use for flattener enabling :) --Rednaxela 01:10, 2 June 2009 (UTC)
- I tally bullets fired as normal, but bullet hits are weighted by distance and precise escape angle range. So a bullet hit from twice as far away or in a situation with twice as big a max escape angle would count as two hits. (Edit: Oh yeah, I also subtract 1 from bullets fired for onBulletHitBullet.) For my bullet power management experiment in Dookious 1.60, I normalized out the same stuff in a different way, but I didn't use precise max escape angles, because I was feeding the escape angle into a formula for testing 300 bullet powers each tick. =) --Voidious 01:31, 2 June 2009 (UTC)
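A rough sketch of that kind of weighted hit tracking (the class, field names, and reference values are hypothetical, not Dookious's or Diamond's actual numbers):

```java
// Hypothetical sketch of a normalized hit-rate tracker as described above:
// hits are weighted by distance and precise escape angle range, and
// onBulletHitBullet subtracts one from the shots-fired tally.
public class NormalizedHitRate {
    private static final double REFERENCE_DISTANCE = 400;     // assumed baseline
    private static final double REFERENCE_ESCAPE_RANGE = 0.8; // assumed baseline (radians)

    private double shotsFired = 0;
    private double weightedHits = 0;

    public void onShotFired() {
        shotsFired++;
    }

    public void onBulletHitBullet() {
        shotsFired--; // that shot never got a fair chance to hit
    }

    public void onBulletHit(double distance, double preciseEscapeAngleRange) {
        // A hit from twice the distance, or with twice the escape range,
        // counts as two hits.
        weightedHits += (distance / REFERENCE_DISTANCE)
            * (preciseEscapeAngleRange / REFERENCE_ESCAPE_RANGE);
    }

    public double hitRate() {
        return shotsFired <= 0 ? 0 : weightedHits / shotsFired;
    }
}
```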
- I've also been thinking about ways to make the flattener smarter, and one was by enabling it when entropy shows that it is predicting where they will shoot better than the regular hit surfing. I haven't tried it yet though. --Skilgannon 20:33, 1 June 2009 (UTC)
Wow, nearly 1% APS improvement with Dooki's gun, meaning more than half the difference between Diamond and Dookious lies in the gun. I guess the movement is already capable of some pretty nice PL performance, too. Exciting! --Voidious 13:59, 13 July 2009 (UTC)
MeleeRumble cruelty
So I feel 99% certain that 1.196 is functionally equivalent to 1.183, but that's a big discrepancy: 1.196 vs 1.183. I recompiled the source I had for 1.183 for a re-release, and it's coming in somewhere between: 1.183b vs 1.183. A binary comparison of the .class files shows that they are all the same besides one which I never update, so I'm confident the source is right. The MeleeRumble is a cruel, frustrating beast... --Voidious 16:02, 10 July 2009 (UTC)
It does make a big difference to a robot's score whether it fights a set of sample robots (which exist in melee) or a battle full of ABC's, rozu's and Justin's robots. Although the difference you point out isn't as much as I expected. Perhaps we should have a better way to control the melee score. Perhaps we need to weight the score based on the opponent's level (which could be taken from the ranking). But that's work. » Nat | Talk » 17:29, 10 July 2009 (UTC)
Multiple gun waves experiment
Well, I'm really surprised this worked, and now I'm wondering if anyone else has tried it before. Whenever I fire a gun wave in melee, I now fire an additional gun wave from the last known location of each (still living) enemy (besides the target). The idea is that I can easily collect gun data from the perspectives of all bots on the battle field, and that hopefully, I'll get a better/faster picture of the enemy movements this way.
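A loose sketch of the idea, with made-up class names standing in for the bot's real data structures:

```java
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

// Loose sketch of the extra melee gun waves described above; Wave and Enemy
// are hypothetical stand-ins for the bot's own data structures.
public class MeleeWaveSketch {
    static class Enemy {
        Point2D.Double lastKnownLocation;
        boolean alive;
    }
    static class Wave {
        Point2D.Double source;
        Enemy target;
        double bulletPower;
        long fireTime;
        boolean realWave; // false for the extra data-only waves
        Wave(Point2D.Double source, Enemy target, double power, long time, boolean real) {
            this.source = source; this.target = target;
            this.bulletPower = power; this.fireTime = time; this.realWave = real;
        }
    }

    final List<Wave> waves = new ArrayList<>();

    void fireGunWaves(Point2D.Double myLocation, Enemy target,
                      List<Enemy> enemies, double power, long time) {
        // The real wave, fired from my own location at the current target.
        waves.add(new Wave(myLocation, target, power, time, true));
        // Extra waves fired from every other living enemy's last known location,
        // to gather data on the target's movement from their perspective too.
        for (Enemy e : enemies) {
            if (e != target && e.alive) {
                waves.add(new Wave(e.lastKnownLocation, target, power, time, false));
            }
        }
    }
}
```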
A couple bug fixes and a bit of polishing later, I had a 0.74% improvement in my test bed from the new gun waves, on top of a 0.14% improvement with the tweak to the "number of bots" weight (also in 1.283). I'm now looking at a ~0.6% APS improvement in the rumble for 1.284 over 1.282, and not too far behind Aleph. Yay!
--Voidious 15:35, 24 August 2009 (UTC)
I've thought about something similar before, but I've yet to build a real melee gun. Really nice stuff! What I wonder, is what the results would be like if you added other points like the corners to that list, since many bots move relative to the corners as well... --Rednaxela 15:43, 24 August 2009 (UTC)
- Well, with the way my gun records and reconstructs firing angles, firing waves from the corners for that reason wouldn't have the same effect as if you were using GuessFactors. (The enemy movement isn't recorded relative to the wave source, but relative to the enemy's initial heading.) But indeed, firing waves from additional sources is a good idea to try, and you just gave me the idea to try a "heading relative to nearest corner" attribute... --Voidious 16:03, 24 August 2009 (UTC)
Congrats, that's a huge and clever improvement! --Positive 15:58, 24 August 2009 (UTC)
- Thanks! Still a ways to go before I catch Aleph or Portia 1.13, and I don't even want to think about Shadow 3.83, but I'm very happy for now. =) --Voidious 16:03, 24 August 2009 (UTC)
Very clever indeed! That kind of specific melee gun idea is exactly what made Shadow finally break away from Aleph in the rumble. You (and Positive) are not as far from Shadow as you might think, imo. --ABC 16:58, 24 August 2009 (UTC)
Niice! (wall distance.. corner distance) I never thought to try that attribute in the gun.. sounds like a great attribute! And the gun collecting data from other enemies.. a truly great idea Void.. -Jlm0924 18:55, 10 January 2011 (UTC)
MeleeRumble 2nd place
It seems that no-one noticed, due to the ....Hawk hype, that Diamond passed Aleph and reached second place. Congratulations Voidious! And do I read the results of DiamondHawk correctly if I state that your movement is better than Shadow's, but your gun is holding you back?? --GrubbmGait 05:38, 28 August 2009 (UTC)
Thanks for noticing. =) However, Portia 1.13 actually still has 2nd place by a small margin if he were un-retired. I'm still pretty excited about passing Aleph, though! And yes, I'm surprised by how many points I have left in my gun (and working on it now =)). If the *Hawk results translate linearly, this would put Diamond's movement ahead of Shadow 3.84, which is really exciting (even to be close!), but still a little behind 3.83. --Voidious 05:44, 28 August 2009 (UTC)
- How about ask ABC to create Shadond? I'm curious if he will get to the throne (or even SHA3.83?) » Nat | Talk » 13:47, 28 August 2009 (UTC)
Now you know what I meant by you and Positive being closer to Shadow than you thought ;). Shadow 3.83 melee movement is just a normal (and very old) Minimum Risk movement, the melee strategy page on this wiki has long contained all the tricks I use. Aleph's movement has probably always been slightly better than Shadow's, but my gun made the difference. At this time I suspect that the strongest combination would be my gun with Portia's movement. --ABC 15:00, 28 August 2009 (UTC)
Well, perhaps the very strongest combination would be... Shadow's gun, Portia's Melee movement, and Diamond's 1v1 movement, but the 1v1 part would only make a small difference I'm sure... :) --Rednaxela 15:04, 28 August 2009 (UTC)
ShadowHawk and DiamondHawk gave me a pretty good idea where my gun and movement stand, so I don't feel a need to request Shadond. (That is a hilarious name, though.) I'm quite content to wait until I can challenge Shadow myself. =) ABC (or Positive) is free to make a hybrid if he likes - my code is very pluggable, as always. 1.30 is officially at #2 now, so maybe I'll focus on 1v1 for a bit. =) --Voidious 15:21, 28 August 2009 (UTC)
- The only obstacle is RWPCL. So unless you officially gave them permissions, they can't =) » Nat | Talk » 15:32, 28 August 2009 (UTC)
- Oh bah, the statement "ABC (or Positive) is free to" is good enough for all intents and purposes :) --Rednaxela 15:47, 28 August 2009 (UTC)
(Edit conflict) Diamond 1.30 has definitely passed Portia 1.13 now, so congrats on reaching 2nd place! It's interesting to think about what combination would be best. I think Shadow's movement and gun are quite good for the first few turns of a 1v1 fight, because they don't wait for waves to reach the target (something GF-targeting does have to wait for). In melee, you often only get a few shots while down to 1v1, so it's extra important to have a quick start. --Positive 15:27, 28 August 2009 (UTC)
- Thanks! I don't understand, why would GF targeting have to wait for waves to reach the target before firing? Diamond sometimes stays still for a moment when it gets down to 1v1, but I'm not really sure why, actually... I should investigate that. --Voidious 19:51, 28 August 2009 (UTC)
- What I mean is that GF-guns can only use a simple targeting method to aim before they have any data, and they only have enough data to make an informed shot after a few waves have hit. A pattern matcher can already detect and simulate movement (for example, stop 'n go) before its first waves hit. --Positive 21:45, 28 August 2009 (UTC)
- Oh, of course. But in a Melee battle, you can still use the targeting data you collected during Melee when it gets down to 1v1, right? I don't see why a GF gun can't use that Melee data (as I'm sure Shadow does). Shadow is an awesomely quick learner, though, there's no doubt about that. --Voidious 21:56, 28 August 2009 (UTC)
- That's true, you could use a combination of the melee data and new data. However, I think most advanced robots behave or will behave differently from melee in 1v1, so it will be or is better to use new data as fast as possible. Also, if you look at Shadow's debugging graphics, it seems like it doesn't use the melee data. :P --Positive 22:06, 28 August 2009 (UTC)
- Really? Interesting. Diamond uses "number of bots" as one of its gun dimensions, so 1v1 data is favored, but it uses whatever it has. If I were using GF + VCS, I would probably sum a lower-weighted buffer that doesn't segment on # of bots and a higher weighted one that does, so it would have something to work from before collecting the 1v1 waves. =) --Voidious 22:11, 28 August 2009 (UTC)
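A minimal sketch of summing a coarse buffer (no bot-count segmentation, lower weight) with a fine one (segmented on bot count, higher weight); the segmentation, bin count, and weights here are illustrative, not anyone's actual values:

```java
// Hypothetical sketch of combining a buffer that ignores bot count with a
// higher-weighted buffer that segments on it, as described above.
public class CombinedBuffers {
    static final int BINS = 47;
    static final int MAX_BOTS = 10;

    // Segmented only on (say) a distance bucket.
    final double[][] coarse = new double[5][BINS];
    // Segmented on distance bucket AND number of bots alive.
    final double[][][] fine = new double[5][MAX_BOTS + 1][BINS];

    void registerHit(int distBucket, int botsAlive, int gfBin) {
        coarse[distBucket][gfBin]++;
        fine[distBucket][botsAlive][gfBin]++;
    }

    int bestBin(int distBucket, int botsAlive) {
        double coarseWeight = 0.3, fineWeight = 1.0; // assumed weights
        int best = BINS / 2;
        double bestScore = -1;
        for (int bin = 0; bin < BINS; bin++) {
            double score = coarseWeight * coarse[distBucket][bin]
                         + fineWeight * fine[distBucket][botsAlive][bin];
            if (score > bestScore) {
                bestScore = score;
                best = bin;
            }
        }
        return best;
    }
}
```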
Oh, very nice - congrats! Perhaps this will trigger the release of Shadow 3.85? --Darkcanuck 15:56, 28 August 2009 (UTC)
- I sure hope not... =) --Voidious 19:51, 28 August 2009 (UTC)
Wow, 1.31 is 0.4 APS from Shadow 3.84, congrats! Now I really have to see if I can squeeze some more points from Shadow's gun/movement. --ABC 12:30, 29 August 2009 (UTC)
- Thanks! =) Now that I've got it working, your Melee Gun design is so powerful, it almost feels dirty. Thanks for sharing. I'll probably focus on 1v1 for a bit now, anyway. And seeing as rating points are so much harder to find as you ascend towards #1, I think 3.83 still has a pretty solid lead over Diamond. --Voidious 17:09, 29 August 2009 (UTC)
I just noticed, you are now second in both melee and 1v1 =) Great work you've been doing on Diamond. I'm curious, how much do your melee and 1v1 code interact? For instance, at the end of a melee when there are only 2 bots left (1 being Diamond), does your 1v1 movement kick in with no values? Or does it have some sort of idea where the enemy will be aiming due to its observations during the melee? --Skilgannon 17:59, 29 August 2009 (UTC)
- Thanks! While there's nothing quite like being #1 ;), I'm really satisfied with Diamond's progress. It's also great fun having active competition from Positive, ABC, and Justin... Hopefully, I can give you that "pleasure" on the 1v1 side sometime. =) I know it gets lonely up there. As to your question:
- On the movement side, they are totally separate, much like you said. When it gets to 1v1, the surfing kicks in with preloaded HOT-avoidance, not knowing anything from melee bullet hits. Of course it saves surf stats per-enemy across rounds, too. If I had some Portia-style melee bullet dodging, I'd surely try to integrate it more, but as of now, Diamond's melee movement doesn't even notice energy drop.
- On the gun side, they are more integrated. One of the gun dimensions is number of bots alive, but it doesn't discard melee data for 1v1 scenarios or vice versa. Wall distance is calculated differently (though recorded in the same slot), I use different weights when aiming (including some dimensions being used in only 1v1 or melee), and now the melee gun uses the Shadow/Melee Gun technique.
- --Voidious 19:10, 29 August 2009 (UTC)
- Hmm. So the gun works pretty optimally between melee/1v1 but the movement doesn't. The movement I've been working on expands seamlessly from 1v1 to melee, with full surfing capabilities, and full learning based not only on hits to yourself, but to all other bots that you can scan. The logic is amazingly complex, with decisions about bullet hits needing to be deferred until more data is collected and checked for bullet bonuses etc, but I see no reason (aside from CPU limits) why it isn't perfectly viable to learn how everybody is shooting all at once =) This is the main reason I haven't progressed that far with it - it's such a massive project that I'm not sure where to begin, and I'm also scared that I'll spend several hundred hours in development and then the whole thing will be a flop =) --Skilgannon 20:35, 29 August 2009 (UTC)
- Sounds very cool, and very ambitious. =) I see how a lot can be approximated about enemy targeting during melee, I'm just very skeptical it can be precise enough to be useful for Wave Surfing, which thrives on ultimate precision. But Portia has found a lot of success, ABC is trying likewise, and it sounds like Rednaxela has some good melee dodging now too, so maybe it's worth a shot. Good luck, in any case - it'd be way cool to see you enter the melee arena. --Voidious 21:30, 29 August 2009 (UTC)
I just saw that Diamond 1.31 is PL king at the moment (100% Pairs won), congrats! --Positive 00:14, 1 September 2009 (UTC)
- Thanks... Though the top bots are always so close and I release so many versions, it was bound to happen eventually. =) Likewise, great work with Portia 1.19 - great way to start the semester, eh? Cheers, --Voidious 02:30, 1 September 2009 (UTC)
Performance Enhancing Bug in 1.32
Hm, well it seems that the bug fixed in 1.323 was slightly performance enhancing, which tells me that something isn't ideal about using inverse-distance weighting for scans... Hmm... --Rednaxela 14:58, 31 August 2009 (UTC)
Yeah, I don't know what's up with that. It's not that far beyond the margin of error, but still, I was hoping for a boost. I'm rewriting a lot of my data logging / danger projection into a bigger / badder system, anyway, so I actually don't care all that much. (It was actually while writing up my new system that I noticed this bug.) For what it's worth, in my experience, weighting scans by inverse distance is quite essential against simple targeters, where your most similar scans tell you exactly where the danger is and the rest are just noise. --Voidious 15:30, 31 August 2009 (UTC)
Well, I've found in various experiments over time that weighting scans by some function of distance is essential indeed, but that a simple inverse is suboptimal. --Rednaxela 16:07, 31 August 2009 (UTC)
What, according to your experiments, would be closer to optimal then? The inverse of some power of the distance perhaps? --Navajo 22:01, 1 September 2009 (UTC)
I'm also curious to hear what Rednaxela thinks... I used to use inverse square root of distance in Lukious, and I think Skilgannon tends to prefer inverse manhattan distance. Diamond is one of the strongest DC movements and is currently using a simple division by distance, for whatever that's worth. (In terms of rumble points, I think it just passed Hydra for the strongest DC movement.)
Semi-off-topic ramblings: I think we end up tuning around a lot of arbitrary things in our very complex bots, so when we change something and lose points, we think the change was universally "bad", when it really was just bad in our specific case. That's just a hunch, partially based on how often I and others tweak values that we've never tweaked and (seem to) find that our first instinct for that value cannot be improved upon.
--Voidious 23:00, 1 September 2009 (UTC)
- (OT) I was just thinking the same -- 99% of my tweaks end up with similar or slightly worse performance, it's usually the bigger changes that yield results. --Darkcanuck 03:41, 2 September 2009 (UTC)
- Yes, sometimes I make a change that, when tested over several seasons, yields some APS gain. I never upload a version of YersiniaPestis (after 1.3.7) that is not able to beat all my test bed bots on average, but in the rumble luck plays a huge role. Right now version 3.0 loses to Locke, and I haven't tested against him, but I'm quite confident that if they fought a few more times that would change. The only reason I haven't really cared about that is because Shadow is currently losing to two bots :). --zyx 04:09, 2 September 2009 (UTC)
Well, I had the most success with things like 1/distance^n or 1/n^distance type things for some magic tweaked value of n. What I found most interesting of all, though, was that the best values were highly specific to the dimensions and surfing algorithm in question. When I said that I found a simple inverse to be suboptimal, I didn't really have anything very specific in mind that I expected to be better for Diamond, but I had a doubt, reinforced by this PerformanceEnhancingBug, that a plain inverse was truly optimal. --Rednaxela 05:59, 2 September 2009 (UTC)
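For reference, a tiny sketch of the weighting functions being discussed, where n is a magic value to be tuned (the epsilon guards against division by zero and is purely illustrative):

```java
// Sketch of the scan-weighting options discussed above; n would need tuning
// per bot and per set of dimensions.
public class ScanWeighting {
    static double inverse(double distance) {
        return 1.0 / (distance + 1e-9);            // plain 1/distance
    }
    static double inversePower(double distance, double n) {
        return 1.0 / Math.pow(distance + 1e-9, n); // 1/distance^n
    }
    static double exponential(double distance, double n) {
        return 1.0 / Math.pow(n, distance);        // 1/n^distance
    }
}
```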
Wavesurfing Views
I'm curious... what do you mean by wavesurfing views? Multiple kd-tree weightings, the results of which are composition into an overall profile perhaps? --Rednaxela 14:40, 3 September 2009 (UTC)
Yeah, basically. I had trouble coming up with a word for it. A "view" consists of a distancing function, attribute weights, cluster size, max size (before cycling out old points), its own tree, and some other stuff like enablement criteria (hit percentage threshold, flattener mode). I already had some of this stuff in my movement, but it was hard-coded. Now it's easy for me to add/remove/change these multiple views. And yes, I now have a bunch more of them weighted and layered at all times. --Voidious 14:55, 3 September 2009 (UTC)
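A rough sketch of what such a "view" might hold, with field names guessed from the description above rather than taken from Diamond's source:

```java
// Rough sketch of a surfing "view" as described above; all names are
// hypothetical and the real structure surely differs.
public class SurfView {
    String name;
    double[] attributeWeights;  // per-dimension weights for the distance function
    int clusterSize;            // k for the nearest-neighbor lookup
    int maxDataPoints;          // cycle out old points beyond this
    double weight;              // how much this view counts in the combined danger
    double hitRateThreshold;    // only enable above this enemy hit percentage
    boolean flattenerMode;      // whether this view logs every wave or only hits
    Object tree;                // this view's own kd-tree (type omitted here)

    boolean enabled(double enemyHitRate) {
        return enemyHitRate >= hitRateThreshold;
    }
}
```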
Diamond 1.392
Wow! Diamond 1.392 is doing great so far! Looks like it will take the throne! I really wonder how DiamondHawk would score with those tweaks... and I'm really surprised you never waited until the gun was aimed before. --Rednaxela 00:56, 21 September 2009 (UTC)
- Thanks. =) Still quite a few battles to go, but my fingers are crossed. Before, I'd wait for the gun to be turned only when it got down to 4 bots or less. Maybe ignoring it very early in the round is still a good idea, but I guess that was way too late... Assuming these 700 battles aren't a fluke, I'll post an updated DiamondHawk tomorrow. --Voidious 01:05, 21 September 2009 (UTC)
- 425 battles to go and Diamond holds the top spot with a narrow 0.2% APS margin... --Darkcanuck 04:36, 21 September 2009 (UTC)
- At least it doesn't have to fight mini/microbattles ;-) Congrats man, seems that you have achieved the impossible. --GrubbmGait 06:02, 21 September 2009 (UTC)
- Congrats! It's amazing how such a simple change codewise can make such a large difference. I'm definitely going to try to catch up. :) --Positive 10:03, 21 September 2009 (UTC)
Congratulations, you did it! So much time wishing for some melee competition, I finally got what I asked for ;). Now you defend that throne the best you can, there are other very strong contenders coming for it. --ABC 10:45, 21 September 2009 (UTC)
Congrats. I'm updating the Wikipedia page, but still leaving 'best overall megabot' to Shadow. » Nat Pavasant » 11:15, 21 September 2009 (UTC)
Thanks guys! I still feel Diamond 1.392 vs Shadow 3.83 is a bit too close to call, but even a draw makes me pretty ecstatic. =) --Voidious 12:51, 21 September 2009 (UTC)
- The information on Wikipedia is the current MeleeRumble king, so it is Diamond. » Nat Pavasant » 13:18, 21 September 2009 (UTC)
- Thanks, but I still feel free to have my own view of things. =) --Voidious 13:21, 21 September 2009 (UTC)
Great work on Diamond. I just have to hope you don't make a similar push in 1v1 =) --Skilgannon 14:30, 21 September 2009 (UTC)
Congrats on your new shiny crown. --zyx 14:37, 21 September 2009 (UTC)
I didn't think anyone would beat Shadow this soon, but you did it, congratulations, it is great. --Navajo 20:47, 21 September 2009 (UTC)
Well then, it also seems that you've stolen my recently obtained title of "Strongest Melee Gun" from me with this tweak... I may have to delay the full release of Glacier further than I had initially planned in order to fix this... :) --Rednaxela 00:15, 22 September 2009 (UTC)
I noticed some bugs in my melee bullet power selection code and I've been obsessing over it since. It's there in 1.392 and since 1.382. The most glaring one is setting bulletpower=2.999 if enemiesAlive <= 7, instead of >= 7. Attempts to fix/update bullet power have all lost points, but leaving the silly logic doesn't sit well with me. 1.393 and 1.395 both showed improvement in my test bed over a significant number of seasons, but lost points in the rumble. Argh, maybe I'll just leave it for now... it's not the most exciting thing to work on. --Voidious 13:09, 23 September 2009 (UTC)
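To make the bug concrete, a simplified illustration (the 1.9 fallback power is just a placeholder, not Diamond's actual value):

```java
// Simplified illustration of the melee bullet power bug described above.
public class MeleeBulletPower {
    static double buggy(int enemiesAlive) {
        // As shipped: full power only when 7 or FEWER enemies remain.
        return (enemiesAlive <= 7) ? 2.999 : 1.9;
    }
    static double intended(int enemiesAlive) {
        // What was meant: full power while 7 or more enemies are still alive.
        return (enemiesAlive >= 7) ? 2.999 : 1.9;
    }
}
```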
I don't think that setting should turn out badly per se. At the start of the game robots move a lot (go to corners, etc.), so they might be harder to hit with slow bullets. Somewhat later they settle into more static positions when bots aren't constantly dying. With Portia I tried to avoid using 3.0 bullets as much as possible, and made it use 2.5 bullets for that reason. By the way, I have a similar 'bug' in Portia's targeting: it prefers using the first few results of the KdTree search, but those results aren't ordered on lowest error. 'Fixing' it causes point loss, and I have no idea why. --Positive 15:39, 23 September 2009 (UTC)
BulletPower
Hmm... 1.40 seems to be performing almost as well as 1.392, though not quite. Hooray for PerformanceEnhancingBugs. I've lately found that even seemingly minor/improving BulletPower tweaks can cause nightmarish (IMO) score drops in the melee rumble. By the way, mind if I test out Diamond 1.40's bulletPower choosing in a GlacialHawk 1.11 release? (And as an aside, it seems that GlacialHawk 1.10 has stolen the 'Strongest Melee Gun' back, even if only by a small margin :)) --Rednaxela 17:40, 24 September 2009 (UTC)
I noticed, congrats. =) Feel free to borrow the bullet power - I'm curious too. I'm chalking up 1.392 and 1.40 as equal, as it's pretty close, so I can rest easily again. (1.40's Wave Surfing is also slightly weaker, but not sure it's enough to matter in melee.) --Voidious 17:50, 24 September 2009 (UTC)
Hmm... seems that GlacialHawk's bulletPower calculation is stronger in melee rumble overall, but weaker in battles with the stronger bots. Actually, it's just a very slightly modified HawkOnFire bulletPower calculation, which is far slimmer in code, but seems well-tuned overall for melee. As a side note, I find it somewhat amusing how Hypothermia is stronger in melee, yet is less than 1/4th of DiamondFist's code even when only stripping out the non-melee and anti-surfer modes :) --Rednaxela 15:16, 25 September 2009 (UTC)
- [Edit conflict] You do know that size doesn't matter? ;) Personally I have never found it worth the effort to investigate different bulletpower schemes, so I still fire full-power at close range, 1.9 at medium and peas at large distances. --GrubbmGait 15:32, 25 September 2009 (UTC)
- I know size doesn't matter. I'm interested in simplicity, not code size. :) --Rednaxela 15:57, 25 September 2009 (UTC)
I must say, it's nice to see you don't have lots of points in a simple bullet power tweak. :-P DiamondFist definitely has some bloat right now from recent gun experiments, but I don't obsess much over code size, anyway... And I'm not sure I agree with your measurement: it looks like GlacialHawk 1.11 is just over 50% the code size of DiamondHawk 1.02 (12302 vs 23435). --Voidious 15:30, 25 September 2009 (UTC)
- Well, I once compared lines of code between the core functionality of each gun, rather than codesize. I suspect the discrepancy is due to things like 1) enemy status/history storage is more separated from the gun in Glacier, 2) both include significant amounts of code that are not a core part of gun functionality, and 3) different code style. I think comparing the gun package only would make GlacialHawk appear much leaner (not that the other parts don't matter equally). --Rednaxela 15:57, 25 September 2009 (UTC)
Rating order
It would be good if you kept your rating order the same. I mean, sometimes you put the MeleeRumble rating first but sometimes you put the RoboRumble rating first. --Nat Pavasant 14:57, 1 October 2009 (UTC)
- I do mean to put MeleeRumble first always, but I did a bunch of them at once and used the wrong order, apparently... Thanks for the heads up. --Voidious 14:58, 1 October 2009 (UTC)
DV vs GF
I wonder, if you switched Diamond's kNN 1v1 gun to GF, how much would you gain? You said that TripHammer's performance increased a lot when you switched from DV to GF. --Nat Pavasant 15:10, 1 October 2009 (UTC)
I think it would gain some in the TC, but not much in the rumble. Even when TripHammer was at 90.56 in TCRM vs Diamond's Main Gun at 89.85, using TripHammer in the rumble gained me almost nothing. Also, since TripHammer and Diamond share a ton of code, the TripHammer KNN test that got 91.20 in the TCRM could just as well be called Diamond + GF + different cluster size + TripHammer's kernel density. --Voidious 15:18, 1 October 2009 (UTC)
Diamond 1.461
Congrats on passing Dookious! --Nat Pavasant 15:03, 16 October 2009 (UTC)
Thanks, I'm pretty stoked! =) On the other hand, I could try rolling those changes back into Dookious, too... --Voidious 15:06, 16 October 2009 (UTC)
Wow! Interesting that it increased your score so much, in DrussGT I settled on a 'best distance' of 500 but I never really experimented with it beyond the MC2K7. --Skilgannon 16:30, 16 October 2009 (UTC)
- I'm surprised, too. The first change I tested was to 450, but the score dropped so much in my test bed that I tried the opposite =), and saw a nice improvement. The attack angle changes came out even in my test bed, but I prefer one mode to two. --Voidious 17:18, 16 October 2009 (UTC)
- Congrats, I would have to test something like that, but the best desired distance for YersiniaPestis so far is 400; if I set it to 450 it already drops a lot of score (especially PL-wise). But I've always thought that's because my gun is pretty bad compared to top guns, and the movement is what gives me the edge, so being relatively close allows me to increase my hit rate without being hit that much. If I set it to ~370, it crushes Shadow, about 60% on average, but Dookious and DrussGT feast on him. --zyx 05:53, 17 October 2009 (UTC)
- I thought that when you fight more aggressively, you get better PL but worse APS and vice versa. --Nat Pavasant 12:53, 17 October 2009 (UTC)
I figured it was inevitable that you'd eventually top Dookious. Now, do you have enough tweaks left to topple WaveSerpent and DrussGT? =) --Darkcanuck 17:58, 18 October 2009 (UTC)
Precise intersection (1.47*)
This precise intersection stuff is cool, even though it seems like sooo much code for such a tiny detail! Calculating the line segment / circle intersections was the first algebra I've done by hand in years (see the sketch after the list below). =) Really surprised by some of these results...
- 1.47 - Totally lame and imprecise implementation, not surprised it did poorly.
- 1.471 - Real precise intersection, simulating the given movement option until the wave totally passes. Uses center of angular range as the firing angle, half the range as the kernel density bandwidth.
- 1.472 - No precise intersection. Use the angle from wave source to bot center on the first tick that the wave could hit the bot, use (18 / predicted distance) for kernel density bandwidth. (This is how I've done it forever...)
- 1.473 - Precise intersection, but instead of predicting the movement option until the wave passes, predict slamming on the brakes as soon as the wave could hit the bot. My rationale is a little complicated and specific to my exact surfing, but it's basically like this:
- I want to consider how dangerous it is to choose this movement option. That may not include going full speed through the whole wave passing, which will create a large bot width.
- Slamming on the brakes once the wave starts passing will minimize bot width, and thus projected dangers (not always, but in most cases, I am guessing).
- Once the surfing really does reach the first tick the wave intersects the bot, it will predict one tick into the future for each movement option, then (just for precise intersection) slamming on the brakes until the wave passes. So it will eventually consider going full speed through the wave, or when to really slam on the brakes.
- I'm really shocked how well this worked. Will go hunting for other explanations (read: other bugs =)) later.
- Note that my "slam on the brakes" precise intersect prediction is a separate branch from my main precise prediction. I also want to try predicting a slam on the brakes only after wave has passed center.
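As referenced above, a minimal sketch of the line segment / circle intersection math (standalone geometry, not Diamond's actual implementation): it returns the points where a wave circle crosses one edge of the bot's bounding box, from which the covered angular range can be taken.

```java
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of line segment / circle intersection. Given one edge of
// the bot's bounding box and a wave circle (center + radius), return the
// intersection points; the precise angular range covered by the bot is then
// built from points like these on all four edges.
public class WaveIntersection {
    static List<Point2D.Double> segmentCircleIntersections(
            Point2D.Double p1, Point2D.Double p2,
            Point2D.Double center, double radius) {
        List<Point2D.Double> result = new ArrayList<>();
        double dx = p2.x - p1.x, dy = p2.y - p1.y;
        double fx = p1.x - center.x, fy = p1.y - center.y;

        // Solve |p1 + t*(p2 - p1) - center|^2 = radius^2 for t in [0, 1].
        double a = dx * dx + dy * dy;
        double b = 2 * (fx * dx + fy * dy);
        double c = fx * fx + fy * fy - radius * radius;
        double discriminant = b * b - 4 * a * c;
        if (a == 0 || discriminant < 0) {
            return result; // degenerate segment or no intersection
        }
        double sqrtDisc = Math.sqrt(discriminant);
        for (double t : new double[] { (-b - sqrtDisc) / (2 * a),
                                       (-b + sqrtDisc) / (2 * a) }) {
            if (t >= 0 && t <= 1) {
                result.add(new Point2D.Double(p1.x + t * dx, p1.y + t * dy));
            }
        }
        return result;
    }
}
```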
--Voidious 17:43, 22 October 2009 (UTC)
- I have similar logic in YersiniaPestis, but for every tick. On every tick I see what happens if I start stopping, so the actual decision is how many ticks until I have to brake; I find the least dangerous option in each direction and keep the smaller of those two. --zyx 19:24, 22 October 2009 (UTC)
Dia 1.48
Congrats! (never noticed this 'till I saw your tweet =)) By the way, does this mean that kNN is better than k-means? And are you still using GF or DV in one-on-one? --Nat Pavasant 13:50, 24 October 2009 (UTC)
Thanks! I think it does mean kNN is better - 0.1% is almost within margin for error, but I also saw a nice boost in my test bed. I wanted to remove the k-means just because it seemed to be lots of code and complexity for little value. The main gun uses GFs with precise MEA, and the Anti-Surfer gun uses DV. I might experiment with making the Anti-Surfer gun use GFs soon. --Voidious 16:37, 24 October 2009 (UTC)
1.5.0
Nice update for 1.50. Good overall gain thus far! --Miked0801 22:07, 11 February 2010 (UTC)
Thanks. It was a very patient process of: make slight tweak, run 300 battles (2700 pairings) in my test bed. =) Eliminated two attributes and tweaked all the weights in the melee gun. Will probably need to use my brain for the next update, though... --Voidious 22:29, 11 February 2010 (UTC)
1.5.2
Finally putting some real focus on PL prowess now. My test bed is 16 of Diamond's toughest matchups. 1.5.1 was a 1.3% APS improvement over 1.49 and 1.5.2 was another 0.7% APS. While 1.5.2 is sitting at 3 losses in PL, in my testing over 50 battles, it still loses to Shadow, Pris, DrussGT, WaveSerpent, and Dookious, plus basically a draw (50.2%) against YersiniaPestis. Shadow is the only one that's not close, at 42.2%; the rest are ~48% or more. --Voidious 18:30, 24 February 2010 (UTC)
1.5.5
Awesome bulletpower tweaks! I had no idea that there was still room for improvement there. And of course #1! (for now) --GrubbmGait 12:48, 11 April 2010 (UTC)
I know, I'm completely amazed...! And happy. =) --Voidious 16:04, 11 April 2010 (UTC)
That's very odd! Do you think it's due to a bug in the robocode engine? --Skilgannon 16:49, 11 April 2010 (UTC)
It hadn't occurred to me, but with such a freakishly large gain, you have to wonder. It could also be that the distance > 700 thing is where the rumble points came from, while the 1.95 thing was just something about my test bed... I should try reverting default to 1.999 and see what happens, but I kind of want to continue testing more bullet power tweaks now. =) --Voidious 17:07, 11 April 2010 (UTC)
I'll release a DrussGT with that change, see if it makes any difference =) Although I feel kind of guilty, since this is the results of your research... If I get time I want to try to make a platform that can use any gun in the sort of rumble-less environment you've set up for Diamond. Unless yours already does that? --Skilgannon 17:16, 11 April 2010 (UTC)
No need to feel guilty, I've sure looked over your version history for inspiration plenty of times. When you say rumble-less environment, do you mean the TripHammer/Research stuff? That just lets me test the classification, ie what angle to aim at, not bullet powers or anything like that. My general testing is still just RoboResearch and test beds created with the help of my BedMaker script. (Currently, 48 bots in the 70-90% score range.) --Voidious 17:31, 11 April 2010 (UTC)
Huh, it still feels strange =) Mostly because I'm still king I guess? I'm not sure. I didn't fully understand your TripHammer/Research stuff. I thought it was something where you saved the sequences of moves the enemy bot would make and then iterated everything forward without the actual Robocode engine itself. Anyways, that is the vision of what I want to create. Something that essentially provides an optimized targeting environment without any extra overhead, and which calculates all the stats for you. Thus, it would work for any gun, you just need to implement all the methods for onXXX() and log which gets called when, and the actual gun does the rest. It probably wouldn't be nearly as fast as yours, but could probably still cut benchmarking time in half. --Skilgannon 19:23, 11 April 2010 (UTC)
Not quite - it's another step removed from that, I guess. It basically stores raw wave data: attributes, hitting GuessFactor / bearing offset, bullet power, and id of last wave collected before this one was fired. For most wave-based guns, this is everything you need to train your gun or decide what angle to aim at. So then it iterates through the data, feeding your gun the same data it would have collected in a real battle, seeing what angle it would have aimed at for each firing wave, and checking if it would have hit. Yeah, storing actual positions would cost you some speed, but give you a lot more freedom in what you can test with it. --Voidious 19:39, 11 April 2010 (UTC)
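In case it helps anyone building something similar, a sketch of that kind of raw wave log and replay loop; the record fields and interface are guesses at the shape of such a system, not the actual format:

```java
import java.util.List;

// Sketch of a raw wave log and replay loop as described above; WaveRecord
// and TestableGun are hypothetical.
public class GunReplay {
    static class WaveRecord {
        double[] attributes;      // the gun's input dimensions at fire time
        double hitOffset;         // GuessFactor / bearing offset that would have hit
        double bulletPower;
        int lastCollectedWaveId;  // last wave collected before this one was fired
        boolean firingWave;       // true if the gun actually fired on this wave
    }

    interface TestableGun {
        void train(WaveRecord wave);                          // feed a collected wave
        double aim(WaveRecord wave);                          // offset the gun would choose
        boolean wouldHit(double aimOffset, WaveRecord wave);  // within bot width?
    }

    static double score(TestableGun gun, List<WaveRecord> log) {
        int shots = 0, hits = 0;
        for (WaveRecord wave : log) {
            if (wave.firingWave) {
                shots++;
                if (gun.wouldHit(gun.aim(wave), wave)) {
                    hits++;
                }
            }
            // Feed the gun the same data it would have collected in a real
            // battle (the real system uses lastCollectedWaveId to decide
            // exactly which waves are available at aim time).
            gun.train(wave);
        }
        return shots == 0 ? 0 : (double) hits / shots;
    }
}
```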
Ok, back on topic =) It seems my survival was boosted about 0.2% but APS wise there was essentially no difference. I wonder if it's a thing with your test bed? Or maybe the way you weight/distribute your attributes? --Skilgannon 05:38, 12 April 2010 (UTC)
And were you also coming from a default of 1.999ish?? I guess I should just try reverting to 1.999 and see what happens in the Rumble... --Voidious 13:42, 12 April 2010 (UTC)
Yep, 1.999. Doing a direct compare, actually, it seems that survival went up 0.4% but APS didn't change at all. --Skilgannon 14:46, 13 April 2010 (UTC)
I just discovered a major bug in DrussGT, causing the majority of waves not to be logged, which was affected by any bullet power which gave a bullet velocity of XX.X5. 1.95 is one of them, with a bullet velocity of 14.15. I wonder if DrussGT happened to sneak into your test bed, throwing off your scores =)? Unless other robots also do matching of detected bullet velocity to the wave velocity by multiplying by 10, then rounding, then testing for equality? Needless to say, I'll release a new DrussGT the moment the current one stabilizes. This one shouldn't get 35% against Diamond =) --Skilgannon 17:14, 13 April 2010 (UTC)
Haha... you know, I noticed the high 60's against DrussGT, but it just seemed kinda weird and I shrugged it off. =) So it was a rounding error, like 14.14999 rounded down and didn't match? What a strange series of events to find that bug. I actually compare bullet powers rounded to 1 decimal place, which should have the same problem. I doubt most bots in my test bed are even detecting energy drop, but now you have me thinking it must be something like this. Bullet power 1.95 was like 0.5 APS better than 1.94 or 1.96 over 30 seasons of a 48-bot test bed, which is just baffling. --Voidious 17:51, 13 April 2010 (UTC)
- Yep, exactly. If you test 1.85 or 2.05 against the values around them you should notice a similar phenomenon =) Could you see which bots in your 48 were affected? --Skilgannon 18:03, 13 April 2010 (UTC)
- Well, 30 seasons for an individual bot isn't terribly precise, but I will take a look... --Voidious 18:34, 13 April 2010 (UTC)
Lol, GresSuffurd made an almost 0.5 APS point jump from spot 20 to spot 16, just by changing bulletpower from 1.9 to 1.95. I wish there were more of these 'logical' improvements to make :D --GrubbmGait 17:59, 13 April 2010 (UTC)
I wonder how much of that was from going from 30s against DrussGT to high 60s? =) --Skilgannon 18:03, 13 April 2010 (UTC)
- Well, a 30% change... divided by about 700 participants... results in 0.04% APS change... therefore only roughly tenth of this is from those DrussGT results :P --Rednaxela 18:21, 13 April 2010 (UTC)
Just tested against Diamond 1.5.5 and BasicSurfer suffers from this bug. A lot of bots are based off BasicSurfer. Doh! I'll fix the code to check if the difference is under some threshold. I tested Diamond and noticed that it too suffered from this, tho it seems quite a bit less frequently. --Voidious 18:34, 13 April 2010 (UTC)
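The fragile comparison and the threshold fix being described look roughly like this (a sketch, not the exact BasicSurfer or DrussGT code):

```java
// Sketch of the fragile wave-matching comparison and the threshold fix
// described above. Bullet velocity is 20 - 3 * power, so power 1.95 gives
// 14.15, which floating point can represent as 14.1499..., rounding the
// "wrong" way so the wave is never matched.
public class WaveMatching {
    // Fragile: multiply by 10, round, compare for equality.
    static boolean fragileMatch(double detectedBulletVelocity, double waveVelocity) {
        return Math.round(detectedBulletVelocity * 10) == Math.round(waveVelocity * 10);
    }

    // Safer: accept any difference below a small threshold.
    static boolean thresholdMatch(double detectedBulletVelocity, double waveVelocity) {
        return Math.abs(detectedBulletVelocity - waveVelocity) < 0.01;
    }
}
```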
Chase's bots especially, they're (Seraphim and Prototype) both scoring less than 10% against the 1.95 power bots. It looks like 1.95 might be a valid improvement until everyone catches on =) --Skilgannon 18:40, 13 April 2010 (UTC)
1.5.16
Interesting... Seems the gunheat wave helped a bot already this high up more than I expected. I'm glad this trick I started in RougeDC works well :) --Rednaxela 13:23, 12 May 2010 (UTC)
Yeah, very cool and innovative idea! That reminds me I should give you credit. Sometimes I forget that what seems like "common knowledge" to me (that you invented this) is anything but. =) --Voidious 13:56, 12 May 2010 (UTC)
1.5.21
Curious about how many points I lose to my Virtual Guns, I tested with the Anti-Surfer gun off a while back. Avg score went down a whopping 0.5% in my test bed! So I guess my AS gun (whether due to fast adapting or some bots being more susceptible to Displacement Vectors, I'm not sure) helps for more than just surfers. Removing bullet hits showed a noticeable improvement - hopefully it translates in the rumble. --Voidious 15:42, 29 May 2010 (UTC)
Are you sure your test bed doesn't include any surfers? I remember your score also went up a bunch when you discovered the x.x5 bug in BasicSurfer. --Skilgannon 09:32, 30 May 2010 (UTC)
It includes PhoenixOS and BlackHole, which are surfers, and Tron 3 might surf too. My score actually went down against all 3 of them with 1.5.21. But, darn it... most of that drop when I turned off the AS gun was due to PhoenixOS and Tron. So that explains that. Looking back at the 1.95 thing, I still don't understand it. Biggest improvement was 8% vs Jekyl, a very old non-surfer, and 4% vs Earth, apparently a surfer based on CC. I gained 0.6% APS across 30 bots. Maybe it's time to refine this test bed. --Voidious 16:56, 30 May 2010 (UTC)
1.6.0
Congrats on being the first to implement bullet shadows in normal surfing! I'm really curious about how it'll work out :) --Rednaxela 03:48, 21 August 2011 (UTC)
I'm curious too - it was killing it in my tests (knock on wood). The only down side is this is pretty straightforward for anyone else to do too if it proves to be valuable. (Well, I guess there's only one person that I really care about... =)) But it's still a really cool addition to the state of the art, and beyond this basic technical implementation, there's probably room for strategic enhancements. --Voidious 04:05, 21 August 2011 (UTC)
Wow, that's a nice score increase! I predict that more bots will be implementing this soon... --Darkcanuck 01:30, 22 August 2011 (UTC)
Thanks! I never would've guessed it might be worth 0.8 APS, but I started getting optimistic once it was destroying my 100-bot test beds. I'll enjoy my time sharing the thin air up here with DrussGT while it lasts, which I suspect won't be long. =) Now to turn my attention to that PL (/Condorcet/batch ELO) crown...--Voidious 02:43, 22 August 2011 (UTC)
Hah! Sneaking up with a potential death blow, huh? This has always been one of those things that I thought wouldn't give much of an advantage for a whole lot of extra work... I didn't realise how easy this would be to implement until I tried it! I've been running MC2K7 on various tweaks all night. Just one more variation to test, then I'll be ready for release I think. Using DC and precise intersection should be easier with this technique... just zero the region of the intersection and it's done. For me, the tweaks that can be done to try to take the discretization of the bins into account make things a little more hairy =) Anyway, state of the art indeed. Nice thinking on this - if I ever get around to coding my Targeting Conditions Manipulation surfing, well, I expect some competition! --Skilgannon 05:56, 22 August 2011 (UTC)
Very nice score increase indeed. As a heads up, Chase-san has made "cs.Nene MC58k7" the second robot in the rumble to do passive bullet shadow surfing, haha. Also... hmm... it seems bullet shadows will now go on the looooong list of things for me to include on my next surfer. --Rednaxela 06:23, 22 August 2011 (UTC)
Oh, a question about implementations: are you calculating where bullet shadows will be before the bullets pass through the waves, or are you only updating shadows once the bullet actually hits the wave? Currently I'm doing the latter, but I suspect the former may improve scores a bit. Also, precalculating hits may be nasty once actual bullet hits start happening - keeping track of which bullets (which may now be gone) improved which waves (which may now be gone) and in what order is a bit nasty. (This question goes for you too, Chase) --Skilgannon 07:33, 22 August 2011 (UTC)
- At the moment I only add the bullets once they actually pass the wave, as this was much simpler to add. — Chase-san 10:21, 22 August 2011 (UTC)
- Why would order matter? The shadows exist for every pair of bullet/enemy wave until/unless there is a BulletHitBullet, then you can recalculate them all. I still say it's not nasty, despite the fact that I moronically forgot to remove the destroyed bullet before recalculating in 1.6.1. --Voidious 16:03, 22 August 2011 (UTC)
Lol @ "death blow". =) Yes, I calculate as soon as a bullet or wave is added. And as of 1.6.1 (which sadly seems to be exploding somehow on Darkcanuck's client) I recalculate all bullet shadows from onBulletHitBullet. It's not really that nasty. I have my lists of active bullets and active enemy waves - when I add one, I loop through the other, and onBulletHitBullet I loop through both, clearing the old shadows first. --Voidious 09:26, 22 August 2011 (UTC)
Congrats. Looking at your debug graphics, are the dangers of the dots the actual dangers you use to calculate movement, or do you just use those for painting and not use bins for your surfing? I don't use bins in Gilgalad to make the botwidth more precise, but this makes it difficult to mark an area on a wave as safe. --AW 13:37, 22 August 2011 (UTC)
- I don't use bins - indeed I do just calculate the danger at 51 specific points for the debug graphics. --Voidious 13:44, 22 August 2011 (UTC)
- Well for marking areas as safe without bins, I believe there are two ways:
- The simple way: Just scale the risk depending on the proportion of the botwidth that is covered by the shadow. (Potential trap of getting caught by bullets near but outside shadows, but Nene MC58 shows this simple method is better than no bullet shadows)
- The accurate way: Ensure you have a risk function that works with the guessfactors ranges rather than just the center. Then ensure your risk function satisfies the criteria "risk(A to B) + risk(B to C) = risk(A to C)". If your risk function is defined that way, you can subtract the risk of the overlap with the shadow (be sure to not double-count multiple overlapping shadows on the same wave)
- --Rednaxela 14:16, 22 August 2011 (UTC)
- Rednaxela's #2 is clearly the most accurate way, but seemed really hard. My two approaches were his #1, and since you're dealing with raw firing angles already when you don't use bins, just ignore any angles that fall within a bullet shadow when doing kernel density to calculate danger. Both of those options performed about the same, but somewhat shockingly, doing both was almost twice as good in my test beds. --Voidious 14:26, 22 August 2011 (UTC)
- Exactly what I was thinking (re: accurate but difficult), but I have an idea on how to approximate it. The reason it is hard is that we want the danger to push us as far as possible from the firing angles, and to do this, you can use bins (less accurate) or as we do, some smoothed kernel density estimation. So basically we want to approximate the integral of the kernel density estimation across the bot width and use that for our danger. To mark an area as safe, we need to find the danger on the intersection of the interval we will mark as safe and the interval the bot will cover and subtract that from the danger of this movement option as calculated without bullet shadows. So the question is now: "how do you approximate the integral?" Because the danger function is smoothed relative to our bot width, we previously used the center of the bot to approximate danger, but we can do it differently without using much more CPU time. We can approximate the danger by taking the Riemann sum (average the upper and lower for more accuracy) of the kernel density estimation on the interval the bot will cover (for more accurate calculations use more points.) Approximate the danger on the intersection using the same method, subtract this from the movement option's danger, and use this new value. I hope that was clear enough.--AW 15:13, 22 August 2011 (UTC)
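A rough sketch of the Riemann-sum idea described above, with a hypothetical density interface and a single shadow interval for simplicity (multiple overlapping shadows would need merging first to avoid double-counting):

```java
// Rough sketch of the Riemann-sum approximation described above; the
// kernel-density function and shadow interval are hypothetical placeholders.
public class RiemannDanger {
    interface Density {
        double at(double angle); // smoothed kernel density estimate at this angle
    }

    // Approximate the integral of the density over [low, high] with a
    // midpoint-style Riemann sum.
    static double integrate(Density density, double low, double high, int samples) {
        double width = high - low;
        double sum = 0;
        for (int i = 0; i < samples; i++) {
            double angle = low + width * (i + 0.5) / samples;
            sum += density.at(angle);
        }
        return sum * width / samples;
    }

    // Danger over the angular range the bot will cover, minus the part that
    // falls inside a bullet shadow [shadowLow, shadowHigh].
    static double shadowedDanger(Density density, double botLow, double botHigh,
                                 double shadowLow, double shadowHigh, int samples) {
        double total = integrate(density, botLow, botHigh, samples);
        double overlapLow = Math.max(botLow, shadowLow);
        double overlapHigh = Math.min(botHigh, shadowHigh);
        if (overlapHigh > overlapLow) {
            total -= integrate(density, overlapLow, overlapHigh, samples);
        }
        return total;
    }
}
```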
- Also, if anyone can understand / clarify the algorithm above, perhaps it should be added to Bullet Shadow? --AW 15:17, 22 August 2011 (UTC)
- I think that is very similar to using bins =) The difference being using bins only for the bullet shadow, not the danger function... --Skilgannon 15:37, 22 August 2011 (UTC)
- My view is... why bother with the mess of doing approximations of integrals? Using the exact formula is not hard. Just design a risk function for "probability they fire at a given angle", do a proper integral of that, and just start using it. No need for the extra coding complexity of iterative approximations. It's really not difficult at all, so long as you have a well-defined risk function in the first place. If your kernel density function for "probability of where they fire" is the sum of gaussian curves, the integral over a botwidth is made up of sums of the "error function". It seems to me that doing it in an accurate way is only difficult if your risk function is not well-founded in the first place. --Rednaxela 15:56, 22 August 2011 (UTC)
- Actually, Rednaxela's full method is more or less precisely what Nene did originally. However, it required changing my risk function, and the new risk function did not perform as well as the older one did, resulting in an overall drop in score. I have attached a little doodle I drew detailing how it works. — Chase-san 15:23, 22 August 2011 (UTC)
- http://img189.imageshack.us/img189/4650/bulletshadowintegrals.th.png
Even with bins it's a complicated story. Because of the finite number of bins, there are obviously cases where bins are only partially covered. So what do you do? Round it at 0.5, take the risk and act like they're empty? Act conservatively by using floor and ceil and not take advantage of that extra half bin that could have been safe? Obviously, the larger the number of bins the better, but isn't it always like that... right now I'm weighting the bin by the portion that was covered, but I don't really like it. According to MC2K7 it worked just a smidgeon better when rounding, but not against the weak bots - I lost 0.3 on average compared to partial bins. The possibility for tweaking is endless...--Skilgannon 14:58, 22 August 2011 (UTC)
- Funny you should say that about endless tweaking. This struck me as a super straightforward feature that you'd implement, collect your 0.8 APS, and move on. :-) Well, before considering more strategic applications where you modify when/where you fire. I may try doing the integral thing to subtract the exact danger in the shadowed region, but I don't expect much gain from that, if any. And it seems non-debatable that that's the most accurate way of modeling the impact on surfing dangers, right? The approximation of 171 bins can't be too far off. (Btw, we should prolly cut/paste all this to Talk:Bullet Shadow sometime...) --Voidious 18:07, 22 August 2011 (UTC)
Nice idea and improvement :) This is similar to enemy waves passing other bots, which is effective in melee. This may help in melee too. (Make sure you're accommodating for missed scans/interpolation.) Will the new Diamond be released in Melee? -Jlm0924 16:26, 22 August 2011 (UTC)
- I wasn't planning to, but I'm sure I'll post an updated version to Melee sometime. I don't surf except for in 1v1 scenarios, so this stuff probably wouldn't matter too much in Melee. ;) --Voidious 17:54, 22 August 2011 (UTC)
1.6.2
Looks like there are still bugs in this version, judging from a few odd battles (from your client this time, not mine). Doesn't seem to hurt your score too much though... --Darkcanuck 20:27, 22 August 2011 (UTC)