Talk:DeBroglie
Contents
Thread title | Replies | Last modified |
---|---|---|
Improvement? | 1 | 18:10, 20 June 2012 |
deBroglie vs Combat | 1 | 18:25, 18 June 2012 |
????? | 33 | 23:52, 17 May 2012 |
Should I consider it a good thing when movement updates that improve my Rumble score overall drop my rating considerably against mirror movement bots? Seems intuitively right to me.. but it's an annoying pill to swallow.
Yes, unless you use some specific anti-mirror gun.
Nice work.
Thanks!
Combat is in my "Really tough to beat" stable of bots I test against. More than almost any other bot actually, Combat is sensitive to how I've been fooling with my flattener and stat rolling (I roll stats via half-life by what round the data point occurred in. It's a rough solution, but one of the easier ones to implement when dealing with Objects fetched via KNN instead of a VCS array.)
The fact that this version can (barely) beat Combat is no guarantee that future versions will. :)
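A minimal sketch of that half-life rolling idea, with invented names and a made-up HALF_LIFE constant (not DeBroglie's actual code): each stored point remembers the round it was recorded in, and its contribution decays by half every HALF_LIFE rounds instead of being dropped outright.

```java
// Hypothetical sketch of half-life stat rolling for KNN-fetched data points.
import java.util.List;

public class RolledStats {
    private static final double HALF_LIFE = 15.0; // rounds; an arbitrary tuning value

    /** Weight of a data point recorded in recordedRound, evaluated in currentRound. */
    public static double decayWeight(int recordedRound, int currentRound) {
        double age = currentRound - recordedRound;
        return Math.pow(0.5, age / HALF_LIFE);
    }

    /** Example: fold the guess factors of the k nearest neighbours into a decayed histogram. */
    public static double[] weightedHistogram(List<double[]> neighbours, int currentRound, int bins) {
        // each neighbour is {guessFactor in [-1, 1], recordedRound}
        double[] histogram = new double[bins];
        for (double[] point : neighbours) {
            int bin = (int) Math.round((point[0] + 1.0) / 2.0 * (bins - 1));
            bin = Math.max(0, Math.min(bins - 1, bin));
            histogram[bin] += decayWeight((int) point[1], currentRound);
        }
        return histogram;
    }
}
```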
This is impossible. I've watched that bot in awe for years.
Have you tried running it locally? Those results don't look suspicious to me, especially from 2 different clients. And assuming they're legit, congrats. :-)
I've yet to try it locally here, but that doesn't look so impossible to me. In particular, I'd note that the Aristocles movement you're currently using is a "flat profile" type movement, so you would tend to expect it to score well against high-ranked bots compared to where its own rank is. If the results are accurate, congrats indeed :)
Oh, and I'd note that further evidence you have some rather nice targeting on there, is that DeBroglie scores around 50% against PolishedRuby, which is a mirror movement bot with my RougeDC targeting. RougeDC and Scarlet score around 56% against PolishedRuby, so this would certainly imply that against a "flat movement" like Aristocles, your targeting isn't too far behind RougeDC/Scarlet... Looks to me like your targeting is on the right track ;)
Oh, and testing locally once this was the result:
Rank | Robot Name | Total Score | Survival | Surv Bonus | Bullet Dmg | Bullet Bonus | Ram Dmg * 2 | Ram Bonus | 1sts | 2nds | 3rds |
---|---|---|---|---|---|---|---|---|---|---|---|
1st | pez.rumble.CassiusClay 2rho.02no | 2677 (50%) | 950 | 190 | 1360 | 177 | 0 | 0 | 19 | 16 | 0 |
2nd | tjk.deBroglie rev0025 | 2654 (50%) | 800 | 160 | 1514 | 180 | 0 | 0 | 16 | 19 | 0 |
Thanks for the notes!
It's been a real help developing targeting off of a known movement. I compare DeBroglie to Aristocles as a guide for how my gun stacks up. Any variation is due to the difference in guns; presumably the difference in radar handling is minuscule at most, after all. Comparing PBI between the two bots has been very instructive!
I've just about finished the main work on targeting: the point where I stop adding features and just tweak/debug.
It's been so useful to develop this way that I've been tempted to make a DeBroglie-M with my movement and Aristocles' gun so that I can debug movement issues in a similar way... and finally combine my movement and gun when I think they're both in a workable state.
I know many of the bots have a base class which calls the movement and gun separately - Diamond is the first that springs to mind, although I think it was actually CassiusClay which pioneered this structure.
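For reference, a rough sketch of that structure with invented class names (this is not Diamond's or CassiusClay's actual code): the robot forwards every scan to a movement component and a gun component, and each issues its own commands every turn.

```java
// Hypothetical "base class delegates to movement and gun" structure; all names invented.
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

/** A component sees every scan and issues its own set* calls each turn. */
interface Component {
    void onScannedRobot(ScannedRobotEvent e);
    void act();
}

public class ComponentBot extends AdvancedRobot {
    private final Component movement = new DoNothingMovement(); // placeholder movement
    private final Component gun = new HeadOnGun(this);          // placeholder gun

    public void run() {
        setAdjustGunForRobotTurn(true);
        setAdjustRadarForGunTurn(true);
        while (true) {
            setTurnRadarRightRadians(Double.POSITIVE_INFINITY); // simple spinning radar
            movement.act();
            gun.act();
            execute();
        }
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        movement.onScannedRobot(e);
        gun.onScannedRobot(e);
    }
}

/** Placeholder movement that does nothing; a real bot would surf or orbit here. */
class DoNothingMovement implements Component {
    public void onScannedRobot(ScannedRobotEvent e) { }
    public void act() { }
}

/** Placeholder gun: head-on targeting, just to show the wiring. */
class HeadOnGun implements Component {
    private final AdvancedRobot bot;
    private double enemyAbsBearing;

    HeadOnGun(AdvancedRobot bot) { this.bot = bot; }

    public void onScannedRobot(ScannedRobotEvent e) {
        enemyAbsBearing = bot.getHeadingRadians() + e.getBearingRadians();
    }

    public void act() {
        bot.setTurnGunRightRadians(
            Utils.normalRelativeAngle(enemyAbsBearing - bot.getGunHeadingRadians()));
        bot.setFire(2.0);
    }
}
```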
And just a note, Aristocles doesn't use waves in the movement, only in the gun. The simplest wavesurfing movement would be that of BasicSurfer, although to get something reasonable you probably need to go with something like Voidious's Komarious or my CunobelinDC.
I actually think you're better off just freezing your gun changes at that point to compare the movement to your last version with Aristocles movement. There could be cases where tuning to Aristocles' gun would not be the same as tuning to your own gun. As long as you're comparing to a fixed gun, changes can be attributed to the movement. Just my 2 cents. :-)
Skilgannon: I know Aristocles doesn't wave surf, but the waves that Aristocles fires are used in its movement block.
me.setAhead(Math.cos(angle = absoluteBearing(wave.wGunLocation, robotDestination) - me.getHeadingRadians()) * 100);
This was enough for me to decide it fit the spirit of DeBroglie for a placeholder movement. When I searched for a movement, I really didn't care what kind of movement it was, just that it was mini-, open source, and utilized waves somehow.
The real kicker is going to be when I want to add a few simple targeters in a virtual guns array... will I keep my ideological purity and write wave-centric simple targeters, or just take off-the-shelf ones from old bots of mine and/or the wiki? Ohh, the dilemma!
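For what it's worth, a minimal virtual-guns sketch with invented names (not any particular bot's implementation): each gun records the angle it would have fired on every wave, gets credited when that angle would have hit, and the best-rated gun steers the real bullets.

```java
// Minimal virtual-guns sketch; class and method names are invented for illustration.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import robocode.util.Utils;

class VirtualGuns {
    interface Gun {
        double aim(Object situation); // absolute firing angle this gun would choose
    }

    private final List<Gun> guns;
    private final double[] rating;
    // per-wave record of the angle each gun chose when the wave was fired
    private final Map<Object, double[]> predictions = new HashMap<>();

    VirtualGuns(List<Gun> guns) {
        this.guns = guns;
        this.rating = new double[guns.size()];
    }

    /** Call when a (real or virtual) wave is fired: remember every gun's choice. */
    double fire(Object wave, Object situation) {
        double[] chosen = new double[guns.size()];
        for (int i = 0; i < guns.size(); i++) {
            chosen[i] = guns.get(i).aim(situation);
        }
        predictions.put(wave, chosen);
        return chosen[bestIndex()]; // the real bullet follows the best-rated gun
    }

    /** Call when the wave reaches the enemy: credit every gun that would have hit. */
    void waveBroke(Object wave, double hitAngle, double halfBotWidthAngle) {
        double[] chosen = predictions.remove(wave);
        if (chosen == null) return;
        for (int i = 0; i < chosen.length; i++) {
            if (Math.abs(Utils.normalRelativeAngle(chosen[i] - hitAngle)) < halfBotWidthAngle) {
                rating[i]++;
            }
        }
    }

    private int bestIndex() {
        int best = 0;
        for (int i = 1; i < rating.length; i++) {
            if (rating[i] > rating[best]) best = i;
        }
        return best;
    }
}
```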
Voidious: You have a great point there. During movement development, a few changes are going to happen to the data handling structure that might impact the performance of the gun. Eventually the whole thing has to coexist anyway, so I suppose just jumping in with both feet is best anyway.
I'm hoping that performance/turn-skipping doesn't become a concern too soon in movement development. I tend to favor readability over raw efficiency, and my code is probably fairly unwieldy. Do really heavy bots end up using some sort of locking system (gun requests exclusive access next tick, etc.) to prevent turn skipping, or is the usual method to optimize the code?
Keep favoring readability over performance or you will lose control of the code in a few months (unless you are writing a restricted-codesize bot).
You will only start stumbling into skipped turns if you start analysing huge amounts of log data, or do some deep precise prediction. Even then, if you keep the heavy processing code encapsulated, only a few parts of the code will need optimization.
I still use plain java.util.ArrayList in k-NN search and it works. Rednaxela's kd-tree is incompatible with my normalized Euclidean distance search. :(
It's a bit off topic, but I'm curious: in what way is the normalized Euclidean distance search incompatible? I have a "DistanceFunction" interface that allows other distance metrics to be implemented. It should even be just fine with the scaling of different dimensions changing over time (it just makes the tree splits slightly less optimal, but that's not a big deal generally).
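For illustration, here's what a weighted Euclidean metric behind a DistanceFunction-style interface could look like. The interface below is written out by hand as an assumption; the real tree's interface may use different names or signatures, but the idea is the same: one method for point-to-point distance and one lower-bound distance to a bounding box for pruning.

```java
// Sketch of a weighted (normalized) Euclidean metric for a kd-tree search.
// The interface is an assumption for illustration, not the library's exact API.
interface DistanceFunction {
    double distance(double[] a, double[] b);
    double distanceToRect(double[] point, double[] min, double[] max);
}

class WeightedEuclidean implements DistanceFunction {
    private final double[] weights; // e.g. 1/stddev per dimension; may be updated over time

    WeightedEuclidean(double[] weights) {
        this.weights = weights;
    }

    public double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = (a[i] - b[i]) * weights[i];
            sum += d * d;
        }
        return sum; // squared distance; fine for ranking as long as both methods agree
    }

    /** Lower bound from a point to an axis-aligned box, used to prune tree branches. */
    public double distanceToRect(double[] point, double[] min, double[] max) {
        double sum = 0;
        for (int i = 0; i < point.length; i++) {
            double d = 0;
            if (point[i] < min[i]) d = (min[i] - point[i]) * weights[i];
            else if (point[i] > max[i]) d = (point[i] - max[i]) * weights[i];
            sum += d * d;
        }
        return sum;
    }
}
```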
Is it fine to call setWeights() before getting a nearest-neighbors list, then call setWeights() again to return to the original weighting?
I was hoping to implement an anti-surfer gun with custom dimension weights, but to use those weights only at getKNN time.
Yep, that should work just fine.
(Note, the newer version of the tree doesn't have the "setWeights()" thing but it would be trivial to include inside a "DistanceFunction" implementation.)
A faster way of doing this would probably be to have two different trees, one with weights of one type and another with weights of another type. This will ensure that your tree splits are more optimal, and you can use completely different dimensions as well without a slowdown.
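A sketch of that two-tree idea with invented types: each gun gets its own index and its own way of turning a scan into a (pre-weighted) search point, so nothing has to be swapped at query time and the two guns can even use different dimensions.

```java
// Hypothetical two-tree setup; the index and encoder types are invented for illustration.
import java.util.List;

interface NearestNeighbourIndex {
    void add(double[] point, double guessFactor);
    List<Double> nearest(double[] query, int k);
}

interface SituationEncoder {
    double[] encode(Object scanData); // builds that tree's (already weighted) search point
}

class DualGunData {
    private final NearestNeighbourIndex mainTree, antiSurferTree;
    private final SituationEncoder mainEncoder, antiSurferEncoder;

    DualGunData(NearestNeighbourIndex mainTree, SituationEncoder mainEncoder,
                NearestNeighbourIndex antiSurferTree, SituationEncoder antiSurferEncoder) {
        this.mainTree = mainTree;
        this.mainEncoder = mainEncoder;
        this.antiSurferTree = antiSurferTree;
        this.antiSurferEncoder = antiSurferEncoder;
    }

    /** Every observation is stored once per tree, in that tree's own coordinate space. */
    void record(Object scanData, double guessFactor) {
        mainTree.add(mainEncoder.encode(scanData), guessFactor);
        antiSurferTree.add(antiSurferEncoder.encode(scanData), guessFactor);
    }

    List<Double> mainCluster(Object scanData, int k) {
        return mainTree.nearest(mainEncoder.encode(scanData), k);
    }

    List<Double> antiSurferCluster(Object scanData, int k) {
        return antiSurferTree.nearest(antiSurferEncoder.encode(scanData), k);
    }
}
```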
I'm using normalized Euclidean distance (weight = 1/stddev) with a fast sampled deviation, so the weights do change over time (k-NN for lazy people who don't fine-tune their bots).
I had the impression the changing weights were not being accounted for. My bot's score dramatically decreased with the kd-tree. I suspect it is the cached coordinate limits inside each node. Or maybe I screwed up something while trying to use the library.
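For reference, one cheap way to keep a running per-dimension deviation for that kind of weighting (a sketch with invented names and an arbitrary decay constant, not the bot's actual code):

```java
// Sketch of "lazy" normalized-Euclidean weighting: exponential moving estimates of
// each dimension's mean and variance, with weight = 1/stddev.
class RunningDeviation {
    private static final double ALPHA = 0.01; // how fast the estimates follow new data (arbitrary)
    private final double[] mean;
    private final double[] variance;
    private boolean initialized = false;

    RunningDeviation(int dimensions) {
        mean = new double[dimensions];
        variance = new double[dimensions];
    }

    void observe(double[] point) {
        if (!initialized) {
            System.arraycopy(point, 0, mean, 0, mean.length);
            initialized = true;
            return;
        }
        for (int i = 0; i < point.length; i++) {
            double delta = point[i] - mean[i];
            mean[i] += ALPHA * delta;
            variance[i] = (1 - ALPHA) * variance[i] + ALPHA * delta * delta;
        }
    }

    /** Per-dimension weights for a normalized Euclidean distance. */
    double[] weights() {
        double[] w = new double[mean.length];
        for (int i = 0; i < w.length; i++) {
            w[i] = 1.0 / Math.max(Math.sqrt(variance[i]), 1e-9); // guard against dividing by zero
        }
        return w;
    }
}
```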
Changing weights is certainly intended to be accounted for, though I hadn't tested it as much as other features. I know it couldn't be the coordinate limits inside each node, because they copy the min/max values from the points, which don't change with weighting. Do you remember which version of the tree you were trying this with?
Usually it's just a matter of optimizing the code and algorithms, but some sort of locking system could be interesting. I'm unsure if such locking would help much, though, since at least in my experience a high-quality surfing movement takes much more CPU than just about any targeting out there.
For what it's worth, adding simple guns to a virtual gun system which has a dynamic clustering, precise intersection gun is not going to give you any more points. The only thing which might help is a circular gun, and even then just against SpinBot.
If you design your algorithm taking speed into account to begin with, it shouldn't be a problem. Be careful of nested loops, and use FastTrig and you should be fine. The biggest speed increases for me came when I figured out ways I could keep a value for later, instead of re-calculating it, particularly things like square roots (distances) and KNN results from Dynamic Clustering. If you can eliminate whole branches of code from being re-executed it can also be a major time-saver, such as exiting a search early if you know that all points from here on will be infeasible.
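As an aside, the lookup-table trig idea looks roughly like this (a sketch of the concept, not the wiki's actual FastTrig code; the table size is arbitrary):

```java
// Sketch of lookup-table trig: precompute sine at a fixed angular resolution and
// read both sin and cos out of the same table.
class TrigTable {
    private static final int SIZE = 8192;                      // table resolution (arbitrary)
    private static final double TWO_PI = 2 * Math.PI;
    private static final double INDEX_FACTOR = SIZE / TWO_PI;  // radians -> table index
    private static final double[] SIN = new double[SIZE];

    static {
        for (int i = 0; i < SIZE; i++) {
            SIN[i] = Math.sin(i / INDEX_FACTOR);
        }
    }

    static double sin(double radians) {
        // map the angle onto [0, 2*pi) and round to the nearest table entry
        int index = (int) (((radians % TWO_PI) + TWO_PI) * INDEX_FACTOR + 0.5) % SIZE;
        return SIN[index];
    }

    static double cos(double radians) {
        return sin(radians + Math.PI / 2); // cos(x) = sin(x + pi/2)
    }
}
```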
I do disagree slightly with that first point. RougeDC's gun system, which is primarily k-nn precise intersection targeting, did benefit slightly from the inclusion of a "Single Tick" PM targeting system. Switching between it and the main targeting system seems to pay off slightly against some surfers the targeting would otherwise have more trouble with. In general though, I do agree that simple guns in a virtual gun system won't gain all that much.
I wouldn't exactly term a Single-Tick PM gun as 'simple' ;-) I was more thinking along the line of HOT, linear, circular, random linear and averaged linear. Single-Tick is still something I need to play with... possibly Single-Tick KNN?
Well... my current opinion is that the single-tick thing just wastes way too much CPU to be worth it. Maybe it would be worth trying a similar concept with a larger fixed number of ticks though... like predicting forward in 10-tick intervals...
The simple targeter I really wanted most was a dead-stop gun: where would the bot end up if it stopped as soon as it saw the wave, accounting for deceleration and so on?
I've even pondered making dead-stop the 0.0 center of my guess factors instead of HOT, in which case the separate dead-stop targeter becomes superfluous. I wasn't sure if it'd yield anything significant, but it bugs me that 1.0 and -1.0 have a precise physical/decision meaning when you do a good MEA calculation, while 0.0 doesn't correspond to any enemy decision, whereas dead-stop would.
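A sketch of that dead-stop prediction, assuming Robocode's standard 2.0 px/tick braking rate and invented names: from the tick the wave is noticed, decelerate the target to a halt and return where it sits when the wave front catches it.

```java
// Hypothetical dead-stop predictor; names and structure are for illustration only.
import java.awt.geom.Point2D;

class DeadStopPredictor {
    private static final double DECELERATION = 2.0; // standard Robocode braking rate, px/tick^2

    /**
     * @param enemyPosition  enemy position when the wave is noticed
     * @param heading        enemy heading in radians (Robocode: 0 = north, clockwise positive)
     * @param velocity       enemy velocity (negative when moving backwards)
     * @param waveOrigin     where the wave was fired from
     * @param bulletSpeed    wave speed in px/tick
     * @param ticksTravelled ticks the wave has already been in flight
     */
    static Point2D.Double predict(Point2D.Double enemyPosition, double heading, double velocity,
                                  Point2D.Double waveOrigin, double bulletSpeed, int ticksTravelled) {
        double x = enemyPosition.x, y = enemyPosition.y, v = velocity;
        int tick = ticksTravelled;
        // one tick at a time: brake toward zero, then move, until the wave front arrives
        while (waveOrigin.distance(x, y) > bulletSpeed * tick) {
            v = v > 0 ? Math.max(0.0, v - DECELERATION) : Math.min(0.0, v + DECELERATION);
            x += Math.sin(heading) * v;
            y += Math.cos(heading) * v;
            tick++;
        }
        return new Point2D.Double(x, y);
    }
}
```

That predicted point, projected back into a guess factor, could then serve as the 0.0 center instead of head-on.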
Random Targeting does have its uses. A pure random gun usually improves the score against very strong movement. And hurts the score against everything else.
As for "readability" vs "efficiency", I don't think they are necessarily at odds, beyond competing for your time and effort. Like Skilgannon said, the most important optimizations are high-level stuff - fast code can still be readable, testable, and maintainable. Having everything in one giant method with big ugly lines is not necessarily going to give a huge performance increase.
I think it just so happens that some bots, like DrussGT and Shadow, went through a lot of changes without ever getting sufficiently refactored, so the code is a little disorganized. But the authors made the effort to keep them fast. ;) (In Shadow's case, I'm just going off what ABC has said...) I feel Diamond is decent code and pretty fast. Rednaxela's kd-tree is also good code and lightning fast.
With regards to the "readability" vs "efficiency" thing, I'd agree that they don't have to be entirely at odds. In the version of my kd-tree posted here I do feel I sacrificed too much readability with the aim of gaining performance, but in the rewrite (here) I found that almost all of the readability I sacrificed in the earlier version was unnecessary. Really, a good number of things that you might think are not optimized turn out to be taken care of by the JIT, so you just need to worry about higher level aspects in most cases.