Thread history

Fragment of a discussion from Talk:DeBroglie

Skilgannon: I know Aristocles doesn't wave surf, but the waves that Aristocles fires are used in its movement block.

me.setAhead(Math.cos(angle = absoluteBearing(wave.wGunLocation, robotDestination) - me.getHeadingRadians()) * 100);
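
A minimal sketch of what that line is doing, assuming absoluteBearing() is the usual Robocode-convention helper and that wave.wGunLocation / robotDestination are Point2D locations; the names come from the snippet above, everything else is illustrative rather than Aristocles' actual source:

import java.awt.geom.Point2D;

// Roughly what the quoted line computes, pulled apart for readability.
public class WaveMovementSketch {
    // Absolute bearing (radians) from source to target, Robocode convention:
    // 0 = north, increasing clockwise.
    static double absoluteBearing(Point2D.Double source, Point2D.Double target) {
        return Math.atan2(target.x - source.x, target.y - source.y);
    }

    // Project the desired travel direction onto the bot's current heading:
    // cos(angle) > 0 drives forward, cos(angle) < 0 drives backward, and the
    // magnitude shrinks as the destination moves off-axis.
    static double aheadDistance(Point2D.Double waveGunLocation,
                                Point2D.Double destination,
                                double headingRadians) {
        double angle = absoluteBearing(waveGunLocation, destination) - headingRadians;
        return Math.cos(angle) * 100;
    }
}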

This was enough for me to decide it fit the spirit of DeBroglie for a placeholder movement. When I searched for a movement, I really didn't care what kind of movement it was, just that it was mini-, open source, and utilized waves somehow.

The real kicker is going to be when I want to add a few simple targeters in a virtual guns array... will I keep my ideological purity and write wave-centric simple targeters, or just take off-the-shelf ones from old bots of mine and/or the wiki? Ohh, the dilemma!

Voidious: You have a great point there. During movement development, a few changes are going to happen to the data handling structure that might impact the performance of the gun. Eventually the whole thing has to coexist anyway, so I suppose just jumping in with both feet is best.

I'm hoping that performance/turn-skipping doesn't become a concern too soon in movement development. I tend to favor readability over raw efficiency, and my code is probably fairly unwieldy. Do really heavy bots end up using some sort of locking system (the gun requests exclusive access next tick, etc.) to prevent turn skipping, or is the usual approach just to optimize the code?
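
A minimal sketch of one alternative to a locking scheme, using Robocode's onSkippedTurn() callback to shed optional work whenever turns actually get skipped; the flag and the back-off delay here are invented for illustration, not how any particular bot does it:

import robocode.AdvancedRobot;
import robocode.SkippedTurnEvent;

// Detect skipped turns and temporarily shed optional work instead of locking.
public class SkipAwareBot extends AdvancedRobot {
    private boolean heavyAnalysisEnabled = true;   // e.g. extra virtual guns
    private long reenableAt = 0;

    @Override
    public void run() {
        while (true) {
            if (!heavyAnalysisEnabled && getTime() >= reenableAt) {
                heavyAnalysisEnabled = true;       // back-off period is over
            }
            // ... per-turn work here, skipping the optional analysis when disabled ...
            execute();
        }
    }

    @Override
    public void onSkippedTurn(SkippedTurnEvent e) {
        heavyAnalysisEnabled = false;              // ran over budget this turn
        reenableAt = getTime() + 30;               // arbitrary cool-down, for illustration
    }
}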

Tkiesel 19:28, 16 May 2012

Keep favoring readability over performance or you will lose control of the code in a few months (unless you are writing a restricted-codesize bot).

You will only start stumbling into skipped turns if you start analysing huge amounts of log data, or do some deep precise prediction. Even then, if you keep the heavy processing code encapsulated, only a few parts of the code will need optimization.

I still use a plain java.util.ArrayList for k-NN search and it works. Rednaxela's kd-tree is incompatible with my normalized Euclidean distance search. :(
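
For anyone curious what the plain-list approach looks like, here is a brute-force k-NN sketch over an ArrayList with a normalized Euclidean distance (each dimension weighted by 1/stddev at query time); it is an illustration, not MN's actual code:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Brute-force k-NN over a plain list with normalized Euclidean distance.
public class BruteForceKnn {
    private final List<double[]> points = new ArrayList<>();

    public void add(double[] point) {
        points.add(point);
    }

    // weights[i] is expected to be 1 / stddev of dimension i at query time.
    public List<double[]> nearest(double[] query, double[] weights, int k) {
        List<double[]> sorted = new ArrayList<>(points);
        sorted.sort(Comparator.comparingDouble(p -> distanceSq(p, query, weights)));
        return sorted.subList(0, Math.min(k, sorted.size()));
    }

    private static double distanceSq(double[] a, double[] b, double[] w) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = (a[i] - b[i]) * w[i];
            sum += d * d;
        }
        return sum;
    }
}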

MN 20:03, 16 May 2012

It's a bit off topic, but I'm curious: in what way is the normalized Euclidean distance search incompatible? I have a "DistanceFunction" interface that allows other distance metrics to be implemented. It should even be just fine with the scaling of different dimensions changing over time (it just makes the tree splits slightly less optimal, but generally not a big deal).

Rednaxela 20:28, 16 May 2012

Is it fine to call setWeights() before getting a nearest-neighbors list, then call setWeights() again afterward to return to the original weighting?

I was hoping to implement an anti-surfer gun with custom dimension weights... but to use those weights only at getKNN time.

Tkiesel 20:59, 16 May 2012

Yep, that should work just fine.

(Note: the newer version of the tree doesn't have the setWeights() method, but it would be trivial to include it inside a "DistanceFunction" implementation.)
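
As a sketch of that idea, the per-gun weights can be baked into a distance-function object that is passed at query time. The interface below only approximates the tree's real DistanceFunction (a point-to-point distance plus a point-to-bounding-box distance used for pruning branches), so the exact signatures are an assumption:

// Approximation of the tree's DistanceFunction interface (signatures assumed).
interface DistanceFunction {
    double distance(double[] p1, double[] p2);
    double distanceToRect(double[] point, double[] min, double[] max);
}

// Weighted squared-Euclidean metric with the weights fixed per instance,
// e.g. one instance for the main gun and one for the anti-surfer gun.
class WeightedSqEuclidean implements DistanceFunction {
    private final double[] weights;

    WeightedSqEuclidean(double[] weights) {
        this.weights = weights;
    }

    public double distance(double[] p1, double[] p2) {
        double sum = 0;
        for (int i = 0; i < p1.length; i++) {
            double d = (p1[i] - p2[i]) * weights[i];
            sum += d * d;
        }
        return sum;
    }

    // Weighted squared distance from a point to the closest face of a node's
    // bounding box, used to decide whether a branch can be pruned.
    public double distanceToRect(double[] p, double[] min, double[] max) {
        double sum = 0;
        for (int i = 0; i < p.length; i++) {
            double d = 0;
            if (p[i] < min[i]) {
                d = p[i] - min[i];
            } else if (p[i] > max[i]) {
                d = p[i] - max[i];
            }
            d *= weights[i];
            sum += d * d;
        }
        return sum;
    }
}

With one such instance per virtual gun, the anti-surfer query from the question above needs no setWeights() juggling at all.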

Rednaxela 21:06, 16 May 2012

Thanks for the info! Replacing the wiki version of your tree with the Bitbucket one is one of the very next things I'm doing. I need that iterator now, because I'm occasionally culling too many neighbors via displacement vector, leaving me with too-sparse data. The ability to fetch more will be invaluable!
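
A sketch of the "fetch more when the cull leaves too few" pattern: the iterator is assumed to return neighbors nearest-first (as a nearest-neighbor iterator would), and the predicate is a placeholder for the displacement-vector test; none of this reflects the actual API of the Bitbucket tree:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

// Keep pulling nearest-first neighbors until enough survive the cull.
public class CulledNeighbors {
    static List<double[]> collect(Iterator<double[]> nearestFirst,
                                  Predicate<double[]> keep,
                                  int wanted) {
        List<double[]> result = new ArrayList<>();
        while (result.size() < wanted && nearestFirst.hasNext()) {
            double[] candidate = nearestFirst.next();
            if (keep.test(candidate)) {
                result.add(candidate);  // survived the displacement-vector cull
            }
            // otherwise just move on to the next-nearest neighbor
        }
        return result;
    }
}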

Tkiesel 21:31, 16 May 2012
 

A faster way of doing this would probably be to have two different trees, one for each set of weights. This will ensure that your tree splits are more optimal, and you can use completely different dimensions as well without a slowdown.
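
One way the two-tree idea could be wired up, sketched with a placeholder store instead of the real kd-tree: every observation is recorded in both structures, and each store pre-scales points by its own fixed weights so that a plain Euclidean query inside it matches that gun's weighted query. Note this assumes each gun's weights are fixed, which is not MN's changing-weights case:

import java.util.ArrayList;
import java.util.List;

// Two-tree bookkeeping: every scan feeds both stores, each gun queries its own.
public class DualTreeGuns {
    static class KnnStore {
        final double[] weights;
        final List<double[]> scaledPoints = new ArrayList<>();

        KnnStore(double[] weights) { this.weights = weights; }

        void add(double[] raw) {
            double[] scaled = new double[raw.length];
            for (int i = 0; i < raw.length; i++) {
                scaled[i] = raw[i] * weights[i];  // bake the weight in up front
            }
            scaledPoints.add(scaled);
        }
    }

    // Example weights only; real tuning would differ per gun.
    private final KnnStore mainGun = new KnnStore(new double[] {1.0, 1.0, 1.0});
    private final KnnStore antiSurfer = new KnnStore(new double[] {2.0, 0.5, 1.5});

    // Called once per scan: the same raw data point feeds both stores.
    public void record(double[] rawPoint) {
        mainGun.add(rawPoint);
        antiSurfer.add(rawPoint);
    }
}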

Skilgannon 21:24, 16 May 2012

Interesting idea! If I can spare the cycles, that would be worth trying. Are there segmentations that are useful against adaptive movements but utterly superfluous against non-adaptive movements (other than data age)? And vice versa?

Tkiesel 21:33, 16 May 2012
 

I'm unsure if that would really be faster. Since these trees are generated on the fly without rebalancing, it's not like they're hugely optimal in the first place, and I doubt the overhead of changing the weighting is much compared to how suboptimal the splits already are. Additionally, two trees could be worse from the perspective of on-CPU caches. IMO it's worth trying both ways and benchmarking.

Rednaxela 21:37, 16 May 2012
 
 

I'm using normalized Euclidean distance (weight = 1/stddev) with a fast sampled deviation, so the weights do change over time (k-NN for lazy people who don't fine-tune their bots).
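
A minimal sketch of deriving weight = 1/stddev on the fly, here via Welford's online variance; a "fast sampled" version would presumably update from only a subset of observations, but the effect (weights that track the data) is the same:

// Online estimate of 1/stddev for one dimension, updated as data arrives.
public class OnlineNormalizer {
    private long n = 0;
    private double mean = 0;
    private double m2 = 0;   // sum of squared deviations from the running mean

    public void observe(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    public double weight() {
        if (n < 2) {
            return 1.0;                    // not enough data yet
        }
        double stddev = Math.sqrt(m2 / (n - 1));
        return stddev > 0 ? 1.0 / stddev : 1.0;
    }
}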

I had the impression the changing weights were not being accounted for. My bot's score dropped dramatically with the kd-tree. I suspect it is the cached coordinate limits inside each node. Or maybe I screwed something up while trying to use the library.

MN 22:59, 16 May 2012

Changing weights is certainly intended to be accounted for, though I hadn't tested it as much as other features. I know it couldn't be the coordinate limits inside each node, because those are copied from the points' min/max values, which don't change with weighting. Do you remember which version of the tree you were trying this with?

Rednaxela 23:44, 16 May 2012

I don't remember exactly, it was months ago, but it was either the one published here on the wiki or the one being used in DrussGT.

MN 14:56, 17 May 2012