The difference between VCS and KNN is huge, if you have the time and space to tune it. Seriously, even a double-buffered VCS is nothing compared to KNN. It takes many buffers of varying depth, granularity, etc. to get close to what KNN can achieve, and even then it isn't as good. See the targeting challenges; I believe the highest-scoring VCS gun is Dookious.
I know Hydra and WaveSerpent use a combination VCS/KNN gun, which segments on velocity and acceleration but does KNN on everything else. That way it doesn't need a tree, but still runs at reasonable speed because of the reduced data size.
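Something like this rough sketch shows the shape of that idea (illustrative only, not Hydra's or WaveSerpent's actual code): coarse segmentation picks a small bucket, and the nearest-neighbour search is a plain linear scan inside it.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a hybrid VCS/KNN gun: a coarse segmentation on
// velocity/acceleration selects one bucket, and a brute-force nearest-neighbour
// scan runs only inside that bucket, so no kd-tree is needed and each scan
// stays cheap because the buckets stay small.
public class HybridGun {
    static final int VEL_SEGS = 5, ACCEL_SEGS = 3;
    @SuppressWarnings("unchecked")
    final List<double[]>[][] buckets = new List[VEL_SEGS][ACCEL_SEGS];

    void add(int velSeg, int accelSeg, double[] point) {
        if (buckets[velSeg][accelSeg] == null) {
            buckets[velSeg][accelSeg] = new ArrayList<>();
        }
        buckets[velSeg][accelSeg].add(point);
    }

    // Linear scan over the (small) bucket, Manhattan distance on the remaining attributes.
    double[] nearest(int velSeg, int accelSeg, double[] query) {
        List<double[]> bucket = buckets[velSeg][accelSeg];
        if (bucket == null) return null;
        double best = Double.POSITIVE_INFINITY;
        double[] bestPoint = null;
        for (double[] p : bucket) {
            double dist = 0;
            for (int i = 0; i < query.length; i++) {
                dist += Math.abs(p[i] - query[i]);
            }
            if (dist < best) { best = dist; bestPoint = p; }
        }
        return bestPoint;
    }
}
</syntaxhighlight>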
Though I would note that the difference between VCS and KNN is significantly reduced when you use interpolated VCS, as I did in SaphireEdge/Midboss; it was later adopted by WaveSerpent starting in version 2.0.
True. But speed-wise, that is about as slow as KNN, because you have to smooth across all of the dimensions. Of course, the slow part is adding data, not so much retrieval, unlike KNN.
Actually, I'd say it's somewhat slow on both adding data and retrieval. It's checking adjacent bins in all dimensions for adding and retrieval, leading to <math>2^n</math> entries checked for each operation, where n is the number of dimensions. If anything, retrieval is a bit slower, because for each entry you need to process all guessfactor bins, whereas for adding data you only need to process guessfactor bins near the hit.
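To make the cost concrete, here is a rough sketch of adding one hit to an interpolated buffer; the buffer layout and names are made up, not taken from any particular bot. The <math>2^n</math> loop over corner bins is the expensive part, and retrieval walks the same corners but has to read every guessfactor bin.

<syntaxhighlight lang="java">
// Illustrative sketch of why interpolated VCS touches 2^n entries per
// operation: every add/lookup visits the bin on either side of the exact
// coordinate in each of the n segmentation dimensions.
public class InterpolatedVcs {
    static final int DIMS = 3, BINS_PER_DIM = 8, GF_BINS = 31;
    // stats[latVelBin][distBin][accelBin][guessFactorBin]
    static final double[][][][] stats =
            new double[BINS_PER_DIM][BINS_PER_DIM][BINS_PER_DIM][GF_BINS];

    // coords are fractional bin positions, e.g. 2.3 means 70% bin 2 and 30% bin 3.
    static void addHit(double[] coords, int gfBin) {
        for (int corner = 0; corner < (1 << DIMS); corner++) {  // 2^n corner bins
            int[] idx = new int[DIMS];
            double weight = 1.0;
            for (int d = 0; d < DIMS; d++) {
                int lo = (int) coords[d];
                double frac = coords[d] - lo;
                boolean upper = ((corner >> d) & 1) != 0;
                idx[d] = Math.min(BINS_PER_DIM - 1, upper ? lo + 1 : lo);
                weight *= upper ? frac : 1.0 - frac;
            }
            // Retrieval does the same 2^n corner loop, but reads all GF_BINS entries.
            stats[idx[0]][idx[1]][idx[2]][gfBin] += weight;
        }
    }
}
</syntaxhighlight>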
The intention was to save codesize by being able to use the same wave and stats functions for both gun and surfing. I have tried my algorithm in Cunobelin as a proof of concept and remarkably (after fixing one div-by-zero bug I created) it worked and got 50% scores against the original.
However, it saved on neither codesize nor execution speed, so the idea is probably a non-starter: without that change I would have too direct a copy of Cunobelin's movement, and skipped turns due to data volumes in the gun. It turns out modern processors do floating point very fast, and the implementation of Integer.bitCount() has too many steps.
I have another idea about weighting which might work, but the performance problem will remain. I will eventually try it, probably along with culling old data to mitigate volume, but it is a lower priority now that my "good" idea turned out not to help.
For your amusement here is the idea:
Replace the EnemyWaveDC.scan array of 5 (actually 4, the other is a special case) doubles with a single integer where the segmentation data is encoded using individual bits as tally marks (i.e. 0b1 = 1, 0b11 = 2, 0b111 = 3, etc., using int encoded = (1 << value) - 1;). The values are then assembled with | and <<. Note that this is not the same as simple addition, as the positional information (i.e. which segment the data belongs to) is retained, as it must be for the XOR below to work.
The point of this is that the loop over those 4 doubles in the distance calculation in checkDanger is no longer required, and the Manhattan distance can be calculated with Integer.bitCount(surfWave.scan ^ scan);
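A minimal sketch of the mechanic, with illustrative attribute names and bit widths rather than the actual EnemyWaveDC fields: because each attribute is a run of 1-bits, the XOR of two encoded scans has exactly |binA - binB| bits set per attribute, so one bitCount gives the Manhattan distance.

<syntaxhighlight lang="java">
// Illustrative sketch of the tally-mark encoding and XOR/bitCount distance.
public class TallyScan {
    // Encode one attribute bin as unary tally marks: 0 -> 0b0, 1 -> 0b1, 2 -> 0b11, ...
    static int tally(int bin) {
        return (1 << bin) - 1;
    }

    // Pack four example attributes (already reduced to small bins) into one int.
    // Each attribute gets 8 bits here, so the bins must stay in 0..8.
    static int encode(int latVelBin, int distBin, int accelBin, int wallBin) {
        return tally(latVelBin)
             | (tally(distBin)  << 8)
             | (tally(accelBin) << 16)
             | (tally(wallBin)  << 24);
    }

    // Manhattan distance between two encoded scans: one XOR plus one popcount,
    // replacing the loop over the array of doubles.
    static int manhattan(int scanA, int scanB) {
        return Integer.bitCount(scanA ^ scanB);
    }

    public static void main(String[] args) {
        int a = encode(3, 5, 1, 0);
        int b = encode(1, 6, 1, 2);
        // |3-1| + |5-6| + |1-1| + |0-2| = 5
        System.out.println(manhattan(a, b)); // prints 5
    }
}
</syntaxhighlight>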
I had hoped that using integers rather than doubles and removing the loop would provide a speed increase, but unfortunately FP is very fast and bitCount takes a lot of operations (20?), so no gain :(
That's pretty cool, it reminds me of what I am doing in Cotillion's gun. Although, I think there would be problems with some bits being worth more than others unless you used some variation on Gray codes in your XOR. Perhaps doing the XOR, then disassembling the result and adding the pieces back together into a single integer representing distance would be faster than simply using bitCount? Although, even then I think there might be issues with the distance metric not being correct.
One thing to be careful of: don't test a gun against surfers if you want to know how good it is. In fact, if your gun is good at hitting surfers, chances are it will be pretty miserable at hitting regular random movement. Rather, test against RaikoMicro, and check what CunobelinDC got in the rumble as a comparison.
Edit: I misread that; it shouldn't have problems with different bits being worth different amounts, although you're limited to 8 values per attribute if you want to stick 4 in an int. Gray codes should still work, and give you much finer granularity.
Yes, it does work just fine, and you weight the importance of the dimensions just as you would with doubles; there are just some slight issues with granularity when only a small number of bits is available. I had 11 for latv, which is fine, but for the weighting I wanted, distance only needed 2 or 3 bits; that would have been a granularity of 500 or 300, which is a little too coarse, so I compromised on 4 bits.
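For illustration only, with made-up bin counts: a dimension's weight is simply how many tally bits you allocate to it relative to its range, since each bin of difference contributes one bit to the XOR/bitCount distance.

<syntaxhighlight lang="java">
// Illustrative numbers only, not the ones actually used: more tally bits means
// finer granularity and a larger possible contribution to the distance.
public class WeightedTallyEncoding {
    static int tally(int bin) { return (1 << bin) - 1; }

    static int encode(double absLatVel, double distance) {
        // 11 tally bits for lateral velocity: 0..8 mapped to bins 0..11 (fine grained)
        int latVelBin = (int) Math.round(absLatVel / 8.0 * 11);
        // only 4 tally bits for distance: roughly 250 units per bin (coarse, lower weight)
        int distBin = Math.min(4, (int) (distance / 250));
        return tally(latVelBin) | (tally(distBin) << 11);
    }
}
</syntaxhighlight>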
Gray codes do not work with the simplistic XOR/bitCount mechanic, and computing gray->binary looks expensive, so even if I could solve the math it probably wouldn't help.
If it's not too much codesize, maybe this would be faster?
Not sure if it would be faster, but in any case codesize will be an issue, so adding any but the most trivial function is probably not feasible.
The only other idea which would be much faster is a lookup table, but that would be 4GB, and if you partitioned it into byte or short chunks to get the size sensible, you get more codesize and slower performance.
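For reference, the byte-partitioned version would look something like this sketch; as said, the extra table and the four lookups plus shifts cost codesize and are unlikely to beat the plain bitCount call.

<syntaxhighlight lang="java">
// Sketch of the byte-partitioned lookup idea: a 256-entry table instead of a
// 4GB one, at the cost of four lookups plus shifts per distance calculation.
public class BytePopCount {
    static final int[] BYTE_BITS = new int[256];
    static {
        for (int i = 1; i < 256; i++) {
            BYTE_BITS[i] = (i & 1) + BYTE_BITS[i >> 1];
        }
    }

    static int popCount(int x) {
        return BYTE_BITS[x & 0xFF]
             + BYTE_BITS[(x >>> 8) & 0xFF]
             + BYTE_BITS[(x >>> 16) & 0xFF]
             + BYTE_BITS[x >>> 24];
    }
}
</syntaxhighlight>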