Danger function

Though I would note that the difference between VCS and KNN is significantly reduced when you use interpolated VCS, like I did in SaphireEdge/Midboss, which was later adopted by WaveSerpent starting in version 2.0.

Rednaxela (talk) 15:40, 19 June 2013

True. But speed-wise, that is about as slow as KNN, because you have to smooth across all of the dimensions. Of course, the slow part is adding the data, not so much retrieval, which is the opposite of KNN.

Skilgannon (talk) 16:21, 19 June 2013

Actually, I'd say it's somewhat slow on both adding data and retrieval. It checks adjacent bins in all dimensions for both adding and retrieval, leading to <math>2^n</math> entries checked for each operation, where <math>n</math> is the number of dimensions. If anything, retrieval is a bit slower, because for each entry you need to process all GuessFactor bins, whereas for adding data you only need to process the GuessFactor bins near the hit.

Rednaxela (talk) 16:37, 19 June 2013
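
For illustration, here is a minimal sketch of the interpolated VCS being discussed; it is an assumed reconstruction, not the actual code from SaphireEdge or WaveSerpent. Each add (or retrieve) distributes weight over the two nearest bins in every dimension, i.e. over all <math>2^n</math> corners of the hypercube of bins surrounding the data point:

<syntaxhighlight lang="java">
// Assumed sketch of interpolated VCS: every add (or retrieve) touches the
// 2^n corner bins of the hypercube surrounding the data point.
static void addInterpolated(double[] coords, int[] binsPerDim, double[] stats) {
    int n = coords.length;
    int[] lower = new int[n];
    double[] frac = new double[n];
    for (int i = 0; i < n; i++) {
        double pos = coords[i] * (binsPerDim[i] - 1); // coords normalized to [0, 1]
        lower[i] = Math.min((int) pos, binsPerDim[i] - 2);
        frac[i] = pos - lower[i];
    }
    for (int corner = 0; corner < (1 << n); corner++) { // 2^n corners
        double weight = 1.0;
        int index = 0;
        for (int i = 0; i < n; i++) {
            int bit = (corner >>> i) & 1;
            weight *= (bit == 1) ? frac[i] : 1.0 - frac[i];
            index = index * binsPerDim[i] + lower[i] + bit; // flatten the bin index
        }
        stats[index] += weight; // retrieval would accumulate stats[index] * weight
    }
}
</syntaxhighlight>

Retrieval uses the same <math>2^n</math>-corner loop, but as noted above, each entry then has a full array of GuessFactor bins to read rather than just the ones near a hit.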

That's pretty cool, and reminds me of what I am doing in Cotillion's gun. Although, I think there would be problems with some bits being worth more than others unless you used some variation on Gray codes in your XOR. Perhaps doing the XOR, then splitting the result apart and summing the pieces into a single integer representing distance, would be faster than simply using bitCount? Although even then, I think there might be issues with the distance metric not being correct.

One thing to be careful of: don't test a gun against surfers if you want to know how good it is. In fact, if your gun is good at hitting surfers, chances are it will be pretty miserable at hitting regular random movement. Rather, test against RaikoMicro, and check what CunobelinDC got in the rumble as a comparison.

Edit: I misread that; it shouldn't have problems with different bits being worth different amounts, although you're limited to 8 values per attribute if you want to stick 4 attributes in an int. Gray codes should still work, and would give you much finer granularity.

Skilgannon (talk) 17:00, 19 June 2013
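
As an aside, one encoding that does make bitCount of an XOR behave as a proper distance is unary ("thermometer") encoding, where a value v is stored as v consecutive 1-bits. A minimal sketch follows; the bit widths (11 for lateral velocity, 4 for distance) are hypothetical, chosen only to echo the numbers in the next reply:

<syntaxhighlight lang="java">
// Sketch only: with thermometer encoding, Integer.bitCount(a ^ b) equals
// the Manhattan distance between two attribute vectors.
static int thermometer(int value) {
    return (1 << value) - 1; // value v becomes v low 1-bits, e.g. 3 -> 0b111
}

// Hypothetical packing: 11 bits for lateral velocity, 4 bits for distance.
static int encode(int latVel, int distBucket) {
    return (thermometer(latVel) << 4) | thermometer(distBucket);
}

static int distance(int a, int b) {
    return Integer.bitCount(a ^ b); // encode(8, 2) vs encode(5, 3) -> |8-5| + |2-3| = 4
}
</syntaxhighlight>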

Yes, it does work just fine, and you weight the importance of the dimensions just as you would with doubles; there are only slight granularity issues when a small number of values is available. I had 11 for latv, which is fine, but for the weighting I wanted I only needed 2 or 3 bits for distance. That would have been a granularity of 500 or 300, which is a little too coarse, so I compromised on 4 bits.

Gray codes do not work with the simplistic XOR/bitcount mechanic, and computing Gray->binary looks expensive, so even if I could solve the math it probably wouldn't help.

Nz.jdc (talk) 00:17, 20 June 2013
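
For concreteness, the granularity arithmetic above works out as follows, assuming distance runs from 0 to 1000 (the diagonal of the default 800x600 field): with unary bits, <math>n</math> bits give a step size of <math>1000/n</math> units, so 2 bits means 500-unit steps, 3 bits roughly 333, and the 4-bit compromise 250.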
 

If it's not too much codesize, maybe this would be faster?

Skilgannon (talk) 19:42, 19 June 2013

Not sure if it would be faster, but in any case codesize will be an issue, so adding any but the most trivial function is probably not feasible.

The only other idea that would be much faster is a lookup table, but a full one would be 4GB, and if you partition it into byte or short chunks to get the size sensible, you get more codesize and slower performance.

Nz.jdc (talk) 23:57, 19 June 2013
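
To make that trade-off concrete, here is a sketch (illustration only, not code from the thread) of the byte-partitioned variant: the full <math>2^{32}</math>-entry table is the infeasible 4GB one, while a 256-entry table keeps memory tiny at the cost of four lookups and shifts per call:

<syntaxhighlight lang="java">
// Sketch: bit count via a 256-entry lookup table, one byte at a time.
static final int[] POP = new int[256];
static {
    for (int i = 0; i < 256; i++) {
        POP[i] = Integer.bitCount(i); // precompute once
    }
}

static int popCount(int x) {
    return POP[x & 0xFF]
         + POP[(x >>> 8) & 0xFF]
         + POP[(x >>> 16) & 0xFF]
         + POP[x >>> 24];
}
</syntaxhighlight>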