Reason behind using Manhattan distance


Just had a thought about DrussGT's hundreds of random VCS buffers and Manhattan distance:

Suppose we have an infinite number of random VCS buffers (random bin sizes and dimensions, weighted equally, no decay). Then increasing the distance by 1 in one dimension decreases the total weight of buffers (the data weight) containing that data by 1.

When the distance increases by 1 in dimension A and also by 1 in dimension B, the data weight decreases by 1 + 1 = 2, which is exactly how Manhattan distance works.
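
To spell the step out (under the assumption that each buffer slices each dimension independently with a uniformly random offset, and that distances are measured in units of the bin width and stay small), the fraction of buffers in which the data still lands in the same bin is roughly

<math>P(\text{same bin in both dimensions}) \approx (1 - d_A)(1 - d_B) \approx 1 - (d_A + d_B),</math>

so to first order the data weight falls off with the Manhattan distance <math>d_A + d_B</math>.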

If we use Manhattan distance together with KNN and decrease the weight linearly with data distance, it should yield results similar to random VCS.
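
Here is a rough sketch of that sanity check in Java (nothing to do with DrussGT's actual implementation; the class name, bin-width range, and buffer count are all made up). It builds many random VCS-style segmentations, measures in what fraction of them two nearby points share a bin in every dimension, and compares that against a linear Manhattan-distance weight:

<syntaxhighlight lang="java">
import java.util.Random;

/**
 * Sanity-check sketch: fraction of random VCS-style buffers containing both of
 * two nearby points, versus a linear Manhattan-distance weight.
 */
public class RandomVcsVsManhattan {

    static final int DIMENSIONS = 3;
    static final int BUFFERS = 200_000;          // stand-in for "infinite" random buffers
    static final Random RNG = new Random(42);

    /** True if both coordinates fall into the same bin of a random 1D segmentation. */
    static boolean sameBin(double a, double b, double binWidth, double offset) {
        return Math.floor((a - offset) / binWidth) == Math.floor((b - offset) / binWidth);
    }

    public static void main(String[] args) {
        // Two nearby points; per-dimension distances kept small relative to the
        // bin widths so the linear (first-order) approximation applies.
        double[] p = new double[DIMENSIONS];
        double[] q = new double[DIMENSIONS];
        double manhattan = 0;
        for (int i = 0; i < DIMENSIONS; i++) {
            p[i] = RNG.nextDouble();
            q[i] = p[i] + 0.05 * RNG.nextDouble();
            manhattan += Math.abs(q[i] - p[i]);
        }

        int shared = 0;
        for (int b = 0; b < BUFFERS; b++) {
            boolean allSame = true;
            for (int i = 0; i < DIMENSIONS; i++) {
                // Each buffer slices each dimension with its own random bin width and offset.
                double binWidth = 0.5 + RNG.nextDouble();     // widths uniform in [0.5, 1.5)
                double offset = RNG.nextDouble() * binWidth;  // random phase of the grid
                if (!sameBin(p[i], q[i], binWidth, offset)) {
                    allSame = false;
                    break;
                }
            }
            if (allSame) {
                shared++;
            }
        }

        // For widths uniform in [0.5, 1.5), the mean reciprocal width is ln(3), so the
        // expected shared fraction is roughly 1 - ln(3) * (Manhattan distance).
        double sharedFraction = (double) shared / BUFFERS;
        double linearWeight = 1 - Math.log(3) * manhattan;
        System.out.printf("fraction of buffers containing both points: %.4f%n", sharedFraction);
        System.out.printf("linear Manhattan-distance weight           : %.4f%n", linearWeight);
    }
}
</syntaxhighlight>

The two printed numbers should roughly agree as long as the per-dimension distances stay small; the ln(3) factor is just the rescaling that the random bin widths introduce.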

However, once a rolling average (decay) is used, things become quite different...

Xor (talk) 16:43, 15 September 2018