Talk:Range Search
Revision as of 19:54, 8 August 2011
Your KNN algorithm illustration isn't correct, as it is showing a range box like the others. A more accurate depiction would be lines drawn to the nearest 4/8 neighbors. As for range searches, My KdTree Implementation supports them. — Chase-san 10:47, 8 August 2011 (UTC)
- Yes, you're right. I'll redraw it later. But either way it will be a rhombus for the Manhattan distance formula and a circle for the Euclidean distance formula, where the radius is the maximum distance to a neighbor. --Jdev 11:07, 8 August 2011 (UTC)
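To make the geometry concrete, here is a minimal Java sketch of the two distance formulas (the class and method names are illustrative, not code from anyone's gun). With the Manhattan formula, the set of points within a given radius of the query is a rhombus; with the Euclidean formula it is a circle; and for KNN that radius is simply the distance to the farthest of the k chosen neighbors.

public class DistanceSketch {
    // Manhattan distance: sum of per-dimension absolute differences.
    // Points within radius r of a query form a rhombus (diamond).
    static double manhattan(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum;
    }

    // Euclidean distance: square root of the sum of squared differences.
    // Points within radius r of a query form a circle (a ball in general).
    static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}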
Thanks to all for the help. A little clarification: IMHO, the advantage of RS is that if you have many data points but the current situation lies in an area with low data density, you don't pull in irrelevant data. --Jdev 16:54, 8 August 2011 (UTC)
I quite agree this is an idea worth looking into. I've tried it in guns and not found any improvement over KNN, but it's something I'd tinker with again. It's worth noting that some of us weight data points by inverse distance to the current data point, so theoretically we're not being hurt by including the less relevant data because we already weight it accordingly. --Voidious 18:46, 8 August 2011 (UTC)
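As an illustration of the inverse-distance weighting idea above, here is a minimal Java sketch; the 1 / (1 + distance) kernel and the stored guess factor are assumptions for the example, not anyone's actual gun code.

import java.util.List;

public class InverseDistanceWeighting {
    static class Neighbor {
        double distance;     // distance from the current situation
        double guessFactor;  // value stored with this data point (assumed)

        Neighbor(double distance, double guessFactor) {
            this.distance = distance;
            this.guessFactor = guessFactor;
        }
    }

    // Weighted average over the selected neighbors: each point is weighted
    // by 1 / (1 + distance), so less relevant (farther) points are still
    // included but contribute proportionally less.
    static double weightedEstimate(List<Neighbor> neighbors) {
        double weightedSum = 0;
        double totalWeight = 0;
        for (Neighbor n : neighbors) {
            double w = 1.0 / (1.0 + n.distance);
            weightedSum += w * n.guessFactor;
            totalWeight += w;
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }
}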
- Here is another case: the current situation is in an area with high density, and you can get much more relevant data than just k points. --Jdev 18:54, 8 August 2011 (UTC)
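To sum up the thread, here is a brute-force Java sketch contrasting the two retrieval rules (hypothetical helper methods, not Chase-san's KdTree Implementation and not tree-based): KNN always returns exactly k points no matter how far away they are, while a range search returns only the points within radius r.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RangeVsKnnSketch {
    static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Range search: everything within radius r of the query, however many
    // points that is (possibly none in a sparse region, possibly far more
    // than k in a dense one).
    static List<double[]> rangeSearch(List<double[]> data, double[] query, double r) {
        List<double[]> result = new ArrayList<>();
        for (double[] p : data) {
            if (euclidean(p, query) <= r) {
                result.add(p);
            }
        }
        return result;
    }

    // KNN: always exactly k points (if available), however far away the
    // k-th one happens to be.
    static List<double[]> knn(List<double[]> data, double[] query, int k) {
        List<double[]> sorted = new ArrayList<>(data);
        sorted.sort(Comparator.comparingDouble((double[] p) -> euclidean(p, query)));
        return new ArrayList<>(sorted.subList(0, Math.min(k, sorted.size())));
    }
}

Under this contrast, the low-density case above is rangeSearch returning few or no points rather than KNN pulling in distant ones, and the high-density case is rangeSearch returning many more than k relevant points.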