<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://robowiki.net/w/index.php?action=history&amp;feed=atom&amp;title=Thread%3ATalk%3ADrussGT%2FUnderstanding_DrussGT%2FReason_behind_using_Manhattan_distance%2Freply_%288%29</id>
	<title>Thread:Talk:DrussGT/Understanding DrussGT/Reason behind using Manhattan distance/reply (8) - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://robowiki.net/w/index.php?action=history&amp;feed=atom&amp;title=Thread%3ATalk%3ADrussGT%2FUnderstanding_DrussGT%2FReason_behind_using_Manhattan_distance%2Freply_%288%29"/>
	<link rel="alternate" type="text/html" href="http://robowiki.net/w/index.php?title=Thread:Talk:DrussGT/Understanding_DrussGT/Reason_behind_using_Manhattan_distance/reply_(8)&amp;action=history"/>
	<updated>2026-04-11T05:10:52Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.34.1</generator>
	<entry>
		<id>http://robowiki.net/w/index.php?title=Thread:Talk:DrussGT/Understanding_DrussGT/Reason_behind_using_Manhattan_distance/reply_(8)&amp;diff=54781&amp;oldid=prev</id>
		<title>Skilgannon: Reply to Reason behind using Manhattan distance</title>
		<link rel="alternate" type="text/html" href="http://robowiki.net/w/index.php?title=Thread:Talk:DrussGT/Understanding_DrussGT/Reason_behind_using_Manhattan_distance/reply_(8)&amp;diff=54781&amp;oldid=prev"/>
		<updated>2018-08-28T17:15:27Z</updated>

		<summary type="html">&lt;p&gt;Reply to &lt;a href=&quot;/wiki/Thread:Talk:DrussGT/Understanding_DrussGT/Reason_behind_using_Manhattan_distance/reply_(4)&quot; title=&quot;Thread:Talk:DrussGT/Understanding DrussGT/Reason behind using Manhattan distance/reply (4)&quot;&gt;Reason behind using Manhattan distance&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;I think it is due to the noise rejection. For me the key is the ratio between how a small change across many dimensions is weighted compared to a big change in a single dimension, as you demonstrated above. You can also think of it as the difference between L1 and L2 distance and how each affects a minimization problem: L1 rejects large outliers and is the most robust you can get while still maintaining a convex search space, whereas L2 has a gradient that grows with distance, so dimensions with more error are effectively weighted higher, more than just proportionally to the amount of error.&lt;/div&gt;</summary>
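		The weighting behaviour described in the reply above can be sketched numerically (a minimal illustration added by the editor, not part of the original post; the vectors are hypothetical):

		```python
		# Compare how L1 (Manhattan) and L2 (Euclidean) distances weight a small
		# change spread over many dimensions versus a big change in one dimension.
		import math

		def l1(a, b):
		    # Manhattan distance: sum of absolute per-dimension differences
		    return sum(abs(x - y) for x, y in zip(a, b))

		def l2(a, b):
		    # Euclidean distance: square root of summed squared differences
		    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

		query = [0.0] * 9
		spread = [0.1] * 9            # small change in a lot of dimensions
		single = [0.9] + [0.0] * 8    # big change in a single dimension

		print(round(l1(query, spread), 6), round(l1(query, single), 6))  # 0.9 0.9
		print(round(l2(query, spread), 6), round(l2(query, single), 6))  # 0.3 0.9
		```

		Under L1 the two cases are equidistant, while under L2 the single large deviation dominates, which is the "dimensions with more error are weighted higher" effect the reply describes.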
		<author><name>Skilgannon</name></author>
		
	</entry>
</feed>