Difference between revisions of "Thread:User talk:D414/What's the highest ranking non-learning bot in the rumble?/reply (3)"
Revision as of 16:42, 25 January 2024
Well, we can loosen the limit of “perceptual” to allow information from the k most recent turns, i.e. k-perceptual. Under this definition, linear targeting would be 1-perceptual, circular targeting 2-perceptual, and average velocity targeting with window size k would be k-perceptual.
As long as k is large enough, we can still build effective learning methods. So there’s still no absolute difference between learning and non-learning. Even a method as simple as averaging velocity is still “learning”.
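To illustrate the idea, here is a minimal hypothetical sketch (not from any actual rumble bot) of what a k-perceptual estimator could look like: it is only ever allowed to remember the last k observed turns, here the enemy's scanned velocity. The class name and structure are my own assumptions for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical k-perceptual estimator: retains at most the
// k most recent observations, forgetting everything older.
class KPerceptualVelocity {
    private final int k;
    private final Deque<Double> window = new ArrayDeque<>();

    KPerceptualVelocity(int k) {
        this.k = k;
    }

    // Called once per turn with the latest scanned enemy velocity.
    void observe(double velocity) {
        window.addLast(velocity);
        if (window.size() > k) {
            window.removeFirst(); // drop anything older than k turns
        }
    }

    // Average velocity over the last (at most) k turns.
    double averageVelocity() {
        if (window.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (double v : window) {
            sum += v;
        }
        return sum / window.size();
    }
}
```

With k = 1 this degenerates to using only the latest velocity (as in linear targeting), while a larger k smooths over recent history, which is exactly the sense in which "averaging velocity" already shades into learning.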