Archived talk:User:Rednaxela 2012-05-16

From Robowiki
Revision as of 11:01, 27 January 2009 by Skilgannon (talk | contribs) (→‎Slow Algorithms: pre-computed precise prediction?)

Slow Algorithms

Well then... I set up some code to calculate, by brute force, nearly all possible ways a bot could move (turning restricted to 3 possible values, acceleration to integers) within a given amount of time. Calculating 10 ticks took about 1 second, which is very good for my purposes. Calculating 20 ticks, however, didn't seem to stop even after hours, so I estimated roughly how long it would take with the same algorithm that managed 10 ticks in 1 second: it turns out 20 ticks would take at least 110 years!!! Wowzers... looks like I'll have to diverge from brute force and work with some methods that drastically reduce calculation at the cost of significant approximation. --Rednaxela 16:07, 26 January 2009 (UTC)
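Those numbers are consistent with a branching factor of 9 per tick (3 turn values × 3 integer accelerations — an assumption, since the exact acceleration choices aren't stated): 9^20 / 9^10 ≈ 3.5 billion, so a 1-second search at 10 ticks projects to roughly 110 years at 20 ticks. A quick sanity check in Java:

```java
public class BruteForceGrowth {
    public static void main(String[] args) {
        // Assumption: 3 turn values x 3 integer accelerations = 9 branches per tick.
        double branches = 9.0;
        double statesAt10 = Math.pow(branches, 10);   // explored in ~1 second
        double statesAt20 = Math.pow(branches, 20);
        double seconds = statesAt20 / statesAt10;     // projected runtime for 20 ticks
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("projected runtime for 20 ticks: ~%.0f years%n", years);
        // prints: projected runtime for 20 ticks: ~110 years
    }
}
```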

Those exponential algorithms =S. Maybe instead of approximating you could prune branches of your search tree; a lot of states will be very similar, and some others will have low probability. Hope this gives you ideas to improve your algorithm. --zyx 23:42, 26 January 2009 (UTC)
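The probability-pruning idea might look something like this toy sketch (the branching factor of 9 and the per-branch probabilities are made-up illustrative values, not anything measured): branches whose cumulative probability falls below a threshold are cut off, so only a small fraction of the full 9^10 tree is ever visited.

```java
public class PrunedSearch {
    static long explored = 0;

    // Recursively expand the movement tree, but stop descending once the
    // cumulative probability of a branch drops below the threshold.
    static void search(int ticksLeft, double prob, double threshold) {
        explored++;
        if (ticksLeft == 0 || prob < threshold) return;
        // Hypothetical: 9 child states per tick, not all equally likely.
        double[] childProb = {0.30, 0.15, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05};
        for (double p : childProb)
            search(ticksLeft - 1, prob * p, threshold);
    }

    public static void main(String[] args) {
        search(10, 1.0, 1e-6);
        // Far fewer nodes than the ~3.5 billion leaves of the unpruned tree.
        System.out.println("nodes explored with pruning: " + explored);
    }
}
```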

Well, for my purposes, pruning low-probability states would be unacceptable; however, I am planning to redesign it so it can prune sufficiently similar (or exactly identical, if I can afford the check) states at each tick. The memory requirement will unavoidably increase drastically, though, since making it non-recursive would require keeping each tick's states in memory. Hopefully that increase won't get too high. I intend to run these calculations for at least 100 ticks, so yeah... hopefully I won't need to borrow a supercomputer to get SaphireSlippers on the road... --Rednaxela 02:06, 27 January 2009 (UTC)
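Merging identical states per tick turns the exponential tree into a polynomially-sized frontier. A minimal sketch, using a hypothetical 1-D model (integer position and velocity, accelerations −1/0/+1) rather than real Robocode physics: expand one tick at a time, packing each state into a key so duplicates collapse in a HashSet.

```java
import java.util.HashSet;
import java.util.Set;

public class MergedSearch {
    // Pack a 1-D state as (position << 8) | (velocity + 128).
    static long pack(int pos, int vel) {
        return ((long) pos << 8) | (vel + 128);
    }

    // Advance every state in the frontier by one tick; identical
    // (position, velocity) pairs merge automatically in the set.
    static Set<Long> step(Set<Long> states) {
        Set<Long> next = new HashSet<>();
        for (long s : states) {
            int pos = (int) (s >> 8);
            int vel = (int) (s & 0xFF) - 128;
            for (int accel = -1; accel <= 1; accel++) {
                int v = vel + accel;
                next.add(pack(pos + v, v));
            }
        }
        return next;
    }

    public static void main(String[] args) {
        Set<Long> states = new HashSet<>();
        states.add(pack(0, 0));
        for (int tick = 1; tick <= 20; tick++)
            states = step(states);
        // Without merging this would be 3^20 = ~3.5 billion branches;
        // with merging the distinct-state count stays in the thousands.
        System.out.println("distinct states after 20 ticks: " + states.size());
    }
}
```

The memory cost Rednaxela mentions is visible here: the whole frontier for a tick must be held at once, instead of one recursion path at a time.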

How about working the other way around? Instead of working out all the different ways you could move and seeing where you end up, take lots of different places, try to get to them, and see where you end up. (Basically, my style of GoTo surfing ;-) ) This way it is very easy to limit computational time: just increase or decrease the granularity of the places you are using as 'targets'. And also: what's up with 1 second of computational time being acceptable? Are you pre-computing all possible locations and then just transforming them at runtime? My guess is that would be slower than precise prediction, due to evaluating the danger at the HUGE number of possible positions. Although I can see its usefulness... finding optimal locations considering distance etc. For a 'pruning' technique, how about 'merging' all entries that are at the same velocity and have less than a certain amount of difference in heading and location? --Skilgannon 10:01, 27 January 2009 (UTC)
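The inverse approach could be sketched like this (a hypothetical 1-D toy with made-up granularity and simplified Robocode-like limits of max speed 8, acceleration 1/tick, deceleration 2/tick; real movement is 2-D with turn-rate limits): pick candidate targets at some spacing, steer toward each one, and record where the bot actually ends up.

```java
public class GoToSampling {
    // Drive toward a 1-D target for a fixed number of ticks and
    // report the final position (which may under- or overshoot).
    static double simulate(double target, int ticks) {
        double pos = 0, vel = 0;
        for (int t = 0; t < ticks; t++) {
            double dir = Math.signum(target - pos);
            if (dir == 0) dir = -Math.signum(vel);          // on target: brake
            if (vel * dir >= 0)
                vel = Math.max(-8, Math.min(8, vel + dir)); // speed up toward target
            else
                vel += 2 * dir;                             // brake at 2/tick
            pos += vel;
        }
        return pos;
    }

    public static void main(String[] args) {
        double granularity = 20;  // coarser spacing = fewer candidates = cheaper
        for (double target = -60; target <= 60; target += granularity)
            System.out.printf("target %5.0f -> ended at %5.1f%n",
                              target, simulate(target, 10));
    }
}
```

Tuning `granularity` is the knob Skilgannon describes: the cost scales with the number of candidate targets rather than exploding with the number of ticks.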