Great and very detailed article!

Algorithmic optimisation will pretty much always outperform low-level optimisations such as optimising memory access to avoid cache misses.

You really shouldn't need to worry about cache misses in Java!

Optimising from O(n) to O(log n) will give a big performance benefit!
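
For a concrete (purely hypothetical) illustration of that difference, here is a linear walk versus a binary search when locating the first recorded scan at or after a given time; the class, field names and numbers below are made up, not taken from the article:

```java
import java.util.Arrays;

public class ScanLog {
    // Scan times, assumed sorted in ascending order.
    private final long[] times = new long[] {3, 7, 12, 20, 33, 41, 58};

    // O(n): walk the whole log until the first time >= t.
    int linearSearch(long t) {
        for (int i = 0; i < times.length; i++) {
            if (times[i] >= t) return i;
        }
        return -1; // no scan at or after t
    }

    // O(log n): binary search for the insertion point of t.
    int binarySearch(long t) {
        int i = Arrays.binarySearch(times, t);
        if (i < 0) i = -i - 1;           // convert (-(insertion point) - 1)
        return i < times.length ? i : -1;
    }

    public static void main(String[] args) {
        ScanLog log = new ScanLog();
        System.out.println(log.linearSearch(15)); // 3
        System.out.println(log.binarySearch(15)); // 3
    }
}
```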

Wolfman (talk)20:01, 3 November 2017

Yes, I agree with that. It just happens that in a scenario where you get on average one scan every 4-5 ticks, and the average BFT is 50 or less, even the theoretical improvement becomes negligible. But I'm a guy who likes to have the worst case situations nicely covered :P

The interesting question for me is: "does this make my bot run faster?"

I do not have the answer; I only know that this helps me avoid skipping turns because of odd worst-case situations.

Rsalesc (talk)21:14, 3 November 2017

Well, I think the worst case is not about BFT, but about the entire round time. BFT is too small to make you skip a turn, but a bug most bot authors make could make the worst case span an entire round.

The catch is: how do you handle data from a different round?

Xor (talk)00:46, 4 November 2017

E.g. you store the information of the next round right after the first round, and when the scans of the first round aren't enough to get a hit, you continue searching scans from the next round, starting from time = 0 up to time = movie start time + BFT.

If you store time as global time, this will only result in inaccurate results, which may be smoothed out by KDE. But if you store round time, it will cause the data of the entire round to be iterated.
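
A minimal sketch of one way to keep keys monotonic across rounds, assuming you convert the in-round turn number into a global time before storing scans (the class and method names are hypothetical, not from the article):

```java
// Converts round-local turn numbers into a monotonically increasing global
// time, so scans from a later round can never sort before an earlier one.
public final class GlobalClock {
    private long roundOffset = 0;   // global time at which the current round started
    private long lastTurn = -1;     // last in-round turn number seen

    // Call once per tick with the in-round turn number (resets to 0 each round).
    public long toGlobalTime(long roundTime) {
        if (roundTime < lastTurn) {
            // Round boundary detected: shift the offset past everything recorded so far.
            roundOffset += lastTurn + 1;
        }
        lastTurn = roundTime;
        return roundOffset + roundTime;
    }
}
```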

Xor (talk)04:09, 4 November 2017

When the data isn't enough I just stop. If I'm binary searching, I guarantee that its domain is entirely inside a single round. I don't even consider the scans of the next round; I discard that situation. Then I keep picking matches from the tree with an iterator.
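
A rough sketch of what bounding the search domain to a single round could look like, assuming the scans sit in arrays sorted by round and then by in-round time (all names here are hypothetical):

```java
import java.util.Arrays;

final class RoundBoundedSearch {
    private final int[] scanRound;  // round number of each scan, non-decreasing
    private final long[] scanTime;  // in-round time of each scan, sorted within a round

    RoundBoundedSearch(int[] scanRound, long[] scanTime) {
        this.scanRound = scanRound;
        this.scanTime = scanTime;
    }

    // Returns the index of the first scan at or after 'time' in 'round',
    // or -1 if that round's data is not enough to cover the request.
    int search(int round, long time) {
        int lo = lowerBound(scanRound, round);       // first scan of this round
        int hi = lowerBound(scanRound, round + 1);   // one past the last scan
        int i = Arrays.binarySearch(scanTime, lo, hi, time);
        if (i < 0) i = -i - 1;                       // convert to insertion point
        return i < hi ? i : -1;                      // stop instead of spilling into the next round
    }

    // First index whose value is >= key.
    private static int lowerBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }
}
```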

Rsalesc (talk)04:45, 4 November 2017
 
 

Of course it is not only about the BFT time; we still have a kd-tree and the other components, but when we are talking about milliseconds it helps a lot.

Rsalesc (talk)01:19, 4 November 2017

A kd-tree is very fast compared to the cost of simulation, imo.

Xor (talk)04:10, 4 November 2017

Not as fast as the presented algo for sure.

Rsalesc (talk)04:34, 4 November 2017
 
 
 
 

Well, that’s only true for large enough n... And for small n, such as in our case, the constant factor is dominant.

Btw, memory access is WAY more expensive than basic calculations, so for small n the gain from optimized memory access often outperforms textbook algorithms that don't access contiguous memory in order.
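
As a rough illustration of that constant factor (a toy measurement only, assuming a JVM; no warmup or proper benchmarking harness, and the n and iteration counts are made up):

```java
import java.util.LinkedList;

public class LayoutDemo {
    public static void main(String[] args) {
        int n = 64;                          // "small n", e.g. one BFT worth of scans
        double[] flat = new double[n];       // contiguous primitive array
        LinkedList<Double> boxed = new LinkedList<>(); // pointer-chasing, boxed values
        for (int i = 0; i < n; i++) { flat[i] = i; boxed.add((double) i); }

        long t0 = System.nanoTime();
        double a = 0;
        for (int r = 0; r < 1_000_000; r++)
            for (int i = 0; i < n; i++) a += flat[i];
        long t1 = System.nanoTime();
        double b = 0;
        for (int r = 0; r < 1_000_000; r++)
            for (double v : boxed) b += v;
        long t2 = System.nanoTime();

        System.out.printf("array: %d ms, linked list: %d ms (sums %.0f / %.0f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, a, b);
    }
}
```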

Xor (talk)00:43, 4 November 2017