LightR

LightR Sub-pages:
- Version History

This page is under construction. For recent activities, see Version History.

Design principle
- Strategy light, machine learning heavy

Central goal
- Learned models -> learned systems

Planned experiments
- Towards deep learning:
  - Multiple hand-tuned danger models -> Expert model & gate model (see the gating sketch below)
  - Hand-crafted features with naive KNN -> Search-based sequence model (see the KNN sketch below)
  - Offline pre-training & online fine-tuning of everything above.
- Towards differentiable programming:
  - Directly optimizing the prior probability of getting hit (max escape angle, distancing, and multi-wave risk fusion; see the risk-fusion sketch below)
  - Per-instance optimization of the above (Pareto frontier; see the Pareto sketch below)
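
The expert-plus-gate item above can be read as a mixture-of-experts over danger estimates: each expert scores a candidate movement option, and a gate model weights the experts based on the current situation. The sketch below assumes that reading; the interfaces, the feature vector, and the guess-factor parameter are hypothetical names, not LightR's published code.

<syntaxhighlight lang="java">
import java.util.List;

// Sketch of fusing several danger models with a learned gate
// (mixture-of-experts style). All names here are illustrative.
interface DangerModel {
    // Estimated danger of a candidate guess factor, given a feature
    // vector describing the current situation.
    double danger(double[] features, double guessFactor);
}

interface GateModel {
    // One non-negative weight per expert, e.g. a softmax output summing to 1.
    double[] weights(double[] features);
}

final class GatedDanger {
    private final List<DangerModel> experts;
    private final GateModel gate;

    GatedDanger(List<DangerModel> experts, GateModel gate) {
        this.experts = experts;
        this.gate = gate;
    }

    // Fused danger: weighted sum of the experts' estimates.
    double danger(double[] features, double guessFactor) {
        double[] w = gate.weights(features);
        double fused = 0;
        for (int i = 0; i < experts.size(); i++) {
            fused += w[i] * experts.get(i).danger(features, guessFactor);
        }
        return fused;
    }
}
</syntaxhighlight>

Existing hand-tuned models could stay in the expert list while only the gate is learned, which would make the migration from the current hand-tuned ensemble incremental.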
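
For the baseline being replaced, "hand-crafted features with naive KNN", the usual shape is to log feature/guess-factor pairs and look up the k most similar logged situations when aiming or surfing. The sketch below shows that generic shape; the class, the features, and the distance metric are assumptions, not LightR's actual implementation.

<syntaxhighlight lang="java">
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Generic "hand-crafted features + naive KNN" baseline; illustrative only.
final class NaiveKnn {
    // One logged situation: a hand-crafted feature vector and the
    // guess factor that was observed for it.
    record Entry(double[] features, double guessFactor) {}

    static double squaredDistance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d += diff * diff;
        }
        return d;
    }

    // Guess factors of the k logged situations most similar to the current one.
    static List<Double> nearestGuessFactors(List<Entry> log, double[] current, int k) {
        return log.stream()
                .sorted(Comparator.comparingDouble(e -> squaredDistance(e.features(), current)))
                .limit(k)
                .map(Entry::guessFactor)
                .collect(Collectors.toList());
    }
}
</syntaxhighlight>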
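
One way to read "multi-wave risk fusion" above is to combine per-wave hit probabilities for a candidate movement option into a single probability of getting hit, treating the waves as approximately independent. The sketch below makes that independence assumption; it is not necessarily the fusion LightR ends up using.

<syntaxhighlight lang="java">
// Fuse per-wave hit probabilities into one risk, assuming the waves
// are (approximately) independent.
final class WaveRiskFusion {
    // perWaveHitProbability[i] = estimated probability that wave i hits us
    // if we commit to this movement option.
    static double probabilityOfGettingHit(double[] perWaveHitProbability) {
        double survive = 1.0;
        for (double p : perWaveHitProbability) {
            survive *= 1.0 - p;   // probability of dodging this particular wave
        }
        return 1.0 - survive;     // probability of being hit by at least one wave
    }
}
</syntaxhighlight>

Waves fired by the same gun are correlated, so independence is only an approximation; one practical benefit is that the fused risk stays a proper probability in [0, 1], which suits a "prior probability of getting hit" objective.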
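
For the Pareto-frontier item, the sketch below keeps only the non-dominated candidates among movement options scored on several competing objectives (for example hit probability and a distancing penalty, both to be minimized). The choice of objectives is an assumption for illustration.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;

// Pareto filter over candidates scored on several objectives,
// lower being better on each objective.
final class ParetoFilter {
    // True if a is at least as good as b everywhere and strictly better somewhere.
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    // Keep only the non-dominated candidates (the Pareto frontier).
    static List<double[]> frontier(List<double[]> candidates) {
        List<double[]> front = new ArrayList<>();
        for (double[] c : candidates) {
            boolean dominated = false;
            for (double[] other : candidates) {
                if (other != c && dominates(other, c)) {
                    dominated = true;
                    break;
                }
            }
            if (!dominated) {
                front.add(c);
            }
        }
        return front;
    }
}
</syntaxhighlight>

Per-instance optimization could then pick a point from this frontier with a situation-dependent trade-off instead of a single fixed weighting of the objectives.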