__NOTOC__ __NOEDITSECTION__
{{Template:Bot Categorizer|author=Xor|isMega=true|isOneOnOne=true|isMelee=false|isOpenSource=false|extends=Interface}} | {{Template:Bot Categorizer|author=Xor|isMega=true|isOneOnOne=true|isMelee=false|isOpenSource=false|extends=Interface}} |
LightR Sub-pages:
- Version History

This page is under construction. For recent activity, see Version History.
Design principle
- Strategy light, machine learning heavy.

Planned experiments
- Towards deep learning:
  - Multiple hand-tuned danger models -> Expert model & gate model (see the gating sketch below)
  - Hand-crafted features with naive KNN -> Search-based sequence model
  - Slow networks -> Knowledge distillation & quantization-aware training (see the distillation sketch below)
  - Offline pre-training & online fine-tuning of everything above.
- Towards differentiable programming:
  - Directly optimizing max escape angle & prior probability of getting hit (distancing); see the escape-angle sketch below
  - Per-instance-level optimization of the above (MEA & distancing as part of the network)
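
The expert & gate idea can be pictured as a small mixture-of-experts combiner: several danger models score the same guess factor, and a gate turns per-situation logits into mixing weights. This is only a minimal sketch under assumed interfaces (DangerModel, externally supplied gate logits); the names are hypothetical and this is not LightR's actual code.

<syntaxhighlight lang="java">
import java.util.List;

/**
 * Mixture-of-experts sketch: several danger models ("experts") score a guess
 * factor, and a gate converts per-situation logits into mixing weights.
 */
public class GatedDangerModel {

    /** An expert maps a guess factor in [-1, 1] to a danger value. */
    public interface DangerModel {
        double danger(double guessFactor);
    }

    private final List<DangerModel> experts;

    public GatedDangerModel(List<DangerModel> experts) {
        this.experts = experts;
    }

    /**
     * Combined danger: softmax the gate logits (one per expert, e.g. produced
     * by a small network from the current battle situation) and take the
     * weighted sum of the experts' danger estimates.
     */
    public double danger(double guessFactor, double[] gateLogits) {
        double[] weights = softmax(gateLogits);
        double combined = 0;
        for (int i = 0; i < experts.size(); i++) {
            combined += weights[i] * experts.get(i).danger(guessFactor);
        }
        return combined;
    }

    private static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) max = Math.max(max, x);
        double sum = 0;
        double[] out = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }
}
</syntaxhighlight>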
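For the "slow networks" item, the usual knowledge-distillation recipe trains a small student network to match the temperature-softened outputs of a large teacher. The sketch below shows only the distillation term of the loss, with hand-rolled softmax/cross-entropy and hypothetical names; the training loop, the networks themselves, any hard-label term, and the quantization-aware training hooks are all assumed and not part of LightR's published code.

<syntaxhighlight lang="java">
/**
 * Sketch of a standard knowledge-distillation loss: the student is trained to
 * match the teacher's temperature-softened class distribution.
 */
public final class DistillationLoss {

    /**
     * Cross-entropy between the teacher's softened distribution p and the
     * student's softened distribution q, scaled by T^2 as is conventional so
     * gradient magnitudes stay comparable across temperatures.
     */
    public static double loss(double[] teacherLogits, double[] studentLogits, double temperature) {
        double[] p = softmax(teacherLogits, temperature);
        double[] logQ = logSoftmax(studentLogits, temperature);
        double crossEntropy = 0;
        for (int i = 0; i < p.length; i++) {
            crossEntropy -= p[i] * logQ[i];
        }
        return crossEntropy * temperature * temperature;
    }

    private static double[] softmax(double[] logits, double temperature) {
        double[] logP = logSoftmax(logits, temperature);
        double[] out = new double[logP.length];
        for (int i = 0; i < out.length; i++) out[i] = Math.exp(logP[i]);
        return out;
    }

    /** Numerically stable log-softmax of logits / temperature. */
    private static double[] logSoftmax(double[] logits, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) max = Math.max(max, x / temperature);
        double sum = 0;
        for (double x : logits) sum += Math.exp(x / temperature - max);
        double logSum = max + Math.log(sum);
        double[] out = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            out[i] = logits[i] / temperature - logSum;
        }
        return out;
    }
}
</syntaxhighlight>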
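The differentiable-programming items lean on the classical Robocode escape-angle geometry: bullet speed is 20 - 3 * bulletPower and a robot's top speed is 8, so the non-precise maximum escape angle is asin(8 / bulletSpeed). A differentiable version of this quantity is what "MEA as part of the network" would have to expose; the helper below only states the closed form, not any of LightR's optimization code.

<syntaxhighlight lang="java">
/**
 * Classical (non-precise) maximum escape angle: how far, in angle from the
 * firer's point of view, a target moving at top speed can get from its current
 * bearing before the bullet arrives.
 */
public final class EscapeAngle {

    /** Robocode bullet speed for a given bullet power (power in [0.1, 3.0]). */
    public static double bulletSpeed(double bulletPower) {
        return 20.0 - 3.0 * bulletPower;
    }

    /** Maximum escape angle in radians, assuming the target's top speed of 8. */
    public static double maxEscapeAngle(double bulletPower) {
        return Math.asin(8.0 / bulletSpeed(bulletPower));
    }

    public static void main(String[] args) {
        // Example: a power-2.0 bullet travels at 14, giving an MEA of about 0.608 rad.
        System.out.printf("MEA for power 2.0: %.3f rad%n", maxEscapeAngle(2.0));
    }
}
</syntaxhighlight>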