LightR
- LightR Sub-pages:
  - Version History
This page is under construction. For recent activities, see Version History.
- Design principle
  - Strategy light, machine learning heavy.
- Central goal
  - Learned models -> learned systems
- Planned experiments
  - Towards deep learning:
    - Multiple hand-tuned danger models -> Expert model & gate model (see the first sketch after this list)
    - Hand-crafted features with naive KNN -> Search-based sequence model
    - Offline pre-training & online fine-tuning of everything above.
  - Towards differentiable programming:
    - Directly optimizing the prior probability of getting hit (max escape angle & distancing; see the second sketch after this list)
    - Per-instance optimization of the above (MEA & distancing as part of the network)
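The "Expert model & gate model" item describes a mixture-of-experts style combination: several danger models each score the same situation, and a learned gate decides how much to trust each one. The Java sketch below shows only that combination step under assumed interfaces; `DangerExpert`, `GateModel`, and `combineDanger` are hypothetical names for illustration, not LightR's actual code.

```java
public class GatedDangerSketch {

    /** One expert maps a feature vector to a danger surface over angular bins. */
    interface DangerExpert {
        double[] danger(double[] features, int bins);
    }

    /** The gate maps the same features to one logit per expert. */
    interface GateModel {
        double[] logits(double[] features);
    }

    /** Softmax over the gate logits, then a weighted sum of expert danger surfaces. */
    static double[] combineDanger(double[] features, DangerExpert[] experts,
                                  GateModel gate, int bins) {
        double[] weights = softmax(gate.logits(features));
        double[] combined = new double[bins];
        for (int e = 0; e < experts.length; e++) {
            double[] d = experts[e].danger(features, bins);
            for (int b = 0; b < bins; b++) {
                combined[b] += weights[e] * d[b];
            }
        }
        return combined;
    }

    static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0.0;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.exp(x[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }
}
```

The point of the gate is that the weighting itself is learned rather than hand-tuned, which is exactly the step away from "multiple hand-tuned danger models".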
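For the "towards differentiable programming" direction, the toy sketch below illustrates the general technique of descending on a differentiable surrogate of the prior probability of getting hit, with the escape-angle fraction and the preferred distance as the optimized parameters. The surrogate (bot angular width divided by the reachable escape-angle range) and every constant in it are assumptions chosen for illustration; LightR's actual objective and network are not documented on this page.

```java
public class HitProbabilitySketch {

    static final double BOT_WIDTH = 36.0;      // Robocode bot width in pixels
    static final double BULLET_SPEED = 14.0;   // e.g. a power-2.0 bullet (20 - 3 * power)
    static final double MEA = Math.asin(8.0 / BULLET_SPEED); // classic max escape angle

    /** Assumed surrogate: p ~ (botWidth / distance) / (2 * escapeFraction * MEA). */
    static double hitProbability(double escapeFraction, double distance) {
        double botAngle = BOT_WIDTH / distance; // small-angle approximation of angular width
        return botAngle / (2.0 * escapeFraction * MEA);
    }

    public static void main(String[] args) {
        double f = 0.5;    // fraction of the max escape angle the movement actually uses
        double d = 200.0;  // preferred fighting distance
        double lrF = 0.05, lrD = 2000.0;

        for (int step = 0; step < 100; step++) {
            double p = hitProbability(f, d);
            // Analytic gradients of the surrogate: dp/df = -p/f, dp/dd = -p/d.
            double gradF = -p / f;
            double gradD = -p / d;
            // Gradient descent on p; clamping stands in for real constraints
            // (wall reachability, not giving up control of the distance).
            f = clamp(f - lrF * gradF, 0.1, 1.0);
            d = clamp(d - lrD * gradD, 100.0, 600.0);
        }
        System.out.printf("escape fraction = %.2f, distance = %.1f, p(hit) = %.3f%n",
                f, d, hitProbability(f, d));
    }

    static double clamp(double x, double lo, double hi) {
        return Math.max(lo, Math.min(hi, x));
    }
}
```

In the per-instance variant listed above, the same descent step would run per enemy wave, with the escape angle and distancing terms produced inside the network rather than by fixed formulas.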