Benchmarking sampling strategies with ado #335
Unanswered
danielelotito asked this question in Q&A
Replies: 1 comment

moved to #607
I’m working on benchmarking different sampling strategies, and I could use your input on how to set this up efficiently with an ado custom experiment.
Here’s the situation:
• For each combination of points in the training set, I need to train a separate model; the order of the points doesn’t matter.
• I want to make sure we never train the same model twice on the same set of points.
• My idea is to use the orchestrator, so it feels natural for the fundamental unit to be a unique identifier for each set of points, but setting up a space in this way sounds impractical.
• Each set should then be linked to a set of performance metrics (measured properties).
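To make the deduplication idea concrete, here is a minimal sketch outside of ado: an order-invariant identifier for each set of points, plus a cache so a set is trained at most once. All names here (`set_identifier`, `train_once`, the `trained` dict) are hypothetical illustrations, not part of ado's API.

```python
from hashlib import sha256

def set_identifier(point_ids):
    """Canonical ID for a set of points: sorting makes it order-invariant."""
    canonical = ",".join(sorted(map(str, point_ids)))
    return sha256(canonical.encode()).hexdigest()

# identifier -> measured properties for the model trained on that set
trained = {}

def train_once(point_ids, train_fn):
    """Train only if this exact set of points has not been seen before."""
    key = set_identifier(point_ids)
    if key not in trained:
        trained[key] = train_fn(point_ids)  # returns the measured properties
    return trained[key]
```

With this, `[1, 2, 3]` and `[3, 2, 1]` map to the same identifier, so the second request is served from the cache instead of retraining.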
Basically, I would like a setup that satisfies the points above. Is this already possible within ado? What would be the recommended practice?
Additional detail
The size of the benchmark must be suited to the caching approach, i.e. sampling subsets of size O(10) from a total of O(10) points, with constraints on the sampling.