Implement get_next_trials #3107
base: main
Conversation
This pull request was exported from Phabricator. Differential Revision: D66367189
Codecov Report

@@            Coverage Diff             @@
##             main    #3107      +/-   ##
==========================================
+ Coverage   95.43%   95.79%   +0.35%
==========================================
  Files         503      508       +5
  Lines       50348    50776     +428
==========================================
+ Hits        48051    48641     +590
+ Misses       2297     2135     -162
Summary: As titled. get_next_trials takes a maximum number of trials to generate and, optionally, a dict of parameter values to hold fixed, and generates up to that number of trials (as parallelism is available). If the Experiment is not set, or the Experiment is set but the Optimization is not, an Exception is raised. If the GenerationStrategy is not set, one is chosen automatically. Once generated, trials are attached to the experiment and immediately marked RUNNING. Differential Revision: D66367189
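The flow described in the summary can be sketched as a few dozen lines of Python. Everything below is illustrative: the `Client` class, its attribute names, and the stand-in candidate generation are assumptions based on the summary text, not the actual Ax implementation.

```python
import itertools


class Client:
    """Minimal stand-in sketching the get_next_trials flow; not the real Ax Client."""

    def __init__(self):
        self.experiment = None
        self.optimization_config = None
        self.generation_strategy = None
        self.trials = {}
        self._trial_ids = itertools.count()

    def get_next_trials(self, maximum_trials, fixed_parameters=None):
        # Experiment and optimization config must be set before generating.
        if self.experiment is None or self.optimization_config is None:
            raise RuntimeError(
                "configure_experiment and configure_optimization must be "
                "called before get_next_trials"
            )
        # If no GenerationStrategy is set, one is chosen automatically.
        if self.generation_strategy is None:
            self.generation_strategy = "auto-selected"
        generated = {}
        # Generate up to maximum_trials trials (as parallelism allows).
        for _ in range(maximum_trials):
            parameters = {"x1": 0.5}  # stand-in for a generated candidate
            if fixed_parameters:
                parameters.update(fixed_parameters)  # pin user-fixed values
            index = next(self._trial_ids)
            # Attach to the experiment and immediately mark RUNNING.
            self.trials[index] = {"parameters": parameters, "status": "RUNNING"}
            generated[index] = parameters
        return generated


client = Client()
client.experiment, client.optimization_config = object(), object()
trials = client.get_next_trials(maximum_trials=2, fixed_parameters={"x2": 1.0})
```

Calling `get_next_trials` before configuring the experiment and optimization raises, matching the precondition the summary describes.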
Force-pushed 8cc7f9c to 20d5b45
Summary: Implements new Client methods `configure_experiment`, `configure_optimization`, and `configure_generation_strategy`. Creates a new module, api.utils.instantiation, that holds functions for converting from Configs to core Ax objects. These functions do not do validation, which will live on the configs themselves and be implemented in a separate diff. Note that this diff also does not implement saving to the DB, although this will happen after each of these three methods is called if a config is provided.

**I'd especially like comment on our use of SymPy** here to parse objective and constraint strings -- what we've wound up with is much less verbose, and I suspect much less error-prone, than what exists now in InstantiationBase, while also providing a more natural user experience (ex. not getting tripped up by spacing, automatically handling inequality simplification like `(x1 + x2) / 2 + 0.5 >= 0` --> `-0.5 * x1 - 0.5 * x2 <= 1`, etc.) without any manual string parsing on our end at all. I'm curious what people think of this strategy overall. SymPy usage occurs in `_parse_objective`, `_parse_parameter_constraint`, and `_parse_outcome_constraint`.

Specific RFCs:
* We made the decision earlier to combine the concepts of "outcome constraint" and objective thresholds into a single concept to make things clearer for our users -- do we still stand by this decision? Seeing it in practice, I think it will help our users a ton, but I want to confirm this before we get too far into implementation.
* We discussed previously that if we were using strings to represent objectives, we wanted users to be able to specify optimization direction via coefficients (ex. objective="loss" vs. objective="-loss"), **but we did not decide which direction a positive coefficient would indicate**. In this diff I've implemented things such that a positive coefficient indicates minimization, but I'm happy to change it -- I don't think one is better than the other; we just need to be consistent.
* To express relative outcome constraints, rather than use "%" like we do in AxClient, we ask the user to multiply their bound by the term "baseline" (ex. "qps >= 0.95 * baseline" will constrain such that the QPS is at least 95% of the baseline arm's QPS). To be honest, we do this to make things play nice with SymPy, but I also find it clearer, though I'm curious what you all think.

Differential Revision: D65826204
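The SymPy-based normalization described above can be sketched in a few lines. The `parse_constraint` helper below is hypothetical -- the diff's actual parsing lives in `_parse_parameter_constraint` and friends, and its exact normalization (where the constant lands, sign conventions) may differ:

```python
import sympy


def parse_constraint(s):
    """Hypothetical sketch: normalize an inequality string to 'sum(coef * x) <= bound'."""
    rel = sympy.sympify(s)  # parses ">=" / "<=" into a SymPy Relational
    if isinstance(rel, sympy.GreaterThan):
        expr = -(rel.lhs - rel.rhs)  # flip "lhs >= rhs" into "-(lhs - rhs) <= 0"
    elif isinstance(rel, sympy.LessThan):
        expr = rel.lhs - rel.rhs
    else:
        raise ValueError(f"expected an inequality, got {s!r}")
    # Move the constant term to the right-hand side as the bound.
    const, linear = expr.as_coeff_Add()
    coefs = {str(v): float(linear.coeff(v)) for v in linear.free_symbols}
    return coefs, float(-const)


coefs, bound = parse_constraint("(x1 + x2) / 2 + 0.5 >= 0")
# Under this sketch's normalization: coefs == {"x1": -0.5, "x2": -0.5}, bound == 0.5

# Relative constraints fall out naturally: "baseline" is just another symbol.
rel_coefs, rel_bound = parse_constraint("qps >= 0.95 * baseline")
```

Note how spacing, division, and sign handling are all absorbed by SymPy's parser, with no manual string manipulation.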
Summary: These methods are not, strictly speaking, "part of the API," but may be useful for developers and trusted partners. Each is fairly self-explanatory. Differential Revision: D66304352
Summary: configure_runner and configure_metric allow users to attach custom Runners and Metrics to their experiment. configure_runner is fairly straightforward and just sets experiment.runner. configure_metric is more complicated: given a list of IMetrics, it iterates through them and tries to find a metric with the same name somewhere on the experiment. In order, it checks the Objective (single, MOO, or scalarized), then outcome constraints, then tracking metrics. If no metric with a matching name is found, the provided metric is added as a tracking metric. Reviewed By: lena-kashtelyan Differential Revision: D66305614
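The configure_metric lookup order can be sketched with stub types. The `Metric`/`Experiment` dataclasses and the bucket names below are illustrative stand-ins, not Ax's real classes:

```python
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str


@dataclass
class Experiment:
    objective_metrics: list  # stands in for the single/MOO/scalarized objective
    constraint_metrics: list  # stands in for metrics on outcome constraints
    tracking_metrics: list = field(default_factory=list)


def configure_metric(experiment, metric):
    # Check the objective first, then outcome constraints, then tracking
    # metrics; replace the first existing metric with a matching name.
    buckets = (
        experiment.objective_metrics,
        experiment.constraint_metrics,
        experiment.tracking_metrics,
    )
    for bucket in buckets:
        for i, existing in enumerate(bucket):
            if existing.name == metric.name:
                bucket[i] = metric
                return
    # No match anywhere: add the provided metric as a tracking metric.
    experiment.tracking_metrics.append(metric)


exp = Experiment(objective_metrics=[Metric("loss")], constraint_metrics=[Metric("qps")])
configure_metric(exp, Metric("latency"))  # unmatched, so it becomes a tracking metric
configure_metric(exp, Metric("qps"))      # matched, so it replaces the constraint metric
```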
Force-pushed 20d5b45 to c2a2486