
Clean and isolate model/backend attributes #635

Closed · wants to merge 48 commits

Conversation

Contributor

@irm-codebase irm-codebase commented Jul 9, 2024

Fixes #608 and #619

Summary of changes in this pull request

This PR reduces the amount of data we store throughout the init / build / solve process by avoiding attribute duplication and by cleanly defining what is passed to the backend and what is saved in model files.

  • model._timings and model._model_data.attrs["timestamp_xxxx"] are now model.timestamps.
  • backend._add_run_mode_math was moved to model._ensure_mode_math.
  • model.math / model._model_data.attrs["math"] are now just model.math.
  • model.math_documentation was moved to postprocessing.math_documentation, which takes any model as input. This should help users build their own docs without tampering with model.
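The consolidation above can be sketched in plain Python (illustrative names only, not Calliope's actual internals): each piece of state lives in exactly one place and is exposed through a property rather than being mirrored across attributes.

```python
class Model:
    """Sketch of the single-source-of-truth pattern (hypothetical class,
    not Calliope's real Model)."""

    def __init__(self):
        # One home for timestamps (previously model._timings AND
        # model._model_data.attrs["timestamp_xxxx"]).
        self.timestamps = {}
        # One home for math (previously model.math AND
        # model._model_data.attrs["math"]).
        self._math = {"constraints": {}}

    @property
    def math(self):
        # Read-only view over the single stored copy.
        return self._math


model = Model()
model.timestamps["model_creation"] = "2024-07-09T08:54"
print(model.math)  # {'constraints': {}}
```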

PR checklist

  • rework timestamps
  • avoid math additions in the backend
  • remove _model_def_dict
  • remove unnecessary math documentation attribute
  • avoid duplication of math attributes
  • ensure backend has limited attribute access

Reviewer checklist

  • Test(s) added to cover contribution
  • Documentation updated
  • Changelog updated
  • Coverage maintained or improved

@irm-codebase irm-codebase added the v0.7 (upcoming) version 0.7 label Jul 9, 2024
@irm-codebase irm-codebase self-assigned this Jul 9, 2024
@irm-codebase irm-codebase marked this pull request as draft July 9, 2024 08:54
@irm-codebase irm-codebase removed the request for review from brynpickering July 9, 2024 08:54
Base automatically changed from feature-gurobi-interface to main July 9, 2024 15:41
@irm-codebase
Contributor Author

At the moment, use of model.math_documentation is expected to fail. I'll most likely extract this from model, since it is not really needed in the regular case.

@irm-codebase
Contributor Author

model.math_documentation is now its own postprocessing module and is no longer included in model!
It now receives a model as input and builds the math documentation for it.
Docs are failing because of this, though.

@irm-codebase
Contributor Author

irm-codebase commented Jul 11, 2024

The current implementation returns a rather funky error when building the documentation for storage_intra_cluster.

For the storage_intra_cluster_max constraint, the method generate_top_level_where_array returns an object with missing dimensions!

To reproduce, add a conditional breakpoint in backend_model.py, line 284, with name == "storage_intra_cluster_max".

import calliope
from calliope.postprocess.math_documentation import MathDocumentation

MODEL_PATH = "docs/hooks/dummy_model/model.yaml"
model_config = calliope.AttrDict.from_yaml(MODEL_PATH)

# Building math documentation for every override triggers the failure.
for override in model_config["overrides"].keys():
    custom_model = calliope.Model(model_definition=MODEL_PATH, scenario=override)
    custom_model.build()
    math_documentation = MathDocumentation(custom_model)

Notice the mismatch in top_level_where.dims:

  • pyomo backend: ('nodes', 'techs', 'carriers', 'clusters')
  • latex backend: ('techs',)

No idea why this happens, yet.

@irm-codebase
Contributor Author

irm-codebase commented Jul 11, 2024

Further debugging:

The error is caused by clusters disappearing from the xarray dataset dimensions!

Edit: the error was caused by passing model.inputs instead of model._model_data to the LaTeX backend. Attribute filtering also removes dimensions!
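This pitfall is general to any dataset-like structure: keep only a subset of variables, and every dimension used solely by the dropped variables vanishes with them. A minimal stdlib-only sketch (hypothetical variable names, not the real model data):

```python
def dataset_dims(dataset):
    """Union of all dimensions used by a dataset's variables.
    Each variable maps name -> tuple of dimension names."""
    return {dim for dims in dataset.values() for dim in dims}

# Hypothetical variables mimicking the PR's situation.
full = {
    "flow_cap": ("nodes", "techs", "carriers"),
    "storage_intra_cluster_max": ("techs", "clusters"),
}

# Filtering to a subset of variables silently drops "clusters",
# since no surviving variable uses it.
filtered = {name: dims for name, dims in full.items() if name == "flow_cap"}

print(sorted(dataset_dims(full)))      # ['carriers', 'clusters', 'nodes', 'techs']
print(sorted(dataset_dims(filtered)))  # ['carriers', 'nodes', 'techs']
```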

@irm-codebase
Contributor Author

Closed in favor of #639

Successfully merging this pull request may close these issues.

Potential desync in model configuration at the model.py level