
UFS/dev PR#216 #1092

Merged: 6 commits, Oct 21, 2024
Conversation

@grantfirl (Collaborator, Author)

Identical to ufs-community#216

@climbfuji (Collaborator) left a comment

A question about the tuning parameter change, and a request to be added as reviewers manually until we've resolved the problem of the CODEOWNERS file. The progsigma changes are no problem, since they are only used if progsigma is enabled.

@@ -275,7 +275,7 @@ subroutine satmedmfvdifq_run(im,km,ntrac,ntcw,ntrw, &
       parameter(elmfac=1.0,elefac=1.0,cql=100.)
       parameter(dw2min=1.e-4,dkmax=1000.,xkgdx=1000.)
       parameter(qlcr=3.5e-5,zstblmax=2500.)
-      parameter(xkinv1=0.4,xkinv2=0.3)
+      parameter(xkinv1=0.15,xkinv2=0.3)
@climbfuji (Collaborator)
What's the effect of this? @matusmartini do you think this is ok? I know sas is not in our candidate suite for the immediate operational implementation, but it's in a few of our test suites.

@grantfirl (Collaborator, Author)

There are plots, etc. in ufs-community#216

@lisa-bengtsson (Collaborator)

Over the past 4+ years we have worked under the prototype process to come up with a physics suite that consists of schemes that are well integrated and tuned together. It is the GFSv17/GEFSv13/SFSv1 physics suite aimed at the next operational implementation at NOAA. If you "plug and play" schemes from that suite with other schemes, you will run into problems. If you want to run the GFSv17 suite, you need to set progsigma = true and run the scale-aware SAS scheme together with TKE-EDMF.

Regarding this specific parameter: it controls the background diffusion in the boundary-layer scheme. It was 0.15 in the HR3 prototype; Jongil briefly updated it to 0.4 to address challenges in marine stratocumulus, but after testing and integration with the changes in this PR we decided to change it back to 0.15. So it never had the value 0.4 in any GFS pre-operational prototype; HR3 and HR4 will both have 0.15.

@grantfirl @dustinswales @ligiabernardet @yangfanglin I would like to revisit our discussion on CODEOWNERS; I don't think this is going to work. My suggestion is that NRL creates their own fork where they can have their local changes.

@grantfirl (Collaborator, Author)

> @grantfirl @dustinswales @ligiabernardet @yangfanglin I would like to revisit our discussion on CODEOWNERS; I don't think this is going to work. My suggestion is that NRL creates their own fork where they can have their local changes.

Another idea that we have talked about for many years (probably since the beginning of the CCPP) is for developers to expose the most important tuning parameters via the argument list, use intent(in) for them within the schemes, and let hosts set them as they need. The same goes for sections of the algorithms that need to differ among hosts: wrap those changeable sections in if (logical_flag) then ... else blocks and have the logical flags come in as intent(in), settable by the host.

This preserves parts of the scheme that are truly interoperable across hosts/suites while allowing for different tuning/behaviors for different hosts.
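As a hypothetical sketch of this idea (all names here are invented for illustration; this is not the actual scheme interface), key tunings and behavior switches could come in through the argument list rather than hard-coded parameter statements:

```fortran
! Hypothetical sketch: tuning parameters and a behavior flag exposed as
! intent(in) arguments, set by the host, instead of parameter statements.
subroutine example_scheme_run(im, km, xkinv1, xkinv2, do_alt_branch, &
                              errmsg, errflg)
   implicit none
   integer,          intent(in)  :: im, km           ! dimensions
   real,             intent(in)  :: xkinv1, xkinv2   ! host-supplied tunings
   logical,          intent(in)  :: do_alt_branch    ! host-selected variant
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg

   errmsg = ''
   errflg = 0

   if (do_alt_branch) then
      ! host-specific variant of the algorithm goes here
   else
      ! default variant of the algorithm goes here
   end if
end subroutine example_scheme_run
```

Each host would then carry its preferred values (e.g. 0.15 vs 0.4 for the parameter discussed above) on its own side, and the scheme body stays identical across hosts.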

@grantfirl grantfirl mentioned this pull request Oct 16, 2024
@climbfuji (Collaborator)

> Over the past 4+ years we have worked under the prototype process to come up with a physics suite that consists of schemes that are well integrated and tuned together. It is the GFSv17/GEFSv13/SFSv1 physics suite aimed at the next operational implementation at NOAA. If you "plug and play" schemes from that suite with other schemes, you will run into problems. If you want to run the GFSv17 suite, you need to set progsigma = true and run the scale-aware SAS scheme together with TKE-EDMF.
>
> Regarding this specific parameter: it controls the background diffusion in the boundary-layer scheme. It was 0.15 in the HR3 prototype; Jongil briefly updated it to 0.4 to address challenges in marine stratocumulus, but after testing and integration with the changes in this PR we decided to change it back to 0.15. So it never had the value 0.4 in any GFS pre-operational prototype; HR3 and HR4 will both have 0.15.
>
> @grantfirl @dustinswales @ligiabernardet @yangfanglin I would like to revisit our discussion on CODEOWNERS; I don't think this is going to work. My suggestion is that NRL creates their own fork where they can have their local changes.

@lisa-bengtsson NRL has its own fork with local changes, just as the UFS has. But this is the authoritative repo where changes ought to be consolidated and shareable across all partners. As you may recall, based on the discussions over the last months, it was decided that no CCPP customer other than NOAA is watching and reviewing PRs that go into the ufs-community fork. But for the authoritative repo/branch (which is the case here, NCAR main), the vision has always been that this is a place all partners contribute to and, if needed, maintain differences from in their own forks. There have been slides with fancy graphics about changes being merged back and forth between the individual partners' forks and the authoritative repo all the way back when I was working at GSL, so there's nothing new about the concept of the authoritative repo not being owned or controlled by any single organization.

As for the particular question I asked, I wanted to make sure that folks who develop and update physics regularly at NRL saw this so that it doesn't get lost in a large update from NCAR main in the near future. Your explanation of the nature of this change is perfectly fine; it gives us the necessary information on why this parameter changes (back to something it was before anyway) so that we don't have to guess. The PR description in the corresponding ufs-dev PR (ufs-community#216) does give some information, but knowing what you wrote above definitely helps.

I like @grantfirl's idea but acknowledge that it may be too invasive in some cases. Another option (and there are many more) would be to use preprocessor directives for these. CCPP already knows the name of the host model (currently FV3 for the UFS, SCM for the SCM, ...). It would be straightforward to use the same name in a preprocessor statement (e.g. -DFV3 in the compile command) and switch parameters in the code based on that. This would be easy to read, would incur no computational overhead, and there would be fewer merge conflicts (since each partner would modify "their" line). As a hypothetical example for this parameter, it could look like:

#if defined(FV3)
parameter(xkinv1=0.15,xkinv2=0.3)
#elif defined(SCM)
parameter(xkinv1=0.40,xkinv2=0.3)
...
#endif

As long as there aren't too many CCPP users, this wouldn't be too unwieldy?

Anyway, totally fine with @grantfirl's suggestion or any other option that gets us to where we want.

Another, more radical idea, against the original philosophy of the CCPP (but in line with what you wrote above), would be to abandon NCAR ccpp-physics main and not share physics: share only the framework and the interface section to the framework in the physics. This would still facilitate moving schemes from one model to another, but would otherwise make sharing innovations more difficult ...

@lisa-bengtsson (Collaborator)

@climbfuji those are some good perspectives. My concern is that we already have quite a few NOAA UFS applications sharing the global coupled suite, e.g. GFSv17 at 9 km for medium range, GEFSv13 at 25 km for S2S, SFSv1 at 50 km for seasonal scales, and HAFSv2 for high-resolution nested hurricane applications. It has been very difficult to introduce any updates that are not under a namelist flag because of the testing demand across these applications (scales). The particular PR discussed here was tested in coupled SFS, GFS, and HAFS; it took a lot of resources. If the developers of this suite also have to test the changes for a different host (and I learned above it's not even the integrated suite, but rather schemes plugged and played in a different way), it is simply too difficult for the developers to keep up. If NOAA UFS developers are the main contributors to CCPP physics, and in addition NOAA is funding the CCPP, the discussion seems lopsided? I don't think your last suggestion is too radical; I think it is worth considering for NOAA/UFS if the testing demand for other CCPP hosts becomes too large a burden in the future. I would be happy to discuss possible options moving forward.

@climbfuji (Collaborator)

> @climbfuji those are some good perspectives. My concern is that we already have quite a few NOAA UFS applications sharing the global coupled suite, e.g. GFSv17 at 9 km for medium range, GEFSv13 at 25 km for S2S, SFSv1 at 50 km for seasonal scales, and HAFSv2 for high-resolution nested hurricane applications. It has been very difficult to introduce any updates that are not under a namelist flag because of the testing demand across these applications (scales). The particular PR discussed here was tested in coupled SFS, GFS, and HAFS; it took a lot of resources. If the developers of this suite also have to test the changes for a different host (and I learned above it's not even the integrated suite, but rather schemes plugged and played in a different way), it is simply too difficult for the developers to keep up. If NOAA UFS developers are the main contributors to CCPP physics, and in addition NOAA is funding the CCPP, the discussion seems lopsided? I don't think your last suggestion is too radical; I think it is worth considering for NOAA/UFS if the testing demand for other CCPP hosts becomes too large a burden in the future. I would be happy to discuss possible options moving forward.

Agreed, we should continue this conversation elsewhere. Two quick points. (1) Other organizations are also funding CCPP, most notably NCAR and NRL. The amount of $$$ may be different, though. (2) Under the original plan, the other host model developers would be responsible for testing PRs that are supposed to go into NCAR ccpp-physics main. In this case, it would be NRL's responsibility to test your changes in NEPTUNE (hence pinging my colleagues in the review, which started this conversation). Likewise, if NRL creates a PR for ccpp-physics main, NOAA should test those changes in the UFS. That's easier said than done, since we all operate on our forks (ufs-community/ufs-dev, NRL's fork, ...) and can't just simply switch from those to NCAR ccpp-physics main.

@lisa-bengtsson (Collaborator)

At the time when a PR is committed it has already been discussed in the working groups, and carefully tested across many applications. The process I hope we can reach in the future for cross institution collaboration is to have a scientific discussion between the developers on key parameters. If NRL will commit a scheme to CCPP in the future, I think NOAA could reach out and ask about parameters/processes if we would like to test it in the UFS outside of its integrated suite at NRL. I don't think it is appropriate to add people not familiar with the physics as CODEOWNERS so they can veto a change at the stage of a PR already being committed.

@climbfuji (Collaborator)

> At the time when a PR is committed it has already been discussed in the working groups, and carefully tested across many applications. The process I hope we can reach in the future for cross institution collaboration is to have a scientific discussion between the developers on key parameters. If NRL will commit a scheme to CCPP in the future, I think NOAA could reach out and ask about parameters/processes if we would like to test it in the UFS outside of its integrated suite at NRL. I don't think it is appropriate to add people not familiar with the physics as CODEOWNERS so they can veto a change at the stage of a PR already being committed.

That is exactly why NRL asked to be alerted of changes to the ufs-community fork in the first place - so that concerns can be raised before changes are committed. But that wasn't acceptable either.

I stand by my earlier comment that CODEOWNERS is such a poor name for the only mechanism that GitHub provides to alert someone of changes. The usual GitHub notifications are useless: everyone gets hundreds of them a day, for each and every commit being pushed, so "watching" a repo doesn't help. But review requests show up in dedicated places (e.g. https://github.com/pulls/review-requested). If CODEOWNERS was named PEOPLE-TO-BE-NOTIFIED-WHEN-A-PR-IS-READY-FOR-REVIEW, it would carry a different connotation.

And that is really what we are after. It's not about vetoing work that SMEs did over the course of several years; it's about knowing what is coming down the pipeline and making adjustments (model-dependent or in a generalized way, as discussed above) if necessary. And maybe, rarely, catching something like "hey, we've tried that too, but we ran into issues with xyz -- have you looked at this?". Think raising awareness and providing an opportunity for constructive feedback, not creating a system for blindly vetoing changes.

@dustinswales (Collaborator) commented Oct 16, 2024

A solution to this problem was proposed when this PR was going into the RRFS production branch, albeit slightly over the top since I exposed all the tunable parameters in the scheme; nonetheless, it could be modified in little time to include only the two parameters being tuned here.
If this approach was implemented, the UFS wouldn't have to change anything, and all NRL/NEPTUNE would have to do is set the parameters on the host side. Am I missing something here?
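For illustration only (the namelist group and variable handling here are hypothetical, not the actual RRFS proposal), setting such parameters on the host side could look like reading them from a namelist with scheme-matching defaults and passing them into the scheme's run routine:

```fortran
! Hypothetical sketch: host reads scheme tunings from a namelist and
! would pass them to the scheme through the CCPP argument list.
program host_tuning_sketch
   implicit none
   ! Defaults match the scheme's current hard-coded values
   real    :: xkinv1 = 0.15, xkinv2 = 0.3
   integer :: iostat, unit
   namelist /pbl_tuning_nml/ xkinv1, xkinv2

   ! Override defaults from input.nml if the group is present
   open(newunit=unit, file='input.nml', status='old', iostat=iostat)
   if (iostat == 0) then
      read(unit, nml=pbl_tuning_nml, iostat=iostat)
      close(unit)
   end if

   ! ... xkinv1/xkinv2 would then be passed to the scheme's _run routine
   print *, 'xkinv1 =', xkinv1, ' xkinv2 =', xkinv2
end program host_tuning_sketch
```

With this pattern each host carries its own namelist defaults, and a change like 0.4 -> 0.15 never touches the shared scheme source.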

@ligiabernardet (Collaborator)

The original vision of the CCPP is that each organization can achieve more if we work together. While there is a burden to collaboration (we have to communicate), there are also benefits. The premise is that the gains offset the deficits. Here are a few suggestions on the way forward:

  • Expose and quantify the contributions from the various organizations.
  • Clarify how each organization is benefitting (or can benefit) from the collaboration.
  • Clarify the burdens.
  • Use the above to discuss the best ways to collaborate (in ways that feel beneficial to all).

In the shorter term, I think it is beneficial to expose the most common tuning parameters (via namelist or preprocessor directives) because that will make it easier to tune among the various UFS applications. If there is a need, I hope NRL colleagues can help with the generalization.

I also agree with Lisa that the UFS and NRL need more conversations about the science before things get to the PR stage.

I can help unearth some of the programmatic topics above in a document to which you can contribute, and follow up with you via email on those topics.

@dustinswales (Collaborator) left a comment

I see no reason to hold this PR up while we suss out a pathway forward for dealing with "tuning".

@climbfuji (Collaborator)

> I see no reason to hold this PR up while we suss out a pathway forward for dealing with "tuning".

Agree!

@grantfirl grantfirl mentioned this pull request Oct 17, 2024
@grantfirl grantfirl merged commit 653db0a into NCAR:main Oct 21, 2024
3 checks passed