research-projects (temp-RIT-Tides and descendants)
Repository for the rapid_PE / RIFT code, developed at RIT (forked long ago from work at UWM/CGCA).
Please see INSTALL.md
If you are using this code for a production analysis, please contact us to make sure you follow the instructions included here! Also, make sure you cite the relevant rapid_pe/RIFT papers:
- Pankow et al 2015
- Lange et al 2018 (RIFT)
- Wysocki et al 2019 (RIFT-GPU)
If you are working with this development repository, please try to add your name/project to the wiki, so we can collaborate most effectively. When preparing your work, please cite

- the relevant rapid_pe/RIFT papers
  - Pankow et al 2015
  - Lange et al 2018 (RIFT)
- If you are using the surrogate-basis approach, please cite the O'Shaughnessy, Blackman, Field 2017 paper
- If you are using a GPU-optimized version, please acknowledge the Wysocki, O'Shaughnessy, Fong, Lange paper
- If you are using a non-lalsuite waveform interface, please acknowledge
  - gwsurrogate interface: (a) the O'Shaughnessy, Blackman, Field 2017 paper; (b) F. Shaik et al (in prep)
  - TEOBResumS interface: the Lange et al 2018 (RIFT) paper
  - NR interface (parts inside ILE): the Lange et al 2017 PRD 96, 404 NR comparison methods paper
  - NRSur7dq2 interface: the Lange et al 2018 (RIFT) paper, and ...
- If you are using an updated Monte Carlo integration package, please acknowledge the authors; papers will be prepared soon. (A minimal sketch of the GMM-integration idea appears after this list.)
  - GMM integrator: Elizabeth Champion; see the repo, specifically MonteCarloEnsemble and mcsamplerEnsemble
  - GPU MC integrator: Wysocki, O'Shaughnessy
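Since the GMM integrator is only named above, here is a minimal, self-contained sketch of the general technique it refers to: importance sampling in which a Gaussian-mixture proposal is iteratively refit to the weighted samples. Everything here (the function name, integration bounds, and the use of scikit-learn) is an illustrative assumption; the actual implementation lives in the MonteCarloEnsemble and mcsamplerEnsemble modules and has a different API.

```python
# Illustrative sketch only: adaptive Monte Carlo integration with a
# Gaussian-mixture-model (GMM) proposal. Hypothetical names and bounds;
# this is not the repository's mcsamplerEnsemble API.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_adaptive_integrate(f, dim, n_iter=5, n_samples=5000, n_components=4, seed=0):
    """Estimate the integral of f over the (assumed) box [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    # Start from a flat proposal on the box.
    samples = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    log_q = np.full(n_samples, -dim * np.log(2.0))  # log density of that flat proposal
    estimate = np.nan
    for _ in range(n_iter):
        vals = f(samples)                      # integrand at the current samples
        weights = vals * np.exp(-log_q)        # importance weights f/q
        estimate = np.mean(weights)            # current integral estimate
        # Refit the GMM proposal to samples resampled by weight, so that it
        # concentrates where the integrand has most of its mass.
        idx = rng.choice(n_samples, size=n_samples, p=weights / weights.sum())
        gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(samples[idx])
        samples, _ = gmm.sample(n_samples)
        log_q = gmm.score_samples(samples)
    return estimate

# Example: a narrow Gaussian bump whose integral is close to 2*pi*0.01 ~ 0.063
bump = lambda x: np.exp(-0.5 * np.sum((x / 0.1) ** 2, axis=1))
print(gmm_adaptive_integrate(bump, dim=2))
```

For a sharply peaked integrand this converges in a few iterations because the proposal concentrates near the peak, which is the same motivation behind the GMM-based sampler named above.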
Several aspects of this code are under very active development. We encourage close collaboration with the lead developers (O'Shaughnessy and Lange) to produce the best possible results, particularly given the comparatively rapid changes to the interface and pipeline, both past and planned, as the user and developer base expands.
We expect to make the final developed code widely available and free to use, in a release-based distribution model, but we're not there yet. To simplify discussions about the author list and etiquette, we have adopted the following simple model, to be revised 4 times per year (1/15, 4/15, 7/15, 10/15):
- Free to use: Any code commit older than 3 years from the date an analysis commences is free to use for any scientific work.
- Opt-in: Any more recent use should offer (opt-in) authorship to the lead developers, as well as to developers who contributed significantly to features used in the version of the code adopted in an analysis. Loosely speaking, the newer the features you use, the more proactive you should be in contacting relevant developers, to ensure all authors are suitably engaged with the final product. This policy refers only to commits in this repository, and not to resources and code maintained elsewhere or by other developers (e.g., NR Surrogates), who presumably maintain their own policy.
The following authors should be contacted:
- O'Shaughnessy and Lange: Iterative pipeline, fitting and posterior generation code, external interfaces (EOB, surrogates)
- Field, O'Shaughnessy, Blackman: Surrogate basis method
- Wysocki, O'Shaughnessy, Fong, Lange: GPU optimizations
- ...
Chris Pankow has also been maintaining a port of the original rapid_pe as part of lalsuite. While this code is unreviewed and has many API and workflow differences, the underlying likelihood evaluation procedure has been the same (until the recent GPU rewrite). We hope to eventually merge the codebases, likely by modernizing the version in lalsuite and/or by ports of rapid_pe techniques to next-generation PE codes.
Short term: the version scheme is roughly major.minor.feature_upgrade.internal_rc_candidates. The 4th number is upgraded every few major bugfixes or moves; the 3rd number is upgraded when we add a feature; and we will eventually reach version 0.1 for a production O3 analysis (O3b).
Medium-term: A major API change or two that we are considering, for how users specify workflows, will mark version 0.2.
Long-term: Version 1 will reduce the dependency on hardcoded parameter names and allow more flexibility in how inference is done. It may live in a different repository.