Merge pull request #309 from bfhealy/add-training-calls
Add example script containing tf training calls
bfhealy authored Jan 22, 2024
2 parents e0df9c9 + cf770ad commit c51ab76
Showing 2 changed files with 43 additions and 2 deletions.
5 changes: 3 additions & 2 deletions doc/training.md
@@ -1,14 +1,15 @@

## Training
## Training overview

It is common to have light curves on a "grid": a discrete set of parameter combinations for which the light curves were simulated. For example, we may know the light curves to expect for specific masses m_1 and m_2, but not for any masses in between.

We rely on sampling from a grid of modeled lightcurves through the use of Principal Component Analysis (PCA) and an interpolation scheme (either Gaussian process modeling or neural networks). The PCA serves to represent each light curve by a small number of "eigenvalues", rather than the full lightcurve. After performing PCA, you will have a discrete grid of models that relate merger parameters to a few **lightcurve eigenvalues** rather than the whole lightcurve.
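
As a rough illustration (this is not NMMA's implementation; the array names and sizes below are hypothetical), the reduction step amounts to an SVD of the grid of simulated light curves, keeping only a few leading components per light curve:

```python
# Minimal PCA/SVD sketch: compress a grid of simulated light curves into a
# handful of coefficients ("eigenvalues") per light curve. All data here are
# random stand-ins for simulated magnitudes.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_times, n_coeff = 100, 211, 10          # hypothetical grid sizes

lightcurves = rng.normal(size=(n_models, n_times))  # stand-in for the simulated grid

mean = lightcurves.mean(axis=0)
U, S, Vt = np.linalg.svd(lightcurves - mean, full_matrices=False)

# Project each light curve onto the first n_coeff principal components
coeffs = (lightcurves - mean) @ Vt[:n_coeff].T      # shape: (n_models, n_coeff)

# A light curve can be approximately reconstructed from its few coefficients
reconstructed = coeffs @ Vt[:n_coeff] + mean        # shape: (n_models, n_times)
```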

At this point, you can model this grid with either a Gaussian process or a neural network. This will allow you to form a **continuous map** from merger parameters to lightcurve eigenvalues, which can then be converted directly back into a lightcurve; during inference, this map is what lets you identify the set of parameters that most likely produced a given lightcurve.
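
Below is a minimal sketch of the interpolation step, using scikit-learn's Gaussian process regressor as a stand-in for NMMA's interpolators (the parameter grid and coefficient values are random placeholders, and fitting one regressor per coefficient is an illustrative choice, not necessarily how NMMA structures it):

```python
# Fit one Gaussian process per PCA coefficient so that arbitrary merger
# parameters map continuously to light-curve eigenvalues.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
params = rng.uniform(size=(100, 2))   # stand-in grid parameters, e.g. (m_1, m_2)
coeffs = rng.normal(size=(100, 10))   # stand-in PCA coefficients (from the previous step)

gps = []
for i in range(coeffs.shape[1]):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(params, coeffs[:, i])
    gps.append(gp)

# Predict eigenvalues for a parameter combination that was never simulated;
# the PCA basis from the previous step would then rebuild the light curve.
new_params = np.array([[0.3, 0.7]])
predicted_coeffs = np.array([gp.predict(new_params)[0] for gp in gps])
```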

For a list of example training calls on various model grids using tensorflow, see `tools/tf_training_calls.sh`.

### NMMA training
### Training details

There are helper functions within NMMA to support this. In particular, `nmma.em.training.SVDTrainingModel` is designed to take in a grid of models and return an interpolation class.

40 changes: 40 additions & 0 deletions tools/tf_training_calls.sh
@@ -0,0 +1,40 @@
# Example tensorflow training calls for different model grids

# model: LANL2022
# lightcurves: lcs_lanl_TS_wind2
create-svdmodel --model LANL2022 --svd-path svdmodels_LANL2022 --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_lanl_TS_wind2 --tensorflow-nepochs 100 --outdir output_LANL2022_tf --plot

# model: Bu2019lm
# lightcurves: lcs_bulla_2019_bns
create-svdmodel --model Bu2019lm --svd-path svdmodels_Bu2019lm --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_bulla_2019_bns --tensorflow-nepochs 100 --outdir output_Bu2019lm_tf --plot

# model: Bu2019nsbh
# lightcurves: lcs_bulla_2019_nsbh
create-svdmodel --model Bu2019nsbh --svd-path svdmodels_Bu2019nsbh --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_bulla_2019_nsbh --tensorflow-nepochs 100 --outdir output_Bu2019nsbh_tf --plot

# model: Bu2022Ye
# lightcurves: lcs_bulla_2022
create-svdmodel --model Bu2022Ye --svd-path svdmodels_Bu2022Ye --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_bulla_2022 --tensorflow-nepochs 100 --outdir output_Bu2022Ye_tf --plot

# model: Bu2023Ye
# lightcurves: lcs_bulla_2023
create-svdmodel --model Bu2023Ye --svd-path svdmodels_Bu2023Ye --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_bulla_2023 --tensorflow-nepochs 100 --outdir output_Bu2023Ye_tf --plot

# model: Ka2017 (no smooth)
# lightcurves: lcs_kasen_no_smooth
create-svdmodel --model Ka2017 --svd-path svdmodels_Ka2017_no_smooth --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_kasen_no_smooth --tensorflow-nepochs 100 --outdir output_Ka2017_no_smooth_tf --plot

# model: Ka2017 (with smooth)
# lightcurves: lcs_kasen_with_smooth
create-svdmodel --model Ka2017 --svd-path svdmodels_Ka2017_with_smooth --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_kasen_with_smooth --tensorflow-nepochs 100 --outdir output_Ka2017_with_smooth_tf --plot

# model: AnBa2022_log
# lightcurves: lcs_collapsar
create-svdmodel --model AnBa2022_log --svd-path svdmodels_AnBa2022_log --interpolation-type tensorflow --tmin 0.0 --tmax 21.0 --dt 0.1 --data-path lcs_collapsar --data-file-type hdf5 --plot --tensorflow-nepochs 100 --data-time-unit seconds --outdir output_AnBa2022_log_tf

# model: AnBa2022_linear
# lightcurves: lcs_collapsar
create-svdmodel --model AnBa2022_linear --svd-path svdmodels_AnBa2022_linear --interpolation-type tensorflow --tmin 0.0 --tmax 21.0 --dt 0.1 --data-path lcs_collapsar --data-file-type hdf5 --plot --tensorflow-nepochs 100 --data-time-unit seconds --outdir output_AnBa2022_linear_tf

# Use svdmodel-benchmark to generate additional performance plots. It takes many of the same arguments as create-svdmodel above, except for --tensorflow-nepochs and --plot.
# (Note that an error is currently raised if --ncpus is > 1 in svdmodel-benchmark, see Issue #125)
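
# Hypothetical benchmark call, mirroring the LANL2022 training call above (an
# illustrative sketch, not an official invocation; check `svdmodel-benchmark --help`
# for the exact set of supported flags):
svdmodel-benchmark --model LANL2022 --svd-path svdmodels_LANL2022 --interpolation-type tensorflow --tmin 0. --tmax 21.0 --dt 0.1 --data-path lcs_lanl_TS_wind2 --outdir output_LANL2022_tf --ncpus 1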
