
Changed dynamic scale and proc formatting on Suu to chi chi fully hadronic #3425

Open
wants to merge 4 commits into master

Conversation

emcannaert
Contributor

Changed dynamical_scale_choice to 2 (from -1) in the run card, as the default -1 value was causing an error during gridpack creation. The formatting of the process card was also changed to make it more readable and to prevent errors.
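For reference, the corresponding run_card line would look roughly like this (a sketch assuming the standard MG5_aMC run_card syntax; the inline comment is illustrative):

 2 = dynamical_scale_choice   ! changed from the default -1, which broke gridpack creation here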

@menglu21
Collaborator

What's the error with the default setup, and have you obtained the gridpack successfully using "2 = dynamical_scale_choice"?

@Saptaparna
Contributor

@emcannaert
Contributor Author

The gridpack generation error with the default scale choice is detailed in this Launchpad question: https://answers.launchpad.net/mg5amcnlo/+question/706505. The recommendation from Olivier was to change the value, and doing so fixed the problem and allowed me to create working gridpacks.

@emcannaert
Contributor Author

emcannaert commented May 17, 2023

Another issue I have seen with the MiniAODv2 events produced from these gridpacks is that the counted chi -> {W+ b, h t, Z t} branching fractions are not equal, as I had tried to set them, but instead come out closer to 35%, 50%, 15%. I tried to set the branching fractions in the customize card with

set param_card decay 9936662 1.595357e+01
BR NDA ID1 ID2
0.33333333E+00 2 5 24 # BR(chi -> b W+)
0.33333333E+00 2 6 23 # BR(chi -> t Z)
0.33333333E+00 2 6 25 # BR(chi -> t H)
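For reference only, the same decay table written directly in SLHA format in the param_card would look like the sketch below (the PDG ID and width are just the values quoted above); this is an illustration of the DECAY block structure, not a statement about how the customize card parses these lines:

DECAY  9936662  1.595357e+01   # chi total width
#   BR             NDA  ID1  ID2
    0.33333333E+00   2    5   24   # BR(chi -> b W+)
    0.33333333E+00   2    6   23   # BR(chi -> t Z)
    0.33333333E+00   2    6   25   # BR(chi -> t H)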

@sihyunjeon
Collaborator

Hi, I would actually like to ask what the physics motivation is for adding H > ZZ > qqqq.

  • H > bb is about 60%
  • H > WW > qqqq is 20% x 66% x 66% ≈ 10%
  • H > ZZ > qqqq is 3% x 70% x 70% ≈ 1.5%

Is it really worth adding ZZ?

@sihyunjeon
Collaborator

sihyunjeon commented May 18, 2023

Also, there are several things I think you are missing.

  • add process p p > suu > H t Z t, (t > W+ b, W+ > j5 j5), Z > j5 j5, (H > Z Z, Z > j5 j5)
  • add process p p > suu > H t H t, (t > W+ b, W+ > j5 j5), (H > Z Z, Z > j5 j5), (H > W+ W-, W+ > j5 j5, W- > j5 j5)

I see these two lines in your proc card. The first gives 2 tops and 6 quarks, the second gives 2 tops and 8 quarks. But if you let one of the Z bosons in the second line decay as Z > nu nu, it shares the same visible final state as the first line, 2 tops and 6 quarks, plus a small MET contribution; with this many jets it won't be easy to tell whether the MET comes from neutrinos or not (see the counting sketch below).

So if I were you I would really revisit what the final physics target is, think concretely about the end goal, and try to include only the feasible processes (H > ZZ > 4q?) instead of making this gridpack so computationally heavy.
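A rough counting of the visible final states (a sketch only, to make the overlap above concrete):

  H(> Z Z > 4q)  t  Z(> 2q)          t   ->  2 tops + 6 quarks
  H(> Z Z > 4q)  t  H(> W+ W- > 4q)  t   ->  2 tops + 8 quarks
  same, but one Z > nu nu                ->  2 tops + 6 quarks + MET  (same visible final state as the first line)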

@emcannaert
Contributor Author

The inclusion of the H > ZZ > 4q process was simply for the sake of having all the possible fully hadronic processes. If it is too computationally taxing, it can certainly be removed, given that it contributes only a small amount to the total hadronic yield. I was already considering removing it given how large the subprocess folder becomes in MG.

Regarding the Z > nu nu processes, these events don't work well for the analytical approach that we are using. In our analysis we use the event geometry to group AK8 jets into collections (called "superjets") that represent each of the VLQs, and then use a NN (which is what the majority of these MC events are going toward training) to categorize each of these collections of AK8 jets. If we were to include a p p > suu > H t Z t process where Z > nu nu, then at best the superjets we reconstruct would be the H t (giving us the correct VLQ mass) and the lone t, which would not give us the correct VLQ mass but just the mass of the top.

@sihyunjeon
Collaborator

"I was already considering removing it given how large the subprocess folder becomes in MG."

If this was already too large, I think that's really worrisome.

add process p p > suu > W+ b H t, W+ > j5 j5, (t > W+ b, W+ > j5 j5), (H > W+ W-, W+ > j5 j5, W- > j5 j5)

Correct me if I am wrong, but such lines won't work: if I look at the model file, mH = 125 GeV, i.e. the SM Higgs, and I don't see you changing the Higgs mass. So mH < mW + mW, and you cannot decay it sequentially like this. Have you checked the log? I think you are only getting contributions from H > b b, since that one has no issues, but for H > W+ W- MadGraph should be complaining in the logs. You need to do H > j j j j instead (a sketch of such a line follows below).

Be mindful that H > W+ j j, W+ > j j would seem to work (MadGraph won't complain), but in fact it doesn't; it leads to very wrong branching fraction calculations.
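For concreteness, the kind of replacement being suggested might look like the hypothetical line below; it simply transcribes the "H > j j j j" suggestion into the style of the line quoted above (whether to use j or the proc card's j5 multiparticle is up to the author), and it has not been tested here:

add process p p > suu > W+ b H t, W+ > j5 j5, (t > W+ b, W+ > j5 j5), H > j j j j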

@menglu21
Collaborator

menglu21 commented Jun 8, 2023

"Another issue I have seen with the MiniAODv2 events produced from these gridpacks is that the counted chi -> {W+ b, h t, Z t} branching fractions are not equal, as I had tried to set them, but instead come out closer to 35%, 50%, 15%. I tried to set the branching fractions in the customize card with

set param_card decay 9936662 1.595357e+01
BR NDA ID1 ID2
0.33333333E+00 2 5 24 # BR(chi -> b W+)
0.33333333E+00 2 6 23 # BR(chi -> t Z)
0.33333333E+00 2 6 25 # BR(chi -> t H)"

It may be caused by some default cuts in the run card. Maybe you can try setting all the cuts to none and checking the number of events for each channel.
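As a sketch of that suggestion (parameter names assumed from the standard MG5_aMC LO run_card; which cuts actually matter here should be checked against the card in the PR), the jet-level cuts could be relaxed like this:

 False = cut_decays   ! whether cuts are applied to decay products; worth checking
 0.0   = ptj          ! minimum jet pT
 -1.0  = etaj         ! maximum jet rapidity (a negative value disables the cut)
 0.0   = drjj         ! minimum delta-R between jets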

@lviliani
Contributor

lviliani commented Nov 5, 2024

We are cleaning the old PRs.
This one seems obsolete, so if we don't receive any objections we will close it later today.
Please react if this PR is instead still needed.

@emcannaert
Contributor Author

Hi, I think this issue has been resolved, and therefore this can be closed. Thanks.
