Realistic BTOF digitization #1635
Conversation
Update BTOFHitDigi.cc
Update BTOFHitDigi.h
Update BTOFHitDigi.cc
… distance relative to the center of the center cell. Now it's changed so spread is calculated relative to the true hit location.
…ighbors when cell ID from BitFieldDecoder for cellID is negative.
…ith dead spaces implemented.
…ion-clusterization
I have asked other experts and it appears that I was mistaken: I thought the bunch crossings were synced to the 40 MHz clock, but you are right, they are synced to the 100 MHz clock. Naively, what you mentioned, "This will still distribute 4/5 of the 10ns bunch crossings through different phases of the 40MHz clock," should be a natural consequence. However, that doesn't make sense to me. Take an example: say we begin at t = 0, when the 40 MHz clock begins its first 12.5 ns half cycle. The next collision occurs at t = 10 ns, but at that point the first half cycle of the 40 MHz clock is still not done; there are still 2.5 ns until it ends. Does that mean we will be saving only the first 2.5 ns of the event? I asked multiple people at BNL; some say yes, some say such events are just discarded. Our group is not really sure what will happen in that case. From my limited understanding of the EPIC clock, it does seem like it will just save the remaining 2.5 ns to disk, but such events are useless, and it doesn't make intuitive sense to do things that way. Can you confirm whether my understanding is correct? In any case, I think such a detailed simulation of the clock and DAQ behavior can be added in the future. These timing effects affect much more than BTOF. I believe this should not be a show stopper for this pull request, but we should keep this shortcoming in mind and circle back to it in the future.
I confirmed with other experts that it should be 25 ns. You are correct: the TDC bin size is 25 ns/1024 = 24.4 ps, which also means the resolution can't be better than 24.4 ps no matter what. However, these are the settings our prototype uses (@eicsouvik please correct me if I am wrong), and they are subject to change. I have updated the code so the bin size is indeed 25 ns/1024. The TDC resolution in the original pull request was indeed too optimistic. It seems our goal of achieving a timing resolution of 30 ps is quite hard, but like I said, it's not set in stone yet; we may still change those numbers in the future. As with the last paragraph, I think having the framework in place is important and this shouldn't be a show stopper.
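For concreteness, the binning being discussed (25 ns / 1024 ≈ 24.4 ps per TDC bin) can be sketched with a toy helper. The name and clamping behavior below are illustrative assumptions, not the actual BTOFHitDigi code:

```cpp
#include <algorithm>
#include <cmath>

// One 40 MHz cycle quantized by a 10-bit TDC: bin width is
// 25 ns / 1024 ~= 24.4 ps, which bounds the achievable resolution.
constexpr double kClockPeriodNs = 25.0;
constexpr int    kTdcBins       = 1024;

// Hypothetical helper: map a time within the 25 ns cycle to a TDC code,
// clamping out-of-range times to the first/last bin.
inline int toTdcCode(double time_ns) {
  const double binWidth = kClockPeriodNs / kTdcBins;
  const int code = static_cast<int>(time_ns / binWidth);
  return std::min(std::max(code, 0), kTdcBins - 1);
}
```

A time of 12.5 ns (half the cycle) lands in bin 512, and any time past 25 ns saturates at bin 1023, matching the point that nothing finer than 24.4 ps can be resolved.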
I hesitate to implement any clock-related features until I fully understand how EPIC is expected to behave. As I mentioned in my first paragraph, our group does not have a good understanding of the EIC clock. If you know the details, I think we can arrange a Zoom/Skype meeting. I can implement those effects once we are certain of the expected behavior.
Can you elaborate more on the factor of 1024, please? Why should the dead time be 140 ps * 1024? Sorry for my ignorant questions; I am not familiar with the detailed circuitry design.
The Landau minimum is needed in the simulation to scale the pulse height, but I can just use a constant. After all, the height of the Landau template function is something that can be calculated numerically. Find the minimum over -inf < t < inf in init, and just use that in process: is that what you are suggesting? That can be done.
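The "scan once in init, reuse in process" idea might look like the sketch below. A Moyal approximation stands in for the actual Landau template here (an assumption; the real code presumably uses a proper Landau, e.g. via ROOT), and all names are illustrative:

```cpp
#include <algorithm>
#include <cmath>

// Moyal approximation to a Landau-shaped pulse template (a stand-in,
// not the actual _Landau used by BTOFHitDigi).
inline double landauTemplate(double t, double mpv, double sigma) {
  const double lambda = (t - mpv) / sigma;
  return std::exp(-0.5 * (lambda + std::exp(-lambda))) /
         std::sqrt(2.0 * 3.141592653589793);
}

// Numerical scan for the template extremum, done once at init time;
// process() would then reuse the cached height instead of re-scanning
// for every hit.
inline double findPeakHeight(double mpv, double sigma) {
  double best = 0.0;
  for (double t = mpv - 10.0 * sigma; t <= mpv + 10.0 * sigma; t += 0.01 * sigma) {
    best = std::max(best, landauTemplate(t, mpv, sigma));
  }
  return best;
}
```

For a negative-going pulse (negative gain, as in the code quoted later in this thread) the same scan yields the "Landau minimum"; only the sign differs.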
Got it. I will try my best.
Thank you for entertaining my curiosity on this. As you suggest, we can push most of the effects I'm trying to understand into the future and get the core included much sooner.
This is why I originally asked about the hits being required to come in the first half of the clock cycle: I can't imagine the ASIC would be designed with this limitation; it must be able to accept single hits at any time. If another hit comes within the 25 ns period from a subsequent bunch crossing, however, I can't see how the ASIC would be able to distinguish the two, so the second will be discarded but might still contribute to the recorded ADC value; this should be very rare. What I am hoping to understand, though it can almost certainly be put aside to get the initial PR included, is what happens after the 25 ns in which the hit is recorded, i.e. the dead time of the pixel until another hit can be measured. Combining contributions from different hits can perhaps wait until later, as the charge sharing and timing resolutions of individual hits are more important to get in first.
I think this is important to this PR: the simulation and reconstruction should not be restricted to absolute time windows. I believe at the moment the way tMin and tMax are used means that all simulated hits have to occur in the 25 ns between these values. If we simulate an event starting at -100 ns, the hits will all be outside of this window. For inclusion of backgrounds and a realistic time-frame readout for the streaming DAQ, we need to simulate over a wider range. I will try to add some suggestions directly into the code for this, but could arrange a call next week if still useful.
On slide 31 of the presentation, the way I interpret it, the hit arrival time is measured by a count n, where n is up to 1024, in periods of the 140 ps and 160 ps clocks until the value of n for both counters matches. This means it can take up to 1024 * 140 ps before the time of arrival can be determined. If only a single pair of n values can be held in the pixel at one time, this becomes the minimum dead time (or the smallest possible dead time, if the counters can be reset once a matching n is reached). Or, if they are not counters but separate delay circuitry, maybe multiple signals can propagate through them at once. I am also not familiar with the circuitry design, so I am only making guesses based on the presentation.
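As a toy illustration of the scheme guessed at here (purely an assumption read off the slides, not a verified EICROC design): a vernier pair of oscillators with 160 ps and 140 ps periods measures an interval in 20 ps steps, and saturating at n = 1024 is where a 1024 * 140 ps ≈ 143 ns worst-case figure would come from:

```cpp
// Hypothetical vernier interpolator: the start edge launches a slow
// oscillator (160 ps period), the stop edge a fast one (140 ps). The fast
// oscillator gains 20 ps per cycle, so their edges first coincide at
// n ~= interval / 20 ps, and the interval is read out as n * 20 ps.
inline int vernierCount(double interval_ps,
                        double t_slow_ps = 160.0, double t_fast_ps = 140.0,
                        int max_n = 1024) {
  for (int n = 1; n <= max_n; ++n) {
    // slow oscillator's nth edge: n * t_slow; fast oscillator's nth edge:
    // interval + n * t_fast. Coincidence when the fast one catches up.
    if (interval_ps + n * t_fast_ps <= n * t_slow_ps) return n;
  }
  return max_n;  // counter saturates after 1024 cycles
}
```

Note this toy gives a 20 ps step rather than the 24.4 ps TDC bin discussed above, one more sign that the real circuit differs from this guess.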
This is more related to dealing with multiple hits again, but still relevant for single hits. If a Landau pulse crosses the minimum threshold just before the end of the 40 MHz clock cycle, both the start and stop signals will come before the Landau reaches its minimum. This means there needs to be some other stop for measuring the ADC value of the hit; this might just be when the pulse first returns below the smallest threshold after the initial start. Otherwise, if a second hit with a larger ADC value comes in during my supposed 143 ns processing time, that value could be used instead of the initial hit's ADC; again, I doubt that would be the case.
auto& ADCs = adc_sum[cellID];
if (ADCs.size() == 0) ADCs.resize(nBins, 0);

mpv_analog = time + m_cfg.risetime;
If time is smaller than tMin or greater than tMax, the signal will not be captured. This becomes a problem when the origin of the simulated collision is offset from t=0. A time series for each hit would probably be best produced individually, over a range from tMin = time to tMax = time_signal_returns_below_threshold. A decision on whether to sum them could then be taken in a separate loop, or even in an intermediary, optional extra factory before digitization. In order to properly line up the signals so they can be summed between hits this way, some of the digitization of the time needs to leak into the PulseGeneration, so tMin would actually need to be rounded down to the 25 ps.
I think I understand at least some of your concern. If the hit time lies outside of tMin and tMax, the landau_min will not reflect the peak height. I can replace landau_min with double landau_min = this->_Landau(mpv_analog, mpv_analog, m_cfg.sigma_analog);. However, I am not sure I understand the last part about tMin rounding down to 25 ps. Since there is no clock in the current implementation, TOF time is perfectly aligned with the collision time, i.e. t=0 is when the collision occurs and the TOF pulse begins. (I realized that tMin = 0.1 in TOFHitDigiConfig, but that is a mistake; it should have been 0.) Do you want me to change the code so that tMin is a variable instead, in preparation for a future clock class where the beginning time is taken as an input? I can do that, but if I misunderstood your comments, please let me know.
I don't think we should set tMax = time_signal_returns_below_threshold; we should just save the entire pulse until the stop signal (in my case, that's always 25 ns). If we want to discard pulses below threshold, we can do that downstream in other classes.
The problem is that the collision time is not always at t=0; within a bunch crossing it can vary by around +/-100 ps from where the centres of the bunches cross z=0. On top of that, the simulation needs to deal with collisions from offset bunches, so the time associated with a hit from the simulation should be able to take essentially any value and be treated equally.
Let's see if I understand correctly. hit.getTime() returns Geant time, which always starts at zero when the collision occurs. But like you said, the bunch crossing time can vary by around +/-100 ps. To account for this, instead of offsetting the Geant time, you want me to offset the TOF pulse timing windows, is that correct?
I think similar arguments extend to offset bunches. Say a collision occurs 10 ns after the beginning of a 25 ns cycle, i.e. the detector only saves the final 15 ns to disk; wouldn't it make more sense to add 10 ns to hit.getTime() rather than to shift the pulse generation windows by 10 ns?
What is incorrect here is the assumption that the Geant/simulation time starts at 0 at the start of a simulation. The Monte Carlo event samples used in the simulation already have times attributed to them, which is meant to replicate the distribution of collisions throughout time. All of the times recorded in the hits already include this offset, so your algorithm has to be able to account for an arbitrary time coming from BTOF hits.
We've established that t=0 corresponds to the beginning of a new cycle of the 40 MHz clock, which may not correspond to the time the collision happens. The data-taking duration is the entire cycle of the 40 MHz clock, i.e. 25 ns. Therefore, an event contains everything that happens from t = 0 (again, which may not be when the collision occurs) to t = 25 ns.
My understanding is that all cells (pixels) and all sensors are synchronized to the same 40 MHz clock and share the same time. As a result, shouldn't tMax = 25 ns always, for all cells? I don't think it should be tMax = tMin + m_cfg.tMax.
tMin may not be zero due to the mismatch between the 40 MHz EPIC clock and the 100 MHz experimental clock. However, I don't think it is the responsibility of this class to simulate the 100 MHz clock. I made it flexible so that when something like a globalClock class is developed by someone else, we can just replace tMin with tMin = _DigitizeTime(globalClock->GetEPICClockTime()); or something like that.
tMin = time is not what I intended, assuming time is double time = hit.getTime() * dd4hep::ns;. The time window for recording begins before any hit arrives. If we ignore the EPIC/experiment clock issue, the period for data taking should always be tMin = 0 to tMax = 25 ns, or at least that's how I envision it. If a hit arrives at time > 25 ns, it will just be discarded. I didn't write anything explicit like if(time > tMax) continue;, but if the peak location of the Landau pulse is at t > 25 ns, then ADCs[j] += charge * this->_Landau(-m_cfg.gain, t, mpv_analog, m_cfg.sigma_analog) * scalingFactor; is essentially adding zeros, as this->_Landau will be very close to zero when t << mpv_analog.
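The "essentially adding zeros" claim can be checked with a toy version of this accumulation loop. A Moyal approximation stands in for _Landau below (an assumption; the names, signature, and binning are illustrative, not the actual BTOFHitDigi code):

```cpp
#include <cmath>
#include <vector>

// Stand-in pulse template (Moyal approximation to a Landau shape).
inline double moyal(double t, double mpv, double sigma) {
  const double l = (t - mpv) / sigma;
  return std::exp(-0.5 * (l + std::exp(-l))) /
         std::sqrt(2.0 * 3.141592653589793);
}

// Accumulate one pulse into a fixed [0, 25 ns) window and return the
// total charge deposited in the window. A pulse whose peak lies far
// beyond 25 ns contributes essentially nothing to any bin.
inline double accumulatePulse(std::vector<double>& ADCs, double charge,
                              double mpv_ns, double sigma_ns) {
  const double tMax = 25.0;
  double total = 0.0;
  for (std::size_t j = 0; j < ADCs.size(); ++j) {
    const double t  = (j + 0.5) * tMax / ADCs.size();  // bin centre
    const double dq = charge * moyal(t, mpv_ns, sigma_ns);
    ADCs[j] += dq;
    total   += dq;
  }
  return total;
}
```

With a peak at 10 ns the window collects a sizeable charge; with a peak at 50 ns every bin receives a value indistinguishable from zero, which is the implicit discard described above.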
Essentially, covering a static range for multiple hits in a single event is exactly how I envision this class working, with tMin being the only variable due to clock mismatch. Even then, within a single event, tMin should be identical for all cells across all sensors for all hits that occur in that event. Is it not supposed to do that? My understanding is that everything that happens from the beginning of a 40 MHz cycle (t = 0) to the end of that cycle (t = 25 ns) counts as the same event. tMin and tMax define the time interval in which event recording occurs, and tMin is not the time when a hit arrives at BTOF. tMin is the beginning of the 40 MHz cycle, which should be aligned with the 100 MHz experimental clock in some way, which in turn is synced to the bunch crossings. Therefore, tMin should be independent of when a hit arrives and depend solely on the phase between the 100 MHz and 40 MHz clocks. Please let me know if there's any misunderstanding on my side.
If a hit arrives at time > 25 ns, it will just be discarded.
This is the key point - you cannot discard these hits.
The algorithm needs to work for hits arbitrarily distributed through time.
Is that how the LGAD works in EPIC? I thought the end of a 40 MHz cycle marks the end of an event and everything that happens after it is discarded. There are only 1024 TDC bins; how can it save anything beyond 25 ns?
If tMin = time, where time is the arrival time of the hit at BTOF, then what is the purpose of the 40 MHz clock? I thought the 40 MHz clock marks the start and end of an event, but I admit I don't know too much about the DAQ. I would greatly appreciate it if you could point me to documents/references detailing how the timing of hits is done here. Thanks.
I don't know the details of the EICROC/AC-LGAD myself, but the 1024 TDC bins will be the finest time binning of the electronics over the 25 ns. It might be that the EICROC only outputs this 10-bit TDC, but there will also be another layer (or layers) of electronics which will tag the 25 ns bin in which the hit occurred, so that a global time can be reconstructed.
Unfortunately I don't know of any specific links to help you.
It's quite unfortunate, because we've asked multiple people here at BNL and none of them claims to know how the clocks work. My advisor, Prithwish Tribedy, seems to be the one with the most intimate knowledge of this kind of thing, but he is on vacation until after Thanksgiving.
I implemented what I believe Prithwish thinks will happen in EPIC, i.e. recording starts at the beginning of a 40 MHz clock cycle and ends when that cycle ends. After that, there are 143 ns of dead time. My TOFPulseGenerator reflects my understanding of how the timing works.
From a physics point of view, if an event starts at t = 0 (which may not coincide with the collision time because of the mismatch between the 100 MHz experiment clock and the 40 MHz EIC clock), a hit after 25 ns isn't really possible for BTOF. The straight-line distance between the vertex and the furthest edge of BTOF is sqrt((1.5 m)^2 + (0.6 m)^2) = 1.62 m, so a hit at 25 ns implies a speed of 1.62 m / 25 ns ≈ 0.22c. If this is a proton, it would have a momentum of just ~200 MeV/c, which is very slow considering our CM beam energy of ~100 GeV.
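The momentum estimate above can be checked directly with a back-of-envelope helper (illustrative only, not part of the PR):

```cpp
#include <cmath>

// Momentum (MeV/c) of a proton covering path_m metres in t_ns nanoseconds:
// p = gamma * beta * m c, with m_p = 938.272 MeV/c^2 and c = 0.299792458 m/ns.
inline double protonMomentumMeV(double path_m, double t_ns) {
  const double c_m_per_ns = 0.299792458;
  const double beta  = (path_m / t_ns) / c_m_per_ns;   // ~0.22 for the BTOF edge at 25 ns
  const double gamma = 1.0 / std::sqrt(1.0 - beta * beta);
  return gamma * beta * 938.272;
}
```

Plugging in the 1.62 m path at 25 ns gives roughly 207 MeV/c, consistent with the ~200 MeV/c quoted above.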
Co-authored-by: Derek M Anderson <derek.murphy.anderson@protonmail.com>
…ycle. Units are also added to TOFHitDigiConfig.
for more information, see https://pre-commit.ci
Briefly, what does this PR introduce?
It's an attempt to make the LGAD sensor response for the Barrel TOF more realistic by doing the following:
Noise, edge correction and time-walk correction will be developed in the future.
What kind of change does this PR introduce?
Please check if this PR fulfills the following:
Does this PR introduce breaking changes? What changes might users need to make to their code?
Does this PR change default behavior?
It replaces the content of "TOFBarrelRecHit" with the results of this new clusterization; the original content was just the Geant point plus random smearing. I have communicated with the reconstruction group and they advised that I submit this pull request so they can study the effect of the new digitization/clusterization algorithm. They will decide whether I should save the data in a new branch or replace the original content.