Datarate studies for the TPC of Darkside-20k


This TWiki page documents the methods used to obtain the expected datarates of the TPC for DarkSide-20k. The datarate is estimated by combining the background predictions with simulations performed in g4ds. The outputs of the simulation are converted to the expected data per event by making some assumptions about the digitiser outputs and the FEP compression performance. Combination with the background rate estimates is done using a ToyMC.

A crude estimate of the datarate can also be obtained by multiplying the activity of each isotope by the probability of that decay. I have created a Google spreadsheet showing this calculation for each of the isotopes here:
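This crude estimate can be sketched in a few lines of Python. The isotope names follow the list below, but all numerical values here are illustrative placeholders, not DS-20k numbers:

```python
# Hypothetical sketch of the crude datarate estimate: multiply each
# isotope's activity (decays/s) by the expected data per decay (bits)
# and sum over isotopes. All numbers are placeholders for illustration.
activities_bq = {"Ar39": 1.0, "U238": 0.5}             # decays per second
data_per_decay_bits = {"Ar39": 5.0e4, "U238": 8.0e4}   # bits per decay

total_rate_bps = sum(activities_bq[iso] * data_per_decay_bits[iso]
                     for iso in activities_bq)
print(total_rate_bps / 8.0 / 1e3, "kB/s")  # 11.25 kB/s for these inputs
```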

A schematic of the entire analysis chain is shown below.


Selected backgrounds

The following backgrounds were chosen because they produce the highest rates in the detector:

  • Ar39 in the TPC
  • U238 in the SiPMs/Reflectors
  • Th232 in the SiPMs/Reflectors
  • K40 in the SiPMs/Reflectors
  • Co60 in the Steel Structure

All the simulations were done using g4ds; both the S1 and the S2 signals were included. A total of 100,000 decays was simulated for each background. In the case of the U and Th chains, the decays are spread across the entire chain, so each isotope has approximately O(10,000) decays per simulation.

Background Rate

The background rates are shown in the table below. The activities quoted for the SiPMs combine the rates of the silicon, the PDM, and the optical module. In the case of the uranium chain, "upper" refers to decays above Ra226, "middle" refers to decays from Ra226 to Po214, and "low" refers to the decay of Po210 and the isotopes below it in the chain.


Note on the SiPM simulations

The SiPM simulations in g4ds assume that all the activity takes place in the silicon, because the g4ds geometry contains neither the optical module nor the PDM. As the silicon is closer to the LAr than the PDMs or the optical module, the background rates presented here should be considered conservative.

Note on the Uranium chain

The uranium chain uses a slightly different version of g4ds from the one used to simulate the other backgrounds. The default version of g4ds is built against geant-4.10.00p04, which comes packaged with the decay data libraries PhotonEvaporation3.0 and RadioactiveDecay4.0. These libraries contain a bug when simulating decays of Pa234. In reality the decay of Pa234m should be a direct beta decay to the ground state of U234. In the Geant4 simulations, however, the beta decay often populates an excited state of U234; furthermore, the probability of Pa234m decaying to Pa234 before its beta emission is greatly overestimated.

This is fixed by upgrading the data libraries to PhotonEvaporation3.1 and RadioactiveDecay4.2, after which the decays appear to be simulated correctly.

Note that only the U chain uses these updated files; the other decays use the original libraries.

Calculating the data per event

To calculate the data per event, the PE times from the g4ds simulations were run through code that assembles them into waveform-like objects. The code first groups all the PEs by channel, then iterates through each channel, combining the PE times according to the rules described below.
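The first step, grouping the PEs by channel and time-ordering them, can be sketched as follows. The `(channel, time)` input format is an assumption for illustration, not the actual g4ds output format:

```python
from collections import defaultdict

# Sketch of the grouping step: collect simulated PE hits per readout
# channel and sort each channel's hit times, since the windowing step
# below assumes time-ordered input.
def group_pe_by_channel(pe_hits):
    """pe_hits: iterable of (channel, time_in_samples) tuples."""
    by_channel = defaultdict(list)
    for channel, t in pe_hits:
        by_channel[channel].append(t)
    for times in by_channel.values():
        times.sort()
    return dict(by_channel)

hits = [(0, 120), (1, 95), (0, 40), (0, 121)]
print(group_pe_by_channel(hits))  # {0: [40, 120, 121], 1: [95]}
```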

Nominal algorithm

The nominal algorithm simply combines PEs closer than a certain distance into a single waveform segment. The code takes a PE time and, from the input parameters, calculates the expected end of the waveform segment. If another PE on the same channel arrives before the end of the window, the window is extended; the extension can either be a fixed size (e.g. 400 samples) or a variable size chosen so that this second PE is more than a certain distance from the segment end. If the next PE falls within the extended window, the window is extended again; the window only closes once the next PE occurs after the extended window's end. PEs are grouped in this way until all of them have been assigned to a segment. Note that if PEs occur at exactly the same time the window is not extended; this is the reason the exponential decay method described in the next section was implemented.
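A minimal sketch of this windowing rule is shown below. It implements one plausible variant (the segment end is pushed to a fixed distance beyond each new PE); the parameter names and the 400-sample default are assumptions taken from the description above, not the actual code:

```python
# Sketch of the nominal windowing algorithm: open a segment at the
# first PE, extend the segment end whenever another PE arrives before
# the current end, and close the segment when a PE falls outside it.
def build_segments(times, window=400, extension=400):
    """times: sorted PE times (in samples) on one channel.
    Returns (start, end) sample ranges of the waveform segments."""
    segments = []
    it = iter(times)
    start = next(it, None)
    if start is None:
        return segments
    end = start + window
    for t in it:
        if t < end:                       # PE inside the window: extend
            end = max(end, t + extension)  # duplicate times do not extend
        else:                              # PE after the window: close it
            segments.append((start, end))
            start, end = t, t + window
    segments.append((start, end))
    return segments

print(build_segments([0, 100, 900]))  # [(0, 500), (900, 1300)]
```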

Exponential Decay

Converting WF sizes to data

Once we have the waveform lengths we can easily convert them to their size in bits using the equation below, where D is the size of the waveform in data, h is the size of the header for the waveform, b is the number of bits per sample from the digitisers, and L is the length of the waveform in samples:

D = h + b × L

For the studies below the following assumptions were made. The size of the header has been chosen arbitrarily, but is expected to be of roughly this order:

  • h = 64 bits
  • b = 16 bits/sample
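Under these assumptions, the conversion from segment lengths to a per-event data size is a one-liner. The function name is illustrative; summing over segments is the natural way to get the total event size:

```python
# Sketch of converting waveform segment lengths to a data size in bits,
# applying D = h + b * L per segment with the assumed header and
# digitiser sample sizes quoted above.
H_BITS = 64             # header size per waveform segment (assumed)
B_BITS_PER_SAMPLE = 16  # digitiser bits per sample

def event_size_bits(segment_lengths):
    """segment_lengths: waveform segment lengths L in samples."""
    return sum(H_BITS + B_BITS_PER_SAMPLE * L for L in segment_lengths)

print(event_size_bits([500, 400]))  # 8064 + 6464 = 14528 bits
```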

-- MarkIanStringer - 2021-03-05

Topic attachments
  • DecayRates.png (r1, 34.3 K, 2021-03-05, MarkIanStringer)
  • Screenshot_2021-03-05_14-59-27.png (r1, 26.0 K, 2021-03-05, MarkIanStringer)