General
August 13, 2014
From: Joel Butler
Subject: CT-PPS TDR Overall Comments
This is a very detailed document that provides an excellent description
of the proposed proton spectrometers, their physics case, and their
integration into the LHC and CMS. The detailed technical definition of
the detector has advanced a lot in the last 6-12 months.
I have done my best to provide comments on the text even though the time
provided to read this >100-page document was relatively short. I apologize for any mistakes I have made, but the time
pressure makes it difficult to recheck each comment.
I also have some general comments:
1) The impedance, beam loss, and vacuum stability are potential show
stoppers. Much progress has been made in evaluating the severity of
these problems and mitigating them. However, more work is needed. The
total impedance of the current RP design, I think, adds up to considerably more
than the allowed 1% of the LHC impedance. The proponents know this, and
part of the early R&D is to study this, see how serious it really is,
and then try to reduce it further if necessary. Since each of these
three problems can affect the luminosity, CMS (and the other
experiments) will certainly ask hard questions about these issues.
CT-PPS needs good answers. Is there a level of possible CMS luminosity
or efficiency reduction that CMS considers acceptable to achieve the
benefits of forward proton tagging? How are the other experiments
affected? It would seem that emittance blowup or protective aborts due
to losses are problems for all the running experiments. The same concern
would apply to AFP's impact on CMS.
The test of RP operations with high intensity beam is the first priority in 2015. Our expectations are based on measurements done by the TOTEM collaboration in 2012, which pointed to a number of improvements of the RP design, carried out since then. The threshold of 1% of the LHC impedance is indicative, in the sense that there is no experimental data validating it. The tests done in 2012 of the non-upgraded RP insertion in a high intensity beam did not show direct evidence of beam instabilities induced by the RPs. These tests need to be repeated.
The tests to be carried out in 2015 (using end-of-fill periods) will establish the impact of RP insertion on LHC operation as a function of the distance of the RP to the beam. Our physics program benefits from the smallest possible distance of approach, which in the limit could be as small as 10 sigma. Conservatively, the physics performance was evaluated for 15 and 20 sigma. One motivation for the MBP development is the possibility of eventually reaching the 10 sigma limit.
The decision on what luminosity loss due to pocket insertions CMS or the LHC can afford should result from a discussion among all parties involved, once data from the proposed insertion test program are available.
2) The performance of the timing system with pileup was not demonstrated
with a really detailed Monte Carlo that included both the main CMS
detector and the two proton spectrometers, with full pileup and CEP
events, and that gave the efficiency for choosing the right vertex and the
probability of getting the wrong one or none. The match to the CEP event
was not shown, at least not based on a full simulation.
The performance of the timing system was studied in combination with the performance of the CMS central detector. For the PPS forward detectors we used an approximate simulation that depends on the expected time resolution (i.e. times smeared according to the resolutions) and that takes into account the detector granularity. The channel occupancy reflects two different contributions: 1) protons from signal or pileup events tracked up to the detector; 2) beam background, simulated event-by-event and based on a probabilistic model extrapolated to PU=50 from TOTEM data collected at PU=9. For the central CMS detector we used a combination of full simulation (for the exclusive WW signal and for the most important backgrounds, such as exclusive tautau) and FastSim (for some backgrounds, such as inclusive WW). Pileup was simulated accordingly (FullSim and FastSim). The results are consistent with earlier results (after accounting for the proper scale factors).
We assumed a challenging situation with an average of 50 pileup events. We expect to learn ways to improve our understanding of the data and to mitigate the difficult pileup conditions. (These expectations are not included in our simulation.)
The "vertex position vs time" distributions (with the corresponding time resolutions of 10 and 30 ps) of Fig.22 (top) show the expected based on FullSim for the exclusive WW signal events. This a true association to the signal events.
The selection is based on a simple approach for now, and the quoted results reflect these selection criteria. We believe improvements in the vertex selection can certainly be made.
Also, timing is essential. Similar measurements performed in Run1 under (much) less challenging pileup conditions cannot be repeated (or improved) in Run2 without the necessary forward detector upgrade. (This also addresses one of your other concerns, but it is worth pointing it out here as well.)
3) The frequency of more than one proton on a side of the detector is
not negligible. It can be estimated from the plots shown, but it would have
been better to simply provide a histogram of the number of protons/side,
the correlation of the number of protons on each side, etc. If there is
a proton on one side and more than one on the other (or multiple
protons on each side), there are multiple solutions to the CEP
kinematics. Does that mean the event is not useful, or is there some
strategy for choosing a good solution?
We have produced these plots for the WW signal MC (attached). In the case where there are 2 or more protons in different cells of the ToF in one arm, we check all the multiple solutions and then choose one best candidate per event. Currently this is done by choosing the combination with the best match between the delta-ToF and the dilepton vertex z position.
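A minimal sketch of this candidate selection, in Python (illustrative only; function and variable names, and the exact matching metric, are our assumptions rather than the analysis code):

C_MM_PER_PS = 0.29979  # speed of light in mm/ps

def tof_vertex_z(t1_ps, t2_ps):
    # Vertex z implied by the two arrival times, for arms equidistant from the IP.
    return 0.5 * C_MM_PER_PS * (t1_ps - t2_ps)

def best_candidate(protons_arm1, protons_arm2, z_dilepton_mm):
    # protons_armN: list of (arrival time [ps], xi) per reconstructed proton.
    # Returns the pairing that minimizes |z(ToF) - z(dilepton)|.
    best, best_dz = None, float("inf")
    for t1, xi1 in protons_arm1:
        for t2, xi2 in protons_arm2:
            dz = abs(tof_vertex_z(t1, t2) - z_dilepton_mm)
            if dz < best_dz:
                best, best_dz = ((t1, xi1), (t2, xi2)), dz
    return best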
In addition to real high-energy protons, there is also an important contribution of tracks from beam backgrounds, which is extrapolated from data collected by the TOTEM Roman Pots in 2012. With the Quartic ToF detector geometry with 3x3 mm2 cells, we have taken the approach that events with one or more extra tracks in the same cell as the signal proton are rejected. The possibility of recovering this inefficiency is one of the motivations for studying alternative solid-state timing detector technologies, with finer granularity near the beam.
4) There are vague remarks about the value of a single tag but there is
no demonstration of its use and that physics can be done with the
associated higher backgrounds.
Indeed, we could not pursue these investigations further due to lack of time. The demonstration of the possibility of using events with a single tag requires detailed simulations and substantial analysis effort.
5) There is a comment that it is not known how accurately the Transport
matrix will be known and that the experiment will find out when it runs.
However, I am sure that the LHC machine group must have some kind of
estimate of this and I would like to know the impact on the overall
resolution of the kinematic quantities. (I doubt this is a big deal, but
it should be looked at quantitatively.)
The procedure reported in Ref. 1 of Chapter 2 provides the calibration of L, v and their derivatives (values sensitive to the quadrupole fields) with a precision of ~0.1%. The dispersion is sensitive to the dipole fields and needs to be calibrated by checking the stability of the dipole fields and the kickers. We aim at a similar precision for the dispersion; this will be tested once the new optics are implemented in the LHC.
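For reference, a sketch of the standard near-beam transport parametrization in which these quantities enter (notation ours; Ref. 1 may differ):

  x_{RP} = v_x\,x^{*} + L_x\,\theta_x^{*} + D\,\xi

where x^{*} and \theta_x^{*} are the horizontal vertex position and angle at the IP, v_x the magnification, L_x the effective length, D the dispersion, and \xi the fractional momentum loss. In the dispersion-dominated regime a relative uncertainty \delta D/D propagates essentially one-to-one into \delta\xi/\xi, which is why a ~0.1% calibration of the dispersion is the target.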
6) Many claims are made about the physics but some are not very
quantitative. In some cases, the CMS detector can address some of the
physics and it needs to be shown what the added value of CT-PPS really
is.
We do not want to quote a number comparing the PPS sensitivity in various physics channels to that of the CMS detector alone, because we do not have the latter optimized for exactly the same conditions.
In general, the purity of the selection of events from exclusive processes pp --> pXp depends strongly on the average number of pileup events (mu). For small mu, rapidity gap and vertex exclusivity requirements can give adequate performance, but for larger mu the proton detection and the matching between the vertex of the two protons and the central system, as provided by CT-PPS, are highly beneficial. As outlined in Section 1.2, the addition of CT-PPS allows unique measurements of low cross-section exclusive processes such as the production of gauge boson pairs (W, Z, gamma) and high-pT jets. CT-PPS also adds the possibility to determine the kinematics of the system X, e.g. its mass (M_X), with good resolution; this is additional information that can be used in the event selection as well as in the interpretation of the measurements. Regarding exclusive dijet production, in a scenario where a new resonance in the mass range 300-1000 GeV decaying only into gluons and/or light quarks exists, the addition of CT-PPS would be the only way to have a chance to even detect an indication of it.
To be a little more specific, if exclusive WW production is taken as an example, the determination of M_X, i.e. the mass distribution of the fused gamma gamma pair (M_gammagamma), significantly enhances the sensitivity to anomalous quartic gauge couplings (AQGC); see Figure 26 and the comparison between the CT-PPS sensitivity and the result of the published CMS analysis (an improvement beyond the simple factor of 20 in luminosity). Also, any BSM physics signal leading to AQGCs generally tends to produce an excess at large M_X, where CT-PPS has good acceptance.
For exclusive WW production, it should also be stressed that being able to reconstruct the M_gammagamma distribution makes the interpretation of the measurement cleaner and less controversial, since issues that enter the interpretation of all other hadron collider anomalous coupling analyses, such as unitarity violation and form factors, can largely be avoided. Furthermore, in case of a real BSM excess, having information about the kinematics of the originally fused gamma gamma system will certainly be valuable in its interpretation. So although the CMS detector alone might allow the measurement of exclusive WW production in the SM, by adding CT-PPS the sensitivity to AQGCs is increased and the interpretation of a possible BSM signal is improved.
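For completeness, the standard CEP relations linking the two measured fractional momentum losses to the kinematics of the central system (notation ours):

  M_X = \sqrt{\xi_1\,\xi_2\,s}, \qquad y_X = \frac{1}{2}\ln\frac{\xi_1}{\xi_2}

so measuring \xi_1 and \xi_2 with CT-PPS determines the mass and rapidity of X independently of its decay products, which is what enables the matching to the centrally reconstructed system.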
7) HOWEVER, this is a quite modest device, and not too expensive in
dollars (and perhaps not too expensive in people's effort) for the
payback you get. A reader who did not know about the project could go
a long way into the document without realizing that. I think the addition
of a schematic early in the document, and a characterization of the scale,
rough dimensions of the detectors, and number of channels, would put the
reader in the correct frame of reference to judge this proposal.
Excellent suggestion.
Fig. 27 will be moved to just after the first paragraph, L 117, with a more detailed caption.
First paragraph of 1.2 will be moved here and will be re-written:
The CT-PPS detector consists of a silicon tracking system to measure the position and direction of the protons, and a set of timing counters to measure their arrival time. This in turn allows the reconstruction of the mass and momentum of the centrally produced system, as well as the z coordinate of its primary vertex, irrespective of its decay mode. The detector covers an area of about 4 cm2 on each arm. In total it uses 144 pixel readout chips and has ~200 timing readout channels. The total cost of the detectors is below 1 MCHF.
I want to conclude by saying that I support giving this the "green
light". The document is definitely "reviewable" by the
collaboration as is. I think the implementation of item 7 would improve
it for readers who are not as familiar with the project as the ARC
members are. It is also something that could appear at the beginning of
an approval talk.
Chapter 1
L116
Move to the end of the sentence so it reads:
"to bend protons that have lost a small fraction of their momentum out of the beam envelope so their trajectories can be measured."
OK
L129
Here and in the future, it should be recognized that being better than LEP or the Tevatron is not enough if these results will be eclipsed by other Run 2 measurements. The question is whether this physics will be done using some other technique at the LHC, say in CMS.
We did not want to quote a number comparing the PPS sensitivity to anomalous couplings with that of the CMS detector alone, because we do not have the latter. Fig. 26 shows a striking comparison between the expected PPS sensitivity with 100 fb-1 and the CMS measurement at 7 TeV with 5 fb-1. PPS will improve the present CMS limits by two orders of magnitude. The extrapolation of the 7 TeV result to the new beam conditions and luminosity is not easy to do without a detailed analysis study.
L134
There has been a lot of work done on this and there has been a lot of progress. There are other ways of isolating relatively pure samples of quark and gluon jets. Do the Jet experts agree with this claim?
We really did not pursue the study of this question beyond this general statement, which is motivated by the known difficulty of obtaining very pure samples of gluon and quark jets. We propose to change the wording to: "The detailed characterization of gluon jets in this data sample relative to quark jets may improve the efficiency of gluon vs. quark jet separation."
L151
"when data taking starts at the beginning of LS2 (Long Shutdown 2)."
To avoid confusion we prefer to delete "expected to be concluded in LS2".
L153
State the lower bound on the mass and then observe it cannot reach the Higgs. (I know this information is available later but this is a place to make the point)
Changed to: "In this configuration, the spectrometer is not sensitive to the central exclusive production of the recently discovered Higgs resonance at 125 GeV/c2, since its mass acceptance is above 200 to 300 GeV/c2, depending on the distance of the detectors to the beam."
L182
My understanding is that the two main issues are the beam impedance caused by the RPs (and later the MBPs) and the losses caused by particles interacting in the CT-PPS components. Maybe this should be stated here.
Re-written: "Prove the ability to operate detectors close to the beam-line at high luminosity, showing that the beam impedance caused by the RPs (and later the MBPs) and the losses caused by particles interacting in the CT-PPS components do not prevent stable operation of the LHC beams and do not significantly affect the luminosity performance of the machine."
L190
in--> into
OK
L225
I am not clear what dimensions this is. We will probably find out later.
Defined later
L255
I am not sure what they are trying to say here.
We now use a simpler formulation: "The standard CMS L1 Trigger provides full efficiency for the exclusive production of EW final states."
L265
The word "Pocket" is not the best description of the mechanical arrangement of the detectors, especially the Roman Pots.
We prefer to keep the term beam pockets, used as title of chapter 3 and in other parts of the document.
L272
This would appear to be a possible show stopper. I understand that this is part of the initial R&D, but people should be aware of this. It needs to be clearly stated that emittance blowup that would impact the CMS integrated luminosity is probably not acceptable (but there must be some specification here, e.g. not greater than x%). I assume that if 1% is OK with the machine, it is OK with CMS.
See answer to general comment above.
L289
with
OK
L292
with the goal of installation of a test structure in 2016.
OK
L302
End note for this section: A generally very well organized introduction that clearly states the most important issues. A schematic of the detector might be good to have here, rather than waiting until later. Also, some characterization of this as low cost, with a ballpark number, might help an unfamiliar reader get properly calibrated to the project.
See answer to general comments above.
End note for this section: Should there be a little discussion of the problems of beam background affecting the accelerator included here, to give a complete overview of the main issues?
Will add the following sentence:
The test runs performed in 2012, complemented by FLUKA simulations, indicate that the approach of the RPs to the beam at the distance demanded by the physics requires absorbing the showers produced in the RPs, in order to protect the downstream quadrupole. The solution envisaged was the addition of new collimators (TCL6), which has already been completed.
L327
cross sections that are typically about 1 fb.
OK
L329
of order 1 pb
OK
L350
So for 100 fb-1, 150 events assuming 100% efficiency
L356
Since this was observed without seeing the protons and apparently with tractable backgrounds, this is an opportunity to explain exactly what the proton detection adds to this investigation in CMS. Better S/N? By how much?
See answer to general comments
L369
This is the point but it would really be nice to have some numbers or a plot of some kind to support and quantify this type of statement
This is illustrated in Chapter 2 where the results of a detailed simulation of this channel are presented.
L419
Since there are other ways to do this, e.g. top, does this add anything?
It adds redundancy to the b-tagging efficiency measurements.
L438
This is a certain outcome not achievable by other means. It might have been useful to expand in a few sentences on the models that can be distinguished.
The feasibility study of the exclusive dijet process with CT-PPS is still ongoing. Without estimates of the expected event samples and purities, such an expansion is premature. Given that we can reach masses well beyond those of the exclusive dijet measurements at the Tevatron, a measurement of the overall cross-section behaviour with M(pp) will be a good test of how well the different models include the energy evolution of the rapidity gap survival probability.
L455
Another example of how some of the physics can also be done without proton tagging so it will be good to quantify the added value, if possible.
The statement refers to the fact that for single exclusive Z production we have only single-tag acceptance with CT-PPS. The focus here is not so much the added value in terms of physics; rather, it is given as a possible way to check the momentum scale of the xi determination. We have not pursued these investigations further due to lack of time. The demonstration of the possibility of using events with a single tag requires detailed simulations and substantial analysis effort.
L464
It would have been nice to have some scale for cost and complexity right at the beginning since the reader may not know and it helps in judging the physics case to know what resources are needed to achieve it
See answer to general comment 7.
L516
By this , does one mean making it possible to move the RPs closer to the beam?
The RP insertion test program should determine how close to the beam the RPs can be moved, for different instantaneous luminosities.
Chapter 2
L602
The expected scale of these (Or worst case) can be estimated and its impact on the resolutions can be discussed and should be.
>> This is a well-defined procedure. It has been used in the past, and we expect it to be part of the standard preparation for data-taking and analysis.
See also answer to general question 5.
L624: New to me
>> ExHuME is a reliable generator widely used for the study of exclusive dijet processes.
L638: I take it that this sentence is the discussion of tracking resolution. It almost seems to belong to the previous sentence, but it is a somewhat new issue.
>> Rephrased and clarified.
Fig 9: Is this just a relative hit distribution or is there an absolute scale for this color coding?
>> These distributions are on an absolute scale, no color coding. We changed them, and normalized them to unity.
L734: Can we have an explicit statement of the efficiency as a function of luminosity (or pileup) due to this effect? Don't we have problems if there are two protons in either detector, because we get multiple solutions with different pairings on the two sides? How do we use the kinematics, and does the Delta t with multiple solutions cause problems?
>> We studied the effect for an average pileup of 50 events. Additional protons in the detectors come from pileup or from beam background. Both contributions scale with the luminosity. The Delta_t between the two arms is used to exclude background protons which do not belong to the selected vertex (see Fig. 22). Depending on the analysis, additional kinematic constraints can be applied. In the WW analysis no attempt was made to reduce the background by using these constraints. Under these conditions, and assuming 10 ps time resolution, the fraction of WW exclusive events incorrectly reconstructed (wrong proton combination selected) is 10% (see Table 4). In order to study the effect as a function of pileup we would need to produce additional samples, which is planned but not yet done.
The sentence starting at L734 refers to the probability of having multiple hits in the same Quartic cell, which in the baseline design is about 50% in the two cells closest to the beam. As explained, the simulation assumes that protons in cells with multiple hits cannot be measured accurately and are lost. The reduction of signal events between rows 4 and 5 of Table 4 is mainly due to this effect. One goal of the R&D program on new timing detectors and on the fine-granularity Quartic option is to eliminate this source of inefficiency.
>> See also answer to general comments above.
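As a back-of-the-envelope illustration (our own estimate, not taken from the TDR): if extra tracks populate a cell independently with mean occupancy \mu_{cell}, the probability of losing the signal proton is

  P(\ge 1 \text{ extra track}) = 1 - e^{-\mu_{cell}}

so the quoted ~50% loss in the two hottest cells corresponds to \mu_{cell} \approx 0.7, and for small \mu_{cell} the loss scales roughly with the cell area, which is the motivation for finer segmentation.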
L801: Issues with the TOTEM approach which is for lower L and larger beta*
>> The method was used and it worked at low luminosity. The method has to be adapted to the luminosity scenario of Run2. It uses any kind of reconstructed track (so it does not depend on the optics or luminosity) and an elastic scattering sample, which can be collected in a few hours.
L805: Rate of elastic events at low beta *. Claim is that this should work
>> Yes. The claim is that data are sufficient to perform the alignment.
L818: So this new method is not yet really proven.
>> This is an alternative method used at the Tevatron. So it is a method that worked reliably, and it is almost entirely data-driven. It should work fine here as well (we do not see why it should not). The precision achieved at the Tevatron was limited by the detector resolutions.
L825: So how much improvement does the proton tagging give.
>> While in Run1 the measurement was performed using the CMS central detector alone, the more challenging conditions of Run2 will likely be an impediment to the measurement. The proton tagging will provide additional rejection power against backgrounds in the harsh pileup conditions (Fig. 22 and Fig. 23).
L831: Again, the issue is what CMS will have after 100 fb-1.
>> Perhaps many new SUSY particles, perhaps nothing new. Having an extra handle on anomalous couplings in the WW final state is certainly a plus.
Fig 22: Do the background events have vertex cuts and restrictions on other tracks coming from the primary besides the leptons?
>> The distributions are shown for events where both leading protons are within the PPS detector acceptance. No other requirement (on extra tracks, etc.) is applied.
A selection based on the best vertex matching between the dilepton system and the two leading protons is applied to select only one entry per event.
Changed the text accordingly.
L865: OK, it may be possible to work with such events.
>> Yes. Figs. 22/23 indicate that timing requirements are indeed needed to improve on this measurement.
L870: This is apparently applied post hoc, but it could have been applied earlier and the impact of the proton tagging assessed after all such cuts, since the proton signals are not used (or needed) in the trigger.
>> This is certainly a possibility. There are no studies on this yet.
Fig 23: This seems to imply that all cuts are made, but one should check that the strong vertexing of both leptons and the extra-multiplicity cut are actually already applied.
>> In Fig.23, all cuts (including extra multiplicity, or Ntrack of Table 3) are applied except for the ToF information.
L878: But isn't the Delta Z determined using the two closest to a reco primary?
>> The \xi values are reconstructed by the PPS for both leading protons. They are used to reconstruct the missing mass. For AQGC events, the missing mass values can be significantly different from those of the backgrounds (Fig. 25). The DeltaZ is used to remove background events (see the comments to L734).
L896 Pretty low signal, especially since this is probably a higher efficiency than will really be achieved.
>> It is indeed a low signal yield for SM exclusive WW events. In the case of anomalous couplings we may expect a significantly larger event yield.
Chapter 3
Table 5
I am not clear on the estimate for the total impedance of the system. If the <x% values are taken as ~ values, then the total might be well over a percent. What is the limit (which I understand is probably conservative) from the machine? Is it not <1% total?
See answers to general comments
L1147
work to do!
Indeed.
L1204
A lot known but still work to do to get a solution
L1236
Pocket
L1276
So is this a statement that this solution probably works?
>> Yes, it may work but detailed calculations are still needed.
L1283
How bad is this really?
See Fig. 44. The thin stainless steel windows are bad: multiple scattering alone exceeds the target resolution of 1 urad. Using the aluminum-beryllium alloy, the multiple scattering contribution is 0.4 urad, which is acceptable.
Chapter 4
L1411
Why so many planes? Way more than in the CMS FPIX or even the tracker for the most part.
The main reason is extra redundancy at a very small cost increase. The extra cost in sensors, ROC chips, other electronics, and readout links of choosing 6 planes per station (instead of, say, 4) is not really significant. The additional planes bring the big advantage that the system will be resilient to possible failures without requiring immediate intervention. In our case, where any detector failure may translate into a significant acceptance loss, redundancy is a major concern. Due to the non-uniformity of the occupancies in the detector, some regions of the detector are extremely critical. This is not true for FPIX and BPIX, and this difference justifies a higher redundancy with respect to those detectors.
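To illustrate the redundancy argument with a hedged example (the per-plane failure probability p and the 3-plane tracking requirement are our assumptions, not TDR specifications): if each of n planes fails independently with probability p and a track needs at least 3 working planes, the probability that a station can no longer track is

  P_{dead}(n) = \sum_{j=0}^{2} \binom{n}{j} (1-p)^{j}\, p^{\,n-j}

For p = 0.1 this gives about 5% for n = 4 but only about 0.1% for n = 6, i.e. a factor ~40 gain in resilience for a modest cost increase.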
L1625
endnote: Seems to be based on reasonably well-worked-out solutions obtained from existing detectors or extrapolations.
Chapter 5
L1682
Need this to be quantified for one side and both sides
Close to the beam, the ideal segmentation is of the order of 1 mm2 cells, such that the cell occupancy is 3-4%. We do not currently have a timing detector technology capable of achieving this granularity. The impact of higher cell occupancies when using the baseline Quartic is discussed later in the chapter.
L1903
How about a good statement of the use of the sum, the jitter in event times, etc.?
Well, we prefer not to comment outside CT-PPS.
Fig 67
axis labels? Units?
Will be fixed
L2103
So this is a way of doing a low luminosity experiment in parallel. Nice.
L2117
Could SEUs be a problem?
It could. Our baseline is to have the digital electronics (with a radiation-resistant FPGA) in the RP system. The neutron fluence is about 10^12 neq/cm^2 per 100 fb-1, which may indeed cause SEUs. We will use the CMS Hard Reset mechanism to recover from those, as done in other parts of CMS. If the rate of resets is low (say, smaller than a few per hour) it should be acceptable. We may also try to shield the electronics box from neutrons.
Otherwise we would need to move the electronics to a protected region. The plan is to make a hole in the concrete floor below the beam pipe in which to house the digital electronics. If needed, such a hole could be made during the winter stop.