#### CT-PPS TDR Review

Dear Authors, congratulations on this very nice document. It is a real pleasure to see the PPS project growing so nicely. Please find below some suggestions on the version dated 4/8/2014. I hope you will find them useful. Mirko.

Comments by Mirko Berretti (lines refer to version of 4 Aug)

Chapter 1

1.1

L136: references missing.

L141: Vector Boson Fusion (VBF) acronym missing.

L127: Reference [1], errata missing: Eur.Phys.J.C19:477-483,2001; Erratum-ibid.C20:599,2001.

1.1.2

L200-201 TBM acronym missing.

Paragraph starting at L277.

I fully agree with stating explicitly that RP operation with a high-intensity beam is challenging and that we will need to gain experience with it. But reading the text I have the feeling that the insertion of the RPs will (very likely) be a problem for the beam stability. Therefore I would try to stress more strongly some optimistic arguments about the feasibility of the project: -the 1% threshold is a safety indication and not a sharp limit (the number is not validated experimentally). -according to what is reported in Sec. 3.4, the non-upgraded RP insertion in a high-intensity beam at 8 TeV created problems because of the particle showering and of the behaviour of the ferrite (both issues seem to be resolved in the new PPS design). As written in L1106, there was no direct evidence of beam instabilities induced by the RP because of its large impedance (apart from the impedance heating problem, which should now be fixed).

1.2.3

L416: are the resolutions on pt and pz also known?

Additional information on resolution of kinematical variables is given in Chap 2

1.2.4

L448-449: experimental results about the photoproduction of vector mesons are quoted without references.

References:

Repetition of TOTEM upgrade proposal [18],[5].

corrected

Chapter 2

2.1

L587 detector tracking and timing detector-> tracking and timing detector.

OK

2.2

Please also report the simulated time-resolution.

2.3

Comparing Fig. 14 (left) with Fig. 16 (left), one can remark that the t resolution does not depend only on t but also on xi (is this the reason for the different t resolution?).

The resolution in t is poor (~20-30% for values between t=1 and t=2). It is hard to draw conclusions on how this resolution is affected by xi. Small differences are likely due to the different kinematics. This is corrected in the Aug8 version.

In Figs. 10 and 12, the range of the colors in the palette is determined by a small number of entries around x=0.03, while the bulk of the distribution is red/orange, which makes it difficult to appreciate variations. If the palette range can easily be redefined, the plots will improve.

Improved visibility now. Thanks, anyway.

2.5

I am OK with the FIXME in Fig. 21/22 (occupancy of the timing detector), or report an order of magnitude for the contribution due to the beam background. I would also write the distance in the text, as it is a fundamental parameter for the occupancy.

This is fixed in the Aug8 draft. The distance is clearly stated in the caption, and in the Figure.

L722: such events or such hits? (as in principle one doesn't need to drop the full event because of multi-hit superposition in only one channel)

In the case in which the multiple hits are from the leading proton, the entire event is lost. It is however true that multiple hits may be from other background sources. Rephrased.

2.6

L756: pileup of physics protons is instead included in the simulated sample.

2.7 Is there any proposal about the relative alignment between the tracking stations and the timing stations? If not, we can at least add a reference for this issue pointing to Section 5.2.5.

We added a sentence: "Relative alignment between the tracking and the timing stations is not discussed here. More details can be found in Section~5.2.5."

2.8.1

I would state the scenario assumed in the analysis (at least mu, the distance from the beam, and the square-cell geometry).

Text has changed. Not applicable.

L879: nsec -> ns.

Text has changed. Removed.

2.8.2

Fig 29: not easy to distinguish the curves.

We tried different versions but it does not improve much. All exclusive processes are in the narrow window at ~0, while the inclusive WW is the only one spread out.

In fig 31, the Y axis ranges of some of the pads can be reduced, in order to better distinguish the curves.

Fig31 (corresponding to Fig.25 of the Aug8 draft) shows 1-dim distributions of some of the kinematic properties of the events. Perhaps you are referring to different distributions? These seem to be rather well visible.

L964: events of events->events.

OK

Chapter 3

3.2

L1042 Please move here the NEG acronym (now introduced in L1312)

OK

3.2.1

L1052-1053: to be moved to the references.

OK

3.4.2

Are the results of the simulations also valid for (or similar to) those expected for the 25 ns run with the typical beam conditions foreseen for PPS operation?

Yes

3.4.3

L1157: . missing before Therefore

OK

3.6.3

L1371: is there any estimate of the effect of these showers on the quadrupoles? If not, I would mention in this line that studies are ongoing.

References

L1404, at the beginning of the line: ' ' instead of

OK

Chapter 4

Please justify the choice of the number of planes.

The main reason for the number of planes is extra redundancy at a very small cost increase. The extra cost of sensors, ROC chips, other electronics and readout links from choosing 6 planes per station (instead of, say, 4) is not really significant. The additional planes bring the big advantage that the system will be resilient to possible failures without requiring immediate intervention. In our case, where any detector failure may translate into a significant acceptance loss, redundancy is a major concern. Due to the non-uniformity of the occupancies in the detector, some regions of the detector are extremely critical. This is not true for FPIX and BPIX, and this difference justifies a higher redundancy w.r.t. these detectors.
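The redundancy gain from two extra planes can be made quantitative with a simple binomial estimate. A minimal sketch, assuming independent plane failures and a (hypothetical) requirement of at least 3 working planes per station for a good track:

```python
from math import comb

def p_station_ok(n_planes, p_fail, n_needed=3):
    """Probability that at least n_needed planes still work, assuming
    independent plane failures with probability p_fail each."""
    return sum(comb(n_planes, k) * (1 - p_fail) ** k * p_fail ** (n_planes - k)
               for k in range(n_needed, n_planes + 1))

# With a 5% per-plane failure probability, 6 planes per station are
# noticeably more robust than 4:
p4 = p_station_ok(4, 0.05)  # ~0.986
p6 = p_station_ok(6, 0.05)  # ~0.9999
```

The per-plane failure probability and the 3-plane requirement are illustrative assumptions; the point is only that the probability of losing a station drops by more than an order of magnitude with the two extra planes.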

4.2

L1509: reference not correctly introduced [?]

changed

Somewhere it could be useful to say whether the tracking stations will have trigger capabilities (according to L2686 they will not)

The tracking front-end will be based on the CMS pixel readout chip PSI46dig, which does not have trigger capabilities.

4.2.1

It could be useful to know the expected distance of the physical edge of the silicon to the bottom window.

As said, 3D sensors for the ATLAS IBL project have already achieved 200 um dead region, and development is being pursued to reduce it.

Chapter 5

5.1

L1708: suggested reference on the 100 ps diamonds [14]:

M. Ciobanu et al., In-beam diamond start detectors, IEEE Trans. Nucl. Sci. 58 (2011) 2073.

OK

For the solid state detectors one can mention that, because of the logarithmic rise of the energy loss with the proton momentum, a larger amount of charge will be released in the medium, giving an even better time resolution. In carbon, the dE/dx of a 7 TeV proton is ~1.4 times that of a MIP.

This fact will certainly help in reaching the challenging time resolution required by PPS.

L1808: MCP acronym missing.

L1812: If available, a reference on MCP lifetime can be useful. Are the quartz optical properties (affecting the timing measurements) expected to be stable during the 100 fb-1 of data taking?

L1844: reference to a section not correctly introduced [??]

Corrected

5.2.5

L1941 I would also mention the fact that the time calibration has to be done channel by channel, as the bars have different lengths.

L2002: the quantity (tL+tR)/2 would also be important to characterize combinations including debris background (expected to arrive later in time) and therefore to improve the background rejection in the analysis. Maybe it can be mentioned.

We prefer to leave as it is.

5.2.6

L2046 : HPS = PPS ?

?

It would be nice to write a range (even a rough estimate) of the expected detection efficiency per proton which takes into account all the effects: spacers, SiPM filling efficiency, and proton interactions within the bar.

The proton detection efficiency is of the order of 94%, taking into account the 200 um space between bars. The efficiency to detect the light signal is close to 100%, given the ~100 photo-electrons produced per proton. As written in the text, the probability of proton interaction in one detector is between 7.2% and 14.6%. However this probability doesn't translate directly into a detection inefficiency, as a fraction of these events could still provide a good time measurement. Precise estimates do not exist yet.
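The ~94% figure is consistent with a purely geometric estimate. A sketch, assuming bars of the 3 mm width quoted elsewhere for the ToF cells and the 200 um inter-bar spaces (the numbers are illustrative):

```python
def geometric_efficiency(bar_width_mm, gap_mm):
    # fraction of the detector pitch covered by active bar material
    return bar_width_mm / (bar_width_mm + gap_mm)

eff = geometric_efficiency(3.0, 0.2)  # 3 mm bars, 200 um gaps -> 0.9375
```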

5.2.7

Congratulations for the very nice results.

Fig 76: please report the X axis units, at least in the caption. Is the proton interaction probability in the bars around 10% according to fig 76?

See above.

5.3

Fig 80: I would write at least in the caption that the reported distributions are occupancies per proton and m^2 (or per read-out channel) and that y-z is actually the transverse plane (x-y according to the coordinate system used previously in the document). The left palette is partially covered by the right side of the picture.

ok

5.4.1

L2272: maybe we can report the distance of the NINO from the sensor, so the reader has an idea of the real significance of the 10^14 neq/cm^2.

Not really necessary. It should be farther away from the beam than the SiPMs, which should sustain 10^12 neq/cm^2.

5.4.4

L2300: about the max rate=5 MHz: should one cross check that the probability to have the high occupancy cell ON for two consecutive BX is negligible?

The readout system is designed to process events in consecutive bunch crossings. The only requirement is that the average rate should be less than 5 MHz.

5.5

L2313: In L249 the number quoted was 50%. L2316: Typo sqrtN

corrected

L2323: The inner area is even smaller, 0.5cm x 0.03cm. L2325: part of the sentence is missing.

corrected

L2326: The -> the

corrected

L2337: you can make a reference to fig. 21, which already shows the geometry

5.6

Congratulations for the very nice results.

L2400: formatting problem.

==> fixed

5.6.1 Why is the contribution of the time walk to the resolution not mentioned anymore, with only the jitter reported? If the jitter is the main contribution, this has to be remarked.

"The relative importance of the jitter and time-walk components to the total time resolution depends on the choice of read-out scheme. In a read-out scheme that has a single threshold, where the time-walk correction is implemented either with a Constant Fraction Discriminator or with Time-over-Threshold (see the NINO + HPTDC scheme), jitter tends to be the dominant component. On the other hand, in read-out schemes that use multiple sampling, such as those based on the DRS4 chip or SAMPIC, both components are reduced quite significantly, and each contributes to the time resolution more equally."
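The difference between the two single-threshold options can be illustrated with a toy pulse model. A minimal sketch (the linear rising edge and all numbers are illustrative assumptions, not the actual NINO or HPTDC behaviour):

```python
def leading_edge_time(t0, amplitude, rise_time, v_threshold):
    """Fixed-threshold crossing time for a linearly rising pulse:
    smaller pulses cross the threshold later, giving an
    amplitude-dependent time walk."""
    return t0 + rise_time * v_threshold / amplitude

def cfd_time(t0, rise_time, fraction):
    """Constant-fraction discriminator: fires at a fixed fraction of
    the peak, so the crossing time does not depend on the amplitude."""
    return t0 + rise_time * fraction

# Two pulses starting at t0 = 0 with different amplitudes:
t_big = leading_edge_time(0.0, 1.0, 1.0, 0.1)    # crosses at 0.1
t_small = leading_edge_time(0.0, 0.5, 1.0, 0.1)  # crosses at 0.2 (walk)
t_cfd = cfd_time(0.0, 1.0, 0.2)                  # same for any amplitude
```

Time-over-Threshold achieves the same end by measuring the pulse width at the fixed threshold and correcting the leading-edge time offline as a function of that width.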

5.6.3

L2446: how much is the time resolution affected by the pixel capacitance, using the proposed FEE solutions?

==> Another good question...

"While designing the optimal sensor, it is necessary to consider the pad capacitance: time resolution is better in thin detectors, which unfortunately have larger values of capacitance. The choice of read-out scheme will ultimately determine the allowed range of capacitance values, and therefore the maximum pad size for a given sensor thickness."

Is it possible to have an idea of the efficiency of this sensor, also taking into account the typical dead regions of the pixelization? Another interesting parameter that can be mentioned is the dead region from the bottom window of the RP.

"Following the performance of similar devices such as Silicon Photomultipliers and Avalanche Photo Diodes, we expect a lower gain for tracks impinging in between pads. This effect will be studied in upcoming testbeams. "

"Another important feature of the UFSD design is that UFSD can be built with a rather slim dead area toward the bottom of the RP, near the beam. This advantage is due to their reduced thickness: in planar pixel silicon detectors, the distance between the last pixel and the detector edge has to be of the order of the detector thickness. As UFSD are rather thin, 50-100 micron, the dead distance is quite small. "

5.7-5.8

The titles are not very telling, at least for a non-expert. I would write a few lines before these two sections explaining the problem and then introduce the two proposed solutions. As it is, it seems that the paragraphs describe two different things used in two different parts of the experiment.

Changed in 8 Aug version.

L2521: typo: systemshow

corrected

Fig 98 can maybe be rotated by 90 degrees, in order to better appreciate the details.

corrected

I would write explicitly somewhere where in the DAQ process this synchronization technique acts (something is now reported at the end of the chapter, L2571): for example, whether it is used to correct the TDC-converted signal or to set the zero of the TDC conversion.

Are these corrections continuously updated, or established at the beginning of the run?

The reference clock is used in the timing readout system (see Fig 83)

Chapter 6

Fig 99, maybe we can mention sigmaZ=10cm.

Is it really needed?

General

August 13, 2014. From: Joel Butler. Subject: CT-PPS TDR Overall Comments

This is a very detailed document that provides an excellent description of the proposed proton spectrometers, their physics case, and their integration into the LHC and CMS. The detailed technical definition of the detector has advanced a lot in the last 6-12 months.

I have done my best to provide comments on the text even though the time provided to read this >100 page document was relatively short. I apologize for any mistakes I have made but the time pressure makes it difficult to recheck each comment.

I also have some general comments:

1) The impedance, beam loss, and vacuum stability are potential show stoppers. Much progress has been made in evaluating the severity of these problems and mitigating them. However, more work is needed. The total impedance of the current RP design, I think, adds up to a lot more than the allowed 1% of the LHC impedance. The proponents know this, and part of the early R&D is to study this, see how serious it really is, and then try to further reduce it if necessary. Since each of these three problems can affect the luminosity, CMS (and the other experiments) will certainly ask hard questions about these issues. CT-PPS needs good answers. Is there a level of possible CMS luminosity or efficiency reduction that CMS considers acceptable to achieve the benefits of forward proton tagging? How are the other experiments affected? It would seem that emittance blowup or protective aborts due to losses are problems for all the running experiments. The same problem would apply to AFP's impact on CMS.

The test of RP operations with high intensity beam is the first priority in 2015. Our expectations are based on measurements done by the TOTEM collaboration in 2012, which pointed to a number of improvements of the RP design carried out since then. The threshold of 1% of the LHC impedance is indicative, in the sense that there is no experimental data validating it. The tests done in 2012 of the non-upgraded RP insertion in a high intensity beam didn't show direct evidence of beam instabilities induced by the RP. These tests need to be repeated.

The tests to be carried out in 2015 (using end-of-fill periods) will establish the impact on the LHC operation of RP insertion as a function of the distance of the RP to the beam. Our physics program benefits from the smallest distance of approach possible, which in the limit could be as small as 10 sigma. Conservatively, the physics performance was evaluated for 15 and 20 sigma. One motivation for the MBP development is the possibility to eventually reach the 10 sigma limit.

The decision on what luminosity loss due to pocket insertions CMS or LHC could afford should result from a discussion between all parties involved once data on the proposed insertion test program is available.

2) The performance of the timing system with pileup was not demonstrated with a really detailed Monte Carlo that included both the main CMS detector and the two proton spectrometers with full pileup and CEP events and had the efficiency for choosing the right vertex and the probability of getting the wrong one or none. The match to the CEP event was not shown, at least not based on a full simulation.

The performance of the timing system was studied in combination with the performance of the CMS central detector. For the PPS forward detectors we used an approximate simulation that depends on the expected time resolution (i.e. smeared based on the resolutions) and that takes into account the detector granularity. The channel occupancy reflects two different contributions: 1) protons from signal or pileup events tracked up to the detector; 2) beam background, simulated event-by-event and based on a probabilistic model extrapolated to PU=50 from TOTEM data collected at PU=9. For the central CMS detector we used a combination of full simulation (for the exclusive WW signal and for the most important backgrounds, such as exclusive tautau) and FastSim (for some backgrounds, such as inclusive WW). Pileup was simulated accordingly (FullSim and FastSim). Results are consistent with earlier ones (after accounting for proper scale factors).

We assumed a challenging situation with an average of 50 pileup events. We expect that we will learn ways to improve our understanding of the data and mitigate the challenging pileup conditions. (These hopes/expectations are not included in our simulation.) The "vertex position vs time" distributions (with the corresponding time resolutions of 10 and 30 ps) of Fig. 22 (top) show the expected performance based on FullSim for the exclusive WW signal events. This is a true association to the signal events.

The selection is based on a simple/naive approach for now, and the corresponding results are based on these selection criteria. We believe improvements in the vertex selection can certainly be made.

Also, timing is essential. Similar measurements performed in Run 1 under (much) less challenging pileup conditions cannot be repeated (or improved) in Run 2 without the necessary forward detector upgrade. (This also addresses one of your other concerns, but it is worth pointing it out here as well.)

3) The frequency of more than one proton on a side of the detector is not negligible. It can be estimated from the plots shown, but it would have been better to simply provide a histogram of the number of protons per side, the correlation of the number of protons on each side, etc. If there is one proton on one side and more than one on the other (or multiple protons on each side), there are multiple solutions to the CEP kinematics. Does that mean the event is not useful, or is there some strategy for choosing a good solution?

We have produced these plots for the WW signal MC (attached). In the case where there are 2 or more protons in different cells of the ToF in one arm, we check all the multiple solutions and then choose one best candidate per event. Currently this is done based on the combination with the best match between the delta-ToF and dilepton vertex z position.
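The matching described above can be sketched as follows. A minimal illustration, assuming the proton-pair vertex is reconstructed from the arrival-time difference as z = c(t1 - t2)/2; the function and variable names are ours, not taken from the actual analysis code:

```python
C_MM_PER_PS = 0.2998  # speed of light in mm/ps

def z_from_tof(t_arm1_ps, t_arm2_ps):
    # vertex z position from the proton arrival-time difference
    return 0.5 * C_MM_PER_PS * (t_arm1_ps - t_arm2_ps)

def best_combination(arm1_times_ps, arm2_times_ps, z_dilepton_mm):
    """Among all proton pairings, pick the one whose delta-ToF vertex
    best matches the dilepton vertex z position."""
    pairs = [(t1, t2) for t1 in arm1_times_ps for t2 in arm2_times_ps]
    return min(pairs, key=lambda p: abs(z_from_tof(*p) - z_dilepton_mm))
```

With a 10 ps resolution per proton, the reconstructed vertex has sigma_z = c * 10 ps / sqrt(2), i.e. about 2.1 mm, which sets the scale of the match window.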

In addition to real high-energy protons, there is also an important contribution of tracks from beam backgrounds, which is extrapolated from data collected by the TOTEM Roman Pots in 2012. With the QUARTIC ToF detector geometry with 3x3 mm cells, we have taken the approach that events with one or more extra tracks in the same cell as the signal proton are rejected. The possibility of recovering this inefficiency is one of the motivations for studying alternative solid-state timing detector technologies, with finer granularity near the beam.

4) There are vague remarks about the value of a single tag but there is no demonstration of its use and that physics can be done with the associated higher backgrounds.

Indeed we could not pursue these investigations further due to lack of time. The demonstration of the possibility of using events with a single tag requires detailed simulations and substantial analysis effort.

5) There is a comment that it is not known how accurately the Transport matrix will be known and that the experiment will find out when it runs. However, I am sure that the LHC machine group must have some kind of estimate of this and I would like to know the impact on the overall resolution of the kinematic quantities. (I doubt this is a big deal, but it should be looked at quantitatively.)

The procedure reported in Ref. 1 of Chapter 2 provides the calibration of L, v and their derivatives (values sensitive to the quadrupole fields) with a precision of ~0.1%. The dispersion is sensitive to the dipole fields and needs to be calibrated by checking the dipole field stability and the kickers. We aim at a similar resolution on the dispersion; it will be tested once the new optics is implemented in the LHC.

6) Many claims are made about the physics but some are not very quantitative. In some cases, the CMS detector can address some of the physics and it needs to be shown what the added value of CT-PPS really is.

We don't want to quote a number on the comparison of the PPS sensitivity to various physics channels with that of the CMS detector alone, because we don't have the latter optimized for exactly the same conditions.

In general, the purity of the selection of events from exclusive processes pp --> pXp depends strongly on the average number of pile-up events (mu): for small mu, rapidity-gap and vertex-exclusivity requirements can give adequate performance, but for larger mu the proton detection and the matching between the vertex of the two protons and the central system, as provided by CT-PPS, is highly beneficial. As outlined in Section 1.2, the addition of CT-PPS allows unique measurements of low cross-section exclusive processes like the production of gauge boson pairs (W, Z, gamma) and high-pT jets. CT-PPS also adds the possibility to determine the kinematics of the system X, e.g. its mass (M_X), with a good resolution; this is additional information that can be used in the event selection as well as for the interpretation of the measurements. Regarding exclusive dijet production, in a scenario where a new resonance in the mass range 300-1000 GeV decaying only into gluons and/or light quarks exists, the addition of CT-PPS would be the only way to have a chance to even detect an indication of it.

To be a little more specific, taking exclusive WW production as an example, the determination of M_X, i.e. the mass distribution of the fused gamma gamma pair (M_gammagamma), enhances significantly the sensitivity to anomalous quartic gauge couplings (AQGC); see Figure 26 and the comparison between the CT-PPS sensitivity and the result of the published CMS analysis (a gain beyond the simple factor of 20 in luminosity). Also, any BSM physics signal leading to AQGCs generally tends to produce an excess at large M_X, where CT-PPS has good acceptance.

For exclusive WW production, it should also be stressed that being able to reconstruct the M_gammagamma distribution makes the interpretation of the measurement cleaner and less controversial, since issues that enter the interpretation of all other hadron-collider anomalous-coupling analyses, like unitarity violation and form factors, can largely be avoided. Furthermore, in case of a real BSM signal, having information about the kinematics of the originally fused gamma gamma system will certainly be valuable in its interpretation. So although the CMS detector alone might allow the measurement of exclusive WW production in the SM, adding CT-PPS increases the sensitivity to AQGCs and improves the interpretation of a possible BSM signal.

7) HOWEVER, this is a quite modest device and not too expensive in dollars (and perhaps not too expensive in people's effort) for the payback you get. A reader who did not know about the project could go a long way into the document without realizing that. I think the addition of a schematic early in the document and a characterization of the scale, rough dimensions of the detectors and number of channels would put the reader in the correct frame of reference to judge this proposal.

Excellent suggestion.

Fig. 27 will be moved just after the first paragraph, L 117, with more detailed caption.

First paragraph of 1.2 will be moved here and will be re-written:

The CT-PPS detector consists of a silicon tracking system to measure the position and direction of the protons, and a set of timing counters to measure their arrival time. This in turn allows the reconstruction of the mass and momentum, as well as the z coordinate of the primary vertex, of the centrally produced system, irrespective of its decay mode. The detector covers an area of about 4 cm^2 on each arm. In total it uses 144 pixel readout chips and has ~200 timing readout channels. The total cost of the detectors is below 1 MCHF.

I want to conclude by saying that I support giving this the "green light". The document is definitely "reviewable" by the collaboration as is. I think the implementation of item 7 would improve it for readers who are not as familiar with the project as the ARC members are. It is also something that could appear at the beginning of an approval talk.

Chapter 1

L116

Move to the end of the sentence so it reads: "to bend protons that have lost a small fraction of their momentum out of the beam envelope so their trajectories can be measured."

OK

L129

Here and in the future, it should be recognized that being better than LEP or the Tevatron is not enough if they will be eclipsed by other RUN 2 measurements. The question is whether this physics will be done using some other technique at the LHC, say in CMS.

We didn't want to quote a number on the comparison of the PPS sensitivity to anomalous couplings with that of the CMS detector alone, because we don't have the latter. Fig 26 shows a striking comparison between the expected PPS sensitivity with 100 fb-1 and the CMS measurement at 7 TeV and 5 fb-1. PPS will improve the present CMS limits by two orders of magnitude. The extrapolation of the 7 TeV result to the new beam conditions and luminosity is not easy to do without a detailed analysis study.

L134

There has been a lot of work done on this and there has been a lot of progress. There are other ways of isolating relatively pure samples of quark and gluon jets. Do the Jet experts agree with this claim?

We really didn't pursue the study of this question beyond this general statement, which is motivated by the known difficulty in obtaining very pure samples of gluon and quark jets. We propose to change the wording to "The detailed characterization of gluon jets in this data sample relative to quark jets may improve the efficiency of gluon vs. quark jet separation."

L151

"when data taking starts at the beginning of LS2 (Long Shutdown 2)."

To avoid confusion we prefer to delete "expected to be concluded in LS2".

L153

State the lower bound on the mass and then observe it cannot reach the Higgs. (I know this information is available later but this is a place to make the point)

Changed to "In this configuration, the spectrometer is not sensitive to the central exclusive production of the recently discovered Higgs resonance at 125 GeV/c2, since its mass acceptance is above 200 to 300 GeV/c2, depending on the distance of the detectors to the beam."

L182

My understanding is that the two main issues are the beam impedance caused by the RPs (and later the MBPs) and the losses caused by particles interacting in the CT-PPS components. Maybe this should be stated here.

Re-written: Prove the ability to operate detectors close to the beam-line at high luminosity, showing that the beam impedance caused by the RPs (and later the MBPs) and the losses caused by particles interacting in the CT-PPS components do not prevent the stable operation of the LHC beams and do not affect significantly the luminosity performance of the machine.

L190

in--> into

OK

L225

I am not clear what dimensions this is. We will probably find out later.

Defined later

L255

I am not sure what they are trying to say here.

We now use a simpler formulation: "The standard CMS L1 Trigger provides full efficiency for the exclusive production of EW final states."

L265

The word "Pocket" is not the best description of the mechanical arrangement of the detectors, especially the Roman Pots.

We prefer to keep the term beam pockets, used as title of chapter 3 and in other parts of the document.

L272

This would appear to be a possible show stopper. I understand that this is part of the initial R&D, but people should be aware of it. It needs to be clearly stated that emittance blowup that would impact the CMS integrated luminosity is probably not acceptable (there must be some specification here, e.g. not greater than x%). I assume that if 1% is OK with the machine, it is OK with CMS.

See answer to general comment above.

L289

with

OK

L292

with the goal of installation of a test structure in 2016.

OK

L302

End note for this section: A generally very well organized introduction that clearly states the most important issues. A schematic of the detector might be good to have here rather than waiting until later. Also, some characterization of this as low cost, with a ballpark number, might help an unfamiliar reader get properly calibrated to the project.

End note for this section: Should there be a little discussion of the problems of beam background affecting the accelerator included here, to give a complete overview of the main issues?

The test runs performed in 2012, complemented by FLUKA simulations, indicate that the approach of the RPs to the beam at the distance demanded by physics requires absorbing the showers produced by the RPs in order to protect the downstream quadrupole. The solution envisaged was the addition of new collimators (TCL6), which has already been concluded.

L327

cross sections that are typically about 1 fb.

OK

L329

of order 1 pb

OK

L350

So for 100 fb-1, 150 events assuming 100% efficiency

L356

Since this was observed without seeing the protons and apparently with tractable backgrounds, this is an opportunity to explain exactly what the proton detection adds to this investigation in CMS. Better S/N? By how much?

L369

This is the point but it would really be nice to have some numbers or a plot of some kind to support and quantify this type of statement

This is illustrated in Chapter 2 where the results of a detailed simulation of this channel are presented.

L419

Since there are other ways to do this, e.g. top, does this add anything?

It adds redundancy to the b-tagging efficiency measurements.

L438

This is a certain outcome not achievable by other means. It might have been useful to expand in a few sentences on the models that can be distinguished.

The feasibility study of the exclusive dijet process with CT-PPS is still on-going. Without estimates of expected event samples and purities, such an expansion is premature. Given the fact that we can reach masses well beyond the exclusive dijet measurements at Tevatron, a measurement of the overall cross-section behaviour with M(pp) will be a good test of how well the different models include the energy evolution of the rapidity gap survival probability.

L455

Another example of how some of the physics can also be done without proton tagging so it will be good to quantify the added value, if possible.

The statement refers to the fact that for single exclusive Z production, we have only single-tag acceptance with CT-PPS. The focus here is not so much the added value in terms of physics, but rather a possible way to provide a check of the momentum scale of the xi determination. We have not pursued these investigations further due to lack of time. The demonstration of the possibility of using events with a single tag requires detailed simulations and substantial analysis effort.

L464

It would have been nice to have some scale for cost and complexity right at the beginning, since the reader may not know it, and knowing what resources are needed helps in judging the physics case.

See answer to general comment 7.

L516

By this, does one mean making it possible to move the RPs closer to the beam?

The RP insertion test program should determine how close to the beam the RPs can be moved, for different instantaneous luminosities.

Chapter 2

L602

The expected scale of these (or the worst case) can be estimated, and its impact on the resolutions can and should be discussed.

>> This is a well defined procedure. It has been used in the past, and we expect it to be part of the standard preparation for data-taking/-analysis.

L624: New to me

>> ExHuME is a reliable generator widely used for the study of exclusive dijet processes.

L638: I take it that this sentence is the discussion of tracking resolution. This almost seems to belong to the previous sentence, but it is a somewhat new issue.

>> Rephrased and clarified.

Fig 9: Is this just a relative hit distribution, or is there an absolute scale for this color coding?

>> These distributions are on an absolute scale, no color coding. We changed them, and normalized them to unity.

L734: Can we have an explicit statement of the efficiency as a function of luminosity (or pileup) due to this effect? Don't we have problems if there are two protons in either detector, since we get multiple solutions with different pairings on the two sides? How do we use the kinematics, and does the Delta t with multiple solutions cause problems?

>> We studied the effect for an average pileup of 50 events. Additional protons in the detectors come from pileup or from beam background; both contributions scale with the luminosity. The Delta_t between the two arms is used to exclude background protons which do not belong to the selected vertex (see Fig 22). Depending on the analysis, additional kinematic constraints can be applied. In the WW analysis no attempt was made to reduce the background by using these constraints. Under these conditions, and assuming 10 ps time resolution, the fraction of WW exclusive events incorrectly reconstructed (wrong proton combination selected) is 10% (see Table 4). In order to study the effect as a function of pileup we would need to produce additional samples, which is planned but not yet available.

The sentence starting in L734 refers to the probability of having multiple hits in the same Quartic cell, which in the baseline design is about 50% in the two cells closest to the beam. As explained, the simulation assumes that protons in cells with multiple hits cannot be measured accurately and are lost. The reduction of signal events between lines 4 and 5 in Table 4 is mainly due to this effect. One goal of the R&D program on new timing detectors and on the fine-granularity Quartic option is to eliminate this source of inefficiency.

L801: Issues with the TOTEM approach which is for lower L and larger beta*

>> The method was used and it worked at low luminosity. It has to be adapted to the luminosity scenario of Run 2. The method uses any kind of reconstructed track (so it does not depend on the optics or luminosity) and an elastic-scattering sample, which can be collected in a few hours.

L805: Rate of elastic events at low beta*. The claim is that this should work.

>> Yes. The claim is that data are sufficient to perform the alignment.

L818: So this new method is not yet really proven.

>> This is an alternative method used at the Tevatron, where it worked reliably, and it is almost entirely data-driven. It should work fine here as well (we do not see why it should not). The precision achieved at the Tevatron was limited by the detector resolutions.

L825: So how much improvement does the proton tagging give?

>> While in Run 1 the measurement was performed using the CMS central detector alone, the more challenging conditions of Run 2 will likely be an impediment to the measurement. The proton tagging will provide additional rejection power against backgrounds in the harsh pileup conditions (Fig 22 and Fig 23).

L831: Again, the issue is what CMS will have after 100 fb-1.

>> Perhaps many new SUSY particles, perhaps nothing new. Having an extra handle on anomalous couplings in the WW final state is certainly a plus.

Fig 22: Do the background events have vertex cuts and restrictions on other tracks coming from the primary besides the leptons?

>> Distributions are shown for events where both leading protons are within the PPS detector acceptance. No other requirement on extra tracks, etc., is imposed. A selection based on the best vertex matching between the dilepton system and the two leading protons is applied to select only one entry per event. Changed the text accordingly.

L865: OK, it may be possible to work with such events.

>> Yes. Figs 22/23 indicate that timing requirements are indeed needed to improve on this measurement.

L870: This is apparently applied post hoc, but it could have been applied earlier, with the impact of the pp tag evaluated after all such cuts, since the proton signals are not used (or needed) in the trigger.

>> This is certainly a possibility. There are no studies on this yet.

Fig 23: This seems to imply that all cuts are made, but one should check that strong vertexing of both leptons and the extra-multiplicity cut are actually already applied.

>> In Fig.23, all cuts (including extra multiplicity, or Ntrack of Table 3) are applied except for the ToF information.

L878: But isn't the Delta Z determined using the two protons closest to a reconstructed primary vertex?

The xi values are reconstructed by the PPS for both leading protons. They are used to reconstruct the missing mass. For AQGC events, the missing-mass values can be significantly different from the backgrounds' (Fig. 25). The DeltaZ is used to remove background events (see the comments to L734).

L896 Pretty low signal, especially since this is probably a higher efficiency than will really be achieved.

>> It is indeed a low signal yield for SM exclusive WW events. In the case of anomalous couplings we may expect a significantly larger event yield.

Chapter 3

Table 5

I am not clear on the estimate for the total impedance of the system. If the <x% figures are taken as approximate values, then the total might be well over a percent. What is the limit (which I understand is probably conservative) from the machine? Is it not <1% in total?

L1147

work to do!

Indeed.

L1204

A lot is known, but there is still work to do to get a solution.

L1236

Pocket

L1276

So is this a statement that this solution probably works?

>> Yes, it may work but detailed calculations are still needed.

L1283

See Fig 44. The thin stainless steel windows are bad: multiple scattering alone exceeds the target resolution of 1 urad. Using the aluminum-beryllium alloy, the multiple-scattering effect is 0.4 urad, which is acceptable.

Chapter 4

L1411

Why so many planes? Way more than in the CMS FPIX or even the tracker for the most part.

The main reason is extra redundancy at a very small cost increase. The extra cost of sensors, ROC chips, other electronics and readout links from choosing 6 planes per station (instead of, say, 4) is not really significant. The additional planes bring the big advantage that the system will be resilient to possible failures without requiring immediate intervention. In our case, where any detector failure may translate into a significant acceptance loss, redundancy is a major concern. Due to the non-uniformity of the occupancies in the detector, some regions of the detector are extremely critical. This is not true for FPIX and BPIX, and this difference justifies a higher redundancy with respect to these detectors.

L1625

endnote: Seems to be based on reasonably well worked out solutions obtained from existing detectors or extrapolations.

Chapter 5

L1682

This needs to be quantified for one side and for both sides.

Close to the beam the ideal segmentation is of the order of 1 mm^2 cells, such that the cell occupancy is 3-4%. We do not currently have a timing detector technology capable of achieving this granularity. The impact of higher cell occupancies when using the baseline Quartic is discussed later in the chapter.

L1903

How about a good statement of the use of the sum, the jitter in event times, etc.

Well, we prefer not to comment outside CT-PPS.

Fig 67

axis labels? Units?

Will be fixed

L2103

So this is a way of doing a low-luminosity experiment in parallel. Nice.

L2117

Could SEUs be a problem?

It could. Our baseline is to have the digital electronics (with radiation-resistant FPGAs) in the RP system. The neutron fluence is about 10^12 n_eq/cm^2 per 100 fb-1, which may indeed create SEUs. We will use the CMS Hard Reset mechanism to recover from those, as done in other parts of CMS. If the rate of resets is low (say, smaller than a few per hour) it should be fine. We may also try to shield the electronics box from neutrons. Otherwise we would need to move the electronics to a protected region; the plan would be to make a hole in the concrete floor below the beam pipe to house the digital electronics. If needed, such a hole could be made in the winter stop.

I would like to congratulate the authors of the CT-PPS proposal for providing a carefully prepared and very detailed document outlining the proposed upgrades to the CMS and TOTEM experiments.

- When discussing the physics motivation, the anticipated achievements with CT-PPS should be compared to what CMS and TOTEM can do in run 2 without these additional detectors. E.g. you argue that the AQGC limits will be much more stringent compared to the current CMS result. Could we however repeat the 2011 analysis without CT-PPS and what would then be the result? Pile-up will be more of a problem, I imagine, but will it make any new measurement completely impossible?

The purity of the selection of events from exclusive processes pp --> pXp depends strongly on the average number of pile-up events (mu). For small mu, rapidity-gap and vertex-exclusivity requirements can give adequate performance, but for larger mu the proton detection, and the matching between the vertex of the two protons and the central system as provided by CT-PPS, is highly beneficial. The published CMS exclusive WW analysis (reference [17] of Chapter 2) was done at a substantially lower mu (about 8-9) than those expected in Run 2; for a direct comparison, the published analysis would have to be re-optimized for the Run 2 conditions (not done). CT-PPS also adds the possibility to determine the kinematics of the system X, e.g. its mass (M_X), with good resolution; this is additional information that can be used in the event selection as well as in the interpretation of the measurements. Regarding exclusive dijet production, in a scenario where a new resonance in the mass range 300-1000 GeV decaying only into gluons and/or light quarks exists, the addition of CT-PPS would be the only way to have a chance of even detecting an indication of it.

To be more specific on exclusive WW production, the determination of M_X, i.e. the mass distribution of the fused gamma gamma pair (M_gamma gamma), significantly enhances the sensitivity to anomalous quartic gauge couplings (AQGC); see Figure 26 and the comparison between the CT-PPS sensitivity and the result of the published CMS analysis (more than simply the factor 20 in luminosity). Also, any BSM physics signal leading to AQGCs generally tends to produce an excess at large M_X, where the CT-PPS has good acceptance. For exclusive WW production, it should also be stressed that being able to reconstruct the M_gamma gamma distribution makes the interpretation of the measurement cleaner and less controversial, since issues that enter the interpretation of all other hadron-collider anomalous-coupling analyses, such as unitarity violation and form factors, can largely be avoided. Furthermore, in case of a real BSM excess, having information about the kinematics of the originally fused gamma gamma system will certainly be valuable for its interpretation. So although the CMS detector alone might allow the measurement of exclusive WW production in the SM, adding the CT-PPS increases the sensitivity to AQGCs and improves the interpretation of a possible BSM signal.
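As a side note, the M_X determination from the two tagged protons relies on the standard relation M_X = sqrt(xi1 * xi2 * s). A minimal numerical sketch (the function name, sqrt(s) = 13 TeV and the xi values are our illustrative assumptions, not numbers from the TDR):

```python
import math

def missing_mass(xi1, xi2, sqrt_s=13000.0):
    """Central-system mass (GeV) from the fractional momentum losses
    xi1, xi2 of the two tagged protons: M_X = sqrt(xi1 * xi2 * s)."""
    return math.sqrt(xi1 * xi2) * sqrt_s

# Two protons with xi = 0.05 each correspond to M_X = 0.05 * 13000 GeV
print(missing_mass(0.05, 0.05))  # 650.0
```

The xi resolution therefore translates directly into the M_X resolution quoted for the spectrometer.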

- I understand that the discussion on impedance is not closed yet and that this could still turn out to be prohibitive. I leave this point to the real experts on this topic.

The test of RP operations with high-intensity beam is the first priority in 2015. Our expectations are based on measurements done by the TOTEM collaboration in 2012, which pointed to a number of improvements of the RP design carried out since then. The threshold of 1% of the LHC impedance is indicative, in the sense that there is no experimental data validating it. The tests done in 2012 of the non-upgraded RP insertion in a high-intensity beam did not show direct evidence of beam instabilities induced by the RP. These tests need to be repeated.

The tests to be carried out in 2015 (using end-of-fill periods) will establish the impact on LHC operation of RP insertion as a function of the distance of the RP to the beam. Our physics program benefits from the smallest possible distance of approach, which in the limit could be as small as 10 sigma. Conservatively, the physics performance was evaluated for 15 and 20 sigma.

- Is it feasible to install a MBP in a YETS? Has this been discussed with LHC experts?

Technically it is certainly feasible. However, no discussion has taken place yet.

- When evaluating the resolution on the reconstructed xi and t (pp. 21-22), did you consider the effect of multiple hits (both from physics and beam induced) in one station? How do you disentangle this? Are showers created in the first station a problem for the hit multiplicity in the second station?

Figs 21-22 show the resolutions in xi and t for protons reconstructed in PPS. Multiple hits are considered to be a problem only for the timing detector (discussed later in the text). Effects due to multiple hits in the timing detectors (both from pileup and from beam backgrounds) have been taken into account in the WW analysis presented in 2.8. Based on Run 1 experience with the standard TOTEM RPs, the inefficiency per pot was of the order of 1.5-2% (i.e., the proton cannot be reconstructed due to the presence of a shower).

- If I understand well, the alignment procedure that is detailed in point 3 of page 27 is only applicable to vertical pots. In horizontal detectors one sees only one side of the beam.

It is true that the alignment with elastic scattering involves only the vertical pots, but the relative alignment (point 2) is among all the detectors (2V+1H): the elastic-scattering method determines very precisely the horizontal position of the beam, which is then used for the horizontal pot alignment. The other option for alignment, based on seeking the maximum of the dsigma/dt distribution, was used to align the RPs at the Tevatron. The details are in the references; the method is rather straightforward and the basics are given in the text. Additional studies have not been done for the LHC. The accuracy at the Tevatron was limited by the detector resolution, not by the method itself.

How stable are the optical functions and beam position at the LHC? E.g. can the beam drift during a run?

According to Run 1 experience, the optical functions are very stable within a fill and even from run to run. To determine them, the full set of time-dependent LHC measurements is considered (magnet currents, etc.), and any variation can be taken into account. With the alignment as described, the beam position can also be monitored.

- L209-210: Why is the position resolution different in x and y and the angular resolution not? When discussing the detector resolutions, these should be compared to the beam width and divergence (and correlation between both) at the location of the detector. The beam width and divergence are also energy dependent...

The text was modified to include angular resolutions in both planes. The detector resolutions were chosen so as not to be a limitation to the mass resolution on the centrally produced object, which we wanted to keep at the level of a few GeV. At the detector location the divergence is a few urad, and the alignment precision is of the order of 10 microns, the same order as the detector resolution.

Do you plan to use a measurement of the transverse primary vertex coordinates in the reconstruction of the proton kinematics?

Yes, we plan to, although it has no big impact on the xi resolution.

- L221-222: You mention a pile-up reduction factor of 25 for a timing resolution of 10 ps. What would the rejection factor be for a resolution of 30 ps?

The naive answer is: a factor 3 smaller. The rejection factor quoted here is based on a naive calculation: sigma(z-beam) ~ 50 mm divided by sigma(z-vert) ~ 2 mm, which holds when there are no additional pileup protons in the detectors. At the largest pileup (mu=50), where the average proton multiplicity is higher than 1, the rejection is smaller, as shown in the detailed analysis presented in Section 2.8 (the rejection factor is ~10 for 10 ps and around 5 for 30 ps resolution).
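The naive numbers above can be reproduced explicitly. In this sketch (a rough illustration only; the helper name is ours) the z-vertex resolution follows from z = c*(t1 - t2)/2 with independent per-arm resolutions, and the rejection is the ratio of the ~50 mm luminous-region spread to that resolution:

```python
import math

C = 0.2998  # speed of light in mm/ps

def vertex_rejection(sigma_t_ps, sigma_z_beam_mm=50.0):
    """Naive pileup-rejection factor: luminous-region spread divided by
    the z-vertex resolution from timing, z = c*(t1 - t2)/2, so that
    sigma_z = c * sqrt(2) * sigma_t / 2 for equal per-arm resolutions."""
    sigma_z_vert = C * math.sqrt(2.0) * sigma_t_ps / 2.0
    return sigma_z_beam_mm / sigma_z_vert

print(vertex_rejection(10.0))  # ~24, the quoted factor of ~25
print(vertex_rejection(30.0))  # exactly 3 times smaller
```

Since the relation is linear in sigma_t, tripling the timing resolution divides the naive rejection by exactly three, consistent with the answer above.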

- L397-398: CEP dijets give access to generalised pdfs: could you elaborate on this? If CEP dijets are produced from gg->gg with the addition of some soft non-perturbative exchange to neutralise the colour flow, how does this give access to generalised pdfs?

Indeed the two-gluon proton vertex measures the (skewed) unintegrated gluon parton distribution function of the proton, in a region never explored before. We will change the text accordingly.

- Eq. (2.2): Please be aware that this is only correct if Dx* = 0. I seem to remember that this is not the case and that this has a non-negligible effect on the position at the location of the RP. It would be good to check.

Eq. 2.2 is the transport equation valid for a particle which has phase advance = 0 and dispersion = 0 at the IP: as it is a scattered particle, it is not a multi-turn particle. The collimators should be efficient enough to remove multi-turn particles whose xi is beyond nominal.
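For orientation, the linear part of the transport discussed above can be sketched as follows. The optical-function values (L_x, D_x) are placeholders chosen only to illustrate that the dispersion term dominates the displacement; they are not the actual 220 m optics:

```python
def transport_x(theta_x_urad, xi, L_x_m=2.0, D_x_m=0.08):
    """Horizontal displacement (mm) at the RP from the linearized
    transport x = L_x*theta_x + D_x*xi, assuming the vertex terms
    vanish at the IP (as in Eq. 2.2). L_x and D_x are placeholders."""
    theta_x_rad = theta_x_urad * 1e-6
    return (L_x_m * theta_x_rad + D_x_m * xi) * 1000.0  # m -> mm

# Dispersion dominates: xi = 0.05 gives a ~4 mm offset for these inputs,
# while a 50 urad scattering angle adds only ~0.1 mm on top
print(transport_x(0.0, 0.05))
print(transport_x(50.0, 0.05))
```

In the real machine the effective length and dispersion depend on xi, which is what the non-linear MAD-X tracking of Fig. 3 accounts for.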

- Fig. 3 shows that you properly calculate non-linear effects as a function of the energy loss (are corrections for energy dispersion by sextupole magnets included?).

Fig 3 includes all the beam line (as described in MAD-X) up to the detectors, with nominal optics (where the sextupoles are off).

I am still worried about the effect of this on the mapping of (xi,t) on the (position,angle) plane at the location of the detectors. I add a plot showing the situation which we faced at HERA. It can be seen that the non-linearities create a region of overlap where the same values for (position,angle) can result from different values for (xi,t). This may create difficulties in the reconstruction of the proton kinematics. It would be very informative if you could provide a similar plot for the LHC at 220m.

At the LHC the effect is not so dramatic (see figure). The effect could be more pronounced in the vertical pots, but not in the horizontal ones.

- Fig. 6: I am unsure how you define xi and t for background events (esp. MB)

In the background sample only the protons are considered: xi and t are defined as for the signal. The protons are tracked to the detector locations, and the surviving ones are considered in the analysis. If there is more than one proton attached to a vertex, the one with the higher momentum is chosen.

- L679: I don't understand the sentence. If the protons are within the CT-PPS acceptance, how can the single-arm acceptance be smaller than 100%?

The acceptance is estimated as the number of protons that arrive at the CT-PPS divided by the number of generated ones in the same bin. Added a clarification in the text.

- Tables 17-18: The cost estimates are obviously missing...

They will be provided soon.

I start with congratulations to the authors of the CT-PPS proposal. They have provided a clear and well-prepared document describing the upgrade plans for the CMS and TOTEM experiments. My comments will focus mainly on the fourth chapter, Tracking Detectors, as this is where my expertise lies.

Chapter 4 does not contain a description of the expected integrated fluence and TID (Total Ionising Dose). The sentence of lines 221-222 in the introduction should be expanded in Chapter 4. It would be great to include an x-y map of the expected fluence on the surface of the pixel planes (something like what is described in Fig 9, including all backgrounds). The concerns are related to the operation of the pixel readout chip (PSI46) after a very inhomogeneous exposure to radiation.

We have added the picture below showing the expected proton fluence in the detector (protons/cm^2) for 100 fb-1. The rectangle indicates the detector surface. The ellipse shows the 15 sigma beam contour. At the detector edge a value of the order of 5x10^15 p/cm^2 is obtained. This is compatible with the extrapolation from TOTEM data.

If I understand the pixel detector design, the total surface of silicon sensors to be installed is of the order of 100 square centimeters. This is a rather small surface (a single 6-inch wafer can host all the sensors necessary for this project). As a consequence, the sensor YIELD should play only a small role in the choice of vendors and/or technology. With reduced yield concerns, a more aggressive approach towards "slimmer" edges could be pursued (the first requirement being: "Efficient pixel-based tracking as close as possible to the sensor's physical edge").

This is a good suggestion. We will take it into account in our prototyping plans.

Lines 1539-1540 should be rephrased, as they are not so clear.

Sentence is rephrased.

Lines 1569-1572 The rotation angle of 18.5 degrees seems to be fully motivated by the baseline 3D sensor technology and the inefficiencies on the electrodes. It would be good to cross-check the impact on the spatial resolution of this angle for both 3D and planar technologies.

Test beam results with 3D sensors indicate that a rotation of 5 degrees is enough to eliminate the inefficiencies at the electrodes (Fig. 49). The 18.5-degree angle is a preliminary value motivated by mechanical considerations and by measurements of the pixel cluster resolution as a function of the tilt angle with similar sensors. We are still analysing data from 3D sensors in a recent test beam to obtain more precise numbers. We will add the following sentence in the TDR:

The resolution of the x-coordinate is determined by the sharing of charge in pixel clusters, which depends on the detector tilt angle in the x-z plane. While this parameter is not yet defined, test beam results with similar sensors indicate that for an angle of 20 degrees the two-pixel clusters have a resolution of the order of 10 um. Since there is no tilt in the y-z plane, the resolution of the y-coordinate is of the order of 30 um.

Line 1598: A power consumption of <10 W for the pixel detector is quoted in this line. I believe this is per package, but it could be read as being for the whole system. It is probably better to clarify what the 10 W refers to.

The 10 W refers to the power consumption in one RP station (6 detector planes). Sentence was modified.

This is an excellent, very detailed and comprehensive document summarizing an impressive amount of work. Congratulations to the authors! I don't come back to cosmetic changes such as correcting a few typos, missing references, incomplete acronym definitions, etc., as this point has already been addressed in detail by Mirko. On the last point, however, I would suggest adding one page with a list of the most unusual acronym definitions to help the reader with a short memory.

Below is my list of comments/suggestions, the most important in my view being the last three:

1- Section 5.1: when mentioning the use of NINO and HPTDC for the Quartic and Gastof, it could be useful to add that this approach not only offers an immediate solution, because these two chips are well known, but also offers potential for future upgrades, as new improved versions of the HPTDC and of the NINO are already in the pipeline.

2- Section 5.2.2: I am sure that the Quartic people are aware of this point, but the fact that the Cerenkov angle is the complement of the critical angle for the propagation of the photons in the bar imposes severe constraints not only on the alignment of the bar with respect to the beam line (this point is addressed later in the TDR, in Section 5.2.6) but also on the surface state of the bar. In order to minimize light losses due to breaking the total-reflection criterion, the surface must be polished to a precision of at least lambda/4, which has a cost. On this point, I am surprised to see that wrapping the bars with a simple and very thin (25 microns would be enough) aluminized mylar foil does not seem to have been considered, to avoid cross-talk while maintaining the minimum gap between the bars and to recover the light leaking out under imperfect total-reflection conditions.

We plan to investigate possible wrappings in association with thinner bars (~1 mm^2 cross-section) in a second-generation Quartic detector.
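The complementarity invoked in the comment above follows from cos(theta_Cherenkov) = 1/(n*beta) and sin(theta_critical) = 1/n: for beta -> 1 the two angles sum to exactly 90 degrees, so the Cherenkov light meets the bar surface right at the total-reflection limit. A quick numerical check (n = 1.47 is an assumed value for fused quartz, for illustration only):

```python
import math

n = 1.47  # assumed refractive index of fused quartz (illustrative)

theta_cherenkov = math.degrees(math.acos(1.0 / n))  # beta = 1 particle
theta_critical = math.degrees(math.asin(1.0 / n))   # total internal reflection

# The two angles sum to 90 degrees (up to rounding), independent of n
print(theta_cherenkov, theta_critical, theta_cherenkov + theta_critical)
```

This is why any surface imperfection or misalignment immediately puts part of the light below the total-reflection condition.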

3- Section 5.2.2, Table 10: This table would be easier to interpret if it were supported by a small sketch with a definition of x, y, R and L.

Figure is added and table is removed.

Figure xx: Schematic layout of quartz bars looking in the direction of the protons. Numbers on the 3 x 3 mm^2 radiator bars are their lengths in mm, and coordinates (mm) are the centers of the bars. The light guide bar lengths are chosen to all end in a common plane 81.7 mm from the edge closest to the beam.

4- Section 5.3: I think that the recovery time of 100 ns for the SPAD recharge is rather pessimistic, particularly after mentioning SPADs of 20 microns a few lines above. It depends of course on the SPAD dimensions, but even for 50-micron SPADs, which is the dimension likely to be chosen in the end, the recovery time is probably of the order of 50 ns.

We agree. Recovery time changed to 50 ns.

5- Section 5.2.8 or 5.3: about the improvements of MCP cathode lifetime in a high-rate environment with the ALD (Atomic Layer Deposition) technique, you could show this plot:

Plot was included. Thanks for making it available.

6- Section 5.3, Fig. 74: The legend of this figure should better explain the difference between the left and right plots.

Figure caption was changed.

7- Section 5.4.2: As the TDR is an official document, we have the obligation, when mentioning EndoTOFPET-US, to add a footnote with the following sentence: This project has been funded by the European Union 7th Framework Program (FP7/2007-2013) under Grant Agreement No. 256984 (EndoTOFPET-US).

8- This is a more general comment. The TDR aims at convincing the funding agencies to inject money into this project. I think that the last section, Section 7, is the most important one from this point of view, and the TDR would gain in convincing power by improving it on a few points. The project has several phases, and the articulation between these phases needs to be summarized in this section. I can imagine that while the funding agencies will easily accept to fund the baseline solution for 2015, they are likely to request more information before supporting the different R&D lines for the MBP, the Gastof, the diamond and the timing Si prototypes. The first question, which is not clearly answered in the document (or only in a scattered way, which should be summarized at the end), is the following: is each of these developments mandatory for the success of the physics program, or can they be considered as useful upgrades only? I think that makes a strong difference in the overall strategy. I don't know if there is sufficient information available for that, or if there is enough time to collect it, but I would have liked to see in Section 7 a table summarizing the baseline and alternative options, indicating (possibly in a quantitative and very synthetic way) the physics potential and the limits of each of them.

9- On a similar track, the decision mechanism, and the timescale for selecting the best options, should also be summarized. One suggestion is to do it on a Gantt chart with a clear indication of the milestones on the basis of which decisions could be made.

10- It would have been nice to make a judgment about the amount of money requested for each of the items shown in the financial table at the end, to see if it looks realistic in the timeframe proposed. By the way, this timeframe is actually only vaguely stated; ideally a spending profile should be shown. Another point for this table: it is mentioned in the document (Section 5.2.10) that some R&D will continue on Quartic improvements. Maybe a line should be added in the table for this.

Reflecting comments 8, 9 and 10, we have improved section 7.4:

7.4 Planning and cost estimate

The CT-PPS plan includes an exploratory phase in 2015-16 followed by a production phase until LS2. The objectives of these two phases have been presented in section 1.1.

In 2015 we will use in each arm of the spectrometer two old RPs with new RF shields, housing available TOTEM silicon strip detectors, and one new cylindrical RP housing two Quartic modules. The Quartic detectors will be installed in the second half of 2015 during a technical stop.

At the 2015-16 year-end technical stop the new pixel detectors would replace the silicon strips. The plans for the installation of timing detectors for operation in 2016 will depend on the results of the beam evaluation of the prototypes (see Sections 5.3, 5.5 and 5.6). Provided that the R&D is successfully concluded, an MBP structure could also be installed for tests in 2016. Detailed plans for this installation have not yet been defined.

7.4.1 Cost estimate of baseline detector

The cost estimates of the construction of CT-PPS detectors are summarized in Table 17. It should be noted that these cost estimates are for Materials and Services (M&S) only. In this estimate we have considered the baseline detector, including the following costs:

1. Final prototype or pre-production fabrication required to validate a final design or product quality, prior to production;

2. Engineering costs incurred during production at a vendor or contractor, not at a CMS and TOTEM member Institution;

3. Production fabrication and construction costs, including QA and system testing during the assembly process;

4. Transportation costs, integration and installation.

The TOTEM expenditure executed in 2013-14 on the Roman Pot project is summarized in Table 18. It includes:

1. The cost of the relocation to the 210 m region of the 4 (out of 12) RPs used by the CT-PPS tracking detectors, as well as the cost of the new RF shielding;

2. The cost of two new cylindrical RPs for the CT-PPS timing detectors, including the movement system (motors and control), RP infrastructure (cables, cooling, vacuum, LV), and ferrites (fabrication, machining, bake out, cleaning).

These values include the costs of CERN services manpower. The manpower contributed to the project by the TOTEM collaboration is not accounted for.

The TOTEM contributions to the silicon strip detectors to be used in 2015 and to the reference timing system are not included in the tables, nor is the original cost of the four relocated RPs housing the tracking detectors.

In this chapter, all monetary values are expressed in CHF. The following conventional exchange rates have been used to convert EUR and USD to CHF: 1 USD = 1.0 CHF; 1 EUR = 1.2 CHF.

7.4.2 Objectives, plans and cost of R&D program

Besides the construction of the baseline detectors, we propose to carry out an R&D program on Moving Beam Pipes and Timing Detectors, described in Sections 3.6, 5.3, 5.5 and 5.6. In Table 19 we summarize the physics motivations, objectives and time scale of the proposed R&D developments.

The results of the R&D program will be evaluated in reviews to be carried out in the autumns of 2015, 2016 and 2017. These reviews will establish the basis for the decisions on the detector configuration to be used in the following year(s).

We have estimated the cost of developing prototypes of the new timing detectors and prototypes of the Moving Beam Pipes (Table 20). Cost estimates for the construction of final versions of any of these items will be established after the evaluation of the prototypes and the decision on possible upgrades.

Table 20 also includes the cost of two additional cylindrical chambers (RP cylinder and ferrites), integrated in the present stations, to accommodate new timing detectors in 2016.

7.4.3 Expected Funding and Cost Sharing

Unchanged

I would like to congratulate the authors for such a detailed document. The progress of the TOTEM-CMS project is really impressive. Due to the limited time for the review I send you only remarks on things which are not completely clear to me.

L209: The resolution of the tracking detectors is not described in Chapter 4 (resolution used in simulation 10um)

The resolution of the tracking detectors is now discussed in Chapter 4

L272: Same concerns as M. Berretti. It seems that only the impedance is considered a problem for the insertion. The proton interactions, for example, could create some difficulties.

The following sentence was added in the introduction:

The test runs performed in 2012, complemented by FLUKA simulations, indicate that the approach of the RPs to the beam at the distance demanded by physics requires absorbing the showers produced by the RPs in order to protect the downstream quadrupole. The envisaged solution was the addition of new collimators (TCL6), which are already installed.

L281: Impedance of the MBP bellows: is the 0.9% for all bellows? This issue is not explained/mentioned in Section 3.6.

L523: Maybe it is worth specifying that there will be several end-of-fill studies...

L1218: I don't understand whether there is a limit on the approach. Will the MBP be aligned like the RPs? During alignment the approach can be as close as 5 sigma.

The MBPs are subject to the same type of limitations affecting the RPs, and the alignment procedures will be similar.

L1236-1249: 10 cm or 25 cm?

The length is determined by the timing detector requirements. Today we may estimate it to be between 10 and 25 cm.

L1254 Chapter 4.4 -> Section3.4.2

corrected

L1274: Please specify the contribution of the bellows to the impedance (explicit in the introduction)

0.9% is the estimate for two standard double bellows if they were unshielded (based on a geometrical longitudinal impedance of 0.4 mOhm each, out of 90 mOhm total for the LHC). This corresponds to one complete MBP station. Since this would be larger than the impedance of the MBP itself, it will be reduced either by adopting a modified bellows design (as tried in the AFP MBP design) or by adding RF shielding, as discussed in Section 4.6.2.
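The 0.9% figure can be checked with a quick back-of-the-envelope calculation (a sketch using only the numbers quoted in this reply: 0.4 mOhm per double bellows, 90 mOhm total LHC longitudinal impedance budget):

```python
# Fraction of the LHC longitudinal impedance budget contributed by
# two unshielded double bellows (one complete MBP station).
Z_BELLOWS_MOHM = 0.4   # geometrical longitudinal impedance per double bellows
Z_LHC_MOHM = 90.0      # total LHC longitudinal impedance
N_BELLOWS = 2          # two standard double bellows per MBP station

fraction = N_BELLOWS * Z_BELLOWS_MOHM / Z_LHC_MOHM
print(f"{fraction:.1%}")  # -> 0.9%
```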

Sect 3.6.3: What is the probability of proton loss due to nuclear interaction in the material, given that the material traversed by the proton is 5 times larger with the tapering? In the RP (box shape) it was simulated/measured to be <2%.

The probability of nuclear interaction in the thin stainless steel window with an 11° tapering angle is ~1%. Using the aluminium-beryllium alloy it will be considerably smaller.

Sec 3.6.4: Tests and simulations of mechanical deformation were performed on the old design. What is the size (length) of the prototype? What can be expected with the new design (with tapering)?

The prototype version used for these deformation tests had a length of 22 cm.

In the new design the tapering itself is not expected to significantly affect the deformation. In terms of material properties, the AlBeMet alloy is only marginally less rigid (Young's modulus E = 193 GPa for AM162) than stainless steel (E = 200 GPa), so this is also not expected to substantially change the conclusions. This expectation will be checked in updated ANSYS simulations and tested with the prototype to be built.

Chapter 4: Describe the detector resolution; was it measured in one of the test beams mentioned in the chapter?

Sentence modified (L1386):

Efficient pixel-based tracking as close as possible to the sensors' physical edge, with hit resolution better than 30 um.

The resolution of the x-coordinate is determined by the charge sharing in pixel clusters, which depends on the detector tilt angle in the x-z plane. While this parameter is not yet defined, test beam results with similar sensors indicate that for an angle of 20° two-pixel clusters have a resolution of the order of 10 um. Since there is no tilt in the y-z plane, the resolution of the y-coordinate is of the order of 30 um.

Section 5.3: What about the schedule (and costs) to bring the gas into the tunnel?

For a test run/installation the cost is zero; we have already run the detectors at a test beam for a couple of months without any connection/refill.

For regular running the cost would be about 1000 euro. GasToF's gas volume is very small, only about 0.3 liters per PPS arm (which is also good for safety), so it is enough to have a small nearby reservoir of about 30 l with a slow circulation.

L2231: 6000 e for 0.2 mm thickness according to RD42; for 0.5 mm the charge is ~15000 e. Correct?

We prefer to make the conservative assumption that the signal charge is of the order of 6000 e.

L2237: Do the 9 planes need to be justified? Let's say ~10 planes to optimize the cost/number of channels, etc.

L2260: What about the other detector technologies in terms of including the signal in the trigger? Is it valid for all of them?

Yes

L2319: SAMPIC could be used also for other detector technologies?

Yes. SAMPIC is a general-purpose digitizer. It just requires the input signal to be at least 100 mV, so a preamplifier is needed to match the output signal of the specific detector to SAMPIC.

L2359: 1 GHz input bandwidth and 2 gigasamples/s seem to be already achieved in the current version. Correct?

Yes. The key missing ingredient is "no dead time at the LHC", so it will be a challenge to maintain this precision with no dead time. Additional pipelines need to be added to the architecture.

Sec 5.6.3: This technology seems promising. Could you specify a more detailed timescale and costs?

We are currently in the process of manufacturing two additional sets of sensors. The first will be on a 200-micron thick FZ substrate and will use either boron or indium as dopant for the gain layer. This production, expected in the fall of 2014, will help us consolidate the design process, compare the performance of 300- and 200-micron thick sensors, and measure the radiation hardness of indium-doped sensors. The second production, aimed at spring 2015, will use 50- and 100-micron SOI sensors and will have the full PPS sensor geometry. As the time resolution improves with reduced sensor thickness, we expect the 50- and 100-micron thick detectors to achieve a significant breakthrough in performance, allowing us to manufacture the final UFSD sensor geometry for the PPS by fall 2015.

Below are some comments to Chapter 1. It is a very nice and educative document, it is hard to find something to criticize. Sorry for the delay. I will continue reading but I wanted to send these already. I have found the replies to many of these questions later, but I still thought it may be good to give a hint already in this introductory chapter for impatient readers. But it reads pretty well already I think.

142 why do we need an inelastic event and two SD events overlapping to create a signal? Isn't it enough/more probable that we have two SD events overlapping?

Backgrounds with two SD events overlapping have also been considered. In cases where the cross section of system X is small (e.g. WW production), the most probable background is one inelastic event overlapping with two SD events.

148 I guess you cannot require small missing ET and p_z, for example, because of neutrinos? In that case I am not yet sure what 'compatible kinematics' means here (although you give details later).

This is discussed in detail in chapter 2.

153 is there a simple explanation of why there is no acceptance for the Higgs? It would be useful already here, or a ref to later explanation.

The mass of the central system is related to the fractional momentum losses xi of the two protons by M = sqrt(xi1·xi2·s). Acceptance for the Higgs requires xi ~ 0.01. At ~200 m from the IP, protons with xi = 0.01 separate from the beam by ~1 mm, which is beyond the expected performance of the RP/MBP (we aim at 2-3 mm).
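The numbers in this reply follow directly from the mass formula; a minimal check, assuming symmetric momentum loss (xi1 = xi2 = xi) and sqrt(s) = 13 TeV:

```python
import math

SQRT_S = 13000.0  # GeV, assumed Run 2 centre-of-mass energy

def central_mass(xi1, xi2, sqrt_s=SQRT_S):
    """Mass of the central system from the fractional momentum
    losses of the two protons: M = sqrt(xi1 * xi2 * s)."""
    return math.sqrt(xi1 * xi2) * sqrt_s

def symmetric_xi(mass, sqrt_s=SQRT_S):
    """Fractional momentum loss per proton needed for a central
    system of the given mass, assuming xi1 = xi2 = M / sqrt(s)."""
    return mass / sqrt_s

# A 125 GeV Higgs needs xi ~ 0.01, at the edge of the acceptance
print(round(symmetric_xi(125.0), 4))    # -> 0.0096
print(central_mass(0.01, 0.01))         # -> 130.0 (GeV)
```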

172 It would be nice to read something about why the TOTEM contribution is important to the low xsec CEP physics, beyond what can be done with CMS+PPS.

Under the terms of the MoU, TOTEM will contribute to the CMS+PPS program. The TOTEM interest in low-cross-section physics is expressed by participating in this program.

193 I guess it is only meaningful to declare 100/fb of data if the maximal pileup value is also specified? Or is L the only figure of merit? I mean, why is it important to declare this value separately for PPS, when clearly the big experiments and LHC will determine the integrated lumi achievable?

CT-PPS aims to run at the highest luminosity of the Phase I LHC (up to 2×10^34 cm^-2 s^-1). The physics performance is evaluated for PU = 50 (Chapter 2). The 100 fb^-1 is an estimate of the achievable integrated luminosity in the CT-PPS production phase before LS2. Of course, the real number will depend on the duration of the CT-PPS exploratory phase.

222 what is the PU rejection factor for 30 ps? I assume around 8.

The naive answer is: a factor 3 smaller than for 10 ps. The rejection factor quoted here is based on a naive calculation, sigma(z-beam) ~ 50 mm divided by sigma(z-vert) ~ 2 mm, which holds when there are no additional pileup protons in the detectors. At the largest pileup (mu = 50), where the average proton multiplicity is higher than 1, the rejection is smaller, as shown in the detailed analysis presented in Section 2.8 (the rejection factor is ~10 for 10 ps and around 5 for 30 ps resolution).
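The naive estimate can be reproduced numerically (a sketch assuming the quoted sigma(z-beam) ~ 50 mm, with the vertex z reconstructed from the two proton arrival times and the per-arm resolutions combined in quadrature):

```python
import math

C_MM_PER_PS = 0.299792458   # speed of light in mm/ps
SIGMA_Z_BEAM_MM = 50.0      # luminous-region spread quoted above

def sigma_z_vertex(sigma_t_ps):
    """Vertex-z resolution from z = c * (t1 - t2) / 2, with
    per-arm timing resolution sigma_t combined in quadrature."""
    return C_MM_PER_PS * math.sqrt(2) * sigma_t_ps / 2.0

def naive_rejection(sigma_t_ps):
    """Naive pileup rejection: beam spread over vertex resolution."""
    return SIGMA_Z_BEAM_MM / sigma_z_vertex(sigma_t_ps)

for res in (10.0, 30.0):
    print(res, round(sigma_z_vertex(res), 1), round(naive_rejection(res)))
# 10 ps -> sigma_z ~ 2.1 mm, rejection ~ 24
# 30 ps -> sigma_z ~ 6.4 mm, rejection ~ 8
```

This reproduces both the ~2 mm vertex resolution and the "around 8 for 30 ps" guess in the comment above; the detailed analysis of Section 2.8 gives smaller factors once multiple pileup protons per bunch crossing are taken into account.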

236 is the number of devices the only potential difficulty in replacing them? Are there radiation protection considerations too?

The estimated particle fluence and dose in the detectors is similar to what is expected in the CMS pixel detector. Radiation protection considerations are not an issue.

261 if the L1 signal from PPS does not represent a proton reconstructed there, then what does it represent? This paragraph first talks about an L1 signal but then states that an HLT trigger is necessary. Could this be clarified a bit?

This is discussed in more detail in Chapter 6. In the baseline PPS, only the timing detectors can be used at L1 (the pixel front-end electronics does not have trigger outputs). The L1 trigger computes just the time difference between the two proton signals to estimate the z-vertex and select events in the tails of the z-vertex distribution (low pileup density). At the HLT we will use the PPS pixel data to reconstruct tracks, compute the pp kinematics and match it to the central system. In a later CT-PPS upgrade we may exploit the possibility of a tracking trigger at L1.

309 20 ps appears here, while so far it was 10 or 30 ps. It would be nice to make these uniform.

The time resolution will be somewhere between 10 and 30 ps; 20 ps is the average.

351 it would be nice to point out why it is important to measure these cross sections, since they are precisely predicted by QED. What new physics are we sensitive to?

Deviations from the SM predictions of these cross sections may point to new physics. For example, a heavy Higgs decaying into tau pairs would give an excess of events in this channel relative to the other lepton species.

L632 "Interactions with an average multiplicity of 50" has a double meaning, since multiplicity is very often used for the number of particles created in a single collision. It would be nice to rephrase this.

changed to "by simulating additional interactions with an average PILEUP multiplicity of $\mu=50$ matching that expected during Run~2."

L655 this ellipse distribution means that the protons with the given parameters are distributed along the ellipse line (circumference), or in the whole area of the ellipse? Is the distribution uniform? Thus, is there an easy way to estimate the acceptance just from the ellipse numerically?

The ellipse lines indicate the given (xi, t) values. The distribution is clearly not uniform, as it gets denser at smaller t/xi. The different lines are drawn to give a feeling of how the acceptance develops.

Fig 8: the legend is missing the unit of t.

L678- The RP window has a 1-2% interaction thickness, but what is the interaction thickness of the vacuum between the IP and the RP? Is that much smaller?

The loss due to interaction with the thin window has been measured in Run I and is ~1-2%. The interaction of the proton with residual gas is much smaller and is not taken into account in the simulation.

L683- This 7% is not 49% squared, why?

The double-arm acceptance is not simply derived from the single-arm acceptance, as the physics event is not left-right symmetric in terms of xi-t. However, we will double-check this number.

700 in general: why is the detector a square shape and not an annulus? It is clear that one needs to maximize the sensitive area very close around the beam, and a square does not seem ideal for that. Is simplicity more important than physics potential?

From the construction point of view it is much simpler, and the detector dimensions are dictated by the RP dimensions. Also, the rectangular detector shape does not compromise the acceptance, as the full acceptance of the PPS is exploited: the RP is filled completely.

Fig 16: it would be nice to have on the same plot the distribution of mass according to MC. Otherwise the reader does not know how important the acceptance loss is at low M.

It is indeed a possible option. Precise numbers on the fraction of events retained by PPS can be found in section 2.8.

Fig 17: mass system range -> mass range of the system

Rephrased it.

Fig 18 (and other similar plots): do we really expect any significant difference between the left and right panels?

No, we do not expect significant differences. These distributions are shown for completeness.

Fig 19: the color palette is not very fortunate: in black and white, the top and bottom of the scale are equally dark.

Please use colors; it is hard to adapt this plot to black and white.

732 it is not easy to visualize the diamond-like geometry without a drawing.

The geometry is only indicative. A clearer picture is given by Figure 19, where it is easier to visualize.

735 if the plan is to reject events with multiple hits, how do we really know from the data that multiple particles hit a single cell?

The information on multiple hits comes from the tracking detectors, which have a much finer segmentation (a resolution of ~10 um is assumed).

744 same location: it would be good to clearly and briefly describe the near future TOTEM plans so that the reader does not think that TOTEM will be replaced by PPS (or is it?)

This is discussed elsewhere in the TDR. See for example, the Introduction (chapter 1).

L749- why not use BPTX_XOR instead of zero bias?

The extrapolation of the background to high pileup has been done using data recorded by TOTEM. The best "unbiased" sample for the estimation is the zero-bias (bunch-crossing) triggered one.

L755 background... what is the pileup value here? Did we subtract two large values? Is this a good way to do this?

This was the procedure used in TOTEM. The estimate with TOTEM Run 1 data was performed at a pileup of mu = 9 and extrapolated to the expected Run 2 conditions (i.e. a pileup of 50). We will measure the background as a function of pileup as soon as we have data.

Fig 21: the middle plot is missing its title.

The x-axis indicates the number of events per bin. The caption was fixed to clarify this.

L805 is the uncertainty on this xsec a factor of 100?

Yes, the models have very large uncertainties.

L806 why do we have here only a factor of 30 between numbers, not 100? And in this same line, a factor of 33? This is clearly inconsistent.

These are approximate numbers. Anyhow, they are consistent now.

L811 does xi not influence this at all?

The alignment is done using elastic scattering events, which have xi = 0.

832 are detected: does not sufficiently indicate that this does not always happen

It indicates the case in which the WW events fall within the PPS acceptance and are detected. Further clarification and simulation details are given later in the text.

842 modest would be better word than mitigated

The gain has not yet been assessed. Therefore, it is not yet proven it is "modest". We prefer "mitigated".

866 the factor 10 or 5 is mentioned here, but why not 25? The pu rejection power was 25 for 10 ps...

The numbers are from Table 3 (ToF difference) for the "inclusive WW" background. The rejection factor quoted in the introduction is based on a naive calculation, sigma(z-beam) ~ 50 mm divided by sigma(z-vert) ~ 2 mm, which holds when there are no additional pileup protons in the detectors. At the largest pileup (mu = 50), where the average proton multiplicity is higher than 1, the rejection is smaller.

Fig 23: what will be precisely the unprescaled trigger which will be capable of collecting these events?

We plan to use the CMS dilepton triggers. See Table 13 in chapter 6.

Fig 24 this is confusing; are all inclusive WW events coming from gammagamma at high multiplicities?

The events are all simulated under the same conditions, as indicated in Section 2.2 ("simulated samples"). Inclusive WW events are not from gammagamma->WW; those are the exclusive events.

Fig 25 bottom right: is Wgammagamma the best notation for the missing mass?

In order to avoid confusion, we now clearly state that this is the missing mass, defined at line 878 as M_X. It should be clear now.

940 it would be nice to describe already at the beginning what TOTEM consolidation means precisely.

It is outside the scope of the TDR to describe or summarize the TOTEM consolidation program. A reference is given.

956 Cerenkov: why not use diamonds?

983 is the system tested for sudden change of pressure as well?

Yes

OK

1025 how do we know precisely that it was 100 °C? How do we get it from the sensor reading, which is only 4 °C?

This is an estimate based on the thermal model of the RP.

Fig 40: is this for a single proton, for a bunch, for a luminosity unit, or for how many events?

It will be clarified in the caption.

1165 is TCL5 likely to produce bg for the RPs?

In the foreseen configuration TCL5 is open at 35 sigma and therefore it is not expected to be a source of background.

1182-4 this is repeated already for the third time...

We will revise the text.

1295 what is the meaning of the 1 urad here? Is it counted with respect to the direction of the proton coming from the IP, or is 1 urad really the scattering angle in the material?

1 urad is the resolution of the PPS measurement of direction of the incoming proton.

1386 why pixel-based? What does this mean? Isn't it better to say something about the segmentation?

text was modified.

1388 not well phrased: what is better than 1e15? Is a smaller or larger number better? Also, it is not really "required"; what is required is that the detector survives a dose/fluence of at least that value.

text was modified

1398 are phrases like we think allowed?

rephrased

1402-4 not an understandable sentence. What does "schedule risk" mean, and what does it have to do with the edge thickness and the rapid installation?

Schedule risk means not being ready in time for the window in which the detector can be installed.

Fig 49 quotes a 30 V bias, while line 1477 says 15 V. Why?

Measurements were made for different bias voltages, showing similar efficiencies for the two values. This particular plot was made for V = 30 V.

1611 is there going to be ice formation from the ambient air humidity? (-30 °C)

The detectors are in a secondary vacuum.

1656 typo: "oemphet"?

corrected


Topic revision: r20 - 2014-08-28 - ValentinaAvati
