Contents:

Links to the AN, subgroup meeting talks, CADI and the HyperNews forum:

This page presents the discussion of the B -> X(3872)/psi(2S) K analysis.

The latest version of the AN.

Links to talks at subgroup meetings:

Links to the CADI line and the HyperNews forum will appear here once they are opened.

Analysis discussion:

Sergey's questions received on 20 Jan 2020:

General: figures have very poor quality; please use vector graphics in the future (save to PDF in ROOT and import the PDF plots in LaTeX).
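
A minimal PyROOT sketch of the suggested workflow (the histogram name and ranges are illustrative, not from the AN):

    # Save the canvas directly to PDF so the figure stays vector graphics
    # when it is imported into LaTeX.
    import ROOT

    canvas = ROOT.TCanvas("c", "c", 800, 600)
    hist = ROOT.TH1F("h", "B candidate mass;m [GeV];Candidates", 100, 5.0, 5.6)
    hist.Draw()
    canvas.SaveAs("mass_fit.pdf")  # vector output; avoid .png in the AN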

General: use the linenomath environment around equations to get correct line numbering.
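
For example (the equation shown is just the Delta M definition used as a placeholder):

    % linenomath keeps the lineno package's numbering correct around
    % displayed equations:
    \begin{linenomath}
      \begin{equation}
        \Delta M = M(J/\psi\,\pi^{+}\pi^{-}) - M(J/\psi)
      \end{equation}
    \end{linenomath}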

General: a lot of the fits look strange: in the 2- (or 3-) Gaussian fits you get one of the components consistent with zero, or unreasonably wide. Please check all fits and consider using fewer Gaussians. For the fits in data, consider using shapes from simulation with a single floating parameter responsible for the resolution scaling. Some specific comments on the fits are given below as well.
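
A hedged RooFit sketch of the suggested approach: the double-Gaussian shape parameters are fixed to values from simulation (the numbers below are placeholders) and only a common resolution scale factor floats in the data fit:

    import ROOT

    mass = ROOT.RooRealVar("mass", "m [GeV]", 5.15, 5.6)
    mean = ROOT.RooRealVar("mean", "mean", 5.28, 5.25, 5.31)
    # widths and fraction fixed to the simulation fit results
    sigma1 = ROOT.RooRealVar("sigma1", "sigma1", 0.008)
    sigma2 = ROOT.RooRealVar("sigma2", "sigma2", 0.020)
    frac = ROOT.RooRealVar("frac", "frac", 0.7)
    # single floating parameter responsible for the resolution scaling
    scale = ROOT.RooRealVar("scale", "scale", 1.0, 0.5, 2.0)

    s1 = ROOT.RooFormulaVar("s1", "@0*@1", ROOT.RooArgList(scale, sigma1))
    s2 = ROOT.RooFormulaVar("s2", "@0*@1", ROOT.RooArgList(scale, sigma2))
    g1 = ROOT.RooGaussian("g1", "g1", mass, mean, s1)
    g2 = ROOT.RooGaussian("g2", "g2", mass, mean, s2)
    signal = ROOT.RooAddPdf("signal", "signal",
                            ROOT.RooArgList(g1, g2), ROOT.RooArgList(frac))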

The 2nd sentence of the abstract is not clear without context; better to rephrase it as: the X and psi states are reconstructed using the decays into ...

L30-L31: suggest giving the deltaM0 and deltaM+ definitions explicitly.

L70-78: why use the old reconstruction of the data when the 17Sep ReReco is available?

L124: In my experience it often makes sense to use different requirements for the softer and the harder of the two pions. Consider this if any selection optimization is foreseen in the future.

L127: deltaR is not measured in cm

L123: I see there is no requirement on the K0s decay vertex to be displaced from the B0 decay vertex. Why? I strongly suggest doing so, and also requiring the K0s momentum to point back to the B vertex (e.g. 3D cosine > 0.9). If you do not apply this requirement, you have a contribution from B -> Jpsi + 4 pi decays where all 4 pions come from the same displaced vertex (it just happens that two of them have a mass close to 0.5 GeV), e.g. B0 -> psi(2S) rho or B0 -> psi(2S) pi+ pi-.
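
An illustrative computation of the suggested 3D pointing cosine (all variable names and numbers are hypothetical):

    import math

    def cos_alpha_3d(b_vtx, k0s_vtx, k0s_p):
        # angle between the K0s flight direction (B vertex -> K0s vertex)
        # and the K0s momentum
        flight = [k0s_vtx[i] - b_vtx[i] for i in range(3)]
        dot = sum(f * p for f, p in zip(flight, k0s_p))
        norm = (math.sqrt(sum(f * f for f in flight))
                * math.sqrt(sum(p * p for p in k0s_p)))
        return dot / norm if norm > 0 else -1.0

    # keep the candidate only if the cosine exceeds the suggested cut
    passes = cos_alpha_3d((0.1, 0.2, 0.3), (0.6, 0.7, 1.1), (2.0, 2.1, 3.3)) > 0.9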

L128: for the B+ case, can it happen that you pick the same candidate twice, e.g. that for the 3 selected tracks the two alternative mass hypotheses are K+ pi+ pi- and pi+ K+ pi-? How do you choose between them?

L131: 1% seems a little too soft a requirement for a many-object vertex; consider tightening it.

Figure 1 left: why is there no fit-quality indication, as there is on the other 3 plots on this page?

L133: 0.9 seems too soft a requirement; usually 0.99 or 0.999 is used.

L129, L135: please clarify how the Jpsi pi pi mass is calculated: do you use the 4-momenta refitted with the J/psi mass constraint?

Table 3: such a large difference in B mass resolutions between the X and psi channels is a bit surprising. Is it understood? Can you give uncertainties in this table?

Section 5.1: why is only 2018 data used?

Figure 5 left: the triple Gaussian seems like overkill; the widest component seems out of place. If I understand the numbers correctly, its fraction is less than 8%; try using a double Gaussian here or fix some parameters to the simulation.

Figure 6: the triple Gaussian looks strange; S2_sigma is exactly 20 MeV, check if the parameter is at the boundary. The chi2/ndf = 1.5 suggests that the fit quality is poor; what is the corresponding probability?
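
The probability can be checked directly with ROOT (the ndf value here is a placeholder; read it off the plot):

    import ROOT

    ndf = 50                            # hypothetical number of degrees of freedom
    chi2 = 1.5 * ndf
    print(ROOT.TMath.Prob(chi2, ndf))   # upper-tail chi2 probability, ~0.013 for ndf = 50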

Figure 7 left: ensure that the fit table does not overlap with the points

Figure 7 left: the components of the double Gaussian look awkward; consider fixing some of the signal shape parameters from the simulation. Currently the fraction of the second Gaussian is 0.08 +- 0.19, meaning it is not needed. Did this fit even converge? (The uncertainties seem to be too small.)

Figure 8 left: the chi2/ndf = 1.9 suggests that the fit quality is poor; what is the corresponding probability?

Figure 9 left: the components of the double Gaussian look awkward; consider fixing some of the signal shape parameters from the simulation.

L194: please move Table 5 closer, e.g. put it on the next page.

L202: DeltaR < 0.01 seems a bit tighter than usual; what is the gen-matching efficiency? Also, such a requirement can lead to an artificial improvement in the mass resolution in simulation. Please check whether that is the case by comparing the fit results with and without the matching requirement.
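
A sketch of the standard gen-matching check (the eta/phi values are placeholders):

    import math

    def delta_r(eta1, phi1, eta2, phi2):
        # wrap the phi difference into [-pi, pi]
        dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
        return math.hypot(eta1 - eta2, dphi)

    reco_eta, reco_phi = 1.200, 0.500   # placeholder reconstructed pion
    gen_eta, gen_phi = 1.205, 0.505     # placeholder generator-level pion
    matched = delta_r(reco_eta, reco_phi, gen_eta, gen_phi) < 0.01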

Section 5.2: again, why is only 2018 used?

Figure 10: this triple-Gaussian fit looks awkward as well; the 3rd Gaussian fraction is (3.4 +- 5.3)%, meaning it is not needed.

r="ltr">L209 - L212: please move table 6 closer to this text, it is currently several pages away

L218: what is the third uncertainty? If it is a systematic one, please move this discussion after Section 6.

L228-229: how is the deviation in the signal yield connected with the uncertainty in the mass measurement?

L235, L244: fix broken link

Table 6: it seems that the statistical uncertainties of e_gen are 10 times the statistical uncertainties of e_reco. You need to generate more gen-level MC to estimate e_gen with a precision at the level of the e_reco precision, in order to significantly reduce the systematic uncertainty connected with the size of the MC samples. Currently that uncertainty is about 5-6% for each individual channel and results in a 12% systematic uncertainty in the measured value of R (the largest systematic uncertainty so far in Table 8). By simply generating more gen-level MC you can bring this uncertainty down to ~1-1.5% (!!!)
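
A back-of-the-envelope illustration of the scaling (the yields are invented; only the 1/sqrt(N) behaviour matters): the binomial uncertainty on an efficiency k/N shrinks by ~10x when N grows by ~100x.

    import math

    def eff_with_error(k, N):
        eps = k / N
        return eps, math.sqrt(eps * (1.0 - eps) / N)   # binomial error

    print(eff_with_error(5000, 50000))       # ~(0.100, 0.0013)
    print(eff_with_error(500000, 5000000))   # ~(0.100, 0.00013)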

The formulas between L258 and L259 make no sense to me: why are the central values different? What is the third uncertainty in the 1st ratio? Or does the letter R mean a different quantity in these two lines? Then you need two different symbols.
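
For illustration, one way to disambiguate the notation with two distinct symbols; the structure of the ratio below is a hedged guess at what the AN intends, not a quote from it:

    \begin{linenomath}
      \begin{equation}
        R_{0(+)} =
        \frac{\mathcal{B}(B^{0(+)} \to X(3872)\, K^{0(+)})\,
              \mathcal{B}(X(3872) \to J/\psi\,\pi^{+}\pi^{-})}
             {\mathcal{B}(B^{0(+)} \to \psi(2S)\, K^{0(+)})\,
              \mathcal{B}(\psi(2S) \to J/\psi\,\pi^{+}\pi^{-})}
        = \frac{N_{X}\,\varepsilon_{\psi(2S)}}{N_{\psi(2S)}\,\varepsilon_{X}}
      \end{equation}
    \end{linenomath}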

Tables 7 and 8: usually changes to the signal model and to the background model are considered independent sources of systematic uncertainty; they are included as separate entries in the systematic uncertainty table and then added in quadrature.
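
A one-line sketch of the suggested bookkeeping (the values are illustrative):

    import math

    syst = {"signal model": 0.021, "background model": 0.014}  # separate entries
    total = math.sqrt(sum(v * v for v in syst.values()))       # quadrature sum
    print(round(total, 3))                                     # ~0.025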

Section 7: repeat the description of the data set; use different symbols for the two R measurements.

Figures 14 and 15 right: the third, very wide Gaussian component does not seem to be needed.

Figures 17 and 19 left: the uncertainties seem to be very large; e.g. how can S2_frac = 0.50 +- 0.45 be considered a reliable fit result? Try using 2 Gaussians for all 3 fits here.

Appendix D: this is a very important cross-check that shows that something is not correct in the analysis! Do you understand why you are getting this result? Why is only 2018 used here?

Figure 18: the double-Gaussian fits are overkill; you get a fraction consistent with zero. Either fix the shape from MC or use a single Gaussian.

Guido's questions received on 21 Jan 2020:

Type A comments:

  • Abstract: missing article in "CMS experiment" (-> the CMS experiment) and "Mass difference" (-> The mass difference).
  • line 45: bigger then -> bigger than
  • line 48: ... from B0 decay was rather poor events -> from B0 decays was rather low.
  • line 59: by CMS detector -> by the CMS detector
  • Figure 2, caption: "mas s" -> mass
  • line 169: by Belle collaboration -> by the Belle collaboration
  • line 176: in this way of analysis -> with this method
  • line 247: systematic error comes -> systematic errors come
  • line 248: It can be evaluated -> These can be evaluated
  • line 269: The systematic error caused -> In order to assess the systematic error caused
  • line 276: add a comma after "choice"

Type B comments:

  • Figure 1: I see that the B0 peak has a better resolution than the B+ one. Can you comment on this, outlining the possible causes?
  • I am confused about appendix B. What does figure 15 show? Is that MC? How does it compare to Figure 4?
  • Linked to the previous question: can you add a comment at the end of Section 4, explaining how different the freely fitted model is from the one that you get from MC?
  • Figure 9: the caption says that the signal is modelled with a single Gaussian, but in the left plot there are 2 Gaussians. Also, why are they not defined over the whole plot range?
  • In line 278 you mention that the efficiencies are supposed to be independent. Do you refer to the epsilon_reco and the epsilon_gen? Are they not computed from the same simulated sample?

Jhovanny's questions received on 21 Jan 2020:

L. 45. then → than.

You are using "PromptReco" samples for 2018. Are there special reasons not to use the "re-reco"?

L. 99. It says "We match muons with the HLT trigger". Is there matching of the pion tracks? If not, are you planning to do it?

The signal yields in figures 1 and 2 are not in agreement with the signal yields in figures 3 and 4. It seems that in figures 1 and 2 we are picking up psi(2S) or X(3872) candidates that are not real. Nevertheless, when you do the same in MC, figures 10 (11) and 12 (13), the yields are in agreement. Could you elaborate on this?

If we compare figures 6 left and 8 left, the yields are 165569 +/- 2856 and 163448 +/- 454. Probably in figure 6 left the fit converged but the error matrix was not positive definite (or something like that). Could you check this, please?
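
One way to perform the suggested check with RooFit: inspect the fit status and the covariance-matrix quality of the saved result. The toy model below is only a self-contained stand-in for the actual pdf and dataset.

    import ROOT

    mass = ROOT.RooRealVar("mass", "mass", 3.60, 3.75)
    mean = ROOT.RooRealVar("mean", "mean", 3.686, 3.65, 3.72)
    sigma = ROOT.RooRealVar("sigma", "sigma", 0.004, 0.001, 0.02)
    model = ROOT.RooGaussian("model", "model", mass, mean, sigma)
    data = model.generate(ROOT.RooArgSet(mass), 10000)

    result = model.fitTo(data, ROOT.RooFit.Save(True), ROOT.RooFit.PrintLevel(-1))
    print("status:", result.status())     # 0 = converged
    print("covQual:", result.covQual())   # 3 = full, accurate (positive-definite) matrix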

L. 237. Here, for the yields, you are using numbers from figures 8 and 9. However, from the equation in line 52, it was expected to use numbers from figures 6 and 7. Of course, the results must be very similar. Could you comment on this, please?

L. 256. There is a typo in the table number (probably something wrong in the LaTeX source).

Greg's questions received on 21 Jan 2020:

General: since Ruslan is listed as a PDF author, shouldn't he also be listed as an author?

Simulation: it's not clear whether you used the same MC samples for all three years of data taking or one set per year. It appears that the former is the case, in which case you really should generate some number of events in the 2017 and 2018 setups to test that the resolutions are the same and also that the efficiencies are the same. The latter is important for the branching fraction ratio measurement.

LL98-99 and further in the text: I'm not convinced that using HLT matching only for the branching fraction ratio measurement is a good idea. While you are losing a quarter of the signal events, you may also be losing a lot of background by requiring the match. Generally speaking, given the large uncertainties from the background subtraction, I don't think the reduction in the number of events would affect the uncertainties in Delta M significantly. I'd therefore prefer that the matching be used uniformly throughout the analysis.

L134: is it taken from the $V^0$ collection?

LL137-138: I'd imagine that the cos(alpha) > 0 cut is too loose; have you tried to optimize it using S/sqrt(B)?
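
A sketch of such an optimization scan; the per-cut yields are invented placeholders (e.g. S from signal MC, B from sidebands):

    import math

    def significance(n_sig, n_bkg):
        return n_sig / math.sqrt(n_bkg) if n_bkg > 0 else 0.0

    scan = {0.0: (1000, 4000), 0.9: (950, 1500), 0.99: (850, 400)}  # cut: (S, B)
    best_cut = max(scan, key=lambda c: significance(*scan[c]))
    print("best cos(alpha) cut:", best_cut)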

L145: chi^2 is always >= 0 by construction, so what's the point of this cut?

Section 3.2: I'd like to have better uniformity in the fit functions for the various distributions. Since you fit the B mass in the psi(2S) channel with a double Gaussian, it would make sense to also use a double Gaussian for the X signal. Alternatively, you should try other functions, e.g. a double-sided Crystal Ball, to assess the systematics due to the signal shape (for both the mass and the branching fraction measurements).
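
A hedged sketch of the double-sided Crystal Ball alternative; RooCrystalBall ships with recent ROOT releases (>= 6.24), and every parameter value below is illustrative:

    import ROOT

    mass = ROOT.RooRealVar("mass", "m [GeV]", 5.15, 5.6)
    mean = ROOT.RooRealVar("mean", "mean", 5.28, 5.25, 5.31)
    sigma = ROOT.RooRealVar("sigma", "sigma", 0.010, 0.002, 0.05)
    alphaL = ROOT.RooRealVar("alphaL", "alphaL", 1.5, 0.5, 5.0)
    nL = ROOT.RooRealVar("nL", "nL", 3.0, 0.5, 10.0)
    alphaR = ROOT.RooRealVar("alphaR", "alphaR", 1.5, 0.5, 5.0)
    nR = ROOT.RooRealVar("nR", "nR", 3.0, 0.5, 10.0)
    signal = ROOT.RooCrystalBall("signal", "double-sided CB", mass, mean,
                                 sigma, alphaL, nL, alphaR, nR)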

Figure 3: here, all of a sudden, you need a triple Gaussian to describe the B+ signal. This looks arbitrary to me. You should either use the same function for the different channels, since otherwise you get additional systematics in the Delta M value, or use MC-based templates for all signals.

Figure 4: there is significant background left under the X peak after the sPlot method. Have you considered using the B mass sideband background subtraction method instead? I think you should do it as a cross-check and to assess the systematic uncertainty.

LL170-172: given the small systematic shift in the psi(2S) mass, shouldn't you conservatively consider this shift an additional systematic uncertainty for the mass difference?

Eqs. (1-2): I'm not sure what the rationale is to measure Delta M+/-. There is no doubt whatsoever that the particles produced in B+ and B0 decays are the same, so one obviously expects this Delta M to be equal to zero. This is not a publishable measurement, but a sanity check. Perhaps it was more interesting at the time of the early Belle paper, but not anymore. I think what you should do instead is measure the X-psi(2S) mass difference with the best possible precision by combining both channels. That would be an interesting measurement from the PDG point of view.
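
The combination Greg suggests would presumably be the standard inverse-variance weighted average of the two per-channel mass differences (a sketch, assuming uncorrelated uncertainties sigma_0 and sigma_+):

    \begin{linenomath}
      \begin{equation}
        \Delta M_{\mathrm{comb}} =
          \frac{\Delta M_{0}/\sigma_{0}^{2} + \Delta M_{+}/\sigma_{+}^{2}}
               {1/\sigma_{0}^{2} + 1/\sigma_{+}^{2}},
        \qquad
        \sigma_{\mathrm{comb}} =
          \left(\frac{1}{\sigma_{0}^{2}} + \frac{1}{\sigma_{+}^{2}}\right)^{-1/2}
      \end{equation}
    \end{linenomath}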

Section 5.2: as I mentioned before, it appears that you only measured the efficiencies on 2018 MC; you need to check how different they are on the 2017 and 2016 ones. You may also want to check whether the ratios of 2018 to 2017 efficiencies in data, as measured from the peak yields in both channels, scale as the MC efficiencies do. In 2017 the pixel detector was dying, so I'd expect the average efficiency to be lower. This needs to be covered by a systematic uncertainty.

Section 5.2: the sPlot background subtraction on MC simulation is not a very meaningful procedure, as the main combinatorial background present in data is not in the simulation. Consequently, I think you should fit the peaks in simulation without any sPlot subtraction, or at least try both approaches and use the difference as a systematic uncertainty.

Table 6: the psi(2S) efficiencies are consistently higher than the X ones; it would be good to understand explicitly why.

-- IvanLilienberg - 2020-01-28