Anonymous Questions and Answers


This is the TWiki page for Nikhef Bfys members where you can ask your (anonymous) LHCb or b-physics related questions. To submit an anonymous question, please use this webform. If you don't mind giving up anonymity, you can also edit this TWiki to add your question.

Feel free to provide answers to the questions. If you are not completely sure, you can write something to the best of your knowledge.


Questions about facts

What does "B to Open Charm" mean? What is opened?

Open charm is also referred to as naked charm. A D meson is called open charm as it carries a net charm quantum number. A J/ψ particle, which contains a c-cbar pair with zero net charm, is referred to as hidden charm. The same convention holds for beauty. [MM]

Why does combinatorial background in an invariant mass distribution behave exponentially?

At the LHC, light particles are produced along a rapidity plateau, and their PT spectrum falls exponentially. If you now make random combinations of two (or more) tracks, most combinations will have small invariant masses, because the PT spectrum of the underlying-event particles falls so steeply; the resulting combinatorial mass spectrum is therefore approximately exponential. [MM]
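A minimal toy Monte Carlo can make this concrete. All numbers below are invented for illustration (an exponential PT spectrum with a ~0.8 GeV mean, a flat rapidity plateau between -2 and 2, and pion-mass tracks); the point is only that random two-track combinations pile up at low invariant mass:

```python
import math
import random

random.seed(1)
M_PI = 0.1396  # GeV, charged-pion mass

def random_track():
    # Toy underlying-event track: exponential pT spectrum, flat rapidity plateau.
    pt  = random.expovariate(1.0 / 0.8)        # mean pT ~ 0.8 GeV (assumed)
    eta = random.uniform(-2.0, 2.0)            # toy rapidity plateau (assumed)
    phi = random.uniform(0.0, 2.0 * math.pi)
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e  = math.sqrt(M_PI**2 + px**2 + py**2 + pz**2)
    return e, px, py, pz

def pair_mass():
    # Invariant mass of two randomly combined tracks.
    (e1, x1, y1, z1), (e2, x2, y2, z2) = random_track(), random_track()
    m2 = (e1 + e2)**2 - (x1 + x2)**2 - (y1 + y2)**2 - (z1 + z2)**2
    return math.sqrt(max(m2, 0.0))

masses = [pair_mass() for _ in range(20000)]
# The random-combination spectrum falls steeply: far more pairs at low mass.
low  = sum(1.0 < m < 2.0 for m in masses)
high = sum(4.0 < m < 5.0 for m in masses)
print(low, high)
```

Plotting `masses` in a histogram shows the familiar steeply falling, roughly exponential combinatorial shape.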

... and when does it not?

The above only applies far from threshold. Close to threshold there is a threshold turn-on curve (often modelled by an ARGUS function). A typical case is the D0π mass spectrum near the D* mass. [PK]

Please elaborate on "Turbo". It's easy to find what it technically is, but which analyses use it. How much data is processed by Turbo and how much in the usual way? Can they be combined?

Turbo is a reduced data format. It allows you to store objects on demand: only the data you choose is made persistent. The alternative is the Full mode, where you store all data. In Run 2 about 25% of the event rate went through Turbo, but it required only 10% of the bandwidth because the events were on average a factor 2.5 smaller. In Run 3 it is foreseen to write ~70% of the bandwidth using Turbo. [MM]

Why is it okay to skip offline reconstruction and rely on trigger reconstruction?

The question is what is meant by offline reconstruction. Certain things cannot be done offline for certain types of triggers. For high-rate physics programmes, e.g. charm physics, it is no longer possible to write the full physics rate to tape and completely re-reconstruct the events offline. For other triggers, e.g. various B-physics triggers, it may still happen. Depending on which information is stored on disk, several parts of the offline reconstruction can still be done. For example, if you do not store all raw hits but do store the hits on the tracks of your selected decay, you can still refit the event offline to obtain a better reconstructed decay time or mass. [MM]

The removal of the L0 trigger is quite intriguing. It seems that each event (~30 MHz) gets fully reconstructed. How is that possible?

It is indeed quite a challenge. It is only possible because we plan to have an HLT farm with XXX nodes, such that many events are processed in parallel. Note that HLT1 reduces the event rate from 30 MHz to XX MHz, while HLT2 reduces it further to YY Hz. HLT1 uses fast versions of the reconstruction algorithms (e.g. a fast Kalman filter on selected tracks), while HLT2 can do a complete reconstruction. There is also a proposal to fully implement HLT1 on GPGPUs. This project is called Allen, and Nikhef is contributing to it. [MM]

Is British English the standard in LHCb? Why? I think it's more beautiful anyway

The LHCb template says "Standard English should be used (British rather than American) for LHCb notes and preprints. Examples: colour, flavour, centre, metre, modelled and aluminium. Words ending on -ise or -isation (polarise, hadronisation) can be written with -ize or -ization ending but should be consistent. The punctuation normally follows the closing quote mark of quoted text, rather than being included before the closing quote. Footnotes come after punctuation. Papers to be submitted to an American journal can be written in American English instead. Under no circumstance should the two be mixed." [PK]

What do you do with this one: a subjob out of 500 subjobs in Ganga that keeps on failing?

PK: I don't understand the question.

How do the 3 dipole magnets compensate the impact of the spectrometer magnet on the trajectory of the LHC beams?

What is ultimately limiting the systematic uncertainty in the muon efficiency?

What is ultimately limiting the systematic uncertainty in the electron efficiency?

Probably the same as above. If the question was "What is ultimately limiting the systematic uncertainty in the ratio of electron to muon efficiencies?", which is the relevant question for lepton-universality measurements, then the limit comes only from external branching fractions. With infinite statistics one can always calibrate an electron mode against a muon mode and test that the ratio of branching fractions matches the expectation. In the case of $J/\psi\to e^+e^-$ versus $J/\psi\to\mu^+\mu^-$ this ratio is precisely known, assuming lepton universality in electroweak interactions. But this is an assumption. [PK]

What is the limit in mass resolution in LHCb?

The mass resolution depends on the precision of the momentum measurement as well as on the precision of the track directions at the vertex. Depending on the exact topology of the event, one is more important than the other: the smaller the opening angle between the tracks, the more important are the track slopes; the larger the opening angle, the more important is the momentum measurement. As a guesstimate, the mass resolution of Z->mumu is dominated by the momentum measurement, while the mass resolution of D->Kππ is dominated by the track slopes in the Velo. [MM]
IPResolution.pdf
The impact parameter resolution in LHCb is determined by the single-hit resolution of the detector, the material budget of the detector planes and the material budget of the RF foil. The curve on the left shows the x-component of the IP resolution as a function of 1/PT; the IPy component looks very similar. The intercept with the vertical axis (very high PT) is the limit set by the hit resolution, while the slope provides a measure of the multiple-scattering term. A priori one would expect a quadratic-sum curve if multiple scattering and hit resolution were independent contributions, but the geometry of the Velo actually leads to a straight line. A detailed explanation of the curve can be found in appendix A of the thesis of Aras Papadelis. [MM]
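The straight-line behaviour described above can be written as a toy parametrisation, sigma_IP ~ a + b/PT, where a is the hit-resolution limit (the vertical-axis intercept) and b the multiple-scattering slope. The coefficients below are invented for illustration, not measured LHCb values:

```python
# Toy IP-resolution parametrisation: sigma_IP ~ a + b / pT.
# a = hit-resolution limit, b = multiple-scattering slope.
# Coefficients are illustrative placeholders, not measured LHCb numbers.
A_UM = 12.0   # micron (assumed intercept)
B_UM = 24.0   # micron * GeV (assumed slope)

def sigma_ip_um(pt_gev):
    """IP resolution in micron for a track of transverse momentum pt_gev."""
    return A_UM + B_UM / pt_gev

for pt in (0.5, 1.0, 5.0, 50.0):
    print(pt, round(sigma_ip_um(pt), 1))
```

At very high PT the curve flattens towards the intercept `A_UM`, while at low PT the 1/PT multiple-scattering term dominates, just as in the plot.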

What is the limiting factor in the momentum resolution of Long tracks in the Run-3 detector for B and D reconstruction? Is it the Velo, UT or SciFi hit resolution, or is it multiple scattering?

MomentumresolutionJpsiMuons.pdf
The momentum resolution also depends on both multiple scattering and hit resolution. For an ideal massless detector around a spectrometer magnet, the momentum measurement has a resolution $\delta(p)/p = C_{res}\, p$, where $C_{res}$ is a constant that depends on the value of the B-field and the measurement resolution. The effect of material corresponds effectively to an additional uncertainty on the measurement of the curvature and translates into a constant behaviour $\delta(p)/p = C_{MS}$. For tracks with high momentum the scattering in RICH1 and the UT is relatively small, so the slope before the magnet is determined by the Velo part of the track, and the measurement is effectively the kink between the slope before the magnet and the slope (SciFi) behind the magnet. For tracks with low momentum the Velo "decouples" from the measurement; in that case the momentum is effectively obtained from the slope in the SciFi and the position of the track in the UT. As a rough guideline for LHCb, up to 70 GeV or so multiple scattering dominates, while for tracks with momentum above 70 GeV the hit resolution dominates. This means that for low-momentum tracks, i.e. in the outer SciFi region, the limiting factor is the material, while for high-momentum tracks, i.e. close to the beampipe (the former IT region), the hit resolution will dominate. For the Velo impact parameter measurement the resolution depends on the PT of the track. A similar rough guideline (see the IP resolution plot) is that for tracks with PT > 1 GeV the hit resolution is the limiting factor, while for tracks with PT < 1 GeV multiple scattering gives the largest contribution.
A general overview of the motivations for the design of the original LHCb spectrometer can be found in this pptx talk of 2010. [MM]
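The two contributions described above can be combined in quadrature in a toy model, $\delta(p)/p = \sqrt{(C_{res}\,p)^2 + C_{MS}^2}$. The coefficients below are invented so that the two terms cross at the ~70 GeV guideline quoted above; they are not measured LHCb values:

```python
import math

# Toy momentum-resolution model: the hit-resolution term grows linearly
# with p, the multiple-scattering term is constant; they add in quadrature.
# Coefficients are illustrative placeholders, not LHCb measurements.
C_MS  = 0.004        # constant multiple-scattering term, dp/p ~ 0.4% (assumed)
C_RES = C_MS / 70.0  # per-GeV hit-resolution term, chosen to cross at 70 GeV

def dp_over_p(p_gev):
    """Relative momentum resolution for a track of momentum p_gev."""
    return math.hypot(C_RES * p_gev, C_MS)

# Momentum at which the hit-resolution term starts to dominate:
crossover = C_MS / C_RES
print(round(crossover), round(dp_over_p(10) * 100, 2), round(dp_over_p(200) * 100, 2))
```

Below the crossover the curve is nearly flat (scattering-dominated); far above it the resolution degrades linearly with momentum (hit-resolution-dominated).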

What is a rare decay?

The answer to this question may change with time. The name "rare" implies that these decays are not observed often. The B production rate at the LHC is much higher than at the B factories. I think it is fair to define rare decays as those final states that are produced via penguin diagrams, with branching ratios typically of the order of 10^-6. Very rare decays are those that are suppressed by additional orders of magnitude, in particular through helicity suppression in Bs->mumu decays. In LHCb the production of rare decays of the type B->K*mumu is actually not so rare anymore. [MM]

Why does interesting physics happen at high PT?

In this case "interesting physics" should be interpreted as the production of massive particles, say M ≳ 100 GeV. The particle mass is then not small compared to the effective parton-parton collision energy, so the final-state particle does not get a very high boost. As a result the produced particle (say a Higgs) is produced almost spherically symmetrically: as often perpendicular to the beam as along it. The decays of such heavy particles into lighter final-state particles automatically give you high PT.

We often hear that the size of the unitarity triangle is linked to the amount of CP violation. However all 6 unitarity triangles have the same area, 1/2*J, the Jarlskog invariant. But there is clearly more CP violation in B decays than in charm decays. So how does all this come together?

This topic often causes discussion. The amount of CP violation in the Standard Model is expressed by the Jarlskog invariant. Its exact definition is the product of a quark mass term and the surface of the triangle (see e.g. slide 31 of the Nikhef Topical Lecture on CP violation). However, the amount of CP violation observed in a particular measurement is a very different quantity! It turns out that only Standard Model processes involving all three quark generations result in potentially observable CP violation, and only if there are at least two interfering amplitudes with different weak and strong phases. In practice the strongest CP asymmetries are seen in B-meson decays with two equally contributing amplitudes, either mixed and unmixed for neutral B decays, or tree and penguin amplitudes. In charm physics there is only a small contribution from diagrams involving the third quark generation, hence small observable CP violation. The same is true for kaons. [MM]
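As a numerical illustration of how small the invariant itself is, the Jarlskog invariant in the Wolfenstein parametrisation is approximately J ≈ A² λ⁶ η̄. The parameter values below are rough, PDG-like numbers chosen for illustration, not an official fit:

```python
# Jarlskog invariant in the Wolfenstein parametrisation: J ≈ A^2 * λ^6 * η̄.
# Parameter values are rough illustrative numbers, not an official CKM fit.
LAMBDA  = 0.225   # Cabibbo parameter (assumed)
A       = 0.82    # Wolfenstein A (assumed)
ETA_BAR = 0.35    # Wolfenstein η̄ (assumed)

J = A**2 * LAMBDA**6 * ETA_BAR
print(f"J ≈ {J:.1e}")   # of order 3e-5: SM CP violation is intrinsically small
```

The λ⁶ suppression is what makes J so tiny, even though individual CP asymmetries in B decays, which are ratios of interfering amplitudes rather than J itself, can be of order one.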

Why are b quarks boosted forward and backward?

In a high-energy collision particles are created out of the available energy; at high energy many particles can be created. In a beautiful paper Feynman showed [] that the distribution of light final-state particles is constant in dPz/E, or dx/x, for not too large x (where x is Feynman-x). This is called Feynman scaling. Here "light" means particles with a mass negligible compared to the centre-of-mass energy: m/E << 1. In practice this behaviour results in a particle spectrum that shows a so-called plateau in rapidity. Since the (pseudo-)rapidity is logarithmically related to the tangent of the polar angle, a plateau in rapidity implies a densely populated, forward-boosted production of particles. The final step is that a B meson is a relatively light particle for a collision at the TeV scale. For top quarks and W, Z and Higgs bosons, however, the lightness assumption does not hold and these particles are less boosted; see the high-PT question. [MM] One can also use a more intuitive picture: consider the centre of mass of two colliding partons, each with a momentum drawn from a parton distribution function. To produce a light particle, say a pion, one parton can have low momentum and the other high momentum, such that the centre-of-mass system of the final state has a high longitudinal momentum and the particles are produced at small angles. [MM]
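The step from a rapidity plateau to forward production can be made concrete with the standard relation between pseudorapidity and polar angle, θ = 2 arctan(e^(−η)). Equal-width slices in η map onto rapidly shrinking polar angles (a sketch, not LHCb code):

```python
import math

def theta_mrad(eta):
    # Pseudorapidity <-> polar angle: theta = 2 * atan(exp(-eta)).
    return 2.0 * math.atan(math.exp(-eta)) * 1000.0

# Equal-width pseudorapidity slices correspond to ever smaller polar angles,
# so a flat rapidity plateau piles particles up in the forward direction.
for eta in (0, 1, 2, 3, 4, 5):
    print(eta, round(theta_mrad(eta)), "mrad")
```

The η range 2 to 5, which comes out at polar angles of roughly 10 to 300 mrad, is exactly the forward cone that the LHCb spectrometer instruments.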


"Can we..." questions

Can we build a second LHCb detector in the other direction?

There are several reasons why this is not considered. A civil-engineering reason is that the LHCb cavern is not large enough to host both a forward and a backward LHCb spectrometer. A financial reason is that it would cost roughly a factor two in price for only a factor two more events; it is more economical to invest in a factor 10 improvement for almost a factor two in price, i.e. the current upgrade. A third, physics, reason is that at a proton-antiproton collider one could argue that two sides would give a cancellation of systematics in CP-violation measurements; for the LHC, proton-proton, this is not the case. It is extremely important, however, that we have the reversal of the magnetic dipole field. [MM]
One could also wonder what the trigger of such an experiment would look like. Would you read out both sides for every triggered event, or trigger both sides independently with just the VELO in common? In the former case you waste resources storing useless information (though mitigated by using Turbo); in the latter you effectively have two independent experiments. [PK]

Can we run Ganga on Stoomboot?

You use Ganga to submit jobs on the GRID, but Stoomboot was introduced as a local Nikhef facility for jobs that do not run on the GRID. [PK]

Can we use the old Velo in the exhibition during Run 3 as a separate (silicon-only) detector? Maybe with a magnet around it?

The spare vertex detector is a fully functional detector that could have been installed in case of problems or radiation damage to the current detector. For Run 3 the only possible location would seem to be upstream of the LHC collision point. However, it would need a complete new vacuum tank, infrastructure, etc. Even if that would work, you cannot read the spare Velo out at 40 MHz, since the electronics operate in Run-1/Run-2 mode. And finally, even if all of that were magically solved, what would be the physics case? [MM]

Can we use histograms of the lifetime of prompt J/\psi or prompt Ds so that the data manager shifter can monitor a bias in the measured lifetime during data taking, or use them in the alignment?

Can we do Flavour Tagging with CEP?

Let's imagine we produce a b-bbar pair that hadronizes into a B and an anti-B via CEP. The opposite-side tag would probably be cleaner, since there are no underlying-event tracks present. Same-side tagging, on the other hand, would only work if there is another particle produced in the final state. This triggers the thought whether the B and anti-B mesons produced in CEP events would be in a coherent state or not; I am not sure. If they are, the opposite-side tagging would be better than in regular b-bbar events. [MM]

Can we fill the magnet with a target and measure capture cross sections with low-momentum particles that pass through RICH1 (e.g. protons, kaons, pions, muons)?


Open discussion items

What is the most interesting thing expected to come out of the upgrade LHCb detector?

This question is of course a bit of a matter of taste. The LHCb upgrade detector ("LHCb-2") is built to become a general-purpose detector in the forward direction. This means that in addition to our mainstream b- and c-physics programme, we should be able to analyse forward-produced Higgs events, do spectroscopy studies including tetra-, penta- and hexa-quark states, search for (long-lived) Majorana particles and exotics, and do heavy-ion and fixed-target physics (SMOG). Personally I find the b-physics CP-violation precision tests, the lepton-non-universality measurements and the searches for LFV the most promising, as they allow us to probe physics at very high energy scales. [MM]

How much can we improve flavour tagging during run-3?

There are two opposing effects. On the one hand the instantaneous luminosity in Run 3 will be higher, i.e. there will be more pile-up: more vertices and more tracks. On the other hand the new Velo is a pixel detector, such that the tracking efficiency is higher and the ghost rate lower. The precision of the Velo is similar, so it is hard to say what the net result will be. [MM]

Can we (LHCb) say something about the Higgs?

Interesting question. LHCb has the physics potential to search for Higgs events. Due to the forward geometry we are only sensitive to Higgs particles with high rapidity; in this case associated production is the most likely production mechanism. In principle we could be sensitive to Higgs->b bbar or Higgs->c cbar. Alternatively one could ask whether we are sensitive to virtual effects of the Higgs, e.g. in the decay Bs->mu mu, where there is a diagram with Higgs exchange. However, the contribution of the Higgs to this process is rather small. In general it seems that the sensitivity to a minimal-flavour-violating scalar is not very strong. To be continued/corrected! See [[https://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-CONF-2016-006.html][LHCb-CONF-2016-006]]. [MM]

An important uncertainty in lifetime and mixing frequency measurements is the length scale of the VELO. Does anybody have an original idea how we could measure it?

To be addressed before the velo survey finishes. [PK]

How does the uncertainty on the Velo sensor z-position affect the measurements of Delta m_s and lifetimes? - Niels

A hot topic currently being studied by Michele, Hilbrand, Sevda, Kazu, Wouter and Niels. [MM]

How to make a parametrization of the magnetic field that is both more accurate and faster than the current grid interpolation (which doesn't quite obey the Maxwell equations)?

To be filled in. Activities are underway from Wouter and Miriam, but also from Gerco with students. [MM]

Would LHCb be able to reconstruct long-lived particles decaying after the magnet (so that we have only the T stations and muon detectors, and maybe the calorimeter)? The idea is to understand whether we could use our forwardness to (partially) reconstruct very long-lived particles.

We do reconstruct T tracks offline, but as far as I know there is currently no T-seeding related effort in the HLT. Something to be followed up. [MM]

Can Atlas/CMS do proton/K/pi separation with their planned TOF detectors? If so, is LHCb still a competitive B-physics experiment after LS3?

For the phase-1 upgrades, I am only aware of the ATLAS Forward Proton (AFP) ToF detector for diffractive physics. This is not in competition with B physics, but with Pomeron physics. For the phase-2 upgrades the general-purpose experiments are indeed planning precision timing detectors. I think their main goal is to use a timing resolution of the order of 30 ps to separate primary vertices. Whether this can then also be used for ToF PID I do not know. In any case the challenge for LHCb will be to deal with the maximum luminosity that the LHC can deliver at the LHCb collision point. [MM]

How will bfys survive in a CLIC era?

I am not aware of a flavour physics programme in CLIC. There may be, but I don't know. At a Z-factory, however, there is great potential for b-physics. [MM]

Which proposed next large accelerator project is most favourable for flavour physics?

Apart from further upgrades of Belle-II, LHCb-3, Atlas or CMS, the most promising would be a Tera-Z facility (aka FCC-ee): a very-high-luminosity e+e- collider running at the Z resonance. [MM]


Suggestive questions

Do you think machine learning will become more prevalent in reconstructing events or cleaning up pile-up from events? It is generally difficult to tell what such a neural network is learning; could you live with it working but not knowing exactly how it works?

We already live with many examples of algorithms that are used as a "black box". As long as they are well tested we can certainly live with that, though I do think we should make extensive use of (closure) tests of these applications. Indeed, we have also seen the first examples of applications to clean up pile-up. Why not? [MM]

Is machine learning a useful tool for LHCb, or an overhyped buzzword?

It is beyond any doubt that machine learning is a very useful tool. Almost all analyses make use of neural nets or decision trees, as do the PID and flavour-tagging algorithms. Whether the term is overhyped I can't say, though I do see that many funding proposals include some machine learning these days. [MM]


Existential questions

What comes after the Vici grant?

In the Netherlands there is no further personal funding scheme one can apply for. There are programme schemes like NWO-ENW-groot; with these we try to continue funding the bfys programme. It is also possible to be awarded the Spinoza prize, also called the Dutch Nobel prize, but you cannot apply for that. The natural grant to apply for after a Vici is the ERC Advanced Grant. [MM]

Do you think we have a free will?

I do. Not sure about you though.

Are we living in a simulation?

When will the small LHCb keychains come back?

In the early universe about 10^6 photons remained for 1 baryon (radiation dominated universe). At present 4% of the energy density of the universe is baryons (matter dominated universe), and about 10^-4 is radiation. Baryon number is conserved, and energy is conserved, so where did the energy go that was contained in radiation? How can the ratio have changed so much?

A deep question that surely deserves much more than any answer on this TWiki. One counter question: when the universe expands, the photons' energy $E = h c / \lambda$ decreases as the wavelength stretches. Does this numerically work out? Where does the energy go? Into space-time?
That's an "ask Ethan" question. Start from here. [PK]
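A back-of-the-envelope scaling for the counter question, assuming a standard FLRW expansion with scale factor $a$ (a sketch, not an authoritative answer):

```latex
% Photon number density dilutes as a^{-3}, and each photon is additionally
% redshifted, E = hc/\lambda \propto 1/a, while matter only dilutes:
\rho_\gamma \propto a^{-4}, \qquad \rho_m \propto a^{-3}
\quad\Rightarrow\quad
\frac{\rho_\gamma}{\rho_m} \propto \frac{1}{a} = 1 + z .
```

So even with fixed photon and baryon numbers, the radiation-to-matter energy ratio drops by the full redshift factor accumulated since the early universe; the "missing" photon energy is exactly the redshift loss, whose bookkeeping in general relativity is the subtle part of the question.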

Why are we doing particle physics? Do our findings help mankind in any way?

Yes. First and foremost, we try to understand the foundations of nature, which is only natural since we have developed brains, and it is important that we explain all our findings to the general audience. Second, I find it very important to have a world laboratory where top scientists collaborate independently of politics, ethnicity and social prejudice. Finally, in the process of our research there are both planned and unexpected spin-off technological developments. [MM]

Why 42?

Simple_Magic_Cube.png
The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the "Answer to the Ultimate Question of Life, the Universe, and Everything", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is. Thus, to calculate the Ultimate Question, a special computer the size of a small planet was built from organic components and named "Earth". The Ultimate Question "What do you get when you multiply six by nine" was found by Arthur Dent and Ford Prefect in the second book of the series, The Restaurant at the End of the Universe. This appeared first in the radio play and later in the novelization of The Hitchhiker's Guide to the Galaxy. The fact that Adams named the episodes of the radio play "fits", the same archaic title for a chapter or section used by Lewis Carroll in The Hunting of the Snark, suggests that Adams was influenced by Carroll's fascination with and frequent use of the number. The fourth book in the series, the novel So Long, and Thanks for All the Fish, contains 42 chapters. According to the novel Mostly Harmless, 42 is the street address of Stavromula Beta. In 1994 Adams created the 42 Puzzle, a game based on the number 42.
Furthermore, you can find in Wikipedia that forty-two (42) is a pronic number and an abundant number; its prime factorization 2 × 3 × 7 makes it the second sphenic number and also the second of the form (2 × 3 × r); and many more details. One of these is that a 3×3×3 magic cube can be constructed such that every row, column and corridor, and every diagonal passing through the centre, is composed of 3 numbers whose sum is 42. [MM]


-- MarcelMerk - 2019-07-15

Topic attachments
- IPResolution.png (68.4 K, 2019-07-18, MarcelMerk): Impact parameter resolution vs 1/P_T
- MomentumResolutionJpsiMuons.png (35.6 K, 2019-07-18, MarcelMerk): Momentum resolution of muons from J/psi decays
- Simple_Magic_Cube.png (450.0 K, 2019-07-23, MarcelMerk): The 3 × 3 × 3 magic cube with rows summing to 42.
Topic revision: r10 - 2019-11-05 - PatrickSKoppenburg
 