We then have as the definition of the differential cross section

dσ/dΩ (E, Ω) = (1/F) dNs/dΩ,

where F is the flux of incident particles and dNs is the average number of particles scattered per unit time into the solid angle dΩ.
This has the simple interpretation of the probability of finding a scattered particle within a given solid angle.
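As a rough illustration of that interpretation, one can integrate a differential cross section over a solid angle to get the fraction of scattered particles expected there. This is only a sketch: the 1 + cos²θ angular shape below is an assumed example (as in e+e− → μ+μ−), not a distribution taken from the text.

```python
# Minimal sketch (assumed angular shape): fraction of scattered particles falling
# inside a cone of half-angle theta_max around the beam axis, for dσ/dΩ ∝ 1 + cos²θ.
import numpy as np
from scipy import integrate

def dsigma_domega(theta):
    """Assumed shape of dσ/dΩ (arbitrary normalisation)."""
    return 1.0 + np.cos(theta) ** 2

def fraction_in_cone(theta_max):
    # dΩ = sin(θ) dθ dφ; the φ integral cancels in the ratio for an azimuthally symmetric shape
    num, _ = integrate.quad(lambda t: dsigma_domega(t) * np.sin(t), 0.0, theta_max)
    den, _ = integrate.quad(lambda t: dsigma_domega(t) * np.sin(t), 0.0, np.pi)
    return num / den

print(fraction_in_cone(np.radians(30)))  # probability of scattering within 30 degrees (~0.09 here)
```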
A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. Usual units are the cm2, the barn (1 b = 10−28 m2) and the corresponding submultiples: the millibarn (1 mb = 10−3 b), the microbarn (1 μb = 10−6 b), the nanobarn (1 nb = 10−9 b), the picobarn (1 pb = 10−12 b), and the shed (1 shed = 10−24 b). The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction between the two particles. The cross section is proportional to the probability that an interaction will occur.
Higgs production via gluon fusion occurs via the creation of a quark loop (the characteristic triangle diagram). Any quark may run in the loop, but the top quark dominates the proceedings, because quarks couple to the Higgs boson in proportion to their mass, and the top quark is roughly forty times heavier than the next-in-line, the bottom quark, so its loop contribution overwhelms the others.
Fiducial cross section: in particle physics experiments, a cross section for the subset of a process in which the distinctive process signatures are visible within the sensitive regions of the detector volume. The definition now commonly means a cross section with kinematic and other selection cuts consistent with the sensitive detector acceptance applied, but in which detector inefficiencies are corrected for within that volume. These corrections are typically derived by applying the fiducial cuts on collections of simulated collision events, with and without detector simulation, and inverting the resulting detector transfer function. Fiducial cross sections are favoured for many purposes because they minimise extrapolation into experimentally invisible phase space, and are hence maximally model-independent. In theories beyond the SM, the properties of the 125 GeV Higgs boson may not be determined only by a simple scaling of couplings. Instead, the kinematic distributions in the various Higgs production and decay channels may be sensitively modified by BSM (incl. EFT) effects. Fiducial cross sections (FXS), i.e. cross sections, whether total or differential, for specific states within the phase space defined by experimental selection and acceptance cuts, provide a largely model-independent way to test for such deviations in kinematic distributions. In particular, differential FXS are a powerful tool for scrutinizing the SM Lagrangian structure of the Higgs boson interactions, including tests for new tensorial couplings, non-standard production modes, determination of effective form factors, etc. The measurement of Higgs FXS was already strongly advocated in Section 6 of "On the presentation of the LHC Higgs Results", arXiv:1307.5865.
As you know, we search for a new particle whose mass is not known, so we search for it at many mass points. Now that we have found one at 125 GeV, that particle has only one mass, but we can still search for other new particles at any mass point.
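In the simplest case, "inverting the detector transfer function" amounts to dividing by a correction factor obtained from simulation. A minimal sketch with hypothetical numbers (none of the values below come from the text):

```python
# Minimal sketch (hypothetical numbers): extracting a fiducial cross section.
# c_factor is the detector correction factor estimated from simulation as
# (events passing the detector-level selection) / (events inside the fiducial
# volume at particle level).
n_observed = 1250.0     # selected data events (hypothetical)
n_background = 310.0    # estimated background events (hypothetical)
lumi_fb = 36.1          # integrated luminosity in fb^-1 (hypothetical)
c_factor = 0.72         # detector correction factor from simulation (hypothetical)

sigma_fid_fb = (n_observed - n_background) / (c_factor * lumi_fb)
print(f"fiducial cross section ~ {sigma_fid_fb:.1f} fb")
```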
4. Pileup, luminosity?
Out-of-time pile-up: this is due to the superimposition of signals in the detector that come from different bunch crossings (collisions). The most important example is pile-up of calorimeter signals.
In-time pile-up: particles emerging from "secondary" vertices constitute pile-up signals on top of the interesting event within the same bunch crossing. All the tracks and calorimeter energy deposits of those particles are a source of in-time pile-up. A direct measure of the in-time pile-up contribution to the event is the number of reconstructed vertices in the event (all the vertices with at least two tracks).
Higher luminosity means more pileup interactions; these occur when our detector cannot distinguish between two separate collision events and thus considers them part of the same collision. We need to disentangle the pileup contribution to look at the real single collision event, and while a lot of work has been done in this direction, an increase in pileup is always a cause for concern. However, as someone working closely with Monte-Carlo tuning and production, I know firsthand how big of an issue this is going to be for us.
Pile-up occurs when the readout of a particle detector includes information from more than one primary beam particle interaction - these multiple interactions are said to be "piling up". This is a pile-up event, in which four separate collisions occurred (vertices at the red dots) when two bunches of LHC protons crossed each other inside ATLAS. There are about 100 billion protons in a bunch, so four collisions is not all that many, except that protons are incredibly small. In fact, most protons miss each other in a bunch crossing. Currently, there are about four million bunch crossings per second, and this is being increased as the LHC ramps up. Even four collisions per crossing are quite enough for now.
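A back-of-the-envelope way to see how luminosity translates into in-time pile-up is the average number of interactions per bunch crossing, mu = L * sigma_inel / (n_bunches * f_rev). The numbers below are approximate LHC values, used only for illustration:

```python
# Minimal sketch (approximate numbers): average pile-up interactions per bunch crossing.
lumi = 1.0e34            # instantaneous luminosity in cm^-2 s^-1 (LHC design value)
sigma_inel_cm2 = 80e-27  # inelastic pp cross section, roughly 80 mb
n_bunches = 2808         # colliding bunches
f_rev = 11245.0          # LHC revolution frequency in Hz

mu = lumi * sigma_inel_cm2 / (n_bunches * f_rev)
print(f"average pile-up interactions per crossing: {mu:.1f}")   # ~25 at design luminosity
```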
In high-luminosity colliders, there is a non-negligible probability that one single bunch crossing may produce several separate events, so-called pile-up events. This applies in particular to pp colliders like the LHC, but one could also consider e.g. e+e− colliders with high rates of γγ collisions. The program therefore contains an option, currently only applicable to hadron-hadron collisions, wherein several events may be generated and put one after the other in the event record, to simulate the full amount of particle production a detector might be facing.
In scattering theory and accelerator physics, luminosity is the quantity that relates the event rate to the cross section: the number of events per unit time is the luminosity times the interaction cross section, dN/dt = L σ. It is usually expressed in the cgs units cm−2 s−1 or in b−1 s−1. The integrated luminosity is the integral of the luminosity with respect to time. The luminosity is an important value to characterize the performance of an accelerator. Rather than continuous beams, the protons will be bunched together into 2,808 bunches, with 115 billion protons in each bunch, so that interactions between the two beams will take place at discrete intervals, never shorter than 25 nanoseconds (ns) apart. However, it will be operated with fewer bunches when it is first commissioned, giving a bunch crossing interval of 75 ns. [[http://en.wikipedia.org/wiki/Large_Hadron_Collider#cite_note-commissioning-36][36]] The design luminosity of the LHC is 10^34 cm−2 s−1, providing a bunch collision rate of 40 MHz.
5. Br. In particle physics and nuclear physics, the branching fraction for a decay is the fraction of particles which decay by an individual decay mode with respect to the total number of particles which decay. [[http://en.wikipedia.org/wiki/Branching_fraction#cite_note-1][1]] It is equal to the ratio of the partial decay constant to the overall decay constant. Sometimes a partial half-life is given, but this term is misleading; due to competing modes, it is not true that half of the particles will decay through a particular decay mode after its partial half-life. The partial half-life is merely an alternative way to specify the partial decay constant λ, the two being related through t1/2(partial) = ln 2 / λ(partial).
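Cross section, branching fraction and integrated luminosity combine into an expected event yield, N = σ · BR · L_int. A minimal sketch with approximate, illustrative numbers (roughly the gg → H cross section, the H → γγ branching fraction, and an LHC Run 2 sized dataset; none of these values are taken from the text):

```python
# Minimal sketch (approximate numbers): expected event yield N = sigma * BR * L_int.
sigma_pb = 50.0    # e.g. gg -> H production cross section, roughly 50 pb at 13 TeV
br = 2.3e-3        # e.g. BR(H -> gamma gamma), roughly 0.23%
lumi_fb = 140.0    # integrated luminosity in fb^-1, roughly an LHC Run 2 dataset

sigma_fb = sigma_pb * 1000.0          # 1 pb = 1000 fb
n_expected = sigma_fb * br * lumi_fb  # events produced, before any detector efficiency
print(f"expected H -> gamma gamma events: {n_expected:.0f}")   # ~16000
```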
6. Event number. In particle physics, an event refers to the results just after a fundamental interaction took place between subatomic particles, occurring in a very short time span, at a well-localized region of space. Because of the quantum uncertainty principle, an event in particle physics does not have quite the same meaning as it does in the theory of relativity, in which an "event" is a point in spacetime which can be known exactly, i.e. a spacetime coordinate.
In a typical particle physics event, the incoming particles are scattered or destroyed and up to hundreds of particles can be produced, although few of them are likely to be new, previously undiscovered particles.
At modern particle accelerators, events are the result of the interactions which occur from a beam crossing inside a particle detector.
Physical quantities used to analyze events include the differential cross section, the flux of the beams (which in turn depends on the number density of the particles in the beam and their average velocity), and the rate and luminosity of the experiment.
Individual particle physics events are modeled by scattering theory based on an underlying quantum field theory of the particles and their interactions. The S-matrix is used to characterize the probabilities of the various possible outgoing particle states given the incoming particle states. For suitable quantum field theories, the S-matrix may be calculated by a perturbative expansion in terms of Feynman diagrams. At the level of a single Feynman diagram, an "event" occurs when particles and antiparticles emerge from an interaction vertex forwards in time.
Events occur naturally in astrophysics and geophysics, such as subatomic particle showers produced from cosmic ray scattering events.
7. Drell–Yan. The Drell–Yan process occurs in high energy hadron–hadron scattering. It takes place when a quark of one hadron and an antiquark of another hadron annihilate, creating a virtual photon or Z boson which then decays into a pair of oppositely-charged leptons. This process was first suggested by Sidney Drell and Tung-Mow Yan in 1970 [[http://en.wikipedia.org/wiki/Drell–Yan_process#cite_note-DrellYan-1][1]] to describe the production of lepton–antilepton pairs in high-energy hadron collisions. Experimentally, this process was first observed by J.H. Christenson et al. [[http://en.wikipedia.org/wiki/Drell–Yan_process#cite_note-Christenson-2][2]] in proton–uranium collisions at the Alternating Gradient Synchrotron. The Drell–Yan process is studied both in fixed-target and collider experiments. It provides valuable information about the parton distribution functions (PDFs) which describe the way the momentum of an incoming high-energy nucleon is partitioned among its constituent partons. These PDFs are basic ingredients for calculating essentially all processes at hadron colliders. Although PDFs should be derivable in principle, current ignorance of some aspects of the strong force prevents this. Instead, the forms of the PDFs are deduced from experimental data. The production of Z bosons through the Drell–Yan process affords the opportunity to study the couplings of the Z boson to quarks. The main observable is the forward–backward asymmetry in the angular distribution of the two leptons in their center-of-mass frame. If heavier neutral gauge bosons exist (see Z' boson), they might be discovered as a peak in the dilepton invariant mass spectrum, in much the same way that the standard Z boson appears by virtue of the Drell–Yan process (a minimal invariant-mass sketch is given after the jet discussion below).
8. Local p0 goes down. The p-value is the probability of observing data at least as extreme as that observed, given that the null hypothesis is true. The local p0 is the p-value for the null hypothesis that "the signal is random background noise", and the x-axis is the (hypothesised) mass of the Higgs boson.
9. Why pp???
10. Jet. A jet is a narrow cone of hadrons and other particles produced by the hadronization of a quark or gluon in a particle physics or heavy ion experiment. Because of QCD confinement, particles carrying a color charge, such as quarks, cannot exist in free form. Therefore, they fragment into hadrons before they can be directly detected, becoming jets. These jets must be measured in a particle detector and studied in order to determine the properties of the original quark.
In relativistic heavy ion physics, jets are important because the originating hard scattering is a natural probe for the QCD matter created in the collision and can indicate its phase. When the QCD matter undergoes a phase crossover into quark gluon plasma, the energy loss in the medium grows significantly, effectively quenching the outgoing jet.
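As mentioned under item 7, a Z boson (or a hypothetical Z') shows up as a peak in the dilepton invariant mass spectrum. A minimal sketch of how that mass is computed from two lepton four-vectors, using made-up kinematics and a massless-lepton approximation:

```python
# Minimal sketch (hypothetical lepton kinematics): dilepton invariant mass,
# m^2 = (E1 + E2)^2 - |p1 + p2|^2, with four-vectors built from (pT, eta, phi)
# assuming massless leptons.
import math

def four_vector(pt, eta, phi):
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = pt * math.cosh(eta)      # massless approximation: E = |p|
    return e, px, py, pz

def invariant_mass(lep1, lep2):
    e = lep1[0] + lep2[0]
    px = lep1[1] + lep2[1]
    py = lep1[2] + lep2[2]
    pz = lep1[3] + lep2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# two hypothetical leptons (pT in GeV, eta, phi)
mu_plus = four_vector(45.0, 0.5, 0.3)
mu_minus = four_vector(47.0, -0.2, 3.0)
print(f"m(ll) = {invariant_mass(mu_plus, mu_minus):.1f} GeV")  # ~96 GeV for these made-up values
```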
Caption for Figure B
This plot shows hypothetical data and expectations that could be used in setting the limits shown in Figure A.
The green curve shows (fictional) predicted results if there were a Higgs boson in addition to all the usual backgrounds. It could also represent the predictions of some other new physics. The dashed black curve shows what is expected from all background processes without a Higgs or some new physics. The black points show the hypothetical data.
In this case, the data points are too low to be consistent with the Higgs boson hypothesis (or whatever new physics the green curve represents), so we can rule out that hypothesis.
Nonetheless, the data points are higher than the expectations for the background processes. This could yield an excess such as shown on the left in Figure A. There are three possible explanations for this excess: a statistical fluctuation of the background, an underestimate of the background (a systematic effect), or a real signal produced at a lower rate than the green curve predicts.
If instead, the black points lay close to the green curve, that could be evidence for the discovery of the Higgs boson (if it were statistically significant).
If the black points lay on or below the dashed black curve (the expected background), then there is no evidence for a Higgs boson and depending on the statistical significance, the Higgs boson might be ruled out at the corresponding mass.
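Whether such an excess is "statistically significant" is usually quantified by converting the local p-value (the local p0 of item 8 above) into an equivalent Gaussian significance Z. A minimal sketch of that conversion:

```python
# Minimal sketch: converting between a local p-value (p0) and the equivalent
# one-sided Gaussian significance Z, as used when quoting "x sigma" for an excess.
from scipy.stats import norm

def p0_to_z(p0):
    """Z = Phi^-1(1 - p0), one-sided convention."""
    return norm.isf(p0)

def z_to_p0(z):
    return norm.sf(z)

print(f"p0 at 3 sigma (evidence threshold):  {z_to_p0(3.0):.2e}")
print(f"p0 at 5 sigma (discovery threshold): {z_to_p0(5.0):.2e}")   # ~2.9e-7
```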
Instead, the detectors register all the decay products (the decay signature) and from the data the decay process is reconstructed. If the observed decay products match a possible decay process (known as a decay channel) of a Higgs boson, this indicates that a Higgs boson may have been created. In practice, many processes may produce similar decay signatures. Fortunately, the Standard Model precisely predicts the likelihood of each of these, and of each known process, occurring. So, if the detector detects more decay signatures consistently matching a Higgs boson than would otherwise be expected if Higgs bosons did not exist, then this would be strong evidence that the Higgs boson exists.
11 Prompt lepton
A prompt lepton is a lepton that originates from the primary interaction vertex, from the interesting physics (EWK or BSM). Fake leptons include: leptons from meson decays in jets, cosmic rays, and jets that punch through to the muon chambers.
12 Isolation
Isolation: the sum of the pT of objects in a cone around the lepton divided by the pT of the lepton. Lower values of isolation mean that the particle is more isolated (a minimal sketch of such a cone-based variable is given at the end of the transverse-momentum discussion below). Muon isolation variables are used to calculate the energy surrounding the muon along its trajectory released by other particles. Those variables are useful for distinguishing muons coming from hadron decays from muons coming from decays of resonances. In ATLAS there are two independent approaches, a calorimeter-based and a tracking-based method; both define a cone around the muon trajectory in which the energy deposit is calculated. Different cone sizes are available: ΔR < 0.4, 0.3 and 0.2.
Calorimeter-based muon isolation variable: this represents the sum of the calorimeter cluster energy in a cone around the muon trajectory of the sizes defined above (ET, ΔR < 0.X). A narrow core cone is subtracted to take into account the muon's own energy deposit. Only calorimeter signals 3.6 σ above noise are considered.
Track-based muon isolation variable: this is the sum of the transverse momenta (pT) of all the tracks in a cone around the muon trajectory (pT, ΔR < 0.X). Tracks are required to have a small impact parameter with respect to the primary vertex and pT > 1 GeV. The first cut greatly reduces contributions from tracks coming from pile-up vertices.
13 The transverse (d0) and longitudinal (z0) impact parameters
Please take as reference this slide: https://indico.cern.ch/event/96989/contributions/2124495/attachments/1114189/1589705/WellsTracking.pdf. The importance of the transverse direction arises because momentum along the beamline may just be left over from the beam particles, while the transverse momentum is always associated with whatever physics happened at the vertex.
That is, when two protons collide, they each come with three valence quarks and an indeterminate number of sea quarks and gluons. All of those that don't interact keep speeding down the pipe (modulo Fermi motion and final state interaction).
But the partons that react do so on average at rest in the lab frame, and so will on average spray the resulting junk evenly in every direction. By looking at the transverse momentum you get a fairly clean sample of "stuff resulting from interacting partons" and not "stuff resulting from non-interacting partons".
The collisions of protons are complicated, because the proton has a big mess inside. In order to see simple collisions, you want to find those cases where a single quark or gluon (a single parton) scattered off another parton in a nearly direct collision. Such collisions are relatively rare; most proton-proton collisions are diffractive collective motions of the whole proton, but every once in a while you see a hard collision.
The characteristic of a hard collision is that you get particles whose momentum is very far off the beam line direction. This is a "high P_T" event. A high P_T electron usually means that an electrically charged parton (a quark) collided with some other parton, and emitted a hard photon or a Z which then produced an electron and a positron. Alternatively, it could mean that a W boson was emitted by the quark, and this produced an electron and a neutrino. Alternatively, it could be a higher order process in the strong interaction, where two gluons produced a quark-antiquark, and one of the quark lines then emitted an electroweak boson, which decayed leptonically.
The point is that any way it happened, the event indicates that a clean hard collision happened between two partons, and this is a useful indication that the event was an interesting one, which will give useful clues about new physics if similar events are isolated and counted.
The reason P_T is important is that when the actual collision event is a short-distance collision dominated by perturbative QCD, the outgoing particles are almost always away from the beam line by a significant amount. Even in interesting events, when the outgoing particles are near the direction of the beam, it is hard to distinguish this from the much more common case of a near-glancing collision, which leads to diffractive scattering.
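To make "far off the beam line" quantitative, one usually works with the transverse momentum pT and the pseudorapidity η computed from the momentum components (beam along the z axis). A minimal sketch with made-up momenta:

```python
# Minimal sketch (made-up momenta): transverse momentum and pseudorapidity of a particle.
# Large pT and small |eta| mean the particle came out far from the beam line,
# the hallmark of a hard collision; tiny pT and large |eta| look like beam remnants.
import math

def pt_and_eta(px, py, pz):
    pt = math.hypot(px, py)
    p = math.sqrt(px * px + py * py + pz * pz)
    theta = math.acos(pz / p)
    eta = -math.log(math.tan(theta / 2.0))
    return pt, eta

print(pt_and_eta(30.0, 40.0, 10.0))   # pT = 50 GeV, small |eta|: central, hard-scatter-like
print(pt_and_eta(0.5, 0.2, 900.0))    # tiny pT, large eta: forward, beam-remnant-like
```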
Diffractive scattering is the dominant mechanism of proton-proton scattering (or proton-antiproton scattering) at high energies. The cross section for diffractive events is calculated by Regge theory, using the Pomeron trajectory. This type of physics has not been so interesting to physicists since the mid 70s, but more for political reasons. It is difficult to calculate, and has little connection with the field theory you are trying to find. But Regge theory is mathematically intimately related to string theory, and perhaps it will be back in fashion again.
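As referenced under item 12 above, isolation (and the ΔR cuts used below for overlap removal and trigger matching) is built from the η-φ distance ΔR between objects. A minimal sketch of a track-based isolation variable, with a hypothetical input format and made-up values:

```python
# Minimal sketch (hypothetical inputs): track-based isolation, i.e. the sum of track pT
# inside a Delta-R cone around the lepton divided by the lepton pT.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)   # wrap phi difference into [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def track_isolation(lepton, tracks, cone=0.3):
    """lepton and tracks are dicts with 'pt', 'eta', 'phi' (hypothetical format)."""
    sum_pt = sum(t["pt"] for t in tracks
                 if 0.0 < delta_r(lepton["eta"], lepton["phi"], t["eta"], t["phi"]) < cone
                 and t["pt"] > 1.0)                      # pT > 1 GeV track requirement
    return sum_pt / lepton["pt"]

muon = {"pt": 40.0, "eta": 0.8, "phi": 1.2}
tracks = [{"pt": 2.5, "eta": 0.9, "phi": 1.1}, {"pt": 6.0, "eta": -1.5, "phi": 0.2}]
print(track_isolation(muon, tracks))   # only the nearby track contributes: 2.5/40 = 0.0625
```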
17 Overlap Removal
'Overlap removal' summarises two aspects of the object selection that are similar in their implementation but are performed for different reasons. One aspect is the removal of objects that overlap due to double counting by the reconstruction algorithms. In this case only one of the two objects is an actual object while the other is an artefact of the reconstruction mechanism. This concerns electrons and jets, since electrons are also reconstructed as jets by the jet algorithms. Therefore, any jet that is found to be closer than ΔR(e, jet) < 0.2 to an electron after applying the object selection criteria is discarded. It can also happen that an electron is erroneously reconstructed twice. In order to reject the second electron, whenever two electrons are found within ΔR(e1, e2) < 0.1, the electron with the lower energy is discarded.
The other aspect is the spatial separation of two objects. Leptons can arise from the semileptonic decay of b or c quarks inside a jet. These leptons should in general be rejected by the isolation requirements, but a sizeable contribution of leptons inside jets passing the isolation requirements can be seen. Electrons and muons are thus required to be separated from jets by ΔR(lep, jet) > 0.4. Muons and electrons are also seen to overlap in the detector when a muon emits bremsstrahlung and the resulting photon is misidentified as an electron. Both objects are rejected in this case if they overlap within ΔR(μ, e) < 0.1, as both are likely to be badly reconstructed.
18 Trigger matching
For the plots shown in Figure 2, so-called trigger matching was applied. This procedure drops electrons which did not trigger the event, e.g. photons that were misidentified as electrons during the reconstruction process. The idea is to match offline electrons to trigger objects with pT above the given trigger threshold by minimising the ΔR distance defined by Equation 2. Only offline electrons with a trigger object matched within a ΔR < 0.2 cone were considered. One can see the effect of the trigger matching in Figure 3. After applying trigger matching there is no artifact in the low-pT region caused by fake electrons. The efficiency is in general a little lower, but comparing the mean plateau efficiencies, the efficiencies are the same when taking into account the statistical uncertainty (without trigger matching 0.993 ± 0.005, with trigger matching 0.992 ± 0.005).
19 QCD Background
In simple terms, the QCD background consists of multijet events in which a jet is misidentified as a prompt lepton, or produces a non-prompt lepton from a heavy-flavour decay, and thereby mimics the signal signature.
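Finally, a minimal sketch of two of the ΔR-based overlap removal steps described under item 17 (jet-electron double counting and lepton-jet separation), using a hypothetical object format and the thresholds quoted above:

```python
# Minimal sketch (hypothetical object format) of two overlap removal steps from item 17:
# 1) drop jets within Delta-R < 0.2 of an electron (same object reconstructed twice),
# 2) drop leptons within Delta-R < 0.4 of a surviving jet (likely heavy-flavour decays).
import math

def delta_r(a, b):
    dphi = math.remainder(a["phi"] - b["phi"], 2.0 * math.pi)
    return math.hypot(a["eta"] - b["eta"], dphi)

def overlap_removal(electrons, muons, jets):
    jets = [j for j in jets if all(delta_r(j, e) >= 0.2 for e in electrons)]
    electrons = [e for e in electrons if all(delta_r(e, j) >= 0.4 for j in jets)]
    muons = [m for m in muons if all(delta_r(m, j) >= 0.4 for j in jets)]
    return electrons, muons, jets

electrons = [{"eta": 0.5, "phi": 1.0}]
muons = [{"eta": -1.2, "phi": 2.5}]
jets = [{"eta": 0.55, "phi": 1.05}, {"eta": -1.0, "phi": 2.6}]
print(overlap_removal(electrons, muons, jets))   # first jet removed; muon removed near second jet
```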