ROOTFILES

http://ams.cern.ch/AMS/Analysis/hpl3itp1/root02_v5/html/

RICH

Regarding the RICH data analysis of root files, at the top level there are two topics to be covered:

Dynamic calibration

When reading consecutive events using the AMSChain::GetEvent method, it is possible to perform a calibration of the RICH radiator refractive indexes as the data is being read. This allows correcting any small change in the optical properties of the radiator without the need of a new calibration.

This feature is deactivated by default. To activate it, use the static method RichRingR::switchDynCalibration().

Once this method is called, subsequently read events will be used to correct the radiator properties, such that protons with high momentum are reconstructed as beta=1 particles by the RICH CIEMAT reconstruction (explained below).
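As an illustration, a minimal sketch of activating it while looping over a chain is shown below; the input file name, the chain setup and the loop structure are assumptions, and only RichRingR::switchDynCalibration() and AMSChain::GetEvent are taken from the description above:

{
  gInterpreter->Load("ntuple_slc4_PG.so");   // Load AMSRoot shared library

  AMSChain chain;                            // assumed chain setup
  chain.Add("ams_data.root");                // hypothetical input file

  RichRingR::switchDynCalibration();         // activate the dynamic calibration (deactivated by default)

  int nEntries = chain.GetEntries();         // assumed TChain-like interface
  for (int i = 0; i < nEntries; i++) {
    AMSEventR *ev = chain.GetEvent();        // consecutive events read here are used to correct the radiator refractive indexes
    if (!ev) break;
    // ... analysis using the dynamically corrected RICH beta ...
  }
}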

RICH reconstructed ring objects

RICH data is reconstructed using two different algorithms:

  1. A clustering algorithm, usually referred to as the CIEMAT reconstruction. The class holding the reconstructed variables in this case is RichRingR.
  2. A likelihood-based algorithm, referred to as the LIP reconstruction. The class holding the reconstructed variables in this case is RichRingBR.

Both algorithms require as input the direction of the particle at its crossing point with the RICH radiator, as well as the crossing point itself. These quantities are obtained from the different tracks (Tracker, TRD or ECAL) reconstructed across AMS. Therefore, for each track a Cherenkov ring reconstructed with each algorithm can be present in the reconstructed data.

Both reconstruction methods have similar performance and should agree for correctly reconstructed events. The pointers to the reconstructed objects associated with the particle can be accessed through the corresponding ParticleR methods.

The difference between the two reconstructions for the reconstructed beta of the particle is accessed with ParticleR::RichBetasDiscrepancy(). For correctly reconstructed events this should be of the order of 1e-3 or smaller.

Although both objects refer to the same event, they deliver different quantities through different interfaces. Here we briefly describe the main methods and reconstructed variables for each object. The main functions provided by each reconstruction are:

CIEMAT reconstruction

  • float RichRingR::getBeta() -- returns the reconstructed beta for the ring, eventually corrected by the dynamic calibration if activated as described above.
  • float RichRingR::getBetaError() -- returns an estimate of the error of the reconstructed beta. This estimate is valid only for correctly reconstructed rings.
  • bool RichRingR::IsClean() -- returns true if the reconstructed ring is not contaminated by the light emitted in a PMT light guide by a crossing charged particle.
  • bool RichRingR::IsNaF() -- returns true if the reconstructed ring was emitted in the NaF radiator (above the ECAL).
  • float RichRingR::getProb() -- for a correctly reconstructed ring, returns a number uniformly distributed between 0 and 1. For rings with problems it returns a number which peaks at zero and decays exponentially. For an explanation of what this number is, ask an expert (C. Delgado at the moment). For 90% of the correctly reconstructed rings this number should be larger than 0.1.
  • float RichRingR::getWidth() -- returns the width of the reconstructed ring. The unit of measurement is the expected ring width. For correctly reconstructed rings this quantity should be smaller than three.
  • int RichRingR::getHits() -- returns the number of RICH channels associated to the ring.
  • float RichRingR::getCharge2Estimate() -- returns the estimate of Z^2 (up to a normalization constant).
  • float RichRingR::getPhotoElectrons() -- returns the number of photoelectrons detected in the ring.
  • float RichRingR::getExpectedPhotoElectrons() -- returns the number of photoelectrons expected for a ring with the characteristics of the reconstructed one, but due to a crossing particle of Z=1.

Additional information useful to reject events with fragmentation can be accessed through the methods below (a selection sketch combining these cuts is shown after the list):

  • int RichHitR::getPMTs() -- returns the number of PMTs with signal in the event that are not likely crossed by a charged particle.
  • float RichHitR::getCollectedPhotoElectrons() -- returns the total number of photoelectrons in the event. This, together with the number of photoelectrons in the reconstructed ring, allows discarding events with fragmentation efficiently (a simple-minded cut to do so is RichHitR::getCollectedPhotoElectrons()*0.5<RichRingR::getPhotoElectrons()).
  • int RichRingR::getPMTs() -- returns the number of PMTs associated to the ring.
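As mentioned above, a minimal selection sketch combining these quantities could look as follows; the cut values are those quoted in the descriptions above, while the way the RichRingR pointer and the ParticleR object are obtained from the event is assumed and not shown:

  // Minimal sketch: "ring" is the RichRingR associated with the particle "part" (accessors not shown)
  bool IsGoodCiematRing(RichRingR *ring, ParticleR *part)
  {
    if (!ring) return false;                                  // no ring reconstructed for this track
    if (!ring->IsClean()) return false;                       // reject rings contaminated by light-guide light
    if (ring->getProb() < 0.1) return false;                  // keeps ~90% of correctly reconstructed rings
    if (ring->getWidth() > 3) return false;                   // width in units of the expected ring width
    // simple-minded fragmentation rejection: most of the light in the event must belong to the ring
    if (RichHitR::getCollectedPhotoElectrons() * 0.5 >= ring->getPhotoElectrons()) return false;
    // agreement between the CIEMAT and LIP reconstructed betas
    if (part && fabs(part->RichBetasDiscrepancy()) > 1e-3) return false;
    return true;
  }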

LIP reconstruction

Available functions

  • bool RichRingBR::IsNaF() -- TRUE if NaF event, FALSE if aerogel event.
  • float RichRingBR::getIndexUsed() -- refractive index used in the reconstruction.
  • float RichRingBR::getBetaConsistency() -- difference (unsigned) between RichRingR and RichRingBR velocity results for the particle.
  • float RichRingBR::getBetaConsistencySigned() -- same as above, but signed (positive if the RichRingR value is greater, negative if the RichRingBR value is greater).

Main data variables

  • float RichRingBR::Beta -- reconstructed particle velocity.
  • float RichRingBR::AngleRec -- reconstructed Cherenkov angle (rad).
  • float RichRingBR::ChargeRec -- reconstructed particle charge (using all data).
  • float RichRingBR::ChargeRecDir -- reconstructed particle charge (using non-reflected ring segments only).
  • float RichRingBR::ChargeRecMir -- reconstructed particle charge (using reflected ring segments only).
  • int RichRingBR::Used -- number of hits in reconstructed Cherenkov ring.
  • float RichRingBR::NpeRing -- total signal (p.e.) in ring.
  • float RichRingBR::NpeRingDir -- total signal (p.e.) in non-reflected ring segments.
  • float RichRingBR::NpeRingRef -- total signal (p.e.) in reflected ring segments.
  • float RichRingBR::RingAcc[3] -- geometrical acceptance (visible fraction) of the Cherenkov ring: [0]=total, [1]=non-reflected, [2]=once reflected. The total is given considering an 85% reflectivity for reflected photons, i.e. [0]=[1]+0.85*[2], and assuming each PMT grid is a fully active surface except for dead PMTs.
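For illustration, a short sketch of a consistency check between the two reconstructions, using only the methods listed above; the thresholds are hypothetical, chosen only to illustrate the usage, and the way the RichRingBR pointer is obtained is assumed and not shown:

  // Minimal sketch: flag LIP rings whose beta agrees with the CIEMAT result
  bool IsConsistentLipRing(RichRingBR *bring)
  {
    if (!bring) return false;
    // hypothetical thresholds: a looser requirement in NaF, where the velocity resolution is coarser
    float maxDiff = bring->IsNaF() ? 5e-3 : 1e-3;
    return bring->getBetaConsistency() < maxDiff;   // unsigned difference between RichRingR and RichRingBR betas
  }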

ECAL

EcalShower variables available on gbatch since release B505 (AMSEcalShower::LAPPVariables()):

* float S1tot[3]: fraction of energy deposited in the CofG cell: S1tot[0]: sum over all layers; S1tot[1]: sum over all x layers; S1tot[2]: sum over all y layers.

* float S3tot[3]: fraction of energy deposited in the CofG and CofG+-1 cells: S3tot[0]: sum over all layers; S3tot[1]: sum over all x layers; S3tot[2]: sum over all y layers.

* float S5tot[3]: fraction of energy deposited in the CofG, CofG+-1 and CofG+-2 cells: S5tot[0]: sum over all layers; S5tot[1]: sum over all x layers; S5tot[2]: sum over all y layers.

* int NbLayerX: number of hit layers on x side

* int NbLayerY: number of hit layers on y side

* float ShowerLatDisp[3]: shower lateral dispersion, calculated as the sum of the transverse CofG variances: ShowerLatDisp[0]: sum over all layers; ShowerLatDisp[1]: sum over all x layers; ShowerLatDisp[2]: sum over all y layers.

* float ShowerLongDisp: shower longitudinal dispersion, calculated as the CofGZ variance.

* float ShowerDepth: estimator of the shower depth.

* float ShowerFootprint[3]: estimator of the shower size: ShowerFootprint[0]: overall; ShowerFootprint[1]: x layers component; ShowerFootprint[2]: y layers component.
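As a simple illustration of how these variables could be combined, the sketch below builds a crude shower-compactness selection; the class holding the variables, the direct member access and the cut values are all assumptions made only for illustration:

  // Minimal sketch, assuming "sh" points to the object holding the LAPP variables listed above
  bool IsCompactEmShower(EcalShowerR *sh)
  {
    if (!sh) return false;
    if (sh->NbLayerX < 8 || sh->NbLayerY < 8) return false;  // hypothetical minimum number of hit layers per side
    // an electromagnetic shower deposits most of its energy close to the centre of gravity:
    // require a large fraction of the energy inside the CofG+-1 cells, summed over all layers
    return sh->S3tot[0] > 0.6;                               // hypothetical threshold, for illustration only
  }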

Exact formulae and example plots are in: EcalShowerLAPPVars.pdf

EcalShowerR EM estimator to be included in the next version (AMSEcalShowerR::EcalStandAloneEstimator()):

More information is available in: EcalShowerLAPPEMEstimator.pdf

Unfolding

An implementation of the so-called Bayesian unfolding (Richardson, W. H. (1972); Lucy, L. B. (1974); D'Agostini, G. (1995)) is available in the AMSRoot library. It allows performing fully automated unfolding, with propagation of the statistical errors due to Poisson fluctuations in the folded (measured) histogram and Poisson fluctuations in the joint histogram used to estimate the migration matrix. Additionally, an approximation to truly Bayesian unfolding, with Bayesian computation of the errors, is also provided under the name of Stochastic unfolding (C. Delgado, in preparation (2013)). A minimal common interface is provided to use both methods. The next example shows how:

New example of usage in an anonymous ROOT macro using CINT

{
  gInterpreter->Load("ntuple_slc4_PG.so");  // Load AMSRoot shared library

  BayesianUnfolder unfolder;                       // Instantiate the object that will perform the unfolding.
  StochasticUnfolding stochastic;

  TFile files("unfold.root");                            // File containing the measured distribution and the migration matrix (or joint probability)
  TH1F *reconstructed=(TH1F*)files.Get("hrec");  // Measured distribution
  TH2F *matrix=(TH2F*)files.Get("hmig");    // Migration matrix or joint distribution
  
  // Some comments on the joint distribution histogram:
  //    * The x axis should correspond to the "true" quantity (usually the generated energy)
  //    * The y axis should correspond to the "measured" quantity (rigidity in the tracker, energy in the ECAL)
  //    * The binning is not very important as long as it is finer than or equal to that of the distribution we want to unfold (in y) and of the unfolded one we want (in x)
  //    * The matrix is internally normalized to really provide the migration (or response) matrix. Overflows and underflows, if any, are used to
  //      compute efficiency factors. However it is usually better to keep these at zero and correct them manually after the unfolding.



  TH1F unfoldedB=*reconstructed; unfoldedB.Reset();   // Histogram where to store the Bayesian unfolding result. The binning has to be set a priori and it has to be empty
  TH1F unfoldedS=*reconstructed; unfoldedS.Reset();   // Histogram where to store the Stochastic unfolding result. The binning has to be set a priori and it has to be empty

  unfolder.computeAll(*matrix,         // Joint distribution of measured and reconstructed, or migration matrix
                      *reconstructed,  // Measured distribution
                      unfoldedB,       // The histogram where to dump the result
                      100,             // (optional) Number of samples used to compute the statistical errors
                      true,            // (optional) Allow statistical fluctuations of the migration matrix in the propagation of errors
                      true,            // (optional) Allow statistical fluctuations of the measured distribution. This should always be true
                      10,              // (optional) Regularization strength. The larger it is the smoother the result, but it could introduce a bias
                      1e-2,            // (optional) Minimum change of the Kullback-Leibler divergence between two iterations required to continue iterating
                      1e-2             // (optional) Minimum change of the chi2 between the measured and the folded unfolded distribution required to continue iterating
                      );


  stochastic.computeAll(*matrix,         // Joint distribution of measured and reconstructed, or migration matrix
                        *reconstructed,  // Measured distribution
                        unfoldedS,       // The histogram where to dump the result
                        100,             // (optional) Number of samples used to compute the statistical errors
                        true,            // (optional) Allow statistical fluctuations of the migration matrix in the propagation of errors
                        true,            // (ignored)
                        0.5,             // (optional) Prior hyperparameter. Roughly speaking, the probability of a bin being empty in the unfolded distribution is forced to be this number divided by the number of entries in the measured distribution. The larger it is the smaller the errors in the unfolded distribution, but the bias can increase.
                        0.5              // (optional) Speed-up. Controls the accuracy of the approximations in the truly Bayesian computation: the smaller it is the more accurate, but the slower the convergence. A value of 0.5 is good for most applications.
                        );

  unfoldedB.Draw();
  unfoldedS.Draw("same");
}

OLD: Example of old usage in an anonymous ROOT macro using CINT

{
  gInterpreter->Load("ntuple_slc4_PG.so");  // Load AMSRoot shared library

  BayesianUnfolder unfolder;                       // Instantiate the object that will perform the unfolding.

  // In general the BayesianUnfolder object needs to know the migration matrix and a prior "true" distribution
  // to start the unfolding procedure. The migration matrix is automatically computed by sampling the joint
  // distribution, which is the technical way of saying "take the MC and build a 2D histogram with the generated
  // energy on one axis and the reconstructed one on the other."

  ////////////////////////////////////////////////////////////////////
  // Build the sampling of the joint distribution //
  ////////////////////////////////////////////////////////////////////
  mc_tree.Draw("reconstructed_rigidity:generated_energy>>joint");   // mc_tree is an ntuple already loaded in ROOT with the output of the MC
  TH2F joint=*(TH2F*)mc_tree.GetHistogram();

  unfolder.setMigrationMatrixFromJoint(joint);   // Build the migration matrix   

  // Some comments on the joint distribution histogram:
  //    * The x axis should correspond to the "true" quantity (usually the generated energy)
  //    * The y axis should correspond to the "measured" quantity (rigidity in the tracker, energy in the ECAL)
  //    * The binning is not very important as long as it is finer than or equal to that of the distribution we want to unfold (in y) and of the unfolded one we want (in x)


  /////////////////////////////
  // Provide a prior//
  /////////////////////////////

  // The prior is used as starting point for the iterations of the algorithm. Additionally, the binning of the prior is the one used for the unfolded
  // distribution.
  
  TH1F prior;
  double binx[11];
  for(int i=0;i<=10;i++) binx[i]=pow(10,-1+i*0.4);
  prior.SetBins(10,binx);  
  for(int i=1;i<=prior.GetNbinsX();i++) prior.SetBinContent(i,1); // Usually a uniform prior is OK

  unfolder.setPrior(prior);


  // Now we can run the Bayesian unfolding manually, but we need the measured distribution
  data_tree.Draw("reconstructed_rigidity>>measured"); // again, this ntuple with the data was already loaded
  TH1F measured=*(TH1F*)data_tree.GetHistogram();

  // Run 10 iterations of the bayesian unfolding
  double prevKL=1e10;
  double prevChi2=1e10;
  for(int iteration=0;iteration<10;iteration++){

    unfolder.Posterior.Smooth(1);  // To provide a regularization mechanism that guarantees that the algorithm converges, smooth the result of the previous iteration a bit

    unfolder.BayesianUnfoldingStep(measured);  // Perform an unfolding step

   // Here we can check whether convergence has already been reached by comparing the folding of the unfolded distribution with the measured one.
   // This is done by computing the change in the chi2/ndof:
   double chi2=unfolder.GetChi2(measured);
   if(fabs(chi2-prevChi2)<1e-2) break;  
   prevChi2=chi2;


   // We can also check whether the unfolded distribution changes between two consecutive iterations.
   // A good quantity for this is the Kullback-Leibler divergence, which is computed with
   double kl=estimateKL();
   if(fabs(kl-prevKL)<1e-2) break;
   prevKL=kl;
  }

  // Finally we should retrieve the result:
  TH1F unfolded=unfolder.Posterior;

   // Usually it is a good idea to check that the result is stable after 100 or 1000 iterations. If it is not,
   // it is necessary to increase the regularization, by changing the 1 in Smooth(1) to a larger number.


  // Finally, the computed result has no assigned statistical errors. They are usually computed by repeating the whole
  // procedure many times (e.g. 100), resampling the joint and measured distributions from the originals according to their
  // statistics, and computing the variance of the results. If the errors in the measured histogram and the
  // joint histogram are Poissonian, the whole procedure is already implemented:

  BayesianUnfolder unfolderLazy;
  TH1F unfoldedLazy=prior; unfoldedLazy.Reset();  // prepare a slot to store the result
  unfolderLazy.computeAll(joint, measured,  // Inputs: the joint and measured distributions
                          unfoldedLazy,     // The histogram where to dump the result
                          100,              // (optional) Number of samples used to compute the statistical errors
                          true,             // (optional) Allow statistical fluctuations of the migration matrix
                          true,             // (optional) Allow statistical fluctuations of the measured distribution
                          1,                // (optional) Regularization strength
                          1e-2,             // (optional) Minimum change of the Kullback-Leibler divergence between two iterations required to continue iterating
                          1e-2              // (optional) Minimum change of the chi2 between the measured and the folded unfolded distribution required to continue iterating
                          );
 
}

Generic MVA manager

Multivariate Analysis (TMVA, http://root.cern.ch/root/htmldoc/TMVA_Index.html) is accessible via the MVAmanager class (vdev): https://twiki.cern.ch/twiki/bin/view/AMS/MVAmanager

-- FernandoBarao - 28-Jul-2011

-- ArmandFiasson - 05-Jul-2011

-- MercedesPaniccia - 05-Jul-2011

-- CarlosDelgado - 04-Jul-2011

-- VitaliChoutko - 01-Jul-2011

Topic attachments
  • EcalShowerLAPPEMEstimator.pdf (120.3 K, 2011-07-05, ArmandFiasson)
  • EcalShowerLAPPVars.pdf (597.9 K, 2011-07-05, MercedesPaniccia)
Topic revision: r13 - 2013-06-11 - ValeryZhukov
 