TWiki > Main Web > TWikiUsers > IoannisXiotidis > BsPlot > NumericalLikelihood > AnalyticalModelToys (2021-02-23, IoannisXiotidis)

# AnalyticalModelToys
# Introduction

# Signal mass model

## Br-analysis model description

## Fast test on signal MC

## Outcome of signal only fit

## Application of new signal model in the full mass range on data

## Outcome of data fit with new signal model

# Toy generation

## Model description (both in mass and proper time)

## Data sets for fitting

### Background model fitting

### Signal fitting

#### Debugging of 2D fit

## Analytical toy validation

### Bin content and error distribution on sPlot toys

#### Overlaying analytical toy distributions with bootstrapped reference distributions

### Summary plots per bin (bin content, custom tagging toy separation)

### Generation step by step plotting

### Analytical toys vs full MC comparison

### Bootstrap toys testing

#### Improvements on analytical fit in mass

#### Generation of bootstraps from reference sample generated from an analytical model

### Comparison of analytical fitters in lifetime between analytical generated toys and bootstraps


This page presents the generation of toys from analytical lifetime and invariant-mass models, together with the study of porting the Br signal mass model and its performance when applied to toy MC.

For the toy study and the sPlot, a key ingredient is the invariant-mass model used to fit the mass of the B meson. Up to now a single-Gaussian model has been used with all its parameters free. Since this model causes many toys to fail, making the procedure unstable, a more elaborate signal model is required. The decision was made to use the signal model from the 2015/2016 Br analysis, which is fully understood and has been tested in various configurations.

The model used in the 15/16 Br analysis is a double Gaussian with two independent means and sigmas and a fraction for the relative proportion. For the full-range fit these parameters were frozen to the values obtained from the independent double-Gaussian fits on signal MC toys.

| Parameter name | Value |
| --- | --- |
| Mean-1 | 5357.7 +/- 1.5 MeV |
| Mean-2 | 5257.0 +/- 18.3 MeV |
| Sigma-1 | 83.0 +/- 1.9 MeV |
| Sigma-2 | 193.8 +/- 10.5 MeV |
| fraction of Gauss 1 | 0.88 +/- 0.02 |

The values listed above are the averages over all fits on the signal MC toys.
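As a stand-alone sketch (plain Python, not the analysis code; the function and parameter names are ours, with the frozen Br-analysis values from the table above), the double-Gaussian signal shape can be written as:

```python
import math

# Frozen Br-analysis signal shape (central values from the table above).
BR_PARAMS = {
    "mean1": 5357.7, "sigma1": 83.0,   # MeV, narrow core
    "mean2": 5257.0, "sigma2": 193.8,  # MeV, wide component
    "frac1": 0.88,                     # relative weight of Gaussian 1
}

def gauss(x, mu, sigma):
    """Normalized Gaussian PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def double_gauss(x, p=BR_PARAMS):
    """Normalized double-Gaussian signal PDF in the B-candidate mass (MeV)."""
    return p["frac1"] * gauss(x, p["mean1"], p["sigma1"]) + \
           (1.0 - p["frac1"]) * gauss(x, p["mean2"], p["sigma2"])
```

Since both components are individually normalized and the fractions sum to one, the combined shape integrates to one by construction.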

The Br model has also been fitted and compared with the model used up to now. The two signal models were applied to the signal-only MC. However, since the signal MC has very high statistics, 10% of the events were sampled from the full data set before applying the models; the main reason for sampling fewer events is to avoid structures that only become visible at high statistics. The results, along with the fit logs from this exercise, can be found in the following table.

| Model | Parameter name | Initial value | Final result | Plot |
| --- | --- | --- | --- | --- |
| Single Gaussian | mean | 5300 MeV | 5346.27 +/- 1.52 MeV | |
| | sigma | 160 MeV | 103.66 +/- 1.07 MeV | |
| | #Events | 4672 | 4672 +/- 68.35 | |
| Double Gaussian | mean1 | 5366 MeV | 5356.7 +/- 1.8 MeV | |
| | mean2 | 5366 MeV | 5257.54 +/- 16.21 MeV | |
| | sigma1 | 60.0 MeV | 84.18 +/- 2.15 MeV | |
| | sigma2 | 160.0 MeV | 173.65 +/- 9.52 MeV | |
| | fraction of Gauss 1 | 0.5 | 0.87 +/- 0.03 | |
| | #Events | 4672 | 4672 +/- 68.35 | |

Looking at the two fits, it is clear that the double-Gaussian model with independent means reproduces our data better. Additionally, although this is a fit on a single subset of the full signal MC, the result is comparable with the average values used in the Br analysis. These two arguments lead to the decision to use the more elaborate model in our toy study for the fit estimator. The choice of the Br signal model is also expected to solve the problem of toys failing the sPlot procedure, but this still needs to be tested.

The next step is to combine the signal model with the standard background models (Chebychev 1st order + Exponential). The full model was applied on the analysis invariant-mass range (4766, 5966 MeV) and with the BDT cut planned for the lifetime analysis (0.365, 1). This exercise validates the full model on data and determines the number of events needed for generating toys with the bootstrap method.

| Model | Parameter name | Initial value | Final result | Plot |
| --- | --- | --- | --- | --- |
| O(1) Chebychev + Exponential + Double Gaussian | mean 1 | 5357.7 MeV | 5357.7 MeV (*) | |
| | mean 2 | 5257.0 MeV | 5257.0 MeV (*) | |
| | sigma 1 | 83.0 MeV | 83.0 MeV (*) | |
| | sigma 2 | 193.8 MeV | 193.8 MeV (*) | |
| | fraction of Gauss 1 | 0.88 | 0.88 (*) | |
| | Chebychev slope | -0.05 | 0.996 +/- 0.0005 | |
| | Exponential constant | 500 | 141.9 +/- 0.57 | |
| | #Gaussian events | 50 | 49.93 +/- 29.18 | |
| | #Chebychev events | 150 | 36.12 +/- 6.03 | |
| | #Exponential events | 50 | 148.77 +/- 0.45 | |

* Those values are frozen for the reason explained above.

The fit to the analysis data set converges. The performance of the new model is acceptable, given that no significant fluctuations are observed in the pull plot provided at the bottom of the figure. This is clearly a positive sign towards using this model for the toy generation and in the sPlot procedure.
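As a hedged illustration of the two standard background shapes on the analysis mass window, here is a minimal plain-Python sketch (not the analysis code; the function names are ours, and interpreting the fitted "exponential constant" as a decay constant in MeV is our assumption):

```python
import math

M_LO, M_HI = 4766.0, 5966.0   # analysis invariant-mass window, MeV

def cheb1(x, c):
    """First-order Chebychev background PDF, normalized on [M_LO, M_HI].
    The mass is mapped onto t in [-1, 1]; the odd T1(t) = t term
    integrates to zero, so the normalization is just the window width."""
    t = 2.0 * (x - M_LO) / (M_HI - M_LO) - 1.0
    return (1.0 + c * t) / (M_HI - M_LO)

def expo(x, tau=141.9):
    """Exponential background PDF exp(-x / tau), normalized on [M_LO, M_HI]."""
    norm = tau * (math.exp(-M_LO / tau) - math.exp(-M_HI / tau))
    return math.exp(-x / tau) / norm
```

The full model would then be the yield-weighted sum of these two shapes plus the frozen double Gaussian.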

This section initially discusses the generation of toy samples, both in mass and decay time, from an analytical model. Along the way, the side-quests are to test the correlation between mass and proper time, to prepare reliable models for a potential 2D likelihood fit, and to test the multiple-peak structure if more background events are available. The first side-quest probes an important assumption of the sPlot technique. The second provides a fallback solution in case the numerical approach to the chi2-like variable does not converge. The last side-quest is currently the most important, because it may show a potential improvement of the current approach for determining the fit estimator; if this improvement turns out to be significant, we expect a speed-up in the progress of the analysis.

In this subsection we discuss the models used and tested for the toy generation. There are two components in our fit, the mass and the proper time, which need to be fitted simultaneously to determine the shapes of the mass and proper-time models. The models we plan to use are the following:

| Component | Mass model | Decay time model |
| --- | --- | --- |
| Combinatorial background | Chebychev O(1) | Exponential smeared with a Gaussian |
| Same sign same vertex (SSSV) background | Exponential | Exponential smeared with a Gaussian |
| Bs signal | Double Gaussian with shape parameters frozen to MC values | Exponential and error function smeared with a Gaussian |

The models shown in the table are motivated by past analyses of the proper decay time as well as by the CMS analysis of the effective lifetime. Additionally, Alex has already used these models to generate toys, so their properties are understood to a significant level.
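The "exponential smeared with a Gaussian" decay-time shape has a closed form (the exponentially modified Gaussian), so the convolution need not be done numerically. A plain-Python sketch, with lifetime and resolution values chosen purely for illustration:

```python
import math

def smeared_exp(t, tau=1.5, mu=0.0, sigma=0.05):
    """Exponential decay exp(-t/tau)/tau convolved analytically with a
    Gaussian resolution of mean mu and width sigma (times in ps)."""
    arg = sigma / (tau * math.sqrt(2.0)) - (t - mu) / (sigma * math.sqrt(2.0))
    return (0.5 / tau) * math.exp(sigma ** 2 / (2.0 * tau ** 2) - (t - mu) / tau) * math.erfc(arg)
```

Far above the resolution region the erfc factor saturates at 2 and the shape reduces to the bare exponential, which is a convenient sanity check.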

In parallel with determining the fit models, the selection of the dataset needs to be discussed. Testing the signal model is straightforward, since the full signal MC was generated independently for the Bs and Bd components that are normally present in data, so it can be applied immediately. For the background, what is available is the bbmmX continuum-background MC, which contains two components, SSSV and combinatorial background, that have to be separated. The following sections show the fits on the background and signal MC independently, to validate the models. As a reminder, in both full MC samples the BDT cut of the analysis is applied, so that the resulting PDF fits represent the type of data we require.

As discussed in the introduction of this section, the background MC has to be separated into its two components. For this reason Fabio, the main analyzer of the 15/16 Br analysis, was contacted to ask about a method to reliably separate the two components. Luckily, the full continuum-background sample contains a branch in its TTree indicating from which decay each event originates. After the BDT cut of the analysis, the following values have been found in the sample:

| Index | Decay | Component | Index | Decay | Component | Index | Decay | Component | Index | Decay | Component |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | combinatorial | Combinatorial | 10 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:D_s+[mu+:nu_mu:gamma]]gamma] | SSSV | 20 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:D*_s+[gamma:D_s+[mu+:nu_mu]]]gamma] | SSSV | 30 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:gamma:gamma:D_s+[mu+:nu_mu:gamma]]gamma] | SSSV |
| 1 | B_s0[D_s-[nu_mu_bar:mu-]mu+:nu_mu] | SSSV | 11 | B_s0[D_s-[nu_mu_bar:mu-:gamma]mu+:nu_mu] | SSSV | 21 | B*-[B-[nu_mu_bar:mu-:gamma:rho0[mu+:mu-]]gamma] | SSSV | 31 | B_s0_bar[nu_mu_bar:mu-:gamma:gamma:D_s+[mu+:nu_mu]] | SSSV |
| 2 | B*0_bar[B0_bar[nu_mu_bar:mu-:D+[mu+:nu_mu]]gamma] | SSSV | 12 | B_s0_bar[nu_mu_bar:mu-:gamma:D_s+[mu+:nu_mu]] | SSSV | 22 | B0_bar[nu_mu_bar:mu-:D+[mu+:nu_mu]] | SSSV | 32 | missing particles | ? |
| 3 | B0[D-[nu_mu_bar:mu-]mu+:nu_mu:gamma] | SSSV | 13 | B0[D-[nu_mu_bar:mu-]mu+:nu_mu] | SSSV | 23 | B+[mu+:nu_mu:rho0[mu+:mu-]] | SSSV | 33 | unmatched | ? |
| 4 | B_s0[D*_s-[D_s-[nu_mu_bar:mu-]gamma]mu+:nu_mu] | SSSV | 14 | B_s0_bar[nu_mu_bar:mu-:D_s+[mu+:nu_mu]] | SSSV | 24 | B_s0_bar[nu_mu_bar:mu-:D_s+[tau+[nu_tau_bar:mu+:nu_mu]nu_tau]] | SSSV | 34 | B0[D-[nu_mu_bar:mu-:gamma]mu+:nu_mu:gamma] | SSSV |
| 5 | B_s0[D_s-[nu_mu_bar:mu-]mu+:nu_mu:gamma] | SSSV | 15 | B_s0[D_s-[nu_mu_bar:mu-:gamma]mu+:nu_mu:gamma] | SSSV | 25 | B+[D0_bar[nu_mu_bar:mu-:pi+]mu+:nu_mu] | SSSV | 35 | B+[mu+:nu_mu:rho0[mu+:mu-:gamma]] | SSSV |
| 6 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:D_s+[mu+:nu_mu]]gamma] | SSSV | 16 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:gamma:D_s+[mu+:nu_mu:gamma]]gamma] | SSSV | 26 | B_s0[D*_s-[D_s-[nu_mu_bar:mu-]pi0[gamma:gamma]]mu+:nu_mu] | SSSV | 36 | B*_c-[B_c-[nu_mu_bar:mu-:J/psi[mu+:mu-:gamma]]gamma] | SSSV |
| 7 | B*_c-[B_c-[nu_mu_bar:mu-:J/psi[mu+:mu-]]gamma] | SSSV | 17 | B*0_bar[B0_bar[nu_mu_bar:mu-:gamma:D+[mu+:nu_mu]]gamma] | SSSV | 27 | B_s0[D*_s-[D_s-[nu_mu_bar:mu-]gamma]mu+:nu_mu:gamma] | SSSV | 37 | B+[D0_bar[nu_mu_bar:mu-:K+]mu+:nu_mu] | SSSV |
| 8 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:gamma:D_s+[mu+:nu_mu]]gamma] | SSSV | 18 | B_s0_bar[nu_mu_bar:mu-:D*_s+[gamma:D_s+[mu+:nu_mu]]] | SSSV | 28 | B_s0[D_s-[nu_tau_bar:tau-[nu_mu_bar:mu-:nu_tau]]mu+:nu_mu] | SSSV | 38 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:D*_s+[gamma:D_s+[mu+:nu_mu:gamma]]]gamma] | SSSV |
| 9 | B0[D-[nu_mu_bar:mu-:pi0[gamma:gamma]]mu+:nu_mu] | SSSV | 19 | B*_s0_bar[B_s0_bar[nu_mu_bar:mu-:gamma:D*_s+[gamma:D_s+[mu+:nu_mu]]]gamma] | SSSV | 29 | B_c-[nu_mu_bar:mu-:J/psi[mu+:mu-:gamma]] | SSSV | | | |

Except for the decays marked with a question mark, it is clear to which component each decay belongs. Therefore, using the value of the decay branch as discriminator, the two MC samples were separated. The separation (after the BDT cut) resulted in the following number of events for each sample:

| Sample | Number of entries |
| --- | --- |
| Combinatorial | 367 |
| SSSV | 203 |
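The separation logic itself reduces to a lookup on the decay-index branch; a minimal hypothetical sketch (plain Python, the real branch and sample handling differ):

```python
# Indices whose component assignment is unclear in the table above.
UNCLEAR = {32, 33}   # "missing particles" and "unmatched"

def split_components(decay_indices):
    """Split events into (combinatorial, sssv, unclear) lists of event
    positions, based on the per-event decay-index branch: index 0 is
    labeled 'combinatorial', 32/33 are ambiguous, all others are SSSV."""
    comb, sssv, unclear = [], [], []
    for i, idx in enumerate(decay_indices):
        if idx == 0:
            comb.append(i)
        elif idx in UNCLEAR:
            unclear.append(i)
        else:
            sssv.append(i)
    return comb, sssv, unclear
```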

For the decays marked in the previous table as missing particles or unmatched, we found eight such events. Their lifetime and mass values are listed in the following table:

| Index | Decay string | Mass value | Proper time value |
| --- | --- | --- | --- |
| 4122932 | missing particles | 5596.18 MeV | 1.1717 ps |
| 4133617 | unmatched | 5642.43 MeV | 0.378554 ps |
| 4438624 | unmatched | 4949.11 MeV | 1.50203 ps |
| 4492579 | unmatched | 5846.34 MeV | 2.87935 ps |
| 4919003 | unmatched | 5065.8 MeV | 7.0801 ps |
| 5220625 | unmatched | 5093.74 MeV | 2.59245 ps |
| 5270671 | missing particles | 5148.9 MeV | 2.65543 ps |

The fit on these two samples has been performed in various configurations to check whether any significant effects are being overlooked. The main goal (as a reminder) is to perform a simple fit in both mass and proper time in order to extract a sensible 2D model from which toys can be generated from analytical models. In case the 2D fit is later used to determine the lifetime in our sample, this model will be refined and improved. To validate the fit we performed the following sequential fits: first the decay time and the invariant mass were fitted independently, and afterwards the 2D fit was initialized with the values from the independent fits. The configuration is summarized in the following table.

| Background source | Individual fits plot | 2D fit plot |
| --- | --- | --- |
| Combinatorial | | |
| SSSV | | |

Examining the fit logs as well as the shapes of the PDFs projected on the dataset shows that the models chosen for the two background components fit our sample with good accuracy. We therefore concluded that the chosen parameterization is sufficiently good, and we are going to use it as a first-order shape fit to generate toys.

The same procedure has been applied to the signal distribution: first an independent fit in mass and lifetime with the model described above, followed by a 2D fit initialized with the results of the individual fits. The results of this fitting procedure can be seen below.

| Signal dataset | Individual fit plot | 2D fit plot |
| --- | --- | --- |
| Fit did not converge | | |

The section above shows the result of the 2D fit when applying the same methodology as for the background, with the models that apply to signal. Although the individual fits converge properly, the 2D fit initialized with their results fails to converge. We therefore started investigating the parameters and models used, with the following conclusions. We first looked at the construction of the lifetime model, since this is the model that seems most affected in the 2D fit. The first feature checked was whether the convolution is performed properly. We observed that, because the PDF is not exactly 0 at the upper edge of the range, an artificial bump is created. The solution is to create the PDF on a longer range and then fit only the range of interest. This workaround has proven successful: the PDF in lifetime now looks consistent with what one would expect.

The next step is to check the behavior of the individual fits. We show here how the fit is initialized and how it converges, with the two fit logs quoted in the comment section. The red line shows the starting point of the fit and the blue line the fit result. A closer look at the fit log suggests that the lifetime model struggles more to converge than the mass fit; however, both fits converge correctly.

Following the check of the individual steps comes the construction of the 2D fit model. For this we use the RooFit class RooProdPdf. With RooProdPdf one can plot the projections of the PDF onto the two variables as long as the two PDFs factorize, which is exactly the case for the mass and time PDFs. In addition, with the createNLL method of the abstract RooAbsPdf class we can compute the -log(L) value of our PDFs for the given datasets, which is also shown below. The table shows that, although the two models (2D and individual) are identical in terms of functional form, the NLL values differ. Naively, since the two models are independent, the likelihoods factorize and one would expect massNLL + timeNLL = combNLL. This is clearly not the case in our situation, raising a question about the construction of the 2D PDF from the individual ones.

| Mass NLL value | Time NLL value | 2D model NLL value | Plot |
| --- | --- | --- | --- |
| 286331.110938 | 81570.593384 | 573149.460308 | |
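The naive expectation can be checked numerically: for a factorizing PDF p(m, t) = p_m(m) · p_t(t), the likelihoods multiply, so the negative log-likelihoods add. A self-contained sketch (plain Python, toy numbers unrelated to the table above):

```python
import math
import random

random.seed(7)

def nll(pdf, data):
    """Negative log-likelihood of a 1D PDF on a dataset."""
    return -sum(math.log(pdf(x)) for x in data)

def gauss_pdf(x, mu=5366.0, sigma=100.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def expo_pdf(t, tau=1.5):
    return math.exp(-t / tau) / tau

# Independent toy datasets in "mass" and "decay time".
masses = [random.gauss(5366.0, 100.0) for _ in range(1000)]
times = [random.expovariate(1.0 / 1.5) for _ in range(1000)]

nll_m = nll(gauss_pdf, masses)
nll_t = nll(expo_pdf, times)
# 2D NLL of the product PDF on the paired dataset.
nll_2d = -sum(math.log(gauss_pdf(m) * expo_pdf(t)) for m, t in zip(masses, times))
# nll_2d equals nll_m + nll_t up to floating-point rounding.
```

If createNLL returns something different from the sum, the discrepancy must come from how the 2D PDF or its normalization is constructed, not from the factorization itself.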

Since all the input parts have been validated, a break-down of the lifetime model was started. For this, a simpler model on a smaller fitting range has been employed: for the lifetime we chose a simple error-function exponential, fitted in the range [3, 8] ps with a 2D fit, to check a few properties of RooProdPdf. The first observation is that the combination of the individual NLL values still differs from the NLL value of the 2D model, by about the same relative order as for the full model, O(1e-5). Apart from that, the fitter converges properly to the expected result, without any issue.

| Mass NLL value | Time NLL value (Erf+Exp) | Combined NLL value | Plot |
| --- | --- | --- | --- |
| 286331.110939 | 21702.010750 | 125278.786254 | 2D fit with simple lifetime model (Erf+Exp) in smaller range FitLog |

Since a huge number of fits were performed for different configurations, the following set of slides has been created, with references to the fit logs on this site. The set can be found at: Fits in 2D for mass and lifetime.

At this stage the result of the independent fits is used to generate toys and check their validity compared with the bootstraps. For this reason, a subset of the validation plots for the bootstrap toys has been reproduced with the same statistics.

After generating 1000 toys from the analytical mass and proper-time models, the sPlot procedure was applied with a single-Gaussian model for the signal in mass. Essentially the whole procedure remains the same; the only difference is that the initial pool of toys comes from the analytical models rather than from bootstraps.
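For reference, the sPlot weight for species s at mass m is w_s(m) = Σ_j V_sj f_j(m) / Σ_k N_k f_k(m), with f_k the normalized mass PDFs, N_k the fitted yields, and V their covariance matrix. A hypothetical sketch of that formula (plain Python, names ours, not the analysis implementation):

```python
def sweight(m, species, pdfs, yields, cov):
    """sPlot weight for one species at mass m.

    pdfs    -- list of normalized mass PDFs f_k(m), one per species
    yields  -- fitted yields N_k from the mass fit
    cov     -- covariance matrix V of the fitted yields
    species -- index of the species whose weight is requested
    """
    num = sum(cov[species][j] * pdfs[j](m) for j in range(len(pdfs)))
    den = sum(yields[k] * pdfs[k](m) for k in range(len(pdfs)))
    return num / den
```

A standard sanity check: with a single species and V reducing to the yield itself, the weight is identically 1 for every event.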

| Bin content distributions | Error distributions | Reference plots |
| --- | --- | --- |
| | | BinContentBootstrap, ErrorBootstraps |

Eyeballing the distributions from the previous section shows that at comparable statistics the two procedures yield similar results. However, since both samples are generated with similar statistics, they are prone to statistical fluctuations. Since for the analytical toys the plan is to use ~10x more statistics, a large sample has been created and then re-scaled to the required statistics to compare with the reference plots from the bootstrap procedure. There are two sets of plots below: the first set contains the bin distributions including the toys that have no statistics in those bins (hence a large peak at 0); the second set discards those bins, increasing the sensitivity to the shape.

| | Bin 0 | Bin 1 | Bin 2 | Bin 3 | Bin 4 | Bin 5 | Bin 6 | Bin 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Including 0-content toys | | | | | | | | |
| Discarding 0-content toys | | | | | | | | |

* Red is the rescaled 10k sample and blue the unscaled 1k reference bootstrap sample

After validating that the two procedures perform the same in terms of shapes, the next step is to produce the multi-canvas with the likelihood tagging algorithm for all the bins, to look at the separation of the peaks.

Bin 0 | Bin 1 | Bin 2 | Bin 3 | Bin 4 | Bin 5 | Bin 6 | Bin 7 |

The analytical toys so far show similar behavior to the bootstraps. The gains are that fewer fits fail with the old model, and that the number of generated toys is now arbitrary: we can choose as many toys as we want. This leads to the next step of performing a large campaign of 10000 toys to be used in the RMS studies.

The production of toys follows a very standardized pattern in both generation methods. The steps, with the plots referring to the number of generated events, are shown below. The main reason to look at the number of generated events for each flavour is that the discrepancy in the bin-content distributions originates from the different mixing of the number of events for each component.

| Step 1 | Step 2 | Step 3 |
| --- | --- | --- |
| Poisson for number of events at truth level | Number of events fitted in the mass fit | Number of signal events in the weighted histogram |

In the mass fit we see a discrepancy between the number of signal events and the number of SSSV background events, which might also explain the discrepancy seen in the total number of signal events in the sPlot projection.
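Step 1 of the toy chain can be sketched as follows (plain Python; the expected yields below are placeholders for illustration, not the analysis values):

```python
import math
import random

random.seed(1)

# Placeholder expected yields per component (illustrative only).
EXPECTED = {"signal": 50.0, "combinatorial": 36.0, "sssv": 149.0}

def poisson(lam, rng=random):
    """Poisson sampler via Knuth's multiplication method.
    The stdlib random module has no Poisson generator; this is
    adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def draw_truth_counts(expected=EXPECTED):
    """Step 1: Poisson-fluctuate the truth-level event count of each
    component before generating its mass and decay-time values."""
    return {name: poisson(lam) for name, lam in expected.items()}
```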

To test that the generation of the analytical toys is correct, a full 10k production was launched. The resulting datasets were summed and averaged, and the average histogram was produced. The full MC was scaled to the analysis number of events, and both histograms were overlaid to determine whether the analytical toys reproduce the full MC truthfully.

It can be seen in the distribution of the number of events (especially for the signal) that the bootstrap toys under-estimate it. This might be due to two effects. The first is that the mass model is not accurate enough, especially for the correlations between signal and background; this might introduce a bias on the lifetime, which also affects the studies we performed with the analytical fitters. The oddity of this study is that the analytical toys do not seem to be affected by the same issue. The main reason is that the analytical toys are by construction generated from the very PDFs used in the fit, which makes them too "perfect" compared with the bootstraps. The second reason for the discrepancy is related to the first, but instead of the PDF describing the data, it concerns the data itself. The background MC is mostly limited by statistics, so low-statistics regions (especially for the SSSV component, which falls exponentially) are being oversampled. This results in a constant bias affecting the bootstrap toys. To test the existence of these low-statistics regions, toys from the analytical models have been created with the same statistics as the bbmmX MC. These toys act as reference samples from which, applying the bootstrap method, new toys can be generated and fitted in mass as done with the bbmmX ones. This will show whether what is observed on the existing toys generated from the bbmmX MC is due to a statistical fluctuation or to an intrinsic bias of the sample.

To test whether the analytical mass model is accurate enough, several configurations have been applied to see whether any of them improves the deficit of signal events. The configurations are the following:

| Index | Configuration name | Constraint values |
| --- | --- | --- |
| 0 | No constraint | None |
| 1 | Constraint on signal mean | mean = 5366.9 +/- 10 MeV |
| 2 | Constraint on exp. const. | exp. const. = 83.4866 +/- 5.86 |
| 3 | Constraint on comb. slope | comb. slope = -0.366314 +/- 0.087405 |
| 4 | Frozen signal shape | mean = 5347.04 MeV, sigma = 102.74 MeV |
| 5 | Constraint on exp. const. and frozen signal shape | exp. const. = 83.4866 +/- 5.86, mean = 5347.04 MeV, sigma = 102.74 MeV |
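A Gaussian constraint (configurations 1-3 above) simply adds a quadratic penalty to the negative log-likelihood; a minimal sketch, using configuration 1's numbers:

```python
def gauss_constraint_penalty(value, target, width):
    """-log of a Gaussian constraint term, up to an additive constant.
    Added to the NLL, it pulls the parameter toward `target` while
    still letting the data move it within roughly +/- `width`."""
    return 0.5 * ((value - target) / width) ** 2

# Configuration 1: constrain the signal mean to 5366.9 +/- 10 MeV.
penalty_at_target = gauss_constraint_penalty(5366.9, 5366.9, 10.0)
penalty_one_sigma = gauss_constraint_penalty(5376.9, 5366.9, 10.0)
```

Freezing a parameter (configurations 4-5) is the limiting case of an infinitely narrow constraint.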

The resulting yield distributions originating from the mass fits used in sPlot, along with the mean values for the different configurations, can be seen in the next two plots. The configurations have also been tried on the analytical toys, to check their effect in the "perfect" scenario.

Bootstrap toys | Analytical toys |

To summarize the two distributions above, the mean of each distribution has been taken for every yield. The configuration scheme remained the same, and for each yield we have the analytical and the bootstrap values. It can be seen that in the analytical case the means are almost unaffected by the different configurations, proving a robust fit performance already in the unconstrained case.

Signal Yield | Combinatorial Yield | Same Sign Same Vertex Yield |

In this section the second hypothesis is tested, namely that the discrepancy between analytical and bootstrap samples is caused by the bootstraps being systematically biased in the background by the high-mass part of the mass range, where the statistics is extremely limited. For this reason, as noted above, 100 samples have been generated with the same statistics as the full bbmmX MC at the analysis BDT cut (0.365, 1.0). From these 100 samples, bootstrap samples have been created in the same amount as for the reference bootstraps used until now (generated from the bbmmX). Since this is an extremely time-consuming task, not all reference samples have been processed, but only a few, to get an understanding of the mean value for the different configurations. The expectation for this test was that if the mean value were systematically off, an intrinsic bias of the bootstrap method would be present. If, on the other hand, the mean value were consistent with the generation value, then the discrepancy observed between analytical and bootstrap toys would be due to the nature of that particular sample used until now. The mean values and RMS for the samples processed so far are shown in the following table:

Mean values | RMS values | Distribution of mean values | Mean values with RMS as error for all configurations |
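The bootstrap step itself is just resampling with replacement from the reference sample; a minimal sketch (plain Python):

```python
import random

def bootstrap_toy(reference, rng=random):
    """Draw one bootstrap toy: sample len(reference) events from the
    reference sample with replacement. Any under- or over-populated
    region of the reference is re-used in every toy, which is how a
    fluctuation in the reference can become a constant bias across toys."""
    n = len(reference)
    return [reference[rng.randrange(n)] for _ in range(n)]
```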

Since the analytical toys show a better performance "out of the box" compared with the bootstrap toys, a test has been performed to check whether fitting the analytical sPlot with the analytical fitters (chi2, multinomial) yields similar results as for the bootstrap toys. The main motivation for this test arises from the hypothesis that the shift observed in the signal yield for the bootstrap toys might affect the lifetime measurement. For this reason, the two fitters have been re-run, with the following results for the pulls and residuals:

Chi2 | Multinomial |

-- IoannisXiotidis - 2021-01-18

Topic revision: r14 - 2021-02-23 - IoannisXiotidis
