The version of the paper containing the changes listed in that TWIKI can be found in: http://diablo.phys.northwestern.edu/~mvelasco/TOP-11-028_FR_Pub_Com.pdf

See below my comments on your draft, divided into Type A and Type B.

• Type A

abstract, second line: 'compatible with the decay chain'

abstract, seventh line: 'top-quark decays' as in the title; also at L215

abstract, seventh line: 'A branching fraction t->Zq larger than'

L2 and L4: as we usually do, put no hyphen in 'top quark' and in 'bottom quark'

L13: 'upper limit of 3.7%'

L59: add a comma after 'events'

L89: 'single top quark events'

L99: 'single top quark production' also at L154

L102: 'distributions for data'

L110: 'top quark mass', also at L168

L113+1: we usually write 'b jets' with no hyphen; also at L143, 144 and 151, and in the caption of fig.2

L157: 'The WW production'

L163: 'single top quark'

ref.18: write 'D' in boldface and attached to the volume, as done for ref. 3

• Type B

L15: 'at the LHC at the next-to-leading order (NLO) is about 157.5 pb for an assumed top quark mass of 172.5 GeV.'

L39: 'isolated leptons (e or \mu)'

L113+2: you call it 'missing transverse momentum' here, but it was 'missing transverse energy' at L75; be consistent

fig. 1, S_T: the dashed line at 250 is not described in the caption, while you do it for fig. 2

L148: more than calling it 'nominal' I would call it 'assumed top quark mass' (with no hyphen)

L188: I think that the syst uncertainties are 'related to' rather than 'due to' the things you quote; also write 'choice of parton distribution functions'

table 3: the number of significant digits is not consistent, but we can discuss how to handle this at the final reading

table 4: I think you should say that the uncertainties have been rounded to integers, or else the quadratic sum does not give the total.

Dear Authors - please find my comments based on the FR version mentioned in Mayda's email below.

• TYPE A:

- Title: 'Flavor-Changing' -> 'Flavor Changing'

- remove '-' between flavor and changing throughout the paper, no need!

- Abstract: last sentence: 'A t->Zq branching fraction larger than...'

- L2: 'The top quark decays with a nearly 100% branching fraction to a bottom quark and ...'

- remove '-' between top and quark, and between bottom and quark throughout the paper when given as a name

- L4: '... t->Zq, where...'

- L13: '... a B(t->Zq) upper limit of...'

- L15: '... about 157.5 pb, which is twenty times ...'

- L16:'This allows for use of event samples based on leptonic decays of the vector bosons, where backgrounds can be well determined.'

- L18-19: '... state events, which produce three-lepton ...'

- L21: 'The analysis uses ...' avoid use of 'our' 'us' etc when possible

- L27: 'Charged particles ...' no need for '-'

- L37: 'elsewhere' -> 'in Ref.'

- L43: 'is' -> 'are'

- L46: '... muon candidate must have associated hits in the silicon...'

- L48: remove '>'

- L58: use 'pileup' here and throughout the paper

- L61: remove 'background from'

- L68: change 'to' to '->' to be consistent

- L70: '... a W boson.'

- L84-85: I'd add these lines on the end of the last paragraph.

- L86: remove the first sentence - stating the obvious. Next sentence: 'The samples of simulated ...'

- L90: remove '[14]' already given on previous line

- L94-95: add these lines on the end of the last paragraph

- Table 1 caption: '... missing transverse energy requirements, for a total...'

- L102: '... the distributions for data and...'

- L107: 'The m_T is calculated using the transverse...'

- Fig 1 caption: '... W boson candidate, and (c) the ...' last sentence ' ... the dominant backgrounds.'

- Eqn above L114: give a neutrino subscript to p_z too - consistent with the other components

- L119: '... of p_zn is taken; studies with ...' n for neutrino subscript in greek symbol

- L121: 'Next, we add the requirements on jets, ....'

- L125: '...the hadronization of a b quark, namely a "b jet".'

- L126: 'The first selection is the more...'

- L128: add this line on the end of the last paragraph

- Table 2 caption: '... three-lepton channels in ...' and remove 'Only statistical errors...'

- L130: 'In the S_T selection, at least two jets are required, which are assumed to come from the same primary vertex.'

- L133: remove sentence 'A selected event...'

- L135 '... between 100 and 250 GeV/c^2, and the ...'

- L136: remove 'in the transverse plane'

- L140: remove 'Based'

- L141: '... from diboson events, a b-tag selection is....'

- L142: '... this selection, the two jets are ...' and remove 'already'

- L143: '... the Z candidate and one of the two jets is a b jet.'

- L144: 'The b jets...'

- L145: 'in Ref. [20]...' and 'impact parameter' no s

- L148-149 'GeV/c^2'

- L151: 'b jet'

- Fig 2 caption: 'Comparison between data and simulated events (total luminosity of 5.0 fb^-1) of the ...' also remove 'based' and write 'b jet' and '... show the dominant backgrounds.'

- L153: 'our' -> 'the'

- L160-161: remove 'based' 'respectively' and 'already'

- L166: remove 'Similarly'

- L170: '... background estimates are based ...'

- L184: remove 'based' 'selections,...' and add ', respectively.' on the end of this sentence

- L185: 'compatible with the expectations based ...'

- L186: add this line on the end of last paragraph

- Table 3 caption and headers: remove 'based'

- L188: 'The systematic uncertainties come from the...'

- L195: remove 'based'

- L197: 'background estimation is listed with the total...'

- Table 4 caption: remove 'in per cent' remove 'based' in caption and Table headers

- Table 4: I'd give the totals with ±

- L199: 'In the S_T (b-tag) selection, ...'

- L209: 'limits' -> 'limit'

- L210: remove 'respectively,'

- L212: remove 'based'

- L213: 'is' -> 'are' and remove ', respectively'

• TYPE B:

- L39: 'Events with two opposite-sign charged leptons (electron or muon) consistent with a Z-boson decay and an extra charged (+ or -) lepton are ...' All three leptons must be isolated (defined later) and have ...'

- L56: state what happens when both are in that mass window!

- L58: give what is meant by 'good primary vertex'

- L69: 'large background' not clear which background, reader cannot tell background to lepton ID or to signal!

- L71: what is the 'electron sample' - reader would be confused - you have only discussed a 3-lepton sample up to this point! I understand this is the electron subsample of the 3-lepton sample - clarify in the paper...

- L74: useful to mention what % of events have a 4th lepton, hence what % rejected!

- L79: what is a 'dynamical correction' the reader would ask! Need to clarify

- L82: is this requirement of delta_R>0.4 really necessary since you already have lepton isolation?!

- Table 1: I'd give the number and its errors rather than '< 0.001' which has no error!

- L114: 'E_Tl' -> 'E_l' and remove 'transverse'

- L117: remove sentence 'If the discriminant is found...' what does it mean anyway!

- Table 2 caption: '... with leptonically (e, mu) decaying...' no tau here - right! ==> No, taus need to be included

- L129: remove 'Based' use 'S_T Selection' consistent with the above section

• Type A:

line 20: "at the expense of fewer signal events" -> "at the expense of more signal events" ===> That is not what the sentence is saying

line 67: "correcting" and "rejecting" are both dangling participles in this sentence, although the ambiguity is not too bad. "correcting" -> "correction" would easily fix the first case (so as not to imply that the "quantity" is "correcting"). ===> IS THIS CORRECT?

Figure 1 and Figure 2: Should all of the ordinates have labels like "Events/10 GeV"? ==> ?

• Type B:

line 18: "CDF and D0" -> "CDF and D0 at the Tevatron collider" ==> Will add in the next round

line 111: "since there is no unambiguous way to pair the light-quark jets with the Z". I could not discern from the paper how you handle the combinations in the b-tagged selection. If there are only two jets, and one is b-tagged, then there is no ambiguity. Do you still examine all possible combinations in this case?

line 209: "The best observed (expected) 95% CL ..." This makes it sound as if you chose the best observed limit as your final result, but that is not correct without an additional statistical correction for trying two different selections. Usually, there is some a priori decision to select the individual result with the strongest expected limit or to combine the correlated results for a single limit. An explicit statement of the a priori choice would be reassuring.

• Type A

L40, a minor detail: since you always leave the "\pm lepton" last in the list, do it also for emumu; write mplus-mminus-eplusminus. Table 1: a space is missing after 0.9 in the first line.

• Type B

L60-61, I guess you are only defining acollinearity in a rather long-winded way. Isn't it clear enough to say "Acollinearity > 0.05 radians is required to reject..."?

L67-69, I cannot understand what the different cuts on deltaR mean. The cut should be applied to leptons and at this stage, it is not clear if they come from a Z or W

L112, I already asked this before, but don't remember having got an answer, you say there is ambiguity on the pairing with light-quark jetS, but my understanding is that in your topologies only ONE light quark is present

Figure 2. I think the red line is confusing, these not being "N-1" plots. I suggest dropping it and mentioning in the caption that the mass plots are made before both cuts are applied ===> This plot is needed

Table 3. In the first two lines you present some statistical errors with an additional significant digit. I understand that is to avoid a +-0.0, but I think that is what you have to include. Not sure, though...

Table 4, shouldn't you include lumi error as an additional item on the table, rather than mentioning in the caption?

L199, again possibly an extra significant digit (0.8+-0.02+-0.1)

Section 7, I find the last paragraph difficult to read. My suggestion is to initially quote the expected number of events, followed by the expected limits and boundaries. In a different paragraph you quote the observed number of events and the limit.

I just approved the CMS-TOP-11-028, but realized that the reference to the similar ATLAS work is missing in the paper:

A search for flavour changing neutral currents in top-quark decays in pp collision data collected with the ATLAS detector at √s = 7 TeV

arXiv:1206.0257 <http://arxiv.org/abs/arXiv:1206.0257> ; CERN-PH-EP-2012-139

(1) This is a previous comment for v4, i.e.

> (2) In all figures,
> (a) on the top line inside the plot, the power (-1) of "fb" all
> should be moved toward the right for a half size of a letter, i.e.
> "= 7 TeV, 5.0 fb**(-1)" -->
> "= 7 TeV, 5.0 fb**( -1)" - show quoted text -

I'm glad to see that the effort has been made to improve this. However, it looks like the modification went a little too far, i.e. the power (-1) of "fb" may now be better moved back toward the left a very little bit.

(2) This is a previous comment for v4, i.e.

> (7) L88 and L108 (now L75, L105 and L108 in v5), at 2 (now 3 in v5)
> places, the distance between the "E/" and its subscript "T" seems a
> little too much, it will be looked better if to reduce it.

There still seems to be not much improvement. Is it possible to try again to reduce the distance, to make them look better?

Page 4

(3) This is a previous comment for v4, i.e.

> (11) L120, it is printed that "... is the correct one more than 60% of
> the time"; this is looked like that nearly 1/2 is not correct, i.e. one
> decision being correct is only a half of chance, another half is being
> wrong???

===> from Andrea Giammanco

This is indeed the case. See these two mails (the second is the answer to the first): https://hypernews.cern.ch/HyperNews/CMS/get/TOP-11-028/25/1.html https://hypernews.cern.ch/HyperNews/CMS/get/TOP-11-028/25/1/1.html

It is extremely hard to do better than that. In my own experience, this is anyway not an issue, because the events where this simple criterion has less discriminating power between "correct" and "incorrect" solutions are also the events where the "correct" and "incorrect" solutions are closer in value, and therefore picking the incorrect one has little effect. Maybe the sentence can be removed, to prevent similar doubts in future readers?

Page 10

(4) L228, at the end of line, a letter is missing, i.e. "CAS, MoST, and NSF" --> "CAS, MoST, and NSFC".

Please cross-check with all other CMS papers.

Page 11, in the References Section,

(5) L285, in [18], to be consistent with other Refs. in this Section, if the number of authors is more than 3, only the 1st author is put in the Reference plus "et al.", instead of "2 or 3 names + et al.", i.e.

"[18] P. M. Nadolsky, H.-L. Lai, Q.-H. Cao et al.," --> "[18] P. M. Nadolsky et al.,"

(6) L293, in [21], the "year" number should probably be given. If there are problems displaying the year number with the default bib file, it may be fixed by changing from "article" to "unpublished" in the bib file.

### E-mail exchanges after CWR, prior to FR:

Comments from Francisco Matorras on Jun. 12.

Hi Mayda and Yuan,

I see that Ulrich Heinz sees the same problem I see of
quoting MC stat errors and syst errors on the background estimation. I'd
propose to just quote the total error in this case, mentioning in the
text that the error accounts for both MC stat and systematics.
Since I will be at the FR, I have no problem if you go ahead now to avoid
delays and we discuss it later, before or during the FR.

Francisco


Response from Mayda Velasco

Dear Francisco,

here is a copy of what we are putting on the TWiki to answer this question, which is the
same answer that we gave during the review process:

The errors for the data-driven WW/WZ/ZZ estimates are from Poisson sqrt(N)
for the observed number of events, originally calculated for the HT_S-based selection.
In a second step, the HT_S estimate was scaled to get the equivalent estimate for the
b-tag-based selection. Therefore, the error in the b-tag selection is just a*sqrt(N),
where "a" is the ratio of acceptances between the two selections.

The central values for the WW/WZ/ZZ estimates were found to be the same between the
data-driven method and those obtained from simulated events, for both selections.
The uncertainty for the MC estimate was larger due to the limited MC statistics.

I will send the link to the TWiki later today.

Best regards,
Mayda


Andrea Giammanco

Hi all,
My two cents: I find appropriate that if the uncertainty is calculated as a*sqrt(N) the reader can know the relative contribution of the purely statistical part (sqrt(N)) and of the systematic uncertainty on a. This is part of the things that allow "reproducibility". If the ARC doesn't like this, a possible compromise is to give the combined uncertainty in the tables, mentioning in the text that the error accounts both for MC stat and systematics (as Francisco suggested) and also mentioning the relative weight of the MC stat on the combined uncertainty.

Cheers,
Andrea


One more thing I didn't realise before, and maybe this is what Ulrich is asking about.
The errors you quote are exactly the sqrt of the number you quote there,
which is not N, but N times a factor scaling from the MC luminosity to the data
luminosity. If this is the case, it is not what you should do.
Let's say the number you quote is Nexpected = alfa*NMC; your error should then
be alfa*sqrt(NMC) and not sqrt(Nexpected) = sqrt(alfa*NMC), which is
different.
Or am I missing something?
If I'm right, please correct the numbers; if not, clarify it in the text.
That's independent of the fact that I still prefer you to quote a single error in
these tables.

Francisco
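Francisco's point can be checked with a minimal numeric sketch; the sample size and luminosity factor below are hypothetical, chosen only to illustrate the scaling, not taken from the analysis:

```python
import math

# Hypothetical numbers, for illustration only (not from the paper).
n_mc = 4000     # raw simulated events passing the selection
alfa = 0.0034   # scale factor from MC luminosity to data luminosity

n_expected = alfa * n_mc                # normalised expected yield
correct_err = alfa * math.sqrt(n_mc)    # scale the MC statistical error
wrong_err = math.sqrt(n_expected)       # sqrt of the already-scaled yield

# For alfa < 1 the wrong recipe overestimates the error by 1/sqrt(alfa),
# since sqrt(alfa*N) / (alfa*sqrt(N)) = 1/sqrt(alfa).
print(correct_err, wrong_err, wrong_err / correct_err)
```

With a luminosity factor well below one, the difference is large, which is why quoting sqrt(Nexpected) as the MC statistical error is misleading.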


After talking with Andrea and reading more carefully, I see you also have questions
about the errors in Table 1. They do look strange and I will check them more carefully.
More news soon,
Mayda


Please check table 3 (ST column) and numbers in line 156. They could
suffer from the same problem.

Francisco


Dear ARC and Peter,

we have implemented all the changes.
There are two things that we will do before the FR on Thursday. They
are:

1) Check the error usage in Table 1
2) Get the proper reference for the top cross-section measurement.

In the meantime you can take a look at  the new version that is

http://lotus.phys.northwestern.edu/~mvelasco/TOP-11-028_temp.pdf

https://twiki.cern.ch/twiki/bin/view/Main/FCNCTopCWR

Best regards,
Mayda


Dear All,

I have questions now about the uncertainties in table 3 as well.
Naively, I would expect the ratio of the number of WZ background events in the S_T and b-tag
analyses to be the same as the ratio of the uncertainties, since
N_b-tag = a * N_ST, where a is some constant due to the b tagging.
Then the statistical uncertainty on N_ST = sqrt(N_ST),
while the statistical uncertainty on N_b-tag = a * sqrt(N_ST).

But 0.7/13.6 = 0.051 and 0.1/3.7 = 0.027. Maybe this is a round-off problem, but
I want to make sure. This might affect the "answer" so I would like it to be resolved before the green-light.

Thanks to Francisco and (maybe Uli) who spotted this first.

We can fix this soon. But I think we should take the needed time.

Regards,

Nick

PS Twiki looks good. All other questions answered.
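Nick's cross-check above can be reproduced directly from the WZ numbers quoted in the thread: if N_b-tag = a * N_ST and the b-tag statistical error is a * sqrt(N_ST), the yield ratio and the error ratio should agree, but they do not:

```python
# WZ numbers quoted in the thread (S_T vs b-tag selection).
n_st, err_st = 13.6, 3.7
n_btag, err_btag = 0.7, 0.1

yield_ratio = n_btag / n_st      # a, if N_btag = a * N_ST
error_ratio = err_btag / err_st  # should also equal a if err_btag = a * err_ST

print(round(yield_ratio, 3), round(error_ratio, 3))  # 0.051 0.027
```

The mismatch (0.051 vs 0.027) is what prompted the request to resolve the uncertainties before green-light.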


Please refer to the calculation discussed in the ARC meeting held in March.

https://indico.cern.ch/conferenceDisplay.py?confId=183803

(the only difference is that WZ and ZZ are now calculated separately and the

As to the systematics, we didn't use exactly the grand total listed in Table
4. Since WZ/ZZ are re-normalized from the N_jet=0 bin, the 8% cross-section uncertainty
is not counted in the background part; it is only used in the signal part.
A 7% is added from the normalization (as described in the PAS). The lumi. syst.
is only used in the limit calculator as a separate input.
So for the S_T method the actual syst. is ~19%:
==> WZ: sqrt(13.6) = 3.7, 13.6 * 19% = 2.6
ZZ: sqrt(1.1) = 1.0, 1.1 * 19% = 0.2
For DY+ttbar, the uncertainties of the true and fake lepton isolation efficiencies
are embedded (bear with me for not repeating them here).

Similar procedures are done for the b-tag method, with the additional tagging syst. embedded.

As to the uncertainty alfa*sqrt(NMC), it's the MC syst. due to the MC sample
statistics. It's fairly small for the large MC samples we have. Or is my
understanding wrong here?

Yuan
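The arithmetic in Yuan's explanation can be verified directly against the Table 3 entries (a quick check of the quoted stat and ~19% syst numbers):

```python
import math

# ~19% total systematic for the S_T method, as quoted in the email above.
syst_frac = 0.19

for name, n in [("WZ", 13.6), ("ZZ", 1.1)]:
    stat = math.sqrt(n)   # Poisson-style stat error quoted in Table 3
    syst = n * syst_frac  # 19% systematic
    print(f"{name}: {n} +- {stat:.1f} (stat) +- {syst:.1f} (syst)")
# WZ: 13.6 +- 3.7 (stat) +- 2.6 (syst)
# ZZ: 1.1 +- 1.0 (stat) +- 0.2 (syst)
```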


Sorry Yuan, but I don't understand.
What do you call the statistical error on the number of expected events from a background you estimate from simulation? Isn't that the MC stat error? If so, you should take the sqrt of the actual number of events you select in MC, before normalising to the lumi, and then apply the normalization. And yes, I'd expect it to be much smaller.
If not, I don't understand what the error is that you are quoting as the sqrt of the number of expected events.

Francisco


Dear Francisco:

> What do you call the statistical error on the number of expected events
> from a background you estimate from simulation? Isn't that the MC stat
If I understand correctly, the uncertainty calculated as
alfa * sqrt(N_MCsurv), where N_MCsurv is the number of surviving events of the MC sample,

is one of the "systematic uncertainties", due to the MC "statistics", which
should be listed in Table 4. With more MC events used, we can reduce this
uncertainty. Since the MC events after all selection cuts for WZ/ZZ are
~4100/~2800 for the S_T method, this "systematic uncertainty" is sub-percent and
negligible, therefore not listed.

However, this uncertainty would be meaningful to list in Table 1, as it
provides information about the MC sample statistics.

Hope this is clear.

Yuan


I'd expect a relative error of 1/sqrt(N), which is 1-2% for the numbers you quote. Still small but visible.
However, my question is if you consider this negligible, what is the meaning in table 3 of the statistical uncertainties (caption says "The uncertainties in the background estimation include the statistical and systematic components separately")
For example for WZ you quote 13.6+-3.7+-2.6. What is that 3.7? It is exactly sqrt(13.6), but what is its meaning?

Francisco


Sorry, I was caught by mail server trouble previously.

>> On Tue, 12 Jun 2012 20:36:04 GMT, Francisco Matorras wrote:
>>> What do you call the statistical error on the number of expected events
>>> from a background you estimate from simulation? Isn't that the MC stat
>> If I understand correctly, the uncertainty calculated as
>> alfa * sqrt(N_MCsurv) where N_MCsurv is the survived events of the MC sample
>> is one of the "systematical uncertainty" that due to MC "statistics" which
>> should be listed in Table4. With more MC events used, we can reduce this
>> uncertainty. Since the MC events after all selection cuts for WZ/ZZ are
>> ~4100/~2800 for S_T method, this "systematical uncertainty" is sub % and
>> negligible therefore not listed.
> I'd expect a relative error of 1/sqrt(N), which is 1-2% for the numbers you
> quote. Still small but visible.
After multiplying by 'alfa' it's really at the sub-percent level.

> However, my question is if you consider this negligible, what is the meaning in
> table 3 of the statistical uncertainties (caption says "The uncertainties in the
> background estimation include the statistical and systematic components
> separately")
The calculation of the tt+DY part includes this effect, as there it is not negligible.
As for the caption of Table 3, it is simply telling people that the
first error is statistical and the second is systematic.

> For example for WZ you quote 13.6+-3.7+-2.6. What is that 3.7? It is exactly
> sqrt(13.6), but what is its meaning?
This is exactly sqrt(13.6) as mentioned in the previous email.

If you are available now, I've created an EVO room. (TOP-11-028 in CMS)

Yuan




Dear Authors and Analyzers:

When you have completed the latest revisions, please tell the ARC exactly what was given to the
CLS calculator as inputs. Note that the calculator already includes properly the effects of the
statistical fluctuation on the expected number of background events.

Thanks,

Nick (for the ARC)


Thanks a lot for the info. At least for the S_T method, the stat. is added in quadrature to the
syst. and put into the CLS calculator as input. So, since the calculator already
includes this part properly, I have double-counted it. I'll soon provide
an update.

Yuan


On 13 June 2012 19:53, Nick Hadley wrote:

> When you have completed the latest revisions, please tell the ARC exactly what was given to the
> CLS calculator as inputs. Note that the calculator already includes properly the effects of the
> statistical fluctuation on the expected number of background events.
Sorry about the late email. There was some delay, as it happened that Mayda and
Steven were both unavailable last week. Over the last weekend, we finally reached
a conclusion on the numbers to be changed.

So, for the uncertainty from the MC sample statistics, I've double-checked and the
effect is small. For example, for WZ we have ~4000 events after all selection cuts.
This leads to a 1.6% uncertainty, and the actual contribution to the limit is
negligible, O(10^-3), after we add it in quadrature to the listed systematics, with
rounding. Similar for ZZ (1.9%).

As to the numbers we put into the CLS calculator, we wrongly added in quadrature the
statistical fluctuation on the expected number of background events, as Nick pointed
out. So we need to update the numbers accordingly, with confirmation from Steven
for the b-tag method. The final numbers are (errors include MC stat. and
total syst., as suggested):

Table 3:
                 S_T              b-tag
WZ               13.6 \pm 2.6     0.7 \pm 0.1
ZZ               1.1 \pm 0.2      0.06 \pm 0.01
tt+DY            1.5 \pm 0.6      0.06 \pm 0.03    % the "stat." of N_tight

Expected (%)     < 0.39           < 0.39
Observed (%)     < 0.28           < 0.28

1-sigma bound    [0.26-0.53]      [0.28-0.52]

Mayda is also traveling this week. She will provide an updated paper draft
soon. Also, sorry if you get duplicate or no messages, as HyperNews is currently down.

Yuan
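Yuan's numbers can be checked directly: the relative MC-statistics uncertainty is 1/sqrt(N), and adding it in quadrature to the ~19% systematic shifts the total only at the 10^-3 level:

```python
import math

# MC sample sizes after all cuts, as quoted above (approximate).
n_wz, n_zz = 4000, 2800

rel_wz = 1 / math.sqrt(n_wz)  # relative MC-statistics uncertainty, ~1.6%
rel_zz = 1 / math.sqrt(n_zz)  # ~1.9%

# Added in quadrature to the ~19% systematic, the shift is O(10^-3):
syst = 0.19
combined = math.sqrt(syst**2 + rel_wz**2)
print(f"{rel_wz:.1%} {rel_zz:.1%} shift={combined - syst:.4f}")
# 1.6% 1.9% shift=0.0007
```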


Thanks for the update, Yuan.
Can you double-check your limit calculations? I would naively expect a larger reduction of the limit. With a back-of-the-envelope calculation I would say that you were double-counting stat errors, so it is roughly equivalent to losing half of the sample. Since the systematic error is small, I'd expect a reduction of about sqrt(2), so a limit of about 0.3%. I'm probably being oversimplistic, but please check again to be 100% sure.
On the other hand, I suppose you are aware that now both methods give identical expected limits. That would require a rewrite in several parts of the paper, because the argument that we choose this one because it gives the best expected limit is no longer valid (and I wouldn't go to the third digit!).

Francisco
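Francisco's back-of-the-envelope estimate: removing a double-counted statistical error from a statistics-dominated limit should tighten it by roughly sqrt(2); applied to the 0.43% expected limit quoted later in the thread, that indeed gives about 0.3%:

```python
import math

# If the stat error was counted twice, that is roughly equivalent to
# halving the sample, so the limit should improve by about sqrt(2).
old_expected = 0.43  # % expected limit before the fix (from the thread)
print(round(old_expected / math.sqrt(2), 2))  # ~0.3
```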


I did double-check the results. Since the statistical uncertainty due to the
fluctuation of the background should be considered and is taken care of by the limit
calculator, we did double-count the total uncertainty. But for the S_T case, the
change should be sqrt(16.2) ⊕ 3.7 ⊕ 2.6 -> sqrt(16.2) ⊕ 2.6 (quadrature sum), which is
quite large. As to the b-tag method, the original statistics 0.1 is coming from the

(don't know how the limit calculator estimates it; here only taking sqrt(n))


The observed limit is more or less what I expected: 0.35/0.28 = 1.25 ~ 1.3 ~ sqrt(2). However, for the expected limit, 0.43/0.39 = 1.1 < 1.3; shouldn't you get something like 0.33?

Though it doesn't change as much as I expected. To me it makes more sense
for the expected limit, as the limit calculator should take some amount of statistical
fluctuation from the background events.


dear all,

as for the need of a second CWR round, let me give you an excerpt from the CWR guidelines:

"The ARC chair notifies the PCB chair and PC chair and also posts on the hypernews a short report describing the main modifications between the previous and the present draft, including a recommendation to go either to the Final Reading (FR) or to a second round of comments in a new CWR. Note: if the analysis described in the paper has been modified in a significant way in its methods, results or conclusions, then a second round of CWR is necessary."

so it should be up to the ARC to decide on this. Let me add that the second CWR could be shortened with respect to the usual two weeks.

best

andrea


The ARC will wait until the next draft is ready and then decide about the changes and whether a second CWR is needed.

Regards,

Nick (for the ARC)


Dear colleagues,

the version to be used for the final reading is available in SVN and in:

http://diablo.phys.northwestern.edu/~mvelasco/TOP-11-028_FR.pdf

For now, the 0.00 values were set to 0.001, and at the FR we can decide
if we should combine the STAT and SYST errors, in which case this
issue will go away.

Best regards,
Mayda


-- YuanChao - 14-Jul-2012

Topic revision: r4 - 2012-07-17 - MaydaVelasco
