Projects

This log documents my work as an undergraduate student on radiation-level studies using FLUKA and on toy physics analyses (2013-2014). After my final log entry I visited CERN as a summer student and the Fermilab LPC as a visitor. The final product of these experiences is a new top-quark mass measurement using purely relativistic kinematics (the b-quark energy in the top rest frame), which is currently a CMS public result named "Measurement of the top-quark mass from b jet energy spectrum": http://inspirehep.net/record/1393817/


HCAL Radiation Project:


This project consists of simulation work whose objective is to estimate the radiation levels in the Front-End Electronics of the HCAL Forward (HF). The estimates are made with FLUKA (a Monte Carlo code for radiation transport).

Thanks to Pablo Jácome! pablo.jacome<at>SPAMNOTcern.ch

Most of the information you can see here was recorded by him while he was working on the project. He made studies of Total Ionizing Dose (TID), Deposited Energy and Particle Fluence in the HF-FEE; his notes are like gold if you are starting. I have replicated his work and will continue with activation studies.

The official HCAL Radiation Project twiki:

https://twiki.cern.ch/twiki/bin/view/Sandbox/HCALRadiationProject


FLUKA (Monte Carlo simulation of radiation transport)


If you are starting with FLUKA, I strongly recommend these steps. It is important that you go step by step.

Read and learn about FLUKA

I started by learning how FLUKA works. If you want to begin with this kind of simulation, start by reading about FLUKA: https://www.fluka.org/fluka.php?id=secured_intro. FLUKA has a graphical interface named FLAIR, which makes it easier to edit FLUKA input files, execute the code and visualize the output files: http://www.fluka.org/flair/index.html.

Install FLUKA and its graphic interface FLAIR

Once you have read about FLUKA you can install it on your laptop. In case you have a computer with a 32-bit architecture you can follow this link: https://twiki.cern.ch/twiki/bin/view/Sandbox/HCALRadiationProject#Install_FLUKA_and_its_graphic_in

Read the manual of FLUKA

You have to know how processes are handled in FLUKA, so it is a good idea to read the FLUKA manual. There is an online manual you can read to become familiar with the code. Please follow this link: http://www.fluka.org/fluka.php?id=man_onl.

Understand how FLUKA and FLAIR work together

Fortunately FLUKA has an interface. You can start by watching tutorials on how to build geometries in FLAIR. On YouTube there is a channel where you can follow videos and learn! Watch it, it will be very useful: http://www.youtube.com/user/Flair4Fluka.

This point is very important: you have to know how to run the FLUKA code from FLAIR. You have to read the FLAIR guide, which you can find here: http://www.fluka.org/flair/flair.pdf. The links there explain how to run a simulation using FLAIR.

Activation Studies in the HF-FEE:

I will continue with the estimation of radiation levels in the Front-End Electronics in the HF that Pablo had begun. I will keep a diary of my progress.

Most of the information is taken from http://www.fluka.org/fluka.php?id=man_onl

Information about activation studies was taken from http://www.fluka.org/content/course/NEA/lectures/Activation.pdf


March 5, 2013: Simulation of activated nuclei in the HF per 14TeV proton-proton collision

The RADDECAY card requests the simulation of radioactive decays and sets the corresponding biasing and transport conditions (defined with other cards) to be applied to the transport of the decay radiation.


March 14, 2013: Rack Model of Front-End Electronics in the HCAL Forward

Finally the realistic rack model is ready to be simulated. The simulation aims to estimate the Total Ionizing Dose (TID).

The racks have 3 parts:

  • Metal case (aluminium)
  • Motherboards (silicon)
  • Air inside the rack
You can see the model in the next pictures:

Both racks in the HCAL Forward

Front-Racks.png

Vertical motherboards, made of silicon, inside the upper rack.

Front-UpRack.png

Vertical motherboards, made of silicon, inside the lower rack.

Front-LowRack.png

March 17, 2013: Problems with LXPLUS jobs

I tried to simulate the Total Ionizing Dose on lxplus but something happened. The service notice said:

Thu 13th Mar 2013: The SLC6 lxplus service lxplus6.cern.ch will be redirected
to new machines: See http://cern.ch/go/8wrZ


March 24, 2013: Problems with LXPLUS jobs

I tried to simulate again. The service notice now said:

Tue 19th Mar 2013: The SLC5 lxplus.cern.ch service will rebooted over the next week.


March 26, 2013: Problems with LXPLUS jobs

I tried to use the CERN IT Agile Infrastructure project (using daguerre@aiplus6NOSPAMPLEASE.cern.ch) but now my jobs are pending. I submitted just 10 jobs for testing.

[daguerre@lxplus4dad95cd ~]$ bjobs
JOBID     USER    STAT QUEUE FROM_HOST   EXEC_HOST JOB_NAME SUBMIT_TIME
382146391 daguerr PEND 2nw   lxplus4dac8           sim 1    Mar 26 07:13
382146397 daguerr PEND 2nw   lxplus4dac8           sim 2    Mar 26 07:13
382146399 daguerr PEND 2nw   lxplus4dac8           sim 3    Mar 26 07:13
382146402 daguerr PEND 2nw   lxplus4dac8           sim 4    Mar 26 07:13
382146407 daguerr PEND 2nw   lxplus4dac8           sim 5    Mar 26 07:13
382146412 daguerr PEND 2nw   lxplus4dac8           sim 6    Mar 26 07:13
382146413 daguerr PEND 2nw   lxplus4dac8           sim 7    Mar 26 07:13
382146417 daguerr PEND 2nw   lxplus4dac8           sim 8    Mar 26 07:13
382146424 daguerr PEND 2nw   lxplus4dac8           sim 9    Mar 26 07:13
382146433 daguerr PEND 2nw   lxplus4dac8           sim 10   Mar 26 07:13

March 27, 2013: Learning about scorings for activation studies

Activation Studies: General inputcards in FLUKA

These new cards will be applied in a basic example using the FLUKA files (Very_simplified_CMS.zip) that you can find here: https://twiki.cern.ch/twiki/bin/view/Sandbox/HCALRadiationProject#Learning_the_required_scorings

First Step: Erase all scoring cards (USRBIN, USRCOLL, USRTRACK)


RADDECAY: Request radioactive decays


Using Flair go to Card/Transport/RADDECAY. You have to define the card as you can see in the picture:

Raddecay.png

The principal parameters you have to define are:

  • What(1):
    • =1 (Decays: Active): radioactive decays activated for the requested cooling times (the "activation study" case): the time evolution is calculated analytically for fixed (cooling) times, and the daughter nuclei as well as the associated radiation are considered at these fixed times.
    • >1 (Decays: Semi-analogue): radioactive decays activated in semi-analogue mode: each radioactive nucleus is treated like all other unstable particles (random decay time, daughters and radiation), and all secondary particles/nuclei carry a time stamp ("age").
  • What(2):
    • >0 (Patch Isom: On): isomer production activated.
  • What(3):
    • #(Replicas): number of “replicas” of the decay of each individual nucleus.
  • What(4):
    • switch for applying various biasing features only to prompt radiation or only to particles from radioactive decays
  • What(5):
    • multiplication factors to be applied to transport cutoffs

IRRPROFI: Definition of irradiation pattern


Using Flair go to Card/Transport/IRRPROFI. You have to define the card as you can see in the picture:

Irrprofi.png

The principal parameters you have to define are:

  • What(1, 3, 5):
    • irradiation time in seconds.
  • What(2,4,6):
    • beam intensity (particles per second)
In this example we have:

180 days - 185 days - 180 days

5.9 x 10^5 p/s - 0 p/s - 5.9 x 10^5 p/s


DCYTIMES: Definition of cooling times


Using Flair go to Card/Transport/DCYTIMES. You have to define the card as you can see in the picture:

Dcytimes.png

The principal parameters you have to define are:

  • What(1-6): Cooling time (in seconds) after the end of the irradiation.
For this example we have: 1 hour, 8 hours, 1 day, 7 days, 1 month, 4 months.


DCYSCORE: Associate scoring with different cooling times.


Using Flair go to Card/Scoring/DCYSCORE. You have to define the card as you can see in the picture:

Dcyscore.png

The principal parameters you have to define are:

  • What(1):
    • Cooling time index to be associated with the detectors. A drop-down list of the available cooling times is provided.
  • What(4-6):
    • Detector index/name of the given kind (SDUM/Kind). A drop-down list of the available detectors of that kind is provided.
  • SDUM
    • Kind: RESNUCLE, USRBIN/EVENTBIN, USRBDX, USRTRACK…
The quantities we are going to estimate are expressed per unit of time.

For this example:

RESNUCLE : Bq

USRBIN: fluence rate /dose rate


AUXSCORE: associate scoring with dose equivalent conversion factors


Using Flair go to Card/Scoring/AUXSCORE. You have to define the card as you can see in the picture:

Auxscore.png

The principal parameters you have to define are:

  • What (1) (Type):
    • Type of estimator to associate with.
  • What (2) (Part):
    • Particle or isotope to filter scoring.
  • What (4,5):
    • Detector range.
  • What (6) (Step):
    • Step in assigning indices of detector range.
  • SDUM (Set):
    • Conversion set for dose equivalent (DOSE-EQ) scoring
Using the new cards for activation studies

Associate scoring with different cooling times (Do it in Flair):

input_overview.png

March 28, 2013: Testing simulations of Activity, Ambient Dose Equivalent rate and Effective Dose rate

Making a simulation test using the cards and parameters that I defined before.

The results of this simulation were not satisfactory, because for activation studies I need to determine these quantities:

  • Activity (Becquerels, i.e. decays per unit of time)
  • Ambient dose equivalent rate (uSv/h)
  • Effective dose rate (uSv/h): dose per unit of time
These parameters are defined by:

Using this information:

1 year of continuous LHC operation

Instantaneous luminosity: 10^34 cm^-2 s^-1

Min bias cross section (pp collision at 14TeV): 76 mb

Cooling times: 1 hour, 8 hours, 1 day, 7 days , 1 month , 4 months


March 29, 2013: Searching for information about Ambient equivalent dose rate in FLUKA

I have been searching for information about the ambient dose equivalent rate, the effective dose rate and activity in FLUKA, and collected some links.

April 11, 2013: Testing simulations of Activity, Ambient Dose Equivalent and Effective Dose rate

Finally a job is going to be run on lxplus. I am going to test:

  • Activity (Becquerels) in the HF detector (steel-quartz fibres)
  • Ambient dose equivalent maps in the HF detector including the FEE
  • Effective dose rate maps in the HF detector including the FEE (using two different methods: the AUXSCORE and USERWEIG cards)
Conditions:

1 year of continuous LHC operation

Instantaneous luminosity: 10^34 cm^-2 s^-1

Min bias cross section (pp collision at 14TeV): 76 mb

Cooling times: 1 hour, 8 hours, 1 day, 7 days , 1 month , 4 months

Here is the scoring part of the FLUKA input for each one.

  • Activity

* Becquerels in the HF steel-quartz fibres
RESNUCLE 3. -23. HF9019512.51HFRegion1
DCYSCORE 1. HFRegion1 HFRegion1 RESNUCLE

  • Ambient dose equivalent maps

* 2. Residual dose rate (ambient dose equivalent rate) in the HF detector including the upper and lower racks (after shutdown)
USRBIN 10. DOSE-EQ -29. 300. 350. 1500.HFResdose1
USRBIN -300. -350. 1000. 100. 150. 200.&
DCYSCORE 1. HFResdose1 USRBIN
AUXSCORE USRBIN HFResdose1 AMB74

  • Effective dose rate maps (Method 1)

* 3. Residual dose rate (effective dose rate) in the HF detector including the upper and lower racks (after shutdown). Method 1
USRBIN 10. ALL-PART -29. 300. 350. 1500.HFDoser1
USRBIN -300. -350. 1000. 100. 150. 200.&
DCYSCORE 1. HFDoser1 HFDoser1 USRBIN
AUXSCORE USRBIN HFDoser1 EWT74

  • Effective dose rate maps (Method 2)

* 4. Residual dose rate (effective dose rate) in the HF detector including the upper and lower racks (after shutdown). Method 2
USRBIN 10. ALL-PART -29. 300. 350. 1500.HFDoser1
USRBIN -300. -350. 1000. 100. 150. 200.&
USERWEIG 1. 0.0
DCYSCORE 1. HFDoser1 HFDoser1 USRBIN

Simulate!


April 12, 2013: Problems with insufficient memory in my CERN area

The simulation didn't work on lxplus. It might be because I have insufficient space in my CERN area. I need to find the mistakes.

April 14, 2013: Problems with insufficient memory in my CERN area

The simulated Ambient Dose Equivalent rate file is 150 MB. If I want to simulate 3000 primaries I will need 4.5 GB in my work directory at CERN. I will run the simulation using 3000 primaries.

April 16, 2013: Results of testing simulations of Activity, Ambient Dose Equivalent rate and Effective Dose rate

I made a test on lxplus because I needed to know how much space I would use in my CERN area. I will have to split my simulation into two parts.

  • First part: Activity and Ambient Dose Equivalent rate
  • Second part: Effective Dose rate
In my test I got this result for the Activity in the HF detector after a shutdown. Clearly it has the tendency I expected (an exponential!)

HFActivity.jpg

The same happened with the ambient dose equivalent rate.

October 11, 2013: A new simulation of fluence using the new rack model and other features

For new purposes I am going to run a simulation of fluence; these are pictures of the geometry I use. Basically, I placed 3 plates between the motherboards to study their influence on the results.

1. This picture shows how the plates are located in region L3. The purpose of this study was to compare the proton and neutron fluence between region L1 (without plates) and region L3. View from the top of CMS.

3plates.jpg

2. The plates and motherboards in the CMS detector

cms_rack.jpg

3. Scoring in Flair for the simulation

scoring.jpg

October 12, 2013: Problems using lxplus, jobs pending

I was going to submit the jobs on lxplus but they stay pending. I tried changing the configuration files of an old simulation and it still did not work:

458414602 daguerr PEND 2nw       lxplus0435             sim 1     Oct 12 14:58
458414605 daguerr PEND 2nw       lxplus0435             sim 2     Oct 12 14:58
458414835 daguerr PEND 2nw       lxplus0435             sim 1     Oct 12 15:18
458414843 daguerr PEND 2nw       lxplus0435             sim 2     Oct 12 15:18
458414901 daguerr PEND 2nw       lxplus0435             sim 1     Oct 12 15:59
458414910 daguerr PEND 2nw       lxplus0435             sim 2     Oct 12 15:59
458414911 daguerr PEND 2nw       lxplus0435             sim 1     Oct 12 16:36

When I take a look at why a job is pending using "bjobs -l":

Job <458414602>, User <daguerre>, Project <default>, Status <PEND>, Queue <2nw>
                    , Job Priority <50>, Command <sim 1>
Mon Oct 14 16:36:08: Submitted from host <lxplus0435>, CWD </afs/cern.ch/work/d
                    /daguerre/private/working>;
 RUNLIMIT               
 8640.0 min of KSI2K
 PENDING REASONS:
 Not the same type as the submission host: 2218 hosts;
 Not specified in job submission: 461 hosts;
 The CPU utilization (ut) is beyond threshold: 540 hosts;
 Unable to reach slave batch server: 39 hosts;
 Job slot limit reached: 21 hosts;
 The 15 min effective CPU queue length (r15m) is beyond threshold: 38 hosts;
 Load information unavailable: 90 hosts;
 Closed by LSF administrator: 31 hosts;
 External load index (pool) is beyond threshold: 187 hosts;
 Just started a job recently: 4 hosts;
 The 1 min effective CPU queue length (r1m) is beyond threshold: 4 hosts;
 SCHEDULING PARAMETERS:
          r15s  r1m r15m  ut     pg   io  ls   it   tmp   swp   mem
 loadSched  -    -    -    -      -    -   -    -    -     -     - 
 loadStop   -    -    -    -      -    -   -    -    -     -     - 
            lftm   pool maxpool ccload   ccio 
 loadSched    -      -      -      -      - 
 loadStop     -      -      -      -      - 

It is weird, it has never happened before!

October 31, 2013: Results of the simulation of proton and neutron fluence in the HF-FEE

The simulation has finished. These are the results for an integrated luminosity of 3000 fb-1 (2.28E+17 events):

Fluence of protons (Protons per cm2):

Position  Whole energy range  %Unc.  >100 keV  %Unc.  >1 MeV  %Unc.  >10 MeV  %Unc.  >20 MeV  %Unc.
U1 4685616828 22.62933 4684594704 22.63282 4669365444 22.68510 4307299620 23.91652 3858110208 25.37735
U2 6748491060 26.15852 6747747552 26.16109 6736866480 26.19873 6468301632 27.10882 6126747600 28.31863
U3 4784376396 24.12082 4783347660 24.12515 4768342524 24.18866 4414689444 25.63261 3879978828 28.06287
L1 3285911148 37.46229 3285331800 37.46814 3276762648 37.55498 3068826420 39.63594 2810875428 42.14440
L2 2585794512 39.73979 2585098428 39.74958 2574806280 39.89489 2321585832 43.74054 2030053518 49.29874
L3 3270101400 38.11452 3269543256 38.12038 3261239952 38.20763 3027937812 40.70407 2678757408 45.01987
Fluence of neutrons (Neutrons per cm2):

Position  Whole energy range  %Unc.  >100 keV  %Unc.  >1 MeV  %Unc.  >10 MeV  %Unc.  >20 MeV  %Unc.
U1 1.5266940E+12 2.569396 8.9261906E+11 3.380868 6.1750450E+11 3.808578 3.18025822E+11 4.669308 2.69165716E+11 5.057142
U2 1.6628991E+12 2.442326 9.6102984E+11 3.172637 6.5202224E+11 3.513967 3.17901243E+11 4.275285 2.78072448E+11 4.647319
U3 1.6984596E+12 2.446338 9.8158115E+11 3.179754 6.8463062E+11 3.494904 3.78909666E+11 4.205036 3.27544321E+11 4.425794
L1 1.3155317E+12 3.012941 6.8242509E+11 4.003644 4.7570624E+11 4.456219 2.31300117E+11 5.949393 2.05216856E+11 6.093086
L2 1.3970731E+12 2.838537 7.6904462E+11 3.795929 5.0558943E+11 4.344901 2.44850636E+11 5.600479 2.15221106E+11 5.775403
L3 1.4004108E+12 2.859599 7.2330124E+11 4.057461 4.6157615E+11 4.582900 2.32855670E+11 5.772658 2.00070645E+11 6.125000

My reports to the HCAL Radiation Project Group

Total Ionizing Dose and Fluence in the Front End Electronics at HF (14TeV pp collisions)

Abstract: This simulation aims to estimate the Total Ionizing Dose (TID) and the neutron and charged-hadron fluence in the front-end electronics of the HCAL Forward (HF). Results are integrated for a high-luminosity scenario of 3000 fb-1 of 14 TeV pp collisions. The estimation was made in a 3-plate silicon rack at the position of the HF-FEE. The results were compared with those of Pablo Jácome and other references.

Report: FluenceandDoseintheHFFEE.pdf

Estimated by Daniel Guerrero

Total Ionizing Dose in the Front End Electronics at HF using a realistic rack model (14TeV pp collisions)

Abstract: This work aims to estimate the Total Ionizing Dose (TID) in a high-luminosity scenario of 3000 fb-1 in the Front End Electronics of the HCAL Forward (HF) in the CMS detector. The task was to create more realistic racks than the ones created before. The simulation was made in FLUKA, based on the CMS geometry provided by Moritz Guthoff. This work gives an important result: the TID estimated by FLUKA depends on the distribution of the silicon inside the racks. The estimates were made in the electronic motherboards (made of silicon for convenience) usually located inside the HF-FEE racks.

Report: DoseintheHFFEE_Rackmodel.pdf

Estimated by Daniel Guerrero

Activation in a small region in the HF (Testing simulations)

Abstract: Estimation of the number of residual nuclei (stable and unstable) produced per pp collision at 14 TeV in a small region of the HCAL Forward (HF). Furthermore, the isotope yield was calculated as a function of mass number, along with isomer production. A small region inside the HF detector was studied.

Report: ActivationintheHF.pdf

Estimated by Daniel Guerrero

Activity and Ambient equivalent dose in the HF for different cooling times (Testing Simulations)

Abstract: This was an estimation of the decay of the unstable nuclei produced per pp collision in the materials of the HF detector. Furthermore, the ambient dose equivalent was calculated in the HF region after continuous LHC operation with an integrated luminosity of 315.36 fb-1, for different cooling times: 1 hour, 8 hours, 1 day, 7 days, 1 month, 4 months.

Report: ActivityandAEDintheHF.pdf

Estimated by Daniel Guerrero

Snowmass


This page records my progress in learning MadGraph and producing my first signals. I will show my steps point by point, like a diary. If you are interested in the project named Snowmass, please follow this link: http://www.snowmass2013.org/tiki-index.php.

Most of the information posted here is taken from Alejandro Gomez's pages (alejandro.gomez<at>SPAMNOTcern.ch) and Pablo Jácome's pages (pablo.jacome<at>SPAMNOTcern.ch).

Thanks Alejandro and Pablo!

Displaced Secondary Vertex Analysis for BSM Physics models using Delphes

October 14 - 17, 2013: How to make an analysis for Delphes output

I am going to describe how to write a macro to analyze a signal. I have generated bbbar production at 14 TeV with 10000 events using MadGraph, Pythia and Delphes. Now the idea is to compare several properties of the particles.

1. Define a structure for your histograms. In this case, the analysis will compare the PT and Eta between the GenParticle and the reconstructed Electron, Photon, Muon, Track and Jet.

struct Plots
{
 TH1 *fElectronDeltaPT;
 TH1 *fPhotonDeltaPT;
 TH1 *fMuonDeltaPT;
 TH1 *fTrackDeltaPT;
 TH1 *fTrackDeltaX;
 TH1 *fTrackR;
 TH1 *fTrackRT; 
};

2. Add classes and libraries you are going to use.

#include <cmath>
class ExRootResult;
class ExRootTreeReader;

3. Book your histograms. You can take a look here at how to do it: void_bookhistograms.C

4. Analysis:

  • Compare the PT between the GenParticle branch and its respective branch (Electrons, Photons, Muons, Tracks) over all events and over all respective particles.
  • Compare the X vertex of the Track between the GenParticle branch and its respective Track branch.
  • Estimate the distance of the vertex from the IP.
  • Estimate the transverse distance of the vertex from the IP.
4.1) First, create a void function to hold the analysis

void Analysis(ExRootTreeReader *treeReader, Plots *plots)
 {  
 }

Now inside your void function

4.2) Inside your function get pointers(*) to branches used in this analysis from Delphes output

 TClonesArray *branchParticle = treeReader->UseBranch("Particle");
 TClonesArray *branchElectron = treeReader->UseBranch("Electron");
 TClonesArray *branchPhoton = treeReader->UseBranch("Photon");
 TClonesArray *branchMuon = treeReader->UseBranch("Muon");
 TClonesArray *branchTrack = treeReader->UseBranch("Track");

4.3) Define your variables and other pointers(*) you are going to use

 Long64_t allEntries = treeReader->GetEntries();
 GenParticle *particle;
 Electron *electron;
 Photon *photon;
 Muon *muon;
 Track *track;
 TObject *object;
 Long64_t entry;
 Int_t i;

4.4) Now create an "event loop" to loop over all the events

for(entry = 0; entry < allEntries; ++entry)
 {
 }

Now create your analysis code inside the "event loop"

4.4.1) Load selected branches with data from specified event

treeReader->ReadEntry(entry);

4.4.2) Loop over all electrons in event

   for(i = 0; i < branchElectron->GetEntriesFast(); ++i)
   {
     electron = (Electron*) branchElectron->At(i);
     particle = (GenParticle*) electron->Particle.GetObject();

     plots->fElectronDeltaPT->Fill((particle->PT - electron->PT)/particle->PT);
   }

4.4.3) Loop over all photons in event skipping photons with references to multiple particles

   for(i = 0; i < branchPhoton->GetEntriesFast(); ++i)
   {
     photon = (Photon*) branchPhoton->At(i);

     if(photon->Particles.GetEntriesFast() != 1) continue;

     particle = (GenParticle*) photon->Particles.At(0);

     plots->fPhotonDeltaPT->Fill((particle->PT - photon->PT)/particle->PT);
   }

4.4.4) Loop over all muons in event

  for(i = 0; i < branchMuon->GetEntriesFast(); ++i)
   {
     muon = (Muon*) branchMuon->At(i);
     particle = (GenParticle*) muon->Particle.GetObject();

     plots->fMuonDeltaPT->Fill((particle->PT - muon->PT)/particle->PT);
   }

4.4.5) Loop over all tracks in event

   for(i = 0; i < branchTrack->GetEntriesFast(); ++i)
   {
     track = (Track*) branchTrack->At(i);
     particle = (GenParticle*) track->Particle.GetObject();

     plots->fTrackDeltaPT->Fill((particle->PT - track->PT)/particle->PT);
     plots->fTrackDeltaX->Fill((particle->X - track->X)/particle->X);
     plots->fTrackR->Fill(sqrt((track->X)*(track->X) + (track->Y)*(track->Y) + (track->Z)*(track->Z)));
     plots->fTrackRT->Fill(sqrt((track->X)*(track->X) + (track->Y)*(track->Y)));
   } 

5) Print your histograms

void PrintHistograms(ExRootResult *result, Plots *plots)
{
 result->Print("png");
}

6) Create a void function that calls the other functions to be applied to your MC sample.

void AnalysisDelphes(const char *inputFile)
{
 gSystem->Load("libDelphes");
 TChain *chain = new TChain("Delphes");
 chain->Add(inputFile);
 ExRootTreeReader *treeReader = new ExRootTreeReader(chain);
 ExRootResult *result = new ExRootResult();
 Plots *plots = new Plots;
 BookHistograms(result, plots);
 Analysis(treeReader, plots);
 PrintHistograms(result, plots);
 result->Write("Analysis.root");
 delete plots;
 delete result;
 delete treeReader;
 delete chain;
}

At the end you should have a macro like this: AnalysisDelphes.C

Now open a terminal and run the macro on some signal; for example, I used a ttbar sample simulated in Delphes:

  • root -l examples/AnalysisDelphes.C\(\"/home/guerrero/Desktop/samples/ttbar/ttbar_CMS.root\"\)
Then you will have some messages in ROOT:

root [0] 
Processing examples/AnalysisDelphes.C("/home/guerrero/Desktop/samples/ttbar/ttbar_CMS.root")...
Info in <TCanvas::Print>: file electron delta pt.png has been created
Info in <TCanvas::Print>: file photon delta pt.png has been created
Info in <TCanvas::Print>: file muon delta pt.png has been created
Info in <TCanvas::Print>: file track delta pt.png has been created
Info in <TCanvas::Print>: file track delta x.png has been created
Info in <TCanvas::Print>: file Distance D.png has been created
Info in <TCanvas::Print>: file Transverse Distance DT.png has been created

A root file and plots .png for each histogram will be generated:

electron_delta_pt.png

photon_delta_pt.png

muon_delta_pt.png

track_delta_pt.png

track_delta_x.png

Distance_D.png

Transverse_Distance_DT.png

The last two plots can be re-edited in the ROOT browser using your Analysis.root file.

Distance.png

Trasnverse.png

October 2, 2013: Generating MC Samples using just Pythia and Delphes

Now it would be great to take the ttbar production HepMC file (which we made in the last log) and simulate it using Delphes.

If you remember, we now have two output files: main42.out and ttbar.hepmc.

  • Open a terminal
  • Go to the files directory
  • Call Delphes (with my local paths):
    • /home/guerrero/Desktop/MadGraph/Delphes-3.0.9/DelphesHepMC /home/guerrero/Desktop/MadGraph/Delphes-3.0.9/examples/delphes_card_CMS.tcl ttbar.root ttbar.hepmc
#--------------------------------------------------------------------------
#                        FastJet release 3.0.3
#                M. Cacciari, G.P. Salam and G. Soyez                 
#    A software package for jet finding and analysis at colliders     
#                          http://fastjet.fr                          
#                                                                      
# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package
# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210].  
#                                  
# FastJet is provided without warranty under the terms of the GNU GPLv2.
# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code
# and 3rd party plugin jet algorithms. See COPYING file for details.
#--------------------------------------------------------------------------
** INFO: initializing module FastJetFinder           
** INFO: initializing module ConstituentFilter       
** INFO: initializing module BTagging                
** INFO: initializing module TauTagging              
** INFO: initializing module UniqueObjectFinder      
** INFO: initializing module ScalarHT                
** INFO: initializing module TreeWriter              
** Reading ttbar.hepmc
** [################################################################] (100.00%)
** Exiting...

Now we should be able to view the signal using the Delphes event display:

  • Go to your Delphes directory
  • Open ROOT:
    • root
    • .x examples/EventDisplay.C("examples/delphes_card_CMS.tcl","/home/guerrero/Desktop/Pythia/examples/ttbar.root")
    • ShowEvent (1)
I looked at the event and I got this:

ttbarusingpythia.jpg

Signal of ttbar production generated directly in Pythia and Delphes (Not using MadGraph)

This method works very well; now we should be able to produce some exotic signals, like B mesons decaying into two muons.

October 1, 2013: Generating events using Pythia (Not using MadGraph)

Interface to HepMC

  • Open a terminal
  • Move to the main pythia directory
  • Remove the currently compiled version
    • make clean
  • Configure the program in preparation for the compilation
    • ./configure --with-hepmc=/home/guerrero/Desktop/Pythia/hepmc/x86_64-slc5-gcc41-opt --with-hepmcversion=2.06.08
  • Recompile the program, now including the HepMC interface, with make as before, and move back to the examples subdirectory.
  • Do either of:
    • source config.csh
    • source config.sh
Now, if you want to create your own HepMC event files: I made a configuration for t tbar production with 200 events.
  • ./main42.exe main42.cmnd ttbar.hepmc > main42.out
The ttbar.hepmc file is the HepMC event file.

Once you have read the code in the examples directory, for instance main42, you should be able to write this kind of configuration file for ttbar production.
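The configuration file itself was not preserved in this log. A plausible sketch, modeled on the mymain.cmnd example in the September 29 entry below (the 14 TeV beam energy and the Main:numberOfEvents line are my assumptions, not taken from the original file):

```
! main42.cmnd (sketch): t tbar production with 200 events
Main:numberOfEvents = 200   ! number of events to generate
Beams:idA = 2212            ! first incoming beam is a proton
Beams:idB = 2212            ! second incoming beam is a proton
Beams:eCM = 14000.          ! CM energy of the collisions (assumed)
Top:gg2ttbar = on           ! switch on g g -> t tbar
Top:qqbar2ttbar = on        ! switch on q qbar -> t tbar
```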

September 29, 2013: Generating events in Pythia (Not using MadGraph)

I need to generate this event in Pythia. I will start by training myself a bit in Pythia (based on the Pythia 8 worksheet).

Install Pythia 8 on my laptop

A “Hello World” program

Now I am going to write a simple program for gg -> ttbar at the LHC.

  • Open a new file mymain.cc in the examples subdirectory and type the following lines:
// Headers and Namespaces.
#include "Pythia.h"
// Include Pythia headers.
using namespace Pythia8; // Let Pythia8:: be implicit.
int main() {
// Begin main program.
// Set up generation.
Pythia pythia;
// Declare Pythia object
pythia.readString("Top:gg2ttbar = on"); // Switch on process.
pythia.readString("Beams:eCM = 7000."); // 7 TeV CM energy.
pythia.init(); // Initialize; incoming pp beams is default.
// Generate event(s).
pythia.next();
// Generate an(other) event. Fill event record.
return 0;
}
// End main program with error-free return.
  • Edit the last of these lines in the Makefile so that it also includes mymain:
main31 ... main40 mymain:  
  • Now execute your test program
make mymain
./mymain.exe > mymain.out
Generating more than one event: I will now gradually expand the skeleton mymain program from above

  • To add the process qqbar-> ttbar to the above example, I just have to add a second pythia.readString call:
    • pythia.readString("Top:qqbar2ttbar = on");

  • Now I wish to generate more than one event (5000 events). To do this, introduce a loop around pythia.next(), an event loop:

    for (int iEvent = 0; iEvent < 5000; ++iEvent) {
    pythia.next();
    }
  • To list more of the events, you also need to add, alongside the other pythia.readString commands:
pythia.readString("Next:numberShowEvent = 5000");
  • To obtain statistics on the number of events generated of the different kinds, and the estimated cross sections, add at the end of the program:
pythia.stat();

  • Finally our code:
// Headers and Namespaces.
#include "Pythia.h"
// Include Pythia headers.
using namespace Pythia8; // Let Pythia8:: be implicit.
int main() {
// Begin main program.
// Set up generation.
Pythia pythia;
// Declare Pythia object
pythia.readString("Top:gg2ttbar = on"); // Switch on process.
pythia.readString("Top:qqbar2ttbar = on"); // Switch on process.
pythia.readString("Beams:eCM = 7000."); // 7 TeV CM energy.
pythia.readString("Next:numberShowEvent = 5000");
pythia.init(); // Initialize; incoming pp beams is default.
// Generate events. Each call to next() fills the event record.
for (int iEvent = 0; iEvent < 5000; ++iEvent)
{
pythia.next();
}
pythia.stat(); // Print statistics; must run before returning.
return 0;
}
// End main program with error-free return.

With the mymain.cc structure developed above it is necessary to recompile the main program for each minor change:

make mymain
./mymain.exe > mymain.out

And then our output is mymain.out (988MB)
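The generate-then-summarize pattern above (an event loop around next(), statistics at the end) can be sketched in plain C++ without Pythia. The ToyGenerator below and its 1-in-5 failure rate are made up for illustration; the real pythia.next() likewise returns false for events that fail to generate:

```cpp
#include <cassert>
#include <cstddef>

// Toy illustration (not Pythia) of the generate-then-summarize pattern:
// loop over events, count how many succeed, report statistics at the end.
struct ToyGenerator {
    std::size_t tried = 0;
    std::size_t accepted = 0;
    // next() plays the role of pythia.next(): returns true if the event
    // was generated successfully. Here every 5th event "fails" (made up).
    bool next() {
        ++tried;
        if (tried % 5 == 0) return false;
        ++accepted;
        return true;
    }
    // acceptance() plays the role of pythia.stat(): summary after the loop.
    double acceptance() const {
        return tried == 0 ? 0.0 : static_cast<double>(accepted) / tried;
    }
};
```

The important point carried over to the real program is that the summary call only makes sense after the event loop has finished, which is why pythia.stat() must come before return 0.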

Working with Input files

  • Parameters can be put in special input “card” files that are read by the main program. Open a new file, mymain.cmnd, and input the following lines:
! t tbar production at the LHC
Beams:idA = 2212    ! first incoming beam is a 2212, i.e. a proton.
Beams:idB = 2212    ! second beam is also a proton.
Beams:eCM = 7000  ! the cm energy of collisions.
Top:gg2ttbar = on     ! switch on the process g g -> t tbar
Top:qqbar2ttbar = on ! switch on the process q qbar -> t tbar.
  • The final step is to modify our program to use this input file. To do this, replace the int main() { line with int main(int argc, char* argv[]) {
  • Replace all pythia.readString(...) commands with the single command pythia.readFile(argv[1]);
// Headers and Namespaces.
#include "Pythia.h"
// Include Pythia headers.
using namespace Pythia8; // Let Pythia8:: be implicit.
int main(int argc, char* argv[]) {
// Begin main program.
// Set up generation.
Pythia pythia;
// Declare Pythia object
pythia.readFile(argv[1]);
pythia.init(); // Initialize; incoming pp beams is default.
// Generate events. Each call to next() fills the event record.
for (int iEvent = 0; iEvent < 5000; ++iEvent)
{
pythia.next();
}
pythia.stat(); // Print statistics; must run before returning.
return 0;
}
// End main program with error-free return.
  • Recompile mymain and execute it:
make mymain
./mymain.exe mymain.cmnd > mymain.out
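Each line of mymain.cmnd is a "Setting = value" pair with an optional "! comment". Here is a minimal plain-C++ sketch of that parsing idea; it illustrates the card format only and is not Pythia's actual readFile implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// Split one card line of the form "Setting = value ! comment" into its
// key and value. Returns an empty pair for lines with no '=' sign.
std::pair<std::string, std::string> parseCardLine(const std::string& line) {
    // Drop everything after the '!' comment marker.
    std::string s = line.substr(0, line.find('!'));
    std::size_t eq = s.find('=');
    if (eq == std::string::npos) return {"", ""};
    // Trim surrounding spaces and tabs from a token.
    auto trim = [](std::string t) {
        const char* ws = " \t";
        std::size_t b = t.find_first_not_of(ws);
        if (b == std::string::npos) return std::string();
        std::size_t e = t.find_last_not_of(ws);
        return t.substr(b, e - b + 1);
    };
    return {trim(s.substr(0, eq)), trim(s.substr(eq + 1))};
}
```

Applied to the line "Beams:eCM = 7000 ! the cm energy of collisions." this yields the key "Beams:eCM" and the value "7000".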

September 8 - 20, 2013: Analyzing displaced secondary vertices

I tried to work with a sample of B meson decays, but I could not simulate it in MadGraph because B mesons are not defined in the Standard Model by default. I thought I could instead work with a signal everybody knows well: a Z boson decaying into two muons.

I opened the ROOT file and then I saw what happened with the tracks:

  • Delphes->Scan("Particle.PID:Track.X:Track.Y:Track.Z")
***********************************************************************
*    Row   * Instance * Particle. *   Track.X *   Track.Y *   Track.Z *
***********************************************************************
*      11 *       0 *     2212 *        0 *        0 *        0 *
*      11 *       1 *     2212 *        0 *        0 *        0 *
*      11 *       2 *       21 *        0 *        0 *        0 *
*      11 *       3 *       21 *        0 *        0 *        0 *
*      11 *       4 *        4 *        0 *        0 *        0 *
*      11 *       5 *       -4 *        0 *        0 *        0 *
*      11 *       6 *       23 *        0 *        0 *        0 *
*      11 *       7 *      -13 *        0 *        0 *        0 *
*      11 *       8 *       13 *        0 *        0 *        0 *
*      11 *       9 *       23 * 0.0123941 * -0.007683 * -0.112376 *
*      11 *      10 *      -13 * 0.0616979 * 0.0352841 * -0.003633 *
*      11 *      11 *       13 * 0.0616979 * 0.0352841 * -0.003633 *

The secondary-vertex positions are in cm. Using Delphes, we can read from the .root file the displacement of a secondary vertex at a position (Vx, Vy, Vz), with a precision on the order of:

0.0000001 centimeters (cm) = 0.000000001 meters (m) = 0.001 micrometers (µm)

That is useful for an analysis of displaced secondary vertices (usually on the order of 10 µm)!!
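As a cross-check of the unit conversion above, and of the displacement of the muon vertex at (0.0616979, 0.0352841, -0.003633) cm from the scan, here is a small plain-C++ helper (the function names are my own):

```cpp
#include <cassert>
#include <cmath>

// Convert a length in centimeters to micrometers: 1 cm = 1e4 um.
double cm_to_um(double cm) { return cm * 1.0e4; }

// 3D distance of a vertex (x, y, z), given in cm, from the origin (the IP).
double displacement_cm(double x, double y, double z) {
    return std::sqrt(x * x + y * y + z * z);
}
```

For that vertex the displacement comes out to roughly 0.071 cm, i.e. about 710 µm from the IP.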

Once I opened ROOT, I also wrote:

  • .x examples/EventDisplay.C("examples/delphes_card_CMS.tcl","/home/guerrero/Desktop/MadGraph/zboson/run/zmumu_CMS.root")
  • ShowEvent (11);
I started playing around, taking a look at the primary and secondary vertices, and I found this:

moun1.jpg

A secondary vertex displaced from the IP

moun2.jpg

The secondary vertex in the picture is at the position where we saw the two muons coming from the same vertex, (0.062 cm, 0.035 cm, -0.004 cm). One important question: why does the Delphes display label these tracks as pions (take a look at the left side of the picture above) and not as muons?

moun3.jpg

You can see the tracks that come from the secondary vertex, with red points outside the tracker.
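The two muons in the display come from a Z, so their invariant mass should peak near MZ = 91.188 GeV. A minimal sketch of that computation in plain C++ (the back-to-back test kinematics below are hypothetical, with the muon mass neglected):

```cpp
#include <cassert>
#include <cmath>

// Invariant mass of a two-particle system from (E, px, py, pz) four-vectors:
// m^2 = (E1 + E2)^2 - |p1 + p2|^2, in natural units (GeV).
double invariantMass(double e1, double px1, double py1, double pz1,
                     double e2, double px2, double py2, double pz2) {
    double e  = e1 + e2;
    double px = px1 + px2, py = py1 + py2, pz = pz1 + pz2;
    double m2 = e * e - (px * px + py * py + pz * pz);
    return m2 > 0.0 ? std::sqrt(m2) : 0.0;
}
```

Two back-to-back muons carrying 45.594 GeV each (half of MZ) reconstruct to 91.188 GeV, as expected for an on-shell Z at rest.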

August 28, 2013 - September 7, 2013: Analyzing MadGraph requirements for a model created by the user (user mode)

-Signal of lw- > Z + e-

signal.jpg

Important for simulation of Physics Beyond the Standard Model!!!!

You can add your own extensions of the SM using the user mode in MadGraph's models directory.

In the MadGraph directory open "/model/usrmod_v4"

You will just need to edit 3 important files:

  1. particles.dat
  2. interactions.dat
  3. VariableName.dat
Please read the README.txt for more information.

Some of the parameters you have to define in param_card.dat are, for example: resonance widths, particle masses, and coupling parameters (gL, gR, ggR, ggL).

Madgraph and Delphes Diary

August 2, 2013: Analyzing Delphes output: Event Display

I simulated ttbar production on my laptop using the Delphes CMS card instead of the Snowmass card for the detector level.

Once you have pythia_events.hep, you just have to run Delphes like this:

  • ~/Desktop/samples/ttbar$ /home/guerrero/Desktop/MadGraph/Delphes-3.0.9/DelphesSTDHEP /home/guerrero/Desktop/MadGraph/Delphes-3.0.9/examples/delphes_card_CMS.tcl ttbar_CMS.root pythia_events.hep
** INFO: adding module       ParticlePropagator      ParticlePropagator      
** INFO: adding module       Efficiency              ChargedHadronTrackingEfficiency
** INFO: adding module       Efficiency              ElectronTrackingEfficiency
** INFO: adding module       Efficiency              MuonTrackingEfficiency  
** INFO: adding module       MomentumSmearing        ChargedHadronMomentumSmearing
** INFO: adding module       EnergySmearing          ElectronEnergySmearing  
** INFO: adding module       MomentumSmearing        MuonMomentumSmearing    
** INFO: adding module       Merger                  TrackMerger             
** INFO: adding module       Calorimeter             Calorimeter             
** INFO: adding module       Merger                  EFlowMerger             
** INFO: adding module       Efficiency              PhotonEfficiency        
** INFO: adding module       Isolation               PhotonIsolation         
** INFO: adding module       Efficiency              ElectronEfficiency      
** INFO: adding module       Isolation               ElectronIsolation       
** INFO: adding module       Efficiency              MuonEfficiency          
** INFO: adding module       Isolation               MuonIsolation           
** INFO: adding module       Merger                  MissingET               
** INFO: adding module       Merger                  ScalarHT                
** INFO: adding module       FastJetFinder           GenJetFinder            
** INFO: adding module       FastJetFinder           FastJetFinder           
** INFO: adding module       ConstituentFilter       ConstituentFilter       
** INFO: adding module       BTagging                BTagging                
** INFO: adding module       TauTagging              TauTagging              
** INFO: adding module       UniqueObjectFinder      UniqueObjectFinder      
** INFO: adding module       TreeWriter              TreeWriter              
** INFO: initializing module Delphes                 
** INFO: initializing module ParticlePropagator      
** INFO: initializing module ChargedHadronTrackingEfficiency
** INFO: initializing module ElectronTrackingEfficiency
** INFO: initializing module MuonTrackingEfficiency  
** INFO: initializing module ChargedHadronMomentumSmearing
** INFO: initializing module ElectronEnergySmearing  
** INFO: initializing module MuonMomentumSmearing    
** INFO: initializing module TrackMerger             
** INFO: initializing module Calorimeter             
** INFO: initializing module EFlowMerger             
** INFO: initializing module PhotonEfficiency        
** INFO: initializing module PhotonIsolation         
** INFO: initializing module ElectronEfficiency      
** INFO: initializing module ElectronIsolation       
** INFO: initializing module MuonEfficiency          
** INFO: initializing module MuonIsolation           
** INFO: initializing module MissingET               
** INFO: initializing module GenJetFinder            
#--------------------------------------------------------------------------
#                        FastJet release 3.0.3
#                M. Cacciari, G.P. Salam and G. Soyez                 
#    A software package for jet finding and analysis at colliders     
#                          http://fastjet.fr                          
#                                                                      
# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package
# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210].  
#                                  
# FastJet is provided without warranty under the terms of the GNU GPLv2.
# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code
# and 3rd party plugin jet algorithms. See COPYING file for details.
#--------------------------------------------------------------------------
** INFO: initializing module FastJetFinder           
** INFO: initializing module ConstituentFilter       
** INFO: initializing module BTagging                
** INFO: initializing module TauTagging              
** INFO: initializing module UniqueObjectFinder      
** INFO: initializing module ScalarHT                
** INFO: initializing module TreeWriter              
** Reading pythia_events.hep
** [################################################################] (100.00%)
** Exiting...

Finally your root file is ttbar_CMS.root

Now you want to see an event. You have to go to your Delphes directory (for me: ~/Desktop/MadGraph/Delphes-3.0.9) and write:

  • make display
  • root -l examples/EventDisplay.C\(\"examples/delphes_card_CMS.tcl\",\"/home/guerrero/Desktop/samples/ttbar/ttbar_CMS.root\"\)
You will have this:

event.jpg

If you want to see event number 1, you have to write the command:

  • ShowEvent(1);
and then:

event_0.jpg

Just for fun you can get a picture like this:

event_0_3d.jpg

Simulation of TTbar production at 14TeV in the CMS detector

July 17, 2013: Analyzing Delphes output, two hardest jets in ttbar production


Macro based Analysis: Two Hardest jets in ttbar production

I work in my fnal area: /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/

I made two macros. You can check that both give the same result:

1. The first macro was produced after several Delphes guides I read and a ROOT file shared with me by Edgar Carrera.

  • root
  • gSystem->Load("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/libDelphes.so");
  • .x /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/jet1.C("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/ttbar_50pileup.root");
jet1.png

2. The second macro is more complex, with a different working structure. If you are interested, you can take a look and replicate the work for your own signal smile

  • root
  • gSystem->Load("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/libDelphes.so");
  • .x /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/jet2.C("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/ttbar_50pileup.root");
jet_pt_0.png

jet_pt_1.png

jet_pt_all.png

My macro was: jet2.C

July 16, 2013: First steps Analyzing Delphes output

Macro based Analysis

I will work in my fnal: /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9

I am going to use examples directory which contains a basic ROOT analysis macro called Example1.C. I will apply this macro to my ttbar production.

  • root
  • gSystem->Load("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/libDelphes.so");
  • .x /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/examples/Example1.C("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/ttbarproduction/run/ttbar_50pileup.root");
c1.png

July 15, 2013 : First steps Analyzing Delphes output

Analyzing Delphes output file

I will work in my fnal area: /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9

Simple Analysis using TTree::Draw

  • Log to cmslpc
  • Open ROOT file and load Delphes' shared library:
    • root -l h_zz_4mu_50pileup.root
    • gSystem->Load("/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/libDelphes.so");
  • Draw the Jet PT distribution
    • Delphes->Draw("Jet.PT");
jet_pt.png

Note 1: Delphes: Tree name

Note 2: Jet: Branch name

Note 3: PT : Variable (leaf) of this branch

July 10 - July 14, 2013: New MinBias file for Snowmass at 14 TeV on an MC sample of a 125 GeV SM Higgs in the 4-lepton channel. Pile-up=50

For this example I will use H -> Z Z-> 4 mu with H mass = 125GeV and with pile up = 50. Note: You have to read the other MadGraph Diary entries to continue here.

Start a new fast simulation!!

1. Parton: MadGraph/MadEvent (.lhe file)

I will work the fast simulation in my fnal area

Now I will work at LPC in Fermilab:

  • Log to LPC
  • I work in this directory: /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/.
  • Open MadGraph directory
  • Make a copy of the Template directory of MadGraph files. My copy is H_ZZ_4MU (/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/H_ZZ_4MU)
    • cp -r Template H_ZZ_4MU
  • For this signal we will have to make some modifications. For gluon-gluon fusion:
  • (1) We need to use the Higgs Effective Theory (heft) in MadGraph/MadEvent
    • cd ./H_ZZ_4MU/Cards
    • nano proc_card_mg5.dat
  • You have to import the heft model instead of the sm model. Write:
    • import model heft
  • You have to write your process:
    • generate p p > H > z z, z > mu+ mu- @1
  • Generate your new process:
    • ./bin/newprocess
  • (2) The Higgs mass has to be 125 GeV (the default is 120 GeV)
  • You have to change it in param_card.dat, in this directory: MadGraph5_v1_5_9/H_ZZ_4MU/Cards/
    • nano param_card.dat
###################################
## INFORMATION FOR MASS
###################################
Block mass
   5 4.700000e+00 # MB
   6 1.730000e+02 # MT
  15 1.777000e+00 # MTA
  23 9.118800e+01 # MZ
  25 1.200000e+02 # set of param :1*MH, 1*MP
## Dependent parameters, given by model restrictions.
## Those values should be edited following the
## analytical expression. MG5 ignores those values
## but they are important for interfacing the output of MG5
## to external program such as Pythia.
 1 0.000000 # d : 0.0
 2 0.000000 # u : 0.0
 3 0.000000 # s : 0.0
 4 0.000000 # c : 0.0
 11 0.000000 # e- : 0.0
 12 0.000000 # ve : 0.0
 13 0.000000 # mu- : 0.0
 14 0.000000 # vm : 0.0
 16 0.000000 # vt : 0.0
 21 0.000000 # g : 0.0
 22 0.000000 # a : 0.0
 24 80.419002 # w+ : cmath.sqrt(MZ__exp__2/2. + cmath.sqrt(MZ__exp__4/4. - (aEW*cmath.pi*MZ__exp__2)/(Gf*sqrt__2)))
 9000006 125.000000 # h1 : MH

You have to change the line:

 9000006 120.000000 # h1 : MH

to

 9000006 125.000000 # h1 : MH
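As a sanity check on the dependent w+ line above, its comment can be evaluated directly. The input values below are the usual MadGraph defaults (aEW = 1/132.50698, Gf = 1.16639e-5 GeV^-2), which I am assuming here since they are not shown in this block:

```cpp
#include <cassert>
#include <cmath>

// Evaluate the dependent-parameter expression from the param_card comment:
// MW = sqrt( MZ^2/2 + sqrt( MZ^4/4 - aEW*pi*MZ^2 / (Gf*sqrt(2)) ) ).
// All masses in GeV; aEW and Gf are assumed MadGraph default values.
double wMass(double mz, double aew, double gf) {
    const double pi = 3.14159265358979323846;
    double mz2 = mz * mz;
    double inner = std::sqrt(mz2 * mz2 / 4.0
                             - aew * pi * mz2 / (gf * std::sqrt(2.0)));
    return std::sqrt(mz2 / 2.0 + inner);
}
```

With MZ = 91.188 GeV this reproduces the 80.419 GeV printed on the w+ line of the card.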

  • Now generate events:
      • ./bin/generate_events
 === Results Summary for run: run_01 tag: tag_1 ===
    Cross-section :  2.566e-05 +- 1.963e-07 pb
    Nb of events :  2500
store_events
Storing parton level results
End Parton
quit

Great!!! =)

card.jpg

Feynman Diagram!

2. Hadronization and fragmentation: Pythia (.hep file )

  • Make a run directory; I made this one: /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/H_ZZ_4MU/run
  • Open your run directory
  • Copy unweighted_events.lhe to your run directory:
    • cp ./Events/run_01/unweighted_events.lhe.gz ./run/
  • Run pythia in your run directory:
    • /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Template/bin/internal/run_pythia /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/pythia-pgs/src/
 ==============================================================================
 ********* Total number of errors, excluding junctions =       0 *************
 ********* Total number of errors, including junctions =       0 *************
 ********* Total number of warnings =                          0 *************
 ********* Fraction of events that fail fragmentation cuts = 0.00000 *********
 Cross section (pb):  2.56201660060008111E-005

The most important file, which we will need next, is pythia_events.hep

3. Detector level: Delphes (.root file)

  • Get the latest Delphes files, in this order:
    • Install the latest version, Delphes-3.0.9
    • Get the new detector files and copy them in the examples directory (/uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/examples):
      • delphes_card_Snowmass_140PileUp.tcl
      • delphes_card_Snowmass_VLHCPileUp.tcl
      • delphes_card_Snowmass_50PileUp.tcl
      • delphes_card_Snowmass_NoPileUp.tcl
    • Get the MinBias file for 14TeV
    • Change its name to MinBias.pileup
  • Now we can start detector level simulation
  • In your run directory:
    • ln -s /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/MinBias.pileup
    • /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/DelphesSTDHEP /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.9/examples/delphes_card_Snowmass_50PileUp.tcl h_zz_4mu_50pileup.root pythia_events.hep
#--------------------------------------------------------------------------
** INFO: initializing module FastJetFinder           
** INFO: initializing module CAJetFinder             
** INFO: initializing module GenJetFinder            
** INFO: initializing module JetPileUpSubtractor     
** INFO: initializing module CAJetPileUpSubtractor   
** INFO: initializing module ConstituentFilter       
** INFO: initializing module PhotonEfficiency        
** INFO: initializing module PhotonIsolation         
** INFO: initializing module ElectronEfficiency      
** INFO: initializing module ElectronIsolation       
** INFO: initializing module MuonEfficiency          
** INFO: initializing module MuonIsolation           
** INFO: initializing module MissingET               
** INFO: initializing module BTagging                
** INFO: initializing module BTaggingLoose           
** INFO: initializing module TauTagging              
** INFO: initializing module UniqueObjectFinderGJ    
** INFO: initializing module UniqueObjectFinderEJ    
** INFO: initializing module UniqueObjectFinderMJ    
** INFO: initializing module ScalarHT                
** INFO: initializing module TreeWriter              
** Reading pythia_events.hep
** [##--------------------------------------------------------------] (3.22%)
  • Finally you will have the root file that you can analyze:
    • h_zz_4mu_50pileup.root

May 8-13, 2013: MC sample of ttbar production but using Delphes 3 and comparison with Snowmass Repository. Pile-up=50

Let's start doing a simulation using the Delphes recipe here:

http://www.snowmass2013.org/tiki-index.php?page=Energy_Frontier_FastSimulation

First you have to get the Pythia file (.hep) of your signal in the model you want to work with (the Standard Model for me), as I did in the May 1-7, 2013 entry.

Delphes recipe (in my fnal area)

Now you basically have to do this instead of step 3 in my last recipe (but for ttbar production, not ttbar+jet production).

  • You can get the Snowmass card files from here (HEAD version):
http://cmssw.cvs.cern.ch/cgi-bin/cmssw.cgi/UserCode/spadhi/Snowmass/Cards/

a) delphes_card_Snowmass_NoPileUp.tcl

b) delphes_card_Snowmass_50PileUp.tcl

c) delphes_card_Snowmass_140PileUp.tcl

  • Copy the tcl files to Delphes-3.0.7/examples/

  • Now in the run directory you have to do this (once you have pythia_events.hep):
    • $ ln -s ../Delphes-3.0.7/MinBias.pileup
    • $ ln -s ../Delphes-3.0.7/examples
    • $ /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.7/DelphesSTDHEP /uscms_data/d3/guerrero/work/MadGraph5_v1_5_9/Delphes-3.0.7/examples/delphes_card_Snowmass_50PileUp.tcl ttbar_50pileup.root pythia_events.hep

  • Finally you will have these files in your run directory:

    • beforeveto.tree
    • examples
    • pythia.done
    • xsecs.tree
    • ttbar_50pileup.root
    • fort.0
    • pythia_events.hep
    • events.tree
    • MinBias.pileup
    • unweighted_events.lhe
Results of a ttbar signal with 50 pile up

Here are some plots for comparison with a sample that is located here:

http://red-gridftp11.unl.edu/Snowmass/Delphes-3.0.6.1/50PileUp/TTBAR_13TEV/0001/TTBAR_13TEV_50PileUp_10616.root

I used 5000 events for my signal; they used 5406 events.

My signal is in green and theirs is in blue:

  • Missing Transverse Momentum (MET)
MissingET.MET.png

MissingET.MET1.png

  • MissingET.Phi
MissingET.Phi.png

MissingET.Phi1.png

  • Particle.E
Particle.E.png

Particle.E1.png

  • Particle.Mass
Particle.Mass.png

Particle.Mass1.png

  • Particle.PT
Particle.PT.png

Particle.PT1.png

I found one histogram with differences:

  • Particle.Size
Particle_size.png

Particle_Size1.png

May 1-7, 2013: MC Sample of ttbar + 1 jet using MadGraph, Pythia and Delphes 2

Let's start to work! Backgrounds! smile

First, log in to my fnal area (exactly the same on your laptop, but using different directories)

I have been reading up on the optimal way to produce the backgrounds. Basically, the sequence is:

  1. Parton level: MadGraph/MadEvent (.lhe file)
  2. Hadronization and fragmentation level: Pythia (.hep file )
  3. Simulation of the detector: Delphes (.root file)
Install a new version of MadGraph 5 and pythia-pgs in my fnal area

I found it here: https://launchpad.net/madgraph5

  • tar -xzf MadGraph5_v1.5.9.tar.gz
  • cd MadGraph5_v1_5_9
  • ./bin/mg5
    • mg5> install pythia-pgs
    • mg5> quit
Install Delphes 2 in my fnal area

Generating a background ttbar+j at sqrt(s)=13TeV

1. Parton simulation of the event in MadGraph/MadEvent using 10000 events

  • cd MadGraph5_v1_5_9
  • cp -r Template TTBARJET
  • cd TTBARJET
  • cd Cards
    • nano proc_card_mg5.dat
    • generate p p > t t~ j @1
    • nano run_card.dat
    • modify the beams to 6500
  • cd ..
  • ./bin/newprocess_mg5
  • ./bin/generate_events
  • 1
Results: = Results Summary for run: run_01 tag: tag_1 =

Cross-section : 4040 +- 8.925 pb

Nb of events : 10000

  • cd Events/run_01/
The final files are :events.lhe.gz run_01_tag_1_banner.txt unweighted_events.lhe.gz

I will need for the next step: unweighted_events.lhe

  • gzip -d unweighted_events.lhe.gz
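A side note on the numbers above: with a cross section of 4040 pb and 10000 generated events, the sample corresponds to an effective integrated luminosity L = N/sigma of about 2.48 pb^-1. A trivial plain-C++ helper:

```cpp
#include <cassert>

// Effective integrated luminosity (in pb^-1) of an MC sample:
// L = N_events / cross_section, with the cross section in pb.
double effectiveLumi(double nEvents, double xsecPb) {
    return nEvents / xsecPb;
}
```

This is the luminosity at which a real dataset would contain, on average, as many ttbar+j events as the generated sample.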
2. Hadronization using Pythia

First, please go to the Cards directory and rename pythia_card_default.dat to pythia_card.dat, because when Pythia runs it expects to find pythia_card.dat in ../Cards

  • Go to the TTBARJET directory and make a run directory:
    • mkdir run
  • Open run and copy the file unweighted_events.lhe into it:
    • cp ...../Events/run_01/unweighted_events.lhe ./
  • run pythia
    • ../MadGraph5_v1_5_8/Template/bin/internal/run_pythia ../MadGraph5_v1_5_8/pythia-pgs/src
Finally you will have something like this:

 ********* Total number of errors, excluding junctions =       0 *************
 ********* Total number of errors, including junctions =       0 *************
 ********* Total number of warnings =                          0 *************
 ********* Fraction of events that fail fragmentation cuts = 0.00000 *********

Cross section (pb): 4039.6504800004113

Please see your run directory you will find: beforeveto.tree fort.0 pythia_events.hep xsecs.tree events.tree pythia.done unweighted_events.lhe

I will need for the next step: pythia_events.hep

3. Simulation of the detector using Delphes 2

I have to create a .txt file containing the list of input files for Delphes (in the run directory):

  • echo pythia_events.hep >> inputList.txt
Delphes looks for files in the ./data directory:

  • ln -s ../Delphes-2.0.5/data
Now you have to define the parameters to run Delphes:

  1. File containing list of input files
  2. Output name
  3. Detector card
  4. Trigger card
First you have to copy DetectorCard_CMS.dat and TriggerCard_CMS.dat to your Cards directory. Then you can run Delphes:

  • .../Delphes-2.0.5/Delphes inputList.txt delphes2.root ../Cards/DetectorCard_CMS.dat ../Cards/TriggerCard_CMS.dat
Finally you will have

####### Start conversion to TRoot format ########

StdHEP file format detected

This can take several minutes

Exiting conversion...

####### Start fast detector simulation ########

Total number of events to run: 10000

[$$$$$$$$$$$$$$$$$$$$$$$$$] 100.00% processed

Exiting detector simulation...

########### Start Trigger selection ###########

Exiting trigger simulation...

################## Start FROG #################

Exiting FROG preparation...

################## Time report #################

Time report for 10000 events

Time (s): CPU Real

+ Global: 15.53 16.2053

+ Events: 9.08 9.20876

+ Trigger: 0.31 0.30919

+ Frog: 0.05 0.04338

8/05/2013 0:35:26

Exiting Delphes ...

4. The files in your run directory are:

  • DelphesToFrog.geom
  • beforeveto.tree
  • delphes2.root
  • events.tree
  • inputList.txt
  • pythia_events.hep
  • xsecs.tree
  • DelphesToFrog.vis
  • data
  • delphes2_run.log
  • fort.0
  • pythia.done
  • unweighted_events.lhe
smile

April 26, 2013: Important question about Backgrounds for Snowmass

Questions about Backgrounds:

April 12, 2013: Installing complementary MadGraph packages

I will continue producing other signals, but using MadGraph's companion packages.

I installed the Delphes, Pythia and PGS packages :)

March 22, 2013: Making my first Fitting using ROOT on my MC sample

Playing with my MadGraph file of z -> mu+ mu- in ROOT. I tried to extract information by making test fits (using the ROOT Browser) to the energy and mass histograms.

Canvas_1.jpg

c1_n2.jpg

March 8, 2013: Installing ROOT and making some plots of my MC sample of Z -> mu+ mu-


Guide for Installing ROOT


If you want to install it on your laptop or PC, please follow my steps:

  • Download a version of ROOT http://root.cern.ch/drupal/content/downloading-root. I chose version 5.32 http://root.cern.ch/drupal/content/production-version-532. Select the source form.
  • Once you have downloaded ROOT, please unpack it:
    • gzip -dc root_v5.32.04.source.tar.gz | tar -xf -
  • Getting ready to build. A number of prerequisite packages must be installed.
    • Here you can find which are these packages. http://root.cern.ch/drupal/content/build-prerequisites
    • Use " sudo apt-get install package"
  • Choosing the installation method. I chose the Location Independent Installation.
  • Open the directory where you have unpacked ROOT files. For example I have (/home/dguerrero/ROOT/root)
    • cd root
  • Now you have to type the build commands:
    • ./configure --help (here you can see the architecture [arch] options for your machine)
    • ./configure (you need gcc >= 3)
    • make
    • ./bin/thisroot.sh
  • Open root
    • root (You should not have problems.)
Now ROOT needs to be relocated, because it is very useful to be able to open root from any directory we want:
  • Copy the root directory to /usr/local/ on your laptop (you will need superuser privileges for this part)
    • cp -r root /usr/local
  • Create a script to set the environment variables (I created this script in my home directory). You have to source this script every time you want to use ROOT:
    • nano root.sh
    • Write these lines
export ROOTSYS=/usr/local/root
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH

Important: Remember if you want to open ROOT, you will need to do these steps:

  1. . ./root.sh (set the environment variables)
  2. root (run the root executable from the directory you want)
This is the way I use ROOT, because I had a problem when I needed ExRootAnalysis and Delphes for an analysis.


Getting histograms of the process pp > z > mu+ mu-

First you have to convert the .lhe files to .root files. (You have to install ExRootAnalysis: just open MadGraph 5 and write >install ExRootAnalysis.)

  • Converting .lhe to .root
    • ~/Madgraph/MadGraph5_v1_5_7/Zprocess/Events/run_01$ /home/dguerrero/Madgraph/MadGraph5_v1_5_7/ExRootAnalysis/ExRootLHEFConverter unweighted_events.lhe unweighted_events.root
    • root -l unweighted_events.root
  • Now ROOT is open. I wrote the following:
    • root [1] gSystem->Load("/home/dguerrero/Madgraph/MadGraph5_v1_5_7/ExRootAnalysis/libExRootAnalysis.so")
    • TFile::Open("unweighted_events.root")
    • LHEF->Draw("Particle.PT")
    • LHEF->Draw("Particle.M")
    • LHEF->Draw("Particle.E")
  • These were the distribution plots for PT, energy and mass.

  • Particle.M:
    Particle.M.jpg
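What the converter iterates over is the list of <event> blocks inside the .lhe file. A minimal plain-C++ sketch of that idea, just counting the blocks (this is an illustration, not ExRootAnalysis code):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Count the number of <event> blocks in LHE-format text. Each block holds
// one generated event; the .lhe -> .root converter loops over them.
int countLheEvents(const std::string& lhe) {
    int n = 0;
    const std::string tag = "<event>";
    for (std::size_t pos = lhe.find(tag); pos != std::string::npos;
         pos = lhe.find(tag, pos + tag.size()))
        ++n;
    return n;
}
```

For the run above, applying this to unweighted_events.lhe should give the "Nb of events" reported in the results summary.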

March 3, 2013: Kerberizing my laptop for FNAL, generating my first MC sample


Finally my laptop is kerberized


Please just follow this tutorial once you have your FNAL account.

Here is a guide of what I did:

  • I installed Kerberos on my laptop. Open a terminal:
    • $ sudo apt-get install krb5
  • If you cannot find krb5, you can install Kerberos using the Ubuntu Software Center: the name of the program is "Kerberos Authentication"
    • In this case, you will need your fnal account because you will write your username.
  • Once you have installed the Kerberos program, you will need to modify or replace the configuration file. From your home directory, copy your krb5.conf like this:
    • $ cp krb5.conf /etc/krb5.conf
  • You also have to copy the sshd_config file to /etc/ssh/ on your local machine, like this:
    • $ cp sshd_config /etc/ssh/sshd_config
  • To connect to the cluster at the LPC, just open a terminal and write:
Note: If you have troubles you can find information here! http://www.usqcd.org/fnal/troubleshooting.html


Installing MadGraph in my cmslpc account


  • I followed Pablo Jácome's guides without problems. MadGraph is ready to work on the LPC cluster :)

Generating a first sample


  • Login in my fnal area
  • Let's start!
    • Follow the tutorial using your cmslpc account (but you can also do it just using your laptop). If you get the message "MadGraph 5 works only with python 2.6 or later (but not python 3.X). Please upgrade your version of python", check Pablo's guides and do the step about the CMSSW environment.
      • [guerrero@cmslpc36 MadGraph5_v1_5_7]$ ./bin/mg5
      • mg5>tutorial
      • mg5>output MY_FIRST_MG5_RUN
      • mg5> launch MY_FIRST_MG5_RUN
      • mg5> quit

Generating the process: p p > z > mu+ mu-

  • Let's start!
    • Copy the Template directory and rename it Zprocess.
      • [guerrero@cmslpc36 MadGraph5_v1_5_7]$ cp -r Template Zprocess
      • cd Zprocess
      • nano ./Cards/proc_card_mg5.dat
      • Remove default lines:
generate p p > e- ve~ @1
add process p p > e- ve~ j @2
add process p p > t t~ @3

      • Replace for:
        •  generate p p > z > mu+ mu- @1 
      • cd ..
      • ./bin/newprocess_mg5
      • Running...
      • done
      • ./bin/generate_events
    • Finally I had smile
=== Results Summary for run: run_01 tag: tag_1 ===

     Cross-section :   717.2 +- 1.46 pb
     Nb of events :  10000

  • Now get the results to your laptop from LPC
    • See in your work directory: /work/Zprocess/Events/run_01/
      • events.lhe.gz
      • run_01_tag_1_banner.txt
      • unweighted_events.lhe.gz
    • Download to your computer
      • [guerrero@cmslpc36 ~/work]$
      • cd MadGraph5_v1_5_7
      • [guerrero@cmslpc36 MadGraph5_v1_5_7]$ tar -zcvf zboson.tar.gz Zprocess
    • Now copy the tarball to your home directory on your laptop
  • Results:
You can find the results here: Zbosonresults.pdf

The Feynman diagrams:


  • Zbosondiagram1.png


  • Zbosondiagram2.png
For instance, if you want to get the transverse-momentum distribution, you have to use ExRootAnalysis (to convert your .lhe to .root), but first you have to install ROOT.

March 2, 2013: Running MadGraph Tutorial

  • I started looking at some guides as an introduction. I took these steps:
    • I explored some Madgraph directories as /home/dguerrero/Madgraph/MadGraph5_v1_5_7/models/sm_v4/particles.dat that has the particles of the standard model or /home/dguerrero/Madgraph/MadGraph5_v1_5_7/models/sm_v4/interactions.dat which contains allowed interactions.
    • Running a process in /home/dguerrero/Madgraph/MadGraph5_v1_5_7/835_Proc (It's a copy of the original template)
      • >bin/newprocess_mg5
      • >Running...
      • >done
    • Generating events in /home/dguerrero/Madgraph/MadGraph5_v1_5_7/835_Proc
      • >./bin/generate_events
At the end you will have a .lhe file.

CMSSW Diary

Solving CMS Data Analysis School (CMSDAS) at LPC FNAL 2014 Exercises


I am going to start training myself on CMSSW by solving CMSDAS exercises. I will plot some important results and upload some codes. Some information has been taken from the official page:

https://twiki.cern.ch/twiki/bin/viewauth/CMS/WorkBookExercisesCMSDataAnalysisSchool

I will use my FNAL account for working at LPC-Fermilab.

December 21, 2013. Pre-Workshop Exercises

Description: cut and paste, setting up a release, finding data in DAS (Data Aggregation Service), EDM (Event Data Model framework) utilities, creating a PAT (Physics Analysis Toolkit) tuple. You have to follow this link: https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCMSDataAnalysisSchoolPreExerciseFirstSet

I am not going to post the answers because you have to do it yourself! I did not have problems here; I have just added the plots I got in exercise number 6:

  • data_pt.png:
    data_pt.png

  • MC_pt.png:
    MC_pt.png

December 22, 2013. Pre-Workshop Exercises

Description: modifying PATtuple content and size, analyzing the PATtuple with FWLite to make a Z mass peak, installing and using Fireworks. You can follow this link: https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCMSDataAnalysisSchoolPreExerciseSecondSet

Once you have modified the PATtuple you will have these plots for muon PT:

  • data_pt_patslected.png:
    data_pt_patslected.png

  • MC_pt_patslected.png:
    MC_pt_patslected.png

December 27, 2013. Pre-Workshop Exercises

We will first make a ZPeak. We will loop over the reduced-size selectedPatMuons collection in the PATtuple and get the invariant mass of oppositely charged muon pairs. These are filled in a histogram that is written to an output ROOT file.

  • ZPeak_MC.png:
    ZPeak_MC.png

  • ZPeak_data.png:
    ZPeak_data.png
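The core of the ZPeak loop is just the invariant-mass formula m² = (E1+E2)² − |p1+p2|². A small stdlib sketch, with two illustrative oppositely charged muons (the numbers are invented):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of two four-vectors (E, px, py, pz), all in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    # max(..., 0.0) guards against tiny negative values from rounding
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two illustrative oppositely charged muons, back to back with 45.594 GeV each
# (the muon mass is negligible at this energy):
mu_plus = (45.594, 45.594, 0.0, 0.0)
mu_minus = (45.594, -45.594, 0.0, 0.0)
print(invariant_mass(mu_plus, mu_minus))  # ~91.19 GeV, right at the Z peak
```

The analyzer does the same thing for every oppositely charged pair in the event and fills the result into the histograms shown above.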

December 29, 2013. Pre-Workshop Exercises

For exercise 10, I used Fireworks (CMS Event Display), here is a view of how it looked:

  • Fireworks.png:
    Fireworks.png
Important!!: I followed the other pre-exercises but, for lack of time, did not write about them. Please do them; they are a good introduction.

December 30, 2013. Generator Exercises

You can follow this link: https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCMSDataAnalysisSchool2014GeneratorExerciseatFNAL

Plot the distribution of the event scale saved in the HepMCProduct.

  • event_scale.png:
    event_scale.png

January 3, 2014. Generator Exercises

Tutorial 0: Comparing the distributions for W+jets events using Pythia (blue) and MadGraph+Pythia (red)

For the W boson pT I got:

  • wplusjets_pt_comparison.png:
    wplusjets_pt_comparison.png
Important things I learned: since there are differences in the low-pT spectrum of the W boson (p-value = 0.000173383), you would likely not use matched (MG+Pythia) samples for precision measurements, such as the W mass. On the other hand, since the matched MG+Pythia samples include hard jets up to high multiplicity, you may want to use such a sample for a new-physics search involving a lepton, jets, and missing transverse energy.

January 3, 2014. Generator Exercises

Tutorial 1: Modify the analysis code to filter out just W+ or W− events and make a comparison. We can find differences between kinematic observables such as pT. You can see them here:

  • wpluswminus_production_comparison.png:
    wpluswminus_production_comparison.png
PDF (Parton Distribution Function) Plotting Tool: I made a plot with this tool, like the CMSDAS picture: plotpaw.ps


An important thing I learned: the PDFs can produce different effects on final states of different charge.

Tutorial 2: Compare LO (leading-order)/LL (leading-logarithm) predictions to a next-to-leading-order (NLO) prediction generated with POWHEG.

The W+ rapidity comparison was:

  • NLO_LLLO_comparison_wplusjets.png:
    NLO_LLLO_comparison_wplusjets.png
One important thing I learned: the NLO calculation can be important for many reasons, due to its effects on some observables such as the pT distribution of the decay lepton (spin correlations).

Tutorial 3: Make the same calculation for Pythia but using a different PDF (CTEQ6M instead of CTEQ6L)

  • NLO_LLLO_comparison_pdf_wplusjets.png:
    NLO_LLLO_comparison_pdf_wplusjets.png

The W-boson rapidity compatibility increased from 0.0156384 to 0.0711945, and the decay-lepton rapidity compatibility increased from 0.523681 to 0.794191.

An important thing I learned: the CMSDAS lesson says ". . . much of the effect of NLO for W production at the LHC is in the choice of PDFs, not in the matrix elements. This is true also for Z, Higgs, and t-tbar production, especially when considering the rapidity distribution. It is not a theorem, but a useful rule of thumb. However, this is only the case when discussing the shapes of distributions. NLO is very important for understanding the absolute normalization of the distributions."

"Congratulations on completing the Generator Exercise" smile

"However, you are not done" More things to learn!! smile

January 5, 2014. Generator Exercises: OSET

Part 1: Monte Carlo Truth

  • Generate a GEN level data set

Part 2: Study MC Truth Information
  • List the Event Record

My record.log file was: record.log

Extra: do the same for 14 TeV

My record.log file was: record_extra.log

  • Top and Antitop mass

  • Decay product masses

Results for Part1 and Part2: generator_oset_part_12.pdf

Part 3: Study angular distributions and reweight

  • Generated Angular Distribution
Results for Part 3: generator_oset_part_3.pdf

Vertexing and b-tagging Exercises

January 10, 2014. B-tagging Exercises: Extract MC b-tagging efficiencies and mistagging rate

Link: https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCMSDataAnalysisSchoolbTagExercise

Extract MC b-tagging efficiencies and mistagging rate (picture) for ttbar+jets:

efficiences_b_tagging_ttbarplusjets_MC.png

Extract MC b-tagging efficiencies and mistagging rate (picture) for QCD:

efficiences_b_tagging_QCD_MC.png

Important definitions to learn:

b-tagging Efficiency in MC = number of true b-flavored jets which are b-tagged / number of true b-flavored jets

mistagging rate in MC = number of light (u, d, s, g) jets which are b-tagged / number of light jets
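These definitions translate directly into code. A toy stdlib sketch (the jet list below is invented for illustration):

```python
def btag_summary(jets):
    """jets: list of (true_flavour, is_tagged) with flavour in {'b','c','light'}.
    Returns (b-tagging efficiency, mistag rate) per the definitions above."""
    b_jets = [tagged for flav, tagged in jets if flav == 'b']
    light_jets = [tagged for flav, tagged in jets if flav == 'light']
    eff = sum(b_jets) / len(b_jets)          # tagged true-b / all true-b
    mistag = sum(light_jets) / len(light_jets)  # tagged light / all light
    return eff, mistag

# Toy sample: 4 true b jets (3 of them tagged), 5 light jets (1 tagged)
jets = [('b', 1), ('b', 1), ('b', 1), ('b', 0),
        ('light', 0), ('light', 1), ('light', 0), ('light', 0), ('light', 0)]
print(btag_summary(jets))  # (0.75, 0.2)
```

In the real exercise the same counting is done per jet-pT and eta bin, which is what produces the efficiency curves in the pictures.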

B-tag performance comparison: ttbar+jets (red) vs QCD (black)

btag_perfomance.png

January 11, 2014. B-tagging Exercise: Correct MC using b-tagging data/MC scale factors derived from data.

Extract MC b-tagging efficiencies and mistagging rate (picture) for ttbar+jets (MC corrected using b-tagging data/MC scale factors):

efficiences_b_tagging_ttbarplusjets_SF_MC.png

January 11, 2014: Muon Exercises

Link: https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCMSDataAnalysisSchool2013MuonExercise

Exercise 1: Create PAT-tuples

I had problems: the automatic selection of a RelVal input sample failed. Apparently the dataset is not available. I used this dataset: /store/relval/CMSSW_5_2_3/RelValZMM/GEN-SIM-RECO/START52_V5-v1/0043/1011EE9E-2B7A-E111-9349-0018F3D0970C.root

For this, my PAT-tuple: my_muon_PATandPF2PAT_cfg.py

Eta and pT distributions

muon.eta.png

muon.pt.png

Exercise 2: Including Particle Flow

Using the correct dataset we can compare the pT of muons produced by the path patPF2PATSequence (red) and muons produced by the path patDefaultSequence (black).

pt_distributions_muons_comparison.png

Exercise 3: Embedding MCTruth information

moun_pt_genparticle.png

moun_pt_reconstructed.png

Exercise 4: Embedding HLT matching information

At the end you will see additional objects called "patTrigger" containing trigger and trigger-matching information. Next you can run the analyzer script, which reads your patTuple.root file, accesses the trigger-matching information and makes a few plots (analyzePatTrigger.root):

Efficiency vs Pt of Candidate

efficiency_pt_candidate.png

Exercise 8 (the other exercises were suggested to be skipped): Making Basic Muon Observable (Kinematic) Plots

At the end you will have these normalized plots:

charge_linear.png

eta_linear.png

phi_linear.png

January 12, 2014: Muon Exercises: A long interactive example


These are some plots from this long exercise:

  • p_log.png:
    p_log.png

  • phi_log.png:
    phi_log.png

February 1-2, 2014: Electrons - Photons Exercises

Analysis of ntuples: access and display photon kinematics:

Photon Pt:

phopt.png

pT of ONLY photons passing an eta cut:

phopt_restricted_eta.png

Creating hPt with 100 bins ranging from 0.0 to 100.0:

phopt_b.png

ROC curves

ROC curve for photon isolation: rocCurve.pdf

Efficiency curves

Efficiency curve for photon pT with a cut of photon isolation < 0.08:

eff_phopt_cut.png
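The efficiency curve is just the fraction of photons passing the isolation cut in each pT bin. A stdlib sketch, with an invented photon list (pt in GeV, isolation):

```python
def efficiency_per_bin(photons, bin_edges, iso_cut=0.08):
    """photons: list of (pt, isolation). Returns the per-pT-bin efficiency of
    the isolation < iso_cut selection (None for empty bins)."""
    effs = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = [iso for pt, iso in photons if lo <= pt < hi]
        effs.append(sum(iso < iso_cut for iso in in_bin) / len(in_bin)
                    if in_bin else None)
    return effs

# Toy photons; the values are invented for illustration.
photons = [(12, 0.02), (15, 0.20), (35, 0.01), (38, 0.05), (41, 0.30), (70, 0.03)]
print(efficiency_per_bin(photons, [0, 20, 50, 100]))  # [0.5, 0.666..., 1.0]
```

In the real exercise the numerator and denominator histograms are filled from the ntuple and divided bin by bin, which gives the curve in eff_phopt_cut.png.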

Mass distribution in the nTuple

mass.png

February 5, 2014. Statistics Exercises: Combined Limit, Roofit

"The exercise introduces methods and tools for answering statistics questions that are very common in a CMS analysis, while complying with rigorous standards set in today's experimental particle physics. The main topics are finding a confidence interval for a parameter of interest, and estimating the statistical significance of an observation." Ok, great smile

For a particle physics analysis we should know four types of statistical tasks: parameter estimation, hypothesis testing, confidence intervals, and goodness-of-fit.

Part 1. Terminology and Conventions

In this part we deal with the important definitions to get involved in the analysis:

  • observable
  • global observable or auxiliary observable
  • model
  • model parameter
  • parameter of interest (POI)
  • nuisance parameter
  • data or dataset
  • likelihood
  • hypothesis
  • prior PDF
  • Bayesian
  • frequentist
Part 2. Code and environment

No problems with this part smile

Part 3. The Counting Experiment Model

No problems with this part smile

Part 4. CombinedLimit primer

Part 4.1. CombinedLimit: Data card

 -- MarkovChainMC -- 
Limit: r < 3.11863 +/- 0.0615268 @ 95% CL (10 tries)
Average chain acceptance: 0.18967
Done in 0.00 min (cpu), 0.00 min (real)

No problems smile
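The combine tool does far more than this, but for a one-bin counting experiment the flavour of such an upper limit can be reproduced by hand. A stdlib sketch of the classical CLs+b construction (a simplification for illustration, not the MarkovChainMC method used above):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(n + 1))

def counting_upper_limit(n_obs, bkg, cl=0.95, step=0.001):
    """Scan the signal yield s upward until P(N <= n_obs | s + bkg) falls
    below 1 - CL: the classical upper limit for a counting experiment."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1.0 - cl:
        s += step
    return s

# With zero observed events and no background this reproduces the well-known
# "three-event rule": the 95% CL upper limit is -ln(0.05) ~ 2.996.
print(round(counting_upper_limit(0, 0.0), 2))  # 3.0
```

Observing more events, or expecting less background, loosens the limit in the expected direction; combine's data cards encode exactly these inputs (observation, rates, and systematics).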

Part 4.3. CombinedLimit: Adding Systematics

 -- MarkovChainMC -- 
Limit: r < 3.05409 +/- 0.0467019 @ 95% CL (10 tries)
Average chain acceptance: 0.11935
Done in 0.01 min (cpu), 0.01 min (real)

Part 4.4. CombinedLimit: Systematics for data-driven background

I had to create a data card that describes a Higgs -> WW counting experiment:

 -- MarkovChainMC -- 
Limit: r < 2.17565 +/- 0.0458643 @ 95% CL (10 tries)
Average chain acceptance: 0.07699
Done in 0.02 min (cpu), 0.02 min (real)

Part 4.5. CombinedLimit: CLs, Bayesian and Feldman-Cousins

Part 4.5.1 CombinedLimit: asymptotic CLs limit

 -- Asymptotic -- 
Observed Limit: r < 1.6281
Expected 2.5%: r < 0.9653
Expected 16.0%: r < 1.4364
Expected 50.0%: r < 2.3203
Expected 84.0%: r < 3.9666
Expected 97.5%: r < 6.5972
Done in 0.00 min (cpu), 0.00 min (real)

This created some plots in a root file: higgsCombineTest.Asymptotic.mH120.root

Here are some plots inside this file:

limit.png

quantilexpected.png

Part 4.5.2 CombinedLimit: full frequentist CLs limit

 -- HybridNew, before fit -- 
Limit: r < 1.7876 +/- 0.124926 [1.69425, 2.22553]
Fit to 4 points: 1.86067 +/- 0.0344805
 -- Hybrid New -- 
Limit: r < 1.86067 +/- 0.0344805 @ 95% CL
Done in 0.01 min (cpu), 1.08 min (real)

Part 4.5.3 CombinedLimit: Computing Feldman-Cousins bands

 -- FeldmanCousins++ -- 
Limit: r< 0.85 +/- 0.075 @ 95% CL
Done in 0.17 min (cpu), 0.17 min (real)
 -- FeldmanCousins++ -- 
Limit: r> 0.05 +/- 0.05 @ 95% CL
Done in 0.16 min (cpu), 0.15 min (real)

Part 4.5.4 CombinedLimit: Compute the observed significance

 -- Profile Likelihood -- 
Significance: 0
      (p-value = 0.5)
Done in 0.00 min (cpu), 0.00 min (real)

Part 5. Model Building with RooFit.

Part 5.1. ROOT Macro, Part 5.2. Workspace and Part 5.3. Workspace Factory

At this point the python macro was: counting.py.txt

And once you run it! . . .

building counting model...
RooWorkspace(myWS) myWS contents
variables
---------
(n,nbkg,nsig)
functions
--------
RooAddition::yield[ nsig + nbkg ] = 10
Info in <RooWorkspace::SaveAs>: ROOT file workspace.root has been created

We usually want to express the signal yield via the integrated luminosity, the acceptance and efficiency, and the signal cross section. Once you have changed that, this is the python macro:

counting_2.py.txt

Once you run the macro:

building counting model...
RooWorkspace(myWS) myWS contents
variables
---------
(eff,lumi,n,nbkg,xsec)
functions
--------
RooProduct::nsig[ lumi * xsec * eff ] = 0
RooAddition::yield[ nsig + nbkg ] = 10
Info in <RooWorkspace::SaveAs>: ROOT file workspace.root has been created

Part 5.4. Prior PDF and Part 5.5. Systematics

At the end the python macro was: counting_3.py.txt

Once you run the macro:

building counting model...
RooWorkspace(myWS) myWS contents
variables
---------
(eff_beta,eff_glob,eff_kappa,eff_nom,lumi_beta,lumi_glob,lumi_kappa,lumi_nom,n,nbkg_beta,nbkg_glob,nbkg_kappa,nbkg_nom,xsec)
p.d.f.s
-------
RooGaussian::eff_constr[ x=eff_beta mean=eff_glob sigma=1 ] = 1
RooGaussian::lumi_constr[ x=lumi_beta mean=lumi_glob sigma=1 ] = 1
RooProdPdf::model[ model_core * lumi_constr * eff_constr * nbkg_constr ] = 4.53999e-05
RooPoisson::model_core[ x=n mean=yield ] = 4.53999e-05
RooGaussian::nbkg_constr[ x=nbkg_beta mean=nbkg_glob sigma=1 ] = 1
RooUniform::prior[ x=(xsec) ] = 1
functions
--------
RooProduct::eff[ eff_nom * eff_alpha ] = 0.1
RooPowerFunction::eff_alpha[ x=eff_kappa r=eff_beta ] = 1
RooProduct::lumi[ lumi_nom * lumi_alpha ] = 20000
RooPowerFunction::lumi_alpha[ x=lumi_kappa r=lumi_beta ] = 1
RooProduct::nbkg[ nbkg_nom * lumi_alpha * nbkg_alpha ] = 10
RooPowerFunction::nbkg_alpha[ x=nbkg_kappa r=nbkg_beta ] = 1
RooProduct::nsig[ lumi * xsec * eff ] = 0
RooAddition::yield[ nsig + nbkg ] = 10
Info in <RooWorkspace::SaveAs>: ROOT file workspace.root has been created

Part 5.6. Datasets and Part 5.7. Model Config and Parameter Snapshot

This was the final macro to prepare the workspace!: counting_4.py.txt

And once you run it, this is the message: message.txt

"Congratulations! Now you have prepared a workspace that is compatible and contains enough information for most RooStats calculations"

Ok, good, feeling awesome! smile

February 6, 2014. Statistics Exercises: RooStats

Part 6. RooStats

Part 6.1.1. Bayesian numeric calculator

The contents of the loaded workspace and details about some components like model config and data using the macro are: message2.txt

You will get the 95% C.L. one-sided credible interval for xsec and the corresponding posterior PDF plot: bayesian_num_posterior.pdf

And the macro to get this plot was: bayesian_num.py.txt

posterior_prob_xsec.png

Part 6.1.2. Bayesian MCMC calculator

In this case using this macro: bayesian_mcmc.py.txt

I got these different plots:

Part 6.2. CLs upper limit

The asymptotic CLs calculation

asymptoticCLScanforworkspace.png

The Full CLs calculation

frequentistCLscanforworkspace.png

February 8, 2014. Tracking and Primary Vertices Exercises: Tracks as particles and Constructing vertices from tracks

1. The five basic track variables

These five parameters are:

  • signed radius of curvature (units of cm);
  • angle of the trajectory at a given point on the helix, in the plane transverse to the beamline (usually called φ);
  • angle of the trajectory at a given point on the helix with respect to the beamline (θ, or equivalently λ = π/2 − θ), which is usually expressed in terms of pseudorapidity (η = −ln(tan(θ/2)));
  • offset or "impact parameter" relative to some reference point (usually the beamspot), in the plane transverse to the beamline (usually called dxy);
  • impact parameter relative to a reference point (beamspot or a selected primary vertex), parallel to the beamline (usually called dz).
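The pseudorapidity relation quoted above is easy to check numerically with a stdlib sketch:

```python
import math

def theta_to_eta(theta):
    """Pseudorapidity from the polar angle: eta = -ln(tan(theta/2))."""
    return -math.log(math.tan(theta / 2.0))

def eta_to_theta(eta):
    """Inverse relation: theta = 2 atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))

print(round(theta_to_eta(math.pi / 2), 12))          # 0.0 (perpendicular to the beamline)
print(round(theta_to_eta(eta_to_theta(2.5)), 6))     # 2.5 (round trip)
```

A track at 90° to the beamline has η = 0, and large |η| corresponds to tracks close to the beamline, which is why the tracker acceptance is quoted in η.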
My quality.py macro for histogramming track.chi2(): quality.py.txt

generaltracks.png

2. Tracks as particles


Printing total energy, kinetic energy and plotting Px,Py,Pz for different tracks: kinematic.py.txt

px_histogram.png

py_histogram.png

pz_histogram.png

Finally, let's look for resonances. Given the two tracks of a dimuon final state, we can calculate the invariant mass and kinetic energy using this Python macro (I made my own modification to the original code): invariantmass.py.txt

mass_histogram.png

This mass histogram is pretty cool; the peak at 3 GeV is the J/ψ!

kinetic_histogram.png

3. Constructing vertices from tracks

  • Running the vertex reconstruction for Ks particle

  • Looking at secondary vertices

  • Plot the invariant mass of all vertices

These were the Python macros I used for that: construct_secondary_vertices_cfg.py.txt and sec_vertices.py.txt

You can see a very prominent KS → π+π− peak:

mass_histogram_2.png

Basic distributions of primary vertices

In this case it is the rho vs z distribution: vertex.py.txt

rho_z.png

For making a plot of the z distance between primary vertices: vertex_distance.py.txt

deltaz.png

Important: "The broad distribution is due to the spread in primary vertex positions. Zoom in on the narrow dip near deltaz = 0. This region is empty because if two real vertices are too close to each other, they will be misreconstructed as a single vertex. Thus, there are no reconstructed vertices with such small separations. "

A Python script to:

  • Print out the number of primary vertices in each event(Pile-up)
  • Print out the number of tracks in a single vertex object
  • Plot the distribution of the number of tracks vs the number of vertices
  • Distinguish between the "primary vertices" and the "beamspot"
Python macro: script.py.txt

ntrksvsnvtx_histogram.png

Primary vertices improve physics results

An example of how primary vertices are useful to an analyst: in this case the invariant mass of KS → π+π− was studied on real data.

The macro for this part was: analyse.py.txt

The 3 plots:

cosAngle.png

cosAngle_zoom.png

mass_improved.png

February 9, 2014. MET Exercises

Exercise 1: Access to MET objects in AOD

We will access MET objects stored in AOD by using the printMet_AOD.py script.

This was the result: access.txt

Exercise 2: Apply MET filters

We will apply a set of MET filters using the met_filters_cfg.py script.

This was the result: metfilters.txt

And at the end, you will have a root file filtered.root

Exercise 3: Apply MET corrections

To make MET a better estimate of true MET, we will apply MET corrections with the python configuration file: corrMet_cfg.py

And at the end, you will have a root file corrMet.root and this message: metcorrections.txt

If you print the results, this is the information: correctedmet.txt


These exercises were really short smile

February 15 - Present, 2014. Long Exercise: Exotica, Z'-to-dimuons Exercise

Great, I will follow this exercise trying to pick up everything smile

"Prerequisites: Muons short exercise are recommended"

I did that exercise, so I can continue smile

"In this exercise, you'll work on the meat of the analysis to produce the dimuon mass spectra for data and MC; for more on the statistical treatment that we use to search for a peak and to set limits on a new resonance's mass, please refer to the RooStats tutorial at this School as well as our analysis notes (ANs)."

ttbar+gamma Analysis

16 April - 6 May: Generating MadGraph samples at 8TeV and 13TeV at LXPLUS

This is a recipe for generating a ttbar+gamma signal sample on lxplus using several jobs.

1. Preparing MadGraph

  • Connect to lxplus at CERN
ssh -XY daguerre@lxplus.cern.ch
  • Create your work directory (For me: /afs/cern.ch/work/d/daguerre/public/)
mkdir work
cd work
  • Download and install the latest version of MadGraph, or copy my already-unpacked files:
cp -r  /afs/cern.ch/work/d/daguerre/public/work/MG5_aMC_v2_1_1/ ./
(If you download the official tarball instead, extract it with tar -zxvf.)
  • You will need the cards for generation of ttbar+gamma. Please copy them from my CERN space.
cd MG5_aMC_v2_1_1/
cp -r /afs/cern.ch/work/d/daguerre/public/work/MG5_aMC_v2_1_1/ttbargamma  ./ 

Important: the ttbargamma directory plays the role of the "Template" you usually work from in MadGraph.

Your MadGraph is ready to run the jobs!

2. Generating a simulation of ttbarplusgamma: A brief description of the code

  • In your work directory create a script directory
mkdir ttbargamma
  • Create the CMSSW directory
cmsrel CMSSW_5_3_11
cd CMSSW_5_3_11/src
cmsenv
  • Inside the script directory create your simulation script
cd ttbargamma
nano sim8TeV.sh
  • Description of the code
Define some important variables (number of events, random seed), define the important paths, and set up the CMSSW environment.

#!/bin/bash

sample_base='SM_ttgamma_8TeV'
nevents='10000'
seed=$1
mydir=ttbargamma

export MADGRAPH=/afs/cern.ch/work/d/daguerre/public/work/MG5_aMC_v2_1_1
export WORK=/afs/cern.ch/work/d/daguerre/public/work/${mydir}
cd ..
cd ./CMSSW_5_3_11/src/
eval `scramv1 runtime -sh`

Enter to your MadGraph

cd $MADGRAPH

Generate the new process: copy a template directory to run MadGraph's new-process script. Below is a way to edit the proc card from inside your script. A generation directory is created for running the event generation.

cp -r ./Template/LO  ${sample_base}_${seed}
cd ${sample_base}_${seed}
cat > ./Cards/proc_card_mg5.dat << EOF
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
generate p p > t t~ a, t > b j j, t~ > b~ l- vl~  @1
add process p p > t t~ a, t > b l+ vl, t~ > b~ j j  @2
add process p p > t t~, t > b j j, t~ > b~ l- vl~ a  @3
add process p p > t t~, t > b j j a , t~ > b~ l- vl~   @4
add process p p > t t~, t > b a l+ vl, t~ > b~ j j  @5
add process p p > t t~, t > b l+ vl, t~ > b~ j j a  @6
output ${sample_base}_${seed}_generation
EOF

Generate a new process using the MadGraph script

cd $MADGRAPH
./bin/mg5_aMC ./${sample_base}_${seed}/Cards/proc_card_mg5.dat

Change the parameters for running the MC signal: number of events, centre-of-mass energy, seed, etc. Here you are going to use the run card that you copied from my CERN area.

cp ./ttbargamma/Cards/run_card.dat ./${sample_base}_${seed}_generation/Cards/run_card.dat
cd ${sample_base}_${seed}_generation

sed -i 's|[0-9]\{1,\} \+= \+nevents|'${nevents}' = nevents|g' ./Cards/run_card.dat
sed -i 's|[0-9]\{1,\} \+= \+ebeam1|4000 = ebeam1|g' ./Cards/run_card.dat
sed -i 's|[0-9]\{1,\} \+= \+ebeam2|4000 = ebeam2|g' ./Cards/run_card.dat
sed -i 's|.*timeout \+= \+[0-9]\+|timeout = 1|g' ./Cards/me5_configuration.txt
sed -i 's|0 \+= \+iseed|'${seed}' = iseed|g' ./Cards/run_card.dat
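The sed edits above can equally be done with Python regular expressions. A sketch on an invented run_card.dat fragment (the comment text is made up; only the "value = name" layout matters):

```python
import re

# A made-up fragment of a MadGraph run_card.dat, for illustration:
run_card = """ 100 = nevents ! Number of unweighted events requested
 6500 = ebeam1  ! beam 1 total energy in GeV
 6500 = ebeam2  ! beam 2 total energy in GeV
 0 = iseed     ! rnd seed (0=assigned automatically)"""

def set_card(card, name, value):
    """Replace the value of a 'value = name' entry, as the sed lines above do."""
    return re.sub(r'[0-9]+ += +' + name, '{} = {}'.format(value, name), card)

run_card = set_card(run_card, 'nevents', 10000)
run_card = set_card(run_card, 'ebeam1', 4000)
run_card = set_card(run_card, 'ebeam2', 4000)
run_card = set_card(run_card, 'iseed', 1)
print(run_card.splitlines()[0])  # " 10000 = nevents ! Number of unweighted events requested"
```

Note that ebeam1 = ebeam2 = 4000 gives the 8 TeV centre-of-mass energy used for this sample; for 13 TeV you would set 6500 in both.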

Generate the events and copy the .lhe files to the output directory, in this case "samples":

./bin/generate_events
gunzip ./Events/run_01/unweighted_events.lhe.gz
mkdir $WORK/samples
cp ./Events/run_01/unweighted_events.lhe  $WORK/samples/${sample_base}'_'${nevents}'ev'_run_${seed}.lhe
cd ..

Create the directory where you are going to store your root files:

cd $WORK
mkdir MC_SM_ttgamma_8TeV_set

Please see attached my script: sim8TeV.sh

IMPORTANT: This script takes one argument (the seed for your simulation). Run it like this, where 1 is the seed number I want:

 ./sim8TeV.sh 1

3. Send several jobs to the batch system

For this purpose you will need to write a script. In this case you define the initial and final seed variables; these will be the arguments passed to the sim8TeV.sh script:

#!/bin/bash

initial=1
final=5

for i in `seq $initial $final`;
do
bsub -q 1nd -W 600 sim8TeV.sh $i
sleep 1
done

You have to run this script in the ttbargamma directory (/afs/cern.ch/work/d/daguerre/public/work/ttbargamma): job.sh. This script sends 5 jobs of 10000 events each, so at the end of your simulation you will have 50000 events. Please see the next step to merge everything into a single root file.

4. Take your lhe files and create a root file with all of them

cd MG5_aMC_v2_1_1
./bin/mg5_aMC
install ExRootAnalysis
  • I created a script that converts the .lhe files generated in the samples directory into .root files in MC_SM_ttgamma_8TeV_set using ExRootAnalysis; then the hadd command merges them together. Please see my script below: convert_lhe_to_root.sh. You have to run this script in the ttbargamma directory (for me: /afs/cern.ch/work/d/daguerre/public/work/ttbargamma).
#!/bin/bash

sample_base='SM_ttgamma_8TeV'
nevents='10000'
####Convert lhe to root file
initial=1
final=5
for i in `seq $initial $final`;
do
/afs/cern.ch/work/d/daguerre/public/work/MG5_aMC_v2_1_1/ExRootAnalysis/ExRootLHEFConverter ./samples/${sample_base}'_'${nevents}'ev'_run_$i.lhe ./MC_SM_ttgamma_8TeV_set/${sample_base}'_'${nevents}'ev'_run_$i.root
sleep 1
done 
####Combine root files
hadd ${sample_base}.root ./MC_SM_ttgamma_8TeV_set/*.root


If you run the script you will have a file with 50000 events:

SM_ttgamma_8TeV.root

Top Mass Measurement using b-jet Energy spectrum (Summer Student Project)

16 June, 2014: Reading the twiki page and starting to work

Twiki page: https://twiki.cern.ch/twiki/bin/viewauth/CMS/BeautyOfTheTopSummer2014

  • The software was compiled without problems. My directory was: /afs/cern.ch/work/d/daguerre/public/topmass/CMSSW_5_3_15/src/UserCode/llvv_fwk
>> Creating project symlinks
>> Done python_symlink
>> Compiling python modules python
>> Compiling python modules src/UserCode/llvv_fwk/python
>> All python modules compiled
>> Pluging of all type refreshed.

  • Run the preselection with 270 jobs. I had no problems running the script for creating the summary trees with the relevant information for the analysis. For now they are pending in the batch system.
537408219 daguerr PEND 8nh       lxplus409              *ysis0265_ Jun 16 18:35
537408223 daguerr PEND 8nh       lxplus409              *ysis0266_ Jun 16 18:35
537408226 daguerr PEND 8nh       lxplus409              *ysis0267_ Jun 16 18:35
537408231 daguerr PEND 8nh       lxplus409              *ysis0268_ Jun 16 18:35
537408235 daguerr PEND 8nh       lxplus409              *ysis0269_ Jun 16 18:35
PNGpng efficiency_pt_candidate.png r1 manage 6.1 K 2014-01-12 - 04:32 DanielGuerrero  
PNGpng electron_delta_pt.png r1 manage 13.7 K 2013-10-17 - 22:20 DanielGuerrero  
PNGpng eta_linear.png r1 manage 9.6 K 2014-01-12 - 05:34 DanielGuerrero  
JPEGjpg event.jpg r1 manage 86.1 K 2013-08-03 - 06:34 DanielGuerrero  
JPEGjpg event_0.jpg r1 manage 114.8 K 2013-08-03 - 06:34 DanielGuerrero  
JPEGjpg event_0_3d.jpg r1 manage 37.1 K 2013-08-03 - 06:58 DanielGuerrero  
PNGpng event_scale.png r1 manage 11.0 K 2013-12-30 - 22:25 DanielGuerrero  
PNGpng event_scale_first.png r1 manage 11.0 K 2013-12-30 - 22:52 DanielGuerrero  
PNGpng ex_1.png r1 manage 8.9 K 2014-01-06 - 00:19 DanielGuerrero  
PNGpng frequentistCLscanforworkspace.png r1 manage 16.4 K 2014-02-06 - 05:55 DanielGuerrero  
PNGpng generaltracks.png r1 manage 17.9 K 2014-02-08 - 21:14 DanielGuerrero  
PDFpdf generator_oset_part_12.pdf r2 r1 manage 33.3 K 2014-01-06 - 20:09 DanielGuerrero  
PDFpdf generator_oset_part_3.pdf r1 manage 8.2 K 2014-01-07 - 01:40 DanielGuerrero  
PNGpng input_overview.png r2 r1 manage 86.3 K 2013-03-28 - 06:04 DanielGuerrero  
Texttxt invariantmass.py.txt r1 manage 1.3 K 2014-02-08 - 22:53 DanielGuerrero  
PNGpng jet1.png r1 manage 13.3 K 2013-07-19 - 07:52 DanielGuerrero  
C source code filec jet2.C r1 manage 2.5 K 2014-02-01 - 06:53 DanielGuerrero  
PNGpng jet_pt.png r1 manage 9.6 K 2013-07-16 - 19:47 DanielGuerrero  
PNGpng jet_pt_0.png r1 manage 12.0 K 2013-07-19 - 07:52 DanielGuerrero  
PNGpng jet_pt_1.png r1 manage 13.1 K 2013-07-19 - 07:52 DanielGuerrero  
PNGpng jet_pt_all.png r1 manage 15.8 K 2013-07-19 - 07:52 DanielGuerrero  
Unix shell scriptsh job.sh r1 manage 0.1 K 2014-05-09 - 20:11 DanielGuerrero  
Texttxt kinematic.py.txt r1 manage 1.5 K 2014-02-08 - 21:45 DanielGuerrero  
PNGpng kinetic_histogram.png r1 manage 16.3 K 2014-02-08 - 22:50 DanielGuerrero  
PNGpng limit.png r1 manage 3.8 K 2014-02-05 - 22:14 DanielGuerrero  
Unknown file formatcc main42.cc r1 manage 3.2 K 2013-10-01 - 18:29 DanielGuerrero  
Unknown file formatcmnd main42.cmnd r1 manage 1.6 K 2013-10-01 - 18:29 DanielGuerrero  
Microsoft Executable fileexe main42.exe r1 manage 3539.5 K 2013-10-01 - 18:35 DanielGuerrero  
Unknown file formatout main42.out r1 manage 192.1 K 2013-10-01 - 18:29 DanielGuerrero  
PNGpng mass.png r1 manage 6.2 K 2014-02-05 - 21:04 DanielGuerrero  
PNGpng mass_histogram.png r1 manage 16.3 K 2014-02-08 - 22:50 DanielGuerrero  
PNGpng mass_histogram_2.png r1 manage 15.3 K 2014-02-08 - 23:25 DanielGuerrero  
PNGpng mass_improved.png r1 manage 16.0 K 2014-02-09 - 02:16 DanielGuerrero  
Postscriptps matrix1.ps r1 manage 12.6 K 2013-07-14 - 23:27 DanielGuerrero  
Texttxt message.txt r1 manage 3.5 K 2014-02-06 - 00:24 DanielGuerrero  
Texttxt message2.txt r1 manage 2.6 K 2014-02-06 - 04:54 DanielGuerrero  
Texttxt metcorrections.txt r1 manage 14.6 K 2014-02-10 - 06:57 DanielGuerrero  
Texttxt metfilters.txt r1 manage 19.0 K 2014-02-10 - 06:57 DanielGuerrero  
JPEGjpg moun1.jpg r1 manage 35.3 K 2013-09-09 - 23:17 DanielGuerrero  
JPEGjpg moun2.jpg r1 manage 94.7 K 2013-09-09 - 23:17 DanielGuerrero  
JPEGjpg moun3.jpg r1 manage 27.2 K 2013-09-09 - 23:17 DanielGuerrero  
PNGpng moun_pt_genparticle.png r1 manage 10.1 K 2014-01-12 - 03:51 DanielGuerrero  
PNGpng moun_pt_reconstructed.png r1 manage 7.9 K 2014-01-12 - 03:51 DanielGuerrero  
PNGpng muon.eta.png r1 manage 9.3 K 2014-01-11 - 21:01 DanielGuerrero  
PNGpng muon.pt.png r1 manage 9.4 K 2014-01-11 - 21:01 DanielGuerrero  
PNGpng muon_delta_pt.png r1 manage 13.2 K 2013-10-17 - 22:20 DanielGuerrero  
Texttxt my_muon_PATandPF2PAT_cfg.py.txt r3 r2 r1 manage 1.6 K 2014-01-11 - 21:08 DanielGuerrero  
Unknown file formatout mymain.out r1 manage 70.4 K 2013-09-30 - 07:16 DanielGuerrero  
PNGpng ntrksvsnvtx_histogram.png r1 manage 30.6 K 2014-02-09 - 01:50 DanielGuerrero  
PNGpng p_log.png r1 manage 27.4 K 2014-02-03 - 03:34 DanielGuerrero  
PNGpng phi_linear.png r1 manage 10.0 K 2014-01-12 - 05:34 DanielGuerrero  
PNGpng phi_log.png r1 manage 13.9 K 2014-02-03 - 03:34 DanielGuerrero  
PNGpng phopt.png r1 manage 8.0 K 2014-02-05 - 05:18 DanielGuerrero  
PNGpng phopt_b.png r1 manage 11.0 K 2014-02-05 - 05:18 DanielGuerrero  
PNGpng phopt_restricted_eta.png r1 manage 11.2 K 2014-02-05 - 05:19 DanielGuerrero  
PNGpng photon_delta_pt.png r1 manage 13.5 K 2013-10-17 - 22:20 DanielGuerrero  
Postscriptps plotpaw.ps r1 manage 16.9 K 2014-01-05 - 05:39 DanielGuerrero  
PNGpng posterior_prob_xsec.png r1 manage 8.6 K 2014-02-06 - 05:13 DanielGuerrero  
PNGpng pt_distributions_muons_comparison.png r1 manage 10.1 K 2014-01-12 - 03:25 DanielGuerrero  
PNGpng px_histogram.png r1 manage 14.5 K 2014-02-08 - 22:49 DanielGuerrero  
PNGpng py_histogram.png r1 manage 14.9 K 2014-02-08 - 22:50 DanielGuerrero  
PNGpng pz_histogram.png r1 manage 14.5 K 2014-02-08 - 22:50 DanielGuerrero  
Texttxt quality.py.txt r1 manage 0.5 K 2014-02-08 - 21:14 DanielGuerrero  
PNGpng quantilexpected.png r1 manage 4.7 K 2014-02-05 - 22:14 DanielGuerrero  
Unknown file formatlog record.log r2 r1 manage 108.0 K 2014-01-06 - 16:38 DanielGuerrero  
Unknown file formatlog record_extra.log r1 manage 108.0 K 2014-01-06 - 16:39 DanielGuerrero  
PNGpng rho_z.png r1 manage 20.8 K 2014-02-08 - 23:45 DanielGuerrero  
PDFpdf rocCurve.pdf r1 manage 16.2 K 2014-02-05 - 20:41 DanielGuerrero  
PDFpdf scatter_mcmc_xsec_vs_beta_efficiency.pdf r1 manage 634.5 K 2014-02-06 - 05:37 DanielGuerrero  
PDFpdf scatter_mcmc_xsec_vs_beta_lumi.pdf r1 manage 632.9 K 2014-02-06 - 05:37 DanielGuerrero  
PDFpdf scatter_mcmc_xsec_vs_beta_nbkg.pdf r1 manage 639.1 K 2014-02-06 - 05:37 DanielGuerrero  
JPEGjpg scoring.jpg r1 manage 62.7 K 2013-10-13 - 03:02 DanielGuerrero  
Texttxt script.py.txt r1 manage 1.8 K 2014-02-09 - 01:50 DanielGuerrero  
Texttxt sec_vertices.py.txt r1 manage 0.9 K 2014-02-08 - 23:25 DanielGuerrero  
JPEGjpg signal.jpg r1 manage 110.1 K 2013-09-05 - 01:35 DanielGuerrero  
Unix shell scriptsh sim8TeV.sh r1 manage 2.5 K 2014-05-09 - 19:30 DanielGuerrero  
PNGpng track_delta_pt.png r1 manage 14.8 K 2013-10-17 - 22:20 DanielGuerrero  
PNGpng track_delta_x.png r1 manage 14.8 K 2013-10-17 - 22:20 DanielGuerrero  
JPEGjpg ttbarusingpythia.jpg r1 manage 125.6 K 2013-10-03 - 02:53 DanielGuerrero  
Texttxt vertex.py.txt r1 manage 0.6 K 2014-02-08 - 23:45 DanielGuerrero  
Texttxt vertex_distance.py.txt r1 manage 0.6 K 2014-02-09 - 00:55 DanielGuerrero  
Unknown file formatext void_bookhistograms r1 manage 1.9 K 2013-10-17 - 19:47 DanielGuerrero  
C source code filec void_bookhistograms.C r1 manage 1.9 K 2013-10-17 - 19:52 DanielGuerrero  
Unknown file formatdat void_bookhistograms.dat r1 manage 1.9 K 2013-10-17 - 19:49 DanielGuerrero  
PNGpng wplusjets_pt_comparison.png r1 manage 28.6 K 2014-01-03 - 07:02 DanielGuerrero  
PNGpng wpluswminus_production_comparison.png r1 manage 28.3 K 2014-01-05 - 05:28 DanielGuerrero  
Topic revision: r122 - 2015-11-15 - DanielGuerrero
 