Introduction

This is an effort to collect information that I hope is useful to newcomers and to describe, a posteriori, what I did to get started with my work at CMS at CERN. It may very well be missing some steps. Feel free to expand this page with whatever you consider helpful, and please correct any mistakes you see. Also, since at the time of writing I haven't been at CERN for very long, I may have misunderstood a thing or two. Lastly, by the time you read this, things might have changed. Still, I hope it is helpful.

Useful Links

The CERN TWiki contains vast amounts of documentation; often, you can find answers to your questions there. However, it's sometimes hard to navigate. Below is a list of links I found useful; maybe you can benefit from some of them as well.

Reading all of the material referenced above in one go is probably neither useful nor healthy - at least I sometimes feared my head would explode. So, although what you will read in the next sections is most probably already covered by the links above, read on if you'd like to get your feet wet without working through all of the documentation first. However, I still recommend reading at least the introductions the links above refer to, so you get an idea of how the software you'll use is structured and how it works.

Some Terminology and Introduction to CMSWBM

The LHC ring does not circulate protons or ions at all times. Instead, every now and then, the empty ring is filled with protons or ions. The beam persists until it is dumped for safety reasons or because its quality has degraded too much. The period in which particles are circulating is called a Fill. One can go to the CMSWBM Start Page and navigate to a specific Fill using the Fill Report. (example)

A Run is a period of time in which CMS takes data. That might be during a Fill, when there are collisions, or during beam-off periods, when cosmic particles are recorded. The Run is a concept internal to CMS, i.e. in contrast to Fills it is not something determined by the LHC. Runs are started and stopped by the CMS operators according to various rules, so one Fill may contain any number of Runs. One can look at a specific Run from a Fill Report in CMSWBM, or select it from the Run Summary. (example)

Some Runs last many hours. Every now and then, though, parts of CMS may fail or misbehave, invalidating (parts of) the data taken while the problem exists. However, the data taken before the failure occurred and after it was fixed may still be good. Therefore, each Run is divided into small time segments, called Luminosity Sections, or Lumisections for short. The detector status and the data taken during each Lumisection are monitored and analyzed. If everything looks alright, the Lumisection is certified as good; otherwise the whole Lumisection is declared bad. However, bad Lumisection data is not thrown away, so make sure to use only good Lumisections when running an analysis job. Near the top of the Run Summary page, if a Run has already been selected, there is a section titled Links - one of these links, labelled LumiSections (example), leads to a page that shows the status of each Lumisection and subsystem. A green square means that the subsystem worked correctly, a red one means that it did not. At the time of this writing, one can ignore the fact that Castor is always marked as bad.

From the Run Summary, you can follow the TRG link in the Components section near the top of the page. (example) In the L1Summary Algorithm Triggers table, you can see the available triggers with their bit numbers and names, the number of times each trigger fired (i.e. got activated), and the prescale values.

Each trigger has an associated Trigger Bit. The Level 1 Trigger looks at the event data and decides whether all criteria for a certain trigger are met. If so, the corresponding trigger bit is set (i.e., set to 1), otherwise it is set to 0. For example, the L1_SingleJet128 bit (number 20 at the time of this writing, but that number might change in the future) is set if there is a jet with an energy of 128 GeV or higher. One can see how often this happened in the Pre-DT Counts and Post-DT Counts columns.

Some triggers, e.g. L1_SingleJet16, would cause a readout of the detector so often that it would actually harm data taking, because there is a limit on the number of events CMS can read out per second. Still, removing the trigger altogether is not desirable either. Prescaling is the solution: if the prescale is set to, say, 200000, then in 200000 events where the L1_SingleJet16 bit is set, that bit is forced back to 0 in 199999 of them, leaving only every 200000th event with the L1_SingleJet16 bit set. For example, if such a trigger would otherwise fire at, say, 100 kHz, a prescale of 200000 reduces the accepted rate to about 0.5 Hz.

After this prescaling, a readout is caused if one or more of the triggers that are shown in bold face have their respective bit set. Triggers shown in a normal font do not cause readouts.

Each beam consists of packets of protons or ions which are spatially separated from each other. These packets are called bunches. Each beam can contain a certain number of bunches. Keep in mind, though, that currently (September 2012) not every bunch is actually filled with protons/ions. A bunch crossing, or BX for short, occurs when two bunches collide.

Even though the whole detector cannot be read out for several consecutive bunch crossings, parts of the detector can be. The global trigger, for example, reads the jet information not only of the triggering bunch crossing, referred to as BX0, but also of the previous (BX-1) and the next (BX+1) bunch crossing.

First Contact with CMSSW

This is a series of things I did to get started with CMSSW. It may not do exactly what you need, but I hope it enables you to toy around with the software and get a feeling for it. Also, it's quite possible that I did something when I first started without realizing it was important, and therefore didn't write it down, so it doesn't show up in the following instructions. If this is the case, please update this TWiki page.

I worked exclusively on lxplus, a cluster of servers that has a lot of the software you may need preconfigured for you. You can connect there by typing

ssh lxplus.cern.ch
into your shell (if your local user name differs from your CERN account name, use ssh xyuser@lxplus.cern.ch instead, with xyuser replaced by your account name). If you need graphical output from the server on your local machine, e.g. because you run ROOT there and want to look at some graphs, add the -Y or -X flag to your invocation of ssh. Please read the ssh manpage for the security implications. Windows users may use an SSH client like PuTTY. In this case, you probably don't have an X server installed locally. If you still want a graphical interface on the server, you can use VNC. On the server, invoke
vncserver :12 -localhost -name MyLxplusVNC -geometry 1024x768
export DISPLAY=":12"
where 12 is the VNC display number (which usually corresponds to TCP port 5900+12, i.e. 5912 in this case), and MyLxplusVNC is the session name. You may need to choose another display number if the one specified is already taken. Then, use SSH tunneling to connect to your VNC session in a secure manner; a small example of such a tunnel is shown a bit further below. Keep in mind that lxplus consists of more than one node. All of your data is accessible via the OpenAFS filesystem from any of the lxplus nodes, and from your local computer if you set it up to access the CERN AFS volume. However, if for some reason you want to connect to one specific node, say because you run a screen session there, you can do so by using its full name, e.g.
ssh lxplus442.cern.ch
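
Connecting to a specific node like this is also how the SSH tunnel to your VNC session can be set up. The following is only a minimal sketch, assuming the VNC session from above runs as display :12 on lxplus442 and that a VNC viewer (invoked here as vncviewer) is installed on your local machine; the node name, display number and viewer command are just examples:

# on your local machine: forward local port 5912 to port 5912 (display :12) on that node
ssh -N -L 5912:localhost:5912 lxplus442.cern.ch &
# then point your local VNC viewer at the forwarded port
vncviewer localhost:12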

Your AFS quota will be quite limited, so your home directory is not suitable for large files. For these, you can request a separate workspace: go to the Account Management site, click the Applications and Resources link in the menu at the top of the site, and click the Manage link on the line that says Linux and AFS. Then, in the AFS section, you can request your Workspace area and increase your home directory and Workspace quotas. After doing so, you should see your Workspace path; if your computing account name is xyuser, it will look something like /afs/cern.ch/work/x/xyuser. You can create a softlink in your home directory so you don't have to remember that path:

# create a directory for large files in the work area (-p also creates missing parent directories)
mkdir -p /afs/cern.ch/work/x/xyuser/private/scratch0
ln -s /afs/cern.ch/work/x/xyuser/private/scratch0 ~/
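
To see how much space you are using and how much quota you have, you can query AFS directly. The following is just a quick check using the standard OpenAFS fs command, again with xyuser standing in for your account name:

# show quota and usage for the home directory and the work area
fs listquota /afs/cern.ch/user/x/xyuser
fs listquota /afs/cern.ch/work/x/xyuser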

Next, I went to the newly created directory

cd ~/scratch0
and followed this guide to set up CMSSW. I did the following to check out and build some code from the CVS repository; you should probably still read the mentioned guide, as what I did may already be outdated.
# select the architecture/compiler variant matching the release
export SCRAM_ARCH=slc5_amd64_gcc462
# create a local project area for this CMSSW release
cmsrel CMSSW_5_3_3_patch1
cd CMSSW_5_3_3_patch1/src
# set up the CMSSW environment for this shell
cmsenv
# check out some example code from the CVS repository
cvs co UserCode/L1TriggerDPG
# compile everything under src
scram b
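
As a side note, a full build can take a while. Like make, scram accepts a -j option to run several build jobs in parallel (the number 4 below is only an example), and scram b clean removes previous build products:

# start from a clean state if the build behaves strangely
scram b clean
# rebuild using 4 parallel jobs
scram b -j 4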

cmsenv sets some environment variables so you can use the CMSSW binaries. You need to call it from within your CMSSW src directory in every new session. To automate this, you could add

# architecture used when the release above was set up
export SCRAM_ARCH=slc5_amd64_gcc462
# point X applications at the VNC session from above (only needed if you use VNC)
export DISPLAY=":12"
# grid user interface environment
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
# enter the CMSSW project area and set up its environment
cd ~/scratch0/CMSSW_5_3_3_patch1/src
cmsenv
# CRAB (CMS Remote Analysis Builder) environment, needed for grid jobs
source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.sh
or something similar to your ~/.bashrc, if you're a bash user. If not, you may have to adjust the lines above, as the sourced scripts might not work with the shell you are using.
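
After adding these lines, you can check in a new shell that the environment is actually picked up; this is just a quick sanity check assuming the setup above:

# should print the path of your CMSSW project area
echo $CMSSW_BASE
# should print the path of the cmsRun executable inside the release area
which cmsRun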