Introduction

This page summarizes how to use the code developed to create look-up tables for the MuCTPI hardware and simulation, and how to analyze the performance of these configurations.

The code was originally developed to run on single muon MC simulations, but has recently been upgraded to handle real ATLAS data. All of the necessary code is in the ATLAS user SVN repository. The following sections first summarize how the code works internally, and then explain step by step how to use it.

The analysis method

This section should summarize how the MuCTPI works, how the look-up tables are created, and how their performance is analyzed.

Using the code

The code is in the ATLAS user SVN repository, where you can also browse it online. The following explains how to retrieve, compile and use the code.

Initial checkout

If you start from scratch, you first need to create a working area for the packages. I use CMT to manage and compile the source code, as it makes it possible to have one working area on AFS and use the compiled applications from both my desktop and my laptop. CMT is the code management application used by the ATLAS offline code as well.

Create an empty directory somewhere. Note that you'll need at least about 70 MB of space for the compiled code, since by default the code is compiled with debugging symbols. Go to this directory, and check out the "working area code":

svn co svn+ssh://svn.cern.ch/reps/atlasusr/krasznaa/L1DiMuon/trunk ./

This will create the following layout in your working area directory:

cmt/
   project.cmt
Doxyfile
setup_CERN.sh
setup_STANDALONE.sh
checkout_LUTAnalysis.sh

The setup scripts take care of setting up your environment for compiling the code. Note that I developed the project with BASH, so there are no guarantees that it will work with any other shell.

There are two separate setup scripts in order to make it simple to set up the project both on lxplus, and on any other supported system type. Compiling the code on lxplus and in standalone mode is discussed separately below.
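For illustration, the choice between the two scripts could be wrapped in a small helper like the one below. The helper and its hostname pattern are my assumptions for this sketch; the repository itself simply ships the two scripts and leaves the choice to you.

```shell
#!/bin/bash
# Hypothetical helper: pick the setup script based on the host name.
# The "lxplus*" pattern is an assumption, not part of the checked-out scripts.
pick_setup() {
   case "$1" in
      lxplus*) echo "setup_CERN.sh" ;;
      *)       echo "setup_STANDALONE.sh" ;;
   esac
}

pick_setup "lxplus405.cern.ch"   # -> setup_CERN.sh
pick_setup "mydesktop.local"     # -> setup_STANDALONE.sh
```

In practice you would call it as `source "$(pick_setup "$(hostname)")"` from the working area directory.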

Setting up the compilation on lxplus

Nowadays all regular lxplus machines run SLC5, but since I started writing this code quite a long time ago, the compilation system still fully supports SLC4. (I removed SLC3 support at one point.) Note that SLC4's default compiler is GCC 3.4, so on SLC4 you don't need to set up a different compiler to build the code.

Before sourcing setup_CERN.sh, make sure that if you're on SLC5, you have GCC 4.3 set up. SLC5 comes with GCC 4.1 by default, but most of the centrally provided libraries at CERN are only compiled with GCC 4.3 for SLC5. (For instance XercesC is not available with GCC 4.1 on AFS.) To set up GCC 4.3 on lxplus, do:

source /afs/cern.ch/sw/lcg/contrib/gcc/4.3/[platform]/setup.sh

Now source setup_CERN.sh. It will detect that you haven't checked out anything yet, and will tell you to at least check out the L1DiMuonPolicy package.
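The compiler check described above can be sketched as a small shell function. The function name and its outputs are made up for illustration; the setup scripts themselves do not contain such a check.

```shell
#!/bin/bash
# Hypothetical check: is the active GCC new enough for setup_CERN.sh on SLC5?
needs_gcc43_setup() {
   case "$1" in
      4.3*) echo "no"  ;;  # GCC 4.3 already active, safe to source setup_CERN.sh
      *)    echo "yes" ;;  # e.g. SLC5's default GCC 4.1: source the LCG script first
   esac
}

needs_gcc43_setup "4.1.2"   # SLC5 default -> yes
needs_gcc43_setup "4.3.5"   # after sourcing the LCG setup script -> no
```

In practice you would feed it the live compiler version, e.g. `needs_gcc43_setup "$(gcc -dumpversion)"`.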

Setting up the compilation on a "standalone" system

In order to be able to compile the code on practically any kind of system, the compilation system supports a "standalone" mode. In this mode the user has to point the CMT glue packages to the location of ROOT and XercesC on the system using some environment variables. These variables are set in the setup_STANDALONE.sh file, and have the following meaning:

  • ROOT_ROOT: The directory where you have your local version of ROOT installed.
  • PYTHON_ROOT: In case ROOT was compiled against a different version of Python than the default on your system, you need to set this variable to point to that version of Python. Usually it can be left to point to "/usr".
  • XERCES_ROOT: The directory where you have your local version of XercesC installed.

The default values for these variables reflect the setup on my SLC5 desktop machine, which has ROOT installed under /usr/local/root by hand, and has XercesC installed from the SLC5 repository.
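For illustration, a minimal setup_STANDALONE.sh configuration matching the description above could look like this. The paths are only examples, reflecting the SLC5 desktop setup mentioned; adapt them to wherever ROOT and XercesC live on your system.

```shell
#!/bin/bash
# Example values only; adapt the paths to your own installation.
export ROOT_ROOT=/usr/local/root   # ROOT installed by hand
export PYTHON_ROOT=/usr            # usually the system Python is fine
export XERCES_ROOT=/usr            # XercesC from the distribution's repository
```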

While it is possible to compile the code on practically any system as long as these variables are set correctly, CMT will internally name your platform "Unknown_system" unless you compile on one of the following supported configurations:

  • SLC4 with GCC 3.4 on a 32-bit machine (slc4_i686_gcc34)
  • SLC4 with GCC 3.4 on a 64-bit machine (slc4_amd64_gcc34)
  • SLC5 with GCC 4.1 on a 32-bit machine (slc5_i686_gcc41)
  • SLC5 with GCC 4.1 on a 64-bit machine (slc5_amd64_gcc41)
  • SLC5 with GCC 4.3 on a 32-bit machine (slc5_i686_gcc43)
  • SLC5 with GCC 4.3 on a 64-bit machine (slc5_amd64_gcc43)
  • Intel OSX with GCC 4.0 (osx_i386_gcc40)
  • Intel OSX with GCC 4.2 (osx_i386_gcc42)
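The platform names above all follow the same OS_architecture_compiler pattern; a toy sketch of how such a tag is composed (the helper is illustrative only, not part of CMT):

```shell
#!/bin/bash
# Illustrative only: compose a CMT-style platform tag from its three parts.
make_platform_tag() {
   echo "${1}_${2}_${3}"
}

make_platform_tag slc5 amd64 gcc43   # -> slc5_amd64_gcc43
make_platform_tag osx  i386  gcc42   # -> osx_i386_gcc42
```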

The code will still work like that, but the directory names created by CMT will look a bit frightening, I guess.

-- AttilaKrasznahorkay - 14-Sep-2010

Topic revision: r2 - 2010-10-12 - AttilaKrasznahorkay