A beginner in the data flow
Run a partition
- Log in to lx64slc5
- Set up your environment (give as input the corresponding release and platform tag):
source /afs/cern.ch/atlas/project/tdaq/cmt/bin/cmtsetup.sh tdaq-02-00-03 i686-slc5-gcc43-opt
pm_part_l2ef.py
This will generate a .xml file for you, part_l2ef.data.xml, in which all the details of your partition are described.
setup_daq -p part_l2ef -d part_l2ef.data.xml
and your partition will be started. The first argument (-p) is the name of your partition as specified inside your .xml file; the second (-d) is the .xml file that contains all the details of your partition. A complete example session is shown below.
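Putting the steps above together, a typical session looks like this (same release, platform tag and file names as in the example above):
source /afs/cern.ch/atlas/project/tdaq/cmt/bin/cmtsetup.sh tdaq-02-00-03 i686-slc5-gcc43-opt
pm_part_l2ef.py
setup_daq -p part_l2ef -d part_l2ef.data.xml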
About "part_l2ef.data.xml", the file which describes all the details of your partition.
- The main object is defined at:
<obj class="Partition" id="part_l2ef">
"part_l2ef" is the name of your partition, you can modify it, for example:
<obj class="Partition" id="part_l2ef_teresa">
This is convenient since a common source of problems is two people on lxplus trying to run a partition with exactly the same name. You can check which partitions are currently running with:
ipc_ls -P
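Note that if you rename the partition in the .xml file, the name passed to -p has to match the new id. For example, assuming the partition was renamed to part_l2ef_teresa as above:
setup_daq -p part_l2ef_teresa -d part_l2ef.data.xml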
- The following line tells you where your logfiles will be dumped:
<attr name="LogRoot" type="string">"/tmp/lala"</attr>
you can modify it by changing "/tmp/lala"
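For example, to send the logfiles to a directory of your own (the path below is only a placeholder):
<attr name="LogRoot" type="string">"/tmp/my_logs"</attr>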
- The following line tells the partition where to look for your local code; it will look there before going to the central repository:
<attr name="RepositoryRoot" type="string">""</attr>
you can modify it to be, for example:
<attr name="RepositoryRoot" type="string">"~/public/testarea/blah/blah/installed"</attr>
- The partition has a tree structure; the daughter structure is fully described here. The file describes all the relations between the objects, and each object can also have its own parameters.
Playing with the partition
- Press Boot: it starts the controllers. Each of them controls the tree structure that hangs below it.
- Press Initialize: it starts all the applications that hang from the controllers.
- Press Config
- Press Start
- You can "close" the whole chain. Depending on what you want to do, you stop and restart the whole chain or only part of it.
- Commit & Reload re-reads the database (press it, for example, if you have changed the path where your logfiles are written).
- OH (top right corner) opens the histogram display (the new histograms that you create will appear here; there is one called SFI...).
SFI code
Basics to start
- Check the branches to see which one is interesting for you (each subdirectory contains the head version of the corresponding branch):
svn ls $SVNROOT/DAQ/DataFlow/SFI/branches
- Check out the tag that you need (a combined example is given after this list)
- To compile, go to SFI/cmt and run:
make inst
It creates an install directory where all the binary files, etc. are.
(The full path of the install directory, including install, is the one you want to include in the RepositoryRoot of the blah.data.xml that describes your partition.)
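Putting the checkout and build steps together, a minimal sketch (the tag name SFI-xx-yy-zz is only a placeholder, and it is assumed that the tags live under $SVNROOT/DAQ/DataFlow/SFI/tags next to the branches):
svn co $SVNROOT/DAQ/DataFlow/SFI/tags/SFI-xx-yy-zz SFI
cd SFI/cmt
make inst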
Management of trunk/tags/branches
- Typically 'trunk' is the most up to date working directory
- When you need to work on a development for several days and you do not want to disturb the trunk, you can create a branch for your work. You copy the trunk and keep note of the trunk version you branched from, so that if you later decide to put your changes into the trunk you can make sure you do not overwrite other people's modifications.
- To create a new tag: check out the trunk, modify it, test your changes, update the trunk, and create a new tag (a Subversion sketch is given below).
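A minimal sketch of these operations with Subversion (the branch and tag names are placeholders, not real ones):
svn copy $SVNROOT/DAQ/DataFlow/SFI/trunk $SVNROOT/DAQ/DataFlow/SFI/branches/my-dev-branch -m "branch for my development"
svn copy $SVNROOT/DAQ/DataFlow/SFI/trunk $SVNROOT/DAQ/DataFlow/SFI/tags/SFI-xx-yy-zz -m "new tag after testing in trunk"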
Nightlies
source /afs/cern.ch/atlas/project/tdaq/cmt/bin/cmtsetup.sh nightly i686-slc5-gcc43-dbg
Links
Changing dcmessages
$SVNROOT/DAQ/DataFlow/dcmessages/branches/multi-peb
DAQ/DataFlow/hltinterface hltinterface-01-00-01
Second phase of the multi-PEB implementation
Links:
We can check together whether anything you might need is missing from efio/SFIEFIOEvent.h.
Things to look at
Code updates
Links
--
TeresaFonsecaMartin - 11-Jun-2010