Athena Code Snippets
"All the code that's fit to snip!"

Reading Pool files

The first line in joboptions below will create an "EventSelector" service and add it to the service manager. You then give the EventSelector your list of files.

joboptions:

import AthenaPoolCnvSvc.ReadAthenaPool
svcMgr.EventSelector.InputCollections = ["my","list","of","files"]

However, sometimes this will not be enough: for some objects you want to access from the storegate, the associated pool converters (the things that read the objects from the input file) need the detector description services set up. This takes a bit of time, so only add the following to your job options if necessary:

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = svcMgr.EventSelector.InputCollections
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureSimulationOrRealData() #configures DataSource global flag
AutoConfiguration.ConfigureConditionsTag() #sets globalflags.ConditionsTag, but isn't super smart; it will probably be better to use a current conditions tag instead
from AthenaCommon.DetFlags import DetFlags
DetFlags.detdescr.all_setOff() #remove this line to leave everything on, but setup will then take longer
DetFlags.detdescr.Calo_setOn() #e.g. if I am accessing a CaloCellContainer, I need the calo detector description
include("RecExCond/AllDet_detDescr.py")

Reading Bytestream files (RAW)

You need to use the bytestream converter service, and you also need to manually specify a conditions and geo tag:

from ByteStreamCnvSvc import ReadByteStream
svcMgr.ByteStreamInputSvc.FullFileName=["/path/to/raw"]

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = svcMgr.ByteStreamInputSvc.FullFileName
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureFromListOfKeys(['ProjectName'])
AutoConfiguration.ConfigureGeo() #sets globalflags.DetDescrVersion
AutoConfiguration.ConfigureConditionsTag() #sets globalflags.ConditionsTag, but isn't super smart; it will probably be better to use a current conditions tag instead
AutoConfiguration.ConfigureSimulationOrRealData()

#set up the conddb with the correct conditions tag
from IOVDbSvc.CondDB import conddb
from AthenaCommon.GlobalFlags import globalflags
conddb.setGlobalTag(globalflags.ConditionsTag())

include("RecExCond/AllDet_detDescr.py") #loads detector description, you can set DetFlags to control this

#to actually make proxies appear in the storegate, one needs to register TypeNames with the ByteStreamAddressProviderSvc:
svcMgr.ByteStreamAddressProviderSvc.TypeNames += ["xAOD::TriggerTowerContainer/TriggerTowers", "xAOD::TriggerTowerAuxContainer/TriggerTowersAux."]

You can use the getCurrentCOMCONDTag.py script to find out a current conditions tag for data (it gives two... not sure which is which). You may want to use that instead of globalflags.ConditionsTag in the above setup.

In the above example, the ByteStreamAddressProviderSvc will attempt to invoke the bytestream converters registered for the TriggerTowerContainer and TriggerTowerAuxContainer types (there can only be one converter for each type, determined by the CLID passed to the Converter base class in the constructor). The usual setup is that a tool is responsible for creating the object; you will see a tool being used in the converter code. If necessary, you may need to configure this tool in your joboptions.

How bytestream and pool files are read

A piece of the bytestream or pool file is read by a converter, which implements the IConverter interface. Most will actually inherit from Converter directly. Pool converters (different to bytestream converters) tend to inherit from this class through AthenaPoolConverter, as invoked through one of the template classes given in the AthenaPoolCnvSvc package.

The converters are classified by what number they return from their repSvcType method. Bytestream converters will return ByteStream_StorageType, whereas pool converters return POOL_StorageType.

Converters are instantiated and held by an instance of a conversion service, which implements the IConversionSvc interface. Conversion services also implement the IConverter interface, so conversion services can hold other conversion services. The two main conversion services in athena are AthenaPoolCnvSvc for reading pool files and ByteStreamCnvSvc for reading bytestream. They both inherit from AthCnvSvc, which is where they acquire their IConversionSvc inheritance from. The createConverter method of AthCnvSvc uses the gaudi factory method to create an instance of a converter for a given clid and svcRepType (a clid and svcRepType together form a ConverterID). These are declared through the DECLARE_CONVERTER_FACTORY macro (which appears in the entries.cxx file in the packages where each converter lives), which asks the static classID method implemented for each converter which clid it is able to convert. It also asks the static storageType method for the svcRepType, so in all cases this should return the same result as the repSvcType method of a converter (I haven't seen a case where it doesn't). Because the ConverterID must uniquely identify a converter, it is not possible to have two different converters for the same clid with the same svcRepType (aka bytestream or pool). If you do, the invoked converter will always be the first converter registered to the gaudi factories.

A conversion svc will be asked to create objects (through its converters) by a call to its createObj method. This receives an IOpaqueAddress and a DataObject. The DataObject is what will be 'filled' with the payload. The IOpaqueAddress contains the clid and svcRepType (svcType) this object creation request is for. It also holds the storegate key in its par property. This IOpaqueAddress object is held inside a TransientAddress, which is held inside a DataProxy. When a DataProxy is asked for its data through the accessData method, if it needs to load the data (the address is currently in an invalid state) it will call the createObj method of its conversion svc (the m_dataLoader data member) with its IOpaqueAddress.

The DataProxies are put in the storegate by the ProxyProviderSvc registered to the storegate. This svc creates every proxy with the same conversion svc, which is an instance of EvtPersistencySvc called EventPersistencySvc (it's set up in the application mgr). It's actually just a dumb inherited class of PersistencySvc, which is designed to be a conversion service that just holds other conversion services. The conversion services it knows about are the ones specified in the CnvServices property. So when you configure a job to read bytestream, the ByteStreamCnvSvc is added to the EventPersistencySvc (see here).

The ProxyProviderSvc knows which proxies to add (as well as what clid and svcType to set for the IOpaqueAddress) by asking its address provider services for these addresses. The address provider for bytestream is ByteStreamAddressProviderSvc, and it's registered to the ProxyProviderSvc here. Address provider services implement the IAddressProvider interface. The TransientAddress objects created in the preloadAddresses method of the address provider are made aware of their provider when they are updated with it in the addAddresses method of ProxyProviderSvc, which is called when the proxies are being added to the storegate for the first time. When a transient address is checked for its validity (isValid()) for the first time in an event (in the accessData method of the proxy), the updateAddress method of the provider is called, and that is where the instance of IOpaqueAddress is actually created (this happens each event, because the addresses are destroyed at the end of each event, when the proxies are reset()). In this case it's of type ByteStreamAddress, which inherits from GenericAddress, which inherits from IOpaqueAddress. Note that the first parameter of the GenericAddress constructor ends up being the svcType of the address, and in the ByteStreamAddress constructor that is set to ByteStream_StorageType. Note that for pool files, the address provider is AthenaPoolAddressProviderSvc, which registers GenericAddresses with storage types defined in the tokens of the data header (which should always be POOL_StorageType).

So this completes the 'food chain' for how your data gets loaded (a python-side inspection sketch follows the list):
1. At the start of the job, the AddressProviderSvcs registered to the ProxyProviderSvc create TransientAddress objects for each type of object they know about.
2. The ProxyProviderSvc creates a proxy for each TransientAddress object and puts the proxies in the storegate. The TransientAddresses all have EvtPersistencySvc set as their ConversionSvc. The AddressProvider of each TransientAddress is also set.
3. You ask storegate for the data.
4. Storegate asks the corresponding DataProxy for the data.
5. The proxy checks the TransientAddress's validity. In this process, if it's invalid (no data loaded yet), the AddressProvider assigned to the TransientAddress will create an IOpaqueAddress in its updateAddress method.
6. The proxy will then call its ConversionSvc (always EvtPersistencySvc for the main storegate) with the IOpaqueAddress (and an empty DataObject).
7. The ConversionSvc (always EvtPersistencySvc) uses the IOpaqueAddress's svcType and clid to look up the appropriate converter. EvtPersistencySvc knows about other ConversionSvcs, which it will call based on the svcType of the IOpaqueAddress.
8. The final ConversionSvc then calls createObj on the converter (which it looks up or creates through gaudi's factories) to populate the DataObject.
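
You can inspect some of this wiring from the python side of a configured job. A minimal sketch (assuming a standard input setup like the ReadAthenaPool or ReadByteStream joboptions above, which put these services on svcMgr; property names as in the standard configurables):

print(svcMgr.EventPersistencySvc.CnvServices) #the conversion services held by the EvtPersistencySvc instance
print(svcMgr.ProxyProviderSvc.ProviderNames) #the address providers that will create the TransientAddresses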

Reading Root files (D3PD)

The first line in joboptions below will create an "EventSelector" service and add it to the service manager. You must tell the EventSelector the name of the TTree you are looping over. You then give the EventSelector your list of files.

joboptions:

import AthenaRootComps.ReadAthenaRoot
svcMgr.EventSelector.TupleName = "TTreeName"
svcMgr.EventSelector.InputCollections = ["my","list","of","files"]

See AthenaRootD3pdReading for more info (Also see my analysis framework for d3pd reading in athena, it's really good, honest!: Cam Classes )

A lesson in AutoConfiguration

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = ["your","list","of","files"] #set this from your event selector
from RecExConfig import AutoConfiguration
Then call the following functions of AutoConfiguration as needed (a combined example follows the list):

ConfigureSimulationOrRealData() : sets AthenaCommon.GlobalFlags.globalflags.DataSource and RecExConfig.RecFlags.rec.Commissioning (sets the latter to True if data)
ConfigureGeo() : sets AthenaCommon.GlobalFlags.globalflags.DetDescrVersion
ConfigureConditionsTag() : sets AthenaCommon.GlobalFlags.globalflags.ConditionsTag (may need to pass None,None as arguments in older releases)
ConfigureBeamType() : sets AthenaCommon.BeamFlags.jobproperties.Beam.beamType
ConfigureDoTruth() : sets RecExConfig.RecFlags.rec.doTruth
ConfigureFromListOfKeys(["ProjectName"]) : sets RecExConfig.RecFlags.rec.projectName

A lesson in event generation

In this section I will give you my notes on setting up an event generation for an 'on the fly' madgraph sample.

First ensure you are using the latest mc production release. At the time of writing, mc production is being done in the 19 series of releases.

Outputting Histograms and TTrees

You should use the histogram service. In your initialize, get the histogram service and register your histogram. The example below shows how to create two separate "streams" and write a histogram to one and a tree to the other:
#include "GaudiKernel/ITHistSvc.h"
#include "GaudiKernel/ServiceHandle.h"
...
in initialize: 
  ServiceHandle<ITHistSvc> histSvc("THistSvc",name());
  CHECK( histSvc.retrieve() );
  myHist = new TH1D("myHist","myHist",10,0,10);
  histSvc->regHist("/MYSTREAM/myHist",myHist).ignore(); //or check the statuscode
  myTree = new TTree("myTree","myTree");
  CHECK( histSvc->regTree("/ANOTHERSTREAM/myTree",myTree) );
Note the "MYSTREAM" and "ANOTHERSTREAM". You must match this to a stream configured in your job options. In your job options you should add the following:
svcMgr += CfgMgr.THistSvc()
svcMgr.THistSvc.Output += ["MYSTREAM DATAFILE='myfile.root' OPT='RECREATE'"]
svcMgr.THistSvc.Output += ["ANOTHERSTREAM DATAFILE='anotherfile.root' OPT='RECREATE'"]
Note: if you are reading a d3pd, you don't need the first two lines above, since the d3pd reading job options automatically load the histogram service

To put your output Tree/Hist/whatever inside a TDirectory in the output file, just include the path in the string you pass to regTree (etc), e.g:

  myTree = new TTree("myTree","myTree");
  CHECK( histSvc->regTree("/ANOTHERSTREAM/MyDirectory/myTree",myTree) );

This will create a 'myTree' tree inside the 'MyDirectory' directory. Do not include the directory in the name of the TTree.

Speed up transforms start time

All the transforms, by default, will do a 'scan' of your asetup. This can take several minutes, and happens near the top of the job. It's what's happening immediately after this line:
PyJobTransforms.trfJobOptions.writeRunArgs 2015-05-29 15:01:33,305 INFO Successfully wrote runargs file

A trick to skip this dumb check when testing locally is to add --asetup='' to your command.

jobConfig in generate_tf

You can specify "MC15JobOptions/DSID302xxx/blah.blah.blah.py" for the jobConfig; that's how the main block of joboptions auto-appears in the path (no tarball needed).

Minimal joboption for copying part of an xAOD

To copy a primary xAOD's content, you will need to load the necessary metadata tools to copy the metadata, and you will also need to load the calorimeter geometry, because primary xAODs contain a calo cell collection for the thinned calo cells around egamma candidates.

Here are the essential parts:

#this part just gets the flags ready for loading detector description
#if you are using AthAnalysisBase, you can skip this section because the CaloCell collections are automatically skipped in those releases
from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = svcMgr.EventSelector.InputCollections
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureSimulationOrRealData() 
AutoConfiguration.ConfigureGeo() 
from AthenaCommon.DetFlags import DetFlags
DetFlags.detdescr.all_setOff()
DetFlags.detdescr.Calo_setOn()
#Now load the description:
include("RecExCond/AllDet_detDescr.py")

#now setup the output stream, adding all the event payload (using TakeItemsFromInput)
from OutputStreamAthenaPool.MultipleStreamManager import MSMgr
xaodStream = MSMgr.NewPoolRootStream( "StreamXAOD", "xAOD.out.root" )
xaodStream.Stream.TakeItemsFromInput = True
#add the metadata and the metadata tools
ToolSvc += CfgMgr.LumiBlockMetaDataTool("LumiBlockMetaDataTool") #only needed for data
ToolSvc += CfgMgr.xAODMaker__TriggerMenuMetaDataTool( "TriggerMenuMetaDataTool" )
theApp.CreateSvc += ['CutFlowSvc/CutFlowSvc'] #Not a metadatatool yet :-(
svcMgr.MetaDataSvc.MetaDataTools += [ ToolSvc.LumiBlockMetaDataTool, ToolSvc.TriggerMenuMetaDataTool  ]
xaodStream.AddMetaDataItem(["xAOD::LumiBlockRangeContainer#*","xAOD::LumiBlockRangeAuxContainer#*",
                                                   "xAOD::TriggerMenuContainer#*","xAOD::TriggerMenuAuxContainer#*",
                                                   "xAOD::CutBookkeeperContainer#*", "xAOD::CutBookkeeperAuxContainer#*"])

Trigger Decisions

If the trigger configuration is stored in the pool file (usually the case for ESD/AOD) then:

joboptions:

#TriggerConfigGetter looks at recflags, so it might be a good idea to set those, but I have found this is not always necessary
from RecExConfig.RecFlags  import rec
rec.readRDO = False #readRDO is the one that is True by default. Turn it off before turning others on
rec.readAOD = True #example for reading AOD

#need this bit for AOD because AOD doesn't contain all the trigger info - so it has to connect to the DB for some of it
#ESD/RDO appear not to need this. Only do this bit if running on data, not MC
from AthenaCommon.GlobalFlags import globalflags
globalflags.DataSource = 'data'

#every situation needs the next bit
from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = ["list","of","input","files"]
from TriggerJobOpts.TriggerFlags import TriggerFlags
TriggerFlags.configurationSourceList=['ds']
from TriggerJobOpts.TriggerConfigGetter import TriggerConfigGetter
cfg = TriggerConfigGetter()

Note that the athenaCommonFlags file list is needed because the TriggerConfigGetter relies on the InputFilePeeker, which looks in AthenaCommonFlags for its files to peek at. You could even use the inputFilePeeker to configure the recflags and globalflags, as sketched below.
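
For example, a minimal sketch reusing the inputFileSummary pattern from the InputFilePeeker section later on this page:

from RecExConfig.InputFilePeeker import inputFileSummary
from AthenaCommon.GlobalFlags import globalflags
globalflags.DataSource = 'data' if inputFileSummary['evt_type'][0] == "IS_DATA" else 'geant4' #instead of hardcoding 'data' as above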

header:

#include "TrigDecisionTool/TrigDecisionTool.h"
...
ToolHandle<Trig::TrigDecisionTool>    m_trigger; //e.g. initialize in your constructor with: m_trigger("Trig::TrigDecisionTool/TrigDecisionTool")

initialize:

CHECK( m_trigger.retrieve() ); //or at least check the returned StatusCode

usage:

m_trigger->isPassed("L1_EM3");

asetup devval from afs nightlies

if you just do:

asetup devval,rel_5

You'll get the cvmfs copy of the nightlies, aka you'll see:

at /cvmfs/atlas-nightlies.cern.ch/repo/sw/nightlies/x86_64-slc6-gcc47-opt/devval/rel_5

To get the proper afs nightly, do:

asetup devval,rel_5 --nightliesarea=/afs/cern.ch/atlas/software/builds/nightlies

Adding a filtering algorithm to control which events the AthSequencer processes

You can write algorithms which control which events will be processed by the algorithms in the AthSequencer.

In the execute of the filtering algorithm (useEvent is a bool: true if the next alg in the sequencer should execute, false if it should not):

this->setFilterPassed(useEvent);

My current advice is to only work with the AthAlgSeq, which is an instance of an AthSequencer that is set up for you by the athena core job options. There are 4 (!!!) other sequencers which are all set up for you, but my advice is to avoid using them unless you really know what you're doing, because of some strange behaviours (they aren't "normal" sequencers, which I won't go into now. But you've been warned).

A quick explanation: AlgSequence is a name that gets thrown around. It should be thought of as shorthand for "the AthSequencer instance named AthAlgSeq". You might see code like this:
from AthenaCommon.AlgSequence import AlgSequence
job = AlgSequence()
I currently think we should try to just use CfgMgr for everything in our joboptions, so see the code below for the equivalent of the above.

When you make a job, do it like this:

job = CfgMgr.AthSequencer("AthAlgSeq") # This is equivalent to the "AlgSequence" you can get from "import AlgSequence", but I consider that deprecated
job += CfgMgr.MyAlg("MyAlg")
That added an alg to be run in your job. If you want that alg to be a filter (i.e. if it contains this->setFilterPassed(decision);), so that if the decision is false the subsequent algorithms in the sequencer do not execute, then you must create a new sequencer and add that to AthAlgSeq instead. To demonstrate:

job += CfgMgr.MyAlg("MyAlg")
job += CfgMgr.AnotherAlg("SecondAlg")

Both of the above algs will execute, regardless of the filter decision of the first alg. To get the desired behaviour, do this:

mySeq = CfgMgr.AthSequencer("MySequence")
mySeq += CfgMgr.MyAlg("MyAlg")
mySeq += CfgMgr.AnotherAlg("SecondAlg")
job += mySeq

That will now pay attention to the filter decision. You can have multiple sequencers in the AthAlgSeq, which will all be executed. For example:

mySeq = CfgMgr.AthSequencer("MySequence")
mySeq += CfgMgr.MyAlg("MyAlg")
mySeq += CfgMgr.AnotherAlg("SecondAlg")
job += mySeq
mySeq2 = CfgMgr.AthSequencer("MySequence2")
mySeq2 += CfgMgr.MyAlg("MyAlg2")
mySeq2 += CfgMgr.AnotherAlg("SecondAlg2")
job += mySeq2

In the above, MyAlg and MyAlg2 (instances of the MyAlg algorithm) will both definitely be executed. SecondAlg and SecondAlg2 (instances of AnotherAlg) will be executed based on the filter decision of MyAlg and MyAlg2 respectively.

Creating Pool files (DPD)

This can be used to "Slim" pool files, by picking what things from the storegate you want to save.

joboptions:

from OutputStreamAthenaPool.MultipleStreamManager import MSMgr
StreamDPD=MSMgr.NewPoolStream("StreamDPD","DPD.pool.root")
StreamDPD.AddMetaDataItem(["IOVMetaDataContainer#*"])
StreamDPD.AddItem( ["EventInfo#*"] )
StreamDPD.AddItem( ["DataVector<LVL1::TriggerTower>#TriggerTowers"] )

The format of the above lines is:

StreamDPD.AddItem(["Type#Key"])

You can check the type with a storegate dump (put StoreGateSvc.Dump=True in your job options) or by using checkSG.py, e.g:

> checkSG.py myfile.pool.root | grep "TriggerTowers"
          DataVector<LVL1::TriggerTower> | TriggerTowers, TriggerTowersMuon        

The key is just the storegate key name. If you set it to "*", all objects of the given type will be saved to your pool file, as shown below. Note that the code won't crash if it can't find a particular item in the storegate, so always check your output dpd has got what you want in it, and if it hasn't, go back and check your "AddItem" lines.
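
For example, given the checkSG.py output above, a single wildcard item saves both TriggerTower collections:

StreamDPD.AddItem( ["DataVector<LVL1::TriggerTower>#*"] ) #saves TriggerTowers and TriggerTowersMuon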

Copying all the containers

For some containers you will need a pool converter to be able to read the container and then write it back out (this is persistent->transient->persistent; I haven't figured out how to skip the conversion step and just do persistent->persistent, so the converters are needed). The safe way to load all the converters is:

DetDescrVersion = 'ATLAS-GEO-16-00-01'
include("RecExCond/AllDet_detDescr.py")
...
StreamDPD.GetEventStream().TakeItemsFromInput = True

You don't need any of the AddItem stuff for StreamDPD if you do this. Note that the DetDescrVersion must match that of your input files. You can auto-set this with the inputFilePeeker (see the section on this page about the InputFilePeeker).

Controlling which events make it in to the pool file

You can control which events get put in the pool file with:

StreamDPD.AcceptAlgs(["MyFilteringAlg"])

Where "MyFilteringAlg" is the name of an alg that has the

this->setFilterPassed(useEvent);
sort of code in it, as given in the section "Adding a filtering algorithm". The filter alg does not have to be in the "AthFilterSeq" AthSequencer; it can be in the AlgSequence ("AthAlgSeq") as well.

Any algs you add as AcceptAlgs will be able to trigger an event for saving to your pool file. If you want an "AND" of a series of algs, then you can instead use:

StreamDPD.RequireAlgs(["MyFilteringAlg1","MyFilteringAlg2"])

Both algs will have to have "passed" for the event to be saved. If you had used AcceptAlgs, either of the algs passing would save the event.

Controlling AthenaEventLoopMgr printout

By default, the event loop will print a message every event. To change this, at the bottom of your job options add:

svcMgr += CfgMgr.AthenaEventLoopMgr(EventPrintoutInterval=500)
Technical note: we have to give the instance to the svcMgr because the AthenaEventLoopMgr isn't created by theApp until it is initialized. So to get it to pick up the instance we configure, we need python not to forget it: give it to svcMgr and it will remember, and theApp will use our instance.

Using InputFilePeeker to auto-config your job options

Add the line:
from RecExConfig.InputFilePeeker import inputFileSummary
Then you can access useful information in the inputFileSummary dictionary. E.g:

globalflags.DataSource = 'data' if inputFileSummary['evt_type'][0] == "IS_DATA" else 'geant4'
DetDescrVersion =  inputFileSummary["geometry"]

To see all your options, do a "print inputFileSummary" and look at the output to see what info you can access.

Disabling checkreq

See AthAnalysisBase

Registering user defined classes

Before a class may be registered in StoreGate, it needs a unique Class ID (clid). If you don't have a CLASS_DEF available for a type, you will get messages from the storegate when you run, saying that it doesn't have a clid for that type and so cannot record it to storegate.

Assuming that the class has been defined, the CLASS_DEF macro must be called. In a bash shell, use the following command to generate a suitable clid:

clid -m "MyObjType"

This will output a line of the form CLASS_DEF( MyObjType, 8002 , 1 ), where 8002 is the unique clid in this case. Put this in any header file.

Working with ClassID in code

You use the ClassIDSvc to convert between a clid and the type name (in the code, CLID is just a typedef for an unsigned int). E.g. if I wanted to find the typename of the class with clid 71052636, I would just do:

#include "GaudiKernel/ServiceHandle.h"
#include "AthenaKernel/IClassIDSvc.h"
...
      ServiceHandle<IClassIDSvc> m_clidSvc("ClassIDSvc",name());
      CHECK( m_clidSvc.retrieve() );
      std::string typeName;
      CHECK( m_clidSvc->getTypeNameOfID(71052636,typeName) );
      ATH_MSG_INFO("Type is: " << typeName);
This service also allows you to find out the clid from the type name as a string; just check the interface file.

Looking at what configurables exist

The list of configurables is kept in the WeakValueDictionary called Configurable.allConfigurables. You can ask for its items (this gives a list of tuples), where the first entry in each tuple is the string name and the second is the instance. You can ask the instance for its name and type. So e.g:

for name,insta in Configurable.allConfigurables.items() : print(insta.getType() + "/" + insta.name())

prints a nice list (getType() and name() are methods of all configurables).

Printing configurables in postexec and postinclude

In the transforms (e.g. Reco_tf) I have found that the printing of configurables gets disabled at some point between the preExec/preInclude and the postInclude/postExec. I can't find where in the pandora's box this happens, but you can re-enable the printing by doing:

import AthenaCommon.Configurable as Configurable;Configurable.log.setLevel( INFO );

at the start of your postExec or postInclude. This will re-enable the printing of the properties of configurables, rather than just their names and types.

workarea for broadcasting compilation

Please see AthAnalysisBase

Adding extra ROOT libraries to compilation

See AthAnalysisBase

Official files are dataset aware

This is a note to myself that it turns out files (including D3PD) do know which dataset they come from. Use the 'CollectionTree' tree and look at the Token branch, which has a format like
[DB=<GUID> ][blah blah blah...   
Take that GUID, and after setting up pyAMI, you can do:
ami cmd GetFileByGUID fileGUID=<GUID>
and this returns the dataset name of the AOD (in the case of a D3PD) that the file was made from (in logicalDatasetName). Note to self to incorporate this into the cam tool.

Find file of AOD or DAOD that a data event is in

You can combine pyAMI with the event lookup service to locate the exact file an event is in. Note that not all data types (i.e. derivations) have been indexed, but HIGG2D1 is one that has, for example. Set up like this:

lsetup pyAMI eiclient
You might need to voms-proxy-init as well!

Then do the following (I've combined the whole thing into a one-line command; you can split it up if you wish, to see what is going on in each part):

ami cmd GetFileByGUID fileGUID=$(el -e '00300415 259128153' -s physics_Main -api simple -t DAOD_HIGG2D1)

where you can replace the runNumber and eventNumber in the middle of the command with the numbers you want. Note: this only works like this on lxplus; away from CERN you need cookies to use eventIndex.

Working with Identifiers

You can print out an identifier with myID.getString(). You can then create a corresponding identifier with:
            Identifier targetChannel;
            targetChannel.set("0x39e03c0000000000");
and then compare to find matches with: if(myID==targetChannel) ATH_MSG_INFO("Found match");

Look up packages used in a release in a text file

These can be found here: /afs/cern.ch/atlas/software/dist/nightlies/nicos_work/tags ... look under 2.0.X for AthAnalysisBase, and find the file corresponding to the day the release was built (look up the day from the athena -i printout after setting up the release... it gives the date).

Dark arts: Interactive Athena

Start athena in interactive mode:
athena -i
Load a file, e.g.
import AthenaPoolCnvSvc.ReadAthenaPool
svcMgr.EventSelector.InputCollections = ["myfile.pool.root"]
Now initialize the application, and load the first event:
theApp.initialize()
theApp.nextEvent()
Now get a handle to the storegate:
sg = PyAthena.py_svc('StoreGateSvc')
To take a look at what you have, you can do a dump:
sg.dump()
To access one of the things in the storegate, do, e.g.
tt = sg['TriggerTowers']

Here's a full example, showing how you could read calo cells from an AOD (put the code in a python file and execute it with athena -i example.py):

#setup reading a POOL file
#including loading the detector description for calorimeter
#taken from: https://twiki.cern.ch/twiki/bin/view/Main/AthenaCodeSnippets#Reading_Pool_files

import AthenaPoolCnvSvc.ReadAthenaPool
svcMgr.EventSelector.InputCollections = ["/afs/cern.ch/user/a/asgbase/patspace/xAODs/r7725/mc15_13TeV.410000.PowhegPythiaEvtGen_P2012_ttbar_hdamp172p5_nonallhad.merge.AOD.e3698_s2608_s2183_r7725_r7676/AOD.07915862._000100.pool.root.1"]

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = svcMgr.EventSelector.InputCollections
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureSimulationOrRealData() #configures DataSource global flag
AutoConfiguration.ConfigureConditionsTag() #sets globalflags.ConditionsTag, but isn't super smart; it will probably be better to use a current conditions tag instead
from AthenaCommon.DetFlags import DetFlags
DetFlags.detdescr.all_setOff() #remove this line to leave everything on, but setup will then take longer
DetFlags.detdescr.Calo_setOn() #e.g. if I am accessing a CaloCellContainer, I need the calo detector description
include("RecExCond/AllDet_detDescr.py")


#now do a pyAthena loop
#taken from: https://twiki.cern.ch/twiki/bin/view/Main/AthenaCodeSnippets#Dark_arts_Interactive_Athena


theApp.initialize()
sg = PyAthena.py_svc("StoreGateSvc") #The main storegate


#the event loop
for i in range(0,5):
    theApp.nextEvent() #load next event
    cells = sg['AODCellContainer'] #this is the calo cell collection in an AOD
    print "(e,eta,phi) = (%f,%f,%f)" % (cells[0].e(),cells[0].eta(),cells[0].phi())

Compare two releases

You can check which tags differ between releases with a command like this:

get-tag-diff.py --ref=AthAnalysisBase,2.3.21,slc6,gcc48 --chk=AthAnalysisBase,2.3.22,slc6,gcc48
You have to specify the slc and gcc versions because the script currently hardcodes them to slc5 and gcc43 respectively (maybe one day I'll update this script!)

Browsing conditions databases

Use command like this:
AtlCoolConsole.py "COOLOFL_LAR/OFLP200"
For data, change OFLP200 to COMP200 (or CONDBR2 for run 2). It also accepts a full connection string (search this page for the words 'connection string' to see more about the options; the above is a logical connection string, below is a physical one):
AtlCoolConsole.py "sqlite://;schema=/path/to/my.db;dbname=OFLP200"

Dumping conditions to a ROOT file

You can dump out the information from a folder (e.g. see the list of database folders) for a specific run with:
AtlCoolCopy "COOLOFL_TRIGGER/CONDBR2" my.output.root -root -folder "/TRIGGER/OFLLUMI/OflPrefLumi" -run 296939 -tag OflLumi-13TeV-001

There's a bunch of options for this program, which you can see with AtlCoolCopy -h; e.g. if you want to specify a run range you can use the -runsince and -rununtil options. If you don't specify a tag then you will get all the tags of the folder dumped into your ROOT file.

The only thing you need to work out is which database (COOLOFL_TRIGGER in the example above) your desired folder lives in. You can usually infer that from the first directory in the folder string, along with whether it's an online-like or offline-like quantity you are trying to access.

A lesson in conditions databases

The information in the conditions database is usually provided to you via the DetectorStore, which is an instance of the storegate. A typical error you will see if you haven't loaded conditions information correctly might look something like this:

DetectorStore       ERROR regFcn: could not bind handle to AthenaAttributeList to key: /LAR/IdentifierOfl/OnOffIdMap_SC
ToolSvc.LArSupe...  ERROR Failed to register callback on SG key/LAR/IdentifierOfl/OnOffIdMap_SC
Or any other DetectorStore 'ERROR' that says it can't find something that was requested (often an 'AthenaAttributeList' object type is requested). So how do we provide it with this "/LAR/IdentifierOfl/OnOffIdMap_SC" object? This is in fact a conditions 'folder', to use the technical name. What goes into the detector store is controlled by the conddb 'service' (it's really just a conduit class for configuring the IOVDbSvc, which you should only have to interact with directly in expert cases). Here's the joboption I need to add:

from IOVDbSvc.CondDB import conddb
conddb.addFolder('LAR_OFL',"/LAR/IdentifierOfl/OnOffIdMap_SC")

The first argument of the addFolder method says which database the folder is contained in (the first part is technically called a schema, and there are multiple database instances under each schema), and the second says the folder I want to access. How do I know which database to use? We'll come back to that...

Anyhow.. the effect of adding these two lines to a joboption would be:

Py:IOVDbSvc.CondDB    INFO Setting up conditions DB access to instance OFLP200

This line says that the conddb is being set up to look at the DB instance called OFLP200. The important DB instances are: OFLP200 (for MC), COMP200 (for run 1 data) and CONDBR2 (for run 2 data). I.e. there are three instances of every database (the notation in log files is schema/instance, so for example LAR_OFL/OFLP200, LAR_OFL/COMP200 and LAR_OFL/CONDBR2 ... LAR_OFL is the 'schema'). By default, OFLP200 will be used. The conddb can be configured to choose a different database instance via the 'global flags' class like this (put this before the conddb lines above):

from AthenaCommon.GlobalFlags import globalflags
globalflags.DataSource = 'data'  # or set this to 'geant4' if running on MC! ...
globalflags.DatabaseInstance = 'CONDBR2' # e.g. if you want run 2 data, set to COMP200 if you want run1. This is completely ignored in MC.

You can also auto-configure these settings based on your input file. If you leave DatabaseInstance set to its default value ('auto'), then you need to specify the rec.projectName instead, in order for conddb to infer which instance to use:

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = ["your","input","files"] #you can set this from your event selector, say
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureSimulationOrRealData() #sets globalflags.DataSource
AutoConfiguration.ConfigureFromListOfKeys(['ProjectName']) #sets rec.projectName, necessary to infer DatabaseInstance if that is left to 'auto' (default value)

OK, so we assume you've set up the correct database instance at this point.

Later down in the logs you should see the following (I've snipped out some unimportant bits, shown as ...):

IOVDbSvc             INFO Opening COOL connection for COOLOFL_LAR/OFLP200
...
Data source lookup using /cvmfs/atlas-nightlies.cern.ch/repo/sw/nightlies/x86_64-slc6-gcc47-opt/dev/rel_4/AtlasCore/rel_4/InstallArea/XML/AtlasAuthentication/dblookup.xml file
...
CORAL/Services/ConnectionService Warning Failed to connect to service sqlite200/ALLP200.db (coral::Exception): 'CORAL/RelationalPlugins/sqlite ( CORAL : "Connection::connect" from "/var/clus/usera/will/dbDemo/DPSValidation/share/sqlite200 is not writable" )' - do NOT retry
CORAL/Services/ConnectionService Info Connection to service "sqlite200/ALLP200.db" with connectionID=C#1 will be disconnected
CORAL/Services/ConnectionService Warning Failure while attempting to connect to "sqlite_file:sqlite200/ALLP200.db": CORAL/RelationalPlugins/sqlite ( CORAL : "Connection::connect" from "/var/clus/usera/will/dbDemo/DPSValidation/share/sqlite200 is not writable" )
...
CORAL/Services/ConnectionService Info New connection to service "ATLF/()" with connectionID=C#2 has been connected
CORAL/Services/ConnectionService Info New user session with sessionID=S#1(C#2.s#1) started on connectionID=C#2 to service "ATLF/()" for user "" in read-only mode
RalSessionMgr Info Start a read-only transaction active for the duration of the database connection
RelationalDatabase Info Instantiate a R/O RalDatabase for 'COOLOFL_LAR/OFLP200'
RelationalDatabase Info Release number backward compatibility - NO SCHEMA EVOLUTION REQUIRED: database with OLDER release number 2.7.0 will be opened using CURRENT client release number 2.9.1
IOVDbSvc             INFO Disconnecting from COOLOFL_LAR/OFLP200

What happened here?... The IOVDbSvc tried to locate the 'database' called COOLOFL_LAR/OFLP200. But hang on... I thought the database we wanted was called 'LAR_OFL'? It turns out that what you give to the addFolder method is actually a sort of 'alias' for an actual database name, i.e. 'LAR_OFL' maps onto the database name 'COOLOFL_LAR'. These mappings are listed here. Then what happened is that a special configuration file called dblookup.xml was read in. This file is pointed at by the $CORAL_DBLOOKUP_PATH environment variable. If you open it you find blocks like this (what you see very much depends on which release you have set up):

<logicalservice name="COOLOFL_LAR">
 <service name="sqlite_file:sqlite200/ALLP200.db" accessMode="read" />
 <service name="oracle://ATLAS_COOLPROD/ATLAS_COOLOFL_LAR" accessMode="read" authentication="password" />
 <service name="frontier://ATLF/()/ATLAS_COOLOFL_LAR" accessMode="read" />
</logicalservice>

These are the replicas of the database. So what we saw is that IOVDbSvc first tried to access the database instance inside an sqlite file located at sqlite200/ALLP200.db (one possible replica of the COOLOFL_LAR database). It didn't find this though, so it gave up and moved on to the frontier connection (ATLF/()). The actual frontier server that it connects to (in the case where '()' is specified in the syntax) is determined by the $FRONTIER_SERVER environment variable... if this isn't set then you won't be able to use frontier servers, so please do check it is set to something! Oracle was skipped because direct oracle access is no longer allowed.

The log says it connects to frontier fine, then it reads the payload and disconnects from the database. If we were to set svcMgr.DetectorStore.Dump=True, we would hopefully now see:

Found .. proxies for ClassID 40774348 (AthenaAttributeList): 
...
 flags: (  valid,   locked,  reset) --- data: 0x128c4190 --- key: /LAR/IdentifierOfl/OnOffIdMap_SC

I.e. we're loading the folder! It's often useful to check which folders are configured to be read (and which database they should be read from)... you can do that by doing:

print(svcMgr.IOVDbSvc.Folders)

This list of strings is what the conddb is actually setting up and controlling. To learn more about the syntax of these strings see the expert section below.

DBReplicaSvc: Controlling which replica is tried

You can control which database replicas are tried via the DBReplicaSvc (if you don't set this up yourself, IOVDbSvc will set it up for you, i.e. you should always see some log messages from DBReplicaSvc if you are using IOVDbSvc). If you know that you definitely don't want to use the sqlite replicas (either because they are not available or for some other reason) then you can do:

svcMgr += CfgMgr.DBReplicaSvc(UseCOOLSQLite=False)

This will skip all SQLite replicas, which would make the warnings about failed connections that we saw in the section above go away (I believe this is what RecExCommon sets up by default). There is also a switch for the frontier replicas; e.g. to only allow access to SQLite replicas you can do:

svcMgr += CfgMgr.DBReplicaSvc(UseCOOLSQLite=True,UseCOOLFrontier=False)

(Obviously if the service already exists in the svcMgr then you should just set the property: svcMgr.DBReplicaSvc.UseCOOLFrontier=False etc etc)

Note that the DBReplicaSvc only controls which replica is tried for a logical connection string (any addFolder which doesn't include the special syntax (see below) uses a logical connection string).

Note also that you are strongly encouraged to use frontier for everything!

Setting up a DBRelease

A DBRelease is an sqlite-based replica of the OFLP200 (i.e. MC-only) database instances. When you run on the grid, one of these will be set up for you automatically (although apparently the frontier servers will be favoured over the DBRelease, based on the way it gets set up... I need to investigate this further). When you run locally, you can set one up by creating softlinks from your run directory to the sqlite200 directory in a DBRelease on cvmfs. Try doing this (in the same dir as you will run your code from):

ln -s /cvmfs/atlas.cern.ch/repo/sw/database/DBRelease/current/sqlite200 sqlite200
ln -s /cvmfs/atlas.cern.ch/repo/sw/database/DBRelease/current/geomDB geomDB

(the second link is for the geometry database, which I haven't covered on this page yet). You can replace 'current' with a specific DBRelease number; just take a look at which ones are available on cvmfs!

Alternatively, redirect your $CORAL_DBLOOKUP_PATH environment variable to point at the special path (containing dblookup.xml) provided in the DBRelease, e.g:

export CORAL_DBLOOKUP_PATH=/cvmfs/atlas.cern.ch/repo/sw/database/DBRelease/current/XMLConfig

Remember, if you set up a DBRelease you should make sure your DBReplicaSvc will actually try to use sqlite replicas, which by default RecExCommon will not do (you shouldn't be using RecExCommon anyway)... so follow the instructions in the section above. You will know you've done things right if you see messages in the logs about successfully connecting to the "sqlite200/ALLP200.db" database replica.

Conditions tags

A conditions tag is a way of loading a particular version of a folder. There are lots of versions of each folder, like you have lots of tags of an svn package. A conditions tag is a predefined set of versions, one per folder, in the same way that a release is a predefined list of tags of svn packages.

You can look at which versions are available for a folder like this...

TODO (AtlCoolConsole)

You set the conditions tag like this ...

conddb.setGlobalTag(MYCONDITIONSTAG)
(You can set this automatically from the input file too, as sketched below!)
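
A minimal sketch of that, reusing the AutoConfiguration and conddb pieces from earlier on this page:

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
athenaCommonFlags.FilesInput = svcMgr.EventSelector.InputCollections
from RecExConfig import AutoConfiguration
AutoConfiguration.ConfigureConditionsTag() #sets globalflags.ConditionsTag from the input file
from AthenaCommon.GlobalFlags import globalflags
conddb.setGlobalTag(globalflags.ConditionsTag())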

You see which version of a folder a conditions tag will resolve to like this ...

TODO

You can explicitly specify a folder version like this...

conddb.addFolder('LAR_OFL',"/LAR/IdentifierOfl/OnOffIdMap_SC <tag>MYVERSION</tag>")

Expert DB folder configuration

The basic use of the 'conddb' configuration 'service' was outlined in the opening section of this lesson. The following 'addFolder' statement:

conddb.addFolder("MYDB","MYFOLDER")

generates an entry in IOVDbSvc's Folder property of the form:

<db>MYACTUALDB/DatabaseInstance</db> MYFOLDER

where MYACTUALDB has come from the mapping provided in the conddb python code here, and DatabaseInstance comes from the chosen DatabaseInstance. If MYDB isn't in the mapping, no db string (the <db>xxx</db> part) is provided. Note that we can therefore obtain exactly the same result by using:

conddb.addFolder("","<db>MYACTUALDB/DatabaseInstance</db> MYFOLDER")

This 'trick' of not specifying the first parameter of addFolder is what the 'expert' usage is all about. So what can you specify in the string that goes in the second argument of the addFolder method? It takes the form:

<X>valueX</X> <Y>valueY</Y> ... MYFOLDER

where X, Y, etc, are taken from the following list of possibilities:

db : the name and instance of the database, separated by a slash, e.g. COOLOFL_LAR/COMP200 ... this is called a logical connection string. The exact replica tried by such a connection string is controlled by the dblookup.xml file, and limited by the configuration of the DBReplicaSvc
dbConnection : a physical connection string (in contrast to the logical connection string above). See here for an explanation of the syntax. Folders with this specification pay no attention to what is specified in the dblookup.xml file or to which replicas the DBReplicaSvc says should or shouldn't be tried... they will only try the explicit db replica given by the connection string
tag : the 'version' of the folder. If unspecified, the version is taken from the mapping provided by the conditions tag

The logical or physical connection strings are the valid arguments that you can give to AtlCoolConsole.py
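
For example, a hypothetical folder read from an explicit sqlite replica (a physical connection string, same syntax as in the AtlCoolConsole example earlier) at an explicit version:

conddb.addFolder("","<dbConnection>sqlite://;schema=/path/to/my.db;dbname=OFLP200</dbConnection> /LAR/IdentifierOfl/OnOffIdMap_SC <tag>MYVERSION</tag>")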

When working with transforms like Reco_tf, or any job that sets up folders behind the scenes, you might want to 'override' these settings. To do that, you should do:

conddb.blockFolder("MYFOLDER")
conddb.addFolder("whatever","whatever  MYFOLDER",force=True)

The first line removes any occurrence of the specified folder (MYFOLDER in this case) from the IOVDbSvc.Folders property. The second line adds a configuration for this folder (e.g. reading from a new location or with a new version or whatever), but note that it is necessary to add the force=True option at the end... this is because conddb will ignore entries for this folder after it has been blocked, so we have to tell conddb "no, I promise I really know what I'm doing, add this folder to IOVDbSvc".

Expert notes:

You can load information from both the MC and the data database instances. When using addFolder, provide the optional forceData=True to force use of the globalflags.DatabaseInstance value, or forceMC=True to force use of OFLP200.
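
E.g. to force the folder from the opening section to be read from the MC instance, whatever the job's data source (folder name reused here just for illustration):

conddb.addFolder('LAR_OFL',"/LAR/IdentifierOfl/OnOffIdMap_SC",forceMC=True)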

When you run on the grid, a DBRelease sets up the dblookup.xml to include sqlite files from the DBRelease, so that those are looked at first before oracle or frontier (unless DBReplicaSvc is set up to skip them!)

A useful page covering using the IOVDbSvc can be found at https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CoolIOVDbSvcConfigurable

Accessing COOL without an input file

You can put the EventSelector in a state where it will read COOL information by using:

svcMgr.EventSelector.RunNumber=1234 #set to the run you want
svcMgr.EventSelector.InitialTimeStamp=4566 #set to the UTC timestamp you want COOL info for!

Interesting note about CreateSvc

theApp (which is actually the python class AthAppMgr that lives in AthenaCommon/AppMgr.py) seems to lie about what it will CreateSvc. If you do (in interactive athena, ideally with debugging on: athena -l debug -i):

from PerfMonComps.PerfMonFlags import jobproperties
jobproperties.PerfMonFlags.doMonitoring = True
theApp.setup()

You see PerfMonSvc created as well, even though it isn't listed by theApp.CreateSvc before calling theApp.setup()!! It appears afterwards (along with ToolSvc), so some piece of magic somewhere in the python is adding these services, but I don't yet know how.

How to make a 'truth xAOD'

Don't use reco_tf.py!!

Using CMT for Analysis

If you asetup, you automatically get the CMT build system, and probably do not need to use a homebrew compilation suite such as RootCore. To keep your work as simple as possible, I recommend you try working with CMT first and only if it is impossible to do what you want should you try a different build system. This section has advice on how to work with CMT (+athena, if you so wish) to do your analysis.

Getting started

The only prerequisite is that you have cvmfs installed (to check, type /cvmf on the command line and hit tab to see if it autocompletes!). Do the following to get access to the necessary commands for setting up atlas software:
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh

Choosing an atlas release - the version of ROOT

How do you find a version of athena software that has a version of ROOT you like? The version of ROOT that you get with a release is determined by the LCG software release that the athena release is bundled with. Go to http://lcgsoft.cern.ch/ and click on "ROOT"; then you can click on the version you want and see which LCG software versions have that ROOT version. For example, ROOT 5.34.05 ships in LCG 64d only (at the time of writing). Now you just need to find which releases have that LCG version in them. Go to the tag collector (at http://ami.in2p3.fr), click "Browse" at the top, then click the "(browse)" next to LCGCMT, find the LCG release and click on it; then you might be able to figure out from the table on that page which release number it comes with. In my case, I see a "17.7.0" in the table, so I try:

asetup 17.7.0,here

and then confirm I have the version of ROOT I want by looking at the $ROOTSYS environment variable.

A standalone application compiled with cmt

Please see my LearningAthena twiki page

Bootstrapping gaudi for a standalone 'athena' job

This is just a note for my own benefit, but the minimal code needed for bootstrapping gaudi (so that you can instantiate algs yourself) is:

  //bootstrap the gaudi environment (necessary for MessageSvc existence)
  IAppMgrUI* theApp = Gaudi::createApplicationMgr();
  SmartIF<IProperty> propMgr(theApp);
  propMgr->setProperty("JobOptionsType","NONE"); //no joboptions
  theApp->configure();
  //done ... we could initialize at this step, but it's not required

  //now we are ready to create our algorithm
  MyAlg alg("MyAlg",Gaudi::svcLocator());
  //you probably won't be able to make algs like that though (e.g. if they are from a component library), so instead do:
  SmartIF<IAlgManager> algMgr(theApp);
  IAlgorithm* myAlg = 0;
  StatusCode sc = algMgr->createAlgorithm( "MyAlg", "AlgInstanceName", myAlg );

More advice

Look in the AtlasPolicy requirements file; e.g. you can find how to install "data" files:

apply_pattern generic_declare_for_link kind=xmls files='-s=../data/path *.whatever *.another' prefix=XML/MyPackage/path name=path

The name is just there to prevent the patterns overriding one another (each pattern needs a unique name). This does the same as:

apply_pattern declare_xmls extras="-s=../data/path *.root"

The first one is potentially more flexible, because you can control where in the InstallArea the files go (using the prefix option).

Conditional compilations (only compile if a package is present)

This is advanced cmt usage, and a little bit fiddly, so I'll try to explain it carefully. Suppose I have some code in my package that I only want to compile if another package is present in the user's setup. We achieve this by getting CMT to set a preprocessor flag for us if the package is present, and we wrap all code that depends on the presence of this package inside the flag. For example, in my code I would put:

#ifdef HASSPECIALPACKAGE

... do code here that only happens if special package is present

#endif
Once you've done this, in your requirements file you need to add these lines:
macro FindSpecialPackage "`cmt -quiet show versions Usual/Path/SpecialPackage||echo SpecialPackageNotFound`"
apply_tag $(FindSpecialPackage)
macro    use_specialPackage "SpecialPackage SpecialPackage-* Usual/Path" SpecialPackageNotFound ""
use $(use_specialPackage)
macro_append lib_MyPackage_pp_cppflags " -DHASSPECIALPACKAGE=1 " SpecialPackageNotFound ""
The first line sets up a macro that will try to locate the special package; if it fails, it will echo 'SpecialPackageNotFound'. The second line 'executes' the macro and applies the associated tag (if the package is found, a gunky tag is applied; the important thing is that if the package is not found, the 'SpecialPackageNotFound' tag is applied). The third line defines a conditional macro that contains the use statement for the external package, but which is blank if the package was not found. The fourth line then 'uses' that third line. The fifth line adds a preprocessor flag called HASSPECIALPACKAGE to MyPackage (the package I am working on), unless the SpecialPackageNotFound tag was applied, in which case no flag is set.

Using amiCommand

The manual is at: http://ami.in2p3.fr/opencms/opencms/AMI/www/Client/pyAMIUserGuide.pdf

Listing runs in a period

amiCommand GetRunsForDataPeriod -period=A -projectName=data12_8TeV
projectName is e.g. data11_7TeV or data12_8TeV

Counting events

amiCommand SearchQuery -sql="SELECT SUM(dataset.totalEvents) FROM dataset WHERE (dataset.logicalDatasetName LIKE 'data12_8TeV.periodA.physics_Egamma.PhysCont.NTUP_SMWZ.grp14_v01_p1328_p1329')" -project="dataSuper_001" -processingStep="real_data"
You have to use dataSuper_001 if asking for the number of events in a physics container; otherwise use data11_7TeV or data12_8TeV if asking about individual runs. You can put 'wild cards' in the dataset name with a percentage symbol.

Check AMI Status

https://meter.cern.ch/public/_plugin/kibana/#/dashboard/temp/ATLAS::ADC_CS Everything should be green in the above link!

Root Snippets

Using package header files in tselector aclic compile

E.g. in a tselector header I might have:
#include <TChannelInfo.h>

Then in my running code I can do:

gSystem->AddIncludePath(" -I$TestArea/InstallArea/include/AnalysisCamROOT/AnalysisCamROOT ");
c.Process("EEChannelSelector.C+","");

It may also be necessary to tell root where the library is, otherwise library linking might fail (this happened for me when I made a reflex library... note: add the Lib, not the DictLib):

gSystem->AddLinkedLibs("../L1CaloPhase1/x86_64-slc6-gcc47-opt/libL1CaloPhase1Lib.so")
which could also have been accomplished (more smartly) with:
   gSystem->AddLinkedLibs(" -L$TestArea/InstallArea/$CMTCONFIG/lib ");
   gSystem->AddLinkedLibs(" -lL1CaloPhase1Lib"); // ... could add more lines like this for the other installed libs

Making ACLIC compiling work on SLC6

For people who have done an asetup with some release, you will find that ACLIC compiling your scripts might fail with the following sort of error:
/usr/include/bits/stdio.h: In function '__ssize_t getline(char**, size_t*, FILE*)':
/usr/include/bits/stdio.h:118: error: '__getdelim' was not declared in this scope
This is because asetup has set you up with an 'old' version of gcc (4.3), whereas the slc6 glibc headers are only out-of-the-box compatible with newer gcc versions. To fix this you need to add a compiler flag (which newer gcc versions define automatically). Add this to your rootlogon:
gSystem->SetFlagsOpt(" -D__USE_XOPEN2K8 "); //fixes slc6 compiling with old gcc
In cmt packages, this is fixed for you automatically by the AtlasPolicy package!

Other things

Renewing grid certificates

Open CertWizard, select the certificate and click 'renew'. Wait a couple of days for the email to come through saying it is ready. Open CertWizard, select the certificate and click 'export'; export as pkcs12... that gives you the file needed for firefox. Also click 'Install': that puts the userkey.pem and usercert.pem files in $HOME/.globus/. Copy them to other machines as necessary and do chmod 444 usercert.pem and chmod 400 userkey.pem.

Command line tools

GetTfCommand.py --AMI MYTAG : Get configuration used for an ami tag (will give you the exact command to type)

Sending group production grid jobs

e.g.

pathena --official --voms=atlas:/atlas/perf-jets/Role=production

-- WillButtinger - 10-Aug-2011
