%CERTIFY%

Trigger and Menu Online Expert Manual

The Trigger Online Expert Role

The trigger online expert provides support for issues pertaining to high-level trigger operations at P1. In July 2012 the trigger and menu expert roles will be merged; in anticipation of the merger, this twiki page has been updated to cover the tasks of both roles.

Meeting participation:

  • Daily trigger coordination meeting at 9:15 in SCR to report on the status and discuss plans
  • Daily run meeting at 9:30 [Indico] to respond to problems and issues
  • Weekly run coordination meeting at 10:00 [Indico] to report on the current status and plans

Requirements (beyond regular shifter):

  • Carry the on-call phone x161813
  • Access manager roles (check with lfinger): TRG:expert, TRG:remote, TRG:patchinstaller, PRESERIES:user (ask Trigger (TRG) and DAQ (PRESERIES) run coordinators to assign them)
  • "DBA" role in TriggerTool (any existing DBA can assign this)
  • Subscribe to the following eGroups:
  • As a member of atlas-trigger-online-experts you are automatically subscribed to the following eGroups:

Suggested reading

It is strongly suggested that you familiarize yourself with the following material:

Expert information

Bunch groups

From David Berge's talk on 10.02.2010 and updates in ELOG 112359

  • 0 BCRVeto: Allows triggers everywhere but in a small region 3540-3560 (abort gap) when the bunch counter reset is sent (every L1 trigger is ANDed with this BG)
  • 1 Paired: Paired (colliding) bunches for physics triggers
  • 2 Calreq: Calibration requests in the abort gap for Tilecal (laser/charge injection)
  • 3 Empty: empty bunches with no beam activity within 5 BC before and after, for cosmics, random noise and pedestal triggers
  • 4 IsolatedUnpaired: unpaired bunches separated by at least 3 BC from any bunch in the other beam for background monitoring, excluding leakage tails from the other beam
  • 5 NonIsolatedUnpaired: unpaired bunches that don't fall in category 4
  • 6 EmptyAfterPaired: empty bunches just within 5 BC after a filled bunch (no overlap with BGRP3) for long-lived particle searches
  • 7 Special: no definition, can be set manually for special runs like VdM scans, high rate tests

You can display the content of any bunch group in the TriggerDB using this URL: https://atlas-trigconf.cern.ch/bunchgroups?key=157

Note that all bunch group sets with keys >= 393 use the new arrangement inside the abort gap.

Magnetic field configuration

The HLT algorithms are configured at the start of the run with the correct magnet status (solenoid/toroids on/off). This is done by reading the magnet currents from the DCS_GENERAL IS server in the initial partition and choosing the correct field map from COOL. Details on this procedure can be found in CoolMagneticField. Whenever the magnet status changes, a new run has to be started in order for the HLT algorithms to be configured correctly.

The current magnet configuration of the HLT nodes can be seen in OHP (HLTInfrastructure - L2/EFFarmConfig). You can also find the information in the L2 and EF log-files.

Starting a run during magnet ramp

The threshold for the ON/OFF setting of the magnets is determined by the MagFieldAthenaSvc.ZeroTolerance property. This is currently set to 1 (Amp). Hence if you start a run once the magnet current is above 1 amp, the field ON setting will be chosen and no stop/start is required once the nominal currents are reached. Please double-check via OHP (see above) that the correct setting is chosen.

Note: In the past there used to be a toroid map at half nominal current in COOL. In that case the toroid current had to reach at least 75% of its nominal value in order for the full-current map to be chosen by the magnetic field service.

Disable magnet auto-configuration

If the magnet currents are not published in IS (e.g. during testing periods) you will get this error:
ERROR Magnet auto-configuration failed
on all L2/EF nodes. In order to disable the magnet configuration from IS, set [Lvl2,EF]EventLoopMgr.setMagFieldFromIS = False in the TriggerTool. In this case, both magnets are assumed to be ON with nominal currents.

Lumi Estimation

This Simple LHC Luminosity Calculator is a useful tool to estimate the expected luminosity given a set of LHC parameters.
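
The expected luminosity follows the standard relation L = f_rev * n_b * N1 * N2 / (4 * pi * sigma_x * sigma_y). As a quick cross check of the calculator you can evaluate it yourself; the parameter values below are purely illustrative (roughly 2011-like), not a reference:

awk 'BEGIN { frev = 11245;              # LHC revolution frequency [Hz]
             nb   = 1380;               # number of colliding bunch pairs
             N    = 1.5e11;             # protons per bunch (both beams)
             sx   = 30e-4; sy = 30e-4;  # transverse beam sizes at the IP [cm]
             printf "L = %.1e cm^-2 s^-1\n", frev*nb*N*N/(4*3.14159265*sx*sy) }'

This gives about 3e33 cm^-2 s^-1, in the right ballpark for 2011 running.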

Trigger Expert tasks

Check lists

A few things to check before/after major software or configuration changes:

Change of software release

  • DONE Before starting the partition check that the release has been synced successfully:
    ls /sw/atlas/AtlasP1HLT/[RELEASE]/InstallArea/i686-slc5-gcc43-opt/lib
    should be non-empty on any random Point1 machine (see the sketch after this list)
  • DONE Check that the release was built from the correct nightly:
    cat /sw/atlas/AtlasP1HLT/[RELEASE]/InstallArea/i686-slc5-gcc43-opt/ReleaseData
  • DONE Once running check that the correct release is being used: L2 release version EF release version
  • DONE Make an e-log entry stating the run number from which the new release is being used and forward the e-log entry to the atlas-trigger-operation mailing list.
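
To spot-check the sync on several nodes without logging in to each one, a small loop works; this is only a sketch and the node names are examples (any reachable Point1 machines will do):

rel=[RELEASE]
for node in pc-tdq-xpu-62001 pc-tdq-xpu-62002; do
    # an empty library directory (count 0) means the sync failed on that node
    echo -n "$node: "
    ssh $node "ls /sw/atlas/AtlasP1HLT/$rel/InstallArea/i686-slc5-gcc43-opt/lib 2>/dev/null | wc -l"
done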

Change of deadtime settings

The simple/complex deadtime is set via the CtpDeadTimeSettings object in ctp/segments/L1Ctp_segment.data.xml. The corresponding settings via the TriggerTool are deprecated and no longer used.

Software and releases

Read the check list above before proceeding.

How to change the base (offline, AtlasHLT) release

For each new 3-digit base release [RELEASE] (either used by HLT or monitoring), the following two files need to be created in OKS:

daq/segments/HLT_[RELEASE]-Environment.data.xml
daq/sw/HLT_[RELEASE]_SW_Repository.data.xml

To create the release files in OKS, follow these steps:

  • Setup the tdaq environment:
    source /det/tdaq/scripts/setup_TDAQ.sh
    rel=[RELEASE]
  • Make sure you have a local TDAQ_DB_USER_REPOSITORY (i.e. export TDAQ_DB_USER_REPOSITORY=[MYDIR]/oks/tdaq-04-00-01)
  • Execute
    /det/tdaq/scripts/make_hlt_release ${rel}
  • Commit the two files to OKS
    cd $TDAQ_DB_USER_REPOSITORY/daq
    oks-import.sh -d daq sw/HLT_${rel}_SW_Repository.data.xml segments/HLT_${rel}-Environment.data.xml -m "AtlasHLT ${rel}"

To change the HLT base release version, you need to:

  • Change all occurrences of the old base release number in the following files (a sketch follows after the list):
    /atlas/oks/tdaq-04-00-01/daq/segments/HLT-Environment.data.xml
    /atlas/oks/tdaq-04-00-01/daq/sw/HLT_SW_Repository.data.xml
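
A minimal sketch of the replacement, assuming the two files are checked out in your local TDAQ_DB_USER_REPOSITORY (the version numbers are examples):

old=17.1.3   # example: old base release
new=17.1.4   # example: new base release
cd $TDAQ_DB_USER_REPOSITORY/daq
# replace every occurrence of the old version in both files
sed -i "s#${old}#${new}#g" segments/HLT-Environment.data.xml sw/HLT_SW_Repository.data.xml

Check the result (e.g. with grep) before committing the files back to OKS.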

How to change the AtlasP1HLT release

To change the AtlasP1HLT patch release in the ATLAS partition, execute:
/det/tdaq/scripts/change_hlt_release 17.1.4.1

This will create /atlas/oks/tdaq-04-00-01/daq/sw/AtlasP1HLT_17.1.4.1.data.xml in OKS that is referenced in daq/segments/HLT-Common.data.xml.
Note: /det/tdaq/scripts/make_hlt_patch 17.1.4.1 is now automatically executed by change_hlt_release if needed.

ALERT! The following error can be ignored: ERROR 2010-Aug-02 15:12:38 [int main(...) at oks/src/bin/oks_commit_exec.cpp:1618] the readdir('/atlas/oks/tdaq-04-00-01/daq/sw') failed: No such file or directory

ALERT! In case make_hlt_patch fails, you can create the xml file manually (e.g. to switch from .25 to .26):

  • Make sure you have a local TDAQ_DB_USER_REPOSITORY (i.e. export TDAQ_DB_USER_REPOSITORY=[MYDIR]/oks/tdaq-04-00-01)
  • Checkout the previous version: cd $TDAQ_DB_USER_REPOSITORY; oks-checkout.sh daq/sw/AtlasP1HLT_15.6.9.25.data.xml
  • Make a copy and replace the version number: cat daq/sw/AtlasP1HLT_15.6.9.25.data.xml | sed 's#\.25#\.26#g' > daq/sw/AtlasP1HLT_15.6.9.26.data.xml
  • Checkin the new version: cd daq; oks-import.sh -m "Import AtlasP1HLT 15.6.9.26 s/w repository" -d daq sw/AtlasP1HLT_15.6.9.26.data.xml

Emergency patches

Under exceptional circumstances an emergency patch can be applied on top of the AtlasP1HLT release. This should only be done in a true emergency: when one cannot wait for the next AtlasP1HLT cache, disabling of chains doesn't help, and data taking is otherwise seriously jeopardized. Follow these instructions very closely:

  • Prepare an InstallArea with the libraries you want to patch and copy this directory to Point1 into your home directory,
    for example to ~/tmp/MYPATCH/InstallArea, where MYPATCH should give some indication of what the patch is for (see the sketch after this list).
    • First set up the base release with the asetup script
    • Check out the packages you need from cvs and compile them
    • Copy the resulting InstallArea to the InstallArea in your tmp directory
  • Copy the patch to the central file server and sync across all rack file servers:
    cd ~/tmp/; sudo -u swinstaller /daq_area/tools/sync/remote_sync.sh -c -t sw_tmp_hlt -s MYPATCH
    This will copy the directory to /sw/extras/sw_tmp/current/hlt/MYPATCH
  • Set Variable HLT-PATCH-AREA-P1 in daq/segments/HLT-Common.data.xml to /sw/extras/sw_tmp/current/hlt/MYPATCH
    • Run the command oks_data_editor /atlas/oks/tdaq-04-00-01/daq/segments/HLT-Common.data.xm
    • Don't forget to create a user repository (under file in the editor)
  • Check that the HLT-PATCH-AREA-P1 Variable is part of the VariableSet HLTCommonEnvironment
  • Remember to clear HLT-PATCH-AREA-P1 once the patch is no longer needed.
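
A condensed sketch of the patch preparation described in the first bullet; the package name and release number are examples only:

mkdir -p ~/patchwork && cd ~/patchwork
source /sw/atlas/cmtsite/asetup.sh AtlasP1HLT,17.1.4.1,here   # set up the base release
pkgco.py TriggerMenuPython                                    # example package to patch
cd Trigger/TriggerCommon/TriggerMenuPython/cmt
cmt make                                                      # compile the patched package
cd ~/patchwork
mkdir -p ~/tmp/MYPATCH
cp -r InstallArea ~/tmp/MYPATCH/                              # copy the resulting InstallArea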

Note that once you have applied patches on top of AtlasP1HLT, the release version archived to COOL no longer reflects the actual software you are running in the HLT. You have to decide between two cases:

  1. The patch is of a purely technical nature and cannot affect the HLT selection.
    The patch should be added to the next AtlasP1HLT release and the HLT switched to this new release as soon as possible. No special action concerning the release version archival is necessary.
  2. The patch might affect the HLT selection.
    A new AtlasP1HLT release with this patch added on top of the currently used release has to be prepared immediately. This could require some manual TagCollector interventions, since tags already scheduled for the next release might have to be postponed. Since the archived release version cannot be changed after the run, you will have to manually change the release version number to the value of the anticipated new AtlasP1HLT release.

Partition test on the preseries

The following instructions are for the final deployment tests of a new AtlasP1HLT release. Do not use this test partition for any other purpose to avoid conflicts.

Before deploying a new AtlasP1HLT release one should make a quick test run in the preseries to check the proper installation of the release (and conditions data) using a simple localhost partition. Find a currently unused preseries node (e.g. pc-preseries-xpu-001) using the Calendar; there is no need to reserve it, since this test only takes about 10 minutes. Log in through the preseries gateway with ssh -Y preseriesgw (the -Y sets up the display). Note that you can only log in to the preseries from outside P1, e.g. from lxplus; from P1, first do ssh -Y atlasgw-exp to get to lxplus. To run the partition do:

cd /atlas/project/hlt/partitions/p1hlt_test
source setup.sh
./make_part 17.1.4.1
./setTrigKeys part_p1hlt_test.data.xml 991 2908 2898        # SMK L1-PSK HLT-PSK
setup_daq -d part_p1hlt_test.data.xml -p part_p1hlt_test
Change the P1HLT cache and keys (SMK and prescales) to the ones to be used online. If you use the preseries after other users, you may have to remove the file part_p1hlt_test.data.xml before you can run; this is due to file permissions. In fact, it is good practice to do chgrp TDAQ <filename> on all files you create (automatically or manually) or modify, e.g. the part_p1hlt_test.data.xml file.
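
A sketch for fixing the group on everything you own in the partition directory in one go (adjust the path if needed):

find /atlas/project/hlt/partitions/p1hlt_test -user $USER -exec chgrp TDAQ {} +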

The TriggerPanel currently does not work on the preseries.

The partition uses a default data file specified in robhit.py. If you want to change the data file, execute ./make_robhit DATAFILE and re-create the partition. The input file can be retrieved from the castor directory /castor/cern.ch/grid/atlas/tzero/prod1/perm/data10_7TeV/<streamname>/<runnumber> (NB: the data tag may change, e.g. data10_hi for Heavy Ion runs, and remember to do export STAGE_SVCCLASS=atlcal in order to retrieve the file). When you change the data file you also need to change the run number in the file runnumber.txt (check that the fixedRunNumber.py script indeed reads runnumber.txt from the correct directory).
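
An example of the full sequence; the file and run numbers below are placeholders, replace them with a real file from the castor directory:

export STAGE_SVCCLASS=atlcal
f=data10_7TeV.00152166.physics_MinBias.daq.RAW._lb0100._0001.data   # hypothetical file name
rfcp /castor/cern.ch/grid/atlas/tzero/prod1/perm/data10_7TeV/physics_MinBias/0152166/$f .
./make_robhit $f
echo 152166 > runnumber.txt   # must match the run number of the new data file

Then re-create the partition as described above.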

The DATAFILE from castor is usually too big to be copied to the preseries machines; a useful script to obtain a subsample of events (hence reducing the file size) is trigbs_pickEvents.py. From lxplus, after setting up a recent P1HLT or CAFHLT release, you can do:

trigbs_pickEvents.py <input_file> <output_file>  [event_sequence_numbers and/or sequence range (1-100)] 
then copy the reduced file to the preseries machine in the usual way.

Unfortunately, the partial event building does not work correctly in preloaded mode. Hence you can ignore warnings of this type:

WARNING SFI-1 SFI::DataIntegrityIssue Problem with data integrity: Event with L1ID 0x65000080, GID 38 and streamtags
Type:calibration Name L2CostMonitoring requested a partial EB with the following unknown IDs DETID = 0x77

If you lose the connection while running a test on the preseries and the partition is still running, you can kill it by doing

pmg_kill_partition -p part_p1hlt_test

Finding log files

The HLT log files are stored on the HLT nodes in the directory /logs/tdaq-04-00-01/ATLAS. The files are named L2PU*.[out,err] or PT*.[out,err] for stdout and stderr messages respectively. A new log file is created after a terminate of the partition; hence, usually several runs end up in the same log file. To find the log file of a particular run, there are two options:

Search for the run number in the log file:

ssh pc-tdq-xpu-62001
grep -l "start of run = <RUN>" L2PU*.out
where <RUN> is the run number you are looking for (use PT*.out for an EF node).
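
To search several nodes in one go, something like this sketch works (the node names are examples; adapt the list to the racks you are interested in):

RUN=185644   # example run number
for node in pc-tdq-xpu-62001 pc-tdq-xpu-62002 pc-tdq-xpu-62003; do
    ssh $node "grep -l \"start of run = $RUN\" /logs/tdaq-04-00-01/ATLAS/L2PU*.out" 2>/dev/null | sed "s#^#$node: #"
done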

Use the log file name archive:

For each run the names of all log files are archived into a text file. You can access it through the web: https://atlasop.cern.ch/partlogs/pc-tdq-mon-57/ATLAS/logs_run_185644.txt

Log files can be accessed remotely from the web by adding the machine name at the end of this link: https://atlasop.cern.ch/partlogs/, e.g. for PT https://atlasop.cern.ch/partlogs/pc-tdq-xpu-0123, for L2PU https://atlasop.cern.ch/partlogs/pc-tdq-xpu-0756

More info can be found on OnlineLogfiles

Retrieving events from the debug stream

Events going into the debug_ streams (DebugStream summarizes the different debug streams) are copied to ATLCAL:
export STAGE_SVCCLASS=atlcal
rfdir /castor/cern.ch/grid/atlas/DAQ/2012/<RUN>/<STREAM>
You can inspect the events using trigbs_dumpHLTContentInBS.py. Example:
> trigbs_dumpHLTContentInBS.py --l2res --efres /castor/cern.ch/grid/atlas/DAQ/2012/...
======================= Event: 1, LVL1_ID: 838860800 Global_ID: 25720 ==========================
.. No HLTResult for L2
..  ROBFragment TDAQ_EVENT_FILTER, module=59884 (opt=0)
... Version: 3  Lvl1Id:  25720  Decision:  0  PassThrough: 0  Status:  ABORT_EVENT WRONG_HLT_RESULT USERDEF_4  
ConverterStatus:  ABORT_EVENT WRONG_HLT_RESULT USERDEF_4  LVL: 1  Signatures: 0  Bad: 0  Truncated: 0  
App:  PT-1:EF-Segment-24-rack-Y09-04D2:pc-tdq-xpu-0608:4
Amongst others you can find the application and host name on which the event was processed. This is useful in case you want to look at the log file.

Enabling core dumps for debugging

To debug irreproducible crashes it might be necessary to enable core dumps on the HLT nodes:
  1. Ask a DAQ expert to enable core dumps on the xpus. This requires a restart of the pmgserver on all xpus.
  2. Modify these two properties in the L2 and/or EF setup in the TriggerDB:
     CoreDumpSvc.Signals = []
     CoreDumpSvc.FatalHandler = 1

With the above settings you will no longer get the usual printout and stack trace from CoreDumpSvc, but core dump files can be created; by default they are written to /logs on the xpu nodes.
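
Once a core file has appeared you can open it with gdb on the node where it was written. This is only a sketch: the binary and core file paths are examples, and you should point gdb at the actual executable of the crashed application:

ssh pc-tdq-xpu-62001
gdb /path/to/hlt/executable /logs/core.12345   # example paths
# inside gdb: 'bt' prints the stack trace, 'info threads' lists all threads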

TIP In case you have already identified the problematic library, you might want to install a version with debugging symbols embedded. For this, simply check out the relevant package, add macro_append cppdebugflags ' -g' to the requirements file, then compile and install as described in the Emergency patches section.

Change Trigger Tool version or run TT from command line

Some useful documentation about the TriggerTool:

To start the TriggerTool:

At Point 1

The TriggerTool has a graphical front end and a command line interface. Shifters at P1 start the TriggerTool using the button on the desktop panel on the trigger desk. Experts that want to start a version other than the default can start that from a shell commandline:
/det/tdaq/scripts/start_triggertool_interactive.sh
then select one of the options offered on the screen to change the TT version used; only a couple of options are given.

From afs

/afs/cern.ch/user/a/attrgcnf/TriggerTool/run_TriggerTool.sh This starts the default TriggerTool.
/afs/cern.ch/user/a/attrgcnf/TriggerTool/run_TriggerTool_interactive.sh This starts an interactive script which allows you to select a specific TriggerTool version. This is useful in case you hit a problem: with this script you can start an earlier version or the dev version (where your bug may already be fixed).

From the web

You can start the TriggerTool directly from the TriggerTool website: http://triggertool.web.cern.ch/triggertool/, or using this link

Troubleshooting

Rerun L1 simulation

If the HLT farm crashes and the SMK was evolved from the default in a nightly, search for "CTPSimulation" in the log file. If present, regenerate a new SMK.

If you diff two menus, one correct and one wrong, you will see in L2_Setup / EF_Setup: HLT_COMPONENT_TrigConfMetaData => HLT_PARAMETER PreCommand => caf = True; rerunLVL1 = True. These parameters need to be False.

A rerun of the L1 simulation, carried over from the nightly default, seems to be the explanation for the HLT farm crashes.

Noisy L1_EMPTY items

The CTP continuously (every 10 seconds) monitors the item rates of a few configurable items. The item configuration file at Point 1 is /det/ctp/MaxItemRates.txt and currently contains:
L1_TAU8_EMPTY 200. 2.
L1_J10_EMPTY 50. 2.
L1_J30_EMPTY 50. 2.
L1_EM3_EMPTY 100. 2.
L1_EM5_EMPTY 100. 2.
L1_TAU8_FIRSTEMPTY 50. 2.
L1_J10_FIRSTEMPTY 50. 2.
L1_J30_FIRSTEMPTY 50. 2.
L1_EM3_FIRSTEMPTY 50. 1.
L1_BCM_AC_CA_BGRP0 100. 3.
L1_EM3_UNPAIRED_ISO 10. 0.2
L1_TAU8_UNPAIRED_ISO 10. 0.2
L1_EM3_UNPAIRED_NONISO 10. 0.2
L1_TAU8_UNPAIRED_NONISO 10. 0.2
where the syntax should be
<item name> (any number of spaces or tabs in between) <maximum acceptable rate> (any number of spaces) <target rate>
If the max rate is exceeded, warning messages are printed until the rate recovers or until the shifter has changed the prescales. The warning includes a suggested prescale factor, which is the factor needed to reach the configured target rate (e.g. if L1_EM3_EMPTY fires at 400 Hz against its 2 Hz target, the suggested factor is 200). After editing this file, YOU SHOULD make sure that the file group is set to TDAQ, otherwise other experts will have problems overwriting this file. You can give the correct permissions by executing
chgrp  TDAQ  /det/ctp/MaxItemRates.txt

In this way the file is writable for members of the TDAQ group, which should be all experts; also check that the file is group writable. It is read at every LB, so you can make changes during a run and they will be picked up immediately. If you misspell an item name or don't follow the syntax, the mistake is silently ignored. This behavior may be changed in the future.
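
Since syntax mistakes are silently ignored, a quick check after editing is worthwhile. A minimal sketch: every non-empty line should have exactly three fields and start with an L1 item name:

awk 'NF>0 && (NF!=3 || $1 !~ /^L1_/) { print "suspicious line " NR ": " $0 }' /det/ctp/MaxItemRates.txt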

Database issues

For database issues (e.g. COOL or TriggerDB) contact the DAQ on-call or follow the procedure detailed in Daqoncall.

Conditions data

Updates to conditions data only affect future (usually the next) runs. Therefore, if all HLT nodes fail to start after a stop/start transition, a change to conditions data is the most likely cause. You can consult the Atlas conditions data update manager to get a list of recent updates to the online database (ATONR). Once you have established that there was a change, one of the following sections might be useful to get more details or to work around the problem.

Show recent changes to COOL folders

Use /det/tdaq/scripts/coolChanges.py to show recent changes to COOL folders:
> source /sw/atlas/cmtsite/asetup.sh 17.1.4
> /det/tdaq/scripts/coolChanges.py
Data source lookup using /sw/atlas/AtlasCore/17.1.4/InstallArea/XML/AtlasAuthentication/dblookup.xml file
=== Run: 187816 (within 100 runs)
=== Time: 2011-08-22 08:23:58 (within 1 day, 0:00:00)
=== Tag: COMCOND-HLTP-003-04
===
=== Recent changes in COOLONL_CALO/COMP200
=== Recent changes in COOLONL_CSC/COMP200
=== Recent changes in COOLONL_GLOBAL/COMP200
=== Recent changes in COOLONL_INDET/COMP200
[187815,9] - [inf,inf] /Indet/Onl/Beampos (0) 2011-08-22_08:29:16.091862000 GMT
=== Recent changes in COOLONL_LAR/COMP200
[...]

In the above example one can see that the beamspot folder was changed recently; this is due to the automatic beamspot update during each run. Use --help for command line options and additional information about the script.

Emergency procedures for broken conditions

In some rare cases a conditions update can break the HLT, preventing it from configuring or causing constant crashes. Whether this is the case should normally be deducible from the logfile, for instance from complaints that all nodes fail to load a pool file. One should immediately complain on the atlas-trigger-conditions or atlas-conditions mailing lists, but to get a run going immediately one can tell athena to use older conditions for the folder causing the problem. First use the logfile and the above instructions to determine which folder or folders are causing problems. Then edit the current trigger configuration in the TriggerTool: in BOTH the L2 and EF setup, find the IOVDbSvc and edit the "Folders" property. For each faulty folder, append a <forceRunNumber> zzz </forceRunNumber> tag after the folder name, where zzz is a run number slightly lower than the start of the new condition found above. Example:

 '<db> COOLONL_TRT/COMP200</db> /TRT/Onl/Calib/RT <forceRunNumber> 142406 </forceRunNumber>'
This setup should of course only be used until the database problem has been solved.

Modifying histogram-monitoring settings using the TriggerTool

If you want to modify the settings of the monitoring histograms associated with trigger algorithms due to performance issues:
  • Start the TriggerTool in the expert role, choose the SMK to be modified and double-click on it to load it. The overview panel window for the chosen SMK should appear.
  • Click on the L2 setup; this will bring up a window. Choose the component you would like to modify by clicking on it (you can use the filter field at the top).
  • Once the component is chosen you will see its parameters on the right hand side.
  • To edit the histogram parameters, right-click on histograms and select View list parameter; once you have finished, click OK in the bottom right corner.
  • Click Done and finally SAVE to create a new SMK.

Disabling the automatic beam-spot update

If there is a problem with the automatic beam-spot update (it is new and some infancy problems might still be discovered), ask RunControl to do the following:

  1. In the IGUI go to the tab "Segments & Resources"
  2. Traverse to the segment TDAQ->BeamSpot-Segment and disable it

Recovering of TRP lost providers

If there is a sudden drop of rates at L2 or EF (or both), the first thing to do is to correlate the drop in rates with a change in the number of TRP providers.

  1. In TRP there are two plots under the tab titled "more plots"; these plots show the number of L2/EF providers
  2. If there is an exact correlation between the rate drop and the change in (lost) providers, then the hltadapter_app_[L2,EF] should be restarted by run control
  3. Under RunControl, expand the TDAQ segment, expand the TDAQMonitoring segment, expand TRP_controller, expand TRP_Controller_HLT, and restart the application "hltadapter_app_L2" or "hltadapter_app_EF"
  4. Right after restarting the hltadapter_app_[L2,EF], you should see the trigger rates jump back up, correlated with a change in the number of providers.

If a set of triggers showed a sudden rate drop that is not correlated with a change in the number of TRP providers, further investigation is needed; most probably the problem is somewhere else in the system and not in TRP.

Disable HLT ERROR message forwarding to MRS

In case MRS gets flooded by messages from the HLT ("ers:HLTMessage"), the forwarding can be disabled by setting
MessageSvc.useErsError = []
in the L2/EF setup of the SMK. In addition, this can be done during the run via:
rc_sendcommand -p ATLAS -n L2-Segment-1:pc-tdq-onl-80 USERBROADCAST HLT_SetProperty MessageSvc useErsError "[]"
rc_sendcommand -p ATLAS -n EBEF-Segment-1:pc-tdq-onl-81 USERBROADCAST HLT_SetProperty MessageSvc useErsError "[]"

The Expert Tools

Useful links and tools

Point1 machines

SSH traffic to and from P1 must go through a gateway
Gateway Point1: atlasgw
Gateway from inside Point1: atlasgw-exp
Public machines in Point1: pc-atlas-pub-[01-13], vm-atlas-pub (pool of virtual machines on UPS)
Control room machines: See ACR webcam and desk layout
Requesting remote access to Point 1 using Online Access Request Form

Copy files from/to Point1 to/from lxplus

First connect to lxplus via the gateway from Point1, then scp from/to the Point1 machine to/from lxplus:
ssh atlasgw-exp and follow the instructions to connect to lxplus;
once you are on lxplus:
  • To copy from P1 to lxplus: scp username@atlasgw.cern.ch:/atlas-home/1/username/filename /target_dir/
  • To copy from lxplus to P1: scp /my_dir/filename username@atlasgw.cern.ch:/atlas-home/1/username/filename

Copy files from online machines

In case you want to copy a file from an online machine and you do not have permissions to login to this machine, you can use:

sudo -u farmtoolsuser /sw/tdaq/scripts/remote-copy.sh -h
Example:
sudo -u farmtoolsuser /sw/tdaq/scripts/remote-copy.sh -n pc-tdq-mon-55 -l -f /clients/scratch/histograms/tdaq-04-00-01/OPMON
All copy operations are being logged. Do not abuse this command.

Interactive release setup

TDAQ: source /det/tdaq/scripts/setup_TDAQ.sh
HLT: source /sw/atlas/cmtsite/asetup.sh AtlasP1HLT,17.1.4.1,setup (replace 17.1.4.1 obviously)

OKS Editing

After connecting to P1 machines and setting up TDAQ

oks_data_editor /atlas/oks/tdaq-04-00-01/combined/partitions/ATLAS.data.xml

You can check the OKS CVS repository to see if your change in OKS has been correctly committed, but this does not mean that the change is being used online, since that requires a 'Commit and Reload' from the RC.

L1 CTP Monitoring

Follow the link to the L1 CTP COOL Monitoring, click on the run number of interest and you will find various plots; click on a plot to enlarge it. The Busy Monitoring plot, for example, is very useful; beware that the time on the x-axis is UTC.

Remote monitoring

Remote monitoring via web pages:

The most convenient way of remote monitoring is the use of the remote monitoring machines pc-atlas-rmon.cern.ch. A mirror of the ATLAS partition is running and the most important IS servers are replicated as well. There are two options for using this set of machines:

  1. via NX: details are given on RemoteMonitoring. Note that you don't need to set up the ssh tunnel while at CERN.
  2. via terminal: simply run source /data/ATLAS/scripts/setup.sh and you will have a Point1-like environment with all the usual tools (daqPanel, trp, etc.). You might have to manually copy some of the configuration files from Point1 (in /atlas/moncfg).

Trigger rate monitoring

There are several ways to monitor the trigger rates, either at P1 or remotely (access links from inside P1 are given in parentheses).

  • Current run:
    • for an overview use the web display WTRP (P1)
    • for more details use TRP-like web display through WebIS Trigger Rates (P1)
    • for more details run TRP from pc-atlas-rmon (containing IS mirror): ssh pc-atlas-rmon; source /data/ATLAS/scripts/setup.sh; trp -c /atlas/moncfg/tdaq-04-00-01/trigger/trp/trp_gui_conf.xml
  • Last 24 hours including DAQ info with TRP display, but only at 1 minute intervals:
    • on pcatr run /afs/cern.ch/user/a/aagaard/scratch0/RateArchive/rates.sh
  • History of past runs:
    • at P1, running ~tbold/draw_rates.py will draw the L2/EF rates from the last available run; the -h option gives extensive help.
    • on lxplus the same can be executed for a specific run: ~tbold/public/draw_rates.py -p http -a 145981. It gives a python prompt to draw other rates, zoom, and make other types of display.
    • on lxplus run ~aagaard/public/rateHistory/rateHistory.sh . Rates are shown with TRP display
    • on the web use ATLAS RunQuery, showing, e.g., the main low threshold L1 items or items matching a particular pattern (L1_MU* in the example)
  • Interactive tool (with unique rates): A new interactive tool from Brian (extremely useful for knowing the unique rate of triggers): cd /afs/cern.ch/user/a/aagaard/public_atlas/RateTools; asetup 17.1.4.1; ./RateMaster.py EF.rate (or change to EF_Sep09.rate for September 3e33 rates) or ./RateMaster.py L1.rate

In addition see the Central Trigger Cool Monitoring page for the L1 rates as archived to COOL.

CrashViewer

Crashes of HLT applications can be monitored using the "CrashViewer" from the DAQPanel or by running /det/tdaq/scripts/crashViewer.py at Point1 from a terminal (needs tdaq setup). As crashes occur, the tool will count and list the crashes. The logfiles for a particular crash can be viewed simply by clicking on the crash. Note that logfiles for non-trigger crashes might not be viewable due to ssh permissions.

L1 random rate calculator

A little calculator for L1 random rates based on the bunch pattern: connect to a P1 machine, set up the TDAQ environment, then run /det/tdaq/scripts/l1_rate_calc_new.py. This tool gives you the random rate before prescale, either using the bunch group currently in use or one you enter. By setting the target TAP it gives you the prescale value to be used. The BPTX rates are currently wrong.

Dump properties of SuperMasterKey

trigconf_property.py can be used to dump individual properties of an SMK. Use --help for more information. SQL wildcards can be used to select more than one component or property. The special TrigConfMetaData "component" contains some useful information about the command line options that were used to create the SMK:
> trigconf_property.py --l2 TrigConfMetaData % 1206
L2 1206 TrigConfMetaData Modifiers ['DisableMdtT0Fit', 'EFmuCTPicheck', 'ForceMuonDataType', 'L2muCTPicheck', 'RPCCosmicData', 'UseBackExtrapolatorDataForMuComb', 'UseBackExtrapolatorDataForMuIso', 'UseBeamSpotFlagForBjet', 'UseLUTFromDataForMufast', 'UseParamFromDataForBjet', 'UseRPCTimeDelayFromDataForMufast', 'allowCOOLUpdates', 'detailedErrorStreams', 'enable7BitL1TTStreaming', 'enableCoherentPS', 'enableCostMonitoring', 'enableHotIDMasking', 'noLArCalibFolders', 'openThresholdRPCCabling', 'optimizeChainOrder', 'softTRTsettings', 'useEFMuonAlign', 'useL2MuonAlign', 'useNewRPCCabling', 'useOracle', 'usePileupNoise']
L2 1206 TrigConfMetaData PreCommand doDBConfig=True;trigBase="Physics_pp_v3";testPhysicsV3=True
L2 1206 TrigConfMetaData JobOptions /afs/cern.ch/atlas/software/releases/17.1.4/AtlasP1HLT/17.1.4.1/InstallArea/jobOptions/TriggerRelease/runHLT_standalone.py

Dump application environment

You can dump the environment of any application using this command:
export TDAQ_DB_DATA=/atlas/oks/tdaq-04-00-01/combined/partitions/ATLAS.data.xml
dal_dump_apps -p ATLAS -d oksconfig:$TDAQ_DB_DATA -n L2PU-8191 -s

Trigger Menu Tasks

Uploading a new menu to P1

If, for example, a new cache got deployed at P1 and you're asked to prepare a new SMK, follow the steps below. First the easiest case is described, in which no changes have to be made to TriggerMenuPython in that release version. If you do have to make changes to TriggerMenuPython (TMP), e.g. to edit a chain (see savannah bug 81600 for an example), there are different ways of doing that, described in the Making a new Tag of TMP section below.

Making a new SMK - easiest case

Set up the AtlasP1HLT version which is currently used at P1 / for which you have to make the new SMK. Example of how to set things up:
mkdir AtlasP1HLT_XXXXXX
cd AtlasP1HLT_XXXXXX
asetup AtlasP1HLT,XXXXXX,here
with XXXXXX being the release version.

  • To create the new menu, create a run directory:
    mkdir run
    cd run
  • Link an input data file:
    ln -s  /pcatr-srv1/data/files/fwinkl/data11_7TeV.00177531.physics_HLTPassthrough.daq.RAW.ATN100._0001.data input.data
  • Check that the following is set:
    export TRIGGER_EXP_CORAL_PATH=/afs/cern.ch/user/a/attrgcnf/.expertauth
  • To prepare the online trigger configuration you have to run:
    prepareOnlineTriggerConfig.py

  • Running this script opens a GUI (see picture below). You will have to edit:
    • Choose the menu you want to make the keys for from the drop-down menu on the left
    • JobName (corresponding to the menu you chose in the drop-down menu)
    • Make sure the box "Online Monitoring" is ticked
    • No DB password is needed; the default DB is TRIGGERDBREPR.
      Then click on run.
prepareOnlineTriggerConfigGUI.png
GUI to prepare the online trigger configuration

  • This will create a "selfextract" file which you can now copy to your P1 account:
    rsync ./selfextract-Physics_pp_v3-2011-04-29.bsx USERNAME@atlasgw:
    NOTE: prepareOnlineTriggerConfig.py will give you a .bsx file with a time stamp in a format like selfextract-Physics_pp_v3-YEAR-MM-DD-HH:MM:SS.bsx. rsync can't deal with the colons in this format, so you'll have to include the leading ./

  • Log in to your P1 account:
    ssh -Y USERNAME@atlasgw
    • Request access
    • Hostname: pc-atlas-pub-01
  • In principle, running the file you copied over should do everything for you, but that doesn't work at the moment. Instead:
    • run ./selfextract-Physics_pp_v3-2011-04-29.bsx
    • You will get asked:
      Would you like to upload the configuration into the database ? [y|N] Type: y
    • This will automatically upload the SMK to the TriggerTool
  • If this does not work, double check with TriggerTool experts that your username is valid for this operation
  • To upload the SMK using the GUI, do an ls -althr in your directory and you will see a directory called Physics_pp_v2-2011-04-29 (in my case). It contains the following files: efsetuppy.txt, efsetup.xml, hltL2Menu.xml, l2setuppy.txt, l2setup.xml, efsetup.txt, hltEFMenu.xml, hltMenu.xml, l2setup.txt, lvl1Menu.xml.
    These are needed to upload the new configuration to the TriggerTool by hand.

  • Start the TriggerTool: run /det/tdaq/scripts/start_trigger_tool_interactive, choose the latest version, and log in as expert with your username and the password given to you for the TriggerTool.
    • Inside TriggerTool, click Search
    • Look up the L1 Master Table ID (in case you only made changes to HLT in TriggerMenuPython) (see picture below).
TriggerTool L1ID.png
TriggerTool, tree of SMK 1094
    • To upload the new configuration and create a new SMK, go to Load/Save in the menu bar, then select Read XML. You will have to specify the xml files in the pop-up window using the files from extracted directory (see picture below).
UploadXML.png
TriggerTool: Upload interface

    • Once the upload is completed, a new SMK is created:
SaveComplete.png
TriggerTool: New SMK created

  • Finally you can double check your changes by diffing the previous and new SMK:
    Menu work => Diff Menus => Select them in the pop-up window and click diff.

  • As a next step, the prescales corresponding to the menu have to be made and uploaded.

PROBLEMS WHICH MIGHT OCCUR:

  • In case you had to interrupt the upload to the trigger database while running prepareOnlineTriggerConfig.py (e.g. because you forgot to symlink the input.data file), the trigger tool might be locked. You will have to go to your /tmp/USERNAME/ directory and delete the leftover lock file.

Making a new Tag of TMP

To make a new branch of TMP with the required changes, set up a new working area (different from the one you use for "normal" menu work). This is necessary so you can set up the release used at P1 and check out the correct TriggerMenuPython and TriggerMenuXML versions. The currently used release can be found either in the TriggerTool or on the Trigger white board.

  • Set up your release environment you want to make the SMK for

  • If you have to make changes to the TriggerMenuPython in the current release (i.e. bug fixes):
    • Check out the corresponding TMP version: pkgco.py TriggerMenuPython
    • Now you can edit the necessary files (just like doing the "normal" menu work) and then compile TriggerMenuPython and compile the menu which was affected by the changes.
    • Make a tag of TriggerMenuPython once it has compiled
      • svn cp . $SVNROOT/Trigger/TriggerCommon/TriggerMenuPython/tags/TriggerMenuPython-00-XX-XX -m  "COMMENT ABOUT THE CHANGES"

  • The next step is to upload the new menu; follow the instructions above (this involves running prepareOnlineTriggerConfig.py).

CAFHLT reprocessing

To provide SMK and L1/HLT prescale keys for the CAFHLT reprocessing do the following:
  • set up the release for which the keys are needed (this is usually written in the savannah):
    mkdir AtlasP1HLT_XXXXXX
    cd AtlasP1HLT_XXXXXX
    asetup AtlasP1HLT,XXXXXX,here
    with XXXXXX being the release version.

  • make sure this is set:
    export TRIGGER_EXP_CORAL_PATH=/afs/cern.ch/user/a/attrgcnf/.expertauth
  • make a run directory and link the input.data file, then run prepareOnlineTriggerConfig.py:
    ln -s /pcatr-srv1/data/files/fwinkl/data11_7TeV.00177531.physics_HLTPassthrough.daq.RAW.ATN100._0001.data input.data
    prepareOnlineTriggerConfig.py
  • When the GUI opens, you will have to define the Job Name, and select the correct menu (there is a button in the Pre-command line which gives you the various menu options when you click on it). Finally you will have to select the Configuration for CAF and untick the Run online monitoring box.
  • No password to the DB is needed (see here for instructions)
  • IMPORTANT: There might be several options in the savannah which have to be set specifically:
    • a rerun of L1 is requested: by clicking on the "rerun LVL1" box, the command ";rerunLVL1=True" is added to the Pre-command line in the GUI. Don't add a semicolon at the end of the line; that will result in a syntax error.
    • some flags have to be set in the SMK: open the trigger tool and access TRIGGERDBREPR. Double-click on the SMK you just uploaded, go to the specific tab (e.g. the L2 tab for this savannah) and search for the required algorithms or similar to be set. When you're done, click on save; it will automatically save a new SMK.
    • specific prescales are required: Upload the SMK as described above, then you'll have to run the rulebook to generate prescales which then need to be uploaded to that SMK.
  • Then click on run. Once it's finished, you will find the SMK (= Configuration key) and the Lvl1 and HLT prescale keys in the last three lines of the GUI (and also in the MenusKeys.txt file in the run directory).
  • Report the menu keys from MenusKeys.txt and the DB name (TRIGGERDBREPR) in the savannah.

TrigDev Database

You can look up the keys you've uploaded for the CAFHLT reprocessing.

Set this variable to be: export TRIGGER_EXP_CORAL_PATH=/afs/cern.ch/user/a/attrgcnf/.expertauth Then start TriggerTool and select:
My Database Connections -> Oracle -> TRIGGERDBDEV2 (see picture below).

TriggerDevDB.png
TriggerTool: selecting the TRIGGERDBDEV2 database connection

Changing the XML files

Sign in to the TriggerTool at P1. Search all SMKs, select the most recent one and save the xml files to your home directory (Save/Load => ...). Select the relevant xml file and edit it.
If you're working on e.g. pcatr, you can copy the xml files to your home directory there via rsync xmlfiles USERNAME@atlasgw:. Then upload a new configuration: select Read/Write => Read and edit the L1 and HLT files correspondingly. Don't forget to change the name (e.g. Physics_pp_v2). The SMK is then created automatically.
To check the uploaded changes, you can compare the new SMK with the old one using the diff option.

But currently, the TriggerTool XML dump is broken; you can check the current status in the savannah bug.

Changing random clock settings

If you need to change the random clock settings, double click on a SMK in the TriggerTool. A window will open and you'll have to select the L1 Summary tab:
L1Summary.png
L1 Summary tab in which the random clock etc can be changed
Don't forget to save your changes; a new SMK will be created that way!

Preparing prescale keys

Prescales are typically prepared using the rates rulebook (full documentation here: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/RatesRulebook)

Making new prescales

After creating a new SMK with the new menu configuration, you will also have to make a new set of prescales (see also the MenuPrescales page).

  • Check out the head version of Trigger/TriggerCommon/TrigMenuRulebook
    • asetup AtlasP1HLT,16.1.3.11,here   # this should match the release to be used at P1
      mkdir run
      cmt co Trigger/TriggerCommon/TrigMenuRulebook
      cd  Trigger/TriggerCommon/TrigMenuRulebook/cmt
      cmt make
      cd ../../../../run
  • By default the menu xml files are now picked up from the release. If the menu has been changed at P1 after the release build, you will need to compile the corresponding TriggerMenuPython and TriggerMenuXML tags to allow the rulebook to find the updated xml files. Alternatively you can copy the matching xml files (normally just the HLT menu) to your local run directory and update runRuleBook.py (see next point)

  • Before running the rule book you'll have to make a few checks/changes to the code to make sure you're running with the correct settings (read the comments at the beginning of runRuleBook.py!!!):
    • In runRuleBook.py:
      • make sure variables hlt_xml and l1_xml are using the correct xml files
      • make sure you have the correct settings for target_empty, target_unp_iso, target_unp_noniso (see picture below).
      • make sure you run the correct rules; search for the variable rulebook
      • set the flags doCosmicStandby and doUseOnline to True if you want cosmic and standby prescales and if you want to upload the prescales (tarred up) directly to atlasgw.
      • see here for details on the standby keys

  • Now you can run the rule book:
    • runRuleBook.py
      You can also run it without arguments: the lumi steps are then set automatically and the corresponding prescales are generated.
    • runRuleBook.py 100,125,150,175,200,250,320,400,500,650,800,1000 .
      If you give a directory (here the ".") as an additional argument, the rates are also generated for you to cross check.
    • runRuleBook.py 1000,800,650,500,400,320,250,200,175,150,125,100
      If you run it with only the lumi points, only the prescales are generated. The prescale output is written to a folder with a name like prescales_1304158791.
    • In some cases the absolute path of runRuleBook.py might have to be specified when you execute it.

  • Tar the directory up, copy it to P1, log in to P1 and untar it there. This is done automatically for you if you have the doUseOnline flag set to True.
    tar -zcvf prescales_1304158791.tgz prescales_1304158791/
    rsync prescales_1304158791.tgz USERNAME@atlasgw:
    ssh -Y USERNAME@atlasgw
    tar zxvf prescales_1304158791.tgz

  • Setting the option doUseOnline to True will run the rulebook and then tar and rsync the tarball to point-1

Uploading prescales

Once you've generated the prescales, you'll have to upload them to the corresponding SMK. There are two ways to do that: either manually, meaning one by one, or all together using the UntarAndUpload.sh script. Both are described below.

  • Manual upload (see picture below for details!):
    • Start TriggerTool
    • in trigger tool, select the relevant smk
    • then go to: menu work => prescales
    • L1 (HLT) tab: paste the path where the L1 (HLT) xmls are and then upload them one by one.
      When uploading the xml manually you might see an error about the master/menu ID; in this case just modify the IDs in the xml to the ones in the DB. DON'T FORGET TO SAVE THEM!
    • Compare xmls (e.g. the previous one with the new one)
    • If you want to use the same prescales as for a different SMK, you will have to assign them to the new SMK:
      menu work => assign prescale sets => then find the previous one
PrescaleUpload.001.jpg
TT Prescale editor

  • Automatic upload:
    • Copy the script UntarAndUpload.sh to point-1
    • Execute it with two inputs: 1) the above tar file, 2) the SMK you want to upload against
    • You will be prompted for your TT password
    • IMPORTANT: The table printed at the end, summarizing the L1 and HLT prescale keys, is not quite correct: the L1 prescales aren't correctly assigned for some luminosity points, although the HLT keys were all correct when comparing the table to the actual keys in the TriggerTool. So it's important to double check this table before copying and pasting it to the trigger white board.
    • NOTE: For HI running the table is not printed correctly at all; you'll have to make the table by hand by looking up the keys in the trigger tool.

As a last step, if applicable, update the trigger whiteboard with the new keys and tell the trigger on-call about them.

Special Runs

This section is intended to give some more insight into special runs, e.g. High Rate Tests, Enhanced Bias (high multiplicity), ALFA runs etc.

High Rate Tests

In order to get a semi-realistic trigger pattern, no L1 prescales should be used to adjust the trigger rate; rather, use different numbers of filled bunches. In addition, a minimal L2 latency is required so that the DFM clears arrive at the ROSes only after the event is available on the ROS. Enable the "TimeBurner" algorithm at L2 (unseeded) and set its "TimeDelay" property to something like 15 ms. In summary, make the following changes:

  • SMK: Set the RD0 clock rate to 628.8 kHz in the SMK (double click on the SMK, go to the L1 Summary tab)
  • SMK: Set DummyTimeBurner.TimeDelay to 15 (ms) [*]
  • L1: Enable L1_RD0_FILLED (PS = 1) (no other items running)
  • HLT: L2_rd0_filled_NoAlg (PS=7500), L2_HLTTimeBurner (PS=1), EF_rd0_filled_NoAlg (PS=1)

[*] To change the TimeDelay without creating a new SMK you can do it via the command line:
rc_sendcommand -p ATLAS -n L2-Segment-1:pc-tdq-onl-80 USERBROADCAST HLT_SetProperty DummyTimeBurner TimeDelay 15

Useful BunchGroups for high-rate tests

To be used with a RD0 clock frequency of 628.8 kHz (see above)

Bunch groups:

rate [kHz]   BG key   # bunches/train   # trains   # bunches
    10         278           14             4          56
    20         277           28             4         118
    30         276           43             4         172
    40         275           57             4         228
    50         265           71             4         284
    60         266           85             4         340
    65         267           92             4         368
    70         248           99             4         396
    75         238          106             4         424
    80         264          113             4         452
    85         268          120             4         480
    90         269          143             4         508
   100         270          143             4         572
   120         262          171             4         684
   120         263           68            10         680
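
The rates above follow the simple rule rate = (RD0 clock) x (number of bunches) / 3564, i.e. the clock frequency scaled by the fraction of filled BCIDs in the orbit. A quick check for the first row:

awk 'BEGIN { printf "%.1f kHz\n", 628.8 * 56 / 3564 }'   # -> 9.9 kHz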

To generate bunch groups you can use the script ~stelzer/bin/generateBG.py. You give it the bunch spacing, the number of trains and the number of bunches per train (edit the script). Then create a new bunch group set using the TriggerTool (Level 1 -> Bunch Groups).

VdM scan

  • only 1 or 2 (max 4) filled bunches are used for special VDM chain
    • for Mar 2011 - use UNPAIRED_ISO group for this purpose
    • for May 15 2011 - use L1_BGRP7 / L2_VdM_BGRP7 . Make sure that L2_VdM_x_UNPAIRED_ISO items are disabled!

  • Only one of the VdM chains is running, and it runs at the maximum possible rate (preferably 8 kHz or more)
    • ask VdM experts or Data Preparation on which chain to use
  • L2_VdM_Monitoring should be enabled
  • REMIND RC/DP that MBTS trigger signals should be enabled in Tile!!
  • 10 Hz of standard MinBias stream (as usual)
    • Example: mbMbts1_1_eff, mbLucid_eff, mbZdc_eff, mb_BCM_Wide_eff (1 Hz each), rd0_filled_NoAlg (5 Hz)
  • EF_background stream has to be disabled because of modified UNPAIRED_ISO group
    • if BGRP7 VdM chain is used, then EF_Background can be enabled - ask Data Preparation people for preferences
  • no other HLT chains (no Physics_pp_v2)
  • no CosmicCalo, no CosmicMuon, no Standby, no other chains
  • Link to the VdM trigger menu used in the 2.76TeV run on Mar 27th, 2011

  • VdM scan in HeavyIon run (29.11.2011)
    • For the VdM scan enable VdM_ZDC_A_C_BGRP7 (ID data only) and L1ItemStreamer_L1ZDC_A_C_BGRP7 (full event, ~10 Hz)
      • Event size of VdM_ZDC_A_C_BGRP7 is 250-300 kB/event. BGRP7 should be configured to contain several complete trains in order to maximize the statistics from the same bunches rather than applying prescales.
    • For the distance-scan enable VdM_ZDC_A_C_VTE50 (ID data only)
      • Event size of VdM_ZDC_A_C_VTE50 is 50 kB/event. BGRP7 was not used since we could use all colliding bunches for the 2011 HI runs (~3 kHz from L1_ZDC_A_C_VTE50)
    • Link to the discussion in savannah

  • muon calibration run when beam is off
    • For muon calibration, without going to the EF.
    • Standby keys: similar to pp_v3 without the removed L1 items plus MU4

  • physics keys during toroid ramping up
    • Physics keys but remove all L1_MU items

2.76 TeV run

  • Run with only L1 streamers (L1Calo, L1Muon) + MinBias + random triggers + zero bias
  • standard MinBias chains
  • no HLT, no Physics_pp_v2 chains
    • Exceptions: the mbSpTrk chain and mbRd0_eff
  • L1_J* and L1_MU* are unprescaled
  • ExpressStream - 3 Hz of each L1 streamer
  • fill up bandwidth with L1_MBTS_1 (mbMbts_1_eff)
  • normal CosmicCalo, beamspot, Background streams, mbSpBg_unpaired_iso. Allow IdDetMon_FS with 0.4Hz
  • Link to the trigger menu used in the 2.76TeV run in March 2011

B=0 run

The following chains need to be double checked in the v4 menu

  • main chain mu13_muCombTag_NoEF
    • main client for ROS access at L2. Gets highest priority
    • 10 Hz of mu13_muCombTag in PT (L1_MU20)
  • LAr does a check of the calorimeter. Run L1_EM14 unprescaled, fill the rest with L1_EM5. Disable L1_J* and L1_TAU*
  • keep L1_EM3_EMPTY
  • if bandwidth / L2 ROS resources are left, run EF_mbSpTrkVtxMh
  • do we need beamspot?
  • Link to the trigger menu for 7TeV data in March 2011 from run 178019

Scrubbing run

  • all BCIDs with at least one beam signal from the BPTX to fill the paired, unpaired_iso, unpaired_noniso groups,
  • background stream to be populated with 100 Hz consisting of:
    • J10_UNPAIRED, EM3_UNPAIRED, TAU8_UNPAIRED (unprescaled or if the rate is too high prescaled to give ~80-90Hz).
    • 10 to 20 Hz from BCM, LUCID (OI: unpaired iso and noniso?)
  • what to do about EMPTY triggers?
  • normal HLT should be switched off. CosmicCalo should be on with usual rates
  • people to contact: Mika Huhtinen (primary), Luca Fiorini, DataPreparation. They might want to ask for prescale changes
  • Link to the trigger menu for 0.9TeV data in April 2011

Enhanced Bias run

The enhanced bias (EB) stream is an independent data stream composed of events selected by L1 items only, collected via dedicated HLT algorithms. It's heavily used in trigger reprocessings for menu development and rate predictions. A new EB run is required after the introduction of the v3 menu (significant L1 menu changes). It's usually taken at the beginning of a fill (1-2 M events), transparently to standard physics data taking (mainly an increased EF output rate for a restricted time period).

For enhancedBias we use the latest menu (currently Physics_pp) with the following rates for each chain:

The following table needs to be updated for 2012

Item                         Rate    Comment
eb_physics_noL1PS            30 Hz   need 200K events minimum
high_eb_physics              60 Hz
eb_physics                   30 Hz
eb_random                    15 Hz
eb_physics_unpaired_iso       1 Hz
eb_random_unpaired_iso        1 Hz   currently disabled
eb_physics_unpaired_noniso    1 Hz
eb_random_unpaired_noniso     1 Hz
eb_physics_firstempty         1 Hz
eb_random_firstempty          1 Hz   currently disabled
eb_physics_empty              1 Hz
eb_random_empty               1 Hz

RPC calibration (no beam)

  • L1_MU0_EMPTY unprescaled at L1
  • mu0_empty_NoAlg prescaled by 3 at L2, mu0_cal_empty unprescaled at HLT
  • usual CosmicCalo stuff
  • Link to the trigger menu

ALFA run

Configuration for run with ALFA

  • ALFA needs an overall rate of 400Hz. This is the highest priority
  • The minbias L1 items are being streamed in EF_L1MinBias_NoAlg

The following table needs to be updated for 2012

Item                         Rate                                   Comment
MBTS_2                       40 Hz
LUCID                        10 Hz
LUCID_A_C                    10 Hz
LUCID_EMPTY                  ~1 Hz
LUCID_UNPAIRED_ISO           ~1 Hz
LUCID_A_C_UNPAIRED_NONISO    ~1 Hz
MBTS_2_UNPAIRED_ISO          ~1 Hz                                  rate should be small, can run unprescaled?
MBTS_2_EMPTY                 --                                     not currently in menu, ignore for this run
EF_L1MinBias_NoAlg           PS=1                                   streamer for all minbias L1 items
ZDC (OR)                     10 Hz                                  20 Hz if possible, TBC
ZDC_A                        10 Hz                                  20 Hz if possible, TBC
ZDC_C                        10 Hz                                  20 Hz if possible, TBC
ZDC_A_C (AND)                10 Hz                                  20 Hz if possible, TBC
ZDC_EMPTY                    ~1 Hz
ZDC_UNPAIRED_ISO             ~1 Hz
ZDC_UNPAIRED_NONISO          --                                     not currently in menu, ignore for this run
EF_mbSpTrkVtxMh_eff          10 Hz expected, set PT=1 at L2         if the rate is too high for these two Mh items, prescale this item at L2 first
EF_mbSpTrkVtxMh              10 Hz expected
EF_mbSpTrk                   10 Hz
EF_rd0_filled_NoAlg          at least 5 Hz at EF, >250 Hz into L2   same output rates as for the August ALFA run
EF_rd0_empty_NoAlg           5 Hz at EF                             same output rates as for the August ALFA run
L2_VdM_BGRP7                 1 kHz (not full event size)            should be 50 Hz full event size equivalent
L2_VdM_MBTS_2_BGRP7          expected rate < 1 Hz
L2_VdM_RD0_UNPAIRED_ISO      10 Hz (not full event size)
L2_VdM_MBTS_2_UNPAIRED_ISO   expected rate < 1 Hz
j10_a4tc_EFFS                2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
j15_a4tc_EFFS                2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
j20_a4tc_EFFS                2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
fj10_a4tc_EFFS               2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
fj15_a4tc_EFFS               2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
fj20_a4tc_EFFS               2 kHz into EF, expect < 5 Hz           fill with rest of bandwidth
cosmic calo stream                                                  same PS values as in physics running
LArCells and LArCellsEmpty                                          same PS values as in physics

Configuration for run without ALFA

  • In this run, we have 600 Hz for non-ALFA triggers.
  • This list gives the differences with respect to the above menu

Item            Rate                                                   Comment
MBTS_2          increase to fill bandwidth                             request is 0.5 M events, with ID on
jet triggers    leave RD0 rate at 2 kHz but reduce EF PS if possible
LUCID           20 Hz                                                  running HLT streamer
ZDC (OR)        20 Hz                                                  TBC
ZDC_A           20 Hz                                                  TBC
ZDC_C           20 Hz                                                  TBC
ZDC_A_C (AND)   20 Hz                                                  TBC

25ns run (low luminosity expected)

  • Calibrations/detector streams
  • Zero bias at ~10 Hz
  • Regular physics (luminosity point ~10^32)
    • Should have J/psi to ee trigger almost unprescaled
    • Random seeded jet triggers should get 500 Hz input and up to 10 Hz each
    • 2 Hz of EF_mu4_firstempty_NoAlg, EF_L1MU10_firstempty_NoAlg and EF_L1MU11_NoAlg
    • 2 Hz of passthrough on L2_mu18 (to study out-of-time muon triggers)
  • EnhancedBias at medium rate ~200-300 Hz (include lots of random triggers).

High pileup run

  • Calibrations/detector streams
  • Zero bias at ~10 Hz
  • 10 Hz to express stream using EF_rd0_filled_NoAlg
  • EnhancedBias at high rate (400-500Hz)

Older instructions (lower luminosities, previous menus)

Previous instructions are archived at https://twiki.cern.ch/twiki/bin/viewauth/Atlas/MenuOldPrescales

%RESPONSIBLE% ATLAS Trigger Online Experts
%REVIEW% Never reviewed
