• Silvia's office: 104 R-B21
  • 104 Fishbowl 104 R-C20
  • 104 TestLab 104 R-C24
  • 104 Rhys
  • 104 Julio
  • 104 Kate

  • P1 TDAQ satellite CR
  • Hall P1 3178-R-J03
  • Hall USA15
  • STF
  • STF Quiet Room

Oncall Links

Oncall phones




Trigger links

Clock Switch Links

LastQPLLErrors -o 'LHC.RF2TTCApp-summary' -s '2018-Nov-05 13:00:00' -t '2018-Nov-05 16:00:00' -z Europe/Zurich

Offline links

GIT Stuff

CMake Stuff

Power Down instructions (Bruce)

  • Shut down all L1Calo via DCS and receivers
  • shut down all servers (pcl1c-mon-01 through 06 and topo-00) with sudo shutdown -h now; contact the sysadmins to put them in downtime so we don't get warning mails and SMS
  • After all crates/servers are off, you should e-mail Andre Rummler and inform him that he may switch racks off when desired:
Y.23-02.A1 (that is Topo), Y.07-02.A2 through Y.11-02.A2, Y.16-02.A2 through Y.27-02.A2. I'd have included Y.12-02.A2, but I think there might be a switch in there that services LUCID.


Run Operation

Retrieve DCS history plots

  • DCS DDV page (link from l1calo page) https://atlas-ddv.cern.ch/DDV.html
  • Select the following: Element Name (top left), then select the Path: TDQ --> ATLTDQLV1LCS --> L1C/ --> then crate number, module number and DCS channel. Crates start from 1, so for info about e.g. PP5-PPM3 I select Crate6 and Module9 (this is the module position in the VME crate; modules start from position 5). For info about the -5V line the channel is DCSChannel7. At the bottom click "Query data". More than one item can be selected; they are overlaid in the same plot. The time range is selected from the top left.
In order to know what the DCS channels mean, one can go on the DCS (TDQ --> LVL1 --> LVL1CALO --> PP5 OVERVIEW) and see the table (see screenshot below).
  • Example of DDV Page for PP5-PPM3 :

  • DCS Data Viewer:

  • DCS Data Viewer:

  • DCS Channel meaning: Silvia_L1Calo_DCS.png

CAN code programming

follow instructions on the link: https://twiki.cern.ch/twiki/bin/view/Atlas/LevelOneCaloTriggerCAN

DCS Phase1

Remote Instructions:

  • How to set up the ATLAS GUI from remote:
source /det/tdaq/scripts/setup_tdaq-09-00-00.sh 
setup_daq -p ATLAS -d /atlas/oks/tdaq-09-00-00/combined/partitions/ATLAS.data.xml

Control room instructions

How to transfer files between P1 and lxplus

  • transfer file to P1: in lxplus:
     scp Update25Oct.sqlite franchin@atlasgw:. 
  • From P1 to go in lxplus:
     ssh franchin@atlasgw-exp 
  • From lxplus to copy a file in P1 (you should know the exact path, * doesn't work):
       scp franchin@atlasgw:path/pippo.png .

Hardware Interventions, Spares modules

2020 Test Runs

  • need to remove the L1Topo that is off, and the related ROS
  • need to disable in the segments the old receiver gains and keep the ones with -hub in the name

ATLAS L1Calo names

Mailing Lists

PCs in P1 are named type-system-name-number; in STF and testbed the name is typesystem-name-number.

Contact persons

aranzazu.ruiz.martinez@cern.ch (Aranzazu Ruiz Martinez), javier.montejo.berlingen@cern.ch (Javier Montejo Berlingen)

PC names

  • TestBed: pc-tbed-pub
  • PPM crates L1Calo: sbc-l1c-pcc-0(0-7)
  • PPM crate TestRig 104: sbcl1c-test-00
  • ALTI crate 104: sbcl1c-test-03
  • PPM crates ZDC: sbc-zdc-daq-01 (4 PPMs total, in slots 15,16,17 and 19 of the VME crate)
  • TOPO pc: pc-l1c-topo-00
  • Test Rig sbc: sbcl1c-test-00
  • Control Room: pc-atlas-cr-trg (pc-atlas-cr-35), pc-atlas-ct-id (pc-atlas-cr-45)
  • Calibration data: pc-tdq-calib-17
  • STF sbc from PPM-TREX: sbcl1c-stf-00
  • STF FELIX: pcl1c-stf-felix-00 (TREX), 01, 02
  • STF DCS and Vivado: pcatkip07
  • 104 Vivado: msu-pc10
  • KIP portal: ssh -p 122 franchin@portal.kip-uni-heidelberg.de
  • pcatl1calo03: old DCS machine, bottom left in the left rack of the 104 test rig; to be decommissioned soon
  • control room pcs https://atlasop.cern.ch/twiki/bin/view/SysAdmins/ACRPCsMapCabling
  • LAr scope (connected to mon5) pc-lar-scope-19a
  • DCS PCs: pcatltdq01 (legacy) pcatltdq02-05

Rack names

  • Y.06-02.A2 through Y.26-02.A2,
  • Y.29-02.A2

  • L1Calo_Racks.JPG:

Y.23-02.A1 (that is the Topo pc), Y.06-02.A2 through Y.11-02.A2, Y.16-02.A2 through Y.27-02.A2


ROS names

  • pc-tdq-ros-calpp-00, 01
  • pc-tdq-ros-calcj-00
  • pc-tdq-ros-topo-00
  • pc-tdq-swrod-l1c-00 through 05:
  • swrod-l1c-00: pp6
  • swrod-l1c-01: pp7
  • swrod-l1c-02: efc0
  • swrod-l1c-03: efc1
  • swrod-l1c-04: jfc0/topo1
  • swrod-l1c-05: gfex

ROS drivers


  • pc-tdq-flx-l1c-efex-00,
  • pc-tdq-flx-l1c-gfex-00,
  • pc-tdq-flx-l1c-jfex-00,
  • pc-tdq-flx-l1c-trex-00,
  • pc-tdq-flx-l1c-trex-01,

All L1Calo PCs

To ask the sysadmins about a shutdown of our PCs, mail atlas-tdaq-sysadmin-userticket and ask to put the machines in downtime for the duration of the intervention (useful to avoid warning mails and messages).


sbc-l1c-ccc-00 through 03
sbc-l1c-pcc-00 through 07

Receivers are in mon5

STF Pcs:


Partition Names

Hardware USA15

Learned Commands

from pc-l1c-topo-00: ssh root asm-l1c-topo-00 
clia board   
clia activate board 2
clia activate board 4
Status M4 means the board is active (when we can load the FW). M2 means deactivated; this should be the status when the crate is ON but the board is deactivated.

Anyway: from lxplus, after having set up the l1calo environment with "setup_l1calo", type:
$L1CALO_AFS/utils/error_events.py -r 364098 -L
This will print out LB and event number. Then use fetch_eos to extract the raw file:
$L1CALO_AFS/utils/fetch_eos.py -r 334564 -L -l 183 -E 495131583 -f L1
where 183 is the LB and 495131583 the event number.

We can see the same with a calibration run (on the calib17 pc) with dumpl1calo -d filename --ppm -n 1; more instructions here: https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloExpertToolsDocumentation#dumpl1calo

    • evl1 to monitor some events online (from the ongoing run), or open it from a data file (for example fetched previously from the SFO).
Then select a number of HLT nodes (the more nodes selected, the quicker the next event is picked up) and click the green arrow on the top left (also select trigger type 132, which is L1Calo, and Physics as stream tag). It reads one event; click again for another event and so on. Useful for example to check, when the special readout mode is enabled, that we see the various readout slices. See examples below and also the official instructions here: https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloExpertToolsDocumentation#Event_Viewer_evl1_sh

silvia_evl1.png For example to see preprocessors stuff: silvia_evl1_2.png

  • eos histograms:
eosmount eos_tmp
cd /afs/cern.ch/user/f/franchin/eos_tmp/atlas/atlastier0/rucio/data16_13TeV/physics_Main/00308084
  •  /det/l1calo/scripts/fetch_sfo.py  
    Script at P1, tries to fetch a file from the SFOs and copy it to /shared/data/
  •  /eos/atlas/atlastier0/rucio/data15_comm/physics_Main/.... 
    EOS data. The script defaults to dataYY_13TeV but can be overridden with a -t dataYY_xxxx option (or just -t xxxx for short, e.g. -t comm).
  •  setpart ATLAS
  •  log_manager
  • Busy display as in the CR:
     cd /det/l1calo/scripts/
     ./busy_display -p ATLAS
  • Busy display :

It's probably worth checking the L1Calo "ModStat" page (https://atlasop.cern.ch/oncall/l1calo/L1CaloModStat.php), clicking the TTC crate and our busy2 module. It's not explicit there, but busy inputs 2, 3, 4, 5 are from L1Topo.

Busy display: CTP-DRND is the CTP derandomizer busy (internal busy in the CTP). ECR is the busy from the Event Counter Reset generation (an ECR is sent every 5 s, with 1 ms of busy before and after it). Veto0 is the TGC burst veto, currently masked, and Veto1 is not used.
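The ECR numbers above imply a tiny fixed dead-time contribution; a quick sanity check of the arithmetic (my own back-of-the-envelope estimate, not an official figure):

```python
# ECR is sent every 5 s, with 1 ms of busy before and 1 ms after it (see above).
ecr_period_s = 5.0
busy_per_ecr_s = 2 * 1e-3  # 1 ms before + 1 ms after each ECR

dead_time_fraction = busy_per_ecr_s / ecr_period_s
print(f"ECR dead-time fraction: {dead_time_fraction:.4%}")  # -> 0.0400%
```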

Interesting example: at ~5 AM the control room called because L1Calo appeared busy in the busy panel, but it was not L1Calo but HLT backpressure, as can be seen in this plot: L1Calo is not busy, and the only black spots are ROIB_XOFF to CTP and ROIB_XOFF to L1Calo.

  • Busy CTP web display:

  • First rename .bash_profile, then open a new window and type:
      igui_start -p ATLAS   
    to spy the igui ATLAS partition
  • runme   
    starts the igui for the run in spy mode
  •  l1chuck.py 
    to kill a tower even if the run is ongoing: it reloads the DB and puts the LUT to zero. Phone the shift leader first, because reloading the DB briefly holds the run (everything is automatic).
  •   ./chuckDrawer.py LBA05 
    (in the folder /det/l1calo/scripts/) disables one entire drawer. To enable it again, add
     -e 0
    to the command, which sets the error code to 0.
More info about dead channels database, see here:

In the COOL database there is the Dead Channels folder; all comments written with l1chuck go there, together with the error code. Error code 0 --> tower enabled. A low error code (4-5 bits) is a disabled tower (first panel of l1chuck: bad ADC, bad MCM, FE dead, receiver dead, HV off, LV off, noisy tower).
The second panel is for minor problems (not maintained now): if the error code is high, the tower is still enabled but has minor problems.

  • chuckdrawers:

  •  ppmModuleInfo  
    to get the PPM serial numbers. To be executed from the single-board PC of that crate: ppmModuleInfo -m ff0 for the test rig and ppmModuleInfo -m fff for P1, where the whole crate is full of modules.

[franchin@sbcl1c-test-00 ~]$ ppmModuleInfo  -m ff0
Parsed hex mask detected
Mask value = 4080
ppmMask[0] = 0
ppmMask[1] = 0
ppmMask[2] = 0
ppmMask[3] = 0
ppmMask[4] = 1
ppmMask[5] = 1
ppmMask[6] = 1
ppmMask[7] = 1
ppmMask[8] = 1
ppmMask[9] = 1
ppmMask[10] = 1
ppmMask[11] = 1
ppmMask[12] = 0
ppmMask[13] = 0
ppmMask[14] = 0
ppmMask[15] = 0


Ppm pp-ppm04 has 17members 
   Board= 4   slotNr= 9   KIP_id= 0x1845   ModuleNR= 121
   MCM # = 0   Slot = 1   ID = 12370   CALIPPR fw = 2.2.0   
   MCM # = 1   Slot = 2   ID = 12371   CALIPPR fw = 2.2.0   
   MCM # = 2   Slot = 3   ID = 12372   CALIPPR fw = 2.2.0   
   MCM # = 3   Slot = 4   ID = 12373   CALIPPR fw = 2.2.0   
   MCM # = 4   Slot = 5   ID = 12376   CALIPPR fw = 2.2.0   
   MCM # = 5   Slot = 6   ID = 12377   CALIPPR fw = 2.2.0   
   MCM # = 6   Slot = 7   ID = 12378   CALIPPR fw = 2.2.0   
   MCM # = 7   Slot = 8   ID = 12379   CALIPPR fw = 2.2.0   
   MCM # = 8   Slot = 9   ID = 12381   CALIPPR fw = 2.2.0   
   MCM # = 9   Slot = 10  ID = 12382   CALIPPR fw = 2.2.0   
   MCM # = 10  Slot = 11  ID = 12383   CALIPPR fw = 2.2.0   
   MCM # = 11  Slot = 12  ID = 12384   CALIPPR fw = 2.2.0   
   MCM # = 12  Slot = 13  ID = 12385   CALIPPR fw = 2.2.0   
   MCM # = 13  Slot = 14  ID = 12386   CALIPPR fw = 2.2.0   
   MCM # = 14  Slot = 15  ID = 12387   CALIPPR fw = 2.2.0   
   MCM # = 15  Slot = 16  ID = 12388   CALIPPR fw = 2.2.0   
ppmMask enabled for slot Nr. 10

...........  and the same for all the PPMs
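The hex-mask convention in the printout above can be reproduced in a few lines (an illustration of how bit i of the mask selects PPM position i, not the actual ppmModuleInfo code):

```python
def decode_ppm_mask(mask_hex: str, n_positions: int = 16):
    """Decode a ppmModuleInfo-style hex mask into per-position enable flags."""
    mask = int(mask_hex, 16)
    return [(mask >> i) & 1 for i in range(n_positions)]

# 0xff0 (test rig) = 4080: positions 4-11 enabled, matching the printout above
print(decode_ppm_mask("ff0"))
# 0xfff (P1, crate full of modules): positions 0-11 enabled
print(decode_ppm_mask("fff"))
```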

Log Files

Logfiles are written on the PC where the controller is running, to /logs/tdaq-07-01-00/ under the partition name, which in this case would be ATLAS. For Topo the relevant PC is pc-atlas-topo-00. Apart from logging in via the gateway, it's also possible to browse logfiles via https://atlasop.cern.ch/partlogs/ (look for the menu starting pc-l1c-mon-00, top right).

Topo FW reload

. fw560 is a temporary script to reload version 5.6.

  • Reverting Topo FW Procedure if run is ongoing:
    • stop the atlas run
    • kill L1calo partition
    • reload fw
    • Restart L1Calo (IGUI option just above Restart TTC partition)
    • Put L1Calo back IN (the Kill automatically takes it OUT)
    • Start a new run

Surface Test Facility STF

Oncall shifts

Power down instructions:

Power up

See elog here: https://atlasop.cern.ch/elisa/display/439217

Instructions from Rhys: see mail called "ALTI reset"

  • Paul's monitoring plots: after a power cut the cron jobs are stopped and need to be restarted. The scripts are here: /afs/cern.ch/atlas/project/tdaq/level1/calo/www/html/dcsmon


TREX tests at STF and 104

  • connect to pcatkip07 (desktop below the PPM crate)
  • log into the sbc:

Bit Files


Monitor Temperature, Voltages

If I run it, it continuously monitors and writes on screen whether the parameters are within limits; it also writes a log file containing only PPM slot and PPM number if everything is good, and warnings otherwise. It should switch OFF the FPGA in case of problems. Log files are here: logs/HardwareMonitorLogs/AlarmLogs/
  •  ./MONIT_TVItoFile  
    creates a file with T, V, I
  •  ./HIST_TVIfromFile 
    (better to run it from lxplus and not from the sbc) reads the log file and creates a root file with plots

VIVADO, program flash memories

  • installed in my laptop Vivado lab edition and in pcatkip07 Vivado web server.
  • one can connect through the laptop via ethernet to the Vivado web server in pcatkip07 to program flash memories
    • first configure via pprotest the REM, DINO and PREDATOR with any FW.
    • Check that the USB cable is connected to TREX and enable one of the two FPGAs, either PREDATOR or DINO, with ./PROGR_TrexFlashMems, choosing USB-JTAG loading (2) for DINO, (5) for PREDATOR
    • connect as root on pcatkip07
    • cd
    • type Victor's script:
      -> this opens port 4241; it needs to be done only once, and if the PC is not switched OFF the port will stay open
    • Open Vivado HW Manager on the laptop -> Connect to remote server -> hostname: pcatkip07, port 4241 -> you should see the FPGA enabled with PPROTEST
  • see the FPGA in the list, right click -> Add Configuration Memory Device -> enter the parameters seen in these screenshots: http://franchin.web.cern.ch/franchin/L1calo/STF/TREX_flash_program/

Test Rig

  • Official twiki: https://twiki.cern.ch/twiki/bin/view/Atlas/LevelOneCaloCernTestRig
  •  ssh -X pc-tbed-pub
    , setup:
    source .bashrc,   setup_l1calo_TR
  • database configuration (to be copied from the official one, usually written by Bruce, which can be found here:
    and to be copied into my setup here:
  •  runme 
    starts the Gui graphic interface with the L1 info taken from database defined in the setup_l1calo_TB
  •  ssh -X pcl1c-ros-01  /dsk1/l1c/rosData/  (rosData/data  old) 
    where test data are saved
  • To look at raw data:
     dumpl1calo -P -d /dsk1/l1c/rosData/data_test.1467018834.calibration_.daq.RAW._lb0000._ROS-01._0001.data   
  • Test Rig in Building 4: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/Lab4Testbed

Shutdown Test Rig (Bruce's instructions)

  • Turn off all 4 VME crates, and the NIM-bin. Topo is already off.
  • sudo /sbin/shutdown -h pcl1c-pm-01. That is the top server in the rack on the left.
  • sudo /sbin/shutdown -h pcl1c-ros-01. That is the server in the middle of the right rack.
  • shutdown sandbox DCS machine, (server middle of the left rack: push and hold button.)
  • shutdown two box PCs at the bottom of the left rack: push and hold front-panel buttons. (Yuri's jtag server, right. DCS for legacy test-rig, left)
  • turn off all the breakers at the back wall behind the racks.
  • turn off the air-conditioner (front panel - widget at bottom of display.)

In Goldfish bowl:

  • shutdown two DELL mini-pcs. I did already, actually, but if Uli is around he might reboot: CTRL-ALT-F1 or F5. Then CTL-ALT-DEL
  • if box PC is on, please shut-down (push hold power button).
  • I usually pull out the power from the wall - window side left.


Path to open ACE in the 104 test rig: Schema: /tbed/oks/tdaq-08-03-01/l1calo/calib/calib.sqlite, Server: empty, DB Instance: L1CALO

[franchin@pcl1c-ros-01 ~]$ dumpl1calo -P -d /dsk1/l1c/rosData/data_test.1467018834.calibration_.daq.RAW._lb0000._ROS-01._0001.data | less
Reading from file: /dsk1/l1c/rosData/data_test.1467018834.calibration_.daq.RAW._lb0000._ROS-01._0001.data
Event: 0  16777216
RDO Object Type: PpmFadc  Crate: 0  Module: 8  Eta: 0  Phi: 0  Layer: 0
      Values:  10 f f f f       Flags:  0 0 0 0 0 
RDO Object Type: PpmFadc  Crate: 0  Module: 8  Eta: 0  Phi: 4  Layer: 0
      Values:  14 13 14 14 14       Flags:  0 0 0 0 0 
.................................    same for all 7000 channels, 5 ADC slices and then the LUT...
RDO Object Type: PpmLut  Crate: 0  Module: 8  Eta: 0  Phi: 0  Layer: 0
      Values:  3030000       Flags:  100 
      LutCP: 0 LutJEP: 0 PedCorr: -253 
RDO Object Type: PpmLut  Crate: 0  Module: 8  Eta: 0  Phi: 4  Layer: 0

CleanUp a Partition

     pmg_kill_partition -p CernTest
     rmgr_free_partition_resources -p CernTest

Monitor PPM Temperature Voltage

Go on sbcl1c-test-00: cd PPROTESTatCERN_TR, setup_pprotest.
The updated PpmTempMonitoring.cxx is in ppmServices/apps/.
Run it with ./PpmTempMonitoring; one needs to select the board number, the time between readings and the number of readings. PPM boards start from 4.
The output is a printout of all values in LogFiles/tempData_sbcl1c-test-00

  • Temp Monitor Output:


  • Tile Trigger Tower Map (online): they show a second peak. (Murrough: Tile fires its laser during the long gap in the LHC orbit at about 1 Hz. This signal varies from tower to tower. It used to be around 5-10 GeV but if I remember correctly they wanted to increase it this year. Anyway I think that peak is compatible with Tile laser pulses. Those events go into a separate calibration stream, so we would never see them in our Athena online monitoring, only in the PPM monitoring which looks at all bunch crossings independent of L1A or bunch groups.)
  • Asymmetric TOB distribution, more occupancy on the C side than on the A side: that's because the sliding window goes to the left, so it is more probable to have a hit on the left than on the right

Mistimed stream monitoring

  • root files stored in EOS: /eos/atlas/atlastier0/rucio/data18_13TeV/physics_Mistimed/00348354/data18_13TeV.00348354.physics_Mistimed.merge.HIST.x554_h295/data18_13TeV.00348354.physics_Mistimed.merge.HIST.x554_h295._0001.1
  • Martin's twiki with PDFs: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/LevelOneCaloMistimedMonitoring
  • Martin's explanation: The mistimed stream is composed of several HLT triggers which run on each event. Out of these HLT triggers, we are only interested in one, which is monj400. This particular HLT trigger selects all events in which J400 fired one BC before or after the actual event, by looking at the CTP information. In Tier0 we have an athena monitoring sub-task which processes the mistimed stream events, and analysis all events taken by monj400 in more detail. Merged histograms are produced. Then a cron job checks only if these histograms are available and makes plots and posts to the twiki.

CTP Per bunch monitoring: /eos/atlas/atlascerngroupdisk/det-ctp/PerbunchMonitoring/2021

Mistimed Events in 2015

Look at Raw Data


(Tests done on the pcl1c-ros-00 pc; on mine the libraries are not well defined, I can only run l1calomap.sh.)
  • Calibration raw data. From the trg pc type the alias
    (alias for pc-tdq-calib-17) and go look for the raw data in
    (from 2017 they are also copied to a common area visible from all P1 pcs:
  •  dumpl1calo -d data_test.1447413464.calibration_L1CaloTest.daq.RAW._lb0000._ROS-00._0001.data -P | less 
    where the data are those taken by me with a test in the test rig (playback option, value 80 in each channel, 1000 events). Here I see the values of the 5 ADC slices of raw data for each event; at the end of the file there are also the LUT data (in this case empty).
  •  event2mapxml -d data_test.1447413464.calibration_L1CaloTest.daq.RAW._lb0000._ROS-00._0001.data  
    create an xml file for the raw data file
  •  event2mapxml -k dcm  
    to dump events from the currently running run. Once inside event2mapxml I can skip N events with -N or add N events to the same file with +N
  • event2mapxml -p ATLAS -N 62 -k dcm -t 0x84 -e -E -s 100 -w 0 -R0
  •  l1calomap.sh event.map.xml & 
    opens with the graphical interface the file event.map.xml created with the event2mapxml command:
    Dataset-->Local-->Open (eventmapxml  file) 
    . For opening the histograms --> choose raw, columns and Update histo and Local/event/fadc. (In the map I can look at one particular FADC slice with Dataset-->Local-->event--> Fadc0-15; to switch between one and the other I can just type ">" and "<" on the keyboard.)

  • l1caloMap calib data:

  • l1caloMap calib data:

  • rmHistogrammer: to check the rate plot (LUT_CP) of a particular run for particular L1 channels (found with a script that tells you the noisy channels). To be run on mon06 - ssh pc-l1c-mon-06 (was 04 in 2016):
    rmHistogrammer -1 '15/07/2015 19:15' -2 '15/07/2015 20:15' -i c0100101 -i 0ba04000
NB for HI we set LUT_CP for the HAD layer = 0, so you will not see any rate; you need to change the coolID TT name with 1 instead of the last 0, e.g. 04150b12 instead of 04150b02
rmHistogrammer -1 '13/03/2016 16:00:00' -2 '14/03/2016 10:00:00' -i 06120803 -d /l1c/rates -l -f 'png'   
creates a png file with the rate history
  • for more than one histo in the same plot, just add -i and the TT:
rmHistogrammer -1 '31/05/2016 07:00:00' -2 '31/05/2016 16:00:00' -i 01130f00  -i 01130f01 -i 01130f02 -i 01130f03 -i 01130e02  -d /l1c/rates -l -f 'png'
  • Once the plot has been created, copy it to lxplus to display it. From lxplus:
scp atlasgw:/path/exact_name .
  • One can use the same command also to retrieve rate plots. To know which item corresponds to which ID, go on the oncall page: the SysRates oncall page gives the ID to use for all the L1Calo rates. Just hover the mouse over the rate and the ID shows in the top right corner. (see attachment)
  • ID rates:
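The HAD-layer coolID tweak mentioned above can be wrapped in a tiny helper (my own convenience sketch, based only on the 04150b02 -> 04150b12 example above; check the bit meaning before relying on it):

```python
def had_coolid(em_coolid: str) -> str:
    """Set the second-to-last hex digit to 1 (EM -> HAD layer, per the example above)."""
    return em_coolid[:-2] + "1" + em_coolid[-1]

print(had_coolid("04150b02"))  # -> 04150b12
```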

calculate LUT result from raw data

  • First extract the 5 ADC values
  •  ./ppmchansim.py -d "$L1CALO_DB_CONNECT" -f Physics -c 0x011e0b01 -m HighMu -p 1 35 53 76 67 57  
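As a reminder of what such a simulation computes, the FIR stage applied to the 5 ADC slices is a simple multiply-accumulate; a minimal sketch (the coefficients below are invented for illustration — the real per-channel coefficients, bit-drop and LUT come from the COOL database via ppmchansim.py):

```python
def fir_sum(adc_slices, coeffs):
    """Multiply-accumulate of the ADC slices with the FIR coefficients."""
    assert len(adc_slices) == len(coeffs)
    return sum(a * c for a, c in zip(adc_slices, coeffs))

adc_slices = [35, 53, 76, 67, 57]  # the example values passed to ppmchansim.py above
coeffs = [1, 4, 9, 5, 1]           # illustrative coefficients only, NOT from COOL
print(fir_sum(adc_slices, coeffs))  # -> 1323
```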

Dead Channels not in the twiki:

Debugging session with Calorimeter Pulses

  • Ask the calo people to start a run with pulses
  • Open the L1CaloStandalone or L1CaloCalibration partition (with setpart) and set the following values: DefaultDSSTrigger, PhysicsRunPars, Receiver Gain 1, No Generator. The calibration partition is needed because the receivers are not included in the standalone one; if the last operation done with the calibration partition was a DAC scan, its last step is the Gain 0 option, which is remembered, and we would not see anything coming from the calorimeter, so we need to open the calibration partition and set Gain 1.
  • While the L1Calo partition is running, open l1calomap and click on: IS --> Partition --> select the L1Calo partition that is running, then IS --> PpmRates --> Tower_C or Tower_J, and I see the pulse rates. If I want the histograms I can open them

L1Calo Special Runs

We need derivations to have the information at the tower level; there are two kinds of derivations: L1CALO1 and L1CALO2. All events should be streamed to the physics_Main stream and the run processed by the Data Preparation group.
Since the analysis of the 80 MHz data is using the L1Calo derivation, the L1Calo and Data Preparation experts have setup and configured this derivation to run automatically from the physics_Main stream during the special run data taking. We inform Data Preparation before the 80 MHz special run starts so that they would enable the derivation.
The physics_L1Calo stream is used by L1Calo for monitoring purpose as well so we have prescaled low-threshold L1Calo items streamed there which are good for monitoring but not for the timing and pulse shape analysis.

  • Some useful special runs for filter coefficients (Sten) data17_13TeV.00335302.calibration_L1CaloCalib.merge.DAOD_L1CALO2.c1144_m1873 56 188452017415 2019-09-13 16:55:44,
data17_13TeV.00335302.physics_ZeroBias.merge.DAOD_L1CALO2.f869_m1873 17 4240153342 2019-09-15 15:31:09

Some Software Stuff

From Martin's mail....

If an L1A is received, we read out the L1Calo subsystems (PPM, CPM, JEM etc.) which provide various information. The trigger towers are usually provided by the PPMs. For each of the 7168 towers (well, actually more, because spare towers are also read out) you will get 5 ADC values (can be more if we are in a special run configuration), two LUT values (these are the tower energies sent to the CP and JEP systems), a pedestal correction value and various BCID bits. The above quantities do not include any additional thresholds, and for most towers in an event your Et values (i.e. the LUTs) will be zero and the ADC values close to the pedestal of 32, i.e. not containing a pulse. However, for our derivations we do indeed apply thinning, i.e. remove towers from the collection which are below a certain ADC value.
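The thinning Martin describes (dropping towers whose ADC never rises above some value) can be sketched like this — the cut of 40 counts and the tower layout here are invented for illustration:

```python
PEDESTAL = 32  # nominal ADC pedestal mentioned above

def thin_towers(towers, adc_cut=40):
    """Keep only towers whose maximum ADC slice exceeds the cut (illustrative cut)."""
    return [t for t in towers if max(t["adc"]) > adc_cut]

towers = [
    {"coolid": 0x01100000, "adc": [31, 32, 33, 32, 31]},  # pedestal-like: dropped
    {"coolid": 0x01100001, "adc": [32, 45, 80, 60, 40]},  # contains a pulse: kept
]
kept = thin_towers(towers)
print([hex(t["coolid"]) for t in kept])  # -> ['0x1100001']
```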

Splash Events

Calibration runs

Raw data are saved in EOS: eosmount foldername; later unmount it.

 /eos/atlas/atlastier0/rucio/data16_calib/calibration_L1CaloEnergyScan/00288331
 /eos/atlas/atlastier0/rucio/data21_calib/physics_Main/00395154/

 xrdcp -r root://eosatlas//eos/atlas/atlastier0/rucio/data16_calib/calibration_L1CaloPprPhos4ScanPars/00301316/data16_calib.00301316.calibration_L1CaloPprPhos4ScanPars.daq.RAW /tmp/franchin/.

Calibration account

  • in lxplus, user: l1ccalib (all CAF2 results are stored in: w0/DaemonData/jobs/CAF2/OutputCondor)

Standard calibration (DAQ, PED) expert mode

  • Screeshot calibration panel expert mode:

  • Screeshot pedestal run:
silvia_ped1.png silvia_ped2.png

cd /det/l1calo/scripts/takecalib/   
From 18th May 2020 the default is for tdaq9. If tdaq7 is still needed, use the scripts in:

It opens the calibration panel. If you want to validate the scans, be sure to go into L1Calo Expert mode and start the DAC and PED scans from there (it takes ~70 min for both scans). If you validate the runs, be sure that the calorimeters are quiet and are not doing their own calibration, otherwise they might send some unwanted pulses. You only have 5 minutes between the time the pdf opens and validating the run (which automatically updates the DB); otherwise it needs to be done by hand. When the DB is updated with the new DAC scan, the pedestal run should already take the new DAC values in order to set the pedestal to 32 ADC counts. If there is no time in between interfills you can validate PED only, but if you validate DAC be sure there is enough time to also validate PED. The DAC scan has 59 steps (the number of steps can be seen in the run panel, L1Calo panel, RodMon).

Saturated Tile CIS scan

remember to insert in the IGUI 220 as number of events

Saturated LAr pulse scan

Files for 2018 runs are here:
Standard calibration files are e.g. for barrel (with all layers pulsed together):
8 16 40 80 120 160 270 350 420
ffffffff ffffffff ffffffff ffffffff

Saturated files are (see what files are used in /det/l1calo/takeCalib/L1CaloRunTypes.py):

more L1MultiDacSat-Taylor/parameters.dat
900 1100 1300 1500 2000 3000 4000 5000 6000 7000
ffffffff ffffffff ffffffff ffffffff

Phos4 scan

  • Run1 Phos4 chip: https://cds.cern.ch/record/405081/files/LHCC-98-36_307.pdf
  • A LAr (Tile) phos4scan needs the LAr (Tile) partition: LArgL1CaloCombined. Calibration panel --> L1Calo --> Expert mode --> LAr Phos4scan (~50 minutes)
  • Used for timing calibration of the calibration chain (pulser run): send a pulser signal and move it in time until finding the right setting to sample the peak with the central ADC slice.
  • It will take four (historically) so-called PHOS4 runs, which scan the L1Calo input timing per trigger tower in approximately 1 ns resolution, on the LAr side pulsing only one single layer at a time.
  • Every time the analysis is performed, a file with output calibrations is produced but not uploaded to the database (this needs to be done by hand, but not often; the pulser timing is pretty stable)
  • For timing of physics pulses need physics data and Claire's analysis...

  • Screeshot calibration panel expert mode:
Silvia_L1CaloCalib2_Phos4LAr.png Silvia_L1CaloCalib2_Phos4LAr2.png

  • Martin's explanation of the scan: The nMCM has 24 delays to cover one BC, i.e. 25 ns. The fine timing scan starts with a delay of 0 for all towers, and takes a certain number of events for this step. This 'certain number' is either defined via a time (assuming fixed rate) or a number of events (assuming low enough rate to be able to poll fast enough); which one is used here we need to check in the COOL run parameter folder. With a fixed delay (step) we digitise the incoming analogue pulse at fixed locations every 25 ns (okay, 12.5 for the 80 MHz, but let's leave that aside), i.e. always at N*25 ns + step * 1.04 ns (where 1.04 is 25/24). With the next step, you digitise the analogue pulse at 2.08 ns etc., until with step 25 you are back to the full BC. In this way we scan the pulse in steps of 1.04 ns. What the analysis does is calculate the average of the ADC values and put them into the histograms shown in the PDF: the x axis shows the step number modulo 24, so to say (or maybe it shows the transformation to ns, can't remember).
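Martin's N*25 ns + step*25/24 ns formula can be transcribed directly (straight arithmetic, nothing beyond the text above):

```python
BC_NS = 25.0    # one bunch crossing
N_DELAYS = 24   # nMCM fine delays per BC

def sample_time_ns(n_bc: int, step: int) -> float:
    """Sampling position for bunch crossing n_bc at fine-delay 'step'."""
    return n_bc * BC_NS + step * (BC_NS / N_DELAYS)

# step granularity is 25/24 ~ 1.04 ns, as quoted above
print(round(BC_NS / N_DELAYS, 2))
# one full set of delays brings you back to the start of the next BC
print(abs(sample_time_ns(0, 24) - sample_time_ns(1, 0)) < 1e-9)  # -> True
```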

Layer Phos 4 scans

No results from the layer P4 scans are automatically published to the twiki. You need to run a script at P1 by hand and another to upload the pdf results to the twiki:

  • checkScan.py: The script needs to be told where to find the data files. For some historical reason the default is not where the calibration partition copies them to (it's probably where they are on the event builder PC). So what you need is something like:

  cd /det/l1calo/scripts
  for run in 372234 372235 372236; do ./checkScan.py -s /net/pc-tdq-lfs-acr/clients_data/scratch/l1calo/data -n -r ${run}; done

  • Script to update the run summaries produced by takeCalib with the analysis results information
    > /det/l1calo/takeCalib/updateRunSummary.py -r 372234,372235,372236

  • copy the root file that has been created from P1 (/www_ALL_data/l1calo/calib/pho/) to lxplus
  • analyse the TBB delays with Martin's script:

LAr pulses, trigger monitoring tool

To check the LAr pulses (e.g. for LAr new electronics commissioning or to debug bad channels) need to ask LAr to pulse and in the mean time start the following partitions:
setpart L1CaloCombined
setpart LArgL1CaloCombined
and then with l1calomap.sh one can monitor the PPM rates.

Otherwise one can also monitor the PPM rates in the L1CaloStandalone partition, but setting the right parameters in the L1Calo panel (parameters copied from the LArgL1CaloCombined partition):

L1Calo Run Type = Default
Run Parameter Set = PhysicsRunPars
Readout Configuration = Default
LTPI = SlaveToLAr
Trigger Menu= trigconf-default
Calibration Data = calib-oracle-atonr-trigger-r2
Conditions Data = calib-oracle-atonr-trigger-r2
receivers gain strategy = CalibGainsEt

Run analysis

In lxplus. Copy the calib file from eos.
xrdcp -r root://eosatlas//eos/atlas/atlastier0/rucio/data16_calib/calibration_L1CaloPprPhos4ScanPars/00301317/data16_calib.00301317.calibration_L1CaloPprPhos4ScanPars.daq.RAW/data16_calib.00301317.calibration_L1CaloPprPhos4ScanPars.daq.RAW._lb0000._SFO-1._0001.data /tmp/franchin/.

source .localOKS_bashrc 
 analysePpmRun -d /tmp/franchin/data16_calib.00301317.calibration_L1CaloPprPhos4ScanPars.daq.RAW._lb0000._SFO-1._0001.data -r 301317 -a PprPhos4ScanPars -o /tmp/franchin/ -f Calib2
plotCalib -f /tmp/franchin/PprPhos4ScanPars.Calib2.00301317.root -a PprPhos4ScanPars -n 15 -i Calib2

Setup OKS database in lxplus

May 2020: OKS tdaq9 database: https://atlasop.cern.ch/cvs/viewvc.cgi/tdaq-09-00-00/combined/partitions/L1CaloCombined.data.xml?view=markup

Instructions from Martin: in order to set up a (fake but correct) oks database outside P1, the easiest is to download a real P1 oks database from the archive. For those all single oks files are merged into one big one. Murrough has a note with the link on a twiki. It is reachable from the L1Calo twiki under ‘databases’. There you want to follow the ‘web interface’ link and the instructions by Murrough. https://twiki.cern.ch/twiki/bin/view/Atlas/LevelOneCaloDatabases#OKS_Archives Regarding what archive to download, it is not really important as long as all PPMs were included in the run. DAC scan, pedestal run and PHOS4 scan analysis do only care about PPM readout. So usually I choose a recent L1CaloCalibration archive to download. For L1CaloStandalone you never know what was going on. After downloading, you have to copy and unpack the tar file to your personal oks folder. If we followed my setup it might be at ~/l1calo/dbPoint1/oks/. So in this particular case it would then be ~/l1calo/dbPoint1/oks/tdaq-07-01-00/l1calo/partitions. You can look up that path also in your .localOKS_bashrc (or whatever we named the file you have to source before running the analysis on lxplus). In your new database folder you have to create a soft link with name L1CaloCalibration.data.xml pointing to your downloaded L1CaloCalibration..data.xml file. Finally you have to adjust the database connection string which differs inside and outside P1. For that I think it should be okay if you simply replace all occurrences of ATONR_COOL with ATLAS_COOLPROD.

Hope that will work … Cheers, Martin
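Martin's last step (swapping the connection string in the unpacked files) can be sketched as a small script. The function name and folder layout are illustrative; only the two connection strings come from the instructions above.

```python
# Hypothetical sketch of the final step: replace every occurrence of
# the P1 connection string ATONR_COOL with the offline one
# ATLAS_COOLPROD in the unpacked OKS .data.xml files.
from pathlib import Path

def fix_connect_strings(oks_dir, old="ATONR_COOL", new="ATLAS_COOLPROD"):
    """Rewrite all .data.xml files under oks_dir; return the files changed."""
    changed = []
    for f in sorted(Path(oks_dir).rglob("*.data.xml")):
        text = f.read_text()
        if old in text:
            f.write_text(text.replace(old, new))
            changed.append(f.name)
    return changed
```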

Calibration processing code


CPM Input scan

  • Taken from the calibration panel (a few minutes). Errors about the CMX don't matter (because the CPM changes the clock). Results can be found on the crate (e.g. ccc02 in /logs/tdaq-06-01-01/L1CaloCalibration/results)

CP Run2

Scale factor of 2 for the EM energies only between Run 1 and Run 2: in Run 2 we increased the energy resolution from 1 GeV to 500 MeV in the L1Calo CP system.

JEM Input scan

  • Again taken from the calibration panel. Same as before for the CMX errors. Results from the L1Calo twiki: Calib --> Status --> there one can find the log file or the pdf with the results. Red bands are forbidden; blue bands are missing channels.


Tile CIS

Sanya's instructions:

Yes, you are running with UPD4 constants. To see the evolution of laser constants for one particular channel, and the intervals for different IOVs, you can simply do
asetup 21.0.71,Athena
ReadCalibFromCool.py --folder=/TILE/OFL02/CALIB/LAS/LIN --module=LBA10 --chan=20 --gain=0 --pmt --begin=343000

TileCalibTools : INFO     Resolved globalTag 'CONDBR2-BLKPA-2018-06' to
folderTag 'TileOfl02CalibLasLin-RUN2-UPD4-16'

(341222,0)  LBA10 pm 21 ch 20 LG    0.944506  646.900024
(345808,0)  LBA10 pm 21 ch 20 LG    1.000000  -1.000000
(347606,0)  LBA10 pm 21 ch 20 LG    1.017112  646.900024
(348534,0)  LBA10 pm 21 ch 20 LG    1.013614  646.900024
(349137,0)  LBA10 pm 21 ch 20 LG    0.989601  646.900024
(349712,0)  LBA10 pm 21 ch 20 LG    0.968799  646.900024
(350144,0)  LBA10 pm 21 ch 20 LG    0.957218  646.900024
To be sure that these constants are the same as in your reconstruction job, you can add the --tag parameter:
ReadCalibFromCool.py --folder=/TILE/OFL02/CALIB/LAS/LIN --tag=CONDBR2-BLKPA-2018-03 --module=LBA10 --chan=20 --gain=0 --pmt
and you'll see that they are indeed the same, because:
TileCalibTools : INFO     Resolved globalTag 'CONDBR2-BLKPA-2018-03' to
folderTag 'TileOfl02CalibLasLin-RUN2-UPD4-16'
other constants do not change much, but in case you want to check them - you can use
BTW, in ReadCalibFromCool.py you can use short tag parameter:
--tag=UPD1   or --tag=UPD4
which will show you constants for latest UPD1 and UPD4 (UPD4 is assumed by default)

Check masked Tile channels

To check history of masking simply do something like
ReadBchFromCool.py --tag=UPD4 --module=LBC56 --chan=10 --pmt --begin=300000
or, if you are not sure about channel number, do first
ReadBchFromCool.py --module=LBC56 --pmt --begin=300000 | grep -v good
the channel number to PMT number map is available here: http://zenis.dnp.fmph.uniba.sk/tile.html


HV correction

https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloOnCallManual#Updating_Receiver_gains_and_LAr https://indico.cern.ch/event/638444/contributions/2639266/attachments/1490522/2316697/bracinik10jul17_calibStatus.pdf https://cds.cern.ch/record/830849/files/ATL-COM-LARG-2005-003.pdf?version=1

Change receivers gains

  • with the new sqlite file from Juraj (instructions here: /det/l1calo/doc). Juraj provides the files (update: from the automatic analysis; force: values put in by hand for problematic channels), to be copied in:
     cd /det/l1calo/coolData/gains
     ../../scripts/updateGains.sh updateLAr_mar16_v1.sqlite
  • To revert or change some channel: go to P1, setup l1calo, open the DB with ace, search for the receiver folder and look for the channel (names are receiver channel names, not trigger towers).


Monitor receivers inputs


Change Firmware


There is a version already installed in P1 in



  • Get the new version of PPROTESTatCERN from Jan (if it has been changed recently), copy it to lxplus and compile it (after setup_pprotest)
  • compress the folder, copy it to P1 and decompress:
 tar -zcvf archive.tar.gz directory/   
 tar -zxvf archive.tar.gz  
  • Jan's twiki here: https://wiki.kip.uni-heidelberg.de/KIPwiki/index.php/Atlas_Privat:JJ_HowTo_P1FWupdate
  • The most recent bit files are in the folder:
    . Take the needed version and put it in the pprotest package in
  • Check in the file:
    at the very end, that the FW version is the wanted one (currently used, end 2016: RemFPGA_6.a.0.bit, Calippr_2.4.5.bit)
  • If you need to go back to another FW version: put the new bit files back in the cxx file, recompile, make the tar, move it to P1, untar, upload to the SBCs
  • Check the current PPM slot mask in the file
 // Board SlotNR Mask         5  6  7  8  9 10 11 12  13 14 15 16 17 18 19 20
      unsigned char ppmMask[16] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; // P1
//    unsigned char ppmMask[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0};    // ZDC P1
//   unsigned char ppmMask[16] = {0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0};    // TestRig
//    unsigned char ppmMask[16] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0};    // TestRig - L1CaloTest Partition only
   unsigned char mcmMask[16] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
  • enter the P1 network. Create 8 xterms and in each of them log in to the SBCs:
    (or use the alias
  • In each of them:
    and cd
  •  ./EepromProgramming 
    . A menu like this one comes up: Eeprogramming.png
  • to load CALIPP FW:
    • select (1) and give 16, i.e. all PPMs.
    • select (6) load CALIPP to EPROM, give the nMCM number: 16 (all)
    • Insert the magic key (58).
  • It starts to program and takes ~45 min. Once it is done, check that it did 16 PPMs, otherwise the mask was wrong!
  • Once it is done:
    • Select (7): reload CALIPP from EPROM to FPGA.
    • After that one can check that everything went fine:
    • Select again all PPMs and do (9). One should read the correct version in each of the 16 PPMs
  • Never do the JTAG programming! It takes forever because it programs every MCM in series. (Once I did it by mistake: I needed to stop it, load the FW into the FPGA (only on the first PPM) with 7, then restart with 6 and then 7.)
  • If I cannot communicate with the VME crate, maybe it is because the crate has been shut down; I need to reload the REM first (this happens automatically when I run the standalone partition, but if I don't do that, I can do it with EepromProgramming):
    • Select (2): Load REM firmware
    • Load from FLASH memory or from software? (f/s) Select s
    • Then I can update the Calipp or read the info from the crate

REM Firmware


Choose the options. The first time you choose all PPMs, then you need to first do option (3), which loads the FW into one of the 6 slots. Select option 0: REM and select the wanted slot. (End 2017 the slot in use is slot 5 with REM version 6.b.0; in 2016: 6.a.0 in slot 4.) It is OKS that chooses which slot to use for the run; the others, once uploaded with a version, keep that version for the next use. If I do option (4) I move the FW into the FPGA, but this will be reset at the next configure of the run, because it will read from OKS which one to use. If I do option (6) I read what is in the FPGA, so if I upload the FW to the FLASH without moving it to the FPGA, it doesn't read it. The OKS file can be seen in git here (search for flash-blockN, with N from 0 to 5). N.B. in pprotest the counting is from 1 to 6, so pay attention to shift by one what is in OKS wrt the REM slot! LCD = LVDS cable driver; it does some precompensation and fan-out of the LVDS signals coming from the nMCMs and going out to the CP & JEP; it is the bottom-right daughterboard on the PPM.
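The off-by-one called out above (OKS flash-block3 holding the version that pprotest calls slot 4) can be encoded as a one-line guard. This helper is purely illustrative, assuming the mapping implied by the examples above:

```python
# Illustrative guard for the off-by-one between OKS "flash-blockN"
# (N = 0..5) and the EepromProgramming slot menu (1..6), assuming
# flash-block3 <-> pprotest slot 4 as implied above.
def oks_block_to_pprotest_slot(block):
    if not 0 <= block <= 5:
        raise ValueError("OKS flash-block numbers run from 0 to 5")
    return block + 1
```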

If there are VME errors, try loading the ReM FPGA using the ./EepromProgramming tool and the 'via software' option, then using the VME reset. This brings the modules into a cleanly configured state where you have access to all the memory modules.


Software under development by Victor, with some improvements with respect to before: no need to recompile if the mapping has been changed or if a bit file is added, and the possibility to choose the bit file while running it.
  • Change the crate configuration and mapping in this file:
  • There are some executables: the ones starting with INFO display info about one PPM (configured in the previously described file), and two called PROGRAM (PROGRAM_McmEeproms for programming the CALIPPR FW and PROGRAM_PpmFlash for programming the flash with the REM firmware). Follow the instructions; see the instruction example below. Before reading info it is better to do a VME_Reset first.




  • Calipp FW:
http://franchin.web.cern.ch/franchin/L1calo/Tests_FW_Jan/LoadFW/PROGRAM_McmEeproms
  • REM:


1 mV = 1 ADC cnt = 250 MeV ---> since there are ~1000 ADC channels, the ADC saturates at ~250 GeV, 2.5 V. 1 GeV = 4096 ADC counts
In the LAr electronics analogue saturation does happen, but later (the linear mixer saturates first, at 3.3 V). However, usually when analogue saturation occurs the pulse shape distorts, and this would cause problems for our BCID algorithms. For L1Calo we rely on 'clean saturation', which means we need a linearly rising edge of the signal and clean clipping in case of saturation. This is ensured by the restriction to 2.5 V. The limitation to 2.5 V is done on the AnIn by the differential line receiver, which converts the input signals with unity gain to single-ended pulses with max 2.5 V.
The saturation level in data is set to 1020; the reason is indeed to take potential bit errors into account. The saturation parameter is a database parameter and can be found in the PprChanDefaults folder.
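As a back-of-envelope check of the scales quoted above (1 ADC count = 250 MeV, saturation level 1020), this trivial sketch uses only those two numbers:

```python
# Sketch using only the numbers quoted above: 1 ADC count corresponds
# to 250 MeV, and the saturation level in data is set to 1020
# (a database parameter in the PprChanDefaults folder).
MEV_PER_COUNT = 250
SATURATION_COUNTS = 1020

def counts_to_gev(counts):
    return counts * MEV_PER_COUNT / 1000.0
```

counts_to_gev(SATURATION_COUNTS) gives 255 GeV, i.e. the ~250 GeV saturation point quoted above for the ~1000-channel ADC range.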

PPM Mapping

  • EM
  • HEC
  • Tile
  • FCAL: 4 PPMs: (crates 4 & 5, PPMs 0 & 8)
  • SPARES: 4 PPMs:

PPM Calibration

  • DAC scan The pedestal is adjusted and measured on a regular basis by the combination of DAC scans and pedestal runs. First the linear dependency between the DAC setting (0-255) and the ADC value is measured by scanning the DAC range and measuring the ADC output for each point. An analysis determines for each tower the slope and offset of the linear function and stores them in the database. Using these parameters, the DAC setting is determined and loaded at CONFIGURE so as to result in a pedestal of 32. The DAC calibration is needed to compensate for differences between the ADCs.
  • PED scan Since the precision of the DAC is coarser than the precision of the ADC (by approximately a factor of 2), we cannot rely on the DAC calibration alone to 'predict' the pedestal precisely enough to be used as the zero-line in the LUT. Hence, in a second step after the DAC calibration, we take a pedestal run which simply measures the pedestal, histograms it and determines mean and rms.

These measured pedestal values turn out to be roughly at 32, of course, with some variations.
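The per-tower DAC-scan analysis described above can be sketched as follows; the function names and the clamping detail are illustrative, not the real calibration code:

```python
# Sketch of the DAC-scan analysis: fit the linear ADC = slope*DAC + offset
# dependence per tower, then invert it to find the DAC setting that puts
# the pedestal at the target value of 32 counts.
def fit_dac_scan(dac_values, adc_values):
    """Least-squares slope and offset of the DAC -> ADC line."""
    n = len(dac_values)
    mx = sum(dac_values) / n
    my = sum(adc_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dac_values, adc_values))
    sxx = sum((x - mx) ** 2 for x in dac_values)
    slope = sxy / sxx
    return slope, my - slope * mx

def dac_for_pedestal(slope, offset, target=32):
    """DAC setting (clamped to the 0-255 range) giving the target pedestal."""
    return max(0, min(255, round((target - offset) / slope)))
```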


Power cycle single PPM

It cannot be done via DCS. You need to log in to the SBC of the crate and type:
ppmOff #
where # is the PPM number. It does the OFF and ON. Whether this has been done can be monitored by opening the DCS and looking at the temperature of some MCM in the correct PPM: you will see a sudden drop in temperature and then a slow increase. To get the system back to its initial state the temperature needs to be back, otherwise pedestals might shift. To warm up the PPM a bit, a run in the standalone partition can be done for some minutes.

L1Calo Readout modes

  • Default: 5+1 (7+1 used for filter coefficient calculation, 15+1 80MHz used for timing purposes, special runs)
  • COOL database with parameters elog
  • Deadtime settings for each readout mode: l1whiteboard
  • Parameters: NumSamples40, ADC_latency, NumAdcSamples elog
  • Event size: (Steve): With the standard five slices each PPM slink has a data size of about 220 words - see for example this:
https://atlasdqm.cern.ch/webdisplay/tier0/1/physics_L1Calo/run_364214/run/L1Calo/ROD/rod_1d_PpPayload so that's 220*32*4 bytes. So I reckon our normal legacy data size must be about 30 kBytes (~3% of ATLAS). In the recent runs with 15 slices, on empty-ish events (which approximates physics!), the equivalent PPM size is about 500 words. So that gives us 500*32*4, i.e. about 65 kBytes in all. So overall we roughly double our data size from 30 kBytes per event to 65 kBytes per event.
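The arithmetic behind Steve's numbers (32 PPM S-links, 4 bytes per word) in one line, with his word counts plugged in:

```python
# Event-size estimate from the numbers above: words per S-link,
# times 32 PPM S-links, times 4 bytes per word.
def ppm_event_size_bytes(words_per_slink, n_slinks=32, bytes_per_word=4):
    return words_per_slink * n_slinks * bytes_per_word

# 220 words -> 28160 bytes (~30 kB), 500 words -> 64000 bytes (~65 kB),
# i.e. roughly doubling the legacy data size, as stated above.
```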

Change Low-High mu settings: AC (high mu), matched filters (low mu)

Matched filters were used in Run 1 and are now used only for single-bunch filling schemes and heavy ions. Expectations in terms of trigger rate (Steve): with matched filters, EM and Tau triggers are pretty much as normal; all other items have higher rates than with AC filters. For low-pt forward jet triggers and XE triggers, I wouldn't be at all surprised if those were running at 10 times normal rates (or more) compared to bunch trains. The effect is well visible on L1_EM_EMPTY (noise): with single bunches it is a few Hz (matched filters, optimised against noise), with trains and AC filters it goes up to 30 kHz.
(From 2017, see instructions in the oncall manual.) We can check from the L1Calo oncall page whether we are running with the high-mu or low-mu configuration, here: https://atlasop.cern.ch/oncall/l1calo/L1CaloModStat.php
2018 low-mu runs, noise cuts in the database: EM: flat 4000. HAD: |eta| < 1.6 has 6000, |eta| > 1.6 has 5000. Some single towers have higher values.
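The 2018 low-mu layout above as a lookup sketch (ignoring the single towers that carry special higher values):

```python
# Noise-cut lookup matching the 2018 low-mu description above:
# EM flat at 4000; HAD 6000 for |eta| < 1.6, else 5000.
def low_mu_noise_cut(layer, eta):
    if layer == "EM":
        return 4000
    if layer == "HAD":
        return 6000 if abs(eta) < 1.6 else 5000
    raise ValueError("layer must be 'EM' or 'HAD'")
```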

(Instructions from Martin for 2016 run, how to do manually)

  • Loading low-mu matched filters, LUT slopes and noise cuts:
    • (a) First step is usually to upload the three coolinit files to the corresponding results folders. In this case I have done this already so you can directly proceed with the validation in (b). However, when reverting to high-mu settings you will have to do this step with proper high-mu files of course, and instructions are given below. Nevertheless, for completeness and emergencies, the files for low-mu
settings can be found in
and are called
- PprFirFilterResults_Physics_single_bunch.coolinit
- PprLutValuesResults_Physics_single_bunch.coolinit
- PprNoiseCutResults_Physics_single_bunch.coolinit
    • (b) Validation of results. In this step the matching attributes of the specified results folders are copied to the validated folder PprChanCalib (or PprChanExtra, respectively). The latter folders are used to configure the system, so this step will have impact on the performance. Hence, before running the validation it might also be a good idea to dump the previous status of PprChanCalib (PprChanExtra) which could make reversion in case of accidents easier. For this I usually create a 'work' folder. Below an example sequence of commands.
> cd /det/l1calo/coolData/acfirsglb_physics/single_bunch
> mkdir work_201609xx
> cd work_201609xx
> dumpfolder.py -f Physics -i PprChanCalib
> mv PprChanCalib_Physics.coolinit PprChanCalib_Physics_201609xx_before.coolinit
Now validate the three results folders.
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprFirFilterResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprLutValuesResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprNoiseCutResults

  • II. Loading high-mu AC filters, LUT slopes and noise cuts:

(a) Upload the three coolinit files to the corresponding results folders. The results folders are used for storing calibration results without actually using them. So no changes to the performance of the system are done in this step. The coolinit files with the AC filters can be found in /det/l1calo/coolData/acfir25ns_physics/20160726/acMu29.

> cd /det/l1calo/coolData/acfir25ns_physics/20160726/acMu29
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprFirFilterResults_Physics_20160721_ACmu29.coolinit
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprLutValuesResults_Physics_20160721_ACmu29.coolinit
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprNoiseCutResults_Physics_20160721_ACmu29.coolinit

  • (b) Validate the results. In this step the matching attributes of the specified results folders are copied to the validated folder PprChanCalib (or PprChanExtra, respectively). The latter folders are used to configure the system, so this step will have impact on the performance. Hence, before running the validation it might also be a good idea to dump the previous status of PprChanCalib (PprChanExtra) which could make reversion in case of accidents easier. For this I usually create a 'work' folder. Below an example sequence of commands.
> cd /det/l1calo/coolData/acfir25ns_physics/20160721/
> mkdir work_201609xx
> cd work_201609xx
> dumpfolder.py -f Physics -i PprChanCalib
> mv PprChanCalib_Physics.coolinit PprChanCalib_Physics_201609xx_before.coolinit
Now validate the three results folders.
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprFirFilterResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprLutValuesResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprNoiseCutResults


IDs read as 0x0c1m0s0h, where c is the crate number, m is the module (i.e. PPM) number, s is the submodule (i.e. MCM) number and h is the channel number.
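Reading one hex digit per field off that pattern (the digit widths are an assumption taken from the pattern itself, not from any documented decoder):

```python
# Decode a 0x0c1m0s0h channel ID into its fields: crate c,
# module (PPM) m, submodule (MCM) s, channel h, one hex digit each.
def decode_ppm_id(chan_id):
    crate     = (chan_id >> 24) & 0xF
    module    = (chan_id >> 16) & 0xF
    submodule = (chan_id >> 8)  & 0xF
    channel   =  chan_id        & 0xF
    return crate, module, submodule, channel
```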

PPM test, software patches

  • Usually Martin puts the patches in P1 here:
  • Using su l1calo, copy the interesting library (renaming it without the extension) in
  • log files are in each SBC crate (sbc-l1c-pcc-0n) here:

HDMC (Hardware Diagnostic Monitoring and Control system)

  • From the SBC open it with
    and open the correct PPM crate (depending on which SBC it is launched from)
  • The igui will open: File --> LoadDBcrate --> select the PPM crate, PPM, MCM and channel. Open the register of interest and click on Read



Make some hot towers via LUT

Maybe needed for Topo testing: changing via hardware the value of the pedestal and boosting it toward saturation (in some dR region that could be useful)
  • Log in to the correct SBC in P1 while the test run is ongoing (ssh -Y sbc-l1c-pcc-0c)
  •  dbhdmc 
    after having done setup_l1calo
  • Choose the right MCM and channel, and choose either the lutCP or lutJEP. Open. Read the values; there is a list of them. Take the last values with zeroes and fill them with ffff. The point where it changes to non-zero values is where the noise cut is. Change the values just below the noise cut to FF's, and click the 'Write' button to commit the change to the hardware memory.
  • It should be effective immediately and there is no need to put it back: once the PPM is configured in the next run, it reads the correct value from the DB.
  • The tower should become hot in the mapping tool.
  • PPM LUT:

Pedestal Corrections

Effect of correcting for 'average' activity in any particular bunch-crossing. Applied to the FIR filter output, before the LUT step.
To enable/disable it in ACE:



JEM case: if there are e.g. 6 jets in the event (especially at the beginning of the run and in the forward region), the backplane format can only send four jets, so there is an overflow condition: six jets would be reported in the "presence bits" but only the first four have details of large/small Et. Only the first four appear in the CMX readout (with the readout overflow bit set). Since we do not know if the missed jets might have high Et, in this case the firmware sets the maximum in all 25 jet thresholds. The rate of these can be quite high at the beginning of a run, so of the order of a thousand events of this type per run is not uncommon. Jet overflows are almost always in FCAL; the only exceptions tend to be noise bursts. There is also a slightly lower rate of EM overflows, which conversely tend to be in the central barrel region.
Sometimes there are simulation mismatches, because the simulation simulates the correct energy even if there are more than 4 jets.

CMX Overflows

HI 2018 runs: a lot of overflows, with an asymmetric distribution (e.g. see here: https://atlasdaq.cern.ch/info/mda/db/ATLAS/365304/Histogramming.l1calo-athenaHLT./L1Calo/CPM_CMX/Input/cmx_2d_tob_Overflow), confirmed by Steve: "the difference is CMX 0 deals with TAU TOBs and CMX 1 is for EM TOBs. The TAU minimum TOB threshold is incredibly low, whereas the EM one is boosted compared to standard running to keep the overflows down. So we will definitely see this asymmetry during HI."


To see which L1Topo item is firing, look at the online page, CTP rates, and search for "-" (Topo items should have a minus in their names, see screenshot below):

Topo Caldo

Tool to generate hot towers to test Topo algorithms, but also useful to test other things. Instructions from Rosa:
connect to pc-l1c-topo-00

either go in /atlas-home/0/simoniel/executables/DANGEROUSTOOLS
or copy /atlas-home/0/simoniel/executables/DANGEROUSTOOLS/topoCaldo.py in your home

for test with LAr no need of any special keys or any special configuration (they already set up ATLAS as they need)

run: python topoCaldo.py
from the menu “Partition” select “ATLAS”
from the menu “Pattern” select “TwoEm90” - when you select the pattern the hot tower will start *automatically*; to stop it you need to select “None” from the “Pattern” menu. This process will cause a lot of error messages to the run control, but it is exactly what the LAr people want ;)

Not necessary, but if you are curious: the timestamp of when you activate any pattern can be monitored from our L1Calo on-call page in the ERS messages, filtering for “Undefined” for the application ID

Errors while running

L1Calo webpages links

some encountered errors

ModStat: check if you see any red light. Depending on how serious the error is, one can consider doing a TTC restart to fix it.

Topo (2017)

  • FATAL due to IPbus transition failure (error at CONNECT): once or twice per week; go down and up again with the ATLAS partition.
  • l1calo-topo-l1topo1 error (Access attempted on non-validated memory) --> This is not an issue, regardless of the status of the partition. It likely happened during a clock switch: during a clock switch there is a sequence of resets to re-synchronize the various parts of L1Calo, and L1Topo is last, and subject to a rather long reset. If the monitoring application tries to read numbers (registers), the IPbus will not respond due to the ongoing reset, hence the error. Topo input/output mismatches with CMX/CTP
  • Topo busy and stopless removal at ramp down: https://atlasop.cern.ch/elisa/display/357928
  • Every time the Topo RoIB is busy, a TTC restart doesn't help; need to reload the Topo FW (and go to shutdown with the run before)

L1Calo Busy

Look at the plot "Available Cores seen by HLTSV": if the orange line is much lower than the green line, it is a problem with the HLT Supervisor (the DAQ oncall will ask the Run Control Shifter to hold trigger, restart HLT/HLTSV-1, resume trigger)
  • Sometimes Topo gets busy at the beginning of the run. Solved with a TTC restart


Dead Times

Explanation from Antoine of the difference between physics deadtime and the deadtime displayed by the CTP panel; they can be different: https://indico.cern.ch/event/708589/contributions/2909405/subcontributions/249519/attachments/1638435/2615082/L1CT.pdf The deadtime monitored by the CTP and shown in the BusyPanel corresponds to the fraction of BCIDs in which an L1A would be vetoed, whether one occurs or not. The physics deadtime (1 - live fraction) is the fraction of L1As which are vetoed. They strongly depend on the bunch-group pattern, and might differ significantly with a small number of bunches.

CTP deadtime and project tag settings:

For STABLE BEAMS (data16_13TeV): Simple 7, Complex: b0=6/811 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=2/400 (PIX/IBL)

For beam commissioning (data16_comm): Simple 16, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)

For cosmics running (data16_cos): Simple 16, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)

For beam splashes (data16_1beam): Simple 2500 (LAr in 32 samples mode), Complex: b0=6/811 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)

For high-rate running (data16_comm): Simple 6, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=7/260 (PIX/IBL) NB: WITHOUT Pixels

For high-rate (100 kHz) running (data16_comm): Simple 4, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=7/260 (PIX/IBL) NB: WITHOUT Pixels

For low-rate ID cosmics running (data16_comm): Simple 16, Complex: b0=15/370, b1=42/381, b2=9/351, b3=1/400; SMK: 2297, L1PSK : 6495, HLTPSK: 4830, BGK: 1003; NB: ONLY WITH Pixels, SCT and TRT (using FastOR trigger); include also other sub-systems
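The complex-deadtime items above (e.g. b0=6/811 for L1Calo in stable beams) can be simulated with a minimal leaky bucket. This sketch assumes the usual CTP convention that "X/Y" means a bucket holding up to X triggers, refilled by one credit every Y bunch crossings; treat that reading as an assumption, not a spec quote:

```python
# Minimal leaky-bucket simulation of one CTP complex-deadtime item,
# under the assumed "X/Y" convention: bucket of X credits, one credit
# returned every Y bunch crossings.
class LeakyBucket:
    def __init__(self, size, refill_bcs):
        self.size = size
        self.refill_bcs = refill_bcs
        self.credits = float(size)   # start with a full bucket

    def step(self, l1a):
        """Advance one BC; return True if an L1A in this BC survives."""
        self.credits = min(self.size, self.credits + 1.0 / self.refill_bcs)
        if l1a and self.credits >= 1.0:
            self.credits -= 1.0
            return True
        return False
```

With b0=6/811, six back-to-back L1As pass and the seventh is vetoed until the bucket has refilled.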


L1Calo Databases https://twiki.cern.ch/twiki/bin/viewauth/Atlas/LevelOneCaloDatabases

OKS (with all hardware configurations)

  • Rhys command (Oct 2020): from P1 PC:
 dbe -f /atlas/oks/tdaq-09-02-01/l1calo/partitions/L1CaloStandalone.data.xml

To include/exclude any crate (e.g. if one PP crate dies and we need to exclude it from the partition): either do it while running the partition and commit and reload, or do it with dbe: navigate to the right partition, double click and type the name of the excluded crates in "Disabled" and save (you need to know the exact name you want to disable). Vice versa, to enable it, remove it from the disabled list (right click).

 dbe -f /atlas/oks/tdaq-09-02-01/combined/partitions/L1CaloCombined.data.xml

e.g. see here the combined partition list: OKs.JPG e.g. see here the names of our crates: OKs2.JPG e.g. see here the results for ATLAS partition: OKs2_ATLASpartition_disabledCrates.JPG

here inside there are the combined partitions and LArgL1CaloCombined and TileL1CaloCombined

 dbe -f /atlas/oks/tdaq-09-02-01/combined/partitions/ATLAS.data.xml 

* https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloOksDatabase

There are two ways of updating an OKS file (it lives in SVN, so first you need to download it, modify it and upload it again). It can be done via
oks_data_editor filename.xml 
It is better to use this editor because it checks for inconsistencies and avoids typos. (Oleg, the Tile expert, knows about it.)

Martin and Bruce modify the xml files manually. To do it: in P1 create a tmp folder (which you will erase when done, so as not to make mistakes next time, or remember to update it next time) before modifying with

. To download the interesting package:
  • For example, to change the address of the REM firmware version:
oks-checkout.sh /atlas/oks/tdaq-06-01-01/l1calo/sw/l1calo_firmware.data.xml 
This checks out the package into my tmp folder. I go inside, open the file and see there are two classes: ppm-rev6-default (to upload the FW from a file, used in the testrig) and ppm-rev6-flash (to upload the FW from a flash memory, used in P1). So if we want to have a new REM FW version we can upload it into an unused slot (now we use REM version 60a00 in block 3, as defined by this line:
  "PpmFpgaProgram" "PpmRemFPGA-flash-block3" 
). The other blocks contain other, older FW versions that are not used.

<obj class="PpmFpgaProgram" id="PpmRemFPGA-flash-block2">
 <attr name="BinaryName" type="string">"FlashMemory"</attr>
 <attr name="Description" type="string">""</attr>
 <attr name="Authors" type="string" num="0"></attr>
 <attr name="HelpURL" type="string">"http://"</attr>
 <attr name="VersionID" type="u32">0x60907</attr>
 <attr name="CadProject" type="string">""</attr>
 <attr name="CheckString" type="string">""</attr>
 <attr name="Checksum" type="u32">0</attr>
 <attr name="ChipType" type="string">"XCV1000E"</attr>
 <attr name="FlashRamBlock" type="u32">0x2</attr>
 <attr name="SourceURL" type="string">""</attr>
 <attr name="ProgramType" type="enum">"RemFpgaDefault"</attr>
 <attr name="DeviceName" type="enum">"RemFpga"</attr>
 <rel name="Needs" num="0"></rel>
 <rel name="BelongsTo">"FpgaConfiguration" "ppm-rev6-flash"</rel>
 <rel name="Uses" num="0"></rel>
</obj>

<obj class="PpmFpgaProgram" id="PpmRemFPGA-flash-block3">
 <attr name="BinaryName" type="string">"FlashMemory"</attr>
 <attr name="Description" type="string">""</attr>
 <attr name="Authors" type="string" num="0"></attr>
 <attr name="HelpURL" type="string">"http://"</attr>
 <attr name="VersionID" type="u32">0x60a00</attr>
 <attr name="CadProject" type="string">""</attr>
 <attr name="CheckString" type="string">""</attr>
 <attr name="Checksum" type="u32">0</attr>
 <attr name="ChipType" type="string">"XCV1000E"</attr>
 <attr name="FlashRamBlock" type="u32">0x3</attr>
 <attr name="SourceURL" type="string">""</attr>
 <attr name="ProgramType" type="enum">"RemFpgaDefault"</attr>
 <attr name="DeviceName" type="enum">"RemFpga"</attr>
 <rel name="Needs" num="0"></rel>
 <rel name="BelongsTo">"FpgaConfiguration" "ppm-rev6-flash"</rel>
 <rel name="Uses" num="0"></rel>
</obj>

If we want the new one, we will upload it with Jan's code into slot 4, and then when we want to use it we have to change the FW version in the slot and the line here

  "PpmFpgaProgram" "PpmRemFPGA-flash-block4"  
Once it is modified you need to commit:
oks-commit.sh -u l1calo_firmware.data.xml   -m "comment that goes in cvs"
with the comment, like in SVN. Pay attention to inconsistencies, otherwise ATLAS cannot start!!!

  • To disable a PPM, check out the following file:
then look for the PPM you want to disable (e.g. pp3-ppm0) and

COOL (with calibration stuff)


ACE config in P1

  • How to configure the database opening:


  • Full L1Calo chain cools database example :


  • Example of how to open it, if one wants to see the DB changing vs time:
Select the time; it will load ALL the changes from that time. If it is too far in the past, it is better to restrict the selection to the interesting channels (tick Channel selection in the bottom part)



Other stuff

  • Setting proxy to edit P1 twiki pages:


HLT releases and merge requests

example of merge request: https://gitlab.cern.ch/atlas/athena/merge_requests/12888 mail from Kate: Let me explain a little about how the HLT release cycle works: the deadline for developments to be merged into a particular release is Friday evening. We let the nightly tests run over the weekend, and if everything looks good, we decide on Monday at the management meeting if we will build the release or not (sometimes we skip a week if there are no urgent changes). On Monday afternoon, we prepare to launch a reprocessing to validate the release candidate; i.e. the nightly from Sunday. On average, it takes about a week for the grid jobs to run and for the experts to check the output and sign off, and then we choose an appropriate time between fills to deploy the release. So, depending on when a merge request is made, unless it is an extreme emergency, it will generally take at least a week for the update to be deployed online.

Online Software

  • Tile towers warnings: /atlas/moncfg/tdaq-06-01-01/combined/triggerMonitor/BeamStatesToIgnore.dat


Isolation and Noise Cuts

So far noise cuts are only tuned for the hadronic part; the EM part is kept at noise cut 4000. Work is ongoing in Queen Mary to optimize them; Birmingham works on isolation (taken by the firmware in CP). Optimizations are performed in collaboration with the egamma group.

LAr recommissioning


  • Tile demonstrator installed in 2019 in LBA14

External Documentation




L1Calo Results

Interesting talks

  • Sebastian Feb 16: Mistimed BCID analysis: link
  • Stanislav
    • Feb 16: improvedLUT jets: link
    • Oct 15: Miss Et, Jet calib link
    • Nov 15 TGM: Et, jet calib: link

  • Timing talks
    • Fabrizio JM July 17: talk
    • Thomas TBB calibration, 2020: talkKIP

  • KIP meetings, PPM related talks
    • Francesco BCID pulser runs: link

  • Topo Talks
    • Eduard ATLAS weekly Sept16: link
    • Imma Oct16: link

ATLAS Software tutorial

L1Calo Code

is_ls -p ATLAS -n L1CaloStatus -R '.*cp3.*cpm06.*' -v -N


From Steve: The real source of inefficiency with high mu is, essentially, the high mu itself, which makes accurate measurement far more difficult due to both in-time and out-of-time pile-up, i.e. high-pt activity in nearby bunch-crossings. This is something that really explodes with high pile-up (approximately proportional to mu^2). The MET turn-on curve is not just a function of the pile-up blurring the resolution; a large part of the resolution comes from the uncertainty of the energy measurement for hadronic showers, which is clearly not as well measured at Level-1 as elsewhere. So any additional loss of resolution due to pile-up may be difficult to distinguish from the underlying (poor) resolution that you'd obtain even with low mu.
For the pile-up part, I'm sure the resolution is not completely independent of mu, but the dependence is probably not as big as one might expect from looking at energy sums. This is due to the noise suppression etc., where the intention is to set it at a level where the majority of pile-up fluctuations are removed. At the cost, of course, of removing some genuine energy. I'm sure the same is true at HLT too; the pile-up dependence would be far worse without some cluster thresholding etc.
Pile-up adds a random MET vector to the measurement of the "true" MET (which has its own resolution, for example from the jet measurement Steve refers to). There are two points to consider:
- The additional resolution due to pile-up can be negligible when added in quadrature with the true MET resolution.
- For large MET, the effect of adding a small random pile-up vector depends on the random angle with respect to the true MET and on average can be negligible.
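The two points above can be illustrated numerically. This is a minimal sketch (not L1Calo code); the resolution and pile-up magnitudes are assumed values for illustration only.

```python
import math
import random

def combined_resolution(sigma_true, sigma_pu):
    """Independent Gaussian contributions add in quadrature."""
    return math.sqrt(sigma_true**2 + sigma_pu**2)

# Point 1: a pile-up term small compared to the intrinsic resolution
# barely changes the combined resolution (assumed values, in GeV):
sigma_combined = combined_resolution(20.0, 5.0)
print(sigma_combined)  # ~20.6 GeV, close to the intrinsic 20 GeV

# Point 2: for a large true MET, adding a small pile-up vector at a
# random angle shifts the magnitude only slightly on average.
random.seed(1)
true_met = 100.0  # assumed true MET, GeV
pu = 10.0         # assumed pile-up MET vector magnitude, GeV
shifts = []
for _ in range(100000):
    phi = random.uniform(0.0, 2.0 * math.pi)
    mx = true_met + pu * math.cos(phi)
    my = pu * math.sin(phi)
    shifts.append(math.hypot(mx, my) - true_met)
avg_shift = sum(shifts) / len(shifts)
print(avg_shift)  # average shift is well below pu (roughly pu^2 / 4 / true_met)
```

The average shift comes out at the sub-GeV level even though the pile-up vector itself is 10 GeV, because the component transverse to the true MET largely cancels for large true MET.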

LHC Stuff

Emittance scans

Heavy Ions

HI running is rather different from proton running: you get very many low-energy deposits, but little concentration of energy in a particular region, as is typical of jet or electron events.

Pulse shape Oscilloscope

L1Calo documents


Overlay Simulation

Data Quality

L1Calo pictures

Phase I

Power consumption

  • PPM: (Martin) citing the PPM paper: "The power consumption was measured using a single, fully populated PP crate in a test lab setup that was operating all PPMs in the typical data processing mode, resulting in 175 A on the +3.3 V supply and 150 A on the +5.0 V supply. This corresponds to 84 W per PPM, which is well below the worst case estimate of 100 W." So, multiplying by 16 modules per crate, it would be about 1.4 kW per crate.
  • eFEX: 8 kW per shelf
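The quoted PPM numbers can be cross-checked with a couple of lines of arithmetic (values taken from the citation above; 16 PPMs per fully populated crate):

```python
# Sanity check of the PPM crate power figures quoted from the PPM paper.
crate_power_w = 175 * 3.3 + 150 * 5.0  # 577.5 W + 750.0 W = 1327.5 W per crate
per_ppm_w = crate_power_w / 16         # ~83 W per PPM, matching the quoted
                                       # 84 W within rounding
print(crate_power_w, per_ppm_w)
```

This confirms the ~1.4 kW-per-crate estimate (16 x 84 W = 1344 W).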

L1Calo Upgrade documentation



Other Links

Run 3 stuff


Trouble shooting

  • Calibration run fails with an error at start (start.daq) -> try starting the calibration partition manually

-- SilviaFranchino - 2015-11-16

Topic revision: r350 - 2022-05-11 - SilviaFranchino